Sarah Taraporewalla's Technical Ramblings

Experience Report: Feature Toggling


Last week, I shared my experiences with git and feature branching. As I mentioned, we moved away from this and towards feature toggles and branch by abstraction. This is what happened when we did…

Our focus on feature toggles and branch by abstraction was quite accidental. We were changing a fundamental way our system handled calculations, moving from doing them ourselves to asking an external service to perform key calculations for us. As with many an integration point, the external system was not quite ready by the time we started to introduce the modifications, so testing our side was proving to be trickier than initially anticipated. One day, the pair working on the story came to me and said, “Sarah, we are having problems changing our tests. We are going to need to change a whole bunch of tests, because they all rely on this value being calculated on the fly, but we now go out to the external system”[1]. We started spitballing ideas when, suddenly, it came to me in a light-bulb moment. “Why don’t we wrap this class with a switcher?”, I said. “The switcher can be told which calculator to use - the existing one, or the new external service”. BAM. All of a sudden, it made brilliant sense.
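To make the idea concrete, here is a minimal sketch of what that switcher looked like in spirit - the names (ICalculator, Plan, the config key) are illustrative stand-ins, not our real classes:

using System;
using System.Configuration;

public class Plan { /* stand-in for whatever the calculation works on */ }

public interface ICalculator
{
    decimal Calculate(Plan plan);
}

// Branch by abstraction: both implementations live behind one interface,
// and the switcher decides which one handles each call.
public class SwitchingCalculator : ICalculator
{
    private readonly ICalculator internalCalculator;
    private readonly ICalculator externalServiceCalculator;

    public SwitchingCalculator(ICalculator internalCalculator, ICalculator externalServiceCalculator)
    {
        this.internalCalculator = internalCalculator;
        this.externalServiceCalculator = externalServiceCalculator;
    }

    public decimal Calculate(Plan plan)
    {
        // The toggle is read on every call, so flipping the config value
        // changes the behaviour on the fly - no redeployment needed.
        var useExternal = "external".Equals(
            ConfigurationManager.AppSettings["calculation.mode"],
            StringComparison.OrdinalIgnoreCase);
        var calculator = useExternal ? externalServiceCalculator : internalCalculator;
        return calculator.Calculate(plan);
    }
}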

You see, once we had this switcher in, we were free to continue developing the new feature, share the code with everyone and not stuff up our testing in the process[2]. So we continued along quite happily, with our QA toggling the switch when he wanted to test the new stuff, and switching it back when he wanted to test the other features.

That might have been the end of the story, the brilliance of this solution being completely lost, until the inevitable happened[3]. The external team delivered working code, but it was not production ready - it was way too slow for us to use in the specified manner. Uh oh! Here come the managers in a series of crisis meetings. After various performance tests, back-and-forth with the other team, and debate over what the performance should be, our business sponsors were faced with a dilemma. What should they do? They couldn’t release the software the way it was performing, but they had a whole bunch of other features in that release that they were desperate to deploy.

“I can always turn off that feature”, I said timidly in one meeting. “We haven’t removed the ability to calculate internally; we have just hidden it behind a switch. All we need to do is release the current version with that switch turned to internal calculations. Then, when the performance has improved, we can switch the calculations over to the external system. We don’t even need a new deployment for that - it will happen with a simple configuration change on the fly”. Silence. The room was gobsmacked. “You mean, the business has control of when this feature is to be deployed?”, they thought. Of course. That is how it should be.

This incident was probably the only time when my sponsor came to me and told me that I had done a brilliant job[4], that I had saved the company a lot of money by having the foresight to have this architecture in place.

Our imagination and innovation grew from there.

There was a feature in the UI that our product owner wanted approval from the rest of the business to remove, but it was taking a while to jump through all the necessary[5] hoops. A release was scheduled for the next day, and the next one not for another month, so it was really important to get approval ASAP. We didn’t get approval in time - and the product owner didn’t care. You see, we added another toggle, this one around the visibility of that feature. That way, when the decision finally came through, one week after the release, the business turned the feature off and started to see the reward immediately, without waiting another three weeks for the next release train to come along. Yet another time we saved the company valuable money.
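The visibility toggle was nothing fancier than a flag checked before rendering. A sketch of the shape of it - the helper, controller and key names here are made up for illustration:

using System.Configuration;
using System.Web.Mvc;

public static class FeatureToggles
{
    // A toggle counts as on only when the config value parses as "true".
    public static bool IsOn(string name)
    {
        bool on;
        return bool.TryParse(ConfigurationManager.AppSettings[name], out on) && on;
    }
}

public class DashboardController : Controller
{
    [AcceptVerbs(HttpVerbs.Get)]
    public ActionResult Show()
    {
        // The view wraps the doomed feature in a conditional on this flag,
        // so the business can hide it with a config change instead of a release.
        ViewData["ShowRetiringFeature"] = FeatureToggles.IsOn("ui.retiringFeature");
        return View("Show");
    }
}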

I also started to talk about letting our super users have the external calculations for their plans, and internal calculations for all other uses, so that we could get first-hand feedback to determine when the performance was indeed good enough. The performance had not improved by the time I rolled off, but we were certainly starting to put thought into this plan.
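We never built it, but the sketch in my head reused the same switching idea, only keyed off the user rather than a global config value. Again the names are illustrative, reusing the hypothetical ICalculator and Plan types from the earlier sketch:

// Assumes the ICalculator/Plan sketch from earlier in this post.
public class User
{
    public bool IsSuperUser;
}

public class PerUserCalculatorSwitch
{
    private readonly ICalculator internalCalculator;
    private readonly ICalculator externalServiceCalculator;

    public PerUserCalculatorSwitch(ICalculator internalCalculator, ICalculator externalServiceCalculator)
    {
        this.internalCalculator = internalCalculator;
        this.externalServiceCalculator = externalServiceCalculator;
    }

    public decimal CalculateFor(User user, Plan plan)
    {
        // Super users act as canaries for the external service; everyone else
        // stays on internal calculations until the performance is proven.
        var calculator = user.IsSuperUser ? externalServiceCalculator : internalCalculator;
        return calculator.Calculate(plan);
    }
}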

My mind races to think of all the other uses. Take A/B testing, for instance. How easy would it be to use these to gather statistics, then discard the unpopular path? BAM - mind explosion.

You see, Feature Toggling is not just a different way to achieve feature branching. It is an architectural choice that, sure, helps with maintaining mainline development, but its real power is that it hands control of which features should be enabled, and when, back to the business. And it is there, ready for you to harness. No extra steps required. As awesome as Git is[6], you just cannot compete with something that lets you control not only what features go into a release, but what features are live at any one time[7].

So now you know how we were able to save money by introducing Feature Toggles. What could you do, if you had feature toggles?


Update 16 July 2011 Whenever you write an experience report, you run the risk of missing out some of your thoughts, assuming that they come across clearly. Thank you to everyone who has given me feedback on this post. I am adding answers at the bottom to address some of the questions that have been raised since originally posting this.

How do you ensure that what is going into prod works? We achieved this by making the default configuration always be that of production. So, for our QA to test a certain incomplete feature in our QA environment, he had to turn it on explicitly; when a new build was pushed out, the configuration had to be changed again. My friend Chris Bird recommends separating the feature flags from the deploy package and making them environmental config, which would mean the QA would not be reconfiguring after each deployment. Perhaps consider tools like Chef to control the environmental config?
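In code, that policy amounts to reading each flag with a production default, so an untouched deployment always behaves like prod. A sketch, as a variant of the hypothetical IsOn helper from earlier:

using System.Configuration;

public static class FeatureToggles
{
    // When the flag is absent or unparseable, fall back to the production
    // default, so a fresh deployment behaves exactly as production would.
    public static bool IsOn(string name, bool productionDefault)
    {
        bool on;
        if (bool.TryParse(ConfigurationManager.AppSettings[name], out on))
        {
            return on;
        }
        return productionDefault;
    }
}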

How do you know what toggle is on? We had an application configuration page which showed really useful things such as the build number, the db migration version and the configuration values of the properties (and where each value was set, e.g. app.config, prod.config). This page also exposed the values of all known feature toggles.
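The toggle section of such a page needs nothing more than a registry of known flags to enumerate. A sketch along these lines, with invented toggle names, reusing the IsOn variant above:

using System.Collections.Generic;
using System.Linq;

public class ConfigurationPageModel
{
    // Hypothetical registry: every toggle the application knows about.
    private static readonly string[] KnownToggles =
    {
        "calculation.useExternalService",
        "ui.retiringFeature",
    };

    // The configuration page renders this as a name/value table.
    public IDictionary<string, bool> ToggleStates()
    {
        return KnownToggles.ToDictionary(name => name, name => FeatureToggles.IsOn(name, false));
    }
}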

When do you remove the toggle? And who makes that decision? I would agree with everyone that toggles should have a TTL. Although we didn’t do this, you could put a TTL on the configuration page, or show how long each toggle has been in place; once it hits a threshold, you remove it. On the project, once a feature was live we tidied up the code following the boy-scout rule. You could also have a TTL wall, making it visible to everybody. And if you know how long a toggle has been there without being turned on, you could use those stats for monitoring waste. As to who makes the decision: if the business knows you have these toggles, talk to them about keeping them in vs taking them out, and the risk/debt you are carrying as a result. My friend Chris Bird recommends marking the toggles with expirations, through a mechanism like ignore attributes in the code. When those ignores expire, a test breaks (and then the build), warning you to remove the attributes. This could also be a nice technique for reminding you when these things need to be addressed.
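A cheap approximation of Chris’s expiring-ignore idea is a test that knows each toggle’s intended lifespan and breaks the build once it passes. An NUnit-flavoured sketch, with invented toggle names and dates:

using System;
using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class FeatureToggleExpiryTest
{
    // Hypothetical TTL register: each toggle gets an expiry date when introduced.
    private static readonly Dictionary<string, DateTime> ToggleExpiries = new Dictionary<string, DateTime>
    {
        { "calculation.useExternalService", new DateTime(2011, 9, 1) },
        { "ui.retiringFeature", new DateTime(2011, 8, 1) },
    };

    [Test]
    public void NoToggleHasOutlivedItsWelcome()
    {
        foreach (var toggle in ToggleExpiries)
        {
            // Once today is past the expiry date the build goes red,
            // nagging us to remove the toggle (or consciously renew it).
            Assert.That(DateTime.Today, Is.LessThanOrEqualTo(toggle.Value),
                "Feature toggle '" + toggle.Key + "' has expired - remove it or renew its TTL.");
        }
    }
}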

Don’t you get an explosion of combinations of toggles? Yes - so remove them when they are no longer needed.

If you switch it on in prod, what about testing? How can you ensure that it is tested? Doing this was entirely to get around the fact that the business wouldn’t let us release more frequently. We pushed and pushed and pushed really hard, but we were only able to do fortnightly releases twice before they decided the overhead was too much. What we did was test the different combinations in UAT, so even though we switched the feature on in prod, we had tested it thoroughly in UAT first. I would prefer to just push out a new release but, failing that, this was a good alternative. I appreciate that that sentiment was not clear in the post, so I hope this clears things up.


[1] full confession - our tests were too unwieldy and smelly. This challenge just highlighted to me the problems with our tests.
[2] before this, we were faced with keeping the code on a separate branch, but with the amount of refactoring that was going on in that crucial area, we were not too keen on the day we had to merge back into master
[3] to be fair, given my talk on integration systems, I should have known better…
[4] if you ever wanted to hear a case study of how negative reinforcing loops impact team morale, I have some stories for you…
[5] let’s face it - they weren’t necessary
[6] and it truly is awesome
[7] yes, I know the argument is to remove the toggles once the feature is turned on, which of course simplifies the code. I leave it as an exercise for the reader to determine the best course of action

Experience Report: Branch by Feature


I have been made aware of some negative responses to Martin Fowler & Mike Mason’s discussion on Branch By Feature.

Perhaps unsurprisingly, I agree with Martin and Mike’s view on branching. I have been on several projects that have adopted various strategies - branching by feature, feature toggling and branching by abstraction - so I have personal history to back up what others say about branching. But the counter-argument goes that the pain was back when we were using source control systems that made branching and merging difficult, like CVS, Subversion and Perforce. This is true. But on my last client, we used git and branch-by-feature. This is my experience working in this way…

At my last client, we started by using Git and branch-by-feature[1]. Features would take anywhere from 1 to 5 days to complete. At the end of the feature, it would be merged back to master (having been good little girls and boys and continuously merged from master… that is CI, isn’t it???)[2], pushed, and then CI would trigger.

Guess what happened - tests on CI would fail, because CI was running a bigger suite of tests than our command line. OK, so we didn’t have CI running on the branch, but really - what harm did it do? So you had to spend a little extra time fixing those tests… we weren’t buying that whole “you’re not really doing CI” argument. Branch-by-feature is cool - look - you can clearly see which changes relate to a story, and you can back them all out if you need to.

Then QA gets their mitts on it for the first time. Those little devils… they’re raising little pink stickies on my story! Well, better go fix them… on master. Wait - so the paradigm of branch-by-feature only holds until the devs think it’s finished?[3]

But lol - a nice big juicy refactoring story comes along. You know the one - the kind where you roll up your sleeves and batten down the hatches, because you are going to be there for a while. After 5-7 days, you poke your head up again and decide: job’s a good’un, let’s merge and push to master. But despite how well git handles merges, if you have removed/renamed/modified a method that a new piece of code relies on, git cannot help you. So you fix all the compilation problems, run the tests (of course, we’re agile) and BOOOM!!! Failed tests. They are harder to decipher. And didn’t we fix half of these before? Grrr… so, 5-7 days after you finished the refactor, you spend another 1-2 days in merge hell… and this was with GIT!!!

Boy, you would think we’d have learnt the first time. But in spectacular face-palm fashion, we persevered with branch-by-feature, always promising we would do it better next time. Fool me once, shame on — shame on you. Fool me — you can’t get fooled again.

Don’t worry - we did learn. We added feature toggles to help with the testing of a story (also using branch-by-abstraction). Then, when the 3rd party the new code was talking to wasn’t ready by the release date, we switched the toggle back to the old way, released all the other new features, thoroughly impressed the client (who believed we would need to wait another 3 months for the 3rd party) and felt chuffed ourselves. After that, we hopped onto the release train: all stories developed on master, with feature toggles where necessary, and we got the business back into fortnightly cycles without the pain of branching-per-release[4]

Next time, I don’t think I will be so diplomatic towards branch-by-feature… if I see it again, I think I might stamp it out straight away. Every time I have seen it (and this is on 6-monthly branches, monthly branches, fortnightly, weekly & daily) the same problems keep coming up[5].

[1] actually - to clarify… when I rolled onto the project, that is what the team was doing, so I decided to see how well it would go…
[2] for those who are sarcastically-challenged, that was said tongue-in-cheek.
[3] that wasn’t the only strategy tried… but it was the quickest for small fixes… for larger changes there was a debate as to whether you should create a new branch or just merge current master into the old branch and continue… both send shivers down my spine…
[4] we also tried various techniques with branching-per-release… we had another Go pipeline set up so that any releases or hotfixes would move through that. For a little while, this was important as UAT uncovered some stuff to fix, but in the end it was unnecessary as we just released whatever was the latest good build - and toggled off all the other incomplete features.
[5] actually… git’s branching does make it really cool for spikes and helping you work out how to refactor a piece of code… but once discovered, ditch it and start from scratch on master

Easy A


I have heard the car metaphor many times before - you know the one - “are we building a Ford, or a Ferrari?”. That metaphor has never really sat well with me - I couldn’t relate to it, and everyone always says “oh no, we are building a Ford” and then proceeds to describe the bells and whistles. Lately, I have been looking at other metaphors, and I have found one that is currently working for me.

At school, there were some subjects that I really wanted to do well in and get As. Then there were subjects that I needed to take as prerequisites, but all I needed was a pass, so a C (or even a D) was perfectly fine by me.

When we develop software, there are some features that we really want to do well in and get right - these are our distinguishers. Then there are other features that we just need to have (e.g. to keep up with the market), but they don’t distinguish us from our competitors, so we want to spend only enough effort to get them in, without trying to make them distinguishing.

At school, there were also subjects that were basically easy As - I could put very little effort into studying for them and still take home top marks. Then there were also subjects where, no matter how hard I studied, no matter how much effort I put in, all I could pull off was a C.

When we develop software, there are some features that we can do really well at, for minimum effort. Then there are features that take real effort just to get in, and when we do get them in, they might be a bit clunky or constantly require effort to maintain.

At school, there was also a limit to the number of subjects I could study at once. If I took more subjects than my normal limit, they would all suffer - even the easy As weren’t so achievable. And swapping between them all the time was difficult, so I would dedicate time to one subject at a time (usually half a day before I switched).

I think you can see where I am going…fill in the blanks about Work in Progress limits and “multi”-tasking.

So, thinking about stories and requests as school grades gives a different perspective on prioritisation and valuing. We can start talking about getting outstanding As or just passing, about how some stories are easy to do well at (so let’s just do them) and others will take an effort just to pass (is that effort worth it in the end? Perhaps the time could be used for other features?). For me, this metaphor is much more accessible - I have been there. I remember what it was like to study hard for some subjects and get nowhere while others were really easy, and I remember prioritising my subjects based on how difficult they were (a really hard subject taken with an easy subject was the way to do it).

So - what grade would you like for that?

How to Mug Tiny Types


I love tiny types. I love how they make me feel, how they make me laugh, how easy it is to understand what is going on when they are around. The one annoying aspect, however, is when they have to cross application boundaries. Whenever you need to persist them, or present them on the screen, in order to get to the value you usually expose the underlying wrapped primitive. It always feels so icky reaching into them and stealing their values. But, on my current project, we came up with some neat ideas as to how to rob our tiny types of their precious.

Here is how it works:

public interface IVictim<T>
{
  void MuggedBy(IRobber<T> robber);
}

public interface IRobber<T>
{
  T StealFrom(IVictim<T> victim);
  void Steal(T valuables);
}

public class Percentage : IVictim<decimal>
{
  private readonly decimal percentage;
  public Percentage(decimal percentage)
  {
      this.percentage = percentage;
  }

  public void MuggedBy(IRobber<decimal> robber)
  {
      robber.Steal(percentage);
  }
}

public class ValuablesRobber<T> : IRobber<T>
{
  private T stolenGoods;

  public T StealFrom(IVictim<T> victim)
  {
      victim.MuggedBy(this);
      return stolenGoods;
  }

  public void Steal(T valuables)
  {
      stolenGoods = valuables;
  }
}

public class PercentageFormatter : IFormatter
{
  public string Format(object toFormat)
  {
      if (toFormat == null) return string.Empty;
      var percentage = (Percentage)toFormat;
      var value = new ValuablesRobber<decimal>().StealFrom(percentage);
      return value.ToString("0.00") + "%";
  }
}

Pagination Made Easy


Ok, so I am doing a little happy dance right now, because I managed to get pagination into our application in less than a day. It is not your traditional pagination, where you specify the page number (and, if you are lucky, also the page size) - I find no real user meaning behind that. Instead, it allows you to specify the range you want to look at. So, for the first 20 items, you would look at the items starting at 1 and ending at 20. To look at the next 20 items, you start at 21 and end at 40. So far, that is just a different implementation of Page 2. Now, suppose the items that are actually interesting to you are items 10-30. In the page model, you need to go back and forth between pages 1 and 2. In this model, you only need to look at the items starting at 10 and ending at 30. Brilliant, huh! Nice implementations of this would be sliding windows, facebook-style “more items”, and the version that we have (a hybrid between predefined pages and the ability to specify an exact range).

Here is how I did it (tests and other junk removed for readability):

.Net Code

public class Pagination<PaginatedType> : IEnumerable<Pagination<PaginatedType>.Page>
{
  private readonly IList<Page> pages = new List<Page>();
  private readonly Page currentPage;
  private readonly IEnumerable<PaginatedType> paginatedCollection;
  private readonly int maximumCount;

  public Pagination(IEnumerable<PaginatedType> paginatedCollection, int maxCount, Page currentPage)
  {
      this.maximumCount = maxCount;
      this.paginatedCollection = paginatedCollection;
      var pageSize = currentPage.PageSize();
      for (var i = 1; i < maxCount; i+=pageSize)
      {
          pages.Add(new Page {StartingAt = i, EndingAt = i+pageSize-1});
      }
      this.currentPage = currentPage;
  }

  public IEnumerable<PaginatedType> PaginatedCollection
  {
      get { return paginatedCollection; }
  }

  public int MaximumCount
  {
      get { return maximumCount; }
  }

  public Page CurrentPage
  {
      get { return currentPage; }
  }

  public bool IsTheCurrentPage(Page page)
  {
      return currentPage.Equals(page);
  }

  public IEnumerator<Page> GetEnumerator()
  {
      return pages.GetEnumerator();
  }

  IEnumerator IEnumerable.GetEnumerator()
  {
      return GetEnumerator();
  }

  public struct Page
  {
      public int StartingAt;
      public int EndingAt;

      public string Name()
      {
          return string.Format("{0}-{1}", StartingAt,EndingAt);
      }
      public int PageSize()
      {
          return EndingAt - StartingAt + 1;
      }
      public bool IsValid()
      {
          return StartingAt <= EndingAt && StartingAt > 0;
      }
  }
}

public interface IWillFindYouTreasures
{
  Pagination<Treasure> FindTreasureFor(Pagination<Treasure>.Page page);
  Pagination<Treasure>.Page FirstPage();
}

public class PaginatedTreasureFinder : IWillFindYouTreasures
{
  private readonly IPropertyStore propertyStore;
  public PaginatedTreasureFinder(IPropertyStore propertyStore)
  {
      this.propertyStore = propertyStore;
  }

  public Pagination<Treasure> FindTreasureFor(Pagination<Treasure>.Page page)
  {
      if (!page.IsValid())
      {
          return new Pagination<Treasure>(new Treasure[] {}, Treasure.Count, FirstPage());
      }

      var criteria = DetachedCriteria.For<Treasure>()            
          .SetFirstResult(page.StartingAt-1)
          .SetMaxResults(page.PageSize());

      var treasures = ActiveRecordMediator<Treasure>.FindAll(criteria);
      return new Pagination<Treasure>(treasures, Treasure.Count, page);
  }

  public Pagination<Treasure>.Page FirstPage()
  {
      var endingAt = propertyStore.Get(ApplicationProperty.FirstPageEndingAt).AsIntOr(1);
      return new Pagination<Treasure>.Page {StartingAt = 1, EndingAt = endingAt};
  }
}

public class TreasureController : Controller
{
  private readonly IWillFindYouTreasures treasureFinder;

  public TreasureController(IWillFindYouTreasures treasureFinder)
  {
      this.treasureFinder = treasureFinder;
  }

 [AcceptVerbs(HttpVerbs.Get)]
  public ActionResult Show(int id, int startingAt, int endingAt)
  {
      var page = new Pagination<Treasure>.Page {StartingAt = startingAt, EndingAt = endingAt};
      var treasures = treasureFinder.FindTreasureFor(page);
      ViewData["Pages"] = treasures;
      return View("Show");
  }
}

View

<div>
  <span>There are currently <span class="treasure-count"><%= Pages.MaximumCount %></span> treasures available.</span>
</div>

<div class="pagination custom-page-select">
  <div class="options">To view specific treasures, please enter the number to start at <%= (RawHtml)Html.TextBox("StartingAt", Pages.CurrentPage.StartingAt)%> and to finish at <%= (RawHtml)Html.TextBox("EndingAt", Pages.CurrentPage.EndingAt)%> and click <%=(RawHtml)Html.ActionLink("Go", "Show", "Treasure", new { }, new { id = "paginateDirectly" })%></div>
  <div class="clear"/>
</div>

<div class="treasures-list">
  <% new TreasuresRenderer().RenderTreasures(Pages.PaginatedCollection); %>
</div>

<div class="pagination menu">
  <span class="info">Please select a range of treasures to view.</span>
  <ul>
      <% foreach (var page in Pages) { %>
      <li class="pages<%= Pages.IsTheCurrentPage(page)? " currentPage" : string.Empty %>"><%=(RawHtml) Html.ActionLink(page.Name(), "Show", "Treasure", new {startingAt=page.StartingAt,endingAt=page.EndingAt}, new {}) %></li>
      <% } %>
  </ul>
</div>

Helpful Javascript

function SetupPagination() {
  $(".pages a").click(function () {
      $("#content-main-container").load(
                  $(this).attr('href'),
                  function () {
                      SetupPagination();
                  });
      return false;
  });

  $('a#paginateDirectly').click(function () {
      var url = $(this).attr('href') + '?StartingAt=' + $('input[name="StartingAt"]').val() + '&EndingAt=' + $('input[name="EndingAt"]').val();
      $("#content-main-container").load(
                  url,
                  function () {
                      SetupPagination();
                  });
      return false;
  });

  $('input[name="StartingAt"]').numeric().limit(9);
  $('input[name="EndingAt"]').numeric().limit(9);

  $('input[name="EndingAt"]').keyup(function (e) {
      var key = e.charCode ? e.charCode : e.keyCode ? e.keyCode : 0;
      // return should trigger the pagination
      if (key == 13) { $('a#paginateDirectly').click(); }
  });
}

Code Contracts in .Net4.0: First Impressions


Code Contracts is one of the new features in .Net 4.0, bringing a little bit of formal specification to .Net applications. As someone with a background in formal specifications[1], I thought it might be interesting to see what this new tool offers .Net developers.

What is formal specification? Formal specification allows us to mathematically analyse our programs and formally verify that they are correct according to the specification. There are 3 basic constructs: preconditions, postconditions and invariants. Preconditions are the rules which describe the state of the world before an action, postconditions are the rules which describe the state of the world after an action, and invariants are the rules which are always true. Using a trivial example of a bank account (which is not allowed to go into the red):

Precondition to Withdraw: withdraw amount > 0
Postcondition to Deposit: balance > deposit amount
Invariant: account balance >= 0

So, then a valid program would look something like:

account = new Account()
account.deposit(200)
account.withdraw(100)

All good there. Now what happens when I do the following?

account = new Account()
account.deposit(200)
account.withdraw(600)

My formal specification analysis would (should) verify that the last withdraw leaves the balance at -400 < 0, which fails the invariant. So the program has a problem. Note that, according to formal specification, if you actually ran this invalid program the result would be indeterminate: that is to say, (formally speaking) we cannot tell you what would happen (it may fail gracefully or blow up dramatically).

Ok, good so far - but can you see where this stuff might be useful? Do you ever explicitly write precondition code? The type that looks something like:

class Account
{
  public void Deposit(int amount)
  {
      if (amount < 0) { return; } // or throw an exception, or something else horrible
      // else continue
  }
}

class MyBanks
{
  private Account account = new Account();

  public void Deposit(int amount)
  {
      if (amount < 0) { return; }
      account.Deposit(amount);
  }
}

I have seen this style a lot, especially when people heavily unit test each class for every possibility that could happen. In reality, your other classes bomb out before you get to the if statement in Account, but what you really want is to describe a precondition on Deposit. I have long felt that the ability to unobtrusively define preconditions, postconditions and invariants on classes has been lacking in common programming languages/ecosystems. So, let’s take a look at Code Contracts, which comes bundled with .Net 4.0.

.Net 4.0 Code Contracts

Code Contracts support preconditions, postconditions and invariants, with the same meanings as I described above. They can statically analyse your solution at compilation time and also dynamically analyse it at runtime. Here is what our previous solution would look like:

using System.Diagnostics.Contracts;

class Account
{
  private int balance;

  [ContractInvariantMethod]
  private void BalanceHasToAlwaysBeInTheBlack()
  {
      // Note to reader: only conditions can go in here
      Contract.Invariant(balance >= 0);
  }

  public void Deposit(int amount)
  {
      Contract.Requires(amount > 0);
      Contract.Ensures(balance > amount);   // Note to reader: you can also ensure on the returned object e.g. Contract.Ensures(Contract.Result<int>() > 0)

      balance += amount;
  }

  public void Withdraw(int amount)
  {
      Contract.Requires(amount > 0);

      balance -= amount;
  }
}

Running static analysis (hint: turn it on in the project’s Properties panel) on account.Deposit(-400) would produce a warning to say that this breaks the precondition, whereas if (amount > 0) { account.Deposit(amount); } would be validated. If you did not listen to the warning and carried on regardless, in production the application would throw an exception.

So, what do you think? In my opinion, it is an interesting idea, and in theory I would love to use it. However, Microsoft’s execution leaves a lot to be desired. The analysis does not seem to find everything (the analysis checker said that balance/50 would break the invariant). I don’t find it unobtrusive. Perhaps you can farm the Contracts off to a meaningfully-named method, but I’m not sure that it works like that.

I am also a little worried about the implications of this. I was talking to a few people who didn’t write tests: they believed that unit tests were not DRY, so they didn’t like writing them, and they were really excited by Code Contracts because they made the specification DRY. However, effective TDD is more about understanding what to test than about the tests themselves, and it is the same thought process involved when formally specifying a system. I worry about the level of false confidence this gives to developers who are not used to writing effective tests.

I am eager to work with them properly and explore how they can be used successfully. There is some good in there, so I don’t want to throw the baby out with the bathwater.

[1] for my university honours project I worked on a theorem prover, written in Prolog, for formal specifications written in B

Conferences: SDC and GoTo


I have been getting my speaking voice ready, and now I am getting set to use it. This week I am speaking about how to integrate with other systems, in my talk titled The Three Pronged Approach To Integrating Systems, at Scandinavian Developer Conference 2011 in Gothenburg, Sweden. I am also running a conversation corner about feedback. If you want to follow what is going on, the hashtag is #scandev

Next month, I will be speaking at Goto CPH (in Copenhagen, of course) in a talk titled What about People over Process? I am really looking forward to this one - the format is a little looser than I usually go for, but it is an accumulation of all the psychology books I have been reading of late.

TED Talk: Sheryl Sandberg: Why We Have Too Few Women Leaders


Do you like TED? I have been enjoying the TED app on the iPad lately, and I recently saw the TED talk given by Sheryl Sandberg, Facebook’s COO - Why we have too few women leaders.

This talk was really interesting, and I highly recommend watching it. In it, she gives three really great pieces of advice for fellow females:

  1. Sit at the table - women systematically underrate their own abilities. As a result, women don’t negotiate for themselves in the workplace. Women attribute success to external factors; men attribute success to themselves. No one gets to the corner office by sitting on the side, not at the table, and no one gets the promotion if they don’t believe they deserve their success or don’t own their success. Unfortunately, success and likability are positively correlated for men and negatively correlated for women. How good are we as managers at seeing that men are reaching for opportunities more than the women are?

  2. Make your partner a real partner - make your partnership an equal partnership. Be kind to the fathers at Mummy & Me classes. Make child-rearing as important a job as any other, for both men and women. Societal pressures are just as hard for men who want to stay home to look after the kids whilst their wives are at work as they are for the wives who want to be leaders. We can’t solve the problem of the lack of female leaders if we neglect to also solve the problem of the lack of stay-at-home fathers.

  3. Don’t leave before you leave - basically, don’t back away from opportunities because you don’t think you could fit them in with a family life or maternity leave that could happen sometime in the future but is not your reality right now. From the moment a woman starts thinking about having a child, she starts thinking about how to make room for that child in her already busy life. And from that moment, she doesn’t raise her hand any more, look for a promotion, or start a new project. The problem is that even if she were to get pregnant that day, with 9 months’ incubation, a year of maternity leave and a few months getting back into work, it is almost 2 years before she can think of raising her hand again. And the reality of the situation is that it is usually a few years before she even starts to have children - in a lot of cases, before she even has a partner. Sheryl’s words are inspiring - keep your foot on the gas pedal until you actually leave.

Sheryl is very engaging in this talk, and I thoroughly recommend watching this video!

How Gender Stereotypes Influence Emerging Career Aspirations


I have just finished watching Shelley Correll’s talk on How Gender Stereotypes Influence Emerging Career Aspirations, a video filled with really great research to back up what I have been thinking about of late. I don’t want to summarise too much - I would say go and watch it - but here are the main highlights.

Shelley is a professor of Sociology at Stanford University. She chose to research the effect of gender stereotypes on students choosing their careers, especially as they make decisions to enter Science, Technology, Engineering or Maths (STEM) subjects. She conducted numerous controlled experiments to find out how stereotypes influence the performance of tasks, and also the subjects’ self-assessment of their ability to do those tasks. She found that when subjects were primed with negative stereotypes beforehand, even by something as simple as filling in demographic information, they performed worse than subjects who were not primed. Even more interesting, subjects who were primed with positive stereotypes prior to the tasks performed better than the control group.

Another study conducted at Stanford compared the self-assessments of male and female students who were equal in terms of grades and marks on tests. She found that the girls were more likely to underrate their ability than the boys were. She further showed that a person’s belief in their own ability was a strong contributing factor in determining whether they would pursue careers where that ability was needed. She therefore concluded that, as girls underrate their ability in areas like maths and science, they don’t gravitate to subjects like AP Calculus in the senior grades. She also showed that the opposite effect encouraged more girls into those subjects.

There is often an outcry around attracting more women to STEM: that women are either not capable or have no interest in these subjects. She brilliantly debunked both myths - the first by showing the bell curves of men’s vs women’s SAT maths scores and demonstrating that there was no significant difference between the two, and the second by displaying a graph showing a rising trend in the percentage of women represented in physics courses from 1977 to 2006, corresponding to a lot of time, money and effort spent by the National Science Foundation on encouraging women into these fields - which indicates that the effort is working and that something can be done to make women more interested (good news for us).
She argues that there are four basic principles of how gender stereotypes affect people:
  • Stereotypic biases often occur out of awareness
  • Biases are more extreme in uncertain settings - when people don’t know what to do, gender stereotypes fill in the gaps
  • The impact of stereotypes changes when beliefs in the local setting change
  • Stereotypes also bias the standards gatekeepers use to assess competence
This last point is interesting, and she showed a few studies to support it:
  • An experimental study of the evaluation of engineering internship applications finds that women are judged by a harsher standard
  • An experimental study of the evaluation of police chief candidates finds that evaluators change their criteria when evaluating women vs men
  • A study of student evaluations of professors shows that female professors are judged more harshly
  • Women leaders experience a double bind - when they try to prove that they belong, they come across as assertive and are not liked
  • Since stereotypes affect judgements of others, we must change the gender beliefs operating in an organisation, not just individual women’s beliefs. Therefore we need to fix organisations rather than fixing women.
So, what can be done? Shelley suggests three ways:
Control the message: what are the gender beliefs operating in the organisation? How does the organisation present itself? E.g. Carnegie Mellon changed the way people saw computer science, away from just geeks, which led to an increase in the percentage of women from 7% to 42%. In another study, two videos about further education in maths were shown to elite female maths students at Stanford; one video showed images with a balanced proportion of male and female engineers, the other an unbalanced group of mainly men - apart from the images, the videos were identical. They found that the women who watched the balanced video responded more positively than the women who saw the unbalanced video; even more interestingly, there was a physical reaction to the unbalanced video, one which sociologists also see when people feel like they don’t belong.
Make performance standards clear, and communicate them clearly. Teach tacit knowledge. E.g. when female students dropped out of engineering courses they were asked their reason, and the majority said their grades were too low. However, these students were achieving higher grades than men who stayed in the course. Had the performance standards been explicit, the students could have seen that their grades were satisfactory and that low grades did not mean failure in the course.
Hold gatekeepers accountable for gender disparities. It is important to keep thinking about how our policies and procedures affect career-relevant decisions.
It was a great presentation, and she finished with three things that individuals can do: understand how stereotypes might affect your own career decision-making; realise that negative feedback is common and productive; and promote organisational change.

How to Decrease Risk When Designing Integrated Systems


How often do you get to work on a completely isolated system? It is more and more common that your system needs to talk to another system to function, whether that be a legacy system, an external API or even one that has not been developed yet. Every system that you talk to adds risk to your project: technical risk, quality risk and delivery risk. But never fear - help is at hand.

Come to my London Dot Net User Group talk next Thursday (18th November) at Skills Matter and you will hear how to design your system to decrease this risk, how to ensure the quality of the overall system, and how to manage the successful delivery of the system. Not only that, but you will hear real-life cases - some success stories, but also many failures - and you can learn from the mistakes I witnessed. Have questions about the integration points in your ecosystem? Fire them up, and let’s see if we can work through them together.

Registration is at http://skillsmatter.com/podcast/open-source-dot-net/how-to-design-your-system-to-decrease-risks

Thanks to LDNUG, Skills Matter and Rachel Laycock for finding me a soap box to stand on.