Code Style Quality

As a developer you should breathe code quality, especially if you work in a team, where it is important that others can quickly understand what you wrote. Let's look at some advantages and examples of how to implement a quality code style in a team.


There are many, many reasons for good code styling; the main ones are:
– it makes maintenance easier;
– it improves code review;
– it saves disk space (though not always);
– it can improve performance.

So the key is: have a good code style, and make sure everyone uses the same one.

Implementing it

There are two ways to implement it: tools and code review.

There are several tools that help everyone follow good code-styling standards, and some of them integrate with build systems like Jenkins or TeamCity.

We can also define a violation threshold; above that threshold, the build system should reject the build.


One of these styling tools is StyleCop. With StyleCop we can catch things like double empty spaces, code after a closing bracket, or bad ordering of functions and members.

It is very easy to set up in development environments, as there is a Visual Studio extension; and by integrating with TeamCity, the build can fail if StyleCop detects more than a configured number of violations.

So, to introduce the policy in a team, we first need to make sure the whole team understands the importance of code styling, and start with one project so the team can get used to it.
Add the extension to that project and set a high threshold (say, 50 violations).

After some time, we should reduce the threshold until it reaches zero, and progressively roll the same policy out to all projects.
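As an illustration, the gate the build server applies can be sketched as two tiny functions (the names and numbers here are hypothetical; in practice TeamCity reads the real violation count from the StyleCop report):

```javascript
// Fail the build when the violation count exceeds the configured threshold.
function shouldFailBuild(violationCount, threshold) {
  return violationCount > threshold;
}

// Ratchet the threshold down by `step` over time, but never below the
// current violation count (so the build doesn't immediately start failing)
// and never below zero.
function nextThreshold(currentThreshold, currentViolations, step) {
  const candidate = currentThreshold - step;
  return Math.max(candidate, currentViolations, 0);
}
```

The ratchet is the important part: the threshold only moves towards zero as the team cleans up violations, so old debt is tolerated while new debt is blocked.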

Peer Review

There is still the idea that peer review is about blaming someone. It is not: peer review is team building.
As a team we should be concerned with code quality, and as a developer who is focused on a task, it is normal to make mistakes that others can catch.

Code styling should be included in peer reviews.
What should a reviewer highlight?

1. Dead spaces.

Dead spaces spend bytes on our disk for nothing.

It is true that disk space is not expensive, but if we have 100 files with dead spaces, say an average of 50 per file, that is 50 × 100 = 5,000 bytes. And we all know we have more than 100 files with 50 dead spaces each 🙂

There are some Visual Studio extensions that help us keep our code free of dead spaces:

a) Trailing Whitespace Visualizer

b) Trailing Space Flagger
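If you would rather fix dead spaces in bulk than spot them by eye, the cleanup is a one-line regex; here is a sketch (not tied to either extension above):

```javascript
// Strip trailing spaces and tabs from every line of a file's contents.
// The lookahead matches just before a line break (or end of input),
// so the line breaks themselves are left untouched.
function stripTrailingWhitespace(text) {
  return text.replace(/[ \t]+(?=\r?\n|$)/g, '');
}
```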

2. Long lines.

This one improves team performance, as it is much easier to review code with sensible line breaks.

Especially if the reviewer uses a side-by-side code review layout. The recommended line length is 100 columns, though 105 is acceptable.

There is also an extension to help with this. It is actually part of a package of extensions that many people use, called “Productivity Power Tools”.

3. Wet code.

Wet code means “not DRY code”, where DRY stands for Don’t Repeat Yourself. It is a best practice and you can find more info on Wikipedia.
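A tiny illustration of drying out wet code (all names here are hypothetical): two near-identical functions collapse into one, with the varying part as a parameter.

```javascript
// WET: the same trimming/lowercasing logic is written twice.
function formatAdmin(name) { return '[admin] ' + name.trim().toLowerCase(); }
function formatGuest(name) { return '[guest] ' + name.trim().toLowerCase(); }

// DRY: one function carries the shared logic; the role is a parameter.
function formatUser(role, name) {
  return '[' + role + '] ' + name.trim().toLowerCase();
}
```

Now a change to the formatting rule happens in one place instead of two, which is the whole point of DRY.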

4. Too deeply nested code blocks.

A good example is a chain of ifs without an else: even when we can’t merge them into a single if, most of the time we can invert the conditions instead.

(very, very simple) Example:

public void Function(SomeObject obj){
    if (obj != null){
        if (obj.IsValid){
            DoSomething(obj);
        }
    }
}

we can change this, by inverting the ifs, into:

public void Function(SomeObject obj){
    if (obj == null || !obj.IsValid){
        return;
    }

    DoSomething(obj);
}

5. Chains of functions should be avoided

It is very common to try to squeeze our code into a single line, but sometimes it is far more useful to be able to see what’s going on.


public string OneLineCode(bool isThisTrue){
    return isThisTrue
        ? someFunction(AnotherFunction(someGlobalVariable, someEnum.Where(x => x.SomeBoolExpression).Select(someSelectDelegation).FirstOrDefault()))
        : someFunction(thirdFunction(string.Empty, someOtherGlobalVar));
}

As we can see, this is really hard to understand. However, if we change it to:

public string MoreThanOneLineCode(bool isThisTrue){
    if (isThisTrue){
        var parameter = someEnum.Where(x => x.SomeBoolExpression).Select(someSelectDelegation).FirstOrDefault();
        var someFunctionParameter = AnotherFunction(someGlobalVariable, parameter);
        return someFunction(someFunctionParameter);
    }

    var thirdFunctionReturn = thirdFunction(string.Empty, someOtherGlobalVar);
    return someFunction(thirdFunctionReturn);
}
We gain two advantages: it is more readable, and we can debug it much better in Visual Studio, as we have the value of each step in a variable.

6. Unused variables.

It is very common, when refactoring code, to leave some unused variables forgotten behind.

As reviewers we should pay special attention to unused input parameters, and to global variables as well.

One variant of this problem is unused injected objects. These can also be left behind by refactoring, and they are dangerous to keep around if we don’t need them. Consider this example (it happened to me a couple of years ago).

I had an object whose constructor took several injected objects, each of which had its own injected objects, and so on. This is a normal scenario, and nothing abnormal so far.

The problem is when we don’t need the injection in the first place: we can spend precious milliseconds (or even seconds) loading this chain for nothing, and we also use unnecessary memory. This is a big problem to avoid.
(Tip: when debugging performance, this can be a possible bottleneck. Consider lazy-loading your injected objects.)
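The lazy-loading tip can be sketched in plain JavaScript (C#’s Lazy&lt;T&gt; embodies the same idea): the expensive dependency is only constructed on first use, so an injection that is never used costs nothing.

```javascript
// Wrap a factory so it runs at most once, and only when actually needed.
function lazy(factory) {
  let value;
  let created = false;
  return () => {
    if (!created) {
      value = factory();
      created = true;
    }
    return value;
  };
}

// Hypothetical usage: the factory stands in for a chain of injected
// objects we may never need. Nothing is built until getService() is called.
let constructions = 0;
const getService = lazy(() => {
  constructions += 1;
  return { name: 'expensive service' };
});
```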

7. Duplicate empty lines.

Please, get rid of them. Blank lines are really awesome for readability, but more than one empty line in a row is a waste of space (\r\n is 2 chars, so 2 bytes per empty line, ASCII based).
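This one also has a one-regex bulk fix; a sketch (runs containing \r\n are normalised to \n here, so it is not a CRLF-preserving fix):

```javascript
// Collapse runs of two or more consecutive blank lines into a single
// blank line (i.e. three or more line breaks become exactly two).
function collapseBlankLines(text) {
  return text.replace(/(\r?\n){3,}/g, '\n\n');
}
```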

And there are many many more examples.

Quick enzyme JS TDD

With the current wave of change in front-end web development, it’s easy to get lost in a sea of options when it comes to choosing a stack to develop with. The same goes for setting up a test environment: depending on which frameworks you’re using, you might be inclined to use different libraries and test runners to better suit the application workflow and logic.

So, we’ll go through all the necessary steps for a simple way to get you started writing tests for React components, using ES2015 syntax and Airbnb’s well-documented testing utility, enzyme. While we’re at it, we’ll set up a continuously running test runner for TDD.

Let’s go!

Retroing the Retros

After recently reading the book Scrum Mastery by Geoff Watts, I was reminded of something very important that I learnt after going through the motions of sprint start, sprint end and retro over and over again: people get bored of the same old same old. It is important to keep things engaging with fresh ideas.

When I joined Spotlight over a year ago, everything was new and exciting. While that was the case for me, I had forgotten that everyone else had been doing the same for a long time. So, I decided to bring back a little bit of play into work. Here are a couple of my personal favourites.

The Walking Dead inspired zombie Retro. The team (us), represented by the poor man on the left, prepared to fight the zombie attack. Here we added green stickies representing the tools we need to face the attack: in other words, everything that is helping us succeed.

The zombies on the right represent everything that is standing in the way of our success; here we added a pink sticky for each obstacle.

The middle part, covered by orange stickies, represents the measures we took to prepare for the zombie attack.

Retro - Zombies

Balloon Flight Retro. This is a much simpler one that focussed on:

  • What is stopping us
  • What is helping us


More ideas are always welcome.

Although I have had a burst of interesting sessions recently, the real challenge is in keeping it going. Watch out for some Legospectives in the near future!

Spotlight adds GNU Terry Pratchett compatibility

With the sad passing of fantasy author extraordinaire Sir Terry Pratchett, a small internet project has sprung up to immortalise him in the code of webservers and emails everywhere. The project is GNU Terry Pratchett.

Now every visit to the main site serves the header ‘X-Clacks-Overhead: GNU Terry Pratchett’, for the foreseeable future, as part of our Varnish configuration.
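Our exact configuration isn’t reproduced here, but the heart of it is a one-line addition to Varnish’s delivery hook; a minimal VCL sketch:

```vcl
# Minimal sketch (not our exact config): set the header on every response
# as it is delivered to the client.
sub vcl_deliver {
    set resp.http.X-Clacks-Overhead = "GNU Terry Pratchett";
}
```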


Spotlight, Dynamics CRM, and the age-old question of “build vs buy”

We’re in the early stages of creating a new membership system built on Microsoft Dynamics CRM 2015. This isn’t a decision we’ve taken lightly – the decision to buy vs. build is always complex where software is concerned, as Mike Hadlow explains in this excellent blog post. In our case, though, there are two good reasons why we’ve decided to integrate an off-the-shelf solution.

First, we are willing to change our own business process to suit the software. We’ve been printing books since 1927, and our business model is tightly coupled to our publishing process. As part of this project, we’re going to decouple those two areas of activity. Publishing is a differentiator for us, and always will be, but membership is not – whilst it’s important that it works, it’s not hugely important how it works. If CRM can offer us a “pit of success”, we’ll happily do what’s necessary to fall in it.

Second – we really understand what it costs to write our own software. We’ve got a solid, mature agile process, which links in to our time-tracking system. One of the high-level metrics that we track is “cost per point” – how much, in pounds, does it cost us to get a single story point into production? It’s easy for businesses to think of off-the-shelf software as “expensive”, because it has a price tag, and bespoke software as ‘free’ because you’re already paying developers’ salaries, but that’s a pretty naive way to look at it. When you factor in the opportunity cost of all the things we could have been doing instead of reinventing the CRM wheel, the off-the-shelf option starts to look a lot more attractive even when it carries a hefty up-front price tag.

That’s two good reasons why we’re going with CRM2015. Now for two good reasons why CRM projects fail, and what we’re doing to mitigate them.

First – scope creep. CRM vendors will happily sell CRM as the solution to all your problems, and then they’ll start showing off marketing campaigns and case management and Outlook integration and web portals and everybody’s eyes light up like this is the most amazing thing they’ve ever seen… and next thing you know you’ve got a 300-page “requirements” document and everyone’s got so carried away by what’s possible that they’ve forgotten what they were trying to fix in the first place. I’ve seen this happen first-hand, and it doesn’t work, and the reason it doesn’t work is that the project isn’t being driven by prioritised requirements, it’s being driven by wish-lists.

So… start with a problem. Any successful business probably has dozens of things that could be done better – so list them, analyze them, identify dependencies. Work out which one to solve first, and focus. CRM isn’t fundamentally different to any other software project. Identify your milestones. Be absolutely brutal about the MVP – what is the simplest possible thing that’s better than what we’ve got right now? Build that. Ship that. Get people using it. In our case, CRM’s first outing is going to be as a replacement for GroupMail, our email-merge tool, and that’s it. We’ll integrate just enough data that CRM can send personalised email to a specific group of customers, we’ll ship it, and we’ll use it – and then we’ll iterate based on feedback and lessons learned. We already have a pretty good idea what we’ll do after that, but we’re not going to worry about it until we’ve delivered that first MVP release.

I think the second reason CRM projects fail is over-extension. Dynamics CRM is a really powerful, flexible platform, and with enough consultants’ effort you can probably get it to do just about anything. But that doesn’t mean it’s the best solution. Sure, there are going to be cases where it makes sense to customise CRM by adding a new field or some validation rules. Spotlight holds the same core “business” data as any other company – what’s your name? Where do you live? What’s your email address? Is your account up-to-date? Off-the-shelf CRM is very, very good at managing this sort of information – and once you’ve got this core information in CRM, there are dozens of off-the-shelf marketing tools available to help you use it more effectively.

But Spotlight stores a lot more than that. We also store all the information that appears on your professional acting CV – height, weight, eye colour, hairstyle, skills, credits. We store details of almost all the productions being cast in the UK, we track tens of thousands of CV submissions every day, and millions of job notification emails each week. We manage terabytes of photography and video clips.  You probably could get CRM to manage all this information. But hey, you can open a beer bottle with a dollar bill – doesn’t mean it’s a good idea, though.


The overlap – the green bit – is where CRM solves one of our problems. The blue bit is things CRM does that aren’t really relevant to us – not at the moment, anyway. The red bit is the stuff that we’re going to keep out of CRM. And that dark area on the border… that’s representation, which is a tremendously complicated tangle of operational data, business data and publishing data that we’re going to have to work out as part of our service roadmap. Fun.

Now, different people – and systems! – have different expectations about what “CRM” and “membership” mean.

  • To our customers, good CRM means it’s easy to join Spotlight, it’s easy to manage your account, it’s easy to talk to us and get answers to your questions. You know how you hate phoning the electric company because every time you get through you talk to a different person who has no idea what’s going on, and your “my account” page on their website says one thing and your bill says something else and the person on the phone doesn’t agree with either of them? Yeah. Imagine the complete opposite of that.
  • To our marketing team, good CRM is about accurate data, effective marketing campaigns, happy customers – and, yes, revenue. It’s about helping us work out what we’re doing right and what we’re doing wrong, giving us the intelligence we need to make decisions about new products and initiatives.
  • To our software team,  good CRM means easy access to the data and processes you’ll need to build great products. Responsive systems, logical data structures, simple integration patterns and intuitive API endpoints. In other words, if you want to build an awesome online tool that’s only available to customers with a current, paid-up Spotlight Actors membership, it should be trivial to work out whether the current user can see it or not.

Same system, same data, three radically different use cases. So here’s how we’re proposing to make it work:


Astute readers will notice a slim black box marked “abstraction layer”. That’s how we’re going to fool the rest of our stack into thinking that Dynamics CRM 2015 is a ReSTful microservice, so tune in over the next couple of weeks to find out how it works, what sort of patterns and techniques we’re using, and how we’re going to test and monitor it.

(Astute readers may also notice a resemblance between Spotlight’s customer base and the cast of Game of Thrones… well, that’s because Spotlight’s customers are the cast of Game of Thrones. I told you working here was awesome.)

JavaScript Tools, Frameworks and Libraries

There are a staggering number of JavaScript Tools, Frameworks and Libraries these days. It’s hard to know what to use.

JavaScript logos

We’re developing a JavaScript mobile web application (we’ll write more about this) and we needed to decide what to use.

Here’s what we’ve gone with:

We’re also using SASS for styles and compiling it to CSS.

The reason for this is that we are trying not to use one monolithic framework, but to pick the best library for each job. For example, we are only using Backbone for URL routing (Underscore is a dependency of Backbone). We hope this will allow us to swap parts out when we want, rather than being locked into a certain way of doing things.
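To illustrate the “one job per library” point: the routing role we’re giving Backbone boils down to matching a URL pattern and extracting named parameters. A sketch of that idea in plain JavaScript (not Backbone’s actual implementation; route and parameter names are hypothetical):

```javascript
// Minimal route matcher in the spirit of Backbone's ':param' routes.
// Returns an object of extracted parameters, or null on no match.
function matchRoute(pattern, path) {
  const names = [];
  // Turn each ':name' segment into a capturing group, remembering its name.
  const regexSource = pattern.replace(/:([A-Za-z_]+)/g, (full, name) => {
    names.push(name);
    return '([^/]+)';
  });
  const match = path.match(new RegExp('^' + regexSource + '$'));
  if (!match) return null;
  const params = {};
  names.forEach((name, i) => { params[name] = match[i + 1]; });
  return params;
}
```

Keeping routing this small is exactly why swapping the library out later stays cheap.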

Keeping everything small and separate is great for development but for production we bundle it all up for performance and version it for cache busting. You can see an early version of our work here:


Development and Deployment Pipeline

Here is how we get ideas to production at Spotlight. I’ve borrowed the sales funnel analogy to illustrate our latest process.

pipeline funnel

  1. Ideas go into Trello to start with where they are evaluated
  2. Good ideas get turned into stories which we put into Pivotal Tracker along with other stories
  3. Stories get planned, worked on and the resulting code gets put into GitHub along with other minor fixes
    1. We make heavy use of pull requests so code that can’t be automatically merged is rejected to be reworked
    2. All code must get reviewed and this makes it very easy to see the changes
    3. Commits are tagged and Pivotal gets updated automatically
  4. Code merged into master is built by TeamCity and the unit tests are run
    1. If the build or any tests fail then the code is rejected
    2. The pull request is marked as good to merge automatically if everything is green
    3. The build outputs are put into our internal NuGet repository
  5. We then use Octopus to deploy the release to the test environment (in AWS)
    1. A suite of automated Selenium regression tests is then run against the deployed code
    2. Manual testing is then performed if required
    3. If everything is good then the code is promoted to the live environment (also in AWS)

The key here is automation. A huge amount of the above is automated. A commit to GitHub will trigger a large number of processes to check that the code won’t break anything. When a pull request is merged the master build will be deployed to the test environment overnight and regression tested by automated browsers. This process allows us to release more often and reduce risk.

However, the above is just a subset. The environments at AWS are built with Chef so if we need a new one it is just a click away. This makes it repeatable and more consistent. We can be confident that the test environment is as close to live as possible. Everything is monitored with automated alerts to highlight issues as early as possible. We also have some interesting ways of surfacing this information which we hope to write about soon.

Future plans

Blue-Green deployment

This means the test environment becomes the live environment (usually by switching the DNS), so you don’t need to promote anything.

Automatic push to live

We’d like to have the confidence to have a change go all the way to live without human intervention but we’ll need to improve the test coverage first.


We’ll keep you updated as we go.