Code Style Quality

As a developer you should breathe code quality, especially if you work in a team, where it is important that others can quickly understand what you wrote. Let’s look at some advantages and examples of how to implement a quality code style in a team.

Advantages

There are many, many reasons for good code styling; the main ones are:
– it makes maintenance easier;
– it improves code review;
– it saves disk space (though not always);
– it improves performance.

So the key is having a good code style and making sure everyone uses the same one.

Implementing it

There are two ways to implement it: tools and code review.

There are several tools that help everyone follow good code styling standards, and some of them integrate with build systems like Jenkins or TeamCity.

We can also define a violation threshold; above that threshold, the build system should reject the build.

StyleCop

One of these styling extensions is StyleCop. With StyleCop we can catch things like double spaces, code after a closing bracket, or bad ordering of functions or members.
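As a purely illustrative sketch (not tied to specific StyleCop rule names), this is the kind of layout a style analyzer would typically flag:

public class StyleViolations
{
    // Bad member ordering: this public method is declared before the private field it uses.
    public int NextValue()
    {
        counter = counter  +  2;                            // double spaces inside the expression
        if (counter > 10) { counter = 0; } return counter;  // code after a closing bracket, on the same line
    }

    private int counter;
}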

It is very easy to set up in development environments, as there is a Visual Studio extension; and when integrated with TeamCity, the build can fail if StyleCop detects a configured number of violations.

So, in order to implement the policy in a team, we first need to make sure the whole team understands the importance of code styling, and start with one project so the team can get used to it.
Add the extension to that project and set a high threshold (say, 50 violations).

After some time, we should reduce the threshold until it reaches zero. We should also progressively apply the same policy to all projects.

Peer Review

There is still the idea that peer review is about blaming someone. It is not. Peer review is team building.
As a team we should be concerned with code quality, and as a developer focused on a task, it is normal to make mistakes that others can catch.

Code styling should be included in peer reviews.
What should a reviewer highlight?

1. Dead spaces.

Dead spaces waste bytes on our disk for nothing.

It is true that disk space is not expensive, but if we have 100 files with an average of 50 dead spaces each, that is 50 * 100 = 5,000 wasted bytes. And we all know that we have more than 100 files with 50 dead spaces each 🙂

There are some Visual Studio extensions that help us keep our code free of dead spaces:

a) Trailing Whitespace Visualizer

b) Trailing Space Flagger

2. Long lines.

This one will improve our team’s performance, as it is much easier to review code with correct line breaks.

Especially if the reviewer uses a side-by-side code review layout. The recommended line length is 100 columns, though 105 is acceptable.

There is also an extension to help with this. Actually, it is part of a package of extensions that most people use, called “Productivity Power Tools”.

3. Wet code.

Wet code means “not DRY code”, where DRY stands for Don’t Repeat Yourself. DRY is a best practice and you can find more info on Wikipedia.
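A minimal, hypothetical sketch of the difference (the discount rule and the method names are made up purely for illustration):

// Wet: the same 10% discount rule is written twice, so any change has to be made twice.
public decimal OnlinePrice(decimal price) { return price - (price * 0.10m); }
public decimal InStorePrice(decimal price) { return price - (price * 0.10m); }

// DRY: the rule lives in one place and both callers reuse it.
public decimal ApplyStandardDiscount(decimal price) { return price - (price * 0.10m); }
public decimal OnlinePriceDry(decimal price) { return ApplyStandardDiscount(price); }
public decimal InStorePriceDry(decimal price) { return ApplyStandardDiscount(price); }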

4. Too deeply nested code blocks.

A good example is when we have a chain of ifs without an else. Even if we can’t merge them into a single if, most of the time we can invert the condition and return early.

A (very, very simple) example:

public void function(){
    someCodeHere();
    if(someBooleanExpression){
        moreCodeHere();
        if(anotherBool){
            AndMoreCodeHere();
        }
    }
}

We can change this to:

public void function(){
    someCodeHere();
    if(!someBooleanExpression)
        return;

    moreCodeHere();
    if(!anotherBool)
        return;

    AndMoreCodeHere();
}

5. Chains of function calls should be avoided.

It is very common to try to reduce our code to a single line, but sometimes it is much more useful to be able to see clearly what is going on.

Example:

public string OneLineCode(bool isThisTrue){
    return isThisTrue
        ? someFunction(AnotherFunction(someGlobalVariable, someEnum.Where(x => x.SomeBoolExpression).Select(someSelectDelegation).FirstOrDefault()))
        : someFunction(thirdFunction(string.Empty, someOtherGlobalVar));
}

As we can see, this is really hard to understand. However, if we change it to:

public string MoreThanOneLineCode(bool isThisTrue){

    if(isThisTrue){
        var parameter = someEnum.Where(x => x.SomeBoolExpression)
                            .Select(someSelectDelegation)
                            .FirstOrDefault();
        var someFunctionParameter = AnotherFunction(someGlobalVariable, parameter);
        return someFunction(someFunctionParameter);
    }

    var thirdFunctionReturn = thirdFunction(string.Empty, someOtherGlobalVar);
    return someFunction(thirdFunctionReturn);
}

We get two advantages: it is more readable, and we can debug it much better in Visual Studio, as we have the value of each step in a variable.

6. Unused variables.

It is very common, when we refactor code, to leave some unused variables forgotten behind.

As reviewers we should pay special attention to unused input parameters and to global variables as well.

One variant of this problem is unused injected objects. This can also be caused by refactoring, and it is dangerous to leave them in if we don’t need them. Think about this example (it happened to me a couple of years ago).

I had an object with several objects injected in its constructor, and each of those had its own injected objects, and so on. This is a normal scenario; nothing abnormal here.

The problem is when we don’t need that injection in the first place: we can spend precious milliseconds (or even seconds) loading this chain for nothing, and we also use memory unnecessarily. So this is a big problem to avoid.
(Tip: when debugging performance, this can be a bottleneck. Consider lazy loading your injected objects, as in the sketch below.)
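A minimal sketch of that lazy-loading tip. IExpensiveService and ReportController are hypothetical names, and it assumes your container can resolve Lazy<T> (or that you register a Lazy factory yourself):

public interface IExpensiveService
{
    string GenerateSummary();
}

public class ReportController
{
    private readonly Lazy<IExpensiveService> expensiveService;

    public ReportController(Lazy<IExpensiveService> expensiveService)
    {
        // Nothing behind IExpensiveService is constructed yet; the whole chain of
        // injected objects is only built the first time .Value is accessed.
        this.expensiveService = expensiveService;
    }

    public string BuildReport()
    {
        return this.expensiveService.Value.GenerateSummary();
    }
}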

7. Duplicate empty lines.

Please, get rid of them. Blank lines in code are really good for readability, but more than one empty line in a row is a waste of space (\r\n is 2 characters, so 2 bytes per empty line in ASCII).

And there are many, many more examples.

AWS SAM .NET Core

This tutorial will explain the fundamentals of AWS SAM and start you with some basic usage.

What is AWS SAM?

The AWS Serverless Application Model (AWS SAM) is a model to define serverless applications. AWS SAM is natively supported by AWS CloudFormation and defines simplified syntax for expressing serverless resources. The specification currently covers APIs, Lambda functions and Amazon DynamoDB tables. SAM is available under Apache 2.0 for AWS partners and customers to adopt and extend within their own toolsets.

To create a serverless application using SAM, you first create a SAM template: a JSON or YAML configuration file that describes your Lambda functions, API endpoints and the other resources in your application. Then, you test, upload, and deploy your application using the AWS SAM CLI. During deployment, SAM automatically translates your application’s specification into CloudFormation syntax, filling in default values for any unspecified properties and determining the appropriate mappings and invocation permissions to set up for any Lambda functions.
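As a rough sketch, a minimal template for a single .NET Core function exposed through an API endpoint could look something like this (the handler, CodeUri path and runtime version are illustrative, not taken from a real project):

# template.yaml (illustrative example)
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: HelloWorld::HelloWorld.Function::FunctionHandler
      Runtime: dotnetcore2.1
      CodeUri: ./src/HelloWorld/bin/Release/netcoreapp2.1/publish
      Events:
        GetHello:
          Type: Api
          Properties:
            Path: /hello
            Method: get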

Let’s Go!

Test Quality

The philosophy of unit testing and why it’s important.

Who owns software quality?

Surprisingly, when asked this question most people will answer “our QA team”. If that is how you feel about software quality, it is very hard to then be agile and deliver software that adds business value and is of high quality. At this stage you are probably thinking that is harsh, but step back for a minute: quality is owned by everyone in the business, and if one link fails the whole chain breaks. As a development team we should work towards delivering high code quality, so let’s have a look at some best practices for testing…

Let’s Go!

Pair Programming

This will explain the fundamentals of Pair Programming – the why and the how.

What is Pair Programming?

Pair programming is an agile development technique in which two programmers work together at one workstation. One, the driver, writes code while the other, the observer or navigator, reviews each line of code as it is typed in. The two programmers switch roles frequently.

While reviewing, the observer also considers the “strategic” direction of the work, coming up with ideas for improvements and likely future problems to address. This frees the driver to focus all of their attention on the “tactical” aspects of completing the current task, using the observer as a safety net and guide.

Let’s Go!

Docker Fundamentals

This tutorial will explain the fundamentals of Docker and start you with some basic usage.

What is Docker?

Docker is open source software to pack, ship and run any application as a lightweight container. Containers are completely hardware and platform independent so you don’t have to worry about whether what you are creating will run everywhere.

In the past, virtual machines have been used to accomplish many of these same goals. However, Docker containers are smaller and have far less overhead than VMs. VMs are also not very portable, as different VM runtime environments vary widely, whereas Docker containers are extremely portable. Finally, VMs were not built with software developers in mind; they contain no concept of versioning, and logging/monitoring is very difficult. Docker images, on the other hand, are built from layers that can be version controlled. Docker has logging functionality readily available for use.

You might be wondering what could go into a “container”. Well, anything! You can isolate pieces of your system into separate containers. You could potentially have a container for nginx, a container for MongoDB, and one for Redis. Containers are very easy to set up. Major projects like nginx, MongoDB, and Redis all offer free Docker images for you to use; you can install and run any of these containers with just one shell command. This is much easier than using a virtual machine (even with something like Vagrant).
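For example, assuming Docker is already installed, a single command like this pulls the official Redis image and starts it as a background container (the container name here is just an example):

docker run -d --name my-redis -p 6379:6379 redis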

Let’s Go!

Deployment Through the Ages

Vanessa Love just posted an intriguing little snippet on Twitter.

And I got halfway through sticking some notes into the Google doc, and then thought actually this might make a fun blog post. So here’s how deployment has evolved over the 14 years since I first took over the hallowed mantle of webmaster@spotlight.com.

2003: Beyond Compare (maybe?)

The whole site was classic ASP – no compilation, no build process, all connection credentials and other settings were managed as application variables in the global.asa file. On a good day, I’d get code running on my workstation, test it in our main target browsers, and deploy it using a visual folder comparison tool. It might have been Beyond Compare; it might have been something else. I honestly can’t remember and the whole thing is lost in the mists of time. But that was basically the process – you’d have the production codebase on one half of your screen and your localhost codebase on the other half, and you’d cherry-pick the bits that needed to be copied across.

Of course, when something went wrong in production, I’d end up reversing the process – edit code directly on live (via UNC share), normally with the phone wedged against my shoulder and a user on the other end; fix the bug, verify the user was happy, and then do a file sync in the other direction to get everything from production back onto localhost. Talk about a tight feedback loop – sometimes I’d do half-a-dozen “deployments” in one phone call. It was a simpler time, dear reader. Rollback plan was to hammer Ctrl-Z until it’s working again; disaster recovery was tape backups of the complete source tree and database every night, and the occasional copy’n’paste backup of wwwroot before doing something ambitious.

Incidentally, I still use Beyond Compare almost daily – I have it configured as my merge tool for fixing Git merge conflicts. It’s fantastic.

2005: Subversion

Once we hired a second developer (hey Dan!) the Beyond Compare approach didn’t really work so well any more, so we set up a Subversion server. You’d get stuff running on localhost, test it, maybe share an http://www.spotlight.com.dylan-pc/ link (hooray for local wildcard DNS) so other people could see it, and when they were happy, you’d do an svn commit, log into the production web server (yep, the production web server – just the one!) and do an svn update. That would pull down the latest code, update everything in-place. There was still the occasional urgent production bugfix. One of my worst habits was that I’d fix something on production and then forget to svn commit the changes, so the next time someone did a deployment (hey Dan!) they’d inadvertently reintroduce whatever bug had just been fixed and we’d get upset people phoning up asking why it was broken AGAIN.

2006: FinalBuilder

This is where we start doing things with ASP.NET in a big way. I still dream about OnItemDataBound sometimes… and wake up screaming, covered in sweat. Fun times. The code has all long since been deleted but I fear the memories will haunt me to my grave.

Anyway. By this point we already had the Subversion server, so we had a look around for something that would check out and compile .NET code, and went with FinalBuilder. It had a GUI for authoring build pipelines and processes, some very neat features, and could deploy .NET applications to IIS servers. This was pretty sophisticated for 2006.

2008: test server and msdeploy

After one too many botched FinalBuilder deployments, we decided that a dedicated test environment and a better deployment process might be a good idea. Microsoft had just released a preview of a new deployment tool called MSDeploy, and it was awesome. We set up a ‘staging environment’ – it was a spare Dell PowerEdge server that lived under my desk, and I’d know when somebody accidentally wrote an infinite loop because I’d hear the fans spin up. We’d commit changes to Subversion, FinalBuilder would build and deploy them onto the test server, we’d give everything a bit of a kicking in IE8 and Firefox (no Google Chrome until September 2008, remember!) and then – and this was magic back in 2008 – you’d use msdeploy.exe to replicate the entire test server into production! Compared to the tedious and error-prone checking of IIS settings, application pools and so on, this was brilliant. Plus we’d use msdeploy to replicate the live server onto new developers’ workstations, which was a really fast, easy way to get them a local snapshot of a working live system. For the bits that still ran interpreted code, anyway.

2011: TeamCity All The Things!

By now we had separate dev, staging and production environments, and msdeploy just wasn’t cutting it any more. We needed something that could actually build different deployments for each environment – connection strings, credentials, and so on. By then there was support in Visual Studio for doing XML configuration transforms, so you’d create a different config file for every environment, check those into revision control, and get different builds for each environment. I can’t remember exactly why we abandoned FinalBuilder for TeamCity, but it was definitely a good idea – TeamCity has been the backbone of our build process ever since, and it’s a fantastically powerful piece of kit.

2012: Subversion to GitHub

At this point, we’d grown from me on my own doing webmaster stuff to a team of about six developers. Even Subversion was starting to creak a bit, especially when you were trying to merge long-lived branches and getting dozens of merge conflicts, so we started moving stuff across to GitHub. It took a while – I’m talking months – for the whole team to stop thinking of Git as ‘unnecessarily complicated Subversion’ and really grok the workflow, but we got there in the end.

Our deployment process at this point was to commit to the Git master branch and wait for TeamCity to build and deploy the development version of the package. Once it was tested, you’d use TeamCity to build and deploy the staging version – and if that went OK, you’d build and deploy production. Like every step on this journey, it was better than anything we’d had before, but it had some obvious drawbacks – like the fact that we had several hundred separate TeamCity jobs and no consistent way of managing them all.

2013: Octopus Deploy and Klondike

When we started migrating from TeamCity 6 to TeamCity 7, it became rapidly apparent that our “build everything several times” process… well, it sucked. It was high-maintenance, used loads of storage space and unnecessary CPU cycles, and we needed a better system.

Enter Octopus Deploy, whose killer feature for us was the ability to compile a .NET web application or project into a deployment NuGet package (an “octopack”), and then apply configuration settings during deployment. We could build a single package, and then use Octopus to deploy and configure it to dev, staging and live. This was an absolute game-changer for us. We set up TeamCity to do continuous integration, so that every commit to a master branch would trigger a package build… and before long, our biggest problem was that we had so many packages in TeamCity that the built-in NuGet server started creaking.

Our replacement NuGet feed started life as an experimental build of themotleyfool/NuGet.Lucene – which we actually deployed onto a server we called “Klondike” (because klondike > gold rush > get nuggets fast!) – and it worked rather nicely. Incidentally, that NuGet.Lucene library is now the engine behind themotleyfool/Klondike, a full-spec NuGet hosting application – and I believe our internal hostname was actually the inspiration for their project name. That made for a certain amount of confusion during the 18 months or so when Klondike the project existed but we were still running the old NuGet.Lucene codebase on a server called ‘klondike’. It’s OK, we’ve now upgraded it and everything’s lovely.

It was also in 2013 that we started exploring the idea of automatic semantic versioning – I wrote a post in October 2013 explaining how we hacked together an early version of this. Here’s another post from January 2017 explaining how it’s evolved. We’re still working on it. Versioning is hard.

And now?

So right now, our build process works something like this.

  1. Grab the ticket you’re working on – we use Pivotal Tracker to manage our backlogs
  2. Create a new GitHub branch, with a name like 12345678_fix_the_microfleems – where 12345678 is the ticket ID number
  3. Fix the microfleems.
  4. Push your changes to your branch, and open a pull request. TeamCity will have picked up the pull request, checked out the merge head and built a deployable pre-release package (on some projects, versioning for this is completely automated)
  5. Use Octopus Deploy to deploy the prerelease package onto the dev environment. This is where you get to tweak and troubleshoot your deployment steps.
  6. Once you’re happy, mark the ticket as ‘finished’. This means it’s ready for code review. One of the other developers will pick it up, read the code, make sure it runs locally and deploys to the dev environment, and then mark it as ‘delivered’.
  7. Once it’s delivered, one of our testers will pick it up, test it on the dev environment, run it past any business stakeholders or users, and make sure we’ve done the right thing and done it right.
  8. Finally, the ticket is accepted. The pull request is merged, the branch is deleted. TeamCity builds a release package. We use Octopus to deploy that to staging, check everything looks good, and then promote it to production.

And what’s on our wishlist?

  • Better production-grade smoke testing. Zero-footprint tests we can run that will validate common user journeys and scenarios as part of every deployment – and which potentially also run as part of routine monitoring, and can even be used as the basis for load testing.
  • Automated release notes. Close the loop, link the Octopus deployments back to the Pivotal tickets, so that when we do a production deployment, we can create release notes based on the ticket titles, we can email the person who requested the ticket saying that it’s now gone live, that kind of thing.
  • Deployments on the dashboards. We want to see every deployment as an event on the same dashboards that monitor network, CPU, memory, user sessions – so if you deploy a change that radically affects system resources, it’s immediately obvious there might be a correlation.
  • Full-on continuous deployment. Merge the PR and let the machines do the rest.

So there you go – fourteen years’ worth of continuous deployments. Of course, alongside all this, we’ve moved from unpacking Dell PowerEdge servers and installing Windows 2003 on them to running Chef scripts that spin up virtual machines in AWS and shut them down again when nobody’s using them – but hey, that’s another story.