With the current wave of change in front-end web development, it’s easy to get lost in a sea of options when choosing a stack to develop with. The same goes for setting up a test environment: depending on which frameworks you’re using, you might be inclined to use different libraries and test runners to better suit the application’s workflow and logic.
So, we’ll go through all the necessary steps for a simple way to get started with writing tests for React components, using ES2015 syntax and Airbnb’s well-documented testing utility, enzyme. While we’re at it, we’ll set up a continuously running test runner for TDD.
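As a taste of what that looks like, here is a minimal enzyme test in ES2015 — a sketch only, assuming a hypothetical `Button` component, mocha as the runner and chai’s `expect` for assertions:

```jsx
import React from 'react';
import { shallow } from 'enzyme';
import { expect } from 'chai';
import Button from '../src/Button'; // hypothetical component under test

describe('<Button />', () => {
  it('renders its label', () => {
    const wrapper = shallow(<Button label="Save" />);
    expect(wrapper.text()).to.contain('Save');
  });

  it('calls onClick when clicked', () => {
    let clicked = false;
    const wrapper = shallow(
      <Button label="Save" onClick={() => { clicked = true; }} />
    );
    wrapper.simulate('click'); // assumes Button puts onClick on its root element
    expect(clicked).to.equal(true);
  });
});
```

`shallow` renders `Button` only one level deep, so the test stays isolated from child components; a watching runner (e.g. `mocha --watch`) re-runs this on every save, which is what keeps the TDD loop tight.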
Here’s what we’ve gone with:
We’re also using SASS for styles, which we compile to CSS.
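For instance, variables and nesting in a `.scss` file compile down to plain, flat CSS — a small illustrative snippet (the names and colour are made up, not our actual stylesheet):

```scss
// buttons.scss — a Sass variable plus nesting…
$brand: #e4003b;

.btn {
  color: $brand;
  &:hover {
    text-decoration: underline;
  }
}

// …compiles to flat CSS:
// .btn { color: #e4003b; }
// .btn:hover { text-decoration: underline; }
```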
The reason for this is that we’re trying not to use one monolithic framework, but instead to pick the best library for each job. For example, we’re only using Backbone for URL routing (Underscore is a dependency). We hope this will let us swap parts out when we want to, rather than being locked into a certain way of doing things.
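To make that concrete, the routing piece really is just a `Backbone.Router` — no Backbone models or views — so it stays easy to swap out later. A browser-side sketch with hypothetical route names (this fragment assumes a bundler wires up the import):

```javascript
import Backbone from 'backbone'; // pulls in Underscore as a dependency

// Only the routing slice of Backbone is used here: map URLs to handler
// names and leave rendering to whichever view layer we choose.
const AppRouter = Backbone.Router.extend({
  routes: {
    '':          'home',
    'search/:q': 'search', // e.g. /search/actors
  },
  home() { /* mount the landing page */ },
  search(q) { /* run a search for q */ },
});

new AppRouter();
Backbone.history.start({ pushState: true }); // begin listening for URL changes
```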
Keeping everything small and separate is great for development, but for production we bundle it all up for performance and version it for cache busting. You can see an early version of our work here:
Here is how we get ideas to production at Spotlight. I’ve borrowed the sales funnel analogy to illustrate our latest process.
- Ideas go into Trello to start with, where they are evaluated
- Good ideas get turned into stories which we put into Pivotal Tracker along with other stories
- Stories get planned, worked on and the resulting code gets put into GitHub along with other minor fixes
- We make heavy use of pull requests, so code that can’t be automatically merged is rejected and sent back for rework
- All code must be reviewed, and pull requests make the changes very easy to see
- Commits are tagged and Pivotal gets updated automatically
- Code merged into master is built by TeamCity and the unit tests are run
- If the build or any tests fail then the code is rejected
- The pull request is marked as good to merge automatically if everything is green
- The build outputs are put into our internal NuGet repository
- We then use Octopus to deploy the release to the test environment (in AWS)
- A suite of automated Selenium regression tests is then run against the deployed code
- Manual testing is then performed if required
- If everything is good then the code is promoted to the live environment (also in AWS)
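Tying the gating steps together: a pull request only merges automatically when it merges cleanly, the build succeeds, and every test passes; any red result sends the code back for rework. A sketch of that decision (the function and field names are ours, not our actual tooling):

```javascript
// Decide whether a pull request may be merged automatically.
// Mirrors the pipeline above: a clean merge, a green build and green
// tests are all required; the first red result rejects the change.
function mergeGate({ mergesCleanly, buildPassed, testsPassed }) {
  if (!mergesCleanly) return { merge: false, reason: 'merge conflict - rework needed' };
  if (!buildPassed)   return { merge: false, reason: 'build failed' };
  if (!testsPassed)   return { merge: false, reason: 'unit tests failed' };
  return { merge: true, reason: 'all checks green' };
}

console.log(mergeGate({ mergesCleanly: true, buildPassed: true, testsPassed: true }));
// → { merge: true, reason: 'all checks green' }
```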
The key here is automation: a huge amount of the above is automated. A commit to GitHub triggers a number of processes to check that the code won’t break anything. When a pull request is merged, the master build is deployed to the test environment overnight and regression tested by automated browsers. This process allows us to release more often while reducing risk.
However, the above is just a subset. The environments at AWS are built with Chef so if we need a new one it is just a click away. This makes it repeatable and more consistent. We can be confident that the test environment is as close to live as possible. Everything is monitored with automated alerts to highlight issues as early as possible. We also have some interesting ways of surfacing this information which we hope to write about soon.
Blue-green deployment
This means having the test environment become the live environment (usually by switching the DNS), so you don’t need to promote anything.
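On AWS, that DNS switch can be expressed as a Route 53 record change: repoint the live hostname at the environment that was just tested. A sketch that only builds the change request — the zone, hostnames and helper name here are illustrative, not our actual setup, and applying it would need the AWS SDK or CLI:

```javascript
// Build a Route 53-style change request that repoints the live hostname
// at the freshly tested environment, making "test" become "live" without
// a separate promotion step.
function buildDnsSwitch(liveName, testEnvTarget) {
  return {
    ChangeBatch: {
      Comment: `Promote ${testEnvTarget} to live`,
      Changes: [{
        Action: 'UPSERT', // replace the existing record in one step
        ResourceRecordSet: {
          Name: liveName,
          Type: 'CNAME',
          TTL: 60, // keep the TTL low so the switch propagates quickly
          ResourceRecords: [{ Value: testEnvTarget }],
        },
      }],
    },
  };
}

console.log(buildDnsSwitch('www.example.com', 'test-env.example.com'));
```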
Automatic push to live
We’d like the confidence to let a change go all the way to live without human intervention, but we’ll need to improve our test coverage first.
We’ll keep you updated as we go.