Affordances, Signifiers, and Cartographobia

One of the teams here is putting the finishing touches on a new online version of Spotlight Contacts, our venerable and much-loved industry guide that started life as a printed handbook way back in 1947. Along the way, we’ve learned some very interesting things about data, and about how people perceive the way their data is being used.


One of the features of the new online version is that every listing includes a location map – a little embedded Google Map showing the business’ location. When we rolled this feature out as part of a recent beta, we got some very unhappy advertisers asking us to please remove the map from their listing immediately. Now, most of these were freelancers who work from home – so you can understand their concerns. But what’s really interesting is that in most cases, they were quite happy for their full street address to stay on the page – it was just the map that they were worried about.

Of course, this immediately resulted in quite a lot of “what? they want to keep the address and remove the map? ha ha! that’s daft!” from developers – who, as you well know, are prone to occasional outbursts of apoplectic indignation when they have to let go of their abstractions and engage with reality for any length of time – but when you think about it, it actually makes quite a lot of sense.

See, street addresses are used for lots of things. They’re used on contracts and invoices, they’re used to post letters and deliver packages. Yes, you can also use somebody’s address to go and pay them a visit, but there are many, many reasons why you might need to know somebody’s address that have nothing to do with you turning up on their doorstep. In UX parlance, we’d say that the address affords all of these interactions – the presence of a street address enables us to post a letter, write a contract or plan a trip.

A map, on the other hand, only affords one kind of interaction; it tells you how to actually visit somewhere. But because of this, a map is also a signifier. It sends a message saying “come and visit us” – because if you weren’t actually planning to visit us, why would you need to know that Spotlight’s office at 7 Leicester Place is actually in between the cinema and the church, down one of the little alleys that run between Leicester Square and Chinatown? For posting a letter or writing a contract, you don’t care – the street address is enough. But by including a map, you’re sending a message that says “hey – stop round next time you’re in the neighbourhood”, and it’s easy to see why that’s not really something you want if you’re a freelancer working from your home.

It’s important to consider this distinction between affordances and signifiers when you’re designing your user interactions. Don’t just think about what your system can do – think about all the subtle and not-so-subtle messages that your UI is sending.

Here’s the classic Far Side cartoon “Midvale School for the Gifted”, which provides us with some great examples of affordances and signifiers. The fact that you can pull the door open is an affordance. The sign saying PULL is a signifier – but the handle is both. Looking at it gives you a clue – “hey, I could probably pull that!” – and when you do, voilà, the door swings open. If you’ve ever found a door where you have to grasp the handle and push, then you’ve found a false affordance – a handle that sits there saying ‘pull me…’, and when you do, nothing happens. And, in software as in the Far Side, there are going to be times when all the affordances and signifiers in the world are no match for your users’ astonishing capacity to ignore them all and persist in doing it wrong.

(Far Side © Gary Larson)

ASP.NET Core 1.0 High Performance

Former Spotlighter James Singleton – who worked on our web team for several years and built some of our most popular applications, including our video/voice upload and playback platform – has just published his first book, ASP.NET Core 1.0 High Performance. Since the book includes one or two things that James learnt during his time here at Spotlight, he was gracious enough to invite me to contribute the foreword – and since the whole point of a foreword is to tell you all why the book is worth buying, I figured I’d just post the whole thing. Read the foreword, then read the book (or better still, buy it then read it.)

TL;DR: it’s a really good book aimed at .NET developers who want to improve application performance, it’s out now, and you can buy your copy direct from packtpub.com.

And that foreword in full, in case you’re not convinced:

 

“The most amazing achievement of the computer software industry is its continuing cancellation of the steady and staggering gains made by the computer hardware industry.”
– Henry Petroski

We live in the age of distributed systems. Computers have shrunk from room-sized industrial mainframes to embedded devices smaller than a thumbnail. At the same time, however, the software applications that we build, maintain and use every day have grown beyond measure. We create distributed applications that run on clusters of virtual machines scattered all over the world, and billions of people rely on these systems – email, chat, social networks, productivity applications and banking – every day. We’re online 24 hours a day, 7 days a week, and we’re hooked on instant gratification. A generation ago we’d happily wait until after the weekend for a cheque to clear, or allow 28 days for delivery; today, we expect instant feedback, and why shouldn’t we? The modern web is real-time, immediate, on-demand, built on packets of data flashing round the world at the speed of light, and when it isn’t, we notice. We’ve all had that sinking feeling… you know, when you’ve just put your credit card number into a page to buy some expensive concert tickets, and the site takes just a little too long to respond.

Performance and responsiveness are a fundamental part of delivering great user experience in the distributed age. However, for a working developer trying to ship their next feature on time, performance is often one of the most challenging requirements. How do you find the bottlenecks in your application performance? How do you measure the impact of those problems? How do you analyse them, design and test solutions and workarounds, and monitor them in production so you can be confident they won’t happen again?

This book has the answers. Inside, James Singleton presents a pragmatic, in-depth and balanced discussion of modern performance optimization techniques, and how to apply them to your .NET and web applications. Starting from the premise that we should treat performance as a core feature of our systems, James shows how you can use profiling tools like Glimpse, MiniProfiler, Fiddler and Wireshark to track down the bottlenecks and bugs that are causing your performance problems. He addresses the scientific principles behind effective performance tuning – monitoring, instrumentation, and the importance of using accurate and repeatable measurements when you’re making changes to a running system to try and improve performance.

The book goes on to discuss almost every aspect of modern application development – database tuning, hardware optimisations, compression algorithms, network protocols, object-relational mappers. For each topic, James describes the symptoms of common performance problems, identifies the underlying causes of those symptoms, and then describes the patterns and tools you can use to measure and fix those underlying causes in your own applications. There’s in-depth discussion of high-performance software patterns like asynchronous methods and message queues, accompanied by real-world examples showing how to implement these patterns in the latest versions of the .NET framework. Finally, James shows how you can not only load test your applications as part of your release pipeline, but can continuously monitor and measure your systems in production, letting you find and fix potential problems long before they start upsetting your end users.

When I worked with James here at Spotlight, he consistently demonstrated a remarkable breadth of knowledge, from ASP.NET to Arduinos, from Resharper to resistors. One day he’d be building reactive front-end interfaces in ASP.NET and JavaScript, the next he’d be creating build monitors by wiring microcontrollers into Star Wars toys, or working out how to connect the bathroom door lock to the intranet so that our bicycling employees could see from their desks when the office shower was free. Since James moved on from Spotlight, I’ve been following his work with Cleanweb and Computing 4 Kids Education. He’s one of those rare developers who really understands the social and environmental implications of technology – that whether it’s delivering great user interactions or just saving electricity, improving your systems’ performance is a great way to delight your users. With this book, James has distilled years of hands-on lessons and experience into a truly excellent all-round reference for .NET developers who want to understand how to build responsive, scalable applications. It’s a great resource for new developers who want to develop a holistic understanding of application performance, but the coverage of cutting-edge techniques and patterns means it’s also ideal for more experienced developers who want to make sure they’re not getting left behind. Buy it, read it, share it with your team, and let’s make the web a better place.

Check it out. The chapter on caching & message queueing is particularly good 🙂

Quality of Life with Git and Pivotal

At Spotlight we are currently using Pivotal Tracker as our planning tool and git for version control. One of the obvious things to do is to connect code changes to stories by including the story id in branch names and commit messages. This lets you cross-reference between the two and trace why you made which code changes.

But developers are lazy beasts, Pivotal story ids are long, and life is too short to write [#123456789] a gazillion times a day. Let’s see how we can improve things!

Automated branch creation

If you follow the standard git workflow, the first thing to do when starting some work is to create a branch. The branch should have a story number, as well as a meaningful title. Luckily, Pivotal has a very nice REST API we can use directly from curl to free us from the burden of typing the branch name:

#!/bin/bash
TOKEN='pivotal-token-goes-here'
PROJECT_ID='your-project-id-goes-here'
STORY_ID=`grep -o '[0-9]*' <<< "$1"`
NAME=`curl -X GET -H "X-TrackerToken: $TOKEN" "https://www.pivotaltracker.com/services/v5/projects/$PROJECT_ID/stories/$STORY_ID" | grep -o '"name":"[^"]*"'| head -1 | sed "s/'//" | sed s/'"name":"'// | sed s/'"'//g | sed s/' '/'_'/g | sed s/'#'//g | sed s~/~_~g | sed s/,//g`
branchName="${STORY_ID}_${NAME}"

git checkout -b "$branchName"
git push -u origin "$branchName"

Yes, my bash is horrendous, so if your eyes are melting from reading it, here’s what it does: you pass in the Pivotal story id, the script curls the story as JSON, extracts the name, and replaces any characters that would upset git. Then it creates a branch and pushes it to origin, setting it as the upstream.

Stick it in a file (e.g. grab.sh), put it somewhere in your PATH, and make it executable (chmod 755). For added automation, set it up as a git alias and hey presto! Guaranteed to work 99% of the time.


This gets the job done; the only downside is that the branch names sometimes end up too long. But that’s just an added incentive to keep the story titles short.
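For what it’s worth, the gnarliest bit of that script, extracting and sanitising the story name, can be sketched a little more readably using the same grep/sed/tr toolbox. The sample JSON below is just a stand-in for the real curl response, and the exact character set you need to strip may vary:

```shell
#!/bin/bash
# Stand-in for the JSON that curl returns from the Pivotal API.
json='{"kind":"story","id":123456789,"name":"Fix the #1 map/listing bug, fast"}'

# Pull out the "name" field, drop the JSON scaffolding, then squash
# anything that would upset git: spaces and slashes become underscores,
# hashes and commas disappear entirely.
NAME=$(echo "$json" \
  | grep -o '"name":"[^"]*"' \
  | sed -e 's/"name":"//' -e 's/"$//' \
  | tr ' /' '__' \
  | tr -d '#,')

echo "$NAME"   # Fix_the_1_map_listing_bug_fast
```

Same idea, just one transformation per pipeline stage, so it’s easier to see which sed or tr call is responsible when a story title breaks the branch name.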

Add [#story id] if you are committing to a story branch

We can (hopefully) assume that nobody starts an ordinary branch name with a number, so we can filter on commit:

#!/bin/bash
BranchName=`git rev-parse --abbrev-ref HEAD`
TicketNo=`grep -o '^[0-9]*' <<< "$BranchName"`
if [ -n "$TicketNo" ]
then
  git commit -m "[#$TicketNo] $1"
else
  git commit -m "$1"
fi

Now you can just type your commit message, and git will add the story number if you are on a story branch. That saves us 11 keystrokes per commit! How cool is that?


And just for completeness, here are the git aliases I am using, which go into .gitconfig:

[alias]
 cm = !commit.sh
 grab = !grab.sh

Enjoy the one-line git-Pivotal experience!
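One more option, sketched here but not battle-tested: instead of wrapping git commit in an alias, the same prefixing logic can live in git’s prepare-commit-msg hook, which fires even when somebody forgets the alias and types plain git commit:

```shell
#!/bin/bash
# Save as .git/hooks/prepare-commit-msg and chmod +x it.
# git passes the path of the commit message file as $1.
MSG_FILE=$1
BranchName=$(git rev-parse --abbrev-ref HEAD 2>/dev/null)
TicketNo=$(echo "$BranchName" | grep -o '^[0-9]*')

# Prepend [#story-id] unless the message already contains it.
if [ -n "$TicketNo" ] && ! grep -q "\[#$TicketNo\]" "$MSG_FILE"; then
  tmp=$(mktemp)
  { printf '[#%s] ' "$TicketNo"; cat "$MSG_FILE"; } > "$tmp"
  mv "$tmp" "$MSG_FILE"
fi
```

The trade-off: hooks live per-clone rather than in your PATH, so each team member has to install it, but nobody can forget to use it.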

Exupérianism: Improving Things by Removing Things

Last night this popped up on Twitter:

Last year, as part of migrating our main web stack to AWS, we created a set of conventions for things like connection strings and API endpoint addresses across our various environments, and then updated all of our legacy systems to use these conventions instead of fragile per-environment configuration. This meant deleting quite a lot of code, and reviewing pull requests with a lot more red lines than green lines in them – I once reviewed a PR which removed 600 lines of code across fifteen different files, and added nothing. No new files, no new lines – no green at all – and yet that change made one of our most complicated applications completely environment-agnostic. It was absolutely delightful.

When I saw John’s tweet, what instantly came to mind was a quote from Antoine de Saint-Exupéry:

“Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.”

So how about we adopt the term “Exupérian” for any change which improves something by making it smaller or simpler? The commit that removes 600 lines of unnecessary configuration. The copy-edit that turns fifteen thousand words of unstructured waffle into ten thousand words of focused, elegant writing. Maybe even that one weekend you spent going through all the clutter in your garage and finally getting rid of your unwanted lamps and old VHS tapes.

Saint-Exupéry was talking about designing aircraft, but I think the principle is equally applicable to software, to writing, to music, to architecture – in fact, to just about any creative process. I was submitting papers to a couple of conferences last week, and discovered that Øredev has a 1,000-character limit for session descriptions. Turns out my session descriptions all end up around 2,000-3,000 characters, and editing those down to 1,000 characters is really hard. But – it made them better. You look at every single word, you think ‘does it still work if I remove this?’, and it’s surprising how often the answer is ‘yes’.

Go on, give it a try. Do something #exuperian today. Edit that email before you send it. Remove those two classes that you’re sure aren’t used any more but you’re scared to delete in case they break something. Throw out the dead batteries and expired coupons from your desk drawer. Remove a pointless feature nobody really wants.

Maybe you even have an EU cookie banner you can get rid of? 🙂

Agile Tour London

Here’s a bit of a delayed blog post about my not-so-recent visit to Agile Tour London on 23rd October 2015. There isn’t going to be another one in London until late next year, so it’s great to be able to share what went on whilst it’s still relevant!

It was an interesting event; I had moments of “oh wow this is brilliant” followed by “what am I doing here?!”, learned some new tricks and refreshed some old practices.

In a nutshell I would consider it a success. A couple of topics covered that I particularly enjoyed and which had a definite impact were:

  1. The frequency of releases to customers: 

We all talk about how good it is to get early feedback from customers and to get new features out there as soon as possible, starting with a minimum viable product. But in that passion to deliver fast, what we sometimes fail to consider is how often customers actually want or need updates. Releasing too frequently can be more disruptive than constructive, especially if a new feature release disrupts a customer’s day-to-day job.

  2. How fast we really are going compared to the rest of the world:

One of the main objectives of Agile is to achieve continuous improvement. There are a number of key metrics to help measure success, such as velocity, cycle times…the list goes on. These help you see whether a team is improving and moving forwards, but if you have multiple teams, how do you know how they compare? To take it a step further, do you know if your teams are doing as well as the rest of the industry? Where do you stand?

As this was an interactive session there were a lot of ideas flowing around the room. One that I liked (mainly because it was mine) was an app which could record those measures and compare and score teams across companies. Another was cross-company agile workshops. While we were discussing the idea, the risk of these workshops creating a big overhead became apparent, but having thought about it since, real work could get done if they were properly planned and structured.

Those of you who are interested and are “clever” enough to search the internet will now find some funny looking pictures of me from the conference.

Merry Christmas and a Happy New Year.

Spotlight on… Future Decoded and Project Oxford

I was at ExCel earlier this week for Microsoft’s annual Future Decoded event. Future Decoded’s a combination of big-picture keynote speeches – Internet of Things, quantum computing, artificial intelligence – and focused talks on current and future Microsoft technology like ASP.NET 5, Windows 10, the new Roslyn compiler infrastructure. It’s always an excellent event, but something that really jumped out at me this year was a talk by Chris Bishop from Microsoft Research about Project Oxford, a set of AI services for dealing with speech, natural language – and human faces. As you can appreciate, human faces are a hugely important part of casting. From 10×8″ headshots to online portfolios, a performer’s photographs have always been an essential part of any sort of casting service, and Spotlight is no different.

We humans are sociable animals, and one of the things we are astonishingly good at is recognising each other’s faces – our parents, our friends, celebrities, even the grainy photocopies in the picture round of your local pub quiz. This capacity to detect and recognise faces is vital to our social groups and communities, and accurate face recognition has long been one of the holy grails of artificial intelligence research. Over the last decade, there have been some remarkable developments in the areas of computer vision associated with human faces.

First, there’s face detection – analysing a photograph and working out whether there are any people in it. Like this example from Apple’s iOS libraries:

When I visited Japan in 2007, Sony were proudly showing off a cutting-edge digital camera that would detect human faces and adjust the autofocus so that your subjects’ faces would be in focus – very cool, very innovative, very expensive. Eight years later, most of us have a phone in our pocket that can do face detection via a built-in camera, and if it doesn’t, Facebook will detect the faces when you upload your photographs.

So… what’s next? The really exciting thing – certainly from a casting perspective – is face recognition, and being able to measure similarity between faces. How many casting briefs have you seen looking for someone to play a historical figure, or brothers and sisters of a character who’s already been cast? Or those breakdowns looking for a “Kate Winslet type” or a “Michael Fassbender type”?

Among the technologies Microsoft demonstrated at ExCel on Wednesday was Project Oxford’s “similar face search” capability. It’s available via an HTTP API from Microsoft Research, but they’ve also put together this rather neat demo called TwinsOrNot.net. So I decided to kick it around a bit and see what it can do – and, since this is Spotlight, I’ve tried it out on a couple of castings to see how well Project Oxford thinks these performers matched the people they’re portraying.


That’s Morgan Freeman, who portrayed Nelson Mandela in “Invictus”; Tom Hanks playing Walt Disney in “Saving Mr Banks”; Helen Mirren playing Elizabeth II in “The Queen” – and Sacha Baron Cohen, who was in talks to play Freddie Mercury in a Queen biopic before it was confirmed in 2013 that he was no longer involved (the project is currently on hold).

Of course, making a fun online technology demo is one thing; actually turning this kind of technology into a usable casting tool is still some way off. For starters, the processing power involved in this kind of analysis is considerable – there are nearly a quarter of a million performer photographs in Spotlight’s database, so analysing our whole data set for similarity would mean comparing over thirty billion pairs of photographs, and Microsoft’s beta programme is currently limited to 5,000 requests per month. But not long ago, this kind of stuff wasn’t just expensive, it was actually impossible, and with the cost of computation halving every eighteen months, it won’t be long before this kind of research opens up a whole new range of possibilities for digital casting tools.
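A quick back-of-the-envelope in shell: the number of unordered pairs among n photographs is n(n-1)/2, which modern shell arithmetic handles comfortably:

```shell
#!/bin/sh
# Unordered pairs among n items: n * (n - 1) / 2
n=250000
pairs=$((n * (n - 1) / 2))
echo "$pairs"   # 31249875000, i.e. over thirty billion comparisons
```

At 5,000 API requests per month, exhaustively comparing a data set that size is clearly out of reach; real systems sidestep the pairwise explosion by comparing compact per-face feature vectors instead of raw photo pairs.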

In the meantime, head over to TwinsOrNot.net to try it out for yourself, or read more about it at Microsoft’s Project Oxford site.

Axure RP: Magic Lego for Development Prototyping

Once upon a time, prototyping web apps was easy. You’d draw every page, and then use a site map to demonstrate which links went where. Every page was static; nothing moved, there was no Ajax, no infinite scrolling, no drag’n’drop, and most websites were actually about as interactive as a Choose Your Own Adventure novel. Well, those days are gone. People expect more – richer UIs, better responsiveness, fewer postbacks and less waiting around for pages to load – and with libraries like jQuery, there’s really no excuse for not delivering code that satisfies those expectations.

Question is – how do you prototype a rich user interface? How do you draw a picture of something that won’t sit still? For me, that’s where Axure RP comes in. Axure is a “tool for rapidly creating wireframes, prototypes and specifications for applications and web sites”. It’s a commercial product that I’ve been using for many years now, and I’ve still not found anything that comes close for rapid prototyping.

In everyday use, it’s like a weird and wonderful cross between Balsamiq, Visual Basic, and Lego.

  • Balsamiq, because it’s easy to mock up static user interfaces by dragging buttons, inputs and form elements on to your page.
  • Visual Basic, because it’s easy to add behaviour to those elements using click handlers, events and dynamic controls.
  • Lego, partly because it’s fun, but mainly because it’s really easy to see that you’re looking at a prototype and not a finished product.

The game Populous was designed using Lego. I grew up with Lego. From a very early age, I learned to use Lego bricks to express ideas. I knew every single brick I owned. I could demonstrate an idea I’d had for a car, or a spaceship, or a robot, by assembling these reusable components into a prototype with spinning wheels and moving parts and a sense of scale and colour. Working entirely in plastic bricks actually becomes very liberating, because it stops you worrying about materials and finishes, and allows you to focus entirely on expressing ideas.

Have you ever shown someone a Lego house and had them say “Hey, that looks great! When can we move in?” No. People know a Lego house is not a real house. They appreciate that the point of a Lego – or cardboard, or clay – model is to demonstrate what you’re planning to do, not to show off what you’ve already done.

Now, have you ever shown anyone an HTML mockup of a web app and had them say “Hey, that looks great! When do we launch?” – and then watched them look crestfallen when you explain that you haven’t actually started the build yet?

Software is magical invisible thought-stuff, and the vast majority of our interactions with the systems we build are based on looking at pictures on a screen. It’s understandably difficult to tell the difference between actual working software and “pictures” of working software. The distinction between an HTML mockup and a completed web app isn’t nearly as clear as the distinction between a Lego house and a real one – and understandably so. HTML is HTML – whether it was hacked together late last night in Notepad or generated in the cloud by your domain-driven MVC application framework. The difference doesn’t become apparent until people actually start clicking things – by which point it’s too late; you’ve made your first impression (“wow, the new app is done!”) and it’s all downhill from there. Axure gives you a wonderful degree of control over this – whilst it’s perfectly possible to build really high-fidelity prototypes with colour schemes and typography, it’s also easy to override your styles by applying “sketchiness” to your wireframes – check out this example from a prototype of Spotlight’s authentication system. It’s the same wireframe twice – on the left, we’ve overridden the font and cranked the sketchiness up to produce a back-of-a-napkin look; on the right, the default typefaces give a much higher-fidelity appearance:


I think the hardest questions in software are “what are we doing?” and “are we done yet?”. I think good prototypes are absolutely instrumental in answering those questions, and any tool that can help us refine those prototypes without falling into the trap of “well, it looks finished” has to be a Good Thing.