Every user interface designer is familiar with the procedure to some extent: to find out what a user interface needs to look and behave like, it’s certainly a good idea to create a prototype and evaluate it with potential users. Users will tell you what’s still nagging them and therefore should be improved before coding starts. So, at the beginning of any UI design process everything is about change – you create a prototype and already expect it to require modifications before it works well. As you – and most likely your client, too – want changes to be as cost-efficient as possible, you are better off choosing a change-friendly prototyping method or tool. This is especially true in the early stages of a project, when your ideas of potential solutions are still rather vague. In this early phase, you often don’t even know the exact problem you are hunting a solution for. You are still analyzing more than you are designing.
In this respect, to work change-friendly and cheap, it’s wise to start your prototype roughly (maybe as a paper-and-pencil sketch) and make it more sophisticated the better you understand the requirements – that is, what users need (or what they know they don’t need) and to what extent your client wants to give them what they need (yes, these are not always in line). Finally, when a prototype has reached a certain level of expressiveness, it can even serve as a “living specification” that tells developers how the front end should look and feel. Such prototypes are sometimes referred to as high-fidelity prototypes. As soon as developers know exactly what to code, your high-fidelity prototype can die with dignity. It has no future. But wait… or has it?
Though the described approach sounds perfectly plausible and indeed makes sense in many situations, it needs to be slightly reconsidered in the context of new UI paradigms.
A new challenge
Think of what is nowadays commonly called a “natural” user interface (NUI). These fancy multi-touch and similar playgrounds are on their way to replacing – or at least augmenting – our “good old” graphical user interfaces (GUIs), just like GUIs once replaced command-line interfaces (CLIs). They surely lower the burden for users to interact with a system, as everything is more direct than with a mouse and a pointer.
UIs become natural – well, almost
With multi-touch apps, you just tap directly on what you want to manipulate, or perform a gesture on it, and there you go. And NUIs are not solely about multi-touch: a speech recognition system is a NUI, too. You just say what you want and the system does it for you. And let’s ignore for a while that, although they are called “natural” user interfaces, they are still far from being really natural – they just feel more natural than before. With multi-touch systems, most gestures are rather implicit and you don’t get any convincing tactile feedback so far. And talking to a machine, as with speech recognition systems, can be pretty embarrassing. Nonetheless, NUIs are brilliant stuff and they will conquer the world.
A nightmare (to come) for developers
Unfortunately, what is a brilliant thing for the user is a nightmare for the developer. Admittedly, we are still ramping up the hype cycle (especially regarding multi-touch technologies), so even programmers are so fascinated by what’s possible that they willingly take on the extra effort and burn the (extra) midnight oil to get the job done. However, this enthusiasm won‘t last forever – NUIs will become common, and so will their implementation. What seems to be a problem affecting developers exclusively is in fact a problem for UI designers, too: what’s hard to develop on the front end is most often hard to design, prototype, and specify as well.
Multi-touch UIs in particular are delicate: there are so many nifty details influencing the user experience that it takes a lot of effort to capture them comprehensively. Which gesture triggers what action? How many fingers should be used to perform a certain gesture? How fast do these fingers have to move? How does the object or scene being manipulated behave over time to keep up a proper cause-and-effect interplay?
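To make that concrete, here is a minimal sketch – written in TypeScript against standard browser pointer events, deliberately independent of any particular prototyping tool – of what recognizing just a single two-finger horizontal swipe might involve. All names and threshold values are illustrative assumptions, not taken from any specification.

```typescript
// Hypothetical sketch: even one gesture forces many small design decisions.
// All names and threshold values below are illustrative assumptions.

interface SwipeConfig {
  requiredPointers: number;  // how many fingers must perform the gesture?
  minDistancePx: number;     // how far must they travel?
  minSpeedPxPerMs: number;   // how fast must they move?
}

const config: SwipeConfig = { requiredPointers: 2, minDistancePx: 80, minSpeedPxPerMs: 0.3 };

const active = new Map<number, { startX: number; startTime: number }>();
let maxSimultaneous = 0; // peak finger count during the current gesture

function onPointerDown(e: PointerEvent): void {
  active.set(e.pointerId, { startX: e.clientX, startTime: e.timeStamp });
  maxSimultaneous = Math.max(maxSimultaneous, active.size);
}

function onPointerUp(e: PointerEvent): void {
  const start = active.get(e.pointerId);
  active.delete(e.pointerId);
  if (!start || active.size > 0) return; // wait until every finger has lifted

  const distance = e.clientX - start.startX;
  const elapsedMs = Math.max(e.timeStamp - start.startTime, 1);
  const speed = Math.abs(distance) / elapsedMs;

  const recognized =
    maxSimultaneous === config.requiredPointers &&
    Math.abs(distance) >= config.minDistancePx &&
    speed >= config.minSpeedPxPerMs;

  maxSimultaneous = 0; // reset for the next gesture

  if (recognized) {
    console.log(distance > 0 ? "swipe right" : "swipe left");
    // ...trigger the action and animate the manipulated object, so the
    // cause-effect interplay the user perceives stays intact
  }
}

document.addEventListener("pointerdown", onPointerDown);
document.addEventListener("pointerup", onPointerUp);
```

Every one of those magic numbers is a decision the user will feel immediately – and typically one that can only be tuned through repeated prototype-and-feedback iterations.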
Prototyping becomes more expensive…
As long as you concentrate your design efforts on a simple photo-sorting application, you don’t run into problems. You can easily prototype this experience by – well – sorting real photos. However, multi-touch applications will grow more complex, gestures will occur in greater variety and (hopefully) help to solve more realistic problems. For the UI designer, this means facing an unfamiliar challenge: it’s almost impossible to cheaply prototype – and thereby learn about – the experience of solving a complex problem with gestures if there’s no suitable analogy in real life.
Of course, you can prototype parts and in this way approximate a decent design, but you will always have that nagging feeling of having missed something, uncertain whether you really convey what you actually intended. So, as a UI designer, in order to design amazing interaction experiences for NUIs, you have to be prepared to rack your brain even more, perform more and smaller prototyping-feedback cycles, and invest more time and expertise in creating an expressive prototype. Christopher Alexander once said:
“Things that are good have a certain kind of structure. You can’t get that structure except dynamically. Period. In nature you’ve got continuous very-small-feedback-loop adaptation going on, which is why things get to be harmonious. That’s why they have the qualities we value. If it wasn’t for the time dimension, it wouldn’t happen. Yet here we are playing the major role creating the world, and we haven’t figured this out. That is a very serious matter.”
Alexander, as a building architect, has nothing to do with GUIs or NUIs or the like. Still, there is so much universal truth in his words that they can smoothly be transferred to the domain of user interface architecture: the more natural and harmonious a UI designer wants a user interface to be, the more time and the more iterations it will take to arrive there. Good designers may require fewer iterations than poor ones, but they still won’t make it without any (unless they make a one-to-one copy of what nature itself already offers). Getting back to prototyping, this leads to a simple piece of deductive reasoning: creating good UI designs in the future will require more feedback loops; more feedback loops make prototyping more expensive; and the more expensive prototyping is, the higher the barrier to throwing prototypes away.
…too expensive to throw things away
So rather than throw it away, what else can you do with your prototype? Of course, you can evolve it over time, which means you always build on what you already have and just add or modify the parts that – through feedback – you learned are missing or in need of a change. Even after development has finished, you can put the prototype on hold and get back to it later, whenever necessary. And let us not deceive ourselves: even when development has finished realizing the right requirements, the day will come when one of them becomes invalid, so that you need to dig up your almost-forgotten creation and make the changes.
But how effective is it really to evolve prototypes as described, and is it enough to face the challenges that modern UI paradigms provoke? To answer this question, it is a good idea to examine what a modern prototyping tool can actually support the UI designer with. After all, what cannot be realized with a prototyping tool can hardly be an ingredient of an effective practical approach.
In the second part of this two-part article, I will shed light on how this works using Expression Blend, as – at this time – Blend offers the largest set of possibilities we have seen so far for squeezing the most out of a prototype.