I am Alex Maughan.
User experience designer and bullet-proof front-end developer.

26 February 2015

Basics, please


There’s a roadside restaurant about an hour outside of Cape Town, heading east along the coast. I’ve eaten there many times. The route is a common one for us, being a national road that eventually takes you to the Garden Route and the many holiday towns that sit anxiously along it during the off-season. It’s also the route we had to take to get to our wedding venue – a return trip we did more than a couple of times.

The food has always been good. Despite this, I’ve never really saved it as a noteworthy place in memory. My mind, I guess, was always on the road ahead.

Before we left for a short break last week, one of my wife’s friends made special mention of this very restaurant we’d eaten at many times. Neither of us had any strong opinions about the place, but maybe it had gotten a whole lot better? So, we made a point of stopping and getting something to eat, even though it was an odd place for us to stop on this particular journey.

The place had changed. They had done some significant decor remodelling, following an American-cum-Scandinavian pseudo-barn look. Retro light bulbs, exposed beams, and lots of chalkboards. The kind of aesthetic that seems to have been designed with Instagram filters in mind. In Cape Town, it would be a lumbersexual haven. Instead of hipsters with MacBooks, however, there were a few overweight South African farmers. Maybe some of them were actually lumberjacks, albeit of the hands-on-hips managers of South African black labour kind.

Despite being too late for breakfast and too early for lunch on a weekday, the place was mostly full. We were directed to our Table For Two, whereupon the frustrations of an obvious neglect of certain basics began.

The chair I was forced to sit on – the one my wife so adeptly avoided – was clearly the brainchild of a sadistic chiropractor who wished to have his revenge on a world that called him a quack. Its horribly uncomfortable design was made worse by the floor slanting away from the table. It felt like I was falling backwards into a hungry concertina.

After trying the whole stiff upper lip thing, instilled in me by my British parents, I eventually gave up and joined my wife on her side of our very small table, where she had the back-saving mercy of a good ol’ wooden bench. The two gentlemen on either side of us, also with smart wives who had secured the refuge of the level-floored bench side of the table, looked at me as if I had broken our communal bond of suffering. Both were waiting impatiently for their bills; they knew their rotation at the front was nearly over anyway, so best to just hold tight until orders from General Visa or Colonel MasterCard gave them respite.

By this point my eyes were reccy’ing the joint for a better table, and I soon spotted some happy customers taking their leave of a table that seemed to hold residence on level ground, and was mercifully surrounded by chairs that didn’t exude malevolence. We quickly grabbed our drinks and eating tools and claimed the dirty table for ourselves. Problem solved.

Problem not solved. My post-torture happiness was broken when the food arrived. The plate that acted as a divide between my food and the table fancied himself as a bit of a breakdancer. It had been so badly warped by heat that a bulging, ceramic tumour protruded from its bottom. Whenever I attempted the humble acquisition of a bite of food, the plate would spin round and flip up. I was left trying to figure out if I was involved in some kind of gustatory rodeo, or if I was sparring with the plate reincarnation of Muhammad Ali. The rumble in the jungle was actually my confused stomach, which didn’t understand all the delays.

This time I did indeed leave that restaurant I had eaten at many times before with a mental note. The note simply read, “Never go there again.”

In any type of service or product there are undeniable basics that can’t be ignored. As a newly unemployed citizen of a capitalistic world, when I choose to pay to have a sit-down meal at your establishment, the fundamentals of being able to sit with at least a modicum of comfort and place food in my fat face without the crockery attacking me are non-negotiable.

This indifference to basics is a longstanding lament that reaches into many experiences in my life outside of feeding myself, including my working career. Why are so many of us willing to neglect the basics while spending huge amounts of time and money on fancy non-essentials? Why do we think it’s okay to add more features and spruce up the aesthetics while certain fundamentals go uncared for?

The expense of creating an Instagram-friendly barn look – although certainly appreciated in and of itself – didn’t retain me as a customer. The decision to add more tables – even if it meant placing them on uneven ground – may have increased short-term turnover, but the two miserable fellows sat on either side of me and I are not going to be long-term customers. We didn’t care how pretty those retro lightbulbs were, and we certainly didn’t care about any of the other cool aesthetics while we were being gnawed at by the hungry concertinas you called chairs.

Maybe this indifference to basics is okay. It seems most businesses think so. Millions are spent trying to attract new customers with fancy peripherals to make up for the ones that left angrily. The strange thing is that the basics are so much cheaper to take care of than all that fancy market-prodding decoration. By all means, work on aesthetics and features that enhance experiences, but neglecting the basics while doing so is stupid. Well, unless you think customer churn is a good thing, that is.

28 January 2015

The cool kids sit at the back

function whatADesignerDo(skills) {
    for (var i = 0; i < skills.length; i++) {
        if (skills[i] == coding) {
            human.log("Come with us, we're going... er, well, we're going forward!");
            return pride;
        }
    }
    human.log("You've been left behind, you illiterate fool!");
    return shame;
}

A few months ago, a colleague told me about a friend, a designer, who had looked at what we were working on and said she regretted not learning to code. She said she really wanted to remedy this. The word regret and a compulsion to fix this so-called problem stuck uncomfortably in my mind.

A few weeks later I went out for drinks with a friend who’s a programmer. He told me that he, along with other programmers, had all agreed learning to code would soon be like learning to read – a literacy prerequisite. I strongly disagreed with the assertion that everyone should feel compelled to learn to code. Maybe it was the alcohol, or maybe I was still struggling to formulate why I disagreed at the time, but I don’t think I put forward a very strong argument.

Coincidentally, around the same time, a talented designer emailed me and asked if I could recommend some web development courses or resources. Probably to his annoyance, I ended up typing out quite a wordy response. I wasn’t all that sure if finding the right course was the most important thing when deciding to learn to code. Like the alcohol-fuelled conversation with my programmer friend, I was struggling to nail down my own hesitation to all this everyone should code stuff. So, almost as a way to help shore up my intuitive hesitation, my somewhat lengthy response instead mostly went on about how I had ended up being a designer who codes, hoping this would provide some context, and point to what I thought was the most important thing: one’s reason to learn to code in the first place.

Look at me, I can code.

As I pointed out in my email, the first real bit of meaty development I did was with ActionScript 3. Yeah, you know, that language native to that thing we don’t like talking about. The F word. Learning ActionScript was not a course-structured experience by any means. I did a lot of Googling, read many helpful blog posts, and I bought a couple of books too. Slowly but surely I was able to understand the formal API documentation, which at the beginning was pure gibberish to me. From then on things got a lot easier. This naturally led me on to other things, like JavaScript, XML, PHP, and MySQL (I know, more counter-vogue swear words there), which I used to power the content side of things for various Flash experiments I built. Most of what I built wasn’t for money. They were just experiments done late at night, outside of my normal day job at the time. 1

The real crux of this was that I made lots of things I eventually trashed without anyone ever seeing them. Not because I was ashamed of them – quite the opposite – they just weren’t ever intended to be broadcast. I did end up being paid to build dynamic SWF-based things that clients could update themselves, but my foray into the world of ActionScript was mostly about getting a kick out of seeing some plain text that little ol’ me wrote and compiled into a working interface.

Classes and class packages and polymorphism were exciting to me, because, er, well… because how cool is it to be able to reuse the same code in multiple places and in different ways? Okay, maybe not Johnny Cash cool, but still cool. My questionable definition of cool meant it was easier to push through the many head-banging frustrations. There was a lot of joy surrounding all those moments of frustration. I haven’t written a line of ActionScript in years, but it opened a world of development to me. It made API documentation and config jargon more accessible across a range of languages and development tools. I have since become a decent web developer thanks to something Steve Jobs called a “spaghetti-ball piece of technology”.

So, I’m not a programmer, but if someone asked, “Are there any web developers in the room?”, I’d certainly raise my hand. The strange thing is I don’t think I ever set out to be a web developer. I kind of, sort of, did set out to be a designer towards the end of university, after realising Journalism wasn’t the literary soothsayer I hoped it to be. I thoroughly enjoyed my other majors, Anthropology and Linguistics, both of which silently nagged at my subconscious, pointing to the huge potential in understanding the whole computer versus human thing.

The development side of things, on the other hand, simply came about through an anti-social and nerdy drive to build digital interfaces. It has since become more than that. Amateur interest has morphed into profession.

No code, no value?

I do think being a developer has helped me to be a better designer, but there are loads of other things that I could have learned instead of coding that would have similarly improved my design value. It’s a worry when we start to elevate coding above all the other wonderful things that make designers good at what they do.

It’s particularly worrying when designers start to feel obligated to learn to code, even if they take no pleasure from it. Doing something you don’t enjoy just because you feel you ought to is a potentially wasteful path to tread. It will make learning it that much harder. The frustrating times will be unbearable, and it will take time away from other things you could have been so much better at.

There are so many different things we all want to do, but we can’t do all of them, so it would be a shame to sacrifice something just because you felt obligated to do something else. This is not to discourage anyone from learning to code. I just think you need to be careful about your motivations when considering it, because people seldom become good at things they don’t like.

Those recent experiences I mentioned above – and my intuitive hesitation and scepticism towards them – were brought into greater clarity by Rian van der Merwe’s Left Behind: Designers Who Don’t Code Edition. It’s a critical-thinking gem, but I’ll just quote his conclusion:

So go forth, follow the design thing you’re most interested in. If that’s coding, awesome. If it’s how to best understand user needs and translate that into design systems, go do that. As long as you do it well, you won’t be left behind.

Despite admitting that coding has helped certain aspects of my particular designer mind, these aspects don’t come close to covering the numerous things great designers adeptly deal with on a daily basis. Human experiences are far from discrete. There’s no neat unit that measures experiences, despite what some may have you believe. There’s a reason fewer experience is grammatically incorrect. Accordingly, designing for those experiences requires some heavy stuff that coding just isn’t helpful with. By buying into the idea that you have to code to be a relevant and valuable designer, you’re allowing others to undermine this multitudinous collection of skills and knowledge. You’re essentially acquiescing to a belief that code itself makes experiences – that plain text with squiggly brackets and semi-colons is somehow the content of the message.

Coding != literacy

Asserting that learning to code is the new learning to read is a spurious comparison. Coding and literacy are two very different things. Literacy is about gaining access to information and knowledge. It’s not about knowing how to engineer an eReader, as difficult as that may be. Literacy is about being able to digest and comprehend the contents of that vessel. Like many others, I’m a literate reader of hardcopy books, yet I know nothing about printing presses.

Code is not the new literacy, it is the workplace – an immensely creative factory, if you will – where a new kind of literacy is being assembled. It’s a kind of literacy that experienced designers really do know a lot about without having to code.

Ugly origins that entrench the divide

I think what riles me most about “designers who don’t code will be left behind” or “not coding will soon be a mark of illiteracy” is this nasty, yet simple, underlying fact: Those pushing others to code with beliefs like these seem to do so because they fail to see value in people who can’t code. The assumption being that they are in need of upping their game. It’s a disgustingly arrogant judgement. As is typical of arrogance, it is blind to the vast collection of things outside its own little bubble of knowledge.

When this assertion is thrown at designers in particular, it perhaps has some origin in everybody thinking they’re a designer, which can diminish perceived value. A designer constantly faces inexperienced and uneducated criticisms from non-designers. It is the norm, and we’re very much used to it. We’re suitably grateful for it if there’s been some thought behind the critique, but the fact that we don’t see much noise being made for this alternative is telling: Developers need to learn how to design, or they’ll be left behind. There’s clearly a perceived value imbalance.

Great designs look obvious when completed. They just feel right and work for their particular context. I think this unfortunately leads some to label design as easy. Of course you would do it that way. It clearly makes the most sense. Now, can we just get on with the difficult part of coding this up? We forget how not-so-obvious it was before the design process kicked into gear. These assumed-obvious endpoints are easily obscured when you’re caught up in constructing eloquent code and data models that require deeply specific knowledge to understand. This clever, deep-thinking code stuff leaves very little thought, time, and energy for end-users, who are easily forgotten or completely misunderstood.

It’s not just end-user needs that designers of various disciplines have to deal with, which is a complex minefield all in itself; there are also business goals, stakeholder agendas, and loads and loads of politics in which priorities, roadmaps, and best practice approaches are fought over behind a thin veil of civility. Great designers creatively keep things on course, tending to that flimsy civility with the use of many tools not found within the domain of coding.

Some may argue that designers learning to code will help narrow the divide between design and development. They may point to front-end developers as a good example. This would be a misunderstanding of the underlying nature of the divide. Front-end developers can act as translators, but that doesn’t necessarily make each and every one of them a full-time user experience designer or information architect. It doesn’t mean a front-end developer will magically have expertise in graphic design, and it certainly doesn’t guarantee she somehow has the time to do all these things, even if badly.

Attitudes and processes are what break down unhealthy divisions, not code. Thinking that someone has reduced value because they can’t code violates the correct attitude requirement.

Save me a seat at the back

So, if some cruel and self-fulfilling prophecy brings about a future where the definition of literacy changes completely, and designers and other humans who can’t code get branded as illiterate and are left behind – whatever the hell that actually means – then I’d be more than happy to go sit at the back and be illiterate too. There’ll be all sorts of fun and mind-bending things to learn back there, of that I am sure.


  1. One of the things I got quite into was creating full-screen, adaptive interfaces. This was before the term Responsive Web Design had even been coined. Like many other Flash developers at the time, I was playing around with content choreography long before HTML websites started showing lots of those @media query things in their CSS. I was squeezing, shuffling, and rearranging content modules along a continuum of screen sizes with ActionScript window listeners. It meant the responsive web, from a design perspective at least, wasn’t all that new to me when it became a thing. I also learnt a lot about performance during this time, never allowing my compiled SWFs to go over 100kb, taking advantage of Flash Player’s intelligent vector rendering as well as lazy loading other assets to hugely reduce the upfront payload. This is something most contemporary JavaScript-heavy apps really struggle with, many of them failing dismally on even mediocre lines. I know, that intelligent Flash Player I refer to gobbled memory, and the whole concept of a compiled SWF broke the HTTP protocol. It certainly wasn’t progressively enhanced. But certain unfounded myths about Flash nonetheless still amuse me.
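For the curious, the listener-driven choreography described above translates fairly directly to plain JavaScript. This is just an illustrative sketch – the breakpoints and function names are my own inventions here, not the original ActionScript:

```javascript
// Pick a column count for a given viewport width.
// Breakpoints are illustrative, not from any real project.
function columnsFor(width) {
    if (width < 600) return 1;   // narrow: single stacked column
    if (width < 1024) return 2;  // mid-size: two columns
    return 3;                    // wide: full three-column layout
}

// Re-choreograph content whenever the window is resized,
// much like an ActionScript Event.RESIZE listener did.
function choreograph() {
    var cols = columnsFor(window.innerWidth);
    document.body.setAttribute('data-columns', cols);
}

// Only wire up the listener in a browser environment.
if (typeof window !== 'undefined') {
    window.addEventListener('resize', choreograph);
    choreograph(); // lay things out once on load
}
```

CSS @media queries later made this kind of thing declarative, but the underlying idea – a layout chosen along a continuum of widths – is the same.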

7 January 2015

Facebook & lesser-spotted intrinsics

Some sheep

A common approach to understanding our motivations, on a very broad and superficial level at least, is to first make a separation between intrinsic and extrinsic forces. Extrinsic motivations are, by and large, all those socio-economic factors that drive you to do (or not to do) something, such as financial compensation or a congratulatory high-five. Intrinsic motivations are all about doing something because you derive value from it for its own sake, such as sitting by yourself and drawing something for which you have no intention of being paid or congratulated. You just do it because you enjoy it, regardless of anyone else seeing it.

That sounds pretty straightforward, right? Well, I’ve always struggled with this simple demarcation. The understanding of intrinsic motivations, in particular, has always been a bit of an anthropological and philosophical problem for me.

Most of what I’ve read that specifically zeroes in on the whole intrinsic versus extrinsic thing tends to paint a picture of two, neatly distinct forces. These motivations may come together – joining forces, if you will – in any given circumstance, but they are still ultimately described to have the ostensible ability to exist independently of each other.

However, thinking about it a bit, you’ll admit that intrinsic motivations can quickly become muddled with extrinsic ones, while the opposite is not necessarily the case. In other words, extrinsic reasons can regularly hold their own without being influenced by intrinsic ones. Intrinsic motivations seem less able to maintain the same kind of autonomy.

Consequently, one can’t help but feel that intrinsic motivations are fragile and rare. How long will a person do something purely due to intrinsic motivations before that thing gets reappropriated, or even discarded, by extrinsic forces? As a child grows up, intrinsic motivations get culled to make way for more socially and economically beneficial motivations, or they get realigned to serve these motivations.

For example, a person who enjoys drawing, just for the sake of it, may become a professional illustrator whose motivations become strongly driven by other people appreciating her work. So much so that enjoyment for the sake of it eventually gets replaced by enjoyment in others’ approval. Even in isolation, after a long period of outside-looking habituation, how can she not be influenced by thoughts on what other people will think about her personal doodles?

I don’t wish to assert that this fragility and subsequent rareness necessarily gives intrinsic motivations greater value than extrinsic ones, but I do believe it’s a shame when certain contexts promote their wholesale slaughter.

The dominance of extrinsic motivations

I can think of many circumstances where extrinsic forces operate independently of intrinsic motivations. Smoking for the first time due to peer pressure, even though it makes you feel nauseous, is a decent example. That, or going to a job you hate every day because you need the monthly pay cheque. Many examples come to mind, none of which proffer any direct, isolated value to the individual, but all of which engender strong socio-economic (extrinsic) factors that nonetheless motivate the person to do them.

The intrinsic side of things is a tad more tricky and elusive. It’s a struggle to identify intrinsic forces that aren’t, in at least some way, the result of extrinsic foundations. Most intrinsic carrots eventually become the manufactured product of powerful extrinsic factories. You may eat that carrot at home, in isolation from others, but the reason you’re eating it is still very much thanks to the social farm and factory chugging along outside. 1

I could even turn this on myself. Do I occasionally motivate myself to work through dense intellectual material that may have no manifest socio-economic application because I enjoy the process of turning the creaking cogs of my mind? Or, do I do it because I enjoy the way in which this type of subject matter and thought processes positions me within society? Am I driven by forces that ultimately elevate my socio-economic standing? Put more simply, do I just want to be considered clever by others?

This is a hard one because, conversely, such activity often undermines my short-term socio-economic value within popular society by earmarking me as a bit of a nerdy and liberal outsider. It can, oddly enough, result in me being less able to freely socialise with others. But I continue to enjoy it regardless. So, my motivation could, perhaps, be explained by a need to delineate myself from others – yet again being the ugly consequence of that ever-present othering we seem so programmed to follow.

Another example is exercise (as in the physical exercise of one’s body). Many people believe they exercise because of an intrinsic motivation to improve their physical wellbeing – that being healthier in the long run, as well as the endorphins released in the short run, increases happiness. Most, I’m sure, would admit that there are external forces at play, such as enjoying sport-related camaraderie with friends, or having a more attractive body. But a large contingent wouldn’t be all that ready to admit that there may be a total lack of intrinsic reasons altogether. There would, I suspect, be strong objection to such a claim, even though the supposed intrinsic reasons start to fall apart as soon as you change the socio-economic settings around them even just a little. Take away their trendy gyms, sexy exercise gear, and tasty smoothies, and suddenly exercise is, in isolation of other factors, no longer enjoyable. Walk where? No ways – that sounds horrible.

It is for this reason that working classes around the world have found the modern exercise routines of the middle class so absurdly laughable. I’ve similarly found it amusing that the parking bays closest to the gym entrance are always the most prized by patrons supposedly in search of exercise.

I’m not criticising trendy gyms, sexy exercise clothes, tasty smoothies, or SUVs parked close to the gym door. Rather, I’m questioning whether there are any true intrinsic exercise motivations in that whole setup. It’s hard not to be sceptical. Doing only certain types of exercise, because it is externally fashionable to do so, within very specific confines, surely means that your motivation to exercise isn’t all that intrinsic, if at all.

This example reminds me of an episode in the sitcom Friends where Rachel is embarrassed by Phoebe’s wacky style of running in public. The scriptwriters clearly saw the purity and value in what is an intrinsic motivation for Phoebe to enjoy herself. In life outside of TV scripts, some may be open to doing things like this, but I think very few would do it simply because they intrinsically enjoyed it like Lisa Kudrow’s character. I fear they’d do it just to show others how wacky they can be (and then enjoy the attention this brings). They’d record their funny running style and upload it onto Facebook in the hopes of getting a Like spike. On that note…

Facebook gets the whole extrinsic domination thing

Whether it was the tendency for any new technology or space to eventually become a perpetuation of greater socio-economic norms, or whether it was a conscious decision to capitalise on the ubiquitous power of extrinsic forces, Facebook has reaped billions from working with this irresistible lure.

Facebook openly focuses on the extrinsic. They seem to understand that intrinsic forces are, much of the time, nice but long-dead self-evaluations we hang onto in order to feel unique and special in a world overweight with other humans. Nothing you do on Facebook is ever just for you. Every action is a socially and economically laden undertaking, and, because we are so predominantly driven by extrinsic factors, we can hardly get enough of it, making Facebook the most actively engaged site on the web.

Zuckerberg constantly bangs on about sharing. His public relations line is very simple: Facebook enables sharing on a scale never before allowed, and this sharing empowers individuals who were previously ignored by old media. I’d prefer to describe Facebook as a money-making thing created on a relatively new other thing with a primary modus operandi of taking advantage of our overwhelming extrinsic tendencies. Using the traversing power of the web, it has since pumped steroids into an already bulging accumulation of external motivations. Facebook has given us a bottomless cup of extrinsic intoxicants no matter where we are, 24/7, and delivers it to and from various socio-economic permutations around the globe. It’s the information-sharing equivalent of the trendy gym example, just on a much grander scale.

Extrinsic forces consequently dominate more than ever, with intrinsic rarities being increasingly exposed to conditions they can’t survive. Additionally, it starts to shine a questionable light on the few intrinsic forces that may remain. It makes me think that intrinsic forces may now be a myth we hold onto – a hopeful stand in defiance against our flailing agency. It makes me wonder if trying to identify intrinsic versus extrinsic motivations is even useful anymore.

All this extrinsic pushing and pulling isn’t all bad, but I think we should be a little more aware of it, especially in an age where we’re raised to believe that we’re all little unique snowflakes and all our goals are waiting for us at the end of self-actualisation.

Just an observation, I guess

You may feel that I’m painting an absurdly bleak picture devoid of agency – rambling on like an armchair sociologist about a world in which we’re not far off from being sheep that are herded by our own outward-looking neuroses. But to reiterate an earlier footnote, I’m honestly not this deterministic at all. I’m just concerned about certain contexts in which an existing imbalance is made even stronger.

Similarly, the fact that I personally stopped using Facebook a few years ago may also give you the impression that I have chosen to paint Facebook as a big ol’ baddy because its entire existence operates on an even more extensive and homogenised version of the existing extrinsic dominance. This is not entirely true either. I’m fully aware that taking advantage of extrinsic forces is at the heart of most businesses, and we’re all free to stop using Facebook at any time.

So what is this all about? Mostly, I’d prefer to think that this is about pointing to underlying truths. We too often fool ourselves into misunderstanding things, like the success of Facebook and why we use it, in ways that advance misleading reasons, creating smoke screens behind which particular values are elevated over others.

The prioritisation of values isn’t bad (we all have our own value biases, after all), but this particular, smoke-screened prioritisation comes insidiously packaged with the misguiding pretence that the de-prioritised values were not in fact undermined. It tells you that you’re still primarily going to the gym to exercise, even though you can’t help yourself from parking as close to the entrance door as possible, getting that smoothie, and looking sexy in your new training gear.

There’s nothing wrong with smoothies, and there’s nothing wrong with jiggling your well-attired arse in public. But clearly something’s a little off if you think this is purely about the cardiovascular exercise of your muscles and respiratory system. Similarly, I think it’s absurd to accept Zuckerberg’s claim that Facebook is just about sharing. If it is, they’ve certainly stretched the definition of the word to tenuous limits. The web is already fundamentally about sharing; Facebook just managed to amplify the extrinsic motivations involved to addictive, self-serving levels. I fear this increases the rate at which purely intrinsic motivations are being killed off or unknowingly re-appropriated.

I guess by at least being a little clearer about our driving motivations we can start to focus on some of the more fundamental aspects in the things we do and say (if we want to, that is), rather than using false premises to try to decorate the banal superficialities we now lump on top of those fundamental aspects. Maybe it will make conversations just a little bit more interesting and a tiny bit more honest. Maybe it will give us a gentle push away from the drowning homogeneity every now and then. But, then again, maybe it won’t.

Maybe you could have better spent the time used to read this by checking your Facebook feed instead. I don’t know, I guess I’m too biased to say for sure. I just hope you don’t unknowingly misplace any more of your intrinsic motivations in the process, because I truly believe it’s a shame to lose them.


  1. This is not to say I hold a deterministic viewpoint. I favour the post-structuralist belief that there are too many power mutations and feedback loops going on within society for true determinism to be accurate. Rather, the old chestnut that no man is an island comes to mind. Only I fear we increasingly fool ourselves into thinking we’re on an island when we’re not, blindly allowing our intrinsic motivations to wash away in an unseen sea.