
A log of Alex Maughan’s thoughts.
Mostly focused on design, with scratchings below the superficial. Thirdly, he also wrote this intro.

14 September 2014

The diminishing returns of being stubborn

Copernican heliocentric system

In 1922, Alexander Friedmann published an article in the journal Zeitschrift für Physik, where he proposed three different ways in which the universe could be understood in response to a nascent concept, the cosmological constant. Put simply, the cosmological constant was a bit of scientific and mathematical hackery contrived by Einstein in order to maintain an unsubstantiated belief in a static and eternal universe.

At this time, Einstein was riding a huge wave of scientific rock stardom, having recently published his general theory of relativity, which had officially replaced Newtonian physics as the most accurate way of understanding gravity and how it governs the movement of objects in space (planets, moons, stars, etc.).

Despite replacing Newtonian gravity, Einstein’s gravity and his new, flexible concept of spacetime still predicted a universe heading for ultimate disaster. His equations implied that the attraction between every planet and star would cause all of them to eventually converge into a cosmic pile-up of epic proportions. While Newton chose to conclude that this almighty crash was being held back by the almighty himself (i.e. God), Einstein sought a less religious explanation by introducing the cosmological constant, an arbitrary number indicating the exact force of anti-gravity required to keep the universe static, holding it steady for eternity.

Friedmann proposed that the universe could instead be understood in three different ways:

  1. A high-density universe in which the above-mentioned pile-up would eventually happen.
  2. A low-density universe in which objects in space were slowly but surely moving further away from one another, creating an ever-expanding universe.
  3. A perfectly dense universe that sat in perfect symmetry between 1 and 2, in that an initial separating push had occurred with just the right amount of force to create a stabilising balance against the pull of gravity.

Friedmann’s article was a mathematical argument. Einstein took no time in shooting it down, even going so far as to publicly declare that Friedmann’s calculations were flawed. Friedmann’s calculations were in fact correct, but being denounced by the scientific rock star of the age meant his work was given no further consideration. Friedmann died only three years later after contracting an illness. He died away from his wife and child, unknown to most in the scientific community. Those who did know him saw him in a tarnished light, thanks to Einstein’s condemnation.

To make matters worse, Einstein would go on to prematurely dismiss yet another great mind. Georges Lemaître was a Belgian priest and cosmologist who had impressed the likes of Eddington with his mathematical prowess after a short stint at Cambridge. Lemaître studied theoretical physics in Belgium as well as spending some time at Harvard and MIT. Knowing nothing of Friedmann’s work, he had begun developing a theory of the universe that unwittingly reiterated Friedmann’s idea of a non-stationary universe. Lemaître proposed that it all originated from an exploding primeval atom.

In 1927, at a conference in Brussels, Lemaître approached Einstein with his theory. After pointing out that Friedmann had already proposed something similar a few years before, Einstein once again dismissed the idea, saying, “Your calculations are correct, but your physics is abominable.” 1 Einstein had learned not to spuriously accuse proponents of a non-stationary universe of being mathematically incorrect, but persisted in his own beliefs by dismissing the idea all the same.

Thanks for the amateur history lesson, Alex, but what the hell is the point of all of this? Well, as you could have guessed from the title of this post, this little snippet of scientific history helped solidify some thinking I’ve been doing around stubbornness. You see, Einstein was blessed with a brilliantly stubborn attitude that helped him realise radical shifts in science, but he was equally cursed by this same stubbornness later in his life. By refusing to even consider that his own subjective, unsubstantiated belief in a stationary universe could be wrong, he acted as a horrible impediment to two very promising scientists. He was in many ways a modern equivalent of the Catholic church during the days of Copernicus and Galileo. This may sound hyperbolic, but I wager Friedmann and Lemaître would consider it spot on.

Now, I don’t for one second want to give off the idea that the likes of Friedmann, Lemaître, Einstein, Copernicus, and Galileo have any real equivalency to what I do on a daily basis. One of the reasons I enjoy reading about scientific history is that it consistently provides a jolting perspective on how laughably inconsequential I and the work I do are – a humbling reminder that I really shouldn’t be so emotionally invested in the efficacy of a website interface. That being said, I think I can say that stubbornness has had a similar(ish) part to play in my life, as it has with others around me.

With most things, it’s a general stubbornness that gets you started and keeps you going. You have to be stubborn against your own doubts, against your own inability to learn fast enough. Additionally, all knowledge domains come pre-packed with contemporary trends, wrapped and sold on by fashionistas who instruct you on how that domain should be thought about, spoken about, and executed in practice. Your stubbornness, therefore, can also be an isolating influence – the thing that makes you stick your head down and mumble “bah humbug!” at outside influences.

There’s a focused appeal to hunkering down in your own mind. It becomes a stubborn leitmotif day after day, year after year; periodically building in complexity, which crescendos before crashing down into moments of calm; a temporary y-axis flatline. It is in this calm that I have made my best and most sudden jumps in improvement – a beautiful irony of metaphors, as if the flatline provided the trampoline I needed. This improvement is all, in my opinion, thanks to stubbornness.

But there’s that unfortunate downside of stubbornness, which can impede your own progress after a certain level of competency has been reached. The more confident I’ve become in a certain area of knowledge, the more subjectivity starts to masquerade as objectivity. It is easy to fall into the trap of knowing, instead of admitting that you remain more in a state of not knowing than knowing. This false state of knowing is socially and economically enticing. Pay grades and social standing are structured around it.

Einstein kind of recognised this problem when stating, “To punish me for my contempt for authority, Fate made me an authority myself.” In all his spiteful arrogance and demanding authority, Newton also admitted to the vastness of not knowing in the face of only a tiny bit of knowing,

I do not know what I may appear to the world, but to myself I seem to have been only a little boy playing on the seashore, and diverting myself now and then in finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay undiscovered before me.

Being stubborn can help you uncover little gems of knowledge (or really big ones if you have the amazing talent and intellect of all of those scientists mentioned above). It can drive personal discoveries with an exacting joy at getting better at something. But sometimes instead of gems, that same stubbornness can result in you polishing your own little knowledge turd, trying to rub off the fecal imperfections inherent in that knowledge instead of simply admitting that you’re ultimately shining a piece of shit.

I think I need to be more wary of the diminishing returns of stubbornness. Letting go of my precious little turds may be easier said than done, but I guess I have to try.

Notes:

  1. This and other quotes in the post are taken from The Big Bang by Simon Singh.

17 August 2014

Art and technology

In the documentary Tim’s Vermeer, Tim Jenison, a complete newbie to painting, paints Johannes Vermeer’s The Music Lesson based on a real-life reproduction of the painting’s composition, which he painstakingly constructs himself in a warehouse. He does this using an optical setup of lenses and mirrors, built on the foundations of what many believe artists like Vermeer used back in the day.

Jenison believes he’s re-discovered the means by which Vermeer was able to create his unbelievable, photograph-like paintings. His method, although requiring creative ingenuity to think up and build, as well as huge amounts of patience and perseverance, transforms the act of painting itself into a mechanical and objective process – it transforms the human doing the painting into a non-subjective machine.

Whether or not his painting is proof that Vermeer’s long-acclaimed genius is actually mechanically replicable is not something I feel a need to explore. Rather, my interest lies more in our culturally emotive reactions to this debate. The very strong feelings so many of us have against the idea of reducing art to a technologically driven and mechanical process is significant (to me, at least).

Tim’s documentary, along with various books that advance the theory of artists using optical devices, is criticised for missing the point of Art: that by focusing on technological trickery, one naively misunderstands what it is that makes Vermeer one of history’s most celebrated artists.

So what does it lack? The film implies anyone can make a beautiful work of art with the right application of science. There is no need for mystical ideas like genius. But the mysterious genius of Vermeer is exactly what’s missing from Tim’s Vermeer. It is arrogant to deny the enigmatic nature of Vermeer’s art.

I think the Art-loving outcry emanates from how we choose to define and, in turn, place cultural importance on Art, with a capital A.

Art is a brilliantly elusive concept. There’s much confusion and flexibility around its definition, yet most of us have very emotive and philosophical affiliations to it. It seems to transcend ideas around aesthetics, and scoffs condescendingly at the proposal that it is merely form without function. Art is a prevailing means of commentary and expression. Most of all, it is regularly seen as the end result of genius. Because of this, only a special few are celebrated from one generation to the next.

I really don’t think I’m imagining the very real rift that consequently exists in the minds of many, whereby Art is seen as a special human-only artefact, while technology is the clever, but otherwise ugly, Frankenstein child of our increasingly mechanised drudgery. The two are considered by many as immiscible, or rather that technology detracts from the true essence of Art; that it somehow robs Art of its expression, diminishing its human commentary.

In The Story of Art, E.H. Gombrich writes,

There really is no such thing as Art. There are only artists. Once these were men [sic] who took coloured earth and roughened out the forms of bison on the walls of a cave; today some buy their paints, and design posters for hoardings; they did and do many other things. There is no harm in calling all these activities art as long as we keep in mind that such a word may mean very different things in different times and places, and as long as we realise that Art with a capital A has no existence.

There are three important things to take from this very popular introduction to Art (I’m persisting with the capital A, for reasons soon to be explained):

  1. The only thing that defines something as Art is whether a human artist produced it.
  2. There are many different definitions and these are mutable over time and geography.
  3. Art with a capital A does not exist, apparently.

Points 1 and 2 reaffirm my preceding assumptions.

Point 3 is where things become wonderfully contradictory. You see, Gombrich has chosen to denounce Art with a capital A because it brings with it too many intellectual pitfalls, which he’d rather avoid. By saying Art doesn’t exist, he avoids having to define it. This is fine, I guess, but as I’ll now try to explain, it is, I’m afraid, a contradictory cop-out.

It is impossible for Art not to exist. If there were only art with a lowercase a, then any person who calls herself an artist should be celebrated as one. I’m personally okay with this, but it seems the Art world, including old Gombrich, isn’t (my own emphasis added):

Praise is so much duller than criticism, and the inclusion of some amusing monstrosities might have offered some light relief. But the reader would have been justified in asking why something I found objectionable should find a place in a book devoted to art and not to non-art, particularly if this meant leaving out a true masterpiece.

Okay, so if I’m following all of this correctly, it seems there are and there are not things worthy of being called ‘art’. I’m sorry, but this is when art starts to eat all of its greens and grows into a big and healthy Art – it most certainly comes into existence when you say there is ‘art’ and ‘non-art’.

Gombrich’s hugely circulated book on the subject operates on the epistemological foundation that his subject matter can only be addressed in light of the humans who produce it and that, as humans, we define it differently over time and space. This democratic foundation quickly crumbles under contradictory conservatism, however, because he clearly reaffirms well-known Art-type criticisms by deeming only some productions worthy of this categorisation.

Logically speaking, one is left confused by a book framed as an introduction to a select group of celebrated people who produce something that supposedly does not exist, but has simultaneously existed in various forms since troglodytes started vandalising their caves.

Considering Gombrich’s work is such a huge bestseller (the copy I own at home is a whopping 15th edition), is it safe to assume his framing of, and thinking around, Art is shared by a good many of us? I venture to think so. Most of us seem to share this judgemental and contradictory understanding of Art, even if we try to democratise the way we talk about it at times. I think the reason is that it fundamentally comes down to us using Art as a way to elevate ourselves; a means by which we point at our human uniqueness in relation to other animals and, in more recent times, as a cultural device to argue for our value over machinery and the ever-booming ingenuity of technology.

What is a bit concerning is that by creating this dichotomy between Art and technology, we seem to do so with the assumption that technology is anti-human. The harsh criticism of Tim Jenison’s painting of a Vermeer fails to recognise that here is a man of glorious creativity and talent, who is by all accounts unbelievably dedicated to any undertaking he pursues. What he achieves is, to my mind, just as worthy of celebration as Vermeer’s extraordinary talent.

There are Vermeers everywhere. Some use paint, some use hammers, some use keyboards – but all of them use technology in some way or form. Something as simple as a paintbrush is technology. There’s nothing more human than the technology we invent and use to do great things.

As long as we don’t lose sight of the inherent humanity in our technology (which we unfortunately do from time to time), I think we’ll be okay. Art can continue to be mysterious, uplifting, and judgemental, but so can technology and it will continue to be involved in the production of Art, whether or not Art snobs like it or choose to admit it.

6 August 2014

A node package and Grunt workflow

NPM and Grunt logos

At work I’ve been faced with the problem of maintaining front-end code that is shared across different applications worked on by different teams of developers. In addition to redesigning and rebuilding a consumer-facing e-commerce-type website, there’s a significantly more hefty operational side of the business, made up of various kinds of admin interfaces that have either yet to be developed or are in need of some serious TLC.

So, there are currently two general collections of front-end code that need to be managed across a number of different apps: the new consumer-facing collection and the more admin-type collection of front-end components.

Some background on the actual code

Having written 99.9% of all this front-end code myself, I’ve been able to follow a modular approach that I strongly believe in across both collections. The CSS is a compilation of component-driven Sass modules, while the JavaScript is made up of various object literals, which contain reusable variables and methods that are as concise in their purpose as possible. Each object is assigned a jQuery 1 global namespace, so its methods can be called from anywhere within the application as and when needed. Some of these objects implement custom or 3rd-party plugins – the latter of which are kept to a strict minimum to reduce 3rd-party code bloat.
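To make that a bit more concrete, here is a minimal sketch of the kind of object literal I mean. The namespace and component names are illustrative inventions for this post, not the actual code, and whether the namespace hangs off window or off the jQuery object itself is a detail I’m glossing over:

```javascript
// Hypothetical, component-centric object literal hung off a global namespace,
// so its methods can be called from anywhere in the application.
(function ($) {
  'use strict';

  // Assumed global namespace; created here if it doesn't already exist.
  window.UI = window.UI || {};

  UI.toggle = {
    // Reusable settings kept in one place.
    settings: {
      trigger: '[data-toggle]',
      openClass: 'is-open'
    },

    // Bind behaviour once, using event delegation.
    init: function () {
      var self = this;
      $(document).on('click', self.settings.trigger, function (event) {
        event.preventDefault();
        self.toggleTarget($(this).attr('data-toggle'));
      });
    },

    // A concise, single-purpose method that other objects can also call.
    toggleTarget: function (selector) {
      $(selector).toggleClass(this.settings.openClass);
    }
  };

  $(function () {
    UI.toggle.init();
  });
}(jQuery));
```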

The problem

The problem was this: how do I keep working on this front-end code in a fast, iterative manner without having to manually copy updates from one application to the next, constantly worried that changes made in the context of one app would end up breaking another?

I frequently remind myself that the code I spend a bit too much time agonising over ultimately exists solely to deliver an enjoyable interface to a user in the best way possible. The preciousness or cleverness of my code setup should not dictate decisions around the user experience design. This means the approach can’t be so rigid that it gets in the way of fast design iterations.

If it becomes clear that a certain change or addition will improve the user experience, I don’t want that change or addition to be sidelined or delayed because it doesn’t fall easily in line with a code-centric workflow or some misplaced sense of code conservation. I need to be able to break away from my nice, clean framework when a more optimised interface design requires it, but without sullying the cleanliness of the fundamental design principles behind that framework, and without it having breaking consequences elsewhere. I need the shared tools necessary to make these changes cleanly if possible, or, if not, I need to be able to create some technical debt that is easy to quarantine from one day to the next, as well as from one application to the next, but that can still be deployed immediately for its specific use. I can then go back to this slightly faster and looser code and work out whether it is in fact technical debt 2, and if so, look into repaying that debt by working it more cleanly into the base framework.

Beyond this need for design flexibility, there’s also the need to be able to make more considered, code-centric changes that improve the base framework, but to be able to do this selectively from one application to the next. This means updates can remain flexible to both my own and other developers’ priorities, and are also sensitive to any app-specific requirements that should temporarily delay or even skip certain updates. The aim is to do this while still only making each update to a single, distributed code repository.

So, tell us, code monkey, how?

First, I worked on how best to divide up the code. What I settled on was the following:

  1. A core framework
  2. A framework specific to the consumer site
  3. A framework for admin interfaces

Both 2 and 3 are dependent on the core (1). Pretty simple, nothing too fancy about that. What was a little less simple is how I hoped to work with these repos, and how I hoped other front-end devs (if we ever found any to join the company) would work in tandem on them across different applications without crossing swords. I didn’t want to be challenged to a duel to the death every time I pushed changes to the origin. The idea of being stabbed in the face at a very efficient rate of 60fps by another front-end developer scares me.

What about Bower?

Talking through this problem with a smart and helpful senior developer in the team, his suggestion was to have a look into Bower. This made complete sense, as Bower is a tool for automating the management of front-end components and their relative dependencies. We were already using it for 3rd-party components. One could convert those 3 repositories mentioned above into versioned Bower components, and then each app can download the correct version it requires (which in turn downloads the correct version of the core framework it requires).
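For illustration only (with made-up package names and versions), the consumer-facing repo’s bower.json could have pinned the core like this, with each app then pointing at a tagged version of the consumer or admin component in the same way. As the next paragraph explains, this isn’t the route we took in the end:

```json
{
  "name": "consumer-ui",
  "version": "0.2.0",
  "dependencies": {
    "core-ui": "example/core-ui#0.5.0"
  }
}
```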

So I got stuck in, all excited-like. Unfortunately, the more I played around with Bower in relation to these repos, the more Bower felt wrong for this. Bower is perfect for 3rd-party stuff, as hooking up to the browser-ready distribution files of other people’s generic components is exactly how one normally uses a 3rd-party component.

One naturally installs Bower components into a publicly accessible directory, because these components are front-end in nature and because you should be using the components as they have been automatically provided – why else would you be using a dependency automation tool like Bower if you weren’t? You would concat and minify for production, but these components would be public files all the same.

This, in my mind at least, means that you start breaking your assumptions around Bower components as soon as you start installing source files that require server-side compilation (such as Sass and SVG files and maybe, who knows, CoffeeScript files in the future).

The key difference here is that these repos are intended to be used during front-end development, not simply included or implemented. The Bower workflow kind of assumes the latter. What we really needed was a server-side module manager for front-end development. So the same very smart developer suggested Node Packaged Modules (NPM) instead, and he set up a private registry on a local development server for me to start playing with.

I’ve been using the NPM approach (with a strong process dependency on Grunt) for a few sprints now, and it feels like a great recipe thus far. Below is some more detail about the approach.

The solution: NPM & Grunt sitting in a (dependency) tree…

Using a private NPM registry had its problems. It couldn’t be accessed outside of the office network (I work a lot from home after hours, especially when a brainwave strikes me), but, most importantly, it caused problems with the continuous integration build process on the staging server. In short, there were network security issues with the build process. So we instead switched to having the NPM config fetch the packages directly from GitHub, using git tags for versioning. It’s the same concept as before, except that instead of publishing to an NPM registry, I just push the packages to their repos on GitHub, tagging them with a version number. In effect, this removed a step for me, as I would have maintained a git repo in addition to the NPM registry anyway.
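Roughly speaking, the dependency declaration in, say, the admin repo’s package.json then looks something like the following. The names, URLs, and version numbers are placeholders rather than our actual ones; npm resolves the git URL and checks out the commit the tag points to:

```json
{
  "name": "admin-ui",
  "version": "0.4.0",
  "dependencies": {
    "core-ui": "git+ssh://git@github.com:example/core-ui.git#0.6.0"
  }
}
```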

First, let’s take a look at the structure of the 3 repositories in question (1, 2, and 3 above – core, consumer-facing, and admin-type UIs).

Each repo contains the following:

  • An images directory with uncompressed and fully editable SVGs
  • A styles directory made up of .scss files organised into:
    • modules (global component-centric modules, each in a separate file to make it easier to cherry pick the ones you want)
    • sections (bespoke styling for certain pages or sections, also in separate files based on the page or section)
    • ie (IE specific styles which follow Paul Irish’s recommended approach to IE overrides)
  • A scripts directory made up of a collection of component-centric JavaScript objects, with reusable methods (once again in separate files).
  • An NPM config (package.json)
  • A Bower config for 3rd-party components (bower.json)

The core repo doesn’t have a ‘/styles/sections’ directory, as it never directly implements any particular page or interface. Other than this, all repos follow the same file and folder signature. The core repo contains most of the modular .scss and .js files, with the consumer-facing and admin repos extending these with their own global (i.e. section-agnostic) modules as well as with their own section-specific styling. Unlike the consumer-facing repo, the admin repo’s sections refer more to types of sections or page areas, as opposed to actual pages, as it is more generic in nature.

The admin repo introduces some different design patterns, both macro and micro in scale, as well as bringing in some heavier, data-orientated components and edit controls.

The consumer-facing repo has some global elements specific to it, while introducing a lot more page-specific styling and breakpoints to squeeze out the best design possible for our customers across a huge continuum of screen sizes and browser capabilities.

You could ask the question, Why create a repo for the consumer UI, if it is only applied to one application? That’s a mighty fine question. I’m starting to like you. Well, the answer is two-fold. Firstly, this repo can be used for other consumer-facing offerings, which have been proposed for the future. Secondly, this repo can be automatically pulled into some form of externalised documentation if need be, used to codify and communicate design patterns in a centralised place for the whole company. In other words, for the creation of a pattern or UI library of sorts that can play an important part in bolstering organisational memory – the thing so crucial to creating cohesive user experiences.

Each app then includes the admin or consumer-facing packages in its NPM config, pointing specifically to the version (via the git tag) that it is using. Those versions, in turn, are dependent on a specific version of the core (as defined in their own NPM configs). An app can pull in both the admin and consumer-facing packages, and more (as does our actual consumer-facing app, since it has backend admin pages). Because each package is kept in its own directory within the node modules directory, each in turn with its own version of the core package, you can keep them separate for different sections of the app.
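So an app’s own package.json ends up looking roughly like this (again, hypothetical names and versions). After an install, consumer-ui and admin-ui each sit in their own directory under node_modules, each with its own pinned copy of the core package nested inside it:

```json
{
  "name": "shop-app",
  "private": true,
  "dependencies": {
    "consumer-ui": "git+ssh://git@github.com:example/consumer-ui.git#1.2.0",
    "admin-ui": "git+ssh://git@github.com:example/admin-ui.git#0.9.0"
  }
}
```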

All you need to do is run ‘npm install’ and wazam! you have the packages you require, with the correct version of core automatically pulled in. In addition to this, you can add a ‘postinstall’ declaration to your packages that kick-starts a process of your choosing. We used this to kick off the installation of the 3rd-party Bower components specific to each front-end package. This means that, by simply running ‘npm install’ in the app, we get all our necessary front-end packages installed along with their Bower components, without having to commit these components to the package repositories themselves.
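In each front-end package’s package.json, that hook is just a one-liner (assuming Bower is available on the machine or declared as a dependency of the package). When the app runs ‘npm install’, npm installs the package and then runs its postinstall script inside the package’s own directory, pulling in whatever its bower.json asks for:

```json
{
  "name": "consumer-ui",
  "version": "1.2.0",
  "scripts": {
    "postinstall": "bower install"
  },
  "dependencies": {
    "core-ui": "git+ssh://git@github.com:example/core-ui.git#0.6.0"
  }
}
```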

Now the second very pertinent question you may ask is, Does this mean you have to keep pushing commits to your package repos in order to see your changes reflected in the app you’re currently working on? I’m really starting to like you now, because that is indeed a great question. The answer is npm link, which allows you to effortlessly symlink your local working copies of the packages into the app’s node modules directory on your development machine. Meaning any updates to your local copy will be reflected immediately in the app. Can I have a what what? Pretty sweet, huh? Once you’re done working on a specific task, feature, bug fix etc., simply update your version number and commit it with a matching git tag version. 3

How do all these things get compiled?

Now that we have all the front-end source files we need to start doing some rapid, kick-ass design implementations on a specific app, how do we actually gather and compile these files in an efficient manner? The answer is Grunt. I’ve been using Grunt for a while now, but for me it has really come into its own with this particular workflow.

With the source files now available within the node modules directory, I use Grunt to automate the following, in order (a simplified Gruntfile sketch follows the list):

  1. Copy any browser-ready scripts (3rd-party components and my own package scripts) to a public location to be included in the document. I also copy all SVG source files from each package into an assets folder specific to the app in question. Seeing as all the packages follow the same directory structure, this automation is pretty painless to set up and maintain, and appropriate nesting can be maintained (vis-à-vis different versions of core etc.).
  2. I then run an SVG minifier to reduce file sizes (bloated by all sorts of meta information and redundancies, generally left behind by something like Adobe Illustrator). There are savings in excess of 60%, so the juice is definitely worth the squeeze.
  3. With the optimised SVGs in place, I then run grunticon, which is a wonderful plugin that generates all the necessary SVG CSS data and classes (according to my own file organisation and class naming specifications), and then uses phantomjs to create fallback PNGs for browsers that don’t support SVG. Can I have a HellsYeah!?
  4. I then do an initial compile of the Sass brought in from the packages, including the SVG data. We now have fully scalable, retina-sharp images, all loaded with a single HTTP request, ready and waiting in the cache for every page thereafter. Load times after the initial load (which is pretty darn quick itself) are super-lightning quick. 4
  5. I then have some uglification set up to keep an eye on combined and minified file sizes of scripts. This will only really come into play when we start live deployments, though.
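To give a feel for how those five steps hang together, here is a heavily simplified Gruntfile sketch. The plugin choices (grunt-contrib-copy, grunt-svgmin, grunt-grunticon, grunt-contrib-sass, grunt-contrib-uglify), paths, and target names are guesses for the purposes of this post rather than our actual config, and the exact option shapes vary between plugin versions:

```javascript
module.exports = function (grunt) {
  grunt.initConfig({
    // 1. Copy browser-ready scripts and SVG sources out of the installed
    //    packages (only one package shown here for brevity).
    copy: {
      scripts: {
        expand: true,
        cwd: 'node_modules/consumer-ui/scripts/',
        src: ['**/*.js'],
        dest: 'public/js/'
      },
      svgs: {
        expand: true,
        cwd: 'node_modules/consumer-ui/images/',
        src: ['**/*.svg'],
        dest: 'assets/svg/'
      }
    },

    // 2. Minify the copied SVGs to strip editor metadata and redundancies.
    svgmin: {
      dist: {
        files: [{
          expand: true,
          cwd: 'assets/svg/',
          src: ['**/*.svg'],
          dest: 'assets/svg-min/'
        }]
      }
    },

    // 3. Generate SVG data-URI CSS (with PNG fallbacks) from the optimised SVGs.
    grunticon: {
      icons: {
        files: [{
          expand: true,
          cwd: 'assets/svg-min/',
          src: ['*.svg'],
          dest: 'public/css/icons/'
        }]
      }
    },

    // 4. Compile the Sass pulled in from the packages, including the SVG data.
    sass: {
      dist: {
        options: { style: 'compressed' },
        files: { 'public/css/main.css': 'styles/main.scss' }
      }
    },

    // 5. Keep an eye on combined and minified script sizes.
    uglify: {
      dist: {
        files: {
          'public/js/app.min.js': ['public/js/**/*.js', '!public/js/*.min.js']
        }
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-copy');
  grunt.loadNpmTasks('grunt-svgmin');
  grunt.loadNpmTasks('grunt-grunticon');
  grunt.loadNpmTasks('grunt-contrib-sass');
  grunt.loadNpmTasks('grunt-contrib-uglify');

  // Run the whole pipeline in order with a single command: grunt build
  grunt.registerTask('build', ['copy', 'svgmin', 'grunticon', 'sass', 'uglify']);
};
```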

So that’s that in a very wordy hat. I could write a more fine-grained collection of posts breaking down the thinking behind certain things later, but the idea here was to share a top-level view of what I think is quite a nice code management setup and workflow.

Notes:

  1. Using jQuery is a somewhat unavoidable and pragmatic choice, especially within a team.
  2. Technical debt is sometimes hard to define with front-end code – but this is a whole post in itself, which I may address at some other time.
  3. I’ve yet to start using proper versioning standards (around breaking changes etc), but I will start doing that once we get closer to actual go-live deployment. Right now I’m the only one working with these packages, so it currently isn’t a problem if the version number lacks that kind of implicit information.
  4. I am busy working on splitting the image data stuff out from the main CSS and using yepnope.js to conditionally load it at the end of the document (based on SVG support) to increase initial load speed. But this is my own pedantic like-to-do right now, as the combined CSS is still surprisingly small (with all image data used throughout the consumer site, as an example, currently sitting at only 70kb).