Stepping back from our design systems

Component-led design has become very popular recently. Responsive design is complex, and it is understandable why alternative approaches that focus on components have gained traction. To be clear, I am an advocate of design systems.

However, this way of working is primarily a method for solving the difficult internal problems we face in design, development and content management. The end users of our digital products do not recognise the atoms, elements and molecules with which our design systems operate. Sure, users can benefit from a greater sense of consistency, but let’s be realistic: they will not consciously operate at the component level when we have spent the last 25 years talking about “webpages”. It is this difference in mindsets, and our familiarity with our own design systems, that I want to talk about.

We are experts in our design systems
So we have conducted the component audit, built the style guide and identified every element in our atomic design approach.

Your collaborative approach with the client means that their content production team are equally well versed in the new design approach. After three months of working on the project you and your client recognise every atom, element and molecule that you have crafted. The project is going well.

My concern is that the longer we work on a website (either in its design or post-launch), the more expert we become in it. We remember all of the previous design decisions we have taken. We recognise that “Input Field Atom” that actually makes up the “Email Sign Up Molecule”. We know the CMS inside out and how we can mix and match components to achieve our desired outcomes. Ultimately, we can look at a page and quickly tell exactly how it is built.
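
To make that concrete, here is a deliberately simplified sketch (plain TypeScript, with hypothetical names that are not taken from any real design system) of how an “Input Field Atom” might be composed into an “Email Sign Up Molecule”. To the team that built it, the molecule is instantly recognisable as two atoms in a wrapper; to the user, it is simply a sign-up form.

    // Hypothetical atoms and molecule, purely for illustration.
    interface InputFieldProps {
      id: string;
      label: string;
      type?: string;
    }

    // The "Input Field Atom": the smallest reusable piece.
    function inputFieldAtom({ id, label, type = "text" }: InputFieldProps): string {
      return `<label for="${id}">${label}</label><input id="${id}" type="${type}">`;
    }

    // A button atom to go with it.
    function buttonAtom(text: string): string {
      return `<button type="submit">${text}</button>`;
    }

    // The "Email Sign Up Molecule" is nothing more than a configuration of atoms.
    function emailSignUpMolecule(): string {
      return `<form class="email-signup">
      ${inputFieldAtom({ id: "email", label: "Email address", type: "email" })}
      ${buttonAtom("Sign up")}
    </form>`;
    }

    console.log(emailSignUpMolecule());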

We have become experts in our design system.

The problem with becoming an expert in something is that you find it that much harder to empathise with a novice: to look at something for the first time and understand what you are seeing. Let me use an example from the 1970s to explain what I mean.

Expertise in Chess
In 1973, two psychologists called William Chase and Herbert Simon were conducting research at Carnegie-Mellon University on Perception in Chess. Specifically, they were exploring what it was that expert chess players saw that novices didn’t.

In essence their experiment went as follows:

  • They had various types of chess player, ranging from “master” to “novice”;
  • Each participant in the study was shown a chess position for five seconds. They were then given a board and a set of pieces and asked to recreate the position they had just seen;
  • Participants were judged on how many of the chess pieces they could place in their correct positions;
  • Some of the positions shown to participants were taken from actual chess games, whilst others were completely random arrangements of pieces on the board.
A 2D representation of a chess board showing a traditional chess position

Figure 1: A position taken from an actual chess game

A 2D representation of a chess board showing pieces randomly placed

Figure 2: Chess pieces randomly placed on the board. This position is unlikely to occur in a real game

Chase and Simon compared the performance of the “masters” with the “novices”. Their findings became one of the most famous studies in the psychology of expertise and performance (the study is taught on most undergraduate psychology courses).

As you would expect, chess “masters” were better than “novices” at recalling positions. However, this finding only held for positions taken from real games. For the random board positions, there was little difference between the two groups in terms of memory recall.

Subsequent analysis showed that chess masters were able to use their extensive knowledge and memory of thousands of real-world positions to effectively chunk up the chess boards they were shown in the study. For example, a castled king position is a very common configuration in chess, involving a king and rook with three pawns in front of them (see figure below).

Two castled king positions side by side with one highlighted in its constituent components

Figure 3: What a “master” sees as one component a “novice” sees as five

It was proposed that chess “masters” would see the castled king position as one group, whilst novices would see up to five separate pieces. The more expert a chess player, the greater the likelihood of them recognising a configuration of pieces and being able to recall it easily.

Therefore, when the random chess positions were shown, a master’s expertise was effectively nullified: they were thrown back onto the resources of short-term memory, just like the novice group.

Stepping back from our expertise
Although it is not a direct comparison with the world of component-based web design, I have thought about this famous psychology study several times in recent months, and about the impact that expertise has upon our perception of a configuration of components.

Specifically, expertise in our design systems means that we can recognise every component so quickly that of course the page looks good to us when we just add a few more components. A type of “wood for the trees” syndrome can take effect.

Responsive design systems are by their very nature complex. However, we must never forget that a complex arrangement of components can appear simple to us precisely because we are experts in that system.

Put yourself in the shoes of the user, the “novice” in my analogy. They are seeing the board for the first time and they are not certain where all the pieces go or what they do. Maybe the board has been set up nicely for them, but then again maybe it’s a horribly complex position that only a “master” could understand.

Component led design has many benefits to us as a community. But as I stated at the beginning, these benefits are primarily internal to our design, development and content processes.

As we embrace this more flexible yet potentially more complex way of working we must ensure that this added complexity does not shine through into our final configurations.

Our design teams and content producers (especially them!) must remember to take the time to step back and look at the bigger picture at the page level. As the diagram in figure 4 shows (and what I was getting at in an earlier post from 2013), there are good and bad ways of combining elements to create either complexity or simplicity (taken from 101 Things I Learned in Architecture School).

12 rectangles are arranged in different configurations on the page. On the left they form a jumbled collective. On the right they are arranged to form the outline of three overlapping rectangles with the smaller ones visible inside. Thus the 12 shapes are reduced to the informed simplicity of three blocks.

Figure 4: Combining elements to make complex and simple shapes

Whilst component-based design systems are providing us with a lot of value internally, we must ensure that our final, external-facing configuration of components is not random or complex but familiar and uncluttered. Only then can we ensure that we are giving our users a playable position to start from.

References

  1. Chase, W. G., & Simon, H. A. (1973). Perception in Chess
  2. Frederick, M. (2007). 101 Things I Learned in Architecture School

Showing your organisational underpants: A case study in failure

So I was having a highly enjoyable time in Brussels at the weekend with some good friends when I experienced a quite catastrophic failure of a service from a major high street bank. Don’t worry, this isn’t going to turn into a ranty blog post. I was just so stunned by the failure of the service that I felt the need to document it, because it is a lovely example of a number of well-intentioned decisions that together culminated in a quite unbelievable customer experience.

“Something is wrong with my debit card…”
I wandered down to the reception of my hotel on Sunday morning in order to pay for breakfast in the main hall. I found myself light on cash, so decided to pay with my debit card. Three failed attempts later, I swapped to a credit card, which was duly accepted first time. This led me to believe it was a problem with my debit card. However, I put it down to a dodgy card machine or a dirty card and carried on with my day by jumping onto the Eurostar back to the UK.

At London St Pancras train station I wandered over to buy a tube ticket and again tried to use the debit card. When this transaction was cancelled I became concerned. Having borrowed £10 off my friend for the ticket we hopped on the tube to Paddington.

“Let’s call Card Enquiries…”
At Paddington train station I decided to call the Card Enquiries hotline. I’m paraphrasing the interaction, but basically my conversation with the automated system went like this:

Automated system: Please provide bank card details
Me: Done
Automated system: Please provide date of birth
Me: Done
Automated system: Please provide 2nd and 8th digits of telephone security number you set up 7 years ago.
Me: um..what?
Automated system: You have 20 seconds to comply…

Before the system got too ED-209 on me, I found myself being transferred to a very nice human being who started to walk me through a series of further security questions. This in itself did not concern or irritate me: this is a bank that holds my money, so I welcome a high level of security. However, as the reason for my problems became clearer, I became more incredulous.

“It’s a benefit, Mr Fisher”
The nice human on the phone (after the five-minute security pat-down) explained that my debit card had been cancelled because they were upgrading me to a new “Super Duper Card” (ok, I admit that it’s not called that and I shouldn’t work in financial branding, but stick with me). The “Super Duper Card” was a perk because of my loyalty to the bank, with “lots of cool new features”. The rest of the conversation went like this:

Me: That all sounds great, but I didn’t ask for this “Super Duper Card” and now I have no money
Nice human from bank: I know you didn’t ask for it, Mr Fisher, but we did write to you and tell you that this was happening
Me: When?
Nice human from bank: January
Me: Come again?

Yes, that’s right. Five months previously they had written to me via post to announce the “Super Duper Card”. I had had no correspondence since.

Nice human from bank: We posted a letter on January 11th explaining the “Super Duper Card”. Can you not remember?
Me: No, I’m afraid I can’t. I’m as surprised as you are, to be honest (sarcasm gets you nowhere)
Nice human from bank: Well, we posted your new Super Duper Card to you on the 15th of May, that’s why your other one has been cancelled
Me: I haven’t received any new cards in the post…

So who has my bank card?
By this point I was starting to get concerned. Where was this new mystery bank card? Perhaps a worse question: who had my new bank card?

I hung up on “nice human from bank” and got on my train. On the train I rang my wife and asked her two things:

  1. Were there any unopened letters lying around the house?
  2. Could I borrow some money?

Turns out there were no unopened letters and her interest rates were 7%. The day was getting worse.

I called the Card Enquiries number again and, having dispatched the ED-209-style automated voice service in quick order, got through to “Nice human from bank #2”. I started to make more headway into the mystery:

Nice human from bank #2: It turns out, Mr Fisher, that your new Super Duper Card was posted on the 15th of May, but because of security restrictions we sent it to your local branch instead of your home.
Me: What security restrictions?
Nice human from bank #2: You changed address in the last three months, which means we can’t send any new bank cards…
Me: You mean the new bank card I didn’t ask for?
Nice human from bank #2: Yes, that’s the one. Anyway, we can’t send any bank cards to your home for the first three months.
Me: Was anyone going to tell me that I had a new Super Duper Card waiting at my local branch for the last few weeks?
Nice human from bank #2: Um, no. It’s a fully automated system. I don’t know why you haven’t been contacted.

I thanked “Nice human from bank #2” and hung up. I have tried to add some humour to this experience, but quite honestly, let’s think about it in a little more detail.

On reflection…
What a devastatingly bad customer experience.

This colossal muck-up occurred across four channels (post, contact centre, online banking and high street branch), began five months previously and ultimately hit the customer with a series of very negative consequences.

What was obviously meant to be a nice thing (a customer upgrade to the Super Duper Card) has been so poorly implemented that the consequences for me (the customer) are as follows:

  • I’ve spent almost 30 minutes on the phone getting transferred between departments with no resolution;
  • I’ve been left in a foreign country with no money;
  • I have to leave work at lunchtime to walk to my local branch to collect a new card that I didn’t order. Incidentally, when I got to the branch my telephone number was written on the envelope, but no one had seen fit to ring it;
  • Throughout all of this there was the concern of who might have my bank card…

Showing your organisational underpants
I think what surprises me the most is not how bad the experience was but how relaxed I was about it! I should have been livid and ranty!

Instead, having worked in UX for a while, all I ended up seeing throughout was a logical decision taken by some well-meaning individual at the bank during the design phase:

Someone thought it would be nice to give me a Super Duper card with lots of perks.

Someone thought they would write to me via post to tell me this, five months before it actually happened.

Someone thought telephone banking needs a really obscure random number that a customer only uses every 7 years.

Someone thought that, having moved house, they shouldn’t send me anything to that address for three months.

Someone thought they should cancel cards without checking with the customer first.

No one saw the bigger picture. The customer paid the price.

Why structure matters – The second task

When I was asked to speak in Bristol at this year’s World Information Architecture Day (WIAD), I found myself contemplating a range of topics to present. As the keynote with a 30-minute window, I wanted to find a balance between opinion piece and practicality. My eventual topic, “Experience, Errors and Structure”, was well received, which was very satisfying as it is a theme that I have been carrying through much of my work in recent years.

My goal with my presentation was to acknowledge some of the history of UX (human factors / ergonomics / UCD – Delete as appropriate) and its movement from an earlier focus on usability and optimisation of software, product and web interfaces to the provision of some form of experience across channels.

However, I was also keen to point out that in the rush to provide an “experience”, I felt there had been a loss of focus on the immense value of a well-designed digital product structure and taxonomy. Some would call this classic IA thinking, or maybe even systems thinking. I talked about how thinking about structure and taxonomy can:

  • underpin the design of many dynamic experiences;
  • mitigate the occurrence of human error within the system;
  • provide a level of inherent usability to the system, no matter how tailored the experience has become for the user.

As I talked about taxonomy and structure being the foundation of experience design, I realised that I wanted a simple example I could talk to people about that would emphasise the inherent value of user-centric site structures over more dynamic, personalised or tailored experiences. From talking to my team, clients and peers, I’ve started to call this example “the second task”.

The second task
Our industry talks a lot about how users come to a digital product and how we “convert them” once they are there (am I the only one who thinks this sounds weird and stalkerish?).
Social media, email marketing, personalisation and geolocation contextual targeting are techniques that can route users intelligently and directly to relevant content like never before. They are powerful, value-adding techniques that enable users to successfully locate content that can often be buried deep in our websites.

So let’s assume we have been successful in our goal of helping a user achieve whatever behaviour or task they wanted to when they visited our website. For example:

  • They came and read the article on our website that they found on Twitter.
  • They logged in and changed their address after receiving an email from their utility provider.
  • They bought and downloaded the ebook they saw advertised in the window of the shop in the high street.

What next?

Are they going to leave or are they going to do something else with our website?

After consuming that initial piece of content on the website (and assuming they are still engaged), a user can be presented with two logical next steps in order to continue:

  1. Available navigation options (assuming they are intuitive)
  2. In-page content (assuming it is relevant)

Both of these “second task” options are presented as a direct consequence of the structural and taxonomy level thinking we have completed during the early phases of design. When thinking about “the second task” it is our site structure and taxonomy that takes over as the primary facilitator for the continuation of a user’s journey.

When we consider “the second task” (and we really should when thinking about cross-sell, upsell and prolonged engagement opportunities), in my opinion we are really acknowledging why thinking about structure is important.

Why structure matters

“A day without taxonomies is not found” – Jared Spool (from The Accidental Taxonomist)

If you haven’t read it yet, Mark Boulton’s article “Structure First. Content Always” is an excellent exploration of how thinking about structure really is the crux of successful web design.

It is not my intention with this post to dismiss many of the dynamic powerful techniques that we can use to tailor and personalise relevant content for users today. I understand the value these bring.

My intent with this post (and I believe what I was getting at in my talk at WIAD 2015) was to emphasise that the need for logical, user-centric structures and taxonomies has never gone away. We can tailor and personalise experiences as much as we want, but it is my belief that sooner or later there will be a need for a user to fall back upon the underlying structure presented to them.

When you consider the time and effort invested in the pursuit of the fully tailored, personalised experiences that some organisations are striving for, you have to ask whether some good old-fashioned IA and content strategy thinking wouldn’t have a faster return on investment and a bigger impact instead.

After all, “the second task” isn’t going away, nor should anybody looking for deeper engagement want it to.

Understanding our customers’ Minimum Expected Product

In “Designing a service no-one wants to use” I discussed combining Scott Jenson’s Value vs. Pain model with the experience realms of Pine and Gilmore. The aim of this approach was to identify opportunities for adding experiential value within a service.

I am referring to those occasions in a journey that result in positive, memorable moments for the customer: those delightful moments that could range from a quirky, fun micro-interaction, to a customer fear reassuringly allayed, to the discovery of an entirely new set of functionality in the service.

The role of expectation in memorable moments
Discussion around these types of moments is not new. However, what is interesting is the consistency with which practitioners have discussed the role of customer expectation in determining how likely they are to occur. Whilst flicking through an old notebook from 2011, I found the following doodle to myself:

What is a peak experience for a customer?
When my expectations are surpassed and my goals are achieved

Jon Fisher, November 2011

Despite writing this alone in the pub (it was £5 for a pie and pint to be fair), it appears I am not alone in my thinking. Pine and Gilmore discuss one measure of customer satisfaction in The Experience Economy as follows:

Customer satisfaction = What customer expects to get minus what customer perceives they get.

Dave Power III of J.D. Power & Associates

Giles Colborne, in his presentation Designing for Delight, talks about delightful moments occurring following periods of anxiety. It is during these moments of anxiety that a customer’s expectations of a successful resolution to an issue are at their lowest.

There are many other sources that discuss the role of customer expectation in perception of a service but at its simplest I like the analogy of going to the cinema.

At some point we have all left a cinema and said something like “it wasn’t as good as I thought it would be”. If we are honest with ourselves, how does our enjoyment of a movie change having read numerous positive or negative reviews versus not having read any at all prior to watching the film? What expectations have we set ourselves?

Pain Points and low expectations
The service design community has consistently advocated identifying customer pain points and eliminating them as part of best practice.

A graph showing customer journey along the x axis and satisfaction along the y axis. The lowest point on the graph is highlighted.

Existing service pain points are obviously the place of low expectations

There are numerous definitions of pain points, but Ratcliffe and McNeill (in Agile Experience Design, a great book) give a nice, concise summary of the types of pain point in a customer journey:

  • Manual workarounds;
  • Sources of customer frustration;
  • Anywhere that involves the use of legacy technology;
  • Points of high customer attrition;
  • Sources of customer complaints.

Eliminating pain points offers the quick wins of design, because these are the areas with the lowest existing customer expectations. More than likely, customers have already experienced these pain points (unless it’s a brand new service) and therefore have their low expectations firmly set.

In contrast, when things work well people don’t want change, because they struggle to visualise how things could possibly be improved (plus, if we are honest, they are probably scared that the design team will screw it up). Acknowledgement of this fact may be part of the origins of Henry Ford’s thinking when he said:

“If I had asked people what they wanted, they would have said faster horses.”

Henry Ford

Deriving requirements – the Minimum Expected Product?
Hopefully by now you can see the powerful role that customer expectations play in both the perception and the subsequent design of a product or service.

Sophie Dennis discussed combining the peak-end rule and the Kano model to derive a Minimum Viable Experience in her excellent talk Getting UX Stuff Done at UX Bristol last year. In my opinion, the strength of her technique was recognising that customer expectations have to be surpassed at some point (i.e. the peak in the peak-end rule) whilst recognising that, in the context of a minimum viable product (MVP), we cannot do everything and must prioritise.

The Kano model showing three lines on a graph defining basic requirements, performance payoffs and excitement generators

The Kano Model from Sophie Dennis “Getting UX Stuff Done”

To further elaborate upon her approach, my point would be that we should be determining customer expectations around functionality much sooner in the development of our MVPs.

Whilst an upfront piece of user research is often conducted by project teams to identify user needs, we are missing a trick if we do not define the expectations around these potential requirements.

For example, on a recent project my team and I had developed a prototype and sat excitedly watching users engage with it in our lab. We had designed a solution for a particularly complex filtering problem that was going down a storm with users, but there was one small problem: it was trickier to build and implement than an alternative design solution the team was also considering.

The question was: would customers miss the more complex functionality if it wasn’t there? Were they expecting it as a minimum, or were the positive reactions we observed a direct result of their expectations being delightfully surpassed? Could we choose the alternative design option to ease our build and implementation whilst maintaining an acceptable level of customer satisfaction?

Without going into further detail on the project, I hope you can see the powerful role that expectation is playing in this context.

Customer expectation is a two way street
My key point is that expectation works both ways for project teams:

  1. Understanding expectations enables us to spot opportunities for adding experiential value by surpassing existing perceptions, thus delighting our customers.
  2. Understanding expectation also enables our project teams to prioritise and justifiably de-scope functionality so that we can strategically invest our time and effort in the right places whilst maintaining acceptable customer satisfaction levels.

Both of these benefits can only be obtained early in a project lifecycle (most likely when we are conducting our initial research), and only by asking the right questions. It’s all well and good defining our minimum viable product as a project team, but perhaps a better way of looking at it would be to define the Minimum Expected Product from our customer base.

Viewed from this perspective we would then be in a position to both optimise our design efforts and delight our customers. Truly a happy place to be!

Someone needs to do it!

This is not a blog post about definitions. It most certainly isn’t a blog post about job titles. It’s a blog post written to humbly request a different viewpoint on the world in which we work.

As anybody who has worked in our community for a while will testify, every few months or so a conversation will start up about defining what we do. Levels of granularity will be endlessly argued on Twitter. Overlapping skill sets debated. Accusations of land grabs by X discipline over Y discipline will fly.

Stop.

I don’t care. This is why.

If you are working in the context of multi channel digital delivery in 2014 then you will be familiar with some of the following skill sets:

  • Service Design
  • Content strategy
  • Information Architecture
  • User Research
  • Interaction Design
  • Visual Design
  • Front End Development
  • Back End Development
  • Accessibility

When you look at the list above, I ask you not to see a list of job titles but a list of design considerations that must be addressed in order for the project to be successfully delivered. A number of steps that must be gone through. A set of things that, when done correctly, lead on to other, happier things. If any of these things are not done, then the likelihood of our project suffering is high.

We need to be considering accessibility from the start. We need to be thinking about structured content. We need to be considering what our users need and want. We need to be thinking about brand application. I could go on…

The reason I don’t care for definitions and job titles is that as long as the things in the list above are being competently handled on my project, I don’t really care who does them.

Assuming your project team has competence in all of the above disciplines, your project will be ok.

Specialisation and project teams

It’s the nature of a maturing industry that, over time, a wider range of specialisations will start to appear, quickly followed by the establishment of umbrella terms (user experience, anyone?).

Those specialisms, in my opinion, arise over time because the community starts to discover an intrinsic value in their practice. The community discovers that when they do a thing from the list of disciplines above, more often than not their projects benefit. Any discipline or skill set that did not offer value to a project design process would quickly wither on the vine and die.

For example, at some point in time somebody realised the inherent benefits of talking to end users. A few years later the “user researcher” specialism is born. I would therefore propose that user research offers intrinsic value to a design process.

But does it necessarily have to be completed by a “user researcher”? If it’s a big project team with a properly qualified user researcher on board, then sure, go ahead. But what if I also had a fully qualified information architect on the project team who happens to be competent at user research? Hmmm, dilemma time.

Or not.

I don’t care who does it. As long as the user research (you know, that intrinsic, value-adding thing that is really important) is competently handled, I really don’t care.

Don’t have an accessibility consultant on your project, but do have a cracking front-end developer who specialises in web accessibility? Great!

Got a content strategist and an information architect on the same team? Let them fight to the death using blunt spoons in a pit of tar. Ahem. I digress…

My blindingly obvious point is that when viewed in the context of getting stuff done, suddenly job titles and definitions just don’t seem that important anymore.

There is a whole range of value-adding activities that need to be completed for a project to be successful. They are things that simply need to happen. As long as they are done correctly, who does them is academic.

Therefore next time you are assembling a project team don’t tick off job titles, tick off competencies instead.

When reviews stop being useful

Customer reviews are ubiquitous on the web today. It’s hard to visit any site selling a product or experience that doesn’t utilise customer reviews (usually on a five-point scale). Whilst I acknowledge that those little five stars are not about to go anywhere soon, there is a scenario where they stop being useful.

Weddings, Zip wires and Mayan temples

In September I got married and went on honeymoon to Central America. As a consequence, in the last year I have found myself thinking (amongst other things) about the following three things:

Wedding venues: Where do my wife and I want to hold one of the most important days of our lives?

Rainforest zip wires: Which company should we use for flying over the canopy of a Costa Rican rainforest?

Mayan ruins: Which of the 1,000-year-old, long-lost Maya pyramids poking out of the rainforest should we visit?

To quickly summarise 12 months: both our wedding and honeymoon were incredible. However, one thing that didn’t help us in planning was customer reviews. The problem is, it’s very hard to find a wedding venue, Costa Rican zip wire or Maya temple that doesn’t come with a ridiculously good review…

Customer objectivity and bias
It’s easy to see how any of these three activities will naturally come with excellent customer reviews: they are great things to do! Read any of the descriptions of the activities above and I challenge you not to imagine how splendid (or awesome, for my North American friends) they can be.

A photo of the author zip lining above a Costa Rican rainforest

How can anyone not give this five stars? But there are dozens of companies in Santa Elena, Costa Rica

However, when you look a little deeper at these activities (and talk with people who have done them), you spot two factors that help explain the high ratings in customer reviews:

Unique or rare events: Typically, these types of experience are unique or very rare in our daily lives. For example, most people who visit Costa Rica rarely do more than one zip line trip.

Positive emotions: These kinds of experiences elicit a highly positive emotional response from us (well you’d hope so on your wedding day!)

In other words, if you only do something once and it felt great, you are likely to give it a high score when filling out a customer review. Whilst this in itself is not a problem (I don’t begrudge people having a good time!), it does make the selection process more difficult when trying to purchase one of these experiences.

Your purchasing decision is harder
As a prospective customer looking to purchase one of these rare, positive experiences I simply stop using customer reviews. They just don’t enable differentiation in the selection process.

For more common experiences, customers are able to extrapolate and provide better critiques because they have a greater base of experience to allow comparison. For example, a business traveller may stay at several hotels a year, and these stays generally don’t tend to be a particular highlight of their calendar year. Therefore, a customer’s reviews can be more objective. Thus I find Trip Advisor more useful for booking accommodation than for trips to Maya pyramids in Guatemala.

Is this really a problem?

Everybody seems to be having a good time, right? True. I guess my problem is that not all five-star experiences are created equal. Anecdotally, from our honeymoon travels, we and several other travellers we met visited multiple Maya ruins across Central America. What quickly became clear is that some ruins were much better than others, and yet they all had high scores on Trip Advisor. If you’ve only visited one set of Maya ruins then you will give it a great review; they are inherently cool. The paradox is that because you only visited one Maya site (or got married once, or zip lined once, etc.) you have no frame of reference against which to judge your experience.

A screenshot of Calakmul ruins on Trip Advisor with a 5 star review

The ruins at Calakmul with a five star review on Trip Advisor

A screenshot of Tikal ruins with a five star review on Trip Advisor

The temples at Tikal with a 5 star review on Trip Advisor

Contradiction and content implications

I’m not silly enough to suggest that having a five-star review is a bad thing. What I am suggesting is that it should be possible to identify the types of experiences where reviews will be less useful to prospective customers because of the combination of ultra-rare and positive, emotive circumstances I have been discussing here.

If we can identify these types of experiences, it enables us to consider what other factors we could potentially use to help differentiate them for prospective customers. For example, if we know that reviews are less useful for decision making, then how does this change the content requirements for our individual product page? Is there additional content we can add that could be the clincher? For example, the fact that your zip line company offers couples’ zip lining options (how romantic on your honeymoon).

Despite me bashing reviews in this context, they will always be essential content requirements for customers, if only as an initial filter in decision making. The difficulty is that if all your competitors are proudly displaying their five-star reviews, then you too must puff out your chest and show off your “Trip Advisor Excellent 2013” award. You can’t afford to be the exception in the marketplace, even if in so doing your unique proposition is lost in a sea of five-star reviews. A true contradiction if ever I saw one.

Nine heuristics for designing cross-channel services

Last year, two associates and I completed an extensive piece of work on “Sense Making in Cross Channel Design”. A key theme of this paper was exploring how a customer’s understanding can diminish as they transition between the various channels of a service. At the conclusion of the paper we had identified nine useful heuristics, observations or considerations for evaluating or planning a cross-channel service.

To start the new year (and to ease me back into the blog after returning from honeymoon), I thought it would be useful to provide a short post that pulls out these nine heuristics.

Interlude: Channel switching and information scent

Before we continue, I feel it is important to highlight the role of information scent in the degradation of understanding in a service. In our original research paper, it became clear just how impairing a channel transition can be to the overall success of a customer in a service (see one of my earlier blog posts). Therefore, several of the heuristics below revolve exclusively around the preservation of information scent across channels, i.e. how we help a customer resume a task that they previously started in another channel of our service.

For a deeper understanding of information scent (or any of the points listed below), I would recommend reading our full paper or referring to either Information Foraging Theory by Pirolli (2007) or Spool et al. (2004).

Cross-channel design heuristics

So here we are then! Nine heuristics, rules or observations that can be used to support customer understanding and help avoid designing failure states into a cross-channel service:

  1. In the digital age, the cost of moving channel is very small. For example, it costs me nothing to shut down a browser when I can’t find what I am looking for. Therefore, if our information layer is weak or ill-informed, the likelihood of a customer leaving our service or changing to an alternative channel (or competitor) is high;
  2. When looking for information in a cross-channel user experience, customers are effectively conducting a number of evaluations as they move through and across our channels. They are effectively asking themselves, “What is the likelihood that this channel can satisfy my informational needs?” In the event that the answer to that question is “Low”, the customer will either switch channel or leave the service entirely;
  3. Do not underestimate the effect that switching channels can have in reducing or eliminating information scent. Every time a customer changes channel they are effectively resetting the information path and beginning a new information forage;
  4. Do not underestimate the role of time in diminishing information scent for a customer. The length of time between a customer switching channels can range from seconds to days. We must consider the length of time likely to elapse and design strong information scents accordingly;
  5. Clear and immediate proximal cues will need to be provided for the major informational needs on all major entry points for a channel. For example, if your service offers a “my favourites” or “wish list” functionality, then ensure it is prominent on all major entry points to the website. These information needs should be identified early in the design process and mapped across channels;
  6. Identify the information needs that need to be carried between channels and provide suitable digital functionality (for example, email links and social sharing) that can carry the information scent for us. The topic of carrying information needs, and the problems associated with this act, is explicitly discussed elsewhere on this blog;
  7. Basic consistency in taxonomies is still essential for the reinforcement of a strong information scent across channels. The number of navigational paths that a customer has available to them in any given channel is an important consideration in the success of future information retrieval in alternative channels. How can we aid people in their information retrieval when some channels offer a single path whilst others offer as many as eight or more? In my experience, this point is such a common failing of services that it’s no wonder there has been a resurgence of interest recently in content strategy and classic information architecture (it’s also one of the reasons why I’m such a fan of responsive web design, from both an information scent and an accessibility perspective);
  8. Identify the types of failure states that can result in a channel switch in a cross-channel experience, as well as the “natural” exit points for a task. For example, an “out of stock” result would immediately halt a customer’s task and necessitate a channel switch just as much as if the customer had successfully found what they were looking for. In such an event, what information can you provide to a customer transitioning to another channel in your service?
  9. The digital literacy of your service’s various audience groups and the relative maturity of some channel interaction patterns can have a major impact on the success of a cross-channel experience. This last point may just be a factor of time as our industry moves forward and more established interaction patterns are recognised by end users.

Conclusion

Obviously, designing services involving multiple channels is a lot more complex than the above nine heuristics. However, over time I have been surprised how often I have seen a failure in a system that can be attributed back to one or more of these points. Use them at the start of a project or halfway through; it doesn’t really matter. I consistently find them a useful tool for informing decisions throughout the design process. Happy New Year!

References
Pirolli, P. (2007). Information Foraging Theory: Adaptive Interaction with Information. Oxford University Press.

Spool, J. M., Perfetti, C., & Brittan, D. (2004). Designing for the scent of information. User Interface Engineering.
