Which of these two celebrities do people prefer?
This was one of the questions we asked participants at the recent launch of UXPA Ireland, where we ran a small eye-tracking experiment at the beginning of the event. It was mostly for fun, and a chance to demonstrate eye-tracking: we asked those who came along to look at some images of people and answer some simple questions.
For this question, we asked participants to look at these two photos and tell us which person they preferred.
And this is what the eye-tracking heatmaps showed us:
The heatmaps show that both pictures received similar visual attention, which could lead us to conclude that participants liked both equally. However, when participants were asked to write down their preference, Liam Neeson came out on top as the clear winner!
This experiment may have been a bit of fun, but it demonstrates that eye-tracking does not tell us what the user is thinking, only what they are looking at (and not looking at). This is something we constantly remind our clients of when showing them heatmap results: just because users look at something for a long time does not necessarily mean they are very interested in it; they could be staring at it because they just can't understand it!
So why use eye-tracking then? And how does it work?
First off, what is eye-tracking?
An eye-tracker is a piece of hardware that sits under the computer screen and uses infrared cameras to measure where users are looking. Eye-tracking is based on the eye-mind hypothesis, which says that people look at what they are thinking about. In eye-tracking, we treat visual attention (looking at something) as a proxy for mental attention (thinking about something) (from Jakob Nielsen's book, Eyetracking Web Usability).
When the gaze rests on something, this is called a fixation. Eye-tracking research measures the number and duration of fixations and then presents this information as a heatmap (showing the amount of visual attention an image received, as above) or a scanpath (image below): the path our eyes make when looking at an image.
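To make the fixation-to-heatmap idea concrete, here is a minimal sketch of how fixation durations might be aggregated into a coarse grid over an image. The fixation tuples, image size and grid size are all invented for illustration; they are not output from any real eye-tracker or analysis tool.

```python
# Aggregate fixation durations into a coarse grid: total gaze
# time per cell is what drives the "heat" in a heatmap.

GRID_W, GRID_H = 4, 3      # coarse grid of cells over the image
IMG_W, IMG_H = 800, 600    # image size in pixels (assumed)

# Invented sample fixations: (x, y, duration in ms).
fixations = [
    (120, 90, 310),
    (130, 95, 250),
    (640, 480, 180),
    (125, 100, 400),
]

heat = [[0] * GRID_W for _ in range(GRID_H)]
for x, y, dur in fixations:
    col = min(x * GRID_W // IMG_W, GRID_W - 1)
    row = min(y * GRID_H // IMG_H, GRID_H - 1)
    heat[row][col] += dur  # accumulate gaze time in that cell

for row in heat:
    print(row)
```

Three nearby fixations in the top-left pile up in one cell while the lone fixation lower right barely registers; that clustering is exactly what the colour intensity of a heatmap conveys.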
This visual attention would be difficult to capture any other way, and when combined with other usability data such as observation and interviews, it can give us additional insights into the user experience.
It is important to note that eye-tracking records foveal vision, not peripheral vision, which it cannot capture. So although heatmaps and scanpaths will not show this activity, users may still perceive or be aware of design elements through peripheral vision.
What do we use eye-tracking for?
Eye-tracking is particularly useful for comparing different designs, testing optimal scanpaths and testing the flow from task to task. A number of research studies investigating eye-tracking behaviour, particularly in relation to search-based tasks, have shown that patterns emerge in eye-tracking results (Lorigo et al., 2008). One such pattern is the 'F-pattern' scanpath, where users look across the top of a search page, scan across again partway down, and then move further down the page glancing across the results. Another search behaviour pattern is the 'golden triangle', where a lot of visual attention is concentrated in the top left-hand corner of the page.
Useful to know when planning SEO tactics!
Eye-tracking is often combined with think-aloud or talk-aloud protocols (where participants are asked to talk about what they are doing while completing tasks), but the jury is out on whether this interferes with participants' natural behaviour…
And on to another Nielsen (Jakob) and some of his tips on eye-tracking research from his book Eyetracking Web Usability:
- Nielsen contends that "to create an effective heat map for a given Web page, we made sure to include eye tracking recordings from 30 users on that page" (p. 25).
- He reminds us that it is important to give people some open-ended tasks – this helps us see what people look at rather than influencing or imposing behaviour on them.
- And as Nielsen points out, it is always a good idea to have some metric to measure different tasks. He suggests measuring time on task with a stopwatch (from the moment the user looks at the screen until they tell us they are finished), but personally I have always found this very difficult to do properly in real lab settings.
- I think it's a good idea, at the very least, to have some rating-scale metrics, even if based on opinion, such as asking users to rate on a scale how easy it was to carry out a task. You can use Jeff Sauro's Single Ease Question scale (the researcher can rate too).
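Summarising Single Ease Question ratings is straightforward: the SEQ is a single 1 (very difficult) to 7 (very easy) scale, so per-task means are usually all you need. A small sketch, using invented ratings rather than real study data:

```python
# Hypothetical SEQ ratings per task, on the standard 1-7 scale
# (1 = very difficult, 7 = very easy). Task names and scores
# are made up for illustration.
seq_ratings = {
    "Task A": [6, 7, 5, 6, 7],
    "Task B": [3, 4, 2, 5, 3],
}

for task, scores in seq_ratings.items():
    mean = sum(scores) / len(scores)
    print(f"{task}: mean SEQ = {mean:.1f} (n={len(scores)})")
```

Even with small lab samples, comparing these means across tasks quickly flags which task gave participants the most trouble.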
And finally, as discussed above, the biggest takeaway for me is not to rely on eye-tracking alone to explain user behaviour.
A picture may paint a thousand words, but it may well tell the wrong story…
Experiment at UXPA with Abi & Louise @ Paddy Power
Lorigo, L., Haridasan, M., Brynjarsdóttir, H., Xia, L., Joachims, T., Gay, G., Granka, L. A., Pellacini, F., & Pan, B. (2008). Eye tracking and online search: Lessons learned and challenges ahead. Journal of the American Society for Information Science and Technology, 59(7), 1041-1052.