trenchant.org

by adam mathes

Apple's Touch Bar Is an Inhuman Interface

The Touch Bar is a tentpole feature of the MacBook Pro I just ordered.

I’ll reserve final judgment until I can actually use it for a while – but I am extremely skeptical it makes sense from a usability perspective. On first principles it seems – awful.

Modal Keys

Function keys are bad interfaces because they are modal – they change what they do depending on application context in unpredictable ways without clear indication.

Other keys do the same expected thing at all times. (Mostly.) So it’s hard to know what a function key will do; that modality makes them harder to use without error and gives them a steep learning curve.

I am old enough to remember putting plastic overlays on top of function keys so their usage within WordPerfect was clear when you looked down.

That was, to put it mildly, a less than ideal interface.

The best application of these function key relics from decades past has been dedicating them to media keys (volume, play, etc.) that you can depend on and develop muscle memory for. People actually use those to change volume and brightness and to stop music.

Removing that so that you can have variable touch inputs that require you to look down seems like an odd tradeoff to me.

Did we just make a prettier version of those WordPerfect overlays? At least those were all buttons; these can be virtual sliders, buttons, dials, or any number of touch interfaces.

Inputs and Outputs

What makes laptops and desktops different from touch devices is that you manipulate on-screen entities using off-screen input devices.

Input below, output above. Eyes on output, fingers on inputs.

If you have to look at, process, and focus on the input device while the output device is elsewhere, progress will slow to a halt as your attention shifts between input and output.

This is why we train to touch type rather than hunt and peck for each key, and why looking at your mouse while trying to point at something will make it impossible to succeed. If you can’t operate off-screen inputs with muscle memory, your input takes focus away from the entity you are manipulating on-screen.

Input = Output

Touch devices are different since the input and output devices are the same. You end up directly manipulating objects of interest, no context shifting or mappings needed.

The Squishy Terrible Middle

Putting a touch interface that requires visual attention between the dedicated input (keyboard and trackpad) and output (screen) of a laptop replaces fast actions with ones that seem necessarily slower.

It also just mixes two very different models – direct mapping (keyboard/trackpad) and direct manipulation (touch) – in a way that I can only assume will make our brains hurt.

How / Why / Huh?

While this may be interpreted as the marquee feature of the device, my guess is that it started more conservatively – adding the technology to enable Touch ID necessitated most of the guts of an Apple Watch / touch device, so why not add a screen and multitouch too!

But when your user experience ignores the fundamentals of human-computer interaction research and the basics of human factors, glitzy marketable features and cool factor will wear off quickly.

I’d be more surprised about this, but after 3D Touch (hey! let’s take the least discoverable, hardest-to-use feature of Android and make it more complicated by adding pressure as a variable!), pinch to zoom on a tiny Apple Watch, and nearly everything about Apple TV, I’m worried there are too few “no’s” in Cupertino right now.