By Cassie Werber, Quartz
There will come a time when sitting in front of a screen, tapping
keys with your fingers, will seem impossibly clunky and laughably
old-fashioned. Instead, researchers say, the most likely next phase of
human-computer interaction will be typing in “thin air” while gazing—for
example—at the ocean.
As computers become smaller and—ultimately—largely virtual,
researchers are having to think of other ways for us to interact with
them. The current generation of smartphone users may be willing to learn
to type with just two thumbs; some of us have carried around Bluetooth
keyboards for train journeys and conferences. But no one wants to
connect a keyboard to a smart watch, and while voice recognition
software is improving,
users are highly resistant to talking to computers in public. If the
future of screens is augmented reality—as Tim Cook, Apple’s CEO, has predicted—
a virtual space where the user sees both what’s really in front of them
and the “objects” on the screen, how are we going to interact with it?
The question is not just philosophical; it has major commercial applications. Per Ola Kristensson, a professor of Interactive Systems Engineering at the University of Cambridge, says desks in the future will likely be almost
entirely free from hardware: No laptop, no phone, and certainly no
keyboard. A move away from restrictive, unergonomic keyboards would help those affected by carpal tunnel syndrome, repetitive strain injury, back pain, and eye strain, which have become some of the modern worker’s most troublesome, and sometimes debilitating, physical problems.
There are two big
challenges to designing human-computer interaction in this reality,
Kristensson explained. Designers no longer control what the user is
looking at (what he calls the “pixel-space” of the traditional screen);
and if someone’s work takes place in a space without traditional
limits—no buttons being pressed or screens swiped—there’s an added
challenge of working out what it is the user wants to do.
Kristensson presents a possible solution to this challenge in a forthcoming research paper.
The researchers used a head-mounted piece of hardware to project a
virtual screen and a virtual keyboard, on which users “typed.” The
fingers didn’t touch anything; their movement was detected by a depth sensor, which tracked both where the fingers typed and how “deep” they pressed into the virtual surface.
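To make that step concrete, here is a minimal sketch in Python of how a depth-based keypress could be detected. The key layout, press-depth threshold, sensor values, and function names are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch: mapping a tracked fingertip to a virtual key press.
# The keyboard layout, press-depth threshold, and sensor values below are
# hypothetical assumptions, not details from the research paper.
from math import hypot

# Virtual keyboard: key label -> (x, y) centre on the projected plane, in cm.
KEY_CENTERS = {
    "q": (0.0, 0.0), "w": (2.0, 0.0), "e": (4.0, 0.0),
    "a": (0.5, 2.0), "s": (2.5, 2.0), "d": (4.5, 2.0),
    # ...remaining keys omitted for brevity
}

PRESS_DEPTH_CM = 1.5  # how far past the virtual surface counts as a press


def detect_keypress(finger_x, finger_y, finger_z, surface_z):
    """Return the nearest key if the fingertip pushed deep enough into the
    virtual surface, otherwise None."""
    depth = surface_z - finger_z  # positive once the finger crosses the plane
    if depth < PRESS_DEPTH_CM:
        return None
    # Pick the key whose centre is closest to where the finger landed.
    return min(KEY_CENTERS,
               key=lambda k: hypot(finger_x - KEY_CENTERS[k][0],
                                   finger_y - KEY_CENTERS[k][1]))


# One frame from a (hypothetical) depth sensor:
print(detect_keypress(2.3, 0.4, 8.2, 10.0))  # -> "w"
```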
The problem with the resulting data was that hitting a virtual key is far less precise than hitting a real one, which meant lots of errors. To correct for this, the
researchers created a model of what humans were most likely to type.
Working from a dataset of billions of words from Twitter, blogs, and
other online sources, they used machine learning to train a program to
recognize the most common letter combinations. (The technology, which
currently exists in English and German, is similar to that used in
predictive text on a traditional smartphone.)
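As a rough illustration of that correction idea, a decoder can re-rank the keys near each detected touch point by how likely each letter is to follow the text already typed. The bigram counts and neighbor map below are toy stand-ins for a model trained on billions of words, not the researchers’ actual system.

```python
# Toy character-bigram decoder: among the keys physically adjacent to the one
# the sensor reported, pick the letter most likely to follow the text so far.
# The counts and neighbor map are made-up stand-ins for a trained model.

BIGRAM_COUNTS = {
    ("t", "h"): 900, ("t", "y"): 60, ("t", "g"): 20, ("t", "b"): 5,
}

# Keys adjacent on a QWERTY layout (partial, illustrative).
NEIGHBORS = {
    "g": {"g", "h", "t", "y", "f", "v", "b"},
}


def correct_key(previous_char, detected_key):
    """Return the candidate key that best fits the bigram statistics."""
    candidates = NEIGHBORS.get(detected_key, {detected_key})
    return max(candidates,
               key=lambda c: BIGRAM_COUNTS.get((previous_char, c), 0))


# The sensor thinks the user hit "g" right after "t", but "th" is far more
# common in English than "tg", so the decoder corrects it to "h":
print(correct_key("t", "g"))  # -> "h"
```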
The speed at which
users learned to type virtually was fascinating. After users had
practiced on the visible keyboard, the researchers removed all the letters. Users’ typing speed barely changed. “It turns out people remember the
QWERTY keyboard very well,” Kristensson says. “So you can actually not
show the keys at all. It can be completely blank, and people can still
type.” Rather than screensavers of beautiful views, we could be staring
at the views themselves.
But wait: If we’re removing the keyboard, shouldn’t we be removing the need to move the fingers as if they’re touching a keyboard? Indeed, technology allowing people to type simply by looking at the letters does exist. Early
eye-gaze typing, developed for adults with intact cognitive function but reduced mobility, was once slow and fatiguing, but it has come a long way.
At a recent artificial intelligence conference in Cambridge, Kristensson
presented a typing method that combines eye-tracking with prediction: a user can slide their gaze from letter to letter without dwelling, and without worrying whether their gaze “touches” intervening letters.
intervening letters. (It’s somewhat similar to the Android keyboards
that allow a user to slide their finger from letter to letter; Kristensson was one of the inventors of that technology back in 2002.)
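A heavily simplified sketch of that kind of dwell-free decoding, matching a recorded gaze trace against the ideal path through each candidate word’s keys, might look like the following. The layout, vocabulary, resampling, and distance measure are assumptions chosen for illustration; the published method is considerably more sophisticated.

```python
# Simplified gesture/gaze-path decoder: resample the recorded gaze trace and
# compare it against the ideal path through each candidate word's key centers;
# the closest word wins. Layout, vocabulary, and scoring are illustrative
# assumptions, not the published algorithm.
from math import dist

KEY_CENTERS = {"c": (3, 2), "a": (0, 1), "t": (4, 0), "r": (3, 0),
               "o": (8, 0), "n": (6, 2)}
VOCABULARY = ["cat", "car", "can", "cot"]


def resample(points, n=32):
    """Resample a polyline to n evenly spaced points along its arc length."""
    lengths = [0.0]
    for p, q in zip(points, points[1:]):
        lengths.append(lengths[-1] + dist(p, q))
    total = lengths[-1] or 1.0
    out = []
    for i in range(n):
        target = total * i / (n - 1)
        j = max(k for k in range(len(lengths)) if lengths[k] <= target)
        if j == len(points) - 1:
            out.append(points[-1])
            continue
        seg = lengths[j + 1] - lengths[j] or 1.0
        frac = (target - lengths[j]) / seg
        (x0, y0), (x1, y1) = points[j], points[j + 1]
        out.append((x0 + frac * (x1 - x0), y0 + frac * (y1 - y0)))
    return out


def decode(gaze_trace):
    """Return the vocabulary word whose ideal key path best matches the trace."""
    trace = resample(gaze_trace)

    def cost(word):
        ideal = resample([KEY_CENTERS[ch] for ch in word])
        return sum(dist(a, b) for a, b in zip(trace, ideal))

    return min(VOCABULARY, key=cost)


# A wobbly gaze sweep that passes roughly over "c", "a", then "t":
print(decode([(3.1, 1.9), (1.5, 1.4), (0.2, 1.1), (2.0, 0.6), (3.9, 0.1)]))  # -> "cat"
```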
The Tobii Dynavox, a robust tablet-like computer that can be controlled by gaze alone, is one example of “dwell-free” eye technology marketed to people with decreased motor function.
All these solutions, of course, maintain at least the idea
of a keyboard. Is it necessary? Interacting with a computer via thought
alone is a tantalizing prospect. In fact, thought-only typing is
technically possible, but as previous experiments have shown, it’s still
a laborious process.
Facebook
was for a time seriously researching brain-only interaction, though the
head of its moonshot division at the time, Regina Dugan, has since
left. Mind-typing, Kristensson says, has engendered a lot of interest, but it relies on signals too faint and imprecise to lead to real outcomes—or, as he puts it, it’s “the equivalent of being outside a concert house and trying to listen for the flute.” To make it truly efficient, he says, you’d need to drill a port in your head.
There are those who want to take that step. Elon Musk has said he sees the future of AI as necessarily involving the implantation of hardware in the human brain, and has suggested a fine mesh of electrodes that can knit, over time, with brain cells.
How long until it’s normal to see someone lying on a comfortable sofa, gazing at the ceiling, and using a pair of glasses or a nearby eye-tracker
to compose an email or write a novel? Tech companies are secretive, but
“everybody’s racing to figure out how to do this and how to make it deployable,” Kristensson says.