HITC senior staff member
Michael Harris is chairing a panel at the 1998
SIGGRAPH Conference titled "Interfaces
for Humans: Natural Interaction, Tangible Data and
Beyond." Huge advances in interface modalities
are evident and imminent; Mr. Harris and five interface
design luminaries will demonstrate and explore some of
the most interesting, promising and clever of these, as
well as their integration into powerful multimodal
systems.
When users talk about computers, they usually describe
the interfaces - because, for most users, the interface is
the system. As Bill Buxton says, "The most powerful
force in shaping people's mental model of the nature of
the beast is that which they see, feel, and hear."
It seemed to take forever for toggle switch panels to
evolve into today's WIMPs, and both are still visual/motor-based
controls; in fact, switch panels were
probably more haptically satisfying! "Keyboards only
work for people who know the Roman alphabet. In twenty
years, people will laugh at us for calling that
technology," says Mark Lucente.
Now, thanks to exponential increases in commonly
available computer power and versatility (and concomitant
cost decreases), significant progress in interface
modalities and their affordances is evident. This
panel will explore the most interesting, promising and
clever of these advances. We will emphasize demonstrable
and practical stuff; we'll have hardware to monkey with,
ideas to ponder and try.
This is a gadget-intensive topic, and we will present
gadgets galore: input devices that can tell systems where
users are looking, what gestures they are making, the
direction and content of their sounds and speech, and
what and how they are touching; display devices that
image directly onto the retina, high-resolution miniature
LCDs, and spatial sound generators. Some of these
innovative transducers operate not only non-invasively
but invisibly. "No one should ever have to see a
computer. The complexity should be soaked into the world
around you," says Lucente.
While humans are adept at sensory integration and data
fusion, computers are far less so. It is clear (and
probably has been since Glowflow in 1968) that multimodal
interaction is a seminal goal, and that achieving it is a
formidable challenge. Now that computational power seems
to be catching up with algorithmic understanding, the
panelists can report and discuss exciting progress in
this area.
Our panelists have decades of experience in interface
design. Their perspectives are theoretical and pragmatic,
incremental and radical; their work is elegantly
inspiring and often delightfully unconventional. All have
long been considered visionaries; now their visions are
achievable and industry is paying attention. They are
seasoned practitioners with their own viewpoints, all are
articulate, and none is shy; the Q&A and discussion
periods promise to be stimulating.
Interfaces to newborn technology are usually
"close to the machine": early automobiles had
spark advance levers, mixture adjustments, hand
throttles, choke controls. As automobiles have evolved,
their affordances have moved "closer to the
user": speed / stop / reverse. We're tracking a
similar evolution in Human-Computer Interaction (HCI)
space - perhaps interfaces are finally growing up? We'll
talk about it...