The afternoon started with Noah Richardson of TellMe (now a Microsoft subsidiary), which is all about voice interfaces.
He showed some videos of user tests, talking about how poorly the voice interface works (“say it again, slowly…”), and then some girls in an intercept interview finding some aspects of it wonderful.
When it works, it’s magic. When it doesn’t, it’s awful.
TellMe apps are about search and other contextual content, like nearby movies, sports scores, stock info, etc. The phone captures the audio and sends it over to the servers, which do the voice recognition.
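That split — capture on the device, recognize on the server — can be sketched roughly like this. Everything here (the JSON shape, the `transport` callable, the fake server) is my own illustration, not TellMe’s actual API:

```python
import json

def parse_recognition(response_body):
    """Pull the transcript out of a (hypothetical) JSON reply from the server."""
    return json.loads(response_body)["transcript"]

def recognize(audio_bytes, transport):
    """The on-device half: ship the raw audio off, get text back.
    `transport` stands in for whatever actually sends bytes to the server."""
    return parse_recognition(transport(audio_bytes))

# A faked server reply, standing in for the real server-side recognizer:
fake_server = lambda audio: json.dumps({"transcript": "movies near me"})
print(recognize(b"...captured audio...", fake_server))  # -> movies near me
```

The point of the split is that the heavy recognition models live server-side; the handset only needs a microphone and a data connection.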
Others do this. Google and Yahoo! let you talk to an existing search function. Vlingo integrates with social networks and so on, and has worked on avoiding confusion and distracted use contexts. And of course there are a lot of them in the iPhone App Store.
Phones are, generally, getting smaller, so typing can be difficult due to small or absent keypads. Headsets, where typing is irrelevant, are becoming common and could even become the phone. Voice could also help in the driving environment, and there are those who cannot type due to disabilities.
Voice applications, more than any others, must abide by the “keep it simple” mantra. They also need to indicate that this is a different interaction: not typing but talking. Marketing doesn’t always help; you can’t /really/ say whatever you want and expect to get it.
TellMe did the local (like, location-based) search functions for the Sprint Samsung Instinct (presented by one of the designers at last year’s Design for Mobile).
Context is king. A distracted user can still enter info with speech. It’s not what the user says, but what they get that matters.
They also spend a lot of time working on how audio feedback helps the interaction, and on avoiding distractions after input, so you don’t have to glance down to read results. And even one of the favorites around our office: yell at your phone so it talks back and you can find it. Good anecdote about the smart resume-when-idle readback revealing your personal search to others.
There should be ways of reducing the criticality of errors, such as countdowns and cancels for dialing by voice.
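The countdown-and-cancel idea is easy to sketch; this is my own toy version, not TellMe’s implementation — a cancel press at any tick aborts the dial:

```python
def confirm_dial(number, countdown=3, cancel_pressed=lambda tick: False):
    """Count down before dialing; pressing cancel at any tick aborts the call.
    Returns the action taken, so the voice UI can announce it."""
    for tick in range(countdown, 0, -1):
        # A real phone would speak/show: "Calling 555-0123 in 3... 2... 1..."
        if cancel_pressed(tick):
            return "canceled"
    return f"dialing {number}"

print(confirm_dial("555-0123"))                                   # -> dialing 555-0123
print(confirm_dial("555-0123", cancel_pressed=lambda t: t == 2))  # -> canceled
```

The grace period turns a misrecognition from a wrong call into a non-event: the user hears the countdown, realizes the error, and bails out.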
How to speak from the idle screen? There used to be more dedicated voice dial/voice memo buttons; now, not so much. He talked a little, in different terms, about enabling technologies vs. getting tasks done. Basically, I think, no one wants to run a voice application, but everyone wants to use voice to get data entered and get stuff done.
Voice is a very natural way to interact and communicate. I like this thought, especially as everyone can’t get over saying how innate touch/gesture is, when most gestures are arbitrary and learned, not innate.
The mouse did not displace the keyboard. Jeff piped up to say they don’t play well together, but they still do something parallel. I guess he expects voice will overlay, instead of replacing anything.
He is, like me, seeing an increasing arrival of networked, intelligent devices, all of which will need interfaces, each of which will work in increasingly diverse contexts.
This sparked a really good discussion, since everyone has used at least one bad one, seen the magic moments, and seen so many new products (iPod, Ford Sync, etc.). It also came up that Nuance is both a competitor and a partner.
A buddy of ours (and one of those still actually at Sprint), John Ochenas, stepped in due to a technical issue getting James’ videos to run, and told us about their design and implementation of the One Click UI.
The 3G network was all about data, and was hugely expensive, so Sprint had to justify the expense by pushing adoption of data-intensive services. A menu meant to be a simple list of favorites got really overloaded and complex, because product folks could stick their stuff in there, and it ended up not full of favorites at all.
Support was an issue, as many top line services were complex, or difficult or buggy. Billing was awful, and resulted in lots of refunds. People don’t want data, they want content.
Overall, they were a marketing and merchandizing company, and just pushed stuff from vendors. So in 2003 they decided to start switching from the stick (trial and then pay) to the carrot, with themes and the on-demand service. But they were still acting opportunistically, or as they could sneak in some neat-o keen design.
His design organization worked for years on a giant spec for the Action UI, but they didn’t have the pull, and no one cared. It died. But then (and though he doesn’t say it, about the time the iPhone came out) the business started understanding that the UI is the product. They had objectives to:
- Create value through personalization
- Empower customers with a fearless approach to discovery
- Increase data adoption
- Increase engagement with a consistent experience
The genesis was Bell Canada’s HGUI: a horizontal scroller to select menu items, on an otherwise unoccupied idle/widget screen. They worked collaboratively with Frog (though this might be secret, as he just showed a picture of a frog logo without the word spoken at any point) to build this, with a giant bundle of people on a relatively tight schedule.
He shows a design, and is pretty happy… but they didn’t get it. Echoing my discussion of the home page focus, they descoped a bunch of stuff, so he got essentially just the home screen of this. Avoid overcommitment, to avoid being disappointed.
75% of users who get the phone are familiar with customization. They use 9 apps vs 7 for typical customers. Much higher data use, by plan not as overage. And there is a new style guide to enforce a consistent UI.
They are continuing. It will be released in the next few months. And on and on. They are also rigorously testing, and improving based on user testing of various kinds, especially in-field.
When you have to invent new terms to describe the UI, you may have a problem. And consistency is still not there: when you launch an app, you get a coffee cup and a loading screen. For 2009: put it everywhere, add touch, build a 2.0 version, Android maybe. Complete freedom to design, but don’t change anything. And he showed off a few peeks into the design.
Too much conflict, so there is no main menu, just a carousel. I didn’t really get how this is different; it just looks like a different menu to me. They added “bubbles” as (for example) notifications on the idle screen.
He talked for a bit at the end about other learnings. Like that Frog thought they were in charge (I guess), whereas the large, usually-empowered internal team thought they were.
James Haliburton finally got to present after the afternoon break (yogurt and fruit!). He showed off some demo videos, and passed around devices, to show 3D, including accelerometer use so you can tilt the device and look around stuff.
Projector phones showed some neat possibilities of moving the device from the personal to a shared space. They specifically used the most boring things possible as the demo (look at your calendar) to see if they could make it neater.
They have been around long enough to have made some of the first color icons for mobiles, so they have seen it move from buttons to touch, and they now expect ubiquitous computing and, though he didn’t say so, augmented-reality types of things, I guess.
But really, for all the Acer and Topps and all those 3D things, what is the point? Bill Buxton says “Everything is best for something, and worst for something else.” Consider the value of 3D.
He says three key value categories:
- Visual style and feedback – the wow factor, and where we are now
- Flexible information visualization
- Naturalized interaction – Making it feel more natural, and letting the user feel lazier
A lot of this “3D” is not mesh, but 2D planes in 3D space.
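That “planes in space” approach amounts to transforming a flat quad’s corners and perspective-projecting them to the screen. A minimal sketch of the math, with all numbers and names my own illustration:

```python
import math

def rotate_y(p, angle):
    """Rotate a 3D point around the Y axis."""
    x, y, z = p
    c, s = math.cos(angle), math.sin(angle)
    return (c * x + s * z, y, -s * x + c * z)

def project(p, focal=2.0, cam_dist=5.0):
    """Perspective-project a 3D point onto the 2D screen plane."""
    x, y, z = p
    z += cam_dist  # push the scene away from the camera
    return (focal * x / z, focal * y / z)

# A "card": a flat 2D quad positioned in 3D space.
card = [(-1, -1, 0), (1, -1, 0), (1, 1, 0), (-1, 1, 0)]

# Tilt it 30 degrees, as if the accelerometer reported a device tilt,
# then project each corner to screen coordinates.
screen = [project(rotate_y(p, math.radians(30))) for p in card]
```

The 2D content (an icon, an album cover) is just texture on the card; only four corners ever move, which is why this runs fine on modest phone hardware.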
Space does not have to be complex, or use a real world metaphor.
For natural UIs, simplicity is not the key per se, but flattening the metaphor: users should be able to understand it easily. There is, for example, a subtle movement (at least in shadow) even when idle, so you understand it is 3D and are not surprised by it.
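That subtle idle motion is typically just a small periodic offset applied to the card or its shadow; a toy version, with made-up numbers:

```python
import math

def idle_drift(t, amplitude=0.02, period=4.0):
    """A tiny periodic offset for a card (or its shadow) while idle,
    so the user reads the scene as 3D before they ever touch it."""
    return amplitude * math.sin(2 * math.pi * t / period)

# Sampled over one period, the card gently sways and returns to rest.
samples = [round(idle_drift(t), 3) for t in (0.0, 1.0, 2.0, 3.0)]
```

The amplitude stays far below anything that would read as an event; it only has to be enough for the parallax to register.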
Use caution with design and self-testing. Tapping a key shows that it moves sideways? That’s you; better to support users who try to swipe it sideways.
Visually holistic transitions are important in keeping consistency, and a believable interface.
Too much for me to write down, but he has a chart and series of patterns they use to define (and I guess design) 3D UI. Look it up if you are interested in pursuing this stuff.
They spend a lot of time prototyping in very simple ways: paper, oranges (it’s a sphere), and so on.
His favorite attribute of 3D is true browsing, like flipping through albums in the record store, vs. looking at each item serially and completely.
He doesn’t think there should yet be a set of best practices, as it’s too new, and you’d be constraining it too much.
And lastly, Barbara gave a talk centered around one of her favorite phrases (and a peeve, as it’s usually misquoted): “A foolish consistency is the hobgoblin of little minds.”
Overall, this is speaking out against “We want the same experiences on all devices.” Much as they said when the iGoogle-specific iPhone site was pulled down. This is, again, not far off from our One Web discussions.
And when she started showing off how McDonald’s changes its face, context came up again. One size does not fit all. The BART mobile site doesn’t need space between items, designed for touch screens, when it’s on a small scroll-and-select device. Even that is different.
There was a long discussion of what ended up being One Web, about keeping a consistent experience, but it got wrapped up in interaction or interface, which is different. A good discussion, though. Neither small, poor organizations nor large, complex ones can manage to build more than one actual /site/, but they can offer multiple presentational variations based on a single set of content and software.
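That “one set of content, multiple presentations” point can be sketched as a single content record rendered differently per device class. The device names, the BART-ish data, and the layouts here are my own illustration:

```python
# One content set, shared by every presentation.
CONTENT = {"title": "Departures", "items": ["Richmond 5 min", "Fremont 9 min"]}

def render(content, device):
    """Vary only the presentational detail, never the content itself."""
    if device == "touch":
        # Roomy rows: big, easy tap targets.
        rows = [f"[  {item}  ]" for item in content["items"]]
    else:
        # Dense numbered list for scroll-and-select keypads.
        rows = [f"{i + 1}. {item}" for i, item in enumerate(content["items"])]
    return "\n".join([content["title"]] + rows)

print(render(CONTENT, "keypad"))
print(render(CONTENT, "touch"))
```

The experience stays consistent because the content and ordering are identical; only spacing and affordances change per device, which is the distinction the discussion kept circling.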