Friday, October 26, 2007

Conversations 2007 conference

Just back from the Nuance Conversations 2007 conference in Boca Raton, Fla. I must say, for my first time at this conference, it was quite different from the conferences I've attended in the past. There were several good presentations, some pretty modest ones, and an overload of social events.

The keynote speech was by futurist James Canton. He gave a polished presentation that I thought was a little light on analysis. I heard a lot of buzzwords and some predictions I've been hearing for years, like the influence of artificial intelligence on our lives.

Many of the sessions were devoted to speech interfaces for mobile devices. The mobile keynote was very good. Scott Kliger of Jingle Networks talked about free directory assistance and how 800-FREE-411 has taken off. Most impressive was the amount of advertising money ($14 billion) that will move from printed phone books to somewhere else when the books are no longer printed. The free DA services will take a good portion of that. He noted that the most requested single number in the US is the California Department of Motor Vehicles.

The big news was Nuance's purchase of Viecore, announced during one of the keynote talks. That generated some buzz.

One of the final sessions made reference to the keynote speaker, with a presentation on "Top 10 reasons you know your futurist missed the mark." The couple I remember: "Refers to 'accessing the Internets'" and "Predicts that the Cleveland Indians will win the ALCS." I've never heard a conference speaker crack on the keynote speaker before.

Sunday, October 21, 2007

IVRs aren't your company's biggest problem

It's easy to blame IVRs for a lot of your company's customer service problems. And it's true: sometimes the IVR just isn't very well designed. A lot of times, though, it helps to look past the IVR to the business practices behind it.

Here's an opinion piece that takes issue with a Gethuman recommendation to send callers to a live CSR if one is available. The author correctly points out that there's value to companies in identifying high-value customers and giving them preferred service. However, you can still prioritize customers without violating the Gethuman principle he identifies.
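To make that concrete, here's a minimal sketch of one way to reconcile the two; the tiers, names, and queue policy are my own hypothetical illustration, not anything from the article. The idea: customer value determines a caller's position in the queue, but the moment an agent frees up, someone gets connected, so nobody is parked in the IVR while a CSR sits idle.

```python
import heapq
import itertools

# Hypothetical customer tiers; lower number = higher priority.
TIER_PRIORITY = {"platinum": 0, "gold": 1, "standard": 2}

class CallQueue:
    """Priority queue that favors high-value callers but never
    holds anyone in the IVR when an agent is free."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # FIFO tie-breaker within a tier

    def add_caller(self, caller_id, tier):
        priority = TIER_PRIORITY.get(tier, max(TIER_PRIORITY.values()))
        heapq.heappush(self._heap, (priority, next(self._counter), caller_id))

    def agent_free(self):
        # An agent is available: connect someone *now* (the Gethuman
        # principle), choosing the highest-value caller who has been
        # waiting longest within that tier.
        if self._heap:
            _, _, caller_id = heapq.heappop(self._heap)
            return caller_id
        return None

queue = CallQueue()
queue.add_caller("caller-1", "standard")
queue.add_caller("caller-2", "platinum")
print(queue.agent_free())  # caller-2 is served first, but nobody is held back
```

The design choice is that priority affects only a caller's position in line, never whether the call reaches a human.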

I have other problems with the Gethuman principle, and with the author's unstated assumption in the article. Both assume that IVR contacts are necessarily bad, or at least worse than an interaction with a human. For simple interactions that involve call transfers, account balances, password resets, and so on, there's no reason why the IVR interaction should be less satisfying than one with a CSR. Better, in fact, once you factor in the time spent in queue waiting to reach a CSR. The fact that many IVRs give such poor service even for simple requests is due to failures of design and implementation, not to a limitation of the technology itself. If your IVR is well done, you can serve simple requests from high-value customers just as well with the IVR as with CSRs.

The author identifies the IVR as the culprit for poor customer service. In many cases that's true: the IVR is poorly designed. The other, harder problem is that many companies are bad at identifying high-value customers and at defining what it means to provide high-level service. What good does it do to get a caller to a CSR who mumbles, can't find information, doesn't recognize opportunities to provide additional service, doesn't answer the unasked question, and so on? Those problems aren't solved by even the best IVR.

The upshot of this is that IVR designers need to be able to see the big picture, and understand what problems the business is trying to solve. If they can't do that, they'll wind up spending a lot of time designing IVRs that don't address real issues.

Friday, October 12, 2007

Low bar vs. high bar usability tests

Here are three usability test situations that I've encountered in the past.

  • An interaction designer looks at the graphical page headers she's been sent by a graphic designer and something looks wrong: the words on the images seem blurry and hard to read. She tries to enhance the images in Photoshop and asks me for my opinion of the enhanced images. We quickly design a comparative usability test of the images. The original images (A) are placed next to the enhanced images (B), and three questions appear below each pair: (1) On a 1-5 scale, which image appears sharper (1 = image A is much sharper, 5 = image B is much sharper)? (2) Which image is easier to read? (3) Which do you prefer? The test materials are put in a Word document and sent to 12 office co-workers. The tests are returned and tabulated in less than three hours: the enhanced images are judged sharper, easier to read, and preferred by a clear margin (see the tabulation sketch after this list). The results are sent to the business lead with the recommendation to consider using the enhanced images until further testing is completed. The business lead who employed the graphic designer rejects the recommendation, saying that the data weren't valid because the test participants weren't real customers.


  • I'm designing a main menu for a touchtone IVR. I'm not familiar with the terms the business uses for the menu options, but the client assures me that callers will understand them, even though callers have no more knowledge of the business than I do. I type up some alternative wordings for the menu options and mail them to some coworkers, with this question below each set of options: please list the types of services or products you would expect to find associated with each option. I collect responses over the next two days, and the results aren't encouraging. I show the results to the client and suggest a different strategy for naming the menu options, with follow-up testing on the proposed menus. The test results are rejected because the methodology is too dissimilar to an actual IVR experience.


  • I'm designing a menu for a speech IVR, and I'm not sure the menu options the business wants will be discriminable to the speech recognizer. I ask that a one-menu prototype be created so I can test the discriminability of the options. The request is refused by a manager because, I'm informed, I'm not a real customer, so it wouldn't be a valid test.
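To make the first test concrete, here's a minimal sketch of how the returned ratings might have been tabulated; the response data below are hypothetical, and the real responses came back in a Word document rather than anything machine-readable.

```python
from statistics import mean

# Hypothetical responses from 12 co-workers for one header image pair.
# Each rating uses the test's 1-5 scale: 1 = image A (original) is much
# sharper/easier/preferred, 5 = image B (enhanced) is much so.
responses = [
    {"sharper": 4, "easier": 5, "prefer": 4},
    {"sharper": 5, "easier": 4, "prefer": 5},
    {"sharper": 3, "easier": 4, "prefer": 4},
    # ...nine more participants
]

for question in ("sharper", "easier", "prefer"):
    avg = mean(r[question] for r in responses)
    # A mean above the 3.0 midpoint favors the enhanced image (B).
    if avg > 3:
        verdict = "B (enhanced)"
    elif avg < 3:
        verdict = "A (original)"
    else:
        verdict = "tie"
    print(f"{question}: mean {avg:.2f} -> {verdict}")
```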

In each case, the tests I've described barely qualify as usability tests. They don't follow the procedures you might find in classic texts on usability testing like the Handbook of Usability Testing or A Practical Guide to Usability Testing. The first example looks more like the procedure for an eye exam: "Which do you prefer, A or B? A or B?" The second looks like a poorly conducted card sort. The third looks more like a QA procedure. So what's an experimental psychologist like me doing running these poorly designed tests?

I call these sorts of tests low bar usability tests. They are simple usability tests, easy to design and execute, that answer the question, "Is it OK to proceed with this design or idea?" Since many interface designs take a good deal of time and effort to complete, and usability tests themselves can take a good deal of time to perform, a low bar usability test can tell the designer whether he or she is on the right track. If the design passes the low bar test, design continues, and the next usability test, the high bar usability test, is run according to the canonical texts. If the design fails the simple low bar test, there's no point in taking the design in that particular direction and running a high bar test. It's time to step back and re-design.

To take one example: if I test the goodness of some menu options in an IVR I'm designing and I can't get the recognizer to reliably distinguish my utterances, should I continue with this design and get data from real customers? Of course not. I've worked with IVRs for years, and I'm very good at getting them to do what I want them to do. If I can't use the menu, no one else will be able to. If the menu passes this low bar test, does that show the menu is ready for prime time? Of course not. I still need to run a high bar usability test. But at least I can move forward with the knowledge that the menu has a chance of working.
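Short of a prototype, there's an even lower-bar check one could run. The sketch below flags menu option pairs whose wordings look confusable; it uses plain string similarity as a crude, hypothetical proxy for acoustic confusability, where a real test would use the recognizer itself, as described above. The options and the cutoff are made up for illustration.

```python
from difflib import SequenceMatcher
from itertools import combinations

# Hypothetical menu options for a speech IVR main menu.
options = ["account balance", "account services", "billing", "technical support"]

# String similarity is only a rough proxy for what a recognizer hears,
# but it can flag obviously risky pairs before any prototype exists.
THRESHOLD = 0.6  # arbitrary cutoff for this sketch

for a, b in combinations(options, 2):
    score = SequenceMatcher(None, a, b).ratio()
    if score >= THRESHOLD:
        print(f"Potentially confusable: {a!r} vs {b!r} (similarity {score:.2f})")
```

A pair flagged here isn't proof the recognizer will fail, but it's a cheap reason to re-word before asking anyone to build a prototype.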

You should add low bar usability testing to your discount usability bag of tricks, remembering to explain the rationale for running a less-than-perfect test procedure to those who learned usability testing by the book.

Thursday, October 11, 2007

Triux.org - Triangle User Experience

My blog is now linked to by Triux.org. Abe Crystal maintains the blog, which features short articles and interviews with local designers. It also carries local job listings that don't appear on the national job boards like Monster.com. So, if you're a Triangle-area designer looking to change jobs, it's a nice resource.

Thursday, October 4, 2007

Where do I find "The Talent Hunt?"

Here's another article from BusinessWeek. (This one may be behind a password, so I'll quote the salient points.) "The Talent Hunt: Design programs are shaping a new generation of creative managers" begins with a promising observation: Procter & Gamble is working with student designers at the University of Cincinnati to brainstorm new products for sustainable living. That's fine, as far as it goes. From the title, I expected to read more about the companies that are "hunting" design thinkers. Instead, the article focuses on the universities that are combining design programs with management programs. An interesting article about a useful practice, to be sure, but not what I'm trying to find out right now. I'm already in a program with interdisciplinary classes between design and management graduate students, and I'm enjoying it immensely.

I need to know more about all the enlightened companies that value design thinking. It's fine that so many schools (60, by BusinessWeek's count elsewhere on their web site) see the importance of developing people who can cross disciplines. So the supply will be there. I just wonder if the demand is there, or whether this is an issue that BusinessWeek thought was important and is taking upon itself the task of educating corporate America. Good for them, but I'm still waiting for the article that talks about companies' desperate search for those hard-to-find manager-designers.

Wednesday, October 3, 2007

Upcoming: Conversations 2007 conference

My company is sending me to the Conversations 2007 conference Oct. 21-24 in Boca Raton, Fla. I'm always happy to attend conferences, meet people, and learn about what's going on in the industry. The conference is hosted by Nuance. Company-sponsored conferences have a different flavor than academic conferences (e.g., HFES): there's a lot more selling done and a lot less data presented, but there are more opportunities to meet business contacts. I'll have a full report when I get back.