Friday, December 28, 2007
Problem decomposition can go wrong when the problem is not sufficiently decomposed and an overly simple solution is applied to the constituent parts. In the design of information websites this is seen clearly when individual pages appear as large masses of text, with different topics and headers crowded onto one page. In such cases the designer may have committed to a simple navigation scheme before understanding the amount and complexity of the information he or she was dealing with. In some cases the project was underfunded or understaffed and the unfortunate designer was left with only bad choices before the time or money ran out. In any case, the consumers of the information are left to struggle with a site with minimal navigation and verbose pages of text. Many FAQ pages take this form not because customers actually ask the questions posed on the page, but because the designer lacked the time or the ability to break the information down properly and supply adequate navigation.
The name of this design anti-pattern derives from the Mother of All Problem Decomposition failures, the Exploding Whale incident, popularized by a newspaper column by Dave Barry. In his column Barry describes an attempt to remove a whale carcass from a beach by the use of dynamite. For many years the story was considered an urban legend, until a video of the event was discovered and circulated online.
Some things to note about the Exploding Whale. The first step in the solution to the problem (dynamite) was regularly and successfully applied to the removal of large rocks from roadways, so the engineers had reason to believe it would work in this case. As noted in the video of the newscast, they had never dealt with whale removal before, and were inexperienced in this particular domain. The engineers incorrectly assumed that pieces of the carcass would be rendered small enough to be handled by the second part of the solution (the numerous scavengers). This failed in two ways, as humorously described by Barry. Many of the carcass pieces were too big to be eaten by scavengers, and the scavenging birds were frightened off by the blast.
Of interest, the newsman responsible for the video notes at the end of the segment that the engineers will, in the future, know "what not to do" when removing a whale carcass. He understood that this event qualifies as an anti-pattern.
Saturday, December 22, 2007
In what became known as The Christmas Truce of 1914, German and British soldiers fighting in France defied their commanding officers and celebrated Christmas with each other in no-man's-land between the barbed wire and trenches. As in the song, the Germans initiated the truce. The soldiers exchanged presents and sang carols. They played football (soccer) with each other. The truce lasted only a day in some places, in others it lasted into the new year. After the truce ended the massacre began again.
The truce is one of many stories that demonstrate that the people in the trenches often have a good deal more sense than their supposed superiors. A lesson for people who would aspire to be leaders.
Wednesday, December 19, 2007
Nearly every project I've worked on has employed some sort of issues list to track project issues that need to be resolved. An issue usually includes a number, issue name, date opened, priority level, name of a responsible party, and due date. Project leads work the issues list on a regular basis, usually starting with the oldest issue, pushing the responsible parties to provide solutions so the issues can be closed. This is a good technique for ensuring that issues are resolved on a timely basis and the project is kept on track.
A problem occurs when the technique is applied to the early stages of user interface design. Design is by nature iterative. Good designers sketch high-level solutions as requirements are gathered, but resist committing to any individual aspect of a design until the overall framework is understood. Designers also produce alternative designs as a way of exploring different solutions to a single problem.
Project team members who want to "help" with design will often place design issues on the issues list early in the project. Where will the navigation bar go? How many sections will this web site have? Will we offer help on every page, and for which fields? In the worst-case scenario, a business contact is named as responsible party, bypassing the designer completely. The project lead then works the issues list, pushing to close the interface issues before the design framework is complete.
Imagine trying to construct a jigsaw puzzle this way: you're handed a bag of puzzle pieces. You don't know the dimensions of the puzzle, nor what the completed picture looks like. You're allowed to reach into the bag and take one piece, then place it on the table in front of you. Once you've placed the piece you can't move it. You take the next piece, and place it on the table. Proceed in this fashion until the bag is empty. What do you think the picture will look like?
That's the issues-list approach to user interface design. If you find yourself on a project with a project lead who doesn't understand design, take the time to explain how design works, with reassurances that the dates can still be met. And, in the early stages of design, keep those design issues off the issues list.
Thursday, December 13, 2007
Saturday, December 8, 2007
You have some top-of-flow menus in which the caller is offered several services. The main menu is always a top-of-flow menu. The order of options can influence what the caller selects, and you have some latitude in ordering those items. Let's consider a bottom-of-flow menu, like the presentation of a payment amount. In any pay-by-phone system the caller is given a payment amount and then offered the option to pay using the IVR. Since the company is taking money from a customer, they'll usually make it pretty easy to talk to a representative at this point. How do you order the menu options?
I noticed when I wrote the options for this menu that I followed an unnamed convention that I call psychological distance from the task. I've seen this done so often, and I do it so often, that I don't even give it much thought. In the pay-by-phone system, from the caller's perspective, they need to select an account to pay, get the payment amount, decide how much they want to pay, indicate their method of payment, pay some or all of the bill, and make sure the company understood their request. Then the caller continues with another task or set of tasks. Here's the ordering of this bottom-of-flow menu: Repeat amount, Make payment, Main menu, Talk to agent.
How is this related to psychological distance from the task? The options are, abstractly, Re-do this step, Go to next step, Leave this task, Leave the IVR. That maps pretty closely to the decision the caller makes when presented with an amount. If a caller intends to pay a bill they have to be sure how much the bill is for. They are at the "get payment amount" step. They won't move to the next step until they have the amount. If they didn't hear the amount, the only thing they are thinking about is hearing that amount again.
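To make the mapping concrete, here's a hypothetical sketch (the category names and distance ranks are my own illustration, not code from any real IVR platform): once each option is classified by its abstract distance from the caller's current step, the menu ordering falls out mechanically from a sort.

```python
# Rank menu options by "psychological distance from the task":
# re-do this step < go to next step < leave this task < leave the IVR.
DISTANCE = {
    "redo_step": 0,   # e.g. "Repeat amount"
    "next_step": 1,   # e.g. "Make payment"
    "leave_task": 2,  # e.g. "Main menu"
    "leave_ivr": 3,   # e.g. "Talk to agent"
}

def order_menu(options):
    """Sort (label, category) pairs by distance from the current step."""
    return [label for label, cat in sorted(options, key=lambda o: DISTANCE[o[1]])]

menu = [
    ("Talk to agent", "leave_ivr"),
    ("Repeat amount", "redo_step"),
    ("Main menu", "leave_task"),
    ("Make payment", "next_step"),
]
print(order_menu(menu))
# ['Repeat amount', 'Make payment', 'Main menu', 'Talk to agent']
```

The code itself is trivial; the hard design work is in the classification step, which is exactly where a task analysis earns its keep.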
The ordering is also consistent with what the company prefers the caller to do--serve themselves using the IVR before asking for an agent. Some terminal state bottom-of-flow menus include the phrase, "if you're finished you may hang up." Where does that belong in the menu? Obviously, at the end, since it means Stop Contact with the company.
The missing piece in all of this, and the thing that makes ordering menus difficult sometimes, is knowing what the caller wants to do next. In the pay-by-phone example it's obvious. In many others it isn't so easy. The lack of a good task analysis and understanding of why people call leads to the disagreement over ordering menus and menu options. That's why "best practices" rules for ordering menus will never be enough.
Monday, December 3, 2007
Adams gave a nice interview on his experience with working at home. He's right about the eating part. It's really hard to stay out of the kitchen during the day.
Saturday, December 1, 2007
The tricks almost never work, or don't work the way the owners intend them to work. If a caller really wants to talk to a representative they'll figure out a way to do it. Eventually. And once they get to a real CSR after they've been plagued by IVR tricks they often aren't very happy. I listen to a lot of calls between customers and IVRs and then follow the calls into the call center. Some customers remain calm with the CSR after a poor experience in the IVR. Others do not, and take out their frustration on the CSR. No one goes away thinking better of a company after a miserable IVR experience.
My recommendation to companies considering using tricks to keep customers in the automation: work on the quality of your IVR first. Monitor, survey, read the reports, improve. Once you're satisfied that the IVR operates flawlessly you can consider using some small inducements (a nice way of saying a subtle trick) to keep callers in the IVR. If it's done properly you might be able to increase your automation rate slightly with no cost to the user experience. However, it all depends on first getting the IVR right. Do the hard stuff first, worry about the tricks later.
Friday, November 23, 2007
The creators of the MBTI specifically state that the test shouldn't be used for selection. They specifically state that no personality type is better than another. The test is a self report measure of one's personal preferences for interacting with others. It doesn't claim to measure ability or motivation. Some researchers question its validity and reliability, and the evidence that it predicts on-the-job performance is mixed at best. But the tool is out there, and HR types are going to use what tools are available when they need help with selection.
Disclaimer: Even though I'm a Ph.D. psychologist, I didn't study personality theory in grad school. I haven't read the primary literature, only the HR and Org Behavior textbook versions of the literature, so I don't claim in-depth knowledge about personality testing. Apparently, though, there is evidence that the five-factor model (Big Five) predicts job performance for certain kinds of jobs. Researchers point to some studies that show that people with high scores on the extroversion and conscientiousness scales tend to perform well on the job. Those sorts of results are enough to encourage HR departments to employ the tests for selection.
What does this mean to you? If you're presented with a personality test during the hiring process you would be advised to answer in a way that maximizes your extroversion and conscientiousness scores. "I love to go to parties and talk to a lot of people. Agree or Disagree?" "I'm not satisfied until the job is done. Agree or Disagree?" The questions are nearly that obvious. What you want to do is get past the HR screening portion of the selection process and talk to the hiring manager about the real requirements for the job. At that point you can find out whether things like mixing with strangers at parties are actually necessary functions of the job.
Good luck, and let me know if you encounter personality testing during your interview.
Monday, November 19, 2007
Some of Speech Technology's readership is savvy enough to offer usable suggestions. Most businesses don't have the luxury of simply asking their customers for design advice, and it's usually bad practice anyway. Customers can tell you what works for them and what doesn't, but they can't do your design for you. Of course, someone at the magazine will need to sort out the conflicting advice they get.
Kudos to Speech Technology. The first step to solving a problem is to recognize that you have a problem. Sometimes that's the hardest thing for a company to do.
Monday, November 12, 2007
So when I was reading Jack and Suzy Welch's column on executive decision making in BusinessWeek I almost split a gut when they used the phrase "the usual suspects" in the same way. Jack, of course, was an enormously successful CEO and leader, and it's impressive that he's able to see problems and solutions that go almost unremarked in many companies. At my old company, any suggestion from an employee that a kickoff include anyone other than The Usual Suspects would have been rejected out of hand.
The Welches advocate for change agents in their column. Designers are by nature change agents. They're good to have around because they can see things from different perspectives. Execs and decision makers need to figure out how to identify and include them in new initiatives. And to stop relying so heavily on The Usual Suspects.
Friday, November 9, 2007
Trying to understand requirements from visual web designs is something I call design archeology. Any software interface is a collection of design decisions constrained by business requirements, user requirements, technical limitations, prior practice, existing standards, and often just the arbitrary personal preference of the designer. One field on a web page follows another because there's a business or technical reason, and sometimes there's no reason at all. Working backwards from design to requirements is a little like a physical archeologist trying to learn something about the way a culture functioned by excavating the site of an ancient city. The archeologist, digging through the dirt, makes inferences about artifacts based on the depth to which they are buried, the condition they are in, their location within a room, their proximity to one another, and so on. It's guesswork a lot of the time.
So it is with design archeology. I have the advantage over my counterparts because I can ask my clients questions about my hypotheses. Often, though, the creators of the web application have moved on and the answer is, "It's done that way because that's the way it was when I arrived." Often the original business rules were never captured anywhere but in the design itself.
Of course, it's important to parse out the requirements from the arbitrary aspects of the design because you need to know what you can safely change and what you need to preserve in your own design. And so it goes: dig dig dig, find, form hypothesis, check with client, dig some more. All in pursuit of a usable VUI design that will serve callers' needs.
Friday, November 2, 2007
Follow the SitePal link on the upper right side of the page and you'll discover that the company is promoting these avatars as a means of increasing conversion rates on e-commerce web sites. They even provide a high level description of a study. I'm skeptical. I'd really like to see some larger, more-conclusive studies before I'm convinced that an avatar can improve conversion rates. And I'd like an explanation as to why they work.
The demo raises questions about a hypothesis called the Uncanny Valley. To quote the Wikipedia definition, the hypothesis states:
As a robot is made more humanlike in its appearance and motion, the emotional response from a human being to the robot will become increasingly positive and empathic, until a point is reached beyond which the response quickly becomes that of strong repulsion. However, as the appearance and motion continue to become less distinguishable from a human being, the emotional response becomes positive once more and approaches human-to-human empathy level.
The Max Headroom videos deliberately exploited the creepiness of a close-but-not-human avatar. The Uncanny Valley is an attractive hypothesis, but there's not a lot of real data to support it. On the other hand, I've seen badly implemented, unironic, Max Headroom-ish, trying-too-hard-to-look-real avatars on the websites of major companies and wondered, "What are those people thinking? Have they tested that? That thing is TWITCHING and STARING at me!" The Uncanny Valley was at work. The Oddcast demo is really well done, in part because it's not real enough to fall into the Uncanny Valley, but I'm curious about the real business benefit of these avatars.
Friday, October 26, 2007
The keynote speech was by futurist James Canton. He gave a polished presentation that I thought was a little light on analysis. I heard a lot of buzzwords and some predictions I've been hearing for years, like the influence of artificial intelligence on our lives.
Many of the sessions were devoted to speech interfaces for mobile devices. The mobile keynote was very good. Scott Kliger of Jingle Networks talked about free directory assistance, and how 800-FREE-411 has taken off. Most impressive was the amount of advertising money, some $14 billion, that will move from printed phone books to somewhere else when the books are no longer printed. The free DA services will take a good portion of that. He noted that the most requested single number in the US is the California Dept. of Motor Vehicles.
The big news was Nuance's purchase of Viecore. It was announced at one of the keynote talks. That generated some buzz.
One of the final sessions made reference to the keynote speaker, with a presentation on "Top 10 reasons you know your futurist missed the mark." Of the couple I remember: "Refers to 'accessing the Internets'" and "Predicts that the Cleveland Indians will win the ALCS." I've never heard a conference speaker crack on the keynote speaker before.
Sunday, October 21, 2007
Here's an opinion piece that takes issue with a Gethuman recommendation to send callers to a live CSR if one is available. The author of the article points out correctly that there's value to companies in identifying high value customers, and giving them preferred service. However, you can still prioritize customers without violating the Gethuman principle he identifies.
I have other problems with the Gethuman principle, and with the author's unstated assumption in the article. Both assume that IVR contacts are necessarily bad, or at least worse than an interaction with a human. For simple interactions that involve call transfers, account balances, password resets and so on, there's no reason why the IVR interaction should be less satisfying than one with a CSR. Better, in fact, once you factor in the time spent waiting in queue to reach a CSR. The fact that many IVRs give such poor service even for simple requests is due to failures of design and implementation, not to a limitation of the technology itself. You can serve simple requests from high value customers using an IVR just as well as with CSRs if your IVR is well done.
The author identifies the IVR as the culprit for poor customer service. In many cases that's true: the IVR is poorly designed. The other, harder problem is that many companies are bad at identifying high value customers, and at defining what it means to provide high-level service. What good does it do to get a caller to a CSR who mumbles, can't find information, doesn't recognize opportunities to provide additional service, doesn't answer the unasked question, and so on? Those problems aren't solved with even the best IVR.
The upshot of this is that IVR designers need to be able to see the big picture, and understand what problems the business is trying to solve. If they can't do that, they'll wind up spending a lot of time designing IVRs that don't address real issues.
Friday, October 12, 2007
- An interaction designer looks at the graphical page headers she's been sent by a graphics designer and something looks wrong: the words on the images seem blurry and hard to read. She tries to enhance the images in Photoshop and asks me for my opinion of the enhanced images. We quickly design a comparative usability test of the images. The original images (A) are placed next to the enhanced images (B), and three questions appear below each pair: (1) on a 1-5 scale, which image appears sharper (1 = image A is much sharper, 5 = image B is much sharper)? (2) Which image is easier to read? and (3) Which do you prefer? The test materials are put in a Word document and sent to 12 office co-workers. The tests are returned and tabulated in less than three hours: the enhanced images are judged sharper, easier to read, and are preferred by a clear margin. The results are sent to the business lead with the recommendation to consider using the enhanced images until further testing is completed. The business lead who employed the graphic designer rejects the recommendation, saying that the data weren't valid because the test participants weren't real customers.
- I'm designing a main menu for a touchtone IVR. I'm not familiar with the terms used by the business for the menu options, but the client assures me that callers will understand the options, even though callers have no more knowledge about the business than I do. I type up some alternative wordings for the menu options, then mail them out to some coworkers with this question below each set of options: please list the types of services or products you would expect to find associated with each option. I collect responses for the next two days, and the results aren't encouraging. I show my results to the client and suggest a different strategy for naming the menu options, with follow-up testing on the proposed menus. The test results are rejected because the methodology is too dissimilar to an actual IVR experience.
- I'm designing a menu for a speech IVR and I'm not sure the menu options the business wants are going to be discriminable to the speech recognizer. I ask that a one-menu prototype be created so I can test the discriminability of the options. The request is refused by a manager because, I'm informed, I'm not a real customer so it's not a valid test.
In each case the designs of the tests that I've described barely qualify as usability tests. They don't follow the procedures that you might find in classic texts on usability testing like the Handbook of Usability Testing or A Practical Guide to Usability Testing. The first example looks more like the procedure for an eye exam: "which do you prefer, A or B? A or B?" The second example looks like a poorly conducted card sort. The third looks more like a QA procedure. So what's an experimental psychologist like me doing running these poorly designed tests?
I call these sorts of usability tests low bar usability tests. They are simple, easy to design and execute usability tests that answer the question, "is it OK to proceed with this design or idea?" Since many interface designs take a good deal of time and effort to complete, and usability tests themselves can take a good deal of time to perform, a low bar usability test can tell the designer if he or she is on the right track. If the design passes the low bar test, design continues, and the next usability test, the high bar usability test, is run according to the canonical texts. If the design fails the simple low bar test, there's no point in taking the design in that particular direction and running a high bar test. It's time to step back and re-design.
To take one example, if I test the goodness of some menu options in an IVR I'm designing and I can't get the recognizer to reliably distinguish my utterances, should I continue with this design and get data from real customers? Of course not. I've worked with IVRs for years and I'm very good at getting them to do what I want them to do. If I can't use the menu no one else will be able to. If the menu passes this low bar test, does it show that the menu is ready for prime time? Of course not. I still need to run a high bar usability test. But at least I can move forward with the knowledge that the menu has a chance of working.
You should add low bar usability testing to your discount usability bag of tricks, remembering to explain the rationale for running a less-than-perfect test procedure to those who learned usability testing by the book.
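For what it's worth, the tabulation in the first example above is trivial to script. Here's a hypothetical sketch (the response data and question labels are invented for illustration; scores run 1-5, where a mean above 3 favors the enhanced image B):

```python
# Tabulate a quick A/B comparison test: each respondent answers three
# questions on a 1-5 scale (1 = strongly favors image A, 5 = favors B).
from statistics import mean

responses = [  # invented answers standing in for 12 office co-workers
    {"sharper": 4, "easier": 5, "prefer": 4},
    {"sharper": 5, "easier": 4, "prefer": 5},
    {"sharper": 4, "easier": 4, "prefer": 4},
    {"sharper": 3, "easier": 4, "prefer": 4},
    {"sharper": 5, "easier": 5, "prefer": 5},
    {"sharper": 4, "easier": 3, "prefer": 4},
    {"sharper": 4, "easier": 4, "prefer": 3},
    {"sharper": 5, "easier": 4, "prefer": 4},
    {"sharper": 3, "easier": 4, "prefer": 4},
    {"sharper": 4, "easier": 5, "prefer": 5},
    {"sharper": 4, "easier": 4, "prefer": 4},
    {"sharper": 5, "easier": 4, "prefer": 4},
]

def summarize(responses):
    """Mean score per question, rounded to two places."""
    questions = responses[0].keys()
    return {q: round(mean(r[q] for r in responses), 2) for q in questions}

for question, score in summarize(responses).items():
    verdict = "favors B" if score > 3 else "favors A" if score < 3 else "tie"
    print(f"{question}: {score} ({verdict})")
```

A mean and a verdict per question is all a low bar test needs: enough signal to decide whether the design direction is worth a proper high bar test, and nothing more.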
Thursday, October 11, 2007
Thursday, October 4, 2007
Here's another article from BusinessWeek. (This one may be behind a password, so I'll quote the salient points.) "The Talent Hunt: Design programs are shaping a new generation of creative managers" begins with a promising observation: Procter & Gamble is working with student designers at the University of Cincinnati to brainstorm new products for sustainable living. That's fine, as far as it goes. From the title, I expected to read more about the companies who are "hunting" design thinkers. Instead, the article focuses on the universities that are combining design programs with management programs. An interesting article about a useful practice, to be sure, but not what I'm trying to find out right now. I'm already in a program with interdisciplinary classes between design and management graduate students and I'm enjoying it immensely.
I need to know more about all the enlightened companies that value design thinking. It's fine that so many schools (60, by BusinessWeek's count elsewhere on their web site) see the importance of developing people who can cross disciplines. So the supply will be there. I just wonder if the demand is there, or whether this is an issue that BusinessWeek thought was important and is taking upon itself the task of educating corporate America. Good for them, but I'm still waiting for the article that talks about companies' desperate search for those hard-to-find manager-designers.
Wednesday, October 3, 2007
Sunday, September 30, 2007
One of the really powerful features is the support LinkedIn gives for job searches. Specifically, it helps you find and connect with hiring managers in the companies you've targeted for a position. By now, everyone knows that you don't send your applications to HR; you try to make contact with hiring managers. That's not easy, though. LinkedIn gives you a way of finding your connections to those individuals. It supports good practice for finding a job, and as a designer, I appreciate the thought that went into doing something this well.
Monday, September 24, 2007
The article addresses the question of how to provide management skills to people who have worked as designers. A few schools, like Illinois Institute of Technology, have design programs that provide some business training, but most design programs do not. For me, the interesting thing is that companies are starting to recognize the value of having people with designer sensibility in upper management.
Of course, I've entered the NCSU MBA program in order to develop my business skills. I'll get a better sense of how many companies value people with a foot in both worlds, so to speak, as I talk to managers for the many companies that are based here in the Raleigh Durham area.
Friday, September 21, 2007
In fact, many blues musicians from that time and place (primarily Mississippi and Texas) were already gone. The Depression had wiped out the small recording companies that had recorded the artists, tastes in music had changed, and nearly all of the old time blues artists had died or otherwise faded into obscurity. A few, however, remained, and there are several dramatic stories of how some of the musicians were rediscovered.
For blues fans it was like travelling to London to find a group of theater actors from the Elizabethan era still performing plays by Shakespeare. The musicians were of a completely different place and time, playing in a unique, emotional style that had been lost for decades. A few, like Skip James, Son House, Mississippi John Hurt, and Bukka White, enjoyed a few short years of popularity in the mid 1960s as they played the college music and folk festival circuit and re-recorded some of their earlier songs.
Here is Skip James singing "Crow Jane" and "Devil got my Woman." James had lost a little bit of his facility with the guitar and vocal power when these movies were taken, but we're fortunate to have anything at all. It's hard to find anything more emotionally powerful than Son House singing "Death Letter." It's possible to find some video of John Hurt, Mance Lipscomb, and Lightnin' Hopkins as well. Give a listen.
Thursday, September 13, 2007
I became aware of Alex in the mid-1990s when I was doing research on spoken word perception. A visitor to the lab where I worked described meeting Alex, who was already famous among language researchers. She was required to wear sterile surgical scrubs before entering the room where Alex was kept. We lab workers were very impressed with our visitor and her story of having met near royalty in the language world.
One of the issues regarding Alex's use of vocalizations was whether the bird was using language in a human sense. That is, did Alex use words and word order (syntax) in order to express ideas? Selected portions of written transcripts of conversations between scientists and Alex appear to show language use at about a three-year-old level. A portion of a conversation is found in this article in Scientific American.
The really compelling examples of Alex's language skill are the recorded conversations. It's hard for people to see and hear these snippets of interactions and not think that the bird is using language in a human way. Maybe we're wired this way, to accept human vocalizations as expressing human-level emotion and thought.
In a way, our speech IVRs are a little like Alex. They're programmed to produce human vocalizations and to recognize and act on snippets of input ("utterances" is the preferred term, instead of words, phrases, or sentences). From this very meager verbal repertoire the humans who call and talk to the systems tend to ascribe capabilities to the systems that don't exist. This shows up in callers' long, verbose utterances, their courtesy language, and their descriptions of such systems as "smart" or "stupid," depending on the systems' performance. Unlike Alex, however, most speech IVRs are neither cute nor engaging, and people who talk to them are usually just trying to get a request fulfilled and get on with their day. Designers who "personify" their VUIs (that is, name them and imbue them with personality) are usually making a mistake.
To designers of VUIs I submit the Alex Rule: Don't personify your VUI unless you can make it as charming as Alex. You can't create a VUI as charming as Alex, so don't try.
Saturday, September 8, 2007
I enjoy reading Bob Sutton's stuff on management and the corporate world. I've blogged about his excellent book The No Asshole Rule. One of his great strengths is the ability to write these perfect little epigrammatic phrases that resonate strongly with anyone who has spent significant time in industry and still operates with some self-awareness. To the aforementioned example he states, as one of his core beliefs:
He goes on to state that "innovation often happens despite rather than because of senior management." He illustrates his statements with anecdotes from his own experience, and solicits others to contribute their own stories. Reading his take on corporate life vindicates a lot of opinions I've formed about working for companies but was never quite able to articulate clearly.
His beliefs about "leadership by getting out of the way" may sound like heresy coming from a professor of management at Stanford, but many of Sutton's opinions challenge the conventional wisdom on the role of managers in companies. That's why he's worth reading.
Monday, September 3, 2007
Sometimes I'm asked if automated phone systems are costing people their jobs. That's a fair question, because some businesses that put in automation do so only with the misguided goal of saving money by cutting a lot of CSRs. I don't think it costs jobs in the US, and in fact I think IVRs that are implemented correctly make CSRs' jobs more enjoyable. I've done analysis in enough call centers to know that one of the biggest problems call centers face is turnover: keeping experienced CSRs in their seats.
CSRs have a tough job. They work with difficult software systems to serve customers who are sometimes abusive. They are constantly monitored by supervisors and QA analysts. The pay isn't that great. Career paths aren't that attractive. And, depending on the call center, a high proportion of the calls are mind-numbingly simple and tedious: password resets, transfers to another department, account balances, "did you receive my check/paperwork/order yet?" types of questions, and so on. People who don't find satisfaction with their jobs tend to move on, and there isn't a great deal of satisfaction in resetting passwords day after day.
It's that last category of call that needs to be automated properly - the simple, repetitive requests that can occupy a large proportion of a CSR's day. Keeping all of the simple questions away from the CSRs would allow them to spend all of their time on questions that require thought and expertise. Of course, call center managers need to do their part: provide the sort of training that allows CSRs to deliver real value, and properly reward those who do.
So I'll go on thinking that I'm doing my little part for the gallant CSRs who work in our call centers until someone sets me straight.
Friday, August 31, 2007
I couldn't buy this magazine fast enough. Articles about the design of products large and small, using design to solve problems, educational approaches to design, green design, lots of illustrations, and a whole lot of other good stuff. Pick up Good if you see it in your local news stand, or go to goodmagazine.com.
Saturday, August 25, 2007
Now, the first thing I want to know when I read about a survey is the methodology used, the questions asked, and the real data. The CNET article I linked to doesn't give specifics, and the published report is very expensive, so I can't vouch for the quality of the research. I'm quite sure that the survey respondents were thinking of efficient, well-designed IVRs, since there's almost nothing so aggravating as a malfunctioning IVR. But if the generalization holds that a group of English-speaking callers prefer speech IVRs to overseas CSRs, then there exists data that partially refutes the argument that customers always prefer to speak to a live CSR.
Personally, I always take a shot at using self service on the phone, since I never know how long I'll be waiting in queue before talking to a person. Maybe it's just professional curiosity, but I like seeing how different IVRs operate. And if the IVR can't give me what I want, I haven't lost anything. I'm assuming that a lot of people feel the same way, but I don't have any data. The CNET article is a pointer to some real research that should be replicated in the US and elsewhere.
Thursday, August 16, 2007
Then I saw this article about speech recognition and GPS for handheld devices. The article rightly points out that manual input is difficult to design and implement on small handheld devices, and speech (once you get the recognition working correctly) is a natural candidate for input. Hey, there's no training involved - we already know how to talk into small devices, right?
The article states "Google research director, Peter Norvig, has indicated that Google is currently spending more on speech and translation than any other area." The article goes on to suggest that Google will produce a competitor to the iPhone that incorporates both speech and GPS. This makes sense. Google took a big step away from desktop-based search with GOOG 411. Enabling location-based search on an easy to use handset is a next, very large, logical step. I'm guessing that Google will partner with a company that produces handsets to implement their ideas. It will be interesting to see what they develop.
Sunday, August 12, 2007
If you've read my blog, you know that my interests are in design and innovation. The NCSU MBA offers a concentration in Innovation Management. When I was researching MBA programs in the area, the description of this program, and what I learned from talking to people in it, sold me. It's exactly what I was looking for in an MBA. I knew that NCSU also has very good human factors and engineering departments, so I'll be able to collaborate with people in those departments as well.
All of the students in the part-time program work full time at companies and already have significant work experience, so I'll be learning from them as well as from the instructors. I must admit, I'm pretty excited about getting back to school, learning this material, and networking with professionals who are at the same place in their careers as I am in mine. This is going to be an interesting two and a half years.
Monday, August 6, 2007
O’Reilly Media, Inc. 2007
This is a short, readable little book about the current state of thinking on the concept of “innovation.” In it, Berkun addresses what he’s identified as the conventional wisdom on innovation from a variety of perspectives: historical, process, and outcomes. Some of the myths could be summarized in this way.
- Popular stories about inventors claim that great ideas, including problem and solution, spring fully formed into the inventor’s head in a moment of “epiphany.” In fact, according to Berkun, this almost never occurs.
- The published histories of invention usually describe a straight line between idea and realization. The real histories of inventions are often littered with false starts, dead ends, and bad decisions; the published histories tend to leave out the messiness and uncertainty.
- We’ve all been conditioned to believe that great ideas sell themselves, and we become discouraged if our ideas aren't recognized immediately for their value. In truth, people (customers, clients, investors) usually fear truly innovative ideas.
- The more impressive the manager’s title, the better their ideas. Managers are managers because they produce, recognize, and properly evaluate innovative ideas.
In short, our understanding of how innovation occurs is wrong, and therefore our ability to judge innovation in the present is faulty. Berkun debunks these myths with an excellent selection of anecdotes and research into the history of innovation to support his thesis.
So how should we judge whether an idea or product is innovative? Judgment is best applied in retrospect, after the idea or product has been accepted or rejected. In the present, it’s impossible to tell whether an idea is innovative, in part due to the myths we subscribe to about innovation. What the author is doing is applying the Darwinian theory of natural selection to the domain of product innovation. Products are placed on the market and fail or succeed based not only on the features of the product but on the characteristics of the environment, i.e., the marketplace, or what the environment is selecting for. An innovative product may fail in the current environment, but succeed later, when the environment has changed. It is only in retrospect that we are able to view the product as innovative, because it has succeeded.
Berkun’s book is an easy, but thought-provoking read for anyone interested in the topics of innovation and product design. Recommended.
[This post was reprinted in the Triangle Usability Professional Association's (TriUPA) Fall 2007 newsletter.]
Monday, July 30, 2007
I read an interesting news item on Jim Baker recently. He's moving from Carnegie Mellon to Johns Hopkins University to work on a new research project for the Defense Department and the National Security Agency. The NSA wants to be able to conduct surveillance on millions of phone conversations, and today's speech recognition software isn't up to the task. Thus, they've funded one of the founders of the field to get speech recognition unstuck.
I'm conflicted about this. I admire Baker for the pioneering work he did on speech recognition. I understand that his initial research was funded by the DoD through ARPA, and much good came of that research eventually. Baker is good enough to be able to move the engineering of speech recognition forward. However, I don't trust the motives of the NSA, and if this is made to work properly we'll have on our hands another Big Brother-styled technology available to the government for eavesdropping on US citizens.
Maybe we'll get lucky. Maybe the funding will run out, the program will be seen as an expensive failure, and the NSA will say "so long and good luck." Then Baker will release some really groundbreaking research that will benefit everyone working in speech. That's what I'd like to think.
Wednesday, July 25, 2007
I work at home, from an office just off the front door of my house in Durham, NC. I've joined a growing number of employees who work from home, so in that sense, at least, I'm at the front of the movement out of traditional office spaces and into alternative work environments. Companies like the arrangement because it saves them the cost of moving employees and giving them office (or cubicle, or workroom, or anything else) space. I like the arrangement for a number of reasons.
- I'm a VUI designer for telephony systems, so much of my work is well-suited for working by telephone.
- I can focus on doing work. I don't spend much time on administrative stuff and office politics, the twin curses of my previous positions.
- I spend no time on unproductive travel by car.
- I can listen to my blues and old-time music and no one complains.
- I've always been good about writing reports as evidence of my work, and that's a useful skill to have if you work remotely.
I guess I could list some shortcomings of this arrangement. One is that I don't get to meet some of the people I work with and rely on, so there's no chance to socialize and get to know them as people. Or at least it isn't as easy. On a long-term basis, you need to be in a traditional office setting in order to move up, but I'm not worried about moving up. If anyone has experiences with the pitfalls of working at home, let me know.
Tuesday, July 17, 2007
Whoa. Let's take a breath here. Recordings of interactions between a caller and an IVR don't necessarily mean that they're being used for "phonemic analysis." I listen to recorded calls all the time as part of tuning exercises to improve an IVR application's performance, but there's no "phonemic analysis" involved. And as far as storing voice prints, for the amount of speech that GOOG 411 requires for a search, it would be a pretty ineffective way of collecting a voice print. Not to say that it couldn't be done, but it's not the way voice prints are usually collected.
There's no doubt about one thing: people get very concerned over voice prints and other types of biometrics. I've conducted research on consumers' perceptions of voice prints and what it takes to get people to trust the technology enough to use it. There is genuine mistrust of biometric technologies that companies who employ biometrics need to deal with.
However, I can't find any reference to voice prints in any of the information provided in this article. The author read "recordings" and thought "voice prints." If that's a typical response from a customer to a "calls recorded for quality" announcement, then we all need to do some serious customer education. If Google is, in fact, collecting voice prints, I'd sure like to know how they are doing it.
Friday, July 6, 2007
However, the answer to the question, "Why is it taking so long for speech technology to catch on?" is simpler than the article lets on. For speech technology, there's really no "catching on" to do. The reason people don't like to use speech systems is that there are so many bad ones out there, and the likelihood that people have had to deal with a bad one is pretty high. That sets expectations for the next time someone calls and encounters an unfamiliar speech system. If speech systems were uniformly good, people would use them. A question more to the point is, "Why are there so many bad speech systems out there?"
Part of the reason is the difficulty of implementing speech recognition systems properly. I don't mean being able to code up a small application that can pass a handful of test cases conducted under optimal conditions by one speaker in a test lab during QA. I'm talking about large, enterprise-critical applications that handle thousands of calls a day from callers all over the country under every condition imaginable. Getting big applications to function properly takes work and re-work and tuning and constant monitoring. It takes knowing all of the tricks that are available for improving speech recognition performance, and the willingness to implement those tricks despite the expense. Veteran IT people with no experience in speech recognition underestimate the amount of work it takes to get speech systems working right. People who work in speech recognition say that the technology is mature, and that's correct in a somewhat narrow sense. There still isn't enough experience with the technology to prevent a lot of half-right systems from being released.
The larger problem, however, isn't with the technology. It's with the managers and stakeholders who try to do too much with speech recognition systems. Speech recognition applications are intended to handle simple, repetitive requests that don't require thinking: password resets, simple routing requests, caller identification, forms, rate information. The logic of implementing speech applications is that the applications handle the simple stuff, and everything else gets passed off to an agent. Unfortunately, managers tend to get carried away with the possibilities of a speech system and insist on functionality that is difficult to implement properly, will rarely or never be used, or simply doesn't solve a business problem. It's Jetsonian thinking at work. More than that, though, it's the assumption that customers will happily use anything you put in front of them.
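That division of labor - handle the simple stuff, hand everything else to an agent - can be sketched in a few lines. This is only an illustrative sketch, not any vendor's API; the intent names, retry limit, and return values are my own assumptions:

```python
# Sketch of "handle the simple stuff, pass everything else to an agent."
# Intent names and the retry limit are illustrative assumptions.

SIMPLE_INTENTS = {"password reset", "account balance", "office hours"}
MAX_RETRIES = 2

def route(recognized_intent, retries=0):
    """Decide the IVR's next action for one recognition result."""
    if recognized_intent in SIMPLE_INTENTS:
        # A simple, repetitive request: automate it, keep it off the CSR's desk.
        return "self-service: " + recognized_intent
    if recognized_intent is None and retries < MAX_RETRIES:
        # No-match: re-prompt briefly rather than giving up immediately.
        return "reprompt"
    # Anything complex, out of scope, or repeatedly misrecognized goes to a person.
    return "transfer to agent"

print(route("password reset"))    # self-service: password reset
print(route(None))                # reprompt
print(route("dispute a charge"))  # transfer to agent
```

The point of the sketch is the shape of the logic, not the specifics: the self-service set stays deliberately small, and the default action is always a live agent.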
I talked to Ballantine after one of his presentations. He'd made some good points about building systems to solve business problems and addressed an issue that had been bothering me about some speech advocates' and business owners' insistence on creating "natural sounding" systems. I said that I didn't think we should be trying to create systems that could pass the Turing Test. He readily agreed. We'll move the practice forward when we keep our focus on what customers want, and create our business cases to align with their needs.
Thursday, July 5, 2007
With that said, I found a simple DTMF (touchtone only) application that has a persona that really hits the mark. Geek Techs' (877-433-5835) IVR is just a simple routing application, but the voice perfectly evokes the image of an earnest, helpful, socially challenged computer geek with black horn-rimmed glasses held together by tape and a plastic pocket protector. The IVR also contains a bit of fun for callers who may already be pretty frustrated with their computers:
“To hear the sound of a computer becoming absolutely disintegrated by a 10 pound sledgehammer, press 5.”
Of course, if the application misroutes callers or they're left in queue for a long time, the goodwill created by the IVR's persona will disintegrate as well. There's a limit to what a good persona will buy you.
Monday, July 2, 2007
I went to lunch last week with about eight others in the TriUPA group. We talked shop, traded business cards, and caught up on everyone's news. Chapter president Abe Crystal gave me a nice new book, The Myths of Innovation by Scott Berkun, on the condition that I write a review of it for the TriUPA website. I'll do that soon, I promise.
Having other designers around to talk to and trade ideas with is invaluable. For many years I was in a town that, for various reasons, couldn't support a community of designers, and it was a long, tough slog. If you're fortunate enough to be in a place that supports an active community, be grateful for what you have, and say thanks to the people who do the work to pull events together (thanks, Abe and Jackson). If you don't have that community, think about what you can do to build one. It's a lot of work, and I'm sorry to say that I haven't done my part, but it's a valuable thing, and credit accrues to those who make the effort.
Tuesday, June 26, 2007
There are, in fact, lots of data. Nearly all implementations go through at least one tuning cycle in which caller utterances are recorded and transcribed and matched to the speech recognizer's responses. These data are then analyzed for things such as grammar coverage and the appropriateness of the prompts and so on. Each tuning exercise generates a LOT of data. Unfortunately, the data are usually locked up by the company conducting the tuning exercise, never to see the light of day. There are legitimate reasons why companies don't share their data. Publication of the data is of little value to the company that produces it, and competitors could take advantage of it without responding in kind. Even if scrubbed, data could be used to identify the company or business units for which the data were produced. So I can see why companies are disinclined to distribute their data.
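To make the tuning analysis concrete, here's a minimal sketch of one computation from such an exercise: grammar coverage (how often callers said something the grammar could accept) and in-grammar recognition rate. The utterances, the toy grammar, and the data layout are entirely made up for illustration:

```python
# Illustrative tuning-data sketch. Each pair is (human transcription,
# recognizer result), with None standing for a no-match. All data invented.

utterances = [
    ("account balance", "account balance"),
    ("balance", "account balance"),
    ("uh let me talk to somebody please", None),  # out of grammar
    ("agent", "agent"),
    ("operator", None),                           # in grammar, but missed
]

grammar = {"account balance", "balance", "agent", "operator"}

# Coverage: share of what callers actually said that the grammar accepts.
in_grammar = [(t, r) for t, r in utterances if t in grammar]
coverage = len(in_grammar) / len(utterances)

# Recognition rate on the in-grammar utterances only.
recognized = sum(1 for t, r in in_grammar if r is not None)
accuracy = recognized / len(in_grammar)

print(f"grammar coverage: {coverage:.0%}")            # 80%
print(f"in-grammar recognition rate: {accuracy:.0%}") # 75%
```

Low coverage points at the grammar and the prompts (callers aren't saying what the grammar expects); a low in-grammar rate points at the recognizer's configuration. Multiply this by thousands of calls per tuning cycle and you get a sense of how much data each exercise produces.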
Still, I wonder what sort of intervention could be put in place to motivate companies to share their tuning data for the purpose of improving the best practices in the industry. If anyone has ideas, let me know.
Thursday, June 21, 2007
Rule 1: Don't over-engineer your system
Replicants Roy and Leon were built as fighters, presumably to protect property on off-world colonies. They were also created smarter and stronger than humans, and could pass for humans on Earth. Is it a good idea to build autonomous agents that are smarter and stronger than you and give them the capacity and motive to kill? No. It's a bad idea. Build your VUI system to do what its users need it to do and nothing more. And don't try to build something that could pass for a human.
Corollary to Rule 1: Keep your eye on the ROI
The technically sophisticated replicants were designed to self-destruct after four years, out of fear that they could develop uncontrollable feelings and emotions if allowed to live any longer. Replacing advanced technology is extremely expensive - better to build your system to last, and maintain it as needed.
Rule 2: Words are powerful shapers of behavior - be brief and to the point
Deckard and Sebastian were manipulated into doing what others wanted them to do; Deckard into chasing replicants and Sebastian into setting a meeting between Roy and Tyrell. The dialogs required to do this were terse but gave sufficient direction to the targets that they understood what needed to be done. Good VUI dialogs are short and give users direction on what they need to say, without resorting to the painful "In order to do x, say x" construction.
Rule 3: Our methods for evaluating advanced technology aren't good enough - we need better tools
The Voight-Kampff test for determining whether a subject was a replicant or a human was nearly obsolete. It took Deckard, a trained evaluator, 100 questions to determine Rachael's identity. Our evaluation methods for VUIs are in a similar state. We still mostly rely on usability testing and questionnaires that were developed to evaluate web pages and GUIs. I gave a presentation at a UPA workshop in 2002 pointing out that our old usability test protocols aren't sufficient for evaluating things like autonomous agents and speech systems. We need better evaluation methods.
Tuesday, June 19, 2007
The little twist introduced in the movie is not about the replicants' supposed intelligence. The replicants clearly are intelligent. The movie even tweaks an old AI argument in one scene when one of the replicants beats its creator in chess. At one time it was said that machines could never play a good game of chess because chess requires insight and intuition, those very high-level forms of intelligence that machines could never possess. What's more, machines can only do what you program them to do, so how can you program something that's better than yourself? Of course, that argument has been settled pretty decisively.
Instead, the movie explores the idea of whether the artificial beings are capable of emotion and feeling. The replicants show a great range of emotion: anger and cunning, loyalty, sadness at the death of one of their group, resentment over the treatment of others in their class, and fear over their impending deaths. In fact, they demonstrate far more emotion than the humans in the movie. Harrison Ford's character is a tough guy detective type who drinks in order to suppress his emotions. His employer shows little concern for him despite his poor condition.
So, the question. If the replicants display what appears to humans as emotion, are they really feeling emotion? If so, do we need to be concerned about their exploitation? Were the replicants' emotions something that they were programmed to display, or were they a side effect of having been made so complex? Lots of interesting questions, with no answers forthcoming in the movie. Certainly Ford's detective was affected by the replicants. He feels physically ill at having destroyed one, and it's suggested that the demands of the job are one of the reasons he drinks so much. By the end of the movie, one's sympathies are with the exploited replicants.
What does this have to do with VUI? Besides the obvious fact that the replicants have astonishing speech recognition capabilities (there's not a single "Sorry, I didn't understand" in the whole movie), there may be lessons for those who concern themselves with the design of the VUI's persona. I'll discuss the lessons in my next posting.
Thursday, June 14, 2007
"Calls recorded for quality. GOOG 411 experimental. What city and state?"
It's quick and businesslike. No effort to be cute or fancy. Let me be the first to say (at least I think I'm the first) that Google has apparently tried to capture the visual presentation of its web page in an auditory presentation. It succeeds. Its web page is just a white page with a logo, an input box, two buttons, and a small number of links. If you were trying to translate that visual presentation into a VUI, you couldn't do a better job than Google has.
By being simple and almost terse, Google created a unique, differentiated experience. If its speech browser's performance is as good as its web site, it will have a winner.
Sunday, June 3, 2007
- The second danger of standards and guidelines: if taken as gospel, they constrain active thinking by the designers who rely on them.
Designers who make informed judgments about stepping outside of common practice to try something new are engaging in what Diego Rodriguez and Ryan Jacoby call "design thinking": the act of taking risks in order to produce distinctive and usable products. Rodriguez and Jacoby's excellent article on design thinking asserts that designers take risks in order to learn and to excel, but mitigate those risks using skills that should be in every designer's skill set: prototyping, storytelling, and the ability to actively listen to customers.
So where does that leave user interface guidelines?
In fact, knowing when to ignore rules and push boundaries and when to re-use portions of previous designs and previous practice are both characteristics of good designers. Well-written guidelines are one way that re-use is achieved, and re-use is generally a good thing. The best guidelines are created (designed) from user data and from practice. They're a way of capturing the experience designers have with their designs and putting it into a usable form. The trick is in knowing when to re-use and when to take a chance and design something truly novel.
The VUI design world, in particular, suffers from a lack of published, reliable data on which to make informed decisions about VUI design. Go to any CHI or HFES conference and you'll find loads of studies on web and GUI applications - those domains have been under investigation for years. Not so in VUI design. As some speakers at VUI conferences insist, we really do need better guidelines for design. Before we get there, though, we need more and better data.
Take home message: yes, "design thinking" is good. Guidelines can be misused if taken as gospel. But we need a way to re-use our successes without resorting to copying. That's design thinking as well.
Tuesday, May 29, 2007
First, some definitions. The VUI best practices article and those it references aren't talking about the same thing, and therein lies some confusion. In the narrowest sense, standards are technical specifications that, if implemented properly, ensure interoperability. More generally, standards are published criteria against which a product or service is measured. You may design the most usable, functional, beautiful table lamp ever invented, but if the plug doesn't follow standards for electrical plugs and it doesn't fit into your country's wall socket, then your product will fail in the marketplace. The HFES 200.4 Software User Interface standard for Voice Input/Output and Telephony is a set of recommendations that, if followed by IVR product developers, seeks to improve interoperability by presenting end users with a common set of interactions, terminology, and business processes regardless of IVR platform or application. This standard was derived from both data and common practice and was drafted and reviewed over the course of several years by a committee of industry practitioners. Standards are written to be unambiguous and easily interpretable by their users.
User interface design guidelines are suggestions and recommendations, also derived from data and common practice. Any set of guidelines, even one that is well written, will contain guidelines that conflict with one another, and all are subject to interpretation. For example, a common IVR guideline is that prompts should be clearly written and easily understood. Another common guideline is that dialogs should support the efficient completion of tasks. These are both good guidelines, but there's a tradeoff between clarity and efficiency, and part of the skill of a dialog designer is to strike a balance between the two.
"Best practices" could refer to nearly anything: standards, user interface guidelines, business processes, development process, fulfillment, metrics, the treatment of customers, etc. A lot of designers will insist (as I often do) that user-centered design is "best practice" for user interface design and development. The GetHuman standard mixes suggested best practice for business process with IVR design guidelines. So, back to the original topic: what are the dangers of VUI best practices and standards?
First off, a digression. Let's assume that the guidelines or proposed best design practices do, in fact, provide good design guidance and are interpretable by their intended end users: interface designers. This isn't always so. Case in point: some managers of a sales force with which I was acquainted had complained about the quality of the online applications that the sales force and the customers needed to complete to receive products and services. The sales force was required to handle applications for multiple lines of business, and each had its own idiosyncratic, de facto standards for creating applications. The look-and-feel of the applications was very different per line of business. A decision was made to standardize the look-and-feel of the various online applications. This was sold to the sales force in terms of improved consistency across the various applications and reduced training for staff - what you learned about the flow of one application applied as well to others. The online applications were sold to the lines of business as reducing paperwork in their back offices (the online forms replaced traditional paper applications). The standards were sold to the designers of the applications as reducing their workload. Some major design decisions were made for them, and the resulting constraints on their designs meant that they wouldn't need to "reinvent the wheel" every time they had to design a new application. This is all motherhood and apple pie stuff, so far.
- The first danger of standards and guidelines: they can be so poorly written as to be unusable.
Unfortunately, the authors of the guidelines had created a document that was uninterpretable, even by experienced user interface designers. The guidelines were often arbitrary, vague, and conflicting. Moreover, QA was done by the guideline authors themselves, a situation you want to avoid for the same reasons you split up development and code QA. The designers complained bitterly about the quality of the guidelines and of the QA process itself, but any feedback on the guidelines themselves was met with defensiveness by the authors. This is a known issue among designers, and a number of sessions at conferences such as the Usability Professionals Association's are organized around how best to create usable recommendations and guidelines. End of digression.
But what if the standards and documented best practices are well written, consistent, and do in fact represent good guidance? That's a harder question. I think that's the case the article on VUI best practices and standards was addressing. I'll talk about that in my next blog entry.
Wednesday, May 23, 2007
Toby Dodge, an Iraq expert at the University of London, resurrected memories of the Retreat with his observations on the construction of the new US embassy in Iraq.
"A fortress-style embassy, with a huge staff, will remain in Baghdad until helicopters come to airlift the last man and woman from the roof," he said, adding his own advice to the architects of the building: "Include a large roof."
I've thought about the Retreat from Saigon at times during projects when it becomes clear to everyone (except a couple of reality-challenged project backers) that the project is going down. Key people with the political connections to do so announce that they've been reassigned to a different, higher-priority project. Managers who talked for weeks about "our" project and "our" decisions start talking about "your" project and "your" decisions. Contractors are abruptly removed and thanked for their efforts, or disappear without explanation. Those are the folks who had a seat on their helicopter. I've usually been one of the unfortunates who have been left on the roof, waiting for the chopper that never arrives.
That's project life. If you've worked projects long enough you've probably had your own Retreat from Saigon moments.
Tuesday, May 22, 2007
Kelley on how to make innovation work:
How to Cross-Pollinate
- Show and tell. The IDEO Tech Box, a collection of hundreds of promising technologies, is a systematic approach to collecting and sharing what we know.
- Hire people with diverse backgrounds. Sift through the job applications looking for someone who will expand your talent pool or stretch the firm's capabilities.
- Create multidisciplinary project rooms and create lots of space for accidental or impromptu meetings among people from disparate groups.
- Cross cultures and geographies. A well-blended international staff seems to cross-pollinate naturally from other cultures.
- Host a weekly speaker series. Nearly every week, a world-class thinker shows up to share their thoughts with us.
- Learn from visitors. Listen to what clients or prospective clients say about their industry, their company, their point of view.
- Seek out diverse projects. A broad range of client work allows you to cross-pollinate from one world to another.
- Coach more, direct less. Good executives and managers inspire their staffs to develop their confidence and skills so they can seize critical "big game" opportunities.
- Celebrate passing. Break teams into smaller groups of three to six to increase the number of triangles where team members can pass ideas and responsibilities.
- Everybody touches the ball. Find one or more responsibilities for every player.
- Teach overlapping skills. Create opportunities for team members to assume nontraditional roles and push forward initiatives. Find out team members' unique passions and interests and put them to work.
- Less dribbling, more goals. Encourage the sharing of ideas and initiatives. Solo dribbling can give a project the critical first push, but then you need teamwork to bring a project home.
- How will your company define a successful innovation program?
- How will your organization fund the innovation process?
- What corporate resources will be available to support your effort?
- How often will the stakeholder groups meet to review your innovation propositions?
- How many task teams will you sponsor yearly? How often will you put together these teams?
- How much logistical support will be given to your innovation staff?
- What rewards or recognition can people expect for participating in this program?
Good stuff. Kelley, as program director at IDEO, has obviously lived through the scenarios he describes in his book, and his recommendations for making innovation work are spot on.
I was working on a small self-initiated project to introduce some innovation concepts to a company I worked for. One of the team members asked, "How do we get this company to think innovatively?" I said, "How does this company treat people who try to innovate?" We talked about the existing reward structure for innovators (none) and the likelihood that an innovator would wind up in trouble for trespassing on someone else's turf (high). It's all about the system of rewards and punishments, folks. All of the "innovation camps" in the world can't overcome an environment where innovation and design are punished.
Monday, May 21, 2007
I must say, BusinessWeek really gets it when it comes to writing about innovation. They know who to talk to and how to extract the important stuff the innovators are trying to get across. For the most part, they recognize hype when they see it. Here are some of their lessons from Kelley's book.
Seven Secrets of Effective Brainstorming
- Sharpen your focus. Focusing on a specific latent customer need or one step of the customer journey can often spark a good ideation session.
- Mind the playground rules. Go for quantity, encourage wild ideas, be visual, defer judgement, one conversation at a time.
- Number your ideas. Numbering your ideas motivates participants, sets a pace, and adds a little structure. A hundred ideas per hour is usually a sign of a good, fluid brainstorm.
- Jump and build. You may have a flurry of ideas, and then they start to get repetitive or peter out. That's when the facilitator may need to suggest switching gears.
- Remember to use the space. Write and draw your concepts with markers on giant Post-Its stuck to every vertical surface.
- Stretch first. Ask attendees to do a little homework on the subject the night before. Play a zippy word game to clear the mind and set aside everyday distractions.
- Get physical. At IDEO, we keep foam core, tubing, duct tape, hot-melt glue guns, and other prototyping basics on hand to sketch, diagram, and make models.
- Make room for 15 to 20 people. Even if the core project teams will be small, you'll want to share the results (and even work in process) with lots of your colleagues.
- Dedicate the space to innovation. Your creative efforts need to live on without scheduling or moving.
- Leave ample wall space for sketch boards, maps, pictures, and other engaging visuals. Don't use delicate surfaces or precious materials that would inhibit maximum creative use of all vertical and horizontal surfaces.
- Locate your lab in a place convenient to most team members. Make it near enough for even part-time team members to drop in...but far enough away so they can't hear their desk phone ringing.
- Foster an abundance mentality. Stock the lab with an oversupply of innovation staples: prototyping kits, Post-Its of every size and color, masking tape, blank storyboard frames, fat-tipped felt markers for drawing, X-Acto knives, and so on.
- Practice the Zen principle of "beginner's mind." Those who cultivate it have the wisdom to observe with a truly open mind.
- Embrace human behavior with all its surprises. Don't judge, empathize.
- Draw inferences by listening to your intuition. Don't be afraid to draw on your own instincts when developing hypotheses about the emotional underpinnings of observed human behavior.
- Seek out epiphanies through a sense of "vuja de." Vuja de is the opposite of déjà vu. It's a sense of seeing something for the first time, even if you have witnessed it many times before.
- Search for clues in the trash bin. Look for insights where they're least expected -- before customers arrive, after they leave, even in the garbage. Look beyond the obvious, and seek inspiration in unusual places.
I watched a company go through the process of trying to learn how to innovate. They got the physical part mostly right: they cleared a lot of dedicated space, put new equipment and comfy chairs in those spaces, and designated people as "creative types" to use the space to "innovate." It was the process stuff they couldn't get right. The "creative types" revelled in their positions as company-designated innovators, were extremely territorial, were rigidly inflexible in their approach to problem solving, and shot down the ideas of others who weren't part of their in-group. If customers were consulted at all, it was only as a way of legitimizing the creative types' own opinions of what needed to be created. The managers in charge seemed oblivious to the fact that the creative types had no real commitment to innovation other than as a means of promoting their own agendas.
Bottom line: Creative types have a duty to create and innovate. Managers need to learn enough about the innovation (or design) process to recognize the proper behavior, reward the good and discourage the bad.
Sunday, May 20, 2007
The book describes the 10 personas, or roles, that designers can play on a project. These are not job categories, but describe the different tasks that need to be performed in order to move an innovation effort forward. The Anthropologist is the field worker who looks for insights about consumer behavior by observing people and collecting qualitative data. The Experimenter creates prototypes of potential products and tests the prototypes with potential customers. The Hurdler is the person who is able to overcome obstacles in order to get the effort completed. The other roles are: The Collaborator, The Director, The Experience Architect, The Set Designer, The Caregiver, The Cross-Pollinator, and The Storyteller. Each role description is backed up by case studies that illustrate the role in action.
Some of the role descriptions require education and training to perform properly. For example, The Anthropologist and Experimenter tasks are part of most Human Factors analysts’ education and training. Other roles described in this book, such as The Hurdler, reveal an approach to getting work done that is not specific to any discipline. The book stresses the need for collaboration and describes what effective collaboration looks like. The author cautions against relying on a single expert or guru to deliver an innovative solution; something that seems to me to occur in companies far too often.
The author spends a good deal of time abusing those who play the pernicious role of The Devil's Advocate. These are the people who attack others' ideas during brainstorming sessions without taking responsibility for their actions. Kelley claims that The Devil's Advocate is the single biggest innovation killer. It appears to me, having watched people play this role in so many project meetings, that there are more people playing it effectively than all ten of the other roles combined. Kelley offers practical suggestions for overcoming the negative effects of The Devil's Advocate.
BusinessWeek carried an article about the IDEO innovation process in its Nov. 7, 2005 issue that is derived from some of the advice given in the book. If you subscribe to BusinessWeek, you can access the article by clicking the provided link. I should say, as someone who has participated in and facilitated brainstorming sessions, that some of the ideas presented in this article seem obvious on their face but are often extremely difficult to put into practice.
If you can't follow the link I'll excerpt part of the article in my next blog entry.
Saturday, May 19, 2007
I could go on and on about the things I like about this book, but it would be better for you to read the book yourself. Highly recommended. Read it now: Robert Sutton, The No Asshole Rule: Building a Civilized Workplace and Surviving One That Isn't. Warner Business Books, 2007.