6 - Insight-Driven Innovation I

Lesson 6 Overview

Summary

Where we have built an understanding of the overall innovation landscape and slowly increased our factor of magnification over the past five weeks, this week we remain at our highest magnification, but now we begin experimenting.

If we have created an innovation using the tools covered thus far, we have hopefully identified something which is in a fruitful vein, fits organizational strengths, and appears to have at least some potential to create value and impact. So we're off to a good start.

This week, we add to the toolset the means by which we can quickly understand whether we have created something with promise. The intent is to understand -- in a concise way and with minimal relative expense -- whether the concept has 'legs.' Are we seeing the kind of spark of innovation in the market that we would expect? Are there weaknesses? Are there other veins of richness identified in research which may be of even greater value than the initial concept?

We are now entering the phase of development where the concept is first revealed to some sample of the public, giving us an initial view of what value the concept actually delivers into the market.

If we have done our due diligence up to this point and indeed ended up inventing the next 'Corgi saddle,' now is exactly when we need to find out: before we have spent considerable time, effort, and resources on bringing the concept to fruition.

Learning Outcomes

By the end of this lesson, you should be able to:

  • create a well-structured plan for insight research as a basis to substantiate innovation opportunities;
  • discern the strengths and weaknesses in various research methodologies, as well as the stages of concept development in which they are most appropriate;
  • articulate the insight mindset, as opposed to a "project ownership" mindset.

Lesson Roadmap

To Read

  • Chapters 11 and 12 (Keeley, et al.)
  • Documents and assets as noted/linked in the Lesson (optional)

To Do

Case Assignment: Structuring research

  • Case Post
  • Case Response
  • Peer Voting

Questions?

If you have any questions, please send them to my faculty email (axj153@psu.edu). I will check daily to respond. If your question is one that is relevant to the entire class, I may respond to the entire class rather than individually.

Truths of Research and Innovation

A Common Conflict

image of opening aluminum briefcase
Credit: Storage and protection of cash and valuable goods by efired is licensed under depositphotos.com Standard License.

Allow me to tell you a tale, one which has been disguised to protect the innocent.

I had the occasion to be a party to some research on "highly designed" vitamin bottles some years ago.

In this case, these vitamin bottles were not designed by some local or regional design firm, but were four designs crafted by a veritable rock star of the design world. He has been on covers of major magazines, given TED Talks, the whole bit. Let's call him Jacques.

These prototype bottles were 3D printed–before 3D printing was a thing–and were delivered to the test site, by hand, in four separate aluminum briefcases. They were unveiled with reverence usually reserved for ancient artifacts and the like, as they were extracted from their custom cut foam liners with white cotton-gloved hands. The cost just to build the prototypes could have bought a decent house at the time. (This is not an exaggeration, as I had momentary thoughts of absconding to Mexico with four aluminum briefcases and their precious cargos.)

They were indeed "highly designed."

So, these bottles were then sent through usability testing with the exact types of customers who would be using these "highly designed" bottles in the real world. People poked and prodded at them, which is terrifying if you are responsible for the prototypes' well-being (they indeed had a full-time handler), and rather entertaining as the observer.

There was one design called "Oval," which was a short, squat bottle with what looked like a blunted and flared 3" diameter "crown" on top. If you twisted the crown, a little port would rise from the center of the top of the crown, and this little port had a slot exactly the width of a vitamin, plus 1 mm. If I were to estimate the production cost of this tiny mechanical marvel, it would have been in the $15 range. Just for the bottle. By general retail packaging cost rule of thumb, that would have made a garden-variety bottle of drug-store vitamins around $150. (By the way, the bottle was single-use.)

A funny thing happened as person after person used the vitamin bottles. Each thought they were on a hidden camera show.

Here's why: The design would dispense 10-15 vitamins at a time through that tiny slot as if they were shot out of a Lilliputian cannon. Many times they landed loudly on the glass tabletop as they were ejected.

From here, people did what they would naturally do, which is try to put the aforementioned vitamins back into the bottle. With any conventional bottle, this entails the usual 'hand cup and shake' maneuver. Not so with this "highly designed" bottle.

In this case, the participants would place the pills carefully back onto the tiny crown, daintily pushing them round and round in the hopes that one would fit into the tiny slot and drop back into the bottle. What was hilarious was that, for older participants, this typically entailed holding the bottle about 5 inches from one's eyes, and poking with all the gentle intent of trying to make a ladybug walk more quickly down a set of tiny stairs.

So, one might think that the design house would have received the message that the prototype *might* need a little refinement for usability.

Some weeks later, the research findings were presented, with Jacques in attendance. The researchers brought video and verbatim quotes of the encounters, as well as offering their own ratings, based on the participants' encounters, on a variety of facets. Without saying as much, it was clear the designs, in the presented iterations, scored between a C and an F- in the eyes of the participants. (The scores were actually presented on a soft scale, using descriptors instead of letter or percentage grades.)

Remember that by "participant" we are referring to a significant cross-section of the people who could conceivably be in the sweet spot to purchase this bottle. If you are prone to capitalistic dreaming, replace "participant" with "wallet with feet."

Jacques grew more and more agitated as the researchers presented, until, about 10 minutes in, he leapt up screaming, "Who are you? Who are you? You have no idea what you are talking about! You have no idea about design! These people (pointing at participant video frozen on the screen) are idiots! You chose them to insult me! I will not have this!" With this, he stormed out of the room, and his Senior Handler scurried after him. This left Junior Handler 1 and Junior Handler 2 in the room, who looked helpless for a couple minutes as the presentation continued, until they both decided to leave as well. Jacques was never heard from again.

Why do I tell you this story? Because in developing new offerings, there is a Jacques in all of us.

There is a natural tendency for us to protect what we create, and this is not a tendency that ever serves us well in innovation.

Truths of Research, Creative, and Innovation

As we begin to test early-phase innovations, we indeed will have at least three personas or roles at play within ourselves, namely, the researcher, the creator, and the innovator. It is a conflict that plays out no differently within each of us than it does between people, but when it happens internally, it is quiet and especially dangerous.

Here's what can happen, and we see it time and time again:

  1. You do the research and critical analysis, and find a space for sustainability-driven innovation that is exciting both for yourself and the organization.
  2. You get early buy-in, and the project is greenlighted.
  3. You then take ownership of this fledgling innovation, both operationally and emotionally.
  4. There are no clear answers, so you have some initial research performed on the early-phase prototype.
  5. The research has warning signs interspersed with promising findings.
  6. Warning signs are not heeded, or are minimized, due to emotional engagement.
  7. The offering suffers significantly in Beta (or in the market).

There is also a pervasive belief in the "creative genius" archetype, one that tells us anyone who is truly "creative" should be able to lock themselves in a room and just create. This is what we pay creative or innovative people to do, correct? To work in "genius isolation" and rely on only their flawed personal perspectives to create offerings which will be sold to hundreds of thousands, if not millions, of individuals? Believe in creative genius if you like, but great research can make average people perform like creative geniuses.

I would suggest a shift in mindset as we embark into the joint acts of creating and trying to find flaws in our creations.

The Insight Mindset

The following are a few mantras I have relied upon over the years and tend to come back to when in the thick of research and gathering insights.

  • Real artists love constraints. Igor Stravinsky was once quoted as saying, "The more constraints one imposes, the more one frees one's self. And the arbitrariness of the constraint serves only to obtain precision of execution." Rest assured, many constraints will be revealed as we research the offering, from cost to timing. These are simply design constraints working to focus and refine the offering to better fit the market.
  • Every result is a victory, and brings us closer to truth and innovation. It can serve us very well to take the "detached scientist" approach in all of our innovation work, but especially in early-phase research. Those who are over-protective of the offering see negative results or feedback as a defeat; those who take the detached scientist view see every valid result, positive or negative, as valuable. Taking this view is actually extremely freeing, as it allows you to concentrate more on ultimate success rather than on being temporarily "right."
  • Save emotion for when it is a strength, not a weakness. Much of this comes down to introspection, but know when you are getting in your own way, or when you are trying to make excuses for the findings.
  • If you aren't failing, you're either lying or not trying. Many of the greatest inventors in history failed miserably for years before finding success. If you are attaching your personal pride and ego to the success of a first draft offering, you're going to see failure as terminal, as opposed to a gateway.
  • Untested intuition fuels ego and little else. Everyone wants to be "the guru" or "the rainmaker," with knowledge and experience so vast that they can predict the future. They don't exist. If you want the offering to fail, go with untested intuition.
  • Sometimes creation is destructive. You may find that the second prototype is vastly different from the first, and so you have to start over. You may also find that the new offering cannibalizes the first. It's OK. Remember that innovation is neither clean nor a defined path.
Five words:

Inner “Jacques”: Hit the road.

Introduction to Methodologies

All Failures are Human in Origin

lines of different thicknesses and colors
Credit: roscolux paper spectrum, licensed under CC BY-SA 2.0 DEED

There was a famous automotive engineer and designer named Carroll Smith, whose fastidious attention to detail won his teams major victories at virtually every level. I do not use the term "fastidious attention to detail" lightly: he wrote a 224-page book devoted solely to nut and bolt selection.

There is one phrase he is most famous for: "There is no such thing as material failure. All failures are human in origin." He argues that the role of the engineer is to account for everything from metal porosity to poor machining to the tendencies of the driver, and if a component fails in a crucial moment, the responsibility lies with a human, not an inert material.

While this is certainly a compelling platform for engineering, his statement may be adapted and applied to consumer research as well. Research methodologies do not fail; research design is typically where the failure lies... and humans are responsible for this. As we will see, each methodology has strengths and weaknesses, and it is our responsibility not only to account for those, but to be smart about research design.

How do we tend to fail in consumer research design?

  • Asking too many questions. The survey that goes on for 40 minutes? Not only will few participants complete the survey, but the validity of the responses from those who do drops off a cliff as they get bored and disengage. Open-ended questions become one-word answers, at best.
  • Asking the wrong level of question. If you use a survey to ask people deep or emotional questions and expect them to write tomes of explanation, it is not going to happen.
  • Not accounting for self-reporting bias. Humans are notoriously bad at self-reporting even the most basic behaviors. Asking people hypothetical questions about hypothetical products purchased with hypothetical money held in a hypothetical bank account can be a very dangerous path if not structured with high levels of statistical validity.
  • Poor participant selection. This one is a perennial failure, as there is a bias toward selecting groups who are more engaged with a product, or current customers, because they are easier to obtain. Ever consider that much academic research relies on 18-22 year old undergraduates as the core participant pool?
  • Poor participant screening. Especially online, people can say virtually anything if they think it will make them eligible for the research. This can lead to very dirty data.
  • Poor segmentation. Neglecting groups of interest, or not appropriately weighting one segment or another, can significantly distort the research findings. You can even have ironclad statistical significance... but with the wrong customers.
  • Imprecise or confusing questions. "Have you or have you not encountered questions which are not designed to illuminate, but appear to be designed to obfuscate a straightforward answer?"
  • Survey fatigue. Much like the Tragedy of the Commons, researchers who are inconsiderate of participants and oversurvey will reduce the chances of those people participating in research in the future. This is a very real problem when you are dealing with limited pools of participants to begin with.
  • ...and others

Note that none of the above are methodological failures, but failures on the part of the researcher. While professionals and professors alike devote their lives to the art and science of research design, in our case, we will be looking to perform quick–but "clean"–research which can be built upon. In essence, we will seek to perform research which will not be thrown away after a month, but which may be built upon to create a research narrative which accompanies our revisions to the offering over time.

What is freeing about research design is that if it is sound and done well, the results are the results. No explanation is needed for negative outcomes, no congratulations accepted for positive outcomes.

Our role is to think, structure, execute, and learn as much as possible. Only afterward may we interpret results.

A Spectrum of Options

In the following pages, we will examine a few methodologies and techniques especially well-suited to testing early-phase ideas, messages, and offerings.

When discussing the realities and underpinnings of these research techniques, we will consider how each fits into a few different spectrums:

  • Speed. How long does it take to go from zero to final findings?
  • Cost. Where does the methodology fall on the cost spectrum? Do you have the ability to do a battery of projects on your offering and the offerings of competitors, or is it a more involved and limited scope "deep dive" to explore an idea?
  • Level of insight. Are the insights provided by the findings likely more toward the shallow and superficial or deep and subconscious?
  • Verbatims. Does the methodology provide brief participant sound bites or deep transcripts?
  • Remote/In-person. Can you start the research from your office and monitor results in real-time, or does the research require face-to-face interaction with participants?
  • Ideation. What are the chances participants will come up with related ideas or concepts to your offering during the course of the research?

I will also share some experiences with the methodologies and applications of technologies you may find interesting. Some methodologies may be timeless, but there are ways you can deploy them which can pay dividends for the research and generally make things more efficient.

Five words:

Many potential research design pitfalls.

Surveys

A Classic Research Methodology

Surveys are the most used and, unfortunately, the most abused research methodology. It is the least common denominator, in that everyone from 5th graders to college professors can use it, but it takes a significant amount of thought to structure a valid, scalable survey.

There was a time in the lifespan of this classic technique when participants felt "special" when asked to complete a survey, as though their opinion mattered. JD Power and Nielsen were masters of this, and Nielsen became famous for its technique of enclosing a crisp dollar bill in every mailed survey to create a sense of obligation on the part of the recipient.

With the advent of the internet, online surveys have become incredibly pervasive in our daily lives. To understand just how pervasive surveying is, I decided to count how many surveys I was presented with while going about my usual business on a Saturday. The final count? Fifteen, including three on shopping receipts. (I was fortunate in that CVS was an early errand that day, so I had an ample 24" receipt/scroll upon which to record the day's findings.)

Consider also that this survey overload is a reinforcing cycle: researchers receive fewer responses to surveys, so they require a higher number of survey sends (impressions) to reach the level of validity they require. Now multiply this escalation in survey sends by thousands upon thousands of companies, researchers, and media outlets, all of which are in a similar situation. In essence, we have our own miniature Tragedy of the Commons in survey research, in that it is a race to deplete what is an openly accessible but finite resource: attention.
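To see how quickly this escalation compounds, here is a minimal sketch of the arithmetic. The completes target and response rates are illustrative only, chosen to mirror the kind of decline described in the article below:

```python
# Minimal sketch: how falling response rates inflate the number of
# survey sends needed to hit a fixed target of completed surveys.
# The target and rates below are illustrative, not from any study.

def sends_needed(target_completes: int, response_rate: float) -> int:
    """Sends required to expect `target_completes` completed surveys."""
    return round(target_completes / response_rate)

target = 400  # completed surveys needed for the analysis

for era, rate in [("earlier era", 0.36), ("today", 0.11)]:
    print(f"{era}: {rate:.0%} response rate -> {sends_needed(target, rate):,} sends")

# earlier era: 36% response rate -> 1,111 sends
# today: 11% response rate -> 3,636 sends
```

Every additional send is another draw on the same finite pool of attention, which is exactly the commons being depleted.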

From USA Today, January 7, 2012, "For some consumers, surveys breed feedback fatigue":

"Survey fatigue" has long been a concern among pollsters. Some social scientists fear a pushback on feedback could hamper important government data-gathering, as for the census or unemployment statistics.

If more people say no to those, "the data, possibly, become less trustworthy," said Judith Tanur, a retired Stony Brook University sociology professor specializing in survey methodology.

Response rates have been sinking fast in traditional public-opinion phone polls, including political ones, said Scott Keeter, the Pew Research Center's survey director and the president of the American Association for Public Opinion Research. Pew's response rates have fallen from about 36 percent in 1997 to 11 percent last year, he said. The rate includes households that weren't reachable, as well as those that said no.

The Associated Press conducts regular public opinion polling around the world and has seen similar trends in response rates. There's little consensus among researchers on whether lower response rates, in themselves, make results less reliable.

Keeter attributes the decline more to privacy concerns and an ever-busier population than to survey fatigue. But the flurry of customer-feedback requests "undoubtedly contributes to people putting up their guard," he said.

This has become a very real problem in consumer research, so we must get creative in how we first contact participants, how we continue the research relationship, and how we survey in the first place. We will discuss this a bit more in a moment, but in some cases, it can be helpful to take non-traditional approaches to find, approach, and continue relationships with your participant pool.

Duke University Initiative on Survey Methodology

In regard to pure survey design, Duke University has an excellent set of condensed Tipsheets to help you create well-structured surveys. They have examples throughout to help explain not only the problem and solution, but also what the question looks like in practice.

These Tipsheets cover the major points you need to pay attention to in any survey design, and are a great resource to bookmark. If you happen to have staff helping to write or deploy surveys, it can be useful to point them to the Duke Tipsheets to not only help them do a better job in adding questions to the survey, but also in reviewing the other questions.

Eleven Common Mistakes in Survey Design

Beyond the prescriptions of the Duke Tipsheets, there are quite a few common survey mistakes that happen in the field, and any one of them can have significantly negative outcomes for survey results and completion rates. These mistakes are not limited to the less experienced; they happen all the time to experienced research designers.

What can make mistakes especially damaging is if they happen early in a series of surveys. So, for example, let's imagine you seek to create a "survey narrative" over time, building a robust set of baseline data on a core set of questions. If you commit an especially egregious mistake, such as omitting what would be a very common response to one of the questions, it not only distorts the results of that survey, but every survey that contains that question. You may have built 3 years of baseline data, but the results may be in question because of one carried miscue from the first survey.

Mistake #1: Survey Too Long

In a short-attention, low-engagement context, lengthy surveys are a liability. Participants will start to offer superficial, single word, or repetitive answers, or might abandon the survey altogether. Especially in the case of research pools you intend to survey over time, abandons can reduce the chance that your participants will opt in to the next round of survey. Furthermore, long surveys can skew results invisibly through attrition of participants, as we likely do not desire responses from only those who can devote 50 uninterrupted minutes in the middle of a workday.

Solutions: If you have a survey over 10 minutes, strongly consider carrying content over to another survey or otherwise splitting the survey. Regardless of the length of the survey, one of the first things to do before the participant begins is to be clear about the average completion time (i.e., "Average time required: 7 mins"). This creates not only an expectation on the part of the participant, but a type of implied contract. In the end, it is also common courtesy and good research practice. If you have stimuli integrated into the survey, such as animatics, videos, or prototypes, you can stretch the 10-minute boundary, but you still need to be conscious of the (actual) time to complete.

Quite a few of the online survey tools have a feature you can enable to chart progress (as a percentage or bar chart) at the top of the page. This can allow someone to know where they are in the survey, generally allows better pacing, and can reduce opt-outs.

You may also have success gaining participants by using especially short survey length as a selling point: for example, a 'one-minute survey,' or even a 'one-question survey.' There are a few mechanisms for continuance with a willing participant you can then use, such as an opt-in for future surveys.

Mistake #2: Questions Too Long

Multiple-clause questions with multiple modifiers are confusing to participants, and can lead to dirty data without you ever knowing it: there is no mechanism for you to know your results were skewed.

Solutions: Even in highly educated participant pools, simplicity and clarity are paramount. Chances are, multiple-clause questions can be worded more simply, and if they can't be, you will need to split the question. Save the lawyerly questions for the courtroom, counselor.

It can also be a great practice for larger surveys to include a simple mechanism that allows the participant to "flag" questions. You can add this as a cell at the bottom of the survey page, and some survey software allows it as a pop-in tab from the side of the survey page. Either way, the goal is for a participant to let you know they found the question confusing, and perhaps add a sentence explaining why.

Mistake #3: Survey Hard to Internalize for Participants

This is a classic issue that often goes unnoticed or unchallenged. These are surveys wherein each question is voiced in the third person, using impersonal language and "one" or "it" question constructions. For example: "It has been argued that one could tie a shoe with one hand. Agree or disagree." These types of constructions add a layer of interpretation for the participant, and may move their answers away from their personal thoughts and feelings and into the hypothetical. For the shoe-tying example, am I being asked if I can tie a shoe with one hand, if someone else conceivably could, or if I am aware of the argument?

Solutions: Be direct and personal. Your participants' responses are only valid in reference to themselves, not the thoughts or feelings of others or hypothetical "ones" hovering somewhere in the ether. Replace "one" and "it" with "you." If the more personal approach to these questions seems casual, it's because it is. If you prefer your surveys to sound clinical, yet be unclear and ineffective, that is entirely your prerogative.

Mistake #4: Survey Poorly Vetted

This tends to be more of an overall issue, and can range from typos and awkward questions to a lack of logical flow through the survey.

Solutions: Use a group of peers or "pre-deploy" your survey to a small group of live participants and contact them immediately after. This is especially easy to do on the web, as you can watch survey completions come in as they happen, and call the participant to ask if all of the questions made sense and if there are any suggestions they would have. The goal here is to take just a handful of live, unprompted survey participants and intercept them immediately after taking the survey.

Mistake #5: Choppy or No Flow

Surveys should feel like a good interview for participants, having a logical flow and seamlessly working from one question to the next. Choppy surveys tend to feel like being interviewed by a 4th grader: The questions in isolation may be valid, but it feels like you are being barraged with unlinked questions flowing from a stream of consciousness. This seems to be a more stylistic concern, but it can indeed confuse and detach participants from the survey.

Solutions: Don't be afraid to use "chapters" or breaker pages to allow your participants a bit of a break and to shift gears. So, if I were transitioning from a series of questions about demographics and into asking about experiences with the product category, I would insert a blank breaker page, and note something like, "Your Experiences With [Category]: In the following section of this survey, we would like to understand your thoughts, feelings, and experiences with [category]. By [category], we are referring to products that [definition of the category to make sure all are on the same page]. It can be helpful to take a moment to think of specific times you have used [category] in the past to help you remember. Please restrict your answers to your experiences, and not the experiences of others."

Generally, the goal is to start with the simpler and more straightforward background questions, and build until the most complex or emotionally charged questions arrive at about the 80% completion mark. The last 20% of the survey tends to act as a "cool-down" and reflection, capturing any closing remarks, feedback, or narrative responses. As a rule of thumb, if there is a chronological flow to the actual experience being examined (i.e., first impressions, use, disposal), the survey should mirror it.

Mistake #6: Incomplete Answer Choices

There are few faster ways to lose a well-intentioned participant than to run them through two or three questions that do not allow them to answer as they intend. Essentially, the participant realizes that they will not be able to express their thoughts, and there is therefore no reason to complete the survey. They abandon the survey.

Solutions: The way to resolve this problem is simple: always include an open-ended "Other" as a selection. Not only will it allow you to capture the responses, but "Other" responses can be a hotbed for new thinking and unexpected answers.

Having an internal review and a limited "pre-deploy" of the survey will also help you avoid many incomplete answers.

Mistake #7: Limited Variation in Questions

Having thirty Agree/Disagree questions in a row is not a terribly engaging survey for participants.

Solutions: Before writing any questions, list the topics you seek to address in your survey. It may help to arrange them in outline format to create chapters, but the overall goal is to avoid general line-listing of topics that breed uninspired questions. While you do not need to balance the types of questions in your survey, it can be helpful to give it a read-through with an eye toward the question type to make sure you have some variation.

Mistake #8: Swapping Axes or Scales

Having four of five questions with the scale arranged left to right, and the fifth with the scale reversed, can lead to unintended answers and the dirty data that comes with this confusion or omission.

Solutions: While it can occasionally be useful to add a "check question" to make sure that a participant is not running through the survey and clicking in the same place every time, varying question types serves the same function. Having thirty Agree/Disagree questions in a row necessitates a check question or an axis reversal, but you shouldn't have thirty of the same question type in a row to begin with. One way or another, you shouldn't need axis reversals.

Mistake #9: No Open-Ended Components

Too often, expediency in calculation overrules thoroughness of results. People tend to lean toward questions that can be calculated and tallied for this reason, or because they do not know how to treat or score open-ended questions.

Solutions: Although there are a wide variety of ways you can compile and summarize open-ended responses, remember that just because you capture data does not mean that you have to instantly undertake calculations and manipulations. So, for example, if you have twenty questions ready to go and five open-ended questions you aren't sure how you're going to score, deploy the survey. As long as you are able to capture the open-ended responses, that information does not have a shelf life.

Mistake #10: No Filters or Branches

The researcher does not include filter questions and decision points in the survey (i.e., asking a group different questions based on their response to an earlier question). For this reason, the questions are either inaccurate for a large proportion of participants, or the logic and structure of the overly broad questions are so convoluted that the survey becomes difficult to read. Either way, the end result is not good.

Solutions: I would argue that one of the most useful functional benefits of online survey tools is the ability to introduce filter questions to create a branched survey. Use it. Not having branches and filter questions when they are needed is usually a sign of a poorly executed survey, an inattentive researcher, or generally sloppy research born of the belief that you can send everyone every question and they will answer them all.

If you want to check for the need for filters and branches in your survey, try taking the survey acting as a member of each of the different groups of participants. If the survey is all about understanding thoughts about a product trial, and there is the potential that a handful of people receiving the survey have not yet used the product, include a filter question to separate that group and ask them questions specific to their experience: why they haven't used the product yet, and so on.
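To make the routing idea concrete, here is a minimal sketch of filter-and-branch logic, using an entirely hypothetical product-trial survey. Real survey tools implement this declaratively rather than in code, but the underlying logic is the same:

```python
# Minimal sketch of a filter question routing participants to branches.
# The filter, questions, and branches here are all hypothetical.

def route_participant(has_used_product: bool) -> list[str]:
    """Return the question branch appropriate to this participant."""
    if has_used_product:
        return [
            "How long have you used the product?",
            "Describe your first impressions of the product.",
        ]
    return [
        "What has kept you from trying the product so far?",
        "What would prompt you to try it?",
    ]

print(route_participant(True))   # branch for users
print(route_participant(False))  # branch for not-yet-users
```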

Mistake #11: Timing Mistakes

Consider surveys related to an experience the participant had with a prototype, for example, where there is a two-month lag because the researcher waited until everyone in the beta group had received their prototype. The participant has had the prototype on the shelf for one and a half of those two months, and has forgotten all of their first impressions and reactions.

Solutions: Lay out the project timing before you begin. If there are going to be any significant lag times between the experience you seek to understand and survey deployment, either split the beta groups and survey them separately, or simply include the survey file with the prototype itself. I tend to be a heavy proponent of including the survey right along with the product/offering being tested, as it tends to make participants more attentive to the experience and reminds them to think about and record their thoughts and feelings.

It is also possible to send a survey too soon in regard to understanding experiences, as you can send the survey before the participant has even received the product.

Selected Tools for Survey Research

While the best practices for survey design and question structure are the same regardless of how you deploy the survey, there are a few different approaches we may use to fit our research needs. Some offer almost instantaneous results by using massive pools of participants, while others allow you the flexibility to deploy surveys very quickly and effectively. For the sake of this discussion, I am going to assume we are already aware of our ability to survey via hard copy, phone, and other conventional means, especially for "nearfield" groups like customers.

In testing spaces and potential offerings, these tools can be used for anything from deploying surveys to a closed beta group of customers to conducting wide-open public polling to gauge how many people are familiar with a topic.

For example, I work with a charity benefiting children with Neurofibromatosis Type 1 (NF1), which has as many sufferers worldwide as multiple sclerosis (about 3MM). I wanted a quick gauge of awareness to test a hypothesis as I was setting up a messaging and branding platform, so I ran a quick online awareness test with 500 American adults. The findings were that MS awareness was around 94% among American adults, while NF1 awareness was less than 3%. Had I needed ironclad results, I could have taken the next steps, but for my purposes, a sample of 500 was ample. Best of all, I had the responses within 30 minutes.
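If you are wondering why a sample of 500 was ample for a directional read, a quick margin-of-error calculation makes the point. This is the standard normal-approximation formula for a proportion, nothing specific to any particular survey tool:

```python
# Back-of-the-envelope 95% margin of error for a proportion estimate,
# using the normal approximation: moe = z * sqrt(p * (1 - p) / n).
from math import sqrt

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    return z * sqrt(p * (1 - p) / n)

n = 500
for label, p in [("MS awareness", 0.94), ("NF1 awareness", 0.03)]:
    print(f"{label}: {p:.0%} +/- {margin_of_error(p, n):.1%}")

# MS awareness: 94% +/- 2.1%
# NF1 awareness: 3% +/- 1.5%
```

With a roughly 90-point gap between the two estimates, a margin of error of a point or two is immaterial for a directional hypothesis check.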

Here are a few tools and where they tend to fall on the survey spectrum:

  • Survey Monkey, Question Pro, and other online survey suites. If you've never used any of these, they're usually chock-full of features, inexpensive, well-developed, and can do an acceptable job of performing basic summary calculations. You basically build the survey and deploy a link to each participant that allows them to log in to the stand-alone survey pages.
  • Wufoo: This is a relatively new tool, but it allows you to take similar functionality and embed it into existing websites and pages. This can be useful if you seek to understand an online experience while the participant is on that page, for example.
  • Amazon Mechanical Turk: Still in beta, Amazon Mechanical Turk (or, mTurk) is a massive clearinghouse for what they call "human intelligence tasks" or "HITs." In essence, it is a marketplace for people to do very small tasks, such as completing surveys. This audience is worldwide, but because there are so many people participating already, you can use the included screening tools to find the type of audience and demographics you are looking for. The jobs are priced just as any free market would be, in that you could post a ten question survey job for $.05, and if it is perceived as low pay for the amount of work, people just won't complete the job. I tend to use mTurk for very early, public opinion-type testing when I need a same-day read on something, especially if I would like an international component.
  • PickFu, Survata, and others: These are more conventional online panels where you can set the demographics, psychographics, and other criteria for those you would like included in your survey. In most cases, these people have already registered and been confirmed with the online panel service, so you can feel more comfortable in the knowledge that validity is usually quite good.
  • Google Consumer Surveys: While many might defer to the massive pool of potential participants and the Google name, I have found that this service has one major flaw: it uses something akin to Google ad-serving logic to embed your survey as an interrupter in articles, YouTube videos, etc. For people to access the media, they have to answer your question. Needless to say, you're getting a lot of low-interest and, frankly, angry people. Many times, people will either answer out of spite ("I just want to read this article!"), enter a string of random letters, or click whatever they need to get to the media. It's a great interface and a great idea, but the deployment mechanism has been a significant issue in regard to response quality.

A Goal for any Research: Creating a Narrative Over Time

Especially in the case of surveys, we do not want to think of "a point in time" or a "snapshot"; we want to think of a continuous line of research that needs to tell a story. If we want to understand shifts in perception over time in a closed group, for example, we need to pay attention to details like asking the same questions in the same order to help make sure our narrative is not skewed or interrupted.

The "snapshot" frame for research tends to lead to fractured efforts without an overarching structure or goal, and online surveys especially can worsen this condition with their instant feedback.

It may seem a bit esoteric at this point, but we will work on it a bit in the Case this week, as it is important to be able to build the entire narrative and understanding of the offering or topic. Isolated, unlinked efforts can be more confusing or distracting than they are worth.

Five words:

A cornerstone of offering research.

Message/Proposition Testing

Putting a Net Into the Collective Stream

image of dragnet in ocean
Credit: To Manage Fish Populations, Scientists Study An Entire Ecosystem. National Oceanic and Atmospheric Administration (NOAA) (Public Domain).

Creating an early prototype of an offering or a concept to be tested is hard enough, as there are many variables at play. From messaging to features to pricing and more, much of the offering may still be yet to be determined. It is indeed a fun, creative endeavor, but the same possibilities you identified in WGB analysis can be daunting to pare down.

What we should seek to do at this point is not necessarily to eliminate variables altogether, but to limit the variables to a series of ranges. Gerald Zaltman, Professor Emeritus at Harvard University, used to refer to this type of thinking as, "Understanding the direction in which the wind is blowing, but not yet concerning ourselves with exact wind speed."

For practical purposes, we do not want to test 200 concepts or prototypes; we want to put our best thinking into perhaps five concepts.

All of the research methodologies we are covering in this Lesson are intended to help the team better understand the space and how the proposed offering "fits." What is unique about message and proposition testing is that it offers us a look at how the offering performs in the live environment.

Think of the internet as a seemingly infinite stream of customers and information, but one in which we have a limited view of the individuals or their thought processes at any one time. What we seek to do with message or proposition testing is to place a sampling net into the water and see what ends up in the net. If eight of our ten sampling nets come up empty time after time, we know we can stop sampling in those areas of the stream for the time being and focus our efforts on the two sampling-net regions which did show some promise.

So, how can we do this? By using the massive sample sizes of the internet and pay-per-click advertising to act as our sample nets.

First, a primer on Google AdWords/Pay-Per-Click advertising. Please watch the following 3:24 video.

Video: Learn The ABCs of AdWords (3:24)

Credit: Google Ads. "Learn The ABCs of AdWords." YouTube. June 17, 2013.

Online advertising has a language all its own and if it sounds like a foreign language to you, you're not alone. But it's important to get comfortable with the terms so that you can make the most out of your AdWords investment. To help make sense of it all, here's a scenario: Owen is planning a wedding and Brenda is a photographer. Brenda uses AdWords to advertise online to people who are looking for a photographer. This is one of her ads. Brenda takes three types of photos: Babies, real estate, and weddings. She uses different ads for each area of her business. Each collection of ads makes up an ad group. Brenda assigns to each ad group the words and phrases that are relevant to that part of her business. These are keywords. AdWords uses keywords to help decide which ads to show to people searching for things online. Brenda's three ad groups make up a campaign. The campaign is where Brenda decides big picture things, like her preferences for the devices her ads will show up on, and how much she spends.

Owen types "experienced wedding photographer" into Google.com. The phrase "experienced wedding photographer" is his search term. He sees two types of search results: organic search results located in the middle of the page are the websites that match Owen's search term. No one can pay to appear in these results. The second type of results, paid results, are usually located at the top, bottom, or right side of the page. These are ads from businesses that are using AdWords. In most cases, an advertiser is charged when someone like Owen clicks one of these ads.

Does Brenda's ad appear when Owen makes his search? That depends. Whenever someone uses Google to search there's an auction that determines which ads appear and in which order. Two main factors determine the outcome: How much an advertiser is willing to pay for a click, which is a bid, and something called "Quality Score." Quality Score is an estimate of how relevant and useful your ad and the page on your website it links to are to someone seeing your ad. Together, bid and Quality Score determine where and if Brenda's ad appears on Owen's search results page.

Bids and budget are different. Your bids affect how much you'll spend each time someone clicks one of your ads. Your budget affects how much you'll spend each day on your entire campaign, which influences how often your ads are shown. As it turns out, Brenda's ad appears on Owen's search results page. This is an impression. Owen clicks Brenda's ad to find out more on her website. This is a click. Owen likes what he sees on Brenda's website and hires her to photograph his wedding. Brenda's ad has gotten Owen to do something valuable. Hire her for an event. This is a conversion. Owen is a satisfied customer. Brenda is a happy advertiser. These are results.
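As a rough mental model of the auction described above, you can think of each ad's rank as the product of its bid and its Quality Score. This is a deliberate simplification (actual ad ranking involves more signals than these two), and the advertisers and numbers below are hypothetical:

```python
# Simplified model of the ad auction described in the video:
# rank ads by bid * Quality Score. Advertisers and numbers are
# hypothetical; real ad ranking uses additional signals.

ads = [
    {"advertiser": "Brenda",  "bid": 2.50, "quality_score": 8},
    {"advertiser": "Rival A", "bid": 4.00, "quality_score": 4},
    {"advertiser": "Rival B", "bid": 1.50, "quality_score": 9},
]

for ad in sorted(ads, key=lambda a: a["bid"] * a["quality_score"], reverse=True):
    print(ad["advertiser"], ad["bid"] * ad["quality_score"])

# Brenda 20.0   <- top rank despite not having the highest bid
# Rival A 16.0
# Rival B 13.5
```

Note that a relevant, useful ad can outrank a higher bid, which is exactly why Quality Score matters to an advertiser like Brenda.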


This tutorial provides a view into what the AdWords interface looks like, and some of the practical concerns when starting out. It is really quite straightforward to get started with AdWords, and there are many tutorial resources, but there is an entire profession devoted to the art and science of PPC advertising. Our goal in this early phase research is not to engage in advanced e-commerce, but simply to conduct a small, highly focused and controlled market test to help us focus our research efforts. Please watch the following 12:00 video.

Video: How to Set up a Google Adwords Campaign (12:00)

Credit: Abols IT Solutions. "How to Set up a Google Adwords Campaign." YouTube. February 5, 2014.

So you want to set up a Google AdWords campaign so that you can start to show up in Google AdWords within a few minutes after completing your campaign setup. So I want to take you through the steps of how to do that.

The first thing that you're gonna do when you're in Google AdWords, and in today's world it's pretty easy with Google to start that campaign. They make it pretty simple you know if you have a one hundred dollar coupon that they've sent you in the mail, if you don't let me know and I can get you one, but you can start it off with a hundred dollar credit. So, here's we're going to do. We're going to create a campaign and we'll call it Plastics Molding Seminars. And so, the first thing that they're going to give you the option is the search and display networks. Now search, you want search only in the beginning because in, in the beginning I don't know why they do this to have to change my... hold on one second. Plastics Molding Seminars. So in the beginning, you want to limit your campaign reach, so that you can fine tune it and then when it's working for you then you can expand your reach. Typical of starting out in any market you're not going to go test a theory or a new concept nationwide/internationally you're gonna test in select markets. So, you want to choose search network only and you really just want to choose the standard. Don't get into all features necessarily or product listing ads - we can come back to that later.

The next thing you want to do is Google search network. You can also show up in the search partners, but again if you want to just focus on Google alone in the beginning when you're setting up that campaign especially if you have a limited budget you might want to just use Google search network only and not have include Search Partners. In terms of all available devices, you can run everything all on one, however, best practice would be to remove the mobile devices with full browsers, but keep desktop and laptop, and keep tablets with full browsers and we'll talk more about this later. In terms of Geo settings or locations, US or you could choose select area. You could say I just want to show up in Atlanta or Georgia. Keep in mind that the more targeted your reach the less traffic potential you will get, but in the beginning if you have a local search product that you only way to promote clearly would want to put your zip code or your country. I'm sorry, or your city or region here. For right now for the purpose of this I'm gonna choose all of US and now I'm ready for bidding and budget. Here I'm gonna say that I'll manually set my bids for click. You see that AdWords wants me I'm to choose at words will set my bids. I do not want Google choosing my bid management strategy.

The default that we'll start out at 250 and we'll go with a hundred dollar per day budget. Now here's where a lot of people make mistakes. A lot people will put a very low default bid. There are a lot of reasons for that, but in the beginning that might choose twenty five cents a click, fifty cents a click. In terms of budget a lot of people put ten dollars fifteen dollars a day and the way they're coming up with that is based on if I have three thousand, let's just say to spend, I'll divide that by 30 days, let's just say, and therefore I'll have a hundred dollar per day budget. So 100 times 30 is 3,000. And so if somebody says why don't have three thousand dollar budget maybe I only have a thousand dollar budget a lot of times their dividing that amongst 30 days. With Google you really have to say you'll be willing to spend more. Especially if your default bids in order to get on the first page and in the top three positions are gonna be 250 or more. You're going to have to show Google that you're willing to spend more money. Doesn't mean that you necessarily will if you manage it properly. I rarely run into my daily budget, except for during Christmas for holiday retail.

So location ad extensions I'll do a separate thing on ad extensions. For right now for the purpose of this I'm going to ignore them I'll pass them, but we will have a separate session just on ad extensions. So then you're ready to save and continue, and now you're going create a name for your ad group. So, we've got plastics molding seminars we'll call this and Molding Seminars. Now we're ready to write an ad.

So what is it that you want to say about your ad? Now keep in mind that you have twenty five characters your headline, description line 1, and description line 2 have 35 characters. Display URL also only 35 characters and the destination URL as well, so let's do the headline. (typing) So you see that that I can't get one more letter in there. Right, so for example say plastics molding seminar, new schedule posted, see what the experts. I'll change this. (typing) And of course it's going to take a few minutes figure out exactly what you want to say. (typing) That's good for now. Remember you can always change your ad, and then of course your site. Orbitalplastics.com in this case. Remember, this is my display, so if I want to work in another word here I could come in and put molding seminars and of course I can't get that in there. So, I can put in another word like training. And then my actual destination URL should be the exact URL, not necessarily the homepage. Right, so, in this case we'll put that page. It shows you what it's gonna look like on the side and also at the top and it tells you that you have the option to add more extensions we will get back to that separate video, and now you're ready for your keywords. So we've got molding seminars, molding training, plastics molding seminars, and plastics molding workshop.

Now let me tell you a little bit about the match types you see here. You've got two match types in this particular listing. You have a broad match modifier which you see with the plus signs. You also have a phrase match molding training with the quotes, and then I've also used a broad match modifier down here again. Broad match modifier says that these two keywords molding seminars must exist in the user's search query, however, they are not required any particular keyword order. Whereas this one says that molding training in this case is that this word molding must exist in a search and it must exist alongside with the keyword training. And so, you typically get higher click-through rates on molding training in quotes, however, because you're requiring that the key words exist in that exact order, it may limit your reach. You may not get as much traffic, but you get a better click through rate and oftentimes a little better lead conversion rate for something like this. This keyword here plastics molding seminars. You know what I'm actually gonna take away the phrase match here, and I'm going to keep one and broad match. And so, Google supposed to find search queries and show our ad in our pages for keyword searches on any one of those keywords. And it could be in any particular order. There could be contextual relationship something similar to this. There is a possibility you open yourself up in this one with broad match that there may be non-relevant keywords so you need to manage it and monitor your keywords in your search queries data closely. And then you have something along like the last one which is the word plastics must exist the word molding must exist, and the word workshop. One thing can also do, is you can take away the requirement on workshop opening this last word up. I know I want it to be plastics, I know I want to be molding, but it workshop that could be subjective.

One last one. Let's do a singular. I'll put broad match modifier in front of this one and one more. Plastic, mou, that's for the British spelling. So let's just see what this does for it. I've already set up my default bid. I'm now ready to save and continue to billing. So where are we located? We're in the United States.

And our last recommendation here before we conclude this video training session is.. one moment here while it loads. You want to make sure that they bill you in retrospect as opposed to in advance. Right, so enter your information here and you're settings but you want to make sure that you set it up so they're not billing you in advance, but they're billing you after the ads and the spent has run. Thanks for enjoying this with me. Hope it was helpful.

Early-Phase Message or Proposition Testing With Google AdWords

Step 0: Go rogue.

Have you ever wondered why so many great innovations and companies are born from garages? Innovation is messy and emergent. It's loud, it's chaotic, and it's definitely not tidy and clean and neat. Therefore, it isn't something people generally want taking place in the house. This also holds true when testing early-phase concepts.

Think of the core brand of your organization as "the house." We don't want to disrupt that when creating offerings, so we want to find a space away from the house in which we can work without disrupting the house. This is why we do our testing and creation in safe places which won't be a nuisance. Call it "going rogue" or "working in the garage," but we need to provide some isolation and insulation when creating and testing.

Step 1: Group your messages or propositions.

If you have thirty interesting messages or propositions from survey research or ideations, try to group them into five or so groups or topics at this point. For example, if you have some very powerful customer quotes from an earlier survey, group them into "Customer Quotes," and if there is a highly differentiated attribute that has been well-received in surveys, you could have a "Lead Attribute" group, and so on. Our goal here is to condense all of the different ideas as tightly as we can, so that we may then see which theme appears to show the most promise.

Step 2: Create PPC versions of the lead message or proposition.

For each theme you have identified, create four or five different variations. These will be your AdWords ads, and they will provide us with some very early indicators of which message or proposition may be the most interesting to customers in a live environment. While there is an entire science behind conversion and what makes customers buy, consider each click on your research ad as a "vote." We are still quite far away from selling anything, but a click is the first step a potential customer could take to show interest. If customers don't even show interest by clicking, they could never take the next step, no matter how appealing it may be.
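Before building anything in the ad tool, it can help to lay out the test as a simple matrix of themes and variants. Here is a minimal sketch; the theme names and ad copy are hypothetical placeholders (borrowing the beer growler example that appears later on this page):

```python
# Sketch of a message test matrix: each theme carries four or five
# ad variants. All themes and copy below are hypothetical.

test_matrix = {
    "Customer Quotes": [
        '"I would never go back to a regular growler."',
        '"Finally, a growler that keeps beer fresh."',
    ],
    "Lead Attribute": [
        "Keeps beer carbonated for 30 days",
        "The growler that never goes flat",
    ],
}

for theme, variants in test_matrix.items():
    print(f"{theme}: {len(variants)} variants to deploy as ads")
```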

Step 3: Select keywords for your ads.

This can get very, very complex. The easiest way is to use the same keywords for all of your ads to eliminate that variable, and Google will show suggested keywords directly on the page as you begin entering a few. We (i.e., a consultant) can worry about dialing in keywords in the live campaign, but at this point we want to keep things simple.

Step 4: Create a landing page to capture information.

At this point, you don't have a product to sell, yet you're testing in the live market. How do you get around this? Land any AdWords clicks on an information-gathering page: the people who clicked were interested in the proposition, and we would potentially want to interview them. For this reason, our AdWords ad can link to a page that asks the visitor to "sign up to receive more information" or "sign up to be part of a beta test." This signup should be the first thing on the page, and it may indeed be the only thing.

Remember, we probably do not want to disclose the organization at this point, lest competition catch wind of our early phase projects. So our goal is for interested people to sign up for more information, and we may then fold them into the research.

Step 5: Closely monitor every metric possible.

By understanding the number of clicks each "mini-proposition" receives compared to the total number of impressions for the ad (known as Clickthrough Rate or CTR, and expressed as a percentage), we may have a very early window into messages that show more promise than others. If we see that Message A receives 80% more clicks than Message B over the span of two weeks, we may pencil in Message A as our headline for subsequent testing. This is a very simplistic view of analytics, but the nice part is that analytics are forever: they will be captured, and you can filter and refine them however you may prefer at a later date.
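As a concrete illustration, here is a minimal sketch using made-up counts: it computes CTR for two messages and applies a simple two-proportion z-test as a rough check that the gap is more than noise. A statistician would refine this, but it works as a first filter:

```python
# Minimal sketch: compare clickthrough rates (CTR = clicks / impressions)
# for two test messages. All counts below are made up for illustration.
from math import sqrt

a_clicks, a_impr = 180, 20_000  # Message A
b_clicks, b_impr = 100, 20_000  # Message B

ctr_a, ctr_b = a_clicks / a_impr, b_clicks / b_impr
print(f"CTR A: {ctr_a:.2%}  CTR B: {ctr_b:.2%}")  # 0.90% vs 0.50%

# Two-proportion z-test: is the difference likely more than noise?
p_pool = (a_clicks + b_clicks) / (a_impr + b_impr)
se = sqrt(p_pool * (1 - p_pool) * (1 / a_impr + 1 / b_impr))
z = (ctr_a - ctr_b) / se
print(f"z = {z:.1f}")  # here ~4.8; |z| > 1.96 suggests a real difference
```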

Step 6: Repeat, repeat, repeat.

As our research and offering progresses, so too can our PPC advertising testing. As we will see later in the semester, PPC will be a cornerstone of our beta testing, as it will help to drive traffic into our microsite, from which we will refine the offering even further.

Remember the Insight Mindset: our goal for early-phase research is to replace intuition with data, and the data doesn't have to be exhaustive or perfect at this point. Over time, we seek to replace unknowns with "somewhat knowns," and "somewhat knowns" with "confirmed knowns."

Advantages to Using Pay-Per-Click Advertising for Early Phase Message/Proposition Testing

Some may consider the use of a commercial online advertising tool to be a bit removed from more "pure" research techniques. I would argue that "pure" research techniques are pure because they are theoretical, and that our goal as researchers at this point is to quickly understand if the offering has merit, and what customers find most attractive about the offering.

Here are a few ways PPC message testing can play a valuable part in your early-phase research mix:

  • It is incredibly fast. You can set up a campaign and start testing messages in under an hour.
  • It can be instantly revised or discontinued. Want to try a different message or variant? Change it in 5 minutes and push it live instantly. Very, very few other media allow you that kind of flexibility.
  • It allows you to test the message/proposition with those interested in related terms at that moment. If I want to understand how a new sustainable beer growler proposition is received, I do not have to rely on mass media: I can expose the proposition to people actively searching for "beer," "sustainable beer," and other related keywords at that time. In essence, I can make the stream I sample as wide or as narrow as I want, and dip the sampling net in at any time.
  • It offers massive impressions/sample size at minimal cost. Depending on the keywords used, you can gather data from hundreds of thousands of ad impressions in a few days. You pay only per click (not per impression), and you may well find that your cost per click is $2 or less. Dollar for dollar, this can be extremely effective research (see the quick cost sketch after this list).
  • It levels the playing field to an extent. Visually, all of the ads at the top of Google pages are text and have the same length constraints. There are no special fonts or room for flashy animation, and large companies cannot buy larger ads to crowd others out. This allows an almost clinical type of research, in that the entire market is using the same constraints for stimuli. It would be akin to a magazine requiring all advertisers to use the same font and number of characters on a white page for their ads.
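As referenced in the cost bullet above, here is a quick back-of-envelope cost check. Every figure is assumed, including the rough $2 cost per click cited in that bullet.

    # Back-of-envelope research cost: you pay per click, not per impression.
    impressions = 200_000      # hypothetical two-week impression volume
    ctr = 0.01                 # assume a 1% click-through rate
    cost_per_click = 2.00      # rough figure cited above

    clicks = impressions * ctr                 # 2,000 clicks
    total_cost = clicks * cost_per_click       # $4,000
    print(f"{clicks:,.0f} clicks ~= ${total_cost:,.2f} "
          f"for {impressions:,} impressions of exposure data")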
Five words:

Powerful tools

enable nimble research.

Focus Groups

Focus Groups jls164

A Common Choice of Less-Experienced Researchers

Focus groups tend to be one of the "go-to" choices for early-phase consumer research, often because the methodology is common and resources, such as facilities and moderators, are generally easy to locate. I would argue that focus groups are not well suited to the needs of early-phase innovation research, the type we would be most likely to conduct.

I've spent thousands of hours both "behind the glass" and "in the room" in research facilities conducting fieldwork of various types, and I have had occasion both to observe and to participate as a member in focus groups. Even in well-moderated sessions, I tended to come away thinking about how skewed the discussions became, and how much I would rather have interviewed the participants one-on-one.

The effect of the focus group format on the validity of findings, as opposed to one-on-one depth interviews, has received quite a bit of scholarly attention, and papers have been written exploring methodological issues with focus groups. This abstract from Boateng (2012) summarizes the findings of the overall body of research:

The efficacy of Focus Group Discussion as a qualitative data collection methodology is put on the line by empirically comparing and contrasting data from two FGD sessions and one-on-one interviews to ascertain the consistency in terms of data retrieved from respondents using these two data collection methodologies. The study is guided by the hypothesis that data obtained by FGD may be influenced by groupthink rather than individual respondents' perspectives. A critical scrutiny of the data that emanated from the two organized focus groups discussion departed quite significantly from the data that was elicited from the one-on-one qualitative interviews. The difference in responses confirms that FGDs are not fully insulated from the shackles of groupthink. It is recommended, among others, that though FGD can stand unilaterally as a research methodology for nonsensitive topics with no direct personal implications for respondents; researchers should be encouraged to adopt FGD in league with other methodologies in a form of triangulation or mixed methodological approach for a more quality data, bearing in mind the central role occupied by data in the scientific research process.

Furthermore, in my experience, the group discussion format of focus groups tends to elicit the following behaviors, each of which has its own way of biasing or eroding the findings (if the group is composed of 18- to 35-year-old males and females, multiply the biasing factor by 3):

  • The Domineer. In a group setting, the domineer will tend to overtake the discussion, at times even acting as a deputy moderator. In rare cases, or with an inexperienced moderator, this person can take the discussion entirely off the rails, but the more common form tends to be that of steering the discussion.
  • The Silent One. The polar opposite of The Domineer, this person will simply withdraw from the discussion (sometimes in response to The Domineer). In the research field, these participants are known as "Facility Wallpaper," as they tend to blend into the background, and will eat every available snack.
  • The Genius. A close relative of The Domineer, this personality will also overtake conversations, but typically via unrelated or odd tangents. This personality tends to want to be right in any discussion, or otherwise to prove the most knowledgeable.
  • The Critic. If testing prototypes, everything will be "horrible" or "dumb." While specific criticism is a primary goal of research, the critic will tend to wilt when asked for the specifics of the criticism. This makes the feedback especially hard to analyze.
  • The Lovefest. The opposite of The Critic, this person will love everything. If a brand is presented, they will go on at (unrelated) length about the brand and how they love everything it does. This person will not react specifically to the stimuli at hand, and tends to work in generalities.
  • The Impractical Inventor. Blend The Domineer, The Genius, and The Critic, and you have The Impractical Inventor. Upon seeing a prototype, they will devise Rube Goldbergian solutions, or the idea that your packaging can be 'fixed' with 3D printing. Cost, practicality, and the current technology available to man are of no concern. Will derail the group into a wild ideation session.
  • The Ad Exec. The blend of The Genius and The Critic, and cousin to The Impractical Inventor. This personality type will devise slogans in the meeting, and will probably be the one to add comments and revisions to your nicely mounted $300 ad concept boards with a Sharpie while you aren't looking.

My overall point with this example is that, much like at any social gathering of people who do not know each other, when you get 8-12 people in a room in a single conversation, people "overact" or take on personality traits they otherwise would not. Impromptu caucuses will form before your eyes, as people with similar thoughts band together.

I offer the following as a humorous example of some of the traits you might see exhibited in a focus group. Interestingly, the morning after this Saturday Night Live sketch aired, the consumer research world exploded with agreement and stories about how realistic "Linda" was! Please watch the following 6:49 video.

Video: Taste Test - SNL (6:49)

Hidden Valley Ranch Taste Test SNL

ROGER: Hi there, thank you all for coming in. My name is Roger. I'm going to be running today's focus group.

LINDA: Alright, Wheeww! [CLAPPING] Roger! Roger! Roger!

ROGER: Thank you...hahaha. Ok, the products we are going to be testing today is a new line of dressing from Hidden Valley Ranch.

LINDA: Awesome. Whewww! Awesome! Hidden Valley Ranch! HVR! HVR! HVR!

ROGER: Hahaha, ok. Alright. Ok, we love your enthusiasm.

LINDA: I love your product, man. I love your product. Linda, I'm Linda. Love their product, man. Yes.

ROGER: Ok, alright. Great, Linda.

LINDA: Good stuff.

ROGER: Ok, we're going to put three new dressings in front of you to taste and then we just want your feedback.

LINDA: Alright, well they're going to be awesome, Roger. They really are.

ROGER: Alright, well let's just wait until we taste them and then we can discuss.

LINDA: HA, ok.

ROGER: Ok, let's start with number one.

LINDA:[SPOONS DRESSING INTO MOUTH] I'm getting strawberry. I'm getting kiwi. Man, I'm getting a big can of kiwi. Big can of kiwi. Hi, Kiwi!

MARK: You know what. It just tastes like ranch dressing with bacon to me.

ROGER: Ok, that's right. It's our new bacon ranch dressing.

LINDA: That was really good Mark. Man, what a palate. That was.. You nailed it. That was awesome.

MARK: Thanks.

LINDA: That's nice.

SUE: Yeah, I'm not really tasting the bacon at all.

LINDA: Are you kidding me? It's, It's, It's loaded with bacon. The product is loaded with bacon. Come on. It's in the name!

SUE: I know.

LINDA: God! I just, I think, I mean she's going to ruin it Roger. We have this good thing happening and then your kind... You're going to ruin this for us.

ROGER: Ok.

LINDA: She's ruining it for us.

ROGER: Ok, hey Linda everyone's opinion is valid Linda, ok. We just want your honest reactions. Actually one thing we do here for fun is we give an extra 50 dollars to whoever has the best comment today.

MARK: Oh, cool.

SUE: Oh that's fun.

LINDA: Oh my God.

ROGER: Yeah,

LINDA: 50 bucks.

ROGER: You might even hear your quote in the ad campaign. So uh, ok. Let's move onto number two.

LINDA: Man I could really use that cash, Mark. I mean, That cash could really get me out of a couple of jams. I just..alright. Alright. Game on.

ROGER: Alright.

LINDA: Game on.

ROGER: Ok. Alright, so why don't we just get started.

LINDA: There's a Hidden Valley Ranch party in my mouth. Hidden Valley Ranch party in my mouth. Write that down for your... There's a Hidden Valley Ranch party in my mouth. There's a Hidden Valley Ranch party in my mouth. You wanna write that? Party, Hidden Valley Ranch, in my mouth! You wanna write that down for the campaign?

ROGER: I'm probably not going to write that down.

LINDA: Linda, write Linda Hidden Valley Ranch.

MARK: I like this one, it's got a real kick.

ROGER: Oh, ok. Hey I like that. It's got a real kick. That's our new garlic ranch blast.

LINDA: You betcha it's a blast! You betcha it's a blast. In lightning bolts. You betcha it's a blast! Linda...You betcha it's a blast!

ROGER: Linda, please you haven't even tasted it yet.

SUE: Hey, this could, this could even make my husband eat salad.

ROGER: Hahahaha, ok, I like that. That's a great comment, Sue.

LINDA: Um. Could you garlic ranch blast me now? Could you garlic ranch blast me now? Could you garlic ranch blast me now? Write that down. Could you garlic ranch blast me now?

SUE: Ok, she's just doing the Verizon slogan.

LINDA: It's a hugely successful campaign. She doesn't get it.

ROGER: Ok. Why don't..

LINDA: She doesn't get it.

ROGER: Why don't we just move on?

LINDA: Do do do do do. I'm garlic ranch blasting it!

SUE: Oh, that's McDonalds.

LINDA: Do do do do do. I'm garlic...God. Shut up, Sue. Every...We all hate you. So much. We, We hate you! We hate your guts!

ROGER: Ok, please. Let's just taste the third one. Ok. It's our new pepper jack ranch.

LINDA: That's jack-tastic! That's jatastic! That's Ja-tastic! That's Ja-tastic! Did you write that down? That's Ja-tastic!

MARK: You know, this would be great on a burrito.

LINDA: I'd, I'd put it on a burrito.

ROGER: Ok, that's good. Maybe a little more descriptive.

MARK AND LINDA: Um, it's not just for salads anymore.

ROGER: Wow.

LINDA: not just for salads anymore.

ROGER: That is great Mark. That is great.

LINDA: And Linda...and Linda. so we'll split that 50. 50, 25, 25 will get me out of one jam. Group effort, not Sue. Not you. 25, 25. I was telling Sue earlier, you know, Hidden Valley Ranch is not for salads anymore.

SUE: She didn't, she never said that.

LINDA: Yes, I did Ro...she has got to go. Got to go.

SUE: Now, this one's way too spicy for me, I'm sorry.

MARK: Not me, I could eat a whole bottle.

LINDA: I could eat a whole bottle too. See... [LINDA SQUEEZES BOTTLE OF RANCH ONTO FACE]

That's ja-tastic! Do do do do do. That's good til the last....do do do. Good to the last bit.

ROGER: Ok.

LINDA: This is awesome,

ROGER: Ok. Ok.

LINDA: This is awesome!

ROGER: Just stop it. Stop it please! I will give you 50 dollars just to leave.

LINDA: Deal. Can we make it 30? Because I'm going to need just a little more of this,

ROGER: Yeah.

LINDA: ...good stuff.

ROGER: Fine. Just go ahead please.

LINDA: Boy, that's real good. Do do do do do ja-tastic!

Where do focus groups "fit"?

If we break apart some of the research "jobs" we would likely need in evaluating ideas or early-phase concepts, the role of the focus group becomes more and more niche.

Exploring the space broadly.

Surveys do a better job at understanding the overall space than focus groups, and are far less expensive. Furthermore, you are receiving "clean data" in a survey, unbiased by social pressures of the group or groupthink.

Exploring the space deeply.

Individual interviews will give you far more depth than any focus group, while allowing the interviewer to explore topics and ideas of interest.

Prototype or usability testing.

Observation and ethnography will tell you more about the use phase in real application. Individual interviews will tell you more about initial impressions of a prototype in a controlled environment, free from group biases.

Concept testing.

Message or proposition testing in a live environment will provide far more realistic, specific, and practicable information.

This leaves us with focus groups being used as ideation sessions to generate ideas and creative. Needless to say, in these applications, focus groups may have far more in common with the SNL skit than you might like.

Five words:

Significantly flawed

(but still popular.)

Depth Interviews and Observation

Depth Interviews and Observation jls164

A Versatile Combination

Also referred to as individual interviews, IDI (in-depth interviews), or one-on-one interviews, the depth interview methodology may seem quite straightforward but can take years to do well. Luckily for our efforts, the depth interview is also one of the most accessible types of research: it is simply one person guiding the discussion and asking questions, and the interviewee responds. For this reason, many depth interviews tend to be quite conversational and natural in format, as this will also allow the participant to discuss the topic freely. The truly adept depth interviewers take it a level further, eliciting specific stories and experiences from the participant and using various probes to explore ideas further.

Dr. Kelly Page offers a nicely composed overview of depth interview techniques:

Depth Interviews in Applied Marketing Research from Kelly Page

Slide 1: Title

Qualitative Marketing Research – Depth Interviews Week 4 (2) Dr. Kelly Page Cardiff Business School E: pagekl@cardiff.ac.uk T: @drkellypage T: @caseinsights FB: kelly@caseinsights.com

Slide 2: Summary

  • What Are In-depth Interviews?
  • Applications of Depth Interviews
  • Key Features
  • How Are We To Think Of An In-depth Interview?
  • The Art of a Good Interview
  • Interview Techniques & The Interviewer
  • Managing the Interview
  • Constructing a Discussion Guide
  • Wording Questions
  • Types of Probes
  • Tape-Recorded Interviews
  • Transcribing Interviews
  • In-depth Interviews: Advantages & Limitations

Slide 3: What Are In-depth Interviews?

  • An unstructured, direct, personal interview in which a single respondent is questioned and probed by an experienced interviewer to uncover underlying motivations, beliefs, attitudes, and feelings on a topic
    • In-depth interviews aim to explore the complexity and in-process nature of meanings and interpretations that cannot be examined using positivist methodologies.
    • In-depth interviews are more like conversations than structured questionnaires.
    • In-depth interviews stand in 'stark contrast' to structured interviews.

Slide 4: Applications of Depth Interviews

  • Professionals
  • Children
  • Detailed probing
  • Confidential, sensitive, embarrassing topics
  • Avoiding strong social norms
  • Complicated behavior
  • Competitors
  • Sensory Experiences

Slide 5: Key Features

  • A methodology that attempts to be more conversational and engaging, hence requires greater skill and experience.
  • The level of skill required means that it is common for interviews to be conducted by the researchers themselves.
  • It is both inductive and deductive, but often it is assumed that not all relevant questions are known prior to the research.
  • It makes use of some of the assumptions of grounded theory, which attempts to build up understandings of general patterns and important issues through the process of interviewing.
  • It can involve a single half-hour interview with each participant, several sessions of two hours each, or up to twenty-five sessions in some cases.

Slide 6: How Are We To Think Of An In-depth Interview?

  • An in-depth interview is like the listening half of a very good conversation.
  • The focus is on the other person's own meaning contexts.
  • Good interviewing is achieved out of a fascination with how other people make their lives meaningful and worthwhile.
  • It is this inquisitiveness that motivates the in-depth interviewer who uncovers new and exciting insights.
  • The hardest work for most interviewers is to keep quiet and to ‘listen actively’.
  • One of the most important skills to learn in interviewing is that of keeping silent.

Slide 7: The Art of a Good Interview

Creative interviewing involves the use of many strategies and tactics of interaction, largely based on an understanding of friendly feelings and intimacy, to optimize cooperative, mutual disclosure and a creative search for mutual understanding.

- (Douglas, 1985: 25)

Slide 8: Interview Techniques

  • Conducting a good in-depth interview is an art that cannot be achieved by following rules. But there are many skills, rules of thumb, and practical guidelines which may facilitate a good interview.
  • It's all about ‘experience’:
    • People often say things like: “My experience is very different to other people’s and may not interest you”, etc.
    • Researchers need to reassure participants that they are OK and that their experience, whatever it may be, is what we are interested in.
    • You may say: “We are interested in everyone’s experience of having a baby”, or “We think your experience of … is quite common and we are interested in your story”.

Slide 9: The Interviewer

  • Some argue that interviewers should be of similar age, gender, ethnicity, class, and sexual orientation to the people being interviewed.
  • This is not necessarily the case. But, in some cases, it may be appropriate to select particular types of interviewers.
    • E.g., in a study of living with HIV/AIDS, the gender of the interviewer may not be important if the interview focuses on working life.
    • But, it would be more appropriate for a woman to interview another woman about complications during her pregnancy rather than a male interviewer.

Slide 10: Managing the Interview (1)

  • Once a sampling strategy has been decided upon, there are several things to consider.
  • Introductions:
    • Having someone introduce you is very helpful. If someone the participant trusts introduces you, the process of gaining their trust will already have begun.
  • Permissions:
    • It may be essential to obtain permission from formal or informal gatekeepers.
    • A database of participants can be very useful to ensure that all the appropriate phone calls and confirmations have been completed.
  • Time:
    • In-depth interviewing typically requires a relatively large investment of time and energy in recruiting participants and arranging the interview.

Slide 11: Managing the Interview (2)

  • Location:
    • Deciding where to conduct the interview can be difficult. Most people will feel more comfortable and relaxed in their own homes.
  • Conduct:
    • When you actually arrive at the interview, settle into the location and wait until the participant is ready.
    • Do not leave immediately after the interview is finished. Hanging around for a cup of tea or a chat is a good strategy. It makes the participant feel that you are really interested in his/her story.
  • After an interview:
    • It is important for the researcher to have an opportunity to debrief with someone else working on the project, or familiar with the issues dealt with in the project.
    • This is particularly important when the topic deals with sensitive or emotionally charged issues.

Slide 12: Constructing a Discussion Guide

  • Although the in-depth interviews are 'open' and often exploratory, a discussion guide, theme list or inventory of important topics is typically used.
  • Discussion guides are best kept to one to two pages. This ensures that the guide can be referred to without having to flip too many pages, which can be very distracting.
  • It may also be appropriate to use a separate theme list for each interview. The theme list is a useful place to take notes and record questions that should be returned to later in the interview.
  • At the beginning of the interview, explain the purpose of the interview and emphasize that we are interested in their story and that they are the expert.
  • Try to stress that the criteria for what is important or relevant are what the participant thinks is important.

Slide 13: Wording Questions

  • While questions are not prescribed beforehand, the general topics and themes of the interview are typically already decided upon.
  • Dialogue is best enabled through a questioning strategy sometimes described as 'not-knowing'. Understanding is best gained through questions born of a genuine curiosity about that which is 'not known' in what has just been said.
  • Questions should be open-ended.
  • Questions that are best avoided include those that appear as if they are a test of knowledge.
    • Try not to ask questions that begin 'What do you know about this?'
  • Rather, start with questions that invite people to share.
    • Such as: 'Tell me about that'.
  • It is also a good idea to avoid technical phrases.

Slide 14: Types of Probes

  • Elaboration probes - ask for more detail:
    • 'Can you tell me a little more about that?'
    • 'What did she say to you?'
  • Continuation probes - encourage the participant to keep talking:
    • 'Go on.' or 'What happened then?'
    • Body language such as a raised eyebrow can also serve as a probe.
  • Clarification probes - aim to resolve ambiguities or confusions about meaning:
    • 'I'm not sure I understand what you mean by that.'
    • 'Do you mean you saw her do that?'
  • Attention probes - indicate that the interviewer is paying attention to what is being said:
    • 'That's really interesting.' or 'I see.'

Slide 15: Types of Probes

  • Completion probes - encourage the participant to finish a particular line of thought:
    • 'You said that you spoke to him, what happened then?'
    • 'Are you suggesting there was some reason for that?'
  • Evidence probes - seek to identify how sure a person is of their interpretation, and should be used carefully:
    • 'How certain are you that things happened in that order?'
    • 'How likely is it that you might change your opinion on that?'
  • Participants often laugh in response to nervousness or ambiguity rather than simply because something is funny. If this is the case, laughter is often a good cue for a probe or further exploration.

Slide 16: Closing the Interview

  • Toward the end of interviews, it may be worthwhile to reflect back to the participant some of the main themes of the interview, to check that the interviewer has understood the main responses and interpretations that have been described.
  • At the very end of an interview, we always ask the participant if there is anything else that they think is important in understanding the issue under discussion that has not already been covered.
  • This question sometimes produces surprising results suggesting a completely different approach to an issue or problem.
  • The key to asking questions during in-depth interviewing is to let them follow, as much as possible, from what the participant is saying.
  • Theme lists should not so much direct questions, as remind interviewers of the topics that need to be covered.

Slide 17: Tape-Recorded Interviews

  • Practically, the interview must be taped so that we may fully capture what the participant says.
  • The recording of the interview must be with the consent of the participant.
  • Make sure that the tape and microphone are working.
  • Bring extra cassettes and batteries.
  • Quote:

 I always try and use a tape-recorder, for some very pragmatic reasons: I want to interact with the interviewee, and I don’t want to spend a lot of my time head-down and writing. Also, the tape provides me with a much more detailed record of our verbal interaction than any amount of note taking or reflection could offer.  (Rapley, 2004: 18)

Slide 18: Transcribing Interviews

  • All interviews must be transcribed for data analysis.
  • The careful attention to the tape required during transcription sensitizes the interviewer to ways in which they could have asked questions differently or to cues that were missed.
  • Often, all conversations in the interview will be transcribed.
  • Researchers may want to include things like the length of pauses, other sounds like laughter or even ‘um’.
  • An indication of who is speaking is necessary, e.g., the researcher, the participants, family member (if present), etc.
  • Transcripts need to be checked through to ensure that technical terms and difficult areas have been correctly transcribed.

Slide 19: In-depth Interviews: Advantages

  • In-depth interviews are an excellent way of discovering the subjective meanings and interpretations that people give to their experiences.
  • In-depth interviews allow aspects of social life, such as social processes and negotiated interactions, to be studied that could not be studied in any other way.
  • While it is important to examine pre-existing theory, in-depth interviews allow new understandings and theories to be developed during the research process, particularly grounded theory.
  • People's responses are less influenced by the direct presence of their peers during in-depth interviews.
  • People generally find the experience rewarding.

Slide 20: In-Depth Interviews: Limitations

  • Investment:
    • In-depth interviewing requires considerable investments of time, money and energy.
    • This investment needs to be weighed against the research problem and goals.
  • Evolution
    • Understandings and experiences are developed from interview to interview.
    • By comparison, new ideas can be responded to immediately by all other participants in a focus group.
  • Skills
    • In-depth interviewing is difficult to do well.
    • It requires persistence and sensitivity to the complexities of interpersonal interaction.
    • It may not always be appropriate to delegate the task of interviewing to research assistants.

Slide 21: Summary Slide

(Repeats the topic outline from Slide 2.)

Slide 22: Licensing information

The content of this work is of shared interest between the author, Kelly Page and other parties who have contributed and/or provided support for the generation of the content detailed within. This work is licensed under a Creative Commons Attribution-NonCommercial-Share Alike 2.0 UK: England & Wales. http://creativecommons.org/ Kelly Page (cc)

Credit: Kelly Page, Cardiff Business School

There are a few additions and points of emphasis I would make to the slides:

  • Natural conversation is essential. For this reason, you want to practice the same cues you would in any engaged conversation: eye contact and active listening. Furiously jotting notes or reading questions does nothing but reinforce that this is an interview and not a conversation, and your findings may suffer.
  • Be a great listener. The best interviewers use the fewest words and are incredibly non-biasing. They do not "color comment" or prod the interview along. I have conducted 1.5-hour interviews in which I probably uttered no more than a paragraph of words after the initial introductions.
  • Stories are extremely valuable. If you are interviewing someone about their experiences using a prototype, you want to hear short stories about those experiences. Hearing "I had a little bit of trouble with the instructions" is good, but hearing the participant specifically discuss how the instructions gave them trouble is the goal.
  • Interviewer similarity. I very much tend to fall on the side of dissimilar interviewers, unless the topic is highly sensitive. Some of the most compelling interviews I have ever witnessed or taken part in were people being interviewed on topics of which the interviewer had no knowledge or experience. Vegetarians interviewing people about their thoughts and feelings on a steakhouse concept. A 20-something Venezuelan woman interviewing men about the baseball experience and relationships with their fathers. Why does it work? Because the participant assumes the interviewer knows nothing and assumes nothing. This is a recipe for an excellent interview, as we can gain an understanding of the entire experience.
  • Be playful. Again, bring a childlike curiosity and assume nothing. Have fun. What you will often realize is that the meaning a participant attaches to a very basic concept or response is very different from the one you assumed.
  • Avoid interviewing from behind a desk. Tactically, the office desk interview is challenging: there are distractions and interruptions, the participant is in a place of total comfort (and power), and the body positioning is adversarial instead of conversational. You want to either walk or sit alongside the participant, or sit at a 90-degree angle. Sitting facing each other tends to feel forced, more like a job interview than a conversation.
  • Transcription. If you are serious about doing analysis, don't rely on notes; have a transcript made. I would strongly recommend against trying to type it yourself: even professionals can't type fast enough to keep up with the speed of speech, so there is significant fast-forwarding and rewinding (they use foot pedals). The easiest way? The upper tiers of voice-recognition software (less than $300) usually allow you to load an MP3 and create a transcript in a few minutes directly from the audio file; see the sketch after this list for one modern alternative. Bear in mind, the output won't have paragraphs or perfect punctuation, but it does an exceptionally good job of capturing the words in a fast and cost-effective way.
  • Skype. Face-to-face depth interviews involve traveling to the locations, or setting up multiple interviews in one location, but Skype or other video conferencing suites can be an excellent option. Participants do not need to leave their home or office, and you do not have to travel to different locations. Does it offer the full experience of being physically present? No, it does not... but it's also significantly less expensive and more efficient.
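As one modern, low-cost alternative to the desktop dictation software mentioned in the transcription point above, here is a sketch using the open-source Whisper speech-to-text library. This is my illustration, not a course requirement; the file names and model size are assumptions.

    # Transcription sketch using the open-source Whisper library
    # (pip install openai-whisper). File names and model size are
    # hypothetical; larger models are slower but more accurate.
    import whisper

    model = whisper.load_model("base")
    result = model.transcribe("interview.mp3")

    # As noted above, the output lacks speaker labels, paragraphs,
    # and perfect punctuation -- plan on a cleanup pass before analysis.
    with open("interview_transcript.txt", "w") as f:
        f.write(result["text"])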

Pairing Observation with Depth Interviews

Observing the participant interacting with or reacting to a concept can be an invaluable way to gain unfiltered insights. While observation may be conducted as a methodology by itself or paired with many other research techniques, it is especially potent when paired with depth interviews. The potency comes from sequencing: the interviewer may first depth-interview the participant on the topic or offering category, learning about their experiences, thoughts, and feelings. Once the interviewer understands the participant's views on the general topic, the participant may then be exposed to the offering or prototype being tested to provide full feedback.

There are three major types of observation you will most commonly see when researching early-phase concepts or offerings:

Covert Observation.

Covert observation is exactly as it sounds: You are observing a behavior or interaction without the participant knowing it, either by camera or by blending into the surroundings in a public space. Only after you have observed the behavior fully do you reveal to the participants that you have been observing them.

In practice, covert observation tends to be limited to occasions when you can actually place a camera somewhere to see how a group of participants might interact with a machine, for example. In a sense, proposition and message testing acts as a form of covert observation, in that you are watching online behaviors, preferences, and analytics to create your findings.

Overt Observation - Detached.

Detached overt observation is similar to covert observation, with the exception that you let the participant know you will be observing them, and then retreat to a detached vantage point to watch their behaviors. It tends to be a bit limiting in that participants may act differently when they know they are being watched or recorded from afar, especially if they are the only participant.

Overt Observation - Narrative.

Also sometimes referred to as "side-by-side" observation, this is when you not only tell the participant you will be watching them, but stay just behind or beside them and ask them to narrate the experience. This is especially potent in usability or refinement studies, where participants may first try to interact with the product in an unguided way, then ask the researcher questions afterward. If they are operating from an "insight mindset," overt narrative observation is especially beneficial for designers and those most involved in the offering, as it allows them to see the experience through the eyes of the participant.

Much of the value of pairing depth interviews with observational research is that the interview gives the researcher a frame of reference for the participant's experiences, against which to interpret the observational portion. For example, if a participant talked about their electrical engineering degree and expertise in solar energy in the depth interview, and then had trouble understanding and interacting with the solar controller being observed, that should stand as a significant warning sign: an experienced lead user is having trouble understanding the product. Using observational research alone, one might conclude that the participant was simply not technologically savvy, or otherwise fabricate a backstory to explain their trouble interacting with the offering.

Five words:

Flexible, illuminating if done well

Closing Remarks

Closing Remarks jls164
X-ray Fluorescence Mapping of a van Gogh painting
Credit: Joris Dik, Koen Janssens, Geert Van Der Snickt, Luuk van der Loeff, Karen Rickers, and Marine Cotte. “Visualization of a Lost Painting by Vincent van Gogh Using Synchrotron Radiation Based X-ray Fluorescence Elemental Mapping.” Analytical Chemistry 2008 80 (16), 6436-6442 DOI: 10.1021/ac800965g

"I long so much to make beautiful things. But beautiful things require effort—and disappointment and perseverance."

Research, Truth, and Disappointment

In this Lesson, we have started to move from the theory of strategy into the realities of research and testing: learning things about the offering that no other organization in the world may know. It is these specific research efforts that create meaningful organizational knowledge, the kind not found in textbooks or the boilerplate slide decks of consultants. The early-phase research you conduct on an offering is unique to your organization. At this point in time, likely no one holds more practical knowledge on this specific offering than your team.

But, many times, the first round of research will surface more unknowns than you started with. As the unknowns expand, it can be easy to lose sight not only of the core proposition and strategy of the offering, but of the research itself. In view of the overall project, the research phase can be both exciting and depressing for many of the same reasons: you are finally gaining real data on the offering, but rarely are all signs wildly positive. There are areas needing further research, areas which are still unclear, and some of these may be crucial to the viability of the offering. It can indeed feel as if the research is being dragged further and further into the unknown.

Take a breath.

You'll be fine.

Remember the Insight Mindset. Chant it if need be. But, remember that things shouldn't be clear at this point. Promising projects will have more questions than answers in early phases. Every result is a victory, and brings us closer to truth and innovation. The only way to continue to gain results and data is to continue to research and try to find the overall storyline of the offering.

Which brings us to the van Gogh at the top. That portrait of a woman is not superimposed over the painting of a grassy meadow: van Gogh actually painted "Patch of Grass" over the portrait of the woman. This would not be the only time van Gogh painted over completed works; in fact, it is estimated that up to a third of his known works are painted over earlier works.

Even as a frustrated and resource-constrained artist in need of more canvases on which to paint, van Gogh would say in his letters:

"I've just kept on ceaselessly painting in order to learn painting."

He saw it not as a destruction of earlier works and efforts, but as an act of continuous learning and of evolving his craft. We should all aspire to be able to destroy our earlier works while embracing the act as an opportunity to learn.

As we begin creating more and more advanced versions of the offering, it is important to know that frustration is inevitable, results will be unclear, and questions will multiply.

As with painting, take some solace in the fact that if it were easy, everyone could do it.

Creating a Path Into the Unknown

To refresh ourselves, our goals specifically for this Lesson are to:

  • create a well-structured plan for insight research as a basis to substantiate innovation opportunities;
  • discern the strengths and weaknesses in various research methodologies, as well as the stages of concept development in which they are most appropriate;
  • articulate the insight mindset, as opposed to a "project ownership" mindset.

To this end, this week's Case will pick up where we left off last week, and will take the next step by framing the types of insights we seek to gain in our initial research effort.