Monday, August 20, 2007

What is the group really doing with the topic?

In Gestalt Psychotherapy, there are four common ways to avoid contact. First off - contact between human beings is considered a model of health in Gestalt - the more contact one makes, the healthier he or she is. The four ways are confluence, introjection, projection and retroflection. While they may seem like fancy words, they're really quite easy to understand. Anyhow, one of the reasons I like focus groups is that I get a chance to see which avoidance a group uses when I discuss a particular topic. The results are often quite revealing.

First, a brief synopsis of the four avoidances.

Confluence - This is often a state of cluelessness and self-absorption. Teenagers are notorious for it. I'm reminded of the scene in Ferris Bueller's Day Off where Ben Stein is taking attendance. The kids have absolutely no interest in him because they do not even register that he is there. That is the key symptom of confluence - in the group, people have no interest in you or the topic. There's an energy in the focus group room that gets sucked out of it when a group is confluent, or there is a sense of phoniness. There can also be the exact opposite - too strong a sense of endearment to the particular topic, kind of like puppy love if you will - which is also a teenage characteristic.

Introjection - Ever have a group that seems to passively and politely accept or reject what you are saying? It's the well-considered "Ummmmm.... I see what you're getting at" response. Or it's the "I don't know Brian - what you're saying doesn't sit well with me" response. What is actually happening is that the person is deciding to either accept or reject themselves and their beliefs, not me or my topics.

Projection - In this type of neurotic behaviour, group members actually feel something - it could be fear, anger, joy, sadness or pleasure - but they do not own it. Instead, they disown it in some way, usually by not taking responsibility for their thoughts, feelings or actions and putting them on someone or something else.

Retroflection - At this point, the group is using its brains to sit on the fence and avoid contact. That is, it is wondering: should we risk exposing ourselves and our feelings? Should we take a chance and embrace what is being said?

So what I look for is how the group gravitates and behaves on a particular topic. While behaviours may start off disparate, within a few minutes of introducing the topic a skilled researcher can often tell how the group is behaving. From there, proper probes and client recommendations can be drawn.

Rather than give specific rules for specific behaviours, I will use examples. The fact is, there are no specific actions based on observed behaviour alone, and besides, a lot depends on the nature of the product or communication being tested.

In my first example, my client was selling a service, but people did not want to acquire it because they were unsure about many of its attributes. Within the focus groups, this uncertainty took the form of projection - people were scared of this product, and people were even more scared about their lack of knowledge concerning it. My recommendation to the client was NOT to come up with solutions to each of the objections, or to make the product more appealing. Since the product involved personal sales, I simply told the client to have its sales reps listen, empathize and say "this is a difficult product to wrap your mind around." An indirect acknowledgment of the fear surrounding the product would make it more saleable. Overloading people with more information would have caused further fear, projection and distancing from the sales force.

The second example involved examining people's perceptions of safety and crime in their communities so that communications messages could be developed. What I noticed in the groups is that people were retroflecting their fears - they were spending a lot of time describing issues in the community, but stopped short of saying that they were personally scared for their lives, even though I knew they were. I began launching a few probes to see if I could get the participants to contact the deep fear inside of them, but they were having none of it. All of my probes brought up further justifications and intellectualizations, which was a sign that what I was saying was really making them uncomfortable. The communications recommendation to the client was easy - do not mention words like "personal safety" or "harm". Instead, focus the message on the fact that safety equals comfort and gentleness. Safety need not come with increased vigilance, with a lock-down of one's freedoms or with increased enforcement. Instead, increased safety can come organically from more community involvement and improved infrastructure. The goal of the communication was to provide safety alternatives that would not further fan the flames of fear, but rather reduce the anxiety people would have about increased safety measures.

Thursday, August 09, 2007

We're A Hit Down Under!

I recently did a search on my company's name on Google, and was very surprised to find that two account executives from Markinor, a leading firm in South Africa, had used my thoughts on projective techniques when submitting a paper to SAMRA, the South African Market Research Association.

To download the paper, click on this link, and note their distinction between "metaphoric and emotional" research responses.

Groups Vs. One-On-Ones

One of the things that has always given me pause for thought is whether to use groups or one-on-one interviews to conduct qualitative research. I've finally figured it out. Focus groups are generally good at producing middle-of-the-road results - say the kind of results you want when you want to reach an audience in a very general way - like with mass advertising. In this instance someone is likely looking for a lowest-common-denominator type of marketing. I will clarify that there are a lot of uses for this kind of approach. One-on-one interviews, however, are much more useful when testing something that is new, unique or takes time to be adopted.

Let's start with groups. Early on in my career, someone asked me "why do groups when you can just do a series of one-on-one interviews?" The answer always stuck with me - focus groups produce "group dynamics", and those are the key results of focus groups - it is not necessarily what people say that is important. We get to watch how an idea changes, where there is resistance to it, and where there is acceptance of it. We get to see how strongly people hold on to ideas, and how they react to having their ideas changed, rejected or challenged. A marketer can observe these dynamics and figure out what arguments or factors will move people and change their opinions. This is why I love using groups to evaluate policy issues, which are very dynamic and flexible.

What prompted me to write this blog is a continual interest in why groups can still perform so badly at predicting certain product successes/failures, and why people always assume that "mediocre" products are something that have been "focus grouped." I finally came up with my solution. In observing group dynamics, I've come to realize that the group is always moving towards (or away from) something. Ten people take an idea, play with it and either change it or reach an opinion about it. When group processes like this happen, you can't help but get a watered-down version of the original.

In a focus group people are not themselves - it is sort of like "mob mentality" minus the violence - we would do and say things in a group that we would not do and say as individuals (hmmmm - maybe that's why groups are such bad predictors of behaviour). In a focus group, people ARE swayed by dominators - they are shy and they do advance their agendas. A moderator is there to observe, balance and interpret these phenomena. As such, they are not negatives - this is what happens in real life, because people live in a dynamic world, and often what is being tested is something that needs to survive exactly those dynamics.

Let me give two positive examples of this. The first is from a psychologist who put jellybeans in a jar and asked individuals to guess the number inside. Each individual answer was significantly off, but when all the answers were averaged together, it was surprising how close they came to the truth. The second comes from Malcolm Gladwell's book "Blink". In it he describes how the market research and focus groups used to predict Top-40 hits were very tough on a singer named Kenna, yet the music industry claimed that his was the most innovative sound they had heard in a long time. Gladwell asks how focus groups can differ so much from the experts. The answer is what I wrote above - focus groups are great when you want to appeal to the masses (and I know nothing more mass-oriented than Top-40 radio), and that's what the research is geared towards.
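For readers who like to see the jellybean effect in numbers, here is a minimal sketch of it - a toy simulation with a made-up jar count and group size, assuming each guess scatters independently and roughly without bias around the true number. Individual errors stay large, yet the group average lands close to the truth:

import random
import statistics

random.seed(42)

TRUE_COUNT = 850    # hypothetical number of jellybeans in the jar
NUM_GUESSERS = 56   # hypothetical number of people guessing

# Each guess is the true count plus a large, independent random error.
guesses = [max(1, round(random.gauss(TRUE_COUNT, 300))) for _ in range(NUM_GUESSERS)]

avg_individual_error = statistics.mean(abs(g - TRUE_COUNT) for g in guesses)
group_average = statistics.mean(guesses)

print(f"True count:                 {TRUE_COUNT}")
print(f"Average individual error:   {avg_individual_error:.0f}")
print(f"Group average guess:        {group_average:.0f}")
print(f"Error of the group average: {abs(group_average - TRUE_COUNT):.0f}")

The point of the sketch is simply that independent errors tend to cancel when averaged - which is also why a group that pools its reactions drifts toward a middle-of-the-road answer.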

It is worthwhile to note that what Gladwell fails to ask is whether it was actually a good thing that the groups were so negative towards the singer. If indeed standard groups produce "mass-oriented results", then what the groups are saying is that this person will not be a hit on Top-40. Maybe, however, there are other marketing avenues to get this singer across. Perhaps the masses do not hear what the record executives hear, and therefore Top-40 is too wide an audience. That, to me, is what the focus groups are saying. It is not a negative that the groups did not like Kenna - in fact he even says "the problem is that this type of music requires a leap of faith." The fact is, though, that Top-40 programmers do not take leaps of faith.

So, this brings me to one-on-ones. In these settings, people make decisions independently, and the moderator has more time to probe deeply. Now, what do I specifically mean by probing more deeply? (Focus groups often claim that they do "deep dives" into people's opinions, so how much deeper can a one-on-one interview get?) Well, Gladwell in "Blink" says that when people evaluate new and unique products, they don't have the words to describe them, or relevant frames of reference. As such, people need time to develop these - a moderator can take the time to develop this language with an individual participant more easily than in a group setting. A moderator is able to initiate more challenges and probes in a one-on-one interview.

Evaluating a new product or even a new singer in a group does not leave room for individual opinion. Moreover, in a group, people are not willing to take the "leap of faith" that Kenna recognizes is necessary to enjoy his music. A one-on-one will determine whether there is long-term success for a product. A group will help determine whether there is mass, instant appeal. The issue for marketers is to determine which is more important to them. Unfortunately, in my opinion as a consumer, the latter seems to dominate.

Tuesday, July 31, 2007

"In The 1950's most major advertising agencies employed Freudian psychoanalysts"

This is a quote from Malcolm Gladwell's book "Blink". The central premise of the book is that humans make snap judgments within the blink of an eye - BUT this mechanism works at an unconscious or "unaware" level, and attempts to have people explain WHY or HOW they reach these quick decisions often end in failure. In fact, asking people to explain what the actual snap judgment was is often an exercise in futility.

Gladwell includes the quote because exploring why and how we make decisions is the purview of psychoanalysts and psychotherapists. In my own psychotherapy training, every time I asked "why" of a mock client, my instructors (and the entire class, if they were watching) would cringe at the question. When I was reviewing other students who asked questions like "HOW does that make you feel", I'd call "Dr. Phil" on them.

So how do psychotherapists get at hearing these snap decisions, and at asking the hows and whys? Typically, we look at two things - inconsistencies and explanations. Inconsistencies are often what academic psychologists spend a lot of time studying. They study physiological responses when people lie, they map the human face, or they grade how people interact with one another in order to get a deeper understanding of the relationships between individuals. Psychotherapists, however, do not have access to the monitors or methods these academics use - so we end up using what happens in the "here and now" as a way of digging deeper.

We are trained to look for inconsistencies and "explanations". There's a saying that goes something like "I can't hear what you're saying, because what you're not saying is deafening." It's what's not being said that is the useful stuff. For example, suppose we ask someone to comment on whether they like the look of a new product and they say "Ummmm... well... it's kind of ugly." What is of interest to us is the hesitation the participant is presenting us with, not what they are saying. There is a reason for the hesitation. What the moderator needs to do is get to the bottom of it as it relates to the client's product (as opposed to the participant's psychological issues).

What I would probably do is just repeat the words "kind of..." back to the participant and see where it goes from there. They may say "Well, I don't know... the colour is a bit off and the sides are too angular." At that point I have all I need - the participant has said "Well, I don't know...", so I know that the transaction is less about the ugliness of the product and more about the participant trying to come up with words to describe something that is new to them. The door is wide open for me to get deeper, more meaningful words than "ugly" to describe the product. I may say "I see - the colour and the sides. Let's not focus on those. You said it was ugly, yet I get a sense that that word was just your initial reaction. What else is going on?"

Besides having a hesitant participant, the exact opposite situation may occur. A person may say "Oh my God!! Look at the thing. It's a design disaster. My two year old could take Lego and do a better version of it than that!!" From there I may say, "Well, tell me all about it... let it rip." What I will do is let the person explain away, and exhaust themselves, their need to be superior and their need to explain - what this person wants is an audience, and a chance to be heard, whether or not what they are saying is useful or true. I will generally not be roped in by any of the explanation given (unless something is really striking). Once they are done, I will say something like "Now that you're done... look at it again - and this time, tell me what you really think." In this situation, it is critical to ensure that the participant does not think that I am trying to bias or change their answer in any way. What I am trying to do is get past the initial BS, and get to opinions that are not based on a person's neurotic desire to criticize.

In closing - Malcolm Gladwell states that when people evaluate new products, they often do not have the language to describe them. This is true - and another axiom of new product development is that people tend to criticize more than they compliment - people don't like change. There is no doubt that initial reactions play a very strong role in marketing - competition is fierce, and many companies can't take the "time" to see if something will succeed or not (which to me is a very sad state of affairs). However, what is initially "figural" for someone about a new product WILL eventually change or shift. It is one of the central beliefs of psychotherapy - people do have the ability to change. The examples above attempt to show how someone can go from an initial impression of a product to the next stage. A good marketing campaign will use this information to get people there faster, and with a higher rate of success.

Anyone wonder why we are called moderators?

When I started moderating about 10 years ago I wondered why that was the title given to us. Why aren't we called "interviewers", "facilitators" or "leaders"?

The answer suddenly hit me as I completed my training in psychotherapy. In Gestalt, one of the models of a healthy person is one who can deal with circumstances in an "even-keeled way" and take responsibility for his or her actions. In other words, they can moderate their behaviour.

Looking further at the definition of the verb "to moderate", we see it means "to reduce the excessiveness of; make less violent, severe, intense, or rigorous: to moderate the sharpness of one's words."

To me this gives the perfect definition of a focus group moderator - they are there to make sure that everyone's opinions, thoughts and emotions get equal time - no one person or thought should dominate. What this means to me, however, is that I am moderating the whole person within the qualitative research. My goal is not to just let a person's head or rational thoughts dominate the discussion. My goal is to view participants as a whole and moderate between their head, heart, emotions, dreams and fears. I know I have done a good job as a moderator when I have moderated the "sharpness of one's words" and brought out "the richness of one's emotions".

Communicating Research To Dumb-Dumbs

Bet the title caught your eye. For any clients or potential clients, the title is not what you think at all!! A few weeks ago I finished an assignment for a client whose main objective was to determine how they could build better relationships with one of the constituencies they serve. A specific focus was to determine how my client could present scientific research to their constituents in a way that would convince them of its validity.

In my focus groups, when I asked about their use of scientific research, my BS detector went off when I heard the responses. Most of the groups were made up of individuals who did not have a research or scientific background - so to them, research was "complicated" and "biased". Moreover, participants doubted the methods and conclusions drawn, saying they just did not know how the research could be so precise as to isolate specific effects of certain substances. I had no reason to doubt these stated responses, but the BS detector told me that these participants were actually scared of research and embarrassed about not knowing how to use or interpret it. This was the unsaid truth in the groups.

Knowing this, my recommendation to the client was not to modify how the research was presented - rather, I simply told them to present the research as is, but to LISTEN and actually AGREE with most everything their constituents had to say about the research, even though, in my client's eyes, what was being said was not true. I said in my presentation: "What you need to realize is that these people don't know how to use research and are scared of it. If you actually agree with them in general, and then softly point out how the research you are discussing is different, they will be much more likely to listen to you. You can teach them without triggering their concerns over not knowing much about scientific research."

The recommendation is based on a number of theories in Gestalt Psychotherapy:

- In every transaction, the other person has "the power", not you. If you can yield to this, your mind begins to open up and listen to the other person. In this example, my client's constituents are actually obliterating the research with a simple thought - how's that for power? They don't have to take any action - all the constituents have to do is say "I don't believe what you're putting in front of me." When this happens, you're dead in the water, and there is no way that you will be able to outmanoeuvre this state of mind. There's a wonderful saying I use - "Never argue with an idiot - they just drag you down to their level and beat you with experience." Now I'm not calling anyone involved in this research an idiot, but the saying holds. If someone is not going to see your point of view, no matter how "right" you are, you're the one who walks away frustrated and angry. The other person walks away "victorious", because they couldn't care less about increasing their knowledge - they care more about winning, and have in fact done so.

- There is a wonderful model called the "Co-dependency triangle". In it, people involved in any transaction can be described as "Persecutors", "Rescuers" and "Victims." In my client's case, the constituents were the Persecutors, and my clients were the Victims (i.e. "How could you not accept this research... it's completely valid"). The way to deal with a Persecutor is not to react to what they are saying, but to simply listen to it, keep listening and then show that you have listened. The persecutor simply wants to be heard. When that happens, the transaction opens to more possibilities.

- The Gestalt Cycle is a model that explains how a person makes contact with his or her environment. The very first part of the cycle assumes that a person is able to recognize that his environment is saying something to him (ever day-dream at the office and have someone call your name for 5 seconds? - you're missing the environment), and that what the environment says is valid (ever feel slightly warm, but wait too long before you blast your A/C? - you're dismissing yourself and your hot environment). My client was not recognizing any of their constituents' objections as valid. Once those objections are acknowledged as valid, my client is in a much better position to respond.

In closing, I have taken it upon myself to bring "humanity" into market research. While I have used three very "highfalutin" theories to describe why my client should engage with their constituents' objections, the bottom line is that I am simply recommending that my client make some human contact and show empathy to someone who is a bit scared and does not know how to express it. When looked at this way, who needs the theory, eh?

It's Time To Stop Using Qualitative Research As A "Pass/Fail" System

Qualitative Research, and in particular, Focus Groups, are the target of a lot of criticism because of their inability to accurately determine the success or failure of what is being tested. If I hear one more New Coke story, or Malcolm Gladwell's Herman Miller example in "Blink", I think I'll puke. However, despite my gastrointestinal reflux reflex, the fact is these stories ring true, and qualitative research (and I would argue Quant as well, but I don't do either Windows or Quant so I won't speak to it) should not be used to pass or fail a product.

A good psychotherapist will know within a second what someone's reaction is to a new product, regardless of what they state. A good researcher will know how to use that reaction to get relevant data for their clients. The main way to do this is to take the "focus" off the "evaluation" and put it more into a moderated discussion, or into a realm where people do not have to justify anything. "Evaluations" and "justifications" (e.g. Why did you react this way? What do you like... what don't you like...) have a very technical term in Gestalt Psychotherapy - they are called Bullshit. What ends up happening is that even though the researcher/therapist can see how a person reacts to a new product within a second, often the person doing the reacting is completely unaware of their reaction. As such, when they are asked to justify or explain, they end up being confused and spewing out answers in order to please themselves or the moderator. This also has the effect of "fixing" a person onto a specified answer. Participants will not want to be seen as "flip-flopping" in front of a group, so they tend to keep justifying a position that may have changed.

There are very few reactions someone can have when seeing something new - positive, negative or neutral (neutral could also be called confused). What needs to happen in a product evaluation is that even though the moderator knows the initial reaction, the moderator must let participants sit with their thoughts and feelings for 30-60 seconds before anyone has a chance to speak.

From there, the moderator simply probes with a "Well..." and lets the discussion flow. What begins to happen is that people are much more in tune with their initial impressions (they've had a chance to sort through their own BS, so their clarity of thinking is better), and often people will begin to refer to their initial reaction and discuss how it changes as they sit with their impression, or as they hear others in the group.

So, here's where the pass/fail concept gets thrown out the window. Someone could have had a negative initial split-second view of the product, but when they sit with their thoughts and feelings, the negative impression can melt away - and this is what the product test measures - the change in opinion (if any) from initial reaction through to final opinion. The change can go from negative to positive, positive to negative, or anything in between. The only thing that a moderator needs to be on the lookout for is whether the change in views is a real phenomenon or whether it is a result of "group-think", an attempt to please, or an attempt to "kibosh" a good idea. These three behaviours are considered neurotic and unproductive, and a good moderator will know how to get around them, and will know how much of the influence is legitimate and how much is neurotic.

So what we wind up getting from a concept test is not a pass/fail result at all. Instead what we get is a dynamic result (which, the last time I checked, is the way a market actually operates). We can measure initial opinions (which, to a good marketer or advertiser, should mean very little - their job is to change opinions after all), but more importantly we can measure how those opinions can be changed and influenced. The group discussion will illuminate what factors changed people's initial impressions of the product, or what factors keep people stuck in their initial impressions. The goal for the moderator is to ensure that the conversation is kept free of that very technical Gestalt term - bullshit.

And speaking of that, it is worthwhile to return to the initial split-second reaction observed by the moderator when the product is first exposed - the moderator needs to be keenly aware of it as people are speaking. It is possible that someone could "fudge" their explanations, or change their story based on what they hear in the group. It is important for the moderator to check what a participant is saying against the initial observed reaction. There is nothing wrong with a moderator calling someone on it and saying "I hear that you initially said you liked Product X, but I happened to see you out of the corner of my eye and I would have made a bet that you didn't like it - just let me know how far off I am." All this does is re-frame the participant back to their initial thoughts so that the data is more accurate, and we can more accurately measure the progression of thoughts and opinions.

Monday, July 30, 2007

If You Want "Real Usage" Figures Go Elsewhere!

Most reported "usage" data from qualitative research should be treated as suspect, and I would go so far as to recommend that my clients NOT make decisions based on usage data reported in qualitative research.

Before I go into details, let me provide a key reason for my recommendation. I have a summer cottage, and a few times a year we have myriad visitors, like cousins and friends of cousins who go to the local casino. The funny thing is that most every visitor to this casino comes back "a winner" - they doubled their money, beat the bank, killed at Blackjack and knew just how to bet at Roulette. My question to them is simply this - "What farking casino are you people going to, because every time I go, I lose money." When I challenge them with this, what invariably happens is that these "killer winnings" get whittled down to statements like "well, we won enough to pay for dinner" or "I did well at roulette, but I lost a lot at blackjack."

Now, most people know exactly what's going on. A bunch of beer-drinkin, nature-luvin, Muskoka-bound, middle-aged Canadian "dudes" want to "play up" their winnings so that they can puff out their chests bigger than the next guy. Of course, we know full well that the odds of so many people winning so much money are slim to none.

So, how does this relate to reported usage in qualitative research? There are three ways. The first is rather intuitive - many people in a group will "fudge" just a bit to blend in. The fudging may be to ensure that they are equal with the group, or to ensure that what they say corresponds to social norms. Some may just be scared of telling the truth. This is why we can't rely on qualitative research to provide us with numbers of any reliability.

However, the next two reasons are where a deeper understanding of psychology comes into play. The second reason my story relates to qualitative research is that it is not the reported numbers or usage that are important to us. What is important to us is the deeper feelings and thoughts that underlie the answer. If a moderator's BS detector is going off full-tilt like mine does when I hear these "casino stories", I know to probe further. What is it about this reporting that is common to most of these people? There is a hope that they will win, and a fear of looking foolish in front of others.

The third way this relates to qualitative research is what I do after the fact. In my casino example, I actually stood quite firm. The way I questioned people about the casino they went to was such that they knew I would likely follow up with "Exactly how much did you win?" or "Tell me the method you used that night." When people have to compound their "exaggeration", they usually back down. Now, in a focus group it is highly unlikely that I would ask "what casino are you going to..." - mainly because, as I mentioned, it is not important for me to get the truth in this instance. Instead, what I would likely do is say "Wow - there's a lot of excitement going on in here... Can someone show me what it is like to win a lot of money at a casino?" Engaging them in their fantasy will show the client how to enhance a brand, experience or communication to play to the perception of excitement that these players have. Attempting to find the "truth" of the matter would serve to ground participants in a reality that they would rather not face, and one that would not be appropriate for a casino message.

Thursday, March 15, 2007

Oh Go On - Pry Into People's Personal Lives

I recently finished a series of focus groups testing ad concepts. Fairly standard fare. As part of the concept, the agency presented us with some materials that attempted to show what people did in their private lives if they had spare time. The agency attempted to use humour to illustrate this, and our client (rightfully so) disagreed with the approach.

However, what the client asked us to do in the groups was "find out what people do privately in their spare time" so that we could present creative that was relevant. This, too, was a noble goal. The issue, however, was that the entire two hours of the groups were taken up by testing about 16 different versions of the creative, so all we could do, once we presented the creative with the spare-time concept, was ask "what kinds of indulgences do you do with your spare time?"

The question absolutely flopped, even though the client had been quite insistent on asking it. In all of the groups that I did, participants said things like "I thought you said these groups were confidential" and "You expect us to tell you that?" While I was able to manoeuvre around those objections and get to some sort of answer, I knew the response was far from truthful, honest or thorough.

While I have very little concern about this for the project I was doing (the main objective was concept testing - this was just a very incidental add-on), what bugs me is that my client was unable to see that this question would get him no valuable information. For some reason, he felt that it was OK, in the middle of a focus group, to stop everything and ask people to reveal what they indulge in during their free time - and what was worse is that he actually expected people to shift gears and answer the question.

The issue is that during the concept testing, people were very much thinking "in their heads" - they were evaluating, using logic, rating the ads, etc... They were not in their emotions, and were not at a place where they could share anything significant about their lives - yet smack in the middle of a group, the client expected to gather this information.

What concerns me is that there are people out there who are responsible for advertising and communication and still feel that people function like robots and approach things in a strictly logical and patterned way. "They're being exposed to activities they do in their free time in the ads, so ask them about the indulgences..." When I mentioned that he probably would not get much information out of this, the client just seemed to look through me, as if he did not understand.

Anyhow, the creative test went well, and the client got the information needed. Moreover, this was not the time to tell the client to do a fancy projective or other psychological exploration of free-time activities. What this client does need to learn, however, is that people are not logical, but emotional. Once the client accepts that, he may be open to the idea that emotions (such as the fear of exposing oneself in a focus group) follow their own set of rules and patterns, and that there is a certain logic to those "soft mushy" emotions that can be used both in focus groups and in effective communications campaigns.

Sunday, January 07, 2007

"Raving Fans" Researches "Friendliness"

I was recently reading "Raving Fans" by Ken Blanchard. It is a book I read every few years just to refresh myself about good customer service - after all, most customer service out there really sucks. It is nice to know that there are some people out there who are preaching that customer service can be an easy system to implement.

Anyhow, there was a reference in the book which said "Our research shows that friendly people talk about things not related to business." I thought this was a very interesting comment. Businesses and marketers tend to measure the cold, hard and factual aspects of a business. They ask questions like "What makes you satisfied", "What features do you want in the product" or "How does our quality compare to that of our competition", and on and on this fact-finding goes.

In light of these "fact-finding questions" (which I do not discount as important), it is refreshing to get a sense that there are people out there who ask "what does friendliness look like", "what does joy mean to you" or "describe a fun experience." These are emotional questions, and they speak to how a business actually delivers its marketing and its service. I mean, think of it - the quote suggests that people who want to deliver good customer service actually have a vision of what "friendly service" means. They are not making assumptions, and they are not imposing their own definitions - they are actually researching their customers.

Now, since I do have a background in psychotherapy, I know that simply asking "what does friendliness look like" is a sure-fire way to turn an emotional research topic into the same old fact-finding missions that many researchers typically use. If a researcher wants a full understanding of what a friendly experience looks like, they need to get a respondent to a place that is outside their logical, in-the-head responses. They need the respondent to be able to tap their own emotions in order to answer the question.

What I want to note, however, is that it does the research no justice to do this kind of emotional work and then attempt to "mechanically" implement a friendly customer service plan. For example, it does no good to say "Our research shows that friendly service means that we should not talk about business - therefore, all employees will talk about the weather, sports or a top-5 TV show." What this does is objectify the customer, the employee and the experience they will share.

What needs to be done is to trust that employees will know, from their own experiences, how to be friendly and how to give friendly service. However, to help them along, researchers should videotape customers describing, feeling and actually receiving friendly customer service, and then show this to employees. From there, we can give employees much broader guidelines about what to talk about, and encourage them to treat each customer encounter as a special opportunity to make real, friendly contact, because they will see exactly what a customer who is receiving friendly service looks like.