Our favourite measure for customer happiness is the Net Promoter Score (NPS). It doesn't measure happiness directly (we'd need brain electrodes for that), but it's a good proxy. The NPS simply asks,
How likely are you to recommend us to others?
Arguably, this is what matters most. We like to see our customers happy, but the likelihood of them spreading the word and helping us grow is particularly interesting.
We've spent a lot of time improving how we measure our NPS. By sharing this work and a few anecdotes, we hope to start a discussion and elicit even better ideas.
Happiness vs Growth
Not every business needs happy customers. Many do just fine with one-time sales from large markets. The problem with this approach is it requires a lot of money to constantly reach and convert new customers.
Alternatively, a business can grow on its merits. It's a scary proposition, but this approach is cheap and quite satisfying. It works like this:
Happy customers return again and tell their friends, which drives organic growth.
In the beginning, it may not work so neatly. If a business fails to create value, customers will fall away. New customers may come from more press coverage, but unless they leave satisfied, they too will fall away. This cycle repeats until the product offering is good enough to drive net positive growth.
What is the NPS?
At first glance, the NPS formula belies its simplicity. So, let's take a different approach and first decipher the name:
- Net = minus expenses
- Promoter = someone likely to refer you
- Score = a tally over time
Promoter Score (without the 'Net') just considers people who are likely to refer you. If you surveyed 100 people and 60 said they'd refer you, your Promoter Score would be as follows:
Promoter Score = 60/100 = 60
(Note: since the NPS is a 'score', it doesn't include the % sign.)
By adding the 'Net', we must adjust for expenses. In the case of the NPS, an expense is someone who's likely to refer you negatively. This person offsets the good word of others, so the NPS rightly takes this into account.
If you surveyed 100 people and 60 said they'd refer you, while 10 said they'd never refer you, your NPS would be as follows:
NPS = 60/100 - 10/100 = 50
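The calculation above can be sketched in a few lines of Python (the function name is ours, not from the post; the score is expressed as a whole number, per the convention above):

```python
def nps(promoters, detractors, total):
    """Net Promoter Score: promoters minus detractors,
    as a share of all responses, scaled to a whole number."""
    return round(100 * (promoters - detractors) / total)

print(nps(60, 10, 100))  # -> 50
```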
Imagine these customers sitting around a dinner table.
Another friend walks in and asks,
"I want to book a round-the-world trip, but I'm having trouble. Can anyone help?"
- 60 would say, "Use Flightfox, their experts rock!"
- 10 would say, "Don't use Flightfox, their experts suck!"
Keep in mind, there are still another 30 people around the table who previously used Flightfox, but they were passive. They likely had a good experience, but not good enough to proselytize.
Now, imagine yourself as this person looking for help. With 60 promoters and 10 detractors, would you use Flightfox? What if the numbers were 40 and 25? Or 10 and 15? You can see how the concept of the NPS plays out in real life.
Tough Love
Consider rating the last restaurant you visited. On a scale of 0 to 10, how likely would you refer it to family and friends?
[Rating scale from 0 to 10, labelled Not At All Likely (0), Neutral (5), Extremely Likely (10)]
Maybe a 7/10? That's reasonable, but still nice, right? Well, the NPS doesn't know nice; it only knows tough love. See how the NPS views these ratings:
[Rating scale from 0 to 10: 0-6 = Detractor, 7-8 = Passive, 9-10 = Promoter]
Let's say this restaurant surveyed 100 people:
- They received 10 ratings of 9/10+
- They received 70 ratings of 7/10 or 8/10
- They received 20 ratings of 0/10 to 6/10
Wow, 80% of customers rated 7/10 or higher! Time to sing from the rooftops, right? Not according to the NPS. The NPS of this restaurant would actually be negative:
NPS = 10/100 (promoters) minus 20/100 (detractors) = -10
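Starting from raw 0-10 ratings, the classification and the score can be sketched like this (function name is ours; the cutoffs are the standard ones described above):

```python
def nps_from_ratings(ratings):
    """Compute NPS from raw 0-10 ratings.
    9-10 = promoter, 7-8 = passive, 0-6 = detractor."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

# The restaurant above: 10 nines, 70 sevens, 20 sixes
ratings = [9] * 10 + [7] * 70 + [6] * 20
print(nps_from_ratings(ratings))  # -> -10
```

Note how the 70 passives contribute nothing to the numerator, so even an 80% "positive" survey lands at -10.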
If you look closely at the formula, you'll see how promoters, passives and detractors impact your NPS:
NPS = (Promoters - Detractors) / Total
(Promoters minus Detractors is the numerator; Total is the denominator.)
- Promoters increase both the numerator and the denominator, which increases NPS quickly
- Passives increase the denominator only, which decreases NPS slowly
- Detractors decrease the numerator and increase the denominator, which decreases NPS quickly
Imagine the restaurant improved and hit an NPS of 50. Their results may look like this:
- They received 70 ratings of 9/10+
- They received 10 ratings of 7/10 or 8/10
- They received 20 ratings of 0/10 to 6/10
This shows how difficult it is to achieve a high NPS. Imagine running a business in which 70 out of 100 customers rate you a 9/10 or a 10/10.
Not easy.
It's not enough for people to like you (7 or 8), they must love you (9 or 10). Here are some more interesting points:
- The NPS scale goes from -100 to +100
- For a positive NPS, you must have more promoters than detractors
- You can simplify the formula to (promoters - detractors)/total
Everything Matters
A restaurant needs more than good food to ensure a high NPS. In fact, there are three drivers of a customer's rating:
- Functional Value
- Was there enough food?
- Was it reasonably priced?
- Was the wait too long?
- Emotional Value
- Did the food taste good?
- Did the waiter say please and thank you?
- Did the waitress wink at you?
- Behavioural Bias
- Are you happy or sad?
- Are you prone to high or low ratings?
- Are you a business owner and do you sympathize?
The subjectiveness of customer ratings says nothing about the robustness of the NPS concept. Critics say the NPS is flawed because it's susceptible to environmental factors, but that's nonsense. If customers are influenced by their environment, then our proxy metric for customer happiness should be influenced by their environment too.
Sample Bias
The easiest way to increase your NPS is to only survey happy customers. Sometimes people do this on purpose, but often it happens without you even knowing.
For example,
- Do you survey refunded customers?
- Do you survey users on free plans?
- Do you survey customer segments in the right proportions?
The goal is to survey every customer or at least a cross-section that accurately represents your entire customer population.
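One way to check for this kind of bias is to compare each segment's share of survey responses to its share of the customer population. Here's a minimal sketch (function and segment names are ours; the example numbers mirror our refund story below, where 100% of refunded customers rated us but only 20% of the rest did):

```python
def sample_bias(population_counts, survey_counts):
    """Per segment: survey share minus population share.
    Large gaps (positive or negative) flag sampling bias."""
    pop_total = sum(population_counts.values())
    survey_total = sum(survey_counts.values())
    return {
        seg: round(survey_counts.get(seg, 0) / survey_total
                   - population_counts[seg] / pop_total, 2)
        for seg in population_counts
    }

# Refunded customers forced to rate; happy customers optional
print(sample_bias({"refunded": 100, "completed": 900},
                  {"refunded": 100, "completed": 180}))
# refunded customers are heavily over-represented in the survey
```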
For a time at Flightfox, we unintentionally skewed our NPS in the negative direction. We made rating optional for happy customers, but mandatory to receive a refund. While 100% of refunded (unhappy) customers rated us, only 20% of happy customers rated us. That was a tough month, until we realized the sampling error.
The holy grail is surveying all customers at the point of delivery. That's when their impression of value is fresh.
At Flightfox, we now ask every customer (even those refunded) to rate us immediately at the point of delivery. We include all the ratings in our final metric to keep it conservative.
Fear of Reprisal
Ratings are rarely anonymous. Customers aren't naïve either: they know that, by definition, everything they share is... shared. This can lead to a fear of reprisal.
Remember the early days of eBay? If you gave someone a low rating, they'd reciprocate. You wanted to maintain a high rating, so you only gave high ratings too.
We unknowingly made this mistake. When Flightfox dropped crowdsourcing, the following happened:
- Customers worked 1-on-1 with a single expert
- Experts provided a concierge level of service
- NPS shot up from 30 to 90
If Apple's NPS is about 60, how could little ol' Flightfox clock 90? I wondered, "what changed?" Then I found this:
Not only had we made the rating personal by mentioning the expert's name, but there was a large photo of the expert's face right there above the rating form.
Aha!
Customers were now rating the expert, not Flightfox. That seemed innocent enough, but we contacted customers and it became clear they feared reprisal.
Easy fix! Or so we thought.
We removed the expert's name, removed the portrait, and stripped the rating page of any personality. The NPS dropped, but it remained in the high 70s, which was still too high.
Aha! (again)
What about including both ratings?
- How likely are you to refer [expert name]? and...
- How likely are you to refer Flightfox?
This gave customers an out. They could give their expert a 10 out of 10, but give the BBCE (big bad corporate entity) a rating it really deserved.
The next week:
- Average expert rating = 75
- Average Flightfox rating = 26
Oh, yikes. Had we just wasted months thinking our new product was much better only to realize our hope was based on nothing more than a manipulative rating page?
Not quite.
It was still a tiny sample, and NPS is unreliable below about 100 ratings. Once we passed that threshold, the NPS stabilized at levels higher than ever before.
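A quick simulation shows why small-sample NPS is so jumpy. This sketch assumes a hypothetical population with fixed promoter/detractor probabilities (not Flightfox's actual data) and reports the range of NPS values seen across repeated surveys:

```python
import random

def simulated_nps_spread(p_promoter, p_detractor, n, trials=2000, seed=0):
    """Repeatedly survey n customers from a population with the given
    promoter/detractor probabilities; return (min, max) NPS observed."""
    rng = random.Random(seed)
    scores = []
    for _ in range(trials):
        promoters = detractors = 0
        for _ in range(n):
            u = rng.random()
            if u < p_promoter:
                promoters += 1
            elif u < p_promoter + p_detractor:
                detractors += 1
        scores.append(100 * (promoters - detractors) / n)
    return min(scores), max(scores)

# True NPS is 50 in both cases; only the sample size differs
print(simulated_nps_spread(0.6, 0.1, 20))   # wide spread at n=20
print(simulated_nps_spread(0.6, 0.1, 200))  # much tighter at n=200
```

With only 20 ratings, a "true" NPS of 50 can easily show up as anything from roughly 0 to 100; with 200 ratings the range narrows sharply.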
Revenue-Adjusted NPS
While we were feeling cocky about our latest NPS discoveries, we came up with another idea: what about revenue-adjusted NPS?
Instead of,
NPS = (# of promoters - # of detractors)/total #
What about,
NPS = ($ of promoters - $ of detractors)/total $
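As a sketch, the revenue-adjusted version just swaps counts for dollars (function name and example figures are ours):

```python
def revenue_adjusted_nps(customers):
    """Revenue-adjusted NPS: weight each customer by what they paid.
    `customers` is a list of (rating, revenue) pairs; rating is 0-10."""
    promoter_rev = sum(rev for rating, rev in customers if rating >= 9)
    detractor_rev = sum(rev for rating, rev in customers if rating <= 6)
    total_rev = sum(rev for _, rev in customers)
    return round(100 * (promoter_rev - detractor_rev) / total_rev)

# One big unhappy customer drags the score more than the headcount suggests
customers = [(10, 500), (9, 100), (8, 200), (3, 400)]
print(revenue_adjusted_nps(customers))  # -> 17 (count-based NPS would be 25)
```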
We tried it for a while, but it caused an odd behaviour. Day-to-day, we were asking each other how much a particular customer had paid and then metered our attention accordingly.
This may sound reasonable, but we concluded the following:
Customers who pay more should receive more, but not necessarily matter more.
So after a few weeks we reverted to the traditional calculation and liked that it was simpler to calculate on the fly.
Tip or Tweet
What if customers aren't honest? What if they choose any old rating to get to the next page? Wouldn't that negate everything about our focus on the NPS?
We wanted to test the following:
- Does referral intent = customer happiness?
- Does referral intent = real referrals?
- Does referral intent = growth?
We added a feature to dynamically display social sharing buttons if the customer selected a 7/10 or higher. Not everyone has a social account, and even fewer are willing to pollute their feeds with referrals, but we gave it a shot.
Yay! About 10% of people shared.
But, we received many more shares on Twitter and Facebook than we recorded with this feature. People were sharing on their own terms, which made sense.
***
At the same time, our experts were getting vocal about allowing tips. Customers expressed a desire to tip, and we'd handled a few through PayPal, so we decided to build the functionality. We realized this would also be a good confirmation of our NPS.
Truth be told, I didn't expect many tips, maybe one per month. I don't come from a tipping culture, which is also why I was hesitant to allow tips in the first place.
We even had an in-house bet.
Most of the team predicted between two and five a month. When we interrupted our lead developer, Oscar, he grunted, "30". He clearly wasn't listening, so we laughed and put his number on the whiteboard:
- Todd = 1
- Lauren = 3
- Grace = 5
- Oscar = 30 (HA!)
So we built a page we call Tip or Tweet.
If you rate us a 7/10 or higher, we ask if you'd like to tip or tweet. The minimum tip is $10 and the tweet box is wired up and ready for you to post your referral with ease.
Guess what happened? :O
- In May, 20% of customers who rated 7/10+ left a tip
- The average tip was US$19, almost double the minimum
- More people tipped than tweeted
In our minds, if there were ever proof our high-NPS customers were happy, it was that they were putting up cold hard cash.
Oscar won the bet.
***
Of course, we still don't know whether intent to refer drives organic growth. It should in theory, but a better question we're asking is as follows:
Is a fanatical focus on customer happiness a great way to build our business?
In fact, that's the overarching question of this blog, which we discuss here: The Happy Customer Experiment.
Our Results
Here's a summary of what we've done so far to measure customer happiness with greater accuracy:
- Ask everyone to rate, even refunded customers
- Depersonalize the rating page to reduce fear of reprisal
- Add sharing and tipping features to confirm our NPS
Here is Flightfox's NPS data for 2014:
Firstly, keep in mind this is only 2014 because we rebuilt our system over the New Year. In 2013, our NPS ranged from -10 to a maximum of about 30.
Our 2014 goal was to get Flightfox NPS above 50 and keep it there. Rightly or wrongly, we consider this milestone a sign of reaching product-market fit. We've now maintained 50+ for 7 weeks, so our new goal is 60.
If you have any NPS war stories or ideas how we can improve our measurement of NPS, please leave a comment below.
All the best,
@todsul