Nick Disabato on Increasing Your Profits with A/B Testing

Today, on the Early-Stage Founder Show, I’m talking with Nick Disabato who runs Draft, a design consultancy that uses research-driven A/B tests to generate revenue for online businesses.

In 2017, there are very few founders out there who aren’t familiar with the concept of A/B testing, but even with that awareness, very few have actually established a consistent testing process in their own startup. A big reason for that is that it can be hard to know where to start and what to actually test.

In our chat, Nick lays out the exact process he follows to use A/B testing to make more money for his clients and he also shares lessons you can apply even if you don’t have a lot of traffic.

If you’re growing but you feel like you could be growing faster, then this is the episode for you.

Subscribe on iTunes

Topics covered:

  • (00:21) – Nick discusses the problems in the approach many startups take when using AB testing.
    • (00:26) – Practicalities in testing.
    • (00:44) – Mindset shift.
  • (02:08) – Why it is so important for companies to use testing correctly.
    • (02:21) – The fundamental principle of optimization.
  • (03:41) – Nick walks us through the research-driven approach to testing that is used by his company Draft.
    • (05:53) – Discussing the long-term approach aspect of the Draft Revise process.
  • (06:51) – Nick discusses the next step after the initial optimization, which is the research process.
    • (07:13) – Quantitative research.
    • (07:27) – Qualitative research.
  • (08:56) – An exploration of how the various research, and the data it uncovers, can inform the AB testing process.
    • (09:16) – Click maps.
    • (09:55) – Heat maps.
  • (10:56) – Nick shares some specific examples of businesses he optimized and prepared AB tests for using research.
  • (14:24) – Nick explains the details of the customer interview and survey processes in determining customer motivation.
    • (16:56) – The different angle needed for SaaS companies.
  • (19:24) – Translating research data into actionable tests.
    • (20:53) – Ranking ideas for potential tests.
    • (21:37) – The subject of high-risk, high-reward when it comes to testing.
  • (24:37) – The nuts and bolts practicalities of building a test.
  • (26:48) – Establishing the goals of the test.
    • (26:57) – For SaaS
    • (27:42) – For e-commerce
  • (28:03) – How to evaluate when the testing phase is over.
    • (28:23) – Statistics and sample size.
  • (33:56) – Nick summarizes how he currently works with companies to implement his process of optimization and testing.
    • (35:00) – How he evaluates when a company is ready to utilize testing.

Rapid-fire Questions:

  • (40:01) – What do you spend too much time doing?
  • (40:28) – What do you not spend enough time doing?
  • (40:59) – What are you hoping to accomplish in the next quarter?

Resources mentioned:

  • Hotjar (12:51) – Heat map software to see what your visitors are actually doing on your site.
  • UserTesting (13:06) – Usability software which uses live voice recordings during the testing of your site.
  • Typeform (14:55) – Online survey software Nick recommends for gathering customer data.
  • Wufoo (14:56) – Online survey software Nick recommends for gathering customer data.
  • AB Testing Manual (17:26) – A course put together by Nick and Patrick McKenzie.
  • Trello (19:52) – Web-based project management application tool Nick uses to build tests.
  • VWO (25:00) – AB testing and analytics tool.
  • Optimizely (25:00) – AB testing and analytics tool.
  • Google Optimize (25:00) – AB testing and analytics tool.
  • Baremetrics (27:20) – Stripe analytics database that Nick recommends.

Where to learn more:

To hear more from Nick and learn how Draft can help increase your revenue, or to sign up for one of their courses, head over to Draft.nu

If you’re curious about putting the information Nick was generous enough to share into practice, check out his most recent course at ABTestingManual.com.

Transcript:

Andy:  Nick, thanks so much for coming on the show today.
Nick:  00:00:03  Thank you so much for having me, I really appreciate it.

Andy:  And so I’d be surprised if there was practically any startup founder out there who’s not familiar at least with the concept of AB testing.  But I also know that most startups aren’t running that many tests and those who are aren’t really taking a sophisticated approach to it.  00:00:21  So in your experience, what is wrong with the way most startups are approaching AB testing?

Nick:  00:00:26  So there’s kind of one of two things that typically happens.  There’s either a —  they’re heeding the best practices so it’s like, “ok what are the things that you AB test?”  Well it’s your call-to-action, it’s your headline, it’s maybe a little bit of copy, something like that.  00:00:42  Which sometimes works, but you haven’t researched what copy or in what way to change it.

00:00:45  And then the other thing that commonly happens is there hasn’t really been a mindset shift around focusing on data.  You’re using AB testing to settle internal debates or 00:01:00  figure out what looks the prettiest or something like that.

And those are — they might be useful for, you know, defanging an internal debate and being like, “Just test it.”  But that’s kind of putting a bandaid on it and not really addressing the underlying issue.  The issue is you want to improve conversions, and you want to generate revenue for the business.

Well, how are you doing that by settling an internal debate right?  It feels like you’re coming at the wrong solution in that situation.  00:01:27

And so I see both of those things happening pretty frequently.  You end up with either people still embracing the kind of move-fast-and-break-things mindset, where they are debating internally and not really thinking about their customers or analyzing their customers.  Or you end up with the situation where people are just kind of finding what has worked well in other previous AB testing case studies.

Andy:  00:01:53  It’s funny, because when I do talk to a lot of startup founders about whether or not they’re testing, a lot of times it’s, “Ya, ya, I know I should be doing this, but…” and they’ll rattle off a few reasons for why they just don’t have the time, or why it’s not that important to them.  00:02:08

And so, from your perspective, someone who has worked with a lot of companies and seen the results, why is it so important to really get this right and build this into the way a company functions?

Nick:  00:02:21  So the fundamental issue, and I think people agree with this, is the fundamental principle of optimization.  Right?  You put out a redesign, you put a lot of time and effort into it and it looks great, but it’s not perfect.

And you want to make it — it’s the whole point — you want to make it optimal.  00:02:39  Pavlov Hoffman from ConversionXL often says that your website is leaking money, and optimization plugs the leaks.  I definitely agree with that.

There’s usually a situation where there’s a usability flaw you might not know of, or there’s a group of customers that might have objections that you haven’t addressed yet, or there are certain interaction issues on the website, where people are beelining for something that doesn’t actually fulfill your business objectives, and all of those things are hugely problematic right?

00:03:05  And so it’s not just that you’re AB testing, but that you’re assessing what that behaviour looks like, and figuring out how to match people’s expectations around the site and deliver something that they really want.  00:03:19  And you’re using AB testing as a measuring tool around that right?

AB testing is very much the means, but it’s not the whole thing.  It’s like saying, “Oh I have this hammer, now I’m going to build a house.”  Well, you need the blueprint, you need the process, you need everything that goes into how to use that hammer effectively.  And maybe a hammer’s not the right tool for it in the first place.

Andy:  00:03:41  And I think that transitions really well into what we’re going to spend the bulk of the show talking about, and that is the actual process of how to use that hammer well.  00:03:51  Because I know at Draft, you use a research-driven approach to AB testing, and while, like we were saying before, the exact approach is going to vary for every client you work with, can you walk us through what a typical process looks like?

Nick:  00:04:04  Yeah, so I have a process.  If you go to Draft.nu/method, it’s called The Draft Method.  It seems like every design agency has their own process and that one is ours.  Basically, you come in the door and there’s always going to be a lot of one-off tweaks that we’re going to be doing.

I have never in my life encountered a business that came in the door looking flawless.  Because they probably hired me for a reason right?  We go through and configure funnels and goals in Google Analytics.

Because often you have just left your Google Analytics installed [Inaudible 00:04:37] and I come in and figure out, well how do we track revenue?  How do we track conversion rate?  Sometimes you did that and it broke, or it was reporting the wrong data in the wrong way, so I’ll clean up messy data reporting on those fronts.

I’ll run heat and scroll maps on every page in your funnel and assess where people are actually clicking and how far they’re scrolling down on a webpage.  Figure out what 00:05:00 constitutes success for the engagement.  A lot of the times people come in and they’re like, “Well we want you to make us money.”

And I’m like, “Cool.  How should I go about doing that?”

And they’re like, “Well make the conversion rate go up.”

I’m like, “I know that.  Who should I be talking to, as far as a customer, to increase the conversion rate?  Is it everybody?”  And I try to tease that sort of thing out as a consultant.

Andy:  00:05:30  A lot of this seems like almost setting the foundation.  And I think that this is one step that, when you’re doing it yourself as an early-stage founder or employer or whatever, you skip a lot of.  But if you don’t have some of this tracking in place, if you don’t know specifically what goals you have in place, it’s going to be impossible to actually know if you’re making progress.

Nick:  00:05:53  Yeah.  Yeah.  And it’s very much like a long-term, durable process right?  So one of the questions I ask when people apply to do Draft Revise with me is how quickly you expect changes to manifest.  And if you say two weeks, that is a thing to be talking about at our initial call right?  If you say three months, then that seems to me to be a little bit more of a realistic expectation there.  And that seems like, “Oh my god, three months I’m going to be doing all of these things!”

Well, you’re changing your process right?  And you’re changing your mindset around it.  And then you start to see returns, and it’s scary.  It’s something where you’ve been operating in an industry that only knows quick wins for so long, and unlearning those bad lessons is scary, from a business owner standpoint.  00:06:45  I understand that probably as well as anybody.

Andy:  00:06:51  And so once you do have this kind of first step in place, once you’ve gotten — like we said before, the foundation.  You’ve cleaned things up.  You’ve got it so that you can hit the ground running at least.  You’re still not just going to start throwing up a blind test to settle some debate or anything like that right away.  So what comes next?  What comes after that initial optimization?

Nick:  00:07:10  So it’s that thing that everybody loves cutting out of budgets.  It’s research.  And research can take a lot of forms.  It can be either quantitative research, so what I mentioned:  analytics, heat and scroll maps, click maps, maybe even survey scores, if you’re doing Net Promoter 00:07:27  [Resource Mentioned] or something like that.

And it’s also qualitative.  You can do free text survey responses.  Sometimes it’s the same survey.  Usability tests.  Customer interviews, that sort of stuff.  Those are all really valuable ways of assessing customer motivations.

And I tell you, I’ve never encountered a situation where I’ve done research and it hasn’t surprised people on the team.  So even if you think — the number one objection is, “We already know our customers.”  And I’m like, “Well actually, probably you don’t.  And also you hired me, so let’s try this thing.”

And I’ll fall on my own sword on it honestly.  I’m willing to risk it:  if you just deadpan your way through it and you don’t care, fire me.  That’s great.  And nobody fires me after I do the research, so that’s great.  But so there’s the upfront process of 00:08:20 doing it, and usually I’m doing interviews and a little bit more of a substantial, deep dive into it.

I’ll run an annual survey for all of your previous customers, or something like that, and get a lot of information right at the beginning.  00:08:32  But it’s also kind of continuous.  So as you’re running AB tests, I’m still looking at your analytics, and I’m still looking at your AB test results and your heat maps, and determining how customer behaviour is changing as we’re mucking with your funnel.  So, it’s a constant process of change followed by measurement and it’s all basically in the service of listening to people.  I spend a lot of time listening to people.

Andy:  00:08:56  And so there’s a few things I want to unpack there.  First, there’s the quantitative side, where you have those heat and scroll maps, you have the click maps, what are you looking for in that data?  What does, for example, a click map — what will that show you?  How will that help inform a test?

Nick:  00:09:16  That’s a great question.  So I’m looking at where people are clicking and also where people are not clicking.  Those are the two fundamental things that you care about.  But also, as far as heat maps and click maps are concerned, you can kind of quantitatively assess what proportion of people are clicking on what things.  00:09:34

So a click map is basically a lot of dotted outlines around every DOM element on your page, and it shows you that this number of people and this share of clicks occurred here.  And when you hover over an element, you get a little tooltip that tells you, “Ok, 28% of people clicked on your primary call to action and 8% on your secondary.”  Something like that.

00:09:55  Heat maps are a little bit fuzzier.  They’re just, this one is red and this one is purple and it’s all on a grey background.  It’s saying that a lot of people are clicking here, they’re beelining for that, and that seems to be a clear thing that’s happening.  And that’s something you can show to a client.  For one, it’s fascinating, because clients are always just shocked to be getting back heat maps.  They’ve never seen that sort of thing happen on their business before.  But more importantly, it actually gives you a tremendous amount of insight 00:10:19 and helps you control the narrative about how your customers are operating on a page.
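
To make those percentages concrete, here is a minimal sketch, in Python with made-up click events and selectors, of the per-element aggregation a click map reports. It is not how Hotjar actually computes anything; it just shows the arithmetic behind the “28% primary, 8% secondary” view.

```python
from collections import Counter

# Hypothetical export from a heat-mapping tool: one row per click,
# tagged with the CSS selector of the element that was hit.
clicks = [
    {"visitor": "a1", "selector": "#primary-cta"},
    {"visitor": "a1", "selector": "#primary-cta"},
    {"visitor": "b2", "selector": "#secondary-cta"},
    {"visitor": "c3", "selector": "nav .pricing"},
    {"visitor": "d4", "selector": "footer .instagram"},
]

clicks_by_element = Counter(c["selector"] for c in clicks)
total_clicks = sum(clicks_by_element.values())

# Share of all clicks per element, most-clicked first.
for selector, count in clicks_by_element.most_common():
    print(f"{selector}: {count} clicks, {count / total_clicks:.0%} of all clicks")
```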

Andy:  00:10:29  And so for that, is it really just trying to verify that?  Every page should have a clear goal for what you want users to do, and obviously, every founder has their own internal story they tell themselves of what a user goes through to actually sign up, or do whatever it is that they want them to do.  Are you using these tools to verify that they are actually doing that?  Or what do some of these insights help lead you to do?

Nick:  00:10:56  Yeah, that’s some of it.  I’ll give a couple of anecdotes and then I’ll talk about the broader process for AB testing.  One of the situations — I worked with a SaaS business and they put up this PDF toolkit that was extremely valuable about configuring DNS for your website.

And they were like, “People are going to love this!  It’s a really good way of content marketing!”  And no one went for the toolkit, and nobody even scrolled to the toolkit and they all just beelined to pricing.  And I was like, “Well, this is not happening.  We can either make the toolkit head-slappingly prominent, which is possible, a lot of people do that as an [Inaudible 00:11:27], or we can fight a different battle.”

Other situations where people are — if they’re on, say, an e-commerce site and they go to a product page, and it’s a popular product and the bounce rate away from there is mostly to view other products, well, they’re probably trying to compare products.

So, if that’s the case, then add a product comparison table on this page to help them understand what is going on.  00:12:00  And to bring this back to the broader process, you’re saying, “Ok well, they’re doing this, let’s come up with a speculation as to why.

And maybe I’m right, maybe I’m wrong about that.  Now what do we do about that?  Is this good for us?”  If everybody is zooming on the buy button, then great, you know.  That’s terrific.  But usually, the picture is a little muddier.  It’s not — you end up with people bouncing back to the store, clicking the footer for some reason, and looking at social media.  Maybe we should add an Instagram roll, or something like that.  And you’re trying 00:12:35 to think why are they doing that.

And so the answer is, sometimes I come up with a decent hunch:  They’re doing it because X, let’s do this.  Or, the answer is, let’s research more into it, so we can do what are called behaviour recordings.  Hotjar.com 00:12:51 [Resource Mentioned] allows you to do this pretty easily.  And basically, you’re watching their finger if it’s a touch device or their cursor if it’s a desktop, as they go around a webpage.  And you’re following along.

Or, run a usability test on a site like UserTesting.com 00:13:06 [Resource Mentioned] where you get somebody to vocalize their monologue as they go through and complete a task.  So, you just tell them, “Buy this product.  Find the blue one and buy the blue one and here’s a fake credit card.”  And they go through and they do that.  But they’re talking out loud as they’re doing this.  They’re giving you their impressions of the page.

And so that gives you a great deal more detailed insight into what they might be thinking that allows you to confirm or deny whether or not you are right.  I don’t know — I just find that I’m less wrong at the end of the day.  And I’m always surprised as to what people’s motivations are and the whole point is that you spend a lot of time paying careful attention to that.  Does that answer that?

Andy:  00:13:47  Yeah, I think so.  And to go a little deeper, because you also talked about the customer interviews.  And the only reason why I’m stressing so much on this one phase so far is because I think you’re right that people’s gut reaction, their instinctive reaction when someone says we need to dig more into this, is to say, “I already know my customers, this is a waste of time, let’s move on.”

But it really is so important, because as you said, when you actually do it right you’re going to have so many surprises, and if you’re surprised, that means you’re learning, and the more you can learn, the better you can test and the faster you can test.

And so that’s why I’m trying to dig into this more.  00:14:24  So on the customer interview side of it, are you truly just — say you have a good idea, you have your hypothesis by looking at some of the other data, and you want to dig a little deeper.  Are you just asking, “Hey why didn’t you buy this?” or is there a bit more nuance to it than that?

Nick:  00:14:38  I mean sometimes I’m doing that.  But there are a few different points where I can go at this.  One of them is surveys.  So, I can do a survey where I just send it out to all previous customers and something on like Typeform 00:14:55 [Resource Mentioned] or Wufoo 00:14:56 [Resource Mentioned] and it asks questions about like, “How did you find out about us?  Are you still using the product?  What competitors did you vet?  What was the last thing that held you back from purchasing?” Etc., etc., etc..

But one thing I love getting in place for my clients is more of a continuous surveying process.  00:15:14  So, on the thank-you page after someone buys a product, and this is amazing for e-commerce stores, you throw in a one-question survey like, “Tell us about your experience today,” and it’s just a free text field.

And most people aren’t going to fill that out, but you’re always going to get somebody filling it out who was like, “I was frustrated by this” or “At one point I was skeptical, but I’m excited to be getting my thing.”  Or you set up a lifecycle email sequence where, two days after delivery, and you can delay it further for international customers or something like that, but two days after domestic delivery, you just email them and are like, “Have you had a chance to take a look at this?  Can you just let us know any thoughts?  What held you back from purchasing?”  A few example questions, right.  00:16:03

You want to know — the goal is to tease out what people’s motivations were about it, and also who else they were looking at.  So, are there significant customer objections that you may or may not have actually addressed?  Who else did they look at and what did they like about what they looked at?  And you’re using all of those things to eventually craft a pitch that you can test.  00:16:30  It’s funny, I’m not even talking about how to run an AB test, because there are a million places for that — go on VWO’s blog 00:16:38  [Resource Mentioned] and read all about how to run an AB test.  But the answer is to come up with the testable idea.  Because that is the number one question that I get.  00:16:49  What should I test?  And the answer is, “I have no idea, go through this process.”
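
The delayed post-delivery check-in Nick describes above is easy to prototype. Here is a minimal sketch assuming a hypothetical send_email helper and an orders list that carries a delivery date and shipping zone; the two-day and five-day delays are only examples, and you would swap in your store’s data and your email provider’s API.

```python
from datetime import date, timedelta

# Wait longer before nudging international customers, as Nick suggests;
# the specific delays here are just examples.
FOLLOW_UP_DELAY = {"domestic": timedelta(days=2), "international": timedelta(days=5)}

def send_email(to, subject, body):
    # Stand-in for your email provider's API; prints instead of sending.
    print(f"To: {to}\nSubject: {subject}\n{body}\n")

def due_for_follow_up(order, today):
    delay = FOLLOW_UP_DELAY.get(order["shipping_zone"], FOLLOW_UP_DELAY["international"])
    return order.get("delivered_on") is not None and today >= order["delivered_on"] + delay

orders = [
    {"email": "jane@example.com", "shipping_zone": "domestic", "delivered_on": date(2017, 6, 1)},
    {"email": "sam@example.com", "shipping_zone": "international", "delivered_on": date(2017, 6, 2)},
]

for order in orders:
    if due_for_follow_up(order, today=date(2017, 6, 3)):
        send_email(
            to=order["email"],
            subject="Have you had a chance to take a look?",
            body="Any thoughts on your experience? What almost held you back from purchasing?",
        )
```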

Andy:  00:16:56  Those are great examples for the e-commerce and just thinking about how that would apply to SaaS as well.  During that trial period, during those first few months where someone has become a paying customer, where they have likely, very recently, gone through more of a buying decision and done some comparison, and had some objections.  That seems like the perfect time to be asking a lot of these questions of your newer users.  Would you agree with that?

Nick:  00:17:21  Yeah, yeah.  My friend Patrick McKenzie, I did a whole course with him called The AB Testing Manual 00:17:26  [Resource Mentioned] at one point.  But he has a separate course all about running life-cycle emails for SaaS.  And the whole conceit of it, spoiler alert, is that you are doing that to get the person on-boarded better, so maybe there’s data migration issues, or integration with your website, or it’s a B2B thing, or something like that.

But also, keeping the channel open to support, so that if they are having problems, or if there was an expectational mismatch, or there might be marketing lessons to be gathered, you are making sure that they are listening.  00:18:02  I can’t tell you how many SaaS businesses don’t send an email that looks like it is coming from the CEO or something like that, like two days after you sign up.  And that is one thing you can do to show that you’re actually paying attention to the customer.

Andy:  00:18:19  And people respond to those and they share a lot of valuable insights.  It’s mind-boggling to me as well that they’re not at least trying to open some kind of communication with the customer, especially at that point in the life cycle.

Nick:  00:18:34  Yeah, absolutely.  Absolutely.  As far as why you didn’t buy — like sometimes I’ll do that at the end of a drip campaign, so I’ll have you sign up to join a fan club or something like that, or you’ll get the Toolkit PDF or something if they’re doing a more concerted content marketing initiative, but eventually, you provide value, value, value, value and then you do a soft sell, value, value, hard sell, and then after the hard sell, you wait two days, “Is there any reason why you weren’t considering this at this time?  Reply to this email.”

Oh you get amazing stuff.  00:19:08  People are apologizing for it too.  They’re like, “I’m so sorry that it’s not for us at this time.  We’ll keep you in mind for sure.”  Because you gave them all of this value for like two weeks.  You get a lot of really good insight from it, and you begin a professional relationship with them too.

Andy:  00:19:24  For sure.  And this research phase, I know we could spend an entire multi-hour series talking about, and while you did say it is a continuous process, at the point where you have at least some clear hypotheses about where to go and what some of those core problems are that you want to test, what happens?  How do you start translating those ideas into an actual test that you run?

Nick:  00:19:52  So I run a gigantic Trello [Resource Mentioned] board with my clients and I have a whole template that I just stamp out and put in there.  Ideas come in at the very beginning of the Trello board and they slowly move across to the right to eventually become built-out tests.  And when people suggest ideas, and anybody can join this and suggest ideas, the whole point is that ideas end up in the trash very quickly when you have suggested something and it is not backed up with some form of research.

For me, research is any information gathering process that involves the customer.  It’s not, “Jane over in marketing thought this” it’s “Jane over in marketing talked to John over in support and heard that a lot of people were clamouring for this.”

And now that’s interesting, because you’ve actually talked to the customer.  So, you put in ideas at the very beginning, and we rank them basically by 00:20:53 how feasible it is to build out and what potential impact it might have on the funnel.

Like, is it happening to 5% of customers, is it happening to 80% of customers?  Is that 5% of customers 90% of our revenue?  Maybe we should be coddling our nicer customers.  So, yeah there’s a fine art in prioritizing and determining feasibility.  A lot of the time, with AB testing, you get high-risk, high-reward.  And I’ll tell you, as you have successfully picked your low-hanging fruit, it only gets higher-risk, higher-reward, because you’re getting to the point where it’s like…
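
That feasibility-versus-impact ranking does not need anything fancier than a spreadsheet, but here is a minimal sketch of one way to encode it. The scoring formula and the numbers are illustrative rather than Nick’s; the point is that reach, revenue exposure, and build effort come from research, not gut feel.

```python
# Each idea carries rough, research-backed estimates:
#   reach         - share of visitors the change touches
#   revenue_share - share of revenue those visitors drive
#   effort_days   - rough build cost
ideas = [
    {"name": "Add comparison table to product pages", "reach": 0.80, "revenue_share": 0.60, "effort_days": 3},
    {"name": "Rewrite hero copy around objection X",  "reach": 0.95, "revenue_share": 0.50, "effort_days": 1},
    {"name": "Rebuild the pricing grid",              "reach": 0.30, "revenue_share": 0.90, "effort_days": 8},
]

def score(idea):
    # Expected impact per unit of build effort; bigger floats to the top.
    return (idea["reach"] * idea["revenue_share"]) / idea["effort_days"]

for idea in sorted(ideas, key=score, reverse=True):
    print(f"{score(idea):.2f}  {idea['name']}")
```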

Andy:  00:21:37  And when you say high-risk, high-reward, in this sense it means redesigning an entirely new landing page?  Or doing a more involved, higher upfront investment type of test?

Nick:  00:21:49  That’s exactly right.  Like you say, “Well, we need to take a wrecking ball to the funnel and get a lot of developer time over here.  And we believe that it has the potential to bump conversions by 12% because we have this ream of research over here that tells us so.”

And I end up running into that point after like six months of working with you.  It’s one of those things where we’ve done all of the easy stuff, and you think I’m a wizard, and then we start to get into the harder, grittier things that require developer time, or we run into a situation where we’re like untangling a lot of technical debt.  00:22:26

And that happens more on e-commerce than on SaaS businesses, to be honest.  Yeah, because their store can be hobbled with technical debt and there’s so much dynamic content.  But usually with a SaaS business, it’s like your marketing funnel is either the public directory on your Rails installation or a WordPress site that clamps onto your Rails installations.

So, it’s like static content.  And we can muck with static content, that’s totally cool.  00:22:57  But to zoom back out to the process, we basically have the Trello board, I act as project manager about it, and we get into a lot of conversations.

And some of them are meant to defang the HiPPO, the highest paid person’s opinion, and they’re meant to inject data and customer insights into the whole process.  And then either things get rolled out as one-off changes, those tend to be less frequent, or they get slotted into the queue for testing.  I usually plan out the next three or four tests, and we’re usually building the N+1th test 00:23:33 as we’re running a test.

Andy:  00:23:38  And for those one-off changes, that’s just if you uncover something that’s objectively a bug, or that doesn’t work the way it’s intended to.

Nick:  00:23:45  Yeah, this was last week — I encountered a store where there were eleven different videos, because they had one video for each product on the whole store.  They were tiny little thumbnail things and they popped up a video, and it was on Wistia and then they had the Wistia JavaScript library called eleven times.  And it was a 1.2 megabyte slug.  And so I was like, “Why is this webpage taking me — I’m on my business-class, fancy fibre internet, why is it taking 5 seconds to download?”  And I looked in Chrome and was like, “That’s horrible!  Can you just find and delete like ten of these please?”  And that got in the Trello board.  So that’s an example of:  You’re messing up, fix it.  But everybody has it!

Andy:  00:24:37  Oh for sure.  And then on the other side, when you’re not in the one-off bucket but on the testing side of things, what does it actually look like to build out a test?  As someone who is not a designer and who’s not super technical, I could play around with VWO and stuff I guess, but what does it actually look like from your perspective to really prototype and build out some of these tests?  Because I know 00:25:00  [Inaudible 00:25:00] that simple.

Nick:  00:25:00  Yeah so there’s one of two ways you can do it.  You can go in VWO and click on individual elements and edit.  There’s a WYSIWYG editor.  On VWO, Optimizely and Google Optimize 00:25:09  [Resource Mentioned], all three of those frameworks have that.  And so that’s great for small stuff like a call-to-action, even copy if you want to rework your whole pitch.  Click the parent div, you’re done.

The other one is where you just have two URLs and you shunt 50% of the traffic that goes to, say, your homepage over to /welcome or /start or something.  And so you build basically a separate funnel, I like to call it the synonym funnel, where you have start instead of home, plans instead of pricing, join instead of signup, and you just edit pages in your GitHub repo 00:25:52 or what have you, and then you say, “Ok this is the variant page.”

And that is terrific for big changes that you’re going to be doing, where you’re knocking down your pricing grid or any dynamic content, so if you actually want to change your pricing and charge $20 instead of $25, do it on that side.  Also, for Shopify stores, there are some plug-ins that allow you to use GET queries 00:26:19 to create variants of product listings so that you can maintain the same stock.

So you’re not spinning up a separate product and you don’t have to deal with inventory reconciliation between the two products.  Because I’ve seen other clients, they’re like, “We’ll just do that.”  And I’m like, “Nooooooo.  No bro.”  Yeah, you want to use GET queries on that front to make substantial changes.
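
For the split-URL approach, the one piece you have to get right is bucketing: a visitor should land in the same variant on every visit. Tools like VWO handle this for you, but here is a minimal sketch of the idea, assuming a visitor ID read from a cookie; it hashes the visitor deterministically into control or variant and picks the matching URL.

```python
import hashlib

def assign_variant(visitor_id: str, test_name: str = "home-vs-start") -> str:
    """Hash the visitor so they get the same bucket on every visit."""
    digest = hashlib.sha256(f"{test_name}:{visitor_id}".encode()).hexdigest()
    return "variant" if int(digest, 16) % 2 == 0 else "control"

# e.g. in a request handler: half the homepage traffic gets shunted to the synonym funnel.
visitor_id = "cookie-abc123"   # read from the visitor's cookie in practice
path = "/start" if assign_variant(visitor_id) == "variant" else "/"
print(path)
```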

00:26:48  So you’re doing that and then you just go into VWO and you send 50% of traffic over to that.  That’s the building-the-test bit, and the other side is creating the goals.  00:26:57  So that’s usually a one-time setup thing, where I’ll do at least, if you’re SaaS, clicks to pricing, clicks on each individual plan, clicks on any plan, visits to the signup page, raw signups, trial conversions if you have them; then I go into Google Analytics and I measure plan length, or I’ll suggest you sign up for Baremetrics 00:27:20  [Resource Mentioned], which is a Stripe analytics database.

And that’s really good at figuring out things like MRR, and churn and LTV and stuff like that.  00:27:29  And then — what’s it — ARPU, hits to the thank-you page, completed on-boarding, that sort of stuff.  So I just track the heck out of them.

00:27:42  With e-commerce, it’s like hits to the cart, hits to checkout, hits to the thank-you page, ARPU and that’s it.  And AOV.  But yes, I’m measuring all of those things, then you start it, and then you don’t look at it for three weeks.

Andy:  00:28:03  When you’re getting into the actual testing phase, when the test is up there and running, how do you know when to stop it?  How do you know the result is significant?  All of that.  What thought process do you follow to evaluate when the testing phase is over?

Nick:  00:28:23  That’s a great question.  So sample size is calculated with good ol’ fashioned statistics.  So I have a sense of at least your past month’s traffic to your website, as well as your conversion rate for that period, and usually your conversion rate isn’t that terribly fluctuant, unless you’re a store and you just went through the holidays, or a sale or something like that.

For most people, that’s not the case.  You know your conversion rate and you can just tell it to me in an elevator or whatever.  So, I take those and basically I run them through a JavaScript utility online that every AB tester uses, from a fellow named Evan Miller 00:29:05, and it’s basically his sample size calculator.

And the reason you do that is because you want the minimum sample size that’s necessary to get to 95% confidence.  Sometimes you can do it with higher confidence, but you’re basically running an experiment and the experiment generates basically a bell curve, and so you can say, Ok, well let’s say it’s a winner and your conversion rate was bonkers.  Like it went up 30%.  Well, ok, during that month it went up 30% and we’re at 95% confidence.  That’s pretty great.  That means you have a pretty narrow bell curve, but in practice, your long-term conversion rate, when you make this change to your funnel, could be anywhere around there.  It could be 26%, it could be 38%, we don’t know.  What we know is that rolling out the change is a good idea.

Andy:  00:29:57  I see.  It’s a directional test.  You’re saying, “I have this confidence that this is an improvement over what we have right now, but not necessarily that the lift seen during the test itself is going to hold true.”

Nick:  00:30:11  Yes, that’s exactly right.  So you get the sample size and then you figure out exactly how many people need to get in per variation and then you look at the monthly traffic, and you say, “Ok, we got this many people, we have to run the test for this long.”  And so it’s one of those situations where you want to run it for at least a week and usually not more than four or five weeks.

The reason you want to do it for at least a week is to incorporate any variance that can occur over a weekend.  Almost every business I encounter runs on a heavily weekly cycle, no matter what it is.  You could have most of your sales on a weekend or a weekday, but that’s how it is.  And what was the other one — oh, not more than a month.  You want to do that because otherwise you’re not going to get a crazy good ROI from it.  You’re going to be running a test every six months and it’s going to be weird.
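
Evan Miller’s calculator is the easy path for the arithmetic, but here is a minimal sketch of the same kind of calculation: the classical two-proportion sample-size formula at 95% confidence and 80% power, followed by Nick’s rule of thumb for how many weeks the test needs given your traffic. The baseline rate, target lift, and traffic figures are placeholders.

```python
from math import ceil, sqrt
from statistics import NormalDist

def visitors_per_variant(baseline_rate, relative_lift, alpha=0.05, power=0.80):
    """Classical sample size for detecting a lift between two conversion rates."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided, 95% confidence
    z_beta = NormalDist().inv_cdf(power)            # 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

n = visitors_per_variant(baseline_rate=0.03, relative_lift=0.20)  # 3% baseline, hoping for a 20% lift
weekly_visitors = 12_000                                          # placeholder traffic
weeks = ceil(2 * n / weekly_visitors)                             # both variants share the traffic
weeks = min(max(weeks, 1), 5)   # at least a week for the weekly cycle, rarely more than ~a month
print(n, weeks)
```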

Andy:  00:31:03  That was something that at MicroConf, Lars Lofgren from I Will Teach 00:31:08 was talking about.  If it takes longer than that to get a significant sample size, then literally, it can’t be that big of a win, so it’s better to just move on to something else.

Nick:  00:31:20  Yeah.  And there’s also the possibility that just weird variations will pollute it at that point.  If you’re doing other things, like content marketing, or guest blogging, or whatever, it’s bound to happen that some weird butterfly flapping its wings thing is going to happen to your business and revise your conversion rate up or down.  And you’re going to have that polluting your AB test data.  So, don’t do that.

Andy:  00:31:43  And so you have this run for at least a week, ideally not more than four, you find a winner, and then you just ship it out to everybody, or what happens then?

Nick:  00:31:56  Yeah, so I go in and I look at what the data looks like.  Sometimes, if we have a loser, I’ll write up that the conversion rate went down and here’s why.  Losers are actually better than inconclusive results, because it means we found something people care about.  And that is interesting.  So now, we’ve hit a rail here, what can we do with this?  Is it something that we should modify?  Is it something that we should stay away from?  Now we have something to talk about.  00:32:26

If it’s a winner, yeah roll it out to everybody.  That’s totally fine.  00:32:28  If it’s inconclusive, we usually don’t learn a whole lot.  You’ll get a 2% conversion rate improvement at 45% confidence and that teaches you literally nothing.  So in that situation, I usually don’t do anything.  I usually favour the control.  Sometimes we determine that at least it won’t harm us to make this change, and we’re already enthusiastic about the change, so let’s make the change.

So, for example, I worked with an everyday carry company, KeySmart, 00:32:57 for a long time, and I have a whole case study about them, and I basically removed all but one of their products, one of their lines of KeySmart, from their website and ran an AB test around it.  Determined that no one cared, they just wanted a KeySmart, they didn’t care about all the other models, and so we just rolled that out to everybody.  It made no difference in their conversion rate or their sales, except everybody started buying this one model of KeySmart and then they could reduce their manufacturing expenses by 40%.

Andy:  00:33:37  So there’s bigger, systemic factors in play rather than just the immediate revenue side of things.

Nick:  00:33:44  I actually never improved their conversion rate, but I saved them 40% of manufacturing expenses.

Andy:  00:33:51  Yeah, you made a lot of money though.

Nick:  00:33:54  I made them a lot of money!

Andy:  00:33:56  So with that, I’m curious to tie things together, how do you work with companies right now, to help them implement this process?

Nick:  00:34:04  So I’ll come through and just implement the method that we’re talking about.  I know that sounds terribly simplistic.  It depends heavily on the client, right?  So I’ll come in, ask a bunch of questions about what objectives they have for their business, get to know them a little better, and if we determine it’s a good fit, I kick off with research, start poking around your Google Analytics, running heat maps, doing all the more quantitative stuff I do while I wait for qualitative results to come in.

And I’m usually just working with your team.  00:34:37  If you have internal developers I’ll use them.  If you want me to bring developers, I’ll hire contractors and we’ll build a little Justice League and get it done.  But yeah, I do this for people, and the best outcome is that I teach them everything they need to know about AB testing, they take it internally, and they fire me.  That is the best possible outcome.

Andy:  00:35:00  It’s funny that you mention that, because as an outsider who follows the industry and understands the value of testing, I mean when you have agile development and you have the Lean Startup movement, when you have all of these iterative processes, I clearly see why this matters.  And it blows my mind how some huge companies aren’t using sophisticated testing methodologies, and it also blows my mind on the other side when you have these startups that are barely off the ground that spend all their time running really basic tests.

But I don’t really have a good quantitative answer for when it should come into play.  When it makes sense for a startup to start doing this.  00:35:38  So I want to ask you at what point does it really make sense for a startup to dedicate internal or external resources to AB testing.

And obviously I’m not asking for a specific dollar amount, but roughly how do you qualify your own clients for when this makes sense for them to invest?

Nick:  00:35:54  Yeah so I — that’s a great question — I usually say that the instant you can get statistically measurable results with AB testing, you should start doing it.  But there’s also nothing stopping you from optimizing your funnel.  All of the one-off tweaks I’m talking about, like the eleven Wistia scripts or improving page speed, or talking to your customers.  You don’t have to not talk to your customers if you get fewer than the number I’m about to provide you, but usually, as a yardstick, at least 1000 transactions a month.

And that is — that puts it firmly in safe territory.  There’s a grey area, and in the grey area you’re running one test a month and you have to get firmer wins.  So one of the things about sample size is basically that you can detect smaller changes if you have a lot of traffic coming in, and you can do it more confidently.

But you can always be testing with a smaller amount of traffic and understanding big home runs.  You’re just only going to be able to call tests in favour of the variant when they’re home runs.  00:37:11  So, it’s definitely — it’s so weird to bring terms of art into what basically is an unscrewwithable mathematical process.  I usually say 1000.

If you’re doing less than that, still work on optimization stuff.  I’ve talked plenty in the past about what you can do if you don’t have enough statistical significance.  00:37:35  My number one recommendation is to put together broader outreach campaigns and get that traffic in in a way that’s more durable.  Don’t just run a gigantic, expensive AdWords campaign.

Andy:  00:37:48  And so when you say 1000 transactions, that doesn’t necessarily have to be, if you’re a SaaS startup, 1000 new customers every month.  It could be 1000 trials, or it could be 1000 email opt-ins, depending on what piece of the funnel you are testing.  Or am I wrong about that?

Nick:  00:38:06  I would say 1000 conversions from trial.  You want the key revenue-generating thing.  It’s a high-volume thing for a lot of SaaS businesses.  But there are many that might be listening to your podcast that fall within that and aren’t doing this.  And they really should be.  By that point, it gets to be head-slapping.

Andy:  00:38:30  Even on the other side of it, I think you hit on some really good points.  If you don’t have those levels of customers coming in, it’s ok, you just need to change the approach and focus either on those bigger wins, those bigger changes with a higher magnitude that don’t need as much of a sample size to prove significance, or just work on those one-off fixes (not even or, but and), those optimizations that clearly are problems that need to be fixed.

Again, it is never bad to be talking to your customers, so don’t say, “Just because I don’t have that many coming in, I’m not going to talk to them, I’m going to wait until then,” because there’s a good chance that if you wait until then you probably won’t actually get there.

Nick:  00:39:14  Yeah, if you take all of the things that I teach you about talking to your customers in all of my books and never hire me because you don’t have enough traffic, but you still find a way to create something that is more delightful for people and something that they actually want to purchase from you, we all won.

It’s not a zero-sum game where I hope that I didn’t lead you along for 45 minutes of this podcast and say, “Oh well now this isn’t for me” and paused it and went and did something else.  There’s so much else to optimization that is not just AB testing.

Andy:  00:39:52  I fully agree with that, I’m glad you phrased it that way.  Before we do wrap up, I do like to ask my guests just a few quick rapid-fire questions.  I go through them quickly, but your answers don’t need to be short.  00:40:01  And so the first one is just what do you currently spend too much time doing?

Nick:  00:40:07  God.  Too much?  I think I spend most of my time writing at this point, and I feel like writing is an inefficient process.  I wish I could get on more calls and fewer reports.  But I love writing, and I’m pretty decent at it, so I don’t know.

Andy:  00:40:28  Is there something you don’t think you’re spending enough time doing?

Nick:  00:40:31  Yeah, I’m probably not spending enough time — actually, let me find something else — I’m not spending enough time speaking at conferences to be entirely honest.  I did that for a really long period of time and it always got me a lot of clients, and a lot of really great professional relationships came out of it.  I’m going to be doing one, knock on wood, in August.  That’s kind of an effort for me.

Andy:  00:40:59  Over the next, say, three months, do you have any goals that you’re working towards within your consultancy?

Nick:  00:41:06  Yes, I do.  [Laughter]  I’m going to be launching a couple new products.  One of them I’m actually guinea pig testing with one of my oldest clients, where I rip apart your analytics every month and provide design recommendations without having to AB test anything.

And there might be AB testing suggestions out of that if you are doing that as part of your strategy, but I’ve been — this one guy, he basically runs a 20-person company, and there’s no ownership over either Google Analytics or Mixpanel, and they’re hoovering up so much about their customers, and I was like, I’ll own this and help you define product direction out of it.  00:41:49

He was like, “Great.  Go.”  And that was six weeks ago, and it’s been going really well so far.  So I want to launch that.  That’s the big thing.

Andy:  00:41:57  Nick, it has been great chatting with you today.  If listeners want to hear more from you, just to learn more about how Draft can help them increase their revenue, where are the best places for them to go?

Nick:  00:42:05  So my website is Draft.nu and you can go there, sign up for one of my courses for design or AB testing.  If you’re curious about putting this into practice in your business, you can go to ABTestingManual.com, that is my most recent course where I basically teach you everything I said on this call.

Andy:  00:42:27  Perfect.  It was a lot of fun chatting and I really appreciate the time.

Nick:  Thank you so much, appreciate it.