What I’ve Learned After More Than a Year of GDD Web Experiments

June 19, 2018


By Juli Durante

When HubSpot first rolled out its Growth-Driven Design (GDD) process, I think we were all a little intrigued by an incremental website rollout process that combined what we know about buyers and their needs with data collection.

For some, the GDD process focuses primarily on building launchpad websites and then expanding them page by page and template by template.

While there’s nothing wrong with this approach, the team at SmartBug tends to do things a little bit differently. As a marketer first and foremost, my role is usually to help organizations bring in more traffic to their websites and convert that traffic into leads.

While blogging and premium content are cornerstones of high-performing inbound marketing programs, website pages and performance are an equally important part of building your conversion funnel.

For that reason, I've spent the last couple of years running conversion rate optimization and GDD experiments—and learned quite a few lessons along the way.

There is little more important than solid experiment design.

Think back to middle school, when you may have learned about the scientific method, which forces us to ask a question, collect some information, formulate a hypothesis, and draw a conclusion ... then, re-hypothesize and test some more. Essentially, this is what we do with GDD. However, it’s sometimes easier said than done. As curious marketers, we want to create tests, see marginal uplift, test again, test again, test again. This will work sometimes, but not always. Experiment design is just as important as the experiments we choose to run. As an example, let’s think about running an experiment on the footer of a website.

  • What’s the question? What are you looking for when you consider the footer? Perhaps you’re asking if footer navigation plays a role in driving traffic deeper into your website.
  • What data can you collect today? Where does the footer appear? Do you have any information about how those links are clicked on or used today? Do you know how far average users are scrolling down on your site pages? Some qualitative and quantitative data from heat-mapping and visitor-tracking software can help.
  • What’s your hypothesis? For example: “Adding additional links to the footer for popular site pages will result in more visitors reaching those pages more easily.”
  • How will you measure your hypothesis? More on measurement later, but give this a good, hard think-through before you proceed.
  • How long do you need to run your test for? How much data do you think you’ll need to reach a confident result? Confession: A colleague sent me this calculator from Unbounce, and I’ve never looked back. (For a rough sketch of the math behind calculators like that one, see the code after this list.)
  • Document the hypothesis and testing metrics.
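
Under the hood, a duration calculator like the one mentioned above is a sample-size formula for a two-proportion test. Here is a minimal sketch of that math in Python; the baseline conversion rate, expected lift, and traffic numbers are illustrative assumptions, not figures from this article, and the real Unbounce tool may compute things differently.

```python
# Sketch: visitors needed per variation to detect a given lift with a
# standard two-proportion test. Assumes Python 3.8+ (statistics.NormalDist).
from statistics import NormalDist

def visitors_per_variation(baseline_rate, relative_lift,
                           alpha=0.05, power=0.80):
    """Visitors needed in EACH variation to detect the expected lift."""
    p1 = baseline_rate                        # control conversion rate
    p2 = baseline_rate * (1 + relative_lift)  # hoped-for variant rate
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return int(((z_alpha + z_beta) ** 2 * variance) / (p2 - p1) ** 2) + 1

# Hypothetical footer test: 2% baseline, hoping for a 20% relative lift,
# with 600 visits per day split evenly across two variations.
n = visitors_per_variation(0.02, 0.20)
days = n * 2 / 600
print(f"{n} visitors per variation, roughly {days:.0f} days at 600 visits/day")
```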

As you can see, there’s a lot to designing an experiment beyond saying, “Is a purple or blue CTA better?” While it may be better to test something than nothing, if you’re spending your time creating inconsequential tests, you’re spinning your wheels and not driving results.

Here are the materials you need to support shifting from a traditional website to a growth-driven design website.


Measurement isn’t easy. Especially if you don’t think about it in advance.

Some tests are easy to measure, like CTA performance; HubSpot will show you right in the tool. Others are more complicated, like choosing metrics for a test with a more open-ended goal, such as influencing contacts to do something. Event tracking in HubSpot and Google may be a helpful starting point, but often, you will need to pull data from a few different sources to truly measure these tests. Starting with your ultimate goal (maybe pages per session or SQL generation), work backwards and ask how, how, how, how: How will we know? How will we see the change? What data will we omit? What outside factors will we account for? The more times you ask, the more solid a plan you’ll have for measurement.
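
To make “working backwards” concrete: if the ultimate goal is pages per session, the number you actually need is a pageview count per session for each variation, which often means stitching together exported rows from your analytics tools. A hypothetical sketch, where the file name and the session_id and variation columns are invented for the example:

```python
# Sketch: average pages per session by test variation, from a hypothetical
# CSV export with one row per pageview. Column and file names are invented.
import csv
from collections import defaultdict

def pages_per_session(path, variation):
    """Average pageviews per session for one variation of the test."""
    pageviews_by_session = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["variation"] == variation:
                pageviews_by_session[row["session_id"]] += 1
    sessions = len(pageviews_by_session)
    return sum(pageviews_by_session.values()) / sessions if sessions else 0.0

print("control:   ", pages_per_session("pageviews.csv", "control"))
print("new footer:", pages_per_session("pageviews.csv", "new-footer"))
```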

It’s important to ask yourself these questions up front, not once an experiment is rolling or concluded, because you may need to build something new in order to measure accurately. If you skip that step, you’ll end up running your experiment, realizing you can’t actually prove your hypothesis, and then having to start over, which is never a good use of anyone’s time.

On the other side of measurement is statistical significance, which tells us whether a test result is meaningful or just noise. Kissmetrics makes a handy calculator that I use for every test I run, sometimes with surprising results. Keep in mind that you want to be as confident as possible that ending your test in favor of making a change (versus ending your test and deciding not to make the variation permanent) will result in a significant, or important, improvement. Sometimes, that means a 0.3 percent lift in conversion rate is a win. At other times, you may need to shoot for a 5 or even 10 percent lift. You can even use this calculator in advance of testing to help model your hypothesis; e.g., “If we can see an X percent lift after XXXX clicks, we’ll know with 98 percent confidence that the change will improve overall performance.”
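
For the curious, a significance calculator like that one is typically running a two-proportion z-test. A minimal sketch follows, with made-up visit and conversion counts; the actual Kissmetrics implementation may differ in its details.

```python
# Sketch: two-proportion z-test, the usual math behind A/B significance
# calculators. The visit and conversion numbers are invented for illustration.
from statistics import NormalDist

def confidence_level(visitors_a, conv_a, visitors_b, conv_b):
    """Two-sided confidence that variations A and B truly differ."""
    p_a, p_b = conv_a / visitors_a, conv_b / visitors_b
    p_pool = (conv_a + conv_b) / (visitors_a + visitors_b)
    se = (p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b)) ** 0.5
    z = abs(p_a - p_b) / se
    return 2 * NormalDist().cdf(z) - 1  # e.g., 0.98 means "98% confident"

# Original footer (2.0% conversion) vs. expanded footer (2.62% conversion)
print(f"{confidence_level(5000, 100, 5000, 131):.1%} confident the lift is real")
```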


You’re going to have to figure out some new stuff.

Using HubSpot for inbound marketing is fairly straightforward. When you start thinking about HubSpot as part of your GDD or CRO tech stack, it might get a little more complicated. There is tremendous opportunity to learn how to do new things within HubSpot—new reports, list-building criteria, deeper aspects of tools—but there is also opportunity to add data to what HubSpot can provide and create a truly well-rounded picture of your website’s performance.


Most tests are flat.

We all want our time to be rewarded, and with CRO, the reward is usually a successful test. Except when your test comes out flat. And so does the next. And so does the next. This doesn’t mean you’re failing; it just means your hypothesis, while interesting, isn’t quite proven. Sometimes, we don’t collect enough data (even when we think we do), and we just need to keep a test running to get some concrete evidence. Sometimes, we start small with testing (like with CTA color) and realize that our buyers just don’t care about shades of red as much as we do. That’s OK, because we still learned something. And as good testers, we’ll keep testing. And keep testing. And keep testing. And keep testing.


Become an expert on the benefits of the growth-driven design process with:

How to Sell GDD to Your Boss

Check It Out
Topics: Web Development, Website Design, Growth-Driven Design