image credit: bionicteaching
A lot of companies talk about how to build a data-driven company culture. And while this is important (and something we explicitly wrote into our company values at LaunchBit), we found that we needed to get more specific and tactical for our marketing team.
My co-founder Jennifer and I are very data-oriented people. In day-to-day conversations, we’ll build arguments with facts and experiments. We both are engineers by training, and I even TA-ed and tutored basic statistics at MIT. Data is really important to us and something we naturally wanted to explicitly write into our company values. And we did.
You must turn your philosophy on data into action
But, even if you’re data-driven by personality, it can be very easy not to carry through in your team dynamics.
In 2011, when we first started LaunchBit, we would look at our numbers religiously. We would carefully keep track of our conversions for everything in spreadsheets. From conversion rates of investors (even down to the category of investor: gender, race!) to conversion rates of our customers, we would track it. But slowly, by 2012 and early 2013, we were getting sloppy, especially on the customer acquisition front, which is what I own here at LaunchBit. We had so many experiments going on that we got to the point where we had too many spreadsheets floating around, and it was tough to keep track of everything. If you have too many dashboards and accounts to check, at some point you stop checking them and forget all about them. After a year or so, it basically became one big bear.
image credit: tambako
So I stopped looking at the data altogether. I feel sheepish saying that, because I used to chastise people for not looking at their Google Analytics accounts or not knowing their numbers. But, guess what: it’s very easy to stop checking when you have so many experiments going on and too much data to track.
So, we regrouped and realized we had a mess on our hands. Too many experiments were running, forgotten about, and we needed to end the ones that were not working. We needed a better way to structure experiments.
How we structure experiments today
image credit: mobilestreetlife
We are now much more methodical about marketing experiments at LaunchBit. Here’s how we structure them.
1) We hold a standing product meeting each week to talk about our experiments.
Everyone involved in customer acquisition is in this meeting, plus our head of product (Jennifer). We spend half the time discussing the status of existing experiments and the other half developing new ones. These experiments are generally small, run in 1-2 week sprints.
2) We use Google Docs to keep a running list of experiments
When we create a new experiment, we’ll come up with a structure and rough timeframe. For example, when we tested Twitter headlines, we determined:
- What our hypothesis was, e.g. that one headline would outperform the rest by a statistically significant margin
- How many headlines we were going to test
- On what schedule to tweet our headlines
- What data we needed to see if our hypothesis was correct
- How we would track and document data
- What timeframe we would run the experiment for
We would then set action items and respective owners for the week to make progress on this experiment.
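As a concrete sketch of the "statistically significant headline" check above, here is a two-proportion z-test in Python. This is an illustration, not LaunchBit's actual tooling, and the click and impression numbers are made up:

```python
import math

def two_proportion_z_test(clicks_a, views_a, clicks_b, views_b):
    """Two-sided z-test: do two headlines have different click rates?"""
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    # Pooled click rate under the null hypothesis of no difference
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical numbers: headline A got 48 clicks on 1,000 impressions,
# headline B got 80 clicks on 1,000 impressions.
z, p = two_proportion_z_test(48, 1000, 80, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

If p comes back above your significance threshold (commonly 0.05), the honest answer is "inconclusive," which is exactly the case where the weekly review below has to force a kill/continue decision.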
3) We discuss experiments still in play on a weekly basis
Each week, for experiments that are near or past their expected timeframe, we determine whether to kill, continue, or expand the experiment — this is probably the most important part. All too often experiments are not very conclusive, so you need them to run longer than initially expected. This is where experiments often go to the land of the dead, because people lose track of them. We decided that we needed to take an active stance in monitoring experiments and actively decide an experiment’s status. If we decide to continue an experiment, it gets another timeframe added to it as if it’s a new experiment.
One of the reasons why experiments often end up in a limbo zone is that you rarely have breakout results either way. Much like how you don’t want your company to be in the land of the walking dead, you don’t want your marketing experiments to be here either. Having too many experiments build up in the zombie zone makes for unmanageable experimentation. So, eventually, all experiments must have a definitive end, and the only way to ensure this is to put strict timelines on experiments so that you can actively review them.
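The review loop above can be sketched as a tiny data structure. This is purely illustrative (LaunchBit tracked experiments in a Google Doc, not code), and all the names and dates here are hypothetical:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Experiment:
    name: str
    hypothesis: str
    started: date
    weeks: int                 # expected timeframe
    status: str = "running"    # running / kill / continue / expand

    def is_due_for_review(self, today: date) -> bool:
        """True once the experiment reaches or passes its timeframe."""
        return today >= self.started + timedelta(weeks=self.weeks)

    def decide(self, decision: str, extra_weeks: int = 0):
        """Kill, continue, or expand; continuing resets the clock."""
        if decision == "continue":
            self.weeks += extra_weeks
        self.status = decision

# Hypothetical experiment reviewed at its two-week mark and continued.
exp = Experiment("twitter-headlines", "one headline beats the rest",
                 started=date(2013, 6, 3), weeks=2)
if exp.is_due_for_review(date(2013, 6, 17)):
    exp.decide("continue", extra_weeks=2)
```

The key design point is that "continue" is not passive: it consumes another explicit timeframe, so a continued experiment comes up for review again instead of drifting into the zombie zone.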
There is an opportunity cost to having too many experiments
What is often unspoken is that there is an opportunity cost to having too many experiments. You have limited time to manage and actively tweak a certain number of experiments. By putting time constraints on experiments, you are forcing your existing experiments to compete against new experiment ideas you may have. Each time you want to extend the lifetime of an experiment, you are actively deciding against doing a new potential experiment, which means you believe that the current experiment has a better shot at “winning” than something else you could be trying.
It is difficult to build a data-driven marketing team
Even if you’re a data-driven person, it can be difficult to build a data-driven team as you scale. One of the reasons our data collection fell apart in the first place is that chaos set in as we expanded our team and our activities. Chaos is natural in startups and fast-growing teams, and it’s the antithesis of structure, which is exactly what you need to be data-driven. You can still be speedy and data-driven with structure. But putting that structure in place initially requires taking a step back and spending the time on process. That means a momentary pause, which makes a lot of people cringe, but it’s worthwhile in the long run.
How do you create a data-driven marketing team?