Data in context: The best practices that may not be best practice.

Being digital marketers, we love to share the practices that help us supercharge campaign performance and make the most out of our data. However, there are cases in which ‘best practices’ are not best for your brand / client and may actually harm performance. 

It’s important to remember that paid media is never one-size-fits-all.

It sounds simple, but it’s so important to consider the wider picture when thinking about your approach to paid media. To give some more colour to this, I thought I’d share some of the pitfalls we commonly see and how to avoid them.

  1. Doing too much with little data


    You’ll commonly see performance marketers bang on about segmenting campaigns, audience groups, or asset groups to give you more visibility & control. While this can provide insight into the best-performing areas, if you’re working with a small volume of data, chances are all this work will do more harm than good.

    Your account / campaigns / ad groups need a sufficient amount of data to learn. By segmenting when data volume is low, you’re not only harming the algorithm’s learning but also your own. Making performance decisions based on a small data set is never a good idea!

    What should you do if you have a small amount of data?

    Look to collate – your structure doesn’t have to be sophisticated. If you don’t have the budget to spend, you should look to build up your dataset using time & a broader campaign structure. You can then make incremental changes to your campaigns knowing those changes are strongly backed by your data.

    This might involve having one campaign in Meta with a prospecting ad set and a remarketing ad set. Or it might involve starting out on search with phrase and exact match types within one campaign.

    Consider the volume of data you have, and ‘feed’ the algorithm before segmenting!

  2. Being ‘too into the data’ & unrealistic expectations


    As performance marketers we love being data-driven; we know your data is your most valuable asset and understanding it is a massive strength. However, another pitfall we see is people looking for solutions in their data without acknowledging the wider context.

    For example, brands may see their consideration / remarketing campaigns performing excellently and redistribute budget to weight these campaigns more heavily. However, when performance later drops off, we see people trying to boost performance by looking at their campaign’s metrics in isolation: they may work to improve creative based on lower engagement rates, remove inefficient placements, or alter budgets, when the drop in performance is more likely down to disrupting the funnel and oversaturating a warm audience.

    Another example of this may be brands going after deeper conversions in relatively new / competitive territory. We see marketers following the data & optimising campaigns towards those deeper conversions, but ultimately ending up with high CPAs and low ROI despite making those data-driven decisions.

    In these cases, it’s important to take a step back and look at the ‘fluffy’ side of marketing. Think about your product, the consumer journey, and your funnel. 

    • Is this conversion realistic for the audience we are targeting?
    • Do our ads push the audience to convert in this way?
    • Is the platform / campaign type appropriate for the goal we wish to achieve? 


    Some broader questions you may want to ask are:

    • Is our product/service fully developed and at a stage where marketing efforts will prove fruitful?
    • Is digital marketing right for us?

  3. Being too eager to test 

    We LOVE testing and learning as an agency – so much so, we have a company document dedicated to it (cool, I know).

    Testing is how we learn about best practices and discover new ways to maximise performance. But sometimes we see this go sideways and often it’s because the testing process isn’t given the room it needs and tests aren’t thought out properly.

    When we see tests spend budget but yield no learnings, or even result in performance decisions that adversely affect campaigns, it’s usually because individuals:

    1. Are testing when there is limited room to learn
    2. Are not giving the test enough time to learn
    3. Have not thought out a clear test setup


    What do I mean by this?

      1. When data volume is low or accounts are in periods of instability, there isn’t a consistent stream of data coming through, so it’s hard to monitor the impact of your test – performance will fluctuate regardless of any changes made.
      2. A question commonly asked is ‘how long should we wait before we conclude the test?’ and unfortunately the answer is ‘as long as it takes’.

          Sometimes we give a guideline based on data volumes, but the truth is, we can only conclude a test once we’ve seen a statistically significant amount of data. For some brands this may take 2 weeks; for others it may take a lot longer. It’s important not to conclude tests prematurely and make performance decisions on low data volumes – a quick significance check, like the sketch after this list, can help here.

      3. It’s important you have a clear setup when looking to test something, or you may conclude your test with no learnings. To avoid this, you want to have answers to the following questions:

          • What is my hypothesis?
          • Is there a clear difference between the conditions I’m testing?
          • What KPI am I looking to see a change in?
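
      To make the ‘as long as it takes’ point concrete, one common way to sanity-check whether a test has enough data is a two-proportion z-test on a rate-based KPI such as conversion rate. Below is a minimal Python sketch – the variants and all the conversion / click numbers are purely hypothetical.

```python
import math

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference in conversion rate between
    two variants, using a pooled two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under "no real difference"
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value from the normal CDF

# Hypothetical numbers: variant A drove 120 conversions from 4,000 clicks,
# variant B drove 150 conversions from 4,100 clicks.
p_value = two_proportion_z_test(120, 4000, 150, 4100)
print(f"p-value: {p_value:.3f}")  # ~0.099 here, not yet conclusive at the usual 0.05 level
```

      If the p-value is still above your chosen threshold (0.05 is a common choice), the honest answer to ‘can we conclude the test?’ is ‘not yet’ – keep it running rather than calling a winner on low data volumes.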

  4. Being static with learnings

      When running a campaign for a long time, marketers will gain a sense of what works best for their campaigns and continue with those practices. However, while it’s important to follow the data, it’s more important to ensure you’re testing new features and giving other platforms/ campaigns/ audiences an opportunity to shine.

      With ad platforms consistently making changes, particularly the move towards automation, it’s vital brands move with those changes and learn how their campaigns can benefit from them.

      It’s also important that brands continually test new platforms, targeting methods, strategies etc. in order to keep growing their accounts.

      Learning is so important in the digital marketing space and you should always look to challenge any assumptions / long-standing conclusions made!

    To conclude, being a successful digital marketer is a balancing act: we should be data-driven yet flexible, and able to think in the grey.

    Thank you for reading! Hopefully these points help you avoid some of the pitfalls mentioned – I’d be keen to hear any further thoughts / insights from you.

    Feel free to reach out if you have any questions / thoughts at rosa@houseofperformance.co.uk
