Published: Monday December 18, 2017
A blog by Rob Frost, Propositions and Solutions Manager, GBG Datacare
Every year, customer data degrades by 10-20%. Why? Well, around 1% of the population dies, 10% move house, postcodes change, and people change their email addresses and phone numbers.
So, it’s fair to say data needs to be maintained to stay in good health. But it’s also important to recognise that not all data degrades at the same rate: the time it takes to deteriorate will vary by customer segment, product and customer type. Nor is all data of equal value. Data provided by existing customers may be more of a priority to a business than that held against prospective customers, for example.
Frequently auditing your data is a crucial way to monitor degradation, enabling you to fix any errors it reveals. In my opinion, many businesses perform data audits too infrequently and across too broad a base, and this comes with consequences.
The cost of a delayed data cleanse can be higher than expected: the longer the time between audits, the more data will degrade and need to be fixed. If you’re not staying on top of it, you could face a large bill to bring everything up to date. It’s better to be proactive than reactive.
With GDPR coming into full effect in 2018, it’s never been more important to keep data clean. Article 5 of the regulation states “every reasonable step…” must be taken to fix data that is inaccurate. But how do you know what to fix if you haven’t properly audited?
Here are my top tips on how to be smart with your customer data audits:
- Focus your audits
Running audits across different products or different customer types (active, lapsed, prospects etc.) allows you to compare the results, and gives you the specificity to justify investment from the individual teams responsible.
If you claim to your product manager that the address quality of the customer base is 95%, this is somewhat inconsequential. However, if you are armed with evidence that says overall address cleanliness is at 95% but the product’s customer address score is 80%, this puts you in a position to investigate further.
Regular auditing gives businesses a way of benchmarking internally across different business areas and products. If all data across the company is checked and you begin to see big differences in the data quality results, you can dig a little deeper. For example, if you sell two products and one set of results shows that telephone number data is 10% wrong, there may be other issues at play.
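To make that comparison concrete, here is a minimal sketch of flagging products whose audited quality diverges from the company-wide benchmark. The product names, scores and the 5-point tolerance are entirely illustrative, not GBG figures:

```python
# Hypothetical audit results: per-product address quality scores
# (illustrative figures only, echoing the 95% vs 80% example above)
overall_address_quality = 0.95

product_scores = {
    "Product A": 0.95,
    "Product B": 0.80,  # noticeably below the overall benchmark
}

# Flag any product more than 5 points below the overall score
TOLERANCE = 0.05

flagged = [
    name for name, score in product_scores.items()
    if overall_address_quality - score > TOLERANCE
]

print(flagged)  # ['Product B'] - worth digging a little deeper
```

The tolerance is a judgment call: set it too tight and every product triggers an investigation, too loose and genuine capture problems slip through.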
Examining other variables, such as opt-ins, contact data validation and field populations and distributions will provide a holistic view of this subset of data. For instance, the product manager who has emails against 50% of their product customers may not realise that only 60% are verified and valid. (Therefore, only 30 in every 100 customers are ‘emailable’ and not the 50 in 100 as initially thought!).
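The arithmetic behind the ‘emailable’ figure is simply coverage multiplied by validity. A tiny sketch, using the illustrative figures from the example above:

```python
# The 'emailable' share is coverage multiplied by validity,
# not coverage alone (illustrative figures from the example above)
customers = 100
email_coverage = 0.50  # 50% of product customers have an email on file
valid_rate = 0.60      # only 60% of those emails are verified and valid

emailable = customers * email_coverage * valid_rate
print(int(round(emailable)))  # 30 customers, not the 50 coverage alone suggests
```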
If other products have higher opt-in rates and a higher percentage of valid email addresses, it presents an opportunity to question whether something is amiss. Maybe there is an issue with data capture, the on-boarding process, or the systems passing the right data and datasets across to the customer database. It becomes something that can be investigated and, if required, corrected.
- Regularity is key
Regular audits can be used similarly to exception reports - fixing and improving data quality when it drops below a certain level, such as when the number of invalid telephone numbers reaches 10% or addresses drop below a 90% PAF match. This approach was particularly successful when I previously worked with a leading UK charity. The investment was frequent and regulated, and a high level of data quality was maintained across all the differing supporter types. This allowed me to focus investment on the areas that needed it most, or on pending marketing campaigns.
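That exception-report approach can be sketched as a simple threshold check. This is a minimal illustration, not a description of any particular tool: the metric values are hypothetical, and the thresholds are the ones quoted above:

```python
# Hypothetical output from a data audit (illustrative figures)
audit_results = {
    "invalid_phone_rate": 0.12,  # share of phone numbers failing validation
    "paf_match_rate": 0.93,      # share of addresses matching the PAF
}

# Trigger a cleanse only when quality crosses an agreed threshold
alerts = []
if audit_results["invalid_phone_rate"] >= 0.10:
    alerts.append("phone data needs a cleanse")
if audit_results["paf_match_rate"] < 0.90:
    alerts.append("address data needs a cleanse")

print(alerts)  # ['phone data needs a cleanse']
```

The point is that spend only flows to the dataset that has actually slipped, rather than to a blanket cleanse of everything on a fixed schedule.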
Being smarter in the way you run data audits puts you in control of your data and your spending, allows you to focus improvements on the data that is most important (or most likely to be used), and may identify issues that were previously hidden. It will also provide control and measurement, and make it easier to justify ongoing investment in data quality.