November 15, 2022
5 minute read
Information Insight

5 Common Roadblocks Caused by Bad Data (And How to Fix Them)

When data at your organization is bad, you know it. You can sense employee frustration, feel the heat of scrambling to find answers, and realize that your company doesn’t know what it doesn’t know: data quality issues are leaving huge gaps in the full picture. Bad data impacts every aspect of an organization, but in simple terms, it refers to data that is inaccurate, inaccessible, incomplete, inconsistent, or duplicated. Sound familiar?

Every organization possesses bad data. Gartner estimates that each year, poor data quality costs organizations an average of $12.9 million. But the levels of that bad data and how much it impacts the business vary from company to company. 


At Kenway, we often see a snowball effect come into play when identifying bad data. Typically, organizations are aware of the steps that led them to accumulate all of this bad data, but along the way, they have lost control of how to stop creating more. Companies shouldn’t layer on band-aids to temporarily patch issues that arise from corrupted data; instead, they should find and fix the problem at its source. That is the only way to achieve long-term, data-driven results your company can have confidence in.

In this blog, we look at the common roadblocks caused by bad data and provide solutions for how to resolve them.

1. Data Interoperability Issues

Data interoperability is the exchange of data between different systems without losing the meaning of that data. When done correctly, data interoperability should allow applications, databases, or other systems to connect seamlessly and communicate with each other. 

Digital transformation depends heavily on data interoperability, but organizations often struggle to address their interoperability issues. Like Legos, they should be able to swap out their applications and capabilities easily, but if the pieces don’t match, the effort to integrate new and updated technology becomes complex and expensive.

We often see data interoperability issues arise when a company is going through a merger or acquisition. M&A has all the hallmarks of bad data, with issues ranging from duplicate records to data that is altogether inaccessible: the two companies each have their own data definitions and rules, and getting those to mesh with each other is a challenge.
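One common way to reconcile two companies' conflicting data definitions is a translation layer that maps each system's records into one shared schema. Below is a minimal sketch of that idea; all system names, field names, and values are illustrative assumptions, not a prescription for any specific platform.

```python
# Hypothetical example: two merging companies store customer records
# differently. Each adapter maps its source format into one shared schema,
# so downstream systems only ever see a single set of data definitions.

def from_company_a(record: dict) -> dict:
    """Map Company A's customer record to the shared schema."""
    return {
        "customer_id": record["cust_no"],
        "full_name": record["name"],
        "risk_tier": record["risk_code"].upper(),  # A uses lowercase codes
    }

def from_company_b(record: dict) -> dict:
    """Map Company B's customer record to the same shared schema."""
    return {
        "customer_id": record["client_id"],
        "full_name": f'{record["first"]} {record["last"]}',
        "risk_tier": record["tier"],  # B already uses uppercase tiers
    }

a = from_company_a({"cust_no": "A-101", "name": "Jane Doe", "risk_code": "low"})
b = from_company_b({"client_id": "B-202", "first": "John", "last": "Smith", "tier": "HIGH"})
# Both records now share one schema, so either can feed the same reports.
```

The design choice here is that the mess stays at the edges: each adapter absorbs one system's quirks, and everything past the adapters works against a single definition of a customer.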

2. Unreliable Reporting and Analytics

There’s a popular acronym used by data scientists, GIGO, which stands for “garbage in, garbage out.” It means the quality of the output is determined by the quality of the input. So, if your data is inaccurate from the start, you will likely end up with results that are just as bad.

Companies need to be able to rely on non-garbage data for their reporting and analytics. A survey of IT professionals by Ocient found that 78% believe that their organization’s ability to analyze data is closely linked to its bottom line.

In a perfect world, companies should be able to look at information and then take action. But if those actions are based on bad data, a company is flying blind in its ability to drive trustworthy intelligence and decision-making. This obstacle can lead to frustrations in surfacing current-state information, such as financial results or projections, to a company’s decision-makers.
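The most direct defense against GIGO is validating records at the point of entry, so garbage is quarantined before it reaches a report. The sketch below shows the idea with made-up sales records; the field names, value ranges, and region codes are illustrative assumptions.

```python
# Hypothetical example: reject garbage input before it contaminates reporting.

def validate_sale(row: dict) -> list[str]:
    """Return a list of problems with one sales record; empty means clean."""
    problems = []
    if not row.get("order_id"):
        problems.append("missing order_id")
    amount = row.get("amount")
    if not isinstance(amount, (int, float)) or amount < 0:
        problems.append("amount must be a non-negative number")
    if row.get("region") not in {"NA", "EMEA", "APAC"}:
        problems.append("unknown region")
    return problems

rows = [
    {"order_id": "1001", "amount": 250.0, "region": "NA"},  # clean
    {"order_id": "", "amount": -5, "region": "MARS"},       # garbage in
]
clean = [r for r in rows if not validate_sale(r)]
# Only the clean row reaches reporting; the garbage row is set aside
# with a list of reasons, instead of silently skewing the numbers.
```

The point is that "garbage out" is preventable: a few explicit rules at ingestion are far cheaper than untangling a bad quarterly report downstream.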


3. Flawed AI and ML Predictions

Deloitte’s State of AI 5th Edition research report found that 94% of business leaders agree that AI is critical to success over the next five years. But when low-quality data is used to train and develop AI and ML for predictive analysis, it can lead to unfortunate outcomes.

Let’s take the example of when Watson Health was used for cancer diagnostics. The AI tool was trained with hypothetical cases provided by a small group of doctors in a single hospital instead of real patient data. The data reflected the doctors’ own biases and blind spots and wasn’t necessarily generalizable to all patient cases. 

Watson Health was accused of making inaccurate and unsafe recommendations, which led high-profile hospital partners to cancel their collaborations with Watson. This example isn’t to say that AI and ML damage businesses; your organization just needs to be cognizant that dependable data is essential when leveraging these tools to forecast the future.

4. Reduced Operational Efficiency

All companies want to move from good to great. But when operational inefficiencies are created by bad data, it becomes challenging for an organization to move fast and take action.

A survey of data engineers found that data professionals spend 40% of their time evaluating or checking data quality. Three-quarters of those surveyed take four or more hours to detect a data quality incident, and about half said it takes an average of nine hours to resolve the issue once identified. 

If you constantly rely on your data engineers to evaluate, interpret, and fix your bad data, you are losing money by the minute. By automating data quality checks and bringing in employees only when necessary, you can start making automated decisions and taking automated actions. With this approach, data becomes the linchpin you need to achieve economies of scale quickly.
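An automated quality check like the one described above can be as simple as counting duplicate keys and missing required fields, and only paging a human when a threshold is breached. Here is a minimal sketch; the column names, sample data, and zero-tolerance threshold are illustrative assumptions.

```python
# Hypothetical example: an automated data-quality report that flags
# duplicates and missing values, so engineers review data only when needed.

def quality_report(rows: list[dict], key: str, required: list[str]) -> dict:
    """Summarize duplicate keys and missing required fields in a dataset."""
    seen, duplicates, missing = set(), 0, 0
    for row in rows:
        if row[key] in seen:
            duplicates += 1
        seen.add(row[key])
        if any(row.get(col) in (None, "") for col in required):
            missing += 1
    return {
        "rows": len(rows),
        "duplicate_keys": duplicates,
        "rows_missing_required": missing,
    }

rows = [
    {"id": 1, "email": "a@example.com"},
    {"id": 1, "email": "b@example.com"},  # duplicate key
    {"id": 2, "email": ""},               # missing required field
]
report = quality_report(rows, key="id", required=["email"])
# Escalate to an engineer only when the automated check finds a problem.
needs_human = report["duplicate_keys"] > 0 or report["rows_missing_required"] > 0
```

Running a check like this on every load turns the "four hours to detect an incident" cited above into seconds, and reserves engineer time for resolution rather than detection.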

5. Regulatory and Reputational Risk

This one can be summed up as getting into trouble. When organizations leverage inaccurate data, it can lead to costly issues. There is the financial impact of regulatory risks, such as when a company must pay a fine for breaking a data privacy law. 

One of the most well-known examples is the EU’s General Data Protection Regulation (GDPR) law. There have been over 900 fines since the GDPR took effect in May 2018, with Amazon, Google, Facebook, and WhatsApp all enduring hits.

These data mistakes can also come at a cost to a company’s reputation. This scenario is often seen in marketing, such as when duplicate CRM data causes an existing customer to receive a sales email multiple times. It causes confusion for the customer and hurts brand perception.
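The duplicate-email scenario above often traces back to the same customer appearing in the CRM under slightly different email spellings. A minimal fix is to normalize and de-duplicate the send list before a campaign goes out; the sketch below uses made-up contact data, and the email-only matching rule is an illustrative simplification (real CRM dedup usually matches on more fields).

```python
# Hypothetical example: de-duplicate CRM contacts by normalized email
# so one customer receives one message, not several.

def dedupe_contacts(contacts: list[dict]) -> list[dict]:
    """Keep the first contact seen for each normalized email address."""
    seen, unique = set(), []
    for contact in contacts:
        key = contact["email"].strip().lower()  # normalize before comparing
        if key not in seen:
            seen.add(key)
            unique.append(contact)
    return unique

crm = [
    {"name": "Pat Lee", "email": "pat.lee@example.com"},
    {"name": "Pat Lee", "email": "Pat.Lee@example.com "},  # same person, different casing
    {"name": "Sam Roe", "email": "sam.roe@example.com"},
]
send_list = dedupe_contacts(crm)
# Pat Lee now appears once in the send list instead of twice.
```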

How to Fix Bad Data Roadblocks

Companies that are struggling to conquer these bad data roadblocks should consider looking at their people, processes, and technology around these three areas: 

  • Data governance is the collection of clearly defined policies, procedures, standards, and/or roles that ensure the effective and efficient use of data in enabling an organization to achieve its goals. 
  • Data literacy is the ability to read, understand, create, and communicate data as information.
  • Project oversight is the process of assuring the quality of project management and delivery to reduce risk and improve outcomes.

Kenway used its capabilities around data governance, data literacy, and project oversight to help a global financial services provider improve its risk rating and regulatory compliance processes through a data governance operating structure. The financial services provider lacked visibility into its client risk ratings due to inconsistent client records. The organization had incomplete and disjointed client information across its customer relationship management (CRM) system and its separate banking service platforms. As a result, it could not accurately aggregate client holdings, understand risk exposure at a client or portfolio level, or identify potential compliance risks.

Compounding the operational challenges was impending regulation. The financial services provider was subject to federal Anti-Money Laundering (AML) and Know-Your-Client (KYC) regulations that would require standardized and consistent client information. To address these issues, Kenway collaborated with the client on an enterprise-wide data governance framework to properly oversee the creation, ingestion, maintenance, and consumption of its client information.

Read more about this case study here.

Having bad data affects almost every area of an organization. It touches on revenue, performance, and outcomes because companies cannot make business decisions based on data they cannot rely on or trust. If you have run into roadblocks with the quality of your data, connect with us to learn how to turn your data from bad to good.