Navigating the Business Case for an MDM Implementation

One of the most challenging aspects of a Master Data Management (MDM) project is, surprisingly, not part of the implementation itself. Over years of helping clients create insights from their data, one of the most difficult activities I have encountered is developing the business case for the solution and securing buy-in from project stakeholders. This is important not only to secure funding for the project, but also to maintain momentum and support for the program. Part of the complexity of developing a business case for an MDM solution stems from the fact that, unlike more typical technology projects, MDM implementations sit at the intersection of business, operations, compliance, and technology, and therefore require a large number of participating stakeholders and a broad set of sponsors. As usual, we can extract wisdom from this type of project and apply it to others, particularly large cross-organizational initiatives and digital transformation programs.

Key Decisions in MDM Implementation

By their very nature, Master Data Management implementations require key decisions on enterprise data that span multiple regions and departments. These decisions include, but are not limited to, data ownership, business rules, data quality standards, and the appropriate flow of this data across the systems landscape. Many businesses lack the robust Data Governance required to make these decisions well. The data involved includes core enterprise data in one or more of the following domains: product or services data, customer data, employee data, vendor data, and location data. All of these data types are touched by critical business functions, so a large number of stakeholders must be involved.

Further complicating things is that articulating the business benefits of higher data quality, the key outcome of an MDM implementation, is difficult. Data, in and of itself, has no value. However, its value emerges when you evaluate the cost of the status quo: the impact poorly managed data has on an organization.

In order to gain executive support and sponsorship, focus should be placed on the business outcomes rather than the data itself.

Developing a Strategic MDM Value Proposition

It is imperative that a clear value proposition be defined as one embarks on an MDM journey – and, yes, it will be a journey. To best articulate the value, start with why the company should solve the problems that an MDM solution is intended to solve. This “why” should endure beyond the current planning period. Regulatory compliance, long-term strategic success, and addressing a challenging societal issue are all the caliber of candidates we are seeking. It is much easier to rally support and sponsorship when the business case is based on why something should be done rather than on the details of execution.

To this end, success of the MDM journey should be evaluated on more than the initial return on investment directly associated with the data repository, as that return will tend to be low given the high implementation costs. The real value of MDM is increased responsiveness to business priorities. This is achieved by delivering foundational enterprise data to users, customers, partners, and other individuals inside or outside of the organization who can make improved decisions because of it. Doing so requires investment in data interfaces, as well as enhancements to legacy systems so they can consume the new, high-quality data and deliver on the promised value. When building the business case for an MDM solution, take care to clearly differentiate between the benefits that result from the MDM implementation proper and those that are enabled by the enhanced capabilities and improved data in downstream systems and processes.

Evaluating MDM Success

In terms of Return on Investment (ROI), when determining the financial value likely to be delivered by an MDM solution, four broad categories ought to be considered:

In many cases, there are competing projects pursuing the same benefits – ensure the appropriate sponsors are identified and agree to apportion benefits to the MDM implementation project accordingly.

In terms of the costs for the MDM implementation, care should be taken to include the costs of all investments required to generate the defined benefits. The MDM implementation itself could represent half or more of the overall cost of the project, but a significantly lower percentage of the benefits. While phased implementation approaches are the recommended way to proceed, care should be taken to ensure that the costs and benefits of each phase are clearly understood. Early phases will be quite expensive and deliver relatively little value; later phases will be less expensive and deliver much greater value. Rigor in gathering this financial information during planning and delivery will ensure that future projects are evaluated on the basis of a demonstrated track record.
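To make that phased trade-off concrete, here is a minimal sketch in Python using purely hypothetical phase figures; the point is simply to show cumulative costs and benefits tracked side by side as the program progresses:

```python
# Hypothetical phase-by-phase view of an MDM program's costs and benefits.
# All figures are illustrative only; substitute your own estimates.
phases = [
    {"name": "Phase 1 - MDM hub and governance", "cost": 1_200_000, "benefit": 200_000},
    {"name": "Phase 2 - integrate CRM and ERP",  "cost": 600_000,   "benefit": 900_000},
    {"name": "Phase 3 - downstream analytics",   "cost": 400_000,   "benefit": 1_500_000},
]

cumulative_cost = cumulative_benefit = 0
for phase in phases:
    cumulative_cost += phase["cost"]
    cumulative_benefit += phase["benefit"]
    net = cumulative_benefit - cumulative_cost
    print(f'{phase["name"]}: cumulative net value = {net:+,}')
```

Tracked this way, it becomes obvious when cumulative net value turns positive, which is exactly the track record that helps later phases, and future projects, get funded.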

MDM implementation costs to consider include:

Connect with Kenway for Business Case Development and MDM Assessment

In the end, once the MDM solution is implemented, the demonstrated benefits realized by the project rarely exactly tie to those outlined at the outset. It is important that benefits be reassessed and communicated throughout the project as the business will not stand idle while MDM is implemented.

Creating an MDM business case can be challenging, but by articulating a clear and compelling value proposition, focusing on real business value, and weighing those benefits against the full breadth of costs, you can showcase the benefit that an MDM implementation can bring to your organization today and into the future!

If you need help developing a business case for your needs or providing a current state assessment of your MDM environment, connect with one of our consultants to learn more.

 

Cloud Data Warehouses and Data Warehouse Modernization

I can still remember my first time working with a data warehouse. The year was 2013, and I was working on my first post-school data engagement with an IT software organization. We were going to build a sales-centric enterprise data warehouse that would extract, cleanse and integrate a variety of data into a single, large repository, transforming the way the business managed its sales lifecycle. This enabled their team to leverage data in all facets of the sales process and optimize their ability to close deals.

I was fascinated by this concept, and its value proposition was clear and powerful. I was hooked at that point and have subsequently spent the majority of my career working on data engineering, data warehousing, and business intelligence solutions.

Shortly after this initial engagement, I began learning about cloud data warehouses. Today, organizations use tools like Azure Synapse and Snowflake to manage massive volumes of data every day. But it took a while to get to this point. Here’s a look at how data warehousing solutions have evolved, and what to consider as you modernize your approach to data storage.

Why Data Warehouse Modernization Is So Important

For many years, regardless of the industry, company size, or BI platform, data warehouse structure was essentially the same. At the core, there would be a separate relational database to house the data, typically leveraging dimensional design schemas. A nightly data integration process would extract data from line-of-business applications and load it into the warehouse. These two components would make up the backend of the data warehouse and take the most time and effort to implement.
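As a rough, code-level illustration of that backend pattern, here is a minimal sketch of a nightly batch ETL job in Python. The databases, table names, columns, and cleansing rule are hypothetical; a real implementation would use the organization's ETL tooling and include dimension key lookups:

```python
import sqlite3
from datetime import date, timedelta

# Minimal nightly ETL sketch: extract yesterday's records from a line-of-business
# system, apply light cleansing, and load them into a warehouse fact table.
source = sqlite3.connect("line_of_business.db")   # operational orders system (hypothetical)
warehouse = sqlite3.connect("warehouse.db")       # relational data warehouse (hypothetical)

yesterday = (date.today() - timedelta(days=1)).isoformat()

# Extract: pull yesterday's orders from the operational system.
rows = source.execute(
    "SELECT order_id, customer_id, order_date, amount "
    "FROM orders WHERE order_date = ?",
    (yesterday,),
).fetchall()

# Transform: basic cleansing, e.g., drop records with a missing amount.
clean = [r for r in rows if r[3] is not None]

# Load: append to the warehouse fact table (dimension lookups omitted for brevity).
warehouse.executemany(
    "INSERT INTO fact_sales (order_id, customer_key, date_key, amount) VALUES (?, ?, ?, ?)",
    clean,
)
warehouse.commit()
```

A production job would also handle failures and logging, but the nightly extract-transform-load rhythm is the same.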

On the front end, there would be any number of business intelligence tools giving users direct access to slice and dice the data. This solution supported operational and management reporting, answering the “what happened” types of business questions.

This was the typical data warehouse for many years, and it has served us well. However, new trends are causing it to break in several different ways, including but not limited to data growth, users’ expectations of fast queries, non-relational/unstructured data and cloud-sourced data. Organizations are unable to meet the growing need to integrate and analyze the wide variety of data being generated from social, mobile and sensor sources. Seventy-seven percent say that data intelligence is a major challenge. More importantly, these data warehouses struggle to answer the forward-looking, predictive questions necessary to run the business at the required levels of granularity or in a timely manner.

However, modern solutions, like cloud data warehouses, can be designed to handle these new trends.

The Modern Data Warehouse Structure: What to Consider

Data warehouse modernization can have a different meaning depending on the organization’s level of Business Intelligence (BI) maturity. Modernization is relative to the organization’s current capabilities and needs. Some organizations today are still struggling with basic reporting and often export data into Excel to organize, filter and analyze it. Because Excel offers some reporting benefits, these organizations often fail to see the value of investing in BI. Others have very mature data warehouse capabilities with multiple data platforms, advanced reporting tools and sophisticated power users.

From Kenway’s experience, many organizations are expected to upgrade their data warehouses and some of their analytical tools over the next several years. This may require a multi-platform environment to handle both traditional data warehouse reporting needs and big data analytics. It may also require a transition to a cloud data warehouse solution.

When thinking about modernizing your existing data warehouse, begin by evaluating your existing reporting capabilities and revisiting the original business drivers and assumptions. Then ask the following questions to determine whether you need to modernize your data warehouse:

Cloud Data Warehouses and Other Modern Storage Solutions

Cloud data warehouses are now widely touted as the future of data warehousing. They enable organizations to keep up with ever-expanding amounts of data. Data professionals say that data volumes grow by 63% every month at their companies. Many organizations are already short on IT talent, and managing on-premise solutions becomes unwieldy when data volumes are growing that rapidly. With a cloud data warehouse, you can rely on a third party to maintain the hardware and system updates for your database needs and reallocate IT resources to other business-critical tasks.

Along with a cloud data warehouse, there are other new tools, techniques and data platforms available today that can be used to achieve data warehouse modernization:

Modernize Your Database

In conclusion, traditional data warehouses were never designed to handle the volume, variety and velocity of today’s data-centric applications. Many organizations will therefore need a more modern data warehouse platform to address emerging business and technology requirements.

Are you interested in learning more about how Kenway can help you modernize your organization’s data warehouse? Kenway’s experts can help. Connect with us today at [email protected] for a consultation.

Cloud Data Warehouse FAQs

What is a cloud data warehouse?

A cloud data warehouse is a cloud platform that acts as a centralized data store and serves data for analytical use cases. Cloud data warehouses sit adjacent to a broad toolbox of public cloud data services and enable integration and use of these services to deliver applied data use cases.

What is the difference between a cloud warehouse and a data warehouse?

Whereas traditional data warehouses require organizations to deploy and maintain on-premise hardware and software, cloud warehouses don’t require any physical hardware.

How does cloud data warehousing work?

With cloud data warehouses, third-party vendors manage all hardware and software updates. Data is stored in the cloud, and can be accessed from anywhere. When an organization needs to increase its storage capacity, it can simply upgrade its account with the vendor — there’s no need to add more on-premise hardware.

Is AWS a data warehouse?

AWS itself is not a data warehouse; it is a cloud platform that provides a wide variety of managed services, including data warehousing solutions such as Amazon Redshift.

 

Power BI vs. Tableau vs. Qlik: Which BI Tool is Best?

“Just give me the data.” When Kenway Consulting engages in Business Intelligence (BI) projects, many of them begin with that simple phrase— “Just give me the data.” Organizations want their data from various source systems in the hands of their power users. Doing so allows them to leverage the industry expertise and analytical mindsets for which they hired these resources. To maximize our value during a BI project, we believe in getting our clients the data that addresses their highest-impact business questions early in the data discovery phase and then iteratively developing it in an in-memory data visualization tool.

We use in-memory analytics and data visualization tools because they allow:

However, just as no two clients’ needs are the same, we have learned that we cannot simply pick one tool to address every engagement. In an effort to best serve our clients, Kenway recently undertook a hands-on research project to vet Power BI vs Tableau vs Qlik.

Power BI vs. Tableau vs. Qlik: Our Research

Here is how it worked. We built a report in Qlik Sense and used it as a benchmark against two major competitors, Microsoft Power BI and Tableau, so we could compare Power BI vs Tableau vs Qlik. We reviewed the products on their ability to fulfill a few of the common use cases we have seen with our clients:

Before we begin our intergalactic adventure in data, here is some background on the exercise:

So let’s compare Power BI vs Tableau vs Qlik!

Data Extraction

Directly importing our data files using all three tools was quite easy. They all had user-friendly data loading wizards that allow you to quickly find files on your hard drive, make some minor manipulations, and incorporate them into your application.

The most striking difference was the number of data sources available via the versions we used. Power BI Desktop led the way in this category—out of the box, it allows users to utilize the wizard to extract from various file structures, databases, online services, and other applications. Qlik Sense also allows a large spectrum of data sources to be incorporated; however, it requires a bit more technical savvy and/or searching to do so. Tableau Public limits users to local files, OData connections, and the Azure Marketplace DataMarket. However, if you upgrade to the Professional version, you get access to the same breadth of sources and out-of-the-box connectivity as Power BI.

Outside of using the data loading wizards, Qlik Sense and Power BI provided much more robust scripting languages than Tableau. Qlik Sense’s data load editing language resembles SQL, a language familiar to many people with database experience. Power BI utilizes Power Query, whose formula language (commonly known as M) is a functional language reminiscent of F#. Tableau’s data loader allows users to make minor transformations to a loaded dataset (adding calculated values, grouping values, defining joins between tables, etc.); however, its lack of a coding language limits the number of tasks you can accomplish. For most use cases, the data will have to be prepared at the source level (e.g., modifying the files, creating views and/or tables in the desired model, etc.).

Once the data was loaded into the applications, Qlik Sense differentiated itself from the other two products through the final data model it is able to utilize. Qlik’s associative data model allows Qlik Sense to string together connections between each table and every other table in the data model. This allows users to develop unique analyses across seemingly disparate data tables. While Tableau and Power BI are also able to bring multiple data sets and data sources into their models, as users add varying layers of complexity to the data model, they must also be more cognizant of the impacts on that model.

For more information around each application’s connectivity, scripting, data load times, data compression abilities, and data modeling strengths and weaknesses, please see our full Data Wars Whitepaper.

Data Loading Breakdown

Executive Dashboard

Not surprisingly, all three of the tools were able to address our baseline reporting case—the Executive Dashboard.

As you can see, each tool was able to make a polished, user-friendly dashboard. Users are able to make line charts, scatter plots, and bar charts easily and can enhance them by adding filters. Furthermore, each of them supports a community of custom-developed add-ons. The one we used here is by our friends at Tableau Augmented Analytics, formerly Narrative Science (denoted by their logo). They have developed an add-on for Qlik Sense, Power BI, and Tableau that creates text summaries of your visualizations.

When it comes to Power BI vs Tableau vs Qlik from a default visualization standpoint, Tableau and Power BI came with more visualization types than Qlik Sense. While utilizing Qlik’s marketplace and customizing its standard visualizations allows Qlik Sense to make up some ground, this could be overly burdensome for less technical audiences.

Ultimately, we give a slight edge to Tableau in the visualization creation and organization space—the application’s interface has users create objects in separate tabs and then consolidate them into a single dashboard using a drag and drop design.

Tableau and Power BI also have an advantage when it comes to data manipulation on the visualization layer. They provide the user with wizards on the visualization layer to group fields, create hierarchies within fields, apply rules to fields, and create auto-filters for fields. The uses for these can range from making calendar fields (month, quarter, year, etc.) to developing drill down logic.

If users are embarking upon data discovery exercises, Qlik Sense’s white-green-gray filter functionality differentiates it from the other two. The white-green-gray color pattern indicates whether a field value is included in the current set, directly chosen for the current set, or excluded from the current set, respectively. This is useful in highlighting items like missed opportunities.

For further details around how the tools recognize field types (dates, locations, etc.), allow for heat map creation, enable users to build custom fields, and facilitate data discovery, please read our Data Wars Whitepaper.

Executive Dashboard Development

Customer Segmentation

With the basic use-cases covered, we wanted to see which tool handled some of our more complex business needs. The first that we looked into was customer segmentation. Many of our clients look to group their customers based on dynamic, automatically updated business rules. As this dataset was sales data, we decided to try and group them using the following:

Impressively, all of the tools were able to accomplish this segmentation. We used Qlik Sense’s and Power BI’s aforementioned scripting languages to build these rules into the data model. For Tableau, we were able to string together multiple custom fields in the visualization layer to develop the needed segmentations.
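To show what this kind of rule-based segmentation logic looks like independent of any one tool, here is a minimal Python/pandas sketch. The thresholds, field names, and segment labels are hypothetical and are not the actual rules used in our exercise:

```python
import pandas as pd

# Hypothetical sales data; in our exercise this came from the loaded files.
sales = pd.DataFrame({
    "customer": ["A", "B", "C", "D"],
    "total_revenue": [120_000, 45_000, 8_000, 300_000],
    "orders_last_12m": [14, 6, 1, 40],
})

def segment(row):
    # Illustrative, automatically applied business rules.
    if row.total_revenue >= 100_000 and row.orders_last_12m >= 10:
        return "Key Account"
    if row.orders_last_12m >= 5:
        return "Repeat Buyer"
    return "Occasional"

sales["segment"] = sales.apply(segment, axis=1)
print(sales)
```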

Flows

Another key transformation in which our clients have found value is flows. These are used in customer service routing, order fulfillment, customer purchase pattern analysis, and other scenarios. Because of the ability to create custom scripts in Qlik Sense, we were able to recreate the logic for these. While we were unable to accomplish this with Power BI, we believe it could have been re-created with more time. Tableau would require the data to be prepared outside of the tool, likely in the source system.
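For illustration, here is a minimal Python/pandas sketch of the underlying flow logic: ordering events per customer and deriving step-to-step transitions. The event data and step names are hypothetical:

```python
import pandas as pd

# Hypothetical event log, e.g., customer purchase or service-routing steps.
events = pd.DataFrame({
    "customer": ["A", "A", "A", "B", "B"],
    "timestamp": pd.to_datetime(
        ["2024-01-01", "2024-01-05", "2024-02-01", "2024-01-03", "2024-01-10"]),
    "step": ["browse", "quote", "purchase", "browse", "purchase"],
})

# Order each customer's events and pair every step with the next one.
events = events.sort_values(["customer", "timestamp"])
events["next_step"] = events.groupby("customer")["step"].shift(-1)

# Count transitions to describe the flow (e.g., browse -> quote, quote -> purchase).
flows = (events.dropna(subset=["next_step"])
               .groupby(["step", "next_step"]).size()
               .reset_index(name="count"))
print(flows)
```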

For more information around how customer segmentations and flows were incorporated into the tools, please see our full Data Wars Whitepaper.

Summary

Here’s what we learned comparing Power BI vs Tableau vs Qlik:

We hope you enjoyed our journey; we certainly learned a lot and got to geek out a little. Stay tuned for more information as these tools evolve and shift and new tools are added to the in-memory analytics ecosystem. Maybe we'll compare Power BI vs Tableau vs Qlik again. Or perhaps a new data visualization tool with in-memory analytics will arise.

Want to learn more about Kenway’s experience with Business Intelligence and data visualization tools? Drop us a line at [email protected].

 

Why Data Management Matters

Look around at the IT assets that your company provides. You see your tangible assets like your laptop and cell phone. Then think beyond the assets that you can’t visually see, like your Slack app, Outlook email license, or malware protection. Now, take a step back and go deeper to think about all the processes those tools are powering, such as using your phone’s address book contact information to send an email to a client or DMing a colleague with a new sales lead. 

Data powers much of our IT, and our IT assets produce data as a byproduct of operations, so the natural conclusion is that data is an IT asset. But that conclusion is wrong – data is a business asset waiting to be turned into a product, and one that can generate real value.

And like anything that can generate value, you need to understand the intention behind your company’s use for its data. Keep it secure. Use it properly. And then, it can be leveraged for greater gains and, sometimes, alternative uses.

At this point, most enterprises are sitting on the proverbial data oilfield with no means of accessing or utilizing those untapped resources. There’s a recognition that the oil is there, and all that is needed are some talented engineers to tap the reserve, and the value will start to flow.

But in reality, once that oil is out of the ground, it must be processed, refined, shipped, and matched with its appropriate use. It’s the same concept with data. Data is the asset, information is the product – and that product only generates value in the hands of the right consumer.

According to a NewVantage Partners survey of IT and business executives, many are not maximizing the business potential of their data. Only 39.7% of respondents said they’re managing data as a business asset, and just 26.5% said they’ve created a data-driven organization. 

Here we’ll examine why the management of your data matters, how you can overcome the challenges of achieving effective data management, and the consequences if you continue to treat your data as just another IT asset. 

WHAT IS DATA MANAGEMENT?

Data management is the practice of collecting, organizing, protecting, storing, and maintaining the data created and collected by an organization, with the ultimate goal of making the information within the data accessible and useful to business operations. 

The concept of data management started back in the 1980s when the floppy disk was all the rage. It has since evolved with the acceleration of technology. We now have new kids on the data block with the Internet of Things (IoT), artificial intelligence (AI), virtual data warehouses, and cloud technologies. These innovative advances have enabled market participants to compete on analytics at lower costs, with fewer barriers to entry. 

At the same time, the digital revolution has made the world of data more complex and vastly larger as the world converts everything to software solutions. Most enterprises have committed to digital transformations, and a significant challenge associated with these programs is capturing the operational data and converting it for practical use within the analytics space. We call this data management. 

An acronym that is being used more frequently in the analytics space to refer to the key dimensions of analytical data is DATSIS. It stands for discoverable, addressable, trustworthy, self-describing, interoperable, and secure, meaning in order for a company’s data to be effective, reliable and valuable, it should be:

The term data governance is often grouped together with data management. Data management and data governance are partners in supporting an organization’s business decisions, actions, and goals. Although these two functions have some overlap, they each have different purposes. Data governance done right establishes business ownership over data assets, ensuring coherence across systems and departments and enabling contextual use. Data management codifies and enforces those policies and procedures to build quality right into the system. 

WHAT DATA MANAGEMENT CHALLENGES DO COMPANIES FACE?

At Kenway Consulting, we’ve seen companies experience deep struggles with their data management, unable to bridge the gap between business needs and technical capabilities. Yet modern data platforms have made scalability and flexibility achievable goals.

Kenway recently worked with an asset management company that needed a scalable and manageable data solution that would provide invaluable insights and analytics to better target its wealth advisor clients. The solution also needed to retain agility and flexibility to facilitate rapid prototyping and ad-hoc analysis. 

The asset management company was obtaining client data from multiple providers without a single source of truth. The data was scattered, outdated, and duplicated, resulting in difficulties garnering Customer-360 insights as well as poor business outcomes around its efforts for sales, marketing, and product distribution.

The asset management company’s data management challenges are all too common. According to research findings from SnapLogic, IT decision-makers report that, on average, 42% of data management processes that could be automated are currently being done manually, taking up valuable time and resources. As a result, almost all respondents (93%) believe improvements are needed in how they collect, manage, store, and analyze data.

These are some of the pain points that arise when a company has a disorganized and disjointed data management system:

WHY LOOK AT MANAGING DATA DIFFERENTLY TO ACHIEVE RESULTS?

An important point to remember is that data management extends beyond the technical aspects of an organization. Data management is far-reaching and touches on a company’s culture, mindset, and people. This is because the primary goal of data management is to benefit the organization and produce business outcomes. 

Data is a business asset that can generate revenue, protect revenue, and help decrease costs. Businesses can better achieve the results they want by looking at data management through multiple lenses, from the entire organization’s strategic goals to the more tactical perspectives of the data steward, the data author, and the analytics and reports consumer. Developing an understanding of the many personas in the enterprise analytics ecosystem is critical to driving data management that scales and creates compounding value.

The Q1 2022 Alation State of Data Culture Report examines the correlation between data culture and revenue. It uses metrics from the Data Culture Index, which measures an organization’s fitness to enable data-driven decision-making. Only 15% of companies qualified as having a top-tier data culture, meaning widespread adoption of data search/discovery, data literacy, and data governance, down from 29% in Q3 2021.

When data mismanagement occurs, the effects can trickle down to all parts of an organization. It can cause a so-called “time suck effect” throughout the company, with people grappling with uncertainty around critical metrics, operational issues, or questions of meaning/definitions. By creating a culture driven by data, businesses can empower their people to realize the full potential of their data and properly manage it.

HOW KENWAY CONSULTING HELPS YOUR COMPANY AVOID THE “TIME SUCK EFFECT”

Going back to that asset management company example, Kenway addressed the client’s data management challenges around targeting and lead qualification. The data solution enabled the sales teams to eliminate manual data consolidation tasks and to spend that time leveraging insights from the now consistent and accurate data. Kenway helped to improve the organization’s strategic decisioning by making consistent, high-quality data accessible and shared across the entire organization.

Kenway has guided many companies across financial services, healthcare, telecommunications, and more in restructuring their data management systems for business success. Connect with us or browse our client success stories to learn how we can help you with your data management and make your data a viable business asset.

 

The CDO’s toolbox: Data Governance & Data Management

Until the early 2000s, most firms accumulated data at a manageable rate and were able to collect, store and use that information with little additional effort. However, over the past two decades, the introduction of automation mechanisms (i.e., robotics, IoT, etc.), social media, and cloud storage has made it increasingly cheap and easy to collect and store data.

Today, organizations are accumulating an insurmountable amount of information. Innovative academics have identified a plethora of ways to leverage data, and tech giants have developed (virtually) limitless computing power to process it. The world is now accumulating information so quickly, and at such scale, that we are observing instances where laws struggle to protect it and companies frantically try to leverage it.

As one would expect, with an increase in the amount of available data comes an increase in usage across organizations. From executives and employees, to customers and regulators, it’s become a vital component of interaction across a myriad of stakeholders, making it extremely critical for competing in today’s emerging data-driven economy.

Data as a Strategic Asset

This increase in data usage has led us to a world where the quality of data matters. When stored data has inconsistencies in form, is missing components, or is not up to date, it can have a significant impact on an organization.

A colleague shared with me an example of this quality issue occurring during her previous job at a global bank. When generating monthly reports that showed details about funds (i.e., returns, exposure by geographic region, product type, currency, etc.), she found herself spending a ton of time exporting data from systems, making manual adjustments so that it was in the correct form, realizing it didn’t look right, going back to the numbers to figure out what was wrong, and so on.

In such situations, poor data quality can turn out to be costly because analysts and managers often find themselves spending more time preparing data than analyzing it. Unlike a decade or two ago, poor data quality today has a direct and greater financial impact. Inadequate governance and management often shows up as subpar operational performance (e.g., bad decisions driven by incorrect reporting), reputational loss (e.g., data leaks), or worse (e.g., regulatory fines for non-compliance with privacy laws).

Data Governance and Data Management Lay the Foundation

To thrive in today’s data economy, the CDO/CIO office is often pressured to be intentional about continuously improving data quality across the organization. To do this, they need to rely on both Data Governance and Data Management mechanisms. While the concepts of Data Governance and Data Management are commonly understood and documented, one of the major differences between the two is that Data Governance is a strategy (i.e., macro) and Data Management is a practice (i.e., micro). But both are necessary for any organization to thrive.

Organizations that formalize Data Governance have roles and responsibilities defined for data ownership and stewardship. They also have policies, procedures and enforcement in place to ensure that data quality standards are upheld from data entry to data delivery. Data Management, on the other hand, is prevalent in organizations that have tools and technologies serving various purposes, such as visibility into metadata (e.g., what data is in which table, measure calculations, data lineage), access controls, and more.
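As a simple illustration of how Data Management can codify a governance policy into an enforceable check, here is a minimal Python/pandas sketch of rule-based data quality validation; the records and rules are hypothetical:

```python
import pandas as pd

# Hypothetical customer records arriving from a source system.
customers = pd.DataFrame({
    "customer_id": [101, 102, None, 104],
    "email": ["[email protected]", "not-an-email", "[email protected]", None],
    "country": ["US", "US", "DE", "US"],
})

# Illustrative data quality rules that a governance policy might mandate.
rules = {
    "customer_id is required": customers["customer_id"].notna(),
    "email must contain '@'": customers["email"].str.contains("@", na=False),
}

# Report every record that violates a rule before the data moves downstream.
for rule, passed in rules.items():
    failures = customers[~passed]
    if not failures.empty:
        print(f"Rule violated: {rule} ({len(failures)} record(s))")
```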

Examples of common problems solved by Data Governance and Data Management:

Both Data Governance and Data Management are complementary in nature and essential for an organization to run optimally in today’s world. In the ideal state, a combination of these two would allow organizations to understand the positive effects data can have on their business and offer the ability to create that impact with little effort. Such an ideal further enables organizations to connect the right insights with the right people for driving value all the way from optimizing business processes to propelling innovation.

Driving Value from Data Governance and Data Management

Structuring and initiating Data Governance and Data Management implementations can appear daunting and intimidating. Common trends suggest organizations often struggle to launch and sustain a Data Governance program, or lack buy-in on basic Data Management tools because they seem like an expensive proposition.

Kenway’s approach to data is different. By understanding and prioritizing based on use cases supported by business objectives, organizations are enabled to structure and pace the development of their data infrastructure in such a way that the project could potentially pay for itself by adding immediate ROI to business initiatives.

We’d love to learn more about your organization’s Data Governance and Data Management initiatives. Drop us a note at [email protected], or check out our Information Insight page to learn more.

Request a Data Governance Consultation

 

 

Data Management – An Imperative in Today’s Environment and Beyond

Just when businesses thought they were ahead of the game, disruption threw yet another curveball.

The coronavirus pandemic has radically changed demand for products and services in every sector, exposing weaknesses and fragility in global supply chains, service networks, operations and IT capabilities. Nevertheless, while unprecedented levels of uncertainty bring challenges, they also bring opportunities.

It has been striking how well and how fast many companies have adapted during this time, achieving new levels of visibility, agility, productivity and end-user connectivity. Much credit is due to the organizational, operational and cultural characteristics of the companies that have successfully adapted to our new normal. Still, it is the organizations with the greatest alignment between sales, marketing and product teams, and with access to clean, accurate, timely customer data, that have been able to capitalize most effectively on rapidly changing consumer demands and trends in the current economic landscape. In short, companies with the most robust data management capabilities have protected, generated and captured more value than their counterparts by nearly every measure in recent months, and a very timely study supports this thesis. On June 9, 2020, InsideView, a leader in business-to-business (B2B) research and intelligence, announced the results of its third sales and marketing alignment report, Unlocking Revenue Performance in the New Normal, based on a survey of more than 400 global sales, marketing and operations leaders. The study was conducted in February and March 2020, just as international shelter-in-place policies and economic downturns began to spread across the globe, and the results were humbling.

We are all familiar with the benefits of well-aligned sales organizations: better targeting, more effective sales teams, higher win rates, sustainable competitive advantages, greater responsiveness, agility and more. We are also familiar with how too many companies claim to aspire to alignment yet continue to do little to achieve it, especially when it comes to the customer data foundation upon which nearly all sales and marketing activities are built.

This year, according to InsideView’s research, three facts remain crystal clear for a majority of companies:

  1. Your sales and marketing data is a mess.
  2. You know you need to fix it.
  3. But you don’t have a plan to fix it.

Nothing underscores this point more than the disconnect between those who said data is a high priority (71%) and those who admitted to doing little or nothing to keep it clean (80%). Although a promising 41% of respondents claimed to perform manual maintenance based on data governance rules, a reasonable assumption can be made that these respondents are doing the bare minimum and are in the relatively early-maturity stages of their data governance and data management journeys. Furthermore, while performing offline data cleansing services 1-2 times per year is certainly helpful, it is far too infrequent to facilitate accurate data-driven decision-making on a consistent basis.

So, why the gap? Before exploring why it’s important to overcome the barriers to alignment, let’s first explore how much poor data quality is costing us.

The Cost of Poor Data Quality

A 2019 survey by CSO Insights found that up to 27% of salespeople’s time can be wasted due to bad data: pursuing low-quality leads, fixing reports, investigating mismatched leads, and so on. How costly is this lost efficiency? Let’s start with a simple exercise and take a more conservative estimate that 20% of salespeople’s productivity is lost due to poor data quality.

Imagine you work for an organization with 10 sales development representatives (SDRs) who are each paid $100,000 and generate, on average, $4 million to the sales pipeline per year. Let’s further assume that your organization’s average win rate is 20%. Using these assumptions, the impacts to your organization’s top and bottom lines are undeniable.

In our example, the total cost of wasted resources and lost revenue is $1,800,000 which, on its own, should be more than enough to spur action. However, this is just the directly measurable financial impact.
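A minimal sketch of that arithmetic, using the assumptions above, looks like this:

```python
# Reproducing the example above: 10 SDRs, $100K salary each, $4M pipeline each,
# a 20% win rate, and 20% of productivity lost to poor data quality.
num_sdrs = 10
salary = 100_000
pipeline_per_sdr = 4_000_000
win_rate = 0.20
productivity_lost = 0.20

wasted_salary = num_sdrs * salary * productivity_lost              # $200,000
lost_pipeline = num_sdrs * pipeline_per_sdr * productivity_lost    # $8,000,000
lost_revenue = lost_pipeline * win_rate                            # $1,600,000

total_cost = wasted_salary + lost_revenue
print(f"Total cost of poor data quality: ${total_cost:,.0f}")      # $1,800,000
```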

What about the regulatory risks and penalties you’ll be charged if you aren’t able to provide regulators with clean, accurate customer data or if you aren’t able to comply with customers’ right to be forgotten? What about the reputational damage when a salesperson makes a prospecting call to an existing client, the day after their relationship manager made a similar call? What about the prolonged sales cycle and inability to up- and cross-sell to current customers? What about the inability to personalize the customer experience, predict churn rates, and extend customers’ lifetime values? The list of costs, risks and uncaptured revenue quickly mushrooms.

Let’s refer to our example once more. We assumed you worked for a company with 10 SDRs, but what if you worked for a company with 100, 500, or 1,000+ SDRs? Even if you estimate only a 5-10% loss of productivity as a result of poor data quality – meaning, your SDRs spend a respectable 90-95% of their time unencumbered by bad data – the costs of wasted resources and foregone revenue remain substantially and undeniably high.

I encourage you to use your own company’s figures and participate in the exercise above. You will likely find that your results are more than enough to justify an investment in improving data management capabilities. In fact, in 2017, Gartner measured the average financial impact of poor data on businesses to be $9.7 million per year.

So, why haven’t you made the investment? You understand the costs associated with poor data management, but are you aware of the benefits of establishing a robust data management environment?

The Benefits of Robust Data Management

As we discussed earlier, while unprecedented levels of uncertainty bring challenges, they also bring opportunities. Following the short-term mitigative efforts to protect resources and preserve liquidity during a crisis, business leaders are afforded the rare opportunity to critically assess the returns of historical, current and future investments with new perspectives and a heightened sense of clarity.

One such competency that many business leaders have scrutinized recently and identified as an opportunity for improvement as a result of COVID-19 is their organizations’ data management capabilities. Not only does robust data management enable corporations to navigate times of tremendous uncertainty, it also allows data-driven decision-making to continue during times of economic recovery and expansion – two critical phases of the economic cycle that serve as optimal periods for organizations to generate growth and capture market share.

To be clear, although robust data management will assist companies tremendously when steering through crises, its value may be greatest (and likely the most underappreciated) during times of economic expansion when complacency is easily hidden by the shadows of growing profits, only to be revealed once the next crisis arrives. One would hope that his/her organization has spent the most recent expansion with prudence and haste: investing in data management, diligently capturing market share and positioning itself to capitalize on gaps left in the market by competitors in the next downturn.

Before we explore the financial and operational benefits of mature data management, let’s illustrate  data management and visualize how it can produce benefits from a practical perspective. Below is an example of master data management (MDM), in which three separate records exist for the same customer. These records may have been generated by three different entities – an internal employee, a third-party vendor and a manufacturer, for example – in varying source systems. By cleansing, validating and consolidating the records with match/merge capabilities and preferred attributes determined by business members, we can create a single golden record for the customer.
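As a rough, code-level illustration of that match/merge step, here is a minimal Python sketch that consolidates duplicate customer records into a single golden record. The records, the matching approach, and the survivorship rule (most recently updated non-null value wins) are simplified, hypothetical stand-ins for what a real MDM platform does:

```python
import pandas as pd

# Three hypothetical records for the same customer, captured in different source systems.
records = pd.DataFrame([
    {"source": "CRM",         "name": "Jon Smith",      "email": "[email protected]", "phone": None,           "updated": "2020-03-01"},
    {"source": "Vendor feed", "name": "Jonathan Smith", "email": None,                  "phone": "312-555-0100", "updated": "2020-01-15"},
    {"source": "ERP",         "name": "J. Smith",       "email": "[email protected]", "phone": "312-555-0100", "updated": "2019-11-30"},
])

# Match: here the duplicates are already grouped; real MDM tools use fuzzy matching
# across many attributes to decide which records refer to the same customer.
# Merge / survivorship: keep the most recently updated non-null value per attribute.
records = records.sort_values("updated", ascending=False)
golden = {col: records[col].dropna().iloc[0] for col in ["name", "email", "phone"]}

print(golden)   # a single consolidated "golden record" for the customer
```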

Immediately, we begin to viscerally understand the vast benefits of maintaining a robust data management strategy. In our pictorial example, with each department now sourcing its data from the same central repository, a number of business outcomes can be realized, ranging from reduced risks and costs, to enhanced customer experiences and increased revenues, to improved technology scalability.

Reduced Risks and Costs

Enhanced Customer Experiences and Increased Revenues

Improved Strategic Scalability

It is important to note that these benefits are not mutually exclusive. Rather, they can be, and often are, achieved simultaneously. As a result of improved data integrity, departments spend less time generating, deciphering and validating reports and more time prospecting, servicing existing clients and discovering novel insights, thereby reducing costs, improving understanding and increasing revenues. While customer service representatives begin to more effectively personalize the customer experience, the risk of reputational damage decreases, and customer lifetime values increase.

It is no wonder SiriusDecisions found that when sales, marketing and product departments are well-aligned on customer data, organizations experience 19% faster revenue growth and 15% higher profitability.

What to Do

Thus far, we have explored data management in today’s environment, the costs of misalignment and poor data quality, and the benefits of mature data management. However, it would be naïve and misleading to illustrate robust data management as the sole ingredient to achieving the ever-desired state of organizational utopia in which all decisions are made with perfectly complete and accurate data. Rather, there is no perfect utopia.

Data management is an ongoing journey of iterative improvement that requires continuous learning and commitment – commitment not only to desired business outcomes, but to one another. Data management does not exist in a vacuum, and it is not solely IT’s responsibility. It is reliant on both the business and IT collaborating to build a broader ecosystem in which data is treated as an asset, and both parties participate to govern data policies, procedures and quality controls to steward data through creation, ingestion, maintenance and consumption.

Alas, where to begin? Embarking on the journey I’ve described can seem daunting. Whether it’s not knowing where to start, doubting the validity of the aforementioned benefits due to previously failed investments or fearing little progress can be made due to internal politics or lack of expertise, the uncertainty can seem overwhelming. But just as the challenges of COVID-19 seemed overwhelming, you were able to overcome them, and likely did so by partnering with colleagues and working as a team to venture into new and uncharted territory. I recommend the same approach here.

Take it from us. Not only has Kenway successfully partnered with clients to help them along their data management journeys, but we’ve embarked on our own journey and learned countless lessons along the way. Rather than duplicating effort and facing this uncharted territory alone, let us help and share with you what we’ve learned.

Now, back to that question of “Where to begin?” If you’re willing to take the leap, let’s start with a conversation. Connect with me on LinkedIn or email me directly at [email protected]. I’d love to talk.

 

Information Insight – Helping to Tell Your Company’s Story

As my colleague, Kevin, recently discussed, we have changed our What at Kenway. That is, we have rebranded our Capabilities and Services in order to provide clarity into the areas where we provide the most value to our clients.

With this change came the need to update some of our messaging — updated elevator speeches, realignment of project summaries, and new ways to describe what we do on a day-to-day basis as a Kenway Consultant.

Starting with our elevator speeches, I worked with some of my teammates to begin updating the ways we informed clients and prospects about Kenway’s value proposition. Unfortunately, we got stuck. We did all the things you would assume to be valuable: we reviewed the organizational changes, we looked at introductory messages that other companies presented, and we tried switching out the Capabilities and Services in our old messaging with the new ones. None of it worked. It all felt over-rehearsed and inauthentic.

At this point, one of my teammates (probably out of frustration), asked aloud, “What is the story we are trying to tell here?” This completely reframed the conversation. Instead of trying to simply describe our new Capabilities and Services, we began discussing the journey on which we wanted to take the listener, what we wanted them to take away from the interaction.

Some of this is reflected in Kevin’s article when he articulates that:

What we learned from this elevator speech exercise was that the important pieces to telling a compelling story were:

These learnings also translate well into how Kenway approaches our Information Insight Capability. More importantly, it allows us to factor in some key missteps such as:

At Kenway, Information Insight entails our Data Management, Data Governance, and Business Intelligence (BI) & Analytics Services. Similar to how people shape an effective story, Data Management helps you collect and index data; Data Governance ensures that the right context, in terms of data origination and refinement, is used; and BI & Analytics ensure that your data is presented in such a way that it provides actionable information.

Furthermore, we take an iterative, collaborative, and cross-functional approach to our engagements. This helps to avoid the missteps mentioned above. Through iteratively addressing high-impact issues or opportunities, we focus our efforts on key projects and avoid situations where teams gather data for the sake of gathering data — often expending high effort for small problems. We augment this approach with frequent client interaction to ensure that our teams are answering the right questions and optimally focusing their time to deliver value.

Finally, Kenway ensures that, regardless of the core Service driving a project, we bring the right skills, at the right time, in the right volume, for the right duration. This allows us to provide more holistic solutions to meet our clients’ needs. For example, a BI & Analytics project will not only benefit from our ability to model data and create data visualization applications, but it will also leverage Kenway’s knowledge to integrate external data sources through Data Management, and understand the quality and definitions of the data through Data Governance.

By combining our knowledge and technical skills around our Services with an approach that ensures alignment to our clients’ goals, Kenway helps companies to tell their stories. Working together, we help to understand past performance, view what is driving the business today, and gain insight into what opportunities and risks could arise in the future.

We would love to help your organization tell its story! Send us a note at [email protected] or check out the Information Insight page of our website HERE to learn more.

 

The Cost of Dirty Data

One of Kenway’s core capabilities is Information Insight. Using a combination of our Data Governance, Data Management and Business Intelligence services, we try to help our clients better leverage their data as an asset. The reason that we group these three services together, instead of presenting them separately, is because of our belief that you must have all three to create a robust data environment, which in turn increases our clients’ return on investment (ROI) on their data initiatives. Let’s take a step back and define some terms.

ROI: Regarding Information Insight engagements, our clients’ primary investment is in the effort and time it takes for Kenway to collect, clean and present data according to clients’ requirements, along with any technology expenditures (e.g. environments, software, etc.). The return on this investment is generally comprised of the speed, quality and value of the environments, visualizations and other deliverables that enable a client to make faster and more accurate decisions.

Robust Data Environment: A data environment is considered robust when it equally addresses elements of data governance, data management and business intelligence. This provides an organization the ability to understand:

As you can see below, a key aspect of a robust data environment is balance. Missing aspects of any of these disciplines hampers the value of your environment as a whole.

 

As a budding data analysis provider, whose work has mostly focused on data visualization efforts, I see an obvious question embedded in the above visual. Is it possible to quantify the impact of the presence of data governance on the number of hours it takes to complete a data visualization project? Specifically, if there is a lack of available, clean and trusted data—that is, we have a lot of “dirty data”—how does that impact the effort required for the project? Further, how does this impact the ROI of the project?

To try and answer this question, my colleague, Jon Chua, and I kicked off our quest with some basic data collection. We first took an inventory of all the data visualization projects Kenway has done over the last six years. We then looked at the number of hours it took to complete each project and proceeded to dig into the details by considering the size and complexity of the scope of each project.

By reaching out to the resources that were involved in the projects, we got an understanding of the current state of the clients’ data governance maturity level at the beginning of each project. Furthermore, we also tried to get a sense of the scope of the project by asking how many dashboards were required and the relative complexity of them.

Having this data in place allowed us to perform some basic exploration. We started by simply plotting the hours taken to complete a project against the number of dashboard views required by the client.

 

Typically, we would expect this to be a smooth and steadily rising line where more dashboard views were related to more hours. However, as you can see in the above graph, that is not the case. We notice a multi-modal distribution, which suggests there are other factors, such as data governance, that contribute to accurately estimating the number of hours.

To try and weigh the impact of data governance, we ran a regression analysis relating the hours required to the number of dashboards and the presence of data governance. Utilizing R to perform this analysis, we got a model of approximately:

Hours ≈ 98 + 13 × Dashboards (with data governance in place), or
Hours ≈ 98 + 38 × Dashboards (without data governance)

Interpreting the above model, we see that roughly 98 hours are required to complete a project even when the required number of visualizations is zero, irrespective of the presence or absence of data governance. We can think of this number as the baseline effort required to complete the up-front tasks necessary to create a visualization, such as onboarding, gathering requirements, defining scope, etc.

Next, consider the scenario where the client has formalized data governance. They have roles and responsibilities defined for data ownership and stewardship, they have a clear understanding of their data sources, and they have mechanisms (i.e., policies, procedures and enforcement) in place to ensure that data quality standards are upheld during data entry. In this case, we observe that the total hours required for each additional dashboard is, on average, 13 hours.

On the other side, the absence of data governance can be incredibly costly. In such a scenario, the number of hours required to create a dashboard nearly triples to 38. In other words, we observe that, on average, each dashboard required an additional 25 hours (38 versus 13) when data governance was not in place.
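For readers who want to reproduce this kind of analysis on their own project history, here is a minimal sketch in Python (our original analysis was done in R). The project data below is hypothetical, and the fitted coefficients will depend entirely on your own numbers:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical project history: hours spent, dashboards delivered, and whether
# the client had formalized data governance at project start (1 = yes, 0 = no).
projects = pd.DataFrame({
    "hours":      [110, 180, 250, 140, 300, 480],
    "dashboards": [1,   6,   12,  1,   5,   10],
    "governance": [1,   1,   1,   0,   0,   0],
})

# Hours modeled as a baseline plus a per-dashboard effort that shifts
# depending on whether governance is present.
model = smf.ols("hours ~ dashboards + dashboards:governance", data=projects).fit()

# Intercept: baseline hours; dashboards: hours per dashboard without governance;
# dashboards:governance: change in per-dashboard hours when governance is present.
print(model.params)
```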

Poor data governance, or the absence of it, is one of the leading causes of “dirty data.” Per our analysis, we believe that the effort to collect, catalog and cleanse the data in the absence of data governance has a significant, measurable impact on the overall effort of creating data visualizations. Based on these findings, the up-front cost of setting up a data governance framework at your organization has clear benefits when you factor in the future cost savings!

Want to learn more about data governance? Check out Kenway’s approach HERE or send us a line at [email protected]!

 

Know Thy Data

An Incredible RBI Season

Dante Bichette had 133 RBIs in 1999.

Bichette is a former major league baseball player.  He spent time with five different franchises, but his longest tenure was with the Colorado Rockies, where he played outfield during the 1999 season.

RBI stands for “run batted in,” and is a baseball statistic intended to measure the number of runs an offensive player is responsible for producing.  For instance, if there’s a player on third base, and the batter hits a single, allowing the player on third to score, then the batter gets one RBI.

In 1999, Bichette had 133.  This is a large number.  It was the 8th highest number of RBIs in all of major league baseball that year.  It’s more than Giancarlo Stanton had in 2017, when he led the majors with 132.  In fact, 133 would have been enough to lead the league in each of the last 4 seasons, and 6 of the last 8.  Bichette’s 1999 season is the 170th greatest individual RBI season since 1900, which might not seem like much, but consider that there have been over 15,000 individual seasons in that period of time.

The RBI tells us that Dante Bichette produced a lot of runs in the 1999 season, and, after all, the batter’s job is to produce runs.  Given that information, it might be reasonable to think that Dante Bichette was a good, perhaps even great, baseball player in 1999.

Taking a Second Look at the Numbers

With all due respect to Mr. Bichette and the Colorado Rockies organization, Bichette was neither great nor even good in 1999.  Bichette was a bad major league baseball player in 1999 and produced negative value for his team.  The Rockies would have won more games in the 1999 season if they had replaced Bichette with an outfielder from their AAA Minor League affiliate (at the time, the Colorado Springs Sky Sox), or just about any other professional baseball player.

How is that possible?  After establishing that Bichette’s RBI total, a number indicative of his offensive production, was elite in 1999, how was he an objectively bad professional outfielder?

It goes without saying that the threshold for playing major league baseball is incomprehensibly high, and calling Bichette a “bad” baseball player is untrue.  To judge a player’s value in practical terms, baseball statisticians invented the concept of the “replacement-level player.”  Such a player is one that’s readily available to any club that wants him, and, in real terms, can be described as a player at the high level of a franchise’s minor league organization.  This player can be called up to the major league club to “replace” another player with little or no cost to the organization.  If a player on the team is performing below this level, he shouldn’t be on the team, because he can be readily replaced at a lower cost.

Bichette was performing so far below the replacement level in 1999 that he cost the Colorado Rockies the equivalent of more than 2 wins over the course of the season.

There are many reasons for this, but let’s talk about the RBI first.  The RBI is a statistic with good intentions, but those intentions cloud the truth that the metric is not particularly effective at measuring value.  Teams win games by scoring runs, so it would make sense to identify not only the players that score those runs, but also the players whose performance at the plate allow those players to score runs.  The problem is that in order to get an RBI, unless you hit a home run, it requires context that is outside of the batter’s control: someone has to be on base.

Bichette batted with a lot of players on base during the 1999 season.  The on-base ability of his teammates allowed him to balloon his RBI totals, in spite of being roughly league average at the plate.  Bichette’s on-base percentage was below league average.  His home run total and slugging percentage were high, but he played his home games in Denver, where the thin air notoriously inflates power numbers.

However, teams also win games by preventing runs, something that Bichette did extraordinarily poorly.  By defensive metrics, Bichette was the worst defender in all of major league baseball in 1999.

The end result is a complete picture of Bichette’s 1999 season, which stands in stark contrast to looking only at the RBI.  The RBI may be recorded accurately, but it’s a perfect example of why data requires quality and governance in addition to accuracy.

Do You Know all the Numbers? 

Baseball is a game, but it’s also a business.  They didn’t know it at the time, but Colorado paid Bichette a veteran’s salary for production that they perhaps could have gotten from someone making the league minimum.  Think about your business.  Think about all the roles, projects, programs, investments.  Do you think that all of those entities are delivering positive value?  Do you really know how much value every decision is delivering?

This was a challenge faced by one of our clients.  The client had a project that was great in theory.  It was a good idea, and good ideas deliver value, right?  As demonstrated by the RBI above, knowing a number isn’t enough.  Trusting a number requires an understanding of what it means and from where it originates.  Otherwise, it’s just a number, and it can make you look foolish.  In our client’s case, the number that they were using seemed foolproof.

The client had a Short Messaging Service (SMS) program, where a text message would go out to their customers, with the hope that the message would either prevent the customer from making a call that would cost the client money (e.g. the message would contain an appointment reminder) or generate a call that produces revenue (e.g. the customer calls to make a payment).  This sounds like a no-brainer.  The concept is solid enough that one could be forgiven for assuming it can’t miss, much like the RBI.

Kenway doesn’t believe in making such assumptions, so we performed a deep analysis of the data, merging SMS databases with Interactive Voice Response (IVR) call system databases, looking for call prevention and/or generation.  Kenway evaluated a cost calculator that combined labor and materials costs associated with the desired benefits of the SMS program, along with the efficacy of the initiative as a whole.  In doing so, we could understand the data and assign precise cost-based values to every message sent from client to customer.  Compare this to baseball, where you can combine context, home-field advantage, performance in other facets of the game, etc., to create a single metric that measures a complete view of a player’s overall value to his team, rather than using one number (e.g. RBI) that sounds good in theory.
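A minimal sketch of the kind of merge involved, using Python/pandas with hypothetical SMS and IVR data, might look like the following; the 48-hour matching window and the per-message values are illustrative only:

```python
import pandas as pd

# Hypothetical outbound SMS log and inbound IVR call log.
sms = pd.DataFrame({
    "customer": ["A", "B", "C"],
    "sent_at": pd.to_datetime(["2020-05-01 09:00", "2020-05-01 09:05", "2020-05-01 09:10"]),
    "message_type": ["appointment_reminder", "payment_reminder", "appointment_reminder"],
})
ivr = pd.DataFrame({
    "customer": ["B", "D"],
    "called_at": pd.to_datetime(["2020-05-01 11:00", "2020-05-01 12:00"]),
    "intent": ["make_payment", "billing_question"],
})

# Join SMS sends to any IVR call by the same customer within 48 hours of the message.
merged = sms.merge(ivr, on="customer", how="left")
merged["followed_by_call"] = (
    (merged["called_at"] - merged["sent_at"]).dt.total_seconds().between(0, 48 * 3600)
)

# Assign an illustrative value per message: reminders NOT followed by a call count as
# prevented calls; payment reminders followed by a call count as generated revenue.
merged["value"] = 0.0
reminders = merged["message_type"] == "appointment_reminder"
payments = merged["message_type"] == "payment_reminder"
merged.loc[reminders & ~merged["followed_by_call"], "value"] = 5.00   # assumed cost of a prevented call
merged.loc[payments & merged["followed_by_call"], "value"] = 25.00    # assumed value of a generated call
print(merged[["customer", "message_type", "followed_by_call", "value"]])
```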

Kenway’s analysis confirmed some of the organization’s return-on-investment assumptions, but was also able to shed light on initiatives which may not have been providing the benefit needed to justify their continuation.  Then, not only was Kenway able to perform this analysis, but we also implemented a new process that allowed the client to continue to perform audits on their own for as long as the project would continue.  The end result is an organization that runs more efficiently and has more accurate and detailed insight into the projects that it is funding.

As for the Colorado Rockies, they lost 90 games and finished 12.5 games out of the playoffs.  Short of replacing Dante Bichette with the ghost of Babe Ruth, no recommendation was going to save them.

If you have numbers, but you’re not sure about their meaning or effectiveness, we would like to hear from you at [email protected].

 

Data Wars: White Paper on In-Memory Reporting Tools

“Just give me the data.” When Kenway Consulting engages in Business Intelligence (BI) projects, many of them begin with that simple phrase— “Just give me the data.” Organizations want their data from various source systems in the hands of their power users. Doing so allows them to leverage the industry expertise and analytical mindsets for which they hired these resources. To maximize our value during a BI project, we believe in getting our clients the data that addresses their highest-impact business questions early in the data discovery phase and then iteratively developing it in an in-memory data visualization tool...