Integration solutions that complement business objectives and maximize return on investment while minimizing operational overhead

Data integration involves the orchestration of data inputs and outputs between various data repositories, data lakes, and warehouses via technologies such as pipelines, event streams, or services. Effective data orchestration ensures data is readily available for reporting, downstream applications, and various operational activities throughout the business.
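To make the idea concrete, here is a minimal sketch of a data pipeline moving records from a source into a warehouse table for reporting. The source records, table name, and the use of SQLite as a stand-in warehouse are illustrative assumptions, not a prescribed stack.

```python
import sqlite3

# Minimal extract-transform-load sketch: pull records from a source,
# standardize them, and land them in a warehouse table for reporting.
def extract():
    # Stand-in for an API call, event stream, or file drop.
    return [{"id": 1, "amount": "19.99"}, {"id": 2, "amount": "5.00"}]

def transform(rows):
    # Normalize types so downstream consumers see consistent data.
    return [(r["id"], float(r["amount"])) for r in rows]

def load(rows, conn):
    conn.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)", rows)
    conn.commit()

# Orchestrate the steps end to end; a scheduler or event trigger
# would normally drive this.
conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
```

In a production setting each stage would be owned by a pipeline tool or event stream, but the shape — extract, standardize, land for consumption — is the same.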

Our Insights

Through a multitude of client engagements over several years, and within a wide peer group of consulting professionals, Kenway has developed several key insights and core principles that we believe are integral to the success of any endeavor in the data space.

Create a Product Mindset

Data integration technologies facilitate the supply of data (in various forms) to data products and its consumption by downstream users.

Simplify Infrastructure

Choose PaaS offerings for the entire stack where possible. Hybrid solutions, where pipeline tools run on-premises and data is stored in the cloud, introduce latency as data travels between environments. On-premises solutions are also often complex to maintain from a skills and cost standpoint.

Simplify Architecture

Build simple data pipelines to support data domains and use cases. Avoid duplicating data that fulfills similar purposes across the solution. Build upfront mechanisms to ensure that reports and pipelines are developed against production data.
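One way to avoid duplicating data that serves similar purposes is to keep a single production table and expose each reporting need as a view over it. The table, columns, and SQLite backend below are illustrative assumptions used only to sketch the pattern.

```python
import sqlite3

# One production table serves multiple use cases via views, rather than
# maintaining near-identical copies of the data per report.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("east", 100.0), ("west", 250.0), ("east", 50.0)],
)

# Each reporting need is a view over the same data, not a second pipeline.
conn.execute(
    "CREATE VIEW sales_by_region AS "
    "SELECT region, SUM(amount) AS total FROM sales GROUP BY region"
)
rows = dict(conn.execute("SELECT region, total FROM sales_by_region"))
```

Because every consumer reads from the same underlying table, a fix or schema change lands in one place instead of across divergent copies.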

Unwind dependencies

Ensure that discrete operations (e.g., archival, masking, standardizing) are adequately decoupled. Keep data models and domains decoupled until business requirements dictate otherwise, for example by utilizing a single warehouse with multiple schemas or data marts.
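Decoupled operations can be expressed as independent, composable steps, so each one can be changed, re-ordered, or removed without touching the others. The specific steps and field names here are hypothetical examples of the pattern.

```python
# Discrete operations (masking, standardizing) kept as separate,
# composable steps so each can evolve independently.
def mask(row):
    # Redact sensitive fields before data leaves the trusted boundary.
    return {**row, "email": "***@***"}

def standardize(row):
    # Normalize formatting so consumers see consistent values.
    return {**row, "name": row["name"].strip().title()}

# The pipeline is just an ordered list; swap or extend steps
# without modifying the steps themselves.
PIPELINE = [standardize, mask]

def run(row, steps=PIPELINE):
    for step in steps:
        row = step(row)
    return row

out = run({"name": "  ada lovelace ", "email": "ada@example.com"})
```

Adding an archival step later means appending one function to the list, not rewriting the masking or standardization logic.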

Sample Data Product Ecosystem

By adhering to the principles outlined above, you can expect the following benefits from your data integration effort:


Supportability

Improved long-term supportability by ensuring a supply of the skills needed to maintain the solution after go-live.

Speed & Adaptability

Increased speed to market and adaptability given new features or changing requirements.


Trust in Data

Increased trust in data by reducing complexity and increasing transparency regarding transformations.

Kenway's Approach

Kenway offers a flexible and tailored implementation approach. We aim to put forth the best recommendations for each client by focusing on understanding their unique requirements while remaining technology agnostic. However, based on our experience across a wide array of data integration projects, we generally keep the following in mind:

Conceptualize data products with regard to storage vs. reporting

Reports should be decoupled from storage within the product framework. Consider consumption patterns for different business needs – data science operations vs. downstream applications.

Scope teams for the technical skills and the throughput needed

Core skills include Data Engineering, Data Modeling, and Cloud Architecture & Infrastructure. Business stakeholder and SME availability is often the constricting factor, so other efforts should be sized to suit.

Rationalize the skills needed in-house vs what can be outsourced

Pipeline development is often easily repeatable, which helps remove key-person dependencies. Data modeling with SME involvement can be cultivated in-house; it is often the limiting factor, so keep it top of mind.

Groom cross-functional requirements with DEV teams and the business

Build product roadmaps before project plans. Empower development teams to become more product-oriented rather than relying entirely on project management processes to tie requirements together.
