Introduction: The data dependency pressure cooker
Finance teams in modern enterprises are under more pressure than ever. From hitting financial targets and forecasting performance to assessing and managing financial risk and evaluating the performance of departments, products, and services, finance departments are expected to help drive their organization forward. And in an era of belt-tightening and cutbacks, getting the numbers right isn’t just important—it’s imperative.
The key to driving an organization forward is data, and finance teams have a mountain of it. But meeting expectations can be extremely difficult. The modern data ecosystem facing finance teams is hugely complex and constantly evolving as data architectures become increasingly fluid. Building pipelines that connect financial data from source to destination requires rules to integrate, transform, and process data across multiple environments. Further hindering teams’ ability to build pipelines, the data supply chain itself is not fixed: it can shift, especially as new technologies, such as cloud applications, are added. All of these factors have made building resilient data pipelines considerably harder.
Adding to the complexity, building and managing data pipelines has traditionally sat with the IT team. Those in finance trying to complete end-of-year reports or compile EBITDA metrics can struggle to get the data they need when they need it. Up against deadlines and the pressure to inform investors and the markets, any delay in getting data could see finance teams forced to act independently and create pipelines “off grid.” These unsanctioned and untracked pipelines create serious governance challenges for organizations.
To lift the lid on the hidden problem of data integration friction and find out what it means for today’s finance data leaders and practitioners, we surveyed data decision-makers and practitioners working in finance teams at large enterprises in the US, UK, Germany, France, Spain, Italy, and Australia. In this report, we explore the challenges around data integration friction and shine a light on the problems facing finance teams around data access and usability.
Demand for data is outstripping supply
Reliable data is the foundation of any good decision. Access to data has become a critical part of the digital and strategic lifeblood of today’s organizations. This is especially true of the modern finance team. After a technological revolution in finance, these teams potentially have access to, and are generating, more data than ever from financial accounting tools, enterprise resource planning (ERP) systems, payment processing systems, and customer relationship management tools. It’s from this data that they are expected to forecast, evaluate, and accurately report on financial performance.
It’s perhaps unsurprising that accounting and finance teams are among the biggest data consumers, with 44% of all data leaders and practitioners getting weekly requests from finance teams.
These findings show the scale of data demand in organizations. There is a classic supply and demand problem, and today’s finance teams have to compete for attention with other line of business (LOB) teams. Another driver of data demand is digital transformation: more than two-thirds (69%) of finance data professionals say accelerating digital transformation priorities have created major data supply chain challenges.
The problem of meeting demand for data is compounded by the complexity of enterprise ecosystems. Data engineers must take many steps to connect, transform, and process data and build pipelines that meet the individual needs of different departments. But when data is siloed in multiple systems with inconsistent formats, creating bespoke data pipelines at scale is a huge challenge. In fact, 70% of finance data leaders and practitioners say this data complexity and friction can have a crippling impact on digital transformation.
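To make the format problem concrete, here is a minimal, hypothetical Python sketch. The two source formats are invented for illustration (an ERP export with ISO dates and cent amounts, and a payments export with US dates and dollar strings); neither comes from a specific product. It shows the kind of bespoke normalization every additional silo demands before its records can share one pipeline.

```python
from datetime import datetime

# Invented example rows; neither format comes from a specific product.
erp_row = {"posted": "2024-03-31", "amount_cents": "125000"}
payments_row = {"date": "03/31/2024", "amount": "$1,250.00"}

def normalize_erp(row):
    """Map the ERP export onto a canonical record shape."""
    return {
        "date": datetime.strptime(row["posted"], "%Y-%m-%d").date(),
        "amount_usd": int(row["amount_cents"]) / 100,
    }

def normalize_payments(row):
    """Map the payments export onto the same canonical shape."""
    return {
        "date": datetime.strptime(row["date"], "%m/%d/%Y").date(),
        "amount_usd": float(row["amount"].replace("$", "").replace(",", "")),
    }

# Downstream reports see one record shape, however many silos feed the pipeline.
records = [normalize_erp(erp_row), normalize_payments(payments_row)]
print(records)
```

Every new source adds another translation layer like this, which is precisely the friction respondents describe.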
These problems create a disconnect between data-hungry finance teams’ expectations and what can realistically be delivered. Finance end users want and expect data to be available for their work at the drop of a hat. But non-experts don’t appreciate the scale of the challenges facing data leaders. More than six-in-ten (63%) finance data professionals are frustrated that non-data experts think you can click a button and data magically appears.
It’s extremely difficult to fulfill requests for data against the level of complexity in the modern data ecosystem and with scarce resources. Finance data leaders are under pressure to deliver data and support organizational objectives. However, the skilled employees needed to build resilient data pipelines are in short supply. Data will continue to grow in volume, complexity, and urgency. If the data integration friction problem is left unresolved, the inability to empower finance with the data they need could have a disastrous impact on profitability.
Data chaos is holding finance teams back
To reduce and mitigate data integration friction, finance data leaders and practitioners have sought to adopt “enabling technologies” to improve agility. This often means adopting SaaS tools or moving to the cloud. But all this change can make keeping up with pipelines and infrastructure a near-impossible task and add to data integration difficulties.
As the variety of data that finance teams draw on across tools and departments proliferates, along with the tools they use to compile and deliver financial reports, a patchwork of siloed systems emerges. Whether legacy systems, point solutions, custom-built tools, or solutions from a cloud service provider, the result is a fragmented and chaotic data environment.
This chaos means that what should be a simple pipeline-building task can become a complex job requiring specialized and expensive skills. This research found that more than two-thirds (68%) of finance data professionals say data integration friction prevents them from delivering data at the speed the business requests. And nearly half (48%) say data integration friction is a “chronic problem” in their organization.
Several factors contribute to this friction. The issue finance teams cited most was the variety of data formats (43%), followed by the speed at which data is created (36%) and infrastructure complexity spanning hybrid and multi-cloud environments (32%).
A further 79% of finance data professionals say data in legacy systems, such as mainframes or on-premises databases, is hard to access for cloud analytics, so they often “don’t bother” to include it when creating data pipelines. This presents a considerable risk for finance teams, as many legacy systems contain decades of financial, customer, transaction, and tax data.
Finance teams can’t afford to ignore this legacy data. It’s critical for forecasting performance, identifying potential financial risks, and benchmarking against past financial results. This kind of data could become a finance team’s “secret sauce” that gives them accuracy and confidence in results. Getting it is hard, but it could be the key to unlocking insights that improve financial performance.
If finance teams can’t be confident that all data sources have been collated, they cannot fully trust their data and its insights. This means the organization could deliver inaccurate financial statements or make poor decisions based on incorrect and incomplete information. Data must come without caveats. So however chaotic the data ecosystem may be, enterprises must enable finance data professionals to run dynamic data pipelines to unlock insights that drive value.
The human impact of data integration friction: Stop apologizing for your data
Having trust in the data you are using is essential. Forecasting P&L, compiling tax returns, processing expenses, reporting financial results to investors...imagine completing this work with absolute confidence in the data.
No one wants to have to apologize for their data. Or justify why the last quarter’s financial figures aren’t included or add caveats to their datasets about why yearly budgets aren’t quite up to date.
It makes recommendations less powerful, risks providing inaccurate or incomplete reports or insights, and can severely damage a finance team’s reputation with the C-Suite.
Yesterday’s data is not the same as today’s. Finance teams need resilient data pipelines that automatically ingest the most up-to-date information and serve it on demand, wherever it is needed. This gives finance teams complete confidence in the trustworthiness and accuracy of data, so they can stop apologizing for it.
Beyond friction: Cracks in the pipelines
The chaos facing finance teams has created an urgent need for resilient data pipelines. But the state of modern data ecosystems has made this hugely challenging. For many finance data professionals, creating pipelines is labor intensive and requires expert skills to hand-code one-off solutions that can’t be templatized. These manually created pipelines are not built to resist unexpected shifts in the environment, resulting in regular breakages.
Our research found that 48% of finance data leaders and practitioners admit their pipelines are too brittle and crack at the first bump in the road. This is higher than the overall average (39%), likely because finance teams are constantly up against deadlines to deliver financial data and reports. If data isn’t delivered quickly enough, finance teams are often forced to act independently to get what they need.
Another reason for brittle pipelines is the volume of data finance teams need. They require real-time data on everything from invoices and purchase orders to interest rates and tax codes. It can be extremely difficult to keep up with all the data sources and the many pipelines needed to deliver it. This means it can be challenging to fix breakages, with 44% of finance data professionals saying they struggle to fix data pipelines in motion.
Pipelines break when they are not resilient to changes in the environment. The most cited reasons for breakage by finance data professionals include infrastructure changes such as moving to a new cloud (44%), bugs and errors being introduced during a change (38%), and credentials changing or expiring (34%).
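As one illustration of the credentials failure mode, a pipeline step can be wrapped so an authentication error triggers a token refresh and a single retry instead of a hard break. This is a generic Python sketch: AuthError, fetch_token, and the step callable are all stand-ins for whatever a real source system provides, not a specific product’s API.

```python
class AuthError(Exception):
    """Stand-in for a source system's authentication failure."""

def fetch_token():
    # Placeholder: a real implementation would call your identity provider.
    return "fresh-token"

def run_with_refresh(step, token, max_retries=1):
    """Run a pipeline step; on auth failure, refresh the token and retry."""
    for attempt in range(max_retries + 1):
        try:
            return step(token)
        except AuthError:
            if attempt == max_retries:
                raise  # surface the break rather than failing silently
            token = fetch_token()

# Usage (hypothetical): run_with_refresh(lambda token: extract_invoices(token), current_token)
```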
A further 56% say their ability to tackle broken data pipelines lags behind other areas of data engineering. This is perhaps why 87% of respondents have experienced data pipeline breaks at least once a year, with 42% saying their pipelines break every week, and worryingly, 18% say they break at least once a day. This is higher than the average, with 36% of all data leaders and practitioners saying their pipelines break every week and 14% admitting they break daily.
It’s hardly surprising to see cloud high on this list. Finance teams have adopted a variety of cloud-based software, including accounting tools such as Intuit QuickBooks, expense management tools such as Concur, business intelligence tools such as IBM Cognos, and financial planning tools such as Oracle Hyperion. However, migration to the cloud can cause problems if a clear data strategy for multi- and hybrid-cloud environments is absent. Many opt for a basic “lift-and-shift” approach, which then requires extensive effort to orchestrate systems, rework them, and connect them to data pipelines.
Without laying these data foundations, every change to storing and using data increases the risk of disrupting the data flow to finance teams. This data drift—the unexpected changes to data structure, semantics, and infrastructure—can break processes and corrupt data, creating a disadvantage for any organization that does not get the basics right.
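One basic defense against structural drift, sketched here in Python under invented assumptions (the expected column set and the sample batch are illustrative only), is to compare each incoming batch against the schema the pipeline expects and flag drift at the boundary, before it silently corrupts downstream reports.

```python
EXPECTED_COLUMNS = {"invoice_id", "amount_usd", "posted_date"}  # illustrative

def check_schema_drift(batch):
    """Flag added or missing columns before a batch enters the pipeline."""
    seen = set(batch[0]) if batch else set()
    missing = EXPECTED_COLUMNS - seen
    added = seen - EXPECTED_COLUMNS
    if missing or added:
        raise ValueError(f"Schema drift: missing={missing}, added={added}")
    return batch

# A renamed column ("posted" instead of "posted_date") is caught here
# rather than surfacing later as a corrupted report.
try:
    check_schema_drift([{"invoice_id": 1, "amount_usd": 99.0, "posted": "2024-03-31"}])
except ValueError as err:
    print(err)
```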
Figure 4. The most common reasons for data pipelines breaking
The true cost of data integration friction
Being unable to build resilient data pipelines can have big repercussions for finance data leaders and practitioners. Given the volume of breakages they experience, time spent firefighting swiftly adds up.
When looking at data leaders and professionals from across all organizations, the average data engineer spends 31% of their time troubleshooting and recoding broken data pipelines. When you consider that businesses, on average, spend $6.13 million annually on data experts, repairing data pipelines equates to $1.9 million of their time per year.
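The arithmetic behind that figure is simple enough to check directly:

```python
annual_spend_on_data_experts = 6.13e6  # $6.13 million per year (survey average)
time_spent_firefighting = 0.31         # 31% of data engineers' time

cost = annual_spend_on_data_experts * time_spent_firefighting
print(f"${cost:,.0f} per year")  # $1,900,300 -- roughly the $1.9 million cited
```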
Figure 5. The percentage of data engineers’ time spent troubleshooting and recoding broken data pipelines
The challenge of governance in the Data Wild West
Finance teams must constantly meet deadlines, whether submitting expenses, processing salaries, filing tax returns, or compiling end-of-quarter or end-of-year financial reports. The pressure to satisfy these deadlines can mean finance teams are forced to act independently to get data quickly. But this can create significant governance and data risks.
More than half (54%) of finance data leaders and practitioners say modern infrastructures that span on-premises and multiple cloud environments, combined with data decentralization between line of business teams, have created a data “wild west.” Further, 61% say this fragmentation in the data supply chain has made it harder to understand, govern, and manage data in their organization.
Connecting the data sources that finance teams draw on—such as financial statements, transaction data, budgets, or operational data—is hugely complex. Financial data can come from multiple teams or departments, as well as external vendors or customers. Integration is a challenge because formats and structures can differ, and the result can be incorrect or incomplete financial data, which can have legal or financial repercussions.
The variety of sources and the nature of the data also mean the need for security and governance is very high, and the risks of losing that data in a breach are severe. The finance data professionals in our research agree, with 91% saying they want consistent security measures to protect data as it flows between on-premises and cloud sources. This is higher than the figure for data leaders and practitioners across the organization as a whole (81%). Without consistency, visibility and control are lost, significantly increasing the risk of data breaches and the fines that follow.
Five Critical Pillars of Data Governance
Good data governance requires a well-defined strategy. Here are five fundamentals to consider when developing your data governance strategy for finance and accounting.
- Identify your data: To design an effective strategy, you need to know your entire data landscape inside out, including types, structures, movements, locations, and points of data transformation.
- Establish a governance body: The data governance body is a central control point around which all teams and departments can agree on consistent policies that align with business goals.
- Ensure “privacy by design”: A privacy-first approach is central to good data governance. It involves collecting only necessary data, masking personally identifiable information (PII) or private corporate data, and using data only for intended purposes (a minimal masking sketch follows this list).
- Manage metadata: Properly managing metadata makes it easier to track data changes, control data access, and understand relationships between data to fulfill governance requirements.
- Manage data quality: An effective data governance strategy will establish consistent criteria and scoring to ensure high-quality and reliable data for use in analytics and AI/ML applications.
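To ground the “privacy by design” pillar, here is a minimal, hypothetical Python sketch of field-level masking. The PII field list and the truncated-hash rule are invented for illustration; a production pipeline would typically use a keyed hash or a tokenization service instead of a bare digest.

```python
import hashlib

PII_FIELDS = {"ssn", "email", "account_number"}  # illustrative field list

def mask_record(record):
    """Replace PII values with a stable one-way digest so records remain
    joinable across datasets without exposing the underlying values."""
    return {
        field: (hashlib.sha256(str(value).encode()).hexdigest()[:12]
                if field in PII_FIELDS else value)
        for field, value in record.items()
    }

# The email is replaced by a digest; invoice_total passes through unchanged.
print(mask_record({"email": "cfo@example.com", "invoice_total": 1250.00}))
```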
Once you’ve designed your strategy following these pillars, you are ready to implement it. Check out this blog to learn how.