Data Integration Key to Achieving Needed Breakthroughs
When peers from other industries ask why data hasn’t driven comparable breakthroughs in healthcare, the answer is usually hiding in plain sight: clinical and administrative data still live in different systems and “speak” different digital languages. Integrating them is a prerequisite for the next wave of progress—better care, better patient experience, and better operational performance at the same time. Fortunately, there are now multiple mandates spurring this integration, from CMS requiring FHIR APIs, to NCQA embracing digital quality measures, to enforcement of prohibitions against information blocking. Health plans and providers, however, must guard against swapping these two longstanding data siloes for several new ones.
Unintended Consequences of Scoping Integration Initiatives by Use Case
Leading organizations are now making great progress in integrating data from disparate sources to streamline prior authorization, close care gaps, and improve coding accuracy. Unfortunately, these initiatives risk creating a new set of data siloes. They are often run by separate workgroups, funded by separate budgets, and implemented on separate timelines. The end result: data integrations optimized for a specific purpose that underperform when asked to support other use cases.
Building One Source of Digital Truth for Multiple Applications
Best-practice organizations are instead investing in a centralized data foundation that supports multiple use cases. This approach minimizes redundant technology and storage and, more important, conflicting information on an individual patient or health plan member. Must-haves for such an all-purpose resource include:
- Ingests data in a wide range of formats and via multiple transfer protocols and modalities. While FHIR is gaining traction throughout healthcare, other formats and standards continue to dominate legacy systems. Likewise, centralized systems must ingest data across transport modes (push and pull), cadences (real-time and batch), and volumes (single-record and bulk). Ideally, your system will also evaluate ingested data for missing elements so feeds can be expanded to capture them.
- Parses, indexes, deduplicates, harmonizes, and normalizes ingested data. These are all essential components of the data transformation process in healthcare. Without all of them, discrete data elements can’t be analyzed at the patient or member level and pushed into relevant systems supporting operations and care management.
- Stores data in relational tables and integrates with preferred end-user interfaces. You should not expect other functions and departments to change their workflows. To ensure adoption, your centralized system must make relevant data readily available for use by existing departmental tools, systems, and analytics engines.
- Scales with no erosion in performance or security. In particular, your solution must continue to ingest and output data with sufficient speed to support frontline operations as the number and complexity of data elements expand. It must also accurately manage consent and access for an ever-increasing number of individuals.
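The harmonization and deduplication steps described above can be sketched in miniature. The Python fragment below is illustrative only, not a description of any particular product: the legacy column names and the exact-match rule are simplifying assumptions (production systems use probabilistic or referential patient matching). It normalizes a FHIR R4 Patient resource and a legacy flat record into one common shape, then deduplicates on a simple match key:

```python
import json

def from_fhir(resource: dict) -> dict:
    """Normalize a FHIR R4 Patient resource to a common internal shape."""
    name = resource.get("name", [{}])[0]
    return {
        "family": (name.get("family") or "").strip().upper(),
        "given": (name.get("given") or [""])[0].strip().upper(),
        "birth_date": resource.get("birthDate", ""),
        "source": "fhir",
    }

def from_legacy(row: dict) -> dict:
    """Normalize a legacy flat record (column names are assumptions)."""
    return {
        "family": row["LAST_NAME"].strip().upper(),
        "given": row["FIRST_NAME"].strip().upper(),
        "birth_date": row["DOB"],  # assumed already ISO-8601
        "source": "legacy",
    }

def dedupe(records: list[dict]) -> list[dict]:
    """Keep one record per (family, given, birth_date) match key."""
    seen: dict[tuple, dict] = {}
    for rec in records:
        key = (rec["family"], rec["given"], rec["birth_date"])
        seen.setdefault(key, rec)
    return list(seen.values())

fhir_patient = json.loads(
    '{"resourceType": "Patient",'
    ' "name": [{"family": "Smith", "given": ["Ana"]}],'
    ' "birthDate": "1980-04-02"}'
)
legacy_row = {"LAST_NAME": " smith ", "FIRST_NAME": "Ana", "DOB": "1980-04-02"}

records = [from_fhir(fhir_patient), from_legacy(legacy_row)]
unified = dedupe(records)
print(len(records), "ingested ->", len(unified), "after dedup")
# 2 ingested -> 1 after dedup
```

The same patient arriving via a FHIR feed and a legacy extract collapses to a single record, which is what prevents the "conflicting information on an individual patient or member" problem noted above.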
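The third must-have, relational storage that downstream tools can query directly, can likewise be sketched with Python's built-in sqlite3 module. The table name, columns, and sample values are hypothetical; the point is that once harmonized data lands in relational tables, existing departmental analytics engines need nothing more exotic than SQL:

```python
import sqlite3

# In-memory stand-in for the centralized relational store; schema is illustrative.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE member_observation (
           member_id   TEXT NOT NULL,
           code        TEXT NOT NULL,   -- e.g., a LOINC code
           value       REAL,
           observed_on TEXT             -- ISO-8601 date
       )"""
)
conn.executemany(
    "INSERT INTO member_observation VALUES (?, ?, ?, ?)",
    [
        ("M001", "4548-4", 6.9, "2024-01-15"),  # 4548-4 = hemoglobin A1c
        ("M001", "4548-4", 7.4, "2024-07-15"),
        ("M002", "4548-4", 5.6, "2024-03-02"),
    ],
)

# A downstream care-gap engine or BI dashboard queries with plain SQL:
rows = conn.execute(
    """SELECT member_id, MAX(value) AS worst_a1c
       FROM member_observation
       WHERE code = '4548-4'
       GROUP BY member_id
       ORDER BY member_id"""
).fetchall()
print(rows)  # [('M001', 7.4), ('M002', 5.6)]
```

Because the interface is ordinary SQL over ordinary tables, other departments keep their existing workflows and tools, which is exactly the adoption requirement stated above.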
Ensuring your organization doesn't swap two longstanding data siloes for several new ones, however, is not just a technical matter. InterSystems also encourages customers to cross-pollinate the siloed departments and groups charged with advancing separate goals, all of which would benefit from drawing upon one common source of digital truth.
To learn more about how InterSystems helps health plans and payers unlock the power of integrated data, visit InterSystems Solutions for Plans and Payers.
At the Becker's 5th Annual Fall Payer Issues Roundtable, taking place November 2–3 in Chicago, payer executives and healthcare leaders will come together to discuss value-based care, regulatory changes, cost management strategies and innovations shaping the future of payer-provider collaboration. Apply for complimentary registration now.
