Transparency in Coverage Implementation

Here’s How You Can Implement Transparency in Coverage

In the previous blogs, we defined the Transparency in Coverage rule’s compliance requirements and their corresponding solutions. Now that the expectations of the solution are set, let us look at the implementation.

Let’s cut straight to it.

The implementation approach will depend on various technological factors such as the cloud platform, identification of the right source system for each data element, data quality, and more. Without further ado, let us take a close look at the various dimensions required to implement a solution that complies with Transparency in Coverage.

Data Identification 

Health plans have to precisely identify the data elements required to fulfill the compliance requirements, be it for beneficiary disclosure or public disclosure. The solution built to address these requirements needs to correctly identify the data elements for each component of the rule, such as the self-service tool, the hard copy form, and the public disclosure files. Most of this information is spread across multiple tables with unique identifiers.

The data mapping process will narrow down the required tables and their respective columns. It will also identify the tables and columns needed to calculate the desired output.
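The output of such a mapping exercise can be captured in a simple structure. The sketch below is purely illustrative: the component names, table names, and column names are hypothetical placeholders, not fields from any real claims system.

```python
# Hypothetical mapping of rule components to source tables and columns.
# Every table and column name here is illustrative only.
DATA_MAP = {
    "self_service_tool": {
        "claims_history": ["claim_id", "service_code", "allowed_amount"],
        "eligibility": ["member_id", "plan_id", "coverage_status"],
    },
    "public_disclosure": {
        "negotiated_rates": ["provider_npi", "provider_tin", "billing_code", "negotiated_rate"],
        "drug_pricing": ["ndc", "historical_net_price"],
    },
}

def columns_for(component: str) -> list:
    """Flatten the table-to-column map for one rule component."""
    return [col for cols in DATA_MAP[component].values() for col in cols]
```

A structure like this doubles as living documentation of the mapping, and downstream acquisition jobs can iterate over it instead of hard-coding column lists.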

Based on the components, the solution must touch upon historical claims data, current eligibility status, a member’s health plans, historical drug pricing details, negotiation details, and providers’ credentialing details to name a few.

Data Acquisition 

Once the identification of the data elements is complete, the next step is to consolidate them into a single output. Disparate systems with different data formats and hierarchical structures will pose a challenge during data acquisition. The goal is a single source of truth, irrespective of the various input data sources and formats.

Data quality is another dimension that needs to be addressed properly. The logic laid out for data acquisition will have to handle missing values, heterogeneous data types, and other data quality issues.

What if the allowed amount or place of service is not present at all for a particular claim? Should we include this claim in the final output or not? That’s something to think about.
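Whichever way that business decision goes, the acquisition logic needs an explicit rule. Here is a minimal sketch of such a filter; which fields count as mandatory, and the field names themselves, are assumptions for illustration.

```python
def is_reportable(claim: dict) -> bool:
    """Exclude claims missing fields the rule's output depends on.
    Which fields are mandatory is a business decision; these two are examples."""
    required = ("allowed_amount", "place_of_service")
    return all(claim.get(field) not in (None, "") for field in required)

# Illustrative claim records, one of them missing the allowed amount.
claims = [
    {"claim_id": "C1", "allowed_amount": 120.50, "place_of_service": "11"},
    {"claim_id": "C2", "allowed_amount": None, "place_of_service": "11"},
]
reportable = [c for c in claims if is_reportable(c)]
```

Keeping the rule in one named function also gives auditors a single place to verify how incomplete claims were handled.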

Data Models 

Data models define the relationships among data elements, along with naming conventions, attributes, types, etc. The model is the final staging area for the data before it is consumed by the public file generator, the self-service tool, and the hard copy form. The definition of hierarchy and normalization will play a major role in this step.

The data model must align each PCP’s NPI with its TIN for every service code, service description, amount, place of service, and health plan. This will ease conversion of the files from the existing formats to JSON or XML.
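To make the NPI–TIN alignment concrete, here is a simplified sketch of one nested record, loosely modeled on the shape of the CMS in-network machine-readable files. The field names and values are illustrative, not the official schema, and a real file would carry many such records plus header metadata.

```python
import json

# One illustrative in-network record: a billing code tied to the NPI/TIN
# of the provider group that negotiated the rate. Values are made up.
record = {
    "billing_code": "99213",
    "description": "Office/outpatient visit, established patient",
    "negotiated_rates": [
        {
            "provider_groups": [{"npi": ["1234567890"], "tin": "12-3456789"}],
            "negotiated_price": {"negotiated_rate": 85.00, "service_code": ["11"]},
        }
    ],
}

print(json.dumps(record, indent=2))
```

Because the hierarchy (code → rates → provider groups) is already expressed in the model, serializing to JSON is a direct dump rather than a restructuring step.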

The volume of data can range from thousands to millions of claim records, depending on the member population and claim volume of the health plan. So, when defining data models, the solution must account for the frequency of updates, the volume, and the variation in records.

Technology

The development of the solution depends on a combination of technical parameters such as cloud vs. on-premises infrastructure, scalability, and maintenance requirements, among others. If all the data is already in the cloud, ETL services or scripts can transform the information into the public files. Public file generation, as mentioned previously, happens once a month. The self-service tool and the hard copy form, however, must surface all the required information in a very short time.

AWS Glue and Azure Data Factory provide the ETL functionality to load and transform the data into the required format.
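Whatever the tool, the core transform is the same: group flat claim-level rows into the nested records the public file expects. The tool-agnostic sketch below shows that step in plain Python; column names and values are illustrative assumptions, and a Glue or Data Factory job would apply the equivalent logic at scale.

```python
from collections import defaultdict

# Illustrative flat rows, as they might land after extraction.
rows = [
    {"billing_code": "99213", "npi": "1234567890", "rate": 85.00},
    {"billing_code": "99213", "npi": "9876543210", "rate": 92.00},
    {"billing_code": "99214", "npi": "1234567890", "rate": 130.00},
]

# Transform: group rows by billing code, nesting the per-provider rates.
grouped = defaultdict(list)
for row in rows:
    grouped[row["billing_code"]].append({"npi": row["npi"], "rate": row["rate"]})

# Load target: one nested record per billing code, ready for JSON output.
public_file = [{"billing_code": code, "rates": rates} for code, rates in grouped.items()]
```

In a managed ETL service the grouping would typically be a join-and-aggregate over partitioned data rather than an in-memory loop, but the output shape is the same.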

Since the self-service tool is an enhancement of the member portal, development can proceed in whatever technology is already in use, such as .NET, PHP, Angular, or React. Based on the volume of data and the infrastructure, the right tool for JSON or XML generation can be identified and incorporated into the existing member portal.

The solution must also account for the archival of public files as well as an audit trail of members’ requests. Given the two-business-day window to dispatch the information requested via the hard copy form, the solution must also capture timestamps and alert users when there is a risk of non-compliance.
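The deadline check itself is simple to sketch. The function names below are hypothetical, and a production version would also skip plan holidays, not just weekends.

```python
from datetime import date, timedelta

def due_date(requested: date, business_days: int = 2) -> date:
    """Add business days (Mon-Fri) to the request date.
    Holidays are ignored here for simplicity."""
    current = requested
    added = 0
    while added < business_days:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 .. Friday=4
            added += 1
    return current

def at_risk(requested: date, today: date) -> bool:
    """Flag a hard-copy request whose two-business-day window has lapsed."""
    return today > due_date(requested)
```

Running such a check on a schedule against the audit trail of open requests is one straightforward way to raise the non-compliance alerts mentioned above.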

We Can Help You Out

One way to ensure a smooth implementation of compliance solutions is to partner with healthcare IT experts. Teams with expertise and solution experience in data management, cloud computing, ETL, etc. are not easy to come by, especially when CMS keeps payer organizations on their toes. Let us help you navigate the required solutions and upgrade your operational workflows to comply with the latest regulations.

Connect with our compliance experts at info@nalashaa.com to implement the TinCer approach and tick those compliance checkboxes well before the due dates. The sooner, the better.

Pankaj Kundu

Pankaj has vast experience ranging from claims processing engines to the application of machine learning algorithms in US Healthcare. As a Healthcare Business Analyst, he is passionate about addressing healthcare data and process challenges and ideating solutions for clients.