The last mile of health care: Patient behavior change

Though the US Affordable Care Act has extended health insurance to millions of previously uninsured consumers, recent research found that newly gained coverage, by itself, did not improve patients’ health outcomes:

"The study seems to indicate that greater access to health insurance, in and of itself is not enough to improve outcomes for patients with chronic disease," said lead author Tomasz P. Stryjewski of Massachusetts General Hospital.

Expanding health insurance is only part of the answer. The other part is supporting consumers and patients in their attempts to embrace sustained behavior change over the long term. The Centers for Disease Control and Prevention (CDC) reported that 75 percent of health care costs go toward the treatment of chronic disease. Even with health insurance, patients struggle to manage their health and wellness.

Changes in the US health care system are underway that further incentivize delivering high-quality, coordinated care. What’s missing is a clear path to accomplishing that goal. Fortunately, the Centers for Medicare & Medicaid Services (CMS) has assessed the need for additional patient support outside the clinical setting and is now implementing the Chronic Care Management (CCM) Services program, scheduled to go into effect in 2015.

This program will allow providers to bill CMS $42.60 per month per patient for providing 20 minutes or more of chronic care management services to Medicare patients with at least two chronic conditions expected to last at least 12 months, or until the death of the patient. The conditions must place the patient at significant risk of death, acute exacerbation/decompensation or functional decline. Because this affords providers a direct mechanism to bill for patient interactions outside the clinical setting, they must now find approaches to supporting these patients in that setting. One way to accomplish this is to use technology and automation to engage and support patients at scale – one aspect of population health management.
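For a rough sense of the economics, the billing rule above can be sketched in a few lines of Python. This is a deliberate simplification – the field names and the eligibility check are illustrative, not CMS’s actual adjudication logic:

```python
def ccm_eligible(patient):
    """Simplified eligibility check: two or more chronic conditions
    expected to last at least 12 months (illustrative only)."""
    return (len(patient["chronic_conditions"]) >= 2
            and patient["expected_duration_months"] >= 12)

def monthly_ccm_billing(patients, rate=42.60, min_minutes=20):
    """Total billable amount for one month: eligible patients who
    received at least 20 minutes of chronic care management."""
    return sum(rate for p in patients
               if ccm_eligible(p) and p["ccm_minutes"] >= min_minutes)

patients = [
    {"chronic_conditions": ["diabetes", "CHF"],
     "expected_duration_months": 24, "ccm_minutes": 25},
    {"chronic_conditions": ["asthma"],
     "expected_duration_months": 24, "ccm_minutes": 30},
]
total = monthly_ccm_billing(patients)  # only the first patient qualifies
```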

By gathering and analyzing all types of patient data collected in and outside the clinical setting – with automation where appropriate – providers can create a 360-degree view of the patient in a cost-effective manner. Providers can then use this view to risk-stratify the patient populations to understand risk at both the cohort and individual patient level. Armed with this information, providers can identify specific types of interventions for specific consumers and patients and deliver the appropriate care.
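As a minimal sketch of what cohort-level risk stratification might look like – with an invented scoring rule and hypothetical field names, not an actual clinical model:

```python
def risk_score(patient):
    """Invented scoring rule: weight chronic conditions and recent ER visits."""
    return 2 * patient["chronic_conditions"] + patient["er_visits_last_year"]

def stratify(patients, high=6, medium=3):
    """Bucket patients into risk cohorts by score thresholds."""
    cohorts = {"high": [], "medium": [], "low": []}
    for p in patients:
        score = risk_score(p)
        if score >= high:
            cohorts["high"].append(p["id"])
        elif score >= medium:
            cohorts["medium"].append(p["id"])
        else:
            cohorts["low"].append(p["id"])
    return cohorts

patients = [
    {"id": "A", "chronic_conditions": 3, "er_visits_last_year": 2},
    {"id": "B", "chronic_conditions": 1, "er_visits_last_year": 1},
    {"id": "C", "chronic_conditions": 0, "er_visits_last_year": 0},
]
cohorts = stratify(patients)
```

In practice the score would come from a statistical model trained on the aggregated data, but the stratification step itself is this simple.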

In addition to identifying types of interventions, how can providers make better decisions about allocating resources for support and intervention tactics? Not every patient requires additional support to manage his or her health. Determining which patients need support – and how much support would be effective – is a problem that advanced analytics can solve by drawing on the tremendous resource of data collected about patients. Finally, monitoring patients’ behavior over time to understand who responded to which interventions allows for more informed decisions through response profiling. This all needs to take place “at scale.”
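Response profiling of the kind described above can be sketched as simple bookkeeping over an intervention log. The log format and the intervention names here are hypothetical:

```python
from collections import defaultdict

# Hypothetical intervention log: (cohort, intervention, responded)
log = [
    ("high", "nurse_call", True), ("high", "nurse_call", True),
    ("high", "sms_reminder", False), ("high", "sms_reminder", True),
    ("low", "sms_reminder", True), ("low", "nurse_call", False),
]

def response_rates(log):
    """Response rate per (cohort, intervention) pair."""
    totals, hits = defaultdict(int), defaultdict(int)
    for cohort, intervention, responded in log:
        key = (cohort, intervention)
        totals[key] += 1
        hits[key] += responded  # True counts as 1
    return {k: hits[k] / totals[k] for k in totals}

rates = response_rates(log)
```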

One population health management company focused on collecting a broad spectrum of patient data, aggregating it, and risk-stratifying the patient population is Geneia. It uses SAS® to power its Theon platform, which integrates disparate data sources – EHR data, claims data, psycho-social data, activity tracker data and device data – to create a comprehensive view of the patient both inside and outside the clinical setting. With this big data asset, the CareModeler module can support risk stratification of the patient population to allocate resources to close care gaps and/or support patients’ behavior change outside the clinical setting.

Population health management organizations are using automated data management and advanced analytics to identify the patients who have care gaps and would benefit most from additional support. The automation and advanced analytics also enable targeted outreach to specific patients at the optimal time, given each patient’s status in managing his or her disease, based on data entering the patient’s records. Patients desperately need ongoing support outside the clinical setting to sustain behavior change in managing their health. And providers, at least for a portion of their patient population, now have solutions available and a mechanism to bill for care management outside the clinical setting.

So, how will all this be put together? Analytics will be key in identifying high-risk patients, allocating resources most efficiently, and aligning the various care management services and follow-up with the individual patients who will be most receptive. Technology and automation will pave the way for these services to be delivered at scale – exactly when they will deliver the most benefit – and for capturing the critical data surrounding each interaction for analysis and refinement of future care management services. We now have the incentives, health analytics, technology and automation to cover the last mile into the patient’s home and workplace in support of sustained behavior change.


The art of it all

When I see my doctor, I try to keep one thought in mind: He is the doctor and I am not. The reason I’m seeking his medical advice (other than my wife or prescription laws) is that arguably my doctor knows much more about biology, physiology and medical practice than I do. I try to remind myself that my self-diagnosed condition that my colleague Dr. Google and I came up with isn’t founded on the same kind of knowledge and experience that my doctor has.

Now, this doesn’t mean that I don’t get it right sometimes or that my research isn’t of value; it may help me to think about symptoms or qualities I wouldn’t have otherwise considered. But in the end, my doctor is best able to help me when I give him as much information as I possibly can, without filtering out what I think isn’t relevant. Yes, it’s allergy season, and yes, that means I sneeze more often than usual, but that doesn’t mean I should ignore the sneezing when talking to my doctor – it can be an important bit of information.

Right now, I would guess that any clinician reading this would be agreeing wholeheartedly and grumbling about the last patient who came in and told the doctor that they needed the purple pill. To this same clinician, I challenge you to ask whether you give the same consideration to your analytical colleagues. Too often we are simply dictated to regarding the appropriate methods, data or considerations to make. I’m not a cardiologist and don’t pretend to be; yes, I want all the input you can give me to help supplement what I do, but the more you dictate and the less you discuss, the more my hands are tied.

A comment I’ve found myself making in the past (sometimes out loud, but often just to myself as it is a tad snarky) is that if all of your suppositions were correct, you wouldn’t be talking to me. If you truly knew the right data, transformations, methods and models, then these problems would have been solved long ago. I know methods that you don’t, I know pitfalls you are unaware of; to put it simply, I know my craft just as you know yours.

Analytics, like medicine, when done as it should be, is as much an art as it is a science. I have my models and distributions just as you have your tests and diagnostics. Knowing which to use and what to look for in the output isn’t always simple. Two tests may answer the same question, but in different ways and with different assumptions. The beauty is in the subtlety: I may only hear a heartbeat where you hear an arrhythmia; and you may see a histogram where I see a trend.


Data transparency benefits researchers and patients

According to the American Cancer Society, approximately 1 male in 7 will be diagnosed with prostate cancer during his lifetime. Therefore, increasing our medical knowledge about prostate cancer is very important to society. At the recent Clinical Trial Data Transparency Forum, Stephen J. Freedland, MD from the Duke University School of Medicine shared the value of access to clinical trial data for advancing medical knowledge about prostate cancer.

Good news for medical research

Many leaders in the life sciences industry are realizing that greater access to patient-level clinical study data is a good thing – good for science, good for business and good for humanity. In a very short timeframe, many biopharmaceutical companies have elected to make clinical trial data and supporting documents available to researchers. And for researchers who are seeking to discover new medical insights, this is great news.

“It’s incredible data,” said Stephen Freedland, MD, Associate Professor of Surgery and Associate Professor in Pathology, Duke Urology, at the Duke University School of Medicine. He continued:

“You guys spend millions of dollars gathering the data; the data is clean, there are no holes, and it’s prospective versus retrospective. As an academician who doesn’t have patients, the best I can do is look at the patients who were studied before. So the richness of the data is incredible. From an academic perspective, it allows PhD access to patient data to test new ideas, get quick answers and validate prior studies. For example, we had seen that obesity is a risk factor for prostate cancer; when we saw that in other data sets, this adds another point of validation.”

Good news for patients – new medical knowledge

Freedland also provided a compelling example of the tremendous medical insights that can be discovered by using this data to create new science to aid patients and society. He was able to use clinical study data to test his previous research findings.

His research followed patients for multiple years, looking at baseline characteristics to see which patients developed prostate cancer of what degree. “I sought access to pharma data to validate my ideas,” said Freedland. “I didn’t really care about the original trial per se – did drug A do better than drug B – but rather viewed the data as a prospective cohort study for secondary analysis.”

Gaining access to clinical study data has enabled Dr. Freedland to produce many research papers examining risk factors for prostate cancer. The resulting evidence indicated that smoking is correlated with more aggressive prostate cancer, and obesity is also a significant risk factor. Those insights were revealed by making new use of pre-existing clinical study data. And this is just one example of the medical insights researchers can deliver through access to a well-managed clinical trial data sharing program.

More medical insights to come

The availability of this data for approved medical research will help researchers create new medical science and benefit patients in many ways in the next few years. A strong data transparency commitment from all sponsors of clinical research will help to make future medical insights a reality.


Provider consolidation and health analytics: A combination with the potential to bend the cost curve

The health care industry is converging in many ways. Health plans are buying health systems, and health systems are creating their own health plans. The lines between health plan and health system are as blurry as ever. It seems that many organizations are jockeying for position as we turn the corner in implementing health care reform. One development that’s getting some attention is the trend of provider consolidation.

Provider consolidation is the trend of health systems acquiring independent providers and bringing them into the larger system. It can pose a few issues for health plans, though. First, it can almost instantly result in an increase in cost for the same health care services, since health systems are usually able to charge a facility fee in their contracts with health plans. Second, as a health system gets larger, its bargaining power increases. This, too, can increase the amount it is reimbursed for its services.
Bend the cost curve
We have all seen the curve that illustrates the year-over-year increase in health care spend. It is a serious and legitimate concern. There is a collective, national mission to try to “bend the cost curve.” On the surface, provider consolidation may appear to directly conflict with the mission to bend the curve. However, it could enable the industry to improve health care quality while at the same time decreasing the total cost of care.

New reimbursement models are being implemented by both health plans and health systems. The industry overall has acknowledged the need to move away from traditional fee-for-service reimbursement and toward fee-for-value. However, the reality is that providers are often reluctant to enter into these value-based models because they have limited influence over, and visibility into, the quality of care the patient receives downstream. In a fragmented region with many unaffiliated providers, this challenge is very apparent, and value-based adoption appears to be at a very early stage. In more consolidated regions, however, there is more willingness and ability to successfully enter into value-based agreements that incentivize coordinated care and improved quality. This benefits the patient, the provider and the health plan.
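To make the fee-for-value idea concrete, here is a toy version of one common value-based arrangement, a shared-savings contract. The threshold and share rate are illustrative assumptions, not terms from any actual agreement:

```python
def shared_savings(benchmark, actual, share_rate=0.5, min_savings_rate=0.02):
    """Toy shared-savings payout: the provider keeps a share of savings
    versus the benchmark, but only if savings clear a minimum threshold
    (a common guard against random year-to-year variation)."""
    savings = benchmark - actual
    if savings < min_savings_rate * benchmark:
        return 0.0
    return share_rate * savings

# A provider group that beat a $1M benchmark by $100K keeps half the savings;
# one that saved only $5K falls below the 2% threshold and keeps nothing.
payout_big = shared_savings(1_000_000, 900_000)
payout_small = shared_savings(1_000_000, 995_000)
```

Real contracts layer quality gates on top – the payout is typically contingent on hitting quality targets, which is exactly where the analytics described above come in.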

An integrated delivery system alone is not enough to successfully transition to a value-based model. An advanced analytics capability is the next step in establishing the foundation to plan and operationalize the new model. A clear understanding of historical performance through the lens of a value-based model is an absolute necessity for both health plans and health systems. Organizations have acknowledged this, and are now arming themselves with the data, information and insights needed to understand their utilization and quality in ways never required before. As the wave of provider consolidation continues, more organizations will require an advanced analytical infrastructure to support their effort to improve quality. By combining coordinated care with advanced analytics, health plans and health systems will be able to successfully “bend the cost curve.”


Toward clarity on transparency: An evolution in thinking – and action – about sharing patient-level clinical trial data

Highlights from the fourth SAS Clinical Trial Data Transparency Forum

“Access to the underlying (patient level) data that are collected in clinical trials provides opportunities to conduct further research that can help advance medical science or improve patient care. This helps ensure the data provided by research participants are used to maximum effect in the creation of knowledge and understanding.”

That’s the word from ClinicalStudyDataRequest.com, the data sharing consortium that represents the founding sponsor, GlaxoSmithKline, and now nine other sponsors.

Sharing patient-level data for altruism and good? The ideal was not initially embraced.

“When I think back two years to when GSK first started talking about this, there was certainly some concern among staff,” said Paul McSorley of GlaxoSmithKline, a pioneer in the data-sharing movement. “’How much work is this going to be? What happens if or when researchers reach different conclusions from our own?’ We’re way past that now. GSK scientists recognize that these concerns, while real, can be mitigated – and because there is so much support for the potential value data sharing can bring to the medical community, we are very proud of what GSK is doing here.”

In the year since we hosted the first Clinical Trial Data Transparency Forum, we’ve seen a notable shift in organizational culture and the tenor of the discussions:

  • Stage 1. “We see merit in the idea, but we also see many ways it could go wrong.”
  • Stage 2. “We need to do something before external entities impose a data-sharing framework on us.”
  • Stage 3. “We’re excited to be at the forefront of creating policies and processes to make this work.”
  • Stage 4. “This may not be the final state of things, but here’s what has been working for us.”

The fourth forum, held at SAS in Cary, NC, on October 2, exemplified stages 3 and 4 – endorsement for data sharing, and more tangible progress to show for it. These events are not SAS infomercials – far from it. Our role is to facilitate – to join and formalize the conversations that have been taking place in various corners of the industry and academia. As my co-host Matt Gross quipped, SAS provides the room and the food for people to come together. The participants provide the expertise, passion and collaborative spirit.

To start the day, we heard from Ronald Krall, MD, of the University of Pittsburgh (formerly Chief Medical Officer for GlaxoSmithKline) about the why and how. He challenged the audience to think about what’s next. What’s easily achievable, and what’s more aspirational? How can we bake data sharing into the clinical trial process?

As Krall noted, you might be afraid that secondary research reveals something you didn’t want to know, or something that could harm your product or competitive position, but if you’re committed to knowing everything you can possibly know about your products – and you’d rather know it sooner than later – transparency is the ticket.

Eric Peterson, MD, of the Duke Clinical Research Institute outlined the rigorous, pragmatic framework his group has adopted for review of research requests and publication of associated findings – a model for commercial organizations to consider as well.

Marla Jo Brickman, PhD, of Pfizer and Judy Bryson, PharmD, of UCB Biosciences, Inc., described what it looks like to be an early adopter with a foundation in place and traction building. Both firms were committed to transparency before; what’s new is the more structured way it is done now.

Kald Abdallah, MD, PhD, Chief Project Data Sphere Officer, and Mark Lim of FasterCures reminded us of the value of data-sharing consortia. The questions are too complex for any of us to answer alone, but they might be answered by triangulating insights from many different places.

Places such as the Yale Open Data Access (YODA) Project. Karla Childers of Johnson & Johnson – which earlier this year announced its participation in YODA – described the process used to review data requests, which will make the data-sharing process independent yet collaborative.

Assuming the data are available and can be de-identified to preserve patient privacy, what is a reasonable request? What is the ideal? Our panelists agreed that shared data shouldn’t be used only to question the validity of previous primary research. The risk there is that by changing parameters and analytical techniques, it is possible to come up with contrary but specious conclusions – and the real value of secondary research is found in creating new knowledge.

This concern about poor analysis can be mitigated by carefully vetting requests – and it might not be much of a concern to begin with. Ben Rotz of Eli Lilly noted that few research proposals seek to replicate previous research; the overwhelming majority are quests to create new science. McSorley concurred, noting that only 1 of 23 research proposals received was intended to confirm results from a previous GSK study.

Momentum is building. Take ClinicalStudyDataRequest.com, for example. Initiated by GSK, the online portal now includes 10 sponsors – Bayer, Boehringer Ingelheim, GSK, Lilly, Novartis, Roche, Sanofi, Takeda, UCB and ViiV Healthcare. The site receives an average of 900 visitors a day and has had 219,000 unique visitors in the last 18 months, reported Jessica Scott, MD, JD, of GSK.

Clearly we are moving away from “how do we think this could work” to “how is it working” and “how can we improve this as we move forward?”

“We’ve overcome some of the cultural lock-in, the inertia in industry over the past couple of years since we’ve started this process – and have gone from commitments to implementing a system that’s actually up and running and working,” said Scott.

In fact, it has moved from concept to reality very rapidly. We’re beyond infancy in some areas, now evolving from small pilots to a sustainable model, applying early lessons to scale, respond to feedback, and accommodate the needs of the broader industry and the goals we all share.

“When people sign up to be subjects in an experiment, they make a tremendous sacrifice on society’s behalf,” said Krall.  “Our responsibility is to make sure that sacrifice gets the best possible use. If the data can make a contribution – even if it’s a use not envisioned until years later – we have an obligation to make that possible.”

McSorley agreed:  “Part of the mindset change is that it’s not our data, it’s data that belongs to the larger medical community.”

Stephen Freedland, MD, of Duke University School of Medicine channeled his inner JFK to remind us of the bigger picture: “Ask not what are the risks of data sharing, but what are the risks of not sharing data.”

OnDemand recordings of all the presentations from the fourth Clinical Trial Data Transparency Forum are available for viewing.


Controlling our destiny: Real-time, visual analytics can combat the spread of disease

The recent outbreak of the Zaire Ebola virus has garnered much media attention and calls for action at all levels of government. The current outbreak is the gravest in history, and the CDC’s worst-case scenario predicts up to 1.4 million cases by late January (correcting for underreporting). The epidemic in West Africa has become so widespread that Ebola could become a permanent presence – and thus pose a persistent threat to other parts of the world.

We have seen pandemics before: smallpox in 1633, the Spanish flu in 1918 and syphilis in antiquity. In fact, viral outbreaks throughout history have been so common, and so prevalent, that some scientists have hypothesized that viruses played a role in our own evolution.

It is time to evolve again. We now have the tools to slow down and even stop epidemics, and those tools start with analytics. Not the wonky, dissertation-style analytics that show up in obscure statistical reports – I’m referring to analytics done in real time, by the people on the front lines fighting the epidemic. To stop Ebola we must be able to deploy real-world analysis to non-expert users instantly so they can act on results immediately.

Welcome to the world of visual analytics.

Epidemiologists have referred to it as “a technique aiding data analysis and decision making that allows for a better understanding of the context of complex systems.” That it certainly is. And it has the potential to make a difference in this epidemic. In the US, rules have gone into effect requiring all travelers flying in from Liberia, Sierra Leone or Guinea to undergo strict screening procedures. But will that be enough? Perhaps not.

At SAS I have the privilege of interacting with data scientists from a wide variety of disciplines who specialize in the real-time analysis of large datasets. SAS develops tools that enable real-time detection of fraudulent activity, used mostly by the financial services industry. These tools combine a wide variety of approaches – such as social network analysis, business rules, forecasting and predictive analytics – to determine in near-real time where and when fraud happens. These same types of analytics can be deployed in the fight against viral epidemics. A screener detects a traveler with a high temperature. A school nurse finds a fever in a school child. An emergency room sees a spike in feverish patients. Are these cases Ebola? If not, what are they? And more importantly, are they contagious?

By combining data such as flu trends, disease trajectories, and geospatial information together with passport records, financial transactions (such as where and when an airline ticket was purchased), and information gleaned from social networks, it’s possible to build models to predict the cause of that fever. Doing this could not only help stop Ebola, it would also help stop the spread of any contagious disease. Developing this capacity would usher in a new era in our relationship with pathogens. Unlike our ancestors who had to resign themselves to fate, we can, through clinical analytics and rapid diagnostic testing, actively engage and control the viruses that make us.
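A sketch of how such signals might be combined in a rules-based model. The fields, weights and threshold here are invented for illustration; a real system would learn them from the kinds of data described above:

```python
def fever_triage(case, flu_activity_high):
    """Toy rules-based triage: combine travel, contact-tracing and
    local flu-trend signals into a priority score (illustrative only)."""
    score = 0
    if case["recent_travel_to_outbreak_region"]:
        score += 3
    if case["contact_with_confirmed_case"]:
        score += 2
    if flu_activity_high:
        score -= 1  # a common cause (seasonal flu) becomes more likely
    label = "priority_screen" if score >= 2 else "routine_care"
    return label, score

# A feverish traveler with a known contact, outside flu season:
result = fever_triage(
    {"recent_travel_to_outbreak_region": True,
     "contact_with_confirmed_case": True},
    flu_activity_high=False)
```

The point is not the particular rules but the architecture: scoring each case in near-real time, at the point of screening, rather than in a retrospective report.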


A new world of trust and transparency for clinical trial information

On Monday, September 29th, the European Ombudsman organized a panel discussion on “International Right to Know Day.” This day was established in 2002 by access-to-information advocates from around the world. This year, the panel’s theme was “Transparency and public health – how accessible is scientific data?”

This topic was well chosen for a week in which the board of the European Medicines Agency (EMA) published its long-awaited policy on publication of clinical data (1). The panel at the European Parliament consisted of representatives of all the stakeholders who gave input on EMA’s draft policy over the preceding years. The European Ombudsman, Emily O’Reilly, opened the discussion by saying that while much good comes from the pharmaceutical industry, more trust is needed to convince patients that therapies are working, and the only way to create that trust is by opening up clinical trial results and data.

The actions of the industry will silence the critics

Both Ben Goldacre (physician and author of the book Bad Pharma) and Margrete Auken (European Parliament member and shadow rapporteur for the European Union’s new 2014 clinical trial regulation) expressed a lingering distrust of the pharmaceutical industry and how it is releasing data. Richard Bergström of the European Federation of Pharmaceutical Industries and Associations (EFPIA) presented the great progress made in the last year by the European pharmaceutical industry in its drive to release clinical trial data in a controlled manner. “This is unprecedented,” said Bergström, with many EFPIA members going beyond the principles that his organization has laid out. These principles include sharing all information produced during a clinical trial, including Clinical Study Reports (CSRs) and the complete set of (anonymized) individual patient data (IPD).

The European Medicines Agency releases their transparency policy

The EMA policy describes what clinical trial information will be released and when, and states that EMA itself intends to make it available to interested researchers. Guido Rasi, the Executive Director of EMA, received most of the attention with the publication of the EMA policy the same week. Rasi pointed out that EMA was under no legal obligation to release the clinical trial information owned by the pharmaceutical companies, but did so to increase public trust in regulatory decisions about new products. According to Rasi, the policy intends to strike a balance between releasing clinical trial data and protecting the commercial interests of pharmaceutical companies. Meanwhile, GSK last year became the first global pharmaceutical company to open up all its clinical trial information – including anonymized IPD – to external researchers. The pharmaceutical giant now lets external researchers apply for access to a clinical trial; an independent review panel vets the requests, and approved researchers work in a secure online data and analysis environment (SAS® Clinical Trial Data Transparency) where they can access and re-analyze the patient-level clinical trial data.

Europe leads the way in transparency of clinical trials

The “EMA policy on publication of clinical data for medicinal products for human use” – as it is titled – will become effective January 1, 2015, and reflects a legal obligation for transparency in the new European Clinical Trials Regulation No 536/2014, adopted in late May 2014. The European regulatory agency will implement this policy step by step, initially releasing Clinical Study Reports (or parts of them); individual patient data (IPD) might follow as the result of a follow-on policy. EMA will release only certain modules of the CSRs, such as:

  • Clinical overviews (module 2.5 of ICH E3 guidelines), clinical summaries (module 2.7), and clinical study reports (module 5: 16.1.1 – protocol and protocol amendments, 16.1.2 – sample CRF, and 16.1.9 – statistical methods).

Sponsors can redact commercially confidential information (CCI), and these redactions need to be approved by the EMA. At a later date, the EMA will detail how and when individual patient data will be released. The policy released on October 2nd, however, defines two levels of access:

  • A simple registration process will provide access to the information in screen-only mode (no print capability).
  • A second level (for academics and non-commercial users) will require proof of identity and enable downloading and saving information.

Towards full transparency of clinical trial information, step-by-step

In my view, the EMA policy is a great step forward that will contribute to a better understanding of the regulatory decisions that result in approval or rejection of marketing authorization applications (MAAs). It should, however, only be seen as complementary to the industry’s initiatives in providing complete information – including complete (but redacted) CSRs, blank CRFs, IPD, protocol information and other types of supporting information from historical clinical trials (2). For example, at least 19 organizations are listed on EFPIA’s transparency website, and currently 9 organizations are joining GSK in allowing access to anonymized patient-level data on ClinicalStudyDataRequest.com. After an independent review board approves their requests, researchers can access an advanced statistical computing environment and a multi-sponsor repository where they can analyze and compare trials from different sponsors and extract new clinical knowledge about the medicinal products and devices.

If you are an academic researcher, you can now turn to different organizations for information about medical products: to the regulators for regulatory decisions and submitted reports and to pharmaceutical companies for the detailed trial information – including IPD and the ability to re-analyze the data and compare competing or complementary products.

The future of data transparency

I believe that both access and information sharing systems will continue to thrive in the long term and provide complementary benefits to the public and external researchers. A growing list of pharmaceutical companies is now fully committed to providing detailed trial information and encouraging secondary analysis; for example, they are discussing how to apply clinical data standards such as CDISC to bring de-identified data together, and methods to de-identify the data (with help from industry associations like PhUSE and TransCelerate).

I’m hoping that academic trial research centers will now open up their information as well and consider providing centralized access to the data of the clinical trials they’re running – preferably in the same multi-sponsor environment the industry is currently using. While much progress has been made, all of the stakeholders involved will gain maturity and experience as researchers start making discoveries. Researchers can now make full use of these different complementary possibilities, start mining clinical trials for all important confirmatory and secondary findings, and publish high-quality research to further increase the trust of patients and physicians in the medicines and devices approved for use by health care providers. After all, “the right to know” – the theme of the European Parliament panel – can only be realized if researchers can make sense of the data in an advanced analytical environment.

  1. European Medicines Agency. Publication of clinical reports: EMA adopts landmark policy to take effect on 1 January 2015.
  2. Krumholz HM, et al. Sea Change in Open Science and Data Sharing: Leadership by Industry. Circ Cardiovasc Qual Outcomes. 2014;7:499-504.


Predicting the one percent in health care

Episode analytics is a method of using patient-centric data to define episodes of care. These episodes can be used to define standards of care – from both a cost and a quality perspective – and those standards can then be projected forward to establish bundled payment budgets and quality targets. This is a global method of controlling costs. But what if episode analytics could be used predictively, to identify the next top spenders?

Health care spending is not distributed equally. For the civilian* population, 20 percent of all spending is on behalf of just one percent of the population, and five percent of the population accounts for nearly 50 percent of all spending. These members are easily identifiable through claims analytics and are often the focus of case management efforts to help control their costs. While these care management efforts are effective, they can’t reverse historical spending – nor can they undo the episodes of care that drove it. The question is: can episode analytics identify the episode characteristics that predict the next one percent, so care can be preventive rather than reactive?
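The concentration figures above are easy to check against any claims extract: rank members by annual spend and compute the share held by the top slices of the population. A minimal sketch in Python (all member spend figures below are invented for illustration only):

```python
# Sketch: measure how concentrated spending is across members.
# The annual spend amounts are hypothetical, not real claims data.
annual_spend = [120, 90, 45000, 300, 15, 2200, 80, 640, 17500, 55,
                410, 95, 30, 8800, 260, 70, 1200, 40, 25, 150]

total = sum(annual_spend)
ranked = sorted(annual_spend, reverse=True)  # biggest spenders first

def share_of_top(fraction):
    """Share of total spend attributable to the top `fraction` of members."""
    k = max(1, round(len(ranked) * fraction))
    return sum(ranked[:k]) / total

print(f"Top  5% of members: {share_of_top(0.05):.0%} of spend")
print(f"Top 20% of members: {share_of_top(0.20):.0%} of spend")
```

Even this toy sample shows the skew: a single member dominates total spend. Real claims analytics runs the same ranking over millions of members, which is why the top one percent are so easy to find after the fact – and why predicting them beforehand is the interesting problem.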

Because SAS® Episode Analytics is patient-centric, it provides a full view of the episodes of care the patient has experienced. This view is unusual: not only is all care included, but it is categorized in several ways. First, care is associated with every relevant episode. If a follow-up visit after surgery includes diagnosis codes indicating chronic care, the chronic care episode(s) are associated with the visit in addition to the surgical episode. The identification is also hierarchical: if a service initiates an episode of care, its cost is fully allocated to that episode, but the service can also be associated – not allocated – with another episode. Additionally, an allocation can be split equally across episodes. This hierarchical categorization of care is distinctive and allows insight into connections – or the lack thereof – in care.
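The allocation-versus-association distinction above can be sketched in a few lines of Python. This is a simplified illustration only, not the SAS® Episode Analytics implementation; the episode names and the claim record are hypothetical:

```python
# Sketch of hierarchical claim-to-episode assignment: a claim's cost is
# *allocated* to the episode it triggers, but the claim may also be
# *associated* with other concurrent episodes (or split across them).
from dataclasses import dataclass, field

@dataclass
class ClaimAssignment:
    claim_id: str
    allocated_to: list                                   # episodes carrying the cost
    associated_with: list = field(default_factory=list)  # linked, but cost not counted

def assign(claim_id, triggering_episode, concurrent_episodes, split=False):
    if split:
        # Cost shared equally across the triggering and concurrent episodes.
        return ClaimAssignment(claim_id, [triggering_episode] + concurrent_episodes)
    # Otherwise the cost belongs to the triggering episode only; concurrent
    # episodes keep a link so cross-episode connections stay visible.
    return ClaimAssignment(claim_id, [triggering_episode], concurrent_episodes)

# A post-surgical follow-up visit that also carries chronic-care diagnoses:
visit = assign("claim-001", "knee_replacement", ["diabetes", "hypertension"])
print(visit.allocated_to)     # cost counted against the surgical episode
print(visit.associated_with)  # chronic episodes linked for insight
```

The design point is that allocation (who pays, exactly once) and association (what is clinically connected) are kept as separate relationships, so total cost still sums correctly while the cross-episode links remain queryable.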

With SAS® Episode Analytics, the stacked graph at the bottom shows total cost by member and by condition. The upper right breaks out cost by category (T = typical, C = complication, TC = typical with complication), and the upper left shows the cost of each potentially avoidable complication (PAC) by condition.


Another feature of comprehensive episode analytics is categorizing care as either typical care or a potentially avoidable complication (PAC). This is not only a method to quantify quality, but also a way to identify undesirable future implications for member health. With SAS, these PACs are categorized based on clinical criteria, such as adverse effect of a drug or peripheral embolism. More than 200 PAC categories are identifiable today, and each carries the full claim history behind it – not only the procedures but also the diagnoses.
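One natural use of PAC categorization is to quantify quality as the share of each episode's spend that falls into PAC categories. A hedged sketch in Python (the claim lines, category labels and costs are all invented for illustration; real PAC definitions rest on detailed clinical criteria):

```python
# Sketch: quality as the share of episode spend that is potentially
# avoidable. All claim lines and amounts below are hypothetical.
claim_lines = [
    {"episode": "diabetes",         "category": "typical",             "cost": 1800},
    {"episode": "diabetes",         "category": "adverse_drug_effect", "cost": 950},    # PAC
    {"episode": "knee_replacement", "category": "typical",             "cost": 21000},
    {"episode": "knee_replacement", "category": "peripheral_embolism", "cost": 6400},   # PAC
]

PAC_CATEGORIES = {"adverse_drug_effect", "peripheral_embolism"}

def pac_share(lines):
    """Fraction of total cost attributable to potentially avoidable complications."""
    total = sum(line["cost"] for line in lines)
    pac = sum(line["cost"] for line in lines if line["category"] in PAC_CATEGORIES)
    return pac / total

for episode in sorted({line["episode"] for line in claim_lines}):
    lines = [line for line in claim_lines if line["episode"] == episode]
    print(f"{episode}: {pac_share(lines):.0%} of spend is potentially avoidable")
```

Aggregated by provider or by condition, a measure like this is what lets the same episode data serve both cost control and quality comparison.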

The combination of hierarchical associations and complication categorizations provides a valuable tool for analyzing historical claims. This new insight into member claims history feeds new analytics and predictive engines, which can in turn be used to identify the members – and providers – who can benefit from future actions.

 

*Civilian excludes residents of institutions – such as long-term care facilities and penitentiaries, as well as military and other non-civilian members of the population. “Care” reflects personal health care and does not include medical research, public health spending, school health or wellness programs. From “The Concentration of Health Care Spending,” National Institute for Health Care Management (NIHCM) Foundation.


Will the health data you’re using truly answer your question?

Computer processors have undergone steady, consistent growth since Alan Turing and his contemporaries built the first “modern” computers. One way of quantifying this growth is Moore’s Law, which observes that the number of transistors on an integrated circuit doubles roughly every two years. While that’s a bit too technical to mean much to me, to Intel it means a new processor generation every two years. I couldn’t find a direct benchmark comparison, but try to remember the cutting-edge Pentium III you used in 2000 and compare it to the Intel Haswell chip in your ultra-thin MacBook Air (not to mention the high-end quad cores in performance machines).

The ubiquity of advanced analytics
This growth in computing capability has dramatically and positively changed the face (and pace) of analytics. Concepts like machine learning aren’t just hypotheticals or relegated to academia anymore; they are reality, they are powerful, and they are everywhere. The value we get from using advanced analytics is immense, and now, more than ever, modern tools are highly accessible to a wider array of users. Users may not know (or even need to know) how the wheels turn behind the scenes, but with very simple interfaces they’re able to start those complex wheels turning.

First building block: Data
While all this technology has opened up amazing possibilities for easily accessible insight, we shouldn’t forget the lessons that traditional statistical methods provide. The notion of stating a “formal” hypothesis may seem limiting (why test one thing when I can explore a thousand?), but taking the time to formulate a research hypothesis makes you think critically about what you’re doing. One of the most important questions you can ask yourself during this process is whether the health data you’re using is even appropriate for answering the questions you want to consider. Many data sources collect similar data elements, but they collect them in different ways and for different reasons.

The myriad of health care data


For instance, medical diagnoses can be captured from billing claims, EMRs, patient histories or public health surveys (e.g., NHANES). Each of these sources could potentially power similar insights – but with differing qualities and caveats. Claims and EMRs come from an “expert” clinical source, so diagnoses may be more accurate, while patient histories may include information outside the view of the treating physician but rely on a patient’s own biased recall. All three of these sources are limited to a self-selecting population and lack the coverage that a general population survey provides – though with a survey you are limited by data use restrictions, questionnaire design and the biases of those pesky respondents.

The art of statistics
Perhaps the most confusing part, and what makes statistics more of an art than a science, is that all of the above scenarios can be right depending on your needs.

I don’t bring up this issue to deride or lampoon the prevalence and utility of highly accessible analytic tools or those who use them. I’m a strong believer that broader access to these tools will open us up to insights we wouldn’t otherwise uncover. At the same time, we can easily forget that not all insights are created equal. As you look at the results and information you uncover, before you evaluate the impact they may have on your business, first evaluate the underlying quality with which they were created.

An example comes from a former colleague who worked on a study profiling pilots, trying to predict who would make a good one. In the end, the only significant factor they found was whether you liked strawberry ice cream. I would guess that a fear of heights and motion sickness are better indicators that I wouldn’t be a good pilot – but maybe it’s been the ice cream all along.


It takes all kinds (of data): Going beyond our comfort zone with clinical models

When I’m working with new customers or on a new project, there are a handful of questions I typically ask. These help me set the stage, understand needs, and most importantly – learn the customer’s expectations. Almost always, I spend some time talking about what an acceptable model looks like to them. Does it need to have certain characteristics or can the data speak for itself?

“Let the data speak” is the gist of the typical answer, but that usually isn’t reality. It’s like telling someone to “help yourself to anything in the fridge”: you really don’t mean for them to grab the steaks you were planning to eat for dinner. They can have anything they want – within a predefined, unspoken set of boundaries.

We want to explore the data, but often, we want the data to speak to us in terms of what we already know. An endocrinologist isn’t likely to accept a model predicting diabetes trajectory that doesn’t include HbA1c. A cardiology researcher is going to want to see a QT interval. And an epidemiologist specializing in pulmonary diseases is going to want FEV (forced expiratory volume).

We convince ourselves – due to research, expert opinion, or simply habit – that models must include certain concepts or be rendered invalid. I definitely advocate for the consideration of these known factors in model creation. They’re not only elements that will help to define a robust model, but given our current clinical knowledge, they represent mechanisms by which we can effect a change.

However, while creating models with such considerations is necessary to provide value in a certain context, I would also raise three counterpoints to this. I challenge you to consider these the next time you start a modeling process:

  1. Health care and medicine (like most industries) are grounded in science, and while that carries with it the scientific method and its inherent rigor, it also brings fallibility. Unlike mathematics, the sciences represent our best understanding, not necessarily truth. While I doubt the relationship of HbA1c to diabetes will go the way of “phlogiston,” I don’t doubt that a sufficient span of time will make many of our current scientific truths seem equally preposterous.

    A statistical model built on valid and robust data that defies current clinical knowledge may be a statistician’s contribution to science. I’m not saying to throw out current knowledge and create off-the-wall models. But rather, we have an opportunity through the exploration of data to bring up new ideas or challenge old ones.

  2. Highly predictive but clinically illogical models may still have utility, though perhaps not in the traditional sense. A model derived from magazine subscription history and peanut-butter brand-switching habits – completely devoid of any traditional cardiovascular risk indicators – that can calculate a reliable 30-day risk score for a heart attack still has value.

    It doesn’t give us actionable information we can use to mitigate that risk, but it does alert us to its presence – whatever the cause. Often we may not have the luxury of ideal data to derive a model. A patient who hasn’t had a heart attack may have never seen a cardiologist, had an EKG, or even have a recent cholesterol panel or CBC. And, even if you do have this data, how often is it collected? But if Cat Fancy and a recent purchase of a jar of Peter Pan crunchy can send up a red flag, why not listen to it?

  3. Many tests are biased, most people lie (at least a little), and all systems are imperfect. We cannot assume that a data point which attempts to capture a particular concept does so perfectly – especially in fields like medicine, where our most valuable observations aren’t based on static, easily measured concepts. A cashier can count the rolls of toilet paper you purchase, a bank teller can count the dollars and cents in a transaction, but even the best lab tech can’t count the number of white blood cells in a drop of blood.

    Generally speaking, the data points we use are at best highly correlated to the concepts they represent, and at worst, a set of random values. Perfection cannot be reached and bias is often impossible to mitigate, but if we can have consistent bias, we can still have useful information. We can capture directional trends and consistent results. I may not be willing to believe someone who says they took their prescribed statin 300 of the last 365 days, but if I can assume a consistent trend in bias, then that answer still has value to me (just not necessarily as an accurate measure of adherence).
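That point about consistent bias can be made concrete: if everyone inflates their self-reported adherence through the same monotone distortion, the reports are useless as absolute measures but still rank patients correctly. A toy sketch in Python (the patients and adherence values are invented):

```python
# Sketch: a consistent (monotone) reporting bias destroys the absolute
# measure but preserves the ranking. All values below are hypothetical.
true_adherence = {"pt_a": 0.55, "pt_b": 0.70, "pt_c": 0.90}

def self_report(true_value):
    # Everyone inflates the same way: report halfway between truth and perfect.
    return true_value + (1.0 - true_value) / 2

reported = {patient: self_report(v) for patient, v in true_adherence.items()}

# Sort patients from least to most adherent under each measure.
rank_true = sorted(true_adherence, key=true_adherence.get)
rank_reported = sorted(reported, key=reported.get)
print(rank_true == rank_reported)  # the ordering survives the bias
```

None of the reported numbers is an accurate adherence measure, but any decision that depends only on who is more or less adherent – say, targeting outreach at the bottom of the list – is unaffected by the distortion.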

Modern computing resources are powerful and in many ways our data is plentiful. There is no reason not to explore every model we can, no matter how ridiculous or counter-intuitive it might seem to be at first. From this we might discover something new (or refute something old), or even create a new early warning system for heart attacks. Just as correlation doesn’t imply causation, we should also remember that an element of causation (especially as a part of a highly complex and not fully understood system) doesn’t necessarily give us high correlation.

Remember, a statistical or predictive model is a tool. We can use it in many ways, from detecting a signal amid the noise to finding areas where we can effect a change for better health. Tools can be constructed in many ways, and two that seem similar may have drastically different uses. Understanding how a tool was made, and what it was made for, is how we come to use it properly and ultimately derive maximum value.

  • About this blog

    Welcome to the SAS Health and Life Sciences blog. We explore how the health care ecosystem – providers, payers, pharmaceutical firms, regulators and consumers – can collaboratively use information and analytics to transform health quality, cost and outcomes.