A strategy for population health analytics: Assess and report performance across the continuum (Part 3 of 10)

Any improvement program begins with honestly assessing current performance and identifying causal factors that drive the desired effects. Without such an assessment, the improvement effort is reliant upon blind luck and likely doomed to suffer a myriad of unintended consequences.

Any strategy to improve the health of populations, therefore, must start with establishing a baseline for the causal factors. Herein, analytics plays a key role in identifying correlation but requires smart people to establish causality. Smart people, equipped with the right tools and armed with trustworthy information, can perform root-cause analysis and identify opportunities quickly. Data visualization and exploration will certainly speed the process, but the real benefit comes from democratization of data through enterprise-wide interactive reports to gain a single version of truth across the organization. Thus, reporting becomes a foundational measurement capability by which you can monitor and validate all improvement initiatives.

Performance reporting needs to be personal, visual, mobile and interactive. Static reports only succeed in creating more questions than answers and, ultimately, frustrate users into cynical disregard and skepticism about the value of the reported information. For decision makers to want to use reported information in their decision-making process, they must first trust the accuracy and validity of the information; second, they must be able to interpret and understand the results. Lastly, the information must be “actionable” – meaning specific and granular enough that the decision maker can effect a change by making different/better decisions. That’s a tall order, but we shouldn’t stop there.

The reported information should also be sufficiently granular, highly portable, securely shareable, and social – meaning that report users can collaborate with other care team members across the system to interpret, brainstorm and take action to improve the situation. Due to privacy concerns and our fragmented EMR (electronic medical record) infrastructure, this is one of the most challenging aspects of reporting health care performance in meaningful ways. However, success in population health management depends on our ability to unlock the data buried in the EMR and surface insights to care team members who can effect change. This will transform reporting into an ongoing performance assessment cycle that enables evergreen prioritization of intervention programs and resource investments.

The SAS Center for Health Analytics and Insights: Population Health Wheel

Until recently, it was extremely difficult to assess health care delivery performance at the episode level. We’ve relied on groupings of ICD/APG/DRG codes, but the business and clinical logic required to link discrete encounters together into clinically meaningful episodes wasn’t systematic. New and increasingly sophisticated techniques for episode analysis are becoming available, based on more clinically relevant episode definitions such as those made available by HCI3's Prometheus Payment. These new episode structures are designed from the ground up to support value-based payment models that span the care continuum. Your population health analytics (PHA) strategy should include cost-effective methods to analyze episodes, identify variation in cost and quality, eliminate waste, and reduce the incidence of potentially avoidable complications. This is, I believe, a key factor in proving the value of care coordination and justifying the investment in a more connected system of care.
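
To make the idea concrete, here is a deliberately simplified sketch (in Python) of time-window episode grouping. The clean-period heuristic, the 90-day threshold and the sample dates are illustrative assumptions only – real groupers such as Prometheus apply far richer clinical and business logic than a gap-in-time rule.

```python
from datetime import date

def group_into_episodes(encounter_dates, clean_period_days=90):
    """Group sorted encounter dates into episodes separated by gaps
    longer than the clean period."""
    episodes = []
    for d in sorted(encounter_dates):
        if episodes and (d - episodes[-1][-1]).days <= clean_period_days:
            episodes[-1].append(d)   # within the clean period: same episode
        else:
            episodes.append([d])     # long gap: a new episode begins
    return episodes

visits = [date(2014, 1, 5), date(2014, 2, 10), date(2014, 9, 1)]
print(len(group_into_episodes(visits)))  # 2
```

Here the January and February visits link into one episode, while the September visit starts a new one.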

After establishing baseline performance measures and identifying and understanding root causes, we can move into phase three of the strategy and define clinically relevant population cohorts and identify gaps in care. In the next installment (4) of this series, we’ll discuss techniques to quantify risk and the value of decision trees.

Post a Comment

A strategy for population health analytics: Integrate and prepare data (Part 2 of 10)

In the eight-step approach to population health analytics (PHA) that we at the SAS Center for Health Analytics and Insights (CHAI) recommend, the first step is to “Integrate and Prepare Data.” But before we jump directly into a discussion about this first step, let’s take a moment to consider the following question: How much time did you spend in a doctor’s office or hospital (as a patient) last year? And now, compare that number to the amount of time you spent at work, at home, at an airport, in a car, on a bus, on the phone, out with friends, online, shopping, dancing, cooking, exercising, and whatever else you do while awake. I sincerely hope your ratio of minutes spent in a health care setting was similar to mine: approximately 92:350,308. Keep that ratio in your mind as you consider how much data was generated by you – or about you – in a health care setting. Got a number in mind? Now, how does that number compare to the amount of data generated by you – or about you – in all those other settings combined?

The SAS Center for Health Analytics and Insights: Population Health Wheel

This thought experiment helps us recognize the need to include non-traditional ‘big data’ – such as social, consumer, survey, and environmental data – along with our more traditional clinical, pharmaceutical, biometric, and lab data. Additionally, we need to plan for new data types and sources such as streaming data from wearable fitness devices, self-reported data from smartphone apps, blogs and forums, genomic data, and the digital output (audio and video) from telemedicine encounters. Successful integration of this data will require the use of fuzzy logic to match, merge and de-duplicate with a high degree of accuracy. Perhaps most important is the ability to mine unstructured (text) data, applying machine learning and natural language processing to extract value from clinician notes and patient verbatims.
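
As a toy illustration of the simplest level of mining clinician notes, the sketch below flags notes that mention smoking. This keyword pass is purely a stand-in for real NLP, which must handle negation (note that “denies tobacco use” is naively flagged here), abbreviations and clinical context; the pattern and sample notes are invented.

```python
import re

# Hypothetical keyword pattern standing in for a real NLP pipeline.
SMOKING_PATTERN = re.compile(r"\b(smok\w*|tobacco)\b", re.IGNORECASE)

def flag_smoking_mentions(notes):
    """Return True for each note that mentions smoking or tobacco.
    Naive: a real pipeline would detect that 'denies tobacco use'
    is a negated mention."""
    return [bool(SMOKING_PATTERN.search(n)) for n in notes]

notes = ["Pt denies tobacco use.", "Follow-up in 6 weeks."]
print(flag_smoking_mentions(notes))  # [True, False]
```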

Matching and attributing data from disparate sources to the correct person – be they patient or provider – is no mean feat. It’s not uncommon to find 30 percent or more duplication in patients’ EMR records. The problem is a bit like trying to bail water out of a boat before plugging the holes in the hull. Thus, an ounce of prevention in the form of data quality at the point of entry is worth a pound of cure. However, most health care settings don’t yet enforce rigorous data entry protocols, so it becomes necessary to profile the data and set up repeatable processes to remediate data quality issues. Whatever technology you choose, be sure it routes data quality decisions to the appropriate data steward for quick and secure resolution.
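
A minimal sketch of the matching idea, using edit-distance similarity from the Python standard library. The field names and the 0.85 threshold are invented for illustration; production master-data tools use probabilistic (Fellegi-Sunter style) matching across many more attributes.

```python
from difflib import SequenceMatcher

def similar(a, b):
    """Edit-distance-based similarity between two strings, 0.0 to 1.0."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def is_probable_duplicate(rec_a, rec_b, threshold=0.85):
    """Flag two records as a likely duplicate when birth dates match
    exactly and names agree closely. Fields and threshold are illustrative."""
    name_score = similar(rec_a["name"], rec_b["name"])
    dob_match = rec_a["dob"] == rec_b["dob"]
    return dob_match and name_score >= threshold

a = {"name": "Jon A. Smith", "dob": "1950-03-01"}
b = {"name": "John A Smith", "dob": "1950-03-01"}
print(is_probable_duplicate(a, b))  # True
```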

Once you have a cleanly integrated dataset, you can begin the process of preparing that data for analysis. It’s been noted that 80 percent of the work required to find an analytically driven solution is directly related to data preparation. To best prepare data for meaningful use requires a combination of subject-matter knowledge (to identify what a signal might look like) and data science (to know how to tease that signal out of the noise). Volumes have been written on the subject of data preparation. For population health analytics, the first and most formidable problem you’ll likely face is missing data. As such, it will be necessary to impute data and use advanced statistical methods for gauging reliability.
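
The simplest form of imputation can be sketched as follows. Mean substitution with a missingness flag is shown only to illustrate the mechanics – serious PHA work would consider multiple imputation or model-based methods and quantify the resulting uncertainty.

```python
import statistics

def impute_mean(values):
    """Replace None with the mean of the observed values, and record
    which entries were imputed so downstream models can account for it."""
    observed = [v for v in values if v is not None]
    mean = statistics.mean(observed)
    imputed = [v if v is not None else mean for v in values]
    flags = [v is None for v in values]
    return imputed, flags

bmi = [22.5, None, 31.0, 27.5]
filled, was_missing = impute_mean(bmi)
print(filled[1], was_missing)  # 27.0 [False, True, False, False]
```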

While this first step seems monumental, it’s important to anticipate and plan for the subsequent phases of the strategy. In Part 3 of this series, we’ll explore what it takes to assess performance across the continuum and report it in impactful ways.


A strategy for population health analytics (Part 1 of 10)

The promise of Big Data and analytics in managing population health is one of the most hyped, yet least understood opportunities in health care today. The Center for Health Analytics and Insights at SAS hopes to change that by offering you practical advice to help you create a winning strategy for your organization. In this ten-part blog series, we’ll describe the technological capabilities you’ll need, anticipate some of the challenges you’ll overcome, and outline the many benefits you’ll receive from adopting population health analytics (PHA).

The motivation

The combination of government reform and consumerism is driving the health care industry shift from fee-for-service (FFS) to fee-for-value (FFV) – sometimes characterized as a move toward more “accountable care.” Proponents believe that economic opportunities – in the form of shared savings or value-based payments – will encourage health care providers to accept risk for delivering improved outcomes at lower-than-expected cost. This opportunity is already motivating a significant number of entrepreneurial health care providers to begin transforming their traditional care delivery process from responsive episodic care into cross-continuum coordinated care, for defined groups of patients. This emerging delivery model is often referred to as “population health management”.

The challenge

We know that managing population health requires providers to blur the boundaries between public health and the medical treatment of individuals. As health care providers accept financial responsibility for the health care costs of certain groups of people, the need for – and value proposition of – investment in longitudinal cross-continuum care becomes evident. Scarce clinical resources, thin margins and competing priorities require efficient deployment and judicious management of those resources, at scale. Thus, providers on this journey will need health information technologies (HIT) that enable automation, ensure accuracy and integrate seamlessly into clinical workflows.

The strategy in a nutshell

The Wheel of Population Health Analytics (PHA) from the Center for Health Analytics and Insights at SAS

The Center for Health Analytics and Insights at SAS collated best practices and advice from experts across the US into a strategy we call population health analytics, or PHA. As shown at the top of the graphic of the PHA wheel, the strategy begins with integrating data from diverse sources and preparing it for analysis. This leads directly into assessing performance across the continuum of care. System-level performance reporting inevitably surfaces opportunities for improvement that will require providers to “peel back successive layers of the onion” and define increasingly granular cohorts of patients, ultimately ending with a “population of one.” Then, by understanding the needs and risks of each unique individual, providers can design interventions and tailor programs to engage each patient in a personalized care plan. Automation and workflow integration support the delivery of interventions that were strategically designed to improve care coordination and performance. Lastly, by measuring the impact (success or failure) of each intervention, experimenting with new methods, and testing incremental quality improvements along the way, providers can learn and adapt to optimize the entire process.

There are some important considerations for each of the eight analytical competencies, which we will cover in parts 2 through 9 of this series. In subsequent entries, we’ll elaborate on the business rationale, the desired output and the enabling technology that powers each capability. Check back for part 2 of the PHA Strategy series. In the meantime, please tell me your thoughts and ask us your questions in the comments section below.


The last mile of health care: Patient behavior change

Though the US Affordable Care Act has extended health insurance to millions of previously uninsured consumers, recent research found that providing health insurance to those not previously covered did not, by itself, improve patients’ health outcomes:

"The study seems to indicate that greater access to health insurance, in and of itself is not enough to improve outcomes for patients with chronic disease," said lead author Tomasz P. Stryjewski of Massachusetts General Hospital.

Expanding health insurance is only part of the answer. The other part is supporting consumers and patients in their attempts to embrace a sustained behavior change over the long term. The Centers for Disease Control and Prevention (CDC) reported that 75 percent of health care costs go toward the treatment of chronic disease. Even with health insurance, patients struggle to manage their health and wellness.

Changes in the US health care system are underway that further incentivize delivering high-quality, coordinated care. What’s missing is a clear path for accomplishing the end goal. Fortunately, the Centers for Medicare & Medicaid Services (CMS) has assessed the need for additional patient support outside the clinical setting. It’s now implementing the Chronic Care Management (CCM) Services program, scheduled to go into effect in 2015.

This program will allow providers to bill CMS $42.60 per month per patient for providing 20 minutes or more of chronic care management services to those Medicare patients with at least two chronic conditions expected to last at least 12 months, or until the death of the patient. The conditions must place the patient at significant risk of death, acute exacerbation/decompensation or functional decline. As this affords providers a direct mechanism to bill for patient interactions outside the clinical setting, they must now find approaches to supporting those patients beyond the office visit. One way to accomplish this is to use technology and automation to engage and support patients at scale. This is one aspect of population health management.
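
The billing rule described above can be expressed as a simple eligibility check. This sketch is a hypothetical simplification – real CCM eligibility involves clinical judgment and the full CMS rules, and all field names here are invented for illustration.

```python
def ccm_billable(patient, minutes_of_ccm_this_month):
    """Simplified check against the CCM criteria described above:
    two or more chronic conditions, expected to last 12+ months,
    significant risk, and 20+ minutes of CCM services this month."""
    eligible = (
        len(patient["chronic_conditions"]) >= 2
        and patient["expected_duration_months"] >= 12
        and patient["significant_risk"]
    )
    return eligible and minutes_of_ccm_this_month >= 20

p = {"chronic_conditions": ["diabetes", "CHF"],
     "expected_duration_months": 24, "significant_risk": True}
print(ccm_billable(p, 25))  # True
```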

By gathering and analyzing all types of patient data collected in and outside the clinical setting – with automation where appropriate – providers can create a 360-degree view of the patient in a cost-effective manner. Providers can then use this view to risk-stratify the patient populations to understand risk at both the cohort and individual patient level. Armed with this information, providers can identify specific types of interventions for specific consumers and patients and deliver the appropriate care.
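
As a toy illustration of risk stratification, the sketch below scores patients with a simple additive model and bins the population into cohorts. The weights, thresholds and fields are invented and have no clinical validity; real risk models are fit statistically to outcomes data.

```python
def risk_score(patient):
    """Invented additive score: comorbidity burden, prior utilization,
    and an age bump. Purely illustrative weights."""
    score = 2.0 * len(patient["chronic_conditions"])
    score += 1.5 * patient["admissions_last_year"]
    score += 0.5 if patient["age"] >= 65 else 0.0
    return score

def stratify(patients, high=5.0, medium=2.0):
    """Bin patients into high/medium/low cohorts by risk score."""
    tiers = {"high": [], "medium": [], "low": []}
    for p in patients:
        s = risk_score(p)
        tier = "high" if s >= high else "medium" if s >= medium else "low"
        tiers[tier].append(p["id"])
    return tiers

population = [
    {"id": 1, "age": 72, "chronic_conditions": ["CHF", "diabetes"],
     "admissions_last_year": 2},
    {"id": 2, "age": 45, "chronic_conditions": [],
     "admissions_last_year": 0},
]
print(stratify(population))  # {'high': [1], 'medium': [], 'low': [2]}
```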

In addition to identifying types of interventions, how can providers make better decisions on how to efficiently allocate resources for support and intervention tactics? Not every patient requires additional support to manage their health. Identifying which patients do – and how much additional support would be effective – can be solved with advanced analytics applied to the tremendous resource of data collected about those patients. Finally, monitoring patients’ behavior over time to understand who responded to which interventions allows for more informed decisions through response profiling: understanding which intervention will work for which patient. This all needs to take place “at scale.”
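
Response profiling, at its simplest, is just tallying response rates by intervention, as in this hypothetical sketch; real profiling would also segment by patient characteristics to learn which intervention works for which kind of patient.

```python
from collections import defaultdict

def response_rates(outreach_log):
    """outreach_log: (intervention, responded) pairs; returns the
    response rate per intervention. Names are illustrative."""
    counts = defaultdict(lambda: [0, 0])   # intervention -> [responses, total]
    for intervention, responded in outreach_log:
        counts[intervention][0] += int(responded)
        counts[intervention][1] += 1
    return {k: r / t for k, (r, t) in counts.items()}

log = [("phone", True), ("phone", False), ("sms", True), ("sms", True)]
print(response_rates(log))  # {'phone': 0.5, 'sms': 1.0}
```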

One population health management company focused on aggregating a broad spectrum of patient data and risk-stratifying the patient population is Geneia. They use SAS® to power their Theon platform, where they integrate disparate data sources – EHR data, claims data, psycho-social data, activity tracker data and device data – to create a comprehensive view of the patient both inside and outside the clinical setting. With this Big Data asset, their CareModeler module can support risk stratification of the patient population to allocate resources to close care gaps and/or support patients’ behavior change outside the clinical setting.

Population health management organizations are using automation of data management and advanced analytics to identify those patients who have care gaps and would benefit most from additional support. The automation and advanced analytics also enable targeted outreach to specific patients at the optimal time, given the patient’s status relative to the management of their disease, based upon data entering the patient’s records. Patients desperately need ongoing support outside the clinical setting for sustained behavior change relative to managing their health. And providers, at least for a portion of their patient population, now have solutions available and a mechanism to bill for care management outside the clinical setting.

So, how will all this be put together? Analytics will be key in identifying high-risk patients, allocating resources most efficiently, and aligning the various care management services and follow-up with each individual patient that will be most receptive. Technology and automation will pave the way for these services to be delivered at scale – exactly when they will deliver the most benefit – and capture the critical data surrounding the interaction for analysis and refinement of future care management services. We now have the incentives, health analytics, technology and automation to cover the last mile into the patient’s home and workplace in support of sustained behavior change.


The art of it all

When I see my doctor, I try to keep one thought in mind: He is the doctor and I am not. The reason I’m seeking his medical advice (other than my wife or prescription laws) is that arguably my doctor knows much more about biology, physiology and medical practice than I do. I try to remind myself that my self-diagnosed condition that my colleague Dr. Google and I came up with isn’t founded on the same kind of knowledge and experience that my doctor has.

Now, this doesn’t mean that I don’t get it right sometimes or that my research isn’t of value; it may help me to think about symptoms or qualities I wouldn’t have otherwise considered. But in the end, my doctor is best able to help me when I give him as much information as I possibly can, without filtering out what I think isn’t relevant. Yes, it’s allergy season, and yes, that means I sneeze more often than usual, but that doesn’t mean I should ignore the sneezing when talking to my doctor – this can be an important bit of information.

Right now, I would guess that any clinician reading this would be agreeing wholeheartedly and grumbling about the last patient who came in and told the doctor that they needed the purple pill. To this same clinician, I pose a challenge: do you give the same consideration to your analytical colleagues? Too often we are simply dictated to regarding the appropriate methods, data or considerations to make. I’m not a cardiologist and don’t pretend to be; yes, I want all the input you can give me to help supplement what I do, but the more you dictate and the less you discuss, the more my hands are tied.

A comment I’ve found myself making in the past (sometimes out loud, but often just to myself as it is a tad snarky) is that if all of your suppositions were correct, you wouldn’t be talking to me. If you truly knew the right data, transformations, methods and models, then these problems would have been solved long ago. I know methods that you don’t, I know pitfalls you are unaware of; to put it simply, I know my craft just as you know yours.

Analytics, like medicine, when done as it should be, is as much an art as it is a science. I have my models and distributions just as you have your tests and diagnostics. Knowing which to use and what to look for in the output isn’t always simple. Two tests may answer the same question, but in different ways and with different assumptions. The beauty is in the subtlety: I may only hear a heartbeat where you hear an arrhythmia; and you may see a histogram where I see a trend.


Data transparency benefits researchers and patients

According to the American Cancer Society, approximately 1 male in 7 will be diagnosed with prostate cancer during his lifetime. Therefore, increasing our medical knowledge about prostate cancer is very important to society. At the recent Clinical Trial Data Transparency Forum, Stephen J. Freedland, MD from the Duke University School of Medicine shared the value of access to clinical trial data for advancing medical knowledge about prostate cancer.

Good news for medical research

Many leaders in the life sciences industry are realizing that greater access to patient-level clinical study data is a good thing – good for science, good for business and good for humanity. In a very short timeframe, many biopharmaceutical companies have elected to make clinical trial data and supporting documents available to researchers. And for researchers who are seeking to discover new medical insights, this is great news.

“It’s incredible data,” said Stephen Freedland, MD, Associate Professor of Surgery and Associate Professor in Pathology, Duke Urology, at the Duke University School of Medicine. He continued:

“You guys spend millions of dollars gathering the data; the data is clean, there are no holes, and it’s prospective versus retrospective. As an academician who doesn’t have patients, the best I can do is look at the patients who were studied before. So the richness of the data is incredible. From an academic perspective, it allows PhD access to patient data to test new ideas, get quick answers and validate prior studies. For example, we had seen that obesity is a risk factor for prostate cancer; when we saw that in other data sets, this adds another point of validation.”

Good news for patients – new medical knowledge

Freedland also provided a compelling example of the tremendous medical insights that can be discovered by using this data to create new science to aid patients and society. He was able to use clinical study data to test his previous research findings.

His research followed patients for multiple years, looking at baseline characteristics to see which patients developed prostate cancer of what degree. “I sought access to pharma data to validate my ideas,” said Freedland. “I didn’t really care about the original trial per se – did drug A do better than drug B – but rather viewed the data as a prospective cohort study for secondary analysis.”

Gaining access to clinical study data has enabled Dr. Freedland to produce many research papers examining risk factors for prostate cancer. The resulting evidence indicated that smoking is correlated with more aggressive prostate cancer, and obesity is also a significant risk factor. Those insights were revealed by making new use of pre-existing clinical study data. And this is just one example of the medical insights researchers can deliver through access to a well-managed clinical trial data sharing program.

More medical insights to come

As more biopharmaceutical companies make clinical trial data and supporting documents available, the use of this data for approved medical research will help researchers create new medical science and benefit patients in many ways in the next few years. A strong data transparency commitment from all sponsors of clinical research will help to make future medical insights a reality.


Provider consolidation and health analytics: A combination with the potential to bend the cost curve

The health care industry is converging in many ways. Health plans are buying health systems, and health systems are creating their own health plans. The lines between health plan and health system are as blurry as ever. It seems that many organizations are jockeying for position as we turn the corner in implementing health care reform. One development that’s getting some attention is the trend of provider consolidation.

Provider consolidation is the trend for health systems to acquire independent providers and bring them into their larger system. It can pose a few issues for health plans, though. First, it can almost instantly result in an increase in cost for the same health care services, since health systems are usually able to charge a facility fee in their contracts with health plans. Second, as a health system gets larger, its bargaining power increases. This too can result in an increase in the amount it is reimbursed for its services.

Bend the cost curve

We have all seen the curve that illustrates the year-over-year increase in health care spend. It is a serious and legitimate concern. There is a collective, national mission to try to “bend the cost curve.” On the surface, provider consolidation may appear to directly conflict with the mission to bend the curve. However, it could enable the industry to improve health care quality while at the same time decreasing the total cost of care.

New reimbursement models are being implemented by both health plans and health systems. The industry overall has acknowledged the need to move away from traditional fee-for-service reimbursement and towards fee-for-value. However, the reality is that providers are often reluctant to enter into these value-based models because they have limited influence and visibility into the quality of care the patient receives downstream. In a fragmented region with many different unaffiliated providers, this challenge is very apparent, and adoption of value-based models remains at an early stage. In more consolidated regions, however, there is more willingness and ability to successfully enter into value-based agreements that incentivize coordinated care and improved quality. This benefits the patient, the provider, and the health plan.

An integrated delivery system alone is not enough to successfully transition to a value-based model. An advanced analytics capability is the next step to establish the foundation to plan and operationalize the new model. A clear understanding of historical performance through the lens of a value-based model is an absolute necessity for both health plans and health systems. Organizations have acknowledged this, and are now arming themselves with the data, information, and insights required to understand their utilization and quality in ways never required before. As the wave of provider consolidation continues, more organizations will require an advanced analytical infrastructure to support their effort to improve quality. In combination with coordinated care and advanced analytics, health plans and health systems will be able to successfully “bend the cost curve.”


Toward clarity on transparency: An evolution in thinking – and action – about sharing patient-level clinical trial data

Highlights from the fourth SAS Clinical Trial Data Transparency Forum

“Access to the underlying (patient level) data that are collected in clinical trials provides opportunities to conduct further research that can help advance medical science or improve patient care. This helps ensure the data provided by research participants are used to maximum effect in the creation of knowledge and understanding.”

That’s the word from ClinicalStudyDataRequest.com, the data sharing consortium that represents the founding sponsor, GlaxoSmithKline, and now nine other sponsors.

Sharing patient-level data for altruism and good? The ideal was not initially embraced.

“When I think back two years to when GSK first started talking about this, there was certainly some concern among staff,” said Paul McSorley of GlaxoSmithKline, a pioneer in the data-sharing movement. “’How much work is this going to be? What happens if or when researchers reach different conclusions from our own?’ We’re way past that now. GSK scientists recognize that these concerns, while real, can be mitigated – and because there is so much support for the potential value data sharing can bring to the medical community, we are very proud of what GSK is doing here.”

In the year since we hosted the first Clinical Trial Data Transparency Forum, we’ve seen a notable shift in organizational culture and the tenor of the discussions:

  • Stage 1. “We see merit in the idea, but we also see many ways it could go wrong.”
  • Stage 2. “We need to do something before external entities impose a data-sharing framework on us.”
  • Stage 3. “We’re excited to be at the forefront of creating policies and processes to make this work.”
  • Stage 4. “This may not be the final state of things, but here’s what has been working for us.”

The fourth forum, held at SAS in Cary, NC, on October 2, exemplified stages 3 and 4 – endorsement for data sharing, and more tangible progress to show for it. These events are not SAS infomercials – far from it. Our role is to facilitate – to join and formalize the conversations that have been taking place in various corners of the industry and academia. As my co-host Matt Gross quipped, SAS provides the room and the food for people to come together. The participants provide the expertise, passion and collaborative spirit.

To start the day, we heard from Ronald Krall, MD, of the University of Pittsburgh (formerly Chief Medical Officer for GlaxoSmithKline) about the why and how. He challenged the audience to think about what’s next. What’s easily achievable, and what’s more aspirational? How can we bake data sharing into the clinical trial process?

As Krall noted, you might be afraid that secondary research reveals something you didn’t want to know, or something that could harm your product or competitive position, but if you’re committed to knowing everything you can possibly know about your products – and you’d rather know it sooner than later – transparency is the ticket.

Eric Peterson, MD, of the Duke Clinical Research Institute outlined the rigorous, pragmatic framework his group has adopted for review of research requests and publication of associated findings – a model for commercial organizations to consider as well.

Marla Jo Brickman, PhD, of Pfizer and Judy Bryson, PharmD, of UCB Biosciences, Inc., described what it looks like to be an early adopter with a foundation in place and traction building. Both firms were committed to transparency before; what’s new is the more structured way it is done now.

Kald Abdallah, MD, PhD, Chief Project Data Sphere Officer, and Mark Lim of FasterCures reminded us of the value of data-sharing consortia. The questions are too complex for any of us to answer alone, but they might be answered by triangulating insights in many different places.

Places such as the Yale Open Data Access (YODA) Project. Karla Childers of Johnson & Johnson – which earlier this year announced its participation in YODA – described the process used to review data requests, which will make the data-sharing process independent yet collaborative.

Assuming the data are available and can be de-identified to preserve patient privacy, what is a reasonable request? What is the ideal? Our panelists agreed that shared data shouldn’t be used only to question the validity of previous primary research. The risk there is that by changing parameters and analytical techniques, it is possible to come up with contrary but specious conclusions – and the real value of secondary research is found in creating new knowledge.

This concern about poor analysis can be mitigated by carefully vetting requests – and it might not be that much of a concern to begin with. Ben Rotz of Eli Lilly noted that few research proposals seek to replicate previous research; the overwhelming majority are quests to create new science. McSorley concurred, noting that only 1 of 23 research proposals received was intended to confirm results from a previous GSK study.

Momentum is building. Take ClinicalStudyDataRequest.com for example. Initiated by GSK, the online portal now includes 10 sponsors – Bayer, Boehringer Ingelheim, GSK, Lilly, Novartis, Roche, Sanofi, Takeda, UCB and ViiV Healthcare. The site receives an average of 900 visitors a day and has had 219,000 unique visitors in the last 18 months, reported Jessica Scott, MD, JD, of GSK.

Clearly, we are moving from “how do we think this could work?” to “how is it working?” and “how can we improve this as we move forward?”

“We’ve overcome some of the cultural lock-in, the inertia in industry over the past couple of years since we’ve started this process – and have gone from commitments to implementing a system that’s actually up and running and working,” said Scott.

In fact, it has moved from concept to reality very rapidly. We’re beyond infancy in some areas, now evolving from small pilots to a sustainable model, applying early lessons to scale, respond to feedback, and accommodate the needs of the broader industry and the goals we all share.

“When people sign up to be subjects in an experiment, they make a tremendous sacrifice on society’s behalf,” said Krall. “Our responsibility is to make sure that sacrifice gets the best possible use. If the data can make a contribution – even if it’s a use not envisioned until years later – we have an obligation to make that possible.”

McSorley agreed: “Part of the mindset change is that it’s not our data, it’s data that belongs to the larger medical community.”

Stephen Freedland, MD, of Duke University School of Medicine channeled his inner JFK to remind us of the bigger picture: “Ask not what are the risks of data sharing, but what are the risks of not sharing data.”

OnDemand recordings of all the presentations from the fourth Clinical Trial Data Transparency Forum are available for viewing.


Controlling our destiny: Real-time, visual analytics can combat the spread of disease

The recent outbreak of the Zaire Ebola virus has garnered much media attention and calls for action at all levels of government. The current outbreak is already the gravest in history, and the CDC’s worst-case scenario predicts up to 1.4 million cases by late January (correcting for underreporting). The epidemic in West Africa has become so widespread that Ebola could become a permanent presence – and thus pose a persistent threat to other parts of the world.

We have seen such pandemics before: smallpox in 1633, the Spanish flu in 1918, and syphilis in antiquity. In fact, viral outbreaks throughout history have been so common and so prevalent that some scientists have hypothesized that viruses played a role in our own evolution.

It is time to evolve again. We now have the tools to slow down and even stop epidemics, and those tools start with analytics. Not the wonky, dissertation-style analytics that show up in obscure statistical reports – I’m referring to analytics done in real time, by the people on the front lines fighting the epidemic. To stop Ebola, we must be able to deploy real-world analysis to non-expert users instantly so they can act on the results immediately.

Welcome to the world of visual analytics.

Epidemiologists have described it as “a technique aiding data analysis and decision making that allows for a better understanding of the context of complex systems.” That it certainly is. And it has the potential to make a difference in this epidemic. In the US, rules have gone into effect requiring that all travelers flying in from Liberia, Sierra Leone or Guinea undergo strict screening procedures. But will that be enough? Perhaps not.

At SAS I have the privilege of interacting with data scientists from a wide variety of disciplines who specialize in the real-time analysis of large datasets. SAS develops tools that enable real-time detection of fraudulent activity, used mostly by the financial services industry. These tools combine a wide variety of approaches – social network analysis, business rules, forecasting and predictive analytics – to determine in near-real time where and when fraud happens. The same types of analytics can be deployed in the fight against viral epidemics. A screener detects a traveler with a high temperature. A school nurse finds a fever in a schoolchild. An emergency room sees a spike in feverish patients. Are these cases Ebola? If not, what are they? And more importantly, are they contagious?

By combining flu trends, disease trajectories and geospatial information with passport records, financial transactions (such as where and when an airline ticket was purchased) and information gleaned from social networks, it’s possible to build models to predict the cause of that fever. Doing so would not only help stop Ebola; it would help stop the spread of any contagious disease. Developing this capacity would usher in a new era in our relationship with pathogens. Unlike our ancestors, who had to resign themselves to fate, we can – through clinical analytics and rapid diagnostic testing – actively engage and control the viruses that make us.
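The kind of model described above can be sketched, very loosely, as a rules-plus-weights triage score that fuses several signals about a feverish patient. This is an illustrative toy only – not a SAS product or a real epidemiological model – and every feature name, weight and threshold below is invented for the sketch.

```python
# Illustrative toy only: combine several hypothetical signals about a
# feverish patient into a single triage score. All feature names,
# weights and thresholds are invented for this sketch.
from dataclasses import dataclass


@dataclass
class FeverCase:
    temperature_c: float          # measured body temperature
    recent_travel_affected: bool  # travel from an affected region recently
    local_flu_prevalence: float   # 0..1, share of local fevers explained by flu
    known_contact: bool           # contact-tracing link to a confirmed case


def triage_score(case: FeverCase) -> float:
    """Combine signals into a 0..1 score; higher means escalate faster."""
    score = 0.0
    if case.temperature_c >= 38.0:
        score += 0.2
    if case.recent_travel_affected:
        score += 0.4
    if case.known_contact:
        score += 0.3
    # High local flu prevalence makes a mundane explanation more likely,
    # so it lowers the score for the rarer disease.
    score += 0.1 * (1.0 - case.local_flu_prevalence)
    return min(score, 1.0)


# A traveler from an affected region with a contact link scores high...
hot = FeverCase(39.1, True, 0.2, True)
# ...while a fever during peak flu season with no travel history scores low.
mild = FeverCase(38.4, False, 0.9, False)
print(triage_score(hot) > 0.8, triage_score(mild) < 0.4)  # prints: True True
```

A real system would learn the weights from data and update them as the outbreak evolves; the point here is only that heterogeneous signals can be fused into one actionable score a non-expert can act on.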


A new world of trust and transparency for clinical trial information

On Monday, September 29, the European Ombudsman organized a panel discussion on “International Right to Know Day,” a day established in 2002 by access-to-information advocates from around the world. This year, the panel’s theme was “Transparency and public health – how accessible is scientific data?”

This topic was well chosen for a week in which the board of the European Medicines Agency (EMA) published its long-awaited policy on publication of clinical data (1). The panel at the European Parliament consisted of representatives of all the stakeholder groups that gave input to EMA’s draft policy over the preceding years. The European Ombudsman, Emily O’Reilly, opened the discussion by saying that while much good comes from the pharmaceutical industry, more trust is needed to convince patients that therapies are working – and the only way to create that trust is by opening up clinical trial results and data.

The actions of the industry will silence the critics

Both Ben Goldacre (physician and author of the book Bad Pharma) and Margrete Auken (European Parliament member and shadow rapporteur of the European Union’s new 2014 clinical trial regulation) expressed a lingering distrust of how the pharmaceutical industry is releasing data. Richard Bergström of the European Federation of Pharmaceutical Industries and Associations (EFPIA) presented the great progress the European pharmaceutical industry has made in the last year in its drive to release clinical trial data in a controlled manner. “This is unprecedented,” said Bergström, with many EFPIA members going beyond the principles his organization has laid out. These principles include sharing all information produced during a clinical trial, including Clinical Study Reports (CSRs) and the complete set of (anonymized) individual patient data (IPD).

The European Medicines Agency releases their transparency policy

The EMA policy describes what clinical trial information will be released, when it will be released, and how EMA itself intends to make it available to interested researchers. Guido Rasi, the Executive Director of EMA, received most of the attention, the policy having been published that same week. Rasi pointed out that EMA was under no legal obligation to release the clinical trial information owned by the pharmaceutical companies, but did so to increase public trust in regulatory decisions about new products. According to Rasi, the policy intends to strike a balance between releasing clinical trial data and protecting the commercial interests of pharmaceutical companies. Meanwhile, GSK decided last year – as the first global pharmaceutical company – to open up all its clinical trial information, including anonymized IPD, to external researchers. The pharmaceutical giant now lets external researchers apply for access to a clinical trial, has an independent panel review their requests, and provides a secure online data and analysis environment (SAS® Clinical Trial Data Transparency) in which applicants can access and re-analyze the patient-level clinical trial data.

Europe leads the way in transparency of clinical trials

The “EMA policy on publication of clinical data for medicinal products for human use” – as it is titled – becomes effective January 1, 2015, and reflects a legal obligation for transparency in the new European Clinical Trials Regulation No 536/2014, adopted in late May 2014. The agency will implement the policy step by step, releasing Clinical Study Reports (or parts of them) first; IPD might follow under a follow-on policy. EMA will release only certain modules of the CSRs, such as:

  • Clinical overviews (module 2.5 of the ICH E3 guidelines), clinical summaries (module 2.7), and clinical study reports (module 5: 16.1.1 – protocol and protocol amendments, 16.1.2 – sample CRF, and 16.1.9 – statistical methods).

Sponsors can redact commercially confidential information (CCI), and these redactions must be approved by the EMA. At a later date, the EMA will detail how and when individual patient data will be released. The policy released on October 2, however, defines two levels of access:

  • A simple registration process will provide access to the information in screen-only mode (no print capability).
  • A second level (for academics and non-commercial users) will require proof of identity and enable downloading and saving information.

Towards full transparency of clinical trial information, step-by-step

In my view, the EMA policy is a great step forward that will contribute to a better understanding of the regulatory decisions that result in approval or rejection of marketing authorization applications (MAAs). It should, however, be seen only as complementary to the industry’s initiatives in providing complete information – including complete (but redacted) CSRs, blank CRFs, IPD, protocol information and other types of supporting information from historical clinical trials (2). For example, at least 19 organizations are listed on EFPIA’s transparency website, and 9 organizations have now joined GSK in allowing access to anonymized patient-level data on ClinicalStudyDataRequest.com. After an independent review board approves their requests, researchers can access an advanced statistical computing environment and a multi-sponsor repository where they can analyze and compare trials from different sponsors and extract new clinical knowledge about the medicinal products and devices.

If you are an academic researcher, you can now turn to different organizations for information about medical products: to the regulators for regulatory decisions and submitted reports and to pharmaceutical companies for the detailed trial information – including IPD and the ability to re-analyze the data and compare competing or complementary products.

The future of data transparency

I believe that both access and information-sharing systems will continue to thrive over the long term and provide complementary benefits to the public and to external researchers. A growing list of pharmaceutical companies is now fully committed to providing detailed trial information and encouraging secondary analysis; for example, they are discussing how to apply clinical data standards such as CDISC to bring de-identified data together, and methods for de-identifying the data (with help from industry groups such as PhUSE and TransCelerate).
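To make the de-identification discussion concrete, here is a minimal sketch of two common techniques: replacing direct identifiers with pseudonyms and shifting a patient’s dates by a per-patient offset so that intervals between events survive while actual dates do not. The field names, salt handling and rules below are invented for illustration; real de-identification follows far more rigorous guidance (for instance from PhUSE working groups).

```python
# Illustrative toy only: a minimal de-identification pass over one record.
# Field names, the salt scheme and the top-coding rule are invented here;
# real de-identification is governed by much more detailed guidance.
import hashlib
from datetime import date, timedelta

SALT = "study-specific-secret"  # in practice, kept apart from released data


def pseudonymize(patient_id: str) -> str:
    """Replace a direct identifier with a stable, irreversible pseudonym."""
    return hashlib.sha256((SALT + patient_id).encode()).hexdigest()[:12]


def shift_date(d: date, offset_days: int) -> date:
    """Shift a date by the patient's fixed offset, preserving intervals
    between that patient's events while hiding the true calendar dates."""
    return d + timedelta(days=offset_days)


record = {"patient_id": "SUBJ-0042", "visit_date": date(2014, 3, 15), "age": 91}
offset = -17  # in practice, drawn at random once per patient

deidentified = {
    "patient_pseudo_id": pseudonymize(record["patient_id"]),
    "visit_date": shift_date(record["visit_date"], offset),
    # Top-code extreme ages, which are themselves identifying.
    "age": min(record["age"], 89),
}
print(deidentified["visit_date"], deidentified["age"])  # 2014-02-26 89
```

The design point is that pseudonyms must be consistent across datasets (so a patient’s records still link together) yet not reversible, which is why a salted hash rather than a sequential renumbering is sketched here.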

I’m hoping that academic trial research centers will now open up their information as well and consider providing centralized access to the data of the clinical trials they run – preferably in the same multi-sponsor environment the industry is currently using. While much progress has been made, all the stakeholders involved will gain maturity and experience as researchers start making discoveries. Researchers can now make full use of these complementary possibilities, mine the clinical trials for important confirmatory and secondary findings, and publish high-quality research that further increases the trust of patients and physicians in the medicines and devices approved for use by health care providers. After all, “the right to know” – the theme of the European Parliament panel – can only be realized if researchers can make sense of the data in an advanced analytical environment.

  1. Publication of clinical reports: EMA adopts landmark policy to take effect on 1 January 2015.
  2. Krumholz et al. Sea Change in Open Science and Data Sharing: Leadership by Industry. Circ Cardiovasc Qual Outcomes. 2014;7:499-504.


  • About this blog

    Welcome to the SAS Health and Life Sciences blog. We explore how the health care ecosystem – providers, payers, pharmaceutical firms, regulators and consumers – can collaboratively use information and analytics to transform health quality, cost and outcomes.
