Move beyond the 'whys' of CDISC and bridge the gap between theory and practice


I am writing this post with the satisfaction of having Implementing CDISC Using SAS: An End-to-End Guide, Second Edition completed and on the shelf. Once again, it was my pleasure to collaborate with Chris Holland, and I am glad that we had a chance to update the book with current software and standards. That leaves me thinking about where we are right now in terms of clinical trials standards and compliance, and I am a bit concerned.

Fifteen years ago, we had the birth of what we know as the CDISC SDTM, and ADaM was in its infancy. Now here we are, and the SDTM, ADaM, and Define-XML submission data standards have matured and grown. Oh, how they have grown. When I teach ADaM, I am often amazed that students haven't read all of the model documentation, but should I be? If you take the current basic ADaM documentation, which includes the model, the implementation guide, the time-to-event document, OCCDS, and the examples document, you are looking at reading 305 pages. Define-XML is a sprightly read at 98 pages, and the current SDTM documentation clocks in at a meaty 469 pages. For those keeping track at home, that is 872 pages of FDA submission data standards, and that doesn't even include the new therapeutic area user guides (TAUGs). Include those and you're likely at a nice round 1,000 pages of submission standards to understand.

Those 1,000 pages of clinical data standards continue to grow, and so do their interpretations. Define-XML is fairly rigid, the SDTM a bit less so, and ADaM even less rigid. This lack of rigidity leads to multiple interpretations of the standards across implementations in the industry. To add to that complication, we now have various regulatory requirements layered on top of the CDISC requirements. The FDA has a technical conformance guide that adds requirements beyond the CDISC standards, and now it has a new technical rejection criteria document as well. There are also the PMDA CDISC submission requirements in Japan, as well as the additional checks found within the Pinnacle 21 validation tool. Add it all up and you have a lot more to understand beyond the 1,000 pages of base standards.


As the standards teams continue to evolve the CDISC standards, and as we get evolving interpretations and additional requirements layered over the base standards, things are getting a tad complex. At times, it has me wondering if this increasing standards complexity is the way to go. It is worth noting that CDISC isn't even the only clinical data standards game in town. As clinical research evolves to be based more on hospital electronic health records (EHRs), we can expect models such as HL7 v3 or FHIR to play a greater role. How this all works out remains to be seen, as the CDISC and HL7 worlds are eventually truly bridged, and not just BRIDG'd.

We need an easy button for clinical data submissions. Have we created one yet?

About Author

Jack Shostak

Associate Director of Statistics at Duke Clinical Research Institute and SAS Press author

Jack Shostak, Associate Director of Statistics, manages a group of statistical programmers at the Duke Clinical Research Institute. A SAS user since 1985, he is the author of SAS Programming in the Pharmaceutical Industry, Second Edition, and coauthor of Common Statistical Methods for Clinical Research with SAS Examples, Third Edition, as well as Implementing CDISC Using SAS: An End-to-End Guide.
