Sometimes we’re so wrapped up in the day-to-day activities of conducting a clinical trial that we forget to perform some of the more obvious data checks. Data cleaning activities that seem unlikely to bear fruit because “I can’t believe they would do this” or “no way they could mess this up” are always worth performing. More often than not, you’ll identify something that requires your attention. One often-overlooked example is the visit dates themselves. Sure, we may have validation checks to ensure that Visit (k-1) occurs before Visit k for all values of k, but do we bother looking at the significance of the actual day?
Clinical trials take place in the real world and are subject to the forces that govern it, even those that are seemingly less important than the disease under study. Consider weekends and holidays, and ask yourself the following questions. Is it likely that a trial participant would show up for a study visit on a major holiday? Would a physician’s office be open on a Saturday? Sunday? Now consider natural disasters. How likely is it for a clinical site in the Northeast to have conducted study visits immediately after Hurricane Sandy rolled through town, causing major flooding and disrupting supply lines and travel? What about past public health events surrounding swine and avian flu? SARS? Is it likely for scheduling at study sites located in these areas to go unaffected?
JMP Clinical has a new analytical process (AP) to aid in identifying the significance of visit dates. For each visit in the database, Weekdays and Holidays will identify the day on which the visit occurs and whether the visit coincides with a major U.S. or Canadian holiday. Figure 1 summarizes findings from the Nicardipine trial. Of course, given the severe condition of the subjects under study, it is not unreasonable for visits to occur on weekends or major holidays. However, significant differences between sites or a majority of hits at a small number of sites may suggest further review is necessary.
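The idea behind such a check is straightforward, and a minimal sketch is easy to write in any language. The snippet below (Python, not JMP's implementation; the function name, the 90%-style logic, and the tiny fixed-date holiday list are all illustrative assumptions) flags a visit date that lands on a weekend or on a handful of fixed-date U.S. holidays:

```python
from datetime import date

# Illustrative, hypothetical holiday list -- a real check would use a
# complete U.S./Canadian holiday calendar, including floating holidays.
FIXED_HOLIDAYS = {(1, 1): "New Year's Day",
                  (7, 4): "Independence Day",
                  (12, 25): "Christmas Day"}

def flag_visit(visit_date: date):
    """Return the reasons (if any) a visit date deserves a closer look."""
    flags = []
    if visit_date.weekday() >= 5:   # weekday(): 5 = Saturday, 6 = Sunday
        flags.append(visit_date.strftime("%A"))
    holiday = FIXED_HOLIDAYS.get((visit_date.month, visit_date.day))
    if holiday:
        flags.append(holiday)
    return flags

# Example: Christmas Day 2011 fell on a Sunday, so both flags fire.
print(flag_visit(date(2011, 12, 25)))  # ['Sunday', 'Christmas Day']
```

Running this over every visit date in the database and tabulating the flags by site gives exactly the kind of summary described above: counts that are easy to compare across sites.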
Other visit-date criteria that may be cause for concern include perfect or near-perfect visit attendance on the expected study day. Even if attendance is not perfect, a comparison of visit day distributions between a site and all others combined may suggest potential problems with scheduling. Figure 2 summarizes the proportion of a particular study visit that occurs on each study day; a version of this figure is presented in Weir & Murray using data from Table IV of Buyse et al. Is it possible for a single site to maintain such precision at a single visit? Perhaps. But if the distributions of the remaining study visits are equally discrepant compared to the other study sites, perhaps a site visit is in order. For each clinical site, the Perfect Scheduled Attendance AP will compare the distribution of each visit to all other sites in the trial. While these figures may not uncover fraud per se, they can help identify sites that may require additional training and reminders of the importance of adhering to the study protocol.
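To make the site-versus-all-others comparison concrete, here is a simple sketch of that idea (again Python; this is not the Perfect Scheduled Attendance AP, and the 90% concentration cutoff and 30% gap are hypothetical thresholds chosen purely for illustration). It flags any site where a single study day accounts for nearly all of that site's visits while the pooled remaining sites show a normal spread:

```python
from collections import Counter

def day_proportions(days):
    """Proportion of visits falling on each study day."""
    counts = Counter(days)
    total = len(days)
    return {day: n / total for day, n in counts.items()}

def flag_precise_sites(visits_by_site, threshold=0.9, min_gap=0.3):
    """Flag sites whose most common study day is suspiciously dominant
    (>= threshold of the site's visits) relative to all other sites."""
    flagged = {}
    for site, days in visits_by_site.items():
        props = day_proportions(days)
        top_day, top_prop = max(props.items(), key=lambda kv: kv[1])
        # Pool the study days from every other site for comparison.
        others = [d for s, ds in visits_by_site.items() if s != site
                  for d in ds]
        other_prop = day_proportions(others).get(top_day, 0.0)
        if top_prop >= threshold and top_prop - other_prop > min_gap:
            flagged[site] = (top_day, top_prop)
    return flagged

# Example: Site A hits study day 28 every time; Sites B and C vary.
visits = {"A": [28] * 10,
          "B": [26, 27, 28, 29, 30, 28, 27, 31, 25, 29],
          "C": [27, 28, 30, 26, 29, 28, 32, 27, 28, 30]}
print(flag_precise_sites(visits))  # {'A': (28, 1.0)}
```

A production version would run this per visit and per site, and a formal comparison (e.g., a chi-square or similar test of the two distributions) would replace the ad hoc thresholds, but even this crude rule surfaces the kind of site shown in Figure 2.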