According to the researchers, a model built on self-reports from rheumatoid arthritis (RA) patients agreed well with conventional physician-led assessments of disease activity.
Machine learning applied to data from an earlier RA drug trial, focusing on participants' assessments of pain and of physical and social function, yielded an overall measure of RA disease activity with a positive predictive value (PPV) approaching 90% against the benchmark Clinical Disease Activity Index (CDAI) scores compiled by physicians, according to Jeffrey Curtis, MD, MPH, of the University of Alabama at Birmingham, and colleagues.
"This approach holds promise for generating real-world evidence in common circumstances where physician-derived disease activity data are not available" but patient self-reports are, the group wrote in ACR Open Rheumatology. It could be particularly useful for assessing treatment response when patients start a new biologic drug, they added.
Patients seen via telemedicine are one such circumstance, Curtis and colleagues noted. The CDAI and its close cousin, the Disease Activity Score in 28 joints (DAS28), are the standard instruments for judging disease activity. Both require physicians to perform hands-on examinations and therefore require patients to come to the clinic in person. Management could be more efficient if patients could simply report their own assessments remotely – especially since their subjective experience is what matters most to them anyway.
To test their hypothesis, Curtis and colleagues drew on the AWARE study, in which 1,270 RA patients were followed for 2 years while taking one of two biologics. That study collected participants' self-reports on a variety of outcomes: pain intensity and its interference with daily life, physical function, and social participation, as well as fatigue, sleep, anxiety, depressive symptoms, and overall condition. CDAI assessments were also performed.
The researchers sought a model that, from patient-reported data alone, would accurately classify whether patients had CDAI scores of 10 or less (the accepted threshold for "low disease activity") at visits between months 3 and 12 of treatment. Of the 1,270 patients, 494 had clinic visits after the first 3 months and provided self-assessments both at baseline and at subsequent follow-up. A random 80% of this group was used to train the model; the remaining 20% was held out for testing.
The best performance came from a random forest model focused on pain measures and on social and physical functioning.
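The workflow the article describes can be sketched in a few lines of Python. This is a minimal illustration on simulated data: the feature names, effect sizes, and scikit-learn settings below are assumptions for demonstration, not the study's actual data or implementation.

```python
# Illustrative sketch only: a random forest classifying low disease activity
# (CDAI <= 10) from simulated patient-reported outcomes, with the 80/20
# train/test split described in the article. All data here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 494  # patients with complete baseline and follow-up self-reports

# Simulated self-reported scores (0-100): pain intensity, physical function,
# social participation. Higher pain / lower function suggests active disease.
pain = rng.uniform(0, 100, n)
phys_fn = rng.uniform(0, 100, n)
social = rng.uniform(0, 100, n)
X = np.column_stack([pain, phys_fn, social])

# Synthetic label: low disease activity, loosely driven by the simulated
# self-reports plus noise (an assumption, not the study's outcome model).
latent = 0.5 * pain - 0.3 * phys_fn - 0.2 * social + rng.normal(0, 10, n)
y = (latent < 10).astype(int)  # 1 = CDAI <= 10 ("low disease activity")

# 80% of patients train the model; 20% are held out for evaluation.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
print(f"held-out accuracy: {accuracy:.2f}")
```

On real patient-reported data, model selection and hyperparameter tuning would of course be done with cross-validation on the training split, not the held-out test set.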
As is typical of this kind of predictive modeling, PPV varied inversely with sensitivity: tuned to 100% sensitivity, the model's PPV was approximately 79%; at 45% sensitivity, the PPV was 89%. The model's overall accuracy was around 80%, the researchers said.
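That tradeoff comes from where the classification threshold is set on the model's predicted probabilities: a lower threshold catches more true positives (higher sensitivity) but admits more false positives (lower PPV). A minimal sketch, using synthetic predictions whose numbers are purely illustrative:

```python
# Sketch of the sensitivity/PPV tradeoff: sweeping the probability threshold
# of a classifier trades sensitivity against PPV (precision). Scores and
# labels below are synthetic, not the study's results.
import numpy as np

rng = np.random.default_rng(1)
n = 99  # size of a 20% held-out test set (illustrative)
y_true = rng.integers(0, 2, n)  # 1 = low disease activity
# Simulated predicted probabilities: higher on average for true positives,
# but with overlap so the two classes are imperfectly separated.
p_hat = np.clip(rng.normal(0.35 + 0.3 * y_true, 0.2), 0.0, 1.0)

results = {}
for thresh in (0.2, 0.5, 0.8):
    y_pred = (p_hat >= thresh).astype(int)
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    sens = tp / (tp + fn) if (tp + fn) else float("nan")
    ppv = tp / (tp + fp) if (tp + fp) else float("nan")
    results[thresh] = (sens, ppv)
    print(f"threshold={thresh:.1f}  sensitivity={sens:.2f}  PPV={ppv:.2f}")
```

Raising the threshold can only shrink the set of predicted positives, so sensitivity never increases with the threshold, while PPV generally rises as the remaining positives become more reliable.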
This particular model is not ready for clinical use, Curtis and colleagues cautioned. Their pilot study had important limitations, including insufficient data from more than half of AWARE participants and the fact that AWARE did not collect all potentially relevant information (such as details of comorbidities).
“Additional validation with similar datasets derived from routine care settings, perhaps combined with [electronic records] data, may further extend the utility and support the validity of this approach and its practical implementation,” the group wrote.
Additionally, they observed, “patients could be effectively trained to perform their own self-reported joint counts, thereby improving classification accuracy” when combined with their subjective ratings.
The study had no specific funding. The authors declared no relevant financial interests.