Research is ongoing, and its application remains essential to the continued development of any field. However, translating research into practice can be subjective and difficult to implement, and the methods practitioners use to apply new findings vary from one clinician to another. Professionals therefore need a proven methodology that builds implementation skills. The Physical therapist-driven Education for Actionable Knowledge (PEAK) program aims to do just that. The purpose of this study was to assess the feasibility of the PEAK program with respect to practical implementation, participant reaction, and potential for association with change in participants’ evidence based practice (EBP) attitudes, self-efficacy, knowledge and skills, and self-reported behavior.
The study used a mixed methods triangulation design to simultaneously collect and analyze quantitative and qualitative data from a cohort of physical therapists practicing at the University of Southern California (USC). Therapists practiced in one of three geographically separate USC patient care centers (2 outpatient; 1 inpatient). The PEAK program lasted 6 months, and the recruited physical therapists were required to have a minimum of 6 months’ clinical experience, provide patient care at USC at least 20 hours per week, attend both days of an introductory workshop, and be willing to commit to study activities at least 1 hour per month for the duration of the program.
Using the evidence-based practice beliefs scale to assess initial attitudes, physical therapists were asked to rate their participation in developing their Best Practices List. A qualitative assessment was conducted through interviews that explored a range of experiences and subjective reactions to the PEAK program. Participants were initially asked to describe their own engagement in, and reaction to, the PEAK program. They were then prompted to describe the impact of the program on their EBP attitudes, self-efficacy, knowledge, skills, and practice behaviors. They were also asked to consider, from their professional experience, whether the program provided a benefit to patients. Finally, they were asked to comment on the feasibility of transferring the PEAK program to other clinical settings.
To achieve concurrent triangulation, two authors conducted an integrated discussion and analysis of the quantitative and qualitative data, synthesizing both into meaningful information about the feasibility of the PEAK program. The study reached several conclusions. The collaborative nature of PEAK was engaging and motivating. Participants experienced improved self-efficacy, which created a positive cycle in which success reinforced engagement with the research evidence. Participants believed that the process of using relevant research evidence to develop the Best Practices List would lead to better patient outcomes. However, participants’ need to understand how to interpret statistics was not fully met by the PEAK program. Limitations of the study included a small number of participants, a participant population that lacked diversity, and an analysis that did not assess long-term outcomes. In addition, modifications to the quantitative assessment tools undermined the validation of the original instruments.
The study was well designed to establish the feasibility of the PEAK program. Measurement and evaluation tools such as the evidence based practice (EBP) beliefs scale, along with the assessment of initial attitudes toward the application of research, allowed observers to understand the therapists’ point of view. However, it would have been helpful had the researchers combined this approach with an evaluation of whether the therapists’ viewpoints were accurate. Did the increase in confidence lead to corresponding increases in the application of research? Did observers notice improvement, or did the therapists simply believe that change had happened? Observers could have measured change objectively, perhaps using a frequency measurement tool or by timing the duration of a specified behavior. Believing something is right, absent measurable justification, does not make it right. If perceptions of increased competence in applying research were all that was gained, was anything gained at all? Unfortunately, self-evaluation through a self-report system may have introduced bias and an unfounded expectation of change.