

Physical inactivity is a major public health problem, with 23% of adults worldwide not meeting recommended levels of physical activity (35% and 40% in the United States and the United Kingdom, respectively). Many smartphone apps and wearables designed to improve physical activity are available. They often use data from in-device sensors to provide self-monitoring and feedback. The potential of apps and wearables to increase physical activity and ultimately improve health outcomes, such as management of cardiovascular disease, obesity, and type 2 diabetes, has been widely recognized.

However, evaluating the impact of physical activity technologies can be challenging because of the rapid rate at which they evolve. Randomized controlled trials (RCTs), the “gold standard” of effectiveness evaluations, can take several years to conduct and require interventions to remain stable and unchanged throughout this period. Consequently, researchers have emphasized the need for greater “efficiency” (ie, rapid, responsive, and relevant, or “agile” research) when evaluating mobile health (mHealth) technologies.

Evaluating the effectiveness of mHealth technologies can be particularly challenging because of their “complexity”. Physical activity apps and wearables often contain multiple components, which can interact with context and produce different outcomes for different people in different settings. To understand overall effectiveness, studies should evaluate real-world engagement with, and response to, an intervention; measuring these factors alongside effectiveness can help interpret and explain variation in effectiveness outcomes (ie, why the intervention worked or did not work). Accordingly, mHealth researchers have been encouraged to assess “engagement” and “acceptability”. However, how to define and distinguish these constructs is still a subject of debate; for example, some digital health researchers have conceptualized engagement as a behavioral construct, whereas others propose that it is composed of both behavioral and subjective components. The latter view produces overlap between engagement and acceptability; for clarity in this review, we therefore define “engagement” as users’ interaction and usage behavior (ie, a purely behavioral construct) and “acceptability” as users’ subjective perceptions and experiences.
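
To make this behavioral definition of engagement concrete, the sketch below derives simple usage metrics (number of sessions, active days, and total minutes of use) from a hypothetical app event log; the log structure and field names are illustrative assumptions rather than a prescribed format.

```python
# Illustrative only: computes purely behavioral engagement metrics
# (sessions, active days, total minutes of use) from a hypothetical
# app-usage event log. The log format and field names are assumptions.
from datetime import datetime

# Hypothetical log: one record per app session, with ISO-8601 timestamps.
usage_log = [
    {"user": "p01", "start": "2024-05-01T08:00:00", "end": "2024-05-01T08:07:30"},
    {"user": "p01", "start": "2024-05-01T19:12:00", "end": "2024-05-01T19:15:00"},
    {"user": "p01", "start": "2024-05-03T07:55:00", "end": "2024-05-03T08:02:00"},
]

def engagement_metrics(log, user):
    """Summarize one user's interaction and usage behavior."""
    sessions = [r for r in log if r["user"] == user]
    starts = [datetime.fromisoformat(r["start"]) for r in sessions]
    ends = [datetime.fromisoformat(r["end"]) for r in sessions]
    total_minutes = sum((e - s).total_seconds() for s, e in zip(starts, ends)) / 60
    return {
        "n_sessions": len(sessions),
        "active_days": len({s.date() for s in starts}),
        "total_minutes": round(total_minutes, 1),
    }

print(engagement_metrics(usage_log, "p01"))
# -> {'n_sessions': 3, 'active_days': 2, 'total_minutes': 17.5}
```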

To increase the efficiency of mHealth evaluations, particular research designs and data collection methods have been recommended. Single-case designs or “n-of-1” studies, in which participants serve as their own control, may be conducted relatively quickly and easily using mHealth technology. To test the impact of individual components, quick factorial approaches have been developed, including the multiphase optimization strategy (MOST), which rapidly tests many experimental conditions, and Sequential Multiple Assignment Randomized Trials and micro-randomized trials, which both evaluate components that adapt over time. To evaluate overall effectiveness, the Continuous Evaluation of Evolving Behavioral Intervention Technologies approach was developed to test multiple versions of an app simultaneously.
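
The sketch below is a minimal illustration of the decision-point logic underlying a micro-randomized trial: whether a prompt is delivered is randomized at each decision point for available participants, and each decision is logged for later analysis. The 0.5 randomization probability and the availability rule are assumptions made for the example, not recommendations.

```python
# Illustrative sketch of a micro-randomized trial (MRT) decision point:
# at each decision point, an available participant is randomized to
# receive (or not receive) a prompt with a fixed probability, and the
# decision is logged for later analysis. The 0.5 probability and the
# availability rule are assumptions made for this example.
import random

RANDOMIZATION_PROB = 0.5  # assumed probability of delivering the prompt when available

def micro_randomize(participant_id, decision_point, available, rng=random):
    """Return a log record for one decision point."""
    treated = available and (rng.random() < RANDOMIZATION_PROB)
    return {
        "participant": participant_id,
        "decision_point": decision_point,
        "available": available,
        "prompt_sent": treated,
        "prob": RANDOMIZATION_PROB if available else 0.0,
    }

# Example: five decision points in one day for one participant,
# who is unavailable at decision point 2.
random.seed(1)
log = [micro_randomize("p01", t, available=(t != 2)) for t in range(5)]
for record in log:
    print(record)
```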

To improve the efficiency of data collection, researchers can capitalize on the technological capabilities of consumer devices. In-device sensors (ie, accelerometers, gyroscopes, and other sensors embedded in smartphones and wearables) can be used to measure outcomes objectively. Smartphones and wearables can also automatically record user interactions and app use. Their internet connectivity and ability to collect continuous, high-density data remotely can improve efficiency over other “intermittent and limited” methods, such as questionnaires and traditional pedometers.
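
As an example of sensor-based outcome measurement, the sketch below reduces raw tri-axial accelerometer samples to a crude “active seconds” outcome. The 10 Hz sampling rate, gravity-subtraction step, and activity threshold are arbitrary assumptions; real evaluations would rely on a device’s validated processing pipeline or published cut-points.

```python
# Illustrative sketch: deriving a simple objective activity outcome from
# raw tri-axial accelerometer samples. The 10 Hz sampling rate and the
# "active" threshold are arbitrary assumptions, not validated cut-points.
import math

def vector_magnitude(x, y, z):
    """Acceleration magnitude with gravity (1 g) subtracted, clipped at zero."""
    return max(math.sqrt(x * x + y * y + z * z) - 1.0, 0.0)

def active_seconds(samples, hz=10, threshold=0.1):
    """Count seconds whose mean magnitude exceeds the assumed activity threshold."""
    mags = [vector_magnitude(x, y, z) for x, y, z in samples]
    seconds = [mags[i:i + hz] for i in range(0, len(mags), hz)]
    return sum(1 for sec in seconds if sec and sum(sec) / len(sec) > threshold)

# Example: 3 seconds of fabricated samples (1 second still, then 2 seconds of movement).
still = [(0.0, 0.0, 1.0)] * 10
moving = [(0.3, 0.1, 1.1)] * 20
print(active_seconds(still + moving))  # -> 2
```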
