Matched cohort study
Matching is not uncommon in epidemiological studies and refers to the selection of unexposed subjects, i.e. controls, that are identical to the cases in certain important characteristics. Matching is most frequently used in case-control studies, but it can also be used in cohort studies. The matching procedure is often directed towards classical background factors such as sex and age.
Faresjö, T. et al., Int. J. Environ. Res. Public Health (2010). doi:10.3390/ijerph7010325
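The matching step itself can be sketched as a greedy 1:1 pairing; the subject fields (`sex`, `age_band`) below are illustrative assumptions, not part of the cited study:

```python
def match_controls(exposed, unexposed, keys=("sex", "age_band")):
    """Greedy 1:1 matching: pair each exposed subject with one
    unexposed subject identical on the matching keys."""
    pool = list(unexposed)           # controls still available
    pairs = []
    for case in exposed:
        for i, ctrl in enumerate(pool):
            if all(case[k] == ctrl[k] for k in keys):
                pairs.append((case, ctrl))
                del pool[i]          # each control is used at most once
                break
    return pairs
```

Greedy matching is the simplest variant; real studies may use incidence-density or optimal matching instead.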
(Fractional-)factorial (ANOVA) design
Evaluation of eHealth treatments often occurs via randomized clinical trials. While there is a vital role for such trials, they often do not provide as much information as alternative experimental strategies. For instance, engineering researchers typically use highly efficient factorial and fractional-factorial designs that allow for the testing of multiple hypotheses or interventions with no loss of power even as the number of tested interventions increases.
Baker, T. B. et al., Journal of Medical Internet Research (2014). doi:10.2196/jmir.2925
Collins, L. M. et al., Am. J. Prev. Med. (2007). doi:10.1016/j.amepre.2007.01.022
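As a sketch of how such designs are constructed, the following enumerates a full two-level factorial and a standard half-fraction (keeping the runs whose coded levels multiply to +1, i.e. the defining relation I = AB…K); this is a generic construction, not taken from the cited papers:

```python
from itertools import product

def full_factorial(k):
    """All 2**k combinations of k two-level factors, coded -1/+1."""
    return [tuple(levels) for levels in product((-1, 1), repeat=k)]

def half_fraction(k):
    """Half-fraction: keep the runs whose factor levels multiply to +1
    (the generator I = AB...K), halving the number of runs."""
    keep = []
    for run in full_factorial(k):
        sign = 1
        for level in run:
            sign *= level
        if sign == 1:
            keep.append(run)
    return keep

design = half_fraction(3)   # 4 of the 8 runs of a 2**3 design
```

Each run corresponds to one combination of intervention components switched on (+1) or off (-1).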
Action research
Action research (AR) is used to both understand and assist eHealth implementation in complex social settings. The AR method provides an insightful technique for studying information systems development (ISD) process across time and across technologies and contexts. Defined as “an inquiry into how human beings design and implement action in relation to one another”, the purpose of AR is to observe and create effective organizational change.
Chiasson, M. et al., Int. J. Med. Inform. (2007). doi:10.1016/j.ijmedinf.2006.10.001
Adaptive design
This is an alternative clinical trial design in which accumulating trial data are used to make preplanned changes to the design. Usually, part of an adaptive design is to specify in advance a predictive model that uses intermediate or surrogate endpoints to predict the final primary effectiveness endpoint. Based on posterior predictive probability calculations, this model helps to decide when to stop recruiting further patients into the trial; this is especially helpful in studies with long-term endpoints when the intermediate endpoints are thought to be predictive.
Campbell, G. et al., J. Biopharm. Stat. (2016). doi:10.1080/10543406.2015.1092037
Law, L. M. et al., International Journal of Medical Informatics (2014). doi:10.1016/j.ijmedinf.2014.09.002
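A minimal sketch of one such calculation, assuming a single-arm trial with a binary endpoint and a Beta prior on the response rate: the posterior predictive probability that the completed trial will reach a required number of successes, which could feed a pre-specified futility stopping rule.

```python
import math

def log_beta(a, b):
    """log of the Beta function, via log-gamma for numerical stability."""
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def predictive_prob_success(s, n, n_final, s_needed, a=1.0, b=1.0):
    """Posterior predictive probability that, after n_final patients,
    total successes reach s_needed, given s successes in n interim
    patients and a Beta(a, b) prior on the response rate."""
    m = n_final - n                      # patients still to enrol
    post_a, post_b = a + s, b + n - s    # Beta posterior after interim data
    prob = 0.0
    for x in range(m + 1):               # beta-binomial predictive mass
        if s + x >= s_needed:
            log_p = (math.log(math.comb(m, x))
                     + log_beta(post_a + x, post_b + m - x)
                     - log_beta(post_a, post_b))
            prob += math.exp(log_p)
    return prob
```

If this probability falls below a pre-specified threshold at an interim look, recruitment could be stopped for futility; the thresholds themselves must be fixed in the protocol in advance.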
Big data analysis
Overarching term for all kinds of methods used for the analysis of ‘big’ datasets. Mostly ‘machine learning’: an umbrella term for techniques that fit models algorithmically by adapting to patterns in data.
Mooney, S. J. & Pejaver, V., Annu. Rev. Public Health (2018). doi:10.1146/annurev-publhealth-040617-014208
Case series study
Observational study design which describes several patients (cases) over time. Mostly hypothesis forming (early stage of effectiveness research) and without control group or placebo.
Holubova, A. et al., J. Med. Internet Res. (2019). doi:10.2196/11527
CHEATS: a generic information communication technology (ICT) evaluation framework
CHEATS is a generic information communication technology (ICT) evaluation framework based on a methodology of formative process evaluation utilising both quantitative and qualitative methods. CHEATS stands for: Clinical, Human and organisational, Educational, Administrative, Technical, Social.
Shaw, N. T. et al., Comput. Biol. Med. (2002). doi:10.1016/S0010-4825(02)00016-1
Cluster randomised controlled trial
A randomized controlled trial that randomizes not individuals but ‘clusters’, mostly health care centers or primary care practices.
Oliveira-Ciabati, L. et al., Reprod. Health (2017).
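The allocation step can be sketched as follows: whole clusters are shuffled and split between arms, and every patient inherits their cluster's allocation (the practice names are illustrative):

```python
import random

def randomise_clusters(cluster_ids, seed=0):
    """Allocate whole clusters (e.g. primary care practices) to trial
    arms; every patient in a cluster receives that cluster's allocation."""
    rng = random.Random(seed)
    ids = list(cluster_ids)
    rng.shuffle(ids)                     # random order of clusters
    half = len(ids) // 2
    arms = {c: "intervention" for c in ids[:half]}
    arms.update({c: "control" for c in ids[half:]})
    return arms

arms = randomise_clusters([f"practice_{i}" for i in range(10)])
```

Because outcomes of patients within a cluster are correlated, the analysis must account for clustering (e.g. via mixed models), which is what distinguishes this design statistically from individual randomisation.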
Cohort study (retro- and prospective)
Observational design in which groups of patients are followed over time. Usually, multiple exposures and outcomes can be defined in a cohort. Retro- and prospective mostly refers to the timing of data acquisition (before or after designing the study). Patients are sampled on the basis of exposure. Information about baseline characteristics is obtained, and the occurrence of outcomes is assessed during a specified follow-up period. At baseline, exposed persons, unexposed persons, or both may be included.
Vandenbroucke, J. P. British Medical Journal (1991). doi:10.1136/bmj.302.6775.528-d
Controlled before-after study / non-randomized controlled trial (CBA / NRCT)
A study in which observations are made before and after the implementation of an intervention, both in a group that receives the intervention and in a control group that does not.
(reference still to be added)
Controlled clinical trial (CCT)
A clinical study that includes a comparison (control) group. The comparison group receives a placebo, another treatment, or no treatment at all.
van der Meij, E. et al., Lancet (2018). doi:10.1016/S0140-6736(18)31113-9
Cost-effectiveness analysis
Cost-effectiveness analysis (CEA) produces a numerical ratio, the incremental cost-effectiveness ratio, expressed as cost (dollars, euros) per unit of health gained (for example, quality-adjusted life years, QALYs). This ratio is used to express the difference in cost-effectiveness between new diagnostic tests or treatments and current ones.
de la Torre-Díez, I. et al., Telemed. e-Health (2015). doi:10.1089/tmj.2014.0053
Elbert, N. J. et al., Journal of Medical Internet Research (2014). doi:10.2196/jmir.2790
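The ratio itself is straightforward to compute; a minimal sketch with invented numbers:

```python
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    delta_cost = cost_new - cost_old
    delta_effect = qaly_new - qaly_old
    if delta_effect == 0:
        raise ValueError("no incremental health effect; ICER undefined")
    return delta_cost / delta_effect

# Hypothetical example: a new eHealth programme costs 12,000 vs 10,000
# per patient and yields 8.5 vs 8.0 QALYs.
ratio = icer(12_000, 10_000, 8.5, 8.0)
```

The resulting cost per QALY is then compared with a willingness-to-pay threshold to judge whether the new intervention is considered cost-effective.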
Cross-sectional study
Observational study design that samples the exposure and outcome at one moment in time. Useful for gaining quick insight into possible associations. A drawback is the lack of follow-up time to study relations between exposure and outcome over time.
Hansen, A. H. et al., J. Med. Internet Res. (2018). doi:10.2196/11322
Crossover study
Randomized, parallel group clinical trials often require large groups of patients; this is expensive and takes time. A randomized cross-over trial can be an efficient and more affordable alternative. A cross-over design can be used to study chronic disorders in which treatments have temporary effects. Participants receive all treatments in consecutive periods and outcomes are measured after every period. In general, only a quarter of the total group size is needed for cross-over studies compared with parallel group studies.
Bonten, T. N. et al., Ned. Tijdschr. Geneeskd. (2013).
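The allocation step for a two-treatment crossover can be sketched as randomising each participant to a sequence (AB or BA), so that everyone receives both treatments in consecutive periods:

```python
import random

def crossover_allocation(participants, treatments=("A", "B"), seed=0):
    """Randomise each participant to a treatment sequence (AB or BA for
    two treatments); everyone receives every treatment, in random order."""
    rng = random.Random(seed)
    sequences = (tuple(treatments), tuple(reversed(treatments)))
    return {p: rng.choice(sequences) for p in participants}

allocation = crossover_allocation([f"p{i}" for i in range(8)])
```

Because each participant acts as their own control, between-subject variability is removed from the treatment comparison, which is where the sample-size saving comes from; a washout period between the two treatment periods is usually needed to avoid carry-over effects.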
Economic evaluation
Overarching term to describe the methods used for economic evaluation, which include three major categories based on their evaluation method: cost-effectiveness analyses, cost-utility analyses or cost-benefit analyses.
Bongiovanni-Delaroziere, I. et al., Eur. Res. Telemed. (2017).
HAS methodological framework
The French national authority for health (HAS) published a methodological framework for its economic evaluations in 2011. Drawing on its vast experience and on the in-depth work on economic evaluation methods within the Economic Evaluation and Public Health Committee, the HAS strives to present and share the principles and methods that it uses in economic evaluation analyses, comparing the health effects to be expected from health care with the resources used to produce such care. In addition to these principles and methods, quantitative and qualitative research methods should be combined. This makes it possible to take the project’s context into account and to understand the different effects of telemedicine interventions. The technology, the medical field, the application of telemedicine, the objectives and the local context determine important parameters that must be taken into account.
Bongiovanni-Delaroziere, I. et al., Eur. Res. Telemed. (2017).
Interrupted time series (ITS) analysis
Interrupted time series (ITS) analysis is a useful quasi-experimental design with which to evaluate the longitudinal effects of interventions, through regression modelling. The term quasi-experimental refers to an absence of randomisation, and ITS analysis is principally a tool for analysing observational data where full randomisation, or a case-control design, is not affordable or possible. Its main advantage over alternative approaches is that it can make full use of the longitudinal nature of the data and account for pre-intervention trends.
Chumbler, N. R. et al., Telemed. e-Health (2008). doi:10.1089/tmj.2008.0108
Grigsby, J. et al., J. Telemed. Telecare (2006). doi:10.1258/135763306778393162
Liu, J. L. Y. et al., J. Am. Med. Informatics Assoc. (2011). doi:10.1136/jamia.2010.010306
Kontopantelis, E. et al., BMJ (2015). doi:10.1136/bmj.h2750
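As a simplified sketch of the regression step, the following fits separate pre- and post-intervention lines (rather than a single segmented model with interaction terms, as a full ITS analysis would) and reports the level and slope change at the intervention point:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b                      # intercept, slope

def its_effects(times, values, t_intervention):
    """Level and slope change at t_intervention from separate
    pre/post fits: a simplified segmented-regression sketch."""
    pre = [(t, v) for t, v in zip(times, values) if t < t_intervention]
    post = [(t, v) for t, v in zip(times, values) if t >= t_intervention]
    a0, b0 = fit_line(*zip(*pre))
    a1, b1 = fit_line(*zip(*post))
    level_change = (a1 + b1 * t_intervention) - (a0 + b0 * t_intervention)
    slope_change = b1 - b0
    return level_change, slope_change
```

A real analysis would additionally handle autocorrelation and seasonality, which this sketch ignores.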
Methods comparison study
Two overarching methodologies for method-comparison studies are commonly used: equivalence studies and non-inferiority studies. In equivalence studies, we are interested in whether the new assessment does not differ from the conventional (usually in-person) assessment in either direction by a pre-specified amount (i.e. a two-sided test). In an equivalence trial the new assessment method will be selected regardless of whether it is better or worse than the existing assessment, as long as the difference falls within the predefined zone of allowable difference (and other criteria, such as cost-effectiveness and stakeholder satisfaction, are met). Commonly in telehealth, the existing model of care (e.g. specialist assessment in a tertiary hospital for cognitive impairment) will not be replaced; rather, the telehealth option will be used for people who cannot access conventional services. In this case, the question is whether the telehealth assessment is ‘as good as’, or rather ‘not inferior’ to, conventional practice.
Russell, T. G. et al., J. Telemed. Telecare (2017). doi:10.1177/1357633X17727772
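Both decision rules reduce to comparing a confidence interval for the between-method difference against the pre-specified margin; a sketch (assuming higher scores mean better performance):

```python
def is_equivalent(ci_low, ci_high, margin):
    """Equivalence (two-sided): the CI for (new - conventional) must lie
    entirely within the zone of allowable difference (-margin, +margin)."""
    return -margin < ci_low and ci_high < margin

def is_non_inferior(ci_low, margin):
    """Non-inferiority (one-sided): the lower CI bound for
    (new - conventional) must not fall below -margin."""
    return ci_low > -margin
```

Note that non-inferiority only constrains the lower bound: a new method can be non-inferior without being equivalent if its upper bound exceeds the margin.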
Micro-randomised trial
Micro-randomised trials are trials in which participants are randomly assigned a treatment from the set of possible treatment actions at several times throughout the day. Thus each participant may be randomised hundreds or thousands of times over the course of a study. This is very different from a traditional randomised trial, in which participants are randomised once to one of a handful of treatment groups.
Dempsey, W. et al., Significance (2015). doi:10.1111/j.1740-9713.2015.00863.x
Klasnja, P. et al., Heal. Psychol. (2015). doi:10.1037/hea0000305
Law, L. M. et al., Clin. Trials (2016). doi:10.1177/1740774516637075
Walton, A. et al., Clin. Pharmacol. Ther. (2018).
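The per-decision-point randomisation can be sketched as follows; the action labels and the fixed randomisation probability are illustrative (in practice the probability may itself adapt over time):

```python
import random

def micro_randomise(days, decision_points_per_day,
                    actions=("prompt", "no_prompt"), p_treat=0.5, seed=0):
    """Assign a treatment action at every decision point for one
    participant, with a fixed randomisation probability p_treat."""
    rng = random.Random(seed)
    schedule = []
    for day in range(days):
        for point in range(decision_points_per_day):
            action = actions[0] if rng.random() < p_treat else actions[1]
            schedule.append((day, point, action))
    return schedule

# One participant, 30 days, 5 decision points per day: 150 randomisations.
schedule = micro_randomise(30, 5)
```

Each (day, decision point, action) triple contributes data on the proximal effect of that action, which is what the design is built to estimate.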
Mixed methods
Mixed methods research (MMR) is an emerging and evolving research methodology that combines qualitative and quantitative approaches within the same study. It is an approach to research in the social, behavioural and health sciences in which the investigator gathers both quantitative and qualitative data, integrates the two, and then draws interpretations based on the combined strengths of both sets of data to understand research problems. MMR is important for telehealth research because the questions that profit most from a mixed methods design tend to be broad, complex and multifaceted.
Caffery, L. J. et al., J. Telemed. Telecare (2017). doi:10.1177/1357633X16665684
Lee, S. et al., CIN - Comput. Informatics Nurs. (2012). doi:10.1097/NXN.0b013e31824b1f96
Non-inferiority trial
Demonstrating superiority of the new solution in terms of quality or efficacy of treatment is not always necessary, as the telemedicine/e-health solution may have other types of advantages, such as saved travel time or saved costs. Testing that the new solution is not inferior to a traditional counterpart may therefore be sufficient in many cases.
Kummervold, P. E. et al., Journal of Medical Internet Research (2012). doi:10.2196/jmir.2169
Parallel cohort study with nested RCT
The longitudinal observational cohort study with a nested RCT design has many similarities with the parallel group RCT but embeds the RCT within a cohort study. The main advantage of a nested RCT design is the available follow-up information of those who refuse the intervention or are non-adherent. By having asked informed consent for the observational study before offering the RCT intervention, baseline and follow-up data can be collected from all individuals, including those who refuse the intervention. Furthermore, participants are only eligible for the nested RCT if they have complied with the observational cohort data collection, which ensures that participants randomized are motivated to participate.
Younge, J. O. et al., Int. J. Epidemiol. (2015). doi:10.1093/ije/dyv183
Patient reported outcome measures (PROMs)
PROMs seek to ascertain patients’ views of their symptoms, their functional status, and their health-related quality of life. PROMs are often wrongly referred to as so-called “outcome measures,” though they actually measure health: by comparing a patient’s health at different times, the outcome of the care received can be determined. It is important to distinguish PROMs from patient-reported experience measures (PREMs), which focus on aspects of the humanity of care, such as being treated with dignity or being kept waiting.
Black, N., BMJ (2013). doi:10.1136/bmj.f167
Practical clinical trial (PCT)
There are four key characteristics of practical trials. They study representative patients, are conducted in multiple settings, employ as controls reasonable alternative intervention choices rather than no treatment or “usual care,” and report on outcomes relevant to clinicians, potential adoptees, and policymakers.
Glasgow, R. E. Am. J. Prev. Med. (2007). doi:10.1016/j.amepre.2007.01.023
Pragmatic randomised controlled trial (P-RCT)
The term “pragmatic” for RCTs was introduced half a century ago. In contrast to “explanatory” RCTs that test hypotheses on whether the intervention causes an outcome of interest in ideal circumstances, “pragmatic” RCTs aim to provide information on the relative merits of real-world clinical alternatives in routine care. A critical aim of an explanatory RCT is to ensure internal validity (prevention of bias); conversely, a pragmatic RCT focuses on maximizing external validity (generalizability of the results to many real-world settings), but should try to preserve as much internal validity as possible.
Danaher, B. G., Annals of Behavioral Medicine (2009). doi:10.1007/s12160-009-9129-0
Dal-Ré, R., BMC Med. (2018). doi:10.1186/s12916-018-1038-2
Preference clinical trial (PCT)
In a preference clinical trial (PCT), two or more health-care interventions are compared among several groups of patients, at least some of whom have purposefully chosen the intervention to be administered to them. This stands in contrast to the randomized, controlled clinical trial (RCT), where patients are randomly assigned to receive one of the available test interventions.
Kowalski, C. J. et al., Perspect. Biol. Med. (2013). doi:10.1353/pbm.2013.0004
Pretest-posttest design
The basic premise behind the pretest–posttest design involves obtaining a pretest measure of the outcome of interest prior to administering some treatment, followed by a posttest on the same measure after treatment occurs. Pretest–posttest designs are employed in both experimental and quasi-experimental research and can be used with or without control groups. For example, quasi-experimental pretest–posttest designs may or may not include control groups, whereas experimental pretest–posttest designs must include control groups. Furthermore, despite the versatility of the pretest–posttest designs, in general, they still have limitations, including threats to internal validity. Although such threats are of particular concern for quasi-experimental pretest–posttest designs, experimental pretest–posttest designs also contain threats to internal validity.
Grigsby, J. et al., J. Telemed. Telecare (2006). doi:10.1258/135763306778393162
Salkind, N. J., Encycl. Res. Des. (2010). doi:10.4135/9781506326139.n538
Propensity score
The propensity score is the conditional probability of receiving treatment A rather than treatment B, given the observed covariates. Rosenbaum and Rubin (1983) state that the propensity score is a balancing score, in the sense that it is a function of the observed covariates such that, conditional on the propensity score, the distribution of observed baseline covariates will be similar between the two treatment groups. Propensity score methods can then be used to assess treatment group comparability with respect to patient baseline covariates and to adjust for imbalances in those covariates, allowing a sensible treatment comparison in clinical outcomes. More importantly, for observational studies in regulatory settings, the methodology can be utilized to design an observational study that mimics an RCT in terms of study design integrity and interpretability of study results.
Campbell, G. et al., J. Biopharm. Stat. (2016). doi:10.1080/10543406.2015.1092037
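Propensity scores are commonly estimated with a logistic regression of treatment on the covariates. A minimal sketch fitted by plain gradient descent follows; a real analysis would use an established statistics package rather than hand-rolled optimisation:

```python
import math

def _sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def propensity_scores(X, treated, lr=0.1, steps=2000):
    """Estimate P(treatment | covariates) with a logistic model fitted
    by gradient descent on the log-loss (minimal illustrative sketch)."""
    n, d = len(X), len(X[0])
    w = [0.0] * (d + 1)                  # intercept + one weight per covariate
    for _ in range(steps):
        grad = [0.0] * (d + 1)
        for xi, ti in zip(X, treated):
            p = _sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
            err = p - ti                 # gradient contribution of this subject
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj - lr * g / n for wj, g in zip(w, grad)]
    return [_sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
            for xi in X]
```

The estimated scores can then be used for matching, stratification, weighting, or covariate adjustment, as described above.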
Randomised controlled trial
The randomised controlled trial (RCT) is a trial in which subjects are randomly assigned to one of two groups: one (the experimental group) receiving the intervention that is being tested, and the other (the comparison or control group) receiving an alternative (conventional) treatment. The two groups are then followed up to see if there are any differences between them in outcome. The results and subsequent analysis of the trial are used to assess the effectiveness of the intervention, i.e. the extent to which a treatment, procedure, or service does patients more good than harm.
Kendall, J. M., Emergency Medicine Journal (2003).
Sequential Multiple Assignment Randomized Trial (SMART)
The SMART approach is a randomized experimental design that has been developed especially for building time-varying adaptive interventions. The SMART approach enables the intervention scientist to address questions like these in a holistic yet rigorous manner, taking into account the order in which components are presented rather than considering each component in isolation. A SMART trial provides an empirical basis for selecting appropriate decision rules and tailoring variables. The end goal of the SMART approach is the development of evidence-based adaptive intervention strategies, which are then evaluated in a subsequent RCT.
Baker, T. B. et al., Journal of Medical Internet Research (2014). doi:10.2196/jmir.2925
Collins, L. M. et al., Am. J. Prev. Med. (2007). doi:10.1016/j.amepre.2007.01.022
Danaher, B. G. et al., Annals of Behavioral Medicine (2009). doi:10.1007/s12160-009-9129-0
Almirall, D. et al., Transl. Behav. Med. (2014). doi:10.1007/s13142-014-0265-0
Mohr, D. C. et al., J. Med. Internet Res. (2015). doi:10.2196/jmir.4391
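A two-stage SMART can be sketched as follows: everyone is randomised to a first-stage treatment, and non-responders are re-randomised to a second-stage option (all stage and treatment names here are illustrative, not from the cited trials):

```python
import random

def smart_allocation(participants, responders, seed=0):
    """Two-stage SMART sketch: first-stage randomisation for everyone;
    non-responders to stage 1 are re-randomised to a rescue option."""
    rng = random.Random(seed)
    plan = {}
    for p in participants:
        stage1 = rng.choice(["app_only", "app_plus_coach"])
        if p in responders:
            stage2 = "continue"          # responders stay on their regimen
        else:
            stage2 = rng.choice(["intensify", "switch_modality"])
        plan[p] = (stage1, stage2)
    return plan
```

The observed response to stage 1 acts as the tailoring variable; comparing the embedded treatment sequences is what supports the choice of decision rules for the eventual adaptive intervention.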
Stepped wedge trial
In a stepped wedge design, an intervention is rolled-out sequentially to the trial participants (either as individuals or clusters of individuals) over a number of time periods. The order in which the different individuals or clusters receive the intervention is determined at random and, by the end of the random allocation, all individuals or groups will have received the intervention. Stepped wedge designs incorporate data collection at each point where a new group (step) receives the intervention. Data analysis to determine the overall effectiveness of the intervention subsequently involves comparison of the data points in the control section of the wedge with those in the intervention section. There are two key (non-exclusive) situations in which a stepped wedge design is considered advantageous when compared to a traditional parallel design. First, if there is a prior belief that the intervention will do more good than harm, rather than a prior belief of equipoise, it may be unethical to withhold the intervention from a proportion of the participants, or to withdraw the intervention as would occur in a cross-over design. Second, there may be logistical, practical or financial constraints that mean the intervention can only be implemented in stages.
Brown, C. A. et al., BMC Medical Research Methodology (2006). doi:10.1186/1471-2288-6-54
Hussey, M. A. et al., Contemp. Clin. Trials (2007). doi:10.1016/j.cct.2006.05.007
Spiegelman, D., Am. J. Public Health (2016). doi:10.2105/AJPH.2016.303068
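The rollout can be sketched as a cluster-by-period matrix: the crossover order is randomised, period 0 is all-control, and by the final period every cluster has received the intervention (cluster names are illustrative):

```python
import random

def stepped_wedge_schedule(clusters, steps, seed=0):
    """Randomise the order in which clusters cross from control (0) to
    intervention (1); one group of clusters crosses at each step.
    Returns, per cluster, its 0/1 status over steps + 1 time periods."""
    rng = random.Random(seed)
    order = list(clusters)
    rng.shuffle(order)                           # random crossover order
    groups = [order[i::steps] for i in range(steps)]  # one group per step
    schedule = {}
    for g, group in enumerate(groups):
        for cluster in group:
            # control in periods up to the group's crossover, then intervention
            schedule[cluster] = [0] * (g + 1) + [1] * (steps - g)
    return schedule

sched = stepped_wedge_schedule(["A", "B", "C", "D"], steps=4)
```

Each cluster's row is monotone (once crossed over, it never reverts), which is the defining feature of the wedge.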
Survey methods
Surveys are commonly used in telehealth research to assess patient satisfaction, patient experiences, patient preferences and attitudes, and the technical quality of a teleconsultation. The popularity of the survey as a method of measurement can be understood through three major strengths of this technique. First, confidential survey questions are well suited to capture individuals’ experiences, perceptions and attitudes. Second, pre-existing scales can be used across studies, enabling the comparison and replication of results. Third, the validity and reliability of survey instruments can be assessed through rigorous, transparent and well-accepted validation methods, providing the researcher with confidence that the measures tap the intended constructs, and provide an accurate measurement.
Langbecker, D. et al., J. Telemed. Telecare 23, 770–779 (2017).
Trial of intervention principles (TIPs)
Trials of Behavioral intervention technologies (BIT) should be viewed as experiments to test principles within that BIT that can then be more broadly applied by developers, designers, and researchers in the creation of BITs and the science behind technology-based behavioral intervention. As such, we refer to these trials as “Trials of Intervention Principles” (TIPs), as they test the theoretical concepts represented within the BIT, rather than the specific technological instantiation of the BIT itself.
Mohr, D. C. et al., J. Med. Internet Res. (2015). doi:10.2196/jmir.4391
Wait list control group study
A wait list control group, also called a wait list comparison, is a group of participants included in an outcome study that is assigned to a waiting list and receives intervention after the active treatment group. This control group serves as an untreated comparison group during the study, but eventually goes on to receive treatment at a later date. Wait list control groups are often used when it would be unethical to deny participants access to treatment, provided the wait is still shorter than that for routine services.
Nguyen, H. Q. et al., Canadian Journal of Nursing Research (2007).
Elliott, S. A. et al., Behav. Res. Ther. (2002). doi:10.1016/S0005-7967(01)00082-1