“Why do two different reviews have different conclusions about the efficacy of kinesiology taping?”
It’s essentially a ‘Tale of Two Studies’, to play on the Charles Dickens novel of a similar name. The two reviews in question are Lim & Tay (2015) and Parreira et al. (2014). Lim & Tay reported that kinesiology tape is effective at reducing chronic musculoskeletal pain compared to minimal intervention, while Parreira et al. found that the evidence does not support the use of kinesiology tape.
The simplest answer is that these were two differently designed reviews performed at different times: Lim & Tay performed a meta-analysis in July of 2014, while Parreira et al. performed a systematic review (stopping short of meta-analysis) in June of 2013. Any study published between June 2013 and July 2014 could therefore only appear in the Lim & Tay analysis (in fact, four studies included in Lim & Tay were absent from Parreira et al.’s analysis simply because they were published after June 2013).
Meta-analyses and systematic reviews also differ in their level of analysis. A systematic review follows a very specific protocol for identifying and reviewing studies, but provides only a descriptive (not statistical) result. A meta-analysis builds on a systematic review by pooling data from a number of relatively homogeneous studies into a single statistical outcome: an ‘overall effect size’ for the intervention, which improves statistical power by increasing the overall sample size. Parreira et al. found it too difficult to combine their studies because of heterogeneity in patients, techniques, outcome measures, and comparison treatments, so they performed only a systematic review. Lim & Tay’s narrower inclusion criteria likely produced a more homogeneous set of studies, making meta-analysis feasible.
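The pooling step at the heart of a meta-analysis can be sketched with inverse-variance (fixed-effect) weighting, one common approach: each study’s effect size is weighted by the inverse of its variance, so larger, more precise studies count for more. The effect sizes and standard errors below are hypothetical, not drawn from either review; they only illustrate how an overall effect size and its confidence interval emerge from individual studies.

```python
# Minimal sketch of inverse-variance (fixed-effect) pooling.
# All numbers below are hypothetical, not from either review.

def pooled_effect(effects, std_errors):
    """Combine per-study effect sizes into one overall effect size with a 95% CI."""
    weights = [1 / se**2 for se in std_errors]            # precision weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se_pooled = (1 / sum(weights)) ** 0.5                 # pooled standard error
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    return pooled, ci

# Three hypothetical studies: the third has the smallest standard error,
# so it receives the most weight in the pooled estimate.
effect, (lo, hi) = pooled_effect([-0.5, -0.8, -0.3], [0.20, 0.25, 0.15])
```

Note that pooling this way is only defensible when the studies are reasonably homogeneous — which is exactly the judgment call on which the two reviews diverged.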
The search and analysis procedures can also differ, particularly in the inclusion and exclusion criteria. This creates potential for selection bias that must be considered when comparing results: articles not in English may be excluded, certain databases may not be searched, and particular research designs may be ruled out. A good guideline is to directly compare “PICO”: the Population, Intervention, Comparison, and Outcomes included. This makes it crucial for authors to report the specific details of their search methodology, a process specified by the PRISMA guidelines and required in all systematic reviews and meta-analyses. Table 1 compares various factors between the studies, offering insight into potential biases.
Table 1 highlights several potential sources of bias between the two reviews, including broader PICO inclusion criteria, different databases searched, and a higher number of included studies.
Another clue to the differing conclusions is the actual set of studies evaluated by the two papers. While Lim & Tay included only 3 more studies in total than Parreira et al., the papers had just 8 studies in common (Akbas 2011, Aytar 2011, Castro-Sanchez 2012, Llopis 2012, Paoloni 2011, Saavendra-Hernandez 2012, Simsek 2013, Tsai 2010). Lim & Tay included 7 studies not found in Parreira et al., who in turn included 4 studies not found in Lim & Tay. Table 2 lists the studies excluded from one review but included in the other, with possible reasons for exclusion; 3 of the studies in Lim & Tay were published after Parreira et al.’s search date.
In addition to the lack of statistical analysis in Parreira et al.’s paper, the grouping of studies for analysis also differed between the papers. For example, Lim & Tay grouped studies comparing kinesiology tape to minimal intervention (no tape, sham taping, or usual care) for pain and disability outcomes. Parreira et al. also looked at pain and disability outcomes, but only in comparison to sham kinesiology taping. This grouping meant that only 2 of the same studies were compared in both reviews: Aytar (2011) and Castro-Sanchez (2012).
In their statistical analysis, Lim & Tay (who also excluded 2 studies they had originally included because of questionable methodology) found a moderate, statistically significant effect size for reducing musculoskeletal pain lasting more than 4 weeks compared to minimal intervention (−0.68; 95% CI −1.11 to −0.25). They also found a small effect size for reducing disability compared to minimal intervention (−0.41; 95% CI −0.83 to 0.01); however, this was not statistically significant (although by the slightest of margins!). Lim & Tay also noted that kinesiology taping is not superior to other interventions for reducing pain (−0.32; 95% CI −1.41 to 0.76) or disability (0.08; 95% CI −0.27 to 0.43). Figure 1 illustrates the effect sizes and confidence intervals from Lim & Tay’s analysis.
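One quick way to read those intervals: a 95% confidence interval that excludes zero corresponds to statistical significance at p < 0.05; an interval that crosses zero does not. A minimal sketch of that check, using the intervals quoted above from Lim & Tay (labels are shorthand, not the authors’ own wording):

```python
# A 95% CI that excludes zero indicates statistical significance (p < 0.05).

def significant(ci_low, ci_high):
    """True if the 95% confidence interval excludes zero."""
    return ci_low > 0 or ci_high < 0

# Intervals as quoted from Lim & Tay; labels are informal shorthand.
results = {
    "pain vs. minimal intervention":       (-1.11, -0.25),  # excludes zero
    "disability vs. minimal intervention": (-0.83,  0.01),  # just crosses zero
    "pain vs. other interventions":        (-1.41,  0.76),
    "disability vs. other interventions":  (-0.27,  0.43),
}
for outcome, (lo, hi) in results.items():
    verdict = "significant" if significant(lo, hi) else "not significant"
    print(f"{outcome}: {verdict}")
```

Only the pain-versus-minimal-intervention comparison excludes zero; the disability interval misses by a hundredth, which is the “slightest of margins” noted above.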
Publication date is an obvious limitation of systematic reviews and meta-analyses, since they may not include the most recent studies. In addition, differences between reviews in design, databases, inclusion/exclusion criteria (PICO), and analysis should be considered as sources of bias, particularly when comparing reviews.
Most importantly, note the conclusions made by Parreira et al.: they didn’t say kinesiology tape wasn’t effective; rather, they said the “current evidence does not support the use in clinical practice.” The authors noted that most of the research they reviewed was poor quality and underpowered, stating, “some authors concluded that kinesiology taping was effective when their data did not identify significant benefit.” Parreira et al. therefore rightly stopped short of saying the tape is ineffective, because the evidence just wasn’t there yet. However, Lim & Tay’s subsequent meta-analysis suggests that evidence supports the use of kinesiology tape to reduce musculoskeletal pain lasting more than 4 weeks.
It’s important for clinicians (and the media!) to understand how to analyze and interpret research, rather than focusing on the title or headline.