Show simple record

dc.contributor.author  Lucas, Nicholas
dc.contributor.author  Macaskill, Petra
dc.contributor.author  Irwig, Les
dc.contributor.author  Moran, Robert
dc.contributor.author  Rickards, Luke
dc.contributor.author  Turner, Robin
dc.contributor.author  Bogduk, Nikolai
dc.description.abstract  Background: The aim of this project was to investigate the reliability of a new 11-item quality appraisal tool for studies of diagnostic reliability (QAREL). The tool was tested on studies reporting the reliability of any physical examination procedure. The reliability of physical examination is a challenging area to study given the complex testing procedures, the range of tests, and the lack of procedural standardisation.
Methods: Three reviewers used QAREL to independently rate 29 articles, comprising 30 studies, published during 2007. The articles were identified from a search of relevant databases using the following string: “Reproducibility of results (MeSH) OR reliability (t.w.) AND Physical examination (MeSH) OR physical examination (t.w.).” A total of 415 articles were retrieved and screened for inclusion. The reviewers undertook an independent trial assessment prior to data collection, followed by a general discussion about how to score each item. At no time did the reviewers discuss individual papers. Reliability was assessed for each item using multi-rater kappa (κ).
Results: Multi-rater reliability estimates ranged from κ = 0.27 to 0.92 across all items. Six items were rated with good reliability (κ > 0.60), three with moderate reliability (κ = 0.41–0.60), and two with fair reliability (κ = 0.21–0.40). Raters found it difficult to agree about the spectrum of patients included in a study (Item 1) and the correct application and interpretation of the test (Item 10).
Conclusions: In this study, we found that QAREL was a reliable assessment tool for studies of diagnostic reliability when raters agreed upon criteria for the interpretation of each item. Nine out of 11 items had good or moderate reliability, and two items achieved fair reliability. The heterogeneity of the tests included in this study may have resulted in an underestimation of the reliability of these two items. We discuss these and other factors that could affect our results and make recommendations for the use of QAREL.  en_NZ
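The agreement statistic the abstract reports, multi-rater (Fleiss') kappa, compares observed rater agreement with the agreement expected by chance. The sketch below is illustrative only, not code or data from the study: the `fleiss_kappa` function name and the ratings matrix are invented for demonstration.

```python
# Minimal sketch of Fleiss' (multi-rater) kappa. Hypothetical example:
# not from the QAREL study; the ratings below are invented.

def fleiss_kappa(counts):
    """counts[i][j] = number of raters assigning subject i to category j."""
    n_subjects = len(counts)
    n_raters = sum(counts[0])
    n_categories = len(counts[0])
    # Proportion of all assignments falling in each category.
    totals = [sum(row[j] for row in counts) for j in range(n_categories)]
    p_j = [t / (n_subjects * n_raters) for t in totals]
    # Per-subject agreement: agreeing rater pairs / total rater pairs.
    p_i = [
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in counts
    ]
    p_bar = sum(p_i) / n_subjects   # observed agreement
    p_e = sum(p * p for p in p_j)   # chance-expected agreement
    return (p_bar - p_e) / (1 - p_e)

# Three raters scoring five items as yes / no / unclear (invented data).
ratings = [
    [3, 0, 0],
    [2, 1, 0],
    [3, 0, 0],
    [0, 3, 0],
    [1, 1, 1],
]
kappa = fleiss_kappa(ratings)  # falls in the "fair" band (0.21-0.40)
```

Under the bands the abstract uses, a value like this would count as fair agreement (κ = 0.21–0.40), while κ > 0.60 would count as good.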
dc.publisher  BioMed Central Ltd  en_NZ
dc.rights  This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The Creative Commons Public Domain Dedication waiver applies to the data made available in this article, unless otherwise stated.  en_NZ
dc.subject  quality appraisal  en_NZ
dc.subject  systematic review  en_NZ
dc.subject  evidence-based medicine  en_NZ
dc.title  The reliability of a quality appraisal tool for studies of diagnostic reliability (QAREL)  en_NZ
dc.type  Journal Article  en_NZ
dc.rights.holder  © 2013 Lucas et al.; licensee BioMed Central Ltd.  en_NZ
dc.subject.marsden  110499 Complementary and Alternative Medicine not elsewhere classified  en_NZ
dc.identifier.bibliographicCitation  Lucas, N., Macaskill, P., Irwig, L., Moran, R., Rickards, L., Turner, R., and Bogduk, N. (2013). The reliability of a quality appraisal tool for studies of diagnostic reliability (QAREL). BMC Medical Research Methodology, 13(1): 111.  en_NZ
unitec.institution  University of Sydney  en_NZ
unitec.institution  Unitec Institute of Technology  en_NZ
unitec.institution  University of Newcastle  en_NZ
unitec.publication.title  BMC Medical Research Methodology  en_NZ
dc.contributor.affiliation  Unitec Institute of Technology  en_NZ

