Inter and intra-rater reliability of rating criteria for the Floor Sitting Posture Screen

Authors
Houvenagel, Matthias
Degree
Master of Osteopathy
Grantor
Unitec Institute of Technology
Date
2012
Supervisors
Mannion, Jamie
Moran, Robert
Type
Masters Thesis
Keyword
reliability
visual assessment
floor sitting posture
Citation
Houvenagel, M. (2012). Inter and intra-rater reliability of rating criteria for the Floor Sitting Posture Screen. (Unpublished document submitted in partial fulfilment of the requirements for the degree of Master of Osteopathy). Unitec Institute of Technology, Auckland, New Zealand. Retrieved from https://hdl.handle.net/10652/2010
Abstract
Background: Visual assessment of posture and movement is commonly used by musculoskeletal therapists. Postures have historically been assessed using subjective criteria. Recently, however, a number of new rating protocols have been produced with the aim of enhancing the reliability and objectivity of visual assessments. The Floor Sitting Posture Screen (FSPS), a recently developed visual assessment protocol, could be of clinical value; however, it is yet to be evaluated in terms of inter- and intra-rater reliability. The aims of this study were to evaluate the level of inter- and intra-rater reliability of the FSPS.
Methods: A blinded test-retest design was used to examine the level of inter- and intra-rater reliability of the FSPS. Inter-rater reliability was investigated by comparing the results of 12 raters (n=11 senior osteopathy students; n=1 osteopath) rating pictures of 7 subjects (n=5 female; n=2 male). Intra-rater reliability was investigated by having the raters rate images of the 7 subjects (n=5 female) on two occasions one week apart.
Results: Inter-rater reliability of the individual criteria (n=17) of the FSPS ranged from Poor to Good. The majority of the criteria (n=11) demonstrated Moderate to Good inter-rater reliability, with only one criterion demonstrating Poor reliability. Intra-rater reliability of individual criteria could not be calculated using Cohen's kappa, owing to the sample size and the homogeneity of the raw data; however, intra-rater percentages of agreement were above 81% for 10 of the 12 raters. The ICC of the combined criteria score of the FSPS was Almost Perfect (ICC=0.93; 95% CI=0.88-0.95).
Conclusions: The levels of inter- and intra-rater reliability and the percentages of agreement for some criteria are promising. However, some criteria need further development before the FSPS can be applied in practice or subjected to further reliability and validity studies.
Copyright holder
Author
Copyright notice
All rights reserved