Print version ISSN 1807-5932

Clinics vol.65 no.7 São Paulo  2010 



Postural assessment software (PAS/SAPO): validation and reliability



Elizabeth Alves G. FerreiraI; Marcos DuarteII; Edison Puig MaldonadoIII; Thomaz Nogueira BurkeI; Amelia Pasqual MarquesI

IDepartment of Physical Therapy, Communication Science & Disorders, Occupational Therapy, Faculdade de Medicina, Universidade de São Paulo - São Paulo/SP, Brazil
IILaboratory of Biophysics, School of Physical Education and Sport, Universidade de São Paulo - São Paulo/SP, Brazil
IIIFaculty of Engineering, Fundação Armando Alvares Penteado - São Paulo/SP, Brazil. Tel.: 55 11 3091-7451




OBJECTIVE: This study was designed to estimate the accuracy of the postural assessment software (PAS/SAPO) for measurement of corporal angles and distances as well as the inter- and intra-rater reliabilities.
INTRODUCTION: Postural assessment software was developed as a subsidiary tool for postural assessment. It is easy to use and available in the public domain. Nonetheless, validation studies are lacking.
METHODS: The study sample consisted of 88 pictures from 22 subjects, and each subject was assessed twice (1 week interval) by 5 blinded raters. Inter- and intra-rater reliabilities were estimated using the intraclass correlation coefficient. To estimate the accuracy of the software, an inanimate object was marked with hallmarks using pre-established parameters. Pictures of the object were rated, and values were checked against the known parameters.
RESULTS: Inter-rater reliability was excellent for 41% of the variables and very good for 35%. Ten percent of the variables had acceptable reliability, and 14% were defined as non-acceptable. For intra-rater reliability, 44.8% of the measurements were considered excellent, 23.5% very good, 12.4% acceptable and 19.3% non-acceptable. Angular measurements had a mean error of 0.11°, and the mean error for distances was 1.8 mm.
DISCUSSION: The variables with non-acceptable intraclass correlation coefficients typically used the vertical line as a reference, which may have increased the inaccuracy of the estimates. Higher accuracy was obtained by younger raters with more sophisticated computer skills, suggesting that past experience influenced the results.
CONCLUSION: The postural assessment software was accurate for measuring corporal angles and distances and should be considered as a reliable tool for postural assessment.

Keywords: Postural assessment; Software; Validation; Reliability; Posture.




INTRODUCTION
Posture has been defined as the alignment of body segments at a particular time.1 Posture is an important health indicator,2 and postural abnormalities are associated with a large number of disorders, including pain syndromes,3-5 generalized or regional musculoskeletal disorders,6-7 and respiratory dysfunctions.8 Postural abnormalities have also been associated with an increased risk of falls in the elderly9-11 and cervical pain.12 Postural realignment is a goal often sought by physicians, dentists, and physiotherapists.13-16

In clinical practice, posture assessments are conducted as part of the physical exam.17 When conducted in the clinic, postural assessments are often subjective,18 and abnormalities are visually inspected. This form of qualitative assessment has low sensitivity as well as low intra- and inter-rater reliabilities. It is largely dependent on past experiences and subjective interpretations. Accordingly, standardized and validated instruments are required for more precise and systematic assessments.19

Posture may be qualitatively and quantitatively assessed through the rigorous interpretation of photographic pictures, which may also be used to monitor treatment outcomes. Several independent companies have developed postural assessment software, which often consists of digital markers for photographic images and tools for measuring several variables.

Quantitative measurements allow physicians and researchers not only to make an accurate assessment of postural changes but also to monitor improvement. Nevertheless, studies are necessary to validate and estimate the reliability of each of these systems. Although partial validations have been conducted for several of these tools, most of these studies have only assessed specific regions of the body (not global posture assessments) or only examined small samples.20-26

Accordingly, comprehensive validation studies are necessary. Postural assessment software (PAS/SAPO) has been developed to assist posture assessment from digitized pictures,27 and the software is freely available in the public domain. PAS allows the measurement of distances and angles, it is easy to use, and it is accompanied by scientific tutorials. We envision that PAS will be broadly used in both clinical practice and research.

The present study assessed the accuracy of PAS/SAPO for measuring angles and distances and also evaluated its inter-rater (reproducibility) and intra-rater (repeatability) reliabilities. We hypothesized that PAS would be an accurate tool for postural assessments.



METHODS
Overview and acquisition of digital images

The study sample consisted of 22 subjects. In total, 88 pictures were taken from the anterior and posterior directions as well as from both sides. The sample size and number of pictures were chosen based on the relevant literature.20, 21,26,28,29 Pictures were taken with subjects in the standing position, and the subjects were dressed to allow the visualization of 32 anatomic points (including 14 bilateral points). Points are presented in Figure 1.



To mark the points, styrofoam balls (15 mm) were attached with double-faced adhesive tape. Two cameras (Sony Cyber-shot DSC-P93) were placed on tripods (height of 1.63 m) at 90 degrees to each other. The first camera was placed 1.9 m from the subject, and the other camera 2.52 m away. The cameras were adjusted to be perpendicular to the anatomical planes of the subject. The zoom of each camera was adjusted to allow about 0.5 m of free space below and above the subject to minimize distortion at the image extremities. A plumb line marked with two styrofoam balls was used for vertical calibration.
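The calibration and measurement steps described above can be sketched numerically. Assuming digitized marker coordinates in pixels, the two plumb-line balls (whose real-world separation is known) yield a millimetres-per-pixel scale factor, from which distances and angles follow directly. All coordinate values and the 1000 mm plumb-line separation below are illustrative assumptions, not values from PAS/SAPO.

```python
import math

def scale_factor(p1, p2, real_dist_mm):
    """mm per pixel, from two calibration markers with known separation."""
    px = math.dist(p1, p2)  # pixel distance between the plumb-line balls
    return real_dist_mm / px

def distance_mm(a, b, mm_per_px):
    """Real-world distance between two digitized points."""
    return math.dist(a, b) * mm_per_px

def angle_deg(a, b):
    """Angle of the segment a->b relative to the horizontal, in degrees."""
    return math.degrees(math.atan2(b[1] - a[1], b[0] - a[0]))

# Example: plumb-line balls 500 px apart, assumed 1000 mm apart in reality
k = scale_factor((100, 100), (100, 600), 1000.0)  # 2.0 mm per pixel
print(distance_mm((100, 100), (400, 100), k))     # 600.0
print(angle_deg((0, 0), (1, 1)))                  # 45.0
```

Once the scale factor is fixed by the calibration markers, every distance in the image inherits it, which is why accurate digitization of the calibration points matters so much.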

Assessments were conducted at the Laboratory of Biophysics, School of Physical Education, São Paulo University. All participants signed informed consent forms, and the project was approved by the Ethics Committee (758/02), School of Medicine, São Paulo University.


Five physical therapists (all women, aged 26 to 37 years) who had used the PAS/SAPO before but were not regular users of the software were invited to participate as raters. Raters were instructed in the use of the software and practiced by analyzing 8 pictures (4 from each subject). Each rater had 30 minutes to practice and could ask questions during this training phase. Pictures were calibrated according to distance and the guiding vertical line. Raters could use the zoom feature at their own discretion. The raters worked on desktop computers with the PAS software and optical mice.

For reliability analyses, all pictures were given to the investigators in random order (each investigator received a different sequence), and no time limit was established. Measured values are described in Table 1. After 1 week, procedures were repeated, and tests were compared to assess the intra-rater reliability.



Measurements of angles and distance

An object with known dimensions was marked with three styrofoam balls, each measuring 15 mm, placed at 90 degrees and 45 cm from each other. The object was photographed, and the pictures were given to the raters. Pictures were calibrated as previously described, and the raters performed the measurements using the PAS/SAPO. The values obtained with the software were compared to the known positions (actual values) on the object, and the differences were taken as the measurement error. The error was calculated as the difference between the rater's value and the actual value, and the mean across the five raters was calculated.
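The error computation can be illustrated with a short sketch; the rater readings below are invented for illustration and are not the study's data.

```python
import statistics

# Hypothetical readings of the known 90-degree angle on the test object;
# these numbers are illustrative, not data from the study.
actual_angle = 90.0
rater_angles = [90.1, 89.9, 90.2, 90.0, 90.3]

# Signed error per rater, then mean and standard deviation across raters
errors = [r - actual_angle for r in rater_angles]
mean_error = statistics.mean(errors)
sd_error = statistics.stdev(errors)
print(f"{mean_error:.2f} ± {sd_error:.2f} degrees")  # 0.10 ± 0.16 degrees
```

Reporting the mean together with the standard deviation, as in Table 4, shows both the systematic bias and the spread between raters.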

Data analysis

Analyses were conducted using Excel 2003, Minitab v.14 and Statistica v.7. The Shapiro-Wilk W test and Levene's test were used to assess the normality and homogeneity of the variables. The intraclass correlation coefficient (ICC) model 2.1 was employed for the inter-rater tests, and ICC model 3.1 for the intra-rater tests.30 The significance level was set at α = 0.05.
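As a sketch of the reliability statistic itself, ICC(2,1) can be computed from the two-way ANOVA mean squares following the Shrout and Fleiss conventions; the implementation and the example ratings below are illustrative, not the study's analysis code.

```python
def icc_2_1(x):
    """ICC(2,1): two-way random effects, absolute agreement, single measure
    (Shrout & Fleiss conventions). x is an n-subjects x k-raters matrix."""
    n, k = len(x), len(x[0])
    grand = sum(sum(row) for row in x) / (n * k)
    row_means = [sum(row) / k for row in x]
    col_means = [sum(x[i][j] for i in range(n)) / n for j in range(k)]
    # Mean squares from the two-way ANOVA decomposition
    msr = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)   # subjects
    msc = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)   # raters
    sse = sum((x[i][j] - row_means[i] - col_means[j] + grand) ** 2
              for i in range(n) for j in range(k))
    mse = sse / ((n - 1) * (k - 1))                                # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Illustrative ratings (5 subjects x 3 raters), not data from the study
ratings = [[9.0, 9.2, 9.1],
           [7.8, 8.0, 8.1],
           [6.5, 6.4, 6.6],
           [8.8, 9.0, 8.9],
           [5.9, 6.1, 6.0]]
print(round(icc_2_1(ratings), 3))  # close agreement between raters -> ICC near 1
```

ICC(3,1), used here for the intra-rater tests, differs only in treating the raters as fixed effects, which changes the denominator of the same mean-squares expression.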

RESULTS
Inter-rater reliability (reproducibility)

Table 1 displays the inter-rater reliability (mean, standard error, ICC 2.1). The ICC was classified according to the method of Wahlund, List and Dworkin:31 ICCs below 0.70 were considered non-acceptable, 0.71-0.79 acceptable, 0.80-0.89 very good, and 0.90 or above excellent. Of the total measurements, 41% had excellent reliability, 35% very good, 10% acceptable, and 14% non-acceptable.
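These cutoffs can be expressed as a small helper. Note that the published thresholds leave values between 0.70 and 0.71 unassigned, so treating everything below 0.71 as non-acceptable is an assumption of this sketch.

```python
def classify_icc(icc):
    """Reliability categories after Wahlund, List and Dworkin, as applied here."""
    if icc >= 0.90:
        return "excellent"
    if icc >= 0.80:
        return "very good"
    if icc >= 0.71:
        return "acceptable"
    return "non-acceptable"  # includes the 0.70-0.71 gap (assumption)

print(classify_icc(0.93))  # excellent
print(classify_icc(0.84))  # very good
print(classify_icc(0.65))  # non-acceptable
```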

Intra-rater reliability (repeatability)

Table 2 displays the intra-rater reliability (mean, standard error, and ICC 3.1) for each of the 5 raters, classified according to the method of Wahlund, List and Dworkin:31 ICCs below 0.70 were considered non-acceptable, 0.71-0.79 acceptable, 0.80-0.89 very good, and 0.90 or above excellent.

Table 3 summarizes measurements as non-acceptable, acceptable, very good, or excellent (using the same parameters described above). The measurements of rater 5 were the most precise (62.1% excellent), and the measurements from rater 2 were the least precise (44.8% as non-acceptable).



Table 4 summarizes the data for measurements of angles and distance. The mean error for angular measurements was 0.11 ± 0.32 degrees; for distance, it was 1.8 ± 0.9 mm.




DISCUSSION
The present study measured the accuracy of the PAS as well as its inter- and intra-rater reliabilities. We found the PAS to be a reliable tool for postural analysis: inter-rater and intra-rater agreement were very good or excellent for 75% (22 variables) and 64.8% (20 variables) of the measurements, respectively. The software was also accurate for measurements of angles and distances.

Inter-rater reliability (reproducibility)

Only 4 out of 29 assessed variables were classified as non-acceptable (ICCs < 0.70): 3 for angles and 1 for distance. The calibration of the system and/or the body region being examined may have contributed to the non-acceptable status of these variables. The use of the plumb line (true vertical line) to calibrate the system was required for some of the measurements, such as the horizontal alignment of the head. To execute this calibration, the rater had to click on two reference points on the plumb line to inform the software which line in the image was the true vertical line. Occasional deviations during identification of the markers during the calibration procedure may have increased the error of the final measurements. Indeed, the 4 non-acceptable variables required calibration using the vertical line. According to Dunk et al.,21 the calibration of the vertical line is imprecise compared to biological references because the inherent error that occurs when measuring the vertical line is added to the error incurred when measuring the anatomic markers, thus biasing the results. Interestingly, 8 of the 11 parameters that were classified as excellent did not require calibration (i.e., they only involved biological markers).
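The magnitude of this calibration error can be illustrated: a small pixel deviation when clicking one of the two plumb-line markers tilts the software's vertical reference by roughly atan(error/length), and that tilt propagates into every angle measured against the reference. The pixel values below are assumed for illustration, not taken from the study.

```python
import math

def reference_tilt_deg(click_error_px, line_length_px):
    """Tilt of the calibrated vertical reference when one plumb-line
    endpoint is digitized click_error_px pixels off the true line."""
    return math.degrees(math.atan2(click_error_px, line_length_px))

# A 3-pixel click error on a 500-pixel plumb line (assumed values):
# every angle measured against the vertical inherits this tilt.
print(round(reference_tilt_deg(3, 500), 2))  # 0.34
```

A tilt of a few tenths of a degree is on the same order as the mean angular error reported in Table 4, which is consistent with the vertical-line calibration being a meaningful error source for the affected variables.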

The position of the head may also have influenced the results because small changes in the biomechanics of the neck (e.g., in the tragus or C7) can cause small rotations and inclinations that could impact the correct visualization of the markers when observed from the sagittal or coronal plane. These rotations may partially hide some markers, making their visualization and digitalization more difficult. Out of the four variables with the worst results, three were assessed from the frontal plane and were on the first picture, suggesting that performance improved with training.

Despite the limitations discussed above, we concluded that inter- and intra-rater reliability were high. Similar results were obtained by Niekerk et al.32 when pictures were used for assessment of head, shoulder and chest positions. In addition, Iunes et al.33 found similar results when assessing global posture while standing. Dunk et al.20 investigated the importance of digitalization techniques and the use of reflexive markers for postural analyses and found ICCs varying from non-acceptable to acceptable for posture variables.

The size of the adhesive markers and the accuracy of the zoom may also have influenced the precision of the measurements. We placed small spherical markers (15 mm) on the skin as close as possible to the anatomical structure in an attempt to increase precision. Furthermore, PAS included a zoom feature that was used at the discretion of each rater, which could have introduced additional variability between raters.

Intra-rater reliability (repeatability)

The data also suggested good intra-rater reliability because 68.4% of the measurements were considered very good or excellent. Dunk et al.21 found greater inaccuracy, with ICCs ranging from 0.157 to 0.837. Table 3 shows that raters 2 and 3 had the worst results, but even the worst rater (rater 2) had 55% of the assessments considered acceptable, very good or excellent. Rater 5 had the highest reliability (62% excellent). Interestingly, raters 2 and 3 were older than the other raters, and they may have had less exposure to computer science during their professional careers. Rater 5, who had the best scores, was younger than raters 2 and 3 and had more computer experience. These findings suggest that increased experience with computers and software impacted the performance with PAS. Although the software was easy to use and all of the raters had been previously trained by the investigator, these factors may have influenced our results.

Validation for measuring angles and distance

We found good accuracy for measuring angles (error of 0.11 degrees) and distance (error of 1.8 mm). Compared with software that assesses body regions in 3D, the PAS was more accurate. For example, one such program (PosturePrint) yielded errors of 1.2 degrees for rotation and 1.6 mm for translation of the chest,26 and 1.38 degrees for rotation and 1.1 mm for translation of the head.23

It may be hypothesized that these small errors are the result of mathematical approximations of computerized methods and that these errors are of little clinical relevance.26 Accordingly, the PAS may be considered a reliable instrument.



CONCLUSION
The PAS/SAPO is accurate for measuring angles and distances, has good inter- and intra-rater reliabilities, and should be considered a useful and reliable tool for measuring posture.



ACKNOWLEDGMENTS
We would like to thank the CNPq for their financial support.



REFERENCES
1. Gangnet N, Pomero V, Dumas R, Skalli W, Vital JM. Variability of the spine and pelvis location with respect to the gravity line: a three-dimensional stereoradiographic study using a force platform. Surg Radiol Anat. 2003;25:424-33.

2. McEvoy MP, Grimmer K. Reliability of upright posture measurements in primary school children. BMC Musculoskelet Disord. 2005;6:35.

3. Harber P, Bloswick D, Beck J, Peña L, Baker D, Lee J. Supermarket checker motions and cumulative trauma risk. J Occup Med. 1993;35:805-11.

4. Genaidy AM, Karwowski W. The effects of neutral posture deviations on perceived joint discomfort ratings in sitting and standing postures. Ergonomics. 1993;36:785-92.

5. Rys M, Konz S. Standing. Ergonomics. 1994;37:676-87.

6. Emami MJ, Ghahramani MH, Abdinejad F, Namazi H. Q-angle: an invaluable parameter for evaluation of anterior knee pain. Arch Iran Med. 2007;10:24-6.

7. Tsuji T, Matsuyama Y, Goto M, Yimin Y, Sato K, Hasegawa Y, et al. Knee-spine syndrome: correlation between sacral inclination and patello-femoral joint pain. J Orthop Sci. 2002;7:519-23.

8. Lennon J, Shealy N, Cady RK, Matta W, Cox R, Simpson WF. Postural and respiratory modulation of autonomic function, pain, and health. Am J Pain Manag. 1994;4:36-9.

9. O'Brien K, Culham E, Pickles B. Balance and skeletal alignment in a group of elderly female fallers and nonfallers. J Gerontol A Biol Sci Med Sci. 1997;52:B221-26.

10. Balzini L, Vannucchi L, Benvenuti F, Benucci M, Monni M, Cappozzo A, et al. Clinical characteristics of flexed posture in elderly women. J Am Geriatr Soc. 2003;51:1419-26.

11. Lynn SG, Sinaki M, Westerlind KC. Balance characteristics of persons with osteoporosis. Arch Phys Med Rehabil. 1997;78:273-7.

12. Yip CH, Chiu TT, Poon AT. The relationship between head posture and severity and disability of patients with neck pain. Man Ther. 2008;13:148-54.

13. Festa F, Tecco S, Dolci M, Ciufolo F, Di Meo S, Filippi MR, et al. Relationship between cervical lordosis and facial morphology in Caucasian women with a skeletal class II malocclusion: a cross-sectional study. Cranio. 2003;21:121-9.

14. Carr EK, Kenney FD, Wilson-Barnett J, Newham DJ. Inter-rater reliability of postural observation after stroke. Clin Rehabil. 1999;13:229-42.

15. Muto T, Takeda S, Kanazawa M, Yamazaki A, Fujiwara Y, Mizoguchi I. The effect of head posture on the pharyngeal airway space (PAS). Int J Oral Maxillofac Surg. 2002;31:579-83.

16. Akamaru T, Kawahara N, Tim Yoon S, Minamide A, Su Kim K, Tomita K, et al. Adjacent segment motion after a simulated lumbar fusion in different sagittal alignments: a biomechanical analysis. Spine. 2003;28:1560-6.

17. Cocchiarella L, Anderson GBJ. Guides to the evaluation of permanent impairment. AMA Press; 2001. p.375-422.

18. Van Maanen CJ, Zonnenberg AJ, Elvers JW, Oostendorp RA. Intra/interrater reliability of measurements on body posture photographs. Cranio. 1996;14:326-31.

19. Fedorak C, Ashworth N, Marshall J, Paull H. Reliability of the visual assessment of cervical and lumbar lordosis: How good are we? Spine. 2003;28:1857-9.

20. Dunk NM, Chung YY, Compton DS, Callaghan JP. The reliability of quantifying upright standing postures as a baseline diagnostic clinical tool. J Manipulative Physiol Ther. 2004;27:91-6.

21. Dunk NM, Lalonde J, Callaghan JP. Implications for the use of postural analysis as a clinical diagnostic tool: reliability of quantifying upright standing spinal postures from photographic images. J Manipulative Physiol Ther. 2005;28:386-92.

22. Strimpakos N, Sakellari V, Gioftsos G, Papathanasiou M, Brountzos E, Kelekis D, et al. Cervical spine ROM measurements: optimizing the testing protocol by using a 3D ultrasound-based motion analysis system. Cephalalgia. 2005;25:1133-45.

23. Janik TJ, Harrison DE, Cailliet R, Harrison DD, Normand MC, Perron DL. Validity of a computer postural analysis to estimate 3-dimensional rotations and translations of the head from three 2-dimensional digital images. J Manipulative Physiol Ther. 2007;30:124-9.

24. Normand MC, Descarreaux M, Harrison DD, Harrison DE, Perron DL, Ferrantelli JR, et al. Three dimensional evaluation of posture in standing with the PosturePrint: an intra- and inter-examiner reliability study. Chiropr Osteopat. 2007;15:15.

25. Harrison DE, Janik TJ, Cailliet R, Harrison DD, Normand MC, Perron DL, et al. Upright static pelvic posture as rotations and translations in 3-dimensional from three 2-dimensional digital images: validation of a computerized analysis. J Manipulative Physiol Ther. 2008;31:137-45.

26. Harrison DE, Janik TJ, Cailliet R, Harrison DD, Normand MC, Perron DL, et al. Validation of a computer analysis to determine 3-D rotations and translations of the rib cage in upright posture from three 2-D digital images. Eur Spine J. 2007;16:213-8.

27. Ferreira EAG. Postura e controle postural: desenvolvimento e aplicação de método quantitativo de avaliação postural [doctoral thesis]. São Paulo: Faculdade de Medicina da Universidade de São Paulo; 2006.

28. Olaru A, Farré JP, Balius R. Estudio de validación de un instrumento de evaluación postural (SAM, spinal analysis machine). Apunts Medicina de l'Esport. 2006;150:51-9.

29. McAlpine RT, Bettany-Saltikov JA, Warren JG. Computerized back postural assessment in physiotherapy practice: intra-rater and inter-rater reliability of the MIDAS system. J Back Musculoskelet Rehabil. 2009;22:173-8.

30. Weir JP. Quantifying test-retest reliability using the intraclass correlation coefficient and the SEM. J Strength Cond Res. 2005;19:231-40.

31. Wahlund K, List T, Dworkin SF. Temporomandibular disorders in children and adolescents: reliability of a questionnaire, clinical examination, and diagnosis. J Orofac Pain. 1998;12:42-51.

32. Niekerk SM, Louw Q, Vaughan C, Grimmer-Somers K, Schreve K. Photographic measurement of upper-body sitting posture of high school students: a reliability and validity study. BMC Musculoskelet Disord. 2008;9:13.

33. Iunes DH, Castro FA, Salgado HS, Moura IC, Oliveira AS, Bevilaqua-Grossi D. Confiabilidade intra e interexaminadores e repetibilidade da avaliação postural pela fotogrametria. Rev Bras Fisioter. 2005;9:327-34.



Received for publication on January 26, 2010
First review completed on February 24, 2010
Accepted for publication on April 12, 2010
