
Factors associated with student performance on the medical residency test

SUMMARY

OBJECTIVE:

To determine whether the scores of the Progress test, the Skills and Attitude test, and the medical internship are correlated with the medical residency exam performance of students who started medical school at the Federal University of São Paulo in 2009.

METHODS:

The scores of 684 Progress tests from years 1-6 of medical school, 111 Skills and Attitude exams (5th year), 228 performance coefficients for the 5th and 6th years of internship, and 211 scores on the medical residency exam were analyzed longitudinally. Correlations between scores were assessed by Pearson's correlation. Factors associated with medical residency scores were analyzed by linear regression.

RESULTS:

Scores of Progress tests from years 1-6 and the Skills and Attitude test showed at least one moderate and significant correlation with each other. The theoretical exam and final exam scores in the medical residency had a moderate correlation with performance in the internship. The score of the theoretical medical residency exam was associated with performance in internship year 6 (β=0.833; p<0.001), and the final medical residency exam score was associated with the Skills and Attitude score (β=0.587; p<0.001), the 5th-year internship score (β=0.060; p=0.025), and the 6th-year Progress test score (β=0.038; p=0.061).

CONCLUSIONS:

The scores of these tests showed significant correlations. The medical residency exam scores were positively associated with the student's performance in the internship and on the Skills test, with a tendency for the final medical residency exam score to be associated with the 6th-year Progress test.

KEYWORDS:
Medical school; Progress test; Medical internship; Medical residency

RESUMO

OBJETIVO:

Analisar a presença de correlação e associação entre as notas dos Testes de Progresso, provas de Habilidades e Atitudes e notas de desempenho no internato em relação às notas de Residência Médica (RM) de alunos ingressantes em 2009 no curso médico da Universidade Federal de São Paulo.

MÉTODOS:

Análise longitudinal de 684 notas de Testes de Progresso do 1º ao 6º ano, 111 de Habilidades e Atitudes (5º ano), 228 coeficientes de rendimento do 5º e 6º anos e 211 notas da Prova de Residência Médica. Analisou-se a correlação de Pearson entre as notas e os fatores associados às notas da RM por regressão linear.

RESULTADOS:

Os Testes de Progresso do 1º ao 6º ano e Habilidades apresentaram pelo menos uma correlação moderada e significante entre si. As notas da prova teórica e nota final da RM tiveram correlação moderada com as notas de desempenho no internato. A nota teórica da Prova de RM se associou ao desempenho no internato no 6º ano (β=0,833; p<0,001) e a nota final da Prova de RM se associou às notas da prova de Habilidades e Atitudes (β=0,587; p<0,001), desempenho no 5º ano (β=0,060; p=0,025) e Testes de Progresso do 6º ano (β=0,038; p=0,061).

CONCLUSÕES:

Houve correlação significante entre as notas das diversas provas. A nota da prova de Residência Médica se associou positivamente ao desempenho do aluno no internato e na prova de Habilidades, com tendência de associação do Teste de Progresso do 6º ano com o desempenho final na prova de RM.

PALAVRAS-CHAVE:
Faculdade de medicina; Teste de Progresso; Internato médico; Residência Médica

INTRODUCTION

The form and content of medical student evaluations should cover acquired knowledge as well as specific skills and elements of an affective nature, such as attitudes towards professional practice.1 The “Miller pyramid” is a model that defines learning outcomes in terms of the skills students should acquire; it distinguishes the cognitive bases (“knows” and “knows how”) of professional practice (“does”) and highlights the need to evaluate practical skills and competencies (“shows how”).2

“Knows” refers to methods that evaluate knowledge that can be retrieved from memory and that supports the construction of more complex capabilities. Both “knows” and “knows how” belong to the cognitive domain and therefore should be evaluated by methods appropriate for measuring knowledge acquisition. However, they differ in the nature of the knowledge involved: “knows” relates to the theoretical domain, whereas “knows how” is applied knowledge. Thus, tests aimed at this stratum should target the use of knowledge for decision-making and problem-solving within a clinical context. “Shows how” refers to the evaluation of clinical skills and competencies performed in the context of training; this skill set is usually evaluated through practical exams involving clinical tasks. The evaluation of “does” corresponds to practice in the work environment and is assessed at the end of the course, during the professional training stages, when the student exercises clinical practice under supervision.

Among the various forms of applied evaluations, the Paulista School of Medicine of the Federal University of São Paulo (EPM-UNIFESP) has evaluated the performance of students using the Progress test (PT), the Skills and Attitude test (SA), and the performance coefficient (PC) during the medical internship.

In this context, the objective of this study was to determine whether the PT scores from the first to the sixth year of medical school, the SA score, and the PC during the medical internship were correlated with the medical residency (MR) scores of the students who started medical school at the Paulista School of Medicine - Federal University of São Paulo (EPM-UNIFESP) in 2009.

METHODS

This was a retrospective study that employed a longitudinal analysis of the PT scores, the SA score, the PC during the 5th and 6th medical school years (PC 5th and PC 6th), and the MR exam. This study was approved by the Research Ethics Committee of UNIFESP. The data included the scores on the PT from the 1st to the 6th medical school years, the score on the SA taken in the 5th year, the PC in the 5th and 6th years, and the score on the theoretical test and the final result of the 2015 MR exam among the students who were enrolled in the 1st year of medical school at EPM-UNIFESP in 2009.

The PT, which was voluntarily taken by the students, consisted of 120 multiple-choice questions about medicine and was written and administered together with the institutions that are part of the Núcleo Interinstitucional de Estudos e Práticas de Avaliação em Educação Médica (Interinstitutional Center for Studies and Practices of Medical Training Evaluation).

The voluntary SA consisted of 10 stations with different clinical tasks, such as obtaining a clinical history, performing the clinical examination, evaluating a radiograph or electrocardiographic tracing, and giving the diagnosis and/or instructing the patient. This test was designed and administered similarly to the Objective Structured Clinical Examination (OSCE).

The PC is an index that measures academic performance at the end of each academic period. It is calculated based on the final grade and workload in each curricular unit.
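
As an illustration of this calculation, a minimal sketch, assuming the PC is a simple workload-weighted average of the final grades (the exact institutional formula is not detailed here):

PC = Σ (final grade of curricular unit i × workload of unit i) / Σ (workload of unit i), with the sum taken over all curricular units of the academic period.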

The MR test was performed in two phases. The first phase comprised a theoretical test with 100 questions requiring short answers and a computerized test with 50 image-based questions. The second phase consisted of four practical stations (12 boxes). The final result of the MR test was the weighted sum of the theoretical test score (weight 5), the practical test score (weight 4), and the interview score (weight 1).
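
Assuming the stated weights are simply normalized by their sum (an assumption; the normalization is not made explicit in the text), the final result would take the usual weighted-average form:

Final MR score = (5 × theoretical score + 4 × practical score + 1 × interview score) / 10

For example, hypothetical scores of 7.0 (theoretical), 8.0 (practical), and 9.0 (interview) would yield (5 × 7.0 + 4 × 8.0 + 1 × 9.0) / 10 = 7.6.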

The scores of students who did not complete the PT in all years of the medical course were excluded.

Reliability analysis of the collected data was performed, the correlations between the scores of all tests were calculated, and the factors associated with the scores of the theoretical test and the final result of the MR test were identified.

Statistical analysis

The numerical variables are expressed as the mean and standard deviation and were compared by the paired t-test. Data reliability was assessed by Cronbach's α coefficient,3 which was considered adequate when > 0.7.4 For the correlation analysis, Pearson's correlation coefficient was calculated, with 0.1 to 0.3 considered a weak correlation, 0.3 to 0.6 a moderate correlation, and 0.7 to 1.0 a strong correlation.5
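
The analyses in this study were run in SPSS; purely to illustrate the procedures described above, a minimal Python sketch is shown below. The data frame, the column names, and the libraries used are illustrative assumptions, not the authors' actual workflow.

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr


def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a table whose columns are the individual test scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)


# Hypothetical data: one row per student, one column per score
# (PT years 1-6, SA, PC 5th, PC 6th, MR theoretical test, MR final result).
rng = np.random.default_rng(0)
scores = pd.DataFrame(rng.uniform(0, 10, size=(114, 11)),
                      columns=["PT1", "PT2", "PT3", "PT4", "PT5", "PT6",
                               "SA", "PC5", "PC6", "MR_TT", "MR_FT"])

print("Cronbach's alpha:", round(cronbach_alpha(scores), 3))

# Pairwise Pearson correlation, e.g., 6th-year PT vs. final MR result.
r, p = pearsonr(scores["PT6"], scores["MR_FT"])
print(f"Pearson r = {r:.3f} (p = {p:.3f})")
```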

Two linear regression models were constructed. The dependent variable of one model was the theoretical test score, while the dependent variable of the other was the final result of the MR exam. In the univariate linear regressions, each test score was entered separately as a predictor, and predictors with p < 0.2 were included in the multiple linear regression model.
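
The two-step model-building strategy (univariate screening at p < 0.2 followed by a multiple regression) could be sketched as follows; again, the random data, column names, and the statsmodels library are illustrative assumptions rather than the authors' SPSS procedure.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical data: one row per student, one column per score (names assumed).
rng = np.random.default_rng(0)
scores = pd.DataFrame(rng.uniform(0, 10, size=(114, 10)),
                      columns=["PT1", "PT2", "PT3", "PT4", "PT5", "PT6",
                               "SA", "PC5", "PC6", "MR_FT"])

# Step 1 - univariate screening: regress the final MR result on each candidate
# predictor separately and keep those with p < 0.2.
candidates = [c for c in scores.columns if c != "MR_FT"]
kept = []
for name in candidates:
    X = sm.add_constant(scores[[name]])
    fit = sm.OLS(scores["MR_FT"], X).fit()
    if fit.pvalues[name] < 0.2:
        kept.append(name)

# Step 2 - multiple linear regression with the retained predictors.
if kept:
    X = sm.add_constant(scores[kept])
    print(sm.OLS(scores["MR_FT"], X).fit().summary())
else:
    print("No predictor met the p < 0.2 screening threshold.")
```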

Statistical analyses were performed in SPSS® software (IBM SPSS Statistics, Somers, NY, USA), with a significance level of p < 0.05.

RESULTS

In 2009, 123 students enrolled in the 1st year of the medical school at EPM-UNIFESP. Of these, 114 (92.7%) performed all PT tests from 2009 to 2014, and 111 (90.2%) performed the SA in the 5th year. Of the 114 students included in the study, 106 (93.0%) completed the theoretical MR test, and 105 (92.1%) completed all stages of the MR tests at EPM-UNIFESP (Figure 1).

Figure 1
FLOWCHART OF THE STUDENTS INCLUDED IN THE STUDY. PT: PROGRESS TEST; PC: PERFORMANCE COEFFICIENT DURING THE INTERNSHIP; SA: SKILLS AND ATTITUDE TEST; MR TT: MEDICAL RESIDENCY THEORETICAL TEST; MR FT: MEDICAL RESIDENCY FINAL TEST.

The data reliability analysis showed internal consistency among the scores of all tests analyzed (α = 0.730, 95% CI: 0.646 to 0.802; p < 0.001). The α would rise to 0.749 (95% CI: 0.669 to 0.815; p < 0.001) if the 1st-year PT score or the SA score were deleted. Because this difference was small, the scores of all tests were retained in the analysis.

Some students received a score of 0 on the PT: four students in the 1st school year, one in the 2nd year, one in the 3rd year, two in the 4th year, six in the 5th year, and five in the 6th year. The mean ± SD PT scores (n = 114 for each year) were as follows: 1st year (2.67 ± 0.97), 2nd year (3.01 ± 0.70), 3rd year (4.19 ± 1.13), 4th year (4.01 ± 1.05), 5th year (5.19 ± 1.56), and 6th year (6.38 ± 1.80) (Figure 2). There was a significant increase (p < 0.05) between the PT scores of consecutive years, except from the 3rd to the 4th school year. Between the 1st and 6th school years, there was a considerable increase in the PT score (Figure 3).

Figure 2
MEAN ± STANDARD DEVIATION OF THE STUDENTS' SCORES ON THE PROGRESS TEST (PT) FROM THE 1ST TO THE 6TH SCHOOL YEAR, SKILLS AND ATTITUDE TEST (SA), PERFORMANCE COEFFICIENT (PC) OF THE 5TH AND 6TH SCHOOL YEARS, MEDICAL RESIDENCY THEORETICAL TEST (MR TT), AND MEDICAL RESIDENCY FINAL TEST (MR FT) (P < 0.05).
Figure 3
DIFFERENCE IN THE PROGRESS TEST MEANS (%).

The mean of the 111 SA scores was 7.53 ± 0.56. The PC 5th scores (n = 114) averaged 8.32 ± 0.29, and the PC 6th scores (n = 114) averaged 8.26 ± 0.28 (p < 0.001) (Figure 2).

Of the 114 students included in the study, 106 (93.0%) took the theoretical MR test. One student withdrew from the practical test and interview despite having a theoretical test score high enough to progress in the evaluation process. All 105 students (100%) who completed the evaluation passed the MR exam, with a mean theoretical test score of 7.53 ± 0.56 and a mean final MR result of 8.05 ± 0.42 (p < 0.001) (Figure 2).

The correlations between the scores obtained are shown in Table 1. The factors associated with the MR score were tested through two linear regression models: the first considered the theoretical MR test score as the dependent variable, while the other considered the final result of the MR test as the dependent variable (Tables 2 and 3). The final multiple linear regression model, which included the PT scores from the 4th to the 6th school years, the SA score, and the performance in the medical internship because they met the cutoff in the univariate linear regression (Table 2), showed that each additional point in PC 6th increased the theoretical test score by 0.833 points (Table 3). In the second multivariate regression model, which included the same variables (Table 2), each additional point in PC 5th increased the final MR score by 0.587 points, and each additional point on the SA increased it by 0.060 points. The 6th-year PT showed a trend toward association without statistical significance (Table 3).

TABLE 1
PEARSON'S CORRELATION BETWEEN THE SCORES OF THE PROGRESS TEST, THE SKILLS AND ATTITUDE TEST, THE PERFORMANCE COEFFICIENTS AT THE 5TH AND 6TH SCHOOL YEARS, AND THE SCORES OF THE THEORETICAL AND FINAL MEDICAL RESIDENCY TEST.
TABLE 2
UNIVARIATE LINEAR REGRESSION FOR FACTORS ASSOCIATED WITH THE SCORE ON THE THEORETICAL EXAMINATION AND THE SCORE ON THE FINAL MEDICAL RESIDENCY EXAMINATION AT UNIFESP.
TABLE 3
FINAL LINEAR REGRESSION MODEL OF VARIABLES ASSOCIATED WITH THE SCORES OF THE THEORETICAL EXAMINATION AND FINAL EXAMINATION OF MEDICAL RESIDENCY

DISCUSSION

In the present study, the PT scores showed a progressive increase over time, except between the 3rd and 4th years. Between the 1st and the 6th years, there was a substantial increase of 3.71 points on the PT, corresponding to an increase of 139%, which highlights the degree of knowledge acquisition during medical school. Similarly, a study conducted at the EPM from 1996 to 2001 showed a similar increase, of 2.79 to 3.90 points, in the PT score from the 1st to the 6th school year.6 At the State University of Londrina, there was an average increase of 2.89 points in the PT from the 1st to the 6th school year from 1998 to 2006.7
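
(For reference, the 139% figure follows directly from the means reported in Figure 2: (6.38 − 2.67) / 2.67 ≈ 1.39.)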

In general, the PT scores obtained in the present study were similar to those of other institutions. The 2015 national PT, administered to 23,065 students at 57 medical schools, showed scores of 32.38%, 35.23%, 39.71%, 44.85%, 51.98%, and 61.28% from the 1st to the 6th school years, respectively, which were higher than those at UNIFESP in the first two school years and lower than that at UNIFESP in the 6th school year.8 At the State University of Campinas (UNICAMP), the PT scores of 6th-year students from 2011 to 2014 had means of 6.7 ± 0.7, 6.1 ± 1.0, 6.6 ± 0.7, and 7.1 ± 0.7, respectively, which were slightly lower than the means of the present study.9 A study at the University of Missouri, USA, reported lower percentages of correct answers on the PT than those of the present study for the 1st to the 6th school years, with percentages of 6.1%, 16.1%, 30.7%, 41.6%, 50.9%, and 56.0%, respectively.10 Blake et al.11 reported the PT scores of three classes of students at McMaster University, which were 10-20% at the beginning of the course but increased almost linearly until reaching 50% on the 5th exam, 20 months later. The scores of the theoretical MR test in this study were similar to those observed at UNICAMP, which were 6.8 ± 0.8, 7.6 ± 0.9, 7.3 ± 0.8, and 7.5 ± 0.9 from 2011 to 2014, respectively.9

In this study, there was a weak correlation between the PT scores of the 1st and 2nd school years. From the 2nd school year onward, the most relevant correlations occurred between the closest years. There was a strong correlation between the 3rd and 4th school years and between the 5th and 6th school years, which suggests greater consistency in the knowledge acquired in the clinical area. However, the strength of the correlations decreased for the MR tests, which may reflect insufficient preparation for that exam at that point in the course.

Ferreira9 compared the PT score during the 6th school year with the score on the MR theoretical examination of UNICAMP students and observed moderate correlations of 0.588 in 2011, 0.610 in 2012, 0.671 in 2013, and 0.476 in 2014, in contrast to the present study, which showed a significant but weak correlation between the 6th school year PT score and the final MR score.

The internal consistency analysis showed that the scores of the PT, SA, PC 5th, PC 6th, and MR together had an α coefficient greater than 0.7. Another study showed that PT scores in medical school had a predictive validity of 0.6 for success on the medical licensing exam.11

In the present study, the removal of the 1st-year PT score or the SA score from the analysis increased the internal consistency of the data. This result suggests that the PT has a large number of questions considered “difficult” for 1st-year students and therefore has less discriminatory power for their performance. Regarding the SA, a possible explanation would be the different nature of this test in comparison with the PT, in addition to its smaller number of questions, since studies have highlighted the importance of the number of questions on this type of test.12

Other studies have found stronger predictive relationships between students' clinical performance and the OSCE when the test had 18 five-minute stations13 or 35 two-minute stations,14 which suggests that longer OSCE assessments may be better predictors of performance. A systematic review reinforced this hypothesis, finding that the best reliability was associated with a greater number of stations.15 In another systematic review, 12 of the 15 articles that reported the best relationships between OSCE scores and clinical performance were precisely those with the most questions.16

In the literature, the importance of the OSCE in health education programs is well established. It is an evaluation format specifically designed to provide a valid and reliable measure of students' clinical competence in a simulated environment.17 Reinforcing this idea, the final linear regression model obtained in the present study showed the importance of the SA score to the final result of the MR exam: each additional point on the SA increased the final MR score by 0.060 points. However, the variable most strongly associated with performance on the final MR exam was PC 5th, each point of which increased the final MR score by 0.58 points. Likewise, PC 6th was significantly associated with the score on the theoretical MR exam, each point of which increased that score by 0.83 points. A study at São Paulo State University from 2009 to 2011 showed a moderate correlation between the 6th-year PT and the score on the first, multiple-choice phase of the MR exam, but this correlation did not persist in the second phase of the exam, which consisted of an OSCE and an interview.18 Another study19 showed an association between good performance on the medical internship exam and student performance in clinical practice (0.8 points), in the theoretical course of clinical medicine (0.5 points), and in basic science (0.4 points). Similarly, other authors found that performance on previous tests influenced future results.20,21

The results of the present study suggest that students should be encouraged to take the PT and SA seriously to improve their performance year by year. In addition, greater participation in practical activities, especially in medical internships, can improve the final result on the MR test.

The strength of this study was the longitudinal analysis of the scores of the tests taken during the medical course and the statistical treatment applied, which confirmed results that would be expected in practice. The limitations were the inclusion of student grades from only a single period and the fact that the linear regression did not account for external factors, such as attendance at preparatory courses, that could affect performance on the MR test.

CONCLUSION

The data reliability obtained in this study was adequate, and the scores of the tests analyzed showed significant correlations with one another. Performance on the SA and in the internship was positively associated with performance on the MR tests.

REFERENCES

  • 1
    Norcini J, Boulet J. Methodological issues in the use of standardized patients for assessment. Teach Learn Med. 2003;15(4):293-7.
  • 2
    Miller GE. The assessment of clinical skills/competence/performance. Acad Med. 1990;65(9 Suppl):S63-7.
  • 3
    Cronbach LJ. Coefficient alpha and the internal structure of tests. Psychometrika. 1951;16:297-334.
  • 4
    Streiner DL. Being inconsistent about consistency: when coefficient alpha does and doesn't matter. J Pers Assess. 2003;80(3):217-22.
  • 5
    Dancey C, Reidy J. Estatística sem matemática para psicologia: usando SPSS para Windows. 3a. ed. Porto Alegre: Artmed; 2006.
  • 6
    Faccin MP. O Teste do Progresso como instrumento de avaliação da aquisição do conhecimento na graduação médica [Tese de Doutorado]. São Paulo: Universidade Federal de São Paulo; 2004.
  • 7
    Sakai MH, Ferreira Filho OF, Almeida MJ, Mashima DA, Marchese MC. Teste de Progresso e avaliação do curso: dez anos de experiência da medicina da Universidade Estadual de Londrina. Rev Bras Educ Med. 2008;32(2):254-63.
  • 8
    Bicudo AM. Experiência do Teste do Progresso no Brasil. VII Fórum Nacional de Ensino Médico. Brasília, 06 a 07 outubro 2016. [cited 2019 Jul 31]. Available from: http://www.eventos.cfm.org.br/images/stories/PDF/ensino2016/6out16angelica.pdf
    » http://www.eventos.cfm.org.br/images/stories/PDF/ensino2016/6out16angelica.pdf
  • 9
    Ferreira RC. Relação entre o desempenho no teste de progresso e na seleção para residência médica [Tese de Doutorado]. Campinas: Universidade Estadual de Campinas; 2019.
  • 10
    Willoughby TL, Hutcheson SJ. Edumetric validity of the quarterly profile examination. Educ Psychol Meas. 1978;38(4):1057-61.
  • 11
    Blake JM, Norman GR, Keane DR, Mueller B, Cunnington J, Didyk N. Introducing progress testing in McMaster University's problem-based medical curriculum: psychometric properties and effect of learning. Acad Med. 1996;71(9):1002-7.
  • 12
    Albanese M, Case SM. Progress testing: critical analysis and suggested practices. Adv Health Sci Educ Theory Pract. 2016;21(1):221-34.
  • 13
    Wilkinson TJ, Frampton CM. Comprehensive undergraduate medical assessments improve prediction of clinical performance. Med Educ. 2004;38(10):1111-6.
  • 14
    Graham R, Zubiaurre Bitzer LA, Anderson OR. Reliability and predictive validity of a comprehensive preclinical OSCE in dental education. J Dent Educ. 2013;77(2):161-7.
  • 15
    Brannick MT, Erol-Korkmaz HT, Prewett M. A systematic review of the reliability of objective structured clinical examination scores. Med Educ. 2011;45(12):1181-9.
  • 16
    Terry R, Hing W, Orr R, Milne N. Do coursework summative assessments predict clinical performance? A systematic review. BMC Med Educ. 2017;17(1):40.
  • 17
    Harden RM, Gleeson FA. Assessment of clinical competence using an objective structured clinical examination (OSCE). Med Educ. 1979;13(1):41-54.
  • 18
    Hamamoto Filho PT, Arruda Lourenção PLT, Valle AP, Abbade JF, Bicudo AM. The correlation between students' progress testing scores and their performance in a residency selection process. Med Sci Educ. 2019;29:1071-5.
  • 19
    Aa H, Esmaeili A. The validity of medical students' scores in their internship courses, a historical cohort study. JME. 2008;12(1,2):29-36.
  • 20
    Pearson SA, Rolfe IE, Henry RL. The relationship between assessment measures at Newcastle Medical School (Australia) and performance ratings during internship. Med Educ. 1998;32(1):40-5.
  • 21
    Andriole DA, Jeffe DB, Whelan AJ. What predicts surgical internship performance? Am J Surg. 2004;188(2):161-4.

Publication Dates

  • Publication in this collection
    06 Nov 2020
  • Date of issue
    Oct 2020

History

  • Received
    24 Mar 2020
  • Accepted
    21 Apr 2020