
Evolution of the cognitive achievement of Brazilian youth on Pisa

Sergei Suarez Dillon Soares (I); Paulo A. Meyer M. Nascimento (II)



(I) Technical Expert in Planning and Research of DISOC - Directorate for Social Studies and Policies (Diretoria de Estudos e Políticas Sociais) of IPEA - the Institute for Applied Economic Research (Instituto de Pesquisa Econômica Aplicada). sergei.soares@ipea.gov.br

(II) Technical Expert in Planning and Research of DISET - Directorate for Sectoral Studies and Policies, Innovation, Regulation and Infrastructure (Diretoria de Estudos e Políticas Setoriais, de Inovação, Regulação e Infraestrutura) of IPEA - the Institute for Applied Economic Research (Instituto de Pesquisa Econômica Aplicada). paulo.nascimento@ipea.gov.br

ABSTRACT

This article analyzes the evolution of young Brazilians' cognitive abilities as measured by the Program for International Student Assessment (Pisa). The results are very positive. Despite a considerable increase in the percentage of young people eligible to take the Pisa exam, and therefore a sharp reduction in selectivity, the Brazilian average rose 33 points over the past nine years. Brazil's relative position also improved: its average score rose from 75% to 80% of that of the original group of countries that took part in the assessment in 2000. In distributive terms, the improvement was most prominent at the bottom of the distribution of cognitive abilities: centiles in the lower tail of the mathematics distribution gained about 70 points, against roughly 30 points for centiles in the upper tail.

Keywords: Pisa; learning evaluation; cognitive development

Since the year 2000, the Organisation for Economic Co-operation and Development (OECD) has carried out a large-scale educational assessment among its member countries and in countries with which it has partnerships for this purpose. The assessment, the Program for International Student Assessment (PISA), is held every three years. The fourth and most recent edition took place in 2009, and its results were published in December 2010. The major merit of the program is to make publicly and freely accessible an international database containing information on cognitive performance that can be cross-referenced with contextual variables constructed from questionnaires answered by schools, families and the students themselves.

The design of PISA, and the fact that it has already been held four times with Brazil included on every occasion, makes it possible to explore a series of trends and informative comparisons, both from the point of view of human capital formation and from that of changes in educational inequalities. For example, because the test focuses on situations and challenges that require students to apply the knowledge, skills and competencies developed over their school careers, it gives an idea of how well prepared students are for the challenges they are likely to face in the following stages of their education and, above all, in their daily lives and in the job market. It also shows how these young people's performance has changed over time and allows it to be compared with the performance of young people in other countries. Changes in Brazil's performance, both over time and relative to other countries, can thus be tracked.

With regard to changes in inequalities, the four editions of the PISA show how variance in the performance of Brazilian students over time has evolved. It can thus be seen whether the gap between the best performing and the worst performing Brazilian students is growing (which would mean an increase in educational inequality) or shrinking (which would indicate a reduction in educational inequalities).

The discussion in this article is therefore organized as follows: first, the progress in the level of schooling of the population making up PISA's sampling universe in Brazil; second, Brazil's progress compared with other countries over successive editions of the exam; third, a closer look at Brazil's performance, ranking scores for the population that took the test into centiles, so as to detect possible changes in the gap between the best and worst performing students. The final section presents closing remarks.

THE INCREASE IN SCHOOLING FOR YOUNG PEOPLE FROM 15 TO 16 YEARS OF AGE

PISA samples a population aged 15 at the beginning of the year in which the exam is held. It also requires that examinees be enrolled in, and attending, at least the seventh year of a formal teaching institution. Therefore, 15-year-olds who have fewer than six completed years of schooling, or who are not enrolled in formal teaching institutions, are not part of the target population from which samples are drawn in the participating countries.

In an educational system like Brazil's, where grade repetition and dropping out are historically deep-rooted problems, this sampling methodology might place our students at a relative advantage. High failure and dropout rates would hypothetically tend to funnel the educational system, filling the final years of primary education and the whole of secondary education with only the most "capable" students, and would therefore, in countries where this culture predominates, lead to an over-representation of better-performing students in the final PISA sample.

However, the first four editions of PISA coincided with a historical moment in which Brazilian educational systems were gradually adopting a cycle-based approach,1 with continued progression from one year of study to the next. In 2006, the basic education census (Censo da Educação Básica) showed that 41.3% of state schools located in urban areas had adopted the cycle approach (MENEZES FILHO et al., 2008). This trend has been reflected in pass rates: in 2000, the year in which PISA was first held, the pass rate in primary education was 78.2%; in 2009, the year of the most recent edition, it was 85.2%,2 an increase of seven percentage points, which means that, in each year of primary school, an additional seven students out of every hundred enrolled were passing in 2009 compared with 2000.

This article does not propose to gauge the possible impact of the cycle approach and of continuous progression on measures of school performance, although the scarcity of, and need for, studies in this area in Brazil is acknowledged. Very few Brazilian studies have explored the issue using quantitative data; the effort of Menezes Filho et al. (2008) stands out: they found that the adoption of the cycle approach had a significant effect in reducing failure rates and in increasing pass rates at all levels of teaching. At the same time, however, when the same authors estimated the effects of continuous progression on the performance of fourth-year and eighth-year students in the so-called Prova Brasil, they found non-significant results in the first case and significantly negative impacts in the second. As Gomes (2005) argues, the available empirical evidence does not allow a categorical statement as to whether a non-graded approach is beneficial to student performance. The clearest effect of the spread of continuous progression through Brazilian educational systems is the increased level of schooling of the population at large.3

With regard to PISA, this means that with each subsequent application of the test, Brazilian students involved in it have higher and higher levels of schooling. Graph 1 shows that the level of schooling of young people making up the sampling universe of PISA in Brazil has grown.


When PISA was applied for the first time, in 2000, 62% of Brazilian 15 and 16-year-olds had the minimum schooling required to be part of the test's sampling universe. Over the decade, an increasingly large percentage of young people in that age bracket reached this threshold, so that in the most recent edition, in 2009, approximately 79% of the population in this age group were eligible. Did the expansion of PISA's sampling universe lead to a drop in the performance of Brazilian students over successive editions of the test?

MASTERY OF COGNITIVE SKILLS IN BRAZIL

The answer to this question is no, but it should be placed in the context of a wider examination of how the cognitive skills of young people have evolved over the last ten years in Brazil.

Although this text is inspired by and concentrates on the PISA test, it is not the only source on the cognitive performance of young people. Since 1995 Brazil has had the SAEB (Basic Education Evaluation System - Sistema de Avaliação do Ensino Básico), which evaluates the cognitive skills of children in the 4th (9-year-olds) and 8th (13-year-olds) grades of primary school and of young people in the final year of high school (17-year-olds), the group that interests us here. And since 1998 there has also been the ENEM (National Secondary School Examination - Exame Nacional do Ensino Médio), sat by young people finishing secondary school. Three features differentiate these exams: the use (or otherwise) of Item Response Theory; the philosophical approach followed in constructing the items; and the construction of the sampling universe.

ITEM RESPONSE THEORY

Item Response Theory (IRT) is what makes comparisons between different applications of a test possible. Before the 1950s, the only way of working with cognitive measurements was simply to count right and wrong answers to items. Two tests could never be compared, since they were incommensurable. IRT revolutionized the design of cognitive tests and of other types of test, enabling comparisons to be drawn between two tests and therefore between two or more moments in time.

The basic principle behind IRT is very simple: each individual has a latent, non-observable skill, conventionally called proficiency, which determines the likelihood of giving the right answer to a question or item that measures this skill. Both the difficulty of an item and the proficiency of a person may be expressed on a single scale (to learn more about IRT, see Andrade, Tavares and Valle, 2000; Klein, 2003; or Araújo, Andrade and Bortolotti, 2009).
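To make the principle concrete, the sketch below implements one common item response function, the three-parameter logistic model. It is an illustration of the general idea rather than the exact specification used by any of the exams discussed here; PISA's scaling relies on a Rasch-family model, which corresponds to the special case a = 1 and c = 0.

```python
import math

def p_correct(theta, b, a=1.0, c=0.0):
    """Probability that a student with proficiency `theta` answers correctly
    an item with difficulty `b`, discrimination `a` and guessing parameter `c`
    (three-parameter logistic model; a=1, c=0 gives the Rasch special case).
    Proficiency and difficulty are expressed on the same scale."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# Illustration: an average student (theta = 0) facing an easy and a hard item.
print(p_correct(0.0, b=-1.0))  # easy item -> probability ~0.73
print(p_correct(0.0, b=+2.0))  # hard item -> probability ~0.12
```

On this common scale, an item sits exactly "at the student's level" when theta equals b, the point at which the Rasch probability of a correct answer is 50%.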

PISA (OECD, 2000, 2003, 2006) and SAEB (KLEIN, 2003) are both constructed using IRT. ENEM, however, only began to use IRT in 2009 (BRASIL, 2009). This means that a complex statistical treatment of ENEM items prior to 2009 would be needed to make the tests comparable with one another.4 This would be too burdensome for this study, and we have therefore discarded ENEM as a source.
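For readers unfamiliar with how "anchor questions" place two applications of a test on a single scale (the procedure alluded to in note 4), the sketch below illustrates one classical linking method, mean-sigma linking. It is purely illustrative: the item difficulties are invented, and this is not the specific procedure documented for PISA, SAEB or ENEM.

```python
import numpy as np

# Hypothetical difficulty estimates of the SAME anchor items, calibrated
# separately in two test applications (scales X and Y are not yet comparable).
b_x = np.array([-1.2, -0.4, 0.3, 1.1, 1.8])   # year 1 calibration
b_y = np.array([-0.9, -0.1, 0.6, 1.5, 2.1])   # year 2 calibration

# Mean-sigma linking: find A, B such that theta_y ~ A * theta_x + B.
A = b_y.std(ddof=1) / b_x.std(ddof=1)
B = b_y.mean() - A * b_x.mean()

# Any proficiency estimated on scale X can now be expressed on scale Y.
theta_x = 0.5
theta_y = A * theta_x + B
print(f"A = {A:.3f}, B = {B:.3f}, theta on year-2 scale = {theta_y:.3f}")
```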

ITEM CONSTRUCTION

There are several differences between the philosophical approaches of PISA and ENEM, on the one hand, and of SAEB on the other. SAEB is designed to measure mastery of content directly, rather than its application, and is closely modeled on curriculum parameters. Examples of items can be found at http://www.inep.gov.br/web/saeb-e-prova-brasil/downloads. PISA and ENEM use items constructed to measure the application of knowledge to situations of practical life. See http://www.inep.gov.br/web/enem/provas for ENEM items, and http://www.gave.min-edu.pt/np3/134.html for PISA items.

A comparison between the two approaches shows that PISA and ENEM exams are much more interesting and come closer to the cognitive skills that are usable in real life or in the job market than the content-based SAEB test.

TARGET POPULATION

The major difference between SAEB and PISA, however, lies not in the philosophical approach but in the sampling. As mentioned, PISA samples individuals attending school who were born 16 years before the application of the exam and who are enrolled in any year, provided they are not lagging behind by more than three years. SAEB, on the other hand, tests individuals at the end of high school, whatever their age. Given the high degree of grade repetition in Brazil, these two target populations do not coincide.

Graph 2 shows changes, from 1995 onwards, in the target population of PISA and in the contingent of students in the final year of high school, who are therefore eligible to take the SAEB exam at the end of their third year. The lighter arrows mark years in which SAEB was applied, and the darker arrows mark years in which PISA was applied. The change in the PISA target population is consistent with Graph 1: reasonably large increases from 2000 to 2003 and from 2006 to 2009, and a more modest increase from 2003 to 2006. The increases in the SAEB target population have been much larger, occurring above all in the 1990s, while, as will be seen below, results fell drastically. Comparing the two target populations, the SAEB population virtually doubled between 1995 and 2009 for the 16-year-old bracket, whereas the PISA target population rose by approximately 50% for the same cohort.


What should be expected a priori from a comparison of the two target populations? As seen in the previous section, the fall in repetition and dropout rates has made the profile of secondary education less and less elitist: children and adolescents from increasingly underprivileged socio-economic backgrounds now make up the secondary school student body. In the specialized literature, the factor most strongly associated with student performance is precisely the socio-economic origin of the student's family (HANUSHEK; WÖSSMANN, 2011). At the same time, the high rates of repetition and dropout that have historically characterized the Brazilian educational system, and that have gradually come down over the last ten years, mostly affect children and adolescents from less favored socio-economic strata (RIBEIRO, 1993; LEON; MENEZES FILHO, 2002). Graciano and Haddad (2009) emphasize that PNAD data for 2006 show that the presence of young people between 15 and 17 in secondary school is over three times greater among the richest 20% of the population in that age bracket than among the poorest 20%.

As has been seen, a proportion of students who repeat remain eligible to take PISA, which covers the end of primary school, but are entirely excluded from SAEB, which covers only the last year of secondary school. This being the case, the extended access to secondary school brought about by the gradual reduction in repetition tends to be reflected more strongly in the student body sitting SAEB than in the one sitting PISA. As seen in Graph 2, the PISA target population is growing much less than that of SAEB. A priori, this should lead to qualitatively better changes in PISA than in SAEB, since the student body taking SAEB comes from an increasingly less privileged socio-economic background. The two panels of Graph 3 show that this is exactly the case.


Brazil's average score in PISA has risen steadily since 2000, when the exam was first held, reflecting an improvement in educational quality in Brazil. In SAEB, however, the score fell virtually continuously from 1995 to 2005, reflecting the weakening of the selection process caused by repetition. Only in 2007 and 2009 was the improvement in school performance stronger than the reduction in selectivity, producing an increase in the average score.

The difference between the two historical series suggests that the widely criticized flow-correction policies based on continued progression - often implemented by means of somewhat artificial expedients - have been correct, despite the usual resistance to their adoption. This resistance often comes even from parents and students themselves, as shown in studies such as Jacomini (2010). Furthermore, as can be seen in studies such as Earp (2009), failing students is still regarded by teachers and many specialists as an essential pedagogical measure, although empirical evidence, as Crahay (2006) states, has long testified against its efficacy in improving the learning of students in difficulty. In our view, Schwartzman's (2005) diagnosis remains valid: now that the expansion of school networks, which for many years preoccupied educational policymakers, has been achieved, the major challenges facing Brazilian education are repetition and the poor quality of teaching.

An education system cannot aim to provide cognitive skills only to those who, with great difficulty, progress through it; it must teach these skills to all children and adolescents. Seen from this point of view, PISA is undoubtedly the best examination to assess the general progress in skills and competencies developed by our adolescents. Despite the increase in the percentage of Brazil's population with the minimum schooling required to be part of the target population, the country's score has risen by 33 points (almost 10%) in the last ten years.

Having established that Brazil has improved in absolute terms, the question remains: what improvement has been made in relative terms? In other words, how does Brazil perform in international comparisons?

BRAZIL'S PROGRESS VIS-À-VIS OTHER COUNTRIES

Assessing Brazil's evolution vis-à-vis other countries with regard to the cognitive skills of its school population is by no means trivial. Several new countries join with each new round of PISA, and a few are left out owing to technical problems or political disagreements. It is as unenlightening to say that the number of countries scoring worse than Brazil rose from zero in 2000 to eleven in 2009 as it is to say that the number of countries scoring better rose from 30 to 54. Both effects are partly due to the fact that the 2000 assessment had 31 countries, while there were 66 in 2009.

Another obvious problem is that there are significant variations in proficiency that do not change relative positions and are therefore not captured by rankings. This is what happens in the case of Brazil when compared with the countries that took part in the 2000 edition. Both in 2000 and in 2009, Brazil ranked last among that group, even though its average score rose from 75% to 80% of the group's unweighted average for the test.

Naturally, the way to evaluate how much Brazil has improved or worsened is to construct a panel containing the same countries and observe the variations in their scores. This was done for two periods: 2000 to 2009, to measure longer trends, and 2006 to 2009, to compare recent results.
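As an illustration of what constructing such a panel involves, the sketch below keeps only the countries present in both editions being compared and computes each one's change in average score. The figures are placeholders: Brazil's change is set to match the 33-point increase reported in the text, while the other countries and values are invented for illustration only.

```python
# Illustrative sketch: build a balanced panel of countries present in two
# PISA editions and compute each country's change in average score.
# The figures below are placeholders, not actual PISA results.
pisa_2000 = {"Brazil": 368, "CountryA": 500, "CountryB": 470}
pisa_2009 = {"Brazil": 401, "CountryA": 495, "CountryB": 480, "CountryC": 410}

panel = sorted(set(pisa_2000) & set(pisa_2009))      # countries in both years
changes = {c: pisa_2009[c] - pisa_2000[c] for c in panel}

for country, delta in sorted(changes.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{country}: {delta:+d} points")
```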

Graph 4 shows the average score5 for PISA 2000 on the horizontal axis and the average score for PISA 2009 on the vertical axis. Consequently, countries whose scores increased from 2000 to 2009 lie above the diagonal, and those whose scores fell lie below it. Furthermore, the further towards the top right a country lies, the better its performance in the two years plotted.


Of the 31 countries taking part in both PISA 2000 and PISA 2009, only one (Luxembourg) saw a larger increase in its score than Brazil, whose average rose by 32.7 points. This sharp growth, however, was not enough to lift the country out of last position among those taking part in the test in both years: Brazil is the country closest to the bottom left among those represented in Graph 4.

Although the period covered in the graph is long enough for long-term trends to be observed, the group of countries covered is made up of countries very different from Brazil - Mexico is the only other Latin American country that took part in both assessments.

The set of countries taking part grew considerably from 2006. This can be seen in graph 5, which shows precisely the variation in average score from 2006 to 2009.


Brazil is no longer last among the set of countries that took part in both PISA 2006 and PISA 2009. Argentina, Azerbaijan, Colombia, Indonesia, Kyrgyzstan and Tunisia obtained lower average scores than Brazil's, but this merely reflects the fact that these countries joined the assessment only in later editions.

What is relevant is that Brazil's average score continues to rise (by 16.8 points). Only seven countries evolved more positively on this measure, among them Colombia, Italy, Portugal, Kyrgyzstan, Serbia and Turkey. Together with Brazil, Argentina, Colombia and Tunisia, they make up a group of countries whose scores are still low but are rising rapidly. In Latin America, however, Brazil still lags behind Mexico and, above all, Chile, whose score is 38 points higher.

An overview of the reasons for Brazil's good performance was set out in the second section of this study. Another reason, by no means incompatible with those, follows.

THE GREATER THE ADVANCE AMONG THE LOWER SOCIO-ECONOMIC STRATA, THE GREATER THE ADVANCE IN THE GENERAL AVERAGE

Any distribution is defined not only by an average, but also by the dispersion around it. Additionally, the average and the dispersion are not independent. In this section we will argue that, in learning, having a high average means the dispersion must be low.

The first argument rests on an international comparison. The challenge of international comparisons is to find a country whose features are comparable to Brazil's, but whose results are sufficiently different to teach us something. Excluding highly homogeneous countries such as Finland and Korea, Canada is the country whose students score highest on PISA. It is a large federative country whose ethnic diversity rivals that of Brazil, yet its students' average PISA scores have never been less than 100 points above the Brazilian averages, in any year or subject.

If we compare Canadian and Brazilian scores centile by centile, we obtain Graph 6. The graph shows reading scores for 2003, which illustrate our argument well, but the same qualitative conclusion holds for any year and any subject. Graph 6 shows that the difference between the best Brazilian students and the best Canadian students, albeit substantial, is around 70 points. The lower the centile, however, the greater the difference, so that the worst Brazilian students are 150 points below the worst Canadians.


In other words, half of Brazil's 125-point deficit relative to Canada could be closed by shifting the entire distribution upwards until Brazil's best students matched Canada's best. The other half of the gap could be closed by raising the lower tail of Brazil's distribution until our inequality matched that observed in Canada.

Graph 7 shows changes in the distribution of PISA scores in the three subjects tested (mathematics, sciences and reading) in 2000, 2003, 2006 and 2009 for Brazil. The horizontal axis shows the population that took the test, ranked from the worst performance upwards and grouped into centiles. The vertical axis shows the average score of each centile. Different behaviors can be seen for the different subjects.
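The construction of these curves can be summarized in a few lines: rank all examinees by score, split them into one hundred equal-sized groups and take each group's mean. The sketch below does this with simulated scores; the actual PISA microdata would additionally involve sampling weights and plausible values, which are omitted here for simplicity.

```python
import numpy as np

def centile_means(scores, n_groups=100):
    """Mean score of each centile group: rank examinees from worst to best,
    split them into `n_groups` equal-sized groups and average within groups."""
    ranked = np.sort(np.asarray(scores, dtype=float))
    return np.array([grp.mean() for grp in np.array_split(ranked, n_groups)])

# Simulated example standing in for one year's PISA scores in one subject.
rng = np.random.default_rng(0)
scores_2000 = rng.normal(370, 90, size=4_000)
curve_2000 = centile_means(scores_2000)
print(curve_2000[:3], curve_2000[-3:])   # lower-tail vs upper-tail centile means
```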


In the case of mathematics, a clear improvement both in the average and in the inequality can be observed. It can also be seen that, except between 2000 and 2003, the scores of students in the lower tail of the distribution (centiles to the left on the graph) grew more than those of the upper tail (centiles to the right).

In the case of sciences, there is a reduction in inequality and an increase in the average, but the improvement is not as clear as in the case of mathematics. Finally, in the case of reading, it is not clear whether there was any reduction of inequality at all, and the increase in the average is very modest.

Another way of observing the phenomenon is to measure the difference in performance, centile by centile, over time. There are two ways of doing this.

The first is the cumulative difference: choose a base year and observe the difference, centile by centile, between each subsequent year and the base year. This is shown in panel 1 of Graphs 8 to 10.


The second is to plot the difference between successive tests, once again centile by centile. Each curve then shows the gain or loss over a three-year period (the interval between editions of PISA). This is shown in panel 2 of Graphs 8 to 10.
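Given centile curves like the ones sketched above for each edition, both kinds of differences are straightforward to compute. The snippet below assumes the centile_means helper and the simulated scores_2000 array from the previous sketch, and invents scores for the other years purely for illustration.

```python
# Cumulative differences (panel 1): each year minus the 2000 base year.
# Successive differences (panel 2): each year minus the previous edition.
scores = {
    2000: scores_2000,
    2003: rng.normal(380, 92, size=4_000),
    2006: rng.normal(390, 88, size=4_000),
    2009: rng.normal(400, 85, size=4_000),
}
curves = {year: centile_means(s) for year, s in scores.items()}
years = sorted(curves)

cumulative = {y: curves[y] - curves[2000] for y in years[1:]}
successive = {y: curves[y] - curves[prev] for prev, y in zip(years, years[1:])}

print("2009 vs 2000, lowest centile:", round(cumulative[2009][0], 1))
print("2009 vs 2006, highest centile:", round(successive[2009][-1], 1))
```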

Graph 8 shows in greater detail what can be seen in the first panel of Graph 7: the considerable increase in the average for mathematics coincided with a reduction in inequality. From 2000 to 2003, the 90 lowest centiles gained approximately 20 points, but the ten highest centiles gained up to 50 points, in other words much more. Fortunately, this trend towards greater cognitive inequality was reversed in 2006: the dotted line shows that the lower centiles enjoyed the stronger performance improvements. The trend was maintained in 2009, whose curve shows that while the lowest decile gained 70 points (relative to 2000), the two upper deciles gained approximately 30. It should therefore come as no surprise that Brazil's average for mathematics rose by 52 points on the PISA scale while the standard deviation fell by 12 points.

The story is completely different for reading. Comparing 2009 with the horizontal axis (which is equivalent to 2000), it is clear that there has been an increase in inequality. The gains of the uppermost centiles, 30 to 40 points above the scores of the same centiles in 2000, are larger than those of the lower centiles, which gained 10 to 20 points. The distribution for 2006 is also more unequal than that for 2000.

There was, however, no clear trend. From 2000 to 2003, there was an increase in the average and a large increase in inequality. From 2003 to 2006, there was a slight fall in the average and a slight increase in inequality. It was only from 2006 to 2009 that the change went in the desired direction, with an increase in the average and a fall in inequality.

The net result is a relatively modest increase of 21 points in the average and an increase of 12 points in the standard deviation.

Finally, changes in the distribution of scores for sciences lie between what was observed for the distribution in mathematics and in reading. The distribution of scores in sciences in 2009 is clearly more equal than in 2000 and the increase in the average score for the period was 30 points. There was a slight increase in inequality from 2000 to 2003, but this was offset in 2009.

If we can draw any conclusions from this analysis, they are the same as in the McKinsey (2007) report on education, which affirms that "high-performing systems (...) are designed to ensure that every child is able to benefit". Put another way, an educational system cannot be good if its worst students learn very little. Brazil achieved greater gains in mathematics partly because it managed to reduce inequality in scores in that subject. Brazil obtained poor gains in reading because it did not manage to get the worst students to learn to read better.

FINAL REMARKS

The trends presented in this study suggest that, in the decade beginning in 2000, Brazil's population in the 15 and 16-year-old bracket advanced substantially in the development of skills and competencies in reading, mathematics and sciences. This is all the more important when it is remembered that these three subjects are commonly seen as the basic tripod for the development of other skills and abilities that are indispensable in the future stages of these young people's education and in their performance in the job market.

This advance in the cognitive skills measured by PISA occurred concomitantly with a "U"-shaped curve in the cognitive skills measured by SAEB, whose results for 2009 are still below those for 1995. The difference stems from the target populations of the two evaluations - while PISA samples 15 and 16-year-olds at school, even if they are behind their proper grades, SAEB samples students at the end of secondary school, even when over the proper age - and from the improvement in school flows seen over this period. The difference between the two evaluations also shows that the much-criticized flow-correction policies were right, because the goal of a country's educational system cannot be to improve cognitive skills only in those who manage to reach the end of secondary school; it must be to improve the cognitive abilities of all children.

However, Brazil's positive evolution in PISA has not yet been enough to promote significant leaps in its ranking vis-à-vis other countries. Generally speaking, the basic education of our young people remains of low quality. This makes it difficult for a significant contingent of young people, who would otherwise be capable of satisfactorily completing a university course and later holding down jobs that demand increasingly complex and changing competencies and skills, even to reach the gates of the university. Brazil's average performance remains well below that of most other countries.

Nevertheless, even though it is not captured by the rankings normally constructed from large-scale evaluations of this type, the improvement we report has proved consistent and intense. The gap separating Brazil from other countries has narrowed. Even more encouraging is the finding that, from the second edition of PISA onwards, an increasingly significant portion of Brazil's advance may be ascribed to young men and women from the lowest levels of the score distribution - a sign that Brazil's educational system is gradually reducing its long-standing inequalities. This is mainly due to mathematics, less so to sciences, and least of all to reading. It should also be noted that this has been the case even in the face of a growing trend among Brazilian teaching systems to adopt a cycle-based approach with automatic progression from one year to the next. This may be an indication, ultimately, that holding students back at different stages of their school careers reveals backwardness and lack of preparedness on the part of the school institution itself more than on the part of the student.

Received: AUGUST 2011

Approved for publication: SEPTEMBER 2011

TRANSLATED BY David Coles

REFERENCES

  • ANDRADE, Dalton F.; TAVARES, Heliton R.; VALLE, Raquel C. Teoria da Resposta ao Item: conceitos e aplicações. São Paulo: Associação Brasileira de Estatística, 2000.
  • ARAÚJO, Eulália A. C.; ANDRADE, Dalton F.; BORTOLOTTI, Silvana L. V. Teoria da Resposta ao Item. Revista da Escola de Enfermagem da USP, São Paulo, v. 43, n. esp., p. 1000-1008, dez. 2009.
  • BRASIL. Instituto Nacional de Estudos e Pesquisas Educacionais Anísio Teixeira. Sinopse Estatística da Educação Básica: 2001 e 2009. Disponível em: <http://www.inep.gov.br>. Acesso em: 20 jan. 2011.
  • BRASIL. Ministério da Educação. Proposta à Associação Nacional dos Dirigentes das Instituições Federais de Ensino Superior. Brasília, 2009. Disponível em: < http://portal.mec.gov.br/index.php?Itemid=310&id-13318&option=com_content&view=article>. Acesso em: 27 maio 2011.
  • CRAHAY, Marcel. É possível tirar conclusões sobre os efeitos da repetência? Cadernos de Pesquisa, São Paulo, v. 36, n. 127, p. 223-246, jan./abr. 2006.
  • DALBEN, Ângela I. L. F. Os Ciclos de formação como alternativa para a inclusão escolar. Revista Brasileira de Educação, Rio de Janeiro, v. 14, n. 40, p. 66-82, jan./abr. 2009.
  • EARP, Maria L. S. A Cultura da repetência em escolas cariocas. Ensaio: Avaliação e Políticas Públicas em Educação, Rio de Janeiro, v. 17, n. 65, p. 613-632, dez. 2009.
  • FERNANDES, Claudia O. A Necessária superação da dicotomia no debate séries-ciclos na escola obrigatória. Cadernos de Pesquisa, São Paulo, v. 40, n. 141, p. 881-894, set./dez. 2010.
  • FETZNER, Andréa R. Construção de uma outra escola possível. Rio de Janeiro: Wak, 2007a. (Ciclos em revista, v. 1)
  • ______. Implicações curriculares de uma escola não seriada. Rio de Janeiro: Wak, 2007b. (Ciclos em revista, v. 2)
  • ______. Aprendizagem em diálogo com as diferenças. Rio de Janeiro: Wak, 2007c. (Ciclos em revista, v. 3)
  • ______. (Org.) Avaliação: desejos, vozes, diálogo e processos. Rio de Janeiro: Wak, 2008. (Ciclos em revista, v. 4)
  • FUNDAÇÃO IBGE. Censo populacional 2000. Rio de Janeiro, 2000.
  • ______. Pesquisa nacional por amostra de domicílio. Rio de Janeiro, 1995-2009.
  • GOMES, Candido A. Desseriação escolar: alternativa para o sucesso? Ensaio: Avaliação e Políticas Públicas em Educação, Rio de Janeiro, v. 13, n. 46, p. 11-38, jan./mar. 2005.
  • GRACIANO, Mariângela; HADDAD, Sergio. Balanço e perspectivas do ensino médio no Brasil: Apresentação. In: KRAWCZYK, Nora. O Ensino médio no Brasil. São Paulo: Ação Educativa, 2009. p. 5-6.
  • HANUSHEK, Eric; WÖSSMANN, Ludger. The Economics of international differences in educational achievement. In: HANUSHEK, Eric A.; MACHIN, Stephen; WÖSSMANN, Ludger (Ed.). Handbook of the economics of education, 3. Amsterdam: North Holland, 2011. p. 89-200.
  • JACOMINI, Márcia A. Por que a maioria dos pais e alunos defende a reprovação? Cadernos de Pesquisa, São Paulo, v. 40, n. 141, p. 895-919, set./dez. 2010.
  • KLEIN, Ruben. Utilização da Teoria de Resposta ao Item no Sistema Nacional de Avaliação da Educação Básica (Saeb). Ensaio: Avaliação e Políticas Públicas em Educação, Rio de Janeiro, v. 11, n. 40, p. 283-296, jul./set. 2003.
  • LEON, Fernanda L. L.; MENEZES FILHO, Naércio A. Reprovação, avanço e evasão escolar no Brasil. Pesquisa e planejamento econômico, Rio de Janeiro, v. 32, n. 3, p. 417-452, dez. 2002.
  • MENEZES FILHO, Naércio et al. Avaliando o impacto da progressão continuada nas taxas de rendimento e desempenho escolar do Brasil. In: ENCONTRO ANUAL LACEA, 13., 20-22 nov. 2008, Rio de Janeiro. Mimeo.
  • MCKINSEY & COMPANY. How the world's best-performing school systems come out on top: Report. EUA: McKinsey & Company, set. 2007. Disponível em: <http://www.mckinsey.com/App_Media/Reports/SSO/Worlds_School_Systems_Final.pdf>. Acesso em: 27 maio 2011.
  • OCDE. Programme for International Student Assessment (Pisa). Disponível em: <http://www.pisa.oecd.org>. Acesso em: 20 jan. 2011.
  • ______. PISA 2000 technical report. Paris, 2000. Disponível em: <http://www.pisa.oecd.org>. Acesso em: 20 jan. 2011.
  • ______. PISA 2003 technical report. Paris, 2003. Disponível em: <http://www.pisa.oecd.org>. Acesso em: 20 jan. 2011.
  • ______. PISA 2006 technical report. Paris, 2006. Disponível em: <http://www.pisa.oecd.org>. Acesso em: 20 jan. 2011.
  • RIBEIRO, Sergio C. A Educação e a inserção do Brasil na modernidade. Cadernos de Pesquisa, São Paulo, n. 84, p. 63-82, fev. 1993.
  • SCHWARTZMAN, Simon. Os Desafios da educação no Brasil. In: BROCK, C.; SCHWARTZMAN, S. (Org.). Os Desafios da educação no Brasil. Rio de Janeiro: Nova Fronteira, 2005. p. 9-50.
  • SOUSA, Sandra Z. Ciclos: inclusão escolar? In: FETZNER, Andréa R. (Org.). Avaliação: desejos, vozes, diálogo e processos. Rio de Janeiro: Wak, 2008. p. 213-232. (Ciclos em revista, 4)
NOTES

1 See Fetzner (2007a, 2007b, 2007c, 2008) for a discussion of the regime of cycles. For a specific discussion of the organization of schoolwork in cycles as an inclusive policy attempting to address the problems of repeating years and dropping out of school, see Sousa (2008) and Dalben (2009). It is also worth reading Fernandes (2010), whose line of argument makes what should be an obvious point: the quality of schools does not hinge on the choice between years and cycles.

2 The pass rates used here have been taken from the statistical summaries for basic education (Educação Básica) for 2001 (referring to 2000) and 2009 (referring to 2009), available at www.inep.gov.br. The rate for 2000 was calculated as the proportion of students passing over the total number of students enrolled in primary school. The rate for 2009 is given directly in the synopsis.

3 Although, as Gomes (2005) stresses, continuous progression runs the risk of becoming a mere flow-correction policy when not accompanied by more assertive follow-up of students throughout their school careers in order to actually pursue quality and equity. Returning to Fernandes's (2010) point, mentioned in a previous note, the quality of a school goes beyond organizing progression on a year-by-year or cycle-by-cycle basis.

4 It would be necessary to identify virtually identical items in different years and use them as "anchor questions" so as to place all the tests on a single scale, and then re-estimate proficiency within ENEM on an IRT scale. This would take a research team a year to do.

5 These are the average scores for the three subjects: reading, mathematics and sciences.