Policies for the assessment of higher education in Brazil: a critical review
Carmen Lúcia Dias; Paulo Sergio Marchelli; Maria de Lourdes Morales Horiguela
Universidade do Estado de São Paulo
ABSTRACT
This work aims to contribute to the debate on the assessment of higher education in Brazil by presenting a study of the systems used to measure quality and productivity. Through bibliographical review and documental analysis, we analyze the origins of the assessment process, the historical sequence of the political debates that defined the work programs in this area, the methodological conceptions adopted by these programs, the measuring and follow-up instruments devised, and the systems of indicators created to evaluate the quality of teaching at higher education institutions, as well as students' performance. The discussion covers all assessment systems used in Brazil up to 2005, and concludes that, from the first procedures established, there has been a continuous evolution toward more accurate and efficient indicators. The results of the research that supports this work apply explicitly to the revision of the assessment instruments used in Brazil. The work suggests indicators hitherto not used in the historical assessment process, seeking to improve upon the current system.
Keywords: Higher education; Assessment policies; Quality indicators.
The present work offers a discussion of the origin and evolution in Brazil of the practices of assessment of Institutions of Higher Education (IHEs), with the purpose of conducting a critical analysis of the indicators employed in such processes. It rests mainly on the study of the legal documents produced since the establishment of these practices, presented alongside a survey of the literature on the subject. The authors believe they have captured and described the chief trends of Brazilian thought about the policies created in the country for the evaluation of an education system whose important and much-needed expansion cannot take place without the establishment of specific standards of quality. The advances of the policies of the sector, particularly concerning the definition of quality indicators, are dealt with here based on the idea that their gaps can be filled by eliminating the natural discrepancies imposed by political agendas and replacing them with more consistent elements. The more specific objective of this work, which also limits its scope, is thus the analysis of the concept of assessment present in the legal documents according to the social and political circumstances that produced them, seen largely through the eyes of the authors surveyed, as well as the comparison between historically given indicators and others still absent from assessment practices, but proposed here as essential to advance the comprehension of the reality of the higher education system.
During the last decades, each country has adopted its own methodology to evaluate its higher education system; recent examples are found for England (Harvey, 2005), Malaysia (Alfan; Othman, 2005), Japan (Nguyen; Yoshinari; Shigeji, 2005), the Hong Kong Special Administrative Region of China (Mok, 2005), India (Stella, 2004), Chile (Lemaitre, 2004), Hungary (Rozsnyai, 2004) and South Africa (Strydom; Strydom, 2004). In Brazil, the political debates around the functioning of the higher education system have been going on since the late 1950s and early 1960s, a time when the democratic-populist character of the political regime triggered strong questioning of the university project (Sguissardi, 1997), giving rise to proposals to prioritize processes for the enhancement of teaching (Grego; Souza, 2004).
With the establishment of military rule in 1964, higher education policies aimed to guarantee Brazil's insertion in the multiple functionality of dependent capitalism, so that the impact of a restrictive scenario for the international economy deepened the educational crisis with student strikes, and justified a series of agreements between Brazil and the Agency for International Development (AID). For Romanelli (1978), through such agreements, called the MEC/USAID agreements, the country effectively surrendered the organization of its higher education system to foreign experts. In 1968, with the hardening of the military regime, two important documents were prepared as landmarks of the reform of higher education: the Atcon Plan and the Meira Mattos Committee Report (Amorim, 1991). The evaluation issues in the Atcon Plan rested on two dimensions: the first evoked the idealizing principles of an enterprise model of the university system; the second projected autonomy and independence for this system. However, in order to bring such autonomy into actual existence, the institutions would have to be changed into private foundations. The Meira Mattos Committee Report made an extensive assessment of the country's political, social and economic situation, proposing measures to respond to the social demands for access to the university and to inhibit the dissatisfaction of students and intellectuals. Based on the concept of profitability of the education system, the Report proposed a wide institutional restructuring with the purpose of obtaining higher performance from the school system with less injection of resources. Still at that time, the Department of University Matters of the Ministry for Education (MEC), today the Secretariat for Higher Education (SESu), began to publish annual reports presenting data on the situation of higher education, and offering instruments of analysis that, it was hoped, would be used in the assessment of the performance and development of each institution and of the whole system (Neiva, 1988).
The first texts specifically written on the theme of assessment reveal intense concern with the control of the quality of IHEs, given the hypothesis that their unchecked growth and the large number of enrolments they received would result in loss of quality. The problems that came up with the expansion of basic education in the 1960s were fundamental in prompting reflection on the accelerated creation of IHEs, which in the 1980s and 1990s would reach high quantitative levels. Given the high cost to the citizen of the private model of expansion of higher education, its qualitative assessment was more important than ever as a demonstration to society of the accountability of the public sector. In this way, the statements about the political benefits of the creation and improvement of the instruments to assess the quality of teaching were regarded as veritable institutional principles of post-military Brazilian democracy.
The first program subjected to political discussion and approved by the country emerged in 1983 under the title of University Reform Assessment Program (URAP), and was presented by the MEC as a result of the discussions within the then Federal Education Council (FEC) concerning the strikes that had taken place at the federal universities in the previous years (Cunha, 1997). The formulation of the URAP was influenced by the graduate sector, which in the early 1980s was already running an assessment system widely recognized for its quality. Undergraduate studies had nothing that resembled it, and
[...] they needed a mechanism that could point out to what extent the University Reform had actually occurred, what advantages had been gained, and what problems were faced by the several types of courses and institutions. (Dias, 2001, p. 71)
The methodology of the assessment consisted of the application of questionnaires to teachers, university authorities, and students, with the objective of gathering data on the didactic and administrative structure of the IHEs, as well as on the form adopted to respond to the expansion of enrolments, and the means employed to evaluate the activities of teaching, research and university extension. The analysis of the data gave priority to measuring the quality of the teaching staff, students, and administrative and support staff, scientific productivity, and the institutional links with the community.
Gonçalves Filho (2004) remarks that the assessment approaches that appeared in the USA based on neoliberal functionalism influenced the conception of the URAP in Brazil. The presuppositions of these approaches were associated with beliefs originating in the new views of democracy. Scattered research on learning in programs or systems had been taking place in Brazil since the 1970s. Notwithstanding their intensification in the 1990s, these studies were still generally fragmentary. And despite the many efforts made to face this situation, the Brazilian experience of assessment was never free from North American influence. North American authors inspired research in virtually every western country, and today they have more than half a century of theories and practices in this field. In the USA, the birth of large-scale assessments by the State lies in the post-World War II period, simultaneous with the process of construction of the Welfare State. Leite (1997) highlights the pertinence of the initiatives of assessment of higher education focused on student performance. According to that author, the year 1977 marked the beginning of the assessment of graduate studies in Brazil by the Coordination for the Improvement of Higher Education Personnel (CAPES), which influenced the assessment systems for undergraduate studies.
Assessment strategies in the New Republic
A year after its inception, the URAP was discontinued without an agreement having been reached about the data gathered. In the absence of such consensus, the MEC, responsible for the University Reform, was consumed by a fierce internal power struggle among the various political groups it housed, each claiming for itself the competence to decide what the nation should do with its universities.
In 1985, during the government of José Sarney, Marco Maciel was appointed Minister for Education, and set up the 24-member National Commission for the Reform of Higher Education. In the Report produced by this Commission (Ministério da Educação, 1985), the issue of an assessment of institutional quality covering the whole university community appeared for the first time, showing that the country was still a long way from the formulation of a political instrument that would please all national segments. The heterogeneity of the members of the Commission was blatant, and not all of them had university experience, giving rise to a turmoil that resulted in a diffuse report, composed of a number of disconnected texts about mismatched issues. The academic community closed ranks to prevent inadequate changes in the university, and the concept of autonomy gave the normative tone to the document. The Commission created by President Sarney did not give rise to any direct political action by the government.
At the end of 1985, a few months after finishing the Report, the National Commission for the Reform of Higher Education was dissolved, and early in 1986 Marco Maciel established the Executive Group for the Reform of Higher Education (GERES), with five members: one teacher, one representative from the MEC, one former rector, one researcher, and the director of CAPES. CAPES had developed several instruments specifically for the assessment of graduate courses and programs, and GERES intended to capitalize on this experience.
GERES elaborated a draft bill proposing a reformulation of the functioning of the system comprised by the federal IHEs. However, faced with the large number of criticisms it received, stemming mainly from fears that the government would feel released from its financial obligations, the President of Brazil, amidst the political difficulties that prevailed at the time of the Constituent Assembly, withdrew the draft from Congress and rewrote it merely as a set of guidelines for the reformulation of general government policies for higher education.
Even so, GERES stepped up the debates between university and government by establishing new assessment criteria on which the accreditation and re-accreditation of IHEs could be based. In these debates, the polemic was largely centered on the articulations established between the concepts of autonomy and assessment. The criteria presented were intended to evaluate the social responsibility of the institutions and, at the same time, give them more autonomy, including financial autonomy. In so doing, GERES echoed the watchword of the international monetary bodies, particularly the World Bank, spokesmen of the emergent neoliberal economy that proposed the reduction of public investment in education.
The theme of, and interest in, assessment gained much more strength from the moment that, all around the world, the crisis that has led governments to invest less and less in the social area, especially education, has become more acute. (Sobrinho, 1996, p. 20)
By the late 1980s, after the initial tribulations and disputes, the assessment of higher education was finally established as an instrument of political action of the State, reflecting the international moment with respect to educational institutions as a whole. This became more evident when, in 1987, the International Meeting on the Assessment of Higher Education took place (Encontro, 1988), promoted to discuss and analyze the models implemented in other countries, mainly Canada, France, England, and Japan. The conclusions of this important international event can be summarized in eight main points: 1) the assessment of the Brazilian higher education system is considered an imperative measure, and urgent procedures must be adopted to put it in place; 2) the assessment must at first focus on each undergraduate course, leaving to the universities the task of defining priority areas and establishing quality indicators; 3) the MEC must promote and encourage the processes of internal assessment and external assessment by peers; 4) the assessment of teaching has as its consequence the search for quality in the target academic activities, such as research and university extension; 5) the assessment indicators must be tailored to the specificities of each institution and to the different areas of knowledge; 6) the results must be divulged and published widely to society; 7) the assessment must be conducted under the highest standards of integrity and accuracy, so as to correspond to the desired levels of efficacy; and 8) the government must allocate, through the MEC, specific resources to support the assessment programs at the public universities.
In 1988, four other large meetings gave continuity to the process initiated by GERES; sponsored by the MEC and the SESu, they took place: in March, at the Federal University of Pará, with the participation of IHEs from the Amazon region and Pará; in May, at the Federal University of Santa Catarina, bringing together institutions from the Southern Region of the country; still in May, at the Federal University of Ceará, involving isolated institutions of the Northeast; and in September, at the State University of São Paulo, a meeting of a more regional character, but with the participation of other states. These meetings discussed the need to implement assessment procedures; the concern with the definition of quantitative or performance indicators was not yet present (Silva; Lourenço, 1998).
Albeit timid, the tentative steps taken so far to consolidate assessment policies at the IHEs helped to bring the country into step with the international scenario, where similar strategies for social and economic development had been part of other nations' plans since the late 1970s. The more visible cases in the 1980s were Chile's, in Latin America, and Margaret Thatcher's Britain, then a champion of neoliberal policies (Sobrinho, 1998). In 1987, the University of Brasília (UnB) began to organize its own internal self-assessment process, followed in 1988 by the Federal University of Paraná (UFPR) and, in the same year, by the University of São Paulo (USP). In 1991, the State University of Campinas (UNICAMP) carried out its self-assessment.
The assessment model developed in the 1990s
The hegemony of neoliberal policies in the 1990s had a strong impact on education, bringing international funding agencies, notably the World Bank, to set up proposals comprising teaching assessment as part of the strategies to be applied before lending money. In the drive to reduce state costs, public universities were expected to become more autonomous and to link up with market forces, producing knowledge that was useful and profitable as a condition for their survival within the competitive globalized society. Assessment was seen as a measurement and control instrument to respond to the efficiency and productivity expected of higher education under growing budget limitations. In 1994, the World Bank proposed the following guidelines as the summary of requisites for the concession of funding for higher education:
[...] to encourage the diversification of institutions of higher education, and the competitiveness (and not solidarity) between them; to stimulate the growth and expansion of private institutions; to make public universities extract more and more of their sustenance from the selling of services and from charging student fees; and to tie the funding from official bodies to criteria of efficiency and productivity in market terms. (Sobrinho, 1996, p. 16)
In July 1993, the SESu created the National Commission for the Assessment of Brazilian Universities with the task of implementing the internationally recommended political processes. This commission was coordinated by the Department of Higher Education Policy of the SESu, and brought together several segments: the National Association of the Directors of Federal Institutions of Higher Education (ANDIFES); the Brazilian Association of State and Municipal Universities (ABRUEM); the National Association of Private Universities (ANUP); the Brazilian Association of Catholic Schools (ABESC); the National Forums of Pro-rectors of Undergraduate Studies, Research and Graduate Studies; and the National Forums of Pro-rectors of Planning, Administration and University Extension. After the commission was set up, a Technical Advisory Committee was established, with experts dedicated to analyzing the projects coming from the universities. The position of the MEC in this process was to be one of coordination, articulation and funding of the institutional assessment, taking on the political stance of working in partnership with the universities.
It was within this context that the Program of Institutional Assessment of Brazilian Universities (PAIUB) appeared, conceiving self-assessment as the initial stage of a process that would reach all institutions and would be completed with external assessment. The basic principle of the PAIUB rests on the totality in which IHEs should be evaluated, so that
[...] all elements - teaching, research, extension, quality of classes, laboratories, teacher qualifications, services etc. - that comprise university life should be part of the assessment, so that it can be as complete as possible. (Dias, 2001, p. 79)
Apart from that, the PAIUB went in search of a language common to all IHEs in the country through the creation of a table of minimum institutional indicators for undergraduate education. Other important ideas that gave support to the program included: respect for institutional identity, taking into consideration the differences between the IHEs evaluated; absence of punishment or reward for results achieved; voluntary adhesion; search for ethical legitimacy of the process; and continuity of assessment activities with a view to integrate them into the institutional culture.
The PAIUB intended to establish new forms of dialogue between government and academia, trying to legitimize the culture of assessment and to promote perceptible changes in the dynamics of teaching. Despite its wide acceptance by the universities, its implementation was hampered by the interruption of support from the MEC, causing a reduction in the incentive programs and a concentration on internal evaluation objectives. Thus, the program came to a crossroads, and on October 10, 1996 the MEC issued Decree No. 2026 (Brasil, 1996a), establishing new procedures for the assessment of courses and institutions of higher education. The conclusion one reaches is that the PAIUB did not manage, during its brief existence, to fulfill the objective of acting as an effective instrument to measure the productivity of the Brazilian higher education system, in order to respond to the demands of the hegemonic neoliberal policies of competitiveness and market efficiency advocated by the international funding agencies, such as the World Bank.
The hub of the new decree issued after the PAIUB was the "analysis of the main indicators of global performance of the national higher education system, broken down by region and state, according to the areas of knowledge and the type or nature of the education institution" (Article 1, Section 1). This analysis was to be made by the Secretariat of Assessment and Educational Information of the MEC (SEDIAE), and would cover the following points:
I - schooling rates, gross and net; II - rates of availability and occupancy of school places; III - rates of dropout and of productivity; IV - average time for completion of courses; V - levels of qualification of the teaching staff; VI - average student-to-teacher ratio; VII - average size of classes; VIII - percentage of higher education in total education expenditure; IX - public expenditure per student in higher education; X - ratio of expenditure per student to Gross Domestic Product per capita in the public and private systems; and XI - participation of teacher wages in public expenditure. (Brasil, 1996a, Art. 3)
As is well known, the gross rate of schooling at a given level of teaching reflects the ratio between the total enrolment at that level, irrespective of age, and the total corresponding population. The net rate of schooling represents the number of students enrolled at a given level whose age falls within the theoretical age range for that level, expressed as a fraction of the total population of that age range. In other words, in the net rate of schooling the numerator and denominator refer to the same age group. In Brazil, the theoretical age range for a person in higher education is from 18 to 24 years.
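In symbols, the two rates defined above can be written as follows (a minimal formalization of the definitions in the text, taking the 18-24 bracket as the theoretical age range for higher education):

\[
\text{gross rate} = \frac{E_{\text{total}}}{P_{18\text{-}24}} \times 100\%,
\qquad
\text{net rate} = \frac{E_{18\text{-}24}}{P_{18\text{-}24}} \times 100\%,
\]

where \(E_{\text{total}}\) is the total enrolment in higher education irrespective of age, \(E_{18\text{-}24}\) the enrolment of students aged 18 to 24, and \(P_{18\text{-}24}\) the total population in that age bracket.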
It is difficult to see how the SEDIAE could use the concept of rate of schooling to comply with Decree No. 2026, for its statistical nature encompasses variables that go well beyond institutional specificities. According to the data gathered by the Higher Education Census (Ministério da Educação, 2004a) and the population projections of the Brazilian Institute of Geography and Statistics (IBGE, 2001), the gross rate of schooling of the Brazilian population in 2003 stood at 2.17%. The Higher Education Census of 2003 did not include the segmentation of enrolment by age group; neither did the IBGE publish separately the projection for the population in the 18 to 24 years bracket, so that there is not enough information to calculate the net rate of schooling. This rate has, however, been estimated for 2003 at somewhere between 9% and 10%. As we can see, this is a generic statistical quantification, referring to a global social indicator that hardly serves to assess the quality of teaching at each IHE. On the other hand, if quantity is taken as a dimension of quality, it is easy to understand why higher education in Brazil is considered so deficient.
Another difficulty was to consider the rates of availability and occupancy of places as a measure of the efficiency of the education system, for these variables are independent of the quality of the institutions and of their courses. Every institution must offer places, and their occupancy depends fundamentally on social and economic factors exogenous to the institution, associated with the conditions of access of the students. The same goes for the rates of dropout and the average time for finishing the courses, which can hardly be directly linked to the system's efficiency or lack thereof. Dropout and time to conclusion are consequences of students' conditions, usually depending on socioeconomic peculiarities largely external to the institutions.
Other points proposed by Decree No. 2026/96, such as the level of qualification of the teaching staff, the average student-to-teacher ratio, and the average size of classes, undoubtedly relate to institutional quality indicators, but there are no guarantees of precise numerical correlations: one cannot infer that an extremely qualified teaching staff will produce high coefficients of educational performance if they work under precarious conditions of, say, low wages. Conversely, a less qualified, but institutionally better structured staff may be much more motivated to teach.
The expenditure issues that appear in Sections VIII to XI are quite controversial, considering that the quantitative criteria are not defined. At first sight, the larger the share of public expenditure on higher education, the better the education will be. But the dominant neoliberal canons of the global economy defend a rate of efficiency defined by high yields with low investment. Thus, one does not know to what extent the public system should invest to increase its efficiency, so as to cater for the largest possible number of students with good quality. It would be desirable for the Decree to pay attention to the qualitative aspects of public expenditure in higher education, defining evaluation channels to establish more precisely the contours of the cost of quality education.
With regard to the "individual evaluation of institutions of higher education carried out by an external committee especially designated by the Secretariat for Higher Education (SESu)" (Art. 4), the Decree considers three central aspects. First, it refers to the "effectiveness of the functioning of collective bodies" (Section I). The assessment of this effectiveness is quite relative, opening the possibility of considering the number of meetings conducted, the number of proposals voted, or the institutional representation of the members, measured by the proportion between the numbers of teachers, non-teaching staff, students etc. However, none of these factors necessarily translates into quality of teaching, whence it can be surmised that collective bodies, even if effective and necessary for the proper functioning of the institution, can exist in such a way as to be completely inefficient. This begs the question: what does it objectively mean to evaluate the 'effectiveness' of the functioning of university collective bodies?
Section I, Article 4, of Decree 2026/96 also establishes that the individual assessment must give priority to the "relations between the controlling agency and the institution of education". We may ask: what is the meaning of "relations" in this context? Could they be placed on a scale from very poor, poor, acceptable, and good, to very good? Could they be measured, for example, by the quantitative ratio between the financial amounts offered by the controlling agency as student scholarships and the annual enrolment? Let us suppose, for instance, that an institution offers one student scholarship for every ten students enrolled, according to the criterion of the grade obtained in the university entrance exam. This is definitely a positive relation between the controlling agency and the institution of education, but its classification on a scale from very poor to very good is open to debate, because such an assessment depends heavily on the global average for the institutions, which is little known in Brazil. The above-mentioned Section also establishes that the individual assessment should give priority to the "efficiency of the activities carried out as means to the final objectives" of the general administration of the institutions. Now, the final objective of a teaching institution is teaching, and there is a long list of means used to that end. One may ask: what activities are more efficient as means in a teaching institution? Is a high level of qualification of the teaching staff more effective in reaching the institution's ends than the good functioning of the collective bodies? How does one place the values of the institutions of higher education into a hierarchy for the purposes of assessment?
Section II, Article 4, refers to the evaluation of academic administration, focusing on the "adequate definition of the curriculum of undergraduate courses and of the management of their execution; adequacy of the control of fulfillment of regimental requirements of execution of the curriculum; adequacy of the criteria and evaluation procedures of school performance". On these points, it can be inferred from Moreira (2002) that the assessment of the curricular field cannot be referred directly to a differentiation of approaches and specializations of a given type of undergraduate course, but that it also involves needs of an administrative order, comprising the organization and specific nature of each IHE. When it emerged in the USA at the turn of the 20th century, the field of curriculum drew on the principles of scientific management, whilst borrowing from Sociology and behavioral Psychology its basic assumptions and methodology.
Since the 1990s, cultural studies, postmodernism, post-structuralism, gender studies, race studies, and environmental studies, amongst others, became the reference for understanding the problems and issues involved in the field of curriculum in general. (Moreira, 2002, p. 95)
In the face of such complexity, it is not difficult to see why the assessment of the curriculum field has shown little evolution in Brazil until now, even considering the right guidelines offered by Decree 2026/96.
Another criterion established by Decree 2026/96 was the evaluation of social integration, with a view to quantifying the "degree of insertion of the institution in the local and regional communities through extension and service programs" (Article 4, Section III). The academic administrative systems of the institutions are centralized and usually do not compute separately the costs of teaching, research and extension. In fact, little was known about the data for each separate area, hampering knowledge of the efficiency of the Brazilian university, which is planned in a tripartite way. Given this state of affairs, an important question to be answered is: what is the average investment of universities in extension services? It is only from this information that we can quantify the relevance of the programs extended by the institutions to the community. The distribution of the averages would allow the attribution of an objective measure varying from very poor to very good, on a five-level scale, drawing a picture of the situation of all institutions with respect to extension.
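As an illustration of how such a scale could be built, the sketch below places hypothetical institutional averages of extension investment on the five levels by quintile of the distribution of averages. The data, the quintile rule, and the function names are our assumptions for illustration, not part of any official methodology:

    # Sketch: map each institution's average extension investment onto a
    # five-level scale (very poor .. very good) using quintiles of the
    # distribution of averages. All figures here are hypothetical.
    LEVELS = ["very poor", "poor", "acceptable", "good", "very good"]

    def five_level_scale(values):
        """Assign each value the quintile-based level of the distribution."""
        ranked = sorted(values)
        n = len(ranked)
        def level(v):
            below = sum(1 for x in ranked if x < v) / n  # fraction below v
            return LEVELS[min(int(below * 5), 4)]
        return [(v, level(v)) for v in values]

    # Hypothetical average share of the budget invested in extension (%)
    averages = [0.5, 1.2, 2.0, 2.8, 3.5, 4.1, 5.0, 6.3, 7.7, 9.0]
    for value, grade in five_level_scale(averages):
        print(f"{value:4.1f}% -> {grade}")

Any such rule presupposes exactly what the text points out is missing: separately computed extension costs for each institution.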
Section IV of Article 4 establishes how the assessment of scientific, cultural, and technological production should be carried out, an issue that has lately given rise to huge controversy. Data from 2004 place Brazil 19th within a group of 31 countries that concentrate 98% of the most cited scientific articles in the world. The Brazilian share increased from 0.84% in the 1993-1997 period to 1.2% in the 1997-2001 period. This means an increase 45% above the average world performance, but it raises the question:
And why has this performance not been matched by a correspondingly expressive growth of our GDP in the same period? The answer is that it is not science (the creation of knowledge), as many would think, but the mastery of industrial technology (the competence in the use of knowledge to create innovations that make our industry more competitive) that makes an economy grow in a quick and sustainable fashion, as the Eastern countries have shown. And this competence in technological innovation is not measured by articles: it is internationally measured by the number of patents granted in the largest market, the North American one. (Férézou; Nicolsky, 2004, p. A3)
Little is known about the cost of Brazil's increased participation in world scientific production in terms of citations of Brazilian research. One has the impression that the investment in research made after 1997 was much larger in proportion to the results achieved, making the country gain only in absolute production, but lose ground when the expenses are taken into account. That is why, considering the data from the United States Patent and Trademark Office (USPTO), the Brazilian growth was minimal: just 1%. Some are more radical, and believe that the criterion adopted by Decree 2026/96 of assessing scientific productivity in terms of numbers of articles and citations translates into "an elitist and sterile process for the economic and social development of the country" (Férézou; Nicolsky, 2004, p. A3).
In its sole paragraph, Article 4 of the Decree establishes that each institution must present to the external evaluation commissions data obtained from an internal assessment process. These data, however, are still far from being produced, because the culture necessary to produce them has not yet developed in the country.
The evaluation of undergraduate courses "shall be made through the analysis of indicators established by the teaching expert commissions [...]" (Article 5). In practice, however, the work of the experts consisted only in the application of the famous five-level scale going from very poor to very good. It was, therefore, a subjective assessment, of an essentially qualitative basis, far from representing the objectivity contained in indicators defined by precise national statistics. Issues such as these show that the assessment of higher education in Brazil in the 1990s was far from achieving minimum levels of objectivity.
A relevant recommendation is the following:
The assessment of undergraduate courses carried out by the Expert Commissions designated by the SESu shall be preceded by a wide-ranging analysis of the situation of the corresponding academic or professional area, taking into account the international context and the national labor market. (Brasil, 1996a, Article 5)
In a situation where the national and international labor markets go through profound structural changes as a result of an economy in complete transformation, one may ask: How can we know that the student from a given course of a given institution is receiving the correct teaching to grant him/her an education appropriate to work in a labor market of uncertain future? If even the question is complicated to ask, we can imagine what the answer from the evaluator would be like!
As to the analysis of the conditions offered by the IHEs, the following shall be considered:
[...] I. the didactic-pedagogical organization; II. the adequacy of physical facilities in general; III. the adequacy of special facilities, such as laboratories, workshops, and other spaces needed to implement the curriculum; IV. the qualifications of the teaching staff; V. the libraries, with special attention to the bibliographic collection, including books and journals, working regime, modern services, and adequate spaces. (Brasil, 1996a, Article 6)
The didactic-pedagogical organization can be evaluated with the indicators supplied by the model of Dias (2001), which starts from the results of a study focused on the diagnosis of the pedagogical shortcomings of the teaching staff at a higher education institute in the area of the Health Sciences run by a private IHE. The study involved the whole body of teachers and students, chosen by statistical sampling. Closed questionnaires were used to investigate how students see their relationship with teachers, and whether they are favorable, indifferent, or unfavorable to these teachers' attitudes with regard to the methods, techniques, and evaluation systems they employ. The assessment of teacher performance by the students, using the questionnaire as a data-gathering tool, is a traditional form of collecting indicators about the didactic-pedagogical organization of higher education (Lampert, 1995; Ristoff, 1996; Silva; Lourenço, 1998). "The indifference or disapproval of the students is never without meaning, no matter how unsubstantiated it may seem to the teacher. Nobody can better explain their enthusiasm or difficulties than the students themselves" (Kourganoff, 1990, p. 260). Seeing and listening to the teacher in action, the students are the only direct witnesses of the teaching process, which allows them to make constructive comparisons. The questionnaires used as instruments must respect the institutional specificities and the historical context of the occasion.
The evaluation of the physical facilities in general and of the special facilities implies the analysis of the institution's architectural plans and the collection of raw data on the built area, comprising all spaces, including gardens, circulation areas and parking lots. The net didactic space comprises only laboratories, workshops, auditoriums, team meeting rooms, student supervision rooms, model offices, and "other spaces needed to implement the curriculum", as stated in Decree 2026/96. There are several spaces whose didactic nature is debatable, such as traditional classrooms containing teachers and students in an environment composed only of desks, blackboard, chalk and eraser. There are spaces that leave little doubt that they cannot be counted as didactic, mainly the teachers' common room inherited from European secondary schools, which is prevalent in our IHEs. In an IHE, individual teacher offices and their meeting rooms are important, and constitute authentic didactic spaces. There are other spaces about which there can be no doubt that they are not didactic, such as rectories, sub-rectories, dean offices and similar spaces formatted after the Taylorist management system along which many IHEs tend to organize themselves. The areas dedicated to the coordination of courses may raise doubts, but should be considered didactic spaces where, besides administrative work, they also cater for the supervision of students' academic projects. In this case, it is necessary to check the institution's rules and the criteria for the choice of heads of department or course coordinators, to see whether their functions are merely bureaucratic and administrative, or whether they also perform tasks of direct support to the work of the teachers. Clearly, the rooms reserved for the institution's collective bodies have no direct didactic purpose.
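A minimal sketch of how such a survey could be tabulated, following the classification discussed above (the inventory, the areas, and the category list are hypothetical, and would have to be adapted to each institution's rules):

    # Sketch: compute the net didactic area of an IHE from an inventory
    # of spaces. The classification follows the discussion in the text;
    # all areas are hypothetical. Course coordination areas would count
    # as didactic only where they also supervise students' projects.
    DIDACTIC = {"laboratory", "workshop", "auditorium", "team meeting room",
                "student supervision room", "model office", "teacher office"}

    inventory = [            # (space type, area in square meters)
        ("laboratory", 320.0),
        ("workshop", 180.0),
        ("auditorium", 450.0),
        ("teacher office", 95.0),
        ("teachers common room", 60.0),   # non-didactic
        ("rectory", 120.0),               # non-didactic
        ("parking", 900.0),               # non-didactic, but built area
    ]

    built_area = sum(area for _, area in inventory)
    net_didactic = sum(area for kind, area in inventory if kind in DIDACTIC)
    print(f"built area:        {built_area:7.0f} m2")
    print(f"net didactic area: {net_didactic:7.0f} m2 "
          f"({100 * net_didactic / built_area:.1f}% of built area)")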
An important element for the evaluation of the conditions offered by the IHEs is the utilization factor of their facilities and didactic equipment. This factor corresponds to the idea of efficiency as employed in management. In the system implied by Decree 2026/96, the evaluation of the didactic work gives priority to assessing the adequacy of the facilities corresponding to the spaces dedicated to the teaching and learning process, but no explicit mention is made of collecting data about the quality of this process. In other words, it would be possible to set up a grade going from very poor to very good measuring the effective benefit to the students of the resources put at their disposal, considering the time dedicated to their learning directly at the equipment. This factor should emerge from the pedagogical plan as an element of its macrostructure; that is, there should be, for each area of formation, an equilibrium value for the relationship between practical and theoretical time loads. In Brazil these values are not well known for each higher education course, but the assessment system cannot do without them if it is to assess the quality of the offer from the IHEs. The notorious informatics laboratories present in many IHEs are a typical example of oversized, and often inadequate, didactic use: many hours are spent in them with poor results in terms of learning, because the kind of pedagogical process prevailing there largely lacks a teacher-student relationship better than the one accomplished in the classroom.
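If the factor is given the usual managerial form of a capacity ratio (our formalization, since the Decree does not define one), it could be written as:

\[
U = \frac{\text{hours of effective didactic use of a facility or equipment}}{\text{hours in which it is available for use}},
\]

with \(0 \le U \le 1\); a five-level grade from very poor to very good could then be attached to ranges of \(U\), weighted by the practical-to-theoretical time load expected for each area of formation.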
The quality of the teaching staff has traditionally been evaluated by the Index of Qualification of the Teaching Staff (IQCD), which has conceptual flaws and merits questioning. The analysis of the historical series of data from the National Courses Exam (ENC) from 1996 to 2003 shows no direct correlation between an institution's IQCD and the performance of its students; that is, students coming from an institution with a higher IQCD can score worse than students coming from an institution with a lower IQCD, and vice-versa. On the other hand, a factor that has been shown to be directly related to student performance is the way in which institutions structure their teachers' careers. Thus, data from the ENC indicate that the valuation of teachers' work through the organization of consistent careers, and the good management of available hours, tends to yield better student performance. Teacher qualification, in fact, should be understood merely as a requisite for entering the career and progressing along it; if the career is not adequate to produce good results, the talents of a well-qualified teaching staff may be squandered.
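For reference, the IQCD is commonly computed as a weighted average of academic titles; the usual weighting (cited here as the customary form, since the text does not reproduce the formula) is:

\[
\text{IQCD} = \frac{5D + 3M + 2E + G}{D + M + E + G},
\]

where \(D\), \(M\), \(E\) and \(G\) are the numbers of teachers holding, respectively, doctorates, master's degrees, specialization diplomas, and only undergraduate degrees. The index thus ranges from 1 to 5 and, as argued above, captures titles alone, saying nothing about career structure or working conditions.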
As to the evaluation of libraries, considered in Section V, Article 6 of the Decree, the traditional indicator measures the number of works in the collection, such as books, journals, databases etc. However, this is just raw data about the libraries, which also need to be judged according to their purpose of effectively promoting reading and being integrated into institutional life as a whole. Quality indicators of the students' reading can be obtained through questionnaires filled in at the time of their visits to the library, with the objective of gathering the information needed to compose the assessment scales, including aspects of the following nature: the motives that most frequently make students use the library; whether they found in the library's collection what they needed; whether in their view the library is up to date; how often they use the library; and their opinion about the physical space, the service, borrowing periods, the type of material consulted, and the nature and purpose of the material borrowed. The ENC has shown that there is a direct correspondence between the reading of books, journals, magazines etc. and the performance of students in the exam.
Shortly after the publication of Decree 2026/96, the Law of Guidelines and Bases for National Education (LDB) was sanctioned (Brasil, 1996b), reinforcing the importance of higher education assessment processes as a means to promote the regulation of the sector and carry out the accreditation of institutions and courses.
The 2000s and the search for new methodologies
The balance of the 1990s shows that the assessment instruments applied took a strategic position with respect to organizational dynamics, and established new standards of functionality for the Brazilian higher education system. The expansion of the system, particularly with regard to the number of courses offered, which was concentrated largely in private institutions, enhanced the need for evaluation and defined the structure of the instruments conceived to that end.
In the case of the implementation of assessment under the ENC format, our hypothesis is that it has been applied to promote and feed the working of a mass higher education system, that is, its role is to contribute to transform a selective, closed and elitist higher education system into a mass system. The ENC thus represents the most important step taken by public policy to institutionalize the mass evaluation system. Since massification of the education system has been one of the central objectives of the official policy for higher education, such massification was promoted by setting up evaluation procedures that have as their purpose to produce, on one hand, specific information about the performance of the institutions to restructure and promote the market of higher education through competition for students among institutions, and through the strengthening of the power of student-consumers, who in their turn compete for the best evaluated institutions based on the information produced by the ENC; on the other hand, the establishment of assessment procedures had as its objective to challenge the outrageous lack of qualification of the majority of the institutions of higher education, particularly in the private sector, mainly through the Evaluation of Conditions of Offer of Undergraduate Courses. (Gomes, 2002, p. 284)
In 2001, the National Education Plan (PNE) was published (Brasil, 2001a), proposing to constitute a wide system of goals for higher education, and establishing that by 2010 there should be places available for at least 30% of the population between the ages of 18 and 24. In fact, the accumulated rate of general growth of enrolment in the 1996-1999 period was 34.7%, whereas in the 2000-2003 period it reached 64.1%. A major share of the expansion was due to the private sector, which grew 45.2% in the first four-year period, and 78.9% in the second. The public sector displayed much more modest results, growing 18.8% and 36.9%, respectively. The average annual rate of growth of enrolment as a whole was 7.7% and 13.1%, respectively. On the basis of this last result, it is projected that by 2010 there will be 9,234,548 students enrolled in higher education, with the public offer, according to the PNE, covering at least 40% of places, corresponding to 3,693,820 places (Ministério da Educação, 2004a).
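The 2010 figure is consistent with compounding the 13.1% average annual growth over seven years from the 2003 enrolment base. A minimal sketch of the arithmetic (the base of 3,887,022 enrolments is taken as the 2003 Higher Education Census total; holding the growth rate constant is the projection's simplifying assumption):

    # Sketch: reproduce (approximately) the 2010 projection by compounding
    # the 13.1% average annual growth over the 2003 base. A constant growth
    # rate is the simplifying assumption behind the projection.
    base_2003 = 3_887_022          # enrolment, Higher Education Census 2003
    annual_growth = 0.131          # average annual rate cited in the text

    projection_2010 = base_2003 * (1 + annual_growth) ** 7
    public_target = 0.40 * 9_234_548   # PNE: at least 40% of places public

    print(f"projected 2010 enrolment: {projection_2010:,.0f}")  # ~9.2 million
    print(f"public-offer target:      {public_target:,.0f}")    # ~3,693,819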
Although concerns with the quality of the offer can be observed in the policies formulated for higher education in Brazil, expansion has been pursued more through the quantitative increase of places and the accreditation of courses than through the improvement of the conditions of access to the system by the population. In 1997 there were approximately 2,500 undergraduate courses in public institutions, and an equal number in private institutions, with the latter showing, in their majority, a history of low quality. In 2003 the number of courses offered by private institutions had increased to 10,791, while that of the public sector had reached only 5,662. In 2003, 5.6 new courses appeared per day in Brazil, 4.5 of them created in the private sector and only 1.1 in the public sector. In 2003 the private sector showed a vacancy rate of 42.2% (places offered but not taken), while in the public sector this figure was 5.1%. This means that the increase in the offer of places has not brought the desired expansion of enrolment. Conditions must also be created for the population to have access to the places offered, something that has set the tone of the policies of the past two years, particularly with the creation of the Program University for Everyone (PROUNI). It should be noted that, in order to achieve the goal set by the PNE, "significant investment will be needed, especially to absorb low-income students that today have access to fundamental and secondary education" (Ministério da Educação, 2004a, p. 45).
Thus, Brazil developed a higher education system that could not respond to the real specificities of a demand largely formed by students who cannot afford the cost of private schools. The present government has planned to create more places at public institutions, but no one knows how many will actually emerge. Considering the number of students entering public IHEs in 2003, it will be necessary to expand the system by more than 300% to fulfill the requisites of the PNE by 2010. Apart from this problem, and even though deeper efforts have not been made to interpret the results of the successive assessments carried out since the PAIUB, there are still no good indicators of the didactic-pedagogical organization of the institutions, the preparation of teaching staff, physical facilities, libraries, equipment etc.
Six months after the approval of the Law that established the PNE, a Decree was issued setting up new operational evaluation procedures (Brasil, 2001b), addressing several difficulties present in the previous Decree (Brasil, 1996a), which was then revoked. In the new operational procedures, the indicators of global performance which, as shown above, had little to do with the institutions viewed in isolation, were eliminated: gross and net schooling rates, rates of availability and use of places etc. With regard to the evaluation of the performance of individual institutions, almost all indicators were preserved, and others were created: the "capability to access communication networks and information systems" (Brasil, 2001b, Chap. IV, Article 17, Section II, item d); and "the self-assessment carried out by the institution and the measures adopted to correct the deficiencies identified" (item j). Regarding the analysis of the conditions of offer, the indicators on didactic-pedagogical organization, adequacy of general and specific facilities, adequacy of libraries, and quality of the teaching staff were kept. With respect to the latter, the new legislation included the following aspects not previously considered: "the professional experience, the career structure, working hours, and work conditions" (Brasil, 2001b, Chap. IV, Art. 17, Paragraph 1, Section II). In the section about libraries, the following aspects were also included: "special attention to the specialized collection, including electronic, to the conditions of access to communication networks and information systems, opening hours, and modernization of user services" (Section IV).
The only really significant point added by the new system is related to the teaching staff. The ENC had already demonstrated that the IQCD is not enough to judge the quality of the teaching offered, requiring the assessment of other aspects, such as career structure, and working hours and conditions, now finally included.
The question of the capability to access communication networks and information systems, as well as that of the library's electronic collections, even if new, is not significant. Actually, the indicator is ill-defined, because the capability to access the systems is quite different from the availability of access systems. The Decree seems concerned with the latter, going back to the same problem of the previous decade, when the informatization of teaching did not produce any real qualitative result. The evaluation of the capability to access communication networks and information systems therefore needs to be made in terms of indicators that point to the use of existing resources, and not just to the speed at which computers connect to the Internet, the performance of the equipment, the size of the network they form etc.
The self-evaluation aspect is a novelty that had large repercussions, later producing the Self-evaluation Commissions (CPAs), with the purpose of producing indicators capable of measuring the programs and projects developed by the institutions, passing judgment on the organization of seminars, meetings and consultations, measuring the efficiency of the academic-administrative bodies and collectives, analyzing the pertinence of the Institutional Development Plan (PDI), evaluating the knowledge required from students entering the institution, and verifying the results of the pedagogical goals established for the learning of the students during their time at the IHE (Ministério da Educação, 2004b).
In 2003, the Special Commission for the Assessment of Higher Education (CEA) was instituted, carrying out a critical review of the instruments, methodologies and criteria employed up to that point, and proposing changes with a view to building a system capable of advancing the commitment and social responsibilities of the institutions. The CEA conducted public hearings with representative bodies from several sectors of society, and proposed the National System for the Assessment of Higher Education (SINAES), writing a document whose objective was to set out principles based on the concept that it is the social function of the IHE that must fundamentally be emphasized as the measure of its efficiency. A new methodology to assess higher education then appeared, improving on the evaluation procedures and instruments used until then.
According to the CEA document, there was an imbalance in matters related to the evaluation of higher education in Brazil, due to the fact that:
a) it is centered almost exclusively on the supervision attributes of the MEC; b) it practically does not consider institutions and courses as subjects of evaluation; c) it does not distinguish adequately between supervision and evaluation, giving clear emphasis to the former; d) it does not properly constitute a national evaluation system, but is rather a juxtaposition of checks of certain conditions unilaterally defined by the Ministry. [...] The instruments in place, being considered valid, should be preserved and improved, but should be integrated into another logic, capable of constructing a national evaluation system of higher education that articulated regulation and educative assessment. (Ministério da Educação, 2003a, p. 16)
Educative assessment is committed to the transformation of IHEs under a formative and emancipative perspective, whereas the regulatory perspective is tied to the State's control of the results of the IHEs; the evaluation systems must check how these perspectives establish social commitments articulated in terms of the quality of teaching, research, and extension.
In its diagnostic, the CEA document presents an examination of the variegated legislation produced in the previous decades, from the Constitution of 1988 to the successive provisional measures, going through the LDB, the PNE, and the various decrees issued, recognizing that "there was indisputable progress in the legal recognition of the importance of evaluation associated to the idea of the improvement of quality" (Ministério da Educação, 2003a, p. 17). The document also describes in detail the attributions of the federal bodies in the field of formative evaluation and regulation. It further conducts a critical analysis of the two main assessment instruments developed and applied up to that point, namely the Evaluation of Teaching Conditions (ACE) and the ENC.
The view expressed in the CEA document as to the main positive points of the work carried out by the ACE Commissions indicates that they established parameters that contributed to the improvement of the courses, helping them to: "(i) expand the search and exchange of innovative experiences; (ii) expand the knowledge about the Political Pedagogical Projects of the courses among their teachers; (iii) make the selection of teaching staff more judicious; (iv) structure and organize better the working of the courses" (Ministério da Educação, 2003a, p. 40). These contributions are directly linked to three main dimensions on which the ACE is focused: (i) didactic-pedagogical organization; (ii) teaching staff; and (iii) facilities.
As a negative aspect of the procedures conducted by the ACE, the document in question points out the "problems related to the instrument, which emphasizes certain aspects over others, and for which there are no indicators, especially those capable of identifying how much the IHE manages to add to the student after his/her entry into the course", thereby developing knowledge and attitudes that correspond to the social value of the institution (Ministério da Educação, 2003a, p. 40-41). This indicates "that the current procedures are insufficient to promote, in the courses and institutions, an evaluation in the sense of their emancipation" (p. 41). The factors that contribute most to this insufficiency are the weaknesses of the evaluators' training process and of the guidance offered by the General Manual of Evaluation of Teaching Conditions that the evaluators use in their work. These factors "reveal that the ACE lacks the adequate instruments for a formative evaluation, committed to the course's contribution to the constitution of the individual, just as it does not aim at apprehending the course's contribution to society" (p. 41).
In fact, the deficiencies of the ACE go much further than the analysis by the CEA can reach. These shortcomings extend to the point of not answering basic questions, such as: (i) the evolution of the number of teachers' working hours with respect to the number of students of the institutions; (ii) the improvement in the conditions of laboratories and didactic equipment, aiming at the quality of the teaching work; (iii) the features and distinctive attributes that make computers and access to communication networks and information systems stand out for their didactic virtues. Not even the more obvious reality of the neglect of public universities and of the reckless expansion of private higher education, with its clear damage to the conditions of the offer of education, is included in the diagnostic analysis that gave origin to the SINAES.
The ACE should undoubtedly have observed the average levels of decline in the quality of higher education in Brazil, a decline due to the unrestrained expansion of the offer, which inhibits the demand according to elementary economic principles. Looking only at the offer, the ACE represented an inexperienced methodology that took its first uncertain steps and ignored the importance of analyzing the demand. The most serious problem of higher education in Brazil is that of the access of the population to the system of offer, which is far from being resolved, considering that the number of candidates largely exceeds even the most optimistic forecasts of places, since "from about one million students that did the ENEM [National Exam of Secondary Education] this year [2004], 600,000 fit under the PROUNI [Program University for Everyone], but there are only about 110,000 places" (Bragon, 2004, p. C5). The PROUNI is the result of policies focused on facilitating the access of the demand to the current offer, covering in part or in full the cost of enrolment of low-income students in the private sector in exchange for tax exemptions, such as Income Tax, Social Contribution on Net Profit, Contribution to Finance Social Security (COFINS), Program of Social Integration (PIS), etc.
The principle that the evaluation must fulfill formative functions, attending to the emancipative transformation of the IHEs, alongside the regulatory functions exercised by the governmental educational bodies, limits the critical reach of the diagnostic presented by the SINAES. It seems clear that, before formative and regulatory functions are possible, an information system is necessary to determine the indicators that will define the value judgment about the good or bad working of the institutions and their courses. One would thus attempt to develop the evaluation within a comparative scale of concepts based on the national average for each variable assessed by the ACE. It is much easier to determine whether a given institution has a library adequate to the educational objectives of its courses when one knows the average number of specialized titles that similar courses around the country have. The same goes for the number of students per teacher working hour (the student-teacher ratio), for didactic equipment, etc.
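As an illustration of this reasoning, the sketch below places a single institution's indicator on a comparative scale built from a national average. The sample data, the one-standard-deviation cutoffs, and the function name are all hypothetical, chosen only to make the comparison concrete:

```python
# A sketch (hypothetical data and thresholds) of a comparative scale
# of concepts: an institution's raw indicator is read against the
# national average of the same indicator, as argued in the text.
from statistics import mean, stdev

# Hypothetical national sample: numbers of specialized titles held by
# libraries of similar courses around the country.
national_titles = [1200, 450, 3100, 800, 2600, 950, 1700, 600]

def comparative_concept(value, sample):
    """Express a raw indicator as a concept relative to the national
    mean, in units of the sample's standard deviation (a z-score).
    The one-deviation cutoffs are an illustrative assumption."""
    z = (value - mean(sample)) / stdev(sample)
    if z >= 1.0:
        return "well above the national average"
    if z <= -1.0:
        return "well below the national average"
    return "close to the national average"

# A hypothetical institution with 2,800 specialized titles.
print(comparative_concept(2800, national_titles))
```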
In its diagnostic, the SINAES reveals that "the analysis of the instruments and manuals, as well as of the descriptive-analytical reports prepared by the INEP evaluators, leads one to believe that even in the points where the ACE did bring relative progress to technical aspects, it would be important to develop adjustments and improvements" (Ministério da Educação, 2003a, p. 42). There is a lack of 'globality' in the theoretical-analytical model that served as the basis for the ACE, making it necessary to "adjust some of the indicators" (p. 42). In fact, these adjustments depend on analysis models that still do not exist or are only poorly formulated. For each of the criteria defined in the legislation and applied by the ACE, one needs to understand how the whole universe of institutions of higher education behaves. The conditions of access to the information systems, for example, can only be fully evaluated when one knows the average time the students spend studying with access to communication networks. In Brazil, no one knows the average time a student spends inside university libraries consulting the traditional written collection, nor whether this time has increased or decreased in recent years. If the average time of consultation of the information systems has fallen, but the average performance of students in national exams has remained the same, then the Brazilian higher education system has still not adapted to the promises of digital technology. Or could it be that these technologies are useless with respect to their ability to facilitate learning processes? Thus, to evaluate whether a given institution is adequately developing its pedagogical plans with regard to the policy for access of its students to communication networks and information systems, one needs to compare the institution's results to the general picture of the data taken for the ensemble of all Brazilian institutions.
To avoid repeating the mistakes of the ACE, an analysis model is needed that can supply more adequate indicators, because the diagnostic points out that "not all information generated in the visits is included in the database, compromising the production of statistical reports and a general analysis of the assessments" (Ministério da Educação, 2003a, p. 42). However, in order to include every piece of information in the database, it is first necessary to overcome the current systemic deficiencies; that is, it is necessary to have analytical models implemented in the computers in the shape of programs that process the information and produce the corresponding reports. There is no advantage in including the data from primary in loco assessments in the database without the development and implementation of computational analysis programs based on better theoretical models. The most harmful consequence of the absence of these models is that without them the assessment system can hardly fulfill its formative function, which allows us to understand better the following diagnostic conclusion: "lastly, the infrastructure of the MEC seems to be insufficient both with respect to the 'logistics' for the commissions during the visits, and to give support and operational guidance to the institutions" (Ministério da Educação, 2003a, p. 42).
Another fundamental assessment instrument criticized by the diagnostic of the SINAES is the ENC, about which it says that "although the MEC intends to apprehend the knowledge and competences acquired by the students who are about to finish their undergraduate courses, the main objective is to evaluate the undergraduate courses offered by the IHEs, and to use these assessments as an instrument of regulation of the higher education system" (Ministério da Educação, 2003a, p. 43). The first ENC took place in 1996, examining 616 courses from three areas; the latest was in 2003, with the participation of 5,897 courses covering 26 areas. The ENC's analysis model is produced by indicators of correlation between the graduates' performance in a knowledge test and the socio-cultural patterns they display. Although the evaluation process uses the performance results of graduates, that is, of students at the end of their course, the ENC discards the analyses suggested by its additive features with respect to the classification of students according to these results. The analysis aims exclusively at evaluating the courses the students attended, and is imbued with the character of a formative evaluation, on the basis of the interpretation of the history of results and of the information supplied, suggesting to the institutions' managers that they review and discuss their projects, objectives, and pedagogical procedures:
From the analysis of the results offered by the ENC (qualitative and quantitative information and data about the performance of its graduates with respect to the abilities and contents included in the tests, and about the answers to the questionnaire), and considering the whole process of evaluation and the context of the course in which the evaluation happened, managers and teaching staff have elements to make safer decisions, aiming at the improvement of their educative practice and, consequently, the improvement of the quality of teaching. At the base of the ENC, therefore, lies the diagnostic function of evaluation, which offers an assessment of the reality of teaching with regard to the situation of the graduates in the abilities and contents evaluated. (Ministério da Educação, 2003b, p. 14)
The instruments used by the ENC were written tests and questionnaires to collect information about the students. The tests had discursive and multiple-choice questions emphasizing the ability for critical analysis, problem-solving, logical reasoning, organization of ideas, proposition of hypotheses, and formulation of conclusions. The proposal was to evaluate the pedagogical projects of the courses through questions whose answers would express the qualitative dimension of the learning achieved by the students, vis-à-vis the minimum curriculum components for the undergraduate courses in the country. The MEC established guidelines for each area of knowledge, constituting commissions of experts indicated by the bodies related to undergraduate education, such as professional councils and scientific associations of the areas under study. "Their attribution is to define the scope, objectives, directions and other specifications necessary to the creation of the instruments to be used in the ENC, to proceed to an evaluation of the ENC with a view to improving the process, and also to set out procedures and guidelines for the process of in loco Evaluation of the Conditions of Teaching" (Ministério da Educação, 2003b, p. 16). The guidelines for each area of the ENC defined the objectives of the exam, the profile expected from the graduates, the competences, abilities, and contents to be verified, as well as the format of the exam. The questionnaires collected socioeconomic and cultural information about the students, as well as their point of view on resources, facilities, curriculum structure, and teacher performance in their courses of origin. For the ENC, the data gathered would feed an analysis model that would allow "investigating hypotheses concerning the performance variable, studying trends based on time histories, or supplementing information in assessment processes carried out at the institutions or courses", thereby giving to INEP and to interested researchers a hitherto unavailable body of information on university education in Brazil (p. 20).
The statistical model of the results of the written tests underwent changes throughout the history of the ENC. At first, it represented the overall average of the graduates of each of the courses examined, using an absolute scale from 0 to 100, in which five levels of performance were defined according to predefined percentages: the bottom 12% of the courses received the concept E; the next 18%, D; the next 40%, C; the next 18%, B; and the top 12%, A. Later, in 2001, the absolute values obtained by the courses began to be converted into a relative scale based on the standard deviation of these averages. In 2003, in the latest version of the exam, the results were published both in absolute and in relative terms. For the latter, the attribution of levels no longer used the predefined percentages, but the position of the overall averages obtained within specific ranges of the scale from 0 to 100.
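The original percentage-band attribution can be made concrete with a short sketch. The course averages below are hypothetical, and the use of midpoint percentile ranks is an assumption about how boundary cases would be resolved; only the 12/18/40/18/12 split comes from the description above:

```python
# A sketch (hypothetical averages) of the ENC's original concept
# attribution: fixed percentages of the ranked courses receive the
# concepts E, D, C, B and A. The 12/18/40/18/12 split comes from the
# text; the midpoint percentile rank is an assumption about how the
# band boundaries are handled.
def enc_concepts(course_averages):
    """Map each course's overall average (0-100 scale) to a concept:
    bottom 12% -> E, next 18% -> D, next 40% -> C,
    next 18% -> B, top 12% -> A."""
    ranked = sorted(course_averages)
    n = len(ranked)
    bands = [(0.12, "E"), (0.30, "D"), (0.70, "C"), (0.88, "B"), (1.00, "A")]
    result = []
    for i, avg in enumerate(ranked):
        rank = (i + 0.5) / n  # midpoint percentile rank of the course
        concept = next(c for limit, c in bands if rank <= limit)
        result.append((avg, concept))
    return result

# Eight hypothetical course averages: one E, one D, four C, one B, one A.
print(enc_concepts([35.0, 48.5, 52.0, 61.3, 70.2, 74.8, 82.1, 90.5]))
```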
For the SINAES, of all the instruments employed to evaluate the Brazilian higher education system, the ENC was the one that received the most severe and bruising criticism. One criticism concerns the fact that the students' exam is dissociated from "an integrated set of evaluations with clearly defined principles, objectives, agents, and actions" (Ministério da Educação, 2003a, p. 44). In fact, the ACE should have performed this integrating role, but it lacked the analysis models to correlate more precisely the data obtained from the evaluation of the institutions with the performance displayed by their students at the ENC. This correlation, according to the guidelines established by the MEC, should have been conducted by the institutions through the analysis of the specific data sent to them. However, if successful analyses were performed, they remained within the institutional boundaries and never came to the knowledge of the public through traditional communication channels. Thus, the data-gathering instituted by the ACE and the ENC was never supplemented by analyses conducted either by the MEC or by the academic community. This gap represents the main problem to be addressed so that a coherent vision of the quality of higher education in Brazil can be achieved. In the country, there are no clear, well-formulated elements on what institutions should do, within a feasible set of options, to improve their quality of teaching. The analyses divulged by the MEC itself about the ENC admit the system's inefficiency as to its formative objectives:
[...] the concepts do not reflect the quality of the courses, and are inadequate to give guidance to educational policies common to all; [...] are insufficient to offer guidance to students, their parents, and society at large about the quality of the courses; [...] are incapable of supplying adequate guidance to the administrative actions of the IHEs' managers; and, [...] on their own, are insufficient to rank the courses or to guide reinforcement and/or punitive policies as has been done up until now. (Ministério da Educação, 2003c, p. 9-10)
The following argument is illuminating as to the lack of credibility of the ENC as a vehicle for the assessment of the quality of teaching:
A low score at the ENC may mean, for example, that the course receives weak students and that, despite the institutional efforts, it is not possible to bring them up to the level of the stronger students from institutions with highly competitive entrance exams. Likewise, an 'A' may just mean that, as a consequence of the high exigencies of the entrance exam, a given institution is working with the best students. In this case, the performance at the ENC may have very little to do with the qualifications of the teaching staff, the sophistication of teaching methodologies and techniques, the size and up-to-dateness of its library, the quality of didactic laboratories, or the academic atmosphere of the course etc. (Ministério da Educação, 2003c, p. 10)
The SINAES presents in its diagnostic all the inefficiencies above, adding to them that "the ENC administration turns out to be every year more complex and costly as a result of the growing number of institutions, courses, and areas" (Ministério da Educação, 2003a, p. 45). The budget necessary for the exam has, therefore, become an obstacle to following the legal determination of a gradual inclusion of new courses in the ENC.
The evaluation of institutions, courses, and students' performance: coherence and contradictions of the current model
The proposal for the evaluation of higher education created under the government of President Luiz Inácio Lula da Silva takes place as part of the revision of the policy established under the previous government, that of Fernando Henrique Cardoso: "one of the more constant criticisms made of the evaluation practices of these past years is the use of instruments applied to isolated objects, which lead to a partial and fragmentary view of reality" (Ministério da Educação, 2003a, p. 62). The new proposal assumes that it is necessary to use global understanding schemes capable of breaking away from the existing methodological fragmentation, and of instituting evaluation systems in which the various dimensions of the reality evaluated (institutions, individual systems, learning, teaching, research, administration, social intervention, links with society, etc.) are integrated into comprehensive syntheses. The new conception intends to guarantee coherence, conceptual and epistemological as well as practical, with respect to the objectives and instruments employed, emerging as capable of articulating the formative nature of evaluation, focused on the increase of the quality and capacity of the institutions, with the regulation functions proper to the State, involving orientation, supervision, accreditation and disaccreditation, etc. Its ethical and political legitimacy is taken as "given by its proactive objectives, respect for plurality, democratic participation, and also by the professional and citizen qualities of its actors"; and its technical legitimacy is considered as "guaranteed by the theory, the adequate methodological procedures, by the correct formulation of instruments, and by everything that is recommended in a scientific activity" (Ministério da Educação, 2003a, p. 67).
On April 14th, 2004, Law Nº 10861 (Brasil, 2004a) came into force, instituting the SINAES with the objective of "guaranteeing the national process of evaluation of the institutions of higher education, undergraduate courses, and the academic performance of their students" (Article 1). The SINAES thus comprises three integrated subsystems: 1) the institutional evaluation, which shall be carried out in two spheres, internal and external, and "shall have as its objective identifying their [the institutions'] profile and the meaning of their action, through their activities, courses, programs, projects and sectors, considering the different institutional dimensions [...]" (Article 3); 2) the evaluation of undergraduate courses, dedicated to "identify the conditions of teaching offered to the students, in special those related to the profile of the teaching staff, the physical space, and the didactic-pedagogical organization" (Article 4); and 3) the evaluation of students, which "shall be carried out through the application of the National Exam of Student Performance (ENADE)" (Article 5), and shall have as its function to gauge the students' command of the "programmatic contents specified in the curriculum guidelines of the respective undergraduate course, their abilities to adapt to the demands originated from the evolution of knowledge, and their competence to understand issues external to the specific sphere of their profession [...]" (Article 5, Paragraph 1).
Law 10861/04 also instituted the National Commission for the Evaluation of Higher Education (CONAES), a "collective body for the coordination and supervision of the SINAES" (Article 6), with the attribution of "proposing and evaluating the dynamics, procedures, and mechanisms of institutional assessment, of courses, and of student performance" (Article 6, Section I). The CONAES is composed of thirteen members with mandates of two or three years, including representatives of the following segments: INEP, CAPES, MEC, teaching staff, students, the technical-administrative body, and citizens with recognized scientific, philosophical, or artistic knowledge, and well-known competence in higher education evaluation or management. Finally, "each institution of higher education, public or private, shall constitute a Self-evaluation Commission (CPA) [...] with the attributions of conducting the institution's processes of internal evaluation, and of systematization and offer of information required by INEP [...]" (Article 11).
On July 9th, 2004, Ordinance 2051 (Brasil, 2004b) was issued, regulating the procedures instituted by Law 10861/04. The Ordinance expands the competences of the CONAES and establishes that it shall give INEP the guidelines for the execution of the three levels of evaluation integrated by the SINAES (Article 4). To carry out the external evaluations in loco, the INEP shall designate separately External Commissions of Institutional Evaluation and External Commissions of Course Evaluation (Article 5), periodically carrying out programs for the preparation of evaluators (Article 6). As for the Self-evaluation Commissions (CPAs), these shall be "constituted within the sphere of each institution of higher education, and shall have as their attribution the coordination of internal processes of evaluation of the institution, and of systematization and offer of information required by INEP" (Article 7). Next, Ordinance 2051/2004 opens three sections detailing the levels of evaluation integrated by the SINAES.
Section I specifies that "the evaluation of the institutions of higher education shall have as its objective to identify the profile and the meaning of the activities of the institutions, based on the principles of respect to the identity and to the diversity of the institutions, as well as by the conduction of self-assessment and external evaluation" (Article 9). The self-assessment, coordinated by each institution's CPA, shall be carried out following the general guidelines set out by INEP and made available electronically, starting from guidelines established by the CONAES (Article 11). "The timeframe for the presentation of the results of the self-assessment process shall be of two years, starting from September 1st, 2004" (Article 13, Paragraph 1).
Section I also legislates upon the action of the External Commissions of Institutional Evaluation, whose members shall be registered and trained by INEP. The CONAES shall establish its own schedule for these evaluations, which must take place after the self-assessment process, composing with it "the basic reference for the process of accreditation and reaccreditation of the institutions, with the deadlines set out by the regulating bodies of the Ministry for Education" (Article 14). The information and the documents examined by the Commissions of External Evaluation shall be the following: the Plan for Institutional Development (PDI), reports of the self-assessment process, data from the Census of Higher Education and from the Registry of Institutions of Higher Education, and data about the performance of students at the ENADE, amongst others (Article 15, Sections I to IX).
Section II indicates the evaluation criteria for the undergraduate courses, and establishes that they shall be applied by the External Commissions for Course Evaluation. They shall be based on data supplied by the IHE in electronic forms, and shall take into account the following aspects: teaching staff, physical spaces, didactic-pedagogical organization, and students' performance at the ENADE, amongst others.
Section III deals with the ENADE, which shall be carried out with technical support from the Area Advisory Commissions, and shall apply "sampling procedures to students from the end of the first year and from the last year of their undergraduate course, who shall be selected each year to take part in the exam" (Article 25). The areas and the courses that shall take part in the ENADE shall be defined every year by the MEC, and the IHEs shall enroll with INEP all their students fit for the exam, in order to compose the sample. "The results from the ENADE shall be given in a five-level scale, and divulged to the students that integrated the samples selected from each course, to the participating IHEs, to the regulating bodies, and to society at large [...]" (Article 29, Paragraph 1). Apart from the exam, INEP shall apply to the students a socioeconomic questionnaire and, to the coordinators of the selected courses, another questionnaire focused on the definition of the course's profile.
The results of the external evaluation shall be expressed "in a five-point scale, with levels 4 and 5 indicating strong points, levels 1 and 2 indicating weak points, and level 3 indicating the minimum acceptable level for institution authorization and accreditation processes" (Article 32). In the cases where the results are unsatisfactory, a commitment protocol shall be signed between the IHE and the MEC, establishing deadlines and goals for the actions to be adopted to overcome the difficulties detected. "The CONAES, in its reports, shall inform, as the case may be, about the need to sign a protocol of commitment [...]" (Article 35).
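A minimal sketch of how this scale reads follows; the helper is hypothetical, and linking the weak levels to the commitment protocol is an inference from the Ordinance's provisions rather than its literal wording:

```python
# A sketch (hypothetical helper name) of how the five-point scale of
# Ordinance 2051/04 reads. Tying the weak levels directly to the
# commitment protocol is an inference from the text, not a literal
# provision of the Ordinance.
def read_external_level(level: int) -> str:
    """Interpret a level on the 1-5 external evaluation scale."""
    if level in (4, 5):
        return "strong point"
    if level == 3:
        return "minimum acceptable level for authorization/accreditation"
    if level in (1, 2):
        return "weak point (may trigger a commitment protocol with the MEC)"
    raise ValueError("levels run from 1 to 5")

for lvl in range(1, 6):
    print(lvl, "->", read_external_level(lvl))
```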
In the document dedicated to the CPAs (Ministério da Educação, 2004b, p. 8), the INEP establishes that "the methodology, procedures, and objectives of the evaluation process must be developed by the IHE according to its specificity and dimension, listening to the community, and in agreement with the guidelines [...]". However, leaving to the institutions the "definition of the methodology of data analysis and interpretation" related to the self-assessment (p. 10) creates more problems than it solves for the SINAES, since "in Brazil there is not as yet a culture of systematic self-assessment and policy-making based on the feedback of information; on the contrary, there exists a culture of redoing and reinventing processes" (Moreira; Hortale; Hartz, 2004, p. 34).
In view of that, it seems that INEP should have been designated by the legislation not just to register and train the members of the external commissions, but also to do the same with the members of the CPAs. The lack of a systematic culture of self-assessment in the country puts before the SINAES the problem of presenting the IHEs with a set of principles, criteria, presuppositions, and premises that will serve them as a conceptual and political foundation, and as justification, for putting into operation the processes that must be implemented. There is no denying that there is a lack of commitment from the institutions in fulfilling their educational responsibilities, especially with regard to the academic-scientific, professional, ethical, and political education of citizens, as well as to knowledge production and the promotion of the advancement of science and culture. The feedback from assessment processes has been practiced in the country exclusively in the form of regulation, with no principles of a formative nature being culturally established. The SINAES itself becomes a victim of this cultural phenomenon, for when justifying its legitimacy it states that: "evaluation is not just a technical issue. It is also a significant power instrument. [...] The technical issues can be technically answered, but it is the ethical and political senses that involve the conceptions of higher education, of society, and consequently of evaluation" (Ministério da Educação, 2003a, p. 67). Thus, the SINAES takes for itself only the regulatory aspect of evaluation, leaving the formative element to the IHEs, thereby exempting itself from the construction of a wide formal system to guide them ethically and politically.
Even the reasoning on the technical aspects of the self-assessment developed by the SINAES is mistaken:
Most of the quantitative data about the institutions and courses can be found in the Higher Education Census carried out annually by INEP. Other data, including qualitative data, are generated with the help of institutional researchers indicated by the Rectors or Principals, making it extremely important for the CPAs to identify in each case those responsible for the information received, and to work in association with them. The information given annually to the Census is an important starting point for the development of institutional self-awareness, and for the evaluation activity itself. (Ministério da Educação, 2004b, p. 13-14)
The information given annually by the IHEs to the Census concerns the administrative category and the academic organization, the types of undergraduate course that exist and their areas of knowledge, the places offered, the candidates that compete for them, the age bracket of incoming students and of those finishing the courses, the qualification of the teaching staff, and the extension activities (Ministério da Educação, 2004a, p. 4). It is difficult to see how such data, strictly informative and predominantly quantitative in nature, can be used to develop the institutional awareness on which the activity of self-assessment is founded. Quite the opposite: by recommending this type of evaluation, the SINAES hinders the development of methodologies of qualitative analysis that might seek to highlight the effectiveness of the social commitment of the IHEs. The reasoning needed for self-assessment is definitely not the same used to carry out the Higher Education Census, for the SINAES itself dictates that the "General Guidelines for Institutional Evaluation should not be taken as an instrument of mere checking or verification, or simply of quantification" (Ministério da Educação, 2004b, p. 14). The IHEs cannot, therefore, carry out the self-assessment as if they were performing an internal census-style assessment. The Census should be taken as a methodological counter-example of what to do in the self-assessment, not as the starting point of the process.
There are points in the institutional self-assessment script that demonstrate a poorly defined vision, as when it recommends checking whether there is "articulation between the PDI and the Institutional Pedagogical Project (PPI) in what concerns the teaching, research and extension activities, academic management, institutional management, and institutional evaluation" (p. 15). This seems to underestimate the ability of the institutional managers to produce such documents in an articulate manner, for even if the commissions responsible for each of them are completely different, the CPA will examine both and will undoubtedly correct mistakes, if they exist. By the time of the external evaluation, the documents will match each other. Even so, later guidelines and the external evaluation instruments (Ministério da Educação, 2005) consider the articulation between the PDI and the PPI a central indicator among the dimensions of the SINAES.
There are also issues presented in the script for institutional self-assessment that leave room for several interpretations, such as: "do the curricula and programs for each course correspond to the profile of the student leaving it?" (Ministério da Educação, 2004b, p. 17). Perhaps the intention here is to find out whether the students educated by the institution have command of the curriculum contents, which corresponds to evaluating the policies developed by the institution to correct any deficiencies pointed out by the ENADE. Perhaps what is intended is something else entirely, namely, whether the curricula and study programs are coherent with the type of professional the course intends to educate. Or it may be that what is intended is to assess whether the institution makes curriculum adaptations and updates in response to the results achieved by its alumni in their professional careers. The fact that there is room for various interpretations shows that the issue is ill-formulated, apart from suffering from the theoretical difficulties of the curriculum area already discussed.
Some of the issues would be better analyzed if split into two or more interrelated questions: "is the IHE's scientific production coherent with its mission and with the investments and policies proposed for its development? And what about the social needs and demands of science?" (Ministério da Educação, 2004b, p. 18). The same problem would be more objectively formulated in the following way: once the quality of the scientific production of the IHE is expressed on a five-point scale, which policies exist to improve its classification, if it is not already at the top level?
Some questions, in order to be answered, demand studies that can only be conducted on the basis of complex methodologies and analysis models, such as: "what is the impact of the extension activities in the community and in the formation of students?" (p. 19). It would be necessary for the SINAES to indicate sources for the intended impact analysis, for the answer raises subjective interpretations that may lead to conflicting positions: it is clear that a given IHE always believes in the value of the extension activities it carries out, otherwise it would have stopped investing in them; but the external evaluation commissions may disagree after comparing this institution with the others. Apart from university extension, there is a series of other factors related to measures of impact that have been adopted as criteria for self-assessment by the SINAES, all of them focused on measuring the social relevance of the IHE's actions.
With regard to indicators, some of them are quite strange, such as, for instance, the one establishing the "fulltime student/teacher" ratio (p. 26). It would perhaps be more consistent for the CPAs to assess the students per fulltime teacher or students per teacher working hour ratios, which can be related to quality indicators whose pertinence was already presented here when discussing the old systems of the ACE and the ENC.
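The distinction between these ratios can be made concrete with invented figures; the point is only that the two ratios answer different questions about teaching capacity:

```python
# A sketch (hypothetical figures) contrasting the script's
# "fulltime student/teacher" indicator with the ratios the text
# considers more telling. Every number here is invented.
students = 2400
fulltime_teachers = 80
contact_hours_per_teacher = 20          # weekly teaching hours, assumed
total_teaching_hours = fulltime_teachers * contact_hours_per_teacher

students_per_fulltime_teacher = students / fulltime_teachers   # 30.0
students_per_teaching_hour = students / total_teaching_hours   # 1.5

# The first ratio says how thinly the staff is spread; the second
# says how much actual teaching time each student can draw on.
print(students_per_fulltime_teacher, students_per_teaching_hour)
```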
Overall, there is a lack of systemic articulation in the General Guidelines for the Institutional Self-assessment Script, revealing the need for a wider national basis of information, and even for more elaborate reports, so that the CPAs can compare their institution's data with those of the others. For example, concerning the institutional resources for teaching and research, the script asks: "are the infrastructure, facilities, and educative resources sufficient? Justify" (p. 30). Perhaps the question could be better formulated as: what is the degree of sufficiency of the infrastructure, facilities, and educative resources in relation to the other institutions of the country that have similar courses? But this crucial question cannot be answered objectively by the institutions, because the national data about the conditions of offer of the courses that matter to self-assessment do not exist. Likewise, the SINAES has not indicated any literature to support the analyses, leaving this to the CPAs. It is known that in Brazil there is no culture of self-assessment in place, and it can only be built with the help of good references on the subject. The works produced about self-assessment in Education are few, making it necessary to research the bibliography produced in other countries. The resources of each individual institution are limited for such a wide study, so the institutions should join efforts to produce it. Since the SINAES is an instrument developed by the central policymaking bodies of Brazilian education, it should include among its objectives the organization of basic information systems to serve as general references for self-assessment. Observing the mechanism proposed by the script, one can conclude that the results obtained will probably have little formative impact upon the institutions, unless the SINAES reaches more elaborate terms and corrects the direction suggested by the methodologies defined so far.
Another problem detected concerns the duplication of functions of the commissions dedicated to evaluating the institutions and courses, because the legal mechanisms set out by Ordinance 2051/04 create interpretation difficulties on this point:
The External Commissions of Institutional Evaluation shall examine the following information and documents: I - the Institutional Development Plan (PDI); II - partial and final reports of the self-assessment process, produced by the IHE according to the general guidelines supplied by the INEP; III - general and specific data on the IHE appearing in the Higher Education Census and in the Registry of Institutions of Higher Education; IV - data on students' performance at the ENADE, available at the time of the evaluation; V - evaluation reports of the IHE's undergraduate courses produced by the External Commissions of Course Evaluation, available at the time of the evaluation; V - data from the students' Socioeconomic Questionnaire, collected at the application of the ENADE; VI - the report from the Commission of Follow-up of the Commitment Protocol, when applicable; VII - reports and grades from CAPES for the IHE's graduate courses, if pertinent; VIII - documents from the IHE's accreditation and latest reaccreditation; IX - other documents as seen fit. (Brasil, 2004b, Article 15. Item V appears twice in the original)
On the other hand, one has that:
The External Commissions of Course Evaluation shall have previous access to the data, supplied in electronic form by the IHE, and shall also consider the following aspects: I - the profile of the teaching staff; II - the conditions of the physical facilities; III - the didactic-pedagogical organization; IV - the performance of the IHE's students at the ENADE; V - the data from the socioeconomic questionnaire filled in by the students, as available at the time of the evaluation; VI - up-to-date data from the Census of Higher Education and from the General Registry of Institutions and Courses; and VII - others regarded as relevant by the CONAES. (Brasil, 2004b, Article 20)
The PDI appears as a topic for consideration by the External Commissions of Institutional Evaluation, but not by the Course Commissions. However, there is no way an institution can develop its PDI outside the context of its courses, for these are the organic units that justify and motivate the institution's development. The planning of institutional actions expressed in a well-prepared PDI is obviously circumscribed by the working logic of the courses offered, which determine the nature of the teaching staff, the infrastructure, the didactic-pedagogical organization, etc. The PDI is thus inescapably linked to the undergraduate courses. It will therefore be necessary for the External Commissions of Course Evaluation to also consider the PDI, which is an important document for them. However, this is not established in Ordinance 2051/04, unless one includes the PDI among the "other [documents] regarded as relevant by the CONAES", as set out in Section VII of Article 20.
The data on the students' performance at the ENADE and the data from the socioeconomic questionnaire are absolutely identical, both with regard to the courses and to the institution as a whole, because the students are the same. It is therefore redundant to examine these data twice. Correct statistical procedures applied to information gathered through adequate instruments could greatly simplify the evaluation efforts of the Commissions.
Data from the Census of Higher Education, although inadequate to characterize any particular institution and its courses, as already discussed here, will also be the same for either of these commissions. We can conclude that the functions of the two commissions of in loco external evaluation are highly intertwined, so that there could be a single commission, perhaps of an interdisciplinary nature, covering both the courses and the institution as a whole. This would make the process quicker and, more importantly, more economical with respect to the financial costs of conducting it, a problem that had already been felt and that has modified the course of the strategies for the evaluation of higher education in Brazil.
One last aspect to be considered concerns the new student evaluation system, the ENADE. The previous system, represented by the ENC (popularly known as the Provão, the Big Exam), proved inadequate to achieve its objective of evaluating the institutions' courses through the performance of their students because, being applied a single time at the end of undergraduate studies, it could not reveal the learning progress obtained since the beginning, this progress being supposedly the true measure of the quality of the teaching the students received. The ENADE, applied twice, once at the start and once at the end of studies, should correct this distortion. According to the philosophy of the SINAES, whereas with the ENC what was being evaluated was not the institution in all its complexity, but just the relative performance of the graduating students in an exam, with the ENADE it should be possible to achieve a more faithful assessment of the quality of teaching at the courses, making clear the difference between the values presented at the start and at the conclusion of the learning process. Even so, it seems that the inadequacy of trying to assess the quality of teaching at the institutions through the evaluation of the students' performance will persist within the new system. The students' performance continues to be relative, only now at two distinct moments, and nothing guarantees that the differences represent absolute values that really express the quality of the institution. The students entering the more competitive institutions start from better performance levels than the students entering less sought-after institutions. At the end of undergraduate studies the former will continue to exhibit higher levels of performance, but it is impossible to say that the difference between these indicators constitutes a measure of the quality of teaching they received. The institutions that receive the weaker students, despite their efforts to bridge the existing learning gaps, will have great difficulty matching the institutions that receive the best students. The quality of the student is essentially different from the quality of teaching, but the two can be confused in practice. In Brazil, the elitization of education takes place from fundamental school onwards, producing better-prepared students who enter the more competitive IHEs, which therefore look better, perpetuating the elitization process. This shows that the instruments of evaluation of higher education will only reach objective standards when the various systems of basic education in the country break through the barrier imposed by social inequality.
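The limitation argued here can be illustrated with a deliberately naive sketch; all scores are hypothetical:

```python
# A sketch (hypothetical scores) of the value-added reading behind the
# ENADE criticized in the text: the gain between entry and exit scores
# mixes teaching quality with the quality of the incoming students.
def value_added(entry_avg, exit_avg):
    """Naive gain measure: exit average minus entry average."""
    return exit_avg - entry_avg

# A hypothetical selective institution: strong entrants, strong graduates.
selective = value_added(entry_avg=78.0, exit_avg=88.0)

# A hypothetical less sought-after institution: weak entrants whom it
# struggles to bring up to the level of the selective one's students.
open_access = value_added(entry_avg=42.0, exit_avg=52.0)

# Both show the same 10-point gain, yet nothing here says whether the
# teaching received in the two institutions was of comparable quality.
print(selective, open_access)  # 10.0 10.0
```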
Concluding remarks
Since the first procedures were established in Brazil for the evaluation of courses and institutions of higher education, there has been a systematic evolution in the consistency of the indicators employed. The data reflecting global socioeconomic conditions external to the courses and institutions, such as schooling rates and the availability and utilization of places, are no longer used, because there was no way of making them operational. There has been significant progress in the way the evaluation of the teaching staff is conceived, no longer using just the IQCD as an indicator of quality, but also taking into account information on the career structure offered by the institutions, as well as working hours and conditions. The concept of evaluation of the didactic-pedagogical organization seems to be the one that has made the least progress, reproducing a model in which the indicators are liable to endless criticism as to their consistency. The measure of the institutions' capacity to access communication networks and information systems, although introduced into the evaluation in an ill-defined manner, raises in its unfolding the establishment of new indicators focused on quantifying the time spent by the student using the new resources, which, when compared to more traditional means, will indeed express the relevance of the informatization of teaching. The role of the evaluation of student performance understood as an indicator of institutional quality is controversial, and the methods developed to implement it have shown themselves inefficient when we consider that the access of the population to higher education in Brazil is unequivocally based on principles of social inequality.
An attempt has been made to change the regulatory nature that characterizes the assessment process in Brazil through the introduction of procedures that target a formative character, but appropriate methods for that have still not been found, and the culture of self-assessment, so necessary to that end, has still not been constituted.
An indicator unquestionably relevant to gauge the productivity of teaching is the number of students per teacher working hour. In theory, one could arrive at an ideal average value that would correspond to the highest productivity. Institutions operating close to this number would obviously have an extra quality factor. Strangely enough, this indicator has never been used in Brazil. The MEC itself recognizes the existing systemic predicaments concerning the evaluation of higher education in Brazil, and the need to develop instruments that allow a clearer understanding of the reality. Also, a national database systematizing the more important information from the assessments of all institutions still remains to be built.
Bibliographical references
Contact:
Carmen Lúcia Dias
Depto de Psicologia da Educação da Unesp/Marília-SP
Av. Hygino Muzzi Filho, 737
17525-900 Marília - SP
e-mail: carmen.dias@flash.tv.br
Received on 24/09/05
Corrected on 21/03/06
Accepted on 16/10/06
Carmen Lúcia Dias has a doctorate in Education from the São Paulo State University, and lectures at the Department of Psychology of Education of UNESP Marília.
Paulo Sergio Marchelli has a doctorate in Education from the University of São Paulo, and lectures at the São Marcos University.
Maria de Lourdes Horiguela has a doctorate in Psychology from the University of São Paulo, and lectures at the Graduate Program of UNESP Marília.
- ALFAN, E.; OTHMAN, M. N. Undergraduate students' performance: the case of University of Malaya. Quality Assurance in Education, Bradford, v. 13, n. 4, p. 329-343, 2005.
- AMORIM, A. Avaliação institucional da universidade. São Paulo: Cortez, 1991.
- BRAGON, R. Universidade para todos: relator amplia benefícios a particulares. Folha de S. Paulo, São Paulo, p. C5, 27 nov. 2004.
- BRASIL. Decreto n. 2.026, de 10 de outubro de 1996. Estabelece procedimentos para o processo de avaliação dos cursos e instituições de ensino superior. Diário Oficial, Poder Executivo, Brasília, DF, 11 out. 1996a. Seção I.
- ______. Lei n. 9.394, de 20 de dezembro de 1996. Estabelece as Diretrizes e Bases da Educação Nacional. Diário Oficial, Brasília, n. 248, p. 27833-41, 1996b. Seção I.
- ______. Lei n. 10.172, de 9 de janeiro de 2001. Aprova o Plano Nacional de Educação e dá outras providências. 2001a. Disponível em: <http://www.adunesp.org.br/download/PNE%20-%20Lei%2010172-%2009-01-01.pdf>. Acesso em: 07 mar.2006.
- ______. Decreto n. 3.860, de 09 de julho de 2001. Dispõe sobre a organização do ensino superior, a avaliação de cursos e instituições, e dá outras providências. 2001b. Disponível em: <http://www.ead.ufsc.br/profor/disciplinas/textos/texto008.pdf>. Acesso em: 12 out. 2004.
- ______. Lei n. 10.861, de 14 de abril de 2004. Institui o Sistema Nacional de Avaliação da Educação Superior - SINAES e dá outras providências. 2004a. Disponível em: <http://www.mec.gov.br/legis/pdf/l10861.pdf>. Acesso em: 17 dez. 2004.
- ______. Portaria n. 2.051, de 9 de julho de 2004. Regulamenta os procedimentos de avaliação do Sistema Nacional de Avaliação da Educação Superior (SINAES), instituído na Lei n. 10.861, de 14 de abril de 2004. Diário Oficial, Poder Executivo, Brasília, n. 132, p. 12, 2004b. Seção I.
- CUNHA, L. A. Nova reforma do ensino superior: a lógica reconstruída. Cadernos de Pesquisa, São Paulo, n. 101, p. 20-49, jul. 1997.
- DIAS, C. L. Avaliação da capacitação pedagógica do docente de ensino superior através de uma escala de atitudes. Marília, 2001. 262f. Tese (Doutorado em Educação). Universidade Estadual Paulista, Marília, 2001.
- ENCONTRO Internacional de Avaliação do Ensino Superior. Anais... Brasília: MEC/SESu, 1988.
- FÉRÉZOU, J. P.; NICOLSKY, R. Excelência científica e crescimento. Folha de S. Paulo, São Paulo, p. A3, 06 set. 2004.
- GOMES, A. M. Política de avaliação da educação superior: controle e massificação. Educação e Sociedade, Campinas, v. 23, n. 80, p. 275-298, set. 2002.
- GONÇALVES FILHO, F. Enfoques avaliativos em Revista: concepções de avaliação institucional em questão. Política da Educação Superior - GT 11. Brasília: FE-UNICAMP/CAPES, 2004. Disponível em: <http://www.anped.org.br/25/posteres/franciscogoncalvesfilhop11.rtf>. Acesso em: 04 set. 2004.
- GREGO, S. M. D.; SOUZA, C. B. G. A normatização da avaliação institucional das instituições universitárias na instância federal e no governo do Estado de São Paulo e a autonomia universitária. 2004. Disponível em: <http://www.anped.org.br/26/trabalhos/soniamariaduartegrego.rtf>. Acesso em: 05 set. 2004.
- HARVEY, L. A history and critique of quality evaluation in the UK. Quality Assurance in Education, Bradford, v. 13, n. 4, p. 263-276, 2005.
- IBGE - Instituto Brasileiro de Geografia e Estatística. Estimativas de população. Planilha do Microsoft Excel, 2001. Disponível em: <www.ibge.gov.br>. Acesso em: 07 mar. 2006.
- KOURGANOFF, W. A face oculta da universidade. São Paulo: Editora da Unesp, 1990.
- LEMAITRE, M. J. Development of external quality assurance schemes: an answer to the challenges of higher education evolution. Quality in Higher Education, London/New York, v. 10, n. 2, p. 89-99, jul. 2004.
- LAMPERT, E. Avaliação do professor universitário: pressupostos técnicos e conclusões. Avaliação Educacional, São Paulo, n. 12, p. 79-94, jul./dez. 1995.
- LEITE, D. B. C. Avaliação e tensões de estado, universidade e sociedade na América Latina. Avaliação/Rede de Avaliação Institucional da Educação Superior. RAIES, ano 2, v.2, n. 1, mar. 1997.
- MINISTÉRIO DA EDUCAÇÃO. Secretaria de Ensino Superior. Comissão para a reformulação da Educação Superior. Uma nova política para a Educação Superior. Brasília: MEC, 1985.
- ______. Sistema Nacional de Avaliação da Educação Superior (SINAES). Bases para uma nova proposta de avaliação da educação superior. Brasília: MEC, 2003a.
- ______. Instituto Nacional de Estudos e Pesquisas Educacionais Anísio Teixeira (INEP). Diretoria de Estatísticas e Avaliação da Educação Superior. Relatório do Exame Nacional de Cursos - 2003. Brasília: MEC, 2003b.
- ______. Instituto Nacional de Estudos e Pesquisas Educacionais Anísio Teixeira (INEP). Diretoria de Estatísticas e Avaliação da Educação Superior. Resumo Técnico do Exame Nacional de Cursos - 2003. Brasília: MEC, 2003c.
- ______. Instituto Nacional de Estudos e Pesquisas Educacionais Anísio Teixeira (INEP). Diretoria de Estatísticas e Avaliação da Educação Superior. Censo da Educação Superior - 2003: resumo técnico. Brasília: MEC, 2004a.
- ______. Comissão Nacional de Avaliação da Educação Superior (CONAES). Sistema Nacional de Avaliação da Educação Superior (SINAES). Orientações gerais para o roteiro da auto-avaliação das instituições. Brasília: MEC, 2004b.
- ______. Instituto Nacional de Estudos e Pesquisas Educacionais Anísio Teixeira (INEP). Avaliação externa de instituições de Ensino Superior: diretrizes e instrumento. Brasília: MEC, 2005.
- MOK, K. The quest for world class university: Quality assurance and international benchmarking in Hong Kong. Quality Assurance in Education, Bradford, v. 13, n. 4, p. 277-304, 2005.
- MOREIRA, C. O.; HORTALE, V. A.; HARTZ, Z. A. Avaliação da pós-graduação: buscando consenso. Revista Brasileira de Pós-Graduação (RBPG), Brasília, v. 1, n. 1, p. 26-40, jul. 2004.
- MOREIRA, A. F. B. O campo do currículo no Brasil: construção no contexto da ANPED. Cadernos de Pesquisa, São Paulo, n. 117, p. 81-101, nov. 2002.
- NEIVA, C. C. Avaliação do Ensino Superior: relato de uma experiência. Brasília, nov. 1988, 17p. Documento interno da Assessoria do SESu/MEC.
- NGUYEN, D. N.; YOSHINARI, Y.; SHIGEJI, M. University education and employment in Japan: Students' perceptions on employment attributes and implications for university education. Quality Assurance in Education, Bradford, v. 13, n. 3, p. 202-218, 2005.
- RISTOFF, D. Princípios do programa de avaliação institucional. Avaliação, Campinas, v. 1, n. 1, p. 47-53, jul. 1996.
- ROMANELLI, O. L. História da Educação no Brasil (1930/1973). Petrópolis: Vozes, 1978.
- ROZSNYAI, C. A decade of accreditation in Hungary: lessons learned and future directions. Quality in Higher Education, London/New York, v. 10, n. 2, p. 129-138, jul. 2004.
- SGUISSARDI, V. Para avaliar propostas de avaliação do Ensino Superior. In: SGUISSARDI, V. (Org.). Avaliação universitária em questão: reformas do Estado e da Educação Superior. Campinas: Autores Associados, 1997. p. 41-70.
- SILVA, E. M. C.; LOURENÇO, E. B. Avaliação institucional no Brasil: contexto e perspectiva. Avaliação, Campinas, v. 3, n. 4, p. 63-73, dez. 1998.
- SOBRINHO, J. D. Avaliação institucional: marcos teóricos e políticos. Avaliação, Campinas, v.1, n.1, p.15-24, jul. 1996.
- ______. Avaliação institucional da educação superior: fontes externas e internas. Avaliação, Campinas, v. 3, n. 4, p. 29-35, dez. 1998.
- STELLA, A. External quality assurance in Indian Higher Education: developments of a decade. Quality in Higher Education, London/New York, v. 10, n. 2, p. 115-127, jul. 2004.
- STRYDOM, A. H.; STRYDOM, J. F. Establishing Quality Assurance in the South African Context. Quality in Higher Education, London/New York, v. 10, n. 2, p. 101-113, jul. 2004.