Abstract
This paper proposes a complementary approach to assess the performance of Industrial Engineering undergraduate programs within the Brazilian Higher Education System. The approach presents an alternative Composite Indicator formulation derived from a Data Envelopment Analysis (DEA) model and the Benefit-of-the-Doubt (BoD) approach. This method enhances comparability among programs across diverse Higher Education Institutions (HEIs) by optimizing weights and incorporating weight restrictions. The novel approach was applied to 73 undergraduate programs within the Brazilian Federal Public System in 2019, identifying both programs with suboptimal performance levels and benchmark programs showcasing best practices. This feature of the BoD model can support the HEIs assessed as suboptimal in pursuing improvement opportunities in a benchmarking exercise. The incorporation of Weight Restrictions (WRs) enables a more comprehensive consideration of key performance indicators (KPIs) within the Preliminary Program Grade (CPC) framework, thereby revealing additional improvement opportunities. This research contributes to the field by integrating DEA and BoD into the official STEM performance criteria framework for the first time, offering insights into evaluating undergraduate Industrial Engineering programs within Brazilian HEIs.
Keywords:
Performance assessment; data envelopment analysis; composite indicator; higher education; industrial engineering; preliminary program grade
1 INTRODUCTION
The performance assessment of higher education institutions (HEIs) is essential in academic and managerial decision-making, serving as a key mechanism for policy formulation and institutional improvement. In the Brazilian context, the National Higher Education Evaluation System (SINAES) provides the primary framework for evaluating HEIs, undergraduate programs (i.e., bachelor programs), and student performance (Brasil, 2004). Since 2008, two key indicators, the Preliminary Program Grade (CPC) and the General Program Index (IGP), have been used to quantify educational quality, streamlining assessments and reducing the need for on-site evaluations (Brasil, 2008; Verhine & Dantas, 2009). Although widely adopted, the methodologies behind these indicators remain a subject of debate, particularly regarding their capacity to accurately reflect institutional diversity and educational quality (Schwartzman, 2008).
The assessment of HEIs has gained increasing relevance in the wake of Sustainable Development Goal 4 (SDG 4) within the 2030 Agenda (UN, 2015). This global initiative emphasizes inclusive, equitable, and quality education as a fundamental pillar of sustainable development. In this context, the SINAES framework becomes a crucial instrument for promoting educational quality in alignment with international standards. The CPC, considered the primary Quality Indicator of Higher Education in Brazil (Ikuta, 2016), provides a broad view of the educational landscape, offering insights into the strengths and weaknesses of Brazilian higher education (Griboski & Fernandes, 2016). However, despite its relevance, the formulation of the CPC has been a subject of ongoing debate and criticism within the academic community (Schwartzman, 2008).
A key point of discussion concerns the exogenous weighting system assigned to different CPC components. Even minor alterations in these weights can significantly impact the final performance measurements of undergraduate programs (Bittencourt et al., 2010), potentially influencing the strategic decisions of educational institutions. The lack of consensus regarding the allocation of weight is further complicated by the standardization of these weights across all programs and HEIs, disregarding institutional specificities and disciplinary differences. This issue is particularly critical in the comparison of public and private HEIs under identical performance criteria (Lacerda & Ferri, 2017). Furthermore, the substantial reliance on student-related components, reaching 70% of the CPC, raises concerns about subjectivity in student responses, necessitating a more contextualized approach to evaluation (Griboski & Fernandes, 2016; Ikuta, 2016).
Given these challenges, this study proposes a complementary procedure to assess the performance of Brazilian undergraduate programs based on Data Envelopment Analysis. Specifically, the research seeks to answer the following research question (RQ):

RQ. How can an alternative CPC formulation, based on Data Envelopment Analysis (DEA), provide a more flexible and institutionally tailored assessment of undergraduate program performance?
To address this question, two hypotheses were formulated as follows.
- (i) By allowing for endogenous weight determination, a DEA-based model enables institutions to value their unique strengths;
- (ii) Introducing weight restrictions in the CPC calculation fosters a balanced trade-off between complete flexibility and standardization, leading to a more equitable assessment framework.
To test these hypotheses, the study explores two distinct scenarios: (1) exploring the assessment results of a DEA formulation allowing full flexibility in the selection of weights; and (2) exploring the results when weight restrictions are enforced in the DEA model to balance flexibility with standardization. By adopting this approach, the study refines the alternative CPC composite indicator, enhancing its utility as a complementary tool for performance analysis. The proposed formulation has the potential to support managerial decisions, identify critical areas for continuous improvement, and highlight best practices for suboptimal programs.
Given the complexities inherent in higher education assessments, multiple theoretical perspectives can deepen the performance analysis of the programs (Andriola, 2004; Andriola & Araújo, 2018). Therefore, this alternative CPC formulation represents a methodological innovation towards a more nuanced assessment of undergraduate programs.
This paper is structured as follows. Section 2 reviews the literature on HEI performance assessment with special focus on DEA-based assessments of undergraduate programs and graduate programs. Section 3 provides an in-depth discussion of the official dimensions and components of the CPC index. Section 4 details the Benefit of the Doubt (BoD) approach, including the DEA composite indicator (CI) model and complementary CPC assessment. Section 5 presents and discusses the results, while Section 6 offers concluding remarks, highlighting the implications of the proposed method and suggesting directions for future research.
2 LITERATURE REVIEW ON THE ASSESSMENT OF HIGHER EDUCATION PROGRAMS USING DEA
This section reports a literature review of DEA-based assessments of Higher Education Institutions (HEIs). It is organized into two subsections: the first focuses on assessments of undergraduate programs, while the second addresses graduate programs, particularly at the Brazilian master's and doctoral levels.
2.1 DEA-based studies assessing Undergraduate Programs
This subsection examines studies that applied DEA to assess the efficiency of undergraduate programs in HEIs. Papers published between 1988 and 2025 were searched, and 19 peer-reviewed papers released up to 2022 were found. It is noteworthy that studies of the administrative efficiency of HEIs were disregarded as out of scope (e.g., Di Leo et al., 2024; Almeida et al., 2024).
All papers reviewed in this subsection are indexed in Scopus. Note that the papers discussed here focus particularly on applied studies of Data Envelopment Analysis (DEA) models, including network DEA, directional distance function (DDF), and bootstrap analysis. The thematic scope of these studies covers efficiency evaluations at the institutional, departmental, and program levels, taking into consideration different hierarchical levels of HEIs. Furthermore, the inclusion of studies from diverse educational contexts, such as Chinese, Polish, Greek, Taiwanese, and Brazilian HEIs, enhances the global perspective of this review. By synthesizing efficiency assessments across various contexts, this study identifies key research gaps, particularly concerning the Benefit of the Doubt (BoD) approach and the application of DEA in the efficiency evaluation of Brazilian HEIs, highlighting the contribution of this paper to advancing knowledge in the field.
On the assessments of HEIs, Johnes & Yu (2008) assessed the research efficiency of a set of Chinese universities. Similarly, Nazarko & Šaparauskas (2014) assessed the efficiency of Polish technology universities and their strategies for financial resource allocation. The work of Zhang & Shi (2019) assessed the teaching performance of Chinese colleges and universities from the perspective of network DEA. Xiong et al. (2022) analyzed resource allocation problems according to the teaching and research performance of universities. Ding et al. (2021) proposed a three-stage DEA model to measure the performance of higher education institutions across Chinese provinces. Chen & Chang (2021) developed a novel two-stage DEA method for evaluating the operating efficiency of multiple Chinese university departments. Finally, Yang et al. (2018) introduced a two-stage DEA model specified with a directional distance function to quantify inefficiencies in terms of inputs and outputs in Chinese universities. The use of DDF makes this work methodologically distinct from the previous ones analyzed.
Regarding departmental-level assessments, Beasley (1990) is considered a pioneer in using DEA to evaluate the efficiency of university departments in the U.K., introducing an enhanced methodology to quantify both teaching and research efficiencies. This approach was extended by Kounetas et al. (2011), who studied the research efficiency of academic departments at a Greek university. Chang et al. (2012) applied a two-stage DEA model to assess the efficiency of tourism and leisure departments in Taiwanese universities, offering valuable managerial insights. Later, Barra & Zotti (2016) brought innovative perspectives by focusing on the assessment of relatively heterogeneous units within disciplinary departments, combining bootstrap techniques with DEA, while Ding et al. (2021) selected a set of non-homogeneous network DEA models for assessing faculty departments. Also, Ghasemi et al. (2020) explored the performance of several campuses while assessing departmental efficiency in HEIs with a strong hierarchical structure. Chen & Chang (2021) formulated weight restrictions for assessing departmental efficiency, supported by judgment values.
In terms of the efficiency assessment of Brazilian programs, the DEA technique has gained attention since the 2000s. Despite this popularity, the assessment of programs in HEIs specifically considering CPC-related topics has been identified in only three papers within the recent literature (e.g., Zanella & Oliveira, 2021). They are discussed in the next paragraphs.
The first paper is the work of Borges & Carnielli (2006), who evaluated 181 programs based on the 2003 National Program Exam database. This research reported a comprehensive evaluation using the criteria set by the National Higher Education Evaluation System (SINAES, in Portuguese: Sistema Nacional de Avaliação da Educação Superior). The second is the paper of Soliman et al. (2017), who applied a DEA model to assess the efficiency of a set of 1229 Engineering programs in both private and public institutions using data from 2009. These authors addressed components related to faculty qualification, working hours, and student perception of the formative process offered by the program as inputs (resources), and components such as students' scores in the National Student Performance Exam (ENADE, in Portuguese: Exame Nacional do Desempenho de Estudantes) and the value added by the formative process as outputs (results). The last is the paper of Rodrigues & Gontijo (2019), which addressed the KPIs of the CPC as inputs and outputs to assess the efficiency of Public Administration programs. They also incorporated expert opinions on the importance of the components through weight restrictions in the DEA model.
Note that the three papers identified addressed the assessment of HEIs and higher education programs from the perspective of efficiency analysis, with the variables treated as inputs and outputs. Nevertheless, works using the Benefit of the Doubt (BoD) approach to construct a composite indicator were not identified during the literature review stage, meaning that, to the best of our knowledge, this is the first research paper proposing the construction of a composite indicator alternative to the CPC to assess undergraduate Industrial Engineering programs in the context of Brazilian Higher Education Institutions (HEIs).
2.2 DEA-based studies assessing Brazilian Master's and Doctoral Graduate Programs
This subsection presents a body of literature on the assessment of graduate programs in Brazil published between 2015 and 2025. Particular focus was dedicated to studies based on Data Envelopment Analysis (DEA), and the use of DEA in multi-method frameworks was also taken into account (e.g., network DEA, MCDM/A-DEA). Although this paper does not assess graduate programs, exploring the literature on Brazilian DEA applications in graduate education offers valuable methodological and conceptual insights that enrich this investigation. As a result, a set of 13 papers published between 2015 and 2023 in Brazilian journals is discussed in the next paragraphs. One should note that papers assessing graduate programs using operations research techniques other than DEA are not covered in this subsection (e.g., Tavares et al., 2022).
Since the 2010s, a range of studies has emerged in Brazil exploring the application of DEA to evaluate the performance and efficiency of stricto sensu graduate programs. Particularly, in the period analyzed, Brazilian literature has contributed to this field, reinforcing DEA’s role in assessing graduate programs across several disciplines.
Among these contributions, the paper of Vasconcelos et al. (2016) is considered a pioneering work in the efficiency assessment of Brazilian engineering graduate programs according to the data and criteria of the Brazilian Federal Agency for Support and Evaluation of Graduate Education (CAPES, in Portuguese: Coordenação de Aperfeiçoamento de Pessoal de Nível Superior). These authors emphasize the methodological rigor required in selecting appropriate input-output variables for DEA models, underscoring DEA's potential as a tool to support evidence-based decision-making in educational policy. Similarly, Gontijo et al. (2018) demonstrated the adaptability of DEA in the public sector by applying it to assess educational efficiency in public organizations. Their methodological framework aligns closely with the evaluation of graduate programs, bridging institutional performance and educational outputs.
Another advanced application of DEA is presented by Angulo-Meza et al. (2018), employing a Network DEA model to evaluate the efficiency of Brazilian graduate programs within the “Engineering III” subarea. This approach allows for a stage-wise decomposition of the academic production process, distinguishing between inputs and outputs across sub-processes such as student supervision, scientific publication, and intellectual production. Their findings contribute to a more nuanced understanding of how graduate programs transform academic resources into scientific outcomes.
Tavares et al. (2021) proposed an efficiency evaluation approach for HEIs based on a network DEA model. The objective was to reflect the complexity of universities by considering multiple activities and sub-processes within a three-stage evaluation focused on different perspectives: financial, teaching, research, and innovation. The approach was applied to 45 Brazilian federal universities using data from 2016 and 11 variables distributed across the three stages. The results reveal significant variations in performance among the HEIs and identify those with higher relative efficiency at each stage. The approach also enables the identification of efficient HEIs that may serve as benchmarks for others. This methodological approach aligns with and extends the previous work by Angulo-Meza et al. (2018) on network DEA applications in graduate program evaluation.
Felicetti & Cabrera (2022) focused on students’ experiences within graduate programs. Although their study does not directly implement DEA, it reveals how students’ perceptions, particularly regarding engagement and collaborative learning, can serve as meaningful inputs in DEA-based evaluations. This perspective adds nuance to the broader understanding of educational outputs and their subjective dimensions.
In parallel, Azevedo et al. (2021) examined the evolution of graduate education in Brazil, advocating for adaptive methodologies in program assessment. Their findings support the integration of structured models like DEA to sustain continuous improvement efforts in professional doctoral programs.
In addition to these national contributions, Guimarães et al. (2019) provided insights into Internationalization at Home (IaH), highlighting how internationalization strategies intersect with educational quality, an important aspect in performance frameworks such as DEA. While not explicitly framed within DEA, the study underlines the strategic role of global engagement in improving academic outcomes. Complementing this perspective, Hammes et al. (2020) and Filho et al. (2023) employ DEA to analyze public spending in federal universities, linking budgetary efficiency to academic performance. These studies underscore how fiscal responsibility and performance metrics are intertwined in Brazilian higher education.
Furthermore, Villano & Tran (2019) conducted a meta-regression analysis that reflects the increasing reliance on heterogeneous DEA models to accommodate varied institutional contexts. Their results affirm DEA’s growing legitimacy as a preferred tool in educational evaluation.
On a related front, Daú et al. (2023) explored the integration of sustainability indicators aligned with the UN 2030 Sustainable Development Goals into higher education assessments. Their innovative approach opens new avenues for combining DEA with broader social responsibility frameworks, enabling the evaluation of how graduate programs contribute to both education quality and the achievement of global sustainability goals.
Internationally, methodologies such as those proposed by Fisher et al. (2017) and Popović et al. (2020) have expanded DEA’s application scope by linking student quality and academic performance, or incorporating multi-criteria decision-making methods to assess teaching effectiveness. These models, although not exclusive to the Brazilian context, illustrate transferable techniques that can enrich domestic evaluations.
The literature review presented in this subsection highlights the relevance of DEA methodology in the assessment of Brazilian master’s and doctoral graduate programs. By encompassing perspectives from fiscal efficiency, sustainability, student engagement, and program quality, these studies can enrich both performance and efficiency assessments.
2.3 Discussion of literature review findings
The literature reviewed in this paper reveals a methodologically diverse field of research using DEA to evaluate higher education programs. The adoption of DEA models, along with variations such as network DEA, directional distance functions (DDF), and bootstrapped models, demonstrates a growing effort to refine efficiency assessments and account for the multifaceted nature of academic performance. Notably, the broad geographical scope of the review, spanning Asia, Europe, and Latin America, provided a wide and in-depth outlook on the research body released since the 2000s. It also surfaces research opportunities in country-specific applications, particularly within the Brazilian context.
Regarding undergraduate program assessments, most studies focus on resource utilization and academic outcomes, typically measured through student performance, faculty characteristics, and institutional productivity. However, few studies integrate the perspectives of student satisfaction, program infrastructure, and graduate performance, as recommended by the CPC framework. This finding suggests that while the international literature is robust in methodological experimentation, there is limited contextual calibration to national performance frameworks such as those established by Brazil's Ministry of Education (MEC).
At the departmental level, research has demonstrated the capacity of DEA to analyze multidisciplinary academic units. This is particularly useful in multidisciplinary fields like Industrial Engineering, where departments may vary significantly in size, focus, and output. Studies employing multi-stage and hierarchical DEA have addressed this challenge, yet Brazilian departmental-level applications remain sparse. Furthermore, the frequent omission of stakeholder perceptions, such as student evaluations or expert judgments, as part of the input-output configuration presents an opportunity for future research to bridge quantitative assessment with qualitative insights.
Graduate-level assessments in Brazil incorporated multi-method approaches, including MCDM/A-DEA hybrids and sustainability KPIs. These developments reflect an awareness of the broader mission of higher education beyond academic outputs, encompassing institutional accountability, internationalization, and social responsibility. However, this complexity also increases the need for transparent, reproducible, and policy-aligned assessment models. DEA’s flexibility makes it a strong candidate for such evaluations, though its application must be aligned with policy objectives like CAPES’ framework.
Another key observation is the limited use of composite indicator models in the reviewed DEA literature. Most studies use DEA purely as a benchmarking tool rather than as a basis for constructing performance indexes. The Benefit of the Doubt (BoD) methodology, which allows endogenous weighting of indicators, is rarely used in the HEI context, despite its compatibility with the composite nature of policy-oriented indices such as the CPC. As a result, there is a significant gap in research bridging DEA and BoD for the purpose of institutional evaluation in Brazil.
Despite the abundance of DEA literature applied to institutional and departmental analyses, only a handful of studies have explicitly explored DEA’s alignment with Brazil’s Preliminary Program Grade (CPC) framework. Borges & Carnielli (2006), Soliman et al. (2017), and Rodrigues & Gontijo (2019) provided foundational contributions by evaluating higher education programs using CPC-related indicators. However, none of these studies employed the Benefit of the Doubt (BoD) approach to construct a composite indicator, reinforcing the originality of the present research in proposing a DEA-BoD framework tailored to assess Industrial Engineering undergraduate programs.
3 THE OFFICIAL CPC CALCULATION PROCEDURE
The Preliminary Program Grade (CPC) can be considered a composite indicator that combines various Key Performance Indicators (KPIs) reflecting the performance criteria of undergraduate programs into an overall performance measure. Established in 2008, CPC has evolved through a sequence of updates over time to accommodate alterations to the National Student Performance Exam (ENADE) and also to respond to the needs of the academic community. The main modifications have primarily focused on the inclusion of assessment components and the corresponding set of weights (Barreyro & Rothen, 2014). Table 1 reports dimensions (criteria) and components (or KPIs) of the CPC, along with the weight system integral to its calculation.
The assessment framework reported in Table 1 is based on Technical Note No. 45/2019 (Brasil, 2019). It defines a four-dimensional set of criteria, each assigned a specific weight based on its relative importance. These dimensions encompass various quantifiable components that assess particular aspects of the overall performance of undergraduate programs. The first dimension, Graduate Performance, is quantified by the KPI entitled "ENADE Grade" (Y1), which reflects the academic achievements of the graduates. The second dimension is the "Program Value-Added", reflected by the KPI "Difference Between Observed and Expected Performance Indicator Score" (Y2). This KPI quantifies the program's effectiveness in adding value to students' career outcomes. The third dimension, "Student's Perception", is represented by three KPIs: "Didactic-Pedagogical Organization" (Y3), "Infrastructure and Physical Installations" (Y4), and "Opportunities for Expanding Academic and Professional Training" (Y5). These KPIs collectively measure the students' perspectives on the educational environment, infrastructure, and growth opportunities. The fourth and last dimension is "Faculty", composed of KPIs reflecting the faculty education level and working regime (e.g., full-time, part-time): "Faculty members holding a master's degree" (Y6), "Faculty members holding a PhD" (Y7), and "Faculty Work Journey" (Y8). These KPIs collectively gauge the qualifications, expertise, and commitment of the faculty, providing a comprehensive evaluation of the faculty's role in the program.
All CPC KPIs undergo standardization and rescaling, assuming continuous values between zero and five. Based on the standardized component values, the CPC Continuous Grade of each program j (NCj) is given by weighting the scores of each KPI, as reported in formulation (1):

$$NC_j = \sum_{r=1}^{8} w_r \, y_{rj}, \quad (1)$$

where $w_r$ is the official weight assigned to KPI r (Table 1) and $y_{rj}$ is the standardized value of KPI r for program j.
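To make this aggregation concrete, the Python sketch below computes NCj for a single program. The weight vector reproduces the official values discussed in this section (e.g., 35% for Y2) and should be confirmed against Table 1; the KPI grades used in the example are hypothetical.

```python
import numpy as np

# Official CPC weights for Y1..Y8 as discussed in the text
# (e.g., 35% for Y2); confirm against Table 1.
W_CPC = np.array([0.20, 0.35, 0.075, 0.05, 0.025, 0.075, 0.15, 0.075])

def cpc_continuous_grade(y: np.ndarray) -> float:
    """Weighted average of the eight standardized KPIs (each in [0, 5])."""
    return float(W_CPC @ y)

# Hypothetical standardized KPI grades for one program (Y1..Y8):
y_j = np.array([3.1, 2.8, 4.0, 3.5, 3.9, 4.4, 3.2, 5.0])
print(round(cpc_continuous_grade(y_j), 4))  # NC_j on the 0-5 scale
```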
4 METHODOLOGY
This section presents the Composite Indicator (CI) DEA model specified for quantifying the performance of undergraduate programs within the Brazilian public Federal System. DEA was introduced by Charnes et al. (1978) as a linear programming-based technique designed to quantify the relative efficiency of a homogeneous set of Decision-Making Units (DMUs) (Cooper et al., 2006).
DEA-based models enable the estimation of one aggregate efficiency measure for each DMU by making direct comparisons with other DMUs within a sample. While initially proposed to assess the technical efficiency of homogeneous DMUs using multiple inputs to generate multiple outputs, DEA can also be applied in the context of constructing composite indicators.
A CI aggregates a specific set of individual KPIs into a global performance measure. The resulting CI can capture multidimensional concepts that a single indicator might overlook. Advantages of using CIs include the ability to summarize information and to support the interpretation of results compared to a large set of individual indicators, as outlined in the guidelines of the Organisation for Economic Co-operation and Development (OECD) (Nardo et al., 2008).
The use of DEA for constructing CIs was initially proposed by Cook & Kress (1990), and Cherchye et al. (2007) further popularized DEA-based CIs by introducing a mathematically straightforward model known as the "Benefit-of-the-Doubt (BoD) Composite Indicator". This approach assigns endogenous weights to each indicator of each DMU under evaluation through optimization. Consequently, each DMU is evaluated with the weights that maximize its advantage in the assessment, mitigating the use of a singular weight system that could disadvantage other DMUs.
The linear programming model for estimating the performance of Brazilian Industrial Engineering undergraduate programs is expressed in formulation (2). As noted by Cherchye et al. (2007), this model is analogous to the original input-oriented DEA model proposed by Charnes et al. (1978), wherein all process indicators are treated as outcomes (outputs) and a dummy variable equal to "1" is considered as the single input for all DMUs:

$$CI_{j_o} = \max_{u_{rj_o}} \sum_{r=1}^{s} u_{rj_o}\, y_{rj_o} \quad \text{subject to} \quad \sum_{r=1}^{s} u_{rj_o}\, y_{rj} \leq 1,\; j = 1, \dots, n; \qquad u_{rj_o} \geq \varepsilon,\; r = 1, \dots, s. \quad (2)$$

In formulation (2), $y_{rj}$ represents the value observed in indicator r for DMU j. The subscript "o" in $j_o$ denotes the DMU under evaluation; note that (2) is solved n times, once per DMU. $u_{rj}$ represents the weight of the r-th KPI of DMU j, so the $u_{rj}$ are the decision variables of this optimization model. The letter ε is an infinitesimal value that guarantees strictly positive weights. The study conducted in this article uses the model of formulation (2) to quantify the performance of undergraduate programs considering the KPIs of the CPC. Therefore, the DMUs represent the "undergraduate programs" and the individual indicators ($y_{rj}$) represent the "KPIs of the CPC". $CI_{j_o}$ can assume values between 0 and 1: $CI_{j_o} = 1$ outlines the frontier of efficiency and indicates the highest performances in the Production Possibility Set (PPS), while $CI_{j_o} < 1$ indicates potential for improvement. To facilitate the interpretation of results estimated using formulation (2), a small illustrative example is depicted in Figure 1. This example illustrates the performance assessment of five DMUs (A, B, C, D, and E), assessed using two desirable output KPIs (Y1 and Y2). DMUs A, B, and C had the best performances in the set, so they outline the frontier of efficiency, against which DMUs D and E were compared and evaluated. Taking DMU D as a reference to illustrate the estimation of scores, $CI_D$ is given by the ratio O'D/O'D* (Figure 1), where O' is the origin (0, 0) and D* is the projection of DMU D on the performance frontier, i.e., the reference point representing the desirable output levels for D to be considered efficient. Therefore, $CI_D$ can also be interpreted as a radial improvement potential for DMU D.
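A minimal computational sketch of formulation (2) is given below, assuming SciPy's linear programming solver. The five-DMU toy data loosely mirrors the illustrative example of Figure 1, but the numeric values are invented for demonstration purposes.

```python
import numpy as np
from scipy.optimize import linprog

def bod_score(Y: np.ndarray, o: int, eps: float = 1e-6) -> float:
    """Benefit-of-the-Doubt score of DMU `o`, cf. formulation (2).

    Y is an (n_dmus, n_kpis) matrix. Solves
    max sum_r u_r * Y[o, r]  s.t.  sum_r u_r * Y[j, r] <= 1 for all j,
    with u_r >= eps enforcing strictly positive weights.
    """
    n, s = Y.shape
    res = linprog(c=-Y[o],                 # linprog minimizes, so negate
                  A_ub=Y, b_ub=np.ones(n),
                  bounds=[(eps, None)] * s,
                  method="highs")
    return -res.fun

# Toy data: 5 programs (DMUs) and 2 KPIs, echoing Figure 1 (values invented).
Y = np.array([[1.0, 4.0], [3.0, 3.0], [4.0, 1.0], [2.0, 2.0], [1.5, 1.0]])
print([round(bod_score(Y, o), 3) for o in range(len(Y))])  # A, B, C score 1
```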
One advantage of a DEA-based CI model lies in allowing each DMU to select its own set of weights, thereby emphasizing its specialties or vocations through optimization, while the performance score remains a relative measure. While advantageous, this flexibility in choosing weights may also represent a weakness when a DMU performs poorly in a specific KPI. In such cases, a nearly zero weight may be allocated to this KPI, causing it to be overlooked in the performance analysis. Since this model only requires weights to be positive, assessments in which there is interest in avoiding very low weights may benefit from incorporating value judgments through the imposition of weight restrictions.
In this paper, the undergraduate programs under assessment were initially evaluated under total flexibility of positive weights. Subsequently, partial weight restrictions (WRs) were imposed on the weights of KPIs to reflect their relative importance to all evaluated KPIs. There is a body of research addressing procedures to add restrictions to weights in DEA-based models, and this line of work can be traced back to the works of Dyson & Thanassoulis (1988) and Wong & Beasley (1990). Since then, the issue of WRs on DEA-based models has attracted considerable attention in the literature, and different approaches have been proposed. Some noteworthy contributions to WR literature include the works of Thompson et al. (1990), Sarrico & Dyson (2004), Estellita Lins et al. (2007), and Zanella et al. (2015).
In this research, weight restrictions were applied to the DMU under assessment ($j_o$), as shown in formulation (3). These WRs limit the virtual weight associated with KPI r relative to the total virtual weight allocated to all KPIs. Each virtual weight can be interpreted as the product of the absolute weight and the value of the KPI associated with it; it can also be interpreted as the relative "importance" of a KPI in the framework:

$$w_r (1 - k) \leq \frac{u_{rj_o}\, y_{rj_o}}{\sum_{i=1}^{s} u_{ij_o}\, y_{ij_o}} \leq w_r (1 + k), \quad r = 1, \dots, s. \quad (3)$$

Restrictions on the virtual weights such as (3) were first proposed by Wong & Beasley (1990) and have been extensively used in applications whose objective is to limit the weight of a KPI of the composite indicator in percentage terms.

In formulation (3), $w_r$ is the official weight of KPI r, and $1 \pm k$ reflects the degree of flexibility allowed for the KPI weight: the higher the value of k, the greater the flexibility. For instance, k = 0 specifies lower and upper limits equal to the value defined in the original CPC weighting system (see reference values in Table 1), whereas k = 0.2 allows a margin of plus or minus 20% around the reference value of the weights.
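The sketch below extends the BoD model of formulation (2) with the virtual weight restrictions of formulation (3), linearized as two inequality rows per KPI. The weight vector reproduces the official CPC values cited in the text and should be checked against Table 1; note that tight restrictions (small k) can render the program infeasible for some DMUs, which is handled here by returning NaN.

```python
import numpy as np
from scipy.optimize import linprog

# Official CPC weights for Y1..Y8 (confirm against Table 1).
W_CPC = np.array([0.20, 0.35, 0.075, 0.05, 0.025, 0.075, 0.15, 0.075])

def bod_score_wr(Y: np.ndarray, o: int, w: np.ndarray = W_CPC,
                 k: float = 0.5, eps: float = 1e-6) -> float:
    """BoD score of DMU `o` under Wong & Beasley virtual WRs:
    w_r(1-k) <= u_r*y_or / sum_i(u_i*y_oi) <= w_r(1+k)."""
    n, s = Y.shape
    y_o = Y[o]
    wr_rows, wr_rhs = [], []
    for r in range(s):
        e_r = np.zeros(s)
        e_r[r] = y_o[r]                             # virtual weight of KPI r
        wr_rows.append(e_r - w[r] * (1 + k) * y_o)  # upper share bound
        wr_rows.append(w[r] * (1 - k) * y_o - e_r)  # lower share bound
        wr_rhs += [0.0, 0.0]
    A_ub = np.vstack([Y] + wr_rows)
    b_ub = np.concatenate([np.ones(n), wr_rhs])
    res = linprog(c=-y_o, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(eps, None)] * s, method="highs")
    return -res.fun if res.success else float("nan")
```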
4.1 Sample description
The performance assessment reported in this paper was conducted using a sample collected from the open database of the National Institute of Educational Studies and Research Anísio Teixeira (INEP). The database refers to the base year 2019, the most recent evaluation year for engineering programs available in INEP's open databases at the time of this research. INEP reported 654 industrial engineering programs, including in-person, hybrid, and digital programs, offered by colleges, university centers, and universities. Only in-person programs from federal public HEIs were considered for the sample, as these are the programs required to participate in the National Student Performance Exam (ENADE) in Brazil. Accordingly, state public and private programs, whose participation in ENADE is optional, were not included in the sample. Also, to enhance the reliability of the assessment results, programs with fewer than 10 students enrolled were excluded. Consequently, the final sample comprised 73 undergraduate Industrial Engineering programs, all listed in Appendix A.
5 RESULTS AND DISCUSSION
5.1 Exploratory analysis of CPC KPIs
Figure 2 illustrates the histograms depicting the distribution of the eight CPC KPIs, while Table 2 presents the main descriptive measures. Notable asymmetries are observed in the data distributions, particularly in the KPIs associated with Faculty Work Journey (Y8) and Faculty members holding a master's degree (Y6). Regarding KPI Y8, only a few programs received a grade different from 5, indicating high performance in this criterion. This level of performance was expected, as the sample consists of HEIs where the full-time work regime is the standard. The histogram of KPI Y6 reveals that 75% of the programs within the sample excel in this criterion, with grades mostly higher than 4.38, as shown in Table 2.
Another histogram showing a pronounced concentration of programs with similar grades refers to KPI Y2, which exhibits one of the lowest standard deviations in the sample. In contrast, the greatest dispersion is associated with the KPI linked to faculty members holding PhD degrees (Y7), which has the highest standard deviation in the sample.
Several programs exhibit very low or zero values for some KPIs. As clarified in Technical Note No. 38, programs lacking faculty members with a master's degree (Y6) or a PhD degree (Y7), or with no faculty members in partial or full-time work regimes (Y8), have their grades computed as zero. Additionally, concerning the KPIs within the Student Perception dimension (Y3, Y4, and Y5), if no student responds to at least one item of each KPI, the program receives a computed value of zero.
Figure 3 depicts the Pearson correlation coefficients between the KPIs, providing insights into the relationships among KPIs.
The highest correlation coefficients in Figure 3 highlight the relationships among the KPIs within the dimension of Student Perception of the Training Process. Despite the distinct objectives of the three KPIs, Didactic-Pedagogical Organization (Y3), Infrastructure and Physical Installations (Y4), and Opportunities for Expanding Academic and Professional Training (Y5), students' responses consistently align, indicating a propensity among students to evaluate various aspects similarly. Another noteworthy correlation is evident between the KPIs reflecting the proportion of "Faculty members with a master's degree" (Y6) and "Faculty members with a PhD" (Y7). This correlation aligns with expectations, given that the Y6 KPI includes professors with doctoral degrees in its calculation. Furthermore, a significant correlation exists between the KPIs "Faculty members with a PhD" (Y7) and "ENADE Grade" (Y1). Despite these KPIs aiming to capture distinct aspects of the programs, the coefficient suggests that programs with high grades in one KPI tend to perform well in the other.

As outlined in the OECD handbook on constructing composite indicators (Nardo et al., 2008), assessing statistical correlations between individual indicators is crucial. Two correlated indicators included with weights w1 and w2 essentially represent a combined weight of w1 + w2. In such cases, it is imperative to determine whether the correlated indicators measure the same aspect. In the present research, the high correlation between Y6 and Y7 reveals that these two KPIs reflect a common aspect of "faculty qualification". In contrast, the correlations observed in the Student Perception dimension, despite being high, indicate that the KPIs aim to capture different aspects, as discussed in earlier sections of this work.
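As a sketch of this screening step, assuming the standardized KPI grades are stored in a CSV file with columns Y1 to Y8 (the file name is hypothetical), the correlation analysis can be reproduced with pandas:

```python
import pandas as pd

# Hypothetical input: one row per program, columns Y1..Y8 with the
# standardized KPI grades.
kpis = pd.read_csv("cpc_kpis_2019.csv")
cols = [f"Y{r}" for r in range(1, 9)]
corr = kpis[cols].corr(method="pearson")
print(corr.round(2))

# Flag strongly correlated pairs, as the OECD handbook recommends checking.
pairs = [(a, b, round(corr.loc[a, b], 2))
         for i, a in enumerate(cols) for b in cols[i + 1:]
         if abs(corr.loc[a, b]) > 0.7]
print(pairs)
```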
5.2 Performance assessment allowing flexibility to weights
In this subsection, the results estimated using formulation (2) are discussed. The set of optimized weights was calculated with full flexibility. To illustrate the application of the method, Table 3 reports the 10 programs that attained the maximum score (CI = 1), i.e., the Industrial Engineering programs (DMUs) considered relatively efficient in the PPS, along with the number of times each DMU was selected as a benchmark for other units.
The undergraduate programs of the Federal University Rio de Janeiro (UFRJ Macaé), the Federal University of Ceará (UFC Russas), the Celso Suckow da Fonseca Federal Center for Technological Education (CEFET-RJ Nova Iguaçu), and the Federal Technological University of Paraná (UTFPR Apucarana) are the ones most frequently selected as examples of best practices. They were selected as peers 61, 22, 11, and 10 times, respectively.
These benchmark DMUs were identified using the dual (envelopment) form of formulation (2). The envelopment model is given by

$$\min_{\theta, \lambda} \theta \quad \text{subject to} \quad \sum_{j=1}^{n} \lambda_j\, y_{rj} \geq y_{rj_o},\; \forall r; \quad \sum_{j=1}^{n} \lambda_j \leq \theta; \quad \lambda_j \geq 0,\; \forall j,$$

and it yields the same optimal score as formulation (2) under Constant Returns to Scale.
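A sketch of this dual model is shown below, again assuming SciPy; DMUs with a positive λj at the optimum are the peers of the DMU under assessment, and the λj values play the role of the intensity variables discussed next. Function names and tolerances are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def bod_peers(Y: np.ndarray, o: int, tol: float = 1e-7):
    """Envelopment (dual) form of the BoD model for DMU `o`:
    min theta  s.t.  sum_j lam_j * Y[j, r] >= Y[o, r] for all r,
                     sum_j lam_j <= theta,  lam_j >= 0."""
    n, s = Y.shape
    c = np.concatenate([[1.0], np.zeros(n)])            # minimize theta
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])         # output constraints
    A_in = np.concatenate([[-1.0], np.ones(n)])[None]   # sum lam_j <= theta
    A_ub = np.vstack([A_out, A_in])
    b_ub = np.concatenate([-Y[o], [0.0]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (n + 1), method="highs")
    lam = res.x[1:]
    peers = {j: round(lam[j], 4) for j in range(n) if lam[j] > tol}
    return res.fun, peers  # (score, {peer index: intensity lambda_j})
```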
To illustrate the discussion of the results, we consider the program of the Federal University of Pelotas (UFPEL, Pelotas), also named DMU 42, located in the Brazilian state of Rio Grande do Sul. UFPEL selected two peers: UFABC (São Bernardo do Campo) and UTFPR (Apucarana), whose KPI levels are shown in Table 4. Note that λ j can be interpreted as the intensity variable comparing the DMU under assessment to its peers. Each KPI value of the peers (benchmark) represents a learning opportunity for UFPEL to reach higher levels of performance.
To exemplify the benchmarking exercise between UFPEL and its peers, Figure 4 illustrates the KPI levels of UFPEL, UFABC, and UTFPR.
The UFPEL program can learn from UFABC best practices related to the KPIs "ENADE Grade" (Y1), the program value-added, represented by the KPI "Difference Between Observed and Expected Performance Indicator Score" (Y2), "Opportunities for Expanding Academic and Professional Training" (Y5), and "Faculty members holding a PhD" (Y7). From peer UTFPR, the DMU under assessment can acquire insights regarding the "Student's Perception" KPIs "Didactic-Pedagogical Organization" (Y3) and "Infrastructure and Physical Installations" (Y4). UFPEL can learn from both peers managerial practices in terms of "Faculty members holding a master's degree" (Y6) and the "Faculty Work Journey" (Y8).
5.3 Performance assessment under weight restrictions
With the incorporation of formulation (3) into formulation (2), a few changes were observed. Setting k = 0.5 means that the virtual weight of each KPI may deviate by up to 50% from its official CPC value. In practical terms, this bound ensures that the assessment remains anchored to the official CPC policy framework while allowing programs to express some institutional specificity. For instance, the virtual weight of KPI Y2 is constrained between 17.5% and 52.5%, balancing respect for the official 35% weight with acknowledgment of variability in institutional performance drivers.
Therefore, the DMUs also had 50% freedom to select the weights that maximize their performance score in the optimization. Table 5 shows the lower and upper bounds of the virtual WRs for each KPI considered in the assessment. As a result, the estimated composite indicator scores ranged between 0.4455 and 1.
The histogram depicted in Figure 5 shows the distribution of the CI scores under WRs. Most programs achieved scores between 0.7 and 0.9, indicating that they still have room for improvement.
Regarding the descriptive statistics of the 73 DMUs within the Production Possibility Set (PPS), the mean and standard deviation of the scores were 0.7895 and 0.1042, respectively. Based on the quartile values, the Industrial Engineering programs were classified as follows: bottom performance programs (Q4), with CI scores below the first quartile; moderate performance programs (Q3), with scores between the first quartile and the median; high-performance programs (Q2), with scores between the median and the third quartile; and shining stars (Q1), with scores above the third quartile.
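A sketch of this quartile-based classification is given below; the group labels follow the categories used in this section, and the example scores are hypothetical.

```python
import numpy as np

def classify_by_quartile(scores):
    """Label each program by the quartile group of its CI score
    (Q1 = shining stars, ..., Q4 = bottom performance)."""
    q1, q2, q3 = np.quantile(scores, [0.25, 0.50, 0.75])
    labels = []
    for s in scores:
        if s < q1:
            labels.append("bottom (Q4)")
        elif s < q2:
            labels.append("moderate (Q3)")
        elif s < q3:
            labels.append("high (Q2)")
        else:
            labels.append("shining star (Q1)")
    return labels

ci = np.array([0.4455, 0.71, 0.79, 0.84, 0.92, 1.0])  # hypothetical scores
print(classify_by_quartile(ci))
```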
Table 6 presents the CI values of the Industrial Engineering programs evaluated under Weight Restrictions (WRs).
Two Industrial Engineering programs, UFRJ (Rio de Janeiro) and UFC (Russas), maintained their top positions as benchmarks under WRs.

Table 6 also shows that three programs lost benchmark status when WRs were added: UFRJ (Macaé), CEFET-RJ (Nova Iguaçu), and UTFPR (Apucarana). This change in benchmark programs indicates that WRs help filter out DMUs that were efficient only due to extremely flexible weighting; the revised set of benchmarks can reflect public policies more realistically and support deeper institutional learning.

These three programs, which had previously achieved the maximum performance score, experienced a significant decrease in their performance when WRs were imposed on the evaluation. This means that each of them has at least one of the eight components with very low values, and when these components carry more weight in the assessment, they reduce the program's performance.
The assessments based on DEA and WRs were particularly useful to identify the programs’ strengths and weaknesses. Taking the Program of the Federal University of Pelotas (UFPEL program code 1102178) as an example, Figure 6 illustrates its distribution of virtual weights in the eight KPIs considered.
In the assessment using formulation (2) with WRs (3), the Federal University of Pelotas program (UFPEL) allocated the lowest weights (within the allowed limits) to KPIs Y1 and Y2, indicating potential weaknesses in these components compared to the other programs in the sample. Similarly, the highest weights (within the allowed limits) were assigned to KPIs Y3 to Y8, representing potential strengths of this program. Appendix A provides the weights selected by the other programs for further analysis.
5.4 Comparing the official CPC index and the DEA alternative formulation results
Figure 7 illustrates the correlation between the BoD-based composite indicator scores, ranging from 0 to 1, and the CPC Continuous Score values (INEP approach), ranging from 0 to 5. A Pearson correlation coefficient of 0.9247 indicates a strong positive correlation between the outcomes of the two assessment methods. Figure 7 also reports the 73 undergraduate Industrial Engineering programs categorized into four performance levels. The dispersion pattern reveals that programs at the bottom and at the top of the performance scale are identified consistently by both assessment approaches. Unlike the INEP approach, where each program's performance is independently calculated based on weighted averages of KPIs, the BoD approach involves direct comparisons among all programs. Moreover, the composite indicator approach offers flexibility in the estimation of the virtual weights of the KPIs. Appendix A reports further information regarding the CI values and the categorization of all DMUs in the sample used in this paper.
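Assuming the two score vectors are available as arrays (file names hypothetical), the correlation reported above can be reproduced as follows.

```python
import numpy as np

bod_scores = np.loadtxt("bod_scores.txt")      # BoD CI, in [0, 1]
cpc_grades = np.loadtxt("cpc_continuous.txt")  # official NC, in [0, 5]

r = np.corrcoef(bod_scores, cpc_grades)[0, 1]
print(f"Pearson r = {r:.4f}")  # the paper reports 0.9247 for the 73 programs
```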
The alternative formulation of the composite indicator proposed in formulation (2) can work as a complementary management tool to assess the performance of Higher Education Institutions (HEIs). Unlike traditional methods that rely exclusively on exogenous weighted averages, the BoD approach provides a more insightful assessment by allowing direct comparisons among programs within the sample. This feature can support a more accurate evaluation of the programs' relative strengths and weaknesses, while also empowering institutions with the flexibility to adapt and optimize their strategies. The support for benchmarking can also facilitate strategic decision-making, enabling HEIs to identify specific areas for improvement and allocate resources more effectively. Therefore, integrating the use of BoD with the official assessment method represents an asset to support decisions at the top administration level.
6 CONCLUSIONS
This study proposed a complementary method to enhance the performance assessment of Brazilian undergraduate programs in Industrial Engineering. The approach relies on the Data Envelopment Analysis (DEA) Benefit of the Doubt (BoD) method to specify a composite indicator. Key Performance Indicators (KPIs) from the INEP official CPC framework were adopted. The application of the weight restrictions (expression 3) allowed for the incorporation of their relative importance within a flexible framework. By analyzing the weights assigned by each Industrial Engineering undergraduate program within the permitted limits, it was possible to identify further opportunities for performance improvement. The low or high weights assigned reveal the strengths and weaknesses of each program compared to others in the sample. Such insights would not be captured by INEP’s rigid CPC calculation.
The analysis focused on Higher Education Institutions (HEIs) within the federal public system in Brazil, using data from 2019, which was the most recent data available at the time of collection.
The BoD DEA approach offered additional insights that can be combined with the official INEP assessment methodology. This DEA-based approach enabled the generation of performance scores with optimized weighting systems capable of addressing concerns highlighted by Bittencourt et al. (2010), namely that HEIs' vocational strengths should be taken into account in the assessment. The results were explored within a benchmarking exercise amongst DMUs. Given that the BoD composite indicator is based on a DEA model, it naturally inherits its properties, including the ability to identify efficient units as benchmarks. In this context, the study takes advantage of this feature to identify high-performing Industrial Engineering programs (i.e., those with the highest performance scores and profiles similar to those of low-performing programs), thereby serving as relevant references for improvement.
Furthermore, two key insights emerge from this evaluation. First, identifying benchmark Industrial Engineering undergraduate programs within the sample provided valuable management guidelines of best practices for DMUs categorized as inefficient. Second, the exploration of weight restrictions specified in formulation (3) enabled exploring further opportunities for improving performance in the assessed programs.
In addition, this study identified significant correlations, particularly within the Student Perception dimension, which warrant further exploration to understand their influence on CPC results. Involving experts or regulators in an interactive modeling process to establish weight restrictions is also suggested, and future research should deepen the analysis of interrelationships among the CPC's KPIs. Overall, the BoD-based composite indicator, particularly when combined with WRs, enhances performance assessments. By retaining the core components of the CPC and adding flexibility in their evaluation, this approach offers a meaningful complement to INEP's method, providing actionable insights for both policymakers and HEIs.
Finally, this collaborative approach would integrate stakeholder insights and preferences, enhancing the robustness and inclusivity of the evaluation framework.
References
- ALMEIDA JPLD, ANJOS FHD, MOREIRA MF, BERMEJO PHDS, PRATA DN & RODRIGUES W. 2024. University efficiency evaluation using data envelopment analysis: future research agenda. Cogent Education, 12(1). Available at: https://doi.org/10.1080/2331186X.2024.2445964
- ANDRIOLA WB. 2004. Avaliação institucional na Universidade Federal do Ceará (UFC): Organização de sistema de dados e indicadores da qualidade institucional. Avaliação: Revista da Avaliação da Educação Superior, 9(4): 33-54.
- ANDRIOLA WB & ARAÚJO AC. 2018. Uso de indicadores para diagnóstico situacional de Instituições de Ensino Superior. Ensaio: Avaliação e Políticas Públicas em Educação, 26(100): 645-663.
- ANGULO-MEZA L, SOARES DE MELLO JC, GOMES JUNIOR SF & MORENO P. 2018. Evaluation of post-graduate programs using a network data envelopment analysis model. DYNA, 85(204): 83-90.
- AZEVEDO G, MACCARI E & ASGARY N. 2021. The use of adaptive project management practices and methodologies in the development of a professional doctoral program. Revista de Administração da UFSM, 14(1): 44-62.
- BARRA C & ZOTTI R. 2016. Measuring efficiency in higher education: An empirical study using a bootstrapped data envelopment analysis. International Advances in Economic Research, 22(1): 11-33.
- BARREYRO GB & ROTHEN JC. 2014. Percurso da avaliação da educação superior nos Governos Lula. Educação e Pesquisa, 40(1): 61-76.
- BEASLEY JE. 1990. Comparing university departments. Omega, 18(2): 171-183.
- BITTENCOURT HR, VIALI L, RODRIGUES AC & CASARTELLI AO. 2010. Mudanças nos pesos do CPC e seu impacto nos resultados de avaliação em universidades federais e privadas. Avaliação: Revista da Avaliação da Educação Superior (Campinas), 15(3): 147-166.
- BORGES RS & CARNIELLI BL. 2006. Confrontando avaliações: os resultados do Exame Nacional de Cursos e os da Análise Envoltória de Dados. Revista da Rede de Avaliação Institucional da Educação Superior, 11(3): 171-185.
- BRASIL. 2004. Lei n. 10.861, de 14 de abril de 2004. Institui o Sistema Nacional de Avaliação da Educação Superior - Sinaes e dá outras providências.
- BRASIL. 2008. Portaria Normativa n. 4, de 5 de agosto de 2008. Regulamenta a aplicação do conceito preliminar de cursos superiores - CPC, para fins dos processos de renovação de reconhecimento respectivos, no âmbito do ciclo avaliativo do Sinaes.
- BRASIL. 2019. Nota técnica n. 45/2019. Metodologia de cálculo do Indicador de Diferença entre os Desempenhos Observado e Esperado (IDD) referente ao ano de 2018.
- CHANG TY, CHUNG PH & HSU SS. 2012. Two-stage performance model for evaluating the managerial efficiency of higher education: Application by the Taiwanese tourism and leisure department. Journal of Hospitality, Leisure, Sport & Tourism Education, 11(2): 168-177.
- CHARNES A, COOPER W & RHODES E. 1978. Measuring the efficiency of decision making units. European Journal of Operational Research, 2(6): 429-444.
- CHEN SP & CHANG CW. 2021. Measuring the efficiency of university departments: an empirical study using data envelopment analysis and cluster analysis. Scientometrics, 126(6): 5263-5284. Available at: https://link.springer.com/10.1007/s11192-021-03982-3
- CHERCHYE L, MOESEN W, ROGGE N & PUYENBROECK TV. 2007. An introduction to 'benefit of the doubt' composite indicators. Social Indicators Research, 82(1): 111-145.
- COOK WD & KRESS M. 1990. A data envelopment model for aggregating preference rankings. Management Science, 36(11): 1302-1310.
- COOPER WW, SEIFORD LM & TONE K. 2006. Introduction to Data Envelopment Analysis and Its Uses. 1 ed. New York: Springer. 354 pp.
- DAÚ G, SCAVARDA A, ALVES M, SANTA R & FERRER M. 2023. An analysis of the Brazilian higher educational opportunity and challenge processes to achieve the 2030 agenda for the sustainable development. International Journal of Sustainability in Higher Education, 24(6): 1197-1219.
- DI LEO S, AVENALI A, DARAIO C, MOREA D, D'UGGENTO AM, LEPORI B & BONACCORSI A. 2024. Climbing university rankings under resources constraints: a combined approach integrating DEA and directed Louvain community detection. Annals of Operations Research. Available at: https://doi.org/10.1007/s10479-024-06219-7
- DING T, YANG J, WU H, WEN Y, TAN C & LIANG L. 2021. Research performance evaluation of Chinese university: A non-homogeneous network DEA approach. Journal of Management Science and Engineering, 6(4): 467-481. Available at: https://linkinghub.elsevier.com/retrieve/pii/S2096232020300482
- DYSON RG & THANASSOULIS E. 1988. Reducing Weight Flexibility in Data Envelopment Analysis. Journal of the Operational Research Society, 39(6): 563-576. Available at: http://link.springer.com/10.1057/jors.1988.96
- ESTELLITA LINS M, MOREIRA DA SILVA A & LOVELL C. 2007. Avoiding infeasibility in DEA models with weight restrictions. European Journal of Operational Research, 181(2): 956-966. Available at: https://linkinghub.elsevier.com/retrieve/pii/S0377221706005017
- FELICETTI V & CABRERA A. 2022. Students' experiences with graduate education in Brazil: a confirmatory factor analysis approach. Revista de Investigación Educativa, 40(2): 319-339.
- FILHO P, SOUSA E, CARMO C & GONÇALVES T. 2023. Evaluation of efficiency of Brazilian federal universities: an approach through data envelopment analysis. Avaliação: Revista da Avaliação da Educação Superior (Campinas), 28.
- FISHER S, CHI R, FISHER D & KIANG M. 2017. Determining the value of undergraduate business programs from market vs academic perspectives. International Journal of Educational Management, 31(2): 236-251.
- GHASEMI N, NAJAFI E, LOTFI FH & SOBHANI FM. 2020. Assessing the performance of organizations with the hierarchical structure using data envelopment analysis: An efficiency analysis of Farhangian University.
- GONTIJO T, UMŸLDER C & RODRIGUES A. 2018. Incorporating managed preferences in the evaluation of public organizations efficiency: a DEA approach. Independent Journal of Management & Production, 9(4): 1108-1126.
- GRIBOSKI CM & FERNANDES IR. 2016. Avaliação da Educação Superior: como avançar sem desqualificar. Observatório Universitário, 1: 1-31.
- GUIMARÃES F, MENDES A, RODRIGUES L, PAIVA R & FINARDI K. 2019. Internationalization at home, COIL and intercomprehension. SFU Educational Review, 12(3): 90-109.
- HAMMES D, FLACH L & MATTOS L. 2020. The efficiency of public expenditure on higher education: a study with Brazilian federal universities. Ensaio: Avaliação e Políticas Públicas em Educação, 28(109): 1076-1097.
- IKUTA CYS. 2016. Sobre o Conceito Preliminar de Curso: concepção, aplicação e mudanças metodológicas. Estudos em Avaliação Educacional, 27(66): 938-969.
- JOHNES J & YU L. 2008. Measuring the research performance of Chinese higher education institutions using data envelopment analysis. China Economic Review, 19(4): 679-696. Available at: https://www.sciencedirect.com/science/article/pii/S1043951X08000679
- KOUNETAS K, ANASTASIOU A, MITROPOULOS P & MITROPOULOS I. 2011. Departmental efficiency differences within a Greek university: An application of a DEA and Tobit analysis. International Transactions in Operational Research, 18(5): 545-559. Available at: https://onlinelibrary.wiley.com/doi/10.1111/j.1475-3995.2011.00813.x
- LACERDA LLV & FERRI C. 2017. Conceito Preliminar de Curso: conceito único para uma realidade educacional múltipla. Estudos em Avaliação Educacional, 28(69): 748-772.
- NARDO M, SAISANA M, SALTELLI A, TARANTOLA S, HOFFMAN A & GIOVANNINI E. 2008. Handbook on Constructing Composite Indicators. 1 ed. Paris: OECD Publications. 162 pp.
- NAZARKO J & ŠAPARAUSKAS J. 2014. Application of DEA method in efficiency evaluation of public higher education institutions. Technological and Economic Development of Economy, 20(1): 25-44.
- POPOVIĆ M, SAVIĆ G, KUZMANOVIĆ M & MARTIĆ M. 2020. Using data envelopment analysis and multi-criteria decision-making methods to evaluate teacher performance in higher education. Symmetry, 12(4): 563.
- RODRIGUES AC & GONTIJO TS. 2019. Incorporando julgamentos de especialistas em educação na avaliação da eficiência de cursos de graduação: uma abordagem por data envelopment analysis. Revista Gestão & Tecnologia, 19(1): 113-139.
- SARRICO CS & DYSON RG. 2004. Restricting virtual weights in data envelopment analysis. European Journal of Operational Research, 159: 17-34.
- SCHWARTZMAN S. 2008. O Conceito Preliminar e as boas práticas de avaliação do ensino superior. Estudos: Revista da Associação Brasileira de Mantenedoras de Ensino Superior, 26(38): 9-32.
- SOLIMAN M, SILUK JCM, NEUENFELDT JR AL & CASADO FL. 2017. Avaliação da eficiência técnica dos cursos de Administração no Brasil. Revista de Administração da UFSM, 10(2): 188-203.
- TAVARES R, ANGULO-MEZA L, RANGEL L & SANT'ANNA A. 2022. Interdisciplinary graduate programs: application of the MACBETH multicriteria method for assessing their performances. Annals of Operations Research, 316(2): 1383-1399. Available at: https://doi.org/10.1007/s10479-021-04108-x
- TAVARES RS, ANGULO-MEZA L & SANT'ANNA AP. 2021. A proposed multistage evaluation approach for Higher Education Institutions based on network Data Envelopment Analysis: A Brazilian experience. Evaluation and Program Planning, 89: 101984. Available at: https://doi.org/10.1016/j.evalprogplan.2021.101984
- THOMPSON RG, LANGEMEIER LN, LEE CT, LEE E & THRALL RM. 1990. The role of multiplier bounds in efficiency analysis with application to Kansas farming. Journal of Econometrics, 46(1-2): 93-108. Available at: http://linkinghub.elsevier.com/retrieve/pii/030440769090049Y
- UN. 2015. Transforming our world: the 2030 Agenda for Sustainable Development. Available at: https://www.un.org/ga/search/view_doc.asp?symbol=A/RES/70/1&Lang=E
- VASCONCELOS M, HORA H & ERTHAL M. 2016. Produção científica dos programas de pós-graduação: avaliação da eficiência da área Engenharias III. Revista Produção e Desenvolvimento, 2(2): 11-25.
- VERHINE RE & DANTAS LM. 2009. A avaliação do desempenho de alunos de educação superior: uma análise a partir da experiência do ENADE. In: DAZZANI MV & LORDÊLO JAC (Eds.), Avaliação Educacional: desatando e reatando nós. pp. 173-199. Salvador: SciELO Books.
- VILLANO R & TRAN C. 2019. Survey on technical efficiency in higher education: a meta-fractional regression analysis. Pacific Economic Review, 26(1): 110-135.
- WONG YHB & BEASLEY JE. 1990. Restricting Weight Flexibility in Data Envelopment Analysis. The Journal of the Operational Research Society, 41(9): 829. Available at: http://www.jstor.org/stable/2583498?origin=crossref
- XIONG X, YANG GL, ZHOU DQ & WANG ZL. 2022. How to allocate multi-period research resources? Centralized resource allocation for public universities in China using a parallel DEA-based approach. Socio-Economic Planning Sciences, 82: 101317. Available at: https://linkinghub.elsevier.com/retrieve/pii/S0038012122001021
- YANG GL, FUKUYAMA H & SONG YY. 2018. Measuring the inefficiency of Chinese research universities based on a two-stage network DEA model. Journal of Informetrics, 12(1): 10-30. Available at: https://linkinghub.elsevier.com/retrieve/pii/S1751157717302766
- ZANELLA A, CAMANHO AS & DIAS TG. 2015. Undesirable outputs and weighting schemes in composite indicators based on data envelopment analysis. European Journal of Operational Research, 245(2): 517-530.
- ZANELLA A & OLIVEIRA R. 2021. Avaliação de desempenho na educação superior: uma abordagem utilizando a Análise Envoltória de Dados. Ciência e Natura, 43: e81. Available at: https://doi.org/10.5902/2179460X66024
- ZHANG X & SHI W. 2019. Research about the university teaching performance evaluation under the data envelopment method. Cognitive Systems Research, 56: 108-115. Available at: https://linkinghub.elsevier.com/retrieve/pii/S1389041718308957
Funding

The authors gratefully acknowledge the funding provided by the National Council for Scientific and Technological Development (Conselho Nacional de Desenvolvimento Científico e Tecnológico, CNPq). Grant Number: 373182/2024-4.
Data Availability
The data analyzed in this study are publicly available in the INEP Open Data Repository under the title “Indicadores de Qualidade da Educação Superior.” The data can be accessed at https://www.gov.br/inep/pt-br/acessoa-informacao/dados-abertos/indicadores-educacionais/indicadores-de-qualidade-da-educacao-superior.
APPENDIX A - CI VALUES AND WEIGHTS SELECTED BY THE PROGRAMS IN EACH KPI
History

Received: 19 Feb 2025. Accepted: 29 May 2025. Date of issue: 2025. Publication in this collection: 08 Aug 2025.