
Eliciting and defining requirements based on metaevaluation: the case of the CRAS 2008 Census

Abstracts

The Brazilian Ministry of Social Development and Fight against Hunger (MDS) regularly promotes the evaluation of its social programs, such as those developed in the Reference Centers for Social Assistance (CRAS). These evaluations make use of a web system that supports the collection and processing of information, as well as the dissemination of its results to local, regional, and central government officials, through the so-called CRAS Census. A metaevaluation of the CRAS 2008 Census was carried out based on criteria specified by the Joint Committee (1994), from which we elicited requirements that enabled improvements to the web system. This article reports the new requirements elicited from the metaevaluation of the CRAS 2008 Census, conducted from 2009 to 2010. The approach of metaevaluation as an alternative source of requirements elicitation took into consideration results from evaluations of social programs in order to identify system problems without the usual need for intense interaction with users. This approach revealed opportunities for improvement in the evaluation process that led to the elicitation of requirements for the computerized system. Some of the elicited features were incorporated into the 2010 Census and others may be incorporated into future censuses.

Requirements Elicitation; Evaluation; Metaevaluation




1. Introduction

Requirements Engineering (RE), in the context of Software Engineering, can be seen as an important activity that permeates the communication and modeling activities in order to build a bridge between the need for software and its design and implementation (Pressman, 2010, p. 120).

According to the IEEE (1990), the set of requirements of a system includes: (i) conditions or potentialities required by a user to solve a problem or achieve an objective; (ii) conditions or potentialities a system, component, or product must exhibit to be accepted; and (iii) the documentation related to these two items.

Pressman (2010, p. 121) categorizes the RE tasks as: conception, elicitation, elaboration, negotiation, specification (modeling), management, and validation of the requirements of a software package. Additionally, the author warns that these activities may overlap along a project schedule. This work describes the elicitation of non-functional requirements, functional requirements, and business rules in the context of the information systems involved in the evaluation of the CRAS 2008 Census[1].

[1] The CRAS Census is carried out annually by the MDS and refers to the data collection and analysis in the Centers of Reference in Social Assistance. This census has been performed regularly since 2007.

In the case studied, the starting point for eliciting requirements was not the traditional method of interviewing clients to discover their expectations, but rather the results of a critical analysis based on widely accepted evaluation standards from the metaevaluation realm (Coosky; Caracelli, 2009).

2. The case studied

The Centers of Reference in Social Assistance (CRAS) are units, spread across the 5,560 Brazilian municipalities, in which social services are provided. These centers are partially supported by the Federal Government, via the Brazilian Ministry of Social Development and Fight against Hunger (MDS), while their management is the municipalities' responsibility. This decentralized structure led to an expressive participation of the municipalities; on the other hand, it hindered the Federal Government's follow-up of policy implementation in the municipalities and the quality management of this process.

For this reason, in 2007, through the Secretariat for Evaluation and Information Management (SAGI) and the National Secretariat for Social Assistance (SNAS), the MDS carried out a census to quantify, identify, and collect information from the CRAS. Along with this effort, SAGI designed, developed, and has since maintained the information systems for this purpose, while SNAS handled the mobilization, the system definitions, and the contact with the municipalities. For this census, SAGI set up an Internet site on which each municipal manager enters information about the local CRAS.

Also in 2007, data were collected about location, human resources, physical infrastructure, and the capacity to articulate with other public and private agencies. The first results led the MDS to propose indicators and thresholds to be satisfied by the CRAS in terms of services provided, physical infrastructure, and quality, to be measured in future censuses. In this way, a traditional census took on an evaluative character by subsidizing the delimitation of criteria useful to assess the services offered by the CRAS.

In the subsequent years, beyond the annual CRAS census, censuses were carried out in the Specialized Reference Centers for Social Assistance (CREAS), in the municipal and state boards of social assistance, and in social assistance management. However, the evaluation process with indicators and evaluation criteria was first deployed in the CRAS Census conducted in 2007 (Brasil, 2008a). The results of this evaluation were published in 2010.

The evaluation process of the CRAS Census was the first, in the two secretariats involved, to use information systems, indicators, training, and mobilization to enhance decision making at the State and Federal management levels based on data collected from the Brazilian municipalities.

Considering that any evaluation process can be improved, for example in terms of quality or precision (Posavac; Carey, 2003), a metaevaluation was carried out for this purpose, based on the Joint Committee's (1994) evaluation standards. According to Hedler (2007, p. 59), metaevaluation is "a research method for re-evaluating one or more steps of an evaluation study already done; the previous evaluation is compared with quality and validity standards accepted by the scientific community, and a new evaluation of the evaluation study in analysis is issued in the end". The standards were used as references for the metaevaluation, as done by Hedler (2007), and applied to the CRAS Census process performed in 2007.

The standards of the Joint Committee (1994) were applied to subsidize improvements in the information systems used in the census. In short, a requirements elicitation was performed based on the results of the metaevaluation.

3. Theoretical references

3.1 Requirements engineering

Software Engineering (SE), as a research field, provides methodological support for software development, making available techniques, methods, and standards that can be applied to the complete lifecycle of a software package.

Usually, the first phase of the software development process is supported by RE, a subdivision of SE responsible for defining the objectives and limits of the software (Pressman, 2006, p. 116; Paula Filho, 2009, p. 165; Wiegers, 2003, p. 380). SE also offers standards such as the Capability Maturity Model Integration (CMMI), adopted both by practitioners and academia, which has process areas targeted at requirements: requirements development (SEI, 2010, p. 325) and requirements management (SEI, 2010, p. 341).

Wiegers (2003, p. 47) divides the RE activities related to requirements development into: (i) elicitation, (ii) analysis, (iii) specification, and (iv) validation. The requirements elicitation phase focuses on discovering the requirements and on the communication between developers and clients. If this communication fails, the resulting software tends to be unfit to satisfy the client's needs and expectations. This is the most critical phase (Wiegers, 2003, p. 115) and usually applies interviews, information collection, and group discussion as its main methodological approaches.

Pressman (2006, p. 118) points out the main problems in the requirements development process: (i) problems with the project scope, (ii) problems of understanding, and (iii) volatility, that is, changes in the requirements during the project. Saiedian and Dale (1999) add others: (i) poor communication, (ii) resistance to change from the people involved, (iii) problems of articulation among the people involved, and (iv) different perspectives among the target clients. Some software organizations offer alternatives to mitigate these problems by creating standards to be used in RE: the CMMI from the Software Engineering Institute (SEI) and the General Guide for Improving the Brazilian Software Process (MPS.BR) from Softex (2009) are two examples.

Among the possible inputs for requirements discovery (SEI, 2010, p. 329-330) are: (i) questionnaires, interviews, and scenarios; (ii) prototypes and models; (iii) market questionnaires; (iv) brainstorming; (v) use cases; (vi) business case analyses; (vii) software tests; (viii) technology demonstrations; (ix) business policies; (x) legacy products; (xi) regulatory statutes; and (xii) standards. In this work, the requirements elicitation was performed on the basis of program evaluation standards.

Based on the SE concepts and on the RE processes recommended by the IEEE (1990), and considering the software quality standard ISO 9126[2] and the concept of Business Process Management (BPM)[3], Castro and Guimarães (2010) proposed eXtreme Requirements (XR) for the production of requirements according to the following phases: business analysis, solution proposal, requirements definition, prototyping, tests, and requirements management.

[2] The ISO/IEC 9126 standard (International Standard Organization, 2001) focuses on the quality of the software product. It establishes a quality model based on the following components: (i) the development process, (ii) quality of the final product, and (iii) software product quality in use.

[3] Business Process Management is a management approach focused on identifying, designing, implementing, documenting, measuring, monitoring, controlling, and improving business processes, automated or not, to achieve consistent results aligned with the strategic goals of an organization (Association of Business Process Management Professionals, 2009).

XR classifies requirements into non-functional, functional, complementary, and business rules. Functional requirements are the functionalities or activities the system must perform. Complementary requirements are characteristics or properties derived from the detailing of a functional requirement. Business rules come from the organizational context, such as regulations, conditions, or standards required to perform each functionality. Non-functional requirements are characteristics related to software quality.
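To make the XR taxonomy concrete, the four categories can be represented as a small data model. The sketch below is illustrative only: the identifiers, example descriptions, and field names are assumptions, not part of the XR method or of the MDS systems.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Category(Enum):
    """The four XR requirement categories described above."""
    FUNCTIONAL = "functional"          # activities the system must perform
    COMPLEMENTARY = "complementary"    # details derived from a functional requirement
    BUSINESS_RULE = "business_rule"    # organizational conditions and regulations
    NON_FUNCTIONAL = "non_functional"  # software quality characteristics

@dataclass
class Requirement:
    """A single elicited requirement; identifiers are illustrative."""
    rid: str
    category: Category
    description: str
    derived_from: Optional[str] = None  # complementary requirements point back to a functional one

reqs = [
    Requirement("RF01", Category.FUNCTIONAL, "Record comments from State managers"),
    Requirement("RC01", Category.COMPLEMENTARY,
                "Each comment must reference a specific indicator", derived_from="RF01"),
    Requirement("RN01", Category.BUSINESS_RULE, "Only accredited users may comment"),
    Requirement("RNF01", Category.NON_FUNCTIONAL, "The module must run in a web browser"),
]

functional = [r.rid for r in reqs if r.category is Category.FUNCTIONAL]
print(functional)  # → ['RF01']
```

Grouping requirements this way mirrors the per-category tables that an RDD presents on paper.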

3.2 Evaluation

For Worthen, Sanders and Fitzpatrick (2007, p. 35), an evaluation refers to the identification, clarification, and application of defensible criteria to determine the value, merit, usefulness, effectiveness, or importance of the object evaluated. Stufflebeam and Shinkfield (2007, p. 16) define evaluation as a systematic process to delimit, obtain, report, describe, and judge information on the merit, value, integrity, feasibility, security, significance, or equity of an object. Weiss (1997, p. 4) argues that evaluation is an approach to attest, systematically, to the operation and results of a policy or program compared to a set of standards, as a way to contribute to the improvement of that policy or program.

It can be noticed from these definitions that evaluations should judge or clarify an issue on the basis of standards or criteria in order to qualify a social program, a person, an organization, or a process. The first definition is broader, mainly because it is not limited to social programs, and was the one adopted in this research.

According to Worthen, Sanders and Fitzpatrick (2007, p. 44), the results of an evaluation can bring improvements to the object, program, or policy evaluated. Conversely, a poorly constructed and implemented evaluation is a poor guide for management decisions. An evaluation may fail because of bad methodological planning or even a lack of ethics on the part of the people or organizations involved in the evaluation process.

The need to attest to the quality of evaluations and to improve the construction of new ones led several organizations and authors to propose standards. These standards can be used as metaevaluation mechanisms.

According to Stufflebeam (2001), "metaevaluation is the process of delineating, obtaining, and applying descriptive information and judgmental information about an evaluation's utility, feasibility, propriety, and accuracy and its systematic nature, competence, integrity/honesty, respectfulness, and social responsibility to guide the evaluation and publicly report its strengths and weaknesses". Among the evaluation standards available are the Program Evaluation Standards from the Joint Committee (1994), the Guiding Principles for Evaluators from the American Evaluation Association (AEA, 2004), and the Government Auditing Standards from the U.S. Government Accountability Office (USGA, 2007).

The Joint Committee was initially formed by a group of authors in the evaluation field who worked together from 1970 to 1990, discussing sets of standards to guide and evaluate the construction of evaluations. It proposes 30 standards, organized into four groups, created to subsidize the evaluation of educational programs and projects, aiming to stimulate and improve the interchange of ideas among professionals involved in evaluations. The Joint Committee (1994, p. 4), however, encourages the use of standards from other evaluation methods which, according to Stufflebeam and Shinkfield (2007, p. 92), are also applicable to metaevaluation.

The Guiding Principles for Evaluators (AEA, 2004) comprise five principles: (i) systematic inquiry, (ii) competence, (iii) integrity and honesty, (iv) respect for people, and (v) responsibilities for general and public welfare. Stufflebeam and Shinkfield (2007, p. 110), however, argue that these principles are already covered by the Joint Committee standards.

The guiding principles of the Government Auditing Standards (USGA, 2007) were proposed to assure high-quality auditing, which is essential to accountability and transparency in the investment of public resources. For this, auditing must be objective, based on facts, impartial, capable of measuring program performance, and able to provide information relevant to decision making. According to Stufflebeam and Shinkfield (2007), these standards present similarities with the Joint Committee proposal and cover standards for independent auditing, professional judgment, competence, quality control, fieldwork, reports, and auditing performance.

The standards from the Joint Committee, the USGA, and the AEA, as suggested by Stufflebeam and Shinkfield (2007, p. 109), differ in detail and guidance but are non-contradictory and complementary. For this work, therefore, the Joint Committee (1994) criteria were adopted.

4. Methodological issues

This work departs from the assumption that an evaluation involving Information Systems (IS) can be metaevaluated in order to generate input for the requirements elicitation of software aimed at improving future evaluations. Figure 1 illustrates this assumption as a continuous-improvement spiral that starts with the computerized evaluation.

Figure 1
- Improvement cycle of evaluation systems by means of metaevaluation based on information systems

4.1 Specifying a method to study the CRAS 2008 Census case

The evaluation of the CRAS 2008 Census was taken as the starting point to identify new requirements in the CRAS evaluation process. The requirements elicitation was carried out on the basis of results from the metaevaluation performed using the Joint Committee (1994) standards. This metaevaluation involved an investigation based on interviews, documents, and process regulations.

The standards from the Joint Committee were chosen due to their generality, comprehensiveness, and wide acceptance by the scientific community (Coosky; Caracelli, 2009). Moreover, these standards indicate recommendations and common errors in evaluations, enabling their use as a guide for identifying weaknesses in the evaluation process whose solution may involve the use of Information and Communication Technologies (ICT). Table 1 lists the documents, systems, and interviews examined in the metaevaluation of the CRAS.

Table 1
- Recommendations and main errors according to the instructions from the Joint Committee (1994)JOINT COMMITTEE. The Program Evaluation Standards. Thousand Oaks, EUA: SAGE, 1994.

The Joint Committee (1994) provides standards to verify the adequacy, usefulness, and precision of an evaluation. Three standards were chosen for the analysis: (i) Complete and Fair Evaluation, (ii) Systematic Information, and (iii) Conclusions Justified. These standards were chosen by a committee of three senior ICT professionals, who agreed that they are related to the identification of new software requirements.

As detailed in Table 1, the main inputs to the metaevaluation were: (i) the evaluation report (Brasil, 2010), (ii) a semi-structured interview with a business expert, (iii) the software Manager CRAS 2008 (Brasil, 2008b), and (iv) a semi-structured interview with the IT expert from the MDS. Both interviewees were active in the evaluation process. The business expert worked on the CRAS 2008 evaluation process from the formulation of questions to the generation of the final evaluation report. The IT expert participated in the software development and followed the information processes of the evaluation. According to Coosky and Caracelli (2009), these inputs are frequently used in metaevaluation processes.

The requirements definition was done by means of the Requirements Definition Document (RDD) proposed by Castro and Guimarães as part of the eXtreme Requirements method. This artifact is used to identify the software requirements, the business rules, the traceability matrix, and the prioritization of requirements based on the business processes evaluated. The document includes: (i) functional requirements, (ii) complementary requirements, (iii) non-functional requirements, (iv) business rules, (v) process flows, (vi) the users list, and (vii) a risk analysis. The traceability of requirements was not considered, since the focus did not include requirements management.
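The RDD sections enumerated above can be treated as a simple completeness checklist while the document is being assembled. The sketch below assumes a draft RDD represented as a dictionary keyed by section name; the section names follow items (i)-(vii), and everything else is an illustrative assumption.

```python
# RDD sections, following items (i)-(vii) above.
RDD_SECTIONS = [
    "functional requirements",
    "complementary requirements",
    "non-functional requirements",
    "business rules",
    "process flow",
    "users list",
    "risk analysis",
]

def missing_sections(draft):
    """Return the RDD sections not yet filled in a draft document."""
    return [s for s in RDD_SECTIONS if not draft.get(s)]

# A hypothetical draft with only two sections started:
draft = {"functional requirements": ["RF01"],
         "users list": ["MDS", "States", "Municipalities"]}
print(missing_sections(draft))  # lists the five sections still to be written
```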

5. Results of CRAS 2008 metaevaluation and requirements elicitation

According to the standard "Complete and Fair Evaluation", an evaluation must point out the strengths and weaknesses of the evaluated program, allowing successful aspects to be emphasized in addition to the correction of existing errors. A summary of the data collected with respect to this standard is shown in Table 2.

Table 2
- Analysis of the standard “Complete and Fair Evaluation”

The next standard, "Systematic Information", advocates that the information collected, processed, and included in the reports must be revised and corrected whenever an error is found. The results of applying this standard are shown in Table 3. The main IT improvements identified under this standard concern communication among the people involved in the evaluations and user authentication.

Table 3
- Analysis of the standard “Systematic Information”

Regarding the standard "Conclusions Justified", the Joint Committee (1994) argues that the conclusions of an evaluation must be explicitly justified to enable their analysis by the people involved in the evaluation or program. Table 4 shows a summary of the analysis based on this standard. The main IT improvement that emerged is to enable the people involved to provide feedback related to their experience in the evaluation.

Table 4
- Analysis of the standard “Conclusions Justified”

Next, a detailed discussion of the results for the standard "Complete and Fair Evaluation" is presented, in which items 1, 2, and 3 refer to the recommendations and the others (4 to 9) relate to the main errors:

  1. All indicators that point out the strengths and weaknesses of each CRAS were found in the Manager system. The system contains no information on the methodology for generating these indicators; however, this information is included in the evaluation report (Brasil, 2010, p. 137-173).

  2. There are no records of comments from the people involved, either in the Manager system or in the final report. Asked about this, the business expert reported that meetings took place with representatives from the States, the municipalities, and the Federal Government to criticize the evaluation process, but no records were kept. An improvement could be the adoption of a transactional information system, integrated with the Manager, enabling State and municipal managers to comment on the data and indicators to be published.

  3. Nothing was found with respect to this recommendation, either in the report or in the Manager system. There are financial and accounting information systems that could be used in the evaluation. It is also possible to adopt systems for project management.

  4. No inconsistency was detected. However, the analysis of the data requires further investigation, which could be accomplished, for example, by a software agent[4] that compares the results with the original data. States could also check the results via the Manager system, supported by a system that records data and information about the manipulation of any report.

[4] According to Russell and Norvig (2010, p. 34), an agent is an entity that perceives its environment by means of sensors and acts upon that environment by means of actuators, processing information and knowledge.

  5. The business expert stated that neither promotion nor protection of the MDS's interests occurred.

  6. There is no information about other possible approaches for analyzing the data and indicators, either in the Manager system or in the final report.

  7. According to the business expert, the indicators were generated in a way that prevented any great loss to the CRAS and the municipalities.

  8. The methods for defining strengths and weaknesses are available in the evaluation report (Brasil, 2010, p. 137-173).

  9. Both the Manager system and the evaluation report present the strengths and weaknesses of each CRAS.
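Item 4 above suggests a software agent that recomputes the published indicators from the original census answers and flags divergences. The sketch below is a minimal, hypothetical version of such a check: the data layout, the indicator, and the recomputation rule are all assumptions for illustration, not the MDS data model.

```python
def check_indicator(published, raw_answers, recompute):
    """Compare published indicator values against values recomputed from the
    raw census answers; return the ids of the units whose values diverge."""
    return [cras_id for cras_id, value in published.items()
            if recompute(raw_answers[cras_id]) != value]

# Toy data: an indicator recomputed as the share of "yes" answers per CRAS.
raw = {"cras-001": ["yes", "yes", "no", "yes"],
       "cras-002": ["no", "no", "yes", "no"]}
published = {"cras-001": 0.75, "cras-002": 0.5}  # second value deliberately wrong

share_yes = lambda answers: answers.count("yes") / len(answers)
print(check_indicator(published, raw, share_yes))  # → ['cras-002']
```

Run periodically against the Manager database, such an agent would give the States an automated way to verify results before publication.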

A summary of the strengths and weaknesses found, and the respective recommendations for the IT team, is shown in Table 5. This table details the requirements elicitation based on the metaevaluation.

Table 5
- Summary of the results from the requirements elicitation by means of metaevaluation of the CRAS 2008 Census

6. Definition of software requirements

Based on the recommendations raised during the metaevaluation of the CRAS 2008 Census, a set of desirable requirements was defined for the improvement of the ICT solutions available in the context of the CRAS evaluation. To this end, the construction of requirements for recommendations 1, 3, 4.2, 5, 6, 7, 9, and 10, as shown in Table 5, was chosen. These recommendations refer to the registration of information and comments from the States and municipalities regarding the collected data or the conclusions of the evaluation. Thus, the construction of a system that provides a formal channel of communication between the federal, State, and municipal levels, and that integrates with the Manager CRAS System, is one way of implementing the recommendations suggested by the metaevaluation. Recommendations 2, 4.1, and 8 were not considered because they are not directly related to the design and development of information systems.

6.1 The activity flow of the CRAS 2008 Census

The activity flow of the CRAS 2008 Census evaluation is shown in Figure 2. In this figure, each column represents one of the three actors involved in the process (MDS, States, and municipalities) and the activities under its responsibility. The process begins with the MDS technical and business team making the online questionnaire available for completion by the CRAS managers. Completed questionnaires are analyzed by the MDS technical and business team and disseminated to the States and other officials through the Manager CRAS System. However, feedback, critiques, and supplementary information from the States can only be provided informally and are not controlled. Therefore, the MDS is solely responsible for the analysis and interpretation of the data and for the generation of the evaluation findings. In other words, there is no formal participation of the States and municipalities in the review process.

Figure 2
: Activity flow from the CRAS 2008 Census
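The 2008 flow is strictly linear. As a rough illustration (all names are hypothetical, not taken from the actual system), it can be sketched as a small state machine in which a questionnaire moves from publication to dissemination with no formal feedback transition available to States or municipalities:

```python
from enum import Enum, auto

class QuestionnaireState(Enum):
    AVAILABLE = auto()      # published online by the MDS team
    COMPLETED = auto()      # filled in by the CRAS manager
    ANALYZED = auto()       # reviewed by the MDS technical and business team
    DISSEMINATED = auto()   # results published via the "Manager CRAS System"

# Allowed transitions in the 2008 flow: one-way, MDS-driven,
# with no formal feedback step from States or municipalities.
TRANSITIONS = {
    QuestionnaireState.AVAILABLE: {QuestionnaireState.COMPLETED},
    QuestionnaireState.COMPLETED: {QuestionnaireState.ANALYZED},
    QuestionnaireState.ANALYZED: {QuestionnaireState.DISSEMINATED},
    QuestionnaireState.DISSEMINATED: set(),
}

def can_transition(current, nxt):
    """Return True if the 2008 flow allows moving from `current` to `nxt`."""
    return nxt in TRANSITIONS[current]
```

The absence of any transition back from `DISSEMINATED` mirrors the problem the metaevaluation identified: once results are published, there is no formal channel for the other government levels to respond.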

In the suggested process (Figure 3), a communication module among the MDS, the States, and the municipalities (MODCCRAS) was added to the CRAS evaluation. This module allows the formal recording of comments and questions and the verification of data by all those involved in the evaluation, who can thus express opinions about the evaluation process and the information generated. These opinions can be considered during the analysis and support decisions on new treatments of the information. Among the evaluation results are the problems found in the implementation of policies related to the CRAS, which are potentially useful for reformulating these policies.

Figure 3
: Activity flow proposed
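A minimal sketch of what a MODCCRAS record could look like is given below. All class and field names are hypothetical illustrations of the requirement described in the text (formal registration of comments, questions, and data-verification requests by the three government levels), not the actual system's data model:

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum

class Level(Enum):
    FEDERAL = "MDS"
    STATE = "state"
    MUNICIPALITY = "municipality"

class EntryKind(Enum):
    COMMENT = "comment"
    QUESTION = "question"
    DATA_VERIFICATION = "data_verification"

@dataclass
class ModccrasEntry:
    """A formally recorded exchange about a census item."""
    author_level: Level
    kind: EntryKind
    census_item: str          # e.g. the questionnaire field under discussion
    text: str
    created_at: datetime = field(default_factory=datetime.now)
    replies: list = field(default_factory=list)

    def reply(self, author_level, text):
        """Record a reply, keeping the whole exchange auditable."""
        self.replies.append((author_level, text, datetime.now()))
```

Keeping replies attached to the original entry is one way to make the federal–State–municipal dialogue traceable, which is what distinguishes this proposal from the informal, uncontrolled feedback of the 2008 process.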

6.2 Requirements

For the implementation of the process depicted in Figure 3, we propose the functional requirements, complementary requirements, business rules, and non-functional requirements shown in Tables 6, 7, 8, and 9, respectively.

Table 6
- Functional requirements

Table 7
: Complementary requirements

Table 8
: Business rules proposed

Table 9
: Non-functional requirements

The functional requirements were divided into the components that comprise the MODCCRAS module: (i) comments and criticisms, (ii) error control, (iii) survey, and (iv) authentication. The purpose of these four components is to handle authentication and the registration of feedback, critiques, and errors.
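One simple way to organize such a module is to route each kind of record to the component responsible for it. The mapping below is an illustrative sketch: the component names follow the four components listed above, but the record kinds are invented for the example and do not come from Tables 6–9:

```python
# Hypothetical mapping from MODCCRAS components to the kinds of
# records each one manages (record kinds are illustrative only).
COMPONENTS = {
    "comments_and_criticisms": {"comment", "criticism"},
    "error_control": {"data_error", "correction_request"},
    "survey": {"survey_response"},
    "authentication": {"login", "logout", "password_reset"},
}

def component_for(record_kind):
    """Return the MODCCRAS component responsible for a record kind."""
    for component, kinds in COMPONENTS.items():
        if record_kind in kinds:
            return component
    raise ValueError(f"no component handles {record_kind!r}")
```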

The non-functional requirements of the system were not obtained directly from the metaevaluation process; they were discussed with the IT expert who participated in the process and were included because they are part of the RDD artifact of the eXtreme Requirements method.

6.3 Profiles, permissions and risk analysis

There are three user profiles accredited in the system: MDS employees, State users, and municipal users. All profiles can access any component, subject to pre-defined constraints expressed in the functional requirements and business rules.
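This profile model can be sketched as a simple permission table. The three profiles come from the text; the specific actions are invented for illustration, since the actual constraints are defined in the functional requirements and business rules of Tables 6 and 8:

```python
# Hypothetical permission table: every profile may read and comment,
# but only MDS staff may close discussions and publish results.
PERMISSIONS = {
    "mds": {"read", "comment", "close_discussion", "publish_results"},
    "state": {"read", "comment", "request_correction"},
    "municipality": {"read", "comment", "request_correction"},
}

def is_allowed(profile, action):
    """Check whether a user profile may perform an action."""
    return action in PERMISSIONS.get(profile, set())
```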

Risk analysis consists of mapping the problems or interferences that a project may face during its execution. The main risk mapped for this project concerns the participation of the municipalities and States in the evaluation process: as shown in Figure 4, without their active participation it would not be possible to construct an evaluation that meets the recommendations of the metaevaluation.

Figure 4
- Example of the MODCCRAS Risk Matrix
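Risk matrices of this kind typically score each risk as probability times impact. The sketch below uses a generic 1–3 scale with illustrative thresholds; the scores and cut-off points are assumptions for the example, not values taken from Figure 4:

```python
# Classic probability x impact scoring on a 1-3 scale;
# thresholds are illustrative, not taken from the article.
def risk_score(probability, impact):
    return probability * impact

def risk_class(probability, impact):
    score = risk_score(probability, impact)
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

# The risk discussed in the text: weak participation of States and
# municipalities would undermine the whole evaluation (high impact).
participation_risk = risk_class(probability=2, impact=3)
```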

7. Conclusions

The CRAS provide social assistance services throughout Brazil. In 2008, the MDS, as the major sponsor of social programs and policies, evaluated the quality of these units. This evaluation involved the mobilization of people across Brazil who supplied the information used for the preparation of quality indicators.

Based on the consolidated results of the CRAS 2008 Census, a metaevaluation was performed in order to support the requirements elicitation process. The method applied combined a metaevaluation approach with the XR methodology for defining requirements. Despite the limitations of this study in terms of the number of standards used for the metaevaluation of the CRAS 2008 Census, ten recommendations on how ICT could improve the evaluation process were obtained.

This work contributed to the context of RE, since the current process of UML object-oriented requirements analysis for information systems projects has as its starting point the formulation of use cases involving: (i) identifying the needs of a particular actor (human or otherwise), (ii) their interfaces with the system, and (iii) actions to be performed (Pressman, 2010, p. 161). This process is typically based on users' expectations regarding information systems. The critical view adopted in a metaevaluation allows the identification of weaknesses that can be mapped onto ICT solutions.

The results of this work open new opportunities for research, such as: (i) extending the application of the methodology to other standards of the Joint Committee in the context of the CRAS evaluations, (ii) applying the proposed methodology to other evaluations conducted by the MDS or by other governmental units interested in improving their information systems for evaluation, (iii) studying the role of ICT in each standard of the Joint Committee, and (iv) deepening studies on requirements elicitation based on the analysis of critical processes.

References

  • AMERICAN EVALUATION ASSOCIATION. Guiding principles for evaluators. Disponível em: <http://www.eval.org/Publications/aea06.GPBrochure.pdf>. Acesso em: 22 maio 2009.
    » http://www.eval.org/Publications/aea06.GPBrochure.pdf
  • ASSOCIATION OF BUSINESS PROCESS MANAGEMENT PROFESSIONALS. Guia para o Gerenciamento de Processos de Negócio Corpo Comum de Conhecimento (BPM CBOK®) - Versão 2.0. ABPMP, 2009. Disponível em: <http://www.abpmp-br.org/CBOK/CBOK_v2.0_Portuguese_Edition_Thrid_Release_Look_Inside.pdf>.
    » http://www.abpmp-br.org/CBOK/CBOK_v2.0_Portuguese_Edition_Thrid_Release_Look_Inside.pdf
  • Brasil. Ministério do Desenvolvimento Social e Combate à Fome. Linha de Base do Monitoramento dos CRAS. Brasília, DF: Secretaria de Avaliação e Gestão da Informação, Secretaria Nacional de Assistência Social, 2008a. 104 p.
  • Brasil. Ministério do Desenvolvimento Social e Combate à Fome.Gerente CRAS 2008 2008b. Disponível em: <http://aplicacoes.mds.gov.br/sagi/CRAS2008/adm>.
    » http://aplicacoes.mds.gov.br/sagi/CRAS2008/adm
  • Brasil. Ministério do Desenvolvimento Social e Combate à Fome. Monitoramento SUAS: censo CRAS 2008. Brasília: Secretaria de Avaliação e Gestão da Informação, Secretaria Nacional de Assistência Social, 2010. 235 p.
  • Castro, E. J. R.; Guimarães, F. A. Processo eXtreme requirements XR. Disponível em: <http://www.quaddract.com.br/download/Metodo_eXtreme_Requirements_XR.pdf>. Acesso em: 27 fev. 2010.
    » http://www.quaddract.com.br/download/Metodo_eXtreme_Requirements_XR.pdf
  • Cooksy, L.; Caracelli, V. Metaevaluation in practice. Journal of MultiDisciplinary Evaluation, v. 6, n. 11, 2009.
  • Hedler, H. Meta-avaliação das auditorias de natureza operacional do Tribunal de Contas da União: um estudo sobre auditorias de natureza operacional de programas sociais. Tese de Doutorado apresentada ao Instituto de Psicologia da UnB, 2007. 260f.
  • IEEE. Standard Glossary of Software Engineering Terminology (IEEE Std. 610.12-1990). New York, EUA: IEEE, 1990.
  • INTERNATIONAL STANDARD ORGANIZATION. ISO/IEC 9126-1, Software engineering – product quality – Part 1: Quality model. First ed., 2001-06-15.
  • JOINT COMMITTEE. The Program Evaluation Standards. Thousand Oaks, EUA: SAGE, 1994.
  • Paula Filho, W. Engenharia de software: fundamentos, métodos e padrões. Rio de Janeiro, RJ: LTC, 2009.
  • Posavac, E. J.; Carey, R. G. Program evaluation: methods and case studies. 6. ed., New Jersey, EUA: Prentice Hall, 2003.
  • Pressman, R. Software engineering: a practitioner's approach. 7. ed., New York, EUA: McGraw-Hill, 2010.
  • Russell, S.; Norvig, P. Artificial intelligence: a modern approach. 3. ed., Boston, EUA: Pearson, 2010.
  • Saiedian, H.; Dale, R. Requirements engineering: making the connection between the software developer and customer. Information and Software Technology, n. 42, 1999.
  • SEI. Software Engineering Institute. Improving process for developing better products and services. Disponível em: <http://www.sei.cmu.edu/reports/10tr033.pdf>. Acesso em 9 de novembro de 2010·
    » http://www.sei.cmu.edu/reports/10tr033.pdf
  • Softex. MPS.br – Melhoria de Processo de Software Brasileiro - Guia Geral 2009. Disponível em: <http://www.softex.br/mpsbr/_guias/guias/MPS.BR_Guia_Geral_2009.pdf>
    » http://www.softex.br/mpsbr/_guias/guias/MPS.BR_Guia_Geral_2009.pdf
  • Stufflebeam, D. The metaevaluation imperative. American Journal of Evaluation, v. 22, p. 183-209, 2001.
  • Stufflebeam, D.; Shinkfield, A. Evaluation theory, models & applications. San Francisco, EUA: Jossey-Bass, 2007.
  • UNITED STATES GOVERNMENT ACCOUNTABILITY OFFICE. GAO: Government Auditing Standards. 2007. Disponível em: <http://www.gao.gov/new.items/d07731g.pdf>. Acesso em: 08 out. 2009.
    » http://www.gao.gov/new.items/d07731g.pdf
  • Weiss, C. H. Evaluation. 2. ed., Saddle River, EUA: Prentice Hall, 1997.
  • Wiegers, K. Software requirements: practical techniques for gathering and managing requirements throughout the product development cycle. 2. ed., Redmond, EUA: Microsoft Press, 2003.
  • Worthen, B.; Sanders, J.; Fitzpatrick, J. Avaliação de programas: concepções e práticas. São Paulo, SP: Editora Gente, 2004.
  • [1]
    The CRAS Census is carried out annually by MDS and refers to the data collection and analysis in the Centers of Reference in Social Assistance. This census has been performed regularly since 2007.
  • [2]
    The ISO/IEC 9126 standard (INTERNATIONAL STANDARD ORGANIZATION, 2001) focuses on the quality of the software product. It establishes a quality model based on the following components: (i) the development process, (ii) quality of the final product, and (iii) software product quality in use.
  • [3]
    Business Process Management is a management approach focused on identifying, designing, implementing, documenting, measuring, monitoring, controlling and improving business processes, automated or not, to achieve the desired results, consistent and aligned with the strategic goals of an organization (ASSOCIATION OF BUSINESS PROCESS MANAGEMENT PROFESSIONALS, 2009).
  • [4]
    According to Russell and Norvig (2010, p. 34), an agent is an entity that perceives its environment by means of sensors and acts upon that environment by means of actuators, processing information and knowledge.

Publication Dates

  • Publication in this collection
    Apr 2014

History

  • Received
    26 Sept 2011
  • Accepted
    11 Feb 2014
TECSI Laboratório de Tecnologia e Sistemas de Informação - FEA/USP Av. Prof. Luciano Gualberto, 908 FEA 3, 05508-900 - São Paulo/SP Brasil, Tel.: +55 11 2648 6389, +55 11 2648 6364 - São Paulo - SP - Brazil
E-mail: jistemusp@gmail.com