

Proposal for a measurement model for software tests with a focus on the management of outsourced services

Angélica Toffano Seidel Calazans I,II; Ricardo Ajax Dias Kosloski II; Luiz Carlos Miyadaira Ribeiro Junior II

I Centro Universitário de Brasília, Uniceub, Brasília, Brazil

II Universidade de Brasília, UnB, Brasília, Brazil

Address for correspondence: Angélica Toffano Seidel Calazans. Doctorate degree in Information Science from Universidade de Brasília (2008) and a master's degree in Knowledge Management and IT from Universidade Católica de Brasília (2003). Post-graduate degrees in Systems Analysis from UDF (1986) and in Client-Server Platform (1996). Professor at Centro Universitário de Brasília, Uniceub/BR, FATECS, Brasília-DF. Phone: 55 061 81167246. E-mail: angélica.calazans@uniceub.br

ABSTRACT

The need for outsourcing IT services has grown significantly over the past few years. This article presents a proposal for a measurement model for Software Tests with a focus on the management of these outsourced services by governmental organizations. The following specific goals were defined: to identify and analyze the test process; to identify and analyze the existing standards that govern the hiring of IT services; and to propose a Measurement Model for outsourced services of this type. For the analysis of the data collected (documentary research and semi-structured interviews), content analysis was adopted, and the GQM (Goal, Questions, Metrics) approach was used to prepare the metrics. The result was confirmed by semi-structured interviews. The research identified that it is possible to establish objective and measurable criteria for a size measurement used as input to estimate the effort and deadlines involved, to follow up the test sub-processes, and to evaluate service quality. Therefore, the management of this type of service hiring can be done more efficiently.

Keywords: Test process, hiring management, outsourcing, metrics, measurements

1 INTRODUCTION

High market competitiveness and technological advances have increased the demand for increasingly better software, produced within predefined costs and deadlines. In turn, factors such as the complexity, size, heterogeneity and dynamism of computer systems have directly impacted the quality of these products. In this scenario, the test process becomes increasingly important, since its main objectives are product analysis, identification of defects and their possible elimination.

Software tests include the Verification and Validation processes. According to MPS.BR (Melhoria de Processo do Software Brasileiro, 2009), the purpose of Verification is to confirm that each service and/or product of the process or project satisfies the specified requirements, while the objective of Validation is to confirm that a product or component will satisfy its intended use when applied to the production environment. The correct implementation of these processes results in economic gains such as a reduction in the level of software defects, a reduction in development costs and product delivery time, and an increase in the efficiency of the software development process (Venkatasubramanian & Vinoline, 2010).

Despite these gains, Juristo, Moreno and Vegas (2004) regard software tests as one of the most costly practices in the development process, which needs to be properly managed in order to avoid, among other problems, resource waste and delays in the software development project schedule. Models such as COBIT and ITIL emphasize the need for the competent management of all IT resources, whether internal or external. This need is also reflected in the test activities, especially when the outsourcing of this process is considered.

According to Silva, Duarte and Castro (2009), "the outsourcing activity or information technology outsourcing has been showing significant growth rates in the IT services segment." Taking the test context into account, Venkatasubramanian and Vinoline (2010) state that software development organizations are currently beginning to outsource test activities (through the use of test factories) in order to reduce costs and increase the quality and reliability of software products. This has also been a trend in Brazil, especially among government agencies.

Thus, this paper's purpose is to define a proposal for a Measurement Model for Tests considering the needs of the outsourcing process management by government agencies. A brief view of the test process is presented in section 2. The laws, standards and models related to service hiring, such as Law n. 8666/93, Normative Instruction n. 4 of 2010 and other models, are briefly described in section 3. The research methodology is presented in section 4, and criteria for the measurement of test services are described in section 5. Measurements for the assessment of the quality of the service provided are shown in section 6, and ways to measure product quality in section 7. Conclusions and future work are found in section 8.

2 TEST PROCESS

Testing software is more comprehensive than reporting impressions and non-conformities. The IEEE 829 standard for software test documentation specifies a set of documents covering eight stages of the software test activity, each stage potentially producing its own type of document, as shown in Figure 1.


The Test Plan, according to this standard, contains the test's objectives and global goals, while the Test Design Specification details and specifies how the Test Plan will be executed. The Test Case Specification describes the situations that must be tested, and the Test Procedure Specification describes the actions that must be performed for the Test Case to be executed.

As for the Test Log (or evidence), it describes the tests executed, regardless of whether errors were encountered. The Test Incident Report describes the failures that occurred during the execution of the tests and, finally, the Test Summary Report (or executive report) contains a summary of the test conditions executed, the failures encountered and the desired statistical tabulations.
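
To make it easier to follow which of these documents a contracted test factory has delivered for a given demand, the document set can be represented as a simple checklist. The sketch below (in Python) is only an illustration built from the documents named above; tracking them per service order is an assumed usage for the example, not something prescribed by IEEE 829 itself.

```python
# Illustrative checklist of the IEEE 829 documents named in the text.
# Tracking them per service order is an assumed usage, not the standard.
IEEE829_DOCUMENTS = [
    "Test Plan",                     # objectives and global goals
    "Test Design Specification",     # how the Test Plan will be executed
    "Test Case Specification",       # situations that must be tested
    "Test Procedure Specification",  # actions to execute each Test Case
    "Test Log",                      # evidence of the tests executed
    "Test Incident Report",          # failures found during execution
    "Test Summary Report",           # executive summary and statistics
]

def missing_deliverables(delivered):
    """Return the documents not yet delivered for a service order."""
    return [doc for doc in IEEE829_DOCUMENTS if doc not in set(delivered)]

print(missing_deliverables(["Test Plan", "Test Log"]))
```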

In addition to Standard 829, the "V" model for software tests (Pfleeger, 2004) emphasizes the verification and validation activities for the purpose of preventing/detecting failures and minimizing project risks. For each stage of the software development process, the "V" model introduces a corresponding test stage or level. In this model, test planning and specification occur from top to bottom; that is, the tests are planned and specified throughout the software development stages. The execution of the tests occurs in the opposite direction, as can be seen in Figure 2.
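
A minimal sketch of that correspondence is shown below. Since Figure 2 is not reproduced here, the stage names are an assumption based on the usual presentation of the "V" model, not a transcription of the figure.

```python
# Assumed "V" model correspondence: each development stage maps to a test level.
V_MODEL = {
    "Requirements analysis": "Acceptance test",
    "System design": "System test",
    "Detailed design": "Integration test",
    "Coding": "Unit test",
}

# Planning/specification follows the development stages top-down;
# execution runs bottom-up, in the opposite direction.
planning_order = list(V_MODEL)
execution_order = [V_MODEL[stage] for stage in reversed(planning_order)]
```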


As a complement to these definitions, Caetano (2008) cites the existence of two test techniques. In the Structural Test technique, known as the White-Box Test, criteria are used to create test cases with the purpose of identifying failures in the software's internal structures. In the Functional Test technique, also known as the Black-Box Test, criteria are used to create test cases with the purpose of evaluating the adherence or compliance of the implemented software to the behavior described in the requirements.

In addition to these techniques, many authors (Sommerville, 2007; Pressman, 2006) identify several types of tests: functionality, usability, performance, security, regression, load, and configuration, among others.

That is, the test activity involves multiple facets, and identifying and defining them for future outsourced services also involves analyzing the existing standards for hiring and monitoring this type of service, which are briefly described below.

3 LAWS, STANDARDS AND MODELS LINKED WITH SERVICE HIRING AND MONITORING

For this research, Law n. 8666/93 and Normative Instruction n. 4 of 2010 were briefly analyzed in order to identify the aspects applicable to the hiring and monitoring of software test activities. Law n. 8666/93 establishes general rules on bids and administrative contracts for works, services (including advertising), purchases, disposals and leases within the scope of the Powers of the Union, the States, the Federal District and the Municipalities.

In addition to establishing service hiring methods, Law n. 8666/93 also establishes, among other aspects, the need to monitor the contract, stating in its article 67 that

"The execution of the contract shall be monitored and inspected by a specially assigned Administrative representative, as the hiring of third-parties is allowed, in order to assist them and provide them with information related to this assignment."

Paragraph 1 of this article complements it, stating that

"the Administration representative will write their own notes regarding the events related to the execution of the contract, by establishing the necessary means to correct existing failures or defects encountered".

Normative Instruction n. 4 of 2010 (IN04, 2010), from the Logistics and Information Technology Department of the Ministry of Planning, establishes in its article 2, item 20, that:

"Acceptance Criteria: they are objective and measurable parameters used to verify whether an asset or service provided complies with the specified requirements."

Article 15, paragraph 3, establishes that the service hiring strategy must contain, among other items:

- establishment of procedures and Acceptance Criteria of the services or assets provided, including metrics, indicators and minimum accepted values;

- previous quantification or estimation of the volume of the demanded services or the number of assets to be provided for comparison and control purposes;

- establishment of the quality assessment methodology and of the suitability of the Information Technology Solution to the functional and technological specifications;

Finally, article 25, paragraph 3, which describes the monitoring of the services provided, specifies, among other items:

"quality assessment of the services or assets provided as well as justifications in accordance with the Acceptance Criteria established by means of a contract, assigned to Technical Inspectors and to the Petitioner of the Contract."

It is important to highlight that Normative Instruction n. 4 of November 2010 (IN04, 2010) recommends the use of metrics in software solutions, while the rulings of the Federal Audit Court recommend the use of Unadjusted Function Points in contracts for the provision of systems maintenance and development services.

Also considering the service hiring context, Cruz, Andrade and Figueiredo (2011) present a service hiring process which complies with Normative Instruction n. 4. In stage 4 of the established process, named Contract Management, and in the "Perform technical monitoring" activity, these authors describe the need to monitor the service order execution, manage risks, establish corrective measures and make changes to the service order.

The description of these activities emphasizes the constant monitoring of service performance. The authors also highlight the need to evaluate the services provided by the Contracted Party in order to verify "compliance with requested functional and qualitative requirements as well as quality criteria established in the processes of the work."

With a focus on the management of the service hiring process, COBIT (ITGI, 2007), one of the best-known IT governance models, highlights in its "Monitor and Evaluate" domain the need for top management to ensure that IT processes comply with external requirements, that is, legislation and jurisprudence (ITGI, 2007).

In addition, in this same domain, COBIT stresses the importance of regularly evaluating IT processes in order to assure quality and adherence to control requirements. Other models also describe and emphasize the importance of managing the service hiring process, among them CMMI-ACQ v1.2 (SEI, 2007), eSCM-CL v1.1 (ITSqc, 2009a; ITSqc, 2009b), and the MPS.BR Guia de Aquisição:2009 (Softex, 2009).

As such, considering the characteristics of the test process, with its activities and products, Law n. 8666/93, the instructions in Normative Instruction n. 4, the service hiring process proposed by Cruz et al. (2011) and the need to manage this process, the hiring of a Test Factory should contain, at least, objective criteria to measure the demands, to evaluate the quality of the services provided, and to evaluate product quality in accordance with previously established criteria.

In the next sections, the proposed conceptual model and the research methodology are presented, as well as some types of metrics and measurement techniques associated with that model.

4 METHODOLOGY

The general objective of this work is to propose a Measurement Model for software tests by considering outsourced services, in order to make it easier for these contracts to be managed. In order to achieve this general goal, the following specific objectives were set:

- To identify and analyze the test process, its stages and activities;

- To identify and analyze existing laws and standards which govern the hiring of IT services;

- To analyze and propose a Measurement Model for outsourced test services.

The following data collection instruments were applied: documentary research and semi-structured interviews. For the analysis of the data collected (interviews and documents), content analysis was used. In the documentary analysis, the following constructs were considered: aspects related to test processes, identifying stages, activities and products generated; and the laws, standards, instructions and models concerning the test discipline.

Documentary research is a data collection method which aims to access the relevant sources, whether written or not. Written documentary sources include official, unofficial and statistical documentation. Non-written documentary sources include images, sounds and iconography, among others. Documentary research sometimes leads to other research techniques such as observation and content analysis (Albarello et al., 1995).

Considering the data obtained through the documentary research, a conceptual model was designed representing the adopted concepts and the relationships between them. The conceptual model built (Figure 3) was based on the finding that, in order to be consistent with the laws, the standards in force and the proposed models, and in order to be performed efficiently, the management of test services should use criteria to: measure the test services provided (size and effort), evaluate the quality of the service provided, and measure the quality of the product.


That is, the construction of the model identified that, in order to manage the outsourced test services, a Measurement Model should be proposed taking the following needs into account:

- To establish criteria for the measurement of the test services provided (size and effort);

- To establish criteria for the evaluation of the quality of the service provided;

- To measure the quality of the product.

Considering these criteria, the GQM (Goal, Questions, Metrics) approach was applied to identify the main goals, the questions and the metrics related to the test services. The GQM approach was proposed by Basili and has been used to provide metrics in accordance with the information needs related to the products, processes and resources used, establishing the basis for comparisons with future work (Basili & Rombach, 1994).

The GQM approach is based on the assumption that, in order to perform measurements objectively, an organization must specify the objectives to be achieved by the established measurements. Such objectives direct the course of the questions which, after being refined, result in metrics whose application will answer the established questions and, consequently, the identified measurement objectives (Basili & Rombach, 1994). The measurement model of the GQM approach works according to hierarchical levels among objectives, questions and metrics, where:

- Conceptual level: the scope of the evaluation is defined, that is, the object to be measured (the goal).

- Operational level: questions are defined that help characterize the object under study and how it should be viewed within the context of quality.

- Quantitative level: the data sets to be obtained for each of the defined questions are specified, so that the questions can be answered in a quantitative manner, that is, the metrics.

The results of the data collected allow for an interpretation model related to the objectives set forth (Basili & Rombach, 1994). The GQM paradigm provides a top-down method for the establishment of questions and metrics and a bottom-up interpretation model of the data.

The GQM approach contributes to the establishment or selection of metrics which achieve the objectives set forth by the organization and has been widely used by other models with a focus on continuous improvement. The CMMI model, for instance, states that the GQM approach is useful for selecting measurements that provide information about the business objectives of the organization (Chrissis, Konrad & Shrum, 2003).
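
As an illustration of the three GQM levels applied to the context of this research, the sketch below organizes a hypothetical goal, two questions and their metrics for outsourced test services. The specific goal, questions and metric definitions are assumptions made for the example; they are not the model actually derived in this paper.

```python
# Hypothetical GQM tree for outsourced test services (illustrative only).
# Conceptual level: the goal; operational level: the questions;
# quantitative level: the metrics that answer each question.
gqm = {
    "goal": "Evaluate the test service provided by the contracted test factory",
    "questions": {
        "How effective is the service at finding defects?": [
            "defects found in testing / (defects found in testing + in production)",
        ],
        "Is the contracted effort consistent with the measured size?": [
            "effort in hours per unit of test size (e.g. per test point)",
        ],
    },
}

def metrics_for(question):
    """Quantitative level: metrics associated with an operational-level question."""
    return gqm["questions"].get(question, [])
```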

In order to complement the research, and aiming at the triangulation of the results, employees of a test factory were interviewed to capture their perception of the services performed under a test contract. Semi-structured interviews were also conducted with employees of a company that hires a test factory, with the purpose of identifying their perception of their needs regarding the hired test activities.

The purpose of the interview is to obtain descriptions of the different aspects and the specific situations of a real-world phenomenon according to the interviewees' view (Kvale, 1996). In the semi-structured interview, the interviewer obtains detailed information, as well as data and opinions, by means of a free-style conversation, following a previously prepared list of questions supported by theories of interest to the research (Trivinos, 1987). Kvale (1996) cites five methods to analyze and interpret qualitative interviews: meaning condensation, meaning categorization, narrative structuring, meaning interpretation, and generating meaning through ad hoc methods. The meaning condensation method was used in this research for the purpose of identifying common points in the participants' perceptions.

Next, the research results are described, aiming to identify criteria for measuring the test services provided (size and effort).

5 METRICS FOR MEASURING THE SIZE AND EFFORT FOR THE TESTS

Chart 1 compares some of the techniques and experiments identified for estimating the effort of the Software Test discipline in a software development project. In this chart the metrics are succinctly described, along with the advantages and disadvantages found. It is interesting to highlight that all the interviews carried out, both with the employees of the test factory and with the employees who hire this service, pointed out the need for metrics to estimate the test effort. This is a necessity for both teams.


5.1 Some final considerations on size and effort estimation

Among the techniques mentioned for size measurement, Test Point Analysis (TPA) considers the largest number of factors for the estimation, which suggests that this technique may give more consistent results for measuring the size of a software test. For example, the complexity factor is obtained from the number of conditions (IF-THEN-ELSE) in a function, which directly influences the number of Test Cases.

In Function Point Analysis, for example, two similar functions may have the same size in FP; but if they have different complexities, the TPA technique will reflect that difference in the size of the functions.

The Test Case Points (TCP) technique also seems to be more accurate than FPA with respect to estimating the size of the test process, for it also considers, through certain factors, the internal complexity of the functions.

It is important to highlight that all techniques use a productivity factor to derive the effort from the size measurement obtained by the technique. Thus, such a factor should be calibrated according to three characteristics:

- The method for measuring the size;

- The characteristics that influence the productivity of the project, such as technology, environment, team, etc;

- The strategy of the tests used, including the levels, types and test techniques as well as the test environment.
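
A minimal sketch of this derivation is shown below: the effort is obtained from the measured size through a productivity factor adjusted by the characteristics listed above. The numeric values (hours per size unit and adjustment multipliers) are hypothetical and would have to be calibrated from the organization's own historical data, as no calibration values are given in this paper.

```python
# Minimal sketch: deriving test effort from a size measurement through a
# calibrated productivity factor. All numbers below are hypothetical.

def estimate_test_effort(size, hours_per_unit, adjustments):
    """Effort (hours) = size x base productivity x product of adjustment factors.

    size           -- test size in the chosen unit (e.g. test points)
    hours_per_unit -- base productivity calibrated from historical data
    adjustments    -- multipliers for technology, environment, team and
                      test strategy (1.0 means no influence)
    """
    effort = size * hours_per_unit
    for factor in adjustments.values():
        effort *= factor
    return effort

effort_hours = estimate_test_effort(
    size=250,              # e.g. 250 test points for the demand
    hours_per_unit=1.2,    # hypothetical calibrated productivity
    adjustments={"environment": 1.1, "team": 0.9, "test strategy": 1.05},
)
```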

6 METRICS FOR THE EVALUATION OF THE SERVICES PROVIDED

In the context of outsourcing test services, one of the challenges to be considered is how to monitor the quality of the services provided. Another is how to validate whether the outsourced test activities and their scope were executed satisfactorily, especially when the tested software product is not of good quality.

Demanding only the use of predefined tools may be risky, for it does not guarantee the quality of the execution of the tests. As such, it is necessary to closely monitor the test process, from the strategy adopted, through the coverage achieved, to the follow-up of the defects found. To make this subject easier to understand, a mind map was created (Figure 4) with the information most needed. The documentary research identified that various authors cite metrics for the monitoring of defects, the effectiveness of the tests, and so on (Nirpal & Kale, 2011; Caetano, 2008; Pusala, 2006; Kaur, Suri & Sharma, 2007). It is interesting to highlight that the interviews emphasized the need for some of these metrics to follow up the service provided. The employees of the test factory as well as those of the hiring company cited the absence of this monitoring.
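
Two of the defect-based ratios most often cited in that literature are sketched below, in the general form in which they usually appear; the exact definitions and target values would have to be fixed in the contract, so the formulas here are only an illustration, not the specific metrics adopted in this research.

```python
# Illustrative service-monitoring ratios based on defect counts.
# Definitions follow the usual form in the testing-metrics literature;
# contractual definitions may differ.

def defect_detection_effectiveness(found_in_test, found_in_production):
    """Share of all known defects that the test service caught before release."""
    total = found_in_test + found_in_production
    return found_in_test / total if total else 0.0

def defects_per_test_case(defects_found, test_cases_executed):
    """Average number of defects revealed per executed test case."""
    return defects_found / test_cases_executed if test_cases_executed else 0.0

# Hypothetical follow-up for one service order.
dde = defect_detection_effectiveness(found_in_test=45, found_in_production=5)  # 0.90
dtc = defects_per_test_case(defects_found=45, test_cases_executed=300)         # 0.15
```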



7 METRICS FOR THE EVALUATION OF THE PRODUCT TESTED

Finally, it is most important to ensure that the product has the quality expected by all, in accordance with the quality criteria demanded. Such quality criteria must also be evaluated objectively, that is, through software metrics. The NBR ISO/IEC 9126 standard provides, in its parts 2, 3 and 4, the metrics for evaluating its External, Internal and Quality-in-Use quality criteria.

A software product does not reach complete stability in its first releases. What matters most is that the evolution of the defects be monitored as soon as possible and that their causes be addressed during the development process. To this end, the NBR ISO/IEC 9126 standard provides a set of metrics for each quality characteristic and its respective sub-characteristics. Such metrics aim to answer questions such as the following (a sketch of this kind of ratio metric is given after the list):

- How adequate are the evaluated functions?

- How complete are the functions relative to the specified requirements?

- How frequently do the users find incorrect results?

- How complete are the auditing records in reference to the accesses by users of the system and to the data?
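
The metrics in parts 2 to 4 of NBR ISO/IEC 9126 typically answer these questions through ratios between what was observed and what was specified or evaluated. The sketch below only illustrates that general form for the second question above; it is an approximation of the style of the standard's metrics, not a transcription of their exact definitions, which must be taken from the standard itself.

```python
# Illustrative ratio metric in the style of NBR ISO/IEC 9126 (parts 2-4).
# The exact formulas must be taken from the standard; this only shows the
# general "implemented / specified" form used to answer questions such as
# "How complete are the functions relative to the specified requirements?".

def functional_completeness(functions_implemented_as_specified, functions_specified):
    """Fraction of the specified functions that behave as specified."""
    if functions_specified == 0:
        return 1.0  # nothing specified, so nothing is missing
    return functions_implemented_as_specified / functions_specified

score = functional_completeness(functions_implemented_as_specified=47,
                                functions_specified=50)  # hypothetical: 0.94
```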

It is important to highlight that the follow-up of product quality should be in line with the quality criteria demanded, and that the test strategy should exercise the attributes that best represent adherence to the desired and adequate level of quality. In this context, the quality criteria related to non-functional requirements, such as performance or efficiency requirements, are normally forgotten or not prioritized. If there is a performance requirement for some function, this attribute should be measured and validated by the test process.

The documentary research identified a few other proposals for measuring product quality, defect frequency and so on (Lazic & Mastorakis, 2008; Kaur et al., 2007). These measurements, along with the ISO/IEC 9126 proposal, were also confronted with the results of the interviews, aiming to identify the most relevant ones. Some of the metrics identified to evaluate the tested product are described below.


8 CONCLUSIONS AND FUTURE WORK

The general objective of this paper was to propose a measurement model for software tests, considering the outsourcing of this service and the need to support the management of these contracts. In order to achieve this general objective, the following specific objectives were defined:

- Identify and analyze the test process with all its phases and activities;

- Identify and analyze the already existing laws, norms and models that regulate the hiring of services in IT;

- Analyze and propose a measurement model for outsourced test services.

This research made it possible to identify the complexity of the testing subject through the study of the test process. The analysis of the existing laws, norms and models made it possible to define the research's conceptual model, which identified that, in order to manage outsourced test services, a Measurement Model should be built considering the following criteria:

- Measuring the test services provided (size and effort);

- Evaluating the quality of the service provided;

- Measuring the quality of the product.

Furthermore, it was evident that the outsourcing of any IT service also needs to consider the characteristics that influence the productivity of the project, like technology, environment, team, etc., and the strategy of the tests used, including their levels, types, and test techniques as well as the test environment.

Finally, after the analysis of the measurements found in the specialized literature, of the existing norms, instructions and models for managing the outsourcing of services in the governmental sphere, and after the application of the GQM methodology, it is possible to: establish a size measurement, and consequently an input to estimate the effort and time frame of test demands; monitor the sub-processes of the outsourced tests, also by means of objective and measurable criteria; and establish quality criteria, evaluating whether the end product meets them. As such, the management of this type of outsourcing can be made viable in a more efficient manner. It is important to highlight that the interviews held validated both the identified needs and the proposed measurements.

As future work, the implementation of the model and of the proposed measurements is recommended, in order to verify their applicability.

ACKNOWLEDGEMENTS

Supported by TiMétricas

REFERENCES

Albarello, L., Digneffe, F., Hiernaux, J., Maroy, C., Ruquoy, D., & Saint-Georges, P. (1995). Prática e Métodos de Investigação em Ciências Sociais. Portugal: Gradiva.

Aranha, E., & Borba, P. (2007). An Estimation Model for Test Execution Effort. First International Symposium on Empirical Software Engineering and Measurement, IEEE Computer Society.

Basili, V., & Rombach, H. (1994). Goal question metric paradigm. Encyclopedia of software engineering. (2).

Law nº 8.666, of June 21, 1993 (1993). Regulates art. 37, item XXI of the Constitution, establishing rules for bidding and contracts for the Public Administration and other measures. Brazil. Retrieved August 16, 2011, from http://www.planalto.gov.br/ccivil_03/Leis/L8666cons.htm.

Normative Instruction SLTI n. 4, of November 12, 2010 (2010). Provides for the process of hiring Information Technology services for the direct Federal Public Administration, autonomous agencies and foundations. Brazil. Retrieved August 16, 2011, from http://www.governoeletronico.gov.br/sisp-conteudo/nucleo-de-contratacoes-de-ti/modelo-de-contratacoes-normativos-e-documentos-de-referencia/instrucao-normativa-mp-slti-no04.

Caetano, C. (2002). Gestão de defeitos. Engenharia de Software, year 1, 1st edition.

Caetano, C. (2008). Gestão de Testes: Ferramentas Open Source e melhores práticas na gestão de testes. Engenharia de Software, v. 3.

Chrissis, M. B., Konrad, M., & Shrum, S. (2003). CMMI: Guidelines for Process Integration and Product Improvement. Addison-Wesley

Cruz, C. S., Andrade, E. L. P., & Figueiredo, R. M. C. (2011). PCSSCEG - Processo de contratação de serviços de Tecnologia da Informação para Organizações Públicas. DF: MCT.

Fenton, N., & Pfleeger, S. (1997). Software Metrics: A Rigorous and Practical Approach (2nd ed.). Boston: PWS Publishing Company.

Institute of Electrical and Electronics Engineers, IEEE. (2008). IEEE Standard for Software and System Test Documentation. IEEE Std 829-2008.

International Function Point Users Group, IFPUG. (2010). Manual de Práticas de Contagens de Pontos de Função, v. 4.3.1.

Information Technology Governance Institute, ITGI. (2007). COBIT: Control Objectives for Information and related Technology (4.1 ed.). Retrieved August 16, 2011, from http://www.isaca.org/Knowledge-Center/cobit/Pages/Downloads.aspx.

International Organization for Standardization and International Electrotechnical Commission. (2002). ISO/IEC 9126:2002 Software quality.

Information Technology Services Qualification Center, ITSqc. (2009a). eSourcing Capability Model for Client Organizations (eSCM-CL), v1.1, part 1. Retrieved August 16, 2011, from http://www.itsqc.org/downloads/documents/eSCM-CL_Part1_V1dot1.html.

Information Technology Services Qualification Center, ITSqc. (2009b). eSourcing Capability Model for Client Organizations (eSCM-CL), v1.1, part 2. Retrieved August 16, 2011, from http://www.itsqc.org/downloads/documents/eSCM-CL_Part2_V1dot1.html.

Jones, C. (2007). Software Estimating Rules of Thumb. [Working Paper]. Capers Jones. Retrieved March 15, 2011, from http://www.compaid.com/caiinternet/ezine/capers-rules.pdf.

Juristo, N., Moreno, A. M., & Vegas, S. (2004). Reviewing 25 years of testing technique experiments. Empirical Software Engineering, v. 9, n. 1-2, pp. 7-44.

Kaur, A., Suri, B., & Sharma, A. (2007, March). Software Testing Product Metrics - A Survey. Proceedings of the National Conference on Challenges & Opportunities in Information Technology (COIT-2007), RIMT-IET, Mandi Gobindgarh, 23.

Kushwaha, D. S., & Misra, A. K. (2008). Software Test Effort Estimation. ACM SIGSOFT Software Engineering Notes, v. 33, n. 3.

Kvale, S. (1996). Interviews: an introduction to qualitative research interviewing. California: Sage publications.

Lazic, L., & Mastorakis, N. (2008). Cost Effective Software Test Metrics. WSEAS Transactions on Computers, v. 7, n. 6.

Nageswaran, S. (2001, June). Test effort estimation using use case points. Proceedings of Quality Week, San Francisco, California, USA.

Nirpal, P. B., & Kale, K. V. (2011). A Brief Overview Of Software Testing Metrics. International Journal on Computer Science and Engineering (IJCSE), v. 3 n. 1.

Patel, N., Govindrajan, M., Maharana, S., & Randas, S. (2001). Test Case Point Analysis. [Working Paper]. Cognizant Technology Solutions. Retrieved March 15, 2011, from www.stickyminds.com/getfile.asp?ot=XML&id=2566&fn=XUS373692file1.pdf.

Pfleeger, S. L. (2004). Engenharia de software: teoria e prática. (2nd. ed.) São Paulo: Prentice Hall.

Pressman, R. (2006). Engenharia de Software (6th ed.). São Paulo: McGraw-Hill.

Pusala, R. (2006). Operational Excellence through Efficient Software Testing Metrics. [Working Paper]. Infosys. Retrieved March 15, 2011, from http://www.infosys.com/it-services/independent-validation-testing-services/white-papers/documents/operational-excellence.pdf.

Santra, A. (2010). A new approach for estimation of software testing process based on software requirements. Journal of Scientific & Industrial Research, v. 69, pp. 746-749.

Software Engineering Institute, SEI. (2007). CMMI for Acquisition (CMMI-ACQ), v. 1.2. Retrieved March 1, 2011, from www.sei.cmu.edu/cmmi/tools/acq/download.cfm.

Silva, M. A. da S., Duarte, R. G., & Castro, J. M. de. (2009). Outsourcing de TI e redefinição do papel da subsidiária: um estudo comparativo entre as subsidiárias brasileiras e indiana de uma multinacional americana. Journal of Information Systems and Technology Management, v. 6, n. 2, pp. 173-202.

Associação para Promoção da Excelência do Software Brasileiro, SOFTEX. (2009). MPS.BR - Melhoria de Processo do Software Brasileiro: Guia de Aquisição. Retrieved March 15, 2011, from www.softex.br/mpsbr.

Sommerville, I. (2007). Engenharia de software. (8th. ed). São Paulo: Pearson Addison-Wesley.

Trivinos, A. N. S. (1987). Introdução à pesquisa em ciências sociais: a pesquisa qualitativa na educação. São Paulo: Atlas.

Veenendaal, E., & Dekkers, T. (1999). Test point analysis: a method for test estimation. In Project Control for Software Quality. Shaker Publishing.

Venkatasubramanian, A., & Vinoline, V. (2010). Software Test Factory (A proposal of a process model to create a Test Factory). International Journal of Computational Intelligence Techniques, v.1, n. 1, pp.14-19.

Manuscript first received: 22/09/2011

Manuscript accepted: 12/04/2012

Ricardo Ajax Dias Kosloski,

Professor at Universidade de Brasília - UnB. Post-graduate degree in Software Engineering from Universidade Católica de Brasília - UcB (2003) and a master's degree in Knowledge Management and Information Technology, also from UcB (2005).

TiMétricas, Brasília-DF, Phone: 55 061 84063679, E-mail: ricardo.kosloski@metricas.com.br

Luiz Carlos Miyadaira Ribeiro Junior,

Associate Professor at Universidade de Brasília, master's degree in Computer Science from Universidade Federal de São Carlos and a doctorate degree from Escola Politécnica da Universidade de São Paulo (2007).

Universidade de Brasília-UnB, Brasília-DF, E-mail: luiz.miyadaira@gmail.com
