

Software process assessment and improvement using Multicriteria Decision Aiding - Constructivist

Leonardo Ensslin (I); Luiz Carlos Mesquita Scheid (I); Sandra Rolim Ensslin (I); Rogério Tadeu de Oliveira Lacerda (II)

(I) Federal University of Santa Catarina, Brazil

(II) UNISUL - University of South of Santa Catarina, Brazil

Correspondence: Leonardo Ensslin, Programa de Pós-Graduação em Engenharia de Produção, Universidade Federal de Santa Catarina, Campus Universitário Trindade, Caixa Postal 476, CEP 88040-900, Florianópolis, SC, Brasil

ABSTRACT

Software process improvement and software process assessment have received special attention since the 1980s. Some models have been created, but these models rest on a normative approach, where the decision-maker's participation in a software organization is limited to understanding which process is more relevant to each organization. The proposal of this work is to present the MCDA-C as a constructivist methodology for software process improvement and assessment. The methodology makes it possible to visualize the criteria that must be taken into account according to the decision-makers' values in the process improvement actions, making it possible to rank actions in the light of specific organizational needs. This process helped the manager of the company studied to focus on and prioritize process improvement actions. This paper offers an empirical understanding of the application of performance evaluation to software process improvement and identifies complementary tools to the normative models presented today.

Keywords: software process assessment, software process improvement, decision, performance measurement, CMMI, SPICE.

1. INTRODUCTION

Software process improvement (SPI) and software process assessment (SPA) have received special attention from government, researchers and industry (Staples et al., 2007; Coleman et al., 2008; Habra et al., 2008; Niazi et al., 2010). Published works document the savings achieved through improvements in software quality (Pitterman, 2000). Since the 1980s several models have been developed with this intention; those most used by software organizations are CMMI and SPICE (Kuilboer et al., 2000). The CMM and CMMI models were developed by the Software Engineering Institute (SEI) of Carnegie Mellon University and SPICE by the International Organization for Standardization (ISO).

Despite their importance and the interest in them, statistics on the number of software organizations that have adopted one of these models show that few have done so. The lack of adoption can be seen in the SEI CMMI appraisal data for the years 2002-2006, a period in which just 1,581 CMMI appraisals were reported to the SEI (Coleman et al., 2008).

Why are these models not being adopted as expected?

The proposed models are based on a pre-defined set of base process activities (BPAs) (Yoo et al., 2006). Following the rationalist paradigm, the models determine which processes the organization should execute (Roy, 1993). They suggest the order in which the process areas must be assessed and improved with regard to performance. Finally, they determine how to judge the current stage of process capacity or the adoption of a practice.

Seeking greater adaptability, SPICE (ISO/IEC 15504) was developed as a continuous model for software process assessment and performance improvement. Continuous models offer organizations a path to prioritize the process areas to be improved in accordance with their business plans (Sheard et al., 1999). CMMI followed this trend and offers two models: (1) continuous and (2) staged. Researchers have published proposals to mitigate some of the difficulties in adopting the models, for example: (a) how to identify the barriers in an organization from the perspective of software process assessment and improvement and how to determine the critical success factors (Staples et al., 2007; Niazi et al., 2010); and (b) how to make the judgment about the current stage of an activity or process credible (Lee et al., 2001; Niazi et al., 2009).

In order to address this weakness, this work presents the Multicriteria Decision Aiding - Constructivist (MCDA-C) methodology (de Moraes et al., 2010; Ensslin et al., 2010) as an alternative for software organizations to adopt in process assessment and improvement. The constructivist approach recognizes the need to expand the decision-maker's knowledge about his/her specific decision context (Lacerda et al., 2011a), in contrast with normative models, which assume an optimum solution exists for any context.

The relevance of this research is supported by the opportunity of using the MCDA-C in information technology project management to aid CMMI projects (Lacerda et al., 2011b).

Thus, the specific objectives of the research are:

(i) to present a performance measurement methodology and to generate a better understanding of the objectives of process improvement in an organization; and

(ii) to present a case study in order to illustrate the proposed methodology for assessing and creating decision opportunities in process improvement programmes.

In the next section, a short description is given of the CMMI and SPICE models. In Section 3, the methodological procedures used in this research are described, a case study applying the MCDA-C is presented in Section 4 and, finally, considerations and conclusions are provided in Section 5.

2. SPICE AND CMMI MODELS

This section highlights two normative models used to improve processes in software development: SPICE and CMMI.

2.1 SPICE

In January 1993, ISO/IEC JTC1 approved the start of work aimed at producing an international standard for SPA. In 1998, the technical report ISO/IEC TR 15504 was published. The project was named SPICE. It had three main objectives:

  • to develop the initial documents for the SPA standard (called technical reports);

  • to organize industry initiatives regarding the use of the new standard;

  • to promote the technology transfer of SPA inside the software industry.

The model proposed in ISO/IEC 15504 defines the processes and the base practices to be adopted by the software organization.

The process dimension is assessed with regard to its existence and its adequacy (ISO/IEC 15504-3, 1996, p.8). First, the organization tries to implement the base practices of the process and subsequently seeks performance improvements until the completely adequate level is reached. This assessment has an internal purpose, that is, it is not intended for certification or external recognition. Processes are grouped into five categories:

  • the supplier-customer category, comprising processes that have a direct impact on the customer, operation and use, such as the transition of software from the development to the production environment;

  • the engineering category, comprising processes for software specification, building or maintenance, and documentation;

  • the project category, comprising processes concentrating on base practices for project management (activities, effort and schedule determination, and resources) or services to the customer;

  • the support category, consisting of processes that support the other processes of a project; and

  • the organization category, consisting of processes that establish the business objectives of the organization and concern software development processes, products, and resources (tangible and intangible).

As an example, the process identify the customer's needs belongs to the supplier-customer category (ISO/IEC 15504-2, 1996, p.19). The objective of this process is to manage the gathering of the customers' requirements so as to better understand what will satisfy their expectations. Its base practices are (a) ascertain the customers' requirements and obtain orders, (b) obtain an understanding of the customers' expectations, and (c) keep the customers informed about the status of requirements and orders.

Another dimension of the assessment concerns process capability. It is expressed in capacity levels and generic practices, which are grouped by common characteristics. There are six capacity levels, numbered from zero to five. In level 0, Not Executed, there are no common characteristics and in general there are failures in the execution of base practices; the products resulting from the process are not easily identified. In level 1, Executed Informally, the base practices of the process are usually executed, but their performance is not planned or followed. In level 2, Planned and Followed, the performance of base practices is planned and followed, the agreed performance is verified and the resulting work products conform to the specified standards. In level 3, Well-defined, the base practices are executed according to a well-defined, documented process, tailored from the organization's standard software process. Level 4, Controlled Quantitatively, has metrics to analyse and measure performance, processes that enable performance improvement, and the quality of the products created is known quantitatively. Level 5, Continuous Improvement, is based on the business objectives of the organization; at this level, quantitative objectives of effectiveness and efficiency are established for the processes, which are continuously improved with regard to performance, always comparing them with the previously established goals.

The capacity level is measured through the judgment of generic practice adequacy, on the following scale: not adequate, partially adequate, largely adequate, and completely adequate. A capacity level is reached when its generic practices are evaluated as completely adequate. The complete definition of the generic practices can be found in ISO/IEC 15504-2 (1996).
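
To make this rating rule concrete, the sketch below computes the capacity level from the adequacy judgments of the generic practices. It is a simplification we add for illustration; the input layout is hypothetical and not part of the standard.

```python
# Minimal sketch of the SPICE capacity rating rule described above.
# Assumption: ratings_per_level[i] lists the adequacy judgments of the
# generic practices required for capacity level i + 1 (levels 1..5).

def capacity_level(ratings_per_level: list[list[str]]) -> int:
    """Return the highest capacity level whose generic practices were
    all judged 'completely adequate'; levels are cumulative."""
    achieved = 0
    for level, ratings in enumerate(ratings_per_level, start=1):
        if ratings and all(r == "completely adequate" for r in ratings):
            achieved = level
        else:
            break  # a gap at any level stops the progression
    return achieved

# Level 1 fully adequate, level 2 only partially adequate -> level 1.
print(capacity_level([
    ["completely adequate", "completely adequate"],
    ["completely adequate", "partially adequate"],
]))
```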

2.2 CMMI

CMMI, presented by the SEI in 2000, comprises continuous and staged models, which the SEI assures are compatible with SPICE. Software organizations must choose one or the other of the models, as well as the disciplines that will be part of the model for software process assessment and improvement (Staples et al., 2008).

The measurement scale of the continuous model is called the capacity dimension (the competence to execute a given process) (Staples et al., 2007). Associated with the capacity level of a process area are the generic practices used to achieve performance improvement, similar to ISO/IEC 15504. Each stage of the model has the objective of measuring the performance of one group of process areas (PAs) considered by a software organization to be critical for achieving a given level of maturity (Herbsleb et al., 1996). They are grouped by common characteristics and are characterized by the focus of the assessment: institutionalization or implementation (Niazi et al., 2005a). Implementation means that a PA exists, but not everyone in the software organization executes its activities as required. A PA is institutionalized when the software organization, as a whole, executes its activities in a standard way. The maturity level varies between 1 and 5.

In the continuous model, the organization should pre-determine the PAs to be assessed in order to improve performance (SEI, 2006). The processes are grouped into four categories, as summarized by Huang et al. (2006): (1) process management, with practices related to the definition, planning, organization, deployment, implementation, monitoring, control, verification, measurement, and improvement of processes; (2) project management, dealing with the activities related to the planning, monitoring and control of projects; (3) engineering, covering the development and maintenance practices that are shared by the software and systems engineering disciplines; and (4) support, involving the practices that support the development and maintenance of products. Each process has specific goals and practices concerning its implementation, together with generic goals and practices to verify its institutionalization in the software organization. For instance, the process requirements development, from the engineering category, has the purpose of producing and analysing customers' requirements and the requirements of the products and product components to be developed. Its specific goals are: the development of customers' requirements, the development of product requirements, and the analysis and validation of requirements. The generic goals are: achieve the specific goals and institutionalize a managed process, a defined process, a quantitatively managed process, and an optimizing process. Finally, as an example of specific practices, the specific goal develop customer requirements involves (a) collecting the needs of the customers, (b) eliciting those needs, and (c) transforming the customers' needs, expectations, constraints and interfaces into customer requirements. The document CMMI for Development (SEI, 2006) has a detailed description of each component and examples of the uses of the continuous model.

In the staged model, unlike the continuous one, the PAs are organized by maturity levels to support and guide process improvement. The maturity level of a software organization is a way to predict its future performance in relation to a set of PAs (Yoo et al., 2006). For example, when an organization achieves level 2, managed, it can be assumed that in a software development project the team will repeat specific practices already institutionalized for project management, because PAs such as requirements management, project planning, project monitoring and control, measurement and analysis, quality assurance, and configuration management are project management disciplines, and everyone in the organization knows them and practices them in daily project work. The maturity level can be considered a step in performance improvement. In relation to the market, it works as a benchmark showing the software organization's stage of evolution. There are five maturity levels: initial, managed, defined, quantitatively managed, and optimizing. As in the continuous model, the components of the staged model are: PAs, specific practices and goals, and generic practices and goals. The difference from the continuous model is that, in the staged model, the generic practices are grouped into four common characteristics (Huang et al., 2006). The common characteristics do not receive ratings in the assessment process; they only group generic practices and goals. The generic goals are established to verify whether the implementation and institutionalization of each process area are effective, repeatable and lasting.

3. RESEARCH METHODOLOGY

This section is divided into three sub-sections. The first presents the methodological framing, the second the intervention instrument adopted, and the third summarizes the procedures of the method executed in this research.

3.1 Methodological framing

In order to justify the intervention instrument as a proper method for this research, it is necessary to understand the means that science has to meet challenges (Tasca et al., 2010). These ways of dealing with problems are decision-aiding approaches adopted by the researcher or consultant when finding solutions to organizational problems. Each approach carries with it a set of assumptions that affects the way that management is understood, developed and implemented during the decision-making process (Roy, 1993).

Thus, the approaches and their work assumptions are world views that act as filters in the eyes of researchers and consultants, making them see specific things and ignore others in the contexts in which they operate (Melão et al., 2000).

To see the benefits occasioned by SPI, the decision approach should demonstrate certain properties. These properties are closely connected with the world view adopted by the researcher or consultant working on process improvement.

Each of these perspectives carries with it a set of assumptions that directly affects the modus operandi with the methodologies of process management that are developed and implemented in organizations, because they act as lenses through which certain properties are observed and others disregarded (Melão et al., 2000; Brunswik et al., 2001; Karlsson, 2008).

For an understanding of these approaches, Roy (1993) categorizes three ways to deal with problems in the decision-making process: (i) the realist path, (ii) the axiomatic (prescriptive) path and (iii) the constructivist path (Roy, 1993; Tsoukias, 2008).

In the realist approach, the decision-maker is considered to be a rational human being and he trusts the model to represent reality (Roy, 1993).

The axiomatic methods aim to identify, through deductive logic applied to the decision-maker's discourse, the values and preferences of the decision-maker in order to build a model. Thus, this approach generates knowledge for the facilitator to understand the situation and prescribe solutions (Keeney, 1992).

The constructivist approach aims to generate knowledge in decision-making during the construction of the model, so that the decision-maker can understand the consequences of the current situation for his/her values and the evolution caused by his/her decisions for his/her strategic objectives (Roy, 1993; Tsoukias, 2008).

Affiliated with the constructivist paradigm (Lacerda et al., 2011a), the intervention instrument adopted in this paper is the Multicriteria Decision Aiding - Constructivist methodology (MCDA-C).

3.2 Multicriteria Decision Aiding - Constructivist (MCDA-C)

References to MCDA in decision-making contexts date back more than two centuries (Lacerda et al., 2011b). The consolidation of the method as a scientific instrument occurred in the 1990s, through the work of researchers such as Roy (1993), Keeney (1992), Landry (1995) and Bana e Costa et al. (1999), among others.

The MCDA-C is a branch of MCDA that aids decision-makers in complex, conflicting and uncertain contexts, in which the decision-makers want to improve their understanding of the situation and no alternatives exist at the beginning of the process but must be developed (Ensslin et al., 2000; de Moraes et al., 2010; Ensslin et al., 2010; Zamcopé et al., 2010; Azevedo et al., 2011; Della Bruna Jr et al., 2011; Lacerda et al., 2011a; Lacerda et al., 2011b; Azevedo et al., 2012; da Rosa et al., 2012).

Bearing in mind the scientific contribution of this paper, Table 1 draws core differences between realist and constructivist approaches.

3.3 Procedures of the MCDA-C

The construction of the performance measurement model following the MCDA-C methodology is divided into three phases: (i) structuring, (ii) evaluation, and (iii) recommendations (Bana e Costa et al., 1999), as presented in Figure 1 and described in this sub-section.


3.3.1 Step 1: Contextualization

The Structuring Phase aims to achieve a broad understanding of the problem to be discussed. To achieve such a goal, the stakeholders are identified, so that it becomes clear whose perception of the context is important and for whom knowledge about the context should be improved.

3.3.2 Step 2: Hierarchical structure of value

The decision-maker, with the facilitator's help, defines a label for the problem that describes the focus of the decision-maker's main concerns. The facilitator then encourages the decision-maker to talk about the context and, by interpreting the interviews, the primary elements of evaluation (PEEs) are identified (Lacerda et al., 2011b). The understanding of each PEE is then expanded by constructing the objective associated with it. For each PEE, a concept representing the decision-maker's preference direction is built, as well as its psychological opposite pole (Eden et al., 1992).

With the concepts built, means-ends relationship maps are constructed. In the cognitive map, clusters of concepts are identified (Eden et al., 1985). Each cluster in the cognitive map has an equivalent point of view in the hierarchical structure of value.
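
As an illustration of this structure (a hypothetical data sketch, not the authors' tooling), a means-ends map can be held as a directed graph whose edges point from means concepts to the end concepts they influence:

```python
# Hypothetical sketch: a means-ends map as a directed graph.
# Keys are concept ids; values are the concepts each one influences.
influences: dict[int, list[int]] = {
    20: [12],   # a means concept supporting concept 12
    12: [5],    # which in turn supports the head concept 5
    5: [],      # a head (end) concept influences nothing further
}

def head_concepts(graph: dict[int, list[int]]) -> list[int]:
    """End concepts of the map: those with no outgoing influence."""
    return [c for c, outs in graph.items() if not outs]

print(head_concepts(influences))  # -> [5]
```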

3.3.3 Step 3: Construction of descriptors

The hierarchical structure of value represents the dimensions called fundamental points of view (FPsV), or criteria. The information in the cognitive maps is used to build ordinal scales in the hierarchical structure of value, named descriptors, which measure the performance range of each criterion (Bana e Costa et al., 1999). In order to establish a basis for comparing performance across descriptors, the decision-maker must identify the reference levels 'neutral' and 'good' (Lacerda et al., 2011a).
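
A minimal data sketch of a descriptor follows, assuming hypothetical level labels: it records the ordered ordinal levels together with the 'neutral' and 'good' reference levels.

```python
from dataclasses import dataclass

@dataclass
class Descriptor:
    """An ordinal scale plus the two reference levels used by MCDA-C."""
    name: str
    levels: list[str]   # ordered from worst to best performance
    neutral: str        # 'neutral' reference level
    good: str           # 'good' reference level

# Hypothetical levels for a descriptor discussed later in the case study.
kit_internal = Descriptor(
    name="kit technology used internally",
    levels=["0 kits", "2 kits", "4 kits", "6 kits", "8 kits"],
    neutral="2 kits",
    good="6 kits",
)
assert {kit_internal.neutral, kit_internal.good} <= set(kit_internal.levels)
```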

3.3.4 Step 4: Independence analysis

The MCDA-C uses a compensatory model to build the global evaluation model. This model requires the compensation rates used in the integration to be constant; thus, the criteria must be independent. The ordinal and cardinal independence analysis is conducted in this phase (Lacerda et al., 2011b).

3.3.5 Step 5: Construction of value functions and identification of compensation rates

The next step in the MCDA-C methodology is the transformation of the ordinal scales into value functions. This transformation requires the decision-makers to describe the different levels of attractiveness for all the levels of the ordinal scale. Integration is achieved by associating the compensation rates with the increase in performance when improving from the 'neutral' reference level to the 'good' reference level for each descriptor (Lacerda et al., 2011b).
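
A minimal sketch of this transformation, assuming the cardinal scores have already been elicited (for example with the MACBETH method used later in the case study; the raw scores below are hypothetical): the scores are rescaled linearly so that the 'neutral' level maps to 0 and the 'good' level to 100, the anchoring convention of MCDA-C.

```python
# Rescale an elicited cardinal scale so that 'neutral' scores 0 and
# 'good' scores 100. The raw scores are hypothetical stand-ins for the
# output of a judgment session.

def anchor(raw: dict[str, float], neutral: str, good: str) -> dict[str, float]:
    lo, hi = raw[neutral], raw[good]
    return {level: 100.0 * (v - lo) / (hi - lo) for level, v in raw.items()}

raw = {"0 kits": 0, "2 kits": 30, "4 kits": 55, "6 kits": 80, "8 kits": 100}
print(anchor(raw, neutral="2 kits", good="6 kits"))
# '2 kits' -> 0.0 and '6 kits' -> 100.0; '8 kits' scores above 100,
# showing performance beyond the 'good' reference level.
```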

3.3.6 Step 6: Identification of impact profile of alternatives

With the multi-criteria model, it is possible to measure the performance of the alternatives. The models built by the MCDA-C methodology make an explicit evaluation possible in the cardinal and graphical forms, facilitating the understanding of the strong as well as the weak points of the alternatives evaluated (Lacerda et al., 2011b).

3.3.7 Step 7: Sensitivity analysis

The model allows for the development of a sensitivity analysis of the impact of alternatives in the scales, in the attractiveness difference in the cardinal scales as well as in the compensation rates (Lacerda et al., 2011b).
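
The sketch below illustrates one such analysis under stated assumptions (the weights follow the compensation rates of Figure 5; the partial values are hypothetical): one compensation rate is perturbed, the others are renormalized so the rates still sum to one, and the global score is recomputed.

```python
# Simple sensitivity analysis sketch: perturb one compensation rate and
# renormalize the others proportionally, then recompute the global score.

def global_score(weights: list[float], values: list[float]) -> float:
    return sum(w * v for w, v in zip(weights, values))

def perturb(weights: list[float], i: int, factor: float) -> list[float]:
    w = list(weights)
    w[i] *= factor
    scale = (1.0 - w[i]) / (sum(w) - w[i])   # keep the rates summing to 1
    return [v if j == i else v * scale for j, v in enumerate(w)]

weights = [0.55, 0.30, 0.15]    # compensation rates as in Figure 5
values = [40.0, 0.0, 100.0]     # hypothetical partial values of an action
for factor in (0.9, 1.0, 1.1):  # vary the first rate by +/- 10%
    print(factor, round(global_score(perturb(weights, 0, factor), values), 1))
```

If the ranking of alternatives is stable under such perturbations, the recommendation can be considered robust.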

3.3.8 Step 8: Formulation of recommendations

The knowledge generated by the MCDA-C allows the decision-makers to visualize where the performance of the alternatives is 'good', 'normal' or 'poor'. The levels of the ordinal scales allow the identification of actions to improve performance. Combining this element with the global evaluation obtained in the previous step, it is possible to create alternatives and assess their impact on the context (Lacerda et al., 2011b). This process is called the recommendation stage.
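
As a small sketch of this step (scores hypothetical), criteria whose current level falls below the 'neutral' anchor, i.e. below 0 points, are flagged as the first targets for improvement actions:

```python
# Flag criteria performing below the 'neutral' anchor (0 points) as
# priority targets for improvement actions. Scores are hypothetical.
current = {"kit used internally": -20.0,
           "kit prospected": 45.0,
           "kit tendency": 0.0}
targets = sorted((c for c, v in current.items() if v < 0),
                 key=lambda c: current[c])
print(targets)  # worst-performing criteria first
```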

4. CONSTRUCTIVIST APPROACH FOR PROCESS IMPROVEMENT: A CASE STUDY

The next sub-sections present a case study using the MCDA-C to assess and create improvement actions in the processes of a software company.

4.1 Step 1: Contextualization

The work of Niazi et al. (2005b) presented an empirical study of the critical success factors for SPI model adoption, based on the published literature and on research undertaken with software organizations. Senior management commitment, staff involvement and team training emerged as the main critical success factors.

In the multi-criteria decision aiding methodology, the decision-maker (a person or a group of people) is asked to participate in the whole problem description. Interaction among the decision-maker, the facilitator (the person who facilitates and aids the decision process) and the procedures occurs throughout the decision process (Roy, 1993; Barthélemy et al., 2002). In MCDA-C, when asking how SPI and SPA work, it is necessary to define the players in the decision system: the decision-makers (the people with ultimate responsibility for the decision), the actors (people involved in a passive way), the representatives (people who represent the decision-maker in his/her absence) and the facilitator (Lacerda et al., 2011b).

The research commenced with meetings with the decision-makers of a software company based in Santa Catarina State, Brazil, in order to contextualize the problem. The company wanted to have a method to measure performance and create process improvement action plans in the light of the strategic objectives of its managers.

The interview resulted in the establishment of a problem focus, with definitions of:

  • Problem label: assessing and creating process improvement action plans in the light of the strategic objectives of the company.

  • Decision-maker: operations director.

  • Relevant stakeholders: other directors and project managers.

  • Those directly affected by decisions: employees.

  • Those indirectly affected by decisions: customers.

  • Facilitators: researchers.

4.2 Step 2: Hierarchical structure of value

One of the problems decision-makers face in an organization is identifying and prioritizing the process areas that should be assessed and improved (Huang et al., 2006; Trkman, 2010).

Using MCDA-C, improvement opportunities are identified by the players in the decision process, who, in an interactive form, initially identify the primary elements of evaluation (PEEs). The PEEs are identified during meetings in which the players freely mention their values, concerns, problems and actions: everything related to the process they want to improve (Bana e Costa et al., 1999).

Afterwards, action-oriented concepts are built from the PEEs, each associated with a psychological opposite (Eden et al., 1992), so that every concept has two poles separated by "...", read as "instead of".
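
For illustration (the wording below is hypothetical), a concept can be held as a pair of poles joined by "...":

```python
# Hypothetical concept: present pole and its psychological opposite,
# separated by "..." and read as "instead of".
concept = ("have a standard technology kit",   # direction of preference
           "choose technologies ad hoc")       # psychological opposite
print(" ... ".join(concept))
```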

Table 2 shows a sub-set of the PEEs and concepts built in this case study.

4.3 Step 3: Construction of descriptors

The next step in MCDA-C is to build cognitive maps (Eden et al., 1992). The map structure is formed by means concepts and end concepts, related by influence connections (Montibeller et al., 2007). Figure 2 presents an example based on the PEEs and concepts built in the previous examples. The top of the map shows the concern area technology definition. The original concepts are numbered and centred in the map. Concepts towards the ends are obtained through the question 'Why is this concept important?'; concepts towards the means are obtained through the question 'How could this concept be achieved?'.


Nodes with strong connections among themselves, called intra-component connections, are grouped into clusters. In the example in Figure 2, two of the three clusters are highlighted; the third cluster (hidden in the figure) concerns object-oriented tools, as expressed by concept 20.

The cognitive map built from each area of the decision-makers' concerns yields a set of candidate fundamental points of view (FPsV) (Lacerda et al., 2011a). These FPsV are represented in a tree structure. Figure 3 shows the structure derived from the previous example. The global point of view technology definition is decomposed into three areas of interest: (1) standard technology, (2) technology time definition and (3) OO tools. The interest area standard technology, for example, has the FPV candidates promote technology used internally and technology area concept. At this point in the use of the MCDA-C methodology, the problem is structured. The next steps prepare the assessment of potential actions.


Through the MCDA-C methodology, a multi-criteria model is built for the assessment of potential actions and improvements using descriptors. The descriptor has the function of giving a better understanding of the decision-maker's concern, and the value function measures the difference in attractiveness between levels of a descriptor (Bana e Costa et al., 1999). The descriptor should be measurable, operational (easy to define and measure the data to be collected) and understandable (Keeney, 1992; Keeney, 1996).

In order to build the descriptors, the facilitators used all the concepts related to the respective cluster. Figure 4 presents three descriptors used to measure the promote technology used internally point of view.


4.4 Step 4: Independence analysis

In this case study, all the criteria were analysed to check the independence of preferences, according to the details of Lacerda et al. (2011a).

4.5 Step 5: Construction of value functions and identification of compensation rates

There are several methods for building the value function. In this article, a semantic judgement method, MACBETH, is used (Bana e Costa et al., 1997; Bana e Costa et al., 2005). MACBETH uses judgments of the difference in attractiveness between two levels of an ordinal scale. Table 3 presents the value function of the descriptor kit technology used internally.

When a potential action is submitted for assessment in the multi-criteria model, it is rarely the best in relation to all the criteria analysed (Lacerda et al., 2011b), which makes it difficult to identify the most attractive action overall. Compensatory models arose to address this, aiming to integrate several dimensions into a single measure without mischaracterizing the multi-criteria model. The compensation rate is the means of aggregating these assessment dimensions. The preliminary action mentioned here is the 'status quo' of the process or action to be assessed; its assessment shows the decision-maker how much improvement is expected after the action is implemented.

Figure 5 presents the compensation rates obtained by the pairwise comparison method. As an example, the compensation rates of the criteria (a) kit technology used internally (55%), (b) kit technology prospected (30%), and (c) kit technology tendency (15%) are shown in the lower part of Figure 5.


First, it is necessary to order the criteria by preference. To do this, as Figure 6 shows, an ordering matrix was used, elaborated with fictitious actions, and the decision-maker was asked: among fictitious action 1, with which it is possible to build only 4 technology kits used internally, fictitious action 2, with which it is possible to build only 4 prospected technology kits, and fictitious action 3, with which it is possible to build only 4 technology tendency kits, which is your preference? The decision-maker answered that fictitious action 1 was preferred to the others; the green line in Figure 6 represents this preference graphically, the good level being preferred to the neutral level. These judgments are recorded in an ordering matrix (see Table 4), where the value 1 is attributed to the row kit technology used internally in the columns kit technology prospected and kit technology tendency. Subsequently, the other combinations of fictitious actions are tested, yielding the preference order of the analysis criteria. Then, using the weighting version of the MACBETH software in a similar way to the determination of the value functions, the attractiveness of moving from one impact level to another is judged. Table 5 shows the result of this judgment, using the semantic categories (C0 - indifferent, C1 - very weak, C2 - weak, C3 - moderate, C4 - strong, C5 - very strong, and C6 - extreme). For instance, to facilitate the judgment the decision-maker can be asked: given that kit technology used internally is preferred to kit technology prospected and kit technology tendency, what is the loss in attractiveness in exchanging kit technology used internally for kit technology prospected? For this example, the decision-maker answered that the loss of attractiveness is strong (C4).
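
A sketch of this ordering step follows (the pairwise judgments mirror the preferences reported above): cell (i, j) of the matrix holds 1 when criterion i is preferred to criterion j, and the row sums give the preference order, as in Table 4.

```python
# Build the ordering matrix from pairwise preference judgments and rank
# the criteria by row sums. Judgments follow the example in the text.
criteria = ["kit used internally", "kit prospected", "kit tendency"]
prefers = {("kit used internally", "kit prospected"),
           ("kit used internally", "kit tendency"),
           ("kit prospected", "kit tendency")}

matrix = [[1 if (a, b) in prefers else 0 for b in criteria] for a in criteria]
row_sum = {c: sum(matrix[i]) for i, c in enumerate(criteria)}
order = sorted(criteria, key=row_sum.get, reverse=True)
print(order)  # most preferred criterion first
```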


A global assessment of a potential action a is calculated by:

V(a) = Σ_{j=1}^{m} Wj · VFPVj(a)

Where:

  • V(a) is the global assessment of a potential action a belonging to A;

  • A is the set of all possible actions;

  • a is the action to be measured;

  • Wj is the compensation rate for criterion j, which transforms partial units of value of each FPVj into global units of value, within the range determined by the good and neutral levels;

  • VFPVj(a) is the value function that determines the local score (attractiveness) of action a in FPVj, for j = 1, 2, ..., m;

  • m is the number of points of view in the model.
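
A direct transcription of this additive formula into code follows (the weights match the compensation rates of Figure 5; the partial scores of the 'status quo' action are hypothetical):

```python
# Additive aggregation: V(a) is the weighted sum of the partial value
# functions VFPVj(a). Weights follow Figure 5; scores are hypothetical.

def V(a: dict[str, float], w: dict[str, float]) -> float:
    return sum(w[j] * a[j] for j in w)

w = {"kit used internally": 0.55, "kit prospected": 0.30,
     "kit tendency": 0.15}
status_quo = {"kit used internally": 30.0, "kit prospected": 20.0,
              "kit tendency": 35.0}
print(round(V(status_quo, w), 1))  # global value of the 'status quo'
```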

4.6 Step 6: Identification of impact profile of alternatives

The global assessment is presented in Table 6. Note that the first column presents the FPsV and their descriptors, and the second column the compensation rates from Figure 5. The following five columns show the value functions of the descriptors. The last column has the global value of the action, calculated by the function V(a). Consequently, according to the decision-maker's judgment, for the potential 'status quo' action the processes related to the objective technology definition have a global value of 28 points. Each action can now have its impact calculated, and so the value that each process improvement will deliver after its implementation can be estimated.

4.7 Steps 7 and 8: Sensitivity analysis and formulation of recommendations

The knowledge developed in the decision-aiding process helped the managers to measure, on a cardinal scale, the contribution that process improvement actions may make to the decision-makers' strategic objectives. As can be observed in Table 6, the global measurement of the studied company's current situation was 28 points. With this knowledge, the decision-makers started a new procedure to sort the improvement requests and improve the current situation.

A common problem of normative models is to create alternatives before knowing the necessary actions in the specific decision context. The presented methodology is first concerned with understanding and explaining the decision-maker's objectives in an ordinal way.

After that, the model built with the MCDA-C methodology helps the decision-makers to focus on creating process improvement actions, once all their concerns have been expressed as descriptors in the model. The technical and expert teams could check each descriptor and determine the possibilities for improving it using the current resources (Keeney, 1996).

This activity could elicit many process improvement opportunities, making it difficult to determine which action is more likely to improve the context globally. In this case, it is necessary to use the cardinal evaluation to measure the global contribution of each action.

In the case study, two sets of project actions were created. One project focused on improving communication in order to collaborate on technology more efficiently, and another aimed to increase the speed of technical definitions. Table 7 shows the impact of the two projects on the global objectives and highlights the decision-makers' preference for funding the communication project before the velocity project.
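
A sketch of this comparison under hypothetical partial scores (only the resulting order, communication before velocity, follows the case): each project's global gain over the 'status quo' is computed and the projects are ranked.

```python
# Rank candidate projects by the global gain each produces over the
# 'status quo'. Weights follow Figure 5; partial scores are hypothetical.

def V(a: dict[str, float], w: dict[str, float]) -> float:
    return sum(w[j] * a[j] for j in w)

w = {"kit used internally": 0.55, "kit prospected": 0.30, "kit tendency": 0.15}
status_quo = {"kit used internally": 30.0, "kit prospected": 20.0,
              "kit tendency": 35.0}
projects = {
    "communication": {"kit used internally": 70.0, "kit prospected": 40.0,
                      "kit tendency": 35.0},
    "velocity": {"kit used internally": 45.0, "kit prospected": 60.0,
                 "kit tendency": 50.0},
}
base = V(status_quo, w)
gains = sorted(((V(s, w) - base, p) for p, s in projects.items()), reverse=True)
print(gains)  # fund the highest-gain project first
```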

The next stage of MCDA-C is to verify how robust the projects are in the face of changes in the model's parameters. This procedure is named sensitivity analysis; details can be obtained from Bana e Costa et al. (1999).

5. CONCLUSIONS

This article summarized the components and assessment methods of CMMI and SPICE, the two models most used by software organizations.

Even with this support, these models have not been adopted on a large scale. A survey of the published work related to SPI and SPA revealed attempts to resolve the models' weaknesses with palliative solutions.

The MCDA-C is presented as an alternative for SPI and SPA and has the advantage of being supported by a constructivist paradigm. In this methodology, the problem is structured with the players and takes their concerns about the context into account. Consequently, the results of the improvements address the specific objectives of the decision-makers. Instead of process players prioritizing which BPA should come first, through an interactive process they structure the problem in accordance with their perceptions and objectives.

Regarding the first specific objective of this research, Section 3, 'RESEARCH METHODOLOGY', explained the methodological framing of this research and explored the differences between normative approaches, such as CMMI and SPICE, and constructivist approaches. In addition, a performance measurement methodology for generating a better understanding of the objectives of process improvement in a specific organization was presented.

In order to address the second specific objective, a case study was presented in Section 4, 'CONSTRUCTIVIST APPROACH FOR PROCESS IMPROVEMENT: A CASE STUDY', to illustrate how the proposed methodology can assess and create decision opportunities in IT process improvement programmes.

The MCDA-C methodology has shown its importance in supporting IT project management and other strategic contexts. However, it is recommended that it be applied in other contexts and organizations to observe its generality.

It is important to highlight that the models generated in each situation are specific to the context and the method utilized by this paper may not always be a feasible approach, especially within the context of repetitive decision-making situations where the time required to make decisions is often crucial.

6. ACKNOWLEDGEMENTS

The authors would like to thank Proof-Reading-Service.com for reviewing the English version of this paper, as well as the referees, whose considerations improved this work.

Luiz Carlos Mesquita Scheid

Universidade Federal de Santa Catarina, Brasil

Sandra Rolim Ensslin

Programa de Pós-Graduação em Engenharia de Produção

Universidade Federal de Santa Catarina Campus Universitário

Trindade, Caixa Postal 476

CEP 88040-900, Florianópolis, SC, Brasil

Rogerio Tadeu de Oliveira Lacerda

Programa de Pós-Graduação em Administração da UNISUL

Florianópolis, SC, Brasil

Email: rogerlacerda@gmail.com

http://lattes.cnpq.br/7209487473702675

Manuscript first received 04/04/2011

Manuscript accepted 17/07/2012

  • Azevedo, R. C., Ensslin, L., Lacerda, R. T. O., França, L. A., Gonzalez, C. J. I., Jungles, A. E.; Ensslin, S. R. (2011) Avaliação de desempenho do processo de orçamento: estudo de caso em uma obra de construção civil. Ambiente Construído, v.11, n.1, p.85-104.
  • Azevedo, R. C., Lacerda, R. T. O., Ensslin, L., Jungles, A. E.; Ensslin, S. R. (2012) Performance Measurement to Aid Decision Making in the Budgeting Process for Apartment Building Construction: A Case Study Using MCDA-C. Journal of Construction Engineering and Management.
  • Bana e Costa, C., Corte, J. M.; Vansnick, J. C. (2005) On the mathematical foundation of MACBETH. Multiple Criteria Decision Analysis: state of the art surveys, p.409-437.
  • Bana e Costa, C. A., Ensslin, L., Correa, E. C.; Vansnick, J.-C. (1999) Decision Support Systems in action: Integrated application in a multicriteria decision aid process. European Journal of Operational Research, v.113, n.2, p.315-335.
  • Bana e Costa, C. A.; Vansnick, J. C. (1997) Applications of the MACBETH approach in the framework of an additive aggregation model. Journal of MultiCriteria Decision Analysis, v.6, n.2, p.107-114.
  • Barthélemy, J., Bisdorff, R.; Coppin, G. (2002) Human centered processes and decision support systems. European Journal of Operational Research, v.136, n.2, p.233-252.
  • Brunswik, E., Hammond, K.; Stewart, T. (2001) The essential Brunswik: beginnings, explications, applications: Oxford University Press.
  • Coleman, G.; O'connor, R. (2008) Investigating software process in practice: A grounded theory perspective. Journal of Systems and Software, v.81, n.5, p.772-784.
  • da Rosa, F. S., Ensslin, S. R., Ensslin, L.; Lunkes, R. J. (2012) Environmental disclosure management: a constructivist case. Management Decision, v.50, n.6, p.1117-1136.
  • de Moraes, L., Garcia, R., Ensslin, L., da Conceição, M. J.; de Carvalho, S. M. (2010) The multicriteria analysis for construction of benchmarkers to support the Clinical Engineering in the Healthcare Technology Management. European Journal of Operational Research, v.200, n.2, p.607-615.
  • Della Bruna Jr., E., Ensslin, L.; Ensslin, S. R. (2011) Supply chain performance evaluation: a case study in a company of equipment for refrigeration. Proceedings of the 2011 IEEE International Technology Management Conference, San Jose, USA, June, pp. 969-978.
  • Eden, C., Ackermann, F.; Cropper, S. (1992) The analysis of cause maps. Journal of Management Studies, v.29, n.3, p.309-324.
  • Eden, C., Jones, S.; Simms, D. (1985) Messing about in Problems. R&D Management, v.15, n.3, p.255-255.
  • Ensslin, L., Dutra, A.; Ensslin, S. R. (2000) MCDA: a constructivist approach to the management of human resources at a governmental agency. International Transactions in Operational Research, v.7, n.1, p.79-100.
  • Ensslin, L., Giffhorn, E., Ensslin, S. R., Petri, S. M.; Vianna, W. B. (2010) Avaliação do Desempenho de Empresas Terceirizadas com o Uso da Metodologia Multicritério de Apoio à Decisão- Construtivista. Revista Pesquisa Operacional, v.30, n.1, p.125-152.
  • Habra, N., Alexandre, S., Desharnais, J. M., Laporte, C. Y.; Renault, A. (2008) Initiating software process improvement in very small enterprises: Experience with a light assessment tool. Information and software technology, v.50, n.7, p.763-771.
  • Herbsleb, J. D.; Goldenson, D. R. (1996) A systematic survey of CMM experience and results. IEEE, pp. 323-330.
  • Huang, S. J.; Han, W. M. (2006) Selection priority of process areas based on CMMI continuous representation. Information & Management, v.43, n.3, p.297-307.
  • Karlsson, C. (2008) Researching operations management: Routledge.
  • Keeney, R. L. (1992) Value-Focused Thinking: A Path to Creative Decisionmaking: Harvard University Press.
  • ______. (1996) Value-focused thinking: Identifying decision opportunities and creating alternatives. European Journal of Operational Research, v.92, n.3, p.537-549.
  • Kuilboer, J. P.; Ashrafi, N. (2000) Software process and product improvement: an empirical assessment. Information and software technology, v.42, n.1, p.27-34.
  • Lacerda, R. T. O., Ensslin, L.; Ensslin, S. R. (2011a) A performance measurement framework in portfolio management: A constructivist case. Management Decision, v.49, n.4, p.648-668.
  • ______. (2011b) A Performance Measurement View Of IT Project Management. The International Journal of Productivity and Performance Management, v.60, n.2, p.132-151.
  • Landry, M. (1995) A Note on the Concept of 'Problem'. Organization Studies, v.16, n.2, p.315.
  • Lee, H., Jung, H. W., Chung, C. S., Lee, J. M., Lee, K. W.; Jeong, H. J. (2001) Analysis of interrater agreement in ISO/IEC 15504-based software process assessment. IEEE, pp. 341-348.
  • Melão, N.; Pidd, M. (2000) A conceptual framework for understanding business processes and business process modelling. Information Systems Journal, v.10, n.2, p.105-129.
  • Montibeller, G., Belton, V., Ackermann, F.; Ensslin, L. (2007) Reasoning maps for decision aid: an integrated approach for problem-structuring and multi-criteria evaluation. Journal of the Operational Research Society, v.59, n.5, p.575-589.
  • Niazi, M.; Babar, M. A. (2009) Identifying high perceived value practices of CMMI level 2: an empirical study. Information and software technology, v.51, n.8, p.1231-1243.
  • Niazi, M., Babar, M. A.; Verner, J. M. (2010) Software Process Improvement barriers: A cross-cultural comparison. Information and software technology, v.52, n.11, p.1204-1216.
  • Niazi, M., Wilson, D.; Zowghi, D. (2005a) A framework for assisting the design of effective software process improvement implementation strategies. Journal of Systems and Software, v.78, n.2, p.204-222.
  • ______. (2005b) A maturity model for the implementation of software process improvement: an empirical study. Journal of Systems and Software, v.74, n.2, p.155-172.
  • Pitterman, B. (2000) Telcordia technologies: The journey to high maturity. Software, IEEE, v.17, n.4, p.89-96.
  • Roy, B. (1993) Decision science or decision-aid science? European Journal of Operational Research, v.66, n.2, p.184-203.
  • SEI. (2006) CMMI for Development, version 1.2.
  • Sheard, S. A.; Roedler, G. J. (1999) Interpreting continuous view capability models for higher levels of maturity. Systems Engineering, v.2, n.1, p.15-31.
  • Staples, M.; Niazi, M. (2008) Systematic review of organizational motivations for adopting CMM-based SPI. Information and software technology, v.50, n.7, p.605-620.
  • Staples, M., Niazi, M., Jeffery, R., Abrahams, A., Byatt, P.; Murphy, R. (2007) An exploratory study of why organizations do not adopt CMMI. Journal of Systems and Software, v.80, n.6, p.883-895.
  • Tasca, J., Ensslin, L., Ensslin, S.; Alves, M. (2010) An approach for selecting a theoretical framework for the evaluation of training programs. Journal of European Industrial Training, v.34, n.7, p.631-655.
  • Trkman, P. (2010) The critical success factors of business process management. International Journal of Information Management, v.30, n.2, p.125-134.
  • Tsoukias, A. (2008) From decision theory to decision aiding methodology. European Journal of Operational Research, v.187, n.1, p.138-161.
  • Yoo, C., Yoon, J., Lee, B., Lee, C., Lee, J., Hyun, S.; Wu, C. (2006) A unified model for the implementation of both ISO 9001: 2000 and CMMI by ISO-certified organizations. Journal of Systems and Software, v.79, n.7, p.954-961.
  • Zamcopé, F. C., Ensslin, L., Ensslin, S. R.; Dutra, A. (2010) Modelo para avaliar o desempenho de operadores logísticos: um estudo de caso na indústria têxtil. Gestão & Produção, v.17, n.4, p.693-705.