
## Pesquisa Operacional

*Print version* ISSN 0101-7438

### Pesqui. Oper. vol.32 no.3 Rio de Janeiro Sept./Dec. 2012 Epub Nov 30, 2012

#### http://dx.doi.org/10.1590/S0101-74382012005000022

**Influence of models and scales on the ranking of multiattribute alternatives**

**Helen M. Moshkovich ^{I}; Luiz Flávio Autran Monteiro Gomes^{II,*}; Alexander I. Mechitov^{III}; Luís Alberto Duncan Rangel^{IV}**

^{I}School of Business, University of Montevallo, Montevallo, AL, 35115, USA. E-mail: MoshHM@montevallo.edu

^{II}Ibmec/RJ, Av. Presidente Wilson, 118, Room 1110, 20030-020 Rio de Janeiro, RJ, Brazil. E-mail: autran@ibmecrj.br

^{III}School of Business, University of Montevallo, Montevallo, AL, 35115, USA. E-mail: mechitov@montevallo.edu

^{IV}UFF/EEIMVR/PUVR, Av. dos Trabalhadores, 420, 27255-125 Volta Redonda, RJ, Brazil. E-mail: duncan@metal.eeimvr.uff.br

**ABSTRACT**

This paper presents an application of three multiple criteria methods to ranking residential real estate options. The SAW and TODIM methods are based on eliciting the decision maker's preferences (weights and values) directly in a quantitative form, while using a linear (SAW) or non-linear (TODIM) aggregation function for evaluating alternatives. ZAPROS elicits and uses preferences in an ordinal form as an indirect comparison of trade-offs between criteria. Advantages and disadvantages of the different approaches are discussed.

**Keywords:** Simple Additive Weighting, Verbal Decision Analysis, Multicriteria Decision Aiding.

**1 INTRODUCTION**

The problem under consideration is the ranking of alternatives evaluated against a set of criteria (attributes). Usually it is assumed that better (more preferable) values against criteria lead to a better overall value of an alternative and to a higher rank in the preference order. To be able to estimate the overall value of each alternative, multiple criteria decision aiding techniques are used to construct an aggregation model on the basis of preference information provided by the decision maker. Such an aggregation model is often called a preference model, as it provides a preference structure on the set of alternatives which leads to the sought ranking.

Greco *et al.* (2008) differentiated two main types of preferential information: direct and indirect. Direct preferential information is used in the traditional aggregation paradigm in the form of criteria scale constants, aspiration levels, discrimination thresholds, or other parameters necessary for the aggregation model. These parameters are elicited from the decision maker and are then applied in a model to obtain the aggregated value of each alternative. These aggregated values are used to rank alternatives. Many popular multiple criteria methods are based on this paradigm: MAUT (Dyer, 2005), SMART (Edwards & Barron, 1994), AHP (Saaty, 2005) and others (see, *e.g.*, Keeney & Raiffa, 1993; Roy & Bouyssou, 1993; Pomerol & Barba-Romero, 2000; Belton & Stewart, 2002).

The difficulties in assessing these parameters have been widely noted (Borcherding *et al.*, 1991; Schoemaker & Waid, 1982; Weber & Borcherding, 1993). Many attempts have been made to make it easier for decision makers to provide this information in an ordinal form. Once obtained, this ordinal information is converted into numbers used in the aggregation model (see, *e.g.*, Cook & Kress, 1992; Kirkwood & Sarin, 1995; Podinovski, 1999; Weber, 1987). Attempts to limit the input to ordinal form usually fail to provide a complete order on the set of alternatives (Shepetukha & Olson, 2001).

Another way to obtain preference information is to use indirect methods of preference elicitation, preferably in an ordinal form. According to Greco *et al.* (2008), this approach is called the disaggregation (or regression) paradigm: the decision maker provides some holistic preference information (*e.g.*, pairwise comparisons of a small number of alternatives from the initial set). These methods are considered to require less cognitive effort from the decision maker. For example, the UTA method (Siskos *et al.*, 2005) derives additive utility functions from preference information in the form of a ranking of alternatives carried out by the decision maker. The derived functions may then be used to evaluate other alternatives. There is evidence, however, that comparing multiple criteria alternatives consistently is rather difficult and may be no less complicated than providing "weight trade-offs" or "constructing indifference curves for utility functions" in the direct aggregation approach. According to Larichev (1992), it is more comfortable for people to compare alternatives which differ on a relatively small number (2-3) of criteria.

In this paper we compare the results of applying the direct aggregation approach to the ranking of residential real estate alternatives with the results of indirect preference elicitation. The direct approach is represented by the methods TODIM (Gomes & Lima, 1992; Gomes & Rangel, 2009) and SAW (Simple Additive Weighting) (Keeney & Raiffa, 1993; Vincke, 1989), while the indirect approach is represented by the method ZAPROS (Larichev & Moshkovich, 1995, 1997), developed within the framework of Verbal Decision Analysis.

The goal of the research was to analyze the differences in the implementation and the stability of the results obtained through different approaches. The findings are presented and discussed later in the paper.

**2 DESCRIPTION OF METHODS**

**2.1 Problem statement**

The problem under consideration may be presented as follows. There is a set of alternatives *S* = {*a*_{1}, *a*_{2}, ..., *a _{n}*} and a set of criteria

*C*= {

*C*

_{1},

*C*

_{2},...,

*C*}. Each alternative

_{m}*a*, is evaluated against a set of criteria

_{j}*C*

_{1},

*C*

_{2},...,

*C*,

_{m}*i*= {1, 2,...,

*m*} and may be presented as a vector

*a*= {

_{j}*a*

_{1j},

*a*

_{2j}, ...,

*a*},

_{mj}*j*= 1, 2,...,

*n*, where

*a*is estimate of alternative

_{ij}*a*against criterion

_{j}*C*. The goal is to rank alternatives on the basis of this information according to the decision maker's preferences.

_{i} **2.2 The SAW Method**

The SAW (Simple Additive Weighting) method is one of the most popular multicriteria techniques and among the easiest to understand and use. The approach is based on Multiple Attribute Value Theory (MAVT) and assumes preferential independence of criteria (see, *e.g.*, Triantaphyllou, 2002). The SAW technique uses a linear additive function to estimate the value of each alternative in the form:

*V*(*a*_{j}) = Σ_{i=1}^{m} *w*_{i} *v*_{ij}     (1)

where *w*_{i} is the scale constant (weight) of the *i*-th criterion and *v*_{ij} is the value of alternative *a*_{j} evaluated by the *i*-th criterion. The single-attribute values, or utilities, *v*_{ij} reflect how well each alternative does on each criterion. Criterion weights are the relative scale constants of the different criteria leading to the overall value *V*(*a*_{j}) of the alternative. The higher the level of performance of alternatives according to the criteria with the highest weights, the higher their global value will be.

This approach is widely used in multiple criteria decision making. The main differences among implementations concern the ways of eliciting weights and single-attribute values (Keeney & Raiffa, 1993; Vincke, 1989). Weight elicitation may follow the reasonably sophisticated approach of swinging weights in SMARTS (Edwards & Barron, 1994), or elaborate elicitation through a system of lotteries with trade-offs (Keeney & Raiffa, 1993). In the majority of cases the decision maker evaluates criteria scale constants using some interval or cardinal scale (*e.g.*, the most important criterion is worth 100 points and appropriate points are assigned to the other criteria, or a 5-point scale is used to assign scale constants to criteria). The results of such evaluation are normalized, so that the sum of all criterion weights *w*_{i} is equal to 1.

Single-attribute values for criteria may also be evaluated in different ways. Rather often quantitative scales are normalized to produce comparable values. Qualitative scales may be converted into values directly by the decision maker (*e.g.*, the most preferred value is assigned 1, the least preferred 0, and the others are assigned values between 1 and 0). Sometimes, as with weights, the decision maker may evaluate them using some cardinal scale (*e.g.*, from 1 to 5); these estimates are then normalized in the same way as quantitative scales to produce the required values. In our case, normalization will be used to turn the values *a*_{ij} into the corresponding values *v*_{ij} as in formula (2). Without loss of generality we assume that for all criteria larger values constitute more preferable estimates.

Once the weights and single-attribute values are established, each alternative's global value is evaluated according to the additive function (1), and all alternatives are ranked on the basis of these global values *V*(*a*_{j}), *j* = 1, 2, ..., *n*.
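The procedure just described can be sketched in code. This is a minimal sketch: the min-max normalization used below for formula (2) is an assumption, since the paper's exact normalization formula is not reproduced in this text.

```python
# SAW sketch: normalize each criterion (assumed min-max normalization
# for formula (2)), then aggregate with the additive function (1).

def normalize(column):
    """Min-max normalize one criterion's values (larger = better)."""
    lo, hi = min(column), max(column)
    return [(a - lo) / (hi - lo) if hi > lo else 1.0 for a in column]

def saw_rank(alternatives, weights):
    """alternatives: list of criterion-value vectors a_j; weights sum to 1.
    Returns (alternative index, global value) pairs, best first."""
    m = len(weights)
    # normalized values v_ij, one column per criterion
    cols = [normalize([a[i] for a in alternatives]) for i in range(m)]
    # formula (1): V(a_j) = sum_i w_i * v_ij
    values = [sum(weights[i] * cols[i][j] for i in range(m))
              for j in range(len(alternatives))]
    return sorted(enumerate(values), key=lambda p: -p[1])
```

For example, `saw_rank([[1, 2], [3, 1]], [0.7, 0.3])` ranks the second alternative first, since it wins on the heavier criterion.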

**2.3 The TODIM Method**

The TODIM method (Gomes & Lima, 1992; Gomes & Maranhão, 2008) combines the MAVT approach, in the sense that it is based on an additive value function and preferential independence of criteria, with features of the so-called outranking methods (Brans & Mareschal, 1990; Roy & Bouyssou, 1993), as it evaluates the overall value of each alternative as a sum of relative "gains" and "losses" of that alternative against all other alternatives in the set.

The initial data for the problem, their normalization into *v*_{ij} values, and the criterion weights are defined exactly as in the SAW method. The main difference is how the overall alternatives' values *V*(*a*_{j}) are calculated. Computations by TODIM are carried out through the following steps:

2.3.1 Individual criterion weights are recalculated using the most "important" one (criterion *c* with the highest weight *w*_{c}), presenting criterion weights as a proportion of the most important one: for each *w*_{i}, *i* = 1, 2, ..., *m*, *w*_{ic} = *w*_{i}/*w*_{c}.

2.3.2 For each criterion *i* = 1, 2, ..., *m* and for each pair of alternatives *a*_{j} and *a*_{k} (*j*, *k* = 1, 2, ..., *n*), the "single-attribute dominance" Φ_{i}(*a*_{j}, *a*_{k}) is calculated according to formulas (3). Formulas (3) allow the value of relative "gains" and "losses" for two alternatives to be presented as an S-shaped function (see Fig. 1), which reflects the findings of Prospect Theory (Kahneman & Tversky, 1979) about how people essentially make decisions connected with risks. Above the horizontal axis of Figure 1 there is a concave curve representing the gains, and below the horizontal axis there is a convex curve representing the losses. The concave part reflects aversion to risk in the face of gains, and the convex part symbolizes the propensity to risk when dealing with losses.

2.3.3 For each pair of alternatives *a*_{j} and *a*_{k} (*j*, *k* = 1, 2, ..., *n*), the relative "dominance" δ(*a*_{j}, *a*_{k}) is calculated as the sum of the single-attribute dominance measures:

δ(*a*_{j}, *a*_{k}) = Σ_{i=1}^{m} Φ_{i}(*a*_{j}, *a*_{k})     (4)

The "global dominance" *G*(*a*_{j}) of each alternative *a*_{j}, *j* = 1, 2, ..., *n*, is calculated as the sum of its "dominances" over all other alternatives:

*G*(*a*_{j}) = Σ_{k=1}^{n} δ(*a*_{j}, *a*_{k})     (5)

The last step normalizes the "global dominances" to produce the relative overall value *V*(*a*_{j}) of each alternative:

*V*(*a*_{j}) = (*G*(*a*_{j}) − min_{k} *G*(*a*_{k})) / (max_{k} *G*(*a*_{k}) − min_{k} *G*(*a*_{k}))     (6)

These overall values of the TODIM method, ranging from 0 to 1, are used to rank the alternatives.

**2.4 The ZAPROS Method**

The ZAPROS method (Larichev & Moshkovich, 1995, 1997) is part of the Verbal Decision Analysis (VDA) paradigm (Moshkovich *et al.*, 2005). VDA acknowledges that many people are uncomfortable providing numerical values for qualitative notions, and that the requirement to provide such estimates may have an unexpected influence on decisions (Moshkovich *et al.*, 2002). The main peculiarity of VDA is that it is oriented toward using only ordinal judgments in preference elicitation and evaluation of alternatives. ZAPROS is oriented toward the construction of a partial order of alternatives on the basis of the so-called Joint Ordinal Scale.

**2.4.1 Step 1 - Construction of ordinal scales for all criteria**

Let λ_{i} denote the number of possible levels on the scale of the *i*-th criterion; then *X*_{i} = {*x*_{ij}} is the set of levels for the *i*-th criterion, rank-ordered from the most preferable to the least preferable one. We can then define the set of all possible alternatives in the criterion space as *X* = *X*_{1} × *X*_{2} × ... × *X*_{m}. The set of initial alternatives *A* is a subset of *X*.

ZAPROS also assumes that the overall value of each alternative is evaluated by an additive value function as in formula (1), but does not limit the value functions for individual criteria to any specific form. The value of the best level on the *i*-th criterion scale is 1 and the value of the least preferable level on this scale is 0: *v*(*x*_{i1}) = 1, *v*(*x*_{iλ_{i}}) = 0 for *i* = 1, 2, ..., *m*.

The description of alternatives using ordinal scales allows comparison of alternatives according to dominance: alternative *a* is not less preferable than alternative *b* if for each criterion *C*_{i} (*i* = 1, 2, ..., *m*) the estimate *a*_{i} of alternative *a* is not less preferable than the estimate *b*_{i} of alternative *b* (see Fig. 2 below). Usually in real tasks dominance does not lead to any practical order of alternatives, so ZAPROS suggests a next step of preference elicitation with the goal of constructing a Joint Ordinal Scale (JOS).
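The dominance check described above reduces to a component-wise comparison of level indices. A minimal sketch, assuming levels are encoded by their index on the ordinal scale (1 = best):

```python
def dominates(a, b):
    """Dominance on ordinal evaluations: a is not less preferable than b
    if a's level index is not worse (i.e., not larger, since level 1 is
    the most preferable) on every criterion."""
    return all(ai <= bi for ai, bi in zip(a, b))
```

For instance, `dominates((1, 2, 1), (2, 2, 3))` holds, while `(1, 3)` and `(2, 1)` are incomparable by dominance, illustrating why dominance alone rarely orders a real set of alternatives.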

**2.4.2 Step 2 - Construction of the Joint Ordinal Scale (JOS)**

The decision maker is asked to compare pairs of hypothetical alternatives, each with the best levels of attainment on all criteria but one. The number *N* of these alternatives is given by equation (7):

As a result, the decision maker compares alternatives' attainment levels on only two attributes, holding all other attribute values at the same best level. Possible responses are limited to two variants: one of the alternatives is preferred to the other (≻ will mean "more preferable") or they are equally preferable (≈ will mean "equally preferable"). An illustrative matrix of pairwise comparisons resulting from the decision maker's responses is presented in Figure 3. The numbers represent the levels in the ordinal scales (from second best to worst); in this illustration criterion *C*_{1} has 5 attainment levels while criterion *C*_{2} has only 4. Preference symbols accompanied by "!" in the matrix reflect the actual comparisons carried out by the decision maker; the others were derived through the transitivity of preferences.

Comparisons are carried out for all pairs of criteria. On the one hand, due to transitivity of preferences, the number of actual comparisons made by the decision maker is much smaller than the overall number of cells in all these matrices. On the other hand, transitivity of preferences makes it possible to test the decision maker's responses for consistency, thus providing a reliable, tested preference structure in the criterion space.
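The transitivity-based propagation can be sketched as a closure over elicited "at least as good as" pairs; the function name and encoding here are illustrative, not part of ZAPROS itself. An elicited strict preference that becomes symmetric after closure would signal an inconsistency in the responses.

```python
def transitive_closure(prefers):
    """prefers: set of (x, y) pairs meaning "x is at least as good as y",
    as elicited from the decision maker (an "equally preferable" answer
    contributes both (x, y) and (y, x)). Returns the transitively closed
    relation, from which derived comparisons can be read off."""
    closed = set(prefers)
    changed = True
    while changed:                       # propagate until a fixed point
        changed = False
        for (x, y) in list(closed):
            for (y2, z) in list(closed):
                if y == y2 and (x, z) not in closed:
                    closed.add((x, z))
                    changed = True
    return closed
```

For example, from the elicited pairs ("A2" at least as good as "B2", "B2" at least as good as "C2"), the closure derives ("A2", "C2") without asking the decision maker.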

When all necessary comparisons are carried out, the criterion value levels, starting with the second best, may be ranked. This ranking is called the Joint Ordinal Scale (JOS). The place of a criterion value level in this ranking is called its JOS rank and is denoted *J*(*x*_{ij}). The smaller the rank, the better the corresponding criterion level. Note that all the best values have the same highest rank: *J*(*x*_{11}) = *J*(*x*_{21}) = ... = *J*(*x*_{m1}). Thus, we construct a unique ordinal scale for all attributes with their possible values.

**2.4.3 Step 3 - Using JOS for pairwise comparison of alternatives from the set S**

Construction of the Joint Ordinal Scale provides a simple rule for comparison of multiattribute alternatives. Each vector *a* = (*a*_{1}, *a*_{2}, ..., *a*_{m}) may be rewritten in the form of the rank vector *J*(*a*) = (*J*(*a*_{1}), *J*(*a*_{2}), ..., *J*(*a*_{m})), where each component is substituted by its JOS rank.

The advantage of this presentation is due to the comparability of JOS ranks among criteria. We are not able to compare *x*_{iq} and *x*_{jt} directly, but the JOS rank *J*(*x*_{iq}) is always comparable with the JOS rank *J*(*x*_{jt}). The rule for comparison of two alternatives on the basis of the JOS is the following: *alternative **a** is not less preferable than alternative **b** if for each component a_{i} of alternative **a** there may be found a component b_{j} of alternative **b** such that J(a_{i}) is not worse than J(b_{j})* (see Fig. 4 for illustration).

The correctness of the rule in the case of an additive value function was proven in Larichev & Moshkovich (1995).

To easily implement this rule, it is enough to rearrange the elements of each rank vector in ascending order and use the dominance principle for pairwise comparison: *alternative **a** is not less preferable than alternative **b** if, with both rank vectors so ordered, J(a_{i}) ≤ J(b_{i}) for each i = 1, 2, ..., m*.
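The JOS comparison rule can be sketched directly from its sorted-vector formulation (smaller JOS rank = better); the function names are illustrative:

```python
def jos_not_less_preferable(ja, jb):
    """JOS rule: sort each rank vector ascending and compare
    component-wise (smaller JOS rank = more preferable)."""
    return all(x <= y for x, y in zip(sorted(ja), sorted(jb)))

def jos_compare(ja, jb):
    """Returns '>', '<', '=' or '?' (incomparable) for two rank vectors."""
    ab = jos_not_less_preferable(ja, jb)
    ba = jos_not_less_preferable(jb, ja)
    if ab and ba:
        return '='
    if ab:
        return '>'
    if ba:
        return '<'
    return '?'
```

Note that `jos_compare((1, 5), (2, 2))` yields `'?'`: neither sorted vector dominates the other, which is exactly the kind of incomparability ZAPROS leaves in the partial order.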

**3 THE CASE STUDY**

**3.1 Problem description**

The three presented methods were used to compare a set of 15 residential properties available for rent in the city of Volta Redonda in Brazil (Gomes & Rangel, 2009). The eight most important criteria for property evaluation were established by working with real estate agents and evaluators. The analysis in this case study aimed at assisting professionals in the real estate market to evaluate the alternatives more clearly in relation to the evaluation criteria; in other words, the results of such analysis provided realtors with a comprehensive way to evaluate properties. All weights used in this case study were provided by these realtors through interviews. For the SAW and TODIM methods, qualitative criteria were turned into cardinal scales while quantitative scales were left as they were: for qualitative criteria, the TODIM method relies on mapping readings along a qualitative, ordinal scale into corresponding readings along a cardinal scale, thereby transforming evaluations against qualitative criteria into numerical values. For ZAPROS, quantitative criteria were divided into the set of levels considered most appropriate by the decision maker, and these levels were ordered from the least to the most preferred one. The resulting system of criteria is presented in Table 1.

Fifteen alternatives were evaluated against these eight criteria. The result for quantitative scales is presented in Table 2.

**3.2 Ranking Alternatives using the SAW Method**

The first step in the SAW method is to normalize all scales. Formula (2) is used to obtain the *v*_{ij}, as all criteria are to be maximized. To evaluate the overall value of each alternative *V*(*a*_{j}), *j* = 1, 2, ..., *n*, we multiply each normalized value *v*_{ij} by the corresponding criterion weight *w*_{i} presented in the last column of Table 1, and then sum up the results over all criteria *i* = 1, 2, ..., *m* according to formula (1). The resulting ranking of alternatives, accompanied by their overall values, is presented in Table 3 in the first column at the left (SAW method).

**3.3 Ranking Alternatives using the TODIM Method **

The TODIM method uses the same normalization procedure to obtain criterion values. To evaluate the overall value of each alternative using the data from Table 3, it is necessary to go through several steps. As the process was described in detail in Gomes & Rangel (2009), we will just illustrate some steps of the process.

The first step is to transform the initial criterion weights *w*_{i} (*i* = 1, 2, ..., *m*) into relative weights using a reference (*e.g.*, the most important) criterion weight. The reference criterion in this problem is the first criterion, A, and its weight is 0.25. The relative weights are *w*_{i1} = *w*_{i}/*w*_{1}, *i* = 1, 2, ..., *m*. For criterion A the relative weight is 1 (0.25/0.25 = 1); for criterion B it is 0.15/0.25 = 0.60. Analogously, the relative weights for the other criteria are 0.40, 0.80, 0.20, 0.40, 0.20, and 0.40. The sum of all relative weights is equal to 4.

Next, the functions Φ_{i}(*R*_{j}, *R*_{k}) for each criterion *i* and for each pair of alternatives *j* and *k* are calculated according to formulas (3). Let us illustrate the process for alternatives *R*_{1} and *R*_{2} for criterion B: *v*_{21} = 0.103 and *v*_{22} = 0.064, so *v*_{21} > *v*_{22} and the "gain" branch of formulas (3) applies.

To evaluate the dominance of alternative *R*_{1} over alternative *R*_{2}, we calculate the functions Φ_{i} for each criterion (*i* = 1, 2, ..., *m*) and sum up the results to produce the δ value according to formula (4).

To evaluate the global dominance measure *G*(*R*_{1}) for alternative *R*_{1}, the δ values over all alternatives are summed up according to formula (5). The overall value of alternative *R*_{1} is then obtained through normalization of the global measures using formula (6). Results for all alternatives are presented in Table 3.
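The relative-weight arithmetic above can be checked numerically. The full weight vector for criteria A through H is inferred here from the relative weights quoted in the text (with A = 0.25 as the reference), so treat it as a reconstruction rather than a quotation of Table 1:

```python
# Criterion weights for A..H implied by the relative weights in the text
# (0.25 is criterion A, the reference criterion).
weights = [0.25, 0.15, 0.10, 0.20, 0.05, 0.10, 0.05, 0.10]
w_ref = max(weights)                         # 0.25, criterion A
relative = [round(w / w_ref, 2) for w in weights]
# relative == [1.0, 0.6, 0.4, 0.8, 0.2, 0.4, 0.2, 0.4], summing to 4
```

The sum of the relative weights is 4, matching the value stated in the text.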

Methods TODIM and SAW use the same formula for scales (criterion values) and the same criterion weights, but different aggregation models. The rankings of the same alternatives, though, are somewhat different for the two methods (see Fig. 5). There are six alternatives with different ranks under the two models: *R*_{8}, *R*_{9}, *R*_{10}, *R*_{12}, *R*_{13}, and *R*_{15}. Without additional information it is difficult to choose which model best represents the decision maker's preferences, as the assumptions about the decision maker's preference structure seem to be the same in both methods.

If we construct a matrix of pairwise comparisons of alternatives *R*_{1} to *R*_{15} using the SAW ranking and compare it with the one obtained through the TODIM method, there are 7 reversals in the comparison of alternatives. Reversals occurred between alternative *R*_{13} and alternatives *R*_{1}, *R*_{4}, *R*_{8}, and *R*_{15}; between alternatives *R*_{4} and *R*_{8}; and between alternative *R*_{9} and alternatives *R*_{10} and *R*_{12}.

**3.4 Implementation of the ZAPROS method**

ZAPROS uses ordinal scales as well as ordinal pairwise comparisons to create the Joint Ordinal Scale (JOS) in a reliable fashion and then uses it to make binary comparisons of the alternatives (thus providing a partial order on the set of 15 alternatives). It is reasonable to expect that alternatives with "reversed" preferences in the previous two models would be left incomparable in ZAPROS as they are evidently close in their overall value.

To obtain the Joint Ordinal Scale (JOS), the ordinal scales for all criteria are used (see Table 1). The decision maker is asked to compare alternatives differing in values against only two criteria, with all other criterion values at the best possible level. The decision maker responds to questions of the following type:

"What would you prefer: an alternative with all the best values but the second best value against criterion A, or an alternative with all the best values but the second best value against criterion B?"

This question may be formulated in a simpler form as follows:

"Do you prefer to have an alternative which has an excellent location and with an area size of 200 to 270 sq. meters or an alternative with a good location and an area size of over 270 sq. meters?"

As a result, for each pair of criteria a small matrix of preferences is formed (see an example in Fig. 6). Symbol "≻" means "more preferable", "≺" means "less preferable", and "≈" means "equally preferable".

To fill in the first matrix, the decision maker had to carry out only 4 pairwise comparisons (marked with an exclamation mark). All other comparisons were derived on the basis of transitivity of preferences. In the second matrix, the number of comparisons carried out directly by the decision maker was 3. Some comparisons are double-checked when the comparisons for criteria *B* and *C* are formed (see Fig. 7). For example, from Figure 6 we have *A*_{4} ≈ *B*_{3}. This means that the preferences of *A*_{4} relative to criterion *C* presented in Figure 6 (*A-C*) have to be the same as those of *B*_{3} relative to *C* presented in Figure 7. Analogously, *A*_{3} ≈ *C*_{2}, so all comparisons for *A*_{3} in the first matrix have to hold for *C*_{2} in the matrix in Figure 7. As a result, almost all comparisons in the matrix in Figure 7 may be derived from the transitivity relationships and the previous comparisons.

The transitivity of preferences makes it possible to construct an effective procedure of pairwise comparisons (for more details on the procedure see Larichev & Moshkovich, 1995, 1997).

The matrices are constructed for all pairs of criteria, producing a reliable, complete system of pairwise comparisons. This system is used to rank-order all criterion values, providing a rank for each of them. The resulting Joint Ordinal Scale is presented in Table 4 (a smaller rank corresponds to a more preferable criterion value).

To compare the 15 alternatives on the basis of the JOS, we substitute each criterion value with its corresponding rank in the JOS, then reorder the ranks in ascending order and compare the resulting alternatives on the basis of the dominance rule.

All alternatives are first evaluated using the ordinal scales presented in Table 1. For example, alternative *R*_{1}'s initial evaluation was (3, 290, 3, 3, 1, 6, 4, 0). Using the ordinal scales, alternative *R*_{1} is presented as (*A*_{3}, *B*_{4}, *C*_{3}, *D*_{3}, *E*_{2}, *F*_{3}, *G*_{5}, *H*_{1}).

At the next step, all alternatives are evaluated through JOS ranks. Let us illustrate the process using alternative *R*_{1} again. Value *A*_{3} has a rank of 6 (see Table 4), value *B*_{4} has a rank of 1, and so on. As a result, *R*_{1} is presented through the JOS as (6, 1, 1, 4, 3, 2, 1, 7). The ordered rank presentation JOS(*R*_{1}) is (1, 1, 1, 2, 3, 4, 6, 7), as seen in Table 5.
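The reordering step for *R*_{1} is a simple sort of its JOS rank vector:

```python
# R1's JOS ranks, as read off Table 4 in the text.
r1_jos = (6, 1, 1, 4, 3, 2, 1, 7)
ordered = tuple(sorted(r1_jos))
# ordered == (1, 1, 1, 2, 3, 4, 6, 7), the ordered presentation in Table 5
```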

Evaluations in Table 5 were used to compare pairs of alternatives on the basis of dominance. For example, while comparing alternative *R*_{1} with *R*_{2} it is easy to see that alternative *R*_{1} has all values more preferable than alternative *R*_{2}. On the other hand, alternatives *R*_{6} and *R*_{7} are incomparable: alternative *R*_{7} is better than alternative *R*_{6} in position 1 but less preferable in other positions. The resulting matrix of pairwise comparisons is presented below in Figure 8.

Symbol "?" is used for alternatives left incomparable. Boldfaced cells show the pairwise comparisons that were reversed between the SAW and TODIM methods. It is reasonable to expect that alternatives with "reversed" preferences in the previous two models would be left incomparable in ZAPROS. Many studies show that different methods provide stable comparisons of alternatives only when the alternatives differ substantially in value; when alternatives are close in value, they should be considered incomparable (or equal) within the criterion system used (Larichev *et al.*, 1995; Olson *et al.*, 1995).

All cases of reversed preferences in TODIM and SAW were left incomparable when using JOS. There were no reversals with JOS compared to stable preferences in both TODIM and SAW. Some additional alternatives were left incomparable when using JOS due to a limited compensatory nature of ordinal comparisons. Incomparable alternatives in ZAPROS usually had ranks close to each other in TODIM and SAW.

The only exception to this pattern was alternative *R*_{7}. This alternative was indeed the worst one in both methods. The implementation of ZAPROS was not able to compare alternative *R*_{7} with quite a few other alternatives from the list due to an unusual combination of values in this alternative. Alternative *R*_{7} has the best value against criterion H (Security), which leads to rank 1 in the JOS representation, while many of its other criterion values are at their lowest level. Thus, the alternative is incomparable with any alternative lacking the best value against at least one criterion, despite "good" values against all other criteria.

Analysis of incomparable alternatives may flag such situations, which may be easily resolved through additional pairwise comparisons by the decision maker (see Moshkovich *et al.*, 2002). These comparisons are "goal-oriented": as with the JOS, the decision maker compares alternatives differing in values against two criteria, but there is no requirement that only one value in each alternative differ from the best level. We asked the decision maker: "Would you prefer an alternative with 'no additional security' but an 'area size of 200-270 sq. m.' to an alternative with a 'doorman and security cameras' but an 'area size of less than 125 sq. m.'?" The question represents a comparison of *B*_{3}*H*_{1} with *B*_{1}*H*_{2}. The first combination was preferred to the second, which made alternative *R*_{7} less preferable than all the other alternatives in the comparisons.

**4 DISCUSSION AND CONCLUSIONS**

The presented study confirms the conclusion that it is difficult to ensure reliable evaluation and selection of an appropriate model in a multiple criteria ranking task. Implementation of the same criterion weights and scale transformations for criterion values produced significant differences in the ranking of alternatives when two different methods were used for the aggregation of the preferential information.

The Simple Additive Weighting (SAW) model produced a different ranking for 6 out of 15 alternatives when compared to the ranking obtained through the TODIM method. Both methods assume preferential independence of criteria, but TODIM also takes into account relative "gains" and "losses" of one alternative compared to all the others. This suggests that the TODIM model is dependent on the set of alternatives, while the SAW model should be much less dependent on the actual set. In general this is not so, as the criterion values for both methods are produced from the actual scale values in the data set. If we eliminate, for example, alternative R_{7}, which both methods stably considered the least preferable, there will be no changes in the ranking produced by TODIM, but there will be quite a few changes in the ranking produced by the SAW method, as presented below (in Fig. 9):

The changes are boldfaced. As can be seen, it is difficult to decide on the preferable method, as well as to evaluate the quality of the result and its stability with respect to the set of alternatives.

Method ZAPROS does not require the decision maker to produce criteria weights in any form, or to decide how to evaluate criterion values and how to combine them into an overall value. It is based on the same assumption of criteria independence, but uses ordinal scales for all criteria, eliminating essential changes in the results due to slight differences in the actual values present in the set of alternatives. The decision makers acknowledged that it was even easier to decide on ordinal scales for the quantitative criteria than it was to produce cardinal scales for the qualitative ones.

The decision maker's preferences were elicited through pairwise comparison of hypothetical alternatives differing in values against only two criteria, with all other values at their best levels. This type of comparison was rather easy for the decision maker and did not require understanding of underlying notions or principles other than transitivity of preferences. This approach presents an indirect type of preference elicitation. The process, though somewhat lengthy, produced reliable information, as it was partially double-checked using the transitivity relationship. This information was used to form the Joint Ordinal Scale, which was then used for pairwise comparison of the 15 alternatives.

As expected, the cases of rank reversal between the SAW and TODIM methods were left incomparable in ZAPROS, while no comparison contradicted those obtained stably through SAW and TODIM. Due to the limited compensatory nature of ZAPROS some cases were left incomparable, though all of these involved alternatives close to each other in ranking.

The application showed that the ZAPROS method is useful for obtaining stable groups of preferred alternatives. In our case, the order produced by ZAPROS may be viewed as follows:

Additional partial orders within the groups are possible. In the case of *R*_{7}, it was easily decided (with one additional question) that it was less preferable than the other alternatives despite its best value on security.

Previous studies have illustrated that in many cases stable comparisons of alternatives are possible only when the differences in their overall quality are significant. For alternatives close in value, slight changes in procedures and/or preferences usually lead to different results. This study supports this idea through an application to a real problem. As in real-life situations it is difficult to evaluate a priori how close in value the alternatives are, a careful approach to preference elicitation is a must. ZAPROS presents one of the more reliable approaches to preference elicitation, as it is based on indirect elicitation with the possibility of feedback on errors in judgment through intransitivity of preferences.

In addition to the overall ranking of all criterion values in the JOS, we also obtain the ranking of criteria according to their scale constants through the "swing" procedure (Edwards & Barron, 1994), derived from the comparison of the least preferable values against pairs of criteria. The criteria ranking obtained indirectly through ZAPROS is: *A* ≻ *D* ≻ *B* ≈ *C* ≈ *F* ≻ *E* ≈ *G* ≻ *H*.
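The "swing" procedure can be sketched as a simple elicitation loop: starting from the profile with every criterion at its worst value, the decision maker repeatedly names the criterion whose swing from worst to best matters most, and the resulting order ranks the criteria by their scale constants. The code below is a minimal illustration with a simulated decision maker (ties such as B ≈ C are ignored for simplicity); the `ask` callback stands in for the real elicitation question:

```python
def swing_order(criteria, ask):
    """Rank criteria by successive 'swing' choices: `ask(remaining)` returns
    the criterion the decision maker would swing from worst to best next."""
    remaining, order = list(criteria), []
    while remaining:
        pick = ask(remaining)
        order.append(pick)
        remaining.remove(pick)
    return order

# Simulated decision maker holding the order elicited indirectly in the paper
# (criterion A most important, H least):
prefs = ['A', 'D', 'B', 'C', 'F', 'E', 'G', 'H']
dm = lambda remaining: min(remaining, key=prefs.index)
print(swing_order(list('ABCDEFGH'), dm))  # → ['A', 'D', 'B', 'C', 'F', 'E', 'G', 'H']
```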

In another application of TODIM to the same data (Gomes & Rangel, 2009), the directly expressed preference was *A* ≻ *D* ≻ *B* ≻ *C* ≈ *F* ≈ *H* ≻ *E* ≈ *G*. The main distinction concerned criterion H (Security), which seemed less important to the decision maker when asked through comparison of alternatives than when asked directly. Though decision makers usually have no problems rank-ordering criteria by their scale constants, this ordering may not always reflect the actual "trade-off" weights applicable in multiple criteria decision making.

The study showed the attractive features of the ZAPROS method, which is based on the Verbal Decision Analysis paradigm and seeks to use reliable, indirect ordinal judgments from decision makers about their preferences. At the same time, ZAPROS requires much more time from the decision maker while providing only a partial order of the alternatives. The decision makers liked SAW and TODIM for their ease of application but agreed that they had to rely on the consultants for many decisions about the problem. ZAPROS was attractive due to its comfortable information elicitation process with feedback (through intransitivity of preferences), which gave the decision maker more assurance in the results of the analysis.

**ACKNOWLEDGMENTS**

The authors are grateful to the referees for their insightful comments on the first version of this paper. This work was partially supported by CNPq through Research Projects No. 310603/2009-9 and 502711/2009-4.

**REFERENCES**

[1] BELTON V & STEWART TJ. 2002. *Multiple criteria decision analysis: an integrated approach*. Massachusetts: Kluwer Academic Publishers.

[2] BOUYSSOU D. 1986. Some remarks on the notion of compensation in MCDM. *European Journal of Operational Research*, **26**(1): 150-160.

[3] BORCHERDING K, EPPEL T & VON WINTERFELDT D. 1991. Comparison of weighting judgments in multiattribute utility measurement. *Management Science*, **37**(2): 1603-1619.

[4] BRANS JP & MARESCHAL B. 1990. The PROMÉTHÉE methods for MCDM, the PROMCALC, GAIA and BANKADVISER software. In: *Readings in Multiple Criteria Decision Aid* (edited by C.A. Bana e Costa), chapter 2. Berlin: Springer Verlag, 216-252.

[5] CLEMEN R & REILLY T. 2001. *Making Hard Decisions with Decision Tools*. Pacific Grove: Duxbury.

[6] COOK WD & KRESS M. 1992. *Ordinal Information & Preference Structure (Decision Models and Applications)*. New Jersey: Prentice-Hall, Inc.

[7] DYER J. 2005. Multiattribute Utility Theory. In: *Multiple Criteria Decision Analysis: State of the Art Surveys* (edited by J. Figueira, S. Greco & M. Ehrgott), chapter 7. Berlin: Springer Verlag, 265-296.

[8] EDWARDS W & BARRON FH. 1994. SMARTS and SMARTER: Improved Simple Methods for Multiattribute Utility Measurement. *Organizational Behavior and Human Decision Processes*, **60**: 306-325.

[9] GOMES LFAM & LIMA MMPP. 1992. TODIM: basics and application to multicriteria ranking of projects with environmental impacts. *Foundations of Computing and Decision Sciences*, **16**(4): 113-127.

[10] GOMES LFAM & MARANHÃO FJC. 2008. A exploração de gás natural em Mexilhão: análise multicritério pelo método TODIM. *Pesquisa Operacional*, **28**(3): 491-509.

[11] GOMES LFAM & RANGEL LAD. 2009. An application of the TODIM method to the multicriteria rental evaluation of residential properties. *European Journal of Operational Research*, **193**(1): 204-211.

[12] GRECO S, MOUSSEAU V & SLOWINSKI R. 2008. Ordinal regression revisited: multiple criteria ranking using a set of additive value functions. *European Journal of Operational Research*, **191**(2): 415-435.

[13] KAHNEMAN D & TVERSKY A. 1979. Prospect theory: an analysis of decision under risk. *Econometrica*, **47**: 263-292.

[14] KEENEY RL & RAIFFA H. 1993. *Decisions with multiple objectives: preferences and value tradeoffs*. Cambridge: Cambridge University Press.

[15] KIRKWOOD CW & SARIN RK. 1985. Ranking with partial information: a method and an application. *Operations Research*, **33**: 38-48.

[16] LARICHEV OI. 1992. Cognitive validity in design of decision-aiding techniques. *Journal of Multi-Criteria Decision Analysis*, **1**: 127-138.

[17] LARICHEV OI & MOSHKOVICH HM. 1995. ZAPROS-LM - A Method and System for Ordering Multiattribute Alternatives. *European Journal of Operational Research*, **82**: 503-521.

[18] LARICHEV OI & MOSHKOVICH HM. 1997. *Verbal Decision Analysis for Unstructured Problems*. Berlin: Kluwer Academic Publishers.

[19] LARICHEV OI, OLSON DL, MOSHKOVICH HM & MECHITOV AI. 1995. Numeric *vs.* cardinal measurements in multiattribute decision making (how exact is enough?). *Organizational Behavior and Human Decision Processes*, **64**(1): 9-21.

[20] MOSHKOVICH H, MECHITOV A & OLSON D. 2005. Verbal Decision Analysis. In: *Multiple Criteria Decision Analysis: State of the Art Surveys* (edited by J. Figueira, S. Greco & M. Ehrgott), chapter 15. Berlin: Springer Verlag, 609-640.

[21] MOSHKOVICH H, MECHITOV AI & OLSON D. 2002. Ordinal Judgments for Comparison of Multiattribute Alternatives. *European Journal of Operational Research*, **137**: 625-641.

[22] OLSON DL, MOSHKOVICH HM, SCHELLENBERGER R & MECHITOV AI. 1995. Consistency and Accuracy in Decision Aids: Experiments with Four Multiattribute Systems. *Decision Sciences*, **26**: 723-748.

[23] PODINOVSKI V. 1999. A DSS for multiple criteria decision analysis with imprecisely specified trade-offs. *European Journal of Operational Research*, **113**: 261-270.

[24] POMEROL JC & BARBA-ROMERO S. 2000. *Multicriterion Decision in Management: principles and practice*. Boston: Kluwer Academic Publishers.

[25] ROY B & BOUYSSOU D. 1993. *Aide multicritère à la décision: méthodes et cas*. Paris: Economica.

[26] SAATY TL. 2005. The analytic hierarchy and analytic network processes for the measurement of intangible criteria and for decision-making. In: *Multiple Criteria Decision Analysis: State of the Art Surveys* (edited by J. Figueira, S. Greco & M. Ehrgott), chapter 9. Berlin: Springer Verlag, 345-408.

[27] SCHOEMAKER PJH & WAID CC. 1982. An experimental comparison of different approaches to determining weights in additive utility models. *Management Science*, **12**(2): 182-196.

[28] SISKOS Y, GRIGOROUDIS E & MATSATSINIS N. 2005. UTA Methods. In: *Multiple Criteria Decision Analysis: State of the Art Surveys* (edited by J. Figueira, S. Greco & M. Ehrgott), chapter 8. Berlin: Springer Verlag, 297-344.

[29] SHEPETUKHA Y & OLSON DL. 2001. Comparative Analysis of Multiattribute Techniques Based on Cardinal and Ordinal Inputs. *Mathematical and Computer Modeling*, **34**: 229-241.

[30] TRIANTAPHYLLOU E. 2002. *Multi-Criteria Decision Making Methods: A Comparative Study*. Second Edition. Kluwer Academic Publishers.

[31] WEBER M. 1987. Decision Making with Incomplete Information. *European Journal of Operational Research*, **28**(1): 1-12.

[32] WEBER M & BORCHERDING M. 1993. Behavioral Influences on weight judgements in multi-attribute decision making. *European Journal of Operational Research*, **67**: 1-12.

* Corresponding author