Peak hour evaluation – a methodology based on Brazilian airports

This paper aims to establish a new methodology to calculate the design peak-hour passenger based on Brazilian airport data. First, a cluster analysis using Ward's hierarchical method is applied, grouping airports that are similar in terms of annual passenger throughput and EPH (Equivalent Peak Hour). We then calculate the coefficients of variation of the ranked hourly passenger throughputs of the last seven years for the airports in each cluster. We propose that the design peak-hour for each airport cluster be set at the point where these coefficients become stable. We conclude by estimating the proposed design peak-hour passenger as a function of the same variables used to determine the clusters.


Introduction
The estimation of the design peak-hour passenger is critical for the design of airport terminals and their access systems, as well as for the evaluation of the level-of-service provided by an airport's existing infrastructure (Piper, 1990; McKelvey, 1988; Yen, 1995). The design peak-hour passenger corresponds to an hourly passenger throughput that is below the absolute peak-hour recorded in a year, but still high enough to ensure an adequate level-of-service during the great majority of the airport's operating time.
The level-of-service provided to passengers is directly related to the critical moments in airport operations and to how these moments are perceived. Service standards have been developed and published, for example, by IATA, BAA and Aéroports de Paris (Ashford, 1988) to be used as benchmarks. In off-peak hours it is simple to assess the level-of-service, but at those moments the quality of service is generally very high; the airport authority should therefore define critical periods of the year in which to assess the quality of service, in order to evaluate whether it is adequate or whether expansions need to be planned.
This paper establishes a new methodology for determining the design peak-hour passenger based on cluster analysis with Ward's hierarchical method, grouping similar airports with respect to annual passenger throughput and EPH (Equivalent Peak Hour), a measure derived by dividing the throughput of a typical day by its highest hourly throughput (Wang, 2012).
The next step is to calculate the coefficient of variation of the hourly passenger throughput over the period from 2005 to 2011, in order to identify at what point these movements become stable. This stability level is the key point of the methodology: the design peak-hour passenger is then estimated by means of a regression using as explanatory variables the same ones used to determine the clusters.

Literature review
In 1976, ICAO (International Civil Aviation Organization) established a study group (GE/TRAP) to investigate traffic peaks on an international basis and define approaches to improve the situation. In 1978, AACC (Airports Associations Coordinating Council) and IATA (International Air Transport Association) decided to collaborate on a study of peak-hour and airport capacity use, producing a first edition of guidelines for airport management and an updated version in 1990: Guidelines for Airport Capacity/Demand Management.
Peak-hour passenger movement, rather than annual passenger movement, is the basis for the design of the passenger terminal and its facilities. Furthermore, the operational costs involved in running these facilities are also determined by the peaks. The most important role of the operational management of an airport is to maximize the use of existing facilities and minimize congestion.
Passenger peak-hours should not be considered a problem; they are an inherent phenomenon. Whatever pricing or regulatory structure is present at an airport, there will always be a tendency for movements to concentrate in certain periods of the day, due to factors such as passenger preferences, airline scheduling and fleet usage maximization.
According to Ashford (1997), some factors are crucial in shaping passenger movements at peak hours: domestic/international mix; traffic characteristics; geographical location; hub role; catchment area; and terminal capacity.
According to Brunetta (1999), the design peak-hour passenger is usually defined from historical data and may be taken as the 30th busiest hour of the year. Ashford (1997) outlined the three most important methodologies for determining the design peak-hour passenger: the Standard Busy Rate (SBR), the Busy Hour Rate (BHR) and the Typical Peak-Hour Passenger (TPHP).
The problem presented by the latter methodology is the discontinuity of its curve. For instance, an airport with an annual throughput of 29,999,999 passengers derives a design peak-hour passenger of 12,000, while an airport with one more passenger a year (30 million/yr) would derive a design peak-hour passenger of 10,500, a quite different figure for just one additional passenger. Wang (1999) developed for Brazilian airports a criterion based on the assumption that the highest peak-hours at airports are random at certain levels, i.e. the highest absolute peak-hours in a given year behave randomly from year to year. To forecast peak-hour demand at a statistically significant level, a certain stability must be achieved. By calculating the coefficient of variation for 48 Brazilian airports, Wang concluded that stability would be achieved at 96.5% of the annual passenger throughput. So far, Wang's methodology is the only one with a statistical criterion for choosing and calculating the design peak-hour; this paper therefore uses the same principle to measure the stability of the data.
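The discontinuity described above can be illustrated with a short sketch. The percentage bands below follow the commonly cited FAA TPHP factors (the two figures in the text are consistent with 0.040% and 0.035% for the relevant bands); they are reproduced here for illustration only and should be checked against current guidance.

```python
# Illustrative sketch of the TPHP discontinuity discussed in the text.
# The percentage bands are the commonly cited FAA TPHP factors; verify
# against current FAA guidance before relying on them.

def tphp(annual_pax: float) -> float:
    """Typical Peak-Hour Passengers as a percentage of annual throughput."""
    bands = [  # (lower bound of annual passengers, TPHP factor)
        (30_000_000, 0.00035),
        (20_000_000, 0.00040),
        (10_000_000, 0.00045),
        (1_000_000, 0.00050),
        (500_000, 0.00080),
        (100_000, 0.00130),
        (0, 0.00200),
    ]
    for lower, factor in bands:
        if annual_pax >= lower:
            return annual_pax * factor
    return 0.0

# The one-passenger jump discussed in the text:
print(round(tphp(29_999_999)))  # 12000
print(round(tphp(30_000_000)))  # 10500
```

A single additional passenger moves the airport into the next band and lowers the design figure by 1,500 passengers per hour, which is the discontinuity the text criticizes.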
A great variety of methods is used to determine the design peak-hour at airports, but none of them provides unrestricted service for the peak-hour, as this would result in wasted resources. There is a consensus that planning should aim to meet demand at some level below the absolute peak, so that most passengers receive adequate service levels and only a small percentage experience the impact of congestion during very short periods of time.

Data Analysis
This section is divided into three subsections: 1) presentation of the variables used; 2) cluster analysis of 56 INFRAERO airports based on the 2011 annual passenger throughput; and 3) coefficient of variation of passenger movements for each cluster to establish the passenger peak-hour.
The preparation of the data leads to the model that calculates the passenger peak-hour using annual passenger throughput and EPH as explanatory variables.

Description of the database
The data used for this study were obtained from INFRAERO's strategic database; therefore, this paper does not present detailed raw information, only the results of the analysis.
Two variables are derived from this database of 56 INFRAERO airports: i) the 2011 annual passenger throughput and ii) the EPH (Equivalent Peak Hour).
As defined by Wang (2012), the EPH is a measure of infrastructure usage throughout a typical day, which can be used to evaluate capacity and the need for future investments. The EPH is calculated by adding up the 24 medians (50th percentile) of the passenger hourly throughput over every day of a given year and then dividing this sum by the maximum of these medians. High EPH values indicate high usage of the infrastructure along the day and/or high homogeneity of the data. Conversely, a low EPH results from low usage of the infrastructure and/or high concentration of the hourly throughput. A low EPH is not desired by infrastructure providers.
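The EPH computation just described can be sketched as follows; the array layout and function name are illustrative, not from the paper.

```python
import numpy as np

def eph(hourly: np.ndarray) -> float:
    """Equivalent Peak Hour as defined by Wang (2012).

    `hourly` is a (days, 24) array of passenger counts for one year.
    EPH = sum of the 24 hourly medians / max of those medians.
    """
    medians = np.median(hourly, axis=0)  # 50th percentile per hour of day
    return float(medians.sum() / medians.max())

# Toy check: a perfectly flat daily profile gives the maximum EPH of 24,
# while a single dominant hour drives EPH down toward 1.
flat = np.ones((365, 24))
peaky = np.zeros((365, 24))
peaky[:, 8] = 100.0
print(eph(flat))   # 24.0
print(eph(peaky))  # 1.0
```

The two extremes match the interpretation in the text: a high EPH reflects homogeneous usage across the day, while a low EPH reflects heavy concentration in a single hour.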

Cluster Analysis
The cluster analysis aims to divide the elements of a group of interest into sub-groups whose elements share similar characteristics and are heterogeneous when compared to elements of other sub-groups (e.g., see Al-Sultan and Maroof Khan, 1996; Kaufman and Rousseeuw, 1990; Koskosidis and Powell, 1992; Laporte et al., 1989).
Techniques for building clusters are classified by Hair (2009) into two types: hierarchical and non-hierarchical. Hierarchical techniques are used in exploratory analyses of the data in order to identify possible clusters and the probable number of groups. Non-hierarchical techniques, in contrast, require the number of groups to be pre-specified by the researcher.
In this paper, we use the hierarchical method of Ward (1963), which is an agglomerative hierarchical technique. Ward's method starts the clustering process with n groups, since each element is initially treated as a cluster of its own. At each step of the algorithm, similar elements are merged into a new cluster: after the first step there are n-1 groups, after the second step n-2, and so on, until all elements form a single cluster after n-1 steps.
At each stage of the clustering algorithm, the two most similar clusters are combined to form a new cluster. In Ward's method, cluster similarity is measured by the distance between clusters, defined as

d(Ci, Cj) = [ni nj / (ni + nj)] ||x̄i − x̄j||²,   (1)

where ni and nj are the sizes of clusters Ci and Cj when the grouping process is at stage k, and x̄i and x̄j are the centroids (vectors of variable means) of Ci and Cj, that is,

x̄il = (1/ni) Σr Xirl   (2)

is the mean of variable l over cluster Ci. At each step of the algorithm, the two clusters that minimize the distance defined above are combined. Only one new cluster may be formed in each step.
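The merging cost just defined can be written directly in code. This is an illustrative sketch, not the paper's implementation; the function name and example data are assumptions.

```python
import numpy as np

def ward_distance(Xi: np.ndarray, Xj: np.ndarray) -> float:
    """Ward merging cost between two clusters of row vectors:
    d(Ci, Cj) = ni*nj/(ni+nj) * ||centroid_i - centroid_j||^2."""
    ni, nj = len(Xi), len(Xj)
    diff = Xi.mean(axis=0) - Xj.mean(axis=0)
    return ni * nj / (ni + nj) * float(diff @ diff)

# Two singleton clusters one unit apart: cost = (1*1/2) * 1^2 = 0.5
a = np.array([[0.0, 0.0]])
b = np.array([[1.0, 0.0]])
print(ward_distance(a, b))  # 0.5
```

The ni*nj/(ni+nj) factor penalizes merging large clusters, which is why Ward's method tends to produce compact, balanced groups.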
Hierarchy property: each new cluster formed at a given step is a grouping of clusters formed in earlier stages. If two sample elements are grouped in the same cluster at some stage of the clustering process, they remain grouped in all subsequent stages; once united, these elements cannot be separated. Due to the hierarchy property, it is possible to construct a graph called a dendrogram, which is the "tree", or history, of the grouping.
The final choice of the number of groups g into which the data set should be divided is subjective, although some statistical measures may assist in determining g. The criteria adopted in this work are the behavior of the distances and the similarity level, detailed below. Distance: at each step of the clustering algorithm, compute the Euclidean distance between the centroids of the clusters being merged. As the algorithm progresses, the distance between centroids increases, meaning that the combined groups become less similar. Thus, plotting the distances at every stage of the process reveals "jump points" that are relatively large compared to the other distance values. These points indicate the moment to stop the algorithm, i.e., the number of clusters g and the final composition of the groups.
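The "jump point" criterion can be sketched as follows, here using SciPy's Ward linkage on hypothetical one-dimensional data with two well-separated groups, so that the jump is unambiguous. The stopping rule shown (largest percentage increase in merge distance) is one simple reading of the criterion described above.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

# Hypothetical data: two tight groups far apart.
X = np.array([[0.0], [0.1], [0.2], [10.0], [10.1], [10.2]])
Z = linkage(X, method="ward")

d = Z[:, 2]                                   # merge distance at each step
pct_change = np.diff(d) / d[:-1] * 100        # % increase step to step
jump_merge = int(np.argmax(pct_change)) + 2   # 1-indexed merge with the jump

# Stopping just before the jump leaves n - (jump_merge - 1) clusters.
n_clusters = len(X) - (jump_merge - 1)
print(jump_merge, n_clusters)  # 5 2
```

For these data the last merge (joining the two distant groups) produces a distance roughly 100 times larger than the previous one, so the rule stops the algorithm at two clusters, exactly the structure built into the example.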
Similarity: this test is analogous to the previous one, but monitors the similarity level instead of the distance at each stage. If clusters Ci and Cj are united at a certain stage, the level of similarity between them is defined by

S(Ci, Cj) = 100 [1 − d(Ci, Cj)/dmax],   (3)

where dmax is the largest distance between sample elements in the distance matrix of the first stage of the clustering process. In this case, the key is to detect points at which there is a sharp decrease in the similarity of the clusters, indicating that the algorithm should be stopped.
One way to assess whether a partition achieves satisfactory cohesion (or similarity) within the formed clusters and isolation (or separation) between them is to compute the total, within-group and between-group sums of squares, defined as follows. Sum of Total Squares (SQTotal):

SQTotal = Σi Σr ||Xir − X̄||²,   (4)

where Xir is the vector of the p measurements observed for element r of sample group i, and X̄ is the vector of global means, regardless of any partition, whose component X̄l = (1/n) Σi Σr Xirl is the global mean of variable l. The Sum of Squares within the partition groups (Sum of Residual Squares):

SQR = Σi Σr ||Xir − X̄i||².   (5)

The Sum of Squares between the g partition groups:

SQE = Σi ni ||X̄i − X̄||².   (6)

The Sum of Total Squares is the sum of the residual sum of squares and the sum of squares between groups, yielding the total variation. If a good partition is achieved, the formed groups should have internal cohesion but be heterogeneous with respect to one another. Thus, the variation within groups (SQR) should be small relative to the total sum of squares or, equivalently, the variation between groups (SQE) should represent the majority of the data variation.
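The decomposition of the total sum of squares into within-group and between-group parts can be verified numerically on hypothetical data; the variable names below are illustrative.

```python
import numpy as np

# Hypothetical 2-D data split into two groups.
rng = np.random.default_rng(1)
groups = [rng.normal(0, 1, (5, 2)), rng.normal(3, 1, (7, 2))]

X = np.vstack(groups)
gbar = X.mean(axis=0)  # vector of global means

sq_total = ((X - gbar) ** 2).sum()
sq_within = sum(((g - g.mean(axis=0)) ** 2).sum() for g in groups)
sq_between = sum(len(g) * ((g.mean(axis=0) - gbar) ** 2).sum()
                 for g in groups)

# SQTotal = SQR + SQE holds for any partition of any data set.
assert np.isclose(sq_total, sq_within + sq_between)
print(sq_between / sq_total)  # share of variation explained by the grouping
```

The final ratio is the quantity the text asks for: a good partition drives the between-group share close to 1.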

Coefficient of variation – stability in peak-hour passenger
Considering the clusters established in the previous section, for each cluster the data used will be the passenger hourly throughput for the last seven years, i.e., from 2005 to 2011.
Using the methodologies for determining the peak-hour cited in the literature review, it was found that, for the 56 airports in this study, no airport peak-hour fell below the hundredth highest hourly throughput. Thus, as a safety margin for reaching stabilization of the passenger throughput, as well as to save processing time, this study considers only the 300 highest hourly throughputs of each year to compose the peak-hour database. For each ranked hourly throughput, the coefficient of variation (the standard deviation divided by the mean) is computed. By definition, stability is achieved when this coefficient varies by no more than 0.01 units, and the hourly throughput at which stability is achieved is taken as representative of the respective cluster. The maximum variation was set at 0.01 units because, with a stricter threshold, stability could not be reached within 300 hours for the last two clusters.
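The stability rule described above can be sketched as follows, under the assumption that "does not vary more than 0.01 units" means the coefficient of variation changes by at most 0.01 between consecutive ranks. The data below are synthetic, not INFRAERO's.

```python
import numpy as np

def stable_rank(ranked_by_year: np.ndarray, tol: float = 0.01) -> int:
    """First rank (1-indexed) at which the coefficient of variation of the
    k-th highest hourly throughput, computed across years, changes by no
    more than `tol` relative to the previous rank.

    `ranked_by_year` has shape (years, ranks): row y holds year y's
    highest hourly throughputs sorted in descending order.
    """
    cv = ranked_by_year.std(axis=0, ddof=1) / ranked_by_year.mean(axis=0)
    for k in range(1, len(cv)):
        if abs(cv[k] - cv[k - 1]) <= tol:
            return k + 1
    return len(cv)

# Synthetic example: 7 years x 300 ranked hours, noisier at the top ranks.
rng = np.random.default_rng(2)
base = np.linspace(2000, 1000, 300)
noise = rng.normal(0, 1, (7, 300)) * np.linspace(600, 5, 300)
data = base + noise
print(stable_rank(data))
```

With real data, the rank returned for each cluster is the one the paper takes as the design peak-hour for that cluster.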

Groupings
Following the method of Ward (1963), we now take as agglomeration criteria the variables annual passenger throughput and EPH. In the search for a small number of groups, we analyze the Similarity and Distance measures over the last 15 steps. Table 1 shows the values of these measures and their percentage change from one step to the next. Observing the behavior of the distances, or equivalently of the similarities, across the grouping steps, one notices a large loss of similarity from step 49 to 50 (44.32%, a two-digit figure contrasting with the one-digit figures from steps 41 to 49) and a correspondingly sharp increase in distance, or fusion level (116.40%).
Another way to check the heterogeneity of the formed groups is to analyze the centroids (means) of the variables under study for each group (Table 3). From Table 3, it can be noticed that standardization is necessary to bring the variables to compatible scales since, in numerical terms, "Passengers" is millions of times larger than "EPH". After standardization, both variables were similarly important for the definition of the groups. There are clear differences between the group means both for passengers and for EPH, although groups 2 and 3 have close EPH values.

Stability in passenger peak-hour
From the coefficients, the peak hour for Cluster 1 is obtained at the 6th highest hourly throughput, corresponding to 99.88% of the annual passenger movement; for Cluster 2 at the 8th highest, corresponding to 99.06%; for Cluster 3 at the 14th highest, corresponding to 99.47%; for Cluster 4 at the 25th highest, corresponding to 99.04%; for Cluster 5 at the 30th highest, corresponding to 97.94%; for Cluster 6 at the 37th highest, corresponding to 96.42%; and for Cluster 7 at the 57th highest hourly throughput, corresponding to 88.57% of the annual passenger movement.
From the above it can be inferred that additional clusters would make it harder to achieve sufficient stability. In other words, a hypothetical Cluster 8 would require something around the 100th hour to reach stability, indicating that sufficient stability is not achievable for every airport.

Least squares regression
Using the variables presented in this study, annual passenger movement (PAX) and Equivalent Peak Hour (EPH), the estimated model is given in Equation (10). The variables PAX and EPH are statistically significant at the 1% level, i.e., there is strong evidence of a causal effect of these variables on the passenger peak-hour. Regarding the elasticities (p < 0.01; adjusted R² = 0.964), annual passenger throughput (PAX) has a larger impact than EPH: each additional 1% of PAX increases peak-hour demand by 0.6368% (standard error 0.034), while each additional 1% of EPH increases it by 0.3170% (standard error 0.084).
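Since the coefficients are interpreted as elasticities, the model has a log-log form. The sketch below fits that form to synthetic data; the coefficients used to generate the data are illustrative and are not the paper's estimates.

```python
import numpy as np

# Synthetic airports whose peak-hour demand follows a log-log relation
# hp = exp(b0) * PAX^b1 * EPH^b2 with b1 = 0.64, b2 = 0.32 (illustrative).
rng = np.random.default_rng(3)
pax = rng.uniform(1e5, 3e7, 50)
eph = rng.uniform(2.0, 15.0, 50)
hp = np.exp(-2.0) * pax**0.64 * eph**0.32 * np.exp(rng.normal(0, 0.05, 50))

# ln(hp) = b0 + b1*ln(PAX) + b2*ln(EPH): slopes are elasticities,
# i.e. the % change in peak-hour demand per 1% change in each variable.
A = np.column_stack([np.ones_like(pax), np.log(pax), np.log(eph)])
beta, *_ = np.linalg.lstsq(A, np.log(hp), rcond=None)
print(beta)  # roughly [-2.0, 0.64, 0.32]
```

The fitted slopes recover the generating elasticities, which is how the 0.6368% and 0.3170% figures in the text are read.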

Conclusion
The methodology proposed in this paper is useful for defining, mathematically, a design peak hour. A hierarchical clustering process was chosen because, according to Hair (2009), its main advantages are its simplicity, the availability of similarity measures, and the speed with which results are obtained. Ward's clustering method, which grouped the airports into seven clusters, relied on two very significant variables: annual passenger throughput and Equivalent Peak Hour (EPH). Other variables were tested, such as declared airport capacity, connecting passenger throughput and international passenger throughput, but the two chosen for this study achieved the best results. Another interesting finding was the use of the coefficient of variation to determine the stability of the peak-hour demand for the seven clusters. The independent variables were chosen to explain objectively the phenomenon of airport passenger concentration; other important variables, such as the level-of-service perceived by users, are very subjective and could not be modeled at the statistical level of significance this study required.
After defining the independent variables, it was possible to fit a linear regression that proved to be statistically very significant. The intended contribution of this model is a robust and consistent methodology that captures the impacts of annual passenger throughput and EPH on peak-hour demand, two variables that are easy to obtain, either for the present or through econometric scenarios for the future. None of the models in the literature review takes into account the impact of restrictions derived from airport capacity; the present methodology is the first to evaluate it, presenting a simple model that uses EPH as a good proxy for airport capacity.

Table 4 -
Results of the Linear Regression Model.