Validation of a proposal for evaluating hospital infection control programs

METHODS: The program consists of four indicators: technical-operational structure; operational control and prevention guidelines; epidemiological surveillance system; and control and prevention activities. These indicators, whose content had previously been validated, were applied at 50 healthcare institutions in the municipality of São Paulo, SP, in 2009. Descriptive statistics were used to characterize the hospitals and the indicator scores, and Cronbach's α coefficient was used to assess internal consistency. Discriminant validity was analyzed by comparing indicator scores between groups of hospitals with and without quality certification. Construct validity was analyzed by means of exploratory factor analysis with a tetrachoric correlation matrix.


INTRODUCTION
There is a growing demand for systems to assess the quality of care practices and services relating to hospital infection control.7,8 Such assessments can be made by means of indicators, which are defined as quantitative measurements of variables, characteristics or attributes of a given process or system that make it possible to recognize its results, whether desirable or undesirable.2,3,9,10,12,14 These indicators are ratios or quotients in which the numerator corresponds to the event that is measured or recognized, which needs to have a clear and objective definition and to be readily applicable and rapidly identifiable. The denominator is the population at risk of the event defined in the numerator.
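The numerator/denominator structure described above can be sketched as follows; the event counts and population figures in this example are invented for illustration only, not taken from the study:

```python
# Minimal sketch of an infection control indicator as a ratio:
# numerator = the clearly defined event; denominator = population at risk.
# All figures below are hypothetical.

def infection_rate(events: int, population_at_risk: int, per: int = 100) -> float:
    """Rate of a defined event per `per` patients at risk."""
    if population_at_risk <= 0:
        raise ValueError("population at risk must be positive")
    return events / population_at_risk * per

# e.g. 12 surgical-site infections among 480 surgical patients
rate = infection_rate(events=12, population_at_risk=480)
print(f"{rate:.1f} infections per 100 patients")  # 2.5
```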
These indicators take into consideration assessments of structures, results and processes. Structural assessments reflect the capacity of the service to provide quality care, through the existence of human resources, care systems, financial support, physical area and equipment, accessibility, protocols and physical plans, among others. The second type of assessment measures how frequently an event occurs, and thus makes it possible to estimate risk factors by identifying effects from the treatment, symptom relevance and other effects, along with establishing maximum and minimum acceptable limits for such events. The third measures how the activities of a service or a given type of care are performed, which makes it possible to analyze their quality in accordance with previously defined standards.3 Hence, assessments of specific practices require a set of indicators.14
Hospital infection control programs (HICPs) make assessments of results in order to identify the prevalence and incidence of cases of hospital infection. They classify cases according to topography, specialty, location and other characteristics, so that risk factors can be established. However, such assessments are not enough to recognize or determine the quality of the care practices performed, or to act preventively. The quality of the HICP itself also needs to be recognized, so that the quality of the care practices can be assessed and interventions can be made.
The present study aimed to validate the construct and discriminant properties of HICP measurements.

METHODS
This study developed methodology for devising and validating health assessment measurements at 50 healthcare services in the municipality of São Paulo, Southeastern Brazil, in 2009. One hundred and sixty-two healthcare services were contacted by letter and in person. These were selected from the Datasus database in accordance with the following inclusion criteria: attendance structures of greater complexity and hospital beds, respecting the hospital size classification, i.e. general or specialized and public or private in nature. The exclusion criteria were: treatment of mental diseases, healthcare centers, care provided in patients' homes, solely outpatient services and nonsurgical delivery.
Trained professionals gathered data to characterize the health service and the hospital infection control service (HICS) and hospital infection control commission (HICC): the location, type of care, health certification or accreditation, size, maintaining entity, specialized services, length of time for which the HICS/HICC had existed, nature of the HICS/HICC and linkage of the professionals at the HICS/HICC. An instrument for assessing HICPs was applied, consisting of four indicators: technical-operational structure of the HICP ("structure"); operational prevention and control guidelines for hospital infection ("guidelines"); epidemiological surveillance system for hospital infection ("epidemiology"); and prevention and control activities against hospital infection ("activities").a Characteristics relating to the hospital's profile and the indicator scores were described by means of descriptive statistics: means (with standard deviations), medians, minimum and maximum values, and percentages. Internal consistency was assessed by means of Cronbach's α, which can range from 0.00 to 1.00; the higher the coefficient, the more exact (internally consistent) the measurement.13 This coefficient estimated the mean correlation of each item with the total score. When Cronbach's α was greater than 0.70, the instrument was considered to have good internal consistency. To analyze discriminant validity, the indicator scores were compared between hospitals with some type of quality certification and those without certification; higher scores were expected for the hospitals with certification. The nonparametric Mann-Whitney test was used.
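The Cronbach's α computation described above can be sketched as a minimal example; the score matrix below is invented for illustration and does not reproduce the study's data:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                         # number of items
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Invented example: 4 respondents answering 2 perfectly consistent items
example = np.array([[1, 1], [0, 0], [1, 1], [0, 0]])
print(round(cronbach_alpha(example), 2))  # 1.0
```

As the text notes, homogeneous samples depress item variability and hence the coefficient, which is why constant items had to be excluded.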
To analyze construct validity, exploratory factor analysis with a tetrachoric correlation matrix was used, which is indicated for variables with dichotomous responses.5 The method used for factor extraction was the iterative principal factor method, with Varimax orthogonal rotation. Items presenting loadings greater than 0.30 were considered important in the factor composition. This technique made it possible to identify the smallest number of factors (or dimensions/constructs) that would best explain the correlations between the indicator items. The statistical packages used were SPSS version 14.0 (descriptive statistics, internal consistency and comparisons between groups) and SAS version 10.0 (factor analysis). The significance level was taken to be 0.05 for all statistical tests.
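The Mann-Whitney comparison between certified and non-certified hospitals can be sketched as follows; the conformity scores are invented for illustration, and only the test procedure mirrors the design described above:

```python
from scipy.stats import mannwhitneyu

# Invented indicator scores (% conformity) for two hypothetical groups
certified = [95.0, 100.0, 90.0, 100.0, 97.5]
not_certified = [70.0, 85.0, 60.0, 75.0, 80.0]

# Two-sided nonparametric comparison of the two groups' scores
stat, p_value = mannwhitneyu(certified, not_certified, alternative="two-sided")
print(f"U = {stat}, p = {p_value:.4f}")
if p_value < 0.05:  # significance level adopted in the study
    print("statistically significant difference between groups")
```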
The study was approved by the Research Ethics Committee of the Escola de Enfermagem da Universidade de São Paulo (Process 800/2009) on April 8, 2009. The participants signed commitment and informed consent statements.

RESULTS
Fifty hospitals participated in applying the HICP indicators (31%), while 16 (10%) formally refused to participate (non-approval by the management or no response from the local Research Ethics Committee) and 96 (59%) did not give any response. The envelopes sent through the post were not returned.
The care provision consisted predominantly of general hospitals (80%); 50% did not have quality certification or accreditation. Among those with certification, the majority (68%) had obtained it from the National Accreditation Organization, while 20% had Commitment to Hospital Quality certification, 8% were certified by the Joint Commission International and 4% by the International Organization for Standardization. With regard to hospital size, 80% were of medium or large size (38% and 42%, respectively). Complexity was classified according to structure: 100% had intensive care beds, an emergency department and a surgical center, and 98% had a material sterilization center.
All the hospitals had their own HICS and HICC, which were not part of consortiums, and these had been in operation for six to 20 years (median = 14 years).
The analysis of internal consistency (Table 1) showed that the α coefficients ranged from 0.58 to 0.80. However, for the "structure" and "epidemiology" indicators, the coefficients could only be calculated using three items, since the others did not present any variability, i.e. they were constant.
The "guidelines" and "activities" indicators presented the best internal consistency results: 0.80 and 0.67, respectively (0.76 without the item on participation in technical decisions).
Discriminant validity analysis was only possible for the "guidelines" and "activities" indicators, because the other two indicators were constant and equal to 100.0% in the group with certification. This in itself shows that the "structure" and "epidemiology" indicators presented better total conformity in the group of certified hospitals (Table 2).
The "guidelines" and "activities" indicators presented better mean values in the group with certification, with a statistically significant difference (Table 2). It was not possible to perform construct validity analysis for the "structure" and "epidemiology" indicators because these indicators varied very little, with 100% conformity in almost all assessments.
Table 3 presents the factor analysis for the "guidelines" indicator, in terms of two factors: factor 1, recommendations for preventing infections; and factor 2, recommendations for standardizing the prophylaxis procedures.
Factor 1 had a Cronbach's α of 0.73 (five items), but if the item "Are there any recommendations for health service waste disposal?" were removed, Cronbach's α would increase to 0.80. For factor 2, if this same item were included, Cronbach's α would remain 0.79 (seven items).
Table 4 presents the results of the factor analysis for the "activities" indicator, in terms of two factors. In the first factor, Cronbach's α was 0.88 (seven items); this factor included items relating to the treatment units (dialysis, blood bank, hospital admission, intensive care, surgical center and emergency department). The second factor (α = 0.72, three items) assessed support units (laboratories). The item "outpatient service" did not present any important loadings in either of the two factors extracted.

DISCUSSION
The system for HICP assessment was validated in the present study and applied to a sample that was epidemiologically similar to two previous investigations aimed at more specific assessment of HICPs. In one of these studies, conducted by the Brazilian National Health Surveillance Agency in 2004, 6,714 questionnaires were sent out, with a general return rate of 61.8%, of which 35.1% was from the southeastern region.b In this first case, it was not stated whether the instrument used had previously been validated. In 2009, the Regional Medical Council of the State of São Paulo carried out and published a HICP diagnosis based on 158 visits to hospitals in the State of São Paulo, of which 56 were in the state capital and its metropolitan region (35%). However, the investigative method was not disclosed.c

In the institutions evaluated here, the wide variation in the length of time for which the HICS/HICC had existed (six to 20 years) can be explained by the length of time for which the national legislation in this field has been in force. Brazil's first regulations on hospital infection control were published in 1983, and these made HICCs mandatory throughout the national territory. This was followed by Ordinance 930, in 1993,d which defined models for HICCs based on the formation of state and municipal commissions, and then by Law 9431 of 1997e and Ordinance 2616 of 1998,f which dealt with HICPs.
Analysis of the internal consistency showed that the HICP indicators ranged from 0.58 to 0.80. The "guidelines" and "activities" indicators presented the best results, even though the "activities" indicator had only been fully applied in six hospitals, because some items had unknown responses, not being applicable in the other hospitals. Although the item "participation in technical decisions" interfered with the final result, it was kept because of the importance of this activity, developed by the HICS as support for healthcare managers.
There was variation in only three items of the "structure" and "epidemiology" indicators. The other items had constant conformity scores of 100.0%, which reflected the homogeneity of the sample but interfered with the internal reliability. These indicators probably do not need to go through validation scrutiny, given that hospitals have already incorporated them into their routine, in compliance with the legislation in force.
The reliability of an instrument is related to the heterogeneity of the sample: the more homogeneous the sample, the more similar the scores will be and the lower the reliability coefficient. This does not necessarily mean that the instrument is unreliable, given that if an instrument is designed to measure differences and the members of the sample are similar to each other, it is more difficult to discriminate the reliability.11

A measurement instrument should not be correlated with variables that have nothing to do with it (false attributes). This characteristic is known as discriminant validity and was used in the present study to test the hypothesis that the indicators selected would be sufficient to "measure" the quality of a HICP distinctly in institutions with and without qualification processes.6 In the analysis of discriminant validity, there was a statistical difference in the group of institutions with quality accreditation/certification processes, for the "guidelines" and "activities" indicators. These indicators reflected actions of documentation, guidance, recommendation and interface with other services within the healthcare institution.
The results showed that these indicators were sufficiently sensitive to detect the hospitals that had gone through healthcare quality processes and consequently had better HICPs.
The Joint Commission International recommends that effective programs should be wide-ranging, provide attendance beyond the minimum required by legislation, and contain within their scope actions such as systems for data gathering, administration, analysis and communication, with a plan for continuous improvement; formal policies and procedures; study, education and training programs; and collaboration and interfaces with all departments in the institution.1 Such actions are included in the set of HICP indicators, especially the "guidelines" and "activities" indicators.
Construct validity is used for the dimensions of a measurement instrument to be consistently determined, along with its correlations with other similar measurements within the same theory or concept. This is known to be difficult and challenging.6,11 Five items correlated best with the factor "recommendations for prevention of infections" (respiratory, urinary, bloodstream and surgical site) in the "guidelines" indicator, with a variance of 23.5%. The item "Are there any recommendations for health service waste disposal?" had the lowest correlation with factor 1 (0.30) and mainly encompassed aspects of recommendations or standardization of procedures. This was one of the items over which the HICS had least direct or separate influence: the recommendations and decisions are made jointly with other professionals, thereby contributing towards devising and implementing health service waste management programs.g

In the other factor of the "guidelines" indicator, seven items correlated with the recommendations for prophylaxis procedures. The item "Is there any standardization of germicide and antiseptic solutions?" was the one with the highest loading in factor 2 (0.72). On the other hand, the item "Are there any recommendations for washing and cleaning of clothes used at the institution?" had the lowest correlation with this factor (0.33), which was responsible for 22.5% of the total variance of the data. This item was also little influenced by the HICS, possibly because laundry services were outsourced in most of the institutions visited, even though there was joint responsibility for the processes.
It was concluded that the structure of the "guidelines" indicator could be maintained as it was (i.e. only one dimension), on the basis of the results relating to internal consistency, factor analysis and internal consistency of the factors. This structure could also be stratified into two dimensions: recommendations for preventing infections and recommendations for standardizing prophylaxis procedures.
For the "activities" indicator, the results from the factor analysis showed two factors or dimensions: factor 1, treatment units; and factor 2, support units.
Factor 1 consisted of seven items: the best correlated item (0.94) was "surgical center", while the least correlated (0.35) was "hospitalization units". The items "material sterilization center", "intensive care unit" and "surgical center" were the most important in factor 2 and, among these three, only the last seems more appropriate to this factor, which reflects support units. Factor 1 was responsible for 27.8% of the total variance of the data.
The items best correlated with factor 2 were the ones that reflected the support units, such as the clinical analysis and pathological anatomy laboratories, and the nursery, which was the item with the highest correlation (0.93). This factor explained 22.4% of the total variance of the data. The values of Cronbach's α were better with the stratification than without it (α = 0.67). Application of the indicator was focused on specific actions for improving the treatment and support sectors.
We did not find any similar studies validating HICP assessment instruments in either the Brazilian or the worldwide literature. The closest was the study by Prade,h carried out in 2002, in which a process for validating an information instrument for aiding HICPs was developed. The instrument was capable of evaluating different dimensions of the systemic scope of hospital care, for its management, with a view to making managerial decisions.h However, several indicators did not achieve validation, and the validity of the criteria and the reliability of form B (the dimension of the hospital's care provision and its structure) were low. It was concluded that the group of examiners was unable to apply this type of assessment, or had had little practice in doing so. Prade considered that the internal validity was compromised by the unsatisfactory result from one of the studies, which would be redone after changes to examiner preparation had been made. It was also concluded that the concordance between the examiners was not good and that the study would have to undergo the modifications suggested, and be expanded, in order to ensure external validity for the system.2

The individuals who applied the instrument in the present study had undergone prior training. Specific manuals were drawn up to guide the data gathering and systematize the data obtained.
The assessment proposal, with a small number of indicators and seeking only to evaluate the HICP, did not intend to correlate it directly with the hospitals' overall care provision and structures, or with the professionals' technical capacity. Although the conformity obtained may make it possible to make suggestions or reach conclusions in this regard, other more specific assessment systems can and should be added, given that a HICP in itself does not allow inferences regarding the quality of the care provided.
Application of this proposal for assessment of HICPs to health services in the municipality of São Paulo enabled full validation of its measurement properties.The indicators that assess operational guidelines and prevention and control activities against hospital infection were the ones that presented the best internal consistency, discriminant validity and construct validity results.
In conclusion, the validity of the instrument for assessing HICPs, with the possibility of application on a scientific basis, will allow a diagnosis of the real situation of these programs throughout the national territory.

Table 1.
Internal consistency of the indicators for assessing hospital infection control programs. São Paulo, Southeastern Brazil, 2009.

Table 2.
Comparison of the means for the assessment indicators for hospital infection control programs, between hospital groups. São Paulo, Southeastern Brazil, 2009.
a Standard deviation. b Mann-Whitney test for comparison between means. c The values for this group were constant and equal to 100.0.

Table 3.
Factor analysis for the indicator of operational prevention and control guidelines for hospital infection. São Paulo, Southeastern Brazil, 2009.

Table 4.
Factor analysis for the indicator of prevention and control activities against hospital infection. São Paulo, Southeastern Brazil, 2009.