Research on Biomedical Engineering

Print version ISSN 2446-4732 | On-line version ISSN 2446-4740

Res. Biomed. Eng. vol.34 no.4 Rio de Janeiro Oct./Dec. 2018

http://dx.doi.org/10.1590/2446-4740.180047

Original Article

SMART: a service-oriented architecture for monitoring and assessing Brazil’s Telehealth outcomes

Jailton Carlos de Paiva 1, 2, *

Túlio de Paiva Marques Carvalho 1, 2

Allyson Bruno Campos Barros Vilela 1, 2

Giovani Ângelo Silva da Nóbrega 1

Beatriz Soares de Souza 1

Ricardo Alexsandro de Medeiros Valentim 1

1 Laboratory for Technological Innovation in Healthcare, Federal University of Rio Grande do Norte, Natal, RN, Brazil.

2 Federal Institute of Rio Grande do Norte, Natal, RN, Brazil.

Abstract

Introduction

The Brazilian Telehealth Program was instituted by the Ministry of Health in 2007. Its initial structure was composed of nine telehealth centers administered by public higher education institutions. No standards, processes, applications or quality indicators were defined at its creation. This, combined with the decentralization of the centers, led each of them to develop its own system, with different programming languages and architectures. The lack of regulation and of integration of the information with the Ministry of Health made it difficult to evaluate the program. In this context, this paper describes the specification, implementation and validation of an architecture, entitled SMART, to integrate the various telehealth platforms developed by the centers. The architecture aims to standardize information so that the Ministry of Health can monitor and evaluate the results of telehealth actions.

Methods

SMART’s architecture consists of four main components: a web tool for data manipulation; a web service to receive the centers’ production data; a component responsible for converting the received data into decision support data; and a component that collects data from external sources to compose the data warehouse.

Results

The architecture was validated with performance tests executed under extreme workloads. The results of the experiments were summarized in order to attest to SMART’s effectiveness.

Conclusion

The analysis of the results obtained on real data shows that the system’s performance remained stable under the tested workloads, and its quality is supported by the absence of errors during the experiments.

Keywords Telehealth; Evaluation; Standardization; SOA; Integration; Interoperability

Introduction

Brazil's national health service, the Sistema Único de Saúde (SUS), defines principles and guidelines that should be followed by health centers all over the country. When aspects such as long distances, population size and demographic density are considered, it becomes clear how challenging it is to plan health-focused actions on a national scale while fulfilling the principles and guidelines mentioned above. Regional and local aspects also play an important role in that process ( Mendes, 2013 ).

Against this background, the Brazilian National Telehealth Program ( Brasil, 2007 ) was created in 2007 as a pilot project to support Primary Health Care, involving nine Telehealth Centers (NT) located in universities across the country ( Haddad, 2012 ; Wen, 2008 ).

The program was reformulated between 2009 and 2011 and incorporated additional NTs. Since then, it has been called the National Program Telehealth Brazil Networks (PTBR) and was formally institutionalized through Ordinance nº 2.546/2011 ( Brasil, 2011 ). Besides redefining and expanding the program, this institutionalization also established four telehealth services: teleconsulting, telediagnosis, tele-education and Formative Second Opinion (SOF) ( Silva et al., 2015 ). Telehealth thus represents an important strategy to strengthen national policies on primary healthcare in SUS, since it can guarantee large-scale attendance ( Oliveira et al., 2017 ; Silva and Moraes, 2012 ; Silva et al., 2015 ).

Since there was no standard implementation process, each NT carried out its own creation and deployment activities according to its demands and regional needs, without documenting or describing the process in a systematic way ( Haddad, 2012 ). Consequently, each NT autonomously developed its own telehealth platform (PNT) and its own models to assess the offered services.

According to Vargens (2014) , the heterogeneity of the Information Systems (IS) within telehealth is a natural consequence of how ISs were created in SUS, and it occurred essentially for two reasons. The first is related to cultural, social, regional and local diversity, aspects that generate different demands and priorities for each NT. The second is related to the lack of an interoperability regulation, as well as the absence of a defined technological framework and of a specified data model in the health sector ( Rodrigues et al., 2013 ).

These problems date back to the pilot project of PTBR. However, they only became evident with the accession of new NTs, when the number of centers grew from 9 to 47 in 2012 ( Oliveira et al., 2017 ).

According to Lopes et al. (2014) , the expansion of telehealth required the monitoring and evaluation of the offered services to support the improvement and sustainability of the program. Therefore, in 2013 the national coordination of the program worked in association with the NTs to create monitoring and evaluation indicators that could be used on a national scale. As a result, Technical Note 05/2014 (NT5) was published on February 10th, 2014 ( Brasil, 2014 ).

The lack of systematic evaluations of the program at the national level had existed since the pilot project ( Silva et al., 2015 ). However, NT5 provided a favorable environment for the adoption of structure, process and result indicators, which are used to assess the architecture presented in this document.

In the international context, the surveyed papers generally focus on the development of frameworks capable of evaluating different aspects of telehealth, providing a better analysis of the offered services ( AlDossary et al., 2017 ; Chang, 2015 ). Several systematic reviews ( Agboola et al., 2014 ; Maeder and Poultney, 2016 ; van Dyk, 2014 ) discuss approaches, processes and techniques by comparing the existing assessment models, analyzing the most efficient practices and pointing out the weaknesses of the models currently in use.

Considering both the national and the international scope, the adoption of a software solution that allows telehealth results to be assessed is still a challenge to be overcome. Therefore, this research describes the specification, implementation and validation of an architecture, entitled SMART, based on Business Intelligence (BI) techniques and a service-oriented architecture (SOA), to integrate the several heterogeneous PNTs developed by the NTs and to standardize the information.

Methods

According to Waas et al. (2013) , a typical BI infrastructure is composed of five layers: 1) a layer that represents heterogeneous and distributed data sources; 2) an extraction, transformation and load (ETL) layer, responsible for loading data into a central database called data warehouse (DW); 3) a layer involving the data warehouse, responsible for storing integrated and summarized data; 4) repositories, called data marts, responsible for storing subsets of aggregate data from the central DW; and 5) an online analytical processing (OLAP) layer responsible for various types of data analysis and visualizations.

The proposed solution includes a national interoperability standard for information exchange between ISs in the context of PTBR and the SMART architecture. BI techniques were used to extract external data and produce reports for the Ministry of Health (MS) policy makers in a timely and accessible way, thereby supporting the decision-making process.

PTBR National Interoperability Model (MNI-PTBR)

According to Brasil (2018) and Rodrigues et al. (2013) , broad interoperability requires cooperation agreements at three levels: technical, semantic and organizational. The first level relates to the capability of two or more systems to communicate for data exchange; the second is related to the definition of content in a controlled vocabulary. Finally, the last level refers to the cooperation between organizations, obtained by aligning processes.

Regarding PTBR, the ISs of the NTs were developed based on local demands and did not address issues related to information sharing. This resulted in a lack of standardization of both vocabulary and data interchange, which posed great challenges to achieving broad interoperability.

In this context, the data dictionaries of the telehealth systems databases of the main NTs were analyzed aiming at the standardization of data representation and definition of unique terminologies to be applied to the data interchanged with SMART. Both the primary activities (teleconsulting, telediagnosis, SOF and tele-education) and secondary activities (implementation planning, articulation and monitoring of the offered services) of NTs were identified during that analysis. The public health information systems were also evaluated in order to verify the univocal identifiers already used in Brazil.

Table 1 shows the public databases used, with the adopted univocal identifiers. As a result of this research, a minimum dataset to be adopted in Brazil by the ISs in the context of PTBR, as well as the definition of data exchange between these systems and SMART, were specified based on NT5 ( Brasil, 2014 ).

Table 1 Public databases used for data exchange in telehealth systems.

Database / Terminology used
CNES 1: CNES code for identification of the health establishment; (CPF of the health professional, CNES code and CBO code) to identify the health professional's affiliation; INE code to identify the health team.
IBGE 2: IBGE code to identify cities, federal units and regions of Brazil.
CBO 3: CBO code for identification of health professionals' occupation; the first 4 digits identify the occupation's category.
ICD-10 4: ICD-10 code for classification of teleconsulting diseases.
ICPC-2 5: ICPC-2 code for classification of teleconsulting diseases.
DeCS 6 (Bireme): DeCS code to identify the themes of tele-education activities.
SIA/SIH 7: SIA/SIH code to classify the type of exam of the telediagnosis.

1 National Registry of Health Establishments (CNES), updated by the Department of Health Informatics (DATASUS), whose function is to make available information on the current conditions of the physical infrastructure and the functioning of health services in the country.

2 Brazilian Institute of Geography and Statistics (IBGE) is the agency responsible for the official collection of statistical, geographic, cartographic, geodetic and environmental information in Brazil.

3 Brazilian Classification of Occupations (CBO), an identifier code for occupations in the job market, used for classificatory purposes in administrative and household registries, made available by the Ministry of Labor.

4 International Statistical Classification of Diseases and Related Health Problems (ICD-10), published by the World Health Organization (WHO), aims to standardize the codes of diseases and other health-related problems.

5 International Classification of Primary Care (ICPC-2). Its main objective is to classify issues related to people, not diseases. It allows the classification of the problems diagnosed by health professionals, the reasons for the consultation and the corresponding responses.

6 The trilingual and structured vocabulary DeCS - Health Sciences Descriptors - was created by the Latin American and Caribbean Center on Health Sciences Information (BIREME) for use in indexing articles from scientific journals, books, congress proceedings, technical reports and other types of materials, as well as for searching and retrieving subjects from the scientific literature in LILACS, MEDLINE and other databases.

7 Hospital and Outpatient Information Systems (SIH/SIA), updated by DATASUS. The SIA/SUS comprises data on the number of outpatient procedures performed under SUS coverage, in either the public or the private network.

Each service offered by PTBR has a message format for data exchange; a data-schema definer to verify which attributes are expected and whether the possible values for each attribute are consistent; and a web service to receive the production data of the NTs. Technical details of MNI-PTBR are available in SMART (Sistema…, 2018).

Considering the above-mentioned challenges, technical interoperability was achieved with the use of web services to promote intercommunication between the heterogeneous systems. JSON was the data format adopted for information exchange. Semantic interoperability is guaranteed through the definition of the terminologies used for data exchange, which are validated through the proposed schema files. Finally, organizational interoperability is achieved through cooperation policies between the MS and the NTs, the definition of the business process and the regulation of the exchange of information between the ISs of the NTs and SMART.
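To make the exchange format concrete, the sketch below validates a hypothetical teleconsulting message in JSON against a data-schema definition using the jsonschema Python library; the field names, patterns and values are illustrative assumptions, not the actual MNI-PTBR schema (which is published in SMART (Sistema…, 2018)).

```python
# Minimal sketch of syntactic validation of a production message against a schema.
# Field names, patterns and values are hypothetical; the real data-schema definitions
# are published as part of MNI-PTBR (Sistema..., 2018).
from jsonschema import ValidationError, validate

TELECONSULTING_SCHEMA = {
    "type": "object",
    "required": ["cnes", "cpf_professional", "cbo", "icd10", "date"],
    "properties": {
        "cnes": {"type": "string", "pattern": "^[0-9]{7}$"},              # health establishment
        "cpf_professional": {"type": "string", "pattern": "^[0-9]{11}$"}, # professional's CPF
        "cbo": {"type": "string", "pattern": "^[0-9]{6}$"},               # occupation code
        "icd10": {"type": "string"},                                       # disease classification
        "date": {"type": "string"},
    },
}

message = {
    "cnes": "2653982",
    "cpf_professional": "12345678901",
    "cbo": "225125",
    "icd10": "J45",
    "date": "2018-03-15",
}

try:
    validate(instance=message, schema=TELECONSULTING_SCHEMA)
    print("message accepted")
except ValidationError as err:
    print("message rejected:", err.message)
```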

Data warehouse project

The dimensional modeling of SMART’s DW was implemented based on the lifecycle principles of Kimball and Ross (2013) , in which the business processes are identified first, followed by the proper definition of the granularity, dimensions and facts. The star schema was used: it consists of a fact table that contains the process metrics, linked to dimension tables responsible for contextualizing the facts.

The guiding document for SMART’s development was NT5 ( Brasil, 2014 ), which includes the performance indicators used to measure and evaluate the actions of telehealth in Brazil. Five business processes were identified based on NT5 and on the monitoring spreadsheets used by the PTBR coordination: 1) analysis of teleconsulting production; 2) analysis of SOF production; 3) analysis of telediagnosis production; 4) analysis of participation in tele-education; and 5) analysis of the impact of telehealth coverage.

Each business process is covered by one or more data marts, each consisting of one or more star schemas. The fact tables that compose the star schemas initially contain facts related to the indicator blocks in NT5. The main dimension tables that surround the fact tables are: time, with month granularity; localization, with health establishment granularity; and the telehealth center.
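As an illustration of this modeling, the sketch below expresses a hypothetical star schema for a teleconsulting data mart with Django's ORM, which the project adopts; the table, field and metric names are assumptions for illustration rather than SMART's actual schema.

```python
# Hypothetical star schema for a teleconsulting data mart, sketched with Django's ORM.
# Table, field and metric names are illustrative; they do not reproduce SMART's DW.
from django.db import models

class DimTime(models.Model):              # month granularity
    year = models.IntegerField()
    month = models.IntegerField()

class DimLocalization(models.Model):      # health establishment granularity
    cnes_code = models.CharField(max_length=7)
    city_ibge_code = models.CharField(max_length=7)

class DimTelehealthCenter(models.Model):
    name = models.CharField(max_length=120)

class FactTeleconsulting(models.Model):   # fact table with the process metrics
    time = models.ForeignKey(DimTime, on_delete=models.PROTECT)
    localization = models.ForeignKey(DimLocalization, on_delete=models.PROTECT)
    center = models.ForeignKey(DimTelehealthCenter, on_delete=models.PROTECT)
    requested = models.IntegerField()     # metric: teleconsultings requested in the month
    answered = models.IntegerField()      # metric: teleconsultings answered in the month
```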

SMART architecture

Figure 1 presents the external actors of the architecture, how they influence the solution’s requirements and restrictions, and an overview of the components and how they interact with one another and with the external environment to consistently receive the production data sent by the NTs.

Figure 1 SMART architecture overview. It shows the external actors of the architecture and an overview of the components and how they interact in a collaborative way with one another and with the external environment to receive the NT’s production data.  

The external actors are the NTs, the MS, the citizens, the PNTs and the SOF platform, which is maintained by Bireme. The citizens have access to the public portal. The PNTs are responsible for sending the NT’s production. The Bireme platform sends data related to the SOF’s elaboration flow. The MS includes the National Coordination of Telehealth (CNPTBR) and the General Coordination of Basic Health Care Management (CGGAB). The CNPTBR manages all administrative processes of its coordination through the web portal and it also monitors the information of the telehealth activities with tools that allow an integrated data overview. These tools are also available to the NTs, although they can only view information related to their activities. CGGAB manages all the financial incentive flow of monthly funding for the intercity and interstate NTs.

The external parties of the architecture interact with its functionalities through the User Interface (UI) and the Web Service Interface (WSI). The UI, a graphical interactive interface based on web development technologies, is composed of the Business Applications (BAP), the Ad hoc Analysis (ADHOC) and the Data Visualization Dashboard (DVD). The WSI exposes to the endpoints the operations provided by the Interaction Service (ISE) and Data Acquisition & Delivery (DAD) components.

The Security Component (SEC) is responsible for ensuring SMART’s privacy, authenticity, integrity and confidentiality, and it can be accessed from any point in the architecture. Encryption was used in the communication channel between the NTs’ ISs and SMART in order to ensure that the information is not modified by unauthorized third parties. A pre-shared security key was used to ensure that the received data actually comes from an authorized telehealth system and actually belongs to the sending NT. Authentication of authorized users is delegated to Sabiá, a national authentication service that provides a unique identifier for each user. Access permissions to certain areas of the architecture are set at the user, group and role levels.
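A minimal sketch of how such a pre-shared key check could be performed on the server side is shown below; the key registry and the constant-time comparison are illustrative assumptions, not SMART's actual implementation.

```python
# Hypothetical server-side check of the pre-shared key sent by an NT's information system.
import hmac

PRESHARED_KEYS = {"nt-example": b"example-secret-key"}  # illustrative key registry, one key per NT

def is_authorized(nt_id: str, presented_key: bytes) -> bool:
    expected = PRESHARED_KEYS.get(nt_id)
    if expected is None:
        return False
    # constant-time comparison avoids leaking information through timing
    return hmac.compare_digest(expected, presented_key)
```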

The architecture has several interfaces that provide administrative functions and business process flows. However, addressing each one of them is out of the scope of this work, and they are represented by the Business Applications (BAP) component. The business logic for each of the interfaces is implemented by the Business Process (BP) component.

One of the premises of the architecture is the supply of information in an efficient and reliable way, providing autonomy and agility for decision making. The Ad hoc Analysis (ADHOC) and Data Visualization Dashboard (DVD) tools were developed for that purpose. The ADHOC component allows the drill-down, roll-up, slice-and-dice and pivot operations, as well as the dynamic filters inherent to an OLAP tool. Data can be visualized in table, graph and map formats, and the user can save or customize queries. The DVD enables the simultaneous monitoring of several indicators in various types of views, such as tables, graphs or maps, in a single environment.

Figure 2 presents the three main components directly related to the architecture’s objective: Data Aggregation & Delivery Information (DAGDI), Interaction Services (ISE) and Data Acquisition & Delivery (DAD). It is also worth mentioning the Asynchronous Task Queue Manager (ATQM) service, which enables the execution of potentially long operations asynchronously. It is used in the architecture to guarantee performance and scalability since it allows the execution of simultaneous and distributed tasks on more than one server. Values highlighted by circles are used in the activity flowchart ( Figure 3 ) to indicate in which subcomponent each activity is occurring.

Figure 2 Functional view of the SMART architecture, showing the main subcomponents responsible for processing the reception of production data. The numbers highlighted by circles are used in the flowchart of Figure 3 .

Figure 3 Flow chart of the production data reception process and the generation of decision support data. The black solid line corresponds to the main flow; the blue dashed line corresponds to the alternative flow, which occurs when the univocal identifiers are not in the local database; and the red dotted line corresponds to the exception flow, which occurs when the received data do not conform to the business rules. The numbers highlighted by the circles correspond to the features shown in Figure 2 .

The Interaction Service (ISE) is responsible for enabling intercommunication with the NTs’ ISs, and its two main subcomponents are the Validation Engine (VE) and the Integration Engine (IIE). The VE performs three types of validation on the received data: syntactic, semantic and business validation. The syntactic validation analyzes whether the data meet the rules specified in the data-schema definitions. The semantic validation checks whether the univocal identifiers exist in the database. Finally, the business validation ensures that the policies defined by the MS are fulfilled. The IIE component is responsible for obtaining data related to univocal identifiers that were not found in the “DB Transactions” database.
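The sketch below illustrates, with made-up identifiers and rules, how the three validation stages could be chained; it is a simplified stand-in for the VE logic, not the actual code.

```python
# Simplified stand-in for the VE's three validation stages; identifiers and rules are made up.
VALID_IDENTIFIERS = {"2653982", "225125"}   # would be loaded from the "DB Transactions" database

def syntactic_ok(record):
    return all(key in record for key in ("cnes", "cbo", "month"))

def semantic_missing(record):
    return [code for code in (record["cnes"], record["cbo"]) if code not in VALID_IDENTIFIERS]

def business_ok(record):
    return 1 <= record["month"] <= 12       # stand-in for the policies defined by the MS

def validate_production(record):
    if not syntactic_ok(record):
        return "rejected: syntactic error"
    missing = semantic_missing(record)
    if missing:
        return f"pending: fetch identifiers {missing} through the IIE/DAD"
    if not business_ok(record):
        return "rejected: business rule violation"
    return "accepted"

print(validate_production({"cnes": "2653982", "cbo": "225125", "month": 3}))
```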

Regarding PTBR’s National Interoperability Model, the system uses unique identifiers already in use in Brazilian public databases to guarantee the interoperability of the information exchanged with the heterogeneous NTs’ ISs. Therefore, it was necessary to create a mechanism to extract the data related to these identifiers and, for performance reasons, these data should already be in a local database before the production data sent by the NTs are received. In addition, the architecture should also make these data available to the NTs, since they do not have this information in their data dictionaries.

Based on the requirements mentioned above, the Data Acquisition & Delivery (DAD) component was developed. Figure 2 shows that the DAD is able to extract data from different sources and in several formats. The DAD is responsible for executing the ETL’s Extract-Transform step, in which data are extracted, go through a cleaning and standardizing process and are then saved in a staging area, the “BD Repository” ( Kimball and Ross, 2013 ). All extraction tasks are the responsibility of the Integration Engine (DIE) subcomponent; they are scheduled by the Job Scheduler (JOB) and executed asynchronously and in a distributed fashion by the ATQM service. Depending on how often the data change, tasks are executed daily, weekly or monthly. The DIE has a resilient recovery mechanism: when the process is interrupted, it resumes where it left off. For each data source, there is a specific adapter that converts the original format into the data model defined in the architecture.
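The adapter idea can be sketched as below, assuming a hypothetical CSV export of CNES data; the source column names and the target fields are illustrative only.

```python
# Illustrative adapter converting one external source (a hypothetical CNES CSV export)
# into the internal data model; column and field names are assumptions.
import csv

class SourceAdapter:
    def extract(self):
        raise NotImplementedError

class CnesCsvAdapter(SourceAdapter):
    def __init__(self, path):
        self.path = path

    def extract(self):
        with open(self.path, newline="", encoding="utf-8") as handle:
            for row in csv.DictReader(handle):
                # map source columns to the fields expected by the staging area
                yield {"cnes": row["CO_CNES"], "city_ibge_code": row["CO_MUNICIPIO"]}
```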

The DAGDI component has two relevant subcomponents: Data Analytical Engine (DAE) and Data Aggregation Engine (DAG). DAE executes the OLAP operations according to the settings defined by the user. The DAG is responsible for the ETL’s Transform-Load step and transforms the original data from the “DB Transaction” and “DB Repository” databases into decision support data according to the predefined transformation rules and algorithms. This data is then loaded into the DW “DB Aggregate”.

Activity stream overview

The activity flow for the process of generating decision support data is illustrated in Figure 3 . The numbers highlighted by circles are related to Figure 2 and correspond to the point at which each task occurs. Three scenarios are included in the flow. The main one, in which everything happens successfully, is highlighted by the solid black line; the alternative scenario, highlighted by the blue dashed line, occurs only if the univocal identifiers are not in the ISE component’s transactional database; the exception scenario, highlighted by the red dotted line, occurs when the received data are not accepted or the univocal identifiers are not found. The entire process starts with the PNT sending the data, which are received by the ISE. This component first verifies the authenticity of the pre-shared key and then, if the data conform to the validation rules defined in the architecture (syntactic, business and semantic), the production data are saved in their respective relational tables in the “DB Transactions”. Three parallel processes are then initiated: a success response message is sent to the PNT; an e-mail with a summary report on the received data is sent to the managers responsible for the NT’s IS; and the ATQM service is triggered, which places the task in the execution queue to be further processed by the DAGDI component and, at the end, sends summary e-mails to the managers responsible for the NTs and to the managers of the national coordination.

When the semantic validator does not find the univocal identifiers, the ATQM is triggered and calls the IIE subcomponent to obtain the identifier data from external sources through the DAD component. At the end of this process, a failure response message is sent to the PNT.

Implementation

The main guideline for selecting the technologies used to implement the proposed architecture was the use of widely accepted and well-known technologies in the scientific research area. Open-source technologies were preferred because they can be more easily explored and extended.

When the DAE receives the user settings, three steps are executed: 1) conversion of the settings into SQL queries; 2) processing of the “DB Aggregate” data in memory; and 3) preparation of these data for later submission to the ADHOC interface. A Python object was implemented to perform steps 2 and 3.

Python was adopted as the programming language because it has become the de facto standard for exploratory, interactive and computation-driven scientific research. Thanks to its high-level interactive nature and its maturing ecosystem of scientific libraries, it is an appealing choice for algorithmic development and exploratory data analysis ( Millman and Aivazis, 2011 ).

PostgreSQL was chosen because it is considered the most powerful open-source spatial database engine ( Obe and Hsu, 2015 ). The PostGIS extension was used to provide PostgreSQL with several spatial data types and over 300 functions for working with them. Window functions and the CUBE, ROLLUP and GROUPING SETS operators of the GROUP BY clause allowed the optimization of SQL query performance ( Jain et al., 2015 ).
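As an example of this kind of aggregation, the query below (with assumed table and column names) uses ROLLUP to produce city totals, region subtotals and a grand total in a single pass; it is illustrative and does not reflect SMART's actual DW schema.

```python
# Illustrative ROLLUP aggregation of teleconsultings by region and city in a single query.
# Table and column names are assumptions and do not reflect SMART's actual DW schema.
ROLLUP_QUERY = """
SELECT l.region, l.city, SUM(f.requested) AS requested
FROM fact_teleconsulting f
JOIN dim_localization l ON l.id = f.localization_id
GROUP BY ROLLUP (l.region, l.city)   -- city totals, region subtotals and a grand total
ORDER BY l.region NULLS LAST, l.city NULLS LAST;
"""
```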

The proposed architecture makes extensive use of two Python libraries, NumPy and Pandas. Both are core packages of the SciPy ecosystem and provide low-level optimizations for scientific computation across platforms. Pandas provides a common interface for formatting and manipulating data and NumPy provides basic array functionality ( McLeod, 2015 ). NumPy arrays are used for mapping the star schema, and Pandas formats and manipulates the data obtained from the “DB Aggregate”.
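A minimal sketch of the three DAE steps described above, combining an SQL query over an assumed DW schema with in-memory processing in Pandas, is shown below; connection parameters, table names and the pivot layout are illustrative assumptions.

```python
# Sketch of the three DAE steps: settings -> SQL, in-memory processing, output for ADHOC.
# Connection parameters, table and column names are illustrative assumptions.
import pandas as pd
import psycopg2

def run_adhoc(settings):
    # 1) convert the user's settings into a SQL query over the DW
    sql = (
        "SELECT t.year, c.name AS center, SUM(f.requested) AS requested "
        "FROM fact_teleconsulting f "
        "JOIN dim_time t ON t.id = f.time_id "
        "JOIN dim_telehealth_center c ON c.id = f.center_id "
        f"WHERE t.year = {int(settings['year'])} "
        "GROUP BY t.year, c.name"
    )
    # 2) process the "DB Aggregate" data in memory
    with psycopg2.connect(dbname="db_aggregate") as connection:
        frame = pd.read_sql(sql, connection)
    # 3) prepare the data for the ADHOC interface (here, a simple pivot table)
    return frame.pivot_table(index="center", columns="year", values="requested")
```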

REST (Representational State Transfer) was used as the architectural style for implementing the services. RESTful web services are software services published on the web that take full advantage of the HTTP protocol. We used RESTful web services because SOAP requires the XML format, while RESTful web services can be implemented with multiple formats (XML, JSON, CSV, etc.), providing more flexibility to the clients during the interoperability process ( Nurseitov et al., 2009 ; Pautasso, 2014 ). JSON (JavaScript Object Notation) was used as the format for all data interchange in the project because it has a human-readable representation, can be easily parsed by computers, and is faster and uses fewer computational resources than the XML format ( Changbin and Xu, 2010 ).

Django was selected because it encourages rapid software development, facilitating the creation of complex, database-driven web applications ( Arifin et al., 2017 ). It supports PostgreSQL and has a powerful ORM (Object-Relational Mapping), which provides an abstraction layer for interacting with databases. The third-party package Django REST Framework was used to implement the RESTful web services. Django's architecture is known as MTV (Model-Template-View) and fits the architecture proposed in this article: the models interact with PostgreSQL, the templates support data presentation in HTML and the views interact with the RESTful API.
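A minimal Django REST Framework endpoint for receiving production data might look like the sketch below; the serializer fields and responses are assumptions for illustration, not SMART's actual API.

```python
# Minimal Django REST Framework endpoint sketch for receiving NT production data.
# The serializer fields and responses are illustrative, not SMART's actual API.
from rest_framework import serializers, status
from rest_framework.response import Response
from rest_framework.views import APIView

class TeleconsultingSerializer(serializers.Serializer):
    cnes = serializers.CharField(max_length=7)
    cbo = serializers.CharField(max_length=6)
    month = serializers.IntegerField(min_value=1, max_value=12)

class TeleconsultingProduction(APIView):
    def post(self, request):
        serializer = TeleconsultingSerializer(data=request.data, many=True)
        if not serializer.is_valid():
            return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
        # persist to "DB Transactions" and enqueue the asynchronous aggregation here
        return Response({"received": len(serializer.validated_data)},
                        status=status.HTTP_201_CREATED)
```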

The ATQM service was implemented with Celery, an asynchronous task queue/job queue based on distributed message passing ( McLeod, 2015 ), which is of great importance to SMART because it abstracts the complexity of building a distributed task management system.
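The sketch below shows how such an asynchronous task could be declared with Celery; the broker URL, the retry policy and the load_into_data_warehouse helper are hypothetical.

```python
# Sketch of an asynchronous aggregation task managed through Celery.
# The broker URL, retry policy and load_into_data_warehouse() are hypothetical.
from celery import Celery

app = Celery("smart", broker="amqp://localhost")

@app.task(bind=True, max_retries=3)
def aggregate_production(self, submission_id):
    try:
        # transform the transactional records of this submission into DW facts (DAGDI step)
        load_into_data_warehouse(submission_id)   # hypothetical helper
    except Exception as exc:
        raise self.retry(exc=exc, countdown=60)

# usage: enqueue the work without blocking the request that received the data
# aggregate_production.delay(submission_id=42)
```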

The communication with the NTs’ ISs is encrypted and uses the HTTPS (Hypertext Transfer Protocol Secure) protocol, which carries HTTP traffic over the TLS/SSL encrypted transport protocols to ensure confidentiality and integrity. HTTPS is the dominant protocol for securing web traffic ( Kranch and Bonneau, 2015 ).
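From the PNT side, a submission over HTTPS with the pre-shared key could resemble the sketch below; the endpoint URL, header name and payload are illustrative assumptions, not the real SMART web service.

```python
# Illustrative PNT-side submission over HTTPS; the endpoint URL, header name and payload
# are assumptions and do not correspond to the real SMART web service.
import requests

payload = [{"cnes": "2653982", "cbo": "225125", "month": 3}]
response = requests.post(
    "https://smart.example.gov.br/api/teleconsulting",   # hypothetical endpoint
    json=payload,
    headers={"X-Preshared-Key": "example-secret-key"},   # hypothetical header
    timeout=30,
)
response.raise_for_status()
```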

The experiments were performed with Locust, an open-source platform that allows load tests to be configured so as to simulate virtual users sending requests simultaneously ( Sernadela et al., 2017 ).
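A load test resembling the ones described here could be written as below using the current Locust API; the endpoint, payload and wait times are illustrative and do not reproduce the original test scripts.

```python
# Minimal Locust user resembling the described load tests; endpoint, payload and wait
# times are illustrative and do not reproduce the original test scripts.
from locust import HttpUser, task, between

class VirtualPNT(HttpUser):
    wait_time = between(1, 5)

    @task
    def send_teleconsulting(self):
        self.client.post(
            "/api/teleconsulting",                        # hypothetical endpoint
            json=[{"cnes": "2653982", "cbo": "225125", "month": 3}],
            headers={"X-Preshared-Key": "example-secret-key"},
        )
```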

Experiments

Regarding the activity flow overview in Figure 3 , after receiving the production data, the system performs two tasks: it first processes the received data (ISE component) and then invokes the ATQM service to asynchronously transform the data (DAGDI component) into a decision support format in the data warehouse (DW). The ISE component is critical in the structure because it has to ensure that the data are received and consistently processed, regardless of their volume. In this scenario, the tests aim to verify the reliability of the application under a high workload during the reception and processing of the production data. Therefore, performance tests simulating the sending of teleconsulting (TC), telediagnosis (TD) and tele-education participation (TE) data to the production environment were executed.

According to Sharmila and Ramadevi (2014) , performance tests are executed to verify the system behavior under high and excessive load conditions. To execute performance tests, Kundu (2012) recommends creating realistic scenarios with medium and heavy workloads obtained from past data. Our tests were based on these two conditions and focused on assessing the robustness of the architecture. Table 2 shows an initial analysis performed over the period from May 2016 to April 2018. The maximum data volume was used, since the objective of the tests was to check the behavior of the architecture under extreme workload.

Table 2 Historical data with average and maximum production volumes (May/16 to Apr/18).  

Type of activity | Average: number of records / data size (MB) | Maximum: number of records / data size (MB)
Teleconsulting | 247 / 0.07 | 3812 / 1.04
Telediagnosis | 11627 / 3.69 | 53195 / 14.25
Tele-education | 295 / 0.04 | 3798 / 0.46

Currently, 18 PNTs send production data from 55 NTs to SMART. To assess simultaneous data sending from the PNTs, two series of experiments were performed. In the first one, the number of simulated users was equal to the number of PNTs and the data sent covered a period of 60 months. This experiment involved 3300 submissions per activity, resulting in 9900 requests in 104 minutes. In the second experiment, the numbers of PNTs and NTs were doubled, and a period of 30 months was considered in order to keep the same number of requests; this experiment lasted 101 minutes. In both scenarios, each PNT randomly sent data of each type of activity.

The experiment was designed to resemble the real scenario. The infrastructure used to assemble the test environment consisted of four virtual servers with the same configuration: 16 v-cores Intel(R) Xeon(R) CPU E5-2670 @ 2.60GHz, 16GB RAM, Debian GNU/Linux 9 operating system, connected to a 1Gbps network. The first server hosted the ISE and ATQM components, the DAGDI was deployed on the second server, and the DAD and the database were installed on the third and fourth servers, respectively. Since the purpose of the test was to measure ISE performance, no metrics for the DAGDI component were included. The third server was not used in the tests, since the univocal identifiers were already in the database. The simulated data followed the flow highlighted by the solid line in Figure 3 . In addition to the servers, there was also a client on the same network. The client was a desktop machine that simulated the virtual PNTs and had the following configuration: Intel Core i7 @ 2.2GHz quad core, 16GB RAM, OS X Sierra operating system.

In order to quantify the performance under the considered workloads, different measures were taken during the experiments. Asadollah and Chiew (2012) suggest the use of the response time (RT), transmission time (TT) and processing time (PT) to assess the performance of a web service. In this article, RT is the time between the data being sent and the response being received (difference between points 1 and 16 in Figure 2 ). PT is the time between the request being received and the response being sent (difference between points 2 and 12 in Figure 2 ). TT is the difference between RT and PT, and it is directly influenced by the network layer ( Cito et al., 2015 ). For better accuracy of the PT metric, the architecture was adjusted to record the date and time at the moment of data reception (point 2 of Figure 2 ) and at the moment of response sending (point 12 of Figure 2 ). In addition to these metrics, the following measures were taken in order to check which ISE subcomponents affect its processing time: the validation time (VT), which corresponds to the sum of the subcomponents’ execution times (3, 4 and 5 in Figure 2 ); the time to save the data (ST) in the database (point 11 in Figure 2 ); and the time to obtain the data of the univocal identifiers (difference between points 10 and 8 in Figure 2 ). Since the data of the univocal identifiers were already in the database, this last measure was always zero and was discarded.

Results

This section presents the results of the metrics obtained from the experiments. Line graphs were used for a general analysis of the time data ( Cannata et al., 2014 ; Cito et al., 2015 ). To obtain a more detailed view of the distribution of processing times, histograms were used ( Hsieh et al., 2012 ).

In order to smooth the variation of the response times, a simple moving average (SMA) over 100 observations was applied ( Cito et al., 2015 ). Figure 4 shows 6 graphs and compares the performance of the SMART architecture during the first experiment (with 18 users) and the second one (with 36 users). Graphs 4(a), 4(b) and 4(c) display the raw response times (y-axis) of each submission (x-axis) and the number of failures (exceptions) that occurred during the experiment for the teleconsulting, telediagnosis and tele-education services, respectively. The graphs clearly show that no exceptions were detected in a total of 19800 requests. Graphs 4(d), 4(e) and 4(f) display the response and processing times (y-axis) in ranges of 100 requests (x-axis) for the same services. When the number of simultaneous users was doubled, RT increased in approximately the same proportion for teleconsulting and telediagnosis, but practically quadrupled for tele-education. However, the PT values remained practically the same when compared to the RT in graphs 4(d), 4(e) and 4(f) for each activity. The data of the second experiment are more volatile, probably because of the network layer. Around the points between 1500 and 2000 requests, there is a small increase in the PT, visually noticeable for teleconsulting; that increase is not large enough to be considered a significant change.
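For reference, a 100-observation simple moving average of the kind used to smooth the response-time series can be computed with Pandas as below; the data here are synthetic.

```python
# Synthetic example of the 100-observation simple moving average used to smooth response times.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
response_times = pd.Series(9.8 + rng.normal(0, 0.8, size=3300))   # synthetic RTs in seconds
sma_100 = response_times.rolling(window=100).mean()               # SMA over 100 observations
print(sma_100.tail())
```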

Figure 4 Performance evaluation of SMART architecture during processing of a high workload (18 users) and in extreme conditions (36 users).  

Table 3 presents the numerical results of the minimum, maximum, mean and standard deviation of the metrics used in Figure 4 . The values are highlighted by the type of service for each experiment.

Table 3 Response, processing and transmission times of the experiments. 

Type | Users | RT (s): Min / Max / AVG / SD | PT (s): Min / Max / AVG / SD | TT (s): Min / Max / AVG / SD
TC | 18 | 7.72 / 14.27 / 9.79 / 0.82 | 7.44 / 11.30 / 9.21 / 0.61 | 0.12 / 4.90 / 0.58 / 0.55
TC | 36 | 11.94 / 27.59 / 20.43 / 2.22 | 7.54 / 11.66 / 9.47 / 0.59 | 2.06 / 18.19 / 10.96 / 2.19
TD | 18 | 14.90 / 24.18 / 17.18 / 1.06 | 14.71 / 21.44 / 16.77 / 0.92 | 0.12 / 4.91 / 0.61 / 0.53
TD | 36 | 19.26 / 36.25 / 28.03 / 2.35 | 15.23 / 21.44 / 17.12 / 0.91 | 1.52 / 18.23 / 10.91 / 2.16
TE | 18 | 1.54 / 6.72 / 2.34 / 0.59 | 1.37 / 2.48 / 1.79 / 0.19 | 0.10 / 4.89 / 0.56 / 0.54
TE | 36 | 5.93 / 19.61 / 12.77 / 2.15 | 1.43 / 2.82 / 1.85 / 0.17 | 4.13 / 17.63 / 10.92 / 2.14

Columns: RT is the response time, PT is the processing time and TT is the transmission time (latency). Min is the minimum value of the sample, Max is the maximum value, AVG is the mean and SD is the standard deviation. Rows: TC stands for teleconsulting, TD for telediagnosis and TE for participation in tele-education; 18 refers to the experiment with high workload, simulating 18 simultaneous users, and 36 to the experiment with extreme workload.

Figures 5a, 5b and 5c show the distribution histograms of the processing times of the first experiment, and Figures 5d, 5e and 5f those of the second one; their minimum and maximum values are shown in Table 3 . The longest tail of the distributions is on the right, indicating the occurrence of longer times with low frequency, visually noticeable in Figures 5b, 5c, 5e and 5f; these can be considered outliers (atypical values). The red dashed line represents the distribution curve defined by the mean and the standard deviation of the sample. The vertical dotted lines represent the 5th, 50th (median), 95th and 99th percentiles. For teleconsulting, their respective values in seconds are 8.26, 9.20, 10.26 and 10.70 in the first experiment, and 8.51, 9.46, 10.45 and 10.91 in the second experiment. For telediagnosis, the 5th, 50th, 95th and 99th percentiles of the first experiment were respectively 15.46, 16.69, 18.36 and 19.19, while the values for the second experiment are 15.83, 17.05, 18.65 and 19.45. For tele-education, the 5th percentile was 1.49, the 50th was 1.77, the 95th was 2.12 and the 99th was 2.25 in the first experiment, and 1.57, 1.84, 2.17 and 2.30, respectively, in the second experiment. The 5th percentile of teleconsulting in the first experiment ( Figure 5 a) was 8.26 seconds, which shows that only 5% of the processing times were below 8.26 seconds; the 50th percentile (median) shows that 50% of the times were below 9.20 seconds and the other 50% above; the 95th percentile indicates that 95% of the production submissions were completed in 10.26 seconds or less; and the 99th percentile shows that 99 out of 100 submissions were completed within 10.70 seconds. The same reasoning applies to the percentiles of the other figures.

Figure 5 Histograms of the distribution of processing time of the experiment with 18 concurrent users (a, b, c) and with 36 (d, e, f).  

Discussion

In this article, we analyzed SMART’s performance under high and extreme workloads. The results of the experiments show that, despite the increase in RT over the different workloads, the PT remains practically the same, which leads to the conclusion that the RT was influenced by the transmission time caused by the network layer. This can be easily observed in the values in Table 3 , and it is most evident for tele-education, for which the average TT rose from 0.56 in the first experiment to 10.92 in the second. The tele-education TT was much larger than that of the other activities. The reason for this difference is that, in the test script, each user randomly sends data of each service, so while one user is sending tele-education data, the others are sending telediagnosis and teleconsulting data. This increases the network traffic and, as the PT for tele-education is smaller, there is consequently a longer wait for the response.

In general, the histograms presented in Figure 5 show that the data are approximately symmetrical in relation to the mean, which can be confirmed by checking the averages ( Table 3 ) and the medians (50th percentile), which are practically the same. However, there is a slight tendency for the data to be concentrated at the lowest times. In Figure 5 a, 51.12% of the values are to the left of the mean, while in Figures 5b, 5c, 5d, 5e and 5f the values correspond respectively to 53.42%, 52.42%, 50.8%, 52.39% and 51.76%.

Based on Figures 5d, 5e and 5f and on the low standard deviations of the processing times presented in Table 3 , it is clear that, although the workload doubled, the processing time remained practically the same and the performance of the architecture remained stable around the average in the two experiments, which is also observed in the histograms of Figure 5 . The test results show that the system has good robustness and quality, since no failures and no error responses were recorded.

As future work, we suggest the implementation of a mechanism capable of asynchronously saving the received data in the database, since the empirical results showed that 79.92% (teleconsulting), 23.13% (telediagnosis) and 18.40% (tele-education) of the total processing time were spent saving the data in the database.

The main contributions of this research are: (a) the definition of a minimum data model applied to the development of ISs within the PTBR framework; (b) a national interoperability standard for data interchange between ISs in the context of PTBR; (c) the creation of a web-service-oriented architecture to integrate telehealth platforms; and (d) the availability of tools that enable managers to access relevant information efficiently and reliably, providing agility and assertiveness to decision making.

Acknowledgements

We thank the Ministry of Health for fomenting the research, the National Program Telehealth Brazil Networks for technically supporting the research and the Laboratory for Technological Innovation in Healthcare (LAIS) for providing the infrastructure.

How to cite this article: Paiva JC, Carvalho TPM, Vilela ABCB, Nóbrega GAS, Souza BS, Valentim RAM. SMART: a service-oriented architecture for monitoring and assessing Brazil’s Telehealth outcomes. Res Biomed Eng. 2018; 34(4): 317-328. DOI: 10.1590/2446-4740.180047

References

Agboola S, Hale TM, Masters C, Kvedar J, Jethwani K. “Real-world” practical evaluation strategies: a review of telehealth evaluation. JMIR Res Protoc. 2014; 3(4):1-11. http://dx.doi.org/10.2196/resprot.3459. PMid:25524892.

AlDossary S, Martin-Khan MG, Bradford NK, Armfield NR, Smith AC. The development of a telemedicine planning framework based on needs assessment. J Med Syst. 2017; 41(5):171-94. http://dx.doi.org/10.1007/s10916-017-0709-4. PMid:28321589.

Arifin SMN, Madey GR, Vyushkov A, Raybaud B, Burkot TR, Collins FH. An online analytical processing multi-dimensional data warehouse for malaria data. Database (Oxford). 2017; 1(1):1-20. http://dx.doi.org/10.1093/database/bax073. PMid:29220463.

Asadollah AS, Chiew TK. Web service response time monitoring: architecture and validation. Adv Math Comput Methods. 2012; 2(3):58-63. http://dx.doi.org/10.1007/978-3-642-24999-0_39.

Brasil. Ministério da Saúde. Portaria nº 35 de 04 de janeiro de 2007. Institui, no âmbito do Ministério da Saúde, o Programa Nacional de Telessaúde. Diário Oficial da República Federativa do Brasil [Internet], Brasília, 2007 [cited 2017 June 10]. Available from: http://bvsms.saude.gov.br/bvs/saudelegis/gm/2007/prt0035_04_01_2007_comp.html

Brasil. Ministério da Saúde. Portaria nº 2.546, de 27 de outubro de 2011. Redefine e amplia o Programa Telessaúde Brasil, que passa a ser denominado Programa Nacional Telessaúde Brasil Redes. Diário Oficial da República Federativa do Brasil [Internet], Brasília, 2011 [cited 2017 June 10]. Available from: http://bvsms.saude.gov.br/bvs/saudelegis/gm/2011/prt2546_27_10_2011.html

Brasil. Ministério da Saúde. Nota Técnica n° 05/2014. Define diretrizes para o monitoramento e avaliação do Programa Nacional Telessaúde Brasil Redes, conforme Portaria n. 2.546, de 27 de outubro de 2011. Diário Oficial da República Federativa do Brasil [Internet], Brasília, 2014 [cited 2017 June 20]. Available from: http://smart.telessaude.ufrn.br/static/smart/nt_005_2014.pdf

Brasil. Ministério do Planejamento, Orçamento e Gestão. Padrões de interoperabilidade de governo eletrônico [Internet]. Brasília: Secretaria de Tecnologia da Informação; 2018 [cited 2018 Jan 10]. Documento de referência. Available from: http://eping.governoeletronico.gov.br

Cannata M, Antonovic MP, Molinari ME. Load testing of HELIDEM geo-portal: an OGC open standards interoperability example integrating WMS, WFS, WCS and WPS. Int J Spat Data Infrastruct Res. 2014; 9(1):107-30. http://dx.doi.org/10.2902/1725-0463.2014.09.ART5.

Chang H. Evaluation framework for telemedicine using the logical framework approach and a fishbone diagram. Healthc Inform Res. 2015; 21(4):230-8. http://dx.doi.org/10.4258/hir.2015.21.4.230. PMid:26618028.

Changbin W, Xu Y. Web services integration method. Programme networks and distributed systems [thesis]. Gothenburg: Chalmers University of Technology; 2010.

Cito J, Gotowka D, Leitner P, Pelette R, Suljoti D, Dustdar S. Identifying web performance degradations through synthetic and real-user monitoring. J Web Eng. 2015; 14(5-6):414-42. http://dx.doi.org/10.5167/uzh-110955.

Haddad AE. Experiência Brasileira do Programa Nacional Telessaúde Brasil. GoldBook: Inovação Tecnológica em Educação e Saúde [Internet]. 2012 [cited 2017 June 11]; 1(1):12-56. Available from: http://www.telessaude.uerj.br/resource/goldbook/pdf/2.pdf

Hsieh SH, Hsieh SL, Cheng PH, Lai F. E-health and healthcare enterprise information system leveraging service-oriented architecture. Telemed J E Health. 2012; 18(3):205-17. http://dx.doi.org/10.1089/tmj.2011.0100. PMid:22480301.

Jain PJ, Rajput B, Sayankar A. Implications for data analyses and higher education with SQL and data analysis. IJFEAT. 2015; 11(1):80-6.

Kimball R, Ross M. The data warehouse toolkit: the definitive guide to dimensional modeling. USA: John Wiley & Sons; 2013.

Kranch M, Bonneau J. Upgrading HTTPS in mid-air: an empirical study of strict transport security and key pinning. In: Proceedings of the NDSS Symposium; 2015 Feb; San Diego, CA, USA. USA: Internet Society; 2015. http://dx.doi.org/10.14722/ndss.2015.23162.

Kundu S. Web testing: tool, challenges and methods. IJCSI Int J Comput Sci Issues. 2012; 9(2):481-5.

Lopes PRL, Gundim RS, Silva AB. Avaliação: um componente importante da telemedicina. In: Ribeiro JL Fo, Messina LA, Lopes PRL, editors. RUTE 100 - As 100 primeiras unidades de Telemedicina no Brasil e o impacto da Rede Universitária de Telemedicina (RUTE). Rio de Janeiro: E-papers; 2014. p. 78-87.

Maeder A, Poultney N. Evaluation strategies for telehealth implementations. In: IEEE International Conference on Healthcare Informatics (ICHI); 2016 Oct 4-7; Chicago. USA: IEEE; 2016. p. 363-6. http://doi.org/10.1109/ICHI.2016.65.

McLeod C. A framework for distributed deep learning layer design in python. arXiv preprint. 2015; 1(1):1-8.

Mendes EV. 25 anos do Sistema Único de Saúde: resultados e desafios. Estud Av. 2013; 27(78):27-34. http://dx.doi.org/10.1590/S0103-40142013000200003.

Millman KJ, Aivazis M. Python for scientists and engineers. Comput Sci Eng. 2011; 13(2):1-5. http://dx.doi.org/10.1109/MCSE.2011.36.

Nurseitov N, Paulson M, Reynolds R, Izurieta C. Comparison of JSON and XML data interchange formats. San Francisco: CAINE; 2009. p. 157-62.

Obe R, Hsu L. PostGIS in action. 2nd ed. Greenwich: Manning Publications Co; 2015.

Oliveira TC, Oliveira JG Jr, Tavares G, Rigato AFG, Pereira FWA, Carvalho FFB. The National Program Telehealth Brazil networks: a historic and situational perspective. Lat Am J Telehealth. 2017; 4(2):104-13.

Pautasso C. RESTful web services: principles, patterns, emerging technologies. In: Web services foundations. New York: Springer; 2014. p. 31-51.

Rodrigues F, Pereira D, Nascimento JC, Ribeiro J, Barros P, Correia R, et al. Interoperabilidade na Saúde - Onde Estamos? APDSI. 2013; 1(1):1-167.

Sernadela P, González-Castro L, Oliveira JL. Scaleus: semantic web services integration for biomedical applications. J Med Syst. 2017; 41(4):54-64. http://dx.doi.org/10.1007/s10916-017-0705-8. PMid:28214993.

Sharmila S, Ramadevi E. Analysis of performance testing on web applications. Int J Adv Res Comput Commun Eng. 2014; 3(3):5228-60.

Silva AB, Carneiro ACMG, Sindico SRF. Regras do Governo Brasileiro sobre Serviços de Telessaúde: revisão integrativa. Planej Polít Públicas. 2015; 44(1):167-88.

Silva AB, Moraes IHS. O caso da Rede Universitária de Telemedicina: análise da entrada da telessaúde na agenda política brasileira. Physis. 2012; 22(3):1211-35. http://dx.doi.org/10.1590/S0103-73312012000300019.

Sistema de Monitoramento e Avaliação dos Resultados do Programa Telesaúde – SMART. Modelo Nacional de Interoperabilidade do BTBR [Internet]. Natal: LAIS/UFRN; 2018 [cited 2018 Jan 15]. Available from: http://smart.telessaude.ufrn.br/webapp/api_docs.html

van Dyk L. A review of telehealth service implementation frameworks. Int J Environ Res Public Health. 2014; 11(2):1279-98. http://dx.doi.org/10.3390/ijerph110201279. PMid:24464237.

Vargens JMC. Uma abordagem sociotécnica para design e desenvolvimento de sistemas de informação em saúde no âmbito do SUS [thesis]. Rio de Janeiro: Escola Nacional de Saúde Pública; 2014.

Waas F, Wrembel R, Freudenreich T, Thiele M, Koncilia C, Furtado P. On-demand ELT architecture for right-time BI: extending the vision. Int J Data Warehous Min. 2013; 9(2):21-38. http://dx.doi.org/10.4018/jdwm.2013040102.

Wen CL. Telemedicina e telessaúde - um panorama no Brasil. Rev Inform Públ. 2008; 10(2):7-15.

Received: June 02, 2018; Accepted: November 12, 2018

*Corresponding author: R. Dr. Nilo Bezerra Ramalho, 1692, Tirol, CEP 59015-300 Natal, RN, Brazil. E-mail: jailton.paiva@ifrn.edu.br

This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.