Assessment of the usability of a digital learning technology prototype for monitoring intracranial pressure

ABSTRACT Objective: to assess the usability of a digital learning technology prototype as a new method for minimally invasive monitoring of intracranial pressure. Method: descriptive study using a quantitative approach to assess the usability of a prototype based on Nielsen's ten heuristics. Four experts in the area of Human-Computer Interaction participated in the study. Results: the evaluation identified eight violated heuristics and 31 usability problems in the 32 screens of the prototype. Conclusion: the evaluators' suggestions were critical for developing an intuitive, user-friendly interface and will be included in the final version of the digital learning technology.


Introduction
Technology has a major influence on current society. Information emerges rapidly from all over the world and can be accessed via computers and mobile devices connected to the Internet. It is fair to say that technology and society have become inseparable (1). This vast demand for information brings with it professional competition and the demand for increasingly qualified and well-educated professionals in all areas of knowledge. In healthcare, continued professional education enables the development of competences and skills for making decisions that support patient safety (2).
However, lack of time due to work overload and an excessive number of daily tasks is often an obstacle to continued education. On the other hand, the working environment can facilitate Distance Learning (DL) (2-5), which offers flexibility in terms of learning pace and hours (6-7).
DL is expanding in Brazil and is considered an efficient tool, as it enables anyone to seek knowledge unhindered by national or other borders (5). Among the tools available for DL, digital learning technology has been increasingly used to promote learning in healthcare, especially among nurses (6,8-10).
Using digital learning technology could be an efficient strategy in the teaching/learning process.
However, developing this tool requires rich and dynamic content, and a well-designed interface that is intuitive to users (11). Thus, it is important that these resources be evaluated by professionals or users, regardless of the method used.
Numerous methods have been used to evaluate digital learning tools (4,11-12); however, regarding the quality of the interface, meaning a product that satisfies and meets user demands, the most frequently used approach is usability assessment (13-16).
Usability assessment normally refers to the ease with which users can perform specific tasks when interacting with the tool or object using the appropriate interface. It is related to five attributes: ease of learning, memorization, mistake prevention, efficiency and user satisfaction (17) .
A systematic review shows that one of the most frequently used approaches to assess usability is heuristic evaluation (18). Among these are Nielsen's heuristics (17), a simple and low-cost method (19) that can detect a number of usability problems with a small number of evaluators (15,19-20) in a relatively short period of time (21).
This approach provides reliable results and suggestions to improve the interface (16,19-20).
The earlier the assessment, the lower the cost of making changes (17). Thus, creating prototypes has proven an efficient and economical strategy, as potential problems can be tested for and detected, then corrected before the final product version is launched (11,19-20,22).
In this context, the objective of this study was to assess the usability of a digital learning technology prototype for a new method for minimally invasive monitoring of intracranial pressure.

Method
This is a descriptive study using a quantitative approach to assess the usability of a digital learning technology prototype.
Assessment of usability used an analytical approach, in which expert usability evaluators inspected the interface in search of problems and suggested ways to improve it (17). While there are numerous approaches to analytical assessment, in this study we chose heuristic evaluation (17). This method is based on a set of usability principles that guide evaluators as they use an interface in search of problems and shortcomings (17).
The literature offers a wide range of usability guidelines; however, two usability experts grouped these into ten heuristics to facilitate evaluation in practice. These are known as Nielsen's Heuristics (17).

H1 -System status visibility
Are users informed about system progress through appropriate feedback within reasonable time?
H2 - Match between the system interface and the real world
Does the system speak the user's language, with familiar words, phrases and concepts, rather than technical terms? Does the system use real-world conventions, making information appear in a natural and logical order?

H3 -User control and freedom
Can users do what they want when they want?
H4 - Consistency and standards
Do the elements of design, such as objects and actions, have the same meaning or effect in different situations?
Figure 1 - Nielsen's Heuristics (17) and the questions asked for each one. São Carlos, SP, Brazil, 2013

The evaluators received the questions related to each heuristic, as shown in Figure 1. We also asked the evaluators for suggestions on how to address the problems found, although this is not required in this type of evaluation. Finally, a date and place for the next meeting were agreed.
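For reference, the full set of ten heuristics (Figure 1 shows the first four with their guiding questions) can be kept as a simple checklist that an evaluator walks through for each screen. The heuristic names below follow Nielsen's canonical formulation; the `checklist` helper is purely illustrative and not part of the study's instruments.

```python
# Nielsen's ten usability heuristics, in their canonical formulation.
NIELSEN_HEURISTICS = [
    "H1 - Visibility of system status",
    "H2 - Match between system and the real world",
    "H3 - User control and freedom",
    "H4 - Consistency and standards",
    "H5 - Error prevention",
    "H6 - Recognition rather than recall",
    "H7 - Flexibility and efficiency of use",
    "H8 - Aesthetic and minimalist design",
    "H9 - Help users recognize, diagnose, and recover from errors",
    "H10 - Help and documentation",
]

def checklist(screen_id):
    """Yield one (screen, heuristic) pair for every heuristic to inspect
    on the given screen -- illustrative helper only."""
    for heuristic in NIELSEN_HEURISTICS:
        yield screen_id, heuristic

# A 32-screen prototype inspected against all ten heuristics yields
# 32 x 10 = 320 screen/heuristic pairs to consider.
pairs = [pair for screen in range(1, 33) for pair in checklist(screen)]
print(len(pairs))
```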

Evaluation
This phase is performed individually, and participants may choose where they want to perform the evaluation.
Each evaluator should run through and analyze the prototype at least twice. The first pass familiarizes the evaluator with the interface, while the second allows the evaluator to focus on specific elements in search of usability issues using Nielsen's Heuristics (17), rating severity on a scale of zero (0) to four (4), where a higher score indicates a more serious usability problem (Figure 2).

Severity Meaning
0 - Not considered a usability problem.
1 - A cosmetic problem only; need not be fixed unless extra time is available.
2 - A minor usability problem; fixing it should be given low priority.
3 - A major usability problem; important to fix, should be given high priority.
4 - A usability catastrophe; imperative to fix before the product is released.
Figure 2 - Severity scale for usability problems (17)

Evaluators were selected according to inclusion criteria (23). Thus, four potential participants were formally invited by e-mail. Following confirmation that they were willing to participate, a date and place for the first phase of the evaluation were set.
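One convenient way to record findings during the evaluation phase is a small structure holding the violated heuristic, the screen where it appears, the severity score from Figure 2, and a free-text note; tallying the records then gives the number of violations per heuristic. The sketch below is illustrative only, with hypothetical findings rather than the study's actual data.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Finding:
    heuristic: str  # e.g. "H2" (match between system and the real world)
    screen: int     # prototype screen where the problem appears
    severity: int   # 0 (not a problem) .. 4 (catastrophic), per Figure 2
    note: str

# Hypothetical findings, for illustration only.
findings = [
    Finding("H2", 3, 3, "technical jargon on the registration screen"),
    Finding("H3", 7, 2, "no way to return to the previous screen"),
    Finding("H2", 12, 1, "icon label differs from the real-world term"),
]

# Count only genuine problems (severity > 0), grouped by heuristic.
violations = Counter(f.heuristic for f in findings if f.severity > 0)
print(violations.most_common())  # most frequently violated heuristics first
```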
A heuristic evaluation should include three phases: pre-evaluation, evaluation and a session with the evaluators (17) , all of which are described below.

Pre-evaluation
In this phase, the interface is presented to the evaluators, along with information about its functionality, objectives and standard terminology.

Results
The evaluation found violations of eight of the ten Nielsen heuristics (17), and 31 usability problems in the 32 screens of the prototype (Table 1). The results show that "Correspondence between the system interface and the real world" and "User control and freedom" were the most frequently violated heuristics.
It is necessary to assess these systems; otherwise, technology may cease to perform its role as a facilitator, and resources will be under-used.
The heuristic assessment in this study was performed by four evaluators; three to five evaluators are recommended for studies of this nature, as they may find up to 75% of usability problems (15,17,20-22). We chose Nielsen's heuristics (17), considering them the best suited to evaluating software (15,19,22), healthcare information systems (22) and digital learning tools (16).
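The claim that three to five evaluators uncover most problems comes from Nielsen and Landauer's problem-discovery model, in which each evaluator independently detects any given problem with probability λ (around 0.31 on average in Nielsen's data, though the value varies by project). A minimal sketch under that assumption:

```python
def proportion_found(evaluators, lam=0.31):
    """Expected share of usability problems found by `evaluators`
    independent inspectors, each detecting any given problem with
    probability `lam` (Nielsen-Landauer model: 1 - (1 - lam)**n)."""
    return 1 - (1 - lam) ** evaluators

for n in range(1, 6):
    print(f"{n} evaluator(s): {proportion_found(n):.0%}")
```

With λ = 0.31, four evaluators are expected to find roughly three quarters of the problems, consistent with the 75% figure cited above.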
It is believed that combining two or more types of assessment enables identifying more usability problems (18); however, Nielsen's heuristics are always present (15-16,20). The authors recommend combining heuristic evaluation and user testing: heuristic evaluation may be used first to identify and correct the more evident problems, and once changes have been made, the system can be submitted to user testing (16). Thus, as future work, we intend to evaluate the final version of the prototype with users.
The results show the importance of evaluating a system before it is made available to users. Of the ten heuristics proposed by Nielsen, eight were violated, resulting in 31 usability problems in the 32 screens of the prototype. Correspondence between the system interface and the real world, user control and freedom, system status visibility, and consistency and standards accounted for over 77% of the violated heuristics. Studies using the same assessment method have shown similar results (15-16,21-22).
Usability problems related to these heuristics can reduce the interaction between the system and its users.
Therefore, designers must pay attention when designing interfaces, making sure they are intuitive and easy to use even for inexperienced users, avoiding additional effort in learning the system (25).
The project team and the digital learning technology developer quickly understood the suggestions made by the evaluators, as these were solidly based on the heuristics.
Interfaces and content must be simple, and expressed in clear and objective language that is easy to understand (11,25). In one study, the evaluators suggested less content on each page, and that content be arranged by topic. They also suggested using more objective language and a different visual presentation (7). Excessive content (13) and better content organization (12) were mentioned in other studies.
Thus, one of the important contributions made by the evaluators was to point out usability problems in the interfaces, which could confuse or distract users, and to suggest significant changes to address the problems found. Other studies revealed similar suggestions, such as: inserting explanations of charts and images that may not be entirely clear (15,25), a scroll-bar for long texts (25), changing the color of the interface to light green (9,15), standardized links (25), including an error message, displayed where the error occurred, when users do not correctly fill out the registration screen (16), and a page describing the icons (25). In one study evaluating learning technology, evaluators had problems identifying some of the icons (7), demonstrating how important it is that they be fully intuitive.
A user-friendly interface may include buttons to make it easier to use. Buttons are used to select items or actions, and should be labeled using verbs (25). In this study, the "flexibility and efficiency of use" and "help users recognize, diagnose and recover from errors" heuristics were not violated. It is believed that this was because the prototype is not fully functional, and some actions could not be evaluated (20-21,25).
Severity ranged from 0 (not a usability problem) to 3 (major usability problem), suggesting that some violations are sufficiently serious to warrant attention.
None of the violations in this study were considered grade 4 (catastrophic). Each usability problem found in this study, and the associated severity, were taken into consideration, believing they might negatively affect the effectiveness of the final version of the digital learning technology.
The results of this study show the importance of evaluating digital learning technologies to make them more efficient and user-friendly, fostering the teaching and learning process. In nursing, the researchers involved in developing learning systems are concerned with correcting and changing their products based on feedback received from evaluators before making them available to the end user (7-10,12-13,24).
One of the limitations of this study was that the prototype was not fully functional, so some aspects could not be evaluated. We also had trouble finding five evaluators who fulfilled the inclusion criteria; for that reason we used only four, which is still within the acceptable range for this type of evaluation (17).

Conclusion
Given the findings of this study, we believe the goal was achieved, as we were able to identify a large number of usability problems with minimal effort and using a single type of evaluation.
The advantage of heuristic evaluation is that evaluators who are experts in the method were able to make suggestions for the problems found, highlighting the strengths and weaknesses of the project. All of the suggestions were considered in the final version of the digital learning technology.
As future work, we intend to evaluate the new version of the digital learning technology with users, as in our view such evaluations are essential for delivering a quality product.