Journal articles on the topic 'Usability Testing and Evaluation'

Consult the top 50 journal articles for your research on the topic 'Usability Testing and Evaluation.'

Next to every source in the list of references there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Jeffries, Robin, and Heather Desurvire. "Usability testing vs. heuristic evaluation." ACM SIGCHI Bulletin 24, no. 4 (October 1992): 39–41. http://dx.doi.org/10.1145/142167.142179.

2

Tan, Wei-Siong, and R. R. Bishu. "Which is a Better Method of Web Evaluation? a Comparison of User Testing and Heuristic Evaluation." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 46, no. 14 (September 2002): 1256–60. http://dx.doi.org/10.1177/154193120204601404.

Abstract:
Besides recognizing the importance of incorporating usability evaluation techniques in the design and development phase of any user interface (UI), it is also very important that designers recognize the benefits and limitations of the different usability inspection methods, because the quality of a usability evaluation depends on the method used. Two of the more popular usability evaluation techniques are user testing and heuristic analysis. The main objective of this study was to compare the efficiency and effectiveness of user testing and heuristic analysis in evaluating four different commercial websites. This was done by comparing the proportion and type of usability problems addressed by the two methods in both the early and later stages of the design process. The results showed that user testing and heuristic analysis addressed very different usability problems and that, with the exception of compatibility problems and security and privacy problems, where heuristic analysis outperforms user testing, the two methods are equally efficient and effective in addressing different categories of usability problems.
3

Wang, Enlie, and Barrett Caldwell. "An Empirical Study of Usability Testing: Heuristic Evaluation Vs. User Testing." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 46, no. 8 (September 2002): 774–78. http://dx.doi.org/10.1177/154193120204600802.

Abstract:
In this study, two different usability-testing methods (Heuristic Evaluation and User Testing) were selected to test the usability of a pre-release version of software for searching Science, Mathematics and Engineering education materials. Our major goal was to compare Heuristic Evaluation and User Testing in terms of efficiency, effectiveness, and cost/benefit. We found that Heuristic Evaluation was more efficient than User Testing in finding usability problems (41 vs. 10), while User Testing was more effective than Heuristic Evaluation in finding major problems (70% vs. 12%). In general, Heuristic Evaluation appears to be more economical, finding a wide range of usability problems at low cost in comparison to User Testing. However, User Testing can provide more insightful data from real users, such as users' performance and satisfaction.
4

Sinabell, Irina, and Elske Ammenwerth. "Agile, Easily Applicable, and Useful eHealth Usability Evaluations: Systematic Review and Expert-Validation." Applied Clinical Informatics 13, no. 01 (January 2022): 67–79. http://dx.doi.org/10.1055/s-0041-1740919.

Abstract:
Background: Electronic health (eHealth) usability evaluations of rapidly developed eHealth systems are difficult to accomplish because traditional usability evaluation methods require substantial time in preparation and implementation. This illustrates the growing need for fast, flexible, and cost-effective methods to evaluate the usability of eHealth systems. To address this demand, the present study systematically identified and expert-validated rapidly deployable eHealth usability evaluation methods. Objective: Identification and prioritization of eHealth usability evaluation methods suitable for agile, easily applicable, and useful eHealth usability evaluations. Methods: The study design comprised a systematic iterative approach in which expert knowledge was contrasted with findings from the literature. Forty-three eHealth usability evaluation methods were systematically identified and assessed regarding their ease of applicability and usefulness through semi-structured interviews with 10 European usability experts and systematic literature research. The most appropriate methods were selected stepwise based on the experts' judgements of their ease of applicability and usefulness. Results: Of the 43 eHealth usability evaluation methods identified, 10 were recommended by the experts based on their usefulness for rapid eHealth usability evaluations. The three most frequently recommended methods were Remote User Testing, Expert Review, and the Rapid Iterative Test and Evaluation Method. Eleven usability evaluation methods, such as Retrospective Testing, were not recommended for use in rapid eHealth usability evaluations. Conclusion: We conducted a systematic review and expert validation to identify rapidly deployable eHealth usability evaluation methods. The comprehensive, evidence-based prioritization of eHealth usability evaluation methods supports faster usability evaluations and so contributes to the ease of use of emerging eHealth systems.
5

Ismail, Nor Azman, Fadzrul Izwan Jamaluddin, Akmal Harraz Hamidan, Ahmad Fariz Ali, Su Elya Mohamed, and Che Soh Said. "Usability Evaluation of Encyclopedia Websites." International Journal of Innovative Computing 11, no. 1 (April 28, 2021): 21–25. http://dx.doi.org/10.11113/ijic.v11n1.282.

Abstract:
Usability is an important aspect that every website should pay close attention to: it tells us how well a website will function with real users. Many people think usability tests are expensive and time-consuming, but usability testing can be cost-effective and save time compared with fixing an unusable website later. This study evaluates the usability of encyclopedia websites using automated usability testing tools and a questionnaire. The questionnaire was developed based on a standard instrument, the Website Analysis and Measurement Inventory (WAMMI), which comprises 20 common usability questions divided into five categories, each dealing with one aspect of usability. The automated usability testing tools used in this study were Pingdom and GTmetrix, which calculate and analyse the performance of the selected encyclopedia websites based on website components including page load time, media size, and overall web performance grades. This study could help web designers, developers, and practitioners design better and more user-friendly encyclopedia websites.
6

Følstad, Asbjørn, and Kasper Hornbæk. "Work-domain knowledge in usability evaluation: Experiences with Cooperative Usability Testing." Journal of Systems and Software 83, no. 11 (November 2010): 2019–30. http://dx.doi.org/10.1016/j.jss.2010.02.026.

7

Mohd Amin, Siti Fauziah, Sabariah Sharif, Muhamad Suhaimi Taat, Mad Nor Madjapuni, and Muralindran Mariappan. "IMPLEMENTATION OF USABILITY TESTING USING EXPERT PANEL EVALUATION METHOD IN THE EVALUATION PHASE OF M-SOLAT ROBOT MODULE." International Journal of Education, Psychology and Counseling 7, no. 45 (March 15, 2022): 222–33. http://dx.doi.org/10.35631/ijepc.745018.

Abstract:
The evaluation phase is one of the essential phases in Design and Development Research (PRP). Various methods can be used in this phase; nevertheless, a researcher must choose a suitable method to secure the achievement of the objectives. Accordingly, this research implemented a usability testing evaluation of the M-Solat Robot Module using the expert panel evaluation method. The instrument employed was the USE Questionnaire, analysed using the Percentage Calculation Method (PCM). The outcomes confirmed the usability of the M-Solat Robot Module in terms of usability = 90.2%, ease of use = 88.4%, ease of learning = 90.1%, and satisfaction = 91.7%. The usability testing evaluation using the expert panel evaluation method enabled the researcher to accomplish the study objectives. This analysis therefore recommends that prospective researchers use expert panel evaluation in usability testing for studies involving usability screening evaluation of innovations.
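
Editor's note on the percentage scores above: they can be read as summed ratings divided by the maximum possible score. The sketch below illustrates that reading; the 7-point scale, the item grouping, and the ratings themselves are assumptions for illustration, not details taken from the paper.

```python
# Hedged sketch of a Percentage Calculation Method (PCM) style score over
# USE Questionnaire ratings. The 7-point scale and all ratings are invented
# placeholders, not data from the study.

def pcm_score(ratings, scale_max=7):
    """Summed ratings as a percentage of the maximum possible score."""
    return 100.0 * sum(ratings) / (scale_max * len(ratings))

# One list of expert-panel ratings per USE dimension (invented data).
dimensions = {
    "usefulness": [7, 6, 7, 6],
    "ease_of_use": [6, 6, 7, 6],
    "ease_of_learning": [7, 7, 6, 6],
    "satisfaction": [7, 6, 7, 7],
}

for name, ratings in dimensions.items():
    print(f"{name}: {pcm_score(ratings):.1f}%")
```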
8

Pierce, Robert P., Bernie R. Eskridge, Brandi Ross, Margaret A. Day, Brooke Dean, and Jeffery L. Belden. "Improving the User Experience with Discount Site-Specific User Testing." Applied Clinical Informatics 13, no. 05 (October 2022): 1040–52. http://dx.doi.org/10.1055/s-0042-1758222.

Abstract:
Objectives: Poor electronic health record (EHR) usability is associated with patient safety concerns, user dissatisfaction, and provider burnout. EHR certification requires vendors to perform user testing. However, there are no such requirements for site-specific implementations. Health care organizations customize EHR implementations, potentially introducing usability problems. Site-specific usability evaluations may help to identify these concerns, and "discount" usability methods afford health systems a means of doing so even without dedicated usability specialists. This report characterizes a site-specific discount user testing program launched at an academic medical center. We describe lessons learned and highlight three of the EHR features in detail to demonstrate the impact of testing on implementation decisions and on users. Methods: Thirteen new EHR features that had already undergone heuristic evaluation and iterative design were evaluated over the course of three user test events. Each event included five to six users. Participants used the think-aloud technique. Measures of user efficiency, effectiveness, and satisfaction were collected. Usability concerns were characterized by the type of usability heuristic violated and by correctability. Results: Usability concerns occurred at a rate of 2.5 per feature tested. Seventy percent of the usability concerns were deemed correctable prior to implementation. The first highlighted feature was moved to production despite low single ease question (SEQ) scores, which may have predicted its subsequent withdrawal from production based on post-implementation feedback. Another feature was rebuilt based on usability findings, and a new version was retested and moved to production. A third feature highlights an easily correctable usability concern identified in user testing. Quantitative usability metrics generally reinforced qualitative findings. Conclusion: Simplified user testing with a limited number of participants identifies correctable usability concerns, even after heuristic evaluation. Our discount approach to site-specific usability testing has a role in implementations and may improve the usability of the EHR for the end user.
9

Yul, Faradila Ananda, and Miftahul Jannah. "ANALISIS USABILITAS WEBSITE SIAM UMRI MENGGUNAKAN METODE USABILITY TESTING." Jurnal Surya Teknika 7, no. 1 (December 13, 2020): 86–95. http://dx.doi.org/10.37859/jst.v7i1.2355.

Abstract:
The existence of a website is underpinned by the development of information and communication technology. This study aims to measure the usability level of the Student Academic Information System (SIAM) website at Universitas Muhammadiyah Riau (UMRI). The problems encountered were that no usability evaluation of the UMRI SIAM website had been performed and that UMRI students had complaints when accessing the website. The usability problem in this study was addressed by conducting usability testing along the five dimensions proposed by Nielsen (1993), namely learnability, efficiency, memorability, errors, and user satisfaction, using the think-aloud method, which requires 3-5 respondents. The subjects studied were expert users (UMRI students) and novice users (non-UMRI students). On the learnability dimension, the analysis found that respondents were able to learn the website well. On the efficiency dimension, respondents' speed in completing tasks increased. On the memorability dimension, the results show that respondents had good recall of the site. On the error dimension, 38 problems were found when accessing the SIAM UMRI website, and on the satisfaction dimension, respondents' satisfaction when accessing the website scored 70. The study also provides recommendations for improving the SIAM UMRI website.
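
The "3-5 respondents" guideline cited in this abstract is usually traced to the problem-discovery model credited to Nielsen and Landauer. The sketch below is editorial context, not part of the study: it computes the expected share of problems found by n users, where the 0.31 per-user detection rate is the commonly cited average from that literature, not a figure from this paper.

```python
# Problem-discovery model: P(n) = 1 - (1 - L)**n, where L is the average
# probability that a single test user exposes a given usability problem.
# L = 0.31 is the commonly cited average, assumed here for illustration.

def problems_found(n_users, detection_rate=0.31):
    return 1 - (1 - detection_rate) ** n_users

for n in range(1, 6):
    print(f"{n} users: {problems_found(n):.0%} of problems expected")
# With the default rate, 5 users already uncover roughly 84% of problems,
# which is why small think-aloud panels of 3-5 respondents are common.
```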
10

Lyon, Aaron R., Kelly Koerner, and Julie Chung. "Usability Evaluation for Evidence-Based Psychosocial Interventions (USE-EBPI): A methodology for assessing complex intervention implementability." Implementation Research and Practice 1 (January 2020): 263348952093292. http://dx.doi.org/10.1177/2633489520932924.

Abstract:
Background: Most evidence-based practices in mental health are complex psychosocial interventions, but little research has focused on assessing and addressing the characteristics of these interventions, such as design quality and packaging, that serve as intra-intervention determinants (i.e., barriers and facilitators) of implementation outcomes. Usability—the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction—is a key indicator of design quality. Drawing from the field of human-centered design, this article presents a novel methodology for evaluating the usability of complex psychosocial interventions and describes an example "use case" application to an exposure protocol for the treatment of anxiety disorders with one user group. Method: The Usability Evaluation for Evidence-Based Psychosocial Interventions (USE-EBPI) methodology comprises four steps: (1) identify users for testing; (2) define and prioritize EBPI components (i.e., tasks and packaging); (3) plan and conduct the evaluation; and (4) organize and prioritize usability issues. In the example, clinicians were selected for testing from among the identified user groups of the exposure protocol (e.g., clients, system administrators). Clinicians with differing levels of experience with exposure therapies (novice, n = 3; intermediate, n = 4; advanced, n = 3) were sampled. Usability evaluation included Intervention Usability Scale (IUS) ratings and individual user testing sessions with clinicians, and heuristic evaluations conducted by design experts. After testing, discrete usability issues were organized within the User Action Framework (UAF) and prioritized via independent ratings (1–3 scale) by members of the research team. Results: Average IUS ratings (80.5; SD = 9.56 on a 100-point scale) indicated good usability with room for improvement. Ratings for novice and intermediate participants were comparable (77.5), with higher ratings for advanced users (87.5). Heuristic evaluations suggested similar usability (mean overall rating = 7.33; SD = 0.58 on a 10-point scale). Testing with individual users revealed 13 distinct usability issues, which reflected all four phases of the UAF and a range of priority levels. Conclusion: Findings from the current study suggested that USE-EBPI is useful for evaluating the usability of complex psychosocial interventions and informing subsequent intervention redesign (in the context of broader development frameworks) to enhance implementation. Future research goals are discussed, which include applying USE-EBPI with a broader range of interventions and user groups (e.g., clients). Plain language abstract: Characteristics of evidence-based psychosocial interventions (EBPIs) that impact the extent to which they can be implemented in real-world mental health service settings have received far less attention than the characteristics of individuals (e.g., clinicians) or settings (e.g., community mental health centers) where EBPI implementation occurs. No methods exist to evaluate the usability of EBPIs, which can be a critical barrier to or facilitator of implementation success. The current article describes a new method, the Usability Evaluation for Evidence-Based Psychosocial Interventions (USE-EBPI), which uses techniques drawn from the field of human-centered design to evaluate EBPI usability. An example application to an intervention protocol for anxiety problems among adults is included to illustrate the value of the new approach.
11

Andre, Terence S., Steven M. Belz, Faith A. McCrearys, and H. Rex Hartson. "Testing a Framework for Reliable Classification of Usability Problems." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 44, no. 37 (July 2000): 573–76. http://dx.doi.org/10.1177/154193120004403707.

Abstract:
The User Action Framework (UAF) is a knowledge base of usability issues and concepts structured to provide a framework and method for classifying usability problems identified during usability evaluation. The UAF is essentially a hierarchical structure of usability attributes that users traverse as a decision structure, selecting the most appropriate classification category and sub-category at each level of the hierarchy. The cumulative set of category choices along the classification path is taken as a sequence of usability attributes that determines a complete classification description of the usability problem in question. The UAF itself has been the subject of usability evaluation and is the product of an extensive, iterative design process. In this paper we report on the reliability of the UAF by measuring the agreement of 10 experienced usability practitioners as they classify 15 different usability problems.
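
This entry measures agreement among 10 practitioners classifying 15 problems. One simple way to quantify such agreement is mean pairwise percent agreement, sketched below as editorial context; the paper's actual reliability statistic is not reproduced, and the data and UAF-style category labels are invented placeholders.

```python
# Hedged sketch: mean pairwise agreement across raters who each assign one
# category label per usability problem. All data below are invented.
from itertools import combinations

def mean_pairwise_agreement(classifications):
    """classifications: one list of category labels per rater,
    aligned so index i is the same usability problem for every rater."""
    pairs = list(combinations(classifications, 2))
    agree = sum(
        sum(a == b for a, b in zip(r1, r2)) / len(r1) for r1, r2 in pairs
    )
    return agree / len(pairs)

raters = [  # 3 raters x 5 problems, illustrative top-level categories
    ["planning", "assessment", "translation", "planning", "physical-action"],
    ["planning", "assessment", "planning", "planning", "physical-action"],
    ["planning", "physical-action", "translation", "planning", "physical-action"],
]
print(f"mean pairwise agreement: {mean_pairwise_agreement(raters):.2f}")
```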
12

Rosenbaum, S. "Usability evaluations versus usability testing: when and why?" IEEE Transactions on Professional Communication 32, no. 4 (1989): 210–16. http://dx.doi.org/10.1109/47.44533.

13

Russ, Alissa L., Michelle A. Jahn, Himalaya Patel, Brian W. Porter, Khoa A. Nguyen, Alan J. Zillich, Amy Linsky, and Steven R. Simon. "Formative Usability Evaluation of a Novel Tool for Medication Reconciliation." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 61, no. 1 (September 2017): 602. http://dx.doi.org/10.1177/1541931213601634.

Abstract:
To decrease medication errors, a common cause of injury to patients, we developed a novel electronic tool to facilitate asynchronous communication between healthcare professionals (HCPs) and patients for medication reconciliation. However, it was unknown whether the tool adequately supported HCPs' usability needs. Our objective was to conduct an iterative usability evaluation of the tool with physicians, nurses, and pharmacists, in preparation for a randomized controlled trial. We hypothesized that we would identify design weaknesses that could be addressed via interface modifications prior to the trial. We completed a mixed-method, formative usability evaluation with 20 HCPs in the Veterans Affairs (VA) Health Services Research and Development, Human-Computer Interaction and Simulation Laboratory located within a major medical center. The tool in this study is formally known as the Secure Messaging for Medication Reconciliation Tool (SMMRT). The evaluation consisted of four sequential steps: 1) phase I usability testing to assess the baseline tool, along with small, iterative design changes throughout testing; 2) heuristic evaluation; 3) implementation of major design changes incorporating findings from the previous steps; and 4) phase II usability testing to assess the implemented design changes and further refine the tool. This presentation focuses on steps 1 and 4, related to usability testing. During testing, HCPs worked through a real case consisting of a patient discharged from the hospital within the past 30 days who had at least 5 outpatient medications. We collected data on efficiency, usability errors, and participants' satisfaction, along with participants' ability to detect and address three distinct types of medication errors via the tool. For the latter, we inserted three safety probes into the simulation: 1) a missing medication (i.e., omission); 2) an extraneous medication (i.e., commission); and 3) an inaccurate dose (i.e., dose discrepancy). Data were analyzed descriptively, rather than via statistical comparisons, due to the formative and iterative nature of this research. There was no indication of efficiency gains during iterative prototyping and testing. Highlights of usability errors included confusion about medication entry fields; incorrect assumptions regarding medication list accuracy; inadequate medication information sorting and organization; and premature closure. Additionally, HCPs described usability errors that might occur in clinical practice. For example, medication images in the tool may not match what is dispensed to patients. HCPs also expressed concern that medication updates made via the tool may not be consistently updated in the electronic health record. In terms of satisfaction, HCPs' ratings tended to increase as design modifications were implemented. After phase II usability testing, their overall satisfaction was favorable. Finally, for each of the three safety probes, 50% or fewer of the HCPs identified the associated medication error. This research illustrates the importance of usability evaluations as a precursor to randomized trials of health information technology. Our multi-step approach to usability testing, with heuristic evaluation at the midpoint, may inform the design of other usability evaluations. While efficiency gains were not realized, user satisfaction improved. The inclusion of safety probes was especially valuable, since probes allowed us to assess error detection rates. There may be opportunities for human factors professionals to expand the sophistication and types of probes used in future healthcare research. Future studies are needed to develop more advanced design approaches that facilitate healthcare professionals' detection of medication errors.
14

Zhang, Ting, Pei-Luen Patrick Rau, Gavriel Salvendy, and Jia Zhou. "Comparing Low and High-Fidelity Prototypes in Mobile Phone Evaluation." International Journal of Technology Diffusion 3, no. 4 (October 2012): 1–19. http://dx.doi.org/10.4018/jtd.2012100101.

Abstract:
This study compared usability testing results obtained with low- and high-fidelity prototypes of mobile phones. The main objective was to gain a deep understanding of the usability problems found with different prototyping methods. Three mobile phones from different manufacturers were selected for the experiment. The usability level of the mobile phones was evaluated by participants who completed a questionnaire consisting of 13 usability factors. Taking into account the task-based complexity of the three mobile phones, significant differences in the usability evaluation of each individual factor were found, and suggestions on usability testing with prototyping techniques for mobile phones were proposed. This study provides new evidence to the field of mobile phone usability research and develops a feasible way to quantitatively evaluate prototype usability with novices. The comparison of paper-based and fully functional prototypes shows how significantly the unique characteristics of different prototypes affect the usability evaluation. The experiment took product complexity into account and made suggestions on choosing the proper prototyping technique for testing particular aspects of mobile phone usability.
15

Kanno, Takahiro, and Yasuyoshi Yokokohji. "Usability Evaluation of Variable-Scale Microteleoperation System." Journal of Robotics and Mechatronics 22, no. 5 (October 20, 2010): 659–68. http://dx.doi.org/10.20965/jrm.2010.p0659.

Abstract:
The usability of a variable-scale micromanipulation system we developed previously, consisting of manipulation and vision subsystems, is evaluated. Based on preliminary usability testing, we introduced a modified user interface providing more intuitive operation and conducted usability testing of the system including the improved interface. Results showed that variable scaling is significantly more effective than fixed scaling, but only for subjects accustomed to the system.
16

Zhang, Timothy, Richard Booth, Royce Jean-Louis, Ryan Chan, Anthony Yeung, David Gratzer, and Gillian Strudwick. "A Primer on Usability Assessment Approaches for Health-Related Applications of Virtual Reality." JMIR Serious Games 8, no. 4 (October 28, 2020): e18153. http://dx.doi.org/10.2196/18153.

Abstract:
Health-related virtual reality (VR) applications for patient treatment, rehabilitation, and medical professional training are on the rise. However, there is little guidance on how to select and perform usability evaluations for VR health interventions compared to the supports that exist for other digital health technologies. The purpose of this viewpoint paper is to present an introductory summary of various usability testing approaches or methods that can be used for VR applications. Along with an overview of each, a list of resources is provided for readers to obtain additionally relevant information. Six categories of VR usability evaluations are described using a previously developed classification taxonomy specific to VR environments: (1) cognitive or task walkthrough, (2) graphical evaluation, (3) post hoc questionnaires or interviews, (4) physical performance evaluation, (5) user interface evaluation, and (6) heuristic evaluation. Given the growth of VR in health care, rigorous evaluation and usability testing is crucial in the development and implementation of novel VR interventions. The approaches outlined in this paper provide a starting point for conducting usability assessments for health-related VR applications; however, there is a need to also move beyond these to adopt those from the gaming industry, where assessments for both usability and user experience are routinely conducted.
17

Ahmad, Naseer, Muhammad Waqas Boota, and Abdul Hye Masoom. "Smart Phone Application Evaluation with Usability Testing Approach." Journal of Software Engineering and Applications 07, no. 12 (2014): 1045–54. http://dx.doi.org/10.4236/jsea.2014.712092.

18

Ginting, Lit Malem, Grady Sianturi, and Christina Vitaloka Panjaitan. "Perbandingan Metode Evaluasi Usability Antara Heuristic Evaluation dan Cognitive Walkthrough." Jurnal Manajemen Informatika (JAMIKA) 11, no. 2 (September 27, 2021): 146–57. http://dx.doi.org/10.34010/jamika.v11i2.5480.

Abstract:
Usability evaluation is needed to identify and analyze usability problems in an application. This study compares the results of usability evaluation with the Heuristic Evaluation and Cognitive Walkthrough methods on the SIMRS Del Egov Center web application, in terms of the usability problems found, the severity of those problems, and end-user responses evaluated via usability testing, in order to find the more effective method for discovering usability problems. Heuristic Evaluation refers to the 10 heuristic principles proposed by Jakob Nielsen, while in Cognitive Walkthrough the expert follows tasks provided by the researcher. The results show that Heuristic Evaluation found more usability problems in the efficiency, memorability, and satisfaction aspects, while Cognitive Walkthrough found more in the learnability and error aspects. On severity rating, Cognitive Walkthrough was more effective in finding usability problems with a higher severity level, with an average of 3 versus an average of 2 for Heuristic Evaluation. On end-user responses to the website based on usability testing, Heuristic Evaluation achieved the higher SUS score of 57, while Cognitive Walkthrough scored 54.5. Based on these three aspects, the Heuristic Evaluation method is better at finding usability problems in the SIMRS Del Egov Center application.
19

Virzi, Robert A., James F. Sorce, and Leslie Beth Herbert. "A Comparison of Three Usability Evaluation Methods: Heuristic, Think-Aloud, and Performance Testing." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 37, no. 4 (October 1993): 309–13. http://dx.doi.org/10.1177/154193129303700412.

Abstract:
A high-fidelity prototype of an extended voice mail application was created. We tested it using three distinct usability testing paradigms so that we could compare the quantity and quality of the information obtained using each. The three methods employed were (1) heuristic evaluation, in which usability experts critique the user interface, (2) think-aloud testing, in which naive subjects comment on the system as they use it, and (3) performance testing, in which task completion times and error rates are collected as naive subjects interact with the system. The three testing methodologies were roughly equivalent in their ability to detect a core set of usability problems on a per evaluator basis. However, the heuristic and think-aloud evaluations were generally more sensitive, uncovering a broader array of problems in the user interface. Implications of these findings are discussed in terms of the costs of doing the evaluations and in light of other work on this topic.
20

Davids, Mogamat Razeen, Usuf M. E. Chikte, and Mitchell L. Halperin. "An efficient approach to improve the usability of e-learning resources: the role of heuristic evaluation." Advances in Physiology Education 37, no. 3 (September 2013): 242–48. http://dx.doi.org/10.1152/advan.00043.2013.

Abstract:
Optimizing the usability of e-learning materials is necessary to maximize their potential educational impact, but this is often neglected when time and other resources are limited, leading to the release of materials that cannot deliver the desired learning outcomes. As clinician-teachers in a resource-constrained environment, we investigated whether heuristic evaluation of our multimedia e-learning resource by a panel of experts would be an effective and efficient alternative to testing with end users. We engaged six inspectors, whose expertise included usability, e-learning, instructional design, medical informatics, and the content area of nephrology. They applied a set of commonly used heuristics to identify usability problems, assigning severity scores to each problem. The identification of serious problems was compared with problems previously found by user testing. The panel completed their evaluations within 1 wk and identified a total of 22 distinct usability problems, 11 of which were considered serious. The problems violated the heuristics of visibility of system status, user control and freedom, match with the real world, intuitive visual layout, consistency and conformity to standards, aesthetic and minimalist design, error prevention and tolerance, and help and documentation. Compared with user testing, heuristic evaluation found most, but not all, of the serious problems. Combining heuristic evaluation and user testing, with each involving a small number of participants, may be an effective and efficient way of improving the usability of e-learning materials. Heuristic evaluation should ideally be used first to identify the most obvious problems and, once these are fixed, should be followed by testing with typical end users.
21

Bailey, Robert W., Robert W. Allan, and P. Raiello. "Usability Testing vs. Heuristic Evaluation: A Head-to-Head Comparison." Proceedings of the Human Factors Society Annual Meeting 36, no. 4 (October 1992): 409–13. http://dx.doi.org/10.1177/154193129203600431.

Abstract:
The importance of user testing, heuristic evaluation, and iterative design in the development of computer software programs was examined. In the first study, twenty-five subjects with limited computer experience were randomly divided into five groups of five subjects each. All groups were asked to perform a telephone bill inquiry task using two character-based screens. After each group performed, one change per screen was made before testing the next group; the system was improved three times in this way. A final experimental group completed the same task using an "ideal" system designed and presented by Molich and Nielsen (1990). Rather than the 29 changes originally suggested by Molich and Nielsen, our results showed that only one change to each of the original screens was necessary to achieve the same performance and preference levels as those demonstrated by their "ideal" system. The same task was repeated using a graphical user interface: a heuristic evaluation suggested up to 43 potential changes, whereas the usability test demonstrated that only two changes optimized performance. These findings demonstrate one of the major weaknesses of heuristic evaluations and the importance of usability testing in the design and development of human interfaces.
22

Saputra, I. Made Arya Dwi, I. Made Ardwi Pradnyana, and Nyoman Sugihartini. "USABILITY TESTING PADA SISTEM TRACER STUDY UNDIKSHA MENGGUNAKAN METODE HEURISTIC EVALUATION." Jurnal Pendidikan Teknologi dan Kejuruan 16, no. 1 (January 30, 2019): 98. http://dx.doi.org/10.23887/jptk-undiksha.v16i1.18171.

Abstract:
The aim of this study was to analyze the usability level of the Undiksha tracer study system, measured using the Heuristic Evaluation method with 10 usability variables, in order to determine a system layout design that meets usability criteria. The sample consisted of 20 Undiksha alumni respondents, selected using proportionate stratified random sampling. The results show a usability level of 60% for the Undiksha tracer study system, which falls into the high category. The questionnaire analysis shows that the layout of the Undiksha tracer study system already meets the usability criteria of an information system. Accordingly, this study makes improvement recommendations based on the questionnaire items with low percentages, guided by HCI layout guidelines. The improvements focus on the clarity of the information presented, which is still limited and not up to date; help that appears promptly when an error occurs; consistent presentation of submenus and icons; labels on links; complete documentation; and a help menu to make it easier for users to find solutions when errors occur while accessing the system.
23

Wallace, Steve, Adrian Reid, Jin-Su Kang, and Daniel Clinciu. "A Comparison of the Usability of Heuristic Evaluations for Online Help." Information Design Journal 20, no. 1 (September 23, 2013): 58–68. http://dx.doi.org/10.1075/idj.20.1.05wal.

Abstract:
This study compares the usability of a general heuristic evaluation to that of a domain-specific heuristic evaluation focused on technical documentation. Eight technical writers used both heuristic evaluations to identify usability problems in an online help application. The validity of the usability problems they identified was ascertained by user testing. No significant difference was found in overall effectiveness or efficiency. However, writers indicated greater satisfaction with the general heuristic evaluation, while the domain-specific heuristic evaluation was more effective in some categories and showed greater inter-rater agreement. Results suggest that differences in effectiveness were related to the level of detail of the heuristics. This study therefore recommends the incorporation of more detailed heuristics into heuristic evaluations.
24

Firdaus, Abdi. "Usability Testing Aplikasi Mobile E-Office Tabalong Menggunakan Heuristic Evaluation." EXPLORE 12, no. 1 (September 21, 2021): 1. http://dx.doi.org/10.35200/explore.v12i1.498.

Abstract:
E-Office Tabalong is a mobile-based online attendance application used by the regional government (Pemda) of Tabalong regency; besides functioning as an attendance system, it is also used to facilitate systematic data recapitulation. As a new application, it has drawn many responses from its users, so a usability evaluation was needed. To measure user comfort, application feasibility, and the application interface, usability testing was carried out with the aim of analyzing the user experience of the E-Office Tabalong application. The research used Nielsen's ten-heuristic method, which is often used to evaluate computer software or user-oriented mobile applications. The method involves evaluators giving input that is then categorized under the heuristic principles; heuristic evaluation is widely used in projects with short timeframes and limited funds. The measurement results show a severity rating at the level of a major usability problem, with a final score of 2.61 obtained from the overall average across all usability aspects studied.
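
A minimal sketch of how an overall severity level like the 2.61 above can be derived: average the evaluators' ratings on Nielsen's 0-4 severity scale and map the mean to a category. The ratings and the round-to-nearest mapping are assumptions for illustration, not details from the study.

```python
# Nielsen's 0-4 severity scale; the category labels are the standard ones.
SEVERITY_LABELS = {
    0: "not a problem", 1: "cosmetic", 2: "minor usability problem",
    3: "major usability problem", 4: "usability catastrophe",
}

def overall_severity(mean_ratings_per_issue):
    """mean_ratings_per_issue: per-issue mean evaluator ratings (invented)."""
    mean = sum(mean_ratings_per_issue) / len(mean_ratings_per_issue)
    return mean, SEVERITY_LABELS[round(mean)]

mean, label = overall_severity([2.0, 3.3, 2.5, 2.7])
print(f"mean severity {mean:.2f} -> {label}")
# A mean of 2.61, as reported in the study, would likewise read as a
# major usability problem under this mapping.
```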
25

Rosenbaum, Stephanie, and Dana Chisnell. "Choosing Usability Research Methods." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 44, no. 37 (July 2000): 569–72. http://dx.doi.org/10.1177/154193120004403706.

Abstract:
This paper describes why and how human factors practitioners should employ a combination of research methods for user data collection, rather than relying on only one or two such as heuristic evaluation and laboratory testing. The paper presents three case studies of multiple-method usability research projects: alternating usability testing with ethnographic interviews in a longitudinal study of a clinical information system; two parallel usability research efforts for a small company with a limited budget; and contextual inquiries followed by group interviews of experts, then by usability testing. The authors believe that combining research methods is more likely to increase the strategic penetration of human factors within organizations.
26

Tuwanakotta, Janet Livia, and Andeka Rocky Tanaamah. "Evaluation of Usability Quality Between InDriver and Maxim Applications Using Usability Scale (SUS) and Usability Testing Methods." SISTEMASI 11, no. 3 (September 30, 2022): 630. http://dx.doi.org/10.32520/stmsi.v11i3.2001.

27

Tambunan, Gracella, and Lit Malem Ginting. "COMPARISON OF HEURISTIC EVALUATION AND COGNITIVE WALKTHROUGH METHODS IN DOING USABILITY EVALUATION OF MOBILE-BASED DEL EGOV CENTRE HOSPITAL INFORMATION SYSTEM." SEMINASTIKA 3, no. 1 (November 4, 2021): 99–106. http://dx.doi.org/10.47002/seminastika.v3i1.244.

Abstract:
Usability is a factor that indicates the success of an interactive product or system, such as a mobile application. The increasing use of smartphones demands more accurate and effective usability evaluation methods for finding usability problems, so that the findings can drive product improvement during the development process. This study compares the Cognitive Walkthrough method with Heuristic Evaluation in evaluating the usability of the SIRS Del eGov Center mobile application. Evaluation with these two methods was carried out by three evaluators acting as experts. The problems found and the improvements recommended by each method produced an improvement prototype built as a high-fidelity prototype. Each prototype was tested with ten participants using the Usability Testing method, generating scores through the SUS table. From the test scores, the Likert-scale percentage and the success rate of each prototype were determined. The results show that of the two usability evaluation methods, Heuristic Evaluation is the more effective: it finds more usability problems and has a higher Likert-scale percentage, 66.5% versus 64.75% for Cognitive Walkthrough.
28

McLaughlin, Anne Collins, Patricia R. DeLucia, Frank A. Drews, Monifa Vaughn-Cooke, Anil Kumar, Robert R. Nesbitt, and Kevin Cluff. "Evaluating Medical Devices Remotely: Current Methods and Potential Innovations." Human Factors: The Journal of the Human Factors and Ergonomics Society 62, no. 7 (September 22, 2020): 1041–60. http://dx.doi.org/10.1177/0018720820953644.

Abstract:
Objective: We present examples of laboratory and remote studies, with a focus on studies appropriate for medical device design and evaluation. From this review and description of extant options for remote testing, we provide methods and tools to achieve research goals remotely. Background: The FDA mandates human factors evaluation of medical devices. Studies show similarities and differences in results collected in laboratories compared to data collected remotely in non-laboratory settings. Remote studies show promise, though many of these are behavioral studies related to cognitive or experimental psychology. Remote usability studies are rare but increasing, as technologies allow for synchronous and asynchronous data collection. Method: We reviewed methods of remote evaluation of medical devices, from testing labels and instructions to usability testing and simulated use. Each method was coded for the attributes (e.g., supported media) that need consideration in usability studies. Results: We present examples of how published usability studies of medical devices could be moved to remote data collection. We also present novel systems for creating such tests, such as the use of 3D-printed or virtual prototypes. Finally, we advise on targeted participant recruitment. Conclusion: Remote testing will bring opportunities and challenges to the field of medical device testing. Current methods are adequate for most purposes, excepting the validation of Class III devices. Application: The tools we provide enable the remote evaluation of medical devices. Evaluations have specific research goals, and our framework of attributes helps to select or combine tools for valid testing of medical devices.
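
The attribute-coding idea described above lends itself to a simple representation: code each remote method by the attributes it supports, then filter by a study's requirements. Everything in this sketch, including the method names and attribute vocabulary, is an invented placeholder rather than the paper's actual coding.

```python
# Hypothetical attribute coding of remote evaluation methods (invented).
METHODS = {
    "moderated video session": {"synchronous", "think-aloud", "simulated-use"},
    "unmoderated web survey": {"asynchronous", "label-comprehension"},
    "virtual 3D prototype walkthrough": {"synchronous", "simulated-use"},
}

def suitable(required_attributes):
    """Return the methods whose attribute set covers the requirements."""
    return [name for name, attrs in METHODS.items()
            if required_attributes <= attrs]

print(suitable({"synchronous", "simulated-use"}))
# -> ['moderated video session', 'virtual 3D prototype walkthrough']
```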
29

Jangi, Majid, Reza Khajouei, Mahmoud Tara, Mohammad Reza Mazaheri Habibi, Azade Kamel Ghalibaf, Sara Zangouei, and Mostafa Mostafavi. "User Testing of an Admission, Discharge, Transfer System: Usability Evaluation." Frontiers in Health Informatics 10, no. 1 (May 14, 2021): 77. http://dx.doi.org/10.30699/fhi.v10i1.282.

Abstract:
Introduction: To improve the first step of the hospitalization procedure, appropriate interaction must be established between users and the admission, discharge, and transfer (ADT) system. The aim of this study was to evaluate the usability of the ADT system in selected Iranian non-teaching hospitals. Material and Methods: This cross-sectional study evaluated the usability of a selected ADT system using the think-aloud method with 11 medical record administrators. Users were asked to follow the provided scenario, then share and elaborate on what they saw, thought about, did, felt, and decided during their interaction with the system. Users' feedback was collected and organized into four main categories for further processing. Results: To evaluate the usability of the ADT system, four routine scenario tasks were followed by users, and only 45.45% of them could complete all tasks. Overall, 36 independent problems were identified; data entry problems accounted for the largest share. The most important problems concerned the "date of birth" field in this category, which affects the outpatient admission process. Conclusion: The usability testing showed that the ADT subsystem of the non-teaching hospital has many problems in real users' interaction with the system. More than half of the users could not completely and successfully perform all the real-world scenario tasks, and most usability problems were found in the data entry category.
30

Birkmire-Peters, Deborah P., Leslie A. Whitaker, and Leslie J. Peters. "Usability Testing for Telemedicine Systems: A Methodology and Case Study." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 41, no. 2 (October 1997): 792–96. http://dx.doi.org/10.1177/107118139704100214.

Abstract:
This paper presents the conceptual framework and methodology that has been developed to perform usability evaluations of commercially available equipment for use in telemedicine applications. Specifically, the three components of the evaluation methodology, namely, technical acceptability, operational effectiveness, and clinical appropriateness, are described. This methodology was used to evaluate commercially available video-otoscope systems for use in a store-and-forward teleconsultation project.
31

Herawati, Yani, Sandy Halim, and Ceicalia Tesavrita. "Evaluasi Website Rakuten Indonesia dengan Eyetracking Usability Testing." Jurnal Rekayasa Sistem Industri 5, no. 1 (April 29, 2016): 60. http://dx.doi.org/10.26593/jrsi.v5i1.1914.60-68.

Abstract:
The increasing number of internet users in Indonesia has encouraged companies to take advantage of internet technology in their business (e-commerce). Amid the growth of e-commerce, Rakuten Indonesia (RI), as one of the e-commerce companies, has to compete in order to retain its existence. For e-commerce companies, a website's usability has an important role in attracting consumers to conduct transactions. The RI website's usability was evaluated using eyetracking usability testing, which draws on gaze replays, gaze plots, heat maps, and areas of interest (AOI). The evaluation covered the website features (product categorization, product filtering, product sorting, product descriptions, saving products, and ordering products) and the placement of advertisements on the RI website's homepage. The evaluation found 10 problems related to the RI website's features, and improvements were made. From the AOI-based evaluation of advertisement placements, the cost of advertising and the content of advertisements in specific areas of the website's homepage can be determined.
32

Muhammad, AbdulHafeez, and Iqra Ashraf. "A Survey on Evaluating Usability of Visual Studio." Pakistan Journal of Engineering and Technology 5, no. 1 (March 12, 2022): 23–28. http://dx.doi.org/10.51846/vol5iss1pp23-28.

Abstract:
Programming plays an important role for Computer Science (CS) students during their degrees. This study evaluated the Microsoft Visual Studio programming tool to find usability issues and provide recommendations. The main goal of this research was to initiate a dialog in the CS community: addressing usability issues can provide a better interface and improve the usability of this kind of programming framework. The interface was examined qualitatively and quantitatively with novice and expert users. The quantitative assessment was done at a university through usability testing, exploring the usability problems of Visual Studio and measuring satisfaction levels. The System Usability Scale (SUS) questionnaire and the After-Scenario Questionnaire (ASQ) were used to gather participants' opinions after usability testing and help improve the tool's interface. For the qualitative evaluation, the usability inspection technique known as heuristic evaluation was applied using Nielsen's heuristics. The experiment was conducted at the Bahria University Lahore campus in a programming course lab under a controlled environment. The overall SUS score from first-semester Bachelor of Information Technology (BSIT) students was 48%, below the threshold satisfaction value, which indicates that usability issues need to be identified. Experts evaluated the interface against Nielsen's 10 heuristics, highlighted major and minor issues, and suggested a better interface to produce better results.
33

Langthaler, Sonja, Alexander Lassnig, Christian Baumgartner, and Jörg Schröttner. "Usability evaluation of a locomotor therapy device considering different strategies." Current Directions in Biomedical Engineering 2, no. 1 (September 1, 2016): 67–69. http://dx.doi.org/10.1515/cdbme-2016-0018.

Abstract:
Usability of medical devices is one of the main determining factors in preventing use errors in treatment and strongly correlates with patient safety and quality of treatment. This work demonstrates the usability testing and evaluation of a prototype for locomotor therapy of infants. Based on the normative requirements of EN 62366, a concept combining evaluation procedures and assessment methods was created to enable extensive testing and analysis of the different aspects of usability. On the basis of the gathered information, weak points were identified and appropriate measures were presented to increase the usability and operating safety of the locomotor prototype. The overall outcome showed a usability value of 77.4% and an evaluation score of 6.99, which can be interpreted as "satisfactory".
34

Ali, Guma. "Heuristic Evaluation and Usability Testing of G-MoMo Applications." Journal of Information Systems Engineering and Management 7, no. 3 In progress (July 29, 2022): 15751. http://dx.doi.org/10.55267/iadt.07.12296.

35

Hendradewa, Andrie Pasca, and Yassierli Yassierli. "A Comparison of Usability Testing Methods in Smartphone Evaluation." Industrial Engineering & Management Systems 18, no. 2 (June 30, 2019): 154–62. http://dx.doi.org/10.7232/iems.2019.18.2.154.

36

Ahmad, Ibrahim, Nazreen Abdullasim, and Norhaida Mohd Suaib. "Usability testing on game interface design using video-based behavior analysis." International Journal of Engineering & Technology 7, no. 2.15 (April 6, 2018): 142. http://dx.doi.org/10.14419/ijet.v7i2.15.11372.

Abstract:
The objective of this study is to quantitatively incorporate user observation into the usability evaluation of game interface design. In this study, an experiment was conducted to monitor and record users' behavior using a built-in video camera after users played the "A Garuda" game. All the character movements controlled by the users were captured and recorded for comparative analysis. About 20 people took part as subjects in this experiment. The data from the video recordings were coded with Noldus Observer XT in order to find usage patterns and thus gather quantitative data for analyzing the GUI's effectiveness, efficiency, and satisfaction. The users' interactions with the game's GUI give the game designer additional information for developing a more responsive, usable game, and the control and emotion visible in users' faces provide further information to be considered in game development. Previous studies mostly focused on evaluating usability with performance measures, looking only at task results; at the end of this study, a method is therefore proposed that incorporates user observation into the usability evaluation of game design interfaces.
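
A rough sketch of how coded video observations turn into the three usual usability metrics named above. The event fields and data are invented for illustration; the study itself coded its recordings with Noldus Observer XT, not with this script.

```python
# One record per participant task attempt (invented placeholder data).
observations = [
    {"completed": True,  "seconds": 42.0, "satisfaction": 4},
    {"completed": True,  "seconds": 55.5, "satisfaction": 5},
    {"completed": False, "seconds": 90.0, "satisfaction": 2},
]

# Effectiveness: share of attempts completed successfully.
effectiveness = sum(o["completed"] for o in observations) / len(observations)

# Efficiency: mean time on successful attempts only.
done = [o["seconds"] for o in observations if o["completed"]]
efficiency = sum(done) / len(done)

# Satisfaction: mean self-reported rating on an assumed 1-5 scale.
satisfaction = sum(o["satisfaction"] for o in observations) / len(observations)

print(f"completion rate {effectiveness:.0%}, "
      f"mean time {efficiency:.1f}s, satisfaction {satisfaction:.1f}/5")
```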
37

Biers, David W. "The Case for Independent Software Usability Testing: Lessons Learned from a Successful Intervention." Proceedings of the Human Factors Society Annual Meeting 33, no. 18 (October 1989): 1218–22. http://dx.doi.org/10.1177/154193128903301811.

Abstract:
This report presents the lessons learned from a software usability test for an external customer. An initial evaluation with naive users revealed problems in the user interface and that the customer's objectives were not being met. After initial resistance to making changes in the software, the customer decided to delay release of its product to implement some of the recommendations and changed the focus of initial release to experienced users. The results of a second evaluation conducted on the revised product with experienced users were positive. Several lessons can be learned from the above evaluation: (1) Usability evaluation should be incorporated earlier in the software development cycle to minimize resistance to changes in a hardened user interface; (2) Organizations should have an independent usability evaluation of software products to avoid the temptation to overlook problems to release the product; (3) Multiple categories of dependent measures should be employed in usability testing because subjective measurement is not always consonant with user performance; and (4) Even though usability testing at the later stages of development may not impact software changes, it is useful to point out areas where training is needed to overcome deficiencies in the software.
38

Welda, Welda, Desak Made Dwi Utami Putra, and Ayu Manik Dirgayusari. "Usability Testing Website Dengan Menggunakan Metode System Usability Scale (SUS)." International Journal of Natural Science and Engineering 4, no. 3 (November 25, 2020): 152. http://dx.doi.org/10.23887/ijnse.v4i2.28864.

Abstract:
To ensure the effectiveness, efficiency, and user satisfaction of a website, it is necessary to test and evaluate the website. This study analyzes website usability testing using the System Usability Scale (SUS) method. Data were collected through interviews, observation, and questionnaires; the instrument used was a questionnaire, and the subjects consisted of 30 respondents. The data analysis technique was descriptive qualitative analysis. The respondents' assessments produced a total SUS score of 2012.50, for an average of 67.08. A SUS score of 67.08 for the STIKI Indonesia website means the user Acceptability Range is Marginal High, the Grade Scale is category D, the user Adjective Rating is OK, and the SUS Score Percentile Rank is grade D. The website still needs to be evaluated and developed further so that its use can be more optimal.
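
For context on the numbers above, SUS scoring is standardized: each of 10 items is rated 1-5, odd items contribute (rating − 1), even items contribute (5 − rating), and the sum is multiplied by 2.5 to give a 0-100 score per respondent; averaging across respondents yields a study-level mean such as the 67.08 reported here. The sample response below is invented.

```python
# Standard SUS scoring for one respondent's 10 item ratings (1-5 each).
def sus_score(ratings):
    assert len(ratings) == 10
    total = sum(r - 1 if i % 2 == 0 else 5 - r  # i = 0 is item 1 (odd item)
                for i, r in enumerate(ratings))
    return total * 2.5  # scale the 0-40 sum to 0-100

respondent = [4, 2, 4, 1, 3, 2, 4, 2, 4, 3]  # invented example answers
print(sus_score(respondent))  # -> 72.5, one respondent's SUS score
```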
APA, Harvard, Vancouver, ISO, and other styles
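The SUS figures quoted above (a total of 2012.50 across 30 respondents, averaging 67.08) follow the standard SUS scoring rule. As a minimal sketch, assuming 1-5 Likert responses to the ten standard items (the respondent data below are invented for illustration):

def sus_score(responses):
    # Standard SUS scoring: odd-numbered items contribute (answer - 1),
    # even-numbered items (5 - answer); the 0-40 sum is scaled to 0-100
    # by multiplying by 2.5.
    if len(responses) != 10:
        raise ValueError("SUS uses exactly ten items")
    total = sum((a - 1) if i % 2 == 1 else (5 - a)
                for i, a in enumerate(responses, start=1))
    return total * 2.5

respondents = [
    [4, 2, 4, 1, 5, 2, 4, 2, 4, 3],  # hypothetical answers
    [3, 3, 4, 2, 4, 2, 3, 2, 4, 2],
]
scores = [sus_score(r) for r in respondents]
print(sum(scores) / len(scores))  # average SUS, analogous to the 67.08 above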
39

Cheung, Kei Long, Mickaël Hiligsmann, Maximilian Präger, Teresa Jones, Judit Józwiak-Hagymásy, Celia Muñoz, Adam Lester-George, et al. "OPTIMIZING USABILITY OF AN ECONOMIC DECISION SUPPORT TOOL: PROTOTYPE OF THE EQUIPT TOOL." International Journal of Technology Assessment in Health Care 34, no. 1 (2018): 68–77. http://dx.doi.org/10.1017/s0266462317004470.

Full text
Abstract:
Objectives: Economic decision-support tools can provide valuable information for tobacco control stakeholders, but their usability may impact the adoption of such tools. This study aims to illustrate a mixed-method usability evaluation of an economic decision-support tool for tobacco control, using the EQUIPT ROI tool prototype as a case study. Methods: A cross-sectional mixed-methods design was used, including a heuristic evaluation, a thinking-aloud approach, and a questionnaire testing and exploring the usability of the Return on Investment tool. Results: A total of sixty-six users evaluated the tool (thinking aloud) and completed the questionnaire; for the heuristic evaluation, four experts evaluated the interface. Twenty-one percent of the respondents perceived good usability. A total of 118 usability problems were identified, of which twenty-six were categorized as most severe, indicating high priority to fix them before implementation. Conclusions: Combining user-based and expert-based evaluation methods is recommended, as these were shown to identify unique usability problems. The evaluation provides input to optimize the usability of a decision-support tool and may serve as a vantage point for other developers conducting usability evaluations to refine similar tools before wide-scale implementation. Such studies could reduce implementation gaps by optimizing usability, enhancing in turn the research impact of such interventions.
APA, Harvard, Vancouver, ISO, and other styles
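The prioritization step reported above (26 of 118 problems flagged as most severe) is typically driven by aggregated severity ratings. A minimal sketch, assuming Nielsen's common 0-4 severity scale and invented problems and ratings (the EQUIPT study's actual scale and data are not given in the abstract):

from statistics import mean

# Hypothetical severity ratings per problem from four evaluators,
# on a 0 (not a problem) to 4 (usability catastrophe) scale.
ratings = {
    "ROI figure lacks units": [3, 4, 3, 4],
    "Reset clears unsaved scenario": [4, 4, 3, 4],
    "Tooltip overlaps input field": [1, 2, 1, 2],
}

SEVERE = 3.0  # average severity that marks "fix before implementation"
for problem, scores in sorted(ratings.items(), key=lambda kv: mean(kv[1]), reverse=True):
    avg = mean(scores)
    flag = "fix first" if avg >= SEVERE else "backlog"
    print(f"{avg:.2f}  {flag:9}  {problem}")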
40

Newton, Sydney, Sarah Yale, John Gosbee, and Matthew Scanlon. "Evaluation of a High-Risk Clinical Guideline through implementation of Usability Evaluation." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 65, no. 1 (September 2021): 1357–61. http://dx.doi.org/10.1177/1071181321651097.

Full text
Abstract:
This paper describes the use of heuristic evaluation on a clinical guideline for the treatment of diabetic ketoacidosis, a medical emergency. After an hour of instruction, two novices to usability testing applied a heuristic tool, revealing numerous usability issues in the domains of metaphor, organization, typology and layout. When compared to Nielsen's principles of heuristics, the findings identified multiple sources of potential error. Additionally, this paper demonstrates that novices to usability testing can perform effective heuristic evaluation with limited training and the use of a heuristic tool. The findings will guide redesign of the studied guideline as well as prompt and more readily accessible usability testing of other high-risk and high-volume clinical guidelines.
APA, Harvard, Vancouver, ISO, and other styles
41

Unger, Nicholas R. "Testing the Untestable: Mitigating Simulation Bias During Summative Usability Testing." Proceedings of the International Symposium on Human Factors and Ergonomics in Health Care 9, no. 1 (September 2020): 142–44. http://dx.doi.org/10.1177/2327857920091058.

Full text
Abstract:
A literature review was conducted on sources of simulation bias as they apply to test design and use-scenario creation for simulated-use studies of medical devices intended for FDA submission. In Summative Usability Testing, there is no room for simulation bias to affect the data collected. From working with clinicians to design more realistic and appropriate clinical scenarios, to traveling to medical simulation labs to set up a realistic operating room simulation, we as researchers are constantly learning and improving our test designs to ensure that they are as realistic as possible. This poster looks at current research and study logistics to provide best practices for identifying common sources of simulation bias and mitigating them during a Summative Evaluation.
APA, Harvard, Vancouver, ISO, and other styles
42

Miller, K., R. Kowalski, S. Coffey-Zern, G. Ebbert, J. Learish, and R. Arnold. "Advancing Patient Safety through the Creation of a Mobile Usability Lab." Proceedings of the International Symposium on Human Factors and Ergonomics in Health Care 5, no. 1 (June 2016): 128–33. http://dx.doi.org/10.1177/2327857916051029.

Full text
Abstract:
The demand for usability testing is becoming increasingly important as healthcare moves toward a commitment to zero patient harm and higher value of care provided. Usability testing and simulation, techniques used in user-centered interaction design to evaluate healthcare systems, promote safe, high-quality care for patients. We propose the concept of a “Mobile Usability Lab”, a novel way to conduct usability testing system-wide. The Mobile Usability Lab describes a unique opportunity to step away from the standard state-of-the-art usability lab and take a creative approach to usability testing. To demonstrate the utility of this concept, we present a case study detailing a hospital-wide comparative device evaluation of new defibrillators. We recommend that research and clinical teams explore the concept of a mobile usability lab to evaluate products in the clinical environment. This work can reduce preventable harm through the optimization of health care delivery.
APA, Harvard, Vancouver, ISO, and other styles
43

Susanto, Novie, Heru Prastawa, Manik Mahachandra, and Dinda Ayu Rakhmawati. "Evaluation of Usability on Bionic Anthropomorphic (BIMO) Hand for Disability Hand Patient." Jurnal Ilmiah Teknik Industri 18, no. 2 (December 19, 2019): 124–33. http://dx.doi.org/10.23917/jiti.v18i2.8133.

Full text
Abstract:
One aspect that needs to be assessed to make a better product is usability. The usability of the BIMO hand was tested using an adapted Southampton Hand Assessment Procedure (SHAP): users were given the task of using the prosthetic hand to perform daily activities according to performance measurement rules. Following Jakob Nielsen, usability was measured with five main parameters: learnability, memorability, efficiency, satisfaction, and errors. Testing was done by giving respondents 15 daily-activity tasks. After testing, respondents completed the USE (Usefulness, Satisfaction and Ease of use) questionnaire as a medium for usability assessment, and their suggestions were collected. These suggestions concerned, for example, the size of the product, the product's sensor response, and the suitability of the product's shape and weight. Based on the USE questionnaire percentages, the usability level of the tested prosthetic hand is rated GOOD, indicating that respondents found the product good to use.
APA, Harvard, Vancouver, ISO, and other styles
44

Malik, Hafiz Abid Mahmood, Abdulhafeez Muhammad, and Usama Sajid. "Analyzing Usability of Mobile Banking Applications in Pakistan." Sukkur IBA Journal of Computing and Mathematical Sciences 5, no. 2 (December 28, 2021): 25–35. http://dx.doi.org/10.30537/sjcms.v5i2.883.

Full text
Abstract:
Usability is a key factor in the quality of a product; it includes ease of use, user satisfaction, and the ability of the user to quickly understand the product without practice. As smartphone usage increases, most organizations have shifted their services to mobile applications such as m-banking. Many people use banking services but hesitate to use m-banking because of complex interfaces. Usability researchers concentrate on the value of design simplicity so that users can perform a particular task with satisfaction, efficiency, and effectiveness; if a mobile app lacks one of these usability features, users may get confused while using it. This research examines the key usability issues in existing m-banking apps after checking the usability satisfaction level through the System Usability Scale. To compare and highlight usability issues, two usability evaluation methods were used: user testing and heuristic evaluation. In the heuristic evaluation, expert users evaluated two m-banking apps, Bank of Punjab (BOP) and MCB Islamic Bank (MIB), against Nielsen's 10 heuristics and extracted the usability issues in the apps. User testing was then performed by novice users on tasks translated from the problems extracted by the heuristic evaluation. After completing all testing, users filled in the post-test SUS questionnaire. The results show that the overall task success rate was 83%, the SUS score was 77, and the overall relative time-based efficiency was 54.2%. The expert evaluators found 83% minor errors and 17% major errors. The findings of this paper identify usability problems, and recommendations for increasing the usability of mobile banking applications are provided at the end of the paper.
APA, Harvard, Vancouver, ISO, and other styles
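The 83% success rate and 54.2% relative time-based efficiency reported above are standard task-level usability metrics. A minimal sketch using one common definition of overall relative efficiency (time spent on successfully completed tasks divided by total time spent); the task log below is invented for illustration:

# Hypothetical task log: (user, task, success, time_seconds).
log = [
    ("U1", "transfer", 1, 42.0),
    ("U1", "pay_bill", 0, 90.0),
    ("U2", "transfer", 1, 55.0),
    ("U2", "pay_bill", 1, 61.0),
]

successful_time = sum(t for _, _, ok, t in log if ok)
total_time = sum(t for _, _, _, t in log)
print(f"overall relative efficiency: {successful_time / total_time:.1%}")

success_rate = sum(ok for _, _, ok, _ in log) / len(log)
print(f"task success rate: {success_rate:.0%}")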
45

Quill, Laurie L., David E. Kancler, Allen R. Revels, Carlton D. Donahoo, Megan E. Gorman, and Matthew W. Goddard. "Performance Testing and Subjective Evaluation: Giving Equal Importance to Both." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 46, no. 24 (September 2002): 1949–53. http://dx.doi.org/10.1177/154193120204602403.

Full text
Abstract:
The key to finding the real issues with, and benefits of, a product or system is to collect and merge findings from both empirical performance testing and subjective evaluation methods. As practitioners, Human Factors professionals are frequently challenged with identifying cost-effective solutions that also meet end-user needs for usability. If the system is not usable from the end user's perspective, performance enhancements cannot be achieved; likewise, if the system is very usable but does not provide any process improvement, it is not likely to be purchased. Reconciling and communicating differences between empirical and subjective findings is challenging. This paper provides a systematic method for delivering value to all product or system users, including individuals with needs as disparate as those of management and end users. The paper incorporates recognized usability testing methods for addressing detailed usability concerns, includes a method of systematic testing called the LSF Process, and introduces a means of communicating subjective feedback through cluster graphs.
APA, Harvard, Vancouver, ISO, and other styles
46

Jackson, France, and Lara Cheng. "UX in Practice: Applying a Heuristic Evaluation Technique to Real World Challenges." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 62, no. 1 (September 2018): 702–3. http://dx.doi.org/10.1177/1541931218621159.

Full text
Abstract:
Introduction: Heuristic Evaluation is a usability method that requires usability experts to review and offer feedback on user interfaces based on a list of heuristics or guidelines. Heuristic Evaluations allow designers to get feedback early and quickly in the design process, before a full usability test is done. Unlike many usability evaluation methods, Heuristic Evaluations are performed by usability experts rather than target users, which is one reason the method makes a great challenge activity for the UX Day Challenge session. Heuristic Evaluation is often used in conjunction with usability testing: during the evaluation, usability experts evaluate an interface based on a list of heuristics or guidelines (Nielsen and Molich, 1990). There are several sets of guidelines, and they have been used to evaluate a myriad of interfaces, from gaming (Pinelle, Wong & Stach, 2008) and virtual reality (Sutcliffe & Gault, 2004) to online shopping (Chen & Macredie, 2005). Some of the most common heuristic guidelines were created by Nielsen (Nielsen and Molich, 1990) (Nielsen, 1994), Norman (Norman, 2013), Tognazzini (Tognazzini, 1998), and Shneiderman (Shneiderman, Plaisant, Cohen and Elmqvist, 2016). Choosing the best set of guidelines and the most appropriate number of usability professionals is important. Nielsen and Molich's research found that individual evaluators find only 20-51% of usability problems when evaluating alone; however, when the feedback of three to five evaluators is aggregated, more usability problems can be uncovered (Nielsen and Molich, 1990). This method can be advantageous because designers can get quick feedback for iteration before a full round of usability testing is performed. The goal of this session is to introduce this method to some and give others a refresher on how to apply it in the real world.
The Challenge: For several years, UX Day has offered an alternative session, and the most intriguing sessions were interactive and offered hands-on training. For this UX Day Challenge session, teams of at most five participants will perform a Heuristic Evaluation of a sponsor's website or product. During the session, participants will be introduced to Heuristic Evaluations: how to perform one, who should perform one, and when it is appropriate to perform one. Additionally, the pros and cons of using the method will be discussed. Following the introduction, teams will use the updated set of Nielsen heuristics (Nielsen, 1994) for the evaluation exercise. Although there are several sets of heuristics, Nielsen's is one of the best known and most widely accepted. The following updated Nielsen heuristics will be used:
• Visibility of system status
• Match between system and the real world
• User control and freedom
• Consistency and standards
• Error prevention
• Recognition rather than recall
• Flexibility and efficiency of use
• Aesthetic and minimalist design
• Help users recognize, diagnose, and recover from errors
• Help and documentation
Following the evaluation period, teams will be asked to report their findings and recommendations to the judges and audience. The judges will deliberate and announce the winner.
Conclusion: This alternative session will be an opportunity to expose participants to a methodology they may not use often. It will also be a hands-on learning experience for students who have not formally used this methodology in the real world. Most importantly, this session continues the goal of bringing new, interesting, and disruptive sessions to the traditional "conference" format and attracting UX practitioners.
APA, Harvard, Vancouver, ISO, and other styles
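The single-evaluator discovery rates cited in the abstract above (20-51% of problems) are often extrapolated with the Nielsen-Landauer problem-discovery model, which the sketch below implements; with l as each evaluator's individual discovery rate, the model predicts that n evaluators jointly find a share 1 - (1 - l)^n of all problems:

def share_found(n_evaluators, l=0.31):
    # Nielsen-Landauer model: n independent evaluators, each finding a
    # proportion l of the problems, jointly find 1 - (1 - l)^n of them.
    return 1 - (1 - l) ** n_evaluators

for n in (1, 3, 5):
    print(n, f"{share_found(n):.0%}")
# With l = 0.31, five evaluators are expected to uncover roughly 84%
# of the problems, which is why three to five evaluators are recommended.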
47

Müssener, Ulrika, Kristin Thomas, Catharina Linderoth, Marie Löf, Katarina Åsberg, Pontus Henriksson, and Marcus Bendtsen. "Development of an Intervention Targeting Multiple Health Behaviors Among High School Students: Participatory Design Study Using Heuristic Evaluation and Usability Testing." JMIR mHealth and uHealth 8, no. 10 (October 29, 2020): e17999. http://dx.doi.org/10.2196/17999.

Full text
Abstract:
Background Mobile electronic platforms provide exciting possibilities for health behavior promotion. For instance, they can promote smoking cessation, moderate alcohol consumption, healthy eating, and physical activity. Young adults in Sweden are proficient in the use of technology, having been exposed to computers, smartphones, and the internet from an early age. However, with the high availability of mobile health (mHealth) interventions of varying quality, it is critical to optimize the usability of mHealth interventions to ensure long-term use of these health promotion interventions. Objective This study aims to investigate the usability of an mHealth intervention (LIFE4YOUth) targeting health behaviors among high school students through heuristic evaluation and usability testing. Methods A preliminary version of the LIFE4YOUth mHealth intervention, which was aimed at promoting healthy eating, physical activity, smoking cessation, and nonrisky drinking among high school students, was developed in early 2019. We completed a total of 15 heuristic evaluations and 5 usability tests to evaluate the usability of the mHealth intervention prototype and improve its functioning, content, and design. Results Heuristic evaluation from a total of 15 experts (10 employees and 5 university students, both women and men, aged 18-25 years) revealed that the major usability problems and the worst ratings, a total of 17 problems termed usability catastrophes, concerned shortcomings in displaying easy-to-understand information to the users or technical errors. The results of the usability testing, which included 5 high school students (both girls and boys, aged 15-18 years), showed that the design, quality, and quantity of content in the intervention may impact the users' level of engagement. Poor functionality was considered a major barrier to usability. Of the 5 participants, one rated the LIFE4YOUth intervention as poor, 2 rated it as average, and 2 assessed it as good, according to the System Usability Scale. Conclusions High school students have high expectations of digital products. If an mHealth intervention does not offer optimal functions, they may cease to use it. Optimizing the usability of mHealth interventions is a critical step in the development process. Heuristic evaluation and usability testing in this study provided valuable knowledge about the prototype from a user's perspective. The findings may lead to the development of similar interventions targeting the high school population.
APA, Harvard, Vancouver, ISO, and other styles
48

Iqbal, Taufiq, and Bahruni Bahruni. "Evaluasi Usability Test e-Repository dengan menggunakan Metode Nielsen’s Attributtes of Usability (NAU)." Jurnal JTIK (Jurnal Teknologi Informasi dan Komunikasi) 3, no. 2 (September 30, 2019): 40. http://dx.doi.org/10.35870/jtik.v3i2.85.

Full text
Abstract:
Meeting good software standards requires testing software quality, and usability is one aspect of software quality. The purpose of this study is to obtain usability test results for AMIK Indonesia's e-Repository, focusing on the efficiency and error factors of the Nielsen's Attributes of Usability (NAU) questionnaire method, so that suggestions and recommendations can be made for developing the e-Repository and improving the website's quality in the usability aspect. The research method is divided into 4 stages: 1) Initiation, 2) Pre-User Testing, 3) User Testing, and 4) Post-User Testing. The sample consisted of 22 active AMIK Indonesia students from the 2015, 2016, 2017 and 2018 cohorts. The results show that usability testing with the Nielsen's Attributes of Usability (NAU) model can be applied to determine the quality level of a website. In the UT-7, UT-8, and UT-10 tests, fewer than 80% of respondents answered successfully; success rates above 80% were achieved on UT-1, UT-2, UT-3, UT-4, UT-5, UT-6, and UT-9. The analysis yielded an interpretation of Very Satisfied for 14 items, Satisfied for 1, Quite Satisfied for 2, and Not Satisfied for 1: the Not Satisfied interpretation concerned the question coded ER14, and the Quite Satisfied interpretations the questions coded ER12 and ER13, all three of which belong to the error dimension. Keywords: Usability, Evaluation, e-Repository, Website Quality, Nielsen's Attributes of Usability (NAU).
APA, Harvard, Vancouver, ISO, and other styles
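The 80% pass/fail cutoff used above can be computed directly from per-task outcomes. A minimal sketch with invented 0/1 outcomes for 22 respondents (the study's raw data are not published in the abstract):

# Hypothetical per-task outcomes (1 = respondent succeeded, 0 = failed).
results = {
    "UT-1": [1] * 20 + [0] * 2,
    "UT-7": [1] * 15 + [0] * 7,
}

THRESHOLD = 0.80
for task, outcomes in sorted(results.items()):
    rate = sum(outcomes) / len(outcomes)
    status = "pass" if rate >= THRESHOLD else "fail"
    print(f"{task}: success rate {rate:.0%} ({status})")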
49

Kouroupetroglou, Georgios, and Dimitris Spiliotopoulos. "Usability Methodologies for Real-Life Voice User Interfaces." International Journal of Information Technology and Web Engineering 4, no. 4 (October 2009): 78–94. http://dx.doi.org/10.4018/jitwe.2009100105.

Full text
Abstract:
This paper studies usability methodologies for spoken dialogue web interfaces, along with the appropriate designer-needs analysis. The work offers a theoretical perspective on the methods that are extensively used and provides a framework description for creating and testing usable content and applications for conversational interfaces. The main concerns include design issues for usability testing and evaluation during the development lifecycle, basic customer experience metrics, and the problems that arise after the deployment of real-life systems. Through its discussion of evaluation and testing methods, this paper argues for the importance and potential of wizard-based functional assessment and usability testing for deployed systems, presenting an appropriate environment as part of an integrated development framework.
APA, Harvard, Vancouver, ISO, and other styles
50

Stambler, Danielle Mollie, Erin Feddema, Olivia Riggins, Kari Campeau, Lee-Ann Kastman Breuch, Molly M. Kessler, and Stephanie Misono. "REDCap Delivery of a Web-Based Intervention for Patients With Voice Disorders: Usability Study." JMIR Human Factors 9, no. 1 (March 25, 2022): e26461. http://dx.doi.org/10.2196/26461.

Full text
Abstract:
Background Web-based health interventions are increasingly common and are promising for patients with voice disorders because web-based participation does not require voice use. To address needs such as Health Insurance Portability and Accountability Act compliance, unique user access, the ability to send automated reminders, and a limited development budget, we used the Research Electronic Data Capture (REDCap) data management platform to deliver a patient-facing psychological intervention designed for patients with voice disorders. This was a novel use of REDCap. Objective We aimed to evaluate the usability of the intervention, with this intervention serving as a use case for REDCap-based patient-facing interventions. Methods We used REDCap survey instruments to develop the web-based voice intervention modules, then conducted usability evaluations using (1) heuristic evaluations by 2 evaluators, and (2) formal usability testing with 7 participants, consisting of predetermined tasks, a think-aloud protocol, ease-of-use measurements, a product reaction card, and a debriefing interview. Results Heuristic evaluations found strengths in visibility of system status and real-world match, and weaknesses in user control and help documentation. Based on this feedback, changes to the intervention were made before usability testing. Overall, usability testing participants found the intervention useful and easy to use, although testing revealed some concerns with design, content, and terminology. Some concerns were readily addressed, and others required adaptations within REDCap. Conclusions The REDCap version of a complex web-based patient-facing intervention performed well in heuristic evaluation and formal usability testing. REDCap can effectively be used for patient-facing intervention delivery, particularly if the limitations of the platform are anticipated and mitigated.
APA, Harvard, Vancouver, ISO, and other styles