Academic literature on the topic 'Usability Testing and Evaluation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference papers, and other scholarly sources on the topic 'Usability Testing and Evaluation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Usability Testing and Evaluation"

1

Jeffries, Robin, and Heather Desurvire. "Usability testing vs. heuristic evaluation." ACM SIGCHI Bulletin 24, no. 4 (October 1992): 39–41. http://dx.doi.org/10.1145/142167.142179.

2

Tan, Wei-Siong, and R. R. Bishu. "Which is a Better Method of Web Evaluation? a Comparison of User Testing and Heuristic Evaluation." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 46, no. 14 (September 2002): 1256–60. http://dx.doi.org/10.1177/154193120204601404.

Abstract:
Besides recognizing the importance of incorporating usability evaluation techniques in the design and development phase of any user interface (UI), it is also very important that designers recognize the benefits and limitations of the different usability inspection methods, because the quality of a usability evaluation depends on the method used. Two of the more popular usability evaluation techniques are user testing and heuristic analysis. The main objective of this study was to compare the efficiency and effectiveness of user testing and heuristic analysis in evaluating four different commercial websites. This was done by comparing the proportion of usability problems and the types of problems addressed by these two methods in both the early and later stages of the design process. The results showed that user testing and heuristic analysis addressed very different usability problems and that, with the exception of compatibility and security-and-privacy problems, where heuristic analysis outperformed user testing, both methods are equally efficient and effective in addressing different categories of usability problems.
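As a rough illustration of the comparison this study performs, the overlap between the problem sets found by two methods reduces to simple set arithmetic. A minimal sketch in Python, with invented problem IDs rather than the paper's data:

    # Hypothetical problem sets found by each method (invented IDs).
    user_testing = {"P01", "P02", "P03", "P07", "P09"}
    heuristic_analysis = {"P03", "P04", "P05", "P06", "P09"}

    all_problems = user_testing | heuristic_analysis
    found_by_both = user_testing & heuristic_analysis

    # Proportion of all known problems addressed by each method.
    print(f"user testing: {len(user_testing) / len(all_problems):.0%}")
    print(f"heuristic analysis: {len(heuristic_analysis) / len(all_problems):.0%}")
    print(f"found by both: {len(found_by_both) / len(all_problems):.0%}")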
3

Wang, Enlie, and Barrett Caldwell. "An Empirical Study of Usability Testing: Heuristic Evaluation Vs. User Testing." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 46, no. 8 (September 2002): 774–78. http://dx.doi.org/10.1177/154193120204600802.

Abstract:
In this study, two different usability-testing methods (Heuristic Evaluation and User Testing) were selected to test the usability of a pre-release version of software for searching Science, Mathematics and Engineering education materials. Our major goal was to compare Heuristic Evaluation and User Testing in terms of efficiency, effectiveness and cost/benefit analysis. We found that Heuristic Evaluation was more efficient than User Testing in finding usability problems (41 vs. 10), while User Testing was more effective than Heuristic Evaluation in finding major problems (70% vs. 12%). In general, Heuristic Evaluation appears to be more economical, finding a wide range of usability problems at a low cost in comparison to User Testing. However, User Testing can provide more insightful data from real users, such as users' performance and satisfaction.
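The abstract's headline numbers lend themselves to a small efficiency/effectiveness calculation. In the sketch below, the problem counts and major-problem shares come from the abstract, while the effort figures are invented purely to show how a cost/benefit ratio could be formed:

    # Counts and shares from the abstract; effort hours are an assumption.
    problems_found = {"heuristic evaluation": 41, "user testing": 10}
    major_problem_share = {"heuristic evaluation": 0.12, "user testing": 0.70}
    effort_hours = {"heuristic evaluation": 8, "user testing": 40}  # invented

    for method in problems_found:
        rate = problems_found[method] / effort_hours[method]
        print(f"{method}: {problems_found[method]} problems found, "
              f"{major_problem_share[method]:.0%} of major problems, "
              f"{rate:.1f} problems per hour")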
4

Sinabell, Irina, and Elske Ammenwerth. "Agile, Easily Applicable, and Useful eHealth Usability Evaluations: Systematic Review and Expert-Validation." Applied Clinical Informatics 13, no. 01 (January 2022): 67–79. http://dx.doi.org/10.1055/s-0041-1740919.

Abstract:
Background: Electronic health (eHealth) usability evaluations of rapidly developed eHealth systems are difficult to accomplish because traditional usability evaluation methods require substantial time in preparation and implementation. This illustrates the growing need for fast, flexible, and cost-effective methods to evaluate the usability of eHealth systems. To address this demand, the present study systematically identified and expert-validated rapidly deployable eHealth usability evaluation methods. Objective: Identification and prioritization of eHealth usability evaluation methods suitable for agile, easily applicable, and useful eHealth usability evaluations. Methods: The study design comprised a systematic iterative approach in which expert knowledge was contrasted with findings from the literature. Forty-three eHealth usability evaluation methods were systematically identified and assessed regarding their ease of applicability and usefulness through semi-structured interviews with 10 European usability experts and systematic literature research. The most appropriate eHealth usability evaluation methods were selected stepwise based on the experts' judgements of their ease of applicability and usefulness. Results: Of the 43 eHealth usability evaluation methods identified, 10 were recommended by the experts based on their usefulness for rapid eHealth usability evaluations. The three most frequently recommended methods were Remote User Testing, Expert Review, and the Rapid Iterative Test and Evaluation Method. Eleven methods, such as Retrospective Testing, were not recommended for use in rapid eHealth usability evaluations. Conclusion: We conducted a systematic review and expert validation to identify rapidly deployable eHealth usability evaluation methods. The comprehensive and evidence-based prioritization of eHealth usability evaluation methods supports faster usability evaluations, and so contributes to the ease of use of emerging eHealth systems.
5

Ismail, Nor Azman, Fadzrul Izwan Jamaluddin, Akmal Harraz Hamidan, Ahmad Fariz Ali, Su Elya Mohamed, and Che Soh Said. "Usability Evaluation of Encyclopedia Websites." International Journal of Innovative Computing 11, no. 1 (April 28, 2021): 21–25. http://dx.doi.org/10.11113/ijic.v11n1.282.

Abstract:
Usability is an important aspect that every website should focus on. It tells us how well, and how successfully, a website will function with real users. Many people think usability tests are expensive and time-consuming, but usability testing can be cost-effective and save time compared with fixing an unusable website later. This study evaluates the usability of encyclopedia websites by using automated usability testing tools and questionnaire methods. The questionnaire was developed based on a standard instrument called the Website Analysis and Measurement Inventory (WAMMI), which identified 20 common usability questions divided into five categories; each category deals with one aspect of usability. The automated usability testing tools used in this study were Pingdom and GT Metrix, which calculate and analyse the performance of the selected encyclopedia websites based on website components including page load time, media size and overall web performance grades. This study could help web designers, developers, and practitioners design better and more user-friendly encyclopedia websites.
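WAMMI scoring is essentially item averaging within scales. A minimal sketch, assuming a 1-5 response scale and four items per category; the category names follow WAMMI, but the responses are fabricated (WAMMI's real scoring also standardizes against a reference database):

    from statistics import mean

    # One respondent's fabricated answers, four items per WAMMI category.
    responses = {
        "attractiveness":  [4, 5, 4, 3],
        "controllability": [3, 4, 4, 4],
        "efficiency":      [5, 4, 5, 4],
        "helpfulness":     [3, 3, 4, 4],
        "learnability":    [4, 4, 5, 5],
    }
    for category, items in responses.items():
        print(f"{category}: {mean(items):.2f} / 5")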
6

Følstad, Asbjørn, and Kasper Hornbæk. "Work-domain knowledge in usability evaluation: Experiences with Cooperative Usability Testing." Journal of Systems and Software 83, no. 11 (November 2010): 2019–30. http://dx.doi.org/10.1016/j.jss.2010.02.026.

7

Mohd Amin, Siti Fauziah, Sabariah Sharif, Muhamad Suhaimi Taat, Mad Nor Madjapuni, and Muralindran Mariappan. "IMPLEMENTATION OF USABILITY TESTING USING EXPERT PANEL EVALUATION METHOD IN THE EVALUATION PHASE OF M-SOLAT ROBOT MODULE." International Journal of Education, Psychology and Counseling 7, no. 45 (March 15, 2022): 222–33. http://dx.doi.org/10.35631/ijepc.745018.

Abstract:
The evaluation phase is one of the essential phases in Design and Development Research (PRP). Various methods can be used in this phase; nevertheless, a researcher must choose a reasonable method to secure the achievement of the objectives. Accordingly, this research implemented the usability testing evaluation of the M-Solat Robot Module using the expert panel evaluation method. The instrument employed in this study was the USE Questionnaire, which was analysed using the Percentage Calculation Method (PCM). The outcomes confirmed the usability of the M-Solat Robot Module in terms of usability = 90.2%, ease of use = 88.4%, ease of learning = 90.1% and satisfaction = 91.7%. The usability testing evaluation using the expert panel evaluation method enabled the researcher to accomplish the study objectives. Therefore, this analysis recommends that prospective researchers use expert panel evaluation in usability testing for studies involving usability screening evaluation of innovations.
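The Percentage Calculation Method reduces each USE Questionnaire dimension to the score obtained as a share of the maximum possible score. A minimal sketch, assuming a 1-5 Likert scale; the item scores are fabricated, not the study's data:

    def pcm_percentage(item_scores, scale_max=5):
        """Total score obtained as a percentage of the maximum possible."""
        return 100 * sum(item_scores) / (scale_max * len(item_scores))

    # Fabricated expert-panel scores for one dimension (eight items, 1-5).
    ease_of_use = [5, 4, 5, 4, 4, 5, 4, 4]
    print(f"ease of use: {pcm_percentage(ease_of_use):.1f}%")  # 87.5%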
8

Pierce, Robert P., Bernie R. Eskridge, Brandi Ross, Margaret A. Day, Brooke Dean, and Jeffery L. Belden. "Improving the User Experience with Discount Site-Specific User Testing." Applied Clinical Informatics 13, no. 05 (October 2022): 1040–52. http://dx.doi.org/10.1055/s-0042-1758222.

Abstract:
Objectives: Poor electronic health record (EHR) usability is associated with patient safety concerns, user dissatisfaction, and provider burnout. EHR certification requires vendors to perform user testing. However, there are no such requirements for site-specific implementations. Health care organizations customize EHR implementations, potentially introducing usability problems. Site-specific usability evaluations may help to identify these concerns, and "discount" usability methods afford health systems a means of doing so even without dedicated usability specialists. This report characterizes a site-specific discount user testing program launched at an academic medical center. We describe lessons learned and highlight three of the EHR features in detail to demonstrate the impact of testing on implementation decisions and on users. Methods: Thirteen new EHR features which had already undergone heuristic evaluation and iterative design were evaluated over the course of three user test events. Each event included five to six users. Participants used the think-aloud technique. Measures of user efficiency, effectiveness, and satisfaction were collected. Usability concerns were characterized by the type of usability heuristic violated and by correctability. Results: Usability concerns occurred at a rate of 2.5 per feature tested. Seventy percent of the usability concerns were deemed correctable prior to implementation. The first highlighted feature was moved to production despite low single ease question (SEQ) scores, which may have predicted its subsequent withdrawal from production based on post-implementation feedback. Another feature was rebuilt based on usability findings, and a new version was retested and moved to production. A third feature highlights an easily correctable usability concern identified in user testing. Quantitative usability metrics generally reinforced qualitative findings. Conclusion: Simplified user testing with a limited number of participants identifies correctable usability concerns, even after heuristic evaluation. Our discount approach to site-specific usability has a role in implementations and may improve the usability of the EHR for the end user.
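The report's summary rates are simple ratios over the tested features. In the sketch below, the per-feature tallies are invented so that they roughly reproduce the reported 2.5 concerns per feature and 70% correctability:

    # Invented usability-concern counts for the 13 tested features.
    concerns_per_feature = [3, 2, 4, 1, 3, 2, 2, 3, 4, 2, 3, 2, 2]
    correctable_before_golive = 23  # invented count

    total = sum(concerns_per_feature)
    print(f"{total / len(concerns_per_feature):.1f} concerns per feature")
    print(f"{correctable_before_golive / total:.0%} correctable before implementation")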
9

Yul, Faradila Ananda, and Miftahul Jannah. "ANALISIS USABILITAS WEBSITE SIAM UMRI MENGGUNAKAN METODE USABILITY TESTING." Jurnal Surya Teknika 7, no. 1 (December 13, 2020): 86–95. http://dx.doi.org/10.37859/jst.v7i1.2355.

Abstract:
The existence of websites is underpinned by the development of information and communication technology. This study aims to measure the usability level of the Student Academic Information System (SIAM) website at Universitas Muhammadiyah Riau. The problems encountered were that there had been no usability evaluation of the UMRI SIAM website, as well as complaints from UMRI students when accessing it. The usability problem in this study was addressed by conducting usability testing along the five dimensions proposed by Nielsen (1993), namely learnability, efficiency, memorability, errors and user satisfaction, using the thinking-aloud method, which required 3-5 respondents. The subjects studied were expert users (UMRI students) and novice users (non-UMRI students). On the learnability dimension, the respondents showed a good ability to learn the website. On the efficiency dimension, respondents increased their speed of completing tasks. On the memorability dimension, the results show that the respondents have good memory ability. On the error dimension, 38 problems were found when accessing the SIAM UMRI website, and on the satisfaction dimension, respondents' satisfaction when accessing the SIAM UMRI website scored 70. In addition, this study provides recommendations for improving the SIAM UMRI website.
10

Lyon, Aaron R., Kelly Koerner, and Julie Chung. "Usability Evaluation for Evidence-Based Psychosocial Interventions (USE-EBPI): A methodology for assessing complex intervention implementability." Implementation Research and Practice 1 (January 2020): 263348952093292. http://dx.doi.org/10.1177/2633489520932924.

Abstract:
Background: Most evidence-based practices in mental health are complex psychosocial interventions, but little research has focused on assessing and addressing the characteristics of these interventions, such as design quality and packaging, that serve as intra-intervention determinants (i.e., barriers and facilitators) of implementation outcomes. Usability—the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction—is a key indicator of design quality. Drawing from the field of human-centered design, this article presents a novel methodology for evaluating the usability of complex psychosocial interventions and describes an example "use case" application to an exposure protocol for the treatment of anxiety disorders with one user group. Method: The Usability Evaluation for Evidence-Based Psychosocial Interventions (USE-EBPI) methodology comprises four steps: (1) identify users for testing; (2) define and prioritize EBPI components (i.e., tasks and packaging); (3) plan and conduct the evaluation; and (4) organize and prioritize usability issues. In the example, clinicians were selected for testing from among the identified user groups of the exposure protocol (e.g., clients, system administrators). Clinicians with differing levels of experience with exposure therapies (novice, n = 3; intermediate, n = 4; advanced, n = 3) were sampled. Usability evaluation included Intervention Usability Scale (IUS) ratings and individual user testing sessions with clinicians, and heuristic evaluations conducted by design experts. After testing, discrete usability issues were organized within the User Action Framework (UAF) and prioritized via independent ratings (1–3 scale) by members of the research team. Results: Average IUS ratings (80.5; SD = 9.56 on a 100-point scale) indicated good usability and also room for improvement. Ratings for novice and intermediate participants were comparable (77.5), with higher ratings for advanced users (87.5). Heuristic evaluations suggested similar usability (mean overall rating = 7.33; SD = 0.58 on a 10-point scale). Testing with individual users revealed 13 distinct usability issues, which reflected all four phases of the UAF and a range of priority levels. Conclusion: Findings from the current study suggested that USE-EBPI is useful for evaluating the usability of complex psychosocial interventions and informing subsequent intervention redesign (in the context of broader development frameworks) to enhance implementation. Future research goals are discussed, which include applying USE-EBPI with a broader range of interventions and user groups (e.g., clients). Plain language abstract: Characteristics of evidence-based psychosocial interventions (EBPIs) that impact the extent to which they can be implemented in real-world mental health service settings have received far less attention than the characteristics of individuals (e.g., clinicians) or settings (e.g., community mental health centers) where EBPI implementation occurs. No methods exist to evaluate the usability of EBPIs, which can be a critical barrier or facilitator of implementation success. The current article describes a new method, the Usability Evaluation for Evidence-Based Psychosocial Interventions (USE-EBPI), which uses techniques drawn from the field of human-centered design to evaluate EBPI usability. An example application to an intervention protocol for anxiety problems among adults is included to illustrate the value of the new approach.
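Averaging usability-scale ratings by user group, as done here with the Intervention Usability Scale, is straightforward. The individual ratings below are fabricated to reproduce the group means reported in the abstract (the SD will therefore differ from the paper's):

    from statistics import mean

    # Fabricated IUS ratings (100-point scale), chosen to match group means.
    ius_ratings = {
        "novice":       [72.5, 77.5, 82.5],        # mean 77.5
        "intermediate": [70.0, 77.5, 80.0, 82.5],  # mean 77.5
        "advanced":     [82.5, 87.5, 92.5],        # mean 87.5
    }
    everyone = [r for group in ius_ratings.values() for r in group]
    print(f"overall mean: {mean(everyone):.1f}")  # 80.5, as reported
    for group, ratings in ius_ratings.items():
        print(f"{group}: {mean(ratings):.1f}")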

Dissertations / Theses on the topic "Usability Testing and Evaluation"

1

Capra, Miranda Galadriel. "Usability Problem Description and the Evaluator Effect in Usability Testing." Diss., Virginia Tech, 2006. http://hdl.handle.net/10919/26477.

Abstract:
Previous usability evaluation method (UEM) comparison studies have noted an evaluator effect on problem detection in heuristic evaluation, with evaluators differing in problems found and problem severity judgments. There have been few studies of the evaluator effect in usability testing (UT), task-based testing with end-users. UEM comparison studies focus on counting usability problems detected, but we also need to assess the content of usability problem descriptions (UPDs) to more fully measure evaluation effectiveness. The goals of this research were to develop UPD guidelines, explore the evaluator effect in UT, and evaluate the usefulness of the guidelines for grading UPD content.

Ten guidelines for writing UPDs were developed by consulting usability practitioners through two questionnaires and a card sort. These guidelines are (briefly): be clear and avoid jargon, describe problem severity, provide backing data, describe problem causes, describe user actions, provide a solution, consider politics and diplomacy, be professional and scientific, describe your methodology, and help the reader sympathize with the user. A fourth study compared usability reports collected from 44 evaluators, both practitioners and graduate students, watching the same 10-minute UT session recording. Three judges measured problem detection for each evaluator and graded the reports for following 6 of the UPD guidelines.

There was support for the existence of an evaluator effect, even when watching pre-recorded sessions, with low to moderate individual thoroughness of problem detection across all/severe problems (22%/34%), reliability of problem detection (37%/50%), and reliability of severity judgments (57% for severe ratings). Practitioners received higher grades averaged across the 6 guidelines than students did, suggesting that the guidelines may be useful for grading reports. The grades for the guidelines were not correlated with thoroughness, suggesting that the guideline grades complement measures of problem detection.

A simulation of evaluators working in groups found a 34% increase in severe problems found when a second evaluator was added. The simulation also found that the thoroughness of individual evaluators would have been overestimated if the study had included a small number of evaluators. The final recommendations are to use multiple evaluators in UT, and to assess both problem detection and description when measuring evaluation effectiveness.
Ph. D.

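The dissertation's group simulation can be approximated in a few lines of Monte Carlo. The sketch below assumes each evaluator detects each severe problem independently with probability 0.34, the individual thoroughness reported above; that independence assumption is a simplification, and the fact that it predicts a larger gain than the 34% found in the study suggests real evaluators' detections overlap well beyond chance:

    import random

    random.seed(0)
    P_DETECT, N_PROBLEMS, TRIALS = 0.34, 20, 10_000  # assumed parameters

    def severe_problems_found(n_evaluators):
        # A problem counts as found if at least one evaluator detects it.
        return sum(
            any(random.random() < P_DETECT for _ in range(n_evaluators))
            for _ in range(N_PROBLEMS)
        )

    solo = sum(severe_problems_found(1) for _ in range(TRIALS)) / TRIALS
    pair = sum(severe_problems_found(2) for _ in range(TRIALS)) / TRIALS
    print(f"one evaluator:  {solo:.1f} of {N_PROBLEMS} severe problems")
    print(f"two evaluators: {pair:.1f} (+{(pair - solo) / solo:.0%})")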
2

Bjelkenstedt, Alf. "Web-Based Drawing Tool in GWT with Usability Testing and Usability Evaluation." Thesis, Linköpings universitet, Programvara och system, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-104975.

Abstract:
On behalf of Inspectera HK AB in Norrköping, a web-based drawing tool has been developed in Java, mainly with the Google Web Toolkit (GWT) library. The purpose of this tool is to facilitate the work of both Inspectera's staff and their clients with different types of drawings, such as blueprints for pest control, fire protection and, especially, drawings for the company's e-service of self-checks. Besides developing the drawing tool, usability testing and a usability evaluation have been performed.
3

Oz, Saba. "Usability Testing Of A Family Medicine Information System." Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614716/index.pdf.

Abstract:
Healthcare is an important part of life in most societies and attracts a significant amount of public investment. Primary healthcare is a fundamental branch of the healthcare system where patients and doctors initially meet. Family Medicine Information Systems are developed in an effort to ease the daily work of family doctors with the help of information technology. Such systems are generally used for handling critical tasks such as managing health records of patients, monitoring pregnancy and keeping track of children's vaccination. Like any medical information technology, the usability of such systems is a vital concern for enabling efficient and effective primary healthcare operations. Family Medicine is a recently established practice in Turkey and there are a number of systems in service to aid the daily work of family doctors. However, none of these systems have been subjected to a systematic usability analysis. In this study, a usability analysis of a popular Family Medicine Information System used in Turkey is conducted. By combining several usability evaluation techniques, the study identified several important usability issues and provided recommendations for further improving the system. The main usability issue observed in the system was the overall complexity of the information presented at the main interface, which often confused and misled the users. In order to address this problem, it is suggested that features related to the most frequent family medicine operations should be placed on the main screen, whereas remaining features should be organized under auxiliary pages with clear navigation aids.
4

Fayyaz, Muhammad-Hamid, and Usman Idrees. "Usability Testing & Evaluation of Chores in GNU/Linux for Novice." Thesis, Blekinge Tekniska Högskola, Avdelningen för för interaktion och systemdesign, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-5100.

Abstract:
A challenging issue of GNU/Linux, usability, has been studied in this report. Usability is considered one of the core components of any system software, which should be efficient, effective and satisfying for users. Different studies on usability have been conducted on different distros, but there is no specific study on Ubuntu 8.10. Ubuntu 8.10 is considered for usability evaluation of GNU/Linux system software, and a multi-phased research approach is adopted. Participants (students) from different disciplines and levels were recruited to conduct the usability test. The system software is evaluated on the basis of usability test results and users' opinions. An interview was designed and conducted to validate the tested findings of the system. GNU/Linux serves the whole community through its different distros. The current set of interface guidelines and default software used by Ubuntu does not provide efficiency, effectiveness and satisfaction for novice users. It is a very important aspect that software should have uniformity and complete control in applications. There is a need to improve or redesign the default software for better usability in terms of interface, message windows, bugs, help, etc. for novice users.
5

Lennerup, Anna. "Att mäta användbarhet på webbplatser : En fall studie där två metoder för användbarhet jämförs." Thesis, Södertörns högskola, Institutionen för naturvetenskap, miljö och teknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:sh:diva-26317.

Abstract:
The aim of this thesis is to investigate two different methods used to test usability on websites, to determine the advantages and disadvantages of the two methods, and to compare the results. The two selected methods are the think-aloud method and the asynchronous remote method. Remote usability testing has existed for more than ten years, but the possibilities of this method need to be explored further. To achieve the purpose of this thesis, a case study was conducted on an e-commerce site, and for the remote testing, online usability testing software was used. The results of this study show that think-aloud usability testing uncovers more issues with the website than remote usability testing.
6

Neveryd, Malin. "Integrating Usability Evaluation in an Agile Development Process." Thesis, Linköpings universitet, Interaktiva och kognitiva system, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-102922.

Abstract:
Medius is a software company which provides IT solutions that streamline and automate business processes. The purpose of this thesis was to investigate the possibility of integrating usability evaluation into the development process of Medius' software MediusFlow: how such integration would be done, and which usability evaluation methods could be used. To be able to provide a suggestion, a prestudy was conducted in order to get a good overview of Medius as a company as well as the development process of MediusFlow. With the prestudy as a basis, the main study was conducted, and four usability evaluation methods were chosen: Cognitive Walkthrough, Coaching Method, Consistency Inspection and Question-Asking Protocol. These usability evaluation methods were carried out, evaluated and analyzed. Based on the studies and the literature, a suggestion regarding the integration of usability evaluations was made. The result of this case study was presented as a process chart, where the different phases in Medius' software development process are matched with suitable usability evaluation methods. The relevant phases and their suggested methods are:

Preparation phase - Cognitive Walkthrough and Coaching Method in combination with Thinking-Aloud and Interviews

Hardening phase - Coaching Method in combination with Thinking-Aloud and Interviews, as well as Consistency Inspection

Maintenance - Field Observation

This result is part of the overall work towards a more user-centered design of the software.
7

Selvaraj, Prakaash V. "Comparative Study of Synchronous Remote and Traditional In-Lab Usability Evaluation Methods." Thesis, Virginia Tech, 2004. http://hdl.handle.net/10919/9939.

Abstract:
Traditional in-lab usability evaluation has been used as the 'standard' evaluation method for evaluating and improving the usability of software user interfaces (Andre, Williges, & Hartson, 2000). However, traditional in-lab evaluation has its drawbacks, such as the availability of representative end users, the high cost of testing and the lack of true representation of a user's actual work environment. To counteract these issues, various alternative and less expensive usability evaluation methods (UEMs) have been developed over the past decade. One such UEM is remote usability evaluation. Remote evaluation is a relatively new area and lacks empirical data to support the approach; the need for empirical support was addressed in this study. The overall purpose of this study was to determine the differences in the effectiveness of the two evaluation types, the synchronous remote evaluation approach (SREM) and the traditional evaluation approach, in collecting usability data. This study also compared the effectiveness of the two methods based on user type: usability novice users and usability experienced users. Finally, the hypothesis that users, in general, will prefer the remote reporting approach to the traditional in-lab evaluation approach was also tested. Results indicated that, in general, the synchronous remote approach is at least as effective as the traditional in-lab usability evaluation approach in collecting usability data across all user types. However, when user type was taken into consideration, there was a significant difference in the high-severity negative critical incident data collected between the two approaches for the novice user group: the traditional approach collected significantly more high-severity negative critical incident data than the remote approach. Additionally, results indicate that users tend to be more willing to participate in the same approach as the one they participated in previously. Recommendations for usability evaluators conducting the SREM approach and areas for future research are identified in the study.
Master of Science
8

Eidaroos, Abdulhadi M. "Multidimensional evaluation approach for an e-government website : a case study of e-government in Saudi Arabia." Thesis, Loughborough University, 2011. https://dspace.lboro.ac.uk/2134/9173.

Abstract:
This study investigates the refinement of an evaluation framework for e-Government websites. The aim of the research was to determine how an existing evaluation framework, which recommends the use of multiple usability techniques, could be used to obtain usability data that would indicate how to improve e-Government websites and satisfy users' needs. The framework describes how common techniques, such as heuristic testing and user testing, can be used with the emerging discipline of web analytics to provide a comprehensive and detailed view of users' interactions on e-Government websites. The original framework was refined in the light of the findings, and the refined framework should facilitate the improvement of e-Government websites depending on users' demands and interactions. The work involved implementing the original multi-dimensional framework on e-Government websites in Saudi Arabia. A case study method was used over two implementations. In the former implementation, the evaluation methods consisted of heuristic evaluation, followed by usability testing, then web analytic tools. In the latter implementation, refinements to the evaluation framework were proposed and the order of methods was amended: web analytics was used first, followed by heuristic evaluation and then usability testing. The framework recommends specific usability methods for evaluating specific issues. The conclusions of this study illustrate the potential benefits of using a multidimensional evaluation framework for e-Government websites; it was found that each usability method had its own particular benefits and limitations. The research concludes by illustrating the potential usefulness of the designed evaluation framework in raising awareness of usability methods for evaluating e-Government websites in Saudi Arabia.
9

Демська, А. І., В. В. Євсєєв, Т. А. Колесникова, and В. П. Ткаченко. "Methods and means of evaluation usability of human-machine interface." Thesis, Kaunas University of Applied Science, 2019. http://openarchive.nure.ua/handle/document/9241.

Abstract:
To achieve the real usability goal, the designer's activity demands an additional toolkit for the quality evaluation of graphical and multimedia products, including quality from a psychological and ergonomic standpoint. In this paper, an analysis of modern methods for assessing the effectiveness of websites was conducted, and research developments in the field of promotion of web resources on the Internet were generalized. The methods of obtaining data for the quantitative assessment of visual perception, which can be used in the development of both information technology and personal identification technologies, are explored. It is revealed that, with the use of cognitive technologies that take into account the peculiarities of a person's visual perception of graphic information, it is possible to create effective tools and methods for the development of technological modules and complete application information systems. As a result of the work, an algorithm for improving the efficiency of UI web systems was developed.
10

Macko, J. Steven T. "Remote evaluation, a comparison of attended and unattended usability testing via the internet." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape15/PQDD_0012/MQ32368.pdf.


Books on the topic "Usability Testing and Evaluation"

1

Computer usability testing and evaluation. Englewood Cliffs, N.J: Prentice-Hall, 1985.

2

Usability testing and system evaluation: A guide for designing useful computer systems. London: Chapman & Hall, 1994.

3

Nielsen, Jakob. Eyetracking web usability. Berkeley, CA: New Riders, 2010.

4

Hertzum, Morten. Usability Testing. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-031-02227-2.

5

Barnum, Carol M. Usability testing and research. New York: Longman, 2002.

6

Jordan, Patrick W., ed. Usability evaluation in industry. London: Taylor & Francis, 1996.

7

Soares, Marcelo Marcio, and Francesco Rebelo. Advances in usability evaluation. Boca Raton, FL: Taylor & Francis, 2012.

8

Kendler, Jonathan, and Allison S. Yale, eds. Usability testing of medical devices. Boca Raton: Taylor & Francis, 2011.

9

Schalles, Christian. Usability Evaluation of Modeling Languages. Wiesbaden: Springer Fachmedien Wiesbaden, 2013. http://dx.doi.org/10.1007/978-3-658-00051-6.

10

Redish, Janice, ed. A practical guide to usability testing. Norwood, N.J: Ablex Pub. Corp., 1993.


Book chapters on the topic "Usability Testing and Evaluation"

1

Ivory, Melody Y. "Usability Testing Methods." In Automated Web Site Evaluation, 23–37. Dordrecht: Springer Netherlands, 2003. http://dx.doi.org/10.1007/978-94-017-0375-8_3.

2

Riihiaho, Sirpa, Marko Nieminen, Stina Westman, Ronja Addams-Moring, and Jukka Katainen. "Procuring Usability: Experiences of Usability Testing in Tender Evaluation." In Lecture Notes in Business Information Processing, 108–20. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-21783-3_8.

3

Ekşioğlu, Mahmut, Esin Kiris, Burak Çapar, Murat N. Selçuk, and Selen Ouzeir. "Heuristic Evaluation and Usability Testing: Case Study." In Lecture Notes in Computer Science, 143–51. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-21660-2_16.

4

Effendy, Veronikha, Dana Sulistiyo Kusumo, Nungki Selviandro, and Kusuma Ayu Laksitowening. "Usability Evaluation Using Unmoderated Remote Usability Testing on Angkasa LMS Website Case Study." In Proceedings of Seventh International Congress on Information and Communication Technology, 761–69. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-1607-6_68.

5

Nichols, Elizabeth, Erica Olmsted-Hawala, Temika Holland, and Amy Anderson Riemer. "Usability Testing Online Questionnaires: Experiences at the U.S. Census Bureau." In Advances in Questionnaire Design, Development, Evaluation and Testing, 315–48. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2019. http://dx.doi.org/10.1002/9781119263685.ch13.

6

Murillo, Braulio, Jose Pow Sang, and Freddy Paz. "Heuristic Evaluation and Usability Testing as Complementary Methods: A Case Study." In Design, User Experience, and Usability: Theory and Practice, 470–78. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-91797-9_34.

7

Murillo, Braulio, Silvia Vargas, Arturo Moquillaza, Luis Fernández, and Freddy Paz. "Usability Testing as a Complement of Heuristic Evaluation: A Case Study." In Design, User Experience, and Usability: Theory, Methodology, and Management, 434–44. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-58634-2_32.

8

Huang, Po-Hsin, and Ming-Chuan Chiu. "Evaluating the Healthcare Management System by Usability Testing." In Lecture Notes in Computer Science, 369–76. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-07725-3_37.

9

Maguire, Martin, and Paul Isherwood. "A Comparison of User Testing and Heuristic Evaluation Methods for Identifying Website Usability Problems." In Design, User Experience, and Usability: Theory and Practice, 429–38. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-91797-9_31.

10

Rasche, Peter, Moritz Richter, Katharina Schäfer, Sabine Theis, Verena Nitsch, and Alexander Mertens. "Practical Evaluation of the Emergency Usability Lab for Testing the Usability of Medical Devices in Emergency Situations." In Human Aspects of IT for the Aged Population. Technologies, Design and User Experience, 222–30. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-50252-2_17.


Conference papers on the topic "Usability Testing and Evaluation"

1

Wirasasmiata, Rasyid, and Muhammad Uska. "Evaluation of E-Rapor Usability using Usability Testing Method." In Proceedings of the 6th International Conference on Educational Research and Innovation (ICERI 2018). Paris, France: Atlantis Press, 2019. http://dx.doi.org/10.2991/iceri-18.2019.15.

2

Tarta, A. M., and G. S. Moldovan. "Automatic Usability Evaluation Using AOP." In 2006 IEEE-TTTC International Conference on Automation, Quality and Testing, Robotics. IEEE, 2006. http://dx.doi.org/10.1109/aqtr.2006.254605.

3

Rahmawati, Ajeng Fitria, Tenia Wahyuningrum, Ariq Cahya Wardhana, Anindita Septiari, and Lasmedi Afuan. "User Experience Evaluation Using Integration of Remote Usability Testing and Usability Evaluation Questionnaire Method." In 2022 IEEE International Conference on Cybernetics and Computational Intelligence (CyberneticsCom). IEEE, 2022. http://dx.doi.org/10.1109/cyberneticscom55287.2022.9865664.

4

da Silva, Thiago Hellen O., Lavínia Matoso Freitas, Marília Soares Mendes, and Elizabeth Sucupira Furtado. "Textual evaluation vs. User testing: a comparative analysis." In X Workshop sobre Aspectos da Interação Humano-Computador na Web Social. Sociedade Brasileira de Computação (SBC), 2019. http://dx.doi.org/10.5753/waihcws.2019.7673.

Abstract:
One of the ways to evaluate a system is through textual evaluation. This type of evaluation considers users' textual opinions to infer aspects of their interaction with the system. Although this method covers many texts and considers spontaneous narratives of the users, it takes a lot of time and effort. Some authors have reported on the need to compare evaluation techniques in order to investigate their effectiveness in revealing issues or to supplement the results of a system assessment. This study presents a comparative analysis between textual evaluation and user testing. A case study was performed evaluating the usability and user experience of a health app. As a result, the techniques were analyzed based on aspects that involved describing the results, the resources needed and the description of problems and users.
5

Di Nuovo, Alessandro, Simone Varrasi, Daniela Conti, Joshua Bamsforth, Alexandr Lucas, Alessandro Soranzo, and John McNamara. "Usability Evaluation of a Robotic System for Cognitive Testing." In 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, 2019. http://dx.doi.org/10.1109/hri.2019.8673187.

6

Moses, Nicholas D., and Nordica A. MacCarty. "A Practical Evaluation for Cookstove Usability." In ASME 2018 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/detc2018-85728.

Abstract:
While improved cookstoves have been designed and distributed for decades with the goal of addressing the human health and environmental issues caused by traditional biomass cooking methods, they often have not achieved the intended impact. One of the main reasons for this shortcoming is that engineers often focus on technical attributes of cookstove designs, such as improved fuel and combustion efficiency, but neglect usability. If a stove design does not meet a cook’s needs and preferences, the stove will likely be used only as a supplement to a traditional stove, or not used at all. To help close this gap, a testing protocol for cookstove usability was developed. The proposed protocol is based on established usability practices from fields such as software and consumer product design, and includes usability criteria taken from existing cookstove research and interviews with subject experts. The protocol includes objective and subjective testing methods, is designed to elicit user perceptions of and the relative importance of each usability criterion in a given context, and incorporates ethnographic methods to improve validity in cross-cultural applications and in diverse testing scenarios. This protocol may be useful to stove designers as a way to better understand users and validate or improve designs, to implementers as a method to assist with the selection of the most appropriate stove for a project, and to researchers as a tool to assess cookstoves and cookstove programs. Preliminary field and laboratory work to test the validity of the protocol demonstrated a mixture of meaningful and uncertain results, indicating that while it is a reasonable tool to assess cookstove usability, the protocol requires interpretation of qualitative data and assessment of uncertainty to be most effective.
7

He, Dandan, and Can Wang. "Usability Evaluation of Software Testing Based on Analytic Hierarchy Process." In 2016 4th International Conference on Machinery, Materials and Computing Technology. Paris, France: Atlantis Press, 2016. http://dx.doi.org/10.2991/icmmct-16.2016.404.

8

Winter, S. H., and M. Bouzit. "Testing and Usability Evaluation of the MRAGES Force Feedback Glove." In 2006 International Workshop on Virtual Rehabilitation. IEEE, 2006. http://dx.doi.org/10.1109/iwvr.2006.1707532.

9

Febrianti, Sisca Ayu, and Satriyo Adhy. "Usability Evaluation of iPekalonganKota with Different End-user Groups Using Usability Testing and USE Questionnaire Methods." In 2021 5th International Conference on Informatics and Computational Sciences (ICICoS). IEEE, 2021. http://dx.doi.org/10.1109/icicos53627.2021.9651841.

10

Martono, Kurniawan Teguh, Oky Dwi Nurhayati, and Aris Puji Widodo. "The Evaluation of Child’s Health Monitoring System Using the Usability Testing Approach." In 2018 5th International Conference on Information Technology, Computer, and Electrical Engineering (ICITACEE). IEEE, 2018. http://dx.doi.org/10.1109/icitacee.2018.8576933.


Reports on the topic "Usability Testing and Evaluation"

1

Lowry, Svetlana Z., Matthew T. Quinn, Mala Ramaiah, Robert M. Schumacher, Emily S. Patterson, Robert North, Jiajie Zhang, Michael C. Gibbons, and Patricia Abbott. Technical evaluation, testing, and validation of the usability of electronic health records. Gaithersburg, MD: National Institute of Standards and Technology, 2012. http://dx.doi.org/10.6028/nist.ir.7804.

2

Lowry, Svetlana Z., Mala Ramaiah, Sheryl Taylor, Emily S. Patterson, Sandra Spickard Prettyman, Debora Simmons, David Brick, Paul Latkany, and Michael C. Gibbons. Technical Evaluation, Testing, and Validation of the Usability of Electronic Health Records: Empirically Based Use Cases for Validating Safety-Enhanced Usability and Guidelines for Standardization. National Institute of Standards and Technology, October 2015. http://dx.doi.org/10.6028/nist.ir.7804-1.

3

Agarwal, Smisha, Madhu Jalan, Holly C. Wilcox, Ritu Sharma, Rachel Hill, Emily Pantalone, Johannes Thrul, Jacob C. Rainey, and Karen A. Robinson. Evaluation of Mental Health Mobile Applications. Agency for Healthcare Research and Quality (AHRQ), May 2022. http://dx.doi.org/10.23970/ahrqepctb41.

Abstract:
Background. Mental health mobile applications (apps) have the potential to expand the provision of mental health and wellness services to traditionally underserved populations. There is a lack of guidance on how to choose wisely from the thousands of mental health apps without clear evidence of safety, efficacy, and consumer protections. Purpose. This Technical Brief proposes a framework to assess mental health mobile applications with the aim of facilitating the selection of apps. The results of applying the framework will yield summary statements on the strengths and limitations of the apps and are intended for use by providers and patients/caregivers. Methods. We reviewed systematic reviews of mental health apps, reviewed published and gray literature on mental health app frameworks, and conducted four Key Informant group discussions to identify gaps in existing mental health frameworks and key framework criteria. These reviews and discussions informed the development of a draft framework to assess mental health apps. Iterative testing and refinement of the framework were done in seven successive rounds through double application of the framework to a total of 45 apps. Items in the framework with an interrater reliability under 90 percent were discussed among the evaluation team for revisions of the framework or guidance. Findings. Our review of the existing frameworks identified gaps in the assessment of risks that users may face from apps, such as privacy and security disclosures and regulatory safeguards to protect the users. Key Informant discussions identified priority criteria to include in the framework, including safety and efficacy of mental health apps. We developed the Framework to Assist Stakeholders in Technology Evaluation for Recovery (FASTER) to Mental Health and Wellness; it comprises three sections: Section 1, Risks and Mitigation Strategies, assesses the integrity and risk profile of the app; Section 2, Function, focuses on descriptive aspects related to accessibility, costs, organizational credibility, evidence and clinical foundation, privacy/security, usability, functions for remote monitoring of the user, access to crisis services, and artificial intelligence (AI); and Section 3, Mental Health App Features, focuses on specific mental health app features, such as journaling and mood tracking. Conclusion. FASTER may be used to help appraise and select mental health mobile apps. Future application, testing, and refinements may be required to determine the framework's suitability and reliability across multiple mental health conditions, as well as to account for the rapidly expanding applications of AI, gamification, and other new technology approaches.
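The 90 percent interrater threshold used to refine the framework corresponds to simple percent agreement over double-rated items. A minimal sketch with fabricated ratings:

    # Two raters' fabricated item-level judgments for one app.
    rater_a = ["met", "not met", "met", "met", "partial", "met", "not met", "met"]
    rater_b = ["met", "not met", "met", "not met", "partial", "met", "not met", "met"]

    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    print(f"percent agreement: {matches / len(rater_a):.0%}")  # 88%: discuss item 4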
4

Andre, Terence S., and Margaret Schurig. Advanced Usability Evaluation Methods. Fort Belvoir, VA: Defense Technical Information Center, April 2007. http://dx.doi.org/10.21236/ada470915.

5

Greitzer, Frank L. Situated Usability Testing for Security Systems. Office of Scientific and Technical Information (OSTI), March 2011. http://dx.doi.org/10.2172/1015274.

6

Edwards, T. L., and H. W. Allen. Programming software for usability evaluation. Office of Scientific and Technical Information (OSTI), January 1997. http://dx.doi.org/10.2172/446361.

7

Feher, Bela. GeoPlot Declutter Interface Usability Evaluation. Fort Belvoir, VA: Defense Technical Information Center, July 2004. http://dx.doi.org/10.21236/ada427323.

8

Paar, J. G., and D. R. Porterfield. Evaluation of radiochemical data usability. Office of Scientific and Technical Information (OSTI), April 1997. http://dx.doi.org/10.2172/461261.

9

Theofanos, Mary, Brian Stanton, Shahram Orandi, Ross Micheals, and Nien-Fan Zhang. Usability testing of ten-print fingerprint capture. Gaithersburg, MD: National Institute of Standards and Technology, 2007. http://dx.doi.org/10.6028/nist.ir.7403.

