Journal articles on the topic 'Graphical user interfaces (Computer systems)'

Consult the top 50 journal articles for your research on the topic 'Graphical user interfaces (Computer systems).'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of each publication as a PDF and read its abstract online whenever they are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Boyd, L. H., W. L. Boyd, and G. C. Vanderheiden. "The Graphical User Interface: Crisis, Danger, and Opportunity." Journal of Visual Impairment & Blindness 84, no. 10 (December 1990): 496–502. http://dx.doi.org/10.1177/0145482x9008401002.

Abstract:
The graphical user interface is a powerful new interface for mainstream computer users and a source of serious concern for those who cannot see. Fortunately, it will eventually be made as accessible to blind people as character-based forerunners. The systems that evolve will provide blind computer users with new capabilities not possible with character-based computers and access systems. However, the effects of previous inaccessibility, the current limited accessibility, and lingering doubts about the solvability of some of the access problems have slowed efforts to capitalize on the advantages and opportunities of these new systems. This article identifies potential new problems posed by graphic computing environments and describes some programs and strategies that are being developed to provide access to these environments.
2

Dobrokvashina, Aleksandra S. "DEVELOPMENT AND TESTING OF THE GRAPHICAL USER INTERFACES FOR UAV." RSUH/RGGU Bulletin. Series Information Science. Information Security. Mathematics, no. 1 (2024): 8–20. http://dx.doi.org/10.28995/2686-679x-2024-1-8-20.

Abstract:
Currently, the field of unmanned aerial vehicles is growing and developing rapidly. Unmanned systems are being actively integrated into everyday life. However, with the increase in the number of UAVs, the volume of software has expanded significantly, and many graphical interfaces for control have appeared. Today, various solutions are known for controlling aircraft: both using mobile phones and devices, and using personal computers and control systems. Control can be carried out both for one UAV and for their groups and swarms, and in some cases for heterogeneous groups of robots, where UAVs can be used in conjunction with ground, surface or underwater robots. In the latter case, the complexity of the graphical interface can increase significantly. To date, there are no clear ways to classify and organize graphical interfaces for controlling UAVs and their groups. This work provides an analytical review of existing graphical interfaces, and its goal is to create a clear classification to enable their distribution. That will make it possible in the future to propose a prototype of a universal graphical interface for controlling unmanned aerial vehicles and their groups, as well as to develop methods for creating and testing a number of interfaces.
3

Halvankar, Amol. "College Enquiry For Student using AI ChatBot." International Scientific Journal of Engineering and Management 03, no. 03 (March 23, 2024): 1–9. http://dx.doi.org/10.55041/isjem01409.

Abstract:
A chatbot is a computer program that can conduct conversations between users and computers. Chatbot technology is text-based, safe to use, and accessible to a large audience. Chatbots for university enquiries are developed using AI algorithms that interpret user messages and assess user needs. The chatbot's responses aim to match the user's input, so that queries can be answered without anyone having to be physically present at the institution. The program responds to the students' inquiries by applying its intelligence. Applications of this type can use several kinds of user interface: natural language processing, command line, graphical user interface (GUI), menu-driven, form-based, and so on. GUIs and web-based user interfaces are the most typical types; however, sometimes another kind of user interface is required. This is where a conversational user interface based on chatbots fits in. The chatbot is one type of bot that has been present on chat systems. Users can interact with them via graphical interfaces, and the trend is moving in this direction. They often offer a stateful service, meaning that each session's data is saved by the application.
4

Wojciechowski, A. "Hand’s poses recognition as a mean of communication within natural user interfaces." Bulletin of the Polish Academy of Sciences: Technical Sciences 60, no. 2 (October 1, 2012): 331–36. http://dx.doi.org/10.2478/v10175-012-0044-3.

Abstract:
Natural user interface (NUI) is a successor of command line interfaces (CLI) and graphical user interfaces (GUI) so well known to computer users. A new natural approach is based on extensive human behaviors tracking, where hand tracking and gesture recognition seem to play the main roles in communication. The presented paper reviews common approaches to discussed hand features tracking and provides a very effective proposal of the contour based hand’s poses recognition method which can be straightforwardly used for a hand-based natural user interface. Its possible usage varies from medical systems interaction, through games up to impaired people communication support.
5

Halmetoja, Esa, and Francisco Forns-Samso. "Evaluating graphical user interfaces for buildings." Journal of Corporate Real Estate 22, no. 1 (January 11, 2020): 48–70. http://dx.doi.org/10.1108/jcre-08-2019-0037.

Abstract:
Purpose: The purpose of this paper is to evaluate six different graphical user interfaces (GUIs) for facilities operations using human–machine interaction (HMI) theories. Design/methodology/approach: The authors used a combined multi-functional method that includes a review of the theories behind HMI for GUIs as its first approach. Consequently, heuristic evaluations were conducted to identify usability problems in a professional context. Ultimately, thematic interviews were conducted with property managers and service staff to determine special needs for the interaction of humans and the built environment. Findings: The heuristic evaluation revealed that not all the studied applications were complete when the study was done. The significant non-motivational factor was slowness, and a lighter application means the GUI is more comfortable and faster to use. The evaluators recommended not using actions that deviate from regular practice. Proper implementation of the GUI would make it easier and quicker to work on property maintenance and management. The thematic interviews concluded that the GUIs form an excellent solution that enables communication between the occupant, owner and service provider. Indoor conditions monitoring was seen as the most compelling use case for GUIs. Two-dimensional (2D) layouts are more demonstrative and faster than three-dimensional (3D) layouts for monitoring purposes. Practical implications: The study provides an objective view of the strengths and weaknesses of specific types of GUI. So, it can help to select a suitable GUI for a particular environment. The 3D view is not seen as necessary for monitoring indoor conditions room by room or sending a service request. Many occupants’ services can be implemented without any particular layout. On the other hand, some advanced services were desired for the occupants, such as monitoring occupancy, making space reservations and people tracking. These aspects require a 2D layout at least. The building information model is seen as useful, especially when monitoring complex technical systems. Originality/value: Earlier investigations have primarily concentrated on investigating human–computer interaction. The authors studied human–building interaction instead. The notable difference to previous efforts is that the authors considered the GUI as a medium with which to communicate with the built environment, and looked at its benefits for top-level processes, not for the user interface itself.
6

Basak, Abhinav, and Shatarupa Thakurta Roy. "Application of universal design principles on computer mouse interface: developing a universal mouse pointing and control system to provide affordance to the left-handed users." Proceedings of the Design Society 4 (May 2024): 2317–26. http://dx.doi.org/10.1017/pds.2024.234.

Abstract:
The graphical user interface was introduced to democratize access to computer systems by simplifying hardware and visual interfaces. Technological advancements further reduced the constraints, primarily benefiting the mainstream users. However, the specialized needs of the critical users have always been neglected. This paper delves into the ergonomics of the mouse pointer and the computer mouse, focusing on left-handed computer users as a critical user category to develop and propose a universal design solution to integrate left-handers as a mainstream user category in a computer interface.
7

Jones, Sara. "Graphical interfaces for knowledge engineering: an overview of relevant literature." Knowledge Engineering Review 3, no. 3 (September 1988): 221–47. http://dx.doi.org/10.1017/s0269888900004483.

Abstract:
Literature relevant to the design and development of graphical interfaces for knowledge-based systems is briefly reviewed and discussed. The efficiency of human-computer interaction depends to a large extent on the degree to which the human-machine interface can answer the user's cognitive needs and accurately support his or her natural cognitive processes and structures. Graphical interfaces can often be particularly suitable in this respect, especially in cases where the user's “natural idiom” is graphical. Illustrated examples are given of the way in which graphical interfaces have successfully been used in various fields with particular emphasis on their use in the field of knowledge-based systems. The paper ends with a brief discussion of possible future developments in the field of knowledge-based system interfaces and of the role that graphics might play in such developments.
8

Cuomo, Donna L., and Charles D. Bowen. "Stages of User Activity Model as a Basis for User-System Interface Evaluations." Proceedings of the Human Factors Society Annual Meeting 36, no. 16 (October 1992): 1254–58. http://dx.doi.org/10.1177/154193129203601616.

Abstract:
This paper discusses the results of the first phase of a research project concerned with developing methods and measures of user-system interface effectiveness for command and control systems with graphical, direct manipulation style interfaces. Due to the increased use of prototyping user interfaces during concept definition and demonstration/validation phases, the opportunity exists for human factors engineers to apply evaluation methodologies early enough in the life cycle to make an impact on system design. Understanding and improving user-system interface (USI) evaluation techniques is critical to this process. In 1986, Norman proposed a descriptive “stages of user activity” model of human-computer interaction. Hutchins, Hollin, and Norman (1986) proposed concepts of measures based on the model which would assess the directness of the engagements between the user and the interface at each stage of the model. This first phase of our research program involved applying three USI evaluation techniques to a single interface, and assessing which, if any, provided information on the directness of engagement at each stage of Norman's model. We also classified the problem types identified according to the Smith and Mosier (1986) functional areas. The three techniques used were cognitive walkthrough, heuristic evaluation, and guidelines. It was found that the cognitive walkthrough method applied almost exclusively to the action specification stage. The guidelines were applicable to more of the stages evaluated but all the techniques were weak in measuring semantic distance and all of the stages on the evaluation side of the HCI activity cycle. Improvements to existing or new techniques are required for evaluating the directness of engagement for graphical, direct manipulation style interfaces.
9

Grayson, J. P., C. Espinosa, M. Dunsmuir, M. Edwards, and B. Tribble. "Operating systems and graphic user interfaces." ACM SIGGRAPH Computer Graphics 23, no. 5 (December 1989): 281–93. http://dx.doi.org/10.1145/77277.77292.

10

Neelamkavil, F., and GS Teo. "X versus Eiffel toolkits for building graphical user interfaces." Information and Software Technology 33, no. 8 (October 1991): 559–65. http://dx.doi.org/10.1016/0950-5849(91)90114-q.

11

ETIENNE, F. "The Impact of Modern Graphics Tools on Science, and their Limitations." International Journal of Modern Physics C 02, no. 01 (March 1991): 58–65. http://dx.doi.org/10.1142/s012918319100007x.

Abstract:
Within the last few years the range of scientific applications for which computer graphics is used has become extremely large. However, not all scientists require the same level of computing power. Until recently the software interface to graphics display systems has been provided by the manufacturers of the hardware. This generated interest in the possibility of using graphics standards. Another important issue is related to the deluge of data generated by super-computers and high-volume data sources which make it impossible for users to have an overall knowledge of either the data structures or the application programs. Partial solutions can be found in emerging products providing an interactive computational environment for scientific visualization. Some of the characteristics required for graphics hardware are presented. From a hardware perspective, graphics computing involves the use of a graphical computer system with sufficient power and functionality that the user can manipulate and interact with displayed objects. To achieve such a level of performance computers are usually designed as networked workstations with access to local graphics capabilities. Finally, it is made clear that the main computer graphics applications are scientific activities. From high energy physics experiments with wireframe event displays up to medical imaging with interactive volume rendering, scientific visualization is not simply displaying data from data intensive sources. Fields of computer graphics like image processing, computer aided design, signal processing and user interfaces provide tools helping researchers to understand and steer scientific computation.
12

Deshmukh, Akshay Madhav, and Ricardo Chalmeta. "Validation of system usability scale as a usability metric to evaluate voice user interfaces." PeerJ Computer Science 10 (February 29, 2024): e1918. http://dx.doi.org/10.7717/peerj-cs.1918.

Abstract:
In recent years, user experience (UX) has gained importance in the field of interactive systems. To ensure its success, interactive systems must be evaluated. As most of the standardized evaluation tools are dedicated to graphical user interfaces (GUIs), the evaluation of voice-based interactive systems or voice user interfaces is still in its infancy. With the help of a well-established evaluation scale, the System Usability Scale (SUS), two prominent, widely accepted voice assistants were evaluated. The evaluation, with SUS, was conducted with 16 participants who performed a set of tasks on Amazon Alexa Echo Dot and Google Nest Mini. We compared the SUS score of Amazon Alexa Echo Dot and Google Nest Mini. Furthermore, we derived the confidence interval for both voice assistants. To enhance understanding for usability practitioners, we analyzed the Adjective Rating Score of both interfaces to comprehend the experience of an interface’s usability through words rather than numbers. Additionally, we validated the correlation between the SUS score and the Adjective Rating Score. Finally, a paired sample t-test was conducted to compare the SUS score of Amazon Alexa Echo Dot and Google Nest Mini. This resulted in a huge difference in scores. Hence, in this study, we corroborate the utility of the SUS in voice user interfaces and conclude by encouraging researchers to use SUS as a usability metric to evaluate voice user interfaces.
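As a side note for readers unfamiliar with the metric used above: the standard SUS scoring rule converts ten 1–5 Likert responses into a single 0–100 score. The short Python sketch below illustrates that standard rule only; it is not code from the study, and the example responses are invented.

```python
def sus_score(responses):
    """Standard System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items contribute (response - 1), even-numbered items
    contribute (5 - response); the summed contributions are scaled by 2.5.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5

# Example: a fairly positive (hypothetical) set of responses.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0
```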
13

Stepanyan, Ivan V. "Ergonomic qualities of graphic user interfaces (GUI): state and evolution." Occupational Health and Industrial Ecology, no. 12 (February 15, 2019): 51–56. http://dx.doi.org/10.31089/1026-9428-2018-12-51-57.

Abstract:
More and more workers interact with graphical user interfaces for most of their working shift. However, low ergonomic quality or incorrect use of a graphical user interface can pose a risk of unfavorable effects on workers’ health. The authors identified and classified typical scenarios of graphical user interface use. Various types of graphical user interface and operator occupations are characterized by various parameters of exertion, both biomechanical and psycho-physiological. Among the main characteristics of a graphical user interface are the presence or absence of a mouse or joystick, intuitive clarity, a balanced palette, fixed positions of graphic elements, comfort level, etc. A review of various graphical user interfaces and an analysis of their characteristics demonstrated the possibility of various occupational risk factors. Some of the ergonomic problems identified are connected with the incorporation of graphical user interfaces into various information technologies and systems. The authors present the role of the ergonomic characteristics of graphical user interfaces in the safe and effective work of operators and give examples of algorithms for visualizing large volumes of information for easier comprehension and analysis. Correct use of interactive computer visualization, with competent design and adherence to ergonomic principles, will optimize mental work in innovative activity and preserve operators’ health. Promising directions in this area are ergonomic interfaces developed with information hygiene principles in mind, big data analysis technology and automatically generated cognitive graphics.
14

Thomas, Bruce H., and Paul Calder. "Supporting cartoon animation techniques in direct manipulation graphical user interfaces." Information and Software Technology 47, no. 5 (March 2005): 339–55. http://dx.doi.org/10.1016/j.infsof.2004.09.003.

15

Lee, Seongil. "Access to Computer Systems with Graphical User Interface by Touch: Haptic Discrimination of Icons." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 39, no. 11 (October 1995): 742–46. http://dx.doi.org/10.1177/154193129503901106.

Abstract:
The inherently visual nature of graphical user interfaces often makes it difficult for visually impaired or blind people to access current information systems. The purpose of this study is to examine the possibility of using haptic modality for “blind access” to graphical information systems by examining haptic discrimination performance of icons under geometrically transformed conditions. Seven sighted and five congenitally blind subjects discriminated ten icons by touch using the raised line drawings and the Optacon. Two haptic tasks were performed for each subject: 1) naming the icons, and 2) matching the icons under geometrical transformations. The performance in haptic discrimination of icons was dependent on display type and transformation. No significant difference in accuracy and latency between sighted and blind subjects could be found for the two-dimensional tactile displays employed in the study. The results of this study may have implications in the design of tactile communication systems.
16

Foust, Gabriel, Jaakko Järvi, and Sean Parent. "Generating reactive programs for graphical user interfaces from multi-way dataflow constraint systems." ACM SIGPLAN Notices 51, no. 3 (May 11, 2016): 121–30. http://dx.doi.org/10.1145/2936314.2814207.

17

Alemerien, Khalid. "User-Friendly Security Patterns for Designing Social Network Websites." International Journal of Technology and Human Interaction 13, no. 1 (January 2017): 39–60. http://dx.doi.org/10.4018/ijthi.2017010103.

Abstract:
The number of users in Social Networking Sites (SNSs) is increasing exponentially. As a result, several security and privacy problems in SNSs have appeared. Part of these problems is caused by insecure Graphical User Interfaces (GUIs). Therefore, the developers of SNSs should take into account the balance between security and usability aspects during the development process. This paper proposes a set of user-friendly security patterns to help SNS developers to design interactive environments which protect the privacy and security of individuals while being highly user friendly. The authors proposed four patterns and evaluated them against the Facebook interfaces. The authors found that participants accepted the interfaces constructed through the proposed patterns more willingly than the Facebook interfaces.
18

Akselrud, Lev, and Yuri Grin. "WinCSD: software package for crystallographic calculations (Version 4)." Journal of Applied Crystallography 47, no. 2 (March 11, 2014): 803–5. http://dx.doi.org/10.1107/s1600576714001058.

Abstract:
The fourth version of the program package WinCSD is multi-purpose computer software for crystallographic calculations using single-crystal and powder X-ray and neutron diffraction data. The software environment and the graphical user interface are built using the platform of the Microsoft .NET Framework, which grants independence from changing Windows operating systems and allows for transferring to other operating systems. Graphic applications use the three-dimensional OpenGL graphics language. WinCSD covers the complete spectrum of crystallographic calculations, including powder diffraction pattern deconvolution, crystal structure solution and refinement in 3 + d space, refinement of the multipole model and electron density studies from diffraction data, and graphical representation of crystallographic information.
19

Kumar, Sumit, Nitin Nitin, and Mitul Yadav. "Finite State Testing of Graphical User Interface using Genetic Algorithm." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 5 (May 17, 2023): 282–87. http://dx.doi.org/10.17762/ijritcc.v11i5.6615.

Abstract:
Graphical user interfaces are key components of any software. Nowadays, the popularity of software depends on how easily the user can interact with the system. However, as the system becomes complex, this interaction also becomes complicated, with many states. Testing graphical user interfaces is an important phase of modern software development. GUI testing is possible only by interacting with the system, which can be time-consuming and is generally automated on the basis of a test suite. The test suite generation proposed in this paper is based on a genetic algorithm in which various test cases are generated heuristically. For performance validation, the proposed approach was compared with a variant of PSO, and GA was found to perform slightly better than PSO.
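The abstract does not give the paper's actual encoding or fitness function, so the following Python sketch is only a hypothetical illustration of the general idea: chromosomes are GUI event sequences, and fitness counts the distinct state transitions a sequence exercises in a toy GUI model.

```python
import random

# Hypothetical GUI model: (state, event) pairs and the state they lead to.
TRANSITIONS = {
    ("login", "click_ok"): "home",
    ("home", "open_menu"): "menu",
    ("menu", "click_settings"): "settings",
    ("settings", "click_back"): "home",
}
EVENTS = ["click_ok", "open_menu", "click_settings", "click_back"]

def coverage(sequence, start="login"):
    """Fitness: number of distinct transitions exercised by an event sequence."""
    state, covered = start, set()
    for event in sequence:
        nxt = TRANSITIONS.get((state, event))
        if nxt:
            covered.add((state, event))
            state = nxt
    return len(covered)

def evolve(pop_size=20, length=8, generations=50):
    """Evolve event sequences with one-point crossover and point mutation."""
    population = [[random.choice(EVENTS) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=coverage, reverse=True)
        parents = population[: pop_size // 2]          # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)
            child = a[:cut] + b[cut:]                  # one-point crossover
            if random.random() < 0.2:                  # mutation
                child[random.randrange(length)] = random.choice(EVENTS)
            children.append(child)
        population = parents + children
    return max(population, key=coverage)

print(evolve())  # best-covering test case found for the toy model
```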
20

Maturana, S., and Y. Eterovic. "Vehicle Routing and Production Planning Decision Support Systems: Designing Graphical User Interfaces." International Transactions in Operational Research 2, no. 3 (July 1995): 233–47. http://dx.doi.org/10.1111/j.1475-3995.1995.tb00018.x.

21

Peng, Shurong. "The influence of Graphical User Interfaces on human-computer interaction and the impact of organizing software on decision-making process." Applied and Computational Engineering 50, no. 1 (March 25, 2024): 213–21. http://dx.doi.org/10.54254/2755-2721/50/20241509.

Abstract:
The necessity for effective information management has grown crucial due to the widespread availability and utilization of electronic devices and software systems. Organizational software provides a solution by enhancing efficiency and reducing cognitive burden. Graphical User Interfaces (GUIs) play a crucial role in facilitating the interaction between humans and computers in the domain of software design. This study employs a comprehensive literature review to examine the fundamental concepts underlying efficient GUI design and the current trends in organizing software interfaces. The text presents a GUI concept that has been designed for the purpose of organizing software in order to address complex decision-making scenarios. The model offers a graphical representation of a framework that may be used to analyze various elements influencing decision-making. This study sheds light on the potential of Organizing Software to enhance cognitive processes. However, there is a need for additional enhancements in integrating user connectivity and intelligent algorithms. In general, the study emphasizes the significance of user-centric GUI design in improving the effectiveness of Organizing Software.
22

KIM, HADONG, and MALREY LEE. "AN OPEN MODULE DEVELOPMENT ENVIRONMENT (OMDE) FOR INTERACTIVE VIRTUAL REALITY SYSTEMS." International Journal of Pattern Recognition and Artificial Intelligence 24, no. 06 (September 2010): 947–60. http://dx.doi.org/10.1142/s0218001410008251.

Abstract:
Graphic designers and developers would like to use virtual reality (VR) systems with a friendly Graphical User Interface (GUI) and development environment that provide efficient creation, modification and deletion functions. Although current VR graphical design systems incorporate the most up-to-date features, graphic designers are not able to specify the interface features that they desire or features that are most suitable for specific design and development tasks. This paper proposes an Open Module Development Environment (OMDE) for VR systems that can provide interactive functions that reflect the graphic designers requirements. OMDE allows graphic designers to specify their specific interface features and functions, and the system configuration to utilize plug-in modules. Hence a dynamically created development environment is provided that is also tailored to the graphic designer's requirements and facilitates graphical composition and editing. The functions of the graphical interface modules and the OMDE system specifications are identified. The system implementation environment and structure of the 3D VR software are described and the implementation is evaluated for performance, as an improved 3D graphic design tool.
23

Wilkinson, Alexander, Michael Gonzales, Patrick Hoey, David Kontak, Dian Wang, Noah Torname, Sam Laderoute, et al. "Design guidelines for human–robot interaction with assistive robot manipulation systems." Paladyn, Journal of Behavioral Robotics 12, no. 1 (January 1, 2021): 392–401. http://dx.doi.org/10.1515/pjbr-2021-0023.

Abstract:
The design of user interfaces (UIs) for assistive robot systems can be improved through the use of a set of design guidelines presented in this article. As an example, the article presents two different UI designs for an assistive manipulation robot system. We explore the design considerations from these two contrasting UIs. The first is referred to as the graphical user interface (GUI), which the user operates entirely through a touchscreen as a representation of the state of the art. The second is a type of novel UI referred to as the tangible user interface (TUI). The TUI makes use of devices in the real world, such as laser pointers and a projector–camera system that enables augmented reality. Each of these interfaces is designed to allow the system to be operated by an untrained user in an open environment such as a grocery store. Our goal is for these guidelines to aid researchers in the design of human–robot interaction for assistive robot systems, particularly when designing multiple interaction methods for direct comparison.
24

Bailey, Shannon K. T., Daphne E. Whitmer, Bradford L. Schroeder, and Valerie K. Sims. "Development of Gesture-based Commands for Natural User Interfaces." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 61, no. 1 (September 2017): 1466–67. http://dx.doi.org/10.1177/1541931213601851.

Abstract:
Human-computer interfaces are changing to meet the evolving needs of users and overcome limitations of previous generations of computer systems. The current state of computers consists largely of graphical user interfaces (GUI) that incorporate windows, icons, menus, and pointers (WIMPs) as visual representations of computer interactions controlled via user input on a mouse and keyboard. Although this model of interface has dominated human-computer interaction for decades, WIMPs require an extra step between the user’s intent and the computer action, imposing both limitations on the interaction and introducing cognitive demands (van Dam, 1997). Alternatively, natural user interfaces (NUI) employ input methods such as speech, touch, and gesture commands. With NUIs, users can interact directly with the computer without using an intermediary device (e.g., mouse, keyboard). Using the body as an input device may be more “natural” because it allows the user to apply existing knowledge of how to interact with the world (Roupé, Bosch-Sijtsema, & Johansson, 2014). To utilize the potential of natural interfaces, research must first determine what interactions can be considered natural. For the purpose of this paper, we focus on the naturalness of gesture-based interfaces. The purpose of this study was to determine how people perform natural gesture-based computer actions. To answer this question, we first narrowed down potential gestures that would be considered natural for an action. In a previous study, participants ( n=17) were asked how they would gesture to interact with a computer to complete a series of actions. After narrowing down the potential natural gestures by calculating the most frequently performed gestures for each action, we asked participants ( n=188) to rate the naturalness of the gestures in the current study. Participants each watched 26 videos of gestures (3-5 seconds each) and were asked how natural or arbitrary they interpreted each gesture for the series of computer commands (e.g., move object left, shrink object, select object, etc.). The gestures in these videos included the 17 gestures that were most often performed in the previous study in which participants were asked what gesture they would naturally use to complete the computer actions. Nine gestures were also included that were created arbitrarily to act as a comparison to the natural gestures. By analyzing the ratings on a continuum from “Completely Arbitrary” to “Completely Natural,” we found that the natural gestures people produced in the first study were also interpreted as the intended action by this separate sample of participants. All the gestures that were rated as either “Mostly Natural” or “Completely Natural” by participants corresponded to how the object manipulation would be performed physically. For example, the gesture video that depicts a fist closing was rated as “natural” by participants for the action of “selecting an object.” All of the gestures that were created arbitrarily were interpreted as “arbitrary” when they did not correspond to the physical action. Determining how people naturally gesture computer commands and how people interpret those gestures is useful because it can inform the development of NUIs and contributes to the literature on what makes gestures seem “natural.”
25

Mazmanian, R. O. "MAIN OPERATION MODES AND GRAPHICAL USER INTERFACE OF THE COMPLEX FOR EXPERI-MENTAL STUDIES OF MAGNETIC FIELD AND DIAGNOSTICS OF ELECTRICAL EQUIPMENT." Praci Institutu elektrodinamiki Nacionalanoi akademii nauk Ukraini 2024, no. 67 (April 30, 2024): 82–89. http://dx.doi.org/10.15407/publishing2024.67.082.

Abstract:
The hardware and software monitoring complex is designed to study the relationship between changes in the patterns of the magnetic field intensity distribution and power equipment failures. In experimental studies with predetermined technical faults artificially introduced into the object, the complex will provide useful and reliable results by registering changes in the magnetic field induction in the spatial, temporal and frequency domains. The choice and flexible adjustment of data collection tools by the researcher will ensure the use of the most appropriate method for measuring, converting and displaying information about the investigated faults. The article presents the main operating modes and graphical user interfaces of a personal computer and a mobile data acquisition system that jointly convert, store, analyze and display measurement information received from various magnetic sensors. The generalized functional specification of the complex software and the design of the graphical user interface (GUI), as parts of the real-time operating system (RTOS) of the mobile data acquisition system (DAQ) and the PC software, will also be used in the development of problem-oriented computer diagnostic systems for electric power equipment.
26

Laport, Francisco, Daniel Iglesia, Adriana Dapena, Paula M. Castro, and Francisco J. Vazquez-Araujo. "Proposals and Comparisons from One-Sensor EEG and EOG Human-Machine Interfaces." Sensors 21, no. 6 (March 22, 2021): 2220. http://dx.doi.org/10.3390/s21062220.

Abstract:
Human-Machine Interfaces (HMI) allow users to interact with different devices such as computers or home elements. A key part in HMI is the design of simple non-invasive interfaces to capture the signals associated with the user’s intentions. In this work, we have designed two different approaches based on Electroencephalography (EEG) and Electrooculography (EOG). For both cases, signal acquisition is performed using only one electrode, which makes placement more comfortable compared to multi-channel systems. We have also developed a Graphical User Interface (GUI) that presents objects to the user using two paradigms—one-by-one objects or rows-columns of objects. Both interfaces and paradigms have been compared for several users considering interactions with home elements.
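As an aside on the two presentation paradigms mentioned above, the practical difference is the number of highlighting steps needed per selection. The tiny Python sketch below is an illustration of that scaling argument only, not code from the study.

```python
def one_by_one_steps(rows, cols):
    """Worst-case highlights when every object is presented in turn."""
    return rows * cols

def row_column_steps(rows, cols):
    """Worst-case highlights when rows are scanned first, then columns."""
    return rows + cols

for rows, cols in [(2, 3), (3, 4), (6, 6)]:
    print(f"{rows}x{cols} grid: one-by-one={one_by_one_steps(rows, cols)}, "
          f"rows-columns={row_column_steps(rows, cols)}")
```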
27

Khan, Mumtaz, Shah Khusro, Iftikhar Alam, Shaukat Ali, and Inayat Khan. "Perspectives on the Design, Challenges, and Evaluation of Smart TV User Interfaces." Scientific Programming 2022 (February 23, 2022): 1–14. http://dx.doi.org/10.1155/2022/2775959.

Abstract:
The user interface (UI) is a primary source of interaction with a device. Since the introduction of graphical user interface (GUI), software engineers and designers have been trying to make user-friendly UIs for various computing devices, including smartphones, tablets, and computers. The modern smart TV also comes with built-in operating systems. However, little attention has been given to this prominent entertainment device, i.e., smart TV. The technological advancement and proliferation of smart TV enabled the manufacturer to provide rich functionalities and features; however, this richness resulted in more clutter and attention-demanding interfaces. Besides, smart TV is a lean-back supporting device having a diverse range of users. Therefore, smart TV’s usability and user experience (UX) are questionable due to diverse user interests and limited features of traditional remote controls. This study aimed to discuss and critically analyze the features and functionalities of the existing well-known smart TV UIs of various operating systems in the context of usability, cognition, and UX. Moreover, this study highlights the issues and challenges in the current smart TV UIs and recommends some research opportunities to cope with the smart TV UIs. This study further reports and validates some overlooked factors affecting smart TV UIs and UX. A subjective study and usability tests from diverse users are presented to validate these factors. The study concludes that a one-size-fits-all UI design is unsuitable for shared devices, i.e., smart TV. This study further recommends a personalized adaptive UI, which may enhance the learnability and UXs of the smart TV viewers.
28

Daunys, Einaras, and Sigita Turskienė. "Kompiuterių tinklo gedimų registravimo sistemos modeliavimas." Informacijos mokslai 50 (January 1, 2009): 262–66. http://dx.doi.org/10.15388/im.2009.0.3228.

Abstract:
The paper discusses the technological shortcomings of existing computer network fault registration systems and presents a fault registration system extended with portable devices, pointing out their particular features. The possibilities of developing and putting into practice fault registration software for computer networks on handheld computers are analysed. The paper reviews handheld computer operating systems and their characteristics and selects the optimal application development platform. The application consists of two parts: classes created for working with the database and graphical user interface classes. The graphical user interface is built on Java Swing components. The developed software product was tested on several operating systems against several criteria. The test results confirmed the mobility advantages of the proposed system. Modeling of the Computer Networks’ Failure Registration System. Einaras Daunys, Sigita Turskienė. Summary: This work examines the opportunities to develop and implement in practice computer network fault registration software products for handheld computers. The paper outlines the handheld computer operating systems and their characteristics, and selects the optimum application development platform. It consists of two parts: classes created for work with the database, and a graphical user interface class. The graphical user interface’s design is based on the Java Swing components. The software product is tested in several operating systems, according to several criteria. Test results confirmed the mobility benefits of the proposed system.
29

Kappel, G., and A. Min Tjoa. "State of art and open issues on graphical user interfaces for object-oriented database systems." Information and Software Technology 34, no. 11 (November 1992): 721–30. http://dx.doi.org/10.1016/0950-5849(92)90167-n.

30

Wulfman, C. E., M. Rua, C. D. Lane, E. H. Shortliffe, and L. M. Fagan. "Graphical Access to Medical Expert Systems: V. Integration with Continuous-Speech Recognition." Methods of Information in Medicine 32, no. 01 (1993): 33–46. http://dx.doi.org/10.1055/s-0038-1634892.

Abstract:
This paper describes three prototypes of computer-based clinical record-keeping tools that use a combination of window-based graphics and continuous speech in their user interfaces. Although many of today’s commercial speech-recognition products achieve high rates of accuracy for large grammars (vocabularies of words or collections of sentences and phrases), they can only “listen for” (and therefore recognize) a limited number of words or phrases at a time. When a speech application requires a grammar whose size exceeds a speech-recognition product’s limits, the application designer must partition the large grammar into several smaller ones and develop control mechanisms that permit users to select the grammar that contains the words or phrases they wish to utter. Furthermore, the user interfaces they design must provide feedback mechanisms that show users the scope of the selected grammars. The three prototypes described were designed to explore the use of window-based graphics as control and feedback mechanisms for continuous-speech recognition in medical applications. Our experiments indicate that window-based graphics can be effectively used to provide control and feedback for certain classes of speech applications, but they suggest that the techniques we describe will not suffice for applications whose grammars are very complex.
31

Goyzueta, Denilson V., Joseph Guevara M., Andrés Montoya A., Erasmo Sulla E., Yuri Lester S., Pari L., and Elvis Supo C. "Analysis of a User Interface Based on Multimodal Interaction to Control a Robotic Arm for EOD Applications." Electronics 11, no. 11 (May 25, 2022): 1690. http://dx.doi.org/10.3390/electronics11111690.

Abstract:
A global human–robot interface that meets the needs of Technical Explosive Ordnance Disposal Specialists (TEDAX) for the manipulation of a robotic arm is of utmost importance to make the task of handling explosives safer, more intuitive and also provide high usability and efficiency. This paper aims to evaluate the performance of a multimodal system for a robotic arm that is based on Natural User Interface (NUI) and Graphical User Interface (GUI). The mentioned interfaces are compared to determine the best configuration for the control of the robotic arm in Explosive Ordnance Disposal (EOD) applications and to improve the user experience of TEDAX agents. Tests were conducted with the support of police agents Explosive Ordnance Disposal Unit-Arequipa (UDEX-AQP), who evaluated the developed interfaces to find a more intuitive system that generates the least stress load to the operator, resulting that our proposed multimodal interface presents better results compared to traditional interfaces. The evaluation of the laboratory experiences was based on measuring the workload and usability of each interface evaluated.
32

Hicinbothom, James H., and Wayne W. Zachary. "A Tool for Automatically Generating Transcripts of Human-Computer Interaction." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 37, no. 15 (October 1993): 1042. http://dx.doi.org/10.1177/154193129303701514.

Abstract:
Recording transcripts of human-computer interaction can be a very time-consuming activity. This demonstration presents a new technology to automatically capture such transcripts in Open Systems environments (e.g., from graphical user interfaces running on the X Window System). This technology forms an infrastructure for performing distributed usability testing and human-computer interaction research, by providing integrated data capture, storage, browsing, retrieval, and export capabilities. It may lead to evaluation cost reductions throughout the software development life cycle.
33

Ratcliffe, Liam, and Sadasivan Puthusserypady. "Importance of Graphical User Interface in the design of P300 based Brain–Computer Interface systems." Computers in Biology and Medicine 117 (February 2020): 103599. http://dx.doi.org/10.1016/j.compbiomed.2019.103599.

34

Lane, C. D., Joan Walton, and E. H. Shortliffe. "Graphical Access to Medical Expert Systems: II. Design of an Interface for Physicians." Methods of Information in Medicine 25, no. 03 (July 1986): 143–50. http://dx.doi.org/10.1055/s-0038-1635464.

Abstract:
The ONCOCIN Interviewer program provides a graphical interface between physicians and an expert system that is designed to assist with therapy selection for patients receiving experimental cancer therapy. A principal goal has been to increase acceptance of advanced computer tools in a clinical setting. The interface has been developed for high-performance Lisp workstations and is tailored to the existing paper forms and practices of the outpatient clinic. To be flexible, the program makes use of a document formatting language to control a raster graphics display of medical forms, traditional paper versions of which have been used to track patient progress. The program utilizes a mouse input device coupled with a software-defined data entry approach that may be customized to the specific environment. The work described suggests ways in which high density graphics interfaces, with pointing devices rather than an emphasis on keyboards, may make decision support tools more useful to physicians and more acceptable to them.
35

DeSoi, J. F., and W. M. Lively. "Survey and analysis of nonprogramming approaches to design and development of graphical user interfaces." Information and Software Technology 33, no. 6 (July 1991): 413–24. http://dx.doi.org/10.1016/0950-5849(91)90077-o.

36

Hills, Bill, George Bruce, Mike Evans, and Alan Brewster. "Development of Cost-Effective Computer Management Information Systems for Small Shipyards." Journal of Ship Production 13, no. 02 (May 1, 1997): 125–37. http://dx.doi.org/10.5957/jsp.1997.13.2.125.

Abstract:
The introduction of new technology into small companies which have limited resources is a complex process often involving a change in culture which needs to be supported by staff development programmes. Collaborative schemes involving university staff and company personnel are an effective way of assisting the process of new technology introduction. This paper describes one such scheme, the Teaching Company Scheme, and how it was used to implement a fully integrated management information system into a small ship repair yard. Details are given of the approach utilized, based on business process analysis, including formal modelling of the key functional activities. This led to a detailed specification of the Information Technology Systems required to support integration across: estimating, accounts, time office and buying. The specification also identified hardware characteristics and requirements. The implementation of the system is described with particular reference being made to the procedures adopted to ensure user opinion was taken into account when designing graphical user interfaces.
37

Ruşanu, O. A. "LabVIEW Instruments for Creating Brain-Computer Interface Applications by Simulating Graphical Animations and Sending Text Messages to Virtual and Physical LEDs Based Display Systems Connected to Arduino Board." IOP Conference Series: Materials Science and Engineering 1262, no. 1 (October 1, 2022): 012037. http://dx.doi.org/10.1088/1757-899x/1262/1/012037.

Abstract:
The brain-computer interface (BCI) is a multidisciplinary research field aimed at helping people with neuromotor disabilities. A BCI system enables the control of mechatronic devices by using cognitive intentions translated into electroencephalographic signals. This paper presents the implementation of LabVIEW-based display systems that can be controlled by a brain-computer interface based on detecting the voluntary eye blinks used as commands. The interactive virtual or physical display systems are helpful thanks to running or simulating various graphical animations or transmitting different text messages in a user-customizable way. The proposed LabVIEW-based virtual display systems provide versatile functionalities such as: customizing the own visual animations and the movement of any text message by switching the direction (to the left or to the right) depending on the user’s choice. This paper presents five original virtual LEDs based display systems developed in LabVIEW graphical programming environment. The implemented LabVIEW applications included: an 8x8 LEDs matrix for simulating graphical animations, 2x16 LCD TEXT for showing text messages, and a 7-segments display for implementing chronometer functionality. Moreover, the LabVIEW virtual display systems were interfaced with the physical display systems (8x8 LEDs matrix controlled by MAX7219 driver and 2x16 LCD TEXT) connected to the Arduino Uno board.
38

Banerjee, Ishan, Bao Nguyen, Vahid Garousi, and Atif Memon. "Graphical user interface (GUI) testing: Systematic mapping and repository." Information and Software Technology 55, no. 10 (October 2013): 1679–94. http://dx.doi.org/10.1016/j.infsof.2013.03.004.

39

Di Trapani, Lyall, and Tamer Inanc. "NTGsim: A graphical user interface and a 3D simulator for nonlinear trajectory generation methodology." International Journal of Applied Mathematics and Computer Science 20, no. 2 (June 1, 2010): 305–16. http://dx.doi.org/10.2478/v10006-010-0023-5.

Abstract:
Nonlinear Trajectory Generation (NTG), developed by Mark Milam, is a software algorithm used to generate trajectories of constrained nonlinear systems in real-time. The goal of this paper is to present an approach to make NTG more user-friendly. To accomplish this, we have programmed a Graphical User Interface (GUI) in Java, using object oriented design, which wraps the NTG software and allows the user to quickly and efficiently alter the parameters of NTG. This new program, called NTGsim, eliminates the need to reprogram the NTG algorithm explicitly each time the user wishes to change a parameter.
40

Bakaev, Maxim, Sebastian Heil, and Martin Gaedke. "Reasonable Effectiveness of Features in Modeling Visual Perception of User Interfaces." Big Data and Cognitive Computing 7, no. 1 (February 8, 2023): 30. http://dx.doi.org/10.3390/bdcc7010030.

Abstract:
Training data for user behavior models that predict subjective dimensions of visual perception are often too scarce for deep learning methods to be applicable. With the typical datasets in HCI limited to thousands or even hundreds of records, feature-based approaches are still widely used in visual analysis of graphical user interfaces (UIs). In our paper, we benchmarked the predictive accuracy of the two types of neural network (NN) models, and explored the effects of the number of features, and the dataset volume. To this end, we used two datasets that comprised over 4000 webpage screenshots, assessed by 233 subjects per the subjective dimensions of Complexity, Aesthetics and Orderliness. With the experimental data, we constructed and trained 1908 models. The feature-based NNs demonstrated 16.2%-better mean squared error (MSE) than the convolutional NNs (a modified GoogLeNet architecture); however, the CNNs’ accuracy improved with the larger dataset volume, whereas the ANNs’ did not: therefore, provided that the effect of more data on the models’ error improvement is linear, the CNNs should become superior at dataset sizes over 3000 UIs. Unexpectedly, adding more features to the NN models caused the MSE to somehow increase by 1.23%: although the difference was not significant, this confirmed the importance of careful feature engineering.
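To make the "feature-based" alternative concrete, here is a purely hypothetical Python sketch of the kind of small regressor such studies train on hand-engineered UI features. The feature names, data and architecture are illustrative assumptions, not the models or datasets benchmarked in the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Hypothetical hand-engineered UI features: element count, distinct colours,
# text density, symmetry score (all normalized to [0, 1] as stand-ins).
X = rng.uniform(0, 1, size=(1000, 4))
# Hypothetical subjective "visual complexity" ratings on a 1-7 scale.
y = 1 + 6 * (0.5 * X[:, 0] + 0.3 * X[:, 2] + 0.2 * rng.uniform(0, 1, 1000))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print("MSE:", mean_squared_error(y_test, model.predict(X_test)))
```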
41

KENDER, JOHN R. "VISUAL INTERFACES TO COMPUTERS: A SYSTEMS-ORIENTED FIRST COURSE IN RELIABLE CONTROL VIA IMAGERY ("VISUAL INTERFACES")." International Journal of Pattern Recognition and Artificial Intelligence 15, no. 05 (August 2001): 869–84. http://dx.doi.org/10.1142/s0218001401001209.

Abstract:
We present the rationale, description, and critique of a first course in image computing that is not a traditional computer vision principles-and-tools course. "Visual Interfaces to Computers" is instead complementary to standard Computer Vision, User Interface, and Graphics courses; in fact, VI:CV::UI:G. It is organized by case studies of working visual systems that use camera input for data or control information in service of higher user goals, such as GUI control, user identification, or automobile steering. Many CV scientific principles and engineering tools are therefore taught, as well as those of psychophysics, AI, and EE, but taught selectively and always within the context of total system design. Course content is derived from conference and journal articles and Ph.D. theses, augmented with video tapes and real-time web site demos. Students do two homework assignments, one to design a "visual combination lock", and one to parse an image into English. They also do a final paper or project of their own choosing, often in teams of two, and often with surprisingly deep results. The course is assisted by a custom C-based tool kit, "XILite", a user-friendly (and comparatively bug-free) modification of Sun's X-windows Image Library for our lab's camera-equipped Sun workstations. The course has been offered twice to a wide audience with good reviews.
42

Takagi, Noboru, Shingo Morii, and Tatsuo Motoyoshi. "Prototype Development of Image Editing Systems Available for Visually Impaired People and Consideration of Their User Interfaces." Journal of Advanced Computational Intelligence and Intelligent Informatics 20, no. 6 (November 20, 2016): 961–67. http://dx.doi.org/10.20965/jaciii.2016.p0961.

Abstract:
For example, when sighted scholars study mathematics and physics etcetera, they need to access visual information, e.g., graphs and pictures. Furthermore, sighted people can express their own ideas and opinions visually. On the other hand, blind people can access visual information if it is expressed tactilely, but find it difficult to express their ideas and opinions visually. We are therefore developing a computer-aided system enabling blind people to draw their own figures on their own. This system consists of a matrix braille display to edit computer line drawings. The matrix braille display enables the blind to feel a tactile graphic during editing. After explaining two input methods for elementary plane shapes, we discuss two methods for scrolling tactile graphics to make the matrix braille display large enough to show tactile graphics in sufficient detail. We then show experimental results for using input and scrolling, and conclude with discussion on the usability of input and scrolling.
43

Park, Sankyu, Key-Sun Choi, and K. H. (Kane) Kim. "A Framework for Multi-Agent Systems with Multi-Modal User Interfaces in Distributed Computing Environments." International Journal of Software Engineering and Knowledge Engineering 07, no. 03 (September 1997): 351–69. http://dx.doi.org/10.1142/s0218194097000217.

Abstract:
In current multi-agent systems, the user is typically interacting with a single agent at a time through relatively inflexible and modestly intelligent interfaces. As a consequence, these systems force the users to submit simplistic requests only and suffer from problems such as the low-level nature of the system services offered to users, the weak reusability of agents, and the weak extensibility of the systems. In this paper, a framework for multi-agent systems called the open agent architecture (OAA) which reduces such problems, is discussed. The OAA is designed to handle complex requests that involve multiple agents. In some cases of complex requests from users, the components of the requests do not directly correspond to the capabilities of various application agents, and therefore, the system is required to translate the user's model of the task into the system's model before apportioning subtasks to the agents. To maximize users' efficiency in generating this type of complex requests, the OAA offers an intelligent multi-modal user interface agent which supports a natural language interface with a mix of spoken language, handwriting, and gesture. The effectiveness of the OAA environment including the intelligent distributed multi-modal interface has been observed in our development of several practical multi-agent systems.
APA, Harvard, Vancouver, ISO, and other styles
44

Liccardo, Annalisa, and Francesco Bonavolontà. "VR, AR, and 3-D User Interfaces for Measurement and Control." Future Internet 15, no. 1 (December 29, 2022): 18. http://dx.doi.org/10.3390/fi15010018.

Full text
Abstract:
The topics of virtual, mixed, and extended reality have now become key areas in various fields of scientific and industrial application, and the interest in them is made tangible by the numerous papers available in the scientific literature. In this regard, the Special Issue “VR, AR, and 3-D User Interfaces for Measurement and Control” received a fair number of varied contributions that analyzed different aspects of implementing virtual, mixed, and extended reality systems and approaches in the real world. They range from investigating the requirements of new potential technologies to verifying the effectiveness and benefits of their use, and from analyzing the difficulties of interacting with graphical interfaces to performing complex and risky tasks (such as surgical operations) using mixed reality viewers. All contributions were of a high standard and mainly highlight that measurement and control applications based on these new models of interaction with reality are by now increasingly ready to leave laboratory spaces and become objects and features of everyday life. The significant benefits of this technology will radically change the way we live and interact with information and the reality around us, and it will surely be worthy of further exploration, perhaps even in a new Special Issue of Future Internet.
APA, Harvard, Vancouver, ISO, and other styles
45

Ekinci, Serdar, Aysen Demiroren, and Hatice Lale Zeynelgil. "PowSysGUI: A new educational software package for power system stability studies using MATLAB/Simulink." International Journal of Electrical Engineering & Education 54, no. 4 (January 5, 2017): 283–98. http://dx.doi.org/10.1177/0020720916686800.

Full text
Abstract:
Graphical user interfaces have been progressively used in classrooms to provide users of computer simulations with a friendly, visual approach to specifying all input parameters with enhanced configuration flexibility. In this paper, an educational software package called PowSysGUI (Power System GUI), which runs on MATLAB and uses graphical user interfaces, has been developed for the analysis and simulation of small to large electric power systems. PowSysGUI is open-source software, and anyone can examine the inner structure of the program to figure out how to code a power engineering problem. It is designed as a simulation tool for researchers and educators, as it is simple to use and modify. PowSysGUI has algorithms for solving power flow, small-signal stability analysis, and time-domain simulation. In the case studies, the IEEE 16-machine, 68-bus test system is used to show the features of the developed software tool. Moreover, classroom experience has shown that the developed software package helps consolidate a better understanding of power system stability phenomena.
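As a taste of the kind of analysis such a tool automates, the short Python sketch below performs a small-signal stability check on a made-up linearized state matrix by inspecting its eigenvalues and damping ratios; it is our illustration, not PowSysGUI code, and the matrix is not the IEEE 68-bus model.

    import numpy as np

    # Hypothetical linearized model d(dx)/dt = A dx (not the IEEE 68-bus system).
    A = np.array([[-0.5, 10.0],
                  [-8.0, -0.3]])

    eigenvalues = np.linalg.eigvals(A)
    for lam in eigenvalues:
        sigma, omega = lam.real, lam.imag
        zeta = -sigma / np.hypot(sigma, omega)   # damping ratio of the mode
        print(f"mode {lam:.3f}: damping ratio {zeta:.3f}")

    print("small-signal stable" if np.all(eigenvalues.real < 0) else "unstable")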
APA, Harvard, Vancouver, ISO, and other styles
46

Li, Shijia, Luoyun Zhou, Mingqi Fan, and Yuncheng Xiong. "A comprehensive analysis of gesture recognition systems: Advancements, challenges, and future directions." Applied and Computational Engineering 43, no. 1 (February 26, 2024): 68–75. http://dx.doi.org/10.54254/2755-2721/43/20230810.

Full text
Abstract:
Gesture recognition is emerging as a potent avenue for human-computer interaction, harnessing mathematical algorithms to interpret gestures. It promises to surpass text-based or graphical interfaces by enabling touchless device control through simple gestures. Our review of seven papers spanning various fields and methods underscores its diverse applications. Challenges persist, such as distinguishing genuine user intent from accidental actions amid environmental interference, and creating a universal EMG pattern recognition model demands intricate per-user pre-training. Sensor-based gesture recognition must also grapple with real-world dynamics, necessitating adaptable models that discern intentional from unintentional actions. Addressing these gaps is key: adaptable models and personalized approaches can enhance robustness and accuracy across applications, overcoming the remaining challenges in gesture-interaction technology.
APA, Harvard, Vancouver, ISO, and other styles
47

Sellis, Diamantis, Dimitrios Vlachakis, and Metaxia Vlassi. "Gromita: A Fully Integrated Graphical User Interface to Gromacs 4." Bioinformatics and Biology Insights 3 (January 2009): BBI.S3207. http://dx.doi.org/10.4137/bbi.s3207.

Full text
Abstract:
Gromita is a fully integrated and efficient graphical user interface (GUI) to the recently updated molecular dynamics suite Gromacs, version 4. Gromita is a cross-platform, perl/tcl-tk based, interactive front end designed to break the command line barrier and introduce a new user-friendly environment to run molecular dynamics simulations through Gromacs. Our GUI features a novel workflow interface that guides the user through each logical step of the molecular dynamics setup process, making it accessible to both advanced and novice users. This tool provides a seamless interface to the Gromacs package, while providing enhanced functionality by speeding up and simplifying the task of setting up molecular dynamics simulations of biological systems. Gromita can be freely downloaded from http://bio.demokritos.gr/gromita/ .
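For readers unfamiliar with Gromacs, the sketch below shows the sort of command-line pipeline a front end such as Gromita hides behind its workflow interface; the tool names and flags approximate the Gromacs 4 era and may differ between versions, and the file names are placeholders.

    import subprocess

    # Typical Gromacs 4-era setup steps (approximate flags; file names are placeholders).
    steps = [
        ["pdb2gmx", "-f", "protein.pdb", "-o", "protein.gro", "-p", "topol.top"],
        ["editconf", "-f", "protein.gro", "-o", "boxed.gro", "-c", "-d", "1.0"],
        ["genbox", "-cp", "boxed.gro", "-cs", "spc216.gro", "-p", "topol.top", "-o", "solvated.gro"],
        ["grompp", "-f", "md.mdp", "-c", "solvated.gro", "-p", "topol.top", "-o", "run.tpr"],
        ["mdrun", "-deffnm", "run"],
    ]

    for command in steps:
        print("running:", " ".join(command))
        subprocess.run(command, check=True)   # a GUI would surface progress and errors instead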
APA, Harvard, Vancouver, ISO, and other styles
48

Rezeika, Aya, Mihaly Benda, Piotr Stawicki, Felix Gembler, Abdul Saboor, and Ivan Volosyak. "Brain–Computer Interface Spellers: A Review." Brain Sciences 8, no. 4 (March 30, 2018): 57. http://dx.doi.org/10.3390/brainsci8040057.

Full text
Abstract:
A Brain–Computer Interface (BCI) provides a novel non-muscular communication method via brain signals. A BCI-speller can be considered as one of the first published BCI applications and has opened the gate for many advances in the field. Although many BCI-spellers have been developed during the last few decades, to our knowledge, no reviews have described the different spellers proposed and studied in this vital field. The presented speller systems are categorized according to major BCI paradigms: P300, steady-state visual evoked potential (SSVEP), and motor imagery (MI). Different BCI paradigms require specific electroencephalogram (EEG) signal features and lead to the development of appropriate Graphical User Interfaces (GUIs). The purpose of this review is to consolidate the most successful BCI-spellers published since 2010, while mentioning some other older systems which were built explicitly for spelling purposes. We aim to assist researchers and concerned individuals in the field by illustrating the highlights of different spellers and presenting them in one review. It is almost impossible to carry out an objective comparison between different spellers, as each has its variables, parameters, and conditions. However, the gathered information and the provided taxonomy about different BCI-spellers can be helpful, as it could identify suitable systems for first-hand users, as well as opportunities of development and learning from previous studies for BCI researchers.
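To illustrate why each paradigm calls for its own signal features, here is a generic and deliberately simplified Python sketch of the core of an SSVEP speller: it picks the attended on-screen key by comparing spectral power at each key's flicker frequency. The sampling rate and frequencies are assumptions, and the code is not taken from any of the reviewed systems.

    import numpy as np

    SAMPLING_RATE = 250.0                    # Hz, assumed
    TARGET_FREQS = [8.0, 10.0, 12.0, 15.0]   # one flicker frequency per on-screen key

    def detect_target(eeg):
        """Return the index of the flicker frequency with the most spectral power."""
        spectrum = np.abs(np.fft.rfft(eeg)) ** 2
        freqs = np.fft.rfftfreq(len(eeg), d=1.0 / SAMPLING_RATE)
        powers = [spectrum[np.argmin(np.abs(freqs - f))] for f in TARGET_FREQS]
        return int(np.argmax(powers))

    # Synthetic 2-second trial in which the user attends the 12 Hz key.
    t = np.arange(0, 2.0, 1.0 / SAMPLING_RATE)
    trial = np.sin(2 * np.pi * 12.0 * t) + 0.5 * np.random.randn(t.size)
    print("selected key index:", detect_target(trial))   # expected: 2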
APA, Harvard, Vancouver, ISO, and other styles
49

Jumaa, Bassim Abdulbaki, and Maysoon Allawi Saleem. "Efficient and Effective Computer-based Library Management System." لارك 1, no. 6 (May 30, 2019): 340–61. http://dx.doi.org/10.31185/lark.vol1.iss6.937.

Full text
Abstract:
EECLMS is an Efficient and Effective Computer-based Library Management System that was developed for, and implemented at, a dentistry college to meet the users' unique requirements and the operational and functional specification for library management, offering new and special features that cannot be found in other systems. The present status of requirements is outlined as the basis for describing the problem, the traditional library management system, that EECLMS is intended to address. A database was designed to fit these requirements using Microsoft Access, and the Visual Basic programming language was chosen as the tool for developing the algorithms that give full control of the designed database. The result is a complete, state-of-the-art, user-friendly library management system with many unique and new features, which can be summarized as (1) flexibility, (2) reliability, (3) multi-language support, (4) strong security, and (5) attractive graphical user interfaces, among others.
APA, Harvard, Vancouver, ISO, and other styles
50

Vasiljevas, Mindaugas, Robertas Damaševičius, and Rytis Maskeliūnas. "A Human-Adaptive Model for User Performance and Fatigue Evaluation during Gaze-Tracking Tasks." Electronics 12, no. 5 (February 25, 2023): 1130. http://dx.doi.org/10.3390/electronics12051130.

Full text
Abstract:
Eye gaze interfaces are an emerging technology that allows users to control graphical user interfaces (GUIs) simply by looking at them. However, using gaze-controlled GUIs can be a demanding task, resulting in high cognitive and physical load and fatigue. To address these challenges, we propose the concept and model of an adaptive human-assistive human–computer interface (HA-HCI) based on biofeedback. This model enables effective and sustainable use of computer GUIs controlled by physiological signals such as gaze data. The proposed model allows for analytical human performance monitoring and evaluation during human–computer interaction processes based on the damped harmonic oscillator (DHO) model. To test the validity of this model, the authors acquired gaze-tracking data from 12 healthy volunteers playing a gaze-controlled computer game and analyzed it using odd–even statistical analysis. The experimental findings show that the proposed model effectively describes and explains gaze-tracking performance dynamics, including subject variability in performance of GUI control tasks, long-term fatigue, and training effects, as well as short-term recovery of user performance during gaze-tracking-based control tasks. We also analyze the existing HCI and human performance models and develop an extension to the existing physiological models that allows for the development of adaptive user-performance-aware interfaces. The proposed HA-HCI model describes the interaction between a human and a physiological computing system (PCS) from the user performance perspective, incorporating a performance evaluation procedure that interacts with the standard UI components of the PCS and describes how the system should react to loss of productivity (performance). We further demonstrate the applicability of the HA-HCI model by designing an eye-controlled game. We also develop an analytical user performance model based on damped harmonic oscillation that is suitable for describing variability in performance of a PC game based on gaze tracking. The model's validity is tested using odd–even analysis, which demonstrates a strong positive correlation. Individual characteristics of users established by the damped oscillation model can be used to categorize players according to their playing skills and abilities. The experimental findings suggest that players can be categorized as learners, whose damping factor is negative, and fatiguers, whose damping factor is positive. We find a strong positive correlation between amplitude and damping factor, indicating that good starters usually have higher fatigue rates, but slow starters have less fatigue and may even improve their performance during play. The proposed HA-HCI model and analytical user performance models provide a framework for developing an adaptive human-oriented HCI that enables monitoring, analysis, and increased performance of users working with physiological-computing-based user interfaces. The proposed models have potential applications in improving the usability of future human-assistive gaze-controlled interface systems.
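As a rough sketch of the kind of analysis described above, the Python snippet below fits a generic damped harmonic oscillation to a per-trial performance series and classifies the user by the sign of the fitted damping factor, mirroring the learner/fatiguer distinction in the abstract; the exact functional form and sign conventions used in the paper may differ, and the data here are synthetic.

    import numpy as np
    from scipy.optimize import curve_fit

    def damped(t, amplitude, damping, omega, phase, baseline):
        # Generic damped harmonic oscillation around a performance baseline.
        return amplitude * np.exp(-damping * t) * np.cos(omega * t + phase) + baseline

    trials = np.arange(60, dtype=float)                 # trial index used as "time"
    performance = (                                     # synthetic gaze-task scores
        12 * np.exp(-0.03 * trials) * np.cos(0.4 * trials) + 70
        + np.random.normal(0, 1.0, trials.size)
    )

    initial_guess = [10.0, 0.01, 0.4, 0.0, performance.mean()]
    params, _ = curve_fit(damped, trials, performance, p0=initial_guess, maxfev=10000)
    damping_factor = params[1]
    print("fitted damping factor:", round(damping_factor, 4))
    print("fatiguer (performance decays)" if damping_factor > 0 else "learner (performance improves)")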
APA, Harvard, Vancouver, ISO, and other styles