
Theses / dissertations on the topic "Touch-Based interaction"


See the 17 best works (theses / dissertations) for research on the topic "Touch-Based interaction".


1

Gauci, Francesca. "Game Accessibility for Children with Cognitive Disabilities : Comparing Gesture-based and Touch Interaction". Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-107102.

Full text of the source
Abstract:
The interest in video games has grown substantially over the years, transforming from a means of recreation into one of the most dominant fields in entertainment. However, a significant number of individuals face obstacles when playing games due to disabilities. While efforts towards more accessible game experiences have increased, cognitive disabilities have often been neglected, partly because games targeting cognitive disabilities are among the most difficult to design, since cognitive accessibility barriers can be present in any part of the game. In recent years, research in human-computer interaction has explored gesture-based technologies and interaction, especially in the context of games and virtual reality. Research on gesture-based interaction has concentrated on providing a new form of interaction for people with cognitive disabilities. Several studies have shown that gesture interaction may provide several benefits to individuals with cognitive disabilities, including increased cognitive, motor, and social aptitudes. This study aims to explore the impact of gesture-based interaction on the accessibility of video games for children with cognitive disabilities. The accessibility of gesture interaction is evaluated against touch interaction as the baseline, a comparison founded on previous studies that have argued for the high level of accessibility and universal availability of touchscreen devices. As such, a game prototype was custom designed and developed to support both types of interaction, gestures and touch. The game was presented to several users during an interaction study, where every user played the game with both methods of interaction. The game and the outcome of the user interaction study were further discussed with field experts. This study contributes towards a better understanding of how gesture interaction impacts the accessibility of games for children with cognitive disabilities.
This study concludes that there are certain drawbacks with gesture-based games, especially with regard to precision, accuracy, and ergonomics. As a result, the majority of users preferred the touch interaction method. Nevertheless, some users also considered the gesture game to be a fun experience. Further, discussion with experts produced several points of improvement to make gesture interaction more accessible. The findings of the study are a departure point for a deeper analysis of gestures and how they can be integrated into the gaming world.
2

Fard, Hossein Ghodosi, and Bie Chuangjun. "Braille-based Text Input for Multi-touch Screen Mobile Phones". Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-5555.

Full text of the source
Abstract:
ABSTRACT: “The real problem of blindness is not the loss of eyesight. The real problem is the misunderstanding and lack of information that exist. If a blind person has proper training and opportunity, blindness can be reduced to a physical nuisance.” - National Federation of the Blind (NFB). The multi-touch screen is a relatively new and revolutionary technology in the mobile phone industry. Being mostly software driven makes these phones highly customizable for all sorts of users, including blind and visually impaired people. In this research, we present new interface layouts for multi-touch screen mobile phones that enable blind people to enter text in the form of Braille cells. Braille is the only way for these people to directly read and write without help from any extra assistive instruments. It will be more convenient and interesting for them to be provided with facilities to interact with new technologies using their language, Braille. We started with a literature review of existing eyes-free text entry methods and text input devices, to find out their strengths and weaknesses. At this stage we were aiming at identifying the difficulties that blind people face when working with current text entry methods. Then we conducted questionnaire surveys as the quantitative method and interviews as the qualitative method of our user study, to become familiar with users’ needs and expectations. At the same time we studied the Braille language in detail and examined currently available multi-touch mobile phone feedback. At the design stage, we first investigated different possible ways of entering a Braille “cell” on a multi-touch screen, considering the available input techniques and the structure of Braille. Then, we developed six different alternatives for entering Braille cells on the device; we laid out a mockup for each and documented them using the Gestural Modules Document and Swim Lanes techniques.
Next, we prototyped our designs and evaluated them with real users using the Pluralistic Walkthrough method. We then refined our models and selected the two best, as the main results of this project, based on good gestural interface principles and users’ feedback. Finally, we discussed the usability of our selected methods in comparison with the current method visually impaired people use to enter text on the most popular multi-touch screen mobile phone, the iPhone. Our selected designs reveal possibilities to improve the efficiency and accuracy of the existing text entry methods on multi-touch screen mobile phones for Braille-literate people. They can also be used as guidelines for creating other multi-touch input devices for entering Braille on an apparatus such as a computer.
3

Hardy, Robert. "Exploring touch-based phone interaction with tagged physical objects : exemplified using near-field communication". Thesis, Lancaster University, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.661129.

Full text of the source
Abstract:
Today, mobile phones provide a versatile platform that is able to support a multitude of new applications. However, an inherent obstacle with mobile phones is their limited output capabilities. This poses constraints on the user's ability to find and interact with the installed software applications, due to factors such as the limited screen size of the device. The goal of the work herein is to extend the phone's user interface to the physical environment. The user's interactions with the physical environment are through phone touches; thus, explicit and direct. Moreover, in addition to the environment providing spatial awareness visually, the phones also lend their capabilities (e.g. input modalities, display, storage, etc.) to the interaction. The approach taken in this thesis is to support touch-based object interaction through the use of tagging technologies. This involves augmenting the physical environment with devices that can be sensed by the phone. If the data stored on the device represents the physical object, the phone can effectively sense this object. Furthermore, the advantage of tagging technologies is the freedom provided to create a variety of different user interfaces. Currently, the majority of implemented solutions focus on single-tag interaction paradigms whereby the phone reads only one tag to accomplish a goal. In order to explore the potential of touch-based mobile interaction further, multiple touches (using multiple tags) could be concatenated to achieve expressive interactions. The contribution of this project is the further analysis of touch-based object interactions and the creation of guidelines for the development of such systems, as well as the establishment of developer support for future development.
4

Yan, Zhixin. "A Unified Multi-touch Gesture based Approach for Efficient Short-, Medium-, and Long-Distance Travel in VR". Digital WPI, 2016. https://digitalcommons.wpi.edu/etd-theses/392.

Full text of the source
Abstract:
As one of the main topics in Virtual Reality (VR), travel interfaces have been studied by many researchers in the past decades. However, it is still a challenging topic today. One of the design problems is the tradeoff between speed and precision. Some tasks (e.g., driving) require a user to travel long distances with less concern about precise movement, while other tasks (e.g., walking) require users to approach nearby objects in a more precise way, and to care less about the speed. Between these two extremes there are scenarios when both speed and precision become equally important. In the real world, we often seamlessly balance these requirements. However, most VR systems only support a single travel mode, which may be good for one range of travel, but not others. We propose and evaluate a new VR travel framework which supports three separate multi-touch travel techniques for different distance ranges, that all use the same input device with a unifying metaphor of the user’s fingers becoming their legs. We investigate the usability and user acceptance for the fingers-as-legs metaphor, as well as the efficiency, naturalness, and impact on spatial awareness such an interface has.
5

Cavez, Vincent. "Designing Pen-based Interactions for Productivity and Creativity". Electronic Thesis or Diss., université Paris-Saclay, 2025. http://www.theses.fr/2025UPASG013.

Full text of the source
Abstract:
Designed with the mouse and keyboard in mind, productivity tools and creativity support tools are powerful on desktop computers, but their structure becomes an obstacle when brought to interactive surfaces supporting pen and touch input. Indeed, the opportunities provided by the pen for precision and expressivity have been demonstrated in the HCI literature, but productivity and creativity tools require a careful redesign leveraging these unique affordances to benefit from the intuitiveness they offer while keeping the advantages of structure. This delicate articulation between pen and structure has been overlooked in the literature. My thesis work focuses on this articulation with two use cases to answer the broad research question: “How to design pen-based interactions for productivity and creativity on interactive surfaces?” I argue that productivity depends on efficiency while creativity depends on both efficiency and flexibility, and explore interactions that promote these two dimensions. My first project, TableInk, explores a set of pen-based interaction techniques designed for spreadsheet programs and contributes guidelines to promote efficiency on interactive surfaces. I first conduct an analysis of commercial spreadsheet programs and an elicitation study to understand what users can do and what they would like to do with spreadsheets on interactive surfaces. Informed by these, I design interaction techniques that leverage the opportunities of the pen to mitigate friction and enable more operations by direct manipulation on and through the grid. I prototype these interaction techniques and conduct a qualitative study with information workers who performed a variety of spreadsheet operations on their own data.
The observations show that using the pen to bypass the structure is a promising means to promote efficiency with a productivity tool. My second project, EuterPen, explores a set of pen-based interaction techniques designed for music notation programs and contributes guidelines to promote both efficiency and flexibility on interactive surfaces. I first conduct a series of nine interviews with professional composers in order to take a step back and understand both their thought process and their work process with their current desktop tools. Building on this dual analysis, I derive guidelines for the design of features which have the potential to promote both efficiency with frequent or complex operations and flexibility in regard to the exploration of ideas. Then, I act on these guidelines by engaging in an iterative design process for interaction techniques that leverage the opportunities of the pen: two prototyping phases, a participatory design workshop, and a final series of interviews with eight professional composers. The observations show that on top of using the pen to leverage the structure for efficiency, using its properties to temporarily break the structure is a promising means to promote flexibility with a creativity support tool. I conclude this manuscript by discussing several ways to interact with structure, presenting a set of guidelines to support the design of pen-based interactions for productivity and creativity tools, and elaborating on the future applications this thesis opens.
6

Tikkanen, Marjo. "What makes the flow - Understanding the immateriality of screen-based interactions". Thesis, Malmö högskola, Fakulteten för kultur och samhälle (KS), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-21577.

Full text of the source
Abstract:
Building upon previous research done on interactivity attributes describing the aesthetic quality of interactions, this paper aims to explore the sense of flow in screen-based interactions, narrowing down to the finer details that make up the personality of the interactive experience. The goal is to provide deeper knowledge of the concrete, expressive qualities of screen-based interaction, being “the immaterial material” moulded in the design process. The relevant information for practicing designers is the awareness of what emotions certain interactions elicit and how interactions can be used to convey brand personality. The process followed a research-through-design methodology, as the aim was to explore the defined design space rather than answer a specific problem. Two major user studies provided insights and refocus in the process along the way to the final result: a set of guidelines and a prototype embodying them. The guidelines provide details on designing flow experiences on a mobile screen, and are to serve as inspiration and reference for future work and design practice. The prototype, called ‘The Embodiment’, serves as an illustration of the guidelines, as mere words are not able to fully describe the dynamic quality.
7

Kuhlman, Lane M. "Gesture Mapping for Interaction Design: An Investigative Process for Developing Interactive Gesture Libraries". The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1244003264.

Full text of the source
8

Vargas Gonzalez, Andres. "SketChart: A Pen-Based Tool for Chart Generation and Interaction". Master's thesis, University of Central Florida, 2014. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/6375.

Full text of the source
Abstract:
It has been shown that representing data with the right visualization increases the understanding of qualitative and quantitative information encoded in documents. However, current tools for generating such visualizations involve the use of traditional WIMP techniques, which can make free interaction and direct manipulation of the content harder. In this thesis, we present a pen-based prototype for data visualization using 10 different types of bar-based charts. The prototype lets users sketch a chart and interact with the information once the drawing is identified. The prototype's user interface consists of an area to sketch and touch-based elements that are displayed depending on the context and nature of the outline. Brainstorming and live presentations can benefit from the prototype due to the ability to visualize and manipulate data in real time. We also perform a short, informal user study to measure the effectiveness of the tool in recognizing sketches and users' acceptance while interacting with the system. Results show SketChart's strengths, weaknesses, and areas for improvement.
M.S., Computer Science, Department of Electrical Engineering and Computer Science, College of Engineering and Computer Science
9

Junior, José Augusto Costa Martins. "Interação usuário-TV digital interativa: contribuições via controle remoto". Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-28062011-115007/.

Full text of the source
Abstract:
The traditional Brazilian TV system is being replaced by an interactive digital platform. The Ginga middleware, responsible for allowing the presentation of interactive programs, is able to support user interactions with TV applications by means of key presses on a remote control. Since traditional remotes have usability limitations, this work aimed at investigating the application of ubiquitous computing concepts, such as natural and multimodal interfaces, to provide alternatives for the interaction between users and TV applications. Considering the availability of mobile devices such as smartphones, prototype interfaces based on touch screens, as well as gesture-based, accelerometer-based, and voice-based interfaces, have been designed and implemented to allow the interaction usually provided by remote controls. The implementation of those interfaces was supported by the previous development of components providing multimodal interaction and peer-to-peer communication in the context of the Brazilian interactive digital TV system middleware.
10

Weigel, Martin [Verfasser], and Jürgen [Akademischer Betreuer] Steimle. "Interactive on-skin devices for expressive touch-based interactions / Martin Weigel ; Betreuer: Jürgen Steimle". Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2017. http://d-nb.info/1136607838/34.

Full text of the source
11

Jung, Annkatrin. "The Breathing Garment : Exploring Breathing-Based Interactions through Deep Touch Pressure". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-284203.

Full text of the source
Abstract:
Deep touch pressure is used to treat sensory processing difficulties by applying a firm touch to the body to stimulate the nervous system and soothe anxiety. I conducted a long-term exploration of deep touch pressure from a first-person perspective, using shape-changing pneumatic actuators, breathing sensors, and ECG sensors to investigate whether deep touch pressure can guide users to engage in semi-autonomous interactions with their breathing and encourage greater introspection and body awareness. Based on an initial collaborative material exploration, I designed the breathing garment, a wearable vest used to guide the wearer through deep breathing techniques. The breathing garment presents a new use case of deep touch pressure as a modality for haptic breathing feedback, which showed potential in supporting interoceptive awareness and relaxation. It allowed me to engage in a dialogue with my body, serving as a constant reminder to turn inwards and attend to my somatic experience. By pushing my torso forward, the actuators were able to engage my entire body while responding to my breath, creating a sense of intimacy, of being safe and taken care of. This work addresses a gap in HCI research around deep touch pressure and biosensing technology concerning the subjective experience of their emotional and cognitive impact. The long-term, felt engagement with different breathing techniques opened up a rich design space around pressure-based actuation in the context of breathing.
This rendered a number of experiential qualities and affordances of the shape-changing pneumatic actuators, such as: applying subtle, slowly changing pressure to draw attention to specific body parts, but also disrupting the habitual way of breathing with asynchronous and asymmetric actuation patterns; taking on a leading or following role in the interaction, at times both simultaneously; and acting as a comforting companion or as a communication channel between two people as well as between one person and their soma.
12

Al-Showarah, Suleyman. "Effects of age on smartphone and tablet usability, based on eye-movement tracking and touch-gesture interactions". Thesis, University of Buckingham, 2015. http://bear.buckingham.ac.uk/29/.

Full text of the source
Abstract:
The aim of this thesis is to provide insight into the effects of user age on interactions with smartphone and tablet applications. The study considered two interaction methods to investigate the effects of user age on the usability of smartphones and tablets of different sizes: 1) eye movements/browsing and 2) touch-gesture interactions. In the eye-movement studies, an eye tracker was used to trace and record users’ eye movements, which were later analysed to understand the effects of age and screen size on browsing effectiveness. In the gesture-interaction studies, an application developed for smartphones traced and recorded users’ touch-gesture data, which were later analysed to investigate the effects of age and screen size on touch-gesture performance. The motivation for our studies is summarised as follows: 1) the increasing number of elderly people in our society, 2) the widespread use of smartphone technology across the world, 3) understanding the difficulties elderly people face when interacting with smartphone technology, and 4) providing the existing body of literature with new understanding of the effects of ageing on smartphone usability. The work of this thesis includes five research projects conducted in two stages. Stage One included two studies that used eye-movement analysis to investigate the effects of user age and the influence of screen size on browsing smartphone interfaces. The first study examined the scan-path dissimilarity in browsing smartphone applications for elderly users (60+) and younger users (20-39). The results revealed that the scan-path dissimilarity in browsing smartphone applications was higher for elderly users (i.e., age-driven) than for younger users. The results also revealed that browsing smartphone applications was stimulus-driven rather than screen-size-driven.
The second study was conducted to understand the difficulties of information processing when browsing smartphone applications for elderly (60+), middle-aged (40-59), and younger (20-39) users. The evaluation was performed using three different screen sizes of smartphone and tablet devices. The results revealed that processing both local and global information on smartphone/tablet interfaces was more difficult for elderly users than for the other age groups. Across all age groups, browsing on the smaller smartphone proved to be more difficult compared to the larger screen sizes. Stage Two included three studies to investigate the difficulties elderly users face in interacting with gesture-based applications compared to younger users, and to evaluate the possibility of classifying a user's age group based on on-screen gestures. The first study investigated the effects of user age and screen size on performing swipe gestures intuitively in four swiping directions: down, left, right, and up. The results revealed that the performance of gesture swiping was influenced by user age and screen size, as well as by the swiping orientation. The purpose of the second study was to investigate the effects of user age, screen size, and gesture complexity on performing accurate gestures on smartphones and tablets using gesture-based features. The results revealed that the elderly were less accurate, less efficient, slower, and exerted more pressure on the touch screen when performing gestures than the younger users. On a small smartphone, all users were less accurate in gesture performance, more so the elderly, compared to mini-sized tablets. Also, the users, especially the elderly, were less efficient and less accurate when performing complex gestures on the small smartphone compared to the mini-tablet.
The third study investigated the possibility of classifying a user’s age group using touch-gesture features (i.e., gesture speed, gesture accuracy, movement time, and finger pressure) on smartphones. It provides evidence that a user’s age group can be classified using gesture-based applications on smartphones in both user-dependent and user-independent scenarios. The accuracy of age-group classification on smaller screens was higher than on devices with larger screens, because larger screens were much easier to use for users of both age groups. In addition, classification accuracy was higher for younger users than for elderly users; some elderly users performed gestures the same way younger users do, possibly because they have longer experience with smartphones than the typical elderly user. Overall, our results provide evidence that elderly users encounter difficulties when interacting with smartphones and tablet devices compared to younger users, and that a user’s age group can be classified from their ability to perform touch gestures on smartphones and tablets. Designers of smartphone interfaces should remove barriers that make browsing and processing local and global information in smartphone applications difficult, and larger screen sizes should be considered for elderly users. Smartphones could also include automatically customisable user interfaces that suit elderly users’ abilities and accommodate their needs, so that they can be as efficient as younger users. The outcomes of this research could enhance the design of smartphones and tablets, as well as the applications that run on them, especially those aimed at elderly users. Such devices and applications could play an effective role in enhancing elderly people’s activities of daily living.
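The age-group classification described in this abstract can be pictured with a minimal sketch. The feature values, centroid rule, and scaling below are purely illustrative assumptions, not the thesis's actual model or data; the thesis only states that gesture speed, accuracy, movement time, and finger pressure were the features used.

```python
# Illustrative sketch: classify an age group from touch-gesture features
# (speed, accuracy, movement time, pressure) with a nearest-centroid rule.
# All numbers here are invented examples, not the thesis's data.
import math

# Each sample: (speed px/ms, accuracy 0-1, movement time ms, pressure 0-1)
TRAIN = {
    "younger": [(1.8, 0.92, 420, 0.30), (1.6, 0.95, 460, 0.28)],
    "elderly": [(0.9, 0.78, 810, 0.55), (0.7, 0.74, 900, 0.60)],
}

def centroid(samples):
    n = len(samples)
    return tuple(sum(s[i] for s in samples) / n for i in range(4))

CENTROIDS = {label: centroid(s) for label, s in TRAIN.items()}

def classify(features):
    """Return the age-group label whose centroid is nearest (Euclidean,
    with movement time down-weighted to keep features comparable)."""
    def dist(a, b):
        scale = (1.0, 1.0, 0.01, 1.0)  # scale millisecond values down
        return math.sqrt(sum(((x - y) * w) ** 2
                             for x, y, w in zip(a, b, scale)))
    return min(CENTROIDS, key=lambda lbl: dist(features, CENTROIDS[lbl]))
```

A slow, high-pressure sample such as `classify((0.8, 0.76, 870, 0.58))` lands in the "elderly" group under these toy centroids; a real system would learn the decision boundary from the collected gesture data.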
13

Tong, Xin. "Interactive Visual Clutter Management in Scientific Visualization". The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1471612150.

14

Wang, Huang-Jen, and 王皇仁. "The Performance and User Experience of Touch-based Interaction for Older Adults in Editing Tasks". Thesis, 2014. http://ndltd.ncl.edu.tw/handle/26822110490750390857.

Abstract:
Master's thesis
I-Shou University
Department of Industrial Management
Academic year 102 (2013)
Advances in technology, improved hygiene, and high-quality care have gradually extended people’s life spans, turning traditional societies into ageing societies. At the same time, with smartphones and multi-touch products widely used around the world, touch panels are becoming an upward trend in technology, replacing the conventional input devices of keyboard and mouse. In this high-technology age, older adults need basic computer skills to adapt to an ever-changing world, yet it is quite difficult for them to learn to use computers and peripherals well because of their mental and physical conditions and learning abilities that decline with age. This study therefore compares touch-panel interfaces with traditional editing operations to determine which is more suitable for older adults. The study analysed the responses of 15 older subjects using different interfaces. Each subject completed 9 combinations of trials in a 3x3 two-factor design: input interface (mouse, touchpad, touchscreen) and editing operation (global selection, specific selection, rotation). The results show that the touchscreen is easier to use than the touchpad, as subjects spent less time operating it and made fewer errors; the mouse’s performance, however, depended on the editing operation. In global selection the mouse was not significantly different from the touchpad but did shorten operation time, so global selection is the only editing operation in which the mouse could not be replaced. For rotation, the analysis showed the touchscreen to be significantly more efficient than the mouse, decreasing both operation time and errors; thus the touchscreen is the most suitable interface for rotation.
In addition, the subjective questionnaire factors (level of fatigue, relaxation, confidence, and distraction) indicate that older adults prefer the touchscreen interface to the other interfaces.
15

Chen, Hsu-Huan, and 陳緒桓. "Deep-learning SSD-based Smart Touch Interface and Interactive Robot Design". Thesis, 2019. http://ndltd.ncl.edu.tw/handle/cbxmkw.

Abstract:
Master's thesis
National Taiwan Ocean University
Department of Electrical Engineering
Academic year 107 (2018)
The main purpose of this thesis is to apply deep-learning object recognition to the control of several devices, including an omnidirectional mobile robot, a delta robot arm, and household appliances. An SSD (Single Shot multibox Detector) neural network is trained to identify the different objects to be controlled so that the corresponding manipulations can be performed. The system architecture is divided into a main control console and several controlled devices. The main control module is a tablet computer that runs the neural network model on images captured through the front lens, identifying the category and position of each target object appearing in the input image. The user can then connect to and control an identified item of interest on the screen over a Bluetooth interface. The three devices on the current list of controlled devices all use an Arduino Pro Mini as the control core and an HC-05 Bluetooth module to communicate with the console. The omnidirectional mobile robot and the delta robot arm each use three servo motors: the former to drive the vehicle and the latter to control the arm’s operations. When the tablet is connected to the omnidirectional mobile robot, the user can drag it on the screen with one finger to move it in any direction, or rotate its body posture with two fingers. If the system is connected to the delta robot, the user can tap pictures of cats or dogs placed under the robot arm on the screen, and the arm performs classification and collection. Finally, if connected to the desk lamp, the user can tap the lamp on the screen to turn it on or off.
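The selection step described above, where a tap on the tablet screen picks out one of the SSD-detected devices, can be sketched as a simple hit test. The detection format, label names, and confidence threshold below are assumptions for illustration, not the thesis's actual data structures.

```python
# Illustrative sketch of tap-to-select: given SSD-style detections
# (label, confidence, bounding box in pixels), find which detected
# device the user tapped. All names and numbers are invented examples.

def select_device(tap, detections, min_conf=0.5):
    """Return the label of the highest-confidence detection whose
    bounding box contains the tap point, or None if nothing was hit.

    tap        -- (x, y) screen coordinate of the touch event
    detections -- list of (label, confidence, (x1, y1, x2, y2))
    """
    x, y = tap
    hits = [(conf, label)
            for label, conf, (x1, y1, x2, y2) in detections
            if conf >= min_conf and x1 <= x <= x2 and y1 <= y <= y2]
    return max(hits)[1] if hits else None

detections = [
    ("mobile_robot", 0.91, (40, 60, 200, 220)),
    ("desk_lamp",    0.87, (250, 80, 330, 260)),
]
```

Here `select_device((120, 150), detections)` would pick `"mobile_robot"`; once a label is selected, the console would open the corresponding Bluetooth connection and route subsequent one- or two-finger gestures to that device.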
16

Franco, Jéssica Spínola. "Augmented reality selection through smart glasses". Master's thesis, 2019. http://hdl.handle.net/10400.13/2663.

Abstract:
The smart glasses market continues to grow, opening the possibility that smart glasses may someday have the presence in people’s daily lives that smartphones already have. Several interaction methods for smart glasses have been studied, but it is not yet clear which is best for interacting with virtual objects. This work reviews studies of the different interaction methods for augmented reality applications, highlighting the interaction methods for smart glasses and the advantages and disadvantages of each. An indoor Augmented Reality prototype was developed, implementing three different interaction methods, and users’ preferences and their willingness to perform each interaction method in public were studied. In addition, the reaction time, the time between the detection of a marker and the user interacting with it, was measured. An outdoor Augmented Reality application was also developed to understand the different challenges of indoor versus outdoor Augmented Reality applications. The discussion shows that users feel more comfortable using an interaction method similar to one they already use. Nevertheless, the solution combining two interaction methods, the smart glasses’ tap function and head movement, achieves results close to those of the controller. It is important to highlight that it was always the users’ first time, with no learning phase before testing. This suggests that the future of smart-glasses interaction may be a merge of different interaction methods.
17

Lu, Ming-Kun, and 呂鳴崑. "Multi-Camera Vision-based Finger Detection, Tracking, and Event Identification Techniques for Multi-Touch Sensing and Human Computer Interactive Systems". Thesis, 2012. http://ndltd.ncl.edu.tw/handle/vgss3p.

Abstract:
Master's thesis
National Taipei University of Technology
Graduate Institute of Computer Science and Information Engineering
Academic year 100 (2011)
Nowadays, multi-touch technology has become a popular topic. Multi-touch has been implemented in several ways, including resistive and capacitive sensing; however, because of their limitations, these implementations cannot support large screens. This thesis therefore proposes and implements multi-camera vision-based finger detection, tracking, and event identification techniques for multi-touch sensing. The proposed system detects multiple fingers pressing on an acrylic board by capturing the infrared light through four infrared cameras. The captured infrared points, which correspond to the finger-touched points, can serve as an input device and provide a convenient human-computer interface. Compared with conventional touch technology, multi-touch technology allows users to input complex commands. The proposed multi-touch point detection algorithm identifies the touched points using bright-object segmentation; the extracted bright objects are then tracked and their trajectories recorded. The system further analyses these trajectories to identify the corresponding events pre-defined in the system. In terms of applications, this thesis aims to provide a simple, easy-to-operate human-computer interface in which users issue commands by touching and moving their fingers. The proposed system is implemented with a table-sized screen and supports multi-user interaction.
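The bright-object segmentation step this abstract describes can be sketched as thresholding an infrared frame and extracting the centroid of each connected bright blob as a candidate touch point. The tiny frame, threshold, and 4-connectivity choice below are illustrative assumptions, not the thesis's actual implementation.

```python
# Illustrative sketch of bright-object segmentation: threshold a
# grayscale infrared frame, flood-fill each connected bright blob, and
# report blob centroids as candidate fingertip positions.

def find_touch_points(frame, threshold=200):
    """Return a list of (row, col) centroids of bright blobs.

    frame -- 2D list of grayscale values (0-255), one per pixel
    """
    rows, cols = len(frame), len(frame[0])
    seen = [[False] * cols for _ in range(rows)]
    points = []
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] >= threshold and not seen[r][c]:
                # Flood-fill the 4-connected blob starting at (r, c).
                stack, blob = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    blob.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if (0 <= ny < rows and 0 <= nx < cols
                                and frame[ny][nx] >= threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                points.append((sum(y for y, _ in blob) / len(blob),
                               sum(x for _, x in blob) / len(blob)))
    return points

# Toy 4x6 frame with two bright blobs (two "fingertips").
frame = [
    [0,   0,   0, 0,   0, 0],
    [0, 250, 250, 0,   0, 0],
    [0, 250, 250, 0,   0, 0],
    [0,   0,   0, 0, 240, 0],
]
```

In the full system, centroids like these would be matched frame-to-frame to build the trajectories from which gestures and pre-defined events are identified.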
