
Dissertations / Theses on the topic 'Natural user interfaces'

Consult the top 50 dissertations / theses for your research on the topic 'Natural user interfaces.'

Next to every source in the list of references there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Jacobs, Gershwin. "User experience guidelines for mobile natural user interfaces: a case study of physically disabled users." Thesis, Nelson Mandela Metropolitan University, 2017. http://hdl.handle.net/10948/17547.

Full text
Abstract:
Motor impaired people are faced with many challenges, one being the lack of integration into certain spheres of society. Access to information is seen as a major issue for the motor impaired, since most forms of interaction or interactive devices are not suited to their needs. People with motor impairments, like the rest of the population, are increasingly using mobile phones. As a result of the current devices and methods used for interaction with content on mobile phones, various factors prevent a pleasant experience for users with motor impairments. To counter these factors, this study recognizes the need to implement better suited methods of interaction and navigation to improve accessibility, usability and user experience for motor impaired users. The objective of the study was to gain an understanding of the nature of motor impairments and the challenges that this group of people face when using mobile phones. Once this was determined, a solution to address this problem was found in the form of natural user interfaces. In order to gain a better understanding of this technology, various forms of NUIs and their benefits were studied by the researcher in order to determine how this technology can be implemented to meet the needs of motor impaired people. To test this theory, the Samsung Galaxy S5 was selected as the NUI device for the study. It must be noted that this study started in 2013, and the Galaxy S5 was the latest device claiming to improve interaction for disabled people at the time. This device was used in a case study that made use of various data collection methods, including participant interviews. Various motor impaired participants were asked to perform predefined tasks on the device, along with completing a set of user experience questionnaires. Based on the results of the study, it was found that interaction with mobile phones is an issue for people with motor impairments and that alternative methods of interaction need to be implemented. These results contributed to the final output of this study, namely a set of user experience guidelines for the design of mobile human-computer interaction for motor impaired users.
APA, Harvard, Vancouver, ISO, and other styles
2

Manresa, Yee Cristina Suemay. "Advanced and natural interaction system for motion-impaired users." Doctoral thesis, Universitat de les Illes Balears, 2009. http://hdl.handle.net/10803/9412.

Full text
Abstract:
Human-computer interaction is an important area that seeks better and more comfortable systems to promote communication between humans and machines. Vision-based interfaces can offer a more natural and appealing way of communicating. Moreover, they can contribute to the e-accessibility component of e-inclusion. The aim is to develop a usable system; that is, the end user must find the use of the device effective, efficient and satisfactory.
The research's main contribution is SINA, a hands-free interface based on computer vision techniques for motion-impaired users. This interface does not require the user to use the upper body limbs, as only nose motion is considered. Besides the technical aspects, user satisfaction when using an interface is a critical issue. The approach we have adopted is to integrate usability evaluation at relevant points of the software development.
APA, Harvard, Vancouver, ISO, and other styles
3

De, La Cruz Laureano Eliel Josue. "Technology integration in the architectural schematic design phase: understanding the factors that affect users' acceptance." Thesis, The University of Sydney, 2017. http://hdl.handle.net/2123/17841.

Full text
Abstract:
Tangible user interfaces and augmented reality are just some of the emerging technologies which are blending into our everyday lives. The number of systems and devices which use these technologies is quickly increasing, yet little has been done towards investigating the connection between architectural designers’ preferences, the factors that determine their use of technology, and ways to integrate technology in the design process. This research identified and studied factors which affect users’ acceptance of technology, as well as its integration in the schematic phase of the architectural design process. The schematic, or conceptual, phase is where designers use rough sketches to create a design scheme which seeks to define the general scope of the project. I used a practice-based design research method to gather data from studies to improve technology adoption, which in turn increased productivity in the design process. We found that technology in the design process supports the improvement of design intent communication and designers’ productivity. We found a relationship between the age group of architects and how they perceive a technology’s usefulness, which in turn affects how a new technology is accepted. Our findings show that for future sketching interfaces to increase users’ acceptance, developers should focus on the pragmatic and hedonic qualities of the proposed interface. We found that participants prefer digitizing devices which provide users with the feel of paper and pen while simultaneously offering digital versatility. Our findings provide recommendations and insight for better integration of digitizing technologies in the design process, using the field of architecture as the study context. We suggest that the data from our research can be used to improve the development of the next generation of digitizing devices.
APA, Harvard, Vancouver, ISO, and other styles
4

Di, Tore Stefano. "Il corpo aumentato: le interfacce naturali come strategia semplesse di interazione uomo-macchina. Implicazioni didattiche e linee di ricerca." Doctoral thesis, Università degli studi di Salerno, 2013. http://hdl.handle.net/10556/1329.

Full text
Abstract:
2011 - 2012
Over the last few decades, the concepts of body and corporeality have received an attention that has turned them into the theoretical meeting (and clashing) ground of different fields of knowledge and different trajectories of scientific research. From philosophy to medicine, from neuroscience to anthropology, from law to pedagogy, the disciplines that take the human being as the object of their inquiry have all claimed the centrality of the body, each from its own epistemic point of view. In the study of cognition and learning, pressures coming from different directions and traditions have called into question a view that postulated an idea of knowledge as abstract, based on formal rules and independent of both biological aspects and the socio-cultural context; consequently, adjectives such as "situated", "embodied", "social" and "distributed" have begun to accompany the concept of cognition in the scientific literature. This new perspective no longer regards the body as a mere "labourer of thought" (M. Sibilio, 2002), but recognises that "most knowledge, especially vital knowledge, is expressed in the very structure of the body" (Longo, 1995); the body is no longer considered a simple mediator between our brain and external reality, but rather the "main device through which, by having experiences, we develop learning and produce knowledge" (Rivoltella, 2012). From this standpoint, "abstraction and generalisation can usefully produce learning only if they have been built starting from the bodily experience of the world" (Rivoltella, 2012), and the body thus becomes a "knowledge machine" (F. Varela, 1990). The role of technology, in its broadest sense, is not extraneous to the processes that brought about this reversal of perspective. The idea that technologies are not neutral with respect to the forms of knowledge production is certainly not new. Still, an acceleration should be noted, a dizzying quantitative increase: the technological challenges to the idea of the body and to the idea of knowledge are numerous and incessant, and are being reflected upon from many quarters. Several lines overlap here in an almost inextricable way: the idea of media as extensions of human faculties (McLuhan, 2001); the "site of innovation and extension of the technologies of power" (Chignola, 2007) identified by the concept of biopolitics in Foucault's sense; the explicit (albeit obsolete) analogy between mind and computer postulated by cognitivism; the design and evolution of intelligent prostheses; the design of brain-computer interfaces. These lines contribute to shifting and blurring the boundaries of the human body: the machine extends the subject's faculties well beyond what communication theorists had hypothesised, and breaches the subject's integrity, redefining its identity. In the field of education, up to the 1980s the relationship between technology and learning sat within a mainly behaviourist and then cognitivist tradition, in the wake of Human Information Processing (H.I.P.), focusing on the computer as instructor and conveying the idea of knowledge as a mirroring of reality. "It is during the 1980s that the signs of dissatisfaction with this theoretical framework grow ever stronger. That particular solidarity between the model of knowledge (knowledge as the acquisition and processing of information), the didactic and learning model (sequential-curricular) and the technological model (the computer as instructor)" (Calvani, 1998) begins to crumble, and the concepts of body and technology find common ground.
This common ground, at first confined to the field of disability, has grown geometrically in recent years; the idea of technologies of the mind, still cognitivist in character, has given way to the idea of technologies of the mind/body. "Technologies of the pure mind are born ready-made from the programmer/analyst; those of the mind/body 'come into the world'; the technologist merely creates the initial conditions of a process of development, learning and evolution" (Calvani, 1998). From a strictly technological point of view, the spread of Natural Interfaces, based on devices that bring natural paradigms of human interaction (sound, voice, touch, movement) back into Human Computer Interaction, breaks the bottleneck of graphical interfaces: interaction no longer takes place "through the looking glass" (Carroll, 2012) of the screen, but within the subject's "perceptual bubble", in the digitised space that surrounds the user. Digital learning environments gradually abandon their flattening onto the Cartesian plane, which forces reality into an unnatural dimension and limits interaction to the eye-hand pairing alone, and expand into three-dimensional space, grounding interaction in the whole body, with cognitive implications whose impact has not yet been explored. Natural User Interfaces and gesture-recognition technologies prefigure the imminent scenario, in educational practice, of a convergence between the centrality of the body and the technological dimension: the Augmented Body understood as body/interface in Augmented Reality and Augmented Learning. On these premises, the first part of this work presents a conceptual framework for situating the "augmented body" within the body-learning-technology perimeter, while the second part presents the experiments conducted on the basis of that framework. In this sense, the first chapter offers a reflection on the overall scenario of the augmented body, sketching (in broad strokes) the picture of the body as a knowledge machine in its relationship with technologies and with educational activity, through an examination of the non-neutral role of technologies in the forms of knowledge production and of the concepts of cognitive ergonomics and didactic ergonomics. The second chapter consequently addresses the epistemological dimension, framing this scenario within the evolution of the concept of complexity, which has constituted the (meta)theoretical perspective of the most recent reflection in education, and introducing the concept of simplexity (A. Berthoz, 2011). Berthoz's elaboration offers not so much a finished, paradigmatic theoretical framework as a set of conceptual tools that provide an epistemological key for solving, in educational research, the body-learning-technology equation. Through the lens of the Simplifying Principles for a Complex World, the concepts of space, environment (natural and digital), knowledge and learning seem to find a harmonious and functional place among the "Human Brain Projects upon the World" (A. Berthoz, 2009).
"The hypothesis here is twofold. The first is that the mental tools developed over the course of evolution to solve the many problems posed by moving through space have also been used for the higher cognitive functions: memory and reasoning, the relationship with the other, and even creativity. The second hypothesis is that the mental mechanisms devoted to spatial processing make it possible to simplify many other problems that living organisms face." (A. Berthoz, 2011). The third chapter introduces Natural Interfaces as simplifying principles in human-machine interaction: they allow the digitisation of the user space and, consequently, restore the spatial dimension to interfaces, in the sense defined in the second chapter, effectively establishing a digital umwelt that does not replace but rather augments and extends the natural umwelt. The fourth chapter presents the series of experiments on the design of NUI-based educational software, describing in detail the three strands of work undertaken, autonomous yet interconnected, presenting their theoretical framework, the design and development methodology and the strictly technological choices, and reporting the results of the tests carried out in schools. The conclusions, finally, comment on the experimental path, from the definition of the scenario to the elaboration of the conceptual framework, and from the design and development of the applications to the data collection. Overall, the work presents itself as a product that, while complete (from the initial idea to the discussion of the experimental data), is anything but concluded: it constitutes the first act of a broader research effort and, in this sense, the necessary experimental basis demonstrating the soundness of the theoretical assumptions, offered as a starting point for subsequent experimentation. [edited by the author]
XI n.s.
APA, Harvard, Vancouver, ISO, and other styles
5

Janis, Sean Patrick. "Interactive natural user interfaces /." Online version of thesis, 2010. http://hdl.handle.net/1850/12267.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Mavridis, Avraam. "Navigation in 2.5D spaces using Natural User Interfaces." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-116482.

Full text
Abstract:
Natural User Interfaces (NUI) are system interfaces that aim to make human-computer interaction more “natural”. Both the academic and industry sectors have for years been developing new ways to interact with machines. As these interaction techniques evolve to support more and more complex communication between a human and a machine, new challenges arise. The theoretical part of this work describes the challenges of developing NUIs from the perspective of the developers and designers of those systems, and also discusses methods of evaluating the system under development. In addition, for the aim of the thesis, a prototype video game has been developed in which three different interaction methods have been encapsulated. The goal of the prototype was to explore possible NUIs that can be used in 2.5D spaces. The interaction techniques that have been developed were evaluated by a small group of users using a questionnaire, and the results are presented at the end of this document.
APA, Harvard, Vancouver, ISO, and other styles
7

Martín, San José Juan Fernando. "USING NATURAL USER INTERFACES TO SUPPORT LEARNING ENVIRONMENTS." Doctoral thesis, Universitat Politècnica de València, 2015. http://hdl.handle.net/10251/55845.

Full text
Abstract:
Considering the importance of games and new technologies for learning, in this thesis two different systems that use Natural User Interfaces (NUI) for learning about a period of history were designed and developed. One of these systems uses autostereoscopic visualization, which lets the children see themselves as a background in the game and renders the elements with a 3D sensation without the need to wear special glasses or other devices. The other system uses frontal projection over a large-size tabletop display for visualization. The two systems were developed from scratch. A total of five studies were carried out to determine the efficacy of games with NUI interaction with regard to acquiring knowledge, ease of use, satisfaction, fun and engagement, and their influence on children. In the first study, a comparison of the autostereoscopic system with the frontal projection system was carried out. 162 children from 8 to 11 years old participated in the study. We observed that the different characteristics of the systems did not influence the children's acquired knowledge, engagement, or satisfaction; we also observed that the systems are especially suitable for boys and older children (9-11 years old). The children perceived depth with the autostereoscopic system. The children considered the two systems easy to use; however, they found the frontal projection to be easier to use. A second study was performed to determine the mode in which the children learn more about the topic of the game. The two modes were the collaborative mode, where the children played the game in pairs, and the individual mode, where the children played the game alone. 46 children from 7 to 10 years old participated in this study. We observed that there were statistically significant differences between playing the game in the two modes: the children who played in pairs in the collaborative mode obtained better knowledge scores than the children who played individually. A third study comparing traditional learning with a collaborative learning method (in pairs and in large groups) using the game was carried out. 100 children from 8 to 10 years old participated in this study. The results are in line with the second study: the children obtained higher scores when collaborating in large groups or in pairs than when attending a traditional class. There were no statistically significant differences between playing in large groups and playing in pairs. For personalized learning, a Free Learning Itinerary was included, where the children can decide how to direct the flow of their own learning process. For comparison, a Linear Learning Itinerary was also included, where the children follow a predetermined learning flow. A fourth study to compare the two learning itineraries was carried out. 29 children from 8 to 9 years old participated in this fourth study. The results showed that there were no statistically significant differences between the two learning itineraries. Regarding online formative assessment with multiple-choice questions, questionnaires of this kind usually present a question and several possible answers, of which the student must select only one. It is very common for the answers to be just text; however, images could also be used. We carried out a fifth study to determine whether an added image that represents/defines an object helps the children to choose the correct answer. 94 children from 7 to 8 years old participated in the study. The children who filled out the questionnaires with images obtained higher scores than the children who filled out the text-only questionnaire. No statistically significant differences were found between the two questionnaire types with images. The results from the studies suggest that games of this kind could be appropriate educational games, and that autostereoscopy is a technology to exploit in their development.
Martín San José, JF. (2015). USING NATURAL USER INTERFACES TO SUPPORT LEARNING ENVIRONMENTS [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/55845
TESIS
APA, Harvard, Vancouver, ISO, and other styles
8

Saber, Tehrani Daniel, and Lemon Samuel Johansson. "Natural and Assistive Driving Simulator User Interfaces for CARLA." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-293836.

Full text
Abstract:
As autonomous vehicles get closer to commercial roll-out, the challenges for the developers of their software are becoming more complex. One challenge developers face is the interaction between humans and autonomous vehicles in traffic. Such situations require a huge amount of data in order to design and proof-test autonomous systems that can handle complex interactions with humans. This data cannot be collected in real traffic situations without compromising the safety of the human counterparts, so simulations will be necessary. Since human driving behaviour is hard to predict, these simulations need human interaction in order to yield valid data on human behaviour. The purpose of this project is to develop a driving interface and then evaluate the user experience in an experiment. To do this we designed and implemented steering, braking and acceleration on a user interface for a simulator used in autonomous driving research called Car Learning to Act (CARLA) at the Smart Mobility Lab (SML) at KTH. We implemented two driving simulator user interfaces, with different levels of information feedback to the user. To evaluate the developed user interfaces, a survey was designed to measure how intuitive the driving experience was while also comparing it to the original setup at SML. The survey showed that the driving experience was more intuitive with the two developed user interfaces and that 60% of participants would feel comfortable using the new systems on a real vehicle in traffic.
Bachelor's degree project in electrical engineering, 2020, KTH, Stockholm
APA, Harvard, Vancouver, ISO, and other styles
9

Chandra, Yohan. "Natural Language Interfaces to Databases." Thesis, University of North Texas, 2006. https://digital.library.unt.edu/ark:/67531/metadc5474/.

Full text
Abstract:
Natural language interfaces to databases (NLIDB) are systems that aim to bridge the gap between the languages used by humans and computers, and automatically translate natural language sentences to database queries. This thesis proposes a novel approach to NLIDB, using graph-based models. The system starts by collecting as much information as possible from existing databases and sentences, and transforms this information into a knowledge base for the system. Given a new question, the system will use this knowledge to analyze and translate the sentence into its corresponding database query statement. The graph-based NLIDB system uses English as the natural language, a relational database model, and SQL as the formal query language. In experiments performed with natural language questions run against a large database containing information about U.S. geography, the system showed good performance compared to the state of the art in the field.
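To make the translation step above concrete, here is a deliberately tiny, hedged sketch of mapping an English question onto a relational schema and emitting SQL. It is a toy keyword matcher written for illustration only, not the graph-based model the thesis proposes; the schema, the question and the matching rules are all invented.

# Toy sketch: keyword matching of an English question against a tiny,
# invented relational schema, emitting SQL. Not the thesis's graph-based model.
SCHEMA = {
    "state": ["name", "population", "area", "capital"],  # hypothetical table
}

def translate(question):
    tokens = question.lower().rstrip("?").split()
    table = next(t for t in SCHEMA if t in tokens)           # pick a table mentioned in the question
    column = next(c for c in SCHEMA[table] if c in tokens)   # pick a column mentioned in the question
    # Treat capitalised tokens after the leading word as literal values.
    values = [t.strip("?") for t in question.split()[1:] if t[0].isupper()]
    where = f" WHERE name = '{values[-1]}'" if values else ""
    return f"SELECT {column} FROM {table}{where};"

print(translate("What is the population of the state Texas?"))
# -> SELECT population FROM state WHERE name = 'Texas';

A real NLIDB, graph-based or otherwise, additionally has to resolve ambiguity, paraphrases and joins across tables, which is exactly where the knowledge base described in the abstract comes in.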
APA, Harvard, Vancouver, ISO, and other styles
10

Potgieter, Timothy Kyle. "Using natural user interfaces to support synchronous distributed collaborative work." Thesis, Nelson Mandela Metropolitan University, 2014. http://hdl.handle.net/10948/10880.

Full text
Abstract:
Synchronous Distributed Collaborative Work (SDCW) occurs when group members work together at the same time from different places to achieve a common goal. Effective SDCW requires good communication, continuous coordination and shared information among group members. SDCW is possible because of groupware, a class of computer software systems that supports group work. Shared-workspace groupware systems are systems that provide a common workspace that aims to replicate aspects of a physical workspace that is shared among group members in a co-located environment. Shared-workspace groupware systems have failed to provide the same degree of coordination and awareness among distributed group members that exists in co-located groups, owing to the unintuitive interaction techniques that these systems have incorporated. Natural User Interfaces (NUIs) focus on reusing natural human abilities such as touch, speech, gestures and proximity awareness to allow intuitive human-computer interaction. These interaction techniques could provide solutions to the existing issues of groupware systems by breaking down the barrier between people and technology created by the interaction techniques currently utilised. The aim of this research was to investigate how NUI interaction techniques could be used to effectively support SDCW. An architecture for such a shared-workspace groupware system was proposed and a prototype, called GroupAware, was designed and developed based on this architecture. GroupAware allows multiple users from distributed locations to simultaneously view and annotate text documents, and create graphic designs in a shared workspace. Documents are represented as visual objects that can be manipulated through touch gestures. Group coordination and awareness is maintained through document updates via immediate workspace synchronization, user action tracking via user labels and user availability identification via basic proxemic interaction. Members can effectively communicate via audio and video conferencing. A user study was conducted to evaluate GroupAware and determine whether NUI interaction techniques effectively supported SDCW. Ten groups of three members each participated in the study. High levels of performance, user satisfaction and collaboration demonstrated that GroupAware was an effective groupware system that was easy to learn and use, and effectively supported group work in terms of communication, coordination and information sharing. Participants gave highly positive comments about the system that further supported the results. The successful implementation of GroupAware and the positive results obtained from the user evaluation provide evidence that NUI interaction techniques can effectively support SDCW.
APA, Harvard, Vancouver, ISO, and other styles
11

Williamson, Brian. "RealNav: Exploring Natural User Interfaces for Locomotion in Video Games." Master's thesis, University of Central Florida, 2009. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4285.

Full text
Abstract:
We present an exploration into realistic locomotion interfaces in video games using spatially convenient input hardware. In particular, we use Nintendo Wii Remotes to create natural mappings between user actions and their representation in a video game. Targeting American Football video games, we used the role of the quarterback as an exemplar since the game player needs to maneuver effectively in a small area, run down the field, and perform evasive gestures such as spinning, jumping, or the "juke". In our study, we developed three locomotion techniques. The first technique used a single Wii Remote, placed anywhere on the user's body, using only the acceleration data. The second technique used only the Wii Remote's infrared sensor and had to be placed on the user's head. The third technique combined a Wii Remote's acceleration and infrared data using a Kalman filter. The Wii Motion Plus was also integrated to add the orientation of the user into the video game. To evaluate the different techniques, we compared them with a cost-effective six degree of freedom (6DOF) optical tracker and two Wii Remotes placed on the user's feet. Experiments were performed comparing each to this technique. Finally, a user study was performed to determine if a preference existed among these techniques. The results showed that the second and third techniques had the same location accuracy as the cost-effective 6DOF tracker, but the first was too inaccurate for video game players. Furthermore, the range of the Wii Remote's infrared sensor and of the Motion Plus exceeded that of the optical tracker used in the comparison technique. Finally, the user study showed that video game players preferred the third method over the second, but were split on the use of the Motion Plus when the tasks did not require it.
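For readers unfamiliar with the fusion idea behind the third technique, the following sketch shows a one-dimensional Kalman filter that integrates accelerometer readings in the predict step and corrects the estimate with an infrared position measurement in the update step. It is a generic illustration under assumed sample-rate and noise values, not the filter implemented in the thesis.

# A minimal sketch (not the thesis's implementation) of fusing accelerometer
# and infrared position data with a 1D Kalman filter. State is [position,
# velocity]; acceleration drives the prediction, the IR reading corrects it.
import numpy as np

dt = 1.0 / 100.0                         # assumed 100 Hz sensor rate
F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity state transition
B = np.array([[0.5 * dt**2], [dt]])      # control matrix for acceleration input
H = np.array([[1.0, 0.0]])               # only position is measured (IR)
Q = 0.01 * np.eye(2)                     # process noise (tuning guess)
R = np.array([[0.05]])                   # IR measurement noise (tuning guess)

x = np.zeros((2, 1))                     # initial state [position, velocity]
P = np.eye(2)                            # initial state covariance

def kalman_step(x, P, accel, ir_position):
    # Predict: integrate the accelerometer reading.
    x_pred = F @ x + B * accel
    P_pred = F @ P @ F.T + Q
    # Update: correct with the infrared position measurement.
    y = np.array([[ir_position]]) - H @ x_pred       # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)              # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

# Example: one filter step with hypothetical sensor readings.
x, P = kalman_step(x, P, accel=0.3, ir_position=0.12)

In this arrangement the accelerometer supplies smooth short-term motion while the infrared measurement bounds the drift that pure integration of acceleration would otherwise accumulate.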
M.S.
School of Electrical Engineering and Computer Science
Engineering and Computer Science
Computer Science MS
APA, Harvard, Vancouver, ISO, and other styles
12

Williamson, Brian M. "RealNav exploring natural user interfaces for locomotion in video games /." Orlando, Fla. : University of Central Florida, 2009. http://purl.fcla.edu/fcla/etd/CFE0002938.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Coelho, Tiago Miguel Martins. "Avatar modeling: a telepresence study with natural user interface." Master's thesis, Faculdade de Ciências e Tecnologia, 2013. http://hdl.handle.net/10362/10975.

Full text
Abstract:
Dissertation for obtaining the degree of Master in Electrical and Computer Engineering
Virtual environments are an increasing trend in today’s society. In this context, the avatar concept appears as the representation of the user in the virtual world. Nevertheless, the relationship between avatars and human beings lacks empirical studies concerning their interaction. Based on this motivation, this work aimed at studying how the modeling of the avatar's morphology and dynamics affects the control relationship between the avatar and its user. An experiment was conducted to measure telepresence and ownership in the participants while they used a natural user interface to control the avatar. In that experiment, affordances were used as a behavioral assessment in the virtual environment as the user controls the avatar while it passes through apertures of various sizes. The results show that in virtual environments the feelings of telepresence and ownership are greater when the kinematics and the avatar proportions are closer to those of the user.
APA, Harvard, Vancouver, ISO, and other styles
14

Kolagani, Vijay Kumar. "Gesture Based Human-Computer Interaction with Natural User Interface." University of Akron / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=akron1542601474940954.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Pons, Tomás Patricia. "Towards Intelligent Playful Environments for Animals based on Natural User Interfaces." Doctoral thesis, Universitat Politècnica de València, 2018. http://hdl.handle.net/10251/113075.

Full text
Abstract:
The study of animals' interactions with technology and the development of animal-centered technological systems has been gaining attention since the emergence of the research area of Animal Computer Interaction (ACI). ACI aims to improve animals' welfare and wellbeing in several scenarios by developing suitable technology for the animal following an animal-centered approach. Among all the research lines ACI is exploring, there has been significant interest in animals' playful interactions with technology. Technologically mediated playful activities have the potential to provide mental and physical stimulation for animals in different environmental contexts, which could in turn help to improve their wellbeing. As we embark on the era of the Internet of Things, current technological playful activities for animals have not yet explored the development of pervasive solutions that could provide animals with more adaptation to their preferences as well as offer more varied technological stimuli. Instead, playful technology for animals is usually based on digital interactions rather than exploring tangible devices or augmenting the interactions with different stimuli. In addition, these playful activities are predefined and do not change over time, and they require a human to be the one providing the device or technology to the animal. If humans could focus more on their participation as active players of an interactive system aimed at animals, instead of being concerned about holding a device for the animal or keeping the system running, this might help to create stronger bonds between species and foster better relationships with animals. Moreover, animals' mental and physical stimulation are important aspects that could be fostered if the playful systems designed for them offered a varied range of outputs, were tailored to the animal's behaviors, and prevented the animal from getting used to the system and losing interest. Therefore, this thesis proposes the design and development of technological playful environments based on Natural User Interfaces that can adapt and react to the animals' natural interactions. These pervasive scenarios would allow animals to play by themselves or with a human, providing more engaging and dynamic playful activities that are capable of adapting over time.
Pons Tomás, P. (2018). Towards Intelligent Playful Environments for Animals based on Natural User Interfaces [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/113075
TESIS
APA, Harvard, Vancouver, ISO, and other styles
16

Micheloni, Edoardo. "Models and methods for sound-based input in Natural User Interfaces." Doctoral thesis, Università degli studi di Padova, 2019. http://hdl.handle.net/11577/3422847.

Full text
Abstract:
In recent years, Multimodal Interfaces and Natural User Interfaces (NUIs) have been finding more and more applications, thanks to the diffusion of mobile devices and smart objects that do not allow a traditional WIMP interaction. In these contexts, the interaction modes most used are natural language and gesture recognition. The objective of this thesis is to explore innovative interfaces based on non-verbal sounds produced by the interaction of the user with common objects. The potential and the problems related to the design and implementation of this type of interface are discussed through three case studies, in which non-verbal sounds are used for interaction with embedded systems developed for the valorization of cultural heritage. The sounds analysed in these projects are i) broadband noises, ii) impulses and iii) pitched sounds. The results obtained, thanks to a strongly multidisciplinary approach, opened up a fruitful technology transfer between the university and the companies and institutions involved. First of all, the study of broadband noisy sounds was addressed through the interpretation of air-blown signals. The resulting sensor-equipped system was included in a multimedia installation for the valorization of an ancient Pan flute preserved at the Museum of Archaeological Sciences and Art of Padova (Italy). Secondly, impulsive sounds were studied through footstep detection on a wooden runway in order to realize a real-time position mapping technology. The resulting system was used for the 3D exploration of an ordinary 2D painting exhibited during "The European Researchers' Night 2018" in Padova (Italy). Finally, pitched sound signals were studied by analysing notes produced by an acoustic piano. The resulting algorithm for real-time note detection was applied to the video game Musa, whose goal was to teach children how to play the piano. In these projects, both the algorithms, by means of quantitative analysis, and the interfaces between user and computer, by means of qualitative analysis, were validated to assess the "naturalness" of the interaction.
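As an illustration of the kind of pitched-sound analysis the third case study describes, the sketch below estimates the fundamental frequency of one audio frame by autocorrelation and maps it to a MIDI note number. It is an assumed, generic approach, not the algorithm developed in the thesis, and the sample rate and frequency range are illustrative choices.

# A minimal sketch of pitched-sound detection by autocorrelation, in the
# spirit of real-time piano note detection. Illustrative assumptions only.
import numpy as np

def detect_pitch(frame, sample_rate=44100, fmin=27.5, fmax=4186.0):
    """Return an estimated fundamental frequency (Hz) for one audio frame."""
    frame = frame - np.mean(frame)                  # remove DC offset
    corr = np.correlate(frame, frame, mode="full")  # autocorrelation
    corr = corr[len(corr) // 2:]                    # keep non-negative lags
    lag_min = int(sample_rate / fmax)               # shortest period searched
    lag_max = int(sample_rate / fmin)               # longest period searched
    lag = lag_min + np.argmax(corr[lag_min:lag_max])
    return sample_rate / lag

def freq_to_midi(freq):
    """Map a frequency to the nearest MIDI note number (A4 = 440 Hz = 69)."""
    return int(round(69 + 12 * np.log2(freq / 440.0)))

# Example with a synthetic A4 tone instead of a real piano recording.
t = np.arange(0, 0.1, 1 / 44100)
frame = np.sin(2 * np.pi * 440.0 * t)
print(freq_to_midi(detect_pitch(frame)))            # -> 69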
APA, Harvard, Vancouver, ISO, and other styles
17

Lenz, Anthony M. "COFFEE: Context Observer For Fast Enthralling Entertainment." DigitalCommons@CalPoly, 2014. https://digitalcommons.calpoly.edu/theses/1244.

Full text
Abstract:
Desktops, laptops, smartphones, tablets, and the Kinect, oh my! With so many devices available to the average consumer, the limitations and pitfalls of each interface are becoming more apparent. Swimming in devices, users often have to stop and think about how to interact with each device to accomplish the current tasks at hand. The goal of this thesis is to minimize user cognitive effort in handling multiple devices by creating a context aware hybrid interface. The context aware system will be explored through the hybridization of gesture and touch interfaces using a multi-touch coffee table and the next-generation Microsoft Kinect. Coupling gesture and touch interfaces creates a novel multimodal interface that can leverage the benefits of both gestures and touch. The hybrid interface is able to utilize the more intuitive and dynamic use of gestures, while maintaining the precision of a tactile touch interface. Joining these two interfaces in an intuitive and context aware way will open up a new avenue for design and innovation.
APA, Harvard, Vancouver, ISO, and other styles
18

Erazo, Moreta Orlando Ramiro. "A predictive model for user performance time with natural user interfaces based on touchless hand gestures." Tesis, Universidad de Chile, 2016. http://repositorio.uchile.cl/handle/2250/142559.

Full text
Abstract:
Doctor of Science, Computer Science (Doctor en Ciencias, Mención Computación)
Natural user interfaces (NUIs) based on touchless hand gestures (THG) have advantages over conventional user interfaces (UIs) in several scenarios. However, they still pose challenging research problems, such as how to design and evaluate them so as to obtain satisfactory results. The classic approach of involving users to choose gestures or to analyse interface designs needs to be complemented with predictive evaluations for the cases in which user-based methods are inapplicable or costly to carry out. Consequently, quantitative user models are needed to perform such evaluations. Given that model-based evaluation is important in HCI and that the available models are insufficient for evaluating THG-based NUIs, this thesis studies models for interfaces of this kind. Two modelling approaches are addressed, although the main focus is on predictive models. The thesis first presents a study aimed at understanding gesture articulation, which allowed a descriptive model of how users conceive and produce gestures to be derived, together with a gesture taxonomy. The thesis then concentrates on (1) analysing the feasibility of using existing quantitative models to cover THG; (2) formulating a model to predict task performance time; and (3) validating this new model. In all cases, model performance is studied according to several metrics used for comparison against typical ranges in the field. The main contribution of this thesis is therefore a model for estimating the time a user needs to perform a task with a THG-based NUI using the hand. The proposed model, called THGLM, is based on the classic KLM model. It prescribes that hand gestures be analysed according to their temporal structure, that is, using gesture units. THGLM predicts performance time acceptably (prediction error = 12%, R2 > 0.9). The experiments carried out also confirm the model's usefulness for analysing and comparing interface designs. Although THGLM has certain limitations, it has important advantages, such as being relatively easy to use and to extend. Beyond its intrinsic limitations, THGLM should assist in the design and evaluation of THG-based NUIs. UI designers can predict task completion time without involving users and then use that value as a metric to analyse or evaluate a UI. This strategy is especially useful in situations where user testing is difficult to carry out, or as a preliminary step before evaluating an interface. The proposed model is therefore expected to become a useful tool for software designers to perform usability evaluations, improve interface designs, and develop better gesture-based software applications.
This thesis was partially funded by SENESCYT (Ecuador, Convocatoria Abierta 2011), Universidad Técnica Estatal de Quevedo (Ecuador) and NIC Chile (DCC, Universidad de Chile)
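For context, THGLM extends the additive, operator-based style of prediction used by KLM. The sketch below shows that style in its simplest form: a task is described as a sequence of operators and its predicted time is the sum of the operator times. The gesture operators and the numbers attached to them are illustrative placeholders, not the calibrated unit times reported in the thesis; only the mental-preparation value follows the classic KLM convention.

# A hedged sketch of KLM-style additive time prediction, the modelling style
# that THGLM builds on. Operator names and times are illustrative placeholders.
OPERATOR_TIME_S = {
    "M": 1.35,         # mental preparation (classic KLM value)
    "P": 1.10,         # point / move the hand toward a target (placeholder)
    "G_stroke": 0.50,  # one gesture unit: stroke phase (placeholder)
    "G_hold": 0.80,    # one gesture unit: hold phase (placeholder)
    "R": 0.30,         # system response time (placeholder)
}

def predict_task_time(operator_sequence):
    """Predict task completion time as the sum of its operator times."""
    return sum(OPERATOR_TIME_S[op] for op in operator_sequence)

# Hypothetical task: prepare mentally, point at a menu, stroke to select,
# hold to confirm, wait for the system.
task = ["M", "P", "G_stroke", "G_hold", "R"]
print(f"Predicted time: {predict_task_time(task):.2f} s")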
APA, Harvard, Vancouver, ISO, and other styles
19

Clifford, Rory. "Natural User Interface Design using Multiple Displays for Courier Dispatch Operations." Thesis, University of Canterbury. Human Interface Technology, 2013. http://hdl.handle.net/10092/8740.

Full text
Abstract:
This thesis explores how Natural User Interface (NUI) interaction and Multiple Display Technology (MDT) can be applied to an existing Freight Management System (FMS) to improve the command and control interface of the dispatch operators. Situational Awareness (SA) and Task Efficiency (TE) are identified as being the main requirements for dispatchers. Based on studies that have been performed on SA and TE in other time-critical occupations such as Emergency Medical Dispatch (EMD) and Air Traffic Control (ATC), a substitute dispatch display system was designed with focus on courier driver and freight management systems and monitoring. This system aims to alleviate cognitive overheads without disrupting the flow of the existing CFMS by providing extended screen area matched with a natural input mechanism for command and control functionality. This Master's thesis investigates which of the commercial state-of-the-art interface tools is best to use in a wide Field-of-View (FOV) multiple screen display, and to discern whether there is any practical impact that a proposed NUI system will have on courier dispatching. To assess the efficacy of such a hypothetical system, the author has developed an experimental prototype that combines a set of three monitors in a Multi-Monitor System to create the overall display system, accompanied with two traditional and two advanced NUI direct and indirect interaction techniques (mouse, trackpad, touch screen and gesture controller). Experiments using the prototype were conducted to determine the optimum configuration for the control/display interface based upon task effectiveness, bandwidth and overall user desirability of these methods in supporting behavioural requirements of dispatch workstation task handling. The author uses the well-studied and robust Fitts' Law for measuring and analysing user behaviour with NUIs. Evaluation of the prototype system finds that the multi-touch system paired with the multi-monitor system was the most responsive of the interaction techniques, direct or indirect. Based on these findings, employing such an interaction system is a viable option for deployment in FMS's. However, for optimal efficiency, the firmware that supports the interactivity dynamics should be re-designed so it is optimized for touch interaction. This will allow the multi-touch system to be used effectively as an affordance technology. Although the gesture interaction approach has a lot of potential as an alternative NUI device, the performance of gesture input in this experimental setting had the worst performance of all conditions. This finding was largely a result of the interface device limitation within the wide FOV display range of the multi-monitor system. Further design improvements and experimentation are proposed to alleviate this problem for the gesture tracking and for the touchscreen modalities of interaction.
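Since the evaluation relies on Fitts' Law, the short sketch below computes the two quantities usually reported in such device comparisons, the index of difficulty and the throughput of a pointing trial, using the Shannon formulation. The distance, target width and movement time are made-up illustration values, not measurements from this study.

# A minimal sketch of the Fitts' Law quantities typically used to compare
# input devices. Sample values below are invented for illustration.
import math

def index_of_difficulty(distance, width):
    """Shannon formulation: ID = log2(distance / width + 1), in bits."""
    return math.log2(distance / width + 1)

def throughput(distance, width, movement_time_s):
    """Throughput in bits per second for one pointing trial."""
    return index_of_difficulty(distance, width) / movement_time_s

# Hypothetical trial: a 400 px movement to a 40 px target completed in 0.9 s.
print(f"ID = {index_of_difficulty(400, 40):.2f} bits")
print(f"TP = {throughput(400, 40, 0.9):.2f} bits/s")

Comparing mean throughput across the mouse, trackpad, touch and gesture conditions is the standard way such Fitts-style data are summarised.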
APA, Harvard, Vancouver, ISO, and other styles
20

Xu, Siyuan. "A Natural User Interface for Virtual Object Modeling for Immersive Gaming." Digital WPI, 2013. https://digitalcommons.wpi.edu/etd-theses/1048.

Full text
Abstract:
" We designed an interactive 3D user interface system to perform object modeling in virtual environments. Expanding on existing 3D user interface techniques, we integrate low-cost human gesture recognition that endows the user with powerful abilities to perform complex virtual object modeling tasks in an immersive game setting. Much research has been done to explore the possibilities of developing biosensors for Virtual Reality (VR) use. In the game industry, even though full body interaction techniques are involved in modern game consoles, most of the utilizations, in terms of game control, are still simple. In this project, we extended the use of motion tracking and gesture recognition techniques to create a new 3D UI system to support immersive gaming. We set a goal for the usability, which is virtual object modeling, and finally developed a game application to test its performance. "
APA, Harvard, Vancouver, ISO, and other styles
21

Milzoni, Alessandro. "Kinect e openNI a supporto delle NUI (Natural User Interface) applications." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amslaurea.unibo.it/6113/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Demeter, Nora. "Context aware voice user interface." Thesis, Malmö högskola, Fakulteten för kultur och samhälle (KS), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-22417.

Full text
Abstract:
In this thesis I address the topic of a non-visual approach for interaction on mobile devices, as an alternative to their existing visual displays in situations where hands-free usage of the device is preferred. The current technology will be examined through existing work with special attention to its limitations, which user groups are currently using any sort of speech recognition or voice command functions, and in which scenarios these are the most used and most desired. Then I will examine through interviews why people trust or distrust voice interactions and how they feel about the possibilities and limitations of the technology at hand, how individual users use this currently and where they see the technology in the future. After this I will develop an alternative voice interaction concept and validate it through a set of workshops.
APA, Harvard, Vancouver, ISO, and other styles
23

Aragon, Ramirez V. "A user interface for the online elucidation of natural language search statements." Thesis, Lancaster University, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.377384.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Hassanien, Shehabeldin. "A Framework for Physical Rehabilitation Using Natural User Interface with Electromyography Biofeedback." University of Akron / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=akron1354205753.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Woodley, Alan Paul. "NLPX : a natural language query interface for facilitating user-oriented XML-IR." Thesis, Queensland University of Technology, 2008. https://eprints.qut.edu.au/16642/1/Alan_Woodley_Thesis.pdf.

Full text
Abstract:
Most information retrieval (IR) systems respond to users' representation of their information needs (queries) with a ranked list of relevant results, usually text documents. XML documents differ from traditional text documents by explicitly separating structure and content. XML-IR systems aim to exploit this separation by searching and retrieving relevant components of documents (called elements) rather than entire documents, thereby better fulfilling users' information needs. Despite the potential benefit of XML-IR systems, most research in this area has not been centered on the needs of users. In particular, current XML-IR query formation interfaces, namely keywords-only and formal language, are not able to optimally address the needs of users. Keywords-only interfaces are too unsophisticated to fully capture the users' complex information needs that contain both content and structural requirements. In contrast, while formal languages are able to capture users' content and structural requirements they are too difficult to use, even for experts, and are too closely tied to the physical structure of the collection. This thesis presents a solution to these problems by presenting NLPX, a natural language interface for XML-IR systems. NLPX allows users to enter XML-IR queries in natural language and translates them into a formal language (NEXI) to be processed by existing XML retrieval systems. When evaluated by system testing, NLPX outperformed alternative translation approaches. When tested in a user-based experiment, NLPX performed comparably to a query-by-template interface, the baseline user-oriented interface for formulating structured queries. It is hoped that the outcomes of this thesis will help to refocus the field of XML-IR around the user. This will lead to the development of more useful XML-IR systems, which will hopefully result in the more widespread use of XML-IR systems.
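NLPX's target language, NEXI, expresses structural constraints with XPath-like paths and an about() clause for content. The snippet below is a hypothetical illustration of the kind of output such a translation produces; it is not the actual NLPX grammar or pipeline, just a hand-written sketch of one plausible content-and-structure query format.

```python
# Hypothetical sketch of a natural-language request mapped to a NEXI-style query.
# This is not NLPX's actual translation logic; it only illustrates the target format.

def to_nexi(structure: str, support_topic: str, target: str, target_topic: str) -> str:
    """Build a NEXI-like content-and-structure (CAS) query string."""
    return (f"//{structure}[about(., {support_topic})]"
            f"//{target}[about(., {target_topic})]")

if __name__ == "__main__":
    # "Find sections about query translation in articles about XML retrieval"
    print(to_nexi("article", '"XML retrieval"', "sec", '"query translation"'))
    # -> //article[about(., "XML retrieval")]//sec[about(., "query translation")]
```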
APA, Harvard, Vancouver, ISO, and other styles
26

Woodley, Alan Paul. "NLPX : a natural language query interface for facilitating user-oriented XML-IR." Queensland University of Technology, 2008. http://eprints.qut.edu.au/16642/.

Full text
Abstract:
Most information retrieval (IR) systems respond to users' representation of their information needs (queries) with a ranked list of relevant results, usually text documents. XML documents differ from traditional text documents by explicitly separating structure and content. XML-IR systems aim to exploit this separation by searching and retrieving relevant components of documents (called elements) rather than entire documents, thereby better fulfilling users' information needs. Despite the potential benefit of XML-IR systems, most research in this area has not been centered on the needs of users. In particular, current XML-IR query formation interfaces, namely keywords-only and formal language, are not able to optimally address the needs of users. Keywords-only interfaces are too unsophisticated to fully capture the users' complex information needs that contain both content and structural requirements. In contrast, while formal languages are able to capture users' content and structural requirements they are too difficult to use, even for experts, and are too closely tied to the physical structure of the collection. This thesis presents a solution to these problems by presenting NLPX, a natural language interface for XML-IR systems. NLPX allows users to enter XML-IR queries in natural language and translates them into a formal language (NEXI) to be processed by existing XML retrieval systems. When evaluated by system testing, NLPX outperformed alternative translation approaches. When tested in a user-based experiment, NLPX performed comparably to a query-by-template interface, the baseline user-oriented interface for formulating structured queries. It is hoped that the outcomes of this thesis will help to refocus the field of XML-IR around the user. This will lead to the development of more useful XML-IR systems, which will hopefully result in the more widespread use of XML-IR systems.
APA, Harvard, Vancouver, ISO, and other styles
27

Medeiros, Anna Carolina Soares. "Processo de desenvolvimento de gestos para interfaces de usuário." Universidade Federal da Paraíba, 2015. http://tede.biblioteca.ufpb.br:8080/handle/tede/7863.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
The use of our body language to communicate with computer systems is an increasingly possible and applicable feature in the real world. This is driven by the evolution of commercial gesture-recognition solutions. A gesture interface complements or replaces navigation in a conventional interface, and it is up to each developer to choose the most appropriate option for their application. Therefore, when opting for gestures, the gestures will be responsible for activating the system's functions. This work presents a gesture development process that can be used to aid the construction of gesture interfaces. The proposed process should help interface designers to incorporate gesture-based natural interaction into their applications in a more systematic way. In order to illustrate the process, gestures for the actions "Select", "Rotate", "Translate", "Scale" and "Stop" were developed.
A utilização da nossa linguagem corporal para nos comunicarmos com sistemas computacionais é uma característica cada vez mais possível e aplicável no mundo real. Esse fato é potencializado pela evolução das soluções comerciais baseadas em reconhecimentos de gestos. Uma interface de gestos complementa ou substitui a forma de navegação de uma interface convencional, neste cenário cabe ao desenvolvedor escolher a opção mais adequada para a sua aplicação. Assim, quando se opta pelo uso de gestos, esses serão responsáveis por acionar as funções oferecidas pelos sistemas. O presente trabalho apresenta um processo de desenvolvimento de gestos que tem potencial para ser aplicado em qualquer interface em desenvolvimento que deseje agregar interação gestual. O processo proposto deve auxiliar designers de interfaces a integrarem interação natural por gestos em suas aplicações de forma mais sistemática. Como caso de uso foram gerados gestos para as funções: “Selecionar”, “Rotacionar”, “Transladar”, “Escalonar” e “Parar”.
APA, Harvard, Vancouver, ISO, and other styles
28

Ell, Basil [Verfasser]. "User Interfaces to the Web of Data based on Natural Language Generation / Basil Ell." Karlsruhe : KIT Scientific Publishing, 2017. http://www.ksp.kit.edu.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Oliveira, Fábio Henrique Monteiro. "Uso de interfaces naturais na modelagem de objetos virtuais." Universidade Federal de Uberlândia, 2013. https://repositorio.ufu.br/handle/123456789/14551.

Full text
Abstract:
Fundação de Amparo a Pesquisa do Estado de Minas Gerais
Research on gestural interfaces has grown significantly, in particular after the development of sensors that can accurately capture body movements. Consequently, several fields arise for the application of these technologies. Among them stands the 3D modeling industry, which is characterized by robust software packages. However, these packages often lack a facilitating human-computer interface, since interaction is usually mediated by a mouse with only 2 degrees of freedom. Due to these limitations, common tasks such as rotating the scene viewpoint or translating an object are hard for users to assimilate. This discomfort with the usual, complex interface of 3D modeling software is one of the factors that lead users to give up on it. In this context, Natural User Interfaces stand out by better exploiting natural human gestures in order to provide a more intuitive interface. This work presents a system that allows the user to perform 3D modeling using hand poses and gestures, providing an interface with 3 degrees of freedom. An evaluation was conducted with 10 people to validate the proposed strategy and application. In this evaluation the participants reported that the system has the potential to become an innovative interface, despite its limitations. Overall, the hand-tracking approach to 3D modeling seems to be promising and deserves further investigation.
As pesquisas na área de interfaces gestuais vêm crescendo significativamente. Em especial, após o desenvolvimento de sensores que podem capturar movimentos corporais com precisão. Como consequência, surgem diversos campos para a aplicação destas tecnologias. Dentre eles, destaca-se o setor de modelagem 3D, o qual é marcado por possuir programas robustos. Entretanto, estes são muitas vezes ausentes de uma interface homem-computador facilitadora. Isto porque a interação, normalmente, é viabilizada pelo mouse com 2 graus de liberdade. Devido a estas limitações, tarefas frequentes como rotacionar o ponto de vista da cena e transladar um objeto são assimiladas pelo usuário com dificuldade. Este desconforto perante a usual e complexa interface dos programas para modelagem 3D é um dos fatores que culminam na desistência de seu uso. Neste contexto, Natural User Interfaces se destacam, por melhor explorar os gestos naturais humanos, a fim de promover uma interface mais intuitiva. Neste trabalho é apresentado um sistema que permite ao usuário realizar a modelagem 3D por meio de poses e gestos com as mãos provendo uma interface com 3 graus de liberdade. Uma avaliação foi conduzida com 10 pessoas, a fim de validar a estratégia e a aplicação proposta. Nesta avaliação os participantes reportaram que o sistema tem potencial para se tornar uma interface inovadora, a despeito de suas limitações. Em geral, a abordagem de rastreamento das mãos para modelagem 3D parece ser promissora e merece uma investigação mais aprofundada.
Mestre em Ciências
APA, Harvard, Vancouver, ISO, and other styles
30

Diniz, Wendell Fioravante da Silva 1982. "Acionamento de dispositivos robóticos através de interface natural em realidade aumentada." [s.n.], 2012. http://repositorio.unicamp.br/jspui/handle/REPOSIP/264034.

Full text
Abstract:
Orientador: Eurípedes Guilherme de Oliveira Nóbrega
Dissertação (mestrado) - Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica
Resumo: Desde o início da História e particularmente da Revolução Industrial, o homem tem buscado substituir ou complementar sua força de trabalho com dispositivos e máquinas capazes de ampliar a capacidade produtiva ou resolver tarefas para as quais o emprego de força humana não é satisfatório ou é perigoso. Esta necessidade impulsionou, no último século, o desenvolvimento de dispositivos robóticos, que foram se tornando cada vez mais complexos, à medida que a tecnologia avançava. No entanto, quanto mais complexos, mais difícil se torna sua operação. Reduzir a complexidade de operação destes dispositivos sem comprometer sua eficácia é um objetivo desejável e que tem impulsionado significativos esforços de pesquisa ultimamente. Neste trabalho, estudam-se duas tecnologias diferentes que podem ser aplicadas de forma complementar para tornar a operação de dispositivos robóticos mais simplificada. A área de Interfaces Naturais estuda a criação de mecanismos de operação baseados nas formas naturais de interação entre os seres humanos e os computadores, como gestos e fala, visando diminuir a curva de aprendizado para a operação de sistemas complexos, tornando esta uma tarefa intuitiva. A Realidade Aumentada tem entre seus objetivos ampliar o conhecimento sobre uma certa situação ao apresentar a um usuário informações adicionais fornecidas por instrumentação. Para isso, faz uso de técnicas de Computação Gráfica. Também permite criar situações simuladas para avaliar o comportamento de sistemas, ou treinamento para a operação destes sistemas de maneira virtual. Neste trabalho, montou-se um experimento em que um braço robótico será acionado de forma natural, através de gestos do usuário, em um ambiente de Realidade Aumentada. Como transdutor para a captura do movimento do usuário, foi o usado o sensor Kinect¿, que é um equipamento disponível comercialmente com custo relativamente baixo. Foi desenvolvida uma arquitetura que permite o acionamento da plataforma robótica de forma remota, através de comunicação em rede
Abstract: Since the beginning of History and particularly of the Industrial Revolution, human beings have sought to replace or supplement their workforce with devices and machines, in order to expand their productive capacity or to solve tasks for which the use of human power is not satisfactory or it is dangerous. This need pushed the development of robotic devices in the last century, which have become increasingly complex as technology advances. However, the more complex it is, more difficult its operation becomes. Reducing these devices operational complexity without compromising its effectiveness is a desirable goal that has driven significant research efforts lately. In this work, two different technologies are studied which can be applied in a complementary way to simplify robotic operation. The area of Natural Interfaces studies the creation of software mechanisms, based on natural forms of interaction between humans and computers, such as gestures and speech, in order to reduce the learning curve for the operation of complex systems, making it more intuitive. Augmented Reality has among its objectives to expand the knowledge about certain situations, presenting to the user additional instrumentation provided information. To accomplish this objective, Augmented Reality makes use of Computer Graphics techniques. It also allows the creation of simulated scenarios to evaluate the behavior of systems, or for virtually training the operators of these systems. In this work, an experiment was set up in which a robotic arm is activated through user gestures in an Augmented Reality environment. As a low cost commercially available equipment, the Kinect¿ sensor is adopted to capture the user's movement. An architecture was developed and tested with auspicious results, implementing the proposed natural interface to interpret the human gestures and to provoke the respective remote activation of the robotic platform through network communication
Mestrado
Mecanica dos Sólidos e Projeto Mecanico
Mestre em Engenharia Mecânica
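The Diniz abstract above describes gestures captured with a Kinect sensor driving a robotic arm remotely over a network, but gives no implementation detail. The sketch below illustrates one generic way such a pipeline can be wired, with a hand position normalized and sent as a command over a TCP socket; the host address, message format and workspace limits are assumptions for illustration, not the architecture actually built in the dissertation.

```python
import json
import socket

# Hypothetical sketch: turn a Kinect hand position into a remote robot command.
# Host, port, message format and workspace limits are illustrative assumptions.

ROBOT_HOST, ROBOT_PORT = "192.168.0.50", 9000
WORKSPACE = {"x": (-0.6, 0.6), "y": (0.0, 1.2)}   # assumed reachable hand range, metres

def normalize(value: float, low: float, high: float) -> float:
    """Map a coordinate into [0, 1], clamped to the assumed workspace."""
    return min(max((value - low) / (high - low), 0.0), 1.0)

def send_hand_position(x: float, y: float) -> None:
    """Send a normalized hand position as a JSON command to the robot controller."""
    command = {
        "pan":  normalize(x, *WORKSPACE["x"]),    # e.g. mapped to a base joint
        "lift": normalize(y, *WORKSPACE["y"]),    # e.g. mapped to a shoulder joint
    }
    with socket.create_connection((ROBOT_HOST, ROBOT_PORT), timeout=1.0) as conn:
        conn.sendall((json.dumps(command) + "\n").encode("utf-8"))

if __name__ == "__main__":
    send_hand_position(0.15, 0.9)   # a single frame from the skeleton stream
```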
APA, Harvard, Vancouver, ISO, and other styles
31

Johansson, Daniel. "Convergence in mixed reality-virtuality environments : facilitating natural user behavior." Doctoral thesis, Örebro universitet, Institutionen för naturvetenskap och teknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-21054.

Full text
Abstract:
This thesis addresses the subject of converging real and virtual environments to a combined entity that can facilitate physiologically complying interfaces for the purpose of training. Based on the mobility and physiological demands of dismounted soldiers, the base assumption is that greater immersion means better learning and potentially higher training transfer. As the user can interface with the system in a natural way, more focus and energy can be used for training rather than for control itself. Identified requirements on a simulator relating to physical and psychological user aspects are support for unobtrusive and wireless use, high field of view, high performance tracking, use of authentic tools, ability to see other trainees, unrestricted movement and physical feedback. Using only commercially available systems would be prohibitively expensive whilst not providing a solution that would be fully optimized for the target group for this simulator. For this reason, most of the systems that compose the simulator are custom made to facilitate physiological human aspects as well as to bring down costs. With the use of chroma keying, a cylindrical simulator room and parallax corrected high field of view video see-though head mounted displays, the real and virtual reality are mixed. This facilitates use of real tool as well as layering and manipulation of real and virtual objects. Furthermore, a novel omnidirectional floor and thereto interface scheme is developed to allow limitless physical walking to be used for virtual translation. A physically confined real space is thereby transformed into an infinite converged environment. The omnidirectional floor regulation algorithm can also provide physical feedback through adjustment of the velocity in order to synchronize virtual obstacles with the surrounding simulator walls. As an alternative simulator target use, an omnidirectional robotic platform has been developed that can match the user movements. This can be utilized to increase situation awareness in telepresence applications.
APA, Harvard, Vancouver, ISO, and other styles
32

Loachamín, Valencia Mauricio Renán. "Natural user interfaces and smart devices for the assessment of spatial memory using auditory stimuli." Doctoral thesis, Universitat Politècnica de València, 2019. http://hdl.handle.net/10251/107955.

Full text
Abstract:
En esta tesis, el objetivo principal fue diseñar y desarrollar una nueva tarea que combinara interfaces de usuario naturales (NUI) y dispositivos inteligentes para evaluar la memoria espacial utilizando estímulos auditivos, y su validación tanto en niños como en adultos. La nueva tarea evalúa la capacidad de los participantes para detectar y localizar estímulos auditivos que se emiten en diferentes posiciones del área de trabajo. La tarea reconoce los movimientos de los brazos del usuario, utilizando para ello Kinect. Los dispositivos inteligentes (conejos Karotz) se utilizan para emitir estímulos auditivos y también como señales visuales. Por lo tanto, la tarea combina estímulos auditivos con claves visuales reales para la evaluación de la memoria espacial. La tarea incluye un total de 45 estímulos acústicos, repartidos en 5 niveles y cada nivel consta de 3 ensayos. Nuestra tarea es el primer trabajo que combina NUI y dispositivos inteligentes para la evaluación de la memoria espacial. Del mismo modo, nuestra tarea es el primer trabajo que utiliza estímulos auditivos para evaluar la memoria espacial. Para la validación, se llevaron a cabo 3 estudios. El rendimiento de nuestra tarea se comparó con métodos tradicionales. El primer estudio involucró niños con y sin síntomas de falta de atención. Un total de 34 niños participaron (17 niños con falta de atención). Los resultados demostraron que los niños con falta de atención mostraron un rendimiento estadísticamente peor en la tarea. Estos niños con falta de atención también mostraron un rendimiento estadísticamente peor con el método tradicional para evaluar el aprendizaje de sonidos verbales. No se encontraron diferencias estadísticamente significativas en el tiempo dedicado por cada grupo para completar la tarea. Los resultados sugieren que la tarea es una buena herramienta para distinguir las dificultades de memoria espacial en niños con falta de atención. El segundo estudio comparó el rendimiento en la tarea entre niños mayores y adultos (32 niños y 38 adultos sanos). Los resultados de rendimiento con la tarea fueron significativamente más bajos para los niños mayores. Se encontraron correlaciones entre nuestra tarea y los métodos tradicionales, lo que indica que nuestra tarea ha demostrado ser una herramienta válida para evaluar la memoria espacial mediante el uso de estímulos auditivos tanto para niños mayores como para adultos. A partir del análisis, podemos concluir que la satisfacción con la tarea de los niños mayores fue significativamente mayor que la de los adultos. El tercer estudio incluyó un total de 148 participantes (niños más pequeños, niños mayores y adultos). Los resultados están en línea con el segundo estudio. El rendimiento de la tarea se relacionó significativamente, de forma incremental y directa con el grupo de edad (niños más pequeños < niños mayores < adultos).
In this thesis, the main objective was to design and develop a new task that combines Natural User Interfaces (NUI) and smart devices for assessing spatial memory using auditory stimuli, and its validation in both children and adults. The new task tests the ability of participants to detect and localize auditory stimuli that are emitted in different positions of the task area. The task recognizes the movements of the arms of the user using Kinect. Smart devices (Karotz rabbits) are used for emitting auditory stimuli and also as visual cues. Therefore, the task combines auditory stimuli with real visual cues for the assessment of spatial memory. The task includes a total of 45 acoustic stimuli, which should be randomly emitted in different locations. The task is composed of five different levels. Each level consists of 3 trials. The difference between levels lies in the number of sounds to be used in each trial. To our knowledge, our task is the first work that combines NUI and smart devices for the assessment of spatial memory. Similarly, our task is the first work that uses auditory stimuli to assess spatial memory. For the validation, three studies were carried out to determine the efficacy and utility of our task with regard to the performance outcomes, usability, fun, perception and overall satisfaction. The performance of our task was compared with traditional methods. The first study involved children with and without symptoms of inattention. A total of 34 children participated (17 children with inattention). The results showed that the children with inattention showed statistically worse performance in the task. These children with inattention also showed statistically worse performance in the traditional method for testing the learning of verbal sounds. There were no statistically significant differences in the time spent by each group to complete the task. The results suggest that the task is a good tool for distinguishing spatial memory difficulties in children with inattention. The second study compared the performance in the task between older children and adults. A total of 70 participants were involved in this study. There were 32 healthy children from 9 to 10 years old, and 38 healthy adults from 18 to 28 years old. The performance outcomes with the task were significantly lower for the older children. Correlations were found between our task and traditional methods, indicating that our task has proven to be a valid tool for assessing spatial memory by using auditory stimuli for both older children and adults. From the analysis, we can conclude that the older children were significantly more satisfied with the task than the adults. In the third study, a total of 148 participants were involved. They were distributed in three groups (younger children, older children and adults). A total of 100 children and 48 adults participated in this study. The results are in line with the second study. The task performance was significantly incrementally and directly related to the age group (younger children < older children < adults).
En aquesta tesi, l'objectiu principal va ser dissenyar i desenvolupar una nova tasca que combinés NUI i dispositius intel·ligents per a avaluar la memòria espacial utilitzant estímuls auditius, i la seua validació tant en xiquets, com en adults. La nova tasca avalua la capacitat dels participants per a detectar i localitzar estímuls auditius que s'emeten en diferents posicions de l'àrea de treball. La tasca reconeix els moviments dels braços de l'usuari, utilitzant per a açò Kinect. Els dispositius intel·ligents (conills Karotz) s'utilitzen per a emetre estímuls auditius i, també, com a senyals visuals. Per tant, la tasca combina estímuls auditius amb claus visuals reals per a l'avaluació de la memòria espacial. La tasca inclou un total de 45 estímuls acústics, repartits en 5 nivells diferents i cada nivell consta de 3 assajos. La nostra tasca és el primer treball que combina NUI i dispositius intel·ligents per a l'avaluació de la memòria espacial. De la mateixa manera, la nostra tasca és el primer treball que utilitza estímuls auditius per a avaluar la memòria espacial. Per a la validació, es van dur a terme tres estudis. El rendiment de la nostra tasca es va comparar amb mètodes tradicionals. El primer estudi va involucrar 34 xiquets (17 xiquets amb inatenció). Els resultats van demostrar que els xiquets amb inatenció van mostrar un rendiment estadísticament pitjor en la tasca. Aquests xiquets amb inatenció també van mostrar un rendiment estadísticament pitjor amb el mètode tradicional per a avaluar l'aprenentatge de sons verbals. No es van trobar diferències estadísticament significatives en el temps dedicat per cada grup per a completar la tasca. Els resultats suggereixen que la tasca és una bona ferramenta per a distingir les dificultats de memòria espacial en xiquets amb dificultats d'atenció. El segon estudi va comparar el rendiment en la tasca entre xiquets majors i adults. Un total de 70 participants van estar involucrats en aquest estudi. Van participar 32 xiquets i 38 adults sans. Els resultats de rendiment amb la tasca van ser significativament més baixos per als xiquets majors. Es van trobar correlacions entre la nostra tasca i els mètodes tradicionals, la qual cosa indica que la nostra tasca ha demostrat ser una ferramenta vàlida per a avaluar la memòria espacial mitjançant l'ús d'estímuls auditius tant per a xiquets majors, com per a adults. A partir de l'anàlisi, podem concloure que la satisfacció amb la tasca dels xiquets majors va ser significativament major que la dels adults. El tercer estudi va incloure un total de 148 participants (100 xiquets i 48 adults). Es van distribuir en tres grups (xiquets més xicotets, xiquets majors i adults). Els resultats estan en línia amb el segon estudi. El rendiment de la tasca es va relacionar significativament, de forma incremental i directa amb el grup d'edat (xiquets més xicotets < xiquets majors < adults).
Loachamín Valencia, MR. (2018). Natural user interfaces and smart devices for the assessment of spatial memory using auditory stimuli [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/107955
APA, Harvard, Vancouver, ISO, and other styles
33

Sidhu, Jadvinder Singh. "Development of a natural language interface system that allows the user population to tailor the system iteratively to their own requirements." Thesis, Nottingham Trent University, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.245101.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Lindberg, Martin. "Introducing Gestures: Exploring Feedforward in Touch-Gesture Interfaces." Thesis, Malmö universitet, Fakulteten för kultur och samhälle (KS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-23555.

Full text
Abstract:
This interaction design thesis aimed to explore how users could be introduced to the different functionalities of a gesture-based touch screen interface. This was done through a user-centred design research process where the designer was taught different artefacts by experienced users. Insights from this process lay the foundation for an interactive, digital gesture-introduction prototype.Testing said prototype with users yielded this study's results. While containing several areas for improvement regarding implementation and behaviour, the prototype's base methods and qualities were well received. Further development would be needed to fully assess its viability. The user-centred research methods used in this project proved valuable for later ideation and prototyping stages. Activities and results from this project indicate a potential for designers to further explore the possibilities for ensuring the discoverability of touch-gesture interactions. For future projects the author suggests more extensive research and testing using a greater sample size and wider demographic.
APA, Harvard, Vancouver, ISO, and other styles
35

Amanzi, Richard. "A natural user interface architecture using gestures to facilitate the detection of fundamental movement skills." Thesis, Nelson Mandela Metropolitan University, 2015. http://hdl.handle.net/10948/6204.

Full text
Abstract:
Fundamental movement skills (FMSs) are considered to be one of the essential phases of motor skill development. The proper development of FMSs allows children to participate in more advanced forms of movements and sports. To be able to perform an FMS correctly, children need to learn the right way of performing it. By making use of technology, a system can be developed that can help facilitate the learning of FMSs. The objective of the research was to propose an effective natural user interface (NUI) architecture for detecting FMSs using the Kinect. In order to achieve the stated objective, an investigation into FMSs and the challenges faced when teaching them was presented. An investigation into NUIs was also presented including the merits of the Kinect as the most appropriate device to be used to facilitate the detection of an FMS. An NUI architecture was proposed that uses the Kinect to facilitate the detection of an FMS. A framework was implemented from the design of the architecture. The successful implementation of the framework provides evidence that the design of the proposed architecture is feasible. An instance of the framework incorporating the jump FMS was used as a case study in the development of a prototype that detects the correct and incorrect performance of a jump. The evaluation of the prototype proved the following: - The developed prototype was effective in detecting the correct and incorrect performance of the jump FMS; and - The implemented framework was robust for the incorporation of an FMS. The successful implementation of the prototype shows that an effective NUI architecture using the Kinect can be used to facilitate the detection of FMSs. The proposed architecture provides a structured way of developing a system using the Kinect to facilitate the detection of FMSs. This allows developers to add future FMSs to the system. This dissertation therefore makes the following contributions: - An experimental design to evaluate the effectiveness of a prototype that detects FMSs - A robust framework that incorporates FMSs; and - An effective NUI architecture to facilitate the detection of fundamental movement skills using the Kinect.
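The jump-detection instance described above is not specified in detail in the abstract, but a Kinect-based check of this kind typically reduces to tracking a skeleton joint's height over time. The sketch below is a hypothetical illustration with an assumed joint stream, frame rate and thresholds; it is not the detection logic of the architecture itself.

```python
# Hypothetical sketch: flag a jump from a stream of Kinect spine-base heights.
# Frame rate, joint choice and thresholds are assumptions for illustration only.

FRAME_RATE_HZ = 30
MIN_RISE_M = 0.10        # assumed minimum vertical rise to count as a jump
MAX_AIR_TIME_S = 1.0     # assumed upper bound on time spent above the baseline

def detect_jump(spine_base_y: list[float]) -> bool:
    """Return True if the height trace shows a plausible jump."""
    baseline = min(spine_base_y[:10])          # assume the first frames are standing still
    above = [y - baseline > MIN_RISE_M for y in spine_base_y]
    if not any(above):
        return False                            # never left the ground far enough
    air_frames = sum(above)
    return air_frames / FRAME_RATE_HZ <= MAX_AIR_TIME_S

if __name__ == "__main__":
    standing = [0.95] * 30
    jump = [0.95] * 10 + [1.00, 1.08, 1.12, 1.10, 1.02] + [0.95] * 15
    print(detect_jump(standing), detect_jump(jump))   # False True
```

A full system would of course evaluate several joints and movement phases against the criteria of the FMS, but the same pattern of comparing joint trajectories against rules extends naturally to other skills.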
APA, Harvard, Vancouver, ISO, and other styles
36

Olcay, Taner. "Expressing Temporality In Graphical User Interface." Thesis, Malmö universitet, Fakulteten för kultur och samhälle (KS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-23102.

Full text
Abstract:
Temporality has been given attention in HCI research, with scholars arguing that temporal aspects in function-oriented graphical user interface are overlooked. However, these works have not adequately addressed practical approaches to manifest time in the design of such. This paper presents an approach for implementing temporal metaphors in the design of graphical user interface. In this design research, I materialize temporal metaphors into material qualities, in order to manifest time into the design of graphical user interface and shape the experiences of such designs. I argue that the design of temporal metaphors may express traces of time in graphical user interface differently from contemporary designs. I discuss implications and significance of unfolding experience over time. In conclusion, this design research, by articulating the experiences of its design works, sheds new light on the meanings of expressing temporal metaphors in the design of graphical user interface.
APA, Harvard, Vancouver, ISO, and other styles
37

Ell, Basil [Verfasser], and R. [Akademischer Betreuer] Studer. "User Interfaces to the Web of Data based on Natural Language Generation / Basil Ell. Betreuer: R. Studer." Karlsruhe : KIT-Bibliothek, 2015. http://d-nb.info/1097380998/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Figueiredo, Cátia Filipa Pinho. "A experiência mediada por interfaces gestuais touchless em contexto turístico." Doctoral thesis, Universidade de Aveiro, 2016. http://hdl.handle.net/10773/17909.

Full text
Abstract:
Doutoramento em Informação e Comunicação em Plataformas Digitais
A evolução das Tecnologias da Informação e Comunicação impeliu novos modelos e estímulos para o sector do turismo. Estas mudanças, combinadas com uma nova postura do turista, repercutindo as dinâmicas da Web 2.0 e manifestando os contornos de uma cultura de participação, abriram espaço para o surgimento de novos serviços turísticos, de possível acesso ubíquo e personalizado ao longo de todo o ciclo da experiência turística. Simultaneamente, o surgimento de novos paradigmas de Interação Humano- Computador, de que são exemplo as interfaces gestuais touchless, acarretam oportunidades e desafios, quer ao nível da usabilidade e User Experience (UX), quer de um ponto de vista específico, quando concebida a sua potencial integração na experiência turística, como mais um veículo de consumo, partilha e manipulação de informação turística. A presente investigação, temporalmente, acompanhou o lançamento e sucesso do sensor Kinect, que aproximou e diversificou a aplicação e desenvolvimento de interfaces touchless em diferentes contextos. No âmbito turístico, foi identificado que a possível aplicação deste paradigma ainda não tinha sido explorado de forma detalhada. Verificava-se também a necessidade de contribuir para a definição de standards e estratégias para a exploração da UX em relação às interfaces gestuais touchless. Decorrendo da conjuntura apresentada, o presente estudo pretendeu focar a possível aplicação, potencialidades e experiência de utilização de soluções interativas com suporte de interação gestual touchless em contexto turístico. O estudo empírico desenhado e implementado envolveu dois momentos principais: a execução de entrevistas a experts e a realização de uma avaliação em contexto controlado de um protótipo de uma solução interativa touchless, destinada ao contexto turístico. A avaliação referida, na qual participaram 51 indivíduos, implicou o desenvolvimento de instrumentos e de um protocolo de teste adequado aos objetivos e características diferenciadoras do estudo. Como resultados gerais, o primeiro momento permitiu identificar um conjunto de vantagens e desvantagens, potencialidades e especificidades das interfaces gestuais touchless, quando concebida a sua aplicação ao turismo. O segundo momento, contando com o envolvimento dos participantes, destacou as questões relacionadas com a usabilidade e UX das interfaces touchless, permitindo estabelecer um conjunto de guias, metodologia e estratégias, que podem ser aplicadas no desenvolvimento e avaliação de outras soluções que suportem o paradigma referido. Recolheram-se ainda opiniões ao nível do potencial uso das mesmas em contexto turístico, identificadas no contributo dos utilizadores/participantes da avaliação em contexto controlado.
The evolution of communication and information technologies drove new approaches in the tourism industry. This stimulus, combined with the new tourist behaviour, aware of Web 2.0 dynamics and participative in the social web culture, have provided new opportunities for new tourism services, with ubiquitous and personalized access during the entire cycle of the touristic experience. Also, the emergence of new human-computer interaction paradigms - such as touchless gestural interfaces - lead to challenges and opportunities in what concerns usability and user experience (UX). Furthermore, when integrated in the touristic experience, those interfaces may enhance information sharing and manipulation, adding a new dimension to how we experience tourism. This research aroused with the launch of the Kinect sensor, which allowed the application and development of touchless interfaces in different contexts. In tourism, the application of this paradigm has not yet been fully discussed. It was also relevant to contribute to the definition of standards and strategies for researching and evaluating the UX with touchless interfaces. Thus, this study intended to focus on the possible application, potentialities and UX resulting from using interactive solutions with touchless gestural interaction in tourism. The empirical study had two main stages: first, the performance of interviews with experts and second, the execution of an evaluation in a controlled setting, using a prototype of an interactive gestural touchless interface. This evaluation, which was attended by 51 participants, implied the development of suitable tools and evaluation protocol. As a result, the first stage enabled the identification of a set of advantages, disadvantages, possibilities and features of this type of interactive solutions. The second stage focused on the issues related to usability and user experience of touchless gestural interfaces, to establish a set of guidelines, methodologies and approaches. It also collected opinions from users about the application of touchless gestural interfaces in tourism.
APA, Harvard, Vancouver, ISO, and other styles
39

Orso, Valeria. "Toward multimodality: gesture and vibrotactile feedback in natural human computer interaction." Doctoral thesis, Università degli studi di Padova, 2016. http://hdl.handle.net/11577/3424451.

Full text
Abstract:
In the present work, users’ interaction with advanced systems has been investigated in different application domains and with respect to different interfaces. The methods employed were carefully devised to respond to the peculiarities of the interfaces under examination. We could extract a set of recommendations for developers. The first application domain examined regards the home. In particular, we addressed the design of a gestural interface for controlling a lighting system embedded into a piece of furniture in the kitchen. A sample of end users was observed while interacting with the virtual simulation of the interface. Based on the videoanalysis of users’ spontaneous behaviors, we could derive a set of significant interaction trends The second application domain involved the exploration of an urban environment in mobility. In a comparative study, a haptic-audio interface and an audio-visual interface were employed for guiding users towards landmarks and for providing them with information. We showed that the two systems were equally efficient in supporting the users and they were both well- received by them. In a navigational task we compared two tactile displays each embedded in a different wearable device, i.e., a glove and a vest. Despite the differences in the shape and size, both systems successfully directed users to the target. The strengths and the flaws of the two devices were pointed out and commented by users. In a similar context, two devices supported Augmented Reality technology, i.e., a pair of smartglasses and a smartphone, were compared. The experiment allowed us to identify the circumstances favoring the use of smartglasses or the smartphone. Considered altogether, our findings suggest a set of recommendations for developers of advanced systems. First, we outline the importance of properly involving end users for unveiling intuitive interaction modalities with gestural interfaces. We also highlight the importance of providing the user the chance to choose the interaction mode better fitting the contextual characteristics and to adjust the features of every interaction mode. Finally, we outline the potential of wearable devices to support interactions on the move and the importance of finding a proper balance between the amount of information conveyed to the user and the size of the device.
I sistemi computazionali hanno ormai da tempo abbandonato lo scenario immobile della scrivania e tendono oggi a coinvolgere sempre di più ambiti della vita quotidiana, in altre parole pervadono le nostre vite. Nel contesto del pervasive o ubiquitous computing, l’interazione tra l’utente e la macchina dipende in misura sempre minore da specifici sistemi di input (per esempio mouse e tastiera) e sfrutta sempre di più modalità di controllo naturali per operare con i dispositivi (per esempio tramite i gesti o il riconoscimento vocale). Numerosi sono stati i tentativi di trasformare in modo sostanziale il design dei computer e delle modalità di interazione tra cui l’impiego di sistemi per il riconoscimento dei comandi gestuali, dispositivi indossabili e la realtà aumentata. In tali contesti, i metodi tradizionalmente impiegati per lo studio della relazione uomo-macchina si rivelano poco efficaci e si delinea la necessità di una adeguata revisione di tali metodi per poter indagare adeguatamente le caratteristiche dei nuovi sistemi. Nel presente lavoro, sono state analizzate le modalità di interazione dell’utente con diversi sistemi innovativi, ciascuno caratterizzato da un diverso tipo di interfaccia. Sono stati inoltre considerati contesti d’uso diversi. I metodi impiegati sono stati concepiti per rispondere alle diverse caratteristiche delle interfacce in esame e una serie di raccomandazioni per gli sviluppatori sono state derivate dai risultati degli esperimenti. Il primo dominio di applicazione investigato è quello domestico. In particolare, è stato esaminato il design di una interfaccia gesturale per il controllo di un sistema di illuminazione integrato in un mobile della cucina. Un gruppo rappresentativo di utenti è stato osservato mentre interagiva con una simulazione virtuale del prototipo. In base all’analisi dei comportamenti spontanei degli utenti, abbiamo potuto osservare una serie di regolarità nelle azioni dei partecipanti. Il secondo dominio di applicazione riguarda l’esplorazione di un ambiente urbano in mobilità. In un esperimento comparativo, sono state confrontate un’interfaccia audio-aptica e una interfaccia audio- visiva per guidare gli utenti verso dei punti di interesse e per fornire loro delle informazioni a riguardo. I risultati indicano che entrambi i sistemi sono ugualmente efficienti ed entrambi hanno ricevuto valutazioni positive da parte degli utenti. In un compito di navigazione sono stati confrontati due display tattili, ciascuno integrato in un diverso dispositivo indossabile, ovvero un guanto e un giubbotto. Nonostante le differenze nella forma e nella dimensione, entrambi i sistemi hanno condotto efficacemente l’utente verso il target. I punti di forza e le debolezze dei due sistemi sono state evidenziate dagli utenti. In un contesto simile, sono stati confrontati due dispositivi che supportano la Realtà Aumentata, ovvero un paio di smartglass e uno smartphone. L’esperimento ci ha permesso di identificare le circostanze che favoriscono l’impiego dell’uno o dell’altro dispositivo. Considerando i risultati degli esperimenti complessivamente, possiamo quindi delineare una serie di raccomandazione per gli sviluppatori di sistemi innovativi. Innanzitutto, si evidenzia l’importanza di coinvolgere in modo adeguato gli utenti per indentificare modalità di interazione intuitive con le interfacce gesturali. 
Inoltre emerge l’importanza di fornire all’utente la possibilità di scegliere la modalità di interazione che meglio risponde alle caratteristiche del contesto insieme alla possibilità di personalizzare le proprietà di ciascuna modalità di interazione alle proprie esigenze. Infine, viene messa in luce le potenzialità dei dispositivi indossabili nelle interazioni in mobilità insieme con l’importanza di trovare il giusto equilibrio tra la quantità di informazioni che il dispositivo è in grado di inviare e la dimensione dello stesso.
APA, Harvard, Vancouver, ISO, and other styles
40

Sousa, Alexandre Martins Ferreira de. "Superfície mágica: criando superfícies interativas por meio de câmeras RGBD e projetores." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-23122015-104315/.

Full text
Abstract:
Em computação ubíqua, existe a ideia de tornar o computador onipresente, "invisível", de modo a aproximar computadores e humanos. Com o avanço das tecnologias de hardware e de software, torna-se interessante investigar possibilidades inovadoras de interação com os computadores. Neste trabalho, exploramos novas formas de interação inspiradas nos atos de desenhar, agarrar e gesticular. Para testá-las, desenvolvemos novos algoritmos baseados em câmeras RGBD para detecção, classificação e rastreamento de objetos, o que permite a concepção de uma instalação interativa que utilize equipamentos portáteis e de baixo custo. Para avaliar as formas de interação propostas, desenvolvemos a Superfície Mágica, um sistema que transforma uma superfície comum (como uma parede ou uma mesa) num espaço interativo multi-toque. A Superfície Mágica identifica toques de dedos de mãos, de canetas coloridas e de um apagador, oferecendo também suporte a uma varinha mágica para interação 3D. A Superfície Mágica suporta a execução de aplicativos, permitindo que uma superfície comum se transforme numa área interativa para desenho, num explorador de mapas, num simulador 3D para navegação em ambientes virtuais, entre outras possibilidades. As áreas de aplicação do sistema vão desde a educação até a arte interativa e o entretenimento. A instalação do protótipo envolve: um sensor Microsoft Kinect, um projetor de vídeo e um computador pessoal.
Ubiquitous computing is a concept where computing is thought to be omnipresent, effectively "invisible", so that humans and computers are brought together in a seamless way. The progress of hardware and software technologies make it compelling to investigate innovative possibilities of interaction with computers. In this work, we explore novel ways of interaction that are inspired by the acts of drawing, grasping and gesturing. In order to test them, we have developed new RGBD camera-based algorithms for object detection, classification and tracking. This allows the conception of an interactive installation that uses portable and low cost equipment. In order to evaluate the proposed ways of interaction, we have developed the Magic Surface, a system that transforms a regular surface (such as a wall or a tabletop) into a multitouch interactive space. The Magic Surface detects touch of hand fingers, colored pens and eraser. It also supports the usage of a magic wand for 3D interaction. The Magic Surface can run applications, allowing the transformation of a regular surface into an interactive drawing area, a map explorer, a 3D simulator for navigation in virtual environments, among other possibilities. Areas of application range from education to interactive art and entertainment. The setup of our prototype includes: a Microsoft Kinect sensor, a video projector and a personal computer.
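The abstract does not spell out how touches are detected, but RGBD camera-plus-projector setups of this kind commonly compare the live depth image against a captured background model of the empty surface. The sketch below illustrates that general idea with NumPy; the thresholds and the assumption of a pre-captured background depth map are mine, not details taken from the Magic Surface.

```python
import numpy as np

# Hypothetical sketch of depth-based touch detection on a projected surface.
# `background` is an assumed pre-captured depth map of the empty surface (in mm).

NEAR_MM = 10    # assumed: fingertip must be at least this far above the surface
FAR_MM = 40     # assumed: ...but no farther, otherwise it is a hover, not a touch

def touch_mask(depth: np.ndarray, background: np.ndarray) -> np.ndarray:
    """Return a boolean mask of pixels where something touches the surface."""
    height = background - depth          # positive where an object is above the surface
    return (height > NEAR_MM) & (height < FAR_MM)

if __name__ == "__main__":
    background = np.full((4, 4), 1000.0)          # flat surface 1 m from the sensor
    depth = background.copy()
    depth[1, 2] = 980.0                            # a fingertip 20 mm above the surface
    print(touch_mask(depth, background).astype(int))
```

Connected regions of the resulting mask would then be tracked over frames to distinguish fingers, pens and the eraser, which is where object classification comes in.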
APA, Harvard, Vancouver, ISO, and other styles
41

Wallén, Fredrik. "Comparing voice and touch interaction for smartphone radio and podcast application." Thesis, KTH, Medieteknik och interaktionsdesign, MID, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210693.

Full text
Abstract:
Today voice recognition is becoming mainstream and nowadays it is also possible to include in individual smartphone apps. However, it has not previously been investigated for which tasks it is preferable from a usability perspective to use voice recognition rather than touch. In order to investigate this, a voice user interface was created for a smartphone radio application, which already had a touch interface. The voice user interface was also tested with users in order to improve its usability. After that, a test was conducted where the participants were asked to perform the same tasks using both the touch and voice interface. The time they took to complete the tasks was measured and the participants rated the experience of completing the task on a scale. Finally, they were asked which interaction method they preferred. For most of the tasks tested, the voice interaction was both faster and got a higher rating. However, it should be noted that in a case where users don’t have specific tasks to perform it might be harder for them to know what a voice controlled app can and cannot do than when they are using touch. Many users also expressed that they were reluctant to use voice commands in public spaces out of fear of appearing strange. These results can be applied to other radio/podcast apps and, to a lesser extent, app for watching TV series and playing music.
Röststyrningen blir vanligare och numera är den också möjligt att använda i individuella appar för smartphones. Det har dock inte tidigare undersökts för vilka uppgifter det ur ett användbarhetsperspektiv är att föredra framför pekskärmsinteraktion. För att undersöka det skapades ett röstinterface för en radiooch podcast applikation som redan hade ett pekskärmsinterface. Röstinterfacet testades också med användare för att förbättra dess användbarhet. Efter det gjordes ett test där deltagarna blev ombedda att utföra samma uppgift med både pekskärm- och röstinterface. Den tid de tog på sig uppmättes och deltagarna betygsatte upplevelsen av att utföra uppgiften på en skala. Slutligen blev de tillfrågade omvilken interaktionsmetod de föredrog. För de flesta av de testade uppgifterna var röstinteraktion snabbare och fick högre betyg. Det ska dock noteras att i fall då användaren inte har specifika uppgifter att utföra kan det vara svårare för dem att veta vad en röststyrd app kan och inte kan göra än när de använder pekskärm. Många användare uttryckte också att de var motvilliga till att använda röstkommandon i allmänna utrymmen av rädsla föratt verka underliga. Dessa resultat kan tillämpas på radio/podcast appar och, i mindre utsträckning, appar för att titta på TV-serier och spela musik.
APA, Harvard, Vancouver, ISO, and other styles
42

Kern, Dagmar [Verfasser], Albrecht [Akademischer Betreuer] Schmidt, and Antonio [Akademischer Betreuer] Krüger. "Supporting the Development Process of Multimodal and Natural Automotive User Interfaces / Dagmar Kern. Gutachter: Antonio Krüger. Betreuer: Albrecht Schmidt." Duisburg, 2012. http://d-nb.info/1021899739/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Green, Anders. "Designing and Evaluating Human-Robot Communication : Informing Design through Analysis of User Interaction." Doctoral thesis, KTH, Människa-datorinteraktion, MDI, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-9917.

Full text
Abstract:
This thesis explores the design and evaluation of human-robot communication for service robots that use natural language to interact with people.  The research is centred around three themes: design of human-robot communication; evaluation of miscommunication in human-robot communication; and the analysis of spatial influence as empiric phenomenon and design element.  The method has been to put users in situations of future use through means of Hi-fi simulation. Several scenarios were enacted using the Wizard-of-Oz technique: a robot intended for fetch- and carry services in an office environment; and a robot acting in what can be characterised as a home tour, where the user teaches objects and locations to the robot. Using these scenarios a corpus of human-robot communication was developed and analysed.  The analysis of the communicative behaviours led to the following observations: the users communicate with the robot in order to solve a main task goal. In order to fulfil this goal they overtake service actions that the robot is incapable of. Once users have understood that the robot is capable of performing actions, they explore its capabilities.  During the interactions the users continuously monitor the behaviour of the robot, attempting to elicit feedback or to draw its perceptual attention to the users’ communicative behaviour. Information related to the communicative status of the robot seems to have a fundamental impact on the quality of interaction. Large portions of the miscommunication that occurs in the analysed scenarios can be attributed to ill-timed, lacking or irrelevant feedback from the robot.  The analysis of the corpus data also showed that the users’ spatial behaviour seemed to be influenced by the robot’s communicative behaviour, embodiment and positioning. This means that we in robot design can consider the use strategies for spatial prompting to influence the users’ spatial behaviour.  The understanding of the importance of continuously providing information of the communicative status of the robot to it’s users leaves us with an intriguing design challenge for the future: When designing communication for a service robot we need to design communication for the robot work tasks; and simultaneously, provide information based on the systems communicative status to continuously make users aware of the robots communicative capability.
APA, Harvard, Vancouver, ISO, and other styles
44

Tintarev, Nava. "Explaining recommendations." Thesis, Available from the University of Aberdeen Library and Historic Collections Digital Resources, 2009. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?application=DIGITOOL-3&owner=resourcediscovery&custom_att_2=simple_viewer&pid=59438.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Morin, Rudy. "Espace de conception et modèle d'interaction multi-tactile gestuel : un environnement de développement pour enrichir le modèle." Phd thesis, Université Toulouse le Mirail - Toulouse II, 2011. http://tel.archives-ouvertes.fr/tel-00634026.

Full text
Abstract:
The technical maturing and recent adoption of multi-touch technologies by industry and users have focused interaction designers' attention on these technologies. While many studies in human-computer interaction have compared the performance of these interfaces with that of traditional WIMP interfaces, few have integrated into their approach the specificities of the gestural channel and the modalities of multi-touch interaction. In this study, I argue that the design of such interactions can only be approached by following a specific interaction model that integrates all the physical, cognitive, sensory and motor components of gesture in the human-machine coupling. I structure my research around a design space, a short sociotechnical analysis of my object of study, in which I define a descriptive and generative interaction model. I set out a collection of conceptual and technical principles enabling the evaluation and design of multi-touch interfaces in a systemic and extensible manner. In the course of this study, I clarify the limits of the 'natural interface' paradigm by qualifying the effects of interaction realism on the effectiveness of such systems. Finally, I present the design and development of a development environment produced within a CIFRE industrial research agreement that accompanied this study and made it possible to enrich the theoretical model.
APA, Harvard, Vancouver, ISO, and other styles
46

Pillon, Carolina Bravo. "Requisitos para o desenvolvimento de jogos digitais utilizando a interface natural a partir da perspectiva dos usuários idosos caidores." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2015. http://hdl.handle.net/10183/134925.

Full text
Abstract:
The increase in the elderly population in Brazil and worldwide calls for specific actions to meet the needs and preferences of older people. Several areas of knowledge are devoted to studies of human aging in order to ensure autonomy, independence, quality of life and healthy life expectancy for people over 60 years of age. In this context, new digital-game-based intervention technologies have been used to promote physical activity with the aim of preventing functional decline in the elderly. Thus, this research aims to establish a set of project requirements to support the development of digital games using a natural user interface, from the perspective of elderly users prone to falls, in order to contribute to improved quality of life. To this end, an intervention was carried out in the extension project of the Center of Studies of Leisure and Physical Activity for the Elderly (Celari) at the School of Physical Education (ESEF), Federal University of Rio Grande do Sul (UFRGS), over a period of eight weeks with two sessions per week. The assessment instruments used in the research were two questionnaires with closed questions, in a pre- and post-intervention design. In addition, direct observation was used to gather information about the sample. The intervention tool adopted in the research was the Xbox One® console with the Kinect 2.0® motion sensor and seven digital games. The user requirements were then established based on the needs and preferences expressed by the participants during the intervention. At a later stage, the Quality Function Deployment (QFD) method was used to convert the user requirements into a set of project requirements systematized according to the degree of importance assigned by the users. The intention was thus to offer a set of project requirements to guide the development of a digital game using a natural user interface, with a view to improving the quality of life and balance of elderly people and reducing their risk of falls.
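To make the QFD step concrete, the sketch below shows one common way such a conversion can be computed: user requirements carry importance weights, a relationship matrix links them to candidate project requirements, and each project requirement is ranked by its weighted sum. The requirement names, weights and relationship strengths here are hypothetical and are not taken from the thesis.

```python
# Hypothetical QFD-style prioritisation: user requirements weighted by importance,
# a relationship matrix (9 strong, 3 moderate, 1 weak, 0 none), and a weighted
# score per candidate project requirement. All names and numbers are invented.
user_weights = {"clear on-screen feedback": 5, "slow-paced gestures": 4, "seated play": 3}

relationship = {
    "large UI text":         {"clear on-screen feedback": 9, "slow-paced gestures": 0, "seated play": 1},
    "adjustable game speed": {"clear on-screen feedback": 3, "slow-paced gestures": 9, "seated play": 3},
    "upper-body-only input": {"clear on-screen feedback": 0, "slow-paced gestures": 3, "seated play": 9},
}

scores = {
    project_req: sum(user_weights[u] * rel.get(u, 0) for u in user_weights)
    for project_req, rel in relationship.items()
}

# Project requirements sorted by descending weighted importance.
for project_req, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{project_req}: {score}")
```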
APA, Harvard, Vancouver, ISO, and other styles
47

McEwan, Mitchell W. "The influence of naturally mapped control interfaces for video games on the player experience and intuitive interaction." Thesis, Queensland University of Technology, 2017. https://eprints.qut.edu.au/107983/2/Mitchell_McEwan_Thesis.pdf.

Full text
Abstract:
This thesis empirically explores the influence of different types of naturally mapped control interfaces (NMCIs) for video games on the player experience and intuitive interaction. Across two repeated-measures experiments on racing and tennis games, more naturally mapped controls were shown to have largely positive effects, with some differences associated with player characteristics. The compensatory effects of natural mapping for casual players are revealed, along with some aversion to NMCIs amongst hardcore players. Overall implications are discussed, and a new NMCI Dimensions Framework presented, to aid future academic and design work leveraging NMCIs to improve video game accessibility and experiences.
APA, Harvard, Vancouver, ISO, and other styles
48

Freitag, Georg. "Konzepte der Anwendungsentwicklung für und mit Multi-Touch." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-162112.

Full text
Abstract:
With the emergence of natural user interfaces, which aim to make interaction with computers as intuitive as possible, the importance of the design aspects LOOK and FEEL of the user interfaces to be presented is also being renegotiated. For the design and development of new applications, this means rethinking existing process models, tools and interactions and reviewing them in light of the new challenges. The guiding principle of this thesis is the approach "like is developed with like", which engages concretely with the research space of natural user interfaces using multi-touch technology as its example case. On the basis of the three successive aspects of model, tool and interaction, the special role of FEEL is emphasised and discussed. The thesis concentrates in particular on the prototyping phase, in which new ideas are drafted and later developed (further). It approaches the topic step by step, from the abstract to the concrete. First, a newly developed process model is presented in order to address the particularities of FEEL in the development process of natural user interfaces. The model combines approaches from agile and classical models, with iteration and the development of prototypes playing a special role. Based on the newly introduced model, two fields of application are derived, which, in line with the guiding principle of the thesis, are served by multi-touch tools yet to be designed. Particular emphasis is placed on putting the developer in the role of the user, in that the two activities of implementation and evaluation take place on the same device and flow seamlessly into one another. While the design-oriented concept TIQUID enables behaviour and dependencies to be reproduced by means of gesture-controlled animation, the concept LIQUID provides the developer with a visual programming language for implementing the FEEL. The two tools were evaluated in three independent application tests, which examined their place in the development process, the comparison with alternative tools, and the preferred type of interaction. The results of the evaluations show that the previously defined goals of simple use, fast and immediate representation of the FEEL, and good operability via multi-touch input were met and exceeded. The thesis concludes with a concrete examination of multi-touch interaction, which for developers and users is the interface to the FEEL of the application. In the last part of the thesis, interaction with the multi-touch device, previously limited to touch, is extended by a spatial aspect with the help of a novel approach. From this position, further perspectives emerge that contribute a new aspect to the understanding of user-oriented activities. This vision of new interaction concepts, tested by means of a technical implementation, serves as an incentive and a starting point for the future extension of the previously developed process model and the designed tools. The state reached with this thesis offers a solid starting point for further investigation of the field of natural user interfaces.
In addition to numerous approaches that motivate further in-depth research, the thesis, with the very concrete implementations TIQUID and LIQUID and the extension of the interaction space, offers interfaces for transferring the research results into practice. A continued exploration of the research space using alternative approaches is just as conceivable as the use of an input technology alternative to multi-touch.
APA, Harvard, Vancouver, ISO, and other styles
49

ROGER, BERTRAND. "Un systeme de dialogue intelligent avec un interlocuteur a la decouverte du monde simule par un logiciel quelconque." Paris 6, 1987. http://www.theses.fr/1987PA066202.

Full text
Abstract:
The system presented adapts progressively to its interlocutor by observing their habits and preferences, as well as the knowledge they acquire while discovering the world simulated by each piece of software. The knowledge base attached to each piece of software is given in the form of objects representing the syntax and semantics of that software's commands and the conceptual model of the world it simulates. The system itself consists of a set of expertises, or packets of production rules, independent of the software to which it is applied.
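As a rough illustration of the kind of mechanism this abstract describes, the sketch below models a small packet of production rules whose firing depends on what the system has observed about the user so far. The rule contents, user-model fields and hint texts are invented for the example and do not come from the thesis.

```python
# Hypothetical packet of production rules that adapts its hints to an evolving
# user model. Rules, facts and messages are invented for illustration only.
user_model = {"knows": set(), "preferences": {"verbosity": "short"}}

rules = [  # (condition over the user model, hint to give, fact the hint teaches)
    (lambda m: "open_file" not in m["knows"],
     "Hint: use the 'open' command to load a document.", "open_file"),
    (lambda m: "open_file" in m["knows"] and "search" not in m["knows"],
     "Hint: 'search <word>' finds text in the loaded document.", "search"),
]

def respond(model):
    """Fire the first applicable rule and record what the user now knows.
    (Updating the model immediately after giving the hint is a simplification.)"""
    for condition, hint, taught_fact in rules:
        if condition(model):
            model["knows"].add(taught_fact)
            if model["preferences"]["verbosity"] == "long":
                return hint
            return hint.split(":", 1)[1].strip()  # terser phrasing for this user
    return "No further hints."

print(respond(user_model))  # first hint
print(respond(user_model))  # next hint, because the user model was updated
```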
APA, Harvard, Vancouver, ISO, and other styles
50

Crawford, Alistair. "Bad Behaviour: The Prevention of Usability Problems Using GSE Models." Griffith University. School of Information and Communication Technology, 2006. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20061108.154141.

Full text
Abstract:
The aim of Human Computer Interaction or HCI is to both understand and improve the quality of the users' experience with the systems and technology they interact with. Recent HCI research requirements have stated a need for a unified predictive approach to system design that consolidates system engineering, cognitive modelling, and design principles into a single 'total system approach.' At present, few methods seek to integrate all three of these aspects into a single method, and of those that do, many are extensions to existing engineering techniques. This thesis, however, proposes a new behaviour-based approach designed to identify usability problems early in the design process, before testing the system with actual users. In order to address the research requirements, this model uses a new design notation called Genetic Software Engineering (GSE) in conjunction with aspects of a cognitive modelling technique called NGOMSL (Natural GOMS Language) as the basis for this approach. GSE's behaviour tree notation and NGOMSL's goal-orientated format are integrated using a set of simple conversion rules defined in this study. Several well-established design principles, believed to contribute to the eventual usability of a product, are then modelled in GSE. This thesis addresses the design of simple interfaces and the design of complex ubiquitous technology. The new GSE approach is used to model and predict usability problems in an extensive range of tasks, from programming a VCR to making a video recording on a modern mobile phone. The validity of these findings is tested against actual user tests on the same tasks and devices to demonstrate the effectiveness of the GSE approach. Ultimately, the aim of the study is to demonstrate the effectiveness of the new cognitive- and engineering-based approach at predicting usability problems based on tangible representations of established design principles. This both fulfils the HCI research requirements for a 'total system approach' and establishes a novel approach to user interface and system design.
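The integration described above rests on mapping nodes of a behaviour tree onto goal-oriented, NGOMSL-style statements. The actual conversion rules are defined in the thesis and are not reproduced here; the sketch below only illustrates the general shape of such a mapping, with a made-up tree and a single naive conversion rule.

```python
# Hypothetical sketch: a tiny behaviour tree for "record a programme" is walked
# and each node is emitted as an NGOMSL-like method step. The tree, node names
# and the one-node-per-step conversion rule are invented for illustration.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Behaviour:
    component: str                       # which component realises the behaviour
    behaviour: str                       # what happens at this node
    children: List["Behaviour"] = field(default_factory=list)

def to_ngomsl(goal: str, node: Behaviour, steps=None, depth=1):
    """Emit 'Method for goal: ...' followed by one numbered step per tree node."""
    if steps is None:
        steps = [f"Method for goal: {goal}"]
    steps.append(f"  Step {len(steps)}. {node.behaviour} ({node.component})")
    for child in node.children:
        to_ngomsl(goal, child, steps, depth + 1)
    if depth == 1:
        steps.append(f"  Step {len(steps)}. Return with goal accomplished.")
    return steps

tree = Behaviour("VCR", "Press the record button", [
    Behaviour("Display", "Verify that the REC indicator is shown"),
    Behaviour("User", "Select the channel to record"),
])

print("\n".join(to_ngomsl("record a programme", tree)))
```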
APA, Harvard, Vancouver, ISO, and other styles
