Scientific literature on the topic "Visual search 3D"

Create a correct reference in APA, MLA, Chicago, Harvard, and several other styles

Choose a source:

Consult thematic lists of journal articles, books, theses, conference proceedings, and other academic sources on the topic "Visual search 3D".

Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online when this information is included in the metadata.

Journal articles on the topic "Visual search 3D"

1

Lmaati, Elmustapha Ait, Ahmed El Oirrak, and M. N. Kaddioui. "A Visual Similarity-Based 3D Search Engine". Data Science Journal 8 (2009): 78–87. http://dx.doi.org/10.2481/dsj.007-069.
2

Finlayson, Nonie J., and Philip M. Grove. "Visual search is influenced by 3D spatial layout". Attention, Perception, & Psychophysics 77, no. 7 (May 14, 2015): 2322–30. http://dx.doi.org/10.3758/s13414-015-0924-3.
3

Finlayson, N., and P. Grove. "Visual search is influenced by 3D spatial layout". Journal of Vision 14, no. 10 (August 22, 2014): 914. http://dx.doi.org/10.1167/14.10.914.
4

Ostrovsky, Y., and P. Sinha. "The role of 3D perspective in visual search". Journal of Vision 1, no. 3 (March 14, 2010): 122. http://dx.doi.org/10.1167/1.3.122.
5

Li, Chia-Ling, M. Pilar Aivar, Dmitry M. Kit, Matthew H. Tong, and Mary M. Hayhoe. "Memory and visual search in naturalistic 2D and 3D environments". Journal of Vision 16, no. 8 (June 14, 2016): 9. http://dx.doi.org/10.1167/16.8.9.
6

Christmann, Olivier, Noëlle Carbonell, and Simon Richir. "Visual search in dynamic 3D visualisations of unstructured picture collections". Interacting with Computers 22, no. 5 (September 2010): 399–416. http://dx.doi.org/10.1016/j.intcom.2010.02.005.
7

Bernhard, Matthias, Efstathios Stavrakis, Michael Hecher, and Michael Wimmer. "Gaze-to-Object Mapping during Visual Search in 3D Virtual Environments". ACM Transactions on Applied Perception 11, no. 3 (October 28, 2014): 1–17. http://dx.doi.org/10.1145/2644812.
8

Lago Angel, Miguel Angel, Craig Abbey, and Miguel Eckstein. "Dissociations in ideal and human observer visual search in 3D images". Journal of Vision 18, no. 10 (September 1, 2018): 131. http://dx.doi.org/10.1167/18.10.131.
9

Ghose, Tandra, Aman Mathur, and Rupak Majumdar. "Study of Visual Search in 3D Space using Virtual Reality (VR)". Journal of Vision 18, no. 10 (September 1, 2018): 286. http://dx.doi.org/10.1167/18.10.286.
10

Shen, Helong, Yong Yin, Yongjin Li, and Pengcheng Wang. "Real-time Dynamic Simulation of 3D Cloud for Marine Search and Rescue Simulator". International Journal of Virtual Reality 8, no. 2 (January 1, 2009): 59–63. http://dx.doi.org/10.20870/ijvr.2009.8.2.2725.

Abstract:
As the main scenery of the sky, 3D clouds influence the fidelity of the visual system and the immersiveness of the simulator. In this paper, building on the work of Y. Dobashi and T. Nishita, a small-region Cellular Automaton is generated and a more realistic cloud simulation is achieved. The experimental results show that the visual system of the simulator runs in real time and reaches a relatively higher refresh rate once the changeable 3D cloud is applied.
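The cellular-automaton cloud dynamics this abstract builds on (Dobashi and Nishita's scheme) evolve three boolean fields per cell: humidity, phase-transition activation, and cloud presence. The sketch below is a simplified 2D version under stated assumptions (4-neighbourhood, zero-padded borders, no extinction step), not the paper's exact rules:

```python
import numpy as np

def ca_cloud_step(hum, act, cld):
    """One simplified step of a Dobashi/Nishita-style boolean cloud automaton.

    hum: humidity, act: activation (phase transition), cld: cloud presence.
    A cell activates when it is humid and any of its 4 neighbours is active.
    """
    # OR of the four axis neighbours of `act` (borders padded with False)
    pad = np.pad(act, 1)
    neigh = pad[:-2, 1:-1] | pad[2:, 1:-1] | pad[1:-1, :-2] | pad[1:-1, 2:]
    act_next = hum & neigh           # activation spreads into humid cells
    hum_next = hum & ~act            # humidity is consumed by activation
    cld_next = cld | act             # activated cells become (and stay) cloud
    return hum_next, act_next, cld_next
```

Iterating this step from a few seeded activation cells grows a cloud front; the paper's contribution is running such automata on small regions so the visual system stays real-time.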

Theses on the topic "Visual search 3D"

1

Wu, Hanwei. "Object Ranking for Mobile 3D Visual Search". Thesis, KTH, Skolan för elektro- och systemteknik (EES), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-175146.

Abstract:
In this thesis, we study object ranking in mobile 3D visual search. Conventional object-ranking methods rank objects based on their appearance in images captured by mobile devices while ignoring the underlying 3D geometric information. We therefore propose to improve the ranking by using the underlying 3D geometry of the objects. We develop a fast 3D geometric verification algorithm to re-rank the objects at low computational complexity. In this scheme, the geometry of the objects, such as round corners, sharp edges, or planar surfaces, is considered for 3D object ranking alongside their appearance. We also investigate flaws of conventional vocabulary trees and improve the ranking results by introducing a credibility value into the TF-IDF scheme. By combining the novel vocabulary trees with fast 3D geometric verification, we improve the recall-versus-data-rate performance as well as the subjective ranking results for mobile 3D visual search.
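The retrieval backbone this abstract modifies is standard TF-IDF scoring over a visual vocabulary. A minimal sketch of that scoring is below; the per-word `credibility` weight is a hypothetical stand-in for the thesis's credibility value (its actual definition is not reproduced here):

```python
import math
from collections import Counter

def tfidf_scores(query_words, database, credibility=None):
    """Rank database objects against a query bag of visual words.

    query_words: list of visual-word ids for the query.
    database: dict mapping object_id -> list of visual-word ids.
    credibility: optional dict word -> weight in [0, 1] discounting
    unreliable words (stand-in for the thesis's credibility value).
    Returns object ids sorted from best to worst match.
    """
    n_docs = len(database)
    df = Counter()                       # document frequency per visual word
    for words in database.values():
        df.update(set(words))
    idf = {w: math.log(n_docs / df[w]) for w in df}

    q = Counter(query_words)
    scores = {}
    for obj, words in database.items():
        d = Counter(words)
        s = 0.0
        for w, qtf in q.items():
            if w in d:
                c = credibility.get(w, 1.0) if credibility else 1.0
                # idf is squared because both query and database term
                # frequencies are idf-weighted before the dot product
                s += c * qtf * d[w] * idf.get(w, 0.0) ** 2
        scores[obj] = s
    return sorted(scores, key=scores.get, reverse=True)
```

Setting a word's credibility to 0 removes its vote entirely, which is the mechanism (however the weight is actually computed) by which unreliable vocabulary-tree nodes stop dominating the ranking.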
2

Ebri, Mars David. "Multi-View Vocabulary Trees for Mobile 3D Visual Search". Thesis, KTH, Kommunikationsteori, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-162268.

Abstract:
Mobile Visual Search (MVS) is a research field that focuses on the recognition of real-world objects using mobile devices such as smartphones or robots. Current mobile visual search solutions produce results based on the appearance of objects in images captured by mobile devices, which suits planar objects such as CD covers, magazines, and artworks. However, these solutions fail when different real objects appear similar in the captured images. To solve this problem, the proposed solution captures not only the visual appearance of the query object but also its underlying 3D geometry. Vocabulary Tree (VT) methods are widely used to efficiently match a query against a database with a large volume of data. In this thesis, we study the vocabulary tree in the scenario of multi-view imagery for mobile visual search. We use hierarchically structured multi-view features to construct multi-view vocabulary trees that represent the 3D geometric information of the objects. Relevant aspects of vocabulary trees, such as tree shaping, tf-idf weighting, and scoring functions, are studied and incorporated into the multi-view scenario. The experimental results show that our multi-view vocabulary trees improve the matching and ranking performance of mobile visual search.
3

Bai, Hequn. "Mobile 3D Visual Search based on Local Stereo Image Features". Thesis, KTH, Ljud- och bildbehandling, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-102603.

Abstract:
Many recent applications of local image features focus on 2D image recognition. Such applications cannot distinguish between real objects and photos of objects. In this project, we present a 3D object recognition method using stereo images. Using the 3D information of the objects obtained from stereo images, objects with similar image descriptions but different 3D shapes, such as real objects and photos of objects, can be distinguished. Moreover, the feature-matching performance is improved compared with methods using only local image features. Since local image features may consume a higher bitrate than transmitting the compressed images themselves, we evaluate the performance of a recently proposed low-bitrate local image feature descriptor, CHoG, in 3D object reconstruction and recognition, and propose a difference-compression method based on the quantized CHoG descriptor that further reduces the bitrate.
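The stereo cue that lets this thesis tell a real object from a photo of one rests on standard triangulation: for a rectified stereo pair, depth is focal length times baseline divided by disparity. A minimal sketch (the numbers in the usage note are illustrative, not from the thesis):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth of a point for a rectified stereo pair: Z = f * B / d.

    disparity_px: horizontal pixel shift of the point between the
    left and right images; focal_px: focal length in pixels;
    baseline_m: distance between the two cameras in metres.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px
```

For example, with an 800 px focal length and a 10 cm baseline, a 64 px disparity puts the point 1.25 m away. A photo of an object is planar, so all its feature points share nearly one depth, while a real 3D object yields a spread of depths, which is exactly the distinction the abstract describes.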
4

McIntire, John Paul. "Visual Search Performance in a Dynamic Environment with 3D Auditory Cues". Wright State University / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=wright1175611457.
5

Puglia, Luca. "About the development of visual search algorithms and their hardware implementations". Doctoral thesis, Università degli studi di Salerno, 2017. http://hdl.handle.net/10556/2570.

Abstract:
2015 - 2016
The main goal of my work is to exploit the benefits of a hardware implementation of a 3D visual search pipeline. The term visual search refers to the task of searching for objects in the environment starting from a real-world representation. Object recognition today is mainly based on scene descriptors, a unique description for salient spots in the data structure. For years this task was implemented using plain images: an image descriptor is a feature vector describing a position in the image. Matching descriptors across different views of the same scene should allow the same spot to be found from different angles, so a good descriptor should be robust to changes in scene luminosity, camera affine transformations (rotation, scale and translation), camera noise, and object affine transformations. Clearly, with 2D images it is not possible to be robust to changes in the projective space; e.g., if the object is rotated with respect to the camera's up axis, its 2D projection changes dramatically. For this reason, alongside 2D descriptors, many techniques have been proposed to solve the projective-transformation problem using 3D descriptors that map the shape of objects and consequently the real appearance of their surfaces. This category of descriptors relies on 3D point clouds and disparity maps to build a reliable feature vector that is invariant to projective transformations. More sophisticated techniques are needed to obtain the 3D representation of the scene and, if necessary, the texture of the 3D model, and these techniques are also more computationally intensive than simple image capture. The field of 3D model acquisition is very broad; two main categories can be distinguished: active and passive methods. In the active category we find special devices able to obtain 3D information by projecting special light patterns.
Generally an infrared projector is coupled with a camera: while the projector casts a known, fixed pattern, the camera receives the pattern's reflection off a surface, and the distortion in the pattern gives the precise depth of every point in the scene. These kinds of sensors are of course expensive and not very efficient in terms of power consumption, since a lot of power is wasted projecting light, and the use of lasers also imposes eye-safety limits on frame rate and transmitted power. Another way to obtain 3D models is to use passive stereo-vision techniques, where two (or more) cameras only acquire the scene's appearance. Using the two (or more) images as input to a stereo-matching algorithm, it is possible to reconstruct the 3D world. Since this task needs more computational resources, hardware acceleration can give an impressive performance boost over a pure software approach. In this work I explore the principal steps of a visual search pipeline composed of a 3D vision system and a 3D description system. Both systems take advantage of a parallelized architecture prototyped in RTL and implemented on an FPGA platform. This is a huge research field, and I try to explain the reasons for all the choices made in my implementation, e.g. the chosen algorithms, the heuristics applied to accelerate performance, and the selected device. The first chapter explains the visual search problem and the main components required by a visual search pipeline. I then show the implemented architecture for a stereo-vision system based on a bioinformatics-inspired approach, where the final system can process up to 30 fps at 1024 × 768 pixels.
After that, a method for boosting the performance of 3D descriptors is presented, and the final chapter presents the architecture for the SHOT descriptor on FPGA. [edited by author]
XV n.s.
6

Andersson, Ulrika. "Effect of depth cues on visual search in a web-based environment". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-204465.

Abstract:
In recent years, 3D graphics has become more available for web development, with low-level access to graphics hardware and the increased power of web browsers. With users' core browsing tasks being to quickly scan a website and find what they are looking for, can 3D graphics – or depth cues – be used to facilitate these tasks? The main focus of this work was therefore to examine user performance on websites in terms of visual attention. Previous research on the use of 3D graphics in web design and other graphical interfaces has yielded mixed results, but some of it suggests depth cues might be used to segment a visual scene and improve visual attention. The main question asked in this work was: How do depth cues affect visual search in a web-based environment? To examine the question, a user study was conducted in which participants performed a visual search task on four web-based prototypes with varying depth cues. The findings suggest depth cues can have a negative effect by increasing reaction time, but certain cues can improve task completion (hit rate) in text-rich web environments. It may further be useful to look at the problem from a more holistic perspective, also emphasizing factors such as the visual complexity and prototypicality of websites.
7

Chiu, Mi-chun, and 邱米淳. "Visual Search in Dynamic 3D Picture Collections". Thesis, 2012. http://ndltd.ncl.edu.tw/handle/91038244916483938767.

Abstract:
Master's thesis, National Yunlin University of Science and Technology, Department of Industrial Design, 100.
The purpose of this paper is to examine appropriate speeds for dynamic 3D picture collections and participants' performance. The research is divided into two stages. The first stage examines the appropriate speeds of two dynamic 3D collections. In the second stage, picture collections are grouped by their dominant colors and forms, and the collections rotate at the speed determined in the first stage. The aim is to assess possible effects on visual-search efficiency by comparing participants' performance. According to the debriefings, the rotation speed of the picture collections not only affects efficiency but also makes users feel tired and lose confidence. The grouping mode affects participants' performance: the task grouped by form and organized on the OV interface has the best performance, and grouping picture collections by form improves performance. Organizing on the IV interface improves performance when the pictures are ungrouped and their dominant colors are evident.

Book chapters on the topic "Visual search 3D"

1

Sukno, Federico M., John L. Waddington, and Paul F. Whelan. "Comparing 3D Descriptors for Local Search of Craniofacial Landmarks". In Advances in Visual Computing, 92–103. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33191-6_10.
2

Yi, Zili, Yang Li, and Minglun Gong. "An Efficient Algorithm for Feature-Based 3D Point Cloud Correspondence Search". In Advances in Visual Computing, 485–96. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-50835-1_44.
3

Kondarattsev, Vadim L., Alexander Yu Kryuchkov, and Roman M. Chumak. "3D Object Classification, Visual Search from RGB-D Data". In Applied Mathematics and Computational Mechanics for Smart Applications, 353–75. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-33-4826-4_24.
4

Moustakas, Konstantinos, G. Stavropoulos, and Dimitrios Tzovaras. "Protrusion Fields for 3D Model Search and Retrieval Based on Range Image Queries". In Advances in Visual Computing, 610–19. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33179-4_58.
5

Caunce, Angela, Chris Taylor, and Tim Cootes. "Adding Facial Actions into 3D Model Search to Analyse Behaviour in an Unconstrained Environment". In Advances in Visual Computing, 132–42. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-17289-2_13.
6

Zhang, Yunhong, Ruifeng Yu, Lei Feng, and Xin Wu. "A Comparative Study on 3D/2D Visual Search Performance on Different Visual Display Terminals". In Advances in Neuroergonomics and Cognitive Engineering, 233–42. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-41691-5_20.
7

Banchs, Rafael E. "A Comparative Evaluation of 2D And 3D Visual Exploration of Document Search Results". In Information Retrieval Technology, 100–111. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-12844-3_9.
8

Schoeffmann, Klaus, David Ahlström, and Laszlo Böszörmenyi. "A User Study of Visual Search Performance with Interactive 2D and 3D Storyboards". In Adaptive Multimedia Retrieval. Large-Scale Multimedia Retrieval and Evaluation, 18–32. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-37425-8_2.
9

van Schooten, Boris W., Betsy van Dijk, Avan Suinesiaputra, Anton Nijholt, and Johan H. C. Reiber. "Evaluating Visualisations and Automatic Warning Cues for Visual Search in Vascular Images". In Cognitively Informed Intelligent Interfaces, 68–83. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-4666-1628-8.ch005.

Abstract:
Visual search is a task performed in various application domains. The authors examine it in the domain of radiological analysis of 3D vascular images. They compare several major visualisations used in this domain and study the possible benefits of automatic warning systems that highlight the sections that may contain visual targets and hence require the user's attention. With the help of a literature study, the authors present some theory about what results can be expected given the accuracy of a particular visual cue. They present the results of two experiments, in which they find that the Curved Planar Reformation visualisation, which presents a cross-section based on knowledge of the blood vessel's position, is significantly more efficient than regular 3D visualisations, and that automatic warning systems that produce false alarms could work provided they do not miss targets.
10

Lauterbach, Helge Andreas, and Andreas Nüchter. "Aerial 3D Mapping with Continuous Time ICP for Urban Search and Rescue". In Autonomous Mobile Mapping Robots [Working Title]. IntechOpen, 2022. http://dx.doi.org/10.5772/intechopen.108260.

Abstract:
Fast reconnaissance is essential for strategic decisions during the immediate response phase of urban search and rescue missions. Nowadays, UAVs, with their advantageous overview perspective, are increasingly used for reconnaissance alongside manual inspection of the scenario. However, data evaluation is often limited to visual inspection of images or video footage. We present our LiDAR-based aerial 3D mapping system, which provides real-time maps of the environment. UAV-borne laser scans typically offer a reduced field of view. Moreover, UAV trajectories are more flexible and dynamic than those of the ground vehicles for which SLAM systems are often designed. We address these challenges with a two-step registration approach based on continuous-time ICP. The experiments show that the resulting maps accurately represent the environment.
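The registration underlying this mapping system is ICP. Its core step, once correspondences are fixed, is the closed-form rigid alignment via SVD (the Kabsch solution); the continuous-time variant in the chapter additionally interpolates the sensor pose along each scan, which is not shown here. A minimal sketch of the basic point-to-point alignment:

```python
import numpy as np

def rigid_align(src, dst):
    """Best-fit rotation R and translation t mapping src onto dst,
    given known point correspondences (rows of the Nx3 arrays),
    minimising sum ||R @ src_i + t - dst_i||^2 (Kabsch algorithm)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t
```

A full ICP loop alternates this solve with nearest-neighbour correspondence search until the alignment stops improving; the chapter's two-step approach applies such registration first scan-to-scan and then against the growing map.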

Conference proceedings on the topic "Visual search 3D"

1

Schoeffmann, Klaus, David Ahlstrom, and Laszlo Boszormenyi. "3D Storyboards for Interactive Visual Search". In 2012 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2012. http://dx.doi.org/10.1109/icme.2012.62.
2

Petrelli, Alioscia, Danilo Pau, Emanuele Plebani, and Luigi Di Stefano. "RGB-D Visual Search with Compact Binary Codes". In 2015 International Conference on 3D Vision (3DV). IEEE, 2015. http://dx.doi.org/10.1109/3dv.2015.17.
3

Wu, Hanwei, Haopeng Li, and Markus Flierl. "An embedded 3D geometry score for mobile 3D visual search". In 2016 IEEE 18th International Workshop on Multimedia Signal Processing (MMSP). IEEE, 2016. http://dx.doi.org/10.1109/mmsp.2016.7813366.
4

Rasouli, Amir, and John K. Tsotsos. "Sensor Planning for 3D Visual Search with Task Constraints". In 2016 13th Conference on Computer and Robot Vision (CRV). IEEE, 2016. http://dx.doi.org/10.1109/crv.2016.11.
5

Gifford, Howard C. "Tests of a 3D visual-search model observer for SPECT". In SPIE Medical Imaging, edited by Craig K. Abbey and Claudia R. Mello-Thoms. SPIE, 2013. http://dx.doi.org/10.1117/12.2008073.
6

Jaballah, Sami, Mohamed-Chaker Larabi, and Jamel Belhadj Tahar. "Heuristic inspired search method for fast wedgelet pattern decision in 3D-HEVC". In 2016 6th European Workshop on Visual Information Processing (EUVIP). IEEE, 2016. http://dx.doi.org/10.1109/euvip.2016.7764607.
7

Li, Haopeng, and Markus Flierl. "Mobile 3D visual search using the Helmert transformation of stereo features". In 2013 20th IEEE International Conference on Image Processing (ICIP). IEEE, 2013. http://dx.doi.org/10.1109/icip.2013.6738716.
8

Bernhard, Matthias, Efstathios Stavrakis, Michael Hecher, and Michael Wimmer. "Gaze-to-object mapping during visual search in 3D virtual environments". In the ACM Symposium. New York, New York, USA: ACM Press, 2014. http://dx.doi.org/10.1145/2628257.2656419.
9

Yan, Yan, and Lin Kunhui. "3D Visual Design for Mobile Search Result on 3G Mobile Phone". In 2010 International Conference on Intelligent Computation Technology and Automation (ICICTA). IEEE, 2010. http://dx.doi.org/10.1109/icicta.2010.489.
10

Yoshida, Syunsuke, Makoto Sei, Akira Utsumi, and Hirotake Yamazoe. "Preliminary analysis of effective assistance timing for iterative visual search tasks using gaze-based visual cognition estimation". In 2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW). IEEE, 2022. http://dx.doi.org/10.1109/vrw55335.2022.00179.