To see the other types of publications on this topic, follow the link: Visual search 3D.

Journal articles on the topic "Visual search 3D"

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Choose a source:

Consult the top 50 journal articles for your research on the topic "Visual search 3D."

Next to every source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference for the chosen work in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever the metadata include this information.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1. Lmaati, Elmustapha Ait, Ahmed El Oirrak, and M. N. Kaddioui. "A Visual Similarity-Based 3D Search Engine." Data Science Journal 8 (2009): 78–87. http://dx.doi.org/10.2481/dsj.007-069.

2. Finlayson, Nonie J., and Philip M. Grove. "Visual search is influenced by 3D spatial layout." Attention, Perception, & Psychophysics 77, no. 7 (May 14, 2015): 2322–30. http://dx.doi.org/10.3758/s13414-015-0924-3.

3. Finlayson, N., and P. Grove. "Visual search is influenced by 3D spatial layout." Journal of Vision 14, no. 10 (August 22, 2014): 914. http://dx.doi.org/10.1167/14.10.914.

4. Ostrovsky, Y., and P. Sinha. "The role of 3D perspective in visual search." Journal of Vision 1, no. 3 (March 14, 2010): 122. http://dx.doi.org/10.1167/1.3.122.

5. Li, Chia-Ling, M. Pilar Aivar, Dmitry M. Kit, Matthew H. Tong, and Mary M. Hayhoe. "Memory and visual search in naturalistic 2D and 3D environments." Journal of Vision 16, no. 8 (June 14, 2016): 9. http://dx.doi.org/10.1167/16.8.9.

6. Christmann, Olivier, Noëlle Carbonell, and Simon Richir. "Visual search in dynamic 3D visualisations of unstructured picture collections." Interacting with Computers 22, no. 5 (September 2010): 399–416. http://dx.doi.org/10.1016/j.intcom.2010.02.005.

7. Bernhard, Matthias, Efstathios Stavrakis, Michael Hecher, and Michael Wimmer. "Gaze-to-Object Mapping during Visual Search in 3D Virtual Environments." ACM Transactions on Applied Perception 11, no. 3 (October 28, 2014): 1–17. http://dx.doi.org/10.1145/2644812.

8. Lago Angel, Miguel Angel, Craig Abbey, and Miguel Eckstein. "Dissociations in ideal and human observer visual search in 3D images." Journal of Vision 18, no. 10 (September 1, 2018): 131. http://dx.doi.org/10.1167/18.10.131.

9. Ghose, Tandra, Aman Mathur, and Rupak Majumdar. "Study of Visual Search in 3D Space using Virtual Reality (VR)." Journal of Vision 18, no. 10 (September 1, 2018): 286. http://dx.doi.org/10.1167/18.10.286.

10. Shen, Helong, Yong Yin, Yongjin Li, and Pengcheng Wang. "Real-time Dynamic Simulation of 3D Cloud for Marine Search and Rescue Simulator." International Journal of Virtual Reality 8, no. 2 (January 1, 2009): 59–63. http://dx.doi.org/10.20870/ijvr.2009.8.2.2725.

Abstract:
As the main scenery of the sky, the rendering of 3D clouds influences the fidelity of a simulator's visual system and its immersive effect. In this paper, building on the work of Y. Dobashi and T. Nishita, a small-region Cellular Automaton is generated and a more realistic cloud simulation is achieved. The experimental results show that the simulator's visual system can run in real time and maintains a relatively high refresh rate after the changeable 3D clouds are applied.
11. Kyritsis, Markos, Stephen R. Gulliver, and Eva Feredoes. "Visual Search Fixation Strategies in a 3D Image Set: An Eye-Tracking Study." Interacting with Computers 32, no. 3 (May 2020): 246–56. http://dx.doi.org/10.1093/iwc/iwaa018.

Abstract:
In this study, we explore whether the inclusion of monocular depth within a pseudo-3D picture gallery negatively affects visual search strategy and performance. The experimental design facilitated control of (i) the number of visible depth planes and (ii) the presence of semantic sorting. Our results show that increasing the number of visual depth planes facilitates efficiency in search, which in turn results in a decreased response time to target selection and a reduction in participants' average pupil dilation, used for measuring cognitive load. Furthermore, results identified that search strategy is based on sorting, which implies that an appropriate management of semantic associations can increase search efficiency by decreasing the number of potential targets.
12. Gilkey, Robert H., Brian D. Simpson, Douglas S. Brungart, Jeffery L. Cowgill, and Adrienne Janae Ephrem. "3D Audio Display for Pararescue Jumpers." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 51, no. 19 (October 2007): 1349–52. http://dx.doi.org/10.1177/154193120705101916.

Abstract:
Visual and audio navigation aids were compared in a virtual environment that depicted an urban combat search and rescue (CSAR) mission. The participants' task was to rapidly move through a virtual maze depicted in a CAVE to find a downed pilot, while dealing with automated hostile and friendly characters. The visual and audio displays were designed to present comparable information, which in separate conditions could be a simple real-time indication of the bearing to the pilot or to intermediate waypoints along the way. Auditory displays led to faster response times than visual displays (p = .011), and the waypoint display led to faster response times than the simple bearing display (p = .002). The results are considered in the context of the target CSAR application.
13. Wang, Hongling, Chengjin Zhang, Yong Song, Bao Pang, and Guangyuan Zhang. "Three-Dimensional Reconstruction Based on Visual SLAM of Mobile Robot in Search and Rescue Disaster Scenarios." Robotica 38, no. 2 (May 21, 2019): 350–73. http://dx.doi.org/10.1017/s0263574719000675.

Abstract:
Conventional simultaneous localization and mapping (SLAM) has concentrated on two-dimensional (2D) map building. To adapt it to urgent search and rescue (SAR) environments, it is necessary to combine the fast and simple global 2D SLAM and three-dimensional (3D) objects of interest (OOIs) local sub-maps. The main novelty of the present work is a method for 3D OOI reconstruction based on a 2D map, thereby retaining the fast performance of the latter. A theory is established that is adapted to a SAR environment, including object identification, exploration area coverage (AC), and loop closure detection of revisited spots. Proposed for the first time is image optical flow calculation with a 2D/3D fusion method and RGB-D (red, green, blue + depth) transformation based on Joblove–Greenberg mathematics and OpenCV processing. The mathematical theories of optical flow calculation and wavelet transformation are used for the first time to solve the robotic SAR SLAM problem. The present contributions concern two aspects: (i) mobile robots depend on planar distance estimation to build 2D maps quickly and to provide SAR exploration AC; (ii) 3D OOIs are reconstructed using the proposed innovative methods of RGB-D iterative closest points (RGB-ICPs) and the 2D/3D principle of wavelet transformation. Different mobile robots are used to conduct indoor and outdoor SAR SLAM. Both the SLAM and the SAR OOI detection are implemented in simulations and ground-truth experiments, which provide strong evidence for the proposed 2D/3D reconstruction SAR SLAM approaches adapted to post-disaster environments.
14. Aizenman, Avigael, Matthew Thompson, Krista Ehinger, and Jeremy Wolfe. "Visual search through a 3D volume: Studying novices in order to help radiologists." Journal of Vision 15, no. 12 (September 1, 2015): 1107. http://dx.doi.org/10.1167/15.12.1107.

15. Cain, Matthew, Emilie Josephs, and Jeremy Wolfe. "Keep on rolling: Visual search asymmetries in 3D scenes with motion-defined targets." Journal of Vision 15, no. 12 (September 1, 2015): 1365. http://dx.doi.org/10.1167/15.12.1365.

16. Li, Chia-Ling, M. Pilar Aivar, Matthew Tong, and Mary Hayhoe. "Memory in visual search is task-dependent in both 2D and 3D environments." Journal of Vision 15, no. 12 (September 1, 2015): 56. http://dx.doi.org/10.1167/15.12.56.

17. Aivar, M. Pilar, Chia-Ling Li, Dmitry Kit, Matthew Tong, and Mary Hayhoe. "Spatial memory relative to the 3D environment guides body orientation in visual search." Journal of Vision 15, no. 12 (September 1, 2015): 947. http://dx.doi.org/10.1167/15.12.947.

18. Kyritsis, Markos, Stephen R. Gulliver, and Eva Feredoes. "Environmental factors and features that influence visual search in a 3D WIMP interface." International Journal of Human-Computer Studies 92-93 (August 2016): 30–43. http://dx.doi.org/10.1016/j.ijhcs.2016.04.009.

19. Shubina, Ksenia, and John K. Tsotsos. "Visual search for an object in a 3D environment using a mobile robot." Computer Vision and Image Understanding 114, no. 5 (May 2010): 535–47. http://dx.doi.org/10.1016/j.cviu.2009.06.010.

20. Pomplun, M., T. W. Garaas, and M. Carrasco. "The effects of task difficulty on visual search strategy in virtual 3D displays." Journal of Vision 13, no. 3 (August 28, 2013): 24. http://dx.doi.org/10.1167/13.3.24.

21. Meghanathan, Radha Nila, Patrick Ruediger-Flore, Felix Hekele, Jan Spilski, Achim Ebert, and Thomas Lachmann. "Spatial Sound in a 3D Virtual Environment: All Bark and No Bite?" Big Data and Cognitive Computing 5, no. 4 (December 13, 2021): 79. http://dx.doi.org/10.3390/bdcc5040079.

Abstract:
Although the focus of Virtual Reality (VR) lies predominantly on the visual world, acoustic components enhance the functionality of a 3D environment. To study the interaction between visual and auditory modalities in a 3D environment, we investigated the effect of auditory cues on visual searches in 3D virtual environments with both visual and auditory noise. In an experiment, we asked participants to detect visual targets in a 360° video in conditions with and without environmental noise. Auditory cues indicating the target location were either absent or one of simple stereo or binaural audio, both of which assisted sound localization. To investigate the efficacy of these cues in distracting environments, we measured participant performance using a VR headset with an eye tracker. We found that the binaural cue outperformed both stereo and no auditory cues in terms of target detection irrespective of the environmental noise. We used two eye movement measures and two physiological measures to evaluate task dynamics and mental effort. We found that the absence of a cue increased target search duration and target search path, measured as time to fixation and gaze trajectory lengths, respectively. Our physiological measures of blink rate and pupil size showed no difference between the different stadium and cue conditions. Overall, our study provides evidence for the utility of binaural audio in a realistic, noisy and virtual environment for performing a target detection task, which is a crucial part of everyday behaviour—finding someone in a crowd.
22. de Antonio, Angélica, Cristian Moral, Daniel Klepel, and Martín J. Abente. "Gesture-based control of the 3D visual representation of document collections for exploration and search." Information Services & Use 33, no. 2 (October 30, 2013): 139–59. http://dx.doi.org/10.3233/isu-130698.

23. Zang, Xuelian, Zhuanghua Shi, Hermann J. Müller, and Markus Conci. "Contextual cueing in 3D visual search depends on representations in planar-, not depth-defined space." Journal of Vision 17, no. 5 (June 12, 2017): 17. http://dx.doi.org/10.1167/17.5.17.

24. Pomplun, M., T. Garaas, and M. Carrasco. "The effects of task demands on the dynamics of visual search in virtual 3D displays." Journal of Vision 9, no. 8 (March 21, 2010): 1214. http://dx.doi.org/10.1167/9.8.1214.

25. Lee, Marcus J. C., Stephen J. Tidman, Brendan S. Lay, Paul D. Bourke, David G. Lloyd, and Jacqueline A. Alderson. "Visual Search Differs But Not Reaction Time When Intercepting a 3D Versus 2D Videoed Opponent." Journal of Motor Behavior 45, no. 2 (March 2013): 107–15. http://dx.doi.org/10.1080/00222895.2012.760512.

26. Wickens, Christopher D. "The When and How of Using 2-D and 3-D Displays for Operational Tasks." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 44, no. 21 (July 2000): 3–403. http://dx.doi.org/10.1177/154193120004402107.

Abstract:
Three different canonical viewpoints into a 3D domain are defined to create a taxonomy of 3D displays. We then show how the information processing demands of each display viewpoint provide benefits and/or impose costs on four categories of tasks, involving travel, image matching or situation awareness, visual search, and precise judgments. These task-display interactions are illustrated with experiments in aviation display design, battlefield judgments, and data visualization. Conclusions are offered regarding two possible ways of addressing the task-display interactions in design.
27. Wang, Lanfei, Jiangming Kan, Jun Guo, and Chao Wang. "3D Path Planning for the Ground Robot with Improved Ant Colony Optimization." Sensors 19, no. 4 (February 16, 2019): 815. http://dx.doi.org/10.3390/s19040815.

Abstract:
Path planning is a fundamental issue in robot navigation. As robots work in 3D environments, it is meaningful to study 3D path planning. To address the general problems of easily falling into local optima and long search times in 3D path planning based on the ant colony algorithm, we propose an improved pheromone update and a heuristic function that introduce a safety value, and we design two methods to calculate safety values. Concerning the path search, we design a search mode combining the plane and visual fields and limit the search range of the robot. With regard to the deadlock problem, we adopt a 3D deadlock-free mechanism to enable ants to escape such predicaments. For the simulations, we used a number of 3D terrains and set different start and end points in each terrain under the same external settings. According to the results of the improved and basic ant colony algorithms, paths planned by the improved algorithm effectively avoid obstacles, and their trajectories are smoother than those of the basic algorithm. The shortest path length is reduced by 8.164% on average compared with the results of the basic ant colony algorithm. We also compared the two methods for calculating safety values under the same terrain and external settings. The results show that by calculating the safety value in advance, during the environmental modeling stage, and invoking it directly during path planning, the average running time is reduced by 91.56% compared with calculating the safety value while path planning.
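As a rough illustration of the safety-value idea in the abstract above, an ant's transition rule can be sketched as follows. This is a hedged sketch, not the paper's algorithm: the exponents `alpha`, `beta`, `gamma` and the multiplicative form of the safety term are assumptions.

```python
import random

def transition_probs(candidates, pheromone, distance, safety,
                     alpha=1.0, beta=2.0, gamma=1.0):
    """Probability of an ant moving to each candidate cell.

    The classic pheromone**alpha * (1/distance)**beta weighting is
    multiplied by safety**gamma, so cells near obstacles (low safety)
    are chosen less often. The functional form is illustrative only.
    """
    weights = [pheromone[c] ** alpha
               * (1.0 / distance[c]) ** beta
               * safety[c] ** gamma
               for c in candidates]
    total = sum(weights)
    return [w / total for w in weights]

def pick_next(candidates, probs, rng=random.Random(0)):
    """Roulette-wheel selection of the next cell from the probabilities."""
    r, acc = rng.random(), 0.0
    for c, p in zip(candidates, probs):
        acc += p
        if r <= acc:
            return c
    return candidates[-1]
```

Precomputing `safety` for every cell during environmental modeling, as the abstract describes, means this rule only looks values up at search time, which is where the reported runtime saving would come from.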
28. Lei, Haopeng, Guoliang Luo, Yuhua Li, Jianming Liu, and Jihua Ye. "Sketch-Based 3D Model Retrieval Using Attributes." International Journal of Grid and High Performance Computing 10, no. 3 (July 2018): 60–75. http://dx.doi.org/10.4018/ijghpc.2018070105.

Abstract:
With the rapid growth of available 3D models on the Internet, retrieving 3D models from hand-drawn sketches is becoming increasingly important. This article proposes a new sketch-based 3D model retrieval approach. Unlike current methods that use low-level visual features to capture the search intention of users, the proposed method uses two kinds of semantic attributes: pre-defined attributes and latent attributes. Specifically, pre-defined attributes are defined manually and provide prior knowledge about different sketch categories, while latent attributes are more discriminative and can differentiate sketch categories at a finer level. These semantic attributes therefore provide a more descriptive and discriminative representation than low-level feature descriptors. The experimental results demonstrate that the proposed method achieves superior performance over previously proposed sketch-based 3D model retrieval methods.
29. Liu, Jinping. "Three-Dimensional Modeling Design and Color Correction Algorithm of Packaging Structure Based on Human Visual Characteristics." Mathematical Problems in Engineering 2022 (July 12, 2022): 1–12. http://dx.doi.org/10.1155/2022/2738900.

Abstract:
Product packaging design is a discipline that integrates graphic design and three-dimensional design. 3D visualization technology and digital virtual interaction technology in virtual reality are developing rapidly in packaging design applications. On the one hand, this study discusses the application of digital technology in the three-dimensional design of packaging structures, including packaging carton structure design, packaging container modeling design, and packaging two-dimensional visual design. On the other hand, it investigates a color correction algorithm for packaging design and proposes a correction based on the visual characteristics of the human eye. The pattern search algorithm finds only a local optimum, and the initial value affects the correction result. Therefore, inspired by the color perception properties of the human visual system, the image is corrected by combining visual multichannel processing and visual nonlinearity. The experimental results of the algorithm are more consistent with human visual perception.
30. Lynen, Simon, Bernhard Zeisl, Dror Aiger, Michael Bosse, Joel Hesch, Marc Pollefeys, Roland Siegwart, and Torsten Sattler. "Large-scale, real-time visual–inertial localization revisited." International Journal of Robotics Research 39, no. 9 (July 7, 2020): 1061–84. http://dx.doi.org/10.1177/0278364920931151.

Abstract:
The overarching goals in image-based localization are scale, robustness, and speed. In recent years, approaches based on local features and sparse 3D point-cloud models have both dominated the benchmarks and seen successful real-world deployment. They enable applications ranging from robot navigation, autonomous driving, virtual and augmented reality to device geo-localization. Recently, end-to-end learned localization approaches have been proposed which show promising results on small-scale datasets. However, the positioning accuracy, scalability, latency, and compute and storage requirements of these approaches remain open challenges. We aim to deploy localization at a global scale where one thus relies on methods using local features and sparse 3D models. Our approach spans from offline model building to real-time client-side pose fusion. The system compresses the appearance and geometry of the scene for efficient model storage and lookup leading to scalability beyond what has been demonstrated previously. It allows for low-latency localization queries and efficient fusion to be run in real-time on mobile platforms by combining server-side localization with real-time visual–inertial-based camera pose tracking. In order to further improve efficiency, we leverage a combination of priors, nearest-neighbor search, geometric match culling, and a cascaded pose candidate refinement step. This combination outperforms previous approaches when working with large-scale models and allows deployment at unprecedented scale. We demonstrate the effectiveness of our approach on a proof-of-concept system localizing 2.5 million images against models from four cities in different regions of the world achieving query latencies in the 200 ms range.
31. de Andrade, Diogo, Nuno Fachada, Carlos M. Fernandes, and Agostinho C. Rosa. "Generative Art with Swarm Landscapes." Entropy 22, no. 11 (November 12, 2020): 1284. http://dx.doi.org/10.3390/e22111284.

Abstract:
We present a generative swarm art project that creates 3D animations by running a Particle Swarm Optimization algorithm over synthetic landscapes produced by an objective function. Different kinds of functions are explored, including mathematical expressions, Perlin noise-based terrain, and several image-based procedures. A method for displaying the particle swarm exploring the search space in aesthetically pleasing ways is described. Several experiments are detailed and analyzed and a number of interesting visual artifacts are highlighted.
32. Silva, Édimo Sousa, and Maria Andréia Formico Rodrigues. "Design and Evaluation of a Gesture-Controlled System for Interactive Manipulation of Medical Images and 3D Models." Journal on Interactive Systems 5, no. 3 (December 30, 2014): 1. http://dx.doi.org/10.5753/jis.2014.726.

Abstract:
This work presents the design and evaluation of a gesture-controlled system for interactive manipulation of radiological images and 3D models using the Kinect device. Several abstractions have been implemented and refactored to improve system performance, making the application simpler at an affordable cost. Additionally, specific gestures to change the visualization settings of 3D models represented by layers were also successfully modeled. Further, we conducted systematic and detailed usability testing with users to obtain quantitative performance measures and qualitative analysis (usefulness, visual quality of the interface, ease of learning, ease of use, 3D spatial perception, level of interactivity, mental and physical fatigue, effectiveness and satisfaction). The results show that the participants are able to perform tasks of search, selection and manipulation of 2D images (zoom in/out and translations) and 3D models (zoom in/out and rotations) quickly and accurately, demonstrating the usefulness of the system as a possible effective and competitive alternative to the traditional use of the negatoscope.
33. Gupta, Ashish, Huan Chang, and Alper Yilmaz. "GPS-DENIED GEO-LOCALISATION USING VISUAL ODOMETRY." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences III-3 (June 3, 2016): 263–70. http://dx.doi.org/10.5194/isprsannals-iii-3-263-2016.

Abstract:
The primary method for geo-localization is based on GPS which has issues of localization accuracy, power consumption, and unavailability. This paper proposes a novel approach to geo-localization in a GPS-denied environment for a mobile platform. Our approach has two principal components: public domain transport network data available in GIS databases or OpenStreetMap; and a trajectory of a mobile platform. This trajectory is estimated using visual odometry and 3D view geometry. The transport map information is abstracted as a graph data structure, where various types of roads are modelled as graph edges and typically intersections are modelled as graph nodes. A search for the trajectory in real time in the graph yields the geo-location of the mobile platform. Our approach uses a simple visual sensor and it has a low memory and computational footprint. In this paper, we demonstrate our method for trajectory estimation and provide examples of geolocalization using public-domain map data. With the rapid proliferation of visual sensors as part of automated driving technology and continuous growth in public domain map data, our approach has the potential to completely augment, or even supplant, GPS based navigation since it functions in all environments.
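The trajectory-to-graph search described in the abstract above can be sketched as a depth-first match of odometry segment lengths against road-graph edges. This is an illustrative sketch under assumptions: the paper's matcher is not specified at this level of detail, and a real implementation would also exploit turn angles and road classes.

```python
def match_trajectory(graph, seg_lengths, tol=0.1):
    """Find node sequences whose consecutive edge lengths match the
    visual-odometry segment lengths within a relative tolerance.

    `graph` maps each node (intersection) to a list of
    (neighbor, edge_length) pairs, mirroring the abstract's model of
    intersections as nodes and roads as edges. Returns every matching
    path; a unique match geo-localizes the platform.
    """
    matches = []

    def dfs(node, idx, path):
        if idx == len(seg_lengths):
            matches.append(tuple(path))
            return
        for nxt, length in graph.get(node, []):
            if abs(length - seg_lengths[idx]) <= tol * seg_lengths[idx]:
                path.append(nxt)
                dfs(nxt, idx + 1, path)
                path.pop()

    for start in graph:
        dfs(start, 0, [start])
    return matches
```

As the trajectory grows, fewer graph paths remain consistent with it, which is why a longer drive narrows the estimate down to a single location.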
35. Wang, Wei, Xuefeng Hong, Sina Dang, Ning Xu, and Jue Qu. "3D Space Layout Design of Holographic Command Cabin Information Display in Mixed Reality Environment Based on HoloLens 2." Brain Sciences 12, no. 8 (July 23, 2022): 971. http://dx.doi.org/10.3390/brainsci12080971.

Abstract:
After the command and control information of the command cabin is displayed in the form of mixed reality, the large amount of real-time and static information it contains forms a dynamic situation that changes constantly. This places a great burden on the system operator's cognition, decision-making and operation. To solve this problem, this paper studies the three-dimensional spatial layout of holographic command cabin information display in a mixed reality environment. A total of 15 people participated in the experiment, of which 10 were subjects and 5 were staff assisting with the experiment. The ten subjects used the HoloLens 2 to conduct visual characteristics and cognitive load experiments, and their task completion times, error rates, eye movement and EEG data, and subjective evaluations were collected and analyzed. From the experimental data, the laws of visual and cognitive features of three-dimensional space in a mixed reality environment can be derived. This paper systematically explores the effects of three key attributes of information distribution in 3D space (depth distance, number of information layers, and relative position depth distance of targets) on visual search performance and cognitive load. The experimental results show the optimal depth distance ranges for information display in the mixed reality environment: the best depth distance for operation interactions (0.6 m~1.0 m), the best depth distance for accurate identification (2.4 m~2.8 m) and the best depth distance for overall situational awareness (3.4 m~3.6 m). Under a given angle of view, the number of information layers in the space should be as small as possible and should not exceed five. The relative position depth distance between the information layers in space ranges from 0.2 m to 0.35 m. Based on this theory, information layout in a 3D space can achieve a faster and more accurate visual search in a mixed reality environment and effectively reduce cognitive load.
36. Liu, Jian Ping, Bang Yan Ye, and Jian Xi Peng. "View Loops Separation from Engineering Drawings Based on Multi-Granularity Information Acquisition." Advanced Materials Research 108-111 (May 2010): 543–48. http://dx.doi.org/10.4028/www.scientific.net/amr.108-111.543.

Abstract:
To improve the validity and efficiency of feature recognition for 3D reconstruction from engineering drawings, this paper presents a new method of view loop separation based on multi-granularity information acquisition. By analyzing the line frames of combined primitive views and the hidden semantics in the engineering drawing, the combined relationships of graphic primitives can be identified; then, applying the line-frame partition method of figuration analysis, view loop separation of individual primitives is realized by an inertial loop search with pre-set priority. The algorithm is implemented on the AutoCAD 2008 platform with the development tools ObjectARX 2008 and Visual C# 2.0. With this algorithm, an application module of a 3D intelligent reconstruction system is developed, and the validity of the research is verified by examples.
37. Martínez, Pablo A., Mario Castelán, and Gustavo Arechavaleta. "Vision based persistent localization of a humanoid robot for locomotion tasks." International Journal of Applied Mathematics and Computer Science 26, no. 3 (September 1, 2016): 669–82. http://dx.doi.org/10.1515/amcs-2016-0046.

Abstract:
Typical monocular localization schemes involve a search for matches between reprojected 3D world points and 2D image features in order to estimate the absolute scale transformation between the camera and the world. Successfully calculating such a transformation implies the existence of a good number of 3D points uniformly distributed as reprojected pixels around the image plane. This paper presents a method to control the march of a humanoid robot towards directions that are favorable for visual based localization. To this end, orthogonal diagonalization is performed on the covariance matrices of both sets of 3D world points and their 2D image reprojections. Experiments with the NAO humanoid platform show that our method provides persistence of localization, as the robot tends to walk towards directions that are desirable for successful localization. Additional tests demonstrate how the proposed approach can be incorporated into a control scheme that considers reaching a target position.
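The orthogonal diagonalization step in the abstract above, applied to the covariance of the 2D reprojections, can be sketched in closed form for the 2x2 case. This is an assumed, illustrative formulation, not the paper's implementation.

```python
import math

def spread_eigenvalues(points):
    """Eigenvalues (largest first) of the 2x2 covariance matrix of a
    set of 2D image points, computed in closed form.

    A strongly anisotropic spread (large ratio between the two values)
    indicates reprojected features concentrated along one image
    direction, the kind of geometry the paper steers the robot away
    from. Sketch only.
    """
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    # Eigenvalues of [[sxx, sxy], [sxy, syy]] from trace and determinant.
    tr = sxx + syy
    det = sxx * syy - sxy * sxy
    d = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    return tr / 2.0 + d, tr / 2.0 - d
```

A walking controller could then prefer headings for which the two eigenvalues stay comparable, i.e. the reprojected points remain well spread over the image plane.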
38. Siemonsma, Stephen, and Tyler Bell. "HoloKinect: Holographic 3D Video Conferencing." Sensors 22, no. 21 (October 23, 2022): 8118. http://dx.doi.org/10.3390/s22218118.

Texte intégral
Résumé :
Recent world events have caused a dramatic rise in the use of video conferencing solutions such as Zoom and FaceTime. Although 3D capture and display technologies are becoming common in consumer products (e.g., Apple iPhone TrueDepth sensors, Microsoft Kinect devices, and Meta Quest VR headsets), 3D telecommunication has not yet seen any appreciable adoption. Researchers have made great progress in developing advanced 3D telepresence systems, but often with burdensome hardware and network requirements. In this work, we present HoloKinect, an open-source, user-friendly, and GPU-accelerated platform for enabling live, two-way 3D video conferencing on commodity hardware and a standard broadband internet connection. A Microsoft Azure Kinect serves as the capture device and a Looking Glass Portrait multiscopically displays the final reconstructed 3D mesh for a hologram-like effect. HoloKinect packs color and depth information into a single video stream, leveraging multiwavelength depth (MWD) encoding to store depth maps in standard RGB video frames. The video stream is compressed with highly optimized and hardware-accelerated video codecs such as H.264. A search of the depth and video encoding parameter space was performed to analyze the quantitative and qualitative losses resulting from HoloKinect’s lossy compression scheme. Visual results were acceptable at all tested bitrates (3–30 Mbps), while the best results were achieved with higher video bitrates and full 4:4:4 chroma sampling. RMSE values of the recovered depth measurements were low across all settings permutations.
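The core idea of packing depth into standard RGB video frames can be sketched as a simplified round trip — a single sine/cosine fringe pair plus a coarse channel, with `PERIOD` an assumed parameter; the actual multiwavelength depth (MWD) encoding in HoloKinect differs in detail.

```python
import numpy as np

PERIOD = 0.1  # fringe period in normalized depth units (assumed value)

def encode(depth):
    """Pack a normalized depth map in [0, 1) into three 'RGB' channels."""
    r = 0.5 + 0.5 * np.sin(2 * np.pi * depth / PERIOD)
    g = 0.5 + 0.5 * np.cos(2 * np.pi * depth / PERIOD)
    b = depth  # coarse channel, used to resolve the fringe order
    return np.stack([r, g, b], axis=-1)

def decode(rgb):
    """Recover depth: fine position from the fringe phase, order from the coarse channel."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    phase = np.arctan2(2 * r - 1, 2 * g - 1)   # in (-pi, pi]
    frac = (phase / (2 * np.pi)) % 1.0         # fractional position within a fringe
    order = np.round(b / PERIOD - frac)        # integer fringe order
    return (order + frac) * PERIOD
```

The sinusoidal channels survive lossy video compression better than raw depth, which is why such schemes pair well with codecs like H.264.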
Styles APA, Harvard, Vancouver, ISO, etc.
39

Rhee, S., S. Kim, H. R. Ahn et T. Kim. « COMPARING STEREO IMAGE MATCHING PERFORMANCE BY MULTIDIMENSIONAL SEARCH WINDOWS ». ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-4 (19 septembre 2018) : 523–27. http://dx.doi.org/10.5194/isprs-archives-xlii-4-523-2018.

Texte intégral
Résumé :
Image matching is a key technology for extracting dense point clouds and 3D terrain information from satellite and aerial imagery. In area-based matching on pixel brightness values, the size of the search window is an important factor in matching performance. In this study, we perform matching with multi-dimensional search windows applicable to area-based matching and compare the results. We also reconfigure the search window using linear features present in the image and repeat the matching. Comparing fixed-window and multi-window matching results confirms that, under the same conditions, multiple windows yield relatively high accuracy, and the method that applies line elements is slightly more accurate still. The line-element extraction technique recovers only a small fraction of the total image pixels, and visual analysis shows no significant difference; nevertheless, we confirmed that the technique contributes to improved accuracy.
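Area-based matching with a brightness window can be sketched as follows — a minimal, assumed implementation using zero-mean normalized cross-correlation along a single image row (the paper's multi-dimensional windows generalize this single fixed window):

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-sized windows."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_along_row(left, right, row, col, half=3, max_disp=20):
    """Find the disparity maximizing NCC between a window in the left image
    and candidate windows shifted along the same row in the right image."""
    win = left[row - half:row + half + 1, col - half:col + half + 1]
    best_d, best_score = 0, -np.inf
    for d in range(max_disp + 1):
        c = col - d
        if c - half < 0:
            break
        cand = right[row - half:row + half + 1, c - half:c + half + 1]
        score = ncc(win, cand)
        if score > best_score:
            best_score, best_d = score, d
    return best_d, best_score
```

The window half-size (`half`) is exactly the parameter whose influence the paper studies: larger windows are more robust but smear depth discontinuities.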
Styles APA, Harvard, Vancouver, ISO, etc.
40

Ji, Yijun, Qing Xia et Zhijiang Zhang. « Fusing Depth and Silhouette for Scanning Transparent Object with RGB-D Sensor ». International Journal of Optics 2017 (2017) : 1–11. http://dx.doi.org/10.1155/2017/9796127.

Texte intégral
Résumé :
3D reconstruction based on structured light or laser scanning has been widely used in industrial measurement, robot navigation, and virtual reality. However, most modern range sensors fail to scan transparent objects and some other special materials, whose surfaces cannot reflect back accurate depth because of the absorption and refraction of light. In this paper, we fuse the depth and silhouette information from an RGB-D sensor (Kinect v1) to recover the lost surfaces of transparent objects. Our system is divided into two parts. First, we use the zero and erroneous depth values caused by transparent materials across multiple views to search for the 3D region containing the transparent object. Then, based on shape-from-silhouette technology, we recover the 3D model by computing the visual hull within these noisy regions. Joint Grabcut segmentation is applied to multiple color images to extract the silhouettes, with the initial constraint for Grabcut determined automatically. Experiments validate that our approach improves the 3D models of transparent objects in real-world scenes. Our system is time-saving, robust, and requires no interactive operation throughout the process.
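The shape-from-silhouette step can be sketched with orthographic voxel carving — a deliberately simplified assumption, since the actual system uses calibrated perspective Kinect views. Each silhouette removes every voxel that projects outside it; the intersection is the visual hull.

```python
import numpy as np

def visual_hull(silhouettes):
    """Carve a cubic voxel grid using orthographic silhouettes along the three axes.
    silhouettes: dict of boolean masks 'x', 'y', 'z' (object projections)."""
    n = silhouettes['x'].shape[0]
    hull = np.ones((n, n, n), dtype=bool)
    hull &= silhouettes['x'][None, :, :]   # silhouette seen along axis 0
    hull &= silhouettes['y'][:, None, :]   # along axis 1
    hull &= silhouettes['z'][:, :, None]   # along axis 2
    return hull
```

The hull is always a superset of the true object, which is why the paper restricts carving to the noisy regions flagged by the depth errors.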
Styles APA, Harvard, Vancouver, ISO, etc.
41

Zhu, Libin, et Lihui Liu. « 3D Human Motion Posture Tracking Method Using Multilabel Transfer Learning ». Mobile Information Systems 2022 (8 août 2022) : 1–10. http://dx.doi.org/10.1155/2022/2211866.

Texte intégral
Résumé :
To overcome the high position and posture-angle tracking errors, the long tracking-loss and posture-update response times, and the low fitness of traditional human motion posture tracking methods, this paper proposes a three-dimensional (3D) human motion posture tracking method using multilabel transfer learning. According to the composition of the human structure and its degree-of-freedom constraints, a 3D human joint skeleton model is constructed to generate 3D human pose images, which are then denoised. Background differencing is used to detect the moving 3D human target. Using multilabel transfer learning, human motion posture features are extracted from joint positions and joint angles to estimate the 3D human motion posture. The tracking error is corrected by a three-step search, and the visual 3D human motion posture tracking result is output. The results show that, compared with traditional human motion posture tracking methods, the position and posture-angle tracking errors of the proposed method are 2.18 mm and 0.178 deg, respectively, with shorter tracking-loss and posture-update response times, demonstrating higher tracking accuracy and adaptability.
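The three-step search used for error correction is the classic block-matching scheme: probe the eight neighbours of the current position at a decreasing step size, re-centering on the best match each round. A minimal sketch on grayscale frames, with assumed block and step sizes:

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equal-sized blocks."""
    return np.abs(a.astype(float) - b.astype(float)).sum()

def three_step_search(prev, curr, top, left, block=8, step=4):
    """Find the motion vector of the block at (top, left) in curr relative to prev."""
    ref = curr[top:top + block, left:left + block]
    cy, cx = top, left
    while step >= 1:
        best = (sad(ref, prev[cy:cy + block, cx:cx + block]), cy, cx)
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                y, x = cy + dy, cx + dx
                if 0 <= y <= prev.shape[0] - block and 0 <= x <= prev.shape[1] - block:
                    cost = sad(ref, prev[y:y + block, x:x + block])
                    if cost < best[0]:
                        best = (cost, y, x)
        _, cy, cx = best
        step //= 2
    return cy - top, cx - left   # motion vector (dy, dx)
```

With the default steps 4, 2, 1, only 25 candidate positions are evaluated instead of the 225 of an exhaustive ±7 search, which is the appeal of the method for real-time tracking.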
Styles APA, Harvard, Vancouver, ISO, etc.
42

FREMONT, VINCENT, RYAD CHELLALI et JEAN-GUY FONTAINE. « GENERALIZATION OF THE DESARGUES THEOREM FOR SPARSE 3D RECONSTRUCTION ». International Journal of Humanoid Robotics 06, no 01 (mars 2009) : 49–69. http://dx.doi.org/10.1142/s0219843609001644.

Texte intégral
Résumé :
Visual perception for walking machines needs to handle more degrees of freedom than for wheeled robots. For humanoid, four- or six-legged robots, camera motion is 6D instead of 3D or planar motion. Classical 3D reconstruction methods cannot be applied directly, because explicit sensor motion is needed. In this paper, we propose an algorithm for 3D reconstruction of an unstructured environment using motion-free uncalibrated single camera. Computer vision techniques are employed to obtain an incremental geometrical reconstruction of the environment, therefore using vision as a sensor for robot control tasks like navigation, obstacle avoidance, manipulation, tracking, etc. and 3D model acquisition. The main contribution is that the offline 3D reconstruction problem is considered as a point trajectory search through the video stream. The algorithm takes into account the temporal aspect of the sequence of images in order to have an analytical expression of the geometrical locus of the point trajectories through the sequence of images. The approach is a generalization of the Desargues theorem applied to multiple views taken from nearby viewpoints. Experiments on both synthetic and real image sequences show the simplicity and efficiency of the proposed method. This method provides an alternative technical solution easy to use, flexible in the context of robotic applications and can significantly improve the 3D estimation accuracy.
Styles APA, Harvard, Vancouver, ISO, etc.
43

Su, Shijie, Chao Wang, Ke Chen, Jian Zhang et Hui Yang. « MPCR-Net : Multiple Partial Point Clouds Registration Network Using a Global Template ». Applied Sciences 11, no 22 (9 novembre 2021) : 10535. http://dx.doi.org/10.3390/app112210535.

Texte intégral
Résumé :
With advancements in photoelectric technology and computer image processing, visual measurement methods based on point clouds are gradually being applied to the 3D measurement of large workpieces. Point cloud registration is a key step in 3D measurement, and its accuracy directly affects that of the measurements. In this study, we designed MPCR-Net, a novel multiple partial point cloud registration network. First, an ideal point cloud is extracted from the CAD model of the workpiece and used as the global template. Next, a deep neural network searches for corresponding point groups between each partial point cloud and the global template point cloud. Then, a rigid body transformation matrix is learned from these correspondence groups to register each partial point cloud. Finally, the iterative closest point algorithm optimizes the registration results to obtain the final point cloud model of the workpiece. We conducted point cloud registration experiments on untrained models and actual workpieces, and by comparison with existing point cloud registration methods, we verified that MPCR-Net improves the accuracy and robustness of 3D point cloud registration.
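Once correspondence point groups are known, recovering the rigid body transformation reduces to the classic least-squares alignment problem. A minimal SVD-based sketch (the Kabsch/Umeyama closed form, not the paper's learned network):

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ≈ src @ R.T + t,
    given known point correspondences (Nx3 arrays)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

This same closed form is what one ICP iteration applies after its nearest-neighbour correspondence step, so it doubles as the refinement stage the paper mentions.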
Styles APA, Harvard, Vancouver, ISO, etc.
44

Qiu, Yake, et Jingyu Tao. « The Development and Realization of Digital Panorama Media Technology Based on VR Technology ». Computational Intelligence and Neuroscience 2022 (28 avril 2022) : 1–7. http://dx.doi.org/10.1155/2022/1023865.

Texte intégral
Résumé :
The purpose of this paper is to examine digital 3D multimedia panoramic visual communication technology based on virtual reality. First, the key concepts and characteristics of virtual reality are introduced, including the development and application of digital three-dimensional panorama technology. Then, building on this theoretical groundwork, basic techniques of 3D panoramic image mosaicking are introduced, including camera imaging models, image sharing, and image exchange. Finally, in developing a virtual tour of a normal university campus, the hardware requirements of panoramic technology and the demands of panoramic image search are addressed, and practical issues in panoramic mosaicking, panoramic image generation, and virtual campus construction are considered. The innovation of this paper lies in organically combining the geometric 3D virtual scene built with SketchUp 8.0 and the image-based 3D virtual scene built from cylindrical panoramic images, so that the panoramic imagery can change in real time with the seasons of the real scene, enhancing the realism of the system and the user's sense of immersion.
Styles APA, Harvard, Vancouver, ISO, etc.
45

Oscar, Dominic, Jeanny Pragantha et Darius Andana Haris. « PEMBUATAN GAME FIRST PERSON SHOOTER “FIND ME ! SHOOT ME!” DENGAN FITUR SPLIT SCREEN ». Computatio : Journal of Computer Science and Information Systems 2, no 1 (22 mai 2018) : 45. http://dx.doi.org/10.24912/computatio.v2i1.1479.

Texte intégral
Résumé :
“Find Me! Shoot Me!” is a First Person Shooter game with a split-screen feature on a single display and invisible characters, i.e., the characters' visual models cannot be seen. The game was created to provide a First Person Shooter experience different from the usual one. It has a 3D display, runs on a Windows-based computer or laptop, and is controlled with a joystick. The game was created using Unity with C# as the programming language. Players are assigned to search for and kill the other players. The game ends when one of the players has reached a predetermined point. This multiplayer game is played by at least two and at most four players.
Styles APA, Harvard, Vancouver, ISO, etc.
46

Wang, Huaping, Kailun Bai, Juan Cui, Qing Shi, Tao Sun, Qiang Huang, Paolo Dario et Toshio Fukuda. « Three-Dimensional Autofocusing Visual Feedback for Automated Rare Cells Sorting in Fluorescence Microscopy ». Micromachines 10, no 9 (27 août 2019) : 567. http://dx.doi.org/10.3390/mi10090567.

Texte intégral
Résumé :
Sorting rare cells from heterogeneous mixtures makes a significant contribution to biological research and medical treatment. However, the performance of traditional methods is limited by time-consuming preparation, poor purity, and low recovery rates. In this paper, we propose a cell screening method based on an automated microrobotic aspirate-and-place strategy under fluorescence microscopy. A fast autofocusing visual feedback (FAVF) method is introduced for precise, real-time three-dimensional (3D) localization. Within this method, scalable correlation coefficient (SCC) matching locates cells in the image plane, with regions of interest (ROI) created for autofocusing; when overlaps occur, target cells are separated by a segmentation algorithm. To cope with the shallow depth of field (DOF) of the microscope, an improved multiple depth from defocus (MDFD) algorithm performs depth detection, taking 850 ms per measurement with an accuracy rate of 96.79%. A neighborhood-search-based algorithm tracks the micropipette. Finally, experiments screening NIH/3T3 (mouse embryonic fibroblast) cells verify the feasibility and validity of the method, with an average speed of 5 cells/min, 95% purity, and an 80% recovery rate. Moreover, versatile functions such as cell counting and injection could be added to this expandable system.
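Depth-from-defocus autofocusing rests on scoring the sharpness of each slice in a focal stack. A generic variance-of-Laplacian sketch — illustrative only, as the paper's MDFD algorithm differs:

```python
import numpy as np

def focus_measure(img):
    """Variance-of-Laplacian sharpness score: higher when the image is in focus."""
    lap = (-4 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return lap.var()

def best_focus(stack):
    """Index of the sharpest slice in a focal stack (list of 2D arrays)."""
    return int(np.argmax([focus_measure(f) for f in stack]))
```

Defocus blur suppresses high spatial frequencies, so the Laplacian response collapses away from the focal plane; the arg-max over the stack gives the in-focus depth.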
Styles APA, Harvard, Vancouver, ISO, etc.
47

Bilbokaitė, Renata. « THE BOYS’ NEED TO FOR SCIENTIFIC VISUALIZATION IN THE INTERNET ». GAMTAMOKSLINIS UGDYMAS / NATURAL SCIENCE EDUCATION 8, no 2 (25 juin 2011) : 33–39. http://dx.doi.org/10.48127/gu-nse/11.8.33a.

Texte intégral
Résumé :
The term visualization is defined as the representation of images, but this is only a brief explanation, because the term encompasses all kinds of information represented in various forms of codes. Priority among information codes has shifted from verbal to visual. The computer screen has enabled individuals to see invisible and difficult objects and has made cognitive and social processes easier. Visualization tools hold the strongest position among teaching/learning tools because of features such as complex 3D views, spatial relationships, parameters of moving objects, and comprehensible representation of images. Visualization takes into account schoolchildren's perceptive and cognitive abilities, including visual thinking and meta-cognition. The authors' previous research revealed statistically significant differences between two main categories, showing that boys search for visualizations on the internet more than girls. This raises the question of what caused the difference. The aim is to find out from the boys themselves why they search for visualizations more than girls. The results show that boys search for visualizations on the internet more frequently than girls, and two main reasons explain the data. First, boys search for visualizations for learning purposes and to meet their own requirements: they want deeper knowledge and report that visualization stimulates their cognitive processes and motivation. Second, boys are noisier than girls during lessons, do not concentrate on the topics, and often do not listen to the teacher; at home they then feel a lack of knowledge and want to compensate for it by searching for visual information. Boys also feel pleasure when searching the internet. Key words: science education, visualization, need.
Styles APA, Harvard, Vancouver, ISO, etc.
48

Bouarara, Hadj Ahmed, Reda Mohamed Hamou, Abdelmalek Amine et Amine Rahmani. « A Fireworks Algorithm for Modern Web Information Retrieval with Visual Results Mining ». International Journal of Swarm Intelligence Research 6, no 3 (juillet 2015) : 1–23. http://dx.doi.org/10.4018/ijsir.2015070101.

Texte intégral
Résumé :
The popularization of computers, the number of electronic documents available online and offline, and the explosion of electronic communication have deeply changed the relationship between people and information. Nowadays we are awash in a rising tide of information, and the web has affected almost every aspect of our lives; the development of automatic tools for efficient access to this huge amount of digital information has become a necessity. This paper unveils a new web information retrieval system based on the fireworks algorithm (FWA-IR). It relies on random explosions of fireworks and a set of operators (displacement, mapping, mutation, and selection). Each firework explosion is a potential solution to the user's need (query): it generates a set of sparks (documents) with two locations (relevant and irrelevant). Our experiments were performed on the MEDLARS dataset using validation measures (recall, precision, f-measure, silence, noise, and accuracy), studying the sensitive parameters of the technique (initial location number, iteration number, mutation probability, fitness function, selection method, text representation, and distance measure) to show the benefit of this approach over other methods in the literature (tabu search, simulated annealing, and a naïve method). Finally, a result-mining tool renders the outcome in graphical form (3D cube and cobweb) with zooming and rotation for greater realism.
Styles APA, Harvard, Vancouver, ISO, etc.
49

Lysenko, A. V., A. Y. Razumova, A. I. Yaremenko, V. M. Ivanov, S. V. Strelkov et A. A. Grigoriev. « A modern approach to planning and surgical removal of a foreign body from a maxillary sinus : a clinical case ». Parodontologiya 27, no 3 (22 septembre 2022) : 258–62. http://dx.doi.org/10.33925/1683-3759-2022-27-3-258-262.

Texte intégral
Résumé :
Relevance. If a foreign body is present in a maxillary sinus, it should be surgically removed; endoscopic and radical surgery are the main methods. The surgical access is often determined by the clinician's subjective judgment, which can cause complications. Therefore, the search for new methods of planning and visualizing the stages of the operation remains relevant. Materials and methods. Before the operation, the patient underwent cone-beam computed tomography in a marker-holder frame. The 3D Slicer program allowed segmentation of the foreign body and the surrounding anatomical features. A marker fixed on the patient's head transmitted information to the augmented reality glasses during the operation. Results. The surgery was performed under local anesthesia in an outpatient facility. The diameter of the antrotomy opening was 5 mm. No postoperative complications were recorded. Conclusion. The proposed technique provides significant visual control and minimal trauma to the sinus during surgery.
Styles APA, Harvard, Vancouver, ISO, etc.
50

Schätz, Martin, Olga Rubešová, Jan Mareš, David Girsa et Alan Spark. « Estimation of Covid-19 lungs damage based on computer tomography images analysis ». F1000Research 11 (17 mars 2022) : 326. http://dx.doi.org/10.12688/f1000research.109020.1.

Texte intégral
Résumé :
Modern treatment is based on reproducible, quantitative analysis of the available data. The Covid-19 pandemic accelerated development and research in several multidisciplinary areas, one of which is the use of software tools for faster and more reproducible evaluation of patient data. A CT scan can be invaluable for finding details, but it is not always easy to see the big picture in 3D data, and even in slice-by-slice visual analysis of a CT, inter- and intra-observer variability can make a big difference. We present an ImageJ tool, developed together with the radiology center of the Faculty Hospital Královské Vinohrady, for CT evaluation of patients with COVID-19. The tool was developed to help estimate the percentage of the lungs affected by the infection. Patients can be divided into five groups based on the percentage score, and the appropriate treatment can be applied.
Styles APA, Harvard, Vancouver, ISO, etc.