Journal articles on the topic "Multimodal object tracking"

To view other types of publications on this topic, follow the link: Multimodal object tracking.

Format your citation in APA, MLA, Chicago, Harvard, and other styles


Consult the top 37 journal articles for your research on the topic "Multimodal object tracking".

Next to every entry in the list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, whenever these are available in the metadata.

Browse journal articles across a wide range of disciplines and compile your bibliography correctly.

1

Zhang, Liwei, Jiahong Lai, Zenghui Zhang, Zhen Deng, Bingwei He, and Yucheng He. "Multimodal Multiobject Tracking by Fusing Deep Appearance Features and Motion Information." Complexity 2020 (September 25, 2020): 1–10. http://dx.doi.org/10.1155/2020/8810340.

Abstract:
Multiobject Tracking (MOT) is one of the most important abilities of autonomous driving systems. However, most of the existing MOT methods only use a single sensor, such as a camera, which has the problem of insufficient reliability. In this paper, we propose a novel Multiobject Tracking method by fusing deep appearance features and motion information of objects. In this method, the locations of objects are first determined based on a 2D object detector and a 3D object detector. We use the Nonmaximum Suppression (NMS) algorithm to combine the detection results of the two detectors to ensure the detection accuracy in complex scenes. After that, we use Convolutional Neural Network (CNN) to learn the deep appearance features of objects and employ Kalman Filter to obtain the motion information of objects. Finally, the MOT task is achieved by associating the motion information and deep appearance features. A successful match indicates that the object was tracked successfully. A set of experiments on the KITTI Tracking Benchmark shows that the proposed MOT method can effectively perform the MOT task. The Multiobject Tracking Accuracy (MOTA) is up to 76.40% and the Multiobject Tracking Precision (MOTP) is up to 83.50%.
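To make the association step concrete, the following minimal Python sketch combines an appearance cost (cosine distance between CNN embeddings) with a motion cost (distance to the Kalman-predicted position) into a single assignment problem. It is not the authors' implementation; the feature shapes, the weight w, and the gating threshold are illustrative assumptions.

```python
# Sketch of detection-to-track association fusing appearance and motion cues.
import numpy as np
from scipy.optimize import linear_sum_assignment

def appearance_cost(track_feats, det_feats):
    """Cosine distance between L2-normalised CNN appearance embeddings."""
    t = track_feats / np.linalg.norm(track_feats, axis=1, keepdims=True)
    d = det_feats / np.linalg.norm(det_feats, axis=1, keepdims=True)
    return 1.0 - t @ d.T                                  # (num_tracks, num_dets)

def motion_cost(track_pred, det_pos):
    """Distance between Kalman-predicted track positions and detected positions."""
    return np.linalg.norm(track_pred[:, None, :] - det_pos[None, :, :], axis=2)

def associate(track_feats, track_pred, det_feats, det_pos, w=0.5, gate=1.0):
    cost = w * appearance_cost(track_feats, det_feats) \
         + (1.0 - w) * motion_cost(track_pred, det_pos)
    rows, cols = linear_sum_assignment(cost)              # Hungarian assignment
    # Matches above the gate are rejected; unmatched detections start new tracks.
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < gate]
```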
2

Kota, John S., and Antonia Papandreou-Suppappola. "Joint Design of Transmit Waveforms for Object Tracking in Coexisting Multimodal Sensing Systems." Sensors 19, no. 8 (April 12, 2019): 1753. http://dx.doi.org/10.3390/s19081753.

Abstract:
We examine a multiple object tracking problem by jointly optimizing the transmit waveforms used in a multimodal system. Coexisting sensors in this system were assumed to share the same spectrum. Depending on the application, a system can include radars tracking multiple targets or multiuser wireless communications and a radar tracking both multiple messages and a target. The proposed spectral coexistence approach was based on designing all transmit waveforms to have the same time-varying phase function while optimizing desirable performance metrics. Considering the scenario of tracking a target with a pulse–Doppler radar and multiple user messages, two signaling schemes were proposed after selecting the waveform parameters to first minimize multiple access interference. The first scheme is based on system interference minimization, whereas the second scheme explores the multiobjective optimization tradeoff between system interference and object parameter estimation error. Simulations are provided to demonstrate the performance tradeoffs due to different system requirements.
3

Muresan, Mircea Paul, Ion Giosan, and Sergiu Nedevschi. "Stabilization and Validation of 3D Object Position Using Multimodal Sensor Fusion and Semantic Segmentation." Sensors 20, no. 4 (February 18, 2020): 1110. http://dx.doi.org/10.3390/s20041110.

Abstract:
The stabilization and validation process of the measured position of objects is an important step for high-level perception functions and for the correct processing of sensory data. The goal of this process is to detect and handle inconsistencies between different sensor measurements, which result from the perception system. The aggregation of the detections from different sensors consists in the combination of the sensorial data in one common reference frame for each identified object, leading to the creation of a super-sensor. The result of the data aggregation may end up with errors such as false detections, misplaced object cuboids or an incorrect number of objects in the scene. The stabilization and validation process is focused on mitigating these problems. The current paper proposes four contributions for solving the stabilization and validation task, for autonomous vehicles, using the following sensors: trifocal camera, fisheye camera, long-range RADAR (Radio detection and ranging), and 4-layer and 16-layer LIDARs (Light Detection and Ranging). We propose two original data association methods used in the sensor fusion and tracking processes. The first data association algorithm is created for tracking LIDAR objects and combines multiple appearance and motion features in order to exploit the available information for road objects. The second novel data association algorithm is designed for trifocal camera objects and has the objective of finding measurement correspondences to sensor fused objects such that the super-sensor data are enriched by adding the semantic class information. The implemented trifocal object association solution uses a novel polar association scheme combined with a decision tree to find the best hypothesis–measurement correlations. Another contribution we propose for stabilizing object position and unpredictable behavior of road objects, provided by multiple types of complementary sensors, is the use of a fusion approach based on the Unscented Kalman Filter and a single-layer perceptron. The last novel contribution is related to the validation of the 3D object position, which is solved using a fuzzy logic technique combined with a semantic segmentation image. The proposed algorithms have a real-time performance, achieving a cumulative running time of 90 ms, and have been evaluated using ground truth data extracted from a high-precision GPS (global positioning system) with 2 cm accuracy, obtaining an average error of 0.8 m.
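As a rough illustration of the filtering stage described above, the sketch below runs a constant-velocity Unscented Kalman Filter over fused position measurements of a single object. It assumes the filterpy library and illustrative noise values; it is not the paper's fusion stack, which also involves a single-layer perceptron and per-sensor association.

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

dt = 0.05  # sensor cycle time in seconds (assumption)

def fx(x, dt):
    """State transition for [px, py, vx, vy] under constant velocity."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)
    return F @ x

def hx(x):
    """Measurement model: the fused super-sensor reports position only."""
    return x[:2]

points = MerweScaledSigmaPoints(n=4, alpha=0.1, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=4, dim_z=2, dt=dt, fx=fx, hx=hx, points=points)
ukf.R = np.diag([0.2, 0.2]) ** 2     # measurement noise (illustrative)
ukf.Q = np.eye(4) * 1e-3             # process noise (illustrative)

for z in [(10.0, 3.1), (10.4, 3.0), (10.9, 3.2)]:   # fused position measurements
    ukf.predict()
    ukf.update(np.array(z))
    print(ukf.x[:2])                 # stabilised object position estimate
```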
4

Motlicek, Petr, Stefan Duffner, Danil Korchagin, Hervé Bourlard, Carl Scheffler, Jean-Marc Odobez, Giovanni Del Galdo, Markus Kallinger, and Oliver Thiergart. "Real-Time Audio-Visual Analysis for Multiperson Videoconferencing." Advances in Multimedia 2013 (2013): 1–21. http://dx.doi.org/10.1155/2013/175745.

Abstract:
We describe the design of a system consisting of several state-of-the-art real-time audio and video processing components enabling multimodal stream manipulation (e.g., automatic online editing for multiparty videoconferencing applications) in open, unconstrained environments. The underlying algorithms are designed to allow multiple people to enter, interact, and leave the observable scene with no constraints. They comprise continuous localisation of audio objects and its application for spatial audio object coding, detection, and tracking of faces, estimation of head poses and visual focus of attention, detection and localisation of verbal and paralinguistic events, and the association and fusion of these different events. Combined all together, they represent multimodal streams with audio objects and semantic video objects and provide semantic information for stream manipulation systems (like a virtual director). Various experiments have been performed to evaluate the performance of the system. The obtained results demonstrate the effectiveness of the proposed design, the various algorithms, and the benefit of fusing different modalities in this scenario.
5

Monir, Islam A., Mohamed W. Fakhr, and Nashwa El-Bendary. "Multimodal deep learning model for human handover classification." Bulletin of Electrical Engineering and Informatics 11, no. 2 (April 1, 2022): 974–85. http://dx.doi.org/10.11591/eei.v11i2.3690.

Abstract:
Giving and receiving objects between humans and robots is a critical task which collaborative robots must be able to do. In order for robots to achieve that, they must be able to classify different types of human handover motions. Previous works did not focus on classifying the motion type from both giver and receiver perspectives; instead, they focused solely on object grasping, handover detection, and handover classification from one side only (giver/receiver). This paper discusses the design and implementation of different deep learning architectures with a long short-term memory (LSTM) network and different feature selection techniques for human handover classification from both giver and receiver perspectives. Classification performance while using unimodal and multimodal deep learning models is investigated. The data used for evaluation is a publicly available dataset with four different modalities: motion tracking sensor readings, Kinect readings for 15 joint positions, 6-axis inertial sensor readings, and video recordings. The multimodality added a huge boost to the classification performance, achieving 96% accuracy with the feature selection based deep learning architecture.
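A two-branch recurrent model of the kind the abstract describes could look like the following Keras sketch, where one LSTM encodes the skeleton stream and another the inertial stream before fusion. The sequence length, layer sizes, and the four output classes are assumptions, not the authors' architecture.

```python
# Hedged sketch of a two-branch multimodal LSTM classifier.
import tensorflow as tf
from tensorflow.keras import layers, Model

T = 100                      # timesteps per handover clip (assumption)
joints_in = layers.Input(shape=(T, 45), name="kinect_joints")   # 15 joints x 3
imu_in    = layers.Input(shape=(T, 6),  name="imu")             # 6-axis inertial

x1 = layers.LSTM(64)(joints_in)          # temporal encoding of skeleton stream
x2 = layers.LSTM(32)(imu_in)             # temporal encoding of inertial stream
fused = layers.concatenate([x1, x2])     # multimodal fusion by concatenation
out = layers.Dense(4, activation="softmax")(fused)   # e.g. 4 handover classes

model = Model([joints_in, imu_in], out)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```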
6

Shibuya, Masaki, Kengo Ohnishi, and Isamu Kajitani. "Networked Multimodal Sensor Control of Powered 2-DOF Wrist and Hand." Journal of Robotics 2017 (2017): 1–12. http://dx.doi.org/10.1155/2017/7862178.

Abstract:
A prosthetic limb control system to operate powered 2-DOF wrist and 1-DOF hand with environmental information, myoelectric signal, and forearm posture signal is composed and evaluated. Our concept model on fusing biosignal and environmental information for easier manipulation with upper limb prosthesis is assembled utilizing networking software and prosthetic component interlink platform. The target is to enhance the controllability of the powered wrist’s orientation by processing the information to derive the joint movement in a physiologically appropriate manner. We applied a manipulative skill model of prehension which is constrained by forearm properties, grasping object properties, and task. The myoelectric and forearm posture sensor signals were combined with the work plane posture and the operation mode for grasping object properties. To verify the reduction of the operational load with the proposed method, we conducted 2 performance tests: system performance test to identify the powered 2-DOF wrist’s tracking performance and user operation tests. From the system performance experiment, the fusion control was confirmed to be sufficient to control the wrist joint with respect to the work plane posture. Forearm posture angle ranges were reduced when the prosthesis was operated companying environmental information in the user operation tests.
7

Kandylakis, Zacharias, Konstantinos Vasili, and Konstantinos Karantzalos. "Fusing Multimodal Video Data for Detecting Moving Objects/Targets in Challenging Indoor and Outdoor Scenes." Remote Sensing 11, no. 4 (February 21, 2019): 446. http://dx.doi.org/10.3390/rs11040446.

Abstract:
Single sensor systems and standard optical—usually RGB CCTV video cameras—fail to provide adequate observations, or the amount of spectral information required to build rich, expressive, discriminative features for object detection and tracking tasks in challenging outdoor and indoor scenes under various environmental/illumination conditions. Towards this direction, we have designed a multisensor system based on thermal, shortwave infrared, and hyperspectral video sensors and propose a processing pipeline able to perform in real-time object detection tasks despite the huge amount of the concurrently acquired video streams. In particular, in order to avoid the computationally intensive coregistration of the hyperspectral data with other imaging modalities, the initially detected targets are projected through a local coordinate system on the hypercube image plane. Regarding the object detection, a detector-agnostic procedure has been developed, integrating both unsupervised (background subtraction) and supervised (deep learning convolutional neural networks) techniques for validation purposes. The detected and verified targets are extracted through the fusion and data association steps based on temporal spectral signatures of both target and background. The quite promising experimental results in challenging indoor and outdoor scenes indicated the robust and efficient performance of the developed methodology under different conditions like fog, smoke, and illumination changes.
8

Kim, Jongwon, and Jeongho Cho. "RGDiNet: Efficient Onboard Object Detection with Faster R-CNN for Air-to-Ground Surveillance." Sensors 21, no. 5 (March 1, 2021): 1677. http://dx.doi.org/10.3390/s21051677.

Abstract:
An essential component for the autonomous flight or air-to-ground surveillance of a UAV is an object detection device. It must possess a high detection accuracy and requires real-time data processing to be employed for various tasks such as search and rescue, object tracking and disaster analysis. With the recent advancements in multimodal data-based object detection architectures, autonomous driving technology has significantly improved, and the latest algorithm has achieved an average precision of up to 96%. However, these remarkable advances may be unsuitable for the image processing of UAV aerial data directly onboard for object detection because of the following major problems: (1) Objects in aerial views generally have a smaller size than in an image and they are uneven and sparsely distributed throughout an image; (2) Objects are exposed to various environmental changes, such as occlusion and background interference; and (3) The payload weight of a UAV is limited. Thus, we propose employing a new real-time onboard object detection architecture, an RGB aerial image and a point cloud data (PCD) depth map image network (RGDiNet). A faster region-based convolutional neural network was used as the baseline detection network and an RGD, an integration of the RGB aerial image and the depth map reconstructed by the light detection and ranging PCD, was utilized as an input for computational efficiency. Performance tests and evaluation of the proposed RGDiNet were conducted under various operating conditions using hand-labeled aerial datasets. Consequently, it was shown that the proposed method has a superior performance for the detection of vehicles and pedestrians than conventional vision-based methods.
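The "RGD" input can be pictured as an RGB image in which one channel is swapped for a depth map rendered from the LiDAR point cloud. The sketch below shows one plausible way to build such an input; the pinhole projection, the channel arrangement, and all names are assumptions rather than the paper's exact preprocessing.

```python
import numpy as np

def lidar_to_depth_map(points_cam, K, h, w):
    """Project camera-frame LiDAR points (N, 3) into a sparse depth image."""
    depth = np.zeros((h, w), dtype=np.float32)
    uvz = (K @ points_cam.T).T                    # pinhole projection with 3x3 K
    u, v, z = uvz[:, 0] / uvz[:, 2], uvz[:, 1] / uvz[:, 2], uvz[:, 2]
    valid = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth[v[valid].astype(int), u[valid].astype(int)] = z[valid]
    return depth

def make_rgd(rgb, depth):
    """Stack R, G and a normalised depth channel into a 3-channel detector input."""
    d = depth / (depth.max() + 1e-6)
    return np.dstack([rgb[..., 0], rgb[..., 1], (d * 255).astype(rgb.dtype)])
```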
9

Popp, Constantin, and Damian T. Murphy. "Creating Audio Object-Focused Acoustic Environments for Room-Scale Virtual Reality." Applied Sciences 12, no. 14 (July 20, 2022): 7306. http://dx.doi.org/10.3390/app12147306.

Abstract:
Room-scale virtual reality (VR) affordance in movement and interactivity causes new challenges in creating virtual acoustic environments for VR experiences. Such environments are typically constructed from virtual interactive objects that are accompanied by an Ambisonic bed and an off-screen (“invisible”) music soundtrack, with the Ambisonic bed, music, and virtual acoustics describing the aural features of an area. This methodology can become problematic in room-scale VR as the player cannot approach or interact with such background sounds, contradicting the player’s motion aurally and limiting interactivity. Written from a sound designer’s perspective, the paper addresses these issues by proposing a musically inclusive novel methodology that reimagines an acoustic environment predominately using objects that are governed by multimodal rule-based systems and spatialized in six degrees of freedom using 3D binaural audio exclusively while minimizing the use of Ambisonic beds and non-diegetic music. This methodology is implemented using off-the-shelf, creator-oriented tools and methods and is evaluated through the development of a standalone, narrative, prototype room-scale VR experience. The experience’s target platform is a mobile, untethered VR system based on head-mounted displays, inside-out tracking, head-mounted loudspeakers or headphones, and hand-held controllers. The authors apply their methodology to the generation of ambiences based on sound-based music, sound effects, and virtual acoustics. The proposed methodology benefits the interactivity and spatial behavior of virtual acoustic environments but may be constrained by platform and project limitations.
10

Birchfield, David, and Mina Johnson-Glenberg. "A Next Gen Interface for Embodied Learning." International Journal of Gaming and Computer-Mediated Simulations 2, no. 1 (January 2010): 49–58. http://dx.doi.org/10.4018/jgcms.2010010105.

Abstract:
Emerging research from the learning sciences and human-computer interaction supports the premise that learning is effective when it is embodied, collaborative, and multimodal. In response, we have developed a mixed-reality environment called the Situated Multimedia Arts Learning Laboratory (SMALLab). SMALLab enables multiple students to interact with one another and digitally mediated elements via 3D movements and gestures in real physical space. It uses 3D object tracking, real time graphics, and surround-sound to enhance learning. We present two studies from the earth science domain that address questions regarding the feasibility and efficacy of SMALLab in a classroom context. We present data demonstrating that students learn more during a recent SMALLab intervention compared to regular classroom instruction. We contend that well-designed, mixed-reality environments have much to offer STEM learners, and that the learning gains transcend those that can be expected from more traditional classroom procedures.
11

Becerra, Victor, Francisco J. Perales, Miquel Roca, José M. Buades, and Margaret Miró-Julià. "A Wireless Hand Grip Device for Motion and Force Analysis." Applied Sciences 11, no. 13 (June 29, 2021): 6036. http://dx.doi.org/10.3390/app11136036.

Abstract:
A prototype portable device that allows for simultaneous hand and finger motion and precise force measurements has been developed. Wireless microelectromechanical systems based on inertial and force sensors are suitable for tracking bodily measurements. In particular, they can be used for hand interaction with computer applications. Our interest is to design a multimodal wireless hand grip device that measures and evaluates this activity for ludic or medical rehabilitation purposes. The accuracy and reliability of the proposed device have been evaluated against two different commercial dynamometers (Takei model 5101 TKK, Constant 14192-709E). We introduce a testing application to provide visual feedback of all device signals. The combination of interaction forces and movements makes it possible to simulate the dynamic characteristics of the handling of a virtual object by fingers and palm in rehabilitation applications or serious games. The combination of the above-mentioned technologies and open, portable software is very useful in the design of applications for assistance and rehabilitation purposes, which is the main objective of the device.
12

Westin, Thomas, José Neves, Peter Mozelius, Carla Sousa, and Lara Mantovan. "Inclusive AR-games for Education of Deaf Children: Challenges and Opportunities." European Conference on Games Based Learning 16, no. 1 (September 29, 2022): 597–604. http://dx.doi.org/10.34190/ecgbl.16.1.588.

Abstract:
Game-based learning has had a rapid development in the 21st century, attracting an increasing audience. However, inclusion of all is still not a reality in society, with accessibility for deaf and hard of hearing children as a remaining challenge. To be excluded from learning due to communication barriers can have severe consequences for further studies and work. Based on previous research Augmented Reality (AR) games can be joyful learning tools that include activities with different sign languages, but AR based learning games for deaf and hard of hearing lack research. This paper aims to present opportunities and challenges of designing inclusive AR games for education of deaf children. Methods involved conducting a scoping review of previous studies about AR for deaf people. Experts were involved as co-authors for in-depth understanding of sign languages and challenges for deaf people. A set of AR input and output techniques were analysed for appropriateness, and various AR based game mechanics were compared. Results indicate that inclusive AR gameplay for deaf people could be built on AR based image and object tracking, complemented with sign recognition. These technologies provide input from the user and the real-world environment typically via the camera to the app. Scene tracking and GPS can be used for location-based game mechanics. Output to the user can be done via local signed videos ideally, but also with images and animations. Moreover, a civic intelligence approach can be applied to overcome many of the challenges that have been identified in five dimensions for inclusion of deaf people i.e., cultural, educational, psycho-social, semantic, and multimodal. The input from trusted, educated signers and teachers can enable the connection between real world objects and signed videos to provide explanations of concepts. The conclusion is that the development of an inclusive, multi-language AR game for deaf people needs to be carried out as an international collaboration, addressing all five dimensions.
13

Azar, Zeynep, and Aslı Özyürek. "Discourse management." Dutch Journal of Applied Linguistics 4, no. 2 (December 31, 2015): 222–40. http://dx.doi.org/10.1075/dujal.4.2.06aza.

Abstract:
Speakers achieve coherence in discourse by alternating between differential lexical forms, e.g. noun phrase, pronoun, and null form, in accordance with the accessibility of the entities they refer to, i.e. whether they introduce an entity into discourse for the first time or continue referring to an entity they already mentioned before. Moreover, tracking of entities in discourse is a multimodal phenomenon. Studies show that speakers are sensitive to the informational structure of discourse and use fuller forms (e.g. full noun phrases) in speech and gesture more when re-introducing an entity while they use attenuated forms (e.g. pronouns) in speech and gesture less when maintaining a referent. However, those studies focus mainly on non-pro-drop languages (e.g. English, German and French). The present study investigates whether the same pattern holds for pro-drop languages. It draws data from adult native speakers of Turkish using elicited narratives. We find that Turkish speakers mostly use fuller forms to code subject referents in the re-introduction context and the null form in the maintenance context, and they point to gesture space for referents more in the re-introduction context compared to the maintenance context. Hence we provide supportive evidence for the reverse correlation between the accessibility of a discourse referent and its coding in speech and gesture. We also find that, as a novel contribution, the third person pronoun is used in the re-introduction context only when the referent was previously mentioned as the object argument of the immediately preceding clause.
14

Tung, Tony, and Takashi Matsuyama. "Visual Tracking Using Multimodal Particle Filter." International Journal of Natural Computing Research 4, no. 3 (July 2014): 69–84. http://dx.doi.org/10.4018/ijncr.2014070104.

Abstract:
Visual tracking of humans or objects in motion is a challenging problem when observed data undergo appearance changes (e.g., due to illumination variations, occlusion, cluttered background, etc.). Moreover, tracking systems are usually initialized with predefined target templates, or trained beforehand using known datasets. Hence, they are not always efficient to detect and track objects whose appearance changes over time. In this paper, we propose a multimodal framework based on particle filtering for visual tracking of objects under challenging conditions (e.g., tracking various human body parts from multiple views). Particularly, the authors integrate various cues such as color, motion and depth in a global formulation. The Earth Mover distance is used to compare color models in a global fashion, and constraints on motion flow features prevent common drifting effects due to error propagation. In addition, the model features an online mechanism that adaptively updates a subspace of multimodal templates to cope with appearance changes. Furthermore, the proposed model is integrated in a practical detection and tracking process, and multiple instances can run in real-time. Experimental results are obtained on challenging real-world videos with poorly textured models and arbitrary non-linear motions.
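The core loop of such a multimodal particle filter can be summarised as propagate, weight by the product of per-cue likelihoods, and resample. The following generic sketch assumes a 2D random-walk motion model and caller-supplied colour and motion likelihood functions; it is not the authors' formulation.

```python
import numpy as np

def track_step(particles, weights, color_lik, motion_lik, sigma=5.0):
    """One particle-filter iteration with a multimodal observation model."""
    # 1. Propagate particle states (x, y) with a random-walk motion model.
    particles = particles + np.random.normal(0.0, sigma, particles.shape)
    # 2. Weight by the product of the per-cue likelihoods (colour x motion).
    w = weights * color_lik(particles) * motion_lik(particles)
    w = np.clip(w, 1e-12, None)
    weights = w / w.sum()
    # 3. Systematic resampling to avoid weight degeneracy.
    positions = (np.arange(len(weights)) + np.random.rand()) / len(weights)
    idx = np.searchsorted(np.cumsum(weights), positions)
    return particles[idx], np.full(len(weights), 1.0 / len(weights))
```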
15

Gîrbacia, Florin. "Evaluation of CAD Model Manipulation in Desktop and Multimodal Immersive Interface." Applied Mechanics and Materials 325-326 (June 2013): 289–93. http://dx.doi.org/10.4028/www.scientific.net/amm.325-326.289.

Abstract:
This paper presents an evaluation study of the manipulation of 3D CAD models using a multimodal Virtual Reality (VR) interface and a traditional desktop environment. A multimodal interface based on VR technologies that offers direct access to the space model by using several sensorial channels (visual, tactile, verbal) is described. Users can manipulate 3D objects using a 6DOF tracking device and gestures, and visualize the design status in an immersive CAVE-like system. The results of the evaluation study illustrate that using VR technologies as an alternative to the WIMP CAD software interface is viable and brings more advantages.
16

Zhang, Yanqi, Xiaofei Kou, Haibin Liu, Shiqing Zhang, and Liangliang Qie. "IoT-Enabled Sustainable and Cost-Efficient Returnable Transport Management Strategies in Multimodal Transport Systems." Sustainability 14, no. 18 (September 16, 2022): 11668. http://dx.doi.org/10.3390/su141811668.

Abstract:
Returnable transport items (RTIs) are widely used in multimodal transport systems. However, due to the lack of effective tracking methods, RTIs management efficiency is low and RTIs are easily lost, which directly and indirectly causes economic losses to enterprises. Internet of Things (IoT) technology has proved effective in realizing real-time tracking and tracing of various objects in diverse fields. However, an IoT-enabled RTIs management system in a multimodal transport system has not been widely accepted due to the lack of an effective cost decision model. To address these problems, this research first presents three typical schemes of RTIs management through extensive field studies on collaborative logistics service providers in multimodal transport systems. Then, the cost–benefit analyses of these three schemes are conducted while the decision models on whether to adopt IoT technologies are built. Finally, based on the decision models, the main factors affecting the application of IoT-RTIs management systems are studied by numerical analysis, based on which several managerial implications are presented. These results can serve as a theoretical basis for enterprises interested in finding out whether IoT technology should be used in RTIs management.
17

Vortmann, Lisa-Marie, Leonid Schwenke, and Felix Putze. "Using Brain Activity Patterns to Differentiate Real and Virtual Attended Targets during Augmented Reality Scenarios." Information 12, no. 6 (May 26, 2021): 226. http://dx.doi.org/10.3390/info12060226.

Abstract:
Augmented reality is the fusion of virtual components and our real surroundings. The simultaneous visibility of generated and natural objects often requires users to direct their selective attention to a specific target that is either real or virtual. In this study, we investigated whether this target is real or virtual by using machine learning techniques to classify electroencephalographic (EEG) and eye tracking data collected in augmented reality scenarios. A shallow convolutional neural net classified 3 second EEG data windows from 20 participants in a person-dependent manner with an average accuracy above 70% if the testing data and training data came from different trials. This accuracy could be significantly increased to 77% using a multimodal late fusion approach that included the recorded eye tracking data. Person-independent EEG classification was possible above chance level for 6 out of 20 participants. Thus, the reliability of such a brain–computer interface is high enough for it to be treated as a useful input mechanism for augmented reality applications.
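Late fusion of this kind simply combines the class probabilities of independently trained per-modality classifiers. The sketch below uses scikit-learn logistic regressions as stand-ins for the EEG network and the gaze classifier; the classifier choice and the fusion weight are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def late_fusion_predict(clf_eeg, clf_gaze, X_eeg, X_gaze, w_eeg=0.5):
    p_eeg = clf_eeg.predict_proba(X_eeg)        # (n, 2): real vs. virtual target
    p_gaze = clf_gaze.predict_proba(X_gaze)
    p = w_eeg * p_eeg + (1.0 - w_eeg) * p_gaze  # weighted probability fusion
    return p.argmax(axis=1)

# Usage sketch:
# clf_eeg  = LogisticRegression().fit(X_eeg_train,  y_train)
# clf_gaze = LogisticRegression().fit(X_gaze_train, y_train)
# y_pred   = late_fusion_predict(clf_eeg, clf_gaze, X_eeg_test, X_gaze_test)
```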
18

Kuhnt, Daniela, Miriam H. A. Bauer, Andreas Becker, Dorit Merhof, Amir Zolal, Mirco Richter, Peter Grummich, Oliver Ganslandt, Michael Buchfelder, and Christopher Nimsky. "Intraoperative Visualization of Fiber Tracking Based Reconstruction of Language Pathways in Glioma Surgery." Neurosurgery 70, no. 4 (September 23, 2011): 911–20. http://dx.doi.org/10.1227/neu.0b013e318237a807.

Abstract:
BACKGROUND: For neuroepithelial tumors, the surgical goal is maximum resection with preservation of neurological function. This is contributed to by intraoperative magnetic resonance imaging (iMRI) combined with multimodal navigation.
OBJECTIVE: We evaluated the contribution of diffusion tensor imaging (DTI)-based fiber tracking of language pathways with 2 different algorithms (tensor deflection, connectivity analysis [CA]) integrated in the navigation on the surgical outcome.
METHODS: We evaluated 32 patients with neuroepithelial tumors who underwent surgery with DTI-based fiber tracking of language pathways integrated in neuronavigation. The tensor deflection algorithm was routinely used and its results intraoperatively displayed in all cases. The CA algorithm was furthermore evaluated in 23 cases. Volumetric assessment was performed in pre- and intraoperative MR images. To evaluate the benefit of fiber tractography, language deficits were evaluated pre- and postoperatively and compared with the volumetric analysis.
RESULTS: Final gross-total resection was performed in 40.6% of patients. Absolute tumor volume was reduced from 55.33 ± 63.77 cm3 to 20.61 ± 21.67 cm3 in first iMRI resection control, to finally 11.56 ± 21.92 cm3 (P < .01). Fiber tracking of the 2 algorithms showed a deviation of the displayed 3D objects by <5 mm. In long-term follow-up only 1 patient (3.1%) had a persistent language deficit.
CONCLUSION: Intraoperative visualization of language-related cortical areas and the connecting pathways with DTI-based fiber tracking can be successfully performed and integrated in the navigation system. In a setting of intraoperative high-field MRI this contributes to maximum tumor resection with low postoperative morbidity.
19

Wang, Xin, and Pieter Jonker. "An Advanced Active Vision System with Multimodal Visual Odometry Perception for Humanoid Robots." International Journal of Humanoid Robotics 14, no. 03 (August 25, 2017): 1750006. http://dx.doi.org/10.1142/s0219843617500062.

Abstract:
Using active vision to perceive surroundings instead of just passively receiving information, humans develop the ability to explore unknown environments. Humanoid robot active vision research has already half a century history. It covers comprehensive research areas and plenty of studies have been done. Nowadays, the new trend is to use a stereo setup or a Kinect with neck movements to realize active vision. However, human perception is a combination of eye and neck movements. This paper presents an advanced active vision system that works in a similar way as human vision. The main contributions are: a design of a set of controllers that mimic eye and neck movements, including saccade eye movements, pursuit eye movements, vestibulo-ocular reflex eye movements and vergence eye movements; an adaptive selection mechanism based on properties of objects to automatically choose an optimal tracking algorithm; a novel Multimodal Visual Odometry Perception method that combines stereopsis and convergence to enable robots to perform both precise action in action space and scene exploration in personal space. Experimental results prove the effectiveness and robustness of our system. Besides, the system works in real-time constraints with low-cost cameras and motors, providing an affordable solution for industrial applications.
20

Cao, Junjun, Junliang Cao, Zheng Zeng, and Lian Lian. "Nonlinear multiple-input-multiple-output adaptive backstepping control of underwater glider systems." International Journal of Advanced Robotic Systems 13, no. 6 (December 1, 2016): 172988141666948. http://dx.doi.org/10.1177/1729881416669484.

Abstract:
In this article, an adaptive backstepping control is proposed for multi-input and multi-output nonlinear underwater glider systems. The developed method is established on the basis of the state-space equations, which are simplified from the full glider dynamics through reasonable assumptions. The roll angle, pitch angle, and velocity of the vehicle are considered as control objects, a Lyapunov function consisting of the tracking error of the state vectors is established. According to Lyapunov stability theory, the adaptive control laws are derived to ensure the tracking errors asymptotically converge to zero. The proposed nonlinear MIMO adaptive backstepping control (ABC) scheme is tested to control an underwater glider in saw-tooth motion, spiral motion, and multimode motion. The linear quadratic regular (LQR) control scheme is described and evaluated with the ABC for the motion control problems. The results demonstrate that both control strategies provide similar levels of robustness while using the proposed ABC scheme leads to the more smooth control efforts with less oscillatory behavior.
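The backstepping idea behind such controllers can be seen in a single scalar step: choose a Lyapunov function of the tracking error and pick the control input so that its derivative is negative definite. The derivation below is a generic one-step sketch, not the paper's full MIMO adaptive design.

```latex
% One-step backstepping sketch for scalar dynamics \dot{x} = f(x) + b u,
% tracking a reference x_d with error e = x - x_d.
\begin{align}
  V &= \tfrac{1}{2} e^{2}, \\
  \dot{V} &= e\,\dot{e} = e\bigl(f(x) + b\,u - \dot{x}_d\bigr), \\
  u &= \frac{1}{b}\bigl(-f(x) + \dot{x}_d - k\,e\bigr), \quad k > 0
      \;\Longrightarrow\; \dot{V} = -k\,e^{2} \le 0,
\end{align}
% so e converges asymptotically to zero; the adaptive version replaces unknown
% terms in f(x) with parameter estimates updated by an adaptation law.
```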
21

Pancino, Niccolò, Caterina Graziani, Veronica Lachi, Maria Lucia Sampoli, Emanuel Ștefǎnescu, Monica Bianchini, and Giovanna Maria Dimitri. "A Mixed Statistical and Machine Learning Approach for the Analysis of Multimodal Trail Making Test Data." Mathematics 9, no. 24 (December 8, 2021): 3159. http://dx.doi.org/10.3390/math9243159.

Abstract:
Eye-tracking can offer a novel clinical practice and a non-invasive tool to detect neuropathological syndromes. In this paper, we show some analysis on data obtained from the visual sequential search test. Indeed, such a test can be used to evaluate the capacity of looking at objects in a specific order, and its successful execution requires the optimization of the perceptual resources of foveal and extrafoveal vision. The main objective of this work is to detect if some patterns can be found within the data, to discern among people with chronic pain, extrapyramidal patients and healthy controls. We employed statistical tests to evaluate differences among groups, considering three novel indicators: blinking rate, average blinking duration and maximum pupil size variation. Additionally, to divide the three patient groups based on scan-path images—which appear very noisy and all similar to each other—we applied deep learning techniques to embed them into a larger transformed space. We then applied a clustering approach to correctly detect and classify the three cohorts. Preliminary experiments show promising results.
22

Veiga Almagro, Carlos, Giacomo Lunghi, Mario Di Castro, Diego Centelles Beltran, Raúl Marín Prades, Alessandro Masi, and Pedro J. Sanz. "Cooperative and Multimodal Capabilities Enhancement in the CERNTAURO Human–Robot Interface for Hazardous and Underwater Scenarios." Applied Sciences 10, no. 17 (September 3, 2020): 6144. http://dx.doi.org/10.3390/app10176144.

Abstract:
The use of remote robotic systems for inspection and maintenance in hazardous environments is a priority for all tasks potentially dangerous for humans. However, currently available robotic systems lack that level of usability which would allow inexperienced operators to accomplish complex tasks. Moreover, the task’s complexity increases drastically when a single operator is required to control multiple remote agents (for example, when picking up and transporting big objects). In this paper, a system allowing an operator to prepare and configure cooperative behaviours for multiple remote agents is presented. The system is part of a human–robot interface that was designed at CERN, the European Center for Nuclear Research, to perform remote interventions in its particle accelerator complex, as part of the CERNTAURO project. In this paper, the modalities of interaction with the remote robots are presented in detail. The multimodal user interface enables the user to activate assisted cooperative behaviours according to a mission plan. The multi-robot interface has been validated at CERN in its Large Hadron Collider (LHC) mockup using a team of two mobile robotic platforms, each one equipped with a robotic manipulator. Moreover, great similarities were identified between the CERNTAURO and the TWINBOT projects, which aim to create usable robotic systems for underwater manipulations. Therefore, the cooperative behaviours were validated within a multi-robot pipe transport scenario in a simulated underwater environment, experimenting more advanced vision techniques. The cooperative teleoperation can be coupled with additional assisted tools such as vision-based tracking and grasping determination of metallic objects, and communication protocols design. The results show that the cooperative behaviours enable a single user to face a robotic intervention with more than one robot in a safer way.
23

Liang, Shuang, and Yang Li. "Using Camshift and Kalman Algorithm to Trajectory Characteristic Matching of Basketball Players." Complexity 2021 (June 16, 2021): 1–11. http://dx.doi.org/10.1155/2021/4728814.

Abstract:
Because of its unique charm, sports video is widely welcomed by the public in today’s society. Therefore, the analysis and research of sports game video data have high practical significance and commercial value. Taking a basketball game video as an example, this paper studies the tracking feature matching of basketball players’ detection, recognition, and prediction in the game video. This paper is divided into four parts to improve the application of the interactive multimodel algorithm to track characteristic matching: moving object detection, recognition, basketball track characteristic matching, and player track characteristic matching. The main work and research results of each part are as follows: firstly, the improved K-means clustering algorithm is used to segment the golf field area; then, HSV is combined with the RGB Fujian value method to eliminate the field area; at last, straight field lines were extracted by Hough transform, and elliptical field lines were extracted by curve fitting, and the field lines were eliminated to realize the detection of moving objects. Seven normalized Hu invariant moments are used as the target features to realize the recognition of moving targets. By obtaining the feature distance between the sample and the template, the category of the sample is judged, which has a good robustness. The Kalman filter is used to match the characteristics of the basketball trajectory. Aiming at the occlusion of basketball, the least square method was used to fit the basketball trajectory, and the basketball position was predicted at the occlusion moment, which realized the occlusion trajectory matching. The matching of players’ track characteristics is realized by the CamShift algorithm based on the color model, which makes full use of players’ color information and realizes real-time performance. In order to solve the problem of occlusion between players in the track feature matching, CamShift and Kalman algorithms were used to determine the occlusion factor through the search window and then weighted Kalman and CamShift according to the occlusion degree to get the track feature matching result. The experimental results show that the detection time is greatly shortened, the memory space occupied is small, and the effect is very ideal.
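The occlusion-weighted combination of CamShift and Kalman estimates mentioned above can be sketched as follows with OpenCV. The occlusion factor used here (the remaining colour back-projection mass inside the search window) and the pre-configured 4-state cv2.KalmanFilter are illustrative assumptions, not the paper's exact scheme.

```python
import cv2
import numpy as np

def fused_center(kalman, back_proj, window, term_crit, full_mass):
    """Blend CamShift and Kalman centre estimates according to occlusion.

    kalman    : cv2.KalmanFilter(4, 2) with constant-velocity model, set up elsewhere
    term_crit : e.g. (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    full_mass : back-projection mass in the window when the player is unoccluded
    """
    rot_rect, window = cv2.CamShift(back_proj, window, term_crit)
    cam_cx, cam_cy = rot_rect[0]              # CamShift centre from colour model
    pred = kalman.predict()                   # Kalman-predicted centre
    kal_cx, kal_cy = float(pred[0]), float(pred[1])
    x, y, w, h = window                       # occlusion factor from colour evidence
    mass = back_proj[y:y + h, x:x + w].sum()
    alpha = np.clip(mass / (full_mass + 1e-6), 0.0, 1.0)   # 1 = fully visible
    # Trust CamShift when the player is visible, the Kalman prediction when occluded.
    cx = alpha * cam_cx + (1 - alpha) * kal_cx
    cy = alpha * cam_cy + (1 - alpha) * kal_cy
    kalman.correct(np.array([[cx], [cy]], dtype=np.float32))
    return (cx, cy), window
```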
24

Mao, Zhongjie, Xi Chen, Jia Yan, and Tao Qu. "Multimodal Object Tracking by Exploiting Appearance and Class Information." Journal of Visual Communication and Image Representation, October 2022, 103669. http://dx.doi.org/10.1016/j.jvcir.2022.103669.

25

Sun, Lichao, Christina D. Griep, and Hanako Yoshida. "Shared Multimodal Input Through Social Coordination: Infants With Monolingual and Bilingual Learning Experiences." Frontiers in Psychology 13 (April 19, 2022). http://dx.doi.org/10.3389/fpsyg.2022.745904.

Abstract:
A growing number of children in the United States are exposed to multiple languages at home from birth. However, relatively little is known about the early process of word learning—how words are mapped to the referent in their child-centered learning experiences. The present study defined parental input operationally as the integrated and multimodal learning experiences as an infant engages with his/her parent in an interactive play session with objects. By using a head-mounted eye tracking device, we recorded visual scenes from the infant’s point of view, along with the parent’s social input with respect to gaze, labeling, and actions of object handling. Fifty-one infants and toddlers (aged 6–18 months) from an English monolingual or a diverse bilingual household were recruited to observe the early multimodal learning experiences in an object play session. Despite that monolingual parents spoke more and labeled more frequently relative to bilingual parents, infants from both language groups benefit from a comparable amount of socially coordinated experiences where parents name the object while the object is looked at by the infant. Also, a sequential path analysis reveals multiple social coordinated pathways that facilitate infant object looking. Specifically, young children’s attention to the referent objects is directly influenced by parent’s object handling. These findings point to the new approach to early language input and how multimodal learning experiences are coordinated socially for young children growing up with monolingual and bilingual learning contexts.
26

Wang, Yunhan, and Regis Kopper. "Efficient and Accurate Object 3D Selection With Eye Tracking-Based Progressive Refinement." Frontiers in Virtual Reality 2 (June 17, 2021). http://dx.doi.org/10.3389/frvir.2021.607165.

Abstract:
Selection by progressive refinement allows the accurate acquisition of targets with small visual sizes while keeping the required precision of the task low. Using the eyes as a means to perform 3D selections is naturally hindered by the low accuracy of eye movements. To account for this low accuracy, we propose to use the concept of progressive refinement to allow accurate 3D selection. We designed a novel eye tracking selection technique with progressive refinement–Eye-controlled Sphere-casting refined by QUAD-menu (EyeSQUAD). We propose an approximation method to stabilize the calculated point-of-regard and a space partitioning method to improve computation. We evaluated the performance of EyeSQUAD in comparison to two previous selection techniques–ray-casting and SQUAD–under different target size and distractor density conditions. Results show that EyeSQUAD outperforms previous eye tracking-based selection techniques, is more accurate and can achieve similar selection speed as ray-casting, and is less accurate and slower than SQUAD. We discuss implications of designing eye tracking-based progressive refinement interaction techniques and provide a potential solution for multimodal user interfaces with eye tracking.
27

Huang, Zishuo, Qinyou Hu, Qiang Mei, Chun Yang, and Zheng Wu. "Identity recognition on waterways: a novel ship information tracking method based on multimodal data." Journal of Navigation, June 25, 2021, 1–17. http://dx.doi.org/10.1017/s0373463321000503.

Abstract:
Video monitoring is an important means of ship traffic supervision. In practice, regulators often need to use an electronic chart platform to determine basic information concerning ships passing through a video feed. To enrich the information in the surveillance video and to effectively use multimodal maritime data, this paper proposes a novel ship multi-object tracking technology based on improved single shot multibox detector (SSD) and DeepSORT algorithms. In addition, a night contrast enhancement algorithm is used to enhance the ship identification performance in night scenes and a multimodal data fusion algorithm is used to incorporate the ship automatic identification system (AIS) information into the video display. The experimental results indicate that the ship information tracking accuracies in the day and night scenes are 78.2% and 70.4%, respectively. Our method can effectively help regulators to quickly obtain ship information from a video feed and improve the supervision of a waterway.
28

Cognolato, Matteo, Manfredo Atzori, Roger Gassert, and Henning Müller. "Improving Robotic Hand Prosthesis Control With Eye Tracking and Computer Vision: A Multimodal Approach Based on the Visuomotor Behavior of Grasping." Frontiers in Artificial Intelligence 4 (January 25, 2022). http://dx.doi.org/10.3389/frai.2021.744476.

Abstract:
The complexity and dexterity of the human hand make the development of natural and robust control of hand prostheses challenging. Although a large number of control approaches were developed and investigated in the last decades, limited robustness in real-life conditions often prevented their application in clinical settings and in commercial products. In this paper, we investigate a multimodal approach that exploits the use of eye-hand coordination to improve the control of myoelectric hand prostheses. The analyzed data are from the publicly available MeganePro Dataset 1, which includes multimodal data from transradial amputees and able-bodied subjects while grasping numerous household objects with ten grasp types. A continuous grasp-type classification based on surface electromyography served as both intent detector and classifier. At the same time, the information provided by eye-hand coordination parameters, gaze data and object recognition in first-person videos allowed to identify the object a person aims to grasp. The results show that the inclusion of visual information significantly increases the average offline classification accuracy by up to 15.61 ± 4.22% for the transradial amputees and by up to 7.37 ± 3.52% for the able-bodied subjects, allowing transradial amputees to reach an average classification accuracy comparable to intact subjects and suggesting that the robustness of hand prosthesis control based on grasp-type recognition can be significantly improved with the inclusion of visual information extracted by leveraging natural eye-hand coordination behavior and without placing additional cognitive burden on the user.
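One simple way to picture the fusion of myoelectric and visual information is as a product of the sEMG classifier's grasp probabilities with a prior over grasps conditioned on the gazed-at object. The sketch below is only a schematic illustration; the grasp classes and the prior table are invented for the example.

```python
import numpy as np

GRASPS = ["lateral", "power", "pinch", "tripod"]
# Hypothetical p(grasp | object) priors from object recognition on the gazed-at item.
OBJECT_PRIOR = {
    "mug": np.array([0.10, 0.60, 0.10, 0.20]),
    "pen": np.array([0.05, 0.05, 0.50, 0.40]),
}

def fuse_grasp(p_emg, gazed_object):
    """Multiply sEMG class probabilities with the visual prior and renormalise."""
    prior = OBJECT_PRIOR.get(gazed_object, np.ones(len(GRASPS)) / len(GRASPS))
    p = p_emg * prior
    return GRASPS[int(np.argmax(p / p.sum()))]

# e.g. fuse_grasp(np.array([0.3, 0.3, 0.2, 0.2]), "mug") -> "power"
```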
29

Sofroniew, Nicholas James, Yurii A. Vlasov, Samuel Andrew Hires, Jeremy Freeman, and Karel Svoboda. "Neural coding in barrel cortex during whisker-guided locomotion." eLife 4 (December 23, 2015). http://dx.doi.org/10.7554/elife.12559.

Abstract:
Animals seek out relevant information by moving through a dynamic world, but sensory systems are usually studied under highly constrained and passive conditions that may not probe important dimensions of the neural code. Here, we explored neural coding in the barrel cortex of head-fixed mice that tracked walls with their whiskers in tactile virtual reality. Optogenetic manipulations revealed that barrel cortex plays a role in wall-tracking. Closed-loop optogenetic control of layer 4 neurons can substitute for whisker-object contact to guide behavior resembling wall tracking. We measured neural activity using two-photon calcium imaging and extracellular recordings. Neurons were tuned to the distance between the animal snout and the contralateral wall, with monotonic, unimodal, and multimodal tuning curves. This rich representation of object location in the barrel cortex could not be predicted based on simple stimulus-response relationships involving individual whiskers and likely emerges within cortical circuits.
30

Sun, Zhongda, Minglu Zhu, Xuechuan Shan, and Chengkuo Lee. "Augmented tactile-perception and haptic-feedback rings as human-machine interfaces aiming for immersive interactions." Nature Communications 13, no. 1 (September 5, 2022). http://dx.doi.org/10.1038/s41467-022-32745-8.

Abstract:
Advancements of virtual reality technology pave the way for developing wearable devices to enable somatosensory sensation, which can bring more comprehensive perception and feedback in the metaverse-based virtual society. Here, we propose augmented tactile-perception and haptic-feedback rings with multimodal sensing and feedback capabilities. This highly integrated ring consists of triboelectric and pyroelectric sensors for tactile and temperature perception, and vibrators and nichrome heaters for vibro- and thermo-haptic feedback. All these components integrated on the ring can be directly driven by a custom wireless platform of low power consumption for wearable/portable scenarios. With voltage integration processing, high-resolution continuous finger motion tracking is achieved via the triboelectric tactile sensor, which also contributes to superior performance in gesture/object recognition with artificial intelligence analysis. By fusing the multimodal sensing and feedback functions, an interactive metaverse platform with cross-space perception capability is successfully achieved, giving people a face-to-face like immersive virtual social experience.
31

Zhu, Fan, Liangliang Wang, Yilin Wen, Lei Yang, Jia Pan, Zheng Wang, and Wenping Wang. "Failure Handling of Robotic Pick and Place Tasks With Multimodal Cues Under Partial Object Occlusion." Frontiers in Neurorobotics 15 (March 8, 2021). http://dx.doi.org/10.3389/fnbot.2021.570507.

Abstract:
The success of a robotic pick and place task depends on the success of the entire procedure: from the grasp planning phase, to the grasp establishment phase, then the lifting and moving phase, and finally the releasing and placing phase. Being able to detect and recover from grasping failures throughout the entire process is therefore a critical requirement for both the robotic manipulator and the gripper, especially when considering the almost inevitable object occlusion by the gripper itself during the robotic pick and place task. With the rapid rising of soft grippers, which rely heavily on their under-actuated body and compliant, open-loop control, less information is available from the gripper for effective overall system control. Tackling on the effectiveness of robotic grasping, this work proposes a hybrid policy by combining visual cues and proprioception of our gripper for the effective failure detection and recovery in grasping, especially using a proprioceptive self-developed soft robotic gripper that is capable of contact sensing. We solved failure handling of robotic pick and place tasks and proposed (1) more accurate pose estimation of a known object by considering the edge-based cost besides the image-based cost; (2) robust object tracking techniques that work even when the object is partially occluded in the system and achieve mean overlap precision up to 80%; (3) contact and contact loss detection between the object and the gripper by analyzing internal pressure signals of our gripper; (4) robust failure handling with the combination of visual cues under partial occlusion and proprioceptive cues from our soft gripper to effectively detect and recover from different accidental grasping failures. The proposed system was experimentally validated with the proprioceptive soft robotic gripper mounted on a collaborative robotic manipulator, and a consumer-grade RGB camera, showing that combining visual cues and proprioception from our soft actuator robotic gripper was effective in improving the detection and recovery from the major grasping failures in different stages for the compliant and robust grasping.
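Contact and contact-loss detection from the gripper's internal pressure signal, as described above, can be approximated by thresholding with hysteresis on a smoothed signal. The sketch below is a generic illustration; the thresholds, window size, and baseline handling are assumptions, not the authors' implementation.

```python
import numpy as np

def detect_contact_events(pressure, baseline, rise=0.15, window=5):
    """Yield (index, event) pairs for contact and contact-loss events."""
    smoothed = np.convolve(pressure, np.ones(window) / window, mode="same")
    in_contact = False
    for i, p in enumerate(smoothed):
        if not in_contact and p > baseline + rise:          # pressure rises: contact made
            in_contact = True
            yield i, "contact"
        elif in_contact and p < baseline + rise / 2:        # hysteresis: contact lost
            in_contact = False
            yield i, "contact_lost"
```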
32

Rolls, Edmund T., Gustavo Deco, Chu-Chung Huang, and Jianfeng Feng. "The human posterior parietal cortex: effective connectome, and its relation to function." Cerebral Cortex, July 15, 2022. http://dx.doi.org/10.1093/cercor/bhac266.

Abstract:
The effective connectivity between 21 regions in the human posterior parietal cortex, and 360 cortical regions was measured in 171 Human Connectome Project (HCP) participants using the HCP atlas, and complemented with functional connectivity and diffusion tractography. Intraparietal areas LIP, VIP, MIP, and AIP have connectivity from early cortical visual regions, and to visuomotor regions such as the frontal eye fields, consistent with functions in eye saccades and tracking. Five superior parietal area 7 regions receive from similar areas and from the intraparietal areas, but also receive somatosensory inputs and connect with premotor areas including area 6, consistent with functions in performing actions to reach for, grasp, and manipulate objects. In the anterior inferior parietal cortex, PFop, PFt, and PFcm are mainly somatosensory, and PF in addition receives visuo-motor and visual object information, and is implicated in multimodal shape and body image representations. In the posterior inferior parietal cortex, PFm and PGs combine visuo-motor, visual object, and reward input and connect with the hippocampal system. PGi in addition provides a route to motion-related superior temporal sulcus regions involved in social interactions. PGp has connectivity with intraparietal regions involved in coordinate transforms and may be involved in idiothetic update of hippocampal visual scene representations.
33

Lin, Bingbing, Lanlan Zhang, Xiaolong Yin, Xiaocheng Chen, Chendong Ruan, Tiecheng Wu, Zhizhen Liu, and Jia Huang. "Modulation of entorhinal cortex–hippocampus connectivity and recognition memory following electroacupuncture on 3×Tg-AD model: Evidence from multimodal MRI and electrophysiological recordings." Frontiers in Neuroscience 16 (July 29, 2022). http://dx.doi.org/10.3389/fnins.2022.968767.

Abstract:
Memory loss and aberrant neuronal network activity are part of the earliest hallmarks of Alzheimer’s disease (AD). Electroacupuncture (EA) has been recognized as a cognitive stimulation for its effects on memory disorder, but whether different brain regions or neural circuits contribute to memory recovery in AD remains unknown. Here, we found that memory deficit was ameliorated in 3×Tg-AD mice with EA-treatment, as shown by the increased number of exploring and time spent in the novel object. In addition, reduced locomotor activity was observed in 3×Tg-AD mice, but no significant alteration was seen in the EA-treated mice. Based on the functional magnetic resonance imaging, the regional spontaneous activity alterations of 3×Tg-AD were mainly concentrated in the accumbens nucleus, auditory cortex, caudate putamen, entorhinal cortex (EC), hippocampus, insular cortex, subiculum, temporal cortex, visual cortex, and so on. While EA-treatment prevented the chaos of brain activity in parts of the above regions, such as the auditory cortex, EC, hippocampus, subiculum, and temporal cortex. And then we used the whole-cell voltage-clamp recording to reveal the neurotransmission in the hippocampus, and found that EA-treatment reversed the synaptic spontaneous release. Since the hippocampus receives most of the projections of the EC, the hippocampus-EC circuit is one of the neural circuits related to memory impairment. We further applied diffusion tensor imaging (DTI) tracking and functional connectivity, and found that hypo-connected between the hippocampus and EC with EA-treatment. These data indicate that the hippocampus–EC connectivity is responsible for the recognition memory deficit in the AD mice with EA-treatment, and provide novel insight into potential therapies for memory loss in AD.
Styles: APA, Harvard, Vancouver, ISO, etc.
34

Zhu, Yipeng, Tao Wang, and Shiqiang Zhu. "A novel tracking system for human following robots with fusion of MMW radar and monocular vision." Industrial Robot: the international journal of robotics research and application ahead-of-print, ahead-of-print (September 16, 2021). http://dx.doi.org/10.1108/ir-02-2021-0030.

Full text of the source
Abstract:
Purpose This paper aims to develop a robust person tracking method for human-following robots. The tracking system adopts the multimodal fusion of millimeter wave (MMW) radars and monocular cameras for perception. A prototype human-following robot is developed and evaluated using the proposed tracking system. Design/methodology/approach Limited by angular resolution, point clouds from MMW radars are too sparse to form features for human detection. Monocular cameras can provide semantic information for objects in view but cannot provide spatial locations. Considering the complementarity of the two sensors, a sensor fusion algorithm based on multimodal data combination is proposed to identify and localize the target person under challenging conditions. In addition, a closed-loop controller is designed for the robot to follow the target person at the expected distance. Findings A series of experiments under different circumstances is carried out to validate the fusion-based tracking method. Experimental results show that the average tracking errors are around 0.1 m. It is also found that the robot can handle different situations, overcome short-term interference, and continually track and follow the target person. Originality/value This paper proposes a robust tracking system based on the fusion of MMW radars and cameras. Interference such as occlusion and overlap is handled well with the help of velocity information from the radars. Compared with other state-of-the-art approaches, the sensor fusion method is cost-effective and requires no additional tags on people. Its stable performance shows good application prospects for human-following robots.
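A minimal sketch of the fusion-and-follow idea described above, under assumed interfaces: the radar reports (range, azimuth) candidates, the camera detector reports a person bounding box in pixels, a pinhole model with assumed intrinsics projects radar azimuth into the image for gating, and a simple proportional controller keeps the desired following distance. The gains, intrinsics, sign conventions, and the 1.2 m distance are illustrative values, not the paper's parameters.

import math

FX, CX = 600.0, 320.0          # assumed camera intrinsics (pixels)
DESIRED_DIST = 1.2             # desired following distance (m)
K_LIN, K_ANG = 0.8, 1.5        # proportional gains

def radar_to_pixel(azimuth_rad):
    """Project a radar azimuth angle onto the image x-axis (pinhole model)."""
    return CX + FX * math.tan(azimuth_rad)

def associate(radar_targets, person_box):
    """Pick the radar target whose projection falls inside the person bounding box."""
    x_min, x_max = person_box
    for rng_m, az in radar_targets:
        if x_min <= radar_to_pixel(az) <= x_max:
            return rng_m, az
    return None

def follow_command(rng_m, az):
    """Proportional controller: keep DESIRED_DIST and steer to center the person."""
    v = K_LIN * (rng_m - DESIRED_DIST)     # forward velocity (m/s)
    w = -K_ANG * az                        # angular velocity (rad/s), assumed sign convention
    return v, w

# Example: two radar returns, camera detects the person between pixels 280 and 360.
target = associate([(3.0, 0.35), (1.8, 0.05)], (280, 360))
if target is not None:
    print(follow_command(*target))         # ≈ (0.48, -0.075)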
Styles: APA, Harvard, Vancouver, ISO, etc.
35

"Multimodal Smart Wheelchair Integrated with Safety Alert System." International Journal of Engineering and Advanced Technology 9, no. 4 (April 30, 2020): 1324–30. http://dx.doi.org/10.35940/ijeat.d8423.049420.

Full text of the source
Abstract:
Elderly people and people with disabilities often rely on others for their locomotion. In the increasingly automated and technologically advanced society we live in, a smart wheelchair with appropriate automation can be a life-changing innovation for them. One might wonder why a wheelchair, or anything else, needs to be smart; the answer is simple: to overcome the limitations of existing technology. We aim to integrate a simple manually operated wheelchair with features such as obstacle detection and appropriate technologies such as voice control or gesture control for people who cannot move about unaided. With this project, we not only help a person use a normal wheelchair more easily but also make life easier for those with other disabilities that make walking difficult. For instance, a blind person can use a smart wheelchair that allows voice-controlled or even gesture-controlled movement, while a person who cannot speak will be able to control everything with their bare hands. The wheelchair is coupled with sensors that automatically detect obstacles/objects in the proximity and take appropriate action; in addition, it may also be controlled by a caregiver through commands such as forward, backward, upright, etc. This not only reduces the user's effort but also helps people take care of the elderly. The voice/gesture control system keeps everything simple: picture its application in a hospital, where a nurse currently has to handle a patient's wheelchair manually for even the slightest movement. This system, on the other hand, needs only a text or voice input command, and based on the predefined command received from the user, it executes the task. The project also includes a GSM module with a SIM card, which can help in tracking the wheelchair when required; for example, if the user is in difficulty and needs emergency help, a message asking for help can be sent to the intended person.
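As an illustration of the control flow sketched in the abstract (not the project's actual firmware), the snippet below dispatches simple text/voice commands, refuses to drive forward when an ultrasonic reading indicates an obstacle within a safety threshold, and sends an SMS alert for a help command. read_distance_cm, drive, and send_sms are placeholder functions standing in for whatever sensor, motor, and GSM drivers the real wheelchair would use.

SAFETY_THRESHOLD_CM = 40

def read_distance_cm():
    return 75.0                        # placeholder for an ultrasonic sensor read

def drive(direction):
    print(f"driving: {direction}")     # placeholder for motor driver commands

def send_sms(number, text):
    print(f"SMS to {number}: {text}")  # placeholder for GSM module AT commands

def handle_command(command, caregiver_number="+10000000000"):
    command = command.strip().lower()
    if command == "help":
        send_sms(caregiver_number, "Emergency: wheelchair user needs assistance.")
    elif command in ("forward", "backward", "left", "right"):
        if command == "forward" and read_distance_cm() < SAFETY_THRESHOLD_CM:
            drive("stop")              # obstacle ahead: refuse to move forward
        else:
            drive(command)
    else:
        drive("stop")                  # unknown command: fail safe

handle_command("forward")
handle_command("help")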
Styles: APA, Harvard, Vancouver, ISO, etc.
36

Zhou, Feng, Yangjian Ji, and Roger J. Jiao. "Augmented Affective-Cognition for Usability Study of In-Vehicle System User Interface." Journal of Computing and Information Science in Engineering 14, no. 2 (February 12, 2014). http://dx.doi.org/10.1115/1.4026222.

Full text of the source
Abstract:
Usability of in-vehicle systems has become increasingly important for ease of operation and driving safety. The user interface (UI) of in-vehicle systems is a critical focus of usability study. This paper studies how to use advanced computational, physiology- and behavior-based tools and methodologies to determine affective/emotional states and behavior of an individual in real time, and in turn how to adapt the human-vehicle interaction to meet users' cognitive needs based on the real-time assessment. Specifically, we set up a set of physiological sensors capable of collecting EEG, facial EMG, skin conductance response, and respiration data, and a set of motion sensing and tracking equipment capable of capturing eyeball movements and the objects with which the user is interacting. All hardware components and software are integrated into an augmented sensor platform that can perform as “one coherent system” to enable multimodal data processing and information inference for context-aware analysis of emotional states and cognitive behavior based on the rough set inference engine. Meanwhile, subjective data are also recorded for comparison. A usability study of an in-vehicle system UI is shown to demonstrate the potential of the proposed methodology.
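As a hedged sketch of the multimodal data-processing step only (the rough-set inference engine itself is not reproduced here), the snippet below time-aligns two physiological streams sampled at different, assumed rates and extracts simple per-window features that a downstream rule-based engine could consume. Channel names, sampling rates, and window length are assumptions for illustration.

import numpy as np

def window_features(signal, fs, window_s=2.0):
    """Split a 1-D signal into non-overlapping windows and summarize each window."""
    step = int(fs * window_s)
    n_windows = len(signal) // step
    trimmed = signal[: n_windows * step].reshape(n_windows, step)
    return trimmed.mean(axis=1), trimmed.var(axis=1)

rng = np.random.default_rng(2)
eeg = rng.standard_normal(256 * 60)           # 60 s of EEG at an assumed 256 Hz
scr = rng.standard_normal(32 * 60)            # 60 s of skin conductance at an assumed 32 Hz

eeg_mean, eeg_var = window_features(eeg, fs=256)
scr_mean, _ = window_features(scr, fs=32)

# Both streams now share one 2-second window index and can be fused row-wise.
features = np.column_stack([eeg_var, scr_mean])
print(features.shape)                         # (30, 2)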
Styles: APA, Harvard, Vancouver, ISO, etc.
37

Drai-Zerbib, Véronique, Léa Bernigaud, Alexandre Gaston-Bellegarde, Jean-Michel Boucheix, and Thierry Baccino. "Eye Movements During Comprehension in Virtual Reality: The Influence of a Change in Point of View Between Auditory and Visual Information in the Activation of a Mental Model." Frontiers in Virtual Reality 3 (June 22, 2022). http://dx.doi.org/10.3389/frvir.2022.874054.

Full text of the source
Abstract:
This paper provides new research perspectives in the field of multimodal comprehension (auditory information crossing visual information) by using immersion and incorporating eye tracking in a virtual reality environment. The objective is to investigate the influence of a change in narrative perspective (point of view) on the activation of a mental model underlying comprehension across the visual and auditory modalities. Twenty-eight participants, equipped with an SMI HMD HTC eye-tracking headset (250 Hz), watched 16 visual scenes in virtual reality accompanied by their corresponding auditory narration. The change in perspective could occur either in the visual scenes or in the narration. Mean fixation durations on typical objects of the visual scenes (Areas of Interest) related to the perspective shift were analyzed, as well as free recall of the narratives. We split each scene into three periods according to different parts of the narration (Before, Target, After); the Target period was where a shift in perspective could occur. Results showed that when a visual change of perspective occurred, mean fixation duration was shorter (compared to no change) for both Target and After. However, when an auditory change of perspective occurred, no difference was found in Target, although during After, mean fixation duration was longer (compared to no change). In the context of 3D video visualization, it seems that auditory processing prevails over visual processing of verbal information: the visual change of perspective induces less visual processing of the Areas of Interest (AOIs) included in the visual scene, whereas the auditory change in perspective leads to increased visual processing of the visual scene. Moreover, the analysis showed higher recall of information (verbatim and paraphrase) when an auditory change in perspective was coupled with no visual change of perspective. Thus, our results indicate a more effective integration of information when there is an inconsistency between what is heard and what is viewed. A change in perspective, instead of creating comprehension and integration difficulties, seems to effectively raise attention and induce a shorter visual inspection. These results are discussed in the context of cross-modal comprehension.
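To make the dependent measure concrete, the sketch below computes mean fixation duration on a target AOI separately for the Before, Target, and After periods; the fixation records and period boundaries are invented for illustration and do not reflect the study's data.

from collections import defaultdict

# Each fixation record: (timestamp_s, duration_ms, on_target_aoi)
fixations = [
    (1.2, 210, True), (3.8, 180, False), (6.1, 240, True),
    (8.5, 150, True), (11.0, 300, True), (13.4, 220, False),
]
periods = {"Before": (0.0, 5.0), "Target": (5.0, 10.0), "After": (10.0, 15.0)}

def mean_fixation_by_period(fixations, periods, aoi_only=True):
    """Average fixation duration per narration period, optionally restricted to the AOI."""
    buckets = defaultdict(list)
    for t, dur, on_aoi in fixations:
        if aoi_only and not on_aoi:
            continue
        for name, (start, end) in periods.items():
            if start <= t < end:
                buckets[name].append(dur)
    return {name: sum(v) / len(v) for name, v in buckets.items() if v}

print(mean_fixation_by_period(fixations, periods))
# {'Before': 210.0, 'Target': 195.0, 'After': 300.0}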
Styles: APA, Harvard, Vancouver, ISO, etc.