Academic literature on the topic 'Multi-Objects perception'
Journal articles on the topic "Multi-Objects perception":
Martín, Francisco, Carlos E. Agüero, and José M. Cañas. "Active Visual Perception for Humanoid Robots." International Journal of Humanoid Robotics 12, no. 1 (March 2015): 1550009. http://dx.doi.org/10.1142/s0219843615500097.
O’Sullivan, James, Jose Herrero, Elliot Smith, Catherine Schevon, Guy M. McKhann, Sameer A. Sheth, Ashesh D. Mehta, and Nima Mesgarani. "Hierarchical Encoding of Attended Auditory Objects in Multi-talker Speech Perception." Neuron 104, no. 6 (December 2019): 1195–1209. http://dx.doi.org/10.1016/j.neuron.2019.09.007.
Han, Dong, Hong Nie, Jinbao Chen, Meng Chen, Zhen Deng, and Jianwei Zhang. "Multi-modal haptic image recognition based on deep learning." Sensor Review 38, no. 4 (September 17, 2018): 486–93. http://dx.doi.org/10.1108/sr-08-2017-0160.
Lisowski, Józef. "Radar Perception of Multi-Object Collision Risk Neural Domains during Autonomous Driving." Electronics 13, no. 6 (March 13, 2024): 1065. http://dx.doi.org/10.3390/electronics13061065.
Li, Yucheng, Fei Wang, Liangze Tao, and Juan Wu. "Multi-Modal Haptic Rendering Based on Genetic Algorithm." Electronics 11, no. 23 (November 24, 2022): 3878. http://dx.doi.org/10.3390/electronics11233878.
Zhou, Wenjun, Tianfei Wang, Xiaoqin Wu, Chenglin Zuo, Yifan Wang, Quan Zhang, and Bo Peng. "Salient Object Detection via Fusion of Multi-Visual Perception." Applied Sciences 14, no. 8 (April 18, 2024): 3433. http://dx.doi.org/10.3390/app14083433.
Hirsch, Herb L., and Cathleen M. Moore. "Simulating Light Source Motion in Single Images for Enhanced Perceptual Object Detection." Journal of Defense Modeling and Simulation: Applications, Methodology, Technology 9, no. 3 (February 22, 2012): 269–78. http://dx.doi.org/10.1177/1548512911431814.
Marmodoro, Anna, and Matteo Grasso. "The Power of Color." American Philosophical Quarterly 57, no. 1 (January 1, 2020): 65–78. http://dx.doi.org/10.2307/48570646.
Wang, Li, Ruifeng Li, Jingwen Sun, Xingxing Liu, Lijun Zhao, Hock Soon Seah, Chee Kwang Quah, and Budianto Tandianus. "Multi-View Fusion-Based 3D Object Detection for Robot Indoor Scene Perception." Sensors 19, no. 19 (September 21, 2019): 4092. http://dx.doi.org/10.3390/s19194092.
Zhu, Jinchao, Xiaoyu Zhang, Shuo Zhang, and Junnan Liu. "Inferring Camouflaged Objects by Texture-Aware Interactive Guidance Network." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 4 (May 18, 2021): 3599–607. http://dx.doi.org/10.1609/aaai.v35i4.16475.
Dissertations / Theses on the topic "Multi-Objects perception":
Haddad, Lilas. "Impact of multiple affordances on object perception in natural scenes." PhD diss., Université de Lille, 2023. https://pepite-depot.univ-lille.fr/ToutIDP/EDSHS/2023/2023ULILH060.pdf.
Object perception and action perception are closely interrelated. Perceiving visual objects also leads to the perception of various grasping components evoked by the objects, known as micro-affordances. Abundant evidence shows that a single object may evoke micro-affordances such as a right- or left-hand grasp, depending on handle orientation, or a power or precision grip, depending on object size. However, natural scenes are usually composed of several objects evoking multiple affordances that may impact object perceptual processing. Moreover, objects presented in a common scene are usually semantically related, as they are part of the same context. The semantic relations between objects may then modulate how one perceives objects and their affordances. In this respect, thematic relations between objects (e.g., key-lock) are particularly interesting, as they share cognitive and neural substrates with knowledge of use gestures. The aim of this thesis is to investigate the consequences of the evocation of multiple affordances on the perception and selection of a given object in naturalistic scenes. We investigated how the similarity of affordances impacts object selection and how thematic relations between objects modulate object perceptual processing. In a first online behavioral study using a stimulus and response compatibility paradigm, we highlighted a processing cost when pairs of unrelated objects had similar right- or left-hand grasp affordances, with the similarity of affordances slowing down target selection. Furthermore, the cost entailed by similar handle affordances was restricted to action-relevant situations: when responding with the dominant hand and when the response was compatible with the affordance of the target. In a second behavioral experiment using the stimulus and response compatibility paradigm in a 3D environment, we extended these first findings to other types of micro-affordances (grasp size affordances).
Again, we demonstrated a perceptual processing cost when pairs of objects had similar grasp size affordances. Furthermore, we highlighted a suppression of the cost entailed by similar affordances on target selection when objects were thematically related. In a third neurophysiological study using electroencephalography, we evaluated the correlates of the cost entailed by similar affordances on μ rhythm desynchronization, which is assumed to reflect the activity of the motor neural network during perception. Results revealed that during target selection, μ desynchronization was reduced when affordances were similar rather than dissimilar. This effect disappeared when objects were thematically related. Overall, the behavioral and neurophysiological evidence supports the model of affordance inhibition proposed by Vainio and Ellis (2020) and Caligiore et al. (2013). According to the inhibition hypothesis, the observer needs to inhibit distractor objects in order to select the target object. When the different objects in the scene have similar affordances, inhibition of the distractor object and its affordances leads to the automatic inhibition of the target affordance, which slows down target processing. The present work provides behavioral and neural evidence in favor of the inhibition model of affordance and object selection in more naturalistic scenes involving familiar, meaningful objects. In addition, it provides the first demonstration of the role of semantic relations in the regulation of affordance inhibition in naturalistic scenes.
Vivet, Damien. "Perception de l'environnement par radar hyperfréquence. Application à la localisation et la cartographie simultanées, à la détection et au suivi d'objets mobiles en milieu extérieur" [Perception of the environment by microwave radar: application to simultaneous localization and mapping, and to the detection and tracking of moving objects in outdoor environments]. PhD thesis, Université Blaise Pascal - Clermont-Ferrand II, 2011. http://tel.archives-ouvertes.fr/tel-00659270.
Chavez, Garcia Ricardo Omar. "Multiple sensor fusion for detection, classification and tracking of moving objects in driving environments." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM034/document.
Advanced driver assistance systems (ADAS) help drivers to perform complex driving tasks and to avoid or mitigate dangerous situations. The vehicle senses the external world using sensors and then builds and updates an internal model of the environment configuration. Vehicle perception consists of establishing the spatial and temporal relationships between the vehicle and the static and moving obstacles in the environment. Vehicle perception is composed of two main tasks: simultaneous localization and mapping (SLAM) deals with modelling the static parts of the environment, while detection and tracking of moving objects (DATMO) is responsible for modelling the moving parts. In order to reason and act correctly, the system has to model the surrounding environment accurately. The accurate detection and classification of moving objects is a critical aspect of a moving object tracking system; therefore, many sensors are part of a common intelligent vehicle system. Classification of moving objects is needed to determine the possible behaviour of the objects surrounding the vehicle, and it is usually performed at the tracking level. Knowledge about the class of moving objects at the detection level can help improve their tracking. Most current perception solutions consider classification information only as aggregate information for the final perception output. Also, management of incomplete information is an important requirement for perception systems. Incomplete information can originate from sensor-related causes, such as calibration issues and hardware malfunctions, or from scene perturbations, such as occlusions, weather conditions and object shifting. It is important to manage these situations by taking them into account in the perception process. The main contributions of this dissertation focus on the DATMO stage of the perception problem.
Specifically, we believe that by including the object's class as a key element of the object's representation and by managing the uncertainty from multiple sensor detections, we can improve the results of the perception task, i.e., obtain a more reliable list of moving objects of interest represented by their dynamic state and appearance information. Therefore, we address the problems of sensor data association and sensor fusion for object detection, classification, and tracking at different levels within the DATMO stage. Although we focus on a set of three main sensors (radar, lidar, and camera), we propose a modifiable architecture that can accommodate other types or numbers of sensors. First, we define a composite object representation to include class information as a part of the object state, from the early stages to the final output of the perception task. Second, we propose, implement, and compare two different perception architectures to solve the DATMO problem, according to the level at which object association, fusion, and classification are performed. Our data fusion approaches are based on the evidential framework, which is used to manage and include the uncertainty from sensor detections and object classifications. Third, we propose an evidential data association approach to establish a relationship between two sources of evidence from object detections. We observe how the class information improves the final result of the DATMO component. Fourth, we integrate the proposed fusion approaches as part of a real-time vehicle application. This integration has been performed in a real vehicle demonstrator from the interactIVe European project. Finally, we analysed and experimentally evaluated the performance of the proposed methods. We compared our evidential fusion approaches against each other and against a state-of-the-art method using real data from different driving scenarios.
These comparisons focused on the detection, classification and tracking of different moving objects: pedestrians, bikes, cars and trucks.
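The evidential framework mentioned in the abstract above is commonly built on Dempster-Shafer theory. As a minimal illustrative sketch (this is not code from the cited thesis, and the mass values for the lidar and camera sources are invented), the following applies Dempster's rule of combination to fuse class evidence from two hypothetical sensors over the frame {car, truck, bike}:

```python
# Illustrative Dempster-Shafer combination of class evidence from two
# sensors. Focal elements are frozensets of class labels; masses sum to 1.
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions with Dempster's rule (normalized)."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass assigned to the empty set
    # Normalize by the non-conflicting mass (fails if conflict == 1).
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

THETA = frozenset({"car", "truck", "bike"})  # frame of discernment
# Hypothetical evidence: lidar cannot separate car from truck,
# the camera leans toward car; both keep some mass on ignorance.
lidar = {frozenset({"car", "truck"}): 0.7, THETA: 0.3}
camera = {frozenset({"car"}): 0.6, THETA: 0.4}
fused = dempster_combine(lidar, camera)  # mass concentrates on {"car"}
```

The fused mass function narrows belief toward the singleton {car} while retaining explicit mass on the ignorance set, which is the property the evidential framework exploits for managing incomplete sensor information.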
Shao, Hang. "A Fast MLP-based Learning Method and its Application to Mine Countermeasure Missions." Thèse, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/23512.
Asvadi, Alireza. "Multi-Sensor Object Detection for Autonomous Driving." Doctoral thesis, 2018. http://hdl.handle.net/10316/81236.
In this thesis, we propose on-board multisensor obstacle and object detection systems using a 3D-LIDAR, a monocular color camera and GPS-aided Inertial Navigation System (INS) positioning data, with application to self-driving road vehicles. Firstly, an obstacle detection system is proposed that incorporates 4D data (3D spatial data and time) and is composed of two main modules: (i) a ground surface estimation using piecewise planes, and (ii) a voxel grid model for static and moving obstacle detection using ego-motion information. An extension of the proposed obstacle detection system to a Detection And Tracking of Moving Objects (DATMO) system is proposed to achieve an object-level perception of dynamic scenes, followed by the fusion of 3D-LIDAR and camera data to improve the tracking function of the DATMO system. The proposed obstacle detection approach effectively models the dynamic driving environment, and the proposed DATMO method is able to deal with the localization error of the position sensing system when computing motion. The proposed fusion tracking module integrates multiple sensors to improve object tracking. Secondly, an object detection system based on the hypothesis generation and verification paradigms is proposed using 3D-LIDAR data and Convolutional Neural Networks (ConvNets). Hypothesis generation is performed by applying clustering to the point cloud data. In the hypothesis verification phase, a depth map is generated from the 3D-LIDAR data, and the depth map values are input to a ConvNet for object detection. Finally, a multimodal object detection approach is proposed using a hybrid neural network composed of deep ConvNets and a Multi-Layer Perceptron (MLP) neural network. Three modalities, depth and reflectance maps (both generated from 3D-LIDAR data) and a color image, are used as inputs. Three deep ConvNet-based object detectors run individually on each modality to detect the object bounding boxes.
Detections from each of the modalities are jointly learned and fused by an MLP-based late-fusion strategy. The purpose of the multimodal detection fusion is to reduce the misdetection rate of each modality, leading to a more accurate detection. Quantitative and qualitative evaluations were performed using the ‘Object Detection Evaluation’ dataset and datasets derived from the ‘Object Tracking Evaluation’ of the KITTI Vision Benchmark Suite. The reported results demonstrate the applicability and efficiency of the proposed obstacle and object detection approaches in urban scenarios.
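The MLP late-fusion step described above can be illustrated with a minimal sketch. The code below is not taken from the cited thesis; the network sizes, weights, and per-modality confidence values are invented for illustration. It shows how a small MLP with one hidden layer could map three per-modality detection confidences (depth, reflectance, color) for one candidate bounding box to a single fused score.

```python
# Illustrative MLP late fusion of per-modality detection confidences.
# One hidden layer with ReLU activations, sigmoid output. All weights
# are hand-set for the example; in the cited work they are learned.
import math

def mlp_fuse(scores, w_hidden, b_hidden, w_out, b_out):
    """Fuse a list of modality confidences into one detection score."""
    hidden = [max(0.0, sum(w * s for w, s in zip(row, scores)) + b)
              for row, b in zip(w_hidden, b_hidden)]          # ReLU layer
    z = sum(w * h for w, h in zip(w_out, hidden)) + b_out     # linear out
    return 1.0 / (1.0 + math.exp(-z))                         # sigmoid

# Hypothetical learned parameters (2 hidden units, 3 inputs).
w_hidden = [[1.0, 1.0, 1.0], [2.0, -1.0, 0.5]]
b_hidden = [-0.5, 0.0]
w_out = [1.5, 1.0]
b_out = -1.0

# Per-modality confidences for one bounding box: depth, reflectance, color.
fused = mlp_fuse([0.9, 0.4, 0.8], w_hidden, b_hidden, w_out, b_out)
```

The design intent of such a late-fusion stage is that agreement across modalities pushes the fused score up, so a box missed (scored low) by one detector can still survive if the other two are confident.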
Book chapters on the topic "Multi-Objects perception":
Bruder, S., M. Farooq, and M. Bayoumi. "Multi-Sensor Integration for Robots Interacting with Autonomous Objects." In Active Perception and Robot Vision, 395–411. Berlin, Heidelberg: Springer Berlin Heidelberg, 1992. http://dx.doi.org/10.1007/978-3-642-77225-2_20.
Porquis, Lope Ben, Masashi Konyo, Naohisa Nagaya, and Satoshi Tadokoro. "Multi-contact Vacuum-Driven Tactile Display for Representing Force Vectors Applied on Grasped Objects." In Haptics: Perception, Devices, Mobility, and Communication, 218–21. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-31404-9_40.
Hummel, Emilie, Claudio Pacchierotti, Valérie Gouranton, Ronan Gaugne, Theophane Nicolas, and Anatole Lécuyer. "Haptic Rattle: Multi-modal Rendering of Virtual Objects Inside a Hollow Container." In Haptics: Science, Technology, Applications, 189–97. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-06249-0_22.
Altamirano Cabrera, Miguel, Juan Heredia, and Dzmitry Tsetserukou. "Tactile Perception of Objects by the User’s Palm for the Development of Multi-contact Wearable Tactile Displays." In Haptics: Science, Technology, Applications, 51–59. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58147-3_6.
Chen, Jian, Bingxi Jia, and Kaixiang Zhang. "Range Identification of Moving Objects." In Multi-View Geometry Based Visual Perception and Control of Robotic Systems, 17–126. Boca Raton, FL: CRC Press/Taylor & Francis Group, 2018. http://dx.doi.org/10.1201/9780429489211-7.
Chen, Jian, Bingxi Jia, and Kaixiang Zhang. "Motion Estimation of Moving Objects." In Multi-View Geometry Based Visual Perception and Control of Robotic Systems, 127–40. Boca Raton, FL: CRC Press/Taylor & Francis Group, 2018. http://dx.doi.org/10.1201/9780429489211-8.
Imanov, Elbrus, and Zubair Shah. "Applying Multi-layers Feature Fusion in SSD for Detection of Small-Scale Objects." In 11th International Conference on Theory and Application of Soft Computing, Computing with Words and Perceptions and Artificial Intelligence - ICSCCW-2021, 552–59. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-92127-9_74.
Blasco, Jose, and Francisco Rovira-Más. "Advances in local perception for orchard robotics." In Burleigh Dodds Series in Agricultural Science, 75–102. Burleigh Dodds Science Publishing, 2024. http://dx.doi.org/10.19103/as.2023.0124.03.
Ababsa, Fakhreddine, Iman Maissa Zendjebil, and Jean-Yves Didier. "3D Camera Tracking for Mixed Reality using Multi-Sensors Technology." In Geographic Information Systems, 2164–75. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2038-4.ch128.
Yu, Hong, Zhiyue Wang, Yuanqiu Liu, and Han Liu. "Boosting Visual Question Answering Through Geometric Perception and Region Features." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2023. http://dx.doi.org/10.3233/faia230607.
Conference papers on the topic "Multi-Objects perception":
Wang, Yi Ru, Yuchi Zhao, Haoping Xu, Sagi Eppel, Alán Aspuru-Guzik, Florian Shkurti, and Animesh Garg. "MVTrans: Multi-View Perception of Transparent Objects." In 2023 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2023. http://dx.doi.org/10.1109/icra48891.2023.10161089.
Nayeem, Rashida, Salah Bazzi, Mohsen Sadeghi, Reza Sharif Razavian, and Dagmar Sternad. "Multi-modal Interactive Perception in Human Control of Complex Objects." In 2023 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2023. http://dx.doi.org/10.1109/icra48891.2023.10160375.
Luciani, Annie, Sile O'Modhrain, Charlotte Magnusson, Jean-Loup Florens, and Damien Couroussé. "Perception of Virtual Multi-Sensory Objects: Some Musings on the Enactive Approach." In 2008 International Conference on Cyberworlds (CW). IEEE, 2008. http://dx.doi.org/10.1109/cw.2008.107.
Ji, Jia-Hui, Yu Zhao, Jing-Wen Bu, and Tao Zhang. "Point Cloud Holographic Encryption Display System involving 3D Face Recognition and air-writing." In 3D Image Acquisition and Display: Technology, Perception and Applications. Washington, D.C.: Optica Publishing Group, 2023. http://dx.doi.org/10.1364/3d.2023.jw2a.22.
Amiri, Saeid, Suhua Wei, Shiqi Zhang, Jivko Sinapov, Jesse Thomason, and Peter Stone. "Multi-modal Predicate Identification using Dynamically Learned Robot Controllers." In Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18). California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/645.
Martín-Martín, Roberto, and Oliver Brock. "Online interactive perception of articulated objects with multi-level recursive estimation based on task-specific priors." In 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2014). IEEE, 2014. http://dx.doi.org/10.1109/iros.2014.6942902.
Levine, Evan, Can Chen, Manuel Martinello, Mahdi Nezamabadi, Siu-Kei Tin, Jinwei Ye, and Francisco Imai. "Challenges and solutions in 3D object capture: High-precision multi-view camera calibration using a rotating stage; and 3D reconstruction of mirror-like objects using efficient ray coding." In 3D Image Acquisition and Display: Technology, Perception and Applications. Washington, D.C.: OSA, 2017. http://dx.doi.org/10.1364/3d.2017.dw4f.2.
Skaza, Maciej. "Between virtuality and reality: remarks about perception of city architecture." In Virtual City and Territory. Barcelona: Centre de Política de Sòl i Valoracions, 2016. http://dx.doi.org/10.5821/ctv.8055.
Faykus, Max Henry, Bradley Selee, and Melissa Smith. "Utilizing Neural Networks for Semantic Segmentation on RGB/LiDAR Fused Data for Off-road Autonomous Military Vehicle Perception." In WCX SAE World Congress Experience. Warrendale, PA: SAE International, 2023. http://dx.doi.org/10.4271/2023-01-0740.
Wippelhauser, András, Arpita Chand, Somak Datta Gupta, and Andras Varadi. "Performance and Network Architecture Options of Consolidated Object Data Service for Multi-RAT Vehicular Communication." In WCX SAE World Congress Experience. Warrendale, PA: SAE International, 2023. http://dx.doi.org/10.4271/2023-01-0857.