Journal articles on the topic 'Sensor fusion'

Consult the top 50 journal articles for your research on the topic 'Sensor fusion.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Kim, Gon Woo, Ji Min Kim, Nosan Kwak, and Beom Hee Lee. "Hierarchical sensor fusion for building a probabilistic local map using active sensor modules." Robotica 26, no. 3 (May 2008): 307–22. http://dx.doi.org/10.1017/s026357470700392x.

Full text
Abstract:
SUMMARY: An algorithm for three-level hierarchical sensor fusion has been proposed and applied to environment map building with enhanced accuracy and efficiency. The algorithm was realized through two new types of sensor modules, which are composed of a halogen lamp-based active vision sensor and a semicircular ultrasonic (US) and infrared (IR) sensor system. In the first-level fusion, the US and IR sensor information is utilized in terms of the geometric characteristics of the sensor location. In the second-level fusion, the outputs from the US and IR sensors are combined with the sheet of halogen light through a proposed rule base. In the third-level fusion, local maps from the first- and second-level fusion are updated in a probabilistic way to yield a very accurate local environment map. A practical implementation has been carried out to demonstrate the efficiency and accuracy of the proposed hierarchical sensor fusion algorithm in environment map building.
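The abstract does not spell out the probabilistic update. A common way to realize such a probabilistic local-map update is a Bayesian log-odds occupancy scheme, sketched below; the per-sensor inverse-model probabilities (0.6 for the US reading, 0.8 for the IR reading) are illustrative assumptions, not values from the paper.

```python
import math

def logit(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1.0 - p))

def update_cell(l_prior, p_occ_given_hit):
    """One Bayesian log-odds update of a map cell from a sensor reading.

    l_prior          -- current log-odds of the cell being occupied
    p_occ_given_hit  -- inverse sensor model: P(occupied | measurement)
    """
    return l_prior + logit(p_occ_given_hit)

# Example: fuse one ultrasonic hit (weak evidence) and one IR hit (stronger).
l = logit(0.5)            # uninformed prior
l = update_cell(l, 0.6)   # ultrasonic: P(occ|hit) = 0.6 (assumed value)
l = update_cell(l, 0.8)   # infrared:   P(occ|hit) = 0.8 (assumed value)
p = 1.0 - 1.0 / (1.0 + math.exp(l))
print(f"posterior occupancy: {p:.3f}")   # ~0.857
```

Because the update is additive in log-odds, evidence from heterogeneous sensors accumulates cell by cell without storing measurement histories.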
APA, Harvard, Vancouver, ISO, and other styles
2

Jalil Piran, Mohammad, Amjad Ali, and Doug Young Suh. "Fuzzy-Based Sensor Fusion for Cognitive Radio-Based Vehicular Ad Hoc and Sensor Networks." Mathematical Problems in Engineering 2015 (2015): 1–9. http://dx.doi.org/10.1155/2015/439272.

Full text
Abstract:
In wireless sensor networks, sensor fusion is employed to integrate the acquired data from diverse sensors to provide a unified interpretation. The foremost advantage of sensor fusion is that it obtains high-level information, in both statistical and definitive aspects, that cannot be attained by a single sensor. In this paper, we propose a novel sensor fusion technique based on fuzzy theory for our earlier proposed Cognitive Radio-based Vehicular Ad Hoc and Sensor Networks (CR-VASNET). In the proposed technique, we considered four input sensor readings (antecedents) and one output (consequent). The mobile nodes employed in CR-VASNET are assumed to be equipped with diverse sensors, which provide our antecedent variables: jerk, collision intensity, temperature, and inclination degree. Crash_Severity is considered the consequent variable. The processing and fusion of the diverse sensory signals are carried out by fuzzy logic. The accuracy and reliability of the proposed protocol, demonstrated by the simulation results, make it an applicable system for reducing the casualty rate of vehicle crashes.
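As an illustration of this style of fuzzy fusion, here is a minimal zero-order Sugeno sketch. The membership ranges, rules, and severity levels are entirely hypothetical; the paper's actual rule base and inference scheme are not given in the abstract.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical membership ranges -- the abstract does not publish them.
jerk_high   = lambda j: tri(j, 5, 15, 25)      # m/s^3
impact_high = lambda g: tri(g, 20, 60, 100)    # g-force
temp_high   = lambda t: tri(t, 60, 100, 140)   # deg C
tilt_high   = lambda d: tri(d, 30, 90, 150)    # degrees

def crash_severity(jerk, impact, temp, tilt):
    """Zero-order Sugeno fusion: each rule fires with the min of its
    antecedent memberships and votes for a crisp severity level."""
    rules = [
        (min(jerk_high(jerk), impact_high(impact)), 0.9),  # severe crash
        (min(temp_high(temp), tilt_high(tilt)),     0.7),  # fire/rollover
        (jerk_high(jerk),                           0.4),  # moderate
    ]
    num = sum(w * s for w, s in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 0.0

print(f"Crash_Severity = {crash_severity(14, 55, 30, 20):.2f}")   # 0.65
```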
APA, Harvard, Vancouver, ISO, and other styles
3

Abbas, Jabbar, Amin Al-Habaibeh, and Dai Zhong Su. "Sensor Fusion for Condition Monitoring System of End Milling Operations." Key Engineering Materials 450 (November 2010): 267–70. http://dx.doi.org/10.4028/www.scientific.net/kem.450.267.

Full text
Abstract:
This paper describes the utilisation of a multi-sensor fusion model using force, vibration, acoustic emission, strain, and sound sensors for monitoring tool wear in end milling operations. The paper applies the ASPS (Automated Sensor and Signal Processing Selection) approach for signal processing and sensor selection [1]. The sensory signals were processed using different signal processing methods to create a wide range of Sensory Characteristic Features (SCFs). The sensitivity of these SCFs to tool wear is investigated. The results indicate that, using the suggested approach, the sensor fusion system is capable of detecting machining faults better than a single sensor.
APA, Harvard, Vancouver, ISO, and other styles
4

Duan, Bo. "Sensor and sensor fusion technology in autonomous vehicles." Applied and Computational Engineering 52, no. 1 (March 27, 2024): 132–37. http://dx.doi.org/10.54254/2755-2721/52/20241470.

Full text
Abstract:
The perception and navigation of autonomous vehicles heavily rely on the utilization of sensor technology and the integration of sensor fusion techniques, which play an essential role in ensuring a secure and proficient understanding of the vehicle's environment. This paper highlights the significance of sensors in autonomous vehicles and how sensor fusion techniques enhance their capabilities. Firstly, the paper introduces the different types of sensors commonly used in autonomous vehicles and explains their principles of operation, strengths, and limitations in capturing essential information about the vehicle's environment. Next, the paper discusses various sensor fusion algorithms, such as Kalman filters and particle filters. Furthermore, the paper explores the challenges associated with sensor fusion and addresses the issue of handling sensor failures or uncertainties. The benefits of sensor fusion technology in autonomous vehicles are also presented. These include improved perception of the environment, enhanced object recognition and tracking, better trajectory planning, and enhanced safety through redundancy and fault tolerance. Lastly, the paper discusses recent advancements and highlights the integration of artificial intelligence and machine learning techniques to optimize sensor fusion algorithms and improve the overall autonomy of the vehicle. Following thorough analysis, it can be deduced that sensor and sensor fusion technology play a critical role in facilitating efficient and secure autonomous vehicle navigation within intricate surroundings.
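Where the paper surveys Kalman filters as a fusion algorithm, a minimal scalar example helps fix ideas: two range measurements of the same object from different sensors are fused by successive Kalman measurement updates. The sensor noise values are assumptions for the demo, not taken from the paper.

```python
def kalman_update(x, P, z, R):
    """One scalar Kalman measurement update.
    x, P -- state estimate and its variance
    z, R -- measurement and its variance
    """
    K = P / (P + R)          # Kalman gain
    x = x + K * (z - x)      # corrected estimate
    P = (1 - K) * P          # reduced uncertainty
    return x, P

# Fuse a radar range (noisy) and a lidar range (precise) for the same object.
x, P = 0.0, 1e6              # diffuse prior
x, P = kalman_update(x, P, z=52.0, R=4.0)   # radar: sigma = 2 m (assumed)
x, P = kalman_update(x, P, z=50.5, R=0.25)  # lidar: sigma = 0.5 m (assumed)
print(f"fused range: {x:.2f} m, variance: {P:.3f}")   # ~50.59 m, 0.235
```

Note how the fused variance (0.235) is lower than either sensor's alone, which is the basic payoff of fusion.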
APA, Harvard, Vancouver, ISO, and other styles
5

Yeong, De Jong, Gustavo Velasco-Hernandez, John Barry, and Joseph Walsh. "Sensor and Sensor Fusion Technology in Autonomous Vehicles: A Review." Sensors 21, no. 6 (March 18, 2021): 2140. http://dx.doi.org/10.3390/s21062140.

Full text
Abstract:
With the significant advancement of sensor and communication technology and the reliable application of obstacle detection techniques and algorithms, automated driving is becoming a pivotal technology that can revolutionize the future of transportation and mobility. Sensors are fundamental to the perception of vehicle surroundings in an automated driving system, and the use and performance of multiple integrated sensors can directly determine the safety and feasibility of automated driving vehicles. Sensor calibration is the foundation block of any autonomous system and its constituent sensors and must be performed correctly before sensor fusion and obstacle detection processes may be implemented. This paper evaluates the capabilities and the technical performance of sensors which are commonly employed in autonomous vehicles, primarily focusing on a large selection of vision cameras, LiDAR sensors, and radar sensors and the various conditions in which such sensors may operate in practice. We present an overview of the three primary categories of sensor calibration and review existing open-source calibration packages for multi-sensor calibration and their compatibility with numerous commercial sensors. We also summarize the three main approaches to sensor fusion and review current state-of-the-art multi-sensor fusion techniques and algorithms for object detection in autonomous driving applications. The current paper, therefore, provides an end-to-end review of the hardware and software methods required for sensor fusion object detection. We conclude by highlighting some of the challenges in the sensor fusion field and propose possible future research directions for automated driving systems.
APA, Harvard, Vancouver, ISO, and other styles
6

Ishikawa, Masatoshi, and Hiro Yamasaki. "Special issue "Sensor Fusion". Sensor Fusion Project." Journal of the Robotics Society of Japan 12, no. 5 (1994): 650–55. http://dx.doi.org/10.7210/jrsj.12.650.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Ye, Chunxuan, Zinan Lin, Alpaslan Demir, and Yan Li. "Performance Analysis of Different Types of Sensor Networks for Cognitive Radios." Journal of Electrical and Computer Engineering 2012 (2012): 1–15. http://dx.doi.org/10.1155/2012/632762.

Full text
Abstract:
We consider the problem of using multiple sensors to detect whether a certain spectrum is occupied or not. Each sensor sends its spectrum sensing result to a data fusion center, which combines all the results for an overall decision. Given the existence of wireless fading on the channels from the sensors to the data fusion center, we examine three different mechanisms for the transmissions from sensors to the data fusion center: (1) direct transmissions; (2) transmissions with the assistance of relays; and (3) transmissions with the assistance of an intermediate fusion helper, which fuses the sensing results from the sensors and sends the fused result to the data fusion center. For each mechanism, we analyze the correct probability of the overall decision made by the data fusion center. Our evaluation establishes that a sensor network with an intermediate fusion helper performs almost as well as a sensor network with relays, while providing energy and spectral advantages. Both networks significantly outperform the sensor network without a relay or intermediate fusion helper. Such analysis facilitates the design of sensor networks. Furthermore, we generalize this evaluation to sensor networks with an arbitrary number of sensors and to sensor networks applying various information combining rules. Our simulations validate the analytical results.
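As a simplified baseline for the kind of correct-probability analysis the abstract describes (the paper additionally models fading channels and several combining rules), the correct-decision probability of a majority-vote fusion center with independent, identical sensors and error-free reporting channels reduces to a binomial tail sum:

```python
from math import comb

def majority_correct(n, p):
    """Probability that a majority of n independent sensors (each correct
    with probability p) yields the correct overall decision at the fusion
    center, assuming error-free reporting channels."""
    k_min = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k_min, n + 1))

# Illustration: 5 sensors at 80% individual accuracy beat any single sensor.
print(f"single sensor: 0.800, majority of 5: {majority_correct(5, 0.8):.3f}")
# prints 0.942
```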
APA, Harvard, Vancouver, ISO, and other styles
8

Senel, Numan, Gordon Elger, Klaus Kefferpütz, and Kristina Doycheva. "Multi-Sensor Data Fusion for Real-Time Multi-Object Tracking." Processes 11, no. 2 (February 7, 2023): 501. http://dx.doi.org/10.3390/pr11020501.

Full text
Abstract:
Sensor data fusion is essential for environmental perception within smart traffic applications. By using multiple sensors cooperatively, the accuracy and reliability of the perception are increased, which is crucial for critical traffic scenarios or under bad weather conditions. In this paper, a modular real-time capable multi-sensor fusion framework is presented and tested to fuse data on the object-list level from distributed automotive sensors (cameras, radar, and LiDAR). The modular multi-sensor fusion architecture receives an object list (untracked objects) from each sensor. The fusion framework combines classical data fusion algorithms, as it contains a coordinate transformation module, an object association module (Hungarian algorithm), an object tracking module (unscented Kalman filter), and a movement compensation module. Due to the modular design, the fusion framework is adaptable and does not rely on the number of sensors or their types. Moreover, because of this adaptable design, the method continues to operate in case of an individual sensor failure. This is an essential feature for safety-critical applications. The architecture targets environmental perception in challenging time-critical applications. The developed fusion framework is tested using simulation and public-domain experimental data. Using the developed framework, sensor fusion is obtained well below 10 milliseconds of computing time using an AMD Ryzen 7 5800H mobile processor and the Python programming language. Furthermore, the object-level multi-sensor approach enables the detection of changes in the extrinsic calibration of the sensors and of potential sensor failures. A concept was developed to use the multi-sensor framework to identify sensor malfunctions. This feature will become extremely important in ensuring the functional safety of the sensors for autonomous driving.
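The object association step the abstract mentions (Hungarian algorithm) can be sketched compactly with SciPy's linear_sum_assignment; the gating threshold and the positions below are made-up demo values, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(tracks, detections, gate=5.0):
    """Match tracked object positions to new detections by minimizing total
    Euclidean distance (Hungarian algorithm); pairs farther apart than
    `gate` metres are rejected as non-matches."""
    cost = np.linalg.norm(tracks[:, None, :] - detections[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= gate]

tracks     = np.array([[10.0, 2.0], [40.0, -1.0]])            # predicted positions
detections = np.array([[41.0, -0.5], [10.5, 2.2], [90.0, 0.0]])
print(associate(tracks, detections))   # [(0, 1), (1, 0)]
```

The unmatched third detection would then typically spawn a new track, and matched pairs feed the unscented Kalman filter update.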
APA, Harvard, Vancouver, ISO, and other styles
9

Vervečka, Martynas. "SENSOR NETWORK DATA FUSION METHODS." Mokslas - Lietuvos ateitis 2, no. 1 (February 28, 2010): 50–53. http://dx.doi.org/10.3846/mla.2010.011.

Full text
Abstract:
Sensor network data fusion is widely used in warfare, in areas such as automatic target recognition, battlefield surveillance, automatic vehicle control, multiple target surveillance, etc. Non-military examples include medical equipment status monitoring and intelligent homes. The paper describes sensor network topologies, the advantages of sensor networks over isolated sensors, and the most common network topologies with their advantages and disadvantages.
APA, Harvard, Vancouver, ISO, and other styles
10

Granshaw, Stuart I. "Sensor fusion." Photogrammetric Record 35, no. 169 (March 2020): 6–9. http://dx.doi.org/10.1111/phor.12311.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Sasiadek, J. Z. "Sensor fusion." Annual Reviews in Control 26, no. 2 (January 2002): 203–28. http://dx.doi.org/10.1016/s1367-5788(02)00045-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Sasiadek, Jurek Z. "Sensor Fusion." IFAC Proceedings Volumes 33, no. 27 (September 2000): 1–3. http://dx.doi.org/10.1016/s1474-6670(17)37896-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

FABRE, S., A. APPRIOU, and X. BRIOTTET. "SENSOR FUSION INTEGRATING CONTEXTUAL INFORMATION." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 09, no. 03 (June 2001): 369–409. http://dx.doi.org/10.1142/s0218488501000855.

Full text
Abstract:
The application of multi-sensor fusion, which aims at recognizing a state among a set of hypotheses for object classification, is of major interest with regard to the performance improvement brought by sensor complementarity. Nevertheless, this requires taking into account the most accurate information possible and taking advantage of the statistical learning from previous measurements acquired by the sensors. Classical probabilistic fusion methods underperform when the prior learning is not representative of the real measurements provided by the sensors. The theory of evidence is then introduced to overcome this disadvantage by integrating a further piece of information: the context of the sensor acquisitions. In this paper, we propose a formalism for modeling sensor reliability with respect to context, which leads to two methods of integration when all the hypotheses associated with the objects of the scene acquired by the sensors have been previously learnt: the first integrates this further information into the fusion rule as degrees of trust, and the second models the sensor reliability directly as mass functions. These two methods are based on the theory of fuzzy events. Afterwards, we examine the evolution of these two methods in the case where prior learning is unavailable for a hypothesis associated with an object of the scene, and compare them in order to derive a global method for integrating contextual information into the fusion process.
APA, Harvard, Vancouver, ISO, and other styles
14

Akamatsu, Motoyuki, and Takeshi Kasai. "Special issue "Sensor Fusion". Sensor Fusion in Human." Journal of the Robotics Society of Japan 12, no. 5 (1994): 656–63. http://dx.doi.org/10.7210/jrsj.12.656.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Hyndhavi, M., et al. "DEVELOPMENT OF VEHICLE TRACKING USING SENSOR FUSION." INFORMATION TECHNOLOGY IN INDUSTRY 9, no. 2 (April 1, 2021): 731–39. http://dx.doi.org/10.17762/itii.v9i2.406.

Full text
Abstract:
The development of vehicle tracking using sensor fusion is presented in this paper. Advanced driver assistance systems (ADAS) have become more popular in recent years. These systems use sensor information for real-time control. To improve accuracy and robustness, especially in the presence of environmental noise such as varying lighting and weather conditions, the fusion of sensors has been the center of attention in recent studies. Faced with complex traffic conditions, a single sensor is unable to meet the security requirements of ADAS and autonomous driving. The common environment perception sensors are radar, camera, and lidar, each of which has both pros and cons. Sensor fusion is a necessary technology for autonomous driving, as it provides a better view and understanding of the vehicle's surroundings. We mainly focus on highway scenarios that enable an autonomous car to comfortably follow other cars at various speeds while keeping a secure distance, and we mix the advantages of both sensors with a sensor fusion approach. The radar and vision sensor information are fused to produce robust and accurate measurements, and a comparison between using only radar sensors and fusing both camera and radar sensors is presented. The algorithm is described along with simulation results obtained using MATLAB.
APA, Harvard, Vancouver, ISO, and other styles
16

Gu, Junyi, Artjom Lind, Tek Raj Chhetri, Mauro Bellone, and Raivo Sell. "End-to-End Multimodal Sensor Dataset Collection Framework for Autonomous Vehicles." Sensors 23, no. 15 (July 29, 2023): 6783. http://dx.doi.org/10.3390/s23156783.

Full text
Abstract:
Autonomous driving vehicles rely on sensors for the robust perception of their surroundings. Such vehicles are equipped with multiple perceptive sensors with a high level of redundancy to ensure safety and reliability in any driving condition. However, multi-sensor systems such as camera, LiDAR, and radar raise requirements related to sensor calibration and synchronization, which are the fundamental building blocks of any autonomous system. On the other hand, sensor fusion and integration have become important aspects of autonomous driving research and directly determine the efficiency and accuracy of advanced functions such as object detection and path planning. Classical model-based estimation and data-driven models are two mainstream approaches to achieving such integration. Most recent research is shifting to the latter, which shows high robustness in real-world applications but requires large quantities of data to be collected, synchronized, and properly categorized. However, there are two major research gaps in existing works: (i) they lack fusion (and synchronization) of multiple sensors (camera, LiDAR, and radar); and (ii) they lack a generic, scalable, and user-friendly end-to-end implementation. To generalize the implementation of the multi-sensor perceptive system, we introduce an end-to-end generic sensor dataset collection framework that includes both hardware deployment solutions and sensor fusion algorithms. The framework prototype integrates a diverse set of sensors, such as camera, LiDAR, and radar. Furthermore, we present a universal toolbox to calibrate and synchronize three types of sensors based on their characteristics. The framework also includes fusion algorithms that utilize the merits of the three sensors and fuse their sensory information in a manner that is helpful for object detection and tracking research. The generality of this framework makes it applicable to any robotic or autonomous application and suitable for quick and large-scale practical deployment.
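Sensor synchronization of the kind the toolbox addresses often reduces to nearest-timestamp matching across streams. A minimal sketch follows; the sensor rates and the 20 ms tolerance are assumptions for the demo, not the framework's actual parameters.

```python
import numpy as np

def sync_nearest(t_ref, t_other, tol=0.02):
    """Pair each reference timestamp (e.g., LiDAR sweeps) with the nearest
    timestamp from another sensor (e.g., camera frames); pairs farther
    apart than `tol` seconds are dropped as unsynchronizable."""
    idx = np.searchsorted(t_other, t_ref)
    pairs = []
    for i, t in zip(idx, t_ref):
        cands = [j for j in (i - 1, i) if 0 <= j < len(t_other)]
        j = min(cands, key=lambda j: abs(t_other[j] - t))
        if abs(t_other[j] - t) <= tol:
            pairs.append((t, t_other[j]))
    return pairs

lidar  = np.arange(0.0, 0.5, 0.1)            # 10 Hz sweeps (seconds)
camera = np.arange(0.005, 0.5, 1 / 30)       # 30 fps frames
print(sync_nearest(lidar, camera)[:3])       # first three matched pairs
```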
APA, Harvard, Vancouver, ISO, and other styles
17

Ehlenbröker, Jan-Friedrich, Uwe Mönks, and Volker Lohweg. "Sensor defect detection in multisensor information fusion." Journal of Sensors and Sensor Systems 5, no. 2 (October 18, 2016): 337–53. http://dx.doi.org/10.5194/jsss-5-337-2016.

Full text
Abstract:
In industrial processes, a vast variety of different sensors is increasingly used to measure and control processes, machines, and logistics. One way to handle the resulting large amount of data created by hundreds or even thousands of different sensors in an application is to employ information fusion systems. Information fusion systems, e.g. for condition monitoring, combine different sources of information, such as sensors, to determine the state of a complex system. The result of such an information fusion process is regarded as a health indicator of the complex system. Information fusion approaches are therefore applied to, e.g., automatically signal a reduction in production quality or detect possibly dangerous situations. Considering the importance of sensors in the information fusion systems described above and in industrial processes in general, a defective sensor has several negative consequences. It may lead to machine failure, e.g. when the wear and tear of a machine is not detected sufficiently in advance. In this contribution we present a method to detect faulty sensors by computing the consistency between sensor values. The proposed sensor defect detection algorithm exemplarily utilises the structure of a multilayered group-based sensor fusion algorithm. Defect detection results of the proposed method for different test cases and the method's capability to detect a number of typical sensor defects are shown.
APA, Harvard, Vancouver, ISO, and other styles
18

Qu, Yuanhao, Minghao Yang, Jiaqing Zhang, Wu Xie, Baohua Qiang, and Jinlong Chen. "An Outline of Multi-Sensor Fusion Methods for Mobile Agents Indoor Navigation." Sensors 21, no. 5 (February 25, 2021): 1605. http://dx.doi.org/10.3390/s21051605.

Full text
Abstract:
Indoor autonomous navigation refers to the perception and exploration abilities of mobile agents in unknown indoor environments with the help of various sensors. It is a basic and one of the most important functions of mobile agents. In spite of the high performance of single-sensor navigation methods, multi-sensor fusion methods still have the potential to improve the perception and navigation abilities of mobile agents. This work summarizes multi-sensor fusion methods for mobile agents' navigation by: (1) analyzing and comparing the advantages and disadvantages of single sensors in the task of navigation; (2) introducing the mainstream technologies of multi-sensor fusion methods, including various combinations of sensors and several widely recognized multi-modal sensor datasets. Finally, we discuss the likely technical trends of multi-sensor fusion methods, especially their challenges in practical navigation environments.
APA, Harvard, Vancouver, ISO, and other styles
19

Lin, Yung Chin, Kuo Lan Su, and Cheng Yun Chung. "Development of Intelligent Fire Detection Module Using Optic-Sensors." Applied Mechanics and Materials 300-301 (February 2013): 440–43. http://dx.doi.org/10.4028/www.scientific.net/amm.300-301.440.

Full text
Abstract:
The paper proposes an adaptive fusion algorithm using competitive sensors for a fire detection module and uses computer simulation results to select the optimal weight values for each optic sensor. We then design the fire detection module using the tuned weight values of the optic sensors. The competitive flame sensor is an ultraviolet sensor (R2868). The controller of the module is a HOLTEK microchip, which acquires the detection signals from the optic sensors through I/O pins and transmits the detection signals of all sensors to the computer via a wired serial interface. The adaptive fusion algorithm tunes the weight values according to the decision output of the fusion center. The fusion center uses a Bayesian estimation method to decide whether a fire event is true or not. We set the improved weight values for each optic sensor in the module. The simulation and experimental implementation results demonstrate that the proposed algorithms can compute adequate weight values.
APA, Harvard, Vancouver, ISO, and other styles
20

Nobis, Felix, Ehsan Shafiei, Phillip Karle, Johannes Betz, and Markus Lienkamp. "Radar Voxel Fusion for 3D Object Detection." Applied Sciences 11, no. 12 (June 17, 2021): 5598. http://dx.doi.org/10.3390/app11125598.

Full text
Abstract:
Automotive traffic scenes are complex due to the variety of possible scenarios, objects, and weather conditions that need to be handled. In contrast to more constrained environments, such as automated underground trains, automotive perception systems cannot be tailored to a narrow field of specific tasks but must handle an ever-changing environment with unforeseen events. As currently no single sensor is able to reliably perceive all relevant activity in the surroundings, sensor data fusion is applied to perceive as much information as possible. Data fusion of different sensors and sensor modalities on a low abstraction level enables the compensation of sensor weaknesses and misdetections among the sensors before the information-rich sensor data are compressed (and information thereby lost) in sensor-individual object detection. This paper develops a low-level sensor fusion network for 3D object detection, which fuses lidar, camera, and radar data. The fusion network is trained and evaluated on the nuScenes data set. On the test set, fusion of radar data increases the resulting AP (Average Precision) detection score by about 5.1% in comparison to the baseline lidar network. The radar sensor fusion proves especially beneficial in inclement conditions such as rain and night scenes. Fusing additional camera data contributes positively only in conjunction with the radar fusion, which shows that interdependencies of the sensors are important for the detection result. Additionally, the paper proposes a novel loss to handle the discontinuity of a simple yaw representation for object detection. Our updated loss increases the detection and orientation estimation performance for all sensor input configurations. The code for this research has been made available on GitHub.
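The paper's exact yaw loss is not given in the abstract. A common wrap-aware construction, shown below as a point of reference rather than the authors' method, compares angles through their sine/cosine embedding, which removes the discontinuity at the +/- pi boundary:

```python
import numpy as np

def yaw_l1_naive(pred, target):
    """Naive L1 on raw yaw angles: reports a near-2*pi error for nearly
    identical orientations on opposite sides of the +/- pi wrap-around."""
    return np.abs(pred - target)

def yaw_loss_sincos(pred, target):
    """Wrap-aware alternative: compare angles through their sine/cosine
    embedding, which is continuous across the +/- pi boundary."""
    return np.hypot(np.sin(pred) - np.sin(target),
                    np.cos(pred) - np.cos(target))

a, b = np.pi - 0.01, -np.pi + 0.01     # physically only 0.02 rad apart
print(f"naive: {yaw_l1_naive(a, b):.3f}, sin/cos: {yaw_loss_sincos(a, b):.3f}")
# naive: 6.263, sin/cos: 0.020
```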
APA, Harvard, Vancouver, ISO, and other styles
21

Xu, Shi Jun, Li Hong, and Yong Hong Hu. "A Distributed Bayesian Fusion Algorithm Research." Advanced Materials Research 181-182 (January 2011): 1006–12. http://dx.doi.org/10.4028/www.scientific.net/amr.181-182.1006.

Full text
Abstract:
In this paper, the signal detection problem in which distributed sensors are used and a global decision is desired is considered. Local decisions from the sensors are fed to the data fusion center, which then yields a global decision based on a fusion rule. Based on the data fusion theory of the Bayesian criterion applied to a distributed parallel structure, the fusion rules at the fusion center, the decision rules of the sensors, and the results of computer simulations for two identical sensors, two different sensors, and three identical sensors are presented. The results of the computer simulations show that the performance of the fusion system is improved compared with a single sensor. In the case of three identical sensors in the fusion system, the Bayesian risk is reduced by 26.5% compared with a single sensor.
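For a parallel structure with independent local detectors, the Bayes-optimal fusion rule is the classical Chair-Varshney log-likelihood test; whether the paper uses exactly this form is not stated in the abstract. A sketch with hypothetical sensor operating points:

```python
from math import log

def chair_varshney(decisions, pd, pf, prior_ratio=1.0):
    """Bayes-optimal fusion of binary local decisions u_i (1 = target,
    0 = no target) from independent sensors with detection probabilities
    pd[i] and false-alarm probabilities pf[i] (Chair-Varshney rule)."""
    llr = log(prior_ratio)
    for u, d, f in zip(decisions, pd, pf):
        llr += log(d / f) if u == 1 else log((1 - d) / (1 - f))
    return 1 if llr > 0 else 0

# Two identical sensors plus one weaker sensor; operating points are
# hypothetical demo values, not taken from the paper.
pd = [0.9, 0.9, 0.7]
pf = [0.1, 0.1, 0.3]
# llr > 0: the third sensor breaks the tie -> decide target present (1)
print(chair_varshney([1, 0, 1], pd, pf))
```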
APA, Harvard, Vancouver, ISO, and other styles
22

Gu, Lei, and Juan Meng. "Wireless Sensor System of UAV Infrared Image and Visible Light Image Registration Fusion." Journal of Electrical and Computer Engineering 2022 (June 2, 2022): 1–15. http://dx.doi.org/10.1155/2022/9245014.

Full text
Abstract:
The application of multi-source sensors to drones requires high-quality images. When simultaneously interpreting two or more multi-sensor images of the same scene or target, the images obtained by the UAV's sensors are limited by imaging time and shooting angle; the images may not be aligned in spatial position, which affects the fusion result. During the shooting process, depending on the imaging sensor, imaging angle, and environmental conditions, the various sensor images obtained will exhibit rotation, translation, and other spatial deformations, so direct image fusion is impossible. Therefore, before multi-sensor image fusion, an image registration process must be completed to ensure that the two images are spatially aligned. This paper analyzes the underlying principles and, based on the Powell search algorithm and an improved walking algorithm, proposes an algorithm that combines the two. The paper also studies several traditional image fusion methods. Combined with the fusion optimization algorithm proposed in this paper, the approach greatly reduces calculation time and improves the performance and success rate of the optimization algorithm.
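The Powell half of the proposed combination can be sketched with SciPy on a translation-only registration model; the improved walking algorithm is not reproduced here, and the synthetic images, similarity metric (sum of squared differences), and motion model are stand-ins, not the paper's setup.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from scipy.optimize import minimize

def register_translation(fixed, moving):
    """Estimate the (dy, dx) translation aligning `moving` to `fixed` by
    minimizing the sum of squared differences with Powell's method."""
    def ssd(t):
        warped = nd_shift(moving, t, order=1, mode="nearest")
        return float(np.sum((fixed - warped) ** 2))
    res = minimize(ssd, x0=[0.0, 0.0], method="Powell")
    return res.x

# Synthetic demo: a blob shifted by (3, -2) pixels should be recovered.
y, x = np.mgrid[0:64, 0:64]
fixed  = np.exp(-((y - 32) ** 2 + (x - 32) ** 2) / 50.0)
moving = np.exp(-((y - 29) ** 2 + (x - 34) ** 2) / 50.0)
print(register_translation(fixed, moving))   # approximately [ 3. -2.]
```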
APA, Harvard, Vancouver, ISO, and other styles
23

Hou, Maowen, and Weiyun Wang. "Sensor Mathematical Model Data Fusion Biobjective Optimization." Journal of Sensors 2022 (January 10, 2022): 1–11. http://dx.doi.org/10.1155/2022/1612715.

Full text
Abstract:
Sensors are an important tool for quantifying changes and an important part of any information acquisition system; ever higher performance and accuracy are demanded of them. In this paper, a highly sensitive fiber optic sensor for measuring temperature and refractive index is prepared using femtosecond laser micromachining and fiber fusion splicing. The multimode fiber is first fusion-spliced to single-mode fiber, and then the multimode fiber is perforated using a femtosecond laser. The incorporation of data-model sensors has also led to a rapid increase in the development and application of sensors. Based on the design concept and technical approach of a wireless sensor network system, a general development plan for an indoor environmental monitoring system is proposed, including the system architecture and functional definition, wireless communication protocols, and design methods for node applications. The sensor has obvious advantages over traditional electrical sensors: it is resistant to electromagnetic interference, electrically insulating, corrosion resistant, low loss, small, and highly accurate. The host computer program of the indoor environment monitoring system was developed in Visual Studio using C# to implement the monitoring, display, and alarm functions of the monitoring network. The mutual integration of the sensors and the data model is demonstrated in the application.
APA, Harvard, Vancouver, ISO, and other styles
24

Ding, Lei, Chengjun Xu, Fangzi Cheng, and Mingkun Guo. "An Automatic Digital Camouflage Pattern Generation Method based on Texture Structure." Frontiers in Science and Engineering 3, no. 6 (June 20, 2023): 61–73. http://dx.doi.org/10.54691/fse.v3i6.5133.

Full text
Abstract:
This article investigates the problem of burst-point signal detection based on multiple sensors to obtain an overall decision. In the explosion detection system studied here, sensors independently transmit their decisions on the measured explosion information to a data fusion processing terminal, which produces the overall decision based on fusion rules. The researchers focus on the data fusion theory of a distributed parallel detection system for burst-point data fusion based on the Bayesian rule. The paper obtains the data fusion rule and sensor decision criteria that make the overall system optimal, and proposes a nonlinear Gauss-Seidel variable algorithm that optimizes the data fusion rule and multi-sensor decision criteria. The data fusion problem when detecting burst-point signals with two different and with three identical types of sensors is addressed. The proposed data fusion algorithm is validated and simulated through computer experiments on detection with three types of sensors. The experimental data show that the performance of a data fusion system based on Bayesian detection is significantly improved compared with acquiring burst-point information from a single sensor. In the experiment, the Bayesian risk of missed detection of the burst-point signal for the data fusion system using three sensors of identical performance is reduced by 32.7%.
APA, Harvard, Vancouver, ISO, and other styles
25

Fu, Yuan, Xiang Chen, Yu Liu, Chan Son, and Yan Yang. "Multi-Source Information Fusion Fault Diagnosis for Gearboxes Based on SDP and VGG." Applied Sciences 12, no. 13 (June 21, 2022): 6323. http://dx.doi.org/10.3390/app12136323.

Full text
Abstract:
A decision-level approach using multi-sensor-based symmetry dot pattern (SDP) analysis with a Visual Geometry Group 16 network (VGG16) fault diagnosis model for multi-source information fusion was proposed to realize accurate and comprehensive fault diagnosis of gearbox gear teeth. Firstly, the SDP technique was used to perform feature-level fusion of the gear fault signals collected by multiple sensors, providing an initial visualization of the vibration states of the gear teeth in different conditions. Secondly, the SDP images obtained were combined with the deep learning VGG16 network. In this way, the local diagnostic results of each sensor can be easily obtained. Finally, the local diagnostic results of each sensor were combined with DS evidence theory to achieve decision-level fusion, which better realizes comprehensive fault detection for gearbox gear teeth. Before fusion, the accuracies of the three sensors were 96.43%, 93.97%, and 93.28%, respectively. When sensor 1 and sensor 2 were fused, the accuracy reached 99.93%, which is 3.52% and 6.34% better than when using sensors 1 and 2, respectively, alone. When sensor 1 and sensor 3 were fused, the accuracy reached 99.96%, marking an improvement of 3.36% and 6.85% over individual use of sensors 1 and 3, respectively. When sensor 2 and sensor 3 were fused, the accuracy reached 99.40%, which is 5.78% and 6.56% better than individual use of sensors 2 and 3, respectively. When the three sensors were fused simultaneously, the accuracy reached 99.98%, which is 3.68%, 6.40%, and 7.18% better than individual use of sensors 1, 2, and 3, respectively.
APA, Harvard, Vancouver, ISO, and other styles
26

Shao, Yuxuan. "Survey on Indoor Positioning and Navigation Technologies Based on Sensors Fusion." Highlights in Science, Engineering and Technology 94 (April 26, 2024): 62–69. http://dx.doi.org/10.54097/rvmbc424.

Full text
Abstract:
Due to the growth of productivity and the development of miniature sensors, localization and navigation algorithms are better suited today than ever. However, indoor localization still faces many new problems, and as usage scenarios proliferate, variable and complex environmental information brings many new challenges to indoor localization and navigation. Currently, the widely used sensors on the market are vision sensors, ToF sensors, and wireless sensors. These three types of sensors are used under different conditions and have different accuracies and measurement ranges. Sensor fusion algorithms can combine the strengths and offset the weaknesses of different sensors to achieve better performance in complex situations. The paper presents a survey of indoor positioning and navigation technologies based on the fusion of different sensors, together with relevant applications. We demonstrate the benefits of sensor fusion algorithms in practical applications by performing sensor fusion based on both a single type of sensor and different types of sensors.
APA, Harvard, Vancouver, ISO, and other styles
27

Hsiao, Rong Shue, Ding Bing Lin, Hsin Piao Lin, and Jin Wang Zhou. "Multimodal Sensor Fusion for Indoor Occupancy Determination." Applied Mechanics and Materials 764-765 (May 2015): 1319–23. http://dx.doi.org/10.4028/www.scientific.net/amm.764-765.1319.

Full text
Abstract:
Pyroelectric infrared (PIR) sensors can detect the presence of humans without requiring them to carry any device, and they are widely used for human presence detection in home/office automation systems to improve energy efficiency. However, PIR detection is based on the movement of occupants; for occupancy detection, PIR sensors have an inherent limitation when occupants remain relatively still. Multisensor fusion technology takes advantage of redundant, complementary, or more timely information from different modal sensors and is considered an effective approach for solving the uncertainty and unreliability problems of sensing. In this paper, we propose a simple multimodal sensor fusion algorithm, which is very suitable for execution on the sensor nodes of wireless sensor networks. The inference algorithm was evaluated for sensor detection accuracy and compared to multisensor fusion using dynamic Bayesian networks. The experimental results showed that a detection accuracy of 97% in room occupancy can be achieved, very close to that of the dynamic Bayesian networks.
APA, Harvard, Vancouver, ISO, and other styles
28

Chou, Jung-Chuan, and Chien-Cheng Chen. "WEIGHTED DATA FUSION FOR FLEXIBLE pH SENSORS ARRAY." Biomedical Engineering: Applications, Basis and Communications 21, no. 06 (December 2009): 365–69. http://dx.doi.org/10.4015/s1016237209001465.

Full text
Abstract:
Data fusion is a common statistical method that can be applied in the sensor development field, for example to multisensors and sensor arrays. In this study, the analytic data fusion methods, consisting of the arithmetic mean and weighted data fusion, are used to estimate the measured pH data of a flexible pH sensor array. The main part of the flexible 2 × 4 pH sensor array was fabricated by screen printing, and the ruthenium dioxide (RuO2) thin film on each sensor of the array was deposited by radio-frequency (RF) sputtering. According to the experimental results, the pH values estimated by the weighted data fusion method are more accurate than those obtained by the arithmetic mean method. Furthermore, the use of the flexible sensor array to detect the pH values of different commercial drinks is also investigated.
APA, Harvard, Vancouver, ISO, and other styles
29

Gong, Yuming, Zeyu Ma, Meijuan Wang, Xinyang Deng, and Wen Jiang. "A New Multi-Sensor Fusion Target Recognition Method Based on Complementarity Analysis and Neutrosophic Set." Symmetry 12, no. 9 (August 31, 2020): 1435. http://dx.doi.org/10.3390/sym12091435.

Full text
Abstract:
To improve the efficiency, accuracy, and intelligence of target detection and recognition, multi-sensor information fusion technology has broad application prospects in many areas. Compared with a single sensor, multi-sensor data contain more target information, and effective fusion of multi-source information can improve the accuracy of target recognition. However, the recognition capabilities of different sensors differ during target recognition, and the complementarity between sensors needs to be analyzed during information fusion. This paper proposes a multi-sensor fusion recognition method based on complementarity analysis and neutrosophic sets. The proposed method has two main parts: complementarity analysis and data fusion. Complementarity analysis applies the trained sensors to the features extracted from the verification set and obtains the recognition results of the verification set; based on these results, the multi-sensor complementarity vector is obtained. Then the recognition probabilities output by the sensors and the complementarity vector are used to generate multiple neutrosophic sets. Next, the generated neutrosophic sets are merged within each group through the simplified neutrosophic weighted average (SNWA) operator. Finally, the neutrosophic set is converted into a crisp number, and the maximum value gives the recognition result. The practicality and effectiveness of the proposed method are demonstrated through examples.
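The SNWA operator the abstract names is commonly defined (following Ye's work on simplified neutrosophic sets) as a weighted geometric aggregation of the truth, indeterminacy, and falsity memberships; whether the paper uses exactly this form and this score function is not stated in the abstract. A sketch with made-up sensor reports:

```python
def snwa(numbers, weights):
    """Simplified neutrosophic weighted average of triples (T, I, F):
    truth, indeterminacy, and falsity memberships, as commonly defined."""
    T, I, F = 1.0, 1.0, 1.0
    for (t, i, f), w in zip(numbers, weights):
        T *= (1.0 - t) ** w
        I *= i ** w
        F *= f ** w
    return (1.0 - T, I, F)

def score(n):
    """A common score function mapping a neutrosophic triple to a crisp number."""
    t, i, f = n
    return (2.0 + t - i - f) / 3.0

# Two sensors report on the hypothesis "target is class A"; the second is
# judged more complementary/reliable, so it carries more weight.
reports = [(0.7, 0.2, 0.1), (0.9, 0.05, 0.05)]
fused = snwa(reports, weights=[0.4, 0.6])
print(f"fused = {tuple(round(v, 3) for v in fused)}, score = {score(fused):.3f}")
```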
APA, Harvard, Vancouver, ISO, and other styles
30

Cai, Yang, Sean Hackett, Ben Graham, Florian Alber, and Mel Siegel. "Heads-Up Lidar Imaging with Sensor Fusion." Electronic Imaging 2020, no. 13 (January 26, 2020): 338–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.13.ervr-338.

Full text
Abstract:
In this paper, we present a novel Lidar imaging system for heads-up display. The imaging system consists of a one-dimensional laser distance sensor and IMU sensors, including an accelerometer and a gyroscope. By fusing the sensory data as the user moves their head, it creates a three-dimensional point cloud for mapping the surrounding space. Compared to prevailing 2D and 3D Lidar imaging systems, the proposed system has no moving parts; it is simple, lightweight, and affordable. Our tests show that the horizontal and vertical profile accuracy of the points versus the floor plan is 3 cm on average. For bump detection, the minimal detectable step height is 2.5 cm. The system can be applied to first-response scenarios such as firefighting, and to detecting bumps on pavement for low-vision pedestrians.
APA, Harvard, Vancouver, ISO, and other styles
31

Zabihi, Mehdi, Bhawya, Parikshit Pandya, Brooke R. Shepley, Nicholas J. Lester, Syed Anees, Anthony R. Bain, Simon Rondeau-Gagné, and Mohammed Jalal Ahamed. "Inertial and Flexible Resistive Sensor Data Fusion for Wearable Breath Recognition." Applied Sciences 14, no. 7 (March 28, 2024): 2842. http://dx.doi.org/10.3390/app14072842.

Full text
Abstract:
This paper proposes a novel data fusion technique for a wearable multi-sensory patch that integrates an accelerometer and a flexible resistive pressure sensor to accurately capture breathing patterns. It utilizes an accelerometer to detect breathing-related diaphragmatic motion and other body movements, and a flex sensor for muscle stretch detection. The proposed sensor data fusion technique combines inertial and pressure sensors to eliminate nonbreathing body motion-related artifacts, ensuring that the filtered signal exclusively conveys information pertaining to breathing. The fusion technique mitigates the limitations of relying solely on one sensor’s data, providing a more robust and reliable solution for continuous breath monitoring in clinical and home environments. The sensing system was tested against gold-standard spirometry data from multiple participants for various breathing patterns. Experimental results demonstrate the effectiveness of the proposed approach in accurately monitoring breathing rates, even in the presence of nonbreathing-related body motion. The results also demonstrate that the multi-sensor patch presented in this paper can accurately distinguish between varying breathing patterns both at rest and during body movements.
APA, Harvard, Vancouver, ISO, and other styles
32

Luo, Jing, Xiangyu Zhou, Chao Zeng, Yiming Jiang, Wen Qi, Kui Xiang, Muye Pang, and Biwei Tang. "Robotics Perception and Control: Key Technologies and Applications." Micromachines 15, no. 4 (April 15, 2024): 531. http://dx.doi.org/10.3390/mi15040531.

Full text
Abstract:
The integration of advanced sensor technologies has significantly propelled the dynamic development of robotics, thus inaugurating a new era in automation and artificial intelligence. Given the rapid advancements in robotics technology, its core area—robot control technology—has attracted increasing attention. Notably, sensors and sensor fusion technologies, which are considered essential for enhancing robot control technologies, have been widely and successfully applied in the field of robotics. Therefore, the integration of sensors and sensor fusion techniques with robot control technologies, which enables adaptation to various tasks in new situations, is emerging as a promising approach. This review seeks to delineate how sensors and sensor fusion technologies are combined with robot control technologies. It presents nine types of sensors used in robot control, discusses representative control methods, and summarizes their applications across various domains. Finally, this survey discusses existing challenges and potential future directions.
APA, Harvard, Vancouver, ISO, and other styles
33

Qiao, Shuanghu, Baojian Song, Yunsheng Fan, and Guofeng Wang. "A Fuzzy Dempster–Shafer Evidence Theory Method with Belief Divergence for Unmanned Surface Vehicle Multi-Sensor Data Fusion." Journal of Marine Science and Engineering 11, no. 8 (August 15, 2023): 1596. http://dx.doi.org/10.3390/jmse11081596.

Full text
Abstract:
The safe navigation of unmanned surface vehicles in the marine environment requires multi-sensor collaborative perception, and multi-sensor data fusion technology is a prerequisite for realizing the collaborative perception of different sensors. To address the poor fusion accuracy of existing multi-sensor fusion methods without prior knowledge, a fuzzy evidence theory multi-sensor data fusion method with belief divergence is proposed in this paper. First of all, an adjustable distance for measuring discrepancies between measurements is devised to evaluate how close each measurement is to the true value, which improves the adaptability of the method to different classes of sensor data. Furthermore, an adaptive multi-sensor measurement fusion strategy is designed for the case where the sensor accuracy is known in advance. Secondly, the membership function of fuzzy theory is introduced into the evidence theory approach to assign initial evidence to measurements by defining the degree of fuzzy support between measurements, which improves the fusion accuracy of the method. Finally, the belief Jensen–Shannon divergence and the Rényi divergence are combined to measure the conflict between pieces of evidence and to obtain a credibility degree as the reliability of the evidence, which solves the problem of high conflict between pieces of evidence. Three examples of multi-sensor data fusion in different domains are employed to validate the adaptability of the proposed method to different kinds of multi-sensors. The maximum relative error of the proposed method across the multi-sensor experiments is 0.18%, well below the best result of 0.46% among the other comparative methods. The experimental results verify that the proposed data fusion method is more accurate than other existing methods.
APA, Harvard, Vancouver, ISO, and other styles
34

Pires, Ivan, Nuno Garcia, Nuno Pombo, and Francisco Flórez-Revuelta. "From Data Acquisition to Data Fusion: A Comprehensive Review and a Roadmap for the Identification of Activities of Daily Living Using Mobile Devices." Sensors 16, no. 2 (February 2, 2016): 184. http://dx.doi.org/10.3390/s16020184.

Full text
Abstract:
This paper reviews the state of the art in sensor fusion techniques applied to the sensors embedded in mobile devices as a means to help identify the device user's daily activities. Sensor data fusion techniques are used to consolidate the data collected from several sensors, increasing the reliability of the algorithms for the identification of the different activities. However, mobile devices have several constraints, e.g., low memory, low battery life, and low processing power, and some data fusion techniques are not suited to this scenario. The main purpose of this paper is to present an overview of the state of the art so as to identify examples of sensor data fusion techniques that can be applied to the sensors available in mobile devices with the aim of identifying activities of daily living (ADLs).
APA, Harvard, Vancouver, ISO, and other styles
35

Pau, L. F. "Sensor data fusion." Journal of Intelligent & Robotic Systems 1, no. 2 (1988): 103–16. http://dx.doi.org/10.1007/bf00348718.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Faruqi, I., M. B. Waluya, Y. Y. Nazaruddin, and T. A. Tamba. "Train Localization using Unscented Kalman Filter – Based Sensor Fusion." International Journal of Sustainable Transportation Technology 1, no. 2 (October 31, 2018): 35–41. http://dx.doi.org/10.31427/ijstt.2018.1.2.1.

Full text
Abstract:
This paper presents an application of sensor fusion methods based on the Unscented Kalman Filter (UKF) technique for solving the train localization problem in rail systems. The paper first reports the development of a laboratory-scale rail system simulator equipped with various onboard and wayside sensors that are used to detect and locate train vehicle movements on the rail track. Because of the low precision of the measurement data obtained by each individual sensor, a sensor fusion method based on the UKF technique is implemented to fuse the measurement data from several sensors. Experimental results which demonstrate the effectiveness of the proposed UKF-based sensor fusion method for solving the train localization problem are also reported.
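The core of a UKF is the unscented transform: propagate deterministically chosen sigma points through the nonlinearity and re-estimate the mean and covariance. A standalone sketch follows, using a range/bearing-to-Cartesian conversion as a stand-in measurement model; the scaling parameters and input Gaussian are demo choices, not the paper's.

```python
import numpy as np

def unscented_transform(f, mean, cov, alpha=1.0, beta=2.0, kappa=1.0):
    """Propagate a Gaussian (mean, cov) through a nonlinearity f using
    Merwe-scaled sigma points -- the core operation inside a UKF."""
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)
    sigmas = np.vstack([mean, mean + S.T, mean - S.T])   # 2n+1 sigma points
    wm = np.full(2 * n + 1, 0.5 / (n + lam))             # mean weights
    wc = wm.copy()                                       # covariance weights
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1 - alpha**2 + beta)
    ys = np.array([f(s) for s in sigmas])                # pass points through f
    y_mean = wm @ ys
    diff = ys - y_mean
    y_cov = (wc[:, None] * diff).T @ diff
    return y_mean, y_cov

# Example: a noisy (range, bearing) observation mapped to Cartesian x/y,
# the kind of nonlinear model a UKF handles without computing Jacobians.
to_xy = lambda s: np.array([s[0] * np.cos(s[1]), s[0] * np.sin(s[1])])
m, C = unscented_transform(to_xy, np.array([10.0, 0.5]),
                           np.diag([0.25, 0.01]))
print(np.round(m, 3), "\n", np.round(C, 4))
```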
APA, Harvard, Vancouver, ISO, and other styles
37

Tang, Yongchuan, Deyun Zhou, Zichang He, and Shuai Xu. "An improved belief entropy–based uncertainty management approach for sensor data fusion." International Journal of Distributed Sensor Networks 13, no. 7 (July 2017): 155014771771849. http://dx.doi.org/10.1177/1550147717718497.

Full text
Abstract:
In real applications, sensors may work in complicated environments; thus, how to measure the uncertainty of sensor reports before applying sensor data fusion is a big challenge. To address this issue, an improved belief entropy–based uncertainty management approach for sensor data fusion is proposed in this article. First, the sensor report is modeled as a body of evidence in the Dempster–Shafer framework. Then, the uncertainty measure of each body of evidence is based on the subjective uncertainty, represented as evidence sufficiency and evidence importance, while the objective uncertainty is expressed as the improved belief entropy. Evidence modification of conflicting sensor data is based on the proposed uncertainty management approach before evidence fusion with Dempster's rule of combination. Finally, the fusion result can be applied in real applications. A case study on sensor data fusion for fault diagnosis is presented to show the rationality of the proposed method.
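The improved belief entropy builds on the widely used Deng (belief) entropy; the paper's exact modification is not given in the abstract, but the base quantity it starts from looks like this:

```python
from math import log2

def deng_entropy(bpa):
    """Deng (belief) entropy of a basic probability assignment.

    bpa -- dict mapping focal elements (frozensets) to masses.
    Ed = -sum m(A) * log2( m(A) / (2^|A| - 1) ); it reduces to Shannon
    entropy when every focal element is a singleton.
    """
    return -sum(m * log2(m / (2 ** len(A) - 1))
                for A, m in bpa.items() if m > 0)

# A sensor report over fault hypotheses {a, b, c}: mass on the composite
# set {a, b} reflects ambiguity and raises the entropy.
report = {frozenset("a"): 0.6, frozenset("ab"): 0.4}
print(f"belief entropy: {deng_entropy(report):.3f} bits")   # 1.605 bits
```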
APA, Harvard, Vancouver, ISO, and other styles
38

Yamaguchi, Yoshinori, Kenji Toda, Kenji Nishida, and Eiichi Takahashi. "Special issue "Sensor Fusion". A Parallel Architecture for Sensor Fusion." Journal of the Robotics Society of Japan 12, no. 5 (1994): 664–71. http://dx.doi.org/10.7210/jrsj.12.664.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Yao, Yafeng, Ningping Yao, Chunmiao Liang, Hongchao Wei, Haitao Song, and Li Wang. "Accurate Measurement Method of Drilling Depth Based on Multi-Sensor Data Fusion." Journal of Advanced Computational Intelligence and Intelligent Informatics 26, no. 3 (May 20, 2022): 367–74. http://dx.doi.org/10.20965/jaciii.2022.p0367.

Full text
Abstract:
Based on the advanced detection drilling rigs used in underground coal mines, a real-time method of obtaining the drilling depth is proposed. Displacement sensors are used to measure the stroke of the drilling rig's feeding device during drilling, and an equation to calculate the drilling depth is put forward. The same measurements are made by several sensors, improving measurement accuracy and reliability. The final drilling depth is obtained using a multi-sensor data fusion algorithm combined with the calculation equation. The necessary derivation and calculation processes of the multi-sensor adaptive weighted fusion algorithm are given. The weighting coefficient corresponding to each sensor is found adaptively through the algorithm to optimize the fusion result. Three kinds of displacement sensors were installed on the feeding device of the drilling rig, and the drilling process was simulated in a laboratory test. The test proves that, compared with the mean method over three sensors, the data obtained by the adaptive weighted fusion algorithm have higher accuracy, and the sensor with the least variance in the fusion process has the most significant weighting coefficient. The drilling depth data obtained are more accurate than those obtained through the mean method or from a single sensor's measurements. The weighting coefficient of the measurement data becomes minimal when the measurement accuracy of a sensor suddenly deteriorates, so such a sensor has little effect on the measurement results. An experiment verifies the method's effectiveness and fault tolerance, showing an improvement in measurement accuracy.
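Adaptive weighted fusion of this kind typically weights each sensor by its inverse estimated variance, which is exactly what makes the least noisy sensor dominate. A sketch with simulated sensor streams; the noise levels and the 25 m depth are assumptions for the demo.

```python
import numpy as np

def adaptive_weighted_fusion(samples):
    """Fuse redundant sensor streams by inverse-variance weighting.

    samples -- array of shape (n_sensors, n_readings) of the same quantity.
    Each sensor's noise variance is estimated from its own readings; the
    weights w_i = (1/s_i^2) / sum_j (1/s_j^2) minimize the fused variance
    (assuming unbiased sensors), so the least noisy sensor automatically
    receives the largest weight.
    """
    means = samples.mean(axis=1)
    variances = samples.var(axis=1, ddof=1)
    inv = 1.0 / variances
    weights = inv / inv.sum()
    return float(weights @ means), weights

rng = np.random.default_rng(0)
true_depth = 25.0   # metres, hypothetical drilling depth
samples = np.stack([
    true_depth + rng.normal(0, 0.05, 200),   # precise displacement sensor
    true_depth + rng.normal(0, 0.20, 200),   # mid-grade sensor
    true_depth + rng.normal(0, 0.50, 200),   # noisy sensor
])
fused, w = adaptive_weighted_fusion(samples)
print(f"fused depth {fused:.3f} m, weights {np.round(w, 3)}")
```

If one sensor's noise suddenly grows, its estimated variance grows with it and its weight collapses, which matches the fault-tolerance behavior the abstract reports.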
APA, Harvard, Vancouver, ISO, and other styles
40

Cao, Xinqian, Benying Tan, Yujie Li, and Shuxue Ding. "Dynamic Load Regulation of Robots with Multi-sensor Fusion." Journal of Physics: Conference Series 2400, no. 1 (December 1, 2022): 012022. http://dx.doi.org/10.1088/1742-6596/2400/1/012022.

Full text
Abstract:
Multi-sensor fusion technology has been extensively deployed for the autonomous navigation and control of robots. Theoretically, by increasing the number of sensors, more effective data on the surrounding environment and obstacles can be obtained. However, in practical applications, a massive number of sensors not only increases the cost of using sensors in the system but also adds to the difficulty and cost of data processing, leading to unnecessary resource consumption. To solve this problem, first, the robot navigation area is divided according to the importance of the navigation area and the characteristics of the sensor parameters. The non-core navigation area uses a single lidar sensor, while the core area uses "lidar +" multi-sensor fusion, reducing the number of sensors used in the fusion and improving the performance of multi-sensor fusion after the division. Then, in consideration of the high computing capacity that deep learning demands of the device, the multi-sensor fusion algorithm is introduced into a dynamic load regulation mechanism to solve the problem at the algorithm level, so that the strength of deep learning in feature extraction and other aspects can be used to reduce the performance requirements of the device and the overall power consumption of the system. Finally, by building a ROS experimental platform, it is verified that the robot can navigate autonomously in complex and diversified home scenes.
APA, Harvard, Vancouver, ISO, and other styles
41

Kumar S Rajanikanth, Praveen. "Drone Detection using Multi-Sensor Data Fusion." International Journal of Science and Research (IJSR) 13, no. 2 (February 5, 2024): 179–85. http://dx.doi.org/10.21275/sr24125030857.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Duro, Natividad. "Sensor Data Fusion Analysis for Broad Applications." Sensors 24, no. 12 (June 7, 2024): 3725. http://dx.doi.org/10.3390/s24123725.

Full text
Abstract:
Sensor data fusion analysis plays a pivotal role in a variety of fields by integrating data from multiple sensors to produce more accurate, reliable, and comprehensive information than that achieved by individual sensors alone [...]
APA, Harvard, Vancouver, ISO, and other styles
43

Luo, Junhai, and Tao Li. "Bathtub-Shaped Failure Rate of Sensors for Distributed Detection and Fusion." Mathematical Problems in Engineering 2014 (2014): 1–8. http://dx.doi.org/10.1155/2014/202950.

Full text
Abstract:
We study distributed detection and fusion in sensor networks with a bathtub-shaped failure (BSF) rate of the sensors, which may or may not send data to the Fusion Center (FC). The reliability of semiconductor devices is usually represented by the failure rate curve (the "bathtub curve"), which can be divided into the following three regions: initial failure period, random failure period, and wear-out failure period. Considering the possibility that failed sensors still work, but badly, it is unreasonable to trust the data from these sensors. Based on this situation, we introduce new characteristics for failed sensors. Each sensor quantizes its local observation into one bit of information, which is sent to the FC for overall fusion because of power, communication, and bandwidth constraints. Under this sensor failure model, the Extension Log-likelihood Ratio Test (ELRT) rule is derived. Finally, the ROC curve for this model is presented. The simulation results show that the ELRT rule improves the robustness of the system compared with the traditional fusion rule that does not consider sensor failures.
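A simple additive competing-risks model reproduces the three bathtub regions the abstract lists: a decreasing Weibull hazard, a constant hazard, and an increasing Weibull hazard. The parameter values below are illustrative, not the paper's.

```python
def bathtub_hazard(t, b1=0.5, e1=100.0, lam=1e-3, b3=4.0, e3=5000.0):
    """Additive competing-risks model of a bathtub failure-rate curve:
    a decreasing Weibull hazard (infant mortality), a constant hazard
    (random failures), and an increasing Weibull hazard (wear-out)."""
    infant  = (b1 / e1) * (t / e1) ** (b1 - 1)   # dominates early on
    wearout = (b3 / e3) * (t / e3) ** (b3 - 1)   # dominates late
    return infant + lam + wearout

for t in (10, 1000, 6000):   # hours: early life, useful life, wear-out
    print(f"t = {t:>5} h  ->  h(t) = {bathtub_hazard(t):.5f} per hour")
```

Running this prints a high early hazard, a low mid-life plateau, and a rising wear-out hazard, i.e., the bathtub shape.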
APA, Harvard, Vancouver, ISO, and other styles
44

Nemec, Dusan, Jan Andel, Vojtech Simak, and Jozef Hrbcek. "Homogeneous Sensor Fusion Optimization for Low-Cost Inertial Sensors." Sensors 23, no. 14 (July 15, 2023): 6431. http://dx.doi.org/10.3390/s23146431.

Full text
Abstract:
The article deals with sensor fusion and real-time calibration in a homogeneous inertial sensor array. The proposed method estimates the sensors' calibration constants (i.e., gain and bias) in real time and automatically suppresses degraded sensors while preserving the overall precision of the estimation. The weight of each sensor is adaptively adjusted according to its RMSE with respect to the weighted average of all sensors. The estimated angular velocity was compared with a reference (ground-truth) value obtained using a tactical-grade fiber-optic gyroscope. We experimented with low-cost MEMS gyroscopes, but the proposed method can be applied to virtually any sensor array.
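A minimal sketch of the adaptive-weighting idea: each sensor's weight shrinks as its running error against the fused estimate grows, so degraded sensors are suppressed automatically. The inverse-error weighting, the smoothing constant, and the omission of gain/bias calibration are all simplifying assumptions rather than the paper's exact formulation.

```python
import numpy as np

class ArrayFusion:
    """Adaptive weighted fusion for a homogeneous sensor array (sketch)."""
    def __init__(self, n_sensors, alpha=0.01, eps=1e-9):
        self.w = np.full(n_sensors, 1.0 / n_sensors)  # fusion weights
        self.mse = np.zeros(n_sensors)                # running squared error
        self.alpha, self.eps = alpha, eps

    def step(self, samples):
        samples = np.asarray(samples, dtype=float)
        est = np.average(samples, weights=self.w)     # fused estimate
        self.mse = (1 - self.alpha) * self.mse + self.alpha * (samples - est) ** 2
        w = 1.0 / (self.mse + self.eps)               # inverse-error weighting
        self.w = w / w.sum()
        return est

fusion = ArrayFusion(4)
print(fusion.step([0.10, 0.11, 0.09, 0.55]))  # the outlier's weight will decay
```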
APA, Harvard, Vancouver, ISO, and other styles
45

Wang, Xian Bao, Shi Hai Zhao, and Guo Wei. "Based on D - S Evidence Theory of Solution Concentration Detection Method Research." Applied Mechanics and Materials 494-495 (February 2014): 869–72. http://dx.doi.org/10.4028/www.scientific.net/amm.494-495.869.

Full text
Abstract:
Building on multi-sensor information fusion technology, this work uses D-S evidence theory to fuse feedback information from multiple sensors observing the solution concentration from different angles, so that a consistent judgment is reached. The D-S evidence-theoretic multi-sensor data fusion method not only compensates for the shortcomings of using a single sensor but also greatly reduces the uncertainty of the judgment. In addition, the system improves the speed and accuracy of solution concentration detection and broadens the application field of multi-sensor information fusion technology.
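For readers unfamiliar with D-S evidence theory, the following is a small, self-contained implementation of Dempster's rule of combination for two sensors. The frame of discernment and the mass values are made up for illustration and are not taken from the paper.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: multiply masses of intersecting focal elements and
    renormalise away the conflict (assumes the conflict is not total)."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Two sensors judging whether the concentration is 'low' or 'high'.
LOW, HIGH = frozenset({"low"}), frozenset({"high"})
m1 = {LOW: 0.6, HIGH: 0.1, LOW | HIGH: 0.3}
m2 = {LOW: 0.7, HIGH: 0.2, LOW | HIGH: 0.1}
print(dempster_combine(m1, m2))  # belief concentrates on 'low'
```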
APA, Harvard, Vancouver, ISO, and other styles
46

Umeda, Kazunori, Jun Ota, and Hisayuki Kimura. "Fusion of Multiple Ultrasonic Sensor Data and Image Data for Measuring an Object’s Motion." Journal of Robotics and Mechatronics 17, no. 1 (February 20, 2005): 36–43. http://dx.doi.org/10.20965/jrm.2005.p0036.

Full text
Abstract:
Robot sensing requires two types of observation: intensive and wide-angle. We selected multiple ultrasonic sensors for intensive observation and an image sensor for wide-angle observation, and measured a moving object's motion using two kinds of fusion: one fusing multiple ultrasonic sensor data, and the other fusing the two types of sensor data. The fusion of multiple ultrasonic sensor data exploits the object's movement from the measurement range of one ultrasonic sensor into another sensor's range. Both are formulated in a Kalman filter framework. Simulation and experiments demonstrate the effectiveness of the methods and their applicability to an actual robot system.
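In a Kalman filter framework, fusing two sensors that observe the same state amounts to applying the measurement update once per sensor within a cycle. The sketch below uses a toy constant-velocity state and position-only measurements with made-up noise values; the paper's actual state and measurement models are richer.

```python
import numpy as np

def kf_update(x, P, z, H, R):
    """Standard Kalman measurement update (one call per sensor)."""
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

x, P = np.zeros(2), np.eye(2)                # state: [position, velocity] (assumed)
H = np.array([[1.0, 0.0]])                   # both sensors measure position only
x, P = kf_update(x, P, np.array([1.02]), H, np.array([[0.05]]))  # ultrasonic
x, P = kf_update(x, P, np.array([0.98]), H, np.array([[0.20]]))  # image
print(x)
```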
APA, Harvard, Vancouver, ISO, and other styles
47

Kolar, Prasanna, Patrick Benavidez, and Mo Jamshidi. "Survey of Datafusion Techniques for Laser and Vision Based Sensor Integration for Autonomous Navigation." Sensors 20, no. 8 (April 12, 2020): 2180. http://dx.doi.org/10.3390/s20082180.

Full text
Abstract:
This paper focuses on data fusion, which is fundamental to perception, one of the most important modules in any autonomous system. Over the past decade, there has been a surge in the usage of smart/autonomous mobility systems. Such systems can be used in various areas of life, such as safe mobility for the disabled and for senior citizens, and depend on accurate sensor information in order to function optimally. This information may come from a single sensor or from a suite of sensors with the same or different modalities. We review various types of sensors and their data, and the need to fuse that data to produce the best information for the task at hand, which in this case is autonomous navigation. To obtain such accurate data, we need optimal technology to read the sensor data, process the data, eliminate or at least reduce noise, and then use the data for the required tasks. We present a survey of current data processing techniques that implement data fusion using different sensors: LiDAR, which uses light-scan technology; stereo/depth cameras; and monocular Red Green Blue (RGB) and Time-of-Flight (TOF) cameras, which use optical technology. We also review the efficiency of using fused data from multiple sensors, rather than a single sensor, in autonomous navigation tasks such as mapping, obstacle detection and avoidance, and localization. This survey will provide sensor information to researchers who intend to accomplish the task of motion control of a robot, and details the use of LiDAR and cameras for robot navigation.
APA, Harvard, Vancouver, ISO, and other styles
48

John, Vijay, and Seiichi Mita. "Deep Feature-Level Sensor Fusion Using Skip Connections for Real-Time Object Detection in Autonomous Driving." Electronics 10, no. 4 (February 9, 2021): 424. http://dx.doi.org/10.3390/electronics10040424.

Full text
Abstract:
Object detection is an important perception task in autonomous driving and advanced driver assistance systems. The visible camera is widely used for perception, but its performance is limited by illumination and environmental variations. For robust vision-based perception, we propose a deep learning framework for effective sensor fusion of the visible camera with complementary sensors. A feature-level sensor fusion technique using skip connections is proposed for fusing the visible camera with the millimeter-wave radar and with the thermal camera; the two networks are called the RV-Net and the TV-Net, respectively. These networks have two input branches and one output branch. The input branches perform separate, sensor-specific feature extraction, and the extracted features are then fused in the output perception branch using skip connections. The RV-Net and the TV-Net thus perform sensor-specific feature extraction, feature-level fusion, and object detection simultaneously within an end-to-end framework. The proposed networks are validated against baseline algorithms on public datasets. The results show that feature-level sensor fusion outperforms the baseline early and late fusion frameworks.
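The two-branch layout described above can be illustrated with a toy PyTorch module: one encoder per sensor, with the encoder features skip-connected (here, concatenated) into a shared output branch. Channel counts, layer depths, and the detection head are placeholders, not the published RV-Net/TV-Net design.

```python
import torch
import torch.nn as nn

class TwoBranchFusionNet(nn.Module):
    """Illustrative two-branch feature-level fusion network (sketch)."""
    def __init__(self):
        super().__init__()
        self.cam = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.aux = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())  # radar/thermal
        self.head = nn.Sequential(nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(32, 8, 1))  # toy detection map

    def forward(self, rgb, aux):
        # Skip-connect both encoders' features into the shared output branch.
        f = torch.cat([self.cam(rgb), self.aux(aux)], dim=1)
        return self.head(f)

net = TwoBranchFusionNet()
out = net(torch.rand(1, 3, 64, 64), torch.rand(1, 1, 64, 64))
print(out.shape)  # torch.Size([1, 8, 64, 64])
```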
APA, Harvard, Vancouver, ISO, and other styles
49

de Sousa Bezerra, Carlos Daniel, Flávio Henrique Teles Vieira, and Daniel Porto Queiroz Carneiro. "Autonomous Robotic Navigation Approach Using Deep Q-Network Late Fusion and People Detection-Based Collision Avoidance." Applied Sciences 13, no. 22 (November 15, 2023): 12350. http://dx.doi.org/10.3390/app132212350.

Full text
Abstract:
In this work, we propose an approach for the autonomous navigation of mobile robots that fuses sensor data with a Double Deep Q-Network and avoids collisions by detecting moving people via computer vision techniques. We evaluate two data fusion methods for the proposed approach: an Interactive and a Late Fusion strategy. Both are used to integrate data from the following mobile robot sensors: GPS, IMU, and an RGB-D camera. The proposed collision avoidance module is implemented alongside the sensor fusion architecture to prevent the autonomous mobile robot from colliding with moving people. The simulation results indicate that sensor fusion has a significant impact on the success of the mobile robot's mission, increasing the success rate by about 27% relative to navigation without sensor fusion. With moving people added to the environment, deploying the people detection and collision avoidance security module improved the success rate by about 14% compared with the autonomous navigation approach without the security module.
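As a sketch of what late fusion can look like in a Q-network, the toy PyTorch module below encodes each sensor stream separately and merges the encodings just before the Q-value head. The input dimensions, action count, and layer sizes are assumptions for illustration only, not the paper's architecture.

```python
import torch
import torch.nn as nn

class LateFusionDQN(nn.Module):
    """Toy late-fusion Q-network: per-sensor encoders, merged at the head."""
    def __init__(self, n_actions=5):
        super().__init__()
        self.gps_imu = nn.Sequential(nn.Linear(9, 32), nn.ReLU())   # pose + inertial
        self.camera = nn.Sequential(nn.Linear(64, 32), nn.ReLU())   # depth features
        self.q_head = nn.Linear(64, n_actions)

    def forward(self, gps_imu_obs, cam_obs):
        z = torch.cat([self.gps_imu(gps_imu_obs), self.camera(cam_obs)], dim=-1)
        return self.q_head(z)

q = LateFusionDQN()(torch.rand(1, 9), torch.rand(1, 64))
print(q.shape)  # torch.Size([1, 5])
```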
APA, Harvard, Vancouver, ISO, and other styles
50

Deo, Ankur, Vasile Palade, and Md Nazmul Huda. "Centralised and Decentralised Sensor Fusion-Based Emergency Brake Assist." Sensors 21, no. 16 (August 11, 2021): 5422. http://dx.doi.org/10.3390/s21165422.

Full text
Abstract:
Many advanced driver assistance systems (ADAS) are currently trying to utilise multi-sensor architectures, where the driver assistance algorithm receives data from a multitude of sensors. As mono-sensor systems cannot provide reliable and consistent readings under all circumstances because of errors and other limitations, fusing data from multiple sensors ensures that the environmental parameters are perceived correctly and reliably in most scenarios, thereby substantially improving the reliability of multi-sensor-based automotive systems. This paper first highlights the significance of efficiently fusing data from multiple sensors in ADAS features. An emergency brake assist (EBA) system is showcased using multiple sensors, namely, a light detection and ranging (LiDAR) sensor and a camera. The architectures of the proposed 'centralised' and 'decentralised' sensor fusion approaches for EBA are discussed along with their constituents, i.e., the detection algorithms, the fusion algorithm, and the tracking algorithm. The centralised and decentralised architectures are built and analytically compared, and the performance of these two fusion architectures for EBA is evaluated in terms of speed of execution, accuracy, and computational cost. While both fusion methods are seen to drive the EBA application at an acceptable frame rate (~20 fps or higher) on an Intel i5-based Ubuntu system, it was concluded through the experiments and analytical comparisons that the decentralised fusion-driven EBA leads to higher accuracy; however, it has the downside of a higher computational cost. The centralised fusion-driven EBA yields comparatively less accurate results, but with the benefits of a higher frame rate and a lower computational cost.
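The architectural difference can be summarized in a few lines of Python: centralised fusion merges raw detections and then runs a single tracker, while decentralised fusion tracks per sensor and merges at the track level. Every stage below is a toy placeholder (including the time-to-collision brake test); the paper's detectors, tracker, and fusion algorithm are not reproduced here.

```python
# Placeholder pipeline stages; real EBA uses LiDAR/camera detectors and a
# stateful tracker. These toy callables only show where fusion happens.
detect = lambda frame: list(frame)                         # sensor-level detection
fuse   = lambda a, b: a + b                                # associate + merge (toy)
track  = lambda objs: objs                                 # tracking (toy, stateless)
brake  = lambda tracks: any(o["ttc"] < 1.5 for o in tracks)  # time-to-collision test

def centralised_eba(lidar_frame, cam_frame):
    """Centralised: fuse raw detections first, then run one tracker."""
    return brake(track(fuse(detect(lidar_frame), detect(cam_frame))))

def decentralised_eba(lidar_frame, cam_frame):
    """Decentralised: track per sensor, then fuse at the track level."""
    return brake(fuse(track(detect(lidar_frame)), track(detect(cam_frame))))

print(centralised_eba([{"ttc": 1.0}], [{"ttc": 3.0}]))  # True -> brake
```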
APA, Harvard, Vancouver, ISO, and other styles