Journal articles on the topic 'Data fusion algorithms; Information filters'

Consult the top 50 journal articles for your research on the topic 'Data fusion algorithms; Information filters.'

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Jahan, Kausar, and Koteswara Rao Sanagapallea. "Fusion of Angle Measurements from Hull Mounted and Towed Array Sensors." Information 11, no. 9 (September 9, 2020): 432. http://dx.doi.org/10.3390/info11090432.

Abstract:
Two sensor arrays, a hull-mounted array and a towed array, are considered for bearings-only tracking. An algorithm is designed to combine the bearing (angle) measurements obtained from both sensor arrays to give a better solution. Using data from two different sensor arrays reduces the observability problem, and the observer need not follow an S-maneuver to attain observability of the process. The performance of the fusion algorithm is comparable to the theoretical Cramer–Rao lower bound and to that of the algorithm using bearing measurements from a single sensor array. Different filters are used for analyzing both algorithms, and Monte Carlo runs are carried out to evaluate their performance more accurately. The performance of the fusion algorithm is also evaluated in terms of solution convergence time.
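The fusion step described above can be pictured as a single EKF measurement update in which one bearing per array is stacked into the measurement vector. The sketch below only illustrates that idea under assumed conditions (a 2-D constant-velocity target state, a hull-mounted array at the origin, a towed array 300 m astern, bearings measured clockwise from north); it is not the authors' algorithm.

```python
import numpy as np

def bearing_jacobian(px, py, sx, sy):
    """Jacobian row of bearing = atan2(px - sx, py - sy) w.r.t. state [px, py, vx, vy]."""
    dx, dy = px - sx, py - sy
    r2 = dx**2 + dy**2
    return np.array([dy / r2, -dx / r2, 0.0, 0.0])

def fused_bearing_update(x, P, z, sensors, R):
    """EKF update stacking one bearing (rad) per sensor array.

    x : state [px, py, vx, vy]; P : covariance
    z : measured bearings, shape (2,); sensors : [(sx, sy), ...]; R : 2x2 noise covariance
    """
    h = np.array([np.arctan2(x[0] - sx, x[1] - sy) for sx, sy in sensors])
    H = np.vstack([bearing_jacobian(x[0], x[1], sx, sy) for sx, sy in sensors])
    y = np.arctan2(np.sin(z - h), np.cos(z - h))       # angle-wrapped innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ y, (np.eye(4) - K @ H) @ P

# toy example with an assumed geometry: hull array at the origin, towed array 300 m astern
x = np.array([2000.0, 3000.0, -5.0, 2.0])
P = np.diag([500.0**2, 500.0**2, 5.0**2, 5.0**2])
z = np.radians([33.8, 31.3])
x, P = fused_bearing_update(x, P, z, [(0.0, 0.0), (0.0, -300.0)],
                            np.diag([np.radians(0.5)**2] * 2))
```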
2

HASAN, AHMED M., KHAIRULMIZAM SAMSUDIN, ABDUL RAHMAN RAMLI, and RAJA SYAMSUL AZMIR. "COMPARATIVE STUDY ON WAVELET FILTER AND THRESHOLDING SELECTION FOR GPS/INS DATA FUSION." International Journal of Wavelets, Multiresolution and Information Processing 08, no. 03 (May 2010): 457–73. http://dx.doi.org/10.1142/s0219691310003572.

Abstract:
Navigation and guidance of an autonomous vehicle require determination of the position and velocity of the vehicle. Therefore, fusing the Inertial Navigation System (INS) and Global Positioning System (GPS) is important. Various methods have been applied to smooth and predict the INS and GPS errors. Recently, wavelet de-noising methodologies have been applied to improve the accuracy and reliability of the GPS/INS system. In this work, an analysis of real data to identify the optimal wavelet filter for each GPS and INS component for high-quality error estimation is presented. A comprehensive comparison of various wavelet thresholding selections with different levels of decomposition is conducted to study the effect on GPS/INS error estimation while maintaining the original features of the signal. Results show that while some wavelet filters and thresholding selection algorithms perform better than others on each of the GPS and INS components, no specific parameter selection performs uniformly better than the others.
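For readers unfamiliar with wavelet de-noising, the sketch below shows the generic procedure the paper evaluates, using PyWavelets with an illustrative db4 wavelet and the universal soft threshold; the study compares many filter/threshold combinations, so these particular choices are assumptions rather than its recommendation.

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=3):
    """Soft-threshold the detail coefficients using the universal (VisuShrink) threshold."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise scale from finest details
    thr = sigma * np.sqrt(2.0 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

# synthetic INS-like error signal: slow drift plus white noise (stand-in for real data)
t = np.linspace(0, 10, 1024)
clean = 0.3 * t + 0.5 * np.sin(0.8 * t)
noisy = clean + 0.2 * np.random.randn(t.size)
denoised = wavelet_denoise(noisy)
```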
3

Guoyan, Wang, A. V. Fomichev, and Dy Yiran. "Research on Improved Gaussian Smoothing Filters for SLAM Application." Mekhatronika, Avtomatizatsiya, Upravlenie 20, no. 12 (December 6, 2019): 756–64. http://dx.doi.org/10.17587/mau.20.756-764.

Abstract:
To address the navigation issues of a planetary rover and construct a map of an unknown environment, such as the surface of the planets in our solar system, simultaneous localization and mapping can be seen as an alternative method. For navigation with a laser sensor, the Kalman filter and its improved algorithms, such as the EKF and UKF, are widely used in the process of processing information. Nevertheless, these filter algorithms suffer from low accuracy and significant computational expense. The EKF algorithm requires a linearization step; the UKF algorithm handles nonlinear systems better than the EKF, but it has higher computational complexity. The GP-RTSS filtering algorithm, based on a Gaussian filter, is significantly superior to the EKF and UKF algorithms regarding sensor fusion accuracy. The Gaussian process can be applied to different nonlinear systems and does not require a prediction model or linearization. However, the main barrier to implementing the GP-RTSS algorithm is that the Gaussian kernel function requires a lot of computation. In this paper, an algorithm, the so-called DIS RTSS filter, under a distributed computation scheme and derived from GP-RTSS Gaussian smoothing and filtering, is proposed. The distributed system can effectively reduce the cost of computation (computation expense and memory). Moreover, four fusion methods for the DIS RTSS filter, i.e., DIS RTP, DIS RTGP, DIS RTB, and DIS RTrB, are discussed in this paper. The experiments show that among the four algorithms described above, the DIS RTGP algorithm is the most effective solution for practical implementation. The DIS RTSS filtering algorithm can achieve a high processing rate and can theoretically process an infinite number of data samples.
4

López-Delis, Alberto, Cristiano J. Miosso, João L. A. Carvalho, Adson F. da Rocha, and Geovany A. Borges. "Continuous Estimation Prediction of Knee Joint Angles Using Fusion of Electromyographic and Inertial Sensors for Active Transfemoral Leg Prostheses." Advances in Data Science and Adaptive Analysis 10, no. 02 (April 2018): 1840008. http://dx.doi.org/10.1142/s2424922x18400089.

Abstract:
Information extracted from surface electromyographic (sEMG) signals can allow for the detection of movement intention in transfemoral prostheses. The sEMG can help estimate the angle between the femur and the tibia in the sagittal plane. However, algorithms based exclusively on sEMG information can lead to inaccurate results. Data captured by inertial sensors can improve this estimate. We propose three myoelectric algorithms that extract data from sEMG and inertial sensors using Kalman filters. The proposed fusion-based algorithms showed improved performance compared to methods based exclusively on sEMG data, generating improvements in the accuracy of knee joint angle estimation and reducing estimation artifacts.
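A minimal illustration of Kalman-filter fusion of two angle estimates of the kind described above: a constant-velocity knee-angle state is updated with one sEMG-derived and one inertial measurement per cycle. The state layout, sampling interval and noise variances are assumed for the example and do not come from the paper.

```python
import numpy as np

def kf_step(x, P, z_emg, z_imu, dt=0.01, q=5.0, r_emg=25.0, r_imu=4.0):
    """One predict/update cycle for state [angle, angular_rate] in deg and deg/s."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    x, P = F @ x, F @ P @ F.T + Q                      # predict
    H = np.array([[1.0, 0.0], [1.0, 0.0]])             # both sensors observe the angle
    R = np.diag([r_emg, r_imu])                        # sEMG assumed noisier than the IMU
    z = np.array([z_emg, z_imu])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.zeros(2), np.eye(2) * 100.0
for z_emg, z_imu in [(12.0, 10.5), (14.1, 13.0), (17.3, 15.8)]:   # toy angle stream (deg)
    x, P = kf_step(x, P, z_emg, z_imu)
```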
5

Alshawabkeh, Yahya. "Color and Laser Data as a Complementary Approach for Heritage Documentation." Remote Sensing 12, no. 20 (October 21, 2020): 3465. http://dx.doi.org/10.3390/rs12203465.

Abstract:
Heritage recording has received much attention and benefits from recent developments in the field of range and imaging sensors. While these methods have often been viewed as two different methodologies, data integration can achieve different products, which are not always found in a single technique. Data integration in this paper can be divided into two levels: laser scanner data aided by photogrammetry and photogrammetry aided by scanner data. At the first level, superior radiometric information, mobility and accessibility of imagery can be actively used to add texture information and allow for new possibilities in terms of data interpretation and completeness of complex site documentation. In the second level, true orthophoto is generated based on laser data, the results are rectified images with a uniform scale representing all objects at their planimetric position. The proposed approaches enable flexible data fusion and allow images to be taken at an optimum time and position for radiometric information. Data fusion usually involves serious distortions in the form of a double mapping of occluded objects that affect the product quality. In order to enhance the efficiency of visibility analysis in complex structures, a proposed visibility algorithm is implemented into the developed methods of texture mapping and true orthophoto generation. The algorithm filters occluded areas based on a patch processing using a grid square unit set around the projected vertices. The depth of the mapped triangular vertices within the patch neighborhood is calculated to assign the visible one. In this contribution, experimental results from different historical sites in Jordan are presented as a validation of the proposed algorithms. Algorithms show satisfactory performance in terms of completeness and correctness of occlusion detection and spectral information mapping. The results indicate that hybrid methods could be used efficiently in the representation of heritage structures.
6

Rao, Jin Jun, Tong Yue Gao, Zhen Jiang, and Zhen Bang Gong. "Position and Attitude Information Fusion for Portable Unmanned Aerial Vehicles." Key Engineering Materials 439-440 (June 2010): 155–60. http://dx.doi.org/10.4028/www.scientific.net/kem.439-440.155.

Abstract:
Portable Unmanned Aerial Vehicles (PUAVs) present enormous application potential, and real-time, accurate position and attitude information is the basis of autonomous flight of PUAVs. In order to obtain comprehensive and accurate position and attitude data for PUAVs in flight, this paper focuses on the common sensor configuration of PUAVs, analyzes the characteristics of each type of sensor, and studies the data fusion problem of the SINS/GPS/Compass combination. Firstly, the error expressions of the MEMS inertial sensors, attitude, velocity and position are derived, the state and observation equations are built, and the discrete equations are derived for computer implementation, yielding the data fusion model for Kalman filter fusion algorithms. Then, the data fusion system and algorithms are implemented in software, and the flight data obtained in a flight experiment are fed to the software for data fusion. The comparison between the original data and the fused data shows that the SINS/GPS/Compass data fusion system can markedly improve the accuracy of position and attitude.
7

Guan, Binglei, and Xianfeng Tang. "Multisensor decentralized nonlinear fusion using adaptive cubature information filter." PLOS ONE 15, no. 11 (November 5, 2020): e0241517. http://dx.doi.org/10.1371/journal.pone.0241517.

Abstract:
In nonlinear multisensor systems, abrupt state changes and unknown measurement-noise variance are very common, which challenges the majority of previously developed multisensor fusion techniques that assume precisely known models. To address this issue, an adaptive cubature information filter (CIF) is proposed by embedding the strong tracking filter (STF) and the variational Bayesian (VB) method, and it is extended to multi-sensor fusion under a decentralized fusion framework with feedback. Specifically, the new algorithms use an equivalent description of the STF, which avoids computing the Jacobian matrix when determining the strong tracking fading factor and resolves the interdependence between the STF and VB components. Meanwhile, a simple and efficient method for evaluating the global fading factor is developed by introducing a parameter variable named the fading vector. The analysis shows that, compared with the traditional information filter, this filter can effectively reduce data transmission from the local sensors to the fusion center and decrease the computational burden of the fusion center. Therefore, it can quickly return to the normal error range and has higher estimation accuracy in response to abrupt state changes. Finally, the performance of the developed algorithms is evaluated on a target tracking problem.
8

Wang, Tao, Xiaoran Wang, and Mingyu Hong. "Gas Leak Location Detection Based on Data Fusion with Time Difference of Arrival and Energy Decay Using an Ultrasonic Sensor Array." Sensors 18, no. 9 (September 7, 2018): 2985. http://dx.doi.org/10.3390/s18092985.

Abstract:
Ultrasonic gas leak location technology is based on the detection of ultrasonic waves generated by the ejection of pressured gas from leak holes in sealed containers or pipes. To obtain more accurate leak location information and determine the locations of leak holes in three-dimensional space, this paper proposes an ultrasonic leak location approach based on multi-algorithm data fusion. With the help of a planar ultrasonic sensor array, the eigenvectors of two individual algorithms, i.e., the arrival distance difference, as determined from the time difference of arrival (TDOA) location algorithm, and the ratio of arrival distances from the energy decay (ED) location algorithm, are extracted and fused to calculate the three-dimensional coordinates of leak holes. The fusion is based on an extended Kalman filter, in which the results of the individual algorithms are seen as observation values. The final system state matrix is composed of distances between the measured leak hole and the sensors. Our experiments show that, under the condition in which the pressure in the measured container is 100 kPa, and the leak hole–sensor distance is 800 mm, the maximum error of the calculated results based on the data fusion location algorithm is less than 20 mm, and the combined accuracy is better than those of the individual location algorithms.
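The measurement-level fusion described above can be sketched as an EKF update whose state holds the hole-to-sensor distances and whose observations are the TDOA distance difference and the ED distance ratio. The two-sensor geometry, initial guesses and noise values below are assumptions for illustration only.

```python
import numpy as np

def h(d):
    """Observations for a two-sensor pair: arrival-distance difference (TDOA)
    and arrival-distance ratio (energy decay)."""
    return np.array([d[0] - d[1], d[0] / d[1]])

def jacobian(d, eps=1e-6):
    """Numerical Jacobian of h at the current distance estimate."""
    J = np.zeros((2, len(d)))
    for i in range(len(d)):
        dp = d.copy(); dp[i] += eps
        J[:, i] = (h(dp) - h(d)) / eps
    return J

def ekf_update(d, P, z, R):
    """EKF measurement update; state d = hole-to-sensor distances (mm)."""
    H = jacobian(d)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    d_new = d + K @ (z - h(d))
    return d_new, (np.eye(len(d)) - K @ H) @ P

d = np.array([820.0, 780.0])          # assumed initial distance guesses (mm)
P = np.eye(2) * 50.0**2
z = np.array([36.0, 1.05])            # observations from the TDOA and ED algorithms
d, P = ekf_update(d, P, z, np.diag([4.0**2, 0.02**2]))
```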
9

Ahrari, A. H., M. Kiavarz, M. Hasanlou, and M. Marofi. "THERMAL AND VISIBLE SATELLITE IMAGE FUSION USING WAVELET IN REMOTE SENSING AND SATELLITE IMAGE PROCESSING." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-4/W4 (September 26, 2017): 11–15. http://dx.doi.org/10.5194/isprs-archives-xlii-4-w4-11-2017.

Abstract:
The multimodal remote sensing approach is based on merging different data from different portions of the electromagnetic spectrum, which improves the accuracy of satellite image processing and interpretation. Visible and thermal infrared bands independently contain valuable spatial and spectral information. Visible bands provide rich spatial information, while thermal bands provide radiometric and spectral information that differs from the visible. However, low spatial resolution is the most important limitation of thermal infrared bands. Using satellite image fusion, it is possible to merge them into a single thermal image that contains high spectral and spatial information at the same time. The aim of this study is a quantitative and qualitative performance assessment of thermal and visible image fusion with the wavelet transform and different filters. In this research, the Haar wavelet algorithm and different decomposition filters (mean, linear, ma, min and rand) were applied to the thermal and panchromatic bands of the Landsat 8 satellite as a shortwave and longwave fusion method. Finally, quality assessment was carried out with quantitative and qualitative approaches. Quantitative parameters such as entropy, standard deviation, cross correlation, Q factor and mutual information were used. For thermal and visible image fusion accuracy assessment, all parameters (quantitative and qualitative) must be analysed with respect to each other. Among all relevant statistical factors, correlation gives the most meaningful result and the closest agreement with the qualitative assessment. The results showed that the mean and linear filters produce better fused images than the other filters in the Haar algorithm. The linear and mean filters have the same performance, and there is no difference between their qualitative and quantitative results.
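A minimal sketch of the kind of wavelet fusion evaluated in the paper, using PyWavelets: a single-level Haar decomposition of two co-registered bands, a mean rule for the approximation coefficients and a maximum-absolute rule for the details. The detail rule is an illustrative choice, not one of the filters compared in the study.

```python
import numpy as np
import pywt

def haar_fuse(visible, thermal):
    """Single-level Haar wavelet fusion of two co-registered, equally sized bands."""
    cA_v, (cH_v, cV_v, cD_v) = pywt.dwt2(visible.astype(float), "haar")
    cA_t, (cH_t, cV_t, cD_t) = pywt.dwt2(thermal.astype(float), "haar")
    cA = 0.5 * (cA_v + cA_t)                                     # mean rule for approximation
    fuse = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)   # max-abs rule for details
    return pywt.idwt2((cA, (fuse(cH_v, cH_t), fuse(cV_v, cV_t), fuse(cD_v, cD_t))), "haar")

# toy 64x64 bands standing in for resampled panchromatic and thermal data
vis = np.random.rand(64, 64)
tir = np.random.rand(64, 64)
fused = haar_fuse(vis, tir)
```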
10

Soundy, Andy W. R., Bradley J. Panckhurst, Phillip Brown, Andrew Martin, Timothy C. A. Molteno, and Daniel Schumayer. "Comparison of Enhanced Noise Model Performance Based on Analysis of Civilian GPS Data." Sensors 20, no. 21 (October 24, 2020): 6050. http://dx.doi.org/10.3390/s20216050.

Abstract:
We recorded the time series of location data from stationary, single-frequency (L1) GPS positioning systems at a variety of geographic locations. The empirical autocorrelation function of these data shows significant temporal correlations. The Gaussian white noise model, widely used in sensor-fusion algorithms, does not account for the observed autocorrelations and has an artificially large variance. Noise-model analysis—using Akaike’s Information Criterion—favours alternative models, such as an Ornstein–Uhlenbeck or an autoregressive process. We suggest that incorporating a suitable enhanced noise model into applications (e.g., Kalman Filters) that rely on GPS position estimates will improve performance. This provides an alternative to explicitly modelling possible sources of correlation (e.g., multipath, shadowing, or other second-order physical phenomena).
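The model comparison described above can be reproduced in outline with plain NumPy: fit an AR(1) model (a discrete-time analogue of the Ornstein–Uhlenbeck process) by conditional least squares and compare its Gaussian AIC with that of the white-noise model. The synthetic series below merely stands in for a recorded GPS coordinate track.

```python
import numpy as np

def aic_white_noise(y):
    """AIC of an i.i.d. Gaussian model (parameters: mean, variance)."""
    sigma2 = (y - y.mean()).var()
    loglik = -0.5 * len(y) * (np.log(2 * np.pi * sigma2) + 1.0)
    return 2 * 2 - 2 * loglik

def aic_ar1(y):
    """AIC of an AR(1) model fitted by conditional least squares (parameters: c, phi, variance)."""
    y0, y1 = y[:-1], y[1:]
    X = np.column_stack([np.ones_like(y0), y0])
    (c, phi), *_ = np.linalg.lstsq(X, y1, rcond=None)
    resid = y1 - (c + phi * y0)
    sigma2 = resid.var()
    loglik = -0.5 * len(resid) * (np.log(2 * np.pi * sigma2) + 1.0)
    return 2 * 3 - 2 * loglik

# synthetic "GPS easting" series with temporal correlation (AR(1), phi = 0.95)
rng = np.random.default_rng(0)
e = np.zeros(5000)
for k in range(1, e.size):
    e[k] = 0.95 * e[k - 1] + rng.normal(scale=0.3)
print(aic_white_noise(e), aic_ar1(e))   # the AR(1) AIC should be clearly lower
```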
11

Miller, Alexander, Boris Miller, and Gregory Miller. "On AUV Control with the Aid of Position Estimation Algorithms Based on Acoustic Seabed Sensing and DOA Measurements." Sensors 19, no. 24 (December 13, 2019): 5520. http://dx.doi.org/10.3390/s19245520.

Abstract:
This article discusses various approaches to the control of autonomous underwater vehicles (AUVs) with the aid of different velocity-position estimation algorithms. Traditionally this field is considered as the area of the extended Kalman filter (EKF) application: It became a universal tool for nonlinear observation models and its use is ubiquitous. Meanwhile, the specific characteristics of underwater navigation, such as an incomplete sets of measurements, constraints on the range metering or even impossibility of range measurements, observations provided by rather specific acoustic beacons, sonar observations, and other features seriously narrow the applicability of common instruments due to a high level of uncertainty and nonlinearity. The AUV navigation system, not being able to rely on a single source of position estimation, has to take into account all available information. This leads to the necessity of various complex estimation and data fusion algorithms, which are the matter of the present article. Here we discuss some approaches to the AUV position estimation such as conditionally minimax nonlinear filtering (CMNF) and unbiased pseudo-measurement filters (UPMFs) in conjunction with velocity estimation based on the seabed profile acoustic sensing. The presented estimation algorithms serve as a basis for a locally optimal AUV motion control algorithm, which is also presented.
12

Deneux, Thomas, and Olivier Faugeras. "EEG-fMRI Fusion of Paradigm-Free Activity Using Kalman Filtering." Neural Computation 22, no. 4 (April 2010): 906–48. http://dx.doi.org/10.1162/neco.2009.05-08-793.

Abstract:
We address here the use of EEG and fMRI, and their combination, in order to estimate the full spatiotemporal patterns of activity on the cortical surface in the absence of any particular assumptions on this activity such as stimulation times. For handling such a high-dimension inverse problem, we propose the use of (1) a global forward model of how these measures are functions of the “neural activity” of a large number of sources distributed on the cortical surface, formalized as a dynamical system, and (2) adaptive filters, as a natural solution to solve this inverse problem iteratively along the temporal dimension. This estimation framework relies on realistic physiological models, uses EEG and fMRI in a symmetric manner, and takes into account both their temporal and spatial information. We use the Kalman filter and smoother to perform such an estimation on realistic artificial data and demonstrate that the algorithm can handle the high dimensionality of these data and that it succeeds in solving this inverse problem, combining efficiently the information provided by the two modalities (this information being naturally predominantly temporal for EEG and spatial for fMRI). It performs particularly well in reconstructing a random temporally and spatially smooth activity spread over the cortex. The Kalman filter and smoother show some limitations, however, which call for the development of more specific adaptive filters. First, they do not cope well with the strong nonlinearity in the model that is necessary for an adequate description of the relation between cortical electric activities and the metabolic demand responsible for fMRI signals. Second, they fail to estimate a sparse activity (i.e., presenting sharp peaks at specific locations and times). Finally their computational cost remains high. We use schematic examples to explain these limitations and propose further developments of our method to overcome them.
13

Yuan, Yubin, Yu Shen, Jing Peng, Lin Wang, and Hongguo Zhang. "Defogging Technology Based on Dual-Channel Sensor Information Fusion of Near-Infrared and Visible Light." Journal of Sensors 2020 (November 15, 2020): 1–17. http://dx.doi.org/10.1155/2020/8818650.

Abstract:
Since the method to remove fog from images is complicated and detail loss and color distortion could occur to the defogged images, a defogging method based on near-infrared and visible image fusion is put forward in this paper. The algorithm in this paper uses the near-infrared image with rich details as a new data source and adopts the image fusion method to obtain a defog image with rich details and high color recovery. First, the colorful visible image is converted into HSI color space to obtain an intensity channel image, color channel image, and saturation channel image. The intensity channel image is fused with a near-infrared image and defogged, and then it is decomposed by Nonsubsampled Shearlet Transform. The obtained high-frequency coefficient is filtered by preserving the edge with a double exponential edge smoothing filter, while low-frequency antisharpening masking treatment is conducted on the low-frequency coefficient. The new intensity channel image could be obtained based on the fusion rule and by reciprocal transformation. Then, in color treatment of the visible image, the degradation model of the saturation image is established, which estimates the parameters based on the principle of dark primary color to obtain the estimated saturation image. Finally, the new intensity channel image, the estimated saturation image, and the primary color image are reflected to RGB space to obtain the fusion image, which is enhanced by color and sharpness correction. In order to prove the effectiveness of the algorithm, the dense fog image and the thin fog image are compared with the popular single image defogging and multiple image defogging algorithms and the visible light-near infrared fusion defogging algorithm based on deep learning. The experimental results show that the proposed algorithm is better in improving the edge contrast and the visual sharpness of the image than the existing high-efficiency defogging method.
14

Albanwan, H., and R. Qin. "ENHANCEMENT OF DEPTH MAP BY FUSION USING ADAPTIVE AND SEMANTIC-GUIDED SPATIOTEMPORAL FILTERING." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences V-3-2020 (August 3, 2020): 227–32. http://dx.doi.org/10.5194/isprs-annals-v-3-2020-227-2020.

Abstract:
Abstract. Extracting detailed geometric information about a scene relies on the quality of the depth maps (e.g. Digital Elevation Surfaces, DSM) to enhance the performance of 3D model reconstruction. Elevation information from LiDAR is often expensive and hard to obtain. The most common approach to generate depth maps is through multi-view stereo (MVS) methods (e.g. dense stereo image matching). The quality of single depth maps, however, is often prone to noise, outliers, and missing data points due to the quality of the acquired image pairs. A reference multi-view image pair must be noise-free and clear to ensure high-quality depth maps. To avoid such a problem, current researches are headed toward fusing multiple depth maps to recover the shortcomings of single-depth maps resulted from a single pair of multi-view images. Several approaches tackled this problem by merging and fusing depth maps, using probabilistic and deterministic methods, but few discussed how these fused depth maps can be refined through adaptive spatiotemporal analysis algorithms (e.g. spatiotemporal filters). The motivation is to push towards preserving the high precision and detail level of depth maps while optimizing the performance, robustness, and efficiency of the algorithm.
15

Mensing, Christian, Stephan Sand, and Armin Dammann. "Hybrid Data Fusion and Tracking for Positioning with GNSS and 3GPP-LTE." International Journal of Navigation and Observation 2010 (August 2, 2010): 1–12. http://dx.doi.org/10.1155/2010/812945.

Abstract:
Global navigation satellite systems (GNSSs) can provide reliable positioning information under optimum conditions, where at least four satellites can be accessed with sufficient quality. In critical situations, for example, urban canyons or indoor, due to blocking of satellites by buildings and severe multipath effects, the GNSS performance can be decreased substantially. To overcome this limitation, we propose to exploit additionally information from communications systems for positioning purposes, for example, by using time difference of arrival (TDOA) information. To optimize the performance, hybrid data fusion and tracking algorithms can combine both types of sources and further exploit the mobility of the user. Simulation results for different filter types show the ability of this approach to compensate the lack of satellites by additional TDOA measurements from a future 3GPP-LTE communications system. This paper analyzes the performance in a fairly realistic manner by taking into account ray-tracing simulations to generate a coherent environment for GNSS and 3GPP-LTE.
16

Xia, Haoran, Yuanping Zhang, Ming Yang, and Yufang Zhao. "Visual Tracking via Deep Feature Fusion and Correlation Filters." Sensors 20, no. 12 (June 14, 2020): 3370. http://dx.doi.org/10.3390/s20123370.

Abstract:
Visual tracking is a fundamental vision task that tries to identify instances of several object classes in videos and images. It has attracted much attention for providing the basic semantic information for numerous applications. Over the past 10 years, visual tracking has made great progress, but huge challenges still exist in many real-world applications. The appearance of a target can be transformed significantly by pose changes, occlusion, and sudden movement, which can lead to a sudden target loss. This paper builds a hybrid tracker combining a deep feature method and a correlation filter to address this challenge, and verifies its powerful characteristics. Specifically, an effective visual tracking method is proposed to address the problem of low tracking accuracy due to the limitations of traditional hand-crafted feature models; rich hierarchical features of convolutional neural networks are then used so that multi-layer feature fusion improves the tracker's learning accuracy. Finally, a large number of experiments are conducted on the benchmark data sets OBT-100 and OBT-50, and show that the proposed algorithm is effective.
17

Dang, Xiaochao, Xiong Si, Zhanjun Hao, and Yaning Huang. "A Novel Passive Indoor Localization Method by Fusion CSI Amplitude and Phase Information." Sensors 19, no. 4 (February 20, 2019): 875. http://dx.doi.org/10.3390/s19040875.

Abstract:
With the rapid development of wireless network technology, wireless passive indoor localization has become an increasingly important technique that is widely used in indoor location-based services. Channel state information (CSI) can provide more detailed and specific subcarrier information, which has gained the attention of researchers and has become an emphasis in indoor localization technology. However, existing research has generally adopted amplitude information for eigenvalue calculations. There are few research studies that have used phase information from CSI signals for localization purposes. To eliminate the signal interference existing in indoor environments, we present a passive human indoor localization method named FapFi, which fuses CSI amplitude and phase information to fully utilize richer signal characteristics to find location. In the offline stage, we filter out redundant values and outliers in the CSI amplitude information and then process the CSI phase information. A fusion method is utilized to store the processed amplitude and phase information as a fingerprint database. The experimental data from two typical laboratory and conference room environments were gathered and analyzed. The extensive experimental results demonstrate that the proposed algorithm is more efficient than other algorithms in data processing and achieves decimeter-level localization accuracy.
18

Bezmen, P. A. "Integration of Mobile Robot Control System Data Using the Extended Kalman Filter." Proceedings of the Southwest State University 23, no. 2 (July 9, 2019): 53–64. http://dx.doi.org/10.21869/2223-1560-2019-23-2-53-64.

Abstract:
Purpose of research. The article deals with the adaptation of the extended Kalman filter algorithm for the integration of data from the physical-quantity sensors of a mobile robot. Methods. Integration of data is the process of information (data) fusion for determination or prediction of the state of an object. Integration provides increased robustness of robot control and accuracy of machine perception of information. This process is similar to repeated experiments carried out in order to determine, in direct and/or indirect ways, the value of a physical quantity with the required accuracy. In the control system of a mobile robot, the integration of sensor data is carried out by one or more computing devices (for example, processors or microcontrollers) [1-5]. Results. Advances in digital signal processing and image processing are based on new algorithms, on increasing the speed of data processing by computing devices, and on increasing the speed of access to data stored in storage devices and the capacity of the latter. Computing devices also perform averaging and filtering of the signals of individual sensors and their further matching. The problem of stable integration and processing of information from different measuring devices can be solved with the help of the Kalman filter algorithm. The linear Kalman filter algorithm and, in particular, the extended Kalman filter algorithm perform a large amount of computation in the course of their work. In comparison with the linear Kalman filter, the extended Kalman filter significantly increases the requirements for the computing power of the onboard computer (computing device) of a mobile robot. Conclusion. The main effect of integration is obtaining fundamentally new information that cannot be obtained from individual sensors. This approach relieves data channels of the large (excessive) data flows coming directly from the sensors and reduces the requirements for the computing power of the upper-level computing device in the structure of the mobile robot control system.
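As a concrete reference for the filter discussed in the article, a generic extended Kalman filter predict/update cycle is sketched below, with an assumed unicycle odometry model and an assumed absolute position fix as the measurement; the article's actual sensor set and models are those of the mobile robot it describes.

```python
import numpy as np

def ekf_predict(x, P, v, w, dt, Q):
    """Unicycle motion model; state x = [px, py, heading]."""
    px, py, th = x
    x_pred = np.array([px + v * dt * np.cos(th), py + v * dt * np.sin(th), th + w * dt])
    F = np.array([[1, 0, -v * dt * np.sin(th)],
                  [0, 1,  v * dt * np.cos(th)],
                  [0, 0, 1]])
    return x_pred, F @ P @ F.T + Q

def ekf_update_position(x, P, z, R):
    """Measurement update with an absolute position fix (e.g., a beacon-like sensor)."""
    H = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    return x, (np.eye(3) - K @ H) @ P

x, P = np.zeros(3), np.eye(3) * 0.1
Q, R = np.diag([0.01, 0.01, 0.005]), np.diag([0.25, 0.25])
x, P = ekf_predict(x, P, v=0.5, w=0.1, dt=0.1, Q=Q)            # odometry step
x, P = ekf_update_position(x, P, z=np.array([0.06, 0.01]), R=R)
```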
19

Kim, H., and I. Lee. "LOCALIZATION OF A CAR BASED ON MULTI-SENSOR FUSION." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-1 (September 26, 2018): 247–50. http://dx.doi.org/10.5194/isprs-archives-xlii-1-247-2018.

Abstract:
Vehicle localization is an essential component for stable autonomous car operation. There are many algorithms for vehicle localization; however, they still need much improvement in terms of accuracy and cost. In this paper, a sensor fusion based localization algorithm is used for solving this problem. Our sensor system is composed of in-vehicle sensors, GPS and vision sensors. The localization algorithm is based on an extended Kalman filter and has a time update step and a measurement update step. In the time update step, in-vehicle sensors such as the yaw-rate and speed sensors are used, and the GPS and vision sensor information is used to update the vehicle position in the measurement update step. We use a visual odometry library to process the vision sensor data and generate the moving distance and direction of the car. In particular, when performing visual odometry we use a georeferenced image database to reduce the error accumulation. Through the experiments, the proposed localization algorithm is verified and evaluated. The RMS errors of the estimated result from the proposed algorithm are about 4.3 m. This result shows about 40% improvement in accuracy in comparison with the result from the GPS-only method and demonstrates the potential of the proposed localization algorithm. However, it is still necessary to improve the accuracy before applying this algorithm to an autonomous car. Therefore, we plan to use multiple cameras (rear cameras or AVM cameras) and more information such as a high-definition map or V2X communication, and the filter and error modelling also need to be changed for better results.
20

Kashinath, Shafiza Ariffin, Salama A. Mostafa, David Lim, Aida Mustapha, Hanayanti Hafit, and Rozanawati Darman. "A general framework of multiple coordinative data fusion modules for real-time and heterogeneous data sources." Journal of Intelligent Systems 30, no. 1 (January 1, 2021): 947–65. http://dx.doi.org/10.1515/jisys-2021-0083.

Abstract:
Abstract Designing a data-responsive system requires accurate input to ensure efficient results. The growth of technology in sensing methods and the needs of various kinds of data greatly impact data fusion (DF)-related study. A coordinative DF framework entails the participation of many subsystems or modules to produce coordinative features. These features are utilized to facilitate and improve solving certain domain problems. Consequently, this paper proposes a general Multiple Coordinative Data Fusion Modules (MCDFM) framework for real-time and heterogeneous data sources. We develop the MCDFM framework to adapt various DF application domains requiring macro and micro perspectives of the observed problems. This framework consists of preprocessing, filtering, and decision as key DF processing phases. These three phases integrate specific purpose algorithms or methods such as data cleaning and windowing methods for preprocessing, extended Kalman filter (EKF) for filtering, fuzzy logic for local decision, and software agents for coordinative decision. These methods perform tasks that assist in achieving local and coordinative decisions for each node in the network of the framework application domain. We illustrate and discuss the proposed framework in detail by taking a stretch of road intersections controlled by a traffic light controller (TLC) as a case study. The case study provides a clearer view of the way the proposed framework solves traffic congestion as a domain problem. We identify the traffic features that include the average vehicle count, average vehicle speed (km/h), average density (%), interval (s), and timestamp. The framework uses these features to identify three congestion periods, which are the nonpeak period with a congestion degree of 0.178 and a variance of 0.061, a medium peak period with a congestion degree of 0.588 and a variance of 0.0593, and a peak period with a congestion degree of 0.796 and a variance of 0.0296. The results of the TLC case study show that the framework provides various capabilities and flexibility features of both micro and macro views of the scenarios being observed and clearly presents viable solutions.
21

Li, Shugang, Ru Wang, Yuqi Zhang, Hanyu Lu, Nannan Cai, and Zhaoxu Yu. "Potential social media influencers discrimination for concept marketing in online brand community." Journal of Intelligent & Fuzzy Systems 41, no. 1 (August 11, 2021): 317–29. http://dx.doi.org/10.3233/jifs-201809.

Abstract:
Identifying potential social media influencers (SMIs) accurately can achieve a long-time and effective concept marketing at a lower cost, and then promote the development of the corporate brand in online communities. However, potential SMIs discrimination often faces the problem of insufficient available information of the long-term evolution of the network, and the existing discriminant methods based on link analysis fail to obtain more accurate results. To fill this gap, a consensus smart discriminant algorithm (CSDA) is proposed to identify the potential SMIs with the aid of attention concentration (AC) between users in a closed triadic structure. CSDA enriches and expands the users’ AC information by fusing multiple attention concentration indexes (ACIs) as well as filters the noise information caused by multi-index fusion through consensus among the indexes. Specifically, to begin with, to enrich the available long-term network evolution information, the unidirectional attention concentration indexes (UACIs) and the bidirectional attention concentration indexes (BACIs) are defined; next, the consensus attention concentration index (CACI) is selected according to the principle of minimum upper and lower bounds of link prediction bias to filter noise information; the potential SMI is determined by adaptively calculating CACI among the user to be identified, unconnected user group and their common neighbor. The validity and reliability of the proposed method are verified by the actual data of Twitter.
22

Foltz, Steven M., Qingsong Gao, Christopher J. Yoon, Amila Weerasinghe, Hua Sun, Lijun Yao, Mark A. Fiala, et al. "Comprehensive Multi-Omics Analysis of Gene Fusions in a Large Multiple Myeloma Cohort." Blood 132, Supplement 1 (November 29, 2018): 1898. http://dx.doi.org/10.1182/blood-2018-99-117245.

Abstract:
Abstract Introduction: Gene fusions are the result of genomic rearrangements that create hybrid protein products or bring the regulatory elements of one gene into close proximity of another. Fusions often dysregulate gene function or expression through oncogene overexpression or tumor suppressor underexpression (Gao, Liang, Foltz, et al. Cell Rep 2018). Some fusions such as EML4--ALK in lung adenocarcinoma are known druggable targets. Fusion detection algorithms utilize discordantly mapped RNA-seq reads. Careful consideration of detection and filtering procedures is vital for large-scale fusion detection because current methods are prone to reporting false positives and show poor concordance. Multiple myeloma (MM) is a blood cancer in which rapidly expanding clones of plasma cells spread in the bone marrow. Translocations that juxtapose the highly-expressed IGH enhancer with potential oncogenes are associated with overexpression of partner genes, although they may not lead to a detectable gene fusion in RNA-seq data. Previous studies have explored the fusion landscape of multiple myeloma cohorts (Cleynen, et al. Nat Comm 2017; Nasser, et al. Blood 2017). In this study, we developed a novel gene fusion detection pipeline and post-processing strategy to analyze 742 patient samples at the primary time point and 64 samples at follow-up time points (806 total samples) from the Multiple Myeloma Research Foundation (MMRF) CoMMpass Study using RNA-seq, WGS, and clinical data. Methods and Results: We overlapped five fusion detection algorithms (EricScript, FusionCatcher, INTEGRATE, PRADA, and STAR-Fusion) to report fusion events. Our filtered call set consisted of 2,817 fusions with a median of 3 fusions per sample (mean 3.8), similar to glioblastoma, breast, ovarian, and prostate cancers in TCGA. Major recurrent fusions involving immunoglobulin genes included IGH--WHSC1 (88 primary samples), IGL--BMI1 (29), and the upstream neighbor of MYC, PVT1, paired with IGH (6), IGK (3), and IGL (11). For each event, we used WGS data when available to determine if there was genomic support of the gene fusion (based on discordant WGS reads, SV event detection, and MMRF CoMMpass Seq-FISH WGS results) (Miller, et al. Blood 2016). WGS validation rates varied by the level of RNA-seq evidence supporting each fusion, with an overall rate of 24.1%, which is comparable to previously observed pan-cancer validation rates using low-pass WGS. We calculated the association between fusion status and gene expression and identified genes such as BCL2L11, CCND1/2, LTBR, and TXNDC5 that showed significant overexpression (t-test). We explored the clinical connections of fusion events through survival analysis and clinical data correlations, and by mining potentially druggable targets from our Database of Evidence for Precision Oncology (dinglab.wustl.edu/depo) (Sun, Mashl, Sengupta, et al. Bioinformatics 2018). Major examples of upregulated fusion kinases that could potentially be targeted with off-label drug use include FGFR3 and NTRK1. We examined the evolution of fusion events over multiple time points. In one MMRF patient with a t(8;14) translocation joining the IGH locus and transcription factor MAFA, we observed IGH fusions with TOP1MT (neighbor of MAFA) at all four time points with corresponding high expression of TOP1MT and MAFA. 
Using non-MMRF single-cell RNA data from different patients, we were able to track cell-type composition over time as well as detect subpopulations of cells harboring fusions at different time points with potential treatment implications. Discussion: Gene fusions offer potential targets for alternative MM therapies. Careful implementation of gene fusion detection algorithms and post-processing are essential in large cohort studies to reduce false positives and enrich results for clinically relevant information. Clinical fusion detection from untargeted RNA-seq remains a challenge due to poor sensitivity, specificity, and usability. By combining MMRF CoMMpass data from multiple platforms, we have produced a comprehensive fusion profile of 742 MM patients. We have shown novel gene fusion associations with gene expression and clinical data, and we identified candidates for druggability studies. Disclosures Vij: Bristol-Myers Squibb: Honoraria, Membership on an entity's Board of Directors or advisory committees, Research Funding; Celgene: Honoraria, Membership on an entity's Board of Directors or advisory committees, Research Funding; Jazz Pharmaceuticals: Honoraria, Membership on an entity's Board of Directors or advisory committees; Jansson: Honoraria, Membership on an entity's Board of Directors or advisory committees; Amgen: Honoraria, Membership on an entity's Board of Directors or advisory committees; Karyopharma: Honoraria, Membership on an entity's Board of Directors or advisory committees; Takeda: Honoraria, Membership on an entity's Board of Directors or advisory committees, Research Funding.
23

Han, Yanling, Xi Shi, Shuhu Yang, Yun Zhang, Zhonghua Hong, and Ruyan Zhou. "Hyperspectral Sea Ice Image Classification Based on the Spectral-Spatial-Joint Feature with the PCA Network." Remote Sensing 13, no. 12 (June 9, 2021): 2253. http://dx.doi.org/10.3390/rs13122253.

Abstract:
Sea ice is one of the most prominent causes of marine disasters occurring at high latitudes. The detection of sea ice is particularly important, and the classification of sea ice images is an important part of sea ice detection. Traditional sea ice classification based on optical remote sensing mostly uses spectral information only and does not fully extract rich spectral and spatial information from sea ice images. At the same time, it is difficult to obtain samples and the resulting small sample sizes used in sea ice classification has limited the improvement of classification accuracy to a certain extent. In response to the above problems, this paper proposes a hyperspectral sea ice image classification method involving spectral-spatial-joint features based on the principal component analysis (PCA) network. First, the method uses the gray-level co-occurrence matrix (GLCM) and Gabor filter to extract textural and spatial information about sea ice. Then, the optimal band combination is extracted with a band selection algorithm based on a hybrid strategy, and the information hidden in the sea ice image is deeply extracted through a fusion of spectral and spatial features. Then, the PCA network is designed based on principal component analysis filters in order to extract the depth features of sea ice more effectively, and hash binarization maps and block histograms are used to enhance the separation and reduce the dimensions of features. Finally, the low-level features in the data form more abstract and invariant high-level features for sea ice classification. In order to verify the effectiveness of the proposed method, we conducted experiments on two different data collection points in Bohai Bay and Baffin Bay. The experimental results show that, compared with other single feature and spectral-spatial-joint feature algorithms, the proposed method achieves better sea ice classification results (94.15% and 96.86%) by using fewer training samples and a shorter training time.
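A minimal sketch of the two hand-crafted texture extractors mentioned above (GLCM statistics and a Gabor filter bank), using scikit-image; the distances, angles, quantization level and Gabor frequency are illustrative settings rather than the paper's. Note that older scikit-image releases spell the GLCM functions greycomatrix/greycoprops.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from skimage.filters import gabor

def texture_features(band, levels=32):
    """GLCM contrast/homogeneity plus Gabor filter energies for one image band."""
    img = np.digitize(band, np.linspace(band.min(), band.max(), levels)) - 1
    glcm = graycomatrix(img.astype(np.uint8), distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    feats = [graycoprops(glcm, "contrast").mean(), graycoprops(glcm, "homogeneity").mean()]
    for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):       # small Gabor bank
        real, imag = gabor(band, frequency=0.2, theta=theta)
        feats.append(np.sqrt(real**2 + imag**2).mean())          # mean filter-response energy
    return np.array(feats)

band = np.random.rand(64, 64)          # stand-in for one hyperspectral band
print(texture_features(band))
```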
24

Schön, Steffen, Claus Brenner, Hamza Alkhatib, Max Coenen, Hani Dbouk, Nicolas Garcia-Fernandez, Colin Fischer, et al. "Integrity and Collaboration in Dynamic Sensor Networks." Sensors 18, no. 7 (July 23, 2018): 2400. http://dx.doi.org/10.3390/s18072400.

Abstract:
Global Navigation Satellite Systems (GNSS) deliver absolute position and velocity, as well as time information (P, V, T). However, in urban areas, the GNSS navigation performance is restricted due to signal obstructions and multipath. This is especially true for applications dealing with highly automatic or even autonomous driving. Subsequently, multi-sensor platforms including laser scanners and cameras, as well as map data are used to enhance the navigation performance, namely in accuracy, integrity, continuity and availability. Although well-established procedures for integrity monitoring exist for aircraft navigation, for sensors and fusion algorithms used in automotive navigation, these concepts are still lacking. The research training group i.c.sens, integrity and collaboration in dynamic sensor networks, aims to fill this gap and to contribute to relevant topics. This includes the definition of alternative integrity concepts for space and time based on set theory and interval mathematics, establishing new types of maps that report on the trustworthiness of the represented information, as well as taking advantage of collaboration by improved filters incorporating person and object tracking. In this paper, we describe our approach and summarize the preliminary results.
25

Jalal Abadi, Marzieh, Luca Luceri, Mahbub Hassan, Chun Tung Chou, and Monica Nicoli. "A Cooperative Machine Learning Approach for Pedestrian Navigation in Indoor IoT." Sensors 19, no. 21 (October 23, 2019): 4609. http://dx.doi.org/10.3390/s19214609.

Abstract:
This paper presents a system based on pedestrian dead reckoning (PDR) for localization of networked mobile users, which relies only on sensors embedded in the devices and device-to-device connectivity. The user trajectory is reconstructed by measuring the user displacements step by step. Though the step length can be estimated rather accurately, heading evaluation is extremely problematic in indoor environments. A magnetometer is typically used; however, its measurements are strongly perturbed. To improve the location accuracy, this paper proposes a novel cooperative system to estimate the direction of motion based on a machine learning approach for perturbation detection and filtering, combined with a consensus algorithm for performance augmentation by cooperative data fusion at multiple devices. A first algorithm filters out perturbed magnetometer measurements based on a priori information on the Earth's magnetic field. A second algorithm aggregates groups of users walking in the same direction, while a third one combines the measurements of the aggregated users in a distributed way to extract a more accurate heading estimate. To the best of our knowledge, this is the first approach that combines machine learning with consensus algorithms for cooperative PDR. Compared to other methods in the literature, it has the advantage of being infrastructure-free, fully distributed and robust to sensor failures thanks to the pre-filtering of perturbed measurements. Extensive indoor experiments show that the heading error is greatly reduced by the proposed approach, leading to noticeable enhancements in localization performance.
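A minimal sketch of the first and third ideas above under simplified assumptions: perturbed magnetometer samples are rejected with a magnitude gate around a nominal Earth-field strength, and the surviving headings from several users are combined with a circular mean. The threshold values and the plain averaging stand in for the learning-based detector and the distributed consensus algorithm of the paper.

```python
import numpy as np

EARTH_FIELD_UT = 50.0          # nominal local field magnitude (microtesla), assumed
TOLERANCE_UT = 10.0            # accept samples within +/- tolerance of the nominal value

def filter_magnetometer(samples):
    """Drop perturbed samples whose field magnitude deviates from the a priori value."""
    mags = np.linalg.norm(samples, axis=1)
    return samples[np.abs(mags - EARTH_FIELD_UT) < TOLERANCE_UT]

def heading_from_sample(m):
    """Horizontal-plane heading (rad) from a magnetometer vector [mx, my, mz]."""
    return np.arctan2(m[1], m[0])

def consensus_heading(headings):
    """Circular mean of the headings reported by users walking in the same direction."""
    return np.arctan2(np.mean(np.sin(headings)), np.mean(np.cos(headings)))

# toy data: three users, a few samples each (uT); one sample is clearly perturbed
users = [np.array([[30.0, 38.0, 12.0], [31.0, 37.0, 11.0], [80.0, 5.0, 40.0]]),
         np.array([[29.0, 39.0, 10.0], [30.5, 38.5, 12.0]]),
         np.array([[32.0, 36.0, 11.5]])]
headings = [heading_from_sample(filter_magnetometer(u).mean(axis=0)) for u in users]
print(np.degrees(consensus_heading(np.array(headings))))
```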
26

Liu, Long, Hui-Hui Wang, Sen Qiu, Yun-Cui Zhang, and Zheng-Dong Hao. "Paddle Stroke Analysis for Kayakers Using Wearable Technologies." Sensors 21, no. 3 (January 29, 2021): 914. http://dx.doi.org/10.3390/s21030914.

Abstract:
Proper stroke posture and rhythm are crucial for kayakers to achieve perfect performance and avoid sports injuries. The traditional video-based analysis method has numerous limitations (e.g., site constraints and occlusion). In this study, we propose a systematic approach for evaluating the training performance of kayakers based on multiple-sensor fusion technology. The kayakers' motion information is collected by miniature inertial sensor nodes attached to the body. The extended Kalman filter (EKF) method is used for data fusion and updating the human posture. After sensor calibration, the kayakers' actions are reconstructed with a rigid-body model. A quantitative kinematic analysis is carried out based on joint angles. Machine learning algorithms are used for dividing the stroke cycle into different phases, including entry, pull, exit and recovery. The experiment shows that our method can provide comprehensive motion evaluation information in a real on-water scenario, and the phase identification accuracy of the kayakers' motions is up to 98%, as validated by the videography method. The proposed approach can provide quantitative information for coaches and athletes, which can be used to improve the training effect.
27

Shi, Qiaoqiao, Wei Li, Ran Tao, Xu Sun, and Lianru Gao. "Ship Classification Based on Multifeature Ensemble with Convolutional Neural Network." Remote Sensing 11, no. 4 (February 18, 2019): 419. http://dx.doi.org/10.3390/rs11040419.

Abstract:
As an important part of maritime traffic, ships play an important role in military and civilian applications. However, ships’ appearances are susceptible to some factors such as lighting, occlusion, and sea state, making ship classification more challenging. This is of great importance when exploring global and detailed information for ship classification in optical remote sensing images. In this paper, a novel method to obtain discriminative feature representation of a ship image is proposed. The proposed classification framework consists of a multifeature ensemble based on convolutional neural network (ME-CNN). Specifically, two-dimensional discrete fractional Fourier transform (2D-DFrFT) is employed to extract multi-order amplitude and phase information, which contains such important information as profiles, edges, and corners; completed local binary pattern (CLBP) is used to obtain local information about ship images; Gabor filter is used to gain the global information about ship images. Then, deep convolutional neural network (CNN) is applied to extract more abstract features based on the above information. CNN, extracting high-level features automatically, has performed well for object classification tasks. After high-feature learning, as the one of fusion strategies, decision-level fusion is investigated for the final classification result. The average accuracy of the proposed approach is 98.75% on the BCCT200-resize data, 92.50% on the original BCCT200 data, and 87.33% on the challenging VAIS data, which validates the effectiveness of the proposed method when compared to the existing state-of-art algorithms.
28

Lai, Y. C., C. C. Chang, C. M. Tsai, S. Y. Lin, and S. C. Huang. "DEVELOPMENT OF A PEDESTRIAN INDOOR NAVIGATION SYSTEM BASED ON MULTI-SENSOR FUSION AND FUZZY LOGIC ESTIMATION ALGORITHMS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-4/W5 (May 11, 2015): 81–86. http://dx.doi.org/10.5194/isprsarchives-xl-4-w5-81-2015.

Abstract:
This paper presents a pedestrian indoor navigation system based on multi-sensor fusion and fuzzy logic estimation algorithms. The proposed navigation system is a self-contained dead reckoning navigation system, which means no outside signal is required. In order to achieve this self-contained capability, a portable and wearable inertial measurement unit (IMU) has been developed. Its sensors are low-cost inertial sensors, an accelerometer and a gyroscope, based on micro electro-mechanical systems (MEMS). There are two types of IMU modules, handheld and waist-mounted. The low-cost MEMS sensors suffer from various errors due to manufacturing imperfections and other effects. Therefore, a sensor calibration procedure based on scalar calibration and least squares methods has been introduced in this study to improve the accuracy of the inertial sensors. With the calibrated data acquired from the inertial sensors, the step length and strength of the pedestrian are estimated by the multi-sensor fusion and fuzzy logic estimation algorithms. The developed multi-sensor fusion algorithm provides the number of walking steps and the strength of each step in real time. The estimated walking amount and strength per step are then fed into the proposed fuzzy logic estimation algorithm to estimate the step lengths of the user. Since both the walking length and direction are required for dead reckoning navigation, the walking direction is calculated by integrating the angular rate acquired by the gyroscope of the developed IMU module. Both the walking length and direction are calculated on the IMU module and transmitted via Bluetooth to a smartphone, which performs the dead reckoning navigation in a self-developed app. Due to the error accumulation of dead reckoning navigation, a particle filter and a pre-loaded map of the indoor environment have been applied in the app of the proposed navigation system to extend its usability. The experimental results of the proposed navigation system demonstrate good navigation performance in indoor environments given an accurate initial location and direction.
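The dead reckoning core of such a system reduces to accumulating step lengths along the integrated heading; a minimal sketch follows, where the step lengths and headings are placeholders for the outputs of the fuzzy-logic and gyro-integration stages described above.

```python
import math

def dead_reckon(start_xy, steps):
    """Accumulate (step_length_m, heading_rad) pairs from a known start position.

    Heading is measured clockwise from north, so north = +y and east = +x.
    """
    x, y = start_xy
    track = [(x, y)]
    for length, heading in steps:
        x += length * math.sin(heading)
        y += length * math.cos(heading)
        track.append((x, y))
    return track

# placeholder outputs of the step-detection / fuzzy step-length / gyro stages
steps = [(0.72, math.radians(0.0)), (0.70, math.radians(2.0)), (0.68, math.radians(90.0))]
print(dead_reckon((0.0, 0.0), steps))
```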
29

Muresan, Mircea Paul, Ion Giosan, and Sergiu Nedevschi. "Stabilization and Validation of 3D Object Position Using Multimodal Sensor Fusion and Semantic Segmentation." Sensors 20, no. 4 (February 18, 2020): 1110. http://dx.doi.org/10.3390/s20041110.

Abstract:
The stabilization and validation process of the measured position of objects is an important step for high-level perception functions and for the correct processing of sensory data. The goal of this process is to detect and handle inconsistencies between different sensor measurements, which result from the perception system. The aggregation of the detections from different sensors consists in the combination of the sensorial data in one common reference frame for each identified object, leading to the creation of a super-sensor. The result of the data aggregation may end up with errors such as false detections, misplaced object cuboids or an incorrect number of objects in the scene. The stabilization and validation process is focused on mitigating these problems. The current paper proposes four contributions for solving the stabilization and validation task, for autonomous vehicles, using the following sensors: trifocal camera, fisheye camera, long-range RADAR (Radio detection and ranging), and 4-layer and 16-layer LIDARs (Light Detection and Ranging). We propose two original data association methods used in the sensor fusion and tracking processes. The first data association algorithm is created for tracking LIDAR objects and combines multiple appearance and motion features in order to exploit the available information for road objects. The second novel data association algorithm is designed for trifocal camera objects and has the objective of finding measurement correspondences to sensor fused objects such that the super-sensor data are enriched by adding the semantic class information. The implemented trifocal object association solution uses a novel polar association scheme combined with a decision tree to find the best hypothesis–measurement correlations. Another contribution we propose for stabilizing object position and unpredictable behavior of road objects, provided by multiple types of complementary sensors, is the use of a fusion approach based on the Unscented Kalman Filter and a single-layer perceptron. The last novel contribution is related to the validation of the 3D object position, which is solved using a fuzzy logic technique combined with a semantic segmentation image. The proposed algorithms have a real-time performance, achieving a cumulative running time of 90 ms, and have been evaluated using ground truth data extracted from a high-precision GPS (global positioning system) with 2 cm accuracy, obtaining an average error of 0.8 m.
APA, Harvard, Vancouver, ISO, and other styles
30

Adriano, Bruno, Junshi Xia, Gerald Baier, Naoto Yokoya, and Shunichi Koshimura. "Multi-Source Data Fusion Based on Ensemble Learning for Rapid Building Damage Mapping during the 2018 Sulawesi Earthquake and Tsunami in Palu, Indonesia." Remote Sensing 11, no. 7 (April 11, 2019): 886. http://dx.doi.org/10.3390/rs11070886.

Full text
Abstract:
This work presents a detailed analysis of building damage recognition, employing multi-source data fusion and ensemble learning algorithms for rapid damage mapping tasks. A damage classification framework is introduced and tested to categorize the building damage following the recent 2018 Sulawesi earthquake and tsunami. Three robust ensemble learning classifiers were investigated for recognizing building damage from Synthetic Aperture Radar (SAR) and optical remote sensing datasets and their derived features. The contribution of each feature dataset was also explored, considering different combinations of sensors as well as their temporal information. SAR scenes acquired by the ALOS-2 PALSAR-2 and Sentinel-1 sensors were used. The optical Sentinel-2 and PlanetScope sensors were also included in this study. A non-local filter in the preprocessing phase was used to enhance the SAR features. Our results demonstrated that the canonical correlation forests classifier performs better in comparison to the other classifiers. In the data fusion analysis, Digital Elevation Model (DEM)- and SAR-derived features contributed the most to the overall damage classification. Our proposed mapping framework successfully classifies four levels of building damage (with overall accuracy >90%, average accuracy >67%). The proposed framework learns the damage patterns from the limited human-interpreted building damage annotations available and expands this information to map a larger affected area. This process, including the pre- and post-processing phases, was completed in about 3 h after acquiring all raw datasets.
APA, Harvard, Vancouver, ISO, and other styles
31

Ping, Xianyao, Shuo Cheng, Wei Yue, Yongchang Du, Xiangyu Wang, and Liang Li. "Adaptive estimations of tyre–road friction coefficient and body’s sideslip angle based on strong tracking and interactive multiple model theories." Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering 234, no. 14 (July 30, 2020): 3224–38. http://dx.doi.org/10.1177/0954407020941410.

Full text
Abstract:
Vehicle dynamic states and parameters, especially the tyre–road friction coefficient and the body's sideslip angle, are crucial for vehicle dynamics control with closed-loop feedback laws. Autonomous vehicles also place strict demands on real-time knowledge of this information to make reliable decisions. For cost reasons, estimation methods that rely on high-resolution vision and positioning devices are not suitable for production vehicles, while the poor adaptability of traditional Kalman filters to a variable system structure restricts their practical application. This paper introduces a cost-efficient estimation scheme using on-board sensors. An improved Strong Tracking Unscented Kalman Filter is constructed to estimate the friction coefficient with a fast convergence rate on time-variant road surfaces. Building on this step, an estimator based on the interactive multiple model is built to tolerate biased noise covariance matrices and observe the body's sideslip angle. After accounting for vehicle modelling errors, a Self-Correction Data Fusion algorithm is developed to integrate the results of the estimator and the direct integration method using error correction theory. Simulations and experiments verify the high accuracy and good robustness of the cooperative estimation scheme.
APA, Harvard, Vancouver, ISO, and other styles
32

Silva, Luiz A. Z. da, Vinicius F. Vidal, Leonardo M. Honório, Mário A. R. Dantas, Milena Faria Pinto, and Miriam Capretz. "A Heterogeneous Edge-Fog Environment Supporting Digital Twins for Remote Inspections." Sensors 20, no. 18 (September 16, 2020): 5296. http://dx.doi.org/10.3390/s20185296.

Full text
Abstract:
The increase in the development of digital twins brings several advantages to inspection and maintenance, but also new challenges. Digital models capable of representing real equipment for full remote inspection demand the synchronization, integration, and fusion of several sensors and methodologies such as stereo vision, monocular Simultaneous Localization and Mapping (SLAM), laser and RGB-D camera readings, texture analysis, filters, thermal, and multi-spectral images. This multidimensional information makes it possible to have a full understanding of given equipment, enabling remote diagnosis. To solve this problem, the present work uses an edge-fog-cloud architecture running over a publisher-subscriber communication framework to optimize the computational costs and throughput. In this approach, each process is embedded in an edge node responsible for preprocessing a given amount of data so as to optimize the trade-off between processing capability and throughput delay. All information is integrated across different levels of fog nodes and a cloud server to maximize performance. To demonstrate this proposal, a real-time 3D reconstruction problem using moving cameras is shown. In this scenario, stereo and RGB-D cameras run over edge nodes, which filter and preprocess the initial data. Furthermore, the point cloud and image registration, odometry, and filtering run over fog clusters. A cloud server is responsible for texturing and processing the final results. This approach enables us to optimize the time lag between data acquisition and operator visualization, and it is easily scalable if new sensors and algorithms must be added. The experimental results demonstrate precision by comparing the results with ground-truth data, as well as scalability and performance as further readings are added.
APA, Harvard, Vancouver, ISO, and other styles
33

Hu, Jie, Zhongli Wu, Xiongzhen Qin, Huangzheng Geng, and Zhangbin Gao. "An Extended Kalman Filter and Back Propagation Neural Network Algorithm Positioning Method Based on Anti-lock Brake Sensor and Global Navigation Satellite System Information." Sensors 18, no. 9 (August 21, 2018): 2753. http://dx.doi.org/10.3390/s18092753.

Full text
Abstract:
Telematics box (T-Box) chip-level Global Navigation Satellite System (GNSS) receiver modules usually suffer from GNSS information failure or noise in urban environments. To resolve this issue, this paper presents a real-time positioning method based on Extended Kalman Filter (EKF) and Back Propagation Neural Network (BPNN) algorithms that uses Antilock Brake System (ABS) sensor and GNSS information. Experiments were performed with a T-Box assembly installed in a vehicle. The T-Box first uses an automotive kinematic Pre-EKF to fuse the four wheel speeds, yaw rate, and steering wheel angle from the ABS sensor to obtain a more accurate vehicle speed and heading angle rate. To reduce the noise in the GNSS information, an After-EKF then fuses the vehicle speed, heading angle rate, and GNSS data to obtain low-noise positioning data. The heading angle rate error is extracted as the training target, and part of the low-noise positioning data is used as input to train a BPNN model. When positioning is invalid, the heading angle rate corrected by the trained BPNN and the vehicle speed are used to synthesize the relative displacement, which is added to the previous absolute position to obtain a new position. With data from high-precision real-time kinematic differential positioning equipment as the reference, the dual EKF reduces the noise range of the GNSS information and keeps good positioning signals within 5 m of the road (i.e., the positioning status is valid). When the GNSS information was shielded (making the positioning status invalid) and the previous data were used as training samples, the vehicle maintained its position for 15 minutes without GNSS information on the recycling line. The results indicate that the proposed positioning method can reduce vehicle positioning noise when GNSS information is valid and determine the position during long periods of invalid GNSS information.
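A rough Python sketch, under assumptions, of the dead-reckoning fallback described for GNSS outages; `correct_heading_rate` is a hypothetical stand-in for the trained BPNN correction, and all numbers are illustrative.

```python
# Illustrative sketch: propagate the vehicle position from speed and a corrected
# heading rate while GNSS is invalid.
import math

def correct_heading_rate(raw_rate):
    # Placeholder for the learned correction; here a fixed bias removal.
    return raw_rate - 0.001          # rad/s, hypothetical bias

def propagate(position, heading, speed, raw_heading_rate, dt):
    """One dead reckoning step: integrate heading, then displace along it."""
    heading += correct_heading_rate(raw_heading_rate) * dt
    dx = speed * dt * math.cos(heading)
    dy = speed * dt * math.sin(heading)
    return (position[0] + dx, position[1] + dy), heading

pos, hdg = (0.0, 0.0), 0.0
for _ in range(100):                 # 100 steps of 0.1 s while GNSS is invalid
    pos, hdg = propagate(pos, hdg, speed=10.0, raw_heading_rate=0.02, dt=0.1)
print(pos)
```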
APA, Harvard, Vancouver, ISO, and other styles
34

You, Changhui, Hong Zheng, Zhongyuan Guo, Tianyu Wang, and Xiongbin Wu. "Multiscale Content-Independent Feature Fusion Network for Source Camera Identification." Applied Sciences 11, no. 15 (July 22, 2021): 6752. http://dx.doi.org/10.3390/app11156752.

Full text
Abstract:
In recent years, source camera identification has become a research hotspot in the field of image forensics and has received increasing attention. It has high application value in combating the spread of pornographic photos, copyright authentication of art photos, image tampering forensics, and so on. Although existing algorithms have greatly advanced research on source camera identification, they still cannot effectively reduce the interference of image content with image forensics. To suppress the influence of image content on source camera identification, a multiscale content-independent feature fusion network (MCIFFN) is proposed to solve the source camera identification problem. MCIFFN is composed of three parallel branch networks. Before an image is sent to the first two branch networks, an adaptive filtering module is applied to filter out the image content and extract the noise features, and these noise features are then sent to the corresponding convolutional neural networks (CNNs). To retain the information related to image color, the third branch network applies no preprocessing and the image data are sent directly to its CNN. Finally, the content-independent features of different scales extracted from the three branch networks are fused, and the fused features are used for image source identification. The CNN feature extraction network in MCIFFN is a shallow network embedded with a squeeze-and-excitation (SE) structure, called SE-SCINet. The experimental results show that the proposed MCIFFN is effective and robust, and its classification accuracy is improved by approximately 2% compared with the SE-SCINet network.
APA, Harvard, Vancouver, ISO, and other styles
35

Shao, Xiaorui, Chang-Soo Kim, and Palash Sontakke. "Accurate Deep Model for Electricity Consumption Forecasting Using Multi-channel and Multi-Scale Feature Fusion CNN–LSTM." Energies 13, no. 8 (April 12, 2020): 1881. http://dx.doi.org/10.3390/en13081881.

Full text
Abstract:
Electricity consumption forecasting is a vital task for smart grid construction with respect to the supply and demand of electric power. Much research has focused on factors such as weather, holidays, and temperature for electricity forecasting, which requires collecting those data with various kinds of sensors and raises the cost in time and resources. In addition, most existing methods focus on only one or two types of forecast, which cannot satisfy the actual needs of decision-making. This paper proposes a novel hybrid deep model for multiple forecasts that combines a Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network without additional sensor data, and also considers the corresponding statistics. Unlike the conventional stacked CNN–LSTM, in the proposed hybrid model the CNN and LSTM extract features in parallel, which yields more robust features with less loss of the original information. Specifically, the CNN extracts multi-scale robust features using various filters at three levels and wide convolution, while the LSTM extracts features that account for the influence of different time steps. The features extracted by the CNN and LSTM are combined with six statistical components to form comprehensive features. These comprehensive features are therefore a fusion of multi-scale and multi-domain (time and statistical domain) information and are robust thanks to the wide convolution technique. We validate the effectiveness of the proposed method on three natural subsets associated with electricity consumption. The comparative study shows the state-of-the-art performance of the proposed hybrid deep model, with good robustness for very short-term, short-term, medium-term, and long-term electricity consumption forecasting.
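A hedged Keras sketch of the parallel CNN–LSTM feature fusion idea; the layer sizes, kernel widths, and input shapes are assumptions, not the authors' architecture.

```python
# Illustrative sketch: a CNN branch and an LSTM branch run in parallel on the load
# sequence; their features are concatenated with hand-crafted statistics before a
# dense forecasting head.
from tensorflow.keras import layers, Model

seq_len, n_stats = 96, 6                       # e.g. 96 past load samples, 6 statistics
seq_in = layers.Input(shape=(seq_len, 1), name="load_sequence")
stat_in = layers.Input(shape=(n_stats,), name="statistics")

# CNN branch: multi-scale temporal filters in parallel.
convs = [layers.GlobalMaxPooling1D()(
             layers.Conv1D(16, k, padding="same", activation="relu")(seq_in))
         for k in (3, 5, 7)]
cnn_feat = layers.Concatenate()(convs)

# LSTM branch: sequence features that account for time-step dependencies.
lstm_feat = layers.LSTM(32)(seq_in)

# Fuse CNN, LSTM and statistical features, then forecast the next value.
fused = layers.Concatenate()([cnn_feat, lstm_feat, stat_in])
hidden = layers.Dense(64, activation="relu")(fused)
out = layers.Dense(1, name="forecast")(hidden)

model = Model(inputs=[seq_in, stat_in], outputs=out)
model.compile(optimizer="adam", loss="mse")
model.summary()
```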
APA, Harvard, Vancouver, ISO, and other styles
36

Chengqi Zhang*, Ling Guan**, and Zheru Chi. "Introduction to the Special Issue on Learning in Intelligent Algorithms and Systems Design." Journal of Advanced Computational Intelligence and Intelligent Informatics 3, no. 6 (December 20, 1999): 439–40. http://dx.doi.org/10.20965/jaciii.1999.p0439.

Full text
Abstract:
Learning has long been and will continue to be a key issue in intelligent algorithms and systems design. Emulating the behavior and mechanisms of human learning by machines at such high levels as symbolic processing and such low levels as neuronal processing has long been a dominant interest among researchers worldwide. Neural networks, fuzzy logic, and evolutionary algorithms represent the three most active research areas. With advanced theoretical studies and computer technology, many promising algorithms and systems using these techniques have been designed and implemented for a wide range of applications. This Special Issue presents seven papers on learning in intelligent algorithms and systems design from researchers in Japan, China, Australia, and the U.S. <B>Neural Networks:</B> Emulating low-level human intelligent processing, or neuronal processing, gave birth of artificial neural networks more than five decades ago. It was hoped that devices based on biological neural networks would possess characteristics of the human brain. Neural networks have reattracted researchers' attention since the late 1980s when back-propagation algorithms were used to train multilayer feed-forward neural networks. In the last decades, we have seen promising progress in this research field yield many new models, learning algorithms, and real-world applications, evidenced by the publication of new journals in this field. <B>Fuzzy Logic:</B> Since L. A. Zadeh introduced fuzzy set theory in 1965, fuzzy logic has increasingly become the focus of many researchers and engineers opening up new research and problem solving. Fuzzy set theory has been favorably applied to control system design. In the last few years, fuzzy model applications have bloomed in image processing and pattern recognition. <B>Evolutionary Algorithms:</B> Evolutionary optimization algorithms have been studied over three decades, emulating natural evolutionary search and selection so powerful in global optimization. The study of evolutionary algorithms includes evolutionary programming (EP), evolutionary strategies (ESs), genetic algorithms (GAs), and genetic programming (GP). In the last few years, we have also seen multiple computational algorithms combined to maximize system performance, such as neurofuzzy networks, fuzzy neural networks, fuzzy logic and genetic optimization, neural networks, and evolutionary algorithms. This Special Issue also includes papers that introduce combined techniques. <B>Wang</B> et al present an improved fuzzy algorithm for enhanced eyeground images. Examination of the eyeground image is effective in diagnosing glaucoma and diabetes. Conventional eyeground image quality is usually too poor for doctors to obtain useful information, so enhancement is required to eliminate this. Due to details and uncertainties in eyeground images, conventional enhancement such as histogram equalization, edge enhancement, and high-pass filters fail to achieve good results. Fuzzy enhancement enhances images in three steps: (1) transferring an image from the spatial domain to the fuzzy domain; (2) conducting enhancement in the fuzzy domain; and (3) returning the image from the fuzzy domain to the spatial domain. The paper detailing this proposes improved mapping and fast implementation. <B>Mohammadian</B> presents a method for designing self-learning hierarchical fuzzy logic control systems based on the integration of evolutionary algorithms and fuzzy logic. 
The purpose of such an approach is to provide an integrated knowledge base for intelligent control and collision avoidance in a multirobot system. Evolutionary algorithms are used as in adaptation for learning fuzzy knowledge bases of control systems and learning, mapping, and interaction between fuzzy knowledge bases of different fuzzy logic systems. Fuzzy integral has been found useful in data fusion. <B>Pham and Wagner</B> present an approach based on the fuzzy integral and GAs to combine likelihood values of cohort speakers. The fuzzy integral nonlinearly fuses similarity measures of an utterance assigned to cohort speakers. In their approach, Gas find optimal fuzzy densities required for fuzzy fusion. Experiments using commercial speech corpus T146 show their approach achieves more favorable performance than conventional normalization. Evolution reflects the behavior of a society. <B>Puppala and Sen</B> present a coevolutionary approach to generating behavioral strategies for cooperating agent groups. Agent behavior evolves via GAs, where one genetic algorithm population is evolved per individual in the cooperative group. Groups are evaluated by pairing strategies from each population and best strategy pairs are stored together in shared memory. The approach is evaluated using asymmetric room painting and results demonstrate the superiority of shared memory over random pairing in consistently generating optimal behavior patterns. Object representation and template optimization are two main factors affecting object recognition performance. <B>Lu</B> et al present an evolutionary algorithm for optimizing handwritten numeral templates represented by rational B-spline surfaces of character foreground-background-distance distribution maps. Initial templates are extracted from training a feed-forward neural network instead of using arbitrarily chosen patterns to reduce iterations required in evolutionary optimization. To further reduce computational complexity, a fast search is used in selection. Using 1,000 optimized numeral templates, the classifier achieves a classification rate of 96.4% while rejecting 90.7% of nonnumeral patterns when tested on NIST Special Database 3. Determining an appropriate number of clusters is difficult yet important. <B>Li</B> et al based their approach based on rival penalized competitive learning (RPCL), addressing problems of overlapped clusters and dependent components of input vectors by incorporating full covariance matrices into the original RPCL algorithm. The resulting learning algorithm progressively eliminates units whose clusters contain only a small amount of training data. The algorithm is applied to determine the number of clusters in a Gaussian mixture distribution and to optimize the architecture of elliptical function networks for speaker verification and for vowel classification. Another important issue on learning is <B>Kurihara and Sugawara's</B> adaptive reinforcement learning algorithm integrating exploitation- and exploration-oriented learning. This algorithm is more robust in dynamically changing, large-scale environments, providing better performance than either exploitation- learning or exploration-oriented learning, making it is well suited for autonomous systems. In closing we would like to thank the authors who have submitted papers to this Special Issue and express our appreciation to the referees for their excellent work in reading papers under a tight schedule.
APA, Harvard, Vancouver, ISO, and other styles
37

Nguyen, Nhan Duc, Duong Trong Bui, Phuc Huu Truong, and Gu-Min Jeong. "Position-Based Feature Selection for Body Sensors regarding Daily Living Activity Recognition." Journal of Sensors 2018 (September 13, 2018): 1–13. http://dx.doi.org/10.1155/2018/9762098.

Full text
Abstract:
This paper proposes a novel approach to recognize activities based on sensor-placement feature selection. The method is designed to address the problem of fusing information from multiple wearable sensors located at different positions on the human body. Specifically, the approach extracts, for each body-sensor location, the feature set that best characterizes each activity in order to recognize daily living activities. We first preprocess the raw data with a low-pass filter. After extracting various features, feature selection algorithms are applied separately to the feature sets of each sensor to obtain the best feature set for each body position. We then investigate the correlation of the features in each set to optimize the feature set. Finally, a classifier is applied to the optimized feature set, which contains features from four body positions, to classify thirteen activities. In the experiments, we obtain an overall accuracy of 99.13% by applying the proposed method to the benchmark dataset. The results show that we can reduce the computation time of the feature selection step and achieve a high accuracy rate by performing feature selection for each sensor placement. In addition, the proposed method can be used in a multiple-sensor configuration to classify activities of daily living. The method is also expected to be deployed in an activity classification system based on a big data platform, since each sensor node sends only the essential information characterizing itself to a cloud server.
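An illustrative Python sketch, with synthetic data, of per-position feature selection followed by a single classifier; the filter cutoff, feature list, sensor placements, and RandomForest classifier are assumptions rather than the paper's exact choices.

```python
# Illustrative sketch: low-pass filter raw signals, extract simple features, select
# the best features per body position, then train one classifier on the fused set.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier

def low_pass(signal, fs=50.0, cutoff=5.0, order=4):
    b, a = butter(order, cutoff / (0.5 * fs), btype="low")
    return filtfilt(b, a, signal, axis=-1)

def extract_features(window):
    # A few time-domain features per window (mean, std, min, max, energy).
    return np.array([window.mean(), window.std(), window.min(), window.max(),
                     (window ** 2).mean()])

rng = np.random.default_rng(0)
positions = ["wrist", "waist", "ankle", "chest"]          # assumed placements
y = rng.integers(0, 13, size=200)                         # 13 activities, 200 windows

selected = []
for pos in positions:
    raw = rng.standard_normal((200, 128))                 # 200 windows of 128 samples
    feats = np.vstack([extract_features(low_pass(w)) for w in raw])
    # Keep the most discriminative features for this body position.
    selected.append(SelectKBest(f_classif, k=3).fit_transform(feats, y))

X = np.hstack(selected)                                   # fused feature set
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("train accuracy:", clf.score(X, y))
```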
APA, Harvard, Vancouver, ISO, and other styles
38

Polakova, Katerina Machova, Vojtech Kulvait, Jana Linhartova, Adela Brouckova, Simona Soverini, Monika Jaruskova, Caterina de Benedittis, et al. "Algorithms and Processing Pipeline For Error Correction and Detection Of Significant Mutations In The Kinase Domain Of BCR-ABL Analyzed By Next-Generation Sequencing: Implications For Clinical Practice Of Chronic Myeloid Leukemia." Blood 122, no. 21 (November 15, 2013): 3987. http://dx.doi.org/10.1182/blood.v122.21.3987.3987.

Full text
Abstract:
The high sequencing depth of NGS may cause false-positive variant calls of minor subclones (up to 10%). Errors inserted into the NGS pipeline during sample preparation and sequencing manifest as erroneous detections of mutations, including point mutations. Thus, there is a need for algorithms that separate data arising from errors from relevant biological information. We developed algorithms and a processing pipeline for error correction and detection of significant mutations after NGS of the BCR-ABL kinase domain (KD). We validated our algorithms on the retrospective NGS analysis of 135 samples from 15 CML patients in chronic phase (median 8 samples per patient; range 5-19) who developed resistant mutations (confirmed by Sanger Sequencing, SS) during 2-4 lines of therapy. Amplicon libraries were prepared using reverse transcription and two-step PCR. The second PCR was performed partly using fusion primers designed within the IRON-II study research consortium (Roche Applied Science), which tested 4 overlapping amplicons, and partly using an alternative in-house set of fusion primers that we developed upfront, which utilized 3 overlapping amplicons covering the KD coding region. The key concept of our error control algorithm was to apply statistics used for bacterial mutation rate prediction, the Lea-Coulson probability distribution (Lea and Coulson, J of Genet 1949), to distinguish sequencing pipeline errors from biologically relevant mutations. We postulated that spontaneous mutations in bacteria are phenomena similar to enzyme errors in vitro: in both processes there are new generations of bacteria or transcripts in which mutations or errors replicate exponentially. The error rate distributions, based on analysis of the c-ABL kinase domain of healthy donors (n=24), were fitted to the Lea-Coulson distribution. From this analysis we derived, for each type of single-nucleotide substitution, estimated thresholds based on which a particular mutation may be called significant by a self-developed statistical test. We cross-checked our results with the results of the standard Roche pipeline, including the GS Amplicon Variant Analyzer. Table 1 summarizes the estimated thresholds to be applied for transitions and transversions. A higher frequency of errors was found when using the 3-amplicon assay in comparison to the 4-amplicon assay. The PCR products in the 3-amplicon assay are 71 bp longer on average than in the 4-amplicon assay, so the error frequency distribution may depend on the length of the sequence amplified. Using our algorithm we processed the NGS data and reported significant mutations. Overall, no significant mutation that caused resistance during treatment was detected at the time of diagnosis. During first-line imatinib treatment, 10 resistant mutations in 9 patients were detected as significant 2-5 months earlier than by SS. At the time of therapy switchover, the algorithm already detected minor populations of one of the significant mutations F317L, T315I, and M351T in 3 patients, while SS did not; these mutations manifested after the therapy switchover and caused treatment failure. After the therapy switch, baseline mutations were still significantly detectable by our algorithm in the NGS data, but not by SS, in 7 patients who had achieved PCgR and MMR at the time of the analysis. In 5 patients who subsequently failed therapy after switchover, resistant mutations were significantly detected by our algorithm in the NGS data 2-9 months earlier than by SS.
New minor mutations were revealed by NGS after the therapy switch in 8 patients. [Table 1 of the original abstract reports the estimated error thresholds (%) for the transitions A/G, G/A, T/C, and C/T and for the transversion groups A/C+C/A, T/A+A/T, T/G+G/T, and C/G+G/C, together with the associated P values, for both the 3-amplicon and the 4-amplicon assay.] Since enzymes create errors during reverse transcription, the two-step PCR, and the sequencing process, error correction is an essential part of the bioinformatics pipeline for relevant interpretation of BCR-ABL KD mutations detected with the highly sensitive NGS assay. Our validated algorithm and processing pipeline for evaluating significant mutations from NGS data is helpful for future clinical practice, as it filters errors and allows reporting only significant mutations. This avoids false-positive results and misleading interpretations that may negatively influence the treatment management of CML patients. Supported by IGANT11555 and NT13899. Disclosures: Machova Polakova: Bristol-Myers Squibb: Honoraria, Research Funding; Novartis: Honoraria, Research Funding. Soverini: Novartis: Consultancy; Bristol-Myers Squibb: Consultancy; ARIAD: Consultancy. Haferlach: MLL Munich Leukemia Laboratory: Employment, Equity Ownership. Martinelli: NOVARTIS, BMS (Consultancy and speaker bureau), PFIZER, ARIAD (Consultancy): Consultancy, Speakers Bureau. Kohlmann: MLL Munich Leukemia Laboratory: Employment; Roche Diagnostics: Honoraria. Klamova: Novartis: Honoraria, Research Funding; Bristol-Myers Squibb: Honoraria, Research Funding.
APA, Harvard, Vancouver, ISO, and other styles
39

Qi, Ji Yuan, and Xiao Jun Hu. "Track Handing Fusion in Data Fusion and Database." Advanced Materials Research 468-471 (February 2012): 959–62. http://dx.doi.org/10.4028/www.scientific.net/amr.468-471.959.

Full text
Abstract:
Track messages are obtained from many different sensors, and the reliability of a given sensor may not be the same under different conditions. A reliability judgment matrix for each target is constructed based on the relative reliability of the information offered by the sensors; from it, the reliabilities of all sensors for each target and their overall reliability can be obtained. A numerical example is presented. In track handling based on the data fusion technique, the information from the most reliable sensors is chosen and merged with a Kalman filter algorithm to obtain the best estimated track. To meet the data requirements of the fusion process, a real-time database is established using object-oriented technology to satisfy the real-time demands of the fusion algorithm, and a history database is built on a relational database. The data in the data fusion system are managed through these two databases, allowing the system to work efficiently.
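A small Python sketch, under assumptions, of the idea of picking the most reliable sensor and fusing its measurement with a Kalman update; the reliability weights and noise variances are invented for illustration and are not the paper's values.

```python
# Illustrative sketch: choose the sensor with the highest reliability weight and
# fuse its measurement into a simple position-velocity Kalman filter.
import numpy as np

def kf_update(x, P, z, H, R):
    """Standard Kalman measurement update."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# State [position, velocity]; measurements are positions from three sensors.
x = np.array([0.0, 1.0])
P = np.eye(2)
H = np.array([[1.0, 0.0]])
measurements = {"radar": 1.02, "esm": 1.30, "ir": 0.95}
reliability = {"radar": 0.8, "esm": 0.3, "ir": 0.6}       # assumed weights
noise = {"radar": 0.04, "esm": 0.25, "ir": 0.09}          # measurement variances

best = max(reliability, key=reliability.get)              # most reliable sensor
z = np.array([measurements[best]])
R = np.array([[noise[best]]])
x, P = kf_update(x, P, z, H, R)
print(best, x)
```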
APA, Harvard, Vancouver, ISO, and other styles
40

Sun, Shasha, Chuanpeng Li, Ning Lv, Xiaoman Zhang, Zhaoyan Yu, and Haibo Wang. "Attention based convolutional network for automatic sleep stage classification." Biomedical Engineering / Biomedizinische Technik 66, no. 4 (February 5, 2021): 335–43. http://dx.doi.org/10.1515/bmt-2020-0051.

Full text
Abstract:
Sleep staging is an important basis for diagnosing sleep-related problems. In this paper, an attention-based convolutional network for automatic sleep staging is proposed. The network takes time-frequency images as input and predicts the sleep stage for each 30-s epoch as output. For each CNN feature map, the model generates attention maps along two separate dimensions, time and filter, which are then multiplied to form the final attention map. A residual-like fusion structure is used to append the attention map to the input feature map for adaptive feature refinement. In addition, generalized mean pooling is introduced to obtain a global feature representation with less information loss. To prove the efficacy of the proposed method, it is compared with two baseline methods on the Sleep-EDF data set under different framework settings and input channel types; the experimental results show that the proposed model achieves significant improvements in overall accuracy, Cohen's kappa, MF1, sensitivity, and specificity. Compared with state-of-the-art algorithms, the proposed network attains an overall accuracy of 83.4%, a macro F1-score of 77.3%, κ = 0.77, a sensitivity of 77.1%, and a specificity of 95.4%. The experimental results demonstrate the superiority of the proposed network.
APA, Harvard, Vancouver, ISO, and other styles
41

Tan, Xiu Hu. "An Information Fusion Algorithm Based on Kalman Filtering." Applied Mechanics and Materials 444-445 (October 2013): 1072–76. http://dx.doi.org/10.4028/www.scientific.net/amm.444-445.1072.

Full text
Abstract:
For multisensor systems with unknown noise variances, statistical methods require the mathematical model and the noise statistics to be known; this limitation is addressed here by an adaptive algorithm. An adaptive Kalman filter is proposed to solve the information fusion filtering problem for systems with an unknown mathematical model or unknown noise statistics. Based on the probability method and the scalar-weighted optimal information fusion criterion in the minimum variance sense, the algorithm not only optimizes the multi-channel data but also attains the minimum mean square error (MMSE) by introducing a fusion equation; that is, the algorithm is optimal in the MMSE sense, with a smaller error than the original Kalman information fusion algorithm. The test results show that the algorithm can perform information fusion effectively in a distributed acquisition system.
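A minimal Python sketch of textbook scalar-weighted (inverse-variance) fusion, which underlies the MMSE criterion mentioned above; it assumes independent, unbiased local estimates and is not the paper's full adaptive filter.

```python
# Illustrative sketch: the fused estimate weights each channel by the inverse of
# its error variance, the minimum-variance combination for this simple case.
import numpy as np

def scalar_weighted_fusion(estimates, variances):
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = (1.0 / variances) / np.sum(1.0 / variances)   # a_i proportional to 1/var_i
    fused = np.sum(weights * estimates)
    fused_var = 1.0 / np.sum(1.0 / variances)                # never larger than min(var_i)
    return fused, fused_var, weights

# Three local Kalman filter outputs for the same quantity:
fused, var, w = scalar_weighted_fusion([10.2, 9.8, 10.5], [0.04, 0.09, 0.25])
print(fused, var, w)
```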
APA, Harvard, Vancouver, ISO, and other styles
42

Oh, Eunmi. "A method for track fusing using data association in naval combat system." Ciencia y tecnología de buques 9, no. 18 (January 29, 2016): 83. http://dx.doi.org/10.25043/19098642.131.

Full text
Abstract:
On today's battlefield, the multiple sensors installed on a naval ship acquire a large amount of information, which is used through the naval combat system to react to threats more quickly and precisely. To act on a threat, we must decide whether the targets reported by different sensors are the same object and execute track fusion according to the result of that judgment. In this paper, we propose a track fusion method that uses the tracks' varied information. The target state is predicted and estimated from dynamic information using a data association filter, producing a validation region in which the track is assumed to exist; this allows the criterion to adapt to the current status of the track. Second, a track is selected among the existing tracks in the validation region by assigning weights, where the weights take into account track attributes such as identification and category. Through this fusion algorithm, track fusion can be executed more precisely.
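A simplified Python sketch of chi-square-gated nearest-neighbour association, one common way to realize the validation region described above; the gate threshold and covariances are illustrative assumptions, not the paper's settings.

```python
# Illustrative sketch: accept only candidate tracks whose normalized innovation
# falls inside a chi-square validation gate, then pick the closest one.
import numpy as np

def gated_association(predicted, S, candidates, gate=9.21):  # ~99% gate, 2 DOF
    """Return the index of the best candidate inside the validation gate, or None."""
    S_inv = np.linalg.inv(S)
    best_idx, best_d2 = None, np.inf
    for i, z in enumerate(candidates):
        v = z - predicted                     # innovation
        d2 = float(v @ S_inv @ v)             # squared Mahalanobis distance
        if d2 < gate and d2 < best_d2:
            best_idx, best_d2 = i, d2
    return best_idx

predicted = np.array([100.0, 50.0])           # predicted track position
S = np.diag([4.0, 4.0])                       # innovation covariance
candidates = [np.array([101.0, 49.0]), np.array([130.0, 70.0])]
print(gated_association(predicted, S, candidates))   # -> 0
```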
APA, Harvard, Vancouver, ISO, and other styles
43

Atasever, U. H., P. Civicioglu, E. Besdok, and C. Ozkan. "A New Unsupervised Change Detection Approach Based On DWT Image Fusion And Backtracking Search Optimization Algorithm For Optical Remote Sensing Data." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-7 (September 19, 2014): 15–18. http://dx.doi.org/10.5194/isprsarchives-xl-7-15-2014.

Full text
Abstract:
Change detection is one of the most important subjects in the remote sensing discipline. In this paper, a new unsupervised change detection approach is proposed for multi-temporal remotely sensed optical imagery. The approach does not require any prior information about changed and unchanged pixels and is based on Discrete Wavelet Transform (DWT) image fusion and the Backtracking Search Optimization Algorithm (BSA). In the first step, an absolute-valued difference image and an absolute-valued log-ratio image are calculated from co-registered and radiometrically corrected multi-temporal images. These difference images are then fused using the DWT. The fused image is filtered with a median filter to preserve edge information and with a Wiener filter for smoothing, after which a min-max normalization is applied. Unlike classical methods using CVA, PCA, FCM, or K-means techniques, the normalized data are clustered into two groups, changed and unchanged pixels, with the BSA by minimizing an objective function. To show the effectiveness of the proposed approach, two remote sensing data sets, Sardinia and Mexico, are used. False Alarm, Missed Alarm, Total Alarm, and Total Error Rate are selected as performance criteria, evaluated against ground truth images. Experimental results show that the proposed approach is effective for unsupervised change detection in optical remote sensing data.
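A rough Python sketch of the DWT-fusion step using PyWavelets; k-means stands in for the BSA clustering and the Wiener filtering step is omitted, so this is only an outline of the pipeline under stated assumptions, not the authors' method.

```python
# Illustrative sketch: fuse the absolute-difference and absolute-log-ratio images
# in the wavelet domain, filter and normalize, then split pixels into two clusters.
import numpy as np
import pywt
from scipy.ndimage import median_filter
from sklearn.cluster import KMeans

def fuse_change_images(img_t1, img_t2):
    eps = 1e-6
    diff = np.abs(img_t1 - img_t2)
    log_ratio = np.abs(np.log((img_t1 + eps) / (img_t2 + eps)))

    # DWT fusion: average approximation bands, keep the larger detail coefficients.
    cA1, (cH1, cV1, cD1) = pywt.dwt2(diff, "db2")
    cA2, (cH2, cV2, cD2) = pywt.dwt2(log_ratio, "db2")
    fused = pywt.idwt2(((cA1 + cA2) / 2.0,
                        (np.maximum(cH1, cH2),
                         np.maximum(cV1, cV2),
                         np.maximum(cD1, cD2))), "db2")

    fused = median_filter(fused, size=3)                        # edge-preserving smoothing
    fused = (fused - fused.min()) / (fused.max() - fused.min() + eps)
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(fused.reshape(-1, 1))
    return labels.reshape(fused.shape)                           # 0/1 change map

rng = np.random.default_rng(1)
t1 = rng.random((64, 64))
t2 = t1.copy()
t2[20:40, 20:40] += 0.8                                          # simulated change
print(fuse_change_images(t1, t2).sum())
```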
APA, Harvard, Vancouver, ISO, and other styles
44

Liang, Zhuoqian, Ding Pan, and Yuan Deng. "Research on the Knowledge Association Reasoning of Financial Reports Based on a Graph Network." Sustainability 12, no. 7 (April 1, 2020): 2795. http://dx.doi.org/10.3390/su12072795.

Full text
Abstract:
With increasingly strict supervision, the complexity of enterprises’ annual reports has increased significantly, and the size of the text corpus has grown at an enormous rate. Information fusion for financial reporting has become a research hotspot. The difficulty of this problem is in filtering the massive amount of heterogeneous data and integrating related information distributed in different locations according to decision topics. This paper proposes a Graph NetWork (GNW) model that establishes the overall connection between decentralized information, as well as a graph network generation algorithm to filter large and complex data sets in financial reports and to mine key information to make it suitable for different decision situations. Finally, this paper uses the Planar Maximally Filtered Graph (PMFG) as a benchmark to show the effect of the generation algorithm.
APA, Harvard, Vancouver, ISO, and other styles
45

Zeng, Qingxi, Dehui Liu, and Chade Lv. "UWB/Binocular VO Fusion Algorithm Based on Adaptive Kalman Filter." Sensors 19, no. 18 (September 19, 2019): 4044. http://dx.doi.org/10.3390/s19184044.

Full text
Abstract:
Among the existing wireless indoor positioning systems, UWB (ultra-wideband) is one of the most promising solutions. However, a single UWB positioning system is affected by factors such as non-line-of-sight and multipath conditions, and its navigation accuracy decreases. To make up for the shortcomings of a single UWB positioning system, this paper proposes a scheme based on the fusion of binocular VO (visual odometry) and UWB sensors. The original UWB distance measurements and the position information from the binocular VO are merged by an adaptive Kalman filter, and the structural design of the fusion system and the realization of the fusion algorithm are elaborated. The experimental results show that, compared with a single positioning system, the proposed data fusion method can significantly improve the positioning accuracy.
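A simplified one-dimensional Python sketch, under assumptions, of an innovation-adaptive Kalman filter that predicts with VO displacements and corrects with UWB positions; the noise settings and adaptation window are illustrative, not the paper's design.

```python
# Illustrative sketch: the UWB measurement variance is adapted from a window of
# recent innovations so that degraded (e.g. NLOS) ranging is weighted down.
from collections import deque

class AdaptiveKF1D:
    def __init__(self, x0=0.0, p0=1.0, q=0.01, r0=0.05, window=20):
        self.x, self.p, self.q, self.r = x0, p0, q, r0
        self.innovations = deque(maxlen=window)

    def predict(self, vo_displacement):
        self.x += vo_displacement          # VO provides the motion increment
        self.p += self.q

    def update(self, uwb_position):
        v = uwb_position - self.x          # innovation
        self.innovations.append(v)
        if len(self.innovations) == self.innovations.maxlen:
            # Innovation-based adaptation: estimated R = E[v^2] - predicted P.
            c = sum(i * i for i in self.innovations) / len(self.innovations)
            self.r = max(c - self.p, 1e-4)
        k = self.p / (self.p + self.r)
        self.x += k * v
        self.p *= (1.0 - k)

kf = AdaptiveKF1D()
for vo_step, uwb in [(0.10, 0.12), (0.10, 0.19), (0.10, 0.33), (0.10, 0.41)]:
    kf.predict(vo_step)
    kf.update(uwb)
print(kf.x, kf.r)
```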
APA, Harvard, Vancouver, ISO, and other styles
46

Efatmaneshnik, Mahmoud, Allison Kealy, Asghar Tabatabei Balaei, and Andrew G. Dempster. "Information Fusion for Localization Within Vehicular Networks." Journal of Navigation 64, no. 3 (June 7, 2011): 401–16. http://dx.doi.org/10.1017/s0373463311000075.

Full text
Abstract:
Cooperative positioning (CP) is a localization technique originally developed for use across wireless sensor networks. With the emergence of Dedicated Short Range Communications (DSRC) infrastructure for use in Intelligent Transportation Systems (ITS), CP techniques can now be adapted for use in location determination across vehicular networks. In vehicular networks, the technique of CP fuses GPS positions with additional sensed information such as inter-vehicle distances between the moving vehicles to determine their location within a neighbourhood. This paper presents the results obtained from a research study undertaken to demonstrate the capabilities of DSRC for meeting the positioning accuracies of road safety applications. The results show that a CP algorithm that fully integrates both measured/sensed data as well as navigation information such as map data can meet the positioning requirements of safety related applications of DSRC (<0·5 m). This paper presents the results of a Cramer Rao Lower Bound analysis which is used to benchmark the performance of the CP algorithm developed. The Kalman Filter (KF) models used in the CP algorithm are detailed and results obtained from integrating GPS positions, inter-vehicular ranges and information derived from in-vehicle maps are then discussed along with typical results as determined through a variety of network simulation studies.
APA, Harvard, Vancouver, ISO, and other styles
47

Xue, Ying Hua, and Jing Li. "Distributed Information Fusion Structure Based on Data Fusion Tree." Advanced Materials Research 225-226 (April 2011): 488–91. http://dx.doi.org/10.4028/www.scientific.net/amr.225-226.488.

Full text
Abstract:
A distributed information fusion structure based on a data fusion tree is built to achieve precise localization and efficient navigation for a mobile robot. Multi-class, multi-level information from the robot and its environment is fused with different algorithms at different levels, giving the robot a deeper understanding of the whole environment. Experiments demonstrate that the proposed model greatly improves the positioning precision of the robot, and that its search efficiency and success rate are also better than those of the traditional mode.
APA, Harvard, Vancouver, ISO, and other styles
48

Wang, Xian Wei, and Fu Cheng Cao. "Research on Data Fusion Technology of Body Posture Detection Based on Kalman Filter." Applied Mechanics and Materials 668-669 (October 2014): 1003–6. http://dx.doi.org/10.4028/www.scientific.net/amm.668-669.1003.

Full text
Abstract:
This paper discusses the body posture detection problem using low-cost Micro-Electro-Mechanical System (MEMS) inertial sensors and proposes a complementary sensor fusion solution. Considering the impact of noise and bias drift, a Kalman filter is used to perform the multi-sensor information fusion and achieve accurate attitude determination. The experimental results show that fusing the accelerometer and gyroscope signals with the Kalman filtering algorithm effectively eliminates the accumulated error and significantly improves the dynamic characteristics of the attitude angle measurement, improving the reliability and accuracy of body posture estimation.
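A Python sketch of a standard two-state (angle, gyro-bias) tilt Kalman filter of the kind the abstract describes; the tuning values and sample stream are assumptions, not the paper's.

```python
# Illustrative sketch: the gyro rate drives the prediction and the accelerometer-
# derived angle is the measurement (H = [1, 0]), so the filter removes gyro drift.
import math

class TiltKF:
    def __init__(self, q_angle=0.001, q_bias=0.003, r_meas=0.03):
        self.angle, self.bias = 0.0, 0.0
        self.P = [[0.0, 0.0], [0.0, 0.0]]
        self.q_angle, self.q_bias, self.r = q_angle, q_bias, r_meas

    def step(self, gyro_rate, accel_angle, dt):
        # Predict: integrate the bias-corrected gyro rate and grow the covariance.
        self.angle += dt * (gyro_rate - self.bias)
        P = self.P
        P[0][0] += dt * (dt * P[1][1] - P[0][1] - P[1][0] + self.q_angle)
        P[0][1] -= dt * P[1][1]
        P[1][0] -= dt * P[1][1]
        P[1][1] += self.q_bias * dt
        # Update with the accelerometer angle.
        y = accel_angle - self.angle
        s = P[0][0] + self.r
        k0, k1 = P[0][0] / s, P[1][0] / s
        self.angle += k0 * y
        self.bias += k1 * y
        p00, p01 = P[0][0], P[0][1]
        P[0][0] -= k0 * p00; P[0][1] -= k0 * p01
        P[1][0] -= k1 * p00; P[1][1] -= k1 * p01
        return self.angle

kf = TiltKF()
# Hypothetical samples: biased gyro rate with the accelerometer reporting 10 degrees.
for _ in range(100):
    angle = kf.step(gyro_rate=0.02, accel_angle=math.radians(10.0), dt=0.01)
print(math.degrees(angle))
```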
APA, Harvard, Vancouver, ISO, and other styles
49

Mao, Yi, Yi Yang, and Yuxin Hu. "Research into a Multi-Variate Surveillance Data Fusion Processing Algorithm." Sensors 19, no. 22 (November 15, 2019): 4975. http://dx.doi.org/10.3390/s19224975.

Full text
Abstract:
The surveillance information sources for civil ATM (Air Traffic Management) systems include radar and ADS (Automatic Dependent Surveillance), while the new satellite-based navigation system provides global coverage. To solve the surveillance problem in mid-and-high-altitude airspace and approach airspace, this paper proposes a weighted integration algorithm combining a filter-based covariance matrix weighting method, a measurement variance weighting method, and a measurement-first weighted fusion method to improve the efficiency of the data integration calculation at a fixed accuracy. In addition, the paper focuses on the integration of the multi-radar surveillance system and the automatic dependent surveillance system within the ATM system, analyzes how a multi-generation surveillance data integration system can be constructed, establishes models of the sensors and the target track, and designs the logical structure of multi-radar and ADS data integration.
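A minimal Python sketch of textbook inverse-covariance (covariance-weighted) fusion of radar and ADS position estimates, standing in for the weighting schemes listed above; the covariance values are illustrative assumptions.

```python
# Illustrative sketch: two independent estimates of the same position are combined
# with weights given by their inverse covariances.
import numpy as np

def covariance_weighted_fusion(x_radar, P_radar, x_ads, P_ads):
    """Fuse two independent estimates of the same state."""
    P_radar_inv = np.linalg.inv(P_radar)
    P_ads_inv = np.linalg.inv(P_ads)
    P_fused = np.linalg.inv(P_radar_inv + P_ads_inv)
    x_fused = P_fused @ (P_radar_inv @ x_radar + P_ads_inv @ x_ads)
    return x_fused, P_fused

x_radar = np.array([1000.0, 2000.0])           # radar position estimate (m)
P_radar = np.diag([400.0, 400.0])              # radar covariance
x_ads = np.array([1010.0, 1995.0])             # ADS-reported position (m)
P_ads = np.diag([100.0, 100.0])                # ADS covariance (more accurate)
x, P = covariance_weighted_fusion(x_radar, P_radar, x_ads, P_ads)
print(x, np.diag(P))
```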
APA, Harvard, Vancouver, ISO, and other styles
50

Romanovas, Michailas, Lasse Klingbeil, Martin Traechtler, and Yiannos Manoli. "APPLICATION OF FRACTIONAL SENSOR FUSION ALGORITHMS FOR INERTIAL MEMS SENSING." Mathematical Modelling and Analysis 14, no. 2 (June 30, 2009): 199–209. http://dx.doi.org/10.3846/1392-6292.2009.14.199-209.

Full text
Abstract:
The work presents an extension of the conventional Kalman filtering concept for systems of fractional order (FOS). Modifications are introduced using the Grünwald‐Letnikov (GL) definition of the fractional derivative (FD) and corresponding truncation of the history length. Two versions of the fractional Kalman filter (FKF) are shown, where the FD is calculated directly or by augmenting the state vector with the estimate of the FD. The filters are compared to conventional integer order (IO) Position (P‐KF) and Position‐Velocity (PV‐KF) Kalman filters as well as to an adaptive Interacting Multiple‐Model Kalman Filter (IMM‐KF). The performance of the filters is assessed based on a hand and a head motion data set. The feasibility of the given approach is shown.
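A short Python sketch of the truncated Grünwald–Letnikov approximation itself (not the authors' fractional Kalman filter built on it); the step size, memory length, and test function are assumptions, with the result checked against the known half-order derivative of f(t) = t.

```python
# Illustrative sketch: the GL fractional derivative of order alpha is approximated
# from a finite history of samples with step h, truncating the memory length.
import numpy as np

def gl_weights(alpha, length):
    """Coefficients w_j = (-1)^j * C(alpha, j), computed by recurrence."""
    w = np.empty(length)
    w[0] = 1.0
    for j in range(1, length):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    return w

def gl_fractional_derivative(samples, alpha, h, memory=64):
    """GL derivative at the latest sample, truncating the history to `memory` terms."""
    samples = np.asarray(samples, dtype=float)
    n = min(memory, len(samples))
    w = gl_weights(alpha, n)
    history = samples[-1: -n - 1: -1]            # newest sample first
    return np.dot(w, history) / h ** alpha

# Half-order derivative of f(t) = t on a uniform grid (analytically 2*sqrt(t/pi)).
h = 0.01
t = np.arange(0.0, 2.0 + h, h)
approx = gl_fractional_derivative(t, alpha=0.5, h=h, memory=len(t))
print(approx, 2.0 * np.sqrt(2.0 / np.pi))
```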
APA, Harvard, Vancouver, ISO, and other styles