
Journal articles on the topic "Depth of field fusion"


Below are the top 50 journal articles for research on the topic "Depth of field fusion".


1

Wang, Shuzhen, Haili Zhao, and Wenbo Jing. "Fast all-focus image reconstruction method based on light field imaging". ITM Web of Conferences 45 (2022): 01030. http://dx.doi.org/10.1051/itmconf/20224501030.

Abstract:
To achieve high-quality imaging of all focal planes with large depth-of-field information, a fast all-focus image reconstruction technique based on light field imaging is proposed: light field imaging collects the field-of-view information, light field reconstruction yields a multi-focus image source set, and an improved NSML image fusion method fuses these images to quickly obtain an all-focus image with a large depth of field. Experiments prove that this method greatly reduces the time consumed in the image fusion process by simplifying the NSML calculation, and improves the efficiency of image fusion. The method not only achieves excellent fusion image quality but also improves the real-time performance of the algorithm.
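The abstract does not spell out the improved NSML, so as a hedged illustration of the underlying focus-measure idea, the sketch below fuses two grayscale refocused images with a plain sum-modified-Laplacian (SML) measure; the window size and the hard pick-the-sharper-pixel rule are assumptions, not the paper's method.

```python
# Minimal SML-driven multi-focus fusion sketch (illustrative, not the
# paper's improved NSML). Inputs are two registered grayscale images.
import numpy as np
from scipy.ndimage import uniform_filter

def modified_laplacian(img):
    # |2I - I_left - I_right| + |2I - I_up - I_down|, edge-padded.
    p = np.pad(img.astype(np.float64), 1, mode="edge")
    c = p[1:-1, 1:-1]
    return (np.abs(2 * c - p[1:-1, :-2] - p[1:-1, 2:]) +
            np.abs(2 * c - p[:-2, 1:-1] - p[2:, 1:-1]))

def sml_fuse(img_a, img_b, win=5):
    # Sum the modified Laplacian over a local window, then take each
    # pixel from whichever source is locally sharper.
    sml_a = uniform_filter(modified_laplacian(img_a), win)
    sml_b = uniform_filter(modified_laplacian(img_b), win)
    return np.where(sml_a >= sml_b, img_a, img_b)
```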
2

Chen, Jiaxin, Shuo Zhang, and Youfang Lin. "Attention-based Multi-Level Fusion Network for Light Field Depth Estimation". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (May 18, 2021): 1009–17. http://dx.doi.org/10.1609/aaai.v35i2.16185.

Abstract:
Depth estimation from Light Field (LF) images is a crucial basis for LF related applications. Since multiple views with abundant information are available, how to effectively fuse features of these views is a key point for accurate LF depth estimation. In this paper, we propose a novel attention-based multi-level fusion network. Combined with the four-branch structure, we design an intra-branch fusion strategy and an inter-branch fusion strategy to hierarchically fuse effective features from different views. By introducing the attention mechanism, features of views with fewer occlusions and richer textures are selected inside and between these branches to provide more effective information for depth estimation. The depth maps are finally estimated after further aggregation. Experimental results show that the proposed method achieves state-of-the-art performance in both quantitative and qualitative evaluation, and it also ranks first in the commonly used HCI 4D Light Field Benchmark.
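As a rough sketch of the attention idea (not the paper's four-branch network), the module below scores each view's feature map with a 1×1 convolution and softmax-normalizes the scores across views before a weighted sum; all shapes and layer sizes are assumed.

```python
# Hedged sketch: attention-weighted fusion of per-view feature maps.
import torch
import torch.nn as nn

class ViewAttentionFusion(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)  # per-pixel view score

    def forward(self, feats):                    # feats: (B, V, C, H, W)
        b, v, c, h, w = feats.shape
        scores = self.score(feats.flatten(0, 1)).view(b, v, 1, h, w)
        weights = torch.softmax(scores, dim=1)   # normalize across the V views
        return (weights * feats).sum(dim=1)      # fused map: (B, C, H, W)

# Toy usage: 9 views, 64-channel features.
fused = ViewAttentionFusion(64)(torch.randn(2, 9, 64, 32, 32))
```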
3

Piao, Yongri, Miao Zhang, Xiaohui Wang, and Peihua Li. "Extended depth of field integral imaging using multi-focus fusion". Optics Communications 411 (March 2018): 8–14. http://dx.doi.org/10.1016/j.optcom.2017.10.081.

4

De, Ishita, Bhabatosh Chanda, and Buddhajyoti Chattopadhyay. "Enhancing effective depth-of-field by image fusion using mathematical morphology". Image and Vision Computing 24, no. 12 (December 2006): 1278–87. http://dx.doi.org/10.1016/j.imavis.2006.04.005.

5

Pu, Can, Runzi Song, Radim Tylecek, Nanbo Li, and Robert Fisher. "SDF-MAN: Semi-Supervised Disparity Fusion with Multi-Scale Adversarial Networks". Remote Sensing 11, no. 5 (February 27, 2019): 487. http://dx.doi.org/10.3390/rs11050487.

Abstract:
Refining raw disparity maps from different algorithms to exploit their complementary advantages is still challenging. Uncertainty estimation and complex disparity relationships among pixels limit the accuracy and robustness of existing methods and there is no standard method for fusion of different kinds of depth data. In this paper, we introduce a new method to fuse disparity maps from different sources, while incorporating supplementary information (intensity, gradient, etc.) into a refiner network to better refine raw disparity inputs. A discriminator network classifies disparities at different receptive fields and scales. Assuming a Markov Random Field for the refined disparity map produces better estimates of the true disparity distribution. Both fully supervised and semi-supervised versions of the algorithm are proposed. The approach includes a more robust loss function to inpaint invalid disparity values and requires much less labeled data to train in the semi-supervised learning mode. The algorithm can be generalized to fuse depths from different kinds of depth sources. Experiments explored different fusion opportunities: stereo-monocular fusion, stereo-ToF fusion and stereo-stereo fusion. The experiments show the superiority of the proposed algorithm compared with the most recent algorithms on public synthetic datasets (Scene Flow, SYNTH3, our synthetic garden dataset) and real datasets (Kitti2015 dataset and Trimbot2020 Garden dataset).
6

Bouzos, Odysseas, Ioannis Andreadis, and Nikolaos Mitianoudis. "Conditional Random Field-Guided Multi-Focus Image Fusion". Journal of Imaging 8, no. 9 (September 5, 2022): 240. http://dx.doi.org/10.3390/jimaging8090240.

Abstract:
Multi-focus image fusion is of great importance in order to cope with the limited depth of field of optical lenses. Since input images contain noise, multi-focus image fusion methods that support denoising are important. Transform-domain methods have been applied to image fusion; however, they are likely to produce artifacts. In order to cope with these issues, we introduce the CRF-Guided fusion method, based on Conditional Random Fields (CRFs). A novel Edge Aware Centering method is proposed and employed to extract the low and high frequencies of the input images. The Independent Component Analysis (ICA) transform is applied to the high-frequency components, and a CRF model is created from the low frequency and the transform coefficients. The CRF model is solved efficiently with the α-expansion method. The estimated labels are used to guide the fusion of the low-frequency components and the transform coefficients. Inverse ICA is then applied to the fused transform coefficients. Finally, the fused image is the sum of the fused low frequency and the fused high frequency. CRF-Guided fusion does not introduce artifacts during fusion and supports image denoising by applying transform-domain coefficient shrinkage. Quantitative and qualitative evaluations demonstrate the superior performance of CRF-Guided fusion compared to state-of-the-art multi-focus image fusion methods.
7

Pei, Xiangyu, Shujun Xing, Xunbo Yu, Gao Xin, Xudong Wen, Chenyu Ning, Xinhui Xie, et al. "Three-dimensional light field fusion display system and coding scheme for extending depth of field". Optics and Lasers in Engineering 169 (October 2023): 107716. http://dx.doi.org/10.1016/j.optlaseng.2023.107716.

8

Jie, Yuchan, Xiaosong Li, Mingyi Wang, and Haishu Tan. "Multi-Focus Image Fusion for Full-Field Optical Angiography". Entropy 25, no. 6 (June 16, 2023): 951. http://dx.doi.org/10.3390/e25060951.

Abstract:
Full-field optical angiography (FFOA) has considerable potential for clinical applications in the prevention and diagnosis of various diseases. However, owing to the limited depth of focus attainable using optical lenses, only information about blood flow in the plane within the depth of field can be acquired using existing FFOA imaging techniques, resulting in partially unclear images. To produce fully focused FFOA images, an FFOA image fusion method based on the nonsubsampled contourlet transform and contrast spatial frequency is proposed. First, an imaging system is constructed, and the FFOA images are acquired via the intensity-fluctuation modulation effect. Second, we decompose the source images into low-pass and band-pass images by performing the nonsubsampled contourlet transform. A sparse-representation-based rule is introduced to fuse the low-pass images to effectively retain the useful energy information. Meanwhile, a contrast spatial frequency rule is proposed to fuse the band-pass images, which considers the neighborhood correlation and gradient relationships of pixels. Finally, the fully focused image is produced by reconstruction. The proposed method significantly expands the range of focus of optical angiography and can be effectively extended to public multi-focus datasets. Experimental results confirm that the proposed method outperformed some state-of-the-art methods in both qualitative and quantitative evaluations.
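The contrast spatial frequency rule itself is not reproduced here; the sketch below shows the plain spatial-frequency choose-max idea it builds on, applied block-wise to band-pass coefficients. The block size is an assumption, and the neighborhood-correlation and gradient terms are omitted.

```python
# Sketch: block-wise spatial-frequency choose-max fusion of band-pass
# coefficients (assumes dimensions are multiples of the block size).
import numpy as np

def spatial_frequency(block):
    rf = np.diff(block.astype(np.float64), axis=1)  # row frequency
    cf = np.diff(block.astype(np.float64), axis=0)  # column frequency
    return np.sqrt((rf ** 2).mean() + (cf ** 2).mean())

def fuse_bandpass(coef_a, coef_b, block=8):
    out = coef_a.copy()
    for i in range(0, coef_a.shape[0], block):
        for j in range(0, coef_a.shape[1], block):
            sl = (slice(i, i + block), slice(j, j + block))
            if spatial_frequency(coef_b[sl]) > spatial_frequency(coef_a[sl]):
                out[sl] = coef_b[sl]
    return out
```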
9

Wang, Hui-Feng, Gui-ping Wang, Xiao-Yan Wang, Chi Ruan, and Shi-qin Chen. "A kind of infrared expand depth of field vision sensor in low-visibility road condition for safety-driving". Sensor Review 36, no. 1 (January 18, 2016): 7–13. http://dx.doi.org/10.1108/sr-04-2015-0055.

Abstract:
Purpose – This study aims to consider active vision in low-visibility environments, to reveal the optical properties that affect visibility, and to explore a method of obtaining different depths of field by multimode imaging. Bad weather affects the driver's visual range tremendously and thus has a serious impact on transport safety. Design/methodology/approach – A new mechanism and a core algorithm for obtaining an excellent large-field-depth image, which can be used to aid safe driving, are designed and implemented. In this mechanism, the atmospheric extinction principle and a field-expansion system are studied as the basis, followed by the image registration and fusion algorithm for the Infrared Extended Depth of Field (IR-EDOF) sensor. Findings – The experimental results show that the proposed idea works well to expand the field depth in a low-visibility road environment as a new aided safety-driving sensor. Originality/value – The paper presents a new kind of active optical extension, as well as enhanced driving aids, which is an effective solution to the problem of weakened visual ability. It is a practical engineering sensor scheme for safe driving in low-visibility road environments.
10

Xiao, Yuhao, Guijin Wang, Xiaowei Hu, Chenbo Shi, Long Meng, and Huazhong Yang. "Guided, Fusion-Based, Large Depth-of-field 3D Imaging Using a Focal Stack". Sensors 19, no. 22 (November 7, 2019): 4845. http://dx.doi.org/10.3390/s19224845.

Abstract:
Three-dimensional (3D) imaging technology has been widely used for many applications, such as human–computer interaction, industrial measurement, and the study of cultural relics. However, existing active methods often require large apertures for both projector and camera to maximize light throughput, resulting in a shallow working volume in which projector and camera are simultaneously in focus. In this paper, we propose a novel method to extend the working range of a structured-light 3D imaging system based on the focal stack. Specifically, for scenes with large depth variation, we first adopted the gray code method for local 3D shape measurement with multiple focal distance settings. Then we extracted the texture map of each focus position into a focal stack to generate a global coarse depth map. Under the guidance of the global coarse depth map, a high-quality 3D shape measurement of the overall scene was obtained by fusing the local 3D shape measurements. To validate the method, we developed a prototype system that can perform high-quality measurements in a depth range of 400 mm with a measurement error of 0.08%.
11

Lv, Chen, Min Xiao, Bingbing Zhang, Pengbo Chen, and Xiaomin Liu. "P‐2.29: Depth Estimation of Light Field Image Based on Compressed Sensing and Multi‐clue Fusion". SID Symposium Digest of Technical Papers 54, S1 (April 2023): 594–97. http://dx.doi.org/10.1002/sdtp.16363.

Abstract:
In the current research field of light field depth estimation, the occlusion of complex scenes and a large amount of computational data are the problems that every researcher must face. For complex occlusion scenes, this paper proposes a depth estimation method based on the fusion of adaptive defocus cues and constrained angular entropy cues, which is more robust to occlusion. At the same time, the compressed sensing theory is used to compress and reconstruct the light field image to solve the problem of a large amount of data in the process of light field image acquisition, transmission, and processing. The experimental results show that the proposed method has a good overall effect in dealing with the depth estimation of the occlusion scene, and the correct depth information can be obtained. The light field image reconstructed by compressed sensing can not only obtain good depth estimation results but also reduce the amount of data effectively.
12

Xiao, Min, Chen Lv, and Xiaomin Liu. "FPattNet: A Multi-Scale Feature Fusion Network with Occlusion Awareness for Depth Estimation of Light Field Images". Sensors 23, no. 17 (August 28, 2023): 7480. http://dx.doi.org/10.3390/s23177480.

Abstract:
A light field camera can capture light information from various directions within a scene, allowing for the reconstruction of the scene. The light field image inherently contains the depth information of the scene, and depth estimations of light field images have become a popular research topic. This paper proposes a depth estimation network of light field images with occlusion awareness. Since light field images contain many views from different viewpoints, identifying the combinations that contribute the most to the depth estimation of the center view is critical to improving the depth estimation accuracy. Current methods typically rely on a fixed set of views, such as vertical, horizontal, and diagonal, which may not be optimal for all scenes. To address this limitation, we propose a novel approach that considers all available views during depth estimation while leveraging an attention mechanism to assign weights to each view dynamically. By inputting all views into the network and employing the attention mechanism, we enable the model to adaptively determine the most informative views for each scene, thus achieving more accurate depth estimation. Furthermore, we introduce a multi-scale feature fusion strategy that amalgamates contextual information and expands the receptive field to enhance the network’s performance in handling challenging scenarios, such as textureless and occluded regions.
13

Tsai, Yu-Hsiang, Yung-Jhe Yan, Meng-Hsin Hsiao, Tzu-Yi Yu, and Mang Ou-Yang. "Real-Time Information Fusion System Implementation Based on ARM-Based FPGA". Applied Sciences 13, no. 14 (July 23, 2023): 8497. http://dx.doi.org/10.3390/app13148497.

Abstract:
In this study, an information fusion system displayed fusion information on a transparent display by considering the relationships among the display, background exhibit, and user’s gaze direction. We used an ARM-based field-programmable gate array (FPGA) to perform virtual–real fusion of this system as well as evaluated the virtual–real fusion execution speed. The ARM-based FPGA used Intel® RealsenseTM D435i depth cameras to capture depth and color images of an observer and exhibit. The image data was received by the ARM side and fed to the FPGA side for real-time object detection. The FPGA accelerated the computation of the convolution neural networks to recognize observers and exhibits. In addition, a module performed by the FPGA was developed for rapid registration between the color and depth images. The module calculated the size and position of the information displayed on a transparent display according to the pixel coordinates and depth values of the human eye and exhibit. A personal computer with GPU RTX2060 performed information fusion in ~47 ms, whereas the ARM-based FPGA accomplished it in 25 ms. Thus, the fusion speed of the ARM-based FPGA was 1.8 times faster than on the computer.
14

Ren, Nai Fei, Lei Jia, and Dian Wang. "Numerical Simulation Analysis on the Temperature Field in Indirect Selective Laser Sintering of 316L". Advanced Materials Research 711 (June 2013): 209–13. http://dx.doi.org/10.4028/www.scientific.net/amr.711.209.

Abstract:
Using the APDL programming language, an appropriate finite element model is created and the moving cyclic loads of a Gaussian heat source are realized. From detailed qualitative analysis of the results, the variation laws of the temperature field in indirect SLS are obtained. Plot results at different moments, temperature cycle curves of key points, and curves of fusion depth and fusion width along the set paths are of important guiding significance for subsequent physical experiments.
15

Chen, Junying, Boxuan Wang, Xiuyu Chen, Qingshan Jiang, Wei Feng, Zhilong Xu, and Zhenye Zhao. "A Micro-Topography Measurement and Compensation Method for the Key Component Surface Based on White-Light Interferometry". Sensors 23, no. 19 (October 8, 2023): 8307. http://dx.doi.org/10.3390/s23198307.

Abstract:
The grinding grooves of material removal machining and the residues of a machining tool on the key component surface cause surface stress concentration. Thus, it is critical to carry out precise measurements on the key component surface to evaluate the stress concentration. Based on white-light interferometry (WLI), we studied the measurement distortion caused by the reflected light from the steep side of the grinding groove being unable to return to the optical system for imaging. A threshold value was set to eliminate the distorted measurement points, and the cubic spline algorithm was used to interpolate the eliminated points for compensation. The compensation result agrees well with the atomic force microscope (AFM) measurement result. However, for residues on the surface, a practical method was established to obtain a microscopic 3D micro-topography point cloud and a super-depth-of-field fusion image simultaneously. Afterward, the semantic segmentation network U-net was adopted to identify the residues in the super-depth-of-field fusion image and achieved a recognition accuracy of 91.06% for residual identification. Residual feature information, including height, position, and size, was obtained by integrating the information from point clouds and super-depth-of-field fusion images. This work can provide foundational data to study surface stress concentration.
16

Liu, Hang, Hengyu Li, Jun Luo, Shaorong Xie, and Yu Sun. "Construction of All-in-Focus Images Assisted by Depth Sensing". Sensors 19, no. 6 (March 22, 2019): 1409. http://dx.doi.org/10.3390/s19061409.

Abstract:
Multi-focus image fusion is a technique for obtaining an all-in-focus image in which all objects are in focus to extend the limited depth of field (DoF) of an imaging system. Different from traditional RGB-based methods, this paper presents a new multi-focus image fusion method assisted by depth sensing. In this work, a depth sensor is used together with a colour camera to capture images of a scene. A graph-based segmentation algorithm is used to segment the depth map from the depth sensor, and the segmented regions are used to guide a focus algorithm to locate in-focus image blocks from among multi-focus source images to construct the reference all-in-focus image. Five test scenes and six evaluation metrics were used to compare the proposed method and representative state-of-the-art algorithms. Experimental results quantitatively demonstrate that this method outperforms existing methods in both speed and quality (in terms of comprehensive fusion metrics). The generated images can potentially be used as reference all-in-focus images.
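The guiding idea lends itself to a compact sketch: given a segment label map derived from the segmented depth map, copy each segment from whichever multi-focus source scores highest on a focus measure. The Laplacian-energy measure and whole-segment selection below are illustrative assumptions rather than the paper's exact procedure.

```python
# Sketch of depth-guided all-in-focus construction from segment labels.
import numpy as np
import cv2

def fuse_by_depth_segments(sources, labels):
    # sources: registered grayscale images; labels: int segment map (H x W)
    # obtained from the segmented depth image.
    sharpness = [cv2.Laplacian(s, cv2.CV_64F) ** 2 for s in sources]
    fused = np.zeros_like(sources[0])
    for seg in np.unique(labels):
        region = labels == seg
        scores = [m[region].mean() for m in sharpness]   # mean focus per source
        fused[region] = sources[int(np.argmax(scores))][region]
    return fused
```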
17

Zhuang, Chuanqing, Zhengda Lu, Yiqun Wang, Jun Xiao, and Ying Wang. "ACDNet: Adaptively Combined Dilated Convolution for Monocular Panorama Depth Estimation". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 3653–61. http://dx.doi.org/10.1609/aaai.v36i3.20278.

Abstract:
Depth estimation is a crucial step for 3D reconstruction with panorama images. Panorama images maintain the complete spatial information but introduce distortion through equirectangular projection. In this paper, we propose ACDNet, based on adaptively combined dilated convolution, to predict the dense depth map for a monocular panoramic image. Specifically, we combine convolution kernels with different dilations to extend the receptive field in the equirectangular projection. Meanwhile, we introduce an adaptive channel-wise fusion module to summarize the feature maps and get diverse attention areas in the receptive field along the channels. Owing to the use of channel-wise attention in constructing the adaptive channel-wise fusion module, the network can capture and leverage cross-channel contextual information efficiently. Finally, we conduct depth estimation experiments on three datasets (both virtual and real-world), and the experimental results demonstrate that our proposed ACDNet substantially outperforms the current state-of-the-art (SOTA) methods. Our code and model parameters are available at https://github.com/zcq15/ACDNet.
18

Kim, Yeon-Soo, Taek-Jin Kim, Yong-Joo Kim, Sang-Dae Lee, Seong-Un Park, and Wan-Soo Kim. "Development of a Real-Time Tillage Depth Measurement System for Agricultural Tractors: Application to the Effect Analysis of Tillage Depth on Draft Force during Plow Tillage". Sensors 20, no. 3 (February 8, 2020): 912. http://dx.doi.org/10.3390/s20030912.

Abstract:
The objectives of this study were to develop a real-time tillage depth measurement system for agricultural tractor performance analysis and to validate the system through soil non-penetration tests and a field experiment during plow tillage. The real-time tillage depth measurement system was developed using a sensor fusion method, consisting of a linear potentiometer, an inclinometer, and an optical distance sensor, to measure the vertical penetration depth of the attached implement. In addition, a draft force measurement system was developed using six-component load cells, and an accuracy of 98.9% was verified through a static load test. The soil non-penetration tests confirmed that sensor fusion type A, consisting of a linear potentiometer and an inclinometer, was 6.34–11.76% more accurate than sensor fusion type B, consisting of an optical distance sensor and an inclinometer. Therefore, sensor fusion type A was used during field testing, as it was found to be more suitable for severe working environments. To verify the accuracy of the real-time tillage depth measurement system, a linear regression analysis was performed between the measured draft and the values predicted by the American Society of Agricultural and Biological Engineers (ASABE) standards-based equation. Experimental data such as traveling speed and draft force were significantly affected by tillage depth, and the coefficient of determination at M3–Low was 0.847, relatively higher than at M3–High. In addition, regression analysis of the integrated data showed an R-squared value of 0.715, an improvement over the accuracy of the ASABE standard prediction formula. In conclusion, the effect of tillage depth on the draft force of agricultural tractors during plow tillage was analyzed by the simultaneous operation of the proposed real-time tillage depth measurement system and the draft force measurement system. The system accuracy is higher than the ±40% prediction accuracy of the ASABE standard equation, which makes it useful for various agricultural machinery research fields. In future studies, real-time tillage depth measurement systems can be used in tractor powertrain design and to ensure component reliability under agricultural working conditions, by predicting draft force and axle loads depending on the tillage depth during tillage operations.
19

Zhou, Youyong, Lingjie Yu, Chao Zhi, Chuwen Huang, Shuai Wang, Mengqiu Zhu, Zhenxia Ke, Zhongyuan Gao, Yuming Zhang, and Sida Fu. "A Survey of Multi-Focus Image Fusion Methods". Applied Sciences 12, no. 12 (June 20, 2022): 6281. http://dx.doi.org/10.3390/app12126281.

Abstract:
As an important branch in the field of image fusion, the multi-focus image fusion technique can effectively solve the problem of the limited depth of field of optical lenses, fusing two or more partially focused images into a fully focused image. In this paper, the methods based on boundary segmentation are put forward as a distinct group of image fusion methods. Thus, a novel classification of image fusion algorithms is proposed: transform-domain methods, boundary segmentation methods, deep learning methods, and combination fusion methods. In addition, the subjective and objective evaluation standards are listed, and eight common objective evaluation indicators are described in detail. On the basis of an extensive body of literature, this paper compares and summarizes various representative methods. At the end of this paper, some main limitations of current research are discussed, and the future development of multi-focus image fusion is considered.
20

Zhou, Junyu. "Comparative Study on BEV Vision and LiDAR Point Cloud Data Fusion Methods". Transactions on Computer Science and Intelligent Systems Research 2 (December 21, 2023): 14–18. http://dx.doi.org/10.62051/ww28m534.

Abstract:
With the gradual maturity of autonomous driving technology, the efficient fusion and processing of multimodal sensor data has become an important research direction. This study explores strategies for integrating BEV (Bird's Eye View) vision with LiDAR point cloud data. We evaluated the performance and applicability of the three main data fusion methods through in-depth comparison: early fusion, mid-term fusion, and late fusion. First, we summarize the working principles and data characteristics of BEV vision and LiDAR, and emphasize their key roles in autonomous driving systems. Then, the theoretical basis and implementation methods of the three fusion strategies are described in detail. The experimental results show that different fusion strategies exhibit their own advantages in different application scenarios and requirements. For example, early fusion performs well in high-precision tasks but demands substantial computing resources, while mid-term fusion is more suitable in scenarios with high real-time requirements. Overall, this study provides in-depth insights and practical suggestions on the fusion of BEV vision and LiDAR data in the field of autonomous driving, laying a solid foundation for future research and applications.
21

Pagliari, D., F. Menna, R. Roncella, F. Remondino, and L. Pinto. "Kinect Fusion improvement using depth camera calibration". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-5 (June 6, 2014): 479–85. http://dx.doi.org/10.5194/isprsarchives-xl-5-479-2014.

Abstract:
3D scene modelling, gesture recognition, and motion tracking are fields in rapid and continuous development which have caused a growing demand for interactivity in the video-game and e-entertainment markets. Starting from the idea of creating a sensor that allows users to play without having to hold any remote controller, the Microsoft Kinect device was created. The Kinect has attracted researchers in different fields, from robotics to Computer Vision (CV) and biomedical engineering, as well as third-party communities that have released several Software Development Kit (SDK) versions for the Kinect in order to use it not only as a game device but also as a measurement system. The Microsoft Kinect Fusion control libraries (first released in March 2013) allow using the device as a 3D scanner, producing polygonal meshes of a static scene simply by moving the Kinect around. A drawback of this sensor is the geometric quality of the delivered data and its low repeatability. For this reason, the authors carried out an investigation to evaluate the accuracy and repeatability of the depth measurements delivered by the Kinect. The paper presents a thorough calibration analysis of the Kinect imaging sensor, with the aim of establishing the accuracy and precision of the delivered information: a straightforward calibration of the depth sensor is presented, and the 3D data are then corrected accordingly. By integrating the depth correction algorithm and correcting the interior and exterior orientation parameters of the IR camera, the Fusion Libraries are corrected and a new reconstruction software is created to produce more accurate models.
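The paper's calibration model is not reproduced here; as a sketch of the general shape of such a depth correction, the snippet below fits a quadratic mapping from raw to reference depths on made-up calibration pairs and applies it to new readings.

```python
# Illustrative depth-correction fit (synthetic numbers, assumed model).
import numpy as np

raw_m = np.array([0.52, 1.01, 1.55, 2.10, 2.62, 3.18])  # sensor depths (m)
ref_m = np.array([0.50, 1.00, 1.50, 2.00, 2.50, 3.00])  # reference depths (m)

# Least-squares quadratic: d_ref ~ a*d_raw^2 + b*d_raw + c.
a, b, c = np.polyfit(raw_m, ref_m, deg=2)

def correct(depth_m):
    return a * depth_m ** 2 + b * depth_m + c
```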
22

Hansen, Jim, and Dennis D. Harwig. "Impact of Electrode Rotation on Aluminum GMAW Bead Shape". Welding Journal 102, no. 6 (June 1, 2023): 125–36. http://dx.doi.org/10.29391/2023.102.010.

Abstract:
Aluminum gas metal arc welding (GMAW) uses inert shielding gas to minimize weld pool oxidation and reduce susceptibility to porosity and incomplete fusion defects. For aluminum shipbuilding, naval welding requirements highly recommend the use of helium-argon mixtures or pure helium shielding gas to provide a broader heat field and better weld toe fusion. Pure argon shielding gas can be used but has been susceptible to incomplete fusion and porosity defects, where argon's lower thermal conductivity promotes a narrower arc heat field and shallow weld fusion depth. Using helium is a concern because it is a finite resource that costs approximately five times more than argon. The rotating electrode pulsed GMAW process was investigated to improve argon-shielded fusion characteristics and reduce helium usage. Argon-shielded bead-on-plate tests were used to evaluate the relationship between ER5183 electrode rotation parameters and arc power on constant-deposit-area bead shape. These tests were compared to stringer beads (no oscillation) made with argon, helium, and helium-argon shielding gases. Electrode rotation improved underbead fusion depth, width, and toe fusion. With preferred rotation parameters, the bead width and incomplete fusion at weld toes were equivalent to helium-based welds. For weld reinforcement, electrode rotation promoted a nonsymmetric profile with deposit bias on the bead side, where the rotation direction was aligned with the travel direction. The bead-side deposit bias is an advantage based on preliminary horizontal V-groove welding procedures using ceramic backing. Electrode rotation can offset the effects of gravity, promoting a smoother bead and fusion profile.
23

Han, Qihui, and Cheolkon Jung. "Guided filtering based data fusion for light field depth estimation with L0 gradient minimization". Journal of Visual Communication and Image Representation 55 (August 2018): 449–56. http://dx.doi.org/10.1016/j.jvcir.2018.06.020.

24

Yang, Ning, Kangpeng Chang, Jian Tang, Lijia Xu, Yong He, Rubing Huang, and Junjie Yu. "Detection method of rice blast based on 4D light field refocusing depth information fusion". Computers and Electronics in Agriculture 205 (February 2023): 107614. http://dx.doi.org/10.1016/j.compag.2023.107614.

25

Jin, Chul Kyu, Jae Hyun Kim, and Bong-Seop Lee. "Powder Bed Fusion 3D Printing and Performance of Stainless-Steel Bipolar Plate with Rectangular Microchannels and Microribs". Energies 15, no. 22 (November 12, 2022): 8463. http://dx.doi.org/10.3390/en15228463.

Abstract:
For the high performance of a fuel cell with a bipolar plate (BP), rectangular channels, microchannel widths, micro-ribs, a sufficient number of channels, adequate channel depth, and an innovative flow-field design should be realized from a configuration standpoint. In this study, a stainless-steel BP with a microchannel flow field is fabricated with a powder bed fusion (PBF) 3D printer to improve fuel cell performance. A BP with a triple serpentine flow field, rectangular channels, 300 μm channel width, 300 μm ribs, and 500 μm channel depth is designed. The print, including the flow field, is completed perfectly. Bending due to thermal deformation does not occur in the BP when the thickness is designed at 2 mm. Performance tests are conducted using the fabricated stainless-steel BPs. The current density is 1.2052 A/cm2 at 0.6 V. This value is 52.8% higher than that of a BP with 940 μm channels (rectangular, 940 μm ribs, and 500 μm channel depth). In addition, it is 24.9% higher than that of a graphite BP with 940 μm channels (rectangular, 940 μm ribs, and 1000 μm channel depth). The current density values are measured at 0.6 V for 260 h.
26

Doulamis, Anastasios, Nikolaos Doulamis, Klimis Ntalianis, and Stefanos Kollias. "Efficient Unsupervised Content-Based Segmentation in Stereoscopic Video Sequences". International Journal on Artificial Intelligence Tools 09, no. 02 (June 2000): 277–303. http://dx.doi.org/10.1142/s0218213000000197.

Abstract:
This paper presents an efficient technique for unsupervised content-based segmentation in stereoscopic video sequences by appropriately combining different content descriptors in a hierarchical framework. Three main modules are involved in the proposed scheme: extraction of reliable depth information, image partition into color and depth regions, and a constrained fusion algorithm for color segments using information derived from the depth map. In the first module, each stereo pair is analyzed and the disparity field and depth map are estimated. Occlusion detection and compensation are also applied to improve the depth map estimation. In the following phase, color and depth regions are created using a novel complexity-reducing multiresolution implementation of the Recursive Shortest Spanning Tree algorithm (M-RSST). While depth segments provide a coarse representation of the image content, color regions describe object boundaries very accurately. For this reason, in the final phase, a new segmentation fusion algorithm is employed which projects color segments onto depth segments. Experimental results are presented which exhibit the efficiency of the proposed scheme as a content-based descriptor, even in the case of images with complicated visual content.
27

Tucci, Michelle A., Robert A. McGuire, Gerri A. Wilson, David P. Gordy, and Hamed A. Benghuzzi. "Treatment Depth Effects of Combined Magnetic Field Technology using a Commercial Bone Growth Stimulator". Journal of the Mississippi Academy of Sciences 66, no. 1 (January 1, 2021): 28–34. http://dx.doi.org/10.31753//jmas.66_1028.

Abstract:
Lumbar spinal fusion is one of the more common spinal surgeries, and its use is on the rise. If the bone fails to fuse properly, a pseudarthrosis or "false joint" develops and results in pain, instability, and disability. Since 1974, three types of electrical stimulation technologies have been approved for clinical use to enhance spinal fusion. One such technology is inductive coupling, which includes combined magnetic fields (CMFs). The purpose of this study was to evaluate the effects of a CMF device known as the Donjoy (SpinaLogic®) on MG-63 (ATCC® CRL1427TM) human osteosarcoma cells at treatment depths ranging from 0.5" to 6.0". The cells were grown to confluence on 4-well chamber slides that were kept in a nickel-alloy chamber within an incubator to shield the cells from unwanted environmental electromagnetic fields. During treatment, a specially designed apparatus held both the treatment device and the chamber slide. Briefly, the chamber slide was placed inside an acrylic tube at a specific distance from the transducer housing, and the device was turned on for 30 minutes. The chamber slides were then returned to the incubator and evaluated at 7, 14, and 21 days post treatment for cell viability and bone nodule formation. Our results showed that, compared with control cells, the cells located 3" from the source had the greatest increase in bone nodule formation 7 days post treatment, a depth consistent with manufacturer recommendations.
28

Fan, Xuqing, Sai Deng, Zhengxing Wu, Junfeng Fan, and Chao Zhou. "Spatial Domain Image Fusion with Particle Swarm Optimization and Lightweight AlexNet for Robotic Fish Sensor Fault Diagnosis". Biomimetics 8, no. 6 (October 17, 2023): 489. http://dx.doi.org/10.3390/biomimetics8060489.

Abstract:
Safety and reliability are vital for robotic fish, which can be improved through fault diagnosis. In this study, a method for diagnosing sensor faults is proposed, which involves using Gramian angular field fusion with particle swarm optimization and lightweight AlexNet. Initially, one-dimensional time series sensor signals are converted into two-dimensional images using the Gramian angular field method with sliding window augmentation. Next, weighted fusion methods are employed to combine Gramian angular summation field images and Gramian angular difference field images, allowing for the full utilization of image information. Subsequently, a lightweight AlexNet is developed to extract features and classify fused images for fault diagnosis with fewer parameters and a shorter running time. To improve diagnosis accuracy, the particle swarm optimization algorithm is used to optimize the weighted fusion coefficient. The results indicate that the proposed method achieves a fault diagnosis accuracy of 99.72% when the weighted fusion coefficient is 0.276. These findings demonstrate the effectiveness of the proposed method for diagnosing depth sensor faults in robotic fish.
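The Gramian angular field construction is standard and can be sketched compactly; below, the PSO search is replaced by a fixed weight, using the paper's reported coefficient of 0.276.

```python
# Gramian angular summation/difference fields from a 1-D series, plus the
# weighted fusion of the two images (fixed weight instead of PSO).
import numpy as np

def gramian_fields(x):
    x = np.asarray(x, dtype=np.float64)
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1   # rescale to [-1, 1]
    phi = np.arccos(np.clip(x, -1.0, 1.0))            # polar encoding
    gasf = np.cos(phi[:, None] + phi[None, :])        # summation field
    gadf = np.sin(phi[:, None] - phi[None, :])        # difference field
    return gasf, gadf

def fuse_fields(gasf, gadf, w=0.276):
    return w * gasf + (1 - w) * gadf
```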
29

Wang, Hantao, Ente Guo, Feng Chen, and Pingping Chen. "Depth Completion in Autonomous Driving: Adaptive Spatial Feature Fusion and Semi-Quantitative Visualization". Applied Sciences 13, no. 17 (August 30, 2023): 9804. http://dx.doi.org/10.3390/app13179804.

Abstract:
The safety of autonomous driving is closely linked to accurate depth perception. With the continuous development of autonomous driving, depth completion has become one of the crucial methods in this field. However, current depth completion methods have major shortcomings with small objects. To solve this problem, this paper proposes an end-to-end architecture with an adaptive spatial feature fusion encoder–decoder (ASFF-ED) module for depth completion. The architecture is built on the network proposed in this paper and is able to extract depth information adaptively, with different weights on the specified feature map, which effectively solves the problem of insufficient depth accuracy for small objects. This paper also proposes a semi-quantitative depth-map visualization method, which displays depth information more intuitively. Compared with currently available depth-map visualization methods, this method has stronger quantitative analysis and horizontal comparison ability. Ablation and comparison experiments show that the proposed method exhibits a lower root-mean-squared error (RMSE) and better small-object detection performance on the KITTI dataset.
30

Chi, Xiaoni, Qinyuan Meng, Qiuxuan Wu, Yangyang Tian, Hao Liu, Pingliang Zeng, Botao Zhang, and Chaoliang Zhong. "A Laser Data Compensation Algorithm Based on Indoor Depth Map Enhancement". Electronics 12, no. 12 (June 17, 2023): 2716. http://dx.doi.org/10.3390/electronics12122716.

Abstract:
The field of mobile robotics has seen significant growth in the use of indoor laser mapping technology, but most two-dimensional Light Detection and Ranging (2D LiDAR) sensors can only scan a plane at a fixed height, and it is difficult to obtain information about objects below that height, so inaccurate environmental maps and collisions during navigation easily occur. Although three-dimensional (3D) LiDAR is also gradually being applied, it is less used in indoor mapping because it is more expensive and requires large amounts of memory and computation. Therefore, a laser data compensation algorithm based on indoor depth map enhancement is proposed in this paper. First, the depth map acquired by the depth camera is denoised and smoothed by bilateral filtering to enhance the depth data, and a multi-layer projection transformation reduces its dimensionality, compressing it into pseudo-laser data. Second, the pseudo-laser data are used to remap the laser data according to the positional relationship between the two sensors and the obstacle. Finally, the fused laser data are added to the simultaneous localization and mapping (SLAM) front-end matching to achieve multi-level data fusion. The performance of the multi-sensor fusion is compared before and after, and against an existing fusion scheme, in simulation and on physical hardware. The experimental results show that the fusion algorithm can achieve a more comprehensive perception of environmental information and effectively improve the accuracy of map building.
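One plausible reading of the projection step, sketched under assumed pinhole intrinsics: project each depth-image column through the camera model and keep the nearest range per bearing, yielding a 2D pseudo-laser scan. This is a generic depth-to-scan reduction, not the paper's exact multi-layer transformation.

```python
# Sketch: compress a depth image into a 2D pseudo-laser scan.
import numpy as np

def depth_to_pseudo_scan(depth_m, fx=525.0, cx=319.5):
    # depth_m: (H, W) depths in metres; zeros mark invalid pixels.
    h, w = depth_m.shape
    angles = np.arctan2(np.arange(w) - cx, fx)      # bearing of each column
    z = np.where(depth_m > 0, depth_m, np.inf)
    rng = z / np.cos(angles)[None, :]               # planar range along each ray
    return angles, rng.min(axis=0)                  # nearest hit per bearing
```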
31

Cheng, Haoyuan, Qi Chen, Xiangwei Zeng, Haoxun Yuan, and Linjie Zhang. "The Polarized Light Field Enables Underwater Unmanned Vehicle Bionic Autonomous Navigation and Automatic Control". Journal of Marine Science and Engineering 11, no. 8 (August 16, 2023): 1603. http://dx.doi.org/10.3390/jmse11081603.

Abstract:
In response to the critical need for autonomous navigation capabilities of underwater vehicles independent of satellites, this paper studies a novel navigation and control method based on underwater polarization patterns. We propose an underwater course-angle measurement algorithm and develop underwater polarization detection equipment. By establishing an automatic control model of an ROV (Remotely Operated Vehicle) with polarization information, we develop a strapdown navigation method combining polarization and inertial information. We verify the feasibility of polarization-based angle measurement in a water tank; the polarization azimuth measurement error is less than 0.69°. Next, we conduct ROV navigation at different water depths in a real underwater environment. At a depth of 5 m, the MSE (Mean Square Error) and SD (Standard Deviation) of the angle error are 16.57° and 4.07°, respectively. For a 100 m traverse at depths within 5 m, underwater navigation accuracy is better than 5 m. Breakthroughs have been achieved in key technologies such as underwater polarization detection, multi-source information fusion, and the ROV automatic control model with polarization. This method can effectively improve the efficiency and accuracy of ROV underwater work.
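A polarization compass typically starts from the Stokes parameters; the sketch below shows this standard computation of the angle and degree of linear polarization from four polarizer orientations. It is a common basis for such headings, not necessarily the paper's full algorithm.

```python
# Angle (AoP) and degree (DoLP) of linear polarization from intensity
# images taken behind 0/45/90/135-degree polarizers.
import numpy as np

def polarization_angle(i0, i45, i90, i135):
    s1 = i0.astype(np.float64) - i90     # Stokes Q
    s2 = i45.astype(np.float64) - i135   # Stokes U
    aop = 0.5 * np.arctan2(s2, s1)       # angle of polarization (radians)
    dolp = np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(i0 + i90, 1e-9)
    return aop, dolp
```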
32

Wang, Jin, and Yanfei Gao. "Suspect Multifocus Image Fusion Based on Sparse Denoising Autoencoder Neural Network for Police Multimodal Big Data Analysis". Scientific Programming 2021 (January 7, 2021): 1–12. http://dx.doi.org/10.1155/2021/6614873.

Abstract:
In recent years, the success rate of solving major criminal cases through big data has greatly improved, and the analysis of multimodal big data plays a key role in the detection of suspects. However, traditional multi-exposure image fusion methods have low efficiency and are largely time-consuming due to artifacts at image edges and other sensitive factors. This paper therefore focuses on suspect multi-exposure image fusion. The self-coding neural network based on deep learning has become a hotspot in research on data dimensionality reduction, as it can effectively eliminate irrelevant and redundant learning data. With a limited depth of field, the focal plane cannot capture a globally sharp image of a target in a deep scene, and defocus blur readily occurs. Therefore, this paper proposes multi-focus image fusion based on a sparse denoising autoencoder neural network. To realize an unsupervised end-to-end fusion network, the sparse denoising autoencoder is adopted to extract features and learn fusion and reconstruction rules simultaneously. The initial decision graph of the multi-focus image is taken as a prior input to learn the rich detailed information of the image. A local strategy is added to the loss function to ensure that the image is restored accurately. The results show that this method is superior to state-of-the-art fusion methods.
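To make the core mechanism concrete, below is a minimal denoising-autoencoder training step in PyTorch; the paper's sparsity penalty, fusion and reconstruction rules, and decision-graph prior are omitted, and all sizes are assumptions.

```python
# Minimal denoising autoencoder: corrupt the input, reconstruct the clean
# signal. Patch dimension and hidden width are illustrative.
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    def __init__(self, dim=256, hidden=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.dec = nn.Linear(hidden, dim)

    def forward(self, x, noise_std=0.1):
        noisy = x + noise_std * torch.randn_like(x)   # corrupt input
        return self.dec(self.enc(noisy))

model = DenoisingAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(32, 256)                               # stand-in image patches
loss = nn.functional.mse_loss(model(x), x)            # reconstruct clean x
opt.zero_grad(); loss.backward(); opt.step()
```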
33

Li, Ying, Wenyue Li, Zhijie Zhao, and JiaHao Fan. "DRI-MVSNet: A depth residual inference network for multi-view stereo images". PLOS ONE 17, no. 3 (March 23, 2022): e0264721. http://dx.doi.org/10.1371/journal.pone.0264721.

Abstract:
Three-dimensional (3D) image reconstruction is an important field of computer vision for restoring the 3D geometry of a given scene. Owing to the demand for large amounts of memory, prevalent methods of 3D reconstruction yield inaccurate results, because of which highly accurate reconstruction of a scene remains an outstanding challenge. This study proposes a cascaded depth residual inference network, called DRI-MVSNet, that uses a cross-view similarity-based feature map fusion module for residual inference. It involves three improvements. First, a combined module is used to process channel-related and spatial information to capture the relevant contextual information and improve feature representation; it combines a channel attention mechanism and spatial pooling networks. Second, a cross-view similarity-based feature map fusion module is proposed that learns the similarity between pairs of pixels in each source and reference image at planes of different depths along the frustum of the reference camera. Third, a deep, multi-stage residual prediction module is designed to generate a high-precision depth map, using a non-uniform depth sampling strategy to construct hypothetical depth planes. The results of extensive experiments show that DRI-MVSNet delivers competitive performance on the DTU and Tanks & Temples datasets, and the accuracy and completeness of the point clouds it reconstructs are significantly superior to those of state-of-the-art benchmarks.
34

Kannan, K. "Application of Partial Differential Equations in Multi Focused Image Fusion". International Journal of Advanced Networking and Applications 14, no. 01 (2022): 5266–70. http://dx.doi.org/10.35444/ijana.2022.14105.

Abstract:
Image fusion is a process used to combine two or more images to form a more informative image. Machine vision cameras are often affected by limited depth of field and capture a clear view only of the objects that are in focus, while other objects in the scene are blurred. It is therefore necessary to combine a set of images to obtain a clear view of all objects in the scene; this is called multi-focus image fusion. This paper compares and presents the performance of second-order and fourth-order partial differential equations in multi-focus image fusion.
35

Qin, Xiaomei, Yuxi Ban, Peng Wu, Bo Yang, Shan Liu, Lirong Yin, Mingzhe Liu, and Wenfeng Zheng. "Improved Image Fusion Method Based on Sparse Decomposition". Electronics 11, no. 15 (July 26, 2022): 2321. http://dx.doi.org/10.3390/electronics11152321.

Abstract:
By the principle of lens imaging, when a three-dimensional object is projected onto a photosensitive element through a convex lens, points intersecting the focal plane form sharp image points, while object points far from the focal plane form blurred ones. Within a limited range in front of and behind the focal plane, the image is considered sharp; otherwise, it is considered blurred. In microscopic scenes, an electron microscope is usually used as the imaging equipment, which essentially eliminates defocus between the lens and the object; most of the blur is caused by the shallow depth of field of the microscope, which leaves parts of the image defocused. On this basis, this paper analyzes the causes of defocusing in a video microscope and finds that the shallow depth of field is the main one, so we choose the corresponding deblurring method: multi-focus image fusion. We propose a new multi-focus image fusion method based on sparse representation (DWT-SR). The computational burden is reduced by decomposing the image into multiple frequency bands, and multi-channel processing is carried out in parallel on the GPU, further reducing the running time of the algorithm. The results indicate that the DWT-SR algorithm introduced in this paper yields higher contrast and retains much more detail, and it also addresses the long runtime of dictionary-training sparse approximation.
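The wavelet side of such a pipeline can be sketched with PyWavelets; the rule below (average the approximation band, choose-max-absolute for detail bands) is a common assumption and leaves out the paper's sparse-representation stage and GPU parallelism.

```python
# Sketch: DWT-domain fusion of two registered grayscale images.
import numpy as np
import pywt

def dwt_fuse(img_a, img_b, wavelet="db2", level=3):
    ca = pywt.wavedec2(img_a.astype(np.float64), wavelet, level=level)
    cb = pywt.wavedec2(img_b.astype(np.float64), wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                  # approximation: mean
    for da, db in zip(ca[1:], cb[1:]):               # details: max-absolute
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)
```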
36

Zeng, Hui, Bin Yang, Xiuqing Wang, Jiwei Liu, and Dongmei Fu. "RGB-D Object Recognition Using Multi-Modal Deep Neural Network and DS Evidence Theory". Sensors 19, no. 3 (January 27, 2019): 529. http://dx.doi.org/10.3390/s19030529.

Abstract:
With the development of low-cost RGB-D (Red Green Blue-Depth) sensors, RGB-D object recognition has attracted more and more researchers’ attention in recent years. The deep learning technique has become popular in the field of image analysis and has achieved competitive results. To make full use of the effective identification information in the RGB and depth images, we propose a multi-modal deep neural network and a DS (Dempster Shafer) evidence theory based RGB-D object recognition method. First, the RGB and depth images are preprocessed and two convolutional neural networks are trained, respectively. Next, we perform multi-modal feature learning using the proposed quadruplet samples based objective function to fine-tune the network parameters. Then, two probability classification results are obtained using two sigmoid SVMs (Support Vector Machines) with the learned RGB and depth features. Finally, the DS evidence theory based decision fusion method is used for integrating the two classification results. Compared with other RGB-D object recognition methods, our proposed method adopts two fusion strategies: Multi-modal feature learning and DS decision fusion. Both the discriminative information of each modality and the correlation information between the two modalities are exploited. Extensive experimental results have validated the effectiveness of the proposed method.
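Dempster's rule of combination, the core of the DS decision-fusion step, is well defined and small enough to sketch; the networks and SVMs that would produce the mass functions are not shown, and the example masses are made up.

```python
# Dempster's rule for two mass functions over the same frame of discernment.
def dempster_combine(m1, m2):
    # m1, m2: dicts mapping frozenset hypotheses to masses that sum to 1.
    combined, conflict = {}, 0.0
    for b, mb in m1.items():
        for c, mc in m2.items():
            inter = b & c
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mb * mc
            else:
                conflict += mb * mc                 # mass lost to conflict
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

m_rgb = {frozenset({"cup"}): 0.7, frozenset({"cup", "bowl"}): 0.3}
m_depth = {frozenset({"cup"}): 0.6, frozenset({"bowl"}): 0.4}
print(dempster_combine(m_rgb, m_depth))  # cup ~0.833, bowl ~0.167
```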
37

He, Kangjian, Jian Gong, and Dan Xu. "Focus-pixel estimation and optimization for multi-focus image fusion". Multimedia Tools and Applications 81, no. 6 (January 28, 2022): 7711–31. http://dx.doi.org/10.1007/s11042-022-12031-x.

Abstract:
To integrate the effective information and improve the quality of multi-source images, many spatial- or transform-domain-based image fusion methods have been proposed in the field of information fusion. The key purpose of multi-focus image fusion is to integrate the focused pixels and remove the redundant information of each source image. Theoretically, if the focused pixels and complementary information of different images are detected completely, the fused image with the best quality can be obtained. To this end, we propose a focus-pixel estimation and optimization based multi-focus image fusion framework in this paper. Because the focused pixels of an image lie in the same depth of field (DOF), we first propose a multi-scale focus-measure algorithm for focused-pixel matting to integrate the focused regions. Then, the boundaries of the focused and defocused regions are obtained accurately by the proposed optimizing strategy, and the boundaries are also fused to reduce the influence of insufficient boundary precision. The experimental results demonstrate that the proposed method outperforms some previous typical methods in both objective evaluations and visual perception.
38

An, Ying. "Medical Social Security System Considering Multisensor Data Fusion and Online Analysis". Mobile Information Systems 2022 (August 8, 2022): 1–12. http://dx.doi.org/10.1155/2022/2312333.

Abstract:
At present, multi-sensor data fusion technology has been applied in many fields; understanding its algorithmic principles, mastering its basic theory, analyzing its application fields, and recognizing the technical challenges of those fields, together with discussion of its development direction, have promoted its wide application. By improving the query planning, query interpretation, and cache query optimization mechanisms of different data organization models, a scalable and efficient distributed hybrid online analytical processing system is designed and implemented. With the reform of the medical system, the construction of the medical security system will be continuously improved. The in-depth reform of the medical security system will affect the medical treatment of more than one billion people, involving thousands of households; it is a great cause for the welfare of the people. In this paper, exploration and research are carried out in combination with basic data fusion algorithms and typical applications, and the construction of urban residents' medical insurance is studied. The experimental results show that the Cavani index of medical expenses generally shows a downward trend in urban and rural areas; in urban areas it decreased from −0.1916 in 2012 to −0.2995 in 2020. This paper focuses on an in-depth study of the development process, actual situation, and existing problems of the social medical insurance system for urban residents, compares the development models of urban medical insurance elsewhere, and tries to put forward valuable and constructive countermeasures for the medical insurance system for urban residents.
Citation styles: ABNT, Harvard, Vancouver, APA, etc.
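
The abstract invokes "basic algorithms of data fusion" without naming one; a textbook instance of the principle is inverse-variance weighting of redundant sensor readings. The sketch below is only that generic principle, with made-up readings, and is not the system described in the paper.

import numpy as np

def fuse_measurements(values, variances):
    # Inverse-variance weighting: the minimum-variance linear fusion
    # of independent, unbiased readings of the same quantity.
    w = 1.0 / np.asarray(variances, dtype=np.float64)
    fused = np.sum(w * np.asarray(values, dtype=np.float64)) / np.sum(w)
    return fused, 1.0 / np.sum(w)

# Example with three made-up sensors; the most precise one dominates.
print(fuse_measurements([10.1, 9.8, 10.4], [0.04, 0.09, 0.25]))
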
40

Tian, Wei, Zhenwen Deng, Dong Yin, Zehan Zheng, Yuyao Huang, and Xin Bi. "3D Pedestrian Detection in Farmland by Monocular RGB Image and Far-Infrared Sensing". Remote Sensing 13, no. 15 (July 23, 2021): 2896. http://dx.doi.org/10.3390/rs13152896.

Full text source
Abstract:
The automated driving of agricultural machinery is of great significance for agricultural production efficiency, yet it is still challenging due to environmental conditions that vary significantly between day and night. To address operational safety for pedestrians in farmland, this paper proposes a 3D person sensing approach based on monocular RGB and far-infrared (FIR) images. Since publicly available datasets for agricultural 3D pedestrian detection are scarce, a new dataset named "FieldSafePedestrian" is proposed, which includes field images in both day and night. The data augmentations applied to night images and the semi-automatic labeling approach are also elaborated to facilitate the 3D annotation of pedestrians. To fuse heterogeneous images from sensors with non-parallel optical axes, the Dual-Input Depth-Guided Dynamic-Depthwise-Dilated Fusion network (D5F) is proposed, which assists the pixel alignment between FIR and RGB images with estimated depth information and deploys dynamic filtering to guide the fusion of heterogeneous information. Experiments on field images in both daytime and nighttime demonstrate that, compared with the state of the art, the dynamically aligned image fusion achieves an accuracy gain of 3.9% and 4.5% in terms of center distance and BEV-IoU, respectively, without affecting run-time efficiency.
Citation styles: ABNT, Harvard, Vancouver, APA, etc.
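
The D5F network itself is not spelled out in the abstract, but the depth-guided pixel alignment it relies on is standard camera geometry: back-project an RGB pixel using the estimated depth, apply the rigid transform between the two cameras, and re-project into the FIR image. A minimal sketch, in which the intrinsic matrices K_rgb and K_fir and the extrinsics R, t are placeholder assumptions:

import numpy as np

def rgb_pixel_to_fir(u, v, depth, K_rgb, K_fir, R, t):
    # Back-project the RGB pixel (u, v) to 3D using the estimated depth.
    xyz_rgb = depth * (np.linalg.inv(K_rgb) @ np.array([u, v, 1.0]))
    # Rigid transform into the FIR camera frame; the optical axes need
    # not be parallel, since R and t carry the full relative pose.
    xyz_fir = R @ xyz_rgb + t
    # Re-project with the FIR intrinsics and normalize by depth.
    uvw = K_fir @ xyz_fir
    return uvw[:2] / uvw[2]
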
41

Hariharan, H., A. Koschan, B. Abidi, D. Page, M. Abidi, J. Frafjord, and S. Dekanich. "Extending Depth of Field in LC-SEM Scenes by Partitioning Sharpness Transforms". Microscopy Today 16, no. 2 (March 2008): 18–21. http://dx.doi.org/10.1017/s1551929500055875.

Full text source
Abstract:
When imaging a sample, it is desirable to have the entire area of interest in focus in the acquired image. Typically, microscopes have a limited depth of field (DOF), and this makes the acquisition of such an all-in-focus image difficult. This is a major problem in many microscopic applications and applies equally in the realm of scanning electron microscopy. In multifocus fusion, the central idea is to acquire focal information from multiple images at different focal planes and fuse them into one all-in-focus image in which all the focal planes appear to be in focus. Large-chamber scanning electron microscopes (LC-SEM) are among the latest members of the SEM family and have found extensive use in nondestructive evaluation. Large objects (~1 meter) can be scanned at micro- or nano-scale using this microscope. An LC-SEM can characterize conductive and non-conductive surfaces at magnifications from 10× to 200,000×. The LC-SEM, as with other SEMs, suffers from limited DOF, making it difficult to inspect a large object while keeping all areas in focus.
Citation styles: ABNT, Harvard, Vancouver, APA, etc.
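
The article's partitioned sharpness transforms are not detailed in this abstract; the generic multifocus-fusion step they build on, picking per pixel the slice of the focal stack with the highest local sharpness, can be sketched as follows. The gradient-based sharpness measure and window size are assumptions.

import numpy as np
from scipy.ndimage import sobel, uniform_filter

def all_in_focus(stack, window=15):
    # stack: (n_slices, H, W) grayscale focal stack, e.g. from an SEM.
    stack = stack.astype(np.float64)
    sharpness = []
    for img in stack:
        # Squared gradient magnitude, averaged over a local window.
        g2 = sobel(img, axis=0) ** 2 + sobel(img, axis=1) ** 2
        sharpness.append(uniform_filter(g2, size=window))
    best = np.argmax(np.stack(sharpness), axis=0)   # winning slice per pixel
    return np.take_along_axis(stack, best[None], axis=0)[0]
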
42

Xiaomin, Liu, Du Mengzhu, Zang Huaping, Ma Zhibang, Zhu Yunfei, and Chen Pengbo. "P‐8.2: Anti‐occlusion depth estimation method based on multi‐clue fusion for light field image". SID Symposium Digest of Technical Papers 52, S1 (February 2021): 560. http://dx.doi.org/10.1002/sdtp.14554.

Full text source
Citation styles: ABNT, Harvard, Vancouver, APA, etc.
43

Chaudhary, Vishal, and Vinay Kumar. "Block-based image fusion using multi-scale analysis to enhance depth of field and dynamic range". Signal, Image and Video Processing 12, no. 2 (September 7, 2017): 271–79. http://dx.doi.org/10.1007/s11760-017-1155-y.

Full text source
Citation styles: ABNT, Harvard, Vancouver, APA, etc.
44

Llavador, Anabel, Gabriele Scrofani, Genaro Saavedra, and Manuel Martinez-Corral. "Large Depth-of-Field Integral Microscopy by Use of a Liquid Lens". Sensors 18, no. 10 (October 10, 2018): 3383. http://dx.doi.org/10.3390/s18103383.

Full text source
Abstract:
Integral microscopy is a 3D imaging technique that permits the recording of spatial and angular information of microscopic samples. From this information it is possible to calculate a collection of orthographic views with full parallax and to refocus computationally, at will, through the 3D specimen. An important drawback of integral microscopy, especially when dealing with thick samples, is the limited depth of field (DOF) of the perspective views. This imposes a significant limitation on the depth range of computationally refocused images. To overcome this problem, we propose here a new method based on the insertion, at the pupil plane of the microscope objective, of an electrically controlled liquid lens (LL) whose optical power can be changed by simply tuning the voltage. This new apparatus has the advantage of controlling the axial position of the objective's focal plane while keeping constant the essential parameters of the integral microscope, that is, the magnification, the numerical aperture, and the amount of parallax. Thus, given a 3D sample, the new microscope can provide a stack of integral images with complementary depth ranges. The fusion of the set of refocused images permits the reconstruction range to be enlarged, yielding images in focus over the whole region.
Citation styles: ABNT, Harvard, Vancouver, APA, etc.
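
A standard thin-lens sketch suggests why tuning the liquid lens shifts focus without changing magnification. Assuming the LL sits in the back focal plane (pupil) of an infinity-corrected objective of focal length f_obj, adding a tunable power P_LL shifts the object-side focal plane by approximately

\Delta z \;\approx\; f_{\mathrm{obj}}^{2}\, P_{\mathrm{LL}},

while the lateral magnification, fixed by the tube-lens to objective focal-length ratio M = f_{\mathrm{tube}} / f_{\mathrm{obj}}, is unaffected. This is the usual remote-focusing relation stated under those assumptions, not a derivation taken from the paper.
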
45

Bi, Xin, Shichao Yang, and Panpan Tong. "Moving Object Detection Based on Fusion of Depth Information and RGB Features". Sensors 22, no. 13 (June 22, 2022): 4702. http://dx.doi.org/10.3390/s22134702.

Full text source
Abstract:
The detection of moving objects is one of the key problems in the field of computer vision, and detecting moving objects accurately and rapidly is very important for automated driving. In this paper, we propose an improved moving object detection method that adds depth information to overcome the disadvantages of methods based on RGB information alone, which are susceptible to shadow interference and illumination changes. First, a convolutional neural network (CNN) based on color-edge-guided super-resolution reconstruction of depth maps is proposed to perform super-resolution reconstruction of the low-resolution depth images obtained by depth cameras. Second, the RGB-D moving object detection algorithm fuses the depth information of the same scene with RGB features for detection. Finally, to evaluate the effectiveness of the proposed algorithm, the Middlebury 2005 dataset and the SBM-RGBD dataset are used for testing. The experimental results show that our super-resolution reconstruction algorithm achieves the best results among the six commonly used algorithms, and our moving object detection algorithm improves detection accuracy by up to 18.2%, 9.87%, and 40.2% in three scenes, respectively, compared with the original algorithm; it also achieves the best results compared with three other recent RGB-D-based methods. The proposed algorithm can better overcome the interference caused by shadows or illumination changes and detect moving objects more accurately.
Citation styles: ABNT, Harvard, Vancouver, APA, etc.
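
The paper's CNN pipeline is not reproduced here, but the rationale for adding depth can be shown with a toy change-detection rule: shadows and illumination changes alter color but not depth, so a pixel is flagged as moving only when both cues change. The thresholds and the static background model below are illustrative assumptions.

import numpy as np

def moving_mask(rgb, depth, bg_rgb, bg_depth, tau_color=30.0, tau_depth=0.05):
    # Color difference alone is fooled by shadows and illumination
    # changes; depth difference alone is noisy around object borders.
    d_color = np.linalg.norm(rgb.astype(np.float64) - bg_rgb.astype(np.float64), axis=2)
    d_depth = np.abs(depth - bg_depth)
    # Flag a pixel as moving only when both cues agree.
    return (d_color > tau_color) & (d_depth > tau_depth)
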
46

Zhang, Xiaoqian. "Construction and Implementation of Curriculum Knowledge Bases by Integrating New Educational Resources". International Journal of Emerging Technologies in Learning (iJET) 18, no. 20 (October 17, 2023): 137–50. http://dx.doi.org/10.3991/ijet.v18i20.44213.

Full text source
Abstract:
With the development of information technology (IT), the educational field has gradually entered a modernization stage. In this context, it is important to integrate various educational resources effectively to further improve teaching quality. However, existing methods of constructing curriculum knowledge bases face many challenges, such as data fusion and knowledge reasoning. To address these issues, this study conducted in-depth research and practice on the construction of curriculum knowledge bases for educational modernization. The main content includes the data fusion and knowledge reasoning of curriculum knowledge bases and the design of an intelligent Q&A framework for them. In the data fusion part, an entity alignment method based on semantic similarity was adopted, which effectively solved the problem of fusing semantic relationships between entities. In the knowledge reasoning part, an option-aware network based on attention mechanisms significantly improved the accuracy and efficiency of reasoning. In the intelligent Q&A framework, a sequence-to-sequence machine translation model was adopted, which successfully queried the knowledge bases and provided customized teaching content, thereby optimizing the learning process. This study provides effective theoretical and practical support for promoting educational modernization.
Citation styles: ABNT, Harvard, Vancouver, APA, etc.
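
The entity alignment step can be illustrated with a minimal cosine-similarity matcher over precomputed entity embeddings; the threshold and the greedy argmax matching are assumptions, not the paper's exact method.

import numpy as np

def align_entities(emb_a, emb_b, names_a, names_b, threshold=0.85):
    # Cosine similarity between entity embeddings of two sources.
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    sim = a @ b.T
    # Greedy matching: align each entity in A to its most similar
    # entity in B whenever the similarity clears the threshold.
    pairs = []
    for i, j in enumerate(sim.argmax(axis=1)):
        if sim[i, j] >= threshold:
            pairs.append((names_a[i], names_b[j], float(sim[i, j])))
    return pairs
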
47

Lv, Chunlei, and Lihua Cao. "Target Recognition Algorithm Based on Optical Sensor Data Fusion". Journal of Sensors 2021 (October 26, 2021): 1–12. http://dx.doi.org/10.1155/2021/1979523.

Full text source
Abstract:
Optical sensor data fusion technology has been a research hotspot in information science in recent years and is widely used in military and civilian fields because of its high accuracy and low cost; target recognition is one of its important research directions. Based on the characteristics of small-target optical imaging, this paper draws on frontier methods in image processing to propose a small-target recognition framework based on the fusion of visible and infrared image data, and improves the accuracy and stability of target recognition by improving the multisensor information fusion algorithm in the photoelectric meridian tracking system. A practical guide is provided for solving the small-target recognition problem. To facilitate rapid verification of the multisensor fusion algorithm, a simulation platform for an intelligent vehicle and its experimental environment is built on the Gazebo software, which realizes sensor data acquisition and the control and decision functions of the intelligent vehicle. The kinematic model of the intelligent vehicle is first described according to the design requirements, and the camera, LiDAR, and vehicle-body coordinate systems of the sensors are established. Then, the imaging models of the depth camera and LiDAR, the data acquisition principles of GPS and IMU, and the time synchronization relationships of the sensors are analyzed, and the error calibration and data acquisition experiments for each sensor are completed.
Citation styles: ABNT, Harvard, Vancouver, APA, etc.
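
The sensor coordinate systems the abstract establishes are conventionally chained with 4x4 homogeneous transforms; the sketch below moves a LiDAR point into the camera frame through the vehicle body frame. All extrinsic values are placeholder assumptions, not calibration results from the paper.

import numpy as np

def make_T(R, t):
    # Build a 4x4 homogeneous transform from rotation R and translation t.
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Placeholder extrinsics: LiDAR -> vehicle body, then body -> camera.
T_body_lidar = make_T(np.eye(3), np.array([1.2, 0.0, 1.6]))
T_cam_body = make_T(np.eye(3), np.array([0.0, 0.0, -1.4]))

p_lidar = np.array([5.0, 0.5, 0.2, 1.0])      # homogeneous LiDAR point
p_cam = T_cam_body @ T_body_lidar @ p_lidar   # same point in camera frame
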
48

Park, Seo-Jeon, Byung-Gyu Kim, and Naveen Chilamkurti. "A Robust Facial Expression Recognition Algorithm Based on Multi-Rate Feature Fusion Scheme". Sensors 21, no. 21 (October 20, 2021): 6954. http://dx.doi.org/10.3390/s21216954.

Full text source
Abstract:
In recent years, recognizing human emotions has grown more important as the artificial intelligence (AI) field has developed. Facial expression recognition (FER) is a way of understanding human emotion through facial expressions. We propose a robust multi-depth network that can efficiently classify facial expressions by feeding it varied and reinforced features. We designed the inputs of the multi-depth network as minimally overlapping frames so as to provide more spatio-temporal information to the network. To exploit the multi-depth structure, a 3D convolutional neural network (CNN) based on a multirate signal processing scheme is suggested. In addition, the input images are normalized adaptively based on the intensity of the given image, and the output features from all depth networks are reinforced by a self-attention module. We then concatenate the reinforced features and classify the expression with a joint fusion classifier. On the CK+ database, the proposed scheme achieves a comparable accuracy of 96.23%. On the MMI and GEMEP-FERA databases, it outperforms other state-of-the-art models with accuracies of 96.69% and 99.79%. On the AFEW database, known for its very unconstrained ("in the wild") conditions, the proposed algorithm achieves an accuracy of 31.02%.
Citation styles: ABNT, Harvard, Vancouver, APA, etc.
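
Two of the described ingredients can be sketched compactly: adaptive intensity normalization of the inputs, and reweighting of branch features before concatenation. The mean-based attention score below is only a stand-in for the paper's learned self-attention module.

import numpy as np

def adaptive_normalize(img):
    # Rescale each input by its own intensity statistics so that dark
    # and bright recordings reach the 3D CNNs in a comparable range.
    img = img.astype(np.float64)
    return (img - img.mean()) / (img.std() + 1e-6)

def fuse_branch_features(feats):
    # feats: list of 1-D feature vectors, one per depth branch.
    scores = np.array([f.mean() for f in feats])     # stand-in attention score
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over branches
    return np.concatenate([w * f for w, f in zip(weights, feats)])
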
49

Pire, Taihú, Rodrigo Baravalle, Ariel D'Alessandro, and Javier Civera. "Real-time dense map fusion for stereo SLAM". Robotica 36, no. 10 (June 20, 2018): 1510–26. http://dx.doi.org/10.1017/s0263574718000528.

Full text source
Abstract:
Summary: A robot should be able to estimate an accurate and dense 3D model of its environment (a map), along with its pose relative to it, all in real time, in order to navigate autonomously without collisions. As the robot moves from its starting position and the estimated map grows, the computational and memory footprint of a dense 3D map increases and may exceed the robot's capabilities in a short time. However, a global map is still needed to maintain consistency and to plan for distant goals, possibly out of the robot's field of view. In this work, we address this problem by proposing a real-time stereo mapping pipeline, feasible on standard CPUs, that is locally dense and globally sparse and accurate. Our algorithm is based on a graph relating poses and salient visual points, in order to maintain long-term accuracy at a small cost. Within this framework, we propose an efficient dense fusion of several stereo depths in the locality of the current robot pose. We evaluate the performance and accuracy of our algorithm on the public Tsukuba and KITTI datasets and demonstrate that it outperforms single-view stereo depth. We release the code as open source to facilitate use of the system and comparisons.
Citation styles: ABNT, Harvard, Vancouver, APA, etc.
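
The paper's dense fusion is more elaborate, but a common simplification for fusing several stereo depths of the same locality is averaging in inverse depth, where stereo triangulation noise is closer to Gaussian. The sketch assumes the maps are already warped into the reference keyframe and that zeros mark invalid pixels.

import numpy as np

def fuse_depths(depth_maps):
    # depth_maps: (n, H, W) stereo depths warped into one keyframe;
    # zeros mark pixels with no valid stereo match.
    valid = depth_maps > 0
    inv = np.zeros(depth_maps.shape, dtype=np.float64)
    inv[valid] = 1.0 / depth_maps[valid]
    counts = valid.sum(axis=0)
    mean_inv = inv.sum(axis=0) / np.maximum(counts, 1)
    # Back to depth; pixels never observed stay at zero.
    return np.where(counts > 0, 1.0 / np.maximum(mean_inv, 1e-12), 0.0)
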
50

Brover, Andrey V., Galina I. Brover, and Irina A. Topolskaya. "Physical Aspects of Laser Steel Processing in the Magnetic Field". Materials Science Forum 1052 (February 3, 2022): 128–33. http://dx.doi.org/10.4028/p-qi2mk6.

Full text source
Abstract:
The article presents the results of experiments on the laser irradiation of steels in a constant magnetic field. The experiments show that it is possible to increase the cooling rate while heat transfer from the surface into the lower metal layers decreases, which slightly reduces the depth of the laser-hardened layer and increases convective mixing of the fusion area; this has a positive impact on the quality of the surface layers of steels and alloys and on the strength properties of the product. Processing metal in a magnetic field reduces the degree of local plastic deformation of the irradiated layer owing to the magnetostriction phenomenon. Superimposing the magnetic field contributes to the two-phase decay of the martensite and reduces residual stresses and the risk of cracking.
Citation styles: ABNT, Harvard, Vancouver, APA, etc.