Journal articles on the topic "Backbone Extraction"

To see the other types of publications on this topic, follow the link: Backbone Extraction.

Format your source in APA, MLA, Chicago, Harvard, and other citation styles.

Consult the top 50 journal articles for your research on the topic "Backbone Extraction."

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen source in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Neal, Zachary P. "backbone: An R package to extract network backbones." PLOS ONE 17, no. 5 (May 31, 2022): e0269137. http://dx.doi.org/10.1371/journal.pone.0269137.

Full text of the source
Abstract:
Networks are useful for representing phenomena in a broad range of domains. Although their ability to represent complexity can be a virtue, it is sometimes useful to focus on a simplified network that contains only the most important edges: the backbone. This paper introduces and demonstrates a substantially expanded version of the backbone package for R, which now provides methods for extracting backbones from weighted networks, weighted bipartite projections, and unweighted networks. For each type of network, fully replicable code is presented first for small toy examples, then for complete empirical examples using transportation, political, and social networks. The paper also demonstrates the implications of several issues of statistical inference that arise in backbone extraction. It concludes by briefly reviewing existing applications of backbone extraction using the backbone package, and future directions for research on network backbone extraction.
APA, Harvard, Vancouver, ISO, and other styles
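The weighted-network methods such a package provides include significance filters like the classic disparity filter (Serrano et al.), which the literature below also references. A minimal pure-Python sketch of that idea follows; the function name and edge-list representation are illustrative, not the R package's API:

```python
from collections import defaultdict

def disparity_backbone(edges, alpha=0.05):
    """Keep an edge if it is statistically significant for at least one endpoint.

    edges: list of (u, v, weight) tuples for an undirected weighted network.
    Under the disparity-filter null model, an edge of weight w at a node with
    degree k and strength s has p-value (1 - w/s) ** (k - 1).
    """
    strength = defaultdict(float)
    degree = defaultdict(int)
    for u, v, w in edges:
        strength[u] += w
        strength[v] += w
        degree[u] += 1
        degree[v] += 1

    def p_value(w, node):
        k = degree[node]
        if k <= 1:               # degree-1 edges cannot be tested; retain them
            return 0.0
        return (1.0 - w / strength[node]) ** (k - 1)

    return [(u, v, w) for u, v, w in edges
            if min(p_value(w, u), p_value(w, v)) < alpha]
```

On a toy hub whose traffic is dominated by one heavy link, only that link survives at alpha = 0.05, which is exactly the "only the most important edges" behavior the abstract describes.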
2

Liu, Yudong, Yongtao Wang, Siwei Wang, Tingting Liang, Qijie Zhao, Zhi Tang, and Haibin Ling. "CBNet: A Novel Composite Backbone Network Architecture for Object Detection." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 11653–60. http://dx.doi.org/10.1609/aaai.v34i07.6834.

Full text of the source
Abstract:
In existing CNN based detectors, the backbone network is a very important component for basic feature extraction, and the performance of the detectors highly depends on it. In this paper, we aim to achieve better detection performance by building a more powerful backbone from existing ones like ResNet and ResNeXt. Specifically, we propose a novel strategy for assembling multiple identical backbones by composite connections between the adjacent backbones, to form a more powerful backbone named Composite Backbone Network (CBNet). In this way, CBNet iteratively feeds the output features of the previous backbone, namely high-level features, as part of input features to the succeeding backbone, in a stage-by-stage fashion, and finally the feature maps of the last backbone (named Lead Backbone) are used for object detection. We show that CBNet can be very easily integrated into most state-of-the-art detectors and significantly improve their performances. For example, it boosts the mAP of FPN, Mask R-CNN and Cascade R-CNN on the COCO dataset by about 1.5 to 3.0 points. Moreover, experimental results show that the instance segmentation results can be improved as well. Specifically, by simply integrating the proposed CBNet into the baseline detector Cascade Mask R-CNN, we achieve a new state-of-the-art result on COCO dataset (mAP of 53.3) with a single model, which demonstrates great effectiveness of the proposed CBNet architecture. Code will be made available at https://github.com/PKUbahuangliuhe/CBNet.
APA, Harvard, Vancouver, ISO, and other styles
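The stage-by-stage composite connection the abstract describes can be illustrated with a deliberately tiny sketch: scalar "stages" stand in for convolutional stages, and plain addition stands in for the paper's learned composite connections (both simplifications are assumptions for illustration only):

```python
def run_backbone(stages, x, assist_feats=None):
    """Run a 'backbone' given as a list of stage functions.

    If assist_feats is given, each stage's input is composed with the
    assisting backbone's output for that stage (toy composite connection).
    """
    feats = []
    for i, stage in enumerate(stages):
        if assist_feats is not None:
            x = x + assist_feats[i]   # composite connection (here: addition)
        x = stage(x)
        feats.append(x)
    return feats

# Toy scalar "stages" stand in for convolutional stages on feature maps.
stages = [lambda x: x * 2, lambda x: x + 3]
assist_feats = run_backbone(stages, 1.0)               # assisting backbone
lead_feats = run_backbone(stages, 1.0, assist_feats)   # lead backbone
```

The lead backbone sees its own input enriched by the assisting backbone's stage outputs, which is the iterative feeding scheme CBNet proposes; its final feature maps are then handed to the detection head.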
3

Gomes Ferreira, Carlos Henrique, Fabricio Murai, Ana P. C. Silva, Martino Trevisan, Luca Vassio, Idilio Drago, Marco Mellia, and Jussara M. Almeida. "On network backbone extraction for modeling online collective behavior." PLOS ONE 17, no. 9 (September 15, 2022): e0274218. http://dx.doi.org/10.1371/journal.pone.0274218.

Full text of the source
Abstract:
Collective user behavior in social media applications often drives several important online and offline phenomena linked to the spread of opinions and information. Several studies have focused on the analysis of such phenomena using networks to model user interactions, represented by edges. However, only a fraction of edges contribute to the actual investigation. Even worse, the often large number of non-relevant edges may obfuscate the salient interactions, blurring the underlying structures and user communities that capture the collective behavior patterns driving the target phenomenon. To solve this issue, researchers have proposed several network backbone extraction techniques to obtain a reduced and representative version of the network that better explains the phenomenon of interest. Each technique has its specific assumptions and procedure to extract the backbone. However, the literature lacks a clear methodology to highlight such assumptions, discuss how they affect the choice of a method and offer validation strategies in scenarios where no ground truth exists. In this work, we fill this gap by proposing a principled methodology for comparing and selecting the most appropriate backbone extraction method given a phenomenon of interest. We characterize ten state-of-the-art techniques in terms of their assumptions, requirements, and other aspects that one must consider to apply them in practice. We present four steps to apply, evaluate and select the best method(s) for a given target phenomenon. We validate our approach using two case studies with different requirements: online discussions on Instagram and coordinated behavior in WhatsApp groups. We show that each method can produce very different backbones, underlining that the choice of an adequate method is of utmost importance to reveal valuable knowledge about the particular phenomenon under investigation.
APA, Harvard, Vancouver, ISO, and other styles
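One concrete way to quantify the claim that "each method can produce very different backbones" is to compare the edge sets two methods retain; a Jaccard overlap near 0 means near-disjoint backbones. The helper below is an illustrative sketch, not part of the paper's methodology:

```python
def backbone_overlap(b1, b2):
    """Jaccard similarity between two backbones' undirected edge sets.

    b1, b2: lists of (u, v, ...) edge tuples; extra fields (e.g. weights)
    are ignored, and (u, v) is treated the same as (v, u).
    """
    e1 = {frozenset((u, v)) for u, v, *rest in b1}
    e2 = {frozenset((u, v)) for u, v, *rest in b2}
    return len(e1 & e2) / len(e1 | e2)
```

Two backbones sharing one of three distinct edges score 1/3, regardless of edge direction or weight differences.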
4

Brauckhoff, Daniela, Xenofontas Dimitropoulos, Arno Wagner, and Kavé Salamatian. "Anomaly Extraction in Backbone Networks Using Association Rules." IEEE/ACM Transactions on Networking 20, no. 6 (December 2012): 1788–99. http://dx.doi.org/10.1109/tnet.2012.2187306.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
5

Cao, Jie, Cuiling Ding, and Benyun Shi. "Motif-based functional backbone extraction of complex networks." Physica A: Statistical Mechanics and its Applications 526 (July 2019): 121123. http://dx.doi.org/10.1016/j.physa.2019.121123.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
6

Dai, Liang, Ben Derudder, and Xingjian Liu. "Transport network backbone extraction: A comparison of techniques." Journal of Transport Geography 69 (May 2018): 271–81. http://dx.doi.org/10.1016/j.jtrangeo.2018.05.012.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

Yuan, Hanning, Yanni Han, Ning Cai, and Wei An. "A Multi-Granularity Backbone Network Extraction Method Based on the Topology Potential." Complexity 2018 (October 22, 2018): 1–8. http://dx.doi.org/10.1155/2018/8604132.

Full text of the source
Abstract:
Inspired by field theory in physics, in this paper we propose a novel backbone network compression algorithm based on topology potential. With consideration of network connectivity and backbone compression precision, the method is flexible and efficient with respect to various network characteristics. Meanwhile, we define a metric named compression ratio to evaluate the performance of backbone networks, which provides an optimal extraction granularity based on the contributions of degree number and topology connectivity. We apply our method to the publicly available Internet AS network and Hep-th network, which are public datasets in the field of complex network analysis. Furthermore, we compare the obtained results using the metrics of precision ratio and recall ratio. All these results show that our algorithm is superior to the compared methods. Moreover, we investigate the characteristics of the extracted backbone in terms of degree distribution and self-similarity. It is shown that the compressed backbone network shares many similarity properties with the original network in terms of the power-law exponent.
APA, Harvard, Vancouver, ISO, and other styles
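The abstract does not spell out its exact potential function, so the sketch below assumes the common form of topology potential: each node receives unit "mass" from every reachable node, decayed by a Gaussian of the hop distance. Nodes with high potential are then natural backbone candidates. All names and the unit-mass choice are illustrative assumptions:

```python
import math
from collections import deque

def topology_potential(adj, sigma=1.0):
    """Topology potential of each node in an unweighted graph.

    adj: adjacency dict {node: [neighbors]}.  Each node's potential is the
    sum over all other reachable nodes u of exp(-(d(v, u) / sigma) ** 2),
    where d is the hop distance found by breadth-first search.
    """
    def hop_distances(src):
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return dist

    potential = {}
    for v in adj:
        dist = hop_distances(v)
        potential[v] = sum(math.exp(-(d / sigma) ** 2)
                           for u, d in dist.items() if u != v)
    return potential
```

On a simple path A-B-C the middle node scores highest, matching the intuition that central, well-connected nodes anchor the backbone.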
8

Qian, Liqiang, Zhan Bu, Mei Lu, Jie Cao, and Zhiang Wu. "Extracting Backbones from Weighted Complex Networks with Incomplete Information." Abstract and Applied Analysis 2015 (2015): 1–11. http://dx.doi.org/10.1155/2015/105385.

Full text of the source
Abstract:
The backbone is the natural abstraction of a complex network, which can help people understand a networked system in a more simplified form. Traditional backbone extraction methods tend to include many outliers in the backbone. What is more, they often suffer from computational inefficiency: the exhaustive search of all nodes or edges is often prohibitively expensive. In this paper, we propose a backbone extraction heuristic with incomplete information (BEHwII) to find the backbone in a complex weighted network. First, a strict filtering rule is carefully designed to determine which edges are to be preserved or discarded. Second, we present a local search model to examine part of the edges in an iterative way, which relies only on local/incomplete knowledge rather than a global view of the network. Experimental results on four real-life networks demonstrate the advantage of BEHwII over the classic disparity filter method in terms of both effectiveness and efficiency.
APA, Harvard, Vancouver, ISO, and other styles
9

Li, Decai, and Xingguo Jiang. "Kinship Verification Method of Face Image Deep Feature Fusion." Academic Journal of Science and Technology 5, no. 1 (February 28, 2023): 57–62. http://dx.doi.org/10.54097/ajst.v5i1.5348.

Full text of the source
Abstract:
Kinship verification is an important and challenging problem in computer vision. How to extract discriminative features is the key to improving the accuracy of kinship verification. At present, convolutional neural networks (CNNs) for feature extraction have achieved remarkable success in the field of computer vision, making them the approach most scholars use to study kinship verification. However, few have used the self-attention mechanism, with its global capture capability, to build a backbone feature classification network. Therefore, this paper proposes a non-convolutional backbone feature extraction network model, which expands the range of classification networks available for kinship verification. Specifically, the paper proposes to use Vision Transformers as the basic backbone feature extraction network, combined with a CNN with a local attention mechanism, to provide a unique integrated solution for kinship verification. The proposed GLANet model is used for kinship verification and can verify 11 kinship pairs. The final experimental results show that, on the FIW dataset, compared with the leading RFIW2020 challenge method, the proposed method achieves better kinship verification performance, with an accuracy of 79.6%.
APA, Harvard, Vancouver, ISO, and other styles
10

Song, Huina, Han Wu, Jianhua Huang, Hua Zhong, Meilin He, Mingkun Su, Gaohang Yu, Mengyuan Wang, and Jianwu Zhang. "HA-Unet: A Modified Unet Based on Hybrid Attention for Urban Water Extraction in SAR Images." Electronics 11, no. 22 (November 17, 2022): 3787. http://dx.doi.org/10.3390/electronics11223787.

Full text of the source
Abstract:
Urban water plays a significant role in the urban ecosystem, but urban water extraction is still a challenging task in the automatic interpretation of synthetic aperture radar (SAR) images. The influence of radar shadows and strong scatterers in urban areas may lead to misclassification in urban water extraction. Moreover, the local features captured by convolutional layers in Convolutional Neural Networks (CNNs) are generally redundant and cannot make effective use of global information to guide the prediction of water pixels. To effectively emphasize the identifiable water characteristics and fully exploit the global information of SAR images, a modified Unet based on a hybrid attention mechanism is proposed in this paper to improve the performance of urban water extraction. Considering the feature extraction ability and the global modeling capability in SAR image segmentation, the Channel and Spatial Attention Module (CSAM) and the Multi-head Self-Attention Block (MSAB) are both introduced into the proposed Hybrid Attention Unet (HA-Unet). In this work, Resnet50 is adopted as the backbone of HA-Unet to extract multi-level features of SAR images. During the feature extraction process, CSAM, based on local attention, is adopted to enhance the meaningful water features and adaptively ignore unnecessary features in the feature maps of two shallow layers. In the last two layers of the backbone, MSAB is introduced to capture the global information of SAR images and generate global attention. In addition, two global attention maps generated by MSAB are aggregated to reconstruct the spatial feature relationship of SAR images from high-resolution feature maps. The experimental results on Sentinel-1A SAR images show that the proposed urban water extraction method has a strong ability to extract water bodies in complex urban areas. The ablation experiment and visualization results vividly indicate that both CSAM and MSAB contribute significantly to extracting urban water accurately and effectively.
APA, Harvard, Vancouver, ISO, and other styles
11

Sivanandan, Revathy, and J. Jayakumari. "Development of a Novel CNN Architecture for Improved Diagnosis from Liver Ultrasound Tumor Images." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 30, no. 02 (April 2022): 189–210. http://dx.doi.org/10.1142/s0218488522500088.

Full text of the source
Abstract:
Malignant liver tumors are considered one of the most common cancers and a leading cause of cancer death worldwide. When using convolutional neural networks (CNNs) for feature extraction from ultrasound (US) images and subsequent tasks, most works focus on pre-trained architectures using transfer learning, which can sometimes cause negative transfer and reduced performance in the medical domain. A new method based on Pascal's Triangle was developed for feature extraction using a CNN. The convolutions and kernels in the Pascal's Triangle based CNN (PT-CNN) follow the coefficients at each level of Pascal's Triangle. Due to the fuzzy nature of US images, the input layer takes a combination of the image and its neutrosophically pre-processed components as a single unit to improve noise robustness. The proposed CNN, when implemented as a feature extraction backbone for binary classification, gave validation accuracy > 90% and test accuracy > 95%, outperforming other state-of-the-art CNN architectures. For tumor segmentation using the Mask R-CNN framework, the aggregated feature maps from all convolutional layers were given as input to the region proposal network for multiscale region proposals and to facilitate the detection of tumors of varying sizes. This gave an F1 score > 0.95 when compared with other architectures as the backbone of the Mask R-CNN framework and with simple U-Net based segmentation. It is suggested that the promising results of PT-CNN as a feature extraction backbone could be further investigated in other domains.
APA, Harvard, Vancouver, ISO, and other styles
12

Hu, Haijian, Yicen Liu, and Haina Rong. "Detection of Insulators on Power Transmission Line Based on an Improved Faster Region-Convolutional Neural Network." Algorithms 15, no. 3 (March 1, 2022): 83. http://dx.doi.org/10.3390/a15030083.

Full text of the source
Abstract:
Detecting insulators on a power transmission line is of great importance for the safe operation of power systems. To address the missed detections and misjudgments that VGG16, the original feature extraction network of the faster region-convolutional neural network (R-CNN), produces for insulators of different sizes, and to improve the accuracy of insulator detection on power transmission lines, an improved faster R-CNN algorithm is proposed. The improved algorithm replaces the original backbone feature extraction network VGG16 in faster R-CNN with the deeper and more complex Resnet50 network, adding an efficient channel attention module based on the channel attention mechanism. Experimental results show that feature extraction performance is effectively improved by the improved backbone feature extraction network. The network model is trained on a training set of 6174 insulator pictures and tested on a testing set of 686 pictures. Compared with the traditional faster R-CNN, the mean average precision of the improved faster R-CNN increases to 89.37%, an improvement of 1.63%.
APA, Harvard, Vancouver, ISO, and other styles
13

Xu, Wenhao. "MEU Convolutional Neural Network and Random Noise Suppression of Seismic Data." Highlights in Science, Engineering and Technology 7 (August 3, 2022): 299–304. http://dx.doi.org/10.54097/hset.v7i.1086.

Full text of the source
Abstract:
To address the strong interference caused by random noise in seismic exploration, this paper proposes a Multiscale Enhancement U-Net (MEU-Net). First, the network performs multiple convolution and pooling operations on the data in the backbone feature extraction network; it then conducts channel addition, convolution, and upsampling in the enhancement feature extraction network to extract and restore data information; finally, it further improves the denoising effect through the dilated convolution, residual module, and attention mechanism in the multi-scale enhancement module. Application to real data shows that the method can achieve a good denoising effect under noise of different intensities and can be widely used in data denoising.
APA, Harvard, Vancouver, ISO, and other styles
14

Zhao, Yibo, Jianjun Liu, Jinlong Yang, and Zebin Wu. "Remote Sensing Image Scene Classification via Self-Supervised Learning and Knowledge Distillation." Remote Sensing 14, no. 19 (September 27, 2022): 4813. http://dx.doi.org/10.3390/rs14194813.

Full text of the source
Abstract:
The main challenges of remote sensing image scene classification are extracting discriminative features and making full use of the training data. The current mainstream deep learning methods usually only use the hard labels of the samples, ignoring the potential soft labels and natural labels. Self-supervised learning can take full advantage of natural labels. However, it is difficult to train a self-supervised network due to the limitations of the dataset and computing resources. We propose a self-supervised knowledge distillation network (SSKDNet) to solve the aforementioned challenges. Specifically, the feature maps of the backbone are used as supervision signals, and the branch learns to restore the low-level feature maps after background masking and shuffling. The “dark knowledge” of the branch is transferred to the backbone through knowledge distillation (KD). The backbone and branch are optimized together in the KD process without independent pre-training. Moreover, we propose a feature fusion module to fuse feature maps dynamically. In general, SSKDNet can make full use of soft labels and has excellent discriminative feature extraction capabilities. Experimental results conducted on three datasets demonstrate the effectiveness of the proposed approach.
APA, Harvard, Vancouver, ISO, and other styles
15

Ji, Xiaodong, Qiaoning Yang, Xiuhui Yang, Jiahao Zheng, and Mengyan Gong. "Human Pose Estimation: Multi-stage Network Based on HRNet." Journal of Physics: Conference Series 2400, no. 1 (December 1, 2022): 012034. http://dx.doi.org/10.1088/1742-6596/2400/1/012034.

Full text of the source
Abstract:
A multi-stage network uses stacked networks to enhance feature extraction capability and can gradually refine keypoints using the information output by previous stages, making such networks well suited to human pose estimation. However, most current multi-stage networks use a codec structure as the backbone, in which downsampling causes information loss. HRNet maintains high-resolution features to supply the information lost in the downsampling stage. In this regard, we propose a novel two-stage network with HRNet as the backbone and a stacked codec structure. HRNet has a more efficient feature extraction capability, and the stacked codec network can utilize the multi-scale features generated by HRNet more effectively. This method obtains a 1.2 AP improvement over HRNet and a significant improvement over other two-stage networks.
APA, Harvard, Vancouver, ISO, and other styles
16

Xu, Shanyong, Jicheng Deng, Yourui Huang, Liuyi Ling, and Tao Han. "Research on Insulator Defect Detection Based on an Improved MobilenetV1-YOLOv4." Entropy 24, no. 11 (November 2, 2022): 1588. http://dx.doi.org/10.3390/e24111588.

Full text of the source
Abstract:
Insulator devices are important for transmission lines, and defects such as insulator bursting and string loss affect the safety of transmission lines. In this study, we aim to investigate the problems of slow detection speed and low efficiency of traditional insulator defect detection algorithms, and to improve the accuracy of insulator fault identification and the convenience of daily work; therefore, we propose an insulator defect detection algorithm based on an improved MobilenetV1-YOLOv4. First, the backbone feature extraction network of YOLOv4 ‘Backbone’ is replaced with the lightweight module Mobilenet-V1. Second, the scSE attention mechanism is introduced in stages of preliminary feature extraction and enhanced feature extraction, sequentially. Finally, the depthwise separable convolution substitutes the 3 × 3 convolution of the enhanced feature extraction network to reduce the overall number of network parameters. The experimental results show that the weight of the improved algorithm is 57.9 MB, which is 62.6% less than that obtained by the MobilenetV1-YOLOv4 model; the average accuracy of insulator defect detection is improved by 0.26% and reaches 98.81%; and the detection speed reaches 190 frames per second with an increase of 37 frames per second.
APA, Harvard, Vancouver, ISO, and other styles
17

Li, Ping, Taiyu Han, Yifei Ren, Peng Xu, and Hongliu Yu. "Improved YOLOv4-tiny based on attention mechanism for skin detection." PeerJ Computer Science 9 (March 10, 2023): e1288. http://dx.doi.org/10.7717/peerj-cs.1288.

Full text of the source
Abstract:
Background An automatic bathing robot needs to identify the area to be bathed in order to perform visually-guided bathing tasks. Skin detection is the first step. The deep convolutional neural network (CNN)-based object detection algorithm shows excellent robustness to light and environmental changes when performing skin detection. The one-stage object detection algorithm has good real-time performance, and is widely used in practical projects. Methods In our previous work, we performed skin detection using Faster R-CNN (ResNet50 as backbone), Faster R-CNN (MobileNetV2 as backbone), YOLOv3 (DarkNet53 as backbone), YOLOv4 (CSPDarknet53 as backbone), and CenterNet (Hourglass as backbone), and found that YOLOv4 had the best performance. In this study, we considered the convenience of practical deployment and used the lightweight version of YOLOv4, i.e., YOLOv4-tiny, for skin detection. Additionally, we added three kinds of attention mechanisms to strengthen feature extraction: SE, ECA, and CBAM. We added the attention module to the two feature layers of the backbone output. In the enhanced feature extraction network part, we applied the attention module to the up-sampled features. For full comparison, we used other lightweight methods that use MobileNetV1, MobileNetV2, and MobileNetV3 as the backbone of YOLOv4. We established a comprehensive evaluation index to evaluate the performance of the models that mainly reflected the balance between model size and mAP. Results The experimental results revealed that the weight file of YOLOv4-tiny without attention mechanisms was reduced to 9.2% of YOLOv4, but the mAP maintained 67.3% of YOLOv4. YOLOv4-tiny's performance improved after combining the CBAM and ECA modules, but the addition of SE deteriorated the performance of YOLOv4-tiny. MobileNetVX_YOLOv4 (X = 1, 2, 3), which used MobileNetV1, MobileNetV2, and MobileNetV3 as the backbone of YOLOv4, showed higher mAP than the YOLOv4-tiny series (including YOLOv4-tiny and three improved YOLOv4-tiny based on the attention mechanism) but had a larger weight file. The network performance was evaluated using the comprehensive evaluation index. The model, which integrates the CBAM attention mechanism and YOLOv4-tiny, achieved a good balance between model size and detection accuracy.
APA, Harvard, Vancouver, ISO, and other styles
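Of the three attention modules compared in the entry above, SE (squeeze-and-excitation) is the simplest to sketch. The toy below collapses the real SE block's two-layer excitation MLP into a single per-channel gate weight, an assumption made purely for illustration:

```python
import math

def se_gate(channel_maps, w):
    """Squeeze-and-Excitation-style channel attention (toy single-layer gate).

    channel_maps: list of 2D feature maps (one per channel, as lists of lists).
    w: one gate weight per channel, standing in for the learned excitation MLP.
    """
    # Squeeze: global average pool each channel to a single scalar.
    squeezed = [sum(map(sum, ch)) / (len(ch) * len(ch[0])) for ch in channel_maps]
    # Excite: sigmoid gate per channel (a real SE block uses a two-layer MLP
    # with a channel-reduction bottleneck here).
    gates = [1.0 / (1.0 + math.exp(-wi * si)) for wi, si in zip(w, squeezed)]
    # Scale: reweight every element of each channel's map by its gate.
    return [[[g * x for x in row] for row in ch]
            for g, ch in zip(gates, channel_maps)]
```

The point of the mechanism is that channels carrying useful evidence get gates near 1 while uninformative channels are suppressed, which is how SE/ECA/CBAM "strengthen feature extraction" when bolted onto a backbone's output layers.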
18

Han, Jiapeng, Zhenzhou Wang, Yun Wang, and Weimin Hou. "Building extraction algorithm from remote sensing images based on improved DeepLabv3+ network." Journal of Physics: Conference Series 2303, no. 1 (July 1, 2022): 012010. http://dx.doi.org/10.1088/1742-6596/2303/1/012010.

Full text of the source
Abstract:
With the development of deep learning, quickly extracting high-precision building information from remote sensing images has become the research focus of intelligent application and processing of remote sensing data. Aiming at the problems of slow extraction speed and incomplete edge segmentation in building extraction from remote sensing images, a building extraction algorithm based on an improved DeepLabv3+ network is proposed. The more lightweight network MobileNetv3 is used to replace Xception, the feature extraction backbone of the original DeepLabv3+ semantic segmentation model, and the standard convolution in the atrous spatial pyramid pooling module is replaced with depthwise separable convolution, which reduces the amount of computation and improves training speed. A DAMM (Dual Attention Mechanism Module) is connected in parallel with ASPP (Atrous Spatial Pyramid Pooling) to improve the segmentation accuracy of edge targets. The model is verified on the WHU and Massachusetts datasets. The results show that the number of training parameters and the training time of the model are reduced, and the accuracy of building extraction is effectively improved, meeting the requirements of rapid extraction of high-precision buildings.
APA, Harvard, Vancouver, ISO, and other styles
19

Norita, Hassan, Ahmad Haji Sahrim, Norhamidi Muhamad, and Mohd Afian Omar. "Thermoplastic Natural Rubber (TPNR) as a Backbone Binder for Metal Injection Molding Process." Advanced Materials Research 428 (January 2012): 24–27. http://dx.doi.org/10.4028/www.scientific.net/amr.428.24.

Full text of the source
Abstract:
Thermoplastic natural rubber (TPNR) blends consist of thermoplastics, such as low-density polyethylene (LDPE), alloyed with natural rubber (NR) at different thermoplastic-to-NR ratios. The soft grade of TPNR is produced by blends with compositions richer in rubber, while the harder grades can contain up to about 30% NR. This study investigates the influence of a new binder system containing TPNR on the injection parameters, the density of injection-molded specimens, and changes during solvent extraction. Results show that TPNR plays an important role as a good binder system and shortens the solvent extraction process.
APA, Harvard, Vancouver, ISO, and other styles
20

Fang, Jian Jun, Qiang Qiang Zhao, and Ming Fang Du. "Extraction and Segmentation of Books Call Number Image for Books on the Shelves of Library." Applied Mechanics and Materials 614 (September 2014): 374–77. http://dx.doi.org/10.4028/www.scientific.net/amm.614.374.

Full text of the source
Abstract:
Extracting and segmenting the book call number from book spine ("backbone") images is a key technology enabling a book retrieval-and-return robot to place books on and withdraw them from library shelves. This paper presents an image extraction and segmentation method that uses color features and edge information to segment adjacent book call numbers with similar backgrounds. Taking spine images of thick, thicker, medium-thick, thin, very thin, and mixed books as experimental samples, the results show that this method effectively extracts and segments adjacent book call numbers. The segmentation approach provides technical support for the robot to carry out automatic book retrieval and return operations.
APA, Harvard, Vancouver, ISO, and other styles
21

Wang, Wei, Bowen Xing, Lan Zhang, and Wugui Wang. "Research on U-Net based Underwater Holothurian Recognition Method." Journal of Physics: Conference Series 2213, no. 1 (March 1, 2022): 012037. http://dx.doi.org/10.1088/1742-6596/2213/1/012037.

Full text of the source
Abstract:
Holothurians can blend into the nearby environment by changing body color, which increases the difficulty of visual identification. A U-Net based visual recognition algorithm suitable for underwater organisms is proposed. The backbone feature extraction network adopts VGG16. In addition, upsampling is used twice for feature fusion. Test results show that the algorithm has high recognition accuracy and good feature extraction for holothurians of different shapes in the natural environment.
APA, Harvard, Vancouver, ISO, and other styles
22

Qing, Yuhao, Wenyi Liu, Liuyan Feng, and Wanjia Gao. "Improved YOLO Network for Free-Angle Remote Sensing Target Detection." Remote Sensing 13, no. 11 (June 1, 2021): 2171. http://dx.doi.org/10.3390/rs13112171.

Full text of the source
Abstract:
Despite significant progress in object detection tasks, remote sensing image target detection is still challenging owing to complex backgrounds, large differences in target sizes, and uneven distribution of rotating objects. In this study, we consider model accuracy, inference speed, and detection of objects at any angle. We also propose a RepVGG-YOLO network using an improved RepVGG model as the backbone feature extraction network, which performs the initial feature extraction from the input image and considers network training accuracy and inference speed. We use an improved feature pyramid network (FPN) and path aggregation network (PANet) to reprocess feature output by the backbone network. The FPN and PANet module integrates feature maps of different layers, combines context information on multiple scales, accumulates multiple features, and strengthens feature information extraction. Finally, to maximize the detection accuracy of objects of all sizes, we use four target detection scales at the network output to enhance feature extraction from small remote sensing target pixels. To solve the angle problem of any object, we improved the loss function for classification using circular smooth label technology, turning the angle regression problem into a classification problem, and increasing the detection accuracy of objects at any angle. We conducted experiments on two public datasets, DOTA and HRSC2016. Our results show the proposed method performs better than previous methods.
APA, Harvard, Vancouver, ISO, and other styles
23

Shi, Peicheng, Xinhe Chen, Heng Qi, Chenghui Zhang, and Zhiqiang Liu. "Object Detection Based on Swin Deformable Transformer-BiPAFPN-YOLOX." Computational Intelligence and Neuroscience 2023 (March 9, 2023): 1–18. http://dx.doi.org/10.1155/2023/4228610.

Full text of the source
Abstract:
Object detection technology plays a crucial role in people’s everyday lives, as well as in enterprise production and modern national defense. Most current object detection networks, such as YOLOX, employ convolutional neural networks rather than a Transformer as the backbone. However, these techniques lack a global understanding of the images and may lose meaningful information, such as the precise location of the most active feature detector. Recently, Transformers with larger receptive fields have shown performance superior to that of corresponding convolutional neural networks in computer vision tasks. The Transformer splits the image into patches and subsequently feeds them to the Transformer in a sequence structure similar to word embeddings, which makes it capable of globally modeling entire images and thus of a global understanding of images. However, simply using a Transformer with a larger receptive field raises several concerns. For example, self-attention in the Swin Transformer backbone limits its ability to model long-range relations, resulting in poor feature extraction and slow convergence during training. To address these problems, we first propose an important-region-based Reconstructed Deformable Self-Attention that shifts attention to important regions for efficient global modeling. Second, based on the Reconstructed Deformable Self-Attention, we propose the Swin Deformable Transformer backbone, which improves feature extraction ability and convergence speed. Finally, based on the Swin Deformable Transformer backbone, we propose a novel object detection network, namely, Swin Deformable Transformer-BiPAFPN-YOLOX. Experimental results on the COCO dataset show that the training period is reduced by 55.4%, average precision is increased by 2.4%, average precision for small objects is increased by 3.7%, and inference speed is increased by 35%.
APA, Harvard, Vancouver, ISO, and other styles
24

Hotchko, M. "Automated extraction of backbone deuteration levels from amide H/2H mass spectrometry experiments." Protein Science 15, no. 3 (February 1, 2006): 583–601. http://dx.doi.org/10.1110/ps.051774906.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
25

Liebig, J., and A. Rao. "Fast extraction of the backbone of projected bipartite networks to aid community detection." EPL (Europhysics Letters) 113, no. 2 (January 1, 2016): 28003. http://dx.doi.org/10.1209/0295-5075/113/28003.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
26

Hong, Danyang, Chunping Qiu, Anzhu Yu, Yujun Quan, Bing Liu, and Xin Chen. "Multi-Task Learning for Building Extraction and Change Detection from Remote Sensing Images." Applied Sciences 13, no. 2 (January 12, 2023): 1037. http://dx.doi.org/10.3390/app13021037.

Full text of the source
Abstract:
Building extraction (BE) and change detection (CD) from remote sensing (RS) imagery are significant yet highly challenging tasks with substantial application potential in urban management. Learning representative multi-scale features from RS images is a crucial step toward practical BE and CD solutions, as in other DL-based applications. To better exploit the available labeled training data for representation learning, we propose a multi-task learning (MTL) network for simultaneous BE and CD, comprising the state-of-the-art (SOTA) Swin transformer as a shared backbone network and multiple heads for predicting building labels and changes. Using the popular Wuhan University building change detection dataset (WHU-CD), we benchmarked detailed designs of the MTL network, including backbone and pre-training choices. With the selected optimal setting, the intersection over union (IoU) score was improved from 70 to 81 on the WHU-CD. The experimental results of different settings demonstrate the effectiveness of the proposed MTL method. In particular, we achieved top scores in BE and CD from optical images in the 2021 Gaofen Challenge. Our method also shows transferable performance on an unseen CD dataset, indicating high label efficiency.
APA, Harvard, Vancouver, ISO, and other styles
27

Dixit, Mayank, Kuldeep Chaurasia, Vipul Kumar Mishra, Dilbag Singh, and Heung-No Lee. "6+: A Novel Approach for Building Extraction from a Medium Resolution Multi-Spectral Satellite." Sustainability 14, no. 3 (January 29, 2022): 1615. http://dx.doi.org/10.3390/su14031615.

Full text of the source
Abstract:
For smart, sustainable cities and urban planning, building extraction from satellite images is a crucial activity, and it is especially challenging at medium spatial resolution. This work proposes a novel methodology named ‘6+’ for improving building extraction in 10 m medium-spatial-resolution multispectral satellite images. The data resources used are Sentinel-2A satellite images and OpenStreetMap (OSM). The proposed methodology merges the available high-resolution bands, super-resolved Short-Wave InfraRed (SWIR) bands, and an Enhanced Normalized Difference Impervious Surface Index (ENDISI) built-up-index-based image to produce enhanced multispectral satellite images that contain additional information on impervious surfaces, improving building extraction results. The proposed methodology produces a novel building extraction dataset named ‘6+’. Another dataset, named ‘6 band’, is also prepared for comparison by merging super-resolved bands 11 and 12 along with all the highest-spatial-resolution bands. The building ground truths are prepared using OSM shapefiles. Models specific to building extraction, i.e., BRRNet, JointNet, SegUnet, Dilated-ResUnet, and other Unet-based encoder-decoder models with backbones from various state-of-the-art image segmentation algorithms, are applied to both datasets. Comparative analyses show that all models achieve better performance on the ‘6+’ dataset than on the ‘6 band’ dataset in terms of F1-Score and Intersection over Union (IoU).
APA, Harvard, Vancouver, ISO, and other styles
28

Yu, Jianyong, and Dejin Zhao. "Design of vision recognition system for picking robots." Journal of Physics: Conference Series 2383, no. 1 (December 1, 2022): 012086. http://dx.doi.org/10.1088/1742-6596/2383/1/012086.

Full text of the source
Abstract:
Based on the yolov4-tiny deep learning neural network, an improved yolov4-tiny network model is proposed to reduce the size of the network model and to achieve accurate and fast recognition of apple-pear fruits, including overlapping fruits and fruits obscured by branches, in the natural environment. The main improvements are as follows. First, the Involution operator is introduced into the CSPBlock residual module of the backbone network to replace its 3×3 convolution kernel; this enlarges the receptive field of the feature layers in the network and enhances the extraction of target feature information through the spatial consistency and channel specificity of the Involution operator. Second, the output of the first-layer CSPBlock module in the backbone network, which contains rich surface information of the image, is fused with the first- and second-scale feature maps in the feature pyramid for multi-scale feature fusion, enhancing the extraction of dense small-target feature information. Training experiments on a self-collected apple-pear dataset show that the improved network structure achieves an accuracy of 95.45%, an improvement of 2.84%, and a recall of 94.92%, an improvement of 2.83%, compared with yolov4-tiny. The improved method increases the accuracy of fruit recognition and provides a theoretical basis for an apple-pear picking robot to quickly identify apple-pears for picking.
APA, Harvard, Vancouver, ISO, and other styles
29

Eiden, A., T. Eickhoff, J. C. Göbel, C. Apostolov, P. Savarino, and T. Dickopf. "Data Networking for Industrial Data Analysis Based on a Data Backbone System." Proceedings of the Design Society 2 (May 2022): 693–702. http://dx.doi.org/10.1017/pds.2022.71.

Full text of the source
Abstract:
Abstract Industrial Data Analytics needs access to huge amounts of data, which are scattered across different IT systems. As part of an integrated reference kit for Industrial Data Analytics, there is a need for a data backbone system that provides access to these data. This system needs solutions for the extraction of data, the management of data, and an analysis pipeline for those data. This paper presents an approach for such a data backbone system.
APA, Harvard, Vancouver, ISO, and other styles
30

Wang, Libo, Fangcheng Liu, Tengfei Li, Dawei Liu, Yaqin Xu, and Yu Yang. "Enzyme Assisted Extraction, Purification and Structure Analysis of the Polysaccharides from Naked Pumpkin Seeds." Applied Sciences 8, no. 10 (October 10, 2018): 1866. http://dx.doi.org/10.3390/app8101866.

Full text of the source
Abstract:
Enzyme-assisted extraction was used to extract the polysaccharides from pumpkin seeds (PSP), and the extraction parameters were optimized by response surface methodology (RSM). Under the optimal experimental parameters (extraction temperature of 60 °C, extraction time of 43 min, enzyme concentration of 2.5%, and pH of 6.0), the yield of PSP was 3.22 ± 0.04%, in close agreement with the predicted value (3.24%). After further purification on an anion-exchange column and a gel-filtration column, a novel purified polysaccharide (PSPE) with a molecular weight of 16,700 g/mol was obtained. PSPE was mainly composed of mannose, galactose, and glucose in the molar ratio of 1.00:3.84:1.62. NMR spectra analysis showed that the major backbone of PSPE consisted of →4)-α-d-Glcp-(1→, →4)-β-d-Manp-(1→, →3,6)-β-d-Galp-(1→, and β-d-galactose.
APA, Harvard, Vancouver, ISO, and other styles
31

Hoque, Bosirul, M. Inês G. S. Almeida, Robert W. Cattrall, Thiruvancheril G. Gopakumar, and Spas D. Kolev. "Improving the extraction performance of polymer inclusion membranes by cross-linking their polymeric backbone." Reactive and Functional Polymers 160 (March 2021): 104813. http://dx.doi.org/10.1016/j.reactfunctpolym.2021.104813.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
32

Jin, Yuhui, Xin Li, Sainan Zhu, Bin Tong, Fang Chen, Ru Cui, and Jian Huang. "Accurate landslide identification by multisource data fusion analysis with improved feature extraction backbone network." Geomatics, Natural Hazards and Risk 13, no. 1 (September 1, 2022): 2313–32. http://dx.doi.org/10.1080/19475705.2022.2116357.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
33

Zhang, Lili, Yu Fan, Ruijie Yan, Yehong Shao, Gaoxu Wang, and Jisen Wu. "Fine-Grained Tidal Flat Waterbody Extraction Method (FYOLOv3) for High-Resolution Remote Sensing Images." Remote Sensing 13, no. 13 (July 2, 2021): 2594. http://dx.doi.org/10.3390/rs13132594.

Full text of the source
Abstract:
A tidal flat is a long and narrow area along rivers and coasts with high sediment content, so there is little feature difference between the waterbody and the background, and the boundary of the waterbody is blurry. Existing waterbody extraction methods are mostly used for large water bodies like rivers and lakes, whereas less attention has been paid to tidal flat waterbody extraction. Extracting tidal flat water bodies accurately from high-resolution remote sensing imagery is a great challenge. To solve the low-accuracy problem of tidal flat waterbody extraction, we propose a fine-grained tidal flat waterbody extraction method, named FYOLOv3, which can extract tidal flat water with high accuracy. FYOLOv3 mainly includes three parts: an improved object detection network based on YOLOv3 (Seattle, WA, USA), a fully convolutional network (FCN) without pooling layers, and a similarity algorithm for water extraction. The improved object detection network uses 13 convolutional layers instead of Darknet-53 as the model backbone network, which guarantees water detection accuracy while reducing the time cost and alleviating overfitting; secondly, the FCN without pooling layers obtains accurate pixel values of the tidal flat waterbody by learning semantic information; finally, a similarity algorithm for water extraction distinguishes the waterbody from non-water areas pixel by pixel to improve the extraction accuracy of tidal flat water bodies. Compared to other convolutional neural network (CNN) models, the experiments show that our method has higher accuracy in the waterbody extraction of tidal flats from remote sensing images; the IoU of our method is 2.43% higher than YOLOv3 and 3.7% higher than U-Net (Freiburg, Germany).
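The abstract does not spell out the similarity algorithm used for pixel-wise water extraction; a plausible minimal stand-in compares each multi-band pixel vector against a reference water signature with cosine similarity. The threshold and the reference signature below are purely illustrative assumptions.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two spectral vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def classify_water(pixels, water_ref, threshold=0.99):
    """Label each multi-band pixel vector as water when its spectral
    similarity to a reference water signature exceeds the threshold."""
    return [cosine_similarity(p, water_ref) >= threshold for p in pixels]
```

In practice the reference signature would be estimated from pixels the detector is already confident about, and the threshold tuned on validation data.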
APA, Harvard, Vancouver, ISO, and other styles
34

Alshaikhli, Tamara, Wen Liu, and Yoshihisa Maruyama. "Simultaneous Extraction of Road and Centerline from Aerial Images Using a Deep Convolutional Neural Network." ISPRS International Journal of Geo-Information 10, no. 3 (March 8, 2021): 147. http://dx.doi.org/10.3390/ijgi10030147.

Full text of the source
Abstract:
The extraction of roads and centerlines from aerial imagery is an important topic because it contributes to different fields, such as urban planning, transportation engineering, and disaster mitigation. Many researchers have studied this topic as two separate tasks, which degrades the quality of the extracted roads and centerlines because of the correlation between the two tasks: accurate road extraction enhances centerline extraction if the two tasks are processed simultaneously. This study proposes a multitask learning scheme using a gated deep convolutional neural network (DCNN) to extract roads and centerlines simultaneously. The DCNN is composed of one encoder and two decoders implemented on the U-Net backbone. The decoders are assigned to extract roads and centerlines from low-resolution feature maps. Before extraction, the images are processed within the encoder to extract spatial information from complex, high-resolution images. The encoder consists of residual blocks (Res-Blocks) connected to a bridge, itself a Res-Block, and the bridge connects the two identical decoders, which consist of stacked convolutional layers (Conv. layers). Attention gates (AGs) are added to our model to enhance the selection of the true pixels that represent the road or centerline classes. Our model is trained on a publicly available dataset of high-resolution aerial images and succeeds in efficiently extracting roads and centerlines compared with other multitask learning models.
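An additive attention gate of the kind used above can be sketched per pixel: a coefficient in (0, 1), computed from the skip-connection features and a coarser gating signal, scales the skip features before fusion with the decoder. This is a single-channel sketch with scalar weights; the function and parameter names are illustrative, not from the paper.

```python
import math

def attention_gate(skip_features, gating_signal, w_skip, w_gate):
    """Additive attention gate (Attention U-Net style): a per-pixel
    coefficient in (0, 1) computed from the skip connection and the
    coarser gating signal suppresses irrelevant pixels before fusion."""
    gated = []
    for x, g in zip(skip_features, gating_signal):
        # sigmoid of a learned linear combination of skip and gate values
        alpha = 1.0 / (1.0 + math.exp(-(w_skip * x + w_gate * g)))
        gated.append(alpha * x)
    return gated
```

Because alpha never exceeds 1, the gate can only attenuate skip features, steering the decoder toward pixels the gating signal considers relevant.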
APA, Harvard, Vancouver, ISO, and other styles
35

Sheng, Jiajia, Youqiang Sun, He Huang, Wenyu Xu, Haotian Pei, Wei Zhang, and Xiaowei Wu. "HBRNet: Boundary Enhancement Segmentation Network for Cropland Extraction in High-Resolution Remote Sensing Images." Agriculture 12, no. 8 (August 22, 2022): 1284. http://dx.doi.org/10.3390/agriculture12081284.

Full text of the source
Abstract:
Cropland extraction has great significance in crop area statistics, intelligent farm machinery operations, agricultural yield estimates, and so on. Semantic segmentation is widely applied to cropland extraction from remote sensing images. Traditional semantic segmentation methods using convolutional networks lack contextual and boundary information when extracting large areas of cropland. In this paper, we propose a boundary enhancement segmentation network for cropland extraction in high-resolution remote sensing images (HBRNet). HBRNet uses a Swin Transformer with a pyramidal hierarchy as the backbone to enhance boundary details while obtaining context. We separate the boundary features and body features from the low-level features and then apply a boundary detail enhancement module (BDE) to the high-level features. To fuse the boundary features and body features, a module for interaction between boundary information and body information (IBBM) is proposed. We select remote sensing images containing large-scale cropland in Yizheng City, Jiangsu Province, as the Agriculture dataset for cropland extraction. Applied to the Agriculture dataset, our algorithm extracts cropland with an mIoU of 79.61%, an OA of 89.4%, and an IoU of 84.59% for cropland. In addition, we conduct experiments on DeepGlobe, which focuses on rural areas and has a diversity of cropland cover types. The experimental results indicate that HBRNet improves segmentation performance for cropland.
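The per-class IoU and mIoU figures quoted in segmentation abstracts like this one come from simple intersection and union counts over predicted and ground-truth labels; a minimal sketch over flat label lists:

```python
def iou_per_class(pred, truth, num_classes):
    """Intersection-over-Union for each class from two flat label lists.

    A correctly labelled pixel adds to both intersection and union of
    its class; a mislabelled pixel adds to the unions of both the
    predicted and the true class.
    """
    inter = [0] * num_classes
    union = [0] * num_classes
    for p, t in zip(pred, truth):
        if p == t:
            inter[p] += 1
            union[p] += 1
        else:
            union[p] += 1
            union[t] += 1
    return [inter[c] / union[c] if union[c] else float('nan')
            for c in range(num_classes)]

def mean_iou(pred, truth, num_classes):
    """mIoU: the per-class IoU values averaged over all classes."""
    ious = iou_per_class(pred, truth, num_classes)
    return sum(ious) / num_classes
```

Reported "IoU for cropland" is one entry of this vector, while mIoU averages over all classes including background.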
APA, Harvard, Vancouver, ISO, and other styles
36

Wang, Fei, and Yalu Ying. "Evaluation of Students’ Innovation and Entrepreneurship Ability Based on ResNet Network." Mobile Information Systems 2022 (February 22, 2022): 1–11. http://dx.doi.org/10.1155/2022/7772415.

Full text of the source
Abstract:
As the country’s high-quality talents, college students are an important force in national construction. Evaluating the innovation and entrepreneurship abilities of Chinese students helps promote the innovation and entrepreneurship education system and the reform of the educational system and mechanisms of colleges, thereby enhancing the innovation and entrepreneurship abilities of college students and pushing the country into the ranks of strong countries in human resources and talent. This work designs a ResNet-based method for evaluating college students’ innovation and entrepreneurship abilities; the main contributions are as follows. (1) When ResNet performs feature extraction, it suffers from a bloated network structure and feature loss, so a feature extraction backbone network based on ResNet is proposed. To solve the loss of shallow features during feature extraction, a skip architecture is added to fuse shallow details and spatial information with deep semantic information. To solve the weak generalization ability caused by a shallow network, a network stacking strategy is proposed to deepen the network structure. (2) Since ResNet with single-scale feature prediction cannot effectively utilize the multiscale features in the network, a multiscale feature prediction is designed. Following the idea of the feature pyramid, multiple feature maps with different scales are selected from the improved residual network. A multiscale feature fusion strategy is designed to fuse the selected multiscale feature maps into one feature map, and the innovation and entrepreneurship abilities are evaluated on the fused feature map. Finally, comparative experiments show that the improved feature extraction backbone network and the multiscale feature scheme improve accuracy on the constructed dataset.
APA, Harvard, Vancouver, ISO, and other styles
37

Drageset, Audun, and Hans-René Bjørsvik. "Continuous flow synthesis concatenated with continuous flow liquid–liquid extraction for work-up and purification: selective mono- and di-iodination of the imidazole backbone." Reaction Chemistry & Engineering 1, no. 4 (2016): 436–44. http://dx.doi.org/10.1039/c6re00091f.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
38

Han, Ping, Dayu Liao, Binbin Han, and Zheng Cheng. "SEAN: A Simple and Efficient Attention Network for Aircraft Detection in SAR Images." Remote Sensing 14, no. 18 (September 19, 2022): 4669. http://dx.doi.org/10.3390/rs14184669.

Full text of the source
Abstract:
Due to the unique imaging mechanism of synthetic aperture radar (SAR), which leads to a discrete state of aircraft targets in images, detection performance is vulnerable to the influence of complex ground objects. Although existing deep learning detection algorithms show good performance, they generally use a feature pyramid neck design and a large backbone network, which reduces detection efficiency to some extent. To address these problems, we propose a simple and efficient attention network (SEAN) in this paper, which takes YOLOv5s as the baseline. First, we reduce the depth of the backbone network and introduce a structural re-parameterization technique to increase the feature extraction capability of the backbone. Second, the neck architecture is designed using a residual dilated module (RDM), a low-level semantic enhancement module (LSEM), and a localization attention module (LAM), substantially reducing the number of parameters and the computation of the network. Results on the Gaofen-3 aircraft target dataset show that this method achieves 97.7% AP at a speed of 83.3 FPS on a Tesla M60, exceeding YOLOv5s by 1.3% AP and 8.7 FPS with 40.51% of the parameters and 86.25% of the FLOPs.
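The structural re-parameterization technique referenced above (popularized by RepVGG) folds parallel training-time branches into a single inference-time kernel: because convolution is linear, the sum of a 3×3 branch, a 1×1 branch, and an identity branch equals one 3×3 convolution whose centre tap absorbs the extra weights. A single-channel, stride-1 sketch with illustrative helper names:

```python
def conv3x3(img, k):
    """3x3 convolution of a single-channel image with zero padding of 1."""
    h, w = len(img), len(img[0])
    pad = [[0.0] * (w + 2)] + [[0.0] + row + [0.0] for row in img] + [[0.0] * (w + 2)]
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            out[i][j] = sum(pad[i + a][j + b] * k[a][b]
                            for a in range(3) for b in range(3))
    return out

def reparameterize(k3, w1, use_identity=True):
    """Fold a parallel 1x1 branch (scalar weight w1) and an identity
    branch into the 3x3 kernel: all three branches are linear, so their
    sum equals a single 3x3 convolution with an adjusted centre tap."""
    merged = [row[:] for row in k3]
    merged[1][1] += w1 + (1.0 if use_identity else 0.0)
    return merged
```

At training time the multi-branch form eases optimization; at inference the merged kernel runs as one plain convolution, which is where the speed gain comes from.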
APA, Harvard, Vancouver, ISO, and other styles
39

Nikam, Rohan, Ritesh Pardeshi, Yash Patel, and Ekta Sarda. "Deep Learning based Automatic Extraction of Student Performance from Gazette Assessment Data." ITM Web of Conferences 40 (2021): 03022. http://dx.doi.org/10.1051/itmconf/20214003022.

Full text of the source
Abstract:
Every day, millions of files are generated worldwide, containing humongous amounts of data in an unstructured format. Most of us come across at least one new document every week, which indicates the large volume of data associated with documents. All the data in these documents is in an unstructured format, which makes further processing difficult. The extraction of data from these documents still remains largely a manual effort, resulting in higher processing times. A system that could automatically extract the required fields from documents and store them in a structured format would be of much significance. In this paper, we describe an approach for extracting the data from an Exam Result Gazette document and then storing it in a CSV file. A Mask R-CNN model with a ResNeXt-101-32x8d backbone and a Feature Pyramid Network (FPN) has been tuned for detecting the required fields. The PyTesseract optical character recognition system is then used for extracting the data from the detected fields. Our proposed system is trained on a custom dataset created by us and then evaluated on test data to extract the required fields. The overall accuracy of our system is 98.69%. The results indicate that the system could be used to efficiently extract the required fields from a given exam result gazette document.
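Once OCR has produced text lines, turning them into structured CSV rows is mechanical; a minimal sketch assuming a hypothetical "seat number, name, marks, result" line layout. The regex and field names are illustrative assumptions, not the paper's actual gazette format.

```python
import csv
import io
import re

# Hypothetical gazette line format: "SEAT_NO NAME MARKS RESULT"
LINE = re.compile(r'^(\d+)\s+([A-Za-z .]+?)\s+(\d+)\s+(PASS|FAIL)$')

def gazette_to_csv(lines):
    """Convert OCR-extracted result lines into structured CSV text,
    silently skipping lines that do not match the expected layout."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(['seat_no', 'name', 'marks', 'result'])
    for line in lines:
        m = LINE.match(line.strip())
        if m:
            writer.writerow(m.groups())
    return buf.getvalue()
```

In the paper's pipeline the input lines would come from PyTesseract run over the field regions detected by Mask R-CNN.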
APA, Harvard, Vancouver, ISO, and other styles
40

Hosseini, Seyed Mohammad Kazem. "Robust prediction of CO2 corrosion rate in extraction and production hydrocarbon industry." Anti-Corrosion Methods and Materials 64, no. 1 (January 3, 2017): 36–42. http://dx.doi.org/10.1108/acmm-08-2015-1564.

Full text of the source
Abstract:
Purpose CO2 corrosion rate prediction is regarded as the backbone of materials selection in the upstream hydrocarbon industry. This study aims to identify common types of errors in CO2 corrosion rate calculation and to give guidelines on how to avoid them. Design/methodology/approach For the purpose of this study, 15 different “corrosion study and materials selection reports” carried out previously in the upstream hydrocarbon industry were selected, and their predicted CO2 corrosion rates were evaluated using various corrosion models. Errors captured in the original materials selection reports were categorized based on their type and nature. Findings The errors identified in the present study are classified into the following four main types: using inadequate or false data as the input to the model, failing to address factors which may have a significant influence on the corrosion rate, utilizing corrosion models beyond their validity range, and utilizing a corrosion model for a specific set of inputs for which the model is considered inaccurate even though the inputs lie within the software’s range of validity. Research limitations/implications This study is mainly based on the use of various corrosion models, and except for a few cases for which some actual field corrosion monitoring data were available, no laboratory tests were performed to verify the predicted data. Practical implications The paper provides a checklist of common types of errors in CO2 corrosion rate prediction and guidelines on how to avoid them. Originality/value CO2 corrosion rate calculation is regarded as the backbone of materials selection in the hydrocarbon industry. In this work, the sources of errors in terms of corrosion modeling tools and human factors were identified.
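The third error type above, using a corrosion model beyond its validity range, can be guarded against mechanically by checking every input against the model's published ranges before trusting a prediction. In this sketch the parameter names and ranges are purely illustrative, not taken from any specific corrosion model.

```python
def check_validity(inputs, ranges):
    """Return the list of input parameters that fall outside a model's
    published validity range; an empty list means every checked input
    lies within range, so the prediction is not invalidated on that
    ground (it may still be wrong for other reasons)."""
    return [name for name, value in inputs.items()
            if name in ranges and not (ranges[name][0] <= value <= ranges[name][1])]
```

A materials-selection workflow would run such a check for each model considered and flag reports whose predictions rely on out-of-range inputs.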
APA, Harvard, Vancouver, ISO, and other styles
41

Nazir, Danish, Khurram Azeem Hashmi, Alain Pagani, Marcus Liwicki, Didier Stricker, and Muhammad Zeshan Afzal. "HybridTabNet: Towards Better Table Detection in Scanned Document Images." Applied Sciences 11, no. 18 (September 11, 2021): 8396. http://dx.doi.org/10.3390/app11188396.

Full text of the source
Abstract:
Tables in document images are an important entity since they contain crucial information. Therefore, accurate table detection can significantly improve the information extraction from documents. In this work, we present a novel end-to-end trainable pipeline, HybridTabNet, for table detection in scanned document images. Our two-stage table detector uses the ResNeXt-101 backbone for feature extraction and Hybrid Task Cascade (HTC) to localize the tables in scanned document images. Moreover, we replace conventional convolutions with deformable convolutions in the backbone network. This enables our network to detect tables of arbitrary layouts precisely. We evaluate our approach comprehensively on ICDAR-13, ICDAR-17 POD, ICDAR-19, TableBank, Marmot, and UNLV. Apart from the ICDAR-17 POD dataset, our proposed HybridTabNet outperformed earlier state-of-the-art results without depending on pre- and post-processing steps. Furthermore, to investigate how the proposed method generalizes unseen data, we conduct an exhaustive leave-one-out-evaluation. In comparison to prior state-of-the-art results, our method reduced the relative error by 27.57% on ICDAR-2019-TrackA-Modern, 42.64% on TableBank (Latex), 41.33% on TableBank (Word), 55.73% on TableBank (Latex + Word), 10% on Marmot, and 9.67% on the UNLV dataset. The achieved results reflect the superior performance of the proposed method.
APA, Harvard, Vancouver, ISO, and other styles
42

Cai, Maodong, Xiaomei Yi, Guoying Wang, Lufeng Mo, Peng Wu, Christine Mwanza, and Kasanda Ernest Kapula. "Image Segmentation Method for Sweetgum Leaf Spots Based on an Improved DeeplabV3+ Network." Forests 13, no. 12 (December 8, 2022): 2095. http://dx.doi.org/10.3390/f13122095.

Full text of the source
Abstract:
This paper discusses a sweetgum leaf-spot image segmentation method based on an improved DeeplabV3+ network to address low accuracy in plant leaf-spot segmentation, problems with the recognition model, insufficient datasets, and slow training speeds. We replaced the backbone feature extraction network of the model's encoder with the MobileNetV2 network, which greatly reduced the amount of computation performed by the model and improved its speed. Then, an attention mechanism module was introduced into the backbone feature extraction network and the decoder, which further optimized the model’s edge recognition and improved its segmentation accuracy. Given the class imbalance in the sweetgum leaf spot dataset (SLSD), a weighted loss function was introduced, with two different weights assigned to spots and the background, respectively, to improve the segmentation of disease-spot regions. Finally, we graded the degree of the lesions. The experimental results show that the PA, mRecall, and mIoU of the improved model were 94.5%, 85.4%, and 81.3%, respectively, which are superior to the traditional DeeplabV3+, Unet, and Segnet models and other commonly used plant disease semantic segmentation methods. The model shows excellent performance for different degrees of spot segmentation, demonstrating that this method can effectively improve segmentation performance for sweetgum leaf spots.
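The weighted loss described above can be sketched as a weighted binary cross-entropy in which spot (positive) and background (negative) pixels carry different weights; the weight values in the test are illustrative, not the paper's.

```python
import math

def weighted_bce(probs, labels, w_pos, w_neg):
    """Weighted binary cross-entropy: lesion (positive) pixels and
    background pixels contribute with different weights, counteracting
    the class imbalance between small disease spots and a large
    background region."""
    eps = 1e-12
    total = 0.0
    for p, y in zip(probs, labels):
        p = min(max(p, eps), 1.0 - eps)  # clamp to avoid log(0)
        total += -(w_pos * y * math.log(p) + w_neg * (1 - y) * math.log(1 - p))
    return total / len(probs)
```

Raising `w_pos` above `w_neg` makes each mispredicted spot pixel cost more than a mispredicted background pixel, pushing the network to recover small lesions.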
APA, Harvard, Vancouver, ISO, and other styles
43

Zhang, Qianqian, Hongyang Wei, Xusheng Du, Xue Li, and Jiong Yu. "FAIAD: Feature Adaptive-based Image Anomaly Detection." Journal of Physics: Conference Series 2333, no. 1 (August 1, 2022): 012005. http://dx.doi.org/10.1088/1742-6596/2333/1/012005.

Full text of the source
Abstract:
Abstract Image anomaly detection is a hot research topic in the field of data mining, with great application value in industrial appearance defect detection and medical image analysis. To address the poor performance of anomaly detection models caused by incomplete feature extraction, we propose a feature-adaptive image anomaly detection model, FAIAD. FAIAD first obtains an initial feature extraction model through pre-training, then introduces feature adaptation methods to improve image feature extraction, and finally computes the accuracy of image anomaly detection. To explore the feature extraction effects of different neural networks, this paper designs comparison experiments with three backbone networks. On the Cifar-10 and Fashion-MNIST datasets, the accuracy of our model improved by 3.5% and 2.3%, respectively, compared to the baseline model. The experimental results show that combining pre-trained models with feature adaptation methods can effectively improve the performance of anomaly detection models.
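A common way to score anomalies from adapted features, given here as a hedged stand-in since the abstract does not specify FAIAD's scoring step, is the mean Euclidean distance from a test feature vector to its k nearest neighbours among features of normal training images:

```python
import math

def anomaly_score(feature, train_features, k=3):
    """Score a test image by the mean Euclidean distance between its
    feature vector and its k nearest neighbours among the features of
    normal training images: larger distance = more anomalous."""
    dists = sorted(
        math.sqrt(sum((a - b) ** 2 for a, b in zip(feature, f)))
        for f in train_features
    )
    return sum(dists[:k]) / k
```

Thresholding this score separates anomalies from normal samples; better feature extraction (the focus of FAIAD) tightens the normal cluster and widens that margin.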
APA, Harvard, Vancouver, ISO, and other styles
44

Wang, H., F. Yu, J. Xie, H. Wang, and H. Zheng. "ROAD EXTRACTION BASED ON IMPROVED DEEPLABV3 PLUS IN REMOTE SENSING IMAGE." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-3/W2-2022 (October 27, 2022): 67–72. http://dx.doi.org/10.5194/isprs-archives-xlviii-3-w2-2022-67-2022.

Full text of the source
Abstract:
Abstract. Urban roads in remote sensing images are disturbed by surrounding ground features such as building shadows and tree shadows, and the extraction results are prone to problems such as incomplete road structure, poor topological connectivity, and poor accuracy. Mountain roads also suffer from hill shadows and vegetation occlusion. We propose an improved Deeplabv3+ semantic segmentation network method. This method uses ResNeSt, which introduces channel attention, as the backbone network, combined with the ASPP module to obtain multi-scale information, thereby improving the accuracy of road extraction. Analysis of the experimental results on the DeepGlobe dataset shows that the intersection over union and accuracy of the method in this paper are 63.15% and 73.16%, respectively, which are better than those of other methods.
APA, Harvard, Vancouver, ISO, and other styles
45

Lavudi, Harikrishna Naik, Seshagirirao Kottapalli, and Francisco M. Goycoolea. "Extraction, purification and characterization of water soluble galactomannans from Mimosa pudica seeds." EuroBiotech Journal 1, no. 4 (October 27, 2017): 303–9. http://dx.doi.org/10.24190/issn2564-615x/2017/04.07.

Full text of the source
Abstract:
Abstract Water-soluble galactomannans from the seed endosperm of Mimosa pudica L. were extracted and characterized (Fig. 1). Nuclear magnetic resonance spectroscopy and gas chromatography results revealed the presence of a 4-linked mannose backbone with galactose side chains linked at the C6 position. Scanning electron micrographs showed a smooth, elongated, and irregular granular structure of the galactomannan. Structural analysis by attenuated total reflection infrared spectroscopy indicated the mannose-to-galactose ratio, while X-ray diffraction studies showed the presence of an A-type crystalline pattern in the galactomannan. Thermogravimetric analysis showed a three-step weight-loss event and determined the thermal stability. The results show that the extracted polysaccharides are typically amorphous, thermally stable, and have desirable properties for industrial applications.
APA, Harvard, Vancouver, ISO, and other styles
46

Miah, Md Saef Ullah, Junaida Sulaiman, Talha Bin Sarwar, Ateeqa Naseer, Fasiha Ashraf, Kamal Zuhairi Zamli, and Rajan Jose. "Sentence Boundary Extraction from Scientific Literature of Electric Double Layer Capacitor Domain: Tools and Techniques." Applied Sciences 12, no. 3 (January 27, 2022): 1352. http://dx.doi.org/10.3390/app12031352.

Full text of the source
Abstract:
Given the growth of scientific literature on the web, particularly in materials science, acquiring data precisely from the literature has become more significant. Material information systems, or chemical information systems, play an essential role in discovering data, materials, or synthesis processes in the existing scientific literature. Processing and understanding the natural language of scientific literature is the backbone of these systems, which depend heavily on appropriate textual content, meaning complete, meaningful sentences drawn from large chunks of text. The process of detecting the beginning and end of a sentence and extracting it as a correct sentence is called sentence boundary extraction. Accurate extraction of sentence boundaries from PDF documents is essential for readability and natural language processing. This study therefore provides a comparative analysis of different tools for extracting PDF documents into text, which are available as Python libraries or packages and are widely used by the research community. The main objective is to find the most suitable of the available techniques for correctly extracting sentences from PDF files as text. The performance of the techniques PyPDF2, pdfminer.six, PyMuPDF, pdftotext, Tika, and Grobid is presented in terms of precision, recall, F1 score, run time, and memory consumption. The NLTK, spaCy, and Gensim natural language processing (NLP) tools are used to identify sentence boundaries. Of all the techniques studied, the Grobid PDF extraction package combined with the spaCy NLP tool achieved the highest F1 score, 93%, and consumed the least memory, 46.13 MB.
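To make the sentence-boundary-extraction task concrete, here is a deliberately naive sketch of it. This is not the paper's pipeline; real tools such as spaCy and NLTK handle abbreviations, decimals, and citations that this regex will get wrong:

```python
import re

def naive_sentences(text):
    """Very naive sentence boundary detection: split where ., !, or ? is
    followed by whitespace and an uppercase letter. Illustrative only;
    production segmenters use trained models or curated rules."""
    parts = re.split(r'(?<=[.!?])\s+(?=[A-Z])', text.strip())
    return [p.strip() for p in parts if p.strip()]

text = "Grobid extracts text from PDFs. Sentence splitting follows. Does it work?"
sents = naive_sentences(text)
```

A string such as "Dr. Smith agreed." would already defeat this splitter, which is exactly why the study compares dedicated NLP tools for the task.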
APA, Harvard, Vancouver, ISO, and other styles
47

Liu, Panting, Haishun Du, and Junpeng Sha. "Multiple Granularity Person Re-identification Network Based on Representation Learning and Metric Learning." Journal of Physics: Conference Series 2216, no. 1 (March 1, 2022): 012105. http://dx.doi.org/10.1088/1742-6596/2216/1/012105.

Full text of the source
Abstract:
Powerful local features can be extracted from multiple body regions of a pedestrian. Early person re-identification research focused on extracting local features by locating regions with specific pre-defined semantics, which is not effective and increases the complexity of the network. In this paper, we propose a multiple-granularity person re-identification network based on representation learning and metric learning for learning discriminative representations of pedestrian images. The network consists of a multiple-granularity feature extraction part and a combined loss part. In particular, the feature extraction part extracts global features and local features of different granularities from the feature maps of Conv4 and Conv5 of the ResNet50 backbone network, respectively, so the extracted feature information is more comprehensive and discriminative. The combined loss part employs joint representation learning and metric learning for supervised learning, which enables the model to learn better parameters. Experimental results show that the Rank-1 accuracy of the network reaches 95.2% and 88.2% on the Market1501 and DukeMTMC-reID datasets, respectively, which illustrates the effectiveness of the model.
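The Rank-1 accuracy cited above is the standard re-identification metric: a query counts as a hit when its nearest gallery image shares its identity. A minimal sketch with an invented distance matrix (not the paper's code or data):

```python
# Hedged sketch of the Rank-1 metric: for each query, the match is correct
# if the closest gallery item carries the same identity label.

def rank1_accuracy(dist, query_ids, gallery_ids):
    """dist[i][j] is the distance between query i and gallery item j."""
    hits = 0
    for i, row in enumerate(dist):
        best = min(range(len(row)), key=lambda j: row[j])  # nearest gallery item
        if gallery_ids[best] == query_ids[i]:
            hits += 1
    return hits / len(dist)

dist = [
    [0.2, 0.9, 0.5],   # query 0: nearest gallery item is 0
    [0.7, 0.1, 0.8],   # query 1: nearest gallery item is 1
    [0.6, 0.4, 0.3],   # query 2: nearest gallery item is 2
]
query_ids   = ["A", "B", "C"]
gallery_ids = ["A", "B", "A"]   # query 2's nearest neighbour has the wrong ID
```

With these toy values two of three queries hit, giving a Rank-1 accuracy of 2/3.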
APA, Harvard, Vancouver, ISO, and other styles
48

Tran, Van-Nhan, Suk-Hwan Lee, Hoanh-Su Le, and Ki-Ryong Kwon. "High Performance DeepFake Video Detection on CNN-Based with Attention Target-Specific Regions and Manual Distillation Extraction." Applied Sciences 11, no. 16 (August 20, 2021): 7678. http://dx.doi.org/10.3390/app11167678.

Full text of the source
Abstract:
Rapidly developing deep learning models can produce and synthesize hyper-realistic videos known as DeepFakes, and the growth of forgery data has prompted concerns about malevolent use. Detecting forged videos is a crucial subject in the field of digital media. Most current models are based on deep learning neural networks and vision transformers, with the SOTA model using an EfficientNetB7 backbone. However, because of their excessively large backbones, these models have the intrinsic drawback of being too heavy. In our research, a high-performance DeepFake detection model for manipulated video is proposed that maintains accuracy while keeping an appropriate weight. We inherited content from previous research on distillation methodology, but our proposal takes a different approach, with manual distillation extraction, target-specific region extraction, data augmentation, frame and multi-region ensembling, a CNN-based model, and flexible classification with a dynamic threshold. Our proposal reduces overfitting, a common and particularly important problem affecting the quality of many models. To analyze the quality of our model, we performed tests on two datasets. On the DeepFake Detection Challenge (DFDC) dataset our model obtains an AUC of 0.958 and an F1 score of 0.9243, compared with the SOTA model's AUC of 0.972 and F1 score of 0.906; on the smaller Celeb-DF v2 dataset it obtains an AUC of 0.978 and an F1 score of 0.9628.
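The F1 scores and the "dynamic threshold" mentioned above can be illustrated together: a threshold turns per-video fake-probabilities into binary labels, and F1 is computed from the resulting confusion counts. The scores and threshold below are invented, not taken from the paper:

```python
# Hedged sketch: binary F1 score from thresholded scores. The paper tunes its
# threshold dynamically; a fixed 0.5 is used here purely for illustration.

def f1_score(y_true, y_pred):
    """Binary F1: harmonic mean of precision and recall (positive class = 1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

scores = [0.91, 0.15, 0.78, 0.40, 0.66]   # hypothetical fake-probabilities
y_true = [1, 0, 1, 0, 0]                  # hypothetical ground truth
threshold = 0.5
y_pred = [1 if s >= threshold else 0 for s in scores]
f1 = f1_score(y_true, y_pred)
```

Raising or lowering the threshold trades precision against recall, which is why choosing it per input (the paper's "flexible classification") can matter.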
APA, Harvard, Vancouver, ISO, and other styles
49

Wang, Yun, Suye Wang, and XueBin Hong. "Road Extraction using High Resolution Satellite Images based on Receptive Field and Improved Deeplabv3+." Journal of Physics: Conference Series 2320, no. 1 (August 1, 2022): 012021. http://dx.doi.org/10.1088/1742-6596/2320/1/012021.

Full text of the source
Abstract:
Road extraction from high-resolution remote sensing images is an important and challenging computer vision task. This paper presents a road segmentation method based on receptive fields and an improved Deeplabv3+, which obtains the best training image set by computing an edge energy function after simple clipping. To address problems of data homogeneity and convergence, we use the Leaky-He initialization method for the layers of the backbone network. Using the DeepGlobe Road Extraction dataset for training, experimental results show that the best mIoU score on the test set is 0.7099, an improvement of 0.1919 and 0.1596 over the classical U-Net and D-LinkNet networks, respectively.
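The abstract names a "Leaky-He" initialization without defining it. The sketch below assumes this denotes He (Kaiming) initialization with the variance gain adjusted for Leaky ReLU, i.e. std = sqrt(2 / ((1 + a^2) * fan_in)); the slope value and layer sizes are illustrative, not from the paper:

```python
import math
import random

# Assumed interpretation of "Leaky-He": He initialization whose gain accounts
# for the Leaky ReLU negative slope a, giving std = sqrt(2 / ((1 + a^2) * fan_in)).

def leaky_he_std(fan_in, negative_slope=0.01):
    """Standard deviation of the assumed Leaky-He weight distribution."""
    return math.sqrt(2.0 / ((1.0 + negative_slope ** 2) * fan_in))

def leaky_he_init(fan_in, fan_out, negative_slope=0.01, seed=0):
    """Sample a fan_out x fan_in weight matrix from N(0, std^2)."""
    rng = random.Random(seed)
    std = leaky_he_std(fan_in, negative_slope)
    return [[rng.gauss(0.0, std) for _ in range(fan_in)] for _ in range(fan_out)]

weights = leaky_he_init(fan_in=512, fan_out=256)
```

With a slope of 0 this reduces to plain He initialization, std = sqrt(2 / fan_in), which is consistent with the formula above.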
APA, Harvard, Vancouver, ISO, and other styles
50

Chen and Seko. "Cleavage of the Graft Bonds in PVDF–g–St Films by Boiling Xylene Extraction and the Determination of the Molecular Weight of the Graft Chains." Polymers 11, no. 7 (June 28, 2019): 1098. http://dx.doi.org/10.3390/polym11071098.

Full text of the source
Abstract:
To determine the molecular weight of graft chains in grafted films, the polystyrene graft chains of PVDF–g–St films synthesized by a pre-irradiation grafting method are cleaved and separated by boiling-xylene extraction. Analysis of the extracted material and the residual films by FTIR, nuclear magnetic resonance (NMR), and gel permeation chromatography (GPC) indicates that most graft chains are removed from the PVDF–g–St films within 72 h of extraction. Furthermore, the molecular weight of the residual films decreases quickly within the first 8 h of extraction and then remains virtually unchanged up to 72 h. The degradation is due to the cleavage of graft bonds, driven mainly by thermal degradation and the swelling of the graft chains in solution. This allows the molecular weight of the graft chains to be determined by GPC analysis of the extracted material. The results indicate that the PVDF–g–St prepared in this study has a structure in which one or two graft chains hang from each PVDF backbone.
APA, Harvard, Vancouver, ISO, and other styles