Journal articles on the topic "Fully- and weakly-Supervised learning"

Below are the 50 best journal articles for research on the topic "Fully- and weakly-Supervised learning".


1

Cuypers, Suzanna, Maarten Bassier, and Maarten Vergauwen. "Deep Learning on Construction Sites: A Case Study of Sparse Data Learning Techniques for Rebar Segmentation." Sensors 21, no. 16 (August 11, 2021): 5428. http://dx.doi.org/10.3390/s21165428.

Abstract:
With recent advancements in deep learning models for image interpretation, it has finally become possible to automate construction site monitoring processes that rely on remote sensing. However, the major drawback of these models is their dependency on large datasets of training images labeled at pixel level, which have to be produced manually by skilled personnel. To alleviate the need for training data, this study evaluates weakly- and semi-supervised semantic segmentation models for construction site imagery to efficiently automate monitoring tasks. As a case study, we compare fully-, weakly- and semi-supervised methods for the detection of rebar covers, which are useful for quality control. In the experiments, recent models, i.e., IRNet, DeepLabv3+ and the cross-consistency training model, are compared for their ability to segment rebar covers from construction site imagery with minimal manual input. The results show that weakly- and semi-supervised models can indeed approach the performance of fully-supervised models, with the majority of the target objects being properly found. Through this study, construction site stakeholders are provided with detailed information on how to leverage deep learning for efficient construction site monitoring and weigh preprocessing, training and testing efforts against each other in order to decide between fully-, weakly- and semi-supervised training.
2

Wang, Ning, Jiajun Deng, and Mingbo Jia. "Cycle-Consistency Learning for Captioning and Grounding." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 6 (March 24, 2024): 5535–43. http://dx.doi.org/10.1609/aaai.v38i6.28363.

Abstract:
We present that visual grounding and image captioning, which perform as two mutually inverse processes, can be bridged together for collaborative training by careful designs. By consolidating this idea, we introduce CyCo, a cyclic-consistent learning framework to ameliorate the independent training pipelines of visual grounding and image captioning. The proposed framework (1) allows the semi-weakly supervised training of visual grounding; (2) improves the performance of fully supervised visual grounding; (3) yields a general captioning model that can describe arbitrary image regions. Extensive experiments show that our fully supervised grounding model achieves state-of-the-art performance, and the semi-weakly supervised one also exhibits competitive performance compared to the fully supervised counterparts. Our image captioning model has the capability to freely describe image regions and meanwhile shows impressive performance on prevalent captioning benchmarks.
3

Wang, Guangyao. "A Study of Object Detection Based on Weakly Supervised Learning." International Journal of Computer Science and Information Technology 2, no. 1 (March 25, 2024): 476–78. http://dx.doi.org/10.62051/ijcsit.v2n1.50.

Abstract:
Object detection is one of the important research contents in the field of computer vision. At present, the classical object detection methods can be divided into two categories: fully supervised-based target detection and weakly supervised-based target detection. Since the fully supervised object detection model requires a large number of training data with category labels and target bounding boxes, and such labeled data is difficult to obtain, it is of great significance to explore the weakly supervised object detection method that only needs category label data.
4

Adke, Shrinidhi, Changying Li, Khaled M. Rasheed, and Frederick W. Maier. "Supervised and Weakly Supervised Deep Learning for Segmentation and Counting of Cotton Bolls Using Proximal Imagery." Sensors 22, no. 10 (May 12, 2022): 3688. http://dx.doi.org/10.3390/s22103688.

Abstract:
The total boll count from a plant is one of the most important phenotypic traits for cotton breeding and is also an important factor for growers to estimate the final yield. With the recent advances in deep learning, many supervised learning approaches have been implemented to perform phenotypic trait measurement from images for various crops, but few studies have been conducted to count cotton bolls from field images. Supervised learning models require a vast number of annotated images for training, which has become a bottleneck for machine learning model development. The goal of this study is to develop both fully supervised and weakly supervised deep learning models to segment and count cotton bolls from proximal imagery. A total of 290 RGB images of cotton plants from both potted (indoor and outdoor) and in-field settings were taken by consumer-grade cameras and the raw images were divided into 4350 image tiles for further model training and testing. Two supervised models (Mask R-CNN and S-Count) and two weakly supervised approaches (WS-Count and CountSeg) were compared in terms of boll count accuracy and annotation costs. The results revealed that the weakly supervised counting approaches performed well with RMSE values of 1.826 and 1.284 for WS-Count and CountSeg, respectively, whereas the fully supervised models achieve RMSE values of 1.181 and 1.175 for S-Count and Mask R-CNN, respectively, when the number of bolls in an image patch is less than 10. In terms of data annotation costs, the weakly supervised approaches were at least 10 times more cost efficient than the supervised approach for boll counting. In the future, the deep learning models developed in this study can be extended to other plant organs, such as main stalks, nodes, and primary and secondary branches. Both the supervised and weakly supervised deep learning models for boll counting with low-cost RGB images can be used by cotton breeders, physiologists, and growers alike to improve crop breeding and yield estimation.
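As an aside for readers who want to reproduce the error metric quoted above, the root-mean-square error over per-image counts can be computed as follows. This is a minimal sketch; the counts in the example are invented and do not come from the study.

```python
import numpy as np

def rmse(predicted_counts, true_counts):
    """Root-mean-square error between predicted and reference boll counts."""
    predicted_counts = np.asarray(predicted_counts, dtype=float)
    true_counts = np.asarray(true_counts, dtype=float)
    return float(np.sqrt(np.mean((predicted_counts - true_counts) ** 2)))

# Hypothetical per-tile counts, only to show the calculation.
print(rmse([3, 7, 5, 9], [4, 6, 5, 10]))  # ~0.866
```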
5

Ni, Ansong, Pengcheng Yin, and Graham Neubig. "Merging Weak and Active Supervision for Semantic Parsing." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 8536–43. http://dx.doi.org/10.1609/aaai.v34i05.6375.

Abstract:
A semantic parser maps natural language commands (NLs) from the users to executable meaning representations (MRs), which are later executed in a certain environment to obtain user-desired results. The fully-supervised training of such a parser requires NL/MR pairs, annotated by domain experts, which makes them expensive to collect. However, weakly-supervised semantic parsers are learnt only from pairs of NL and expected execution results, leaving the MRs latent. While weak supervision is cheaper to acquire, learning from this input poses difficulties. It demands that parsers search a large space with a very weak learning signal and it is hard to avoid spurious MRs that achieve the correct answer in the wrong way. These factors lead to a performance gap between parsers trained in weakly- and fully-supervised settings. To bridge this gap, we examine the intersection between weak supervision and active learning, which allows the learner to actively select examples and query for manual annotations as extra supervision to improve the model trained under weak supervision. We study different active learning heuristics for selecting examples to query, and various forms of extra supervision for such queries. We evaluate the effectiveness of our method on two different datasets. Experiments on WikiSQL show that by annotating only 1.8% of examples, we improve over a state-of-the-art weakly-supervised baseline by 6.4%, achieving an accuracy of 79.0%, which is only 1.3% away from the model trained with full supervision. Experiments on WikiTableQuestions with human annotators show that our method can improve the performance with only 100 active queries, especially for weakly-supervised parsers learnt from a cold start.
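For illustration only, the selection step of such an active learning loop — querying human annotations for the commands the weakly-supervised parser is least sure about — might look like the sketch below. The margin heuristic, the scores, and all names are hypothetical placeholders, not the authors' code.

```python
def margin_uncertainty(beam_scores):
    """Smaller margin between the two highest-scoring candidate parses = more uncertain."""
    top = sorted(beam_scores, reverse=True)
    return top[0] - top[1] if len(top) > 1 else float("inf")

def select_examples_to_annotate(beam_scores_per_example, budget):
    """Pick the `budget` most uncertain examples to send to human annotators."""
    ranked = sorted(beam_scores_per_example,
                    key=lambda ex: margin_uncertainty(beam_scores_per_example[ex]))
    return ranked[:budget]

# Hypothetical beam scores for three natural-language commands.
scores = {
    "show all flights to Denver": [0.91, 0.13, 0.02],
    "count employees per office": [0.48, 0.46, 0.05],   # ambiguous -> queried first
    "list the cheapest hotel":    [0.75, 0.40, 0.11],
}
print(select_examples_to_annotate(scores, budget=1))  # ['count employees per office']
```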
6

Colin, Aurélien, Ronan Fablet, Pierre Tandeo, Romain Husson, Charles Peureux, Nicolas Longépé, and Alexis Mouche. "Semantic Segmentation of Metoceanic Processes Using SAR Observations and Deep Learning." Remote Sensing 14, no. 4 (February 11, 2022): 851. http://dx.doi.org/10.3390/rs14040851.

Abstract:
Through the Synthetic Aperture Radar (SAR) carried on board the satellites Sentinel-1A and Sentinel-1B of the Copernicus program, a large quantity of observations is routinely acquired over the oceans. A wide range of features of both oceanic (e.g., biological slicks, icebergs, etc.) and meteorologic origin (e.g., rain cells, wind streaks, etc.) are distinguishable on these acquisitions. This paper studies the semantic segmentation of ten metoceanic processes either in the context of a large quantity of image-level groundtruths (i.e., weakly-supervised framework) or of scarce pixel-level groundtruths (i.e., fully-supervised framework). Our main result is that a fully-supervised model outperforms any tested weakly-supervised algorithm. Adding more segmentation examples in the training set would further increase the precision of the predictions. Trained on 20 × 20 km imagettes acquired from the WV acquisition mode of the Sentinel-1 mission, the model is shown to generalize, under some assumptions, to wide-swath SAR data, which further extends its application domain to coastal areas.
7

Cai, Tingting, Hongping Yan, Kun Ding, Yan Zhang, and Yueyue Zhou. "WSPolyp-SAM: Weakly Supervised and Self-Guided Fine-Tuning of SAM for Colonoscopy Polyp Segmentation." Applied Sciences 14, no. 12 (June 8, 2024): 5007. http://dx.doi.org/10.3390/app14125007.

Abstract:
Ensuring precise segmentation of colorectal polyps holds critical importance in the early diagnosis and treatment of colorectal cancer. Nevertheless, existing deep learning-based segmentation methods are fully supervised, requiring extensive, precise, manual pixel-level annotation data, which leads to high annotation costs. Additionally, it remains challenging to train large-scale segmentation models when confronted with limited colonoscopy data. To address these issues, we introduce the general segmentation foundation model—the Segment Anything Model (SAM)—into the field of medical image segmentation. Fine-tuning the foundation model is an effective approach to tackle sample scarcity. However, current SAM fine-tuning techniques still rely on precise annotations. To overcome this limitation, we propose WSPolyp-SAM, a novel weakly supervised approach for colonoscopy polyp segmentation. WSPolyp-SAM utilizes weak annotations to guide SAM in generating segmentation masks, which are then treated as pseudo-labels to guide the fine-tuning of SAM, thereby reducing the dependence on precise annotation data. To improve the reliability and accuracy of pseudo-labels, we have designed a series of enhancement strategies to improve the quality of pseudo-labels and mitigate the negative impact of low-quality pseudo-labels. Experimental results on five medical image datasets demonstrate that WSPolyp-SAM outperforms current fully supervised mainstream polyp segmentation networks on the Kvasir-SEG, ColonDB, CVC-300, and ETIS datasets. Furthermore, by using different amounts of training data in weakly supervised and fully supervised experiments, it is found that weakly supervised fine-tuning can save 70% to 73% of annotation time costs compared to fully supervised fine-tuning. This study provides a new perspective on the combination of weakly supervised learning and SAM models, significantly reducing annotation time and offering insights for further development in the field of colonoscopy polyp segmentation.
8

Hong, Yining, Qing Li, Daniel Ciao, Siyuan Huang, and Song-Chun Zhu. "Learning by Fixing: Solving Math Word Problems with Weak Supervision." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 6 (May 18, 2021): 4959–67. http://dx.doi.org/10.1609/aaai.v35i6.16629.

Abstract:
Previous neural solvers of math word problems (MWPs) are learned with full supervision and fail to generate diverse solutions. In this paper, we address this issue by introducing a weakly-supervised paradigm for learning MWPs. Our method only requires the annotations of the final answers and can generate various solutions for a single problem. To boost weakly-supervised learning, we propose a novel learning-by-fixing (LBF) framework, which corrects the misperceptions of the neural network via symbolic reasoning. Specifically, for an incorrect solution tree generated by the neural network, the fixing mechanism propagates the error from the root node to the leaf nodes and infers the most probable fix that can be executed to get the desired answer. To generate more diverse solutions, tree regularization is applied to guide the efficient shrinkage and exploration of the solution space, and a memory buffer is designed to track and save the discovered various fixes for each problem. Experimental results on the Math23K dataset show the proposed LBF framework significantly outperforms reinforcement learning baselines in weakly-supervised learning. Furthermore, it achieves comparable top-1 and much better top-3/5 answer accuracies than fully-supervised methods, demonstrating its strength in producing diverse solutions.
9

Chen, Shaolong, and Zhiyong Zhang. "A Semi-Automatic Magnetic Resonance Imaging Annotation Algorithm Based on Semi-Weakly Supervised Learning." Sensors 24, no. 12 (June 16, 2024): 3893. http://dx.doi.org/10.3390/s24123893.

Abstract:
The annotation of magnetic resonance imaging (MRI) images plays an important role in deep learning-based MRI segmentation tasks. Semi-automatic annotation algorithms are helpful for improving the efficiency and reducing the difficulty of MRI image annotation. However, the existing semi-automatic annotation algorithms based on deep learning have poor pre-annotation performance in the case of insufficient segmentation labels. In this paper, we propose a semi-automatic MRI annotation algorithm based on semi-weakly supervised learning. In order to achieve a better pre-annotation performance in the case of insufficient segmentation labels, semi-supervised and weakly supervised learning were introduced, and a semi-weakly supervised learning segmentation algorithm based on sparse labels was proposed. In addition, in order to improve the contribution rate of a single segmentation label to the performance of the pre-annotation model, an iterative annotation strategy based on active learning was designed. The experimental results on public MRI datasets show that the proposed algorithm achieved an equivalent pre-annotation performance when the number of segmentation labels was much less than that of the fully supervised learning algorithm, which proves the effectiveness of the proposed algorithm.
10

Zhang, Yachao, Zonghao Li, Yuan Xie, Yanyun Qu, Cuihua Li, and Tao Mei. "Weakly Supervised Semantic Segmentation for Large-Scale Point Cloud." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 4 (May 18, 2021): 3421–29. http://dx.doi.org/10.1609/aaai.v35i4.16455.

Abstract:
Existing methods for large-scale point cloud semantic segmentation require expensive, tedious and error-prone manual point-wise annotation. Intuitively, weakly supervised training is a direct solution to reduce the labeling costs. However, for weakly supervised large-scale point cloud semantic segmentation, too few annotations will inevitably lead to ineffective learning of the network. We propose an effective weakly supervised method containing two components to solve the above problem. Firstly, we construct a pretext task, i.e., point cloud colorization, trained in a self-supervised manner to transfer the learned prior knowledge from a large amount of unlabeled point cloud data to a weakly supervised network. In this way, the representation capability of the weakly supervised network can be improved by knowledge from a heterogeneous task. Besides, to generate pseudo labels for unlabeled data, a sparse label propagation mechanism is proposed with the help of generated class prototypes, which are used to measure the classification confidence of unlabeled points. Our method is evaluated on large-scale point cloud datasets with different scenarios including indoor and outdoor. The experimental results show a large gain over existing weakly supervised methods and comparable results to fully supervised methods.
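A generic sketch of the prototype-based label propagation idea mentioned in this abstract: build per-class prototypes from the few labeled points, then pseudo-label unlabeled points whose features are confidently close to a prototype. The data shapes, the cosine-similarity measure and the 0.8 threshold are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def propagate_pseudo_labels(feats_labeled, labels, feats_unlabeled, threshold=0.8):
    """Assign pseudo-labels to unlabeled points by cosine similarity to class prototypes."""
    def normalize(x):
        return x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)

    feats_labeled, feats_unlabeled = normalize(feats_labeled), normalize(feats_unlabeled)
    classes = np.unique(labels)
    # Class prototypes: mean feature of the labeled points of each class.
    prototypes = normalize(np.stack([feats_labeled[labels == c].mean(axis=0) for c in classes]))

    similarity = feats_unlabeled @ prototypes.T             # (n_unlabeled, n_classes)
    best = similarity.argmax(axis=1)
    confident = similarity.max(axis=1) >= threshold          # keep only confident assignments
    pseudo_labels = np.where(confident, classes[best], -1)   # -1 = still unlabeled
    return pseudo_labels

# Tiny synthetic example with 4-D point features.
rng = np.random.default_rng(0)
fl = rng.normal(size=(6, 4)); lab = np.array([0, 0, 0, 1, 1, 1])
fu = rng.normal(size=(5, 4))
print(propagate_pseudo_labels(fl, lab, fu))
```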
11

Qian, Xiaoliang, Chenyang Lin, Zhiwu Chen, and Wei Wang. "SAM-Induced Pseudo Fully Supervised Learning for Weakly Supervised Object Detection in Remote Sensing Images." Remote Sensing 16, no. 9 (April 26, 2024): 1532. http://dx.doi.org/10.3390/rs16091532.

Abstract:
Weakly supervised object detection (WSOD) in remote sensing images (RSIs) aims to detect high-value targets by solely utilizing image-level category labels; however, two problems have not been well addressed by existing methods. Firstly, the seed instances (SIs) are mined solely relying on the category score (CS) of each proposal, which is inclined to concentrate on the most salient parts of the object; furthermore, they are unreliable because the robustness of the CS is not sufficient due to the fact that the inter-category similarity and intra-category diversity are more serious in RSIs. Secondly, the localization accuracy is limited by the proposals generated by the selective search or edge box algorithm. To address the first problem, a segment anything model (SAM)-induced seed instance-mining (SSIM) module is proposed, which mines the SIs according to the object quality score, which indicates the comprehensive characteristic of the category and the completeness of the object. To handle the second problem, a SAM-based pseudo-ground truth-mining (SPGTM) module is proposed to mine the pseudo-ground truth (PGT) instances, for which the localization is more accurate than traditional proposals by fully making use of the advantages of SAM, and the object-detection heads are trained by the PGT instances in a fully supervised manner. The ablation studies show the effectiveness of the SSIM and SPGTM modules. Comprehensive comparisons with 15 WSOD methods demonstrate the superiority of our method on two RSI datasets.
12

Cherikbayeva, L. Ch, N. K. Mukazhanov, Z. Alibiyeva, S. A. Adilzhanova, G. A. Tyulepberdinova, and M. Zh Sakypbekova. "SOLUTION TO THE PROBLEM WEAKLY CONTROLLED REGRESSION USING COASSOCIATION MATRIX AND REGULARIZATION." Herald of the Kazakh-British technical university 21, no. 2 (July 1, 2024): 83–94. http://dx.doi.org/10.55452/1998-6688-2024-21-2-83-94.

Abstract:
Currently, the theory and methods of machine learning (ML) are rapidly developing and are increasingly used in various fields of science and technology, in particular in manufacturing, education and medicine. Weakly supervised learning is a subset of machine learning research that aims to develop models and methods for analyzing various types of information. When formulating a weakly supervised learning problem, it is assumed that some objects in the model are not defined correctly. This inaccuracy can be understood in different ways. Weakly supervised learning is a type of machine learning method in which a model is trained using incomplete, inaccurate, or imprecise observation signals rather than using fully validated data. Weakly supervised learning often occurs in real-world problems for various reasons. This may be due to the high cost of the data labeling process, low sensor accuracy, lack of expert experience, or human error. For example, labeling of poor control is carried out in cases obtained by crowdsourcing methods: for each object there is a set of different assessments, the quality of which depends on the skill of the performers. Another example is the problem of object detection in an image. Boundary lines are a common way to indicate the location and size of objects detected in an image in object detection tasks. The article presents an algorithm for solving a multi-objective weakly supervised regression problem using the Wasserstein metric, various regularizations and a co-association matrix as a similarity matrix. The work also improved the algorithm for calculating the weighted average co-association matrix. We compare the proposed algorithm with existing supervised learning and unsupervised learning algorithms on synthetic and real data.
13

Feng, Jiahao, Ce Li, and Jin Wang. "CAM-TMIL: A Weakly-Supervised Segmentation Framework for Histopathology based on CAMs and MIL." Journal of Physics: Conference Series 2547, no. 1 (July 1, 2023): 012014. http://dx.doi.org/10.1088/1742-6596/2547/1/012014.

Abstract:
Semantic segmentation plays a significant role in histopathology by assisting pathologists in diagnosis. Although fully-supervised learning achieves excellent success on segmentation for histopathological images, it requires great effort from pathologists and experts for pixel-level annotation. Thus, to reduce the annotation workload, we proposed a weakly-supervised learning framework called CAM-TMIL, which assembles methods based on class activation maps (CAMs) and multiple instance learning (MIL) to perform segmentation with image-level labels. By leveraging the MIL method, we effectively alleviate the problem that CAMs focus only on discriminative regions. As a result, we achieved comparable performance with fully-supervised learning on Camelyon16 with only image-level labels.
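For context, the basic class activation map computation that such CAM-based methods build on (for a classifier ending in global average pooling followed by a linear layer) can be sketched as follows. The feature maps and weights here are random stand-ins, not data from the paper.

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_index):
    """CAM_c(x, y) = sum_k w_{c,k} * F_k(x, y), then min-max normalized to [0, 1]."""
    cam = np.tensordot(fc_weights[class_index], feature_maps, axes=1)  # (H, W)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# Random stand-ins: 512 feature channels of size 14x14, 2 output classes (e.g., tumor / normal).
features = np.random.rand(512, 14, 14)
weights = np.random.rand(2, 512)
heatmap = class_activation_map(features, weights, class_index=0)
print(heatmap.shape)  # (14, 14)
```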
14

Chen, Jie, Fen He, Yi Zhang, Geng Sun, and Min Deng. "SPMF-Net: Weakly Supervised Building Segmentation by Combining Superpixel Pooling and Multi-Scale Feature Fusion." Remote Sensing 12, no. 6 (March 24, 2020): 1049. http://dx.doi.org/10.3390/rs12061049.

Abstract:
The lack of pixel-level labeling limits the practicality of deep learning-based building semantic segmentation. Weakly supervised semantic segmentation based on image-level labeling results in incomplete object regions and missing boundary information. This paper proposes a weakly supervised semantic segmentation method for building detection. The proposed method takes the image-level label as supervision information in a classification network that combines superpixel pooling and multi-scale feature fusion structures. The main advantage of the proposed strategy is its ability to improve the intactness and boundary accuracy of a detected building. Our method achieves impressive results on two 2D semantic labeling datasets, which outperform some competing weakly supervised methods and are close to the result of the fully supervised method.
15

Wu, Zhenyu, Lin Wang, Wei Wang, Qing Xia, Chenglizhao Chen, Aimin Hao, and Shuo Li. "Pixel Is All You Need: Adversarial Trajectory-Ensemble Active Learning for Salient Object Detection." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 3 (June 26, 2023): 2883–91. http://dx.doi.org/10.1609/aaai.v37i3.25390.

Abstract:
Although weakly-supervised techniques can reduce the labeling effort, it is unclear whether a saliency model trained with weakly-supervised data (e.g., point annotation) can achieve the equivalent performance of its fully-supervised version. This paper attempts to answer this unexplored question by proving a hypothesis: there is a point-labeled dataset where saliency models trained on it can achieve equivalent performance when trained on the densely annotated dataset. To prove this conjecture, we proposed a novel yet effective adversarial trajectory-ensemble active learning (ATAL). Our contributions are three-fold: 1) Our proposed adversarial attack triggering uncertainty can conquer the overconfidence of existing active learning methods and accurately locate these uncertain pixels. 2) Our proposed trajectory-ensemble uncertainty estimation method maintains the advantages of the ensemble networks while significantly reducing the computational cost. 3) Our proposed relationship-aware diversity sampling algorithm can conquer oversampling while boosting performance. Experimental results show that our ATAL can find such a point-labeled dataset, where a saliency model trained on it obtained 97%-99% performance of its fully-supervised version with only 10 annotated points per image.
16

Liu, Xiangquan, and Xiaoming Huang. "Weakly supervised salient object detection via bounding-box annotation and SAM model." Electronic Research Archive 32, no. 3 (2024): 1624–45. http://dx.doi.org/10.3934/era.2024074.

Abstract:
Salient object detection (SOD) aims to detect the most attractive region in an image. Fully supervised SOD based on deep learning usually needs a large amount of data with human annotation. Researchers have gradually focused on the SOD task using weakly supervised annotation such as category, scribble, and bounding-box, while these existing weakly supervised methods achieve limited performance and demonstrate a huge performance gap with fully supervised methods. In this work, we proposed one novel two-stage weakly supervised method based on bounding-box annotation and the recent large visual model Segment Anything (SAM). In the first stage, we regarded the bounding-box annotation as the box prompt of SAM to generate initial labels and proposed object completeness check and object inversion check to exclude low quality labels, then we selected reliable pseudo labels for training the initial SOD model. In the second stage, we used the initial SOD model to predict the saliency map of excluded images and adopted SAM with the everything mode to generate segmentation candidates, then we fused the saliency map and segmentation candidates to predict pseudo labels. Finally we used all reliable pseudo labels generated in the two stages to train one refined SOD model. We also designed a simple but effective SOD model, which can capture rich global context information. Performance evaluation on four public datasets showed that the proposed method significantly outperforms other weakly supervised methods and also achieves comparable performance with fully supervised methods.
17

Božič, Jakob, Domen Tabernik, and Danijel Skočaj. "Mixed supervision for surface-defect detection: From weakly to fully supervised learning." Computers in Industry 129 (August 2021): 103459. http://dx.doi.org/10.1016/j.compind.2021.103459.

18

Ge, Yongtao, Qiang Zhou, Xinlong Wang, Chunhua Shen, Zhibin Wang, and Hao Li. "Point-Teaching: Weakly Semi-supervised Object Detection with Point Annotations." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 1 (June 26, 2023): 667–75. http://dx.doi.org/10.1609/aaai.v37i1.25143.

Abstract:
Point annotations are considerably more time-efficient than bounding box annotations. However, how to use cheap point annotations to boost the performance of semi-supervised object detection is still an open question. In this work, we present Point-Teaching, a weakly- and semi-supervised object detection framework to fully utilize the point annotations. Specifically, we propose a Hungarian-based point-matching method to generate pseudo labels for point-annotated images. We further propose multiple instance learning (MIL) approaches at the level of images and points to supervise the object detector with point annotations. Finally, we propose a simple data augmentation, named Point-Guided Copy-Paste, to reduce the impact of those unmatched points. Experiments demonstrate the effectiveness of our method on a few datasets and various data regimes. In particular, Point-Teaching outperforms the previous best method Group R-CNN by 3.1 AP with 5% fully labeled data and 2.3 AP with 30% fully labeled data on the MS COCO dataset. We believe that our proposed framework can largely lower the bar of learning accurate object detectors and pave the way for its broader applications. The code is available at https://github.com/YongtaoGe/Point-Teaching.
19

Fu, Kun, Wanxuan Lu, Wenhui Diao, Menglong Yan, Hao Sun, Yi Zhang, and Xian Sun. "WSF-NET: Weakly Supervised Feature-Fusion Network for Binary Segmentation in Remote Sensing Image." Remote Sensing 10, no. 12 (December 6, 2018): 1970. http://dx.doi.org/10.3390/rs10121970.

Abstract:
Binary segmentation in remote sensing aims to obtain a binary prediction mask classifying each pixel in a given image. Deep learning methods have shown outstanding performance in this task. These existing methods work in a fully supervised manner and need massive high-quality datasets with manual pixel-level annotations. However, the annotations are generally expensive and sometimes unreliable. Recently, using only image-level annotations, weakly supervised methods have proven to be effective in natural imagery, which significantly reduces the dependence on manual fine labeling. In this paper, we review existing methods and propose a novel weakly supervised binary segmentation framework, which is capable of addressing the issue of class imbalance via a balanced binary training strategy. Besides, a weakly supervised feature-fusion network (WSF-Net) is introduced to adapt to the unique characteristics of objects in remote sensing images. The experiments were implemented on two challenging remote sensing datasets: a Water dataset and a Cloud dataset. The Water dataset was acquired by Google Earth with a resolution of 0.5 m, and the Cloud dataset was acquired by the Gaofen-1 satellite with a resolution of 16 m. The results demonstrate that using only image-level annotations, our method can achieve comparable results to fully supervised methods.
20

Roth, Holger R., Dong Yang, Ziyue Xu, Xiaosong Wang, and Daguang Xu. "Going to Extremes: Weakly Supervised Medical Image Segmentation." Machine Learning and Knowledge Extraction 3, no. 2 (June 2, 2021): 507–24. http://dx.doi.org/10.3390/make3020026.

Abstract:
Medical image annotation is a major hurdle for developing precise and robust machine-learning models. Annotation is expensive, time-consuming, and often requires expert knowledge, particularly in the medical field. Here, we suggest using minimal user interaction in the form of extreme point clicks to train a segmentation model which, in effect, can be used to speed up medical image annotation. An initial segmentation is generated based on the extreme points using the random walker algorithm. This initial segmentation is then used as a noisy supervision signal to train a fully convolutional network that can segment the organ of interest, based on the provided user clicks. Through experimentation on several medical imaging datasets, we show that the predictions of the network can be refined using several rounds of training with the prediction from the same weakly annotated data. Further improvements are shown using the clicked points within a custom-designed loss and attention mechanism. Our approach has the potential to speed up the process of generating new training datasets for the development of new machine-learning and deep-learning-based models for, but not exclusively, medical image analysis.
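The first step described here — turning a handful of extreme-point clicks into an initial segmentation with the random walker algorithm — can be approximated with off-the-shelf tools such as scikit-image. The sketch below uses a simplified seeding scheme (clicked pixels as foreground seeds, the image border as background) on a synthetic image; it is an illustrative stand-in, not the paper's exact procedure.

```python
import numpy as np
from skimage.segmentation import random_walker

def initial_mask_from_extreme_points(image, extreme_points, beta=130):
    """Rough initial segmentation: clicked points seed the foreground, the border seeds background."""
    seeds = np.zeros(image.shape, dtype=np.int32)
    seeds[0, :] = seeds[-1, :] = seeds[:, 0] = seeds[:, -1] = 1   # background label
    for r, c in extreme_points:
        seeds[r, c] = 2                                           # foreground label
    labels = random_walker(image, seeds, beta=beta)
    return labels == 2

# Synthetic 2D "organ": a bright disk on a dark background, clicked at 4 extreme points.
yy, xx = np.mgrid[:64, :64]
image = ((yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2).astype(float) + 0.05 * np.random.rand(64, 64)
clicks = [(17, 32), (47, 32), (32, 17), (32, 47)]
mask = initial_mask_from_extreme_points(image, clicks)
print(mask.sum(), "foreground pixels")
```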
21

Nartey, Obed Tettey, Guowu Yang, Sarpong Kwadwo Asare, Jinzhao Wu, and Lady Nadia Frempong. "Robust Semi-Supervised Traffic Sign Recognition via Self-Training and Weakly-Supervised Learning." Sensors 20, no. 9 (May 8, 2020): 2684. http://dx.doi.org/10.3390/s20092684.

Abstract:
Traffic sign recognition is a classification problem that poses challenges for computer vision and machine learning algorithms. Although both computer vision and machine learning techniques have constantly been improved to solve this problem, the sudden rise in the number of unlabeled traffic signs has become even more challenging. Large data collation and labeling are tedious and expensive tasks that demand much time, expert knowledge, and fiscal resources to satisfy the hunger of deep neural networks. Aside from that, the problem of having unbalanced data also poses a greater challenge to computer vision and machine learning algorithms to achieve better performance. These problems raise the need to develop algorithms that can fully exploit a large amount of unlabeled data, use a small amount of labeled samples, and be robust to data imbalance to build an efficient and high-quality classifier. In this work, we propose a novel semi-supervised classification technique that is robust to small and unbalanced data. The framework integrates weakly-supervised learning and self-training with self-paced learning to generate attention maps to augment the training set and utilizes a novel pseudo-label generation and selection algorithm to generate and select pseudo-labeled samples. The method improves the performance by: (1) normalizing the class-wise confidence levels to prevent the model from ignoring hard-to-learn samples, thereby solving the imbalanced data problem; (2) jointly learning a model and optimizing pseudo-labels generated on unlabeled data; and (3) enlarging the training set to satisfy the hunger of deep learning models. Extensive evaluations on two public traffic sign recognition datasets demonstrate the effectiveness of the proposed technique and provide a potential solution for practical applications.
22

Watanabe, Takumi, Hiroki Takahashi, Yusuke Iwasawa, Yutaka Matsuo, and Ikuko Eguchi Yairi. "Weakly Supervised Learning for Evaluating Road Surface Condition from Wheelchair Driving Data." Information 11, no. 1 (December 19, 2019): 2. http://dx.doi.org/10.3390/info11010002.

Abstract:
Providing accessibility information about sidewalks for people with difficulties with moving is an important social issue. We previously proposed a fully supervised machine learning approach for providing accessibility information by estimating road surface conditions using wheelchair accelerometer data with manually annotated road surface condition labels. However, manually annotating road surface condition labels is expensive and impractical for extensive data. This paper proposes and evaluates a novel method for estimating road surface conditions without human annotation by applying weakly supervised learning. The proposed method only relies on positional information while driving for weak supervision to learn road surface conditions. Our results demonstrate that the proposed method learns detailed and subtle features of road surface conditions, such as the difference in ascending and descending of a slope, the angle of slopes, the exact locations of curbs, and the slight differences of similar pavements. The results demonstrate that the proposed method learns feature representations that are discriminative for a road surface classification task. When the amount of labeled data is 10% or less in a semi-supervised setting, the proposed method outperforms a fully supervised method that uses manually annotated labels to learn feature representations of road surface conditions.
23

Wang, Lukang, Min Zhang, Xu Gao, and Wenzhong Shi. "Advances and Challenges in Deep Learning-Based Change Detection for Remote Sensing Images: A Review through Various Learning Paradigms." Remote Sensing 16, no. 5 (February 25, 2024): 804. http://dx.doi.org/10.3390/rs16050804.

Abstract:
Change detection (CD) in remote sensing (RS) imagery is a pivotal method for detecting changes in the Earth’s surface, finding wide applications in urban planning, disaster management, and national security. Recently, deep learning (DL) has experienced explosive growth and, with its superior capabilities in feature learning and pattern recognition, it has introduced innovative approaches to CD. This review explores the latest techniques, applications, and challenges in DL-based CD, examining them through the lens of various learning paradigms, including fully supervised, semi-supervised, weakly supervised, and unsupervised. Initially, the review introduces the basic network architectures for CD methods using DL. Then, it provides a comprehensive analysis of CD methods under different learning paradigms, summarizing commonly used frameworks. Additionally, an overview of publicly available datasets for CD is offered. Finally, the review addresses the opportunities and challenges in the field, including: (a) incomplete supervised CD, encompassing semi-supervised and weakly supervised methods, which is still in its infancy and requires further in-depth investigation; (b) the potential of self-supervised learning, offering significant opportunities for Few-shot and One-shot Learning of CD; (c) the development of Foundation Models, with their multi-task adaptability, providing new perspectives and tools for CD; and (d) the expansion of data sources, presenting both opportunities and challenges for multimodal CD. These areas suggest promising directions for future research in CD. In conclusion, this review aims to assist researchers in gaining a comprehensive understanding of the CD field.
24

Baek, Kyungjune, Minhyun Lee, and Hyunjung Shim. "PsyNet: Self-Supervised Approach to Object Localization Using Point Symmetric Transformation." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 10451–59. http://dx.doi.org/10.1609/aaai.v34i07.6615.

Abstract:
Existing co-localization techniques significantly lose performance over weakly or fully supervised methods in accuracy and inference time. In this paper, we overcome common drawbacks of co-localization techniques by utilizing self-supervised learning approach. The major technical contributions of the proposed method are two-fold. 1) We devise a new geometric transformation, namely point symmetric transformation and utilize its parameters as an artificial label for self-supervised learning. This new transformation can also play the role of region-drop based regularization. 2) We suggest a heat map extraction method for computing the heat map from the network trained by self-supervision, namely class-agnostic activation mapping. It is done by computing the spatial attention map. Based on extensive evaluations, we observe that the proposed method records new state-of-the-art performance in three fine-grained datasets for unsupervised object localization. Moreover, we show that the idea of the proposed method can be adopted in a modified manner to solve the weakly supervised object localization task. As a result, we outperform the current state-of-the-art technique in weakly supervised object localization by a significant gap.
25

Hoang, Nhat M., Kehong Gong, Chuan Guo, and Michael Bi Mi. "MotionMix: Weakly-Supervised Diffusion for Controllable Motion Generation." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2157–65. http://dx.doi.org/10.1609/aaai.v38i3.27988.

Abstract:
Controllable generation of 3D human motions becomes an important topic as the world embraces digital transformation. Existing works, though making promising progress with the advent of diffusion models, heavily rely on meticulously captured and annotated (e.g., text) high-quality motion corpus, a resource-intensive endeavor in the real world. This motivates our proposed MotionMix, a simple yet effective weakly-supervised diffusion model that leverages both noisy and unannotated motion sequences. Specifically, we separate the denoising objectives of a diffusion model into two stages: obtaining conditional rough motion approximations in the initial T-T* steps by learning the noisy annotated motions, followed by the unconditional refinement of these preliminary motions during the last T* steps using unannotated motions. Notably, though learning from two sources of imperfect data, our model does not compromise motion generation quality compared to fully supervised approaches that access gold data. Extensive experiments on several benchmarks demonstrate that our MotionMix, as a versatile framework, consistently achieves state-of-the-art performances on text-to-motion, action-to-motion, and music-to-dance tasks.
26

Qian, Rui, Yunchao Wei, Honghui Shi, Jiachen Li, Jiaying Liu, and Thomas Huang. "Weakly Supervised Scene Parsing with Point-Based Distance Metric Learning." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 8843–50. http://dx.doi.org/10.1609/aaai.v33i01.33018843.

Abstract:
Semantic scene parsing suffers from the fact that pixel-level annotations are hard to collect. To tackle this issue, we propose a Point-based Distance Metric Learning (PDML) in this paper. PDML does not require dense annotated masks and only leverages several labeled points that are much easier to obtain to guide the training process. Concretely, we leverage the semantic relationship among the annotated points by encouraging the feature representations of the intra- and inter-category points to keep consistent, i.e., points within the same category should have more similar feature representations compared to those from different categories. We formulate such a characteristic into a simple distance metric loss, which collaborates with the point-wise cross-entropy loss to optimize the deep neural networks. Furthermore, to fully exploit the limited annotations, distance metric learning is conducted across different training images instead of simply adopting an image-dependent manner. We conduct extensive experiments on two challenging scene parsing benchmarks, PASCAL-Context and ADE20K, to validate the effectiveness of our PDML, and competitive mIoU scores are achieved.
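One plausible reading of the loss structure described here — point-wise cross-entropy on the labeled pixels plus a metric term that pulls same-category point embeddings together and pushes different-category ones apart — is sketched below in PyTorch. The margin, the weighting factor, and the tensor shapes are invented for illustration and do not reproduce the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def point_supervised_loss(logits, embeddings, point_coords, point_labels, margin=1.0, alpha=0.1):
    """Cross-entropy at labeled points + a simple distance-metric term between those points.

    logits: (C, H, W) class scores, embeddings: (D, H, W) pixel features,
    point_coords: list of (row, col), point_labels: (N,) category per point.
    """
    rows = torch.tensor([p[0] for p in point_coords])
    cols = torch.tensor([p[1] for p in point_coords])
    point_logits = logits[:, rows, cols].t()                          # (N, C)
    point_feats = F.normalize(embeddings[:, rows, cols].t(), dim=1)   # (N, D)

    ce = F.cross_entropy(point_logits, point_labels)

    dists = torch.cdist(point_feats, point_feats)                     # (N, N) pairwise distances
    same = (point_labels[:, None] == point_labels[None, :]).float()
    metric = (same * dists ** 2 + (1 - same) * F.relu(margin - dists) ** 2).mean()
    return ce + alpha * metric

# Toy usage with random tensors (8 classes, 16-D embeddings, 64x64 image).
logits, embeddings = torch.randn(8, 64, 64), torch.randn(16, 64, 64)
coords, labels = [(5, 9), (30, 40), (50, 12)], torch.tensor([2, 2, 5])
print(point_supervised_loss(logits, embeddings, coords, labels).item())
```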
27

Sebai, Meriem, Xinggang Wang, and Tianjiang Wang. "MaskMitosis: a deep learning framework for fully supervised, weakly supervised, and unsupervised mitosis detection in histopathology images." Medical & Biological Engineering & Computing 58, no. 7 (May 22, 2020): 1603–23. http://dx.doi.org/10.1007/s11517-020-02175-z.

28

Lin, Jianghang, Yunhang Shen, Bingquan Wang, Shaohui Lin, Ke Li, and Liujuan Cao. "Weakly Supervised Open-Vocabulary Object Detection." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 4 (March 24, 2024): 3404–12. http://dx.doi.org/10.1609/aaai.v38i4.28127.

Abstract:
Despite weakly supervised object detection (WSOD) being a promising step toward evading strong instance-level annotations, its capability is confined to closed-set categories within a single training dataset. In this paper, we propose a novel weakly supervised open-vocabulary object detection framework, namely WSOVOD, to extend traditional WSOD to detect novel concepts and utilize diverse datasets with only image-level annotations. To achieve this, we explore three vital strategies, including dataset-level feature adaptation, image-level salient object localization, and region-level vision-language alignment. First, we perform data-aware feature extraction to produce an input-conditional coefficient, which is leveraged into dataset attribute prototypes to identify dataset bias and help achieve cross-dataset generalization. Second, a customized location-oriented weakly supervised region proposal network is proposed to utilize high-level semantic layouts from the category-agnostic segment anything model to distinguish object boundaries. Lastly, we introduce a proposal-concept synchronized multiple-instance network, i.e., object mining and refinement with visual-semantic alignment, to discover objects matched to the text embeddings of concepts. Extensive experiments on Pascal VOC and MS COCO demonstrate that the proposed WSOVOD achieves new state-of-the-art compared with previous WSOD methods in both close-set object localization and detection tasks. Meanwhile, WSOVOD enables cross-dataset and open-vocabulary learning to achieve on-par or even better performance than well-established fully-supervised open-vocabulary object detection (FSOVOD).
29

Krishnamurthy, Jayant, and Thomas Kollar. "Jointly Learning to Parse and Perceive: Connecting Natural Language to the Physical World." Transactions of the Association for Computational Linguistics 1 (December 2013): 193–206. http://dx.doi.org/10.1162/tacl_a_00220.

Abstract:
This paper introduces Logical Semantics with Perception (LSP), a model for grounded language acquisition that learns to map natural language statements to their referents in a physical environment. For example, given an image, LSP can map the statement “blue mug on the table” to the set of image segments showing blue mugs on tables. LSP learns physical representations for both categorical (“blue,” “mug”) and relational (“on”) language, and also learns to compose these representations to produce the referents of entire statements. We further introduce a weakly supervised training procedure that estimates LSP’s parameters using annotated referents for entire statements, without annotated referents for individual words or the parse structure of the statement. We perform experiments on two applications: scene understanding and geographical question answering. We find that LSP outperforms existing, less expressive models that cannot represent relational language. We further find that weakly supervised training is competitive with fully supervised training while requiring significantly less annotation effort.
30

Zhang, Wei, Ping Tang, Thomas Corpetti, and Lijun Zhao. "WTS: A Weakly towards Strongly Supervised Learning Framework for Remote Sensing Land Cover Classification Using Segmentation Models." Remote Sensing 13, no. 3 (January 23, 2021): 394. http://dx.doi.org/10.3390/rs13030394.

Abstract:
Land cover classification is one of the most fundamental tasks in the field of remote sensing. In recent years, fully supervised fully convolutional network (FCN)-based semantic segmentation models have achieved state-of-the-art performance in the semantic segmentation task. However, creating pixel-level annotations is prohibitively expensive and laborious, especially when dealing with remote sensing images. Weakly supervised learning methods from weakly labeled annotations can overcome this difficulty to some extent and achieve impressive segmentation results, but results are limited in accuracy. Inspired by point supervision and the traditional segmentation method of seeded region growing (SRG) algorithm, a weakly towards strongly (WTS) supervised learning framework is proposed in this study for remote sensing land cover classification to handle the absence of well-labeled and abundant pixel-level annotations when using segmentation models. In this framework, only several points with true class labels are required as the training set, which are much less expensive to acquire compared with pixel-level annotations through field survey or visual interpretation using high-resolution images. Firstly, they are used to train a Support Vector Machine (SVM) classifier. Once fully trained, the SVM is used to generate the initial seeded pixel-level training set, in which only the pixels with high confidence are assigned with class labels whereas others are unlabeled. They are used to weakly train the segmentation model. Then, the seeded region growing module and fully connected Conditional Random Fields (CRFs) are used to iteratively update the seeded pixel-level training set for progressively increasing pixel-level supervision of the segmentation model. Sentinel-2 remote sensing images are used to validate the proposed framework, and SVM is selected for comparison. In addition, FROM-GLC10 global land cover map is used as training reference to directly train the segmentation model. Experimental results show that the proposed framework outperforms other methods and can be highly recommended for land cover classification tasks when the pixel-level labeled datasets are insufficient by using segmentation models.
31

Wang, Sherrie, William Chen, Sang Michael Xie, George Azzari, and David B. Lobell. "Weakly Supervised Deep Learning for Segmentation of Remote Sensing Imagery." Remote Sensing 12, no. 2 (January 7, 2020): 207. http://dx.doi.org/10.3390/rs12020207.

Abstract:
Accurate automated segmentation of remote sensing data could benefit applications from land cover mapping and agricultural monitoring to urban development surveyal and disaster damage assessment. While convolutional neural networks (CNNs) achieve state-of-the-art accuracy when segmenting natural images with huge labeled datasets, their successful translation to remote sensing tasks has been limited by low quantities of ground truth labels, especially fully segmented ones, in the remote sensing domain. In this work, we perform cropland segmentation using two types of labels commonly found in remote sensing datasets that can be considered sources of “weak supervision”: (1) labels comprised of single geotagged points and (2) image-level labels. We demonstrate that (1) a U-Net trained on a single labeled pixel per image and (2) a U-Net image classifier transferred to segmentation can outperform pixel-level algorithms such as logistic regression, support vector machine, and random forest. While the high performance of neural networks is well-established for large datasets, our experiments indicate that U-Nets trained on weak labels outperform baseline methods with as few as 100 labels. Neural networks, therefore, can combine superior classification performance with efficient label usage, and allow pixel-level labels to be obtained from image labels.
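Training a segmentation network from a single labeled pixel per image, as described above, essentially means evaluating the loss only at the annotated locations. Below is a minimal PyTorch-style sketch of such a masked loss with made-up shapes and class names; it is an illustrative sketch, not the authors' code.

```python
import torch
import torch.nn.functional as F

def single_pixel_loss(logits, pixel_rc, pixel_label):
    """Cross-entropy evaluated only at the one labeled pixel of each image.

    logits: (B, C, H, W), pixel_rc: (B, 2) row/col of the labeled pixel, pixel_label: (B,).
    """
    b = torch.arange(logits.shape[0])
    picked = logits[b, :, pixel_rc[:, 0], pixel_rc[:, 1]]   # (B, C) logits at the labeled pixels
    return F.cross_entropy(picked, pixel_label)

# Toy batch: 4 images, 2 classes (crop / non-crop), 128x128 predictions.
logits = torch.randn(4, 2, 128, 128, requires_grad=True)
pixel_rc = torch.randint(0, 128, (4, 2))
pixel_label = torch.randint(0, 2, (4,))
single_pixel_loss(logits, pixel_rc, pixel_label).backward()
print(logits.grad.abs().sum() > 0)
```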
32

Xie, Fei, Panpan Zhang, Tao Jiang, Jiao She, Xuemin Shen, Pengfei Xu, Wei Zhao, Gang Gao, and Ziyu Guan. "Lesion Segmentation Framework Based on Convolutional Neural Networks with Dual Attention Mechanism." Electronics 10, no. 24 (December 13, 2021): 3103. http://dx.doi.org/10.3390/electronics10243103.

Abstract:
Computational intelligence has been widely used in medical information processing. The deep learning methods, especially, have many successful applications in medical image analysis. In this paper, we proposed an end-to-end medical lesion segmentation framework based on convolutional neural networks with a dual attention mechanism, which integrates both fully and weakly supervised segmentation. The weakly supervised segmentation module achieves accurate lesion segmentation by using bounding-box labels of lesion areas, which solves the problem of the high cost of pixel-level labels with lesions in the medical images. In addition, a dual attention mechanism is introduced to enhance the network’s ability for visual feature learning. The dual attention mechanism (channel and spatial attention) can help the network pay attention to feature extraction from important regions. Compared with the current mainstream method of weakly supervised segmentation using pseudo labels, it can greatly reduce the gaps between ground-truth labels and pseudo labels. The final experimental results show that our proposed framework achieved more competitive performances on oral lesion dataset, and our framework further extended to dermatological lesion segmentation.
33

Wang, Yaodong, Lili Yue, and Maoqing Li. "Cascaded Searching Reinforcement Learning Agent for Proposal-Free Weakly-Supervised Phrase Comprehension." Electronics 13, no. 5 (February 27, 2024): 898. http://dx.doi.org/10.3390/electronics13050898.

Abstract:
Phrase comprehension (PC) aims to locate a specific object in an image according to a given linguistic query. The existing PC methods work in either a fully supervised or proposal-based weakly supervised manner, which rely explicitly or implicitly on expensive region annotations. In order to completely remove the dependence on the supervised region information, this paper proposes to address PC in a proposal-free weakly supervised training paradigm. To this end, we developed a novel cascaded searching reinforcement learning agent (CSRLA). Concretely, we first leveraged a visual language pre-trained model to generate a visual–textual cross-modal attention heatmap. Accordingly, a coarse salient initial region of the referential target was located. Then, we formulated the visual object grounding as a Markov decision process (MDP) in a reinforcement learning framework, where an agent was trained to iteratively search for the target’s complete region from the salient local region. Additionally, we developed a novel confidence discrimination reward function (ConDis_R) to constrain the model to search for a complete and exclusive object region. The experimental results on three benchmark datasets of Refcoco, Refcoco+, and Refcocog demonstrated the effectiveness of our proposed method.
34

Ouassit, Youssef, Reda Moulouki, Mohammed Yassine El Ghoumari, Mohamed Azzouazi, and Soufiane Ardchir. "Liver Segmentation: A Weakly End-to-End Supervised Model." International Journal of Online and Biomedical Engineering (iJOE) 16, no. 09 (August 13, 2020): 77. http://dx.doi.org/10.3991/ijoe.v16i09.15159.

Abstract:
Liver segmentation in CT images has multiple clinical applications and is expanding in scope. Clinicians can employ segmentation for pathological diagnosis of liver disease, surgical planning, visualization and volumetric assessment to select the appropriate treatment. However, segmentation of the liver is still a challenging task due to the low contrast in medical images, tissue similarity with neighboring abdominal organs and high scale and shape variability. Recently, deep learning models have become the state of the art in many natural image processing tasks such as detection, classification, and segmentation due to the availability of annotated data. In the medical field, labeled data is limited due to privacy concerns, the need for experts, and a time-consuming labeling process. In this paper, we present an efficient model combining selective pre-processing, augmentation, post-processing and an improved SegCaps network. Our proposed model is an end-to-end, fully automatic learning approach with a good generalization score on such a limited amount of training data. The model has been validated on two 3D liver segmentation datasets and has obtained competitive segmentation results.
35

Yan, Qing, Tao Sun, Jingjing Zhang, and Lina Xun. "Visibility Estimation Based on Weakly Supervised Learning under Discrete Label Distribution". Sensors 23, no. 23 (November 24, 2023): 9390. http://dx.doi.org/10.3390/s23239390.

Full text of the source
Abstract:
This paper proposes an end-to-end neural network model that fully utilizes the characteristic of uneven fog distribution to estimate visibility in fog images. Firstly, we transform the original single labels into discrete label distributions and introduce discrete label distribution learning on top of the existing classification networks to learn the difference in visibility information among different regions of an image. Then, we employ the bilinear attention pooling module to find the farthest visible region of fog in the image, which is incorporated into an attention-based branch. Finally, we conduct a cascaded fusion of the features extracted from the attention-based branch and the base branch. Extensive experimental results on a real highway dataset and a publicly available synthetic road dataset confirm the effectiveness of the proposed method, which has low annotation requirements, good robustness, and broad application space.
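The conversion of single visibility labels into discrete label distributions can be sketched as a discretized Gaussian over visibility bins trained with a KL-divergence loss, as below; the binning, sigma, and loss form are illustrative assumptions rather than the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def label_to_distribution(visibility, bin_centers, sigma=0.5):
    """Turn scalar visibility labels (shape (B, 1)) into discrete label
    distributions over visibility bins (shape (B, K)) via a discretized Gaussian.
    The Gaussian form and sigma are illustrative choices."""
    d = -((bin_centers - visibility) ** 2) / (2 * sigma ** 2)
    return torch.softmax(d, dim=-1)

def ldl_loss(logits, visibility, bin_centers):
    """KL divergence between predicted and target label distributions."""
    target = label_to_distribution(visibility, bin_centers)
    log_pred = F.log_softmax(logits, dim=-1)
    return F.kl_div(log_pred, target, reduction="batchmean")

# Example usage with assumed visibility bins (e.g., 50 m to 1000 m in 20 steps):
# bin_centers = torch.linspace(50.0, 1000.0, 20)
# loss = ldl_loss(model_logits, visibility_labels.unsqueeze(1), bin_centers)
```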
36

Zhao, Lulu, Yanan Zhao, Ting Liu, and Hanbing Deng. "A Weakly Supervised Semantic Segmentation Model of Maize Seedlings and Weed Images Based on Scrawl Labels". Sensors 23, no. 24 (December 15, 2023): 9846. http://dx.doi.org/10.3390/s23249846.

Full text of the source
Abstract:
The task of semantic segmentation of maize and weed images using fully supervised deep learning models requires a large number of pixel-level mask labels, and the complex morphology of maize and weeds can further increase the cost of image annotation. To solve this problem, we propose a Scrawl Label-based Weakly Supervised Semantic Segmentation Network (SL-Net). SL-Net consists of a pseudo label generation module, an encoder, and a decoder. The pseudo label generation module converts scrawl labels into pseudo labels that replace the manual labels involved in network training; the backbone network for feature extraction is improved on the basis of the DeepLab-V3+ model, and a transfer learning strategy is used to optimize the training process. The results show that the intersection over union between the pseudo labels generated by the module and the ground truth is 83.32%, and the cosine similarity is 93.55%. In semantic segmentation tests of SL-Net on maize seedling and weed images, the mean intersection over union and average precision reached 87.30% and 94.06%, higher than the semantic segmentation accuracy of DeepLab-V3+ and PSPNet under both weakly and fully supervised learning conditions. Experiments are conducted to demonstrate the effectiveness of the proposed method.
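The reported pseudo-label quality metrics (intersection over union and cosine similarity against the ground truth) can be computed as in the short NumPy sketch below.

```python
import numpy as np

def mask_iou(pseudo, gt):
    """Intersection over union of two binary masks."""
    pseudo, gt = pseudo.astype(bool), gt.astype(bool)
    inter = np.logical_and(pseudo, gt).sum()
    union = np.logical_or(pseudo, gt).sum()
    return inter / union if union else 1.0

def mask_cosine_similarity(pseudo, gt):
    """Cosine similarity between the flattened masks."""
    p, g = pseudo.ravel().astype(float), gt.ravel().astype(float)
    denom = np.linalg.norm(p) * np.linalg.norm(g)
    return float(p @ g / denom) if denom else 1.0
```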
37

Zhang, Shuyuan, Hongli Xu, Xiaoran Zhu, and Lipeng Xie. "Automatic Crack Detection Using Weakly Supervised Semantic Segmentation Network and Mixed-Label Training Strategy". Foundations of Computing and Decision Sciences 49, no. 1 (February 1, 2024): 95–118. http://dx.doi.org/10.2478/fcds-2024-0007.

Full text of the source
Abstract:
Automatic crack detection in construction facilities is a challenging yet crucial task. However, existing deep learning (DL)-based semantic segmentation methods in this field rely on fully supervised learning and pixel-level manual annotation, which are time-consuming and labor-intensive. To solve this problem, this paper proposes a novel crack semantic segmentation network using a weakly supervised approach and a mixed-label training strategy. Firstly, an image patch-level crack classifier is trained to generate a coarse localization map, which is combined with a thresholding-based method for automatic pseudo-labeling of cracks. Then, pseudo-annotated and manually annotated samples are integrated at a ratio of 4:1 to train the crack segmentation network with a mixed-label training strategy, in which the manual labels are assigned a higher weight. Experimental results on two public datasets demonstrate that the proposed method achieves accuracy comparable to fully supervised methods while reducing the manual annotation workload by over 65%.
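The mixed-label training strategy (pseudo and manual labels combined 4:1, with manual labels weighted more heavily) can be sketched as a per-sample weighted cross-entropy; the weight values below are placeholders, not the settings reported in the paper.

```python
import torch
import torch.nn.functional as F

def mixed_label_loss(logits, targets, is_manual,
                     manual_weight=2.0, pseudo_weight=1.0):
    """Weighted cross-entropy for a mixed batch of pseudo- and manually labeled samples.

    logits:    (B, C, H, W) segmentation logits
    targets:   (B, H, W) integer masks (pseudo or manual)
    is_manual: (B,) boolean tensor marking manually annotated samples
    """
    per_pixel = F.cross_entropy(logits, targets, reduction="none")   # (B, H, W)
    per_sample = per_pixel.flatten(1).mean(dim=1)                    # (B,)
    weights = torch.where(
        is_manual,
        torch.full_like(per_sample, manual_weight),   # manual labels trusted more
        torch.full_like(per_sample, pseudo_weight),
    )
    return (weights * per_sample).mean()
```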
38

Chen, Hao, Shuang Peng, Chun Du, Jun Li, and Songbing Wu. "SW-GAN: Road Extraction from Remote Sensing Imagery Using Semi-Weakly Supervised Adversarial Learning". Remote Sensing 14, no. 17 (August 23, 2022): 4145. http://dx.doi.org/10.3390/rs14174145.

Full text of the source
Abstract:
Road networks play a fundamental role in our daily life. With the rapid evolution of urban road structure, it is important to extract roads in a timely and precise manner. Recently, road network extraction using deep learning has become an effective and popular approach, but its main shortcoming is the need for large training datasets. These datasets must also be elaborately annotated, which is usually labor-intensive and time-consuming; meanwhile, many weak annotations (such as centerlines from OpenStreetMap) have accumulated over the past few decades. To make full use of these weak annotations, we propose a novel semi-weakly supervised method based on adversarial learning to extract road networks from remote sensing imagery. Our method uses a small set of pixel-wise annotated data and a large amount of weakly annotated data for training. The experimental results show that the proposed approach can match the performance of methods that use a large number of full pixel-wise annotations while using far less fully annotated data.
39

Zheng, Shida, Chenshu Chen, Xi Yang, and Wenming Tan. "MaskBooster: End-to-End Self-Training for Sparsely Supervised Instance Segmentation". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 3 (June 26, 2023): 3696–704. http://dx.doi.org/10.1609/aaai.v37i3.25481.

Full text of the source
Abstract:
This paper introduces sparsely supervised instance segmentation, in which datasets carry fully annotated bounding boxes but only sparsely annotated masks. A direct solution to this task is self-training, which has not yet been fully explored for instance segmentation. We propose MaskBooster for sparsely supervised instance segmentation (SpSIS) with comprehensive usage of pseudo masks. MaskBooster features (1) dynamic and progressive pseudo masks from an online updating teacher model, (2) refinement of binary pseudo masks with the help of a bounding box prior, and (3) learning inter-class prediction distributions via knowledge distillation for soft pseudo masks. As an end-to-end and universal self-training framework, MaskBooster can empower fully supervised algorithms and boost their segmentation performance on SpSIS. Extensive experiments on the COCO and BDD100K datasets validate the effectiveness of MaskBooster. Specifically, on different COCO protocols and on BDD100K, we surpass the sparsely supervised baseline by a large margin for both Mask RCNN and ShapeProp. On SpSIS, MaskBooster also outperforms state-of-the-art weakly and semi-supervised instance segmentation methods under similar annotation budgets.
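One of the ingredients above, refining binary pseudo masks with the bounding-box prior, can be illustrated by the simplified rule below (zero out the mask outside the annotated box and binarize inside it); the exact refinement used in the paper may differ.

```python
import torch

def refine_with_box_prior(pseudo_mask, box, threshold=0.5):
    """Clip a soft pseudo mask (H, W) with its annotated bounding box:
    everything outside the box becomes background, inside is binarized.
    A simplified sketch of exploiting the box prior, not MaskBooster's exact rule."""
    x1, y1, x2, y2 = [int(v) for v in box]
    refined = torch.zeros_like(pseudo_mask)
    refined[y1:y2, x1:x2] = (pseudo_mask[y1:y2, x1:x2] > threshold).float()
    return refined
```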
40

Qiang, Zhuang, Jingmin Shi, and Fanhuai Shi. "Phenotype Tracking of Leafy Greens Based on Weakly Supervised Instance Segmentation and Data Association". Agronomy 12, no. 7 (June 29, 2022): 1567. http://dx.doi.org/10.3390/agronomy12071567.

Full text of the source
Abstract:
Phenotype analysis of leafy green vegetables in the planting environment is a key technology of precision agriculture. In this paper, a deep convolutional neural network is employed to perform instance segmentation of leafy greens by weakly supervised learning based on box-level annotations and Excess Green (ExG) color similarity. Weeds are then filtered based on an area threshold, K-means clustering, and a time-context constraint. Finally, leafy greens are tracked by bipartite graph matching based on a mask IoU measure. Within this phenotype tracking framework, time-context-dependent phenotype analysis tasks such as growth monitoring can be performed. Experiments show that the proposed method achieves a 0.95 F1-score and 76.3 sMOTSA (soft multi-object tracking and segmentation accuracy) using weakly supervised annotation data. Compared with the fully supervised approach, the proposed method effectively reduces the requirements for agricultural data annotation, giving it more potential in practical applications.
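Two of the building blocks mentioned above, the Excess Green (ExG) index and mask-IoU-based bipartite matching for tracking, can be sketched as follows; the IoU threshold is an assumed value.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def excess_green(rgb):
    """Excess Green index ExG = 2g - r - b on channel-normalized RGB (H, W, 3)."""
    rgb = rgb.astype(float)
    s = rgb.sum(axis=-1, keepdims=True) + 1e-6
    r, g, b = np.moveaxis(rgb / s, -1, 0)
    return 2 * g - r - b

def match_tracks(prev_masks, curr_masks, iou_thresh=0.3):
    """Associate instance masks across frames by maximizing total mask IoU
    (bipartite matching); pairs below iou_thresh stay unmatched."""
    cost = np.zeros((len(prev_masks), len(curr_masks)))
    for i, pm in enumerate(prev_masks):
        for j, cm in enumerate(curr_masks):
            inter = np.logical_and(pm, cm).sum()
            union = np.logical_or(pm, cm).sum()
            cost[i, j] = -(inter / union if union else 0.0)   # negate for min-cost solver
    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols) if -cost[i, j] >= iou_thresh]
```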
41

Liu, Yiqing, Qiming He, Hufei Duan, Huijuan Shi, Anjia Han, and Yonghong He. "Using Sparse Patch Annotation for Tumor Segmentation in Histopathological Images". Sensors 22, no. 16 (August 13, 2022): 6053. http://dx.doi.org/10.3390/s22166053.

Full text of the source
Abstract:
Tumor segmentation is a fundamental task in histopathological image analysis. Creating accurate pixel-wise annotations for such segmentation tasks in a fully-supervised training framework requires significant effort. To reduce the burden of manual annotation, we propose a novel weakly supervised segmentation framework based on sparse patch annotation, i.e., only small portions of patches in an image are labeled as ‘tumor’ or ‘normal’. The framework consists of a patch-wise segmentation model called PSeger, and an innovative semi-supervised algorithm. PSeger has two branches for patch classification and image classification, respectively. This two-branch structure enables the model to learn more general features and thus reduce the risk of overfitting when learning sparsely annotated data. We incorporate the idea of consistency learning and self-training into the semi-supervised training strategy to take advantage of the unlabeled images. Trained on the BCSS dataset with only 25% of the images labeled (five patches for each labeled image), our proposed method achieved competitive performance compared to the fully supervised pixel-wise segmentation models. Experiments demonstrate that the proposed solution has the potential to reduce the burden of labeling histopathological images.
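The abstract mentions consistency learning and self-training on unlabeled images; one common way to realize this is a FixMatch-style consistency loss between weakly and strongly augmented views, sketched below as an illustration rather than PSeger's exact training recipe. The augmentation callables and confidence threshold are assumptions.

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, image, weak_aug, strong_aug, conf_thresh=0.9):
    """Pseudo-label the weakly augmented view, then train the strongly
    augmented view on those labels, keeping only confident predictions."""
    with torch.no_grad():
        probs = torch.softmax(model(weak_aug(image)), dim=1)
        conf, pseudo = probs.max(dim=1)                  # per-sample confidence and label
    logits = model(strong_aug(image))
    loss = F.cross_entropy(logits, pseudo, reduction="none")
    return (loss * (conf > conf_thresh).float()).mean()  # mask out low-confidence samples
```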
42

Mo, Shaoyi, Yufeng Shi, Qi Yuan, and Mingyue Li. "A Survey of Deep Learning Road Extraction Algorithms Using High-Resolution Remote Sensing Images". Sensors 24, no. 5 (March 6, 2024): 1708. http://dx.doi.org/10.3390/s24051708.

Full text of the source
Abstract:
Roads are fundamental elements of transportation, connecting cities and rural areas as well as people's lives and work. They play a significant role in areas such as map updating, economic development, tourism, and disaster management. The automatic extraction of road features from high-resolution remote sensing images has long been a hot and challenging topic in remote sensing, and deep learning network models have been widely used for road extraction from remote sensing images in recent years. In light of this, this paper systematically reviews and summarizes deep-learning-based techniques for automatic road extraction from high-resolution remote sensing images. It reviews the application of deep learning network models to road extraction tasks and classifies these models into fully supervised, semi-supervised, and weakly supervised learning based on their use of labels. Finally, a summary and outlook on the current development of deep learning techniques for road extraction are provided.
43

Fan, Yifei. "Image semantic segmentation using deep learning technique". Applied and Computational Engineering 4, no. 1 (June 14, 2023): 810–17. http://dx.doi.org/10.54254/2755-2721/4/2023439.

Full text of the source
Abstract:
With deepening research on image understanding in many application fields, including autonomous driving systems, unmanned aerial vehicle (UAV) landing-point judgment, and virtual reality wearable devices, computer vision and machine learning researchers are paying more and more attention to image semantic segmentation (ISS). In this paper, region-classification-based image semantic segmentation methods are categorized by their region generation algorithms into candidate-region methods and segmentation-mask methods, and superpixel-based image semantic segmentation methods are divided by learning paradigm into fully supervised and weakly supervised methods. Typical algorithms in these categories are summarized and compared. In addition, this paper systematically expounds the role of DL technology in the field of ISS and discusses the main challenges and future development prospects in this field.
44

Kuutti, Sampo, Richard Bowden, and Saber Fallah. "Weakly Supervised Reinforcement Learning for Autonomous Highway Driving via Virtual Safety Cages". Sensors 21, no. 6 (March 13, 2021): 2032. http://dx.doi.org/10.3390/s21062032.

Full text of the source
Abstract:
The use of neural networks and reinforcement learning has become increasingly popular in autonomous vehicle control. However, the opaqueness of the resulting control policies presents a significant barrier to deploying neural network-based control in autonomous vehicles. In this paper, we present a reinforcement learning based approach to autonomous vehicle longitudinal control, where the rule-based safety cages provide enhanced safety for the vehicle as well as weak supervision to the reinforcement learning agent. By guiding the agent to meaningful states and actions, this weak supervision improves the convergence during training and enhances the safety of the final trained policy. This rule-based supervisory controller has the further advantage of being fully interpretable, thereby enabling traditional validation and verification approaches to ensure the safety of the vehicle. We compare models with and without safety cages, as well as models with optimal and constrained model parameters, and show that the weak supervision consistently improves the safety of exploration, speed of convergence, and model performance. Additionally, we show that when the model parameters are constrained or sub-optimal, the safety cages can enable a model to learn a safe driving policy even when the model could not be trained to drive through reinforcement learning alone.
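A virtual safety cage of the kind described above can be thought of as a rule that overrides the learned longitudinal action when a safety margin is violated; the time-headway rule, thresholds, and returned intervention flag below are illustrative assumptions, with the flag standing in for the weak-supervision signal fed back to the agent.

```python
def apply_safety_cage(action, ego_speed, gap,
                      time_headway_min=2.0, hard_brake=-3.0):
    """Rule-based longitudinal safety cage sketch.

    action:    longitudinal command proposed by the RL policy (m/s^2)
    ego_speed: ego vehicle speed (m/s)
    gap:       distance to the lead vehicle (m)
    Returns the (possibly overridden) action and whether the cage intervened.
    """
    time_headway = gap / max(ego_speed, 0.1)    # seconds of headway to the lead vehicle
    if time_headway < time_headway_min:
        return hard_brake, True                 # cage overrides with a hard brake
    return action, False                        # RL action passes through unchanged
```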
45

Wang, Zhuhui, Shijie Wang, Haojie Li, Zhi Dou, and Jianjun Li. "Graph-Propagation Based Correlation Learning for Weakly Supervised Fine-Grained Image Classification". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 12289–96. http://dx.doi.org/10.1609/aaai.v34i07.6912.

Full text of the source
Abstract:
The key to Weakly Supervised Fine-grained Image Classification (WFGIC) is how to pick out discriminative regions and learn discriminative features from them. However, most recent WFGIC methods pick out discriminative regions independently and use their features directly, neglecting the facts that regions' features are mutually semantically correlated and that region groups can be more discriminative. To address these issues, we propose an end-to-end Graph-propagation based Correlation Learning (GCL) model to fully mine and exploit the discriminative potential of region correlations for WFGIC. Specifically, in the discriminative region localization phase, a Criss-cross Graph Propagation (CGP) sub-network is proposed to learn region correlations: it establishes correlations between regions and then enhances each region by weighted aggregation of other regions in a criss-cross way. In this way, each region's representation encodes the global image-level context and the local spatial context simultaneously, guiding the network to implicitly discover more powerful discriminative region groups for WFGIC. In the discriminative feature representation phase, a Correlation Feature Strengthening (CFS) sub-network is proposed to explore the internal semantic correlation among discriminative patches' feature vectors, improving their discriminative power by iteratively enhancing informative elements while suppressing useless ones. Extensive experiments demonstrate the effectiveness of the proposed CGP and CFS sub-networks and show that the GCL model achieves better performance in both accuracy and efficiency.
46

Cheng, Jianpeng, Siva Reddy, Vijay Saraswat, and Mirella Lapata. "Learning an Executable Neural Semantic Parser". Computational Linguistics 45, no. 1 (March 2019): 59–94. http://dx.doi.org/10.1162/coli_a_00342.

Full text of the source
Abstract:
This article describes a neural semantic parser that maps natural language utterances onto logical forms that can be executed against a task-specific environment, such as a knowledge base or a database, to produce a response. The parser generates tree-structured logical forms with a transition-based approach, combining a generic tree-generation algorithm with domain-general grammar defined by the logical language. The generation process is modeled by structured recurrent neural networks, which provide a rich encoding of the sentential context and generation history for making predictions. To tackle mismatches between natural language and logical form tokens, various attention mechanisms are explored. Finally, we consider different training settings for the neural semantic parser, including fully supervised training where annotated logical forms are given, weakly supervised training where denotations are provided, and distant supervision where only unlabeled sentences and a knowledge base are available. Experiments across a wide range of data sets demonstrate the effectiveness of our parser.
47

Sali, Rasoul, Nazanin Moradinasab, Shan Guleria, Lubaina Ehsan, Philip Fernandes, Tilak U. Shah, Sana Syed, and Donald E. Brown. "Deep Learning for Whole-Slide Tissue Histopathology Classification: A Comparative Study in the Identification of Dysplastic and Non-Dysplastic Barrett’s Esophagus". Journal of Personalized Medicine 10, no. 4 (September 23, 2020): 141. http://dx.doi.org/10.3390/jpm10040141.

Full text of the source
Abstract:
The gold standard of histopathology for the diagnosis of Barrett’s esophagus (BE) is hindered by inter-observer variability among gastrointestinal pathologists. Deep learning-based approaches have shown promising results in the analysis of whole-slide tissue histopathology images (WSIs). We performed a comparative study to elucidate the characteristics and behaviors of different deep learning-based feature representation approaches for the WSI-based diagnosis of diseased esophageal architectures, namely, dysplastic and non-dysplastic BE. The results showed that if appropriate settings are chosen, the unsupervised feature representation approach is capable of extracting more relevant image features from WSIs to classify and locate the precursors of esophageal cancer compared to weakly supervised and fully supervised approaches.
48

Wolf, Daniel, Sebastian Regnery, Rafal Tarnawski, Barbara Bobek-Billewicz, Joanna Polańska, and Michael Götz. "Weakly Supervised Learning with Positive and Unlabeled Data for Automatic Brain Tumor Segmentation". Applied Sciences 12, no. 21 (October 24, 2022): 10763. http://dx.doi.org/10.3390/app122110763.

Full text of the source
Abstract:
A major obstacle to learning-based segmentation of healthy and tumorous brain tissue is the requirement to create a fully labeled training dataset. Obtaining these data requires tedious and error-prone manual labeling of both tumor and non-tumor areas. To mitigate this problem, we propose a new method to obtain high-quality classifiers from a dataset in which only small parts of the tumor areas are labeled. This is achieved by using positive and unlabeled learning in conjunction with a domain adaptation technique. The proposed approach leverages the tumor volume, and we show that it can either be derived with simple measures or estimated fully automatically with a proposed estimation method. While learning from sparse samples already reduces the necessary annotation time from 4 h to 5 min, we show that the proposed approach reduces the necessary annotation by roughly a further 50% while maintaining accuracy comparable to traditionally trained classifiers.
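Learning from positive (labeled tumor) and unlabeled voxels is commonly implemented with the non-negative PU risk estimator, sketched below; the paper's exact loss, and the way the class prior is derived from the estimated tumor volume, may differ from this illustration.

```python
import torch

def nnpu_risk(scores_pos, scores_unl, prior,
              loss=torch.nn.functional.softplus):
    """Non-negative PU risk estimator (Kiryo et al. style) as a sketch of
    positive-unlabeled learning for voxel classification.

    scores_pos: classifier scores on labeled positive (tumor) voxels
    scores_unl: classifier scores on unlabeled voxels
    prior:      positive-class prior, e.g. derived from the estimated tumor volume
    """
    # With the logistic loss, loss(-z) penalizes misclassifying positives,
    # loss(z) penalizes predicting positive on (assumed) negatives.
    risk_pos = prior * loss(-scores_pos).mean()
    risk_neg = loss(scores_unl).mean() - prior * loss(scores_pos).mean()
    return risk_pos + torch.clamp(risk_neg, min=0.0)   # clamp keeps the risk non-negative
```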