Journal articles on the topic 'Strongly supervised learning'

To see the other types of publications on this topic, follow the link: Strongly supervised learning.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 journal articles for your research on the topic 'Strongly supervised learning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Lucas, Thomas, Philippe Weinzaepfel, and Gregory Rogez. "Barely-Supervised Learning: Semi-supervised Learning with Very Few Labeled Images." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 2 (June 28, 2022): 1881–89. http://dx.doi.org/10.1609/aaai.v36i2.20082.

Full text
Abstract:
This paper tackles the problem of semi-supervised learning when the set of labeled samples is limited to a small number of images per class, typically less than 10, a problem that we refer to as barely-supervised learning. We analyze in depth the behavior of a state-of-the-art semi-supervised method, FixMatch, which relies on a weakly-augmented version of an image to obtain a supervision signal for a more strongly-augmented version. We show that it frequently fails in barely-supervised scenarios, due to a lack of training signal when no pseudo-label can be predicted with high confidence. We propose a method that leverages self-supervised learning to provide a training signal in the absence of confident pseudo-labels. We then propose two methods to refine the pseudo-label selection process which lead to further improvements. The first one relies on a per-sample history of the model predictions, akin to a voting scheme. The second iteratively updates class-dependent confidence thresholds to better explore classes that are under-represented in the pseudo-labels. Our experiments show that our approach performs significantly better on STL-10 in the barely-supervised regime, e.g., with 4 or 8 labeled images per class.
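As a rough illustration of the two refinements this abstract describes, the sketch below implements per-class confidence thresholding for pseudo-label selection plus a hypothetical update rule that relaxes the thresholds of under-represented classes. All names and the update heuristic are ours, not the paper's.

```python
import numpy as np

def select_pseudo_labels(probs, thresholds):
    """Keep a pseudo-label only if the top class probability clears that
    class's own confidence threshold (a simplified version of the
    class-dependent selection described in the abstract)."""
    preds = probs.argmax(axis=1)          # hard pseudo-labels
    conf = probs.max(axis=1)              # per-sample model confidence
    keep = conf >= thresholds[preds]      # per-class threshold test
    return preds, keep

def update_thresholds(thresholds, preds, keep, target=0.95, lr=0.05):
    """Hypothetical iterative update: lower the threshold of classes that
    rarely appear among selected pseudo-labels, so under-represented
    classes get explored more easily."""
    counts = np.bincount(preds[keep], minlength=len(thresholds))
    rare = counts < counts.mean()         # classes lagging behind
    thresholds[rare] -= lr                # make selection easier for them
    return np.clip(thresholds, 0.5, target)

# toy example: 4 samples, 2 classes
probs = np.array([[0.97, 0.03],
                  [0.60, 0.40],
                  [0.10, 0.90],
                  [0.55, 0.45]])
thr = np.array([0.95, 0.95])
preds, keep = select_pseudo_labels(probs, thr)
print(preds.tolist(), keep.tolist())   # [0, 0, 1, 0] [True, False, False, False]
```

With a single fixed threshold of 0.95 only one sample survives; the update rule then lowers the threshold of class 1, which produced no confident pseudo-labels in this round.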
APA, Harvard, Vancouver, ISO, and other styles
2

She, Dongyu, Ming Sun, and Jufeng Yang. "Learning Discriminative Sentiment Representation from Strongly- and Weakly Supervised CNNs." ACM Transactions on Multimedia Computing, Communications, and Applications 15, no. 3s (January 22, 2020): 1–19. http://dx.doi.org/10.1145/3326335.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Pan, Junwen, Qi Bi, Yanzhan Yang, Pengfei Zhu, and Cheng Bian. "Label-Efficient Hybrid-Supervised Learning for Medical Image Segmentation." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 2 (June 28, 2022): 2026–34. http://dx.doi.org/10.1609/aaai.v36i2.20098.

Full text
Abstract:
Due to the lack of expertise for medical image annotation, the investigation of label-efficient methodology for medical image segmentation has become a heated topic. Recent progress focuses on the efficient utilization of weak annotations together with few strongly-annotated labels so as to achieve comparable segmentation performance in many unprofessional scenarios. However, these approaches only concentrate on the supervision inconsistency between strongly- and weakly-annotated instances but ignore the instance inconsistency inside the weakly-annotated instances, which inevitably leads to performance degradation. To address this problem, we propose a novel label-efficient hybrid-supervised framework, which considers each weakly-annotated instance individually and learns its weight guided by the gradient direction of the strongly-annotated instances, so that the high-quality prior in the strongly-annotated instances is better exploited and the weakly-annotated instances are depicted more precisely. Specifically, our designed dynamic instance indicator (DII) realizes the above objectives, and it is further integrated into our dynamic co-regularization (DCR) framework to alleviate the erroneous accumulation from distortions of weak annotations. Extensive experiments on two hybrid-supervised medical segmentation datasets demonstrate that with only 10% strong labels, the proposed framework can leverage the weak labels efficiently and achieve competitive performance against the 100% strong-label supervised scenario.
APA, Harvard, Vancouver, ISO, and other styles
4

Gui, Haitian, Tao Su, Zhiyong Pang, Han Jiao, Lang Xiong, Xinhua Jiang, Li Li, and Zixin Wang. "Diagnosis of Breast Cancer with Strongly Supervised Deep Learning Neural Network." Electronics 11, no. 19 (September 22, 2022): 3003. http://dx.doi.org/10.3390/electronics11193003.

Full text
Abstract:
The strongly supervised deep convolutional neural network (DCNN) has better performance in assessing breast cancer (BC) because its features, derived from slice-level precise labeling, are more accurate than those of a weakly supervised DCNN with image-level labeling. However, manual slice-level precise labeling is time-consuming and expensive. In addition, the slice-level diagnosis adopted in the DCNN system is incomplete and defective because it lacks information from other slices. In this paper, we studied the impact of the region of interest (ROI) and lesion-level multi-slice diagnosis in the DCNN auxiliary diagnosis system. Firstly, we proposed an improved region-growing algorithm to generate slice-level precise ROIs. Secondly, we adopted the average weighting method as the lesion-level diagnosis criterion after exploring four different weighting methods. Finally, we proposed our complete system, which combined the densely connected convolutional network (DenseNet) with the slice-level ROI and the average-weighting lesion-level diagnosis after evaluating the performance of five DCNNs. The proposed system achieved an AUC of 0.958, an accuracy of 92.5%, a sensitivity of 95.0%, and a specificity of 90.0%. The experimental results showed that our proposed system performs better in BC diagnosis because of the more precise ROI and the more complete information of multiple slices.
APA, Harvard, Vancouver, ISO, and other styles
5

Kasihmuddin, Mohd Shareduwan Mohd, Siti Zulaikha Mohd Jamaludin, Mohd Asyraf Mansor, Habibah A. Wahab, and Siti Maisharah Sheikh Ghadzi. "Supervised Learning Perspective in Logic Mining." Mathematics 10, no. 6 (March 13, 2022): 915. http://dx.doi.org/10.3390/math10060915.

Full text
Abstract:
Creating optimal logic mining is strongly dependent on how the learning data are structured. Without an optimal data structure, intelligent systems integrated into logic mining, such as an artificial neural network, tend to converge to a suboptimal solution. This paper proposes a novel logic mining approach that integrates supervised learning via association analysis to identify the most optimal arrangement with respect to the given logical rule. By utilizing a Hopfield neural network as an associative memory to store information of the logical rule, the optimal logical rule from the correlation analysis will be learned and the corresponding optimal induced logical rule can be obtained. In other words, the optimal logical rule increases the chances for the logic mining to locate the optimal induced logic that generalizes the datasets. The proposed work is extensively tested on a variety of benchmark datasets with various performance metrics. Based on the experimental results, the proposed supervised logic mining demonstrated superiority over the existing method.
APA, Harvard, Vancouver, ISO, and other styles
6

Ma, Jun, and Guolin Yu. "Lagrangian Regularized Twin Extreme Learning Machine for Supervised and Semi-Supervised Classification." Symmetry 14, no. 6 (June 9, 2022): 1186. http://dx.doi.org/10.3390/sym14061186.

Full text
Abstract:
Twin extreme learning machine (TELM) is a phenomenon of symmetry that improves the performance of the traditional extreme learning machine classification algorithm (ELM). Although TELM has been widely researched and applied in the field of machine learning, the need to solve two quadratic programming problems (QPPs) for TELM has greatly limited its development. In this paper, we propose a novel TELM framework called Lagrangian regularized twin extreme learning machine (LRTELM). One significant advantage of our LRTELM over TELM is that the structural risk minimization principle is implemented by introducing the regularization term. Meanwhile, we consider the square of the l2-norm of the vector of slack variables instead of the usual l1-norm in order to make the objective functions strongly convex. Furthermore, a simple and fast iterative algorithm is designed for solving LRTELM, which only needs to iteratively solve a pair of linear equations in order to avoid solving two QPPs. Lastly, we extend LRTELM to semi-supervised learning by introducing manifold regularization to improve the performance of LRTELM when insufficient labeled samples are available, obtaining a Lagrangian semi-supervised regularized twin extreme learning machine (Lap-LRTELM). Experimental results on most datasets show that the proposed LRTELM and Lap-LRTELM are competitive in terms of accuracy and efficiency compared to the state-of-the-art algorithms.
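For context, the sketch below implements a plain extreme learning machine (random hidden layer, closed-form ridge solution for the output weights), the building block that TELM and the proposed LRTELM refine; the Lagrangian iteration itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, hidden=50, reg=1e-2):
    """Plain ELM: random input weights and biases, then a ridge-regularized
    least-squares solve for the output weights beta."""
    W = rng.standard_normal((X.shape[1], hidden))
    b = rng.standard_normal(hidden)
    H = np.tanh(X @ W + b)                     # random hidden-layer features
    # beta = (H'H + reg*I)^-1 H'y  -- one linear solve, no QPP needed
    beta = np.linalg.solve(H.T @ H + reg * np.eye(hidden), H.T @ y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# toy binary problem: the label is the sign of the first feature
X = rng.standard_normal((200, 2))
y = np.sign(X[:, 0])
W, b, beta = elm_fit(X, y)
acc = np.mean(np.sign(elm_predict(X, W, b, beta)) == y)
print(acc)  # should be close to 1.0 on this easily separable toy data
```

The single linear solve above is what makes ELM-family methods fast; LRTELM's contribution is keeping that speed (a pair of linear systems solved iteratively) while adding strong convexity and structural risk minimization.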
APA, Harvard, Vancouver, ISO, and other styles
7

Liu, Yuzhuo, Hangting Chen, Jian Wang, Pei Wang, and Pengyuan Zhang. "Confidence Learning for Semi-Supervised Acoustic Event Detection." Applied Sciences 11, no. 18 (September 15, 2021): 8581. http://dx.doi.org/10.3390/app11188581.

Full text
Abstract:
In recent years, the involvement of synthetic strongly labeled data, weakly labeled data, and unlabeled data has drawn much research attention in semi-supervised acoustic event detection (SAED). The classic self-training method carries out predictions for unlabeled data and then selects predictions with high probabilities as pseudo-labels for retraining. Such models have shown their effectiveness in SAED. However, probabilities are poorly calibrated confidence estimates, and samples with low probabilities are ignored. Hence, we introduce a confidence-based semi-supervised acoustic event detection (C-SAED) framework. The C-SAED method learns confidence deliberately and retrains all data distinctly by applying confidence as weights. Additionally, we apply a power pooling function whose coefficient can be trained automatically and use weakly labeled data more efficiently. The experimental results demonstrate that the generated confidence is proportional to the accuracy of the predictions. Our C-SAED framework achieves a relative error rate reduction of 34% in contrast to the baseline model.
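A minimal sketch of the weighting idea: instead of discarding low-probability pseudo-labels, every sample's loss is scaled by a confidence score. How C-SAED actually learns those confidences is not shown; the function below is a generic illustration with made-up numbers.

```python
import numpy as np

def weighted_bce(p, y, conf):
    """Binary cross-entropy where each (possibly pseudo-labeled) sample is
    weighted by a confidence score rather than dropped below a hard
    threshold (a simplified view of confidence-as-weights retraining)."""
    eps = 1e-12
    loss = -(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    return float(np.sum(conf * loss) / np.sum(conf))

p = np.array([0.9, 0.6, 0.2])        # predicted probabilities
y = np.array([1.0, 1.0, 0.0])        # (pseudo-)labels
conf = np.array([1.0, 0.3, 0.8])     # low-confidence sample down-weighted, not discarded
print(round(weighted_bce(p, y, conf), 4))
```

With a hard 0.5 threshold the middle sample would contribute fully or not at all; here it still contributes, just at 30% weight.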
APA, Harvard, Vancouver, ISO, and other styles
8

She, Dong-Yu, and Kun Xu. "Contrastive Self-supervised Representation Learning Using Synthetic Data." International Journal of Automation and Computing 18, no. 4 (May 11, 2021): 556–67. http://dx.doi.org/10.1007/s11633-021-1297-9.

Full text
Abstract:
Learning discriminative representations with deep neural networks often relies on massive labeled data, which is expensive and difficult to obtain in many real scenarios. As an alternative, self-supervised learning that leverages input itself as supervision is strongly preferred for its soaring performance on visual representation learning. This paper introduces a contrastive self-supervised framework for learning generalizable representations on the synthetic data that can be obtained easily with complete controllability. Specifically, we propose to optimize a contrastive learning task and a physical property prediction task simultaneously. Given the synthetic scene, the first task aims to maximize agreement between a pair of synthetic images generated by our proposed view sampling module, while the second task aims to predict three physical property maps, i.e., depth, instance contour maps, and surface normal maps. In addition, a feature-level domain adaptation technique with adversarial training is applied to reduce the domain difference between the realistic and the synthetic data. Experiments demonstrate that our proposed method achieves state-of-the-art performance on several visual recognition datasets.
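The "maximize agreement between a pair of synthetic images" task is an instance of contrastive learning. A generic InfoNCE-style loss (not the paper's code; the temperature value is an assumption) can be sketched as:

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    """Contrastive (InfoNCE/NT-Xent-style) loss over two views: each row of
    z1 should agree with the matching row of z2 (the positive pair) and
    disagree with all other rows (the negatives)."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                       # cosine similarities as logits
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))     # positives lie on the diagonal

rng = np.random.default_rng(1)
z = rng.standard_normal((8, 16))
aligned = info_nce(z, z + 0.01 * rng.standard_normal((8, 16)))
shuffled = info_nce(z, rng.standard_normal((8, 16)))
print(aligned < shuffled)  # matched views yield a lower loss than random ones
```

Minimizing this loss pulls embeddings of the two sampled views of the same synthetic scene together while pushing apart views of different scenes.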
APA, Harvard, Vancouver, ISO, and other styles
9

Waspada, Indra, Adi Wibowo, and Noel Segura Meraz. "SUPERVISED MACHINE LEARNING MODEL FOR MICRORNA EXPRESSION DATA IN CANCER." Jurnal Ilmu Komputer dan Informasi 10, no. 2 (June 30, 2017): 108. http://dx.doi.org/10.21609/jiki.v10i2.481.

Full text
Abstract:
The cancer cell gene expression data in general have a very large number of features and require analysis to find out which genes strongly influence a specific disease, for diagnosis and drug discovery. In this paper, several supervised learning methods (decision tree, naïve Bayes, neural network, and deep learning) are used to classify cancer cells based on microRNA gene expression, to identify the best method for gene analysis. In this study there is no optimization or tuning of the algorithms, in order to test the ability of the general algorithms. There are 1881 microRNA gene expression features across 25 cancer classes based on tissue location. A simple feature selection method is used to support the comparison of the algorithms. Experiments were conducted with various scenarios to test the classification accuracy.
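As an illustration of the kind of pipeline this abstract describes (simple feature selection followed by a supervised classifier on high-dimensional expression data), here is a self-contained sketch using variance-based filtering and a nearest-centroid classifier on synthetic data; the actual study uses decision trees, naïve Bayes, and neural networks, and the data shapes here are toy values, not the 1881-feature dataset.

```python
import numpy as np

rng = np.random.default_rng(2)

def top_variance_features(X, k):
    """Filter-style feature selection: keep the k highest-variance features
    (a stand-in for the 'simple feature selection method' mentioned above)."""
    return np.argsort(X.var(axis=0))[-k:]

def nearest_centroid_fit(X, y):
    classes = np.unique(y)
    centroids = np.stack([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def nearest_centroid_predict(X, classes, centroids):
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return classes[d.argmin(axis=1)]

# toy "expression" data: 2 classes separated on 5 informative features
# buried among 100 mostly-noise features
X = rng.standard_normal((60, 100))
y = np.repeat([0, 1], 30)
X[y == 1, :5] += 3.0                     # informative, high-variance block
feats = top_variance_features(X, 5)
classes, cent = nearest_centroid_fit(X[:, feats], y)
acc = np.mean(nearest_centroid_predict(X[:, feats], classes, cent) == y)
print(acc)
```

The variance filter finds the informative block because class separation inflates those features' variance, mirroring how simple filters shrink thousands of miRNA features to a tractable subset before classification.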
APA, Harvard, Vancouver, ISO, and other styles
10

Zhao, Zhen, Luping Zhou, Lei Wang, Yinghuan Shi, and Yang Gao. "LaSSL: Label-Guided Self-Training for Semi-supervised Learning." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 9208–16. http://dx.doi.org/10.1609/aaai.v36i8.20907.

Full text
Abstract:
The key to semi-supervised learning (SSL) is to explore adequate information to leverage the unlabeled data. Current dominant approaches aim to generate pseudo-labels on weakly augmented instances and train models on their corresponding strongly augmented variants with high-confidence results. However, such methods are limited by the exclusion of samples with low-confidence pseudo-labels and by under-utilization of the label information. In this paper, we emphasize the crucial role of the label information and propose a Label-guided Self-training approach to Semi-supervised Learning (LaSSL), which improves pseudo-label generation through two mutually boosted strategies. First, with the ground-truth labels and iteratively-polished pseudo-labels, we explore instance relations among all samples and then minimize a class-aware contrastive loss to learn discriminative feature representations that gather same-class samples and scatter different-class samples. Second, on top of the improved feature representations, we propagate the label information to the unlabeled samples across the potential data manifold at the feature-embedding level, which can further improve the labelling of samples with reference to their neighbours. These two strategies are seamlessly integrated and mutually promoted across the whole training process. We evaluate LaSSL on several classification benchmarks under partially labeled settings and demonstrate its superiority over the state-of-the-art approaches.
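LaSSL's second strategy, propagating label information to unlabeled samples via their neighbours in the feature-embedding space, can be sketched generically as a k-nearest-neighbour vote; the paper's actual propagation over the data manifold is more involved, and the embeddings below are made-up toy points.

```python
import numpy as np

def propagate_labels(emb_l, y_l, emb_u, k=3):
    """Assign each unlabeled embedding the majority label of its k nearest
    labeled neighbours in feature space (a generic stand-in for
    feature-level label propagation)."""
    d = ((emb_u[:, None, :] - emb_l[None, :, :]) ** 2).sum(axis=2)
    nn = np.argsort(d, axis=1)[:, :k]            # k nearest labeled samples
    votes = y_l[nn]
    return np.array([np.bincount(v).argmax() for v in votes])

# two labeled clusters and two unlabeled points near them
emb_l = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
y_l = np.array([0, 0, 1, 1])
emb_u = np.array([[0.2, 0.1], [4.9, 5.2]])
print(propagate_labels(emb_l, y_l, emb_u, k=3).tolist())  # → [0, 1]
```

This is why the contrastive step matters: propagation by neighbourhood vote is only reliable once same-class samples are gathered and different-class samples are scattered.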
APA, Harvard, Vancouver, ISO, and other styles
11

Grollmisch, Sascha, and Estefanía Cano. "Improving Semi-Supervised Learning for Audio Classification with FixMatch." Electronics 10, no. 15 (July 28, 2021): 1807. http://dx.doi.org/10.3390/electronics10151807.

Full text
Abstract:
Including unlabeled data in the training process of neural networks using Semi-Supervised Learning (SSL) has shown impressive results in the image domain, where state-of-the-art results were obtained with only a fraction of the labeled data. The commonality between recent SSL methods is that they strongly rely on the augmentation of unannotated data. This is vastly unexplored for audio data. In this work, SSL using the state-of-the-art FixMatch approach is evaluated on three audio classification tasks, including music, industrial sounds, and acoustic scenes. The performance of FixMatch is compared to Convolutional Neural Networks (CNN) trained from scratch, Transfer Learning, and SSL using the Mean Teacher approach. Additionally, a simple yet effective approach for selecting suitable augmentation methods for FixMatch is introduced. FixMatch with the proposed modifications always outperformed Mean Teacher and the CNNs trained from scratch. For the industrial sounds and music datasets, the CNN baseline performance using the full dataset was reached with less than 5% of the initial training data, demonstrating the potential of recent SSL methods for audio data. Transfer Learning outperformed FixMatch only for the most challenging dataset from acoustic scene classification, showing that there is still room for improvement.
APA, Harvard, Vancouver, ISO, and other styles
12

Blume, Christian, Katja Matthes, and Illia Horenko. "Supervised Learning Approaches to Classify Sudden Stratospheric Warming Events." Journal of the Atmospheric Sciences 69, no. 6 (June 1, 2012): 1824–40. http://dx.doi.org/10.1175/jas-d-11-0194.1.

Full text
Abstract:
Sudden stratospheric warmings are prominent examples of dynamical wave–mean flow interactions in the Arctic stratosphere during Northern Hemisphere winter. They are characterized by a strong temperature increase on time scales of a few days and a strongly disturbed stratospheric vortex. This work investigates a wide class of supervised learning methods with respect to their ability to classify stratospheric warmings, using temperature anomalies from the Arctic stratosphere and atmospheric forcings such as ENSO, the quasi-biennial oscillation (QBO), and the solar cycle. It is demonstrated that one representative of the supervised learning methods family, namely nonlinear neural networks, is able to reliably classify stratospheric warmings. Within this framework, one can estimate temporal onset, duration, and intensity of stratospheric warming events independently of a particular pressure level. In contrast to classification methods based on the zonal-mean zonal wind, the approach herein distinguishes major, minor, and final warmings. Instead of a binary measure, it provides continuous conditional probabilities for each warming event representing the amount of deviation from an undisturbed polar vortex. Additionally, the statistical importance of the atmospheric factors is estimated. It is shown how marginalized probability distributions can give insights into the interrelationships between external factors. This approach is applied to 40-yr and interim ECMWF (ERA-40/ERA-Interim) and NCEP–NCAR reanalysis data for the period from 1958 through 2010.
APA, Harvard, Vancouver, ISO, and other styles
13

Kiyono, Shun, Jun Suzuki, and Kentaro Inui. "Mixture of Expert/Imitator Networks: Scalable Semi-Supervised Learning Framework." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4073–81. http://dx.doi.org/10.1609/aaai.v33i01.33014073.

Full text
Abstract:
The current success of deep neural networks (DNNs) in an increasingly broad range of tasks involving artificial intelligence strongly depends on the quality and quantity of labeled training data. In general, the scarcity of labeled data, which is often observed in many natural language processing tasks, is one of the most important issues to be addressed. Semi-supervised learning (SSL) is a promising approach to overcoming this issue by incorporating a large amount of unlabeled data. In this paper, we propose a novel scalable method of SSL for text classification tasks. The unique property of our method, Mixture of Expert/Imitator Networks, is that imitator networks learn to “imitate” the estimated label distribution of the expert network over the unlabeled data, which potentially contributes a set of features for the classification. Our experiments demonstrate that the proposed method consistently improves the performance of several types of baseline DNNs. We also demonstrate that our method has the “more data, better performance” property, with promising scalability to the amount of unlabeled data.
APA, Harvard, Vancouver, ISO, and other styles
14

Toney, Liam, David Fee, Alex Witsil, and Robin S. Matoza. "Waveform Features Strongly Control Subcrater Classification Performance for a Large, Labeled Volcano Infrasound Dataset." Seismic Record 2, no. 3 (July 1, 2022): 167–75. http://dx.doi.org/10.1785/0320220019.

Full text
Abstract:
Volcano infrasound data contain a wealth of information about eruptive patterns, for which machine learning (ML) is an emerging analysis tool. Although global catalogs of labeled infrasound events exist, the application of supervised ML to local (<15 km) volcano infrasound signals has been limited by a lack of robust labeled datasets. Here, we automatically generate a labeled dataset of >7500 explosions recorded by a five-station infrasound network at the highly active Yasur Volcano, Vanuatu. Explosions are located via backprojection and associated with one of Yasur’s two summit subcraters. We then apply a supervised ML approach to classify the subcrater of origin. When trained and tested on data from the same station, our chosen algorithm is >95% accurate; when training and testing on different stations, accuracy drops to about 75%. The choice of waveform features provided to the algorithm strongly influences classification performance.
APA, Harvard, Vancouver, ISO, and other styles
15

Cheng, Yiyuan, Yongquan Zhang, Xingxing Zha, and Dongyin Wang. "On stochastic accelerated gradient with non-strongly convexity." AIMS Mathematics 7, no. 1 (2021): 1445–59. http://dx.doi.org/10.3934/math.2022085.

Full text
Abstract:
In this paper, we consider stochastic approximation algorithms for least-squares and logistic regression with no strong-convexity assumption on the convex loss functions. We develop two algorithms with varied step sizes motivated by the accelerated gradient algorithm, which was initiated for convex stochastic programming. We show that the developed algorithms achieve a rate of $O(1/n^{2})$, where $n$ is the number of samples, which is tighter than the best convergence rate $O(1/n)$ achieved so far on non-strongly-convex stochastic approximation with constant step size for classic supervised learning problems. Our analysis is based on a non-asymptotic analysis of the empirical risk (in expectation) with fewer assumptions than existing analyses. It does not require the finite-dimensionality assumption or the Lipschitz condition. We carry out controlled experiments on synthetic and some standard machine learning datasets. Empirical results justify our theoretical analysis and show a faster convergence rate than other existing methods.
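For reference, the classic constant-step-size baseline that this paper improves upon, stochastic gradient for least squares with Polyak-Ruppert averaging, looks like the sketch below; the accelerated variant achieving the tighter rate is not reproduced, and the step size here is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(3)

def averaged_sgd_least_squares(X, y, step=0.1):
    """Constant-step-size stochastic gradient for least squares with
    Polyak-Ruppert averaging of the iterates -- the classic non-strongly-
    convex baseline with an O(1/n) rate."""
    d = X.shape[1]
    w = np.zeros(d)
    w_avg = np.zeros(d)
    for n, (x, t) in enumerate(zip(X, y), start=1):
        grad = (x @ w - t) * x          # single-sample squared-loss gradient
        w -= step * grad
        w_avg += (w - w_avg) / n        # running average of the iterates
    return w_avg

w_true = np.array([1.0, -2.0])
X = rng.standard_normal((5000, 2))
y = X @ w_true + 0.1 * rng.standard_normal(5000)
w_hat = averaged_sgd_least_squares(X, y)
print(np.round(w_hat, 2))   # close to [1.0, -2.0]
```

Averaging the iterates rather than returning the last one is what makes the constant step size tolerable without strong convexity; the paper's contribution is replacing this plain update with accelerated ones that tighten the rate to $O(1/n^2)$.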
APA, Harvard, Vancouver, ISO, and other styles
16

Zhang, Wei, Ping Tang, Thomas Corpetti, and Lijun Zhao. "WTS: A Weakly towards Strongly Supervised Learning Framework for Remote Sensing Land Cover Classification Using Segmentation Models." Remote Sensing 13, no. 3 (January 23, 2021): 394. http://dx.doi.org/10.3390/rs13030394.

Full text
Abstract:
Land cover classification is one of the most fundamental tasks in the field of remote sensing. In recent years, fully supervised fully convolutional network (FCN)-based semantic segmentation models have achieved state-of-the-art performance in the semantic segmentation task. However, creating pixel-level annotations is prohibitively expensive and laborious, especially when dealing with remote sensing images. Weakly supervised learning methods from weakly labeled annotations can overcome this difficulty to some extent and achieve impressive segmentation results, but the results are limited in accuracy. Inspired by point supervision and the traditional seeded region growing (SRG) segmentation algorithm, a weakly towards strongly (WTS) supervised learning framework is proposed in this study for remote sensing land cover classification, to handle the absence of well-labeled and abundant pixel-level annotations when using segmentation models. In this framework, only a few points with true class labels are required as the training set; these are much less expensive to acquire than pixel-level annotations obtained through field survey or visual interpretation of high-resolution images. Firstly, they are used to train a Support Vector Machine (SVM) classifier. Once fully trained, the SVM is used to generate the initial seeded pixel-level training set, in which only the pixels with high confidence are assigned class labels whereas the others are left unlabeled. These are used to weakly train the segmentation model. Then, the seeded region growing module and fully connected Conditional Random Fields (CRFs) are used to iteratively update the seeded pixel-level training set to progressively increase the pixel-level supervision of the segmentation model. Sentinel-2 remote sensing images are used to validate the proposed framework, and SVM is selected for comparison. In addition, the FROM-GLC10 global land cover map is used as a training reference to directly train the segmentation model. Experimental results show that the proposed framework outperforms the other methods and can be highly recommended for land cover classification with segmentation models when pixel-level labeled datasets are insufficient.
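The seeded region growing step at the core of the WTS framework can be illustrated with a textbook 4-connected SRG on a toy array; the actual module operates on segmentation-model confidences rather than raw pixel values, and the tolerance rule here is a simplification.

```python
import numpy as np
from collections import deque

def seeded_region_growing(values, seeds, tol=0.2):
    """Minimal 4-connected seeded region growing on a 2-D array: each label
    spreads from its seed pixel to neighbours whose value lies within `tol`
    of the seed's value."""
    labels = np.zeros(values.shape, dtype=int)   # 0 = unlabeled
    q = deque()
    for (r, c), lab in seeds.items():
        labels[r, c] = lab
        q.append((r, c, values[r, c]))           # carry the seed's reference value
    while q:
        r, c, ref = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            inside = 0 <= nr < values.shape[0] and 0 <= nc < values.shape[1]
            if inside and labels[nr, nc] == 0 and abs(values[nr, nc] - ref) <= tol:
                labels[nr, nc] = labels[r, c]
                q.append((nr, nc, ref))
    return labels

# two homogeneous "land-cover" patches separated by a sharp edge
img = np.array([[0.1, 0.1, 0.9, 0.9],
                [0.1, 0.1, 0.9, 0.9]])
out = seeded_region_growing(img, {(0, 0): 1, (0, 3): 2})
print(out)
```

Starting from just two labeled points, each region grows to cover its homogeneous patch and stops at the edge, which mirrors how WTS expands point labels into pixel-level supervision.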
APA, Harvard, Vancouver, ISO, and other styles
17

Ruokolainen, Teemu, Oskar Kohonen, Kairit Sirts, Stig-Arne Grönroos, Mikko Kurimo, and Sami Virpioja. "A Comparative Study of Minimally Supervised Morphological Segmentation." Computational Linguistics 42, no. 1 (March 2016): 91–120. http://dx.doi.org/10.1162/coli_a_00243.

Full text
Abstract:
This article presents a comparative study of a subfield of morphology learning referred to as minimally supervised morphological segmentation. In morphological segmentation, word forms are segmented into morphs, the surface forms of morphemes. In the minimally supervised data-driven learning setting, segmentation models are learned from a small number of manually annotated word forms and a large set of unannotated word forms. In addition to providing a literature survey on published methods, we present an in-depth empirical comparison on three diverse model families, including a detailed error analysis. Based on the literature survey, we conclude that the existing methodology contains substantial work on generative morph lexicon-based approaches and methods based on discriminative boundary detection. As for which approach has been more successful, both the previous work and the empirical evaluation presented here strongly imply that the current state of the art is yielded by the discriminative boundary detection methodology.
APA, Harvard, Vancouver, ISO, and other styles
18

Park, Nojin, and Hanseok Ko. "Weakly Supervised Learning for Object Localization Based on an Attention Mechanism." Applied Sciences 11, no. 22 (November 19, 2021): 10953. http://dx.doi.org/10.3390/app112210953.

Full text
Abstract:
Recently, deep learning has been successfully applied to object detection and localization tasks in images. When setting up deep learning frameworks for supervised training with large datasets, strongly labeling the objects facilitates good performance; however, the complexity of the image scene and the large size of the dataset make this a laborious task. Hence, it is of paramount importance that the expensive work associated with strong labeling, such as bounding box annotation, is reduced. In this paper, we propose a method to perform object localization tasks without bounding box annotation in the training process by employing a two-path activation-map-based classifier framework. In particular, we develop an activation-map-based framework that judiciously controls the attention map in the perception branch by adding a two-feature extractor so that better attention weights can be distributed to induce improved performance. The experimental results indicate that our method surpasses the performance of existing deep learning models based on weakly supervised object localization. The proposed method achieves the best performance, with 75.21% Top-1 classification accuracy and 55.15% Top-1 localization accuracy on the CUB-200-2011 dataset.
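The general mechanism behind activation-map-based localization is the classic class activation map (CAM): weighting the final convolutional feature maps by the classifier weights of a target class. The paper's two-path attention framework builds on this idea; the sketch below shows only the basic CAM computation on toy tensors.

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, cls):
    """Classic CAM: weight the last conv feature maps (channels, H, W) by
    the classifier weights of class `cls` to localize the object without
    any bounding-box labels, then normalize to [0, 1]."""
    cam = np.tensordot(fc_weights[cls], feature_maps, axes=([0], [0]))
    cam -= cam.min()
    return cam / (cam.max() + 1e-12)

# toy example: 2 channels of 3x3 features; class 0 fires on channel 0
feats = np.zeros((2, 3, 3))
feats[0, 1, 1] = 1.0                       # object evidence at the center
fc_w = np.array([[1.0, 0.0], [0.0, 1.0]])  # class -> channel weights
cam = class_activation_map(feats, fc_w, cls=0)
print(cam[1, 1] == cam.max())              # the CAM peaks at the object location
```

Thresholding such a map yields a localization box from classification-only training, which is the starting point that attention-controlled variants like the one above try to sharpen.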
APA, Harvard, Vancouver, ISO, and other styles
19

Das, Partha Pratim, Monjur Morshed Rabby, Vamsee Vadlamudi, and Rassel Raihan. "Moisture Content Prediction in Polymer Composites Using Machine Learning Techniques." Polymers 14, no. 20 (October 18, 2022): 4403. http://dx.doi.org/10.3390/polym14204403.

Full text
Abstract:
The principal objective of this study is to employ non-destructive broadband dielectric/impedance spectroscopy and machine learning techniques to estimate the moisture content in FRP composites under hygrothermal aging. Here, classification and regression machine learning models that can accurately predict the current moisture saturation state are developed using the frequency-domain dielectric response of the composite, in conjunction with the time-domain hygrothermal aging effect. First, to categorize the composites based on the present state of the absorbed moisture, supervised classification learning models (i.e., quadratic discriminant analysis (QDA), support vector machine (SVM), and artificial neural network-based multilayer perceptron (MLP) classifier) have been developed. Later, to accurately estimate the relative moisture absorption from the dielectric data, supervised regression models (i.e., multiple linear regression (MLR), decision tree regression (DTR), and multi-layer perceptron (MLP) regression) have been developed, which can effectively estimate the relative moisture absorption from the dielectric response of the material with an R2 value greater than 0.95. The physics behind the hygrothermal aging of the composites has then been interpreted by comparing the model attributes to see which characteristics most strongly influence the predictions.
APA, Harvard, Vancouver, ISO, and other styles
20

Theocharides, Spyros, Marios Theristis, George Makrides, Marios Kynigos, Chrysovalantis Spanias, and George E. Georghiou. "Comparative Analysis of Machine Learning Models for Day-Ahead Photovoltaic Power Production Forecasting." Energies 14, no. 4 (February 18, 2021): 1081. http://dx.doi.org/10.3390/en14041081.

Full text
Abstract:
A main challenge for integrating the intermittent photovoltaic (PV) power generation remains the accuracy of day-ahead forecasts and the establishment of robust performing methods. The purpose of this work is to address these technological challenges by evaluating the day-ahead PV production forecasting performance of different machine learning models under different supervised learning regimes and minimal input features. Specifically, the day-ahead forecasting capability of Bayesian neural network (BNN), support vector regression (SVR), and regression tree (RT) models was investigated by employing the same dataset for training and performance verification, thus enabling a valid comparison. The training regime analysis demonstrated that the performance of the investigated models was strongly dependent on the timeframe of the train set, training data sequence, and application of irradiance condition filters. Furthermore, accurate results were obtained utilizing only the measured power output and other calculated parameters for training. Consequently, useful information is provided for establishing a robust day-ahead forecasting methodology that utilizes calculated input parameters and an optimal supervised learning approach. Finally, the obtained results demonstrated that the optimally constructed BNN outperformed all other machine learning models achieving forecasting accuracies lower than 5%.
APA, Harvard, Vancouver, ISO, and other styles
21

Yun, JooYeol, JungWoo Oh, and IlDong Yun. "Gradually Applying Weakly Supervised and Active Learning for Mass Detection in Breast Ultrasound Images." Applied Sciences 10, no. 13 (June 29, 2020): 4519. http://dx.doi.org/10.3390/app10134519.

Full text
Abstract:
We propose a method for effectively utilizing weakly annotated image data in object detection tasks on breast ultrasound images. Given the problem setting where a small, strongly annotated dataset and a large, weakly annotated dataset with no bounding box information are available, training an object detection model becomes a non-trivial problem. We suggest a controlled weight for handling the effect of weakly annotated images in a two-stage object detection model. We also present a subsequent active learning scheme for safely assigning weakly annotated images a strong annotation using the trained model. Experimental results showed a 24 percentage-point increase in the correct localization (CorLoc) measure, the ratio of correctly localized and classified images, when the properly controlled weight was assigned. Performing active learning after the model is trained showed an additional increase in CorLoc. We also tested the proposed method on the Stanford Dogs dataset, obtaining similar results, to confirm that it can be applied to general cases where strong annotations are insufficient. The presented method shows that higher performance is achievable with less annotation effort.
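The "controlled weight" idea can be sketched as a loss in which weakly annotated images contribute with a reduced factor; the names, data, and weighting scheme below are illustrative assumptions, not the paper's implementation:

```python
# Hypothetical sketch: strongly annotated images contribute fully to the loss,
# weakly annotated images are down-weighted by a controlled factor.
def weighted_detection_loss(strong_losses, weak_losses, weak_weight=0.3):
    total = sum(strong_losses) + weak_weight * sum(weak_losses)
    return total / (len(strong_losses) + len(weak_losses))

strong = [0.8, 0.6]       # per-image losses from strongly annotated data
weak = [1.2, 1.5, 0.9]    # per-image losses from weakly annotated data
print(round(weighted_detection_loss(strong, weak), 3))
```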
APA, Harvard, Vancouver, ISO, and other styles
22

Tang, Rui, Fangling Pu, Rui Yang, Zhaozhuo Xu, and Xin Xu. "Multi-Domain Fusion Graph Network for Semi-Supervised PolSAR Image Classification." Remote Sensing 15, no. 1 (December 27, 2022): 160. http://dx.doi.org/10.3390/rs15010160.

Full text
Abstract:
The expensive acquisition of labeled data limits the practical use of supervised learning for polarimetric synthetic aperture radar (PolSAR) image analysis. Semi-supervised learning has attracted considerable attention as it can exploit a few labeled samples together with abundant unlabeled data. The scattering response of PolSAR data is strongly dependent on spatial distribution, which provides rich information about land-cover properties. In this paper, we propose a semi-supervised learning method named multi-domain fusion graph network (MDFGN) to explore multi-domain fused features spanning the spatial and feature domains. Three major factors strengthen the proposed method for PolSAR image analysis. Firstly, we propose a novel sample selection criterion to select reliable unlabeled data for training set expansion. A multi-domain fusion graph is proposed to improve feature diversity by extending sample selection from the feature domain to the spatial-feature fusion domain. In this way, the selection accuracy is improved: from a few labeled samples, many accurately pseudo-labeled samples are obtained. Secondly, a multi-model triplet encoder is proposed to achieve superior feature extraction. Equipped with a triplet loss, the limited training samples are fully utilized. For expanded training samples with different patch sizes, multiple models are obtained for acquiring the fused classification result. Thirdly, a multi-level fusion strategy is proposed to apply different image patch sizes to different expanded training data and obtain the fused classification result. The experiments are conducted on Radarsat-2 and AIRSAR images. With few labeled samples (about 0.003–0.007%), the overall accuracy of the proposed method ranges between 94.78% and 99.24%, which demonstrates the proposed method’s robustness and excellence.
APA, Harvard, Vancouver, ISO, and other styles
23

Couture, Heather D. "Deep Learning-Based Prediction of Molecular Tumor Biomarkers from H&E: A Practical Review." Journal of Personalized Medicine 12, no. 12 (December 7, 2022): 2022. http://dx.doi.org/10.3390/jpm12122022.

Full text
Abstract:
Molecular and genomic properties are critical in selecting cancer treatments to target individual tumors, particularly for immunotherapy. However, the methods to assess such properties are expensive, time-consuming, and often not routinely performed. Applying machine learning to H&E images can provide a more cost-effective screening method. Dozens of studies over the last few years have demonstrated that a variety of molecular biomarkers can be predicted from H&E alone using the advancements of deep learning: molecular alterations, genomic subtypes, protein biomarkers, and even the presence of viruses. This article reviews the diverse applications across cancer types and the methodology to train and validate these models on whole slide images. From bottom-up to pathologist-driven to hybrid approaches, the leading trends include a variety of weakly supervised deep learning-based approaches, as well as mechanisms for training strongly supervised models in select situations. While results of these algorithms look promising, some challenges still persist, including small training sets, rigorous validation, and model explainability. Biomarker prediction models may yield a screening method to determine when to run molecular tests or an alternative when molecular tests are not possible. They also create new opportunities in quantifying intratumoral heterogeneity and predicting patient outcomes.
APA, Harvard, Vancouver, ISO, and other styles
24

Zhou, Ran, Yanghan Ou, Xiaoyue Fang, M. Reza Azarpazhooh, Haitao Gan, Zhiwei Ye, J. David Spence, Xiangyang Xu, and Aaron Fenster. "Ultrasound carotid plaque segmentation via image reconstruction-based self-supervised learning with limited training labels." Mathematical Biosciences and Engineering 20, no. 2 (2022): 1617–36. http://dx.doi.org/10.3934/mbe.2023074.

Full text
Abstract:
Carotid total plaque area (TPA) is an important contributing measurement to the evaluation of stroke risk. Deep learning provides an efficient method for ultrasound carotid plaque segmentation and TPA quantification. However, high performance of deep learning requires datasets with many labeled images for training, which is very labor-intensive. Thus, we propose an image reconstruction-based self-supervised learning algorithm (IR-SSL) for carotid plaque segmentation when few labeled images are available. IR-SSL consists of pre-trained and downstream segmentation tasks. The pre-trained task learns region-wise representations with local consistency by reconstructing plaque images from randomly partitioned and disordered images. The pre-trained model is then transferred to the segmentation network as the initial parameters in the downstream task. IR-SSL was implemented with two networks, UNet++ and U-Net, and evaluated on two independent datasets of 510 carotid ultrasound images from 144 subjects at SPARC (London, Canada) and 638 images from 479 subjects at Zhongnan hospital (Wuhan, China). Compared to the baseline networks, IR-SSL improved the segmentation performance when trained on few labeled images (n = 10, 30, 50 and 100 subjects). For 44 SPARC subjects, IR-SSL yielded Dice-similarity-coefficients (DSC) of 80.14–88.84%, and algorithm TPAs were strongly correlated (r = 0.962–0.993, p < 0.001) with manual results. The models trained on the SPARC images but applied to the Zhongnan dataset without retraining achieved DSCs of 80.61–88.18% and strong correlation with manual segmentation (r = 0.852–0.978, p < 0.001). These results suggest that IR-SSL could improve deep learning when trained on small labeled datasets, making it useful for monitoring carotid plaque progression/regression in clinical use and trials.
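The Dice similarity coefficient used to score the segmentations is straightforward to compute; a minimal sketch on hypothetical flattened binary masks:

```python
# Dice similarity coefficient (DSC) between two flattened binary masks.
def dice(mask_a, mask_b):
    intersection = sum(a and b for a, b in zip(mask_a, mask_b))
    return 2.0 * intersection / (sum(mask_a) + sum(mask_b))

pred = [1, 1, 1, 0, 0, 1]   # hypothetical predicted plaque mask
true = [1, 1, 0, 0, 1, 1]   # hypothetical manual segmentation
print(round(dice(pred, true), 3))  # 0.75
```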
APA, Harvard, Vancouver, ISO, and other styles
25

Pinho, Eduardo, and Carlos Costa. "Unsupervised Learning for Concept Detection in Medical Images: A Comparative Analysis." Applied Sciences 8, no. 8 (July 24, 2018): 1213. http://dx.doi.org/10.3390/app8081213.

Full text
Abstract:
As digital medical imaging becomes more prevalent and archives increase in size, representation learning exposes an interesting opportunity for enhanced medical decision support systems. On the other hand, medical imaging data is often scarce and short on annotations. In this paper, we present an assessment of unsupervised feature learning approaches for images in biomedical literature which can be applied to automatic biomedical concept detection. Six unsupervised representation learning methods were built, including traditional bags of visual words, autoencoders, and generative adversarial networks. Each model was trained, and their respective feature spaces evaluated using images from the ImageCLEF 2017 concept detection task. The highest mean F1 score of 0.108 was obtained using representations from an adversarial autoencoder, which increased to 0.111 when combined with the representations from the sparse denoising autoencoder. We conclude that it is possible to obtain more powerful representations with modern deep learning approaches than with previously popular computer vision methods. The possibility of semi-supervised learning as well as its use in medical information retrieval problems are the next steps to be strongly considered.
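The mean F1 score used to rank the representations balances precision and recall; a minimal sketch (the concept-detection counts below are hypothetical):

```python
# F1 score from true positives (tp), false positives (fp) and false negatives (fn).
def f1_score(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts for predicted vs. ground-truth biomedical concepts.
print(round(f1_score(tp=12, fp=90, fn=95), 3))
```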
APA, Harvard, Vancouver, ISO, and other styles
26

Chen, Suting, Dongwei Shao, Xiao Shu, Chuang Zhang, and Jun Wang. "FCC-Net: A Full-Coverage Collaborative Network for Weakly Supervised Remote Sensing Object Detection." Electronics 9, no. 9 (August 21, 2020): 1356. http://dx.doi.org/10.3390/electronics9091356.

Full text
Abstract:
With an ever-increasing resolution of optical remote-sensing images, how to extract information from these images efficiently and effectively has gradually become a challenging problem. As it is prohibitively expensive to label every object in these high-resolution images manually, there is only a small number of high-resolution images with detailed object labels available, highly insufficient for common machine learning-based object detection algorithms. Another challenge is the huge range of object sizes: it is difficult to locate large objects, such as buildings, and small objects, such as vehicles, simultaneously. To tackle these problems, we propose a novel neural-network-based remote sensing object detector called full-coverage collaborative network (FCC-Net). The detector employs various tailored designs, such as hybrid dilated convolutions and multi-level pooling, to enhance multiscale feature extraction and improve its robustness in dealing with objects of different sizes. Moreover, by utilizing asynchronous iterative training alternating between strongly supervised and weakly supervised detectors, the proposed method only requires image-level ground truth labels for training. To evaluate the approach, we compare it against a few state-of-the-art techniques on two large-scale remote-sensing image benchmark sets. The experimental results show that FCC-Net significantly outperforms other weakly supervised methods in detection accuracy. Through a comprehensive ablation study, we also demonstrate the efficacy of the proposed dilated convolutions and multi-level pooling in increasing the scale invariance of an object detector.
APA, Harvard, Vancouver, ISO, and other styles
27

Kristiansen, Stein, Konstantinos Nikolaidis, Thomas Plagemann, Vera Goebel, Gunn Marit Traaen, Britt Øverland, Lars Aakerøy, et al. "Machine Learning for Sleep Apnea Detection with Unattended Sleep Monitoring at Home." ACM Transactions on Computing for Healthcare 2, no. 2 (March 2021): 1–25. http://dx.doi.org/10.1145/3433987.

Full text
Abstract:
Sleep apnea is a common and strongly under-diagnosed severe sleep-related respiratory disorder with periods of disrupted or reduced breathing during sleep. To diagnose sleep apnea, sleep data are collected with either polysomnography or polygraphy and scored by a sleep expert. We investigate in this work the use of supervised machine learning to automate the analysis of polygraphy data from the A3 study containing more than 7,400 hours of sleep monitoring data from 579 patients. We conduct a systematic comparative study of classification performance and resource use with different combinations of 27 classifiers and four sleep signals. The classifiers achieve up to 0.8941 accuracy (kappa: 0.7877) when using all four signal types simultaneously and up to 0.8543 accuracy (kappa: 0.7080) with only one signal, i.e., oxygen saturation. Methods based on deep learning outperform other methods by a large margin. All deep learning methods achieve nearly the same maximum classification performance even when they have very different architectures and sizes. When jointly accounting for classification performance, resource consumption, and the ability to achieve high classification performance with less training data, we find that convolutional neural networks substantially outperform the other classifiers.
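The kappa values quoted alongside accuracy correct for chance agreement between classifier and expert scorer; a minimal sketch of Cohen's kappa on hypothetical labels:

```python
# Cohen's kappa: observed agreement corrected for the agreement
# expected by chance from the two label distributions.
def cohens_kappa(y_true, y_pred):
    n = len(y_true)
    labels = set(y_true) | set(y_pred)
    p_obs = sum(t == p for t, p in zip(y_true, y_pred)) / n
    p_exp = sum((sum(t == c for t in y_true) / n) * (sum(p == c for p in y_pred) / n)
                for c in labels)
    return (p_obs - p_exp) / (1 - p_exp)

y_true = ["apnea", "apnea", "normal", "normal"]   # hypothetical expert scoring
y_pred = ["apnea", "apnea", "normal", "apnea"]    # hypothetical classifier output
print(round(cohens_kappa(y_true, y_pred), 2))  # 0.5
```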
APA, Harvard, Vancouver, ISO, and other styles
28

Dumitrescu, Florin, Bogdan Ceachi, Ciprian-Octavian Truică, Mihai Trăscău, and Adina Magda Florea. "A Novel Deep Learning-Based Relabeling Architecture for Space Objects Detection from Partially Annotated Astronomical Images." Aerospace 9, no. 9 (September 17, 2022): 520. http://dx.doi.org/10.3390/aerospace9090520.

Full text
Abstract:
Space Surveillance and Tracking is a task that requires the development of systems that can accurately discriminate between natural and man-made objects orbiting Earth. Managing this discrimination requires analyzing a large amount of partially annotated astronomical images collected using a network of on-ground and potentially space-based optical telescopes. Thus, the main objective of this article is to propose a novel architecture that improves the automatic annotation of astronomical images. To achieve this objective, we present a new method for automatic detection and classification of space objects (point-like objects and streaks) in a supervised manner, given real-world partially annotated images in the FITS (Flexible Image Transport System) format. Results are strongly dependent on the preprocessing techniques applied to the images. Therefore, different techniques were tested, including our method for object filtering and bounding box extraction. Based on our relabeling pipeline, we can easily follow how the number of detected objects gradually increases after each iteration, achieving a mean average precision of 98%.
APA, Harvard, Vancouver, ISO, and other styles
29

Griffiths, H. M., D. P. Kalivas, G. P. Petropoulos, and P. Dimou. "Mapping erosion and deposition changes in the protected wetlands of the Axios River Delta, N. Greece using remote sensing and GIS." Bulletin of the Geological Society of Greece 47, no. 1 (December 21, 2016): 245. http://dx.doi.org/10.12681/bgsg.10938.

Full text
Abstract:
Our study explores the use of a range of image processing methods combined with Landsat TM imagery for mapping the morphodynamics of the delta of the Axios River, one of the largest rivers of Greece, between 1984 and 2009. The techniques evaluated ranged from traditional arithmetic operations on spectral bands to unsupervised and supervised classification methods. Changes in coastline morphology and erosion and deposition magnitudes were also estimated from direct photo-interpretation of the TM images, forming our reference dataset. Our analysis, conducted in a GIS environment, showed noticeable changes in the coastline of the study area, with erosion occurring mostly in the early periods followed by deposition later on. In addition, relatively similar patterns of coastline change were obtained from the different approaches, albeit of different magnitude. The differences observed were largely attributed to the varying ability of the different approaches to utilise the spectral information content of the TM data, strongly linked to the relative strengths and weaknesses underlying the implementation of the different techniques. Notably, supervised classifiers based on machine learning showed the closest results to the photo-interpretation of TM, evidencing a promising potential for monitoring shoreline changes over long timescales in a cost-effective and rapid manner.
APA, Harvard, Vancouver, ISO, and other styles
30

Girard, Simon R., Vincent Legault, Guy Bois, and Jean-François Boland. "Avionics Graphics Hardware Performance Prediction with Machine Learning." Scientific Programming 2019 (June 3, 2019): 1–15. http://dx.doi.org/10.1155/2019/9195845.

Full text
Abstract:
Within the strongly regulated avionic engineering field, conventional graphical desktop hardware and software application programming interfaces (APIs) cannot be used because they do not conform to avionic certification standards. We observe the need for better avionic graphical hardware, but system engineers lack system design tools related to graphical hardware. Endorsing an optimal hardware architecture by estimating the performance of graphical software, when a stable rendering engine does not yet exist, represents a major challenge. As proven by previous hardware emulation tools, there is also a potential for development cost reduction, by enabling developers to obtain a first estimate of the performance of their graphical engine early in the development cycle. In this paper, we propose to replace expensive development platforms by predictive software running on a desktop computer. More precisely, we present a system design tool that helps predict the rendering performance of graphical hardware based on the OpenGL Safety Critical API. First, we create nonparametric models of the underlying hardware, with machine learning, by analyzing the instantaneous frames per second (FPS) of the rendering of a synthetic 3D scene and by drawing multiple times with various characteristics that are typically found in synthetic vision applications. The number of characteristic combinations used during this supervised training phase is a subset of all possible combinations, but performance predictions can be arbitrarily extrapolated. To validate our models, we render an industrial scene with characteristic combinations not used during the training phase and we compare the predictions to the real values. We find a median prediction error of less than 4 FPS.
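The validation statistic reported ("median prediction error of less than 4 FPS") amounts to the median of the absolute prediction errors; a minimal sketch with hypothetical FPS values:

```python
import statistics

# Median absolute prediction error over a validation set of rendered scenes.
# The FPS values below are hypothetical.
def median_fps_error(actual_fps, predicted_fps):
    return statistics.median(abs(a - p) for a, p in zip(actual_fps, predicted_fps))

actual = [60.0, 45.0, 30.0, 24.0, 58.0]
predicted = [57.0, 48.0, 28.0, 27.0, 56.0]
print(median_fps_error(actual, predicted))  # 3.0, i.e. under 4 FPS
```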
APA, Harvard, Vancouver, ISO, and other styles
31

KUANG, RUI, EUGENE IE, KE WANG, KAI WANG, MAHIRA SIDDIQI, YOAV FREUND, and CHRISTINA LESLIE. "PROFILE-BASED STRING KERNELS FOR REMOTE HOMOLOGY DETECTION AND MOTIF EXTRACTION." Journal of Bioinformatics and Computational Biology 03, no. 03 (June 2005): 527–50. http://dx.doi.org/10.1142/s021972000500120x.

Full text
Abstract:
We introduce novel profile-based string kernels for use with support vector machines (SVMs) for the problems of protein classification and remote homology detection. These kernels use probabilistic profiles, such as those produced by the PSI-BLAST algorithm, to define position-dependent mutation neighborhoods along protein sequences for inexact matching of k-length subsequences ("k-mers") in the data. By use of an efficient data structure, the kernels are fast to compute once the profiles have been obtained. For example, the time needed to run PSI-BLAST in order to build the profiles is significantly longer than both the kernel computation time and the SVM training time. We present remote homology detection experiments based on the SCOP database where we show that profile-based string kernels used with SVM classifiers strongly outperform all recently presented supervised SVM methods. We further examine how to incorporate predicted secondary structure information into the profile kernel to obtain a small but significant performance improvement. We also show how we can use the learned SVM classifier to extract "discriminative sequence motifs" — short regions of the original profile that contribute almost all the weight of the SVM classification score — and show that these discriminative motifs correspond to meaningful structural features in the protein data. The use of PSI-BLAST profiles can be seen as a semi-supervised learning technique, since PSI-BLAST leverages unlabeled data from a large sequence database to build more informative profiles. Recently presented "cluster kernels" give general semi-supervised methods for improving SVM protein classification performance. We show that our profile kernel results also outperform cluster kernels while providing much better scalability to large datasets. Supplementary website:.
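The profile kernel generalizes the simpler spectrum kernel, which counts exact k-mer matches between two sequences; a minimal sketch of the spectrum kernel (the paper's profile version additionally allows inexact matching via PSI-BLAST-derived mutation neighborhoods, which is not shown here; the sequences are hypothetical):

```python
from collections import Counter

# Spectrum kernel: inner product of the k-mer count vectors of two sequences.
def kmer_counts(seq, k):
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def spectrum_kernel(s1, s2, k=3):
    c1, c2 = kmer_counts(s1, k), kmer_counts(s2, k)
    return sum(count * c2[kmer] for kmer, count in c1.items())

# Two hypothetical protein fragments differing at one residue.
print(spectrum_kernel("MKVLAAGVLAA", "MKVLSAGVLAA", k=3))  # 8 shared k-mer matches
```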
APA, Harvard, Vancouver, ISO, and other styles
32

Chaki, D., A. Das, and MI Zaber. "A comparison of three discrete methods for classification of heart disease data." Bangladesh Journal of Scientific and Industrial Research 50, no. 4 (December 11, 2015): 293–96. http://dx.doi.org/10.3329/bjsir.v50i4.25839.

Full text
Abstract:
The classification of heart disease patients is of great importance in cardiovascular disease diagnosis. Numerous data mining techniques have been used so far by researchers to aid health care professionals in the diagnosis of heart disease, and many algorithms have been proposed in the previous few years for this task. In this paper, we have studied different supervised machine learning techniques for classification of heart disease data and have performed a procedural comparison of these. We have used the C4.5 decision tree classifier, a naïve Bayes classifier, and a support vector machine (SVM) classifier over a large set of heart disease data. The data used in this study is the Cleveland Clinic Foundation Heart Disease Data Set available at the UCI Machine Learning Repository. We have found that SVM outperformed both the naïve Bayes and C4.5 classifiers, giving the best accuracy rate and correctly classifying the highest number of instances. We have also found that the naïve Bayes classifier achieved a competitive performance even though the assumption of normality of the data is strongly violated. Bangladesh J. Sci. Ind. Res. 50(4), 293-296, 2015
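A Gaussian naïve Bayes classifier of the kind compared here rests on per-feature normality within each class, which is the assumption the abstract notes can be violated; a minimal from-scratch sketch on toy data (not the Cleveland dataset):

```python
import math

# Gaussian naive Bayes: per class, fit a mean and variance per feature
# (this is the normality assumption), then classify by maximum posterior.
def fit_gaussian_nb(X, y):
    model = {}
    for c in set(y):
        rows = [x for x, label in zip(X, y) if label == c]
        means = [sum(col) / len(col) for col in zip(*rows)]
        variances = [max(sum((v - m) ** 2 for v in col) / len(col), 1e-9)
                     for col, m in zip(zip(*rows), means)]
        model[c] = (len(rows) / len(X), means, variances)
    return model

def predict(model, x):
    def log_gauss(v, m, var):
        return -0.5 * math.log(2 * math.pi * var) - (v - m) ** 2 / (2 * var)
    scores = {c: math.log(prior) + sum(log_gauss(v, m, s)
                                       for v, m, s in zip(x, means, variances))
              for c, (prior, means, variances) in model.items()}
    return max(scores, key=scores.get)

# Toy "patients": one feature, two classes (0 = healthy, 1 = disease).
model = fit_gaussian_nb([[120.0], [125.0], [160.0], [165.0]], [0, 0, 1, 1])
print(predict(model, [122.0]), predict(model, [162.0]))  # 0 1
```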
APA, Harvard, Vancouver, ISO, and other styles
33

Farr, Ryan J., Christina L. Rootes, John Stenos, Chwan Hong Foo, Christopher Cowled, and Cameron R. Stewart. "Detection of SARS-CoV-2 infection by microRNA profiling of the upper respiratory tract." PLOS ONE 17, no. 4 (April 5, 2022): e0265670. http://dx.doi.org/10.1371/journal.pone.0265670.

Full text
Abstract:
Host biomarkers are increasingly being considered as tools for improved COVID-19 detection and prognosis. We recently profiled circulating host-encoded microRNA (miRNAs) during SARS-CoV-2 infection, revealing a signature that classified COVID-19 cases with 99.9% accuracy. Here we sought to develop a signature suited for clinical application by analyzing specimens collected using minimally invasive procedures. Eight miRNAs displayed altered expression in anterior nasal tissues from COVID-19 patients, with miR-142-3p, a negative regulator of interleukin-6 (IL-6) production, the most strongly upregulated. Supervised machine learning analysis revealed that a three-miRNA signature (miR-30c-2-3p, miR-628-3p and miR-93-5p) independently classifies COVID-19 cases with 100% accuracy. This study further defines the host miRNA response to SARS-CoV-2 infection and identifies candidate biomarkers for improved COVID-19 detection.
APA, Harvard, Vancouver, ISO, and other styles
34

Xie, Hao, Yunyan Du, Huapeng Yu, Yongxin Chang, Zhiyong Xu, and Yuanyan Tang. "Open set face recognition with deep transfer learning and extreme value statistics." International Journal of Wavelets, Multiresolution and Information Processing 16, no. 04 (July 2018): 1850034. http://dx.doi.org/10.1142/s0219691318500340.

Full text
Abstract:
Deep face recognition model learned on big dataset surpasses humans on difficult unconstrained face dataset. But open set face recognition, i.e. robust to both variations and unknown faces, is still a big challenge. In this paper, we propose a robust open set face recognition approach with deep transfer learning and extreme value statistics. First, we demonstrate that transferring the feature representations of a pre-trained deep face model to specific tasks is an efficient and effective approach for face recognition on small datasets. We learn both higher layer representations and the final linear multi-class SVMs with transferred features. Second, we propose a novel approach for unknown people recognition with extreme value statistics. Different from traditional distribution fitting, our approach only makes use of a simple statistical quantity — standard deviation of tail data. Empirical evidence shows that standard deviation of the tail of multi-class SVMs recognition scores is efficient and robust for unknown people recognition. Finally, we also empirically explore an important open problem — attributes and transferability of different layer features of the deep model. We argue that lower layer features are both local and general, while higher layer ones are both global and specific which embrace both intra-class invariance and inter-class discrimination. The results of unsupervised feature visualization and supervised face identification strongly support our view.
APA, Harvard, Vancouver, ISO, and other styles
35

Sehgal, Raghav, Albert Higgins-Chen, Margarita Meer, and Morgan Levine. "SYSTEM SPECIFIC AGING SCORES: A STATE OF THE ART AGING CLOCK BUILT USING AGING SCORES FROM DIFFERENT BODILY FUNCTIONS." Innovation in Aging 6, Supplement_1 (November 1, 2022): 20–21. http://dx.doi.org/10.1093/geroni/igac059.076.

Full text
Abstract:
Aging is a highly heterogeneous process at multiple levels. Different individuals, organs, tissues, and cell types are innately diverse and age in quantitatively different manners. Epigenetic clocks have been developed to capture the overall degree of aging and typically report a single biological age value. However, single measures fail to provide insight into differential aging across organ systems. Our aim was to develop novel system-specific methylation clocks that, when assessed in blood, capture distinct aging subtypes. We utilized three large human cohort studies and employed both supervised and unsupervised machine learning models by linking DNA methylation to lower-dimensional vectors composed of system-specific clinical chemistry and functional assays. In doing so, we were able to develop 11 unique system-specific scores: heart, lung, kidney, liver, brain, immune, inflammatory, hematopoietic, musculoskeletal, hormone, and metabolic. We observe that in independent data, the specific systems relate to meaningful outcomes: for instance, the brain score is strongly associated with cognitive functioning, the musculoskeletal score with physical functioning, and the lung score with lung cancer. Additionally, the system scores and the composite systems clock outperform presently available clocks in terms of associations with a wide variety of aging phenotypes and conditions. Overall, our biological-systems-based epigenetic clock outperforms presently available epigenetic aging clocks and provides meaningful insights into heterogeneity in aging.
APA, Harvard, Vancouver, ISO, and other styles
36

Rambabu, M., N. S. S. Ramakrishna, and P. Kumar Polamarasetty. "Prediction and Analysis of Household Energy Consumption by Machine Learning Algorithms in Energy Management." E3S Web of Conferences 350 (2022): 02002. http://dx.doi.org/10.1051/e3sconf/202235002002.

Full text
Abstract:
Now the world is becoming more sophisticated and networked, and a massive amount of data is being generated daily. For energy management in residential and commercial properties, it is essential to know how much energy each appliance uses. The forecast would be clearer and more practical if the task were based purely on energy usage data. But in the real world this is not the case: energy consumption is strongly dependent on weather and surroundings as well. In a home appliance network where measured/observed data are available, supervised machine learning algorithms provide a valuable alternative to the difficulties associated with many engineering and data mining methodologies. The patterns of household energy consumption change based on temperature, humidity, hour of the day, etc. To predict household energy consumption, feature engineering is performed and models are trained using different machine learning algorithms such as linear regression, lasso regression, random forest, extra trees regression, and XGBoost. To evaluate the models, R² is used, as the forecasting is time-based; R² indicates what percentage of the variance in the dependent variable can be predicted. Finally, it is found that tree-based models give the best results.
APA, Harvard, Vancouver, ISO, and other styles
37

Patino-Alonso, Carmen, Marta Gómez-Sánchez, Leticia Gómez-Sánchez, Benigna Sánchez Salgado, Emiliano Rodríguez-Sánchez, Luis García-Ortiz, and Manuel A. Gómez-Marcos. "Predictive Ability of Machine-Learning Methods for Vitamin D Deficiency Prediction by Anthropometric Parameters." Mathematics 10, no. 4 (February 17, 2022): 616. http://dx.doi.org/10.3390/math10040616.

Full text
Abstract:
Background: Vitamin D deficiency affects the general population and is very common among elderly Europeans. This study compared different supervised learning algorithms in a cohort of Spanish individuals aged 35–75 years to predict which anthropometric parameter was most strongly associated with vitamin D deficiency. Methods: A total of 501 participants were recruited by simple random sampling with replacement (reference population: 43,946). The analyzed anthropometric parameters were waist circumference (WC), body mass index (BMI), waist-to-height ratio (WHtR), body roundness index (BRI), visceral adiposity index (VAI), and the Clinical University of Navarra body adiposity estimator (CUN-BAE) for body fat percentage. Results: All the anthropometric indices were associated, in males, with vitamin D deficiency (p < 0.01 for the entire sample) after controlling for possible confounding factors, except for CUN-BAE, which was the only parameter that showed a correlation in females. Conclusions: The capacity of anthropometric parameters to predict vitamin D deficiency differed according to sex; thus, WC, BMI, WHtR, VAI, and BRI were most useful for prediction in males, while CUN-BAE was more useful in females. The naïve Bayes approach for machine learning showed the best area under the curve with WC, BMI, WHtR, and BRI, while the logistic regression model did so in VAI and CUN-BAE.
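The area under the ROC curve used to compare the methods equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one (the Mann-Whitney formulation); a minimal sketch with hypothetical model scores:

```python
# AUC via the Mann-Whitney formulation: fraction of (deficient, non-deficient)
# pairs where the deficient case receives the higher predicted risk.
def auc(pos_scores, neg_scores):
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

deficient = [0.9, 0.6, 0.8]    # hypothetical scores for vitamin-D-deficient subjects
sufficient = [0.7, 0.2, 0.4]   # hypothetical scores for non-deficient subjects
print(round(auc(deficient, sufficient), 3))  # 0.889
```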
APA, Harvard, Vancouver, ISO, and other styles
38

Le, Trang T., Weixuan Fu, and Jason H. Moore. "Scaling tree-based automated machine learning to biomedical big data with a feature set selector." Bioinformatics 36, no. 1 (June 4, 2019): 250–56. http://dx.doi.org/10.1093/bioinformatics/btz470.

Full text
Abstract:
Motivation: Automated machine learning (AutoML) systems are helpful data science assistants designed to scan data for novel features, select appropriate supervised learning models and optimize their parameters. For this purpose, Tree-based Pipeline Optimization Tool (TPOT) was developed using strongly typed genetic programming (GP) to recommend an optimized analysis pipeline for the data scientist’s prediction problem. However, like other AutoML systems, TPOT may reach computational resource limits when working on big data such as whole-genome expression data. Results: We introduce two new features implemented in TPOT that help increase the system’s scalability: Feature Set Selector (FSS) and Template. FSS provides the option to specify subsets of the features as separate datasets, assuming the signals come from one or more of these specific data subsets. FSS increases TPOT’s efficiency in application on big data by slicing the entire dataset into smaller sets of features and allowing GP to select the best subset in the final pipeline. Template enforces type constraints with strongly typed GP and enables the incorporation of FSS at the beginning of each pipeline. Consequently, FSS and Template help reduce TPOT computation time and may provide more interpretable results. Our simulations show TPOT-FSS significantly outperforms a tuned XGBoost model and the standard TPOT implementation. We apply TPOT-FSS to real RNA-Seq data from a study of major depressive disorder. Independent of the previous study that identified significant association with depression severity of two modules, TPOT-FSS corroborates that one of the modules is largely predictive of the clinical diagnosis of each individual. Availability and implementation: Detailed simulation and analysis code needed to reproduce the results in this study is available at https://github.com/lelaboratoire/tpot-fss. Implementation of the new TPOT operators is available at https://github.com/EpistasisLab/tpot.
Supplementary information: Supplementary data are available at Bioinformatics online.
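The Feature Set Selector described in this abstract can be illustrated with a short sketch (a hypothetical, pure-Python illustration, not TPOT's implementation; the scoring criterion, feature names and module names are all invented): partition the feature columns into named subsets and keep the subset that scores best against the target.

```python
# Hypothetical sketch of a Feature Set Selector (FSS): slice a dataset's
# feature columns into named subsets and keep the subset whose features
# correlate best (on average) with the target. Pure Python, no TPOT.

def mean_abs_correlation(columns, target):
    """Average absolute Pearson correlation between each column and target."""
    def corr(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs) ** 0.5
        vy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (vx * vy) if vx and vy else 0.0
    return sum(abs(corr(col, target)) for col in columns) / len(columns)

def select_feature_set(data, target, feature_sets):
    """data: dict feature name -> list of values.
    feature_sets: dict subset name -> list of feature names.
    Returns the name of the best-scoring subset."""
    scores = {
        name: mean_abs_correlation([data[f] for f in feats], target)
        for name, feats in feature_sets.items()
    }
    return max(scores, key=scores.get)

# Invented toy data: module_A tracks the target, module_B is noise.
data = {
    "g1": [1, 2, 3, 4], "g2": [2, 4, 6, 8],
    "n1": [5, 5, 5, 5], "n2": [1, -1, 1, -1],
}
target = [10, 20, 30, 40]
best = select_feature_set(data, target, {"module_A": ["g1", "g2"],
                                         "module_B": ["n1", "n2"]})
print(best)  # module_A
```

In TPOT itself the surviving subset feeds the rest of the evolved pipeline; here the sketch stops at the selection step.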
APA, Harvard, Vancouver, ISO, and other styles
39

Arain, Ghulam Ali, Zeeshan Ahmed Bhatti, Imran Hameed, and Yu-Hui Fang. "Top-down knowledge hiding and innovative work behavior (IWB): a three-way moderated-mediation analysis of self-efficacy and local/foreign status." Journal of Knowledge Management 24, no. 2 (October 5, 2019): 127–49. http://dx.doi.org/10.1108/jkm-11-2018-0687.

Full text
Abstract:
Purpose: This paper aims to examine the consequences for innovative work behavior (IWB) of top-down knowledge hiding – that is, supervisors’ knowledge hiding from supervisees (SKHS). Drawing on social learning theory, the authors test a three-way moderated-mediation model in which the direct effect of SKHS on IWB is first mediated by self-efficacy and then further moderated by supervisor and supervisee nationality (locals versus foreigners). Design/methodology/approach: The authors collected multi-sourced data from 446 matched supervisor-supervisee pairs working in a diverse range of organizations operating in the Kingdom of Saudi Arabia. After initial data screening, confirmatory factor analysis was conducted to test the factorial validity of the measures used, with AMOS. The hypothesized relationships were tested in regression analysis with SPSS. Findings: Results showed that SKHS had both direct and mediation effects, via the self-efficacy mediator, on supervisee IWB. The mediation effect was further moderated by supervisor and supervisee nationality (locals versus foreigners), which highlighted that the effect was stronger for supervisor–supervisee pairs that were local-local or foreigner-foreigner than for pairs that were local-foreigner or foreigner-local. Originality/value: This study contributes to both the knowledge hiding and IWB literature and discusses the useful theoretical and practical implications of the findings.
40

Ha, Minh-Quyet, Duong-Nguyen Nguyen, Viet-Cuong Nguyen, Takahiro Nagata, Toyohiro Chikyow, Hiori Kino, Takashi Miyake, Thierry Denœux, Van-Nam Huynh, and Hieu-Chi Dam. "Evidence-based recommender system for high-entropy alloys." Nature Computational Science 1, no. 7 (July 2021): 470–78. http://dx.doi.org/10.1038/s43588-021-00097-w.

Abstract:
Existing data-driven approaches for exploring high-entropy alloys (HEAs) face three challenges: numerous element-combination candidates, designing appropriate descriptors, and limited and biased existing data. To overcome these issues, here we show the development of an evidence-based material recommender system (ERS) that adopts Dempster–Shafer theory, a general framework for reasoning with uncertainty. Herein, without using material descriptors, we model, collect and combine pieces of evidence from data about the HEA phase existence of alloys. To evaluate the ERS, we compared its HEA-recommendation capability with those of matrix-factorization- and supervised-learning-based recommender systems on four widely known datasets of up-to-five-component alloys. The k-fold cross-validation on the datasets suggests that the ERS outperforms all competitors. Furthermore, the ERS shows good extrapolation capabilities in recommending quaternary and quinary HEAs. We experimentally validated the most strongly recommended Fe–Co-based magnetic HEA (namely, FeCoMnNi) and confirmed that its thin film shows a body-centered cubic structure.
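The Dempster–Shafer machinery underlying this recommender can be illustrated with Dempster's rule of combination, the standard operation for fusing two pieces of evidence. This is a toy sketch under invented mass values, not the paper's evidence model: each source assigns mass to sets of hypotheses (here, whether an alloy forms an HEA phase), and combination renormalizes away the conflicting mass.

```python
# Dempster's rule of combination: fuse two mass functions defined over
# frozensets of hypotheses. Mass on intersecting sets is multiplied and
# accumulated; mass on disjoint sets is conflict, normalized away.

def combine(m1, m2):
    """m1, m2: dict mapping frozenset of hypotheses -> mass. Returns fused masses."""
    fused, conflict = {}, 0.0
    for b, mb in m1.items():
        for c, mc in m2.items():
            inter = b & c
            if inter:
                fused[inter] = fused.get(inter, 0.0) + mb * mc
            else:
                conflict += mb * mc
    norm = 1.0 - conflict
    return {a: m / norm for a, m in fused.items()}

# Two invented pieces of evidence about whether an alloy forms an HEA
# phase ("hea") or not ("no"); mass on the full frame encodes ignorance.
frame = frozenset({"hea", "no"})
m1 = {frozenset({"hea"}): 0.6, frame: 0.4}
m2 = {frozenset({"hea"}): 0.5, frozenset({"no"}): 0.2, frame: 0.3}
fused = combine(m1, m2)
print(round(fused[frozenset({"hea"})], 3))  # 0.773
```

Note how the combined belief in "hea" (0.773) exceeds either source's alone, because the two sources agree and their conflict (0.6 x 0.2) is normalized out.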
41

Opperhuizen, Alette Eva, Erik Hans Klijn, and Kim Schouten. "How do media, political and regulatory agendas influence one another in high risk policy issues?" Policy & Politics 48, no. 3 (July 1, 2020): 461–83. http://dx.doi.org/10.1332/030557319x15734252420020.

Abstract:
This article shows how an emerging risk is covered by the media and how this coverage interacts with political attention and policy implementation. Gas drilling has resulted in earthquakes in the Netherlands over the past 25 years. We show that an increase in the frequency and magnitude of the earthquakes has not stimulated greater media attention. Media and political attention increased only after the media had interpreted the risk as a safety issue. Once this had happened, newspapers and political debates tended to focus on the emotionally loaded aspects. This is in contrast with the regulatory agenda, which followed its own course by focusing on factual information. By using a new method ‐ supervised machine learning ‐ we analyse a large, longitudinal data set to explore patterns over time. Our findings shed new light on risk- and agenda-setting theory, confirming that the media and politics agendas reinforce each other, but the regulatory agenda is not strongly influenced by them.
42

Gnecco, Giorgio, Marco Gori, Stefano Melacci, and Marcello Sanguineti. "Foundations of Support Constraint Machines." Neural Computation 27, no. 2 (February 2015): 388–480. http://dx.doi.org/10.1162/neco_a_00686.

Abstract:
The mathematical foundations of a new theory for the design of intelligent agents are presented. The proposed learning paradigm is centered around the concept of constraint, representing the interactions with the environment, and the parsimony principle. The classical regularization framework of kernel machines is naturally extended to the case in which the agents interact with a richer environment, where abstract granules of knowledge, compactly described by different linguistic formalisms, can be translated into the unified notion of constraint for defining the hypothesis set. Constrained variational calculus is exploited to derive general representation theorems that provide a description of the optimal body of the agent (i.e., the functional structure of the optimal solution to the learning problem), which is the basis for devising new learning algorithms. We show that regardless of the kind of constraints, the optimal body of the agent is a support constraint machine (SCM) based on representer theorems that extend classical results for kernel machines and provide new representations. In a sense, the expressiveness of constraints yields a semantic-based regularization theory, which strongly restricts the hypothesis set of classical regularization. Some guidelines to unify continuous and discrete computational mechanisms are given so as to accommodate in the same framework various kinds of stimuli, for example, supervised examples and logic predicates. The proposed view of learning from constraints incorporates classical learning from examples and extends naturally to the case in which the examples are subsets of the input space, which is related to learning propositional logic clauses.
43

Lim, Yen Ying, Jenalle E. Baker, Loren Bruns, Andrea Mills, Christopher Fowler, Jurgen Fripp, Stephanie R. Rainey-Smith, David Ames, Colin L. Masters, and Paul Maruff. "Association of deficits in short-term learning and Aβ and hippocampal volume in cognitively normal adults." Neurology 95, no. 18 (September 4, 2020): e2577-e2585. http://dx.doi.org/10.1212/wnl.0000000000010728.

Abstract:
Objective: To determine the extent to which deficits in learning over 6 days are associated with β-amyloid positivity (Aβ+) and hippocampal volume in cognitively normal (CN) adults. Methods: Eighty CN older adults who had undergone PET neuroimaging to determine Aβ status (n = 42 Aβ− and 38 Aβ+), MRI to determine hippocampal and ventricular volume, and repeated assessment of memory were recruited from the Australian Imaging, Biomarkers and Lifestyle (AIBL) study. Participants completed the Online Repeatable Cognitive Assessment–Language Learning Test (ORCA-LLT), which required them to learn associations between 50 Chinese characters and their English language equivalents over 6 days. ORCA-LLT assessments were supervised on the first day and were completed remotely online for all remaining days. Results: Learning curves in the Aβ+ CN participants were significantly worse than those in matched Aβ− CN participants, with the magnitude of this difference very large (d [95% confidence interval (CI)] 2.22 [1.64–2.75], p < 0.001) and greater than the differences between these groups for memory decline since their enrollment in AIBL (d [95% CI] 0.52 [0.07–0.96], p = 0.021) or memory impairment at their most recent visit. In Aβ+ CN adults, slower rates of learning were associated with smaller hippocampal and larger ventricular volumes. Conclusions: These results suggest that in CN participants, Aβ+ is associated more strongly with a deficit in learning than with any aspect of memory dysfunction. Slower rates of learning in Aβ+ CN participants were associated with hippocampal volume loss. Considered together, these data suggest that the primary cognitive consequence of Aβ+ is a failure to benefit from experience when exposed to novel stimuli, even over very short periods.
44

Farr, Ryan J., Christina L. Rootes, Louise C. Rowntree, Thi H. O. Nguyen, Luca Hensen, Lukasz Kedzierski, Allen C. Cheng, et al. "Altered microRNA expression in COVID-19 patients enables identification of SARS-CoV-2 infection." PLOS Pathogens 17, no. 7 (July 28, 2021): e1009759. http://dx.doi.org/10.1371/journal.ppat.1009759.

Abstract:
The host response to SARS-CoV-2 infection provides insights into both viral pathogenesis and patient management. The host-encoded microRNA (miRNA) response to SARS-CoV-2 infection, however, remains poorly defined. Here we profiled circulating miRNAs from ten COVID-19 patients sampled longitudinally and ten age- and gender-matched healthy donors. We observed 55 miRNAs that were altered in COVID-19 patients during early-stage disease, with the inflammatory miR-31-5p the most strongly upregulated. Supervised machine learning analysis revealed that a three-miRNA signature (miR-423-5p, miR-23a-3p and miR-195-5p) independently classified COVID-19 cases with an accuracy of 99.9%. In a ferret COVID-19 model, the three-miRNA signature again detected SARS-CoV-2 infection with 99.7% accuracy, and distinguished SARS-CoV-2 infection from influenza A (H1N1) infection and healthy controls with 95% accuracy. Distinct miRNA profiles were also observed in COVID-19 patients requiring oxygenation. This study demonstrates that SARS-CoV-2 infection induces a robust host miRNA response that could improve COVID-19 detection and patient management.
45

Kuo, Chung-Feng Jeffrey, Yu-Shu Liao, Jagadish Barman, and Shao-Cheng Liu. "Semi-Supervised Deep Learning Semantic Segmentation for 3D Volumetric Computed Tomographic Scoring of Chronic Rhinosinusitis: Clinical Correlations and Comparison with Lund-Mackay Scoring." Tomography 8, no. 2 (March 7, 2022): 718–29. http://dx.doi.org/10.3390/tomography8020059.

Abstract:
Background: The traditional Lund-Mackay score (TLMs) is unable to subgrade the volume of inflammatory disease. We aimed to propose an effective modification and calculated the volume-based modified LM score (VMLMs), which should correlate more strongly with clinical symptoms than the TLMs. Methods: Semi-supervised learning with pseudo-labels used for self-training was adopted to train our convolutional neural networks, with the algorithm including a combination of MobileNet, SENet, and ResNet. A total of 175 CT sets, with 50 participants who would undergo sinus surgery, were recruited. The Sinonasal Outcomes Test-22 (SNOT-22) was used to assess disease-specific symptoms before and after surgery. A 3D-projected view was created and VMLMs were calculated for further comparison. Results: Our method showed a significant improvement in both sinus classification and segmentation compared to state-of-the-art networks, with an average Dice coefficient of 91.57%, an mIoU of 89.43%, and a pixel accuracy of 99.75%. The sinus volume exhibited sex dimorphism. There was a significant positive correlation between volume and height, but a trend toward a negative correlation between maxillary sinus volume and age. Subjects who underwent surgery had significantly greater TLMs (14.9 vs. 7.38) and VMLMs (11.65 vs. 4.34) than those who did not. ROC-AUC analyses showed that the VMLMs had excellent discrimination in classifying a high probability of postoperative improvement, as indicated by SNOT-22 reduction. Conclusions: Our method is suitable for obtaining detailed information, excellent sinus boundary prediction, and differentiating the target from its surrounding structure. These findings demonstrate the promise of CT-based volumetric analysis of sinus mucosal inflammation.
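The pseudo-label self-training idea mentioned in the Methods can be sketched minimally (a hypothetical illustration, not the authors' CNN pipeline; the toy model and threshold are invented): a model's sufficiently confident predictions on unlabeled samples are promoted to labels for the next training round.

```python
# Minimal pseudo-labeling sketch: a "model" is any function returning
# (predicted_class, confidence) for a sample. Unlabeled samples whose
# confidence clears a threshold are promoted to the labeled pool.

def pseudo_label_round(model, unlabeled, threshold=0.9):
    """Return (newly_labeled, still_unlabeled) after one self-training round."""
    newly_labeled, still_unlabeled = [], []
    for sample in unlabeled:
        cls, conf = model(sample)
        if conf >= threshold:
            newly_labeled.append((sample, cls))
        else:
            still_unlabeled.append(sample)
    return newly_labeled, still_unlabeled

# Invented toy model: "confidence" is distance from the 0.5 decision boundary.
def toy_model(x):
    cls = int(x > 0.5)
    conf = 2 * abs(x - 0.5)
    return cls, conf

labeled, rest = pseudo_label_round(toy_model, [0.05, 0.48, 0.97], threshold=0.9)
print(labeled)  # [(0.05, 0), (0.97, 1)]
print(rest)     # [0.48]
```

In a real pipeline the model would then be retrained on the enlarged labeled pool and the round repeated until few confident samples remain.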
46

Li, Jianwei, Congan Xu, Hang Su, Long Gao, and Taoyang Wang. "Deep Learning for SAR Ship Detection: Past, Present and Future." Remote Sensing 14, no. 11 (June 5, 2022): 2712. http://dx.doi.org/10.3390/rs14112712.

Abstract:
After the revival of deep learning in computer vision in 2012, SAR ship detection entered the deep learning era too. Deep learning-based computer vision algorithms can work in an end-to-end pipeline, without the need to design features manually, and they have amazing performance. As a result, they are also used to detect ships in SAR images. The beginning of this direction was the paper we published at 2017 BIGSARDATA, in which the first dataset, SSDD, was used and shared with peers. Since then, many researchers have focused their attention on this field. In this paper, we analyze the past, present, and future of deep learning-based ship detection algorithms in SAR images. In the past section, we analyze the difference between traditional CFAR (constant false alarm rate)-based and deep learning-based detectors through theory and experiment. The traditional method is unsupervised while deep learning is strongly supervised, and their performance differs several-fold. In the present part, we analyze the 177 published papers about SAR ship detection. We highlight the dataset, algorithm, performance, deep learning framework, country, timeline, etc. After that, we introduce the use of single-stage, two-stage, anchor-free, train-from-scratch, oriented-bounding-box, multi-scale, and real-time detectors in detail in the 177 papers. The advantages and disadvantages in speed and accuracy are also analyzed. In the future part, we list the problems and directions of this field. We find that, over the past five years, the AP50 on SSDD has risen from 78.8% in 2017 to 97.8% in 2022. Additionally, we think that researchers should design algorithms according to the specific characteristics of SAR images. What we should do next is to bridge the gap between SAR ship detection and computer vision by merging the small datasets into a large one and formulating corresponding standards and benchmarks. We expect that this survey of 177 papers can help people better understand these algorithms and stimulate more research in this field.
47

Li, Shengyin, Vibekananda Dutta, Xin He, and Takafumi Matsumaru. "Deep Learning Based One-Class Detection System for Fake Faces Generated by GAN Network." Sensors 22, no. 20 (October 13, 2022): 7767. http://dx.doi.org/10.3390/s22207767.

Abstract:
Recently, the dangers associated with face generation technology have been attracting much attention in image processing and forensic science. Current face anti-spoofing methods based on Generative Adversarial Networks (GANs) suffer from defects such as overfitting and generalization problems. This paper proposes a one-class classification model for judging the authenticity of facial images, with the aim of producing a model that is as compatible as possible with other datasets and new data, rather than depending strongly on the dataset used for training. The method proposed in this paper has the following features: (a) we adopted various filter enhancement methods as basic pseudo-image generation methods for data enhancement; (b) an improved Multi-Channel Convolutional Neural Network (MCCNN) was adopted as the main network, making it possible to accept multiple preprocessed inputs individually, obtain feature maps, and extract attention maps; (c) as a first refinement in training the main network, we augmented the data using weakly supervised learning methods to apply attention cropping and dropping; (d) as a second refinement in training the main network, we trained it in two steps. In the first step, we used a binary classification loss function to ensure that known fake facial features generated by known GAN networks were filtered out. In the second step, we used a one-class classification loss function to deal with the various types of GAN networks and unknown fake face generation methods. We compared our proposed method with four recent methods. Our experiments demonstrate that the proposed method improves cross-domain detection efficiency while maintaining source-domain accuracy. These studies show one possible direction for improving the correct answer rate in judging facial image authenticity, thereby making a great contribution both academically and practically.
48

Ingelfinger, Florian, Michael Kramer, Sebastian Utz, Sarah Mundt, Sinduya Krishnarajah, Edoardo Galli, Mirjam Lutz, et al. "Myasthenia gravis: From single cell signatures to cancer diagnosis." Journal of Immunology 204, no. 1_Supplement (May 1, 2020): 224.50. http://dx.doi.org/10.4049/jimmunol.204.supp.224.50.

Abstract:
Myasthenia gravis is a rare but archetypic autoimmune disease that is characterized by the autoantibody-mediated disruption of the neuromuscular junction leading to skeletal muscle weakness. Immunomodulatory treatment options for Myasthenia gravis patients are largely unspecific, include suppression of the entire immune compartment and are often accompanied by severe side effects. In order to identify novel biomarkers for more targeted and effective therapeutic approaches, we combined high-dimensional mass and flow cytometry with supervised and unsupervised machine-learning algorithms. Analysis of the peripheral immune compartment of Myasthenia gravis patients and healthy controls revealed a cellular immune signature consisting of inflammatory memory T helper cells with a defined cytokine profile. The abundance of the identified leukocytes in the blood strongly correlated with the patients’ clinical disease activity, far better than auto-Ab titers. Moreover, we were able to locate T cells with the defined signature enriched in the inflamed thymus of Myasthenia gravis patients – the key organ for the induction and maintenance of the autoimmune disease. Lastly, using an unbiased pattern recognition approach, we identified lymphomas in a subset of Myasthenia gravis patients, further highlighting the potential of the applied analysis tools.
49

Yamamoto, Shuhei, and Tetsuji Satoh. "Two phase estimation method for multi-classifying real life tweets." International Journal of Web Information Systems 10, no. 4 (November 11, 2014): 378–93. http://dx.doi.org/10.1108/ijwis-04-2014-0013.

Abstract:
Purpose – This paper aims to propose a multi-label method that estimates appropriate aspects against unknown tweets using the two-phase estimation method. Many Twitter users share daily events and opinions. Some beneficial comments are posted on such real-life aspects as eating, traffic, weather and so on. Such posts as “The train is not coming” are categorized in the Traffic aspect. Such tweets as “The train is delayed by heavy rain” are categorized in both the Traffic and Weather aspects. Design/methodology/approach – The proposed method consists of two phases. In the first, many topics are extracted from a sea of tweets using Latent Dirichlet Allocation (LDA). In the second, associations among many topics and fewer aspects are built using a small set of labeled tweets. The aspect scores for tweets were calculated using associations based on the extracted terms. Appropriate aspects are labeled for unknown tweets by averaging the aspect scores. Findings – Using a large amount of actual tweets, the sophisticated experimental evaluations demonstrate the high efficiency of the proposed multi-label classification method. It is confirmed that high F-measure aspects are strongly associated with topics that have high relevance. Low F-measure aspects are associated with topics that are connected to many other aspects. Originality/value – The proposed method features two-phase semi-supervised learning. Many topics are extracted using an unsupervised learning model called LDA. Associations among many topics and fewer aspects are built using labeled tweets.
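The two-phase scheme summarized above, where phase one maps a tweet to topic weights (via LDA in the paper) and phase two converts topic weights into aspect scores through learned topic-aspect associations, can be sketched in plain Python. All topic names, association strengths and the threshold below are invented for illustration.

```python
# Hypothetical sketch of two-phase multi-label aspect estimation: phase 1
# gives a tweet a distribution over topics; phase 2 turns topic weights
# into aspect scores via a topic -> aspect association table, and every
# aspect whose score clears a threshold is assigned (multi-label).

def aspect_scores(topic_weights, associations):
    """topic_weights: dict topic -> probability for one tweet.
    associations: dict topic -> dict aspect -> strength (learned from a
    small labeled set in the second phase)."""
    scores = {}
    for topic, w in topic_weights.items():
        for aspect, strength in associations.get(topic, {}).items():
            scores[aspect] = scores.get(aspect, 0.0) + w * strength
    return scores

def assign_aspects(topic_weights, associations, threshold=0.3):
    scores = aspect_scores(topic_weights, associations)
    return sorted(a for a, s in scores.items() if s >= threshold)

# "The train is delayed by heavy rain": mass on a delay topic and a rain topic.
associations = {
    "delay_topic": {"Traffic": 0.9},
    "rain_topic": {"Weather": 0.8, "Traffic": 0.2},
}
tweet = {"delay_topic": 0.5, "rain_topic": 0.5}
print(assign_aspects(tweet, associations))  # ['Traffic', 'Weather']
```

Because scores are accumulated per aspect rather than forced through a single argmax, a tweet can legitimately land in both Traffic and Weather, matching the multi-label behavior the abstract describes.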
50

Rovini, Erika, Carlo Maremmani, and Filippo Cavallo. "A Wearable System to Objectify Assessment of Motor Tasks for Supporting Parkinson’s Disease Diagnosis." Sensors 20, no. 9 (May 5, 2020): 2630. http://dx.doi.org/10.3390/s20092630.

Abstract:
Objective assessment of the motor evaluation test for Parkinson’s disease (PD) diagnosis is an open issue for both clinical and technical experts, since it could improve current clinical practice with benefits for both patients and healthcare systems. In this work, a wearable system composed of four inertial devices (two SensHand and two SensFoot), and related processing algorithms for extracting parameters from limb motion, was tested on 40 healthy subjects and 40 PD patients. Seventy-eight and 96 kinematic parameters were measured from the lower and upper limbs, respectively. Statistical and correlation analysis allowed us to define four datasets that were used to train and test five supervised learning classifiers. Excellent discrimination between the two groups was obtained with all the classifiers (average accuracy ranging from 0.936 to 0.960) and all the datasets (average accuracy ranging from 0.953 to 0.966), over three conditions that included parameters derived from the lower, upper or all limbs. The best performances (accuracy = 1.00) were obtained when classifying all the limbs with a linear support vector machine (SVM) or Gaussian SVM. Although further studies should be done, the current results are strongly promising for improving this system as a support tool for clinicians in objectifying PD diagnosis and monitoring.
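The train/test setup above can be illustrated with a minimal sketch. A nearest-centroid rule is used here as a simple stand-in for the paper's SVM classifiers, and the two-feature vectors (e.g. movement speed, tremor amplitude) are invented for illustration.

```python
# Nearest-centroid classification over kinematic feature vectors: a toy
# stand-in for the supervised classifiers (e.g. linear SVM) in the study.

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(samples):
    """samples: dict label -> list of feature vectors. Returns per-class centroids."""
    return {label: centroid(vecs) for label, vecs in samples.items()}

def predict(model, vector):
    """Assign the label whose centroid is nearest in squared Euclidean distance."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist2(model[label], vector))

# Invented two-feature vectors (e.g. movement speed, tremor amplitude).
model = train({
    "healthy": [[1.0, 0.1], [1.2, 0.2]],
    "pd":      [[0.4, 0.9], [0.5, 1.1]],
})
print(predict(model, [1.1, 0.15]))  # healthy
print(predict(model, [0.45, 1.0]))  # pd
```

An SVM would instead learn a maximum-margin boundary between the two groups, but the fit/predict structure of the evaluation is the same.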