Academic literature on the topic 'Classification based on generative models'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Classification based on generative models.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Classification based on generative models":

1

Cazzanti, Luca, Maya R. Gupta, and Anjali J. Koppal. "Generative models for similarity-based classification." Pattern Recognition 41, no. 7 (July 2008): 2289–97. http://dx.doi.org/10.1016/j.patcog.2008.01.005.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Wei, Wei, Jun Fang, Ning Yang, Qi Li, Lin Hu, Lanbo Zhao, and Jie Han. "AC-ModNet: Molecular Reverse Design Network Based on Attribute Classification." International Journal of Molecular Sciences 25, no. 13 (June 25, 2024): 6940. http://dx.doi.org/10.3390/ijms25136940.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Deep generative models are becoming a tool of choice for exploring the molecular space. One important application area of deep generative models is the reverse design of drug compounds for given attributes (solubility, ease of synthesis, etc.). Although there are many generative models, these models cannot generate molecules within specific attribute intervals. This paper proposes an AC-ModNet model that effectively combines a VAE with AC-GAN to generate molecular structures in specific attribute intervals. The AC-ModNet is trained and evaluated using the open 250K ZINC dataset. In comparison with related models, our method performs best on the FCD and Frag evaluation metrics. Moreover, we show that the molecules created by AC-ModNet have potential application value in drug design by comparing and analyzing them with medical records in the PubChem database. The results of this paper provide a new method for machine-learning-based reverse drug design.
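To make the attribute-conditioned generation idea in this abstract concrete, here is a minimal sketch (an editor's illustration, not the authors' AC-ModNet): a PyTorch generator that concatenates a latent vector with a one-hot attribute-interval label. The layer sizes, the number of attribute intervals, and the flat molecular feature representation are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalGenerator(nn.Module):
    """Maps (latent vector, attribute-interval label) to a molecular feature vector."""
    def __init__(self, latent_dim=64, n_intervals=5, out_dim=128):
        super().__init__()
        self.n_intervals = n_intervals
        self.net = nn.Sequential(
            nn.Linear(latent_dim + n_intervals, 256),
            nn.ReLU(),
            nn.Linear(256, out_dim),
        )

    def forward(self, z, interval_labels):
        # Condition the generator by concatenating a one-hot interval label.
        onehot = F.one_hot(interval_labels, self.n_intervals).float()
        return self.net(torch.cat([z, onehot], dim=1))

# Sample 16 candidate molecule feature vectors conditioned on attribute interval 2.
gen = ConditionalGenerator()
z = torch.randn(16, 64)
labels = torch.full((16,), 2, dtype=torch.long)
candidates = gen(z, labels)   # shape: (16, 128)
```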
3

Gopal, Narendra, and Sivakumar D. "DIMENSIONALITY REDUCTION BASED CLASSIFICATION USING GENERATIVE ADVERSARIAL NETWORKS DATASET GENERATION." ICTACT Journal on Image and Video Processing 13, no. 01 (August 1, 2022): 2786–90. http://dx.doi.org/10.21917/ijivp.2022.0396.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Data augmentation refers to approaches that help prevent overfitting to the training dataset, which is where the issue first manifests itself. It is based on the assumption that a dataset can be improved by including additional useful information. By utilizing methods such as data warping and oversampling, it is feasible to create an artificially larger training dataset and thus more accurate models. This idea is demonstrated through a variety of methods, including neural style transfer, adversarial training, and random erasing, among others. Oversampling augmentations make it feasible to create synthetic instances that can be incorporated into the training data. There are numerous examples of this, including image merging, feature-space augmentations, and generative adversarial networks (GANs). In this paper, we aim to provide evidence that a Generative Adversarial Network can be used to convert regular images into hyperspectral images (HSI). The purpose of the model is to generate data by incorporating a certain amount of unpredictable noise.
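For the classical data-warping side this abstract mentions (including random erasing), a minimal torchvision pipeline might look like the sketch below; the specific transforms and their parameters are illustrative assumptions, and the GAN-based HSI generation itself is not shown.

```python
import numpy as np
from PIL import Image
from torchvision import transforms

# On-the-fly data-warping augmentations; RandomErasing must come after ToTensor
# because it operates on tensors, not PIL images.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=15),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
    transforms.RandomErasing(p=0.5),
])

# Apply to a dummy RGB image standing in for a real training sample.
img = Image.fromarray((np.random.rand(64, 64, 3) * 255).astype(np.uint8))
augmented = augment(img)   # tensor of shape (3, 64, 64)
```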
4

Shastry, K. Aditya, B. A. Manjunatha, T. G. Mohan Kumar, and D. U. Karthik. "Generative Adversarial Networks Based Scene Generation on Indian Driving Dataset." Journal of ICT Research and Applications 17, no. 2 (August 31, 2023): 181–200. http://dx.doi.org/10.5614/itbj.ict.res.appl.2023.17.2.4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The rate of advancement in the field of artificial intelligence (AI) has drastically increased over the past twenty years or so. From AI models that can classify every object in an image to realistic chatbots, the signs of progress can be found in all fields. This work focused on tackling a relatively new problem in the current scenario: the generative capabilities of AI. While classification and prediction models have matured and entered the mass market across the globe, generation through AI is still in its initial stages. Generative tasks consist of an AI model learning the features of a given input and using these learned values to generate completely new output values that were not originally part of the input dataset. The most common input type given to generative models is images. The most popular architectures for generative models are autoencoders and generative adversarial networks (GANs). Our study aimed to use GANs to generate realistic images from a purely semantic representation of a scene. While our model can be used on any kind of scene, we used the Indian Driving Dataset to train our model. Through this work, we arrived at answers to the following questions: (1) the scope of GANs in interpreting and understanding textures and variables in complex scenes; (2) the application of such a model in the field of gaming and virtual reality; (3) the possible impact of generating realistic deep fakes on society.
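A hedged, heavily simplified sketch of the semantic-map-to-image idea described above (not the authors' model): a tiny convolutional generator that turns a one-hot semantic label map into an RGB image. Real scene-generation GANs use much deeper encoder-decoder generators and an adversarial discriminator; the class count and layer widths here are assumptions.

```python
import torch
from torch import nn

n_classes = 8   # number of semantic classes in the label map (assumed)
generator = nn.Sequential(
    nn.Conv2d(n_classes, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, kernel_size=3, padding=1), nn.Tanh(),   # RGB in [-1, 1]
)

semantic_map = torch.zeros(1, n_classes, 128, 128)
semantic_map[:, 0] = 1.0                 # toy map: every pixel labelled class 0
fake_scene = generator(semantic_map)     # shape: (1, 3, 128, 128)
```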
5

Ekolle, Zie Eya, and Ryuji Kohno. "GenCo: A Generative Learning Model for Heterogeneous Text Classification Based on Collaborative Partial Classifications." Applied Sciences 13, no. 14 (July 14, 2023): 8211. http://dx.doi.org/10.3390/app13148211.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The use of generative learning models in natural language processing (NLP) has significantly contributed to the advancement of natural language applications, such as sentiment analysis, topic modeling, text classification, chatbots, and spam filtering. With a large amount of text generated each day from different sources, such as web pages, blogs, emails, social media, and articles, one of the most common tasks in NLP is the classification of a text corpus. This is important in many institutions for planning, decision-making, and creating archives of their projects. Many algorithms exist to automate text classification tasks, but the most intriguing are those that also learn these tasks automatically. In this study, we present a new model to infer and learn from data using probabilistic logic and apply it to text classification. This model, called GenCo, is a multi-input single-output (MISO) learning model that uses a collaboration of partial classifications to generate the desired output. It provides a heterogeneity measure to explain its classification results and enables a reduction in the curse of dimensionality in text classification. Experiments with the model were carried out on the Twitter US Airline dataset, the Conference Paper dataset, and the SMS Spam dataset, outperforming baseline models with 98.40%, 89.90%, and 99.26% accuracy, respectively.
6

Zhai, Junhai, Jiaxing Qi, and Chu Shen. "Binary imbalanced data classification based on diversity oversampling by generative models." Information Sciences 585 (March 2022): 313–43. http://dx.doi.org/10.1016/j.ins.2021.11.058.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Kim, Eunbeen, Jaeuk Moon, Jonghwa Shim, and Eenjun Hwang. "DualDiscWaveGAN-Based Data Augmentation Scheme for Animal Sound Classification." Sensors 23, no. 4 (February 10, 2023): 2024. http://dx.doi.org/10.3390/s23042024.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Animal sound classification (ASC) refers to the automatic identification of animal categories by sound, and is useful for monitoring rare or elusive wildlife. Thus far, deep-learning-based models have shown good performance in ASC when training data is sufficient, but suffer from severe performance degradation if not. Recently, generative adversarial networks (GANs) have shown the potential to solve this problem by generating virtual data. However, in a multi-class environment, existing GAN-based methods need to construct separate generative models for each class. Additionally, they only consider the waveform or spectrogram of sound, resulting in poor quality of the generated sound. To overcome these shortcomings, we propose a two-step sound augmentation scheme using a class-conditional GAN. First, common features are learned from all classes of animal sounds, and multiple classes of animal sounds are generated based on the features that consider both waveforms and spectrograms using class-conditional GAN. Second, we select data from the generated data based on the confidence of the pretrained ASC model to improve classification performance. Through experiments, we show that the proposed method improves the accuracy of the basic ASC model by up to 18.3%, which corresponds to a performance improvement of 13.4% compared to the second-best augmentation method.
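The second step described in this abstract (keeping only generated samples that a pretrained ASC model assigns to the intended class with high confidence) can be sketched as follows; the threshold value, feature shapes, and the dummy classifier are assumptions, not the authors' settings.

```python
import torch
from torch import nn

def select_by_confidence(samples, intended_labels, classifier, threshold=0.9):
    """Keep generated samples whose intended class receives probability >= threshold
    from a pretrained classifier."""
    classifier.eval()
    with torch.no_grad():
        probs = torch.softmax(classifier(samples), dim=1)
    confidence = probs.gather(1, intended_labels.unsqueeze(1)).squeeze(1)
    keep = confidence >= threshold
    return samples[keep], intended_labels[keep]

# Usage with dummy spectrogram features and a dummy 10-class classifier.
fake_specs = torch.randn(32, 128)
fake_labels = torch.randint(0, 10, (32,))
asc_model = nn.Linear(128, 10)
kept_specs, kept_labels = select_by_confidence(fake_specs, fake_labels, asc_model)
```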
8

Kannan, K. Gokul, and T. R. Ganesh Babu. "Semi Supervised Generative Adversarial Network for Automated Glaucoma Diagnosis with Stacked Discriminator Models." Journal of Medical Imaging and Health Informatics 11, no. 5 (May 1, 2021): 1334–40. http://dx.doi.org/10.1166/jmihi.2021.3787.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Generative Adversarial Network (GAN) is a neural network architecture widely used in many computer vision applications such as super-resolution image generation, art creation, and image-to-image translation. A conventional GAN consists of two sub-models: a generative model and a discriminative model. The former generates new samples through an unsupervised learning task, and the latter classifies them as real or fake. Though GAN is most commonly used for training generative models, it can also be used for developing a classifier model. The main objective is to extend the effectiveness of GAN to semi-supervised learning, i.e., to the classification of fundus images to diagnose glaucoma. The discriminator model in the conventional GAN is improved via transfer learning to predict n + 1 classes by training the model for both supervised classification (n classes) and unsupervised classification (fake or real). Both models share all feature extraction layers and differ only in the output layers, so any update to one model affects both. Results show that the semi-supervised GAN performs better than a standalone Convolutional Neural Network (CNN) model.
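A minimal sketch of the n + 1-class discriminator idea described above, not the paper's architecture: the head outputs logits for the n diagnostic classes plus one extra "fake" class, and the supervised and unsupervised losses share the same feature extractor. The image size, layer widths, and n = 2 are assumptions.

```python
import torch
from torch import nn

n_classes = 2                              # e.g. glaucoma vs. normal (assumed)
features = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128), nn.ReLU())
head = nn.Linear(128, n_classes + 1)       # n real classes + 1 "fake" class
criterion = nn.CrossEntropyLoss()

real_images = torch.randn(8, 1, 64, 64)    # labeled fundus images (dummy data)
real_labels = torch.randint(0, n_classes, (8,))
fake_images = torch.randn(8, 1, 64, 64)    # generator output (dummy data)
fake_labels = torch.full((8,), n_classes, dtype=torch.long)  # index n marks "fake"

# Both terms update the same shared feature extractor.
loss = criterion(head(features(real_images)), real_labels) \
     + criterion(head(features(fake_images)), fake_labels)
```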
9

Chen, Zirui. "Diffusion Models-based Data Augmentation for the Cell Cycle Phase Classification." Journal of Physics: Conference Series 2580, no. 1 (September 1, 2023): 012001. http://dx.doi.org/10.1088/1742-6596/2580/1/012001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
For biological research, sample size imbalance is common due to the nature of the research subjects. For example, in the study of the cell cycle, the sample size of dividing cells is much smaller because the mitotic phase is extremely short compared to interphase. Data augmentation using image generative models is an excellent way to address insufficient sample size and imbalanced distribution. In addition to the GAN-like models that have been extensively applied, the diffusion model, as an emerging model, has shown extraordinary performance in the field of image generation. This experiment uses the diffusion model as a means of image data augmentation. The experimental results show that the performance of the classifier with data augmentation is significantly improved compared with the original dataset, and the positive predictive value increases from about 0.7 to more than 0.9. The results reveal that the diffusion model has good application prospects for data augmentation and can effectively solve the problem of insufficient data or unbalanced sample sizes.
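A sketch of the balancing step implied above: top up the minority class (mitotic cells) with synthetic samples until the classes are even. The `synthesize` callable stands in for sampling from a trained diffusion model and is a placeholder, not a real diffusion sampler.

```python
import numpy as np

def balance_with_synthetic(X, y, minority_class, synthesize):
    """Top up the minority class with synthetic samples until classes are balanced.
    `synthesize(n)` stands in for drawing n images from a trained generative model."""
    n_majority = np.sum(y != minority_class)
    n_minority = np.sum(y == minority_class)
    n_needed = int(n_majority - n_minority)
    if n_needed <= 0:
        return X, y
    X_syn = synthesize(n_needed)
    y_syn = np.full(n_needed, minority_class)
    return np.concatenate([X, X_syn]), np.concatenate([y, y_syn])

# Usage with a dummy "diffusion sampler" that returns random feature vectors.
X = np.random.rand(100, 32)
y = np.array([0] * 90 + [1] * 10)          # 1 = rare mitotic class
X_bal, y_bal = balance_with_synthetic(X, y, minority_class=1,
                                      synthesize=lambda n: np.random.rand(n, 32))
```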
10

Bhavani, N. Sree, G. Narendra Babu Reddy, Y. Sravani Devi, M. Bhavani, P. Chandana Reddy, and V. Abhignya Reddy. "Generative Data Augmentation and ARMD Classification." International Journal for Research in Applied Science and Engineering Technology 11, no. 6 (June 30, 2023): 3662–67. http://dx.doi.org/10.22214/ijraset.2023.54178.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Age-Related Macular Degeneration (ARMD) is an eye disease that normally affects a person's central vision and can sometimes lead to permanent vision loss. It affects people over the age of 50. There are two types of ARMD: dry and wet. Dry ARMD produces tiny protein deposits called drusen, whereas wet ARMD occurs when abnormal blood vessels develop under the retina; these vessels may leak fluid, and this type is very severe and can even lead to permanent central vision loss. Early detection of the disease is therefore necessary. Generative Data Augmentation for ARMD Classification is a deep-learning-based approach that uses a Convolutional Neural Network (CNN) model together with generated images to accurately identify the disease. Deep learning diagnostic models require expertly graded images from extensive datasets obtained in large-scale clinical trials, which may not exist. Therefore, a GAN-based generative data augmentation method, StyleGAN, is used to generate images. Generative deep learning techniques are used to synthesize large new datasets of artificial retinal images at different stages of ARMD from the images in existing datasets. The ARMD diagnostic DCNNs are trained on the combination of real and synthetic datasets. Images obtained using the GAN appear realistic and increase the accuracy of the model. The retinal images are then classified into one of three classes (dry, wet, or normal) using the CNN model, and the accuracy is compared against a model trained with traditional augmentation techniques, toward improving the performance of real-world ARMD classification tasks.
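A minimal PyTorch sketch of the training-set construction and the three-class CNN the abstract describes (dry / wet / normal); the dummy tensors, image size, and network depth are assumptions rather than the authors' setup.

```python
import torch
from torch import nn
from torch.utils.data import TensorDataset, ConcatDataset, DataLoader

# Combine real and GAN-generated retinal images into one training set
# (dummy tensors here; real code would load the image folders).
real = TensorDataset(torch.randn(200, 3, 64, 64), torch.randint(0, 3, (200,)))
synthetic = TensorDataset(torch.randn(400, 3, 64, 64), torch.randint(0, 3, (400,)))
train_loader = DataLoader(ConcatDataset([real, synthetic]), batch_size=32, shuffle=True)

# Small CNN with three outputs: dry, wet, normal.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, 3),
)

images, labels = next(iter(train_loader))
logits = model(images)    # shape: (32, 3)
```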

Dissertations / Theses on the topic "Classification based on generative models":

1

Cazzanti, Luca. "Generative models of similarity-based classification /." Thesis, Connect to this title online; UW restricted, 2007. http://hdl.handle.net/1773/5905.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ljungberg, Lucas. "Using unsupervised classification with multiple LDA derived models for text generation based on noisy and sensitive data." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-255010.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Creating models to generate contextual responses to input queries is a difficult problem. It is even more difficult when the available data contains noise and sensitive information. Finding models or methods to handle such issues is important in order to use the data productively. This thesis proposes a model based on a cooperating pair of topic models with differing tasks (LDA and GSDMM) in order to alleviate the problematic properties of the data. The model is tested on a real-world dataset with these difficulties as well as a dataset without them. The goal is to 1) look at the behaviour of the different topic models to see if their topical representation of the data is of use as input or output to other models and 2) find out what properties can be alleviated as a result. The results show that topic modeling can represent the semantic information of documents well enough to produce well-behaved input data for other models, which can also deal well with large vocabularies and noisy data. The topical clustering of the response data is sufficient for a classification model to predict the context of the response, from which valid responses can be created.
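A sketch of the LDA half of the pipeline described above, using scikit-learn: documents are mapped to topic distributions, which then serve as input features for a downstream classifier. The toy documents and labels are assumptions, and the GSDMM clustering of responses is not part of standard scikit-learn, so it is omitted here.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression

docs = ["the flight was delayed again", "great service and friendly crew",
        "lost my luggage twice", "boarding was quick and easy"]
labels = [0, 1, 0, 1]   # toy context labels, assumed for illustration

# Bag-of-words counts -> per-document topic distributions -> classifier features.
counts = CountVectorizer().fit_transform(docs)
topics = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(counts)
clf = LogisticRegression().fit(topics, labels)
print(clf.predict(topics))   # predictions from the topic representation
```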
3

Malazizi, Ladan. "Development of Artificial Intelligence-based In-Silico Toxicity Models. Data Quality Analysis and Model Performance Enhancement through Data Generation." Thesis, University of Bradford, 2008. http://hdl.handle.net/10454/4262.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Toxic compounds, such as pesticides, are routinely tested against a range of aquatic, avian and mammalian species as part of the registration process. The need for reducing dependence on animal testing has led to an increasing interest in alternative methods such as in silico modelling. QSAR (Quantitative Structure Activity Relationship)-based models are already in use for predicting physicochemical properties, environmental fate, eco-toxicological effects, and specific biological endpoints for a wide range of chemicals. Data plays an important role in modelling QSARs and also in result analysis for toxicity testing processes. This research addresses a number of issues in predictive toxicology. One issue is the problem of data quality. Although a large amount of toxicity data is available from online sources, this data may contain unreliable samples and may be of low quality. Its presentation may also be inconsistent across sources, which makes access, interpretation, and comparison of the information difficult. To address this issue we started with a detailed investigation and experimental work on DEMETRA data. The DEMETRA datasets have been produced by the EC-funded project DEMETRA. Based on the investigation, experiments and the results obtained, the author identified a number of data quality criteria in order to provide a solution for data evaluation in the toxicology domain. An algorithm has also been proposed to assess data quality before modelling. Another issue considered in the thesis was missing values in toxicology datasets. The Least Squares Method for a paired dataset and Serial Correlation for a single-version dataset provided solutions to the problem in two different situations. A procedural algorithm using these two methods has been proposed in order to overcome the problem of missing values. Another issue addressed in this thesis was the modelling of multi-class datasets with severely imbalanced class distributions. The imbalanced data affects the performance of classifiers during the classification process. We have shown that as long as we understand how class members are constructed in the dimensional space of each cluster, we can reform the distribution and provide more domain knowledge for the classifier.
4

Bornelöv, Susanne. "Rule-based Models of Transcriptional Regulation and Complex Diseases : Applications and Development." Doctoral thesis, Uppsala universitet, Beräknings- och systembiologi, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-230159.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
As we gain increased understanding of genetic disorders and gene regulation more focus has turned towards complex interactions. Combinations of genes or gene and environmental factors have been suggested to explain the missing heritability behind complex diseases. Furthermore, gene activation and splicing seem to be governed by a complex machinery of histone modification (HM), transcription factor (TF), and DNA sequence signals. This thesis aimed to apply and develop multivariate machine learning methods for use on such biological problems. Monte Carlo feature selection was combined with rule-based classification to identify interactions between HMs and to study the interplay of factors with importance for asthma and allergy. Firstly, publicly available ChIP-seq data (Paper I) for 38 HMs was studied. We trained a classifier for predicting exon inclusion levels based on the HMs signals. We identified HMs important for splicing and illustrated that splicing could be predicted from the HM patterns. Next, we applied a similar methodology on data from two large birth cohorts describing asthma and allergy in children (Paper II). We identified genetic and environmental factors with importance for allergic diseases which confirmed earlier results and found candidate gene-gene and gene-environment interactions. In order to interpret and present the classifiers we developed Ciruvis, a web-based tool for network visualization of classification rules (Paper III). We applied Ciruvis on classifiers trained on both simulated and real data and compared our tool to another methodology for interaction detection using classification. Finally, we continued the earlier study on epigenetics by analyzing HM and TF signals in genes with or without evidence of bidirectional transcription (Paper IV). We identified several HMs and TFs with different signals between unidirectional and bidirectional genes. Among these, the CTCF TF was shown to have a well-positioned peak 60-80 bp upstream of the transcription start site in unidirectional genes.
5

Haghebaert, Marie. "Outils et méthodes pour la modélisation de la dynamique des écosystèmes microbiens complexes à partir d'observations expérimentales temporelles : application à la dynamique du microbiote intestinal." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASM036.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis stems from the European project Homo.symbiosus, which investigates the equilibrium transitions of interactions between the host and its intestinal microbiota. To study these transitions, we pursue two directions: the mechanistic modeling of host-microbiota interactions and the analysis of temporal microbial count data. We enriched and simulated a deterministic model of the intestinal crypt using the EDK numerical scheme, particularly studying the impact of different parameters using the Morris Elementary Effects method. This model proved capable of simulating, on one hand, symbiotic and dysbiotic interaction states and, on the other hand, transition scenarios between states of dysbiosis and symbiosis. In parallel, a compartmental ODE model of the colon, inspired by existing studies, was developed and coupled with the crypt model. The thesis contributed to the enhancement of bacterial metabolism modeling and the modeling of innate immunity at the scale of the intestinal mucosa. A numerical exploration allowed us to assess the influence of diet on the steady state of the model and to study the effect of a pathological scenario by mimicking a breach in the epithelial barrier. Furthermore, we developed an approach to analyze microbial data aimed at assessing the deviation of microbial ecosystems undergoing significant environmental disturbances compared to a reference state. This method, based on DMM classification, enables the study of ecosystem equilibrium transitions in cases with few individuals and few time points. Moreover, a curve classification method using the SBM model was applied to investigate the effects of various disturbances on the microbial ecosystem; the results from this study were used to enrich the host-microbiota interaction model.
6

Müller, Richard. "Software Visualization in 3D." Doctoral thesis, Universitätsbibliothek Leipzig, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-164699.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The focus of this thesis is on the implementation, the evaluation, and the useful application of the third dimension in software visualization. Software engineering is characterized by a complex interplay of different stakeholders that produce and use several artifacts. Software visualization is used as one means to address this increasing complexity. It provides role- and task-specific views of artifacts that contain information about the structure, behavior, and evolution of a software system in its entirety. The main potential of the third dimension is the possibility to provide multiple views in one software visualization for all three aspects. However, empirical findings concerning the role of the third dimension in software visualization are rare. Furthermore, there are only a few 3D software visualizations that provide multiple views of a software system including all three aspects. Finally, current tool support lacks the ability to automatically generate easily integrable, scalable, and platform-independent 2D, 2.5D, and 3D software visualizations. Hence, the objective is to develop a software visualization that represents all important structural entities and relations of a software system, that can also display behavioral and evolutionary aspects of a software system, and that can be generated automatically. In order to achieve this objective the following research methods are applied. A literature study is conducted, a software visualization generator is conceptualized and prototypically implemented, a structured approach to plan and design controlled experiments in software visualization is developed, and a controlled experiment is designed and performed to investigate the role of the third dimension in software visualization. The main contributions are an overview of the state of the art in 3D software visualization, a structured approach including a theoretical model to control influence factors during controlled experiments in software visualization, an Eclipse-based generator for automatically producing role- and task-specific 2D, 2.5D, and 3D software visualizations, the controlled experiment investigating the role of the third dimension in software visualization, and the recursive disk metaphor combining the findings with a focus on the structure of software, including useful applications of the third dimension regarding behavior and evolution.
7

Ozer, Gizem. "Fuzzy Classification Models Based On Tanaka." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610785/index.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In some classification problems where human judgments and qualitative, imprecise data exist, uncertainty comes from fuzziness rather than randomness. A limited number of fuzzy classification approaches is available for these classification problems to capture the effect of the fuzzy uncertainty embedded in the data. The scope of this study mainly comprises two parts: new fuzzy classification approaches based on Tanaka's Fuzzy Linear Regression (FLR) approach, and an improvement of an existing one, Improved Fuzzy Classifier Functions (IFCF). Tanaka's FLR approach is a well-known fuzzy regression technique used for prediction problems involving fuzzy uncertainty. In the first part of the study, three alternative approaches are presented, which utilize the FLR approach for a particular customer satisfaction classification problem. A comparison of their performances and their applicability in other cases is discussed. In the second part of the study, the improved IFCF method, Nonparametric Improved Fuzzy Classifier Functions (NIFCF), is presented, which proposes to use a nonparametric method, Multivariate Adaptive Regression Splines (MARS), in the clustering phase of the IFCF method. The NIFCF method is applied to three data sets and compared with the Fuzzy Classifier Function (FCF) and Logistic Regression (LR) methods.
8

Elzobi, Moftah M. [Verfasser]. "Unconstrained recognition of offline Arabic handwriting using generative and discriminative classification models / Moftah M. Elzobi." Magdeburg : Universitätsbibliothek, 2017. http://d-nb.info/1135662185/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Santiago, Dionny. "A Model-Based AI-Driven Test Generation System." FIU Digital Commons, 2018. https://digitalcommons.fiu.edu/etd/3878.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Achieving high software quality today involves manual analysis, test planning, documentation of testing strategy and test cases, and development of automated test scripts to support regression testing. This thesis is motivated by the opportunity to bridge the gap between current test automation and true test automation by investigating learning-based solutions to software testing. We present an approach that combines a trainable web component classifier, a test case description language, and a trainable test generation and execution system that can learn to generate new test cases. Training data was collected and hand-labeled across 7 systems, 95 web pages, and 17,360 elements. A total of 250 test flows were also manually hand-crafted for training purposes. Various machine learning algorithms were evaluated. Results showed that Random Forest classifiers performed well on several web component classification problems. In addition, Long Short-Term Memory neural networks were able to model and generate new valid test flows.
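A hedged sketch of the web-component classification step using scikit-learn's Random Forest; the synthetic feature matrix stands in for the hand-labeled element features described above, and the feature dimensionality and class set are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Toy stand-in for numerically encoded web-element features (tag, size,
# position, ...); the study itself used 17,360 hand-labeled elements.
X = np.random.rand(500, 12)
y = np.random.randint(0, 4, 500)   # e.g. button / link / input / other (assumed)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```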
10

Birks, Daniel J. "Computational Agent-Based Models of Offending: Assessing the Generative Sufficiency of Opportunity-Based Explanations of the Crime Event." Thesis, Griffith University, 2012. http://hdl.handle.net/10072/367327.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis demonstrates that agent-based modelling offers a viable compatriot to traditional experimental methodologies for criminology scholars, that can be applied to explore the divide between micro-level criminological theory and macro-level observations of crime; and in turn, aid in the assessment of those theories which aim to describe the crime event. The following overarching research question is addressed: Are the micro-level mechanisms of the opportunity theories generatively sufficient to explain macroscopic patterns commonly observed in the empirical study of crime? Drawing on the approach of generative social science (Epstein, 1999), this thesis presents a systematic assessment of the generative sufficiency of three distinct mechanisms of offender movement, target selection and learning derived from the routine activity approach (Cohen & Felson, 1979), rational choice perspective (Clarke, 1980; Cornish & Clarke, 1986) and crime pattern theory (Brantingham & Brantingham, 1978, 1981). An agent-based model of offending is presented, in which an artificial landscape is inhabited by both potential victims and offenders who behave according to several of the key propositions of the routine activity approach, rational choice perspective and crime pattern theory. Following a computational laboratory-based approach, for each hypothetical mechanism studied, control and experimental behaviours are developed to represent the absence or presence of a proposed mechanism within the virtual population.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Criminology and Criminal Justice
Arts, Education and Law
Full Text

Books on the topic "Classification based on generative models":

1

Epstein, Joshua M. Generative social science: Studies in agent-based computational modeling. Princeton: Princeton University Press, 2006.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Mackay, David Scott. Knowledge based classification of higher order terrain objects on digital elevation models. Ottawa: National Library of Canada = Bibliothèque nationale du Canada, 1991.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Marchenko, Aleksey, and Mihail Nemcov. Electronics. ru: INFRA-M Academic Publishing LLC., 2023. http://dx.doi.org/10.12737/1587595.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The material of module 2 "Electronics" is systematically presented in accordance with the modern university program of the discipline "Electrical Engineering and Electronics" for non-electrotechnical areas of training of bachelors and certified specialists. The element base of semiconductor electronics devices is considered: classification, voltage and frequency characteristics, features of the use of electronic devices in various operating modes are given. The principles of construction and functioning of typical analog, pulse and digital devices are described in detail. A separate chapter is devoted to the principles of converting light energy into electrical energy and vice versa, the design and operation of optoelectronic devices and fiber-optic lines of information transmission. Meets the requirements of the federal state educational standards of higher education of the latest generation. For students of higher educational institutions studying in non-electrotechnical areas of bachelor's and graduate training.
4

Serebryakov, Andrey, and Gennadiy Zhuravlev. Exploitation of oil and gas fields by horizontal wells. ru: INFRA-M Academic Publishing LLC., 2021. http://dx.doi.org/10.12737/971768.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The textbook describes the design features of offshore horizontal multi-hole production wells, as well as the bottom-hole components of horizontal multi-hole wells. The classification of complications of multi-hole horizontal wells, methods of their prevention and elimination are given. Methods of underground geonavigation of the development of offshore horizontal production wells are proposed. The geological and field bases of operation of horizontal offshore multi-hole oil and gas wells, modes and dynamics of oil, gas and associated water production, methods for calculating dynamic bottom-hole and reservoir pressures are specified. The technologies of operation of offshore horizontal multi-hole wells are presented. The composition and scope of environmental, field and research marine monitoring of the operation of offshore horizontal multi-hole wells and the protection of the marine environment in the production of oil and gas are justified. Meets the requirements of the federal state educational standards of higher education of the latest generation. It is intended for undergraduates of the enlarged group of "Earth Sciences" training areas, as well as for teachers, employees of the fuel and energy complex, industrial geological exploration and oil and gas production enterprises, scientific and design organizations.
5

Serebryakov, Andrey, Lyubov' Ushivceva, Viktor Pyhalov, and Zhanetta Kalashnik. Calculation of geological reserves and resources of oil, gas, condensate and commercial products. ru: INFRA-M Academic Publishing LLC., 2022. http://dx.doi.org/10.12737/1225035.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The modern methods of assessing geological reserves and resources of oil, gas and condensate, concepts and criteria for allocating categories of reserves and resources in accordance with the properties of oils, gases and condensates, which are scientifically based on the international market, are described. For the first time, the calculation of the stocks of commercial products contained in the composition of oil, gas and condensate is given. The categories of reserves and resources according to Russian and foreign classifications are compared. The state of hydrocarbon reserves by countries and continents is described. The interrelationships of the stages of geological exploration with the calculation technologies and categories of reserves and resources are clarified. The ecological tasks of exploration and development of hydrocarbons are highlighted. The main directions and technologies of oil, gas and condensate refining, which are an integral stage of calculating and developing reserves, are given. At the end of each chapter, control questions and tasks are given to assess the level of knowledge and the volume of assimilation of materials. Meets the requirements of the federal state standards of higher education of the latest generation. It is intended for undergraduates of the "Geology" direction, graduate students of the "Earth Sciences" direction, students and teachers of universities, specialists in the exploration and processing of oil, gas and condensate, employees of the fuel and energy complex.
6

Vasil'eva, Natal'ya. Mathematical models in the management of copper production: ideas, methods, examples. ru: INFRA-M Academic Publishing LLC., 2020. http://dx.doi.org/10.12737/1014071.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Presents the current status of modelling of metallurgical processes; the mathematical models used in the description of copper production processes and their classification are considered. A system of methods and models in the field of mathematical modeling of technological processes is set out, including balance, statistical, and optimization models, forecasting models and predictive models. For specific technological processes the following are developed: a balance model of the pyrometallurgical copper production cycle, a polynomial model for prediction of matte composition on the basis of a passive experiment, and a predictive model for quantitative estimation of the copper content in the matte based on fuzzy logic. Of interest to students, postgraduates, and teachers of technical universities, as well as engineers and research workers who use mathematical methods for processing data from laboratory and industrial experiments.
7

Astaf'eva, Ol'ga, Natal'ya Moiseenko, Aleksandr Kozlovskiy, Tat'yana Shemyakina, and Viktor Serov. Risk management in construction. ru: INFRA-M Academic Publishing LLC., 2022. http://dx.doi.org/10.12737/1842952.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The monograph is devoted to the issues of risk management in the organizations of the investment and construction complex. The issues of risk classification are consistently considered, approaches to determining the types and types of risks are established. Attention is paid to approaches to the construction of a risk management mechanism and the specifics of the impact on the identified risks in terms of minimizing possible damage. The issues of state regulation are highlighted, a complex economic problem related to the study of the effectiveness of the chosen strategy of real investment projects based on the use of various methods and models of risk analysis is considered. Modern educational and methodological materials tested in the practice of enterprises and organizations of the construction complex of Moscow and the Moscow region were used. For a wide range of readers interested in the issues of risk management in construction. It will be useful for students, postgraduates and teachers of economic universities.
8

Naumov, Vladimir. Consumer behavior. ru: INFRA-M Academic Publishing LLC., 2020. http://dx.doi.org/10.12737/1014653.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The book describes the basic issues concerning consumer behavior on the basis of the simulation of the decision-making process on buying behavior of customers in the sales area of the store and shopping Internet sites. The classification of models of consumer behavior, based on research in the area of economic, social and psychological theories and empirical evidence regarding decision-making by consumers when purchasing the goods, including online stores. Methods of qualitative and quantitative research of consumer behavior, fundamentals of statistical processing of empirical data. Attention is paid to the processes of consumers ' perception of brands (brands) and advertising messages, the basic rules for the display of goods (merchandising) and its impact on consumer decision, recommendations on the use of psychology of consumer behavior in personal sales. Presents an integrated model of consumer behavior in the Internet environment, the process of perception of the visitor of the company, the factors influencing consumer choice of goods online. Is intended for preparation of bachelors in directions of preparation 38.03.02 "Management", 38.03.06 "trading business" and can be used for training of bachelors in direction of training 43.03.01 "Service", and will also be useful for professionals working in the field of marketing, distribution and sales.
9

Bogumil, Veniamin, and Sarango Duke. Telematics on urban passenger transport. ru: INFRA-M Academic Publishing LLC., 2022. http://dx.doi.org/10.12737/1819882.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The monograph discusses the application of telematics in dispatch control systems in urban passenger transport. The role of telematics as a technological basis in automating the solution of control tasks, accounting and analysis of the volume and quality of transport work in modern dispatch control systems on urban passenger transport is shown. Analytical models have been developed to estimate the capacity of a high-speed bus transportation system on a dedicated line. Mathematical models and algorithms for predicting passenger vehicle interior filling at critical stages of urban passenger transport routes are presented. The issues of application of the concept of the phase space of states introduced by the authors to assess the quality of the passenger transportation process on the route of urban passenger transport are described. The developed classification of service levels and their application in order to inform passengers at stopping points about the degree of filling of the passenger compartment of the arriving vehicle is described. The material is based on the results of theoretical research and practical work on the creation and implementation of automated control systems for urban passenger transport in Russian cities. The material of M.H. Duque Sarango's dissertation submitted for the degree of Candidate of Technical Sciences in the specialty 05.22.10 "Operation of motor transport" was used. It will be useful to specialists in the field of telematics on urban passenger transport.
10

Cevelev, Aleksandr. Material management of railway transport. ru: INFRA-M Academic Publishing LLC., 2020. http://dx.doi.org/10.12737/1064961.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In the monograph reviewed the development of the inventory management of railway transport in the new economic environment of market economy. According to the results of theoretical research, innovative and production potential of the supply system of railway transport the main directions and methods of transformation of the restructuring process under the corporate changes of JSC "RZD", positioned value system of the logistics of railway transportation, and developed a classification model used logistical resources. Evaluation of activity of structural divisions of Russian Railways supply is proposed to be viewed through an integrated and comprehensive approach to the development of systems of balanced indicators of supply and prompt handling of material resources, the implementation of which allows to distribute the strategic objectives of the company "Russian Railways" activities in the system of logistics of the Railways and also to involve in economic circulation of excessive and unused inventories of material and technical resources and efficiently reallocate them among enterprises at the site of the railway. Recommendations for the implementation of the developed algorithms and models are long term in nature and are based on the concept of logistics management and improve the business processes of the logistics system. Will be useful for managers and specialists of directorates of logistics of Russian Railways supply, undergraduates and graduate students interested in the economy of railway transport.

Book chapters on the topic "Classification based on generative models":

1

Akrout, Mohamed, Bálint Gyepesi, Péter Holló, Adrienn Poór, Blága Kincső, Stephen Solis, Katrina Cirone, et al. "Diffusion-Based Data Augmentation for Skin Disease Classification: Impact Across Original Medical Datasets to Fully Synthetic Images." In Deep Generative Models, 99–109. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-53767-7_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Noor, Muhammad Nouman, Imran Ashraf, and Muhammad Nazir. "Analysis of GAN-Based Data Augmentation for GI-Tract Disease Classification." In Advances in Deep Generative Models for Medical Artificial Intelligence, 43–64. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-46341-9_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Zeng, Zhi, Wei Liang, Heping Li, and Shuwu Zhang. "A Novel Video Classification Method Based on Hybrid Generative/Discriminative Models." In Lecture Notes in Computer Science, 705–13. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-89689-0_74.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Shankar, Venkatesh Gauri, and Dilip Singh Sisodia. "Deep Generative Adversarial Network-Based MRI Slices Reconstruction and Enhancement for Alzheimer’s Stages Classification." In Advances in Deep Generative Models for Medical Artificial Intelligence, 65–82. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-46341-9_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Çetiner, Halit, and Sedat Metlek. "A New CNN-Based Deep Learning Model Approach for Skin Cancer Detection and Classification." In Advances in Deep Generative Models for Medical Artificial Intelligence, 177–99. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-46341-9_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Liu, Lei, Zheng Pei, Peng Chen, Zhisheng Gao, Zhihao Gan, and Kang Feng. "An Effective GAN-Based Multi-classification Approach for Financial Time Series." In Proceeding of 2021 International Conference on Wireless Communications, Networking and Applications, 1100–1107. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-2456-9_110.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Deep learning has achieved significant success in various applications due to its powerful feature representations of complex data. Financial time series forecasting is no exception. In this work we leverage Generative Adversarial Nets (GAN), which have been extensively studied recently, for the end-to-end multi-classification of financial time series. An improved generative model based on Convolutional Long Short-Term Memory (ConvLSTM) and Multi-Layer Perceptron (MLP) is proposed to effectively capture temporal features and mine the data distribution of volatility trends (short, neutral, and long) from given financial time series data. We empirically compare the proposed approach with state-of-the-art multi-classification methods on a real-world stock dataset. The results show that the proposed GAN-based method outperforms its competitors in precision and F1 score.
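An editor's simplified sketch of the temporal-classification idea above: PyTorch has no built-in ConvLSTM, so a plain LSTM encoder followed by an MLP head stands in for the paper's ConvLSTM + MLP setup, and the adversarial training loop is omitted. The window length, feature count, and layer sizes are assumptions.

```python
import torch
from torch import nn

class TrendClassifier(nn.Module):
    """LSTM encoder + MLP head for 3-way volatility-trend labels (short/neutral/long).
    A plain LSTM stands in for the ConvLSTM used in the paper."""
    def __init__(self, n_features=5, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden, 32), nn.ReLU(), nn.Linear(32, 3))

    def forward(self, x):            # x: (batch, time, features)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])      # logits over {short, neutral, long}

logits = TrendClassifier()(torch.randn(8, 30, 5))   # 8 windows of 30 time steps
```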
7

Trivedi, Tvisha, S. Geetha, and P. Punithavathi. "A Hyperspectral Image Classification Method-Based Auxiliary Generative Adversarial Networks with Probabilistic Graph Model." In Lecture Notes in Electrical Engineering, 363–73. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-1244-2_31.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Kumar, Rahul, K. Karthik, and S. Sowmya Kamath. "GAN-Based Encoder-Decoder Model for Multi-Label Diagnostic Scan Classification and Automated Radiology Report Generation." In Handbook of AI-Based Models in Healthcare and Medicine, 93–109. Boca Raton: CRC Press, 2023. http://dx.doi.org/10.1201/9781003363361-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Krüger, Nina, Jan Brüning, Leonid Goubergrits, Matthias Ivantsits, Lars Walczak, Volkmar Falk, Henryk Dreger, Titus Kühne, and Anja Hennemuth. "Deep Learning-Based Pulmonary Artery Surface Mesh Generation." In Statistical Atlases and Computational Models of the Heart. Regular and CMRxRecon Challenge Papers, 140–51. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-52448-6_14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Properties of the pulmonary artery play an essential role in the diagnosis and treatment planning of diseases such as pulmonary hypertension. Patient-specific simulation of hemodynamics can support the planning of interventions. However, the variable complex branching structure of the pulmonary artery poses a challenge for image-based generation of suitable geometries. State-of-the-art segmentation-based approaches require an interactive 3D surface reconstruction to prepare the simulation geometry. We propose a deep learning approach to generate a 3D surface mesh of the pulmonary artery from CT images suitable for simulation. The proposed method is based on the Voxel2Mesh algorithm and includes a voxel encoder and decoder as well as a mesh decoder to deform a prototype mesh. An additional centerline coverage loss facilitates the reconstruction of the branching structure. Furthermore, vertex classification allows for the definition of in- and outlets. Our model was trained with 48 human cases and tested on 10 human cases annotated by two observers. The differences in the anatomical parameters inferred from the automatic surface generation correspond to the differences between the observers’ annotations. The suitability of the generated mesh geometries for numerical flow simulations is demonstrated.
10

Liu, Xinyue, Gang Yang, Yang Zhou, Yajie Yang, Weichen Huang, Dayong Ding, and Jun Wu. "Fine-Grained Multi-modal Fundus Image Generation Based on Diffusion Models for Glaucoma Classification." In MultiMedia Modeling, 58–70. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-53302-0_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Classification based on generative models":

1

Reilly, Ciaran, Stephen O Shaughnessy, and Christina Thorpe. "Robustness of Image-Based Malware Classification Models trained with Generative Adversarial Networks." In EICC 2023: European Interdisciplinary Cybersecurity Conference. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3590777.3590792.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Guo, Zijie, Rong Zhi, Wuqaing Zhang, Baofeng Wang, Zhijie Fang, Vitali Kaiser, Julian Wiederer, and Fabian Flohr. "Generative Model based Data Augmentation for Special Person Classification." In 2020 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2020. http://dx.doi.org/10.1109/iv47402.2020.9304604.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Bissoto, Alceu, and Sandra Avila. "Improving Skin Lesion Analysis with Generative Adversarial Networks." In Conference on Graphics, Patterns and Images. Sociedade Brasileira de Computação, 2020. http://dx.doi.org/10.5753/sibgrapi.est.2020.12986.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Melanoma is the most lethal type of skin cancer. Early diagnosis is crucial to increase the survival rate of patients due to the possibility of metastasis. Automated skin lesion analysis can play an essential role by reaching people who do not have access to a specialist. However, since deep learning became the state of the art for skin lesion analysis, data has become a decisive factor in pushing the solutions further. The core objective of this M.Sc. dissertation is to tackle the problems that arise from having limited datasets. In the first part, we use generative adversarial networks to generate synthetic data that augments our classification model's training datasets to boost performance. Our method generates high-resolution, clinically meaningful skin lesion images which, when added to our classification model's training dataset, consistently improved the performance in different scenarios and for distinct datasets. We also investigate how our classification models perceive the synthetic samples and how they can aid the models' generalization. Finally, we investigate a problem that usually arises from having few, relatively small datasets that are thoroughly re-used in the literature: bias. For this, we designed experiments to study how our models use data, verifying how they exploit correct correlations (based on medical algorithms) and spurious ones (based on artifacts introduced during image acquisition). Disturbingly, even in the absence of any clinical information regarding the lesion being diagnosed, our classification models performed much better than chance (even competing with specialist benchmarks), strongly suggesting inflated performances.
4

Nik Aznan, Nik Khadijah, Amir Atapour-Abarghouei, Stephen Bonner, Jason D. Connolly, Noura Al Moubayed, and Toby P. Breckon. "Simulating Brain Signals: Creating Synthetic EEG Data via Neural-Based Generative Models for Improved SSVEP Classification." In 2019 International Joint Conference on Neural Networks (IJCNN). IEEE, 2019. http://dx.doi.org/10.1109/ijcnn.2019.8852227.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Ukwuoma, Chiagoziem C., Md Belal Bin Heyat, Mahmoud Masadeh, Faijan Akhtar, Qin Zhiguang, Emmanuel Bondzie-Selby, Omar AlShorman, and Fahad Alkahtani. "Image Inpainting and Classification Agent Training Based on Reinforcement Learning and Generative Models with Attention Mechanism." In 2021 International Conference on Microelectronics (ICM). IEEE, 2021. http://dx.doi.org/10.1109/icm52667.2021.9664950.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Ye, Yaping, Chu He, and Zhang Zhi. "Classification of time series of SAR images based on generative model." In IGARSS 2016 - 2016 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2016. http://dx.doi.org/10.1109/igarss.2016.7729294.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Nayak, Rikin J., and Jitendra P. Chaudhari. "Generative Model for Image Classification based on Hybrid Adversarial Auto Encoder." In 2023 2nd International Conference on Applied Artificial Intelligence and Computing (ICAAIC). IEEE, 2023. http://dx.doi.org/10.1109/icaaic56838.2023.10140940.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Çelik, Mustafa, and Ahmet Haydar Örnek. "GAN-Based Data Augmentation and Anonymization for Mask Classification." In 10th International Conference on Natural Language Processing (NLP 2021). Academy and Industry Research Collaboration Center (AIRCC), 2021. http://dx.doi.org/10.5121/csit.2021.112315.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Deep learning methods, especially convolutional neural networks (CNNs), have made a major contribution to computer vision. However, deep learning classifiers need large-scale annotated datasets to be trained without over-fitting, and models trained on more diverse data generalize better. Collecting such a large-scale dataset remains challenging. Furthermore, it is invaluable for researchers to protect subjects' confidentiality when using personal data such as face images. In this paper, we propose a deep learning Generative Adversarial Network (GAN) that generates synthetic samples for our mask classification model. Our contributions in this work, both provided by the synthetic images, are two-fold. First, GAN models can be used as an anonymization tool when the subjects' confidentiality matters. Second, the generated masked/unmasked face images boost the performance of the mask classification model when the synthetic images are used as a form of data augmentation. In our work, the classification accuracy using only traditional data augmentations is 93.71%. Using both synthetic and original data together with traditional data augmentations, the result is 95.50%. This shows that GAN-generated synthetic data boosts the performance of deep learning classifiers.
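To make the augmentation idea concrete, the following Python/TensorFlow sketch shows a small DCGAN-style generator that could emit synthetic face images for such a pipeline; the layer sizes, 64x64 output resolution, and latent dimension are illustrative assumptions, not the architecture used by the authors.

# Hedged sketch of a toy generator producing synthetic images for augmentation.
import tensorflow as tf
from tensorflow.keras import layers

latent_dim = 100
generator = tf.keras.Sequential([
    tf.keras.Input(shape=(latent_dim,)),
    layers.Dense(8 * 8 * 256),
    layers.Reshape((8, 8, 256)),
    layers.Conv2DTranspose(128, 4, strides=2, padding="same", activation="relu"),
    layers.Conv2DTranspose(64, 4, strides=2, padding="same", activation="relu"),
    layers.Conv2DTranspose(3, 4, strides=2, padding="same", activation="tanh"),
])  # outputs 64x64 RGB images scaled to [-1, 1]

noise = tf.random.normal((16, latent_dim))
synthetic_faces = generator(noise)      # 16 synthetic samples to add to training data
print(synthetic_faces.shape)            # (16, 64, 64, 3)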
9

Yousif, Shermeen. "Using Language-Based and Generative Deep Learning Models for Encoding Design Intentions and Modifying Architectural Design." In 110th ACSA Annual Meeting Paper Proceedings. ACSA Press, 2022. http://dx.doi.org/10.35483/acsa.am.110.32.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Artificial intelligence models are moving design exploration beyond deterministic rule-based parametric systems by offering new possibilities and expanding the design space, which has become more flexible and adaptive to change. Yet the fact that AI models learn independently raises issues with designers' control over the process. More recently, models that bridge natural language processing and computer vision, such as Contrastive Language-Image Pre-Training (CLIP), have been integrated into generative deep learning models such as StyleGAN, combining generative and classification functionalities. In this way, a certain level of designer agency can be attained by using text prompts to modify the generative process, which was the motivation for this work. We investigate the prototyping of a new design system that employs language-based models and deep learning models in an expanded design space to inform design revision and modification. Our methodology involves experimenting with the targeted deep learning models, prototyping a new framework in which language-based models are integrated into the generative process, and testing the prototype by applying the proposed system to a design case. As a result of the experimentation, the generative model was modified using a set of text prompts that describe the intended design alteration. Overall, the results show successful approaches to guiding the generative process and informing design revision, and they offer insights into associated potentials and limitations, as discussed in the paper.
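The general CLIP-guidance mechanism the abstract refers to can be sketched as a text-image similarity loss that steers a generator's latent code. The snippet below, in Python/PyTorch with the openai/CLIP package, is only a schematic of that idea: the prompt, the resizing shortcut (CLIP's input normalization is omitted), and the generator it would plug into are assumptions, not the paper's StyleGAN pipeline.

# Hedged sketch: negative CLIP similarity between a generated image batch and a
# text prompt, usable as a loss for latent-code optimization.
import torch
import clip  # pip install git+https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

prompt = clip.tokenize(["a pavilion with a perforated timber facade"]).to(device)
text_feat = model.encode_text(prompt)
text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

def clip_loss(generated):  # generated: (N, 3, H, W) tensor in [0, 1]
    resized = torch.nn.functional.interpolate(generated, size=224, mode="bilinear")
    img_feat = model.encode_image(resized)          # CLIP mean/std normalization omitted for brevity
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    return 1.0 - (img_feat * text_feat).sum(dim=-1).mean()  # minimize to match the prompt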

Reports on the topic "Classification based on generative models":

1

Cook, Samantha, Matthew Bigl, Sandra LeGrand, Nicholas Webb, Gayle Tyree, and Ronald Treminio. Landform identification in the Chihuahuan Desert for dust source characterization applications: developing a landform reference data set. Engineer Research and Development Center (U.S.), October 2022. http://dx.doi.org/10.21079/11681/45644.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
ERDC-Geo is a surface erodibility parameterization developed to improve dust predictions in weather forecasting models. Geomorphic landform maps used in ERDC-Geo link surface dust emission potential to landform type. Using a previously generated southwest United States landform map as training data, a classification model based on machine learning (ML) was established to generate ERDC-Geo input data. To evaluate the ability of the ML model to accurately classify landforms, an independent reference landform data set was created for areas in the Chihuahuan Desert. The reference landform data set was generated using two separate mapping methodologies: one based on in situ observations, and another based on the interpretation of satellite imagery. Existing geospatial data layers and recommendations from local rangeland experts guided site selections for both in situ and remote landform identification. A total of 18 landform types were mapped across 128 sites in New Mexico, Texas, and Mexico using the in situ (31 sites) and remote (97 sites) techniques. The final data set is critical for evaluating the ML classification model and, ultimately, for improving dust forecasting models.
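As a rough Python illustration of the evaluation setup described above, a supervised classifier can be fit on the existing landform map and then scored against the independent Chihuahuan Desert reference sites. The file names, predictor variables, and random-forest choice are placeholders, not details from the report.

# Hedged sketch: train on the southwest US landform map, evaluate on the
# independent reference data set (columns and paths are assumptions).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

train = pd.read_csv("southwest_us_landforms.csv")
reference = pd.read_csv("chihuahuan_reference_sites.csv")

features = ["slope", "relief", "curvature", "ndvi"]   # assumed terrain predictors
clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(train[features], train["landform"])

pred = clf.predict(reference[features])
print(classification_report(reference["landform"], pred))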
2

Asher, Sam, Denis Nekipelov, Paul Novosad, and Stephen Ryan. Classification Trees for Heterogeneous Moment-Based Models. Cambridge, MA: National Bureau of Economic Research, December 2016. http://dx.doi.org/10.3386/w22976.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Berney, Ernest, Naveen Ganesh, Andrew Ward, J. Newman, and John Rushing. Methodology for remote assessment of pavement distresses from point cloud analysis. Engineer Research and Development Center (U.S.), April 2021. http://dx.doi.org/10.21079/11681/40401.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The ability to remotely assess road and airfield pavement condition is critical to dynamic basing, contingency deployment, convoy entry and sustainment, and post-attack reconnaissance. Current Army processes to evaluate surface condition are time-consuming and require Soldier presence. Recent developments in photogrammetry and light detection and ranging (LiDAR) enable rapid generation of three-dimensional point cloud models of the pavement surface. Point clouds were generated from data collected on a series of asphalt, concrete, and unsurfaced pavements using ground- and aerial-based sensors. ERDC-developed algorithms automatically discretize the pavement surface into cross- and grid-based sections to identify physical surface distresses such as depressions, ruts, and cracks. Depressions can be sized from the point-to-point distances bounding each depression, and surface roughness is determined based on the point heights along a given cross section. Noted distresses are exported to a distress map file containing only the distress points and their locations for later visualization, quality control, classification, and quantification. Further research and automation of point cloud analysis is ongoing, with the goal of enabling Soldiers with limited training to rapidly assess pavement surface condition from a remote platform.
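One of the steps described above, deriving roughness and depressions from point heights along a cross section, can be sketched in a few lines of Python/NumPy; the detrending approach and 1 cm depth threshold here are illustrative assumptions rather than the ERDC algorithms.

# Hedged sketch: detrend a cross-section profile, report a roughness index,
# and flag points deeper than a threshold as candidate depressions.
import numpy as np

def cross_section_distress(distance_m, height_m, depth_threshold_m=0.01):
    trend = np.polyval(np.polyfit(distance_m, height_m, 1), distance_m)
    residual = height_m - trend
    roughness = float(np.std(residual))          # simple roughness index
    depressions = residual < -depth_threshold_m  # candidate rut/depression points
    return roughness, depressions

x = np.linspace(0.0, 3.0, 61)                                # 3 m section, 5 cm spacing
z = 0.002 * x + np.where((x > 1.2) & (x < 1.6), -0.02, 0.0)  # synthetic 2 cm rut
rough, mask = cross_section_distress(x, z)
print(rough, x[mask])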
4

Mbani, Benson, Timm Schoening, and Jens Greinert. Automated and Integrated Seafloor Classification Workflow (AI-SCW). GEOMAR, May 2023. http://dx.doi.org/10.3289/sw_2_2023.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The Automated and Integrated Seafloor Classification Workflow (AI-SCW) is a semi-automated underwater image processing pipeline that has been customized for classifying the seafloor into semantic habitat categories. The current implementation has been tested against a sequence of underwater images collected by the Ocean Floor Observation System (OFOS) in the Clarion-Clipperton Zone of the Pacific Ocean. Nevertheless, the workflow could also be applied to images acquired by other platforms, such as an Autonomous Underwater Vehicle (AUV) or Remotely Operated Vehicle (ROV). The modules in AI-SCW have been implemented in the Python programming language, using libraries such as scikit-image for image processing, scikit-learn for machine learning and dimensionality reduction, Keras for computer vision with deep learning, and matplotlib for generating visualizations. This modularized implementation allows users to accomplish a variety of underwater computer vision tasks, which include: detecting laser points in the underwater images for use in scale determination; performing contrast enhancement and color normalization to improve the visual quality of the images; semi-automated generation of annotations to be used downstream during supervised classification; training a convolutional neural network (Inception v3) using the generated annotations to semantically classify each image into one of the pre-defined seafloor habitat categories; evaluating sampling strategies for the generation of balanced training images to be used for fitting an unsupervised k-means classifier; and visualization of classification results in both a feature-space view and a map view with geospatial coordinates. The workflow is thus useful for the quick but objective generation of image-based seafloor habitat maps to support monitoring of remote benthic ecosystems.
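Since the abstract names Keras and Inception v3 explicitly, the classification module can be sketched roughly as a transfer-learning model with a small habitat-category head; the class count, frozen backbone, and training call are assumptions, not the AI-SCW defaults.

# Hedged sketch: Inception v3 backbone plus a softmax head for seafloor
# habitat categories (num_classes is a placeholder).
import tensorflow as tf

num_classes = 6
base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False                      # train only the new head first

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(annotated_images, habitat_labels, epochs=10)   # annotations from the workflow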
5

Desa, Hazry, and Muhammad Azizi Azizan. OPTIMIZING STOCKPILE MANAGEMENT THROUGH DRONE MAPPING FOR VOLUMETRIC CALCULATION. Penerbit Universiti Malaysia Perlis, 2023. http://dx.doi.org/10.58915/techrpt2023.004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Stockpile volumetric calculation is an important aspect of many industries, including construction, mining, and agriculture. Accurate calculation of stockpile volumes is essential for efficient inventory management, logistics planning, and quality control. Traditionally, stockpile volumetric calculation is done using ground-based survey methods, which can be time-consuming, labour-intensive, and often inaccurate. However, with recent advancements in drone technology, it has become possible to use drones for stockpile volumetric calculation, providing a faster, safer, and more accurate solution. The duration of this project is one year, from May 1st, 2019, until April 30th, 2020, and it comprises two primary research components: analyzing the properties and classification of limestone, and conducting digital aerial mapping to calculate stockpile volumetrics. The scope of this technical report is specifically limited to the aerial mapping aspect of the project, which was carried out using drones. The project involved two phases, with drone flights taking place during each phase, spaced about six months apart. The first drone flight for data collection occurred on July 12th, 2019, while the second took place on December 15th, 2020. The project aims to utilize drone technology for stockpile volumetric calculation, providing a more efficient and cost-effective solution. It involves the use of advanced drone sensors and imaging technology to capture high-resolution data of the stockpile area. The data are then processed using sophisticated software algorithms to generate accurate 3D models and volumetric calculations of the stockpile.
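The volumetric step itself reduces to integrating surface heights above a base level over the mapped grid. The short Python sketch below shows that calculation for a gridded elevation model; the flat-base assumption, cell size, and synthetic numbers are illustrative only and do not reflect the project's photogrammetric processing.

# Hedged sketch: cell-by-cell volume of a stockpile above an assumed base plane.
import numpy as np

def stockpile_volume_m3(dem, base_elevation_m, cell_size_m):
    heights = np.clip(dem - base_elevation_m, 0.0, None)   # ignore cells below base
    return float(heights.sum() * cell_size_m ** 2)

dem = np.full((200, 200), 101.0)            # synthetic 20 m x 20 m site at 0.1 m cells
dem[80:120, 80:120] = 106.5                 # a 5.5 m high pile over a 4 m x 4 m footprint
print(stockpile_volume_m3(dem, base_elevation_m=101.0, cell_size_m=0.1))   # 88.0 m^3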
6

Osadcha, Kateryna, Viacheslav Osadchyi, Serhiy Semerikov, Hanna Chemerys, and Alona Chorna. The Review of the Adaptive Learning Systems for the Formation of Individual Educational Trajectory. [n.p.], November 2020. http://dx.doi.org/10.31812/123456789/4130.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The article is devoted to a review of adaptive learning systems. We consider their current state and the relevance of using adaptive learning systems as a tool for forming an individual educational trajectory: achieving the highest level of intellectual development in line with natural abilities and inclinations through the formation of an individual learning trajectory, the use of adaptive tests to monitor the quality of acquired knowledge, the construction of a sophisticated model of knowledge assessment, and the construction of a sophisticated model of the learner, in particular one that takes social-emotional characteristics into account. The existing classification of adaptive learning systems was reviewed. We provide a comparative analysis of relevant adaptive learning systems according to their sphere of usage, type of adaptive learning, functional purpose, integration with existing Learning Management Systems, use of modern technologies for natural language generation and recognition, and courseware features; ratings are based on the CWiC Framework for Digital Learning. We also researched the geography of usage of these systems by institutions of higher education. We describe the prospects for effective usage of adaptive learning systems for the implementation and support of new learning and teaching strategies and for the improvement of study results.
7

Marra de Artiñano, Ignacio, Franco Riottini Depetris, and Christian Volpe Martincus. Automatic Product Classification in International Trade: Machine Learning and Large Language Models. Inter-American Development Bank, July 2023. http://dx.doi.org/10.18235/0005012.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Accurately classifying products is essential in international trade. Virtually all countries categorize products into tariff lines using the Harmonized System (HS) nomenclature for both statistical and duty collection purposes. In this paper, we apply and assess several different algorithms to automatically classify products based on text descriptions. To do so, we use agricultural product descriptions from several public agencies, including customs authorities and the United States Department of Agriculture (USDA). We find that while traditional machine learning (ML) models tend to perform well within the dataset in which they were trained, their precision drops dramatically when implemented outside of it. In contrast, large language models (LLMs) such as GPT 3.5 show a consistently good performance across all datasets, with accuracy rates ranging between 60% and 90% depending on HS aggregation levels. Our analysis highlights the valuable role that artificial intelligence (AI) can play in facilitating product classification at scale and, more generally, in enhancing the categorization of unstructured data.
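A minimal example of the kind of "traditional ML" baseline the paper benchmarks is a TF-IDF plus logistic-regression classifier mapping product descriptions to HS codes, trained on one agency's data and scored on another's. The file names and column names below are assumptions for the sketch.

# Hedged sketch: text classifier for HS codes; out-of-dataset scoring mirrors
# the cross-agency evaluation discussed in the paper.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train = pd.read_csv("usda_descriptions.csv")     # columns: description, hs_code
test = pd.read_csv("customs_descriptions.csv")   # a different agency's descriptions

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),
    LogisticRegression(max_iter=1000),
)
clf.fit(train["description"], train["hs_code"])
print(clf.score(test["description"], test["hs_code"]))   # accuracy outside the training set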
8

Sadoune, Igor, Marcelin Joanis, and Andrea Lodi. Implementing a Hierarchical Deep Learning Approach for Simulating multilevel Auction Data. CIRANO, September 2023. http://dx.doi.org/10.54932/lqog8430.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
We present a deep learning solution to address the challenges of simulating realistic synthetic first-price sealed-bid auction data. The complexities encountered in this type of auction data include high-cardinality discrete feature spaces and a multilevel structure arising from multiple bids associated with a single auction instance. Our methodology combines deep generative modeling (DGM) with an artificial learner that predicts the conditional bid distribution based on auction characteristics, contributing to advancements in simulation-based research. This approach lays the groundwork for creating realistic auction environments suitable for agent-based learning and modeling applications. Our contribution is twofold: we introduce a comprehensive methodology for simulating multilevel discrete auction data, and we underscore the potential of DGM as a powerful instrument for refining simulation techniques and fostering the development of economic models grounded in generative AI.
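The multilevel structure the abstract describes, auction-level characteristics with several bids nested under each auction, can be illustrated with a toy simulator in Python/NumPy. The lognormal draw below is only a stand-in for the learned conditional bid distribution; it is not the paper's deep generative model.

# Hedged toy example: auctions carry covariates; bids are drawn conditionally.
import numpy as np

rng = np.random.default_rng(0)

def simulate_bids(n_bidders, reserve_price):
    # stand-in conditional distribution: bids scale with the reserve price
    return reserve_price * rng.lognormal(mean=0.1, sigma=0.3, size=n_bidders)

auctions = [{"id": i,
             "reserve": float(rng.uniform(50, 150)),
             "bidders": int(rng.integers(2, 8))} for i in range(5)]

for a in auctions:
    bids = simulate_bids(a["bidders"], a["reserve"])
    print(a["id"], round(a["reserve"], 1), bids.round(1))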
9

Huang, Lei, Meng Song, Hui Shen, Huixiao Hong, Ping Gong, Deng Hong-Wen, and Zhang Chaoyang. Deep learning methods for omics data imputation. Engineer Research and Development Center (U.S.), February 2024. http://dx.doi.org/10.21079/11681/48221.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
One common problem in omics data analysis is missing values, which can arise due to various reasons, such as poor tissue quality and insufficient sample volumes. Instead of discarding missing values and related data, imputation approaches offer an alternative means of handling missing data. However, the imputation of missing omics data is a non-trivial task. Difficulties mainly come from high dimensionality, non-linear or nonmonotonic relationships within features, technical variations introduced by sampling methods, sample heterogeneity, and the non-random missingness mechanism. Several advanced imputation methods, including deep learning-based methods, have been proposed to address these challenges. Due to its capability of modeling complex patterns and relationships in large and high-dimensional datasets, many researchers have adopted deep learning models to impute missing omics data. This review provides a comprehensive overview of the currently available deep learning-based methods for omics imputation from the perspective of deep generative model architectures such as autoencoder, variational autoencoder, generative adversarial networks, and Transformer, with an emphasis on multi-omics data imputation. In addition, this review also discusses the opportunities that deep learning brings and the challenges that it might face in this field.
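As a concrete example of the autoencoder-style imputation the review surveys, the following Python/PyTorch sketch trains a small network to reconstruct an omics matrix from its observed entries and reads imputed values off the reconstruction; the dimensions, masking rate, and architecture are arbitrary illustration choices.

# Hedged sketch: masked-reconstruction autoencoder for missing-value imputation.
import torch
import torch.nn as nn

X = torch.rand(500, 200)                    # toy omics matrix (samples x features)
observed = torch.rand_like(X) > 0.2         # True where a value is observed

model = nn.Sequential(nn.Linear(200, 64), nn.ReLU(), nn.Linear(64, 200))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(200):
    opt.zero_grad()
    recon = model(X * observed)                       # missing entries fed as zeros
    loss = ((recon - X) ** 2 * observed).mean()       # penalize observed entries only
    loss.backward()
    opt.step()

X_imputed = torch.where(observed, X, model(X * observed).detach())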
10

Kingston, A. W., A. Mort, C. Deblonde, and O. H. Ardakani. Hydrogen sulfide (H2S) distribution in the Triassic Montney Formation of the Western Canadian Sedimentary Basin. Natural Resources Canada/CMSS/Information Management, 2022. http://dx.doi.org/10.4095/329797.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The Montney Formation is a highly productive hydrocarbon reservoir with significant reserves of hydrocarbon gases and liquids, making it of great economic importance to Canada. However, high concentrations of hydrogen sulfide (H2S) have been encountered during exploration and development, with detrimental effects on the environmental, health, and economic aspects of production. H2S is a highly toxic and corrosive gas, and it is therefore essential to understand its distribution within the basin in order to better identify areas with a high risk of encountering elevated H2S concentrations and to mitigate potential negative impacts. Gas composition data from Montney wells are routinely collected by operators for submission to provincial regulators and are publicly available. We have combined data from Alberta (AB) and British Columbia (BC) to create a basin-wide database of Montney H2S concentrations. We then used an iterative quality control and quality assurance process to produce a dataset that best represents gas composition in reservoir fluids. This included: 1) designating the gas source formation based on directional surveys using a newly developed basin-wide 3D model that combines the AGS's Montney model of Alberta with a model in BC, which removes errors associated with reported formations; 2) removing injection and disposal wells; 3) assessing the wells with the 50 highest H2S concentrations to determine whether the gas composition data are accurate and reflective of reservoir fluid chemistry; and 4) evaluating spatially isolated extreme values to ensure data accuracy and prevent isolated highs from negatively impacting data interpolation. The resulting dataset was then used to calculate statistics for each x, y location as input to the interpolation process. Three interpolations were constructed based on the associated phase classification: H2S in gas, H2S in liquid (C7+), and aqueous H2S. We used Empirical Bayesian Kriging interpolation to generate H2S distribution maps along with a series of model uncertainty maps. These interpolations illustrate that H2S is heterogeneously distributed across the Montney basin. In general, higher concentrations are found in AB than in BC, with the highest concentrations in the Grande Prairie region along with several other isolated regions in the southeastern portion of the basin. The interpolations of H2S associated with different phases show broad similarities. Future mapping research will focus on subdividing intra-Montney sub-members plus under- and overlying strata to further our understanding of the role migration plays in H2S distribution within the Montney basin.
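The report's interpolation was done with Empirical Bayesian Kriging in a GIS environment; as a rough open-source stand-in, the Python sketch below runs ordinary kriging with PyKrige on synthetic well locations and H2S values, which illustrates the mapping step but is not the report's method or data.

# Hedged sketch: ordinary kriging of per-well H2S values onto a regular grid.
import numpy as np
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(1)
x = rng.uniform(0, 100, 60)                          # synthetic well easting (km)
y = rng.uniform(0, 100, 60)                          # synthetic well northing (km)
h2s = rng.lognormal(mean=0.0, sigma=1.0, size=60)    # synthetic H2S concentrations

ok = OrdinaryKriging(x, y, h2s, variogram_model="spherical")
grid_x = np.linspace(0, 100, 50)
grid_y = np.linspace(0, 100, 50)
z, variance = ok.execute("grid", grid_x, grid_y)     # interpolated surface + uncertainty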
