A collection of scholarly literature on the topic "Maximum Mean Discrepancy (MMD)"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Browse lists of relevant articles, books, dissertations, theses, and other scholarly sources on the topic "Maximum Mean Discrepancy (MMD)".

Next to every work in the list of references you will find an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Maximum Mean Discrepancy (MMD)"

1

Huang, Qihang, Yulin He, and Zhexue Huang. "A Novel Maximum Mean Discrepancy-Based Semi-Supervised Learning Algorithm." Mathematics 10, no. 1 (December 23, 2021): 39. http://dx.doi.org/10.3390/math10010039.

Full text
Abstract:
To provide more external knowledge for training semi-supervised learning (SSL) algorithms, this paper proposes a maximum mean discrepancy-based SSL (MMD-SSL) algorithm, which trains a well-performing classifier by iteratively refining it with highly confident unlabeled samples. The MMD-SSL algorithm performs three main steps. First, a multilayer perceptron (MLP) is trained on the labeled samples and is then used to assign labels to unlabeled samples. Second, the unlabeled samples are divided into multiple groups with the k-means clustering algorithm. Third, the maximum mean discrepancy (MMD) criterion is used to measure the distribution consistency between the k-means-clustered samples and the MLP-classified samples. Samples with a consistent distribution are labeled as highly confident and used to retrain the MLP. The MMD-SSL algorithm iterates this training until all unlabeled samples are consistently labeled. We conducted extensive experiments on 29 benchmark data sets to validate the rationality and effectiveness of the MMD-SSL algorithm. Experimental results show that the generalization capability of the MLP gradually improves as the number of labeled samples increases, and statistical analysis demonstrates that the MMD-SSL algorithm provides better testing accuracy and kappa values than 10 other self-training and co-training SSL algorithms.
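The (squared) MMD used as the consistency criterion above is the central quantity throughout this list. As a reference point, here is a minimal pure-Python sketch of the standard unbiased estimator with a Gaussian kernel; the bandwidth and the toy data are illustrative assumptions, not values from the paper:

```python
import math
import random

def gaussian_kernel(x, y, bandwidth=1.0):
    # k(x, y) = exp(-||x - y||^2 / (2 * bandwidth^2))
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-sq_dist / (2.0 * bandwidth ** 2))

def mmd2_unbiased(X, Y, bandwidth=1.0):
    """Unbiased estimate of the squared MMD between samples X and Y."""
    m, n = len(X), len(Y)
    k = lambda a, b: gaussian_kernel(a, b, bandwidth)
    xx = sum(k(X[i], X[j]) for i in range(m) for j in range(m) if i != j) / (m * (m - 1))
    yy = sum(k(Y[i], Y[j]) for i in range(n) for j in range(n) if i != j) / (n * (n - 1))
    xy = sum(k(x, y) for x in X for y in Y) / (m * n)
    return xx + yy - 2.0 * xy

random.seed(0)
same_a = [[random.gauss(0, 1)] for _ in range(200)]
same_b = [[random.gauss(0, 1)] for _ in range(200)]
shifted = [[random.gauss(3, 1)] for _ in range(200)]

print(mmd2_unbiased(same_a, same_b))   # close to zero: same distribution
print(mmd2_unbiased(same_a, shifted))  # clearly positive: distributions differ
```

A value near zero says the two samples are plausibly from the same distribution, which is exactly how MMD-SSL decides whether a cluster's labels are trustworthy.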
APA, Harvard, Vancouver, ISO, and other styles
2

Zhou, Zhaokun, Yuanhong Zhong, Xiaoming Liu, Qiang Li, and Shu Han. "DC-MMD-GAN: A New Maximum Mean Discrepancy Generative Adversarial Network Using Divide and Conquer." Applied Sciences 10, no. 18 (September 14, 2020): 6405. http://dx.doi.org/10.3390/app10186405.

Full text
Abstract:
Generative adversarial networks (GANs) have had a revolutionary influence on sample generation. Maximum mean discrepancy GANs (MMD-GANs) show competitive performance compared with other GANs. However, the loss function of MMD-GANs is an empirical estimate of the maximum mean discrepancy (MMD) and is not precise in measuring the distance between sample distributions, which inhibits MMD-GAN training. We propose an efficient divide-and-conquer model, called DC-MMD-GANs, which constrains the MMD loss function to a tight bound on the deviation between the empirical estimate and the expected value of MMD, and accelerates the training process. DC-MMD-GANs consist of a division step and a conquer step. In the division step, we learn an embedding of the training images with an auto-encoder and partition the training images into adaptive subsets through k-means clustering on the embedding. In the conquer step, sub-models are fed with the subsets separately and trained synchronously. The loss values of all sub-models are integrated to compute a new weighted-sum loss function. The new loss function with a tight deviation bound provides more precise gradients for improving performance. Experimental results show that with a fixed number of iterations, DC-MMD-GANs converge faster and achieve better performance than standard MMD-GANs on the CelebA and CIFAR-10 datasets.
APA, Harvard, Vancouver, ISO, and other styles
3

Xu, Haoji. "Generate Faces Using Ladder Variational Autoencoder with Maximum Mean Discrepancy (MMD)." Intelligent Information Management 10, no. 04 (2018): 108–13. http://dx.doi.org/10.4236/iim.2018.104009.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Sun, Jiancheng. "Complex Network Construction of Univariate Chaotic Time Series Based on Maximum Mean Discrepancy." Entropy 22, no. 2 (January 24, 2020): 142. http://dx.doi.org/10.3390/e22020142.

Full text
Abstract:
The analysis of chaotic time series is usually a challenging task due to its complexity. In this communication, a method of complex network construction is proposed for univariate chaotic time series, which provides a novel way to analyze time series. In the process of complex network construction, how to measure the similarity between time series is a key problem to be solved. Due to the complexity of chaotic systems, common metrics struggle to measure this similarity. Consequently, the proposed method first transforms the univariate time series into a high-dimensional phase space to enrich its information, then uses a Gaussian mixture model (GMM) to represent the time series, and finally introduces the maximum mean discrepancy (MMD) to measure the similarity between GMMs. The Lorenz system is used to validate the correctness and effectiveness of the proposed method for measuring similarity.
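The first step of the method above, lifting a univariate series into a high-dimensional phase space, is a standard time-delay embedding. A minimal sketch (the `dim` and `tau` values are illustrative choices, not the paper's settings):

```python
def delay_embed(series, dim=3, tau=2):
    """Time-delay embedding: map a univariate series to points in a
    dim-dimensional phase space (Takens-style reconstruction)."""
    n = len(series) - (dim - 1) * tau
    return [[series[i + j * tau] for j in range(dim)] for i in range(n)]

points = delay_embed(list(range(10)), dim=3, tau=2)
print(len(points), points[0])  # 6 points; the first is [0, 2, 4]
```

Each embedded point collects `dim` lagged readings, so the resulting point cloud can then be summarized (e.g., by a GMM) and compared across series.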
APA, Harvard, Vancouver, ISO, and other styles
5

Zhang, Xiangqing, Yan Feng, Shun Zhang, Nan Wang, Shaohui Mei, and Mingyi He. "Semi-Supervised Person Detection in Aerial Images with Instance Segmentation and Maximum Mean Discrepancy Distance." Remote Sensing 15, no. 11 (June 4, 2023): 2928. http://dx.doi.org/10.3390/rs15112928.

Full text
Abstract:
Detecting sparse, small, lost persons, who occupy only a few pixels in high-resolution aerial images, was, is, and remains an important and difficult mission in which accurate monitoring and intelligent co-rescue play a vital role for the search and rescue (SaR) system. However, many problems remain unsolved in existing remote-vision-based SaR systems, such as the shortage of person samples in SaR scenarios and the low tolerance of small objects to bounding-box error. To address these issues, a copy-paste mechanism (ISCP) with semi-supervised object detection (SSOD) via instance segmentation and the maximum mean discrepancy (MMD) distance is proposed, which can provide highly robust, multi-task, and efficient aerial person detection for the prototype SaR system. Specifically, numerous pseudo-labels are obtained by accurately segmenting the instances of synthetic ISCP samples to obtain their boundaries. The SSOD trainer then uses soft weights to balance the prediction entropy of the loss function between the ground truth and unreliable labels. Moreover, a novel MMD-based evaluation metric for anchor-based detectors is proposed to elegantly compute the IoU of the bounding boxes. Extensive experiments and ablation studies on Heridal and optimized public datasets demonstrate that our approach is effective and achieves state-of-the-art person detection performance in aerial images.
APA, Harvard, Vancouver, ISO, and other styles
6

Zhao, Ji, and Deyu Meng. "FastMMD: Ensemble of Circular Discrepancy for Efficient Two-Sample Test." Neural Computation 27, no. 6 (June 2015): 1345–72. http://dx.doi.org/10.1162/neco_a_00732.

Full text
Abstract:
The maximum mean discrepancy (MMD) is a recently proposed test statistic for the two-sample test. Its quadratic time complexity, however, greatly hampers its applicability to large-scale problems. To accelerate the MMD calculation, in this study we propose an efficient method called FastMMD. The core idea of FastMMD is to equivalently transform the MMD with shift-invariant kernels into the amplitude expectation of a linear combination of sinusoid components, based on Bochner's theorem and the Fourier transform (Rahimi & Recht, 2007). By sampling from the Fourier transform, FastMMD decreases the time complexity of the MMD calculation from O(N^2 d) to O(LNd), where N and d are the size and dimension of the sample set, respectively, and L is the number of basis functions for approximating kernels, which determines the approximation accuracy. For kernels that are spherically invariant, the computation can be further accelerated to O(LN log d) by using the Fastfood technique (Le, Sarlós, & Smola, 2013). The uniform convergence of our method has also been theoretically proved for both unbiased and biased estimates. We also provide a geometric explanation of our method, ensemble of circular discrepancy, which helps us understand the insight of MMD and, we hope, will lead to more extensive metrics for assessing the two-sample test task. Experimental results substantiate that the accuracy of FastMMD is similar to that of the exact MMD, with faster computation and lower variance than existing MMD approximation methods.
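The Bochner-theorem idea behind such linear-time approximations can be illustrated with plain random Fourier features, in the spirit of the Rahimi & Recht feature map the abstract cites. This is a sketch of the O(LNd) idea, not the FastMMD algorithm itself; all parameter values are illustrative:

```python
import math
import random

def rff_features(x, ws, bs):
    # Random Fourier features: phi_l(x) = sqrt(2/L) * cos(<w_l, x> + b_l)
    L = len(ws)
    return [math.sqrt(2.0 / L) * math.cos(sum(wi * xi for wi, xi in zip(w, x)) + b)
            for w, b in zip(ws, bs)]

def approx_mmd(X, Y, L=500, bandwidth=1.0, seed=0):
    """Approximate the Gaussian-kernel MMD in O(L*N*d) time: by Bochner's
    theorem the kernel is an expectation over cosine features, so the MMD is
    approximated by the distance between the mean feature maps of X and Y."""
    rng = random.Random(seed)
    d = len(X[0])
    ws = [[rng.gauss(0.0, 1.0 / bandwidth) for _ in range(d)] for _ in range(L)]
    bs = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(L)]
    def mean_map(S):
        feats = [rff_features(x, ws, bs) for x in S]
        return [sum(col) / len(S) for col in zip(*feats)]
    mx, my = mean_map(X), mean_map(Y)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(mx, my)))

rng = random.Random(1)
X = [[rng.gauss(0, 1)] for _ in range(300)]
Y = [[rng.gauss(0, 1)] for _ in range(300)]
Z = [[rng.gauss(2, 1)] for _ in range(300)]
mmd_same, mmd_diff = approx_mmd(X, Y), approx_mmd(X, Z)
print(mmd_same, mmd_diff)  # the shifted pair gives a much larger value
```

The exact quadratic-time estimator touches every pair of points; here each point is mapped once into L features, so the cost grows linearly in N.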
APA, Harvard, Vancouver, ISO, and other styles
7

Williamson, Sinead A., and Jette Henderson. "Understanding Collections of Related Datasets Using Dependent MMD Coresets." Information 12, no. 10 (September 23, 2021): 392. http://dx.doi.org/10.3390/info12100392.

Full text
Abstract:
Understanding how two datasets differ can help us determine whether one dataset under-represents certain sub-populations, and provides insights into how well models will generalize across datasets. Representative points selected by a maximum mean discrepancy (MMD) coreset can provide interpretable summaries of a single dataset, but are not easily compared across datasets. In this paper, we introduce dependent MMD coresets, a data summarization method for collections of datasets that facilitates comparison of distributions. We show that dependent MMD coresets are useful for understanding multiple related datasets and understanding model generalization between such datasets.
APA, Harvard, Vancouver, ISO, and other styles
8

Li, Kangji, Borui Wei, Qianqian Tang, and Yufei Liu. "A Data-Efficient Building Electricity Load Forecasting Method Based on Maximum Mean Discrepancy and Improved TrAdaBoost Algorithm." Energies 15, no. 23 (November 22, 2022): 8780. http://dx.doi.org/10.3390/en15238780.

Full text
Abstract:
Building electricity load forecasting plays an important role in building energy management, peak demand management, and power grid security. In the past two decades, a large number of data-driven models have been applied to building- and larger-scale energy consumption prediction. Although these models have been successful in specific cases, their performance is greatly affected by the quantity and quality of the building data. Moreover, for older buildings with sparse data, or new buildings with no historical data, accurate predictions are difficult to achieve. To address this data-silo problem caused by insufficient data collection in building energy consumption prediction, this study proposes a building electricity load forecasting method based on a similarity judgement and an improved TrAdaBoost algorithm (iTrAdaBoost). The Maximum Mean Discrepancy (MMD) is used to search public datasets for building samples similar to the target building. Unlike general boosting algorithms, the proposed iTrAdaBoost algorithm iteratively updates the weights of the similar building samples and combines them with the target building samples to improve prediction accuracy. A case study of an educational building is carried out in this paper. The results show that even when the target and source samples belong to different domains, i.e., the geographical location and meteorological conditions of the buildings differ, the proposed MMD-iTrAdaBoost method achieves better prediction accuracy in the transfer learning process than BP or traditional AdaBoost models. In addition, compared with other advanced deep learning models, the proposed method has a simple structure and is easy to implement in engineering practice.
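The similarity-judgement step can be sketched as ranking candidate source buildings by their MMD to the target's load data. This is a hypothetical stand-in, not the paper's implementation; the building names, bandwidth, and toy load profiles are assumptions:

```python
import math
import random

def mmd2(X, Y, bw=1.0):
    # Biased squared-MMD estimate with a Gaussian kernel on scalar readings
    k = lambda a, b: math.exp(-(a - b) ** 2 / (2 * bw * bw))
    m, n = len(X), len(Y)
    return (sum(k(a, b) for a in X for b in X) / (m * m)
            + sum(k(a, b) for a in Y for b in Y) / (n * n)
            - 2.0 * sum(k(a, b) for a in X for b in Y) / (m * n))

def most_similar_sources(target, sources, top=1):
    """Rank candidate source datasets by their MMD^2 to the target data."""
    return sorted(sources, key=lambda name: mmd2(target, sources[name]))[:top]

rng = random.Random(0)
target = [rng.gauss(10, 2) for _ in range(100)]            # target building's loads
sources = {
    "building_A": [rng.gauss(10, 2) for _ in range(100)],  # similar load profile
    "building_B": [rng.gauss(25, 2) for _ in range(100)],  # very different profile
}
print(most_similar_sources(target, sources))  # ['building_A']
```

The selected source samples would then seed the transfer-learning step (iTrAdaBoost in the paper), which reweights them against the target samples.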
APA, Harvard, Vancouver, ISO, and other styles
9

Lee, Junghyun, Gwangsu Kim, Mahbod Olfat, Mark Hasegawa-Johnson, and Chang D. Yoo. "Fast and Efficient MMD-Based Fair PCA via Optimization over Stiefel Manifold." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 7363–71. http://dx.doi.org/10.1609/aaai.v36i7.20699.

Full text
Abstract:
This paper defines fair principal component analysis (PCA) as minimizing the maximum mean discrepancy (MMD) between the dimensionality-reduced conditional distributions of different protected classes. The incorporation of MMD naturally leads to an exact and tractable mathematical formulation of fairness with good statistical properties. We formulate the problem of fair PCA subject to MMD constraints as a non-convex optimization over the Stiefel manifold and solve it using the Riemannian Exact Penalty Method with Smoothing (REPMS). Importantly, we provide a local optimality guarantee and explicitly show the theoretical effect of each hyperparameter in practical settings, extending previous results. Experimental comparisons based on synthetic and UCI datasets show that our approach outperforms prior work in explained variance, fairness, and runtime.
APA, Harvard, Vancouver, ISO, and other styles
10

Han, Chao, Deyun Zhou, Zhen Yang, Yu Xie, and Kai Zhang. "Discriminative Sparse Filtering for Multi-Source Image Classification." Sensors 20, no. 20 (October 16, 2020): 5868. http://dx.doi.org/10.3390/s20205868.

Full text
Abstract:
Distribution mismatch caused by varying resolutions, backgrounds, etc. is easily found in multi-sensor systems. Domain adaptation attempts to reduce such domain discrepancy by means of different measurements, e.g., the maximum mean discrepancy (MMD). Despite their success, such methods often fail to guarantee the separability of the learned representation. To tackle this issue, we put forward a novel approach that jointly learns both domain-shared and discriminative representations. Specifically, we model the feature discrimination explicitly for the two domains. Alternating discriminant optimization is proposed to obtain discriminative features with an l2 constraint in the labeled source domain, and sparse filtering is introduced to capture the intrinsic structures existing in the unlabeled target domain. Finally, they are integrated in a unified framework along with MMD to align the domains. Extensive experiments compared with state-of-the-art methods verify the effectiveness of our method on cross-domain tasks.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations on the topic "Maximum Mean Discrepancy (MMD)"

1

Cherief-Abdellatif, Badr-Eddine. "Contributions to the theoretical study of variational inference and robustness." Electronic Thesis or Diss., Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAG001.

Full text
Abstract:
This PhD thesis deals with variational inference and robustness in statistics and machine learning. More precisely, it focuses on the statistical properties of variational approximations and on the design of efficient algorithms for computing them in an online fashion, and it investigates estimators based on the Maximum Mean Discrepancy as learning rules that are robust to model misspecification. In recent years, variational inference has been studied extensively from the computational viewpoint, but until very recently little attention had been paid in the literature to its theoretical properties. In this thesis, we investigate the consistency of variational approximations in various statistical models and the conditions that ensure it. In particular, we tackle the special cases of mixture models and deep neural networks. We also justify in theory the use of the ELBO maximization strategy, a model selection criterion that is widely used in the Variational Bayes community and is known to work well in practice. Moreover, Bayesian inference provides an attractive online-learning framework for analyzing sequential data, and offers generalization guarantees that hold even under model mismatch and in the presence of adversaries. Unfortunately, exact Bayesian inference is rarely feasible in practice and approximation methods are usually employed; do such methods preserve the generalization properties of Bayesian inference? In this thesis, we show that this is indeed the case for some variational inference algorithms. We propose new online, tempered variational algorithms and derive their generalization bounds. Our theoretical result relies on the convexity of the variational objective, but we argue that it should hold more generally and present empirical evidence in support of this. Our work thus provides theoretical justification for online algorithms that rely on approximate Bayesian methods.
Another question addressed in this thesis is the design of a universal estimation procedure. This question is of major interest, in particular because it leads to robust estimators, a very active topic in statistics and machine learning. We tackle the problem of universal estimation using a minimum distance estimator based on the Maximum Mean Discrepancy. We show that the estimator is robust both to dependence and to the presence of outliers in the dataset. We also highlight the connections with minimum distance estimators based on the L2 distance. Finally, we provide a theoretical study of the stochastic gradient descent algorithm used to compute the estimator, and we support our findings with numerical simulations. We also propose a Bayesian version of our estimator, which we study from both theoretical and computational points of view.
APA, Harvard, Vancouver, ISO, and other styles
2

Jia, Xiaodong. "Data Suitability Assessment and Enhancement for Machine Prognostics and Health Management Using Maximum Mean Discrepancy." University of Cincinnati / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1544002523636343.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Oskarsson, Joel. "Probabilistic Regression using Conditional Generative Adversarial Networks." Thesis, Linköpings universitet, Statistik och maskininlärning, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-166637.

Full text
Abstract:
Regression is a central problem in statistics and machine learning with applications everywhere in science and technology. In probabilistic regression the relationship between a set of features and a real-valued target variable is modelled as a conditional probability distribution. There are cases where this distribution is very complex and not properly captured by simple approximations, such as assuming a normal distribution. This thesis investigates how conditional Generative Adversarial Networks (GANs) can be used to properly capture more complex conditional distributions. GANs have seen great success in generating complex high-dimensional data, but less work has been done on their use for regression problems. This thesis presents experiments to better understand how conditional GANs can be used in probabilistic regression. Different versions of GANs are extended to the conditional case and evaluated on synthetic and real datasets. It is shown that conditional GANs can learn to estimate a wide range of different distributions and be competitive with existing probabilistic regression models.
APA, Harvard, Vancouver, ISO, and other styles
4

Yang, Qibo. "A Transfer Learning Methodology of Domain Generalization for Prognostics and Health Management." University of Cincinnati / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1613749034966366.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Atta-Asiamah, Ernest. "Distributed Inference for Degenerate U-Statistics with Application to One and Two Sample Test." Diss., North Dakota State University, 2020. https://hdl.handle.net/10365/31777.

Full text
Abstract:
In many hypothesis testing problems, such as one-sample and two-sample test problems, the test statistics are degenerate U-statistics. One of the challenges in practice is the computation of U-statistics for a large sample size. Besides, for degenerate U-statistics the limiting distribution is a mixture of weighted chi-squares involving the eigenvalues of the kernel of the U-statistic. As a result, it is not straightforward to construct the rejection region based on this asymptotic distribution. In this research, we aim to reduce the computational complexity of degenerate U-statistics and propose an easy-to-calibrate test statistic by using the divide-and-conquer method. Specifically, we randomly partition the full n data points into k_n disjoint groups of equal size, compute the U-statistic on each group, and combine them by averaging to get a statistic T_n. We prove that the statistic T_n has the standard normal distribution as its limiting distribution. In this way, the running time is reduced from O(n^m) to O(n^m / k_n^(m-1)), where m is the order of the one-sample U-statistic. Besides, for a given significance level α, it is easy to construct the rejection region. We apply our method to the goodness-of-fit test and the two-sample test. The simulation and real data analysis show that the proposed test achieves high power and fast running time for both one- and two-sample tests.
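The divide-and-conquer construction is easy to sketch for an order-2 U-statistic. The variance kernel below is an illustrative stand-in, not one of the degenerate kernels studied in the dissertation, and the final normalization that yields a standard normal limit is omitted:

```python
import random
from itertools import combinations

def u_stat(sample, h):
    # Order-2 U-statistic: average of the kernel h over all unordered pairs
    pairs = list(combinations(sample, 2))
    return sum(h(x, y) for x, y in pairs) / len(pairs)

def divide_and_conquer_u(sample, h, k):
    """Average of U-statistics over k disjoint groups of equal size:
    targets the same quantity with O(n^2 / k) pair evaluations
    instead of O(n^2) for an order-2 kernel."""
    size = len(sample) // k
    groups = [sample[i * size:(i + 1) * size] for i in range(k)]
    return sum(u_stat(g, h) for g in groups) / k

# Kernel whose U-statistic is the unbiased sample variance.
h_var = lambda x, y: 0.5 * (x - y) ** 2

random.seed(0)
data = [random.gauss(0, 1) for _ in range(600)]
print(u_stat(data, h_var))                   # full U-statistic
print(divide_and_conquer_u(data, h_var, 6))  # close to it, but much cheaper
```

Both estimates target the true variance (here 1), but the grouped version evaluates far fewer pairs, which is the source of the speed-up described above.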
APA, Harvard, Vancouver, ISO, and other styles
6

Mayo, Thomas Richard. "Machine learning for epigenetics : algorithms for next generation sequencing data." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/33055.

Full text
Abstract:
The advent of Next Generation Sequencing (NGS), a little over a decade ago, has led to a vast and rapid increase in the generation of genomic data. The drastically reduced cost has in turn enabled powerful modifications that can be used to investigate not just genetic, but epigenetic, phenomena. Epigenetics refers to the study of mechanisms affecting gene expression other than the genetic code itself and thus, at the transcription level, incorporates DNA methylation, transcription factor binding and histone modifications, amongst others. This thesis outlines and tackles two major challenges in the computational analysis of such data using techniques from machine learning. Firstly, I address the problem of testing for differential methylation between groups of bisulfite sequencing data sets. DNA methylation plays an important role in genomic imprinting, X-chromosome inactivation and the repression of repetitive elements, as well as being implicated in numerous diseases, such as cancer. Bisulfite sequencing provides single nucleotide resolution methylation data at the whole genome scale, but a sensitive analysis of such data is difficult. I propose a solution that uses a powerful kernel-based machine learning technique, the Maximum Mean Discrepancy, to leverage well-characterised spatial correlations in DNA methylation, and adapt the method for this particular use. I use this tailored method to analyse a novel data set from a study of ageing in three different tissues in the mouse. This study motivates further modifications to the method and highlights the utility of the underlying measure as an exploratory tool for methylation analysis. Secondly, I address the problem of predictive and explanatory modelling of chromatin immunoprecipitation sequencing data (ChIP-Seq). ChIP-Seq is typically used to assay the binding of a protein of interest, such as a transcription factor or histone, to the DNA, and as such is one of the most widely used sequencing assays.
While peak callers are a powerful tool for identifying binding sites in sparse and clean ChIP-Seq profiles, broader signals defy analysis in this framework. Instead, generative models that explain the data in terms of the underlying sequence can help uncover mechanisms that predict binding or the lack thereof. I explore current problems with ChIP-Seq analysis, such as zero-inflation and the use of the control experiment, known as the input. I then devise a method for representing k-mers that enables the use of longer DNA sub-sequences within a flexible model development framework, such as generalised linear models, without heavy programming requirements. Finally, I use these insights to develop an appropriate Bayesian generative model that predicts ChIP-Seq count data in terms of the underlying DNA sequence, incorporating DNA methylation information where available, fitting the model with the Expectation-Maximization algorithm. The model is tested on simulated data and real data pertaining to the histone mark H3K27me3. This thesis therefore straddles the fields of bioinformatics and machine learning. Bioinformatics is both plagued and blessed by the plethora of different techniques available for gathering data and their continual innovations. Each technique presents a unique challenge, and hence out-of-the-box machine learning techniques have had little success in solving biological problems. While I have focused on NGS data, the methods developed in this thesis are likely to be applicable to future technologies, such as Third Generation Sequencing methods, and the lessons learned in their adaptation will be informative for the next wave of computational challenges.
APA, Harvard, Vancouver, ISO, and other styles
7

Ebert, Anthony C. "Dynamic queueing networks: Simulation, estimation and prediction." Thesis, Queensland University of Technology, 2020. https://eprints.qut.edu.au/180771/1/Anthony_Ebert_Thesis.pdf.

Full text
Abstract:
Inspired by the problem of managing passenger flow in airport terminals, novel statistical approaches to simulation, estimation and prediction of these systems were developed. A simulation algorithm was developed with computational speed-ups of more than one hundred-fold. The computational improvement was leveraged to infer parameters governing a dynamic queueing system for the first time. Motivated by the original application, contributions to both functional data analysis as well as combined parameter and state inference were made.
APA, Harvard, Vancouver, ISO, and other styles
8

Rahman, Mohammad Mahfujur. "Deep domain adaptation and generalisation." Thesis, Queensland University of Technology, 2020. https://eprints.qut.edu.au/205619/1/Mohammad%20Mahfujur_Rahman_Thesis.pdf.

Full text
Abstract:
This thesis addresses a critical problem in computer vision of dealing with dataset bias between source and target environments. Variations in image data can arise from multiple factors including contrasts in picture quality (shading, brightness, colour, resolution, and occlusion), diverse backgrounds, distinct circumstances, changes in camera viewpoint, and implicit heterogeneity of the samples themselves. This research developed strategies to address this domain shift problem for the object recognition task. Several domain adaptation and generalization approaches based on deep neural networks were introduced to improve poor performance due to domain shift or domain bias.
APA, Harvard, Vancouver, ISO, and other styles
9

Gupta, Yash. "Model Extraction Defense using Modified Variational Autoencoder." Thesis, 2020. https://etd.iisc.ac.in/handle/2005/4430.

Full text
Abstract:
Machine Learning as a Service (MLaaS) exposes machine learning (ML) models that are trained on confidential datasets to users in the form of an Application Programming Interface (API). Since MLaaS models are deployed for commercial purposes, the API is available as a pay-per-query service. A malicious user or attacker can exploit these APIs to extract a close approximation of the MLaaS model by training a substitute model using only black-box query access to the API, in a process called model extraction. The attacker is restricted to extracting the MLaaS model using a limited query budget because of the paid service. The model extraction attack is invoked by firing queries that belong to a substitute dataset consisting of either (i) Synthetic Non-Problem Domain (SNPD), (ii) Synthetic Problem Domain (SPD), or (iii) Natural Non-Problem Domain (NNPD) data. In this work, we propose a novel defense framework against model extraction, using a hybrid anomaly detector composed of an encoder and a detector. In particular, we propose a modified Variational Autoencoder, VarDefend, which uses a specially designed loss function to separate the encodings of queries fired by malicious users from those of benign users. We consider two scenarios: (i) stateful defense, where the MLaaS provider stores the queries made by each client to discover any malicious pattern; (ii) stateless defense, where individual queries are discarded if they are flagged as out-of-distribution. Treating encoded queries from benign users as normal, one can use outlier detection models to identify encoded queries from malicious users in the stateless approach. For the stateful approach, a statistical test known as the Maximum Mean Discrepancy (MMD) is used to match the distribution of the encodings of the malicious queries against that of the in-distribution encoded samples.
In our experiments, we observed that our stateful defense mechanism can completely block one representative attack for each of the three types of substitute datasets, without raising a single false alarm against queries made by a benign user. The number of queries required to block an attack is much smaller than that required by the current state-of-the-art model extraction defense, PRADA. Further, our proposed approach can block NNPD queries that cannot be blocked by PRADA. Our stateless defense mechanism is useful against a group of colluding attackers without significantly impacting benign users. Our experiments demonstrate that, for the MNIST and Fashion MNIST datasets, the proposed stateless defense rejects more than 98% of the queries made by an attacker belonging to either the SNPD, SPD or NNPD datasets, while rejecting only about 0.05% of all the queries made by a benign user. Our experiments also demonstrate that the proposed approach makes the MLaaS model significantly more robust to adversarial examples crafted using the substitute model, by blocking transferability.
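The stateful defense described above relies on the standard two-sample MMD statistic: the kernel-mean distance between a batch of encoded queries and in-distribution samples. A minimal sketch of the biased squared-MMD estimator with an RBF kernel (the sample shapes and the `gamma` bandwidth are illustrative assumptions, not values taken from the thesis):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise squared Euclidean distances via ||x-y||^2 = ||x||^2 + ||y||^2 - 2 x.y
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * sq)

def mmd2_biased(X, Y, gamma=1.0):
    # Biased estimator of squared MMD:
    # mean k(x,x') + mean k(y,y') - 2 mean k(x,y)
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2.0 * rbf_kernel(X, Y, gamma).mean())

rng = np.random.default_rng(0)
same = mmd2_biased(rng.normal(0, 1, (200, 5)), rng.normal(0, 1, (200, 5)))
diff = mmd2_biased(rng.normal(0, 1, (200, 5)), rng.normal(3, 1, (200, 5)))
assert diff > same  # a shifted distribution yields a larger MMD
```

In a defense setting, a large value of the statistic on a client's accumulated query batch would flag that client as out-of-distribution.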
10

Diu, Michael. "Image Analysis Applications of the Maximum Mean Discrepancy Distance Measure." Thesis, 2013. http://hdl.handle.net/10012/7558.

Abstract:
The need to quantify distance between two groups of objects is prevalent throughout the signal processing world. The difference of group means computed using the Euclidean, or L2 distance, is one of the predominant distance measures used to compare feature vectors and groups of vectors, but many problems arise with it when high data dimensionality is present. Maximum mean discrepancy (MMD) is a recent unsupervised kernel-based pattern recognition method which may improve differentiation between two distinct populations over many commonly used methods such as the difference of means, when paired with the proper feature representations and kernels. MMD-based distance computation combines many powerful concepts from the machine learning literature, such as data distribution-leveraging similarity measures and kernel methods for machine learning. Due to this heritage, we posit that dissimilarity-based classification and changepoint detection using MMD can lead to enhanced separation between different populations. To test this hypothesis, we conduct studies comparing MMD and the difference of means in two subareas of image analysis and understanding: first, to detect scene changes in video in an unsupervised manner, and secondly, in the biomedical imaging field, using clinical ultrasound to assess tumor response to treatment. We leverage effective computer vision data descriptors, such as the bag-of-visual-words and sparse combinations of SIFT descriptors, and choose from an assessment of several similarity kernels (e.g. Histogram Intersection, Radial Basis Function) in order to engineer useful systems using MMD. Promising improvements over the difference of means, measured primarily using precision/recall for scene change detection, and k-nearest neighbour classification accuracy for tumor response assessment, are obtained in both applications.
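The thesis's comparison between MMD and the difference of group means has a simple algebraic anchor: with a linear kernel, the biased squared MMD reduces exactly to the squared Euclidean distance between the two sample means, so MMD with richer kernels strictly generalizes the difference-of-means measure. A small numpy check (the data shapes are arbitrary stand-ins):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 3))
Y = rng.normal(size=(80, 3))

# Biased squared MMD with the linear kernel k(x, y) = x.y
Kxx = X @ X.T
Kyy = Y @ Y.T
Kxy = X @ Y.T
mmd2_linear = Kxx.mean() + Kyy.mean() - 2.0 * Kxy.mean()

# Squared Euclidean (L2) distance between the group means
diff_means = np.sum((X.mean(axis=0) - Y.mean(axis=0)) ** 2)

assert np.allclose(mmd2_linear, diff_means)
```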

Book chapters on the topic "Maximum Mean Discrepancy (MMD)"

1

Slimene, Alya, and Ezzeddine Zagrouba. "Kernel Maximum Mean Discrepancy for Region Merging Approach." In Computer Analysis of Images and Patterns, 475–82. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-40246-3_59.

2

Diu, Michael, Mehrdad Gangeh, and Mohamed S. Kamel. "Unsupervised Visual Changepoint Detection Using Maximum Mean Discrepancy." In Lecture Notes in Computer Science, 336–45. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-39094-4_38.

3

Yang, Pengcheng, Fuli Luo, Shuangzhi Wu, Jingjing Xu, and Dongdong Zhang. "Learning Unsupervised Word Mapping via Maximum Mean Discrepancy." In Natural Language Processing and Chinese Computing, 290–302. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-32233-5_23.

4

Luna-Naranjo, D. F., J. V. Hurtado-Rincon, D. Cárdenas-Peña, V. H. Castro, H. F. Torres, and G. Castellanos-Dominguez. "EEG Channel Relevance Analysis Using Maximum Mean Discrepancy on BCI Systems." In Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications, 820–28. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-13469-3_95.

5

Zhu, Xiaofeng, Kim-Han Thung, Ehsan Adeli, Yu Zhang, and Dinggang Shen. "Maximum Mean Discrepancy Based Multiple Kernel Learning for Incomplete Multimodality Neuroimaging Data." In Medical Image Computing and Computer Assisted Intervention − MICCAI 2017, 72–80. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66179-7_9.

6

Wickstrøm, Kristoffer, J. Emmanuel Johnson, Sigurd Løkse, Gustau Camps-Valls, Karl Øyvind Mikalsen, Michael Kampffmeyer, and Robert Jenssen. "The Kernelized Taylor Diagram." In Communications in Computer and Information Science, 125–31. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-17030-0_10.

Abstract:
This paper presents the kernelized Taylor diagram, a graphical framework for visualizing similarities between data populations. The kernelized Taylor diagram builds on the widely used Taylor diagram, which is used to visualize similarities between populations. However, the Taylor diagram has several limitations, such as not capturing non-linear relationships and sensitivity to outliers. To address such limitations, we propose the kernelized Taylor diagram. Our proposed kernelized Taylor diagram is capable of visualizing similarities between populations with minimal assumptions about the data distributions. The kernelized Taylor diagram relates the maximum mean discrepancy and the kernel mean embedding in a single diagram, a construction that, to the best of our knowledge, has not been devised prior to this work. We believe that the kernelized Taylor diagram can be a valuable tool in data visualization.

Conference papers on the topic "Maximum Mean Discrepancy (MMD)"

1

Zhang, Wen, and Dongrui Wu. "Discriminative Joint Probability Maximum Mean Discrepancy (DJP-MMD) for Domain Adaptation." In 2020 International Joint Conference on Neural Networks (IJCNN). IEEE, 2020. http://dx.doi.org/10.1109/ijcnn48605.2020.9207365.

2

Xu, Zhiwei, Dapeng Li, Yunpeng Bai, and Guoliang Fan. "MMD-MIX: Value Function Factorisation with Maximum Mean Discrepancy for Cooperative Multi-Agent Reinforcement Learning." In 2021 International Joint Conference on Neural Networks (IJCNN). IEEE, 2021. http://dx.doi.org/10.1109/ijcnn52387.2021.9533636.

3

Liu, Qiao, and Hui Xue. "Adversarial Spectral Kernel Matching for Unsupervised Time Series Domain Adaptation." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/378.

Abstract:
Unsupervised domain adaptation (UDA) has received increasing attention since it does not require labels in the target domain. Most existing UDA methods learn domain-invariant features by minimizing a discrepancy distance computed by a certain metric between domains. However, these discrepancy-based methods cannot be robustly applied to unsupervised time series domain adaptation (UTSDA). That is because the discrepancy metrics in these methods contain only low-order and local statistics, which have limited expressive power for time series distributions and therefore result in failure of domain matching. In fact, real-world time series always follow non-local distributions, i.e., with non-stationary and non-monotonic statistics. In this paper, we propose an Adversarial Spectral Kernel Matching (AdvSKM) method, in which a hybrid spectral kernel network is specifically designed as the inner kernel to reform the Maximum Mean Discrepancy (MMD) metric for UTSDA. The hybrid spectral kernel network can precisely characterize non-stationary and non-monotonic statistics in time series distributions. Embedding the hybrid spectral kernel network into MMD not only guarantees a precise discrepancy metric but also benefits domain matching. Besides, the differentiable architecture of the spectral kernel network enables adversarial kernel learning, which brings more discriminative expression for discrepancy matching. The results of extensive experiments on several real-world UTSDA tasks verify the effectiveness of our proposed method.
4

Li, Yanghao, Naiyan Wang, Jiaying Liu, and Xiaodi Hou. "Demystifying Neural Style Transfer." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/310.

Abstract:
Neural Style Transfer has recently demonstrated very exciting results that have attracted attention in both academia and industry. Despite the impressive results, the principle of neural style transfer, especially why the Gram matrices can represent style, remains unclear. In this paper, we propose a novel interpretation of neural style transfer by treating it as a domain adaptation problem. Specifically, we theoretically show that matching the Gram matrices of feature maps is equivalent to minimizing the Maximum Mean Discrepancy (MMD) with the second-order polynomial kernel. Thus, we argue that the essence of neural style transfer is to match the feature distributions between the style images and the generated images. To further support our standpoint, we experiment with several other distribution alignment methods and achieve appealing results. We believe this novel interpretation connects these two important research fields and could enlighten future research.
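The paper's central identity can be verified numerically: treating the N columns of a C×N feature map as samples, the Frobenius-norm Gram loss equals N² times the biased squared MMD with the kernel k(x, y) = (xᵀy)². A sketch with random stand-in feature maps (the sizes and the random data are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
C, N = 4, 50                   # channels, spatial positions (hypothetical sizes)
F = rng.normal(size=(C, N))    # "generated" feature map, columns = samples
S = rng.normal(size=(C, N))    # "style" feature map

# Gram-matrix style loss (unnormalised Frobenius norm)
gram_loss = np.sum((F @ F.T - S @ S.T) ** 2)

# Biased squared MMD with second-order polynomial kernel k(x, y) = (x.y)^2
k = lambda A, B: (A.T @ B) ** 2
mmd2 = k(F, F).mean() + k(S, S).mean() - 2.0 * k(F, S).mean()

assert np.allclose(gram_loss, N**2 * mmd2)
```

The identity follows from tr((FFᵀ − SSᵀ)²) = ‖FᵀF‖²_F + ‖SᵀS‖²_F − 2‖FᵀS‖²_F, whose entries are exactly the squared inner products the polynomial kernel sums over.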
5

Qian, Sheng, Guanyue Li, Wen-Ming Cao, Cheng Liu, Si Wu, and Hau San Wong. "Improving representation learning in autoencoders via multidimensional interpolation and dual regularizations." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/453.

Abstract:
Autoencoders enjoy a remarkable ability to learn data representations. Research on autoencoders shows that the effectiveness of data interpolation can reflect the performance of representation learning. However, existing interpolation methods in autoencoders do not have enough capability to traverse a possible region between two datapoints on a data manifold, and the distribution of interpolated latent representations is not considered. To address these issues, we aim to fully exploit the potential of data interpolation and further improve representation learning in autoencoders. Specifically, we propose multidimensional interpolation to increase the capability of data interpolation by randomly setting interpolation coefficients for each dimension of the latent representations. In addition, we regularize autoencoders in both the latent and the data spaces, by imposing a prior on latent representations in the Maximum Mean Discrepancy (MMD) framework and encouraging generated datapoints to be realistic in the Generative Adversarial Network (GAN) framework. Compared to representative models, our proposed model has empirically shown that representation learning exhibits better performance on downstream tasks across multiple benchmarks.
6

Kim, Beomjoon, and Joelle Pineau. "Maximum Mean Discrepancy Imitation Learning." In Robotics: Science and Systems 2013. Robotics: Science and Systems Foundation, 2013. http://dx.doi.org/10.15607/rss.2013.ix.038.

7

Cai, Mingzhi, Baoguo Wei, Yue Zhang, Xu Li, and Lixin Li. "Maximum Mean Discrepancy Adversarial Active Learning." In 2022 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC). IEEE, 2022. http://dx.doi.org/10.1109/icspcc55723.2022.9984505.

8

Zhang, Wei, Brian Barr, and John Paisley. "Understanding Counterfactual Generation using Maximum Mean Discrepancy." In ICAIF '22: 3rd ACM International Conference on AI in Finance. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3533271.3561759.

9

Lin, Weiwei, Man-Wai Mak, Longxin Li, and Jen-Tzung Chien. "Reducing Domain Mismatch by Maximum Mean Discrepancy Based Autoencoders." In Odyssey 2018 The Speaker and Language Recognition Workshop. ISCA: ISCA, 2018. http://dx.doi.org/10.21437/odyssey.2018-23.

10

Tian, Yi, Qiuqi Ruan, and Gaoyun An. "Zero-shot Action Recognition via Empirical Maximum Mean Discrepancy." In 2018 14th IEEE International Conference on Signal Processing (ICSP). IEEE, 2018. http://dx.doi.org/10.1109/icsp.2018.8652306.
