A selection of scholarly literature on the topic "Discriminative classifier"

Format your source according to APA, MLA, Chicago, Harvard, and other citation styles


Browse the lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Discriminative classifier".

Next to each work in the list of references you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference for the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, and so on.

You can also download the full text of the scholarly publication in .pdf format and read its online abstract, provided the relevant details are available in the source's metadata.

Journal articles on the topic "Discriminative classifier"

1

Tan, Alan W. C., M. V. C. Rao, and B. S. Daya Sagar. "A Discriminative Signal Subspace Speech Classifier." IEEE Signal Processing Letters 14, no. 2 (February 2007): 133–36. http://dx.doi.org/10.1109/lsp.2006.882091.

2

Hassan, Anthony Rotimi, Rasaki Olawale Olanrewaju, Queensley C. Chukwudum, Sodiq Adejare Olanrewaju, and S. E. Fadugba. "Comparison Study of Generative and Discriminative Models for Classification of Classifiers." International Journal of Mathematics and Computers in Simulation 16 (June 28, 2022): 76–87. http://dx.doi.org/10.46300/9102.2022.16.12.

Abstract:
In classifier comparison studies, researchers have long been concerned with which of the existing generative and discriminative models to use in practice when analysing attribute data. This makes it necessary to give an in-depth, systematic, and interconnected classification of generative and discriminative models. The models considered, namely logistic and multinomial logistic regression, Linear Discriminant Analysis (LDA) (for attributes P=1 and P>1), Quadratic Discriminant Analysis (QDA), and Naïve Bayes, were dealt with thoroughly, analytically and mathematically. A step-by-step empirical analysis of these models was carried out on the chemical analysis of wines grown in one region of Italy and derived from three different cultivars (the three types of wine constitute the three classes). The Naïve Bayes classifier set the pace, driven by its a priori probabilities.
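For readers who want to see this comparison in runnable form, here is a minimal, hypothetical scikit-learn sketch (not code from the cited paper) that fits the named models to the UCI wine data derived from three cultivars; the train/test split and the default hyperparameters are assumptions made purely for illustration.

```python
# Hypothetical sketch: comparing the classifiers named in the abstract on the
# UCI wine data (three cultivars). Split and hyperparameters are assumptions.
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                            QuadraticDiscriminantAnalysis)
from sklearn.naive_bayes import GaussianNB

X, y = load_wine(return_X_y=True)  # 13 chemical attributes, 3 cultivars
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y,
                                          random_state=0)

models = {
    "Multinomial logistic regression": LogisticRegression(max_iter=5000),
    "LDA": LinearDiscriminantAnalysis(),
    "QDA": QuadraticDiscriminantAnalysis(),
    "Naive Bayes": GaussianNB(),
}
for name, model in models.items():
    acc = model.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: test accuracy = {acc:.3f}")
```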
3

Hu, Kai-Jun, He-Feng Yin, and Jun Sun. "Discriminative non-negative representation based classifier for image recognition." Journal of Algorithms & Computational Technology 15 (January 2021): 174830262110449. http://dx.doi.org/10.1177/17483026211044922.

Abstract:
During the past decade, representation based classification methods have received considerable attention in the pattern recognition community. The recently proposed non-negative representation based classifier achieved superb recognition results in diverse pattern classification tasks. Unfortunately, the discriminative information of the training data is not fully exploited in the non-negative representation based classifier, which undermines its classification performance in practical applications. To address this problem, we introduce a decorrelation regularizer into the formulation of the non-negative representation based classifier and propose a discriminative non-negative representation based classifier for pattern classification. The decorrelation regularizer is able to reduce the correlation of representation results of different classes, thus promoting the competition among them. Experimental results on benchmark datasets validate the efficacy of the proposed discriminative non-negative representation based classifier, and it can outperform some state-of-the-art deep learning based methods. The source code of our proposed discriminative non-negative representation based classifier is accessible at https://github.com/yinhefeng/DNRC.
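As a point of reference, the sketch below shows the plain non-negative representation based classifier that this paper builds on: a test sample is coded over all training samples with non-negative least squares and assigned to the class with the smallest class-wise reconstruction residual. The decorrelation regularizer proposed in the paper is deliberately not reproduced here, and the toy data shapes are assumptions.

```python
# Hypothetical sketch of a plain non-negative representation based classifier
# (the baseline, NOT the proposed DNRC; the decorrelation regularizer is omitted).
import numpy as np
from scipy.optimize import nnls

def nrc_predict(X_train, y_train, x_test):
    """Assign x_test to the class whose training samples best reconstruct it
    under a non-negative coding over the whole training set."""
    A = X_train.T                        # columns are training samples
    code, _ = nnls(A, x_test)            # non-negative representation coefficients
    classes = np.unique(y_train)
    residuals = []
    for c in classes:
        mask = (y_train == c)
        recon = A[:, mask] @ code[mask]  # reconstruction from class-c atoms only
        residuals.append(np.linalg.norm(x_test - recon))
    return classes[int(np.argmin(residuals))]

# Toy usage with random data (shapes are placeholders).
rng = np.random.default_rng(0)
X_train = rng.random((20, 8))            # 20 training samples, 8 features
y_train = np.repeat([0, 1], 10)
print(nrc_predict(X_train, y_train, rng.random(8)))
```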
4

Shi, Hong-bo, Ya-qin Liu, and Ai-jun Li. "Discriminative parameter learning of Bayesian network classifier." Journal of Computer Applications 31, no. 4 (June 9, 2011): 1074–78. http://dx.doi.org/10.3724/sp.j.1087.2011.01074.

5

Devi, Rajkumari Bidyalakshmi, Yambem Jina Chanu, and Khumanthem Manglem Singh. "Incremental visual tracking via sparse discriminative classifier." Multimedia Systems 27, no. 2 (January 18, 2021): 287–99. http://dx.doi.org/10.1007/s00530-020-00748-4.

6

Tang, Hui, and Kui Jia. "Discriminative Adversarial Domain Adaptation." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 5940–47. http://dx.doi.org/10.1609/aaai.v34i04.6054.

Abstract:
Given labeled instances on a source domain and unlabeled ones on a target domain, unsupervised domain adaptation aims to learn a task classifier that classifies target instances well. Recent advances rely on domain-adversarial training of deep networks to learn domain-invariant features. However, due to mode collapse induced by the separate design of task and domain classifiers, these methods are limited in aligning the joint distributions of feature and category across domains. To overcome this, we propose a novel adversarial learning method termed Discriminative Adversarial Domain Adaptation (DADA). Based on an integrated category and domain classifier, DADA has a novel adversarial objective that encourages a mutually inhibitory relation between category and domain predictions for any input instance. We show that, under practical conditions, it defines a minimax game that can promote joint distribution alignment. Beyond traditional closed set domain adaptation, we also extend DADA to the extremely challenging settings of partial and open set domain adaptation. Experiments show the efficacy of the proposed methods, and we achieve new state-of-the-art results for all three settings on benchmark datasets.
7

Ropelewska, Ewa. "The Application of Computer Image Analysis Based on Textural Features for the Identification of Barley Kernels Infected with Fungi of the Genus Fusarium." Agricultural Engineering 22, no. 3 (September 1, 2018): 49–56. http://dx.doi.org/10.1515/agriceng-2018-0026.

Abstract:
The aim of this study was to develop discrimination models based on textural features for the identification of barley kernels infected with fungi of the genus Fusarium and healthy kernels. Infected barley kernels with altered shape and discoloration and healthy barley kernels were scanned. Textures were computed using MaZda software. The kernels were classified as infected and healthy with the use of the WEKA application. In the case of the RGB, Lab and XYZ color models, the classification accuracies based on 10 selected textures with the highest discriminative power ranged from 95 to 100%. The lowest result (95%) was noted in the XYZ color model and the Multi Class Classifier for the textures selected using the Ranker method and the OneR attribute evaluator. Selected classifiers were characterized by 100% accuracy in the case of all color models and selection methods. The highest number of 100% results was obtained for the Lab color model with the Naive Bayes, LDA, IBk, Multi Class Classifier and J48 classifiers in the Best First selection method with the CFS subset evaluator.
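A hypothetical scikit-learn analogue of the pipeline described in this abstract (keep the ten most discriminative texture features, then compare several classifiers) could look as follows; the stand-in data, the ANOVA selection score and the model choices are assumptions rather than the study's exact WEKA configuration.

```python
# Hypothetical analogue of the WEKA workflow: select 10 texture features,
# then compare a few classifiers with cross-validation. Stand-in data only.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Stand-in for the real data: 200 kernels x 50 texture features, 2 classes.
X, y = make_classification(n_samples=200, n_features=50, n_informative=10,
                           random_state=0)

for name, clf in [("Naive Bayes", GaussianNB()),
                  ("LDA", LinearDiscriminantAnalysis()),
                  ("kNN (IBk analogue)", KNeighborsClassifier()),
                  ("Decision tree (J48 analogue)", DecisionTreeClassifier())]:
    pipe = make_pipeline(SelectKBest(f_classif, k=10), clf)
    acc = cross_val_score(pipe, X, y, cv=10).mean()
    print(f"{name}: 10-fold CV accuracy = {acc:.2f}")
```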
8

Ćwiklińska-Jurkowska, Małgorzata M. "Visualization and Comparison of Single and Combined Parametric and Nonparametric Discriminant Methods for Leukemia Type Recognition Based on Gene Expression." Studies in Logic, Grammar and Rhetoric 43, no. 1 (December 1, 2015): 73–99. http://dx.doi.org/10.1515/slgr-2015-0043.

Abstract:
A gene expression data set, containing 3051 genes and 38 tumor mRNA training samples, from a leukemia microarray study, was used for differentiation between ALL and AML groups of leukemia. In this paper, single and combined discriminant methods were applied on the basis of the selected few most discriminative variables according to Wilks’ lambda or the leave-one-out error of first nearest neighbor classifier. For the linear, quadratic, regularized, uncorrelated discrimination, kernel, nearest neighbor and naive Bayesian classifiers, two-dimensional graphs of the boundaries and discriminant functions for diagnostics are presented. Cross-validation and leave-one-out errors were used as measures of classifier performance to support diagnosis coming from this genomic data set. A small number of best discriminating genes, from two to ten, was sufficient to build discriminant methods of good performance. Especially useful were nearest neighbor methods. The results presented herein were comparable with outcomes obtained by other authors for larger numbers of applied genes. The linear, quadratic, uncorrelated Bayesian and regularized discrimination methods were subjected to bagging or boosting in order to assess the accuracy of the fusion. A conclusion drawn from the analysis was that resampling ensembles were not beneficial for two-dimensional discrimination.
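The evaluation protocol described here (select a handful of top-ranked genes, then estimate classifier performance by leave-one-out cross-validation) can be sketched as below. The Golub leukemia data are not bundled with scikit-learn, so a synthetic stand-in of the same shape is used; the code illustrates the protocol, not the reported results.

```python
# Hypothetical sketch of the protocol: feature selection inside the
# cross-validation loop, leave-one-out error for several classifiers.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score, LeaveOneOut
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                            QuadraticDiscriminantAnalysis)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for 38 samples x 3051 genes with two classes (ALL vs AML).
X, y = make_classification(n_samples=38, n_features=3051, n_informative=10,
                           random_state=1)

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("QDA", QuadraticDiscriminantAnalysis()),
                  ("1-NN", KNeighborsClassifier(n_neighbors=1)),
                  ("Naive Bayes", GaussianNB())]:
    pipe = make_pipeline(SelectKBest(f_classif, k=5), clf)  # 5 "best genes"
    err = 1 - cross_val_score(pipe, X, y, cv=LeaveOneOut()).mean()
    print(f"{name}: leave-one-out error = {err:.3f}")
```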
9

Prevost, Lionel, Loïc Oudot, Alvaro Moises, Christian Michel-Sendis, and Maurice Milgram. "Hybrid generative/discriminative classifier for unconstrained character recognition." Pattern Recognition Letters 26, no. 12 (September 2005): 1840–48. http://dx.doi.org/10.1016/j.patrec.2005.03.005.

10

Ahmadi, Ehsan, Zohreh Azimifar, Maryam Shams, Mahmoud Famouri, and Mohammad Javad Shafiee. "Document image binarization using a discriminative structural classifier." Pattern Recognition Letters 63 (October 2015): 36–42. http://dx.doi.org/10.1016/j.patrec.2015.06.008.


Dissertations on the topic "Discriminative classifier"

1

Masip, Rodó David. "Face Classification Using Discriminative Features and Classifier Combination." Doctoral thesis, Universitat Autònoma de Barcelona, 2005. http://hdl.handle.net/10803/3051.

Abstract:
As technology evolves, new applications dealing with face classification appear. In pattern recognition, faces are usually seen as points in a high-dimensional space defined by their pixel values. This approach must deal with several problems, such as the curse of dimensionality, partial occlusions, and local changes in illumination. Traditionally, only the internal features of face images have been used for classification, usually after a feature extraction step. Feature extraction techniques reduce the influence of these problems as well as the noise inherent in natural images, while learning invariant characteristics of face images. In the first part of this thesis some internal feature extraction methods are presented: Principal Component Analysis (PCA), Independent Component Analysis (ICA), Non-Negative Matrix Factorization (NMF), and Fisher Linear Discriminant Analysis (FLD), each of which makes some kind of assumption about the data to classify. The main contribution of our work is a family of non-parametric feature extraction techniques based on the AdaBoost algorithm. Our method makes no assumptions about the data to classify, and incrementally builds the projection matrix taking the most difficult samples into account.
On the other hand, in the second part of this thesis we explore the role of external features in face classification, and present a method for extracting an aligned feature set from external face information that can be combined with the classic internal features, improving the overall performance of the face classification task.
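As a loose illustration of the classical feature extraction step discussed above (and not of the thesis's AdaBoost-based contribution), the sketch below runs PCA and NMF in front of a simple classifier on the Olivetti faces; the component counts and the 1-NN classifier are assumptions.

```python
# Hypothetical sketch: classical feature extraction (PCA, NMF) feeding a simple
# classifier on the Olivetti faces (downloaded on first use by scikit-learn).
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA, NMF
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

faces = fetch_olivetti_faces()           # 400 images, 40 subjects, 64x64 pixels
X_tr, X_te, y_tr, y_te = train_test_split(faces.data, faces.target,
                                          test_size=0.25, stratify=faces.target,
                                          random_state=0)

for name, extractor in [("PCA", PCA(n_components=60, random_state=0)),
                        ("NMF", NMF(n_components=60, max_iter=1000,
                                    random_state=0))]:
    pipe = make_pipeline(extractor, KNeighborsClassifier(n_neighbors=1))
    acc = pipe.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name} + 1-NN: test accuracy = {acc:.2f}")
```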
2

Georgatzis, Konstantinos. "Dynamical probabilistic graphical models applied to physiological condition monitoring." Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/28838.

Abstract:
Intensive Care Units (ICUs) host patients in critical condition who are being monitored by sensors which measure their vital signs. These vital signs carry information about a patient’s physiology and can have a very rich structure at fine resolution levels. The task of analysing these biosignals for the purposes of monitoring a patient’s physiology is referred to as physiological condition monitoring. Physiological condition monitoring of patients in ICUs is of critical importance as their health is subject to a number of events of interest. For the purposes of this thesis, the overall task of physiological condition monitoring is decomposed into the sub-tasks of modelling a patient’s physiology a) under the effect of physiological or artifactual events and b) under the effect of drug administration. The first sub-task is concerned with modelling artifact (such as the taking of blood samples, suction events etc.), and physiological episodes (such as bradycardia), while the second sub-task is focussed on modelling the effect of drug administration on a patient’s physiology. The first contribution of this thesis is the formulation, development and validation of the Discriminative Switching Linear Dynamical System (DSLDS) for the first sub-task. The DSLDS is a discriminative model which identifies the state-of-health of a patient given their observed vital signs using a discriminative probabilistic classifier, and then infers their underlying physiological values conditioned on this status. It is demonstrated on two real-world datasets that the DSLDS is able to outperform an alternative, generative approach in most cases of interest, and that an a-mixture of the two models achieves higher performance than either of the two models separately. The second contribution of this thesis is the formulation, development and validation of the Input-Output Non-Linear Dynamical System (IO-NLDS) for the second sub-task. The IO-NLDS is a non-linear dynamical system for modelling the effect of drug infusions on the vital signs of patients. More specifically, in this thesis the focus is on modelling the effect of the widely used anaesthetic drug Propofol on a patient’s monitored depth of anaesthesia and haemodynamics. A comparison of the IO-NLDS with a model derived from the Pharmacokinetics/Pharmacodynamics (PK/PD) literature on a real-world dataset shows that significant improvements in predictive performance can be provided without requiring the incorporation of expert physiological knowledge.
3

Klautau, Aldebaro. "Speech recognition using discriminative classifiers /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2003. http://wwwlib.umi.com/cr/ucsd/fullcit?p3091208.

4

Xue, Jinghao. "Aspects of generative and discriminative classifiers." Thesis, Connect to e-thesis, 2008. http://theses.gla.ac.uk/272/.

Abstract:
Thesis (Ph.D.) - University of Glasgow, 2008.
Ph.D. thesis submitted to the Department of Statistics, Faculty of Information and Mathematical Sciences, University of Glasgow, 2008. Includes bibliographical references. Print version also available.
5

Pernot, Etienne. "Choix d'un classifieur en discrimination." Paris 9, 1994. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=1994PA090014.

Abstract:
In this thesis we address the problem of determining the classifier best suited to solving a given discrimination problem. The choice of classifier is already guided by operational constraints, but beyond these constraints, and once the classifier has been configured on a training set, it is the classifier's generalization rate (or success rate) that characterizes its performance. This rate, generally unknown, is estimated on a validation (generalization) set. The estimate therefore depends on the discrimination problem under study, the classifier used, the training set, and the validation set. These dependencies are studied either theoretically or experimentally on a dozen different classifiers, both neural and classical. The validity of comparing two classifiers through estimates of their generalization rates is also examined, and we obtain guidance on the relative sizes to give the training and validation sets. For the purpose of comparing classifiers, Neuroclasse, a software tool that makes it possible to test a large number of different classifiers, was developed and is described in detail. Neuroclasse also integrates a system, implemented as an expert system, for automatically determining the classifier that yields the best generalization rate estimated on a fixed validation set. Tested on various databases, this system gives good results, but it reveals a phenomenon of learning the validation set, caused by successively testing many classifiers on the same validation set. We study this phenomenon experimentally and give an order of magnitude for the number of classifiers that can be tested while limiting this effect.
6

Katz, Marcel [Verfasser]. "Discriminative classifiers for speaker Recognition / Marcel Katz." Saarbrücken : Südwestdeutscher Verlag für Hochschulschriften, 2009. http://www.vdm-verlag.de.

7

Abdallah, Hicham. "Application de l'analyse relationnelle pour classifier descripteurs et modalites en mode discrimination." Paris 6, 1996. http://www.theses.fr/1996PA066001.

Abstract:
The process presented in the thesis is built on four fundamental levels: (1) classification of the space of descriptors or variables in order to reduce and tighten this space, by introducing new structures (through known association criteria) that provide bounds or thresholds beyond which two descriptors or variables are considered similar; an important point here is that everything can be expressed in terms of the Rand index or the chi-square, and this study brings out a notion of severity for each criterion, which allows the most suitable criteria to be chosen according to the context; (2) once the partition of the descriptors has been found, the descriptors of each class are aggregated into a consensus descriptor; (3) classification of the modalities, both for structuring and for the discrimination phase proper, where grouping modalities allows a richer interpretation of how concepts are characterized; this was obtained, among other things, after questioning the soundness of the notion of referent for similarity indices, together with a characterization of their metric properties, and this study likewise allows the most suitable indices to be chosen according to the context; (4) refinement of the concepts.
8

Dastile, Xolani Collen. "Improved tree species discrimination at leaf level with hyperspectral data combining binary classifiers." Thesis, Rhodes University, 2011. http://hdl.handle.net/10962/d1002807.

Abstract:
The purpose of the present thesis is to show that hyperspectral data can be used for discrimination between different tree species. The data set used in this study contains the hyperspectral measurements of leaves of seven savannah tree species. The data is high-dimensional and shows large within-class variability combined with small between-class variability, which makes discrimination between the classes challenging. We employ two classification methods: k-nearest neighbour and feed-forward neural networks. For both methods, direct 7-class prediction results in high misclassification rates. However, binary classification works better. We constructed binary classifiers for all possible binary classification problems and combine them with Error Correcting Output Codes. We show especially that the use of 1-nearest neighbour binary classifiers results in no improvement compared to a direct 1-nearest neighbour 7-class predictor. In contrast to this negative result, the use of neural networks binary classifiers improves accuracy by 10% compared to a direct neural networks 7-class predictor, and error rates become acceptable. This can be further improved by choosing only suitable binary classifiers for combination.
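The combination strategy described above, many binary classifiers fused through Error Correcting Output Codes, can be sketched with scikit-learn's OutputCodeClassifier. The seven-class stand-in data, the small neural-network base learner and the code size below are assumptions, not the thesis's spectral measurements or exact setup.

```python
# Hypothetical sketch: binary classifiers combined via Error Correcting Output
# Codes (ECOC), with a small neural network as the base binary learner.
from sklearn.datasets import make_classification
from sklearn.multiclass import OutputCodeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Stand-in for leaf-level spectra: 350 samples x 200 "bands", 7 species.
X, y = make_classification(n_samples=350, n_features=200, n_informative=20,
                           n_classes=7, random_state=0)

base = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0)
ecoc = OutputCodeClassifier(base, code_size=2.0, random_state=0)
print("ECOC(MLP) 5-fold CV accuracy:", cross_val_score(ecoc, X, y, cv=5).mean())
```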
9

Rüther, Johannes. "Navigating Deep Classifiers : A Geometric Study Of Connections Between Adversarial Examples And Discriminative Features In Deep Neural Networks." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-291775.

Abstract:
Although deep networks are powerful and effective in numerous applications, their high vulnerability to adversarial perturbations remains a critical limitation in domains such as security, personalized medicine or autonomous systems. While the sensitivity to adversarial perturbations is generally viewed as a bug of deep classifiers, recent research suggests that they are actually a manifestation of non-robust features that deep classifiers exploit for predictive accuracy. In this work, we therefore systematically compute and analyze these perturbations to understand how they relate to discriminative features that models use. Most of the insights obtained in this work take a geometrical perspective on classifiers, specifically the location of decision boundaries in the vicinity of samples. Perturbations that successfully flip classification decisions are conceived as directions in which samples can be moved to transition into other classification regions. Thereby we reveal that navigating classification spaces is surprisingly simple: Any sample can be moved into a target region within a small distance by following a single direction extracted from adversarial perturbations. Moreover, we reveal that for simple data sets such as MNIST, discriminative features used by deep classifiers with standard training are indeed composed of elements found in adversarial examples. Finally, our results also demonstrate that adversarial training fundamentally changes classifier geometry in the vicinity of samples, yielding more diverse and complex decision boundaries.
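For concreteness, the snippet below is a generic, hypothetical sketch of computing an adversarial perturbation with the fast gradient sign method for a small untrained network; it only illustrates the kind of perturbation analysed in the thesis, not its exact procedure, and the model, input and epsilon are placeholders.

```python
# Hypothetical FGSM sketch: perturb an input in the direction that increases the
# classification loss for its (assumed) label. Model and data are placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
x = torch.rand(1, 784)                    # stand-in for one flattened MNIST image
y = torch.tensor([3])                     # its assumed label
eps = 0.1                                 # perturbation budget (assumption)

x_adv = x.clone().requires_grad_(True)
loss = nn.functional.cross_entropy(model(x_adv), y)
loss.backward()
x_adv = (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

with torch.no_grad():
    print("clean prediction:", model(x).argmax(1).item(),
          "| adversarial prediction:", model(x_adv).argmax(1).item())
```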
10

Musayeva, Khadija. "Generalization Performance of Margin Multi-category Classifiers." Thesis, Université de Lorraine, 2019. http://www.theses.fr/2019LORR0096/document.

Abstract:
This thesis deals with the theory of margin multi-category classification, and is based on the statistical learning theory founded by Vapnik and Chervonenkis. We are interested in deriving generalization bounds with explicit dependencies on the number C of categories, the sample size m and the margin parameter gamma, when the loss function considered is a Lipschitz continuous margin loss function. Generalization bounds rely on the empirical performance of the classifier as well as its "capacity". In this work, the following scale-sensitive capacity measures are considered: the Rademacher complexity, the covering numbers and the fat-shattering dimension. Our main contributions are obtained under the assumption that the classes of component functions implemented by a classifier have polynomially growing fat-shattering dimensions and that the component functions are independent. In the context of the pathway of Mendelson, which relates the Rademacher complexity to the covering numbers and the latter to the fat-shattering dimension, we study the impact that decomposing at the level of one of these capacity measures has on the dependencies on C, m and gamma. In particular, we demonstrate that the dependency on C can be substantially improved over the state of the art if the decomposition is postponed to the level of the metric entropy or the fat-shattering dimension. On the other hand, this negatively impacts the rate of convergence (the dependency on m), an indication that optimizing the dependencies on the three basic parameters amounts to looking for a trade-off.
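For orientation, bounds of the kind studied in this line of work typically take the following textbook form: with probability at least $1-\delta$ over an $m$-sample, every $f$ in the class $\mathcal{F}$ satisfies

$$
R(f) \;\le\; \widehat{R}_{\gamma}(f) \;+\; \frac{2}{\gamma}\,\widehat{\mathfrak{R}}_m(\mathcal{F}) \;+\; 3\sqrt{\frac{\ln(2/\delta)}{2m}},
$$

where $R(f)$ is the risk, $\widehat{R}_{\gamma}(f)$ the empirical margin risk at margin $\gamma$, and $\widehat{\mathfrak{R}}_m(\mathcal{F})$ the empirical Rademacher complexity of $\mathcal{F}$. This generic form is given only to show how the empirical term, the capacity term and the confidence term interact; it is not the thesis's result, whose contribution concerns precisely how the number of categories C enters the capacity term under different decompositions.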

Books on the topic "Discriminative classifier"

1

Baillo, Amparo, Antonio Cuevas, and Ricardo Fraiman. Classification methods for functional data. Edited by Frédéric Ferraty and Yves Romain. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780199568444.013.10.

Abstract:
This article reviews the literature concerning supervised and unsupervised classification of functional data. It first explains the meaning of unsupervised classification vs. supervised classification before discussing the supervised classification problem in the infinite-dimensional case, showing that its formal statement generally coincides with that of discriminant analysis in the classical multivariate case. It then considers the optimal classifier and plug-in rules, empirical risk and empirical minimization rules, linear discrimination rules, the k nearest neighbor (k-NN) method, and kernel rules. It also describes classification based on partial least squares, classification based on reproducing kernels, and depth-based classification. Finally, it examines unsupervised classification methods, focusing on K-means for functional data, K-means for data in a Hilbert space, and impartial trimmed K-means for functional data. Some practical issues, in particular real-data examples and simulations, are reviewed and some selected proofs are given.
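As a minimal illustration of the simplest rule reviewed in this chapter, the sketch below applies a k-NN classifier to functional data observed on a common grid, where the discretized L2 distance reduces to the Euclidean distance between the vectors of function values; the toy curves and the value of k are assumptions.

```python
# Hypothetical sketch: k-NN classification of functional data discretized on a
# common grid; toy curves stand in for real functional observations.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 100)                 # common observation grid

def sample_curves(n, phase):               # toy functional data generator
    return np.sin(2 * np.pi * (t + phase)) + 0.3 * rng.normal(size=(n, t.size))

X = np.vstack([sample_curves(50, 0.0), sample_curves(50, 0.25)])
y = np.repeat([0, 1], 50)

# On a common grid, the discretized L2 distance between curves is (up to a
# constant factor) the Euclidean distance between their value vectors.
knn = KNeighborsClassifier(n_neighbors=5).fit(X[::2], y[::2])   # even rows: train
print("hold-out accuracy:", knn.score(X[1::2], y[1::2]))        # odd rows: test
```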
2

Schor, Paul. Counting Americans. Oxford University Press, 2017. http://dx.doi.org/10.1093/acprof:oso/9780199917853.001.0001.

Abstract:
By telling how the US census classified and divided Americans by race and origin from the founding of the United States to World War II, this book shows how public statistics have been used to create an unequal representation of the nation. From the beginning, the census was a political undertaking, torn between the conflicting demands of the state, political actors, social scientists, businesses, and interest groups. Through the extensive archives of the Bureau of the Census, it traces the interactions that led to the adoption or rejection of changes in the ways different Americans were classified, as well as the changing meaning of seemingly stable categories over time. Census workers and directors by necessity constantly interpreted official categories in the field and in the offices. The difficulties they encountered, the mobilization and resistance of actors, the negotiations with the census, all tell a social history of the relation of the state to the population. Focusing in detail on slaves and their descendants, on racialized groups, and on immigrants, as well as on the troubled imposition of US racial categories upon the population of newly acquired territories, the book demonstrates that census-taking in the United States has been at its core a political undertaking shaped by racial ideologies that reflect its violent history of colonization, enslavement, segregation, and discrimination.
3

Andrade, M. J. Tumours and masses. Oxford University Press, 2011. http://dx.doi.org/10.1093/med/9780199599639.003.0022.

Abstract:
Transthoracic and transoesophageal echocardiography is the first-line diagnostic tool for imaging space-occupying lesions of the heart. Cardiac masses can be classified as tumours, thrombi, vegetations, iatrogenic material, or normal variants. Occasionally, extracardiac masses may compress the heart and create a mass effect. Cardiac masses may be suspected from the clinical presentation. This is the case in patients with an embolic event presumed of cardiac origin or in patients with infective endocarditis. Otherwise, a cardiac mass can be identified during the routine investigation of common, non-specific cardiac manifestations or as an incidental finding.In general, an integrated approach which correlates the patient’s clinical picture with the echocardiographic findings may reasonably predict the specific nature of encountered cardiac masses and, in the case of tumours, discriminate between primary versus secondary, and benign versus malignant. Furthermore, echocardiography alone or with complementary imaging modalities, can provide information to help decide on the resectability of cardiac tumours, enhance effective diagnosis and management of infective endocarditis, and assist in planning therapy and follow-up. Because several normal structures and variants may mimic pathological lesions, a thorough knowledge of potential sources of misinterpretation is crucial for a correct diagnosis. After surgical resection, histological investigation is mandatory to confirm the diagnosis.
4

Andrade, Maria João, Jadranka Separovic Hanzevacki, and Ricardo Ronderos. Cardiac tumours. Oxford University Press, 2016. http://dx.doi.org/10.1093/med/9780198726012.003.0052.

Abstract:
Transthoracic and transoesophageal echocardiography represent the first-line diagnostic tools for imaging space-occupying lesions of the heart. Cardiac masses can be classified as tumours, thrombi, vegetations, iatrogenic material, or normal variants. Occasionally, extracardiac masses may compress the heart and create a mass effect. Cardiac masses may be suspected from the clinical presentation. This is the case in patients with an embolic event presumed to be of cardiac origin or in patients with infective endocarditis. Otherwise, a cardiac mass can be identified during the routine investigation of common, non-specific cardiac manifestations or as an incidental finding. In general, an integrated approach which correlates the patient’s clinical picture with the echocardiographic findings may reasonably predict the specific nature of encountered cardiac masses and, in the case of tumours, discriminate between primary versus secondary, and benign versus malignant. Furthermore, echocardiography alone or with complementary imaging modalities, can provide information to decide on the resectability of cardiac tumours and assist on planning the therapy and follow-up. Because several normal structures and variants may mimic pathological lesions, a thorough knowledge of potential sources of misinterpretation is crucial for a correct diagnosis. After surgical resection, histological investigation is mandatory to confirm the diagnosis.

Book chapters on the topic "Discriminative classifier"

1

Zhu, Yi, and Baojie Fan. "Multi-classifier Guided Discriminative Siamese Tracking Network." In Pattern Recognition and Computer Vision, 102–13. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-60639-8_9.

2

Feng, Qi, Fengzhan Tian, and Houkuan Huang. "A Discriminative Learning Method of TAN Classifier." In Lecture Notes in Computer Science, 443–52. Berlin, Heidelberg: Springer Berlin Heidelberg, 2007. http://dx.doi.org/10.1007/978-3-540-75256-1_40.

3

Liu, Cheng-Lin. "Polynomial Network Classifier with Discriminative Feature Extraction." In Lecture Notes in Computer Science, 732–40. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11815921_80.

4

Shen, Xiang-Jun, Wen-Chao Zhang, Wei Cai, Ben-Bright B. Benuw, He-Ping Song, Qian Zhu, and Zheng-Jun Zha. "Building Locally Discriminative Classifier Ensemble Through Classifier Fusion Among Nearest Neighbors." In Lecture Notes in Computer Science, 211–20. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-48890-5_21.

5

Lucas, Simon M. "Discriminative Training of the Scanning N-Tuple Classifier." In Computational Methods in Neural Modeling, 222–29. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/3-540-44868-3_29.

6

Du, Jinhua, Junbo Guo, and Fei Zhao. "Discriminative Latent Variable Based Classifier for Translation Error Detection." In Communications in Computer and Information Science, 127–38. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-41644-6_13.

7

Sharma, Vijay K., Bibhudendra Acharya, K. K. Mahapatra, and Vijay Nath. "Learning Discriminative Classifier Parameter for Visual Object Tracking by Detection." In Nanoelectronics, Circuits and Communication Systems, 355–69. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-2854-5_31.

8

Kishore Kumar, K., and P. Trinatha Rao. "Face Verification Across Ages Using Discriminative Methods and See 5.0 Classifier." In Proceedings of First International Conference on Information and Communication Technology for Intelligent Systems: Volume 2, 439–48. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-30927-9_43.

9

Hou, Xielian, Caikou Chen, Shengwei Zhou, and Jingshan Li. "Discriminative Weighted Low-Rank Collaborative Representation Classifier for Robust Face Recognition." In Biometric Recognition, 257–64. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-97909-0_28.

10

Pappas, Emmanuel, and Sotiris Kotsiantis. "Integrating Global and Local Application of Discriminative Multinomial Bayesian Classifier for Text Classification." In Advances in Intelligent Systems and Computing, 49–55. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-32063-7_6.


Conference papers on the topic "Discriminative classifier"

1

Yang, Jian, and Delin Chu. "Sparse Representation Classifier Steered Discriminative Projection." In 2010 20th International Conference on Pattern Recognition (ICPR). IEEE, 2010. http://dx.doi.org/10.1109/icpr.2010.175.

2

Nwe, Tin Lay, Balaji Nataraj, Xie Shudong, Li Yiqun, Lin Dongyun, and Dong Sheng. "Discriminative Features for Incremental Learning Classifier." In 2019 IEEE International Conference on Image Processing (ICIP). IEEE, 2019. http://dx.doi.org/10.1109/icip.2019.8803133.

3

Akhtar, Naveed, Ajmal Mian, and Fatih Porikli. "Joint Discriminative Bayesian Dictionary and Classifier Learning." In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2017. http://dx.doi.org/10.1109/cvpr.2017.417.

4

Liu, Jie, Jiu-Qing Song, and Ya-Lou Huang. "A Generative/Discriminative Hybrid Model: Bayes Perceptron Classifier." In 2007 International Conference on Machine Learning and Cybernetics. IEEE, 2007. http://dx.doi.org/10.1109/icmlc.2007.4370618.

5

Gözüaçık, Ömer, Alican Büyükçakır, Hamed Bonab, and Fazli Can. "Unsupervised Concept Drift Detection with a Discriminative Classifier." In CIKM '19: The 28th ACM International Conference on Information and Knowledge Management. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3357384.3358144.

6

Mei, Peng, Fuquan Zhang, Lin Xu, Hongyong Leng, Lei Chen, and Guo Liu. "Transitioning conditional probability to discriminative classifier over inductive reasoning." In 2017 IEEE 8th International Conference on Awareness Science and Technology (iCAST). IEEE, 2017. http://dx.doi.org/10.1109/icawst.2017.8256510.

7

Baggenstoss, Paul M. "The Projected Belief Network Classifier: both Generative and Discriminative." In 2020 28th European Signal Processing Conference (EUSIPCO). IEEE, 2021. http://dx.doi.org/10.23919/eusipco47968.2020.9287706.

8

Wang, Yan, Dawei Yang, and Guangsan Li. "Research on weighted naive Bayesian classifier in discriminative tracking." In 2014 International Conference on Mechatronics, Electronic, Industrial and Control Engineering. Paris, France: Atlantis Press, 2014. http://dx.doi.org/10.2991/meic-14.2014.386.

9

Wang, Weiwei, Chunyu Yang, and Qiao Li. "Discriminative Analysis Dictionary and Classifier Learning for Pattern Classification." In 2019 IEEE International Conference on Image Processing (ICIP). IEEE, 2019. http://dx.doi.org/10.1109/icip.2019.8803003.

10

Yan, Yuguang, Wen Li, Michael Ng, Mingkui Tan, Hanrui Wu, Huaqing Min, and Qingyao Wu. "Learning Discriminative Correlation Subspace for Heterogeneous Domain Adaptation." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/454.

Abstract:
Domain adaptation aims to reduce the effort on collecting and annotating target data by leveraging knowledge from a different source domain. The domain adaptation problem will become extremely challenging when the feature spaces of the source and target domains are different, which is also known as the heterogeneous domain adaptation (HDA) problem. In this paper, we propose a novel HDA method to find the optimal discriminative correlation subspace for the source and target data. The discriminative correlation subspace is inherited from the canonical correlation subspace between the source and target data, and is further optimized to maximize the discriminative ability for the target domain classifier. We formulate a joint objective in order to simultaneously learn the discriminative correlation subspace and the target domain classifier. We then apply an alternating direction method of multiplier (ADMM) algorithm to address the resulting non-convex optimization problem. Comprehensive experiments on two real-world data sets demonstrate the effectiveness of the proposed method compared to the state-of-the-art methods.
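The starting point described in this abstract, a canonical correlation subspace shared by the source and target feature spaces, can be sketched with scikit-learn's CCA as below. The discriminative optimization and the ADMM solver of the paper are not reproduced; the synthetic paired data are an assumption, and a real heterogeneous domain adaptation setting would additionally need instance correspondences or labels to fit the CCA.

```python
# Hypothetical sketch: project heterogeneous source/target features into a
# shared canonical correlation subspace, then train a classifier on the source.
# (The paper's discriminative refinement of this subspace is NOT reproduced.)
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d_src, d_tgt, k = 200, 50, 30, 10
Z = rng.normal(size=(n, k))                      # shared latent factors
Xs = Z @ rng.normal(size=(k, d_src)) + 0.1 * rng.normal(size=(n, d_src))
Xt = Z @ rng.normal(size=(k, d_tgt)) + 0.1 * rng.normal(size=(n, d_tgt))
y = (Z[:, 0] > 0).astype(int)                    # labels driven by the latent space

cca = CCA(n_components=k).fit(Xs, Xt)            # correlation subspace
Xs_c, Xt_c = cca.transform(Xs, Xt)               # project both domains into it
clf = LogisticRegression().fit(Xs_c, y)          # train on projected source data
print("accuracy on projected target data:", clf.score(Xt_c, y))
```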

Organizational reports on the topic "Discriminative classifier"

1

Nelson, Bruce, and Ammon Birenzvigo. Linguistic-Fuzzy Classifier for Discrimination and Confidence Value Estimation. Fort Belvoir, VA: Defense Technical Information Center, July 2004. http://dx.doi.org/10.21236/ada426951.

2

Wurtz, R., and A. Kaplan. Statistical and Machine-Learning Classifier Framework to Improve Pulse Shape Discrimination System Design. Office of Scientific and Technical Information (OSTI), October 2015. http://dx.doi.org/10.2172/1236750.
