Theses on the topic « Classification analysis »

Listed below are the top 50 theses for research on the topic « Classification analysis ».

1

Marchetti, A. « Automatic classification of galaxy spectra in large redshift surveys ». Doctoral thesis, Università degli Studi di Milano, 2014. http://hdl.handle.net/2434/243304.

Abstract:
In my thesis work I make use of a Principal Component Analysis to classify galaxy spectra in large redshift surveys. In particular, I apply this classification to the first public data release spectra of galaxies in the range 0.4
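For readers unfamiliar with the technique, the sketch below illustrates PCA-based spectral classification in general terms: spectra are centred, decomposed into eigenspectra, and projected onto the leading components, in which space galaxies can then be grouped. It is a minimal illustration on synthetic data, not the thesis pipeline; all shapes and names are ours.

```python
# Minimal sketch of PCA-based spectral classification on synthetic data:
# centre the spectra, extract eigenspectra via SVD, and project each
# spectrum onto the leading components.
import numpy as np

rng = np.random.default_rng(0)
spectra = rng.normal(size=(500, 1000))      # 500 spectra x 1000 wavelength bins

mean_spectrum = spectra.mean(axis=0)
centred = spectra - mean_spectrum
_, s, vt = np.linalg.svd(centred, full_matrices=False)

k = 3                                       # keep the first k eigenspectra
eigenspectra = vt[:k]                       # shape (k, 1000)
coeffs = centred @ eigenspectra.T           # shape (500, k): classification space

explained = (s[:k] ** 2).sum() / (s ** 2).sum()
print(f"first {k} components explain {explained:.1%} of the variance")
```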
2

Romelli, Katia. « Discourse, society and mental disorders : deconstructing DSM over time through critical and Lacanian discourse analysis ». Doctoral thesis, Università degli Studi di Milano-Bicocca, 2014. http://hdl.handle.net/10281/83278.

Abstract:
This dissertation presents interdisciplinary work aimed at investigating the discursive construction of otherness in the mental-health domain in Western culture, and in particular the role played in this process by the Diagnostic and Statistical Manual of Mental Disorders (DSM), published by the APA. A critical-psychology perspective oriented the research and the data collection. The analysis is conducted through a multi-method approach (Critical Discourse Analysis, Semiotic Analysis and Lacanian Discourse Analysis), which integrates several traditions with particular concern for the ways in which power and ideology are discursively enacted, produced and resisted in text and talk, and shape the concept of mental disorders. Study 1 examines how legitimisation and hegemony have been discursively constructed, legitimised and consolidated over time. Study 2 investigates the scientific debate around the deletion of NPD, in order to reconstruct and deconstruct the decisional process through which the boundaries between normality and pathology are drawn. Study 3 investigates how the discourse of the DSM was contested by the discourse of other social actors involved in the mental-health domain, in order to analyse its effect in shaping the subjectivity of patients and mental-health professionals.
3

Bonneau, Jean-Christophe. « La classification des contrats : essai d'une analyse systémique des classifications du Code civil ». Grenoble, 2010. http://www.theses.fr/2010GREND017.

Abstract:
The classification of contracts as stated in Articles 1102 onwards of the Civil Code is structurally distinct from the modern classifications that were later added to it. Taking seriously the idea of a global approach to classification, the classifications of the Civil Code, separated from a legal regime that does not in fact depend on them and from notions foreign to them, such as the concept of “cause”, were considered in their relations of logic and complementarity. The existence of chains of classifications, a new classification resulting from the coherent assembly of the various classifications provided for by the Civil Code, was brought to light through a study of how these classifications are bound and combined with one another. The features of the classification of contracts were then deduced from the very structure of the classifications of the Civil Code combined in chains. These chains reveal what constitutes the essence of the contract, by making it possible to distinguish it from certain figures which try to assimilate to it but nevertheless differ from it, since the capacity of a legal object to fit into the chains of classifications is perceived as conditioning the contractual qualification itself. Considered as a preferred criterion for defining the contract, one that can inspire projects aiming at the elaboration of a European contract law, the chains of classifications were then examined in their relations with the variety of named contracts. The chains of classifications absorb these contracts as well as their legal regime, which can consequently be transposed to unnamed contracts. By allowing a renewal of the groupings and distinctions generally perceived, the chains of classifications shed new light on the process of qualification of the contract, help to specify the domain of contract modification, and finally supply a foundation for the direct contractual action exercised within chains of contracts.
4

Llobell, Fabien. « Classification de tableaux de données, applications en analyse sensorielle ». Thesis, Nantes, Ecole nationale vétérinaire, 2020. http://www.theses.fr/2020ONIR143F.

Abstract:
Multiblock datasets are more and more frequent in several areas of application. This is particularly the case in sensory evaluation, where several tests lead to multiblock datasets, each dataset being related to a subject (judge, consumer, ...). The statistical analysis of this type of data has raised increasing interest over the last thirty years. However, the clustering of multiblock datasets has received little attention, even though the need is important. In this context, a method called CLUSTATIS, devoted to the cluster analysis of datasets, is proposed. At the heart of this approach is the STATIS method, an exploratory strategy for the analysis of multiblock datasets. Several extensions of the CLUSTATIS clustering method are presented. In particular, the case of data from the so-called "Check-All-That-Apply" (CATA) task is considered, and an ad hoc clustering method called CLUSCATA is discussed. In order to improve the homogeneity of the clusters produced by both CLUSTATIS and CLUSCATA, an option to add an additional cluster, called "K+1", is introduced; its purpose is to collect datasets identified as atypical. The choice of the number of clusters is discussed, and solutions are proposed. Applications in sensory analysis as well as simulation studies highlight the relevance of the clustering approach. Implementations in the XLSTAT software and in the R environment are presented.
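CLUSTATIS itself is implemented in XLSTAT and R, as the abstract notes. The sketch below only conveys the underlying idea of clustering whole data tables by the similarity of their configurations; it uses the RV coefficient with hierarchical clustering as a stand-in and is not the author's algorithm.

```python
# Illustrative sketch: cluster several data tables (same rows = products,
# different columns per judge) by the similarity of their configurations,
# via RV coefficients and hierarchical clustering.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def rv_coefficient(x, y):
    """RV coefficient between two column-centred data tables."""
    x = x - x.mean(axis=0)
    y = y - y.mean(axis=0)
    wx, wy = x @ x.T, y @ y.T
    return np.trace(wx @ wy) / np.sqrt(np.trace(wx @ wx) * np.trace(wy @ wy))

rng = np.random.default_rng(1)
tables = [rng.normal(size=(10, 5)) for _ in range(8)]  # 8 judges, 10 products

n = len(tables)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = 1.0 - rv_coefficient(tables[i], tables[j])

# Ward linkage on the 1 - RV distances, cut into two clusters of judges.
clusters = fcluster(linkage(squareform(dist), method="ward"),
                    t=2, criterion="maxclust")
print("cluster assignment per table:", clusters)
```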
5

Platon, Ludovic. « Algorithms for ab initio identification and classification of ncRNAs ». Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLE003/document.

Abstract:
The identification of non-coding RNAs (ncRNAs) helps to improve our understanding of biology. The biological functions of many ncRNA classes are known, but other classes remain to be discovered, and the identification and classification of ncRNAs by computational methods is not a trivial task: the relevant features for each class rely on multiple heterogeneous sources of data (sequence, secondary structure, interaction with other biological components, etc.). During this thesis, we developed methods based on Self-Organizing Maps (SOM). A SOM is used to analyse and represent ncRNAs by a map of clusters in which the topology of the data is preserved. We proposed a new SOM variant, called MSSOM, which can handle multiple sources of data, whether numerical or complex (represented by kernels). MSSOM combines data sources by computing a SOM for each source and learning, at the cluster level, the best combination of sources through a final SOM. We also proposed a supervised SOM variant with rejection, called SLSOM. SLSOM identifies and classifies the known classes using a multi-layer perceptron on the output of a SOM; the rejection option associated with the output layer allows it to reject unreliable predictions and to identify potential new classes. These methods led to two new bioinformatics tools. The first, IRSOM, applies a variant of SLSOM to the discrimination of coding and non-coding RNAs; it was evaluated on a wide range of species from different kingdoms (plants, animals, bacteria and fungi). Using a simple set of sequence features, we showed that IRSOM separates coding from non-coding RNAs efficiently, and with the SOM visualization and the rejection option we highlighted and analysed some ambiguous RNAs in human. The second, CRSOM, classifies ncRNAs into subclasses by combining MSSOM and SLSOM over two data sources: sequence k-mer frequencies and a Gaussian kernel on secondary structure using the edit distance. We showed that CRSOM gives results comparable to the reference tool (nRC) without rejection, and better results with the rejection option.
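As background for the SOM-based variants described above (MSSOM, SLSOM, IRSOM and CRSOM are the author's methods and are not reproduced here), the following is a minimal self-organizing map training loop on synthetic data.

```python
# Minimal self-organizing map: the basic building block the thesis extends.
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(size=(1000, 16))           # e.g. k-mer frequency vectors
rows, cols, dim = 6, 6, data.shape[1]
weights = rng.normal(size=(rows * cols, dim))
grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)

n_iter, sigma0, lr0 = 5000, 3.0, 0.5
for t in range(n_iter):
    x = data[rng.integers(len(data))]
    bmu = np.argmin(((weights - x) ** 2).sum(axis=1))   # best matching unit
    frac = t / n_iter
    sigma = sigma0 * (0.01 / sigma0) ** frac            # shrinking radius
    lr = lr0 * (0.01 / lr0) ** frac                     # decaying rate
    d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)
    h = np.exp(-d2 / (2 * sigma ** 2))                  # neighbourhood weights
    weights += lr * h[:, None] * (x - weights)

# Each input is mapped to its BMU; nearby units hold similar inputs,
# so the map preserves the topology of the data.
bmus = np.argmin(((data[:, None, :] - weights[None]) ** 2).sum(-1), axis=1)
print("units used:", len(set(bmus.tolist())))
```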
6

Neovius, Sofia. « René Descartes’ Foundations of Analytic Geometry and Classification of Curves ». Thesis, Uppsala universitet, Algebra och geometri, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-202147.

7

Fazeli, Goldisse. « Classification and discriminant analysis ». Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ47800.pdf.

8

de Roos, Dolf. « Spectral analysis classification sonars ». Thesis, University of Canterbury. Electrical Engineering, 1986. http://hdl.handle.net/10092/5575.

Abstract:
Sonar target classification based on frequency-domain echo analysis is investigated. Conventional pulsed sonars are compared with continuous transmission frequency modulated (CTFM) sonars, and differences relating to target classification are discussed. A practical technique is introduced which eliminates the blind time inherent in CTFM technology. The value and implications of modelling underwater sonars in air are discussed and illustrated. The relative merits of auditory, visual and computer analysis of echoes are examined, and the effects of using two or more analysis methods simultaneously are investigated. Various statistical techniques for detecting and classifying targets are explored. It is seen that with present hardware limitations, a two-stage echo analysis approach offers the most efficient means of target classification. A novel design for three-section quarter-wavelength transducers is presented and evaluated. Their inherently flat frequency response makes these transducers well suited to broadband applications. The design philosophy and construction details of a Diver's Sonar and an underwater Classification Sonar are given. Sea trials reveal that using the Diver's Sonar, a blind-folded diver can successfully navigate in an unknown environment, and locate and classify targets; using the Classification Sonar, targets may be located and classified using either operators or computer software.
9

Lee, Lily. « Gait analysis for classification ». Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/8116.

Abstract:
This thesis describes a representation of gait appearance for the purpose of person identification and classification. This gait representation is based on simple localized image features, such as moments, extracted from orthogonal-view video silhouettes of human walking motion. A suite of time-integration methods, spanning a range of coarseness of time aggregation and modeling of feature distributions, is applied to these image features to create a suite of gait sequence representations. Despite their simplicity, the resulting feature vectors contain enough information to perform well on human identification and gender classification tasks. We demonstrate the accuracy of recognition on gait video sequences collected over different days and times and under varying lighting environments. Each of the integration methods is investigated for its advantages and disadvantages. An improved gait representation is built based on our experiences with the initial set of gait representations. In addition, we show gender classification results using our gait appearance features, the effect of our heuristic feature selection method, and the significance of individual features.
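A toy version of the moment-based ingredient of such a representation might look as follows; the silhouettes are synthetic and the use of Hu moments, time-averaged over the sequence, is our illustrative choice rather than the exact feature set of the thesis.

```python
# Sketch: summarize a silhouette sequence by image moments averaged over
# time; the resulting vector can be fed to any classifier.
import cv2
import numpy as np

rng = np.random.default_rng(3)
# Stand-in for a walking sequence: 30 binary silhouette frames, 64x64.
frames = (rng.random((30, 64, 64)) > 0.7).astype(np.uint8)

per_frame = []
for frame in frames:
    m = cv2.moments(frame, binaryImage=True)
    hu = cv2.HuMoments(m).ravel()            # 7 scale/rotation invariants
    # Log-scale the Hu moments, which span many orders of magnitude.
    per_frame.append(-np.sign(hu) * np.log10(np.abs(hu) + 1e-30))

gait_vector = np.mean(per_frame, axis=0)     # time-averaged feature vector
print("gait appearance features:", np.round(gait_vector, 2))
```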
10

Duong, Minh Duc <1992>. « Classification by pairwise coupling ». Master's Degree Thesis, Università Ca' Foscari Venezia, 2020. http://hdl.handle.net/10579/16806.

Abstract:
Pairwise coupling is a statistical procedure designed to solve multi-class classification problems through a combination of binary classifications. This thesis considers three different methods for pairwise coupling, namely the Hastie and Tibshirani (1998) algorithm, the PKPD algorithm (Price et al., 1995) and the voting rule (Knerr, 1990; Friedman, 1996). For each method, both linear discriminant analysis and logistic regression are considered to compute the pairwise probabilities. The three pairwise coupling methods are studied in detail and compared through simulations. Finally, real data are used to illustrate the methods.
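As an illustration of the simplest of the three methods, the voting rule, here is a hedged sketch using one-vs-one logistic regression; the Hastie-Tibshirani and PKPD couplings, which combine the pairwise probabilities rather than votes, are not shown.

```python
# Voting rule for pairwise coupling: train one binary classifier per
# class pair, then let each one vote for a class.
import numpy as np
from itertools import combinations
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
classes = np.unique(y)

pair_models = {}
for a, b in combinations(classes, 2):
    mask = np.isin(y, [a, b])
    pair_models[(a, b)] = LogisticRegression(max_iter=1000).fit(X[mask], y[mask])

votes = np.zeros((len(X), len(classes)))
for (a, b), model in pair_models.items():
    pred = model.predict(X)
    votes[pred == a, a] += 1        # each pairwise model casts one vote
    votes[pred == b, b] += 1

y_hat = votes.argmax(axis=1)        # class with the most votes wins
print("training accuracy of the voting rule:", (y_hat == y).mean())
```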
11

Pektaş, Abdurrahman. « Behavior based malware classification using online machine learning ». Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAM065/document.

Abstract:
Recently, malware (short for malicious software) has greatly evolved and become a major threat to home users, enterprises, and even governments. Despite the extensive use and availability of various anti-malware tools such as anti-viruses, intrusion detection systems and firewalls, malware authors can readily evade these precautions by using obfuscation techniques. To mitigate this problem, malware researchers have proposed various data mining and machine learning approaches for detecting and classifying malware samples according to their static or dynamic features. Although the proposed methods are effective over small sample sets, their scalability to large datasets is in question. Moreover, it is well known that the majority of malware is a variant of previously known samples, and the volume of new variants far outpaces the current capacity of malware analysis; developing malware classification that copes with this growth is therefore essential for the security community. The key challenge in identifying the family of a malware sample is to achieve a balance between the increasing number of samples and classification accuracy. To overcome this limitation, unlike existing classification schemes that apply machine learning algorithms to stored data (i.e., off-line), we propose a new malware classification system employing online machine learning algorithms that can provide an instantaneous model update as each new malware sample is introduced to the classification scheme. To achieve our goal, we first developed a portable, scalable and transparent system called VirMon for dynamic analysis of malware targeting Windows OS. VirMon collects the behavioural activities of analysed samples at low kernel level through its purpose-built mini-filter driver. We also set up a cluster of five machines for our online learning framework module (Jubatus), which allows large volumes of data to be handled; this configuration lets each analysis machine perform its tasks and deliver the obtained results to the cluster manager. Essentially, the proposed framework consists of three major stages. The first stage extracts the behaviour of the sample file under scrutiny and observes its interactions with the OS resources; at this stage, the sample file is run in a sandboxed environment, and our framework supports two sandboxes, VirMon and Cuckoo. During the second stage, we apply feature extraction to the analysis report; the label of each sample is determined using Virustotal, an online multiple anti-virus scanner consisting of 46 engines. At the final stage, the malware dataset is partitioned into training and testing sets: the training set is used to obtain a classification model and the testing set is used for evaluation. To validate the effectiveness and scalability of our method, we evaluated it on 18,000 recent malicious files, including viruses, trojans, backdoors and worms, obtained from VirusShare; our experimental results show that the method performs malware classification with 92% accuracy.
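The essential difference from off-line schemes is the per-sample model update. The sketch below conveys that idea with a generic incremental learner on random stand-in features; the thesis' actual stack (VirMon/Cuckoo behavioural reports fed to Jubatus) is not reproduced here.

```python
# Online classification sketch: update the model sample by sample
# instead of retraining on the full stored dataset.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(4)
n_features = 100
families = np.array([0, 1, 2])                    # e.g. trojan/worm/backdoor

clf = SGDClassifier(loss="log_loss")
for step in range(2000):                          # stream of analysed samples
    x = rng.normal(size=(1, n_features))          # behavioural feature vector
    label = np.array([step % 3])                  # label, e.g. from a scanner
    clf.partial_fit(x, label, classes=families)   # instantaneous model update

x_new = rng.normal(size=(1, n_features))
print("predicted family of a new sample:", clf.predict(x_new)[0])
```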
12

Anteryd, Fredrik. « Information Classification in Swedish Governmental Agencies : Analysis of Classification Guidelines ». Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-11493.

Abstract:
Information classification deals with the handling of sensitive information, such as patient records and social security information. It is of utmost importance that this information is treated with caution in order to ensure its integrity and security. In Sweden, the Civil Contingencies Agency has established a set of guidelines for how governmental agencies should handle such information. However, there is a lack of research regarding how well these guidelines are followed, and whether agencies have made their own accommodations of them. This work presents the results of a survey sent to 245 governmental agencies in Sweden, investigating how information classification is actually performed today. The questionnaire was answered by 144 agencies, and 54 agencies provided detailed documents describing their classification process. The overall results show that the classification process is perceived as difficult; the agencies that provided documents proved to have good guidelines, although these were not always consistent with the existing recommendations.
13

Söderholm, Marianne. « Stream Classification and Solubility of the Dispersion Equation for Piecewise Constant Vorticity ». Thesis, Linköpings universitet, Matematiska institutionen, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-146205.

Abstract:
This thesis concerns the water wave problem corresponding to a piecewise constant vorticity function. There are several results connected to this field. In [1] the authors prove the existence of small-amplitude capillary-gravity water waves in the setting of unidirectional waves, and present an explicit form of the dispersion equation in the case when the vorticity function has two jumps. A two-layer model with constant but different vorticities is studied in [2], while in [3], an analysis of the dispersion equation for a three-layer model is given. In this thesis we first classify all stream solutions to the problem specified above, and then use our classification to prove and analyze solubility of the dispersion equation for a vorticity function with one jump. We do not require streams to be unidirectional (that is, we allow underlying counter-currents and internal stagnation).
14

Jamain, Adrien. « Meta-analysis of classification methods ». Thesis, Imperial College London, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.413686.

15

Asher, Rebecca J. (Rebecca Jennie). « Capnographic analysis for disease classification ». Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/79320.

Abstract:
Existing methods for extracting diagnostic information from carbon dioxide in the exhaled breath are qualitative, through visual inspection, and therefore imprecise. In this thesis, we quantify the CO₂ waveform, or capnogram, in order to discriminate among various lung disorders. Quantitative analyses of the capnogram are conducted by extracting several physiological waveform features and performing classification by discriminant analysis with voting. Our classification methods are tested in distinguishing between records from subjects with normal lung function and patients with cardiorespiratory disease. In a second step, we discriminate between capnograms from patients with obstructive lung disease (chronic obstructive pulmonary disease) and those with restrictive lung disease (congestive heart failure). Our results demonstrate the diagnostic potential of capnography.
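A minimal sketch of "discriminant analysis with voting" on synthetic per-breath features follows; the feature names and distributions are illustrative assumptions, not the thesis' physiological capnogram features.

```python
# Classify each exhalation from a few waveform features, then
# majority-vote over the breaths in a record.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(5)
# Toy training set: per-breath features (e.g. end-tidal CO2, plateau slope).
X_train = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(1.5, 1, (200, 2))])
y_train = np.array([0] * 200 + [1] * 200)        # 0 = normal, 1 = disease

lda = LinearDiscriminantAnalysis().fit(X_train, y_train)

# One test record = many breaths; vote over the per-breath predictions.
record = rng.normal(1.5, 1, (30, 2))             # 30 breaths from one patient
breath_votes = lda.predict(record)
diagnosis = np.bincount(breath_votes).argmax()
print("per-breath votes:", np.bincount(breath_votes), "-> record label", diagnosis)
```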
16

Benabdallah, Abdelwahab. « La nawba algéroise : de l'analyse à la classification ». Thesis, Paris 4, 2015. http://www.theses.fr/2015PA040234.

Abstract:
The nawba is the vocal and instrumental macroform of reference in the musical heritage of the Maghreb countries known as "Arabo-Andalusian" music. Within this vast repertoire, the Algiers nawba, transmitted in the school of Algiers, deserves a study of its own. This thesis therefore focuses on the repertoire of the Algiers nawba, and specifically on its vocal pieces, classified in the 16 modes/nawbât and distributed over five movements: mṣaddar, bṭayḥi, darj, inṣiraf and ḫlaṣ. The nawba is a suite of vocal and instrumental pieces that follow one another in a well-established order according to the ṭab’ (mode) and the mîzân (rhythm), which are important criteria for classifying the pieces. Our analysis is essentially based on the ṭab’, in order to understand how the sixteen modes of the nawba in Algiers work and how to differentiate them, so as to define the characteristics of each vocal piece of the nawba. The analytical work begins with a preliminary analysis to characterize the sixteen Algiers modes precisely and to identify anomalies; a second level of analysis considers the pieces thus identified in order to clarify the classification. Finally, we propose in the annex complete transcriptions of the corpus in the form of a diwan, or melodic collection, based on the melodic scientific classification.
17

Watkins, Peter. « Classification of sheep category using chemical analysis and statistical classification algorithms ». PhD thesis, Murdoch University, 2011. https://researchrepository.murdoch.edu.au/id/eprint/6249/.

Abstract:
In Australia, dentition (eruption of permanent incisors) is used as a proxy for age to define sheep meat quality. Lamb is defined as having no permanent incisors, hogget as having at least one incisor, and mutton as having two or more incisors. Classification of the carcase is done at the abattoir prior to the removal of an animal’s head. Recently, an Australian Senate inquiry into meat marketing reported concern that substitution of hogget and mutton for lamb may be occurring in the industry. At present, no objective method is available for classifying sheep category. The general aims of this thesis were to i) evaluate whether chemical analysis of branched-chain fatty acid (BCFA) content could be used as an objective tool to determine sheep age, ii) understand the effect that some production factors had on BCFA concentrations in Australian sheep, and iii) develop new approaches (chemical and/or statistical) for determining sheep category (age). BCFAs are implicated as the main contributors to “mutton flavour”, often associated with the cooked meat of older animals. BCFAs are reported to increase with age, which suggests that chemical analysis of these compounds could be used as an objective method. Concentrations of three BCFAs (4-methyloctanoic (MOA), 4-ethyloctanoic (EOA) and 4-methylnonanoic (MNA) acids) were measured in a survey of fat samples taken from 533 sheep carcases at abattoirs in New South Wales, Victoria and Western Australia. This thesis shows that, on its own, chemical analysis of the BCFAs is not sufficient to discriminate lamb from hogget and mutton, as pre-slaughter nutrition is a significant factor in classifying sheep using this approach. Uncertainty at the BCFA concentration ranges found in Australian sheep was determined to be high, making it difficult to discriminate between sheep carcases of different ages based on the BCFA level. Fast gas chromatography was evaluated as the basis for a high-throughput chemical technique but was not sufficiently sensitive for BCFA measurements. Solid-phase microextraction (SPME) was also found to be suitable for sampling 3-methylindole and p-cresol, compounds responsible for diet-related “pastoral flavour” in sheep fat, but further work is needed to validate this approach for measuring these compounds in sheep fat. Statistical classification algorithms, when applied to the chromatograms measured for the 533 carcases, showed great promise for predicting sheep category. Specifically, the random forests algorithm, when applied to mean-centred data, gave 100% predictive accuracy when differentiating between lamb, hogget and mutton. This approach could be used to develop an objective method for determining sheep age and category, suitable for use by the Australian sheep meat industry.
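The final step, a random forest on mean-centred chromatogram vectors, can be sketched as follows on synthetic data (with random labels the score hovers around chance; the point is only the shape of the pipeline, not the thesis' result).

```python
# Random forest on mean-centred chromatogram vectors, with cross-validation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
X = rng.normal(size=(300, 500))                 # chromatogram intensities
y = rng.integers(0, 3, size=300)                # 0=lamb, 1=hogget, 2=mutton

X_centred = X - X.mean(axis=0)                  # mean-centring, as in the thesis
forest = RandomForestClassifier(n_estimators=500, random_state=0)
scores = cross_val_score(forest, X_centred, y, cv=5)
print("cross-validated accuracy:", scores.mean().round(3))
```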
18

Mustofadee, Affan. « Classification of muscles from ultrasound image sequences ». Thesis, Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-2391.

Abstract:
The analysis of the health condition in Rheumatoid Arthritis (RA) remains a qualitative process dependent on visual inspection by a clinician; fully automatic techniques that can accurately classify the health of the muscle have yet to be developed. The purpose of this work is to develop a novel spatio-temporal technique to assist in a rehabilitation-program framework, by identifying motion features inherent in the muscles in order to classify them as either healthy or diseased. Experiments are based on ultrasound image sequences during which the muscles were undergoing contraction. The proposed system uses an optical flow technique to estimate the velocity of contraction. Analyzing and manipulating the velocity vectors reveals valuable information which motivates the extraction of motion features to discriminate healthy from diseased muscles. Experimental results for classification prove helpful for essential developments of therapy processes, and the performance of the system has been validated with the "leave-one-out" cross-validation technique. The method leads to an analytical description of both global and local muscle features in a way which enables the derivation of an appropriate classification strategy. To our knowledge this is the first reported spatio-temporal method developed and evaluated for RA assessment. In addition, the progress of physical therapy to improve the strength of muscles in RA patients has also been evaluated using the features used for classification.
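A compact sketch of this kind of spatio-temporal pipeline is given below: dense optical flow between consecutive frames, two simple motion features, and leave-one-out validation. The frames are synthetic, and the feature and classifier choices are ours, not the thesis'.

```python
# Dense optical flow -> motion features -> leave-one-out classification.
import cv2
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.neighbors import KNeighborsClassifier

def motion_features(frames):
    """Mean flow magnitude and its variance over an image sequence."""
    mags = []
    for prev, nxt in zip(frames[:-1], frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, nxt, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mags.append(np.linalg.norm(flow, axis=2).mean())
    return [np.mean(mags), np.var(mags)]

rng = np.random.default_rng(7)
sequences = [(rng.random((10, 64, 64)) * 255).astype(np.uint8) for _ in range(12)]
X = np.array([motion_features(seq) for seq in sequences])
y = np.array([0, 1] * 6)                         # 0 = healthy, 1 = diseased

correct = 0
for train, test in LeaveOneOut().split(X):       # "leave-one-out" validation
    model = KNeighborsClassifier(n_neighbors=3).fit(X[train], y[train])
    correct += int(model.predict(X[test])[0] == y[test][0])
print("leave-one-out accuracy:", correct / len(X))
```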
19

Ng, Liang Shing. « Combining multiple features in texture classification ». Thesis, University of Southampton, 1999. https://eprints.soton.ac.uk/253030/.

20

Shin, Hyejin. « Infinite dimensional discrimination and classification ». Texas A&M University, 2003. http://hdl.handle.net/1969.1/5832.

Abstract:
Modern data collection methods are now frequently returning observations that should be viewed as the result of digitized recording or sampling from stochastic processes rather than vectors of finite length. In spite of great demands, only a few classification methodologies for such data have been suggested and supporting theory is quite limited. The focus of this dissertation is on discrimination and classification in this infinite dimensional setting. The methodology and theory we develop are based on the abstract canonical correlation concept of Eubank and Hsing (2005), and motivated by the fact that Fisher's discriminant analysis method is intimately tied to canonical correlation analysis. Specifically, we have developed a theoretical framework for discrimination and classification of sample paths from stochastic processes through use of the Loeve-Parzen isomorphism that connects a second order process to the reproducing kernel Hilbert space generated by its covariance kernel. This approach provides a seamless transition between the finite and infinite dimensional settings and lends itself well to computation via smoothing and regularization. In addition, we have developed a new computational procedure and illustrated it with simulated data and Canadian weather data.
21

Chzhen, Evgenii. « Plug-in methods in classification ». Thesis, Paris Est, 2019. http://www.theses.fr/2019PESC2027/document.

Abstract:
This manuscript studies several problems of constrained classification. In this framework, our goal is to construct an algorithm which performs as well as the best classifier that obeys some desired property. Plug-in type classifiers are well suited to achieve this goal; interestingly, it is shown that in several setups these classifiers can leverage unlabeled data, that is, they can be constructed in a semi-supervised manner. Chapter 2 describes two particular settings of binary classification: classification with the F-score and classification under equal opportunity. For both problems, semi-supervised procedures are proposed and their theoretical properties are established. In the case of the F-score, the proposed procedure is shown to be optimal in the minimax sense over a standard non-parametric class of distributions. In the case of classification under equal opportunity, the proposed algorithm is shown to be consistent in terms of the misclassification risk and its asymptotic fairness is established; moreover, for this problem, the proposed procedure outperforms state-of-the-art algorithms in the field. Chapter 3 describes the setup of confidence-set multi-class classification. Again, a semi-supervised procedure is proposed and its nearly minimax optimality is established. It is additionally shown that no supervised algorithm can achieve a so-called fast rate of convergence, whereas the proposed semi-supervised procedure can achieve fast rates provided that the size of the unlabeled data is sufficiently large. Chapter 4 describes a setup of multi-label classification where one aims at minimizing the false negative error subject to almost-sure type constraints. Two specific constraints are considered: sparse predictions and predictions with control over the false negative errors. For the former, a supervised algorithm is provided and it is shown that this algorithm can achieve fast rates of convergence. For the latter, it is shown that extra assumptions are necessary in order to obtain theoretical guarantees on the classification risk.
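The plug-in principle in the F-score setting can be conveyed in a few lines: estimate the regression function eta(x) = P(Y = 1 | x), then tune the decision threshold for the F-score. The sketch below does this with held-out labelled data only; the manuscript's semi-supervised procedure and its minimax analysis go well beyond this.

```python
# Plug-in classification sketch: estimate eta, then pick the F-score
# maximizing threshold on validation data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, weights=[0.8], random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

eta = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)    # estimate of eta
probs = eta.predict_proba(X_val)[:, 1]

# Plug-in step: sweep thresholds and keep the F-score maximizer.
thresholds = np.linspace(0.05, 0.95, 19)
f1 = [f1_score(y_val, (probs >= t).astype(int)) for t in thresholds]
best = thresholds[int(np.argmax(f1))]
print(f"plug-in threshold {best:.2f}, validation F1 {max(f1):.3f}")
```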
22

Biernacki, Christophe. « Choix de modèles en classification ». Compiègne, 1997. http://www.theses.fr/1997COMP1043.

Abstract:
The objective of this thesis is to compare and to propose model-selection methods when classification (cluster analysis and discriminant analysis) relies on a Gaussian mixture model. Two kinds of models are considered: the Gaussian models themselves (sets of constraints on the covariance matrices and on the mixing proportions) and the number of classes (for cluster analysis only). A program gathering several estimation methods for the mixture parameters, while taking into account the constraints imposed by the Gaussian model, was written in the Splus language. We then compared, for the choice of the Gaussian model, a good number of classical criteria. These comparisons are made from a classification point of view: the best model is the one that produces the best partition in cluster analysis, and the one that produces the best classification rule in discriminant analysis. In cluster analysis, Bozdogan's AIC3 criterion gives the best results; in discriminant analysis, two criteria stand out: AIC3 again, and the cross-validation criterion. For cluster analysis, we simply propose to use Symons' classification likelihood criterion to find the number of classes (the Gaussian model being known). This very simple criterion can be interpreted as the likelihood penalized by a measure of the classifiability of the data and also, in certain cases, as a penalization of the celebrated k-means criterion of Sebestyen. Numerous experiments (on simulated and real data) show very encouraging results, when the classes are well separated, for solving the delicate problem of the number of classes.
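Criterion-based model choice for Gaussian mixtures is easy to demonstrate; the sketch below compares AIC and BIC over the number of components. The criteria studied in the thesis (AIC3, cross-validation, Symons' classification likelihood) are not exposed directly by common libraries, so standard AIC/BIC stand in here.

```python
# Fit Gaussian mixtures with a range of component counts and compare
# information criteria; the criterion minimum selects the model.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(8)
data = np.vstack([rng.normal(-3, 1, (150, 2)),    # three well-separated
                  rng.normal(0, 1, (150, 2)),     # Gaussian classes
                  rng.normal(4, 1, (150, 2))])

for k in range(1, 6):
    gm = GaussianMixture(n_components=k, random_state=0).fit(data)
    print(f"k={k}  AIC={gm.aic(data):9.1f}  BIC={gm.bic(data):9.1f}")
# With well-separated classes, the minimum is expected at k=3.
```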
23

Soukhoroukova, Nadejda. « Data classification through nonsmooth optimization ». Thesis, University of Ballarat, Mt. Helen, Vic., 2003. http://researchonline.federation.edu.au/vital/access/HandleResolver/1959.17/42220.

24

Brock, James L. « Acoustic classification using independent component analysis ». Thesis, 2006. https://ritdml.rit.edu/dspace/handle/1850/2067.

25

Lee, Ho-Jin. « Functional data analysis : classification and regression ». Texas A&M University, 2004. http://hdl.handle.net/1969.1/2805.

Abstract:
Functional data refer to data which consist of observed functions or curves evaluated at a finite subset of some interval. In this dissertation, we discuss statistical analysis, especially classification and regression, when data are available in functional form. Due to the nature of functional data, one considers function spaces in presenting such data, and each functional observation is viewed as a realization generated by a random mechanism in those spaces. The classification procedure in this dissertation is based on dimension-reduction techniques. One commonly used method is functional principal component analysis (functional PCA), in which an eigendecomposition of the covariance function is employed to find the directions of highest variability of the data in the function space. The reduced space spanned by a few eigenfunctions is thought of as a space containing most of the features of the functional data. We also propose a functional regression model for scalar responses. The infinite dimensionality of the predictor space causes many problems, one being that there are infinitely many solutions. The space of the parameter function is therefore restricted to Sobolev-Hilbert spaces, and the so-called ε-insensitive loss function is utilized. As a robust technique of function estimation, we present a way to find a function that has at most ε deviation from the observed values and at the same time is as smooth as possible.
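A bare-bones functional PCA on curves sampled over a common grid looks as follows; the smooth modes of variation are synthetic and the truncation at two components is an illustrative choice.

```python
# Functional PCA sketch: eigenfunctions of the empirical covariance give
# low-dimensional scores usable for classification.
import numpy as np

rng = np.random.default_rng(9)
grid = np.linspace(0, 1, 100)
# 80 noisy curves generated from two smooth modes of variation.
scores_true = rng.normal(size=(80, 2))
curves = (scores_true[:, :1] * np.sin(2 * np.pi * grid)
          + scores_true[:, 1:] * np.cos(2 * np.pi * grid)
          + 0.1 * rng.normal(size=(80, 100)))

centred = curves - curves.mean(axis=0)
cov = centred.T @ centred / len(curves)              # empirical covariance
eigvals, eigfuns = np.linalg.eigh(cov)
eigvals, eigfuns = eigvals[::-1], eigfuns[:, ::-1]   # descending order

fpc_scores = centred @ eigfuns[:, :2]                # scores on first 2 components
print("variance explained:", (eigvals[:2].sum() / eigvals.sum()).round(3))
```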
26

Tan, Tieniu. « Image texture analysis : classification and segmentation ». Thesis, Imperial College London, 1990. http://hdl.handle.net/10044/1/8697.

27

Mikkelinen, Nicklas. « Analysis of information classification best practices ». Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-11551.

Abstract:
Information security, information management systems and, more specifically, information classification are important parts of an organisation's information security. More and more information is being processed each day and needs to be secured; without proper information classification guidelines in place, and given the lack of research within the subject, organisations could be vulnerable to attacks from third parties. This project presents a list of best practices found within information classification guidelines published online by different organisations. Out of 100 reviewed documents, 30 included information classification guidelines; analysed with a thematic analysis, these yield a set of best practices within information classification.
28

Dunlap, John. « Classification and analysis of longwall delays ». Thesis, Virginia Tech, 1990. http://scholar.lib.vt.edu/theses/available/etd-05022009-040545/.

29

Folkes, Simon Richard. « Analysis and classification of galaxy spectra ». Thesis, University of Cambridge, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.624783.

30

Michie, Alexander David. « Analysis and classification of protein structure ». Thesis, University College London (University of London), 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.267834.

31

Dunlap, James. « Classification and analysis of longwall delays ». Thesis, Virginia Tech, 1990. http://hdl.handle.net/10919/42403.

32

Kübler, Bernhard Christian. « Risk classification by means of clustering ». Frankfurt am Main; Berlin; Bern; Bruxelles; New York; Oxford; Wien: Lang, 2009. http://d-nb.info/998737291/04.

33

Koci, Elvis, Maik Thiele, Oscar Romero and Wolfgang Lehner. « Cell Classification for Layout Recognition in Spreadsheets ». Springer, 2016. https://tud.qucosa.de/id/qucosa%3A75562.

Abstract:
Spreadsheets compose a notably large and valuable dataset of documents within the enterprise settings and on the Web. Although spreadsheets are intuitive to use and equipped with powerful functionalities, extracting and reusing data from them remains a cumbersome and mostly manual task. Their greatest strength, the large degree of freedom they provide to the user, is at the same time also their greatest weakness, since data can be arbitrarily structured. Therefore, in this paper we propose a supervised learning approach for layout recognition in spreadsheets. We work on the cell level, aiming at predicting their correct layout role, out of five predefined alternatives. For this task we have considered a large number of features not covered before by related work. Moreover, we gather a considerably large dataset of annotated cells, from spreadsheets exhibiting variability in format and content. Our experiments, with five different classification algorithms, show that we can predict cell layout roles with high accuracy. Subsequently, in this paper we focus on revising the classification results, with the aim of repairing misclassifications. We propose a sophisticated approach, composed of three steps, which effectively corrects a reasonable number of inaccurate predictions.
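The supervised core of such an approach is conventional once cell features are extracted. The sketch below trains a classifier on made-up per-cell features and five made-up role names; the paper's actual feature set and label taxonomy are much richer.

```python
# Per-cell layout-role classification sketch on synthetic features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

ROLES = ["header", "data", "metadata", "attribute", "derived"]  # illustrative

rng = np.random.default_rng(10)
# Per-cell features: e.g. [is_numeric, is_bold, string length, row, column].
X = np.column_stack([rng.integers(0, 2, 5000), rng.integers(0, 2, 5000),
                     rng.integers(0, 40, 5000), rng.integers(0, 200, 5000),
                     rng.integers(0, 30, 5000)]).astype(float)
y = rng.integers(0, len(ROLES), 5000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te).round(3))
print("example prediction:", ROLES[int(clf.predict(X_te[:1])[0])])
```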
34

Kordogly, Rima. « The classification patterns of bank financial ratios ». Thesis, Loughborough University, 2010. https://dspace.lboro.ac.uk/2134/6815.

Abstract:
Financial ratios are key units of analysis in most quantitative financial research including bankruptcy prediction, performance and efficiency analysis, mergers and acquisitions, and credit ratings, amongst others. Since hundreds of ratios can be computed using available financial data and given the substantial overlap in information provided by many of these ratios, choosing amongst ratios has been a significant issue facing practitioners and researchers. An important contribution of the present thesis is to show that ratios can be arranged into groups where each group describes a separate financial aspect or dimension of a given firm or industry. Then by choosing representative ratios from each group, a small, yet comprehensive, set of ratios can be identified and used for further analysis. Whilst a substantial part of the financial ratio literature has focused on classifying financial ratios empirically and on assessing the stability of the ratio groups over different periods and industries, relatively little attention has been paid to the classifying of financial ratios of the banking sector. This study aims to explore the classification patterns of 56 financial ratios for banks of different type, size and age. Using data from the Uniform Bank Performance Report (UBPR), large samples of commercial, savings, and De Novo (newlychartered) commercial banks were obtained for the period between 2001 and 2005, inclusive. Principal Component Analysis (PCA) was performed on a yearly basis to classify the banks' ratios after applying the inverse sinh transformation to enhance the distributional properties of the data. The number of patterns were decided using Parallel Analysis. The study also uses various methods including visual comparison, correlation, congruency, and transformation analysis to assess the time series stability and cross-sectional similarity of the identified ratio patterns. The study identifies 13 or 14 ratio patterns for commercial banks and 10 or 11 ratio patterns for savings banks over the period on which the study is based. These patterns are generally stable over time; yet, some dissimilarity was found between the ratio patterns for the two types of banks – that is, the commercial and savings banks. A certain degree of dissimilarity was also found between the financial patterns for commercial banks belonging to different asset-size classes. Furthermore, four ratio patterns were consistently identified for the De Novo commercial banks in the first year of their operations. However, no evidence of convergence was found between the ratio patterns of the De Novo commercial banks and the ratio patterns of the incumbent (that is, long established) commercial banks. The findings of this study bring useful insights particularly to researchers who employ bank financial ratios in empirical analysis. Methodologically, this research pioneers the application of the inverse sinh transformation and parallel analysis in the area of the ratio classification literature. Also, it contributes to the use of transformation analysis as a factor comparison technique by deriving a significance test for the outputs of this analysis. Moreover, this is the only large scale study to be conducted on the classification patterns of bank financial ratios.
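The three methodological ingredients named here (inverse sinh transformation, PCA, parallel analysis) compose naturally; the following is a minimal sketch on synthetic ratios, not the thesis' code.

```python
# Inverse sinh transform -> PCA -> Horn's parallel analysis to decide
# how many ratio patterns to retain.
import numpy as np

rng = np.random.default_rng(11)
ratios = rng.lognormal(size=(500, 20))            # 500 banks x 20 ratios

def pca_eigenvalues(x):
    """Eigenvalues of the correlation matrix, in descending order."""
    return np.linalg.eigvalsh(np.corrcoef(x, rowvar=False))[::-1]

transformed = np.arcsinh(ratios)                  # inverse sinh transformation
observed = pca_eigenvalues(transformed)

# Parallel analysis: compare against eigenvalues of random data of the
# same shape; keep components whose eigenvalue exceeds the random mean.
random_eigs = np.array([pca_eigenvalues(rng.normal(size=ratios.shape))
                        for _ in range(100)])
keep = observed > random_eigs.mean(axis=0)
print("number of ratio patterns to retain:", int(keep.sum()))
```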
Styles APA, Harvard, Vancouver, ISO, etc.
35

Kurujyibwami, Celestin. « Admissible transformations and the group classification of Schrödinger equations ». Doctoral thesis, Linköpings universitet, Matematik och tillämpad matematik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-137424.

Texte intégral
Résumé :
We study admissible transformations and solve group classification problems for various classes of linear and nonlinear Schrödinger equations with an arbitrary number n of space variables. The aim of the thesis is twofold. The first is the construction of a new theory of uniformly semi-normalized classes of differential equations and its application to solving group classification problems for these classes. Point transformations connecting two equations (source and target) from the class under study may have special properties of semi-normalization, which makes the group classification of that class using the algebraic method more involved. To extend this method we introduce the new notion of uniformly semi-normalized classes. Various types of uniform semi-normalization are studied: with respect to the corresponding equivalence group, with respect to a proper subgroup of the equivalence group, as well as the corresponding types of weak uniform semi-normalization. An important kind of uniform semi-normalization is given by classes of homogeneous linear differential equations, which we call uniform semi-normalization with respect to linear superposition of solutions. The class of linear Schrödinger equations with complex potentials is of this type, and its group classification can be effectively carried out within the framework of uniform semi-normalization. Computing the equivalence groupoid and the equivalence group of this class, we show that it is uniformly semi-normalized with respect to linear superposition of solutions. This allows us to apply the version of the algebraic method for uniformly semi-normalized classes and to reduce the group classification of this class to the classification of appropriate subalgebras of its equivalence algebra. To single out the classification cases, integers that are invariant under equivalence transformations are introduced. The complete group classification of linear Schrödinger equations is carried out for the cases n = 1 and n = 2. The second aim is to study the group classification problem for classes of generalized nonlinear Schrödinger equations which are not uniformly semi-normalized. We find their equivalence groupoids and their equivalence groups and then conclude whether these classes are normalized or not. The most appealing classes are the class of nonlinear Schrödinger equations with potentials and modular nonlinearities and the class of generalized Schrödinger equations with complex-valued and, in general, variable coefficients of the Laplacian term. Neither of these classes is normalized. The first is partitioned into an infinite number of disjoint normalized subclasses of three kinds: logarithmic nonlinearity, power nonlinearity and general modular nonlinearity. The properties of the Lie invariance algebras of equations from each subclass are studied for arbitrary space dimension n, and the complete group classification is carried out for each subclass in dimension (1+2). The second class is successively reduced into subclasses until we reach the subclass of (1+1)-dimensional linear Schrödinger equations with variable mass, which also turns out to be non-normalized. We prove that this class is mapped by a family of point transformations to the class of (1+1)-dimensional linear Schrödinger equations with unique constant mass.
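For orientation, a hedged reconstruction of the classes referred to above, in the usual notation (the thesis's exact parametrisation may differ):

```latex
% Linear Schrödinger equations with complex potentials (assumed normal form):
\[
  i\psi_t + \Delta\psi + V(t,x)\,\psi = 0, \qquad x\in\mathbb{R}^n,\quad V\ \text{complex-valued}.
\]
% Nonlinear Schrödinger equations with potentials and modular nonlinearities
% (assumed form, with f the modular nonlinearity):
\[
  i\psi_t + \Delta\psi + f(|\psi|)\,\psi + V(t,x)\,\psi = 0 .
\]
```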
Styles APA, Harvard, Vancouver, ISO, etc.
36

Price, Matthew. « Automatic Modulation Classification Using Grey Relational Analysis ». Thesis, Virginia Tech, 2011. http://hdl.handle.net/10919/42441.

Texte intégral
Résumé :
One component of wireless communications of increasing necessity in both civilian and military applications is automatic modulation classification. The modulation of a detected signal of unknown origin must first be determined before the signal can be demodulated. This thesis presents a novel architecture for a modulation classifier that determines the most likely modulation using Grey Relational Analysis together with the extraction and combination of multiple signal features. An evaluation of data preprocessing methods is conducted, and the performance of the classifier is investigated as each new signal feature is added to the classification.
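A minimal sketch of the Grey Relational Analysis step, assuming the common Deng formulation with distinguishing coefficient rho = 0.5; the feature values and reference templates below are purely illustrative.

```python
# Grey Relational Analysis over extracted signal features (illustrative sketch).
import numpy as np

def classify_gra(x, references, rho=0.5):
    # Grey relational grade of each candidate; global min/max deltas as in
    # the standard Deng formulation.
    deltas = {m: np.abs(x - r) for m, r in references.items()}
    all_d = np.concatenate(list(deltas.values()))
    dmin, dmax = all_d.min(), all_d.max()
    grades = {m: float(((dmin + rho * dmax) / (d + rho * dmax)).mean())
              for m, d in deltas.items()}
    return max(grades, key=grades.get), grades

references = {                      # assumed per-modulation feature templates
    "BPSK": np.array([0.90, 0.10, 0.05]),
    "QPSK": np.array([0.60, 0.40, 0.10]),
    "FSK":  np.array([0.20, 0.30, 0.80]),
}
x = np.array([0.65, 0.35, 0.15])    # features extracted from the unknown signal
print(classify_gra(x, references)[0])
```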
Master of Science
Styles APA, Harvard, Vancouver, ISO, etc.
37

Ibbou, Smaïl. « Classification, analyse des correspondances et méthodes neuronales ». Paris 1, 1998. http://www.theses.fr/1998PA010020.

Texte intégral
Résumé :
This work deals with the contributions that neural techniques can make to data analysis, and more particularly to clustering and correspondence analysis. The document can be divided into three parts. The first part addresses the complex problem of choosing the relevant number of classes to retain in a clustering of data. To this end, we study a data-fusion algorithm in R^d proposed by Y. F. Wong in 1993. The method is based on minimising the free energy, a quantity often used in statistical mechanics. We provide a rigorous formalisation of the problem as well as a complete study of the algorithm in the case where d equals one. The second part is devoted to the use of the Kohonen algorithm with incomplete data. We propose an adaptation of the method together with an empirical study of the robustness of the Kohonen algorithm in the presence of missing data. The empirical study is carried out on simulated and real examples, examining on the one hand the disorganisation of the network and on the other hand measuring ad hoc errors. We also define a method for estimating the missing data. In the third part we present two original methods for processing qualitative variables via the Kohonen algorithm. Named KACM I and KACM II, these two algorithms make it possible to carry out the analogue of a multiple correspondence analysis by classifying the modalities of the variables and the individuals on the same Kohonen map.
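A schematic sketch of the missing-data adaptation described in the second part, assuming the common masked-distance variant; the neighbourhood update of the full Kohonen algorithm is omitted for brevity.

```python
# Kohonen-style updates with missing values: the best-matching unit is found
# over observed components only, and missing entries are then estimated from
# the winning codebook vector. Schematic reconstruction, not the exact method.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(25, 4))            # 5x5 map (flattened), 4-dimensional data

def bmu(x, W):
    mask = ~np.isnan(x)                 # use observed components only
    return int(np.argmin(np.sum((W[:, mask] - x[mask]) ** 2, axis=1)))

def train_step(x, W, lr=0.1):
    k, mask = bmu(x, W), ~np.isnan(x)
    W[k, mask] += lr * (x[mask] - W[k, mask])   # winner-only update (no neighbourhood)
    return W

x = np.array([0.2, np.nan, 1.3, np.nan])
W = train_step(x, W)
x_hat = np.where(np.isnan(x), W[bmu(x, W)], x)  # estimate the missing entries
print(x_hat)
```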
Styles APA, Harvard, Vancouver, ISO, etc.
38

Bremner, Alexandra P. « Localised splitting criteria for classification and regression trees ». PhD thesis, Murdoch University, 2004. https://researchrepository.murdoch.edu.au/id/eprint/440/.

Texte intégral
Résumé :
This thesis presents a modification of existing entropy-based splitting criteria for classification and regression trees. Trees are typically grown using splitting criteria that choose optimal splits without taking future splits into account. This thesis examines localised splitting criteria that are based on local averaging in regression trees or local proportions in classification trees. The use of a localised criterion is motivated by the fact that future splits result in leaves that contain local observations, and hence local deviances provide a better approximation of the deviance of the fully grown tree. While most recent research has focussed on tree-averaging techniques that aim to take a moderately successful splitting criterion and improve its predictive power, this thesis concentrates on improving the splitting criterion itself. Use of a localised splitting criterion captures local structures and enables later splits to capitalise on the placement of earlier splits when growing a tree. Using the localised splitting criterion results in much simpler trees for pure interaction data (data with no main effects) and can produce trees with fewer errors and lower residual mean deviances than those produced using a global splitting criterion when applied to real data sets with strong interaction effects. The superiority of the localised splitting criterion can persist when multiple trees are grown and averaged using simple methods. Although a single tree grown using the localised splitting criterion can outperform tree averaging using the global criterion, improvements in predictive performance are generally achieved by utilising the localised splitting criterion's ability to detect local discontinuities and averaging over sets of trees grown by placing splits where the deviance is locally minimal. Predictive performance improves further when the degree of localisation of the splitting criterion is randomly selected and weighted randomisation is used with locally minimal deviances to produce sets of trees to average over. Although state-of-the-art methods quickly average very large numbers of trees, making the performance of the splitting criterion less critical, predictive performance when the localised criterion is used in bagging indicates that different splitting methods warrant investigation. The localised splitting criterion is most useful for growing one tree or a small number of trees to examine structure in the data. Structurally different trees can be obtained by simply splitting the data where the localised splitting criterion is locally optimal.
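A toy sketch of the localisation idea, assuming a simple window-based local deviance; the thesis's actual criterion and its degree-of-localisation parameter may well differ.

```python
# Choose a regression-tree split by deviance computed only over observations
# near the candidate split point (local averaging). Illustrative sketch.
import numpy as np

def localised_split(x, y, bandwidth=0.5):
    order = np.argsort(x)
    x, y = x[order], y[order]
    best_s, best_dev = None, np.inf
    for s in x[1:-1]:
        near = np.abs(x - s) <= bandwidth      # local window around the split
        left, right = near & (x <= s), near & (x > s)
        if left.sum() < 2 or right.sum() < 2:
            continue
        dev = ((y[left] - y[left].mean()) ** 2).sum() \
            + ((y[right] - y[right].mean()) ** 2).sum()
        if dev < best_dev:
            best_s, best_dev = s, dev
    return best_s

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 200)
y = np.sign(np.sin(3 * x)) + rng.normal(0, 0.1, 200)
print("chosen split:", localised_split(x, y))
```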
Styles APA, Harvard, Vancouver, ISO, etc.
39

Bremner, Alexandra P. « Localised splitting criteria for classification and regression trees ». Access via Murdoch University Digital Theses Project, 2004. http://wwwlib.murdoch.edu.au/adt/browse/view/adt-MU20040606.142949.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
40

Podder, Mohua. « Robust genotype classification using dynamic variable selection ». Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/1602.

Texte intégral
Résumé :
Single nucleotide polymorphisms (SNPs) are DNA sequence variations occurring when a single nucleotide (A, T, C or G) is altered. Arguably, SNPs account for more than 90% of human genetic variation. Dr. Tebbutt's laboratory has developed a highly redundant SNP genotyping assay, based on arrayed primer extension (APEX), consisting of multiple probes with signals from multiple channels for a single SNP. The strength of this platform is its unique redundancy of having multiple probes for a single SNP. Using this microarray platform, we have developed fully automated genotype-calling algorithms based on linear models for individual probe signals and using dynamic variable selection at the prediction level. The algorithms combine separate analyses based on the multiple probe sets to give a final confidence score for each candidate genotype. Our proposed classification model achieved an accuracy of >99.4% with a 100% call rate for the SNP genotype data, which is comparable with existing genotyping technologies. We discuss the appropriateness of the proposed model relative to other existing high-throughput genotype-calling algorithms. In this thesis we explore three new ideas for classification with high-dimensional data: (1) ensembles of various sets of predictors with a built-in dynamic property; (2) robust classification at the prediction level; and (3) a proper confidence measure for dealing with failed predictor(s). We found that a mixture model for classification provides robustness against outlying values of the explanatory variables. Furthermore, the algorithm chooses among different sets of explanatory variables in a dynamic way, prediction by prediction. We analyzed several data sets, including real and simulated samples, to illustrate these features. Our model-based genotype-calling algorithm captures the redundancy in the system by considering all the underlying probe features of a particular SNP, automatically down-weighting any 'bad data' corresponding to image artifacts on the microarray slide or failure of a specific chemistry. Though motivated by this genotyping application, the proposed methodology applies to other classification problems where the explanatory variables fall naturally into groups or where outliers in the explanatory variables require variable selection at the prediction stage for robustness.
Styles APA, Harvard, Vancouver, ISO, etc.
41

Brandoni, Domitilla <1994>. « Tensor-Train decomposition for image classification problems ». Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amsdottorato.unibo.it/10121/3/phd_thesis_DomitillaBrandoni_final.pdf.

Texte intégral
Résumé :
In recent years a great deal of effort has been put into the development of new techniques for automatic object classification, also because of their consequences for many applications such as medical imaging or driverless cars. To this end, several mathematical models have been developed, from logistic regression to neural networks. A crucial aspect of these so-called classification algorithms is the use of algebraic tools to represent and approximate the input data. In this thesis, we examine two different models for image classification based on a particular tensor decomposition named the Tensor-Train (TT) decomposition. The use of tensor approaches preserves the multidimensional structure of the data and the neighboring relations among pixels. Furthermore, the Tensor-Train, unlike other tensor decompositions, does not suffer from the curse of dimensionality, making it an extremely powerful strategy when dealing with high-dimensional data. It also allows data compression when combined with truncation strategies that reduce memory requirements without spoiling classification performance. The first model we propose is based on a direct decomposition of the database by means of the TT decomposition to find basis vectors used to classify a new object. The second model is a tensor dictionary-learning model, based on the TT decomposition, where the terms of the decomposition are estimated using a proximal alternating linearized minimization algorithm with a spectral stepsize.
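Since both models rest on the TT decomposition, a generic TT-SVD sketch may help fix ideas; the truncation threshold eps is the memory/accuracy knob mentioned above. This is the standard algorithm, not the classification pipeline of the thesis.

```python
# TT-SVD: sequential reshapes and truncated SVDs factor a tensor into TT cores.
import numpy as np

def tt_svd(T, eps=1e-10):
    dims, cores, r = T.shape, [], 1
    M = T.reshape(r * dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        rank = max(1, int(np.sum(s > eps * s[0])))        # truncated TT-rank
        cores.append(U[:, :rank].reshape(r, dims[k], rank))
        M = (s[:rank, None] * Vt[:rank]).reshape(rank * dims[k + 1], -1)
        r = rank
    cores.append(M.reshape(r, dims[-1], 1))
    return cores

T = np.random.default_rng(0).normal(size=(4, 5, 6))
print([c.shape for c in tt_svd(T)])   # one 3-way core per tensor mode
```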
Styles APA, Harvard, Vancouver, ISO, etc.
42

Ali, Khan Syed Irteza. « Classification using residual vector quantization ». Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50300.

Texte intégral
Résumé :
Residual vector quantization (RVQ) is a 1-nearest-neighbor (1-NN) type of technique. RVQ is a multi-stage implementation of regular vector quantization: an input is successively quantized to the nearest codevector in each stage codebook. In classification, nearest-neighbor techniques are very attractive since they model the ideal Bayes class boundaries very accurately. However, nearest-neighbor classification requires a large representative dataset, and since a test input is assigned a class membership only after an exhaustive search of the entire training set, a reasonably large training set can make the implementation of a nearest-neighbor classifier unfeasibly costly. Although the k-d tree structure offers a far more efficient implementation of 1-NN search, the cost of storing the data points can become prohibitive, especially in higher dimensions. RVQ offers an attractive solution for a cost-effective implementation of 1-NN-based classification: because of the direct-sum structure of the RVQ codebook, the memory and computational cost of a 1-NN-based system is greatly reduced. Although, compared to an equivalent 1-NN system, the multi-stage implementation of the RVQ codebook compromises the accuracy of the class boundaries, the classification error has been empirically shown to be within 3% to 4% of the performance of an equivalent 1-NN-based classifier.
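A compact sketch of the multi-stage structure described here; training each stage with k-means is a common choice and an assumption on our part.

```python
# Residual vector quantization: each stage quantizes the residual left by the
# previous stage, so small per-stage codebooks combine in a direct sum.
import numpy as np
from sklearn.cluster import KMeans

def train_rvq(X, n_stages=3, codebook_size=8, seed=0):
    stages, R = [], X.copy()
    for _ in range(n_stages):
        km = KMeans(n_clusters=codebook_size, n_init=10, random_state=seed).fit(R)
        stages.append(km.cluster_centers_)
        R = R - km.cluster_centers_[km.labels_]   # residual goes to the next stage
    return stages

def encode(x, stages):
    idx, r = [], x.copy()
    for C in stages:
        j = int(np.argmin(((C - r) ** 2).sum(axis=1)))    # nearest stage codevector
        idx.append(j)
        r = r - C[j]
    return idx

X = np.random.default_rng(0).normal(size=(200, 5))
print(encode(X[0], train_rvq(X)))                         # one index per stage
```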
Styles APA, Harvard, Vancouver, ISO, etc.
43

Lemoine, Yves. « Classification et discrimination : analyse discriminante typologique et applications ». Metz : Université Metz, 2008. ftp://ftp.scd.univ-metz.fr/pub/Theses/1979/Lemoine.Yves.SMZ79004.pdf.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
44

Ammoura, Adnan. « Géométrie anallagmatique et triangulation de Delaunay : contribution de l'analyse des données aux études marketing sur les médicaments ». Paris 6, 1988. http://www.theses.fr/1988PA066022.

Texte intégral
Résumé :
In the theoretical part, we present the contribution of the Delaunay triangulation to ascending hierarchical classification; we show the uniqueness of the Delaunay decomposition in the generic case where the facets of the convex hull of the set to be classified are all simplices. Thanks to this theoretical work, we can state that the Delaunay triangulation method underlies the accelerated AHC (ascending hierarchical classification) algorithm. In the applied part, we present the contribution of data analysis to marketing studies on medicines by applying correspondence analysis to a data table containing 99 medicines, with the objective of identifying the medicines most used and most appreciated by the patients surveyed, their spouses and their children. Thanks to the technique of grouping individuals into similar categories by linear combination of the profile modalities, we were able to clearly determine the medicines most used and most appreciated by the patients, their spouses and their children according to ailment, region, age group and social class.
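How a Delaunay triangulation can accelerate ascending hierarchical classification is easy to sketch: only point pairs joined by a Delaunay edge need be considered as merge candidates, instead of all O(n^2) pairs. A schematic illustration, not the thesis's algorithm:

```python
# Delaunay edges as candidate merges for agglomerative clustering (sketch).
import numpy as np
from scipy.spatial import Delaunay

pts = np.random.default_rng(0).random((30, 2))
tri = Delaunay(pts)

edges = set()
for simplex in tri.simplices:            # each triangle contributes three edges
    for i in range(3):
        a, b = sorted((int(simplex[i]), int(simplex[(i + 1) % 3])))
        edges.add((a, b))

dist = lambda e: np.linalg.norm(pts[e[0]] - pts[e[1]])
print("first merge candidate:", min(edges, key=dist))
```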
Styles APA, Harvard, Vancouver, ISO, etc.
45

Hamed, Nabil. « Conception et réalisation d'un système de classification en télédétection par combinaison d'analyses radiométriques et spatiales ». Université Louis Pasteur (Strasbourg) (1971-2008), 1987. http://www.theses.fr/1987STR13153.

Texte intégral
Résumé :
When the spatial resolution becomes too fine, radiometric analysis alone is insufficient for extracting information from remote-sensing images. To overcome this problem, we present a new classification approach that combines a spatial analysis with the radiometric analysis.
Styles APA, Harvard, Vancouver, ISO, etc.
46

Burka, Zak. « Perceptual audio classification using principal component analysis / ». Online version of thesis, 2010. http://hdl.handle.net/1850/12247.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
47

Reiner, Ulrike. « Automatic Analysis of Dewey Decimal Classification Notations ». Universitätsbibliothek Chemnitz, 2007. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200701390.

Texte intégral
Résumé :
Ulrike Reiner, of the Verbundzentrale des Gemeinsamen Bibliotheksverbundes (VZG) in Göttingen, presented her project on the automatic analysis of Dewey Decimal Classification (DDC) notations. DDC notations are typically long and complex, and their construction follows numerous rules. Her computer program analyses a DDC notation and outputs all DDC notations contained in it, together with their DDC class captions. The extracted class captions can be used, for example, for DDC-based searching.
Styles APA, Harvard, Vancouver, ISO, etc.
48

Stammers, Jon. « Audio event classification for urban soundscape analysis ». Thesis, University of York, 2011. http://etheses.whiterose.ac.uk/19142/.

Texte intégral
Résumé :
The study of urban soundscapes has gained momentum in recent years as more people become concerned with the level of noise around them and the negative impact this can have on comfort. Monitoring the sounds present in a sonic environment can be a laborious and time-consuming process if performed manually. Therefore, techniques for automated signal identification are gaining importance if soundscapes are to be objectively monitored. This thesis presents a novel approach to feature extraction for the purpose of classifying urban audio events, adding to the library of techniques already established in the field. The research explores how techniques with their origins in the encoding of speech signals can be adapted to represent the complex everyday sounds all around us to allow accurate classification. The analysis methods developed herein are based on the zero-crossings information contained within a signal. Originally developed for the classification of bioacoustic signals, the codebook of Time-Domain Signal Coding (TDSC) has its band-limited restrictions removed to become more generic. Classification using features extracted with the new codebook achieves accuracies of over 80% when combined with a Multilayer Perceptron classifier. Further advancements are made to the standard TDSC algorithm, drawing inspiration from wavelets, resulting in a novel dyadic representation of time-domain features. Carrying the label of Multiscale TDSC (MTDSC), classification accuracies of 70% are achieved using these features. Recommendations for further work focus on expanding the library of training data to improve the accuracy of the classification system. Further research into classifier design is also suggested.
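A rough sketch of zero-crossing-based, TDSC-style features, assuming each inter-crossing segment is described by its duration and a peak-count shape measure; the actual TDSC codebook and its MTDSC extension are not reproduced here.

```python
# Histogram of (duration, shape) codes over zero-crossing segments (sketch).
import numpy as np

def tdsc_features(x, n_dur=8, n_shape=4):
    zc = np.where(np.diff(np.signbit(x)))[0] + 1        # zero-crossing positions
    hist = np.zeros((n_dur, n_shape))
    for a, b in zip(zc[:-1], zc[1:]):
        seg = x[a:b]
        dur = min(len(seg), n_dur) - 1                  # quantized duration bin
        peaks = 0
        if len(seg) > 2:                                # local maxima as shape cue
            peaks = int(np.sum((seg[1:-1] > seg[:-2]) & (seg[1:-1] > seg[2:])))
        hist[dur, min(peaks, n_shape - 1)] += 1
    return hist.ravel() / max(hist.sum(), 1)            # normalized feature vector

t = np.linspace(0, 1, 8000)
x = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 1330 * t)
print(tdsc_features(x).round(3))
```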
Styles APA, Harvard, Vancouver, ISO, etc.
49

Luo, Xiang Yang. « Color image analysis for cereal grain classification ». Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq23630.pdf.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
50

Voicu, Iulian. « Analyse, caractérisation et classification de signaux foetaux ». Phd thesis, Université François Rabelais - Tours, 2011. http://tel.archives-ouvertes.fr/tel-00907317.

Texte intégral
Résumé :
This thesis belongs to the biomedical field, at the interface between instrumentation and signal processing. The objective of this work is to obtain, by combining different sources of information, monitoring of fetal activity (heart rate and fetal movements) in order to assess the fetus's state of well-being or distress at the different stages of pregnancy. Currently, the parameters that characterise fetal distress, derived from the heart rate and fetal movements, are evaluated by the physician and combined in the Manning score. This has two major drawbacks: a) the evaluation of the score is too long, since it takes one hour; b) there are inter- and intra-operator variations leading to different interpretations of the patient's medical assessment. To overcome these disadvantages we evaluate fetal well-being objectively, through the computation of a score. To this end, we developed a multi-sensor ultrasound technology (12 sensors) allowing some sixty (pairs of) Doppler signals to be collected from the heart and from the lower and upper limbs. Our first contribution in this thesis is the development of new heart-rate detection algorithms (single-channel and multi-channel). Our second contribution concerns the implementation of two categories of parameters based on the heart rate: a) the class of "traditional" parameters used by obstetricians and also evaluated in the Manning test (baseline, accelerations, decelerations); b) the class of parameters used by cardiologists to characterise the complexity of a time series (approximate entropy, sample entropy, multiscale entropy, recurrence plots, etc.). Our third contribution consists of notable modifications to the various algorithms for computing fetal movements and the parameters derived from them, such as the number of movements, the percentage of time the fetus spends moving, and the duration of movements. Our fourth contribution concerns the joint analysis of the heart rate and fetal movements. This analysis leads to the identification of different behavioural states. The development, or failure to develop, of these states is an indicator of the neurological evolution of the fetus. We propose to evaluate the movement parameters for each behavioural state. Finally, our last contribution concerns the implementation of different scores and the classifications derived from them. The immediate perspectives of this work concern the integration of the most relevant scores or parameters into a home-monitoring device or a clinical monitoring device.
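As an illustration of one of the complexity measures listed in this abstract, a minimal sample-entropy sketch; the embedding dimension m = 2 and tolerance r = 0.2 standard deviations are conventional defaults, not the thesis's settings.

```python
# Sample entropy: negative log of the conditional probability that sequences
# matching for m points also match for m+1 points, within tolerance r.
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def matches(mm):
        T = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        c = 0
        for i in range(len(T)):
            d = np.max(np.abs(T - T[i]), axis=1)        # Chebyshev distance
            c += int(np.sum(d <= tol)) - 1              # exclude the self-match
        return c
    B, A = matches(m), matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rr = np.random.default_rng(0).normal(800, 50, 300)      # toy RR-interval series (ms)
print("SampEn:", round(sample_entropy(rr), 3))
```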
Styles APA, Harvard, Vancouver, ISO, etc.