Dissertations on the topic "Classification"

To see the other types of publications on this topic, follow the link: Classification.


Get to know the top 50 dissertations for research on the topic "Classification".

Next to every entry in the bibliography, the option "Add to bibliography" is available. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication in PDF format and read an online annotation of the work, provided the relevant parameters are available in its metadata.

Browse dissertations from a wide range of disciplines and compile a correctly formatted bibliography.

1

Bogers, Toine, Willem Thoonen, and Antal van den Bosch. „Expertise classification: Collaborative classification vs. automatic extraction“. dLIST, 2006. http://hdl.handle.net/10150/105709.

Annotation:
Social classification is the process in which a community of users categorizes the resources in that community for their own use. Given enough users and categorization, this will lead to any given resource being represented by a set of labels or descriptors shared throughout the community (Mathes, 2004). Social classification has become an extremely popular way of structuring online communities in recent years. Well-known examples of such communities are the bookmarking websites Furl (http://www.furl.net/) and del.icio.us (http://del.icio.us/), and Flickr (http://www.flickr.com/), where users can post their own photos and tag them. Social classification, however, is not limited to tagging resources: another possibility is to tag people, examples of which are Consumating (http://www.consumating.com/), a collaborative tag-based personals website, and Kevo (http://www.kevo.com/), a website that lets users tag and contribute media and information on celebrities. Another application of people tagging is expertise classification, an emerging subfield of social classification. Here, members of a group or community are classified and ranked based on the expertise they possess on a particular topic. Expertise classification essentially comprises two components: expertise tagging and expert ranking. Expertise tagging focuses on describing one person at a time by assigning tags that capture that person's topical expertise, such as 'speech recognition' or 'small-world networks'. Expert ranking, in turn, ranks community members with respect to a specific information request, such as, for instance, a query submitted to a search engine. Methods are developed to combine the information about individual members' expertise (tags) to provide on-the-fly, query-driven rankings of community members. Expertise classification can be done in two principal ways. The simplest option follows the principle of social bookmarking websites: members are asked to supply tags that describe their own expertise and to rank the other community members with regard to a specific request for information. Alternatively, automatic expertise classification ideally extracts expertise terms automatically from a user's documents and e-mails by looking for terms that are representative of that user. These terms are then matched against the information request to produce an expert ranking of all community members. In this paper we describe such an automatic method of expertise classification and evaluate it using human expertise classification judgments. In the next section we describe some of the related work on expertise classification, after which we describe our automatic method of expertise classification and our evaluation of it in Sections 3 and 4. Sections 5.1 and 5.2 describe our findings on expertise tagging and expert rankings, followed by discussion and conclusions in Section 6 and recommendations for future work in Section 7.
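The two components described above, expertise tags per member and query-driven expert ranking, can be illustrated with a small sketch. This is not the method evaluated in the paper; the member names, tags, and the overlap-based scoring scheme are illustrative assumptions.

```python
# Minimal sketch of query-driven expert ranking over expertise tags.
# Members, tags, and the scoring scheme are illustrative assumptions,
# not the method evaluated in the paper above.

def rank_experts(query_terms, member_tags):
    """Score each member by overlap between query terms and expertise tags."""
    scores = {}
    query = {t.lower() for t in query_terms}
    for member, tags in member_tags.items():
        tags = {t.lower() for t in tags}
        # Simple Jaccard-style score; a real system would weight tags,
        # e.g. by how often a tag was assigned or extracted.
        scores[member] = len(query & tags) / len(query | tags) if tags else 0.0
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    members = {
        "alice": ["speech recognition", "language modeling"],
        "bob": ["small-world networks", "graph theory"],
        "carol": ["speech recognition", "small-world networks"],
    }
    for member, score in rank_experts(["speech recognition"], members):
        print(f"{member}: {score:.2f}")
```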
2

Ravindra, Dilip. „Firmware and classification algorithm development for vehicle classification“. Thesis, California State University, Long Beach, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=1603749.

Annotation:

Vehicle classification is one of the active research topics in Intelligent Transport Systems. This project proposes an approach to classifying vehicles on a freeway with respect to vehicle size. The classification is based on a threshold-based algorithm. The system consists of two AMR magneto-resistive sensors connected to a TI MSP430 development board. The data collected from the two magneto-resistive sensors are analyzed and supplied to the threshold-based algorithm to differentiate the vehicles. Using a minimal number of features extracted from the data, it was possible to produce a very efficient algorithm that is capable of differentiating the vehicles.
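A minimal sketch of a threshold-based classifier over two magnetometer traces, in the spirit of the approach described above; the feature choices, threshold values, and sample rate are illustrative assumptions, not those of the project.

```python
# Sketch of a threshold-based vehicle classifier over two magnetometer
# signals. Thresholds, features, and the synthetic traces are assumptions.
import numpy as np

def classify_vehicle(sensor_a, sensor_b, sample_rate_hz=128,
                     small_peak=120.0, large_peak=400.0):
    """Classify a detection event by signal peak and duration."""
    a = np.abs(np.asarray(sensor_a, dtype=float))
    b = np.abs(np.asarray(sensor_b, dtype=float))
    peak = max(a.max(), b.max())          # signal strength roughly tracks vehicle size
    duration = len(a) / sample_rate_hz    # time over the sensors roughly tracks length
    if peak < small_peak and duration < 0.5:
        return "motorcycle/small car"
    if peak < large_peak:
        return "passenger car"
    return "truck/large vehicle"

if __name__ == "__main__":
    event_a = 100 * np.hanning(64)        # synthetic magnetometer trace
    event_b = 90 * np.hanning(64)
    print(classify_vehicle(event_a, event_b))
```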

3

Phillips, Rhonda D. „A Probabilistic Classification Algorithm With Soft Classification Output“. Diss., Virginia Tech, 2009. http://hdl.handle.net/10919/26701.

Annotation:
This thesis presents a shared-memory parallel version of the hybrid classification algorithm IGSCR (iterative guided spectral class rejection), a novel data reduction technique that can be used in conjunction with PIGSCR (parallel IGSCR), a noise removal method based on the maximum noise fraction (MNF), and a continuous version of IGSCR (CIGSCR) that outputs soft classifications. All of the above are either classification algorithms or preprocessing algorithms necessary prior to the classification of high dimensional, noisy images. PIGSCR was developed to produce fast and portable code using Fortran 95, OpenMP, and the Hierarchical Data Format version 5 (HDF5) and its accompanying data access library. The feature reduction method introduced in this thesis is based on the singular value decomposition (SVD), and experiments demonstrated that SVD-based feature reduction can lead to more accurate IGSCR classifications than PCA-based feature reduction. The thesis also describes a new algorithm used to adaptively filter a remote sensing dataset based on signal-to-noise ratios (SNRs) once the maximum noise fraction (MNF) has been applied. The adaptive filtering scheme improves image quality, as shown by estimated SNRs and classification accuracy improvements greater than 10%. The continuous iterative guided spectral class rejection (CIGSCR) classification method is based on the iterative guided spectral class rejection (IGSCR) classification method for remotely sensed data. Both CIGSCR and IGSCR use semisupervised clustering to locate clusters that are associated with classes in a classification scheme. This type of semisupervised classification method is particularly useful in remote sensing, where datasets are large, training data are difficult to acquire, and clustering makes the identification of subclasses adequate for training purposes less difficult. Experimental results indicate that the soft classification output by CIGSCR is reasonably accurate (when compared to IGSCR), and the fundamental algorithmic changes in CIGSCR (from IGSCR) result in CIGSCR being less sensitive to input parameters that influence iterations.
Ph. D.
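The SVD-based feature reduction step discussed in the abstract above can be sketched as follows; the data are synthetic and the downstream classifier (a Gaussian naive Bayes model) is a simple stand-in rather than IGSCR or its Fortran/OpenMP parallel implementation.

```python
# Sketch of SVD-based feature reduction prior to classification.
# Data are synthetic; Gaussian naive Bayes is a stand-in, not IGSCR.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
n_pixels, n_bands, k = 500, 64, 8           # hyperspectral-style matrix, keep k components
X = rng.normal(size=(n_pixels, n_bands))
y = (X[:, :4].sum(axis=1) > 0).astype(int)  # toy two-class labels

# Reduce the spectral dimension with a truncated SVD of the centered data.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
X_reduced = Xc @ Vt[:k].T                   # project onto the top-k right singular vectors

clf = GaussianNB().fit(X_reduced, y)
print("training accuracy on reduced features:", clf.score(X_reduced, y))
```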
4

Матусевич, Олександр Павлович. „Classification Fonts“. Thesis, Київський національний університет технологій та дизайну, 2017. https://er.knutd.edu.ua/handle/123456789/7344.

5

Ярмак, Любов Павлівна, Любовь Павловна Ярмак, Liubov Pavlivna Yarmak, Оксана Робертівна Гладченко, Оксана Робертовна Гладченко and Oksana Robertivna Hladchenko. „Test classification“. Thesis, Сумський державний університет, 2014. http://essuir.sumdu.edu.ua/handle/123456789/34677.

Annotation:
We will outline here rather briefly some of the ways tests can be classified. Understanding contrasting exam types can be helpful to teachers since tests of one kind may not always be successfully substituted for those of another kind.
6

Taylor, Paul Clifford. „Classification trees“. Thesis, University of Bath, 1990. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.306312.

7

Bonneau, Jean-Christophe. „La classification des contrats : essai d'une analyse systémique des classifications du Code civil“. Grenoble, 2010. http://www.theses.fr/2010GREND017.

Annotation:
La classification des contrats telle qu'elle est énoncée aux articles 1102 et suivants du Code civil se distingue structurellement des classifications modernes lui ayant été ajoutées. Prenant au sérieux l'idée d’une approche globale de la classification, les classifications du Code civil, séparées d'un régime juridique qui ne dépend pas en réalité d'elles et de notions qui lui sont étrangères, comme la cause, ont été envisagées dans leurs rapports de logique et de complémentarité. L'existence des chaînes de classifications, nouvelle classification résultant de l'assemblage cohérent des différentes classifications prévues par le Code civil, a pu être révélée au terme d'une étude visant à comprendre comment ces classifications se lient et se combinent entre elles. Les fonctionnalités de la classification des contrats ont alors été déduites de la structure même des classifications du Code civil réunies en chaînes. Celles-ci ont pour propriété de révéler ce qui constitue l'essence du contrat, en permettant de le distinguer de certaines figures qui tentent de s'y assimiler mais s'en distinguent néanmoins dès lors que l'aptitude d'un objet juridique à s'intégrer dans les chaînes de classifications est perçu comme conditionnant la qualification contractuelle elle-même. Envisagées comme un critère privilégié de définition du contrat, qui peut inspirer les projets visant à élaborer un droit européen des contrats, les chaînes de classifications ont ensuite été pensées dans leurs rapports avec la diversité des contrats nommés. Les chaînes de classifications absorbent ces derniers ainsi que leur régime juridique qui peut, en conséquence, être transposé aux contrats innomés. Permettant un renouvellement des regroupements et des distinctions généralement perçus, les chaînes de classifications apportent un éclairage nouveau au processus de qualification du contrat, contribuent à préciser le domaine de la modification du contrat, et fournissent, enfin, un fondement à l'action contractuelle directe qui s'exerce dans les chaînes de contrats
The classification of contracts as stated in Articles 1102 onwards of the civil Code is structurally distinct from the modern classifications that have been added to it. Taking seriously the idea of a global approach to classification, the classifications of the civil Code, separated from a legal regime that does not in fact depend on them and from notions that are foreign to them, such as the concept of "cause", were considered in their relations of logic and complementarity. The existence of chains of classifications, a new classification resulting from the coherent assembly of the various classifications provided for by the civil Code, was brought to light through a study aiming to understand how these classifications are bound and combined with one another. The features of the classification of contracts were then deduced from the very structure of the classifications of the civil Code combined in chains. These chains have the property of revealing what constitutes the essence of the contract, by making it possible to distinguish it from certain figures which try to assimilate to it but nevertheless differ from it, since the capacity of a legal object to become integrated into the chains of classifications is perceived as conditioning the contractual qualification itself. Considered as a preferred criterion for the definition of the contract, which can inspire projects aiming at the elaboration of a body of European contract law, the chains of classifications were then conceptualised in their connections with the variety of named contracts. The chains of classifications absorb these contracts as well as their legal regime, which can, consequently, be transposed to unnamed contracts. Allowing a renewal of the groupings and distinctions generally perceived, the chains of classifications shed new light on the process of qualification of the contract, contribute to specifying the domain of the modification of the contract, and finally supply a foundation for the direct contractual action which is exercised within chains of contracts.
8

Van der Westhuizen, Cornelius Stephanus. „Nearest hypersphere classification: a comparison with other classification techniques“. Thesis, Stellenbosch : Stellenbosch University, 2014. http://hdl.handle.net/10019.1/95839.

Annotation:
Thesis (MCom)--Stellenbosch University, 2014.
ENGLISH ABSTRACT: Classification is a widely used statistical procedure to classify objects into two or more classes according to some rule which is based on the input variables. Examples of such techniques are Linear and Quadratic Discriminant Analysis (LDA and QDA). However, classification of objects with these methods can get complicated when the number of input variables in the data become too large (􀝊 ≪ 􀝌), when the assumption of normality is no longer met or when classes are not linearly separable. Vapnik et al. (1995) introduced the Support Vector Machine (SVM), a kernel-based technique, which can perform classification in cases where LDA and QDA are not valid. SVM makes use of an optimal separating hyperplane and a kernel function to derive a rule which can be used for classifying objects. Another kernel-based technique was proposed by Tax and Duin (1999) where a hypersphere is used for domain description of a single class. The idea of a hypersphere for a single class can be easily extended to classification when dealing with multiple classes by just classifying objects to the nearest hypersphere. Although the theory of hyperspheres is well developed, not much research has gone into using hyperspheres for classification and the performance thereof compared to other classification techniques. In this thesis we will give an overview of Nearest Hypersphere Classification (NHC) as well as provide further insight regarding the performance of NHC compared to other classification techniques (LDA, QDA and SVM) under different simulation configurations. We begin with a literature study, where the theory of the classification techniques LDA, QDA, SVM and NHC will be dealt with. In the discussion of each technique, applications in the statistical software R will also be provided. An extensive simulation study is carried out to compare the performance of LDA, QDA, SVM and NHC for the two-class case. Various data scenarios will be considered in the simulation study. This will give further insight in terms of which classification technique performs better under the different data scenarios. Finally, the thesis ends with the comparison of these techniques on real-world data.
AFRIKAANSE OPSOMMING: Klassifikasie is ’n statistiese metode wat gebruik word om objekte in twee of meer klasse te klassifiseer gebaseer op ’n reël wat gebou is op die onafhanklike veranderlikes. Voorbeelde van hierdie metodes sluit in Lineêre en Kwadratiese Diskriminant Analise (LDA en KDA). Wanneer die aantal onafhanklike veranderlikes in ’n datastel te veel raak, die aanname van normaliteit nie meer geld nie of die klasse nie meer lineêr skeibaar is nie, raak die toepassing van metodes soos LDA en KDA egter te moeilik. Vapnik et al. (1995) het ’n kern gebaseerde metode bekendgestel, die Steun Vektor Masjien (SVM), wat wel vir klassifisering gebruik kan word in situasies waar metodes soos LDA en KDA misluk. SVM maak gebruik van ‘n optimale skeibare hipervlak en ’n kern funksie om ’n reël af te lei wat gebruik kan word om objekte te klassifiseer. ’n Ander kern gebaseerde tegniek is voorgestel deur Tax and Duin (1999) waar ’n hipersfeer gebruik kan word om ’n gebied beskrywing op te stel vir ’n datastel met net een klas. Dié idee van ’n enkele klas wat beskryf kan word deur ’n hipersfeer, kan maklik uitgebrei word na ’n multi-klas klassifikasie probleem. Dit kan gedoen word deur slegs die objekte te klassifiseer na die naaste hipersfeer. Alhoewel die teorie van hipersfere goed ontwikkeld is, is daar egter nog nie baie navorsing gedoen rondom die gebruik van hipersfere vir klassifikasie nie. Daar is ook nog nie baie gekyk na die prestasie van hipersfere in vergelyking met ander klassifikasie tegnieke nie. In hierdie tesis gaan ons ‘n oorsig gee van Naaste Hipersfeer Klassifikasie (NHK) asook verdere insig in terme van die prestasie van NHK in vergelyking met ander klassifikasie tegnieke (LDA, KDA en SVM) onder sekere simulasie konfigurasies. Ons gaan begin met ‘n literatuurstudie, waar die teorie van die klassifikasie tegnieke LDA, KDA, SVM en NHK behandel gaan word. Vir elke tegniek gaan toepassings in die statistiese sagteware R ook gewys word. ‘n Omvattende simulasie studie word uitgevoer om die prestasie van die tegnieke LDA, KDA, SVM en NHK te vergelyk. Die vergelyking word gedoen vir situasies waar die data slegs twee klasse het. ‘n Verskeidenheid van data situasies gaan ook ondersoek word om verdere insig te toon in terme van wanneer watter tegniek die beste vaar. Die tesis gaan afsluit deur die genoemde tegnieke toe te pas op praktiese datastelle.
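A minimal sketch of the nearest hypersphere idea discussed above: each class is summarised by one hypersphere and new objects are assigned to the closest sphere. Here the sphere is approximated by a class centroid and a radius covering most training points, rather than the SVDD formulation of Tax and Duin (1999), and the data are synthetic.

```python
# Minimal sketch of nearest hypersphere classification (NHC). Each class is
# summarised by a hypersphere approximated by the class centroid plus a
# radius covering most training points; not the full SVDD optimisation.
import numpy as np

class NearestHypersphere:
    def fit(self, X, y, quantile=0.95):
        self.classes_ = np.unique(y)
        self.centers_, self.radii_ = {}, {}
        for c in self.classes_:
            Xc = X[y == c]
            center = Xc.mean(axis=0)
            dists = np.linalg.norm(Xc - center, axis=1)
            self.centers_[c] = center
            self.radii_[c] = np.quantile(dists, quantile)
        return self

    def predict(self, X):
        # Assign each point to the class whose sphere surface is closest
        # (distance to centre minus radius).
        scores = np.stack(
            [np.linalg.norm(X - self.centers_[c], axis=1) - self.radii_[c]
             for c in self.classes_], axis=1)
        return self.classes_[scores.argmin(axis=1)]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0.0, 1.0, (100, 2)), rng.normal(3.0, 1.0, (100, 2))])
    y = np.array([0] * 100 + [1] * 100)
    model = NearestHypersphere().fit(X, y)
    print("training accuracy:", (model.predict(X) == y).mean())
```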
9

Olin, Per. „Evaluation of text classification techniques for log file classification“. Thesis, Linköpings universitet, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-166641.

Annotation:
System log files are filled with logged events, status codes, and other messages. By analyzing the log files, the system's current state can be determined and it can be established whether something went wrong during execution. Log file analysis has been studied for some time, and recent studies have shown state-of-the-art performance using machine learning techniques. In this thesis, document classification solutions were tested on log files in order to distinguish regular system runs from abnormal system runs. To solve this task, supervised and unsupervised learning methods were combined: Doc2Vec was used to extract document features, and Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) based architectures were used on the classification task. With the use of these machine learning models and preprocessing techniques, the tested models yielded an F1-score and accuracy above 95% when classifying log files.
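A hedged sketch of the pipeline described above: Doc2Vec document vectors extracted from log files and fed to a classifier. The log lines and labels are invented, and a logistic regression stands in for the CNN- and LSTM-based architectures used in the thesis.

```python
# Sketch: Doc2Vec features for log files fed to a classifier.
# Log lines and labels are invented; logistic regression is a stand-in
# for the thesis's CNN/LSTM architectures.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.linear_model import LogisticRegression

logs = [
    ("start ok service ready status 200", "normal"),
    ("heartbeat ok status 200 shutdown clean", "normal"),
    ("error timeout retry failed status 500", "abnormal"),
    ("exception stacktrace failed abort status 500", "abnormal"),
]
corpus = [TaggedDocument(words=text.split(), tags=[i]) for i, (text, _) in enumerate(logs)]

doc2vec = Doc2Vec(vector_size=16, min_count=1, epochs=40, seed=1)
doc2vec.build_vocab(corpus)
doc2vec.train(corpus, total_examples=doc2vec.corpus_count, epochs=doc2vec.epochs)

X = [doc2vec.infer_vector(text.split()) for text, _ in logs]
y = [label for _, label in logs]
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict([doc2vec.infer_vector("timeout failed status 500".split())]))
```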
10

Anteryd, Fredrik. „Information Classification in Swedish Governmental Agencies : Analysis of Classification Guidelines“. Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-11493.

Annotation:
Information classification deals with the handling of sensitive information, such as patient records and social security information. It is of utmost importance that this information is treated with caution in order to ensure its integrity and security. In Sweden, the Civil Contingencies Agency has established a set of guidelines for how governmental agencies should handle such information. However, there is a lack of research regarding how well these guidelines are followed, as well as whether the agencies have made their own accommodations of these guidelines. This work presents the results from a survey sent to 245 governmental agencies in Sweden, investigating how information classification is actually performed today. The questionnaire was answered by 144 agencies, and 54 agencies provided detailed documents of their classification process. The overall results show that the classification process is perceived as difficult; the agencies that provided documents proved to have good guidelines, although these were not always consistent with the existing recommendations.
11

Lekic, Sasa, and Kasper Liu. „Intent classification through conversational interfaces : Classification within a small domain“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-257863.

Annotation:
Natural language processing and machine learning are subjects undergoing intense study nowadays. These fields are continually spreading and are more interrelated than ever before. A case in point is text classification, which is an instance of machine learning (ML) application in natural language processing (NLP). Although these subjects have evolved over recent years, they still have some problems that have to be considered. Some are related to the computing power these techniques require, whereas others concern how much training data they require. The research problem addressed in this thesis regards the lack of knowledge on whether machine learning techniques such as Word2Vec, Bidirectional Encoder Representations from Transformers (BERT) and a Support Vector Machine (SVM) classifier can be used for text classification, provided only a small training set. Furthermore, it is not known whether these techniques can be run on regular laptops. To solve the research problem, the main purpose of this thesis was to develop two separate conversational interfaces utilizing text classification techniques. These interfaces, provided with user input, can recognise the intent behind it, viz. classify the input sentence within a small set of pre-defined categories. Firstly, a conversational interface utilizing Word2Vec and an SVM classifier was developed. Secondly, an interface utilizing BERT and an SVM classifier was developed. The goal of the thesis was to determine whether a small dataset can be used for intent classification and with what accuracy, and whether it can be run on regular laptops. The research reported in this thesis followed a standard applied research method. The main purpose was achieved and the two conversational interfaces were developed. Regarding the conversational interface utilizing a pre-trained Word2Vec dataset and an SVM classifier, the main results showed that it can be used for intent classification with an accuracy of 60%, and that it can be run on regular computers. Concerning the conversational interface utilizing BERT and an SVM classifier, the results showed that this interface cannot be trained and run on regular laptops: the training ran for over 24 hours and then crashed. The results showed that it is possible to make a conversational interface which is able to classify intents provided only a small training set. However, due to the small training set, and consequently low accuracy, this conversational interface is not a suitable option for important tasks, but it can be used for some non-critical classification tasks.
Natural language processing och maskininlärning är ämnen som forskas mycket om idag. Dessa områden fortsätter växa och blir allt mer sammanvävda, nu mer än någonsin. Ett område är textklassifikation som är en gren av maskininlärningsapplikationer (ML) inom Natural language processing (NLP).Även om dessa ämnen har utvecklats de senaste åren, finns det fortfarande problem att ha i å tanke. Vissa är relaterade till rå datakraft som krävs för dessa tekniker medans andra problem handlar om mängden data som krävs.Forskningsfrågan i denna avhandling handlar om kunskapsbrist inom maskininlärningtekniker som Word2vec, Bidirectional encoder representations from transformers (BERT) och Support vector machine(SVM) klassificierare kan användas som klassification, givet endast små träningsset. Fortsättningsvis, vet man inte om dessa metoder fungerar på vanliga datorer.För att lösa forskningsproblemet, huvudsyftet för denna avhandling var att utveckla två separata konversationsgränssnitt som använder textklassifikationstekniker. Dessa gränssnitt, give med data, kan känna igen syftet bakom det, med andra ord, klassificera given datamening inom ett litet set av fördefinierade kategorier. Först, utvecklades ett konversationsgränssnitt som använder Word2vec och SVM klassificerare. För det andra, utvecklades ett gränssnitt som använder BERT och SVM klassificerare. Målet med denna avhandling var att avgöra om ett litet dataset kan användas för syftesklassifikation och med vad för träffsäkerhet, och om det kan användas på vanliga datorer.Forskningen i denna avhandling följde en standard tillämpad forskningsmetod. Huvudsyftet uppnåddes och de två konversationsgränssnitten utvecklades. Angående konversationsgränssnittet som använde Word2vec förtränat dataset och SVM klassificerar, visade resultatet att det kan användas för syftesklassifikation till en träffsäkerhet på 60%, och fungerar på vanliga datorer. Angående konversationsgränssnittet som använde BERT och SVM klassificerare, visade resultatet att det inte går att köra det på vanliga datorer. Träningen kördes i över 24 timmar och kraschade efter det.Resultatet visade att det är möjligt att skapa ett konversationsgränssnitt som kan klassificera syften, givet endast ett litet träningsset. Däremot, på grund av det begränsade träningssetet, och konsekvent låg träffsäkerhet, är denna konversationsgränssnitt inte lämplig för viktiga uppgifter, men kan användas för icke kritiska klassifikationsuppdrag.
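A sketch of the first interface described above, intent classification from averaged word vectors and an SVM. The tiny embedding table and training sentences are invented stand-ins for a pre-trained Word2Vec model and a real small training set.

```python
# Sketch of intent classification with averaged word vectors and an SVM.
# The toy embedding table stands in for pre-trained Word2Vec vectors.
import numpy as np
from sklearn.svm import SVC

embeddings = {            # toy 3-dimensional "word vectors"
    "book": [0.9, 0.1, 0.0], "flight": [0.8, 0.2, 0.1],
    "cancel": [0.1, 0.9, 0.0], "ticket": [0.7, 0.3, 0.1],
    "weather": [0.0, 0.1, 0.9], "today": [0.1, 0.0, 0.8],
}

def sentence_vector(sentence):
    # Average the vectors of known words; unknown words are skipped.
    vecs = [embeddings[w] for w in sentence.lower().split() if w in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(3)

train = [("book a flight", "booking"), ("book ticket", "booking"),
         ("cancel my ticket", "cancellation"), ("cancel flight", "cancellation"),
         ("weather today", "weather"), ("today weather", "weather")]
X = np.array([sentence_vector(s) for s, _ in train])
y = [intent for _, intent in train]

clf = SVC(kernel="linear").fit(X, y)
print(clf.predict([sentence_vector("cancel the flight ticket")]))
```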
12

Knudsen, Anne Kari. „Cancer pain classification“. Doctoral thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for kreftforskning og molekylær medisin, 2012. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-16631.

Annotation:
Klassifikasjon av kreftsmerte Kreftsmerte – hva skal et fremtidig klassifikasjonssystem inneholde? Smerte er et subjektivt, sammensatt og plagsomt symptom som forekommer hyppig hos kreftpasienter. Til tross for eksisterende retningslinjer, er det mange kreftpasienter som ikke får god smertebehandling, særlig ved langkommet sykdom. En av mange årsaker til dette, er mangelen på et allment akseptert klassifiseringssystem for kreftsmerte – et verktøy for å stille en korrekt diagnose. På bakgrunn av ovennevnte forhold ble den internasjonale EU-finansierte forskningsgruppen ‘European Palliative Care Research Collaborative’ (EPCRC) dannet. En av gruppens hovedmålsettinger var å utvikle klassifikasjonssystem for tre vanlige symptomer hos kreftpasienter med langtkommet sykdom: smerte, depresjon og ufrivillig vekttap. Arbeidene i denne avhandlingen har vært utført i nær tilknytning til EPCRC. Det overordnete målet med avhandlingen er å bidra i utviklingsprosessen av et internasjonalt klassifikasjonssystem for smerte hos kreftpasienter blant annet ved å finne frem til noen faktorer som er avgjørende for å kunne beskrive en smertetilstand og derved å kunne stille en korrekt smertediagnose. Hovedfunnene i avhandlingen er: Det foreligger flere systemer for klassifisering av smerte hos kreftpasienter, men ingen av disse er i utstrakt bruk, verken i forskning eller klinisk praksis. Smertens intensitet og patofysiologi, forekomst av gjennombruddssmerte, psykisk stress og respons på behandling inngår i to eller flere av de seks formelle systemene som ble funnet ved systematisk litteraturgjennomgang. Pasienter bekreftet i intervju at faktorer påvist å være viktige for kreftsmerte i tidligere studier, også var relevante for deres smerteopplevelse. De vektla fysiske og psykiske aspekter ved det å ha smerte, og søvn ble ansett som en viktig faktor. I en europeisk studie hvor mer enn 2000 kreftpasienter som brukte sterke smertestillende (opioider) deltok, ble følgende faktorer funnet å ha betydning for grad av smerteintensitet og/eller smertelindring: gjennombruddssmerte, smertens lokalisasjon, opioiddose, bruk av svake smertestillende, søvn, psykisk stress, smertens patofysiologi, misbruk av alkohol/narkotika, kreftdiagnose og lokalisasjon av spredning av kreftsykdommen. I en italiensk studie hvor 1800 kreftpasienter deltok, ble de fem førstnevntes relevans bekreftet. Videre ble det i den samme studien påvist at smerteintensitet og opplevd smertelindring målt ved studiens oppstart samt forekomst av gjennombruddssmerte, smertens lokalisasjon, alder og kreftdiagnose var faktorer som kunne predikere smerte etter to uker. Minst tre hovedutfordringer må løses for å komme nærmere et internasjonalt klassifikasjonssystem for kreftsmerte: å velge de mest relevante faktorene for inklusjon i systemet, inkludert å velge et tilstrekkelig antall faktorer, å oppnå enighet om hvilke endepunkt som skal brukes og til slutt å innføre det fremtidige klassifikasjonssystemet i klinisk praksis.
Cancer pain classification – what should be the content of a future system? Pain is a subjective, complex and burdensome symptom which is very common in cancer patients. Despite existing treatment guidelines, many cancer patients still do not receive optimal pain treatment, in particular patients with advanced disease. The lack of a common classification system for cancer pain – a diagnostic tool – has been identified as one of several causes of this undertreatment. Motivated by these considerations, the international EU-funded 'European Palliative Care Research Collaborative' (EPCRC) was established. One of its main aims was to develop a classification system for three common symptoms in cancer patients with advanced disease: pain, depression, and cancer-related weight loss. The papers included in this thesis have been carried out in close collaboration with the EPCRC. The overall aim of the thesis is to contribute to the development of an international classification system for pain in cancer patients, for example by identifying factors that are important for describing pain and thus improving the diagnostics and treatment of cancer pain. The main results in this thesis are: There are several systems for pain classification in cancer patients, but none of them is widely used in research or in clinical practice. Pain intensity and pathophysiology, the presence of breakthrough pain, psychological distress, and response to treatment are included in two or more of the six formal systems that were identified by systematically reviewing the existing literature. Patients confirmed in interviews that the factors identified as important for cancer pain in previous studies were relevant also for their experience of pain. They emphasised physical and psychological aspects of being in pain, and sleep was considered important. In a European study in which more than 2000 cancer patients using strong pain medication (opioids) participated, the following factors were identified as important for the degree of pain intensity and pain relief: breakthrough pain, localisation of pain, opioid dose, use of weak pain medication, sleep, psychological distress, pathophysiology of pain, substance abuse, cancer diagnosis, and localisation of metastases. In an Italian study in which 1800 cancer patients participated, the relevance of the first five factors listed above was confirmed. Furthermore, results from the same study showed that pain intensity and pain relief measured at study start, as well as the presence of breakthrough pain, localisation of pain, age, and cancer diagnosis, were factors that could predict pain after two weeks. At least three major challenges remain in the further development of a future international classification system for cancer pain: to choose the most relevant factors (and how many) to include in the system, to achieve agreement on what outcomes to use, and finally to bring the classification system into use in clinical practice.
13

Karnsund, Alice, and Elin Samuelsson. „Stem Cell Classification“. Thesis, KTH, Skolan för teknikvetenskap (SCI), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-214731.

Annotation:
Machine learning and neural networks have recently become hot topics in many research areas. They have already proved to be useful in the fields of medicine and biotechnology. In these areas, they can be used to facilitate complicated and time consuming analysis processes. An important application is image recognition of cells, tumours etc., which also is the focus of this paper. Our project was to construct both Fully Connected Neural Networks and Convolutional Neural Networks with the ability to recognize pictures of muscular stem cells (MuSCs). We wanted to investigate if the intensity values in each pixel of the images were sufficient to use as input data for classification. By optimizing the structure of our networks, we obtained good results. Using only the pixel values as input, the pictures were correctly classified with up to 95.1% accuracy. If the image size was added to the input data, the accuracy was at best 97.9%. The conclusion was that it is sensible and practical to use pixel intensity values as input data to classification programs. Important relationships exist and by adding some other easily accessible characteristics, the success rate can be compared to a human's ability to classify these cells.
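A sketch of the fully connected variant described above, a network trained directly on pixel intensity values. The images here are synthetic random patches rather than the muscular stem cell data, and the network layout is an illustrative assumption.

```python
# Sketch of a fully connected network trained on raw pixel intensities.
# Images and labels are synthetic; the layer sizes are assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, h, w = 400, 16, 16
images = rng.random((n, h, w))
labels = (images.mean(axis=(1, 2)) > 0.5).astype(int)   # toy "cell / no cell" labels

X = images.reshape(n, -1)                 # flatten pixels into one feature vector
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
net.fit(X_train, y_train)
print("test accuracy:", net.score(X_test, y_test))
```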
14

Evans, Reuben James Emmanuel. „Clustering for Classification“. The University of Waikato, 2007. http://hdl.handle.net/10289/2403.

Annotation:
Advances in technology have provided industry with an array of devices for collecting data. The frequency and scale of data collection mean that there are now many large datasets being generated. To find patterns in these datasets it would be useful to be able to apply modern methods of classification such as support vector machines. Unfortunately these methods are computationally expensive, in fact quadratic in the number of data points, so they cannot be applied directly. This thesis proposes a framework whereby a variety of clustering methods can be used to summarise datasets, that is, reduce them to a smaller but still representative dataset, so that these advanced methods can be applied. It compares the results of using this framework against using random selection on a large number of classification and regression problems. Results show that the clustered datasets are on average fifty percent smaller than the original datasets without loss of classification accuracy, which is significantly better than random selection. They also show that there is no free lunch: for each dataset it is important to choose a clustering method carefully.
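The framework described above, summarising a large dataset with clustering before training an expensive classifier, can be sketched as follows; the data, the number of cluster centres, and the per-class clustering strategy are illustrative assumptions.

```python
# Sketch: summarise a large training set with k-means cluster centres,
# then train an SVM on the much smaller summary. All sizes are assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=5000, n_features=10, random_state=0)

centres, centre_labels = [], []
for label in np.unique(y):
    # Cluster each class separately so every centre has an unambiguous label.
    km = KMeans(n_clusters=50, n_init=10, random_state=0).fit(X[y == label])
    centres.append(km.cluster_centers_)
    centre_labels.append(np.full(50, label))
X_small = np.vstack(centres)
y_small = np.concatenate(centre_labels)

svm = SVC().fit(X_small, y_small)          # trained on 100 points instead of 5000
print("accuracy on the full set:", svm.score(X, y))
```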
15

Magee, Christopher, and Olivier de Weck. „Complex System Classification“. International Council On Systems Engineering (INCOSE), 2004. http://hdl.handle.net/1721.1/6753.

Annotation:
Terms such as “Engineering Systems”, “System of systems” and others have come into greater use over the past decade to denote systems of importance but with implied higher complexity than the term “systems” alone. This paper searches for a useful taxonomy or classification scheme for complex systems. There are two aspects to this problem: 1) distinguishing between Engineering Systems (the term we use) and other systems, and 2) differentiating among Engineering Systems. Engineering Systems are found to be differentiated from other complex systems by being human-designed and having both significant human complexity and significant technical complexity. As far as differentiating among various engineering systems is concerned, it is suggested that functional type is the most useful attribute for differentiation. Information, energy, value and mass acted upon by various processes are the foundation concepts underlying the technical types.
Engineering Systems Division and Mechanical Engineering, Center for Innovation in Product Development
16

Richard, Keelan. „Lexical Aspectual Classification“. Thèse, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/22906.

Annotation:
This work is a first attempt at classification of Lexical Aspect. In this dissertation I describe eight lexical aspectual classes, each initially containing a few members. Using distributional analysis I generate 132 additional seeds, each of which was approved by at least seven out of nine judges. These seeds are in turn fed into a supervised machine learning system, trained on 136 lexical and syntactic features. I experiment on one 8-way classification task, one 3-way classification task, and ten binary classification tasks, and show that five of the eight classes are identified better than by a random baseline measure by a statistically significant margin. Finally, I analyze the relative contribution of each of four feature groups and conclude that the same features which are best in identifying phrasal aspect are also most informative for lexical aspect.
17

Ke, Shih Wen. „Automatic email classification“. Thesis, University of Sunderland, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.488788.

18

Mahrousa, Zakria Zaki. „Computerised electrocardiogram classification“. Thesis, Cardiff University, 2004. http://orca.cf.ac.uk/55932/.

Annotation:
Advances in computing have resulted in many engineering processes being automated. Electrocardiogram (ECG) classification is one such process. The analysis of ECGs can benefit from the wide availability and power of modern computers. This study presents the usage of computer technology in the field of computerised ECG classification. Computerised electrocardiogram classification can help to reduce healthcare costs by enabling suitably equipped general practitioners to refer to hospital only those people with serious heart problems. Computerised ECG classification can also be very useful in shortening hospital waiting lists and saving life by discovering heart diseases early. The thesis investigates the automatic classification of ECGs into different disease categories using Artificial Intelligence (AI) techniques. A comparison of the use of different feature sets and AI classifiers is presented. The feature sets include conventional cardiological features, as well as features taken directly from time domain samples of an ECG. The benchmark AI classifiers tested include those based on neural network, k-Nearest Neighbour and inductive learning techniques. The research proposes two modifications to the learning vector quantisation (LVQ) neural network, namely the All Weights Updating-LVQ (AWU-LVQ) algorithm and the Neighbouring Weights Updating-LVQ (NWU-LVQ) algorithm, yielding an "intelligent" diagnostic heart system with higher accuracy and reduced training time compared to existing AI techniques.
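The proposed AWU-LVQ and NWU-LVQ algorithms modify how prototype weights are updated; the sketch below shows only the standard LVQ1 update they build on, not the proposed modifications, and uses synthetic data in place of ECG features.

```python
# Sketch of the standard LVQ1 prototype update (the basis for the AWU-LVQ
# and NWU-LVQ variants, which are not reproduced here). Data are synthetic.
import numpy as np

def train_lvq1(X, y, prototypes_per_class=2, lr=0.05, epochs=30, seed=0):
    rng = np.random.default_rng(seed)
    protos, proto_labels = [], []
    for c in np.unique(y):                 # initialise prototypes from class samples
        idx = rng.choice(np.flatnonzero(y == c), prototypes_per_class, replace=False)
        protos.append(X[idx])
        proto_labels.append(np.full(prototypes_per_class, c))
    W, wy = np.vstack(protos), np.concatenate(proto_labels)

    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            j = np.argmin(np.linalg.norm(W - X[i], axis=1))   # best matching prototype
            direction = 1.0 if wy[j] == y[i] else -1.0        # attract same class, repel other
            W[j] += direction * lr * (X[i] - W[j])
    return W, wy

def predict(W, wy, X):
    return wy[np.argmin(np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2), axis=1)]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1, (100, 4)), rng.normal(2, 1, (100, 4))])
    y = np.array([0] * 100 + [1] * 100)
    W, wy = train_lvq1(X, y)
    print("training accuracy:", (predict(W, wy, X) == y).mean())
```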
19

Roberts, Paul J. „Automatic product classification“. Thesis, University of Reading, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.542272.

20

Roach, M. J. „Video genre classification“. Thesis, Swansea University, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.638674.

Annotation:
The thesis progresses to look at feature extraction and description for content-based analysis involving audio and visual modes. The motion calculation is itself very simple - just a summation of the difference at points in two images. The measure is filtered using a simple digital filter whose output is analysed by Fourier methods. This is then processed to derive feature vectors from spectral and from regressive analysis. I do have some reservations here - it would appear that the classified data are aliased and there is no discussion of appropriate strategies to mitigate this, or as to whether it leads to untoward effects. I do note that one genre appears largely to be noise. I do have reservations beyond technical ones, for it does appear that some of these points could be discussed within the papers bound into the thesis, but I seek clarification and guidance on these points at the viva voce. I shall also seek information concerning how the parameters were set, since this would appear important, yet some aspects of this choice remain unclear. The research does appear to have developed one of the more demanding data sets for experimental evaluation; this claim could have been amplified, or even made. That the new technique appears to have been deployed successfully would appear to argue against my technical concerns, but there is an issue of generalisation capability, that is, the new approach's likely capability of correctly categorising data that is as yet unseen but is presented in an appropriate format. I certainly consider that the results should be strengthened by an analysis of statistical significance. I certainly agree that there is considerable scope for future work in this area. Mr Roach is also to be congratulated on a good record of conference publication, but I do have some reservations of a technical nature for which I shall seek clarification at the viva voce.
21

Rajan, Jebu Jacob. „Time series classification“. Thesis, University of Cambridge, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.339538.

22

GOMES, FELIPE REIS. „PRODUCT OFFERING CLASSIFICATION“. PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2012. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=22577@1.

Annotation:
Este trabalho apresenta o EasyLearn, um framework para apoiar o desenvolvimento de aplicações voltadas ao aprendizado supervisionado. O EasyLearn define uma camada intermediaria, de simples configuração e entendimento, entre a aplicação e o WEKA, um framework de aprendizado de máquina criado pela Universidade de Waikato. Todos os classificadores e filtros implementados pelo WEKA podem ser facilmente encapsulados para serem utilizados pelo EasyLearn. O EasyLearn recebe como entrada um conjunto de arquivos de configuração no formato XML contendo a definição do fluxo de processamento a ser executado, além da fonte de dados a ser processada, independente do formato. Sua saída é adaptável e pode ser configurada para produzir, por exemplo, relatórios de acurácia da classificação, a própria da fonte de dados classificada, ou o modelo de classificação já treinado. A arquitetura do EasyLearn foi definida após a análise detalhada dos processos de classificação, permitindo identificar inúmeras atividades em comum entre os três processos estudados aprendizado, avaliação e classificação). Através desta percepção e tomando as linguagens orientadas a objetos como inspiração, foi criado um framework capaz de comportar os processos de classificação e suas possíveis variações, além de permitir o reaproveitamento das configurações, através da implementação de herança e polimorfismo para os seus arquivos de configuração. A dissertação ilustra o uso do framework criado através de um estudo de caso completo sobre classificação de produtos do comércio eletrônico, incluindo a criação do corpus, engenharia de atributos e análise dos resultados obtidos.
This dissertation presents EasyLearn, a framework to support the development of supervised learning applications. EasyLearn defines an intermediate layer, which is easy to configure and understand, between the application and WEKA, a machine learning framework created by the University of Waikato. All classifiers and filters implemented by WEKA can be easily encapsulated to be used by EasyLearn. EasyLearn receives as input a set of configuration files in XML format containing the definition of the processing flow to be executed, in addition to the data source to be classified, regardless of format. Its output is customizable and can be configured to produce classification accuracy reports, the classified data source, or the trained classification model. The architecture of EasyLearn was defined after a detailed analysis of the classification process, which identified a set of common activities among the three analyzed processes (learning, evaluation and classification). Through this insight, and taking object-oriented languages as inspiration, a framework was created which is able to support the classification processes and their variations, and which also allows settings to be reused by implementing inheritance and polymorphism in its configuration files. This dissertation also illustrates the use of the created framework by presenting a full case study on e-commerce product classification, including corpus creation, attribute engineering and result analysis.
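EasyLearn itself wraps WEKA and is configured through XML files; the following is only a Python analogy of that core idea, an XML configuration that selects a classifier and its parameters independently of the application code. The tag and attribute names are invented and do not reflect EasyLearn's actual schema.

```python
# Analogy sketch: an XML configuration selects a classifier and its
# parameters, decoupled from the application code. Tag and attribute
# names are invented; EasyLearn itself targets WEKA, not scikit-learn.
import xml.etree.ElementTree as ET
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.datasets import load_iris

CONFIG = """
<flow>
  <classifier name="decision_tree" max_depth="3"/>
  <output report="accuracy"/>
</flow>
"""

REGISTRY = {
    "decision_tree": lambda p: DecisionTreeClassifier(max_depth=int(p.get("max_depth", 0)) or None),
    "naive_bayes": lambda p: GaussianNB(),
}

def build_from_config(xml_text):
    node = ET.fromstring(xml_text).find("classifier")
    return REGISTRY[node.get("name")](node.attrib)

X, y = load_iris(return_X_y=True)
model = build_from_config(CONFIG).fit(X, y)
print("training accuracy:", model.score(X, y))
```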
23

Tang, Xiaoou. „Transform texture classification“. Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/41007.

24

Tsai, Filip, and Henrik Hellström. „Stem Cell Classification“. Thesis, KTH, Skolan för elektro- och systemteknik (EES), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-200606.

25

Almeida, Hugo Ricardo da Costa. „Automatic cymbal classification“. Master's thesis, Faculdade de Ciências e Tecnologia, 2010. http://hdl.handle.net/10362/4923.

Annotation:
Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the degree of Master in Informatics Engineering (Engenharia Informática)
Most of the research on automatic music transcription is focused on the transcription of pitched instruments, like the guitar and the piano. Little attention has been given to unpitched instruments, such as the drum kit, which is a collection of unpitched instruments. Yet, over the last few years this type of instrument has started to garner more attention, perhaps due to the increasing popularity of the drum kit in western music. There has been work on automatic music transcription of the drum kit, especially the snare drum, bass drum, and hi-hat. Still, much work has to be done in order to achieve automatic music transcription of all unpitched instruments. An example of a type of unpitched instrument that has very particular acoustic characteristics and that has received almost no attention from the research community is the drum kit cymbal. A drum kit contains several cymbals, and usually these are treated as a single instrument or are totally disregarded by automatic music classifiers of unpitched instruments. We propose to fill this gap, and as such, the goal of this dissertation is the automatic classification of drum kit cymbal events and the identification of which class of cymbals they belong to. As stated, the majority of work in this area has been done with very different percussive instruments, like the snare drum, bass drum, and hi-hat. Cymbals, on the other hand, are very similar to one another: their geometry, alloys, and spectral and sound traits show us just that. Thus, the achievement of this work is not only to correctly classify the different cymbals, but to be able to distinguish such similar instruments, which makes the task even harder.
26

Gama, João Manuel Portela da. „Combining classification algorithms“. Doctoral thesis, Universidade do Porto. Reitoria, 1999. http://hdl.handle.net/10216/10017.

Annotation:
Doctoral dissertation in Computer Science presented to the Faculdade de Ciências da Universidade do Porto
The ability of a learning algorithm to induce a good generalization for a given problem depends on the representation language used to generalize the examples. Since different algorithms use different representation languages and search strategies, different spaces are explored and different results are obtained. The problem of finding the most suitable representation for the problem at hand is a very active research area. In this dissertation, instead of looking for methods that fit the data using a single representation language, we present a family of algorithms, under the generic designation of Cascade Generalization, in which the search space contains models that use different representation languages. The basic idea of the method is to use the learning algorithms in sequence. At each iteration a two-step process takes place. In the first step, a classifier builds a model. In the second step, the space defined by the attributes is extended by inserting new attributes generated using this model. This process of constructing new attributes builds attributes in the representation language of the classifier used to build the model. If, later in the sequence, a classifier uses one of these new attributes to build its model, its representational power has been extended. In this way, the restrictions of the representation language of the classifiers used at higher levels in the sequence are relaxed by incorporating terms from the representation language of the base classifiers. This is the basic methodology underlying the Ltree system and the Cascade Generalization architecture. The method is presented from two perspectives. In a first part, it is presented as a strategy for building multivariate decision trees. The Ltree system is presented, which uses a linear discriminant as the operator for constructing attributes. ...
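The two-step Cascade Generalization process described above can be sketched directly: a base classifier is trained, its class-probability outputs are appended to the attribute space as new attributes, and a second classifier is trained on the extended space. The classifier choices and data below are illustrative and do not reproduce the Ltree system.

```python
# Sketch of Cascade Generalization: level-1 probability outputs become new
# attributes for the level-2 classifier. Classifiers and data are illustrative.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Level 1: base classifier whose outputs become new attributes.
base = GaussianNB().fit(X_train, y_train)
X_train_ext = np.hstack([X_train, base.predict_proba(X_train)])
X_test_ext = np.hstack([X_test, base.predict_proba(X_test)])

# Level 2: a classifier in a different representation language (a decision
# tree) now has access to the attributes built at level 1.
cascade = DecisionTreeClassifier(random_state=0).fit(X_train_ext, y_train)
print("cascade test accuracy:", cascade.score(X_test_ext, y_test))
print("single tree accuracy :",
      DecisionTreeClassifier(random_state=0).fit(X_train, y_train).score(X_test, y_test))
```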
27

Початко, Тетяна Володимирівна, Татьяна Владимировна Початко, Tetiana Volodymyrivna Pochatko and І. В. Юрко. „Classification of spacecrafts“. Thesis, Видавництво СумДУ, 2007. http://essuir.sumdu.edu.ua/handle/123456789/17453.

28

Samuelsson, Elin, and Alice Karnsund. „Stem Cell Classification“. Thesis, KTH, Skolan för teknikvetenskap (SCI), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210867.

Annotation:
Machine learning and neural networks have recently become hot topics in many research areas. They have already proved to be useful in the fields of medicine and biotechnology. In these areas, they can be used to facilitate complicated and time consuming analysis processes. An important application is image recognition of cells, tumours etc., which also is the focus of this paper. Our project was to construct both Fully Connected Neural Networks and Convolutional Neural Networks with the ability to recognize pictures of muscular stem cells (MuSCs). We wanted to investigate if the intensity values in each pixel of the images were sufficient to use as input data for classification. By optimizing the structure of our networks, we obtained good results. Using only the pixel values as input, the pictures were correctly classified with up to 95.1% accuracy. If the image size was added to the input data, the accuracy was at best 97.9%. The conclusion was that it is sensible and practical to use pixel intensity values as input data to classification programs. Important relationships exist and by adding some other easily accessible characteristics, the success rate can be compared to a human's ability to classify these cells.
29

Sen, Suman Kumar Marron James Stephen. „Classification on manifolds“. Chapel Hill, N.C. : University of North Carolina at Chapel Hill, 2008. http://dc.lib.unc.edu/u?/etd,2726.

Annotation:
Thesis (Ph. D.)--University of North Carolina at Chapel Hill, 2008.
Title from electronic title page (viewed Mar. 10, 2010). "... in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Department of Statistics and Operations Research." Discipline: Statistics and Operations Research; Department/School: Statistics and Operations Research.
30

Payne, Scott Marshall. „Classification of aquifers“. Diss., [Missoula, Mont.] : The University of Montana, 2010. http://etd.lib.umt.edu/theses/available/etd-03082010-112041.

31

Hendges, Graciela Rabuske. „Tackling genre classification“. Florianópolis, SC, 2007. http://repositorio.ufsc.br/xmlui/handle/123456789/90448.

Annotation:
Doctoral thesis - Universidade Federal de Santa Catarina, Centro de Comunicação e Expressão, Graduate Program in Letras/Inglês e Literatura Correspondente
Recent research on scientific communication has revealed that since the late 1990s the use of academic journals has shifted from print to electronic media (Tenopir, 2002, 2003; Tenopir & King, 2001, 2002) and, consequently, it has been predicted that by around 2010 about 80% of journals would have online versions only (Harnad, 1998). However, this research also shows that not all disciplines are migrating to the Internet at the same speed. While areas such as Information Science, Archival Science, Web design and Medicine have shown interest in understanding and explaining this phenomenon, in Applied Linguistics, particularly in Genre Analysis, studies are still scarce. In this work, therefore, I investigate to what extent the electronic medium (the Internet) affects the research article genre in its process of moving from print to electronic media. More specifically, I examine academic articles in HTML in the fields of Linguistics and Medicine in order to verify whether this hypertext is a new genre or not. The methodological approach adopted in this research derives from the proposal of Askehave and Swales (2001) and Swales (2004), in which the predominant criterion for classifying a genre is its communicative purpose, which can only be defined on the basis of a textual as well as a contextual analysis. Accordingly, textual and contextual data were collected and analysed in this study, and the results of both analyses reveal that the academic article in HTML is a new genre, whose communicative purpose is realised through hyperlinks; this genre is therefore deeply dependent on the electronic medium.
32

Gama, João Manuel Portela da. „Combining classification algorithms“. Tese, Universidade do Porto. Reitoria, 1999. http://hdl.handle.net/10216/10017.

Annotation:
Doctoral dissertation in Computer Science presented to the Faculdade de Ciências da Universidade do Porto
The ability of a learning algorithm to induce a good generalization for a given problem depends on the representation language used to generalize the examples. Since different algorithms use different representation languages and search strategies, different spaces are explored and different results are obtained. The problem of finding the most suitable representation for the problem at hand is a very active research area. In this dissertation, instead of looking for methods that fit the data using a single representation language, we present a family of algorithms, under the generic designation of Cascade Generalization, in which the search space contains models that use different representation languages. The basic idea of the method is to use the learning algorithms in sequence. At each iteration a two-step process takes place. In the first step, a classifier builds a model. In the second step, the space defined by the attributes is extended by inserting new attributes generated using this model. This process of constructing new attributes builds attributes in the representation language of the classifier used to build the model. If, later in the sequence, a classifier uses one of these new attributes to build its model, its representational power has been extended. In this way, the restrictions of the representation language of the classifiers used at higher levels in the sequence are relaxed by incorporating terms from the representation language of the base classifiers. This is the basic methodology underlying the Ltree system and the Cascade Generalization architecture. The method is presented from two perspectives. In a first part, it is presented as a strategy for building multivariate decision trees. The Ltree system is presented, which uses a linear discriminant as the operator for constructing attributes. ...
33

De, Hoedt Amanda Marie. „Clubfoot Image Classification“. Thesis, University of Iowa, 2013. https://ir.uiowa.edu/etd/4836.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
Annotation:
Clubfoot is a congenital foot disorder that, left untreated, can limit a person's mobility by making it difficult and painful to walk. Although inexpensive and reliable treatment exists, clubfoot often goes untreated in the developing world, where 80% of cases occur. Many nonprofit and non-governmental organizations are partnering with hospitals and clinics in the developing world to provide treatment for patients with clubfoot and to train medical personnel in the use of these treatment methods. As a component of these partnerships, clinics and hospitals are collecting patient records. Some of this patient information, such as photographs, requires expert quality assessment. Such an assessment may be performed at a later date by a staff member in the hospital, or in a completely different location through a web interface. Photographs capture the state of a patient at a specific point in time. If a photograph is not taken correctly and, as a result, has no clinical utility, it cannot be recreated because that moment in time has passed. These observations have motivated the desire to perform real-time classification of clubfoot images as they are being captured in a possibly remote and challenging environment. In the short term, successful classification could provide immediate feedback to those taking patient photos, helping to ensure that the image is of good quality and the foot is oriented correctly at the time of image capture. In the long term, this classification could be the basis for automated image analysis that could reduce the workload of a busy staff and enable broader provision of treatment.
34

Rida, Imad. „Temporal signals classification“. Thesis, Normandie, 2017. http://www.theses.fr/2017NORMIR01/document.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
Annotation:
Nowadays, many applications in machine vision and hearing attempt to reproduce human capabilities on machines. Our interest in this subject stems from the fact that these problems are mainly modelled as temporal signal classification problems. We focused on two distinct cases: human gait recognition and audio signal recognition, the latter covering both environmental and music signals. For gait recognition, we propose a novel method that automatically learns and selects the dynamic parts of the human body, tackling intra-class variation dynamically, whereas state-of-the-art methods rely on predefined a priori knowledge. To achieve this, a group fused lasso algorithm segments the human body into parts with coherent motion across subjects. For audio recognition, no conventional feature representation has proved able to handle both environmental and music recognition tasks; instead, task-specific features have been introduced for each problem. We therefore propose a general framework that models audio classification as a supervised dictionary learning problem: a dictionary is learned per class, dissimilarity between dictionaries is encouraged by penalising their pairwise similarities, and the coefficients of a signal's representation over these dictionaries are kept as sparse as possible. The experimental evaluations provide strong and encouraging results.
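The dictionary-learning idea lends itself to a small sketch. The following is a simplified, assumed setup (synthetic feature vectors, scikit-learn's DictionaryLearning, and no inter-dictionary penalty): one dictionary is learned per class, and a signal is assigned to the class whose dictionary reconstructs it best from a sparse code.

```python
# Per-class sparse dictionary classification (simplified; the thesis's
# inter-dictionary dissimilarity penalty is omitted). Data are synthetic.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
n_feat = 40
# Toy "audio feature" vectors for two classes (placeholders for real features).
X_by_class = {
    0: rng.normal(0.0, 1.0, size=(60, n_feat)),
    1: rng.normal(1.5, 1.0, size=(60, n_feat)),
}

# Learn one sparse dictionary per class.
dicts = {
    c: DictionaryLearning(n_components=10, transform_algorithm="lasso_lars",
                          transform_alpha=0.1, random_state=0).fit(X)
    for c, X in X_by_class.items()
}

def classify(x):
    """Assign x to the class whose dictionary gives the lowest residual."""
    errors = {}
    for c, d in dicts.items():
        code = d.transform(x.reshape(1, -1))   # sparse coefficients
        recon = code @ d.components_           # reconstruction from the atoms
        errors[c] = np.linalg.norm(x - recon)
    return min(errors, key=errors.get)

print(classify(rng.normal(1.5, 1.0, size=n_feat)))   # expected: 1
```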
35

Cisse, Mouhamadou Moustapha. „Efficient extreme classification“. Thesis, Paris 6, 2014. http://www.theses.fr/2014PA066594/document.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
Annotation:
In this thesis, we propose low-complexity methods for classification in the presence of a very large number of categories, also known as extreme classification. The proposed approaches aim at reducing inference complexity compared with classical methods such as one-versus-rest, so that the learned classifiers remain usable in real-life applications. We propose two types of methods, for single-label and multi-label classification respectively. The first approach uses existing hierarchical information among the categories to learn compact, low-dimensional binary representations of them. The second, dedicated to multi-label problems, adapts the Bloom filter framework to represent subsets of labels as sparse, low-dimensional binary vectors. In both cases, binary classifiers are learned to predict the low-dimensional representations of the categories, and algorithms are proposed to recover the set of relevant labels from the predicted representation. Large-scale experiments validate the proposed methods and show performance superior to the methods classically used for extreme classification.
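A minimal sketch of the Bloom-filter label encoding described above, under assumed parameters (code length B, K hash positions per label, a logistic-regression predictor per bit, and a naive decoder); it illustrates the encoding and decoding scheme rather than the thesis's exact algorithms.

```python
# Each label hashes to K positions of a B-bit code; a label set is the OR of
# its labels' bits; one binary classifier predicts each bit; a label is
# recovered when all of its bits are predicted positive. Data are random toys.
import hashlib
import numpy as np
from sklearn.linear_model import LogisticRegression

B, K = 32, 2   # assumed code length and hashes per label

def label_bits(label):
    """Deterministic K hash positions for a label."""
    return [int(hashlib.sha1(f"{label}:{i}".encode()).hexdigest(), 16) % B
            for i in range(K)]

def encode(label_set):
    code = np.zeros(B, dtype=int)
    for lab in label_set:
        code[label_bits(lab)] = 1
    return code

def decode(predicted_bits, all_labels):
    return {lab for lab in all_labels
            if all(predicted_bits[b] == 1 for b in label_bits(lab))}

rng = np.random.default_rng(0)
labels = list(range(10))
X = rng.normal(size=(200, 20))                          # toy features
Y_sets = [set(rng.choice(labels, size=2, replace=False)) for _ in range(200)]
Y_codes = np.array([encode(s) for s in Y_sets])

# One binary classifier per bit; constant bits are stored as constants.
bit_models = []
for b in range(B):
    yb = Y_codes[:, b]
    if yb.min() == yb.max():
        bit_models.append(int(yb[0]))
    else:
        bit_models.append(LogisticRegression(max_iter=1000).fit(X, yb))

x_new = X[0]
pred_bits = np.array([m if isinstance(m, int)
                      else int(m.predict(x_new.reshape(1, -1))[0])
                      for m in bit_models])
print(decode(pred_bits, labels))
```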
36

Cisse, Mouhamadou Moustapha. „Efficient extreme classification“. Electronic Thesis or Diss., Paris 6, 2014. http://www.theses.fr/2014PA066594.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
Annotation:
In this thesis, we propose low-complexity methods for classification in the presence of a very large number of categories, also known as extreme classification. The proposed approaches aim at reducing inference complexity compared with classical methods such as one-versus-rest, so that the learned classifiers remain usable in real-life applications. We propose two types of methods, for single-label and multi-label classification respectively. The first approach uses existing hierarchical information among the categories to learn compact, low-dimensional binary representations of them. The second, dedicated to multi-label problems, adapts the Bloom filter framework to represent subsets of labels as sparse, low-dimensional binary vectors. In both cases, binary classifiers are learned to predict the low-dimensional representations of the categories, and algorithms are proposed to recover the set of relevant labels from the predicted representation. Large-scale experiments validate the proposed methods and show performance superior to the methods classically used for extreme classification.
37

Sloof, Joël. „Classification Storage : A practical solution to file classification for information security“. Thesis, Karlstads universitet, Institutionen för matematik och datavetenskap (from 2013), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-84553.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
Annotation:
In the information age we currently live in, data has become the most valuable resource in the world. These data resources are high-value targets for cyber criminals and digital warfare. To mitigate these threats, information security, laws and legislation are required. It can be challenging for organisations to keep control over their data and to comply with laws and legislation that require data classification. Data classification is often required to determine appropriate security measures for storing sensitive data. The goal of this thesis is to create a system that makes it easy for organisations to handle file classifications and that raises information security awareness among users. In this thesis, the Classification Storage system is designed, implemented and evaluated. The Classification Storage system is a client-server solution that creates a virtual filesystem. The virtual filesystem is presented as one network drive, while data is stored separately, based on the classifications set by users. The Classification Storage system is evaluated through a usability study. The study shows that users find the system intuitive and easy to use, and that they become more aware of information security by using it.
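The storage idea can be illustrated with a small, assumed sketch (paths, labels and class names are invented, not the thesis's implementation): one logical store routes each file to a separate physical location according to the classification the user assigns, which is the behaviour the virtual filesystem exposes as a single network drive.

```python
# Route files to separate storage locations by classification while presenting
# one virtual listing. Backends and labels are illustrative assumptions.
from pathlib import Path
import shutil

class ClassificationStore:
    def __init__(self, backends):
        # backends: classification label -> storage directory
        self.backends = {label: Path(p) for label, p in backends.items()}
        for p in self.backends.values():
            p.mkdir(parents=True, exist_ok=True)

    def save(self, src_file, classification):
        """Store a file under the backend matching its classification."""
        dest = self.backends[classification] / Path(src_file).name
        shutil.copy2(src_file, dest)
        return dest

    def list_all(self):
        """Present all files as one listing, regardless of backend."""
        return {f.name: label
                for label, root in self.backends.items()
                for f in root.iterdir() if f.is_file()}

store = ClassificationStore({"public": "/tmp/store/public",
                             "confidential": "/tmp/store/confidential"})
# store.save("report.pdf", "confidential")   # example call
print(store.list_all())
```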
38

Watkins, Peter. „Classification of sheep category using chemical analysis and statistical classification algorithms“. Thesis, Watkins, Peter (2011) Classification of sheep category using chemical analysis and statistical classification algorithms. PhD thesis, Murdoch University, 2011. https://researchrepository.murdoch.edu.au/id/eprint/6249/.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
Annotation:
In Australia, dentition (eruption of permanent incisors) is used as a proxy for age to define sheep meat quality. Lamb is defined as having no permanent incisors, hogget as having at least one incisor, and mutton as having two or more incisors. Classification of the carcase is done at the abattoir prior to the removal of an animal's head. Recently, an Australian Senate inquiry into meat marketing reported concern that substitution of hogget and mutton for lamb may be occurring in the industry. At present, no objective method is available for classifying sheep category. The general aims of this thesis were to i) evaluate whether chemical analysis of branched chain fatty acid (BCFA) content could be used as an objective tool to determine sheep age, ii) understand the effect that some production factors had on BCFA concentrations in Australian sheep and iii) develop new approaches (chemical and/or statistical) for determining sheep category (age). BCFAs are implicated as the main contributors to "mutton flavour", often associated with the cooked meat of older animals. BCFAs are reported to increase with age, which suggests that chemical analysis of these compounds could be used as an objective method. Concentrations of three BCFAs (4-methyloctanoic (MOA), 4-ethyloctanoic (EOA) and 4-methylnonanoic (MNA) acids) were measured in a survey of fat samples taken from 533 sheep carcases at abattoirs in New South Wales, Victoria and Western Australia. This thesis shows that, on its own, chemical analysis of the BCFAs is not sufficient to discriminate lamb from hogget and mutton, as pre-slaughter nutrition is a significant factor in classifying sheep with this approach. Uncertainty at the BCFA concentration ranges found in Australian sheep was determined to be high, making it difficult to discriminate between sheep carcases of different ages based on the BCFA level. Fast gas chromatography was evaluated as the basis for a high-throughput chemical technique but was not sufficiently sensitive for BCFA measurements. Solid-phase microextraction (SPME) was also found to be suitable for sampling 3-methylindole and p-cresol, compounds responsible for diet-related "pastoral flavour" in sheep fat, but further work is needed to validate this approach for measurement of these compounds in sheep fat. Statistical classification algorithms, when applied to the chromatograms measured for the 533 carcases, showed great promise for predicting sheep category. Specifically, the random forests algorithm, when applied to mean-centred data, gave 100% predictive accuracy when differentiating between lamb, hogget and mutton. This approach could be used for the development of an objective method for determining sheep age and category, suitable for use by the Australian sheep meat industry.
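The final statistical step lends itself to a brief sketch, assuming scikit-learn and a synthetic stand-in for the chromatogram matrix: features are mean-centred and a random forest is cross-validated on the three categories.

```python
# Mean-centre chromatogram intensities and classify sheep category with a
# random forest. The array below is a synthetic placeholder, so the score
# printed here is not meaningful in itself.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(90, 500))        # toy chromatogram intensities
y = rng.integers(0, 3, size=90)       # 0 = lamb, 1 = hogget, 2 = mutton

X_centred = X - X.mean(axis=0)        # mean-centring per retention-time point
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X_centred, y, cv=5).mean())
```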
39

Beghtol, Clare. „James Duff Brown's Subject Classification and Evaluation Methods for Classification Systems“. dLIST, 2004. http://hdl.handle.net/10150/106250.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
Annotation:
James Duff Brown (1862-1914), an important figure in librarianship in late nineteenth and early twentieth century England, made contributions in many areas of his chosen field. His Subject Classification (SC), however, has not received much recognition for its theoretical and practical contributions to bibliographic classification theory and practice in the twentieth century. This paper discusses some of the elements of SC that both did and did not inform future bibliographic classification work, considers some contrasting evaluation methods in the light of advances in bibliographic classification theory and practice and of commentaries on SC, and suggests directions for further research.
40

Bouzouita-Bayoudh, Inès. „Etude et extraction des règles associatives de classification en classification supervisée“. Thesis, Montpellier 2, 2012. http://www.theses.fr/2012MON20217.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
Annotation:
In this thesis, our interest focuses on classification accuracy and on an optimal traversal of the rule search space. The goal is to improve classification accuracy by studying the different types of rules and by reducing the space of rules to be searched. We propose a direct associative classification approach, IGARC, which extracts a classifier formed of generic classification rules directly from a training set; the short premises that characterise these rules give the classifier the flexibility to classify new objects well, and the use of generic bases of association rules keeps the number of rules small compared with other associative classification approaches. An inter- and intra-approach experimental study was carried out on 12 benchmark datasets and shows that IGARC is highly competitive with popular classification methods. We also propose a second approach, AFORTIORI, which addresses the generation of relevant frequent and rare classification rules. This work is motivated by the long-standing open question of devising an efficient algorithm for finding rules with low support while avoiding the generation of a large number of rules; a particularly relevant field for rare itemsets and rare associative classification rules is medical diagnosis. AFORTIORI is especially interesting for databases composed of positive and negative examples in which the number of negative examples is very small relative to the positives: rules are then sought on the negative examples, which have low support even with respect to the positive population and whose extraction could otherwise be costly. The approach is based on the classical cover set algorithm and uses the cover measure to guide a depth-first traversal of the search space and to generate the rules most interesting for classification, including rare ones. We describe the method and provide comparisons with common associative classification methods on standard benchmark datasets.
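As an illustration of the associative-classification family these approaches belong to (not IGARC or AFORTIORI themselves), the sketch below mines simple "itemset => class" rules above assumed support and confidence thresholds and classifies by the first matching rule.

```python
# Tiny associative classifier: mine class association rules with minimum
# support and confidence, sort by confidence, classify with the first match.
# Data, thresholds and the size-2 antecedent limit are arbitrary assumptions.
from collections import Counter
from itertools import combinations

data = [                                   # (set of items, class) toy examples
    ({"sunny", "hot"}, "no"), ({"sunny", "mild"}, "no"),
    ({"rainy", "mild"}, "yes"), ({"overcast", "hot"}, "yes"),
    ({"overcast", "mild"}, "yes"), ({"rainy", "hot"}, "no"),
]
MIN_SUP, MIN_CONF = 2, 0.6

def mine_rules(data):
    rules = []
    items = {i for itemset, _ in data for i in itemset}
    candidates = [frozenset([i]) for i in items] + \
                 [frozenset(c) for c in combinations(items, 2)]
    for ante in candidates:
        covered = [cls for itemset, cls in data if ante <= itemset]
        if len(covered) < MIN_SUP:
            continue
        cls, count = Counter(covered).most_common(1)[0]
        conf = count / len(covered)
        if conf >= MIN_CONF:
            rules.append((ante, cls, conf))
    return sorted(rules, key=lambda r: -r[2])   # most confident rules first

RULES = mine_rules(data)
DEFAULT = Counter(cls for _, cls in data).most_common(1)[0][0]

def classify(itemset):
    for ante, cls, _ in RULES:
        if ante <= itemset:
            return cls
    return DEFAULT

print(classify({"rainy", "mild"}))   # expected: "yes"
```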
41

Johansson, Henrik. „Video Flow Classification : Feature Based Classification Using the Tree-based Approach“. Thesis, Karlstads universitet, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-43012.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
Annotation:
This dissertation describes a study which aims to classify video flows in Internet network traffic. In this study, classification is based on the characteristics of the flow, which include features such as payload sizes and inter-arrival times. The purpose of this is to give an alternative to classifying flows based on the contents of their payload packets, a necessity given the increase in encrypted flows within Internet network traffic. Data with known classes is fed to a machine learning classifier so that a model can be created; this model can then be used to classify new, unknown data. Two different classifiers are used in this study, namely decision trees and random forests. Several tests are completed to attain the best possible models. The results of this dissertation show that classification based on flow characteristics is possible and that the random forest classifier in particular achieves good accuracy. However, the accuracy of classification of encrypted flows could not be tested within this project.
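A sketch of the feature-based approach, with fabricated flow statistics standing in for real traffic: per-flow payload-size and inter-arrival features are fed to a decision tree and a random forest.

```python
# Feature-based flow classification: flow statistics rather than payload
# contents. The flow records below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 400
# Columns: mean payload size, payload size std, mean inter-arrival time (ms).
video = np.column_stack([rng.normal(1200, 100, n), rng.normal(150, 30, n),
                         rng.normal(8, 2, n)])
other = np.column_stack([rng.normal(500, 200, n), rng.normal(300, 80, n),
                         rng.normal(40, 15, n)])
X = np.vstack([video, other])
y = np.array([1] * n + [0] * n)          # 1 = video flow

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for model in (DecisionTreeClassifier(random_state=0),
              RandomForestClassifier(n_estimators=100, random_state=0)):
    print(type(model).__name__, model.fit(X_tr, y_tr).score(X_te, y_te))
```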
42

Palanisamy, Senthil Kumar. „Association rule based classification“. Link to electronic thesis, 2006. http://www.wpi.edu/Pubs/ETD/Available/etd-050306-131517/.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
Annotation:
Thesis (M.S.)--Worcester Polytechnic Institute.
Keywords: Itemset Pruning, Association Rules, Adaptive Minimal Support, Associative Classification, Classification. Includes bibliographical references (p.70-74).
43

Landt, Hermine. „The classification of blazars“. [S.l. : s.n.], 2003. http://deposit.ddb.de/cgi-bin/dokserv?idn=967456185.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
44

Archer, Claude. „Classification of group extensions“. Doctoral thesis, Universite Libre de Bruxelles, 2002. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/211419.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
45

Xie, Wei University of Ballarat. „Classification of HTML Documents“. University of Ballarat, 2006. http://archimedes.ballarat.edu.au:8080/vital/access/HandleResolver/1959.17/12774.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
Annotation:
Text Classification is the task of mapping a document into one or more classes based on the presence or absence of words (or features) in the document. It is being studied intensively, and different classification techniques and algorithms have been developed. This thesis focuses on the classification of online documents, which has become more critical with the development of the World Wide Web. The WWW vastly increases the availability of online documents in digital format and has highlighted the need to classify them. Against this background we note the emergence of automatic Web classification, which concentrates on classifying HTML-like documents into classes or categories not only by using methods inherited from the traditional Text Classification process, but also by utilising the extra information provided only by Web pages. Our work is based on the fact that Web documents contain not only ordinary features (words) but also extra information, such as meta-data and hyperlinks, that can be used to benefit the classification process. The aim of this research is to study various ways of using this extra information, in particular the hyperlink information provided by HTML documents (Web pages). The merit of the approach developed in this thesis is its simplicity compared with existing approaches. We present different approaches to using hyperlink information to improve the effectiveness of Web classification. Unlike other work in this area, we only use the mappings between linked documents and their own class or classes. In this case, we only need to add a few features, called linked-class features, to the datasets and then apply classifiers on them for classification. In the numerical experiments we adopted two well-known Text Classification algorithms, Support Vector Machines and BoosTexter. The results obtained show that classification accuracy can be improved by using mixtures of ordinary and linked-class features. Moreover, out-links usually work better than in-links in classification. We also analyse and discuss the reasons behind this improvement.
Master of Computing
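A hedged sketch of the linked-class feature idea from the abstract above, with invented toy documents, links and class names: each page's bag-of-words vector is extended with a histogram of the classes of the pages it links to before an SVM is trained.

```python
# Mix ordinary word features with "linked-class" features (classes of linked
# documents) and train an SVM. Documents and links are small invented examples.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

docs = ["python code tutorial loops", "football match score league",
        "compiler parsing code syntax", "league cup goal football"]
labels = np.array([0, 1, 0, 1])            # 0 = programming, 1 = sport
links = [[2], [3], [0], [1]]               # outgoing links as document indices
n_classes = 2

vec = CountVectorizer()
X_words = vec.fit_transform(docs).toarray()

def linked_class_features(links, labels, n_classes):
    """Class histogram of each document's linked documents."""
    feats = np.zeros((len(links), n_classes))
    for i, outgoing in enumerate(links):
        for j in outgoing:
            feats[i, labels[j]] += 1
    return feats

X = np.hstack([X_words, linked_class_features(links, labels, n_classes)])
clf = LinearSVC().fit(X, labels)
print(clf.predict(X))
```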
46

Vazey, Megan Margaret. „Case-driven collaborative classification“. Doctoral thesis, Australia : Macquarie University, 2007. http://hdl.handle.net/1959.14/264.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
Annotation:
Thesis (PhD) -- Macquarie University, Division of Information and Communication Sciences, Department of Computing, 2007.
"Submitted January 27 2007, revised July 27 2007".
Bibliography: p. 281-304.
Mode of access: World Wide Web.
xiv, 487 p., bound ill. (some col.)
47

McGuire, Peter Frederick. „Image classification using eigenpaxels“. Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape8/PQDD_0002/NQ41239.pdf.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
48

Fazeli, Goldisse. „Classification and discriminant analysis“. Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ47800.pdf.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
49

Liang, Fang. „Hyperplane-based classification techniques“. Connect to online resource, 2007. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3284447.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
50

de, Roos Dolf. „Spectral analysis classification sonars“. Thesis, University of Canterbury. Electrical Engineering, 1986. http://hdl.handle.net/10092/5575.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
Annotation:
Sonar target classification based on frequency-domain echo analysis is investigated. Conventional pulsed sonars are compared with continuous transmission frequency modulated (CTFM) sonars, and differences relating to target classification are discussed. A practical technique is introduced which eliminates the blind time inherent in CTFM technology. The value and implications of modelling underwater sonars in air are discussed and illustrated. The relative merits of auditory, visual and computer analysis of echoes are examined, and the effects of using two or more analysis methods simultaneously are investigated. Various statistical techniques for detecting and classifying targets are explored. It is seen that with present hardware limitations, a two-stage echo analysis approach offers the most efficient means of target classification. A novel design for three-section quarter-wavelength transducers is presented and evaluated. Their inherently flat frequency response makes these transducers well suited to broadband applications. The design philosophy and construction details of a Diver's Sonar and an underwater Classification Sonar are given. Sea trials reveal that using the Diver's Sonar, a blind-folded diver can successfully navigate in an unknown environment, and locate and classify targets; using the Classification Sonar, targets may be located and classified using either operators or computer software.
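As a loose illustration of frequency-domain echo classification in the spirit of the abstract above (synthetic echoes rather than CTFM data, and none of the thesis's hardware or two-stage analysis), the sketch below reduces each echo to its magnitude spectrum and classifies with a nearest-neighbour rule.

```python
# Reduce each echo to its magnitude spectrum and separate two synthetic
# target types. Echo model, frequencies and classifier are assumptions.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
fs, n = 10_000, 1024
t = np.arange(n) / fs

def echo(freq):
    """Synthetic echo: a decaying tone plus noise (stand-in for real data)."""
    return np.exp(-40 * t) * np.sin(2 * np.pi * freq * t) + \
           0.1 * rng.normal(size=n)

# Two target classes with different dominant spectral content.
X = np.array([np.abs(np.fft.rfft(echo(f)))
              for f in ([800] * 30 + [1500] * 30)])
y = np.array([0] * 30 + [1] * 30)

clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(clf.predict(np.abs(np.fft.rfft(echo(1500))).reshape(1, -1)))
```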
