Dissertations / Theses on the topic 'EFFICIENT CLASSIFICATION'
Cisse, Mouhamadou Moustapha. "Efficient extreme classification." Thesis, Paris 6, 2014. http://www.theses.fr/2014PA066594/document.
We propose in this thesis new methods to tackle classification problems with a large number of labels, also called extreme classification. The proposed approaches aim at reducing the inference complexity in comparison with classical methods such as one-versus-rest, in order to make learning machines usable in real-life scenarios. We propose two types of methods, for single-label and multilabel classification respectively. The first approach uses existing hierarchical information among the categories in order to learn low-dimensional binary representations of the categories. The second type of approach, dedicated to multilabel problems, adapts the framework of Bloom filters to represent subsets of labels with sparse, low-dimensional binary vectors. In both approaches, binary classifiers are learned to predict the new low-dimensional representation of the categories, and several algorithms are also proposed to recover the set of relevant labels. Large-scale experiments validate the methods.
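The Bloom-filter idea in the abstract above can be illustrated with a short sketch. This is not the thesis's code; the vector length, number of hash functions, and hashing scheme are illustrative assumptions.

```python
import hashlib

# Minimal sketch (not the thesis's method): encoding a set of labels into a short
# binary vector with a Bloom filter, as described for the multilabel case above.
M_BITS = 32          # length of the compressed label representation
K_HASHES = 3         # number of hash functions per label

def _positions(label: int) -> list[int]:
    """Map one label index to K_HASHES bit positions."""
    return [
        int(hashlib.sha256(f"{label}-{seed}".encode()).hexdigest(), 16) % M_BITS
        for seed in range(K_HASHES)
    ]

def encode(labels: set[int]) -> list[int]:
    """Represent a subset of labels as a sparse M_BITS-dimensional binary vector."""
    bits = [0] * M_BITS
    for label in labels:
        for pos in _positions(label):
            bits[pos] = 1
    return bits

def decode(bits: list[int], n_labels: int) -> set[int]:
    """Recover candidate labels: those whose positions are all set (may include false positives)."""
    return {
        label for label in range(n_labels)
        if all(bits[pos] for pos in _positions(label))
    }

if __name__ == "__main__":
    code = encode({2, 7, 15})          # what the binary classifiers would be trained to predict
    print(decode(code, n_labels=100))  # contains {2, 7, 15}, possibly a few false positives
```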
Monadjemi, Amirhassan. "Towards efficient texture classification and abnormality detection." Thesis, University of Bristol, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.409593.
Alonso, Pedro. "Faster and More Resource-Efficient Intent Classification." Licentiate thesis, Luleå tekniska universitet, EISLAB, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-81178.
Chatchinarat, Anuchin. "An efficient emotion classification system using EEG." PhD thesis, Murdoch University, 2019. https://researchrepository.murdoch.edu.au/id/eprint/52772/.
Duta, Ionut Cosmin. "Efficient and Effective Solutions for Video Classification." Doctoral thesis, Università degli studi di Trento, 2017. https://hdl.handle.net/11572/369314.
Duta, Ionut Cosmin. "Efficient and Effective Solutions for Video Classification." Doctoral thesis, University of Trento, 2017. http://eprints-phd.biblio.unitn.it/2669/1/Duta_PhD-Thesis.pdf.
Stein, David Benjamin. "Efficient homomorphically encrypted privacy-preserving automated biometric classification." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/130608.
Full textCataloged from the official PDF of thesis.
Includes bibliographical references (pages 87-96).
This thesis investigates whether biometric recognition can be performed on encrypted data without decrypting it. Borrowing a concept from machine learning, we develop approaches that push as much computation as possible into a pre-computation step, allowing for efficient, homomorphically encrypted biometric recognition. We demonstrate two algorithms: an improved version of the k-ishNN algorithm originally designed by Shaul et al. [1], and a homomorphically encrypted implementation of an SVM classifier. We provide experimental demonstrations of the accuracy and practical efficiency of both algorithms.
M.Eng. thesis, Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science.
Graham, James T. "Efficient Generation of Reducts and Discerns for Classification." Ohio University / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1175639229.
Ekman, Carl. "Traffic Sign Classification Using Computationally Efficient Convolutional Neural Networks." Thesis, Linköpings universitet, Datorseende, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-157453.
Nurrito, Eugenio. "Scattering networks: efficient 2D implementation and application to melanoma classification." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amslaurea.unibo.it/12261/.
Sundström, Mikael. "Time and space efficient algorithms for packet classification and forwarding." Doctoral thesis, Luleå tekniska universitet, Datavetenskap, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-25804.
Full textGodkänd; 2007; 20070504 (ysko)
Yoshioka, Atsushi. "Rule hashing for efficient packet classification in network intrusion detection." Online access for everyone, 2007. http://www.dissertations.wsu.edu/Thesis/Fall2007/a_yoshioka_120307.pdf.
Sundström, Mikael. "Time and space efficient algorithms for packet classification and forwarding." Luleå : Centre for Distance Spanning Technology : Luleå University of Technology, 2007. http://epubl.ltu.se/1402-1544/2007/15/.
Khojandi, Aryan Iden. "Efficient MCMC inference for material detection and classification in tomography." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/113183.
Full textPage 106 blank. Cataloged from PDF version of thesis.
Includes bibliographical references (pages 103-105).
Inferring the distribution of material in a volume of interest based on tomographic measurements is a ubiquitous problem. Accurate reconstruction of the configuration is a daunting task, especially when the sensor setup is not sufficiently comprehensive. The inverse problem corresponding to this reconstruction task is almost always ill-posed, but reasoning about the latent state remains possible. We investigate the problem of classifying volumes into object classes, using the latent configuration as an intermediate representation. We use the framework of Probabilistic Inference to implement MCMC sampling of realizations of the latent configuration conditioned on the measurements. We exploit conditional-independence properties of the graphical-model representation to sample many nodes in parallel and thereby render our sampling scheme much more efficient. We then reason over the samples and use a neural network to classify them. We demonstrate that classification is far more robust than reconstruction to the removal of sensors and interrogation angles. We also show the value of using the intermediate representation and a generative physics-based forward model by comparing these classification results with those obtained by foregoing the latent space and training a classifier directly on the sensor readings. The former benefits from regularization of the posterior distribution, allowing it to learn more rapidly and thereby perform significantly better when the number of labeled examples is limited, a reality present in the context of our problem and in many others.
M.Eng. thesis, Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science.
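The abstract above mentions exploiting conditional-independence properties of the graphical model to sample many nodes in parallel. Below is a minimal sketch of that general idea, assuming a simple Ising-style prior on a 2D grid rather than the thesis's actual forward model: sites of one checkerboard colour are conditionally independent given the other colour, so each half of the grid is updated in one vectorized step.

```python
import numpy as np

# Chromatic ("checkerboard") Gibbs sampling sketch for a binary Ising-style
# prior. BETA is an illustrative coupling strength, not a value from the thesis.
BETA = 0.7
rng = np.random.default_rng(0)

def neighbour_sum(x):
    """Sum of the four nearest neighbours with zero padding at the border."""
    s = np.zeros_like(x, dtype=float)
    s[1:, :] += x[:-1, :]
    s[:-1, :] += x[1:, :]
    s[:, 1:] += x[:, :-1]
    s[:, :-1] += x[:, 1:]
    return s

def checkerboard_gibbs(shape=(64, 64), n_sweeps=200):
    x = rng.integers(0, 2, size=shape) * 2 - 1          # spins in {-1, +1}
    rows, cols = np.indices(shape)
    colour = (rows + cols) % 2                           # 0 = "black", 1 = "white"
    for _ in range(n_sweeps):
        for c in (0, 1):
            field = neighbour_sum(x)
            p_plus = 1.0 / (1.0 + np.exp(-2.0 * BETA * field))   # P(x_i = +1 | neighbours)
            flips = rng.random(shape) < p_plus
            mask = colour == c
            x[mask] = np.where(flips[mask], 1, -1)       # all same-colour sites updated at once
        yield x.copy()

if __name__ == "__main__":
    samples = list(checkerboard_gibbs())
    print("mean magnetisation of last sample:", samples[-1].mean())
```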
Naoto, Chiche Benjamin. "Video classification with memory and computation-efficient convolutional neural network." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254678.
Video understanding involves problems such as video classification, which consists of annotating videos based on their content and frames. In many real-world applications, such as robotics, self-driving cars, augmented reality (AR) and the Internet of Things (IoT), video understanding tasks must be performed in real time on a device with limited memory resources and computational power, while also meeting low-latency requirements. In this context, while neural networks that are memory- and computation-efficient, i.e. that offer a reasonable trade-off between accuracy and efficiency (with respect to memory footprint and computation), have been developed for image recognition tasks, studies on video classification have not fully exploited these techniques. To fill this gap, this project answers the following research question: how can video classification pipelines based on memory- and computation-efficient convolutional neural networks (CNNs) be built, and how do they perform? To answer this question, the project builds and evaluates video classification pipelines as new artifacts. An empirical research method involving triangulation (i.e. qualitative and quantitative at the same time) is used. The artifacts are based on an existing memory- and computation-efficient CNN, and their evaluation is based on a publicly available dataset for video classification. The case-study research strategy is adopted: we try to generalize the obtained results as far as possible to other memory- and computation-efficient CNNs and video classification datasets. As a result, the artifacts are built and show satisfactory performance metrics compared with baseline results, which are also developed in this thesis, and with values reported in other research papers based on the same dataset. In summary, video classification pipelines based on a memory- and computation-efficient CNN can be built by designing and developing artifacts that combine methods inspired by existing papers with new approaches, and these artifacts show satisfactory performance. In particular, we observe that the drop in accuracy induced by a memory- and computation-efficient CNN when handling video frames is compensated to some extent by capturing temporal information through consideration of the sequence of these frames.
Bosio, Mattia. "Hierarchical information representation and efficient classification of gene expression microarray data." Doctoral thesis, Universitat Politècnica de Catalunya, 2014. http://hdl.handle.net/10803/145902.
In computational biology, microarrays are used to measure the activity of thousands of genes at once and to produce a global picture of cellular function. Microarrays make it possible to analyse the expression of many genes in a single experiment, quickly and efficiently. Although microarrays are a consolidated research technology today, and the trend is toward new technologies such as Next Generation Sequencing (NGS), an optimal method for sample classification has not yet been found. Classifying microarray samples is a difficult task, owing to the high number of variables and the lack of structure among the data. This characteristic prevents the application of processing techniques that rely on structural relationships, such as wavelet filtering or other filtering techniques. On the other hand, genes are not expressed independently of one another: genes are interrelated according to the biological process that regulates them. The goal of this thesis is to improve the state of the art in microarray classification and to contribute to understanding how signal processing techniques can be designed and applied to analyse microarrays. Building a classification algorithm requires studying and adapting existing algorithms to the data being analysed. The algorithms developed in this thesis tackle the problem with two essential blocks. The first attacks the lack of structure by deriving a binary tree using unsupervised clustering tools. The second fundamental element for obtaining accurate classifiers while reducing the risk of overfitting is a feature selection stage. The main task addressed in this thesis is binary classification, where relevant improvements over the state of the art were obtained. The first step is the generation of a structure; for this, the Treelets algorithm available in the literature was used. Multiple alternatives to this original algorithm were proposed and evaluated, changing the similarity metrics or the merging rules during the process. In addition, the possibility of using external information sources, such as biological ontologies, to improve the inference of the structure was studied. Two different approaches to feature selection were studied: the first is a modification of the IFFS algorithm, and the second uses an ensemble learning scheme. The IFFS algorithm was adapted to the characteristics of microarrays to obtain better results, adding elements such as a reliability measure and an evaluation system to select the best variable at each iteration. The ensemble-based method exploits the abundance of features in microarrays to implement a different kind of selection. In this area, several algorithms were studied, improving existing alternatives to cope with the small number of samples and the large number of variables typical of microarrays. The classification problem with more than two classes was also addressed by studying a new algorithm that combines multiple binary classifiers; the proposed algorithm exploits the redundancy offered by multiple classifiers to obtain more reliable predictions.
All the algorithms proposed in this thesis were evaluated on public, high-quality data, following protocols established in the literature in order to provide a reliable comparison with the state of the art. Where possible, Monte Carlo simulations were applied to improve the robustness of the results.
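As a rough illustration of the first building block described above (deriving a binary tree over the features with unsupervised clustering, in the spirit of Treelets), the sketch below builds a correlation-based hierarchy over gene-expression columns with standard SciPy tools; it is a simplification under assumed synthetic data, not the thesis's algorithm.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Illustrative sketch: build a binary merge tree over genes (columns) from a
# correlation-based similarity, a simplified stand-in for Treelet-style
# structure learning. The data here are random placeholders.
rng = np.random.default_rng(42)
X = rng.normal(size=(60, 200))             # 60 samples x 200 genes (synthetic)

corr = np.corrcoef(X, rowvar=False)        # gene-gene correlation matrix
dist = 1.0 - np.abs(corr)                  # similarity -> distance
np.fill_diagonal(dist, 0.0)

# 'average' linkage over the condensed distance matrix gives a binary merge tree.
tree = linkage(squareform(dist, checks=False), method="average")

# The merge tree can be cut to obtain gene groups, which a downstream feature
# selection stage (e.g., an IFFS-style wrapper) could then operate on.
groups = fcluster(tree, t=20, criterion="maxclust")
print("number of gene groups:", len(set(groups)))
```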
Kanumuri, Sai Srilakshmi. "On Evaluating Machine Learning Approaches for Efficient Classification of Traffic Patterns." Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-14985.
Ambardekar, Amol A. "Efficient vehicle tracking and classification for an automated traffic surveillance system." abstract and full text PDF (free order & download UNR users only), 2007. http://0-gateway.proquest.com.innopac.library.unr.edu/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1451111.
Harte, T. P. "Efficient neural network classification of magnetic resonance images of the breast." Thesis, University of Cambridge, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.603805.
Lee, Zed Heeje. "A graph representation of event intervals for efficient clustering and classification." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-281947.
Sequences of event intervals occur in several application domains, while their inherent complexity hinders scalable solutions to tasks such as clustering and classification. In this thesis, we propose a novel spectral embedding representation of event interval sequences that relies on bipartite graphs. More concretely, each event interval sequence is represented by a bipartite graph by following three main steps: (1) creating a hash table that can quickly convert a collection of event interval sequences into a bipartite graph representation, (2) creating and regularizing a bi-adjacency matrix corresponding to the bipartite graph, and (3) defining a spectral embedding on the bi-adjacency matrix. Furthermore, we show that substantial improvements in classification performance can be achieved by pruning parameters that capture the nature of the relations formed by the event intervals. We demonstrate through extensive experimental evaluation on five real-world datasets that our approach can obtain runtime speedups of up to two orders of magnitude compared with other state-of-the-art methods, with similar or better clustering and classification performance.
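A minimal sketch of the pipeline described in the abstract above, under simplifying assumptions (toy event labels, duration weighting, and a plain truncated SVD instead of the thesis's exact regularisation): each sequence becomes one side of a bipartite graph whose other side is the set of event labels, and an SVD of the normalised bi-adjacency matrix yields a low-dimensional embedding that can be clustered or classified.

```python
import numpy as np

# Toy event-interval sequences: (label, start, end) triples. The labels and the
# duration-based weighting are illustrative assumptions, not the thesis's definition.
sequences = [
    [("A", 0, 5), ("B", 2, 7), ("C", 6, 9)],
    [("A", 1, 4), ("B", 3, 8)],
    [("C", 0, 3), ("D", 2, 6), ("D", 7, 9)],
]

labels = sorted({lab for seq in sequences for lab, _, _ in seq})
col = {lab: j for j, lab in enumerate(labels)}

# Bi-adjacency matrix: rows = sequences, columns = event labels,
# weighted by the total duration of each label within the sequence.
B = np.zeros((len(sequences), len(labels)))
for i, seq in enumerate(sequences):
    for lab, start, end in seq:
        B[i, col[lab]] += end - start

# Simple row normalisation followed by a truncated SVD: the left singular
# vectors give a spectral embedding of the sequences.
B_norm = B / B.sum(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(B_norm, full_matrices=False)
k = 2
embedding = U[:, :k] * S[:k]
print(embedding)   # one k-dimensional point per sequence, ready for clustering
```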
Esteve, García Albert. "Design of Efficient TLB-based Data Classification Mechanisms in Chip Multiprocessors." Doctoral thesis, Universitat Politècnica de València, 2017. http://hdl.handle.net/10251/86136.
Most of the data referenced by parallel and sequential applications running on current CMPs are referenced by a single thread, i.e., they are private. Recently, some proposals have leveraged this observation to improve many aspects of CMPs, such as reducing coherence overhead or the access latency to distributed caches. The effectiveness of these proposals depends to a large extent on the amount of data detected as private. However, the mechanisms proposed to date do not consider thread migration or the phases of an application, so a considerable amount of private data is not detected appropriately. In order to increase the detection of private data, this thesis proposes a TLB-based mechanism that is able to reclassify data as private and that detects thread migration without adding complexity to the system. TLB-based classification mechanisms are analysed in multilevel TLB structures, including systems with private TLBs and with a distributed, shared last-level TLB. This thesis also presents a page classification mechanism based on inspecting the TLBs of other cores after each TLB miss. In particular, the proposed mechanism is based on exchanging and counting tokens. Counting tokens in the TLBs provides a natural and efficient way to classify memory pages, and it avoids the need for persistent requests or arbitration, since if two or more TLBs compete to access a page, the tokens are distributed appropriately and the page is classified as shared. However, the ability of TLB-based mechanisms to classify private pages depends on the TLB size, since TLB-based classification relies on the presence of a translation in the TLBs of the system. To overcome this, several TLB usage predictors (UP) have been proposed, which allow a classification that is independent of the TLB size. Specifically, this thesis introduces a predictor that obtains system-level page usage information with the help of a shared last-level TLB (SUP) or through TLBs cooperating together (CUP).
Esteve García, A. (2017). Design of Efficient TLB-based Data Classification Mechanisms in Chip Multiprocessors [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/86136
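To make the token-counting classification above concrete, here is a toy sketch under strong simplifying assumptions (a fixed core count, one logical token pool per page, no replacement or timing model); it illustrates the idea only and is not the hardware mechanism proposed in the thesis.

```python
# Toy illustration of token-based page classification: every page has as many
# tokens as there are cores, and a TLB holding all tokens for a page regards
# the page as private. The core count and access protocol are assumptions.
N_CORES = 4

class TLB:
    def __init__(self, core_id):
        self.core_id = core_id
        self.tokens = {}            # page -> number of tokens held

    def access(self, page, all_tlbs):
        """On a TLB miss, gather token information from the other TLBs for this page."""
        if page not in self.tokens:
            held_elsewhere = sum(t.tokens.get(page, 0) for t in all_tlbs if t is not self)
            if held_elsewhere == 0:
                self.tokens[page] = N_CORES      # first toucher gets all tokens
            else:
                # Take one token from some holder: the page becomes shared.
                for t in all_tlbs:
                    if t is not self and t.tokens.get(page, 0) > 0:
                        t.tokens[page] -= 1
                        break
                self.tokens[page] = 1
        return "private" if self.tokens[page] == N_CORES else "shared"

tlbs = [TLB(i) for i in range(N_CORES)]
print(tlbs[0].access(0x1000, tlbs))   # private: only core 0 has touched the page
print(tlbs[1].access(0x1000, tlbs))   # shared: tokens now split between cores 0 and 1
```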
Karmakar, Priyabrata. "Effective and efficient kernel-based image representations for classification and retrieval." Thesis, Federation University Australia, 2018. http://researchonline.federation.edu.au/vital/access/HandleResolver/1959.17/165515.
Zhang, Liang. "Classification and ranking of environmental recordings to facilitate efficient bird surveys." Thesis, Queensland University of Technology, 2017. https://eprints.qut.edu.au/107097/1/Liang_Zhang_Thesis.pdf.
Full textLoza, Mencía Eneldo [Verfasser], Johannes [Akademischer Betreuer] Fürnkranz, and Hüllermeier [Akademischer Betreuer] Eyke. "Efficient Pairwise Multilabel Classification / Eneldo Loza Mencía. Betreuer: Johannes Fürnkranz ; Hüllermeier Eyke." Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2013. http://d-nb.info/1107769655/34.
Immaneni, Raghu Nandan. "An efficient approach to machine learning based text classification through distributed computing." Thesis, California State University, Long Beach, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=1603338.
Text classification is one of the classical problems in computer science, used primarily for categorizing data, spam detection, anonymization, information extraction, text summarization, etc. Given the large amounts of data involved in these applications, automated and accurate training models and approaches for classifying data efficiently are needed.
In this thesis, an extensive study of the interaction between natural language processing, information retrieval and text classification is performed. A case study named "keyword extraction", which deals with identifying keywords and tags from millions of text questions, is used as a reference. Different classifiers are implemented using the MapReduce paradigm on the case study, and the experimental results are recorded on two newly built Hadoop distributed computing clusters. The main aims are to enhance prediction accuracy, to examine the role of text pre-processing in noise elimination, and to reduce computation time and resource utilization on the clusters.
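To make the MapReduce angle concrete, here is a small, self-contained sketch of the map/shuffle/reduce pattern applied to keyword counting over question texts; it only simulates the paradigm in-process and is not the Hadoop implementation used in the thesis. The questions, tokenizer and stop-word list are illustrative placeholders.

```python
from collections import defaultdict
from itertools import chain

# In-process simulation of the MapReduce pattern for keyword counting.
STOP_WORDS = {"how", "do", "i", "a", "in", "the", "to", "is", "of"}

def map_phase(doc_id, text):
    """Map: emit (keyword, 1) pairs after trivial pre-processing."""
    for token in text.lower().split():
        token = token.strip("?.,!")
        if token and token not in STOP_WORDS:
            yield token, 1

def shuffle(pairs):
    """Shuffle: group values by key, as the framework would between phases."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(key, values):
    """Reduce: aggregate counts per keyword."""
    return key, sum(values)

questions = {
    1: "How do I sort a list in Python?",
    2: "How to sort an array in Java?",
}

mapped = chain.from_iterable(map_phase(i, t) for i, t in questions.items())
counts = dict(reduce_phase(k, v) for k, v in shuffle(mapped).items())
print(counts)   # e.g. {'sort': 2, 'list': 1, 'python': 1, 'an': 1, 'array': 1, 'java': 1}
```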
Franz, Torsten. "Spatial classification methods for efficient infiltration measurements and transfer of measuring results." Doctoral thesis, Technische Universität Dresden, 2006. https://tud.qucosa.de/id/qucosa%3A24942.
For the sustainable operation and cost-efficient maintenance of sewer networks, an accurate determination of their infiltration situation is necessary. Given the high effort required for infiltration measurements, an optimisation of the necessary measuring campaigns and a reliable transfer of the measuring results to comparable catchments are advisable. Suitable methods were developed for this purpose, which on the one hand improve the information content of measurements by determining optimal measuring points, and on the other hand assign measuring results to other potential measuring locations by comparing sub-catchments and classifying sewer sections. The methods are based on the similarity approach "similar sewer characteristics lead to similar infiltration rates" and use modified multivariate statistical procedures. They have a high degree of freedom regarding data requirements. The methods were successfully validated on measured and generated data. It is estimated that the optimisation potential for suitable catchments is up to 40% compared with non-optimised measuring networks. The transfer of the measuring results was successful, with an acceptable error, for up to 75% of the investigated sub-catchments. With the developed methods it is possible to improve the knowledge of the infiltration situation of a sewer network and to reduce the measurement-related uncertainty. This results in cost savings for the operator.
Franz, Torsten. "Spatial classification methods for efficient infiltration measurements and transfer of measuring results." Doctoral thesis, Dresden : Inst. für Siedlungs- und Industriewasserwirtschaft, Techn. Univ, 2007. http://nbn-resolving.de/urn:nbn:de:swb:14-1181687412171-65072.
Full textRunhem, Lovisa. "Resource efficient travel mode recognition." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-217897.
In this report we attempt to provide insights into how a resource-efficient solution for travel mode recognition can be implemented on a smartphone, using the accelerometer and the magnetometer as sensors for data collection. The proposed system uses a hierarchical classification process in which instances are first classified as vehicle or non-vehicle, then as wheel- or rail-based vehicles, and finally as belonging to one of the transport modes: bus, car, motorcycle, subway or train. A virtual gyroscope is implemented as a low-power source of simulated gyroscope data. Various features are extracted from accelerometer, magnetometer and virtual gyroscope readings collected at 30 Hz, before being classified with machine learning algorithms from the WEKA machine learning library. An Android application was developed to classify real-time data, and the application's resource consumption was measured using the Trepn profiler application. The proposed system achieves an overall accuracy of 82.7% and a vehicle accuracy of 84.9% using a 5-second window with 75% overlap, with an average power consumption of 8.5 mW.
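A compact sketch of the hierarchical (cascade) classification idea described above, using scikit-learn on synthetic feature windows; the synthetic data, the non-vehicle classes, and the choice of random forests are assumptions for illustration, not the WEKA pipeline from the thesis.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for windowed accelerometer/magnetometer features.
rng = np.random.default_rng(0)
MODES = ["still", "walk", "bus", "car", "motorcycle", "subway", "train"]
X = rng.normal(size=(700, 12))
y = np.array([MODES[i % len(MODES)] for i in range(700)])

VEHICLES = {"bus", "car", "motorcycle", "subway", "train"}
RAIL = {"subway", "train"}

# Stage 1: vehicle vs non-vehicle; Stage 2: wheel vs rail; Stage 3: exact mode.
veh = np.isin(y, list(VEHICLES))
rail = np.isin(y, list(RAIL))
wheel = veh & ~rail
stage1 = RandomForestClassifier(random_state=0).fit(X, veh)
stage2 = RandomForestClassifier(random_state=0).fit(X[veh], rail[veh])
stage3_wheel = RandomForestClassifier(random_state=0).fit(X[wheel], y[wheel])
stage3_rail = RandomForestClassifier(random_state=0).fit(X[rail], y[rail])

def predict(x):
    """Run one feature window through the three-stage cascade."""
    x = x.reshape(1, -1)
    if not stage1.predict(x)[0]:
        return "non-vehicle"
    if stage2.predict(x)[0]:
        return stage3_rail.predict(x)[0]
    return stage3_wheel.predict(x)[0]

print(predict(X[0]))
```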
Meléndez, Rodríguez Jaime Christian. "Supervised and unsupervised segmentation of textured images by efficient multi-level pattern classification." Doctoral thesis, Universitat Rovira i Virgili, 2010. http://hdl.handle.net/10803/8487.
This thesis proposes new and efficient methodologies for segmenting images based on texture information in supervised and unsupervised settings. For the supervised case, a technique is proposed based on a multi-level pixel classification strategy that iteratively refines the resulting segmentation. This strategy uses pattern recognition methods based on prototypes (determined by clustering algorithms) and support vector machines. In order to obtain the best performance, an algorithm for automatic parameter selection and methods to reduce the computational cost associated with the segmentation process are also included. For the unsupervised case, an adaptation of the previous methodology is proposed by means of an initial pattern discovery stage that makes it possible to transform the unsupervised problem into a supervised one. The techniques developed in this thesis are validated through several experiments considering a wide variety of images.
He, Yuheng [Verfasser]. "Efficient Positioning Methods and Location-Based Classification in the IP Multimedia Subsystem / Yuheng He." München : Verlag Dr. Hut, 2013. http://d-nb.info/1033041629/34.
Kolb, Dirk [Verfasser], and Elmar [Akademischer Betreuer] Nöth. "Efficient and Trainable Detection and Classification of Radio Signals / Dirk Kolb. Betreuer: Elmar Nöth." Erlangen : Universitätsbibliothek der Universität Erlangen-Nürnberg, 2012. http://d-nb.info/1025963725/34.
Full textTanaka, Elly M., Dirk Lindemann, Tatiana Sandoval-Guzmán, Nicole Stanke, and Stephanie Protze. "Foamy virus for efficient gene transfer in regeneration studies." BioMed Central, 2013. https://tud.qucosa.de/id/qucosa%3A28877.
Gulbinas, Rimas Viktoras. "Motivating and Quantifying Energy Efficient Behavior among Commercial Building Occupants." Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/64867.
Full textPh. D.
Park, Sang-Hyeun [Verfasser], Johannes [Akademischer Betreuer] Fürnkranz, and Eyke [Akademischer Betreuer] Hüllermeier. "Efficient Decomposition-Based Multiclass and Multilabel Classification / Sang-Hyeun Park. Betreuer: Johannes Fürnkranz ; Eyke Hüllermeier." Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2012. http://d-nb.info/1106115678/34.
Papapetrou, Odysseas [Verfasser]. "Approximate algorithms for efficient indexing, clustering, and classification in Peer-to-peer networks / Odysseas Papapetrou." Hannover : Technische Informationsbibliothek und Universitätsbibliothek Hannover (TIB), 2011. http://d-nb.info/1013287142/34.
Makki, Sara. "An Efficient Classification Model for Analyzing Skewed Data to Detect Frauds in the Financial Sector." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSE1339/document.
There are different types of risks in the financial domain, such as terrorist financing, money laundering, credit card fraud and insurance fraud, that may result in catastrophic consequences for entities such as banks or insurance companies. These financial risks are usually detected using classification algorithms. In classification problems, the skewed distribution of classes, also known as class imbalance, is a very common challenge in financial fraud detection, where special data mining approaches are used along with traditional classification algorithms to tackle this issue. The class imbalance problem occurs when one of the classes has many more instances than another, and it becomes even more challenging in a big data context. The datasets used to build and train the models contain an extremely small portion of the minority group, also known as positives, in comparison to the majority class, known as negatives. In most cases, it is more delicate and crucial to correctly classify the minority group than the majority group, as in fraud detection or disease diagnosis: the fraud and the disease are the minority groups, and it is more important to detect a fraudulent record, because of its dangerous consequences, than a normal one. These class proportions make it very difficult for a machine learning classifier to learn the characteristics and patterns of the minority group; classifiers are biased towards the majority group because of its many examples in the dataset and learn to classify it much faster than the other group. After conducting a thorough study of the challenges faced in class imbalance cases, we found that we still cannot reach an acceptable sensitivity (i.e. good classification of the minority group) without a significant decrease in accuracy. This leads to another challenge, which is the choice of performance measures used to evaluate models. In these cases, the choice is not straightforward: accuracy or sensitivity alone is misleading. We use other measures, such as the precision-recall curve or the F1-score, to evaluate the trade-off between accuracy and sensitivity. Our objective is to build an imbalanced classification model that considers the extreme class imbalance and the false alarms, in a big data framework. We developed two approaches: a Cost-Sensitive Cosine Similarity K-Nearest Neighbor (CoSKNN) as a single classifier, and a K-modes Imbalance Classification Hybrid Approach (K-MICHA) as an ensemble learning methodology. In CoSKNN, our aim is to tackle the imbalance problem by using cosine similarity as a distance metric and by introducing a cost-sensitive score for classification with the KNN algorithm. We conducted a comparative validation experiment in which we prove the effectiveness of CoSKNN in terms of accuracy and fraud detection. The aim of K-MICHA, on the other hand, is to cluster similar data points in terms of the classifiers' outputs, then calculate the fraud probabilities in the obtained clusters and use them to detect fraud in new transactions. This approach can be used for the detection of any type of financial fraud where labelled data are available. Finally, we applied K-MICHA to credit card, mobile payment and auto insurance fraud data sets. In all three case studies, we compare K-MICHA with stacking using voting, weighted voting, logistic regression and CART, as well as with AdaBoost and random forest, and we demonstrate the efficiency of K-MICHA in these experiments.
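A simplified sketch of a cosine-similarity k-NN with a cost-sensitive vote, in the spirit of the CoSKNN classifier described above; the weighting scheme, the cost factor and k are illustrative assumptions, not the exact score proposed in the thesis.

```python
import numpy as np

# Simplified cosine-similarity k-NN with a cost-sensitive vote for the
# minority (fraud) class.
def cosine_similarity(a, B):
    a_n = a / np.linalg.norm(a)
    B_n = B / np.linalg.norm(B, axis=1, keepdims=True)
    return B_n @ a_n

def coskNN_predict(x, X_train, y_train, k=5, minority_cost=5.0):
    """Return 1 (fraud) if the cost-weighted similarity vote favours the minority class."""
    sims = cosine_similarity(x, X_train)
    idx = np.argsort(sims)[-k:]                       # indices of the k most similar points
    score_fraud = np.sum(sims[idx] * (y_train[idx] == 1)) * minority_cost
    score_legit = np.sum(sims[idx] * (y_train[idx] == 0))
    return int(score_fraud > score_legit)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X_train = rng.normal(size=(1000, 8))
    y_train = (rng.random(1000) < 0.03).astype(int)   # ~3% positives: a skewed class distribution
    x_new = rng.normal(size=8)
    print(coskNN_predict(x_new, X_train, y_train))
```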
Gilman, Ekaterina, Anja Keskinarkaus, Satu Tamminen, Susanna Pirttikangas, Juha Röning, and Jukka Riekki. "Personalised assistance for fuel-efficient driving." Elsevier, 2015. https://publish.fid-move.qucosa.de/id/qucosa%3A72830.
Weickert, J., and T. Steidten. "Efficient time step parallelization of full multigrid techniques." Universitätsbibliothek Chemnitz, 1998. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-199800466.
Hambsch, Mike, Qianqian Lin, Ardalan Armin, Paul L. Burn, and Paul Meredith. "Efficient, monolithic large area organohalide perovskite solar cells." Royal Society of Chemistry, 2016. https://tud.qucosa.de/id/qucosa%3A36282.
Weise, Michael. "A framework for efficient hierarchic plate and shell elements." Technische Universität Chemnitz, 2017. https://monarch.qucosa.de/id/qucosa%3A20867.
Hönel, Sebastian. "Efficient Automatic Change Detection in Software Maintenance and Evolutionary Processes." Licentiate thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-94733.
Schütze, Lars, and Jeronimo Castrillon. "Efficient Late Binding of Dynamic Function Compositions." ACM, 2019. https://tud.qucosa.de/id/qucosa%3A73178.
Full textLoza, Mencía Eneldo. "Efficient Pairwise Multilabel Classification." Phd thesis, 2013. https://tuprints.ulb.tu-darmstadt.de/3226/7/loza12diss.pdf.
Fontenelle-Augustin, Tiffany Natasha, and 蒂芙妮. "Prototype Selection for Efficient Classification." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/4ad4rc.
Full text國立清華大學
資訊系統與應用研究所
106
Big data has become ubiquitous and is of great significance in academia. With the rapid growth in the volume of big data, many problems arise when trying to manipulate the data for the purpose of forecasting. In this thesis, we highlight the problem of computational complexity when dealing with big data. We propose a heuristic that helps to address this problem by altering the existing classification method so that it is more suitable for handling big data, thereby increasing efficiency. Our heuristic is not only better suited to big data but is also faster than traditional classification, while keeping accuracy approximately the same, if not higher. Our heuristic combines prototype selection with the traditional classification process: a subset of the training data is selected as prototypes, the remaining data in the training set is discarded, and classification proceeds by training on the set of prototypes instead of on the entire training set, as in the conventional method. The learning algorithm used in our heuristic is the J48 decision tree algorithm. We evaluated our heuristic by comparing the classification accuracy and running time of our algorithm (using prototypes) with the traditional decision tree and naïve Bayes algorithms (using the entire training set). We also compared the amount of data used in our training phase with the amount used in the training phases of conventional methods. We tested the approach on five data sets ranging from small to large. The findings show that, for big data, our heuristic saves memory space and is 100% faster than traditional classification with only a slight drop in accuracy.
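A rough sketch of the heuristic's overall shape (select a small set of prototypes, then train a tree only on those) using scikit-learn; the prototype-selection rule shown here (nearest instances to per-class k-means centroids), the synthetic dataset, and the use of DecisionTreeClassifier in place of J48 are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Sketch of "prototype selection, then classify": keep only the training
# points closest to per-class k-means centroids and fit a decision tree on them.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

def select_prototypes(X, y, per_class=50):
    proto_idx = []
    for cls in np.unique(y):
        cls_idx = np.where(y == cls)[0]
        km = KMeans(n_clusters=per_class, n_init=5, random_state=0).fit(X[cls_idx])
        # For each centroid, keep the closest real instance as a prototype.
        for c in km.cluster_centers_:
            proto_idx.append(cls_idx[np.argmin(np.linalg.norm(X[cls_idx] - c, axis=1))])
    return np.array(proto_idx)

idx = select_prototypes(X_tr, y_tr)
tree = DecisionTreeClassifier(random_state=0).fit(X_tr[idx], y_tr[idx])
print("prototypes used:", len(idx), "of", len(X_tr))
print("test accuracy:", accuracy_score(y_te, tree.predict(X_te)))
```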
Lin, Tien Min, and 林天民. "ABV+: An Efficient Packet Classification Algorithm." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/60951596238716607595.
Chang Gung University, Institute of Computer Science and Information Engineering, academic year 97 (2008–2009).
Packet classification is an important technique for Internet services such as firewalls, intrusion detection systems, and differentiated services. The main function of packet classification is to classify incoming packets into different flows according to predefined rules or policies in a router. Since packet classification is an important component of routers, it has received broad attention, and a number of algorithms have been proposed in the past few years. Among them, bit-vector-based algorithms such as Lucent Bit Vector (BV) and Aggregated Bit Vector (ABV) are well known for being simple to implement in hardware. However, both BV and ABV do not scale well to large filter databases due to their storage requirements. In this thesis, we propose a new bit-vector-based algorithm named Aggregated Bit Vector Plus (ABV+). The key idea behind ABV+ is to replace each bit vector with two values for the selected trie. Since the length of a bit vector is equal to the number of filter rules, replacing the bit vector with two short, fixed-length fields can significantly reduce the storage requirement. For synthetic databases with 50K filter rules, experimental results show that ABV+ can reduce the storage requirement by 65% and the search time by 42% compared with ABV.
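A small sketch of the basic bit-vector scheme that BV and ABV build on, using Python integers as bit vectors; the two-field rules and the linear prefix match are simplifications for illustration and do not include the ABV+ aggregation itself.

```python
# Minimal bit-vector packet classification sketch (the plain BV idea): each
# header field yields a bit vector with one bit per rule; ANDing the per-field
# vectors gives the rules matched on all fields, and the lowest set bit is the
# highest-priority match.

# Toy rule set: (source prefix, destination port or None for wildcard).
rules = [
    ("10.0.",  80),     # rule 0 (highest priority)
    ("10.0.",  None),   # rule 1
    ("192.",   443),    # rule 2
    (None,     None),   # rule 3: default rule
]

def field_vector(match_fn):
    """Build a bit vector of all rules whose field accepts the packet."""
    vec = 0
    for i, rule in enumerate(rules):
        if match_fn(rule):
            vec |= 1 << i
    return vec

def classify(src_ip, dst_port):
    src_vec = field_vector(lambda r: r[0] is None or src_ip.startswith(r[0]))
    port_vec = field_vector(lambda r: r[1] is None or r[1] == dst_port)
    matches = src_vec & port_vec                  # rules matching on every field
    return (matches & -matches).bit_length() - 1  # index of lowest set bit = best rule

print(classify("10.0.3.7", 80))    # -> 0
print(classify("192.168.1.1", 22)) # -> 3 (default rule)
```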
Lin, Keng-Pei, and 林耕霈. "Efficient Data Classification with Privacy-Preservation." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/47593951590552273335.
National Taiwan University, Graduate Institute of Electrical Engineering, academic year 99 (2010–2011).
Data classification is a widely used data mining technique which learns classifiers from labeled data to predict the labels of unlabeled instances. Among data classification algorithms, the support vector machine (SVM) shows state-of-the-art performance. Data privacy is a critical concern in applying data mining techniques. In this dissertation, we study how to achieve privacy preservation in utilizing the SVM as well as how to efficiently generate the SVM classifier. Outsourcing has become popular with current cloud computing trends. Since the training algorithm of the SVM involves intensive computations, outsourcing to external service providers can benefit a data owner who possesses only limited computing resources. In outsourcing, data privacy is a critical concern since the data may contain sensitive information. In addition to the data, the classifier generated from the data is also private to the data owner. Existing privacy-preserving SVM outsourcing techniques are weak in security. In Chapter 2, we propose a secure privacy-preserving SVM outsourcing scheme in which the data are perturbed by a random linear transformation, which is stronger in security than existing works. The service provider generates the SVM classifier from the perturbed data, and the classifier is also in perturbed form and cannot be accessed by the service provider. In Chapter 3, we study the inherent privacy violation problem in the SVM classifier. The SVM trains a classifier by solving an optimization problem to decide which instances of the training dataset are support vectors, the informative instances needed to form the SVM classifier. Since support vectors are intact tuples taken from the training dataset, releasing the SVM classifier to the public or to other parties discloses the private content of the support vectors. We propose an approach to post-process the SVM classifier to transform it into a privacy-preserving SVM classifier which does not disclose the private content of the support vectors: it precisely approximates the decision function of the Gaussian-kernel SVM classifier without exposing the individual content of the support vectors. The privacy-preserving SVM classifier thus releases the prediction ability of the SVM classifier without violating individual data privacy. The efficiency of the SVM is also an important issue, since for large-scale data the SVM solver converges slowly. In Chapter 4, we design an efficient SVM training algorithm based on the kernel approximation technique developed in Chapter 3. The kernel function brings powerful classification ability to the SVM, but it incurs additional computational cost in the training process; in contrast, faster solvers exist for training the linear SVM. We capitalize on the kernel approximation technique to compute kernel evaluations as dot products of explicit low-dimensional features, leveraging an efficient linear SVM solver to train a nonlinear kernel SVM. In addition to being an efficient training scheme, it obtains a privacy-preserving SVM classifier directly, i.e., the classifier does not disclose any individual instance. We conduct extensive experiments over our studies. Experimental results show that the privacy-preserving SVM outsourcing scheme, the privacy-preserving SVM classifier, and the efficient SVM training scheme based on kernel approximation achieve classification accuracy similar to a normal SVM classifier while obtaining the properties of privacy preservation and efficiency, respectively.
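One standard way to realise the "explicit low-dimensional features whose dot product approximates the kernel" idea from the abstract above is random Fourier features for the Gaussian kernel; the sketch below uses scikit-learn's RBFSampler with LinearSVC on a synthetic dataset and is not the dissertation's exact construction.

```python
from sklearn.datasets import make_classification
from sklearn.kernel_approximation import RBFSampler
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Approximate a Gaussian-kernel SVM with explicit random Fourier features
# followed by a fast linear SVM solver. Gamma, the number of components and
# the dataset are illustrative choices.
X, y = make_classification(n_samples=3000, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = make_pipeline(
    RBFSampler(gamma=0.1, n_components=300, random_state=0),  # z(x) with z(x)·z(y) ≈ k(x, y)
    LinearSVC(C=1.0),
)
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
```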
Ahmed, Omar. "Towards Efficient Packet Classification Algorithms and Architectures." Thesis, 2013. http://hdl.handle.net/10214/7406.
Full textPark, Sang-Hyeun. "Efficient Decomposition-Based Multiclass and Multilabel Classification." Phd thesis, 2012. http://tuprints.ulb.tu-darmstadt.de/2994/1/diss_shpark.pdf.
Full textMagalhães, Ricardo Manuel Correia. "Energy Efficient Smartphone-based Users Activity Classification." Master's thesis, 2019. https://hdl.handle.net/10216/119355.
Full textTzou, Yi-ru, and 鄒依儒. "Cache Strategies for Efficient Lazy Associative Classification." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/05991093986242149764.
National Chung Cheng University, Institute of Computer Science and Information Engineering, academic year 98 (2009–2010).
Lazy associative classification generates rules from the features of training instances that are closely related to the testing instance. When a large data set is mined, a huge number of rules is produced. The rules are the frequent itemsets whose support exceeds a minimum threshold, and the frequent itemsets originate from combining candidate itemsets generated from the training data. Hence, lazy associative classification spends a lot of time computing support for every itemset built from the features of the training instances, and these rules are used repeatedly. Therefore, we apply cache strategies to lazy associative classification to improve efficiency in terms of both accuracy and speed. Newly generated rules are added to the cache; when the cache is full, one of four methods is used to discard rules: FIFO, LRU, DLC (discarding the lowest confidence) and DLD (discarding the lowest difference). For each rule, five values are recorded in the cache, including the support value, the confidence value, a FIFO index, an LRU index and the difference, and these are used to discard excess rules when the cache is full. With these data, lazy associative classification classifies quickly while maintaining accuracy. In this work, the confidence threshold is set automatically to the average confidence instead of being chosen manually. Two datasets are used: Edoc and ModApte-Top10. On Edoc, accuracy improves by 3.11% and classification is about 1.27 times faster; on ModApte-Top10, accuracy improves by 2.24% and classification is about 3.95 times faster.
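As an illustration of the cache component, here is a tiny LRU rule cache in the spirit of one of the four eviction strategies mentioned above; the rule representation, the cache size and the recorded fields are placeholders, not the thesis's data structures.

```python
from collections import OrderedDict

# Tiny LRU cache for classification rules: keys are rule antecedents (feature
# itemsets), values are (consequent class, support, confidence). Eviction by
# least-recent use corresponds to the LRU strategy in the abstract.
class RuleCache:
    def __init__(self, capacity=1000):
        self.capacity = capacity
        self._rules = OrderedDict()

    def get(self, antecedent):
        """Return a cached rule and mark it as recently used, or None on a miss."""
        if antecedent not in self._rules:
            return None
        self._rules.move_to_end(antecedent)
        return self._rules[antecedent]

    def put(self, antecedent, consequent, support, confidence):
        """Insert a newly mined rule, evicting the least recently used one if full."""
        if antecedent in self._rules:
            self._rules.move_to_end(antecedent)
        self._rules[antecedent] = (consequent, support, confidence)
        if len(self._rules) > self.capacity:
            self._rules.popitem(last=False)   # discard the LRU rule

cache = RuleCache(capacity=2)
cache.put(frozenset({"price=high", "age=young"}), "class=buy", 0.12, 0.81)
cache.put(frozenset({"price=low"}), "class=skip", 0.30, 0.65)
cache.get(frozenset({"price=high", "age=young"}))             # refreshes this rule
cache.put(frozenset({"age=old"}), "class=skip", 0.05, 0.70)   # evicts {"price=low"}
print(list(cache._rules.keys()))
```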