Dissertations / Theses on the topic 'KNN classification'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'KNN classification.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Mestre, Ricardo Jorge Palheira. "Improvements on the KNN classifier." Master's thesis, Faculdade de Ciências e Tecnologia, 2013. http://hdl.handle.net/10362/10923.

Full text
Abstract:
Dissertation submitted for the degree of Master in Computer Engineering
Object classification is an important area of artificial intelligence, with applications extending to many fields, scientific and otherwise. Among classifiers, the k-nearest neighbors (KNN) method is one of the simplest and most accurate, especially when the data distribution is unknown or not readily parameterizable. The algorithm assigns to the element being classified the majority class among its K nearest neighbors. In the original algorithm, this classification requires computing the distance between the instance being classified and every object in the training set. If, on the one hand, an extensive training set is important for obtaining high accuracy, on the other hand it slows the classification of each object, owing to the algorithm's lazy-learning nature. Indeed, the algorithm stores no information about previously computed classifications, so classifying two identical instances means repeating the full calculation. In a sense, this classifier does not learn. This dissertation focuses on this lazy-learning weakness and proposes a solution that transforms KNN into an eager-learning classifier. In other words, the algorithm should learn effectively from the training set, avoiding redundant calculations. In the context of the proposed change, it is important to highlight the attributes that best characterize the objects according to their discriminating power. Within this framework, the implementation of these transformations is studied on data of different types: continuous and/or categorical.
APA, Harvard, Vancouver, ISO, and other styles
2

Hanson, Sarah Elizabeth. "Classification of ADHD Using Heterogeneity Classes and Attention Network Task Timing." Thesis, Virginia Tech, 2018. http://hdl.handle.net/10919/83610.

Full text
Abstract:
Throughout the 1990s, ADHD diagnosis and medication rates increased rapidly, and this trend continues today. These sharp increases have been met with both public and clinical criticism, detractors stating that over-diagnosis is a problem and that healthy children are being unnecessarily medicated and labeled as disabled. However, others say that ADHD is under-diagnosed in some populations. Critics often state that multiple factors introduce subjectivity into the diagnosis process, meaning that a final diagnosis may be influenced by more than the desire to protect a patient's wellbeing. Some of these factors include standardized testing, legislation affecting special education funding, and the diagnostic process itself. In an effort to circumvent these extraneous factors, this work aims to further develop a potential method of using EEG signals to accurately discriminate between ADHD and non-ADHD children, using features that capture spectral and perhaps temporal information from evoked EEG signals. KNN has been shown in prior research to be an effective tool for discriminating between ADHD and non-ADHD, so several different KNN models are created using features derived in a variety of fashions: one takes into account the heterogeneity of ADHD, and another seeks to exploit differences in executive functioning between ADHD and non-ADHD subjects. The results of this classification method vary widely depending on the sample used to train and test the KNN model. With unfiltered Dataset 1 data over the entire ANT1 period, the most accurate EEG channel pair achieved an overall vector classification accuracy of 94%, and the 5th percentile of classification confidence was 80%. These metrics suggest that applying KNN to EEG signals taken during the ANT task would be a useful diagnostic tool. However, the most accurate channel pair for unfiltered Dataset 2 data achieved an overall accuracy of 65% and a 5th percentile of classification confidence of 17%.
The same method that worked so well for Dataset 1 did not work well for Dataset 2, and no conclusive reason for this difference was identified, although several methods to remove possible sources of noise were used. Using target time linked intervals did appear to marginally improve results in both Dataset 1 and Dataset 2. However, the changes in accuracy of intervals relative to target presentation vary between Dataset 1 and Dataset 2. Separating subjects into heterogeneity classes does appear to result in good (up to 83%) classification accuracy for some classes, but results are poor (about 50%) for other heterogeneity classes. A much larger data set is necessary to determine whether or not the very positive results found with Dataset 1 extend to a wide population.
Master of Science
3

Bel, Haj Ali Wafa. "Minimisation de fonctions de perte calibrée pour la classification des images." Phd thesis, Université Nice Sophia Antipolis, 2013. http://tel.archives-ouvertes.fr/tel-00934062.

Full text
Abstract:
Image classification is today a challenge of considerable scale, since it concerns, on the one hand, the millions or even billions of images found all over the web and, on the other hand, images for critical real-time applications. This classification generally relies on learning methods and on classifiers that must deliver both accuracy and speed. These learning problems now touch a large number of application domains: the web (profiling, targeting, social networks, search engines), "Big Data", and of course computer vision, such as object recognition and image classification. This thesis belongs to the latter category and presents supervised learning algorithms based on the minimization of so-called "calibrated" loss (error) functions for two types of classifiers: k-nearest neighbors (kNN) and linear classifiers. These learning methods were tested on large image databases and then applied to biomedical images. In a first step, this thesis reformulates a boosting algorithm for kNN, and then presents a second method for learning these NN classifiers using a Newton descent approach for faster convergence. In a second part, the thesis introduces a new stochastic Newton descent learning algorithm for linear classifiers, which are known for their simplicity and computational speed. Finally, these three methods were applied to a medical application concerning the classification of cells in biology and pathology.
4

Lopez, Marcano Juan L. "Classification of ADHD and non-ADHD Using AR Models and Machine Learning Algorithms." Thesis, Virginia Tech, 2016. http://hdl.handle.net/10919/73688.

Full text
Abstract:
As of 2016, diagnosis of ADHD in the US is controversial. Diagnosis of ADHD is based on subjective observations, and treatment is usually done through stimulants, which can have negative side effects in the long term. Evidence shows that the probability of diagnosing a child with ADHD depends not only on the observations of parents, teachers, and behavioral scientists, but also on state-level special education policies. In light of these facts, unbiased, quantitative methods are needed for the diagnosis of ADHD. This problem has been tackled since the 1990s, and has resulted in methods that have not made it past the research stage and methods whose claimed performance could not be reproduced. This work proposes a combination of machine learning algorithms and signal processing techniques applied to EEG data in order to classify subjects with and without ADHD with high accuracy and confidence. More specifically, the K-nearest Neighbor algorithm and Gaussian-Mixture-Model-based Universal Background Models (GMM-UBM), along with autoregressive (AR) model features, are investigated and evaluated for the classification problem at hand. In this effort, classical KNN and GMM-UBM were also modified in order to account for uncertainty in diagnoses. The major findings reported in this work include classification performance as high as, if not higher than, that of the highest-performing algorithms found in the literature. Another major finding is that activities requiring attention help the discrimination of ADHD and non-ADHD subjects: mixing in EEG data from periods of rest or with eyes closed leads to a loss of classification performance, to the point of approximating guessing when only resting EEG data is used.
Master of Science
5

Li, Sichu. "Application of Machine Learning Techniques for Real-time Classification of Sensor Array Data." ScholarWorks@UNO, 2009. http://scholarworks.uno.edu/td/913.

Full text
Abstract:
There is a significant need to identify approaches for classifying chemical sensor array data with high success rates that would enhance sensor detection capabilities. The present study attempts to fill this need by investigating six machine learning methods to classify a dataset collected using a chemical sensor array: K-Nearest Neighbor (KNN), Support Vector Machine (SVM), Classification and Regression Trees (CART), Random Forest (RF), Naïve Bayes Classifier (NB), and Principal Component Regression (PCR). A total of 10 predictors that are associated with the response from 10 sensor channels are used to train and test the classifiers. A training dataset of 4 classes containing 136 samples is used to build the classifiers, and a dataset of 4 classes with 56 samples is used for testing. The results generated with the six different methods are compared and discussed. The RF, CART, and KNN are found to have success rates greater than 90%, and to outperform the other methods.
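The train/test protocol this study uses can be illustrated with a toy version (hypothetical synthetic data standing in for the non-public chemical sensor dataset; only the 4-class, 136/56-sample, 10-predictor shape matches the study, and all names are ours):

```python
import math
import random
from collections import Counter

random.seed(0)

def make_samples(n, center, label):
    """n noisy 10-dimensional points around `center`, tagged with `label`."""
    return [([random.gauss(c, 0.3) for c in center], label) for _ in range(n)]

# Hypothetical 4-class data with 10 predictors, mirroring only the shape of
# the study's dataset: 136 training and 56 test samples over 10 channels.
centers = {"w": [0] * 10, "x": [1] * 10, "y": [0, 1] * 5, "z": [1, 0] * 5}
train = sum((make_samples(34, c, lab) for lab, c in centers.items()), [])
test = sum((make_samples(14, c, lab) for lab, c in centers.items()), [])

def knn_predict(query, k=5):
    nn = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    return Counter(lab for _, lab in nn).most_common(1)[0][0]

success = sum(knn_predict(f) == lab for f, lab in test) / len(test)
print(f"KNN success rate: {success:.0%}")
```

On this easy synthetic data KNN scores near 100%; the study's reported rates above 90% for RF, CART, and KNN refer to the real sensor data, not this sketch.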
6

Do, Cao Tri. "Apprentissage de métrique temporelle multi-modale et multi-échelle pour la classification robuste de séries temporelles par plus proches voisins." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAM028/document.

Full text
Abstract:
The definition of a metric between time series is inherent in several data analysis and mining tasks, including clustering, classification, and forecasting. Time series naturally present several characteristics, called modalities, covering their amplitude, behavior, or frequency spectrum, which may be expressed with varying delays and at different temporal granularities and localizations, exhibited globally or locally. Combining several modalities at multiple temporal scales to learn a holistic metric is a key challenge for many real-world temporal data applications. This PhD thesis proposes a Multi-modal and Multi-scale Temporal Metric Learning (M2TML) approach for robust nearest-neighbour time series classification. The solution is based on embedding pairs of time series into a pairwise dissimilarity space, in which a large-margin optimization process is performed to learn the metric. The M2TML solution is proposed for both linear and nonlinear contexts, and is studied for different regularizers. A sparse and interpretable variant of the solution shows the ability of the learned temporal metric to accurately localize discriminative modalities as well as their temporal scales. A wide range of 30 public and challenging datasets, encompassing images, traces, and ECG data, linearly or nonlinearly separable, is used to show the efficiency and potential of M2TML for nearest-neighbour time series classification.
7

Villa, Medina Joe Luis. "Reliability of classification and prediction in k-nearest neighbours." Doctoral thesis, Universitat Rovira i Virgili, 2013. http://hdl.handle.net/10803/127108.

Full text
Abstract:
This doctoral thesis develops the calculation of classification reliability and prediction reliability using the k-nearest neighbours (kNN) method and bootstrap-based resampling strategies. Two new classification methods were also developed, Probabilistic Bootstrap k-Nearest Neighbours (PBkNN) and Bagged k-Nearest Neighbours (Bagged kNN), as well as a new prediction method, Direct Orthogonalization kNN (DOkNN). In all cases, the results obtained with the new methods were comparable to or better than those obtained using classical methods of classification and multivariate calibration.
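The bootstrap idea behind attaching a reliability figure to a kNN prediction can be sketched as follows (a generic illustration in the spirit of the thesis, not its exact PBkNN or Bagged kNN formulation; all names and data here are ours):

```python
import math
import random
from collections import Counter

def knn_label(train, query, k=3):
    """Plain kNN majority vote over (features, label) pairs."""
    nn = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    return Counter(lab for _, lab in nn).most_common(1)[0][0]

def bootstrap_reliability(train, query, k=3, B=200, seed=1):
    """Classify `query` on B bootstrap resamples of `train`; the fraction of
    resamples agreeing with the winning class serves as a reliability score."""
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(B):
        resample = [rng.choice(train) for _ in train]
        votes[knn_label(resample, query, k)] += 1
    label, count = votes.most_common(1)[0]
    return label, count / B

train = [((0.0, 0.0), "a"), ((0.2, 0.1), "a"), ((0.1, 0.3), "a"), ((0.3, 0.2), "a"),
         ((1.0, 1.0), "b"), ((1.1, 0.9), "b"), ((0.9, 1.2), "b"), ((1.2, 1.1), "b")]
label, reliability = bootstrap_reliability(train, (0.1, 0.1))
print(label, reliability)  # "a", with reliability close to 1
```

A query near the decision boundary would instead yield a reliability closer to 0.5, flagging the classification as unreliable.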
8

Ozsakabasi, Feray. "Classification Of Forest Areas By K Nearest Neighbor Method: Case Study, Antalya." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/12609548/index.pdf.

Full text
Abstract:
Among the various remote sensing methods that can be used to map forest areas, the K Nearest Neighbor (KNN) supervised classification method is becoming increasingly popular for creating forest inventories in some countries. In this study, the utility of the KNN algorithm is evaluated for forest/non-forest/water stratification. Antalya is selected as the study area. The data used are composed of Landsat TM and Landsat ETM satellite images, acquired in 1987 and 2002, respectively, an SRTM 90-meter digital elevation model (DEM), and land use data from the year 2003. The accuracies of different modifications of the KNN algorithm are evaluated using Leave One Out, which is a special case of K-fold cross-validation, and traditional accuracy assessment using error matrices. The best parameters are found to be the Euclidean distance metric, inverse distance weighting, and k equal to 14, using bands 4, 3 and 2. With these parameters, the cross-validation error is 0.009174, and the overall accuracy is around 86%. The results are compared with those from the Maximum Likelihood algorithm. KNN results are found to be accurate enough for the practical applicability of this method for mapping forest areas.
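The inverse-distance-weighted voting that this study found best can be sketched generically as follows (toy one-dimensional "pixels" and made-up class names, not the study's actual Landsat band values):

```python
import math
from collections import defaultdict

def weighted_knn(train, query, k=3, eps=1e-9):
    """kNN where each of the k nearest neighbors votes with weight 1/distance,
    so closer samples count more; `eps` guards against division by zero."""
    nn = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    scores = defaultdict(float)
    for features, label in nn:
        scores[label] += 1.0 / (math.dist(features, query) + eps)
    return max(scores, key=scores.get)

# Toy one-dimensional spectral values with hypothetical class names.
train = [((0.0,), "forest"), ((0.4,), "forest"), ((1.0,), "water"), ((1.1,), "water")]
print(weighted_knn(train, (0.5,)))   # → forest
print(weighted_knn(train, (1.05,)))  # → water
```

With unweighted voting, a query equidistant from two classes can flip on ties; inverse-distance weighting resolves such cases in favor of the closer neighbors.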
9

Joseph, Katherine Amanda. "Comparison of Segment and Pixel Based Non-Parametric Classification of Land Cover in the Amazon Region of Brazil Using Multitemporal Landsat TM/ETM+ Imagery." Thesis, Virginia Tech, 2005. http://hdl.handle.net/10919/32802.

Full text
Abstract:
This study evaluated the ability of segment-based classification paired with non-parametric methods (CART and kNN) to classify a chronosequence of Landsat TM/ETM+ imagery spanning 1992 to 2002 within the state of Rondônia, Brazil. Pixel-based classification was also implemented for comparison. Interannual multitemporal composites were used in each classification in an attempt to increase the separation of primary forest, cleared, and re-vegetated classes within a given year. The kNN and CART classification methods, with the integration of multitemporal data, performed equally well, with overall accuracies ranging from 77% to 91%. Pixel-based CART classification, although not different in terms of mean or median overall accuracy, did have significantly lower variability than all other techniques (3.2% vs. an average of 13.2%), and thus provided more consistent results. Segmentation did not improve classification success over pixel-based methods and was therefore an unnecessary processing step with the dataset used. Through the appropriate band selection methods of the respective non-parametric classifiers, multitemporal bands were chosen in 38 of the 44 total classifications, strongly suggesting the utility of interannual multitemporal data for the separation of cleared, re-vegetated, and primary forest classes. The separation of the primary forest class from the cleared and re-vegetated classes was particularly successful and may be a result of the incorporation of multitemporal data. The land cover maps from this study allow for an accurate annualized analysis of land cover and can be coupled with household data to gain a better understanding of landscape change in the region.
Master of Science
10

Buani, Bruna Elisa Zanchetta. "Aplicação da Lógica Fuzzy kNN e análises estatísticas para seleção de características e classificação de abelhas." Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-10012011-085835/.

Full text
Abstract:
This work proposes a solution to the problem of classifying bee species by implementing an algorithm based on geometric morphometrics and shape analysis of the landmarks extracted from images of bee wings. The algorithm is based on the k-nearest neighbors (kNN) algorithm and Fuzzy kNN (fuzzy k-nearest neighbor) applied to analyzed and selected two-dimensional data points corresponding to the landmarks. This work is part of a reference architecture for an automatic identification and taxonomic classification system for stingless bees using wing morphometry. The study includes methods for selecting and ordering landmarks for use in the algorithm, through a mathematical model that computes the most significant landmarks (represented by mathematical landmarks) and formulates a significance order whose elements serve as input variables for the Fuzzy kNN. The knowledge involved in this work includes unsupervised feature selection such as clustering and data mining, data pre-processing analysis, statistical approaches to estimation and prediction, shape analysis, Procrustes analysis and geometric morphometrics on the data, and, as the main topic, a modification of the k-nearest neighbors algorithm and the application of Fuzzy kNN to the problem. The results show that classification of bee samples within their own group reaches an accuracy of about 90%, depending on the species under analysis, while classification between bee species reaches an accuracy of 97%.
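The Fuzzy kNN membership computation at the core of this approach can be sketched as follows (a generic Keller-style formulation; the wing-landmark features and significance ordering of the dissertation are omitted, and the data and names here are ours):

```python
import math

def fuzzy_knn_memberships(train, query, k=3, m=2.0, eps=1e-9):
    """Per-class membership degrees in [0, 1] from the k nearest neighbors,
    weighted by inverse distance with fuzzifier m (Keller-style Fuzzy kNN)."""
    nn = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    weights = [1.0 / (math.dist(f, query) ** (2.0 / (m - 1.0)) + eps) for f, _ in nn]
    total = sum(weights)
    memberships = {}
    for (_, label), w in zip(nn, weights):
        memberships[label] = memberships.get(label, 0.0) + w / total
    return memberships

# Made-up 2-D "landmark" coordinates and species names, for illustration only.
train = [((0.0, 0.0), "sp1"), ((0.1, 0.1), "sp1"), ((1.0, 1.0), "sp2")]
ms = fuzzy_knn_memberships(train, (0.05, 0.05))
print(max(ms, key=ms.get))  # → sp1
```

Unlike crisp kNN, the output is a degree of membership per class rather than a single label, which lets borderline samples be flagged instead of force-assigned.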
11

Javanmardi, Ramtin, and Dawood Rehman. "Classification of Healthy and Alzheimer's Patients Using Electroencephalography and Supervised Machine Learning." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-229650.

Full text
Abstract:
Alzheimer's is one of the most costly illnesses that exist today, and the number of people with Alzheimer's disease is expected to increase by about 100 million by the year 2050. The medications that exist today are most effective when Alzheimer's is detected in its early stages, since they do not cure the disease but only slow its progression. Electroencephalography (EEG) is a relatively cheap diagnostic method compared with, for example, magnetic resonance imaging. However, it is not clear how a human analyst should deduce from EEG data alone whether a patient has Alzheimer's disease. This is the underlying motivation for our investigation: can supervised machine learning methods perform pattern recognition using only the spectral power of EEG data to tell whether an individual has Alzheimer's disease or not? The trained supervised machine learning models showed an average accuracy above 80%. This indicates that there is a difference in the neural oscillations of the brain between healthy individuals and Alzheimer's patients which the machine learning methods are able to detect through pattern recognition.
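The spectral-power features this kind of study feeds to its classifiers can be sketched as follows (a generic band-power computation on a synthetic signal; the band limits, sampling rate, and preprocessing here are our assumptions, not the authors' pipeline):

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Mean spectral power of `signal` (sampled at `fs` Hz) within [lo, hi] Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

# Synthetic "EEG": a dominant 10 Hz oscillation plus noise, 2 s at 256 Hz.
fs = 256
t = np.arange(0, 2.0, 1.0 / fs)
signal = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
alpha = band_power(signal, fs, 8, 12)   # the 10 Hz tone lands in this band
beta = band_power(signal, fs, 13, 30)
print(alpha > beta)  # → True
```

A vector of such band powers per channel is the kind of feature a supervised model can then be trained on.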
12

Björn, Albin. "Using machine learning to predict power deviations at Forsmark." Thesis, Uppsala universitet, Institutionen för fysik och astronomi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-443225.

Full text
Abstract:
The power output at the Forsmark nuclear power plant sometimes deviates from the expected value. The causes of these deviations are sometimes known and sometimes unknown. Three types of machine learning methods (k-nearest neighbors, support vector machines, and linear regression) were trained to predict whether or not the power deviation would fall outside an expected interval. The data used to train the models was gathered from points in the power production process, and the data signals consisted mostly of temperatures, pressures, and flows. A large part of the project was dedicated to preparing the data before using it to train the models. Temperature signals were shown to be the best predictors of deviation in power, followed by pressure and flow. The best-performing model type was k-nearest neighbors, followed by support vector machines and linear regression. Principal component analysis was performed to reduce the size of the training datasets; models trained on the reduced data performed as well on the prediction task as models trained without it.
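The PCA-then-nearest-neighbour pipeline described above can be sketched as follows (synthetic data standing in for the plant signals; the component count, dataset shape, and names are our assumptions):

```python
import numpy as np

def pca_fit(X, n_components):
    """Mean and top principal components of the rows of X, via SVD."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def nearest_neighbor_label(train_X, train_y, query):
    """Label of the single nearest training row (1-NN)."""
    return train_y[np.argmin(np.linalg.norm(train_X - query, axis=1))]

rng = np.random.default_rng(0)
# Hypothetical signals: 10 features, of which 2 separate the two classes
# (deviation outside the expected interval = 1, inside = 0).
X = rng.normal(size=(40, 10))
y = np.array([0, 1] * 20)
X[y == 1, :2] += 3.0

mean, components = pca_fit(X, 2)            # reduce 10 features to 2
Z = (X - mean) @ components.T
query = (X[1] - mean) @ components.T        # project a known class-1 sample
print(nearest_neighbor_label(Z, y, query))  # → 1
```

Because the informative variance survives the projection, classification in the 2-dimensional space matches classification in the full space, mirroring the thesis's finding that PCA-reduced data predicted equally well.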
13

Stümer, Wolfgang. "Kombination von terrestrischen Aufnahmen und Fernerkundungsdaten mit Hilfe der kNN-Methode zur Klassifizierung und Kartierung von Wäldern." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2004. http://nbn-resolving.de/urn:nbn:de:swb:14-1096379861218-08302.

Full text
Abstract:
In recent years, politics and industry have developed a growing need for information about forests. Remote sensing is an important tool for meeting this need, since it can provide area-wide data. The k-nearest-neighbours (kNN) method, which combines terrestrial samples with remote sensing data, is one way to build such a database. This dissertation therefore examines the kNN method in depth. Extensive calculations were carried out for two attributes, basal area (metric data) and deadwood (categorical data), considering numerous variations of the kNN method: different settings of the distance metric, the weighting function, and the number k of nearest neighbours. Landsat and hyperspectral data, which differ in both their spectral and their spatial resolution, served as remote sensing sources. Landsat scenes of the same area from different dates (September 1999 and 2000) also allowed a multitemporal approach. The terrestrial database consists of field surveys from the test site Tharandter Wald (Germany) with three different sampling designs, an important criterion being the even distribution of attribute values (e.g. basal area) across the feature space. A Visual Basic program integrating all kNN functions was developed to carry out the calculations, and the pixel-wise output was compiled into detailed maps. Results were verified using the relative root mean square error (RMSE) and the bootstrap method. The estimation accuracy for basal area lies between 35% and 67% (Landsat) and between 65% and 67% (HyMapTM). For deadwood, the agreement between kNN estimates and reference values lies between 60.0% and 73.3% (Landsat) and between 60.0% and 63.3% (HyMapTM). With these accuracies, the kNN method lends itself to stand classification and to integration into classification procedures.
14

SANTOS, Fernando Chagas. "Variações do método kNN e suas aplicações na classificação automática de textos." Universidade Federal de Goiás, 2010. http://repositorio.bc.ufg.br/tede/handle/tde/499.

Full text
Abstract:
Most research on Automatic Text Categorization (ATC) seeks to improve the classifier performance (effective or efficient) responsible for automatically classifying a document d not yet rated. The k nearest neighbors (kNN) is simpler and it s one of automatic classification methods more effective as proposed. In this paper we proposed two kNN variations, Inverse kNN (kINN) and Symmetric kNN (kSNN) with the aim of improving the effectiveness of ACT. The kNN, kINN and kSNN methods were applied in Reuters, 20ng and Ohsumed collections and the results showed that kINN and kSNN methods were more effective than kNN method in Reuters and Ohsumed collections. kINN and kSNN methods were as effective as kNN method in 20NG collection. In addition, the performance achieved by kNN method is more stable than kINN and kSNN methods when the value k change. A parallel study was conducted to generate new features in documents from the similarity matrices resulting from the selection criteria for the best results obtained in kNN, kINN and kSNN methods. The SVM (considered a state of the art method) was applied in Reuters, 20NG and Ohsumed collections - before and after applying this approach to generate features in these documents and the results showed statistically significant gains for the original collection.
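The kINN and kSNN variants are the thesis's contributions and are only summarized above; the baseline they modify, kNN text classification by cosine similarity over term vectors with majority voting, can be sketched as follows (toy term-frequency vectors and labels are invented).

```python
import numpy as np

def knn_text_classify(query_vec, train_vecs, train_labels, k=3):
    """Baseline kNN for text: cosine similarity between term vectors,
    majority vote among the k most similar training documents."""
    norms = np.linalg.norm(train_vecs, axis=1) * np.linalg.norm(query_vec)
    sims = train_vecs @ query_vec / np.maximum(norms, 1e-12)
    top = np.argsort(sims)[::-1][:k]                   # k most similar documents
    labels, counts = np.unique(train_labels[top], return_counts=True)
    return labels[np.argmax(counts)]                   # majority vote

# Toy term-frequency vectors (columns: invented vocabulary terms)
train_vecs = np.array([[3, 0, 1], [2, 1, 0], [0, 4, 2], [0, 3, 3]], float)
train_labels = np.array(["sport", "sport", "politics", "politics"])
print(knn_text_classify(np.array([1.0, 0.0, 0.0]), train_vecs, train_labels, k=3))
```

The inverted variant would instead, as we read the abstract, vote with the training documents that have the query among their own nearest neighbors; that direction is not sketched here.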
APA, Harvard, Vancouver, ISO, and other styles
15

Axillus, Viktor. "Comparing Julia and Python : An investigation of the performance on image processing with deep neural networks and classification." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-19160.

Full text
Abstract:
Python is the most popular language when it comes to prototyping and developing machine learning algorithms. Python is an interpreted language, which causes it to have a significant performance loss compared to compiled languages. Julia is a newly developed language that tries to bridge the gap between high-performance but cumbersome languages such as C++ and highly abstracted but typically slow languages such as Python. However, over the years the Python community has developed a lot of tools that address its performance problems. This raises the question of whether choosing one language over the other makes any significant performance difference. This thesis compares the performance, in terms of execution time, of the two languages in the machine learning domain: more specifically, image processing with GPU-accelerated deep neural networks and classification with k-nearest neighbor on the MNIST and EMNIST datasets. Python with Keras and TensorFlow is compared against Julia with Flux for GPU-accelerated neural networks. For classification, Python with Scikit-learn is compared against Julia with NearestNeighbors.jl. The results point in the direction that Julia has a performance edge with regard to GPU-accelerated deep neural networks, with Julia outperforming Python by roughly 1.25x - 1.5x. For classification with k-nearest neighbor the results were more varied, with Julia outperforming Python in 5 out of 8 different measurements. However, there exist some validity threats, and additional research that includes all the different frameworks available for the two languages is needed in order to provide a more conclusive and generalized answer.
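A hedged sketch of how such an execution-time measurement might be set up on the Python side, with a plain brute-force NumPy neighbor search standing in for the frameworks the thesis actually benchmarks (Scikit-learn and NearestNeighbors.jl), and random vectors standing in for flattened MNIST images:

```python
import time
import numpy as np

def brute_force_knn(queries, train, k):
    """Indices of the k nearest training points for each query
    (squared Euclidean distances, fully vectorised)."""
    d2 = ((queries[:, None, :] - train[None, :, :]) ** 2).sum(axis=2)
    return np.argsort(d2, axis=1)[:, :k]

rng = np.random.default_rng(0)
train = rng.normal(size=(1000, 32))   # stand-in for flattened image vectors
queries = rng.normal(size=(50, 32))

t0 = time.perf_counter()
idx = brute_force_knn(queries, train, k=5)
elapsed = time.perf_counter() - t0
print(f"found {idx.shape} neighbour indices in {elapsed:.4f} s")
```

A fair cross-language comparison would of course repeat such timings, discard warm-up runs, and use the libraries' optimized tree-based searches rather than this brute-force loop.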
APA, Harvard, Vancouver, ISO, and other styles
16

Bastas, Selin A. "Nocturnal Bird Call Recognition System for Wind Farm Applications." University of Toledo / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1325803309.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Jun, Yang. "Analysis and Visualization of the Two-Dimensional Blood Flow Velocity Field from Videos." Thesis, Université d'Ottawa / University of Ottawa, 2015. http://hdl.handle.net/10393/32539.

Full text
Abstract:
We estimate the velocity field of the blood flow in a human face from videos. Our approach first performs spatial preprocessing to improve the signal-to-noise ratio (SNR) and the computational efficiency. The discrete Fourier transform (DFT) and a temporal band-pass filter are then applied to extract the frequency corresponding to the subject's heart rate. We propose a multiple-kernel-based k-NN classification for removing the noisy positions from the resulting phase and amplitude maps. The 2D blood flow field is then estimated from the relative phase shift between the pixels. We evaluate both the segmentation and the velocity field of our approach on real and synthetic face videos, reporting the average recall and precision of the segmentation and the average angular and magnitude errors of the velocity field.
APA, Harvard, Vancouver, ISO, and other styles
18

Alsouda, Yasser. "An IoT Solution for Urban Noise Identification in Smart Cities : Noise Measurement and Classification." Thesis, Linnéuniversitetet, Institutionen för fysik och elektroteknik (IFE), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-80858.

Full text
Abstract:
Noise is defined as any undesired sound. Urban noise and its effect on citizens are a significant environmental problem, and the increasing level of noise has become a critical problem in some cities. Fortunately, noise pollution can be mitigated by better planning of urban areas or controlled by administrative regulations. However, the execution of such actions requires well-established systems for noise monitoring. In this thesis, we present a solution for noise measurement and classification using a low-power and inexpensive IoT unit. To measure the noise level, we implement an algorithm for calculating the sound pressure level in dB, achieving a measurement error of less than 1 dB. Our machine learning-based method for noise classification uses Mel-frequency cepstral coefficients for audio feature extraction and four supervised classification algorithms (namely, support vector machine, k-nearest neighbors, bootstrap aggregating, and random forest). We evaluate our approach experimentally with a dataset of about 3000 sound samples grouped into eight sound classes (such as car horn, jackhammer, or street music). We explore the parameter space of the four algorithms to estimate the optimal parameter values for the classification of sound samples in the dataset under study, and achieve a noise classification accuracy in the range of 88% to 94%.
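The classification stage described above (MFCC features fed to four supervised classifiers) can be sketched with Scikit-learn as follows. The MFCC front end is omitted: random vectors stand in for 13-dimensional feature vectors of two invented sound classes, so the printed accuracies say nothing about the thesis's 88%-94% result.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier

# Synthetic stand-ins for 13-dimensional MFCC feature vectors of two
# well-separated sound classes (real features would come from audio).
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 1.0, (100, 13)), rng.normal(2.0, 1.0, (100, 13))])
y = np.array([0] * 100 + [1] * 100)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# The four supervised algorithms named in the abstract
accuracies = {}
for name, clf in [("SVM", SVC()),
                  ("kNN", KNeighborsClassifier(n_neighbors=5)),
                  ("Bagging", BaggingClassifier(random_state=0)),
                  ("Random forest", RandomForestClassifier(random_state=0))]:
    accuracies[name] = clf.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: {accuracies[name]:.2f}")
```

Exploring the parameter space, as the thesis does, would wrap each classifier in a grid search over its hyperparameters (e.g. `n_neighbors` for kNN).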
APA, Harvard, Vancouver, ISO, and other styles
19

Lind, Johan. "Make it Meaningful : Semantic Segmentation of Three-Dimensional Urban Scene Models." Thesis, Linköpings universitet, Datorseende, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-143599.

Full text
Abstract:
Semantic segmentation of a scene aims to give meaning to the scene by dividing it into meaningful (semantic) parts. Understanding the scene is of great interest for all kinds of autonomous systems, but manual annotation is simply too time consuming, which is why there is a need for an alternative approach. This thesis investigates the possibility of automatically segmenting 3D-models of urban scenes, such as buildings, into a predetermined set of labels. The approach was to first acquire ground truth data by manually annotating five 3D-models of different urban scenes. The next step was to extract features from the 3D-models and evaluate which ones constitute a suitable feature space. Finally, three supervised learners were implemented and evaluated: k-Nearest Neighbour (KNN), Support Vector Machine (SVM) and Random Classification Forest (RCF). The classifications were done point-wise, classifying each 3D-point in the dense point cloud belonging to the model being classified. The result showed that the most suitable feature space is not necessarily the one containing all features. The KNN classifier achieved the highest average accuracy over all models, classifying 42.5% of the 3D points correctly. The RCF classifier managed to classify 66.7% of the points correctly in one of the models, but had worse performance on the rest of the models, resulting in a lower average accuracy than KNN. In general, KNN, SVM, and RCF seemed to have different benefits and drawbacks. KNN is simple and intuitive but by far the slowest classifier when dealing with a large set of training data. SVM and RCF are both fast but difficult to tune as there are more parameters to adjust. Whether the reason for the relatively low best accuracy was the lack of ground truth training data, unbalanced validation models, or the capacity of the learners was never investigated due to a limited time span; this ought to be investigated in future studies.
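Point-wise kNN classification of a point cloud, as described above, can be sketched on an invented toy cloud where the per-point features are simply the 3D coordinates; the thesis evaluates richer feature spaces, so this is only the shape of the procedure, not its content.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Invented toy point cloud: flat "ground" points near z = 0 and
# "roof" points near z = 3, with per-point features (x, y, z).
rng = np.random.default_rng(1)
ground = np.c_[rng.uniform(0, 10, (200, 2)), rng.normal(0.0, 0.1, 200)]
roof = np.c_[rng.uniform(0, 10, (200, 2)), rng.normal(3.0, 0.1, 200)]
X = np.vstack([ground, roof])
y = np.array(["ground"] * 200 + ["roof"] * 200)

# Point-wise classification: each query point receives the majority
# label of its 5 nearest annotated points.
clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
preds = clf.predict([[5.0, 5.0, 0.05], [2.0, 8.0, 2.9]])
print(preds)
```

The slowness the thesis observes for KNN on large training sets stems from this per-point neighbor search, which grows with the number of annotated points.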
APA, Harvard, Vancouver, ISO, and other styles
20

Härenby, Deak Elliot. "Investigation of Machine Learning Methods for Anomaly Detection and Characterisation of Cable Shoe Pressing Processes." Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-82721.

Full text
Abstract:
The ability to reliably connect electrical cables is important in many applications. A poor connection can become a fire hazard, so it is important that cables are always appropriately connected. This thesis investigates methods for monitoring a machine that presses cable connectors onto cables. Using sensor data from the machine, would it be possible to create an algorithm that can automatically identify the cable and connector, and thus make decisions on how a connector should be pressed for successful attachment? Furthermore, would it be possible to create an anomaly detection algorithm able to detect whether a connector has been incorrectly pressed by the end user? If these two questions can be addressed, the solutions would minimise the likelihood of errors and enable detection of the errors that nevertheless arise. In this thesis, it is shown that the k-Nearest Neighbour (kNN) algorithm and the Long Short-Term Memory (LSTM) network are both successful in the classification of connectors and cables, both performing with 100% accuracy on the test set. The LSTM is the more promising alternative in terms of convergence and speed, being 28 times faster as well as requiring less memory. Distance-based methods and an autoencoder are investigated for the anomaly detection task. Data corresponding to a wide variety of possible incorrect kinds of usage of the tool were collected. The best anomaly detector detects 92% of incorrect cases of varying degrees of difficulty, a number which was higher than expected. On the tasks investigated, the performance of the neural networks is equal to or higher than that of the alternative methods.
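A minimal sketch of a distance-based anomaly detector of the kind mentioned above: score a new press by its mean distance to the k nearest normal training samples, and flag it when the score exceeds a threshold. The features, threshold and data here are all invented, since the abstract does not describe the actual sensor representation.

```python
import numpy as np

def knn_anomaly_score(x, train, k=3):
    """Mean distance to the k nearest training samples; large scores
    flag presses unlike anything seen during normal operation."""
    d = np.linalg.norm(train - x, axis=1)
    return float(np.sort(d)[:k].mean())

rng = np.random.default_rng(7)
normal_presses = rng.normal(0.0, 1.0, size=(300, 4))  # invented sensor features
threshold = 2.5                                       # would be set on held-out normal data

print(knn_anomaly_score(np.zeros(4), normal_presses) < threshold)      # normal press
print(knn_anomaly_score(np.full(4, 6.0), normal_presses) > threshold)  # incorrect press
```

An autoencoder-based detector, by contrast, would score a press by its reconstruction error rather than by neighbor distances.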
APA, Harvard, Vancouver, ISO, and other styles
21

Prabhakar, Yadu. "Detection and counting of Powered Two Wheelers in traffic using a single-plane Laser Scanner." Phd thesis, INSA de Rouen, 2013. http://tel.archives-ouvertes.fr/tel-00973472.

Full text
Abstract:
The safety of Powered Two Wheelers (PTWs) is important for public authorities and road administrators around the world. Recent official figures show that PTWs are estimated to represent only 2% of the total traffic but 30% of total deaths on French roads. However, as these estimated figures are obtained by simply counting the number plates registered, they do not give a true picture of the PTWs on the road at any given moment. This dissertation comes under the project METRAMOTO; it is an applied technical research work dealing with two problems: the detection of PTWs and the use of a laser scanner to count PTWs in traffic. Traffic generally contains random vehicles of unknown nature and behaviour (speed, interaction with other road users, etc.). Even though there are several technologies that can measure traffic, for example radars, cameras and magnetometers, PTWs are small-sized vehicles that often move between lanes at quite a high speed compared to the vehicles moving in the adjacent lanes. This makes them difficult to detect. The proposed solution is composed of the following parts: a configuration to install the laser scanner on the road is chosen, and a data coherence method is introduced so that the system is able to detect the road verges and its own height above the road surface; this is validated in simulation. The raw data obtained are then pre-processed and transformed into the spatio-temporal domain. Following this, an extraction algorithm called the Last Line Check (LLC) method is proposed. Once extracted, the object is classified using one of two classifiers, either the Support Vector Machine (SVM) or the k-Nearest Neighbour (KNN). Finally, the results given by each of the two classifiers are compared and presented in this research work.
The proposed solution is a prototype intended to be integrated into a real-time system that can be installed on a highway to detect, extract, classify and count PTWs in real time under all traffic conditions (traffic at normal speeds, dense traffic and even traffic jams).
APA, Harvard, Vancouver, ISO, and other styles
22

Boulay, Thomas. "Développement d'algorithmes pour la fonction NCTR - Application des calculs parallèles sur les processeurs GPU." Phd thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00907979.

Full text
Abstract:
The main theme of this thesis is the study of non-cooperative target recognition (NCTR) algorithms. The goal is to perform recognition within the "fighter" class using range profiles. We study four algorithms: one based on the k-nearest neighbours (KNN) algorithm, one on probabilistic methods, and two on fuzzy logic. A major constraint on NCTR algorithms is controlling the error rate while maximising the success rate. We were able to show that the first two algorithms do not satisfy this constraint. We proposed, in contrast, two fuzzy-logic algorithms that do satisfy it. For the first of the two, this comes at the expense of the success rate (notably on real data); the second version of the algorithm, however, increased the success rate considerably while keeping the error rate under control. The principle of this algorithm is to characterise class membership range bin by range bin, notably by introducing data acquired in an anechoic chamber. We also proposed a procedure for adapting the anechoic-chamber data acquired for a given class to other target classes. The second strong constraint on NCTR algorithms is real-time operation. A thorough study of a parallelisation of the KNN-based algorithm was carried out at the beginning of the thesis. This study brought out the points to take into account when parallelising NCTR algorithms on GPUs. Its conclusions will subsequently allow future NCTR algorithms, notably those proposed in this thesis, to be parallelised efficiently on GPUs.
APA, Harvard, Vancouver, ISO, and other styles
23

Marin, Rodenas Alfonso. "Comparison of Automatic Classifiers’ Performances using Word-based Feature Extraction Techniques in an E-government setting." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-32363.

Full text
Abstract:
Nowadays email is commonly used by citizens to establish communication with their government. In the received emails, governments deal with some common queries and subjects, which handling officers have to answer manually. Automatic classification of the incoming emails allows an increase in communication efficiency by decreasing the delay between a query and its response. This thesis is part of the IMAIL project, which aims to provide an automatic answering solution for the Swedish Social Insurance Agency (SSIA) ("Försäkringskassan" in Swedish). The goal of this thesis is to analyze and compare the classification performance of different sets of features extracted from SSIA emails on different automatic classifiers. The features extracted from the emails also depend on the preprocessing that is carried out: compound splitting, lemmatization, stop word removal, Part-of-Speech tagging and N-grams are the processes applied to the data set. Moreover, classifications are performed using Support Vector Machines, k-Nearest Neighbors and Naive Bayes. For the analysis and comparison of the different results, precision, recall and F-measure are used. From the results obtained in this thesis, SVM provides the best classification, with an F-measure value of 0.787. However, Naive Bayes provides a better classification than SVM for most of the email categories. Thus, it cannot be concluded whether SVM classifies better than Naive Bayes or not. Furthermore, a comparison to Dalianis et al. (2011) is made. The results obtained in this approach outperformed the previous ones: SVM provided an F-measure value of 0.858 when using PoS-tagging on the original emails, improving by almost 3% on the 0.83 obtained in Dalianis et al. (2011). In this case, SVM was clearly better than Naive Bayes.
APA, Harvard, Vancouver, ISO, and other styles
24

Halle, Alex, and Alexander Hasse. "Topologieoptimierung mittels Deep Learning." Technische Universität Chemnitz, 2019. https://monarch.qucosa.de/id/qucosa%3A34343.

Full text
Abstract:
Topology optimization is the search for an optimal component geometry as a function of the use case. For complex problems, topology optimization can require a great deal of time and computing capacity due to a high level of detail. These drawbacks of topology optimization are to be reduced by means of deep learning, so that topology optimization can serve the design engineer as an aid delivering results within seconds. Deep learning is the extension of artificial neural networks, with which patterns or behavioural rules can be learned. The topology optimization hitherto computed numerically is thus to be solved with a deep learning approach. To this end, approaches, computation schemes and first conclusions are presented and discussed.
APA, Harvard, Vancouver, ISO, and other styles
25

Šenovský, Jakub. "Dolování z dat v jazyce Python." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2017. http://www.nusl.cz/ntk/nusl-363895.

Full text
Abstract:
The main goal of this thesis was to get acquainted with the phases of data mining and with the support that the programming languages Python and R offer in this field, and to demonstrate their use in two case studies. A comparison of the two languages in the field of data mining is also included. The data preprocessing phase and the mining algorithms for classification, prediction and clustering are described, and the most significant libraries for Python and R are presented. In the first case study, work with time series is demonstrated using the ARIMA model and neural networks, with precision verified using the mean square error. In the second case study, the results of football matches are classified using k-Nearest Neighbors, a Bayes classifier, Random Forest and logistic regression. The precision of the classification is displayed using the accuracy score and the confusion matrix. The work concludes with an evaluation of the achieved results and suggestions for future improvement of the individual models.
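The second case study's classification step can be sketched with Scikit-learn using the four named methods, reporting the accuracy score and the confusion matrix; the match features below are random stand-ins for real football data, so the numbers are purely illustrative.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix

# Invented numeric match features (e.g. recent form, goals for/against)
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(-1.0, 1.0, (150, 4)), rng.normal(1.0, 1.0, (150, 4))])
y = np.array(["loss"] * 150 + ["win"] * 150)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

accuracies = {}
for name, clf in [("k-Nearest Neighbors", KNeighborsClassifier()),
                  ("Bayes classifier", GaussianNB()),
                  ("Random Forest", RandomForestClassifier(random_state=0)),
                  ("Logistic regression", LogisticRegression())]:
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    accuracies[name] = accuracy_score(y_te, pred)
    print(name, round(accuracies[name], 2))
    print(confusion_matrix(y_te, pred))
```

The same loop translates almost line for line to R (e.g. with `caret`), which is what makes the two languages comparable for this kind of study.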
APA, Harvard, Vancouver, ISO, and other styles
26

Rekathati, Faton. "Curating news sections in a historical Swedish news corpus." Thesis, Linköpings universitet, Statistik och maskininlärning, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-166313.

Full text
Abstract:
The National Library of Sweden uses optical character recognition software to digitize their collections of historical newspapers. The purpose of such software is first to automatically segment text and images from scanned newspaper pages, and second to read the contents of the identified text regions. While the raw text is often digitized successfully, important contextual information regarding whether the text constitutes for example a header, a section title or the body text of an article is not captured. These characteristics are easy for a human to distinguish, yet they remain difficult for a machine to recognize. The main purpose of this thesis is to investigate how well section titles in the newspaper Svenska Dagbladet can be classified by using so-called image embeddings as features. A secondary aim is to examine whether section titles become harder to classify in older newspaper data. Lastly, we explore whether manual annotation work can be reduced by using the predictions of a semi-supervised classifier to help in the labeling process. Results indicate that the use of image embeddings helps quite substantially in classifying section titles. Datasets from three different time periods (1990-1997, 2004-2013, and 2017 onwards) were sampled and annotated. The best performing model (XGBoost) achieved macro F1 scores of 0.886, 0.936 and 0.980 for the respective time periods. The results also showed that classification became more difficult on older newspapers. Furthermore, a semi-supervised classifier managed an average precision of 83% with only single section title examples, showing promise as a way to speed up manual annotation of data.
APA, Harvard, Vancouver, ISO, and other styles
27

Hersperger, Anna M., and Silvia Tobias. "Wie kann der Flächenverbrauch begrenzt werden? Erfahrungen aus der Schweiz." Rhombos-Verlag, 2019. https://slub.qucosa.de/id/qucosa%3A72246.

Full text
Abstract:
High land consumption has long been an important issue in Switzerland, particularly because the land suitable for settlement is scarce. In the last 10 to 15 years the issue has gained further urgency due to strong population growth. The revised Spatial Planning Act, which came into force on 1 May 2014, is consistently geared towards creating compact settlements and making better use of derelict or insufficiently used land within existing building zones. Cantons and municipalities are thereby held more accountable than before. The national foundations for the protection of agricultural land are also currently being revised, but attract criticism as well, particularly from agricultural circles. At the municipal level, our research identified the following important spatial planning strategies for achieving compact settlements: building on the basis of design plans, an adapted approach to density indices, the designation of sensible settlement areas in municipal land-use plans and, where necessary, down-zoning. Combinations of regulation, negotiation and active land policy produce the variety of approaches to limiting land consumption observed in Swiss municipalities. An outstanding role is played by the continuity of the municipal political and spatial-planning strategy, which is often tied to individual actors. Although much experience is available at all levels, spatial planning will continue to depend on the innovative and continuous engagement of all actors.
APA, Harvard, Vancouver, ISO, and other styles
28

Qamar, Ali Mustafa. "Mesures de similarité et cosinus généralisé : une approche d'apprentissage supervisé fondée sur les k plus proches voisins." Phd thesis, Grenoble, 2010. http://www.theses.fr/2010GRENM083.

Full text
Abstract:
Almost all machine learning problems depend heavily on the metric used. Many works have proved that it is a far better approach to learn the metric structure from the data rather than assuming a simple geometry based on the identity matrix. This has paved the way for a new research theme called metric learning. Most of the works in this domain have based their approaches on distance learning only. However some other works have shown that similarity should be preferred over distance metrics while dealing with textual datasets as well as with non-textual ones. Being able to efficiently learn appropriate similarity measures, as opposed to distances, is thus of high importance for various collections. If several works have partially addressed this problem for different applications, no previous work is known which has fully addressed it in the context of learning similarity metrics for kNN classification. This is exactly the focus of the current study. In the case of information filtering systems where the aim is to filter an incoming stream of documents into a set of predefined topics with little supervision, cosine based category specific thresholds can be learned. Learning such thresholds can be seen as a first step towards learning a complete similarity measure. This strategy was used to develop Online and Batch algorithms for information filtering during the INFILE (Information Filtering) track of the CLEF (Cross Language Evaluation Forum) campaign during the years 2008 and 2009. However, provided enough supervised information is available, as is the case in classification settings, it is usually beneficial to learn a complete metric as opposed to learning thresholds. To this end, we developed numerous algorithms for learning complete similarity metrics for kNN classification. An unconstrained similarity learning algorithm called SiLA is developed in which case the normalization is independent of the similarity matrix. 
SiLA encompasses, among others, the standard cosine measure, as well as the Dice and Jaccard coefficients. SiLA is an extension of the voted perceptron algorithm and allows one to learn different types of similarity functions (based on diagonal, symmetric or asymmetric matrices). We then compare SiLA with RELIEF, a well known feature re-weighting algorithm. It has recently been suggested by Sun and Wu that RELIEF can be seen as a distance metric learning algorithm optimizing a cost function which is an approximation of the 0-1 loss. We show here that this approximation is loose, and propose a stricter version closer to the 0-1 loss, leading to a new, and better, RELIEF-based algorithm for classification. We then focus on a direct extension of the cosine similarity measure, defined as a normalized scalar product in a projected space. The associated algorithm is called the generalized Cosine simiLarity Algorithm (gCosLA). All of the algorithms are tested on many different datasets. A statistical test, the s-test, is employed to assess whether the results are significantly different. gCosLA performed statistically much better than SiLA on many of the datasets. Furthermore, SiLA and gCosLA were compared with many state-of-the-art algorithms, illustrating their well-foundedness.
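The "normalized scalar product in a projected space" underlying gCosLA can be sketched as follows. Learning the projection matrix A is the thesis's contribution and is not reproduced here; A is fixed by hand for illustration, and with A equal to the identity the measure reduces to the plain cosine.

```python
import numpy as np

def generalized_cosine(x, y, A):
    """Cosine similarity computed in a projected space: the normalised
    scalar product of Ax and Ay (A = identity gives the plain cosine)."""
    px, py = A @ x, A @ y
    return float(px @ py / (np.linalg.norm(px) * np.linalg.norm(py)))

x = np.array([1.0, 0.0])
y = np.array([1.0, 1.0])
print(generalized_cosine(x, y, np.eye(2)))            # plain cosine
print(generalized_cosine(x, y, np.diag([1.0, 0.1])))  # down-weighting the second dimension
```

In a kNN classifier this similarity replaces the distance: neighbors are the training points with the highest generalized cosine to the query, and learning A reshapes which points count as neighbors.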
APA, Harvard, Vancouver, ISO, and other styles
29

Qian, Kun [Verfasser], Björn W. [Akademischer Betreuer] Schuller, Björn W. [Gutachter] Schuller, and Werner [Gutachter] Hemmert. "Automatic General Audio Signal Classification / Kun Qian ; Gutachter: Björn W. Schuller, Werner Hemmert ; Betreuer: Björn W. Schuller." München : Universitätsbibliothek der TU München, 2018. http://d-nb.info/1173898948/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Pinon, Catherine. "La nébuleuse de kān : classification des différents emplois de kāna/yakūnu à partir d'un corpus d'arabe contemporain." Thesis, Aix-Marseille, 2012. http://www.theses.fr/2012AIXM3078/document.

Full text
Abstract:
This dissertation studies the various uses of the verb-tool kāna in contemporary Arabic. Part I. We start by reviewing how kāna has been described by Arab grammarians and Arabic specialists. We look at both content and form, evaluating the extent to which these descriptions conform to the language they describe. Part II. In order to examine the contemporary Arabic language we chose to use the corpus linguistics methodology. After outlining some theoretical considerations and providing a state of the art in corpus linguistics applied to the Arabic language, we discuss the constitution of our own corpus. This digital corpus includes three types of texts (blogs, literature, press) from seven different countries (Saudi Arabia, Egypt, Lebanon, Morocco, Syria, Tunisia and Yemen). Numbering altogether 1.5 million words, the texts were all published after 2002. Part III. We classify 15,000 instances of kāna and analyze their uses. We quantify the various functions, patterns and expressions through which kāna is deployed, seeking to identify the values conveyed by the verb, especially modal values. We locate this study within an ecology of language by scrutinizing the diatopic and generic settings of the various occurrences.
31

Riedel, Wolfgang. "Was kann KOffice wirklich?" Universitätsbibliothek Chemnitz, 2001. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-200100297.

Abstract:
Joint workshop of the University Computing Centre and the Chair of "Computer Networks and Distributed Systems" of the Faculty of Computer Science at TU Chemnitz. Workshop topic: Mobility. An analysis of the usability of the current version of KOffice under Linux/KDE2 for creating business documents (texts, spreadsheets, vector graphics, presentations).
32

Bente, Klaus. "Kann Universität Heimat sein?" Universitätsbibliothek Leipzig, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-119735.

33

Erlitz, Monique. "Was kann Ehrenamt?" Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2011. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-74927.

Abstract:
HALLE 14 is a non-profit art centre at the Leipziger Baumwollspinnerei. The public art library of HALLE 14, with a total collection of 36,000 books and media on contemporary art, has been publicly accessible since January 2009 in the visitor centre at the heart of the former industrial building. The library's collection, which grows by around 3,000 media items per year, is being catalogued step by step. Kathrin Winkler, a qualified media and information services specialist, has been volunteering at the HALLE 14 library since November 2010. In a conversation, Monique Erlitz, project manager of the HALLE 14 art library, and Kathrin Winkler discussed the motivations for, as well as the problems and opportunities of, volunteer work.
34

Steinheuser, Sylvia, and Joachim Zülch. "Kann personales Vertrauen virtuell produziert und reproduziert werden?" Technische Universität Dresden, 2004. https://tud.qucosa.de/id/qucosa%3A29595.

Abstract:
Trust is frequently discussed as a defining component of virtual forms of organisation, not least because trust can be ascribed the function of a coordination and control mechanism both within and between organisations. This becomes all the more important given that work in virtual cooperations is characterised by unpredictability and thus by low certainty of expectations. Following Luhmann's (2000) notion of trust as a mechanism for reducing socially generated complexity, trust can be regarded as essential for coping with the enormous level of complexity and the resulting considerable demands for flexibility placed on employees on both sides of the cooperation.
35

Lilliedahl, Jonathan. "Musik i (ut)bildning : gränsdragningar och inramningar i läroplans(kon)texter för gymnasieskolan." Doctoral thesis, Örebro universitet, Musikhögskolan, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-27541.

Abstract:
The purpose of this dissertation is critically to illustrate discursive recontextualization between sociocultural production and reproduction, with respect to both relations within and relations to music education in Swedish upper-secondary school. The starting point for the study is the Swedish upper-secondary school reform, Gy 2011, which has involved a marked reformulation of the agenda for music education in upper-secondary school. The general Artistic Activities disappeared, at the same time as the significance of a specialising education in the field was strengthened. This dissertation is driven by the desire to understand the results of the upper-secondary school reform by explaining the processes and principles involved. But, in a wider perspective, the dissertation deals not only with a single reform, but encompasses a search for the underlying principles that have had, and are having, a regulating effect on the design and positioning of music in publicly regulated education. The results show that structuring of the subject of music takes place primarily through the classification and framing of social relationships in general, and of interactional relationships in particular. The focus of these relationships has shifted from time to time, and varies from context to context, but has always been in relation to something that has been regarded as sacred. In recent times, the framing within music-oriented knowledge practices has become weaker. At the same time, such knowledge practices have shown an increasing need for the drawing of boundaries in relation to other knowledge practices. The latter also has a value in explaining why general music content was removed from the upper-secondary school curriculum, whereas a special and specialising educational programme was able to gain legitimacy.
36

Wahren, Sebastian, and Daniela Hoferer. "Das Schulprojekt Vot ken you mach mobil. Erfahrungsbericht über den Comicworkshop." HATiKVA e.V. – Die Hoffnung Bildungs- und Begegnungsstätte für Jüdische Geschichte und Kultur Sachsen, 2014. https://slub.qucosa.de/id/qucosa%3A34989.

37

Lindell, Oscar, and David Ström. "Byggdelsklassificering av installationer : En fallstudie i hur BSAB-systemet kan utvecklas." Thesis, KTH, Byggteknik och design, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-149455.

Abstract:
Building Information Modelling (BIM) represents a completely new way of working where object-based 3D models are the main source of information. The models can carry large amounts of information in comparison with the traditional paper drawing, which is limited to text, symbols, and a 2D visualization of the building. Working with BIM creates new requirements on how information should be structured to ensure that it is interpreted in the same manner by all participants in a project and by all the software that is used to handle it. This report deals specifically with HVAC and the classification of construction elements, which is the basis for identifying objects in the model. We have investigated BSAB 96, which is a well-established system for the classification of construction elements, but in its current state it does not cover the needs for use in BIM. This is a key factor in being able to connect the right information to BIM objects. We explain the theory behind classification systems, BSAB 96 in particular, and how they are applied to BIM. The problem areas and development needs are highlighted, and in a case study two concrete suggestions are proposed for expanding the classification table for HVAC construction elements in BSAB 96; one for practical use today and one based on a fully integrated computerized process.
38

Peter, Ronald. "Damit das Seil größere Kräfte übertragen kann." Hochschule für Technik und Wirtschaft Dresden, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:520-qucosa-144152.

Abstract:
Materials handling and conveying technology requires robust hoisting ropes that can carry heavy loads at a small rope diameter. The newly developed friction test rig is intended to help find practicable solutions for increasing the traction between hoisting rope and traction sheave. It will also be used to assess the stress on the rope and the resulting damage to the service life of the elements of the rope drive. Technological options for minimising the wear of hoisting rope and traction sheave can likewise be investigated. The coefficient of friction can also be improved by means of a suitable groove design of the traction sheave and by using special lubricants. For each selected rope construction, optimal groove designs and surface profiles of the traction sheave lining are to be determined. In special applications, a special lubricant with ferromagnetic nanoparticles, combined with permanent magnets below the rope groove, is intended to increase the coefficient of friction even further.
39

Büscher, Barbara. "Aufzeichnen.Transformieren - wie Wissen über vergangene Aufführungen zugänglich werden kann: eine medientheoretische Skizze." Hochschule für Musik und Theater 'Felix Mendelssohn Bartholdy' Leipzig, 2015. https://slub.qucosa.de/id/qucosa%3A7494.

Abstract:
If we understand documents or traces of performances as medial transformations whose technical-apparatus-related and aesthetic-discursive conditions must be reflected upon, then the question of their medial character is linked to questions about the procedures of their production, the methods of transformation, and the associated practices of their scholarly processing. Medially distinct practices of RECORDING, whether in writing, drawing, diagram, or audio-visual storage, analogue and digital, are described as a process, e.g. with regard to their recording modes, and in relation to one another.
40

Wiesenmüller, Heidrun. "Nicht zu unterschätzen: Schlagwortketten – und was man alles damit machen kann." Universitätsbibliothek Chemnitz, 2009. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-200901315.

Abstract:
According to a widespread assumption, subject heading strings are of little use in OPACs. Yet they not only contribute considerably to comprehensibility in the display, but can also be used in retrieval in many ways: the subject heading string index offers a display ordered by content criteria and enables quick relevance decisions. In the full title display, even inexperienced users can find similar titles with a single click (continuing the search via the subject heading string). Innovative new applications are, on the one hand, drill-down with subject heading strings (e.g. UB Augsburg, UB Regensburg) and, on the other, keyword-in-string search (returning all strings that contain the search terms at any position; e.g. Helveticat); in some cases, however, technical improvements are still needed. In the future, a string-specific, i.e. "subject-sharp", search should also be realised, returning only those titles in which the search terms occur within the same subject heading string. It would also be useful to offer the display of any result set in the form of a subject heading string index (subject view). Subject heading strings and modern OPAC technology therefore fit together well, and improving the use of subject heading strings in the OPAC is a matter of economic good sense. Example: the National Library in Bern. Its subject heading strings were examined, using "Geography of Switzerland" as an example: about two fifths of the strings are redundant because they are duplicates.
41

Ritschel, Susanne. "Ubiquitär in Zeit, Raum und Materie: Repräsentationen jüdischer Zugehörigkeiten in der Ausstellung Vot ken you mach?" HATiKVA e.V. – Die Hoffnung Bildungs- und Begegnungsstätte für Jüdische Geschichte und Kultur Sachsen, 2014. https://slub.qucosa.de/id/qucosa%3A34992.

42

Bertoncini-Zúbková, Elena. "A friend in need is a friend indeed: Ken Walibora's novel Kufa kuzikana." Swahili Forum 14 (2007), S. 153-163, 2007. https://ul.qucosa.de/id/qucosa%3A11500.

Abstract:
After being for a long time in the shadow of its Tanzanian counterpart, Kenyan fiction has recently come into the foreground with writers such as Kyallo Wadi Wamitila, Rocha Chimerah, Mwenda Mbatiah and Ken Walibora. The paper deals with his second novel Kufa kuzikana. Although Kufa kuzikana is a powerful accusation of how ruthless ethnic feelings still inform many people, from intellectuals and top politicians to uneducated villagers, the novel also contains a positive message in that it shows how true friendship can overcome ethnic and other differences and survive even in the most adverse circumstances.
43

Betzwieser, Thomas. "Kann man eine musikalische Interpretation ‚edieren‘?: Anmerkungen zur Freischütz-Aufnahme von Eugen Jochum und ihren Quellen." Allitera Verlag, 2016. https://slub.qucosa.de/id/qucosa%3A20964.

44

Toussaint, Claude. "User Experience messen und gezielt steuern – Jeder will es, doch wer kann es? Wir zeigen, wie es geht!" TUDpress - Verlag der Wissenschaften GmbH, 2012. https://tud.qucosa.de/id/qucosa%3A29765.

Abstract:
designaffairs has been developing strategies and design for products in the areas of hardware, software and services for 20 years. With more than 70 experts worldwide, we offer services in research, strategy, design and engineering, successfully combining creativity with scientific methods. [... from the text]
45

Hiller, Lars, and Daniela Hoferer. "Yet I´m not the author. Vot ken you mach mobil – Projekttage zu jüdischer Identität." HATiKVA e.V. – Die Hoffnung Bildungs- und Begegnungsstätte für Jüdische Geschichte und Kultur Sachsen, 2014. https://slub.qucosa.de/id/qucosa%3A34988.

46

Pavani, Sri-Kaushik. "Methods for face detection and adaptive face recognition." Doctoral thesis, Universitat Pompeu Fabra, 2010. http://hdl.handle.net/10803/7567.

Abstract:
The focus of this thesis is on facial biometrics; specifically in the problems of face detection and face recognition. Despite intensive research over the last 20 years, the technology is not foolproof, which is why we do not see use of face recognition systems in critical sectors such as banking. In this thesis, we focus on three sub-problems in these two areas of research. Firstly, we propose methods to improve the speed-accuracy trade-off of the state-of-the-art face detector. Secondly, we consider a problem that is often ignored in the literature: to decrease the training time of the detectors. We propose two techniques to this end. Thirdly, we present a detailed large-scale study on self-updating face recognition systems in an attempt to answer if continuously changing facial appearance can be learnt automatically.
47

Gerike, Regine. "Wie kann das Leitbild nachhaltiger Verkehrsentwicklung konkretisiert werden?: Ableitung grundlegender Aufgabenbereiche." Doctoral thesis, Technische Universität Dresden, 2004. https://tud.qucosa.de/id/qucosa%3A24597.

Abstract:
The starting point of this thesis is the guiding vision of sustainable transport development. A prerequisite for implementing this qualitative vision is its operationalisation, a task that proves difficult because of the process character of the concept of sustainable development. A description of this concept by means of concrete, measurable indicators is nevertheless necessary if the status quo and implemented measures are to be evaluated against this goal. This thesis attempts to bring the contradiction between the process character of sustainable development and the necessity of describing it through concrete indicators closer to a resolution for the transport sector. The following central questions arise: How can the guiding vision of sustainable transport development be made concrete while at the same time doing justice to the process character of the sustainability concept? What recommendations can be derived for measures promoting sustainable transport development? To answer these questions, the concept of sustainable development is first delimited for the purposes of this thesis. As a result of this step, the concept of needs is placed at the centre of the further work: sustainable development is regarded as development oriented towards human needs. On the basis of a needs-theoretical discussion, the following task areas are then developed: Social task area: Because of the contradictory and changeable nature of human needs, the satisfaction of all human needs is unsuitable as a goal of transport measures. The basis of sustainable transport development, and the goal of the social task area, is therefore a state-provided basic level of transport services that guarantees the satisfaction of fundamental needs.
This basic provision is described via minimum standards and supplemented by application-specific planning. Allocation area: The market is a suitable instrument for realising all mobility desires that go beyond the basic provision defined in the social task area. It identifies and satisfies needs well, though with limitations that must be compensated by the social task area and the resource area. The goal of this task area is the reduction of market imperfections. Resource area: The market mechanism's insufficient consideration of distributional questions gives rise to distribution-policy task areas of sustainable transport development: social questions are covered by the social task area, while the subject of the resource area is regulating the distribution of natural resources as the initial endowment for the process of producing transport services. The task areas developed are combined into a development corridor: the lower boundary is formed by the basic transport provision to be guaranteed within the social task area, which should not be undercut. Carrying-capacity limits to be formulated within the resource area form the upper boundary of the corridor and should not be exceeded. The rules for all activities within these boundaries are set by the allocation area, whose goal is to ensure functioning market mechanisms. Following the development of this corridor, options for making it concrete are presented. If parts of these options are selected through societal discussion, a final description of the goal of sustainable transport development becomes possible for individual concrete applications.
Finally, as part of an exemplary analysis of the status quo for the Free State of Saxony, transport-related CO2 emissions and the external costs of transport are quantified.
48

Vogel, Michael. "Zukunft bewahren: wie kann oder soll der Erhalt schriftlichen Kulturguts realisiert werden?" SLUB Dresden, 2016. https://slub.qucosa.de/id/qucosa%3A7382.

Abstract:
In October 2015, the Coordination Office for the Preservation of Written Cultural Heritage (KEK) published nationwide recommendations for action on the preservation of written cultural heritage in archives and libraries (BWHE). These recommendations make visible the necessity of nationwide coordination and cooperation.
49

Röhr, Tobias. "Kreislaufwirtschaft nach dem Cradle-to-Cradle-Vorbild: Wie kann ein geschlossener Ressourcenkreislauf erreicht werden?: Eine Untersuchung unternehmerischer Konzepte mit Beispielen aus der Praxis." Technische Universität Chemnitz, 2021. https://monarch.qucosa.de/id/qucosa%3A73659.

Abstract:
The currently prevailing linear economic principle is responsible for many environmental problems. In addition to immense environmental pollution, the constantly growing consumption of resources is causing an increasing shortage of many valuable raw materials. An intelligent circular economy concept such as Cradle to Cradle can counteract these problems. Entrepreneurial approaches are needed for a successful implementation of Cradle to Cradle. This article examines four concepts that can be implemented in a circular economy: design for disassembly, product-service systems, take-back strategies, and reverse logistics. For each of these four approaches, the prerequisites as well as the barriers to implementation within a circular economy system are shown. In addition, it is shown that they can be realised in a Cradle-to-Cradle system. Furthermore, real-world examples are presented of companies that have already successfully implemented the various models. All four concepts examined are suitable for a circular economy system in compliance with the Cradle to Cradle criteria.
50

Popplow, Laura. "Nur Mut zum Prozess! Oder: Wie kann zeitgemäße, mediale Ausstellungsgestaltung gelingen?" Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-111345.

Abstract:
The visit to the Staatliches Museum für Archäologie Chemnitz, then still under construction, offered the participants of the Dresden Summer School 2012 the opportunity to learn about the planning of a completely new museum, with very modern exhibition architecture and media staging, from the staff and the lead exhibition designer. (...)