Dissertations on the topic "Classification: Advanced Methods"

To view other types of publications on this topic, follow the link: Classification: Advanced Methods.

Format your source according to APA, MLA, Chicago, Harvard, and other citation styles


Consult the top 27 dissertations for your research on the topic "Classification: Advanced Methods".

Next to every entry in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, provided that these are available in the record metadata.

Browse dissertations from a wide range of disciplines and compile your bibliography correctly.

1

Zeggada, Abdallah. "Advanced classification methods for UAV imagery." Doctoral thesis, Università degli studi di Trento, 2018. https://hdl.handle.net/11572/367947.

Full text of the source
Abstract:
The rapid technological advancement manifested lately in the remote sensing acquisition platforms has triggered many benefits in favor of automated territory control and monitoring. In particular, unmanned aerial vehicles (UAVs) technology has drawn a lot of attention, providing an efficient solution especially in real-time applications. This is mainly motivated by their capacity to collect extremely high resolution (EHR) data over inaccessible areas and limited coverage zones, thanks to their small size and rapidly deployable flight capability, notwithstanding their ease of use and affordability. The very high level of details of the data acquired via UAVs, however, in order to be properly availed, requires further treatment through suitable image processing and analysis approaches. In this respect, the proposed methodological contributions in this thesis include: i) a complete processing chain which assists the Avalanche Search and Rescue (SAR) operations by scanning the UAV acquired images over the avalanche debris in order to detect victims buried under snow and their related objects in real time; ii) two multilabel deep learning strategies for coarsely describing extremely high resolution images in urban scenarios; iii) a novel multilabel conditional random fields classification framework that exploits simultaneously spatial contextual information and cross-correlation between labels; iv) a novel spatial and structured support vector machine for multilabel image classification by adding to the cost function of the structured support vector machine a term that enhances spatial smoothness within a one-step process. Conducted experiments on real UAV images are reported and discussed alongside suggestions for potential future improvements and research lines.
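As an illustration of the kind of model described in points iii) and iv) above, the sketch below defines a toy multilabel energy over a grid of image tiles that combines per-label classifier scores, label co-occurrence, and a spatial smoothness penalty. It is a minimal NumPy example written for this overview, not the thesis' actual CRF or structured SVM formulation; the weights lam and mu and the co-occurrence matrix C are illustrative assumptions.

    import numpy as np

    def multilabel_energy(Y, U, C, lam=1.0, mu=1.0):
        # Y: (H, W, L) binary assignment, Y[i, j, l] = 1 if label l is active on tile (i, j)
        # U: (H, W, L) unary scores from a per-label classifier (higher = more likely)
        # C: (L, L) label co-occurrence affinity estimated from training data
        unary = -np.sum(U * Y)                             # agreement with the classifier
        cooc = -lam * np.einsum('ijk,kl,ijl->', Y, C, Y)   # compatible label pairs on a tile
        # spatial smoothness: penalize label disagreement between neighbouring tiles
        smooth = mu * (np.abs(np.diff(Y, axis=0)).sum() + np.abs(np.diff(Y, axis=1)).sum())
        return unary + cooc + smooth

    # toy example: a 3x3 tile grid with 4 labels and random scores
    rng = np.random.default_rng(0)
    U = rng.normal(size=(3, 3, 4))
    C = np.eye(4)                       # labels only reinforce themselves in this toy setup
    Y = (U > 0).astype(float)           # greedy initial assignment
    print(multilabel_energy(Y, U, C))

Lower energies correspond to labelings that agree with the classifier while remaining spatially and semantically consistent; the thesis optimizes such objectives with CRF inference and a structured SVM rather than by greedy assignment.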
2

Zeggada, Abdallah. "Advanced classification methods for UAV imagery." Doctoral thesis, University of Trento, 2018. http://eprints-phd.biblio.unitn.it/2943/1/thesis_disclaimer.pdf.

Full text of the source
Abstract:
The rapid technological advancement manifested lately in the remote sensing acquisition platforms has triggered many benefits in favor of automated territory control and monitoring. In particular, unmanned aerial vehicles (UAVs) technology has drawn a lot of attention, providing an efficient solution especially in real-time applications. This is mainly motivated by their capacity to collect extremely high resolution (EHR) data over inaccessible areas and limited coverage zones, thanks to their small size and rapidly deployable flight capability, notwithstanding their ease of use and affordability. The very high level of details of the data acquired via UAVs, however, in order to be properly availed, requires further treatment through suitable image processing and analysis approaches. In this respect, the proposed methodological contributions in this thesis include: i) a complete processing chain which assists the Avalanche Search and Rescue (SAR) operations by scanning the UAV acquired images over the avalanche debris in order to detect victims buried under snow and their related objects in real time; ii) two multilabel deep learning strategies for coarsely describing extremely high resolution images in urban scenarios; iii) a novel multilabel conditional random fields classification framework that exploits simultaneously spatial contextual information and cross-correlation between labels; iv) a novel spatial and structured support vector machine for multilabel image classification by adding to the cost function of the structured support vector machine a term that enhances spatial smoothness within a one-step process. Conducted experiments on real UAV images are reported and discussed alongside suggestions for potential future improvements and research lines.
3

Villa, Alberto. "Advanced spectral unmixing and classification methods for hyperspectral remote sensing data." PhD thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00767250.

Full text of the source
Abstract:
The thesis proposes new techniques for the classification and spectral unmixing of hyperspectral remote sensing images. The problems inherent to these data (in particular their very high dimensionality and the presence of mixed pixels) were considered, and innovative techniques were developed to address them. New advanced classification methods based on traditional dimensionality reduction techniques and on the integration of spatial information were developed. In addition, spectral unmixing methods were used jointly to improve the classification obtained with traditional methods, which also makes it possible to improve the spatial resolution of the classification maps by exploiting sub-pixel information. The work followed a logical progression through the following steps:
1. Basic observation: to improve the classification of hyperspectral imagery, the problems inherent to the data must be taken into account: very high dimensionality and the presence of mixed pixels.
2. Can advanced classification methods be developed on the basis of traditional dimensionality reduction methods (ICA or others)?
3. How can the different types of contextual information typical of satellite images be used?
4. Can the information provided by spectral unmixing methods be used to propose new dimensionality reduction chains?
5. Can spectral unmixing methods be used jointly to improve the classification obtained with traditional methods?
6. Can the spatial resolution of the classification maps be improved by exploiting sub-pixel information?
The proposed methods were tested on several real data sets, showing results comparable to or better than most of the methods presented in the literature.
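For readers unfamiliar with spectral unmixing (steps 4 to 6 above), the following sketch estimates sub-pixel abundance fractions by non-negative least squares against known endmember spectra. It is a generic example with made-up spectra, not the dimensionality reduction chains proposed in the thesis.

    import numpy as np
    from scipy.optimize import nnls

    def unmix(pixel, endmembers):
        # pixel: (B,) reflectance spectrum; endmembers: (B, M) pure-material spectra as columns
        a, _ = nnls(endmembers, pixel)          # non-negative least-squares abundances
        return a / a.sum() if a.sum() > 0 else a

    # toy example: 5 bands, 3 endmembers, pixel mixed as 60/30/10 % plus noise
    rng = np.random.default_rng(1)
    E = rng.uniform(0.1, 0.9, size=(5, 3))
    pixel = E @ np.array([0.6, 0.3, 0.1]) + rng.normal(scale=0.01, size=5)
    print(unmix(pixel, E))                      # roughly [0.6, 0.3, 0.1]

Per-pixel abundance vectors of this kind are what allow a classification map to be refined below the native pixel resolution.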
4

Bergamasco, Luca. "Advanced Deep-Learning Methods For Automatic Change Detection and Classification of Multitemporal Remote-Sensing Images." Doctoral thesis, Università degli studi di Trento, 2022. http://hdl.handle.net/11572/342100.

Full text of the source
Abstract:
Deep-Learning (DL) methods have been widely used for Remote Sensing (RS) applications in the last few years, and they allow improving the analysis of the temporal information in bi-temporal and multi-temporal RS images. DL methods use RS data to classify geographical areas or find changes occurring over time. DL methods exploit multi-sensor or multi-temporal data to retrieve results more accurately than single-source or single-date processing. However, the State-of-the-Art DL methods exploit the heterogeneous information provided by these data by focusing the analysis either on the spatial information of multi-sensor multi-resolution images using multi-scale approaches or on the time component of the image time series. Most of the DL RS methods are supervised, so they require a large number of labeled data that is challenging to gather. Nowadays, we have access to many unlabeled RS data, so the creation of long image time series is feasible. However, supervised methods require labeled data that are expensive to gather over image time series. Hence multi-temporal RS methods usually follow unsupervised approaches. In this thesis, we propose DL methodologies that handle these open issues. We propose unsupervised DL methods that exploit multi-resolution deep feature maps derived by a Convolutional Autoencoder (CAE). These DL models automatically learn spatial features from the input during the training phase without any labeled data. We then exploit the high temporal resolution of image time series with the high spatial information of Very-High-Resolution (VHR) images to perform a multi-temporal and multi-scale analysis of the scene. We merge the information provided by the geometrical details of VHR images with the temporal information of the image time series to improve the RS application tasks. We tested the proposed methods to detect changes over bi-temporal RS images acquired by various sensors, such as Landsat-5, Landsat-8, and Sentinel-2, representing burned and deforested areas, and kinds of pasture impurities using VHR orthophotos and Sentinel-2 image time series. The results proved the effectiveness of the proposed methods.
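A minimal sketch of the unsupervised idea described above (deep features learned by a convolutional autoencoder and compared across two dates) is given below, assuming PyTorch and random tensors in place of real Landsat or Sentinel-2 patches. The architecture and the simple global threshold are illustrative choices, not the models used in the thesis.

    import torch
    import torch.nn as nn

    # a small convolutional autoencoder; training on unlabeled patches learns spatial features
    class CAE(nn.Module):
        def __init__(self, bands=4):
            super().__init__()
            self.enc = nn.Sequential(
                nn.Conv2d(bands, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.dec = nn.Sequential(
                nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(16, bands, 4, stride=2, padding=1),
            )

        def forward(self, x):
            return self.dec(self.enc(x))

    x1 = torch.rand(8, 4, 64, 64)     # patches from date 1 (stand-ins for real tiles)
    x2 = torch.rand(8, 4, 64, 64)     # co-registered patches from date 2

    cae = CAE()
    opt = torch.optim.Adam(cae.parameters(), lr=1e-3)
    for _ in range(20):                                # reconstruct patches from both dates
        opt.zero_grad()
        batch = torch.cat([x1, x2])
        loss = nn.functional.mse_loss(cae(batch), batch)
        loss.backward()
        opt.step()

    with torch.no_grad():                              # change map from deep-feature differences
        d = (cae.enc(x1) - cae.enc(x2)).pow(2).mean(dim=1).sqrt()
        change_mask = d > (d.mean() + 2 * d.std())     # simple global threshold as a placeholder
    print(change_mask.shape, change_mask.float().mean())

In practice the feature differences would be computed at several encoder resolutions and thresholded with a more principled rule (for example Otsu's method).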
5

Mehner, Henny. "The potential of high spatial resolution remote sensing for mapping upland vegetation using advanced classification methods." Thesis, University of Newcastle Upon Tyne, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.417524.

Full text of the source
6

Verzotto, Davide. "Advanced Computational Methods for Massive Biological Sequence Analysis." Doctoral thesis, Università degli studi di Padova, 2011. http://hdl.handle.net/11577/3426282.

Full text of the source
Abstract:
With the advent of modern sequencing technologies massive amounts of biological data, from protein sequences to entire genomes, are becoming increasingly available. This poses the need for the automatic analysis and classification of such a huge collection of data, in order to enhance knowledge in the Life Sciences. Although many research efforts have been made to mathematically model this information, for example finding patterns and similarities among protein or genome sequences, these approaches often lack structures that address specific biological issues. In this thesis, we present novel computational methods for three fundamental problems in molecular biology: the detection of remote evolutionary relationships among protein sequences, the identification of subtle biological signals in related genome or protein functional sites, and the phylogeny reconstruction by means of whole-genome comparisons. The main contribution is given by a systematic analysis of patterns that may affect these tasks, leading to the design of practical and efficient new pattern discovery tools. We thus introduce two advanced paradigms of pattern discovery and filtering based on the insight that functional and conserved biological motifs, or patterns, should lie in different sites of sequences. This enables to carry out space-conscious approaches that avoid a multiple counting of the same patterns. The first paradigm considered, namely irredundant common motifs, concerns the discovery of common patterns, for two sequences, that have occurrences not covered by other patterns, whose coverage is defined by means of specificity and extension. The second paradigm, namely underlying motifs, concerns the filtering of patterns, from a given set, that have occurrences not overlapping other patterns with higher priority, where priority is defined by lexicographic properties of patterns on the boundary between pattern matching and statistical analysis. We develop three practical methods directly based on these advanced paradigms. Experimental results indicate that we are able to identify subtle similarities among biological sequences, using the same type of information only once. In particular, we employ the irredundant common motifs and the statistics based on these patterns to solve the remote protein homology detection problem. Results show that our approach, called Irredundant Class, outperforms the state-of-the-art methods in a challenging benchmark for protein analysis. Afterwards, we establish how to compare and filter a large number of complex motifs (e.g., degenerate motifs) obtained from modern motif discovery tools, in order to identify subtle signals in different biological contexts. In this case we employ the notion of underlying motifs. Tests on large protein families indicate that we drastically reduce the number of motifs that scientists should manually inspect, further highlighting the actual functional motifs. Finally, we combine the two proposed paradigms to allow the comparison of whole genomes, and thus the construction of a novel and practical distance function. With our method, called Unic Subword Approach, we relate to each other the regions of two genome sequences by selecting conserved motifs during evolution. Experimental results show that our approach achieves better performance than other state-of-the-art methods in the whole-genome phylogeny reconstruction of viruses, prokaryotes, and unicellular eukaryotes, further identifying the major clades of these organisms.
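To make the flavour of the irredundancy idea concrete, the toy sketch below enumerates substrings common to two sequences and discards those covered by a longer kept motif. This is only a rough stand-in written for this overview: the thesis defines coverage through specificity and extension of occurrences, which is stricter than the substring test used here.

    def common_motifs(s1, s2, k):
        # all length-k strings occurring in both sequences (naive stand-in for motif discovery)
        return {s1[i:i + k] for i in range(len(s1) - k + 1)} & \
               {s2[i:i + k] for i in range(len(s2) - k + 1)}

    def remove_covered(motifs):
        # keep a motif only if it is not a substring of an already kept, longer motif
        kept = []
        for m in sorted(motifs, key=len, reverse=True):
            if not any(m in longer for longer in kept):
                kept.append(m)
        return kept

    s1, s2 = "ACGTACGTGGA", "TTACGTACGAA"
    all_motifs = set()
    for k in range(3, 8):
        all_motifs |= common_motifs(s1, s2, k)
    print(remove_covered(all_motifs))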
7

Preethy, Byju Akshara. "Advanced Methods for Content Based Image Retrieval and Scene Classification in JPEG 2000 Compressed Remote Sensing Image Archives." Doctoral thesis, Università degli studi di Trento, 2020. http://hdl.handle.net/11572/281771.

Full text of the source
Abstract:
Recent advances in satellite imaging technologies have paved the way to the RS big data era. Efficient storage, management and utilization of massive amounts of data is one of the major challenges faced by the remote sensing (RS) community. To minimize the storage requirements and speed up the transmission rate, RS images are compressed before archiving. Accordingly, developing efficient Content Based Image Retrieval (CBIR) and scene classification techniques to effectively utilize this huge volume of data is among the most researched areas in RS. With the continual growth in the volume of compressed RS data, the dominant aspect that plays a key role in the development of these techniques is the time required to decompress these images. Existing CBIR and scene classification methods in RS require fully decompressed RS images as input, which is a computationally complex and time-consuming task to perform. Among several compression algorithms introduced to RS, JPEG 2000 is the most widely used in operational satellites due to its multiresolution paradigm, scalability and high compression ratio. In light of this, the goal of this thesis is to develop novel methods to achieve image retrieval and scene classification for JPEG 2000 compressed RS image archives. The first contribution of the thesis addresses the possibility of performing CBIR directly on compressed RS images. The aim of the proposed method is to achieve efficient image characterization and retrieval within the JPEG 2000 compressed domain. The proposed progressive image retrieval approach achieves a coarse to fine image description and retrieval in the partially decoded JPEG 2000 compressed domain. It aims to reduce the computational time required by the CBIR system for compressed RS image archives. The second contribution of the thesis concerns the possibility of achieving scene classification for JPEG 2000 compressed RS image archives. Recently, deep learning methods have demonstrated a cutting-edge improvement in scene classification performance in large-scale RS image archives. In view of this, the proposed method is based on deep learning and aims to achieve maximum scene classification accuracy with minimal decoding. The proposed approximation approach learns the high-level hierarchical image description in a partially decoded domain, thereby avoiding the requirement to fully decode the images from the archive before any scene classification is performed. Quantitative as well as qualitative experimental results demonstrate the efficiency of the proposed methods, which show significant improvements over state-of-the-art methods.
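The sketch below illustrates the retrieval side of such a system: describe each tile from a coarse, partially decoded version and rank the archive by similarity to the query. The partial_decode function is a placeholder invented for this example (a real implementation would stop the JPEG 2000 wavelet reconstruction a few resolution levels early via a codec such as OpenJPEG); the histogram descriptor and cosine ranking are likewise illustrative, not the progressive method proposed in the thesis.

    import numpy as np

    def partial_decode(path, levels_to_skip=2):
        # placeholder: simulate the coarse image a truncated JPEG 2000 decode would return
        rng = np.random.default_rng(abs(hash(path)) % (2**32))
        return rng.uniform(size=(64, 64, 4))            # coarse multispectral tile

    def describe(img, bins=16):
        # simple per-band histogram descriptor of the coarsely decoded tile
        return np.concatenate([np.histogram(img[..., b], bins=bins, range=(0, 1), density=True)[0]
                               for b in range(img.shape[-1])])

    def retrieve(query_path, archive_paths, top_k=3):
        q = describe(partial_decode(query_path))
        feats = np.stack([describe(partial_decode(p)) for p in archive_paths])
        sims = feats @ q / (np.linalg.norm(feats, axis=1) * np.linalg.norm(q) + 1e-12)
        order = np.argsort(-sims)[:top_k]
        return [(archive_paths[i], round(float(sims[i]), 3)) for i in order]

    archive = [f"tile_{i}.jp2" for i in range(10)]
    print(retrieve("query.jp2", archive))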
8

Preethy, Byju Akshara. "Advanced Methods for Content Based Image Retrieval and Scene Classification in JPEG 2000 Compressed Remote Sensing Image Archives." Doctoral thesis, Università degli studi di Trento, 2020. http://hdl.handle.net/11572/281771.

Full text of the source
Abstract:
Recent advances in satellite imaging technologies have paved the way to the RS big data era. Efficient storage, management and utilization of massive amounts of data is one of the major challenges faced by the remote sensing (RS) community. To minimize the storage requirements and speed up the transmission rate, RS images are compressed before archiving. Accordingly, developing efficient Content Based Image Retrieval (CBIR) and scene classification techniques to effectively utilize this huge volume of data is among the most researched areas in RS. With the continual growth in the volume of compressed RS data, the dominant aspect that plays a key role in the development of these techniques is the time required to decompress these images. Existing CBIR and scene classification methods in RS require fully decompressed RS images as input, which is a computationally complex and time-consuming task to perform. Among several compression algorithms introduced to RS, JPEG 2000 is the most widely used in operational satellites due to its multiresolution paradigm, scalability and high compression ratio. In light of this, the goal of this thesis is to develop novel methods to achieve image retrieval and scene classification for JPEG 2000 compressed RS image archives. The first contribution of the thesis addresses the possibility of performing CBIR directly on compressed RS images. The aim of the proposed method is to achieve efficient image characterization and retrieval within the JPEG 2000 compressed domain. The proposed progressive image retrieval approach achieves a coarse to fine image description and retrieval in the partially decoded JPEG 2000 compressed domain. It aims to reduce the computational time required by the CBIR system for compressed RS image archives. The second contribution of the thesis concerns the possibility of achieving scene classification for JPEG 2000 compressed RS image archives. Recently, deep learning methods have demonstrated a cutting-edge improvement in scene classification performance in large-scale RS image archives. In view of this, the proposed method is based on deep learning and aims to achieve maximum scene classification accuracy with minimal decoding. The proposed approximation approach learns the high-level hierarchical image description in a partially decoded domain, thereby avoiding the requirement to fully decode the images from the archive before any scene classification is performed. Quantitative as well as qualitative experimental results demonstrate the efficiency of the proposed methods, which show significant improvements over state-of-the-art methods.
9

Harikumar, Aravind. "Advanced methods for tree species classification and biophysical parameter estimation using crown geometric information in high density LiDAR data." Doctoral thesis, Università degli studi di Trento, 2019. https://hdl.handle.net/11572/369121.

Full text of the source
Abstract:
The ecological, climatic and economic influence of forests makes them an essential natural resource to be studied, preserved, and managed. Forest inventorying using single sensor data has a huge economic advantage over multi-sensor data. Remote sensing of forests using high density multi-return small footprint Light Detection and Ranging (LiDAR) data is becoming a cost-effective method to automatic estimation of forest parameters at the Individual Tree Crown (ITC) level. Individual tree detection and delineation techniques form the basis for ITC level parameter estimation. However SoA techniques often fail to exploit the huge amount of three dimensional (3D) structural information in the high density LiDAR data to achieve accurate detection and delineation of the 3D crown in dense forests, and thus, the first contribution of the thesis is a technique that detects and delineates both dominant and subdominant trees in dense multilayered forests. The proposed method uses novel two dimensional (2D) and 3D features to achieve this goal. Species knowledge at individual tree level is relevant for accurate forest parameter estimation. Most state-of-the-art techniques use features that represent the distribution of data points within the crown to achieve species classification. However, the performance of such methods is low when the trees belong to the same taxonomic class (e.g., the conifer class). High density LiDAR data contain a huge amount of fine structural information of individual tree crowns. Thus, the second contribution of the thesis is on novel methods for classifying conifer species using both the branch level and the crown level geometric characteristics. Accurate localization of trees is fundamental to calibrate the individual tree level inventory data, as it allows to match reference to LiDAR data. An important biophysical parameter for precision forestry applications is the Diameter at Breast Height (DBH). SoA methods locate the stem directly below the tree top, and indirectly estimate DBH using species-specific allometric models. Both approaches tend to be inaccurate and depend on the forest type. Thus, in this thesis, a method for accurate stem localization and DBH measurement is proposed. This is the third contribution of the thesis. Qualitative and quantitative results of the experiments confirm the effectiveness of the proposed methods over the SoA ones.
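As a small illustration of the stem-level measurements discussed above, the sketch below estimates DBH by fitting a circle to the LiDAR returns in a thin slice around breast height (1.3 m). It uses a generic algebraic circle fit on a synthetic stem and is not the stem localization method developed in the thesis.

    import numpy as np

    def fit_circle(xy):
        # algebraic (Kasa) least-squares circle fit; returns centre (cx, cy) and radius
        x, y = xy[:, 0], xy[:, 1]
        A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
        b = x**2 + y**2
        (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
        return cx, cy, np.sqrt(c + cx**2 + cy**2)

    def estimate_dbh(points, ground_z=0.0, breast_height=1.3, slice_halfwidth=0.1):
        # fit a circle to the stem returns in a thin horizontal slice at 1.3 m above ground
        z = points[:, 2] - ground_z
        stem_slice = points[np.abs(z - breast_height) < slice_halfwidth]
        _, _, r = fit_circle(stem_slice[:, :2])
        return 2.0 * r

    # synthetic stem: a noisy cylinder of radius 0.15 m sampled by 2000 returns
    rng = np.random.default_rng(2)
    theta, z = rng.uniform(0, 2 * np.pi, 2000), rng.uniform(0, 3, 2000)
    pts = np.column_stack([0.15 * np.cos(theta) + rng.normal(0, 0.005, 2000),
                           0.15 * np.sin(theta) + rng.normal(0, 0.005, 2000), z])
    print(estimate_dbh(pts))        # close to 0.30 m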
10

Harikumar, Aravind. "Advanced methods for tree species classification and biophysical parameter estimation using crown geometric information in high density LiDAR data." Doctoral thesis, University of Trento, 2019. http://eprints-phd.biblio.unitn.it/3782/1/PhD_Thesis_Harikumar.pdf.

Full text of the source
Abstract:
The ecological, climatic and economic influence of forests makes them an essential natural resource to be studied, preserved, and managed. Forest inventorying using single sensor data has a huge economic advantage over multi-sensor data. Remote sensing of forests using high density multi-return small footprint Light Detection and Ranging (LiDAR) data is becoming a cost-effective method to automatic estimation of forest parameters at the Individual Tree Crown (ITC) level. Individual tree detection and delineation techniques form the basis for ITC level parameter estimation. However SoA techniques often fail to exploit the huge amount of three dimensional (3D) structural information in the high density LiDAR data to achieve accurate detection and delineation of the 3D crown in dense forests, and thus, the first contribution of the thesis is a technique that detects and delineates both dominant and subdominant trees in dense multilayered forests. The proposed method uses novel two dimensional (2D) and 3D features to achieve this goal. Species knowledge at individual tree level is relevant for accurate forest parameter estimation. Most state-of-the-art techniques use features that represent the distribution of data points within the crown to achieve species classification. However, the performance of such methods is low when the trees belong to the same taxonomic class (e.g., the conifer class). High density LiDAR data contain a huge amount of fine structural information of individual tree crowns. Thus, the second contribution of the thesis is on novel methods for classifying conifer species using both the branch level and the crown level geometric characteristics. Accurate localization of trees is fundamental to calibrate the individual tree level inventory data, as it allows to match reference to LiDAR data. An important biophysical parameter for precision forestry applications is the Diameter at Breast Height (DBH). SoA methods locate the stem directly below the tree top, and indirectly estimate DBH using species-specific allometric models. Both approaches tend to be inaccurate and depend on the forest type. Thus, in this thesis, a method for accurate stem localization and DBH measurement is proposed. This is the third contribution of the thesis. Qualitative and quantitative results of the experiments confirm the effectiveness of the proposed methods over the SoA ones.
11

Feng, Zao. "Condition Classification in Underground Pipes Based on Acoustical Characteristics. Acoustical characteristics are used to classify the structural and operational conditions in underground pipes with advanced signal classification methods." Thesis, University of Bradford, 2013. http://hdl.handle.net/10454/9463.

Full text of the source
12

Liska, Adam J. "Homology-Based Functional Proteomics By Mass Spectrometry and Advanced Informatic Methods." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2003. http://nbn-resolving.de/urn:nbn:de:swb:14-1071757497859-43887.

Full text of the source
Abstract:
Functional characterization of biochemically-isolated proteins is a central task in the biochemical and genetic description of the biology of cells and tissues. Protein identification by mass spectrometry consists of associating an isolated protein with a specific gene or protein sequence in silico, thus inferring its specific biochemical function based upon previous characterizations of that protein or a similar protein having that sequence identity. By performing this analysis on a large scale in conjunction with biochemical experiments, novel biological knowledge can be developed. The study presented here focuses on mass spectrometry-based proteomics of organisms with unsequenced genomes and corresponding developments in biological sequence database searching with mass spectrometry data. Conventional methods to identify proteins by mass spectrometry analysis have employed proteolytic digestion, fragmentation of resultant peptides, and the correlation of acquired tandem mass spectra with database sequences, relying upon exact matching algorithms; i.e. the analyzed peptide had to previously exist in a database in silico to be identified. One existing sequence-similarity protein identification method was applied (MS BLAST, Shevchenko 2001) and one alternative novel method was developed (MultiTag), for searching protein and EST databases, to enable the recognition of proteins that are generally unrecognizable by conventional software but share significant sequence similarity with database entries (~60-90%). These techniques and available database sequences enabled the characterization of the Xenopus laevis microtubule-associated proteome and the Dunaliella salina soluble salt-induced proteome, both organisms with unsequenced genomes and minimal database sequence resources. These sequence-similarity methods extended protein identification capabilities by more than two-fold compared to conventional methods, making existing methods virtually superfluous. The proteomics of Dunaliella salina demonstrated the utility of MS BLAST as an indispensable method for characterization of proteins in organisms with unsequenced genomes, and produced insight into Dunaliella's inherent resilience to high salinity. The Xenopus study was the first proteomics project to simultaneously use all three central methods of representation for peptide tandem mass spectra for protein identification: sequence tags, amino acid sequences, and mass lists; and it is the largest proteomics study in Xenopus laevis yet completed, which indicated a potential relationship between the mitotic spindle of dividing cells and the protein synthesis machinery. At the beginning of these experiments, the identification of proteins was conceptualized as using "conventional" versus "sequence-similarity" techniques, but through the course of experiments, a conceptual shift in understanding occurred along with the techniques developed and employed to encompass variations in mass spectrometry instrumentation, alternative mass spectrum representation forms, and the complexities of database resources, producing a more systematic description and utilization of available resources for the characterization of proteomes by mass spectrometry and advanced informatic approaches. The experiments demonstrated that proteomics technologies are only as powerful in the field of biology as the biochemical experiments are precise and meaningful.
13

Maggiolo, Luca. "Deep Learning and Advanced Statistical Methods for Domain Adaptation and Classification of Remote Sensing Images." Doctoral thesis, Università degli studi di Genova, 2022. http://hdl.handle.net/11567/1070050.

Full text of the source
Abstract:
In recent years, remote sensing has undergone a huge evolution. The constantly growing availability of remote sensing data has opened up new opportunities and laid the foundations for many new challenges. The continuous space missions and new constellations of satellites in fact allow more and more frequent acquisitions, at increasingly higher spatial resolutions, and with an almost total coverage of the globe. The availability of such a huge amount of data has highlighted the need for automatic techniques capable of processing the data and exploiting all the available information. Meanwhile, the almost unlimited potential of machine learning has changed the world we live in. Artificial neural networks have broken through into everyday life, with applications that include computer vision, speech processing, and autonomous driving, but which are also the basis of commonly used tools such as online search engines. However, the vast majority of such models are of the supervised type, and therefore their applicability relies on the availability of an enormous quantity of labeled data to train the models themselves. Unfortunately, this is not the case with remote sensing, in which the enormous amounts of data are opposed to the almost total absence of ground truth. The purpose of this thesis is to find ways to exploit the most recent deep learning techniques, defining a common thread between two worlds, those of remote sensing and deep learning, which is often missing. In particular, this thesis proposes three novel contributions which address current issues in remote sensing. The first one is related to multisensor image registration and combines generative adversarial networks and non-linear optimization of cross-correlation-like functionals to deal with the complexity of the setting. The proposed method was proved able to outperform state-of-the-art approaches. The second novel contribution addresses one of the main issues in deep learning for remote sensing: the scarcity of ground truth data for semantic segmentation. The proposed solution combines convolutional neural networks and probabilistic graphical models, two very active areas in machine learning for remote sensing, and approximates a fully connected conditional random field. The proposed method is capable of filling part of the gap which separates a densely trained model from a weakly trained one. The third approach is aimed at the classification of high-resolution satellite images for climate change purposes. It consists of a specific formulation of an energy minimization which allows fusing multisensor information and applying a Markov random field in a fast and efficient way for global-scale applications. The results obtained in this thesis show how deep learning methods based on artificial neural networks can be combined with statistical analysis to overcome their limitations, going beyond the classic benchmark environments and addressing practical, real and large-scale application cases.
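The second contribution starts from the scarcity of dense labels. The sketch below shows the basic ingredient of training a segmentation network from sparse annotations only, by masking unlabeled pixels out of the loss with PyTorch's ignore_index. The tiny network and random tensors are placeholders, and the CRF-based spatial regularization proposed in the thesis is not reproduced here.

    import torch
    import torch.nn as nn

    # a tiny fully convolutional classifier; the models used in practice are far larger
    model = nn.Sequential(
        nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 5, 1),                      # 5 land-cover classes
    )

    # sparse ground truth: only a few scattered pixels are labelled, the rest is 255 (ignored)
    images = torch.randn(2, 4, 64, 64)            # batch of 4-band image patches
    labels = torch.full((2, 64, 64), 255, dtype=torch.long)
    labels[:, ::16, ::16] = torch.randint(0, 5, (2, 4, 4))

    criterion = nn.CrossEntropyLoss(ignore_index=255)     # unlabeled pixels carry no gradient
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    for step in range(5):
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        print(step, float(loss))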
14

Klein, Joachim, Christel Baier, Philipp Chrszon, Marcus Daum, Clemens Dubslaff, Sascha Klüppelholz, Steffen Märcker, and David Müller. "Advances in probabilistic model checking with PRISM." Springer, 2018. https://tud.qucosa.de/id/qucosa%3A74265.

Full text of the source
Abstract:
The popular model checker PRISM has been successfully used for the modeling and analysis of complex probabilistic systems. As one way to tackle the challenging state explosion problem, PRISM supports symbolic storage and manipulation using multi-terminal binary decision diagrams for representing the models and in the computations. However, it lacks automated heuristics for variable reordering, even though it is well known that the order of BDD variables plays a crucial role for compact representations and efficient computations. In this article, we present a collection of extensions to PRISM. First, we provide support for automatic variable reordering within the symbolic engines of PRISM and allow users to manually control the variable ordering at a fine-grained level. Second, we provide extensions in the realm of reward-bounded properties, namely symbolic computations of quantiles in Markov decision processes and, for both the explicit and symbolic engines, the approximative computation of quantiles for continuous-time Markov chains as well as support for multi-reward-bounded properties. Finally, we provide an implementation for obtaining minimal weak deterministic Büchi automata for the obligation fragment of linear temporal logic (LTL), with applications for expected accumulated reward computations with a finite horizon given by a co-safe LTL formula.
15

Klein, Joachim, Christel Baier, Philipp Chrszon, Marcus Daum, Clemens Dubslaff, Sascha Klüppelholz, Steffen Märcker, and David Müller. "Advances in Symbolic Probabilistic Model Checking with PRISM." Springer, 2016. https://tud.qucosa.de/id/qucosa%3A74267.

Full text of the source
Abstract:
For modeling and reasoning about complex systems, symbolic methods provide a prominent way to tackle the state explosion problem. It is well known that for symbolic approaches based on binary decision diagrams (BDD), the ordering of BDD variables plays a crucial role for compact representations and efficient computations. We have extended the popular probabilistic model checker PRISM with support for automatic variable reordering in its multi-terminal-BDD-based engines and report on benchmark results. Our extensions additionally allow the user to manually control the variable ordering at a finer-grained level. Furthermore, we present our implementation of the symbolic computation of quantiles and support for multi-reward-bounded properties, automata specifications and accepting end component computations for Streett conditions.
16

Heidernätsch, Mario, Michael Bauer, Daniela Täuber, Günter Radons, and Christian von Borcyskowski. "An advanced method of tracking temporarily invisible particles in video imaging." Diffusion fundamentals 11 (2009) 111, S. 1-2, 2009. https://ul.qucosa.de/id/qucosa%3A14085.

Full text of the source
17

Fried, Andrea, and Volker Linss. "Towards an Advanced Impact Analysis of Intangible Resources in Organisations." Universitätsbibliothek Chemnitz, 2005. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200501203.

Full text of the source
Abstract:
The paper refers to the discussion of measuring and assessing knowledge capital. In particular, the interconnectedness of the intangible resources in organizations is not well represented in the methodical approaches. Moreover, the identification of driver resources which is strongly connected with this question is far from being solved in a satisfactory manner. Therefore, this article reviews existing methods of the scenario analysis in view of the performance measurement discussion and contributes towards an advanced analysis of resources in organizations.
18

Linss, Volker, and Andrea Fried. "Advanced Impact Analysis: the ADVIAN® method - an enhanced approach for the analysis of impact strengths with the consideration of indirect relations." Universitätsbibliothek Chemnitz, 2009. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-200900908.

Full text of the source
Abstract:
An enhanced approach for the impact analysis is presented. Impact analyses play an important role in future research analysis as part of the scenario techniques in the strategic management field. Nowadays, they are also applied for the description of mutual relationships of tangible and intangible resources in organisations. The new method is based on currently existing methods using a cross impact matrix and overcomes some of their drawbacks. Indirect impacts are considered together with their impact strengths. A modification of the impact matrix is not necessary. Simple examples show that the new method leads to more reasonable and stable results than the existing methods. The new method shall help analysing the complexity of social systems in a more reliable way.
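A generic sketch of how indirect impacts can be accumulated from a cross-impact matrix is shown below. It follows the common idea of summing down-weighted matrix powers and does not reproduce the ADVIAN method itself; the matrix, path length and decay factor are illustrative assumptions.

    import numpy as np

    def indirect_impacts(M, max_path_len=4, decay=0.5):
        # accumulate direct and indirect impact strengths along paths up to max_path_len,
        # down-weighting longer paths by `decay` per extra step
        total = np.zeros_like(M, dtype=float)
        P = np.asarray(M, dtype=float)
        for k in range(1, max_path_len + 1):
            total += (decay ** (k - 1)) * P
            P = P @ M
        np.fill_diagonal(total, 0.0)
        return total

    # toy cross-impact matrix for 4 factors (rows influence columns, scale 0-3)
    M = np.array([[0, 2, 0, 1],
                  [0, 0, 3, 0],
                  [1, 0, 0, 2],
                  [0, 1, 0, 0]])
    T = indirect_impacts(M)
    print("active sum (influence exerted):  ", T.sum(axis=1).round(2))
    print("passive sum (influence received):", T.sum(axis=0).round(2))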
19

Lienemann, Kai [Verfasser]. "Advanced ensemble methods for automatic classification of 1H-NMR spectra / von Kai Lienemann." 2010. http://d-nb.info/1008219770/34.

Full text of the source
20

Christen, Victor. "Advanced Methods for Entity Linking in the Life Sciences." 2020. https://ul.qucosa.de/id/qucosa%3A73504.

Full text of the source
Abstract:
The amount of knowledge increases rapidly due to the increasing number of available data sources. However, the autonomy of data sources and the resulting heterogeneity prevent comprehensive data analysis and applications. Data integration aims to overcome heterogeneity by unifying different data sources and enriching unstructured data. The enrichment of data consists of different subtasks, among others the annotation process. The annotation process links document phrases to terms of a standardized vocabulary. Annotated documents enable effective retrieval methods, comparability of different documents, and comprehensive data analysis, such as finding adverse drug effects based on patient data. A vocabulary allows the comparability using standardized terms. An ontology can also represent a vocabulary, whereas concepts, relationships, and logical constraints additionally define an ontology. The annotation process is applicable in different domains. Nevertheless, there is a difference between generic and specialized domains with respect to the annotation process. This thesis emphasizes the differences between the domains and addresses the identified challenges. The majority of annotation approaches focuses on the evaluation of general domains, such as Wikipedia. This thesis evaluates the developed annotation approaches with case report forms that are medical documents for examining clinical trials. Natural language provides different challenges, such as similar meanings expressed using different phrases. The proposed annotation method, AnnoMap, considers the fuzziness of natural language. A further challenge is the reuse of verified annotations. Existing annotations represent knowledge that can be reused for further annotation processes. AnnoMap consists of a reuse strategy that utilizes verified annotations to link new documents to appropriate concepts. Due to the broad spectrum of areas in the biomedical domain, different tools exist. The tools perform differently regarding a particular domain. This thesis proposes a combination approach to unify results from different tools. The method utilizes existing tool results to build a classification model that can classify new annotations as correct or incorrect. The results show that the reuse and the machine learning-based combination improve the annotation quality compared to existing approaches focussing on the biomedical domain. A further part of data integration is entity resolution to build unified knowledge bases from different data sources. A data source consists of a set of records characterized by attributes. The goal of entity resolution is to identify records representing the same real-world entity. Many methods focus on linking data sources consisting of records characterized by attributes. Nevertheless, only a few methods can handle graph-structured knowledge bases or consider temporal aspects. The temporal aspects are essential to identify the same entities over different time intervals since these aspects underlie certain conditions. Moreover, records can be related to other records so that a small graph structure exists for each record. These small graphs can be linked to each other if they represent the same entity. This thesis proposes an entity resolution approach for census data consisting of person records for different time intervals. The approach also considers the graph structure of persons given by family relationships.
For achieving qualitative results, current methods apply machine-learning techniques to classify record pairs as the same entity. The classification task uses a model that is generated from training data. In this case, the training data is a set of record pairs that are labeled as a duplicate or not. Nevertheless, the generation of training data is a time-consuming task, so that active learning techniques are relevant for reducing the number of training examples. The entity resolution method for temporal graph-structured data shows an improvement compared to previous collective entity resolution approaches. The developed active learning approach achieves comparable results to supervised learning methods and outperforms other limited-budget active learning methods. Besides the entity resolution approach, the thesis introduces the concept of evolution operators for communities. These operators can express the dynamics of communities and individuals. For instance, we can formulate that two communities merged or split over time. Moreover, the operators allow observing the history of individuals. Overall, the presented annotation approaches generate qualitative annotations for medical forms. The annotations enable comprehensive analysis across different data sources as well as accurate queries. The proposed entity resolution approaches improve existing ones so that they contribute to the generation of qualitative knowledge graphs and data analysis tasks.
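As a toy illustration of the fuzzy annotation step, the sketch below links free-text phrases to the most similar term of a small vocabulary using a string-similarity threshold. The vocabulary, codes, and threshold are invented for the example, and the matching is far simpler than AnnoMap, which combines several similarity measures with a reuse strategy for verified annotations.

    from difflib import SequenceMatcher

    vocabulary = {
        "C0020538": "hypertension",
        "C0011849": "diabetes mellitus",
        "C0004096": "asthma",
    }

    def annotate(phrase, vocab, threshold=0.75):
        # link a phrase to the most similar vocabulary term, if it is similar enough
        best = max(vocab.items(),
                   key=lambda kv: SequenceMatcher(None, phrase.lower(), kv[1]).ratio())
        score = SequenceMatcher(None, phrase.lower(), best[1]).ratio()
        return (best[0], best[1], round(score, 2)) if score >= threshold else None

    for phrase in ["Hypertensive", "diabetes melitus", "history of smoking"]:
        print(phrase, "->", annotate(phrase, vocabulary))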
21

Fischer, André. "Advanced Cluster Methods for Correlated-Electron Systems." Doctoral thesis, 2015. https://tud.qucosa.de/id/qucosa%3A29127.

Full text of the source
Abstract:
In this thesis, quantum cluster methods are used to calculate electronic properties of correlated-electron systems. A special focus lies in the determination of the ground state properties of a 3/4 filled triangular lattice within the one-band Hubbard model. At this filling, the electronic density of states exhibits a so-called van Hove singularity and the Fermi surface becomes perfectly nested, causing an instability towards a variety of spin-density-wave (SDW) and superconducting states. While chiral d+id-wave superconductivity has been proposed as the ground state in the weak coupling limit, the situation towards strong interactions is unclear. Additionally, quantum cluster methods are used here to investigate the interplay of Coulomb interactions and symmetry-breaking mechanisms within the nematic phase of iron-pnictide superconductors. The transition from a tetragonal to an orthorhombic phase is accompanied by a significant change in electronic properties, while long-range magnetic order is not established yet. The driving force of this transition may not only be phonons but also magnetic or orbital fluctuations. The signatures of these scenarios are studied with quantum cluster methods to identify the most important effects. Here, cluster perturbation theory (CPT) and its variational extension, the variational cluster approach (VCA), are used to treat the respective systems on a level beyond mean-field theory. Short-range correlations are incorporated numerically exactly by exact diagonalization (ED). In the VCA, long-range interactions are included by variational optimization of a fictitious symmetry-breaking field based on a self-energy functional approach. Due to limitations of ED, cluster sizes are limited to a small number of degrees of freedom. For the 3/4 filled triangular lattice, the VCA is performed for different cluster symmetries. A strong symmetry dependence and finite-size effects make a comparison of the results from different clusters difficult. The ground state in the weak-coupling limit is superconducting with chiral d+id-wave symmetry, in accordance with previous renormalization group approaches. In the regime of strong interactions, SDW states are preferred over superconductivity and a collinear SDW state with nonuniform spin moments on a quadrupled unit cell has the lowest grand potential. At strong coupling, inclusion of short-range quantum fluctuations turns out to favor this collinear state over the chiral phase predicted by mean-field theory. At intermediate interactions, no robust conclusion can be drawn from the results. Symmetry-breaking mechanisms within the nematic phase of the iron-pnictides are studied using a three-band model for the iron planes on a 4-site cluster. CPT allows a local breaking of the symmetry within the cluster without imposing long-range magnetic order. This is a crucial step beyond mean-field approaches to the magnetically ordered state, where such a nematic phase cannot easily be investigated. Three mechanisms are included to break the fourfold lattice symmetry down to a twofold symmetry. The effects of anisotropic magnetic couplings are compared to an orbital ordering field and anisotropic hoppings. All three mechanisms lead to similar features in the spectral density. Since the anisotropy of the hopping parameters has to be very large to obtain similar results as observed in ARPES, a phonon-driven transition is unlikely.
22

Reichenbach, Jonas. "Credit scoring with advanced analytics: applying machine learning methods for credit risk assessment at the Frankfurter sparkasse." Master's thesis, 2018. http://hdl.handle.net/10362/49557.

Full text of the source
Abstract:
Project Work presented as the partial requirement for obtaining a Master's degree in Information Management, specialization in Information Systems and Technologies Management
The need for controlling and managing credit risk obliges financial institutions to constantly reconsider their credit scoring methods. In the recent years, machine learning has shown improvement over the common traditional methods for the application of credit scoring. Even small improvements in prediction quality are of great interest for the financial institutions. In this thesis classification methods are applied to the credit data of the Frankfurter Sparkasse to score their credits. Since recent research has shown that ensemble methods deliver outstanding prediction quality for credit scoring, the focus of the model investigation and application is set on such methods. Additionally, the typical imbalanced class distribution of credit scoring datasets makes us consider sampling techniques, which compensate the imbalances for the training dataset. We evaluate and compare different types of models and techniques according to defined metrics. Besides delivering a high prediction quality, the model’s outcome should be interpretable as default probabilities. Hence, calibration techniques are considered to improve the interpretation of the model’s scores. We find ensemble methods to deliver better results than the best single model. Specifically, the method of the Random Forest delivers the best performance on the given data set. When compared to the traditional credit scoring methods of the Frankfurter Sparkasse, the Random Forest shows significant improvement when predicting a borrower’s default within a 12-month period. The Logistic Regression is used as a benchmark to validate the performance of the model.
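A compact sketch of this modelling pipeline on synthetic data is given below: a class-weighted random forest, wrapped in isotonic calibration so that its scores can be read as default probabilities, evaluated by AUC and Brier score. The data set is simulated rather than the bank's internal data, and the hyperparameters are illustrative.

    from sklearn.calibration import CalibratedClassifierCV
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import brier_score_loss, roc_auc_score
    from sklearn.model_selection import train_test_split

    # synthetic stand-in for a credit data set: roughly 5 % defaults, 20 applicant features
    X, y = make_classification(n_samples=10000, n_features=20,
                               weights=[0.95, 0.05], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

    # random forest with class weighting to compensate the imbalanced default rate
    rf = RandomForestClassifier(n_estimators=300, class_weight="balanced",
                                random_state=0, n_jobs=-1)

    # isotonic calibration so the scores can be interpreted as default probabilities
    clf = CalibratedClassifierCV(rf, method="isotonic", cv=3)
    clf.fit(X_tr, y_tr)

    p_default = clf.predict_proba(X_te)[:, 1]
    print("AUC:  ", round(roc_auc_score(y_te, p_default), 3))
    print("Brier:", round(brier_score_loss(y_te, p_default), 4))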
23

Reed, Eric R. "Development of advanced methods for large-scale transcriptomic profiling and application to screening of metabolism disrupting compounds." Thesis, 2020. https://hdl.handle.net/2144/41943.

Full text of the source
Abstract:
High-throughput transcriptomic profiling has become a ubiquitous tool to assay an organism transcriptome and to characterize gene expression patterns in different cellular states or disease conditions, as well as in response to molecular and pharmacologic perturbations. Refinements to data preparation techniques have enabled integration of transcriptomic profiling into large-scale biomedical studies, generally devised to elucidate phenotypic factors contributing to transcriptional differences across a cohort of interest. Understanding these factors and the mechanisms through which they contribute to disease is a principal objective of numerous projects, such as The Cancer Genome Atlas and the Cancer Cell Line Encyclopedia. Additionally, transcriptomic profiling has been applied in toxicogenomic screening studies, which profile molecular responses of chemical perturbations in order to identify environmental toxicants and characterize their mechanisms-of-action. Further adoption of high-throughput transcriptomic profiling requires continued effort to improve and lower the costs of implementation. Accordingly, my dissertation work encompasses both the development and assessment of cost-effective RNA sequencing platforms, and of novel machine learning techniques applicable to the analyses of large-scale transcriptomic data sets. The utility of these techniques is evaluated through their application to a toxicogenomic screen in which our lab profiled exposures of adipocytes to metabolic disrupting chemicals. Such exposures have been implicated in metabolic dyshomeostasis, the predominant cause of obesity pathogenesis. Considering that an estimated 10% of the global population is obese, understanding the role these exposures play in disrupting metabolic balance has the potential to help combating this pervasive health threat. This dissertation consists of three sections. In the first section, I assess data generated by a highly-multiplexed RNA sequencing platform developed by our section, and report on its significantly better quality relative to similar platforms, and on its comparable quality to more expensive platforms. Next, I present the analysis of a toxicogenomic screen of metabolic disrupting compounds. This analysis crucially relied on novel supervised and unsupervised machine learning techniques which I specifically developed to take advantage of the experimental design we adopted for data generation. Lastly, I describe the further development, evaluation, and optimization of one of these methods, K2Taxonomer, into a computational tool for unsupervised molecular subgrouping of bulk and single-cell gene expression data, and for the comprehensive in-silico annotation of the discovered subgroups.
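As a toy illustration of unsupervised subgrouping, the sketch below recursively splits simulated expression profiles into two subgroups at a time. This is only a simplified analogue of K2Taxonomer, whose actual partitioning strategy and comprehensive in-silico annotation of the discovered subgroups are not reproduced here; the data and group sizes are invented.

    import numpy as np
    from sklearn.cluster import KMeans

    def recursive_split(expr, samples, min_size=4, depth=0, prefix="g"):
        # recursively partition samples into two subgroups until groups become too small
        if len(samples) < 2 * min_size:
            print("  " * depth + f"{prefix}: " + ", ".join(samples))
            return
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(expr)
        for k in (0, 1):
            idx = np.where(labels == k)[0]
            recursive_split(expr[idx], [samples[i] for i in idx],
                            min_size, depth + 1, f"{prefix}.{k}")

    # toy data: 24 samples x 50 genes with a two-level nested group structure
    rng = np.random.default_rng(3)
    expr = rng.normal(size=(24, 50))
    expr[:12, :10] += 3.0          # first-level split
    expr[:6, 10:20] += 3.0         # nested split within the first group
    samples = [f"S{i:02d}" for i in range(24)]
    recursive_split(expr, samples)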
24

Schindler, Rose. "Effective Prevention for Children: Conceptual and Methodological Advances." Doctoral thesis, 2015. https://monarch.qucosa.de/id/qucosa%3A20382.

Full text of the source
Abstract:
This dissertation addresses various methodological and conceptual challenges of prevention programs for preschool children. It focuses on two major topics, (1) methodological guidelines for longitudinal studies in the context of prevention projects, and (2) analyses of emotional development and moral emotions. After a brief introduction to the research questions in Chapter 1, Chapters 2 and 3 address the methodological branch of my research, and Chapters 4 to 6 will analyze several aspects of moral development and moral emotions. In the final Chapter 7, all findings are summarized in view of their application to prevention work in the context of childhood development.
25

Scharf, Florian. "Advances in the analysis of event-related potential data with factor analytic methods." 2019. https://ul.qucosa.de/id/qucosa%3A33711.

Full text of the source
Abstract:
Researchers are often interested in comparing brain activity between experimental contexts. Event-related potentials (ERPs) are a common electrophysiological measure of brain activity that is time-locked to an event (e.g., a stimulus presented to the participant). A variety of decomposition methods has been used for ERP data among them temporal exploratory factor analysis (EFA). Essentially, temporal EFA decomposes the ERP waveform into a set of latent factors where the factor loadings reflect the time courses of the latent factors, and the amplitudes are represented by the factor scores. An important methodological concern is to ensure the estimates of the condition effects are unbiased and the term variance misallocation has been introduced in reference to the case of biased estimates. The aim of the present thesis was to explore how exploratory factor analytic methods can be made less prone to variance misallocation. These efforts resulted in a series of three publications in which variance misallocation in EFA was described as a consequence of the properties of ERP data, ESEM was proposed as an extension of EFA that acknowledges the structure of ERP data sets, and regularized estimation was suggested as an alternative to simple structure rotation with desirable properties. The presence of multiple sources of (co-)variance, the factor scoring step, and high temporal overlap of the factors were identified as major causes of variance misallocation in EFA for ERP data. It was shown that ESEM is capable of separating the (co-)variance sources and that it avoids biases due to factor scoring. Further, regularized estimation was shown to be a suitable alternative for factor rotation that is able to recover factor loading patterns in which only a subset of the variables follow a simple structure. Based on these results, regSEMs and ESEMs with ERP-specific rotation have been proposed as promising extensions of the EFA approach that might be less prone to variance misallocation. Future research should provide a direct comparison of regSEM and ESEM, and conduct simulation studies with more physiologically motivated data generation algorithms.
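The starting point of the thesis, temporal EFA of ERP data, can be illustrated on simulated trials with two temporally overlapping components, as below. Plain varimax-rotated EFA is used in this sketch, whereas the thesis develops ESEM and regularized estimation precisely because such simple-structure rotation can misallocate variance; the component shapes and amplitudes are invented.

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    # simulate ERPs: 200 trials x 100 time points, two overlapping Gaussian-shaped components
    rng = np.random.default_rng(4)
    t = np.linspace(0, 0.8, 100)                                  # seconds after stimulus
    comp1 = np.exp(-((t - 0.30) / 0.05) ** 2)                     # earlier component
    comp2 = np.exp(-((t - 0.42) / 0.07) ** 2)                     # later, overlapping component
    amp1 = rng.normal(5.0, 1.0, 200)                              # trial-wise amplitudes
    amp2 = rng.normal(3.0, 1.0, 200)
    erps = np.outer(amp1, comp1) + np.outer(amp2, comp2) + rng.normal(0, 0.5, (200, 100))

    # temporal EFA: loadings approximate the component time courses, scores their amplitudes
    fa = FactorAnalysis(n_components=2, rotation="varimax", random_state=0)
    scores = fa.fit_transform(erps)
    print("loadings shape:", fa.components_.shape)                # (2 factors, 100 time points)
    print("score/amplitude correlations:",
          np.round(np.corrcoef(scores.T, np.vstack([amp1, amp2]))[0:2, 2:4], 2))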
26

Mitin, Dmitriy. "Advanced scanning magnetoresistive microscopy as a multifunctional magnetic characterization method." Doctoral thesis, 2016. https://monarch.qucosa.de/id/qucosa%3A20700.

Full text of the source
Abstract:
Advanced scanning magnetoresistive microscopy (SMRM), a robust magnetic imaging and probing technique, is presented. It utilizes conventional recording heads of a hard disk drive as sensors. The spatial resolution of modern tunneling magnetoresistive sensors is nowadays comparable with that of more commonly used magnetic force microscopes. Important advantages of SMRM are the ability to detect pure magnetic signals directly proportional to the out-of-plane magnetic stray field, negligible sensor stray fields, and the ability to apply local bipolar magnetic field pulses up to 10 kOe with bandwidths from DC up to 1 GHz. The performance assessment of this method and corresponding best practices are discussed in the first section of this work. As an application example of SMRM, a study on chemically ordered L10 FePt is presented in the second section. A constructed heater unit for the SMRM opens the path to investigating temperature-dependent magnetic properties of the medium by recording and imaging at elevated temperatures. L10 FePt is one of the most promising materials to reach the limits in storage density of future magnetic recording devices based on heat-assisted magnetic recording (HAMR). In order to be implemented in an actual recording scheme, the medium Curie temperature should be lowered. This will reduce the power requirements, and hence, wear and tear on the heat source, an integrated plasmonic antenna. It is expected that the exchange coupling of FePt to thin Fe layers provides high saturation magnetization and an elevated Curie temperature of the composite. The addition of Cu allows adjusting the magnetic properties such as perpendicular magnetic anisotropy, coercivity, saturation magnetization, and Curie temperature. This should lead to a lowering of the switching field of the hard magnetic FeCuPt layer and a reduction of thermally induced recording errors. In this regard, the influence of the Fe layer thickness on the switching behavior of the hard layer was investigated, revealing a strong reduction for Fe layer thicknesses larger than the exchange length of Fe. The recording performance of single-layer and bilayer structures was studied by SMRM roll-off curves and histogram methods at temperatures up to 180 °C. In the last section of this work, SMRM advantages are demonstrated by various experiments on a two-dimensional magnetic vortex lattice. A magnetic vortex is a peculiar complex magnetization configuration which typically appears in soft magnetic structured materials. It consists of two coupled sub-systems: the core, where the magnetization vector points perpendicular to the structure plane, and the curling magnetization, where the magnetic flux is rotating in-plane. The unique properties of a magnetic vortex make it an object of great research and technological interest for spintronic applications in sensorics or data storage. Manipulation of the vortex core as well as of the rotation sense by applying a local field pulse is shown. A spatially resolved switching map reveals a significant "write window" where vortex cores can be addressed correctly. Moreover, the external in-plane magnet extension unit allows analyzing the magnetic vortex rotational sense, which is extremely practical for investigations of magnetic coupling phenomena.
27

Haustein, Rocco. "Die Exportabhängigkeit südwestsächsischer Industrie-KMU und internationale Mitarbeiterqualifikationen: Eine qualitative Untersuchung anhand ausgewählter Unternehmen sowie industrienaher Dachorganisationen und Verbände." Doctoral thesis, 2013. https://monarch.qucosa.de/id/qucosa%3A20020.

Full text of the source
Abstract:
The region of Southwest Saxony is shaped by small and medium-sized industrial enterprises (industrial SMEs). This thesis shows that the economic success of these companies depends to a large extent on their export business. Such a strong reliance on foreign trade demands specific qualifications from the companies' employees. By examining selected companies as well as industry-related umbrella organisations and associations, this dissertation seeks to characterise these so-called "international qualifications", to identify qualification deficits within the companies, and to suggest possible strategies for addressing them.