Dissertations / Theses on the topic 'Correctional classification'




Consult the top 50 dissertations / theses for your research on the topic 'Correctional classification.'


You can also download the full text of each publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Jascor, Barb. "A comparison of offender classification systems and the incidence of offender misconduct in a mid-west county jail." Online version, 2009. http://www.uwstout.edu/lib/thesis/2009/2009jascorb.pdf.

2

Pettersson, Helena. "Anstaltens ambivalenta funktion : En studie av den samtida kriminalvårdsdiskursen." Thesis, Linköping University, Department of Thematic Studies, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2967.

Abstract:

Due to the attention attracted by several escapes, rescue attempts and hostage situations at Swedish prisons during 2004, a debate about the correctional system and its safety arose. It resulted in official reports and proposals to improve the safety of the institutions. The incidents of 2004 will most likely affect the discourse of the correctional system. The purpose of this study is to analyse which discourses can be distinguished in the correctional system today and will thereby form the foundation of the correctional system of tomorrow.

The method used to fulfil the purpose of the study is a Foucauldian discourse analysis: a social-constructionist perspective resting on the idea that processes of society and power can be read through language and text. Power is a central concept in the study. By discerning the instruments of power activated in the Swedish correctional system, it becomes possible to analyse the discourses made visible by these instruments of power.

In the analysis, two different discourses emerge as the main influences on the work of the correctional system. The liberal idea of empowerment is visible in the work against recidivism, where the focus is on personal responsibility and the will to adjust as a way of re-entering society. Neo-liberal ideas represent the second discourse, in which power works through surveillance and control. In a risk society we must protect ourselves from threats and dangers, which makes the prison a way of keeping criminals at a distance from society.

Classification is activated to make the two discourses coexist, as a way of creating credibility around the work against recidivism. By reproducing the norms and structures of our society, we classify the degree of transgression and punish as a way of handling it. To find a new way into the debate about the correctional system, perhaps we need to ask ourselves: what function do we want punishment to have?

3

Harish, Kumar Rithika. "Spelling Correction To Improve Classification Of Technical Error Reports." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-263112.

Abstract:
This master's thesis investigated whether spelling correction improves the performance of report classification. The idea is to test different approaches to spelling correction and check which approach suits this particular dataset. Three approaches were tested: the first two considered only the erroneous word, while the third also considered the context, i.e. the words surrounding the erroneous one. The results after spelling correction were tested on a model classifier. No significant improvement in classifier performance was observed compared to the baseline. The likely reason is that most reports contain no more than a few spelling errors, and the majority of words detected as spelling errors are not English. However, the second approach performed better than the baseline on this dataset because it is language independent: most of the non-words were non-English words, which were dynamically updated based on the input.
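As a concrete illustration of the isolated-word setting described above (the thesis's first two approaches), here is a minimal frequency-based corrector in the spirit of Norvig's classic spell checker. The toy corpus, vocabulary and function names are illustrative assumptions, not the thesis's implementation.

```python
# Minimal sketch of isolated-word spelling correction: generate all strings one
# edit away and keep the most frequent known word. Corpus is a hypothetical stand-in.
from collections import Counter
import re

corpus = "the sensor reported a connection error after restart " * 3
WORDS = Counter(re.findall(r"[a-z]+", corpus.lower()))

def edits1(word):
    """All strings one edit away from `word` (deletes, swaps, replaces, inserts)."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [l + r[1:] for l, r in splits if r]
    transposes = [l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1]
    replaces = [l + c + r[1:] for l, r in splits if r for c in letters]
    inserts = [l + c + r for l, r in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def correct(word):
    """Pick the known candidate with the highest corpus frequency."""
    candidates = ({word} & WORDS.keys()) or (edits1(word) & WORDS.keys()) or {word}
    return max(candidates, key=lambda w: WORDS[w])

print(correct("conection"))  # -> "connection"
```

A context-aware variant (the third approach) would additionally score candidates with an n-gram or neural language model over the surrounding words.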
4

Ahmad, Asmala. "Atmospheric effects on land classification using satellites and their correction." Thesis, University of Sheffield, 2013. http://etheses.whiterose.ac.uk/14602/.

Abstract:
Haze occurs almost every year in Malaysia and is caused by smoke originating from forest fires in Indonesia. It causes visibility to drop, affecting the data acquired for this area by optical sensors such as those on board Landsat, the remote sensing satellites that have provided the longest continuous record of Earth's surface. The work presented in this thesis aims to develop a better understanding of atmospheric effects on land classification using satellite data, and of methods for removing them. The two main atmospheric effects dealt with here are cloud and haze. Detection of cloud and its shadow is carried out using MODIS algorithms, which make optimal use of its rich set of bands; the analysis is applied to Landsat data and shows high agreement with other methods. The thesis then turns to determining the most suitable classification scheme. Maximum Likelihood (ML) is found to be preferable due to its simplicity, objectivity and ability to classify land covers with acceptable accuracy. The effects of haze are subsequently modelled and simulated as the sum of a weighted signal component and a weighted pure haze component. In this way the spectral and statistical properties of the land classes can be systematically investigated, showing that haze modifies the class spectral signatures and consequently causes classification accuracy to decline. Based on the haze model, a method for removing haze from satellite data was developed and tested on both simulated and real datasets. The results show that the removal method is able to clean up haze and improve classification accuracy, although highly non-uniform haze may hamper its performance.
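The weighted-sum haze model lends itself to a compact numerical sketch. The following fragment simulates and then inverts such a mixture under an assumed known haze weight; the band values and weight are hypothetical illustrations, not the thesis's calibrated parameters.

```python
# Minimal sketch of the additive haze model: each hazy pixel is a weighted sum of
# the clear-scene signal and a pure-haze spectrum. All values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
clear = rng.uniform(0.05, 0.4, size=(100, 100, 6))          # clear reflectance, 6 bands
pure_haze = np.array([0.30, 0.28, 0.25, 0.20, 0.10, 0.05])  # haze stronger in short bands

def add_haze(signal, w):
    """Simulate haze: weighted signal plus weighted pure-haze spectrum."""
    return (1.0 - w) * signal + w * pure_haze

hazy = add_haze(clear, w=0.3)
# Inverting the model (with a known or estimated weight) recovers the signal:
recovered = (hazy - 0.3 * pure_haze) / 0.7
print(np.allclose(recovered, clear))  # True
```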
5

Pinheiro, Muriel Aline. "Processing, radiometric correction, autofocus and polarimetric classification of circular SAR data." Instituto Tecnológico de Aeronáutica, 2010. http://www.bd.bibl.ita.br/tde_busca/arquivo.php?codArquivo=1083.

Abstract:
The demand for high-resolution SAR systems and for imaging techniques that retrieve scene information in the third dimension has stimulated the development of new acquisition modes and processing approaches. This work studies one of the newest SAR acquisition modes, circular SAR, in which the platform follows a non-linear circular trajectory. A brief introduction to the acquisition geometry is presented along with the advantages of this mode, such as volumetric reconstruction capability, higher resolutions and the possibility of retrieving target information from a wider range of observation angles. To deal with the non-linearity of the trajectory, a processing approach using the time-domain back-projection algorithm is proposed to focus and radiometrically correct the images, taking into account the antenna patterns and propagation losses. An existing autofocus approach for correcting motion errors is validated in the circular SAR context and a new frequency-domain approach is proposed. Once the images are processed and calibrated, a polarimetric analysis is presented. In this context, a new polarimetric classification methodology is proposed for the particular geometry under consideration. The method uses the H-α plane and the information of the first eigenvalue to classify small sub-apertures of the circular trajectory and finally classify the entire 360° circular aperture. Using information from all sub-apertures it is possible to preserve information on directional targets and diminish the effects of topography-induced defocusing on the classification. To reduce speckle and improve the classification, an adaptive Lee filter is implemented. The processing and calibration approaches and the classification methodology are validated with real circular SAR data acquired with SAR systems of the German Aerospace Center (DLR).
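Time-domain back-projection handles such non-linear tracks naturally, because each pulse is projected onto the image grid from its own measured antenna position. Below is a minimal, hedged sketch of the algorithm's core loop for a circular track; all geometry, sampling parameters and the empty echo matrix are placeholder assumptions.

```python
# Hedged sketch of time-domain back-projection over a circular SAR track,
# assuming idealized range-compressed pulses; parameters are illustrative.
import numpy as np

c = 3e8
fc, fs = 9.6e9, 200e6          # carrier and range-sampling frequency (hypothetical)
n_pulses, n_range = 360, 1024
radius, height = 500.0, 300.0  # circular track radius and altitude [m]

angles = np.linspace(0, 2 * np.pi, n_pulses, endpoint=False)
track = np.stack([radius * np.cos(angles), radius * np.sin(angles),
                  np.full(n_pulses, height)], axis=1)

data = np.zeros((n_pulses, n_range), dtype=complex)  # range-compressed echoes
r0 = 400.0                                           # range of the first sample [m]

# Back-project each pulse onto a ground-plane grid.
x = y = np.linspace(-50, 50, 128)
X, Y = np.meshgrid(x, y)
image = np.zeros(X.shape, dtype=complex)
for pos, pulse in zip(track, data):
    R = np.sqrt((X - pos[0])**2 + (Y - pos[1])**2 + pos[2]**2)  # pixel-to-antenna range
    idx = np.clip(((2 * R / c - 2 * r0 / c) * fs).astype(int), 0, n_range - 1)
    image += pulse[idx] * np.exp(4j * np.pi * fc * R / c)       # phase-align and sum
```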
6

Falahati, Asrami Farshad. "Alzheimer's Disease Classification using K-OPLS and MRI." Thesis, Linköpings universitet, Medicinsk informatik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-78093.

Abstract:
In this thesis, we used the kernel-based orthogonal projection to latent structures (K-OPLS) method to discriminate between Alzheimer's disease (AD) patients and healthy control subjects (CTL), and to predict conversion from mild cognitive impairment (MCI) to AD. Three cohorts were used to create two datasets: a small dataset of 63 subjects based on the Alzheimer's Research Trust (ART) cohort, and a large dataset of 1074 subjects combining the AddNeuroMed (ANM) and Alzheimer's Disease Neuroimaging Initiative (ADNI) cohorts. In the ART dataset, 34 regional cortical thickness measures and 21 volumetric measures from MRI, in addition to 3 metabolite ratios from MRS, altogether 58 variables, were obtained for 28 AD and 35 CTL subjects. Three different K-OPLS models were created based on the MRI measures, the MRS measures and their combination. Combining the MRI and MRS measures significantly improved the discriminant power, resulting in a sensitivity of 96.4% and a specificity of 97.1%. In the combined dataset (ADNI and AddNeuroMed), the FreeSurfer pipeline was used to extract 34 regional cortical thickness measures and 23 volumetric measures from MRI scans of 295 AD, 335 CTL and 444 MCI subjects. The classification of AD and CTL subjects using the K-OPLS model resulted in a high sensitivity of 85.8% and a specificity of 91.3%. Subsequently, the K-OPLS model was used to prospectively predict conversion from MCI to AD according to the one-year follow-up diagnosis. As a result, 78.3% of the MCI converters were classified as AD-like and 57.5% of the MCI non-converters were classified as control-like. Furthermore, an age correction method was proposed to remove the effect of age as a confounding factor. The method successfully removed the age-related changes from the data and slightly improved classification and prediction performance, so that 82.1% of the MCI converters were correctly classified. All analyses were performed using 7-fold cross-validation. The K-OPLS method shows strong potential for classification of AD and CTL, and for prediction of MCI conversion.
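To make the evaluation protocol concrete, the sketch below shows linear age correction followed by 7-fold cross-validated classification. Since K-OPLS has no standard scikit-learn implementation, a kernel SVM stands in for it purely for illustration, and all data are synthetic.

```python
# Hedged sketch of the protocol: regress the linear age trend out of each feature,
# then run 7-fold cross-validated classification. Data and labels are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n = 200
age = rng.uniform(55, 90, n)
y = rng.integers(0, 2, n)                      # 0 = CTL, 1 = AD (synthetic labels)
X = rng.normal(size=(n, 57)) + 0.02 * age[:, None] + 0.5 * y[:, None]

# Age correction: remove the linear age trend (ideally estimated on controls only).
trend = LinearRegression().fit(age[:, None], X)
X_corrected = X - trend.predict(age[:, None]) + X.mean(axis=0)

scores = cross_val_score(SVC(kernel="rbf"), X_corrected, y, cv=7)
print(scores.mean())
```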
7

Gould, Laurie A. "Perceptions of risk and need in the classification and supervision of offenders in the community corrections setting the role of gender /." Orlando, Fla. : University of Central Florida, 2008. http://purl.fcla.edu/fcla/etd/CFE0002008.

8

Romero, Marie. "Le traitement juridique des délits sexuels sur mineurs, une enquête de sociologie législative et judiciaire." Thesis, Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLEH017.

Abstract:
Western societies are experiencing a major evolution in the recognition, moral condemnation and judicial repression of sexual violence against children and young people, girls in particular but also boys. This research is set within that general framework. Drawing on two studies, one in legislative sociology and one in judicial sociology, it brings to light the crucial place now given to age in the evolution of the norms and representations of what is sexually permitted and forbidden. The first study, in historical and legislative sociology, covers the evolution of French criminal law from the Revolution to the present day and focuses on how the categories of incrimination changed as consent (rather than marital status) became the major criterion separating the permitted from the forbidden. The second study, in judicial sociology, was carried out in two criminal courts and two juvenile courts in the south of France. It is based on an archive of 81 cases of sexual offences against minors tried in 2010 and aims to clarify how the criminal qualification of the facts is shaped not only by problems of proof but also by changing legal and social norms. Common to both studies is the identification and exploration of two forms of sexual consent: situational and statutory. Throughout the research, the socio-legal treatment of age statuses (minor/adult and minor/minor), the meaning given to age thresholds (consent, discernment), the law's difficulties with incest, and gender asymmetries among both victims and offenders are analysed from different angles.
9

Krause, Wesley Allen. "An evaluation methodology using probation classification instruments in the selection of a nonequivalent control group." CSUSB ScholarWorks, 1989. https://scholarworks.lib.csusb.edu/etd-project/436.

10

Scholler, Jules. "Imagerie optique 3D multimodale : traitements spatio-temporels, correction du front d'onde et classification automatique." Thesis, Université Paris sciences et lettres, 2020. http://www.theses.fr/2020UPSLS007.

Abstract:
This PhD project combines numerical and optical methods to apply and push the limits of static and dynamic full-field optical coherence tomography (FFOCT) for microscopy and medical imaging. Post-processing methods based on singular value decomposition enabled the acquisition of dynamic images in vivo for the first time, while exploiting the non-stationarity of the signals improved the signal-to-noise ratio and therefore the achievable imaging depth. Dynamic imaging is demonstrated on retinal organoids, where we show that the method provides new biological insights that are not accessible with any other method. Hardware developments to counteract optical aberrations were successfully conducted, leading to a low-complexity, cost-efficient implementation that can reliably acquire retinal images with diffraction-limited resolution. An experimentally validated understanding of how optical aberrations manifest in FFOCT allowed us to design the proposed system and simulate its performance. Finally, potential clinical applications of dynamic and static FFOCT for angiography in the human eye in vivo, wound healing ex vivo, retinal cell classification and breast cancer screening with machine learning methods are successfully demonstrated.
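The SVD-based post-processing mentioned above can be sketched compactly: stacking the image series as a Casorati matrix and discarding the strongest singular components removes the quasi-static background, so that the dynamic signal remains. The stack, the mode count and the shapes below are illustrative assumptions, not the thesis's actual settings.

```python
# Hedged sketch of SVD filtering of an FFOCT image stack: the largest singular
# components capture quasi-static structure; removing them isolates dynamics.
import numpy as np

rng = np.random.default_rng(2)
stack = rng.normal(size=(512, 64, 64))            # (time, y, x) image series
casorati = stack.reshape(stack.shape[0], -1)      # time x pixels Casorati matrix

U, s, Vt = np.linalg.svd(casorati, full_matrices=False)
k = 5                                             # drop the k strongest (static) modes
dynamic = (U[:, k:] * s[k:]) @ Vt[k:]             # keep the remaining modes
dynamic_image = dynamic.std(axis=0).reshape(64, 64)  # temporal fluctuation map
```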
11

Long, Joshua S. "Appropriate classification of prisoners: Balancing prison safety with the least restrictive placements of Ohio inmates." University of Cincinnati / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1593267208117717.

12

Bohlandt, Florian Martin. "Single manager hedge funds - aspects of classification and diversification." Thesis, Stellenbosch : Stellenbosch University, 2013. http://hdl.handle.net/10019.1/85859.

Abstract:
Thesis (PhD)--Stellenbosch University, 2013.
A persistent problem for hedge fund researchers presents itself in the form of inconsistent and diverse style classifications within and across database providers. For this paper, single-manager hedge funds from the Hedge Fund Research (HFR) and Hedgefund.Net (HFN) databases were classified on the basis of a common factor, extracted using the factor axis methodology. It was assumed that the returns of all sample hedge funds are attributable to a common factor that is shared across hedge funds within one classification, and a specific factor that is unique to a particular hedge fund. In contrast to earlier research and the application of principal component analysis, factor axis has sought to determine how much of the covariance in the dataset is due to common factors (communality). Factor axis largely ignores the diagonal elements of the covariance matrix and orthogonal factor rotation maximises the covariance between hedge fund return series. In an iterative framework, common factors were extracted until all return series were described by one common and one specific factor. Prior to factor extraction, the series was tested for autoregressive moving-average processes and the residuals of such models were used in further analysis to improve upon squared correlations as initial factor estimates. The methodology was applied to 120 ten-year rolling estimation windows in the July 1990 to June 2010 timeframe. The results indicate that the number of distinct style classifications is reduced in comparison to the arbitrary self-selected classifications of the databases. Single manager hedge funds were grouped in portfolios on the basis of the common factor they share. In contrast to other classification methodologies, these common factor portfolios (CFPs) assume that some unspecified individual component of the hedge fund constituents’ returns is diversified away and that single manager hedge funds should be classified according to their common return components. From the CFPs of single manager hedge funds, pure style indices were created to be entered in a multivariate autoregressive framework. For each style index, a Vector Error Correction model (VECM) was estimated to determine the short-term as well as co-integrating relationship of the hedge fund series with the index level series of a stock, bond and commodity proxy. It was postulated that a) in a well-diversified portfolio, the current level of the hedge fund index is independent of the lagged observations from the other asset indices; and b) if the assumptions of the Efficient Market Hypothesis (EMH) hold, it is expected that the predictive power of the model will be low. The analysis was conducted for the July 2000 - June 2010 period. Impulse response tests and variance decomposition revealed that changes in hedge fund index levels are partially induced by changes in the stock, bond and currency markets. Investors are therefore cautioned not to overemphasise the diversification benefits of hedge fund investments. Commodity trading advisors (CTAs) / managed futures, on the other hand, deliver diversification benefits when integrated with an existing portfolio. The results indicated that single manager hedge funds can be reliably classified using the principal factor axis methodology. Continuously re-balanced pure style index representations of these classifications could be used in further analysis. 
Extensive multivariate analysis revealed that CTAs and macro hedge funds offer superior diversification benefits in the context of existing portfolios. The empirical results are of interest not only to academic researchers, but also practitioners seeking to replicate the methodologies presented.
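For readers who want to reproduce the spirit of the cointegration step, the hedged sketch below estimates a VECM between a synthetic style index and stock/bond/commodity proxies using statsmodels. The lag order, cointegration rank and random-walk data are placeholder assumptions, not the thesis's specification.

```python
# Hedged sketch of the VECM step: short-run and cointegrating relations between a
# hedge-fund style index and asset-class proxies. Synthetic level series only.
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM

rng = np.random.default_rng(3)
n = 520  # ~10 years of weekly observations
common = rng.normal(size=n).cumsum()            # shared stochastic trend
levels = pd.DataFrame({
    "style_index": common + rng.normal(scale=0.5, size=n),
    "stocks": common + rng.normal(scale=0.7, size=n),
    "bonds": rng.normal(size=n).cumsum(),
    "commodities": rng.normal(size=n).cumsum(),
})

model = VECM(levels, k_ar_diff=2, coint_rank=1, deterministic="co")
res = model.fit()
print(res.alpha)  # loading coefficients on the cointegrating relation
```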
13

Gould, Laurie. "PERCEPTIONS OF RISK AND NEED IN THE CLASSIFICATION AND SUPERVISION OF OFFENDERS IN THE COMMUNITY CORRECTIONS SETTING: THE ROLE O." Doctoral diss., University of Central Florida, 2008. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4163.

Abstract:
Risk has emerged as a defining feature of punishment in the United States. Feeley and Simon (1992) note that contemporary punishment is increasingly moving away from rehabilitation (the old penology) and toward the management and control of offenders (the new penology), often through actuarial techniques. While the profusion of risk assessment instruments, now entering their fourth generation, provides some support for the assertion that risk is indeed an important element in corrections, it was previously unknown whether the risk model applied to all offenders, particularly female offenders. This dissertation addressed that gap by examining whether the risk model applied to female offenders in the community corrections setting. It surveyed 93 community corrections officers employed by the Orange County Community Corrections Department. The findings suggest that the department has incorporated many elements of the new penology into the classification and supervision of offenders in each of its units, though several gender differences were noted. Classification overrides, the perceived level of risk to the community, supervision decisions, and the perceived importance of risk and need factors were all examined in this study. The results indicate that some elements of classification and supervision function uniformly for offenders irrespective of gender, but some areas, such as the perceived level of risk to the community and the perceived importance of risk factors, are influenced by gender.
14

Ticknor, Bobbie. "Sex Offender Policy and Practice: Comparing the SORNA Tier Classification System and Static-99 Risk Levels." University of Cincinnati / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1406880689.

15

Wannous, Hazem. "CLASSIFICATION MULTI VUES DE RÉGIONS COULEUR - APPLICATION A L'ÉVALUATION 3D DES PLAIES CHRONIQUES." Phd thesis, Université d'Orléans, 2008. http://tel.archives-ouvertes.fr/tel-00408712.

Abstract:
While functional exploration relies on sophisticated medical imaging techniques, anatomical surface assessment still depends on imprecise and costly manual clinical practices. From freehand colour images taken with a digital camera, an innovative tool for assessing chronic wounds has been developed. It combines the two modes of examination used in clinical practice, colorimetric analysis and dimensional measurement of the damaged tissues, in a user-friendly system designed for wide adoption by care teams. Based on a ground truth established by clinicians, a database of skin samples was built. The samples come from unsupervised segmentation of colour images after colorimetric correction, which ensures independence from lighting conditions and from changes of viewpoint and camera. They are then characterized by colour and texture descriptors, selected and re-conditioned by data analysis techniques, to train a support vector machine with a perceptron kernel on the four tissue categories. The single-view classification results are then fused using the 3D model of the wound, which establishes the spatial correspondences between a pair of stereoscopic images. This clearly improves the robustness of the classification, which is also stable across several reconstructions. Exact tissue areas are obtained by simple back-projection of the tissue regions onto the 3D model. The geometric model is also strengthened, since the automatic delineation of the wound uses healthy-skin detection to remove triangles from the mesh.
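The multi-view fusion step can be illustrated in a few lines: per-view class scores for corresponding surface points are combined before the final decision. Everything below (scores, shapes, the averaging rule) is a synthetic stand-in for the thesis's 3D-model-based correspondence and fusion.

```python
# Hedged sketch of multi-view fusion: average per-view class scores for each
# corresponding surface point, then take the best class. Data are synthetic.
import numpy as np

rng = np.random.default_rng(5)
n_views, n_points, n_classes = 2, 1000, 4
# class scores predicted independently in each view, e.g. by the tissue classifier
scores = rng.random((n_views, n_points, n_classes))
fused_labels = scores.mean(axis=0).argmax(axis=1)  # one tissue label per 3D point
```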
16

Philipp, Katrin, Florian Lemke, Matthias C. Wapler, Ulrike Wallrabe, Nektarios Koukourakis, and Jürgen W. Czarske. "Spherical aberration correction of adaptive lenses." SPIE, 2017. https://tud.qucosa.de/id/qucosa%3A34878.

Abstract:
Deformable mirrors are the standard adaptive optical elements for aberration correction in confocal microscopy. Their usage leads to increased contrast and resolution. However, these improvements are achieved at the cost of bulky optical setups. Since spherical aberrations are the dominating aberrations in confocal microscopy, it is not required to employ all degrees of freedom commonly offered by deformable mirrors. In this contribution, we present an alternative approach for aberration correction in confocal microscopy based on a novel adaptive lens with two degrees of freedom. These lenses enable both axial scanning and aberration correction, keeping the setup simple and compact. Using digital holography, we characterize the tuning range of the focal length and the spherical aberration correction ability of the adaptive lens. The operation at fixed trajectories in terms of focal length and spherical aberrations is demonstrated and investigated in terms of reproducibility. First results indicate that such adaptive lenses are a promising approach towards high-resolution, high-speed three-dimensional microscopy.
17

Gyurecz, György, and Tibor Bercsey. "Surface Shape Correction by Highlight Lines." TUDpress - Verlag der Wissenschaften GmbH, 2012. https://tud.qucosa.de/id/qucosa%3A30525.

Abstract:
The design of industrial products involves various construction considerations. Besides the functionality and manufacturability conditions that are essential in technical design, products must also meet aerodynamic, hydrodynamic and aesthetic demands. These demands are particularly important in the automotive, ship and airplane industries, but they are also present in the design of medical replacements, household appliances, etc. The common objective of the above aspects is to produce a smooth and irregularity-free surface shape. The quality and smoothness of the surfaces of industrial objects can be efficiently evaluated by highlight lines.
18

Schnell, Sondre Kvalvåg, Thijs J. H. Vlugt, Jean-Marc Simon, Signe Kjelstrup, and Dick Bedeaux. "Direct calculation of the thermodynamic correction factor, gamma, from molecular dynamics simulations." Diffusion fundamentals 16 (2011) 72, S. 1-2, 2011. https://ul.qucosa.de/id/qucosa%3A13814.

19

Franosch, Thomas, and Felix Höfling. "Cluster-resolved dynamic scaling theory and universal corrections for transport on percolating systems." Diffusion fundamentals 11 (2009) 59, S. 1, 2009. https://ul.qucosa.de/id/qucosa%3A14024.

Abstract:
For a continuum percolation model, it has been shown recently that the crossover from pure subdiffusion to normal diffusion extends over five decades in time [1, 2]; in addition, the asymptotic behavior is approached slowly and the large corrections cannot simply be ignored. Thus, it is of general interest to develop a systematic description of universal corrections to scaling in percolating systems. We propose a universal exponent relation connecting the leading corrections to scaling of the cluster size distribution with the dynamic corrections to the asymptotic transport behavior at criticality. Our derivation is based on a cluster-resolved scaling theory unifying the scaling of both the cluster size distribution and the dynamics of a random walker. We corroborate our theoretical approach by extensive simulations for a site-percolating square lattice and numerically determine both the static and dynamic correction exponents [3].
20

Vaishnav, Rajesh Ishwardas, and Christoph Jacobi. "Correction to: Ionospheric response to the 25 - 26 August 2018 intense geomagnetic storm." Universität Leipzig, 2020. https://ul.qucosa.de/id/qucosa%3A74122.

Abstract:
The thermosphere-ionosphere region is controlled mainly by solar activity, but also by geomagnetic activity. In this case study, the Earth's ionospheric response to the intense geomagnetic storm of 25-26 August 2018 is investigated using International GNSS Service (IGS) Total Electron Content (TEC) observations. During this major storm, the disturbance storm time (Dst) index reached a minimum of -174 nT. We use observations and model simulations to analyse the ionospheric response during the initial and main phases of the magnetic storm. A significant difference between storm-day and quiet-day TEC is observed. The O/N2 ratio observed by the GUVI instrument on board the TIMED satellite is used to analyse the storm effect. The result shows a clear depletion of the O/N2 ratio at high latitudes and an enhancement at low latitudes during the main phase of the storm. Furthermore, Coupled Thermosphere Ionosphere Plasmasphere electrodynamics (CTIPe) model simulations were used. The results suggest that the CTIPe model can capture the ionospheric variations during storms.
21

Adeline, Karine. "Classification des matériaux urbains en présence de végétation éparse par télédétection hyperspectrale à haute résolution spatiale." Thesis, Toulouse, ISAE, 2014. http://www.theses.fr/2014ESAE0056/document.

Abstract:
The new advances in remote sensing acquisition at very high spatial resolution, whether spaceborne (PLEIADES, HYPXIM), airborne or carried by unmanned aerial vehicles, open the way for the study of complex environments such as urban areas. In particular, a better understanding of urban heat islands, urban planning and vegetation biodiversity requires detailed material classification maps based on the spectral information provided by hyperspectral imagery (0.4-2.5 μm). However, one of the main limitations of classification methods is the absence of shadow processing. Past studies have demonstrated that spectral information can be extracted from shadows cast by buildings, but existing methods fail in shadows cast by trees because of their crown porosity. The objective of this thesis is to characterize surface optical properties in urban tree shadows by means of radiative transfer and atmospheric correction tools. The originality of this work is to study tree crown porosity through the analysis of tree crown transmittance. The problem is addressed in two parts. First, an experimental design using the DART tool is carried out to examine the relationships between the transmittance of an isolated tree and different biophysical and external variables. The estimation of tree crown transmittance is then assessed with several 3D tree modelling strategies, including a real model derived from terrestrial lidar measurements. Second, a new atmospheric correction method suited to the processing of tree shadows, ICARE-VEG, is implemented based on these results. An airborne and field campaign, UMBRA, was dedicated to its validation, and its performance was compared with existing tools. The conclusions open broad perspectives for the overall interpretation of remote sensing images and highlight the complexity of modelling natural physical processes at very fine spatial scales.
22

Mahé, Gaël. "Correction centralisée des distorsions spectrales de la parole sur les réseaux téléphoniques." Phd thesis, Université Rennes 1, 2002. http://tel.archives-ouvertes.fr/tel-00114668.

Abstract:
This work deals with the correction of the spectral distortions undergone by speech on telephone networks, primarily the analogue part of the fixed (terrestrial) network. These distortions are due to the transfer functions of the telephone terminals at emission and reception, and to the corresponding analogue telephone lines. The goal is to restore, blindly, a "timbre" as close as possible to the speaker's original voice, by means of signal processing centralized in a piece of network equipment.

We propose a blind spectral equalization algorithm that aligns, over a limited frequency band (200-3150 Hz), the long-term spectrum of the processed signal with a reference spectrum (the spectrum of ITU-T Recommendation P.50). Subjective evaluations show a satisfactory restoration of the speakers' original timbre, within the limits of the chosen equalization band.

However, the A-law quantization of the equalizer's output samples introduces an audible noise at reception. Two approaches are therefore proposed to perceptually mask this noise by spectral shaping. One is based on feeding the filtered quantization error back into the quantizer input. The other explores, with a Viterbi-like algorithm, the temporal sequences of possible quantization levels, so as to maximize a probabilistic noise-masking criterion. A subjective evaluation finally shows, on the one hand, that the unshaped noise is preferred to the shaped noise, which is more sporadic but "harsher"; and on the other hand, that a voice whose timbre has been corrected, at the cost of this quantization noise, is preferred to the same voice received over a telephone link without timbre correction (and without added noise).

To better match the equalizer's reference spectrum to individual speakers, a classification of speakers according to their spectrum, into two or four classes, is studied, and classification criteria robust to the distortions of the telephone link are defined. This classification makes it possible to use one reference spectrum per class instead of a single reference spectrum. The result is a reduction of the spectral distortion introduced by the equalizer, which translates, for some speakers, into a significantly improved timbre correction.
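A hedged sketch of the central equalization idea follows: estimate the long-term spectrum, derive a band-limited gain toward a reference, and apply it frame by frame. The flat reference stands in for the ITU-T P.50 spectrum, and all signals and parameters are illustrative, not the thesis's implementation.

```python
# Hedged sketch of blind long-term spectral equalization inside 200-3150 Hz.
import numpy as np
from scipy.signal import welch, stft, istft

fs = 8000
rng = np.random.default_rng(4)
speech = rng.normal(size=fs * 10)                    # placeholder for telephone speech

f, long_term = welch(speech, fs=fs, nperseg=512)     # long-term spectrum estimate
reference = np.ones_like(long_term)                  # hypothetical reference spectrum
band = (f >= 200) & (f <= 3150)                      # equalize only inside the band
gain = np.ones_like(long_term)
gain[band] = np.sqrt(reference[band] / long_term[band])

f_s, t_s, S = stft(speech, fs=fs, nperseg=512)
S *= gain[:, None]                                   # apply the gain frame by frame
_, equalized = istft(S, fs=fs, nperseg=512)
```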
23

Martinoty, Gilles. "Reconnaissance de matériaux sur des images aériennes en multirecouvrement, par identification de fonction de réflectances bidirectionnelles." Paris 7, 2005. http://www.theses.fr/2005PA077039.

24

Radner, Hannes, Lars Büttner, and Jürgen Czarske. "Interferometric velocity measurements through a fluctuating interface using a Fresnel guide star-based wavefront correction system." SPIE, 2018. https://tud.qucosa.de/id/qucosa%3A71762.

Abstract:
Wavefront correction systems can be used to improve optical measurements that are degraded by optical distortions. Generally, these systems evaluate a guide star in transmission: the guide star emits well-known wavefronts, which sample the distortion by propagating through it, so the system can directly measure the distortion and correct it. In some setups, however, it is not possible to generate a guide star behind the distortion. Here, we consider a liquid jet with a radially open surface. A Mach-Zehnder interferometer is presented in which both beams are stabilized through a fluctuating liquid jet surface using the Fresnel guide star (FGS) technique. The wavefront correction system estimates the beam path behind the surface by evaluating the incident and reflected angles of the Fresnel reflex with an observer, in order to control the incident angle for the desired beam path. With this approach, only one optical access through the phase boundary is needed for the measurement, which can be traversed over a range of 250 μm with a significantly increased rate of valid signals. The experiment demonstrates the potential of the FGS technique for measurements through fluctuating phase boundaries, such as film flows or jets.
25

Philipp, Katrin, Florian Lemke, Matthias C. Wapler, Nektarios Koukourakis, Ulrike Wallrabe, and Jürgen W. Czarske. "Axial scanning and spherical aberration correction in confocal microscopy employing an adaptive lens." SPIE, 2018. https://tud.qucosa.de/id/qucosa%3A71732.

Abstract:
We present a fluid-membrane lens with two piezoelectric actuators that offer versatile, circular symmetric lens surface shaping. A wavefront-measurement-based control system ensures robustness against creeping and hysteresis effects of the piezoelectric actuators. We apply the adaptive lens to correct synthetic aberrations induced by a deformable mirror. The results suggest that the lens is able to correct spherical aberrations with standard Zernike coefficients between 0 μm and 1 μm, while operating at refractive powers up to about 4 m⁻¹. We apply the adaptive lens in a custom-built confocal microscope to allow simultaneous axial scanning and spherical aberration tuning. The confocal microscope is extended by an additional phase measurement system to include the control algorithm. To verify our approach, we use the maximum intensity and the axial FWHM of the overall confocal point spread function as figures of merit. We further discuss the ability of the adaptive lens to correct specimen-induced aberrations in a confocal microscope.
26

Xu, Zhanfeng. "Prediction and Classification of Physical Properties by Near-Infrared Spectroscopy and Baseline Correction of Gas Chromatography Mass Spectrometry Data of Jet Fuels by Using Chemometric Algorithms." Ohio University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1336436389.

27

Stankevičius, Arvydas. "Nuteistųjų laisvės atėmimu klasifikavimas ir diferencijavimas." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2009. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2009~D_20090204_133034-62769.

Abstract:
This Master's thesis analyses the process of classification and differentiation of convicts sentenced to imprisonment. The theoretical part (sections 1-2) examines the meaning and purpose of the notions of classification and differentiation, their place in the system of legal acts, and the significance of the classification and differentiation process at the international level. The practical-methodological part (sections 3-4) analyses practical problems in applying legal acts in the field of classification and differentiation of convicts, examines the practical importance of these processes for the correction of convicts, and offers suggestions. A survey was also carried out to establish whether the legal regulation of the differentiation process is sufficient and how it could be improved. Two groups of respondents were surveyed: convicts serving prison sentences and officers working in correctional facilities. The thesis ends with conclusions and proposals.
28

España, Boquera Salvador. "Contributions to the joint segmentation and classification of sequences (My two cents on decoding and handwriting recognition)." Doctoral thesis, Universitat Politècnica de València, 2016. http://hdl.handle.net/10251/62215.

Abstract:
[EN] This work is focused on problems (like automatic speech recognition (ASR) and handwritten text recognition (HTR)) that: 1) can be represented (at least approximately) in terms of one-dimensional sequences, and 2) are solved by breaking the observed sequence down into segments which are associated to units taken from a finite repertoire. The required segmentation and classification tasks are so intrinsically interrelated ("Sayre's Paradox") that they have to be performed jointly. We have been inspired by what some works call the "successful trilogy", which refers to the synergistic improvements obtained when considering: a good formalization framework and powerful algorithms; a clever design and implementation taking the best profit of the hardware; and an adequate preprocessing with a careful tuning of all heuristics. We describe and study "two stage generative models" (TSGMs) comprising two stacked probabilistic generative stages without reordering. This model includes not only Hidden Markov Models (HMMs) but also "segmental models" (SMs). "Two stage decoders" may be deduced by simply running a TSGM in reverse, introducing non-determinism when required: 1) a directed acyclic graph (DAG) is generated and 2) it is used together with a language model (LM). One-pass decoders constitute a particular case. A formalization of parsing and decoding in terms of semiring values and language equations proposes the use of recurrent transition networks (RTNs) as a normal form for context-free grammars (CFGs), using them in a parsing-as-composition paradigm, so that parsing CFGs results in a slight extension of parsing regular ones. Novel transducer composition algorithms are proposed that can work with RTNs and can deal with null transitions without resorting to filter composition, even in the presence of non-idempotent semirings. A review of LMs is given, with contributions mainly focused on LM interfaces, LM representation and the evaluation of neural network LMs (NNLMs). A review of SMs covers the combination of generative and discriminative segmental models, together with a general scheme of frame emission and another of SMs. Fast, cache-friendly specialized Viterbi lexicon decoders taking profit of particular HMM topologies are proposed; they are able to manage sets of active states without requiring dictionary look-ups (e.g. hashing). A dataflow architecture allowing the design of flexible and diverse recognition systems from a small repertoire of components is proposed, including a novel DAG serialization protocol. DAG generators can take over-segmentation constraints into account, make use of SMs other than HMMs, take profit of the specialized decoders proposed in this work, and use a transducer model to control their behavior, making it possible, for instance, to use context-dependent units. DAG decoders take profit of a general LM interface that can be extended to deal with RTNs. Improvements for one-pass decoders are proposed by combining the specialized lexicon decoders and the "bunch" extension of the LM interface, including an adequate parallelization. The experimental part is mainly focused on HTR tasks with different input modalities (offline, bimodal). We have proposed novel preprocessing techniques for offline HTR which replace classical geometrical heuristics with automatic learning techniques (neural networks).
Experiments conducted on the IAM database using this new preprocessing and HMMs hybridized with multilayer perceptrons (MLPs) have obtained some of the best results reported for this reference database. Among other HTR experiments described in this work, we have used over-segmentation information, tried lexicon-free approaches, performed bimodal experiments and experimented with the combination of hybrid HMMs with holistic classifiers.
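The joint segmentation/classification idea at the heart of these decoders can be illustrated with a plain log-domain Viterbi pass over a toy two-state HMM. The thesis's specialized, cache-friendly lexicon decoders go far beyond this sketch, whose model parameters are invented for illustration.

```python
# Hedged sketch of log-domain Viterbi decoding: the best state path both segments
# and labels the observation sequence. Toy two-state, two-symbol HMM.
import numpy as np

log_A = np.log(np.array([[0.7, 0.3],    # state-transition probabilities
                         [0.2, 0.8]]))
log_B = np.log(np.array([[0.9, 0.1],    # P(observation | state)
                         [0.3, 0.7]]))
log_pi = np.log(np.array([0.6, 0.4]))
obs = [0, 0, 1, 1, 1]

delta = log_pi + log_B[:, obs[0]]
psi = []
for o in obs[1:]:
    scores = delta[:, None] + log_A            # best predecessor for each state
    psi.append(scores.argmax(axis=0))
    delta = scores.max(axis=0) + log_B[:, o]

# Backtrack the best state sequence (the joint segmentation/classification).
state = int(delta.argmax())
path = [state]
for back in reversed(psi):
    state = int(back[state])
    path.append(state)
print(path[::-1])  # -> [0, 0, 1, 1, 1]
```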
[ES] Este trabajo se centra en problemas (como reconocimiento automático del habla (ASR) o de escritura manuscrita (HTR)) que cumplen: 1) pueden representarse (quizás aproximadamente) en términos de secuencias unidimensionales, 2) su resolución implica descomponer la secuencia en segmentos que se pueden clasificar en un conjunto finito de unidades. Las tareas de segmentación y de clasificación necesarias están tan intrínsecamente interrelacionadas ("paradoja de Sayre") que deben realizarse conjuntamente. Nos hemos inspirado en lo que algunos autores denominan "La trilogía exitosa", refereido a la sinergia obtenida cuando se tiene: - un buen formalismo, que dé lugar a buenos algoritmos; - un diseño e implementación ingeniosos y eficientes, que saquen provecho de las características del hardware; - no descuidar el "saber hacer" de la tarea, un buen preproceso y el ajuste adecuado de los diversos parámetros. Describimos y estudiamos "modelos generativos en dos etapas" sin reordenamientos (TSGMs), que incluyen no sólo los modelos ocultos de Markov (HMM), sino también modelos segmentales (SMs). Se puede obtener un decodificador de "dos pasos" considerando a la inversa un TSGM introduciendo no determinismo: 1) se genera un grafo acíclico dirigido (DAG) y 2) se utiliza conjuntamente con un modelo de lenguaje (LM). El decodificador de "un paso" es un caso particular. Se formaliza el proceso de decodificación con ecuaciones de lenguajes y semianillos, se propone el uso de redes de transición recurrente (RTNs) como forma normal de gramáticas de contexto libre (CFGs) y se utiliza el paradigma de análisis por composición de manera que el análisis de CFGs resulta una extensión del análisis de FSA. Se proponen algoritmos de composición de transductores que permite el uso de RTNs y que no necesita recurrir a composición de filtros incluso en presencia de transiciones nulas y semianillos no idempotentes. Se propone una extensa revisión de LMs y algunas contribuciones relacionadas con su interfaz, con su representación y con la evaluación de LMs basados en redes neuronales (NNLMs). Se ha realizado una revisión de SMs que incluye SMs basados en combinación de modelos generativos y discriminativos, así como un esquema general de tipos de emisión de tramas y de SMs. Se proponen versiones especializadas del algoritmo de Viterbi para modelos de léxico y que manipulan estados activos sin recurrir a estructuras de tipo diccionario, sacando provecho de la caché. Se ha propuesto una arquitectura "dataflow" para obtener reconocedores a partir de un pequeño conjunto de piezas básicas con un protocolo de serialización de DAGs. Describimos generadores de DAGs que pueden tener en cuenta restricciones sobre la segmentación, utilizar modelos segmentales no limitados a HMMs, hacer uso de los decodificadores especializados propuestos en este trabajo y utilizar un transductor de control que permite el uso de unidades dependientes del contexto. Los decodificadores de DAGs hacen uso de un interfaz bastante general de LMs que ha sido extendido para permitir el uso de RTNs. Se proponen también mejoras para reconocedores "un paso" basados en algoritmos especializados para léxicos y en la interfaz de LMs en modo "bunch", así como su paralelización. La parte experimental está centrada en HTR en diversas modalidades de adquisición (offline, bimodal). Hemos propuesto técnicas novedosas para el preproceso de escritura que evita el uso de heurísticos geométricos. En su lugar, utiliza redes neuronales. 
Se ha probado con HMMs hibridados con redes neuronales consiguiendo, para la base de datos IAM, algunos de los mejores resultados publicados. También podemos mencionar el uso de información de sobre-segmentación, aproximaciones sin restricción de un léxico, experimentos con datos bimodales o la combinación de HMMs híbridos con reconocedores de tipo holístico.
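Since the abstract centres on HMM-based recognizers, a minimal Viterbi decoder in Python may help fix ideas. This is the generic dynamic-programming algorithm over log-probabilities, not the cache-friendly, lexicon-specialized variants developed in the thesis, and all probabilities below are illustrative:

import numpy as np

def viterbi(log_init, log_trans, log_emit):
    """log_emit: (T, n_states) frame log-likelihoods; returns the best state path."""
    T, n = log_emit.shape
    delta = log_init + log_emit[0]           # best score ending in each state
    psi = np.zeros((T, n), dtype=int)        # backpointers
    for t in range(1, T):
        scores = delta[:, None] + log_trans  # (from_state, to_state)
        psi[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emit[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):            # follow backpointers
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

# Tiny two-state example: uniform start, sticky transitions
lp = np.log
print(viterbi(lp([0.5, 0.5]),
              lp([[0.9, 0.1], [0.1, 0.9]]),
              lp([[0.8, 0.2], [0.7, 0.3], [0.2, 0.8]])))  # -> [0, 0, 0]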
España Boquera, S. (2016). Contributions to the joint segmentation and classification of sequences (My two cents on decoding and handwriting recognition) [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/62215
THESIS
Award-winning
APA, Harvard, Vancouver, ISO, and other styles
29

Escalera Guerrero, Sergio. "Coding and Decoding Design of ECOCs for Multi-class Pattern and Object Recognition." Doctoral thesis, Universitat Autònoma de Barcelona, 2008. http://hdl.handle.net/10803/5789.

Full text
Abstract:
Many real problems require multi-class decisions. In the Pattern Recognition field, many techniques have been proposed to deal with the binary problem, but extending 2-class classifiers to the multi-class case is a hard task. In this sense, Error-Correcting Output Codes (ECOC) have proven to be a powerful tool for combining any number of binary classifiers to model multi-class problems. Still, many issues about the capabilities of the ECOC framework remain open. In this thesis, the two main stages of an ECOC design are analyzed: the coding and the decoding steps. We present different problem-dependent designs that exploit knowledge of the problem domain to minimize the number of classifiers while obtaining high classification performance. On the other hand, we analyze the ECOC codification in order to define new decoding rules that take full advantage of the information provided at the coding step. Moreover, as successful classification requires a rich feature set, new feature detection/extraction techniques are presented and evaluated on the new ECOC designs. The evaluation of the new methodology is performed on different real and synthetic data sets: the UCI Machine Learning Repository, handwritten symbols, traffic signs from a Mobile Mapping System, intravascular ultrasound images, the Caltech Repository, and a Chagas disease data set. The results of this thesis show that significant performance improvements are obtained on both traditional coding and decoding ECOC designs when the new coding and decoding rules are taken into account.
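To make the coding and decoding terminology concrete, the following Python sketch shows the basic ECOC machinery only: a one-vs-all coding matrix, one binary classifier per column, and Hamming decoding. It does not reproduce the problem-dependent designs or the new decoding rules contributed by the thesis; the synthetic data set and the choice of logistic regression are arbitrary stand-ins:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

n_classes = 4
X, y = make_classification(n_samples=300, n_classes=n_classes, n_informative=6)

M = 2 * np.eye(n_classes) - 1        # coding matrix: +1 on the diagonal, -1 elsewhere

# Coding step: train one dichotomizer per matrix column
dichotomizers = [LogisticRegression(max_iter=1000).fit(X, M[y, col])
                 for col in range(n_classes)]

# Decoding step: pick the codeword row with the smallest Hamming distance
outputs = np.column_stack([clf.predict(X) for clf in dichotomizers])
pred = np.argmin([(outputs != row).sum(axis=1) for row in M], axis=0)
print("training accuracy:", (pred == y).mean())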
APA, Harvard, Vancouver, ISO, and other styles
30

Al-Qatawneh, Sokyna M. S. "3D Facial Feature Extraction and Recognition. An investigation of 3D face recognition: correction and normalisation of the facial data, extraction of facial features and classification using machine learning techniques." Thesis, University of Bradford, 2010. http://hdl.handle.net/10454/4876.

Full text
Abstract:
Face recognition research using automatic or semi-automatic techniques has emerged over the last two decades. One reason for the growing interest in this topic is the wide range of possible applications for face recognition systems. Another is the emergence of affordable hardware supporting digital photography and video, which has made the acquisition of high-quality, high-resolution 2D images much more ubiquitous. However, 2D recognition systems are sensitive to subject pose and illumination variations, whereas 3D face recognition, which is not directly affected by such environmental changes, could be used alone or in combination with 2D recognition. Recently, with the development of more affordable 3D acquisition systems and the availability of 3D face databases, 3D face recognition has been attracting interest as a way to tackle the performance limitations of most existing 2D systems. In this research, we introduce a robust automated 3D face recognition system that processes 3D data of faces with different facial expressions, hair, shoulders, clothing, etc., extracts features for discrimination, and uses machine learning techniques to make the final decision. A novel system for the automatic processing of 3D facial data has been implemented using a multi-stage architecture: in a pre-processing and registration stage, the data was standardized, spikes were removed, holes were filled and the face area was extracted. The nose region, which is relatively more rigid than other facial regions in an anatomical sense, was then automatically located and analysed by computing the precise location of the symmetry plane, after which useful facial features and a set of effective 3D curves were extracted. Finally, the recognition and matching stage was implemented using cascade-correlation neural networks and support vector machines for classification, and nearest neighbour algorithms for matching. It is worth noting that the FRGC data set is the most challenging data set available supporting research on 3D face recognition, and machine learning techniques are widely recognised as appropriate and efficient classification methods.
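As a small illustration of the matching stage only, the sketch below performs nearest-neighbour matching of fixed-length facial feature vectors; the earlier stages (registration, nose localisation, symmetry plane, 3D curves) are assumed to have produced such vectors already, and the gallery data are synthetic:

import numpy as np

def match_face(probe, gallery, gallery_ids):
    """Return the gallery identity whose feature vector is closest to the probe."""
    dists = np.linalg.norm(gallery - probe, axis=1)   # Euclidean distances
    return gallery_ids[int(np.argmin(dists))]

rng = np.random.default_rng(1)
gallery = rng.random((50, 32))                        # 50 enrolled faces, 32-D features
ids = np.arange(50)
probe = gallery[7] + 0.01 * rng.standard_normal(32)   # noisy probe of subject 7
assert match_face(probe, gallery, ids) == 7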
APA, Harvard, Vancouver, ISO, and other styles
31

Mubareka, Sarah Betoul. "Identification d'indicateurs de risque des populations victimes de conflits par imagerie satellitaire études de cas : le nord de l'Irak." Thèse, Université de Sherbrooke, 2008. http://savoirs.usherbrooke.ca/handle/11143/2786.

Full text
Abstract:
Remote sensing and security, terms which are not usually associated, have found a common platform this decade with the creation of the GMOSS network (Global Monitoring for Security and Stability), whose mandate is to discover new applications of satellite-derived imagery to security issues. This study focuses on human security, concentrating on the characterisation of areas vulnerable to conflict. A time series of Landsat imagery from 1987 to 2001 and SRTM mission imagery are used for this purpose over a site in northern Iraq. Human security issues include exposure to any type of hazard, so the region of study is first characterised in order to understand which hazards are present and which were present in the past. The principal hazard for the region is armed conflict, and the relevant field data were analysed to determine the links between geographical indicators and vulnerable areas. This is done through historical research and the study of open-source information about disease outbreaks; the movements of refugees and the internally displaced; and humanitarian aid and security issues. These open sources offer information which is not always consistent, objective, or normalized and is therefore difficult to quantify. A method for the rapid mapping, graphing and subsequent analysis of the situation in a region where limited information is available is developed. This information is coupled with population numbers to create a "risk map": a disaggregated matrix of the areas most at risk during conflict situations. The results show that describing the risk factor of a population exposed to the hazard of conflict depends on three complex indicators: population density, remoteness and economic diversity. Each of these complex indicators is then derived from Landsat and SRTM imagery and a satellite-driven model is formulated. This model is applied to the study site for a temporal study. The outputs are three 90 m × 90 m resolution grids which describe, at pixel level, the risk level within the region for each of the dates studied, and the changes which occurred in northern Iraq as a result of the Anfal campaigns. Results show that satellite imagery, with a minimum of processing, can yield indicators for characterising risk in a region. Although by no means a replacement for field data, this technological source, in the absence of local knowledge, can provide users with a starting point for understanding which areas are most at risk within a region. If these data are coupled with open-source information such as political and cultural discrimination, economy and agricultural practices, a fairly accurate risk map can be generated in the absence of field data.
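The final combination of the three complex indicators into a gridded risk index can be caricatured in a few lines; the min-max normalisation, the equal weighting and the random stand-in rasters below are assumptions for illustration, not the satellite-driven model formulated in the thesis:

import numpy as np

def normalize(grid):
    """Scale a raster to [0, 1]."""
    return (grid - grid.min()) / (grid.max() - grid.min())

rng = np.random.default_rng(2)
pop_density = rng.random((90, 90))       # stand-ins for satellite-derived grids
remoteness = rng.random((90, 90))
econ_diversity = rng.random((90, 90))

# Higher density and remoteness raise risk; higher economic diversity lowers it
risk = normalize(
    normalize(pop_density) + normalize(remoteness) + (1 - normalize(econ_diversity))
)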
APA, Harvard, Vancouver, ISO, and other styles
32

Langner, Jens. "Development of a Parallel Computing Optimized Head Movement Correction Method in Positron Emission Tomography." Master's thesis, Hochschule für Technik und Wirtschaft Dresden, 2003. https://slub.qucosa.de/id/qucosa%3A542.

Full text
Abstract:
As a modern tomographic technique, Positron Emission Tomography (PET) enables non-invasive imaging of metabolic processes in living organisms. It allows the visualization of malfunctions which are characteristic of neurological, cardiological, and oncological diseases. Chemical tracers labeled with radioactive positron-emitting isotopes are injected into the patient, and the decay of the isotopes is then observed with the detectors of the tomograph. This information is used to compute the spatial distribution of the labeled tracers. Since the spatial resolution of PET devices increases steadily, the whole sensitive imaging process requires minimizing not only the disturbing effects specific to the PET measurement method, such as random or scattered coincidences, but also external effects like body movement of the patient. Methods to correct the influence of such patient movement have been developed in previous studies at the PET center Rossendorf. These methods are based on the spatial correction of each registered coincidence. However, the large amount of data and the complexity of the correction algorithms limited their application to selected studies. The aim of this thesis is to optimize the correction algorithms in a way that allows movement correction in routinely performed PET examinations. The object-oriented development in C++, with support of the platform-independent Qt framework, enables the employment of multiprocessor systems. In addition, a graphical user interface allows the application to be used by the medical technical assistants of the PET center. Furthermore, the application provides methods to acquire and administrate movement information directly from the motion tracking system via network communication. Due to the parallelization, the performance of the new implementation demonstrates a significant improvement. The parallel optimizations and the implementation of an intuitively usable graphical interface finally enable the PET center Rossendorf to use movement correction in routine patient investigations, thus providing patients with improved tomographic imaging.
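The core idea, spatially correcting each registered coincidence and distributing the work over several processors, can be sketched as follows; the (n, 3) event layout, the single global rigid transform and the worker count are simplifications, not the actual list-mode format or the Qt-based implementation:

import numpy as np
from multiprocessing import Pool

R = np.eye(3)                      # rotation from the motion-tracking system
t = np.array([1.0, 0.0, 0.5])      # translation in mm (illustrative)

def correct_chunk(events):
    """Rigidly transform an (n, 3) block of event coordinates."""
    return events @ R.T + t

if __name__ == "__main__":
    events = np.random.rand(1_000_000, 3)          # simulated coincidence positions
    chunks = np.array_split(events, 8)
    with Pool(processes=8) as pool:                # parallelise over event chunks
        corrected = np.vstack(pool.map(correct_chunk, chunks))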
APA, Harvard, Vancouver, ISO, and other styles
33

Werner, Peter, Michael Rullmann, Anke Bresch, Solveig Tiepolt, Donald Lobsien, Matthias Schröter, Osama Sabri, and Henryk Barthel. "Impact of attenuation correction on clinical [18F]FDG brain PET in combined PET/MRI." Universitätsbibliothek Leipzig, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-205215.

Full text
Abstract:
Background: In PET/MRI, linear photon attenuation coefficients for attenuation correction (AC) cannot be directly derived, and cortical bone is, so far, usually not considered. This results in an underestimation of the average PET signal in PET/MRI. Recently introduced MR-AC methods predicting bone information from anatomic MRI or proton density weighted zero-time imaging may solve this problem in the future. However, there is an ongoing debate whether the current error is acceptable for clinical use and/or research. Methods: We examined this issue for [18F]fluorodeoxyglucose (FDG) brain PET in 13 patients with clinical signs of dementia or movement disorders who subsequently underwent PET/CT and PET/MRI on the same day. Multiple MR-AC approaches, including a CT-derived AC, were applied. Results: The resulting PET data were compared to the CT-derived standard regarding the quantification error and its clinical impact. On a quantitative level, −11.9 to +2 % deviations from the CT-AC standard were found. These deviations, however, did not translate into a systematic diagnostic error, as the overall patterns of hypometabolism (which are decisive for clinical diagnostics) remained largely unchanged. Conclusions: Despite a quantitative error due to the omission of bone in MR-AC, the clinical quality of brain [18F]FDG PET is not relevantly affected. Thus, brain [18F]FDG PET can already be utilized for clinical routine purposes, even now with suboptimal MR-AC, even though the MR-AC warrants improvement.
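The reported quantitative comparison, regional deviation of MR-AC PET values from the CT-AC standard, reduces to a short computation; the images, the region mask and the simulated −5 % bias below are placeholders:

import numpy as np

rng = np.random.default_rng(3)
pet_ctac = rng.random((64, 64, 64)) + 1.0           # CT-AC reference image
pet_mrac = pet_ctac * 0.95                          # MR-AC image with ~ -5 % bias
region_mask = np.zeros_like(pet_ctac, dtype=bool)
region_mask[20:40, 20:40, 20:40] = True             # hypothetical brain region

mean_ct = pet_ctac[region_mask].mean()
mean_mr = pet_mrac[region_mask].mean()
deviation_pct = 100.0 * (mean_mr - mean_ct) / mean_ct
print(f"regional deviation: {deviation_pct:+.1f} %")  # approx. -5.0 %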
APA, Harvard, Vancouver, ISO, and other styles
34

Souza, César Salgado Vieira de. "Classify-normalize-classify : a novel data-driven framework for classifying forest pixels in remote sensing images." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2017. http://hdl.handle.net/10183/158390.

Full text
Abstract:
Monitoring natural environments and their changes over time requires the analysis of a large amount of image data, often collected by orbital remote sensing platforms. However, variations in the observed signals due to changing atmospheric conditions often result in a data distribution shift for different dates and locations, making it difficult to discriminate between the various classes in a dataset built from several images. This work introduces a novel supervised classification framework, called Classify-Normalize-Classify (CNC), to alleviate this data shift issue. The proposed scheme uses a two-classifier approach. The first classifier is trained on non-normalized top-of-atmosphere reflectance samples to discriminate between pixels belonging to a class of interest (COI) and pixels from other categories (e.g. forest vs. non-forest). At test time, the COI's multivariate median signal, estimated from the first classifier's segmentation, is subtracted from the image, thus anchoring the data distribution of different images to the same reference. Then, a second classifier, pre-trained to minimize the classification error on COI median-centered samples, is applied to the median-normalized test image to produce the final binary segmentation. The proposed methodology was tested for deforestation detection using bitemporal Landsat 8 OLI images over the Amazon rainforest. Experiments using top-of-atmosphere multispectral reflectance images showed that deforestation was mapped more accurately by the CNC framework than by a single classifier run on surface reflectance images provided by the United States Geological Survey (USGS). Accuracies from the proposed framework also compared favorably with the benchmark masks of the PRODES program.
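The CNC pipeline condenses to three steps, sketched below: classifier 1 segments the class of interest (COI), the per-band COI median is subtracted to anchor the distribution, and classifier 2 labels the normalized pixels. Random forests and the synthetic data are stand-ins, not the classifiers or imagery used in the work:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def cnc_predict(clf1, clf2, image):
    """image: (n_pixels, n_bands) top-of-atmosphere reflectance."""
    coi_mask = clf1.predict(image) == 1            # step 1: rough COI segmentation
    median = np.median(image[coi_mask], axis=0)    # per-band COI median
    normalized = image - median                    # step 2: anchor the distribution
    return clf2.predict(normalized)                # step 3: final binary labels

# Training sketch: clf1 on raw samples, clf2 on median-centred samples
rng = np.random.default_rng(4)
X = rng.random((2000, 6))
y = (X[:, 0] > 0.5).astype(int)                    # placeholder COI labels
clf1 = RandomForestClassifier().fit(X, y)
clf2 = RandomForestClassifier().fit(X - np.median(X[y == 1], axis=0), y)
labels = cnc_predict(clf1, clf2, rng.random((500, 6)))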
APA, Harvard, Vancouver, ISO, and other styles
35

Samuel, Nikhil J. "Identification of Uniform Class Regions using Perceptron Training." University of Cincinnati / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1439307102.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Schmidt, Martin, Mathias Baumert, Hagen Malberg, and Sebastian Zaunseder. "T Wave Amplitude Correction of QT Interval Variability for Improved Repolarization Lability Measurement." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-217300.

Full text
Abstract:
Objectives: The inverse relationship between QT interval variability (QTV) and T wave amplitude potentially confounds QT variability assessment. We quantified the influence of the T wave amplitude on QTV in a comprehensive dataset and devised a correction formula. Methods: Three ECG datasets of healthy subjects were analyzed to model the relationship between T wave amplitude and QTV. To derive a generally valid correction formula, linear regression analysis was used. The proposed correction formula was applied to patients enrolled in the Evaluation of Defibrillator in Non-Ischemic Cardiomyopathy Treatment Evaluation trial (DEFINITE) to assess the prognostic significance of QTV for all-cause mortality in patients with non-ischemic dilated cardiomyopathy. Results: A strong inverse relationship between T wave amplitude and QTV was demonstrated, both in healthy subjects (R2 = 0.68, p < 0.001) and DEFINITE patients (R2 = 0.20, p < 0.001). Applying the T wave amplitude correction to QTV achieved 2.5-times better group discrimination between patients enrolled in the DEFINITE study and healthy subjects. Kaplan-Meier estimator analysis showed that T wave amplitude corrected QTVi is inversely related to survival (p < 0.01) and a significant predictor of all-cause mortality. Conclusion: We have proposed a simple correction formula for improved QTV assessment. Using this correction, predictive value of QTV for all-cause mortality in patients with non-ischemic cardiomyopathy has been demonstrated.
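The correction strategy, removing a fitted linear dependence of (log-transformed) QTV on T wave amplitude, can be sketched as follows; the fabricated data and the resulting coefficients are purely illustrative, and the actual published formula should be taken from the paper:

import numpy as np

rng = np.random.default_rng(5)
t_amp = rng.uniform(0.1, 1.0, 200)                            # T wave amplitude (mV)
log_qtv = 1.0 - 1.5 * t_amp + 0.1 * rng.standard_normal(200)  # inverse relation

slope, intercept = np.polyfit(t_amp, log_qtv, 1)   # linear regression
qtv_corrected = log_qtv - slope * t_amp            # remove the amplitude dependence

print(f"fitted slope: {slope:.2f}")                # approx. -1.5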
APA, Harvard, Vancouver, ISO, and other styles
37

Rupasinghe, Prabha Amali. "Assessment of Shoreline Vegetation in the Western Basin of Lake Erie Using Airborne Hyperspectral Imagery." Bowling Green State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1467323545.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Legrand, Karim. "Correction and Optimization of 4D aircraft trajectories by sharing wind and temperature information." Thesis, Toulouse, INSA, 2019. http://www.theses.fr/2019ISAT0011/document.

Full text
Abstract:
This thesis is related to current changes in air traffic management systems. On the ground and in flight, trajectory calculation methods and available data differ. Wind and temperature are two ubiquitous parameters that aircraft are subjected to and that cause prediction bias. We propose a concept to limit this bias: our "Wind and Temperature Networking" concept improves trajectory prediction by using wind and temperature information from neighboring aircraft. We detail the effects of temperature on aircraft performance, allowing temperature to be taken into account. The concept is evaluated on 8000 flights. We also discuss the calculation of optimal trajectories in the presence of predicted winds, to replace the current North Atlantic Tracks and to provide optimized and robust groups of trajectories. The conclusion of this thesis presents other fields of application for wind sharing and addresses the need for the new telecommunication infrastructures and protocols required to put this new concept into service.
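One plausible way to exploit neighbouring aircraft reports is inverse-distance weighting of their wind measurements, as sketched below; the weighting scheme, units and numbers are illustrative assumptions rather than the algorithm prescribed by the concept:

import numpy as np

def blended_wind(own_pos, neighbor_pos, neighbor_wind, eps=1e-6):
    """neighbor_pos: (n, 3) positions in m; neighbor_wind: (n, 2) wind vectors."""
    d = np.linalg.norm(neighbor_pos - own_pos, axis=1)
    w = 1.0 / (d + eps)                        # closer aircraft weigh more
    return (w[:, None] * neighbor_wind).sum(axis=0) / w.sum()

own = np.array([0.0, 0.0, 10000.0])
positions = np.array([[5000., 0., 10000.], [0., 20000., 11000.]])
winds = np.array([[30., 5.], [40., -3.]])      # (u, v) components in kt
print(blended_wind(own, positions, winds))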
APA, Harvard, Vancouver, ISO, and other styles
39

Zenóbio, Ângelo Almeida. "Avaliação geológica-geotécnica de encostas naturais rochosas por meio de classificações geomecânicas: área urbana de Ouro Preto (MG) Escala 1:5.000." Universidade de São Paulo, 2000. http://www.teses.usp.br/teses/disponiveis/18/18132/tde-19102018-201443/.

Full text
Abstract:
The main objective of this work is to characterize rock masses in natural slopes through the survey and analysis of the principal discontinuities present in the urban area of Ouro Preto, M.G. The research uses geomechanical classifications as tools for engineering geological mapping, generating cartographic documents at the 1:5,000 scale over an area of approximately 2.88 km2 covering part of the Serra de Ouro Preto and the historical centre. The geomechanical classification systems used in the research, the RMR System (BIENIAWSKI, 1989), the Q System (BARTON et al., 1974) and the SMR System (ROMANA, 1985), express the behaviour of the rock masses and, together with the cartographic documents produced, served as the basis for the zoning charts. As an auxiliary tool, a correction index for the R.Q.D. ("Rock Quality Designation") parameter was proposed in order to adjust its values to the behaviour of the rock masses observed in the field, since the initial values were too high. The cartographic documents produced were: documentation maps I and II, a geological map, a map of scars of mass movements and related processes, a slope declivity chart, and zoning charts for each geomechanical classification system.
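For readers unfamiliar with the parameter, the sketch below computes the R.Q.D. of a core run and applies a multiplicative correction index; the 0.8 factor is purely hypothetical, whereas the dissertation derives its index from the observed field behaviour of the rock masses:

def rqd(core_pieces_cm, run_length_cm):
    """Rock Quality Designation: % of the core run in pieces >= 10 cm."""
    sound = sum(p for p in core_pieces_cm if p >= 10)
    return 100.0 * sound / run_length_cm

raw = rqd([25, 8, 14, 5, 30, 12], 100)   # -> 81.0 %
corrected = 0.8 * raw                    # hypothetical correction index
print(raw, corrected)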
APA, Harvard, Vancouver, ISO, and other styles
40

Richter, Christian. "Der Einfluss der Atembewegung auf die PET/CT-Schwächungskorrektur." Master's thesis, Technische Universität Dresden, 2007. https://tud.qucosa.de/id/qucosa%3A25045.

Full text
Abstract:
The combination of Positron Emission Tomography (PET) and Computed Tomography (CT) in one device allows the use of CT information for attenuation correction in PET. However, motion, for example induced by respiration, can cause inaccurate attenuation correction. The implementation of time-resolved imaging methods for both modalities (4D-PET/4D-CT) enables not only the resolution of motion but also the reduction of artifacts caused by attenuation correction. For this purpose, the single datasets of the 4D-PET, each related to an individual respiratory phase, are attenuation-corrected with the corresponding dataset of the 4D-CT. This phase-correlated attenuation correction of the 4D-PET with the 4D-CT was implemented at the PET/CT installed at the Universitätsklinikum Dresden. To this end, the acquisition of 4D-CT was implemented at the PET/CT and its synchronisation with the 4D-PET was verified. Furthermore, the new attenuation correction method was compared with other attenuation correction methods in phantom experiments; an existing respiratory phantom was modified to perform typical lung tumor motion in two dimensions with two possible respiration patterns. The phase-correlated attenuation correction leads to a quantitatively correct restoration of the activity volume, its total activity and its motion amplitude. Compared with the other correction methods, the phase-correlated attenuation correction shows the best results in all examined criteria. These findings suggest that the clinical application of phase-correlated attenuation correction will also lead to a significant improvement in all the points mentioned. This has to be verified by analyzing patient data.
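Schematically, the phase-correlated correction loops over respiratory gates and corrects each 4D-PET bin with the 4D-CT image of the same phase; in the sketch below, attenuation_correct and reconstruct are crude placeholders for the scanner-specific processing chain, and the data are random:

import numpy as np

def attenuation_correct(pet_sinogram, ct_mu_map):
    """Very rough stand-in: scale PET data by CT-derived attenuation factors."""
    return pet_sinogram * np.exp(ct_mu_map)

def reconstruct(corrected_sinogram):
    return corrected_sinogram            # placeholder for the real reconstruction

n_phases = 8
pet_gates = [np.random.rand(64, 64) for _ in range(n_phases)]         # 4D-PET bins
ct_gates = [np.random.rand(64, 64) * 0.01 for _ in range(n_phases)]   # matched 4D-CT

images = [reconstruct(attenuation_correct(p, c))
          for p, c in zip(pet_gates, ct_gates)]   # phase-by-phase correction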
APA, Harvard, Vancouver, ISO, and other styles
41

Großmann, Knut. "Thermo-Energetische Gestaltung von Werkzeugmaschinen: Modellierung und Simulation: 2. Kolloquium zum SFB/TR 96: 24./25.10.2012 in Chemnitz." Technische Universität Dresden, 2012. https://tud.qucosa.de/id/qucosa%3A28098.

Full text
Abstract:
The contribution "Voraussetzungen und Grenzen einer eigenschaftsmodellbasierten Korrektur" (St. Bäumler, C. Brecher, M. Wennemer; RWTH Aachen, Chair of Machine Tools) is not included in this version; please use the version available at the link given above (successor).
The 2nd colloquium of the Collaborative Research Centre Transregio 96 "Thermo-energetic design of machine tools" focused on first results on the modeling and simulation of components and assemblies of machine tools. At the three sites Aachen, Chemnitz and Dresden, different approaches are pursued for the control-integrated correction of thermally induced structural deformations in cutting machine tools. These approaches are expected to differ in effectiveness and in suitability for different application cases. Before they can be put into practice, questions concerning the description of heat sources and of heat transfer must be answered. Moreover, implementing the concepts in CNC controls requires efficient methods for modeling and simulating the thermally induced structural deformation. For the development and evaluation of the correction methods, as well as for calculating the necessary axis corrections, system simulation is required, among other things on a process-current digital image of the machine tool. To assess their practical relevance, the individual solutions are gradually being integrated into an economically oriented overall model.
APA, Harvard, Vancouver, ISO, and other styles
42

Wildt, Steffen. "Mehrwegeausbreitung bei GNSS-gestützter Positionsbestimmung." Doctoral thesis, Technische Universität Dresden, 2003. https://tud.qucosa.de/id/qucosa%3A23925.

Full text
Abstract:
GNSS measurements are dominated not only by system-inherent error influences but above all by the effects of multipath propagation and signal diffraction, particularly in the receiver environment. Various services, for example those of the state survey offices, therefore have a primary interest in keeping the impact of these effects as small as possible, or in determining them precisely in order to generate correction values. Multipath and diffraction effects can be determined particularly well within network structures; if reference coordinates are available for all observation stations, this can even be done in real time. In addition to a detailed description of the individual influencing factors, this work presents possibilities for detecting the named effects and measures for reducing their impact on the measurement result. The core of the investigation is a two-stage model for the real-time reduction of multipath effects within (reference) station networks by determining correction values for original and derived observations per epoch, station and satellite.
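As a toy version of the network-based correction idea, one could average, per epoch and satellite, the residuals observed at reference stations with known coordinates and subtract them at a rover; the plain average below is a deliberate simplification of the two-stage model described in the thesis:

import numpy as np

# residuals[station, satellite] for one epoch, in metres (synthetic)
rng = np.random.default_rng(6)
residuals = 0.02 * rng.standard_normal((5, 8))   # 5 reference stations, 8 satellites

corrections = residuals.mean(axis=0)             # one correction value per satellite
rover_obs = 0.02 * rng.standard_normal(8)        # rover residuals (synthetic)
corrected = rover_obs - corrections              # apply the network correction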
APA, Harvard, Vancouver, ISO, and other styles
43

Großmann, Knut. "Thermo-Energetische Gestaltung von Werkzeugmaschinen: Experimentelle Methodik: 3. Kolloquium zum SFB/TR 96: 29./30.10.2013 in Aachen." Prof. Dr.-Ing. habil. Knut Großmann, 2013. https://tud.qucosa.de/id/qucosa%3A28099.

Full text
Abstract:
The 3rd colloquium of the Collaborative Research Centre Transregio 96, held on 29 and 30 October 2013 at the Machine Tool Laboratory (Werkzeugmaschinenlabor) of RWTH Aachen, focused on the different approaches taken by the individual subprojects in carrying out experimental investigations to verify simulation results and to derive model parameters. Four thematic blocks were covered: • determination of thermally relevant process parameters • experimental methodology for the analysis of subsystems in machine tools • methodological framework conditions for the determination of thermally relevant parameters • methods for deformation and displacement measurement
APA, Harvard, Vancouver, ISO, and other styles
44

Henker, Stephan. "Entwurf und Modellierung von Multikanal-CMOS-Farbsensoren." Doctoral thesis, Dresden TUDpress, 2005. http://deposit.ddb.de/cgi-bin/dokserv?id=2740718&prov=M&dok_var=1&dok_ext=htm.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Kristóf, Dániel. "Application de la télédétection pour la cartographie et le suivi des écosystèmes forestiers : application à la forêt hongroise." Toulouse 3, 2005. http://www.theses.fr/2005TOU30233.

Full text
Abstract:
Monitoring subtle changes over long time periods using numerous satellite images is a challenging task. In this thesis, possibilities and limitations of the available data and methods are presented through three case studies. In the first one, the objective is to create a digital vegetation map by using a multispectral satellite image. In the second study, forestry applications of novel very high resolution satellite images are examined. Geometric correction and special data extraction methods are of interest. The third case study aims at the quantification of the effects of a water diversion on local forested ecosystems in the north-western part of Hungary. Numerous satellite images are used to carry out quantitative change analysis. The long study period, the large number of images and the objectives of the study require the application and testing of several methods, and the elaboration of new methods, especially for geometric and radiometric corrections and data fusion
APA, Harvard, Vancouver, ISO, and other styles
46

Neubert, Marco. "Bewertung, Verarbeitung und segmentbasierte Auswertung sehr hoch auflösender Satellitenbilddaten vor dem Hintergrund landschaftsplanerischer und landschaftsökologischer Anwendungen." Doctoral thesis, Technische Universität Dresden, 2005. https://tud.qucosa.de/id/qucosa%3A24684.

Full text
Abstract:
In recent years, remote sensing has been characterised by dramatic changes, reflected especially in the greatly increased geometric resolution of imaging sensors and, as a consequence, in developments in processing and analysis methods. Very high resolution (VHR) satellite imagery, defined by a resolution between 0.5 and 1 m, has existed since the launch of IKONOS at the end of 1999. At about the same time, extremely high resolution digital airborne sensors (0.1 to 0.5 m) were developed. The basis of investigation for this dissertation is IKONOS imagery with a resolution of one meter (panchromatic) and four meters (multispectral). Due to the characteristics of such high resolution data (e.g. level of detail, high spectral variability, amount of data), the use of previously available standard methods of image processing is limited. The results of the procedures tested within this work demonstrate that the development of methods and software has not kept pace with the technical innovations. Some procedures are only gradually becoming suitable for VHR data (e.g. atmospheric-topographic correction). Additionally, this work shows that VHR imagery can be analysed only inadequately using traditional pixel-based statistical classifiers. The application of image segmentation methods investigated here helps to overcome the drawbacks of pixel-wise procedures, as demonstrated by a comparison of pixel-based and segment-based classification. Within a segmentation, homogeneous image areas are merged into regions which form the basis for the subsequent classification. For this purpose, formal, textural and contextual properties are available in addition to spectral features. Furthermore, the applied software eCognition allows the classification features to be defined in a fuzzy-logic knowledge base (decision tree). An evaluation of different, currently available segmentation approaches illustrates that a high segmentation quality is achievable with the software used. The increasing demand for up-to-date geospatial base data offers an important field of application for VHR remote sensing data: a targeted classification of the imagery can provide working bases for the application fields considered here, landscape planning and landscape ecology. The given examples of landscape analyses using segment-based processing of IKONOS data show an achievable classification accuracy of 90 % and more. The landscape units delineated by image segmentation can also serve as a basis for the calculation of landscape metrics. National nature conservation goals as well as international agreements call for a continuous survey of the landscape inventory and the monitoring of its changes; remote sensing imagery can support the establishment of automated and operational methods in this field. The example of biotope and land use type mapping illustrates that land use units can be detected with high precision. Depending on the analysis method and the data characteristics, the quality of the results does not yet fully meet users' demands, especially concerning the achievable depth of classification. The quality of the results can be further enhanced by using additional thematic data (e.g. GIS data, object elevation models). In summary, this dissertation underlines the trend towards very high resolution digital earth observation. For a wide use of this kind of data, it is essential to further develop automated and operationally usable processing and analysis methods.
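A generic object-based classification loop, standing in for the eCognition workflow discussed above, can be sketched as follows; the segmentation algorithm, the per-segment features and the randomly generated training labels are all illustrative assumptions:

import numpy as np
from skimage.segmentation import felzenszwalb
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
img = rng.random((100, 100, 3))                    # stand-in multispectral image

segments = felzenszwalb(img, scale=50)             # merge homogeneous regions
labels = np.unique(segments)

# Per-segment features: mean and std of each band (spectral + texture proxy)
feats = np.array([
    np.r_[img[segments == s].mean(axis=0), img[segments == s].std(axis=0)]
    for s in labels
])

# Train on a (hypothetical) labelled subset, then classify every segment
train_idx = rng.choice(len(labels), size=len(labels) // 2, replace=False)
train_y = rng.integers(0, 3, size=train_idx.size)  # placeholder class labels
clf = RandomForestClassifier().fit(feats[train_idx], train_y)
class_map = clf.predict(feats)[segments]           # map segment classes back to pixels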
APA, Harvard, Vancouver, ISO, and other styles
47

Pfennig, Stefan, and Elke Franz. "Comparison of Different Secure Network Coding Paradigms Concerning Transmission Efficiency." Technische Universität Dresden, 2013. https://tud.qucosa.de/id/qucosa%3A28134.

Full text
Abstract:
Preventing the success of active attacks is of essential importance for network coding, since even the infiltration of a single corrupted data packet can jam large parts of the network. The existing approaches for network coding schemes preventing such pollution attacks can be divided into two categories: those that utilize cryptographic approaches and those that utilize redundancy similar to error-correction coding. Within this paper, we compare both paradigms concerning the efficiency of data transmission under various circumstances. In particular, we consider an attacker of a certain strength as well as the influence of the generation size. The results are helpful for selecting a suitable network coding approach that takes into account both security against pollution attacks and efficiency.
APA, Harvard, Vancouver, ISO, and other styles
48

Al, Madi Naser S. "Modeling Eye Movement for the Assessment of Programming Proficiency." Kent State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=kent1595429905152276.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Zavesky, Martin. "Wahrnehmungsrealistische Projektion anthropomorpher Formen." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-100625.

Full text
Abstract:
This thesis deals with fundamental effects in the projection of virtual three-dimensional scenes onto two-dimensional image surfaces. The motivation arises from the observation that in current computer graphics visualization systems the spatiality of the depiction is usually conveyed through a flat surface. The methods currently in use rely on the concept of the virtual camera, which, however, entails drawbacks with respect to a depiction adapted to human perception (perceptual conformity). To introduce the technical terms required for understanding the thesis, selected fundamentals from the fields of technical visualistics, visual language, computer graphics and psychology are presented. Of particular importance are the notions of the image (the resulting picture of a projection), perceptual conformity (an indicator of adaptation to human visual perception) and multiperspective (a form of depiction conducive to perceptual conformity). Subsequently, the vectors relevant in the further course of the work are defined. In the following chapter, the thesis describes two essential challenges for the perceptually conformal depiction of virtual objects: on the one hand, the proportion problem; on the other, the orientation problem, introduced as a newly identified research topic. Building on this, the basic concept for creating perceptually conformal images through the separate treatment of individual scene objects is described, together with the relevant prior scientific work. The discussion further includes a classification of existing methods and an excursus into related studies in the psychology of perception. As an existing method for the computer-graphical generation of perceptually conformal images, the Extended Perspective Correction (Erweiterte Perspektivische Korrektur, EPK) is then presented in detail as the starting point for an optimization. The questions raised regarding the orientation problem call for a deeper analysis. Drawing on artistic practice as well as on aspects of perceptual psychology, the human figure is argued to be a suitable reference model. A major focus of the work is the subsequent multi-stage study on orientation perception in mono- and multiperspective images. From the findings of this study, an optimization approach for the EPK is finally synthesized: the concept of the so-called eyepoint-related EPK is derived in detail, its effect is analyzed, an algorithmic implementation is developed, and it is compared with the existing EPK variants. To complete the discussion, two practical examples of the use of the EPK and of the benefit of the presented optimization follow.
APA, Harvard, Vancouver, ISO, and other styles
50

Soares, Sófacles Figueredo Carreiro. "Um novo método para transferência de modelos de calibração NIR e uma nova estratégia para classificação de sementes de algodão usando imagem hiperespectral NIR." Universidade Federal da Paraíba, 2016. http://tede.biblioteca.ufpb.br:8080/handle/tede/9237.

Full text
Abstract:
This work comprises two studies, presented in chapters 2 and 3. In the first, a new method to perform calibration transfer was designed. This method was developed to make use of isolated variables instead of the full spectrum or spectral windows. To accomplish this, a univariate procedure is initially used to correct the spectra recorded on the secondary instrument, given a set of transfer samples. A robust regression technique is then used to obtain a model with low sensitivity to the residuals of the univariate correction. The proposed method is employed in two case studies involving near-infrared spectrometric determination of specific mass, research octane number and naphthenes in gasoline, and of moisture and oil in corn. In both cases, better calibration transfer results were obtained in comparison with piecewise direct standardization (PDS). In the second study, a new strategy for cotton seed classification using near-infrared (NIR) hyperspectral images (HSI) was developed. The cotton seed samples were first recorded on an HSI-NIR imaging station and on a conventional NIR spectrometer. The images were then segmented and the mean spectrum of each seed was extracted. SPA-LDA and PLS-DA classification models based on the mean spectra were built for the two data sets. The results showed that classification with the HSI-NIR data set was achieved with greater accuracy than with the models built on the conventional NIR data set.
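The univariate transfer step of the first study can be illustrated compactly: for each selected wavelength, fit a line mapping the secondary instrument's response onto the primary one using the transfer samples, then correct new secondary spectra variable by variable. Ordinary least squares stands in below for the robust regression actually used, and the simulated instrument shift is an assumption:

import numpy as np

def fit_univariate_transfer(primary, secondary):
    """primary, secondary: (n_transfer_samples, n_variables) spectra."""
    coeffs = [np.polyfit(secondary[:, j], primary[:, j], 1)
              for j in range(primary.shape[1])]
    return np.array(coeffs)                      # (n_variables, 2): slope, offset

def apply_transfer(spectra, coeffs):
    return spectra * coeffs[:, 0] + coeffs[:, 1]

rng = np.random.default_rng(7)
primary = rng.random((10, 50))                   # transfer set, primary instrument
secondary = 0.9 * primary + 0.05                 # simulated instrument shift
coeffs = fit_univariate_transfer(primary, secondary)
corrected = apply_transfer(secondary, coeffs)    # approx. equal to primary
assert np.allclose(corrected, primary, atol=1e-6)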
APA, Harvard, Vancouver, ISO, and other styles