To view the other types of publications on this topic, follow the link: Forest automata.

Dissertations on the topic "Forest automata"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Choose a source type:

Explore the top 50 dissertations for research on the topic "Forest automata".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic entry for the chosen work will be formatted automatically according to the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online annotation, provided the relevant parameters are available in the metadata.

Browse dissertations from a wide variety of disciplines and assemble your bibliography correctly.

1

Killough, Brian D. „A semi-empirical cellular automata model for wildfire monitoring from a geosynchronous space platform“. W&M ScholarWorks, 2003. https://scholarworks.wm.edu/etd/1539623419.

The full content of the source
Annotation:
The environmental and human impacts of wildfires have grown considerably in recent years due to an increase in their frequency and coverage. Effective wildfire management and suppression requires real-time data to locate fire fronts, model their propagation and assess the impact of biomass burning. Existing empirical wildfire models are based on fuel properties and meteorological data with inadequate spatial or temporal sampling. A geosynchronous space platform with the proposed set of high resolution infrared detectors provides a unique capability to monitor fires at improved spatial and temporal resolutions. The proposed system is feasible with state-of-the-art hardware and software for high sensitivity fire detection at saturation levels exceeding active flame temperatures. Ground resolutions of 100 meters per pixel can be achieved with repeat cycles less than one minute. Atmospheric transmission in the presence of clouds and smoke is considered. Modeling results suggest fire detection is possible through thin clouds and smoke. A semi-empirical cellular automata model based on theoretical elliptical spread shapes is introduced to predict wildfire propagation using detected fire front location and spread rate. Model accuracy compares favorably with real fire events and correlates within 2% of theoretical ellipse shapes. This propagation modeling approach could replace existing operational systems based on complex partial differential equations. The baseline geosynchronous fire detection system supplemented with a discrete-based propagation model has the potential to save lives and property in the otherwise uncertain and complex field of fire management.
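The elliptical cellular-automata propagation described in this abstract can be illustrated with a minimal sketch (not the author's model; the grid size, ignition probabilities, and wind bias below are invented for illustration): burning cells ignite neighbouring fuel cells with a probability that is raised in the downwind direction, which elongates the burnt area along the wind axis.

```python
import random

def step_fire(grid, wind=(0, 1), p_base=0.35, p_wind=0.45, rng=random):
    """One update of a toy fire cellular automaton.
    Cell states: 0 = fuel, 1 = burning, 2 = burnt out.
    A fuel cell next to a burning cell ignites with probability p_base,
    raised by p_wind when it lies downwind of that cell, which elongates
    the burnt area along the wind vector (a crude elliptical front)."""
    rows, cols = len(grid), len(grid[0])
    nxt = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1:
                nxt[r][c] = 2  # a burning cell burns out after one step
                for dr in (-1, 0, 1):
                    for dc in (-1, 0, 1):
                        rr, cc = r + dr, c + dc
                        if (dr or dc) and 0 <= rr < rows and 0 <= cc < cols \
                                and grid[rr][cc] == 0:
                            p = p_base + (p_wind if (dr, dc) == wind else 0.0)
                            if rng.random() < p:
                                nxt[rr][cc] = 1
    return nxt

# Ignite the centre of a 21x21 fuel bed and run a few steps.
rng = random.Random(1)
grid = [[0] * 21 for _ in range(21)]
grid[10][10] = 1
for _ in range(8):
    grid = step_fire(grid, rng=rng)
burnt = sum(cell == 2 for row in grid for cell in row)
```

With the fixed seed the run is deterministic; raising `p_wind` stretches the burnt footprint further downwind.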
APA, Harvard, Vancouver, ISO and other citation styles
2

Moriarty, Kaleen S. "Automated image-to-image rectification for use in change detection analysis as applied to forest clearcut mapping". Online version of thesis, 1993. http://hdl.handle.net/1850/11738.

3

Hamadeh, Nizar. "Le développement de la loi de diffusion des incendies en modélisant le niveau de danger et son évolution dans le temps : comparaison avec des données expérimentales dans les forêts libanaises". Thesis, Angers, 2017. http://www.theses.fr/2017ANGE0060/document.

Annotation:
Wildland fires are among the most complex phenomena facing our societies. Lebanon, part of the Middle East, is dramatically losing its green forests, mainly due to severe fires. This dissertation studies the phenomenon of forest fires and proposes new models and methodologies to tackle the forest fire crisis, particularly in Lebanon and the Mediterranean. It is divided into two main parts: new approaches in forest fire prediction, and forest fire modeling. The first part is subdivided into three chapters. The first chapter presents an analytical study of the most widely used meteorological models for predicting forest fires. In the second chapter, we apply five data mining techniques: neural networks, decision trees, fuzzy logic, linear discriminant analysis, and support vector machines, aiming to find the most accurate technique for forecasting forest fires. In the third chapter, we use different correlation analysis techniques (regression, Pearson, Spearman, and Kendall tau) to evaluate the correlation between fire occurrence and meteorological data (temperature, dew point, soil temperature, humidity, precipitation, and wind speed). This allows us to find the most influential parameters affecting fire occurrence, which led us to develop a new Lebanese fire danger index (LI). The proposed index is then validated using meteorological data for the years 2015-2016. The second part is subdivided into three chapters. The first chapter reviews fire behavior characteristics and morphology, and focuses on the validity of mathematical and computational fire behavior models. The second chapter demonstrates the importance of cellular automata, explains their main types, and reviews applications in various domains. In the third chapter, we use cellular automata to develop a new behavior model for predicting the spread of fire, on an elliptical basis, in both homogeneous and heterogeneous landscapes. The proposed methodology incorporates the parameters of wind speed, fuel, and topography. The developed model is then used to simulate the wildfire that swept through the forest of Aandqet village, North Lebanon. The simulation results are compared with reported results from the real incident and with simulations run on the Karafyllidis model and the Gazmeh-modified Karafyllidis model. These comparisons demonstrate that the proposed model outperforms them. In this dissertation, the forest fire crisis has been studied and new models have been developed for both phases, pre-fire and post-fire. These models can be used as efficient preventive tools in forest fire management.
4

Clark, James Joseph. „Multi-resolution stereo vision with application to the automated measurement of logs“. Thesis, University of British Columbia, 1985. http://hdl.handle.net/2429/25582.

Annotation:
A serial multi-resolution stereo matching algorithm is presented that is based on the Marr-Poggio matcher (Marr and Poggio, 1979). It is shown that the Marr-Poggio feature disambiguation and in-range/out-of-range mechanisms are unreliable for non-constant disparity functions. It is proposed that a disparity function estimate, reconstructed from the disparity samples at the lower resolution levels, be used to disambiguate possible matches at the higher resolutions. Also presented is a disparity scanning algorithm with a similar control structure, based on an algorithm recently proposed by Grimson (1985). The proposed algorithms will function reliably only if the disparity measurements are accurate and if the reconstruction process is accurate. The various sources of error in the matching are analyzed in detail. Witkin's scale space (Witkin, 1983) is used as an analytic tool for describing a hitherto unreported form of disparity error, caused by spatial filtering of the images with non-constant disparity functions. The reconstruction process is analyzed in detail. Current methods for performing the reconstruction are reviewed. A new method for reconstructing functions from arbitrarily distributed samples, based on applying coordinate transformations to the sampled function, is presented. The error due to the reconstruction process is analyzed, and a general formula for the error as a function of the function spectra, sample distribution, and reconstruction filter impulse response is derived. Experimental studies show how the matching algorithms perform with surfaces of varying bandwidths and with additive image noise. It is proposed that matching of scale space feature maps can eliminate many of the problems of Marr-Poggio type matchers. A method for matching scale space maps that operates in the domain of linear disparity functions is presented. This algorithm is used to experimentally verify the effect of spatial filtering on the disparity measurements for non-constant disparity functions. It is shown that measurements can be made on the binocular scale space maps that give an independent estimate of the disparity gradient; this leads to the concept of binocular diffrequency. It is shown that the diffrequency measurements are not affected by the spatial filtering effect for linear disparities. Experiments show that the disparity gradient can be obtained by diffrequency measurement. An industrial application for stereo vision is described: the automated measurement of logs, or log scaling. A moment-based method for estimating log volume from the segmented two-dimensional disparity map of the log scene is described. Experiments indicate that log volumes can be estimated to within 10%.
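The coarse-to-fine control structure described above, in which a disparity estimate from a lower resolution level constrains the search at the higher resolution, can be sketched in one dimension (the signals and the SSD matching cost are illustrative, not the thesis's matcher):

```python
def ssd(left, right, shift):
    """Sum-of-squared-differences cost of matching `right` shifted by `shift`."""
    return sum((l - r) ** 2 for l, r in zip(left, right[shift:]))

def coarse_to_fine_shift(left, right, max_shift):
    """Estimate the shift at half resolution over the full search range, then
    refine at full resolution only within +/-1 of the upsampled coarse
    estimate -- the multi-resolution disambiguation idea described above."""
    coarse = 2 * min(range(max_shift // 2 + 1),
                     key=lambda s: ssd(left[::2], right[::2], s))
    lo, hi = max(0, coarse - 1), min(max_shift, coarse + 1)
    return min(range(lo, hi + 1), key=lambda s: ssd(left, right, s))

# A signal and a copy displaced by 4 samples (true disparity = 4).
signal = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9, 3]
left = signal
right = [0, 0, 0, 0] + signal
```

Here `coarse_to_fine_shift(left, right, 8)` recovers the displacement while evaluating only three candidate shifts at full resolution instead of nine.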
Faculty of Applied Science; Department of Electrical and Computer Engineering; Graduate.
5

Hamraz, Hamid. „AUTOMATED TREE-LEVEL FOREST QUANTIFICATION USING AIRBORNE LIDAR“. UKnowledge, 2018. https://uknowledge.uky.edu/cs_etds/69.

Annotation:
Traditional forest management relies on a small field sample and interpretation of aerial photography, which not only are costly to execute but also yield inaccurate estimates of the entire forest in question. Airborne light detection and ranging (LiDAR) is a remote sensing technology that records point clouds representing the 3D structure of a forest canopy and the terrain underneath. We present a method for segmenting individual trees from LiDAR point clouds without making prior assumptions about tree crown shapes and sizes. We then present a method that vertically stratifies the point cloud into an overstory and multiple understory tree canopy layers. Using the stratification method, we modeled the occlusion of higher canopy layers with respect to point density. We also present a distributed computing approach that enables processing the massive data of an arbitrarily large forest. Lastly, we investigated using deep learning for coniferous/deciduous classification of point cloud segments representing individual tree crowns. We applied the developed methods to the University of Kentucky Robinson Forest, a natural, predominantly deciduous, closed-canopy forest. 90% of overstory and 47% of understory trees were detected, with false positive rates of 14% and 2% respectively. Vertical stratification improved the detection rate of understory trees to 67% at the cost of increasing their false positive rate to 12%. According to our occlusion model, a point density of about 170 pt/m² is needed to segment understory trees located in the third layer as accurately as overstory trees. Using our distributed processing method, we segmented about two million trees within a 7400-ha forest in 2.5 hours using 192 processing cores, a speedup of ~170. Our deep learning experiments showed high classification accuracies (~82% coniferous and ~90% deciduous) without the need to manually assemble the features. In conclusion, the methods developed are steps toward the remote, accurate quantification of large natural forests at the individual tree level.
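The vertical stratification step can be sketched as follows (a deliberately simplified stand-in for the dissertation's method; the heights, bin size, and gap rule are invented): histogram the normalized return heights and cut the canopy at the sparsest interior bin.

```python
def stratify(heights, bin_size=2.0):
    """Toy vertical stratification of normalized LiDAR return heights:
    histogram the heights and cut the canopy at the sparsest interior bin,
    splitting the returns into an understory and an overstory layer."""
    top = max(heights)
    nbins = int(top // bin_size) + 1
    counts = [0] * nbins
    for h in heights:
        counts[min(int(h // bin_size), nbins - 1)] += 1
    gap = min(range(1, nbins - 1), key=counts.__getitem__)  # emptiest interior bin
    cut = (gap + 0.5) * bin_size
    under = [h for h in heights if h <= cut]
    over = [h for h in heights if h > cut]
    return under, over, cut

# Returns clustered near 3 m (understory) and near 20 m (overstory).
heights = [2.5, 3.0, 3.5, 2.8, 19.0, 20.5, 22.0, 23.5, 18.2, 21.7]
under, over, cut = stratify(heights)
```

The cut lands in the empty mid-canopy band, so the four low returns form the understory layer and the six high returns the overstory layer.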
6

Blanco, Carolina Casagrande. "Modelo de simulação da dinâmica de vegetação em paisagens de coexistência campo-floresta no sul do Brasil". Biblioteca Digital de Teses e Dissertações da UFRGS, 2011. http://hdl.handle.net/10183/49276.

Annotation:
A longstanding problem in ecology is how to explain the coexistence, over thousands of years, of forests and natural grasslands under the same climatic regime, which favors the former, as in the forest-grassland mosaics of South Brazil. Since the middle of the 20th century, a worldwide encroachment of woody vegetation into open areas has been threatening this relatively stable coexistence. Modelling the ecological processes that arbitrate the maintenance of both vegetation formations at the landscape scale allows a better understanding of the mechanisms behind this coexistence, as well as predictions of future states under projections of drastic climate change over the coming decades. To this end, we developed a two-dimensional spatially explicit model (2D-aDGVM) that builds on an adaptive global vegetation model (aDGVM) and adds topographic heterogeneity, fire spread, and seed dispersal. The model aims to fulfil the need for a more realistic representation of biophysical, physiological, and demographic processes using an individual-based approach, adapting these processes to environmental variations and disturbance regimes, while also including important spatial ecological processes that have received less attention from such models at the landscape scale. We evaluated the effect of topographic variation in incoming solar radiation on the positive and negative feedbacks that arise from those individual-based processes and that in turn define the limiting thresholds upon which woody and grassy forms coexist. Additionally, we analyzed the effects of increasing temperature, rainfall, and atmospheric CO2 levels on the performance of the distinct physiologies (C3 trees and C4 grasses), as well as the sensitivity of forest-grassland mosaics to changes in climate from the preindustrial period to the coming decades. Results showed that a relatively stable coexistence of forests and grasslands in the same landscape was maintained by more frequent fires under the present climatic conditions. This was due to the strong positive feedback of the huge accumulation of flammable grass biomass on fire intensity, promoted by the high productivity of the present mesic conditions. On the other hand, spatio-temporal density-dependent processes linked to fire and enhanced by slope at the patch scale, as well as the initial spatial arrangement of vegetation patches, affected the rate of forest expansion at the landscape scale. The persistence of coexisting vegetation formations with an inherent asymmetry of competitive interactions was possible when the higher connectivity of the fire-prone patches (grassland) negatively affected the performance of the entire fire-sensitive system (forest), overcoming its local density-dependent advantage, or by maintaining the forest at low connectivity, which is expected to reduce the rate of coalescence of forest patches in a scenario of predominantly short-distance dispersal. Despite the increments in biomass production, stem growth, and fecundity observed in both grassland and forest, climate change increased the rates of forest expansion over grasslands even in the presence of fire, mainly over the next 90 years. This was attributed to the strong photosynthetic advantage of C3 trees over C4 grasses in the presence of fire under higher atmospheric CO2 levels. Finally, in the face of this general tendency of forest expansion, the ancient grasslands have persisted as alternative ecosystem states in forest-grassland mosaics. Exploring this dynamic coexistence under the concept of alternative stable states has proven to be an appropriate approach, and this perspective may deepen the understanding of the mechanisms behind the long-term coexistence.
7

Carlsson, Erik. „Modeling Hydrostatic Transmission in Forest Vehicle“. Thesis, Linköping University, Department of Electrical Engineering, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-6864.

Annotation:
Hydrostatic transmission is used in many applications where high torque at low speed is demanded; here the focus is a forest vehicle. Komatsu Forest would like a model of the pressure in the hose between the hydraulic pump and the hydraulic motor. Pressure peaks can arise when the vehicle changes speed or hits a bump in the road, but with a good model a control strategy can be developed to reduce these peaks.

For simulation purposes a model has been developed in Matlab-Simulink. The aim has been to get the simulated values to agree as well as possible with the measured values of the pressure, as well as with the rotational speeds of the pump and the motor.

The greatest challenge stems from the fact that the pressure is determined by a sum of two flows; if one of these simulated flows is too large, the pressure will tend to plus or minus infinity. It is therefore necessary to develop models for the rotations of the pump and the motor that stabilize the simulated pressure.

Different kinds of models and methods have been tested to arrive at the present model. Physical modeling is combined with a black-box model, which is used to estimate the torque from the diesel engine. The probable torque from the ground has been calculated. With this setup the simulated and measured values for the pressure agree well, but the fit for the rotations is not as good.
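The flow-balance behaviour described above is commonly modelled by integrating the net flow over the oil's stiffness, dp/dt = (beta/V)(q_pump - q_motor - q_leak); a minimal forward-Euler sketch (all constants are invented, not Komatsu's values) shows why the pressure diverges unless the motor flow tracks the pump flow:

```python
BETA = 1.5e9      # oil bulk modulus [Pa] (assumed)
VOLUME = 2.0e-3   # hose volume [m^3] (assumed)
D_PUMP = 8.0e-5   # pump displacement [m^3/rad] (assumed)
D_MOTOR = 1.0e-4  # motor displacement [m^3/rad] (assumed)
K_LEAK = 1.0e-11  # laminar leakage coefficient [m^3/(s*Pa)] (assumed)

def simulate_pressure(w_pump, w_motor, p0=0.0, dt=1e-4, steps=1000):
    """Forward-Euler integration of dp/dt = (beta/V)*(q_pump - q_motor - q_leak).
    The hose pressure integrates the net flow, so any persistent mismatch
    between pump and motor flow drives the pressure away from zero, limited
    here only by the pressure-dependent leakage term."""
    p = p0
    for _ in range(steps):
        q_pump = D_PUMP * w_pump      # flow delivered by the pump [m^3/s]
        q_motor = D_MOTOR * w_motor   # flow consumed by the motor [m^3/s]
        q_leak = K_LEAK * p           # leakage flow [m^3/s]
        p += dt * (BETA / VOLUME) * (q_pump - q_motor - q_leak)
    return p
```

With these constants, matched speeds (100 and 80 rad/s) hold the pressure near zero over the simulated 0.1 s, while a roughly 1% flow mismatch already drives it into the MPa range toward the leakage equilibrium.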

8

Kauffman, Jobriath Scott. „Spatiotemporal Informatics for Sustainable Forest Production Utilizing Forest Inventory and Remotely Sensed Data“. Diss., Virginia Tech, 2017. http://hdl.handle.net/10919/74974.

Annotation:
The interrelationship between trees and humans is primordial. As pressures on natural resources grow and become more complex, this innate connection drives an increased need for improved data and analytical techniques for assessing the status and trends of forests, trees, their products, and their services. Techniques for using readily available data such as the Forest Inventory and Analysis (FIA) database and output from forest disturbance detection algorithms derived from Landsat data, such as Vegetation Change Tracker (VCT), for estimating forest attributes across time from the state and inventory unit level down to the stand and pixel level are presented. Progressively more comprehensive harvest and parcel boundary records are incorporated appropriately. Quantification of attributes, including non-timber forest products and fine-scale age estimates, across the landscape both historically and into the future is emphasized. Spatial information on the distribution of forest resources by age class provides knowledge of timber volume through time and across the landscape to support forest management for sustained production. In addition to monitoring forest resources with regard to their value as products for human consumption, their measurement facilitates analysis of the relationship of their spatial and temporal abundance to other resources such as water and wildlife.
Ph.D.
9

Candy, Katherine. "Mapping fire affected areas in northern Western Australia - towards an automatic approach". Masters by Research thesis, Murdoch University, 2004. http://researchrepository.murdoch.edu.au/500/.

Annotation:
Wildfires across northern Australia are a growing problem, with more than 2.5 million hectares being burnt each year. Accordingly, remote sensing has been used as a tool to routinely monitor and map fire histories. In northern Western Australia, the Department of Land Information Satellite Remote Sensing Services (DLI SRSS) has been responsible for providing and interpreting NOAA-AVHRR (National Oceanic and Atmospheric Administration Advanced Very High Resolution Radiometer) data. SRSS staff utilise this data to automatically map hotspots on a daily basis, and manually map fire affected areas (FAA) every nine days. This information is then passed on to land managers to enhance their ability to manage the effects of fire and assess its impact over time. The aim of this study was to develop an algorithm for the near real-time automatic mapping of FAA in the Kimberley and Pilbara as an alternative to the currently used semi-manual approach. Daily measures of temperature, surface reflectance, and vegetation indices from twenty-nine NOAA-16 (2001) passes were investigated. It was first necessary to apply atmospheric and BRDF corrections to the raw reflectance data to account for the variation caused by changing viewing and illumination geometry over a cycle. Findings from the four case studies indicate that case studies 1 and 2 exhibited a typical fire response (the visible and near-infrared channels and vegetation indices decreased), whereas 3 and 4 displayed an atypical response (the visible channel increased while the near-infrared channel and vegetation indices decreased). Alternative vegetation indices such as GEMI, GEMI3, and VI3 outperformed NDVI in some cases. Likewise, atmospherically and BRDF-corrected NDVI provided better performance in separating burnt and unburnt classes. The difficulties in quantifying FAA due to temporal and spatial variation result from numerous factors, including vegetation type, fire intensity, the rate of ash and charcoal dispersal due to wind and rain, background soil influence, and the rate of revegetation. In this study two different spectral responses were recorded, indicating the need to set at least two sets of thresholds in an automated or semi-automated classification algorithm. It also highlighted the necessity of atmospheric and BRDF corrections. It is therefore recommended that future research apply atmospheric and BRDF corrections at the pre-processing stage prior to analysis when utilising a temporal series of NOAA-AVHRR data. Secondly, it is necessary to investigate additional FAA within the four biogeographic regions so that thresholds can be set for developing an algorithm. This algorithm must take into account the variation in a fire's spectral response, which may result from fire intensity, vegetation type, background soil influence, or climatic factors.
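The two spectral responses reported above suggest at least two threshold rules in an automated classifier; a per-pixel sketch follows (the thresholds and reflectance values are illustrative, not those derived in the thesis):

```python
def ndvi(red, nir):
    """Normalized Difference Vegetation Index from red and NIR reflectance."""
    return (nir - red) / (nir + red)

def flag_burnt(pre, post, ndvi_drop=0.15, red_rise=0.02):
    """Flag a pixel as fire-affected under two rules mirroring the two
    responses above: (1) typical -- NDVI drops sharply while both the
    visible and NIR channels darken; (2) atypical -- NDVI drops while the
    visible channel brightens. `pre` and `post` are (red, nir) pairs."""
    pre_red, pre_nir = pre
    post_red, post_nir = post
    d = ndvi(pre_red, pre_nir) - ndvi(post_red, post_nir)
    typical = d > ndvi_drop and post_nir < pre_nir and post_red < pre_red
    atypical = d > ndvi_drop and post_red > pre_red + red_rise
    return typical or atypical
```

For example, a pre-fire pixel with (red, NIR) = (0.08, 0.35) that darkens to (0.06, 0.12) is caught by the typical rule, while one that brightens in the visible to (0.12, 0.18) is caught by the atypical rule; a stable pixel is left unflagged.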
10

Hedborg, Mårten, and Patrik Grylin. "Active Noise Control of a Forest Machine Cabin". Thesis, Linköping University, Department of Electrical Engineering, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-9065.

Annotation:
Today, a high noise level is considered a problem in many working environments, mainly because it contributes to stress and fatigue. Traditional methods using passive noise control are only practicable for high frequencies. As a complement to passive noise control, active noise control (ANC) can be used to reduce low-frequency noise. The main idea of ANC is to use destructive interference of waves to cancel disturbing noises.

The purpose of this thesis is to design and implement an ANC system in the driver's cabin of a Valmet 890 forest machine. The engine boom is one of the most disturbing noises and is therefore the main objective for the ANC system to suppress.

The ANC system is implemented on a Texas Instruments DSP development starter kit. Different FxLMS algorithms are evaluated in feedback and feedforward configurations.

The results indicate that an ANC system significantly reduces the sound pressure level (SPL) in the cabin. The best performance of the evaluated systems is achieved with the feedforward FxLMS system. For a commonly used engine speed of 1500 rpm, the SPL is reduced by 17 dB. The results show sufficiently fast convergence and global suppression of low-frequency noise.
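A single-channel feedforward FxLMS loop of the kind evaluated in this thesis can be sketched as follows (the secondary-path filter, tone frequency, and step size are invented; real systems identify the secondary path beforehand):

```python
import math

def fxlms_demo(n=8000, taps=16, mu=0.002, freq=0.05):
    """Toy single-channel feedforward FxLMS loop cancelling a tonal
    disturbance. The secondary path s (loudspeaker to error microphone)
    is a short FIR filter assumed to be known exactly. Returns the error
    signal; its power should fall as the control filter adapts."""
    s = [0.0, 0.8, 0.2]       # secondary-path FIR model (assumed)
    w = [0.0] * taps          # adaptive control filter
    xbuf = [0.0] * taps       # reference history feeding w
    fbuf = [0.0] * taps       # filtered-reference history for the update
    ybuf = [0.0] * len(s)     # control-output history feeding s
    rbuf = [0.0] * len(s)     # reference history feeding s (the "filtered x")
    err = []
    for i in range(n):
        x = math.sin(2 * math.pi * freq * i)              # reference: engine tone
        d = 1.5 * math.sin(2 * math.pi * freq * (i - 3))  # noise at the microphone
        xbuf = [x] + xbuf[:-1]
        y = sum(wk * xk for wk, xk in zip(w, xbuf))       # anti-noise sample
        ybuf = [y] + ybuf[:-1]
        e = d + sum(sk * yk for sk, yk in zip(s, ybuf))   # residual at the mic
        rbuf = [x] + rbuf[:-1]
        xf = sum(sk * rk for sk, rk in zip(s, rbuf))      # x filtered through s
        fbuf = [xf] + fbuf[:-1]
        w = [wk - mu * e * fk for wk, fk in zip(w, fbuf)] # FxLMS weight update
        err.append(e)
    return err

err = fxlms_demo()
early = sum(e * e for e in err[:200]) / 200    # mean squared error, before adapting
late = sum(e * e for e in err[-200:]) / 200    # mean squared error, after adapting
```

Filtering the reference through the secondary-path model before the weight update is what distinguishes FxLMS from plain LMS; without it, the phase lag of the loudspeaker-to-microphone path can make the adaptation unstable.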

11

Gebre, Tamrat Gebremedhin. „Automatic recognition of tree trunks in images : Robotics in forest industry“. Thesis, Mittuniversitetet, Avdelningen för elektronikkonstruktion, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-21515.

12

Almer, Oscar Erik Gabriel. „Automated application-specific optimisation of interconnects in multi-core systems“. Thesis, University of Edinburgh, 2012. http://hdl.handle.net/1842/7622.

Annotation:
In embedded computer systems there are often tasks, implemented as stand-alone devices, that are both application-specific and compute-intensive. A recurring problem in this area is to design these application-specific embedded systems as close to the power and efficiency envelope as possible. Work has been done on optimizing single-core systems and memory organisation, but current methods for achieving system design goals are proving limited as system capabilities and system size increase in the multi- and many-core era. To address this problem, this thesis investigates machine learning approaches to managing the design space presented by the interconnect design of embedded multi-core systems. The design space is large due to the system scale and level of interconnectivity, and it also features interdependent parameters, further complicating analysis. The results presented in this thesis demonstrate that machine learning approaches, particularly wkNN and random forest, work well in handling the complexity of the design space. The benefits of this approach are in automation, saving time and effort in the system design phase as well as energy and execution time in the finished system.
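A toy stand-in for the random-forest predictor, bagged depth-1 regression trees over hypothetical interconnect parameters (core count, link width), illustrates how an ensemble learned from sampled design points can rank unseen configurations; the data and latency formula are invented:

```python
import random

def fit_stump(X, y):
    """Fit a depth-1 regression tree: the single (feature, threshold) split
    minimizing squared error, with a mean prediction on each side."""
    best = None
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            left = [yi for row, yi in zip(X, y) if row[f] <= t]
            right = [yi for row, yi in zip(X, y) if row[f] > t]
            if not left or not right:
                continue
            ml, mr = sum(left) / len(left), sum(right) / len(right)
            err = (sum((yi - ml) ** 2 for yi in left)
                   + sum((yi - mr) ** 2 for yi in right))
            if best is None or err < best[0]:
                best = (err, f, t, ml, mr)
    return best[1:]

def bagged_stumps(X, y, n_trees=30, seed=0):
    """Bootstrap-aggregated stumps: each tree sees a resampled design set."""
    rng = random.Random(seed)
    forest = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in X]
        forest.append(fit_stump([X[i] for i in idx], [y[i] for i in idx]))
    return forest

def predict(forest, row):
    """Average the per-tree predictions for one design point."""
    return sum(ml if row[f] <= t else mr for f, t, ml, mr in forest) / len(forest)

# Hypothetical design points: (core count, link width) -> latency score.
X = [[2, 8], [2, 16], [4, 8], [4, 16], [8, 8], [8, 16], [16, 8], [16, 16]]
y = [10 * cores + 128 / width for cores, width in X]
forest = bagged_stumps(X, y)
```

The ensemble consistently ranks a small, wide-linked configuration as faster than a large, narrow-linked one, even though no single stump captures both parameters at once.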
APA, Harvard, Vancouver, ISO und andere Zitierweisen
13

Stückelberger, Jürg Andreas. „A weighted-graph optimization approach for automatic location of forest road networks /“. Zürich : ETH, 2007. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=17366.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
14

Musy, Rebecca Forest. „Refinement of Automated Forest Area Estimation via Iterative Guided Spectral Class Rejection“. Thesis, Virginia Tech, 2003. http://hdl.handle.net/10919/33053.

Der volle Inhalt der Quelle
Annotation:
The goal of this project was to develop an operational Landsat TM image classification protocol for FIA forest area estimation. A hybrid classifier known as Iterative Guided Spectral Class Rejection (IGSCR) was automated using the ERDAS C Toolkit and ERDAS Macro Language. The resulting program was tested on four Landsat ETM+ images using training data collected via region-growing at 200 random points within each image. The classified images were spatially post-processed using variations on a 3x3 majority filter and a clump and eliminate technique. The accuracy of the images was assessed using the center land use of all plots, and subsets containing plots with 50, 75 and 100% homogeneity. The overall classification accuracies ranged from 81.9-95.4%. The forest area estimates derived from all image, filter and accuracy set combinations met the USDA Forest Service precision requirement of less than 3% per million acres of timberland. There were no consistently significant filtering effects at the 95% level; however, the 3x3 majority filter significantly improved the accuracy of the most fragmented image and did not decrease the accuracy of the other images. Overall accuracy increased with homogeneity of the plots used in the validation set and decreased with fragmentation (estimated by % edge; R² = 0.932). We conclude that the use of random points to initiate training data collection via region-growing may be an acceptable and repeatable addition to the IGSCR protocol, if the training data are representative of the spectral characteristics of the image. We recommend 3x3 majority filtering for all images, and, if it would not bias the sample, the selection of validation data using a plot homogeneity requirement rather than plot center land use only. These protocol refinements, along with the automation of IGSCR, make IGSCR suitable for use by the USDA Forest Service in the operational classification of Landsat imagery for forest area estimation.
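The 3x3 majority filtering used above for spatial post-processing can be sketched as follows. This is a generic illustration, not the ERDAS implementation used in the study; the tie-breaking rule (keep the centre class on a tie) is an assumption.

```python
from collections import Counter

def majority_filter_3x3(grid):
    """Apply a 3x3 majority filter to a 2-D class map (list of lists).
    Border cells keep their original class; each interior cell takes the
    most common class in its 3x3 neighbourhood, keeping its own class
    when that class ties for the maximum count."""
    rows, cols = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            window = [grid[r + dr][c + dc]
                      for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
            counts = Counter(window)
            best, n = counts.most_common(1)[0]
            out[r][c] = grid[r][c] if counts[grid[r][c]] == n else best
    return out
```

On a forest/non-forest map, an isolated misclassified pixel surrounded by forest is flipped to forest, which is exactly the salt-and-pepper cleanup the abstract credits the filter with.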
Master of Science
APA, Harvard, Vancouver, ISO und andere Zitierweisen
15

Tang, Ying. „Real-time automatic face tracking using adaptive random forests“. Thesis, McGill University, 2010. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=95172.

Der volle Inhalt der Quelle
Annotation:
Tracking is treated as a pixel-based binary classification problem in this thesis. An ensemble strong classifier, obtained as a weighted combination of several random forests (weak classifiers), is trained on pixel feature vectors. The strong classifier is then used to classify pixels as belonging to the face or the background in the next frame. The classification margins are used to create a confidence map, whose peak indicates the new location of the face. The peak is located by Camshift, which also adjusts the size of the tracked face. The random forests in the ensemble are updated using AdaBoost by training new random forests to replace certain older ones, adapting to the changes between two frames. Tracking accuracy is monitored by a variable called the classification score. If the score detects a tracking anomaly, the system stops tracking and restarts by re-initializing with a Viola-Jones face detector. The tracker was tested on several sequences and proved to provide robust performance in different scenarios and illumination conditions. It can deal with complex changes of the face, short periods of occlusion, and the loss of tracking.
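The weighted combination of weak classifiers and the margin-as-confidence idea described above can be sketched generically. This is the standard AdaBoost form, not the thesis's actual code; all names are illustrative.

```python
import math

def weighted_vote(weak_outputs, alphas):
    """Combine weak-classifier outputs in {-1, +1} (face vs. background)
    with AdaBoost-style weights; the sign is the strong decision and the
    magnitude serves as the pixel's confidence (classification margin)."""
    score = sum(a * h for a, h in zip(alphas, weak_outputs))
    return (1 if score >= 0 else -1), abs(score)

def adaboost_alpha(error):
    """AdaBoost weight of a weak classifier given its weighted error rate:
    lower-error classifiers get a larger say in the vote."""
    return 0.5 * math.log((1 - error) / error)
```

The per-pixel margins from `weighted_vote` are what populate the confidence map whose peak Camshift then locates.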
APA, Harvard, Vancouver, ISO und andere Zitierweisen
16

Cecil, Carl Patrick. „NPSNET-MES : semi-automated forces integration“. Thesis, Monterey, California. Naval Postgraduate School, 1991. http://hdl.handle.net/10945/28403.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
17

Vernon, Zachary Isaac. „A comparison of automated land cover/use classification methods for a Texas bottomland hardwood system using lidar, spot-5, and ancillary data“. [College Station, Tex. : Texas A&M University, 2008. http://hdl.handle.net/1969.1/ETD-TAMU-2744.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
18

Engelbrecht, Alma Margaretha. „Modelling of mass transfer in packing materials with cellular automata“. Thesis, Link to the online version, 2008. http://hdl.handle.net/10019/1914.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
19

Cleve, Oscar, und Sara Gustafsson. „Automatic Feature Extraction for Human Activity Recognitionon the Edge“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-260247.

Der volle Inhalt der Quelle
Annotation:
This thesis evaluates two methods for automatic feature extraction to classify the accelerometer data of periodic and sporadic human activities. The first method selects features using individual hypothesis tests and the second uses a random forest classifier as an embedded feature selector. The hypothesis test was combined with a correlation filter in this study. Both methods used the same initial pool of automatically generated time series features. A decision tree classifier was used to perform the human activity recognition task for both methods. The possibility of running the developed model on a processor with limited computing power was taken into consideration when selecting methods for evaluation. The classification results showed that the random forest method was good at prioritizing among features. With 23 features selected it had a macro average F1 score of 0.84 and a weighted average F1 score of 0.93. The first method, however, only had a macro average F1 score of 0.40 and a weighted average F1 score of 0.63 when using the same number of features. In addition to the classification performance, this thesis studies the potential business benefits that automation of feature extraction can result in.
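The correlation filter used above alongside the hypothesis tests can be sketched as a greedy redundancy filter. The threshold value and the greedy ordering here are assumptions for illustration, not the study's exact procedure.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length value lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5 if vx and vy else 0.0

def correlation_filter(features, threshold=0.9):
    """Greedy redundancy filter: keep a feature only if its absolute
    Pearson correlation with every already-kept feature stays below the
    threshold. `features` maps feature name -> values across samples."""
    kept = []
    for name in features:
        if all(abs(pearson(features[name], features[k])) < threshold
               for k in kept):
            kept.append(name)
    return kept
```

A perfectly collinear feature is dropped while an uncorrelated one survives, shrinking the automatically generated feature pool before classification.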
APA, Harvard, Vancouver, ISO und andere Zitierweisen
20

Samarakoon, Prasad. „Random Regression Forests for Fully Automatic Multi-Organ Localization in CT Images“. Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAM039/document.

Der volle Inhalt der Quelle
Annotation:
Locating an organ in a medical image by bounding that particular organ with respect to an entity such as a bounding box or sphere is termed organ localization. Multi-organ localization takes place when multiple organs are localized simultaneously. Organ localization is one of the most crucial steps involved in all the phases of patient treatment, from the diagnosis phase to the final follow-up phase. The use of the supervised machine learning technique called random forests has shown very encouraging results in many sub-disciplines of medical image analysis. Similarly, Random Regression Forests (RRF), a specialization of random forests for regression, have produced state-of-the-art results for fully automatic multi-organ localization. Although RRF have produced state-of-the-art results in multi-organ localization, the relative novelty of the method in this field still raises numerous questions about how to optimize its parameters for consistent and efficient usage. The first objective of this thesis is to acquire a thorough knowledge of the inner workings of RRF. Building on this, we propose a consistent and automatic parametrization of RRF. We then empirically examine the spatial independence hypothesis used by RRF. Finally, we propose a novel RRF specialization called Light Random Regression Forests that improves the memory footprint and computational efficiency of multi-organ localization.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
21

Sjösund, Lars Lowe. „Automatic Localization of Bounding Boxes forSubcortical Structures in MR Images UsingRegression Forests“. Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-142391.

Der volle Inhalt der Quelle
Annotation:
Manual delineation of organs at risk in MR images is a very time-consuming task for physicians, and being able to automate the process is therefore highly desirable. This thesis project aims to explore the possibility of using regression forests to find bounding boxes for general subcortical structures. This is an important preprocessing step for later implementations of full segmentation, to improve the accuracy and also to reduce the time consumption. An algorithm suggested by Criminisi et al. is implemented and extended to MR images. The extension also includes a greater pool of feature types. The obtained results are very good, with an average Jaccard similarity coefficient as high as 0.696 and a mean center error distance as low as 3.14 mm. The algorithm is very fast and is able to predict the location of 43 bounding boxes within 14 seconds. These results indicate that regression forests are well suited as the method of choice for preprocessing before a full segmentation.
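The Jaccard similarity coefficient used above to evaluate the predicted bounding boxes is intersection over union; a minimal sketch for axis-aligned 3-D boxes:

```python
def jaccard_3d(box_a, box_b):
    """Jaccard similarity (intersection over union) of two axis-aligned
    3-D bounding boxes given as (xmin, ymin, zmin, xmax, ymax, zmax)."""
    inter = 1.0
    vol_a = vol_b = 1.0
    for i in range(3):
        lo = max(box_a[i], box_b[i])          # overlap interval start
        hi = min(box_a[i + 3], box_b[i + 3])  # overlap interval end
        inter *= max(0.0, hi - lo)            # zero if disjoint on this axis
        vol_a *= box_a[i + 3] - box_a[i]
        vol_b *= box_b[i + 3] - box_b[i]
    union = vol_a + vol_b - inter
    return inter / union if union else 0.0
```

Identical boxes score 1.0, disjoint boxes 0.0, so the 0.696 average reported above corresponds to substantial but imperfect overlap with the ground-truth boxes.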
APA, Harvard, Vancouver, ISO und andere Zitierweisen
22

Nybrant, Arvid. „On Robust Forecast Combinations With Applications to Automated Forecasting“. Thesis, Uppsala universitet, Statistiska institutionen, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-450807.

Der volle Inhalt der Quelle
Annotation:
Combining forecasts has proven to be one of the most successful methods for improving predictive performance. However, while the focus is often on theoretically optimal methods, robustness is the issue of greater empirical relevance in practice. This thesis focuses on the latter issue, examining the risk associated with different combination methods. The problem is addressed using Monte Carlo experiments and an application to automated forecasting with data from the M4 competition. Overall, our results indicate that the choice of combining methodology can constitute an important source of risk. While equal weighting of forecasts generally works well in the application, there are also cases where estimating weights improves upon this benchmark. In these cases, many robust and simple alternatives perform best. While estimating weights can be beneficial, it is important to acknowledge the role of estimation uncertainty, as it can outweigh the benefits of combining. For this reason, it can be advantageous to consider methods that effectively acknowledge this source of risk. By doing so, a forecaster can utilize the benefits of combining forecasts while avoiding the risk associated with uncertainty in weights.
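The equal-weight benchmark discussed above can be sketched in a few lines; the toy data in the test (two forecasts with offsetting biases) is illustrative only.

```python
def combine(forecasts, weights=None):
    """Combine k forecast series point by point; the default is the
    equal-weight average, the robust benchmark discussed above."""
    k = len(forecasts)
    if weights is None:
        weights = [1.0 / k] * k
    return [sum(w * f[t] for w, f in zip(weights, forecasts))
            for t in range(len(forecasts[0]))]

def mse(pred, actual):
    """Mean squared error of a forecast against the realized values."""
    return sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(pred)
```

When individual forecasts carry errors that partly cancel, the equal-weight combination beats each member without estimating any weights, which is the robustness argument the thesis examines.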
APA, Harvard, Vancouver, ISO und andere Zitierweisen
23

Edling, Laura. „Factors Affecting The Adoption Of Automated Wood Pellet Heating Systems In The Northeastern Us And Implications For The Transition To Renewable Energy“. ScholarWorks @ UVM, 2020. https://scholarworks.uvm.edu/graddis/1177.

Der volle Inhalt der Quelle
Annotation:
Public and private incentive programs have encouraged conversions to high efficiency, low emissions wood heating systems as a strategy to promote renewable energy and support local economies in the Northeastern US. Despite these efforts, the adoption of these systems remains slow. The study that is the subject of this dissertation examines several social, economic, policy and environmental factors that affect the decisions of individuals and small-scale institutions (local business and community facilities) to transition to automated wood pellet boilers and furnaces (AWPH) utilizing local fuel sources. Due to the complexity and risk associated with conversion, the transition to these systems can help further both a practical and theoretical understanding of the global transition to non-fossil fuel technologies. Chapter One of this dissertation examines this notion in more detail, as well as spells out the research questions of this study. Chapter Two delves into the research methods and their implications for other studies of energy transitions. These methods include interviews with 60 consumers, technology and fuel suppliers, and NGO and state agency personnel. These provided in-depth qualitative data which are complemented by a four-state survey (New Hampshire, Vermont, New York, and Maine) of adopters and informed non-adopters of AWPH systems (n=690; 38% response rate). Interview and survey questions, as well as subsequent coding, were developed through use of diffusion of innovation theory, the multi-level perspective on sociotechnical transitions, as well as through collaboration with industry experts and research partners. Chapters Three and Four offer a discussion of the results and their implications. Specifically, Chapter Three examines the complex system actors, elements, and interactions that are part of the transition from fossil fuel technology to AWPH.
Chapter Four focuses on the data surrounding state and private programs that encourage the use of AWPH and the implications that this data has for effective climate mitigation and energy policy. Data show that AWPH consumers, who should be considered “early adopters” due to the small number of AWPH adopters in the region, are largely value-driven but are also concerned about upfront costs and lack of available technical support and fuel delivery options. Both environmental values (e.g. desire to find alternatives to fossil fuels, concern for air quality and belief in climate change) and social values (e.g. support for the local economy and wood products industry) influenced consumer decisions, especially when fuel oil prices were low. Financial incentives, which are offered by all four states in the study region, were highly influential, but the additional decision support offered by a non-profit (e.g. site visits, informational workshops, local print media) was rated highly by consumers where it was available. These additional supports, as well as the community-based nature of the non-profit program, enabled a broader range of people (lower income, more risk averse) to choose AWPH and created more efficiency in the supply chain. This approach created a reinforcing feedback loop between broader early adoption of AWPH, normalization of AWPH technology and its associated infrastructure, and increased levels of technical support and fuel availability. These findings suggest that efforts to increase adoption of renewable technologies that use locally harvested fuels should take a community-based and system-wide approach, targeting both consumer and supplier motivations and barriers.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
24

D'Ambrosio, Annamaria. „Segmentazione semantica automatica di immagini WSI per applicazioni in Patomica“. Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23197/.

Der volle Inhalt der Quelle
Annotation:
In traditional pathology, slides are prepared with a sample of the patient's tissue and then reviewed by the pathologist under a high-magnification microscope. The Region of Interest (ROI) is subsequently delineated manually on the image. The objective of this phase is to generate the borders of anatomical structures correctly, since an incorrect segmentation could lead to erroneous planning of the subsequent treatment. Segmentation is a very time-consuming process. To overcome this, one turns to so-called Pathomics, that is, digital pathology, which aims to automatically extract quantitative parameters from histological images by means of computational algorithms, in order to improve the speed and accuracy of diagnoses. Pathomics also aims at the early detection of serious diseases such as skin cancer. The work of this thesis was conceived precisely to overcome these problems and to ease the pathologist's work in terms of both time and accuracy in determining possible treatments. The subject of this work is the development of models for the automatic semantic segmentation of dermatological WSI images taken from melanoma tissues collected in a clinical study at the Policlinico Sant'Orsola in Bologna. The segmentation is based on the KNN algorithm, with particular attention to the search for and selection of features that yield the best results. A comparison is also presented with two other supervised models (Support Vector Machine and Random Forest), trained on the same features as the best KNN model, to compare their performance with that obtained with the KNN model. The satisfactory results obtained with supervised models for the semantic segmentation of WSI images could open up the future possibility of also using unsupervised models, allowing a further increase in efficiency.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
25

ELMOZNINO, HERVE. „Influence du cycle de vie individuel sur la dynamique spatiale d'une foret monospecifique. Analyse a travers un automate cellulaire“. Nice, 1999. http://www.theses.fr/1999NICE5369.

Der volle Inhalt der Quelle
Annotation:
The spread of powerful computing resources in the life sciences has led to the use of increasingly complete, but also increasingly complicated, models to study natural phenomena, in this case forest dynamics. This work, centered on the spatial aspect of the dynamics of a monospecific forest, questions the relevance of such models. To do so, we chose as our framework the most immediate object for the mathematical modelling of spatially distributed phenomena: the cellular automaton. After laying out the formal foundations of the cellular automaton, we define the mosaic-cycle class, suited to modelling the phenomena under consideration. We then present results by other authors concerning an automaton belonging to this class, followed by a generalization of this automaton. Unable to generalize the earlier results, we develop other investigative tools in order to reach a good understanding of this new automaton. We then ask under which conditions our spatialized model could be replaced by a simple density-dependent Leslie-type class model. Finally, we study an automaton inspired by Ch. Wissel's model, presented in The Mosaic-Cycle Concept of Ecosystems (H. Remmert).
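As a generic illustration of a mosaic-cycle cellular automaton, here is a minimal sketch with hypothetical rules; it is neither Wissel's model nor the automaton studied in the thesis.

```python
MAX_AGE = 4  # cycle length: 0 = gap phase, MAX_AGE = senescent stand

def step(grid):
    """One synchronous update of a toroidal mosaic-cycle automaton:
    every cell ages by one step; a senescent cell (age == MAX_AGE) dies
    back to a gap (age 0) and drags senescent 4-neighbours down with it,
    which synchronizes neighbouring patches into a mosaic of phases."""
    rows, cols = len(grid), len(grid[0])
    aged = [[min(grid[r][c] + 1, MAX_AGE) for c in range(cols)]
            for r in range(rows)]
    out = [row[:] for row in aged]
    for r in range(rows):
        for c in range(cols):
            if aged[r][c] == MAX_AGE:
                out[r][c] = 0  # dieback opens a gap
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    rr, cc = (r + dr) % rows, (c + dc) % cols
                    if aged[rr][cc] == MAX_AGE:
                        out[rr][cc] = 0  # neighbouring senescent cells fall too
    return out
```

Iterating `step` from a mixed-age initial grid produces patches cycling through gap, growth and senescence phases, the qualitative behaviour the mosaic-cycle class is meant to capture.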
APA, Harvard, Vancouver, ISO und andere Zitierweisen
26

Pettersson, Kristian. „Tillförlitligheten i den automatiserade gallringsuppföljningen“. Thesis, Linnéuniversitetet, Institutionen för skog och träteknik (SOT), 1985. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-54057.

Der volle Inhalt der Quelle
Annotation:
To ensure that thinning is done properly and according to instructions, the harvester operator regularly carries out manual monitoring of the stand after thinning. The aim of this study is to investigate the reliability of a newly developed program that uses harvester data to automatically calculate stand variables after thinning. A manual forest inventory was carried out in ten different stands in south-west Sweden, where basal area, stem density, volume and species mix were estimated and compared to the automatically calculated data. The results show that volume and stem density were estimated with high precision, while the systematic deviation for basal area was 10%, which is a significant difference.
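Basal area, one of the compared stand variables above, is the cross-sectional stem area per hectare. A minimal sketch of how it and the reported deviation could be computed (illustrative only, not the evaluated program's code):

```python
import math

def basal_area_per_ha(dbh_cm, plot_area_m2):
    """Stand basal area (m²/ha) from a list of diameters at breast
    height in cm, as a harvester or field inventory would record them."""
    # d/200 converts diameter in cm to radius in m
    area_m2 = sum(math.pi * (d / 200.0) ** 2 for d in dbh_cm)
    return area_m2 * 10_000.0 / plot_area_m2

def relative_deviation(automatic, manual):
    """Systematic deviation of the automatic estimate, as a percentage
    of the manual reference value."""
    return 100.0 * (automatic - manual) / manual
```

Comparing `basal_area_per_ha` computed from harvester stem records against the manual inventory value is what yields a figure like the 10% systematic deviation reported above.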
APA, Harvard, Vancouver, ISO und andere Zitierweisen
27

Ebrahimi, Shahin. „Contribution to automatic adjustments of vertebrae landmarks on x-ray images for 3D reconstruction and quantification of clinical indices“. Thesis, Paris, ENSAM, 2017. http://www.theses.fr/2017ENAM0050/document.

Der volle Inhalt der Quelle
Annotation:
Exploitation of spine radiographs, in particular for 3D spine shape reconstruction of scoliotic patients, is a prerequisite for personalized modelling. Current methods, even though robust enough to be used in clinical routine, still rely on tedious manual adjustments. In this context, this PhD thesis aims toward automated detection of specific vertebrae landmarks in spine radiographs, enabling automated adjustments. In the first part, we developed an original Random Forest based framework for vertebrae corner localization that was applied on sagittal radiographs of both the cervical and lumbar spine regions. A rigorous evaluation confirms the robustness and high accuracy of the proposed method. In the second part, we developed an algorithm for the clinically important task of pedicle localization in the thoracolumbar region on frontal radiographs. The proposed algorithm compares favourably to similar methods from the literature while relying on less manual supervision. The last part of this PhD tackled the scarcely studied task of joint detection, identification and segmentation of the spinous processes of cervical vertebrae in sagittal radiographs, again with high precision. All three algorithmic solutions were designed around a generic framework exploiting dedicated visual feature descriptors and multi-class Random Forest classifiers, proposing a novel solution whose computational and manual supervision burdens are low enough to aim for translation into clinical use. Overall, the presented frameworks suggest a great potential for being integrated into the spine 3D reconstruction frameworks that are used in daily clinical routine.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
28

Ellis, David G. „Machine learning improves automated cortical surface reconstruction in human MRI studies“. Thesis, University of Iowa, 2017. https://ir.uiowa.edu/etd/5465.

Der volle Inhalt der Quelle
Annotation:
Analysis of surface models reconstructed from human MR images gives researchers the ability to quantify the shape and size of the cerebral cortex. Increasing the reliability of automatic reconstructions would increase the precision and, therefore, power of studies utilizing cortical surface models. We looked at four different workflows for reconstructing cortical surfaces: 1) BAW + LOGISMOS-B; 2) FreeSurfer + LOGISMOS-B; 3) BAW + FreeSurfer + Machine Learning + LOGISMOS-B; 4) Standard FreeSurfer (Dale et al. 1999). Workflows 1-3 were developed in this project. Workflow 1 utilized both BRAINSAutoWorkup (BAW) (Kim et al. 2015) and a surface reconstruction tool called LOGISMOS-B (Oguz et al. 2014). Workflow 2 added LOGISMOS-B to a custom-built FreeSurfer workflow that was highly optimized for parallel processing. Workflow 3 combined workflows 1 and 2 and added random forest classifiers for predicting the edges of the cerebral cortex. These predictions were then fed into LOGISMOS-B as the cost function for graph segmentation. To compare these workflows, a dataset of 578 simulated cortical volume changes was created from 20 different sets of MR scans. The workflow utilizing machine learning (workflow 3) produced cortical volume changes with the least amount of error when compared to the known volume changes from the simulations. Machine learning can be effectively used to help reconstruct cortical surfaces that more precisely track changes in the cerebral cortex. This research could be used to increase the power of future projects studying correlations between cortical morphometrics and neurological health.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
29

Brandtberg, Tomas. „Automatic individual tree-based analysis of high spatial resolution remotely sensed data /“. Uppsala : Swedish Univ. of Agricultural Sciences (Sveriges lantbruksuniv.), 1999. http://epsilon.slu.se/avh/1999/91-576-5852-8.pdf.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
30

Maurel, Denis. „Reconnaissance des séquences de mots par automate. Cas des adverbes de date du français“. Paris 7, 1989. http://www.theses.fr/1989PA077211.

Der volle Inhalt der Quelle
Annotation:
The aim of this thesis is to describe a method for recognizing word sequences with finite automata and to implement software for recognizing the date adverbs of French. The first chapter presents the LADL's work on lexicon-grammars, the theory of finite automata, and the problems raised by semantic interpretation. The second chapter describes the structure of the dictionary and the link between the lexicon-grammar, the automaton and the transition tables, then comments on the program that was built. The third chapter deals with date adverbs and raises the problems of generalizing the notion of adverb and of choosing a classifying support verb. The following chapter then addresses certain adverbial complements related to a date. It is in these two chapters that the dictionary and the automaton are constructed. The fifth chapter consists solely of examples intended for the construction of the transition tables given in the appendix. Finally, the sixth chapter outlines a possible continuation of the work toward semantic interpretation modules, then describes the software's maintenance possibilities and results, before concluding with future perspectives.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
31

Cheng, Wijian. „Automatic Red Tide Detection using MODIS Satellite Images“. Scholar Commons, 2009. http://scholarcommons.usf.edu/etd/3772.

Der volle Inhalt der Quelle
Annotation:
Red tides pose a significant economic and environmental threat in the Gulf of Mexico. Detecting red tide is important for understanding this phenomenon. In this thesis, machine learning approaches based on Random Forests, Support Vector Machines and K-Nearest Neighbors have been evaluated for red tide detection from MODIS satellite images. Detection results using machine learning algorithms were compared to ship-collected ground truth red tide data. This work has three major contributions. First, machine learning approaches outperformed two of the latest thresholding red tide detection algorithms based on bio-optical characterization by more than 10% in terms of F-measure and more than 4% in terms of area under the ROC curve. Machine learning approaches are effective in more locations on the West Florida Shelf. Second, the thresholds developed in recent thresholding methods were introduced as input attributes to the machine learning approaches, and this strategy improved the F-measures of the Random Forests and K-Nearest Neighbors approaches. Third, voting across the machine learning and thresholding methods achieved better performance than machine learning alone, which implies that a combination of machine learning models and bio-optical thresholding methods can be used to obtain effective red tide detection results.
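The second and third contributions, using a thresholding rule's output as an extra input attribute and then majority-voting the detectors, can be sketched on synthetic data. The threshold rule, band values, and labels below are all invented stand-ins:

```python
# Illustrative sketch (synthetic data, toy threshold rule): feed the
# output of a thresholding detector to the classifiers as an extra
# attribute, then majority-vote the detectors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
n = 600
bands = rng.uniform(size=(n, 4))             # stand-in satellite band values
chlorophyll = bands[:, 0] + 0.1 * rng.normal(size=n)
red_tide = (chlorophyll > 0.7).astype(int)   # synthetic ground truth

threshold_flag = (bands[:, 0] > 0.65).astype(int)   # toy bio-optical rule
X = np.column_stack([bands, threshold_flag])        # threshold as a feature

rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, red_tide)
knn = KNeighborsClassifier(n_neighbors=5).fit(X, red_tide)

# Majority vote of random forest, KNN, and the raw threshold rule.
votes = rf.predict(X) + knn.predict(X) + threshold_flag
combined = (votes >= 2).astype(int)
```

In the thesis the vote is taken over detectors evaluated on held-out ship-survey ground truth; here everything runs on the same synthetic set purely to show the wiring.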
APA, Harvard, Vancouver, ISO und andere Zitierweisen
32

Pai, Chih-Yun. „Automatic Pain Assessment from Infants’ Crying Sounds“. Scholar Commons, 2016. http://scholarcommons.usf.edu/etd/6560.

Der volle Inhalt der Quelle
Annotation:
Crying is the means infants utilize to express their emotional state. It provides parents and nurses a criterion for understanding an infant's physiological state. Many researchers have analyzed infants' crying sounds to diagnose specific diseases or determine the reasons for crying. This thesis presents an automatic crying level assessment system to classify infants' crying sounds, recorded under realistic conditions in the Neonatal Intensive Care Unit (NICU), as whimpering or vigorous crying. To analyze the crying signal, Welch's method and Linear Predictive Coding (LPC) are used to extract spectral features; the average and the standard deviation of the frequency signal and the maximum power spectral density are the other spectral features used in classification. For classification, three state-of-the-art classifiers, namely K-Nearest Neighbors, Random Forests, and Least Squares Support Vector Machine, are tested in this work. The highest accuracy in classifying whimpering and vigorous crying, 90%, is achieved on the clean dataset, which is sampled from 10 seconds before to 5 seconds after scoring, using K-Nearest Neighbors as the classifier.
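The feature pipeline named in the abstract, Welch power spectra plus LPC coefficients fed to a K-Nearest Neighbors classifier, can be sketched on synthetic signals. Window lengths, LPC order, and the toy "whimper"/"cry" signals are invented:

```python
# Toy sketch of the described features: Welch spectral statistics plus
# LPC coefficients, classified with K-Nearest Neighbors.
import numpy as np
from scipy.signal import welch
from scipy.linalg import solve_toeplitz
from sklearn.neighbors import KNeighborsClassifier

def lpc(x, order=8):
    """LPC coefficients via the autocorrelation (Yule-Walker) method."""
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
    return solve_toeplitz(r[:order], r[1:order + 1])

def features(signal, fs=8000):
    f, psd = welch(signal, fs=fs, nperseg=256)
    return np.concatenate([
        [psd.mean(), psd.std(), psd.max()],   # spectral summary stats
        lpc(signal),                          # LPC coefficients
    ])

rng = np.random.default_rng(2)
fs, n = 8000, 2000
t = np.arange(n) / fs
# "Whimper": low-frequency tone; "vigorous cry": higher-pitched and noisier.
whimpers = [np.sin(2 * np.pi * 300 * t) + 0.1 * rng.normal(size=n) for _ in range(5)]
cries = [np.sin(2 * np.pi * 900 * t) + 0.5 * rng.normal(size=n) for _ in range(5)]

X = np.array([features(s) for s in whimpers + cries])
y = np.array([0] * 5 + [1] * 5)
clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
pred = clf.predict(X)
```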
APA, Harvard, Vancouver, ISO und andere Zitierweisen
33

Karlsson, Daniel, und Alex Lindström. „Automated Learning and Decision : Making of a Smart Home System“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-234313.

Der volle Inhalt der Quelle
Annotation:
Smart homes are custom-fitted systems for users to manage their home environments. Smart homes consist of devices that can communicate with each other. In a smart home system, this communication is used by a central control unit to manage the environment and the devices in it. Setting up a smart home today involves a lot of manual customization to make it function as the user wishes. What smart homes lack is the ability to learn from the user's behaviour and habits in order to provide a customized environment for the user autonomously. The purpose of this thesis is to examine whether environmental data can be collected and used in a small smart home system to learn about the user's behaviour. To collect data and attempt this learning process, a system is set up. The system uses a central control unit for mediation between wireless electrical outlets and sensors. The sensors track motion, light, temperature and humidity. The devices and sensors, along with user interactions in the environment, make up the collected data. Through studying the collected data, the system is able to create rules. These rules are used by the system to make decisions within its environment to suit the user's needs. The performance of the system varies depending on how the data collection is handled. The results show that it is important to collect data both at intervals and when the user performs an action.
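The rule-creation idea, learning when the user switches a device from logged sensor snapshots, can be sketched as follows. The sensor schema, values, and the decision-tree learner are invented stand-ins for the thesis's rule engine:

```python
# Minimal sketch (invented schema): learn an on/off rule for a lamp from
# logged sensor snapshots paired with the user's actions at that moment.
from sklearn.tree import DecisionTreeClassifier

# Each row: [motion (0/1), light level (lux), temperature (C), humidity (%)]
snapshots = [
    [1, 20, 21, 40], [1, 15, 22, 42], [1, 30, 20, 45],  # dark room + motion
    [0, 25, 21, 41], [0, 18, 23, 44],                   # dark, no motion
    [1, 300, 22, 40], [0, 280, 21, 43],                 # bright room
]
# The user's action at that moment: 1 = turned lamp on, 0 = lamp off.
lamp_on = [1, 1, 1, 0, 0, 0, 0]

rules = DecisionTreeClassifier(max_depth=2, random_state=0)
rules.fit(snapshots, lamp_on)

# Decide for a new snapshot: motion in a dark room.
decision = rules.predict([[1, 22, 21, 41]])[0]
```

A shallow tree like this is readable as explicit if/then rules, which matches the abstract's point that the system creates rules from collected data rather than a black-box model.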
APA, Harvard, Vancouver, ISO und andere Zitierweisen
34

Kupferschmidt, Benjamin, und Eric Pesciotta. „Automatic Format Generation Techniques for Network Data Acquisition Systems“. International Foundation for Telemetering, 2009. http://hdl.handle.net/10150/606089.

Der volle Inhalt der Quelle
Annotation:
ITC/USA 2009 Conference Proceedings / The Forty-Fifth Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2009 / Riviera Hotel & Convention Center, Las Vegas, Nevada
Configuring a modern, high-performance data acquisition system is typically a very time-consuming and complex process. Any enhancement to the data acquisition setup software that can reduce the amount of time needed to configure the system is extremely useful. Automatic format generation is one of the most useful enhancements to a data acquisition setup application. By using automatic format generation, an instrumentation engineer can significantly reduce the amount of time that is spent configuring the system while simultaneously gaining much greater flexibility in creating sampling formats. This paper discusses several techniques that can be used to generate sampling formats automatically while making highly efficient use of the system's bandwidth. This allows the user to obtain most of the benefits of a hand-tuned, manually created format without spending excessive time creating it. One of the primary techniques that this paper discusses is an enhancement to the commonly used power-of-two rule for selecting sampling rates. This allows the system to create formats that use a wider variety of rates. The system is also able to handle groups of related measurements that must follow each other sequentially in the sampling format. This paper will also cover a packet-based formatting scheme that organizes measurements based on common sampling rates. Each packet contains a set of measurements that are sampled at a particular rate. A key benefit of using an automatic format generation system with this format is the optimization of sampling rates that are used to achieve the best possible match for each measurement's desired sampling rate.
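The classic power-of-two rule and the per-rate packet grouping described above can be sketched as follows. The measurement names and desired rates are invented, and the paper's enhanced rate selection is not reproduced here:

```python
# Sketch: round each measurement's desired rate up to the next
# power-of-two rate, then group measurements into per-rate packets.
from collections import defaultdict

def next_power_of_two_rate(desired_hz, base_hz=1.0):
    """Smallest base_hz * 2**k that is >= desired_hz."""
    rate = base_hz
    while rate < desired_hz:
        rate *= 2
    return rate

# Hypothetical measurement list: name -> desired sampling rate in Hz.
measurements = {"engine_temp": 3.0, "strain_01": 190.0, "gps_alt": 1.0}

packets = defaultdict(list)   # sampling rate -> measurements at that rate
for name, desired in measurements.items():
    packets[next_power_of_two_rate(desired)].append(name)
```

Each resulting packet holds all measurements sharing one sampling rate, which is the organizing principle of the packet-based scheme the paper describes.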
APA, Harvard, Vancouver, ISO und andere Zitierweisen
35

Lien, Tord Hjalmar. „Automatic identification technology tracking weapons and ammunition for the Norwegian Armed Forces“. Thesis, Monterey, California. Naval Postgraduate School, 2011. http://hdl.handle.net/10945/5715.

Der volle Inhalt der Quelle
Annotation:
Approved for public release; distribution is unlimited.
The purpose of this study is to recommend technology and solutions that improve the accountability and accuracy of small arms and ammunition inventories in the Norwegian Armed Forces (NAF). Radio Frequency Identification (RFID) and Item Unique Identification (IUID) are described, and the challenges and benefits of these two major automatic identification technologies are discussed. A case study for the NAF is conducted in which processes and objectives that are important for the inventory system are presented. Based on the specific requirements of the NAF's inventory system, four different inventory management solutions are analyzed. For the RFID solution, an experiment is conducted to determine whether it is feasible for small arms inventory control. A recommendation is formed based on the results of this analysis: the tandem solution, which uses IUID technology at the item level, passive RFID at the box level and active RFID when items are transported. This solution uses the appropriate technologies where they are best suited and offers the best results for an accurate inventory control system with low implementation costs and risks.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
36

Frilley, François. „Differenciation d'ensembles structures : applications aux cas du monoïde libre et de la foret des arbres finis et étiquetés“. Paris 7, 1989. http://www.theses.fr/1989PA077189.

Der volle Inhalt der Quelle
Annotation:
The manipulation of structured objects is an essential aspect of modern computer science, particularly in programming support systems. The subject of this thesis is the study of the comparison of these structures, which is important from both a theoretical and a practical point of view. Mathematically, all comparison problems belong to the same paradigm, described simply by the notion of differencing studied in the first part. The very important examples of differencing on the free monoid and on the forest of labelled trees are then addressed. The first of these two examples has often been studied: only a synthesis of the main results related to the problem, supplemented by an extensive bibliography, is presented here. The second, by contrast, is entirely original. It is approached both from a theoretical angle, showing the difficulty of the subject, and from a practical angle, leading to the construction of a tree differencer applicable to a real industrial system: the syntax-directed editor Centaur
APA, Harvard, Vancouver, ISO und andere Zitierweisen
37

Kupferschmidt, Benjamin, und Albert Berdugo. „DESIGNING AN AUTOMATIC FORMAT GENERATOR FOR A NETWORK DATA ACQUISITION SYSTEM“. International Foundation for Telemetering, 2006. http://hdl.handle.net/10150/604157.

Der volle Inhalt der Quelle
Annotation:
ITC/USA 2006 Conference Proceedings / The Forty-Second Annual International Telemetering Conference and Technical Exhibition / October 23-26, 2006 / Town and Country Resort & Convention Center, San Diego, California
In most current PCM-based telemetry systems, an instrumentation engineer manually creates the sampling format. This time-consuming and tedious process typically involves manually placing each measurement into the format at the proper sampling rate. The telemetry industry is now moving towards Ethernet-based systems comprised of multiple autonomous data acquisition units, which share a single global time source. The architecture of these network systems greatly simplifies the task of implementing an automatic format generator. Automatic format generation eliminates much of the effort required to create a sampling format because the instrumentation engineer only has to specify the desired sampling rate for each measurement. The system handles the task of organizing the format to comply with the specified sampling rates. This paper examines the issues involved in designing an automatic format generator for a network data acquisition system.
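One way to picture the core task an automatic format generator faces: each measurement's desired rate must be snapped up to a rate the format actually supports, and the oversampling cost depends on how rich the allowed rate set is. The numbers below are invented for illustration:

```python
# Sketch: snapping a desired rate up to the nearest allowed rate wastes
# less bandwidth when the generator supports a wider set of rates.
def snap_up(desired, allowed):
    """Smallest allowed rate that satisfies the desired rate."""
    return min(r for r in allowed if r >= desired)

powers_of_two = [2 ** k for k in range(13)]                   # 1..4096 Hz
wider = sorted(set(powers_of_two) | {3 * 2 ** k for k in range(12)})

desired = 90.0
p2 = snap_up(desired, powers_of_two)   # 128 Hz: ~42% oversampling
wide = snap_up(desired, wider)         # 96 Hz: ~7% oversampling
```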
APA, Harvard, Vancouver, ISO und andere Zitierweisen
38

Kim, Eun Young. „Machine-learning based automated segmentation tool development for large-scale multicenter MRI data analysis“. Diss., University of Iowa, 2013. https://ir.uiowa.edu/etd/4998.

Der volle Inhalt der Quelle
Annotation:
Background: Volumetric analysis of brain structures from structural Magnetic Resonance (MR) images advances the understanding of the brain by providing means to study brain morphometric changes quantitatively across aging, development, and disease status. Due to the recent increased emphasis on large-scale multicenter brain MR study design, the demand for an automated brain MRI processing tool has increased as well. This dissertation describes an automatic segmentation framework for subcortical structures of brain MRI that is robust for a wide variety of MR data. Method: The proposed segmentation framework, BRAINSCut, is an integration of robust data standardization techniques and machine-learning approaches. First, a robust multi-modal pre-processing tool for automated registration, bias correction, and tissue classification has been implemented for large-scale heterogeneous multi-site longitudinal MR data analysis. The segmentation framework was then constructed to achieve robustness for large-scale data via the following comparative experiments: 1) Find the best machine-learning algorithm among several available approaches in the field. 2) Find an efficient intensity normalization technique for the proposed region-specific localized normalization with a choice of robust statistics. 3) Find high quality features that best characterize the MR brain subcortical structures. Our tool is built upon 32 handpicked multi-modal multicenter MR images with manual traces of six subcortical structures (nucleus accumbens, caudate nucleus, globus pallidus, putamen, thalamus, and hippocampus) from three experts. A fundamental task associated with brain MR image segmentation for research and clinical trials is the validation of segmentation accuracy. This dissertation evaluated the proposed segmentation framework in terms of validity and reliability.
Three groups of data were employed for the various evaluation aspects: 1) traveling human phantom data for the multicenter reliability, 2) a set of repeated scans for the measurement stability across various disease statuses, and 3) large-scale data from a Huntington's disease (HD) study for software robustness as well as segmentation accuracy. Result: Segmentation accuracy of six subcortical structures was improved with 1) the bias-corrected inputs, 2) the two region-specific intensity normalization strategies and 3) the random forest machine-learning algorithm with the selected feature-enhanced image. The analysis of traveling human phantom data showed no center-specific bias in volume measurements from BRAINSCut. The repeated-measure reliability of most structures also displayed no specific association with disease progression, except for the caudate nucleus in the group at high risk for HD. The constructed segmentation framework was successfully applied to multicenter MR data from the PREDICT-HD [133] study (< 10% failure rate over 3000 scan sessions processed). Conclusion: The random-forest based segmentation method is effective and robust to large-scale multicenter data variation, especially with a proper choice of intensity normalization techniques. Benefits of proper normalization approaches are more apparent than those of the custom set of feature-enhanced images for the accuracy and robustness of the segmentation tool. BRAINSCut effectively produced subcortical volumetric measurements that are robust to center and disease status, with validity confirmed by human experts and a low failure rate on large-scale multicenter MR data. Sample size estimation, which is crucial for designing efficient clinical and research trials, is provided based on our experiments for six subcortical structures.
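Validating segmentation accuracy against manual traces is commonly done with an overlap score such as the Dice coefficient; the sketch below uses Dice on toy masks, though the dissertation's exact validation metrics may differ:

```python
# Validation sketch: Dice overlap between an automated label mask and a
# manual expert trace (toy 10x10 masks, invented for illustration).
import numpy as np

def dice(a, b):
    """Dice similarity coefficient of two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

manual = np.zeros((10, 10), int)
manual[2:8, 2:8] = 1               # expert trace of a structure (36 voxels)
auto = np.zeros((10, 10), int)
auto[3:8, 2:8] = 1                 # automated segmentation (30 voxels)
score = dice(auto, manual)         # 2*30 / (30+36)
```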
APA, Harvard, Vancouver, ISO und andere Zitierweisen
39

Schilling, Anita. „Automatic Retrieval of Skeletal Structures of Trees from Terrestrial Laser Scanner Data“. Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-155698.

Der volle Inhalt der Quelle
Annotation:
Research on forest ecosystems receives high attention, especially nowadays with regard to sustainable management of renewable resources and climate change. In particular, accurate information on the 3D structure of a tree is important for forest science and bioclimatology, but also in the scope of commercial applications. Conventional methods to measure geometric plant features are labor- and time-intensive. For detailed analysis, trees have to be cut down, which is often undesirable. Here, Terrestrial Laser Scanning (TLS) provides a particularly attractive tool because of its contactless measurement technique. The object geometry is reproduced as a 3D point cloud. The objective of this thesis is the automatic retrieval of the spatial structure of trees from TLS data. We focus on forest scenes with comparably high stand density and with many occlusions resulting from it. The varying level of detail of TLS data poses a big challenge. We present two fully automatic methods to obtain skeletal structures from scanned trees that have complementary properties. First, we explain a method that retrieves the entire tree skeleton from 3D data of co-registered scans. The branching structure is obtained from a voxel space representation by searching paths from branch tips to the trunk. The trunk is determined in advance from the 3D points. The skeleton of a tree is generated as a 3D line graph. Besides 3D coordinates and range, a scan provides 2D indices from the intensity image for each measurement. This is exploited in the second method that processes individual scans. Furthermore, we introduce a novel concept to manage TLS data that facilitated the research work. Initially, the range image is segmented into connected components. We describe a procedure to retrieve the boundary of a component that is capable of tracing inner depth discontinuities. A 2D skeleton is generated from the boundary information and used to decompose the component into subcomponents.
A Principal Curve is computed from the 3D point set that is associated with a subcomponent. The skeletal structure of a connected component is summarized as a set of polylines. Objective evaluation of the results remains an open problem because the task itself is ill-defined: there exists no clear definition of what the true skeleton should be w.r.t. a given point set. Consequently, we are not able to assess the correctness of the methods quantitatively, but have to rely on visual assessment of results and provide a thorough discussion of the particularities of both methods. We present experimental results of both methods. The first method efficiently retrieves full skeletons of trees, which approximate the branching structure. The level of detail is mainly governed by the voxel space and therefore, smaller branches are reproduced inadequately. The second method retrieves partial skeletons of a tree with high reproduction accuracy. The method is sensitive to noise in the boundary, but the results are very promising. There are plenty of possibilities to enhance the method's robustness. The combination of the strengths of both presented methods needs to be investigated further and may lead to a robust way to obtain complete tree skeletons from TLS data automatically.
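The core idea of the first method, searching paths from branch tips to the trunk through occupied voxels, can be sketched with a plain breadth-first search. The tiny voxel set below is invented, and the thesis's actual search is considerably more involved:

```python
# Simplified sketch: occupied voxels as graph nodes; find a path from a
# branch tip to the trunk base with 6-connected breadth-first search.
from collections import deque

# A tiny occupied-voxel set: a vertical trunk plus one horizontal branch.
voxels = {(0, 0, z) for z in range(5)} | {(x, 0, 4) for x in range(4)}
trunk_base, branch_tip = (0, 0, 0), (3, 0, 4)

def bfs_path(start, goal, occupied):
    """Shortest 6-connected path between two occupied voxels."""
    steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    prev, queue = {start: None}, deque([start])
    while queue:
        v = queue.popleft()
        if v == goal:                       # walk predecessors back to start
            path = []
            while v is not None:
                path.append(v)
                v = prev[v]
            return path[::-1]
        for dx, dy, dz in steps:
            n = (v[0] + dx, v[1] + dy, v[2] + dz)
            if n in occupied and n not in prev:
                prev[n] = v
                queue.append(n)
    return None

skeleton_path = bfs_path(branch_tip, trunk_base, voxels)
```

Collecting such tip-to-trunk paths over all branch tips yields the 3D line graph that approximates the branching structure.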
APA, Harvard, Vancouver, ISO und andere Zitierweisen
40

Pereira, Talita de Azevedo Coelho Furquim. „Feridas complexas classificação de tecidos, segmentação e mensuração com o classificador Optimun-Path Forest /“. Universidade Estadual Paulista (UNESP), 2018. http://hdl.handle.net/11449/153761.

Der volle Inhalt der Quelle
Annotation:
Introduction: Complex wounds are difficult to resolve and are associated with extensive cutaneous loss, major infections, compromised tissue viability and/or systemic diseases that impair normal healing processes; they have high morbidity and mortality and have been identified as a severe public health problem. In clinical practice, it is important to evaluate wounds and document the evaluation. Incomplete records on the patient and the treatment in use are pointed out as a challenge in the follow-up of wounds and also impair management, research and education actions. The incorporation of wound photographs into professional practice stands out as a strategy to assist professionals in observation, evolution tracking and clear, precise recording. Optimum-Path Forest (OPF) is a framework for the development of pattern recognition techniques based on optimum-path partitions and is particularly efficient for image classification. The OPF classifier generates results from the intersection of the selected classes and characteristics. Objective: Describe the steps in developing a mobile application capable of segmenting and classifying complex wound tissue based on the supervised Optimum-Path Forest (OPF). Method: A new intelligent methodology was applied for the analysis and classification of complex wound images using digital image processing and machine learning techniques with the supervised Optimum-Path Forest (OPF) pattern classifier. An image bank of 27 complex wounds was created; the images were labeled by four specialists according to the classification of the tissues into four classes: granulation (red), fibrinoid tissue (yellow), necrosis (black) and hematoma (purple), generating 108 labeled images. Two classes were added: white (everything in the photo except the wound bed) and doubt (divergence in classification by the professionals). The OPF classifier was trained on these 108 images. The OPF was applied to the wound images and the accuracy was verified.
Then, development of the application was started. Results and Discussion: The present study developed a computer-aided wound tissue classification scheme for the evaluation and management of complex wounds, based on photos taken with a smartphone's digital camera. The application of OPF to complex wounds resulted in an accuracy of 77.52% ± 6.14. With this tool, the product of this research was developed: an application for segmentation, tissue classification and measurement of complex wounds. The application generates a Portable Document Format (PDF) report that can be emailed, printed or attached to a compatible electronic medical record. Conclusion: A bank of 27 images of complex wounds was built, which four professionals labeled for training the OPF classifier; the OPF was applied to the complex wound images, the accuracy of this process was assessed, and a mobile application was developed with wound segmentation, tissue classification and wound measurement functions. The results showed that the accuracy obtained in the computational analysis was significant, comparable to the evaluation of wound specialists. Compared with similar studies, the computational analysis of wounds showed less variability than the professionals' evaluations, suggesting that the incorporation of this technology into clinical practice favors the health care of patients with complex wounds, besides providing data for management, teaching and research.
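The thesis uses the Optimum-Path Forest classifier, which is not reproduced here; as a loose, hypothetical stand-in, the sketch below labels pixels into the four tissue classes with a nearest-prototype rule over invented RGB prototypes:

```python
# Hypothetical stand-in for the tissue classification step: assign each
# pixel to the nearest invented RGB prototype of a tissue class. This is
# NOT the OPF algorithm, only a minimal illustration of per-pixel labeling.
prototypes = {
    "granulation": (200, 30, 30),    # red
    "fibrinoid":   (220, 210, 60),   # yellow
    "necrosis":    (20, 20, 20),     # black
    "hematoma":    (110, 40, 130),   # purple
}

def classify_pixel(rgb):
    """Label a pixel by squared-distance to the class prototypes."""
    return min(prototypes,
               key=lambda c: sum((a - b) ** 2 for a, b in zip(rgb, prototypes[c])))

pixels = [(190, 40, 35), (25, 15, 30), (215, 205, 70)]
labels = [classify_pixel(p) for p in pixels]
```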
APA, Harvard, Vancouver, ISO und andere Zitierweisen
41

Lujan, Jose Luis. „AUTOMATED OPTIMAL COORDINATION OF MULTIPLE-DEGREE-OF-FREEDOM MUSCULOSKELETAL ACTIONS IN FEED-FORWARD NEUROPROSTHESES“. Case Western Reserve University School of Graduate Studies / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=case1165011349.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
42

Pampouchidou, Anastasia. „Automatic detection of visual cues associated to depression“. Thesis, Bourgogne Franche-Comté, 2018. http://www.theses.fr/2018UBFCK054/document.

Der volle Inhalt der Quelle
Annotation:
La dépression est le trouble de l'humeur le plus répandu dans le monde avec des répercussions sur le bien-être personnel, familial et sociétal. La détection précoce et précise des signes liés à la dépression pourrait présenter de nombreux avantages pour les cliniciens et les personnes touchées. Le présent travail visait à développer et à tester cliniquement une méthodologie capable de détecter les signes visuels de la dépression afin d'aider les cliniciens dans leur décision. Plusieurs pipelines d'analyse ont été mis en œuvre, axés sur les algorithmes de représentation du mouvement, via des changements de textures ou des évolutions de points caractéristiques du visage, avec des algorithmes basés sur les motifs binaires locaux et leurs variantes incluant ainsi la dimension temporelle (Local Curvelet Binary Patterns-Three Orthogonal Planes (LCBP-TOP), Local Curvelet Binary Patterns-Pairwise Orthogonal Planes (LCBP-POP), Landmark Motion History Images (LMHI), and Gabor Motion History Image (GMHI)). Ces méthodes de représentation ont été combinées avec différents algorithmes d'extraction de caractéristiques basés sur l'apparence, à savoir les modèles binaires locaux (LBP), l'histogramme des gradients orientés (HOG), la quantification de phase locale (LPQ) et les caractéristiques visuelles obtenues après transfert de modèle issu des apprentissages profonds (VGG). Les méthodes proposées ont été testées sur deux ensembles de données de référence, AVEC et le Wizard of Oz (DAICWOZ), enregistrés à partir d'individus non diagnostiqués et annotés à l'aide d'instruments d'évaluation de la dépression. Un nouvel ensemble de données a également été développé pour inclure les patients présentant un diagnostic clinique de dépression (n = 20) ainsi que les volontaires sains (n = 45). Deux types différents d'évaluation de la dépression ont été testés sur les ensembles de données disponibles, catégorique (classification) et continue (régression).
Le MHI avec VGG pour l'ensemble de données de référence AVEC'14 a surpassé l'état de l'art avec un F1-Score de 87,4 % pour l'évaluation catégorielle binaire. Pour l'évaluation continue des symptômes de dépression « autodéclarés », LMHI combinée aux caractéristiques issues des HOG et à celles issues du modèle VGG a conduit à des résultats comparables aux meilleures techniques de l'état de l'art sur le jeu de données AVEC'14 et sur notre ensemble de données, avec une erreur quadratique moyenne (RMSE) et une erreur absolue moyenne (MAE) de 10,59 / 7,46 et 10,15 / 8,48 respectivement. La meilleure performance de la méthodologie proposée a été obtenue dans la prédiction des symptômes d'anxiété auto-déclarés sur notre ensemble de données, avec une RMSE/MAE de 9,94 / 7,88. Les résultats sont discutés en relation avec les limitations cliniques et techniques, et des améliorations potentielles pour des travaux futurs sont proposées.
Depression is the most prevalent mood disorder worldwide, with a significant impact on well-being and functionality and important personal, family and societal effects. The early and accurate detection of signs related to depression could have many benefits for both clinicians and affected individuals. The present work aimed at developing and clinically testing a methodology able to detect visual signs of depression and support clinician decisions. Several analysis pipelines were implemented, focusing on motion representation algorithms, including Local Curvelet Binary Patterns-Three Orthogonal Planes (LCBP-TOP), Local Curvelet Binary Patterns-Pairwise Orthogonal Planes (LCBP-POP), Landmark Motion History Images (LMHI), and Gabor Motion History Image (GMHI). These motion representation methods were combined with different appearance-based feature extraction algorithms, namely Local Binary Patterns (LBP), Histogram of Oriented Gradients (HOG), Local Phase Quantization (LPQ), as well as Visual Geometry Group (VGG) features based on transfer learning from deep networks. The proposed methods were tested on two benchmark datasets, the AVEC and the Distress Analysis Interview Corpus - Wizard of Oz (DAICWOZ), which were recorded from non-diagnosed individuals and annotated based on self-report depression assessment instruments. A novel dataset was also developed to include patients with a clinical diagnosis of depression (n=20) as well as healthy volunteers (n=45). Two different types of depression assessment were tested on the available datasets, categorical (classification) and continuous (regression). The MHI with VGG for the AVEC'14 benchmark dataset outperformed the state-of-the-art with 87.4% F1-Score for binary categorical assessment.
For continuous assessment of self-reported depression symptoms, MHI combined with HOG and VGG performed at state-of-the-art levels on both the AVEC'14 dataset and our dataset, with Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE) of 10.59/7.46 and 10.15/8.48, respectively. The best performance of the proposed methodology was achieved in predicting self-reported anxiety symptoms in our dataset, with RMSE/MAE of 9.94/7.88. Results are discussed in relation to clinical and technical limitations, and potential improvements for future work are proposed.
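Of the appearance descriptors listed above (LBP, HOG, LPQ, VGG), the basic Local Binary Pattern is compact enough to sketch. The following is an illustrative 8-neighbour implementation, not the exact LBP variant used in the thesis: each interior pixel receives an 8-bit code whose bits record which neighbours are at least as bright as the centre.

```python
import numpy as np


def lbp_image(img):
    """Basic 8-neighbour Local Binary Patterns over a grayscale image."""
    img = np.asarray(img, float)
    h, w = img.shape
    center = img[1:-1, 1:-1]                      # interior pixels only
    code = np.zeros(center.shape, dtype=np.uint8)
    # Neighbour offsets, one per bit, walking clockwise from top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        nbr = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (nbr >= center).astype(np.uint8) << bit
    return code
```

In a pipeline like the one described, histograms of these codes over image blocks would form the feature vector handed to the classifier.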
APA, Harvard, Vancouver, ISO und andere Zitierweisen
43

Järrendahl, Hannes. „Automatic Detection of Anatomical Landmarks in Three-Dimensional MRI“. Thesis, Linköpings universitet, Datorseende, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-130944.

Der volle Inhalt der Quelle
Annotation:
Detection and positioning of anatomical landmarks, also called points of interest (POI), is a recurring concern in medical image processing. Different measures and automatic image analyses are often based directly on the positions of such points, e.g. in organ segmentation or tissue quantification. Manual positioning of these landmarks is a time-consuming and resource-demanding process. In this thesis, a general method for positioning anatomical landmarks is outlined, implemented and evaluated. The evaluation is limited to three POI: the left femur head, the right femur head and vertebra T9. These POI are used to define the range of the abdomen in order to measure the amount of abdominal fat in 3D data acquired with quantitative magnetic resonance imaging (MRI). With more detailed information about abdominal body fat composition, medical diagnoses can be issued with higher confidence. Example applications include identifying patients at high risk of developing metabolic or catabolic disease and characterising the effects of different interventions, e.g. training, bariatric surgery and medications. The proposed method is shown to be highly robust and accurate for positioning of the left and right femur heads. Due to insufficient performance on T9 detection, a modified method is proposed for T9 positioning. The modified method shows promise of accurate and repeatable results but must be evaluated more extensively before further conclusions can be drawn.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
44

Boes, Christoph. „Active automatic chassis actuation for an excavator“. Technische Universität Dresden, 2020. https://tud.qucosa.de/id/qucosa%3A71224.

Der volle Inhalt der Quelle
Annotation:
This paper presents an electrohydraulic control system that stabilizes the chassis of a mobile machine driving across an off-road ground profile. The active hydraulic suspension system is based on new electronics, software and control architectures, and on the use of state-of-the-art industrial components. The paper shows that the static and dynamic performance of the system is dominated by the servo valve, the central component of the system.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
45

Fontana, André Luis Fonseca. „Estimates of changes time space adjacent to roads in the Amazon: case study BR 422“. Universidade Federal do Ceará, 2011. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=8667.

Der volle Inhalt der Quelle
Annotation:
Conselho Nacional de Desenvolvimento Científico e Tecnológico
This work presents a method for generating estimates of temporal changes in the area surrounding a highway located in the Amazon, using the cellular automata technique with terrain attributes as the only explanatory variables. The proposed model uses vector images (obtained from Brazil's National Institute for Space Research), which are converted to grid-type files (raster images) representing a historical series of spatial changes in the study region. The aim is to assist decision makers in meeting the requirements of CONAMA Resolution 01/86 on environmental impacts, specifically the construction of models that consider scenarios with and without the project, so that road construction/rehabilitation can comply with legal norms and minimize potential environmental and social impacts. The model generated from the cellular automata showed promise in producing future estimates of deforestation and proved a good quantitative and qualitative indicator to support decision making that weighs the future deforestation caused by the construction and/or paving of a road in the Amazon.
Este trabalho apresenta um método para a estimativa de mudanças espaço-temporais no entorno de uma rodovia localizada na Amazônia, utilizando para tanto a técnica Autômatos Celulares adaptada em ambiente SIG, onde as variáveis explicativas do modelo serão somente os atributos do terreno. O modelo proposto usa imagens vetoriais (obtidas junto ao Instituto Nacional de Pesquisas Espaciais) que posteriormente são convertidas para arquivos tipo grid – em formato raster, com a série histórica das mudanças espaciais na região objeto de estudo. Espera-se auxiliar os tomadores de decisão no atendimento das solicitações da resolução CONAMA 01/86 relativas à concepção de modelos que considerem cenários com e sem o empreendimento, e que os processos de construção/recuperação de rodovias possam ser realizados atendendo às normas legais, visando minimizar os potenciais impactos socioambientais. O modelo gerado a partir dos ACs mostrou-se promissor na geração de estimativas futuras de desmatamento e um bom indicador quantitativo e qualitativo para suporte no processo de tomada de decisão que pondere o desmatamento futuro a ser causado pela construção e/ou pavimentação de uma rodovia na Amazônia.
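The abstract does not give the transition rules of the thesis model. As a hedged illustration of the general idea of a cellular-automaton land-change model (a toy rule, not the thesis model, whose explanatory variables are terrain attributes), one can let the clearing probability of each forested cell decay with distance to a vertical road and grow with the number of already-cleared neighbours:

```python
import numpy as np


def simulate_deforestation(forest, road_col, steps, base_p=0.2, decay=0.3, seed=0):
    """Toy deforestation CA: True = forest, False = cleared.

    A forested cell is cleared with probability base_p * exp(-decay * d)
    * (1 + cleared 4-neighbours), where d is the distance to the road
    column. Boundaries wrap (toroidal) for simplicity.
    """
    rng = np.random.default_rng(seed)
    grid = np.asarray(forest, bool).copy()
    rows, cols = grid.shape
    dist = np.abs(np.arange(cols) - road_col) * np.ones((rows, 1))
    for _ in range(steps):
        cleared = ~grid
        nbrs = (np.roll(cleared, 1, 0).astype(int) + np.roll(cleared, -1, 0)
                + np.roll(cleared, 1, 1) + np.roll(cleared, -1, 1))
        p = base_p * np.exp(-decay * dist) * (1 + nbrs)
        grid &= rng.random(grid.shape) >= p   # each forest cell survives or clears
    return grid
```

Run on an all-forest grid, the rule reproduces the qualitative pattern the thesis studies: clearing concentrates in a band along the road and spreads outward over time.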
APA, Harvard, Vancouver, ISO und andere Zitierweisen
46

Sinclair, Rhona Ann. „Pre-clinical evaluation of the forces during limb lengthening using manual and automated devices“. Thesis, Brunel University, 2011. http://bura.brunel.ac.uk/handle/2438/7669.

Der volle Inhalt der Quelle
Annotation:
Limb lengthening procedures use fixation devices to extend the constantly regenerating bone and surrounding soft tissues. Automated devices have been developed that aim to provide a more gradual tissue extension, resulting in better quality of treatment for the patient. Benefits include pain reduction and probable enhanced tissue outcomes. The development of one such new smart lengthening device is described. An integrated numerical model of tissue mechanics during lengthening is presented. It represents the mechanical environment in which the devices extend. The mechanism of the automated device is also modelled using Matlab software and validation was achieved through experimental testing. Validation of the tissue model includes the design of an experimental hydraulic system with the ability to control the peak loads and relaxation over time. A simplified mechanobiological model for the longer term healing effects is proposed. Calibration of the tissue model to clinical data allows for direct comparison of the load and extension of identical tissues, one being lengthened by a traditional device, the other an automated device. This simulation can be extended to include a range of lengthening rates and frequencies of distraction alongside various patient dependent tissue properties. The models also provide the opportunity to assess the effects of iterative changes to the device parameters (such as stiffness) on its performance as well as analyse the effect that these changes have on tissue extension and loading. Use of these models to optimise the device design alongside optimisation of the extension regime can result in improved device design and consequently improved patient outcomes.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
47

Alhadabi, Amal Mohammed. „AUTOMATED GROWTH MIXTURE MODEL FITTING AND CLASSES HETEROGENEITY DEDUCTION: MONTE CARLO SIMULATION STUDY“. Kent State University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=kent1615986232296185.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
48

Alomari, Mohammad Hani. „Engineering system design for automated space weather forecast : designing automatic software systems for the large-scale analysis of solar data, knowledge extraction and the prediction of solar activities using machine learning techniques“. Thesis, University of Bradford, 2009. http://hdl.handle.net/10454/4248.

Der volle Inhalt der Quelle
Annotation:
Coronal Mass Ejections (CMEs) and solar flares are energetic events taking place at the Sun that can affect space weather and the near-Earth environment through the release of vast quantities of electromagnetic radiation and charged particles. Solar active regions are the areas where most flares and CMEs originate. Studying the associations among sunspot groups, flares, filaments, and CMEs helps in understanding the possible cause-and-effect relationships between these events and features. Forecasting space weather in a timely manner is important for protecting technological systems and human life on Earth and in space. The research presented in this thesis introduces novel, fully computerised, machine learning-based decision rules and models that can be used within a system design for automated space weather forecasting. The system design consists of three stages: (1) designing computer tools to find the associations among sunspot groups, flares, filaments, and CMEs; (2) applying machine learning algorithms to the association datasets; and (3) studying the evolution patterns of sunspot groups using time-series methods. Machine learning algorithms are used to provide computerised learning rules and models that enable the system to deliver automated prediction of CMEs, flares, and evolution patterns of sunspot groups. These numerical rules are extracted from the characteristics, associations, and time-series analysis of the available historical solar data. The training of the machine learning algorithms is based on datasets created by investigating the associations among sunspots, filaments, flares, and CMEs. Evolution patterns of sunspot areas and McIntosh classifications are analysed using a statistical machine learning method, namely the Hidden Markov Model (HMM).
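The abstract names the Hidden Markov Model as the statistical tool for the sunspot evolution patterns without detailing it. As a minimal illustration of the machinery involved (not the thesis code), the forward algorithm that scores an observation sequence under a discrete HMM is only a few lines:

```python
import numpy as np


def hmm_forward(pi, A, B, obs):
    """Forward algorithm for a discrete HMM: returns P(obs | model).

    pi : (n,)   initial state probabilities
    A  : (n, n) transition matrix, A[i, j] = P(state j | state i)
    B  : (n, m) emission matrix, B[i, k] = P(symbol k | state i)
    obs: sequence of observed symbol indices
    """
    alpha = pi * B[:, obs[0]]            # initialise with the first symbol
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]    # propagate states, absorb next symbol
    return float(alpha.sum())
```

In the thesis setting, the hidden states would correspond to stages of sunspot-group evolution and the observed symbols to McIntosh classes or binned sunspot areas.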
APA, Harvard, Vancouver, ISO und andere Zitierweisen
49

Alomari, Mohammad H. „Engineering System Design for Automated Space Weather Forecast. Designing Automatic Software Systems for the Large-Scale Analysis of Solar Data, Knowledge Extraction and the Prediction of Solar Activities Using Machine Learning Techniques“. Thesis, University of Bradford, 2009. http://hdl.handle.net/10454/4248.

Der volle Inhalt der Quelle
Annotation:
Coronal Mass Ejections (CMEs) and solar flares are energetic events taking place at the Sun that can affect space weather and the near-Earth environment through the release of vast quantities of electromagnetic radiation and charged particles. Solar active regions are the areas where most flares and CMEs originate. Studying the associations among sunspot groups, flares, filaments, and CMEs helps in understanding the possible cause-and-effect relationships between these events and features. Forecasting space weather in a timely manner is important for protecting technological systems and human life on Earth and in space. The research presented in this thesis introduces novel, fully computerised, machine learning-based decision rules and models that can be used within a system design for automated space weather forecasting. The system design consists of three stages: (1) designing computer tools to find the associations among sunspot groups, flares, filaments, and CMEs; (2) applying machine learning algorithms to the association datasets; and (3) studying the evolution patterns of sunspot groups using time-series methods. Machine learning algorithms are used to provide computerised learning rules and models that enable the system to deliver automated prediction of CMEs, flares, and evolution patterns of sunspot groups. These numerical rules are extracted from the characteristics, associations, and time-series analysis of the available historical solar data. The training of the machine learning algorithms is based on datasets created by investigating the associations among sunspots, filaments, flares, and CMEs. Evolution patterns of sunspot areas and McIntosh classifications are analysed using a statistical machine learning method, namely the Hidden Markov Model (HMM).
APA, Harvard, Vancouver, ISO und andere Zitierweisen
50

Löfgren, Björn. „Kinematic Control of Redundant Knuckle Booms with Automatic Path Following Functions“. Doctoral thesis, KTH, Mekatronik, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-11495.

Der volle Inhalt der Quelle
Annotation:
To stay competitive internationally, the Swedish forestry sector must increase its productivity by 2 to 3% annually. There are a variety of ways in which productivity can be increased. One option is to develop remote-controlled or unmanned machines, thus reducing the need for operator intervention. Another option, and one that could be achieved sooner than full automation, would be to make some functions semi-automatic. Semi-automatic operation of the knuckle boom and felling head in particular would create "mini-breaks" for the operators, thereby reducing mental and physiological stress. It would also reduce training time and increase the productivity of a large proportion of operators. The objective of this thesis work has been to develop and evaluate algorithms for simplified boom control on forest machines. Algorithms for so-called boom tip control, as well as automatic boom functions, have been introduced. The algorithms solve the inverse kinematics of kinematically redundant knuckle booms while maximizing lifting capacity. The boom tip control was evaluated, first by means of a kinematic simulation and then in a dynamic forest machine simulator. The results show that boom tip control is easier to learn than conventional control, leading to savings in production due to shorter learning times and operators reaching full production sooner. Boom tip control also creates less mental strain than conventional control, which in the long run will reduce mental stress on operators of forest machines. The maximum lifting capacity algorithm was then developed further to enable TCP path-tracking, which was also implemented and evaluated in the simulator. An evaluation of the fidelity of the dynamic forest machine simulator was performed to ensure the validity of the results achieved with the simplified boom control.
The results from the study show good fidelity between the forest machine simulator and a real forest machine, and that the results from simulations are reliable. It is also concluded that the simulator was a useful research tool for the studies performed in this thesis work. The thesis had two overall objectives. The first was to provide industry and the forestry sector with usable and verified ideas and results in the area of automation. This has been accomplished with the implementation of simplified boom control and semi-automation on a forwarder in a recently started joint venture between a hydraulics manufacturer, a forest machine manufacturer and a forest enterprise. The second objective was to strengthen the research and development links between the forestry sector and technical university research. This has been accomplished through the thesis work itself and through a number of courses, projects and Master's theses over the last three years. In total, about 150 students have studied forest machine technology in one way or another.
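The thesis algorithms resolve the redundant inverse kinematics while maximizing lifting capacity; the abstract does not reproduce them. As a generic, purely illustrative sketch of boom-tip control for a planar 3-joint boom (redundant, since the tip has only two task coordinates), a damped-least-squares update can be written as:

```python
import numpy as np


def fk(q, lengths):
    """Planar forward kinematics: boom-tip (x, y) of a serial linkage."""
    a = np.cumsum(q)                     # absolute link angles
    L = np.asarray(lengths, float)
    return np.array([np.sum(L * np.cos(a)), np.sum(L * np.sin(a))])


def jacobian(q, lengths):
    a = np.cumsum(q)
    L = np.asarray(lengths, float)
    J = np.zeros((2, len(q)))
    for i in range(len(q)):              # joint i moves links i..n-1
        J[0, i] = -np.sum(L[i:] * np.sin(a[i:]))
        J[1, i] = np.sum(L[i:] * np.cos(a[i:]))
    return J


def ik_boom_tip(target, q0, lengths, iters=300, gain=0.5, damping=1e-2):
    """Damped least-squares IK: the minimum-norm update implicitly
    resolves the redundancy (the thesis instead resolves it to maximize
    lifting capacity)."""
    q = np.array(q0, float)
    for _ in range(iters):
        e = np.asarray(target, float) - fk(q, lengths)
        if np.linalg.norm(e) < 1e-9:
            break
        J = jacobian(q, lengths)
        # q += gain * J^T (J J^T + damping * I)^(-1) e
        q += gain * (J.T @ np.linalg.solve(J @ J.T + damping * np.eye(2), e))
    return q
```

The damping term keeps the update bounded near kinematic singularities (e.g. a fully stretched boom), which is why damped least squares is a common choice for this kind of tip control.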
APA, Harvard, Vancouver, ISO und andere Zitierweisen
Wir bieten Rabatte auf alle Premium-Pläne für Autoren, deren Werke in thematische Literatursammlungen aufgenommen wurden. Kontaktieren Sie uns, um einen einzigartigen Promo-Code zu erhalten!

Zur Bibliographie