
Dissertations on the topic "Machine calibration"


Consult the top 50 dissertations for research on the topic "Machine calibration".

Next to every entry in the list of references there is an "Add to bibliography" option. Use it, and your bibliographic reference for the selected work will be formatted automatically according to the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its abstract online, provided the relevant parameters are available in the metadata.

Browse dissertations from a wide range of disciplines and compile a correct bibliography.

1

Haussamer, Nicolai. „Model Calibration with Machine Learning“. Master's thesis, University of Cape Town, 2018. http://hdl.handle.net/11427/29451.

Abstract:
This dissertation focuses on the application of neural networks to financial model calibration. It provides an introduction to the mathematics of basic neural networks and training algorithms. Two simplified experiments based on the Black-Scholes and constant elasticity of variance models are used to demonstrate the potential usefulness of neural networks in calibration. In addition, the main experiment features the calibration of the Heston model using model-generated data. In the experiment, we show that the calibrated model parameters reprice a set of options to a mean relative implied volatility error of less than one per cent. The limitations and shortcomings of neural networks in model calibration are also investigated and discussed.
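To make the approach concrete, the following is a minimal, hypothetical sketch of the inverse-map idea described above: a toy forward pricer stands in for the Heston model, model-generated data are created, and a small feed-forward network learns to map quotes back to parameters. The pricer, parameter ranges and all names are illustrative assumptions, not the author's code.

```python
# Illustrative sketch (not the thesis code): learn an inverse map from
# model-generated "option quotes" back to the model parameters.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def toy_pricer(params, strikes):
    """Stand-in forward model: maps parameters to a smile-like curve.
    A real experiment would price options under Heston instead."""
    level, skew, curv = params
    return level + skew * (strikes - 1.0) + curv * (strikes - 1.0) ** 2

strikes = np.linspace(0.8, 1.2, 11)
params = rng.uniform([0.1, -0.5, 0.0], [0.4, 0.5, 2.0], size=(5000, 3))
quotes = np.array([toy_pricer(p, strikes) for p in params])

# Train a small feed-forward network: quotes -> parameters
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(quotes, params)

# Calibration then amounts to one forward pass on an observed quote vector
observed = toy_pricer([0.25, 0.1, 1.0], strikes)
print("recovered parameters:", net.predict(observed.reshape(1, -1)))
```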
2

Stark, Per. „Machine vision camera calibration and robot communication“. Thesis, University West, Department of Technology, Mathematics and Computer Science, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:hv:diva-1351.

Abstract:

This thesis is part of a larger project included in the European project AFFIX. The reason for the project is to try to develop a new method to assemble an aircraft engine part so that the weight and manufacturing costs are reduced. The proposal is to weld sheet metal parts instead of using cast parts. A machine vision system is suggested to be used in order to detect the joints for the weld assembly operation of the sheet metal. The final system aims to locate a hidden curve on an object. The coordinates for the curve are calculated by the machine vision system and sent to a robot. The robot should create and follow a path by using the coordinates. The accuracy for locating the curve to perform an approved weld joint must be within ±0.5 mm. This report investigates the accuracy of the camera calibration and the positioning of the robot. It also touches on the importance of good lighting when obtaining images for a vision system, and it covers the development of a robot program that receives these coordinates and transforms them into robot movements. The camera calibration is done in a toolbox for MATLAB and it extracts the intrinsic camera parameters such as the distance between the centre of the lens and the optical detector in the camera (f), the lens distortion parameters and the principal point. It also returns the location and orientation of the camera at each image obtained during the calibration, the extrinsic parameters. The intrinsic parameters are used when translating between image coordinates and camera coordinates, and the extrinsic parameters are used when translating between camera coordinates and world coordinates. The results of this project are a transformation matrix that translates the robot's position into the camera's position, and a robot program that can receive a large number of coordinates, store them and create a path to move along for the weld application.
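A comparable intrinsic/extrinsic calibration can be sketched with OpenCV's chessboard routine (the thesis itself used a MATLAB toolbox); the pattern size, square size and image path below are assumptions for illustration.

```python
# Analogous sketch using OpenCV: recover intrinsics (focal length, distortion,
# principal point) and per-image extrinsics from chessboard images.
import glob
import cv2
import numpy as np

pattern = (9, 6)            # inner corners of the chessboard (assumed)
square = 25.0               # square size in mm (assumed)

# 3-D corner coordinates in the board frame (Z = 0 plane)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_points, img_points, size = [], [], None
for path in glob.glob("calib_images/*.png"):   # placeholder path
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    ok, corners = cv2.findChessboardCorners(gray, pattern)
    if ok:
        obj_points.append(objp)
        img_points.append(corners)
        size = gray.shape[::-1]

# Intrinsic matrix K, distortion coefficients, and extrinsics per view
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, size, None, None)
print("reprojection RMS:", rms, "\nintrinsics:\n", K)
```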

3

Alvarez, Teleña S. „Systematic trading : calibration advances through machine learning“. Thesis, University College London (University of London), 2015. http://discovery.ucl.ac.uk/1461997/.

Abstract:
Systematic trading in finance uses computer models to define trade goals, risk controls and rules that can execute trade orders in a methodical way. This thesis investigates how performance in systematic trading can be crucially enhanced by both i) persistently reducing the bid-offer spread quoted by the trader through optimized and realistically backtested strategies and ii) improving the out-of-sample robustness of the strategy selected through the injection of theory into the typically data-driven calibration processes. While doing so, it brings to the foreground sound scientific reasons that, for the first time to my knowledge, technically underpin popular academic observations about the recent nature of the financial markets. The thesis conducts consecutive experiments across strategies within the three important building blocks of systematic trading: a) execution, b) quoting and c) risk-reward, allowing me to progressively generate more complex and accurate backtested scenarios as recently demanded in the literature (Cahan et al. (2010)). The three experiments conducted are:

1. Execution: an execution model based on support vector machines. The first experiment is deployed to improve the realism of the other two. It analyses a popular model of execution: the volume weighted average price (VWAP). The VWAP algorithm aims to split the size of an order across the trading session according to the expected intraday volume's profile, since the activity in the markets typically resembles convex seasonality – with more activity around the open and the closing auctions than along the rest of the day. In doing so, the main challenge is to provide the model with a reasonable expected profile. After proving in my data sample that two simple static approaches to the profile overcome the PCA-ARMA from Bialkowski et al. (2008) (a popular two-fold model composed of a dynamic component around an unsupervised learning structure), a further combination of both through an index based on supervised learning is proposed. The Sample Sensitivity Index hence successfully allows estimating the expected volume's profile more accurately by selecting those ranges of time where the model shall be less sensitive to past data, through the identification of patterns via support vector machines. Only once the intraday execution risk has been defined can the quoting policy of a mid-frequency (in general, up to a week) hedging strategy be accurately analysed.

2. Quoting: a quoting model built upon particle swarm optimization. The second experiment analyses, for the first time to my knowledge, how to achieve the disruptive 50% bid-offer spread discount observed in Menkveld (2013) without increasing the risk profile of a trading agent. The experiment depends crucially on a series of variables of which market impact and slippage are typically the most difficult to estimate. By adapting the market impact model in Almgren et al. (2005) to the VWAP developed in the previous experiment and by estimating its slippage through its errors' distribution, a framework within which the bid-offer spread can be assessed is generated. First, a full-replication spread (the one set out following the strict definition of a product in order to hedge it completely) is calculated and fixed as a benchmark. Then, by allowing the strategy to benefit from a lower market impact at the cost of assuming deviation risk (tracking error and tail risk), a non-full-replication spread is calibrated through particle swarm optimization (PSO) as in Diez et al. (2012) and compared with the benchmark. Finally, it is shown that the latter can reach a discount of 50% with respect to the benchmark if a certain number of trades is granted. This typically occurs on the most liquid securities. This result not only underpins Menkveld's observations but also points out that there is room for further reductions. When seeking additional performance, once the quoting policy has been defined, a further layer with a calibrated risk-reward policy shall be deployed.

3. Risk-Reward: a calibration model defined within a Q-learning framework. The third experiment analyses how the calibration process of a risk-reward policy can be enhanced to achieve a more robust out-of-sample performance – a cornerstone in quantitative trading. It successfully gives a response to the literature that recently focusses on the detrimental role of overfitting (Bailey et al. (2013a)). The experiment was motivated by the assumption that techniques underpinned by financial theory should show a better behaviour (a lower deviation between in-sample and out-of-sample performance) than the classical, purely data-driven processes. As such, both approaches are compared within a framework of active trading upon a novel indicator. The indicator, called the Expectations' Shift, is rooted in the expectations of the markets' evolution embedded in the dynamics of the prices. The crucial challenge of the experiment is the injection of theory into the calibration process. This is achieved through the usage of reinforcement learning (RL). RL is an area of ML inspired by behaviourist psychology, concerned with how software agents take decisions in a specific environment incentivised by a policy of rewards. By analysing the Q-learning matrix that collects the set of states/actions learnt by the agent within the environment defined by each combination of parameters considered within the calibration universe, the rationale that an autonomous agent would have learnt in terms of risk management can be generated. Finally, by then selecting the combination of parameters whose attached rationale is closest to that of the portfolio manager, a data-driven solution that converges to the theory-driven solution can be found, and this is shown to successfully outperform out-of-sample the classical approaches followed in finance.

The thesis contributes to science by addressing what techniques could underpin recent academic findings about the nature of the trading industry for which a scientific explanation had not yet been given:

• A novel agent-based approach that allows for a robust out-of-sample performance by crucially providing the trader with a way to inject financial insights into the generally data-driven-only calibration processes. In this way it benefits from surpassing the generic model limitations present in the literature (Bailey et al. (2013b), Schorfheide and Wolpin (2012), Van Belle and Kerr (2012) or Weiss and Kulikowski (1991)) by finding a point where theory-driven patterns (the trader's priors tend to enhance out-of-sample robustness) merge with data-driven ones (those that allow latent information to be exploited).

• The provision of a technique that, to the best of my knowledge, explains for the first time how to reduce the bid-offer spread quoted by a traditional trader without modifying her risk appetite. This reduction had not previously been addressed in the literature, in spite of the fact that the increasing regulation against the assumption of risk by market makers (e.g. the Dodd–Frank Wall Street Reform and Consumer Protection Act) coincides with the aggressive discounts observed by Menkveld (2013). As a result, this thesis could further contribute to science by serving as a framework to conduct future analyses in the context of systematic trading.

• The completion of a mid-frequency trading experiment with high-frequency execution information. It is shown how the latter can have a significant effect on the former, not only through the erosion of its performance but, more subtly, by changing its entire strategic design (both optimal composition and parameterization). This tends to be highly disregarded by the financial literature.

More importantly, the methodologies disclosed herein have been crucial to underpin the setup of a new unit in the industry, BBVA's Global Strategies & Data Science. This disruptive, global and cross-asset team gives an enhanced role to science by successfully becoming chiefly responsible for the risk management of the Bank's strategies both in electronic trading and electronic commerce. Other contributions include: the provision of a novel risk measure (flowVaR); the proposal of a novel trading indicator (Expectations' Shift); and the definition of a novel index that improves the estimation of the intraday volume's profile (Sample Sensitivity Index).
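As a purely illustrative aside, the tabular Q-learning ingredient of the third experiment can be sketched in a few lines on a toy environment; the states, actions and reward below are stand-ins and bear no relation to the author's trading environment.

```python
# Minimal tabular Q-learning sketch (generic, not the thesis environment):
# the learnt state/action table can later be inspected to read off the
# "rationale" the agent has acquired, as described in experiment 3.
import numpy as np

n_states, n_actions = 10, 3
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1
rng = np.random.default_rng(1)

def step(state, action):
    """Toy environment: rewards staying near a target state (assumption)."""
    next_state = (state + action - 1) % n_states      # actions: -1, 0, +1
    reward = -abs(next_state - n_states // 2)
    return next_state, reward

state = 0
for _ in range(20000):
    if rng.random() < eps:                     # epsilon-greedy exploration
        action = int(rng.integers(n_actions))
    else:
        action = int(np.argmax(Q[state]))
    nxt, r = step(state, action)
    # Standard Q-learning update
    Q[state, action] += alpha * (r + gamma * Q[nxt].max() - Q[state, action])
    state = nxt

print("greedy action per state:", np.argmax(Q, axis=1))
```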
4

Ulmer, Bernard C. Jr. „Fabrication and calibration of an open architecture diamond turning machine“. Thesis, Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/17120.

5

Parkinson, Simon. „Construction of machine tool calibration plans using domain-independent automated planning“. Thesis, University of Huddersfield, 2014. http://eprints.hud.ac.uk/id/eprint/20329/.

Abstract:
The evolution in precision manufacturing has resulted in the requirement to produce and maintain more accurate machine tools. This new requirement, coupled with the desire to reduce machine tool downtime, places emphasis on the calibration procedure during which the machine's capabilities are assessed. Machine tool downtime can cost as much as $120 per hour and is significant for manufacturers because the machine will be unavailable for manufacturing use, therefore wasting the manufacturer's time and potentially increasing lead-times for clients. In addition to machine tool downtime, the uncertainty of measurement, due to the schedule of the calibration plan, has significant implications on tolerance conformance, resulting in an increased possibility of false acceptance and rejection of machined parts. Currently calibrations are planned based on expert knowledge and there are no intelligent tools to aid in producing optimal calibration plans. This thesis describes a method of intelligently constructing calibration plans, optimising to reduce machine tool downtime and the estimated uncertainty of measurement due to the plan schedule. This resulted in the production of a novel, extensible domain model that encodes the decision-making capabilities of a subject expert. Encoding the knowledge in PDDL2 requires the discretization of non-linear resources, such as continuous temperature change. Empirical analysis has shown that when this model is used alongside state-of-the-art automated planning tools, it is possible to achieve a reduction in machine tool downtime greater than 10% (12:30 to 11:18) over expert-generated plans. In addition, the estimated uncertainty due to the schedule of the plan can be reduced by 59% (48 µm to 20 µm). Further experiments on a PC architecture investigate the trade-off when optimising calibration plans for both time and the uncertainty of measurement. These experiments demonstrated that it is possible to optimise both metrics, reaching a compromise that is on average 5% worse than the best-known solution for each individual metric. Additional experiments using a High Performance Computing architecture show that on average the optimality of calibration plans can be improved by 4%; a potential saving of 30 minutes for a single machine and 10 hours for a company with 20 machine tools. This could represent a financial saving in excess of $1200.
6

Nichols, Scott A. „Improvement of the camera calibration through the use of machine learning techniques“. [Gainesville, Fla.] : University of Florida, 2001. http://etd.fcla.edu/etd/uf/2001/anp1587/nichols%5Fthesis.pdf.

Abstract:
Thesis (M.S.)--University of Florida, 2001.
Title from first page of PDF file. Document formatted into pages; contains vii, 45 p.; also contains graphics. Vita. Includes bibliographical references (p. 43-44).
7

Herron, Christopher, and André Zachrisson. „Machine Learning Based Intraday Calibration of End of Day Implied Volatility Surfaces“. Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-273419.

Abstract:
The implied volatility surface plays an important role for front office and risk management functions at Nasdaq and other financial institutions, which require intraday mark-to-market of derivative books in order to properly value their instruments and measure risk in trading activities. Based on the aforementioned business needs, being able to calibrate an end-of-day implied volatility surface based on new market information is a sought-after capability. In this thesis a statistical learning approach is used to calibrate the implied volatility surface intraday. This is done by using OMXS30 implied volatility surface data from 2019 in combination with market information from close-to-at-the-money options and feeding it into three machine learning models. The models, including a feed-forward neural network, a recurrent neural network and a Gaussian process, were compared based on optimal input and data preprocessing steps. When comparing the best machine learning model to the benchmark, the performance was similar, indicating that the calibration approach did not offer much improvement. However, the calibrated models had a slightly lower spread and average error compared to the benchmark, indicating that there is potential in using machine learning to calibrate the implied volatility surface.
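For readers unfamiliar with the third model class, a generic sketch of fitting a Gaussian process to scattered (moneyness, maturity) implied-volatility observations is shown below; the synthetic quotes and kernel choice are assumptions and do not reproduce the OMXS30 experiments.

```python
# Generic sketch of the Gaussian-process variant: fit a GP to scattered
# (moneyness, maturity) -> implied-volatility observations and query the
# surface elsewhere. Synthetic data; not the data set used in the thesis.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.uniform([0.8, 0.1], [1.2, 2.0], size=(200, 2))   # (moneyness, T)
iv = 0.2 + 0.3 * (X[:, 0] - 1.0) ** 2 + 0.05 * np.sqrt(X[:, 1])
iv += rng.normal(scale=0.005, size=iv.shape)              # quote noise

kernel = RBF(length_scale=[0.2, 0.5]) + WhiteKernel(noise_level=1e-4)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, iv)

grid = np.array([[0.9, 0.5], [1.0, 0.5], [1.1, 0.5]])
mean, std = gp.predict(grid, return_std=True)
print("predicted vols:", mean, "+/-", std)
```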
8

Sousa, João Beleza Teixeira Seixas e. „Machine learning Gaussian short rate“. Doctoral thesis, Faculdade de Ciências e Tecnologia, 2013. http://hdl.handle.net/10362/12230.

Abstract:
Dissertation submitted for the degree of Doctor in Statistics and Risk Management
The main theme of this thesis is the calibration of a short rate model under the risk neutral measure. The problem of calibrating short rate models arises as most of the popular models have the drawback of not fitting prices observed in the market, in particular those of the zero coupon bonds that define the current term structure of interest rates. This thesis proposes a risk neutral Gaussian short rate model based on Gaussian processes for machine learning regression, using the Vasicek short rate model as prior. The proposed model fits not only the prices that define the current term structure observed in the market but also all past prices. The calibration is done using market observed zero coupon bond prices exclusively; no other sources of information are needed. This thesis has two parts. The first part contains a set of self-contained finished papers, one already published, another accepted for publication and the others submitted for publication. The second part contains a set of self-contained unsubmitted papers. Although the fundamental work on the papers in part two is finished as well, there is some extra work we want to include before submitting them for publication.
Part I:
- Machine learning Vasicek model calibration with Gaussian processes. In this paper we calibrate the Vasicek interest rate model under the risk neutral measure by learning the model parameters using Gaussian processes for machine learning regression. The calibration is done by maximizing the likelihood of zero coupon bond log prices, using mean and covariance functions computed analytically, as well as likelihood derivatives with respect to the parameters. The maximization method used is conjugate gradients. We stress that the only prices needed for calibration are market observed zero coupon bond prices and that the parameters are directly obtained in the arbitrage-free risk neutral measure.
- One Factor Machine Learning Gaussian Short Rate. In this paper we model the short rate, under the risk neutral measure, as a Gaussian process, conditioned on market observed zero coupon bond log prices. The model is based on Gaussian processes for machine learning, using a single Vasicek factor as prior. All model parameters are learned directly under the risk neutral measure, using zero coupon bond log prices only. The model supports observations of zero coupon bonds with distinct maturities, limited to one observation per time instant. All the supported observations are automatically fitted.
M2A/ISEL financing conference trips; ISEL - financing conference fees; ISEL/IPL the PROTEC scholarship; CMA/FCT/UNL - financing conference trips
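As a simpler point of reference (not the Gaussian-process method of the thesis), the classical way to calibrate Vasicek to an observed zero coupon bond curve is a least-squares fit of the closed-form bond price; the synthetic curve and starting values below are assumptions.

```python
# Classical counterpart sketch: fit the Vasicek zero-coupon bond formula to
# an observed curve by least squares. Synthetic "market" prices throughout.
import numpy as np
from scipy.optimize import least_squares

def vasicek_zcb(params, r0, T):
    """P(0, T) under Vasicek dr = a*(theta - r)dt + sigma*dW (risk neutral)."""
    a, theta, sigma = params
    B = (1.0 - np.exp(-a * T)) / a
    A = np.exp((theta - sigma**2 / (2 * a**2)) * (B - T) - sigma**2 * B**2 / (4 * a))
    return A * np.exp(-B * r0)

maturities = np.array([0.5, 1, 2, 3, 5, 7, 10], dtype=float)
r0 = 0.02
true = (0.8, 0.03, 0.01)
market = vasicek_zcb(true, r0, maturities)      # stand-in for observed prices

res = least_squares(
    lambda p: vasicek_zcb(p, r0, maturities) - market,
    x0=[0.5, 0.02, 0.02],
    bounds=([1e-4, -0.05, 1e-4], [5.0, 0.2, 0.5]))
print("calibrated (a, theta, sigma):", res.x)
```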
9

Solorzano Soria, Ana Maria. „Fire Detectors Based on Chemical Sensor Arrays and Machine Learning Algorithms: Calibration and Test“. Doctoral thesis, Universitat de Barcelona, 2020. http://hdl.handle.net/10803/669584.

Abstract:
In some types of fire, namely smoldering fires or fires involving polymers without flame, gases and volatiles appear before smoke is released. Most of the fatalities registered in fires are caused by the intoxication of the building occupants rather than by burns. Nowadays, conventional fire detectors are based on the detection of smoke or airborne particles. In smoldering fire situations, conventional fire detectors trigger the alarm only after the release of toxic emissions. The early emission of gas in fires opens the possibility of building fire alarm systems with shorter response times than widespread smoke-based detectors. In fact, the sensitivity of gas sensors to combustion products has been demonstrated for many years. However, early works already remarked on the challenge of providing reliable fire detection using chemical sensors. As gas sensors are not specific, they can be calibrated to detect a large variety of fire signatures. But, at the same time, they are also potentially sensitive to any activity that releases volatiles when being performed. Cross-sensitivity to water vapor and other chemical compounds makes gas-based fire alarm systems prone to false positives. For that reason, the development of reliable and robust fire detectors based on gas sensors relies on pattern recognition and machine learning algorithms to discriminate fire from nuisance sensor signatures. The present PhD thesis explores the role of pattern recognition algorithms for fire detection using detectors based exclusively on chemical sensors. Two prototypes based on different types of gas sensors were designed. The sensor selection was performed to be sensitive to combustion products and to capture other volatiles that may help to discriminate fires and nuisances. Machine learning algorithms for the prediction of fire were trained using the standard fire tests established in the European standard EN 54. In addition to those tests, experiments that may induce false alarms were also performed. Two families of machine learning algorithms were explored. The first prediction algorithm is based on partial least squares discriminant analysis and the second set of algorithms is based on support vector machines. Additionally, two new methodologies for cost reduction are presented. The first methodology builds fire detection algorithms using the combination of standard fire tests and a reduced version of those experiments. The reduced versions were performed in a small chamber. The smaller setup allows experiments to be performed in a shorter period of time. In consequence, the number of experiments available to test the models increases, and so does the robustness of the prediction algorithms. The second methodology builds general calibration models using replicates of the same sensor array. The use of different units rejects the variance between sensor arrays and allows the construction of general calibration models. The use of a single model to calibrate sensor array systems enables mass production, resulting in a reduction of production costs.
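The support-vector-machine branch can be illustrated with a generic, hedged sketch: a scaled SVC separating fire from nuisance events on synthetic sensor features (the EN 54 data used in the thesis are not reproduced here; the feature layout is an assumption).

```python
# Sketch of the SVM-based branch: classify fire vs. nuisance events from
# gas-sensor features. Synthetic features stand in for the real test data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 400
# Assumed feature layout: [CO, H2, VOC, humidity] channel responses
nuisance = rng.normal([0.1, 0.1, 0.3, 0.6], 0.1, size=(n, 4))
fire     = rng.normal([0.6, 0.4, 0.5, 0.5], 0.1, size=(n, 4))
X = np.vstack([nuisance, fire])
y = np.r_[np.zeros(n), np.ones(n)]              # 1 = fire

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```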
10

Dutra Calainho, Felipe. „Evaluation of Calibration Methods to Adjust for Infrequent Values in Data for Machine Learning“. Thesis, Högskolan Dalarna, Mikrodataanalys, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:du-28134.

Abstract:
The performance of supervised machine learning algorithms is highly dependent on the distribution of the target variable. Infrequent values are more difficult to predict, as there are fewer examples for the algorithm to learn patterns that contain those values. These infrequent values are a common problem with real data, being the object of interest in many fields such as medical research, finance and economics, just to mention a few. Problems regarding classification have been comprehensively studied. For regression, on the other hand, few contributions are available. In this work, two ensemble methods from classification are adapted to the regression case. Additionally, existing oversampling techniques, namely SmoteR, are tested. Therefore, the aim of this research is to examine the influence of oversampling and ensemble techniques over the accuracy of regression models when predicting infrequent values. To assess the performance of the proposed techniques, two data sets are used: one concerning house prices, while the other regards patients with Parkinson's Disease. The findings corroborate the usefulness of the techniques for reducing the prediction error of infrequent observations. In the best case, the proposed Random Distribution Sample Ensemble reduced the overall RMSE by 8.09% and the RMSE for infrequent values by 6.44% when compared with the best performing benchmark for the housing data set.
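The oversampling idea for rare target values can be sketched roughly as follows; this is an interpolation-based toy in the spirit of SmoteR, not the exact SmoteR algorithm nor the ensembles proposed in the thesis, and all thresholds are assumptions.

```python
# Rough sketch of oversampling for regression: synthesise extra examples by
# interpolating between pairs of "rare" cases, i.e. extreme target values.
import numpy as np

def oversample_rare(X, y, rare_quantile=0.9, n_new=100, rng=None):
    rng = rng or np.random.default_rng(0)
    dev = np.abs(y - np.median(y))
    thr = np.quantile(dev, rare_quantile)
    rare_idx = np.where(dev >= thr)[0]
    X_new, y_new = [], []
    for _ in range(n_new):
        i, j = rng.choice(rare_idx, size=2, replace=True)
        w = rng.random()                       # interpolation weight
        X_new.append(w * X[i] + (1 - w) * X[j])
        y_new.append(w * y[i] + (1 - w) * y[j])
    return np.vstack([X, X_new]), np.concatenate([y, y_new])

X = np.random.default_rng(1).normal(size=(500, 5))
y = X[:, 0] ** 3 + 0.1 * np.random.default_rng(2).normal(size=500)
X_aug, y_aug = oversample_rare(X, y)
print(X.shape, "->", X_aug.shape)
```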
11

Potdar, Akshay Anand. „Reducing the uncertainty of thermal model calibration using on-machine probing and data fusion“. Thesis, University of Huddersfield, 2016. http://eprints.hud.ac.uk/id/eprint/31397/.

Abstract:
Various sources of error hinder the possibility of achieving tight accuracy requirements for high-value manufacturing processes. These are often classified as: pseudo-static geometric errors; non-rigid body errors; thermal errors; and dynamic errors. It is comparatively complicated to obtain an accurate error map for the thermal errors because they are influenced by various factors with different materials, time constants, asymmetric heating sources, machining processes, environmental effects, etc. Their transient nature and complex interaction mean that they are relatively difficult to compensate using pre-calibration methods. For error correction, the magnitude and sign of the error must first be measured or estimated. Pre-calibrated thermal compensation has been shown to be an effective means of improving accuracy. However, the time required to acquire the calibration data is prohibitive, reducing the uptake of this technology in industrial applications. Furthermore, changing conditions of the machine or factory environment are not adequately accommodated by pre-calibrated compensation, leading to degradation in performance. The supplementary use of on-machine probing, which is often installed for process control, can help to achieve better results. During the probing operation, the probe is carried by the machine tool axes. Therefore, the measurement data that it takes inevitably include both the probing errors and those originating from the inaccuracies of the machine tool, as well as any deviation in the part or artefact being measured. Each of these error sources must be understood and evaluated to be able to establish a measurement with a stated uncertainty. This is a vital preliminary step to ensure that the calibration parameters of the thermal model are not contaminated by other effects. This thesis investigates the various sources of measurement uncertainty for probing on a CNC machine tool and quantifies their effects in the particular case where on-machine probing is used to calibrate the thermal error model. Thermal errors constitute the largest uncertainty source for on-machine probing. The maximum observed thermal displacement error was approximately 220 μm for both the X- and Z-axis heating tests at 100% speed. To reduce the influence of this uncertainty source, a sensor data fusion model using an artificial neural network and principal component analysis was developed. The output of this model showed better than 90% correlation with the measured thermal displacement. This data fusion model was developed for the temperature and FBG sensors. To facilitate the integration of the sensors and to ease communication with the machine tool controller, a modular machine tool structural monitoring system using the LabVIEW environment was developed. Finally, to improve the performance of the data fusion model in order to reduce the thermal uncertainty, a novel photo-microsensor based sensing head for displacement measurement is presented and analysed in detail. This prototype sensor has a measurement range of 20 μm and a resolution of 21 nm.
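The data-fusion step (dimensionality reduction of many temperature channels followed by a neural-network regression onto thermal displacement) might be sketched as below; the synthetic heating curves and network size are assumptions, and the thesis additionally fuses FBG sensors.

```python
# Sketch of the data-fusion idea: compress many temperature-sensor channels
# with PCA and regress thermal displacement with a small neural network.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 500)                       # hours of a heating test
temps = np.column_stack([20 + 5 * (1 - np.exp(-t / tau))
                         for tau in np.linspace(0.5, 3.0, 12)])
temps += rng.normal(scale=0.05, size=temps.shape)
disp = 0.22 * (1 - np.exp(-t / 2.0))             # mm, toy thermal drift

model = make_pipeline(StandardScaler(),
                      PCA(n_components=3),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                                   random_state=0))
model.fit(temps, disp)
print("fit correlation:", np.corrcoef(model.predict(temps), disp)[0, 1])
```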
12

Flynn, Joseph. „The identification of geometric errors in five-axis machine tools using the telescoping magnetic ballbar“. Thesis, University of Bath, 2016. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.698982.

Abstract:
To maximise productivity and reduce scrap in high-value, low-volume production, five-axis machine tool (5A-MT) motion accuracy must be verified quickly and reliably. Numerous metrology instruments have been developed to measure errors arising from geometric imperfections within and between machine tool axes (amongst other sources). One example is the TMBB, which is becoming an increasingly popular instrument for measuring both linear and rotary axis errors. This research proposes a new TMBB measurement technique to rapidly, accurately and reliably measure all position-independent rotary axis errors in a 5A-MT. In this research two literature reviews have been conducted. The findings informed the subsequent development of a virtual machine tool (VMT). This VMT was used to capture the effects of rotary and linear axis position-independent geometric errors, and apparatus set-up errors, on a variety of candidate measurement routines. This new knowledge then informed the design of an experimental methodology to capture, on a commercial 5A-MT, specific phenomena that were observed within the VMT. Finally, statistical analysis of experimental measurements facilitated a quantification of the repeatability, strengths and limitations of the final testing method concept. The major contribution of this research is the development of a single set-up testing procedure to identify all 5A-MT rotary axis location errors, whilst remaining robust in the presence of set-up and linear axis location errors. Additionally, a novel variance-based sensitivity analysis approach was used to design the testing procedures. By considering the effects of extraneous error sources (set-up and linear location) in the design and validation phases, an added robustness was introduced. Furthermore, this research marks the first usage of Monte Carlo uncertainty analysis in conjunction with rotary axis TMBB testing. Experimental evidence has shown that the proposed corrections for set-up and linear axis errors are highly effective and completely indispensable in rotary axis testing of this kind. However, further development of the single set-up method is necessary, as geometric errors cannot always be measured identically at different testing locations. This has highlighted the importance of considering the influence of 5A-MT component errors on testing results, as the machine tool axes cannot necessarily be modelled as straight lines.
13

Landström, Per, and John Sandström. „Classification framework for monitoring calibration of autonomous waist-actuated mine vehicles“. Thesis, Örebro universitet, Institutionen för naturvetenskap och teknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-84453.

Abstract:
For autonomous mine vehicles that perform the "load-haul-dump" (LHD) cycle to operate properly, calibration of the sensors they rely on is crucial. The LHD cycle refers to a vehicle that loads material, hauls the material along a route and dumps it at an extraction point. Many of these vehicles are waist-actuated, meaning that the front and rear parts of the machine are joined at an articulation point. The focus of this thesis is on developing and implementing two different frameworks that distinguish patterns in routes where calibration of the hinge-angle sensor was previously needed, and that try to predict when calibrating the sensor is needed. We present comparative results of one method using machine learning, specifically supervised learning with a support vector machine, and one optimization-based method using scan matching, implemented with a two-dimensional NDT (Normal Distributions Transform) algorithm. Comparative results based on the evaluation metrics used in this thesis show that detecting incorrect behaviour of the hinge-angle sensor is possible. The evaluation shows that the machine learning classifier performs better on the data used for this thesis than the optimization-based classifier.
14

Lawler, Clinton T. „A two-phase spherical electric machine for generating rotating uniform magnetic fields“. Thesis, Monterey, California. Naval Postgraduate School, 2007. http://hdl.handle.net/10945/2995.

Abstract:
This thesis describes the design and construction of a novel two-phase spherical electric machine that generates rotating uniform magnetic fields, known as a fluxball machine. Alternative methods for producing uniform magnetic fields with air-cored solenoidal magnets are discussed and evaluated. Analytical and numerical models of these alternatives are described and compared. The design details of material selection, slot geometry, and mechanical connections are described for the fluxball machine. The electrical properties of the machine are predicted and measured. Based on these properties, two modes of operation for the fluxball machine, normal and resonant, are described, and reference tables of important operating parameters are given. The drive and measurement circuitry for the fluxball machine are described. The magnetic properties of the fluxball machine are measured using Hall effect sensors. The calibration of two different Hall effect sensors is performed, providing the ability to measure the magnetic fields accurately to ±1%. Measurements of the magnetic field in the uniform field region are taken and compared with predicted values. The attenuation and distortion of the magnetic fields due to diffusion through the inner fluxball winding is measured as a function of operating frequency. Finally, future uses of this machine for various applications are discussed.
Contract number: N62271-97-G-0026
US Navy (USN) author.
15

Camboulives, Martin. „Étalonnage d'un espace de travail par multilatération“. Thesis, Université Paris-Saclay (ComUE), 2015. http://www.theses.fr/2015SACLN024/document.

Abstract:
This thesis aims at developing calibration procedures and methods for measuring tools such as coordinate measuring machines (CMMs) and stereovision devices. This work is incorporated within the framework of a collaboration between the Laboratoire national de métrologie et d'essais (LNE) and the Automated Production Research Laboratory (LURPA). In the scope of this thesis, multilateration is qualified as sequential because it is carried out by a single tracking interferometer (Laser Tracer) that is placed in different positions during the calibration procedure. In order to assess the calibration uncertainties, the link to the length standards is obtained through the measured lengths provided by the interferometer. Each one of these measured lengths is linked to the kinematic chain parametric errors that cause the volumetric errors of the CMM, or directly to the measured point coordinates. They are assessed through the study of both the calibration procedure and the performance of each component that takes part in the calibration procedure. Performing multilateration to obtain the spatial coordinates of a point requires knowing both the stand points from which the point is measured and the distances between the stand points and the measured point. In practice, the stand points are the Laser Tracer positions. The proposed method first identifies the Laser Tracer's positions and dead-path lengths in order to build a reference measuring frame, and then performs multilateration. Then, if the measuring device is a CMM, its kinematic chain parametric errors are identified. For this matter, we propose a specific procedure based on the LNE's knowledge of CMM calibration carried out using hole-bars. The originality of the proposed method lies in the fact that the reference measuring frame and the measuring device errors are calculated independently from each other. In addition, when addressing the case of a CMM calibration, the kinematic chain parametric errors are extracted one by one, whereas a global optimization algorithm is usually employed nowadays. We focus on the case of CMM calibration and we propose a precise analysis of all the sources of error. It includes factors whose influence had not been studied before. They appear to result from the fact that a single tracking interferometer is used to calibrate the CMM. A simulation module based on a Monte Carlo approach has been developed. It enables the study of the influence of each source of error independently from the others. Hence, the relevance of a measuring strategy can be assessed beforehand. This module simulates the behaviour of both the CMM and the Laser Tracer to evaluate uncertainties. We propose two indicators to observe the relative influence of each uncertainty factor. The first one is linked to the reference frame that is built on the successive positions of the Laser Tracer. The second one represents the global uncertainty on the kinematic chain parametric errors. This uncertainty assessment module has been successfully used to highlight the importance of sources of error whose role had previously not been studied. The calibration procedure and uncertainty assessment module we propose have been successfully applied to a 3-axis Cartesian CMM in laboratory conditions. Moreover, since the reference measuring frame and the kinematic chain parametric errors identification are performed separately, the method we propose can be applied to other measuring devices. We especially explain how to apply it in the case of a measuring device based on stereovision.
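The core multilateration computation, recovering a point from its distances to known stations, can be sketched with a nonlinear least-squares solve; unlike the thesis, this toy assumes the Laser Tracer positions and dead-path lengths are already known.

```python
# Core multilateration step as a sketch: recover a target point from distances
# to known station positions via nonlinear least squares.
import numpy as np
from scipy.optimize import least_squares

stations = np.array([[0.0, 0.0, 0.0],
                     [1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0]])             # tracer positions (m)
target = np.array([0.3, 0.4, 0.2])
dists = np.linalg.norm(stations - target, axis=1)  # measured lengths

def residuals(p):
    return np.linalg.norm(stations - p, axis=1) - dists

sol = least_squares(residuals, x0=np.zeros(3))
print("recovered point:", sol.x)
```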
16

Gunnlaugsdottir, Helga. „Spectroscopic determination of pH in an arterial line from a Heart-lung machine“. Thesis, KTH, Skolan för teknik och hälsa (STH), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-121583.

Abstract:
There is a need for a real-time, non-invasive method to monitor blood pH in a patient line during cardiopulmonary bypass, as today's methods are both invasive and time consuming. Blood pH is an indicator of physiological and biochemical activity in the body and needs to be kept within a relatively narrow range, typically between 7.35 and 7.45. A pH value outside this range can be critical for the patient and therefore needs to be carefully monitored throughout the course of cardiopulmonary bypass. In this study the feasibility of using spectroscopic methods for indirect measurement of pH was investigated, and both transmission and reflectance spectroscopy were tested. The results showed that NIR reflectance spectroscopy is a feasible technique for blood pH monitoring during cardiopulmonary bypass. A strong correlation was found between measured pH values and spectral output in the wavelength range 800-930 nm. It was shown that, by means of the statistical partial least squares regression method, a model could be created with three regression factors, with a cross-validated R² of 0.906 and a prediction error (RMSEP) of 0.089 pH units. The results presented here form a foundation for further analysis and experiments with a larger sample set and a more controlled experimental environment.
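A hedged sketch of the chemometric step (partial least squares regression with cross-validation) is given below on synthetic spectra; the wavelength grid, sample size and noise level are assumptions, not the study's data.

```python
# Sketch of the chemometric step: PLS regression from NIR reflectance spectra
# to pH, with cross-validated error. Synthetic spectra only.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
wavelengths = np.linspace(800, 930, 60)
ph = rng.uniform(7.25, 7.55, size=120)
# Toy spectra whose shape drifts slightly with pH, plus noise
spectra = np.array([np.exp(-((wavelengths - 860 - 40 * (p - 7.4)) / 30) ** 2)
                    for p in ph])
spectra += rng.normal(scale=0.01, size=spectra.shape)

pls = PLSRegression(n_components=3)              # three regression factors
pred = cross_val_predict(pls, spectra, ph, cv=10).ravel()
rmsep = np.sqrt(np.mean((pred - ph) ** 2))
r2 = np.corrcoef(pred, ph)[0, 1] ** 2
print(f"cross-validated R2 = {r2:.3f}, RMSEP = {rmsep:.3f} pH units")
```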
17

Hong, Cefu. „Error Calibration on Five-axis Machine Tools by Relative Displacement Measurement between Spindle and Work Table“. 京都大学 (Kyoto University), 2012. http://hdl.handle.net/2433/157572.

18

Profeta, Rebecca L. „Calibration Models and System Development for Compressive Sensing with Micromirror Arrays“. Wright State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=wright15160282553897.

19

Todeschi, Tiziano. „Calibration of local-stochastic volatility models with neural networks“. Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23052/.

Abstract:
During the last twenty years several models have been proposed to improve the classic Black-Scholes framework for equity derivatives pricing. Recently a new model has been proposed: the Local-Stochastic Volatility (LSV) model. This model considers volatility as the product of a deterministic and a stochastic term. So far, the choice of model has been driven not only by its capacity to capture empirically observed market features well, but also by the computational tractability of the calibration process. This is now undergoing a big change, since machine learning technologies offer new perspectives on model calibration. In this thesis we consider the calibration problem to be the search for a model which generates given market prices, and where, additionally, technology from generative adversarial networks can be used. This means parametrizing the model pool in a way which is accessible for machine learning techniques and interpreting the inverse problem as the training task of a generative network, whose quality is assessed by an adversary. The calibration algorithm proposed for LSV models uses as generative models so-called neural stochastic differential equations (SDEs), which simply means parameterizing the drift and volatility of an Itô SDE by neural networks.
20

Zonzini, Mirko. „Calibration and advanced control of the PICKABLE robot for the improvement of its dynamic performance“. Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020.

Abstract:
Pickable is a prototype, in the early stages of development, of an industrial CDPR developed by Tecnalia Research and Innovation within the "PicknPack" project, launched by the European Union to promote the development of flexible automated systems for the packaging of fresh or packaged food products. Current industrial solutions for pick-and-place applications are either delta parallel robots or serial-arm SCARA robots but, in recent years, PKM-based pick-and-place robots have begun to become more and more important in the pick-and-place market. The idea behind Pickable is to combine the principles of CDPRs and PKMs to build an alternative to the usual industrial manipulators that has workspace agility, a better footprint, and lower material cost. Pickable has some special elements compared to market standards: the CDPR itself, with an industrial PC, cables, winches, and a platform integrating a pneumatic feeding system and two small motors to move the end-effector; and a vision system, consisting of a camera, a conveyor, an additional PC, and a graphical interface. Developed in collaboration with the LIRMM research laboratory, the aim of this Master's thesis was to carry on with the development of the machine in order to have all the components working and integrated with each other, so that it will then be possible to focus on the optimization and analysis of more specialized topics of interest. As a first step, the vision system was integrated with the manipulator through the development of TCP/IP socket connections to allow the exchange of information crucial for operations and security. The second step involved the creation of a state machine to allow the management of the various functions, which was then also integrated with the GUI. The last step was the execution of tests to evaluate the preliminary dynamic performance of the machine and to highlight the parameters that have the greatest influence.
21

Gao, Xiang Hui. „Development of an initial-training-free online extreme learning machine with applications to automotive engine calibration and control“. Thesis, University of Macau, 2017. http://umaclib3.umac.mo/record=b3691047.

22

Sepe, Luca. „Analysis and implementation of an industrial control for laser-based calibration of electronic devices“. Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017.

Abstract:
Nowadays the electronics market, and technology in general, is constantly expanding. Technological evolution is steadily accelerating to keep pace with the times: the competitiveness of the various markets keeps the timelines short. Just think of the market for electronic devices: every six months or less, a new, more powerful model comes out, with new functions and able to satisfy the needs of the consumer ever more fully. This thesis addresses the evolution of technology with regard to one of the many factors affecting electronics, that is, the construction of electronic boards within defined tolerances, achieved through their ohmic and functional calibration. In particular, a system that allows the calibration of electronic components by means of lasers and other devices will be analyzed, studied and modernized. The thesis is divided into 7 chapters: Chapter 1: general introduction and scientific bases; Chapter 2: laser structure and additional components; Chapter 3: process explanation and software design; Chapter 4: hardware design and software development; Chapter 5: time analysis; Chapter 6: visualization development; Chapter 7: final considerations and possible variants.
23

Diaz, Mauricio. „Analyse de l'illumination et des propriétés de réflectance en utilisant des collections d'images“. Thesis, Grenoble, 2011. http://www.theses.fr/2011GRENM051/document.

Abstract:
The main objective of this thesis is to exploit the photometric information available in large photo collections of outdoor scenes to infer characteristics of the illumination, the objects and the cameras. To achieve this goal two problems are addressed. In a preliminary work, we explore optimal representations for the sky and compare images based on its appearance. Much of the information perceived in outdoor scenes is due to the illumination coming from the sky. The solar beams are reflected and refracted in the atmosphere, creating a global illumination ambiance. In turn, this environment determines the way that we perceive objects in the real world. Given the importance of the sky as an illumination source, we formulate a generic 3-step process in order to compare images based on its appearance. These three stages are: segmentation, modeling and comparing of the sky pixels. Different approaches are adopted for the modeling and comparing phases. Performance of the algorithms is validated by finding similar images in large photo collections. A second part of the thesis aims to exploit additional geometric information in order to deduce the photometric characteristics of the scene. From a 3D structure recovered using available multi-view stereo methods, we trace back the image formation process and estimate the models for the components involved in it. Since photo collections are usually acquired with different cameras, our formulation emphasizes the estimation of the radiometric calibration for all the cameras at the same time, using a strong prior on the possible space of camera response functions. Then, in a joint estimation framework, we also propose a robust computation of the global illumination for each image, the surface albedo for the 3D structure and the radiometric calibration for all the cameras.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
24

Grizou, Jonathan. „Apprentissage simultané d'une tâche nouvelle et de l'interprétation de signaux sociaux d'un humain en robotique“. Thesis, Bordeaux, 2014. http://www.theses.fr/2014BORD0146/document.

Der volle Inhalt der Quelle
Annotation:
Cette thèse s'intéresse à un problème logique dont les enjeux théoriques et pratiques sont multiples. De manière simple, il peut être présenté ainsi : imaginez que vous êtes dans un labyrinthe, dont vous connaissez toutes les routes menant à chacune des portes de sortie. Derrière l'une de ces portes se trouve un trésor, mais vous n'avez le droit d'ouvrir qu'une seule porte. Un vieil homme habitant le labyrinthe connaît la bonne sortie et se propose alors de vous aider à l'identifier. Pour cela, il vous indiquera la direction à prendre à chaque intersection. Malheureusement, cet homme ne parle pas votre langue, et les mots qu'il utilise pour dire ``droite'' ou ``gauche'' vous sont inconnus. Est-il possible de trouver le trésor et de comprendre l'association entre les mots du vieil homme et leurs significations ? Ce problème, bien qu'en apparence abstrait, est relié à des problématiques concrètes dans le domaine de l'interaction homme-machine. Remplaçons le vieil homme par un utilisateur souhaitant guider un robot vers une sortie spécifique du labyrinthe. Ce robot ne sait pas en avance quelle est la bonne sortie mais il sait où se trouvent chacune des portes et comment s'y rendre. Imaginons maintenant que ce robot ne comprenne pas a priori le langage de l'humain; en effet, il est très difficile de construire un robot à même de comprendre parfaitement chaque langue, accent et préférence de chacun. Il faudra alors que le robot apprenne l'association entre les mots de l'utilisateur et leur sens, tout en réalisant la tâche que l'humain lui indique (i.e.trouver la bonne porte). Une autre façon de décrire ce problème est de parler d'auto-calibration. En effet, le résoudre reviendrait à créer des interfaces ne nécessitant pas de phase de calibration car la machine pourrait s'adapter,automatiquement et pendant l'interaction, à différentes personnes qui ne parlent pas la même langue ou qui n'utilisent pas les mêmes mots pour dire la même chose. Cela veut aussi dire qu'il serait facile de considérer d’autres modalités d'interaction (par exemple des gestes, des expressions faciales ou des ondes cérébrales). Dans cette thèse, nous présentons une solution à ce problème. Nous appliquons nos algorithmes à deux exemples typiques de l'interaction homme robot et de l'interaction cerveau machine: une tâche d'organisation d'une série d'objets selon les préférences de l'utilisateur qui guide le robot par la voix, et une tâche de déplacement sur une grille guidé par les signaux cérébraux de l'utilisateur. Ces dernières expériences ont été faites avec des utilisateurs réels. Nos résultats démontrent expérimentalement que notre approche est fonctionnelle et permet une utilisation pratique d’une interface sans calibration préalable
This thesis investigates how a machine can be taught a new task from unlabeled human instructions, that is, without knowing beforehand how to associate the human communicative signals with their meanings. The theoretical and empirical work presented in this thesis provides means to create calibration-free interactive systems, which allow humans to interact with machines, from scratch, using their own preferred teaching signals. It therefore removes the need for an expert to tune the system for each specific user, which constitutes an important step towards flexible personalized teaching interfaces, a key for the future of personal robotics. Our approach assumes the robot has access to a limited set of task hypotheses, which includes the task the user wants to solve. Our method consists of generating interpretation hypotheses of the teaching signals with respect to each hypothetical task. By building a set of hypothetical interpretations, i.e. a set of signal-label pairs for each task, the task the user wants to solve is identified as the one that best explains the history of interaction. We consider different scenarios, including a pick-and-place robotics experiment with speech as the modality of interaction, and a navigation task in a brain-computer interaction scenario. In these scenarios, a teacher instructs a robot to perform a new task using initially unclassified signals, whose associated meaning can be a feedback (correct/incorrect) or a guidance (go left, right, up, ...). Our results show that a) it is possible to learn the meaning of unlabeled and noisy teaching signals, as well as a new task at the same time, and b) it is possible to reuse the acquired knowledge about the teaching signals for learning new tasks faster. We further introduce a planning strategy that exploits uncertainty from the task and the signals' meanings to allow more efficient learning sessions. We present a study where several real human subjects successfully control a virtual device using their brain and without relying on a calibration phase. Our system identifies, from scratch, the target intended by the user as well as the decoder of brain signals. Based on this work, but from another perspective, we introduce a new experimental setup to study how humans behave in asymmetric collaborative tasks. In this setup, two humans have to collaborate to solve a task but the channels of communication they can use are constrained and force them to invent and agree on a shared interaction protocol in order to solve the task. These constraints allow analyzing how a communication protocol is progressively established through the interplay and history of individual actions.
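A minimal sketch of the core idea described above, not the thesis' actual algorithm: interpret the unlabeled feedback signals under each task hypothesis and keep the hypothesis whose interpretation is most self-consistent. The Gaussian signal model, the number of tasks, and all variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_tasks = 4            # hypothetical target tasks (e.g., doors in the maze analogy)
true_task = 2          # unknown to the learner
n_steps = 200

# Unknown signal model: "correct" and "incorrect" feedback produce different signals.
def user_signal(correct):
    return rng.normal(1.0 if correct else -1.0, 1.0)

actions = rng.integers(0, n_tasks, size=n_steps)      # exploratory actions
signals = np.array([user_signal(a == true_task) for a in actions])

def hypothesis_score(task):
    # Labels the history WOULD have if `task` were the target,
    # then the log-likelihood of a two-class Gaussian model under that labeling.
    labels = (actions == task)
    ll = 0.0
    for lab in (True, False):
        s = signals[labels == lab]
        if len(s) < 2:
            return -np.inf
        mu, sigma = s.mean(), max(s.std(), 1e-3)
        ll += np.sum(-0.5 * ((s - mu) / sigma) ** 2 - np.log(sigma))
    return ll

scores = [hypothesis_score(t) for t in range(n_tasks)]
print("scores:", np.round(scores, 1), "-> estimated task:", int(np.argmax(scores)))
```

Under the correct hypothesis both induced signal groups are pure clusters, so their likelihood is highest; wrong hypotheses mix the clusters and score lower.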
APA, Harvard, Vancouver, ISO und andere Zitierweisen
25

Huang, Jian. „Assessing predictive performance and transferability of species distribution models for freshwater fish in the United States“. Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/73477.

Der volle Inhalt der Quelle
Annotation:
Rigorous modeling of spatial species distributions is critical in biogeography, conservation, resource management, and assessment of climate change. The goal of chapter 2 of this dissertation was to evaluate the potential of using historical samples to develop high-resolution species distribution models (SDMs) of stream fishes of the United States. I explored the spatial transferability and temporal transferability of stream-fish distribution models in chapter 3 and chapter 4 respectively. Chapter 2 showed that the discrimination power of SDMs for 76 non-game fish species depended on data quality, species' rarity, statistical modeling technique, and incorporation of spatial autocorrelation. The area under the Receiver-Operating-Characteristic curve (AUC) in the cross validation tended to be higher for logistic regression and boosted regression trees (BRT) than for the presence-only MaxEnt models. AUC in the cross validation was also higher for species with large geographic ranges and small local populations. Species prevalence affected discrimination power in the model training but not in the validation. In chapter 3, spatial transferability of SDMs was low for over 70% of the 21 species examined. Only 24% of logistic regression, 12% of BRT, and 16% of MaxEnt models had AUC > 0.6 in the spatial transfers. Friedman's rank sum test showed that there was no significant difference in the performance of the three modeling techniques. Spatial transferability could be improved by using spatial logistic regression under Lasso regularization in the training of SDMs and by matching the range and location of predictor variables between training and transfer regions. In chapter 4, testing of temporal SDM transfer on independent samples resulted in discrimination power in the moderate to good range, with AUC > 0.6 for 80% of species in all three types of models. Most cool-water species had good temporal transferability. However, biases and misspecified spread occurred frequently in the temporal model transfers. To reduce under- or over-estimation bias, I suggest rescaling the predicted probability of species presence to ordinal ranks. To mitigate inappropriate spread of predictions in climate change scenarios, I recommend using large training datasets with good coverage of environmental gradients, and fine-tuning predictor variables with regularization and cross validation.
Ph. D.
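A minimal sketch (fully synthetic data, scikit-learn assumed available) of the evaluation pattern used in the chapters above: scoring a presence/absence distribution model by cross-validated AUC for two of the modeling techniques mentioned. The generated data merely stand in for the stream-reach predictors and fish occurrence records.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Stand-in for stream-reach predictors (climate, land use, ...) and presence/absence.
X, y = make_classification(n_samples=500, n_features=8, weights=[0.8, 0.2],
                           random_state=0)

for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("boosted trees", GradientBoostingClassifier(random_state=0))]:
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean AUC = {auc.mean():.3f}")
```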
APA, Harvard, Vancouver, ISO und andere Zitierweisen
26

Mollaret, Sébastien. „Artificial intelligence algorithms in quantitative finance“. Thesis, Paris Est, 2021. http://www.theses.fr/2021PESC2002.

Der volle Inhalt der Quelle
Annotation:
L'intelligence artificielle est devenue de plus en plus populaire en finance quantitative avec l'augmentation des capacités de calcul ainsi que de la complexité des modèles et a conduit à de nombreuses applications financières. Dans cette thèse, nous explorons trois applications différentes pour résoudre des défis concernant le domaine des dérivés financiers allant de la sélection de modèle, à la calibration de modèle ainsi que la valorisation des dérivés. Dans la Partie I, nous nous intéressons à un modèle avec changement de régime de volatilité afin de valoriser des dérivés sur actions. Les paramètres du modèle sont estimés à l'aide de l'algorithme d'Espérance-Maximisation (EM) et une composante de volatilité locale est ajoutée afin que le modèle soit calibré sur les prix d'options vanilles à l'aide de la méthode particulaire. Dans la Partie II, nous utilisons ensuite des réseaux de neurones profonds afin de calibrer un modèle à volatilité stochastique, dans lequel la volatilité est représentée par l'exponentielle d'un processus d'Ornstein-Uhlenbeck, afin d'approximer la fonction qui lie les paramètres du modèle aux volatilités implicites correspondantes hors ligne. Une fois l'approximation couteuse réalisée hors ligne, la calibration se réduit à un problème d'optimisation standard et rapide. Dans la Partie III, nous utilisons enfin des réseaux de neurones profonds afin de valorisation des options américaines sur de grands paniers d'actions pour surmonter la malédiction de la dimension. Différentes méthodes sont étudiées avec une approche de type Longstaff-Schwartz, où nous approximons les valeurs de continuation, et une approche de type contrôle stochastique, où nous résolvons l'équation différentielle partielle de valorisation en la reformulant en problème de contrôle stochastique à l'aide de la formule de Feynman-Kac non linéaire
Artificial intelligence has become more and more popular in quantitative finance given the increase in computing capacity as well as in model complexity, and has led to many financial applications. In this thesis, we explore three different applications to solve financial derivatives challenges, from model selection to model calibration and pricing. In Part I, we focus on a regime-switching model to price equity derivatives. The model parameters are estimated using the Expectation-Maximization (EM) algorithm, and a local volatility component is added to fit vanilla option prices using the particle method. In Part II, we then use deep neural networks to calibrate a stochastic volatility model, in which the volatility is modelled as the exponential of an Ornstein-Uhlenbeck process, by approximating the mapping between model parameters and the corresponding implied volatilities offline. Once the expensive approximation has been performed offline, the calibration reduces to a standard and fast optimization problem. In Part III, we finally use deep neural networks to price American options on large baskets to overcome the curse of dimensionality. Different methods are studied: a Longstaff-Schwartz approach, where we approximate the continuation values, and a stochastic control approach, where we solve the pricing partial differential equation by reformulating the problem as a stochastic control problem using the non-linear Feynman-Kac formula.
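A toy illustration of the two-step idea in Part II above, not the thesis' model or code: learn the parameters-to-implied-volatility map offline with a neural network surrogate, then calibrate by ordinary least squares against observed volatilities. The quadratic "pricer" below is a synthetic stand-in for an expensive stochastic-volatility pricer; scikit-learn and SciPy are assumed available, and all parameter names are made up.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
strikes = np.linspace(0.8, 1.2, 9)

def pricer(theta):                       # theta = (level, skew), stand-in smile model
    level, skew = theta
    return level + skew * (strikes - 1.0) ** 2

# 1) Offline: train a surrogate on simulated (parameters, smile) pairs.
thetas = rng.uniform([0.1, 0.0], [0.4, 2.0], size=(2000, 2))
smiles = np.array([pricer(t) for t in thetas])
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=0).fit(thetas, smiles)

# 2) Online: calibration is now a cheap least-squares problem on the surrogate.
market_smile = pricer([0.22, 1.3]) + rng.normal(0, 1e-3, strikes.size)
res = least_squares(lambda t: surrogate.predict([t])[0] - market_smile,
                    x0=[0.25, 1.0], bounds=([0.1, 0.0], [0.4, 2.0]))
print("calibrated parameters:", np.round(res.x, 3))
```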
APA, Harvard, Vancouver, ISO und andere Zitierweisen
27

Hamidisepehr, Ali. „CLASSIFYING SOIL MOISTURE CONTENT USING REFLECTANCE-BASED REMOTE SENSING“. UKnowledge, 2018. https://uknowledge.uky.edu/bae_etds/57.

Der volle Inhalt der Quelle
Annotation:
The ability to quantify soil moisture spatial variability and its temporal dynamics over entire fields through direct soil observations using remote sensing will improve early detection of water stress before crop physiological or economic damage has occurred, and it will contribute to the identification of zones within a field in which soil water is depleted faster than in other zones. The overarching objective of this research is to develop tools and methods for remotely estimating soil moisture variability in agricultural crop production. Index-based and machine learning methods were deployed for processing hyperspectral data collected from moisture-controlled samples. In the first of five studies described in this dissertation, the feasibility of using "low-cost" index-based multispectral reflectance sensing for remotely delineating soil moisture content from direct soil and crop residue measurements using down-sampled spectral data was determined. The relative reflectance from soil and wheat stalk residue was measured using visible and near-infrared spectrometers. The optimal pair of wavelengths was chosen using a script to create an index for estimating soil and wheat stalk residue moisture levels. Wavelengths were selected to maximize the slope of the linear index function (i.e., sensitivity to moisture) and either maximize the coefficient of determination (R2) or minimize the root mean squared error (RMSE) of the index. Results showed that wavelengths centered near 1300 nm and 1500 nm, within the range of 400 to 1700 nm, produced the best index for individual samples; however, this index worked poorly on estimating stalk residue moisture. In the second of five studies, 20 machine learning algorithms were applied to full spectral datasets for moisture prediction and compared to the index-based method from the previous objective. Cubic support vector machine (SVM) and ensemble bagged trees methods produced the highest composite prediction accuracies of 96% and 93% for silt-loam soil samples, and 86% and 93% for wheat stalk residue samples, respectively. Prediction accuracy using the index-based method was 86% for silt-loam soil and 30% for wheat stalk residue. In the third study, a spectral measurement platform capable of being deployed on a UAS was developed for future use in quantifying and delineating moisture zones within agricultural landscapes. A series of portable spectrometers covering ultraviolet (UV), visible (VIS), and near-infrared (NIR) wavelengths were instrumented using a Raspberry Pi embedded computer that was programmed to interface with the UAS autopilot for autonomous reflectance data acquisition. A similar ground-based system was developed to keep track of ambient light during reflectance target measurement. The systems were tested under varying ambient light conditions during the 2017 Great American Eclipse. In the fourth study, the data acquisition system from the third study was deployed for recognizing different targets in the grayscale range using machine learning methods and under ambient light conditions. In this study, a dynamic method was applied to update integration time on spectrometers to optimize sensitivity of the instruments. It was found that by adjusting the integration time on each spectrometer such that a maximum intensity across all wavelengths was reached, the targets could be recognized simply based on the reflectance measurements with no need for a separate ambient light measurement.
Finally, in the fifth study, the same data acquisition system and variable integration time method were used for estimating soil moisture under ambient light conditions. Among 22 machine learning algorithms, linear and quadratic discriminant analysis achieved the maximum prediction accuracy. A UAS-deployable hyperspectral data acquisition system containing three portable spectrometers and an embedded computer was developed to classify moisture content from spectral data. Partial least squares regression and machine learning algorithms were shown to be effective for generating predictive models for classifying soil moisture.
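A small sketch of the wavelength-pair search described in the first study above: scan pairs of bands, build a two-band index, and keep the pair whose linear relation to moisture has the highest R^2. The synthetic spectra and the normalized-difference index form are assumptions, not the dissertation's data or exact index.

```python
import numpy as np

rng = np.random.default_rng(2)
wavelengths = np.arange(400, 1701, 10)                  # nm
moisture = rng.uniform(5, 35, size=60)                  # % moisture of the samples

# Synthetic reflectance: a broad water-absorption-like dip that deepens with moisture.
def reflectance(m):
    dip = np.exp(-((wavelengths - 1450) / 120.0) ** 2)
    return 0.6 - 0.01 * m * dip + rng.normal(0, 0.005, wavelengths.size)

spectra = np.array([reflectance(m) for m in moisture])

best = (None, -np.inf)
for i in range(len(wavelengths)):
    for j in range(i + 1, len(wavelengths)):
        index = (spectra[:, i] - spectra[:, j]) / (spectra[:, i] + spectra[:, j])
        r2 = np.corrcoef(index, moisture)[0, 1] ** 2
        if r2 > best[1]:
            best = ((wavelengths[i], wavelengths[j]), r2)

print("best wavelength pair:", best[0], "R^2 = %.3f" % best[1])
```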
APA, Harvard, Vancouver, ISO und andere Zitierweisen
28

Aparicio, Vázquez Ignacio. „Venn Prediction for Survival Analysis : Experimenting with Survival Data and Venn Predictors“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-278823.

Der volle Inhalt der Quelle
Annotation:
The goal of this work is to expand the knowledge in the field of Venn Prediction employed with Survival Data. Standard Venn Predictors have been used with Random Forests and binary classification tasks. However, they have not been utilised to predict events with Survival Data nor in combination with Random Survival Forests. With the help of a data transformation, the survival task is transformed into several binary classification tasks. One key aspect of Venn Prediction is the categories. The standard number of categories is two, one for each class to predict. In this work, the usage of ten categories is explored and the performance differences between two and ten categories are investigated. Seven data sets are evaluated, and their results presented with two and ten categories. For the Brier Score and Reliability Score metrics, two categories offered the best results, while Quality performed better employing ten categories. Occasionally, the models are too optimistic. Venn Predictors rectify this behaviour and produce well-calibrated probabilities.
Målet med detta arbete är att utöka kunskapen om området för Venn Prediction som används med överlevnadsdata. Standard Venn Predictors har använts med slumpmässiga skogar och binära klassificeringsuppgifter. De har emellertid inte använts för att förutsäga händelser med överlevnadsdata eller i kombination med Random Survival Forests. Med hjälp av en datatransformation omvandlas överlevnadsprediktion till flera binära klassificeringsproblem. En viktig aspekt av Venn Prediction är kategorierna. Standardantalet kategorier är två, en för varje klass. I detta arbete undersöks användningen av tio kategorier och resultatskillnaderna mellan två och tio kategorier undersöks. Sju datamängder används i en utvärdering där resultaten presenteras för två och tio kategorier. För prestandamåtten Brier Score och Reliability Score gav två kategorier de bästa resultaten, medan för Quality presterade tio kategorier bättre. Ibland är modellerna för optimistiska. Venn Predictors korrigerar denna prestanda och producerar välkalibrerade sannolikheter.
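A heavily simplified inductive Venn predictor, shown only to illustrate the category mechanism discussed above; it uses two categories given by the underlying model's predicted label and is not the thesis' Random-Survival-Forest construction. scikit-learn is assumed available and all data are synthetic.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.1, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
cal_cat = model.predict(X_cal)              # taxonomy: category = predicted label

def venn_probabilities(x):
    cat = model.predict(x.reshape(1, -1))[0]
    in_cat = cal_cat == cat
    probs = []
    for hypothetical in (0, 1):
        # Frequency of label 1 in the category if the test object had this label.
        ones = np.sum(y_cal[in_cat] == 1) + (hypothetical == 1)
        probs.append(ones / (np.sum(in_cat) + 1))
    return min(probs), max(probs)           # lower/upper probability for class 1

lo, hi = venn_probabilities(X_test[0])
print(f"P(class 1) in [{lo:.3f}, {hi:.3f}]; true label = {y_test[0]}")
```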
APA, Harvard, Vancouver, ISO und andere Zitierweisen
29

Richard, Michael. „Évaluation et validation de prévisions en loi“. Thesis, Orléans, 2019. http://www.theses.fr/2019ORLE0501.

Der volle Inhalt der Quelle
Annotation:
Cette thèse porte sur l’évaluation et la validation de prévisions en loi. Dans la première partie, nous nous intéressons à l’apport du machine learning vis à vis des prévisions quantile et des prévisions en loi. Pour cela, nous avons testé différents algorithmes de machine learning dans un cadre de prévisions de quantiles sur données réelles. Nous tentons ainsi de mettre en évidence l’intérêt de certaines méthodes selon le type de données auxquelles nous sommes confrontés. Dans la seconde partie, nous exposons quelques tests de validation de prévisions en loi présents dans la littérature. Certains de ces tests sont ensuite appliqués sur données réelles relatives aux log-rendements d’indices boursiers. Dans la troisième, nous proposons une méthode de recalibration permettant de simplifier le choix d’une prévision de densité en particulier par rapport à d’autres. Cette recalibration permet d’obtenir des prévisions valides à partir d’un modèle mal spécifié. Nous mettons également en évidence des conditions sous lesquelles la qualité des prévisions recalibrées, évaluée à l’aide du CRPS, est systématiquement améliorée, ou très légèrement dégradée. Ces résultats sont illustrés par le biais d’applications sur des scénarios de températures et de prix
In this thesis, we study the evaluation and validation of predictive densities. In the first part, we are interested in the contribution of machine learning to quantile and density forecasting. We use several machine learning algorithms in a quantile forecasting framework with real data, in order to highlight the merits of particular methods depending on the nature of the data. In the second part, we present some validation tests for predictive densities found in the literature. As an illustration, we apply two of the mentioned tests to real data on stock index log-returns. In the third part, we address the calibration constraint of probability forecasting. We propose a generic method for recalibration, which allows us to enforce this constraint and thus simplifies the choice between competing density forecasts. The impact on forecast quality, measured by the sharpness of the predictive distributions or by specific scores, remains to be assessed. We show that the impact on the Continuous Ranked Probability Score (CRPS) is weak under some hypotheses and that it is positive under more restrictive ones. We apply our method to weather and electricity price ensemble forecasts. Keywords: density forecasting, quantile forecasting, machine learning, validity tests, calibration, bias correction, PIT series, Pinball-Loss, CRPS
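A compact sketch of the recalibration idea described above: if the probability integral transform (PIT) values of a density forecast are not uniform, composing the forecast CDF with the empirical CDF of the PIT values yields a better-calibrated forecast. The misspecified Gaussian forecaster below is an illustrative assumption, not the thesis' models; SciPy is assumed available.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Truth: N(0, 2^2); misspecified forecast: N(0, 1)  (too sharp / overconfident).
y_cal = rng.normal(0, 2, size=2000)
forecast_cdf = stats.norm(0, 1).cdf

pit = forecast_cdf(y_cal)                      # far from uniform for a bad forecast
pit_sorted = np.sort(pit)

def recalibrated_cdf(x):
    # G(F(x)) with G the empirical CDF of the calibration PIT values.
    return np.searchsorted(pit_sorted, forecast_cdf(x), side="right") / pit_sorted.size

# Check calibration on fresh data: PIT of the recalibrated forecast should be ~uniform.
y_new = rng.normal(0, 2, size=2000)
print("raw PIT std dev:          %.3f (uniform -> 0.289)" % forecast_cdf(y_new).std())
print("recalibrated PIT std dev: %.3f" % recalibrated_cdf(y_new).std())
```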
APA, Harvard, Vancouver, ISO und andere Zitierweisen
30

Fulová, Silvia. „Stanovení nejistoty měření optického měřicí stroje pomocí laserinterferometru“. Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2021. http://www.nusl.cz/ntk/nusl-443250.

Der volle Inhalt der Quelle
Annotation:
This final thesis deals with determining the measurement uncertainty of the optical measuring machine Micro-Vu Sol 311, located at the Faculty of Mechanical Engineering in Brno. It gives an overview of coordinate measuring machines (CMMs for short) and analyses the present status of optical CMMs. This part also covers basic metrology concepts and the methodology for determining the uncertainty of a measuring instrument. The following parts of the thesis give a detailed description of the Micro-Vu SOL 311 machine and of the etalons used to determine the expanded measurement uncertainty, such as gauge blocks, a laser interferometer and a glass scale. The last part of the thesis summarizes the achieved results and gives recommendations for practice.
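An illustrative GUM-style uncertainty budget showing how an expanded uncertainty U = k * u_c is typically assembled from individual standard uncertainties; the contributor names and magnitudes below are made-up assumptions, not the thesis' actual budget for the Micro-Vu machine.

```python
import math

contributors_um = {                 # standard uncertainties, micrometres
    "repeatability of the optical CMM": 0.8,
    "calibration of the gauge block / glass scale": 0.4,
    "temperature deviation from 20 degC": 0.5,
    "resolution of the readout": 0.1,
}

u_c = math.sqrt(sum(u ** 2 for u in contributors_um.values()))   # combined (RSS)
k = 2                                                             # coverage factor, ~95 %
print(f"combined standard uncertainty u_c = {u_c:.2f} um")
print(f"expanded uncertainty U = k*u_c = {k * u_c:.2f} um (k = {k})")
```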
APA, Harvard, Vancouver, ISO und andere Zitierweisen
31

Maretto, Danilo Althmann. „Aplicação de máquinas de vetores de suporte para desenvolvimento de modelos de classificação e calibração multivariada em espectroscopia no infravermelho“. [s.n.], 2011. http://repositorio.unicamp.br/jspui/handle/REPOSIP/249287.

Der volle Inhalt der Quelle
Annotation:
Orientador: Ronei Jesus Poppi
Tese (doutorado) - Universidade Estadual de Campinas, Instituto de Química
Resumo: O objetivo desta tese de doutorado foi de utilizar o algoritmo Máquinas de Vetores de Suporte (SVM) em problemas de classificação e calibração, onde algoritmos mais tradicionais (SIMCA e PLS, respectivamente) encontram problemas. Foram realizadas quatro aplicações utilizando dados de espectroscopia no infravermelho. Na primeira o SVM se mostrou ser uma ferramenta mais indicada para a determinação de Carbono e Nitrogênio em solo por NIR, quando estes elementos estão em solos sem que se saiba se há ou não a presença do mineral gipsita, obtendo concentrações desses elementos com erros consideravelmente menores do que a previsão feita pelo PLS. Na determinação da concentração de um mineral em polímero por NIR, que foi a segunda aplicação, o PLS conseguiu previsões com erros aceitáveis, entretanto, através da análise do teste F e o gráfico de erros absolutos das previsões, foi possível concluir que o modelo SVM conseguiu chegar a um modelo mais ajustado. Na terceira aplicação, que consistiu na classificação de bactérias quanto às condições de crescimento (temperaturas 30 ou 40°C e na presença ou ausência de fosfato) por MIR, o SIMCA não foi capaz de classificar corretamente a grande maioria das amostras enquanto o SVM produziu apenas uma previsão errada. E por fim, na última aplicação, que foi a diferenciação de nódulos cirróticos e de hepatocarcinoma por microespectroscopia MIR, a taxa das previsões corretas para os conjuntos de validação do SVM foram maiores do que do SIMCA. Nas quatro aplicações o SVM produziu resultados melhores do que o SIMCA e o PLS, mostrando que pode ser uma alternativa aos métodos mais tradicionais de classificação e calibração multivariada
Abstract: The objective of this thesis was to use the Support Vector Machines (SVM) algorithm in classification and calibration problems where more traditional algorithms (SIMCA and PLS, respectively) present difficulties. Four applications were developed using infrared spectroscopy data. In the first, SVM proved to be a more suitable tool for the determination of carbon and nitrogen in soil by NIR when it is not known whether the mineral gypsum is present, yielding concentrations of these elements with errors considerably smaller than those estimated by PLS. In the determination of the concentration of a mineral in a polymer by NIR, which was the second application, PLS produced predictions with acceptable errors; however, examination of the F-test and of the absolute prediction errors showed that SVM reached a better-fitted model. In the third application, the classification of bacteria according to growth conditions (30 or 40 °C, in the presence or absence of phosphate) by MIR, SIMCA was not able to correctly classify the majority of the samples, while SVM produced only one wrong prediction. Finally, in the last application, the differentiation of cirrhotic nodules and hepatocellular carcinoma by infrared microspectroscopy, the rate of correct predictions on the validation sets was higher for SVM than for SIMCA. In all four applications SVM produced better results than SIMCA and PLS, showing that it can be an alternative to the traditional algorithms for classification and multivariate calibration.
Doutorado
Quimica Analitica
Doutor em Ciências
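A minimal comparison in the spirit of the study above: support vector regression versus PLS on a synthetic, nonlinear concentration-to-spectrum relationship. It is only a sketch under assumed data and hyperparameters, not the thesis' samples or preprocessing; scikit-learn is assumed available.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
conc = rng.uniform(0, 1, 200)                          # analyte "concentration"
bands = np.linspace(0, 1, 50)
# Nonlinear response of the spectrum to concentration plus noise.
X = np.outer(np.sin(3 * conc), bands) + np.outer(conc ** 2, 1 - bands)
X += rng.normal(0, 0.02, X.shape)

pls = PLSRegression(n_components=5)
svr = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.01))

for name, model in [("PLS", pls), ("SVR", svr)]:
    r2 = cross_val_score(model, X, conc, cv=5, scoring="r2")
    print(f"{name}: mean cross-validated R^2 = {r2.mean():.3f}")
```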
APA, Harvard, Vancouver, ISO und andere Zitierweisen
32

Vieira, Alessandro David. „Calibração indireta de máquina de medir por coordenadas utilizando esquadro mecânico de esferas“. Universidade de São Paulo, 2009. http://www.teses.usp.br/teses/disponiveis/18/18146/tde-13012011-133713/.

Der volle Inhalt der Quelle
Annotation:
Com o crescimento industrial e tecnológico nas últimas décadas, as indústrias passaram a oferecer produtos customizados, ou seja, desenvolvidos com tolerâncias geométricas cada vez mais apertadas e geometrias cada vez mais complexas. Com isso, as máquinas de medir por coordenadas (MMC) vêm tornando-se instrumentos essenciais no ambiente industrial. A MMC é extremamente versátil o que possibilita a medição das mais diversas características geométricas e dimensionais. Padrões para calibração de MMC foram sugeridos e colocados em uso através dos anos, com a finalidade de utilizá-los em testes de aceitação e verificação periódica dos erros e da incerteza de medição de MMC. Novos artefatos para a calibração indireta de MMC visam melhorar os procedimentos de calibração para uso em sistemas de compensação de erros. Diante do exposto acima, este trabalho tem como objetivo desenvolver um procedimento de calibração indireta de MMC com o esquadro de esferas aliado a um modelo reduzido de sintetização de erros (MRSE) para uso em um Sistema de Compensação de Erros. O procedimento possibilita maior rapidez na obtenção dos valores e comportamentos dos erros quando comparado com outros procedimentos de calibração indireta. O procedimento proposto tem como vantagem o uso de um esquadro de esferas para medir todos os termos das equações das componentes do erro volumétrico, nas direções X, Y e Z de uma MMC.
With the technological and industrial growth in recent decades, the industries began to offer customized products, that is, products that fit individual specifications and often present increasingly tight tolerances and increasingly complex geometries. Therefore, the coordinate measuring machines (CMMs) have become an essential tool in the industrial environment. The CMM is very versatile since it allows the measurement of several geometric and dimensional features at once. Different standards for the calibration of CMMs were suggested and put into use through the years. This type of standard is traditionally used in acceptance tests and periodic verifications of the CMMs and in the evaluation of measurement uncertainties. New artifacts for indirect calibration of CMMs are proposed to allow the development of better procedures of error evaluation and compensation. Considering the above, this work aims to develop a procedure for indirect calibration of CMMs using a mechanical ball square combined with a reduced model of synthesis of Errors (MRSE). As a result, a compensation system for CMM errors is obtained. The procedure allows a faster evaluation of the values and behaviors of errors when compared with other indirect calibration procedures. Additionally, the proposed procedure has the advantage of using a single artifact to measure all the components of the volumetric error in the directions X, Y and Z of a CMM.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
33

Rydell, Christopher. „Deep Learning for Whole Slide Image Cytology : A Human-in-the-Loop Approach“. Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-450356.

Der volle Inhalt der Quelle
Annotation:
With cancer being one of the leading causes of death globally, and with oral cancers being among the most common types of cancer, it is of interest to conduct large-scale oral cancer screening among the general population. Deep Learning can be used to make this possible despite the medical expertise required for early detection of oral cancers. A bottleneck of Deep Learning is the large amount of data required to train a good model. This project investigates two topics: certainty calibration, which aims to make a machine learning model produce more reliable predictions, and Active Learning, which aims to reduce the amount of data that needs to be labeled for Deep Learning to be effective. In the investigation of certainty calibration, five different methods are compared, and the best method is found to be Dirichlet calibration. The Active Learning investigation studies a single method, Cost-Effective Active Learning, but it is found to produce poor results with the given experiment setting. These two topics inspire the further development of the cytological annotation tool CytoBrowser, which is designed with oral cancer data labeling in mind. The proposedevolution integrates into the existing tool a Deep Learning-assisted annotation workflow that supports multiple users.
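A short sketch of how the "reliability" that certainty calibration targets is commonly quantified: the expected calibration error (ECE) computed by binning predicted probabilities. The overconfident toy model below is an assumption for illustration; it does not implement the Dirichlet calibration the thesis found best.

```python
import numpy as np

rng = np.random.default_rng(5)
true_p = rng.uniform(0, 1, 5000)                    # true probability of class 1
labels = (rng.uniform(0, 1, 5000) < true_p).astype(int)
pred_p = np.clip(0.5 + 1.6 * (true_p - 0.5), 0, 1)  # overconfident predictions

def ece(p, y, n_bins=10):
    # Weighted average gap between mean confidence and observed frequency per bin.
    bins = np.minimum((p * n_bins).astype(int), n_bins - 1)
    err = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            err += mask.mean() * abs(p[mask].mean() - y[mask].mean())
    return err

print("ECE of the uncalibrated model: %.3f" % ece(pred_p, labels))
print("ECE of the true probabilities: %.3f" % ece(true_p, labels))
```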
APA, Harvard, Vancouver, ISO und andere Zitierweisen
34

Braden, Jason Patrick. „Open architecture and calibration of a cylindrical grinder“. Thesis, Georgia Institute of Technology, 1999. http://hdl.handle.net/1853/18190.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
35

Matsubara, Edson Takashi. „Relações entre ranking, análise ROC e calibração em aprendizado de máquina“. Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-04032009-114050/.

Der volle Inhalt der Quelle
Annotation:
Aprendizado supervisionado tem sido principalmente utilizado para classificação. Neste trabalho são mostrados os benefícios do uso de rankings ao invés de classificação de exemplos isolados. Um rankeador é um algoritmo que ordena um conjunto de exemplos de tal modo que eles são apresentados do exemplo de maior para o exemplo de menor expectativa de ser positivo. Um ranking é o resultado dessa ordenação. Normalmente, um ranking é obtido pela ordenação do valor de confiança de classificação dado por um classificador. Este trabalho tem como objetivo procurar por novas abordagens para promover o uso de rankings. Desse modo, inicialmente são apresentados as diferenças e semelhanças entre ranking e classificação, bem como um novo algoritmo de ranking que os obtém diretamente sem a necessidade de obter os valores de confiança de classificação, esse algoritmo é denominado de LEXRANK. Uma área de pesquisa bastante importante em rankings é a análise ROC. O estudo de árvores de decisão e análise ROC é bastante sugestivo para o desenvolvimento de uma visualização da construção da árvore em gráficos ROC. Para mostrar passo a passo essa visualização foi desenvolvido uma sistema denominado PROGROC. Ainda do estudo de análise ROC, foi observado que a inclinação (coeficiente angular) dos segmentos que compõem o fecho convexo de curvas ROC é equivalente a razão de verossimilhança que pode ser convertida para probabilidades. Essa conversão é denominada de calibração por fecho convexo de curvas ROC que coincidentemente é equivalente ao algoritmo PAV que implementa regressão isotônica. Esse método de calibração otimiza Brier Score. Ao explorar essa medida foi encontrada uma relação bastante interessante entre Brier Score e curvas ROC. Finalmente, também foram explorados os rankings construídos durante o método de seleção de exemplos do algoritmo de aprendizado semi-supervisionado multi-descrição CO-TRAINING
Supervised learning has been used mostly for classification. In this work we show the benefits of a welcome shift in attention from classification to ranking. A ranker is an algorithm that sorts a set of instances from highest to lowest expectation that the instance is positive, and a ranking is the outcome of this sorting. Usually a ranking is obtained by sorting scores given by classifiers. In this work, we are concerned with novel approaches to promote the use of ranking. Therefore, we present the differences and relations between ranking and classification, followed by a proposal of a novel ranking algorithm called LEXRANK, whose rankings are derived not from scores, but from a simple ranking of attribute values obtained from the training data. One very important field which uses rankings as its main input is ROC analysis. The study of decision trees and ROC analysis suggested an interesting way to visualize the tree construction in ROC graphs, which has been implemented in a system called PROGROC. Focusing on ROC analysis, we observed that the slope of the segments obtained from the ROC convex hull is equivalent to the likelihood ratio, which can be converted into probabilities. Interestingly, this ROC convex hull calibration method is equivalent to Pool Adjacent Violators (PAV). Furthermore, the ROC convex hull calibration method optimizes the Brier Score, and the exploration of this measure leads us to an interesting connection between the Brier Score and ROC curves. Finally, we also investigate the rankings built by the selection method which increments the labelled set of CO-TRAINING, a semi-supervised multi-view learning algorithm.
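A brief sketch of the connection drawn above: calibrating classifier scores with isotonic regression (the PAV algorithm, which the work relates to the ROC convex hull method) and checking that the Brier score improves. The synthetic scores are an assumption; scikit-learn is assumed available.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression
from sklearn.metrics import brier_score_loss

rng = np.random.default_rng(6)
n = 4000
y = rng.integers(0, 2, n)
raw = 1.0 / (1.0 + np.exp(-(3 * (y - 0.5) + rng.normal(0, 2, n))))
scores = raw ** 3                      # informative but badly calibrated scores

cal_idx, test_idx = np.arange(n) < n // 2, np.arange(n) >= n // 2
iso = IsotonicRegression(out_of_bounds="clip").fit(scores[cal_idx], y[cal_idx])
calibrated = iso.predict(scores[test_idx])

print("Brier score, raw scores:         %.4f" % brier_score_loss(y[test_idx], scores[test_idx]))
print("Brier score, after PAV/isotonic: %.4f" % brier_score_loss(y[test_idx], calibrated))
```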
APA, Harvard, Vancouver, ISO und andere Zitierweisen
36

Jelínek, Vít. „Kalibrace skleněných měřítek“. Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2015. http://www.nusl.cz/ntk/nusl-232162.

Der volle Inhalt der Quelle
Annotation:
This thesis deals with a more work-efficient and time-efficient method of calibration of standard glass scales, for practical use at the Czech Metrology Institute Regional Inspectorate in Brno. The desired streamlining of the calibration was achieved by using the 3D coordinate measuring machine Micro-Vu Excel 4520. In the service software InSpec, six measuring programs were designed for a standard glass scale of the SIP brand. The measurement uncertainties of this calibration were calculated and presented. The thesis concludes with a draft of the calibration procedure and a formalized calibration document.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
37

Kiška, Roman. „Stanovení přesnosti měření souřadnicového měřicího stroje Zeiss UPMC Carat“. Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2021. http://www.nusl.cz/ntk/nusl-442852.

Der volle Inhalt der Quelle
Annotation:
The aim of this diploma thesis is to create a comprehensive study of measurement accuracy of coordinate measuring machine (hereinafter CMM) Zeiss UPMC 850 Carat S-ACC (hereinafter Zeiss Carat) for the needs of the national metrological institute in Brno in accordance with ČSN EN ISO / IEC 17025 and the follow-up system standards of the ČSN EN ISO 10360 series. Additionally, it includes the creation of instructions for the calculation of measurement uncertainty, which will be put into effect in an accredited calibration laboratory. The first part of the work focuses on the description of the current state of knowledge and the definition of basic concepts in the field of metrology and accurate measurements on CMM. The second part describes the Zeiss Carat measuring machine, identifies the individual contributors to the resulting measurement uncertainty and defines the methodology for their quantification. The last part deals with the evaluation of calibration data and the calculation of the expanded measurement uncertainty of the Zeiss Carat instrument, which is used to quantify its accuracy.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
38

Zhu, Hui. „Partial discharge propagation, measurement, and calibration in high power rotating machines“. Thesis, Glasgow Caledonian University, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.261609.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
39

Cilici, Florent. „Développement de solutions BIST (Built-In Self-Test) pour circuits intégrés radiofréquences/millimétriques“. Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAT072.

Der volle Inhalt der Quelle
Annotation:
Les technologies silicium récentes sont particulièrement prônes aux imperfections durant la fabrication des circuits. La variation des procédés peut entrainer une dégradation des performances, notamment aux hautes fréquences. Dans cette thèse, plusieurs contributions visant la réduction des coûts et de la complexité du test des circuits millimétriques sont présentées. Dans ce sens, deux sujets principaux ont fait l'objet de notre attention : a) le test indirect non-intrusif basé sur l’apprentissage automatique et b) la calibration non-itérative "one-shot". Nous avons en particulier développé une méthode générique pour implémenter un test indirect non-intrusif basé sur l’apprentissage automatique. La méthode vise à être aussi automatisée que possible de façon à pouvoir être appliquée à pratiquement n'importe quel circuit millimétrique. Elle exploite les modèles Monte Carlo du design kit et des informations de variations du BEOL pour proposer un jeu de capteurs non-intrusifs. Des mesures à basses fréquences permettent ensuite d'extraire des signatures qui contiennent des données pertinentes concernant la qualité des procédés de fabrication, et donc a fortiori de la performance du circuit. Cette méthode est supportée par des résultats expérimentaux sur des PAs fonctionnant à 65 GHz, conçus dans une technologie 55 nm de STMicroelectronics. Pour s'attaquer plus encore à la dégradation des performances induite par les variations des procédés de fabrication, nous nous sommes également penchés sur une procédure de calibration non-itérative. Nous avons ainsi présenté un PA à deux étages qui peut être calibré en post-fabrication. La méthode de calibration exploite une cellule de découplage variable comme moyen de modifier les performances de l'amplificateur. Des moniteurs de variations des procédés de fabrication, placés dans les espaces vides du circuit, sont utilisés afin de prédire la meilleure configuration possible pour les cellules de découplage variables. La faisabilité et les performances de cette approche ont été validés en simulation
Recent silicon technologies are especially prone to imperfections during the fabrication of the circuits. Process variations can induce a noticeable performance shift, especially for high frequency devices. In this thesis we present several contributions to tackle the cost and complexity associated with testing mm-wave ICs. In this sense, we have focused on two main topics: a) non-intrusive machine learning indirect test and b) one-shot calibration. We have in particular developed a generic method to implement a non-intrusive machine learning indirect test based on process variation sensors. The method is aimed at being as automated as possible and can be applied to virtually any mm-wave circuit. It leverages the Monte Carlo models of the design kit and the BEOL variability information to propose a set of non-intrusive sensors. Low frequency measurements can be performed on these sensors to extract signatures that provide relevant information about the process quality, and consequently about the device performance. The method is supported by experimental results in a set of 65 GHz PAs designed in a 55 nm technology from STMicroelectronics. To further tackle the performance degradation induced by process variations, we have also focused on the implementation of a one-shot calibration procedure. In this line, we have presented a two-stage 60 GHz PA with one-shot calibration capability. The proposed calibration takes advantage of a novel tuning knob, implemented as a variable decoupling cell. Non-intrusive process monitors, placed within the empty spaces of the circuit, are used for predicting the best tuning knob configuration based on a machine learning regression model. The feasibility and performance of the proposed calibration strategy have been validated in simulation
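A schematic sketch of the non-intrusive indirect test idea described above: learn a regression from cheap low-frequency process-monitor signatures to an expensive RF performance, then predict the performance of new dies without RF measurement. The linear-plus-noise process model, the random-forest regressor, and all variable names are assumptions, not the thesis' circuits or data; scikit-learn is assumed available.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n_dies = 400
process = rng.normal(0, 1, size=(n_dies, 3))          # latent process variations
sensors = process @ rng.normal(0, 1, (3, 6)) + rng.normal(0, 0.1, (n_dies, 6))
gain_db = 15 - 1.5 * process[:, 0] + 0.8 * process[:, 1] + rng.normal(0, 0.2, n_dies)

X_tr, X_te, y_tr, y_te = train_test_split(sensors, gain_db, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print("RMS prediction error of PA gain: %.2f dB" % np.sqrt(np.mean((pred - y_te) ** 2)))
```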
APA, Harvard, Vancouver, ISO und andere Zitierweisen
40

Lau, Tse-yeung, und 劉子揚. „Mechanical calibration of drilling process monitor (DPM) methodology“. Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2009. http://hub.hku.hk/bib/B43753012.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
41

Lau, Tse-yeung. „Mechanical calibration of drilling process monitor (DPM) methodology“. Click to view the E-thesis via HKUTO, 2009. http://sunzi.lib.hku.hk/hkuto/record/B43753012.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
42

Di, Giacomo Benedito. „Computer aided calibration and hybrid compensation of geometric errors in coordinate measuring machines“. Thesis, University of Manchester, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.306885.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
43

Alves, Julio Cesar Laurentino 1978. „Máquina de vetores de suporte aplicada a dados de espectroscopia NIR de combustíveis e lubrificantes para o desenvolvimento de modelos de regressão e classificação“. [s.n.], 2012. http://repositorio.unicamp.br/jspui/handle/REPOSIP/249312.

Der volle Inhalt der Quelle
Annotation:
Orientador: Ronei Jesus Poppi
Tese (doutorado) - Universidade Estadual de Campinas, Instituto de Química
Resumo: Modelos lineares de regressão e classificação por vezes proporcionam um desempenho insatisfatório no tratamento de dados de espectroscopia no infravermelho próximo de produtos derivados de petróleo. A máquina de vetores de suporte (SVM), baseada na teoria do aprendizado estatístico, possibilita o desenvolvimento de modelos de regressão e classificação não lineares que podem proporcionar uma melhor modelagem dos referidos dados, porém ainda é pouco explorada para resolução de problemas em química analítica. Nesse trabalho demonstra-se a utilização do SVM para o tratamento de dados de espectroscopia na região do infravermelho próximo de combustíveis e lubrificantes. O SVM foi utilizado para a solução de problemas de regressão e classificação e seus resultados comparados com os algoritmos de referência PLS e SIMCA. Foram abordados os seguintes problemas analíticos relacionados a controle de processos e controle de qualidade: (i) determinação de parâmetros de qualidade do óleo diesel utilizados para otimização do processo de mistura em linha na produção desse combustível; (ii) determinação de parâmetros de qualidade do óleo diesel que é carga do processo de HDT, para controle e otimização das condições de processo dessa unidade; (iii) determinação do teor de biodiesel na mistura com o óleo diesel; (iv) classificação das diferentes correntes que compõem o pool de óleo diesel na refinaria, permitindo a identificação de adulterações e controle de qualidade; (v) classificação de lubrificantes quanto ao teor de óleo naftênico e/ou presença de óleo vegetal. Demonstram-se o melhor desempenho do SVM em relação aos modelos desenvolvidos com os métodos quimiométricos de referência (métodos lineares). O desenvolvimento de métodos analíticos rápidos e de baixo custo para solução de problemas em controle de processos e controle de qualidade, com a utilização de modelos de regressão e classificação mais exatos, proporcionam o monitoramento da qualidade de forma mais eficaz e eficiente, contribuindo para o aumento das rentabilidades nas atividades econômicas de produção e comercialização dos derivados do petróleo estudados
Abstract: Linear regression and classification models can perform poorly when processing near-infrared spectroscopy data of petroleum products. Support vector machines (SVM), based on statistical learning theory, enable the development of nonlinear regression and classification models that can model these data better, but they are still little explored for solving problems in analytical chemistry. This work demonstrates the use of SVM for the treatment of near-infrared spectroscopy data of fuels and lubricants. SVM was used to solve regression and classification problems and its results were compared with the reference algorithms PLS and SIMCA. The following analytical problems related to process control and quality control were studied: (i) determination of quality parameters of diesel oil, used for optimization of the in-line blending process; (ii) determination of quality parameters of the diesel oil that is the feedstock of the HDT unit, for optimization of process control; (iii) quantification of biodiesel blended with diesel oil; (iv) classification of the different streams that make up the pool of diesel oil in the refinery, enabling identification of adulteration and quality control; (v) classification of lubricants based on the content of naphthenic oil and/or the presence of vegetable oil. SVM showed better performance than the models developed with the reference algorithms. The development of fast and low-cost analytical methods for process control and quality control, using more accurate regression and classification models, allows quality parameters to be monitored in a more effective and efficient manner, increasing the profitability of the production and commercialization of the petroleum derivatives studied.
Doutorado
Quimica Analitica
Doutor em Ciências
APA, Harvard, Vancouver, ISO und andere Zitierweisen
44

Duggan, Matthew Sherman. „Automatic correction of robot programs based on sensor calibration data“. Thesis, Georgia Institute of Technology, 1988. http://hdl.handle.net/1853/17814.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
45

Pohlhammer, Christopher M. „Sensing for automated assembly : direct calibration techniques for determining part-in-hand location /“. Thesis, Connect to this title online; UW restricted, 1997. http://hdl.handle.net/1773/7118.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
46

Solnon, Matthieu. „Apprentissage statistique multi-tâches“. Phd thesis, Université Pierre et Marie Curie - Paris VI, 2013. http://tel.archives-ouvertes.fr/tel-00911498.

Der volle Inhalt der Quelle
Annotation:
This thesis is concerned with the construction, calibration and study of multi-task estimators, in a non-parametric and non-asymptotic frequentist framework. We work within the framework of kernel ridge regression and extend the existing multi-task regression methods. The key question is the calibration of a matrix regularization parameter, which encodes the similarity between the tasks. We propose a calibration method for this parameter, based on the estimation of the noise covariance matrix between the tasks. We then give optimality guarantees for the resulting estimator, via an oracle inequality, and verify its behaviour on simulated examples. We also obtain precise bounds on the risks of the multi-task and single-task oracle estimators in certain cases. This allows us to identify several interesting situations in which the multi-task oracle is more efficient than the single-task oracle, or vice versa. It also allows us to check that the oracle inequality forces the multi-task estimator to have a lower risk than the single-task estimator in the cases studied. The behaviour of the multi-task and single-task oracles is verified on simulated examples.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
47

Szatmari, Szabolcs. „Kinematic Calibration of Parallel Kinematic Machines on the Example of the Hexapod of Simple Design“. Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2007. http://nbn-resolving.de/urn:nbn:de:swb:14-1194357963765-04082.

Der volle Inhalt der Quelle
Annotation:
The aim of using parallel kinematic motion systems as an alternative to conventional machine tools for precision machining has raised the demands on the accuracy with which the geometric parameters needed for the kinematic transformation of the motion variables are identified. The accuracy of a parallel manipulator depends not only on accurate control of its actuators but also on good knowledge of its geometrical characteristics. Since the platform's controller determines the actuator lengths according to the nominal model, the resulting pose of the platform is inaccurate. One way to enhance platform accuracy is kinematic calibration, a process by which the actual kinematic parameters are identified and then used to modify the kinematic model employed by the controller. The first and most general evaluation criterion for current calibration approaches is the relative improvement in motion accuracy, which eclipses the other aspect: what one has to pay for it. The calibration outlay has been underestimated or even neglected for a long time. The scientific value of a calibration procedure is in direct proportion not only to the achieved accuracy, but also to the calibration effort. These demands become particularly stringent in the case of the calibration of hexapods of the so-called simple design. The objectives of the new calibration procedure proposed here address the deficits mentioned above under the special requirements that follow from the simple-design concept. The main goal of the procedure can be summarized as providing the basis for an automated kinematic calibration that works efficiently, quickly, effectively and at low cost when applied to parallel kinematic machines. The problem is approached systematically, drawing the necessary conclusions and taking the necessary measurements step by step: systematic analysis of the workspace to determine the optimal measuring procedure, measurements with automated data acquisition and evaluation, simulated measurements based on the kinematic model of the structure, and identification of the kinematic parameters using efficient optimization algorithms. The presented calibration has been successfully implemented and tested on the hexapod of simple design `Felix' available at the IWM, TU Dresden. The obtained results encourage the application of the procedure to other hexapod structures.
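A toy version of the parameter-identification step described above, reduced to a planar mechanism with two struts and made-up numbers: given externally measured platform positions and the measured strut lengths, find the base-joint coordinates that best explain the measurements with nonlinear least squares. A real hexapod identification follows the same pattern with many more parameters; SciPy is assumed available.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(8)
B_nominal = np.array([[0.0, 0.0], [1.0, 0.0]])                      # assumed base joints (m)
B_true = B_nominal + np.array([[0.004, -0.003], [-0.002, 0.005]])   # real, unknown offsets

# Calibration poses: measured platform positions and the corresponding strut lengths.
P = rng.uniform([0.2, 0.4], [0.8, 1.0], size=(25, 2))
L = np.linalg.norm(P[:, None, :] - B_true[None, :, :], axis=2)
L += rng.normal(0, 1e-5, L.shape)                                   # measurement noise

def residuals(params):
    B = params.reshape(2, 2)
    return (np.linalg.norm(P[:, None, :] - B[None, :, :], axis=2) - L).ravel()

fit = least_squares(residuals, B_nominal.ravel())
print("identified base joint offsets (mm):")
print(np.round((fit.x.reshape(2, 2) - B_nominal) * 1000, 3))
```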
APA, Harvard, Vancouver, ISO und andere Zitierweisen
48

Szatmári, Szabolcs. „Kinematic Calibration of Parallel Kinematic Machines on the Example of the Hexapod of Simple Design“. Dresden : Inst. für Werkzeugmaschinen und Steuerungstechnik, Lehrstuhl für Werkzeugmaschinen, 2007. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=016374557&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
49

Aldawi, Fouad Juma. „A low-cost ultrasonic 3D measurement device for calibration of Cartesian and non-Cartesian machines“. Thesis, University of Huddersfield, 2009. http://eprints.hud.ac.uk/id/eprint/9106/.

Der volle Inhalt der Quelle
Annotation:
The major obstacles to the widespread adoption of 3D measurement systems are accuracy, speed of process and cost. At present, high accuracy in measuring 3D position has been achieved, and there have been real advances in reducing measurement time, but the cost of such systems remains high. A high-accuracy and high-resolution ultrasonic distance measurement system has been achieved in this project by creating a multi-frequency continuous-wave frequency modulation (MFCWFM) system. The low-cost system measures dynamic distances (displacements of an ultrasound transmitter) and fixed distances (distances between receivers). The instantaneous distance between the transmitter and each receiver can be precisely determined. New geometric algorithms for transmitter 3D positioning and receiver positioning have also been developed in the current research to improve the measurement system's practicability. These algorithms allow the ultrasound receivers to be arbitrarily placed and located by self-calibration following a simple procedure. After the development and testing of the new 3D measurement system, further studies were carried out on the system, considering the two major external disturbances: air temperature drift and ultrasound echo interference. Novel methods have been successfully developed and tested to minimize measurement errors and to evaluate the speed of sound. All the enabling research described in the thesis means that it is now possible to build and implement a measurement system at reasonable cost for industrial exploitation, with the performance necessary to provide ultrasonic 3D position measurements in real time for position monitoring.
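A minimal sketch of the geometric step mentioned above: recovering the 3D position of the ultrasonic transmitter from measured distances to receivers at known positions, via nonlinear least squares. The receiver layout and noise level are illustrative assumptions, not the thesis' setup; SciPy is assumed available.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(9)
receivers = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.1],
                      [0.0, 1.0, 0.2], [1.0, 1.2, 1.0]])   # known positions (m)
true_pos = np.array([0.45, 0.62, 0.30])

# Distances as the MFCWFM ranging stage would deliver them, with small noise.
d = np.linalg.norm(receivers - true_pos, axis=1) + rng.normal(0, 1e-4, 4)

estimate = least_squares(lambda p: np.linalg.norm(receivers - p, axis=1) - d,
                         x0=np.array([0.5, 0.5, 0.5])).x
print("estimated transmitter position (m):", np.round(estimate, 4))
```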
APA, Harvard, Vancouver, ISO und andere Zitierweisen
50

Baird, Patrick James Samuel. „Mathematical modelling of the parameters and errors of a contact probe system and its application to the computer simulation of coordinate measuring machines“. Thesis, Brunel University, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.320548.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
