Dissertations / Theses on the topic 'Optimal estimation of parameters'

Consult the top 50 dissertations / theses for your research on the topic 'Optimal estimation of parameters.'

You can also download the full text of each publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations and theses from a wide variety of disciplines and organise your bibliography correctly.

1

Cheung, Man-Fung. "On optimal algorithms for parameter set estimation." The Ohio State University, 1991. http://rave.ohiolink.edu/etdc/view?acc_num=osu1302628544.

2

Iolov, Alexandre V. "Parameter Estimation, Optimal Control and Optimal Design in Stochastic Neural Models." Thesis, Université d'Ottawa / University of Ottawa, 2016. http://hdl.handle.net/10393/34866.

Abstract:
This thesis solves estimation and control problems in computational neuroscience, mathematically dealing with the first-passage times of diffusion stochastic processes. We first derive estimation algorithms for model parameters from first-passage time observations, and then derive algorithms for the control of first-passage times. Finally, we solve an optimal design problem which combines elements of the first two: we ask how to elicit first-passage times so as to facilitate model estimation from the resulting first-passage observations. The main mathematical tools used are the Fokker-Planck partial differential equation for the evolution of probability densities, the Hamilton-Jacobi-Bellman equation of optimal control, and the adjoint optimization principle from optimal control theory. The focus is on developing computational schemes for the solution of the problems. The schemes are implemented and tested for a wide range of parameters.
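For flavour, here is a minimal sketch (not code from the thesis) of the kind of data these algorithms consume: Monte Carlo first-passage times of an Ornstein-Uhlenbeck process through a fixed threshold, simulated by Euler-Maruyama. The thesis works with the Fokker-Planck PDE instead, and all parameter values below are illustrative.

```python
# Hypothetical example: first-passage times of dX = -a*X dt + s dW through x_th.
import numpy as np

def first_passage_times(a=1.0, s=0.5, x_th=1.0, dt=1e-3, t_max=20.0,
                        n_paths=5000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(n_paths)
    alive = np.ones(n_paths, dtype=bool)        # paths that have not crossed yet
    fpt = np.full(n_paths, np.nan)
    for k in range(int(t_max / dt)):
        dw = rng.standard_normal(n_paths) * np.sqrt(dt)
        x[alive] += -a * x[alive] * dt + s * dw[alive]
        crossed = alive & (x >= x_th)
        fpt[crossed] = (k + 1) * dt
        alive &= ~crossed
    return fpt[~np.isnan(fpt)]                  # drop paths that never crossed

times = first_passage_times()
print(f"{times.size} crossings, mean first-passage time ~ {times.mean():.2f}")
```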
3

Agaba, Peter. "Optimal Control Theory and Estimation of Parameters in a Differential Equation Model for Patients with Lupus." TopSCHOLAR®, 2019. https://digitalcommons.wku.edu/theses/3118.

Abstract:
Systemic Lupus Erythematosus (SLE) is a chronic inflammatory autoimmune disorder that affects many parts of the body, including the skin, joints, kidneys, brain and other organs. Lupus Nephritis (LN) is a disease caused by SLE. Given the complexity of LN, we establish an optimal treatment strategy based on a previously developed mathematical model. For our thesis work, the model variables are: Immune Complexes (I), Pro-inflammatory mediators (P), Damaged tissue (D), and Anti-inflammatory mediators (A). The analysis in this research project focuses on therapeutic strategies to control damage, using both parameter estimation techniques (integration of data to quantify the uncertainties associated with parameters) and optimal control, with the goal of minimizing the time spent on therapy for treating tissue damaged by LN.
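As a miniature of the parameter-estimation side, the sketch below fits rate constants of a toy two-state inflammation model to noisy synthetic data with SciPy's least_squares. The equations, rates and data are hypothetical stand-ins, not the thesis's four-variable LN model.

```python
# Toy model (assumed): dP/dt = k1*I - k2*P, dD/dt = k3*P - k4*D, constant I.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def rhs(t, y, k1, k2, k3, k4, I=1.0):
    P, D = y
    return [k1 * I - k2 * P, k3 * P - k4 * D]

t_obs = np.linspace(0, 10, 21)
true_k = (0.8, 0.5, 0.3, 0.2)
clean = solve_ivp(rhs, (0, 10), [0, 0], t_eval=t_obs, args=true_k).y
data = clean + 0.02 * np.random.default_rng(1).standard_normal(clean.shape)

def residuals(k):
    sim = solve_ivp(rhs, (0, 10), [0, 0], t_eval=t_obs, args=tuple(k)).y
    return (sim - data).ravel()

fit = least_squares(residuals, x0=[1.0, 1.0, 1.0, 1.0], bounds=(0, 5))
print("estimated rates:", fit.x.round(3))   # should be near (0.8, 0.5, 0.3, 0.2)
```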
4

Alana, Jorge Enrique. "Optimal measurement locations for parameter estimation of distributed parameter systems." Thesis, University of Manchester, 2011. https://www.research.manchester.ac.uk/portal/en/theses/optimal-measurement-locations-for-parameter-estimation-of-distributed-parameter-systems(fffa31d8-2b19-434b-a2b6-7809e314bb55).html.

Abstract:
Identifying the parameters with the largest influence on the predicted outputs of a model reveals which parameters need to be known more precisely to reduce the overall uncertainty on the model output. A large improvement of such models would result when uncertainties in the key model parameters are reduced. To achieve this, new experiments could be very helpful, especially if the measurements are taken at the spatio-temporal locations that allow the parameters to be estimated in an optimal way. After evaluating the methodologies available for optimal sensor location, a few observations were drawn. The method based on the evolution of the Gram determinant can report results that do not match what should be expected; it is strongly dependent on the behaviour of the sensitivity coefficients. The approach based on the maximum angle between subspaces in some cases produced more than one optimal solution. It was observed that this method depends on the magnitude of the output values and reports the measurement positions where the outputs reach their extreme values. The D-optimal design method produces the number and locations of the optimal measurements; it too depends strongly on the sensitivity coefficients, and mostly on their behaviour. In general, it was observed that the measurements should be taken at the locations where the extreme values (of the sensitivity coefficients, POD modes and/or outputs) are reached. Further improvements can be obtained when a reduced model of the system is employed. This is computationally less expensive and yields the best estimate of the parameters, even with experimental data contaminated with noise. A new approach to calculate the time coefficients belonging to an empirical approximator based on the POD modes derived from experimental data is introduced. Additionally, an artificial neural network can be used to calculate the derivatives, but only for systems without complex nonlinear behaviour. The latter two approximations are very valuable and useful, especially if the model of the system is unknown.
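The D-optimal idea above is easy to illustrate: given a sensitivity matrix whose rows are candidate measurement locations and whose columns are parameters, greedily pick the rows that maximise the determinant of the Fisher information (unit measurement noise assumed). This is a simplified sketch with a small ridge term to handle the rank-deficient first picks, not the thesis's algorithm.

```python
import numpy as np

def greedy_d_optimal(S, n_sensors, ridge=1e-9):
    """Greedily select rows of S maximising det(S_sel.T @ S_sel + ridge*I)."""
    p = S.shape[1]
    chosen = []
    for _ in range(n_sensors):
        best_i, best_det = None, -np.inf
        for i in range(S.shape[0]):
            if i in chosen:
                continue
            rows = S[chosen + [i]]
            det = np.linalg.det(rows.T @ rows + ridge * np.eye(p))
            if det > best_det:
                best_i, best_det = i, det
        chosen.append(best_i)
    return chosen

S = np.random.default_rng(0).standard_normal((50, 3))  # 50 locations, 3 parameters
print("selected measurement locations:", greedy_d_optimal(S, 4))
```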
5

Chan, Chun-wang Aaron, and 陳俊弘. "Statistical estimation of haemodynamic parameters in optical coherence tomography." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2014. http://hdl.handle.net/10722/206460.

Abstract:
Optical coherence tomography (OCT) is an imaging modality analogous to ultrasound. By using the interference properties of light, one may image to micrometer resolutions using interferometric methods. Most modern systems can acquire A-scans at kHz to MHz speeds, and are capable of real-time 3D imaging. OCT has been used extensively in ophthalmology and has been used in angiography to quantify blood flow. The aim of this research has been to develop statistically optimal estimators for blood flow estimation to take advantage of these hardware advances. This is achieved through a deeper understanding of the noise characteristics of OCT. Through mathematical derivations and simulations, the noise characteristics of OCT Doppler and flow imaging were accurately modelled as additive white noise and multiplicative decorrelation noise. Decorrelation arises from motion of tissue relative to the OCT region of interest and adversely affects Doppler estimation. From these models, maximum likelihood estimators (MLEs) that statistically outperform the commonly used Kasai autocorrelation estimator were derived. The Cramer-Rao lower bound (CRLB), which gives the lowest theoretical variance achievable by an unbiased estimator, was derived for different noise regimes. It is shown that the AWGN MLE achieves the CRLB under additive-white-noise-dominant conditions, and the decorrelation noise MLE achieves the CRLB under more general noise conditions. The use of computational algorithms that enhance the capabilities of OCT is demonstrated, showing that this approach for deriving parametric estimators may be used in a more general medical imaging context. In addition, the use of decorrelation as a measure of speed is explored, as it is itself proportional to flow speed.
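For reference, the conventional Kasai estimator that these MLEs are benchmarked against is compact: the mean Doppler phase shift per acquisition is the angle of the lag-1 autocorrelation of the complex signal. A minimal sketch on synthetic data (values illustrative):

```python
import numpy as np

def kasai_phase(x):
    """Mean phase shift (rad/sample) from a complex sample train x."""
    r1 = np.mean(x[1:] * np.conj(x[:-1]))       # lag-1 autocorrelation estimate
    return np.angle(r1)

rng = np.random.default_rng(0)
n = 64
clean = np.exp(1j * 0.3 * np.arange(n))         # constant 0.3 rad Doppler step
noisy = clean + 0.2 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
print(f"Kasai estimate: {kasai_phase(noisy):.3f} rad (true 0.300)")
```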
6

Helin, Mikael. "Inverse Parameter Estimation using Hamilton-Jacobi Equations." Thesis, KTH, Numerisk analys, NA, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-123092.

Abstract:
In this degree project, a solution on a coarse grid is recovered by fitting a partial differential equation to a few known data points. The PDEs considered are the heat equation and Dupire's equation, with their synthetic data, including synthetic data from the Black-Scholes formula. The approach to fitting a PDE is to use optimal control to derive discrete approximations to regularized Hamilton characteristic equations, for which discrete stepping schemes and smoothness parameters are examined. The derived method is tested with a non-parametric numerical implementation, and a few suggestions on possible improvements are then given.
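A toy version of the inverse problem being posed, assumed and much simplified: recover a scalar diffusivity kappa in u_t = kappa*u_xx from a few noisy observations, by a bounded 1-D search over explicit finite-difference solves (the thesis instead uses regularized Hamilton characteristic equations).

```python
import numpy as np
from scipy.optimize import minimize_scalar

def solve_heat(kappa, nx=51, nt=2000, t_end=0.1):
    dx, dt = 1.0 / (nx - 1), t_end / nt         # explicit scheme, stable for kappa <= 2 here
    u = np.sin(np.pi * np.linspace(0, 1, nx))   # initial condition, u = 0 at the walls
    for _ in range(nt):
        u[1:-1] += kappa * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u

obs_idx = [10, 25, 40]                          # a few known data points
data = solve_heat(0.5)[obs_idx] + 1e-3 * np.random.default_rng(2).standard_normal(3)

res = minimize_scalar(lambda k: np.sum((solve_heat(k)[obs_idx] - data) ** 2),
                      bounds=(0.01, 2.0), method="bounded")
print(f"recovered kappa ~ {res.x:.3f} (true 0.5)")
```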
7

Fraleigh, Lisa Marie. "Optimal sensor selection and parameter estimation for real-time optimization." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ40050.pdf.

8

Lee, Dong Jin. "Essays on optimal tests for parameter instability." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2008. http://wwwlib.umi.com/cr/ucsd/fullcit?p3304195.

Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2008.
Title from first page of PDF file (viewed June 16, 2008). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 158-164).
9

Said, Munzir. "Computational optimal control modeling and smoothing for biomechanical systems." University of Western Australia. Dept. of Mathematics and Statistics, 2007. http://theses.library.uwa.edu.au/adt-WU2007.0082.

Abstract:
[Truncated abstract] The study of biomechanical system dynamics consists of research to obtain an accurate model of biomechanical systems and to find appropriate torques or forces that reproduce motions of a biomechanical subject. In the first part of this study, specific computational models are developed to maintain relative angle constraints for 2-dimensional segmented bodies. This is motivated by the fact that models of segmented bodies moving under gravitational acceleration and joint torques may allow their segments to move past the natural relative angle limits. Three models to maintain angle constraints between segments are proposed and compared. These models are: all-time angle constraints, a restoring torque in the state equations, and an exponential penalty model. The models are applied to a 2-D three-segment body to test the behaviour of each model when optimizing torques to minimize an objective. The optimization is run to find torques so that the end effector of the body follows the trajectory of a half circle. The result shows the behaviour of each model in maintaining the angle constraints. The all-time constraints case does not allow torques (at a solution) which make segments move past the constraints, while the other two show a flexibility in handling the angle constraints more similar to a real biomechanical system. With three computational methods to represent the angle constraint, a workable set of initial torques for the motion of a segmented body can be obtained without causing integration failure in the ordinary differential equation (ODE) solver and without the need to use the "blind man method" that restarts the optimal control many times. ... With one layer of penalty weight, balancing between the trajectory compliance penalty and other optimal control objectives (minimizing torque/smoothing torque) is already difficult (as explained by the L-curve phenomenon); adding a second layer of penalty weights for the closeness of fit for each of the body segments will further complicate the weight balancing, and too much trial-and-error computation may be needed to get a reasonably good set of weighting values. Second-order regularization is also added to the optimal control objective, and the optimization has managed to obtain smoother torques for all body joints. To make the current approach more competitive with inverse dynamics, an algorithm to speed up the computation of the optimal control is required as potential future work.
10

Hendriko, ? "Advanced virtual simulation for optimal cutting parameters control in five axis milling." Thesis, Clermont-Ferrand 2, 2014. http://www.theses.fr/2014CLF22464/document.

Abstract:
This study presents a simple method to define the Cutter Workpiece Engagement (CWE) during sculptured surface machining in five-axis milling. The instantaneous CWE is defined by determining two engagement points: the lowermost engagement (LE) point and the uppermost engagement (UE) point. The LE point is calculated using a method called the grazing method, while the UE point is calculated using a combination of discretization and analytical methods. During rough and semi-finish milling, the workpiece surface is represented by vertical vectors. A method called Toroidal-boundary is employed to obtain the UE point when it is located on the toroidal side of the cutting tool, while a method called Cylindrical-boundary is used to calculate the UE point for flat-end cutters and the cylindrical side of toroidal cutters. For a free-form workpiece surface, a hybrid method combining the analytical and discrete methods is used. All the CWE models proposed in this study were verified, and the results proved that the proposed method is accurate. The efficiency of the proposed model in generating the CWE was also compared with the Z-mapping method, and the results confirmed that the proposed model is more efficient in terms of computational time. The CWE model was also applied to support a cutting-force prediction method. The test results showed that the predicted cutting forces are in good agreement with those obtained experimentally.
11

Shahrrava, Behnam. "Indirect stochastic adaptive control using optimal joint parameter and state estimation." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape17/PQDD_0006/NQ32855.pdf.

12

Jonsson, Robin. "Optimal Linear Combinations of Portfolios Subject to Estimation Risk." Thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-28524.

Abstract:
The combination of two or more portfolio rules is theoretically convex in return-risk space, which provides for a new class of portfolio rules that gives purpose to the Mean-Variance framework out-of-sample. The author investigates the performance loss from estimation risk between the unconstrained Mean-Variance portfolio and the out-of-sample Global Minimum Variance portfolio. A new two-fund rule is developed within a specific class of combined rules, between the equally weighted portfolio and a mean-variance portfolio whose covariance matrix is estimated by linear shrinkage. The study shows that this rule performs well out-of-sample when covariance estimation error and bias are balanced, and that it performs at least as well as its peer group in this class of combined rules.
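A rough sketch of such a combined rule, with an arbitrary blend weight rather than the thesis's optimised one: mix the equally weighted portfolio with a global-minimum-variance portfolio built on a Ledoit-Wolf shrunk covariance estimate.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(0)
returns = rng.multivariate_normal(mean=[0.01] * 5,
                                  cov=0.02 * np.eye(5) + 0.005, size=250)

sigma = LedoitWolf().fit(returns).covariance_   # linear-shrinkage covariance
ones = np.ones(5)
w_mv = np.linalg.solve(sigma, ones)
w_mv /= w_mv.sum()                              # global minimum variance weights
w_ew = ones / 5                                 # equally weighted portfolio

alpha = 0.5                                     # illustrative combination weight
print("combined weights:", (alpha * w_ew + (1 - alpha) * w_mv).round(3))
```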
13

Sharp, Jesse A. "Numerical methods for optimal control and parameter estimation in the life sciences." Thesis, Queensland University of Technology, 2022. https://eprints.qut.edu.au/230762/1/Jesse_Sharp_Thesis.pdf.

Abstract:
This thesis concerns numerical methods in mathematical optimisation and inference, with a focus on techniques for optimal control, and for parameter estimation and uncertainty quantification. Novel methodological and computational developments are presented, with a view to improving the efficiency, effectiveness and accessibility of these techniques for practitioners. The numerical methods considered in this work are widely applied throughout the life sciences, in areas including ecology, epidemiology and oncology, and beyond the life sciences in engineering, economics, aeronautics and other disciplines.
14

Ortiz, Joseph Christian. "Estimation of Kinetic Parameters From List-Mode Data Using an Indirect Approach." Diss., The University of Arizona, 2016. http://hdl.handle.net/10150/621785.

Abstract:
This dissertation explores the possibility of using an imaging approach to model classical pharmacokinetic (PK) problems. The kinetic parameters, which describe the uptake rates of a drug within a biological system, are the parameters of interest. Knowledge of the drug uptake in a system is useful in expediting the drug development process, as well as in providing a dosage regimen for patients. Traditionally, the uptake rate of a drug in a system is obtained by sampling the concentration of the drug in a central compartment, usually the blood, and fitting the data to a curve. In a system consisting of multiple compartments, the number of kinetic parameters is proportional to the number of compartments, and in classical PK experiments the number of identifiable parameters is less than the total number of parameters. Using an imaging approach to model classical PK problems, the support region of each compartment within the system is exactly known, and all the kinetic parameters are uniquely identifiable. To solve for the kinetic parameters, an indirect two-step approach was used: first the compartmental activity was obtained from the data, and then the kinetic parameters were estimated. The novel aspect of the research is using list-mode data to obtain the activity curves from a system, as opposed to a traditional binned approach. Using techniques from information-theoretic learning, particularly kernel density estimation, a non-parametric probability density function for the voltage outputs on each photomultiplier tube, for each event, was generated on the fly and used in a least-squares optimization routine to estimate the compartmental activity. The estimability of the activity curves for varying noise levels as well as time-sample densities was explored. Once an estimate for the activity was obtained, the kinetic parameters were estimated using multiple cost functions, which were compared to each other using the mean squared error as the figure of merit.
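The kernel-density step described above is straightforward to illustrate: build a non-parametric density for per-event detector outputs with a Gaussian kernel and evaluate it on a grid. The one-dimensional "PMT voltage" samples below are simulated stand-ins for real list-mode events.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
voltages = np.concatenate([rng.normal(1.0, 0.1, 400),   # photopeak-like cluster
                           rng.normal(0.4, 0.2, 200)])  # scatter-like cluster

kde = gaussian_kde(voltages)                # bandwidth set by Scott's rule
grid = np.linspace(0.0, 1.6, 200)
density = kde(grid)
print("density peak near:", round(grid[np.argmax(density)], 2))
```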
15

Tran, Hong-Thai. "Numerical methods for parameter estimation and optimal control of the Red River network." [S.l.] : [s.n.], 2005. http://deposit.ddb.de/cgi-bin/dokserv?idn=975808583.

16

Rautenberg, Carlos Nicolas. "A Distributed Parameter Approach to Optimal Filtering and Estimation with Mobile Sensor Networks." Diss., Virginia Tech, 2010. http://hdl.handle.net/10919/27103.

Abstract:
In this thesis we develop a rigorous mathematical framework for analyzing and approximating optimal sensor placement problems for distributed parameter systems and apply these results to PDE problems defined by the convection-diffusion equations. The mathematical problem is formulated as a distributed parameter optimal control problem with integral Riccati equations as constraints. In order to prove existence of the optimal sensor network and to construct a framework in which to develop rigorous numerical integration of the Riccati equations, we develop a theory based on Bochner integrable solutions of the Riccati equations. In particular, we focus on $\mathcal{I}_p$-valued continuous solutions of the Bochner integral Riccati equation. We give new results concerning the smoothing effect achieved by multiplying a general strongly continuous mapping by operators in $\mathcal{I}_p$. These smoothing results are essential to the proofs of the existence of Bochner integrable solutions of the Riccati integral equations. We also establish that multiplication by continuous $\mathcal{I}_p$-valued functions improves convergence properties of strongly continuous approximating mappings, and specifically of approximating $C_0$-semigroups. We develop a Galerkin-type numerical scheme for approximating the solutions of the integral Riccati equation and prove convergence of the approximating solutions in the $\mathcal{I}_p$-norm. Numerical examples are given to illustrate the theory.
17

De, Gregorio Ludovica. "Development of new data fusion techniques for improving snow parameters estimation." Doctoral thesis, Università degli studi di Trento, 2019. http://hdl.handle.net/11572/245392.

Abstract:
Water stored in snow is a critical contribution to the world's available freshwater supply and is fundamental to the sustenance of natural ecosystems, agriculture and human societies. The importance of snow for the natural environment and for many socio-economic sectors in several mid- to high-latitude mountain regions around the world leads scientists to continuously develop new approaches to monitor and study snow and its properties. The need to develop new monitoring methods arises from the limitations of in situ measurements, which are pointwise, only possible in accessible and safe locations, and do not allow for continuous monitoring of the evolution of the snowpack and its characteristics. These limitations have been overcome by the increasingly used methods of remote monitoring with space-borne sensors, which allow monitoring of the wide spatial and temporal variability of the snowpack. Snow models, based on modeling the physical processes that occur in the snowpack, are an alternative to remote sensing for studying snow characteristics. However, from the literature it is evident that both remote sensing and snow models suffer from limitations, as well as having significant strengths that would be worth jointly exploiting to achieve improved snow products. Accordingly, the main objective of this thesis is the development of novel methods for the estimation of snow parameters by exploiting the different properties of remote sensing and snow model data. In particular, the following specific novel contributions are presented in this thesis: i. A novel data fusion technique for improving snow cover mapping. The proposed method is based on the exploitation of the snow cover maps derived from the AMUNDSEN snow model and the MODIS product, together with their quality layers, in a decision-level fusion approach by means of a machine learning technique, namely the Support Vector Machine (SVM). ii. A new approach for improving the snow water equivalent (SWE) product obtained from AMUNDSEN model simulations. The proposed method exploits auxiliary information from optical remote sensing and from topographic characteristics of the study area in a new approach that differs from classical data assimilation approaches and is based on the estimation of the AMUNDSEN error with respect to ground data through a k-NN algorithm. The new product has been validated with ground measurement data and by comparison with MODIS snow cover maps. In a second step, the contribution of information derived from X-band SAR imagery acquired by the COSMO-SkyMed constellation has been evaluated, by exploiting simulations from a theoretical model to enlarge the dataset.
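The decision-level fusion in contribution (i) can be sketched with synthetic per-pixel features standing in for the AMUNDSEN and MODIS maps and their quality layers; this shows only the setup, not the thesis's trained system.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 1000
model_map = rng.integers(0, 2, n)           # model snow map (0 = no snow, 1 = snow)
sat_map = rng.integers(0, 2, n)             # satellite snow map
model_q = rng.random(n)                     # quality layers in [0, 1]
sat_q = rng.random(n)
truth = np.where(model_q > sat_q, model_map, sat_map)   # synthetic "ground truth"

X = np.column_stack([model_map, sat_map, model_q, sat_q])
clf = SVC(kernel="rbf").fit(X[:800], truth[:800])
print("holdout accuracy:", (clf.predict(X[800:]) == truth[800:]).mean())
```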
18

Hopkins, Mark A. "Pseudo-linear identification: optimal joint parameter and state estimation of linear stochastic MIMO systems." Diss., Virginia Polytechnic Institute and State University, 1988. http://hdl.handle.net/10919/53941.

Abstract:
This dissertation presents a new method of simultaneous parameter and state estimation for linear, stochastic, discrete-time, multiple-input, multiple-output (MIMO) systems. This new method is called pseudo-linear identification (PLID), and extends an earlier method to the more general case where system input and output measurements are corrupted by noise. PLID can be applied to completely observable, completely controllable systems with known structure (i.e., known observability indexes) and unknown parameters. No assumptions on pole and zero locations are required, and no assumptions on relative degree are required, except that the system transfer functions must be strictly proper. Under standard Gaussian assumptions on the various noises, for time-invariant systems in the class described above, it is proved that PLID is the optimal estimator (in the mean-square-error sense) of the states and the parameters, conditioned on the output measurements. It is also proved, under a reasonable assumption of persistent excitation, that the PLID parameter estimates converge a.e. to the true parameter values of the unknown system. For deterministic systems, it is proved that PLID exactly identifies the states and parameters in the minimum possible time, so-called deadbeat identification. The proof brings out an interesting relation between the estimate error propagation and the observability matrix of the time-varying extended system (the extended system incorporates the unknown parameters into the state vector). This relation gives rise to an intuitively appealing notion of persistent excitation. Some results of system identification simulations are presented. Several different cases are simulated, including a two-input, two-output system with non-minimum-phase zeros, and an unstable system. A comparison of PLID with the widely used extended Kalman filter is presented for a single-input, single-output system with near cancellation of a pole-zero pair. Results are also presented from simulations of the adaptive control of an unstable two-input, two-output system. In these simulations, PLID is used in a self-tuning regulator to identify the parameters needed to compute the feedback gain matrix, and (simultaneously) to estimate the system states for the state feedback.
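The snippet below is not PLID; it illustrates the joint estimation problem PLID addresses, using the textbook device the dissertation compares against: augment the state of x_{k+1} = a*x_k + b*u_k + w_k with the unknown parameters (a, b) and run an extended Kalman filter on the augmented, now nonlinear, system (all values illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)
a_true, b_true, q, r = 0.9, 0.5, 1e-4, 1e-2
x, z = 0.0, np.array([0.0, 0.5, 0.0])       # z = [x_hat, a_hat, b_hat]
P = np.eye(3)
Q = np.diag([q, 1e-6, 1e-6])                # small parameter drift keeps gains alive

for k in range(500):
    u = np.sin(0.1 * k)                     # persistently exciting input
    x = a_true * x + b_true * u + rng.normal(0.0, np.sqrt(q))
    y = x + rng.normal(0.0, np.sqrt(r))
    F = np.array([[z[1], z[0], u],          # Jacobian of f(z) = [a*x + b*u, a, b]
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
    z = np.array([z[1] * z[0] + z[2] * u, z[1], z[2]])   # predict
    P = F @ P @ F.T + Q
    H = np.array([[1.0, 0.0, 0.0]])         # only the state is measured
    K = P @ H.T / (H @ P @ H.T + r)
    z = z + (K * (y - z[0])).ravel()        # update
    P = (np.eye(3) - K @ H) @ P

print(f"a ~ {z[1]:.3f} (true 0.9), b ~ {z[2]:.3f} (true 0.5)")
```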
19

Karimli, Nigar. "Parameter Estimation and Optimal Design Techniques to Analyze a Mathematical Model in Wound Healing." TopSCHOLAR®, 2019. https://digitalcommons.wku.edu/theses/3114.

Abstract:
For this project, we use a modified version of a previously developed mathematical model which describes the relationships among matrix metalloproteinases (MMPs), their tissue inhibitors (TIMPs), and extracellular matrix (ECM). Our ultimate goal is to quantify and understand differences in parameter estimates between patients in order to predict future responses and individualize treatment for each patient. By analyzing parameter confidence intervals and confidence and prediction intervals for the state variables, we develop a parameter-space reduction algorithm that results in better future response predictions for each individual patient. Moreover, another subset selection method, Structured Covariance Analysis, which considers the identifiability of parameters, is also used in this work. Furthermore, to estimate parameters more efficiently and accurately, the standard error (SE-)optimal design method is employed, which calculates optimal observation times at which clinical data should be collected. Finally, by combining different parameter subset selection methods with an optimal design problem, different cases for finding both optimal time points and optimal intervals have been investigated.
20

Faugeroux, Olivier. "Caractérisation thermophysique de revêtements de protection thermomécanique par méthode photothermique impulsionnelle." Perpignan, 2001. http://www.theses.fr/2001PERP0459.

21

Smith, J. R. "Design of experiments for the precise estimation of the optimum, economic optimum and parameters for one factor inverse polynomial models." Thesis, University of Reading, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.380109.

22

Xiong, Hao. "Constrained expectation-maximization (EM), dynamic analysis, linear quadratic tracking, and nonlinear constrained expectation-maximization (EM) for the analysis of genetic regulatory networks and signal transduction networks." Thesis, [College Station, Tex. : Texas A&M University, 2008. http://hdl.handle.net/1969.1/ETD-TAMU-2332.

23

Gorynin, Ivan. "Bayesian state estimation in partially observable Markov processes." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLL009/document.

Abstract:
This thesis addresses the Bayesian estimation of hybrid-valued state variables in time series. The probability density function of a hybrid-valued random variable has a finite-discrete component and a continuous component. Diverse general algorithms for state estimation in partially observable Markov processes are introduced. These algorithms are compared with sequential Monte-Carlo methods from a theoretical and a practical viewpoint. The main result is that the proposed methods require less processing time than the classic Monte-Carlo methods.
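For context, here is a minimal bootstrap particle filter, the classic sequential Monte-Carlo baseline such methods are compared against (toy scalar model and illustrative parameters; the thesis's own algorithms differ).

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, q, r = 50, 500, 0.1, 0.1
x = 0.0
particles = rng.normal(0.0, 1.0, N)

for t in range(T):
    x = 0.8 * x + np.sin(x) + rng.normal(0.0, np.sqrt(q))       # true state
    y = x + rng.normal(0.0, np.sqrt(r))                          # observation
    particles = (0.8 * particles + np.sin(particles)
                 + rng.normal(0.0, np.sqrt(q), N))               # propagate
    w = np.exp(-0.5 * (y - particles) ** 2 / r)                  # likelihood weights
    w /= w.sum()
    estimate = np.sum(w * particles)
    particles = particles[rng.choice(N, N, p=w)]                 # resample

print(f"final estimate {estimate:.3f}, true state {x:.3f}")
```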
24

Torres, Marcella. "DETERMINATION OF OPTIMAL PARAMETER ESTIMATES FOR MEDICAL INTERVENTIONS IN HUMAN METABOLISM AND INFLAMMATION." VCU Scholars Compass, 2019. https://scholarscompass.vcu.edu/etd/5890.

Abstract:
In this work we have developed three ordinary differential equation models of biological systems: body mass change in response to exercise, immune system response to a general inflammatory stimulus, and the immune system response in atherosclerosis. The purpose of developing such computational tools is to test hypotheses about the underlying biological processes that drive system outcomes as well as possible real medical interventions. Therefore, we focus our analysis on understanding key interactions between model parameters and outcomes to deepen our understanding of these complex processes as a means to developing effective treatments in obesity, sarcopenia, and inflammatory diseases. We develop a model of the dynamics of muscle hypertrophy in response to resistance exercise and have shown that the parameters controlling response vary between male and female group means in an elderly population. We further explore this individual variability by fitting to data from a clinical obesity study. We then apply logistic regression and classification tree methods to the analysis of between- and within-group differences in underlying physiology that lead to different long-term body composition outcomes following a diet or exercise program. Finally, we explore dieting strategies using optimal control methods. Next, we extend an existing model of inflammation to include different macrophage phenotypes. Complications with this phenotype switch can result in the accumulation of too many of either type and lead to chronic wounds or disease. With this model we are able to reproduce the expected timing of sequential influx of immune cells and mediators in a general inflammatory setting. We then calibrate this base model for the sequential response of immune cells with peritoneal cavity data from mice. Next, we develop a model for plaque formation in atherosclerosis by adapting the current inflammation model to capture the progression of macrophages to inflammatory foam cells in response to cholesterol consumption. The purpose of this work is ultimately to explore points of intervention that can lead to homeostasis.
25

Huang, Renke. "Seamless design of energy management systems." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53518.

Abstract:
The contributions of the research are (a) an infrastructure of data acquisition systems that provides the necessary information for an automated EMS enabling autonomous distributed state estimation, model validation, simplified protection, and seamless integration of other EMS applications, (b) an object-oriented, interoperable, and unified component model that can be seamlessly integrated with a variety of EMS applications, (c) a distributed dynamic state estimator (DDSE) based on the proposed data acquisition system and the object-oriented, interoperable, and unified component model, (d) a physically-based synchronous machine model, expressed in terms of the actual self and mutual inductances of the synchronous machine windings as a function of rotor position, for the purpose of synchronous machine parameter identification, and (e) a robust and highly efficient algorithm for the optimal power flow (OPF) problem, one of the most important applications of the EMS, based on the validated states and models of the power system provided by the proposed DDSE.
26

Galvanin, Federico. "Optimal model-based design of experiments in dynamic systems: novel techniques and unconventional applications." Doctoral thesis, Università degli studi di Padova, 2010. http://hdl.handle.net/11577/3427095.

Abstract:
Model-based design of experiments (MBDoE) techniques are a very useful tool for the rapid assessment and development of dynamic deterministic models, providing significant support to the model identification task in a broad range of process engineering applications. These techniques make it possible to maximise the information content of an experimental trial by acting on the settings of an experiment in terms of initial conditions, profiles of the manipulated inputs, and the number and time location of the output measurements. Despite their popularity, standard MBDoE techniques are still affected by some limitations. In fact, when a set of constraints is imposed on the system inputs or outputs, factors like uncertainty in the prior parameter estimates and structural system/model mismatch may lead the design procedure to plan experiments that turn out, in practice, to be suboptimal (i.e. scarcely informative) and/or infeasible (i.e. violating the constraints imposed on the system). Additionally, standard MBDoE techniques were originally developed assuming discrete acquisition of information, and therefore do not consider that information on the system could be acquired very frequently if the system responses could be recorded continuously. In this Dissertation three novel MBDoE methodologies are proposed to address the above issues. First, a strategy for the online model-based redesign of experiments is developed, where the manipulated inputs are updated while an experiment is still running. Thanks to intermediate parameter estimations, the information is exploited as soon as it is generated by an experiment, with great benefit in terms of precision and accuracy of the final parameter estimates and of experimental time. Secondly, a general methodology is proposed to formulate and solve the experiment design problem by explicitly taking into account the presence of parametric uncertainty, so as to ensure by design both feasibility and optimality of an experiment. A prediction of the system responses for the given parameter distribution is used to evaluate and update suitable backoffs from the nominal constraints, which are used in the design session in order to keep the system within a feasible region with specified probability. Finally, a design criterion particularly suitable for systems where continuous measurements are available is proposed, in order to optimise the information dynamics of the experiment from the very beginning of the trial. This approach allows the design procedure to be tailored to the specificity of the measurement system. A further contribution of this Dissertation is aimed at assessing the general applicability of both standard and advanced MBDoE techniques to the biomedical area, where unconventional experiment design applications are faced. In particular, two identification problems are considered: one related to optimal drug administration in cancer chemotherapy, and one related to glucose homeostasis models for subjects affected by type 1 diabetes mellitus (T1DM). Particular attention is drawn to the optimal design of clinical tests for the parametric identification of detailed physiological models of T1DM. In this latter case, advanced MBDoE techniques are used to ensure a safe and optimally informative clinical test for model identification.
The practicability and effectiveness of a complex approach simultaneously taking into account the redesign-based and the backoff-based MBDoE strategies are also shown. The proposed experiment design procedure provides alternative test protocols that are sufficiently short and easy to carry out, and allow for a precise, accurate and safe estimation of the model parameters defining the metabolic portrait of a diabetic subject.
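The core "design on the expected information" step can be shown in miniature: for a one-parameter model y(t) = exp(-k*t) with a preliminary estimate of k, choose the sampling times that maximise the scalar Fisher information. Everything below is an illustrative stand-in for the far richer dynamic designs discussed above.

```python
import numpy as np
from itertools import combinations

k_prior = 0.5                                   # preliminary parameter estimate
candidates = np.linspace(0.5, 10.0, 20)         # allowed sampling times

def information(times, k=k_prior, sigma=0.05):
    t = np.asarray(times)
    sens = -t * np.exp(-k * t)                  # sensitivity dy/dk at each time
    return np.sum(sens ** 2) / sigma ** 2       # scalar Fisher information

best = max(combinations(candidates, 3), key=information)
print("most informative sampling times:", np.round(best, 2))
```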
27

Jarullah, Aysar Talib. "Kinetic Modelling Simulation and Optimal Operation of Trickle Bed Reactor for Hydrotreating of Crude Oil. Kinetic Parameters Estimation of Hydrotreating Reactions in Trickle Bed Reactor (TBR) via Pilot Plant Experiments; Optimal Design and Operation of an Industrial TBR with Heat Integration and Economic Evaluation." Thesis, University of Bradford, 2011. http://hdl.handle.net/10454/5363.

Abstract:
Catalytic hydrotreating (HDT) is a mature process technology practiced in the petroleum refining industries to treat oil fractions for the removal of impurities (such as sulfur, nitrogen, metals and asphaltene). Hydrotreating of whole crude oil is a new technology and is regarded as one of the more difficult tasks that have not been reported widely in the literature. In order to obtain useful models for the HDT process that can be confidently applied to reactor design, operation and control, accurate estimation of the kinetic parameters of the relevant reaction scheme is required. This thesis aims to develop a crude oil hydrotreating process (based on hydrotreating of whole crude oil followed by distillation) with high efficiency, selectivity and minimum energy consumption via pilot plant experiments, mathematical modelling and optimization. To estimate the kinetic parameters and to validate the kinetic models under different operating conditions, a set of experiments was carried out in a continuous-flow isothermal trickle bed reactor using crude oil as a feedstock and commercial cobalt-molybdenum on alumina (Co-Mo/γ-Al2O3) as a catalyst. The reactor temperature was varied from 335°C to 400°C, the hydrogen pressure from 4 to 10 MPa and the liquid hourly space velocity (LHSV) from 0.5 to 1.5 hr-1, keeping the hydrogen-to-oil ratio (H2/Oil) constant at 250 L/L. The main hydrotreating reactions were hydrodesulfurization (HDS), hydrodenitrogenation (HDN), hydrodeasphaltenization (HDAs) and hydrodemetallization (HDM), which includes hydrodevanadization (HDV) and hydrodenickelation (HDNi). An optimization technique is used to evaluate the best kinetic models of a trickle-bed reactor (TBR) process utilized for HDS, HDAs, HDN, HDV and HDNi of crude oil based on pilot plant experiments. The minimization of the sum of squared errors (SSE) between the experimental and estimated concentrations of sulfur (S), nitrogen (N), asphaltene (Asph), vanadium (V) and nickel (Ni) compounds in the products is used as the objective function in the optimization problem, using two approaches (linear (LN) and non-linear (NLN) regression). The demand for high-quality middle distillates is growing worldwide, whereas the demand for low-value oil products, such as heavy oils and residues, is decreasing. Thus, maximizing the production of more liquid distillates of very high quality is of immediate interest to refiners. At the same time, environmental legislation has led to stricter specifications of petroleum derivatives. Crude oil hydrotreatment enhances the productivity of distillate fractions due to chemical reactions. The hydrotreated crude oil was distilled into the following fractions (using a distillation pilot plant unit): light naphtha (L.N), heavy naphtha (H.N), heavy kerosene (H.K), light gas oil (L.G.O) and reduced crude residue (R.C.R), in order to compare the yield of these fractions produced by distillation after the HDT process with those produced by conventional methods (i.e. HDT of each fraction separately after distillation). The middle distillates showed a greater yield than those produced by conventional methods, in addition to improved properties of the R.C.R. Kinetic models that enhance oil distillate productivity are also proposed, based on the experimental data obtained in a pilot plant at different operating conditions using the discrete kinetic lumping approach.
The kinetic models of crude oil hydrotreating are assumed to include five lumps: gases (G), naphtha (N), heavy kerosene (H.K), light gas oil (L.G.O) and reduced crude residue (R.C.R). For all experiments, the sum of squared errors (SSE) between the experimental product compositions and the predicted compositions is minimized using an optimization technique. The kinetic models developed are then used to describe and analyse the behaviour of an industrial trickle bed reactor (TBR) used for crude oil hydrotreating with the optimal quench system, based on experiments, in order to evaluate the viability of large-scale processing of crude oil hydrotreating. The optimal distribution of the catalyst bed (in terms of optimal reactor length to diameter) with the best quench position and quench rate is investigated, based upon the total annual cost. Energy consumption is very important for reducing environmental impact and maximizing the profitability of operation. Since high temperatures are employed in hydrotreating (HDT) processes, hot effluents can be used to heat other cold process streams. While energy consumption and recovery issues may be ignored for pilot plant experiments, they cannot be ignored for large-scale operations. Here, the heat integration of the HDT process during hydrotreating of crude oil in a trickle bed reactor is addressed in order to recover most of the external energy. Experimental information obtained at pilot scale, kinetics and reactor modelling tools, and commercial process data are employed for the heat-integrated process model. The optimization problem is formulated to optimize some of the design and operating parameters of the integrated process, with minimization of the overall annual cost as the objective function. An economic analysis of the continuous whole industrial refining process that combines the developed (heat-integrated) hydrotreating unit with the other complementary units (up to the units used to produce middle distillate fractions) is also presented. In all cases considered in this study, the gPROMS (general PROcess Modelling System) package has been used for modelling, simulation and parameter estimation via optimization.
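For each impurity lump, the estimation step described above amounts to a least-squares fit of Arrhenius-type power-law kinetics to pilot-plant concentrations. The following is a minimal sketch of that non-linear regression, assuming a hypothetical data set and an analytically integrated isothermal plug-flow approximation of the trickle-bed reactor; the thesis uses a more detailed reactor model and gPROMS rather than Python.

```python
import numpy as np
from scipy.optimize import least_squares

R = 8.314  # J/(mol K)

# Analytical outlet concentration of an isothermal plug-flow (trickle-bed)
# reactor for an n-th order reaction with Arrhenius kinetics; the residence
# time is approximated by 1/LHSV.
def outlet_conc(theta, c_in, temp_k, lhsv):
    k0, ea, n = theta
    k = k0 * np.exp(-ea / (R * temp_k)) / lhsv
    return (c_in ** (1.0 - n) + (n - 1.0) * k) ** (1.0 / (1.0 - n))

# Hypothetical pilot-plant runs: T (K), LHSV (1/hr), measured outlet sulfur (wt%).
c_in = 2.0
runs = np.array([
    [608.0, 0.5, 0.45], [648.0, 0.5, 0.12], [673.0, 0.5, 0.05],
    [608.0, 1.0, 0.85], [648.0, 1.0, 0.33], [673.0, 1.0, 0.15],
    [608.0, 1.5, 1.10], [648.0, 1.5, 0.52], [673.0, 1.5, 0.28],
])

def residuals(theta):
    pred = outlet_conc(theta, c_in, runs[:, 0], runs[:, 1])
    return pred - runs[:, 2]

# Minimise the SSE with a bounded non-linear regression (the NLN approach).
fit = least_squares(residuals, x0=[1e6, 8e4, 1.5],
                    bounds=([0.0, 1e4, 1.0 + 1e-6], [1e12, 3e5, 3.0]))
print("k0, Ea, n =", fit.x, " SSE =", 2 * fit.cost)
```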
Tikrit University, Iraq
APA, Harvard, Vancouver, ISO, and other styles
29

Nguyen, Van Tri. "Adjoint-based approach for estimation & sensor location on 1D hyperbolic systems with applications in hydrology & traffic." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAT063/document.

Full text
Abstract:
The thesis proposes a general framework for both state/parameter estimation and sensor placement in nonlinear infinite-dimensional hyperbolic systems. The work is therefore divided into two main parts: a first part devoted to optimal estimation and a second one to optimal sensor location. The estimation method is based on the calculus of variations and the use of Lagrange multipliers. The Lagrange multipliers play an important role in giving access to the sensitivities of the measurements with respect to the variables to be estimated. These sensitivities, described by the adjoint equations, are also the key idea of a new approach, the so-called adjoint-based approach, to optimal sensor placement. Various examples, based either on simulations with synthetic measurements or on real data sets, and for different scenarios, are also studied to illustrate the effectiveness of the developed approaches. These examples concern overland flow systems and traffic flow, both governed by nonlinear hyperbolic partial differential equations.
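The role the abstract gives the Lagrange multipliers can be made concrete on a toy discrete-time analogue: the multipliers obey a backward (adjoint) recursion, and the gradient of the measurement misfit with respect to a model parameter follows from them. A minimal sketch, assuming a scalar linear system rather than the hyperbolic PDEs treated in the thesis:

```python
import numpy as np

# Toy system: x_{k+1} = a*x_k + b*u_k, misfit J(a) = sum_k (x_k - y_k)^2.
rng = np.random.default_rng(0)
N, b, a_true = 50, 0.5, 0.8
u = rng.standard_normal(N)

def simulate(a):
    x = np.zeros(N + 1)
    for k in range(N):
        x[k + 1] = a * x[k] + b * u[k]
    return x

y = simulate(a_true)[1:]  # synthetic observations of x_1..x_N

def misfit_and_gradient(a):
    x = simulate(a)
    r = x[1:] - y
    J = np.sum(r ** 2)
    # Backward (adjoint) recursion for the Lagrange multipliers:
    # lam_k = 2*(x_k - y_k) + a*lam_{k+1}, with lam_{N+1} = 0.
    lam = np.zeros(N + 2)
    for k in range(N, 0, -1):
        lam[k] = 2.0 * r[k - 1] + a * lam[k + 1]
    # dJ/da = sum_{k=0}^{N-1} lam_{k+1} * x_k (sensitivity of the misfit).
    dJ = np.sum(lam[1:N + 1] * x[:N])
    return J, dJ

J, g = misfit_and_gradient(0.6)
# The adjoint gradient matches a finite-difference check:
eps = 1e-6
fd = (misfit_and_gradient(0.6 + eps)[0] - J) / eps
print(g, fd)
```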
APA, Harvard, Vancouver, ISO, and other styles
30

Ayoub, Houssein. "Prolifération des cellules T dans des conditions lymphopéniques : modélisation, estimation des paramètres et analyse mathématique." Thesis, Bordeaux, 2014. http://www.theses.fr/2014BORD0059/document.

Full text
Abstract:
T lymphocytes are a fundamental component of the immune system that can recognise and respond to foreign antigens by virtue of their clonally expressed T cell antigen receptor (TCR). T cells that have yet to encounter the antigen they recognise are termed 'naive', as they have not been activated to respond. Homeostatic mechanisms maintain the number of T cells at an approximately constant level by controlling cell division and death. In normal replete hosts, cell turnover within the naive compartment is very low and naive cells are maintained in a resting state. However, disruption of the homeostatic balance can arise from a wide variety of causes (viral infection (e.g. HIV), or drugs used in peritransplant induction therapy or cancer chemotherapy) and can result in T cell deficiency or T lymphopenia. Under conditions of T lymphopenia, naive T cells undergo cell division with a subtle change in the cell surface phenotype (CD44 expression), termed homeostatic proliferation or lymphopenia-induced proliferation (LIP). In this thesis, our purpose is to understand the process of T cell homeostasis through a mathematical approach. At first, we build a new model that describes the proliferation of T cells in vitro under lymphopenic conditions. Our nonlinear model is composed of ordinary differential equations and partial differential equations structured by age (maturity of cell) and CD44 expression. To better understand the homeostasis of T cells, we identify the parameters that define T cell division by using experimental data. Next, we consider an age-structured model system describing T cell homeostasis in vivo, and we investigate its asymptotic behaviour. Finally, an optimal strategy is applied in the in vivo model to rebuild immunity under conditions of T lymphopenia.
APA, Harvard, Vancouver, ISO, and other styles
31

Dey, Vivek. "A Supervised Approach For The Estimation Of Parameters Of Multiresolution Segementation And Its Application In Building Feature Extraction From VHR Imagery." Thesis, Fredericton: University of New Brunswick, 2011. http://hdl.handle.net/1882/35388.

Full text
Abstract:
With the advent of very high spatial resolution (VHR) satellites, spatial details within the image scene have increased considerably. This led to the development of object-based image analysis (OBIA) for the analysis of VHR satellite images. Image segmentation is the fundamental step for OBIA. However, a large number of techniques exist for RS image segmentation. To identify the best ones for VHR imagery, a comprehensive literature review on image segmentation was performed. Based on that review, it was found that multiresolution segmentation, as implemented in the commercial software eCognition, is the most widely used technique and has been successfully applied to a wide variety of VHR images. However, multiresolution segmentation suffers from the parameter estimation problem. Therefore, this study proposes a solution to the problem of parameter estimation to improve its efficiency in VHR image segmentation. The solution aims to identify the optimal parameters, which correspond to optimal segmentation. The solution is drawn from the equations related to the merging of any two adjacent objects in multiresolution segmentation, and utilizes spectral, shape, size and neighbourhood relationships in a supervised manner. In order to justify the results of the solution, a global segmentation accuracy evaluation technique is also proposed. The solution performs excellently with VHR images of different sensors, scenes and land cover classes. In order to demonstrate the applicability of the solution to a real-life problem, a building detection application based on multiresolution segmentation with the estimated parameters is carried out. The accuracy of the building detection is found to be nearly eighty percent. Finally, it can be concluded that the proposed solution is fast, easy to implement and effective for the intended applications.
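The merging equations referred to above are those of the published multiresolution segmentation criterion (Baatz and Schäpe), in which the cost of fusing two adjacent objects mixes spectral and shape heterogeneity and a merge is admissible while the cost stays below the squared scale parameter. A sketch of that fusion cost follows; the per-band weights are omitted for brevity and eCognition's internal bookkeeping may differ in detail.

```python
import numpy as np

def merged_band_std(o1, o2):
    """Per-band standard deviation of the union of two objects,
    pooled from their pixel counts, means and standard deviations."""
    n = o1["n"] + o2["n"]
    mean = (o1["n"] * o1["mean"] + o2["n"] * o2["mean"]) / n
    ex2 = (o1["n"] * (o1["std"] ** 2 + o1["mean"] ** 2)
           + o2["n"] * (o2["std"] ** 2 + o2["mean"] ** 2)) / n
    return np.sqrt(np.maximum(ex2 - mean ** 2, 0.0)), n

def fusion_cost(o1, o2, merged_geom, w_color=0.9, w_compact=0.5):
    """Increase in weighted heterogeneity caused by merging o1 and o2.
    Objects carry n (pixels), mean/std (per band), perimeter l and
    bounding-box perimeter b; merged_geom carries l and b of the union."""
    std_m, n_m = merged_band_std(o1, o2)
    h_color = np.sum(n_m * std_m
                     - (o1["n"] * o1["std"] + o2["n"] * o2["std"]))
    def compact(o, n):   # perimeter deviation from a square of equal area
        return n * o["l"] / np.sqrt(n)
    def smooth(o, n):    # perimeter deviation from the bounding box
        return n * o["l"] / o["b"]
    h_cmpct = (compact(merged_geom, n_m)
               - compact(o1, o1["n"]) - compact(o2, o2["n"]))
    h_smth = (smooth(merged_geom, n_m)
              - smooth(o1, o1["n"]) - smooth(o2, o2["n"]))
    h_shape = w_compact * h_cmpct + (1.0 - w_compact) * h_smth
    return w_color * h_color + (1.0 - w_color) * h_shape
```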
APA, Harvard, Vancouver, ISO, and other styles
32

Marushkevych, Dmytro. "Asymptotic study of covariance operator of fractional processes : analytic approach with applications." Thesis, Le Mans, 2019. http://www.theses.fr/2019LEMA1010/document.

Full text
Abstract:
Eigenproblems frequently arise in the theory and applications of stochastic processes, but only a few have explicit solutions. Those which do are usually solved by reduction to the generalized Sturm-Liouville theory for differential operators. The more general eigenproblems are not solvable in closed form, and the subject of this thesis is the asymptotic spectral analysis of fractional Gaussian processes and its applications. In the first part, we develop a methodology for the spectral analysis of fractional-type covariance operators, corresponding to an important family of processes that includes the fractional Ornstein-Uhlenbeck process, the integrated fractional Brownian motion and the mixed fractional Brownian motion. We obtain accurate second-order asymptotic approximations for both the eigenvalues and the eigenfunctions. In Chapter 2 we consider the covariance eigenproblem for Gaussian bridges. We show how the spectral asymptotics of a bridge can be derived from that of its base process, considering, as an example, the case of the fractional Brownian bridge. In the final part we consider three representative applications of the developed theory: the filtering problem of fractional Gaussian signals in white noise, large deviation properties of the maximum likelihood drift parameter estimator for the Ornstein-Uhlenbeck process driven by mixed fractional Brownian motion, and small ball probabilities for fractional Gaussian processes.
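For readers who want a numerical counterpart to these asymptotics, the covariance eigenproblem can be approximated by a Nyström-type discretisation of the covariance kernel. The sketch below does this for fractional Brownian motion on [0, 1]; it is a numerical check on, not a substitute for, the analytic second-order expansions derived in the thesis.

```python
import numpy as np

def fbm_covariance(H, m):
    """Midpoint-rule discretisation of the fBm covariance operator
    K(s,t) = (s^{2H} + t^{2H} - |s-t|^{2H}) / 2 on [0, 1]."""
    h = 1.0 / m
    t = (np.arange(m) + 0.5) * h
    s, u = np.meshgrid(t, t)
    K = 0.5 * (s ** (2 * H) + u ** (2 * H) - np.abs(s - u) ** (2 * H))
    return K * h, t

H, m = 0.7, 2000
A, t = fbm_covariance(H, m)
eigvals, eigvecs = np.linalg.eigh(A)
eigvals = eigvals[::-1]        # descending operator eigenvalues
print(eigvals[:5])             # lambda_1 >= lambda_2 >= ...
# eigvecs[:, -1] * np.sqrt(m) approximates the first eigenfunction on t.
```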
APA, Harvard, Vancouver, ISO, and other styles
33

John, Yakubu M. "Kinetic modelling simulation and optimal operation of fluid catalytic cracking of crude oil: Hydrodynamic investigation of riser gas phase compressibility factor, kinetic parameter estimation strategy and optimal yields of propylene, diesel and gasoline in fluid catalytic cracking unit." Thesis, University of Bradford, 2018. http://hdl.handle.net/10454/17323.

Full text
Abstract:
The Fluidized Catalytic Cracking (FCC) process is known for its ability to convert refinery wastes into useful fuels such as gasoline, diesel and lighter products such as ethylene and propylene, which are major building blocks for polyethylene and polypropylene production. It is the most important unit of the refinery. However, changes in the quality and nature of crude oil blend feedstocks, environmental changes and the desire to obtain higher profitability lead to many alternative operating conditions of the FCC riser. There are two major reactors in the FCC unit: the riser and the regenerator. The production objective of the riser is the maximisation of gasoline and diesel, but it can also be used to maximise products like propylene, butylene, etc. The regenerator serves to regenerate spent or deactivated catalyst. To realise these objectives, mathematical models of the riser, disengage-stripping section, cyclones and regenerator were adopted from the literature, modified, and then used on the gPROMS model builder platform to create a virtual form of the FCC unit. A new parameter estimation technique was developed in this research and used to estimate new kinetic parameters for a new six-lump kinetic model based on an industrial unit. Research outputs have resulted in the following major product yields: gasoline (plant: 47.31 wt%, simulation: 48.63 wt%) and diesel (plant: 18.57 wt%, simulation: 18.42 wt%), which readily validates the new estimation methodology as well as the kinetic parameters estimated. The same methodology was used to estimate kinetic parameters for a new kinetic reaction scheme that considers propylene as a single lump. The yield of propylene was found to be 4.59 wt%, which is consistent with published data. For the first time, a Z-factor correlation analysis was used in the riser simulation to improve the hydrodynamics. It was found that different Z-factor correlations predicted different riser operating pressures (90-279 kPa) and temperatures, as well as different riser products. The Z-factor correlation of Heidaryan et al. (2010a) was found to represent the condition of the riser and, depending on the catalyst-to-oil ratio, ranges from 1.06 at the inlet of the riser to 0.92 at the exit. Optimisation was carried out to maximise gasoline and propylene in the riser and minimise CO2 in the regenerator. An increase of 4.51% in gasoline, an increase of 8.93 wt% in propylene as a single lump and a 5.24% reduction in carbon dioxide emissions were achieved. Finally, varying the riser diameter was found to have very little effect on the yields of the riser products.
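The Heidaryan et al. (2010a) correlation coefficients are not reproduced here. As a hedged illustration of how a gas-phase compressibility factor enters the riser hydrodynamics, the sketch below instead solves the cubic Redlich-Kwong equation of state for Z from reduced temperature and pressure; this is a stand-in, not the correlation used in the thesis, and the pseudo-critical properties are assumed values.

```python
import numpy as np

def z_factor_rk(T, P, Tc, Pc):
    """Vapour-phase compressibility factor from the Redlich-Kwong EOS,
    written as Z^3 - Z^2 + (A - B - B^2) Z - A B = 0 in reduced form."""
    Tr, Pr = T / Tc, P / Pc
    A = 0.42748 * Pr / Tr ** 2.5
    B = 0.08664 * Pr / Tr
    roots = np.roots([1.0, -1.0, A - B - B ** 2, -A * B])
    real = roots[np.abs(roots.imag) < 1e-9].real
    return real.max()   # largest real root = vapour branch

# Hypothetical riser gas at 800 K and 250 kPa with assumed pseudo-critical
# properties of a light hydrocarbon mixture.
print(z_factor_rk(T=800.0, P=250e3, Tc=430.0, Pc=3.0e6))
```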
APA, Harvard, Vancouver, ISO, and other styles
34

Jarachi, Fatah. "Filtrage de systemes a partir d'observations ponctuelles et application a l'identification d'un modele biologique monocompartimental." Toulouse 3, 1986. http://www.theses.fr/1986TOU30025.

Full text
Abstract:
The intensity of a counting process is a function of a state process. We study the nonlinear filtering problem of estimating a function of the state process given a realization of the counting process. The solution is a differential equation governing the evolution of this function. For a linear intensity, the equations for all moments of the state estimate are written recursively. A suboptimal filter is proposed and partitioned estimation of the state parameters is studied. This theory is applied to the estimation of local cerebral blood flow.
APA, Harvard, Vancouver, ISO, and other styles
35

Raman, Dhruva Venkita. "On the identifiability of highly parameterised models of physical processes." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:f58aa335-db0a-495b-8eef-1ddb363cbd19.

Full text
Abstract:
This thesis is concerned with drawing out high-level insight from otherwise complex mathematical models of physical processes. This is achieved through detailed analysis of model behaviour as constituent parameters are varied. A particular focus is the well-posedness of parameter estimation from noisy data, and its relationship to the parametric sensitivity properties of the model. Other topics investigated include the verification of model performance properties over large ranges of parameters, and the simplification of models based upon their response to parameter perturbation. Several methodologies are proposed, which account for various model classes. However, shared features of the models considered include nonlinearity, parameters with considerable scope for variability, and experimental data corrupted by significant measurement uncertainty. We begin by considering models described by systems of nonlinear ordinary differential equations with parameter dependence. Model output, in this case, can only be obtained by numerical integration of the relevant equations. Therefore, assessment of model behaviour over tracts of parameter space is usually carried out by repeated model simulation over a grid of parameter values. We instead reformulate this assessment as an algebraic problem, using polynomial programming techniques. The result is an algorithm that produces parameter-dependent algebraic functions that are guaranteed to bound user-defined aspects of model behaviour over parameter space. We then consider more general classes of parameter-dependent model. A theoretical framework is constructed through which we can explore the duality between model sensitivity to non-local parameter perturbations, and the well-posedness of parameter estimation from significantly noisy data. This results in an algorithm that can uncover functional relations on parameter space over which model output is insensitive and parameters cannot be estimated. The methodology used derives from techniques of nonlinear optimal control. We use this algorithm to simplify benchmark models from the systems biology literature. Specifically, we uncover features such as fast-timescale subsystems and redundant model interactions, together with the sets of parameter values over which the features are valid. We finally consider parameter estimation in models that are acknowledged to imperfectly describe the modelled process. We show that this invalidates standard statistical theory associated with uncertainty quantification of parameter estimates. Alternative theory that accounts for this situation is then developed, resulting in a computationally tractable approximation of the covariance of a parameter estimate with respect to noise-induced fluctuation of experimental data.
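One concrete way to expose the duality between parametric sensitivity and well-posedness of estimation is to eigen-decompose the Fisher information built from the model's sensitivity matrix: near-zero eigenvalues mark parameter combinations the data cannot resolve. The sketch below illustrates this on a deliberately non-identifiable toy model (only the product of the two parameters enters the output); it is a simplification of, not the method in, the thesis.

```python
import numpy as np

t = np.linspace(0.1, 5.0, 60)

def model(theta):
    # Only theta[0]*theta[1] affects the output, so one direction in
    # parameter space is structurally unidentifiable.
    return np.exp(-theta[0] * theta[1] * t)

def sensitivity_matrix(theta, eps=1e-6):
    y0 = model(theta)
    S = np.empty((t.size, theta.size))
    for j in range(theta.size):
        d = np.zeros_like(theta); d[j] = eps
        S[:, j] = (model(theta + d) - y0) / eps
    return S

theta = np.array([2.0, 0.5])
S = sensitivity_matrix(theta)
fim = S.T @ S                   # Fisher information (unit noise variance)
w, V = np.linalg.eigh(fim)
print("FIM eigenvalues:", w)
# The eigenvector of the ~0 eigenvalue is tangent to the insensitive curve
# theta0*theta1 = const, i.e. proportional to (theta0, -theta1).
print("insensitive direction:", V[:, 0])
```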
APA, Harvard, Vancouver, ISO, and other styles
36

Ekman, Mats. "Modeling and Control of Bilinear Systems : Application to the Activated Sludge Process." Doctoral thesis, Uppsala University, Department of Information Technology, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-5940.

Full text
Abstract:

This thesis concerns modeling and control of bilinear systems (BLS). BLS are linear in the state and linear in the control, but not jointly linear in both. In the first part of the thesis, a background to BLS and their applications to modeling and control is given. The second part, and likewise the principal theme of this thesis, is dedicated to theoretical aspects of identification, modeling and control of mainly BLS, but also linear systems. In the last part of the thesis, applications of bilinear and linear modeling and control to the activated sludge process (ASP) are given.

APA, Harvard, Vancouver, ISO, and other styles
37

El, Heda Khadijetou. "Choix optimal du paramètre de lissage dans l'estimation non paramétrique de la fonction de densité pour des processus stationnaires à temps continu." Thesis, Littoral, 2018. http://www.theses.fr/2018DUNK0484/document.

Full text
Abstract:
This thesis focuses on the choice of the smoothing parameter in the context of non-parametric estimation of the density function for stationary ergodic continuous-time processes. The accuracy of the estimation depends greatly on the choice of this parameter. The main goal of this work is to build an automatic bandwidth selection procedure and establish its asymptotic properties under a general dependency framework that can be easily used in practice. The manuscript is divided into three parts. The first part reviews the literature on the subject, sets out the state of the art and situates our contribution within it. In the second part, we design an automatic method for selecting the smoothing parameter when the density is estimated by the kernel method. This choice, stemming from the cross-validation method, is asymptotically optimal. In the third part, we establish asymptotic properties, given by almost-sure convergence results with rates, for the bandwidth obtained by the cross-validation method.
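The cross-validation selector studied here has a simple computational core for a Gaussian kernel: the least-squares cross-validation score depends only on pairwise differences of the observations. A minimal sketch follows; i.i.d. data is used purely for illustration, whereas the thesis treats dependent continuous-time observations.

```python
import numpy as np

def lscv_score(x, h):
    """Least-squares cross-validation score for a Gaussian-kernel
    density estimate: integral(fhat^2) - (2/n) * sum_i fhat_{-i}(x_i)."""
    n = x.size
    d = (x[:, None] - x[None, :]) / h
    phi2 = np.exp(-d ** 2 / 4.0) / np.sqrt(4.0 * np.pi)   # N(0,2) kernel
    phi1 = np.exp(-d ** 2 / 2.0) / np.sqrt(2.0 * np.pi)   # N(0,1) kernel
    int_f2 = phi2.sum() / (n ** 2 * h)
    loo = (phi1.sum() - n * phi1[0, 0]) / ((n - 1) * n * h)
    return int_f2 - 2.0 * loo

rng = np.random.default_rng(1)
x = rng.standard_normal(500)
grid = np.linspace(0.05, 1.0, 60)
h_cv = grid[np.argmin([lscv_score(x, h) for h in grid])]
print("cross-validated bandwidth:", h_cv)
```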
APA, Harvard, Vancouver, ISO, and other styles
38

Picart, Delphine. "Modélisation et estimation des paramètres liés au succès reproducteur d'un ravageur de la vigne (Lobesia botrana DEN. & SCHIFF.)." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2009. http://tel.archives-ouvertes.fr/tel-00405686.

Full text
Abstract:
The objective of this thesis is to develop a mathematical model for studying and understanding the population dynamics of an insect pest, the European grapevine moth (Lobesia botrana), in its ecosystem. The proposed model is a system of hyperbolic partial differential equations (PDEs) describing the numerical variations of the population over time as a function of developmental stage, sex of individuals and environmental conditions. Food resources, temperature, humidity and predation are the main environmental factors in the model explaining fluctuations in the number of individuals over time. Developmental differences within a cohort of the moth are also modelled to refine the model's predictions. Model parameters are estimated from experimental data obtained by INRA entomologists. The fitted model then allows us to study some biological and ecological aspects of the insect, such as the impact of climate scenarios on female egg-laying or on the dynamics of vine attack by young larvae. The mathematical and numerical analyses of the model and of the parameter estimation problems are developed in this thesis.
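As a hedged sketch of this kind of hyperbolic stage-structured model (not the thesis model itself, whose stage, sex and environmental structure is richer), the snippet below advances a McKendrick-type transport equation dn/dt + dn/da = -mu(a) n with an egg-laying boundary condition, using a first-order upwind scheme; all rates are assumed values.

```python
import numpy as np

# Grid in age a and time t; CFL = dt/da = 1 makes the upwind step exact here.
da = dt = 0.01
a = np.arange(0.0, 30.0, da)          # physiological age (days)
mu = 0.02 + 0.001 * a                 # age-dependent mortality (assumed)
beta = np.where((a > 12) & (a < 25), 5.0, 0.0)  # egg-laying rate (assumed)

n = np.exp(-0.5 * ((a - 2.0) / 0.5) ** 2)       # initial cohort density
for _ in range(int(60.0 / dt)):       # simulate 60 days
    births = np.sum(beta * n) * da    # boundary condition at a = 0
    n[1:] = n[:-1] - dt * mu[1:] * n[1:]   # upwind transport + mortality
    n[0] = births
print("final population:", n.sum() * da)
```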
APA, Harvard, Vancouver, ISO, and other styles
39

Scarlat, Raul Cristian [Verfasser], Georg [Akademischer Betreuer] Heygster, Gunnar [Gutachter] Spreen, and Leif Toudal [Gutachter] Pedersen. "Improving an optimal estimation algorithm for surface and atmospheric parameter retrieval using passive microwave data in the Arctic / Raul Cristian Scarlat ; Gutachter: Gunnar Spreen, Leif Toudal Pedersen ; Betreuer: Georg Heygster." Bremen : Staats- und Universitätsbibliothek Bremen, 2018. http://d-nb.info/1169299059/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Morris, Evan Daniel. "The role of the low-density lipoprotein receptor in transport and metabolism of LDL through the wall of normal rabbit aorta in vivo. Estimation of model parameters from optimally designed dual-tracer experiments." Case Western Reserve University School of Graduate Studies / OhioLINK, 1991. http://rave.ohiolink.edu/etdc/view?acc_num=case1055528562.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Castaings, William. "Analyse de sensibilité et estimation de paramètres pour la modélisation hydrologique : potentiel et limitations des méthodes variationnelles." Grenoble 1, 2007. http://www.theses.fr/2007GRE10154.

Full text
Abstract:
The rainfall-runoff transformation is characterized by the complexity of the processes involved and by the limited observability of the atmospheric forcing, catchment properties and hydrological response. It is therefore essential to understand, analyse and reduce the uncertainty inherent to hydrological modelling (sensitivity and uncertainty analysis, data assimilation). Variational methods are widely used in other scientific disciplines (e.g. meteorology, oceanography) facing the same challenges. In this work, they were applied to hydrological models characterised by different modelling paradigms (reductionist vs. systemic) and runoff generation mechanisms (infiltration excess vs. saturation excess). The potential and limitations of variational methods for catchment hydrology are illustrated with MARINE, from the Toulouse Fluid Mechanics Institute (IMFT), and with two models (an event-based flood model and a continuous water balance model) based on TOPMODEL concepts developed at the Laboratory of Environmental Hydrology (LTHE). Forward and adjoint sensitivity analysis provide a local but extensive insight into the relation between model inputs and prognostic variables. The gradient of a performance measure (characterising the misfit with observations), calculated with the adjoint model, efficiently drives a bound-constrained quasi-Newton optimization algorithm for the estimation of model parameters. The results obtained are very encouraging and plead for an extensive use of the variational approach to understand and corroborate the processes described in hydrological models, but also to estimate the model control variables (calibration of model parameters and state estimation using data assimilation).
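Once a misfit and its gradient are available, the calibration loop described above is a few lines: a bound-constrained quasi-Newton method (L-BFGS-B) driven by the gradient. A minimal sketch with a toy single-store runoff model follows; the gradient here is obtained by a forward sensitivity recursion for brevity, where the thesis computes it with the adjoint model, and all data are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
P = rng.exponential(2.0, 200)                 # rainfall forcing (mm)

def simulate(tau, return_sens=False):
    """Linear store: S_{k+1} = S_k (1 - 1/tau) + P_k, Q_k = S_k / tau,
    with the forward sensitivity dS/dtau carried alongside."""
    S = dS = 0.0
    Q, dQ = np.empty(P.size), np.empty(P.size)
    for k, p in enumerate(P):
        Q[k], dQ[k] = S / tau, dS / tau - S / tau ** 2
        S, dS = S * (1 - 1 / tau) + p, dS * (1 - 1 / tau) + S / tau ** 2
    return (Q, dQ) if return_sens else Q

Q_obs = simulate(8.0) + 0.05 * rng.standard_normal(P.size)

def misfit(theta):
    Q, dQ = simulate(theta[0], return_sens=True)
    r = Q - Q_obs
    return np.sum(r ** 2), np.array([2.0 * np.sum(r * dQ)])

res = minimize(misfit, x0=[3.0], jac=True, method="L-BFGS-B",
               bounds=[(1.5, 50.0)])
print("calibrated residence time:", res.x[0])
```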
APA, Harvard, Vancouver, ISO, and other styles
42

Avramidis, Stefanos. "Simulation and parameter estimation of spectrophotometric instruments ." Thesis, KTH, Numerical Analysis and Computer Science, NADA, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-12292.

Full text
Abstract:

The paper and the graphics industries use two instruments with different optical geometry (d/0 and 45/0) to measure the quality of paper prints. The instruments have been reported to yield incompatible measurements and even rank samples differently in some cases, causing communication problems between these sectors of industry. A preliminary investigation concluded that the inter-instrument difference could be significantly influenced by external factors (background, calibration, heterogeneity of the medium). A simple methodology for eliminating these external factors and thereby minimizing the instrument differences has been derived. The measurements showed that, when the external factors are eliminated, and there is no fluorescence or gloss influence, the inter-instrument difference becomes small, depends on the instrument geometry, and varies systematically with the scattering, absorption, and transmittance properties of the sample. A detailed description of the impact of the geometry on the results has been presented for a large sample range. Simulations with the radiative transfer model DORT2002 showed that the instruments' measurements follow the physical radiative transfer model except in cases of samples with extreme properties. The conclusion is that the physical explanation of the geometrical inter-instrument differences is based on the different degree of light permeation of the two geometries, which eventually results in a different degree of influence from near-surface bulk scattering. It was also shown that the d/0 instrument fulfils the assumptions of a diffuse field of reflected light from the medium only for samples that resemble the perfect diffuser, but it yields an anisotropic field of reflected light when there is significant absorption or transmittance. In the latter case, the 45/0 proves to be less anisotropic than the d/0. In the process, the computational performance of DORT2002 has been significantly improved. After the modification of DORT2002 to include the 45/0 geometry, the Gauss-Newton optimization algorithm was qualified as the most appropriate one for the solution of the inverse problem, after testing different optimization methods for performance, stability and accuracy. Finally, a new homotopic initial-value algorithm for routine tasks (spectral calculations) was introduced, which resulted in a further three-fold speedup of the whole algorithm.
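The Gauss-Newton algorithm singled out above for the inverse problem is easy to state generically: linearise the residual around the current iterate and solve a small least-squares system for the update. A minimal sketch, with a numerical Jacobian and an illustrative two-parameter model (not the DORT2002 forward model):

```python
import numpy as np

def gauss_newton(residual, theta0, tol=1e-10, max_iter=50, eps=1e-7):
    """Minimise ||residual(theta)||^2 by Gauss-Newton with a
    finite-difference Jacobian; returns the final iterate."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        r = residual(theta)
        J = np.empty((r.size, theta.size))
        for j in range(theta.size):
            d = np.zeros_like(theta); d[j] = eps
            J[:, j] = (residual(theta + d) - r) / eps
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)   # solve J s = -r
        theta = theta + step
        if np.linalg.norm(step) < tol:
            break
    return theta

# Illustrative inverse problem: recover (a, b) in y = a * exp(-b * x).
x = np.linspace(0.0, 2.0, 30)
y = 3.0 * np.exp(-1.2 * x)
theta_hat = gauss_newton(lambda th: th[0] * np.exp(-th[1] * x) - y, [1.0, 0.5])
print(theta_hat)    # ~ [3.0, 1.2]
```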


PaperOpt, Paper Optics and Colour
APA, Harvard, Vancouver, ISO, and other styles
43

Vlk, Jan. "Návrh a evaluace moderních systémů řízení letu." Doctoral thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2021. http://www.nusl.cz/ntk/nusl-445472.

Full text
Abstract:
This thesis focuses on research into modern automatic flight control methods and their verification with respect to the current state of the art and the future use of unmanned aerial systems. It presents the design process of an automatic flight control system, with emphasis on Model-Based Design approaches. An integral part of this process is the development of a mathematical model of the aircraft, which was used to synthesize the control laws and to build a simulation framework for evaluating the stability and control performance of the automatic flight control system. The core of the thesis is devoted to the synthesis of control laws based on a unique combination of optimal and adaptive control theory. The investigated control laws were integrated into a digital flight control system enabling highly precise automatic flight. The final part of the thesis deals with the verification and analysis of the designed flight control system and is divided into three phases. The first phase evaluates robustness and analyses the stability and robustness of the designed flight control system in the frequency domain. The second phase, the evaluation of control performance, was carried out through computer simulations using the developed mathematical models in the time domain. In the last phase, the designed flight control system was integrated into an experimental aircraft serving as a test platform for future unmanned aerial systems, and was evaluated in a series of flight experiments.
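As a hedged illustration of the optimal-control building block of such flight control laws (the thesis combines optimal and adaptive elements), the sketch below computes an LQR state-feedback gain for a toy short-period pitch model; the coefficients are assumed values, not the thesis aircraft.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy short-period longitudinal dynamics, x = [angle of attack, pitch rate],
# u = elevator deflection; coefficients are illustrative only.
A = np.array([[-1.2, 1.0],
              [-5.0, -2.5]])
B = np.array([[0.0],
              [8.0]])
Q = np.diag([10.0, 1.0])    # state weighting
R = np.array([[1.0]])       # control weighting

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)      # optimal feedback u = -K x
print("LQR gain:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```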
APA, Harvard, Vancouver, ISO, and other styles
44

Li, Qiaochu. "Contribution à la planification d'expériences, à l'estimation et au diagnostic actif de systèmes dynamiques non linéaires : application au domaine aéronautique." Thesis, Compiègne, 2015. http://www.theses.fr/2015COMP2231/document.

Full text
Abstract:
In this work, we study the uncertainty integration problem in a bounded-error context for dynamic systems whose inputs and initial states have to be optimized so that certain other operations can be carried out more easily and accurately. The work consists of six chapters. Chapter 1 is an introduction to the general subject under discussion. Chapter 2 presents the basic tools of interval analysis. Chapter 3 is dedicated to state estimation and parameter estimation. We first explain how to solve ordinary differential equations using interval analysis, which is the basic tool for the state estimation problem given the initial conditions of the studied systems. We then turn to the parameter estimation problem, also treated with interval analysis: based on a simple hypothesis on the uncertain variables, we compute the system parameters in bounded-error form, treating operations on intervals as operations on sets. Guaranteed results are the advantage of interval analysis, but the large computation time remains an obstacle to its popularization in many nonlinear estimation fields. We present techniques to accelerate these time-consuming processes, known as contractors in the constraint propagation field. At the end of the chapter, different examples provide test cases for the proposed methods. Chapter 4 presents the search for optimal inputs in the context of interval analysis, which is an original approach. We have constructed several new criteria enabling this search; some are intuitive, others require a theoretical proof. These criteria have been used to search for optimal initial states and to obtain better parameter estimation results. Comparisons are made on several applications and the efficiency of certain criteria is demonstrated. In Chapter 5, we apply the approaches presented above to diagnosis via state and parameter estimation. We have developed a complete procedure for diagnosis, and the optimal input design is reconsidered in an active diagnosis context, with an aeronautical application. The last chapter gives a brief summary of the work and some further research directions in the perspectives section. All the algorithms proposed in this work were developed in C++ and use an interval computation environment.
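A flavour of the interval tools used throughout: a forward-backward contractor on the constraint y = x1 + x2 shrinks the boxes without discarding any feasible point. The sketch below uses a bare-bones interval type; the thesis works in C++ with a full constraint-propagation environment.

```python
# Minimal interval arithmetic and a forward-backward contractor for the
# constraint y = x1 + x2 (no feasible point is ever discarded).
def intersect(a, b):
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    if lo > hi:
        raise ValueError("empty intersection: constraint infeasible")
    return (lo, hi)

def add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def sub(a, b):
    return (a[0] - b[1], a[1] - b[0])

def contract_sum(x1, x2, y):
    y = intersect(y, add(x1, x2))    # forward pass:  y must lie in x1 + x2
    x1 = intersect(x1, sub(y, x2))   # backward pass: x1 must lie in y - x2
    x2 = intersect(x2, sub(y, x1))   #                x2 must lie in y - x1
    return x1, x2, y

# Example: prior boxes x1 in [0,5], x2 in [0,5], measurement y in [6,7].
print(contract_sum((0, 5), (0, 5), (6, 7)))
# -> x1 and x2 both contract to [1, 5]; y stays [6, 7].
```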
APA, Harvard, Vancouver, ISO, and other styles
45

Beek, Jaap van de. "Estimation of synchronization parameters." Licentiate thesis, Luleå tekniska universitet, Signaler och system, 1996. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-16971.

Full text
Abstract:
This thesis deals with the estimation of synchronization parameters in Orthogonal Frequency Division Multiplexing (OFDM) communication systems and in active ultrasonic measuring systems. Estimation methods for the timing and frequency offset and for the attenuation taps of the frequency-selective channel are presented and investigated. In OFDM communication systems, the timing offset of the transmitted data frame is one important parameter. This offset provides the receiver with a means of synchronizing its sampling clock to that of the transmitter. A second important parameter is the offset in the carrier frequency used by the receiver to demodulate the received signal. For OFDM systems using a cyclic prefix, the joint Maximum Likelihood (ML) estimation of the timing and carrier frequency offset is introduced. The redundancy introduced by the prefix is exploited optimally. This novel method is derived for a non-dispersive channel. Its performance, however, is also evaluated for a frequency-selective Rayleigh-fading radio channel. Time dispersion causes an irreducible error floor in this estimator's performance. This error floor is the limiting factor for the applicability of the timing estimator. Depending on the requirements, it may be used in either an acquisition or a tracking mode. For the frequency estimator the error floor is low enough to allow for stable frequency tracking. A low-complexity variant of the timing offset estimator is presented, allowing a simple implementation. This is the ML estimator given a 2-bit representation of the received signal as the sufficient statistics. Its performance is evaluated for a frequency-selective Rayleigh-fading radio channel and for a twisted-pair copper channel. Simulations show this estimator to have a similar error floor to the full-resolution ML estimator. The problem of estimating the propagation time of a signal is also of interest in active pulse-echo systems, such as are used in, e.g., radar, medical imaging, and geophysics. The Minimum Mean Squared Error (MMSE) estimator of arrival time is derived and investigated for an active airborne ultrasound measurement system. Besides performing better than the conventional Maximum a Posteriori (MAP) estimator, this method can be used to develop different estimators in situations where the system Signal to Noise Ratio (SNR) is unknown. Coherent multi-amplitude OFDM receivers generally need to compensate for a frequency-selective channel in order to detect transmitted data symbols reliably. For this purpose, a channel equalizer needs to be fed estimates of the subchannel attenuations. The linear MMSE estimator of these attenuations is presented. Of all linear estimators, this estimator optimally makes use of the frequency correlation between the subchannel attenuations. Low-complexity modified estimators are proposed and investigated. The proposed modifications cause an irreducible error floor in this estimator's performance, but simulations show that for SNR values up to 20 dB, the improvement of a modified estimator compared to the Least Squares (LS) estimator is at least 3 dB.
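The joint ML estimator described above has a compact implementation: correlate each candidate window of the received signal with its copy N samples later (the cyclic prefix guarantees this redundancy), weight by an SNR-dependent energy term, and read the frequency offset from the phase of the correlation. A sketch, assuming a single OFDM symbol in AWGN and a known SNR; all signal parameters below are illustrative.

```python
import numpy as np

def ml_timing_frequency(r, N, L, snr):
    """Joint ML timing/frequency-offset estimate exploiting the cyclic
    prefix: theta maximises |gamma(m)| - rho*Phi(m), and the frequency
    offset (in subcarrier spacings) is -angle(gamma(theta))/(2*pi)."""
    rho = snr / (snr + 1.0)
    M = r.size - N - L
    metric = np.empty(M)
    gamma = np.empty(M, dtype=complex)
    for m in range(M):
        seg, lagged = r[m:m + L], r[m + N:m + N + L]
        gamma[m] = np.sum(seg * np.conj(lagged))
        phi = 0.5 * np.sum(np.abs(seg) ** 2 + np.abs(lagged) ** 2)
        metric[m] = np.abs(gamma[m]) - rho * phi
    theta = int(np.argmax(metric))
    eps = -np.angle(gamma[theta]) / (2.0 * np.pi)
    return theta, eps

# Synthetic OFDM symbol with cyclic prefix, delay and frequency offset.
rng = np.random.default_rng(3)
N, L, delay, eps_true, snr = 256, 32, 40, 0.12, 100.0
sym = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
tx = np.concatenate([sym[-L:], sym])                  # prepend cyclic prefix
r = np.zeros(600, dtype=complex)
r[delay:delay + N + L] = tx
r *= np.exp(2j * np.pi * eps_true * np.arange(r.size) / N)
r += np.sqrt(1 / (2 * snr)) * (rng.standard_normal(r.size)
                               + 1j * rng.standard_normal(r.size))
print(ml_timing_frequency(r, N, L, snr))   # ~ (40, 0.12)
```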
APA, Harvard, Vancouver, ISO, and other styles
46

Lim, Chern Beverly Brenda. "Optimal parameters for tissue fusion." Thesis, Imperial College London, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.501197.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Richter, Andreas. "Estimation of radio channel parameters." Ilmenau : ISLE, 2005. http://deposit.d-nb.de/cgi-bin/dokserv?idn=981051421.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Garcia, Sandrine. "Conception optimale d'experiences et estimation de parametres thermophysiques de materiaux composites par algorithmes genetiques." Nantes, 1999. http://www.theses.fr/1999NANT2031.

Full text
Abstract:
Thermophysical characterization of composite materials is a crucial issue. The precision of the estimation can be improved if the experiments are designed judiciously. However, the parametric study traditionally employed for experimental optimization is limited, and gradient-based estimation methods are unstable for ill-conditioned problems. The objectives of this work were to develop robust methodologies for the optimal design of experiments (ODE) intended for the identification of thermal properties, and for the simultaneous estimation of parameters (SEP). The approach relies on the advantageous features of genetic algorithms (GAs) and consists, for optimal experiment design, in maximizing an optimality criterion based on the sensitivity matrix of the sought properties and, for simultaneous parameter estimation, in minimizing the least-squares error. A general program based on GAs was developed in two versions, allowing either the analysis of analytical models or a finite-volume scheme. The genetic methodologies for optimal experiment design and simultaneous parameter estimation were first applied to test cases from the literature, then to novel cases. The studies concern the thermal characterization of anisotropic carbon/epoxy composites, and include optimal experiment designs with three, five and seven experimental parameters, and simultaneous estimations of three, seven and nine thermophysical parameters. Finally, the estimation methodology was extended to the characterization of the cure kinetics of thermosetting resins, involving six kinetic parameters. The genetic method proved remarkably robust, despite very strong correlations and low sensitivities of several parameters in all the studies. The interest of using GAs lies not only in the solution of ill-posed problems but also in the reduction of experimental cost, and therefore time, by allowing optimal experiment designs and simultaneous estimation of several parameters at once.
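A bare-bones version of the genetic machinery used for simultaneous parameter estimation, minimising a least-squares error with tournament selection, blend crossover and Gaussian mutation; the toy model, rates and population settings below are illustrative, not the thesis settings, and the thesis couples GAs with analytical or finite-volume thermal models.

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0.0, 1.0, 40)
y_obs = 2.0 + 3.0 * np.exp(-4.0 * t)          # synthetic "measurements"

def sse(theta):                                # least-squares objective
    return np.sum((theta[0] + theta[1] * np.exp(-theta[2] * t) - y_obs) ** 2)

lo, hi = np.zeros(3), np.full(3, 10.0)         # search box
pop = rng.uniform(lo, hi, size=(60, 3))
for gen in range(200):
    fit = np.array([sse(p) for p in pop])
    new = [pop[fit.argmin()].copy()]           # elitism: keep the best
    while len(new) < len(pop):
        # tournament selection (size 2) of two parents
        i, j = rng.integers(len(pop), size=2), rng.integers(len(pop), size=2)
        p1 = pop[i[np.argmin(fit[i])]]
        p2 = pop[j[np.argmin(fit[j])]]
        a = rng.uniform(-0.25, 1.25, 3)        # per-gene blend (BLX) crossover
        child = a * p1 + (1 - a) * p2
        # occasional Gaussian mutation of the whole chromosome
        child += rng.standard_normal(3) * 0.05 * (hi - lo) * (rng.random() < 0.2)
        new.append(np.clip(child, lo, hi))
    pop = np.array(new)
print("best parameters:", pop[np.array([sse(p) for p in pop]).argmin()])
```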
APA, Harvard, Vancouver, ISO, and other styles
49

Mamduhi, Mohammadhossein. "Optimal Distributed Estimation-Deterministic Framework." Thesis, KTH, Reglerteknik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-105126.

Full text
Abstract:
Estimation theory has always been an important and necessary tool in dealing with complex systems and control engineering, since its birth in the 18th century. In recent decades, with the rise of distributed systems, estimation over networks has been of great interest among scientists, and much effort has been devoted to the various aspects of this problem. An important question in solving estimation problems, whether over networks or for a single system, is how reliable the obtained estimate is, or in other words, how close our estimate is to the quantity being estimated. Undoubtedly, a good estimate is one which produces the least error. This leads us to combine estimation theory with optimization techniques to obtain the best estimate of a given variable, which is called optimal estimation. In control systems theory, the optimal estimation problem can be posed for a static system, which does not evolve in time, or for a dynamic system, which changes with time. Moreover, from another point of view, we can divide the common problems into two frameworks, the stochastic estimation problem and the deterministic estimation problem, of which the latter has received less attention. In fact, treating a problem in the deterministic framework is harder than in the stochastic case, since we are not allowed to use the convenient properties of stochastic random variables. In this Master thesis, the optimal estimation problem over distributed systems consisting of a finite number of players is treated in the deterministic framework and in a static setting. We consider a special case of the estimation problem in which the measurements available to different players are completely decoupled from each other; in other words, no player has access to the other players' information spaces. We derive the mathematical conditions for this problem as well as the optimal estimate minimizing the given cost function. For ease of understanding, some numerical examples are provided, and the performance of the given approach is derived. This thesis consists of five chapters. Chapter 1 gives a brief introduction to the problems considered in this thesis and their history. Chapter 2 introduces the mathematical tools used in the thesis through the solution of a very classic problem in estimation theory. In Chapter 3, we treat the main part of this thesis, the static team estimation problem. In Chapter 4, we look at the performance of the derived estimators and compare our results with the available numerical solutions. Chapter 5 is a short conclusion stating the main results and summarizing the main points of the thesis.
APA, Harvard, Vancouver, ISO, and other styles
50

Müller, Werner, and Dale L. Zimmerman. "Optimal Design for Variogram Estimation." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 1997. http://epub.wu.ac.at/756/1/document.pdf.

Full text
Abstract:
The variogram plays a central role in the analysis of geostatistical data. A valid variogram model is selected and the parameters of that model are estimated before kriging (spatial prediction) is performed. These inference procedures are generally based upon examination of the empirical variogram, which consists of average squared differences of data taken at sites lagged the same distance apart in the same direction. The ability of the analyst to estimate variogram parameters efficiently is affected significantly by the sampling design, i.e., the spatial configuration of sites where measurements are taken. In this paper, we propose design criteria that, in contrast to some previously proposed criteria oriented towards kriging with a known variogram, emphasize the accurate estimation of the variogram. These criteria are modifications of design criteria that are popular in the context of (nonlinear) regression models. The two main distinguishing features of the present context are, first, that the addition of a single site to the design produces as many new lags as there are existing sites, and hence also produces that many new squared differences from which the variogram is estimated; and secondly, that those squared differences are generally correlated, which inhibits the use of many standard design methods that rest upon the assumption of uncorrelated errors. Several approaches to design construction which account for these features are described and illustrated with two examples. We compare their efficiency to simple random sampling and regular and space-filling designs and find considerable improvements. (author's abstract)
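The empirical variogram at the centre of these design criteria is computed from average squared differences of data at comparable lags. A minimal isotropic version follows (ignoring direction, which the paper's criteria do account for); sites and data are synthetic.

```python
import numpy as np

def empirical_variogram(coords, z, bin_edges):
    """Isotropic empirical semivariogram: for each lag bin,
    gamma(h) = mean of (z_i - z_j)^2 / 2 over site pairs whose
    separation distance falls in the bin."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = 0.5 * (z[:, None] - z[None, :]) ** 2
    iu = np.triu_indices(len(z), k=1)          # count each pair once
    d, sq = d[iu], sq[iu]
    gamma = []
    for a, b in zip(bin_edges[:-1], bin_edges[1:]):
        m = (d >= a) & (d < b)
        gamma.append(sq[m].mean() if m.any() else np.nan)
    return np.asarray(gamma)

rng = np.random.default_rng(5)
coords = rng.uniform(0, 10, size=(80, 2))      # sampling design sites
z = np.sin(coords[:, 0]) + 0.3 * rng.standard_normal(80)
print(empirical_variogram(coords, z, np.linspace(0, 5, 6)))
```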
Series: Forschungsberichte / Institut für Statistik
APA, Harvard, Vancouver, ISO, and other styles