Dissertations / Theses on the topic 'Stochastic control theory'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Stochastic control theory.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Browse dissertations / theses across a wide variety of disciplines and organise your bibliography correctly.
Beard, Rodney. "Ito stochastic control theory, stochastic differential games and the economic theory of mobile pastoralism /." [St. Lucia, Qld.], 2005. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe18631.pdf.
Hunt, K. J. "Stochastic optimal control theory with application in self-tuning control." Thesis, University of Strathclyde, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.382399.
Cao, Jie. "Stochastic inventory control in dynamic environments." [Gainesville, Fla.] : University of Florida, 2005. http://purl.fcla.edu/fcla/etd/UFE0011469.
Brand, Samuel P. C. "Spatial and stochastic epidemics : theory, simulation and control." Thesis, University of Warwick, 2012. http://wrap.warwick.ac.uk/56738/.
Hao, Xiao Qi. "The main development of stochastic control problems." Thesis, University of Macau, 2017. http://umaclib3.umac.mo/record=b3691355.
Zhou, Yulong. "Stochastic control and approximation for Boltzmann equation." HKBU Institutional Repository, 2017. https://repository.hkbu.edu.hk/etd_oa/392.
Damm, Tobias. "Rational matrix equations in stochastic control /." Berlin [u.a.] : Springer, 2004. http://www.loc.gov/catdir/enhancements/fy0817/2003066858-d.html.
Zeryos, Mihail. "Bayesian pursuit analysis and singular stochastic control." Thesis, Imperial College London, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.338932.
Kabouris, John C. "Stochastic control of the activated sludge process." Diss., Georgia Institute of Technology, 1994. http://hdl.handle.net/1853/20306.
Huang, Hui. "Optimal control of piecewise continuous stochastic processes." Bonn : [s.n.], 1989. http://catalog.hathitrust.org/api/volumes/oclc/23831217.html.
Cheng, Tak Sum. "Stochastic optimal control in randomly-branching environments." HKBU Institutional Repository, 2006. http://repository.hkbu.edu.hk/etd_ra/713.
Pesonen, Joonas. "Stochastic Estimation and Control over WirelessHART Networks: Theory and Implementation." Thesis, KTH, Reglerteknik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-105172.
Hassan, Nofal Adrees. "Adaptive-stochastic identification of an idling automotive I.C. engine." Thesis, University of Liverpool, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.316554.
Zhang, Lei. "Stochastic optimal control and regime switching : applications in economics." Thesis, University of Warwick, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.387250.
Silva, Francisco Jose. "Interior penalty approximation for optimal control problems. Optimality conditions in stochastic optimal control theory." Palaiseau, Ecole polytechnique, 2010. http://pastel.archives-ouvertes.fr/docs/00/54/22/95/PDF/tesisfjsilva.pdf.
English abstract: This thesis is divided into two parts. In the first one we consider deterministic optimal control problems and we study interior approximations for two model problems with non-negativity constraints. The first model is a quadratic optimal control problem governed by a nonautonomous affine ordinary differential equation. We provide a first-order expansion for the penalized state and adjoint state (around the corresponding state and adjoint state of the original problem), for a general class of penalty functions. Our main argument relies on the following fact: if the optimal control satisfies strict complementarity conditions for its Hamiltonian, except for a set of times with null Lebesgue measure, the functional estimates of the penalized optimal control problem can be derived from the estimates of a related finite-dimensional problem. Our results provide three types of measures to analyze the penalization technique: error estimates for the control, error estimates for the state and the adjoint state, and error estimates for the value function. The second model we study is the optimal control problem of a semilinear elliptic PDE with a Dirichlet boundary condition, where the control variable is distributed over the domain and is constrained to be non-negative. Following the same approach as in the first model, we consider an associated family of penalized problems, whose solutions define a central path converging to the solution of the original one. In this fashion, we are able to extend the results obtained in the ODE framework to the case of semilinear elliptic PDE constraints. In the second part of the thesis we consider stochastic optimal control problems. We begin with the study of a stochastic linear quadratic problem with non-negativity control constraints and we extend the error estimates for the approximation by logarithmic penalization. The proof is based on the stochastic Pontryagin principle and a duality argument. Next, we deal with a general stochastic optimal control problem with convex control constraints. Using the variational approach, we are able to obtain first- and second-order expansions for the state and cost function around a local minimum. This analysis allows us to prove general first-order necessary conditions and, under a geometrical assumption on the constraint set, second-order necessary conditions are also established.
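To make the interior-penalty idea above concrete, here is a schematic of a logarithmic barrier applied to a non-negativity control constraint (illustrative notation only, not taken from the thesis, which also treats a more general class of penalty functions):
\[
\min_{u(\cdot)\ge 0}\int_0^T \ell\big(x(t),u(t)\big)\,dt
\quad\longrightarrow\quad
\min_{u(\cdot)> 0}\int_0^T \Big[\ell\big(x(t),u(t)\big)-\varepsilon\log u(t)\Big]\,dt,
\qquad \dot x(t)=A(t)x(t)+B(t)u(t)+c(t).
\]
The penalized optimal pairs \((x_\varepsilon,u_\varepsilon)\) form a central path, and the error estimates mentioned above quantify their distance to the constrained optimum as \(\varepsilon\downarrow 0\).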
Liao, Jiali; Banerjee, Avijit; Benson, Hande Y. "A discretionary stopping problem in stochastic control: an application in credit exposure control /." Philadelphia, Pa. : Drexel University, 2006. http://dspace.library.drexel.edu/handle/1860%20/888.
Ortiz, Olga L. "Stochastic inventory control with partial demand observability." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/22551.
Full textCommittee Co-Chair: Alan L Erera; Committee Co-Chair: Chelsea C, White III; Committee Member: Julie Swann; Committee Member: Paul Griffin; Committee Member: Soumen Ghosh.
Brown, Emma L. "On-line control of paper web formation using stochastic distribution theory." Thesis, University of Manchester, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.488349.
Chen, Hairong. "Dynamic admission and dispatching control of stochastic distribution systems /." View Abstract or Full-Text, 2003. http://library.ust.hk/cgi/db/thesis.pl?IEEM%202003%20CHEN.
Includes bibliographical references (leaves 117-130). Also available in electronic version. Access restricted to campus users.
Chen, Gong. "Schemes for using LQG control strategy in the design of regulators for stochastic systems." Thesis, Cranfield University, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.305393.
Iourtchenko, Daniil V. "Optimal bounded control and relevant response analysis for random vibrations." Link to electronic thesis, 2001. http://www.wpi.edu/Pubs/ETD/Available/etd-0525101-111407.
Keywords: Stochastic optimal control; dynamic programming; Hamilton-Jacobi-Bellman equation; random vibration; energy balance method. Includes bibliographical references (p. 86-89).
Yang, Lin. "Linear robust H-infinity stochastic control theory on the insurance premium-reserve processes." Thesis, University of Liverpool, 2015. http://livrepository.liverpool.ac.uk/2037227/.
Basei, Matteo. "Topics in stochastic control and differential game theory, with application to mathematical finance." Doctoral thesis, Università degli studi di Padova, 2016. http://hdl.handle.net/11577/3424239.
In this thesis we consider three problems in the theory of stochastic control and differential games; these problems are linked to concrete situations in mathematical finance and, more precisely, in energy markets. First, we address the problem of the optimal exercise of swing options in the energy market. The main result is the characterization of the value function as the unique viscosity solution of a suitable Hamilton-Jacobi-Bellman equation. The case of contracts with penalties can be treated by standard techniques. By contrast, the case of contracts with strict constraints leads to stochastic control problems with a non-standard constraint on the controls: the above characterization is then obtained by considering a suitable sequence of unconstrained problems. This approximation is proved for a general class of problems with an integral constraint on the controls. Next, we consider an energy supplier who must decide when and how to intervene to change the price it charges its customers, in order to maximize its profit. The intervention costs may be fixed or may depend on the supplier's market share. In the first case, we obtain a standard stochastic impulse control problem, in which we characterize the value function and the optimal price-management policy. In the second case, classical theory cannot be applied because of the singularities in the function defining the penalties. We therefore outline an approximation procedure and finally consider stronger conditions on the controls, so as to characterize the optimal control in this case as well. Finally, we study a general class of nonzero-sum differential games with impulse controls. After rigorously defining these problems, we prove a verification theorem: if a pair of functions is sufficiently regular and satisfies a suitable system of quasi-variational inequalities, it coincides with the value functions of the problem and the Nash equilibria can be characterized. We conclude with a detailed example: we investigate the existence of equilibria in the case where two countries, with different objectives, can affect the exchange rate between their respective currencies.
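For reference, impulse-control problems of the kind described above are typically characterized through a Hamilton-Jacobi-Bellman quasi-variational inequality of the following generic form (illustrative notation, not the thesis's):
\[
\min\Big\{\rho V(x)-\mathcal{L}V(x)-f(x),\; V(x)-\mathcal{M}V(x)\Big\}=0,
\qquad
\mathcal{M}V(x)=\sup_{\delta}\big[V\big(\Gamma(x,\delta)\big)-K(x,\delta)\big],
\]
where \(\mathcal{L}\) is the generator of the uncontrolled diffusion, \(\Gamma\) the post-intervention state and \(K\) the intervention cost; the set where the first term attains the minimum is the continuation region. The nonzero-sum impulse game mentioned in the abstract couples two such inequalities, one per player, which gives the system of quasi-variational inequalities appearing in the verification theorem.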
Evans, Martin A. "Multiplicative robust and stochastic MPC with application to wind turbine control." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:0ad9b878-00f3-4cfa-a683-148765e3ae39.
Mohamad-Than, Mohamad Nor. "The stability of control systems employing Kalman filters as stochastic observers in the state variable feedback configurations." Thesis, University of Reading, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.333408.
Bruzzone, Andrea. "P-SGLD : Stochastic Gradient Langevin Dynamics with control variates." Thesis, Linköpings universitet, Statistik och maskininlärning, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-140121.
Marwah, Gaurav. "Algorithms for stochastic finite memory control of partially observable systems." Master's thesis, Mississippi State : Mississippi State University, 2005. http://library.msstate.edu/etd/show.asp?etd=etd-07082005-132056.
Milisavljevic, Mile. "Information driven optimization methods in control systems, signal processing, telecommunications and stochastic finance." Diss., Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/14912.
Huang, Xin. "A study on the application of machine learning algorithms in stochastic optimal control." Thesis, KTH, Matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-252541.
Full textGenom att observera en likhet mellan målet för stokastisk optimal styrning för att minimera en förväntad kostnadsfunktionell och syftet med maskininlärning att minimera en förväntad förlustfunktion etableras och implementeras en metod för att applicera maskininlärningsalgoritmen för att approximera den optimala kontrollfunktionen via neuralt approximation. Baserat på en diskretiseringsram, härleds en rekursiv formel för gradienten av den approximerade kostnadsfunktionen på parametrarna för neuralt nätverk. För ett välkänt linjärt-kvadratisk-gaussiskt kontrollproblem lyckas den approximerade neurala nätverksfunktionen erhållen med stokastisk gradient nedstigningsalgoritm att reproducera till formen av den teoretiska optimala styrfunktionen och tillämpning av olika typer av algoritmer för maskininlärning optimering ger en ganska nära noggrannhet med avseende på deras motsvarande empiriska värdefunktion. Vidare är det visat att noggrannheten och stabiliteten hos maskininlärning simetrationen kan förbättras genom att öka storleken på minibatch och tillämpa ett finare diskretiseringsschema. Dessa resultat tyder på effektiviteten och lämpligheten av att tillämpa maskininlärningsalgoritmen för stokastisk optimal styrning.
Hochart, Antoine. "Nonlinear Perron-Frobenius theory and mean-payoff zero-sum stochastic games." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLX079/document.
Zero-sum stochastic games have a recursive structure encompassed in their dynamic programming operator, the so-called Shapley operator. The latter is a useful tool to study the asymptotic behavior of the average payoff per time unit. In particular, the mean payoff exists and is independent of the initial state as soon as the ergodic equation - a nonlinear eigenvalue equation involving the Shapley operator - has a solution. The solvability of the latter equation in finite dimension is a central question in nonlinear Perron-Frobenius theory, and the main focus of the present thesis. Several known classes of Shapley operators can be characterized by properties based entirely on the order structure or the metric structure of the space. We first extend this characterization to "payment-free" Shapley operators, that is, operators arising from games without stage payments. This is derived from a general minimax formula for functions homogeneous of degree one and nonexpansive with respect to a given weak Minkowski norm. Next, we address the problem of the solvability of the ergodic equation for all additive perturbations of the payment function. This problem extends the notion of ergodicity for finite Markov chains. With bounded payment function, this "ergodicity" property is characterized by the uniqueness, up to the addition of a constant, of the fixed point of a payment-free Shapley operator. We give a combinatorial solution in terms of hypergraphs to this problem, as well as to other related problems of fixed-point existence, and we infer complexity results. Then, we use the theory of accretive operators to generalize the hypergraph condition to all Shapley operators, including ones for which the payment function is not bounded. Finally, we consider the problem of uniqueness, up to the addition of a constant, of the nonlinear eigenvector. We first show that uniqueness holds for a generic additive perturbation of the payments. Then, in the framework of perfect information and finite action spaces, we provide an additional geometric description of the perturbations for which uniqueness occurs. As an application, we obtain a perturbation scheme allowing one to solve degenerate instances of stochastic games by policy iteration.
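For readers unfamiliar with the objects mentioned above: for a game with states \(\{1,\dots,n\}\), actions \(a,b\) for the two players, stage payments \(r_i^{ab}\) and transition probabilities \(P_{ij}^{ab}\), the Shapley operator is (in one common convention)
\[
T_i(u) = \min_a \max_b \Big( r_i^{ab} + \sum_{j=1}^n P_{ij}^{ab}\, u_j \Big), \qquad i = 1,\dots,n,
\]
and the ergodic equation referred to in the abstract is the nonlinear eigenproblem \(T(u) = \lambda \mathbf{1} + u\). When it has a solution, the mean payoff per stage equals \(\lambda\) for every initial state, and \(u\) plays the role of a bias (relative value) vector, determined at best up to an additive constant.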
Shackleton, Mark Broughton. "Frequency domain and stochastic control theory applied to volatility and pricing in intraday financial data." Thesis, London Business School (University of London), 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.299700.
Laliotis, Dimitrios. "Financial time series prediction and stochastic control of trading decisions in the fixed income markets." Thesis, Imperial College London, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.243831.
Fang, Fang. "A simulation study for Bayesian hierarchical model selection methods." View electronic thesis (PDF), 2009. http://dl.uncw.edu/etd/2009-2/fangf/fangfang.pdf.
Simms, Amy E. "A Stochastic Approach to Modeling Aviation Security Problems Using the KNAPSACK Problem." Thesis, Virginia Tech, 1997. http://hdl.handle.net/10919/36806.
Master of Science
Yucelen, Tansel. "Advances in adaptive control theory: gradient- and derivative-free approaches." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/43731.
Larrañaga, Maialen. "Dynamic control of stochastic and fluid resource-sharing systems." Thesis, Toulouse, INPT, 2015. http://www.theses.fr/2015INPT0075/document.
In this thesis we study the dynamic control of resource-sharing systems that arise in various domains, e.g. inventory management, healthcare and communication networks. We aim at efficiently allocating the available resources among competing projects according to a certain performance criterion. These types of problems have a stochastic nature and may be very complex to solve. We therefore focus on developing well-performing heuristics. In Part I, we consider the framework of restless bandit problems, which is a general class of dynamic stochastic optimization problems. Relaxing the sample-path constraint in the optimization problem enables us to define an index-based heuristic for the original constrained model, the so-called Whittle index policy. We derive a closed-form expression for the Whittle index as a function of the steady-state probabilities for the case in which bandits (projects) evolve in a birth-and-death fashion. This expression requires several technical conditions to be verified and, in addition, it can only be computed explicitly in specific cases. In the particular case of a multi-class abandonment queue, we further prove that the Whittle index policy is asymptotically optimal in the light-traffic and heavy-traffic regimes. In Part II, we derive heuristics by approximating the stochastic resource-sharing systems with deterministic fluid models. We first formulate a fluid version of the relaxed optimization problem introduced in Part I, and we develop a fluid index policy. The fluid index can always be computed explicitly and hence overcomes the technical issues that arise when calculating the Whittle index. We apply the Whittle index and the fluid index policies to several systems, e.g. power-aware server farms, opportunistic scheduling in wireless systems, and make-to-stock problems with perishable items. We show numerically that both index policies are nearly optimal. Secondly, we study the optimal scheduling control for the fluid version of a multi-class abandonment queue. We derive the fluid optimal control when there are two classes of customers competing for a single resource. Based on the insights provided by this result, we build a heuristic for the general multi-class setting. This heuristic shows near-optimal performance when applied to the original stochastic model for high workloads. In Part III, we further investigate the abandonment phenomenon in the context of a content delivery problem. We characterize an optimal grouping policy so that requests, which are impatient, are efficiently transmitted in multicast mode.
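The index policies discussed above share the same runtime structure: compute a one-dimensional priority index per project from its current state and activate the projects with the largest indices, up to the available budget. The sketch below illustrates that structure only; `myopic_index` is a made-up placeholder, since the thesis derives the actual closed-form Whittle and fluid indices for its specific birth-and-death and fluid models.

```python
import numpy as np

def index_policy(states, index_fn, budget):
    """Activate the `budget` projects with the largest index values.

    states   : list of per-project states (anything `index_fn` accepts)
    index_fn : maps a project's state to a scalar priority index
    budget   : number of projects that can be served simultaneously
    Returns a 0/1 action vector (1 = activate).
    """
    idx = np.array([index_fn(s) for s in states], dtype=float)
    chosen = np.argsort(idx)[::-1][:budget]
    action = np.zeros(len(states), dtype=int)
    action[chosen] = 1
    return action

# Illustrative placeholder index: prioritize long queues of impatient customers,
# weighted by a per-class holding cost (purely hypothetical numbers).
def myopic_index(state):
    queue_length, holding_cost, abandonment_rate = state
    return holding_cost * queue_length * (1.0 + abandonment_rate)

states = [(5, 1.0, 0.2), (2, 3.0, 0.1), (7, 0.5, 0.4)]
print(index_policy(states, myopic_index, budget=1))   # activates one project
```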
Fang, Qijun. "Model search strategy when P >> N in Bayesian hierarchical setting." View electronic thesis (PDF), 2009. http://dl.uncw.edu/etd/2009-2/fangq/qijunfang.pdf.
Full textHerzog, David Paul. "Geometry's Fundamental Role in the Stability of Stochastic Differential Equations." Diss., The University of Arizona, 2011. http://hdl.handle.net/10150/145150.
Full textSouto, Rafael Fontes 1984. "Processos de difusão controlada = um estudo sobre sistemas em que a variação do controle aumenta a incerteza." [s.n.], 2010. http://repositorio.unicamp.br/jspui/handle/REPOSIP/259267.
Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Made available in DSpace on 2018-08-16T02:55:02Z (GMT). No. of bitstreams: 1 Souto_RafaelFontes_M.pdf: 470367 bytes, checksum: 516cc5b88625a7d2e5142b69233188f5 (MD5) Previous issue date: 2010
Abstract: This dissertation presents a framework for continuous-time stochastic systems in which variations of the control action increase the uncertainty about the state. This type of system can be applied in several areas of science and engineering, owing to its ability to model complex stochastic systems whose dynamics are not completely known. Controlled Itô diffusion processes are used to describe the state path, and the optimization is carried out by the dynamic programming method, so the Hamilton-Jacobi-Bellman equation must be solved. In addition, tools from nonsmooth analysis indicated the existence of a region in the state space in which the optimal control action consists of keeping whatever control has been applied to the system, with no variation. Intuitively, this result agrees with the cautious nature of controlling underdetermined systems. Finally, the particular case of a system with quadratic running costs was studied analytically. This study revealed that the technique developed allows the optimal solution to be computed in a simple and effective way for the asymptotic behavior of the system. This feature of the solution helps in obtaining the complete solution of the problem by means of numerical approximations.
Master's
Automation
Master in Electrical Engineering
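The dynamic programming step mentioned in the Souto abstract above leads to a Hamilton-Jacobi-Bellman equation; for a scalar controlled Itô diffusion \(dX_t = b(X_t,u_t)\,dt + \sigma(X_t,u_t)\,dW_t\) with running cost \(\ell\) and discount rate \(\rho\), the generic form (illustrative notation, not the thesis's) is
\[
\rho V(x) = \min_u \Big\{ \ell(x,u) + b(x,u)\, V'(x) + \tfrac{1}{2}\,\sigma^2(x,u)\, V''(x) \Big\}.
\]
In the class of systems studied there, the diffusion term grows when the control is varied, which is what makes "keep the current control" optimal over a whole region of the state space.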
Kelome, Djivèdé Armel. "Viscosity solutions of second order equations in a separable Hilbert space and applications to stochastic optimal control." Diss., Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/29159.
Bountourelis, Theologos. "Efficient pac-learning for episodic tasks with acyclic state spaces and the optimal node visitation problem in acyclic stochastic digraphs." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/28144.
Full textCommittee Chair: Reveliotis, Spyros; Committee Member: Ayhan, Hayriye; Committee Member: Goldsman, Dave; Committee Member: Shamma, Jeff; Committee Member: Zwart, Bert.
Chen, Si. "Design of Energy Storage Controls Using Genetic Algorithms for Stochastic Problems." UKnowledge, 2015. http://uknowledge.uky.edu/ece_etds/80.
Full textWang, Wen-Kai. "Application of stochastic differential games and real option theory in environmental economics." Thesis, University of St Andrews, 2009. http://hdl.handle.net/10023/893.
Full textCayci, Semih. "Online Learning for Optimal Control of Communication and Computing Systems." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1595516470389826.
Full textOlsén, Jörgen. "Stochastic modeling and simulation of the TCP protocol /." Uppsala : Matematiska institutionen, Univ. [distributör], 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-3534.
Full textSoltani-Moghaddam, Alireza. "Network simulator design with extended object model and generalized stochastic petri-net /." free to MU campus, to others for purchase, 2000. http://wwwlib.umi.com/cr/mo/fullcit?p9999317.
Full textCheng, Gang. "Analyzing and Solving Non-Linear Stochastic Dynamic Models on Non-Periodic Discrete Time Domains." TopSCHOLAR®, 2013. http://digitalcommons.wku.edu/theses/1236.
Full textAhmadian, Mansooreh. "Hybrid Modeling and Simulation of Stochastic Effects on Biochemical Regulatory Networks." Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/99481.
Full textDoctor of Philosophy
The cell cycle is the process by which a growing cell replicates its DNA and divides into two cells. Progression through the cell cycle is regulated by complex interactions between networks of genes, transcripts, and proteins. These interactions inside the confined volume of a cell are subject to inherent noise. To provide a quantitative description of the cell cycle, several deterministic and stochastic models have been developed. However, deterministic models cannot capture the intrinsic noise. In addition, stochastic modeling poses the following challenges. First, stochastic models generally require extensive computations, particularly when applied to large networks. Second, the accuracy of stochastic models is highly dependent on the accuracy of the estimated model parameters. The goal of this dissertation is to address these challenges by developing new efficient methods for modeling and simulation of stochastic effects in biochemical networks. The results show that the proposed hybrid model, which combines stochastic and deterministic modeling approaches, can achieve high computational efficiency while generating accurate simulation results. Moreover, a new machine-learning-based method is developed to address the parameter estimation problem in biochemical systems. The results show that the proposed method yields accurate ranges for the model parameters and highlight the potential of model-free learning for parameter estimation in stochastic modeling of complex biochemical networks.
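The "extensive computations" point above refers to exact stochastic simulation of reaction networks. A minimal Gillespie-style simulation of a hypothetical one-species production/degradation model, written only to illustrate the baseline cost that hybrid methods try to reduce (the model and parameters are invented, not from the dissertation), looks like this:

```python
import numpy as np

def gillespie_birth_death(k_prod, k_deg, x0, t_end, seed=0):
    """Exact stochastic simulation (Gillespie SSA) of a single species X with
    production at rate k_prod and degradation at rate k_deg * X.
    Returns arrays of event times and copy numbers (illustrative model only)."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    times, counts = [t], [x]
    while t < t_end:
        a1, a2 = k_prod, k_deg * x          # reaction propensities
        a0 = a1 + a2
        if a0 == 0.0:
            break
        t += rng.exponential(1.0 / a0)      # waiting time to the next reaction
        if rng.random() < a1 / a0:
            x += 1                          # production event
        else:
            x -= 1                          # degradation event
        times.append(t)
        counts.append(x)
    return np.array(times), np.array(counts)

t, x = gillespie_birth_death(k_prod=10.0, k_deg=0.1, x0=0, t_end=100.0)
print("final copy number:", x[-1], "after", len(t) - 1, "reaction events")
```

A hybrid scheme of the kind described in the abstract would replace the fast, high-copy-number reactions in such a loop with deterministic rate equations and keep only the low-copy-number species stochastic.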
Britton, Matthew Scott. "Stochastic task scheduling in time-critical information delivery systems." Title page, contents and abstract only, 2003. http://web4.library.adelaide.edu.au/theses/09PH/09phb8629.pdf.
Bottegal, Giulio. "Modeling, estimation and identification of stochastic systems with latent variables." Doctoral thesis, Università degli studi di Padova, 2013. http://hdl.handle.net/11577/3423358.
Full textL’argomento principale di questa tesi è l’analisi di modelli statici e dinamici in cui alcune variabili non sono accessibili a misurazioni, nonostante esse influenzino l’evoluzione di certe osservazioni. Questi modelli trovano applicazione in molte discipline delle scienze e dell’ingegneria, come ad esempio l’automatica, le telecomunicazioni, le scienze naturali, la biologia e l’econometria e sono stati studiati approfonditamente nel campo dell’identificazione dei modelli. E' ben noto che sistemi con variabili inaccessibili - o latenti, spesso soffrono di una mancanza di unicità nella rappresentazione. In altre parole, in generale ci sono molti modelli dello stesso tipo che possono descrivere un dato insieme di osservazioni, come ad esempio variabili misurabili di ingresso-uscita. Questo è ben noto, ed è stato studiato a fondo per una classe speciale di modelli lineari, chiamata modelli a spazio di stato. In questa tesi ci si focalizza su due classi particolari di sistemi stocastici a variabili latenti: i modelli generalized factor analysis e i modelli errors-in-variables. Per queste classi di modelli ci sono ancora alcuni problemi irrisolti legati alla non unicità della rappresentazione e chiarificare questi problemi è di importanza fondamentale per la loro identificazione. Poiché solitamente i modelli matematici necessitano ti essere stimati da dati sperimentali, è essenziale risolvere il problema della non unicità per il loro utilizzo nell’inferenza statistica (identificazione di modelli) da dati misurati.