To see the other types of publications on this topic, follow the link: Maximum entropy.

Dissertations / Theses on the topic 'Maximum entropy'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Maximum entropy.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Sognnæs, Ida Andrea Braathen. "Maximum Entropy and Maximum Entropy Production in Macroecology." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for fysikk, 2011. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-12651.

Full text
Abstract:
The Maximum Entropy Theory of Ecology (METE), developed by John Harte, presents an entirely new method of making inferences in ecology. The method is based on the established mathematical procedure of Maximum Information Entropy (MaxEnt), developed by Edwin T. Jaynes, and is used to derive a range of important relationships in macroecology. The Maximum Entropy Production (MEP) principle is a more recent theory; Paltridge used it in 1975 to successfully predict the climate on Earth. It has been suggested that this principle can be used for predicting the evolution of ecosystems over time in the framework of METE. This idea is at the very frontier of Harte's theory. This thesis investigates the hypothesis that the information entropy defined in METE is described by the MEP principle. I show that the application of the MEP principle to the information entropy in METE leads to a range of conceptual and mathematical difficulties. I show that the initial hypothesis alone cannot predict the time rate of change, but that it does predict that the number of individual organisms and the total metabolic rate of an ecosystem will continue to grow indefinitely, whereas the number of species will approach one. I also conduct a thorough review of the MEP literature and discuss the possibility of an application of the MEP principle to METE based on analogies. I also study a proof of the MEP principle published by Dewar in 2003 and 2005 in order to investigate the possibility of an application based on first principles. I conclude that the MEP principle has a low probability of success if applied directly to the information entropy in METE. One of the most central relationships derived in METE is the expected number of species in a plot of area $A$. I conduct a numerical simulation in order to study the variance of the actual number of species in a collection of plots. I then suggest two methods to be used for comparison between predictions and observations in METE. I also conduct a numerical study of selected stability properties of Paltridge's climate model and conclude that none of these can explain the observed MEP state in nature.
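For readers new to the underlying machinery, the MaxEnt procedure of Jaynes that METE builds on can be stated compactly; this is standard background rather than notation taken from the thesis itself:

```latex
% Choose the least-biased distribution consistent with known constraints:
\max_{p}\ H(p) = -\sum_i p_i \ln p_i
\quad\text{subject to}\quad
\sum_i p_i = 1, \qquad \sum_i p_i f_k(i) = F_k, \quad k = 1,\dots,K,
% whose solution is the Gibbs form
p_i = \frac{1}{Z(\boldsymbol{\lambda})}\exp\!\Big(-\sum_k \lambda_k f_k(i)\Big),
\qquad
Z(\boldsymbol{\lambda}) = \sum_i \exp\!\Big(-\sum_k \lambda_k f_k(i)\Big).
```

In METE, the ecological state variables (area, species richness, total abundance, total metabolic rate) supply the constraint functions $f_k$ from which the macroecological relationships are derived.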
APA, Harvard, Vancouver, ISO, and other styles
2

Charter, Mark Keith. "Maximum entropy pharmacokinetics." Thesis, University of Cambridge, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.316691.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Patterson, Brett Alexander. "Maximum entropy data analysis." Thesis, University of Cambridge, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.240969.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Xie, Yong. "Maximum entropy in crystallography." Thesis, De Montfort University, 2003. http://hdl.handle.net/2086/4220.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Purahoo, K. "Maximum entropy data analysis." Thesis, Cranfield University, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.260038.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Robinson, David Richard Terence. "Developments in maximum entropy data analysis." Thesis, University of Cambridge, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.307063.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

McLean, Andrew Lister. "Applications of maximum entropy data analysis." Thesis, University of Southampton, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.319161.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Sears, Timothy Dean. "Generalized Maximum Entropy, Convexity and Machine Learning." The Australian National University. Research School of Information Sciences and Engineering, 2008. http://thesis.anu.edu.au./public/adt-ANU20090525.210315.

Full text
Abstract:
This thesis identifies and extends techniques that can be linked to the principle of maximum entropy (maxent) and applied to parameter estimation in machine learning and statistics. Entropy functions based on deformed logarithms are used to construct Bregman divergences, and together these represent a generalization of relative entropy. The framework is analyzed using convex analysis to characterize generalized forms of exponential family distributions. Various connections to the existing machine learning literature are discussed and the techniques are applied to the problem of non-negative matrix factorization (NMF).
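The Bregman divergence construction mentioned above is standard convex analysis; stated generically (not in the thesis's own notation), for a strictly convex, differentiable generator $\varphi$:

```latex
D_{\varphi}(x, y) = \varphi(x) - \varphi(y) - \langle \nabla\varphi(y),\ x - y \rangle \ \ge\ 0,
```

where the choice $\varphi(p) = \sum_i p_i \ln p_i$ recovers the (generalized) relative entropy, and replacing the logarithm in $\varphi$ with a deformed logarithm yields the generalized divergences the thesis studies.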
APA, Harvard, Vancouver, ISO, and other styles
9

Oliveira, V. A. "Maximum entropy image restoration in nuclear medicine." Thesis, University of Southampton, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.235282.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Касьянов, Володимир, and Андрій Гончаренко. "SUBJECTIVE ENTROPY MAXIMUM PRINCIPLE AND ITS APPLICATIONS." Thesis, Національний авіаційний університет, 2017. https://er.nau.edu.ua/handle/NAU/48996.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Касьянов, Володимир, and Андрій Гончаренко. "SUBJECTIVE ENTROPY MAXIMUM PRINCIPLE AND ITS APPLICATIONS." Thesis, Національний авіаційний університет, 2017. http://er.nau.edu.ua/handle/NAU/30676.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Fellman, Laura Suzanne. "The Genetic Algorithm and Maximum Entropy Dice." PDXScholar, 1996. https://pdxscholar.library.pdx.edu/open_access_etds/5247.

Full text
Abstract:
The Brandeis dice problem, originally introduced in 1962 by Jaynes as an illustration of the principle of maximum entropy, was solved using the genetic algorithm, and the resulting solution was compared with that obtained analytically. The effect of varying the genetic algorithm parameters was observed, and the optimum values for population size, mutation rate, and mutation interval were determined for this problem. The optimum genetic algorithm program was then compared to a completely random method of search and optimization. Finally, the genetic algorithm approach was extended to several variations of the original problem for which an analytical approach would be impractical.
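The analytical solution against which the genetic algorithm was compared is easy to reproduce; a minimal Python sketch (the exponential form and the constraint value 4.5 come from Jaynes' original statement of the problem; the bracketing interval for the root-finder is an arbitrary choice):

```python
import numpy as np
from scipy.optimize import brentq

faces = np.arange(1, 7)

def maxent_mean(lam):
    """Mean of the MaxEnt distribution p_i proportional to exp(-lam * i)."""
    w = np.exp(-lam * faces)
    return (w / w.sum()) @ faces

# Brandeis dice: the observed average is 4.5 rather than the fair 3.5.
lam = brentq(lambda l: maxent_mean(l) - 4.5, -5.0, 5.0)
w = np.exp(-lam * faces)
p = w / w.sum()
print(p)  # ~[0.054, 0.079, 0.114, 0.165, 0.240, 0.348], Jaynes' analytical answer
```

A GA attacks the same problem by evolving candidate distributions scored on entropy plus a constraint penalty, which is what makes the comparison with this closed-form answer meaningful.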
APA, Harvard, Vancouver, ISO, and other styles
13

Thomaz, Carlos Eduardo. "Maximum entropy covariance estimate for statistical pattern recognition." Thesis, Imperial College London, 2004. http://hdl.handle.net/10044/1/8755.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Chan, Oscar. "Prosodic features for a maximum entropy language model." University of Western Australia. School of Electrical, Electronic and Computer Engineering, 2008. http://theses.library.uwa.edu.au/adt-WU2008.0244.

Full text
Abstract:
A statistical language model attempts to characterise the patterns present in a natural language as a probability distribution defined over word sequences. Typically, such models are trained using word co-occurrence statistics from a large sample of text. In some language modelling applications, such as automatic speech recognition (ASR), the availability of acoustic data provides an additional source of knowledge. This contains, amongst other things, the melodic and rhythmic aspects of speech referred to as prosody. Although prosody has been found to be an important factor in human speech recognition, its use in ASR has been limited. The goal of this research is to investigate how prosodic information can be employed to improve the language modelling component of a continuous speech recognition system. Because prosodic features are largely suprasegmental, operating over units larger than the phonetic segment, the language model is an appropriate place to incorporate such information. The prosodic features and standard language model features are combined under the maximum entropy framework, which provides an elegant solution to modelling information obtained from multiple, differing knowledge sources. We derive features for the model based on perceptually transcribed Tones and Break Indices (ToBI) labels, and analyse their contribution to the word recognition task. While ToBI has a solid foundation in linguistic theory, the need for human transcribers conflicts with the statistical model's requirement for a large quantity of training data. We therefore also examine the applicability of features which can be automatically extracted from the speech signal. We develop representations of an utterance's prosodic context using fundamental frequency, energy and duration features, which can be directly incorporated into the model without the need for manual labelling. Dimensionality reduction techniques are also explored with the aim of reducing the computational costs associated with training a maximum entropy model. Experiments on a prosodically transcribed corpus show that small but statistically significant reductions to perplexity and word error rates can be obtained by using both manually transcribed and automatically extracted features.
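The conditional maximum entropy model referred to throughout has a standard exponential form (standard in the language-modelling literature rather than quoted from the thesis):

```latex
P(w \mid h) = \frac{1}{Z(h)} \exp\!\Big(\sum_i \lambda_i f_i(w, h)\Big),
\qquad
Z(h) = \sum_{w'} \exp\!\Big(\sum_i \lambda_i f_i(w', h)\Big),
```

where $h$ is the history, the feature functions $f_i$ can mix n-gram indicators with ToBI-derived or automatically extracted prosodic measurements, and the weights $\lambda_i$ are trained so that model feature expectations match those of the training data.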
APA, Harvard, Vancouver, ISO, and other styles
15

Battle, David John. "Maximum Entropy Regularisation Applied to Ultrasonic Image Reconstruction." University of Sydney. Electrical Engineering, 1999. http://hdl.handle.net/2123/842.

Full text
Abstract:
Image reconstruction, in common with many other inverse problems, is often mathematically ill-posed in the sense that solutions are neither stable nor unique. Ultrasonic image reconstruction is particularly notorious in this regard, with narrow transducer bandwidths and limited, sometimes sparsely sampled, apertures posing formidable difficulties for conventional signal processing. To overcome these difficulties, some form of regularisation is mandatory, whereby the ill-posed problem is restated as a closely related, well-posed problem, and then solved uniquely. This thesis explores the application of maximum entropy (MaxEnt) regularisation to the problem of reconstructing complex-valued imagery from sparsely sampled coherent ultrasonic field data, with particular emphasis on three-dimensional problems in the non-destructive evaluation (NDE) of materials. MaxEnt has not previously been applied to this class of problem, and yet in comparison with many other approaches to image reconstruction, it emerges as the clear leader in terms of resolution and overall image quality. To account for this performance, it is argued that the default image model used with MaxEnt is particularly meaningful in cases of ultrasonic scattering by objects embedded in homogeneous media. To establish physical and mathematical insights into the forward problem, linear equations describing scattering from both penetrable and impenetrable objects are first derived using the Born and physical optics approximations respectively. These equations are then expressed as a shift-invariant computational model that explicitly incorporates sparse sampling. To validate this model, time-domain scattering responses are computed and compared with analytical solutions for a simple canonical test case drawn from the field of NDE. The responses computed via the numerical model are shown to accurately reproduce the analytical responses. To solve inverse scattering problems via MaxEnt, the robust Cambridge algorithm is generalised to the complex domain and extended to handle broadband (multiple-frequency) data. Two versions of the augmented algorithm are then compared with a range of other algorithms, including several linearly regularised algorithms and lastly, due to its acknowledged status as a competitor with MaxEnt in radio-astronomy, the non-linear CLEAN algorithm. These comparisons are made through simulated 3-D imaging experiments under conditions of both complete and sparse aperture sampling with low and high levels of additive Gaussian noise. As required in any investigation of inverse problems, the experimental confirmation of algorithmic performance is emphasised, and two common imaging geometries relevant to NDE are selected for this purpose. In monostatic synthetic aperture imaging experiments involving side-drilled holes in an aluminium plate and test objects immersed in H2O, MaxEnt image reconstruction is demonstrated to be robust against grating-lobe and side-lobe formation, in addition to temporal bandwidth restriction. This enables efficient reconstruction of 2-D and 3-D images from small numbers of discrete samples in the spatial and frequency domains. The thesis concludes with a description of the design and testing of a novel polyvinylidene fluoride (PVDF) bistatic array transducer that offers advantages over conventional point-sampled arrays in terms of construction simplicity and signal-to-noise ratio. This ultra-sparse orthogonal array is the only one of its kind yet demonstrated, and was made possible by MaxEnt signal processing.
APA, Harvard, Vancouver, ISO, and other styles
16

Yang, Yongsheng. "A maximum entropy approach to Chinese language parsing /." View Abstract or Full-Text, 2002. http://library.ust.hk/cgi/db/thesis.pl?COMP%202002%20YANG.

Full text
Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2002.
Includes bibliographical references (leaves 54-55). Also available in electronic version. Access restricted to campus users.
APA, Harvard, Vancouver, ISO, and other styles
17

Wu, Jun. "Maximum entropy language modeling with non-local dependencies." Available to US Hopkins community, 2002. http://wwwlib.umi.com/dissertations/dlnow/3068231.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Whiting, Peter Mark. "Reflection traveltime tomography and the maximum entropy principle." Thesis, The University of Sydney, 1993. https://hdl.handle.net/2123/26623.

Full text
Abstract:
Conventional reflection tomography creates an estimate of subsurface seismic velocity structure by inverting a set of seismic traveltime data. This is achieved by solving a least-squares optimisation problem that finds the velocity and depth model that minimises the difference between raytraced and measured traveltimes. Obtaining the traveltime data can be difficult, as manual picking of reflection times is required and all picked reflection events must be associated with the reflector depths defined in the model. Even with good traveltime data the optimisation problem is very non-linear, and the surface restriction of the sources and receivers makes the problem generally underdetermined. These issues result in severe ambiguity and local minima problems. This thesis shows that modifications to the conventional reflection tomography algorithm can make it a more practical and reliable procedure that is less likely to be trapped by local minima. The ray tracing procedure is changed so that reflector depths are not necessary and automatic traveltime interpretation can be successful. Entropy constraints are introduced (after being justified) which prevent unwarranted velocity structure from appearing. This feature adds significant stability and reduces the ambiguity problems. Staged smoothing of the optimisation function helps avoid local minima. Synthetic data examples show that the algorithm can be very effective on noise-free data. Adding noise to synthetic data reduces the algorithm's effectiveness, but inversions of real data sets produce updated velocity fields that result in superior pre-stack depth migrations.
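Schematically, the entropy-constrained inversion augments the usual travel-time misfit with an entropy term (a generic statement of this family of objectives; the specific entropy used and its justification are developed in the thesis):

```latex
J(m) = \sum_{i} \big( t_i^{\mathrm{obs}} - t_i(m) \big)^2 \;-\; \mu\, S(m),
```

where $m$ is the velocity model, $t_i(m)$ the raytraced traveltimes and $\mu$ a trade-off parameter; rewarding high entropy $S(m)$ penalizes velocity structure that the data do not warrant.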
APA, Harvard, Vancouver, ISO, and other styles
19

Sagarra, Pascual Oleguer Josep. "Non-binary maximum entropy network ensembles and their application to the study of urban mobility." Doctoral thesis, Universitat de Barcelona, 2016. http://hdl.handle.net/10803/400560.

Full text
Abstract:
Complex networks grow subject to structural constraints which affect their measurable properties. Assessing the effect that such constraints impose on their observables is thus a crucial aspect of their analysis, if one wants to quantify the effect a given topological property has on other network quantities observed in empirical datasets. Null models are needed for this end. A well-understood analytical approach to the generation and development of flexible models for binary networks is based on considering ensembles of networks obtained using an entropy maximization principle. In this work, we explore the generalization of maximum entropy ensembles to networks where multiple or non-dichotomous connections among nodes are allowed. We develop a statistical mechanics framework in which it is possible to obtain information about the most relevant observables given a large spectrum of linear and non-linear constraints, including those depending both on the weight per link and on its binary projection. We furthermore identify three different relevant cases that lead to distinctively different edge statistics, depending on the distinguishable nature of the events allocated to each link. For each case, we perform an extensive study considering microcanonical (hard constrained) ensembles as well as grand canonical (soft constrained) ones. We provide tools for the generation and analysis of network instances belonging to each model, implemented and available as open-source software packages, together with analytical tools to obtain null model expectations for later comparison to real data. Building on this theory, we apply the obtained insights to the analysis of urban mobility, considering four large datasets of taxi displacements in the cities of New York, Singapore, San Francisco and Vienna. We show that, once they are appropriately transformed, mobility patterns are highly stable over long time scales and display common features across the studied datasets, which are very conveniently represented using one of the maximum entropy ensembles studied earlier. We furthermore perform a critical review of existing mobility demand forecasting models and discuss their strengths and weaknesses when adapted to the urban environment, showing that entropy maximizing models display the best descriptive power for the datasets, using a number of network-based, information and matrix similarity metrics to assess the accuracy of the predicted vehicle flows. Based on our observations, we develop two practical applications of our theoretical work. On the one hand, we envisage a supersampling methodology to reliably extrapolate mobility records from a reduced sample, which opens the possibility of scaling up data from limited records when information on the full system is required. On the other hand, we adapt previous work on graph filtering to our proposed models, which allows random contributions to be extracted from the observed empirical data. This yields simplified network backbones which contain the most relevant features of mobility datasets not explained by the constraints imposed in the maximum entropy models considered. Such a filter is useful for easing the analysis, computational handling and visualization of dense datasets, as well as for assessing the degree of proximity between a model and empirical data using suitable hypothesis testing arguments.
Complex networks have an intricate structure in which it is often difficult to establish causal relationships between their macroscopic (measurable) properties. To do so, null models with flexible, tunable properties are needed. For networks with binary connections (taking the dichotomous values one or zero), ensembles of networks satisfying a maximum entropy principle have been proposed to solve the problem of generating such models. In this work we explore their generalization to non-binary networks, where the connections between elements are graded. We develop a mathematical treatment that yields predictions for the most relevant observables of a network with certain prescribed properties, chosen from a wide range of linear and non-linear functions, for both microcanonical ensembles (properties fixed strictly) and grand canonical ensembles (properties fixed only on average over the ensemble). We identify three possible variants leading to different link-occupation statistics, depending on the distinguishability of the elements from which the network was generated. For each case, we develop tools for the computational generation and analysis of network samples belonging to each ensemble. We then apply the theory developed to the analysis of human mobility using datasets of taxi displacements in New York, Singapore, San Francisco and Vienna. We show the spatio-temporal stability of the data studied and the emergence of common properties. We then carry out a critical analysis of existing mobility prediction models and their possible adaptation to urban environments, showing that maximum entropy models have the greatest predictive power in describing the data. Finally, we present two applications of the theory developed that exploit the common properties detected in the data studied. On the one hand, we derive a model that allows mobility data to be extrapolated from reduced datasets. On the other, we propose a filtering method to extract from the real data the contributions of the trips expected according to any of our maximum entropy models. This procedure yields simplified versions of the original networks that retain their most relevant properties.
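For the case of distinguishable events mentioned above, the grand canonical (soft constrained) ensemble makes each multi-edge an independent Poisson variable; a minimal Python sketch of sampling one network instance (the factorised mean $w_{ij} = s_i s_j / T$ is an assumption of this illustration, chosen so that expected node strengths match their targets, not a formula copied from the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_multiedge_network(strengths):
    """One draw from a soft-constrained multi-edge ensemble: occupation
    numbers are independent Poisson variables with factorised means
    w_ij = s_i * s_j / T (an assumption of this sketch)."""
    s = np.asarray(strengths, dtype=float)
    T = s.sum()
    mean = np.outer(s, s) / T
    np.fill_diagonal(mean, 0.0)            # forbid self-loops
    upper = np.triu(rng.poisson(mean), 1)  # draw each undirected pair once
    return upper + upper.T

W = sample_multiedge_network([10, 20, 30, 40])
print(W.sum(axis=1))  # realised strengths fluctuate around the targets
```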
APA, Harvard, Vancouver, ISO, and other styles
20

Uchimoto, Kiyotaka. "Maximum Entropy Models for Japanese Text Analysis and Generation." 京都大学 (Kyoto University), 2004. http://hdl.handle.net/2433/147595.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Tahmasbi, Mohammad Saeed. "VLSI implementation of heart sounds maximum entropy spectral estimation /." Title page, contents and summary only, 1994. http://web4.library.adelaide.edu.au/theses/09ENS/09enst128.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Hughes, Leslie Peter. "Maximum entropy methods applied to NMR and mass spectrometry." Thesis, Durham University, 2001. http://etheses.dur.ac.uk/3785/.

Full text
Abstract:
Maximum Entropy data processing techniques have been widely available for use by NMR spectroscopists and mass spectrometrists since they were first reported as a tool for enhancing damaged images. However, the techniques have been met with a certain amount of scepticism amongst the spectroscopic community, not least because of their apparent ability to get something for nothing. The aim of the work presented in this thesis is to demonstrate that if these techniques are used carefully and in appropriate situations, a great deal of information can be extracted from both NMR and mass spectra. This has been achieved by using the Memsys5 and Massive Inference algorithms to process a range of NMR and mass spectra which suffer from some of the problems commonly encountered in spectroscopy, i.e. poor resolution, poor sensitivity, and spectra with a wide range of peak widths. The theory underlying the two algorithms is described simply and the techniques for selecting appropriate point spread functions are outlined. Experimental rather than simulated spectra are processed throughout. Throughout this work the Maximum Entropy results are treated with scepticism. A pragmatic approach is employed to demonstrate that the results are valid. It is concluded that the Maximum Entropy methods do have their place amongst the many other data processing strategies used by spectroscopists. If used correctly and in appropriate situations, the results can be worth the investment in time needed to obtain a satisfactory result.
APA, Harvard, Vancouver, ISO, and other styles
23

McGrath, Deirdre Maria. "Maximum entropy deconvolution of low count nuclear medicine images." Thesis, University of Southampton, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.285623.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Bourne, Rachel Anne. "Default reasoning using maximum entropy and variable strength defaults." Thesis, Queen Mary, University of London, 1999. http://qmro.qmul.ac.uk/xmlui/handle/123456789/3808.

Full text
Abstract:
The thesis presents a computational model for reasoning with partial information which uses default rules or information about what normally happens. The idea is to provide a means of filling the gaps in an incomplete world view with the most plausible assumptions while allowing for the retraction of conclusions should they subsequently turn out to be incorrect. The model can be used both to reason from a given knowledge base of default rules, and to aid in the construction of such knowledge bases by allowing their designer to compare the consequences of his design with his own default assumptions. The conclusions supported by the proposed model are justified by the use of a probabilistic semantics for default rules in conjunction with the application of a rational means of inference from incomplete knowledge: the principle of maximum entropy (ME). The thesis develops both the theory and algorithms for the ME approach and argues that it should be considered as a general theory of default reasoning. The argument supporting the thesis has two main threads. Firstly, the ME approach is tested on the benchmark examples required of nonmonotonic behaviour, and it is found to handle them appropriately. Moreover, these patterns of commonsense reasoning emerge as consequences of the chosen semantics rather than being design features. It is argued that this makes the ME approach more objective, and its conclusions more justifiable, than other default systems. Secondly, the ME approach is compared with two existing systems: the lexicographic approach (LEX) and system Z+. It is shown that the former can be equated with ME under suitable conditions making it strictly less expressive, while the latter is too crude to perform the subtle resolution of default conflict which the ME approach allows. Finally, a program called DRS is described which implements all systems discussed in the thesis and provides a tool for testing their behaviours.
APA, Harvard, Vancouver, ISO, and other styles
25

Guo, Weiyu. "Implementing the principle of maximum entropy in option pricing /." free to MU campus, to others for purchase, 1999. http://wwwlib.umi.com/cr/mo/fullcit?p9946259.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Rama, Ritesh Rao. "Local maximum entropy approximation-based modelling of the canine heart." Master's thesis, University of Cape Town, 2012. http://hdl.handle.net/11427/16963.

Full text
Abstract:
The Local Maximum Entropy (LME) method is an approximation technique known to have good approximation characteristics. This is due to its non-negative shape functions and the weak Kronecker delta property, which allow the solutions to be continuous and smooth as compared to the Moving Least Squares (MLS) method used in the Element Free Galerkin (EFG) method. The method is based on a convex optimisation scheme in which a non-linear equation is solved with the help of a Newton algorithm, implemented in an in-house code called SESKA. In this study, the aim is to compare LME and MLS and highlight the differences. Preliminary benchmark tests of LME are found to be very conclusive. The method is able to approximate the deformation of a cantilever beam with higher accuracy compared to MLS. Moreover, its rapid convergence rate, assessed on a Cook's membrane problem, demonstrates that it requires a relatively coarser mesh to reach the exact solution. With those encouraging results, LME is then applied to a larger non-linear cardiac mechanics problem: simulating a healthy and a myocardially infarcted canine left ventricle (LV) during one heart beat. The LV is idealised as a prolate spheroidal ellipsoid. It undergoes expansion during the diastolic phase, addressed by a non-linear passive stress model which incorporates the transversely isotropic properties of the material. The contraction, during the systolic phase, is simulated by Guccione's active stress model. The infarct region is considered to be non-contractile and twice as stiff as the healthy tissue. The material loss, especially during the necrotic phase, is incorporated by the use of a homogenisation approach. Firstly, the loss of the contraction ability of the infarct region counteracts the overall contraction behaviour through a bulging deformation, where high stresses are noted. Secondly, with regard to the behaviour of LME, it is found to feature a high convergence rate and a decrease in computation time with respect to MLS. However, it is also observed that LME is quite sensitive to the nodal spacing, in particular for an unstructured nodal distribution, where it produces results that are completely unreliable.
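A minimal sketch of the LME shape functions described above, in one dimension (following the standard Arroyo-Ortiz construction that LME approximants are based on; the node layout, the locality parameter beta and the tolerance are illustrative choices, and SESKA itself is an in-house code):

```python
import numpy as np

def lme_shape_functions(x, nodes, beta, tol=1e-12, max_iter=50):
    """1-D local max-entropy shape functions p_a(x) ~ exp(-beta*dx^2 + lam*dx).
    Newton's method finds the multiplier lam enforcing the first-order
    consistency condition sum_a p_a * (x_a - x) = 0."""
    dx = nodes - x
    lam = 0.0
    for _ in range(max_iter):
        w = np.exp(-beta * dx**2 + lam * dx)
        p = w / w.sum()
        r = p @ dx                 # residual of the consistency condition
        if abs(r) < tol:
            break
        var = p @ dx**2 - r**2     # dr/dlam: variance of dx under p
        lam -= r / var             # Newton step
    return p

nodes = np.linspace(0.0, 1.0, 11)
p = lme_shape_functions(0.37, nodes, beta=200.0)
print(p.sum(), p @ nodes)          # 1.0 and 0.37: partition of unity, consistency
```

The non-negativity of these weights and the exact reproduction of constants and linear fields are the properties the abstract credits for the smooth, stable solutions.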
APA, Harvard, Vancouver, ISO, and other styles
27

Wu, Nailong. "The maximum entropy method and its application in radio astronomy." Thesis, The University of Sydney, 1985. https://hdl.handle.net/2123/27440.

Full text
Abstract:
This thesis consists of seven chapters, two appendices and a bibliography. In addition, three publications are included. The first digit in the page numbers refers to the chapter or appendix, and the second one or two digits are the serial numbers of the page in the chapter or appendix. For equations, figures and tables, the first digit and the second one or two digits refer respectively to the chapter and section; the third one or two digits refer to its serial occurrence in the chapter. The references in the bibliography are listed in alphabetical order by the author. Mathematical symbols are defined when they are first introduced. All abbreviated terms in the text are given in full at the first mention, with the abbreviations in parentheses; thereafter, the abbreviations are used. However, words in full may be repeated for readability.
APA, Harvard, Vancouver, ISO, and other styles
28

Armstrong, Nicholas. "Application of the maximum entropy method to x-ray profile analysis /." Electronic version, 1999. http://adt.lib.uts.edu.au/public/adt-NTSM20031204.135221/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Hofmann, Bernd, and Romy Krämer. "Maximum entropy regularization for calibrating a time-dependent volatility function." Universitätsbibliothek Chemnitz, 2004. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200401213.

Full text
Abstract:
We investigate the applicability of the method of maximum entropy regularization (MER), including convergence and convergence rates of regularized solutions, to the specific inverse problem (SIP) of calibrating a purely time-dependent volatility function. In this context, we extend the results of [16] and [17] in some detail. Due to the explicit structure of the forward operator based on a generalized Black-Scholes formula, the ill-posed character of the nonlinear inverse problem (SIP) can be verified. Numerical case studies illustrate the chances and limitations of (MER) versus Tikhonov regularization (TR) for smooth solutions and solutions with a sharp peak.
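Schematically, the two regularisation strategies being compared differ only in the penalty attached to the data misfit (a generic statement of the functionals, with $x^{*}$ a prior guess and $\alpha > 0$ the regularisation parameter):

```latex
\text{(TR)}\quad  \min_{x}\ \|F(x) - y\|^2 + \alpha\, \|x - x^{*}\|^2,
\qquad
\text{(MER)}\quad \min_{x > 0}\ \|F(x) - y\|^2
  + \alpha \int x(t) \ln\frac{x(t)}{x^{*}(t)}\, dt,
```

where $F$ is the forward operator mapping the volatility function $x$ to option prices $y$. The entropy penalty enforces positivity of the volatility automatically, which is one general reason MER is attractive for this calibration problem.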
APA, Harvard, Vancouver, ISO, and other styles
30

Padró, Muntsa, and Lluís Padró. "ME-CSSR : an extension of CSSR using maximum entropy models." Universität Potsdam, 2008. http://opus.kobv.de/ubp/volltexte/2008/2721/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Bury, Thomas. "Collective behaviours in the stock market: a maximum entropy approach." Doctoral thesis, Universite Libre de Bruxelles, 2014. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209341.

Full text
Abstract:
Scale invariance, collective behaviours and structural reorganization are crucial for portfolio management (portfolio composition, hedging, alternative definition of risk, etc.). This lack of any characteristic scale and such elaborated behaviours find their origin in the theory of complex systems. There are several mechanisms which generate scale invariance but maximum entropy models are able to explain both scale invariance and collective behaviours.

The study of the structure and collective modes of financial markets attracts more and more attention. It has been shown that some agent-based models are able to reproduce some stylized facts. Despite their partial success, there is still the problem of rule design. In this work, we used a statistical inverse approach to model the structure and co-movements in financial markets. Inverse models restrict the number of assumptions. We found that a pairwise maximum entropy model is consistent with the data and is able to describe the complex structure of financial systems. We considered the existence of a critical state which is linked to how the market processes information, how it responds to exogenous inputs and how its structure changes. The considered data sets did not reveal a persistent critical state but rather oscillations between order and disorder.

In this framework, we also showed that the collective modes are mostly dominated by pairwise co-movements and that univariate models are not good candidates for modelling crashes. The analysis also suggests a genuine adaptive process, since both the maximum variance of the log-likelihood and the accuracy of the predictive scheme vary through time. This approach may provide clues to crash precursors and shed light on how a shock spreads in a financial network and whether it will lead to a crash. The natural continuation of the present work could be the study of such a mechanism.
Doctorate in Economics and Management
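The pairwise maximum entropy model referred to above is formally an Ising-type distribution over binarised returns $s_i = \pm 1$; a standard statement of the model class (not a formula copied from the thesis):

```latex
P(\mathbf{s}) = \frac{1}{Z}\exp\!\Big(\sum_i h_i s_i + \sum_{i<j} J_{ij}\, s_i s_j\Big),
```

with the fields $h_i$ and couplings $J_{ij}$ fitted so the model reproduces the empirical means $\langle s_i \rangle$ and pairwise correlations $\langle s_i s_j \rangle$ of the stock set.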
APA, Harvard, Vancouver, ISO, and other styles
32

Kontonasios, Kleanthis-Nikolaos. "Maximum entropy modelling for quantifying unexpectedness of data mining results." Thesis, University of Bristol, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.618550.

Full text
Abstract:
This thesis is concerned with the problem of finding subjectively interesting patterns in data. The focus is restricted to the most prominent notion of subjective interestingness, namely the unexpectedness of a pattern. A pattern is considered unexpected if it contradicts the user's prior knowledge or beliefs about the data. Recently, a general information-theoretic framework for data mining that naturally incorporates unexpectedness was devised. The proposed approach relies on: 1. the Maximum Entropy principle for encoding the user's prior knowledge about the data or the patterns, 2. the InfRatio measure, an information-theoretic measure for evaluating the unexpectedness of a pattern, and 3. a set covering algorithm for finding the most interesting set of patterns. However, this framework is intentionally phrased in abstract terms and formally applied only to limited types of data mining tasks. This thesis is meant to fill this gap, as its main contribution is the formalization of this general framework for specific data mining tasks in order to demonstrate the wide applicability of the framework in practice. In particular, we instantiate the three main components of the framework in order to evaluate frequent itemsets, clusterings and patterns found in real-valued data such as biclusters and subgroups. Additionally, we provide the first literature review of interestingness measures based on unexpectedness and propose a novel classification of the methods into two classes, namely the "syntactical" and "probabilistic" approaches. We show that exploiting the framework for finding subjectively interesting sets of patterns in data is a highly efficient practice in theoretical, algorithmic and computational terms.
APA, Harvard, Vancouver, ISO, and other styles
33

Allen, Paul Nicholas. "The quantification of SIMS depth profiles by Maximum Entropy reconstruction." Thesis, University of Warwick, 1994. http://wrap.warwick.ac.uk/3849/.

Full text
Abstract:
The quantification procedures applied to raw SIMS data were devised on the basis of a simple model for the sputtering and ionisation that occur during measurement. The model and the associated quantification procedures have long been known to be inaccurate. If SIMS is to remain a useful analysis tool in the future, the quantification procedures must be adjusted such that current features of interest are accurately measured. This thesis describes the development of a more accurate (though empirical) model for the effects of the analysis, using the convolution integral. We propose a method for the quantification of SIMS depth profiles appropriate to this model, using Maximum Entropy (MaxEnt) reconstruction. SIMS depth profile data differ significantly from previous applications of the MaxEnt method: the very high signal to background ratio of the technique has led users to plot the results on a logarithmic axis, giving much importance to extremely small signals. The noise on SIMS depth profiles has been characterised. A number of optimisation algorithms have been developed and tested, and the performance of the MaxEnt method on SIMS data has been assessed. A novel form of the entropy, particularly suited to SIMS depth profiles, has been suggested. This form has given excellent results.
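The convolution model and the MaxEnt quantification built on it can be written compactly (a generic statement of the standard objective; the thesis develops a modified form of the entropy suited to the logarithmic dynamic range of SIMS):

```latex
d(z) = \int g(z - z')\, c(z')\, dz' + n(z),
\qquad
\hat{c} = \arg\max_{c \ge 0}\ \big[\, \alpha S(c) - \tfrac{1}{2}\chi^{2}(c; d) \,\big],
```

where $d$ is the measured profile, $g$ the instrumental response, $c$ the reconstructed concentration profile, $n$ the noise, $S$ the entropy and $\chi^2$ the noise-weighted misfit.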
APA, Harvard, Vancouver, ISO, and other styles
34

Pascale, Salvatore. "Maximum entropy production as a constraint on the climate system." Thesis, University of Reading, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.541950.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Clowser, Jonathan. "Application of the maximum entropy method to dynamical fermion simulations." Thesis, Swansea University, 2002. https://cronfa.swan.ac.uk/Record/cronfa42282.

Full text
Abstract:
This thesis presents results for spectral functions extracted from imaginary-time correlation functions obtained from Monte Carlo simulations using the Maximum Entropy Method (MEM). The advantages of this method are (i) no a priori assumptions or parametrisations of the spectral function are needed, (ii) a unique solution exists and (iii) the statistical significance of the resulting image can be quantitatively analysed. The Gross-Neveu model in d = 3 spacetime dimensions (GNM3) is a particularly interesting model to study with the MEM because at T = 0 it has a broken phase with a rich spectrum of mesonic bound states and a symmetric phase where there are resonances. Results for the elementary fermion, the Goldstone boson (pion), the sigma, the massive pseudoscalar meson and the symmetric phase resonances are presented. UKQCD Nf = 2 dynamical QCD data is also studied with MEM. Results are compared to those found from the quenched approximation, where the effects of quark loops in the QCD vacuum are neglected, to search for sea-quark effects in the extracted spectral functions. Information has been extracted from the difficult axial spatial and scalar channels as well as the pseudoscalar, vector and axial temporal channels. An estimate for the non-singlet scalar mass in the chiral limit is given which is in agreement with the experimental value of $M_{a_0} = 985$ MeV.
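The ill-posed inversion that MEM addresses here is the standard one for Euclidean correlators (generic lattice notation with the Shannon-Jaynes entropy, not taken from the thesis):

```latex
G(\tau) = \int_0^{\infty} K(\tau, \omega)\, \rho(\omega)\, d\omega,
\qquad
\hat{\rho} = \arg\max_{\rho \ge 0}\ \big[\, \alpha S(\rho) - \tfrac{1}{2}\chi^{2}(\rho) \,\big],
\qquad
S(\rho) = \int_0^{\infty}\! d\omega \Big[\, \rho(\omega) - m(\omega) - \rho(\omega)\ln\frac{\rho(\omega)}{m(\omega)} \Big],
```

where $K$ is the kernel relating the spectral function $\rho$ to the imaginary-time correlator $G$, $m(\omega)$ is the default model encoding prior information, and $\alpha$ balances entropy against the fit to the Monte Carlo data.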
APA, Harvard, Vancouver, ISO, and other styles
36

Camiola, Vito Dario. "Subbands model for semiconductors based on the Maximum Entropy Principle." Doctoral thesis, Università di Catania, 2013. http://hdl.handle.net/10761/1313.

Full text
Abstract:
In this thesis a double-gate MOSFET is simulated with an energy-transport subband model, and an energy-transport model is derived for a nanoscale MOSFET. Regarding the double-gate MOSFET, the model is formulated starting from the moment system derived from the Schroedinger-Poisson-Boltzmann equations. The system is closed on the basis of the maximum entropy principle and includes scattering of electrons with acoustic and non-polar optical phonons. The proposed expression of the entropy combines quantum effects and semiclassical transport by weighting the contribution of each subband with the square modulus of the envelope functions arising from the Schroedinger-Poisson system. The simulations show that the model is able to capture the relevant confining and transport features and assess the robustness of the numerical scheme. The model for the MOSFET takes into account the presence of both a 3D and a 2D electron gas, along with the quantization in the direction transversal to the oxide at the gate, which gives rise to a subband decomposition of the electron energy. Both intra- and inter-scattering between the 2D and the 3D electron gas are considered. In particular, a fictitious transition from the 3D to the 2D electrons and vice versa is introduced.
APA, Harvard, Vancouver, ISO, and other styles
37

Paramahamsan, Harinarayan. "Fundamental properties of Synthetic O-D Generation Formulations and Solutions." Thesis, Virginia Tech, 1999. http://hdl.handle.net/10919/31143.

Full text
Abstract:
Origin-Destination (O-D) matrices are required in order to model traffic routing behavior in networks. Direct techniques for collecting O-D information from home and roadside interviews have historically been utilized to estimate O-D tables. However, these techniques are not only very costly, labor intensive, and disruptive to trip makers, but traditionally also do not capture traffic peaking behavior, which is often required for traffic operational purposes. Consequently, more cost-effective indirect or synthetic O-D estimation techniques have been developed, and continue to be developed. They utilize readily available traffic volume counts to estimate the most likely O-D tables that may have generated the observed link counts. This thesis describes the basic formulations that have been proposed to formulate and solve the static O-D problem synthetically using link flow observations based on Maximum Entropy techniques. As is the case with many mathematical solutions to engineering problems, a number of simplifying assumptions have been made in order to solve the synthetic O-D problem. Unfortunately, the descriptions of these simplifying assumptions are often not fully described in the literature, and in some cases, these assumptions are not mentioned at all. Furthermore, the literature fails to systematically demonstrate what impact these assumptions have on the final O-D table estimate. Therefore, this thesis utilizes simple hypothetical networks to: (1) demonstrate and compare the two main types of synthetic O-D formulations, namely the trip and volume based formulations; (2) track the O-D estimation procedure from its initial formulation to its final formulation, demonstrating all significant assumptions that have been made and the implications of these assumptions on the final solution; (3) demonstrate to what extent the final O-D estimation formulation remains valid when these assumptions are invoked; and (4) test the applicability of some packages which implement the various formulations and solution techniques that are available.
Master of Science
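As a pointer to what the volume-based maximum entropy formulation looks like, here is a generic Van Zuylen-Willumsen style statement under simplifying assumptions, with $p^{a}_{ij}$ the known proportion of $i \to j$ trips using counted link $a$ (the thesis traces how such assumptions enter):

```latex
\max_{T \ge 0}\ -\sum_{ij} T_{ij}\,\big(\ln T_{ij} - 1\big)
\quad\text{s.t.}\quad
\sum_{ij} p^{a}_{ij}\, T_{ij} = v_a \ \ \text{for every counted link } a,
```

whose stationarity conditions give the multiplicative form $T_{ij} = \prod_a X_a^{\,p^a_{ij}}$, solved iteratively from the observed link volumes $v_a$.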
APA, Harvard, Vancouver, ISO, and other styles
38

Ziebart, Brian D. "Modeling Purposeful Adaptive Behavior with the Principle of Maximum Causal Entropy." Research Showcase @ CMU, 2010. http://repository.cmu.edu/dissertations/17.

Full text
Abstract:
Predicting human behavior from a small amount of training examples is a challenging machine learning problem. In this thesis, we introduce the principle of maximum causal entropy, a general technique for applying information theory to decision-theoretic, game-theoretic, and control settings where relevant information is sequentially revealed over time. This approach guarantees decision-theoretic performance by matching purposeful measures of behavior (Abbeel & Ng, 2004), and/or enforces game-theoretic rationality constraints (Aumann, 1974), while otherwise being as uncertain as possible, which minimizes worst-case predictive log-loss (Grünwald & Dawid, 2003). We derive probabilistic models for decision, control, and multi-player game settings using this approach. We then develop corresponding algorithms for efficient inference that include relaxations of the Bellman equation (Bellman, 1957), and simple learning algorithms based on convex optimization. We apply the models and algorithms to a number of behavior prediction tasks. Specifically, we present empirical evaluations of the approach in the domains of vehicle route preference modeling using over 100,000 miles of collected taxi driving data, pedestrian motion modeling from weeks of indoor movement data, and robust prediction of game play in stochastic multi-player games.
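The quantity being maximised is the causally conditioned entropy of actions given sequentially revealed states, as in Ziebart's published formulation:

```latex
H(\mathbf{A}^{T} \,\|\, \mathbf{S}^{T}) = \sum_{t=1}^{T} H(A_t \mid S_{1:t},\, A_{1:t-1}),
```

maximised subject to matching empirical feature expectations. Unlike ordinary conditional entropy, no conditioning on future states is allowed, which is what makes the model causal and suitable for sequential decision settings.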
APA, Harvard, Vancouver, ISO, and other styles
39

Tate, Graeme. "New methods in protein X-ray crystallography using maximum entropy techniques." Thesis, University of Glasgow, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.396507.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Mattar, Essam Hussain. "Clinical application of maximum entropy image processing in planar radionuclide imaging." Thesis, University of Southampton, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.323955.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Johnson, Jason K. (Jason Kyle). "Convex relaxation methods for graphical models : Lagrangian and maximum entropy approaches." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/45871.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008.
Includes bibliographical references (p. 241-257).
Graphical models provide compact representations of complex probability distributions of many random variables through a collection of potential functions defined on small subsets of these variables. This representation is defined with respect to a graph in which nodes represent random variables and edges represent the interactions among those random variables. Graphical models provide a powerful and flexible approach to many problems in science and engineering, but also present serious challenges owing to the intractability of optimal inference and estimation over general graphs. In this thesis, we consider convex optimization methods to address two central problems that commonly arise for graphical models. First, we consider the problem of determining the most probable configuration-also known as the maximum a posteriori (MAP) estimate-of all variables in a graphical model, conditioned on (possibly noisy) measurements of some variables. This general problem is intractable, so we consider a Lagrangian relaxation (LR) approach to obtain a tractable dual problem. This involves using the Lagrangian decomposition technique to break up an intractable graph into tractable subgraphs, such as small "blocks" of nodes, embedded trees or thin subgraphs. We develop a distributed, iterative algorithm that minimizes the Lagrangian dual function by block coordinate descent. This results in an iterative marginal-matching procedure that enforces consistency among the subgraphs using an adaptation of the well-known iterative scaling algorithm. This approach is developed both for discrete variable and Gaussian graphical models. In discrete models, we also introduce a deterministic annealing procedure, which introduces a temperature parameter to define a smoothed dual function and then gradually reduces the temperature to recover the (non-differentiable) Lagrangian dual. When strong duality holds, we recover the optimal MAP estimate. We show that this occurs for a broad class of "convex decomposable" Gaussian graphical models, which generalizes the "pairwise normalizable" condition known to be important for iterative estimation in Gaussian models.
In certain "frustrated" discrete models a duality gap can occur using simple versions of our approach. We consider methods that adaptively enhance the dual formulation, by including more complex subgraphs, so as to reduce the duality gap. In many cases we are able to eliminate the duality gap and obtain the optimal MAP estimate in a tractable manner. We also propose a heuristic method to obtain approximate solutions in cases where there is a duality gap. Second, we consider the problem of learning a graphical model (both the graph and its potential functions) from sample data. We propose the maximum entropy relaxation (MER) method, which is the convex optimization problem of selecting the least informative (maximum entropy) model over an exponential family of graphical models subject to constraints that small subsets of variables should have marginal distributions that are close to the distribution of sample data. We use relative entropy to measure the divergence between marginal probability distributions. We find that MER leads naturally to selection of sparse graphical models. To identify this sparse graph efficiently, we use a "bootstrap" method that constructs the MER solution by solving a sequence of tractable subproblems defined over thin graphs, including new edges at each step to correct for large marginal divergences that violate the MER constraint. The MER problem on each of these subgraphs is efficiently solved using the primal-dual interior point method (implemented so as to take advantage of efficient inference methods for thin graphical models). We also consider a dual formulation of MER that minimizes a convex function of the potentials of the graphical model. This MER dual problem can be interpreted as a robust version of maximum-likelihood parameter estimation, where the MER constraints specify the uncertainty in the sufficient statistics of the model. This also corresponds to a regularized maximum-likelihood approach, in which an information-geometric regularization term favors selection of sparse potential representations. We develop a relaxed version of the iterative scaling method to solve this MER dual problem.
by Jason K. Johnson.
Ph.D.
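Read directly off the description above, the MER primal problem has the following shape (the divergence direction and per-subset tolerances shown are one conventional way to write it):

```latex
\max_{p \in \mathcal{F}}\ H(p)
\quad\text{s.t.}\quad
D\big(\hat{p}_{E}\,\big\|\,p_{E}\big) \le \delta_{E}
\ \ \text{for each small subset } E \text{ of variables},
```

where $\mathcal{F}$ is an exponential family of graphical models, $\hat{p}_E$ is the empirical marginal on subset $E$, and the slack $\delta_E$ is what drives the selection of sparse graphs.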
APA, Harvard, Vancouver, ISO, and other styles
42

Benjamin, Ryan Lester. "Non-maximum entropy polymer elasticity, viscoelasticity and the lattice Boltzmann method." Doctoral thesis, University of Cape Town, 2010. http://hdl.handle.net/11427/10126.

Full text
Abstract:
Includes abstract.
Includes bibliographical references (p. 291-303).
Various models of viscoelasticity exist based on continuum mechanics. In this work a statistical mechanical approach is taken to derive a new isotropic, hyperelastic, viscoelastic, incompressible constitutive equation for polymers. The result has been achieved by generating a novel physics for the microscopic behaviour of polymers. A vocabulary has been created to facilitate the physics. A new differential equation describing polymer behaviour is derived based on the mathematical description of the physics.
APA, Harvard, Vancouver, ISO, and other styles
43

Broqvist, Widham Emil. "Scaling up Maximum Entropy Deep Inverse Reinforcement Learning with Transfer Learning." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-281796.

Full text
Abstract:
In this thesis an issue with common inverse reinforcement learning algorithms is identified, which causes them to be computationally heavy. A solution is proposed which attempts to address this issue and which can be built upon in the future. The complexity of inverse reinforcement learning algorithms is increased because at each iteration something called a reinforcement learning step is performed to evaluate the result of the previous iteration and guide future learning. This step is slow to perform for problems with large state spaces and where many iterations are required. It has been observed that the problem solved in this step is in many cases very similar to that of the previous iteration. Therefore the solution suggested is to utilize transfer learning to retain some of the learned information and improve speed at subsequent steps. In this thesis different forms of transfer are evaluated for common reinforcement learning algorithms when applied to this problem. Experiments are run using value iteration and Q-learning as the algorithms for the reinforcement learning step. The algorithms are applied to two route planning problems, and it is found that in both cases a transfer can be useful for improving calculation times. For value iteration the transfer is easy to understand and implement and shows large improvements in speed compared to the basic method. For Q-learning the implementation contains more variables, and while it shows an improvement it is not as dramatic as that for value iteration. The conclusion drawn is that for inverse reinforcement learning implementations using value iteration, a transfer is always recommended, while for implementations using other algorithms for the reinforcement learning step a transfer is most likely recommended, but more experimentation needs to be conducted.
In this thesis a common problem with inverse reinforcement learning algorithms is identified which causes them to be computationally heavy. A solution is proposed which attempts to address the problem and which can be built upon in the future. The complexity of inverse reinforcement learning algorithms increases because every iteration requires a so-called reinforcement learning step whose purpose is to evaluate the previous iteration and guide the learning. This step takes a long time for problems with a large state space and where many iterations are necessary. It has been observed that the problem solved in this step is in many cases very similar to the problem solved in the previous iteration. The proposed solution is therefore to use transfer learning to retain this knowledge. In this thesis, different forms of transfer are evaluated for common reinforcement learning algorithms on this problem. Experiments are run with value iteration and Q-learning as the algorithms for the reinforcement learning step. The algorithms are applied to two route planning problems, and in both cases a transfer can improve computation times. For value iteration the transfer is easy to implement and understand and shows large improvements in speed compared to the base case. For Q-learning the implementation has more variables, and while an improvement is shown, it is not as dramatic as for value iteration. The conclusions drawn are that for inverse reinforcement learning implementations using value iteration as the algorithm for the reinforcement learning step a transfer is always recommended, while for implementations using other algorithms a transfer is probably recommended but more experiments would need to be performed.
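The value iteration transfer described above amounts to warm-starting each reinforcement learning step from the previous iteration's value function; a minimal tabular sketch (the MDP arrays, discount and tolerance are illustrative, not taken from the thesis code):

```python
import numpy as np

def value_iteration(P, r, gamma, V0=None, tol=1e-6):
    """Tabular value iteration. P has shape (A, S, S), r has shape (S,).
    Passing the previous IRL iteration's values as V0 is the transfer:
    rewards change little between iterations, so few sweeps are needed."""
    V = np.zeros(r.shape[0]) if V0 is None else V0.copy()
    while True:
        Q = r + gamma * (P @ V)    # (A, S) action values
        V_new = Q.max(axis=0)
        if np.abs(V_new - V).max() < tol:
            return V_new
        V = V_new

# Inside the inverse RL loop:
#   V = value_iteration(P, reward_new, 0.95, V0=V_from_previous_iteration)
```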
APA, Harvard, Vancouver, ISO, and other styles
44

Kendall, Elizabeth Ann Caughey Thomas Kirk. "Range dependent signals and maximum entropy methods for underwater acoustic tomography /." Diss., Pasadena, Calif. : California Institute of Technology, 1985. http://resolver.caltech.edu/CaltechETD:etd-04092008-080843.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Graser, David Jay. "Image restoration and enhancement by closed form positively constrained maximum entropy." Diss., The University of Arizona, 2000. http://hdl.handle.net/10150/289224.

Full text
Abstract:
This dissertation is concerned with an image processing algorithm that performs image enhancement and restoration. Closed form maximum entropy filtering will be derived from its foundations in classical Wiener filtering and maximum entropy theory. Ad hoc variations of Wiener filtering will be introduced and discussed in terms of information density. The language of information density will be used to examine the entropy filter and its merits. These merits will be demonstrated through a series of numerical simulations of real and artificial astronomical objects. The results of these simulations will be shown to be a 7% to 50% improvement over the classical Wiener estimate. The closed form maximum entropy filter will be adapted to the blind deconvolution problem. A test pattern will be estimated to demonstrate the potential power of this adaptation.
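The classical Wiener estimate that serves as the baseline is the standard frequency-domain filter (textbook background, included for orientation):

```latex
\hat{O}(u) = \frac{H^{*}(u)\,\Phi_{O}(u)}{|H(u)|^{2}\,\Phi_{O}(u) + \Phi_{N}(u)}\; D(u),
```

where $D$ is the spectrum of the degraded image, $H$ the transfer function of the blur, and $\Phi_O$, $\Phi_N$ the power spectra of object and noise; the closed form maximum entropy filter replaces this linear estimate with one that additionally enforces positivity.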
APA, Harvard, Vancouver, ISO, and other styles
46

Okafor, Anthony. "Entropy based techniques with applications in data mining." [Gainesville, Fla.] : University of Florida, 2005. http://purl.fcla.edu/fcla/etd/UFE0013113.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

LeBlanc, Raymond. "The maximum entropy principle as a basis for statistical models in epidemiology /." Thesis, McGill University, 1990. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=74600.

Full text
Abstract:
We propose an approach to the construction of statistical models based on the maximum entropy principle, in conjunction with a constructive method relying on a precise description of the individual contribution of each possible unit of observation in the population. This procedure is applied to the analysis of 2 x 2 tables, ubiquitous in biostatistics. The approach provides a new perspective on, and understanding of, the fundamental nature of logistic regression, Cox's proportional hazards model, and the noncentral hypergeometric model. Applying this method to the odds ratio produces new distributions for this random variable and gives new means of constructing confidence intervals for it. We present basic properties of these distributions and compare the results with those of other methods.
Finally, this constructive approach, which proceeds from the lower level of the individual contributions of the experimental units to the global level of the population, is applied to sample size determination for comparative studies in which the compared groups suffer attrition due to noncompliance with the specified regimen. This attrition reduces the apparent treatment effect in the analysis. This presentation constitutes a foundation for a more general and elegant solution to the problem.
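For orientation, the textbook treatment of the 2 x 2 table that the thesis revisits computes the sample odds ratio and a log-scale (Woolf) confidence interval. The sketch below shows that standard benchmark with made-up counts, not the new maximum entropy distributions or intervals developed in the thesis:

import math

def odds_ratio_with_woolf_ci(a, b, c, d, z=1.96):
    # a, b, c, d are the four cell counts of a 2 x 2 table.
    # Returns the sample odds ratio (a*d)/(b*c) and the classical Woolf
    # confidence interval, based on the approximate normality of log(OR).
    or_hat = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_hat) - z * se_log)
    hi = math.exp(math.log(or_hat) + z * se_log)
    return or_hat, (lo, hi)

# Example with invented counts: 20 exposed cases, 80 exposed controls,
# 10 unexposed cases, 90 unexposed controls.
# odds_ratio_with_woolf_ci(20, 80, 10, 90) -> (2.25, (~0.99, ~5.1))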
APA, Harvard, Vancouver, ISO, and other styles
48

Kane, Thomas Brett. "Reasoning with uncertainty using Nilsson's probabilistic logic and the maximum entropy formalism." Thesis, Heriot-Watt University, 1992. http://hdl.handle.net/10399/789.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Feng, Jianping. "Semi-supervised CONTRAfold for RNA Secondary Structure Prediction: A Maximum Entropy Approach." Wright State University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=wright1316225523.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Macedo, Pedro Filipe Pessoa. "Contributions to the theory of maximum entropy estimation for ill-posed models." Doctoral thesis, Universidade de Aveiro, 2013. http://hdl.handle.net/10773/11048.

Full text
Abstract:
Doctoral degree in Mathematics
Statistical techniques are essential in most areas of science, and linear regression is one of the most widely used. It is well known that under fairly general conditions linear regression is a powerful statistical tool. Unfortunately, some of these conditions are usually not satisfied in practice, and the regression models become ill-posed, which means that the application of traditional estimation methods may lead to non-unique or highly unstable solutions. This work is mainly focused on the maximum entropy estimation of ill-posed models, in particular the estimation of regression models with small sample sizes affected by collinearity and outliers. The research is developed in three directions, namely the estimation of technical efficiency with state-contingent production frontiers, the estimation of the ridge parameter in ridge regression, and new developments in maximum entropy estimation. In the estimation of technical efficiency with state-contingent production frontiers, this work reveals that the maximum entropy estimators outperform the maximum likelihood estimator in most of the cases analyzed, namely in models with few observations in some states of nature and in models with a large number of states of nature, which usually represent models affected by collinearity. The maximum entropy estimators are expected to make an important contribution to the growth of empirical work with state-contingent production frontiers. The main challenge in ridge regression is the selection of the ridge parameter. A huge number of methods to estimate the ridge parameter exist, and no single method emerges in the literature as the best overall. In this work, a new method to select the ridge parameter, combining analysis of the ridge trace with maximum entropy estimation, is presented. The simulation study reveals that, for regression models with small sample sizes affected by collinearity, the new estimator is probably one of the best ridge parameter estimators available in the literature on ridge regression. Founded on the Shannon entropy, the ordinary least squares estimator, and some concepts from quantum electrodynamics, the maximum entropy Leuven estimator overcomes the main weakness of the generalized maximum entropy estimator by avoiding exogenous information (supports for the parameters and errors of the regression model) that is usually not available. Based on the maximum entropy Leuven estimator, information theory, and robust regression, new developments in the theory of maximum entropy estimation are provided in this work. The simulation studies and empirical applications reveal that the new estimators are a good choice for estimating linear regression models with small sample sizes affected by collinearity and outliers. Finally, computational codes for maximum entropy estimation are provided, adding to the scarce computational resources currently available.
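To fix notation for readers, the standard ridge estimator below shows where the ridge parameter k enters. This is only the textbook estimator with a plain grid scan (a ridge trace), not the maximum-entropy-based selection rule that the thesis proposes:

import numpy as np

def ridge_estimate(X, y, k):
    # Standard ridge regression: beta(k) = (X'X + k I)^{-1} X'y.
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

# A ridge trace evaluates beta(k) over a grid of k values and inspects
# where the coefficients stabilize; choosing k from such a trace is the
# step the thesis automates with maximum entropy estimation.
# for k in np.logspace(-4, 2, 25):
#     print(k, ridge_estimate(X, y, k))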
APA, Harvard, Vancouver, ISO, and other styles
