Dissertations / Theses on the topic 'Optimality'

Consult the top 50 dissertations / theses for your research on the topic 'Optimality.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Trommer, Jochen. "Distributed optimality." PhD thesis, [S.l. : s.n.], 2001. http://pub.ub.uni-potsdam.de/2004/0037/trommer.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Li, Cheuk Ming. "Pareto optimality and beyond." Thesis, McGill University, 1985. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=72066.

Full text
Abstract:
The problem of social choice is the central theme of this study. Our main objective is to prove the existence of a social welfare function in order to put to rest the doctrine of 'natural liberty.' We reject most of the recently suggested solutions to the problem on the basis that they are either incomplete or inconsistent. Our proposed social welfare function is along the utilitarian line. Ratio-scale interpersonal comparisons of cardinal utilities are used to prove its existence. If we are allowed to define utilitarianism more broadly, then our social welfare function will also be unique. Finally, the study argues strongly for more positive action on the part of the government to rectify social injustice.
APA, Harvard, Vancouver, ISO, and other styles
3

Joyce, Thomas. "Optimisation and Bayesian optimality." Thesis, University of Edinburgh, 2016. http://hdl.handle.net/1842/19564.

Full text
Abstract:
This doctoral thesis presents the results of work on optimisation algorithms. We first give a detailed exploration of the problems involved in comparing optimisation algorithms. In particular we provide extensions and refinements to no free lunch results, exploring algorithms with arbitrary stopping conditions, optimisation under restricted metrics, parallel computing and free lunches, and head-to-head minimax behaviour. We also characterise no free lunch results in terms of order statistics. We then ask what really constitutes understanding of an optimisation algorithm. We argue that one central part of understanding an optimiser is knowing its Bayesian prior and cost function. We then pursue a general Bayesian framing of optimisation, and prove that this Bayesian perspective is applicable to all optimisers, and that even seemingly non-Bayesian optimisers can be understood in this way. Specifically we prove that arbitrary optimisation algorithms can be represented as a prior and a cost function. We examine the relationship between the Kolmogorov complexity of the optimiser and the Kolmogorov complexity of its corresponding prior. We also extend our results from deterministic optimisers to stochastic and forgetful optimisers, and we show that selecting a prior uniformly at random is not equivalent to selecting an optimisation behaviour uniformly at random. Lastly, we consider how best to gain a Bayesian understanding of real optimisation algorithms. We use the developed Bayesian framework to explore the effects of some common approaches to constructing meta-heuristic optimisation algorithms, such as on-line parameter adaptation. We conclude by exploring an approach to uncovering the probabilistic beliefs of optimisers with a "shattering" method.
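One way to make the "prior plus cost function" reading concrete (an illustrative sketch with invented notation, not the thesis's exact construction): given a prior P over objective functions and a cost c, a Bayesian optimiser chooses its next query point by minimising expected cost under the posterior after observing the evaluations D_t = {(x_1, f(x_1)), ..., (x_t, f(x_t))}:

% illustrative notation, not the thesis's construction
\[
x_{t+1} \in \arg\min_{x} \; \mathbb{E}_{f \sim P(\cdot \mid D_t)}\big[\, c(f, x) \,\big].
\]

The representation result summarised in the abstract says that the behaviour of any deterministic optimisation algorithm can be reproduced by some such pair (P, c).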
APA, Harvard, Vancouver, ISO, and other styles
4

Baker, Adam. "Parallel lexical optimality theory." University of Arizona Linguistics Circle, 2005. http://hdl.handle.net/10150/126626.

Full text
Abstract:
Parallel Lexical Optimality Theory (PLOT) is a model I propose to account for opacity and related phenomena in Optimality Theory. PLOT recognizes three input interfaces and three output interfaces to the grammar. Interfaces are related to each other by constituency and by correspondence (McCarthy & Prince 1995). PLOT’s architecture provides sufficient power to account for opacity, but is not overly powerful, I argue. Additionally, PLOT interfaces neatly with Comparative Markedness (McCarthy 2002b) to explain the co-occurrence of derived environment effects and counterfeeding opacity. PLOT also makes more limited typological predictions than LPM-OT (Kiparsky 2003), on which PLOT is based, since PLOT recognizes only one markedness hierarchy for the grammar.
APA, Harvard, Vancouver, ISO, and other styles
5

Rodier, Dominique. "Prosodic domains in Optimality Theory." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape9/PQDD_0024/NQ50247.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Lu, Bing. "Calibration, Optimality and Financial Mathematics." Doctoral thesis, Uppsala universitet, Matematiska institutionen, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-209235.

Full text
Abstract:
This thesis consists of a summary and five papers, dealing with financial applications of optimal stopping, optimal control and volatility. In Paper I, we present a method to recover a time-independent piecewise constant volatility from a finite set of perpetual American put option prices. In Paper II, we study the optimal liquidation problem under the assumption that the asset price follows a geometric Brownian motion with unknown drift, which takes one of two given values. The optimal strategy is to liquidate the first time the asset price falls below a monotonically increasing, continuous time-dependent boundary. In Paper III, we investigate the optimal liquidation problem under the assumption that the asset price follows a jump-diffusion with unknown intensity, which takes one of two given values. The best liquidation strategy is to sell the asset the first time the jump process falls below or goes above a monotone time-dependent boundary. Paper IV treats the optimal dividend problem in a model allowing for positive jumps of the underlying firm value. The optimal dividend strategy is of barrier type, i.e. to pay out all surplus above a certain level as dividends, and then pay nothing as long as the firm value is below this level. Finally, in Paper V it is shown that a necessary and sufficient condition for the explosion of implied volatility near expiry in exponential Lévy models is the existence of jumps towards the strike price in the underlying process.
APA, Harvard, Vancouver, ISO, and other styles
7

Rodier, Dominique. "Prosodic domains in optimality theory." Thesis, McGill University, 1998. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=35933.

Full text
Abstract:
Cross-linguistically, the notion 'minimal word' has proved fruitful ground for explanatory accounts of requirements imposed on morphological and phonological constituents. Word minimality requires that a lexical word includes the main-stressed foot of the language. As a result, subminimal words are augmented to a bimoraic foot through diverse strategies like vowel lengthening, syllable addition, etc. Even languages with numerous monomoraic lexical words may impose a minimality requirement on derived words that would otherwise be smaller than a well-formed foot. In addition, the minimal word has been argued to play a central role in characterizing a prosodic base within some morpho-prosodic constituent for the application of processes such as reduplication and infixation.
The goal of this thesis is to offer an explanation as to why and in which contexts grammars may prefer a prosodic constituent which may not be reducible to a bimoraic foot. I provide explanatory accounts for a number of cases where the prosodic structure of morphological or phonological constituents cannot be defined as coextensive with the main stressed foot of the language. To this end, I propose to add to the theory of Prosodic Structure (Chen 1987; Selkirk 1984, 1986, 1989, 1995; Selkirk and Shen 1990) within an optimality-theoretic framework by providing evidence for a new level within the Prosodic Hierarchy, that of the Prosodic Stem (PrStem).
An important aspect of the model of prosodic structure proposed here is a notion of headship which follows directly from the Prosodic Hierarchy itself and from the metrical grouping of prosodic constituents. A theory of prosodic heads is developed which assumes that structural constraints can impose well-formedness requirements on the prosodic shape and the distribution of heads within morphological and phonological constituents.
APA, Harvard, Vancouver, ISO, and other styles
8

Nguyen, Van Vinh S. M. Massachusetts Institute of Technology. "Fairness and optimality in trading." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/61894.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 50-51).
This thesis proposes a novel approach to address the issues of efficiency and fairness when multiple portfolios are rebalanced simultaneously. A fund manager who rebalances multiple portfolios needs to not only optimize the total efficiency, i.e., maximize net risk-adjusted return, but also guarantee that trading costs are fairly split among the clients. The existing approaches in the literature, namely the Social Welfare and the Competitive Equilibrium schemes, do not balance efficiency and fairness effectively. To this end, we suggest an approach that utilizes popular and well-accepted resource allocation ideas from the field of communications and economics, such as Max-Min fairness, Proportional fairness and α-fairness. We incorporate in our formulation a quadratic model of market impact cost to reflect the cumulative effect of trade pooling. Total trading costs are split fairly among accounts using the so-called pro rata scheme. We solve the resulting multi-objective optimization problem by adopting the Max-Min fairness, Proportional fairness and α-fairness schemes. Under these schemes, the resulting optimization problems have non-convex objectives and non-convex constraints, which are NP-hard in general. We solve these problems using a local search method based on linearization techniques. The efficiency of this approach is discussed when we compare it with a deterministic global optimization method on small size optimization problems that have similar structure to the aforementioned problems. We present computational results for a small data set (2 funds, 73 assets) and a large set (6 funds, 73 assets). These results suggest that the solution obtained from our model provides a better compromise between efficiency and fairness than existing approaches. An important implication of our work is that given a level of fairness that we want to maintain, we can always find Pareto-efficient trade sets.
by Van Vinh Nguyen.
S.M.
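The fairness schemes named in the abstract belong to a standard family from resource allocation. For reference (a textbook definition, not the thesis's own formulation), allocating x_i to account i one maximises the sum of α-fair utilities:

% standard alpha-fair utility family; included for orientation only
\[
U_\alpha(x) =
\begin{cases}
\dfrac{x^{1-\alpha}}{1-\alpha}, & \alpha \ge 0,\ \alpha \ne 1,\\[1ex]
\log x, & \alpha = 1.
\end{cases}
\]

Here α = 0 recovers the pure efficiency (utilitarian) allocation, α = 1 proportional fairness, and α → ∞ Max-Min fairness, which is why varying α traces out a compromise between efficiency and fairness.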
APA, Harvard, Vancouver, ISO, and other styles
9

Wang, Xiaowei. "Weighted Optimality of Block Designs." Diss., Virginia Tech, 2009. http://hdl.handle.net/10919/26168.

Full text
Abstract:
Design optimality for treatment comparison experiments has been intensively studied by numerous researchers, employing a variety of statistically sound criteria. Their general formulation is based on the idea that optimality functions of the treatment information matrix are invariant to treatment permutation. This implies equal interest in all treatments. In practice, however, there are many experiments where not all treatments are equally important. When selecting a design for such an experiment, it would be better to weight the information gathered on different treatments according to their relative importance and/or interest. This dissertation develops a general theory of weighted design optimality, with special attention to the block design problem. Among others, this study develops and justifies weighted versions of the popular A, E and MV optimality criteria. These are based on the weighted information matrix, also introduced here. Sufficient conditions are derived for block designs to be weighted A, E and MV-optimal for situations where treatments fall into two groups according to two distinct levels of interest, these being important special cases of the "2-weight optimality" problem. Particularly, optimal designs are developed for experiments where one of the treatments is a control. The concept of efficiency balance is also studied in this dissertation. One view of efficiency balance and its generalizations is that unequal treatment replications are chosen to reflect unequal treatment interest. It is revealed that efficiency balance is closely related to the weighted-E approach to design selection. Functions of the canonical efficiency factors may be interpreted as weighted optimality criteria for comparison of designs with the same replication numbers.
Ph. D.
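A minimal sketch of how a weighted A-type criterion of this kind is often written (illustrative notation; the dissertation's exact weighted information matrix and criteria may be defined differently): with treatment information matrix C_d for a design d and diagonal weights W = diag(w_1, ..., w_v),

% illustrative weighted A-criterion; W = I recovers ordinary A-optimality
\[
\phi_{A,W}(d) \;=\; \operatorname{tr}\!\big(W\, C_d^{+}\big) \;=\; \sum_{i=1}^{v} w_i \,\big(C_d^{+}\big)_{ii},
\]

to be minimised over competing designs, so that variances associated with more important treatments are penalised more heavily.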
APA, Harvard, Vancouver, ISO, and other styles
10

Parish, Michael S. "Optimality of aeroassisted orbital plane changes." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1995. http://handle.dtic.mil/100.2/ADA306016.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Boţ, Radu Ioan. "Duality and optimality in multiobjective optimization." [S.l. : s.n.], 2003. http://deposit.ddb.de/cgi-bin/dokserv?idn=968798322.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Engels, Eva. "Adverb placement: an optimality theoretic approach." PhD thesis, [S.l. : s.n.], 2004. http://deposit.ddb.de/cgi-bin/dokserv?idn=974371874.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Bot, Radu Ioan. "Duality and optimality in multiobjective optimization." Doctoral thesis, Universitätsbibliothek Chemnitz, 2003. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200300842.

Full text
Abstract:
The aim of this work is to make some investigations concerning duality for multiobjective optimization problems. In order to do this, we first study duality for scalar optimization problems using the conjugacy approach. This allows us to attach three different dual problems to a primal one. We examine the relations between the optimal objective values of the duals and verify, under some appropriate assumptions, the existence of strong duality. Closely related to strong duality, we derive the optimality conditions for each of these three duals. By means of these considerations, we study the duality for two vector optimization problems, namely, a convex multiobjective problem with cone inequality constraints and a special fractional programming problem with linear inequality constraints. To each of these vector problems we associate a scalar primal and study the duality for it. The structure of both scalar duals gives us an idea about how to construct a multiobjective dual. The existence of weak and strong duality is also shown. We conclude our investigations with an analysis of different duality concepts in multiobjective optimization. To a general multiobjective problem with cone inequality constraints we introduce six further duals for which we prove weak as well as strong duality assertions. Afterwards, we derive some inclusion results for the image sets and, respectively, for the sets of maximal elements of the image sets of these problems. Moreover, we show under which conditions they become identical. A general scheme containing the relations between the six multiobjective duals and some other duals mentioned in the literature is derived.
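For orientation, the conjugacy approach mentioned above rests on the convex (Fenchel) conjugate; the following are the standard textbook objects, not the particular duals constructed in the thesis:

% standard Fenchel conjugate and Fenchel-Rockafellar weak duality
\[
f^{*}(p) \;=\; \sup_{x}\,\{\langle p, x\rangle - f(x)\},
\qquad
\inf_{x}\,\{ f(x) + g(Ax) \} \;\ge\; \sup_{p}\,\{ -f^{*}(A^{\top} p) - g^{*}(-p) \}.
\]

Strong duality means the two optimal values coincide, which typically requires a regularity (constraint qualification) condition of the kind referred to as "appropriate assumptions" in the abstract.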
APA, Harvard, Vancouver, ISO, and other styles
14

McKay, Johnathan Lucas. "Neuromechanical constraints and optimality for balance." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34669.

Full text
Abstract:
Although people can typically maintain balance on moving trains, or press the appropriate button on an elevator with little conscious effort, the apparent ease of these sensorimotor tasks is courtesy of neural mechanisms that continuously interpret many sensory input signals to activate muscles throughout the body. The overall hypothesis of this work is that motor behaviors emerge from the interacting constraints and features of the nervous and musculoskeletal systems. The nervous system may simplify the control problem by recruiting muscles in groups called muscle synergies rather than individually. Because muscles cannot be recruited individually, muscle synergies may represent a neural constraint on behavior. However, the constraints of the musculoskeletal system and environment may also contribute to determining motor behaviors, and so must be considered in order to identify and interpret muscle synergies. Here, I integrated techniques from musculoskeletal modeling, control systems engineering, and data analysis to identify neural and biomechanical constraints that determine the muscle activity and ground reaction forces during the automatic postural response (APR) in cats. First, I quantified the musculoskeletal constraints on force production during postural tasks in a detailed, 3D musculoskeletal model of the cat hindlimb. I demonstrated that biomechanical constraints on force production in the isolated hindlimb do not uniquely determine the characteristic patterns of force activity observed during the APR. However, when I constrained the muscles in the model to activate in a few muscle synergies based on experimental data, the force production capability drastically changed, exhibiting a characteristic rotation with the limb axis as the limb posture was varied that closely matched experimental data. Finally, after extending the musculoskeletal model to be quadrupedal, I simulated the optimal feedforward control of individual muscles or muscle synergies to regulate the center of mass (CoM) during the postural task. I demonstrated that both muscle synergy control and optimal muscle control reproduced the characteristic force patterns observed during postural tasks. These results are consistent with the hypothesis that the nervous system may use a low-dimension control scheme based on muscle synergies to approximate the optimal motor solution for the postural task given the constraints of the musculoskeletal system. One primary contribution of this work was to demonstrate that the influences of biomechanical mechanisms in determining motor behaviors may be unclear in reduced models, a factor that may need to be considered in other studies of motor control. The biomechanical constraints on force production in the isolated hindlimb did not predict the stereotypical forces observed during the APR unless a muscle synergy organization was imposed, suggesting that neural constraints were critical in resolving musculoskeletal redundancy during the postural task. However, when the model was extended to represent the quadrupedal system in the context of the task, the optimal control of the musculoskeletal system predicted experimental force patterns in the absence of neural constraints. A second primary contribution of this work was to test predictions concerning muscle synergies developed in theoretical neuromechanical models in the context of a natural behavior, suggesting that these concepts may be generally useful for understanding motor control. 
It has previously been shown in abstract neuromechanical models that low-dimension motor solutions such as muscle synergies can emerge from the optimal control of individual muscles. This work demonstrates for the first time that low-dimension motor solutions can emerge from optimal muscle control in the context of a natural behavior and a realistic musculoskeletal model. This work also represents the first explicit comparison of muscle synergy control and optimal muscle control during a natural behavior. It demonstrates that an explicit low-dimension control scheme based on muscle synergies is competent for performance of the postural task across biomechanical conditions, and in fact, may approximate the motor solution predicted by optimal muscle control. This work advances our understanding of how the constraints and features of the nervous and musculoskeletal systems interact to produce motor behaviors. In the future, this understanding may inform improved clinical interventions, prosthetic applications, and the general design of distributed, hierarchical systems.
APA, Harvard, Vancouver, ISO, and other styles
15

Samek-Ludovici, Vieri. "Optimality theory and the minimalist program." Universität Potsdam, 2006. http://opus.kobv.de/ubp/volltexte/2009/3232/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Causley, Trisha Kathleen. "Complexity and markedness in optimality theory." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape9/PQDD_0004/NQ41121.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Thuijsman, Frank. "Optimality and equilibria in stochastic games." Maastricht : Maastricht : Rijksuniversiteit Limburg ; University Library, Maastricht University [Host], 1989. http://arno.unimaas.nl/show.cgi?fid=5476.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Callies, Leonie. "Optimality of uncertainty principles for joint time-frequency representations." Thesis, Linnéuniversitetet, Institutionen för matematik (MA), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-35399.

Full text
Abstract:
The study of joint time-frequency representations is a large field of mathematics and physics, especially signal analysis. Based on Heisenberg's classical uncertainty principle, various inequalities for such time-frequency distributions have been studied. The objective of this thesis is to examine the role that Gaussian functions, including those with a chirp contribution, play in inequalities for the Short-Time Fourier transform and the Wigner distribution. We show that Gröchenig's uncertainty principles for the Short-Time Fourier transform are not optimal with regard to these functions. As for the Wigner distribution, we show how an existing uncertainty principle by Janssen can be modified to reach optimality for chirp Gaussians.
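For reference, the two time-frequency representations named above are usually defined as follows (standard conventions; the thesis may normalise differently):

% standard definitions, up to normalisation conventions
\[
V_g f(x,\omega) \;=\; \int_{\mathbb{R}} f(t)\,\overline{g(t-x)}\,e^{-2\pi i t\omega}\,dt,
\qquad
W f(x,\omega) \;=\; \int_{\mathbb{R}} f\!\Big(x+\tfrac{t}{2}\Big)\,\overline{f\!\Big(x-\tfrac{t}{2}\Big)}\,e^{-2\pi i t\omega}\,dt,
\]

where V_g f is the Short-Time Fourier transform of f with window g and W f is the Wigner distribution of f; the Gaussians and chirped Gaussians discussed in the abstract are the candidate extremisers in the associated uncertainty inequalities.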
APA, Harvard, Vancouver, ISO, and other styles
19

Arechavaleta-Servin, Gustavo. "An optimality principle governing human walking." PhD thesis, INSA de Toulouse, 2007. http://tel.archives-ouvertes.fr/tel-00260990.

Full text
Abstract:
The objective of this work is to study human locomotion. Our approach highlights the relationship between the geometric shape of locomotor trajectories and the simplified kinematic model of a wheeled mobile robot. This kind of system has long been studied in robotics. From a purely kinematic point of view, the distinctive feature of a wheeled robot is the nonholonomic constraint that forces it to always move along the tangent to its main axis. In the case of human walking, observations show that humans walk forward and that the instantaneous direction of the body is tangent to the trajectory they perform (owing to certain mechanical and anatomical restrictions of the body during walking). This coupling between the direction and the position of the body imposes a nonholonomic constraint because it does not restrict the dimension of the space accessible from an arbitrary configuration. From the driver's point of view, a car has two controls: the accelerator and the steering wheel. The first question addressed here can be formulated as follows: where is the "steering wheel" of the human body located? Several frames were attached to different parts of the skeleton (head, trunk and pelvis). In our experimental study we show that there exists a frame that accounts for the nonholonomic nature of human locomotion, and that it is the trunk that plays the role of the "steering wheel". We validated our model with a database of 1,560 trajectories recorded from 7 subjects. The second question addressed in this work is the following: among all the possible trajectories that reach a given position and orientation, why does a human perform one trajectory rather than another? To offer a possible answer to this question, we turned to optimal control: the trajectories are chosen according to a criterion to be optimized. In this perspective the subject is viewed as a control system, so the question becomes: what is the criterion being optimized? Is it the length of the trajectory? The travel time? The minimal jerk? In this study we show that locomotor trajectories can be approximated by the geodesics of a differential system minimizing the norm of the control. These geodesics are made of arcs of clothoids. A clothoid, or Cornu spiral, is a curve whose curvature varies linearly with arc length. We show that 90% of the trajectories performed by the 7 subjects are approximated with an average error of less than 10 cm. In the last part of this work we carry out the numerical synthesis of optimal trajectories in the reachable space. This consists in partitioning the configuration space according to the different types of optimal trajectories that can connect the origin to a point in that space. Two points belong to the same cell if the trajectories travelled are of the same type. In most cases the transition between two adjacent cells occurs through a continuous deformation of the trajectories. It is remarkable that the rare discontinuities of the proposed model correspond precisely to the changes of strategy observed in the subjects.
APA, Harvard, Vancouver, ISO, and other styles
20

Arechavaleta, Servin Gustavo. "An optimality principle governing human walking." Toulouse, INSA, 2007. http://eprint.insa-toulouse.fr/archive/00000193/.

Full text
Abstract:
This work seeks to analyze human walking at the trajectory planning level from an optimal control perspective. Our approach emphasizes the close relationship between the geometric shape of human locomotion in goal-directed movements and the simplified kinematic model of a wheeled mobile robot. This kind of system has been extensively studied in the robotics community. From a kinematic perspective, the characteristic of this wheeled robot is the nonholonomic constraint of the wheels on the floor, which forces the vehicle to move tangentially to its main axis. In the case of human walking, observation indicates that the direction of the motion is given by the direction of the body (due to anatomical and mechanical constraints of the body). This coupling between the direction θ and the position (x, y) of the body can be summarized by tan θ = ẏ/ẋ. It is known that this differential equation defines a non-integrable 2-dimensional distribution on the 3-dimensional manifold R² × S¹ gathering all the configurations (x, y, θ). The controls of a vehicle are usually the linear velocity (via the accelerator and the brake) and the angular velocity (via the steering wheel). The first question addressed in this study can be roughly formulated as: where is the "steering wheel" of the human body located? It appears that the torso can be considered as a kind of steering wheel that steers the human body. This model has been validated on a database of 1,560 trajectories recorded from seven subjects. In the second part we address the following question: among all possible trajectories reaching a given position and direction, the subject has chosen one. Why? The central idea for understanding the shape of trajectories has been to relate this problem to an optimal control scheme: the trajectory is chosen according to some optimization principle. The subject being viewed as a controlled system, we tried to identify several criteria that could be optimized. Is it the time to perform the trajectory? The length of the path? The minimum jerk along the path? We argue that the time derivative of the curvature of the locomotor trajectories is minimized. We show that human locomotor trajectories are well approximated by the geodesics of a differential system minimizing the L2 norm of the control. Such geodesics are made of arcs of clothoids. The clothoid is a curve whose curvature grows linearly with arc length. The accuracy of the model is supported by the fact that 90 percent of trajectories are approximated with an average error of less than 10 cm. In the last part of this work we provide the partition of the 3-dimensional configuration space into cells: two points belong to the same cell if and only if they are reachable from the origin by a path of the same type. Such a decomposition is known as the synthesis of the optimal control problem. Most of the time, when the target changes slightly the optimal trajectories change slightly. However, singularities appear at certain critical frontiers between cells. It is noticeable that they correspond to the strategy changes observed in the walking subjects. This fundamental result is another proof of the locomotion model we have proposed.
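A compact way to write the model sketched in this abstract (an illustrative sketch with generic notation, not the thesis's exact control system): the nonholonomic constraint tan θ = ẏ/ẋ corresponds to unicycle-like kinematics,

% illustrative notation; the thesis gives the precise control system
\[
\dot{x} = v\cos\theta, \qquad \dot{y} = v\sin\theta, \qquad \dot{\theta} = \omega,
\]

and, heuristically, choosing the curvature profile κ(s) that minimises the squared rate of change of curvature along arc length gives

\[
\min \int \Big(\frac{d\kappa}{ds}\Big)^{2} ds
\;\Longrightarrow\; \frac{d^{2}\kappa}{ds^{2}} = 0
\;\Longrightarrow\; \kappa(s) = \kappa_{0} + c\,s,
\]

i.e. curvature linear in arc length, which is exactly an arc of a clothoid (Cornu spiral).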
APA, Harvard, Vancouver, ISO, and other styles
21

Stallings, Jonathan W. "General Weighted Optimality of Designed Experiments." Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/56949.

Full text
Abstract:
Design problems involve finding optimal plans that minimize cost and maximize information about the effects of changing experimental variables on some response. Information is typically measured through statistically meaningful functions, or criteria, of a design's corresponding information matrix. The most common criteria implicitly assume equal interest in all effects and certain forms of information matrices tend to optimize them. However, these criteria can be poor assessments of a design when there is unequal interest in the experimental effects. Morgan and Wang (2010) addressed this potential pitfall by developing a concise weighting system based on quadratic forms of a diagonal matrix W that allows a researcher to specify relative importance of information for any effects. They were then able to generate a broad class of weighted optimality criteria that evaluate a design's ability to maximize the weighted information, ultimately targeting those designs that efficiently estimate effects assigned larger weight. This dissertation considers a much broader class of potential weighting systems, and hence weighted criteria, by allowing W to be any symmetric, positive definite matrix. Assuming the response and experimental effects may be expressed as a general linear model, we provide a survey of the standard approach to optimal designs based on real-valued, convex functions of information matrices. Motivated by this approach, we introduce fundamental definitions and preliminary results underlying the theory of general weighted optimality. A class of weight matrices is established that allows an experimenter to directly assign weights to a set of estimable functions and we show how optimality of transformed models may be placed under a weighted optimality context. Straightforward modifications to SAS PROC OPTEX are shown to provide an algorithmic search procedure for weighted optimal designs, including A-optimal incomplete block designs. Finally, a general theory is given for design optimization when only a subset of all estimable functions is assumed to be in the model. We use this to develop a weighted criterion to search for A-optimal completely randomized designs for baseline factorial effects assuming all high-order interactions are negligible.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
22

Lauria, Christopher Sacha Aristide <1990>. "On Optimality of Score Driven Models." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amsdottorato.unibo.it/9627/1/Christopher_Lauria_tesi.pdf.

Full text
Abstract:
The contribution of this thesis consists in proving that score driven models possess a novel, intuitive, high dimensional and global optimality criterion, called Conditional Expected Variation optimality, which formalizes the following words from Creal et al. (2013): "The use of the score is intuitive. It defines a steepest ascent direction for improving the model's local fit in terms of the likelihood or density at time t given the current position of the parameter. This provides the natural direction for updating the parameter." Indeed, the fact that the score defines a steepest ascent direction is crucial in deriving the results and for the proposed optimality criterion to hold. To prove the aforementioned property, a point of contact between the econometric literature and the time varying optimization literature will be established. As a matter of fact, the Conditional Expected Variation optimality can be naturally viewed as a generalization of the monotonicity property of the gradient descent scheme. A number of implications on the specification of score driven models are analyzed and discussed, even in the case of model misspecification.
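For reference, the canonical score-driven (GAS) recursion of Creal, Koopman and Lucas (2013), to which the abstract refers, has the standard form

% standard GAS update; included for orientation, not the thesis's own derivation
\[
f_{t+1} \;=\; \omega + A\, s_t + B\, f_t,
\qquad
s_t \;=\; S_t \,\nabla_{f_t} \log p\big(y_t \mid f_t, \mathcal{F}_{t-1}\big),
\]

where f_t is the time-varying parameter and S_t is a scaling matrix (often a power of the inverse Fisher information). The score s_t being a steepest-ascent direction for the local log-likelihood is the property the thesis builds its Conditional Expected Variation optimality on.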
APA, Harvard, Vancouver, ISO, and other styles
23

Wolf, Viktor [Verfasser], and Ludger [Akademischer Betreuer] Rüschendorf. "Comparison of Markovian price processes and optimality of payoffs = Vergleiche von Markovschen Preisprozessen und Optimalität von Auszahlungen." Freiburg : Universität, 2014. http://d-nb.info/1123480877/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Zhou, Xiaojie. "Characterizations of optimality in multi-objective programming." Thesis, McGill University, 1992. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=61040.

Full text
Abstract:
This thesis contains several contributions to the theory of optimality conditions in single- and multi-objective optimization. The main result provides an answer to the following, apparently open, question in mathematical welfare economics: Given a feasible decision, find a saddle-point condition which is both necessary and sufficient that the decision is Pareto optimal for convex objectives and convex constraints. This result is then extended to convex multi-objective parametric optimization and to a large class of nonconvex multi-objective programs.
APA, Harvard, Vancouver, ISO, and other styles
25

Collie, Sarah. "English stress preservation and Stratal Optimality Theory." Thesis, University of Edinburgh, 2008. http://hdl.handle.net/1842/2590.

Full text
Abstract:
Since Chomsky & Halle (1968), English stress preservation – oríginal -> orìginálity, óbvious -> óbviousness – has been important in generative discussions of morphophonological interaction. This thesis carries out empirical investigations into English stress preservation, and uses their results to argue for a particular version of Optimality Theory: Stratal Optimality Theory (‘Stratal OT’) (Kiparsky, 1998a, 2000, 2003a; Bermúdez-Otero, 1999, 2003, in preparation). In particular, the version of Stratal OT proposed in Bermúdez-Otero (in preparation) and Bermúdez-Otero and McMahon (2006) is supported. The empirical investigations focus upon the type of preservation where preserved stress is subordinated in the preserving word (‘weak preservation’): e.g. oríginal -> orìginálity; àntícipate -> antìcipátion. Evidence for the existence of weak preservation is presented. However, it is also shown that weak preservation is not consistently successful, but that it is, rather, probabilistically dependent upon word frequency. This result is expected in light of work like Hay (2003), where it is proposed that word frequency affects the strength of relationships between words: stress preservation is an indicator of such a relationship. Stratal OT can handle the existence of English stress preservation: by incorporating the cyclic interaction between morphological and phonological modules proposed in Lexical Phonology and Morphology (‘LPM’), Stratal OT has the intrinsic serialism which is necessary to predict a phenomenon like English stress preservation. It is shown that the same cannot be said for those of models of OT which attempt to handle preservation while avoiding such serialism, notably, Benua (1997). Bermúdez-Otero’s (in preparation) proposal of ‘fake cyclicity’ for the first stratum in Stratal OT can capture weak preservation’s probabilistic dependence upon word frequency. Fake cyclicity rejects the cycle which has previously been proposed to handle weak stress preservation, in LPM and elsewhere; instead, fake cyclicity proposes that weak preservation is a result of blocking among stored lexical entries. Blocking is independently established as a psycholinguistic phenomenon that is probabilistically dependent upon word frequency; in contrast, the cycle is not a probabilistic mechanism, and so can only handle instances of stress preservation failure by stipulation.
APA, Harvard, Vancouver, ISO, and other styles
26

Al, Balushi Ibrahim. "Instance optimality in infinite-dimensional compressed sensing." Thesis, McGill University, 2014. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=121443.

Full text
Abstract:
This thesis provides a thorough literature review of the newly founded theory of compressed sensing (CS), developed by Candès and his collaborators. The majority of the documented developments remain in the treatment of perfectly sparse signals in the finite-dimensional setting. This was extended to the treatment of nearly sparse (compressible) signals in infinite dimensions by Adcock and Hansen. A novel approach to analyzing the performance of CS, in the finite-dimensional setting, was developed by Cohen, Dahmen and DeVore, where they study the effectiveness of CS. This is carried out by comparing it to the well-established theory of best k-term approximation, i.e. in terms of how well CS recovers non-sparse vectors which can be well approximated by sparse vectors. The contribution of this thesis extends DeVore and his collaborators' instance optimality results for CS to infinite dimensions by following a similar construction to that carried out by Adcock and Hansen. This is done by appealing to the truncation techniques devised by Adcock and Hansen in their development of the generalized sampling theory, and by appealing to an intermediate result established by Candès and Plan regarding the restricted isometry property (RIP).
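Two standard definitions behind this abstract, stated in their usual finite-dimensional form for orientation (the thesis works with infinite-dimensional analogues): a measurement matrix A with decoder Δ is instance optimal of order k if recovery is as good, up to a constant, as the best k-term approximation,

% Cohen-Dahmen-DeVore instance optimality (standard finite-dimensional form)
\[
\| x - \Delta(Ax) \| \;\le\; C\, \sigma_k(x)
\quad \text{for all } x,
\qquad
\sigma_k(x) \;=\; \inf_{\#\operatorname{supp}(z) \le k} \| x - z \|,
\]

% restricted isometry property of order k with constant \delta_k
\[
(1-\delta_k)\,\|x\|_2^{2} \;\le\; \|Ax\|_2^{2} \;\le\; (1+\delta_k)\,\|x\|_2^{2}
\quad \text{for all $k$-sparse } x .
\]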
APA, Harvard, Vancouver, ISO, and other styles
27

Rung-ruang, Apichai. "English loanwords in Thai and optimality theory." Virtual Press, 2007. http://liblink.bsu.edu/uhtbin/catkey/1389690.

Full text
Abstract:
This study focuses on English loanwords in Thai, particularly the treatment of consonants in different environments, namely onset/coda simplification, laryngeal features, medial consonants, and liquid alternation, within the framework of Optimality Theory (OT: Prince and Smolensky 1993/2004). The major objectives are: (1) to examine the way English loanwords are adapted to a new environment, (2) to investigate how conflict between faithfulness and markedness constraints is resolved and in what ways through OT grammars, and (3) finally to be a contribution to the literature of loan phonology in OT since there has not been much literature on English loanwords in Thai within the recent theoretical framework of Optimality TheoryThe data are drawn from an English-Thai dictionary (Sethaputa 1995), an on-line English-Thai dictionary, an English loanword dictionary (Komutthamwiboon 2003), and earlier studies of English loans in Thai by Udomwong (1981), Nacaskul (1989), Raksaphet (2000), and Kenstowicz and Atiwong (2004).The study has found that Thais replace unlicensed consonants with either auditory similar segments or shared natural class segments, as in /v/ in the English and [w] in word borrowing due to auditory similarity, /g/ in the English source replaced by [k] because of shared place of articulation. Vowel insertion is found if the English source begins with /sC/ as in /skaen/ scan -> [stkc cn]. Since Thai allows consonant clusters, a second segment of the clusters is always retained if it fits the Thai phonotactics, as in /gruup/ `group' -4 [kruip]. In coda, consonant clusters must be simplified. Consonant clusters in the English source are divided into five main subgroups. Sometimes Thais retain a segment adjacent to a vowel and delete the edge, as in /lcnzi lens -4 [len].However, a postvocalic lateral [1] followed by a segment are replaced by either a nasal [n] or a glide [w]. In terms of repair strategies, the lowest ranked faithfulness constraints indicate what motivates Thais to have consonant adaptation. MAX-I0, DEP-I0, IDENT-I0 (place) reveal that segmental deletion, insertion, and replacement on the place of articulation are employed to deal with marked structures, respectively. The two lines of approaches (Positional Faithfulness, Positional Markedness) have been examined with respect to segments bearing aspiration or voicing. The findings have shown that both approaches can be employed to achieve the same result. In medial consonants, ambisyllabic consonants in the English source undergo syllable adaptation and behave like geminates in word borrowings in Thai. Most cases show that ambisyllabic/geminate consonants in loanwords are unaspirated. A few cases are aspirated.The study has revealed that there is still more room for improvement in 0T. The standard OT allowing only a single output in the surface form is challenged. Some English loanwords have multiple outputs. For instance, /aesfoolt/ `asphalt' can be pronounced either [26tf6n] or [26tf6w]. Another example is the word /k h riim / `cream' can be pronounced as [k h riim], [khliim], and [khiim]. To account for these phenomena requires a sociolinguistic explanation.
Department of English
APA, Harvard, Vancouver, ISO, and other styles
28

Ghaus, Aisha. "Local government finances : efficiency, equity and optimality." Thesis, University of Leeds, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.272004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Tawn, Nicholas. "Towards optimality of the parallel tempering algorithm." Thesis, University of Warwick, 2017. http://wrap.warwick.ac.uk/99796/.

Full text
Abstract:
Markov Chain Monte Carlo (MCMC) techniques for sampling from complex probability distributions have become mainstream. Big data and high model complexity demand more scalable and robust algorithms. A famous problem with MCMC is making it robust to situations when the target distribution is multi-modal. In such cases the algorithm can become trapped in a subset of the state space and fail to escape during the entirety of the run of the algorithm. This non-exploration of the state space results in highly biased sample output. Simulated (ST) and Parallel (PT) Tempering algorithms are typically used to address multi-modality problems. These methods flatten out the target distribution using a temperature schedule. This allows the Markov chain to move freely around the state space and explore all regions of significant mass. This thesis explores two new ideas to improve the scalability of the PT algorithm. These are implemented in prototype algorithms, QuanTA and HAT, which are accompanied by supportive theoretical optimal scaling results. QuanTA focuses on improving transfer speed of the hot state mixing information to the target cold state. The associated scaling result for QuanTA shows that under mild conditions the QuanTA approach admits a higher order temperature spacing than the PT algorithm. HAT focuses on preserving modal weight through the temperature schedule. This is an issue that can lead to critically poor performance of the PT approach. The associated optimal scaling result is useful from a practical perspective. The result also challenges the notion that without modal weight preservation tempering schedules can be selected based on swap acceptance rates; an idea repeatedly used in the current literature. The new algorithms are prototype designs and have clear limitations. However, the impressive empirical performance of these new algorithms, together with supportive theory, illustrate their substantial improvement over existing methodology.
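To make the tempering mechanism concrete, here is a minimal Python sketch of parallel tempering on a deliberately bimodal one-dimensional target; the target, temperature ladder and step sizes are invented for illustration and are not taken from the thesis (which studies the QuanTA and HAT refinements of this basic scheme).

# Minimal parallel tempering sketch on a bimodal 1-D target (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Unnormalised log density: two well-separated Gaussian modes at -5 and +5.
    return np.logaddexp(-0.5 * (x + 5.0) ** 2, -0.5 * (x - 5.0) ** 2)

betas = np.array([1.0, 0.3, 0.1, 0.03])   # inverse temperatures; beta = 1 is the cold (target) chain
x = np.zeros(len(betas))                  # one state per tempered chain
cold_samples = []

for it in range(20000):
    # 1) Random-walk Metropolis move within each tempered chain (target raised to beta).
    for i, beta in enumerate(betas):
        prop = x[i] + rng.normal(scale=1.0 / np.sqrt(beta))
        if np.log(rng.uniform()) < beta * (log_target(prop) - log_target(x[i])):
            x[i] = prop
    # 2) Propose swapping the states of a random adjacent pair of temperatures.
    i = rng.integers(len(betas) - 1)
    log_alpha = (betas[i] - betas[i + 1]) * (log_target(x[i + 1]) - log_target(x[i]))
    if np.log(rng.uniform()) < log_alpha:
        x[i], x[i + 1] = x[i + 1], x[i]
    cold_samples.append(x[0])

# With the swaps, the cold chain visits both modes; a single-temperature random walk
# with the same step size would typically stay trapped in one of them.
print(np.mean(np.array(cold_samples) > 0))  # roughly 0.5 if both modes are explored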
APA, Harvard, Vancouver, ISO, and other styles
30

Deckelbaum, Alan. "The structure of auctions : optimality and efficiency." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/90182.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 183-187).
The problem of constructing auctions to maximize expected revenue is central to mechanism design and to algorithmic game theory. While the special case of selling a single item has been well understood since the work of Myerson, progress on the multi-item case has been sporadic over the past three decades. In the first part of this thesis we develop a mathematical framework for finding and characterizing optimal single-bidder multi-item mechanisms by establishing that revenue maximization has a tight dual minimization problem. This approach reduces mechanism design to a measure-theoretic question involving transport maps and stochastic dominance relations. As an important application, we prove that a grand bundling mechanism is optimal if and only if two particular measure-theoretic inequalities are satisfied. We also provide several new examples of optimal mechanisms and we prove that the optimal mechanism design problem in general is computationally intractable, even in the most basic multi-item setting, unless ZPP contains P^#P. Another key problem in mechanism design is how to efficiently allocate a collection of goods amongst multiple bidders. In the second part of the thesis, we study the problem of welfare maximization in the presence of unrestricted rational collusion. We generalize the notion of dominant-strategy mechanisms to collusive contexts, construct a highly practical such mechanism for multi-unit auctions, and prove that no such mechanism (practical or not) exists for unrestricted combinatorial auctions. Our results explore the power and limitations of enlarging strategy spaces to incentivize agents to reveal information about their collusive behavior.
by Alan Deckelbaum.
Ph. D.
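For the single-item case that the abstract contrasts with the multi-item problem, Myerson's classical characterization can be stated in one line (a standard result included for orientation; the notation is not the thesis's): with the bidder's value v drawn from a distribution with cdf F and density f, expected revenue is maximised by allocating according to the (ironed) virtual value

% Myerson (1981) virtual value; standard background, not the thesis's construction
\[
\varphi(v) \;=\; v - \frac{1 - F(v)}{f(v)},
\]

so that, for a single bidder and a regular distribution (φ nondecreasing), the optimal mechanism is simply a posted price p* = inf{ v : φ(v) ≥ 0 }. No comparably clean characterization is known for multiple items, which is the gap the duality framework described above addresses.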
APA, Harvard, Vancouver, ISO, and other styles
31

Nagengast, Arne Johannes. "Uncertainty and optimality in human motor control." Thesis, University of Cambridge, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.608786.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Neuwirth, Bernard. "Problematika hodnocení optimality a vyváženosti podnikových IS." Doctoral thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2009. http://www.nusl.cz/ntk/nusl-233719.

Full text
Abstract:
This doctoral thesis deals with the evaluation of the balance and optimality of corporate information systems. The motivation for this focus is the growing importance that companies attach to how their information systems are perceived. More and more resources are invested in information systems, yet it is not always verified afterwards that the resulting system is one that can be characterised as balanced and optimal for the company, both today and in the future. This is often because the company has no readily available and easily applicable method for evaluating the system. As one of the main starting points of this thesis I have chosen the HOS8 method, published five years ago at our faculty. The newly proposed HOS2009 method tries to remedy the weak points of the original HOS8 method that were discovered during its practical use, drawing mainly on feedback from those who have applied it. Within the scope of this thesis, the factors influencing the level of the individual areas of the system, and the influence of these areas on its overall balance, are examined. With regard to evaluating the balance and optimality of an information system, the thesis also examines how to determine a balanced and optimal state of the information system for a company, both now and in the future. As part of the method's output, the thesis presents charts representing the overall state of the system, the imbalance of individual parts of the IS, and the relationship between the hardware and software areas. Based on the evaluation of the current state and its comparison with the balanced, optimal state for the present and the future, possible new directions and strategies for further development of the company's IS are proposed. I see the best use of the HOS2009 method in supporting managerial decisions concerned with discovering potential problems within the company's IS, designing a possible course of development to resolve them, and using the method as a simple control mechanism.
APA, Harvard, Vancouver, ISO, and other styles
33

Chau, Ho Fai. "Mandarin loanword phonology : an optimality theory approach." HKBU Institutional Repository, 2001. http://repository.hkbu.edu.hk/etd_ra/319.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Gruber, Gottfried. "Multichannel management: a normative model towards optimality." Frankfurt, M. Berlin Bern Bruxelles New York, NY Oxford Wien Lang, 2009. http://d-nb.info/997250909/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Hoshi, Hidehito. "On Multiple Sympathy Candidates in Optimality Theory." Department of Linguistics, University of Arizona (Tucson, AZ), 1998. http://hdl.handle.net/10150/227250.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Heiberg, Andrea Jeanine. "Features in optimality theory: A computational model." Diss., The University of Arizona, 1999. http://hdl.handle.net/10150/288983.

Full text
Abstract:
This dissertation presents a computational model of Optimality Theory (OT) (Prince and Smolensky 1993). The model provides an efficient solution to the problem of candidate generation and evaluation, and is demonstrated for the realm of phonological features. Explicit object-oriented implementations are proposed for autosegmental representations (Goldsmith 1976 and many others) and violable OT constraints and Gen operations on autosegmental representations. Previous computational models of OT (Ellison 1995, Tesar 1995, Eisner 1997, Hammond 1997, Karttunen 1998) have not dealt in depth with autosegmental representations. The proposed model provides a full treatment of autosegmental representations and constraints on autosegmental representations (Akinlabi 1996, Archangeli and Pulleyblank 1994, Ito, Mester, and Padgett 1995, Kirchner 1993, Padgett 1995, Pulleyblank 1993, 1996, 1998). Implementing Gen, the candidate generation component of OT, is a seemingly intractable problem. Gen in principle performs unlimited insertion; therefore, it may produce an infinite candidate set. For autosegmental representations, however, it is not necessary to think of Gen as infinite. The Obligatory Contour Principle (Leben 1973, McCarthy 1979, 1986) restricts the number of tokens of any one feature type in a single representation; hence, Gen for autosegmental features is finite. However, a finite Gen may produce a candidate set of exponential size. Consider an input representation with four anchors for each of five features: there are (2⁴ + 1)⁵, more than one million, candidates for such an input. The proposed model implements a method for significantly reducing the exponential size of the candidate set. Instead of first creating all candidates (Gen) and then evaluating them against the constraint hierarchy (Eval), candidate creation and evaluation are interleaved (cf. Eisner 1997, Hammond 1997) in a Gen-Eval loop. At each pass through the Gen-Eval loop, Gen operations apply to create the minimal number of candidates needed for constraint evaluation; this candidate set is evaluated and culled, and the set of Gen operations is reduced. The loop continues until the hierarchy is exhausted; the remaining candidate(s) are optimal. In providing explicit implementations of autosegmental representations, constraints, and Gen operations, the model provides a coherent view of autosegmental theory, Optimality Theory, and the interaction between the two.
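As a sanity check on the count quoted above, (2⁴ + 1)⁵ = 17⁵ = 1,419,857 candidates. The culling step at the heart of the Gen-Eval loop, evaluating the highest-ranked constraint, keeping only the candidates with the fewest violations, then moving down the hierarchy, can be sketched in a few lines of Python (illustrative only, with invented toy constraints; this is not the dissertation's implementation):

# Sketch of OT evaluation by successive culling over a ranked constraint hierarchy.
# Each constraint maps a candidate to its number of violation marks.
from typing import Callable, Iterable, List

Candidate = str
Constraint = Callable[[Candidate], int]

def evaluate(candidates: Iterable[Candidate], hierarchy: List[Constraint]) -> List[Candidate]:
    """Return the optimal candidate(s) under a ranked constraint hierarchy."""
    survivors = list(candidates)
    for constraint in hierarchy:
        best = min(constraint(c) for c in survivors)
        survivors = [c for c in survivors if constraint(c) == best]
        if len(survivors) == 1:          # the hierarchy need not be exhausted
            break
    return survivors

# Toy example (hypothetical constraints, just to show the mechanics, for input /pat/):
no_coda: Constraint = lambda c: sum(1 for syl in c.split(".") if not syl.endswith(("a", "e", "i", "o", "u")))
dep_io: Constraint = lambda c: max(0, len(c.replace(".", "")) - len("pat"))   # penalise inserted segments
max_io: Constraint = lambda c: max(0, len("pat") - len(c.replace(".", "")))   # penalise deleted segments

print(evaluate(["pat", "pa.ta", "pa"], [max_io, no_coda, dep_io]))   # -> ['pa.ta']

Interleaving generation with this culling, as the dissertation proposes, means the full million-plus candidate set never has to be built explicitly.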
APA, Harvard, Vancouver, ISO, and other styles
37

Sasa, Tomomasa. "Treatment of vowel harmony in optimality theory." Diss., University of Iowa, 2009. https://ir.uiowa.edu/etd/318.

Full text
Abstract:
From the early stage of Optimality Theory (OT) (Prince, Alan and Paul Smolensky (1993): Optimality Theory: Constraint Interaction in Generative Grammar. [ROA: 537-0802: http://roa.rutgers.edu], McCarthy, John J. and Alan Prince (1995). Faithfulness and reduplicative identity. In Jill Beckman, Laura W. Dickey and Suzanne Urbanczyk (eds.) Papers in Optimality Theory. Amherst, MA: GLSA. 249-384), a number of analyses have been proposed to account for vowel harmony in the OT framework. However, because of the diversity of the patterns attested cross-linguistically, no consensus has been reached with regard to the OT treatment of vowel harmony. This, in turn, raises the question whether OT is a viable phonological theory to account for vowel harmony; if a theory is viable, a uniform account of the diverse patterns of vowel harmony should be possible. The main purpose of this thesis is to discuss the application of five different OT approaches to vowel harmony, and to investigate which approach offers the most comprehensive coverage of the diverse vowel harmony patterns. Three approaches are the main focus: feature linking with SPREAD (Padgett, Jaye (2002). Feature classes in phonology. Language 78. 81-110), Agreement-By-Correspondence (ABC) (Walker, Rachel (2009). Similarity-sensitive blocking and transparency in Menominee. Paper presented at the 83rd Annual Meeting of the Linguistic Society of America. San Francisco), and the Span Theory of harmony (McCarthy, John J. (2004). Headed spans and autosegmental spreading. [ROA: 685-0904: http://roa.rutgers.edu]). The applications of these approaches in the following languages are considered: backness and roundness harmony in Turkish and in Yakut (Turkic), and ATR harmony in Pulaar (Niger-Congo). It is demonstrated that both feature linking and ABC analyses are successful in offering a uniform account of the different types of harmony processes observed in these three languages. However, Span Theory turns out to be empirically inadequate when used in the analysis of Pulaar harmony. These results lead to the conclusion that there are two approaches within OT that can offer a uniform account of the vowel harmony processes. This also suggests that OT is viable as a phonological theory.
APA, Harvard, Vancouver, ISO, and other styles
38

Kocillari, Loren. "Variational principles and optimality in biological systems." Doctoral thesis, Università degli studi di Padova, 2018. http://hdl.handle.net/11577/3425402.

Full text
Abstract:
The aim of this thesis is to investigate the signatures of evolutionary optimization in biological systems, such as in proteins, human behaviours and transport tissues in vascular plants (xylems), by means of the Pareto optimality analysis and the calculus of variations. In the first part of this thesis, we address multi-objective optimization problems with tradeoffs through the Pareto optimality analysis ([132], [69]), according to which the best tradeoff solutions correspond to the optimal species, enclosed within low-dimensional geometrical polytopes, defined as Pareto optimal fronts, in the space of physical traits, called morphospace. Chapter 3 is devoted to the Pareto optimality analysis in the Escherichia coli proteome by projecting proteins onto the space of solubility and hydrophobicity. In chapter 4 we analyze the HCP dataset of cognitive and behavioral scores in 1206 humans, in order to identify any signature of Pareto optimization in the space of the Delay Discounting Task (DDT), which measures the tendency for people to prefer smaller, immediate monetary rewards over larger, delayed rewards. The second part of this thesis is devoted to solving an optimization problem regarding xylems, which are the internal conduits in angiosperms that deliver water and other nutrients from roots to petioles in plants. Based on the optimization criterion of minimizing the energy dissipated in a fluid flow, we propose in chapter 5 a biophysical model with the goal of explaining the underlying physical mechanism that affects the structure of xylem conduits in vascular plants, which results in tapered xylem profiles [104, 105, 117, 164]. We address this optimization problem by formulating the model in the context of the calculus of variations. The results of these investigations, besides providing quantitative support to previous theories of natural selection, demonstrate how processes of optimization can be identified in different biological systems by applying statistical methods such as Pareto optimality analysis and the variational approach, showing the relevance of employing these approaches across various biological systems.
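The Pareto optimality analysis this abstract relies on amounts to extracting the non-dominated points in a low-dimensional trait space. A minimal sketch, using random data rather than the thesis's protein or behavioural datasets (two traits, both to be maximised):

```python
import numpy as np

rng = np.random.default_rng(0)
traits = rng.random((500, 2))  # 500 hypothetical individuals, two traits

def pareto_front(points):
    """Indices of points not dominated by any other point (maximisation)."""
    idx = []
    for i, p in enumerate(points):
        # q dominates p if q >= p in every trait and q > p in at least one
        dominated = np.any(np.all(points >= p, axis=1) & np.any(points > p, axis=1))
        if not dominated:
            idx.append(i)
    return np.array(idx)

front = pareto_front(traits)
print(f"{front.size} of {len(traits)} points lie on the Pareto front")
```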
APA, Harvard, Vancouver, ISO, and other styles
39

Loeza-Serrano, Sergio Ivan. "Optimal statistical design for variance components in multistage variability models." Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/optimal-statistical-design-for-variance-components-in-multistage-variability-models(d407bb0e-cbb0-4ef8-ab6d-80cf3e4327cb).html.

Full text
Abstract:
This thesis focuses on the construction of optimum designs for the estimation of the variance components in multistage variability models. Variance components are the model parameters that represent the different sources of variability that affect the response of a system. A general and highly detailed way to define the linear mixed effects model is proposed. The extension considers the explicit definition of all the elements needed to construct a model. One key aspect of this formulation is that the random part is stated as a functional that individually determines the form of the design matrices for each random regressor, which gives significant flexibility. Further, the model is strictly divided into the treatment structure and the variability structure. This allows separate definitions of each structure but using the single rationale of combining, with few restrictions, simple design arrangements called factor layouts. To provide flexibility for considering different models, methodology to find and select optimum designs for variance components is presented using MLE and REML estimators and an alternative method known as the dispersion-mean model. Different forms of information matrices for variance components were obtained. This was mainly done for the cases when the information matrix is a function of the ratios of variances. Closed-form expressions for balanced designs for random models with 3-stage variability structure, in crossed and nested layouts, were found. The nested case was obtained when the information matrix is a function of the variance components. A general expression for the information matrix for the ratios using REML is presented. An approach to using unbalanced models, which requires the use of general formulae, is discussed. Additionally, D-optimality and A-optimality criteria of design optimality are restated for the case of variance components, and a specific version of pseudo-Bayesian criteria is introduced. Algorithms to construct optimum designs for the variance components based on the aforementioned methodologies were defined. These algorithms have been implemented in the R language. The results are communicated using a simple, but highly informative, graphical approach not seen before in this context. The proposed plots convey enough details for the experimenter to make an informed decision about the design to use in practice. An industrial internship allowed some of the results herein to be put into practice, although no new research outcomes originated. Nonetheless, this is evidence of the potential for applications. Equally valuable is the experience of providing statistical advice and reporting conclusions to a non-statistical audience.
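The D- and A-optimality criteria mentioned in the abstract compare candidate designs through their information matrices. A minimal sketch with made-up 2x2 information matrices (not the thesis's REML-based matrices or R implementation):

```python
import numpy as np

def d_criterion(M):
    # D-optimality: maximise det(M), i.e. minimise the generalised variance
    return np.linalg.det(M)

def a_criterion(M):
    # A-optimality: minimise trace(M^-1), the average variance of the estimates
    return np.trace(np.linalg.inv(M))

# Hypothetical information matrices for two candidate designs
M1 = np.array([[4.0, 1.0], [1.0, 2.0]])
M2 = np.array([[3.0, 0.2], [0.2, 3.0]])
for name, M in [("design 1", M1), ("design 2", M2)]:
    print(name, "det =", round(d_criterion(M), 3),
          "trace(inv) =", round(a_criterion(M), 3))
```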
APA, Harvard, Vancouver, ISO, and other styles
40

Fanselow, Gisbert, Matthias Schlesewsky, Damir Cavar, and Reinhold Kliegl. "Optimal parsing: syntactic parsing preferences and optimality theory." Universität Potsdam, 1999. http://opus.kobv.de/ubp/volltexte/2011/5716/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Simjour, Narges. "A New Optimality Measure for Distance Dominating Sets." Thesis, University of Waterloo, 2006. http://hdl.handle.net/10012/2941.

Full text
Abstract:
We study the problem of finding the smallest power of an input graph that has k disjoint dominating sets, where the ith power of an input graph G is constructed by adding edges between pairs of vertices in G at distance i or less, and a subset of vertices in a graph G is a dominating set if and only if every vertex in G is adjacent to a vertex in this subset. The problem is a different view of the d-domatic number problem in which the goal is to find the maximum number of disjoint dominating sets in the dth power of the input graph. This problem is motivated by applications in multi-facility location and distributed networks. In the facility location framework, for instance, there are k types of services that all clients in different regions of a city should receive. A graph representing the map of regions in the city is given where the nodes of the graph represent regions and neighboring regions are connected by edges. The problem is how to establish facility servers in the city (each region can host at most one server) such that every client in the city can access a facility server in its region or in a region in the neighborhood. Since it may not be possible to find a facility location satisfying this condition, "a region in the neighborhood" required in the question is modified to "a region at the minimum possible distance d". In this thesis, we study the connection of the above-mentioned problem with similar problems including the domatic number problem and the d-domatic number problem. We show that the problem is NP-complete for any fixed k greater than two even when the input graph is restricted to split graphs, 2-connected graphs, or planar bipartite graphs of degree four. In addition, the problem is in P for bounded tree-width graphs, when considering k as a constant, and for strongly chordal graphs, for any k. Then, we provide a slightly simpler proof for a known upper bound for the problem. We also develop an exact (exponential) algorithm for the problem, running in time O(2.73^n). Moreover, we prove that the problem cannot be approximated within ratio smaller than 2 even for split graphs, 2-connected graphs, and planar bipartite graphs of degree four. We propose a greedy 3-approximation algorithm for the problem in the general case, and other approximation ratios for permutation graphs, distance-hereditary graphs, cocomparability graphs, dually chordal graphs, and chordal graphs. Finally, we list some directions for future work.
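The two basic ingredients of the problem, the ith power of a graph and the dominating-set test, are easy to sketch. The toy path graph and BFS-based construction below are illustrative only, not the thesis's algorithms:

```python
from collections import deque

def graph_power(adj, i):
    """adj: {vertex: set of neighbours}. Returns adjacency of the i-th power."""
    power = {}
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:                      # BFS truncated at depth i
            u = queue.popleft()
            if dist[u] == i:
                continue
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        power[src] = {v for v, d in dist.items() if 0 < d <= i}
    return power

def is_dominating(adj, subset):
    """Every vertex is in `subset` or adjacent to a member of it."""
    return all(v in subset or adj[v] & subset for v in adj)

# Path on 6 vertices: 0-1-2-3-4-5
path = {v: {v - 1, v + 1} & set(range(6)) for v in range(6)}
print(is_dominating(path, {1, 4}))                  # True on the path itself
print(is_dominating(graph_power(path, 2), {2}))     # False: vertex 5 is at distance 3
print(is_dominating(graph_power(path, 2), {1, 4}))  # True in the square of the path
```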
APA, Harvard, Vancouver, ISO, and other styles
42

Shekhar, Rohan Chandra. "Variable horizon model predictive control: robustness and optimality." Thesis, University of Cambridge, 2012. https://www.repository.cam.ac.uk/handle/1810/244210.

Full text
Abstract:
Variable Horizon Model Predictive Control (VH-MPC) is a form of predictive control that includes the horizon length as a decision variable in the constrained optimisation problem solved at each iteration. It has been recently applied to completion problems, where the system state is to be steered to a closed set in finite time. The behaviour of the system once completion has occurred is not considered part of the control problem. This thesis is concerned with three aspects of robustness and optimality in VH-MPC completion problems. In particular, the thesis investigates robustness to well defined but unpredictable changes in system and controller parameters, robustness to bounded disturbances in the presence of certain input parameterisations to reduce computational complexity, and optimal robustness to bounded disturbances using tightened constraints. In the context of linear time invariant systems, new theoretical contributions and algorithms are developed. Firstly, changing dynamics, constraints and control objectives are addressed by introducing the notion of feasible contingencies. A novel algorithm is proposed that introduces extra prediction variables to ensure that anticipated new control objectives are always feasible, under changed system parameters. In addition, a modified constraint tightening formulation is introduced to provide robust completion in the presence of bounded disturbances. Different contingency scenarios are presented and numerical simulations demonstrate the formulation’s efficacy. Next, complexity reduction is considered, using a form of input parameterisation known as move blocking. After introducing a new notation for move blocking, algorithms are presented for designing a move-blocked VH-MPC controller. Constraints are tightened in a novel way for robustness, whilst ensuring that guarantees of recursive feasibility and finite-time completion are preserved. Simulations are used to illustrate the effect of an example blocking scheme on computation time, closed-loop cost, control inputs and state trajectories. Attention is now turned towards mitigating the effect of constraint tightening policies on a VH-MPC controller’s region of attraction. An optimisation problem is formulated to maximise the volume of an inner approximation to the region of attraction, parameterised in terms of the tightening policy. Alternative heuristic approaches are also proposed to deal with high state dimensions. Numerical examples show that the new technique produces substantially improved regions of attraction in comparison to other proposed approaches, and greatly reduces the maximum required prediction horizon length for a given application. Finally, a case study is presented to illustrate the application of the new theory developed in this thesis to a non-trivial example system. A simplified nonlinear surface excavation machine and material model is developed for this purpose. The model is stabilised with an inner-loop controller, following which a VH-MPC controller for autonomous trajectory generation is designed using a discretised, linearised model of the stabilised system. Realistic simulated trajectories are obtained from applying the controller to the stabilised system and incorporating the ideas developed in this thesis. These ideas improve the applicability and computational tractability of VH-MPC, for both traditional applications as well as those that go beyond the realm of vehicle manœuvring.
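The defining feature of VH-MPC, treating the horizon length as a decision variable, can be illustrated on a toy scalar integrator where the inner fixed-horizon problem has an analytic solution. This sketch trades a per-step time penalty against quadratic control effort; it omits the constraint tightening, move blocking and disturbance handling that the thesis actually addresses:

```python
def vh_mpc_horizon(x0, u_max, gamma, n_max=50):
    """Pick the completion horizon N minimising gamma*N + sum(u_k**2)
    for x_{k+1} = x_k + u_k, |u_k| <= u_max, x_N = 0 (toy completion set)."""
    best = None
    for n in range(1, n_max + 1):
        u = -x0 / n                    # min-effort input: spread the correction evenly
        if abs(u) > u_max:             # this horizon is infeasible under the input bound
            continue
        cost = gamma * n + n * u ** 2  # time penalty plus quadratic effort
        if best is None or cost < best[1]:
            best = (n, cost)
    return best

print(vh_mpc_horizon(x0=10.0, u_max=2.0, gamma=1.0))
# cost(n) = n + 100/n is minimised at n = 10, which is feasible here, so the
# controller commits to a 10-step horizon with total cost 20.0.
```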
APA, Harvard, Vancouver, ISO, and other styles
43

Castro, Carlos. "Essays in dependence and optimality in large portfolios." Doctoral thesis, Universite Libre de Bruxelles, 2010. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210186.

Full text
Abstract:
This thesis is composed of three chapters. The first two chapters provide novel approaches for modeling and estimating the dependence structure for a large portfolio of assets using rating data. In both chapters a natural form of organizing a portfolio in terms of the levels of exposure to economic sectors and geographical regions plays a key role in setting up the dependence structure. The last chapter investigates whether financial strategies that exploit sector or geographical heterogeneity in the asset space are relevant in terms of portfolio optimization. This is also done in the context of a large portfolio, but with data on stock returns.
Doctorat en Sciences économiques et de gestion

APA, Harvard, Vancouver, ISO, and other styles
44

Zhao, Lei. "Study on Optimality Conditions in Stochastic Linear Programming." Diss., Tucson, Arizona : University of Arizona, 2005. http://etd.library.arizona.edu/etd/GetFileServlet?file=file:///data1/pdf/etd/azu%5Fetd%5F1343%5F1%5Fm.pdf&type=application/pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Kolf, K. Peter. "Pricing optimality of a multi-product public enterprise /." Title page, contents and abstract only, 1986. http://web4.library.adelaide.edu.au/theses/09ECM/09ecmk81.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Andersson, Daniel. "Necessary Optimality Conditions for Two Stochastic Control Problems." Licentiate thesis, Stockholm : Matematik, Kungliga Tekniska högskolan, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4643.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Chung, Yau-lin, and 鍾有蓮. "Optimality and approximability of the rectangle covering problem." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B30294873.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Babad, Hannah Rachel. "Optimality conditions and sensitivity relations in dynamic optimization." Thesis, Imperial College London, 1991. http://hdl.handle.net/10044/1/46655.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Beverly, Robert E. 1975. "Reorganization in network regions for optimality and fairness." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/28729.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004.
Includes bibliographical references (p. 92-95).
This thesis proposes a reorganization algorithm, based on the region abstraction, to exploit the natural structure in overlays that stems from common interests. Nodes selfishly adapt their connectivity within the overlay in a distributed fashion such that the topology evolves to clusters of users with shared interests. Our architecture leverages the inherent heterogeneity of users and places within the system their incentives and ability to affect the network. As such, it is not dependent on the altruism of any other nodes in the system. Of particular interest is the optimality and fairness of our design. We rigorously define ideal and fair networks and develop a continuum of optimality measures by which to evaluate our algorithm. Further, to evaluate our algorithm within a realistic context, validate assumptions and make design decisions, we capture data from a portion of a live file-sharing network. More importantly, we discover, name, quantify and solve several previously unrecognized subtle problems in a content-based self-organizing network as a direct result of simulations using the trace data. We motivate our design by examining the dependence of existing systems on benevolent Super-Peers. Through simulation we find that the current architecture is highly dependent on the filtering capability and the willingness of the SuperPeer network to absorb the majority of the query burden. The remainder of the thesis is devoted to a world in which SuperPeers no longer exist or are untenable. In our evaluation, we introduce four reasons for utility suboptimal self-reorganizing networks: anarchy (selfish behavior), indifference, myopia and ordering. We simulate the level of utility and happiness achieved in existing architectures. Then we systematically tear down implicit assumptions of altruism while showing the resulting negative impact on utility. From a selfish equilibrium, with much lower global utility, we show the ability of our algorithm to reorganize and restore the utility of individual nodes, and the system as a whole, to similar levels as realized in the SuperPeer network. Simulation of our algorithm shows that it reaches the predicted optimal utility while providing fairness not realized in other systems. Further analysis includes an epsilon equilibrium model where we attempt to more accurately represent the actual reward function of nodes. We find that by employing such a model, over 60% of the nodes are connected. In addition, this model converges to a utility 34% greater than achieved in the SuperPeer network while making no assumptions on the benevolence of nodes or centralized organization.
by Robert E. Beverly, IV.
S.M.
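The core idea of interest-driven, selfish rewiring can be sketched in a few lines. The interest labels, utility measure and rewiring rule below are hypothetical simplifications, not Beverly's region-based algorithm or reward model:

```python
import random

random.seed(1)
N, K, INTERESTS = 60, 4, 5
interest = [random.randrange(INTERESTS) for _ in range(N)]
nbrs = [set(random.sample([j for j in range(N) if j != i], K)) for i in range(N)]

def sim(i, j):
    # toy similarity: 1 if two nodes share an interest label, else 0
    return 1.0 if interest[i] == interest[j] else 0.0

def avg_similarity():
    return sum(sim(i, j) for i in range(N) for j in nbrs[i]) / (N * K)

print("before rewiring:", round(avg_similarity(), 2))
for _ in range(20):                      # selfish, purely local rewiring rounds
    for i in range(N):
        worst = min(nbrs[i], key=lambda j: sim(i, j))
        candidate = random.randrange(N)
        if candidate != i and candidate not in nbrs[i] and sim(i, candidate) > sim(i, worst):
            nbrs[i].remove(worst)        # drop the least-similar neighbour
            nbrs[i].add(candidate)       # connect to a more similar peer
print("after rewiring: ", round(avg_similarity(), 2))
```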
APA, Harvard, Vancouver, ISO, and other styles
50

Toet, Rudy. "An Optimality-Theoretic Analysis of the Japanese Passive." Kyoto University, 2020. http://hdl.handle.net/2433/253004.

Full text
APA, Harvard, Vancouver, ISO, and other styles