Dissertations / Theses on the topic 'Computational models'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Computational models.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1. Erriquez, Elisabetta. "Computational models of trust." Thesis, University of Liverpool, 2012. http://livrepository.liverpool.ac.uk/7433/.

Abstract:
Trust and reputation are key issues in the multi-agent systems domain. As in human societies, software agents must interact with other agents in settings where there is the possibility that they can be exploited. This suggests the need for theoretical and computational models of trust and reputation that can be used by software agents, and accordingly, much research has investigated this issue. The first part of this thesis investigates the conjecture that agents who make decisions in scenarios where trust is important can benefit from the use of a social structure, representing the social relationships that exist between agents. To this end, we present techniques that can be used by agents to initially build and then progressively update such a structure in the light of experience. As the agents interact with other agents they gather information about interactions and relationships in order to build the network of agents and to better understand their social environment. We also show empirical evidence that a trust model enhanced with a social structure representation, used to gather additional information to select trustworthy agents for an agent’s interactions, can improve the trust model’s performance. In the second part of this thesis, we concentrate on the context of coalition formation, where coalition stability is a crucial issue. Stability is what motivates an agent not to break from the original coalition and form a new one, and lack of trust in some of the coalition members could induce an agent to leave the coalition. We therefore address this limitation of current models by introducing an abstract framework that allows agents to form distrust-free coalitions. Moreover, we present measures to evaluate the trustworthiness of an agent with respect to the whole society or to a particular coalition. We also describe a way to combine the trust and distrust relationships to form coalitions which are still distrust-free.
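To make the coalition notion concrete, here is one minimal formalisation of the idea sketched above (our own rendering, not necessarily the exact definition used in the thesis): given a set of agents A and a distrust relation D ⊆ A × A,

\[ C \subseteq A \text{ is distrust-free} \iff \forall\, i, j \in C : (i, j) \notin D. \]

The trustworthiness measures mentioned can then be read as statistics of an agent's incoming trust and distrust edges, computed either over all of A or restricted to a particular coalition C.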
2. Casarin, Stefano. "Mathematical models in computational surgery." Thesis, La Rochelle, 2017. http://www.theses.fr/2017LAROS008/document.

Abstract:
Computational surgery is a new science that aims to intersect surgery and computational sciences in order to bring significant improvements to both fields. With the evolution of new surgical techniques, a close collaboration between surgeons and computational scientists has become unavoidable and also essential to optimize surgical care. A large usage of mathematical models is the cornerstone of this new field. The present thesis shows how a systematic approach to a clinical problem brought us to answer open questions in the field of surgery by using mathematical models on a large scale. In general, our approach includes (i) an overview of the problem, (ii) the identification of the physiological system(s) to be studied to address the question, and (iii) a mathematical modeling effort, which has always been driven by the pursuit of a compromise between system complexity and closeness to the physiological reality. In the first part, we focused on the optimization of the boundary conditions to be applied to a bioreactor used to re-populate lung tissue from a donor. A geometrical model of the tracheobronchial tree combined with a solute deposition model allowed us to retrieve the set of pressures to be applied to the pumps serving the bioreactor in order to reach an optimal distribution of nourishment across the lung scaffold. In the second part, we focused on the issue of post-surgical restenosis of vein grafts used to bypass arterial occlusions. We replicated the event of restenosis with several mathematical models that allow us to study the clinical evidence and to test hypotheses with an escalating level of complexity and accuracy. Finally, we developed a solid framework to test the effect of gene therapies aimed at limiting restenosis. Interestingly, we found that by controlling a specific group of genes, lumen patency doubles after a month of follow-up. With the results achieved, we proved that mathematical modeling can be used as a powerful tool for surgical innovation.
3. Alzaidi, Samara Samir. "Computational Models of Cerebral Hemodynamics." Thesis, University of Canterbury. Mechanical Engineering, 2009. http://hdl.handle.net/10092/3159.

Abstract:
The cerebral tissue requires a constant supply of oxygen and nutrients, maintained through the delivery of a constant supply of blood. The delivery of sufficient blood is preserved by the cerebral vasculature and its autoregulatory function. The cerebral vasculature is composed of the Circle of Willis (CoW), a ring-like anastomosis of arteries at the base of the brain, and its peripheral arteries. However, only 50% of the population have a classical complete CoW network. This implies that the route of blood flow through the cerebral vasculature differs between individuals and depends on where the blood is needed most in the brain. Autoregulation is a mechanism of the peripheral arteries and arterioles downstream of the CoW. It ensures the delivery of the essential amount of cerebral blood flow despite changes in the arterial perfusion pressure, through the vasoconstriction and vasodilation of the vessels. The mechanisms that control the vessels’ vasomotion could be attributed to myogenic, metabolic or neurogenic regulation, or a combination of all three. However, variations in the CoW structure, combined with different pathological conditions such as hypertension, a stenosis or an occlusion in one or more of the supplying cerebral arteries, may alter, damage or abolish autoregulation, and consequently result in a stroke. Stroke is the most common cerebrovascular disease and affects millions of people in the world every year. Therefore, it is essential to understand cerebral hemodynamics via mathematical modelling of the cerebral vasculature and its regulation mechanisms. This thesis presents the developed model of the cerebral vasculature coupled with the different forms of autoregulation mechanisms. The model was developed over multiple stages. First, a linear model of the CoW was developed, where the peripheral vessels downstream of the CoW efferent arteries are represented as lumped-parameter variable resistances. The autoregulation function in the efferent arteries was modelled using a PI controller, and a metabolic model was added to the lumped peripheral variable resistances. The model was then modified so that the pressure losses encountered at the CoW bifurcations and the vessels’ tortuosity are taken into account, resulting in a non-linear system. A number of cerebral autoregulation models exist in the literature; however, no model combines a fully populated arterial tree with dynamic autoregulation. The final model presented in this thesis was built by creating an asymmetric binary arterial vascular tree to replace the lumped resistance parameters for the vasculature network downstream of each of the CoW efferent arteries. The autoregulation function was introduced to the binary arterial tree by implementing the myogenic and metabolic mechanisms which are active in the small arteries and arterioles of the binary arterial tree. The myogenic and metabolic regulation mechanisms were both tested in the model. The results indicate that, because of the low pressures experienced by the arterioles downstream of the arterial tree, the myogenic mechanism, which is hypothesised by multiple researchers to be the main driver of autoregulation, does not provide enough regulation of the arterioles’ diameters to support autoregulation. The metabolic model showed that it can provide sufficient changes in the arterioles’ diameters, producing a vascular resistance that supports the constancy of the autoregulation function.
The work carried out for this research has the potential to be a significant clinical tool for evaluating patient-specific cases when combined with the graphical user interfaces provided. The research and modelling were performed as part of the Brain Group of the Centre of Bioengineering at the University of Canterbury.
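As a rough illustration of the PI-controlled autoregulation described above (a generic proportional-integral law in our own notation, not the thesis's actual equations), the peripheral resistance R(t) can be driven by the normalised flow error e(t):

\[ R(t) = R_0 \Big( 1 + K_p\, e(t) + K_i \int_0^t e(\tau)\, d\tau \Big), \qquad e(t) = \frac{q(t) - q_{\mathrm{ref}}}{q_{\mathrm{ref}}}, \]

so that flow above the reference raises resistance (vasoconstriction) and flow below it lowers resistance (vasodilation), mimicking the regulation of cerebral blood flow.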
4. Ng, Khin Hua. "Computational models of belief propagation." Thesis, Imperial College London, 2015. http://hdl.handle.net/10044/1/55283.

Abstract:
In this thesis we aim to gain a better understanding of the working of the belief propagation algorithm designed for graphical models, both within other computational frameworks such as neural systems, and with regard to the error associated with loopy belief propagation in Bayesian networks. In the first part, we examine a few recent neural computational models of belief propagation and highlight the significance of these models in demonstrating the viability of performing belief propagation using neural computations by transforming it into a dynamical system. We also propose the idea of implementing belief propagation in computational models like the Hopfield network through free energy minimisation. It is widely known that exact inference in loopy graphs is computationally difficult, and thus much effort has been spent on developing practical approximate inference algorithms. Loopy belief propagation is a widely used approximate inference algorithm for graphical models. In the second part of this thesis, we analyse the loopy error and propose two exact inference algorithms using belief propagation with loop correction for Bayesian networks with generic loops. We also propose a new approximate inference method called 2-Pass loopy belief propagation and demonstrate empirically its potential for use as a fast approximate inference algorithm with accuracy comparable to standard loopy belief propagation. We also discuss issues related to its application as an approximate inference method.
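For reference, the sum-product message update at the heart of (loopy) belief propagation, written here in standard pairwise-graphical-model notation rather than the Bayesian network notation of the thesis, is

\[ m_{i \to j}(x_j) = \sum_{x_i} \psi_{ij}(x_i, x_j)\, \phi_i(x_i) \prod_{k \in N(i) \setminus \{j\}} m_{k \to i}(x_i), \qquad b_i(x_i) \propto \phi_i(x_i) \prod_{k \in N(i)} m_{k \to i}(x_i). \]

On tree-structured graphs the beliefs b_i are exact marginals; on loopy graphs they are the approximation whose error is analysed here.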
5. Stegle, Oliver. "Probabilistic models in computational biology." Thesis, University of Cambridge, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.611560.

6. Jackson, Antony. "Computational models of financial markets." Thesis, University of Leicester, 2014. http://hdl.handle.net/2381/28753.

Abstract:
The three chapters of this thesis share the common theme of computational approaches to modeling financial markets. Chapter 1, “Market Ecologies: The Interaction and Survival of Technical Trading Strategies”, finds its place in the boundedly rational heterogeneous agent literature. Market prices result from the interaction of fundamental and technical trading strategies. We show that the way in which traders process information is critical in determining the long-run profitability of individual strategies. More realistic auction settings, in which price information is incorporated into trading methods in “real time”, call for computationally demanding techniques. The main conclusion of the chapter is that contrarian technical traders inadvertently mimic the role of arbitrageurs in more realistic auction settings. Chapter 2, “Capital Allocation in a Delegated Trading Model”, develops a model of capital allocation that removes the need for full mean-variance optimization of the firm-level portfolio. The strategies explored within the artificial setting of Chapter 1 are used to test the model against empirical foreign exchange data. We observe that the proposed capital allocation scheme yields economically and statistically significant returns, even when traders choose rules without the benefit of hindsight. Chapter 3, “Portfolio Choice: The Costs and Benefits of Asymmetric Information”, continues the theme of artificial markets, with the auction process departing from the fictitious auctioneer of Chapter 1 toward a market-making model in which risk-neutral dealers quote bid-ask spreads to compensate them for the losses incurred by trading with informed agents. We obtain the intriguing result that, in multiple markets, there is an “optimal” level of inside information. In individual markets, portfolio managers incur higher transaction costs as asymmetric information increases, but benefit from an externality at the portfolio level, as inside information aids price discovery.
7. Frankenstein, William. "Computational Models of Nuclear Proliferation." Research Showcase @ CMU, 2016. http://repository.cmu.edu/dissertations/782.

Abstract:
This thesis utilizes social influence theory and computational tools to examine the disparate impact of positive and negative ties in nuclear weapons proliferation. The thesis broadly comprises two sections: a simulation section, which focuses on government stakeholders, and a large-scale data analysis section, which focuses on the public and domestic actor stakeholders. In the simulation section, it demonstrates that the nonproliferation norm is an emergent behavior of political alliance and hostility networks, and that alliances play a role in current-day nuclear proliferation. This model is robust and contains second-order effects of extended hostility and alliance relations. In the large-scale data analysis section, the thesis demonstrates the role that context plays in sentiment evaluation and highlights how Twitter collection can provide useful input to policy processes. It first presents the results of an on-campus study in which users demonstrated that context plays a role in sentiment assessment. Then, in an analysis of a Twitter dataset of over 7.5 million messages, it assesses the role of ‘noise’ and biases in online data collection. In a deep dive analyzing the Iranian nuclear agreement, we demonstrate that the Middle East is not facing a nuclear arms race, and show that there is a structural hole in online discussion surrounding nuclear proliferation. By combining both approaches, policy analysts have a complete and generalizable set of computational tools to assess and analyze disparate stakeholder roles in nuclear proliferation.
8. Eades, Patrick Fintan. "Uncertainty Models in Computational Geometry." Thesis, University of Sydney, 2020. https://hdl.handle.net/2123/23909.

Abstract:
In recent years, easily and cheaply available internet-connected devices have enabled the collection of vast amounts of data, which has driven a continued interest in efficient, elegant combinatorial algorithms with mathematical guarantees. Much of this data contains an inherent element of uncertainty: whether because of imperfect measurements, because the data contains predictions about the future, or because the data is derived from machine learning algorithms which are inherently probabilistic. There is therefore a need for algorithms which include uncertainty in their definition and give answers in terms of that uncertainty. Questions about the most likely solution, the solution with lowest expected cost or a solution which is correct with high probability are natural here. Computational geometry is the sub-field of theoretical computer science concerned with developing algorithms and data structures for geometric problems, that is, problems involving points, distances, angles and shapes. In computational geometry, uncertainty is included in the location of the input points, or in which potential points are included in the input. The study of uncertainty in computational geometry is relatively recent; earlier research concerned imprecise points, which are known to appear somewhere in a geometric region. More recently the focus has been on points whose location, or presence, is given by a probability distribution. In this thesis we describe the most commonly used uncertainty models which are the subject of ongoing research in computational geometry. We present specific problems in those models and present new results, both positive and negative. In Chapter 3 we consider universal solutions, and show a new lower bound on the competitive ratio of the Universal Traveling Salesman Problem. In Chapter 4 we describe how to determine if two moving entities are ever mutually visible, and how data structures can be repeatedly queried to simulate uncertainty. In Chapter 5 we describe how to construct a graph on uncertain points with high probability of being a geometric spanner, an example of redundancy protecting against uncertainty. In Chapter 6 we introduce the online ply maintenance problem, an online problem where uncertainty can be reduced at a cost, and give an optimal algorithm.
9. Bashar, Hasanain. "Meta-modelling of intensive computational models." Thesis, University of Sheffield, 2016. http://etheses.whiterose.ac.uk/13667/.

Abstract:
Engineering process design for applications that use computationally intensive nonlinear dynamical systems can be expensive in time and resources. The presented work reviews the concept of a meta-model as a way to improve the efficiency of this process. The proposed meta-model has a computational advantage in implementation over the computationally intensive model, therefore reducing the time and resources required to design an engineering process. This work proposes to meta-model a computationally intensive nonlinear dynamical system using a reduced-order linear parameter-varying (LPV) modelling approach with local linear models in velocity-based linearisation form. The parameters of the linear time-varying meta-model are blended using Gaussian process regression models. The meta-model structure is transparent and relates directly to the dynamics of the computationally intensive model, while the velocity-based local linear models faithfully reproduce the original system dynamics anywhere in the operating space of the system. The non-parametric blending of the meta-model's local linear models by Gaussian process regression is well suited to dealing with data sparsity and provides uncertainty information about the meta-model predictions. The proposed meta-model structure has been applied to second-order nonlinear dynamical systems, a small-sized nonlinear transmission line model, a medium-sized fluid dynamics problem and the computationally intensive nonlinear transmission line model of order 5000.
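Schematically (our notation, assuming the standard LPV form rather than reproducing the thesis's equations), the meta-model is a linear parameter-varying system

\[ \dot{x}(t) = A(\rho(t))\, x(t) + B(\rho(t))\, u(t), \]

where ρ(t) locates the current operating point and the entries of A(ρ) and B(ρ) are blended across operating points by Gaussian process regression fitted to the local velocity-based linearisations.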
10. Calude, Elena. "Automata-Theoretic Models for Computational Complementarity." Thesis, University of Auckland, 1997. http://hdl.handle.net/2292/1915.

Abstract:
The purpose of this thesis is to study, from the mathematical point of view, a new type of question about finite automata, motivated by viewing automata as toy models of physical particles. Working along the line of research initiated by Moore, two computational complementarity principles are studied (for finite automata, deterministic (complete or incomplete) or nondeterministic, with outputs but no initial states), both theoretically and experimentally; they mimic physical complementarity and the Einstein-Podolsky-Rosen effect. Automata are studied via simulations (informally, the automaton A is simulated by the automaton B if B can perform all computations A can execute and produces the same outputs; two automata are equivalent if they simulate each other). A new type of minimization problem is solved and the solution is proved to be unique up to an isomorphism; the minimal automaton equivalent to a given automaton can be constructed only in terms of outputs for deterministic complete or incomplete automata, but one needs the whole internal machinery for nondeterministic automata. It happens that minimal automata are exactly the automata which may feature computational complementarity. Even if the original motivation remains only metaphorical, the physical motivation was fruitful in suggesting new definitions and constructions (simulation, universality, complementarity) leading to new mathematical results (the existence of a universal finite automaton, and a new solution to the minimization problem for nondeterministic automata).
11. Schmidt, Peter. "Computational Models of Adhesively Bonded Joints." Doctoral thesis, Linköping : Division of Mechanics, Department of Management and Engineering, Linköping University, 2007. http://www.bibl.liu.se/liupubl/disp/disp2007/tek1076s.pdf.

12. Frohlich, Flavio. "Computational network models of neocortical seizures." Diss., [La Jolla] : University of California, San Diego, 2007. http://wwwlib.umi.com/cr/ucsd/fullcit?p3288841.

Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2007.
Title from first page of PDF file (viewed June 2, 2008). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 220-244).
13. Seaman, Matthew. "Computational models of structure and dynamics." Thesis, University of Oxford, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.362082.

14. Lewis, Suzanne Carole. "Computational models of emotion and affect." Thesis, University of Hull, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.417166.

15. Okwechime, Dumebi. "Computational models of socially interactive animation." Thesis, University of Surrey, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.541433.

16. Buttery, P. J. "Computational models for first language acquisition." Thesis, University of Cambridge, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.597195.

Abstract:
This work investigates a computational model of first language acquisition: the Categorial Grammar Learner, or CGL. The model builds on the work of Villavicencio, who created a parametric Categorial Grammar learner that organises its parameters into an inheritance hierarchy, and also on the work of Buszkowski and Kanazawa, who demonstrated the learnability of a k-valued Classic Categorial Grammar (which uses only the rules of function application) from strings. The CGL is able to learn a k-valued General Categorial Grammar (which uses the rules of function application, function composition and Generalised Weak Permutation). The novel concept of Sentence Objects (simple strings, augmented strings, unlabelled structures and functor-argument structures) is presented as potential points from which learning may commence. Augmented strings (which are strings augmented with some basic syntactic information) are suggested as a sensible input to the CGL as they are cognitively plausible objects and have greater information content than strings alone. Building on the work of Siskind, a method for constructing augmented strings from unordered logic forms is detailed, and it is suggested that augmented strings are simply a representation of the constraints placed on the space of possible parses due to a string’s associated semantic content. The CGL makes crucial use of a statistical Memory Module (constructed from a type memory and Word Order Memory) that is used both to constrain hypotheses and to handle data which is noisy or parametrically ambiguous. A consequence of the Memory Module is that the CGL learns in an incremental fashion. This echoes real child learning as documented in Brown’s Stages of Language Development, and also as alluded to by an included corpus study of child speech. Furthermore, the CGL learns faster when initially presented with simpler linguistic data; a further corpus study of child-directed speech suggests that this echoes the input provided to children. The CGL is demonstrated to learn from real data. It is evaluated against previous parametric learners (the Triggering Learning Algorithm of Gibson and Wexler and the Structural Triggers Learner of Fodor and Sakas) and is found to be more efficient.
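For reference, the categorial grammar rules named above are standard. In the usual notation, forward application, backward application and forward composition are

\[ X/Y \;\; Y \Rightarrow X, \qquad Y \;\; X\backslash Y \Rightarrow X, \qquad X/Y \;\; Y/Z \Rightarrow X/Z, \]

while Generalised Weak Permutation additionally allows the arguments of a functor category to be consumed in a different order (notation conventions, especially for the backslash, vary between authors).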
17. Bell, Alexander Charlton. "Formal computational models of biological systems." Thesis, University of Sheffield, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.301423.

18. Caldeira, Andre Machado. "GARCH Models Identification Using Computational Intelligence." Pontifícia Universidade Católica do Rio de Janeiro, 2009. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=14872@1.

Abstract:
ARCH and GARCH models have been extensively explored, both technically and empirically, since their creation in 1982 and 1986, respectively. However, the focus has always been on stylized facts of financial time series or volatility forecasts, where GARCH(1,1) has commonly been used. Studies on the identification of GARCH models have been rare. In this context, this work develops an intelligent system for improving the identification of the correct specification of GARCH models, thus avoiding the indiscriminate use of the GARCH(1,1) model. In order to validate the efficacy of the proposed system, simulated time series are used. The results derived from this system are compared with the models chosen by the AIC and BIC information criteria. The forecasting performance of the models identified by these methods is then compared using real data.
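For context (standard definitions, not specific to this thesis): a GARCH(p, q) model lets the conditional variance depend on past squared shocks and past variances,

\[ \epsilon_t = \sigma_t z_t, \quad z_t \sim \text{i.i.d.}(0, 1), \qquad \sigma_t^2 = \omega + \sum_{i=1}^{q} \alpha_i\, \epsilon_{t-i}^2 + \sum_{j=1}^{p} \beta_j\, \sigma_{t-j}^2, \]

and identification means selecting the orders (p, q); GARCH(1,1) is the special case p = q = 1 whose indiscriminate use this work argues against.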
19. Edwards, Matthew Douglas. "Information-sharing models for computational genetics." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/105572.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 97-105).
Modern genetics has been transformed by a dramatic explosion of data. As sample sizes and the number of measured data types grow, the need for computational methods tailored to deal with these noisy and complex datasets increases. In this thesis, we develop and apply integrated computational and biological approaches for two genetic problems. First, we build a statistical model for genetic mapping using pooled sequencing, a powerful and efficient technique for rapidly unraveling the genetic basis of complex traits. Our approach explicitly models the pooling process and genetic parameters underlying the noisy observed data, and we use it to calculate accurate intervals that contain the targeted regions of interest. We show that our model outperforms simpler alternatives that do not use all available marker data in a principled way. We apply this model to study several phenotypes in yeast, including the genetic basis of the surprising phenomenon of strain-specific essential genes. We demonstrate the complex genetic basis of many of these strain-specific viability phenotypes and uncover the influence of an inherited virus in modifying their effects. Second, we design a statistical model that uses additional functional information describing large sets of genetic variants in order to predict which variants are likely to cause phenotypic changes. Our technique is able to learn complicated relationships between candidate features and can accommodate the additional noise introduced by training on groups of candidate variants, instead of single labeled variants. We apply this model to a large genetic mapping study in yeast by collecting multiple genome-wide functional measurements. By using our model, we demonstrate the importance of several molecular phenotypes in predicting genetic impact. The common themes in this thesis are the development of computational models that accurately reflect the underlying biological processes and the integration of carefully controlled biological experiments to test and utilize our new models.
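As a minimal sketch of the kind of likelihood on which pooled-sequencing models build (our illustration only; the thesis's model additionally captures the pooling and selection process explicitly), the reference-allele read count k at a marker with read depth n can be treated as binomial in the pool allele frequency f:

\[ k \sim \mathrm{Binomial}(n, f), \qquad \hat{f} = k / n. \]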
20. Cho, Peter Sungil. "Computational models for expressive dimensional typography." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/61105.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Program in Media Arts & Sciences, 1999.
Includes bibliographical references (p. 83-84).
This thesis research explores the prospect of typographic forms, based on custom computational models, which can be faithfully realized only in a three-dimensional, interactive environment. These new models allow for manipulation of letter-form attributes including visual display, scale, two-dimensional structure and three-dimensional sculptural form. In this research, each computational model must accommodate the variation in letter shapes, while trying to balance functional flexibility with the beauty and legibility of fine typography. In most cases, this thesis work approaches typography at the level of a single letter, looking at ways we can build living, expressive textual environments on the computer display.
21. Evans, Owain Rhys. "Bayesian computational models for inferring preferences." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/101522.

Abstract:
Thesis: Ph. D. in Linguistics, Massachusetts Institute of Technology, Department of Linguistics and Philosophy, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 130-131).
This thesis is about learning the preferences of humans from observations of their choices. It builds on work in economics and decision theory (e.g. utility theory, revealed preference, utilities over bundles), Machine Learning (inverse reinforcement learning), and cognitive science (theory of mind and inverse planning). Chapter 1 lays the conceptual groundwork for the thesis and introduces key challenges for learning preferences that motivate chapters 2 and 3. I adopt a technical definition of 'preference' that is appropriate for inferring preferences from choices. I consider what class of objects preferences should be defined over. I discuss the distinction between actual preferences and informed preferences and the distinction between basic/intrinsic and derived/instrumental preferences. Chapter 2 focuses on the challenge of human 'suboptimality'. A person's choices are a function of their beliefs and plans, as well as their preferences. If they have inaccurate beliefs or make inefficient plans, then it will generally be more difficult to infer their preferences from choices. It is also more difficult if some of their beliefs might be inaccurate and some of their plans might be inefficient. I develop models for learning the preferences of agents subject to false beliefs and to time inconsistency. I use probabilistic programming to provide a concise, extendable implementation of preference inference for suboptimal agents. Agents performing suboptimal sequential planning are represented as functional programs. Chapter 3 considers how preferences vary under different combinations (or 'compositions') of outcomes. I use simple mathematical functional forms to model composition. These forms are standard in microeconomics, where the outcomes in question are quantities of goods or services. These goods may provide the same purpose (and be substitutes for one another). Alternatively, they may combine together to perform some useful function (as with complements). I implement Bayesian inference for learning the preferences of agents making choices between different combinations of goods. I compare this procedure to empirical data for two different applications.
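A common formal core for this kind of inference (generic notation; the thesis's models additionally handle false beliefs, time inconsistency and composition of outcomes) is Bayesian inversion of a softmax-rational choice model:

\[ P(a \mid s, U) \propto \exp\big(\beta\, Q_U(s, a)\big), \qquad P(U \mid a_{1:T}, s_{1:T}) \propto P(U) \prod_{t=1}^{T} P(a_t \mid s_t, U), \]

where Q_U is the agent's action value under utility function U and β controls how noisily rational the agent is assumed to be.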
22. Ruhl, Jan Matthias. "Efficient algorithms for new computational models." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/17018.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2003.
Includes bibliographical references (p. 155-163).
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Advances in hardware design and manufacturing often lead to new ways in which problems can be solved computationally. In this thesis we explore fundamental problems in three computational models that are based on such recent advances. The first model is based on new chip architectures, where multiple independent processing units are placed on one chip, allowing for an unprecedented parallelism in hardware. We provide new scheduling algorithms for this computational model. The second model is motivated by peer-to-peer networks, where countless (often inexpensive) computing devices cooperate in distributed applications without any central control. We state and analyze new algorithms for load balancing and for locality-aware distributed data storage in peer-to-peer networks. The last model is based on extensions of the streaming model. It is an attempt to capture the class of problems that can be efficiently solved on massive data sets. We give a number of algorithms for this model, and compare it to other models that have been proposed for massive data set computations. Our algorithms and complexity results for these computational models follow the central thesis that it is an important part of theoretical computer science to model real-world computational structures, and that such effort is richly rewarded by a plethora of interesting and challenging problems.
23. Hely, Timothy Alasdair. "Computational models of developing neural systems." Thesis, University of Edinburgh, 1999. http://hdl.handle.net/1842/22303.

Abstract:
The work of this thesis has focused on creating computational models of developing neurons. Three different but related areas of research have been studied: how cells make connections, what influences the shape of these connections, and how neuronal network behaviour can be influenced by local interactions. In order to understand how cells make connections I simulated the dynamics of the neuronal growth cone, a structure which guides the developing axon to its target cells. Results from the first models showed that small interaction effects between structural proteins in the axon called microtubules can significantly alter the rate of axonal elongation and turning. I also simulated the dynamics of growth cone filopodia. The filopodia act as antennae and explore the extracellular environment surrounding the growth cone. This model showed that a reaction-diffusion system based on Turing morphogenesis patterns could account for the dynamic behaviour of filopodia. To find out what influences the shape of neuronal connections I simulated the branching patterns of neuronal dendrites. These are tree-like structures which receive input from other cells. Recent experiments indicate that dendrite branching is dependent on the phosphorylation status of microtubule-associated protein 2 (MAP2), which affects the growth rate and spacing of microtubules. MAP2 phosphorylation can occur through calcium activation of the protein CaMKII. In the model the phosphorylation status and physical distribution of MAP2 within the cell can be varied to produce a wide range of biologically realistic dendritic patterns. The final model simulates emergent synchronisation of neuronal spike firing which can occur in cultures of developing neurons. In the model the frequency and phase of cell firing are modified by the pattern of input signals received by the cell through local connections. This mechanism alone can lead to synchronous oscillation of the entire network of cells. The results of the model indicate that synchronization of firing in developing neurons in culture occurs through a passive spread of activity, rather than through an active coupling mechanism.
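The Turing reaction-diffusion mechanism invoked for the filopodia model has the standard two-species form (with generic kinetics f and g; the specific kinetics used in the thesis are not reproduced here):

\[ \frac{\partial u}{\partial t} = D_u \nabla^2 u + f(u, v), \qquad \frac{\partial v}{\partial t} = D_v \nabla^2 v + g(u, v), \]

with spatial patterns emerging when the inhibitor diffuses much faster than the activator (D_v ≫ D_u).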
24. Bai, Lihui. "Computational methods for toll pricing models." [Gainesville, Fla.] : University of Florida, 2004. http://purl.fcla.edu/fcla/etd/UFE0006341.

25. Machado, Rui Mário da Silva. "Massively parallel declarative computational models." Doctoral thesis, Universidade de Évora, 2013. http://hdl.handle.net/10174/12063.

Abstract:
Current computer architectures are parallel, with an increasing number of processors. Parallel programming is an error-prone task, and declarative models such as those based on constraints relieve the programmer of some of its difficult aspects, because they abstract control away. In this work we study and develop techniques for declarative computational models based on constraints using GPI, aiming at large-scale parallel execution. The main contributions of this work are: a GPI implementation of a scalable dynamic load-balancing scheme based on work stealing, suitable for tree-shaped computations and effective for systems with thousands of threads; a parallel constraint solver, MaCS, implemented to take advantage of the GPI programming model, for which experimental evaluation shows very good scalability on systems with hundreds of cores; and a GPI parallel version of the Adaptive Search algorithm, including different variants, whose study on different problems advances the understanding of scalability issues known to exist with large numbers of cores.
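As a toy sketch of the work-stealing idea only (plain Python threads; the thesis's implementation uses GPI and differs substantially), each worker pops tasks from its own deque and steals from a random victim when idle:

import collections
import random
import threading

def worker(idx, deques, results, max_idle=10000):
    # Pop from the LIFO end of our own deque for locality; when it is
    # empty, steal from the FIFO end of a randomly chosen victim.
    idle = 0
    while idle < max_idle:
        try:
            task = deques[idx].pop()
        except IndexError:
            victim = random.randrange(len(deques))
            try:
                task = deques[victim].popleft()
            except IndexError:
                idle += 1
                continue
        idle = 0
        results.append(task * task)  # stand-in for real work

def run(n_workers=4, n_tasks=1000):
    deques = [collections.deque() for _ in range(n_workers)]
    results = collections.deque()      # deque appends are atomic in CPython
    deques[0].extend(range(n_tasks))   # deliberately unbalanced initial load
    threads = [threading.Thread(target=worker, args=(i, deques, results))
               for i in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return len(results)

if __name__ == "__main__":
    print(run())  # typically 1000: every task processed exactly once

The deque-per-worker discipline (own end LIFO, steal end FIFO) is the same structural idea that makes work stealing effective for the tree-shaped computations mentioned above.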
26. Khabirova, Eleonora. "Models of neurodegeneration using computational approaches." Thesis, University of Cambridge, 2016. https://www.repository.cam.ac.uk/handle/1810/274157.

Abstract:
Alzheimer's disease (AD), one of the most common neurodegenerative diseases, is characterized by neuronal dysfunction and death resulting in progressive cognitive impairment. The main histopathological hallmark of AD is the accumulation and deposition of misfolded Aβ peptide as amyloid plaques; however, the precise role of Aβ toxicity in the disease pathogenesis is still unclear. Moreover, at early stages of the disease, important clinical features of the disorder, in addition to memory loss, are disruptions of circadian rhythms and spatial disorientation. In the present work I first studied the role of Aβ toxicity by comparing the findings of genome-wide association studies (GWAS) in sporadic AD with the results of an RNAi screen in a transgenic C. elegans model of Aβ toxicity. The initial finding was that none of the human orthologues of these worm genes are associated with risk for sporadic Alzheimer's disease, indicating that Aβ toxicity in the worm model may not be equivalent to sporadic AD. Nevertheless, comparing the first-degree physical interactors (+1 interactome) of the GWAS and worm screen-derived gene products uncovered 4 worm genes whose +1 interactome overlap with the GWAS genes is larger than one would expect by chance. Three of these genes form a chaperonin complex and the fourth gene codes for actin, a major substrate of the same chaperonin. Next I evaluated the circadian disruptions in AD by developing a new system to simultaneously monitor the oscillations of the peripheral molecular clock and behavioural rhythms in single Drosophila. Experiments were undertaken on wild-type and Aβ-expressing flies. The results indicate that the robustness of the peripheral clock is not correlated with the robustness of the circadian sleep and locomotor behaviours, indicating that the molecular clock does not directly drive behaviour. This is despite period-length correlations that indicate that the underlying molecular mechanisms that generate both molecular and behavioural rhythms are the same. Rhythmicity in Aβ-expressing flies is worse than in controls. I further investigated the mechanism of spatial orientation in Drosophila. It was established that in the absence of visual stimuli the flies use self-motion cues to orientate themselves within the tubes, and that in a Drosophila model of Aβ toxicity this control function is disrupted.
27. Seminck, Olga. "Cognitive Computational Models of Pronoun Resolution." Thesis, Sorbonne Paris Cité, 2018. http://www.theses.fr/2018USPCC184/document.

Abstract:
Pronoun resolution is the process in which an anaphoric pronoun is linked to its antecedent. In a normal situation, humans do not experience much cognitive effort due to this process. However, automatic systems perform far from human accuracy, despite the efforts made by the Natural Language Processing community. Experimental research in the field of psycholinguistics has shown that during pronoun resolution many linguistic factors are taken into account by speakers. An important question is thus how much influence each of these factors has and how the factors interact with each other. A second question is how linguistic theories about pronoun resolution can incorporate all relevant factors. In this thesis, we propose a new approach to answer these questions: computational simulation of the cognitive load of pronoun resolution. The motivation for this approach is two-fold. On the one hand, implementing hypotheses about pronoun resolution in a computational system leads to a more precise formulation of theories. On the other hand, robust computational systems can be run on uncontrolled data such as eye movement corpora and thus provide an alternative to hand-constructed experimental material. In this thesis, we conducted various experiments. First, we simulated the cognitive load of pronouns by learning the magnitude of impact of various factors on corpus data. Second, we tested whether concepts from Information Theory were relevant to predict the cognitive load of pronoun resolution. Finally, we evaluated a theoretical model of pronoun resolution on a corpus enriched with eye movement data. Our research shows that multiple factors play a role in pronoun resolution and that their influence can be estimated on corpus data. We also demonstrate that the concepts of Information Theory play a role in pronoun resolution. We conclude that the evaluation of hypotheses on corpus data enriched with cognitive data, such as eye movement data, plays an important role in the development and evaluation of theories. We expect that corpus-based methods will lead to better modelling of the influence of discourse structure on pronoun resolution in future work.
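The information-theoretic quantity most often used in this line of work is surprisal (a standard definition; the abstract does not state which exact measures the thesis tests):

\[ I(w_t) = -\log_2 P(w_t \mid w_1, \ldots, w_{t-1}), \]

the intuition being that a less predictable word or referent carries more information and should incur greater processing cost.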
28. Kaltenmark, Irène. "Geometrical Growth Models for Computational Anatomy." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLN049/document.

Abstract:
The Large Deformation Diffeomorphic Metric Mapping (LDDMM) framework has proved to be highly efficient for addressing the problem of modelling and analysis of the variability of populations of shapes, allowing for the direct comparison and quantification of diffeomorphic morphometric changes. However, the analysis of medical imaging data also requires the processing of more complex changes, which especially appear during growth or aging phenomena. The observed organisms are subject to transformations over time which are no longer diffeomorphic, at least in a biological sense. One reason might be the gradual creation of new material uncorrelated with the preexisting material. For this purpose, we propose to extend the LDDMM framework to address the problem of non-diffeomorphic structural variations in longitudinal scenarios during a growth or degenerative process. We keep the central geometric concept of a group of deformations acting on a shape space. However, the shapes are encoded by a new enriched mathematical object allowing, through partial mappings, an intrinsic evolution dissociated from external deformations. We focus on the specific case of the growth of animal horns. Ultimately, we integrate these growth priors into a new optimal control problem for the assimilation of time-varying surface data, leading to an interesting problem in the field of the calculus of variations where the choice of the attachment term on the data, current or varifold, plays an unexpected role.
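For orientation, the classical LDDMM registration objective that this work extends (standard form, in our notation) seeks a time-dependent velocity field v minimising

\[ E(v) = \frac{1}{2} \int_0^1 \lVert v_t \rVert_V^2 \, dt + \lambda\, D\big(\varphi_1 \cdot q_0,\; q_{\mathrm{target}}\big), \qquad \dot{\varphi}_t = v_t \circ \varphi_t, \quad \varphi_0 = \mathrm{id}, \]

where q_0 is the template shape, φ_1 the end-point diffeomorphism and D the data-attachment term, the current or varifold distance whose choice is shown above to play an unexpected role.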
29. Aizam, Nur Aidya Hanum. "Effective computational models for timetabling problem." Thesis, Curtin University, 2013. http://hdl.handle.net/20.500.11937/827.

Abstract:
A timetable is a table of information showing when certain events are scheduled to take place. Timetabling is essential in making sure that all events occur at the time and place required. It is critical in areas such as education, production and manufacturing, sport competitions, transport and logistics. The difficulty in timetabling lies in satisfying all the restrictions and requirements. The restrictions relate to resources such as time and location, as well as conflicts; the requirements relate to the preferences of customers and service providers. The problem is further complicated by the desire to optimize an objective function that usually relates to the cost or effectiveness of the schedule. Constructing a high-quality timetable which satisfies all requirements is definitely not an easy task. A further difficulty is the dynamic aspect of timetabling and the need to accommodate changes after the schedule has been announced. Our focus in this study is on university timetabling problems. Mathematically, the problem is to optimize an objective function that reflects the value of the schedule, subject to a set of constraints that relate to various operational requirements and a range of resource constraints (lecturers, rooms, etc.). The usual objective is to maximize the total preferences or to minimize the number of students affected by clashes. The problem can be conveniently expressed as an Integer Programming (IP) problem. The computational difficulty is due to the integer restrictions on the variables. Various computational models, including both heuristics and exact methods, have been proposed. The university course timetabling problem has existed for a long time, but due to its complexity and variation, many researchers are still trying to find solutions for it. Numerous methods have been developed over the years and most of them have been successful. However, according to McCollum (2006), based on the international review of Operational Research in the UK (commissioned by the Engineering and Physical Sciences Research Council), a gap still exists between the theory and practice of timetabling. Additionally, Burke and Petrovic (2002) also mentioned that many methods that have succeeded in solving this problem are applicable only to the specific institutions for which they were designed. Nevertheless, Benli and Botsali (2004) explained that there is no generalized model for this problem because of the variation present in each university. Moreover, the limited availability of facilities and the growing flexibility of students' choices of courses make the problem even tighter. This thesis outlines studies which take a step along the pathway to developing a more general IP model for the university course timetabling problem. We incorporate all important features of this problem, including the hard constraints and the desirable soft constraints. The AIMMS 3.11 mathematical software is employed as a tool to solve the models, with CPLEX 12.1 as the solver. In the first study (Chapter 3), we aim to develop models for timetabling problems which are flexible in terms of their ability to be applied in various institutions. To achieve this, we gather information on features used in other studies, which is covered in the literature review (Chapter 2) of this thesis. From the gathered features, we observed that some features are compulsory, in that they are always used in models to solve timetabling problems. These features therefore form a basic model of the university course timetabling problem in this study. We then develop an extended model by incorporating additional features found in the literature. The extended model also contains a few more additional features, which we generate, that are significant enough to be included in a model for solving this problem. Different combinations of the features which form the extended model are extracted to produce a range of models. These models can be used by any institution which requires some relevant subset of features to solve its timetabling problem. The models are tested with a small, randomly generated test problem. In the following chapter (Chapter 4), we apply the developed models to 3 case studies obtained from the literature. The objective is to test the efficiency of the developed models when applied to larger problems. Appropriate variation models are used to solve each of the case studies. This application testing is further extended by including a number of additional features, to illustrate that the developed model can be applied in institutions even when requirements change. Results from these tests demonstrate successful outcomes from the application of our developed models to the chosen case studies. In Chapter 5, we tested the application of the developed models in a case study using a pre-assignment approach as a simplification in solving the timetabling problem. In this approach, the core units are determined and prioritized to be assigned into prime time slots at the very beginning of the scheduling process. This is followed by the assignment of the remaining units, subject to the university requirements. One case study from Chapter 4 is used for the purpose of testing the pre-assignment approach. From this testing, we show that pre-assignment is a useful simplification tool in solving the timetabling problem of the chosen case study using the developed model, especially in reducing the computational time. We believe that this approach can be applied to other case studies using the developed model. As an overview of the thesis, we believe that the developed models will be applicable to other problems apart from the ones tested.
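The abstract above formulates course timetabling as an Integer Program maximizing preferences subject to room and clash constraints; the thesis itself solves its models in AIMMS with CPLEX. Purely as a rough illustration of that style of IP (not the thesis's actual model), here is a minimal sketch in Python using the open-source PuLP library, with invented toy data (courses, timeslots, rooms, preferences):

```python
from pulp import LpProblem, LpVariable, LpMaximize, lpSum, LpBinary

# Hypothetical toy instance; the models in the thesis are far richer.
courses = ["C1", "C2", "C3"]
slots = ["Mon9", "Mon11", "Tue9"]
rooms = ["R1", "R2"]
pref = {("C1", "Mon9"): 5, ("C2", "Mon9"): 3, ("C3", "Tue9"): 4}  # default 1
clash = [("C1", "C2")]  # course pairs sharing students: must not overlap

prob = LpProblem("course_timetabling", LpMaximize)
x = LpVariable.dicts("x", (courses, slots, rooms), cat=LpBinary)

# Objective: maximize total timeslot preference of the scheduled courses.
prob += lpSum(pref.get((c, t), 1) * x[c][t][r]
              for c in courses for t in slots for r in rooms)

# Hard constraint: each course is scheduled exactly once.
for c in courses:
    prob += lpSum(x[c][t][r] for t in slots for r in rooms) == 1

# Hard constraint: a room holds at most one course per timeslot.
for t in slots:
    for r in rooms:
        prob += lpSum(x[c][t][r] for c in courses) <= 1

# Hard constraint: clashing courses cannot share a timeslot.
for (a, b) in clash:
    for t in slots:
        prob += lpSum(x[a][t][r] + x[b][t][r] for r in rooms) <= 1

prob.solve()
for c in courses:
    for t in slots:
        for r in rooms:
            if x[c][t][r].value() == 1:
                print(c, "->", t, r)
```

Soft constraints of the kind the thesis describes would enter as weighted penalty terms in the same objective.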
APA, Harvard, Vancouver, ISO, and other styles
30

Kajero, Olumayowa T. "Meta-model assisted calibration of computational fluid dynamics simulation models." Thesis, University of Surrey, 2017. http://epubs.surrey.ac.uk/813857/.

Full text
Abstract:
Computational fluid dynamics (CFD) is a computer-based analysis of the dynamics of fluid flow, and it is widely used in chemical and process engineering applications. However, computation usually becomes a herculean task when calibration of the CFD models with experimental data or sensitivity analysis of the output relative to the inputs is required. This is due to the simulation process being highly computationally intensive, often requiring a large number of simulation runs, with a single simulation run taking hours or days to complete. Hence, in this research project, the kriging meta-modelling method was coupled with the expected improvement (EI) global optimisation approach to address the CFD model calibration challenge. In addition, a kriging meta-model based sensitivity analysis technique was implemented to study the model parameter input-output relationship. A novel EI measure was developed for the sum of squared errors (SSE), which conforms to a generalised chi-square distribution and for which existing normal distribution-based EI measures are not applicable. This novel EI measure suggested the values of the CFD model parameters to simulate with, hence minimising the SSE and improving the match between simulation and experiments. To test the proposed methodology, a non-CFD numerical simulation case of a semi-batch reactor was considered as a case study, which confirmed a saving in computational time and an improved match of the simulation model with the actual plant data. The usefulness of the developed method was subsequently demonstrated through a CFD case study of single-phase flow in both a straight-type and a convergent-divergent-type annular jet pump, where both a single turbulence model parameter, C_μ, and two turbulence model parameters, C_μ and C_2ε, were considered for calibration. Sensitivity analysis was subsequently based on C_μ as the input parameter. In calibration using both single and two model parameters, a significant improvement in the agreement with experimental data was obtained. The novel method gave a significant reduction in simulation computational time as compared to traditional CFD. A new correlation was proposed relating C_μ to the flow ratio, which could serve as a guide for future simulations. The meta-model based calibration aids exploration of different parameter combinations, which would have been computationally challenging using CFD alone. In addition, computational time was significantly reduced with kriging-assisted sensitivity analysis studies, which explored the effect of different C_μ values on the output, the pressure coefficient. The numerical simulation case of the semi-batch reactor was also used as a basis of comparison between the previous EI measure and the newly proposed EI measure, which overall revealed that the latter gave a significant improvement at a smaller number of simulation runs than the former. The research studies carried out have hence proposed and successfully demonstrated a novel methodology for faster calibration and sensitivity analysis of computational fluid dynamics simulations. This is essential in the design, analysis and optimisation of chemical and process engineering systems.
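The thesis's chi-square-based EI measure for the SSE is its novel contribution and is not reproduced here. For contrast, the sketch below shows the standard kriging-plus-EI loop for minimising an expensive scalar objective (the usual normal-distribution EI), using scikit-learn's Gaussian process regressor; the objective function, parameter range, and all numbers are illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expensive_sse(c_mu):
    # Stand-in for a CFD run: returns a sum of squared errors vs. experiment.
    return float((c_mu - 0.09) ** 2 + 0.001 * np.sin(40 * c_mu) ** 2)

# Initial design over a plausible range of the turbulence parameter C_mu.
X = np.linspace(0.05, 0.15, 5).reshape(-1, 1)
y = np.array([expensive_sse(c) for c in X.ravel()])

grid = np.linspace(0.05, 0.15, 200).reshape(-1, 1)
for _ in range(15):  # sequential infill iterations
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    best = y.min()
    # Standard EI for minimisation (normal assumption, unlike the thesis).
    with np.errstate(divide="ignore", invalid="ignore"):
        z = (best - mu) / sigma
        ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
        ei[sigma == 0] = 0.0
    x_next = grid[np.argmax(ei)]
    X = np.vstack([X, x_next.reshape(1, -1)])
    y = np.append(y, expensive_sse(x_next[0]))

print("estimated optimum C_mu:", X[np.argmin(y)][0])
```

Each iteration refits the surrogate and simulates only at the point with the greatest expected improvement, which is what delivers the computational saving the abstract reports.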
APA, Harvard, Vancouver, ISO, and other styles
31

Vasilkoski, Zlatko. "Protein folding computational studies." Thesis, Connect to Dissertations & Theses @ Tufts University, 2003.

Find full text
Abstract:
Thesis (Ph.D.)--Tufts University, 2003.
Adviser: David L. Weaver. Submitted to the Dept. of Physics. Includes bibliographical references. Access restricted to members of the Tufts University community. Also available via the World Wide Web.
APA, Harvard, Vancouver, ISO, and other styles
32

vanCort, Tracy. "Computational Evolutionary Linguistics." Scholarship @ Claremont, 2001. https://scholarship.claremont.edu/hmc_theses/137.

Full text
Abstract:
Languages and species both evolve by a process of repeated divergences, which can be described with the branching of a phylogenetic tree or phylogeny. Taking advantage of this fact, it is possible to study language change using computational tree-building techniques developed for evolutionary biology. Mathematical approaches to the construction of phylogenies fall into two major categories: character-based and distance-based methods. Character-based methods were used in prior work in the application of phylogenetic methods to the Indo-European family of languages by researchers at the University of Pennsylvania. Discussion of the limitations of character-based models leads to a similar presentation of distance-based models. We present an adaptation of these methods to linguistic data, and the phylogenies generated by applying these methods to several modern Germanic languages and Spanish. We conclude that distance-based methods are useful for historical linguistic reconstruction, and that it would be useful to extend existing tree-drawing methods to better model the evolutionary effects of language contact.
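As a rough illustration of the distance-based approach this abstract describes, the sketch below builds a toy lexical distance matrix from normalized edit distances between cognate word lists and clusters it with UPGMA (average linkage) via SciPy; the languages and word lists are invented for the example, not the thesis's data:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

def edit_distance(a, b):
    # Classic dynamic-programming Levenshtein distance.
    d = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    d[:, 0] = np.arange(len(a) + 1)
    d[0, :] = np.arange(len(b) + 1)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i, j] = min(d[i-1, j] + 1, d[i, j-1] + 1, d[i-1, j-1] + cost)
    return d[len(a), len(b)]

# Tiny illustrative cognate lists (same concepts in each language).
words = {
    "English": ["water", "stone", "fish"],
    "German":  ["wasser", "stein", "fisch"],
    "Dutch":   ["water", "steen", "vis"],
    "Spanish": ["agua", "piedra", "pez"],
}
langs = list(words)
n = len(langs)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        pairs = zip(words[langs[i]], words[langs[j]])
        d = np.mean([edit_distance(a, b) / max(len(a), len(b)) for a, b in pairs])
        dist[i, j] = dist[j, i] = d

# UPGMA on the condensed distance matrix; the linkage encodes the phylogeny.
tree = linkage(squareform(dist), method="average")
dendrogram(tree, labels=langs, no_plot=True)
print(tree)
```

On such data the Germanic languages cluster together before joining Spanish, which is the qualitative behaviour a distance-based linguistic phylogeny is meant to recover.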
APA, Harvard, Vancouver, ISO, and other styles
33

Gok, Selvi Elif. "Modeling Consciousness: A Comparison Of Computational Models." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/3/12611178/index.pdf.

Full text
Abstract:
There has been a recent flurry of activity in consciousness research. Although an operational definition of consciousness has not yet been developed, philosophy has come to identify a set of features and aspects that are thought to be associated with the various elements of consciousness. On the other hand, there have been several recent attempts to develop computational models of consciousness that are claimed to capture or illustrate one or more aspects of consciousness. As a plausible substitute for evaluating how well the current computational models model consciousness, this study examines how they fare in modeling those aspects and features of consciousness identified by philosophy. Following a detailed and critical review of the literature of the philosophy of consciousness, this study constructs a composite and eclectic list of features and aspects that would be expected in any successful model of consciousness. The study then evaluates, from the viewpoint of that list, some of the current self-claimed computational models of consciousness, specifically CLARION, IDA, ACT-R, and the model proposed in Cleeremans' review and study. The computational models studied are evaluated with respect to each identified aspect and feature of consciousness.
APA, Harvard, Vancouver, ISO, and other styles
34

Lu, Wei. "Computational social influence : models, algorithms, and applications." Thesis, University of British Columbia, 2016. http://hdl.handle.net/2429/58394.

Full text
Abstract:
Social influence is a ubiquitous phenomenon in human life. Fueled by the extreme popularity of online social networks and social media, computational social influence has emerged as a subfield of data mining whose goal is to analyze and understand social influence using computational frameworks such as theoretical modeling and algorithm design. It also entails substantial application potential for viral marketing, recommender systems, social media analysis, etc. In this dissertation, we present research that takes significant steps toward bridging the gap between elegant theories in computational social influence and the needs of two real-world applications: viral marketing and recommender systems. In Chapter 2, we extend the classic Linear Threshold model to incorporate price and valuation in modeling the diffusion process of new product adoption; we design a greedy-style algorithm that finds influential users from a social network, as well as their corresponding personalized discounts, to maximize the expected total profit of the advertiser. In Chapter 3, we propose a novel business model for online social network companies to sell viral marketing as a service to competing advertisers, for which we tackle two optimization problems: maximizing the total influence spread of all advertisers and allocating seeds to advertisers in a fair manner. In Chapter 4, we design a highly expressive diffusion model that can capture arbitrary relationships between two propagating entities to arbitrary degrees. We then study the influence maximization problem in a novel setting consisting of two complementary entities and design efficient approximation algorithms. Next, in Chapter 5, we apply social influence to recommender systems. We model the dynamics of user interest evolution using social influence, as well as attraction and aversion effects. As a result, making effective recommendations is substantially more challenging, and we apply semi-definite programming techniques to achieve near-optimal solutions. Chapter 6 concludes the dissertation and outlines possible future research directions.
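The seed-selection problems described above build on the classic greedy framework for influence maximization. Purely as an illustration of that baseline (not the thesis's price-aware or competitive variants), here is a minimal Monte Carlo greedy under the independent cascade model on a toy graph with invented activation probabilities:

```python
import random

# Toy directed graph: edge -> activation probability (invented numbers).
graph = {
    "a": {"b": 0.4, "c": 0.3},
    "b": {"d": 0.5},
    "c": {"d": 0.2, "e": 0.6},
    "d": {"e": 0.3},
    "e": {},
}

def simulate_ic(seeds, trials=2000):
    """Estimate expected spread under the independent cascade model."""
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            u = frontier.pop()
            for v, p in graph[u].items():
                if v not in active and random.random() < p:
                    active.add(v)
                    frontier.append(v)
        total += len(active)
    return total / trials

def greedy_seeds(k):
    """Greedy hill-climbing: repeatedly add the node with best marginal gain."""
    seeds = set()
    for _ in range(k):
        gains = {v: simulate_ic(seeds | {v}) for v in graph if v not in seeds}
        seeds.add(max(gains, key=gains.get))
    return seeds

print(greedy_seeds(2))
```

The greedy algorithm's (1 - 1/e) approximation guarantee for such submodular spread functions is what makes this the standard point of departure for the extensions the dissertation studies.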
APA, Harvard, Vancouver, ISO, and other styles
35

Blazejewski, Adam. "Computational Models for Stock Market Order Submissions." Engineering, 2006. http://hdl.handle.net/2123/923.

Full text
Abstract:
Doctor of Philosophy
The motivation for the research presented in this thesis stems from the recent availability of high-frequency limit order book data, the relative scarcity of studies employing such data, the economic significance of transaction costs management, and the perceived potential of data mining for uncovering patterns and relationships not identified by the traditional top-down modelling approach. We analyse and build computational models for order submissions on the Australian Stock Exchange, an order-driven market with a public electronic limit order book. The focus of the thesis is on the trade implementation problem faced by a trader who wants to transact a buy or sell order of a certain size. We use two approaches to build our models, top-down and bottom-up. The traditional, top-down approach is applied to develop an optimal order submission plan for an order which is too large to be traded immediately without a prohibitive price impact. We present an optimisation framework and some solutions for non-stationary and non-linear price impact and price impact risk. We find that our proposed transaction costs model produces fairly good forecasts of the variance of the execution shortfall. The second, bottom-up, or data mining, approach is employed for trade sign inference, where trade sign is defined as the side which initiates both a trade and the market order that triggered the trade. We are interested in an endogenous component of the order flow, as evidenced by the predictable relationship between trade sign and the variables used to infer it. We want to discover the rules which govern the trade sign, and establish a connection between them and two empirically observed regularities in market order submissions: competition for order execution and transaction cost minimisation. To achieve the above aims we first use exploratory analysis of trade and limit order book data. In particular, we conduct unsupervised clustering with the self-organising map technique. The visualisation of the transformed data reveals that buyer-initiated and seller-initiated trades form two distinct clusters. We then propose a local non-parametric trade sign inference model based on the k-nearest-neighbour classifier. The best k-nearest-neighbour classifier constructed by us requires only three predictor variables and achieves an average out-of-sample accuracy of 71.40% (SD=4.01%) across all of the tested stocks. The best set of predictor variables found for the non-parametric model is subsequently used to develop a piecewise linear trade sign model. That model proves superior to the k-nearest-neighbour classifier, and achieves an average out-of-sample classification accuracy of 74.38% (SD=4.25%). The result is statistically significant, after adjusting for multiple comparisons. The overall classification performance of the piecewise linear model indicates a strong dependence between trade sign and the three predictor variables, and provides evidence for the endogenous component in the order flow. Moreover, the rules for trade sign classification derived from the structure of the piecewise linear model reflect the two regularities observed in market order submissions, competition for order execution and transaction cost minimisation, and offer new insights into the relationship between them. The obtained results confirm the applicability and relevance of data mining for the analysis and modelling of stock market order submissions.
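The thesis's three predictor variables are specific to its limit-order-book data and are not named in the abstract; the sketch below shows only the general shape of such a k-nearest-neighbour trade-sign classifier with scikit-learn, using entirely hypothetical feature names and synthetic data:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000
# Hypothetical predictors (not the thesis's actual variables):
# bid-ask imbalance, trade price location vs. midquote, previous trade sign.
imbalance = rng.normal(0, 1, n)
price_loc = rng.normal(0, 1, n)
last_sign = rng.choice([-1, 1], n)
X = np.column_stack([imbalance, price_loc, last_sign])
# Synthetic trade sign loosely driven by the predictors plus noise.
y = np.sign(0.8 * imbalance + 0.5 * price_loc + 0.3 * last_sign
            + rng.normal(0, 0.5, n)).astype(int)

clf = KNeighborsClassifier(n_neighbors=15)
scores = cross_val_score(clf, X, y, cv=5)
print("out-of-sample accuracy: %.2f%% (SD=%.2f%%)"
      % (100 * scores.mean(), 100 * scores.std()))
```

A piecewise linear model of the kind the thesis favours would replace the local neighbour vote with region-specific linear decision rules over the same features.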
APA, Harvard, Vancouver, ISO, and other styles
36

Huss, Mikael. "Computational models of lamprey locomotor network neurons." Licentiate thesis, Stockholm : KTH Numerical Analysis and Computer Science, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-304.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Ma, Xiaojuan. "Computational and statistical aspects of pricing models." Thesis, University of Nottingham, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.606378.

Full text
Abstract:
This thesis is motivated by the problem of modelling and analysing real share price data. Various approaches and models are considered. One approach is to consider a random walk on a discrete-time Markov chain perturbed by Gaussian noise as a model for real share price data. To implement this model, a numerical algorithm is constructed to treat the NP-hard embedding problem. A second approach to modelling share price data is to consider a random walk on the lamplighter group perturbed by Gaussian noise. This class of problems leads to interesting theoretical questions about the asymptotic behaviour of random stochastic matrices. In particular, we found an asymptotic expression for the L2 error between two independent random stochastic matrices. We apply a variety of statistical and modelling techniques to justify the models, including traditional econometric transforms, regression and MLE techniques, EM algorithms, and Monte Carlo methods such as random search.
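As a toy illustration of the first model class mentioned above, the following sketch simulates a random walk driven by a discrete-time Markov chain and perturbs it with Gaussian noise; the two-state chain, step parameters, and noise level are all invented for the example:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 2-state Markov chain (e.g. "calm" vs "volatile" regimes).
P = np.array([[0.95, 0.05],
              [0.10, 0.90]])      # transition matrix
drift = np.array([0.02, -0.03])   # per-state mean increment
vol = np.array([0.1, 0.4])        # per-state increment scale

T = 500
state = 0
increments = np.empty(T)
for t in range(T):
    increments[t] = drift[state] + vol[state] * rng.standard_normal()
    state = rng.choice(2, p=P[state])

walk = np.cumsum(increments)                # random walk on the chain
observed = walk + rng.normal(0.0, 0.05, T)  # Gaussian observation noise
print(observed[:5])
```

Fitting such a model to real prices runs this logic in reverse (e.g. via EM), which is where the embedding and estimation difficulties the abstract mentions arise.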
APA, Harvard, Vancouver, ISO, and other styles
38

Comerford, Andrew Peter. "Computational Models of Endothelial and Nucleotide Function." Thesis, University of Canterbury. Mechanical Engineering, 2007. http://hdl.handle.net/10092/1178.

Full text
Abstract:
Atherogenesis is the leading cause of death in the developed world, and is putting considerable monetary pressure on health systems the world over. Although the risk factors are well understood, the initiation and development of this disease unfortunately remain relatively poorly understood, though they are becoming increasingly identifiable as a dysfunction of the endothelial cells that line the walls of arteries. The prevailing haemodynamic environment plays an important role in the focal nature of atherosclerosis, which is restricted to very specific regions of the human vasculature. Disturbed haemodynamics lead to very low wall shear stress and inhibit the transport of important blood-borne chemicals. The present study models, both computationally and mathematically, the transport and hydrolysis of important blood-borne adenosine nucleotides in physiologically relevant arterial geometries. In-depth analysis of the factors that affect the transport of these low diffusion coefficient species is undertaken. A mathematical model of the complex underlying endothelial cell dynamics is utilised to model the production of key intracellular molecules that have been implicated in the complex initiation processes of atherosclerosis; hence regions of the vasculature can be identified as being 'hot spots' for atherogenesis. This model is linked into CFD software, allowing for the assessment of how 3D flow fields and mass transfer affect the underlying cell signalling. Three studies are undertaken to further understand nucleotide variations at the endothelium and to understand the factors involved in determining the underlying cell dynamics. The major focus of the first two studies is geometric variation, primarily due to the plethora of evidence implicating the geometry of the human vasculature, and hence the haemodynamics, as an influential factor in atherosclerosis initiation. The final model looks at a physiologically realistic geometry to provide a more realistic reproduction of the in vivo environment.
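Nucleotide hydrolysis at the endothelium is commonly modelled with Michaelis-Menten kinetics. The sketch below integrates such a term for a well-mixed ATP concentration as a generic illustration only; the rate constants are invented, and the thesis's actual models couple this kind of kinetics to 3D transport and geometry:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical Michaelis-Menten hydrolysis of ATP (parameters invented).
Vmax = 1.0   # maximal hydrolysis rate, umol/(L*s)
Km = 475.0   # Michaelis constant, umol/L

def atp_hydrolysis(t, c):
    # dC/dt = -Vmax * C / (Km + C): saturating degradation of ATP.
    return [-Vmax * c[0] / (Km + c[0])]

sol = solve_ivp(atp_hydrolysis, (0.0, 600.0), [100.0], max_step=1.0)
print("ATP after 10 min:", round(float(sol.y[0, -1]), 2), "umol/L")
```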
APA, Harvard, Vancouver, ISO, and other styles
39

Yang, Xiaomei, and 楊笑梅. "Computational models for piezoelectrics and piezoelectric laminates." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B31246217.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Xu, Li. "Financial and computational models in electricity markets." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/51849.

Full text
Abstract:
This dissertation is dedicated to studying the design and utilization of financial contracts and pricing mechanisms for managing the demand/price risks in electricity markets and the price risks in carbon emission markets from different perspectives. We address the issues pertaining to efficient computational algorithms for pricing complex financial options, which include many structured energy financial contracts, and to the design of economic mechanisms for managing the risks associated with the increasing penetration of renewable energy resources and with trading emission allowance permits in the restructured electric power industry. To address the computational challenges arising from pricing exotic energy derivatives designed for various hedging purposes in electricity markets, we develop a generic computational framework based on a fast transform method, which attains asymptotically optimal computational complexity and exponential convergence. For the purpose of absorbing the variability and uncertainties of renewable energy resources in a smart grid, we propose an incentive-based contract design for thermostatically controlled loads (TCLs) to encourage end users' participation as a source of demand response (DR). Finally, we propose a market-based approach to mitigate the emission permit price risks faced by generation companies in a cap-and-trade system. Through a stylized economic model, we illustrate that the trading of properly designed financial options on emission permits reduces permit price volatility and the total emission reduction cost.
APA, Harvard, Vancouver, ISO, and other styles
41

Easom, Gary. "Improved turbulence models for computational wind engineering." Thesis, University of Nottingham, 2000. http://eprints.nottingham.ac.uk/10113/.

Full text
Abstract:
The fundamental errors in the numerical modelling of the turbulent component of fluid flow are one of the main reasons why computational fluid dynamics techniques have not yet been fully accepted by the wind engineering community. This thesis is the result of extensive research undertaken to assess the various methods available for the numerical simulation of turbulent fluid flow, with a view to developing improved turbulence models for computational wind engineering. Investigations have concentrated on analysing the accuracy and numerical stability of a number of different turbulence models, including both the widely available models and state-of-the-art techniques. These investigations suggest that a turbulence model suitable for wind engineering applications should be able to model the anisotropy of turbulent flow, as in the differential stress model, whilst maintaining the ease of use and computational stability of the two-equation k-ε models. Therefore, non-linear expansions of the Boussinesq hypothesis, the quadratic and cubic non-linear k-ε models, have been tested in an attempt to account for anisotropic turbulence and curvature-related strain effects. Furthermore, large eddy simulations using the standard Smagorinsky sub-grid scale model have been completed in order to account for the four-dimensional nature of turbulent flow. This technique, which relies less heavily on the need to model turbulence by utilising advances in computer technology and processing power to directly resolve more of the flow field, is now becoming increasingly popular in the engineering community. The author has detailed and tested all of the above-mentioned techniques and given recommendations for both the short- and longer-term future of turbulence modelling in computational wind engineering. Improved turbulence models that more accurately predict bluff body flow fields and that are numerically stable for complex geometries are of paramount importance if the use of CFD techniques is to gain wide acceptance in the wind engineering community.
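For reference, the baseline two-equation model discussed above is the standard k-ε model, defined by the eddy-viscosity relation and two transport equations with the usual Launder-Spalding constants. This is the textbook form that the quadratic and cubic non-linear variants extend, not the author's modified models:

```latex
\nu_t = C_\mu \frac{k^2}{\varepsilon}, \qquad
\frac{Dk}{Dt} = \frac{\partial}{\partial x_j}\left[\left(\nu + \frac{\nu_t}{\sigma_k}\right)\frac{\partial k}{\partial x_j}\right] + P_k - \varepsilon,
\qquad
\frac{D\varepsilon}{Dt} = \frac{\partial}{\partial x_j}\left[\left(\nu + \frac{\nu_t}{\sigma_\varepsilon}\right)\frac{\partial \varepsilon}{\partial x_j}\right] + C_{1\varepsilon}\frac{\varepsilon}{k}P_k - C_{2\varepsilon}\frac{\varepsilon^2}{k}
```

with P_k the production of turbulent kinetic energy and standard constants C_μ = 0.09, C_{1ε} = 1.44, C_{2ε} = 1.92, σ_k = 1.0, σ_ε = 1.3. The non-linear models replace the linear Boussinesq stress-strain relation underlying ν_t with quadratic and cubic expansions in the strain and rotation tensors.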
APA, Harvard, Vancouver, ISO, and other styles
42

Impett, Jonathan. "Computational models for interactive composition/performance systems." Thesis, University of Cambridge, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.406993.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Sloan, Robert Hal. "Computational learning theory : new models and algorithms." Thesis, Massachusetts Institute of Technology, 1989. http://hdl.handle.net/1721.1/38339.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1989.
Includes bibliographical references (leaves 116-120).
by Robert Hal Sloan.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
44

Arkhipov, Dmitri I. "Computational Models for Scheduling in Online Advertising." Thesis, University of California, Irvine, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10168557.

Full text
Abstract:
Programmatic advertising is an actively developing industry and research area. Some of the research in this area concerns the development of optimal or approximately optimal contracts and policies between publishers, advertisers and intermediaries such as ad networks and ad exchanges. Both the development of contracts and the construction of policies governing their implementation are difficult challenges, and different models take different features of the problem into account. In programmatic advertising decisions are made in real time, and time is a scarce resource particularly for publishers who are concerned with content load times. Policies for advertisement placement must execute very quickly once content is requested; this requires policies to either be pre-computed and accessed as needed, or for the policy execution to be very efficient. We formulate a stochastic optimization problem for per publisher ad sequencing with binding latency constraints. Within our context an ad request lifecycle is modeled as a sequence of one by one solicitations (OBOS) subprocesses/lifecycle stages. From the viewpoint of a supply side platform (SSP) (an entity acting in proxy for a collection of publishers), the duration/span of a given lifecycle stage/subprocess is a stochastic variable. This stochasticity is due both to the stochasticity inherent in Internet delay times, and the lack of information regarding the decision processes of independent entities. In our work we model the problem facing the SSP, namely the problem of optimally or near-optimally choosing the next lifecycle stage of a given ad request lifecycle at any given time. We solve this problem to optimality (subject to the granularity of time) using a classic application of Richard Bellman's dynamic programming approach to the 0/1 Knapsack Problem. The DP approach does not scale to a large number of lifecycle stages/subprocesses so a sub-optimal approach is needed. We use our DP formulation to derive a focused real time dynamic programming (FRTDP) implementation, a heuristic method with optimality guarantees for solving our problem. We empirically evaluate (through simulation) the performance of our FRTDP implementation relative to both the DP implementation (for tractable instances) and to several alternative heuristics for intractable instances. Finally, we make the case that our work is usefully applicable to problems outside the domain of online advertising.
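The abstract explicitly grounds its exact method in Bellman-style dynamic programming for the 0/1 knapsack problem. As a reminder of that classic building block only (toy values and weights, not the ad-lifecycle formulation itself), a minimal sketch:

```python
def knapsack(values, weights, capacity):
    """Classic 0/1 knapsack DP: dp[w] = best value achievable with capacity w."""
    dp = [0] * (capacity + 1)
    for v, wt in zip(values, weights):
        # Iterate capacities downward so each item is used at most once.
        for w in range(capacity, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + v)
    return dp[capacity]

# Toy instance: think of items as lifecycle stages with a value
# (expected revenue) and a weight (expected latency), under a latency budget.
print(knapsack(values=[60, 100, 120], weights=[10, 20, 30], capacity=50))  # 220
```

The FRTDP heuristic the thesis develops becomes necessary precisely because this table grows too large when the number of stages and the time granularity increase.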
APA, Harvard, Vancouver, ISO, and other styles
45

Fancellu, Federico. "Computational models for multilingual negation scope detection." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/33038.

Full text
Abstract:
Negation is a common property of languages: there are few languages, if any, that lack means to revert the truth-value of a statement. A challenge to cross-lingual studies of negation lies in the fact that languages encode and use it in different ways. Although this variation has been extensively researched in linguistics, little has been done in automated language processing. In particular, we lack computational models of processing negation that can be generalized across languages; we even lack knowledge of what the development of such models would require. These models, however, can be built by means of existing cross-lingual resources, even when annotated data for a language other than English is not available. This thesis shows this in the context of detecting string-level negation scope, i.e. the set of tokens in a sentence whose meaning is affected by a negation marker (e.g. 'not'). Our contribution has two parts. First, we investigate the scenario where annotated training data is available. We show that Bi-directional Long Short Term Memory (BiLSTM) networks are state-of-the-art models whose features can be generalized across languages. We also show that these models suffer from genre effects and that, for most of the corpora we have experimented with, high performance is simply an artifact of the annotation styles, where negation scope is often a span of text delimited by punctuation. Second, we investigate the scenario where annotated data is available in only one language, experimenting with model transfer. To test our approach, we first build NEGPAR, a parallel corpus annotated for negation, where pre-existing annotations on English sentences have been edited and extended to Chinese translations. We then show that transferring a model for negation scope detection across languages is possible by means of structured neural models where negation scope is detected on top of a cross-linguistically consistent representation, Universal Dependencies. On the other hand, we find that cross-lingual lexical information helps very little with performance. Finally, error analysis shows that performance is better when a negation marker is in the same dependency substructure as its scope, and that some of the phenomena related to negation scope requiring lexical knowledge are still not captured correctly. In the conclusions, we tie together the contributions of this thesis and point future work towards representing negation scope across languages at the level of logical form as well.
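A minimal PyTorch sketch of the BiLSTM sequence-tagger family used for scope detection is shown below; the vocabulary size, dimensions, and the binary in/out-of-scope tagging scheme are illustrative assumptions, not the thesis's exact architecture:

```python
import torch
import torch.nn as nn

class BiLSTMScopeTagger(nn.Module):
    """Tags each token as inside (1) or outside (0) a negation scope."""
    def __init__(self, vocab_size=10000, emb_dim=100, hidden=200):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.out = nn.Linear(2 * hidden, 2)  # 2 directions -> 2 classes

    def forward(self, token_ids):
        h, _ = self.lstm(self.embed(token_ids))
        return self.out(h)  # (batch, seq_len, 2) per-token logits

model = BiLSTMScopeTagger()
tokens = torch.randint(0, 10000, (1, 7))  # one hypothetical 7-token sentence
logits = model(tokens)
print(logits.argmax(-1))                  # predicted in/out-of-scope tags
```

The cross-lingual variants in the thesis additionally condition such a tagger on Universal Dependencies structure rather than on the raw token sequence alone.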
APA, Harvard, Vancouver, ISO, and other styles
46

Cittern, David. "Computational models of attachment and self-attachment." Thesis, Imperial College London, 2016. http://hdl.handle.net/10044/1/45314.

Full text
Abstract:
We explore, using a variety of models grounded in computational neuroscience, the dynamics of attachment formation and change. In the first part of the thesis we consider the formation of the traditional organised forms of attachment (as defined by Mary Ainsworth) within the context of the free energy principle, showing how each type of attachment might arise in infant agents who minimise free energy over interoceptive states while interacting with caregivers with varying responsiveness. We show how exteroceptive cues (in the form of disrupted affective communication from the caregiver) can result in disorganised forms of attachment (as first uncovered by Mary Main) in infants of caregivers who consistently increase stress on approach, but can have an organising (towards ambivalence) effect in infants of inconsistent caregivers. The second part of the thesis concerns Self-Attachment: a new self-administrable attachment-based psychotherapy recently introduced by Abbas Edalat, which aims to induce neural plasticity in order to retrain an individual's suboptimal attachment schema. We begin with a model of the hypothesised neurobiological underpinnings of the Self-Attachment bonding protocols, which are concerned with the formation of an abstract, self-directed bond. Finally, using neuroscientific findings related to empathy and the self-other distinction within the context of pain, we propose a simple spiking neural model for how empathic states might serve to motivate application of the aforementioned bonding protocols.
APA, Harvard, Vancouver, ISO, and other styles
47

Ahmad, Faysal B. "Computational and biophysical models of the brain." Thesis, University of Oxford, 2015. https://ora.ox.ac.uk/objects/uuid:7395e8af-0a12-4304-88a3-52e3a0d20ec5.

Full text
Abstract:
Widely distributed brain networks display highly coherent activity at rest. In this work, we combined bottom-up and top-down approaches to investigate the dynamics and underlying mechanisms of this spontaneous activity. We developed a realistic network model to simulate resting-state data, which incorporates biophysical regional dynamics, empirical brain connectivity, time delays, and background noise. At moderately weak coupling strengths, the model produces spontaneous metastable oscillatory states and a novel form of frequency depression, resulting in transient synchronizations between brain regions at reduced collective frequencies. We used fixed and sliding-window correlation approaches on the power of band-limited MEG data, and show that brain regions exhibit significant functional connectivity (FC) in the alpha and beta frequency bands on slow (>1 s) time scales. We also show that temporal non-stationarity and bistability in FC occur in the same pairs of brain areas, and in the same frequency bands, as stationary measures of FC. We find that the network model reproduces the same frequency dependency, time scales, and non-stationary nature of FC as we found in real MEG data. Furthermore, seed-based correlations and independent component analysis also reveal a similar spatial profile of FC in empirical and simulated data, with the existence of widely distributed transient resting-state networks in the same frequency bands. Finally, we used the network model simulations to evaluate a range of network estimation methods, and found that often the simplest linear measures perform best and that some of the common non-linear measures can give erroneous estimates. Overall, our results suggest that structured interactions between brain regions in the presence of delays and noise result in spontaneous synchronizations leading to organized power fluctuations across brain regions, and that some of the simplest statistical measures provide excellent estimates of this connectivity. Our work also highlights the potential of computational models in exploring neural mechanisms.
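The model ingredients listed above (regional oscillators, empirical coupling, noise, delays) are often illustrated with coupled phase oscillators. The sketch below implements a noisy Kuramoto network as a stand-in only: it uses a random coupling matrix instead of empirical connectivity and omits delays, so it shows the flavour of such simulations rather than the thesis's biophysical model:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10                                    # number of brain regions (toy)
C = rng.uniform(0, 1, (n, n))             # stand-in for empirical connectivity
np.fill_diagonal(C, 0)
omega = 2 * np.pi * rng.normal(10, 1, n)  # natural frequencies ~10 Hz (alpha)

dt, steps, k, noise = 1e-3, 5000, 0.5, 2.0
theta = rng.uniform(0, 2 * np.pi, n)
sync = np.empty(steps)
for t in range(steps):
    # Kuramoto coupling: sum_j C_ij * sin(theta_j - theta_i) for each i.
    coupling = (C * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta += dt * (omega + k * coupling) \
             + np.sqrt(dt) * noise * rng.standard_normal(n)
    sync[t] = abs(np.exp(1j * theta).mean())  # Kuramoto order parameter

print("mean synchrony:", sync[1000:].mean())
```

Transient rises and falls of the order parameter over time are the kind of metastable synchronization the abstract describes, here driven purely by noise and heterogeneous frequencies.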
APA, Harvard, Vancouver, ISO, and other styles
48

Weis, Michael Christian. "Computational Models of the Mammalian Cell Cycle." Case Western Reserve University School of Graduate Studies / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=case1323278159.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Tang, Chao. "Computational models for mining online drug reviews." HKBU Institutional Repository, 2014. https://repository.hkbu.edu.hk/etd_oa/87.

Full text
Abstract:
Healthcare social media has been emerging in recent years with increasing attention on people's health. Online review websites are not only diversified across medicines, hospitals, and doctors but also abundant in number. To discover knowledge from these online reviews, several computational models are proposed. Online healthcare review websites face challenges from conflicts of interest among various healthcare stakeholders. To avoid legal complaints and better sustain themselves under such circumstances, we propose a decoupling approach for designing healthcare review websites. Objective components such as medical condition and treatment remain as the primary parts, as they are generic, impersonal and directly related to patients themselves. Subjective components, however, such as comments on doctors or hospitals, are decoupled as secondary parts for sensitive and controversial information and are optional to reviewers. Our proposed approach shows better flexibility in managing content at different levels of detail and in balancing reviewers' right of expression with that of other stakeholders. To identify the patient-reported adverse reactions in drug reviews, we propose a consumer-oriented coding scheme using WordNet synonyms and derivationally related forms. A significant discrepancy in the incidence of adverse reactions is discovered between online reviews and clinical trials. We propose an adverse reaction report ratio model for the integrated interpretation of adverse reactions reported in online reviews versus those from clinical trials. Our estimation of average adverse reactions shows high correlation with the drug acceptability score obtained from a large-scale meta-analysis. To investigate the impact of key adverse reactions from the patients' perspective, we propose a topic model named Fisher's Linear Discriminant Analysis Projected Nonnegative Matrix Factorization (FLDA-projected-NMF) for discovering discriminative features and topics with additional class information. With the satisfaction scores provided in the reviews, discriminative features and topics on satisfaction are discovered, and the polarities of adverse reactions are estimated based on the discriminative feature weights. Discriminative features and topics on medication duration and on age group are obtained as well. Our method outperforms other supervised methods in the evaluation of topic sentiment score and topic interpretation measured by entropy. Patient-reported adverse reaction terms are mined from reviews with comment class labels. Some new adverse reactions to depression drugs and statin drugs are also discovered. To further study patients' behaviours, we use structural equation modelling to study the relationship of factors in patients' treatment experience with patients' quality of life. In the covariance model, most adverse reactions are found to have small covariance, except nausea, headache and dizziness. In the measurement model, the coefficients of individual adverse reactions on the latent adverse reaction are correlated with the incidence of adverse reactions. In the structural model, we model the relationship of latent adverse reaction, rating score, positive sentiment and negative sentiment. Comparison between the measurement models of the rating scores of the depression drug and the statin drug shows that there could be latent factors accounting for the variances of the latent rating, which shows correlations with the severity of adverse reactions.
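The FLDA-projected-NMF model itself is the thesis's contribution and is not reproduced here. The sketch below shows plain NMF topic extraction on a toy term-document matrix with scikit-learn, i.e. the baseline that the proposed discriminative variant extends; the review corpus is entirely invented:

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import CountVectorizer

# Tiny invented drug-review corpus.
reviews = [
    "nausea and headache after two weeks on this drug",
    "severe dizziness and nausea, stopped the medication",
    "mood improved a lot, sleep is much better now",
    "feeling better, depression lifted within a month",
]
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(reviews)           # term-document counts

nmf = NMF(n_components=2, init="nndsvd", random_state=0)
W = nmf.fit_transform(X)                 # document-topic weights
H = nmf.components_                      # topic-term weights

terms = vec.get_feature_names_out()
for k, topic in enumerate(H):
    top = [terms[i] for i in topic.argsort()[::-1][:4]]
    print("topic %d:" % k, ", ".join(top))
```

The FLDA projection in the thesis additionally pushes the factorization to separate topics along class labels such as satisfaction scores, which plain NMF ignores.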
APA, Harvard, Vancouver, ISO, and other styles
50

Li, Fei Fei (Pietro Perona, adviser). "Visual recognition: computational models and human psychophysics." Diss., Pasadena, Calif.: California Institute of Technology, 2005. http://resolver.caltech.edu/CaltechETD:etd-06022005-150332.

Full text
APA, Harvard, Vancouver, ISO, and other styles