Dissertations / Theses on the topic 'Models of adaptation'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Models of adaptation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Pennings, Pleuni. "Models of adaptation and speciation." Diss., LMU, 2007. http://nbn-resolving.de/urn:nbn:de:bvb:19-66567.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Wallin, Johan. "Dose Adaptation Based on Pharmacometric Models." Doctoral thesis, Uppsala universitet, Institutionen för farmaceutisk biovetenskap, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-100569.

Full text
Abstract:
Many drugs exhibit major variability in both pharmacokinetic (PK) and pharmacodynamic (PD) parameters that prevents the use of the same dose for all patients. Variability can occur both between patients (IIV) and within patients over the course of time (IOV). For a drug with a narrow therapeutic range and substantial IIV, dose selection may require individual adaptation. Adaptation can be made either before (a priori) or after (a posteriori) the first drug administration. The former implies basing the dose on prior information known to be influential, such as kidney function indicators, weight or concomitant medication, whereas a posteriori dose adaptations are based on post-treatment observations. Often individualization cannot be based on the clinical outcome itself. In such cases, drug concentrations or biomarkers may be valuable for dose individualization. In this thesis, two therapeutic areas where dosing is critical have been investigated regarding the possibilities of a priori and a posteriori dose adaptation: anticancer treatment, where myelosuppression is dose-limiting, and tacrolimus used for immunosuppression in paediatric transplantation. For tacrolimus, previously published models were found to be of little value for dose adaptation in the early critical days post-transplantation. New PK models were developed and used to suggest new dosing regimens tailored for the paediatric population, recognizing the changing pharmacokinetics in the early period post-transplantation. For several anticancer drugs, covariates were identified that partly explained IIV in myelosuppression. IOV was found to be lower than IIV, which implies that individual a posteriori dose adaptations can be valuable. Dose adaptation, using Bayesian principles in order to simultaneously minimise the risk of severe toxicity and of subtherapeutic levels, was evaluated using simulations. The type and amount of data needed, as well as the variability parameters influencing the outcome, were evaluated. Results show that drug concentrations are of little value if neutrophil counts are available. The models discussed in this thesis have been implemented in MS Excel macros for Bayesian forecasting, to allow widespread distribution to clinical settings without necessitating access to specific statistical software.
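The a posteriori adjustment described above rests on Bayesian maximum a posteriori (MAP) estimation of an individual's PK parameters from sparse observations. The sketch below is a minimal, hypothetical illustration of that principle, not the thesis's models: a one-compartment IV-bolus model with a log-normal prior on clearance, MAP estimation from a few concentration measurements, and selection of the next dose to hit an assumed target exposure. All parameter values, sampling times and the target are assumptions made for illustration only.

import numpy as np
from scipy.optimize import minimize

# Hypothetical population priors (illustrative values only)
CL_pop, omega_CL = 5.0, 0.3   # typical clearance (L/h) and IIV sd on the log scale
V = 50.0                      # volume of distribution (L), assumed known
sigma = 0.15                  # residual error sd on the log-concentration scale

def conc(dose, t, cl):
    # One-compartment IV bolus: C(t) = (dose/V) * exp(-(cl/V) * t)
    return (dose / V) * np.exp(-(cl / V) * t)

def neg_log_posterior(log_cl, dose, times, obs):
    cl = np.exp(log_cl[0])
    pred = conc(dose, times, cl)
    loglik = -0.5 * np.sum(((np.log(obs) - np.log(pred)) / sigma) ** 2)
    logprior = -0.5 * ((log_cl[0] - np.log(CL_pop)) / omega_CL) ** 2
    return -(loglik + logprior)

# Sparse observations after the first dose (hypothetical data)
dose0, times, obs = 500.0, np.array([2.0, 8.0, 24.0]), np.array([8.1, 6.3, 2.9])
fit = minimize(neg_log_posterior, x0=[np.log(CL_pop)], args=(dose0, times, obs))
cl_map = float(np.exp(fit.x[0]))

# A posteriori adaptation: choose the next dose for an assumed target AUC,
# using AUC = dose / CL for this simple model.
target_auc = 100.0
next_dose = target_auc * cl_map
print(f"MAP clearance {cl_map:.2f} L/h -> adapted dose {next_dose:.0f} mg")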
APA, Harvard, Vancouver, ISO, and other styles
3

Xu, Jiaolong. "Domain adaptation of deformable part-based models." Doctoral thesis, Universitat Autònoma de Barcelona, 2015. http://hdl.handle.net/10803/290266.

Full text
Abstract:
On-board pedestrian detection is crucial for Advanced Driver Assistance Systems (ADAS). An accurate classification is fundamental for vision-based pedestrian detection. The underlying assumption for learning classifiers is that the training set and the deployment environment (testing) follow the same probability distribution regarding the features used by the classifiers. However, in practice, there are different reasons that can break this constancy assumption. Accordingly, reusing existing classifiers by adapting them from the previous training environment (source domain) to the new testing one (target domain) is an approach with increasing acceptance in the computer vision community. In this thesis we focus on the domain adaptation of deformable part-based models (DPMs) for pedestrian detection. As a proof of concept, we use a computer-graphics-based synthetic dataset, i.e. a virtual world, as the source domain, and adapt the virtual-world trained DPM detector to various real-world datasets. We start by exploiting the maximum detection accuracy of the virtual-world trained DPM. Even so, when operating on various real-world datasets, the virtual-world trained detector still suffers from accuracy degradation due to the domain gap between the virtual and real worlds. We then focus on domain adaptation of the DPM. As a first step, we consider single-source and single-target domain adaptation and propose two batch learning methods, namely A-SSVM and SA-SSVM. Later, we further consider leveraging multiple target (sub-)domains for progressive domain adaptation and propose a hierarchical adaptive structured SVM (HA-SSVM) for optimization. Finally, we extend HA-SSVM for the challenging online domain adaptation problem, aiming at making the detector adapt to the target domain online, without any human intervention. None of the methods proposed in this thesis requires revisiting source-domain data. The evaluations are done on the Caltech pedestrian detection benchmark. Results show that SA-SSVM slightly outperforms A-SSVM and avoids accuracy drops as high as 15 points compared with a non-adapted detector. The hierarchical model learned by HA-SSVM further boosts the domain adaptation performance. Finally, the online domain adaptation method has demonstrated that it can achieve accuracy comparable to the batch-learned models while not requiring manually labelled target-domain examples. Domain adaptation for pedestrian detection is of paramount importance and a relatively unexplored area. We humbly hope the work in this thesis could provide foundations for future work in this area.
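The adaptation methods above regularize the adapted detector toward the source (virtual-world) model instead of retraining it from scratch. As a hedged, simplified illustration of that idea (a binary-SVM reduction, not the exact structured A-SSVM/SA-SSVM objectives of the thesis), the adapted weight vector may be obtained as

\min_{\mathbf{w}} \; \tfrac{1}{2}\,\lVert \mathbf{w} - \mathbf{w}_0 \rVert^2 \; + \; C \sum_{i=1}^{N_{\mathrm{tgt}}} \max\!\bigl(0,\; 1 - y_i\,\mathbf{w}^{\top}\mathbf{x}_i\bigr),

where \mathbf{w}_0 is the detector learned on the source domain and the sum runs only over target-domain samples, which is why the source-domain data never need to be revisited.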
APA, Harvard, Vancouver, ISO, and other styles
4

Pirrotta, Elizabeth. "Testing chromatic adaptation models using object colors." Online version of thesis, 1994. http://hdl.handle.net/1850/11674.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Nikolaidis, Stefanos. "Mathematical Models of Adaptation in Human-Robot Collaboration." Research Showcase @ CMU, 2017. http://repository.cmu.edu/dissertations/1121.

Full text
Abstract:
While much work in human-robot interaction has focused on leader-follower teamwork models, the recent advancement of robotic systems that have access to vast amounts of information suggests the need for robots that take into account the quality of the human decision making and actively guide people towards better ways of doing their task. This thesis proposes an equal-partners model, where human and robot engage in a dance of inference and action, and focuses on one particular instance of this dance: the robot adapts its own actions by estimating the probability of the human adapting to the robot. We start with a bounded-memory model of human adaptation parameterized by the human adaptability - the probability of the human switching towards a strategy newly demonstrated by the robot. We then examine more subtle forms of adaptation, where the human teammate adapts to the robot without replicating the robot's policy. We model the interaction as a repeated game, and present an optimal policy computation algorithm whose complexity is linear in the number of robot actions. Integrating these models into robot action selection allows for human-robot mutual adaptation. Human subject experiments in a variety of collaboration and shared-autonomy settings show that mutual adaptation significantly improves human-robot team performance, compared to one-way robot adaptation to the human.
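The bounded-memory adaptability model mentioned above can be illustrated with a hedged toy sketch (illustrative only, not the thesis's formulation): the human switches to the strategy the robot just demonstrated with probability equal to their adaptability, and the robot compares the expected team payoff of insisting on its preferred strategy against adapting to the human's current one. All payoff numbers below are assumptions.

# Toy bounded-memory mutual-adaptation model (illustrative assumptions only).
# reward[(human_strategy, robot_strategy)] -> team reward
reward = {("A", "A"): 8.0, ("B", "B"): 10.0, ("A", "B"): 2.0, ("B", "A"): 2.0}

def expected_reward_if_robot_insists(human_strategy, robot_strategy, adaptability):
    """Human switches to the robot's demonstrated strategy with prob = adaptability."""
    switch = reward[(robot_strategy, robot_strategy)]
    stay = reward[(human_strategy, robot_strategy)]
    return adaptability * switch + (1.0 - adaptability) * stay

def robot_action(human_strategy, robot_preferred, adaptability):
    insist = expected_reward_if_robot_insists(human_strategy, robot_preferred, adaptability)
    adapt = reward[(human_strategy, human_strategy)]   # robot follows the human instead
    return ("insist", insist) if insist >= adapt else ("adapt", adapt)

for alpha in (0.1, 0.5, 0.9):
    print(alpha, robot_action(human_strategy="A", robot_preferred="B", adaptability=alpha))

With a low adaptability the robot is better off adapting to the human; with a high adaptability it pays to guide the human towards the better joint strategy, which is the intuition behind mutual adaptation.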
APA, Harvard, Vancouver, ISO, and other styles
6

Acosta, Padilla Francisco Javier. "Self-adaptation for Internet of things applications." Thesis, Rennes 1, 2016. http://www.theses.fr/2016REN1S094/document.

Full text
Abstract:
The Internet of Things (IoT) is gradually covering every aspect of our lives. As these systems become more pervasive, the need to manage this complex infrastructure comes with several challenges. Indeed, many small interconnected devices now provide more than one service in several aspects of our everyday life, and they need to be adapted to new contexts without interrupting such services. However, this new computing system differs from classical Internet systems mainly in the type, physical size and access of the nodes. Thus, the typical methods used to manage the distributed software layer of large distributed systems cannot be employed in this context. Indeed, this is due to the very different capacities in computing power and network connectivity, which are very constrained for IoT devices. Moreover, the complexity that was previously managed by experts in several fields, such as embedded systems and Wireless Sensor Networks (WSN), is now increased by the larger quantity and heterogeneity of the nodes' software and hardware. Therefore, we need efficient methods to manage the software layer of these systems, taking into account the very limited resources. This underlying hardware infrastructure raises new challenges in the way we administrate the software layer of these systems. These challenges can be divided into: intra-node, where we face the limited memory and CPU of IoT nodes in order to manage the software layer; and inter-node, where a new way to distribute the updates is needed, due to the different network topology and the cost in energy for battery-powered devices. Indeed, the limited computing power and battery life of each node, combined with the very distributed nature of these systems, greatly adds complexity to the management of the distributed software layer. Software reconfiguration of nodes in the Internet of Things is a major concern for various application fields. In particular, distributing the code of updated or new software features to their final node destination in order to adapt the system to new requirements has a huge impact on energy consumption. Most current algorithms for disseminating code over the air (OTA) are meant to disseminate a complete firmware through small chunks and are often implemented at the network layer, thus ignoring all guiding information from the application layer. First contribution: a models@runtime engine able to represent a running IoT application on resource-constrained nodes. The transformation of the Kevoree meta-model into C code to meet the specific memory constraints of an IoT device was performed, as well as the proposition of modelling tools to manipulate a model@runtime. Second contribution: component decoupling of an IoT system, as well as an efficient component distribution algorithm. Decoupling an application into components in the context of the IoT facilitates its representation in the model@runtime, while it provides a way to easily change its behaviour by adding/removing components and changing their parameters. In addition, a mechanism to distribute such components using a new algorithm, called Calpulli, is proposed.
APA, Harvard, Vancouver, ISO, and other styles
7

Gurdamar, Emre. "Adaptation Of Turbulence Models To A Navier-stokes Solver." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12606568/index.pdf.

Full text
Abstract:
This thesis presents the implementation of several two-equation turbulence models into a finite difference, two- and three-dimensional Navier-Stokes solver. Theories of turbulence modeling and the historical development of these theories are briefly investigated. Turbulence models that are defined by two partial differential equations, based on the k-ω and k-ε models, having different correlations, constants and boundary conditions, are selected to be adapted into the base solver. The basic equations of the base Navier-Stokes solver into which the turbulence models are implemented are presented, briefly explaining the outputs obtained from the solver. Numerical work regarding the implementation of turbulence models into the base solver is given in steps of non-dimensionalization, transformation of the equations into a generalized coordinate system, numerical scheme, discretization, boundary and initial conditions, and limitations. These sections of the implementation are investigated and presented in detail, providing every step of the work accomplished. Certain trial problems are solved and the outputs are compared with experimental data. Solutions for fluid flow over a flat plate, in free shear, over a cylinder and over an airfoil are demonstrated. Airfoil validation test cases are analyzed in detail. For three-dimensional applications, computation of flow over a wing is accomplished and pressure distributions from certain sections are compared with experimental data.
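For reference, the k-ε family of two-equation models mentioned above transports the turbulent kinetic energy k and its dissipation rate ε. A commonly cited high-Reynolds-number form (with the usual Launder-Spalding constants, which may differ from the specific correlations adapted in the thesis) is:

\frac{\partial (\rho k)}{\partial t} + \frac{\partial (\rho k u_j)}{\partial x_j}
  = \frac{\partial}{\partial x_j}\!\left[\left(\mu + \frac{\mu_t}{\sigma_k}\right)\frac{\partial k}{\partial x_j}\right] + P_k - \rho\varepsilon,
\qquad
\frac{\partial (\rho \varepsilon)}{\partial t} + \frac{\partial (\rho \varepsilon u_j)}{\partial x_j}
  = \frac{\partial}{\partial x_j}\!\left[\left(\mu + \frac{\mu_t}{\sigma_\varepsilon}\right)\frac{\partial \varepsilon}{\partial x_j}\right]
  + C_{1\varepsilon}\frac{\varepsilon}{k}P_k - C_{2\varepsilon}\rho\frac{\varepsilon^2}{k},

with eddy viscosity \mu_t = \rho C_\mu k^2/\varepsilon and constants C_\mu = 0.09, C_{1\varepsilon} = 1.44, C_{2\varepsilon} = 1.92, \sigma_k = 1.0, \sigma_\varepsilon = 1.3.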
APA, Harvard, Vancouver, ISO, and other styles
8

Clarkson, P. R. "Adaptation of statistical language models for automatic speech recognition." Thesis, University of Cambridge, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.597745.

Full text
Abstract:
Statistical language models encode linguistic information in such a way as to be useful to systems which process human language. Such systems include those for optical character recognition and machine translation. Currently, however, the most common application of language modelling is in automatic speech recognition, and it is this that forms the focus of this thesis. Most current speech recognition systems are dedicated to one specific task (for example, the recognition of broadcast news), and thus use a language model which has been trained on text which is appropriate to that task. If, however, one wants to perform recognition on more general language, then creating an appropriate language model is far from straightforward. A task-specific language model will often perform very badly on language from a different domain, whereas a model trained on text from many diverse styles of language might perform better in general, but will not be especially well suited to any particular domain. Thus the idea of an adaptive language model whose parameters automatically adjust to the current style of language is an appealing one. In this thesis, two adaptive language models are investigated. The first is a mixture-based model. The training text is partitioned according to the style of text, and a separate language model is constructed for each component. Each component is assigned a weighting according to its performance at modelling the observed text, and a final language model is constructed as the weighted sum of the mixture components. The second approach is based on a cache of recent words. Previous work has shown that words that have occurred recently have a higher probability of occurring in the immediate future than would be predicted by a standard trigram language model. This thesis investigates the hypothesis that more recent words should be considered more significant within the cache by implementing a cache in which a word's recurrence probability decays exponentially over time. The problem of how to predict the effect of a particular language model on speech recognition accuracy is also addressed in this thesis. The results presented here, as well as those of other recent research, suggest that perplexity, the most commonly used method of evaluating language models, is not as well correlated with word error rate as was once thought. This thesis investigates the connection between a language model's perplexity and its effect on speech recognition performance, and describes the development of alternative measures of a language model's quality which are better correlated with word error rate. Finally, it is shown how the recognition performance which is achieved using mixture-based language models can be improved by optimising the mixture weights with respect to these new measures.
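The exponentially decaying cache described above can be sketched as follows. This is a hedged toy illustration under assumed parameter values, not the thesis's implementation; base_prob stands in for a standard n-gram model and the decay rate and interpolation weight are arbitrary.

import math
from collections import defaultdict

def cache_prob(word, history, decay=0.005):
    """Exponentially decaying cache: recent occurrences count more than old ones."""
    weights = defaultdict(float)
    for age, w in enumerate(reversed(history)):      # age 0 = most recent word
        weights[w] += math.exp(-decay * age)
    total = sum(weights.values())
    return weights[word] / total if total > 0 else 0.0

def adapted_prob(word, history, base_prob, lam=0.1):
    """Interpolate a static n-gram estimate with the decaying cache estimate."""
    return (1.0 - lam) * base_prob(word, history) + lam * cache_prob(word, history)

# Usage with a dummy unigram "base model" (an assumption for illustration).
vocab = {"the": 0.05, "language": 0.001, "model": 0.001, "cache": 0.0005}
base = lambda w, h: vocab.get(w, 1e-6)
history = "the cache based language model adapts the language model".split()
print(adapted_prob("language", history, base))

The mixture-based model from the abstract plays the analogous role at the component level: each component model's probability is weighted by its recent performance and the weighted sum is used as the adapted estimate.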
APA, Harvard, Vancouver, ISO, and other styles
9

Baptista, Adérito Herculano Sarmento. "Dynamic adaptation of interaction models for stateful web services." Master's thesis, Faculdade de Ciências e Tecnologia, 2011. http://hdl.handle.net/10362/12042.

Full text
Abstract:
Dissertation submitted to obtain the Master's degree in Informatics Engineering (Engenharia Informática)
Wireless Sensor Networks (WSNs) are accepted as one of the fundamental technologies for current and future science in all domains, where WSNs formed from either static or mobile sensor devices allow low-cost, high-resolution sensing of the environment. This opens the possibility of developing new kinds of crucial applications or providing more accurate data to more traditional ones. For instance, examples may range from large-scale WSNs deployed on oceans contributing to weather prediction simulations; to a high number of diverse sensor devices deployed over a geographical area at different heights from the ground for collecting more accurate data for cyclic wildfire spread simulations; or to networks of mobile phone devices contributing to urban traffic management via Participatory Sensing applications. In order to simplify data access, network parameterisation, and WSN aggregation, WSNs have been integrated in Web environments, namely through high-level standard interfaces like Web services. However, the typical interface access usually supports a restricted number of interaction models, and the available mechanisms for their run-time adaptation are still scarce. Nevertheless, applications demand a richer and more flexible control over interface accesses – e.g. such accesses may depend on contextual information and, consequently, may evolve in time. Additionally, Web services have become increasingly popular in recent years, and their usage led to the need of aggregating and coordinating them and also of representing state in between Web service invocations. Current standard composition languages for Web services (WS-BPEL, WSCI, BPML) deal with the traditional forms of service aggregation and coordination, while the WS-Resource Framework (WSRF) deals with accessing services with state concerns (relating to both executing applications and the runtime environment). Subjacent to the notion of service coordination is the need to capture dependencies among them (through the workflow concept, for instance), reuse common interaction models, e.g. embodied in common behavioural patterns like Client/Server, Publish/Subscriber, Stream, and respond to dynamic events in the system (novel user requests, service failures, etc.). Dynamic adaptation, in particular, is a pressing requirement for current service-based systems due to the increasing trend towards XaaS ("everything as a service"), which promises to reduce costs on application development and infrastructure support, as is already apparent in the Cloud computing domain. Therefore, self-adaptive (or dynamic/adaptive) systems present themselves as a solution to the above concerns. However, since they comprise a vast area, this thesis only focuses on self-adaptive software. Concretely, we propose a novel model for dynamic interactions, in particular with Stateful Web Services, i.e. services interfacing continued activities. The solution consists of a middleware prototype based on pattern abstractions which may be able to provide (novel) richer interaction models and a few structured dynamic adaptation mechanisms, which are captured in the context of a "Session" abstraction. The middleware was implemented and uses a pre-existing framework supporting Web-enabled access to WSNs, and some evaluation scenarios were tested in this setting. Namely, this area was chosen as the application domain that contextualizes this work as it contributes to the development of increasingly important applications needing high-resolution and low-cost sensing of the environment.
The result is a novel way to specify richer and dynamic modes of accessing and acquiring data generated by WSNs.
This work was partially funded by the Centro de Informática e Tecnologias da Informação (CITI) and by the Fundação para a Ciência e a Tecnologia (FCT/MCTES) through research projects.
APA, Harvard, Vancouver, ISO, and other styles
10

Ahadi-Sarkani, Seyed Mohammad. "Bayesian and predictive techniques for speaker adaptation." Thesis, University of Cambridge, 1996. https://www.repository.cam.ac.uk/handle/1810/273100.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Paule, Inès. "Adaptation of dosing regimen of chemotherapies based on pharmacodynamic models." PhD thesis, Université Claude Bernard - Lyon I, 2011. http://tel.archives-ouvertes.fr/tel-00846454.

Full text
Abstract:
There is high variability in response to cancer chemotherapies among patients. Its sources are diverse: genetic, physiological, comorbidities, concomitant medications, environment, compliance, etc. As the therapeutic window of anticancer drugs is usually narrow, such variability may have serious consequences: severe (even life-threatening) toxicities or lack of therapeutic effect. Therefore, various approaches to individually tailor treatments and dosing regimens have been developed: a priori (based on genetic information, body size, drug elimination functions, etc.) and a posteriori (that is, using measurements of drug exposure and/or effects). Mixed-effects modelling of pharmacokinetics and pharmacodynamics (PK-PD), combined with Bayesian maximum a posteriori probability estimation of individual effects, is the method of choice for a posteriori adjustments of dosing regimens. In this thesis, a novel approach to adjust the doses on the basis of predictions given by a model for ordered categorical observations of toxicity was developed and investigated by computer simulations. More technical aspects concerning the estimation of individual parameters were analysed to determine the factors of good performance of the method. This work was based on the example of capecitabine-induced hand-and-foot syndrome in the treatment of colorectal cancer. Moreover, a review of pharmacodynamic models for discrete data (categorical, count, time-to-event) was performed. Finally, PK-PD analyses of hydroxyurea in the treatment of sickle cell anemia were performed and used to compare different dosing regimens and determine the optimal measures for monitoring the treatment.
APA, Harvard, Vancouver, ISO, and other styles
12

Nanduri, Chandra Sekhara Srinivas. "Platform business models : incumbent adaptation perspectives subsequent to discontinuous changes." Diss., University of Pretoria, 2020. http://hdl.handle.net/2263/80492.

Full text
Abstract:
This thesis explored the impact of platforms on the South African banking, telecom and media industries, studying how the industry competes or collaborates with the phenomenon. Thus far, research has focused on non-existential threats, which allowed for long-term adaptation, and there is scant evidence about incumbent adaptation under discontinuous changes. This research looked at two key questions: (a) how discontinuous changes impact incumbents; and (b) how incumbents adapt their exploration and exploitation balance subsequent to discontinuous changes. A qualitative methodology was applied to answer these research questions. Semi-structured interviews were conducted with leaders and senior management involved in the organisational sense-making process to understand the phenomenon. Interview findings were analysed using thematic analysis to generate insights and meanings from the adaptation experiences. This study contributes to the literature by combining incumbent adaptation, discontinuous changes, and organisational design aspects based on in-depth interviews. There are four main findings: one, platforms were perceived as a threat, affirming past research; two, leadership assumes 3–5 years for full-scale adaptation before being entirely disrupted, supporting past research in the domain; three, contrary to the literature, which expects increased exploration during discontinuous changes, the finding that incumbents balance their exploration and exploitation initiatives is a significant revelation; four, the transformation journey was mostly led by Top Management Teams (TMT), who preferred to run these initiatives as a separate organisation. However, these incumbents are yet to achieve the much-talked-about network effects and the scale of digital-first ventures; whether their approach yields results or not, no oracle can tell.
Mini Dissertation (MPhil), Gordon Institute of Business Science (GIBS), University of Pretoria, 2020.
APA, Harvard, Vancouver, ISO, and other styles
13

Xu, Brian (Brian W.). "Combating fake news with adversarial domain adaptation and neural models." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/121689.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019.
Factually incorrect claims on the web and in social media can cause considerable damage to individuals and societies by misleading them. As we enter an era where it is easier than ever to disseminate "fake news" and other dubious claims, automatic fact checking becomes an essential tool to help people discern fact from fiction. In this thesis, we focus on two main tasks: fact checking, which involves classifying an input claim with respect to its veracity, and stance detection, which involves determining the perspective of a document with respect to a claim. For the fact checking task, we present Bidirectional Long Short-Term Memory (Bi-LSTM) and Convolutional Neural Network (CNN) based models and conduct our experiments on the LIAR dataset [Wang, 2017], a recently released fact checking dataset. Our model outperforms the state-of-the-art baseline on this dataset. For the stance detection task, we present bag-of-words (BOW) and CNN based models in hierarchy schemes. These architectures are then supplemented with an adversarial domain adaptation technique, which helps the models overcome dataset size limitations. We test the performance of these models by using the Fake News Challenge (FNC) [Pomerleau and Rao, 2017], the Fact Extraction and VERification (FEVER) [Thorne et al., 2018], and the Stanford Natural Language Inference (SNLI) [Bowman et al., 2015] datasets. Our experiments yielded a model which has state-of-the-art performance on FNC target data by using FEVER source data coupled with adversarial domain adaptation [Xu et al., 2018].
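Adversarial domain adaptation of the kind used above is commonly implemented with a gradient reversal layer between a shared feature extractor and a domain classifier, so that the learned features stay discriminative for stance while becoming indistinguishable across source and target domains. The following PyTorch-style sketch shows only the reversal mechanism; the layer sizes and names are assumptions, not the architecture of this thesis.

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies gradients by -lambda in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class AdversarialStanceModel(nn.Module):
    def __init__(self, in_dim=300, hidden=128, n_stances=4, n_domains=2, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.features = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.stance_head = nn.Linear(hidden, n_stances)    # trained on labelled source data
        self.domain_head = nn.Linear(hidden, n_domains)    # trained adversarially on both domains

    def forward(self, x):
        h = self.features(x)
        stance_logits = self.stance_head(h)
        domain_logits = self.domain_head(GradReverse.apply(h, self.lambd))
        return stance_logits, domain_logits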
APA, Harvard, Vancouver, ISO, and other styles
14

Haith, Adrian. "Computational models of motor adaptation under multiple classes of sensorimotor disturbance." Thesis, University of Edinburgh, 2009. http://hdl.handle.net/1842/3973.

Full text
Abstract:
The human motor system exhibits remarkable adaptability, enabling us to maintain high levels of performance despite ever-changing requirements. There are many potential sources of error during movement to which the motor system may need to adapt: the properties of our bodies or tools may vary over time, either at a dynamic or a kinematic level; our senses may become miscalibrated over time and mislead us as to the state of our bodies or the true location of an intended goal; the relationship between sensory stimuli and movement goals may change. Despite these many varied ways in which our movements may be disturbed, existing models of human motor adaptation have tended to assume just a single adaptive component. In this thesis, I argue that the motor system maintains multiple components of adaptation, corresponding to the multiple potential sources of error to which we are exposed. I outline some of the shortcomings of existing adaptation models in scenarios where multiple kinds of disturbances may be present - in particular examining how different distal learning problems associated with different classes of disturbance can affect adaptation within alternative cerebellar-based learning architectures - and outline the computational challenges associated with extending these existing models. Focusing on the specific problem in which the potential disturbances are miscalibrations of vision and proprioception and changes in arm dynamics during reaching, a unified model of sensory and motor adaptation is derived based on the principle of Bayesian estimation of the disturbances given noisy observations. This model is able to account parsimoniously for previously reported patterns of sensory and motor adaptation during exposure to shifted visual feedback. However, the model additionally makes the novel and surprising prediction that adaptation to a force field will also result in sensory adaptation. These predictions are confirmed experimentally. The success of the model strongly supports the idea that the motor system maintains multiple components of adaptation, which it updates according to the principles of Bayesian estimation.
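The unified model described above treats adaptation as Bayesian estimation of several candidate disturbances from a single noisy movement error. A hedged, minimal sketch of that idea is a linear-Gaussian (Kalman) update in which a visual miscalibration and a dynamic perturbation both contribute to the observed reach error, so a force-field error is partly credited to the sensory component; all noise values here are illustrative assumptions, not the thesis's parameters.

import numpy as np

# State: x = [visual shift, dynamics disturbance]; both contribute to the observed error.
A = np.eye(2)                      # disturbances assumed to persist between trials
H = np.array([[1.0, 1.0]])         # observed error = sum of both disturbances + noise
Q = np.diag([1e-4, 1e-3])          # assumed drift (process) variances
R = np.array([[0.05]])             # assumed observation noise variance

x = np.zeros((2, 1))               # prior mean: no disturbance
P = np.diag([0.01, 0.01])          # prior uncertainty

def kalman_step(x, P, y):
    """One trial: predict, then update both disturbance estimates from one error y."""
    x_pred, P_pred = A @ x, A @ P @ A.T + Q
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (np.array([[y]]) - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

for error in [0.8, 0.7, 0.75, 0.72]:        # hypothetical reach errors under a force field
    x, P = kalman_step(x, P, error)
print("estimated visual shift and dynamics disturbance:", x.ravel())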
APA, Harvard, Vancouver, ISO, and other styles
15

Streilein, Andrea Susan. "Making sense of change : how place-specific cultural models and experiential influencers are shaping understandings of climate change in two BC coastal communities." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/2647.

Full text
Abstract:
Global climate change has become the imminent issue of our time. Recent literature has stressed the pressing need for adaptation planning, particularly for communities that are most vulnerable to new climatic variations, such as resource-dependent and coastal communities. Yet, such cries for adaptation have often glossed over the need for prior examination of the underlying cultural mindsets of such communities. In response, this thesis has sought to examine the various factors that are influencing local understandings of global climate change by leaders in two British Columbia coastal communities, Port Alberni and the Tseshaht First Nation. Guided by a social (or ecological) constructionist lens and a phenomenological methodological approach, a series of in-depth interviews was conducted with the leadership, both formal and informal, of the two aforementioned B.C. communities during the summer of 2006. Although each community yielded distinct findings, the interviews captured richly nuanced descriptions of local environmental changes, which in turn played a sizeable role in shaping how the leaders conceptualized climate change. A plethora of place-specific historical, experiential and values-based factors interacted and moulded the many contextual cultural models (from tsunamis, to recycling, to colonial pasts, to reverence for nature) which were embedded within leaders' discussions of climate change. Following this core analysis, I explored the community capacity to manage and adapt to future changes by examining local strengths and challenges. The concluding chapter provided a reflection on the results and pointed to new directions.
APA, Harvard, Vancouver, ISO, and other styles
16

Algeri, Soliman. "Representation and adaptation of high level object-oriented models for reuse." Thesis, University of Liverpool, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.366676.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Eisenbarth, Thomas [Verfasser], and Bernhard [Akademischer Betreuer] Bauer. "Semantic Process Models: Transformation, Adaptation, Resource Consideration / Thomas Eisenbarth. Betreuer: Bernhard Bauer." Augsburg : Universität Augsburg, 2014. http://d-nb.info/1077703414/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Saman, Nariman Goran. "A Framework for Secure Structural Adaptation." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-78658.

Full text
Abstract:
A (self-)adaptive system is a system that can dynamically adapt its behavior or structure during execution in order to "adapt" to changes in its environment or in the system itself. From a security standpoint, there has been some research pertaining to (self-)adaptive systems in general, but not enough care has been shown towards the adaptation itself. The security of systems can be reasoned about using threat models to discover security issues in the system. Essentially, that entails abstracting away details not relevant to the security of the system in order to focus on the important security-related aspects. Threat models often enable us to reason about the security of a system quantitatively using security metrics. The structural adaptation process of a (self-)adaptive system occurs based on a reconfiguration plan, a set of steps to follow from the initial state (configuration) to the final state. Usually, the reconfiguration plan consists of multiple strategies for the structural adaptation process, and each strategy consists of several steps, with each step representing a specific configuration of the (self-)adaptive system. Different reconfiguration strategies have different security levels, as each strategy consists of a different sequence of configurations, each with its own security level. To the best of our knowledge, there exist no approaches which aim to guide the reconfiguration process in order to select the most secure available reconfiguration strategy, and the security issues associated with the structural reconfiguration process itself have not been explicitly studied. In this work, based on an in-depth literature survey, we propose several metrics to measure the security of configurations, reconfiguration strategies and reconfiguration plans based on graph-based threat models. Additionally, we have implemented a prototype to demonstrate our approach and automate the process. Finally, we have evaluated our approach based on a case study of our own making. The preliminary results tend to expose certain security issues during the structural adaptation process and exhibit the effectiveness of our proposed metrics.
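As a hedged illustration of the kind of metric proposed above (the concrete formulation here is hypothetical and not taken from the thesis), one can score each intermediate configuration from a graph-based threat model, score a reconfiguration strategy by its weakest intermediate step, and then select the strategy in the plan that maximizes that worst-case score.

# Hypothetical security scores per configuration, e.g. derived from shortest
# attack-path length in a graph-based threat model (higher = more secure).
config_score = {"c0": 0.9, "c1": 0.4, "c2": 0.7, "c3": 0.8, "c_final": 0.9}

def strategy_score(strategy):
    """Score a reconfiguration strategy by its least secure intermediate configuration."""
    return min(config_score[c] for c in strategy)

def most_secure_strategy(plan):
    """Pick, among all strategies in the reconfiguration plan, the one with the best worst case."""
    return max(plan, key=strategy_score)

plan = [
    ["c0", "c1", "c_final"],        # short strategy, but passes through a weak configuration
    ["c0", "c2", "c3", "c_final"],  # longer strategy that avoids the weak configuration
]
best = most_secure_strategy(plan)
print(best, strategy_score(best))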
APA, Harvard, Vancouver, ISO, and other styles
19

Reimann, Thomas. "Adaptation of Numerical Modeling Approaches for Karst Aquifer Characterization." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-108404.

Full text
Abstract:
Karst aquifers can be conceptualized as dual flow systems comprised of a low-conductivity matrix with embedded high-conductivity conduits / preferential flow zones. Discharge in conduits ranges from low-velocity laminar flow to high-velocity transitional and turbulent flow. Commonly employed continuum models do not account for the specific behavior of transitional and turbulent flow. In response to this limitation, enhancements have been made to MODFLOW, a commonly used groundwater flow model, by adding a discrete conduit network to the matrix continuum (hybrid model). The Conduit Flow Process (CFP) package is the latest realization of this model approach. CFP Mode 1 (CFPM1) computes laminar and turbulent flow in discrete conduits that are coupled to the laminar continuum model. CFP Mode 2 (CFPM2) accounts for turbulent flow in preferential flow layers by adapting the continuum model; laminar hydraulic conductivities are therefore converted into turbulent hydraulic conductivities. CFPM2 was further modified to consider steady turbulent pipe flow. Karst models based on CFPM2 potentially require less input data and computational effort than karst models based on CFPM1. Furthermore, CFPM2 integrates more easily into MODFLOW versions including, e.g., transport models. Parameter studies for a synthetic catchment demonstrate that continuum models with turbulent flow representation and an additional flow barrier between conduits and matrix can represent karst systems similarly to hybrid models. For the simulation of highly transient flow processes in karst conduit systems, i.e. during flood events, it is crucial to consider dynamics such as free-surface flow, wave propagation, and changes between pressurized and non-pressurized conduit flow. The coupled overland- and groundwater flow model MODBRANCH was therefore enhanced to consider unsteady and non-uniform flow processes in karst conduits. Flow in discrete conduits is simulated using the Saint-Venant equations for free-surface flow. In contrast to overland flow, the cross-sectional area of karst conduits is finite; accordingly, both pressurized and non-pressurized flow may occur within conduits. To simulate pressurized flow, a hypothetical, narrow, open-top slot (Preissmann slot) is added to the conduit crown, which allows the use of the free-surface flow equations for fully filled conduits. Beyond this, the model features a variable time step to account for wave speed variations, for example due to the transition from free-surface to pressurized flow. Parameter studies for a synthetic catchment demonstrate the significance of free-surface flow representation for variably filled conduits.
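The laminar-to-turbulent conduit behaviour described above can be illustrated with a hedged, simplified sketch of head loss in a single circular conduit: Hagen-Poiseuille-type friction (f = 64/Re) in the laminar range and Darcy-Weisbach with an explicit Swamee-Jain friction factor in the turbulent range. The geometry, roughness and fluid values are assumptions for illustration, not CFP's internal formulation.

import math

def head_loss(q, diameter, length, roughness=1e-3, nu=1.31e-6, g=9.81):
    """Head loss (m) in a circular conduit, switching friction law on the Reynolds number."""
    area = math.pi * diameter ** 2 / 4.0
    v = q / area                                   # mean velocity (m/s)
    re = v * diameter / nu                         # Reynolds number
    if re < 2300.0:                                # laminar: f = 64/Re
        f = 64.0 / re
    else:                                          # turbulent: Swamee-Jain explicit formula
        f = 0.25 / math.log10(roughness / (3.7 * diameter) + 5.74 / re ** 0.9) ** 2
    return f * (length / diameter) * v ** 2 / (2.0 * g)

# Hypothetical conduit: 0.5 m diameter, 1 km long, discharges from 1 L/s to 100 L/s
for q in (0.001, 0.01, 0.1):
    print(f"Q = {q:5.3f} m^3/s  ->  head loss = {head_loss(q, 0.5, 1000.0):.4f} m")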
APA, Harvard, Vancouver, ISO, and other styles
20

Ewen, Heidi Harriman. "RECONCILING BIOPHYSICAL AND PSYCHOSOCIAL MODELS OF STRESS IN RELOCATION AMONG OLDER WOMEN." UKnowledge, 2006. http://uknowledge.uky.edu/gradschool_diss/374.

Full text
Abstract:
The decision to relocate or to age in place can be a difficult one, influenced by a variety of factors such as finances, physical abilities, and social and instrumental support from family and others. This study focuses on the stresses of residential relocation to independent and assisted living facilities among older women living in Lexington, Kentucky. Participation entailed three semi-structured interviews as well as saliva and blood sampling over a period of 6 months, beginning within one month of the move. Measures of cortisol were used as indicators of stress reactivity. Distinct patterns of cortisol response have been identified, with those who indicated the relocation was the result of health issues or anticipated health issues showing the greatest degree of physiological stress reactivity. The majority of women revealed satisfactory psychosocial adjustment, with women indicating the move was prompted by the need to care for ailing family showing the least amount of facility integration. Significant life events appear to be related to social integration, stress reactivity, and perceptions of facility life over the course of the first six months in residence. These results have implications for facility managers with regard to facilitating the acclimation of new and prospective residents and possible interventions aimed at reducing adaptation time among those on waitlists for such facilities.
APA, Harvard, Vancouver, ISO, and other styles
21

Zemirline, Nadjet. "Assisting in the reuse of existing materials to build adaptive hypermedia." Phd thesis, Université Paris Sud - Paris XI, 2011. http://tel.archives-ouvertes.fr/tel-00664996.

Full text
Abstract:
Nowadays, there is a growing demand for personalization, and the "one-size-fits-all" approach for hypermedia systems is no longer applicable. Adaptive hypermedia (AH) systems adapt their behavior to the needs of individual users. However, due to the complexity of their authoring process and the different skills required from authors, only a few of them have been proposed. In recent years, numerous efforts have been made to assist authors in creating their own AH. However, as explained in this thesis, some problems remain. In this thesis, we tackle two particular problems. A first problem concerns the integration of authors' materials (information and user profile) into the models of existing systems, thus allowing authors to directly reuse existing reasoning and execute it on their materials. We propose a semi-automatic merging/specialization process to integrate an author's model into a model of an existing system. Our objectives are twofold: to create support for defining mappings between elements of a model of an existing system and elements of the author's model, and to help create consistent and relevant models that integrate the two models while taking into account the mappings between them. A second problem concerns the adaptation specification, which is famously the hardest part of the authoring process of adaptive web-based systems. We propose an EAP framework with three main contributions: a set of elementary adaptation patterns for adaptive navigation, a typology organizing the proposed elementary adaptation patterns, and a semi-automatic process to generate adaptation strategies based on the use and combination of patterns. Our objective is to easily define adaptation strategies at a high level by combining simple ones. Furthermore, we have studied the expressivity of some existing solutions allowing the specification of adaptation versus the EAP framework, thus discussing, based on this study, the pros and cons of various decisions in terms of the ideal way of defining an adaptation language. We propose a unified vision of adaptation and adaptation languages, based on the analysis of these solutions and our framework, as well as a study of the adaptation expressivity and the interoperability between them, resulting in an adaptation typology. The unified vision and adaptation typology are not limited to the solutions analysed, and can be used to compare and extend other approaches in the future. Besides these theoretical qualitative studies, this thesis also describes implementations and experimental evaluations of our contributions in an e-learning application.
APA, Harvard, Vancouver, ISO, and other styles
22

Lamarche, Louis. "Reduction of wall interference for three dimensional models with two dimensional wall adaptation." Doctoral thesis, Universite Libre de Bruxelles, 1986. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/213544.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Kirwin, Roan. "Modification and adaptation of WEDM wire-lag models for use in production environments." Miami University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=miami1564759778566713.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Tomashenko, Natalia. "Speaker adaptation of deep neural network acoustic models using Gaussian mixture model framework in automatic speech recognition systems." Thesis, Le Mans, 2017. http://www.theses.fr/2017LEMA1040/document.

Full text
Abstract:
Differences between training and testing conditions may significantly degrade recognition accuracy in automatic speech recognition (ASR) systems. Adaptation is an efficient way to reduce the mismatch between models and data from a particular speaker or channel. There are two dominant types of acoustic models (AMs) used in ASR: Gaussian mixture models (GMMs) and deep neural networks (DNNs). The GMM hidden Markov model (GMM-HMM) approach has been one of the most common techniques in ASR systems for many decades. Speaker adaptation is very effective for these AMs and various adaptation techniques have been developed for them. On the other hand, DNN-HMM AMs have recently achieved big advances and outperformed GMM-HMM models for various ASR tasks. However, speaker adaptation is still very challenging for these AMs. Many adaptation algorithms that work well for GMM systems cannot be easily applied to DNNs because of the different nature of these models. The main purpose of this thesis is to develop a method for efficient transfer of adaptation algorithms from the GMM framework to DNN models. A novel approach for speaker adaptation of DNN AMs is proposed and investigated. The idea of this approach is based on using so-called GMM-derived features as input to a DNN. The proposed technique provides a general framework for transferring adaptation algorithms, developed for GMMs, to DNN adaptation. It is explored for various state-of-the-art ASR systems and is shown to be effective in comparison with other speaker adaptation techniques, as well as complementary to them.
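As a rough illustration of the idea of GMM-derived features (a sketch under assumed data shapes, not the thesis's actual pipeline), one can fit a GMM on training frames and append its per-frame log posteriors to the original acoustic features before training a DNN:

```python
# Illustrative sketch only: derive GMM-based features from acoustic frames and
# append them to the original features before training a DNN acoustic model.
# Frame dimensions, the number of components and the use of log posteriors are
# assumptions, not the thesis's exact recipe.
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_derived_features(train_frames, test_frames, n_components=8):
    """Fit a GMM on training frames; return both sets extended with
    per-frame log posteriors over the GMM components."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="diag",
                          random_state=0).fit(train_frames)
    train_aux = np.log(gmm.predict_proba(train_frames) + 1e-10)
    test_aux = np.log(gmm.predict_proba(test_frames) + 1e-10)
    return (np.hstack([train_frames, train_aux]),
            np.hstack([test_frames, test_aux]))

# Random stand-in data shaped like 39-dimensional MFCC frames
rng = np.random.default_rng(0)
train_ext, test_ext = gmm_derived_features(rng.normal(size=(1000, 39)),
                                            rng.normal(size=(200, 39)))
print(train_ext.shape, test_ext.shape)   # (1000, 47) (200, 47)
```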
APA, Harvard, Vancouver, ISO, and other styles
25

Kiss, Andreea. "Opportunistic Adaptation and New Venture Growth: Exploring the Link between Cognition, Action and Growth." Digital Archive @ GSU, 2010. http://digitalarchive.gsu.edu/managerialsci_diss/20.

Full text
Abstract:
This dissertation introduces the model of opportunistic adaptation to explain new venture growth. In established firms, processes of change and adaptation usually imply a transition from one steady-state strategy to another and a problem-oriented perspective, as firms change in response to potential threats to their current positions. However, in the context of new ventures, adaptation is less about moving from one existent strategy to another and more about the entrepreneur’s effort to reach a steady state for the first time by continuously experimenting and combining resources in creative and innovative ways. The model of opportunistic adaptation rests on three key assumptions: (1) new venture growth results from actions grounded in an opportunistic (proactive) logic; (2) entrepreneurial cognition is viewed as an antecedent to all organizational actions leading to growth; and (3) the relationship between entrepreneurial cognition and action is influenced by industry- and firm-level attributes. The model is tested using quantitative and qualitative data on new ventures founded between 1996 and 2006 in technology-intensive industries. The results provide partial support for the notion of opportunistic adaptation as a process in which entrepreneurial cognition and firm- and industry-related factors are closely intertwined. The results of the dissertation suggest that some aspects of entrepreneurial cognition, such as entrepreneurial schema focus, have a more direct effect on actions related to new venture growth than others, whose effect is strongly moderated by contextual influences such as industry growth and social network heterogeneity. This dissertation also finds that not all types of organizational actions associated with an opportunity logic lead to new venture growth. Of the three action types included in the model (fast, diverse and frequent), only action diversity was found to have a positive impact on new venture growth. Theoretical implications of the study results for the literature on new venture growth and the literature on organizational adaptation, as well as practical implications, are discussed.
APA, Harvard, Vancouver, ISO, and other styles
26

Foust, Richard John. "Development of a hydraulic bone chamber implant to study in vivo bone repair and adaptation." Thesis, Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/17123.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

McMillen, David. "Effects of spike-frequency adaptation on neural models, with applications to biologically inspired robotics." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape2/PQDD_0020/NQ53651.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Grobler, Johannes Hendrik. "Development and adaptation of dynamic models for new power generation source / Johannes Hendrik Grobler." Thesis, North-West University, 2011. http://hdl.handle.net/10394/8402.

Full text
Abstract:
This dissertation’s main aim was to adapt generic gas turbine and combined cycle power plant dynamic models for use in the power system simulation software DigSilent PowerFactory. Owing to their advantages in overall efficiency and lower emissions compared with conventional coal-fired power plants, combined cycle power plants have gained popularity and have become a significant portion of power generation across the world in recent times. As the world moves to minimise carbon-dioxide footprints, there is growing demand for cleaner methods of power generation. In South Africa the main power source is still coal-fired power stations, but in recent times gas turbine power plants have been added to the power system. Approximately two-thirds of the generation capacity in a combined cycle power plant is produced by the gas turbines; the other third is generated by the steam turbine. Using the steam that is available improves the overall efficiency of the power plant and decreases emissions. Gas turbines and their controls are significantly different from the controls of a conventional steam turbine plant. In particular, the maximum output power of a gas turbine is very dependent on the deviation of its operating frequency (or speed) from the rated frequency, and on the ambient conditions in which the gas turbine operates. In an effort to provide the industry with a single document and simulation model summarising the unique characteristics, controls and protection of combined cycle power plants, the Cigre Task Force 25 was formed [1]. The aim of this Task Force was to develop an open cycle gas turbine model (more detailed than existing models) and a combined cycle power plant simulation model, as no detailed models existed in any power system simulation software. The aim of this dissertation was to adapt the Cigre simulation models, enabling their use in the DigSilent PowerFactory power system simulation software, and to validate their performance.
Thesis (M.Ing. (Electrical Engineering))--North-West University, Potchefstroom Campus, 2011
APA, Harvard, Vancouver, ISO, and other styles
29

Al, Tobi Amjad Mohamed. "Anomaly-based network intrusion detection enhancement by prediction threshold adaptation of binary classification models." Thesis, University of St Andrews, 2018. http://hdl.handle.net/10023/17050.

Full text
Abstract:
Network traffic exhibits a high level of variability over short periods of time. This variability impacts negatively on the performance (accuracy) of anomaly-based network Intrusion Detection Systems (IDS) that are built using predictive models in a batch-learning setup. This thesis investigates how adapting the discriminating threshold of model predictions, specifically to the evaluated traffic, improves the detection rates of these intrusion detection models. Specifically, this thesis studied the adaptability features of three well-known machine learning algorithms: C5.0, Random Forest, and Support Vector Machine. The ability of these algorithms to adapt their prediction thresholds was assessed and analysed under different scenarios that simulated real-world settings using the prospective sampling approach. A new dataset (STA2018) was generated for this thesis and used for the analysis. This thesis has demonstrated empirically the importance of threshold adaptation in improving the accuracy of detection models when training and evaluation (test) traffic have different statistical properties. Further investigation was undertaken to analyse the effects of the feature selection and data balancing processes on a model's accuracy when evaluation traffic with different significant features was used. The effects of threshold adaptation on reducing the accuracy degradation of these models were statistically analysed. The results showed that, of the three compared algorithms, Random Forest was the most adaptable and had the highest detection rates. This thesis then extended the analysis to apply threshold adaptation on sampled traffic subsets, using different sample sizes, sampling strategies and label error rates. This investigation showed the robustness of the Random Forest algorithm in identifying the best threshold. The Random Forest algorithm only needed a sample that was 0.05% of the original evaluation traffic to identify a discriminating threshold with an overall accuracy rate of nearly 90% of the optimal threshold.
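A minimal sketch of prediction-threshold adaptation follows, assuming synthetic data in place of the STA2018 traffic and a deliberately simplified selection criterion: a trained Random Forest's threshold is re-tuned on a small labelled sample of the evaluation traffic and then applied to the rest.

```python
# Minimal sketch of threshold adaptation (synthetic stand-in data, not the
# STA2018 dataset; the selection criterion is simplified to plain accuracy).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
X_train, y_train = rng.normal(size=(2000, 10)), rng.integers(0, 2, 2000)
X_eval, y_eval = rng.normal(loc=0.3, size=(5000, 10)), rng.integers(0, 2, 5000)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Re-tune the discriminating threshold on a small labelled sample (here 1%)
# of the evaluation traffic, then apply it to the rest.
sample = rng.choice(len(X_eval), size=50, replace=False)
probs_sample = model.predict_proba(X_eval[sample])[:, 1]
thresholds = np.linspace(0.05, 0.95, 19)
best_t = max(thresholds, key=lambda t: accuracy_score(
    y_eval[sample], (probs_sample >= t).astype(int)))

probs_eval = model.predict_proba(X_eval)[:, 1]
print("adapted threshold:", round(best_t, 2),
      "accuracy:", accuracy_score(y_eval, (probs_eval >= best_t).astype(int)))
```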
APA, Harvard, Vancouver, ISO, and other styles
30

Ding, Jie. "Structural and fluid analysis for large scale PEPA models, with applications to content adaptation systems." Thesis, University of Edinburgh, 2010. http://hdl.handle.net/1842/7975.

Full text
Abstract:
The stochastic process algebra PEPA is a powerful modelling formalism for concurrent systems, which has enjoyed considerable success over the last decade. Such modelling can help designers by allowing aspects of a system which are not readily tested, such as protocol validity and performance, to be analysed before a system is deployed. However, model construction and analysis can be challenged by the size and complexity of large scale systems, which consist of large numbers of components and thus result in state-space explosion problems. Both structural and quantitative analysis of large scale PEPA models suffer from this problem, which has limited wider application of the PEPA language. This thesis focuses on developing PEPA to overcome the state-space explosion problem and make it suitable to validate and evaluate large scale computer and communications systems, in particular a content adaptation framework proposed by the Mobile VCE. In this thesis, a new representation scheme for PEPA is proposed to numerically capture the structural and timing information in a model. Through this numerical representation, we have found that there is a Place/Transition structure underlying each PEPA model. Based on this structure and the theories developed for Petri nets, some important techniques for the structural analysis of PEPA have been given. These techniques do not suffer from the state-space explosion problem. They include a new method for deriving and storing the state space and an approach to finding invariants which can be used to reason qualitatively about systems. In particular, a novel deadlock-checking algorithm has been proposed to avoid the state-space explosion problem, which can not only efficiently carry out deadlock-checking for a particular system but can also tell when and how a system's structure leads to deadlocks. In order to avoid the state-space explosion problem encountered in the quantitative analysis of large scale PEPA models, a fluid approximation approach has recently been proposed, which results in a set of ordinary differential equations (ODEs) approximating the underlying CTMC. This thesis presents an improved mapping from PEPA to ODEs based on the numerical representation scheme, which extends the class of PEPA models that can be subjected to fluid approximation. Furthermore, we have established the fundamental characteristics of the derived ODEs, such as the existence, uniqueness, boundedness and nonnegativeness of the solution. The convergence of the solution as time tends to infinity has been proved for several classes of PEPA models under some mild conditions. For general PEPA models, the convergence is proved under a particular condition, which has been shown to relate to some famous constants of Markov chains, such as the spectral gap and the log-Sobolev constant. This thesis has established the consistency between the fluid approximation and the underlying CTMCs for PEPA, i.e. the limit of the solution is consistent with the equilibrium probability distribution corresponding to a family of underlying density-dependent CTMCs. These developments and investigations of PEPA have been applied to evaluate, both qualitatively and quantitatively, the large scale content adaptation system proposed by the Mobile VCE. These analyses provide an assessment of the current design and should guide the development of the system, contributing towards efficient working patterns and system optimisation.
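The fluid-approximation idea can be illustrated with a toy client/server cooperation whose component counts are approximated by ODEs instead of enumerating the CTMC state space; the rates, populations and model structure below are invented for illustration and are not taken from the thesis.

```python
# Toy fluid approximation of a client/server cooperation: component counts
# follow ODEs instead of a CTMC. Rates and populations are invented.
import numpy as np
from scipy.integrate import odeint

r_req, r_srv = 1.0, 2.0        # request and service rates (assumed)

def fluid(y, t):
    c_think, c_wait, s_idle, s_busy = y
    req_flow = r_req * min(c_think, s_idle)   # bounded-capacity cooperation
    srv_flow = r_srv * min(c_wait, s_busy)
    return [srv_flow - req_flow,              # clients thinking
            req_flow - srv_flow,              # clients waiting
            srv_flow - req_flow,              # servers idle
            req_flow - srv_flow]              # servers busy

y0 = [100.0, 0.0, 10.0, 0.0]                  # 100 clients, 10 servers
t = np.linspace(0, 20, 200)
trajectory = odeint(fluid, y0, t)
print("approximate steady state:", np.round(trajectory[-1], 2))
```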
APA, Harvard, Vancouver, ISO, and other styles
31

Vogel, Thomas, and Holger Giese. "Model-driven engineering of adaptation engines for self-adaptive software : executable runtime megamodels." Universität Potsdam, 2013. http://opus.kobv.de/ubp/volltexte/2013/6382/.

Full text
Abstract:
The development of self-adaptive software requires the engineering of an adaptation engine that controls and adapts the underlying adaptable software by means of feedback loops. The adaptation engine often describes the adaptation by using runtime models representing relevant aspects of the adaptable software and particular activities such as analysis and planning that operate on these runtime models. To systematically address the interplay between runtime models and adaptation activities in adaptation engines, runtime megamodels have been proposed for self-adaptive software. A runtime megamodel is a specific runtime model whose elements are runtime models and adaptation activities. Thus, a megamodel captures the interplay between multiple models and between models and activities as well as the activation of the activities. In this article, we go one step further and present a modeling language for ExecUtable RuntimE MegAmodels (EUREMA) that considerably eases the development of adaptation engines by following a model-driven engineering approach. We provide a domain-specific modeling language and a runtime interpreter for adaptation engines, in particular for feedback loops. Megamodels are kept explicit and alive at runtime and by interpreting them, they are directly executed to run feedback loops. Additionally, they can be dynamically adjusted to adapt feedback loops. Thus, EUREMA supports development by making feedback loops, their runtime models, and adaptation activities explicit at a higher level of abstraction. Moreover, it enables complex solutions where multiple feedback loops interact or even operate on top of each other. Finally, it leverages the co-existence of self-adaptation and off-line adaptation for evolution.
Die Entwicklung selbst-adaptiver Software erfordert die Konstruktion einer sogenannten "Adaptation Engine", die mittels Feedbackschleifen die unterliegende Software steuert und anpasst. Die Anpassung selbst wird häufig mittels Laufzeitmodellen, die die laufende Software repräsentieren, und Aktivitäten wie beispielsweise Analyse und Planung, die diese Laufzeitmodelle nutzen, beschrieben. Um das Zusammenspiel zwischen Laufzeitmodellen und Aktivitäten systematisch zu erfassen, wurden Megamodelle zur Laufzeit für selbst-adaptive Software vorgeschlagen. Ein Megamodell zur Laufzeit ist ein spezielles Laufzeitmodell, dessen Elemente Aktivitäten und andere Laufzeitmodelle sind. Folglich erfasst ein Megamodell das Zusammenspiel zwischen verschiedenen Laufzeitmodellen und zwischen Aktivitäten und Laufzeitmodellen als auch die Aktivierung und Ausführung der Aktivitäten. Darauf aufbauend präsentieren wir in diesem Artikel eine Modellierungssprache für ausführbare Megamodelle zur Laufzeit, EUREMA genannt, die aufgrund eines modellgetriebenen Ansatzes die Entwicklung selbst-adaptiver Software erleichtert. Der Ansatz umfasst eine domänen-spezifische Modellierungssprache und einen Laufzeit-Interpreter für Adaptation Engines, insbesondere für Feedbackschleifen. EUREMA Megamodelle werden über die Spezifikationsphase hinaus explizit zur Laufzeit genutzt, um mittels Interpreter Feedbackschleifen direkt auszuführen. Zusätzlich können Megamodelle zur Laufzeit dynamisch geändert werden, um Feedbackschleifen anzupassen. Daher unterstützt EUREMA die Entwicklung selbst-adaptiver Software durch die explizite Spezifikation von Feedbackschleifen, der verwendeten Laufzeitmodelle, und Adaptionsaktivitäten auf einer höheren Abstraktionsebene. Darüber hinaus ermöglicht EUREMA komplexe Lösungskonzepte, die mehrere Feedbackschleifen und deren Interaktion wie auch die hierarchische Komposition von Feedbackschleifen umfassen. Dies unterstützt schließlich das integrierte Zusammenspiel von Selbst-Adaption und Wartung für die Evolution der Software.
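A loose analogue of an interpreted runtime megamodel (a toy sketch, not the EUREMA language or interpreter) is a feedback loop described as data, with adaptation activities operating on explicit runtime models and executed by a generic interpreter:

```python
# Toy analogue of an interpreted runtime megamodel: the feedback loop is data
# (adaptation activities over explicit runtime models) executed by a generic
# interpreter, so it could itself be inspected or changed at runtime.
runtime_models = {"architecture": {"load": 0.9, "replicas": 1},
                  "analysis": {}, "plan": {}}

def monitor(models):
    models["architecture"]["load"] = 0.95      # stand-in for sensing

def analyse(models):
    models["analysis"]["overloaded"] = models["architecture"]["load"] > 0.8

def plan(models):
    if models["analysis"].get("overloaded"):
        models["plan"]["action"] = "add_replica"

def execute(models):
    if models["plan"].pop("action", None) == "add_replica":
        models["architecture"]["replicas"] += 1

megamodel = [monitor, analyse, plan, execute]  # explicit, modifiable loop

for activity in megamodel:
    activity(runtime_models)
print(runtime_models["architecture"])          # {'load': 0.95, 'replicas': 2}
```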
APA, Harvard, Vancouver, ISO, and other styles
32

Koontz, John Timothy. "Digital image-based finite element modeling : simulation of mechanically-induced bone adaptation." Thesis, Georgia Institute of Technology, 2000. http://hdl.handle.net/1853/16474.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Loulou, Hassan. "Verifying Design Properties at Runtime Using an MDE-Based Approach Models @Run.Time Verification-Application to Autonomous Connected Vehicles." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS405.

Full text
Abstract:
Un véhicule autonome et connecté (ACV – pour Autonomous Connected Vehicle ) est un système cyber-physique où le monde réel et l’espace numérique virtuel se fusionnent. Ce type de véhicule requiert un processus de validation rigoureuse commençant à la phase de conception et se poursuivant même après le déploiement du logiciel. Un nouveau paradigme est apparu pour le monitorat continu des exécutions des logiciels afin d'autoriser des adaptations automatiquement en temps réel, systématiquement lors d’une détection de changement dans l'environnement d'exécution, d’une panne ou d’un bug. Ce paradigme s’intitule : « Models@Run.time ». Cette thèse s’inscrit dans le cadre des ACVs et plus particulièrement dans le contexte des véhicules qui collaborent et qui partagent leurs données d’une manière sécurisée. Plusieurs approches de modélisation sont déjà utilisées pour exprimer les exigences relatives au contrôle d'accès afin d’imposer des politiques de sécurité. Toutefois, leurs outils de validation ne tiennent pas compte les impacts de l'interaction entre les exigences fonctionnelles et les exigences de sécurité. Cette interaction peut conduire à des violations de sécurité inattendues lors de l'exécution du système ou lors des éventuelles adaptations à l’exécution. En outre, l’estimation en temps réel de l’état de trafic utilisant des données de type crowdsourcing pourrait être utilisée pour les adaptations aux modèles de coopération des AVCs. Cette approche n'a pas encore été suffisamment étudiée dans la littérature. Pour pallier à ces limitations, de nombreuses questions doivent être abordées :
• L'évolution des exigences fonctionnelles du système doit être prise en compte lors de la validation des politiques de sécurité ainsi que les scénarios d'attaque doivent être générés automatiquement.
• Une approche pour concevoir et détecter automatiquement les anti-patrons (antipatterns) de sécurité doit être développée. En outre, de nouvelles reconfigurations pour les politiques de contrôle d'accès doivent également être identifiées, validées et déployées efficacement à l'exécution.
• Les ACVs doivent observer et analyser leur environnement, qui contient plusieurs flux de données dite massives (Big Data) pour proposer de nouveaux modèles de coopération, en temps réel.
Dans cette thèse, une approche pour la surveillance de l'environnement des ACVs est proposée. L’approche permet de valider les politiques de contrôle d'accès et de les reconfigurer en toute sécurité. La contribution de cette thèse consiste à :
• Guider les Model Checkers de sécurité pour trouver automatiquement les scénarios d'attaque dès la phase de conception.
• Concevoir des anti-patterns pour guider le processus de validation, et développer un algorithme pour les détecter automatiquement lors des reconfigurations des modèles.
• Construire une approche pour surveiller en temps réel les flux de données dynamiques afin de proposer des adaptations de la politique d'accès lors de l'exécution.
L’approche proposée a été validée en utilisant plusieurs exemples liés aux ACVs, et les résultats des expérimentations prouvent la faisabilité de cette approche
Autonomous Connected Vehicles (ACVs) are cyber-physical systems (CPS) where the computational world and the real one meet. These systems require a rigorous validation process that starts at the design phase and continues after the software deployment. Models@Run.time has appeared as a new paradigm for continuously monitoring software systems' execution in order to enable adaptations whenever a change, a failure or a bug is introduced in the execution environment. In this thesis, we tackle the ACV environment, where vehicles try to collaborate and share their data in a secure manner. Different modeling approaches are already used for expressing access control requirements in order to impose security policies. However, their validation tools do not consider the impacts of the interaction between the functional and the security requirements. This interaction can lead to unexpected security breaches during the system execution and its potential runtime adaptations. Also, the real-time prediction of traffic states using crowd-sourcing data could be useful for proposing adaptations to ACV cooperation models. Nevertheless, it has not been sufficiently studied yet. To overcome these limitations, several issues should be addressed:
• The evolution of the system's functional part must be considered during the validation of the security policy, and attack scenarios must be generated automatically.
• An approach for designing and automatically detecting security anti-patterns must be developed. Furthermore, new reconfigurations of access control policies must also be found, validated and deployed efficiently at runtime.
• ACVs need to observe and analyze their complex environment, containing big-data streams, to recommend new cooperation models in near real-time.
In this thesis, we build an approach for sensing the ACV environment, validating its access control models and securely reconfiguring them on the fly. We cover three aspects:
• We propose an approach for guiding security model checkers to find attack scenarios automatically at design time.
• We design anti-patterns to guide the validation process, and we develop an algorithm to detect them automatically during model reconfigurations. We also design a mechanism for reconfiguring the access control model, and we develop a lightweight modular framework for efficient deployment of new reconfigurations.
• We build an approach for the real-time monitoring of dynamic data streams to propose adaptations of the access policy at runtime.
Our proposed approach was validated using several examples related to ACVs, and the results of our experiments prove its feasibility.
APA, Harvard, Vancouver, ISO, and other styles
34

Geil, Kerrie L., and Kerrie L. Geil. "Assessing the 20th Century Performance of Global Climate Models and Application to Climate Change Adaptation Planning." Diss., The University of Arizona, 2017. http://hdl.handle.net/10150/623015.

Full text
Abstract:
Rapid environmental changes linked to human-induced increases in atmospheric greenhouse gas concentrations have been observed on a global scale over recent decades. Given the relative certainty of continued change across many earth systems, the information output from climate models is an essential resource for adaptation planning. But in the face of many known modeling deficiencies, how confident can we be in model projections of future climate? It stands to reason that a realistic simulation of the present climate is at least a necessary (but likely not sufficient) requirement for a model's ability to realistically simulate the climate of the future. Here, I present the results of three studies that evaluate the 20th century performance of global climate models from phase 5 of the Coupled Model Intercomparison Project (CMIP5). The first study examines precipitation, geopotential height, and wind fields from 21 CMIP5 models to determine how well the North American monsoon system (NAMS) is simulated. Models that best capture large-scale circulation patterns at low levels usually have realistic representations of the NAMS, but even the best models poorly represent monsoon retreat. Difficulty in reproducing monsoon retreat results from an inaccurate representation of gradients in low-level geopotential height across the larger region, which causes an unrealistic flux of low-level moisture from the tropics into the NAMS region that extends well into the post-monsoon season. The second study examines the presence and severity of spurious Gibbs-type numerical oscillations across the CMIP5 suite of climate models. The oscillations can appear as unrealistic spatial waves near discontinuities or sharp gradients in global model fields (e.g., orography) and have been a known problem for decades. Multiple methods of oscillation reduction exist; consequently, the oscillations are presumed small in modern climate models and hence are rarely addressed in recent literature. Here we quantify the oscillations in 13 variables from 48 global climate models along a Pacific ocean transect near the Andes. Results show that 48% of nonspectral models and 95% of spectral models have at least one variable with oscillation amplitude as large as, or greater than, atmospheric interannual variability. The third study is an in-depth assessment of model simulations of 20th century monthly minimum and maximum surface air temperature over eight US regions, using mean-state, trend, and variability bias metrics. Transparent model performance information is provided in the form of model rankings for each bias type. A wide range in model skill is found at the regional scale, but no strong relationships are seen between any of the three bias types or between 20th century bias and 21st century projected change. Using our model rankings, two smaller ensembles of models with better performance over the southwestern U.S. are selected, but they result in negligible differences from the all-model ensemble in the average 21st century projected temperature change and model spread. In other words, models of varied quality (and complexity) are projecting very similar changes in temperature, implying that the models are simulating warming for different physical reasons.
Despite this result, we suggest that models with smaller 20th century biases have a greater likelihood of being more physically realistic and therefore, more confidence can be placed in their 21st century projections as compared to projections from models that have demonstrably poor skill over the observational period. This type of analysis is essential for responsibly informing climate resilience efforts.
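The ranking idea in the third study can be sketched as follows (random stand-in series instead of CMIP5 output and observations; the combined score is an arbitrary illustration, not the dissertation's metric):

```python
# Sketch of bias-based model ranking with random stand-in series; the combined
# score (sum of absolute mean-state, trend and variability biases) is an
# arbitrary illustration, not the dissertation's metric.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1901, 2001)
obs = 0.005 * (years - 1901) + rng.normal(0, 0.1, years.size)

models = {f"model_{i}": 0.005 * (years - 1901) * rng.uniform(0.5, 1.5)
                        + rng.normal(rng.uniform(-0.5, 0.5), 0.15, years.size)
          for i in range(5)}

def score(series):
    mean_bias = series.mean() - obs.mean()
    trend_bias = np.polyfit(years, series, 1)[0] - np.polyfit(years, obs, 1)[0]
    var_bias = series.std() - obs.std()
    return abs(mean_bias) + 100 * abs(trend_bias) + abs(var_bias)

ranking = sorted(models, key=lambda name: score(models[name]))
print("best to worst:", ranking)
```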
APA, Harvard, Vancouver, ISO, and other styles
35

Xing, Yang. "Asymptotic behavior of Bayesian nonparametric procedures /." Umeå : Dept. of Forest Economics, Swedish University of Agricultural Sciences, 2009. http://epsilon.slu.se/200935.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Doi, Fabrício. "Objetos adaptativos: aplicação da tecnologia adaptativa à orientação a objetos." Universidade de São Paulo, 2007. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-09012008-095135/.

Full text
Abstract:
Este trabalho estuda o problema da construção de sistemas orientados a objetos com características adaptativas, tendo como principal objetivo simplificar o processo de construção. Para isso o trabalho utiliza como base teórica a Tecnologia Adaptativa e sua aplicação em diversos formalismos. O Modelo Adaptativo de Objetos foi utilizado como base de comparação de soluções para a construção de sistemas adaptativos. Nesta pesquisa são apresentadas aplicações e uma proposição para a construção e modelagem de sistemas adaptativos, através da extensão do conceito de objetos com características da tecnologia adaptativa. Através deste estudo avaliou-se o impacto da aplicação do dispositivo adaptativo em um formalismo com tipo. Os resultados obtidos no presente trabalho demonstram que a tecnologia adaptativa é propícia para linguagens orientadas a objetos e que os diagramas UML são capazes, com pequenas extensões, de representar o comportamento adaptativo adequadamente.
This study addresses the issue of implementing object-oriented software with adaptive characteristics, with the primary purpose of simplifying the implementation process. The key theoretical basis consisted of adaptive technology and its application in various formalisms. The Adaptive Object Model was taken as a basis of comparison for solutions to implement adaptive systems. This study describes applications and a proposition for implementing and modeling adaptive systems through the extension of the object concept with adaptive technology characteristics. It also evaluates the impact of applying adaptive devices in a typed formalism. The results obtained demonstrate that adaptive technology is suitable for object-oriented languages and that UML diagrams are capable, with a small number of extensions, of representing adaptive behavior appropriately.
APA, Harvard, Vancouver, ISO, and other styles
37

Kurekli, Kenan. "System Parameter Adaptation Based On Image Metrics For Automatic Target Detection." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/12604966/index.pdf.

Full text
Abstract:
Automatic object detection is a challenging field which has been evolving over decades. The application areas span many domains such as robotics inspection, medical imaging, military targeting, and reconnaissance. Some of the most concentrated efforts in automatic object detection have been in the military domain, where most of the problems deal with automatic target detection and scene analysis in the outdoors using a variety of sensors. One of the critical problems in Automatic Target Detection (ATD) systems is multi-scenario adaptation. Most of the ATD systems developed to date perform unpredictably, i.e. they perform well in certain scenarios and poorly in others. Unless ATD systems can be made adaptable, their utility in battlefield missions remains questionable. This thesis describes a methodology that adapts parameterized ATD systems with image metrics as the scenario changes, so that the ATD system can maintain better performance. The methodology uses experimentally obtained performance models, which are functions of image metrics and system parameters, to optimize performance measures of the ATD system. Optimization is achieved by adapting system parameters with incoming image metrics, based on the performance models, as the system works in the field. A simple ATD system is also proposed in this work to describe and test the methodology.
APA, Harvard, Vancouver, ISO, and other styles
38

Bryant, W. A. "Functional genomics, analysis of adaptation in, and applications of models to, the metabolism of engineered Escherichia coli." Thesis, University College London (University of London), 2010. http://discovery.ucl.ac.uk/19628/.

Full text
Abstract:
In order to examine the metabolism of bacteria in the family Enterobacteriaceae, tools for gene complement comparison and stoichiometric model building have been developed to take advantage of both the number of complete bacterial genome sequences currently available and the relationship between genes and metabolism. A functional genomic approach to improving knowledge of the metabolism of Escherichia coli CFT073 (a uropathogen) has been undertaken, taking into account not only its genome sequence but also its close relationship to E. coli MG1655. A fresh comparison of E. coli CFT073 with E. coli MG1655 has been carried out to identify all those genes in CFT073 that are not present in MG1655 and may have metabolic characteristics. These genes have further been assessed bioinformatically to determine whether they might encode enzymes for the metabolism of chemicals commonly found in human urine, and one set of such genes has been experimentally confirmed to encode an L-sorbose utilisation pathway. Little experimental work has been done as yet to elucidate how bacteria adaptively respond to the introduction of heterologous metabolic genes. To investigate how bacteria respond to such DNA, genes encoding the L-sorbose utilisation and uptake operon from CFT073 have been cloned and transformed into DH5, and a selective pressure (minimal medium with L-sorbose as sole carbon source) has been applied over 100 generations of growth of this strain in serial passage to investigate the change in its behaviour. The availability of large numbers of completely sequenced genomes, along with the development of a stoichiometric metabolic model with very high coverage of E. coli metabolism (iAF1260 [1]), has made possible the analysis of the core metabolism of large numbers of bacteria to investigate gene essentiality. A novel way of assessing gene complement has been developed using BLAST and DiagHunter, to improve the reliability of gene synteny comparisons with contextual information about the genes and to extend work by others to cover all E. coli and Shigella genome sequences available on GenBank (as of 1st June 2009), in order to bioinformatically investigate essential genes in these bacteria and the heterogeneity of their metabolic networks. Further to this, metabolic models have been constructed for DH5 with an added L-sorbose pathway and for CFT073, and these models have been used to investigate behavioural changes during the adaptation of bacteria to novel heterologous genes.
APA, Harvard, Vancouver, ISO, and other styles
39

Chang, Joshua TsuKang. "Flipping Biological Switches: Solving for Optimal Control: A Dissertation." eScholarship@UMMS, 2015. https://escholarship.umassmed.edu/gsbs_diss/763.

Full text
Abstract:
Switches play an important regulatory role at all levels of biology, from molecular switches triggering signaling cascades to cellular switches regulating cell maturation and apoptosis. Medical therapies are often designed to toggle a system from one state to another, achieving a specified health outcome. For instance, small doses of subpathologic viruses activate the immune system’s production of antibodies. Electrical stimulation reverts cardiac arrhythmias back to normal sinus rhythm. In all of these examples, a major challenge is finding the optimal stimulus waveform necessary to cause the switch to flip. This thesis develops, validates, and applies a novel model-independent stochastic algorithm, the Extrema Distortion Algorithm (EDA), towards finding the optimal stimulus. We validate the EDA’s performance on the Hodgkin-Huxley model (an empirically validated ionic model of neuronal excitability), the FitzHugh-Nagumo model (an abstract model applied to a wide range of biological systems that exhibit an oscillatory state and a quiescent state), and the genetic toggle switch (a model of bistable gene expression). We show that the EDA is able not only to find the optimal solution, but also in some cases to excel beyond the traditional analytic approaches. Finally, we have computed novel optimal stimulus waveforms for aborting epileptic seizures using the EDA in cellular and network models of epilepsy. This work represents a first step in developing a new class of adaptive algorithms and devices that flip biological switches, revealing basic mechanistic insights and therapeutic applications for a broad range of disorders.
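A crude illustration of the underlying search problem (a naive random search, not the Extrema Distortion Algorithm) is to look for a small current pulse that flips the FitzHugh-Nagumo model from rest to a spike; the parameters, pulse shape and spike criterion below are assumptions made for the sketch.

```python
# Naive random search (not the EDA) for a brief current pulse that switches
# the FitzHugh-Nagumo model from rest to a spike; parameters, pulse shape and
# the spike criterion are assumptions.
import numpy as np
from scipy.integrate import odeint

a, b, eps = 0.7, 0.8, 0.08

def fhn(y, t, amp, width):
    v, w = y
    stim = amp if 1.0 <= t <= 1.0 + width else 0.0
    return [v - v**3 / 3 - w + stim, eps * (v + a - b * w)]

def spikes(amp, width):
    t = np.linspace(0, 100, 2000)
    v = odeint(fhn, [-1.2, -0.6], t, args=(amp, width), hmax=0.25)[:, 0]
    return v.max() > 1.0                       # crude spike criterion

rng = np.random.default_rng(0)
best = None
for _ in range(200):                           # random search over pulses
    amp, width = rng.uniform(0.0, 2.0), rng.uniform(0.5, 5.0)
    if spikes(amp, width) and (best is None or amp * width < best[0] * best[1]):
        best = (amp, width)
print("smallest charge pulse found (amp, width):", best)
```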
APA, Harvard, Vancouver, ISO, and other styles
40

Buck, Harleah G. "The Geriatric Cancer Experience in End of Life: Model Adaptation and Testing." [Tampa, Fla.] : University of South Florida, 2008. http://purl.fcla.edu/usf/dc/et/SFE0002305.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Shah, Ankur Savailal. "Prediction Models for Multi-dimensional Power-Performance Optimization on Many Cores." Thesis, Virginia Tech, 2008. http://hdl.handle.net/10919/31826.

Full text
Abstract:
Power has become a primary concern for HPC systems. Dynamic voltage and frequency scaling (DVFS) and dynamic concurrency throttling (DCT) are two software tools (or knobs) for reducing the dynamic power consumption of HPC systems. To date, few works have considered the synergistic integration of DVFS and DCT in performance-constrained systems, and, to the best of our knowledge, no prior research has developed application-aware simultaneous DVFS and DCT controllers in real systems and parallel programming frameworks. We present a multi-dimensional, online performance prediction framework, which we deploy to address the problem of simultaneous runtime optimization of DVFS, DCT, and thread placement on multi-core systems. We present results from an implementation of the prediction framework in a runtime system linked to the Intel OpenMP runtime environment and running on a real dual-processor quad-core system as well as a dual-processor dual-core system. We show that the prediction framework derives near-optimal settings of the three power-aware program adaptation knobs that we consider. Our overall runtime optimization framework achieves significant reductions in energy (12.27% mean) and ED² (29.6% mean), through simultaneous power savings (3.9% mean) and performance improvements (10.3% mean). Our prediction and adaptation framework outperforms earlier solutions that adapt only DVFS or DCT, as well as one that sequentially applies DCT then DVFS.

Further, our results indicate that prediction-based schemes for runtime adaptation compare favorably and typically improve upon heuristic search-based approaches in both performance and energy savings.
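One way to picture the prediction-based approach (a hedged sketch with invented measurements, not the thesis's actual models) is to fit simple regressions of runtime and power over the DVFS/DCT knobs and pick the configuration minimising predicted ED²:

```python
# Hedged sketch: fit simple predictors of runtime and power over the DVFS/DCT
# knobs (CPU frequency, thread count) and pick the configuration minimising
# predicted ED^2. Measurements, model form and knob values are invented.
import itertools
import numpy as np
from sklearn.linear_model import LinearRegression

# Observed samples: (frequency GHz, threads) -> runtime (s) and power (W)
X = np.array([[1.6, 2], [1.6, 8], [2.4, 2], [2.4, 8]])
runtime = np.array([40.0, 14.0, 28.0, 10.0])
power = np.array([35.0, 60.0, 55.0, 95.0])

time_model = LinearRegression().fit(X, runtime)
power_model = LinearRegression().fit(X, power)

candidates = np.array(list(itertools.product([1.6, 2.0, 2.4], [2, 4, 8])))
t_hat = time_model.predict(candidates)
p_hat = power_model.predict(candidates)
ed2 = (p_hat * t_hat) * t_hat**2              # energy x delay^2

best = candidates[np.argmin(ed2)]
print("predicted best (frequency GHz, threads):", best)
```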
Master of Science

APA, Harvard, Vancouver, ISO, and other styles
42

Renga, Sandra. "An evaluation of two predictive models of adjustment in women with breast cancer : hope versus cognitive adaptation theory." Thesis, Lancaster University, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.442721.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Yang, Boxuan. "Estimating the Impacts of Climate Changes on Agricultural Productivities in Thailand, Using Simulation Models." Kyoto University, 2018. http://hdl.handle.net/2433/235992.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Nhemachena, Charles. "Agriculture and future climate dynamics in Africa impacts and adaptation options /." Thesis, Pretoria : [s.n.], 2009. http://upetd.up.ac.za/thesis/available/etd-05302009-122839/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Barden, Katharine. "Perceptual learning of context-sensitive phonetic detail." Thesis, University of Cambridge, 2011. https://www.repository.cam.ac.uk/handle/1810/241032.

Full text
Abstract:
Although familiarity with a talker or accent is known to facilitate perception, it is not clear what underlies this phenomenon. Previous research has focused primarily on whether listeners can learn to associate novel phonetic characteristics with low-level units such as features or phonemes. However, this neglects the potential role of phonetic information at many other levels of representation. To address this shortcoming, this thesis investigated perceptual learning of systematic phonetic detail relating to higher levels of linguistic structure, including prosodic, grammatical and morphological contexts. Furthermore, in contrast to many previous studies, this research used relatively natural stimuli and tasks, thus maximising its relevance to perceptual learning in ordinary listening situations. This research shows that listeners can update their phonetic representations in response to incoming information and its relation to linguistic-structural context. In addition, certain patterns of systematic phonetic detail were more learnable than others. These findings are used to inform an account of how new information is integrated with prior experience in speech processing, within a framework that emphasises the importance of phonetic detail at multiple levels of representation.
APA, Harvard, Vancouver, ISO, and other styles
46

Lakoba, Vasiliy Tarasovich. "Ecotypic Variation in Johnsongrass in Its Invaded U.S. Range." Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/103611.

Full text
Abstract:
Biological invasions have been observed throughout the world for centuries, often with major consequences to biodiversity and food security. Tying invasion to species identity and associated traits has led to numerous hypotheses on why, and where, some species are invasive. In recent decades, attention to intraspecific variation among invaders has produced questions about their adaptation to climate, land use, and environmental change. I examined the intraspecific variation of invasive Johnsongrass's (Sorghum halepense (L.) Pers.) seedling stress response, propagule cold tolerance, and large-scale niche dynamics for correlation with populations' climatic and ecotypic (i.e., agricultural vs. non-agricultural) origin. Overall, I found a greater number of home climate effects than ecotypic effects on various traits. Non-agricultural seed from cold climates and agricultural seed from warm climates germinated more and faster, while non-agricultural seedlings showed uniform chlorophyll production regardless of home soil carbon origin, unlike their agricultural counterparts. Neither seedling stress response nor propagule cold tolerance interacted with ecotype identity; however, drought stress varied with population origins' aridity and soil fertility, and seed from warm/humid and cold/dry climates was most germinable. Comparison of seed and rhizome cold tolerance also suggested that the latter is a conserved trait that may be limiting S. halepense poleward range expansion. This physiological limit, an unchanged cold temperature niche boundary between continents and ecotypes, and a narrowed niche following transition to non-agricultural lands all imply low likelihood of spread based on climatic niche shift. Instead, evidence points to range expansion driven primarily by climate change and highlights agriculture's role in facilitating invasibility. This tandem approach to climate and land use as drivers of intraspecific variation is transferable to other taxa and can help refine our conception of and response to invasion in the Anthropocene.
Doctor of Philosophy
Exotic invasive species are a global problem, threatening biodiversity and biosecurity now and in the future. In the last several decades, ecologists have studied many individual invaders and their traits to understand what drives their spread. More recently, abundant differences in traits between populations within an invasive species have raised questions about humans' role in facilitating invasion through climate change, land use, and other disturbances. I studied the invasive Johnsongrass's (Sorghum halepense (L.) Pers.) response to drought, nutrient limitation, and freezing to detect differences between populations based on their climate and ecotype (agricultural vs. non-agricultural) origin. I also tracked differences in the climates the species occupied across the globe and North America and projected its future distribution under climate change. Overall, I found a greater number of home climate effects than ecotypic effects on various traits. Non-agricultural seed from cold climates and agricultural seed from warm climates germinated the most, while non-agricultural seedlings performed consistently regardless of soil carbon origin, unlike their agricultural counterparts. In addition, drought stress varied with population origins' rainfall and soil fertility, and seed germination favored warm/humid and cold/dry origin. Rhizome (underground stem) cold tolerance appears to be a trait that limits S. halepense poleward range expansion. Along with no change in the coldest climates occupied worldwide and no spread to new climates with transition to non-agricultural lands, this implies that Johnsongrass is unlikely to expand its range without external forces. Instead future range expansion will likely be driven by climate change. This coupled approach to climate and land use affecting invasion is transferable to other species and can help refine both our concepts and response strategies.
APA, Harvard, Vancouver, ISO, and other styles
47

Heaphy, Liam James. "Modelling and translating future urban climate for policy." Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/modelling-and-translating-future-urban-climate-for-policy(2c2ca637-bec2-4f60-884d-5d34fa77fb26).html.

Full text
Abstract:
This thesis looks at the practice of climate modelling at the urban scale in relation to projections of future climate. It responds to the question of how climate models perform in a policy context, and how these models are translated in order to have agency at the urban scale. It considers the means and circumstances through which models are constructed to selectively represent urban realities and potential realities in order to explore and reshape the built environment in response to a changing climate. This thesis is concerned with an interdisciplinary area of research and practice, while at the same time it is based on methodologies originating in science and technology studies which were later applied to architecture and planning, geography, and urban studies. Fieldwork consisted of participant-observation and interviews with three groups of practitioners: firstly, climate impacts modellers forming part of the Adaptation and Resilience in a Changing Climate (ARCC) programme; secondly, planners and adaptation policymakers in the cities of Manchester and London; and thirdly, boundary organisations such as the UK Climate Impacts Programme (UKCIP). Project and climate policy material pertinent to these projects and the case study cities was also analysed in tandem. Of particular interest was the common space shared by researchers and stakeholders where modelling results were explained, contextualised, and interrogated for policy-relevant results. This took the form of stakeholder meetings in which the limits of the models in relation to policy demands could be articulated and mediated. In considering the agency of models in relation to uncertainties, it was found that although generated in a context of applied science, models had a limited effect on policy. As such, the salience of urban climatic risk-based assessment for urban planning is restrained, because it presupposes a quantitative understanding of climate impacts that is only slowly forming due to societal and governmental pressures. This can be related both to the nature of models as sites of exploration and experimentation, and to the distribution of expertise in the climate adaptation community. Although both the research and policy communities operate partly in a common space, models and their associated tools operate at a level of sophistication that policy-makers have difficulty comprehending and integrating into planning policy beyond the level of simple guidance and messages. Adaptation in practice is constrained by a limited understanding of climate uncertainties and urban climatology, evident through the present emphasis on catch-all solutions like green infrastructure and win-win solutions rather than the empowerment of actors and a corresponding distribution of adequate resources. An analysis is provided of the means by which models and maps can shape climate adaptation at scales relevant for cities, based on considerations of how models gain agency through forms of encoded expertise like maps and the types of interaction between science and policy that they imply.
APA, Harvard, Vancouver, ISO, and other styles
48

Graveline, Nina. "Adaptation de l'agriculture aux politiques de gestion de l'eau et aux changements globaux : l'apport des modèles de programmation mathématique." Thesis, Paris, AgroParisTech, 2013. http://www.theses.fr/2013AGPT0091/document.

Full text
Abstract:
Cette thèse développe et discute différentes approches micro-économiques de modélisation de l’agriculture pour représenter l’effet de changements globaux et de politiques de gestion de l’eau sur l’adaptation de l’agriculture et sur les ressources en eau. Après un chapitre de synthèse et une revue de la littérature, quatre essais sont présentés. Le premier essai décrit la représentation du comportement de dix exploitations agricoles en Alsace et en Bade (Allemagne) à partir de modèles de programmation linéaire qui intègrent la prise en compte du risque. Après extrapolation, les résultats de simulation sont couplés à une chaîne de modèle plante-sol et de transfert hydrogéologique afin d’estimer la concentration future en nitrate dans l’aquifère du Rhin supérieur. Les simulations des trois scénarios de changements - tendanciel, libéral et interventionniste - suggèrent que les concentrations en nitrates baissent dans les trois cas par rapport à la référence. Le second essai explore l’effet de l’incertitude de changements globaux sur les ressources en eau par des simulations Monte Carlo pour le modèle alsacien (premier essai) et un modèle de demande en eau agricole (Sud-Ouest). Plusieurs niveaux de dépendance entre les paramètres incertains sont caractérisés. L’analyse des résultats montre que les objectifs environnementaux peuvent être déterminés avec suffisamment de précision malgré l’incertitude forte. Le troisième essai développe un modèle agricole régional de programmation mathématique positive avec élasticité de substitution constante entre l’eau et la terre afin d’explorer comment l’agriculture, partiellement irriguée, de Beauce s’adapte à une baisse de la disponibilité en eau. La réponse du rendement à l’eau est calibrée à partir d’information agronomique. Les adaptations à la baisse de disponibilité en eau sont distinguées selon qu’elles correspondent à des baisses de dose d’eau d’irrigation ou de changement de culture. Environ 20% de la réduction est due à la baisse des doses d’eau (marge intensive). Le dernier essai présente un modèle hydro-économique “holistic” de l’agriculture et de l’aquifère de Beauce afin d’évaluer plusieurs politiques de gestion quantitatives de l’eau ainsi que d’évaluer le cas où l’accès à la ressource n’est plus régulé. Des simulations dynamiques sont réalisées à l’horizon 2040 en tenant compte de l’incertitude liée au changement climatique. La politique actuelle de quotas annualisés semble être plus coût-efficace que les autres politiques testées (taxes, transferts etc.)
This thesis develops and discusses agricultural-supply modeling approaches for representing the adaptation of farming to global changes and water policies: their effects on agricultural economics and water resources comprise critical information for decision makers. After a summary and a review chapter, four essays are presented. The first essay describes a representation of the behavior of ten typical farms using a risk linear programming model connected to a plant-soil-hydrodynamic model chain, to assess the future level of nitrate contamination in the upper Rhine valley aquifer. The baseline, liberal, and interventionist scenarios for 2015 all result in lower nitrate concentrations. The second essay explores the effects of the economic uncertainty of global changes by means of a Monte Carlo approach distinguishing various levels of dependence on uncertain parameters. Analyses for a nitrate-oriented and a water-use model (in Alsace and southwestern France) show that the environmental objectives can be targeted with sufficient confidence. The third essay develops a flexible specification for positive mathematical programming - constant elasticity of substitution with decreasing returns - to explore how irrigated farming adapts to increased water scarcity in Beauce, France. The possibility of adjusting the application of water per hectare accounts for about 20% of the response. The last essay presents the development of a holistic hydro-economic model of Beauce’s agriculture and aquifer under climate-change uncertainty, so as to evaluate various water policies, as well as the open-access case, up to the year 2040. The results show that the baseline policy is more cost-effective than the other instruments tested (tax, transfer, etc.)
APA, Harvard, Vancouver, ISO, and other styles
49

Regotti, Benjamin P. "Terminology, models and methods for reflecting upon the encounter between liturgy and culture the contribution of Anscar J. Chupungco, O.S.B. /." Theological Research Exchange Network (TREN), 1992. http://www.tren.com.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Warden, Tobias [Verfasser], Otthein [Akademischer Betreuer] Herzog, Otthein [Gutachter] Herzog, and Winfried [Gutachter] Lamersdorf. "Interactive Multiagent Adaptation of Individual Classification Models for Decision Support / Tobias Warden ; Gutachter: Otthein Herzog, Winfried Lamersdorf ; Betreuer: Otthein Herzog." Bremen : Staats- und Universitätsbibliothek Bremen, 2019. http://d-nb.info/1199003611/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
