Dissertations on the topic "Kinetic data"

To see the other types of publications on this topic, follow this link: Kinetic data.

Explore the top 50 dissertations for research on the topic "Kinetic data".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be generated automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the academic publication in PDF format and read an online annotation of the work, if the relevant parameters are available in its metadata.

Browse dissertations from a wide variety of fields and assemble your bibliography correctly.

1

Russel, Daniel. „Kinetic data structures in practice". 2007. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

2

Agouris, Ioannis. „The variability of force platform, kinematic and kinetic data in normal and cerebral palsy gait“. Thesis, University of Aberdeen, 2002. http://digitool.abdn.ac.uk/R?func=search-advanced-go&find_code1=WSN&request1=AAIU154295.

Annotation:
Gait analysis can produce useful results once the natural and systematic variability of gait measurements is established [1]. The purpose of this study was therefore to investigate the variability of temporal-spatial, kinematic and kinetic parameters of gait, in adults and children with normal gait as well as in children with cerebral palsy (CP) gait, using three-dimensional motion analysis. Investigations of the variability and symmetry of ground reaction force (GRF) data in healthy and CP children, using time and frequency domain analysis, concluded that GRF data in CP children were more variable than normal. The vertical force was the least and the mediolateral force the most variable parameter. Time domain analysis is limited since it involves only selected points of the force-time curves, whereas frequency domain analysis contains information about the entire waveforms. Motion analysis requires reflective markers attached to the subject's skin. Marker misplacement is considered a major source of variability in gait analysis [2]. A marker placement protocol was established and validated by eight different marker applicators. Within- and between-applicator reproducibility was high. Consistency in applying an established protocol is important, although the marker positions may not be entirely accurate. The effect of marker misplacement on the kinematics and kinetics of gait was quantified using modelling software. The gait parameters in the transverse plane were the most sensitive to marker misplacement, which should be taken into account when examining gait analysis reports. Temporal-spatial, kinematic and kinetic patterns of gait in adults and children (healthy and CP) were determined. All parameters of normal gait showed considerable repeatability, with kinetics being more repeatable than kinematics. CP gait showed repeatability in the sagittal and frontal planes; however, the transverse plane kinematic parameters were highly variable.
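The contrast drawn above between time- and frequency-domain variability can be sketched numerically. The snippet below is illustrative only: it generates synthetic time-normalised vertical GRF curves (not the thesis data) and compares a point-wise coefficient of variation with the variability of FFT magnitudes taken over whole waveforms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for time-normalised vertical GRF curves (trials x samples);
# real curves would come from force-platform exports.
t = np.linspace(0.0, 1.0, 101)                      # % of stance, normalised
template = 800 * (np.sin(np.pi * t) + 0.25 * np.sin(3 * np.pi * t))
trials = template + rng.normal(0.0, 20.0, size=(10, t.size))

# Time-domain variability: point-wise coefficient of variation across trials.
mean_curve = trials.mean(axis=0)
cv = trials.std(axis=0, ddof=1) / np.where(np.abs(mean_curve) < 1e-9,
                                           np.nan, np.abs(mean_curve))

# Frequency-domain variability: compare full spectra rather than selected points.
spectra = np.abs(np.fft.rfft(trials, axis=1))
spectral_cv = spectra.std(axis=0, ddof=1) / spectra.mean(axis=0)

print(f"median time-domain CV: {np.nanmedian(cv):.3f}")
print(f"median spectral CV (first 10 harmonics): {np.median(spectral_cv[:10]):.3f}")
```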
3

Liu, Lingyi. „Concussion balance and postural stability assessment system using kinetic data analysis“. Thesis, University of British Columbia, 2014. http://hdl.handle.net/2429/50256.

Annotation:
In the current scientific literature there are numerous approaches that clinicians can use to assess the static postural stability of patients. Among them, the Balance Error Scoring System (BESS) is a notable method with merits such as cost-effectiveness and portability. Traditional measurement of the errors made by patients during a BESS test relies on manual inspection of the whole experiment by experienced clinicians. A new avenue, detecting errors with a wireless sensor network (WSN) and signal processing techniques, can eliminate the instability introduced by subjective evaluation in the traditional method. This thesis presents a reliable analytical system that provides accurate evaluation of errors in the BESS test of patients with concussion, to assist clinicians in investigating standing postural stability. In this research, kinetic signal data are collected by wearable WSN equipment consisting of seven sensors, each embedding an accelerometer and a gyroscope, fixed on the patient's body while the BESS experiment is completed. We use experimental data from 30 subjects to train a back-propagation neural network and test its performance on a held-out data set. In this procedure, statistical techniques such as principal component analysis and independent component analysis are applied in the signal pre-processing step. Feature extraction is an alternative pre-processing technique for the kinetic signal, and the feature data serve as input to train the neural network. As target training data, the reference error information is acquired from a group of researchers' analysis of video of the conducted experiments and is represented as a Gaussian curve indicating the probability of the error event. Testing confirms that feature extraction combined with a back-propagation neural network gives the best assessment of postural errors in the BESS test. Furthermore, the type of each detected error can be identified among six possible types of postural errors using a neural network classification technique; each type of error corresponds to a certain unstable posture according to the BESS protocol. Ultimately, the presented error-detection system is shown to supply a reliable evaluation of the static postural stability of patients with concussion.
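A minimal sketch of the pipeline described above, PCA-based pre-processing feeding a back-propagation network, is given below. The feature layout, network architecture and synthetic labels are assumptions for illustration; they are not the thesis's sensor data or trained model.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic stand-in for windowed accelerometer/gyroscope features
# (7 sensors x 6 channels -> 42 raw features per window); labels mark
# whether the window contains a balance error.
X = rng.normal(size=(300, 42))
y = (X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=300) > 0).astype(int)

# PCA via SVD of the centred data: keep the leading components.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
X_pca = Xc @ Vt[:10].T                     # first 10 principal components

X_tr, X_te, y_tr, y_te = train_test_split(X_pca, y, test_size=0.3, random_state=0)

# A small back-propagation network (architecture assumed here).
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```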
4

Green, Darren. „Acquisition of kinetic and scale-up data from heat flow calorimetry“. Thesis, London South Bank University, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.367899.

5

Schabort, Willem Petrus Du Toit. „Integration of kinetic models with data from 13C-metabolic flux experiments". Thesis, 2007. http://hdl.handle.net/10019/707.

6

Gaisford, Simon. „Kinetic and thermodynamic investigation of a series of pharmaceutical excipients“. Thesis, University of Kent, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.242881.

7

Ortiz, Joseph Christian. „Estimation of Kinetic Parameters From List-Mode Data Using an Indirect Approach". Diss., The University of Arizona, 2016. http://hdl.handle.net/10150/621785.

Annotation:
This dissertation explores the possibility of using an imaging approach to model classical pharmacokinetic (PK) problems. The kinetic parameters, which describe the uptake rates of a drug within a biological system, are the parameters of interest. Knowledge of the drug uptake in a system is useful in expediting the drug development process, as well as in establishing dosage regimens for patients. Traditionally, the uptake rate of a drug in a system is obtained by sampling the concentration of the drug in a central compartment, usually the blood, and fitting the data to a curve. In a system consisting of multiple compartments, the number of kinetic parameters is proportional to the number of compartments, and in classical PK experiments the number of identifiable parameters is less than the total number of parameters. Using an imaging approach to model classical PK problems, the support region of each compartment within the system is exactly known, and all the kinetic parameters are uniquely identifiable. To solve for the kinetic parameters, an indirect two-step approach was used: first the compartmental activity was obtained from the data, and then the kinetic parameters were estimated. The novel aspect of the research is the use of list-mode data, as opposed to a traditional binned approach, to obtain the activity curves. Using techniques from information-theoretic learning, particularly kernel density estimation, a non-parametric probability density function for the voltage outputs on each photomultiplier tube, for each event, was generated on the fly and used in a least-squares optimization routine to estimate the compartmental activity. The estimability of the activity curves for varying noise levels and time-sample densities was explored. Once an estimate of the activity was obtained, the kinetic parameters were estimated using multiple cost functions and compared to each other using the mean squared error as the figure of merit.
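The on-the-fly kernel density estimation step can be illustrated as follows. The per-event photomultiplier voltages below are simulated stand-ins, and SciPy's Gaussian KDE is used in place of whatever estimator the dissertation implements.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)

# Stand-in for per-event PMT voltage outputs (n_events x n_tubes); real data
# would be list-mode records from the detector described in the dissertation.
voltages = rng.normal(loc=[1.0, 0.8, 1.2], scale=0.1, size=(5000, 3))

# Non-parametric density of the joint PMT response, estimated on the fly.
kde = gaussian_kde(voltages.T)            # gaussian_kde expects (dims, n_samples)

# Evaluate the density for a new event; this likelihood is what a
# least-squares / ML routine would use instead of binned histograms.
new_event = np.array([[1.02], [0.79], [1.18]])
print(f"estimated density at new event: {kde(new_event)[0]:.3f}")
```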
8

Arslan, Mine Özge; Alsoy Altınkaya, Sacide. „Measurement and modeling of thermodynamic and kinetic data of membrane forming systems". [s.l.]: [s.n.], 2007. http://library.iyte.edu.tr/tezlerengelli/master/kimyamuh/T000617.pdf.

9

Sahrom, Sofyan. „Beyond jump height: Understanding the kinematics and kinetics of the countermovement jump from vertical ground reaction force data through the use of higher-order time derivatives“. Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2019. https://ro.ecu.edu.au/theses/2235.

Annotation:
The countermovement jump (CMJ) is a complex, multi-joint movement that has been well studied in human research, largely through analysis of the ground reaction force-time signal obtained during jumping. Such analysis has required the definition and calculation of several critical kinematic and kinetic variables (including peak force, peak eccentric [braking] force, peak power, rate of propulsive force development, and the modified reactive strength index), which are used to describe jump performance as well as the jumper's overall neuromuscular function. The accurate calculation of these variables first requires precise identification of critical kinematic and kinetic 'events' (e.g. start of jump, end of downward [braking] phase, jump take-off point), although the accuracy of event identification has not been thoroughly investigated to date, and incorrect event definitions have been commonly used. The main purpose of the current research was to assess the viability of using the yank-time signal, derived from the vertical ground reaction force-time signal, to (i) provide improved detection accuracy of important kinematic and kinetic events using information contained within the ground reaction force-time signal, events which have not been perfectly identifiable using existing methods (especially for individuals who exhibit specific ground reaction force profiles, e.g. a bimodal propulsive-phase force-time relation); (ii) determine the association between these events and muscle activation and kinematic temporal profiles during the CMJ; and (iii) examine the effect of the new definitions/calculations on the magnitude of important kinematic and kinetic variables. This would allow practitioners to better understand the different movement patterns employed by individuals during CMJs and make appropriate inferences for the detection of technique faults, guidance of exercise programming, etc. The information will also be of interest to animal locomotion biomechanists aiming to infer kinematic and muscle activation events directly from easily obtained force platform recordings, without the need for motion analysis or electromyography. Deriving the yank-time signal from the vertical ground reaction force-time signal is achieved through differentiation, which can significantly reduce the signal-to-noise ratio and possibly prevent meaningful inference. To ensure the most suitable yank-time signal is derived, three different methods of deriving it were compared in Study 1, and it was established that a combination of a 4th-order Butterworth filter and 2nd-order central differentiation yields a yank-time signal suitable for identifying centre-of-mass displacement events during countermovement jumping in humans. In Study 2 the ground reaction force-time signals obtained during maximal CMJs were described in relation to the kinematic and kinetic (including muscular/internal force) events that underpin them, through the use of yank and jerk calculations (the time derivatives of force and acceleration, respectively). Events that have not previously been identifiable directly from the force-time record, including the initiation of knee joint flexion (which occurs ~75 ± 88 ms prior to a detectable decrease in the ground reaction force) and the first movement of the body's centre of mass (which occurs ~81 ± 78 ms after a decrease in the ground reaction force), were found to be easily and accurately identifiable.
The muscle activation and kinematic temporal profiles of individuals with different ground reaction force-time profiles (e.g. unimodal or bimodal propulsive force records) were explored to better understand the factors underpinning the different movement patterns employed during CMJs. This study represents the main work of the thesis project. With the viability of the yank-time signal established, the research then investigated the implications of these new event definitions for commonly calculated CMJ performance variables, including the rate of force development (RFD) and the modified reactive strength index (RSImod). For the latter, its suitability as an analogue for the reactive strength index (RSI) measured during drop jumping was simultaneously explored. RFD and RSImod were found to be under-calculated by 160% and 22%, respectively. More importantly, the difference in RFD led to significant differences in the rank order of individuals within the whole cohort (n = 32), by up to 30 places (i.e. a 93.8% decrease in rank), which in turn would critically affect the conclusions drawn about an individual's physical function. Thus, accurate identification of specific events during jumping using yank-time data leads to different estimates of variables such as RFD and RSImod, which may have implications for human performance testing in the applied sport setting.
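Study 1's recipe, a 4th-order (zero-lag) Butterworth low-pass followed by 2nd-order central differences, can be sketched as below. The sampling rate, cutoff frequency, synthetic GRF shape and onset threshold are all assumptions for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0                                  # force-plate sampling rate, Hz (assumed)
t = np.arange(0.0, 2.0, 1.0 / fs)

rng = np.random.default_rng(3)
# Synthetic CMJ-like vertical GRF: body weight, unweighting dip, propulsion peak.
bw = 750.0
grf = bw - 300 * np.exp(-((t - 0.6) / 0.08) ** 2) \
         + 900 * np.exp(-((t - 1.0) / 0.07) ** 2)
grf += rng.normal(0.0, 5.0, t.size)          # measurement noise

# 4th-order Butterworth low-pass (cutoff assumed), applied with filtfilt
# for zero phase lag, then 2nd-order central differences to get yank.
b, a = butter(4, 10.0 / (fs / 2.0), btype="low")
grf_filt = filtfilt(b, a, grf)
yank = np.gradient(grf_filt, 1.0 / fs)       # N/s, central differences

# Example event: first instant yank drops well below quiet-standing noise
# marks the start of the unweighting (downward) phase.
threshold = -5.0 * np.std(yank[: int(0.3 * fs)])
onset_idx = np.argmax(yank < threshold)
print(f"unweighting onset ≈ {t[onset_idx]:.3f} s")
```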
10

Ainsworth, Holly Fiona. „Bayesian inference for stochastic kinetic models using data on proportions of cell death“. Thesis, University of Newcastle upon Tyne, 2014. http://hdl.handle.net/10443/2499.

Annotation:
The PolyQ model is a large stochastic kinetic model that describes protein aggregation within human cells as they undergo ageing. The presence of protein aggregates in cells is a known feature in many age-related diseases, such as Huntington's. Experimental data are available consisting of the proportions of cell death over time. This thesis is motivated by the need to make inference for the rate parameters of the PolyQ model. Ideally, observations would be obtained on all chemical species, observed continuously in time. More realistically, it would be hoped that partial observations were available on the chemical species, observed discretely in time. However, current experimental techniques only allow noisy observations on the proportions of cell death at a few discrete time points. This presents an ambitious inference problem. The model has a large state space and it is not possible to evaluate the data likelihood analytically. However, realisations from the model can be obtained using a stochastic simulator such as the Gillespie algorithm. The time evolution of a cell can be repeatedly simulated, giving an estimate of the proportion of cell death. Various MCMC schemes can be constructed targeting the posterior distribution of the rate parameters. Although evaluating the marginal likelihood is challenging, a pseudo-marginal approach can be used to replace the marginal likelihood with an easy-to-construct unbiased estimate. An alternative which allows for the sampling error in the simulated proportions is also considered. Unfortunately, in practice, simulation from the model is too slow to be used in an MCMC inference scheme. A fast Gaussian process emulator is therefore used to approximate the simulator. This emulator produces fully probabilistic predictions of the simulator output and can be embedded into inference schemes for the rate parameters. The methods developed are illustrated in two smaller models: the birth-death model and a medium-sized model of mitochondrial DNA. Finally, inference on the large PolyQ model is considered.
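The forward-simulation ingredient described above can be sketched for the birth-death model, the smaller of the illustrative models. This is a generic Gillespie implementation with made-up rate values, not the thesis code; repeated runs give the Monte Carlo summaries that a pseudo-marginal scheme would plug in for the intractable likelihood.

```python
import numpy as np

def gillespie_birth_death(birth_rate, death_rate, x0, t_end, rng):
    """Exact stochastic simulation of a birth-death process:
    X -> X+1 at rate birth_rate, X -> X-1 at rate death_rate * X."""
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_end:
        hazards = np.array([birth_rate, death_rate * x])
        total = hazards.sum()
        if total == 0.0:
            break
        t += rng.exponential(1.0 / total)        # time to next reaction
        x += 1 if rng.uniform() < hazards[0] / total else -1
        times.append(t)
        states.append(x)
    return np.array(times), np.array(states)

rng = np.random.default_rng(4)
# Repeated forward simulation gives a Monte Carlo estimate of summaries
# (here the mean population at t = 10); rate values are illustrative.
finals = [gillespie_birth_death(2.0, 0.1, 10, 10.0, rng)[1][-1] for _ in range(200)]
print(f"estimated E[X(10)]: {np.mean(finals):.1f}")
```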
11

Starkie, Andrew John. „Calorimetric methods for the determination of kinetic and thermometric data for safe chemical manufacture“. Thesis, London South Bank University, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.386234.

12

Hashemi Beni, Leila. „Development of a 3D Kinetic Data Structure adapted for a 3D Spatial Dynamic Field Simulation". Thesis, Université Laval, 2009. http://www.theses.ulaval.ca/2009/26532/26532.pdf.

Annotation:
Geographic information systems (GIS) are widely used for the representation, management and analysis of spatial data in many disciplines, including the geosciences, agriculture, forestry, meteorology and oceanography. In particular, geoscientists have increasingly used these tools for data integration and management in many environmental applications, ranging from water resources management to the study of global warming. Beyond these capabilities, geoscientists need to model and simulate 3D dynamic spatial fields and readily integrate the results with other relevant spatial information in order to gain a better understanding of the environment. However, GIS are very limited for the modeling and simulation of spatial fields, which are mostly three-dimensional and dynamic. These limitations are mainly related to existing GIS spatial data structures, which are 2D and static and are not designed to address the 3D and dynamic aspects of continuous fields. Hence, the main objective of this research work is to improve current GIS capabilities for the modeling and simulation of 3D dynamic spatial fields through the development of a 3D kinetic data structure. Based on our literature review, 3D dynamic Delaunay tetrahedralization (DT) and its dual, the 3D Voronoi diagram (VD), have interesting potential for handling the 3D and dynamic nature of such phenomena. However, because of the special configurations of datasets in geoscience applications, the DT of such data is often inadequate for numerical integration and simulation of a dynamic field. For example, in a hydrogeological simulation, the data form a highly irregular set of points, aligned in the vertical direction and very sparse horizontally, which may result in very large, small or thin tessellation elements. The size and shape of tessellation elements have an important impact on the accuracy of the simulation results as well as on the related computational costs. Therefore, in the first step of the research work, we develop an adaptive refinement method based on a 3D dynamic Delaunay data structure and construct a 3D adaptive tessellation for the representation and simulation of a dynamic field. This tessellation conforms to the complexity of the fields, considering discontinuities and shape and size criteria. In order to deal with the dynamic behavior of 3D spatial fields in a moving framework within GIS, in the second step we extend the 3D dynamic VD to a 3D kinetic VD, in the sense of being capable of keeping the 3D spatial tessellation updated during a dynamic simulation process. We then show how such a spatial data structure can support moving elements within the tessellation and their interactions. The proposed kinetic data structure provides an elegant way to manage connectivity changes between moving elements within the tessellation. In addition, the problems resulting from using a fixed time step, such as overshoots and undetected collisions, are addressed by providing very flexible mechanisms to detect and manage different changes (events) in the spatial tessellation by 3D DT. Finally, we study the potential of the kinetic 3D spatial data structure for the simulation of a dynamic field in 3D space.
For this purpose, we describe in detail the different steps in adapting this data structure, from its discretization for a 3D continuous field to its numerical integration based on an event-driven method, and show how the tessellation moves and how the topology, connectivity and physical parameters of the tessellation cells are locally updated following any event in the tessellation. For the validation of the proposed spatial data structure and its potential for the simulation of a dynamic field, three case studies are presented in the thesis. According to our observations during the simulation process, the data structure is maintained and the 3D spatial information is managed adequately. Furthermore, the results obtained from the experiments are very satisfactory and are comparable with results obtained from other existing methods for the simulation of the same dynamic field. Finally, some limitations of the proposed approach, related to the development of the 3D kinetic data structure itself and its adaptation for the representation and simulation of a 3D dynamic spatial field, are discussed, and some solutions are suggested for improving the proposed approach.
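As a point of departure, the static 3D Delaunay tetrahedralization of a vertically dense, horizontally sparse point set can be built with off-the-shelf tools, as sketched below (hypothetical sampling geometry; SciPy's structure is static, not the kinetic structure developed in the thesis). The edge-length ratio diagnostic illustrates the sliver elements that motivate the adaptive refinement step.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(5)

# Vertically dense, horizontally sparse sampling, as in the hydrogeological
# example: few (x, y) locations, many depths. Coordinates are invented.
xy = rng.uniform(0.0, 1000.0, size=(15, 2))          # sparse horizontal positions, m
z = np.linspace(0.0, 50.0, 40)                       # dense vertical sampling, m
points = np.array([[x, y, d] for x, y in xy for d in z])

tess = Delaunay(points)                              # 3D Delaunay tetrahedralization

# Shape quality diagnostic: edge-length ratio of each tetrahedron. Sliver-like
# elements (large ratio) are what adaptive refinement would target.
def edge_ratio(simplex):
    pts = points[simplex]
    edges = [np.linalg.norm(pts[i] - pts[j])
             for i in range(4) for j in range(i + 1, 4)]
    return max(edges) / min(edges)

ratios = np.array([edge_ratio(s) for s in tess.simplices])
print(f"{tess.simplices.shape[0]} tetrahedra, worst edge ratio: {ratios.max():.1f}")
```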
13

Vanacker, Thomas. „Improvement of an advanced kinetic Monte Carlo algorithm through storing and recycling factorised transition matrices“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-239920.

Annotation:
The Kinetic Monte Carlo algorithm is a universal method to simulate the evolution of systems governed by a master equation. However, this approach is severely limited by the kinetic trapping of the simulated trajectories in low energy basins. To alleviate this issue, non-local transitions escaping the trapping basins are performed based on a factorisation of the transition matrix associated with the master equation. Whenever trapping becomes severe, the simulation repeatedly visits a limited number of basins and performs the same factorisations many times. In this report, we present two methods aiming at further improving the efficiency of the factorised Kinetic Monte Carlo algorithm. The first method consists of storing and recycling the transition matrix factorisations, while the second method constructs on-the-fly a graph connecting the factorised transition matrices. The efficiency of these methods is demonstrated on simulations of cluster migration in an iron-based alloy.
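A toy analogue of the store-and-recycle idea is sketched below: a plain rejection-free KMC walk that memoises per-state quantities so that revisited basins reuse earlier work. The lattice, rates and cache key are invented for illustration and stand in for the much more expensive transition-matrix factorisations of the actual method.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy 1D lattice of basins with site-dependent hop rates; real applications
# (e.g. cluster migration in an Fe alloy) would derive rates from barriers.
n_sites = 50
rate_left = rng.uniform(0.1, 1.0, n_sites)
rate_right = rng.uniform(0.1, 1.0, n_sites)

cache = {}   # memoises per-state quantities, mimicking stored factorisations

def exit_rates(site):
    """Compute (or recycle) everything needed to leave this state."""
    if site not in cache:                  # expensive step done once per basin
        rates = np.array([rate_left[site], rate_right[site]])
        cache[site] = (rates, rates.sum())
    return cache[site]

site, t = 25, 0.0
for _ in range(10_000):                    # revisits recycle the cached work
    rates, total = exit_rates(site)
    t += rng.exponential(1.0 / total)      # KMC residence time
    step = -1 if rng.uniform() < rates[0] / total else +1
    site = (site + step) % n_sites

print(f"simulated time: {t:.1f}, distinct basins processed: {len(cache)}")
```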
14

Nguyen, Thi Huyen Tram. „Handling data below the quantification limit in viral kinetic modeling for model evaluation and prediction of treatment outcome“. Paris 7, 2014. http://www.theses.fr/2014PA077224.

Annotation:
Viral kinetic (VK) models are useful tools to understand the lifecycle of the hepatitis C virus and the mechanisms of action of antiviral agents. The understanding of virologic response can be improved by including pharmacokinetic (PK) information. VK models might also be useful to predict individual treatment outcome and support treatment personalization. One common problem in VK modeling is data below the quantification limit (BQL). However, the impact of these data on model evaluation and treatment-response prediction, and how to properly handle them in these steps, are still in question. We extended prediction discrepancies (pd) and normalized prediction distribution errors (npde) to handle BQL data and evaluated them in a simulation study. The extended metrics perform better, with satisfactory type I errors and powers, than methods omitting BQL data. We developed a PK-VK model to characterize the VK response to alisporivir, a cyclophilin inhibitor, given with or without peg-IFN. The model provided good predictions of the virologic responses (BQL data fraction and SVR rate) for different combinations and doses of alisporivir in another study. We also studied by simulation the use of a VK model to predict individual treatment outcome and evaluated several factors that can impact this prediction: the method for handling BQL data, the design, and a priori information on population parameters. We showed that Bayesian estimation of individual parameters can give good predictions of treatment outcome from only a few early responses, provided that BQL data are correctly handled and correct a priori information is available.
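The censored-likelihood treatment of BQL data (the "M3"-style approach common in this field) can be sketched as follows; the log-linear decline model, parameter values and LOQ are illustrative assumptions, not the thesis's viral kinetic model.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(7)

# Synthetic log10 viral-load decline with a quantification limit (LOQ).
t = np.tile(np.arange(0.0, 15.0, 1.0), (20, 1))
true_v0, true_slope, sigma = 6.0, -0.35, 0.3
y = true_v0 + true_slope * t + rng.normal(0.0, sigma, t.shape)
loq = 2.0
censored = y < loq                              # BQL observations

def neg_loglik(theta):
    v0, slope, sd = theta[0], theta[1], np.exp(theta[2])
    pred = v0 + slope * t
    # Observed points contribute the usual Gaussian density...
    ll_obs = norm.logpdf(y[~censored], pred[~censored], sd).sum()
    # ...while BQL points contribute P(Y < LOQ) instead of being
    # discarded or imputed at LOQ/2 (only the indicator is used).
    ll_bql = norm.logcdf(loq, pred[censored], sd).sum()
    return -(ll_obs + ll_bql)

fit = minimize(neg_loglik, x0=np.array([5.0, -0.1, 0.0]), method="Nelder-Mead")
print(f"estimated slope: {fit.x[1]:.3f} (true {true_slope})")
```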
15

Riposo, Julien. „Computational and Mathematical Methods for Data Analysis in Biology and Finance“. Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066177/document.

Annotation:
Mathematics is understood as a set of abstract ideas, in the sense that the real world, or rather reality, need not intervene. However, some mathematical facts observable in experimental or simulated data can be counter-intuitive. The thesis is divided into two parts. First, we mathematically study matrices of the kind encountered in biology and finance. In particular, we highlight the following counter-intuitive fact: for these matrices, the eigenvector associated with the largest eigenvalue is very close to the sum of the rows of the matrix, taken column by column. We also discuss applications to graph theory, with many numerical simulations and data analyses. Second, we address the genomic contact problem: given a contact map, a real current challenge is to recover the 3D structure of the DNA. We propose several matrix-based data analysis methods, one of which reveals the existence, in the nucleus, of disjoint regions in which the interactions are of different types. These regions are nuclear compartments. Together with other biological data, we characterize the biological function of each of these compartments. The analysis tools are those used in finance to analyze autocorrelation matrices, or even time series.
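The counter-intuitive fact about the leading eigenvector is easy to check numerically; the sketch below uses a generic symmetric positive random matrix as a stand-in for the correlation-type matrices studied in the thesis.

```python
import numpy as np

rng = np.random.default_rng(8)

# A symmetric, predominantly positive matrix of the correlation-like type
# discussed in the thesis (hypothetical stand-in for a contact or
# autocorrelation matrix).
n = 200
A = rng.uniform(0.0, 1.0, size=(n, n))
A = (A + A.T) / 2.0

eigvals, eigvecs = np.linalg.eigh(A)
v1 = eigvecs[:, -1]                      # eigenvector of the largest eigenvalue

row_sums = A.sum(axis=1)                 # column-by-column sum of the rows
row_sums /= np.linalg.norm(row_sums)     # normalise for comparison
v1 = v1 * np.sign(v1 @ row_sums)         # fix the arbitrary sign

# Cosine similarity close to 1 illustrates the counter-intuitive fact.
print(f"cosine(v1, normalised row sums) = {v1 @ row_sums:.4f}")
```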
16

Claas, Allison. „Systems modeling of quantitative kinetic data identifies receptor tyrosine kinase-specific resistance mechanisms to MAPK pathway inhibition in cancer“. Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/112514.

Annotation:
Targeted cancer therapeutics have seen their clinical efficacy constrained by resistance. Indicators of resistance may include genetic mutations or protein-level overexpression of targeted or bypass receptor tyrosine kinases (RTKs). While the latter is often attributed to gene amplification, genetic characterization of tumor biopsies has failed to explain a substantial proportion of resistance. We hypothesize that post-synthesis mechanisms governing RTK levels may represent underappreciated contributors to drug resistance. We have developed an experimental and computational model for the simultaneous analysis of synthesis and post-synthesis mechanisms contributing to protein-level changes. The experimental component quantitatively measures processes operating on multiple time scales in a multiplexed fashion, with methods generalizable to any membrane-bound protein. Parameter distribution estimation, by fitting data to an integrative cellular model, quantifies native RTK processes and enables the study of treatment-induced mechanistic changes. Triple-negative breast cancer cell lines have been reported to up-regulate many RTKs in response to Mek inhibition, although with conflicting mechanisms. Upon integrated analysis, we find that both Axl and Her2 show increased lysate levels after Mek inhibition with three Mek inhibitors: selumetinib, binimetinib, and PD0325901. The Axl changes are attributed to a decrease in proteolytic shedding and protein degradation, and the Her2 changes to decreased synthesis. Met shows a decrease in proteolytic shedding similar to Axl, but compensating synthesis and degradation mechanisms counteract the effect. By contrast, Erk inhibition shows minor effects on RTK reprogramming, with the Erk dimer inhibitor DEL-22379 exhibiting RTK-specific protease effects and highlighting RTK-specific outcomes of decreased endocytosis. This quantitative model enables the prediction of combination therapies with mechanistic process inhibitors. Our predictions match experimental observations that the increase in Axl lysate level with Mek inhibition remains unchanged in the presence of transcriptional inhibition, supporting a role for post-synthesis mechanisms. Through additional combination with an Axl inhibitor, we are able to strengthen the anti-proliferative and anti-migratory effect of Mek and transcriptional inhibition in TNBC. This study not only provides a novel and broadly applicable quantitative framework for characterizing RTK level changes, but also emphasizes how RTK reprogramming in drug resistance varies with the RTK, the pathway target, and the inhibitor.
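The kind of mass-action bookkeeping behind such an integrative model can be sketched with a one-compartment receptor balance; the rate constants and the halving of shedding/degradation below are illustrative assumptions, not fitted values from the thesis.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal mass-action model of receptor level R(t):
#   dR/dt = synthesis - (degradation + shedding) * R
# Rate constants are illustrative, not the fitted values from the thesis.
k_syn, k_deg, k_shed = 100.0, 0.05, 0.02          # molecules/h, 1/h, 1/h

def rtk(t, r, k_syn, k_deg, k_shed):
    return k_syn - (k_deg + k_shed) * r

r0 = [k_syn / (k_deg + k_shed)]                   # start at steady state

# Mimic a post-synthesis response to treatment: halve shedding and
# degradation at t = 0 and watch total receptor accumulate.
sol = solve_ivp(rtk, (0.0, 48.0), r0, args=(k_syn, 0.5 * k_deg, 0.5 * k_shed),
                t_eval=np.linspace(0.0, 48.0, 7))
for t, r in zip(sol.t, sol.y[0]):
    print(f"t = {t:4.0f} h   R = {r:7.0f} (baseline {r0[0]:.0f})")
```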
17

Radchenko, Taras, Valentyn Tatarenko and Sergiy Bokoch. „Calculation of diffusivities in ordering f.c.c. alloy by the kinetic data about short- and long-range order parameters' relaxation". Universitätsbibliothek Leipzig, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-196109.

18

Концевой, А. Л., and С. А. Концевой. „Розрахунок кінетичних параметрів за дериватографічними даними" [Calculation of kinetic parameters from derivatographic data]. Thesis, Сумський державний університет, 2016. http://essuir.sumdu.edu.ua/handle/123456789/54457.

Annotation:
Derivatographic investigation is based on heating a sample at a constant rate of temperature increase. This makes it possible to detect and study the features of the exo- and endothermic processes of chemical transformation. The thermogravimetric (TG) curve describes the mass loss of the sample as a function of temperature (i.e., it is obtained under non-isothermal conditions) and can replace a series of isothermal curves of conversion versus time, which allows the kinetic laws of the process (a topochemical one, for example) to be studied.
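For kinetic parameters from a single constant-heating-rate TG curve, a classical route is the Coats-Redfern linearisation; the sketch below applies it to synthetic first-order data (the thesis may use a different method, and all parameter values here are invented).

```python
import numpy as np

R = 8.314                      # J/(mol K)
beta = 10.0 / 60.0             # heating rate, K/s (10 K/min)

# Synthetic TG data for a first-order process with known Arrhenius parameters,
# standing in for a digitised derivatogram.
Ea_true, A_true = 120e3, 1e9                      # J/mol, 1/s
T = np.linspace(480.0, 640.0, 200)                # K
# Approximate first-order conversion alpha(T) in the Coats-Redfern form.
g = (A_true * R * T**2) / (beta * Ea_true) * np.exp(-Ea_true / (R * T))
alpha = 1.0 - np.exp(-g)

# Coats-Redfern linearisation for n = 1:
#   ln[-ln(1-alpha)/T^2] = ln(A R / (beta Ea)) - Ea/(R T)
mask = (alpha > 0.05) & (alpha < 0.95)            # usable conversion range
y = np.log(-np.log(1.0 - alpha[mask]) / T[mask]**2)
x = 1.0 / T[mask]
slope, intercept = np.polyfit(x, y, 1)

print(f"Ea ≈ {-slope * R / 1e3:.0f} kJ/mol (true {Ea_true / 1e3:.0f})")
```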
19

Tonietto, Matteo. „A Unified Framework For Blood Data Modeling In Dynamic Positron Emission Tomography Studies“. Doctoral thesis, Università degli studi di Padova, 2017. http://hdl.handle.net/11577/3424684.

Annotation:
Quantification of dynamic PET images requires the measurement of radioligand concentrations in the arterial plasma. In general, this cannot be derived from PET images directly but it must be measured from blood samples taken from the subject’s radial artery. The aim of this thesis was to develop and validate a unified framework for the blood data modeling, which was both biologically and experimentally informed, in order to achieve a better description of the blood data.
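A common parametric description of the post-peak arterial curve is a sum of decaying exponentials fitted to the blood samples; the sketch below shows that baseline approach with invented sample times and values (the thesis develops a unified framework that goes beyond this).

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(10)

# Bi-exponential model for the arterial plasma concentration after the peak.
def bi_exp(t, a1, l1, a2, l2):
    return a1 * np.exp(-l1 * t) + a2 * np.exp(-l2 * t)

t = np.array([1, 2, 4, 6, 10, 15, 20, 30, 45, 60, 75, 90], float)  # min
truth = (20.0, 0.5, 5.0, 0.02)                    # invented amplitudes/rates
samples = bi_exp(t, *truth) * (1.0 + rng.normal(0.0, 0.03, t.size))

popt, pcov = curve_fit(bi_exp, t, samples, p0=(10.0, 0.2, 3.0, 0.01))
print("fitted amplitudes/rates:", np.round(popt, 3))
```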
20

Doan, Xuan Tien. „Multivariate data analysis for embedded sensor networks within the perishable goods supply chain“. Thesis, University of Manchester, 2011. https://www.research.manchester.ac.uk/portal/en/theses/multivariate-data-analysis-for-embedded-sensor-networks-within-the-perishable-goods-supply-chain(0b555420-442b-4787-b730-8acf95878996).html.

Annotation:
This study was aimed at exploring data analysis techniques for generating accurate estimates of the loss in quality of fresh fruits, vegetables and cut flowers in chilled supply chains, based on data from advanced sensors. It was motivated by the recent interest in the application of advanced sensors, by emerging concepts in quality-controlled logistics, and by the desire to minimise quality losses during transport and storage of the produce. Cut roses were used in this work, although the findings will also be applicable to other produce. The literature has reported that whilst temperature is considered the most critical post-harvest factor, others such as growing conditions could also be important in the senescence of cut roses. Kinetic modelling is the most commonly used modelling approach for shelf life prediction of foods and perishable produce, but not for estimating the vase life (VL) of cut flowers, and so this was explored in this work along with multiple linear regression (MLR) and partial least squares (PLS). As the senescence of cut roses is not fully understood, kinetic modelling could not be implemented directly. Consequently, a novel technique, called the Kinetic Linear System (KLS), was developed based on kinetic modelling principles. Simulation studies of shelf life predictions for tomatoes, mushrooms, seasoned soybean sprouts, cooked shrimps and other seafood products showed that the KLS models could effectively replace the kinetic ones. With respect to VL predictions, KLS, PLS and MLR were investigated using data from an in-house experiment with cut roses from Cookes Rose Farm (Jersey). The analysis concluded that when the initial and final VLs were available for model calibration, effective estimates of the post-harvest loss in VL of cut roses could be obtained using the post-harvest temperature; when the initial VLs were not available, such effective estimates could not be obtained. Moreover, pre-harvest conditions were shown to correlate with the VL loss, but the correlation was too weak to produce or improve an effective estimate of the loss. The results showed that KLS performed best, PLS was acceptable, and MLR was not adequate. In another experiment, boxes of cut roses were transported from a Kenyan farm to a UK distribution centre. Using the KLS and PLS techniques, the analysis showed that the growing temperature could be used to obtain effective estimates of the VLs at the farm and at the distribution centre, and also of the in-transit loss. Further, using the post-harvest temperature would lead to a smaller error for the VL at the distribution centre and the VL loss. Nevertheless, the estimates of the VL loss may not be useful practically, due to the excessive relative prediction error. Overall, although PLS had a slightly smaller prediction error, KLS worked effectively in many cases where PLS failed, and it could handle constraints while PLS could not. In conclusion, KLS and PLS can be used to generate effective estimates of the post-harvest VL loss of cut roses based on post-harvest temperature stresses recorded by advanced sensors. However, the estimates may not be useful practically due to significant relative errors. Alternatively, the pre-harvest temperature could be used, although it may lead to slightly higher errors. Although PLS had slightly smaller errors, KLS was more robust and flexible.
Further work is recommended in the objective evaluations of product quality, alternative non-linear techniques and dynamic decision support system.
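Classical kinetic modelling of quality loss, the baseline on which the KLS technique builds, can be sketched as a first-order decay with Arrhenius temperature dependence integrated over a sensor temperature log; the rate constant, activation energy and temperature profile below are all illustrative assumptions.

```python
import numpy as np

R = 8.314

# First-order quality-loss kinetics with Arrhenius temperature dependence:
#   dQ/dt = -k(T) Q,  k(T) = k_ref * exp(-Ea/R * (1/T - 1/T_ref))
# Parameter values are illustrative, not calibrated to the thesis data.
k_ref, Ea, T_ref = 0.010, 60e3, 275.15            # 1/h at a 2 °C reference

def quality_remaining(temps_c, dt_h, q0=1.0):
    """Integrate quality over a logged temperature profile (one reading per dt_h)."""
    T = np.asarray(temps_c) + 273.15
    k = k_ref * np.exp(-Ea / R * (1.0 / T - 1.0 / T_ref))
    return q0 * np.exp(-np.sum(k) * dt_h)         # piecewise-constant integration

# A hypothetical farm-to-distribution-centre temperature log (hourly readings).
profile = np.concatenate([np.full(24, 2.0), np.full(12, 8.0), np.full(36, 4.0)])
print(f"remaining quality fraction: {quality_remaining(profile, 1.0):.2f}")
```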
21

Sirjean, Baptiste. „Étude cinétique de réactions de pyrolyse et de combustion d'hydrocarbures cycliques par les approches de chimie quantique“. Thesis, Vandoeuvre-les-Nancy, INPL, 2007. http://www.theses.fr/2007INPL093N/document.

Annotation:
Petroleum fuels are the world's most important primary energy source, and the need to maintain their supply is a major current challenge with both economic and environmental stakes. Decreasing fuel consumption is one of the most efficient ways to reconcile the goals of energy cost and environmental protection. Numerical simulation therefore becomes a very important tool to optimize fuels and engines. Detailed chemical kinetic models are required to reproduce the reactivity of fuels and to characterize the amounts of emitted pollutants. Such models involve a very large number of chemical species and elementary reactions, and the determination of thermodynamic and kinetic data for a given species is a critical problem. Nowadays, quantum chemistry methods are able to calculate thermodynamic data accurately for a large number of chemical systems and to elucidate the reactivity of these systems. In this work we used quantum chemistry to study the unimolecular reactions (initiations, molecular reactions, β-scissions, cyclic ether formations) involved in the decomposition of monocyclic and polycyclic hydrocarbons. From the results of the quantum chemical calculations, a detailed chemical kinetic mechanism for the pyrolysis of a polycyclic alkane was developed and validated against experimental data from the literature.
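Once a barrier has been computed quantum-chemically, an elementary rate constant typically follows from transition state theory; the Eyring evaluation below uses an invented activation free energy, not one of the barriers computed in the thesis.

```python
import numpy as np

# Eyring (transition state theory) rate constant from an activation
# free energy: k(T) = (kB*T/h) * exp(-dG_act / (R*T)).
kB, h, R = 1.380649e-23, 6.62607015e-34, 8.314

def eyring_rate(dG_act_kj_mol, T):
    return (kB * T / h) * np.exp(-dG_act_kj_mol * 1e3 / (R * T))

# Illustrative barrier for a unimolecular decomposition step (assumed value).
for T in (800.0, 1000.0, 1200.0):                 # pyrolysis temperatures, K
    print(f"T = {T:6.0f} K   k = {eyring_rate(180.0, T):.3e} 1/s")
```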
22

Radchenko, Taras, Valentyn Tatarenko and Sergiy Bokoch. „Calculation of diffusivities in ordering f.c.c. alloy by the kinetic data about short- and long-range order parameters' relaxation". Diffusion fundamentals 2 (2005) 57, pp. 1-2, 2005. https://ul.qucosa.de/id/qucosa%3A14390.

23

Peng, Zhe, Douglas A. Day, Amber M. Ortega, Brett B. Palm, Weiwei Hu, Harald Stark, Rui Li, Kostas Tsigaridis, William H. Brune and Jose L. Jimenez. „Non-OH chemistry in oxidation flow reactors for the study of atmospheric chemistry systematically examined by modeling". COPERNICUS GESELLSCHAFT MBH, 2016. http://hdl.handle.net/10150/614743.

Annotation:
Oxidation flow reactors (OFRs) using low-pressure Hg lamp emission at 185 and 254 nm produce OH radicals efficiently and are widely used in atmospheric chemistry and other fields. However, knowledge of detailed OFR chemistry is limited, allowing speculation in the literature about whether some non-OH reactants, including several not relevant for tropospheric chemistry, may play an important role in these OFRs. These non-OH reactants are UV radiation, O(1D), O(3P), and O3. In this study, we investigate the relative importance of other reactants to OH for the fate of reactant species in OFR under a wide range of conditions via box modeling. The relative importance of non-OH species is less sensitive to UV light intensity than to water vapor mixing ratio (H2O) and external OH reactivity (OHRext), as both non-OH reactants and OH scale roughly proportionally to UV intensity. We show that for field studies in forested regions and also the urban area of Los Angeles, reactants of atmospheric interest are predominantly consumed by OH. We find that O(1D), O(3P), and O3 have relative contributions to volatile organic compound (VOC) consumption that are similar or lower than in the troposphere. The impact of O atoms can be neglected under most conditions in both OFR and troposphere. We define “riskier OFR conditions” as those with either low H2O (< 0.1 %) or high OHRext ( ≥  100 s−1 in OFR185 and > 200 s−1 in OFR254). We strongly suggest avoiding such conditions as the importance of non-OH reactants can be substantial for the most sensitive species, although OH may still dominate under some riskier conditions, depending on the species present. Photolysis at non-tropospheric wavelengths (185 and 254 nm) may play a significant (> 20 %) role in the degradation of some aromatics, as well as some oxidation intermediates, under riskier reactor conditions, if the quantum yields are high. Under riskier conditions, some biogenics can have substantial destructions by O3, similarly to the troposphere. Working under low O2 (volume mixing ratio of 0.002) with the OFR185 mode allows OH to completely dominate over O3 reactions even for the biogenic species most reactive with O3. Non-tropospheric VOC photolysis may have been a problem in some laboratory and source studies, but can be avoided or lessened in future studies by diluting source emissions and working at lower precursor concentrations in laboratory studies and by humidification. Photolysis of secondary organic aerosol (SOA) samples is estimated to be significant (> 20 %) under the upper limit assumption of unity quantum yield at medium (1 × 1013 and 1.5 × 1015 photons cm−2 s−1 at 185 and 254 nm, respectively) or higher UV flux settings. The need for quantum yield measurements of both VOC and SOA photolysis is highlighted in this study. The results of this study allow improved OFR operation and experimental design and also inform the design of future reactors.
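The relative-importance bookkeeping in the paper reduces, per reactant, to comparing rate constant times exposure for each oxidant; the snippet below illustrates the arithmetic with typical literature magnitudes for an O3-reactive biogenic VOC, not values taken from the paper.

```python
# Fractional fate of a VOC in an oxidation flow reactor: compare loss to OH
# against loss to O3 given exposures and rate constants. Values below are
# typical literature magnitudes (e.g. for a monoterpene), purely illustrative.
k_oh = 5e-11          # cm^3 molec^-1 s^-1
k_o3 = 1e-16          # cm^3 molec^-1 s^-1
oh_exposure = 1e12    # molec cm^-3 s (multi-day equivalent aging)
o3_exposure = 1e17    # molec cm^-3 s

loss_oh = k_oh * oh_exposure
loss_o3 = k_o3 * o3_exposure
frac_oh = loss_oh / (loss_oh + loss_o3)
print(f"fraction of VOC consumed by OH: {frac_oh:.2%}")
```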
24

Oras, Aldo. „Kinetic aspects of dATPαS interaction with P2Y₁ Receptor". Online version, 2004. http://dspace.utlib.ee/dspace/bitstream/10062/626/5/oras.pdf.

25

Malangi, Gajendramurthy Chunchesh. „Vers la conception d'une sonde RMN immersible pour le suivi des réactions en solution“. Electronic Thesis or Diss., Strasbourg, 2024. http://www.theses.fr/2024STRAF007.

Annotation:
This endeavour represents a pioneering effort to design and develop a cost-effective, low-field Dip-NMR (immersible NMR) system dedicated to monitoring reaction mixtures at the closest possible source of information. While we have successfully designed and optimized the first version of a prototype capable of acquiring single-scan NMR spectra of pure samples, further focused research is needed to integrate the prototype into an immersible probe unit capable of multi-channel acquisition, in order to realize the proposed Dip-NMR system. The NMR spectra obtained from the Dip-NMR system were notably broad and necessitated dedicated chemometric methods for quantification and kinetic data extraction. Known reactions were monitored on a high-field, high-resolution NMR spectrometer, with the kinetic data from this instrument serving as a benchmark for comparison against the data obtained from the Dip-NMR system. Various chemometric methods were explored and tested on mimicked (deliberately degraded) spectra for quantification and kinetic data extraction, with the results subsequently compared with the benchmark data. In parallel with the project's goal, hydrosilylation reactions of organic substrates such as nitriles, cyclic amides and esters were catalyzed using an Ir(III) catalyst and monitored on a high-field, high-resolution NMR spectrometer. These investigations yielded valuable insights into the reactions, contributing to a deeper understanding of the processes involved, and established important reference kinetic data for future testing of an advanced Dip-NMR probe prototype.
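Once peak integrals have been extracted chemometrically, kinetic data extraction for a simple reaction often reduces to fitting a first-order decay; the sketch below uses synthetic integrals and an assumed rate constant for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(9)

# Reactant peak integrals from a monitored reaction, with noise standing in
# for broad low-field lineshapes; a first-order decay is assumed.
t = np.linspace(0.0, 120.0, 25)                 # minutes
k_true = 0.03
integrals = np.exp(-k_true * t) + rng.normal(0.0, 0.03, t.size)

def first_order(t, a0, k):
    return a0 * np.exp(-k * t)

popt, pcov = curve_fit(first_order, t, integrals, p0=(1.0, 0.01))
k_hat, k_err = popt[1], np.sqrt(pcov[1, 1])
print(f"k = {k_hat:.4f} ± {k_err:.4f} 1/min (true {k_true})")
```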
APA, Harvard, Vancouver, ISO und andere Zitierweisen
26

Li, Zhonglin. „Tribological, Kinetic and Thermal Characteristics of Copper Chemical Mechanical Planarization“. Diss., Tucson, Arizona : University of Arizona, 2005. http://etd.library.arizona.edu/etd/GetFileServlet?file=file:///data1/pdf/etd/azu%5Fetd%5F1378%5F1%5Fm.pdf&type=application/pdf.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
27

Kon, Kam King Guillaume. „Revisiting Species Sensitivity Distribution : modelling species variability for the protection of communities“. Thesis, Lyon 1, 2015. http://www.theses.fr/2015LYO10194/document.

Der volle Inhalt der Quelle
Annotation:
Species Sensitivity Distribution (SSD) is a method used by scientists and regulators worldwide to determine safe concentrations for various environmental contaminants. Although widely used, the approach suffers from several methodological flaws, notably because it is based on an incomplete use of the experimental data. This thesis revisits classical SSD, attempting to overcome this shortcoming. First, we present a methodology for including censored data in SSD, together with a web tool to apply it easily. Second, we propose to model all the information present in the experimental data in order to describe the response of a community exposed to a contaminant; to this end, we develop a hierarchical model within a Bayesian framework. On a dataset describing the effect of pesticides on diatom growth, we illustrate how this method, which accounts for variability as well as uncertainty, benefits risk assessment. Third, we extend the hierarchical approach to include the temporal dimension of the community response. The objective is to remove the dependence of risk assessment on the date of the last experimental observation, in order to describe the time evolution of the response precisely and to extrapolate to longer times. This approach is built on a toxico-dynamic model and illustrated on a dataset describing the salinity tolerance of freshwater species.
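For context, the classical SSD that this thesis revisits amounts to fitting a distribution to species toxicity endpoints and reading off a low percentile such as the HC5 (hazardous concentration for 5 % of species). A minimal sketch under that classical (non-Bayesian, non-censored) assumption, with invented EC50 values:

```python
# Sketch of a classical SSD: a log-normal distribution fitted to species
# toxicity endpoints, with the HC5 as the protective threshold.
# The EC50 values below are invented for illustration.
import numpy as np
from scipy import stats

# Hypothetical EC50 values (mg/L) for a handful of species
ec50 = np.array([0.8, 1.5, 2.3, 4.1, 7.9, 12.0, 18.5, 30.2])

log_x = np.log10(ec50)
mu, sigma = log_x.mean(), log_x.std(ddof=1)

# HC5: concentration below which only 5% of species are affected
hc5 = 10 ** stats.norm.ppf(0.05, loc=mu, scale=sigma)
print(f"fitted log10 mean = {mu:.2f}, sd = {sigma:.2f}, HC5 = {hc5:.2f} mg/L")
```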
APA, Harvard, Vancouver, ISO und andere Zitierweisen
28

Ogaja, Clement, Surveying & Spatial Information Systems, Faculty of Engineering, UNSW. „A framework in support of structural monitoring by real time kinematic GPS and multisensor data“. Awarded by: University of New South Wales. School of Surveying and Spatial Information Systems, 2002. http://handle.unsw.edu.au/1959.4/18662.

Der volle Inhalt der Quelle
Annotation:
Due to structural damage from earthquakes and strong winds, engineers and scientists have focused on performance-based design methods and on sensors directly measuring relative displacements. The monitoring methods being considered include those using Global Positioning System (GPS) technology. However, while the technical feasibility of using GPS for recording relative displacements has been (and is still being) proven, the challenge for users is to determine how to make use of the relative displacements being recorded. This thesis proposes a mathematical framework that supports the use of RTK-GPS and multisensor data for structural monitoring. Its main contributions are as follows: (a) Most of the emerging GPS-based structural monitoring systems consist of GPS receiver arrays (dozens or hundreds deployed on a structure), and the integrity of the GPS data generated by such systems must be addressed. Based on this recognition, a methodology for integrity monitoring using a data redundancy approach has been proposed and tested for a multi-antenna measurement environment. The benefit of this approach is that it verifies the reliability of both the measuring instruments and the processed data, unlike existing methods, which verify only the reliability of the processed data. (b) For real-time structural monitoring applications, high-frequency data must be generated. A methodology that can extract, in real time, deformation parameters from high-frequency RTK measurements is proposed. The methodology is tested and shown to be effective for determining the amplitude and frequency of structural dynamics; it is thus suitable for the dynamic monitoring of towers, tall buildings, and long-span suspension bridges. (c) In the overall effort of deformation analysis, large quantities of observations are required, both of causative phenomena (e.g., wind velocity, temperature, pressure) and of response effects (e.g., accelerations, coordinate displacements, tilt, strain). One of the problems to be circumvented is dealing with the excess data generated by process automation and the large number of instruments employed. This research proposes a methodology based on multivariate statistical process control, whose benefit is that excess data generated online are reduced while a timely analysis of the GPS data (which give direct coordinate results) is maintained. Based on the above contributions, a demonstrator software system was designed and implemented for the Windows operating system. Tests of the system with datasets from UNSW experiments, the Calgary Tower monitoring experiment in Canada, the Xiamen Bank Building monitoring experiment in China, and the Republic Plaza Building monitoring experiment in Singapore have shown good results.
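Contribution (c) rests on multivariate statistical process control, in which a statistic such as Hotelling's T² compares each incoming multisensor epoch against baseline variability, so only anomalous epochs need detailed analysis. A minimal sketch of that idea, with simulated baseline data and an assumed 99 % chi-square control limit (the thesis's actual procedure is not reproduced here):

```python
# Sketch: Hotelling's T^2 control chart over three simulated sensor channels.
# Baseline statistics define "in control"; epochs above the limit raise an alarm.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
baseline = rng.normal(size=(500, 3))          # 500 baseline epochs, 3 sensors
mean = baseline.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(baseline, rowvar=False))

def t2(x):
    """Hotelling's T^2 distance of one epoch from the baseline mean."""
    d = x - mean
    return d @ cov_inv @ d

limit = stats.chi2.ppf(0.99, df=3)            # 99% limit, chi-square approximation
new_epoch = np.array([0.1, 3.5, -0.2])        # simulated displacement epoch
print("T2 =", round(float(t2(new_epoch)), 2), "alarm:", t2(new_epoch) > limit)
```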
APA, Harvard, Vancouver, ISO und andere Zitierweisen
29

Yin, KangKang. „Data-driven kinematic and dynamic models for character animation“. Thesis, University of British Columbia, 2007. http://hdl.handle.net/2429/31759.

Der volle Inhalt der Quelle
Annotation:
Human motion plays a key role in the production of films, video games, and virtual reality applications, and in the control of humanoid robots. Unfortunately, it is hard to generate high-quality human motion for character animation either manually or algorithmically. As a result, approaches based on motion capture data have become a central focus of character animation research in recent years. We observe three principal weaknesses in previous work using data-driven approaches for modelling human motion. First, basic balance behaviours and locomotion tasks are currently not well modelled. Second, the ability to produce high-quality motion that is responsive to its environment is limited. Third, knowledge about human motor control is not well utilized. This thesis develops several techniques that generalize motion-captured character animation to balancing and responsive behaviours, focusing on balance and locomotion tasks with an emphasis on responses to disturbances, user interaction, and motor control integration. For this purpose, we investigate both kinematic and dynamic models. Kinematic models are intuitive and fast to construct, but generalize narrowly and thus require more data. A novel performance-driven animation interface to a motion database is developed, which allows a user to use foot pressure to control an avatar to balance in place, punch, kick, and step. We also present a virtual avatar that can respond to pushes, with the aid of a motion database of push responses; consideration is given to dynamics using motion selection and adaptation. Dynamic modelling using forward dynamics simulations requires solving difficult problems related to motor control, but permits wider generalization from given motion data. We first present a simple neuromuscular model that decomposes joint torques into feedforward and low-gain feedback components (see the sketch below) and can deal with small perturbations that are assumed not to affect balance. To cope with large perturbations, we develop explicit balance recovery strategies for a standing character pushed in any direction. Lastly, we present a simple continuous balance feedback mechanism that enables the control of a large variety of locomotion gaits for bipeds. Different locomotion tasks, including walking, running, and skipping, are constructed either manually or from motion capture examples. Feedforward torques can be learned from the feedback components, emulating a biological motor learning process that leads to more stable and natural motions with low gains. The results of this thesis demonstrate the potential of a new generation of more sophisticated kinematic and dynamic models of human motion.
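The feedforward-plus-low-gain-feedback decomposition mentioned above has a simple algebraic form. A minimal sketch, with gains, reference trajectory, and feedforward torque chosen purely for illustration (not the thesis's actual controller):

```python
# Sketch: joint torque as a feedforward term plus low-gain PD feedback
# around a reference trajectory. All numeric values are illustrative.
def joint_torque(theta, theta_dot, theta_ref, theta_dot_ref, tau_ff,
                 kp=20.0, kd=2.0):
    """tau = tau_ff + kp*(theta_ref - theta) + kd*(theta_dot_ref - theta_dot)"""
    return tau_ff + kp * (theta_ref - theta) + kd * (theta_dot_ref - theta_dot)

# One joint, slightly behind its reference angle: feedback nudges it forward.
print(joint_torque(theta=0.10, theta_dot=0.0,
                   theta_ref=0.15, theta_dot_ref=0.0, tau_ff=5.0))
```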
APA, Harvard, Vancouver, ISO und andere Zitierweisen
30

Jiang, Mofen. „The Chemical and kinetic mechanism for leaching of chrysocolla by sulfuric acid“. Thesis, The University of Arizona, 1992. http://etd.library.arizona.edu/etd/GetFileServlet?file=file:///data1/pdf/etd/azu_e9791_1992_610_sip1_w.pdf&type=application/pdf.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
31

Sorooshian, Jamshid. „Tribological, Thermal and Kinetic Characterization of Dielectric and Metal Chemical Mechanical Planarization Processes“. Diss., Tucson, Arizona : University of Arizona, 2005. http://etd.library.arizona.edu/etd/GetFileServlet?file=file:///data1/pdf/etd/azu%5Fetd%5F1126%5F1%5Fm.pdf&type=application/pdf.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
32

Steinmetz, Fabian. „Integration of data quality, kinetics and mechanistic modelling into toxicological assessment of cosmetic ingredients“. Thesis, Liverpool John Moores University, 2016. http://researchonline.ljmu.ac.uk/4522/.

Der volle Inhalt der Quelle
Annotation:
In our modern society we are exposed to many natural and synthetic chemicals, and the assessment of chemicals with regard to human safety is difficult but nevertheless highly important. Apart from clinical studies, which are restricted to potential pharmaceuticals, most toxicity data relevant for regulatory decision-making are based on in vivo experiments. Due to the ban on animal testing of cosmetic ingredients in the European Union, alternative approaches, such as in vitro and in silico tests, have become more prevalent. In this thesis, existing non-testing approaches (i.e. studies without additional experiments), such as QSAR models, have been extended, and new non-testing approaches, such as structural alert systems supported by in vitro data, have been created. The main focus of the thesis is the determination of data quality, the improvement of modelling performance, and the support of Adverse Outcome Pathways (AOPs) with definitions of structural alerts and physico-chemical properties. There was also a clear focus on the transparency of models: approaches using algorithmic feature selection, machine learning, etc. have been avoided, and structural alert systems have been written in an understandable and transparent manner. Besides the methodological aspects of this work, cosmetically relevant example models have been chosen, e.g. skin penetration and hepatic steatosis. Interpretations of the models, as well as possible adjustments and extensions, are discussed thoroughly. As models usually do not depict reality flawlessly, consensus approaches combining various non-testing approaches and in vitro tests should be used to support decision-making in the regulatory context: within read-across, for example, it is feasible to use supporting information from QSAR models, docking, in vitro tests, etc. By applying a variety of models, results should lead to conclusions that are more usable and acceptable within toxicology. Within this thesis (and associated publications), novel methodologies for assessing and employing statistical data quality and for screening potential liver toxicants are described, and computational tools, such as models for skin permeability and dermal absorption, have been created.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
33

Toraldo, Serra Eugenio Maria <1984>. „Inferences on earthquake kinematic properties from data inversion: two different approaches“. Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amsdottorato.unibo.it/5178/1/Toraldo_Eugenio_tesi.pdf.

Der volle Inhalt der Quelle
Annotation:
During my PhD, starting from the original formulations proposed by Bertrand et al. (2000) and Emolo & Zollo (2005), I developed inversion methods and applied them to different earthquakes. In particular, large efforts have been devoted to the study of model resolution and to the estimation of model parameter errors. To study the source kinematic characteristics of the Christchurch earthquake, we performed a joint inversion of strong-motion, GPS and InSAR data using a non-linear inversion method. Considering the complexity highlighted by the surface deformation data, we adopted a fault model consisting of two partially overlapping segments, with dimensions 15 × 11 and 7 × 7 km², having different faulting styles. This two-fault model better reconstructs the complex shape of the surface deformation data. The total seismic moment resulting from the joint inversion is 3.0 × 10²⁵ dyne·cm (Mw = 6.2), with an average rupture velocity of 2.0 km/s. Errors associated with the kinematic model have been estimated at around 20-30 %. The 2009 L'Aquila earthquake was characterized by an intense aftershock sequence that lasted several months. In this study we applied an inversion method that uses apparent Source Time Functions (aSTFs) as data, to a Mw 4.0 aftershock of the L'Aquila sequence. The aSTFs were estimated using the deconvolution method proposed by Vallée et al. (2004). The inversion results show a heterogeneous slip distribution, characterized by two main slip patches located NW of the hypocenter, and a variable rupture velocity distribution (mean value of 2.5 km/s), showing a rupture front acceleration between the two high-slip zones. Errors of about 20 % characterize the final estimated parameters.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
34

Toraldo, Serra Eugenio Maria <1984>. „Inferences on earthquake kinematic properties from data inversion: two different approaches“. Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amsdottorato.unibo.it/5178/.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
35

Dextraze, Mathieu Francis. „Comparing Event Detection Methods in Single-Channel Analysis Using Simulated Data“. Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39729.

Der volle Inhalt der Quelle
Annotation:
With more states revealed and more reliable rates inferred, mechanistic schemes for ion channels have increased in complexity over the history of single-channel studies. At the forefront of single-channel studies, we face a temporal barrier delimiting the briefest event that can be detected in single-channel data. Despite improvements in single-channel data analysis, the use of existing methods remains sub-optimal: because these methods are unquantified, the optimal conditions for data analysis are unknown. Here we present a modular single-channel data simulator with two engines, a Hidden Markov Model (HMM) engine and a sampling engine. The simulator provides the a priori information necessary to quantify and compare existing methods in order to optimize analytic conditions. We demonstrate its utility by providing a preliminary comparison of two event detection methods in single-channel data analysis: Threshold Crossing and Segmental k-means with Hidden Markov Modelling (SKM-HMM).
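The two pieces this abstract pairs, an HMM generator and an event detector, can be illustrated compactly. A minimal sketch of a two-state open/closed channel and a naive half-amplitude threshold-crossing idealization; rates, amplitude, and noise level are invented and the thesis's actual simulator is not reproduced:

```python
# Sketch: simulate a two-state single-channel record from a Markov model,
# then idealize it with simple threshold crossing. All parameters invented.
import numpy as np

rng = np.random.default_rng(1)
dt, n = 1e-4, 20000                    # 0.1 ms sampling, 2 s of data
k_open, k_close = 50.0, 100.0          # transition rates (s^-1)
p_open_next = k_open * dt              # P(closed -> open) per sample
p_close_next = k_close * dt            # P(open -> closed) per sample

state = np.zeros(n, dtype=int)         # 0 = closed, 1 = open
for i in range(1, n):
    p = p_open_next if state[i - 1] == 0 else 1 - p_close_next
    state[i] = rng.random() < p        # probability of being open next sample

current = state * 2.0 + rng.normal(0, 0.4, n)   # 2 pA open level + noise

detected = (current > 1.0).astype(int)          # half-amplitude threshold
accuracy = (detected == state).mean()
print(f"threshold idealization matches truth on {accuracy:.1%} of samples")
```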
APA, Harvard, Vancouver, ISO und andere Zitierweisen
36

Al-Lakany, H. „Human gait analysis : extracting salient features from normal and pathological kinematic data“. Thesis, University of Edinburgh, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.640299.

Der volle Inhalt der Quelle
Annotation:
The study of human walking has been of interest to researchers in different disciplines, whose interests have included, but are not limited to, modelling of the human body, perceiving actions, and analysing gait. The data collected include eighty normal and thirty pathological subjects. The work presented comprises tracking of the markers, analysis of the motion trajectories, extraction of salient features, and recognition of gait signatures using a vector quantiser, with clustering of the different groups of subjects - normal and pathological - through interpretation of the knowledge obtained from the clusters. A computer program has been implemented to illustrate the ideas of this work. Radial basis function neural networks are used for tracking the markers and predicting their positions, both in cases of occlusion and between frames, to obtain complete smooth trajectories for the motion of the joints in 3D. The analysis algorithm combines the wavelet transform for feature extraction with a Kohonen self-organising map (SOM) vector quantiser for classification of the walking patterns. Rules are then extracted from the SOM after self-organisation to determine the salient features characterising each cluster as well as differentiating it from the others. The approach is demonstrated by its application to kinematic gait data for both normal and pathological subjects. It is shown and experimentally verified that salient features exist within the internal structure of the kinematic data, from which diagnostic signatures are elicited. Such features could be used by clinicians in the orthopaedic field, where gait disease signatures would mean improved assessment of gait and treatment, and possibly early detection of some locomotion impairments.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
37

Walton, David Brian. „Analysis of single-molecule kinesin assay data by hidden Markov model filtering“. Diss., The University of Arizona, 2002. http://hdl.handle.net/10150/280221.

Der volle Inhalt der Quelle
Annotation:
Observations of the position of a microscopic bead attached to a single kinesin protein moving along a microtubule contain detailed information about the position of the kinesin as a function of time, although this information remains obscured by the fluctuations of the bead. The theory of hidden Markov models suggests a theoretical framework for analyzing these data with an explicit stochastic model describing the kinesin cycle and the attached bead. We model the mechanical cycle of kinesin using a discrete-time Markov chain on a periodic lattice, representing the microtubule, and model the position of the bead using an Ornstein-Uhlenbeck autoregressive process. We adapt the standard machinery of hidden Markov models to derive the likelihood of this model using a reference measure, and use the Expectation-Maximization (EM) algorithm to estimate model parameters. Simulated data sets indicate that the method has the potential to improve the analysis of kinesin-bead experiments. However, analysis of the experimental data of Visscher et al. (1999) indicates that current data sets still lack the time resolution to extract significant information about intermediate states. Considerations for future experimental designs are suggested to allow better hidden Markov model analysis.
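The generative model described here, a Markov chain for the motor observed through an Ornstein-Uhlenbeck bead, is easy to simulate. A minimal sketch with illustrative parameter values; the likelihood and EM machinery of the dissertation are not reproduced:

```python
# Sketch: kinesin stepping on a periodic lattice (discrete-time Markov chain)
# observed via a bead relaxing toward the motor (Ornstein-Uhlenbeck / AR(1)).
# All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(2)
dt, n = 1e-4, 50000          # 0.1 ms frames, 5 s
step = 8.0                   # lattice period (nm), microtubule repeat
rate = 100.0                 # stepping rate (steps/s)
tau, noise = 1e-3, 5.0       # bead relaxation time (s) and noise sd (nm)

motor = np.cumsum(rng.random(n) < rate * dt) * step   # motor position (nm)

bead = np.zeros(n)
a = np.exp(-dt / tau)                                  # AR(1) coefficient
for i in range(1, n):
    bead[i] = motor[i] + a * (bead[i - 1] - motor[i]) \
              + noise * np.sqrt(1 - a**2) * rng.normal()

print(f"mean speed: {motor[-1] / (n * dt):.0f} nm/s (expected ~{rate * step:.0f})")
```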
APA, Harvard, Vancouver, ISO und andere Zitierweisen
38

Ruhnow, Felix. „Estimating the motility parameters of single motor proteins from censored experimental data“. Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-216854.

Der volle Inhalt der Quelle
Annotation:
Cytoskeletal motor proteins are essential to the function of a wide range of intracellular mechano-systems, so the biophysical characterization of the movement of motor proteins along their filamentous tracks is of great importance. Towards this end, in vitro stepping motility assays are commonly used to determine the motors' velocities and run lengths. However, comparing results from such experiments has proved difficult due to variations in experimental setups, experimental conditions, and data analysis methods. This work describes a novel unified method to evaluate traces of fluorescently labeled, processive dimeric motor proteins and proposes an algorithm to correct the measurements for finite filament length as well as photobleaching. Statistical errors of the proposed evaluation method are estimated by a bootstrap method. Numerical simulations and experimental data from GFP-labeled kinesin-1 motors stepping along immobilized microtubules were used to verify the proposed approach, showing (i) that the velocity distribution should be fitted by a t location-scale probability density function rather than a normal distribution, (ii) that the temperature during the experiments should be controlled with a precision well below 1 K, (iii) that the impossibility of measuring events shorter than the image acquisition time needs to be accounted for, (iv) that the motor's run length can be estimated independently of the filament length distribution, and (v) that the dimeric nature of the motors needs to be considered when correcting for photobleaching. This allows for a better statistical comparison of motor proteins influenced by other external factors, e.g. ionic strength, ATP concentration, or post-translational modifications of the filaments. The described method was then applied to experimental data to investigate the influence of the nucleotide state of the microtubule on the motility behavior of kinesin-1 motor proteins. Here, a small but significant difference was found in the velocity measurements, but no significant difference in the run length and interaction time measurements. Consequently, this work provides a framework for the evaluation of a wide range of experiments with single fluorescently labeled motor proteins.
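The bootstrap error estimation mentioned above can be illustrated compactly: resample the measured run lengths with replacement and inspect the spread of the refitted statistic. A minimal sketch on simulated data, omitting the filament-length and photobleaching corrections that are the thesis's actual contribution:

```python
# Sketch: bootstrap confidence interval for a mean run length.
# Run lengths are simulated from an exponential distribution for illustration.
import numpy as np

rng = np.random.default_rng(3)
runs = rng.exponential(scale=1.2, size=200)   # simulated run lengths (um)

# Resample with replacement, refit the mean each time
boot = np.array([rng.choice(runs, size=runs.size, replace=True).mean()
                 for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"run length = {runs.mean():.2f} um, 95% CI [{lo:.2f}, {hi:.2f}]")
```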
APA, Harvard, Vancouver, ISO und andere Zitierweisen
39

Civek, Ezgi. „Comparison Of Kinematic Results Between Metu-kiss &“. Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/12607947/index.pdf.

Der volle Inhalt der Quelle
Annotation:
KISS (Kinematic Support System) is a locally developed gait analysis system at Middle East Technical University (METU), and the performance of the system has previously been evaluated as a whole. However, such evaluations do not differentiate between the efficacy of the data acquisition system and that of the model-based gait analysis methodology. In this thesis, the kinematic results of the KISS system are compared with those of the commercial VICON (Oxford Metrics Ltd., Oxford, UK) system based at Ankara University, in order to evaluate the performance of the data acquisition system and the gait analysis methodology separately. This study is expected to provide guidelines for future development of the KISS system.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
40

Giakas, Giannis K. „Time and frequency domain applications in biomechanics“. Thesis, Manchester Metropolitan University, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.389498.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
41

Bartkus, Tadas Patrick. „An Analytical Model Based on Experimental Data for the Self-Hydrolysis Kinetics of Aqueous Sodium Borohydride“. Case Western Reserve University School of Graduate Studies / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=case1283538978.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
42

Eich, Lori A. „Kinematic models of deformation in Southern California constrained by geologic and geodetic data“. Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/34666.

Der volle Inhalt der Quelle
Annotation:
Using a standardized fault geometry based on the Community Block Model, we create two analytic block models of the southern California fault system. We constrain one model with only geodetic data. In the other, we assign a priori slip rates to the San Andreas, Garlock, Helendale, Newport-Inglewood, Owens Valley, Sierra Madre, and Chino faults to create a joint geologic and geodetic model, using the a priori slip rates to refine the results in areas with limited geodetic data. Our results for the San Andreas fault are consistent with geologic slip rates in the north and south, but across the Big Bend area we find its slip rates to be slower than geologic rates. Our geodetic model shows right lateral slip rates of 19.8 ± 1.3 mm/yr in the Mojave area and 17.3 ± 1.6 mm/yr near the Imperial fault; the San Gorgonio Pass area displays a left lateral slip rate of 1.8 ± 1.7 mm/yr. Our joint geologic and geodetic model results include right lateral slip rates of 18.6 ± 1.2 mm/yr in the Mojave area, 22.1 ± 1.6 mm/yr near the Imperial fault, and 9.5 ± 1.4 mm/yr in the San Gorgonio Pass area. Both models show high values (10-13 ± 1 mm/yr) of right lateral slip to the east of the Blackwater fault along the Goldstone, Calico, and Hidalgo faults. We show that substantially different block geometries in the Mojave can produce statistically similar model results due to sparse geodetic data.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
43

Ruiz, Sergio. „Kinematic and dynamic inversions of subduction earthquakes using strong motion and cGPS data“. Paris, Institut de physique du globe, 2012. http://www.theses.fr/2012GLOB0006.

Der volle Inhalt der Quelle
Annotation:
We study the slip distributions of five earthquakes: three that occurred in Chile (Tocopilla 2007, Mw 7.8; Michilla 2007, Mw 6.7; Maule 2010, Mw 8.8) and two in Japan (Iwate 2008, Mw 6.8; Tohoku 2011, Mw 9.0). Kinematic inversions were performed for all of them except Michilla 2007, and dynamic inversions for Michilla 2007 and Iwate 2008. The inversions assume an a priori rupture area of one or two ellipses with a Gaussian distribution of slip, and the search for the best solution uses the neighbourhood algorithm. Strong motion and continuous GPS (cGPS) data were inverted. For Tocopilla 2007, the solution converges to a slip distribution characterized by two ellipses, confirming previous work by Peyrat et al. (2010). For the Maule 2010 earthquake, two ellipses were proposed; the results show that the maximum slip is located in the northern part of a rupture nearly 500 km long, where we also identified the asperities that controlled the motion in the intermediate frequency range (0.02 Hz to 0.1 Hz). The Tohoku 2011 earthquake was characterized by the rupture of a single ellipse. For this event we then explored the solution space with a Monte Carlo method, fixing most parameters and freeing three of them: rupture velocity, maximum slip, and ellipse size (keeping the aspect ratio between its axes fixed). Strong trade-offs exist among these three parameters, confirming that the solution is not unique. For this earthquake a preliminary inversion using a classical discretization into rectangles was also carried out, giving results similar to the elliptical inversion. Finally, we inverted two intraplate intermediate-depth earthquakes of magnitude around Mw 6.8, for which we performed the first full dynamic inversions with all parameters free. Monte Carlo inversions confirm that dynamic inversions are not unique and that the earthquake rupture is controlled by the parameters of the friction law; these parameters can take different values, but they cluster around specific values of seismic moment and kappa (a parameter relating the energy released to the energy available for rupture propagation).
APA, Harvard, Vancouver, ISO und andere Zitierweisen
44

Ruiz, Tapia Sergio. „Kinematic and dynamic inversions of subduccion earthquakes using strong motion and cGPS data“. Tesis, Universidad de Chile, 2012. http://repositorio.uchile.cl/handle/2250/111903.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
45

Unutulmazsoy, Yeliz [Verfasser], und Joachim [Akademischer Betreuer] Maier. „Oxidation kinetics of metal films and diffusion in NiO for data storage / Yeliz Unutulmazsoy ; Betreuer: Joachim Maier“. Stuttgart : Universitätsbibliothek der Universität Stuttgart, 2016. http://d-nb.info/1131630149/34.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
46

Pasquet, Camille. „Evaluation de la biodisponibilité du nickel, cobalt et manganèse dans les poussières de sols ultramafiques et développement d'un outil de bioindication lichénique des poussières émises par les activités minières en Nouvelle Calédonie“. Thesis, Nouvelle Calédonie, 2016. http://www.theses.fr/2016NCAL0008/document.

Der volle Inhalt der Quelle
Annotation:
Bioavailability estimation of nickel, cobalt and manganese in dust from ultramafic soils likely to be mobilized by wind, and development of a lichen-based bioindication tool for dust emitted by mining activities in New Caledonia. New Caledonia's altered ultramafic soils, particularly rich in Ni, Co, Mn and Cr, are exploited by opencast mines, which generate dust rich in metals. The objective of this work is to develop approaches for the environmental risk assessment of dust emitted by opencast mines and nickel ore metallurgical plants. The bioavailable metal fraction of two dust size fractions, one below 100 µm that can be mobilized by wind (F<100µm) and one able to penetrate the respiratory system (PM10), was determined by kinetic extraction with EDTA. PM10 segregation required the development of a new separation device based on particle transport along a horizontal tube under a nitrogen flux. The kinetic extractions distinguish three metal pools: rapidly labile, less rapidly labile, and non-bioavailable. Potentially bioavailable trace metal concentrations were always high, and the less rapidly labile pool was always the most concentrated. For F<100µm, the kinetic constant of the less rapidly labile pool is lower for mining soils than for forest soils; the F<100µm fraction from mining soils therefore represents a more durable reserve of bioavailable trace metals than the same fraction from forest soils. Bioindication using lichens, with compositional data analysis of their metal concentrations, allows an indicator of dust dispersion to be defined. This methodology could support air quality monitoring networks in New Caledonia.
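The three-pool reading of a kinetic EDTA extraction typically comes from fitting a double-exponential uptake curve, Q(t) = Q1·(1 − e^(−k1·t)) + Q2·(1 − e^(−k2·t)), where Q1/k1 describe the rapidly labile pool, Q2/k2 the less rapidly labile pool, and the remainder is taken as non-bioavailable. A minimal sketch of such a fit; the data points and initial guesses are invented:

```python
# Sketch: two-pool kinetic extraction fit, Q(t) = Q1(1-e^-k1t) + Q2(1-e^-k2t).
# Times, extracted amounts, and initial guesses are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

def two_pool(t, q1, k1, q2, k2):
    return q1 * (1 - np.exp(-k1 * t)) + q2 * (1 - np.exp(-k2 * t))

t = np.array([0.25, 0.5, 1, 2, 4, 8, 24, 48, 72.0])           # hours
q = np.array([40, 67, 101, 129, 149, 175, 243, 291, 309.0])   # mg/kg extracted

p0 = [100, 1.0, 200, 0.05]                # initial guesses: Q1, k1, Q2, k2
(q1, k1, q2, k2), _ = curve_fit(two_pool, t, q, p0=p0)
print(f"rapid pool: {q1:.0f} mg/kg (k1={k1:.2f}/h), "
      f"slow pool: {q2:.0f} mg/kg (k2={k2:.3f}/h)")
```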
APA, Harvard, Vancouver, ISO und andere Zitierweisen
47

Lucca, Ernestina <1983>. „Kinematic description of the rupture from strong motion data: strategies for a robust inversion“. Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2011. http://amsdottorato.unibo.it/3530/1/lucca_ernestina_tesi.pdf.

Der volle Inhalt der Quelle
Annotation:
We present a non-linear technique to invert strong motion records with the aim of obtaining the final slip and rupture velocity distributions on the fault plane. In this thesis, the ground motion simulation is obtained by evaluating the representation integral in the frequency domain. The Green's tractions are computed using the discrete wave-number integration technique, which provides the full wave-field in a 1D layered propagation medium. The representation integral is computed through a finite elements technique based on a Delaunay triangulation of the fault plane. The rupture velocity is defined on a coarser regular grid, and rupture times are computed by integration of the eikonal equation. For the inversion, the slip distribution is parameterized by 2D overlapping Gaussian functions, which easily relate the spectrum of the possible solutions to the minimum resolvable wavelength, itself related to the source-station distribution and the data processing. The inverse problem is solved by a two-step procedure aimed at separating the computation of the rupture velocity from the evaluation of the slip distribution, the latter being a linear problem when the rupture velocity is fixed. The non-linear step is solved by optimization of an L2 misfit function between synthetic and real seismograms, with the solution sought using the Neighbourhood Algorithm; the conjugate gradient method is used to solve the linear step. The developed methodology has been applied to the M7.2 Iwate-Miyagi Nairiku, Japan, earthquake. The estimated seismic moment is 2.63 × 10²⁶ dyne·cm, corresponding to a moment magnitude Mw 6.9, while the mean rupture velocity is 2.0 km/s. A large slip patch extends from the hypocenter to the southern shallow part of the fault plane, and a second relatively large slip patch is found in the northern shallow part. Finally, we give a quantitative estimation of the errors associated with the parameters.
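The Gaussian slip parameterization used in this inversion can be pictured as a sum of smooth patches on the fault plane. A minimal sketch of the forward parameterization only (no inversion), with patch values loosely echoing the two-patch geometry described above; all numbers are illustrative:

```python
# Sketch: final slip on a fault plane as a sum of overlapping 2D Gaussians.
# Grid dimensions and patch parameters are illustrative, not from the thesis.
import numpy as np

def slip(x, z, patches):
    """Sum of 2D Gaussians; each patch = (amplitude_m, x0_km, z0_km, width_km)."""
    s = np.zeros_like(x)
    for amp, x0, z0, w in patches:
        s += amp * np.exp(-((x - x0) ** 2 + (z - z0) ** 2) / (2 * w ** 2))
    return s

x, z = np.meshgrid(np.linspace(0, 40, 81), np.linspace(0, 20, 41))  # km
patches = [(2.5, 12, 6, 4.0),   # large patch near one end of the fault
           (1.2, 30, 5, 3.0)]   # second, smaller shallow patch
print(f"peak slip: {slip(x, z, patches).max():.2f} m")
```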
APA, Harvard, Vancouver, ISO und andere Zitierweisen
48

Lucca, Ernestina <1983>. „Kinematic description of the rupture from strong motion data: strategies for a robust inversion“. Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2011. http://amsdottorato.unibo.it/3530/.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
49

Castro, Rodriguez Mary Elizabeth. „Kinetics and Mechanism of Ion Exchange Process and Resin Deactivation during Ultra-Purification of Water“. Diss., Tucson, Arizona : University of Arizona, 2006. http://etd.library.arizona.edu/etd/GetFileServlet?file=file:///data1/pdf/etd/azu%5Fetd%5F1479%5F1%5Fm.pdf&type=application/pdf.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
50

Porter, Sarah Elizabeth Graham. „Chemometric Analysis of Multivariate Liquid Chromatography Data: Applications in Pharmacokinetics, Metabolomics, and Toxicology“. VCU Scholars Compass, 2006. http://scholarscompass.vcu.edu/etd/1156.

Der volle Inhalt der Quelle
Annotation:
In the first part of this work, LC-MS data were used to calculate the in vitro intrinsic clearances (CLint) for the metabolism of p-methoxymethamphetamine (PMMA) and fluoxetine by the CYP2D6 enzyme, using a steady-state (SS) approach and a new general enzyme (GE) screening method. For PMMA, the SS experiment resulted in a CLint of 2.7 ± 0.2 µL pmol 2D6⁻¹ min⁻¹ and the GE experiment in a CLint of 3.0 ± 0.6 µL pmol 2D6⁻¹ min⁻¹. For fluoxetine, the SS experiment resulted in a CLint of 0.33 ± 0.17 µL pmol 2D6⁻¹ min⁻¹ and the GE experiment in a CLint of 0.188 ± 0.013 µL pmol 2D6⁻¹ min⁻¹. The inhibition of PMMA metabolism by fluoxetine was also demonstrated. In the second part of the work, target factor analysis was used as part of a library search algorithm for the identification of drugs in LC-DAD chromatograms. The ability to resolve highly overlapped peaks using the spectral data afforded by the DAD is what distinguishes this method from conventional library searching methods. A validation set of 70 chromatograms was used to calculate the sensitivity (correct identification of positives) and specificity (correct identification of negatives) of the method, which were 92 % and 94 % respectively. Finally, the last part of the work describes the development of data analysis methods for four-way data generated by two-dimensional liquid chromatography separations with DAD detection. Maize seedlings were analyzed, focusing on indole-3-acetic acid (IAA) and related compounds. Window target testing factor analysis was used to identify the spectral groups represented by the standards in the mutant and wild-type chromatograms. Two curve resolution algorithms were applied to resolve overlapped components in the data and to demonstrate the quantitative potential of these methods. A total of 95 peaks were resolved; of those, 45 were found in both the mutant and wild-type maize, 16 were unique to the mutants, 13 were unique to the wild-types, and the remaining peaks were standards. Several IAA conjugates were quantified in the maize samples at levels of 0.3-2 µg/g plant material.
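Curve resolution of the kind described here is often implemented as an alternating least squares factorization of the data matrix into concentration profiles and spectra. A minimal sketch of that general idea (not the author's specific algorithms); the simulated data, random initialization, and bare nonnegativity constraint are all illustrative assumptions:

```python
# Sketch: alternating least squares curve resolution. Factor D (time x
# wavelength) into nonnegative concentration profiles C and spectra S,
# so that D ~ C @ S.T. Data are simulated; constraints are kept minimal.
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 60)[:, None]
C_true = np.hstack([np.exp(-((t - 0.3) / 0.08) ** 2),
                    np.exp(-((t - 0.5) / 0.10) ** 2)])      # elution profiles
S_true = np.abs(rng.normal(size=(40, 2)))                   # component spectra
D = C_true @ S_true.T + 0.01 * rng.normal(size=(60, 40))    # noisy data matrix

C = np.abs(rng.normal(size=(60, 2)))                        # random start
for _ in range(200):
    # Solve for spectra given concentrations, then vice versa; clip to >= 0
    S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0].T, 0, None)
    C = np.clip(np.linalg.lstsq(S, D.T, rcond=None)[0].T, 0, None)

resid = np.linalg.norm(D - C @ S.T) / np.linalg.norm(D)
print(f"relative residual after ALS: {resid:.3f}")
```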
APA, Harvard, Vancouver, ISO und andere Zitierweisen