Dissertations on the topic "Estimation des microstructures"

Follow this link to see other types of publications on the topic: Estimation des microstructures.

Cite a source in APA, MLA, Chicago, Harvard, and many other citation styles.

Consult the top 20 dissertations (graduate or doctoral theses) on the research topic "Estimation des microstructures".

Next to every source in the list of references, you will find an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract (summary) of the work online if it is present in the metadata.

Browse dissertations from many scientific fields and compile an accurate bibliography.

1

Blatt, Samantha Heidi. "From the Mouths of Babes: Using Incremental Enamel Microstructures to Evaluate the Applicability of the Moorrees Method of Dental Formation to the Estimation of Age of Prehistoric Native American Children". The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1365696693.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Yang, Zheyi. "Numerical methods to estimate brain micro-structure from diffusion MRI data". Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAE016.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Diffusion magnetic resonance imaging (diffusion MRI) is a widely used non-invasive imaging modality that probes the microstructural properties of biological tissues below the spatial resolution by indirectly measuring the diffusion displacement of water molecules. Due to the geometrical complexity of the brain and the intricate diffusion MRI mechanism, it is challenging to directly link the received signals to meaningful biophysical parameters, such as axon radii or volume fraction. In recent years, several biophysical models have been introduced to address this issue of weak interpretability. These models represent the diffusion MRI signals as a mixture of analytical signals, under certain assumptions (e.g., impermeable membranes), from various disconnected simple geometries such as spheres and sticks. They then aim to extract the parameters of these geometries, which correlate with biophysical parameters, by inverting the analytical expression. However, the validity of these assumptions remains undetermined in actual experiments. The objective of this thesis is to improve the reliability and efficiency of microstructure estimation from two perspectives. First, to facilitate the quantitative study, via simulation, of the validity range of biophysical models and of the effects of geometrical deformation and cell membrane permeability, we propose two reduced models derived from the Bloch-Torrey equation. For the case of permeable membranes, a new simulation approach using an impermeable Laplace eigenbasis is proposed. For geometrical deformation, we use an asymptotic expansion with respect to the deformation angles to approximate the signal. These two reduced models enable efficient computation of signals for various values of deformation and permeability. Numerical simulations show that both models compute the signals quickly and within a reasonable error level compared to existing methods. Several studies of the effects of permeability and deformation on the signals, or on the apparent diffusion coefficient (ADC), were conducted using the proposed models. Second, instead of inverting a simplified-geometry model, we present a novel approach that relates soma size in gray matter to intermediary biomarkers. Numerical simulations identify a correlation between the volume-weighted soma radius/volume fraction and the inflection point of direction-averaged signals at high b-values (b > 2500 s/mm^2), offering insights for microstructure estimation. We fit a fully connected neural network using these biomarkers; compared to biophysical models, this approach offers comparable results on both synthetic and in vivo data, as well as fast estimation, since no inversion is involved.
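
The inflection-point biomarker described above lends itself to a simple numerical illustration. In the sketch below, the function names, the synthetic decay curve, and all parameter values are assumptions of this note, not the author's implementation: average the dMRI signal over gradient directions, then locate the b-value where the curvature of the averaged decay changes sign.

```python
# Minimal sketch: direction-averaged signal and its inflection point.
import numpy as np

def direction_averaged_signal(signals):
    """signals: (n_bvalues, n_directions) array -> (n_bvalues,) average."""
    return signals.mean(axis=1)

def inflection_b(bvals, s_avg):
    """b-value where the second derivative of the averaged signal changes
    sign, estimated with central finite differences."""
    d2 = np.gradient(np.gradient(s_avg, bvals), bvals)
    idx = np.where(np.diff(np.sign(d2)) != 0)[0]
    return bvals[idx[0]] if idx.size else None

# Toy decay with a soma-like Gaussian component whose curvature flips
# inside the high-b regime (b > 2500 s/mm^2) mentioned in the abstract.
bvals = np.linspace(0.0, 10000.0, 400)            # s/mm^2
s_avg = 0.5 * np.exp(-(bvals / 4000.0) ** 2) + 0.5 * np.exp(-1e-4 * bvals)
print("inflection near b =", inflection_b(bvals, s_avg), "s/mm^2")
```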
3

FitzGerald, Charles Michael. "Tooth crown formation and the variation of enamel microstructural growth markers in modern humans". Thesis, University of Cambridge, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.360038.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Yevstihnyeyev, Roman. "Estimation of Asset Volatility and Correlation Over Market Microstructure Noise in High-Frequency Data". Thesis, Harvard University, 2015. http://nrs.harvard.edu/urn-3:HUL.InstRepos:14398547.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Accurate measurement of asset return volatility and correlation is an important problem in financial econometrics. The presence of market microstructure noise in high-frequency data complicates such estimation. This study extends a prior application of a model-based volatility estimator with autocorrelated market microstructure noise to the estimation of correlation. The model is applied to a high-frequency dataset including a stock and an index, and the results are compared with those of existing models. This study supports previous findings that including an autocorrelation factor produces an estimator potentially less vulnerable to market microstructure noise, and finds that the same is true of the extended correlation estimator introduced here.
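
To see why the time-series properties of the noise matter, note that even i.i.d. noise leaves a distinctive footprint on observed returns. The following is an illustrative sketch (not the thesis's estimator; all parameter values are made up):

```python
# i.i.d. microstructure noise makes observed high-frequency returns
# negatively autocorrelated (close to an MA(1)); this is the empirical
# signature that model-based noise-robust estimators exploit.
import numpy as np

rng = np.random.default_rng(0)
n = 23400                                   # one observation per second
true = np.cumsum(rng.normal(0, 0.02 / np.sqrt(n), n))   # efficient log-price
obs = true + rng.normal(0, 5e-4, n)         # add i.i.d. noise
r = np.diff(obs)
rho1 = np.corrcoef(r[:-1], r[1:])[0, 1]
print("lag-1 autocorrelation of observed returns:", rho1)  # near -0.5 when noise dominates
```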
5

Fang, Chengran. "Neuron modeling, Bloch-Torrey equation, and their application to brain microstructure estimation using diffusion MRI". Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG010.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Non-invasively estimating brain microstructure, which consists of a very large number of neurites, somas, and glial cells, is essential for future neuroimaging. Diffusion MRI (dMRI) is a promising technique to probe brain microstructural properties below the spatial resolution of MRI scanners. Due to the structural complexity of brain tissue and the intricate diffusion MRI mechanism, in vivo microstructure estimation is challenging. Existing methods typically use simplified geometries, particularly spheres and sticks, to model neuronal structures and to obtain analytical expressions of intracellular signals. The validity of the assumptions made by these methods remains undetermined. This thesis aims to facilitate simulation-driven brain microstructure estimation by replacing simplified geometries with realistic neuron geometry models, and analytical intracellular signal expressions with diffusion MRI simulations. Combined with accurate neuron geometry models, numerical dMRI simulations can give accurate intracellular signals and seamlessly incorporate effects arising from, for instance, neurite undulation or water exchange between soma and neurites. Despite these advantages, dMRI simulations have not been widely adopted due to the difficulties in constructing realistic numerical phantoms, the high computational cost of dMRI simulations, and the difficulty of approximating the implicit mappings between dMRI signals and microstructure properties. This thesis addresses the above problems by making four contributions. First, we develop a high-performance open-source neuron mesh generator and make publicly available over a thousand realistic cellular meshes. The neuron mesh generator, swc2mesh, can automatically and robustly convert valuable neuron tracing data into realistic neuron meshes. We have carefully designed the generator to maintain a good balance between mesh quality and size. A neuron mesh database, NeuronSet, which contains 1213 simulation-ready cell meshes and their neuroanatomical measurements, was built using the mesh generator. These meshes served as the basis for further research. Second, we increased the computational efficiency of the numerical matrix formalism method by accelerating the eigendecomposition algorithm and exploiting GPU computing. The speed was increased tenfold. With similar accuracy, the optimized numerical matrix formalism is 20 times faster than the FEM method and 65 times faster than a GPU-based Monte Carlo method. By performing simulations on realistic neuron meshes, we investigated the effect of water exchange between somas and neurites, and the relationship between soma size and signals. We then implemented a new simulation method that provides a Fourier-like representation of the dMRI signals. This method was derived theoretically and implemented numerically. We validated the convergence of the method and showed that the error behavior is consistent with our error analysis. Finally, we propose a simulation-driven supervised learning framework to estimate brain microstructure using diffusion MRI. By exploiting the powerful modeling and computational capabilities mentioned above, we built a synthetic database containing the dMRI signals and microstructure parameters of 1.4 million artificial brain voxels. We have shown that this database can help approximate the underlying mappings from dMRI signals to volume and surface fractions using artificial neural networks.
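
As context for the simulation methods compared above, the sketch below shows the simplest member of that family: a toy Monte Carlo dMRI simulation in which walkers diffuse inside an impermeable sphere and accumulate phase under a pulsed-gradient spin-echo (PGSE) sequence. All parameter values are illustrative, the boundary handling is a crude rejection rule, and nothing here reproduces the thesis's matrix-formalism or finite-element solvers.

```python
# Toy Monte Carlo dMRI simulation: restricted diffusion in a sphere.
import numpy as np

rng = np.random.default_rng(1)
D, R = 2e-3, 5.0                       # diffusivity (um^2/ms), sphere radius (um)
delta, Delta, dt = 10.0, 20.0, 0.01    # pulse width, separation, time step (ms)
q = 0.2                                # gradient strength x gyromagnetic ratio (rad/(um*ms)), illustrative

n_walk = 5000
n_step = int((Delta + delta) / dt)

# start positions: uniform inside the sphere
direc = rng.normal(size=(n_walk, 3))
direc /= np.linalg.norm(direc, axis=1, keepdims=True)
pos = direc * R * rng.uniform(0.0, 1.0, (n_walk, 1)) ** (1.0 / 3.0)

phase = np.zeros(n_walk)
for step in range(n_step):
    t = step * dt
    # PGSE effective gradient along x: +1 during the first pulse,
    # -1 during the refocused second pulse, 0 in between
    f = 1.0 if t < delta else (-1.0 if t >= Delta else 0.0)
    phase += q * f * pos[:, 0] * dt
    trial = pos + rng.normal(scale=np.sqrt(2.0 * D * dt), size=pos.shape)
    outside = np.linalg.norm(trial, axis=1, keepdims=True) > R
    pos = np.where(outside, pos, trial)    # crude rejection at the boundary
print("signal attenuation:", abs(np.mean(np.exp(1j * phase))))
```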
6

Fernandez, Tapia Joaquin. "Modeling, optimization and estimation for the on-line control of trading algorithms in limit-order markets". Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066354/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This PhD thesis focuses on the quantitative analysis of mathematical problems arising in the field of optimal algorithmic trading. Concretely, we propose a scientific approach to optimizing processes related to the capture and provision of liquidity in electronic markets. Because of the strongly industry-focused character of this work, we are interested not only in giving rigorous mathematical results but also in placing this research in the context of the different stages that come into play during the practical implementation of the tools developed throughout the following chapters (e.g., model interpretation, parameter estimation, programming, etc.). From a scientific standpoint, the core of our work relies on two techniques taken from the world of optimization and probability: stochastic control and stochastic approximation. In particular, we provide original academic results for the problem of high-frequency market making and the problem of portfolio liquidation using limit orders, both via a backward optimization approach. We also propose a forward optimization framework to solve the market-making problem; the latter approach is quite innovative for optimal trading, as it opens the door to machine learning techniques. From a practical angle, this PhD thesis seeks to create a bridge between academic research and practitioners. Our mathematical findings are constantly put in perspective in terms of their practical implementation. Hence, we devote a large part of our work to studying the different factors that are of paramount importance to understand when transforming our quantitative techniques into industrial value: the underlying market microstructure, empirical stylized facts, data processing, discussion of the models, limitations of our scientific framework, etc.
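
The backward stochastic-control approach mentioned above has a classical closed-form benchmark, the Avellaneda-Stoikov market-making model. The sketch below implements that standard baseline only as context, not the thesis's own formulation, and all parameter values are illustrative.

```python
# Avellaneda-Stoikov quotes for an inventory-penalized market maker.
import math

def as_quotes(s, q, t, T, sigma, gamma, k):
    """Mid price s, inventory q, time t, horizon T, volatility sigma,
    risk aversion gamma, order-flow decay k -> (bid, ask)."""
    r = s - q * gamma * sigma ** 2 * (T - t)              # reservation price
    spread = gamma * sigma ** 2 * (T - t) + (2.0 / gamma) * math.log(1.0 + gamma / k)
    return r - spread / 2.0, r + spread / 2.0

bid, ask = as_quotes(s=100.0, q=3, t=0.0, T=1.0, sigma=2.0, gamma=0.1, k=1.5)
print(f"bid={bid:.3f} ask={ask:.3f}")   # long inventory skews both quotes down
```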
7

Sun, Yucheng. "Essays in volatility estimation based on high frequency data". Doctoral thesis, Universitat Pompeu Fabra, 2017. http://hdl.handle.net/10803/402831.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Based on high-frequency price data, this thesis focuses on estimating the realized covariance and the integrated volatility of asset prices, and on applying volatility estimation to price jump detection. The first chapter uses the LASSO procedure to regularize estimators of high-dimensional realized covariance matrices. We establish theoretical properties of the regularized estimators that show their estimation precision and the probability that they correctly reveal the network structure of the assets. The second chapter proposes a novel estimator of the integrated volatility, which is the quadratic variation of the continuous part of the price process. This estimator is obtained by truncating the two-scales realized variance estimator. We show its consistency in the presence of market microstructure noise and of finite- or infinite-activity jumps in the price process. The third chapter employs this estimator to design a test for the existence of price jumps in noisy price data.
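
The two-scales realized variance (TSRV) estimator that the thesis truncates is compact enough to sketch. Below is a plain, untruncated TSRV in the form given by Zhang, Mykland, and Aït-Sahalia, applied to simulated noisy prices; the jump-truncation step that constitutes the thesis's contribution is not reproduced, and all simulation parameters are made up.

```python
# Two-scales realized variance: bias-correct a slow subsampled scale
# with the noise-dominated fast scale.
import numpy as np

def tsrv(y, K=30):
    """y: observed log-prices; K: slow subsampling scale."""
    n = len(y) - 1
    rv_all = np.sum(np.diff(y) ** 2)                 # fast scale, noise-dominated
    rv_slow = np.sum((y[K:] - y[:-K]) ** 2) / K      # averaged lag-K scale
    n_bar = (n - K + 1) / K
    return rv_slow - (n_bar / n) * rv_all            # bias-corrected estimate

rng = np.random.default_rng(2)
n = 23400
true = np.cumsum(rng.normal(0, 0.02 / np.sqrt(n), n + 1))
y = true + rng.normal(0, 5e-4, n + 1)                # add microstructure noise
print("IV:", 0.02 ** 2, " TSRV:", tsrv(y))
```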
8

Tunyavetchakit, Sophon [Verfasser], and Rainer [Akademischer Betreuer] Dahlhaus. "Volatility Decomposition and Nonparametric Estimation of Spot Volatility of Models with Poisson Sampling under Market Microstructure Noise / Sophon Tunyavetchakit ; Betreuer: Rainer Dahlhaus". Heidelberg : Universitätsbibliothek Heidelberg, 2016. http://d-nb.info/1180615786/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Tunyavetchakit, Sophon [Verfasser], and Rainer [Akademischer Betreuer] Dahlhaus. "Volatility Decomposition and Nonparametric Estimation of Spot Volatility of Models with Poisson Sampling under Market Microstructure Noise / Sophon Tunyavetchakit ; Betreuer: Rainer Dahlhaus". Heidelberg : Universitätsbibliothek Heidelberg, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:16-heidok-214504.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Bornert, Michel. "Morphologie microstructurale et comportement mécanique ; caractérisations expérimentales, approches par bornes et estimations autocohérentes généralisées". PhD thesis, Ecole Nationale des Ponts et Chaussées, 1996. http://tel.archives-ouvertes.fr/tel-00113078.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Predicting the behavior of random heterogeneous materials as a function of the spatial distribution of their constituents remains a largely open problem; its stakes, its formalism, and the various associated micromechanical approaches are presented. An experimental study of two-phase iron/silver and iron/copper materials, with either a matrix/inclusion morphology or co-continuous phases, shows that the influence of morphological parameters is most noticeable at the local scale and concerns in particular the heterogeneity of deformation, characterized in terms of per-phase averages and distribution functions measured with an original micro-extensometry technique. Since classical models do not account for the observed phenomena, a new approach based on the notion of a "representative morphological pattern" is proposed. A first multilayer-pattern model admits a semi-analytical expression but proves insufficient. A richer morphological description, obtained with patterns of arbitrary internal structure in an ellipsoidal distribution, leads to rigorous bounds and generalized self-consistent estimates of the effective linear behavior. The underlying theory is presented in detail, and the true physical meaning of these models is clarified. Numerical implementation yields, for example, new results for anisotropic particle-reinforced composites, whose local interactions are thus correctly described. Accounting for certain long-range correlation phenomena observed experimentally in the nonlinear regime remains, however, an open question.
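
For readers unfamiliar with bounds approaches of the kind generalized in this thesis, the classical Hashin-Shtrikman bounds are the standard starting point. The sketch below evaluates them for a two-phase composite; the moduli are illustrative values for an iron/silver pair, and nothing here reproduces the thesis's morphological-pattern bounds.

```python
# Classical Hashin-Shtrikman bounds on the effective bulk modulus
# of a two-phase isotropic composite.
def hs_bulk_bound(K1, G1, K2, f2):
    """Bound with phase 1 as the reference medium (f2 = volume fraction of
    phase 2); the stiffer reference gives the upper bound, the softer the
    lower bound."""
    return K1 + f2 / (1.0 / (K2 - K1) + 3.0 * (1.0 - f2) / (3.0 * K1 + 4.0 * G1))

# Iron matrix with silver inclusions (moduli in GPa, illustrative values)
K_fe, G_fe, K_ag, G_ag, f_ag = 170.0, 82.0, 100.0, 30.0, 0.3
print("upper bound:", hs_bulk_bound(K_fe, G_fe, K_ag, f_ag), "GPa")
print("lower bound:", hs_bulk_bound(K_ag, G_ag, K_fe, 1.0 - f_ag), "GPa")
```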
11

Bibinger, Markus. "Estimating the quadratic covariation from asynchronous noisy high-frequency observations". Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2011. http://dx.doi.org/10.18452/16365.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
A nonparametric estimation approach for the quadratic covariation of Itô processes from high-frequency observations with additive noise is developed. It is proved that a closely related sequence of statistical experiments is locally asymptotically normal (LAN) in the Le Cam sense. By virtue of this property, optimal convergence rates and efficiency bounds for the asymptotic variances of estimators can be deduced. The proposed nonparametric estimator is founded on a combination of two modern estimation methods, one devoted to additive observation noise and the other to asynchronous observation schemes. We recast the Hayashi-Yoshida estimator in a new representation that can serve as a synchronization method and can be adapted to the combined approach. A stable central limit theorem is proved, focusing especially on the impact of non-synchronicity on the asymptotic variance. With these preparations at hand, the generalized multiscale estimator for the noisy and asynchronous setting arises. This convenient method for the general model is based on subsampling and multiscale estimation techniques established by Mykland, Zhang, and Aït-Sahalia, and it preserves valuable features of the synchronization methodology and of the estimators designed to cope with noise perturbation. The central result of the thesis is that the estimation error of the generalized multiscale estimator converges with optimal rate, stably in law, to a centred mixed normal limiting distribution under fairly general regularity assumptions. For the asymptotic variance, a consistent estimator based on time-transformed histograms is given, making the central limit theorem feasible. In an application study, a practicable estimation algorithm including a choice of tuning parameters is tested for its features and finite-sample behaviour. Recent advances in the field by other authors are taken into account through comparisons and notes.
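
The Hayashi-Yoshida estimator that the thesis recasts has a simple direct form: sum products of returns over every pair of overlapping observation intervals, which handles asynchronous sampling without interpolation. Below is a minimal O(n^2) sketch on synthetic asynchronous data; the thesis's multiscale noise correction is not reproduced.

```python
# Hayashi-Yoshida covariation estimator for asynchronous observations.
import numpy as np

def hayashi_yoshida(t_x, x, t_y, y):
    """t_x, t_y: increasing observation times; x, y: log-prices."""
    hy = 0.0
    for i in range(len(x) - 1):
        dx = x[i + 1] - x[i]
        for j in range(len(y) - 1):
            # intervals (t_x[i], t_x[i+1]] and (t_y[j], t_y[j+1]] overlap?
            if t_x[i] < t_y[j + 1] and t_y[j] < t_x[i + 1]:
                hy += dx * (y[j + 1] - y[j])
    return hy

# Toy data: two asynchronous samplings of one Brownian path on [0, 1]
rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0.0, 1.0, 500))
w = np.cumsum(rng.normal(0.0, np.sqrt(np.diff(t, prepend=0.0))))
ix = np.sort(rng.choice(500, size=200, replace=False))
iy = np.sort(rng.choice(500, size=250, replace=False))
print("HY estimate:", hayashi_yoshida(t[ix], w[ix], t[iy], w[iy]))  # ~ 1.0
```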
12

Kokoszka, Florian. "Estimations du mélange vertical le long de sections hydrologiques en Atlantique Nord". Thesis, Brest, 2012. http://www.theses.fr/2012BRES0097/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Vertical mixing in the ocean contributes to sustaining the Meridional Overturning Circulation (MOC) by allowing the renewal of deep waters. A section across the MOC is realized by the OVIDE hydrological line, repeated every two years between Portugal and Greenland since 2002. The energy required for mixing is provided by internal waves generated by wind and tides, and microstructure measurements (VMP) in 2008 show intensified dissipation values Evmp in the main thermocline and near topography. Our study builds on these observations to examine the vertical fine-scale structure of the ocean. Estimates of the dissipation E due to internal waves are made with CTD and LADCP measurements. Comparison with the VMP measurements allows us to optimize the parameterization of E, bracketing the observations within a factor of 3 and their mean values to within ±30%. Systematic application to the OVIDE dataset provides a map of the mixing across the basin. The geographical distribution of the vertical diffusivity K is similar along the five sections, with values near 10^-4 m^2/s in the main thermocline, at the bottom, and near topography, and near 10^-5 m^2/s in the ocean interior. Regional differences are present, and K can locally approach 10^-3 m^2/s. Application to the FOUREX 1997 dataset reveals an intensification of K along the Mid-Atlantic Ridge, where the average values are 2 to 3 times larger than along the OVIDE sections. The spatial distribution of Thorpe scales LT appears to be correlated with the internal-wave mixing patterns. Nevertheless, dissipation estimates based on LT overestimate Evmp by a factor of 10 to 100, which may be due to a misrepresentation of the stage of turbulence development in the ocean. Some mechanisms that can generate internal waves are proposed. Probable sites of tidal generation are located using a simple model of wave-beam trajectories. A possible correlation between geostrophic flows and internal waves is considered in the main thermocline. Finally, a study of Turner angles shows that double-diffusion instabilities may be present over a large part of the section.
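
Thorpe scales, used above as an overturning proxy, can be computed in a few lines: sort the density profile into its gravitationally stable order and take the RMS of the resulting displacements. The sketch below uses a synthetic profile and, for simplicity, computes L_T over the whole profile; in practice it is computed per detected overturn patch.

```python
# Thorpe-scale computation on a synthetic density profile.
import numpy as np

def thorpe_scale(depth, density):
    """depth increasing downward; density: potential density profile."""
    order = np.argsort(density, kind="stable")   # stable (monotonic) reordering
    displacement = depth[order] - depth          # Thorpe displacements
    return np.sqrt(np.mean(displacement ** 2))   # L_T

depth = np.arange(0.0, 50.0, 0.5)                      # m
density = 1027.0 + 0.001 * depth                       # stably stratified background
density[40:60] = density[40:60][::-1].copy()           # inject an artificial overturn
print("L_T =", thorpe_scale(depth, density), "m")
```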
13

Gannac, Yves. "Alliages Fe-6,5%Si élaborés par solidification rapide sous atmosphère controlée : microstructure, propriétés magnétiques et comparaison avec des alliages Fe-Si industriels". Toulouse, INSA, 1992. http://www.theses.fr/1992ISAT0015.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The construction of a rapid-solidification device operating under a controlled atmosphere enables the production of iron-silicon ribbons with excess silicon (6.5 wt%) whose surface finish is improved compared with ribbons quenched in air. Good control of the casting parameters leads to ribbons of satisfactory, reproducible quality and geometry. At low frequency (0-50 Hz), the magnetic properties of these alloys place them slightly below industrial grain-oriented alloys, but clearly above non-oriented alloys, whose texture is close to that of the ribbons. The drop in properties with increasing frequency is not observed in the ribbons, giving them better properties at 400 Hz than industrial alloys, whether grain-oriented or not. Over a wide induction range, simple parametric equations can represent the experimental magnetization curves. A Fourier series analysis highlights various physical aspects that influence the shape of these curves (magnetization mechanisms, induced currents).
14

Al-Saleh, Mohammad A. "Nonlinear Parameter Estimation for Multiple Site-Type Polyolefin Catalysts Using an Integrated Microstructure Deconvolution Methodology". Thesis, 2011. http://hdl.handle.net/10012/5818.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The microstructure of polyolefins determines their macroscopic properties. Consequently, it is essential to predict how polymerization conditions will affect polyolefin microstructure. The most important microstructural distributions of ethylene/alpha-olefin copolymers made with coordination catalysts are their molecular weight distribution (MWD), chemical composition distribution (CCD), and comonomer sequence length distribution (CSLD). Several mathematical models have been developed to predict these microstructural distributions; reliable techniques to estimate parameters for these models, however, are still poorly developed, especially for catalysts that have multiple site types, such as heterogeneous Ziegler-Natta complexes. Most commercial polyolefins are made with heterogeneous Ziegler-Natta catalysts, which produce polyolefins with broad MWD, CCD, and CSLD. This behavior is attributed to the presence of several active site types, leading to a final product that can be seen as a blend of polymers made on the different catalyst site types. The main objective of this project is to develop a methodology to estimate the most important parameters needed to describe the microstructure of ethylene/alpha-olefin copolymers made with these multiple site-type catalysts. To accomplish this objective, we developed the Integrated Deconvolution Estimation Model (IDEM). IDEM estimates ethylene/alpha-olefin reactivity ratios for each site type in two steps. In the first step, the copolymer MWD, measured by high-temperature gel permeation chromatography, is deconvoluted into several Flory most probable distributions to determine the number of site types and the weight fractions of copolymer made on each of them. In the second estimation step, the model uses the MWD deconvolution information to fit the copolymer triad distributions measured by 13C NMR and to estimate the reactivity ratios per site type. This is the first time that MWD and triad distribution information have been integrated to estimate per-site-type reactivity ratios of multiple site-type catalysts used to make ethylene/alpha-olefin copolymers. IDEM was applied to two sets of ethylene-co-1-butene copolymers made with a commercial Ziegler-Natta catalyst, covering a wide range of 1-butene fractions. In the first set of samples (EBH), hydrogen was used as a chain transfer agent, whereas it was absent in the second set (EB). Comparison of the reactivity ratio estimates for the two sets of samples permitted the quantification of the hydrogen effect on the reactivity ratios of the different site types present in the Ziegler-Natta catalyst used in this thesis. Since 13C NMR is an essential analytical step in IDEM, triad distributions for the EB and EBH copolymers were measured in two different laboratories (Department of Chemistry at the University of Waterloo, and Dow Chemical Research Center at Freeport, Texas). IDEM was applied to both sets of triad measurements to determine the effect of interlaboratory 13C NMR analysis on reactivity ratio estimation.
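
The first IDEM step, deconvolution of the MWD into Flory most probable distributions, can be illustrated with a small least-squares fit. The three-site synthetic target, the starting values, and the use of SciPy's curve_fit are all assumptions of this sketch, not the thesis's implementation.

```python
# Deconvolute a GPC molecular weight distribution into Flory components.
import numpy as np
from scipy.optimize import curve_fit

def flory(logM, Mn):
    """Flory most-probable distribution on the GPC log10(M) axis."""
    M = 10.0 ** logM
    return np.log(10.0) * (M ** 2 / Mn ** 2) * np.exp(-M / Mn)

def mwd_model(logM, m1, m2, m3, Mn1, Mn2, Mn3):
    """Weighted sum of three Flory components, one per site type."""
    return m1 * flory(logM, Mn1) + m2 * flory(logM, Mn2) + m3 * flory(logM, Mn3)

logM = np.linspace(2.5, 6.5, 200)
target = mwd_model(logM, 0.2, 0.5, 0.3, 2e3, 2e4, 1.5e5)   # synthetic 3-site MWD
p0 = [0.3, 0.4, 0.3, 1e3, 1e4, 1e5]                        # starting guesses
popt, _ = curve_fit(mwd_model, logM, target, p0=p0, maxfev=20000)
print("site weights:", popt[:3], "site Mn:", popt[3:])
```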
15

Tsai, Yun-Cheng (蔡芸琤). "Estimating Realized Variance and True Prices from High-Frequency Data with Microstructure Noise". Thesis, 2016. http://ndltd.ncl.edu.tw/handle/07329595626980843759.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
PhD dissertation, National Taiwan University, Graduate Institute of Computer Science and Information Engineering, academic year 104 (2015-2016).
The market prices and the continuous quadratic variation play critical roles in high-frequency trading. However, microstructure noise can make the observed prices differ from the true prices and hence bias estimates of the continuous quadratic variation. Following Zhou, we assume the observed prices are the result of adding microstructure noise to the true but hidden prices. The microstructure noise is assumed to be independent and identically distributed (i.i.d.) and independent of the true prices. Zhang et al. propose a batch estimator for the continuous quadratic variation of high-frequency data in the presence of microstructure noise; it produces estimates only after all the data have arrived. This thesis proposes a recursive version of their estimator that outputs variation estimates as the data arrive. The recursive estimator gives excellent estimates well before all the data arrive. Both real high-frequency futures data and simulation data confirm its performance. When prices are sampled from a geometric Brownian motion process, the Kalman filter can produce optimal estimates of the true prices from the observed prices. However, the covariance matrix of the microstructure noise and that of the true prices must be known for this claim to hold. In practice, neither covariance matrix is known, so they must be estimated. This thesis presents a robust Kalman filter (RKF) to estimate the true prices when microstructure noise is present. The RKF does not need the aforesaid covariance matrices as inputs. Simulation results show that the RKF gives essentially identical estimates to the Kalman filter, which has access to the two above-mentioned covariance matrices.
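
The state-space model described above, a random-walk true price observed through i.i.d. noise, admits a textbook scalar Kalman filter when both variances are known, which is exactly the requirement the RKF is designed to remove. Below is a minimal sketch of that known-variance baseline with illustrative parameters; it does not reproduce the thesis's RKF.

```python
# Scalar Kalman filter for a random-walk price plus observation noise.
import numpy as np

def kalman_true_price(y, q, r):
    """y: observed log-prices; q: true-price step variance; r: noise variance."""
    x, p = y[0], r                 # initial state estimate and uncertainty
    out = np.empty_like(y)
    for i, obs in enumerate(y):
        p = p + q                  # predict: random-walk state transition
        k = p / (p + r)            # Kalman gain
        x = x + k * (obs - x)      # update with the new observation
        p = (1.0 - k) * p
        out[i] = x
    return out

rng = np.random.default_rng(4)
true = np.cumsum(rng.normal(0, 0.001, 1000))
y = true + rng.normal(0, 0.002, 1000)
est = kalman_true_price(y, q=0.001 ** 2, r=0.002 ** 2)
print("RMSE observed:", np.sqrt(np.mean((y - true) ** 2)),
      " filtered:", np.sqrt(np.mean((est - true) ** 2)))
```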
16

Schmidt-Hieber, Anselm Johannes. "Nonparametric Methods in Spot Volatility Estimation". Doctoral thesis, 2010. http://hdl.handle.net/11858/00-1735-0000-000D-F1CF-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Solomon, Christopher S. "Estimation and Control of Friction in Bulk Plastic Deformation Process". Thesis, 2018. http://etd.iisc.ac.in/handle/2005/4194.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Friction plays a significant role in bulk plastic deformation processes, controlling tool life, the formability of the work-piece material, and the quality of the finished product, including surface finish, microstructure, and mechanical properties. Friction causes inhomogeneous deformation, leading to defects in the finished products. Excessive friction leads to heat generation, wear, pick-up, and galling of the tool surface, resulting in premature failure of the tools. Computer simulations based on finite element methods are extensively used for process planning and tool design in the metal-working industry for bulk plastic deformation processes. The material and friction models used in simulation packages are very important for accurate process simulations; hence it is essential to estimate the friction and to understand its role in the deformation of the work-piece in order to run realistic process simulations. Friction in bulk plastic deformation processes is influenced by many factors, such as velocity, temperature, and contact pressure, and by tribological conditions such as surface roughness and lubrication. Among these factors, the surface roughness and surface topography (ST) of the die material are the important parameters that influence the friction between the dies and the work-piece. Transfer layer formation and the coefficient of friction, along with its two components, adhesion and ploughing, are controlled by the ST. The ploughing component is mainly the frictional resistance caused by asperities of the hard surface ploughing through the soft material; the force required for plastic flow of the softer material represents the ploughing friction component. The adhesion component of friction is due to the cold welding/adhesive bonds occurring in the real contact area of the asperities; the force required to shear the adhesion junctions formed at the interface represents the adhesion component. Though surface characteristics such as roughness have been dealt with by many researchers, the influence of surface topography on friction and on microstructure evolution in bulk plastic deformation is still not well understood. The work of Menezes et al. has shown that friction is influenced by the surface roughness, the ST, and the transfer layer, but it does not link the ST to the microstructural evolution of the material. Friction influences the strain and strain rates imparted to the deforming material, and the strain and strain rate (apart from the temperature) imposed on the deforming material in turn influence the microstructural evolution of the work-piece. Thus, for application at industrial scale, it is important that the influence of friction on bulk deformation and microstructural evolution, if any, be understood. Further, the techniques used by Menezes et al. for generating the STs are very difficult to adopt at industrial scale. The present thesis addresses the following three issues on the possible influence of friction in metal forming:
• Use of surface generation techniques that can be easily adapted at the industrial scale.
• The role of ST on friction during room-temperature and high-temperature deformation.
• The role of this friction on the microstructural evolution during bulk plastic deformation of aluminium alloys.
18

Kotchoni, Rachidi. "Efficient estimation using the characteristic function : theory and applications with high frequency data". Thèse, 2010. http://hdl.handle.net/1866/4392.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In estimating the integrated volatility of financial assets using noisy high-frequency data, the time-series properties assumed for the microstructure noise determine the proper choice of the volatility estimator. In the first chapter of this thesis, we propose a new model for the microstructure noise with three important features. First, our model assumes that the noise is L-dependent. Second, the memory lag L is allowed to increase with the sampling frequency. And third, the noise may include an endogenous part, that is, a component that is correlated with the latent returns. The main difference between this microstructure model and existing ones is that it implies a first-order autocorrelation that converges to 1 as the sampling frequency goes to infinity. We use this semi-parametric model to derive a new shrinkage estimator for the integrated volatility. The proposed estimator makes an optimal signal-to-noise trade-off by combining a consistent estimator with an inconsistent one. Simulation results show that the shrinkage estimator behaves better than the best of the two combined ones. We also propose estimators for the parameters of the noise model. An empirical study based on stocks listed in the Dow Jones Industrials shows the relevance of accounting for possible time dependence in the noise process.

Chapters 2, 3, and 4 pertain to the generalized method of moments based on the characteristic function. The likelihood functions of many financial econometrics models are not known in closed form; this is the case, for example, for the stable distribution and for discretely observed continuous-time models. In these cases, one may estimate the parameter of interest by specifying a moment condition based on the difference between the theoretical (conditional) characteristic function and its empirical counterpart. The challenge is then to exploit the whole continuum of moment conditions thus defined to achieve maximum likelihood efficiency. This problem has been solved by Carrasco and Florens (2000), who propose the CGMM procedure. The objective function of the CGMM is a quadratic form on the Hilbert space defined by the moment function; it depends on a Tikhonov-type regularized inverse of the covariance operator associated with the moment function. Carrasco and Florens (2000) have shown that the estimator obtained by minimizing the proposed objective function is asymptotically as efficient as the maximum likelihood estimator, provided that the regularization parameter (α) converges to zero as the sample size goes to infinity. However, the nature of this objective function raises two important questions. First, how should α be selected in practice? And second, how can the CGMM be implemented when the multiplicity (d) of the integrals embedded in the objective function is large? These questions are tackled in the last three chapters of the thesis. In Chapter 2, we propose to choose α by minimizing the approximate mean square error (MSE) of the estimator. Following an approach similar to Newey and Smith (2004), we derive a higher-order expansion of the estimator from which we characterize the finite-sample dependence of the MSE on α. We provide two data-driven methods for selecting the regularization parameter in practice: the first relies on the higher-order expansion of the MSE, whereas the second uses only simulations. We show that our simulation technique delivers a consistent estimator of α. Our Monte Carlo simulations confirm the importance of the optimal selection of α.

The goal of Chapter 3 is to illustrate how to efficiently implement the CGMM for d ≤ 2. To start with, we review the consistency and asymptotic normality properties of the CGMM estimator. Next, we suggest numerical recipes for its implementation. Finally, we carry out a simulation study with the stable distribution that confirms the accuracy of the CGMM as an inference method. An empirical application based on the autoregressive variance Gamma model leads to a well-known conclusion: investors require a positive premium for bearing the expected risk, while a negative premium is attached to the unexpected risk. In implementing the characteristic-function-based CGMM, a major difficulty lies in the evaluation of the multiple integrals embedded in the objective function. Numerical quadratures are among the most accurate methods that can be used in the present context. Unfortunately, the number of quadrature points grows exponentially with d; when the data-generating process is Markov or dependent, accurate implementation of the CGMM becomes roughly unfeasible when d ≥ 3. In Chapter 4, we propose a strategy that consists of creating univariate samples by taking linear combinations of the elements of the original vector process, with the weights of the linear combinations drawn from a normalized subset of ℝ^{d}. Each univariate index generated in this way is called a frequency-domain bootstrap sample and can be used to compute an estimator of the parameter of interest. Finally, all the estimators obtained in this fashion are aggregated to obtain the final estimator; the optimal aggregation rule is discussed. The overall method is illustrated by a simulation study and an empirical application based on autoregressive Gamma models. This thesis makes extensive use of the bootstrap, a technique according to which the statistical properties of an unknown distribution can be estimated from an estimate of that distribution. It is thus possible to improve our simulations and empirical results by using state-of-the-art refinements of the bootstrap methodology.
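
The characteristic-function moment condition at the heart of the CGMM can be illustrated in a deliberately simplified form: match the empirical characteristic function to a theoretical one on a finite grid of points. The CGMM proper uses the full continuum with a Tikhonov-regularized covariance operator, which is not reproduced here; the Gaussian target, the grid, and all settings below are illustrative assumptions.

```python
# Crude grid-based characteristic-function matching (CGMM-flavored toy).
import numpy as np
from scipy.optimize import minimize

def ecf(tau, x):
    """Empirical characteristic function evaluated at grid points tau."""
    return np.exp(1j * np.outer(tau, x)).mean(axis=1)

def objective(theta, tau, x):
    mu, sig = theta[0], abs(theta[1])
    model = np.exp(1j * tau * mu - 0.5 * (sig * tau) ** 2)   # Gaussian CF
    return np.sum(np.abs(ecf(tau, x) - model) ** 2)

rng = np.random.default_rng(5)
x = rng.normal(1.0, 2.0, 5000)
tau = np.linspace(0.1, 2.0, 20)
res = minimize(objective, x0=[0.0, 1.0], args=(tau, x), method="Nelder-Mead")
print("estimated (mu, sigma):", res.x)
```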
19

Bandyopadhyay, Ritwik. "Ensuring Fatigue Performance via Location-Specific Lifing in Aerospace Components Made of Titanium Alloys and Nickel-Base Superalloys". Thesis, 2020.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this thesis, the role of location-specific microstructural features in the fatigue performance of safety-critical aerospace components made of nickel (Ni)-base superalloys and linear friction welded (LFW) titanium (Ti) alloys has been studied using crystal plasticity finite element (CPFE) simulations, energy-dispersive X-ray diffraction (EDD), backscatter electron (BSE) images, and digital image correlation (DIC).

To develop a microstructure-sensitive fatigue life prediction framework, it is essential, first, to build trust in the quantitative predictions of CPFE analysis by quantifying uncertainties in the simulated mechanical response, and, second, to construct a unified fatigue life prediction metric applicable to multiple material systems, together with a calibration strategy for the unified model parameter that accounts for uncertainties originating from the CPFE simulations and inherent in the experimental calibration dataset. To achieve the first task, a genetic algorithm framework is used to obtain the statistical distributions of the crystal plasticity (CP) parameters. These distributions are then used in a first-order, second-moment method to compute the mean and standard deviation of the stress along the loading direction (σ_load), the plastic strain accumulation (PSA), and the stored plastic strain energy density (SPSED). The results suggest that a variability of ~10% in σ_load and of 20%-25% in the PSA and SPSED values may exist due to the uncertainty in the CP parameter estimation. Further, the contribution of a specific CP parameter to the overall uncertainty is path-dependent and varies with the load step under consideration.

To accomplish the second goal, it is postulated in this thesis that a critical value of the SPSED is associated with fatigue failure in metals and is independent of the applied load. Unlike the classical approach of estimating the (homogenized) SPSED as the cumulative area enclosed within the macroscopic stress-strain hysteresis loops, CPFE simulations are used to compute the (local) SPSED at each material point within polycrystalline aggregates of 718Plus, an additively manufactured Ni-base superalloy. A Bayesian inference method is used to calibrate the critical SPSED, which is subsequently used to predict fatigue lives at nine different strain ranges, including strain ratios of 0.05 and -1, using nine statistically equivalent microstructures. For each strain range, the predicted lives from all simulated microstructures follow a log-normal distribution; for a given strain ratio, the predicted scatter increases with decreasing strain amplitude and is indicative of the scatter observed in the fatigue experiments. Further, the log-normal mean lives at each strain range are in good agreement with the experimental evidence. Since the critical SPSED captures the experimental data with reasonable accuracy across various loading regimes, it is hypothesized to be a material property sufficient to predict the fatigue life.
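As an illustration of the first-order, second-moment propagation described above, the sketch below pushes CP-parameter means and standard deviations through a generic scalar response using finite-difference sensitivities. The response function and the independence assumption are placeholders for illustration, not the thesis's CPFE pipeline.

```python
import numpy as np

def fosm_moments(response, mu, sigma, eps=1e-4):
    """First-order, second-moment estimate of the mean and standard
    deviation of a scalar response (e.g., sigma_load, PSA, or SPSED
    from a CPFE run) given independent parameter uncertainties.

    response : callable mapping a parameter vector to a scalar
    mu, sigma: means and standard deviations of the CP parameters
    """
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    f0 = response(mu)
    # forward finite-difference sensitivities df/dx_i at the mean point
    grads = np.array([(response(mu + eps * np.eye(len(mu))[i]) - f0) / eps
                      for i in range(len(mu))])
    # mean ~ f(mu); variance ~ sum_i (df/dx_i * sigma_i)^2
    return f0, float(np.sqrt(np.sum((grads * sigma) ** 2)))
```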

Inclusions are unavoidable in Ni-base superalloys and lead to two competing failure modes, namely inclusion- and matrix-driven failure. Each inclusion-related factor that may contribute to crack initiation is isolated and systematically investigated within RR1000, a powder-metallurgy Ni-base superalloy, using CPFE simulations. Specifically, the roles of the inclusion stiffness, loading regime, loading direction, a debonded region at the inclusion-matrix interface, microstructural variability around the inclusion, inclusion size, dissimilar coefficient of thermal expansion (CTE), temperature, residual stress, and distance of the inclusion from the free surface are studied in the emergence of the two failure modes. The CPFE analysis indicates that the emergence of a failure mode is an outcome of complex interactions between the aforementioned factors. However, a higher probability of failure due to inclusions is observed with increasing temperature if the CTE of the inclusion is higher than that of the matrix, and vice versa. No overall correlation between inclusion size and propensity for damage is found for inclusions on the order of the mean grain size. Further, the CPFE simulations indicate that surface inclusions are more damaging than interior inclusions for similar surrounding microstructures. These observations are used to instantiate twenty realistic statistically equivalent microstructures of RR1000: ten containing inclusions and the remaining ten without. Using CPFE simulations with these microstructures at four different temperatures and three strain ranges per temperature, the critical SPSED is calibrated as a function of temperature for RR1000. The results suggest that the critical SPSED decreases almost linearly with increasing temperature and is able to predict the realistic emergence of the competing failure modes as a function of applied strain range and temperature.
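For intuition only, here is one minimal way a critical-SPSED criterion could translate into a life prediction: if each material point stores an approximately constant SPSED increment per stabilized cycle, the predicted life is set by the point that first reaches the critical value. The constant-increment assumption and the minimum-over-points failure rule are illustrative simplifications, not the thesis's Bayesian, microstructure-resolved calibration.

```python
def spsed_life(spsed_per_cycle, w_crit):
    """Predicted cycles to failure under a critical-SPSED criterion,
    assuming a constant per-cycle SPSED increment at each material
    point of the polycrystalline aggregate.

    spsed_per_cycle : iterable of per-cycle SPSED increments (one per point)
    w_crit          : calibrated critical SPSED (temperature-dependent)
    """
    return min(w_crit / w for w in spsed_per_cycle if w > 0)
```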

The LFW process leads to the development of significant residual stress in the welded components, and the role of residual stress in the fatigue performance of materials cannot be overstated. Hence, to ensure the fatigue performance of LFW Ti alloys, residual strains in LFWs of similar (Ti-6Al-4V welded to Ti-6Al-4V, or Ti64-Ti64) and dissimilar (Ti-6Al-4V welded to Ti-5Al-5V-5Mo-3Cr, or Ti64-Ti5553) Ti alloys have been characterized using EDD. For each type of LFW, one sample is examined in the as-welded (AW) condition and another after a post-weld heat treatment (HT). Residual strains have been studied separately in the alpha and beta phases of the material, and five components (three axial and two shear) are reported in each case. The in-plane axial components of the residual strains show a smooth and symmetric behavior about the weld center for the Ti64-Ti64 LFW samples in the AW condition, whereas these components in the Ti64-Ti5553 LFW sample show a symmetric trend with jump discontinuities. Such jump discontinuities, observed in both the AW and HT conditions of the Ti64-Ti5553 samples, suggest different strain-free lattice parameters in the weld region and the parent material. In contrast, the results from the Ti64-Ti64 LFW samples in both AW and HT conditions suggest nearly uniform strain-free lattice parameters throughout the weld region. The observed trends in the in-plane axial residual strain components are rationalized by the corresponding microstructural changes and variations across the weld region, as seen in BSE images.
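For reference, diffraction-based residual strains of this kind are conventionally obtained from measured versus strain-free lattice spacings. A minimal sketch of that standard definition follows; it is not the thesis's full per-phase, per-component analysis.

```python
def lattice_strain(d_measured, d_strain_free):
    """Elastic lattice strain from a measured interplanar spacing and a
    strain-free reference, evaluated per phase (alpha or beta) and per
    strain component. Apparent jump discontinuities arise if the
    strain-free lattice parameter differs between weld and parent material."""
    return (d_measured - d_strain_free) / d_strain_free
```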

In the literature, fatigue crack initiation in LFW Ti-6Al-4V specimens does not usually take place in the seemingly weakest location, i.e., the weld region. From the BSE images, the Ti-6Al-4V microstructure at the distance from the weld center typically associated with crack initiation in the literature is identified in both the AW and HT samples and found to be identical: equiaxed alpha grains with beta phase present at the alpha grain boundaries and triple points. Hence, the subsequent fatigue analysis of LFW Ti-6Al-4V considers the equiaxed alpha microstructure.

LFW components made of Ti-6Al-4V are often designed for high cycle fatigue performance under high mean stress, i.e., high R ratios. In engineering practice, mean stress corrections are employed to assess the fatigue performance of a material or structure, although this is problematic for Ti-6Al-4V, which exhibits anomalous behavior at high R ratios. To address this problem, high cycle fatigue analyses are performed on two Ti-6Al-4V specimens with equiaxed alpha microstructures at a high R ratio. In one specimen, two micro-textured regions (MTRs), with their c-axes near-parallel and near-perpendicular to the loading direction, are identified, and high-resolution DIC is performed in the MTRs to study grain-level strain localization. In the other specimen, DIC is performed over a larger area, and crack initiation is observed in a random-textured region. To accompany the experiments, CPFE simulations are performed to investigate the mechanistic aspects of crack initiation and the relative activity of different families of slip systems as a function of R ratio. A critical soft-hard-soft grain combination is associated with crack initiation, indicating a possible dwell effect at high R ratios, which can be attributed to the high applied mean stress and the high creep sensitivity of Ti-6Al-4V at room temperature. Further, the simulations indicate more heterogeneous deformation at higher R ratios, specifically the activation of multiple families of slip systems with fewer grains being plasticized. Such behavior is exacerbated within MTRs, especially the MTR composed of grains with their c-axes near-parallel to the loading direction. These features of micro-plasticity make the high R ratio regime more vulnerable to fatigue damage accumulation and explain the anomalous mean stress behavior exhibited by Ti-6Al-4V at high R ratios.
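To make the soft/hard grain terminology concrete, a minimal Schmid-factor sketch is given below: alpha grains with c-axes near the load axis have low Schmid factors on easy slip systems ("hard"), while favorably oriented grains are "soft". This is a generic textbook computation, not the thesis's CPFE machinery.

```python
import numpy as np

def schmid_factor(load_dir, slip_plane_normal, slip_dir):
    """Schmid factor m = |cos(phi)| * |cos(lambda)| for one slip system
    under uniaxial loading. A high m on an easy slip system marks a
    'soft' grain; a low m on all easy systems marks a 'hard' grain."""
    l = np.asarray(load_dir, float); l /= np.linalg.norm(l)
    n = np.asarray(slip_plane_normal, float); n /= np.linalg.norm(n)
    s = np.asarray(slip_dir, float); s /= np.linalg.norm(s)
    return abs(l @ n) * abs(l @ s)
```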

20

Absolonová, Karolína. "Histologický odhad dožitého věku jedince ze spálené a nespálené kompaktní kosti lidského žebra". Doctoral thesis, 2012. http://www.nusl.cz/ntk/nusl-309514.

Full text
APA, Harvard, Vancouver, ISO and other styles
Abstract (summary):
This dissertation studies the histological structure of burned and unburned compact bone of the human rib. The aim was to evaluate the effects of different cremation temperatures on the structure of bone tissue and, on the basis of these findings, to design an applicable methodology for estimating the age-at-death of an unknown individual. The research material consisted of recent human ribs from individuals of known age-at-death, sex, and cause of death. The skeletal samples were experimentally burned under predefined conditions: every bone was divided into several pieces, one of which remained unburned while the others were burned at temperatures of 600, 700, 800 and 1000 °C. Undecalcified and unstained cross-sections were prepared from the burned and unburned bone samples and analysed microscopically at a magnification of 100×. The histological analysis was performed on the digital microphotographs using the SigmaScan Pro 5 image analysis program. In each cross-section a total of 28 variables were studied, and the resulting histomorphometric data were statistically processed using the Statistica 6 program. The result of the research is the description of the changes of histological structures caused by...
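Histological aging methods of this kind typically end in a regression of known age-at-death on the measured histomorphometric variables. A generic least-squares sketch is shown below; the variable set is hypothetical, and these are not the thesis's published equations.

```python
import numpy as np

def fit_age_regression(X, age):
    """Ordinary least-squares fit of age-at-death on histomorphometric
    predictors (e.g., osteon counts or areas measured per cross-section).

    X   : (n_samples, n_variables) matrix of measurements
    age : (n_samples,) known ages-at-death of the reference sample
    """
    A = np.column_stack([np.ones(len(age)), X])   # add intercept column
    coef, *_ = np.linalg.lstsq(A, age, rcond=None)
    return coef   # [intercept, slopes...]
```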

Go to the bibliography