Theses / dissertations on the topic "Extrapolation"

Follow this link to see other types of publications on the topic: Extrapolation.

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 theses / dissertations for your research on the topic "Extrapolation".

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if it is available in the metadata.

Browse theses / dissertations from a wide variety of scientific fields and compile a correct bibliography.

1

Johnson, Walter William. "Studies in motion extrapolation". The Ohio State University, 1986. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487265143146004.

2

Domingo Salazar, Carlos. "Endpoint estimates via extrapolation theory". Doctoral thesis, Universitat de Barcelona, 2016. http://hdl.handle.net/10803/396143.

Abstract:
In this thesis, we study different variants of Rubio de Francia's extrapolation that allow us to obtain estimates near L1. This theory is subsequently applied to deduce endpoint boundedness for the Bochner-Riesz operator and other classes of multipliers. We also present results related to Yano's extrapolation on Lorentz spaces and how it can be related to the theory of weights.
3

Salam, Ahmed. "Extrapolation : extension et nouveaux résultats". Lille 1, 1993. http://www.theses.fr/1993LIL10191.

4

Lin, Tim T. Y., and Felix J. Herrmann. "Compressed wavefield extrapolation with curvelets". Society of Exploration Geophysicists, 2007. http://hdl.handle.net/2429/560.

Abstract:
An explicit algorithm for the extrapolation of one-way wavefields is proposed which combines recent developments in information theory and theoretical signal processing with the physics of wave propagation. Because of excessive memory requirements, explicit formulations for wave propagation have proven to be a challenge in 3-D. By using ideas from "compressed sensing", we are able to formulate the (inverse) wavefield extrapolation problem on small subsets of the data volume, thereby reducing the size of the operators. According to compressed sensing theory, signals can successfully be recovered from an incomplete set of measurements when the measurement basis is incoherent with the representation in which the wavefield is sparse. In this new approach, the eigenfunctions of the Helmholtz operator are recognized as a basis that is incoherent with curvelets, which are known to compress seismic wavefields. By casting the wavefield extrapolation problem in this framework, wavefields can successfully be extrapolated in the modal domain via a computationally cheaper operation. A proof of principle for the "compressed sensing" method is given for wavefield extrapolation in 2-D. The results show that our method is stable and produces results identical to the direct application of the full extrapolation operator.
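To make the recovery step concrete, here is a minimal, self-contained sketch of sparse recovery from incoherent measurements via iterative soft thresholding (ISTA). It is an illustration of the generic compressed-sensing machinery the abstract invokes, not the authors' algorithm; the matrix, sizes and parameters below are invented for the example.

import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                          # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)      # random, hence incoherent, measurement matrix
y = A @ x_true                                # compressed measurements

# ISTA: minimize 0.5*||y - A x||^2 + lam*||x||_1 by gradient step + soft threshold.
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2        # safe step size: 1 / ||A||_2^2
x = np.zeros(n)
for _ in range(500):
    z = x - step * (A.T @ (A @ x - y))        # gradient step on the quadratic term
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))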
5

Lim, Hee Jin. "Facilitatory neural dynamics for predictive extrapolation". [College Station, Tex.]: Texas A&M University, 2006. http://hdl.handle.net/1969.1/ETD-TAMU-1759.

6

Fang, Zhide. "Robust extrapolation designs for linear models". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape9/PQDD_0035/NQ46835.pdf.

7

Segovia, Carlos. "Extrapolation and commutators of singular integrals". Pontificia Universidad Católica del Perú, 2014. http://repositorio.pucp.edu.pe/index/handle/123456789/97397.

Abstract:
1. Introduction. In these notes we shall present results concerning L^p inequalities with different but related weights for commutators of singular and strongly singular integrals. These commutators turn out to be controlled by commutators of fractional order of the Hardy-Littlewood maximal operator. The boundedness properties are obtained by extrapolation from infinity. These notes are based mainly on [G-H-S-T].
8

Musielak-Mersak, Céline. "Vieillissement cognitif, apprentissage fonctionnel et extrapolation". Reims, 2005. http://theses.univ-reims.fr/exl-doc/GED00000224.pdf.

Abstract:
L'objectif de ce travail est d'étudier l'effet du vieillissement sur les processus cognitifs impliqués dans l'abstraction et l'adaptation aux relations complexes de l'environnement. Un total de 208 personnes (âgées de 18-25, 40-50, 65-75 et 76-90 ans) ont participé à cette étude. Dans l'expérience 1, l'apprentissage de fonctions curvilinéaires (en forme de U et de U-inversé) est comparé à celui de fonctions linéaires (directe et inverse). Un test d'extrapolation permet d'évaluer la qualité de l'abstraction selon la fonction apprise. Les résultats montrent que les personnes âgées conservent leurs capacités d'extrapolation, tout particulièrement lorsque la fonction est directe. Les différences liées à l'âge ne sont que quantitatives. Ces performances peuvent s'interpréter dans le cadre théorique des fonctions exécutives. L'expérience 2 permet d'observer l'impact de l'âge sur le passage d'une stratégie d'apprentissage fonctionnelle à une stratégie associative. Les résultats témoignent des difficultés éprouvées par les personnes âgées lorsque aucune fonction ne peut être exploitée pour relier les variables. La flexibilité et la capacité de mémoire de travail des personnes âgées seraient insuffisantes pour leur permettre un passage réussi entre les deux stratégies. Un projet est présenté. Il permettrait d'étudier l'effet de l'âge sur l'apprentissage de relations probabilistes en présence d'indices non pertinents
The aim of the present study is to examine the effect of aging on abstraction and adjustment to the complex relationships of the environement. A total of 208 individuals (aged 18-25, 40-50, 65-75, 76-90 years old) participated in this study. In experiment 1, the leaming of curvilinear functions (U-shaped and Inverse U-shaped functions) is compared with the learning of linear functions (direct and inverse functions). An extrapolation test is conducted to examine abstraction. Results show that extrapolation capacities are preserved in the elderly, especially when the relation between cue and criterion is a direct one. Age related differences are only quantitative. The results can be interpreted within the theoretical framework of executive functions. Experiment 2 is aimed at examining the effects of aging on shirting from a functional strategy of learning to an associative strategy of learning. Resuls show difficulties in older people when no function can be used to associate variables. It seems that, in the elderly, the lack of flexibility and the reduction of working memory capacity prevent the shirting between the two strategies. A project is presented. It is aimed at examining the effect of aging on multiple-cue probability learning tasks with non pertinent eues
9

Beals, Mark J. "Radar target imaging using data extrapolation". 1993. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1200677949.

10

Musielak-Mersak, Céline Chasseigne Gérard. "Vieillissement cognitif, apprentissage fonctionnel et extrapolation". Reims : Éditeur, 2005. http://scdurca.univ-reims.fr/exl-doc/GED00000224.pdf.

11

Jung, M., and U. Rüde. "Implicit extrapolation methods for multilevel finite element computations". Universitätsbibliothek Chemnitz, 1998. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-199800516.

Abstract:
Extrapolation methods for the solution of partial differential equations are commonly based on the existence of error expansions for the approximate solution. Implicit extrapolation, in contrast, is based on applying extrapolation indirectly, by using it on quantities like the residual. In the context of multigrid methods, a special technique of this type is known as τ-extrapolation. For finite element systems this algorithm can be shown to be equivalent to higher order finite elements. The analysis is local and does not use global expansions, so that the implicit extrapolation technique may be used on unstructured meshes and in cases where the solution fails to be globally smooth. Furthermore, the natural multilevel structure can be used to construct efficient multigrid and multilevel preconditioning techniques. The effectiveness of the method is demonstrated for heat conduction problems and problems from elasticity theory.
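For contrast with the implicit approach described above, classical explicit extrapolation from an error expansion fits in a few lines. This sketch (an illustration, not code from the paper) applies Richardson extrapolation to a central difference whose error expands in even powers of h, so the combination (4*D(h/2) - D(h))/3 cancels the h^2 term.

import numpy as np

def central_diff(f, x, h):
    # Second-order approximation: error expansion c2*h**2 + c4*h**4 + ...
    return (f(x + h) - f(x - h)) / (2 * h)

f, x, h = np.sin, 1.0, 0.1
d1 = central_diff(f, x, h)
d2 = central_diff(f, x, h / 2)
extrapolated = (4 * d2 - d1) / 3              # cancels the h**2 term: O(h**4) error
print(abs(d1 - np.cos(x)), abs(extrapolated - np.cos(x)))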
12

Pyke, Aryn Alexandra. "Extrapolation of wideband speech from the telephone band". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/MQ29415.pdf.

13

Best, Lisa A. "Graphical perception of nonlinear trends: discrimination and extrapolation". Fogler Library, University of Maine, 2001. http://www.library.umaine.edu/theses/pdf/BestLA2001.pdf.

14

Hunt, John Jung C. G. "Jung and his archetypes: an extrapolation on polarity". [Richmond, N.S.W.]: University of Western Sydney, Hawkesbury, Faculty of Social Inquiry, 1999. http://library.uws.edu.au/adt-NUWS/public/adt-NUWS20030519.100731/index.html.

15

Yano, Marcus Omori. "Extrapolation of autoregressive model for damage progression analysis". Ilha Solteira, 2019. http://hdl.handle.net/11449/182287.

Abstract:
Advisor: Samuel da Silva
The main purpose of this work is to apply extrapolation methods to the coefficients of autoregressive (AR) models, to provide future condition information for structures with a predefined damage mechanism. The AR models are estimated considering one-step-ahead prediction, and are verified and validated on vibration data from a structure in the undamaged condition. The prediction errors are used to extract an indicator that classifies the system condition. A new model is then identified if any variation of the damage indices occurs, and its coefficients are compared to those of the reference model. The extrapolation of the AR coefficients is performed with piecewise cubic splines, which avoid possible instabilities and undesirable changes of the polynomials, obtaining suitable approximations through low-order polynomials. A trend curve for the indicator, capable of predicting future behavior, can be obtained by direct extrapolation of the coefficients. A benchmark three-story building structure, with a bumper and an aluminum column placed at the center of the top floor, is analyzed under different damage scenarios to illustrate the approach. The results indicate the feasibility of estimating the future system state from vibration data in the initial damage conditions.
Master's
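A minimal sketch of the core idea of this thesis, coefficient-wise extrapolation of identified AR models with piecewise cubic splines, assuming a short sequence of AR coefficient vectors at successive damage states (the numbers and names below are invented for illustration; the thesis's exact spline variant may differ):

import numpy as np
from scipy.interpolate import CubicSpline

# Rows: AR coefficient vectors identified at successive, equally spaced
# condition states (illustrative values, not data from the thesis).
coeffs = np.array([
    [1.60, -0.80, 0.10],
    [1.55, -0.78, 0.12],
    [1.48, -0.74, 0.15],
    [1.40, -0.69, 0.19],
])
states = np.arange(len(coeffs))

# One cubic spline per coefficient, evaluated past the observed range.
future = np.array([4.0, 5.0])
predicted = np.column_stack(
    [CubicSpline(states, coeffs[:, j])(future) for j in range(coeffs.shape[1])]
)
print(predicted)    # extrapolated AR coefficient vectors at states 4 and 5

A shape-preserving alternative in the same spirit is scipy.interpolate.PchipInterpolator, which further suppresses spurious oscillations between the fitted points.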
16

Vekemans, Denis. "Algorithmes pour méthodes de prédiction". Lille 1, 1995. http://www.theses.fr/1995LIL10176.

Abstract:
Assuming that some terms of a sequence are known, we define a prediction method as a procedure capable of providing an approximation of the following terms. We use extrapolation methods, which generally serve to accelerate the convergence of sequences, to construct prediction methods. Prediction methods often require the solution of a linear (or nonlinear) system; in this work, thanks to the algorithms associated with extrapolation methods, we avoid this. Moreover, we can give consistency results for these methods.
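To make the idea concrete, here is a small sketch (not an algorithm from the thesis) of prediction via an extrapolation kernel: if a sequence behaves like s_n = s + a*r**n, the kernel underlying Aitken's delta-squared process, then its successive differences are geometric and the next term can be predicted without solving any linear system.

def predict_next(s):
    # Kernel of Aitken's process: s_n = s + a*r**n, so differences are geometric.
    d0, d1 = s[-2] - s[-3], s[-1] - s[-2]
    r = d1 / d0
    return s[-1] + d1 * r

# Sequence converging geometrically to 2 (the limit is never used by the predictor).
s = [2 - 0.5 ** n for n in range(5)]          # 1.0, 1.5, 1.75, 1.875, 1.9375
print(predict_next(s[:4]), s[4])              # prediction matches the true next term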
17

Herschke, Philippe M. "Modeling and extrapolation of path delays in GPS signals". Zurich : ETH, Swiss Federal Institute of Technology, Department of Physics, 2002. http://e-collection.ethbib.ethz.ch/show?type=dipl&nr=90.

18

Duminil, Sébastien. "Extrapolation vectorielle et applications aux équations aux dérivées partielles". PhD thesis, Université du Littoral Côte d'Opale, 2012. http://tel.archives-ouvertes.fr/tel-00790115.

Abstract:
In this thesis, we study polynomial extrapolation methods and their application to the acceleration of fixed-point methods for given problems. The advantage of these extrapolation methods is that they use only a sequence of vectors, which need not converge, or which converges very slowly, to build a new sequence that may converge quadratically. The development of cyclic methods also makes it possible to limit computation and storage costs. We apply these methods to the solution of the stationary incompressible Navier-Stokes equations, to the Kohn-Sham formulation of the Schrödinger equation, and to the solution of elliptic equations using multigrid methods. In all cases, the efficiency of the extrapolation methods is demonstrated. We show that, when applied to the solution of linear systems, extrapolation methods are comparable to Krylov subspace methods. In particular, we show the equivalence between the MMPE method and CMRH. Finally, we study the parallelization of the CMRH method on distributed-memory processors and the search for efficient preconditioners for this method.
19

Haigh, Martin David. "Beam extrapolation and photosensor testing for the T2K experiment". Thesis, University of Warwick, 2010. http://wrap.warwick.ac.uk/3908/.

Abstract:
Our understanding of the physics of neutrino oscillations has evolved rapidly over the past decade or so, with results from the SNO, Super-K, MINOS and CHOOZ experiments, among others, favouring a three-neutrino mixing model and significantly constraining the parameter space for this mixing. There are still several important questions to be answered, however: we do not know whether θ13 is non-zero, or whether sin²2θ23 is maximal; also, we do not know the sign of the large mass splitting Δm², or whether CP violation occurs in the lepton sector. The latter is possibly the most exciting of all: leptonic CP violation is a requirement for leptogenesis, and could therefore indicate a solution to the matter-antimatter asymmetry problem in cosmology. The T2K long-baseline neutrino experiment is one of a new generation of neutrino projects, which will make more precise measurements of θ13 and θ23 than have been achieved by previous experiments. It uses the Super-K water Čerenkov detector at Kamioka as a far detector, and also has a suite of new near detectors. These are largely scintillator-based, but use a novel photosensor, the silicon photomultiplier (SiPM), for light readout. T2K has been leading the effort to understand and model these new sensors, and the present work describes the current state of the art in device characterisation, as well as the effort to ensure the quality of the devices installed in the calorimeter of the ND280 near detector. An important part of a long-baseline analysis is the extrapolation of the neutrino flux measured at the near detector to predict that at the far detector. Methods to do this have been developed by previous experiments; however, T2K uses an innovative configuration in which the main detectors are displaced from the neutrino beam centre, removing much of the high-energy tail in the neutrino flux to reduce background from non-quasielastic events. This thesis evaluates the effectiveness, for the T2K configuration, of two extrapolation techniques used by previous experiments.
20

Rokkanen, Miikka. "Extrapolation and bandwidth choice in the regression discontinuity design". Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/90131.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 150-160).
This thesis consists of three methodological contributions to the literature on the regression discontinuity (RD) design. The first two chapters develop approaches to the extrapolation of treatment effects away from the cutoff in RD and use them to study the achievement effects of attending selective public schools, known as exam schools, in Boston. The third chapter develops an adaptive bandwidth choice algorithm for local polynomial regression-based RD estimators. The first chapter develops a latent factor-based approach to RD extrapolation that is then used to estimate effects of exam school attendance for infra-marginal 7th grade applicants. Achievement gains from Boston exam schools are larger for applicants with lower English and Math abilities. I also use the model to predict the effects of introducing either minority or socioeconomic preferences in exam school admissions. Affirmative action has modest average effects on achievement, while increasing the achievement of the applicants who gain access to exam schools as a result. The second chapter, written jointly with Joshua Angrist, develops a covariate-based approach to RD extrapolation that is then used to estimate effects of exam school attendance for infra-marginal 9th grade applicants. The estimates suggest that the causal effects of exam school attendance for applicants with running variable values well away from admissions cutoffs differ little from those for applicants with values that put them on the margin of acceptance. The third chapter develops an adaptive bandwidth choice algorithm for local polynomial regression-based RD estimators. The algorithm allows for different choices for the order of polynomial and kernel function. In addition, the algorithm automatically takes into account the inclusion of additional covariates as well as alternative assumptions on the variance-covariance structure of the error terms. I show that the algorithm produces a consistent estimator of the asymptotically optimal bandwidth and that the resulting regression discontinuity estimator satisfies the asymptotic optimality criterion of Li (1987). Finally, I provide Monte Carlo evidence suggesting that the proposed algorithm also performs well in finite samples.
by Miikka Rokkanen.
Ph. D.
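As background for the bandwidth discussion above, a generic local-linear RD estimator at the cutoff can be written as follows. This is a textbook construction, not the adaptive bandwidth algorithm of the thesis; the triangular kernel, the simulated data and all names are assumptions for the sketch.

import numpy as np

def rd_local_linear(x, y, cutoff, h):
    # Sharp RD effect: difference of local-linear intercepts at the cutoff,
    # fitted separately on each side with a triangular kernel of bandwidth h.
    def side_fit(mask):
        xs, ys = x[mask] - cutoff, y[mask]
        w = np.maximum(1 - np.abs(xs) / h, 0.0)           # triangular kernel weights
        X = np.column_stack([np.ones_like(xs), xs])       # intercept + slope
        beta = np.linalg.solve((X * w[:, None]).T @ X, (X * w[:, None]).T @ ys)
        return beta[0]                                    # fitted value at the cutoff
    return (side_fit((x >= cutoff) & (x < cutoff + h))
            - side_fit((x < cutoff) & (x > cutoff - h)))

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 2000)                              # running variable
y = 0.5 * x + 1.0 * (x >= 0) + rng.normal(0.0, 0.2, x.size)   # true jump: 1.0
print(rd_local_linear(x, y, cutoff=0.0, h=0.3))           # close to 1.0

The bandwidth h trades bias against variance, which is exactly the choice the thesis's adaptive algorithm automates.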
21

Connell, Matthew. "Bayesian Model Mixing for Extrapolation from an EFT Toy". Ohio University Honors Tutorial College / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ouhonors1619122381888487.

22

Amor, Zaineb. "Bone segmentation and extrapolation in Cone-Beam Computed Tomography". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279802.

Abstract:
This work was done within the French R&D center of GE Medical Systems and focused on two main tasks: skull bone segmentation on 3D Cone-Beam Computed Tomography (CBCT) data and skull volumetric shape extrapolation on 3D CBCT data, both using deep learning approaches. The motivation behind the first task is that it would allow interventional radiologists to visualize the vessels directly, without adding workflow to their procedures or exposing patients to extra radiation. The motivation behind the second task is that it would help understand, and eventually correct, some artifacts related to partial volumes. The skull segmentation labels were prepared while taking into account imaging-modality-related and anatomy-related considerations. The architecture for the segmentation task was selected after experimenting with three different networks, and the hyperparameters were also optimized. The second task explored the feasibility of extrapolating the volumetric shape of the skull outside the field of view with limited data. At first, a simple convolutional autoencoder architecture was explored; then adversarial training was added, which did not improve performance considerably.
23

Moudiki, Thierry. "Interest rates modeling for insurance : interpolation, extrapolation, and forecasting". Thesis, Lyon, 2018. http://www.theses.fr/2018LYSE1110/document.

Abstract:
L'ORSA Own Risk Solvency and Assessment est un ensemble de règles définies par la directive européenne Solvabilité II. Il est destiné à servir d'outil d'aide à la décision et d'analyse stratégique des risques. Dans le contexte de l'ORSA, les compagnies d'assurance doivent évaluer leur solvabilité future, de façon continue et prospective. Pour ce faire, ces dernières doivent notamment obtenir des projections de leur bilan (actif et passif) sur un certain horizon temporel. Dans ce travail de thèse, nous nous focalisons essentiellement sur l'aspect de prédiction des valeurs futures des actifs. Plus précisément, nous traitons de la courbe de taux, de sa construction et de son extrapolation à une date donnée, et de ses prédictions envisagées dans le futur. Nous parlons dans le texte de "courbe de taux", mais il s'agit en fait de construction de courbes de facteurs d'actualisation. Le risque de défaut de contrepartie n'est pas explicitement traité, mais des techniques similaires à celles développées peuvent être adaptées à la construction de courbe de taux incorporant le risque de défaut de contrepartie
The Own Risk Solvency and Assessment (ORSA) is a set of processes defined by the European prudential directive Solvency II, that serve for decision-making and strategic analysis. In the context of ORSA, insurance companies are required to assess their solvency needs in a continuous and prospective way. For this purpose, they notably need to forecast their balance sheet -asset and liabilities- over a defined horizon. In this work, we specifically focus on the asset forecasting part. This thesis is about the Yield Curve, Forecasting, and Forecasting the Yield Curve. We present a few novel techniques for the construction, the extrapolation of static curves (that is, curves which are constructed at a fixed date), and for forecasting the spot interest rates over time. Throughout the text, when we say "Yield Curve", we actually mean "Discount curve". That is: we ignore the counterparty credit risk, and consider that the curves are risk-free. Though, the same techniques could be applied to construct/forecast the actual risk-free curves and credit spread curves, and combine both to obtain pseudo- discount curves incorporating the counterparty credit risk
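As an illustration of static curve construction and long-end extrapolation in general, the sketch below fits the classical Nelson-Siegel family to a few observed zero rates and evaluates it beyond the last maturity. It is a generic example, not one of the techniques introduced in the thesis, and the sample rates are invented.

import numpy as np
from scipy.optimize import curve_fit

def nelson_siegel(t, b0, b1, b2, tau):
    # Level + slope + curvature parametrization of the zero-rate curve.
    x = t / tau
    return b0 + b1 * (1 - np.exp(-x)) / x + b2 * ((1 - np.exp(-x)) / x - np.exp(-x))

t_obs = np.array([1.0, 2.0, 5.0, 10.0, 20.0])             # maturities in years
r_obs = np.array([0.010, 0.013, 0.019, 0.024, 0.027])     # invented zero rates

params, _ = curve_fit(nelson_siegel, t_obs, r_obs, p0=[0.03, -0.02, 0.01, 2.0])
print(nelson_siegel(np.array([30.0, 50.0]), *params))     # extrapolated long-end rates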
24

Battaglini, Luca. "The extrapolation of occluded motion: basic mechanism and application". Doctoral thesis, Università degli studi di Padova, 2015. http://hdl.handle.net/11577/3424020.

Abstract:
Predicting the future states of moving objects that are hidden by an occluder for a brief period is of paramount importance to our ability to interact with a dynamic environment. This phenomenon is known as motion extrapolation (ME). The literature leaves numerous gaps regarding the mechanisms involved in ME, which the current thesis attempts to address. Behavioural experiments usually utilize a prediction-of-motion paradigm, which requires participants to make a direct estimation of the time-to-contact (TTC). In this task, the initial trajectory of a target stimulus is presented and then becomes occluded; observers are asked to respond when they believe the target has reached a marked point behind the occluder, without it ever actually reappearing (Tresilian, 1999; Rosenbaum, 1972). Alternatively, other experiments have adopted a timing discrimination task in which participants are required to indicate whether a moving target, following occlusion, reappears 'early' or 'late' (Makin, Poliakoff & El-Deredy, 2009; Makin, Poliakoff, Ackerley & El-Deredy, 2012). Experiments: In the first part of this thesis, I investigated whether the visual memory system is active during the extrapolation of occluded motion and whether it reflects speed misperception due to well-known illusions such as the apparently slower speed of low-contrast or large objects (Thompson, 1982; Epstein, 1978). Results revealed that, in a TTC task, observers estimate a longer time to contact for low-contrast and large stimuli compared with high-contrast and small stimuli respectively, even though the stimuli in both conditions move at equal speed. Therefore, the illusion of apparently slower speed for low-contrast and large stimuli persists in the visual memory system and influences motion extrapolation. Chapter III investigates the interaction between real motion and motion extrapolation. Gilden and colleagues (1995) showed that motion adaptation affects TTC judgments, indicating that real motion detectors are somehow also involved during ME. I went a step further by investigating the effect of brief motion priming and adaptation, occurring at the earliest levels of the cortical visual streams, on TTC estimation for a target passing behind an occluder. By using different exposure times of directional motion presented in the occluder area prior to the target's disappearance behind it, my aim was to modulate (prime or adapt) the extrapolated motion of the invisible target, thus producing different TTC estimates. Results showed that longer (yet sub-second) exposures to motion in the same direction as the target produced late TTC estimates, whereas shorter exposures produced shorter TTC estimates, indicating that rapid forms of motion adaptation and motion priming affect extrapolated motion. My findings suggest that motion extrapolation might occur at the earliest levels of cortical motion processing, where these rapid mechanisms of priming and adaptation take place. In Chapter IV of my thesis, I explore not only the visual factors of motion extrapolation, but also the timing mechanisms involved and their electrophysiological correlates. The first question is whether temporal processing is required for accurate ME, and whether this is indexed by neural activity of the Contingent Negative Variation (CNV). A second question is whether there are specific electrophysiological correlates marking the shift from real motion perception to motion extrapolation.
In this electroencephalographic experiment, participants were adapted with a moving texture (Gilden et al., 1995); this adaptation could bias and modify temporal processing. Participants made a direct estimation of time to contact, which showed that classic adaptation could bias temporal judgments and modulate the amplitude of the CNV, suggesting a complex feedforward-feedback network between low- and high-level cortical mechanisms. Finally, a negative deflection (N190) was found, for the first time, as a neurophysiological correlate at the temporal-occipital electrodes in the right and left hemispheres for rightwards and leftwards ME respectively, indicating the involvement of intermediate-level cortical motion mechanisms in ME. Chapter V aims to distinguish between extrapolation and interpolation of occluded motion. Extrapolation is the ability to extract the trajectory, speed and direction of a moving target that becomes hidden by an occluder, thanks to the information extracted from the visible trajectory. Interpolation is a similar phenomenon, i.e. from the visible trajectory one can extract speed and direction, as in extrapolation. The main difference is that interpolation requires visible cues along the invisible trajectory. If the occluder is invisible and the occluded trajectory is symmetrical with respect to a visible cue, one can connect these cues (spatial points) to form a spatio-temporal map and infer where and when the target will reappear. This is not possible in the absence of visible cues, as in the extrapolation condition. In a new task, observers were required to press a button as fast as possible (reaction time) when they saw a moving target reappearing from behind an invisible occluder. Results showed that observers could even anticipate the reappearance of an object moving behind the occluder, but only in some circumstances: i) when the occluder was not positioned over the blind spot but over retinal areas that project to the visual cortex; ii) when, with an entirely invisible occluder, the visible motion before occlusion was presented; and iii) when visual-spatial cues signalled the center of the invisible trajectory. When these conditions are met, observers can use the spatial information given by the point of disappearance and the visible cue marking the center of the invisible trajectory, and then infer the point of reappearance by symmetry. Therefore, having a set of discrete spatial positions (and their cortical representation) at which the occluded moving target will be at certain moments in time, it is convenient to interpolate these points into a spatio-temporal (saliency) map to infer where and when the object will reappear. I consider this process of motion interpolation to be an amodal filling-in process. The last part of my thesis involves a practical application of ME. Participants cannot interpolate when the moving target passes over retinal areas that do not project to the visual cortex (the blind spot); in this case, observers produce a true reaction time and do not anticipate the response. Patients with macular degeneration cannot see with their fovea, since it is damaged, so that part of the retina no longer projects to the visual cortex. In a task in which they have to press a response button when a moving target disappears into or reappears from their scotoma, we predict that they cannot anticipate the response to the reappearance of the target.
Five patients with macular degeneration were therefore instructed to press a button when they saw a moving target disappear into, and reappear from, their scotoma. Patients repeated this task several times with different linear trajectories of the target. By connecting the points in space at which a patient pressed the button, it was possible to draw the shape and size of the scotoma with a software tool. The size of the scotoma found in this experiment was compared with that measured with a Nidek MP-1. A linear correlation with R² of about 0.8 was found between the Nidek MP-1 measurement and the scotoma obtained by connecting the points at which patients reported seeing the target reappear from their scotoma. Therefore this software, which I wrote (within its limits), may become a useful tool to obtain a reliable perimetry in situations where an expensive machine such as the MP-1 is not available.
25

Boujjat, Houssame. "Modélisation, optimisation et extrapolation d’un réacteur solaire de gazéification de biomasse". Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALI049.

Abstract:
The present thesis studies a novel spouted-bed solar reactor for biomass thermochemical gasification, from laboratory to industrial scale, by combining numerical simulations and lab-scale experiments. The main objective is to provide new insights into the reactor operation in order to improve its performance, flexibility and industrial integration. A multiphysics numerical model of the reactor was developed using the Fluent© software to simulate the solar steam gasification of wood particles. The model takes into account the two-phase solid/gas flow using the DPM (Discrete Phase Modelling) approach in interaction with radiation and chemistry. An experimental validation step at 1200°C showed cold gas efficiencies higher than 1, thanks to the efficient valorization of solar energy, and a carbon conversion efficiency approaching 80%. The simulations provided key information on the solar conversion of the particles within the solar cavity and allowed paths for improving the conversion to be identified. The use of inert bed materials as a heat transfer medium inside the cavity appeared judicious. This solution was examined both numerically, using a granular Eulerian approach, and experimentally at 1200°C and 1300°C. A maximum relative improvement of the carbon conversion efficiency of 8% was achieved in this way. The variability of solar energy is one of the critical obstacles hindering the scale-up of the technology. In order to ensure continuous syngas production whatever the solar resource, the solar reactor was hybridized through partial feedstock oxy-combustion. The study showed that the injection of a controlled amount of O2 is a relevant solution to overcome solar energy variability and to control the reactor temperature. A dynamic 0D model was then developed to predict the evolution of temperature and syngas production at MWth scale according to two heating modes: solar-only and hybrid solar-combustion. Annual simulations were subsequently performed to predict reactor performance, reactant consumption and gas production volumes. These data were used to analyze the technical and economic feasibility of the process for the industrial production of hydrogen.
26

Hu, Ying. "Théorèmes ergodiques et théorèmes d'extrapolation non-commutatifs". Besançon, 2007. http://www.theses.fr/2007BESA2011.

Abstract:
This thesis is at the crossing of the theory of noncommutative spaces, ergodic theory and extrapolation theory. It consists of two chapters. The first is devoted to maximal ergodic theorems for actions of a certain group in the noncommutative case. In the second, we investigate noncommutative extrapolation theorems and their applications. In the first chapter, we prove maximal ergodic inequalities for a sequence of operators and for their averages in the noncommutative space. We also obtain the corresponding individual ergodic theorems. As an example, we get noncommutative analogues of the theorems of Nevo-Stein. The results of Chapter 1, in particular the form of the inequalities obtained, lead us to investigate noncommutative statements of Yano's classical extrapolation theorem. The proofs of these noncommutative versions are the main results of Chapter 2, which moreover offers many applications. Firstly, we deduce some results about the noncommutative Rota theorem. Then, we combine them with our work of Chapter 1 in order to extend it to the case. Thirdly, we obtain some information on a group von Neumann algebra. Lastly, we prove noncommutative analogues of the Burkholder-Chow theorem.
27

Van, Hoof Bram. "Property indices : Extrapolation of the IPD Japan Capital Growth Index". Thesis, KTH, Bygg- och fastighetsekonomi, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-77013.

Abstract:
The aim of this work is to extrapolate the IPD Japan Capital Growth index series historically, back to the early 1980s. Using existing, long-running, macro-economic and property-related time series as inputs, we try to set up a statistical model that can extrapolate the existing eight-year track record back for as many years as is statistically significant. Our aim is a model that produces a historical real estate capital growth series going back 15 to 20 years.
28

Zankowski, Corey E. "Calibration of photon and electron beams with an extrapolation chamber". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/NQ44642.pdf.

29

Eliasson, Salomon. "An Extrapolation Technique of Cloud Characteristics Using Tropical Cloud Regimes". Thesis, Uppsala universitet, Luft-, vatten och landskapslära, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-303881.

Abstract:
This thesis tests a technique based on objectively identified tropical cloud regimes, in which some cloud characteristics are extrapolated from a single site in the tropics to the entire tropics. Information on cloud top pressure, cloud optical thickness and total cloud cover from 1985-2000 was derived from the ISCCP D1 data set and used to create maps of tropical cloud regimes and maps of total cloud cover over the tropics. The distribution and characteristics of the tropical cloud regimes are discussed, after which total cloud cover values are extrapolated to the cloud regimes over the tropics. A qualitative and quantitative assessment of the extrapolation method found that it worked especially well for time-averaged extrapolated data sets using the median values of total cloud cover.
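The extrapolation step itself is easy to state: learn a per-regime statistic (here the median total cloud cover) at the single site, then assign that value to every tropical grid cell carrying the same regime label. A sketch under assumed array names and random stand-in data (not the thesis code):

import numpy as np

n_regimes = 6
rng = np.random.default_rng(2)
# Regime label per (time, lat, lon) grid cell, and total cloud cover (TCC)
# observed only at one site: random stand-ins for the ISCCP-derived fields.
regime_map = rng.integers(0, n_regimes, size=(100, 30, 72))
site_regime = regime_map[:, 15, 36]               # regime time series at the site
site_tcc = rng.uniform(0.0, 1.0, 100)             # TCC time series at the site

# Median TCC per regime, learned at the site ...
medians = np.array([np.median(site_tcc[site_regime == r]) for r in range(n_regimes)])
# ... then extrapolated to every grid cell through its regime label.
tcc_extrapolated = medians[regime_map]
print(tcc_extrapolated.shape)                     # (100, 30, 72)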
30

Bouillon-Camara, Anne-Laure. "Extrapolation du procédé de granulation humide en mélangeur haute vitesse". Vandoeuvre-les-Nancy, INPL, 2005. http://docnum.univ-lorraine.fr/public/INPL/2005_BOUILLON_CAMARA_A_L.pdf.

Abstract:
Controlling wet granulation process scale-up is still a challenge today, due to the lack of tools that can describe accurately the mechanisms of granule growth in high shear mixers. The well-known approach of characterizing a liquid-phase mixing process with non-dimensional numbers has therefore been applied to the granular medium in the high shear mixer, to establish a correlation between MiPro Pro-C-epT mixers of three different scales. In that frame, a method to measure the consistency of the wet powder was developed. In addition, the influence of key operating variables on the granule size distributions was studied using a D-optimal experimental design. The results allowed the conclusion that phase transitions during granulation appear at constant liquid/powder ratio whatever the scale of the mixer. Finally, the granule growth kinetics were determined in the small-scale mixer, which allowed the granule growth mechanism in the small-scale mixer to be characterized as mainly driven by a fragmentation-layering process.
31

Nava, Jaime. "Towards more reliable extrapolation algorithms with applications to organic chemistry". To access this resource online via ProQuest Dissertations and Theses @ UTEP, 2009. http://0-proquest.umi.com.lib.utep.edu/login?COPT=REJTPTU0YmImSU5UPTAmVkVSPTI=&clientId=2515.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
32

Shan, Guojian. "Imaging of steep reflectors in anisotropic media by wavefield extrapolation /". May be available electronically:, 2008. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
33

Behera, Abhinna. "Vers l’Extrapolation à l’échelle continentale de l’impact des overshoots sur le bilan de l’eau stratosphérique". Thesis, Reims, 2018. http://www.theses.fr/2018REIMS011/document.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
Resumo:
This dissertation lays the groundwork for upscaling the impact of stratospheric overshooting convection (SOC) on the water vapor budget in the tropical tropopause layer (TTL) and the lower stratosphere to a continental scale. To do so, we take advantage of measurements from the TRO-Pico field campaign held at Bauru, Brazil, during two wet/convective seasons in 2012 and 2013, and perform several numerical simulations of the TTL over a domain encompassing a large part of South America with the BRAMS mesoscale model. First, we simulate a full wet season without taking SOC into account. This simulation is then evaluated against typical key features of the TTL (temperature, water vapor, cloud tops and gravity waves). In the absence of SOC, and before upscaling its impact, we show that the model reproduces the main features of the TTL reasonably well. The importance of large-scale slow ascent relative to finite-scale deep convective processes is then discussed. Second, from fine-scale BRAMS simulations of SOC cases observed during TRO-Pico, we derive the physical quantities (mass fluxes, ice mass budget, SOC cell size) that will be used to nudge the SOC impact in large-scale simulations. A typical maximum impact of about 2 kt of water vapor and 6 kt of ice per SOC cell is computed; these figures are 30% lower for another microphysical setup of the model. We also show that the stratospheric hydration by SOC is mainly due to two types of hydrometeors in the model.
34

Roure, Perdices Eduard. "Restricted Weak Type Extrapolation of Multi-Variable Operators and Related Topics". Doctoral thesis, Universitat de Barcelona, 2019. http://hdl.handle.net/10803/668407.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
Resumo:
A remarkable result in Harmonic Analysis is the so-called Rubio de Francia extrapolation theorem. Roughly speaking, it says that if an operator T is bounded on Lp(v) for some p > 1 and every weight v in Ap, then T is bounded on Lq(w) for every q > 1 and every weight w in Aq. Rubio de Francia's extrapolation theory is very useful in practice, but it has one shortcoming: it does not allow one to produce estimates for the endpoint q = 1. The works of M. J. Carro, L. Grafakos, and J. Soria [9], and of M. J. Carro and J. Soria [14], solve this problem, making it possible to extrapolate down to the endpoint q = 1. In this project, we built upon these works to produce multi-variable extensions of the extrapolation results they present. We have succeeded in this endeavor, and we now possess extrapolation schemes in the setting of weighted Lorentz spaces that are of great use when bounding multi-variable operators for which no sparse domination is known, and also when working with Lorentz spaces outside the Banach range. As particular cases, we have studied product-type operators, two-variable commutators, averaging operators, and bi-linear multipliers. Sawyer-type inequalities play a fundamental role in the proof of our multi-variable extrapolation schemes and are essential to complete the characterization of the weighted restricted weak type bounds for the pointwise product of Hardy-Littlewood maximal operators. In this work, we have extended the classical weak (1, 1) Sawyer-type inequalities proved in [27] to the general restricted weak type case, even in the multi-variable setting. In 2017, at the University of Alabama, we started a collaboration with David V. Cruz-Uribe to produce restricted weak type bounds for fractional operators, Calderón-Zygmund operators, and their commutators. We obtained satisfactory results on this matter, including two-weight norm inequalities, by applying a wide variety of techniques from sparse domination, function spaces, and weighted theory.
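For reference, the extrapolation theorem sketched above is commonly stated as follows (a standard formulation, not quoted from the thesis):

\[
\|Tf\|_{L^{p}(v)} \le C_{v}\,\|f\|_{L^{p}(v)} \ \text{for some } p>1 \text{ and all } v\in A_{p}
\;\Longrightarrow\;
\|Tf\|_{L^{q}(w)} \le C_{w}\,\|f\|_{L^{q}(w)} \ \text{for all } q>1 \text{ and all } w\in A_{q}.
\]

The failure of this implication at q = 1 is precisely the gap that the endpoint extrapolation schemes of Carro, Grafakos and Soria, and the multi-variable extensions developed in this thesis, are designed to fill.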
35

Ariyadasa, A. "Filtering and extrapolation techniques in numerical solution of ordinary differential equations". Thesis, University of Ottawa (Canada), 1985. http://hdl.handle.net/10393/4595.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
36

Budde, Christian [Verfasser]. "General Extrapolation Spaces and Perturbations of Bi-Continuous Semigroups / Christian Budde". Wuppertal : Universitätsbibliothek Wuppertal, 2019. http://d-nb.info/1202951007/34.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
37

Gateau, Valérie. "Extrapolation des durées de validité des principes actifs en phase solide". Paris 5, 1990. http://www.theses.fr/1990PA05P185.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
38

Phillips, Tyrone. "Extrapolation-based Discretization Error and Uncertainty Estimation in Computational Fluid Dynamics". Thesis, Virginia Tech, 2012. http://hdl.handle.net/10919/31504.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
Resumo:
The solution of partial differential equations generally requires approximations that introduce numerical error into the final solution. Of the different types of numerical error in a solution, discretization error is the largest and the most difficult to estimate. In addition, the accuracy of discretization error estimates relies on the solution (or the multiple solutions used in the estimate) being in the asymptotic range. The asymptotic range describes the convergence of a solution: an asymptotic solution approaches the exact solution at a rate proportional to the mesh spacing raised to an exponent equal to the formal order of accuracy. A non-asymptotic solution can exhibit unpredictable convergence rates, introducing uncertainty into discretization error estimates. To account for this additional uncertainty, various discretization uncertainty estimators have been developed. The goal of this work is to evaluate discretization error and discretization uncertainty estimators based on Richardson extrapolation for computational fluid dynamics problems. In order to evaluate the estimators, the exact solution must be known. A select set of solutions to the 2D Euler equations with known exact solutions is used to evaluate the estimators. Since exact solutions are only available for trivial cases, two applications governed by the Navier-Stokes equations are also used: a laminar flat plate and a turbulent flat plate using the k-ω SST turbulence model. Since the exact solutions to the Navier-Stokes equations for these cases are unknown, numerical benchmarks are created from solutions on significantly finer meshes than those used to estimate the discretization error and uncertainty. Metrics are developed to evaluate the accuracy of the error and uncertainty estimates and to study the behavior of each estimator when the solutions are in, near, and far from the asymptotic range. Based on the results, general recommendations are made for the implementation of the error and uncertainty estimators. In addition, a new uncertainty estimator is proposed with the goal of combining the favorable attributes of the estimators evaluated. The new estimator is evaluated using numerical solutions that were not used for its development and shows improved accuracy over the evaluated estimators.
Master of Science
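To make the extrapolation machinery in this abstract concrete: given solutions f_h and f_rh computed on two systematically refined meshes (refinement factor r, formal order of accuracy p), Richardson extrapolation yields the usual discretization error estimate, and a third mesh gives the observed order of accuracy (standard formulas, not quoted from the thesis):

\[
\bar{f} \approx f_{h} + \frac{f_{h} - f_{rh}}{r^{p} - 1},
\qquad
\hat{p} = \frac{\ln\!\bigl( (f_{r^{2}h} - f_{rh}) / (f_{rh} - f_{h}) \bigr)}{\ln r}.
\]

An observed order close to the formal order signals that the solutions are in the asymptotic range; deviations from it are what motivate the uncertainty estimators evaluated in this work.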
39

More, Sushant N. "Improving Predictions with Reliable Extrapolation Schemes and Better Understanding of Factorization". The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1468246999.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
40

Thomas, Michael Patrick. "Long term extrapolation and hedging of the South African yield curve". Diss., Pretoria : [s.n.], 2009. http://upetd.up.ac.za/thesis/available/etd-06172009-085254.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
41

Rajamani, Kumar T. "Three dimensional surface extrapolation from sparse data using deformable bone models /". Bern : [s.n.], 2006. http://opac.nebis.ch/cgi-bin/showAbstract.pl?sys=000279098.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
42

Karapiperi, Anna. "Extrapolation methods and their applications in numerical analysis and applied mathematics". Doctoral thesis, Università degli studi di Padova, 2016. http://hdl.handle.net/11577/3424504.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
Resumo:
This Ph.D. thesis discusses some applications of extrapolation methods. In numerical analysis and applied mathematics one often has to deal with sequences that converge slowly to their limit. Extrapolation methods can be used to accelerate the convergence of a slowly converging sequence or even to sum divergent series. The first two chapters of this thesis are devoted to scalar sequence transformations. We revisit Aitken's Δ² process and propose three new transformations that generalize it. The convergence and acceleration properties of one of our transformations are discussed theoretically and verified experimentally on diverging and converging sequences. Shanks' transformation and Wynn's ε-algorithm are studied extensively; we recall the particular rules due to Wynn for treating isolated singularities, i.e. when two consecutive elements are equal or almost equal, and the more general particular rules proposed by Cordellier for treating non-isolated singularities, i.e. when more than two elements are equal. A new implementation of the generalized particular rules is given covering all cases, namely singularities caused by two or more elements that are equal or almost equal. In the remaining part of the thesis we focus on vector extrapolation. First we briefly describe the vector ε-algorithm, the topological ε-algorithm and the simplified topological ε-algorithm, recently introduced by Brezinski and Redivo-Zaglia. We then present, under a unified notation, the Algebraic Reconstruction Techniques (ART), the Simultaneous Iterative Reconstruction Techniques (SIRT), and other iterative regularization methods commonly used for solving linear inverse problems. Finally, we study the gain obtained by applying extrapolation to these methods in imaging problems. In particular, we use the simplified topological ε-algorithm to extrapolate sequences generated by methods such as Landweber's and Cimmino's when solving image reconstruction and restoration problems. The numerical results illustrate the good performance of the accelerated methods compared with their unaccelerated versions and with other methods.
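As a concrete illustration of the kind of scalar transformation this thesis generalizes, here is a minimal sketch of Aitken's Δ² process in Python (illustrative only; the thesis's new transformations are not reproduced here):

```python
def aitken_delta2(s):
    """Apply Aitken's Delta^2 transformation to a sequence s.

    Given s_n, returns t_n = s_n - (s_{n+1} - s_n)^2 /
    (s_{n+2} - 2 s_{n+1} + s_n), which often converges faster
    than s_n for linearly converging sequences.
    """
    t = []
    for n in range(len(s) - 2):
        denom = s[n + 2] - 2.0 * s[n + 1] + s[n]
        if denom == 0.0:  # isolated singularity: skip (cf. Wynn's particular rules)
            continue
        t.append(s[n] - (s[n + 1] - s[n]) ** 2 / denom)
    return t

# Example: partial sums of the slowly converging Leibniz series for pi/4.
partial = []
total = 0.0
for k in range(12):
    total += (-1) ** k / (2 * k + 1)
    partial.append(total)

print(partial[-1] * 4)                 # crude estimate of pi
print(aitken_delta2(partial)[-1] * 4)  # noticeably closer to pi
```

In Wynn's ε-algorithm, the first nontrivial even column reproduces exactly this Δ² transformation, which is why the particular rules for equal or almost equal neighbouring elements matter in practice: the denominator above then vanishes or nearly vanishes.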
43

Radonovich, David Charles. "Methods of Extrapolating Low Cycle Fatigue Data to High Stress Amplitudes". Master's thesis, University of Central Florida, 2007. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3460.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
Resumo:
Modern gas turbine component design devotes much effort to the prediction and avoidance of fatigue. Advances in the prediction of low-cycle fatigue (LCF) cracks, which have the potential to cause component failure, will reduce repair and replacement costs of turbine components. Regression modeling of LCF test data is typically restricted to use over the range of the test data, and it is often difficult to characterize the plastic strain curve-fit constants when the plastic strain is a small fraction of the total strain acquired. This is often the case with high-strength, moderate-ductility Ni-base superalloys. The intent of this project is to identify the optimal technique for extrapolating LCF test results to stress amplitudes approaching the ultimate strength. The proposed method is to find appropriate upper and lower bounds for the cyclic stress-strain and strain-life equations. Techniques investigated include monotonic test data anchor points, strain compatibility, and temperature independence of the Coffin-Manson relation. A Ni-base superalloy (IN738 LC) data set of fully reversed fatigue tests at several elevated temperatures, with minimal plastic strain relative to the total strain range, was used to model several options for representing the upper and lower bounds of material behavior. Several high-strain LCF tests were then performed with stress amplitudes approaching the ultimate strength, and an augmented data set was developed by combining the high-strain data with the original data set. The effectiveness of the bounding equations is judged by comparing their results, obtained from the base data set, with a linear regression model using the augmented data set.
M.S.M.E.
Department of Mechanical, Materials and Aerospace Engineering
Engineering and Computer Science
Mechanical Engineering MSME
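For orientation, the strain-life and cyclic stress-strain relations whose bounds this thesis studies are conventionally written as follows (standard forms; the fitted constants for IN738 LC are not reproduced here):

\[
\frac{\Delta\varepsilon}{2} = \frac{\sigma_{f}'}{E}\,(2N_{f})^{b} + \varepsilon_{f}'\,(2N_{f})^{c},
\qquad
\varepsilon_{a} = \frac{\sigma_{a}}{E} + \left(\frac{\sigma_{a}}{K'}\right)^{1/n'},
\]

where the first (elastic) term of the strain-life equation is the Basquin law, the second (plastic) term is the Coffin-Manson law, and K', n' are the cyclic strength coefficient and exponent. When the plastic strain is a small fraction of the total strain, the plastic-term constants are poorly constrained by regression, which is exactly the difficulty the bounding approach addresses.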
44

Joncour, Frédéric. "Migration profondeur avant sommation en amplitude préservée par extrapolation de forme d'onde". Phd thesis, École Nationale Supérieure des Mines de Paris, 2005. http://pastel.archives-ouvertes.fr/pastel-00001616.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
Resumo:
Migration is a key step in the seismic reflection data processing chain. Coming after the pre-processing and velocity model estimation stages, it can serve as a basis for the litho-seismic characterization of the reservoir. Indeed, when performed before stack, in depth and with preserved amplitudes, it yields the reflectivities of the subsurface as a function of the incidence angle of the seismic wave. A stratigraphic inversion of the elastic parameters of the reservoir then becomes possible, allowing a more detailed seismic characterization of the reservoir. Until now, amplitude-preserving migration has essentially been based on ray-tracing techniques, which unfortunately show real limitations in complex geological media characterized by strong lateral velocity variations. The use of paraxial "one-way" approximations of the wave equation overcomes these limitations since, in the framework of depth migration, they provide accurate and robust solutions over the whole seismic frequency band. Moreover, they naturally take into account the multipathing induced by complex velocity models (in particular in the case of salt structures). Long penalized by their numerical cost in 3D applications, these methods can now be applied to real data; they are known as wave-equation migration. Regarding amplitude preservation, the study of wave-equation migration has not yet led to a formulation as mature as that obtained with ray theory; efforts must address both the numerical propagation of the wavefield and the imaging condition. My thesis work concerns the definition and numerical development of a quantitative 2D wave-equation migration method. First, I studied amplitude preservation in the paraxial "one-way" approximation of the wave equation, building on the work and algorithms developed at the Institut Français du Pétrole. Second, I modified the classical imaging principle so as to build migrated gathers as a function of the reflection angle and to recover information on the angular dependence of the reflectivity or of the impedance perturbation. This should allow a better characterization of the subsurface in complex media where classical analyses (AVO) do not give satisfactory results.
45

Romann, Alexandra. "Evaluating the performance of simulation extrapolation and Bayesian adjustments for measurement error". Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/5236.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
Resumo:
Measurement error is a frequent issue in many research areas. For instance, in health research it is often of interest to understand the relationship between an outcome and an exposure, which is often mismeasured if the study is observational or a gold standard is costly or absent. Measurement error in the explanatory variable can have serious effects, such as biased parameter estimation, loss of power, and masking of the features of the data. The structure of the measurement error is usually not known to the investigators, leading to many difficulties in finding solutions for its correction. In this thesis, we consider problems involving a correctly measured continuous or binary response, a mismeasured continuous exposure variable, and another correctly measured covariate. We compare our proposed Bayesian approach to the commonly used simulation extrapolation (SIMEX) method. The Bayesian model incorporates the uncertainty of the measurement error variance, and the posterior distribution is generated using the Gibbs sampler as well as the random walk Metropolis algorithm. The comparison between the Bayesian and SIMEX approaches is conducted using different cases of simulated data including validation data, as well as the Framingham Heart Study data, which provide replicates but no validation data. The Bayesian approach is more robust to changes in the measurement error variance or validation sample size, and consistently produces wider credible intervals as it incorporates more uncertainty. The underlying theme of this thesis is the uncertainty involved in the estimation of the measurement error variance. We investigate how accurately this parameter has to be estimated, and how confident one has to be about this estimate, in order to produce better results by choosing the Bayesian measurement error correction over the naive analysis in which measurement error is ignored.
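To fix ideas, here is a minimal sketch of the SIMEX idea for a linear model with additive measurement error (an illustration under simplified assumptions, with made-up parameter values; not the thesis code):

```python
import numpy as np

rng = np.random.default_rng(0)

# True model: y = beta0 + beta1 * x, with x observed as w = x + u,
# where u ~ N(0, sigma_u^2) is measurement error with known variance.
n, beta0, beta1, sigma_u = 2000, 1.0, 2.0, 0.8
x = rng.normal(size=n)
y = beta0 + beta1 * x + rng.normal(scale=0.5, size=n)
w = x + rng.normal(scale=sigma_u, size=n)

def slope(wv, yv):
    """Ordinary least squares slope of yv on wv."""
    return np.polyfit(wv, yv, 1)[0]

# Simulation step: add extra error at levels lambda, average naive slopes.
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
means = []
for lam in lambdas:
    est = [slope(w + rng.normal(scale=np.sqrt(lam) * sigma_u, size=n), y)
           for _ in range(50)]
    means.append(np.mean(est))

# Extrapolation step: fit a quadratic in lambda and evaluate at lambda = -1,
# i.e., the hypothetical error-free case.
coeffs = np.polyfit(lambdas, means, 2)
print("naive slope:", means[0])
print("SIMEX slope:", np.polyval(coeffs, -1.0))  # closer to beta1 = 2
```

The choice of extrapolant (quadratic here) is itself a source of uncertainty, which is one reason a fully Bayesian treatment of the measurement error variance, as in this thesis, can behave more robustly.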
46

Khatun, Mahmuda. "Interpolation and extrapolation of point patterns based on variation analysis on measures". Thesis, University of Strathclyde, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.502280.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
Resumo:
Suppose that we observe a point pattern in an observation window W₀ ⊂ ℝᵈ. In this study we pose the following two main questions: How can one extend the pattern in a 'reasonable way' to a larger window W ⊃ W₀? Can one predict possible gaps in the observed point pattern where points were 'expected' but somehow failed to realise? We address these questions by assuming that the point pattern is an observed part of a realisation of a non-homogeneous Poisson process in W, and estimate its intensity so as to mimic a given distributional characteristic of the pattern, for instance its sample nearest-neighbour distribution. The project aims to develop prediction and extrapolation techniques for point processes and related techniques for the optimisation of functionals depending on a measure. Applications are numerous and important, including restoration of images, detection of impurities in materials science, prediction of anomalies in geology, etc.
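Once an intensity estimate is in hand, one standard way to realize the extension described above is Lewis-Shedler thinning; here is a minimal sketch with a made-up intensity function, purely for illustration (not the estimation procedure developed in the thesis):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_nhpp(intensity, lam_max, window):
    """Simulate a non-homogeneous Poisson process on a 2-D rectangle.

    Lewis-Shedler thinning: draw a homogeneous Poisson process with
    rate lam_max >= sup(intensity), then keep each point (x, y) with
    probability intensity(x, y) / lam_max.
    """
    (x0, x1), (y0, y1) = window
    area = (x1 - x0) * (y1 - y0)
    n = rng.poisson(lam_max * area)
    xs = rng.uniform(x0, x1, n)
    ys = rng.uniform(y0, y1, n)
    keep = rng.uniform(size=n) < intensity(xs, ys) / lam_max
    return xs[keep], ys[keep]

# Hypothetical fitted intensity, decaying away from a hotspot at (0, 0).
intensity = lambda x, y: 100.0 * np.exp(-(x**2 + y**2))

# Extend the pattern from the observed window to a larger one.
xs, ys = simulate_nhpp(intensity, lam_max=100.0,
                       window=((-3.0, 3.0), (-3.0, 3.0)))
print(len(xs), "points simulated in the enlarged window")
```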
47

Dai, Ruxin. "Richardson Extrapolation-Based High Accuracy High Efficiency Computation for Partial Differential Equations". UKnowledge, 2014. http://uknowledge.uky.edu/cs_etds/20.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
Resumo:
In this dissertation, Richardson extrapolation and other computational techniques are used to develop a series of high-accuracy, high-efficiency solution techniques for partial differential equations (PDEs). A Richardson extrapolation-based sixth-order method with a multiple coarse grid (MCG) updating strategy is developed for 2D and 3D steady-state equations on uniform grids. Richardson extrapolation is applied to explicitly obtain a sixth-order solution on the coarse grid from two fourth-order solutions on related scale grids. The MCG updating strategy directly computes a sixth-order solution on the fine grid by using various combinations of multiple coarse grids. A multiscale multigrid (MSMG) method is used to solve the linear systems resulting from fourth-order compact (FOC) discretizations. Numerical investigations show that the proposed methods compute high-accuracy solutions and have better computational efficiency and scalability than the existing Richardson extrapolation-based sixth-order method with iterative operator-based interpolation. Completed Richardson extrapolation is explored to compute sixth-order solutions on the entire fine grid. The correction between the fourth-order solution and the extrapolated sixth-order solution, rather than the extrapolated sixth-order solution itself, is involved in the interpolation process to compute sixth-order solutions at all fine-grid points. The completed Richardson extrapolation does not involve significant computational cost, so it can reach the high-accuracy and high-efficiency goals at the same time. Three different techniques work with Richardson extrapolation to compute fine-grid sixth-order solutions: iterative operator-based interpolation, the MCG updating strategy, and completed Richardson extrapolation. In order to compare the accuracy of these Richardson extrapolation-based sixth-order methods, truncation error analysis is conducted for a 2D Poisson equation, and numerical comparisons are carried out to verify the theoretical analysis. Richardson extrapolation-based high-accuracy, high-efficiency computation is then extended to unsteady-state equations. A higher-order alternating direction implicit (ADI) method with completed Richardson extrapolation is developed for solving unsteady 2D convection-diffusion equations; the completed Richardson extrapolation improves the accuracy of the solution obtained from a high-order ADI method in the spatial and temporal domains simultaneously. Stability analysis shows the effects of Richardson extrapolation on stable numerical solutions from the underlying ADI method.
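The coarse-grid step described above reduces, for fourth-order solutions whose truncation error expands in even powers of the mesh spacing h, to the classical Richardson combination (a standard identity; the MCG and completed-extrapolation variants of the dissertation build on it):

\[
\tilde{u}(x) = \frac{2^{4}\,u_{h}(x) - u_{2h}(x)}{2^{4} - 1}
            = \frac{16\,u_{h}(x) - u_{2h}(x)}{15}
            = u(x) + O(h^{6}),
\]

valid at the coarse-grid points shared by both meshes. The interpolation schemes compared in the dissertation are different ways of carrying this sixth-order accuracy to the remaining fine-grid points.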
48

Yang, Jiansong. "Quantitative prediction of metabolic drug-drug interactions : in vitro - in vivo extrapolation". Thesis, University of Sheffield, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.422638.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
49

Salo, Aki Iikka Tapio. "An assessment of video motion analysis : variability, reliability, camera orientation and extrapolation". Thesis, University of Exeter, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.286552.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
50

Fackler, Stephan [Verfasser]. "Regularity properties of sectorial operators: extrapolation, counterexamples and generic classes / Stephan Fackler". Ulm : Universität Ulm. Fakultät für Mathematik und Wirtschaftswissenschaften, 2015. http://d-nb.info/1064939813/34.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
