Doctoral dissertations on the topic "GRADIENT COMPUTATION"


Consult the 50 best doctoral dissertations on the topic "GRADIENT COMPUTATION".


1

Qiao, Lei. "Variational constitutive updates for strain gradient isotropic plasticity". Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/55079.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2009.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 93-96).
In the past decades, various strain gradient isotropic plasticity theories have been developed to describe the size-dependent plastic deformation mechanisms observed experimentally in micro-indentation, torsion, bending and thin-film bulge tests on metallic materials. Strain gradient plasticity theories also constitute a convenient device for introducing ellipticity into the differential equations governing plastic deformation in the presence of softening. The main challenge for numerical formulations is that the effective plastic strain, a local internal variable in classic isotropic plasticity theory, is now governed by a partial differential equation that includes spatial derivatives. Most current numerical formulations are based on Aifantis' one-parameter model with a Laplacian term [Aifantis and Muhlhaus, IJSS, 28:845-857, 1991]. As indicated by [Fleck and Hutchinson, JMPS, 49:2245-2271, 2001], one parameter is not sufficient to match the experimental data, so a robust and efficient computational framework that can handle more parameters is still needed. In this thesis, a numerical formulation based on the framework of variational constitutive updates is presented to solve the initial boundary value problem in strain gradient isotropic plasticity. One advantage of this approach over mixed methods is that it avoids solving for the displacement and effective plastic strain fields simultaneously. Another advantage, as has been amply established for many other material models, is that the solution of the problem follows a minimum principle, providing a convenient basis for error estimation and adaptive remeshing. The advantages of the framework of variational constitutive updates have already been verified for a wide class of material models, including visco-elasticity, visco-plasticity, crystal plasticity and soils, but the approach had not previously been applied to strain gradient plasticity models. In this thesis, a three-parameter strain gradient isotropic plasticity model is formulated within the variational framework, which is then taken as a basis for finite element discretization. The resulting model is implemented in a computer code and exercised on benchmark problems to demonstrate the robustness and versatility of the proposed method.
by Lei Qiao.
S.M.
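The incremental variational idea described in the abstract can be illustrated on a toy local (non-gradient) 1-D plasticity model: the updated plastic strain is the minimizer of an incremental potential, which can be cross-checked against the classical return-mapping formula. The material parameters and the quadratic hardening law below are illustrative assumptions, not values from the thesis.

```python
# Toy 1-D sketch of a variational constitutive update (local plasticity
# only -- the gradient terms of the thesis are omitted). The plastic
# strain is found by minimizing an incremental potential; all parameter
# values are illustrative assumptions.
from scipy.optimize import minimize_scalar

E, H, sigma_y = 200.0, 50.0, 1.0   # elastic modulus, hardening modulus, yield stress

def incremental_potential(ep, eps_total):
    """Elastic energy + hardening energy + dissipation, for ep >= 0."""
    return 0.5 * E * (eps_total - ep) ** 2 + 0.5 * H * ep ** 2 + sigma_y * ep

def variational_update(eps_total):
    """Plastic strain as the minimizer of the incremental potential."""
    res = minimize_scalar(incremental_potential, args=(eps_total,),
                          bounds=(0.0, max(eps_total, 1e-12)), method="bounded")
    return res.x

def return_mapping(eps_total):
    """Classical radial-return solution for the same model (reference)."""
    trial = E * eps_total          # trial stress
    if trial <= sigma_y:
        return 0.0                 # elastic step: no plastic flow
    return (trial - sigma_y) / (E + H)
```

For this convex one-dimensional potential the minimizer coincides with the radial-return solution; that the update is a minimization is precisely the property the thesis exploits for error estimation and adaptive remeshing.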
2

Damou, Merzak. "Measurement and computation of a turbulent jet in an axial pressure gradient". Thesis, University of Manchester, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.305418.

3

Sitta, Alessandro. "Privacy-Preserving Distributed Optimization via Obfuscated Gradient Tracking". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Abstract:
As the modern world becomes increasingly digitized and interconnected, distributed systems have proven effective for processing large volumes of data, and optimization techniques have become essential across an extensive range of domains. However, a major concern regarding privacy in the handling of sensitive data has recently emerged. To address this issue we propose a novel consensus-based privacy-preserving distributed optimization algorithm called Obfuscated Gradient Tracking. The algorithm is characterized by a balanced noise-insertion method that protects private data from being revealed to others while not affecting the accuracy of the result. Indeed, we theoretically prove that the introduced perturbations do not affect the convergence properties of the algorithm, which is proven to reach the optimal solution without compromise. Moreover, security against the widely used honest-but-curious adversary model is shown. Furthermore, numerical tests are performed to demonstrate the effectiveness of the algorithm in terms of both privacy and convergence. The numerical results highlight the attractiveness of Obfuscated Gradient Tracking over standard distributed algorithms when privacy is a concern. Finally, we present a privacy-preserving distributed deep learning application developed with the new algorithm, demonstrating its general applicability.
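The key idea, noise that hides local information without perturbing the network-wide average, can be sketched for scalar gradient tracking on a ring. The zero-sum initialization noise below is a stand-in for the thesis's balanced noise-insertion scheme, and the mixing weights, step size, and objectives are all illustrative.

```python
# Sketch of consensus gradient tracking with zero-sum obfuscation noise.
# Each agent i privately holds a_i and minimizes f_i(x) = 0.5*(x - a_i)^2,
# so the network-wide optimum is mean(a). The tracker update preserves
# sum(y), so noise summing to zero across agents masks individual
# gradients without changing the point the network converges to.
# (Illustrative stand-in for the thesis's balanced noise insertion.)
import numpy as np

n, alpha, iters = 5, 0.05, 3000
a = np.array([1.0, 3.0, -2.0, 5.0, 0.5])      # each agent's private data

# Doubly stochastic mixing matrix for a 5-node ring (Metropolis weights).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 1 / 3
    W[i, (i - 1) % n] = 1 / 3
    W[i, (i + 1) % n] = 1 / 3

grad = lambda x: x - a                         # stacked per-agent gradients

rng = np.random.default_rng(0)
noise = rng.normal(size=n)
noise -= noise.mean()                          # zero-sum obfuscation noise

x = np.zeros(n)
y = grad(x) + noise                            # obfuscated tracker initialization
for _ in range(iters):
    x_new = W @ x - alpha * y                  # consensus + descent step
    y = W @ y + grad(x_new) - grad(x)          # track the average gradient
    x = x_new
# All agents agree on mean(a) despite the perturbed initialization.
```

Because the tracker update only ever adds gradient *differences*, the invariant sum(y) = sum of local gradients survives any zero-sum perturbation, which is why this style of obfuscation can leave the limit point untouched.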
4

Al-Mudhaf, Ali F. "A feed forward neural network approach for matrix computations". Thesis, Brunel University, 2001. http://bura.brunel.ac.uk/handle/2438/5010.

Abstract:
A new neural network approach for performing matrix computations is presented. The idea is to construct a feed-forward neural network (FNN) and train it by matching a desired set of patterns; the solution of the problem is the converged weight set of the FNN. Accordingly, unlike conventional FNN research, which concentrates on the external properties (mappings) of networks, this study concentrates on their internal properties (weights). The present network is linear and its weights are usually strongly constrained; hence a complicated overlapped network needs to be constructed. It should be noted, however, that the present approach depends strongly on the training algorithm of the FNN. Unfortunately, the available training methods, such as the original back-propagation (BP) algorithm, suffer many deficiencies when applied to matrix algebra problems, e.g., slow convergence due to improper choice of learning rates (LR). This study therefore focuses on the development of new, efficient and accurate FNN training methods. One improvement suggested to alleviate the problem of LR choice is the use of a line search with the steepest descent method, namely bracketing with the golden section method, which provides an optimal LR as training progresses. Another improvement proposed in this study is the use of conjugate gradient (CG) methods to speed up the training of the network. The computational feasibility of these methods is assessed on two matrix problems: the LU-decomposition of both band and square ill-conditioned unsymmetric matrices, and the inversion of square ill-conditioned unsymmetric matrices. Two performance indexes are considered: learning speed and convergence accuracy.
Extensive computer simulations have been carried out using the following training methods: the steepest descent with line search (SDLS) method, the conventional back-propagation (BP) algorithm, and conjugate gradient (CG) methods, specifically the Fletcher-Reeves (CGFR) and Polak-Ribière (CGPR) methods. Performance comparisons between these minimization methods demonstrate that the CG training methods give better convergence accuracy and are by far superior with respect to learning time, offering speed-ups of between 3 and 4 over SDLS depending on the severity of the error goal chosen and the size of the problem. Furthermore, using Powell's restart criteria with the CG methods alleviates the problem of wrong convergence directions usually encountered in pure CG learning methods. In general, CG methods with restarts showed the best performance among all methods for training the FNN for LU-decomposition and matrix inversion. It is therefore concluded that CG methods are good candidates for training FNNs for matrix computations, in particular the Polak-Ribière conjugate gradient method with Powell's restart criteria.
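The training machinery described above, a bracketing/golden-section line search feeding a Polak-Ribière conjugate gradient update with Powell restarts, can be sketched generically. Here it is applied to an ill-conditioned quadratic rather than to FNN weight training, and the tolerances and test problem are illustrative assumptions.

```python
# Sketch: Polak-Ribiere nonlinear CG with Powell restarts and a
# bracketing + golden-section line search, applied to an ill-conditioned
# quadratic (echoing the ill-conditioned matrices of the thesis).
import numpy as np

def golden_section(phi, a, b, iters=60):
    """Minimize a unimodal phi on [a, b] by golden-section search."""
    invphi = (np.sqrt(5) - 1) / 2
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    for _ in range(iters):
        if phi(c) < phi(d):
            b, d = d, c
            c = b - invphi * (b - a)
        else:
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

def line_search(phi):
    """Bracket the minimum by doubling, then refine with golden section."""
    step = 1e-4
    while phi(2 * step) < phi(step) and step < 1e6:
        step *= 2
    return golden_section(phi, 0.0, 2 * step)

def cg_pr(f, grad, x0, iters=100):
    """Polak-Ribiere (PR+) nonlinear CG with Powell restarts."""
    x = x0.copy()
    g = grad(x)
    d = -g
    for _ in range(iters):
        alpha = line_search(lambda t: f(x + t * d))
        x = x + alpha * d
        g_new = grad(x)
        if np.linalg.norm(g_new) < 1e-10:
            break
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))
        # Powell restart: drop conjugacy when successive gradients
        # are far from orthogonal.
        if abs(g_new @ g) >= 0.2 * (g_new @ g_new):
            beta = 0.0
        d = -g_new + beta * d
        g = g_new
    return x

A = np.diag([1.0, 10.0, 100.0, 1000.0])   # condition number 1000
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
x = cg_pr(f, grad, np.ones(4))            # converges to the minimizer at 0
```

With a near-exact line search on a quadratic, the PR update reduces to linear CG and terminates in a handful of iterations; the restart rule only matters for genuinely nonlinear objectives such as network training errors.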
5

Sautter, Rubens Andreas. "Gradient Pattern Analysis: New methodological and computational features with applications". Instituto Nacional de Pesquisas Espaciais (INPE), 2018. http://urlib.net/sid.inpe.br/mtc-m21c/2018/05.07.12.09.

Abstract:
This work presents Gradient Pattern Analysis (GPA), a formalism describing operators that characterize spatially extended systems through their gradient asymmetry. Aiming at large datasets, improvements to the most popular version of GPA are proposed with respect to both the measurement itself and computational efficiency. The gradient moments are reviewed and explored, and the first and second gradient moments (G1 and G2, respectively) are tested. To validate the refined implementations of G1 and G2, two case studies are presented: (i) a dynamical case study on Coupled Map Lattices (CML), and (ii) a static case study on galaxy morphology. In application (i), two system transitions are analyzed: symmetry breaking and synchronization. For application (ii), a non-parametric galaxy morphometry pipeline named CyMorph was developed, which plays an important role in a project studying galaxy formation and evolution; its main goal is to classify galaxies as early-type (elliptical) or late-type (spiral). G1 and G2 were integrated into CyMorph, and G2 was observed to be the second-best morphometric parameter in a system of 10 metrics.
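The gradient-moment idea can be illustrated with a deliberately simplified asymmetry score. This is not the exact G1/G2 operator definition used in the thesis, only the underlying intuition: gradient vectors of a symmetric pattern cancel pairwise, so the norm of their vector sum relative to their summed norms measures asymmetry.

```python
# Toy illustration of gradient-asymmetry measurement (NOT the exact
# G1/G2 operators of the thesis): symmetric patterns have gradient
# fields that cancel pairwise, asymmetric patterns do not.
import numpy as np

def gradient_asymmetry(field):
    """Score in [0, 1]: ~0 when gradient vectors cancel pairwise
    (symmetric pattern), ~1 when they are all aligned."""
    gy, gx = np.gradient(field)
    vec_sum = np.hypot(gx.sum(), gy.sum())   # |vector sum of gradients|
    norm_sum = np.hypot(gx, gy).sum()        # sum of gradient magnitudes
    return vec_sum / norm_sum if norm_sum > 0 else 0.0

x = np.linspace(-1.0, 1.0, 41)
X, Y = np.meshgrid(x, x)
symmetric = np.exp(-(X**2 + Y**2))   # centered Gaussian: gradients cancel
tilted = X.copy()                    # uniform ramp: gradients all aligned
```

On the centered Gaussian the score is essentially zero, while the tilted plane scores essentially one; the thesis's gradient moments refine this idea into a family of operators with controlled computational cost.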
6

Chauffour, Marie-Laure. "Shock-based waverider design with pressure gradient corrections and computational simulations". College Park, Md. : University of Maryland, 2004. http://hdl.handle.net/1903/1829.

Abstract:
Thesis (M.S.) -- University of Maryland, College Park, 2004.
Thesis research directed by: Dept. of Aerospace Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
7

Fischer, Paul, and Paul Steinmann (advisor). "C1 Continuous Methods in Computational Gradient Elasticity". Erlangen: Universitätsbibliothek der Universität Erlangen-Nürnberg, 2011. http://d-nb.info/1015783635/34.

8

Miles, Alexander, William Duncan, Brian Klug and Colton Holmes. "Rapid Prototyped Terahertz-Domain Gradient Index Optics: Computational Design, Simulation, and Manufacture". International Foundation for Telemetering, 2011. http://hdl.handle.net/10150/595744.

Abstract:
ITC/USA 2011 Conference Proceedings / The Forty-Seventh Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2011 / Bally's Las Vegas, Las Vegas, Nevada
There is a myriad of applications for terahertz radiation: security, military radar, product inspection, and telecommunications. These require manipulation of the radiation beyond simple transmission and detection, namely refraction: focusing, defocusing, and collimation. The current state-of-the-art fabrication of terahertz lenses is an expensive and time-consuming process, involving high-purity semiconductors and months of lead time. Our project focused on demonstrating that an inexpensive and quick process could reduce the required production investment by more than three orders of magnitude. This process is based on fabrication of a novel gradient index structure produced with a polymer-jetting rapid-prototyping machine.
9

Thill, Serge. "A computational analysis of the gradient navigation strategies of the nematode Caenorhabditis elegans". Thesis, University of Leicester, 2008. http://hdl.handle.net/2381/4014.

Abstract:
In the present thesis, we apply computational methods to the study of animal behaviour. Specifically, we are interested in the gradient navigation strategies of C. elegans, for which we show that there are many interesting questions that have not yet been answered by existing research. In order to study the behaviour of C. elegans, we first develop a range of tools to help us do so. We base a large part of our work on Markov-like models of behaviour, and since these are not Markovian in the strict sense (limiting the analytical tools that can be used to study their behaviour), we first present a possible transformation from a Markov-like model with variable transition probabilities into a strictly Markovian model. We next present a framework for studying the behaviour of behavioural models which is not restricted to the work presented here but is likely to find general use in behavioural studies. Using these tools, we then analyse the chemotactic behaviour of C. elegans, showing that we can adequately explain most features of this behaviour using energy-efficiency considerations. We also show that the main behavioural strategy, the so-called pirouette, is likely caused by an inability to sample the environment during a turn, and that the animal may not be acting upon gradient information while reversing. Finally, we investigate the deterministic isotherm-tracking strategy displayed by C. elegans. We develop a computational model for this behaviour which reproduces all of the main features of C. elegans isotherm tracking, and we propose a candidate neural circuit which might encode this strategy. Additionally, we briefly discuss the use of stochastic strategies by the animal when moving towards its preferred temperature. In summary, the work presented here provides contributions to two major fields: we extend the methodology available for behavioural analysis in ethology, and we contribute a number of insights and advancements to the field of C. elegans research.
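One standard way to make a chain with varying transition probabilities strictly Markovian is to fold the quantity the probabilities depend on into the state itself. The two-behaviour, two-context toy below is an illustration of that general trick, not the specific transformation developed in the thesis; all numbers are invented.

```python
# Toy illustration: a behaviour whose transition probabilities depend on
# a context variable is not Markovian on behaviours alone, but becomes
# strictly Markovian on the augmented state (behaviour, context).
# Behaviours: 0 = run, 1 = pirouette; all probabilities are illustrative.
import numpy as np

p_pirouette = {0: 0.05, 1: 0.40}   # context -> P(start a pirouette)
p_context_flip = 0.10              # P(context changes between steps)

states = [(b, c) for b in (0, 1) for c in (0, 1)]
P = np.zeros((4, 4))
for i, (b, c) in enumerate(states):
    for j, (b2, c2) in enumerate(states):
        pb = p_pirouette[c] if b2 == 1 else 1 - p_pirouette[c]
        pc = p_context_flip if c2 != c else 1 - p_context_flip
        P[i, j] = pb * pc          # behaviour and context update independently

# P is a proper stochastic matrix on the augmented space, so standard
# Markov-chain tools apply, e.g. the stationary distribution:
pi = np.linalg.matrix_power(P.T, 200) @ (np.ones(4) / 4)
```

Once the model is strictly Markovian, stationary distributions, hitting times, and the other analytical tools the thesis mentions become available.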
10

Norris, Michael K. "INCORPORATING HISTOGRAMS OF ORIENTED GRADIENTS INTO MONTE CARLO LOCALIZATION". DigitalCommons@CalPoly, 2016. https://digitalcommons.calpoly.edu/theses/1629.

Abstract:
This work presents improvements to Monte Carlo Localization (MCL) for a mobile robot using computer vision. Solutions to the localization problem aim to provide fine resolution on location approximation and to be resistant to changes in the environment. One such environment change is the kidnapped/teleported robot problem, where a robot is suddenly transported to a new location and must re-localize. The standard method of "Augmented MCL" uses particle filtering combined with the addition of random particles under certain conditions to solve the kidnapped robot problem. This solution is robust, but not always fast. This work combines Histogram of Oriented Gradients (HOG) computer vision with particle filtering to speed up the localization process. The major slowdown in Augmented MCL is the conditional addition of random particles, which depends on the ratio of a short-term and a long-term average of particle weights. This ratio does not change quickly when a robot is kidnapped, leading the robot to believe it is in the wrong location for a period of time. This work replaces the average-based conditional with a comparison of the HOG descriptor of the image directly in front of the robot against a cached version. This resulted in a speedup ranging from 25.3% to 80.7% (depending on the parameters used) in localization time over baseline Augmented MCL.
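The HOG-based trigger can be sketched with a single global orientation histogram (a real HOG descriptor uses per-cell histograms with block normalization): compare the descriptor of the current camera view against the cached one, and inject random particles only when the distance exceeds a threshold. The images, bin count, and threshold below are illustrative, not the thesis's configuration.

```python
# Sketch of a HOG-style kidnapped-robot trigger: compare an orientation
# histogram of the current view with a cached one. A stripped-down
# stand-in for a full HOG descriptor; all parameters are illustrative.
import numpy as np

def orientation_histogram(img, bins=9):
    """Global magnitude-weighted histogram of gradient orientations."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi                 # unsigned orientations
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

def looks_kidnapped(current_img, cached_hist, threshold=0.5):
    """Trigger random-particle injection when the current view no longer
    matches the descriptor cached for the believed pose."""
    return np.linalg.norm(orientation_histogram(current_img) - cached_hist) > threshold

# Illustrative views: a horizontal intensity ramp vs a vertical one.
h = np.arange(32.0)
view_a = np.tile(h, (32, 1))         # horizontal ramp: gradients point along x
view_b = view_a.T                    # vertical ramp: gradients point along y
cached = orientation_histogram(view_a)
```

Because the descriptor reacts immediately when the view changes, this check can fire on the very first frame after a teleport, rather than waiting for particle-weight averages to drift.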
11

Löf, Henrik. "Parallelizing the Method of Conjugate Gradients for Shared Memory Architectures". Licentiate thesis, Uppsala universitet, Avdelningen för teknisk databehandling, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-86295.

Abstract:
Solving partial differential equations (PDEs) is an important problem in many fields of science and engineering. For most real-world problems modeled by PDEs, we can only approximate the solution using numerical methods, and many of these methods result in very large systems of linear equations. A common way of solving such systems is to use an iterative solver such as the method of conjugate gradients, and due to the size of these systems we often need parallel computers to solve them in a reasonable amount of time. Shared memory architectures represent a class of parallel computer systems commonly used both in commercial applications and in scientific computing, and to provide cost-efficient computing solutions they come in a large variety of configurations and sizes. From a programming point of view, we do not want to spend a lot of effort optimizing an application for a specific computer architecture; we want methods and principles of optimization that are generally applicable to a large class of architectures. In this thesis, we investigate how to implement the method of conjugate gradients efficiently on shared memory architectures, seeking algorithmic optimizations that result in efficient programs for a variety of architectures. To study this problem, we implemented the method of conjugate gradients using OpenMP and measured the runtime performance of this solver on a variety of both uniform and non-uniform shared memory architectures. The input data used in the experiments come from a finite-element discretization of the Maxwell equations in three dimensions over a fighter-jet geometry. Our results show that, for all architectures studied, optimizations targeting the memory hierarchy exhibited the largest performance increase. Improving the load balance, by balancing the arithmetic work and minimizing the number of global barriers, proved to be of lesser importance. Overall, bandwidth minimization of the iteration matrix proved to be the most efficient optimization. On non-uniform architectures, proper data distribution proved to be very important; in our experiments we used page migration to improve the data distribution at runtime. Our results indicate that page migration can be very efficient if the migration cost is kept low. Furthermore, we believe that page migration can be introduced into OpenMP in a portable way, in the form of a directive with an affinity-on-next-touch semantic.
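The solver itself reduces to a short kernel whose cost is dominated by one matrix-vector product per iteration, which is why the memory-hierarchy and bandwidth optimizations discussed above dominate performance. The small dense test system below is illustrative; the thesis works with sparse FEM matrices from a Maxwell discretization.

```python
# The method of conjugate gradients for a symmetric positive definite
# system Ax = b. Each iteration performs one matrix-vector product (the
# memory-bandwidth-bound kernel), two dot products and three vector
# updates. Dense test matrix for brevity; the thesis uses sparse FEM data.
import numpy as np

def conjugate_gradients(A, b, tol=1e-10, max_iter=200):
    x = np.zeros_like(b)
    r = b - A @ x                      # initial residual
    d = r.copy()                       # initial search direction
    rr = r @ r
    for _ in range(max_iter):
        Ad = A @ d                     # dominant memory-bound operation
        alpha = rr / (d @ Ad)
        x += alpha * d
        r -= alpha * Ad
        rr_new = r @ r
        if np.sqrt(rr_new) < tol * np.linalg.norm(b):
            break
        d = r + (rr_new / rr) * d      # conjugate direction update
        rr = rr_new
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)          # symmetric positive definite
b = rng.standard_normal(50)
x = conjugate_gradients(A, b)
```

Note that the kernel touches the whole matrix once per iteration while doing only two flops per loaded element, so runtime is set by memory bandwidth rather than arithmetic throughput, exactly the balance the thesis's optimizations target.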
12

Dube, Zack. "Computational Reconstruction of the Physical Eye Using a New Gradient Index of Refraction Model". Thesis, Université d'Ottawa / University of Ottawa, 2016. http://hdl.handle.net/10393/34791.

Abstract:
This thesis proposes and tests an individually customizable model of the human crystalline lens. Such a model will be crucial both in advancing research on the human eye and in driving diagnostic tools that help plan and treat optical issues, such as those requiring refractive surgery. The thesis pursues two goals: first, to determine whether this new lens model can reproduce the major aberrations of real human eyes within a computational framework; second, to use clinical information to measure how well the model predicts post-operative results in refractive surgery, aiming to meet clinical standards of error. The proposed model of the crystalline lens is shown to be valid, as it is able both to reproduce an individual patient's optical information and to predict the optical outcome of refractive surgery on an individual human eye within clinical standards of error.
13

Lemdiasov, Rostislav A. "A general purpose computational approach to the design of gradient coils for arbitrary geometries". Worcester, Mass. : Worcester Polytechnic Institute, 2004. http://www.wpi.edu/Pubs/ETD/Available/etd-09214-155502/.

14

Chen, Guozhang. "Dynamical and computational mechanisms of biological and artificial neural circuits". Thesis, University of Sydney, 2020. https://hdl.handle.net/2123/23710.

Abstract:
Spatiotemporal activity dynamics with criticality have been widely observed in the cortex. In this thesis, we focus on elucidating the fundamental computational roles of these critical circuit dynamics in both biological and artificial neural networks. We first propose a novel neural computation mechanism, Computing by Modulating Spontaneous Activity (CMSA), for understanding visual perception. Using biophysically plausible circuit models, we demonstrate that spontaneous activity patterns with criticality are modulated by external stimuli to give rise to neural responses. We find that the CMSA mechanism of generating neural responses provides profound computational advantages, such as speeding up cortical processing, and offers a unifying explanation for many experimental findings at both the single-neuron and circuit levels. We further demonstrate that dynamical wave patterns, emerging from the non-equilibrium critical regime of a new type of model that integrates essential structural and neurophysiological properties of cortical circuits, exhibit Lévy walk-like movement. Such Lévy dynamics modulated by salient inputs provide a mechanistic account of recent key experimental findings on spatial attention sampling, including the theta-band sampling rate and theta-gamma coupling. This dynamical mechanism of attention, when applied to natural stimuli, explains the attention maps and attention sampling paths found in psychophysical studies better than the classical winner-take-all model of attention. Finally, we demonstrate that Lévy walk-like learning dynamics also emerge in deep neural networks trained with the stochastic gradient descent (SGD) algorithm, and that such learning dynamics enable the SGD optimizer to escape from local minima. This finding provides a framework for understanding the strengths and weaknesses of different network architectures and for developing new learning methods.
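A toy version of the escape phenomenon, noise-driven rather than reproducing the thesis's Lévy-walk analysis: on a double-well objective, plain gradient descent started near the shallow minimum stays there, while SGD-style gradient noise lets some runs cross into the deeper well. The objective, noise scale, and step size are all illustrative assumptions.

```python
# Toy illustration of gradient noise enabling escape from a local
# minimum (a simplified stand-in for the Levy-walk analysis of the
# thesis). Objective: a tilted double well with a shallow minimum near
# x ~ 0.96 and a deeper one near x ~ -1.03. Parameters are illustrative.
import numpy as np

f = lambda x: x**4 - 2 * x**2 + 0.3 * x
grad = lambda x: 4 * x**3 - 4 * x + 0.3

def descend(x, lr=0.02, steps=2000, noise=0.0, seed=0):
    """Gradient descent with optional SGD-style gradient noise."""
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        x -= lr * (grad(x) + noise * rng.standard_normal())
    return x

gd_final = descend(1.0)   # deterministic: stays in the shallow right-hand well
noisy_finals = [descend(1.0, noise=8.0, seed=s) for s in range(20)]
# With gradient noise, some runs cross the barrier into the deeper well.
```

The deterministic run is trapped because the barrier cannot be crossed by pure descent; the noisy runs perform a stochastic search whose occasional large excursions are the simplest caricature of the heavy-tailed learning dynamics the thesis analyses.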
15

Pinto, Marcio Augusto Sampaio 1977. "Método de otimização assitido para comparação entre poços convencionais e inteligentes considerando incertezas". [s.n.], 2013. http://repositorio.unicamp.br/jspui/handle/REPOSIP/263725.

Abstract:
Advisor: Denis José Schiozer
Thesis (doctorate) - Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica e Instituto de Geociências
In this work, an assisted optimization method is proposed to establish a refined comparison between conventional and intelligent wells, considering geological and economic uncertainties. A four-step methodology is presented: (1) representation and operation of the wells in the simulator; (2) optimization of the completed layers/blocks in the conventional wells and of the number and placement of the valves in the intelligent wells; (3) optimization of the operation of the conventional wells and of the valves in the intelligent wells through a hybrid optimization method, comprising a fast genetic algorithm to perform the global optimization and the conjugate gradient method to perform the local optimization; (4) a decision analysis considering the results of all geological and economic scenarios. The method was validated on simple reservoir models with a five-spot configuration of vertical wells, and then applied to a more complex reservoir model with four producer and four injector wells, all horizontal. The results show a clear difference when applying the proposed methodology to compare the two types of well. A comparison is also presented between intelligent wells under three types of control: reactive control and two forms of proactive control. For the cases used in this work, the results show a clear advantage in using intelligent wells with at least one form of proactive control, which enhances oil recovery and NPV while reducing water production and injection in most cases.
Doctorate
Reservoirs and Management
Doctor of Petroleum Science and Engineering
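The hybrid idea, a cheap global stage to find the right basin followed by a gradient-based local stage to polish the solution, can be sketched on a multimodal toy objective. Random sampling stands in for the fast genetic algorithm and plain gradient descent for the conjugate-gradient step; the actual well-control objective is far too complex to reproduce here.

```python
# Sketch of the hybrid global+local optimization pattern on a
# Rastrigin-like toy objective with many local minima (global minimum
# at x = 0). Random sampling stands in for the genetic algorithm and
# gradient descent for the conjugate-gradient stage; all parameters
# are illustrative.
import numpy as np

f = lambda x: x**2 + 10.0 * (1.0 - np.cos(2.0 * np.pi * x))
grad = lambda x: 2.0 * x + 20.0 * np.pi * np.sin(2.0 * np.pi * x)

def hybrid_optimize(seed=0):
    rng = np.random.default_rng(seed)
    # Global stage: sample widely and keep the best candidate, which
    # selects the basin containing the global minimum.
    pop = rng.uniform(-5.0, 5.0, size=1000)
    x = pop[np.argmin(f(pop))]
    # Local stage: polish within the selected basin by gradient descent.
    for _ in range(500):
        x -= 0.002 * grad(x)
    return x

x_best = hybrid_optimize()
```

The division of labour mirrors the thesis's design: a local method alone would be trapped in whichever of the many minima it started near, while the global method alone would waste effort resolving the final digits that a gradient step obtains almost for free.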
16

Higgins, K. "Comparison of engineering correlations for predicting heat transfer in zero-pressure-gradient compressible boundary layers with CFD and experimental data". Fishermans Bend, Victoria : Defence Science and Technology Organisation, 2008. http://hdl.handle.net/1947/9653.

17

Keller, Frank. "Gradience in grammar : experimental and computational aspects of degrees of grammaticality". Thesis, University of Edinburgh, 2001. http://hdl.handle.net/1842/744.

Abstract:
This thesis deals with gradience in grammar, i.e., with the fact that some linguistic structures are not fully acceptable or unacceptable, but receive gradient linguistic judgments. The importance of gradient data for linguistic theory has been recognized at least since Chomsky's Logical Structure of Linguistic Theory. However, systematic empirical studies of gradience are largely absent, and none of the major theoretical frameworks is designed to account for gradient data. The present thesis addresses both issues. In the experimental part of the thesis (Chapters 3-5), we present a set of magnitude estimation experiments investigating gradience in grammar. The experiments deal with unaccusativity/unergativity, extraction, binding, word order, and gapping. They cover all major modules of syntactic theory, and draw on data from three languages (English, German, and Greek). In the theoretical part of the thesis (Chapters 6 and 7), we use these experimental results to motivate a model of gradience in grammar. This model is a variant of Optimality Theory, and explains gradience in terms of the competition of ranked, violable linguistic constraints. The experimental studies in this thesis deliver two main results. First, they demonstrate that an experimental investigation of gradient phenomena can advance linguistic theory by uncovering acceptability distinctions that have gone unnoticed in the theoretical literature. An experimental approach can also settle data disputes that result from the informal data collection techniques typically employed in theoretical linguistics, which are not well suited to investigating the behavior of gradient linguistic data. Second, we identify a set of general properties of gradient data that seem to be valid for a wide range of syntactic phenomena and across languages: (a) linguistic constraints are ranked, in the sense that some constraint violations lead to a greater degree of unacceptability than others; (b) constraint violations are cumulative, i.e., the degree of unacceptability of a structure increases with the number of constraints it violates; (c) two constraint types can be distinguished experimentally: soft constraints lead to mild unacceptability when violated, while hard constraint violations trigger serious unacceptability; (d) the hard/soft distinction can be diagnosed by testing for effects of linguistic context: context effects only occur for soft constraints, while hard constraints are immune to contextual variation; (e) the soft/hard distinction is crosslinguistically stable. In the theoretical part of the thesis, we develop a model of gradient grammaticality that borrows central concepts from Optimality Theory, a competition-based grammatical framework. We propose an extension, Linear Optimality Theory, motivated by our experimental results on constraint ranking and the cumulativity of violations. The core assumption of the model is that the relative grammaticality of a structure is determined by the weighted sum of the violations it incurs. We show that the parameters of the model (the constraint weights) can be estimated using the least-squares method, a standard model-fitting algorithm. Furthermore, we prove that standard Optimality Theory is a special case of Linear Optimality Theory. To test the validity of Linear Optimality Theory, we use it to model data from the experimental part of the thesis, including data on extraction, gapping, and word order. For all data sets, a high model fit is obtained, and the model's predictions are demonstrated to generalize to unseen data. On a theoretical level, our modeling results show that certain properties of gradient data (the hard/soft distinction, context effects, and crosslinguistic effects) do not have to be stipulated, but follow from core assumptions of Linear Optimality Theory.
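The model's core equation, relative grammaticality as a weighted sum of constraint violations with weights estimated by least squares, is easy to state concretely. The violation profiles and weights below are fabricated for illustration and are not data from the thesis experiments.

```python
# Sketch of Linear Optimality Theory weight estimation: acceptability
# drops by the weighted sum of constraint violations, and the weights
# are recovered from judgments by least squares. All violation profiles
# and weights below are fabricated for illustration.
import numpy as np

# Rows: candidate structures; columns: number of times each structure
# violates constraints C1..C3.
violations = np.array([
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
    [1, 1, 0],
    [2, 0, 1],
    [0, 1, 2],
], dtype=float)

# Simulate judgments from "true" weights (C1 heavy/hard, C3 light/soft):
# grammaticality = -(weighted sum of violations).
w_true = np.array([3.0, 1.5, 0.5])
judgments = -violations @ w_true

# Estimate the constraint weights from the judgments by least squares.
w_fit, *_ = np.linalg.lstsq(-violations, judgments, rcond=None)
```

Cumulativity falls out of the linear form (two violations of a constraint cost twice one), and the hard/soft contrast appears as a large gap between fitted weights rather than a stipulated categorical distinction.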
Styles: APA, Harvard, Vancouver, ISO, etc.
18

Alhendal, Yousuf A. "Computational two phase Marangoni flow in a microgravity environment". Thesis, University of Manchester, 2013. https://www.research.manchester.ac.uk/portal/en/theses/computational-two-phase-marangoni-flow-in-a-microgravity-environment(a3ba6f7f-f619-4bae-a355-e7b007d97e13).html.

Full text source
Abstract:
The lack of significant buoyancy effects in zero-gravity conditions poses an issue with fluid transfer in a stagnant liquid. In this thesis, the movement of a bubble or droplet in both stagnant and rotating liquids is analysed and presented numerically using computational fluid dynamics (CFD). The governing continuum conservation equations for two-phase flow are solved using the commercial software package (2011). The Volume of Fluid (VOF) method is used to track the liquid/gas interface in 2D and 3D domains. User-Defined Functions (UDFs) are employed in order to include the effect of the surface tension gradient and fluid properties as a function of temperature, with a view to efficiently investigating temperature effects on the properties of the two phases. The flow is driven by the Marangoni influence induced by the surface tension gradient, which in turn drives the bubble/droplet from the cold to the hot region. For a stationary liquid, the results indicate that the scaled velocity of the bubble decreases with an increase in the Marangoni number, which agrees with the results of previous space experiments. An expression for predicting the scaled velocity of a bubble has been regressed from the data obtained in the present numerical study for thermal Marangoni numbers (MaT) up to 10,721. An expression for predicting the scaled velocity of a Fluorinert droplet migrating in oil has also been presented for an MaT range from 24.05 to 2771. The interactions of two droplets in thermocapillary motion have also been studied and compared with the results obtained for the isolated droplet. The results have shown that the leading droplet will not move faster than if it were isolated, as the trailing droplet has no influence on the velocity of the leading droplet. Three-dimensional results show that no bubbles broke up in any of the cases observed and that agglomeration could occur during thermocapillary migration for bubbles placed side by side.
The results of the motion of single and multiple bubbles incorporating thermocapillary forces in a rotating liquid in a zero-gravity environment have been presented for the first time. When the Rossby number (Ro) is 1, the effects of rotation are important. Furthermore, the deflection of the gas bubble motion towards the axis of rotation increases with a decrease in the Rossby number. Bubble population balance modelling has been investigated in normal gravity using Luo kernels for breakage and agglomeration, and two different laminar kernels for zero-gravity conditions. The simulations covered a wide range of scenarios, and the results are presented as bell-curve and histogram shapes for the number density and particle percentage distributions, respectively.
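For readers unfamiliar with the scaling used above, here is a minimal sketch of the thermal Marangoni number under one common convention for thermocapillary bubble migration (characteristic velocity from the surface-tension gradient, scaled by the thermal diffusivity). The property values are illustrative and are not taken from the thesis.

```python
# Sketch of the thermal Marangoni number used to scale thermocapillary
# migration, assuming the common convention Ma_T = U R / alpha with
# U = |d(sigma)/dT| * |grad T| * R / mu. All values are illustrative.

def reference_velocity(sigma_t, grad_t, radius, mu):
    """Characteristic thermocapillary velocity U = |dsigma/dT| |gradT| R / mu."""
    return abs(sigma_t) * grad_t * radius / mu

def thermal_marangoni(sigma_t, grad_t, radius, mu, alpha):
    """Ma_T = U R / alpha, with alpha the thermal diffusivity of the liquid."""
    u_ref = reference_velocity(sigma_t, grad_t, radius, mu)
    return u_ref * radius / alpha

# Illustrative values (SI units): a 1 mm bubble in a 200 K/m gradient.
ma = thermal_marangoni(sigma_t=-1e-4, grad_t=200.0, radius=1e-3,
                       mu=1e-3, alpha=1e-7)
```
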
Styles: APA, Harvard, Vancouver, ISO, etc.
19

Tscherter, Vincent. "Exorciser: Automatic generation and interactive grading of structured exercises in the theory of computation". Zürich, 2004. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=15654.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
20

von Stillfried, Florian. "Computational fluid-dynamics investigations of vortex generators for flow-separation control". Doctoral thesis, KTH, Turbulens, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-94879.

Full text source
Abstract:
Many flow cases in fluid dynamics face undesirable flow separation due to adverse pressure gradients on wall boundaries. This occurs, for example, due to geometrical reasons as in a highly curved turbine-inlet duct or on flow-control surfaces such as wing trailing-edge flaps within a certain angle-of-attack range. Here, flow-control devices are often used in order to enhance the flow and delay or even totally eliminate flow separation. Flow control can e.g. be achieved by using passive or active vortex generators (VGs) for momentum mixing in the boundary layer of such flows. This thesis focusses on such passive and active VGs and their modelling for computational fluid dynamics investigations. First, a statistical VG model approach for passive vane vortex generators (VVGs), developed at the Royal Institute of Technology Stockholm and the Swedish Defence Research Agency, was evaluated and further improved by means of experimental data and three-dimensional fully-resolved computations. This statistical VVG model approach models those statistical vortex stresses that are generated at the VG by the detaching streamwise vortices. This is established by means of the Lamb-Oseen vortex model and the Prandtl lifting-line theory for the determination of the vortex strength. Moreover, this ansatz adds the additional vortex stresses to the turbulence of a Reynolds-stress transport model. Therefore, it removes the need to build fully-resolved three-dimensional geometries of VVGs in a computational fluid dynamics mesh. Usually, the generation of these fully-resolved geometries is rather costly in terms of preprocessing and computations. By applying VVG models, the costs are reduced to that of computations without VVGs.
The original and an improved calibrated passive VVG model show sensitivity for parameter variations such as the modelled VVG geometry and the VVG model location on a flat plate in zero- and adverse-pressure-gradient flows, in a diffuser, and on an airfoil with its high-lift system extracted. It could be shown that the passive VG model qualitatively and partly quantitatively describes correct trends and tendencies for these different applications. In a second step, active vortex-generator jets (VGJs) are considered. They were experimentally investigated in a zero-pressure-gradient flat-plate flow at Technische Universität Braunschweig, Germany, and have been re-evaluated for our purposes, and a parameterization of the generated vortices was conducted. Dependencies of the generated vortices and their characteristics on the VGJ setup parameters could be identified and quantified. These dependencies were used as a basis for the development of a new statistical VGJ model. This model uses the ansatz of the passive VVG model in terms of the vortex model, the additional vortex-stress tensor, and its summation to the Reynolds stress tensor. Yet, it does not use the Prandtl lifting-line theory for the determination of the circulation but an ansatz for the balance of the momentum impact that the VGJ has on the mean flow. This model is currently under development and first results have been evaluated against experimental and fully-resolved computational results of a flat plate without pressure gradient.
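The statistical VVG model above builds its vortex stresses on the Lamb-Oseen vortex. A minimal sketch of that vortex's tangential velocity profile is given below; the circulation and core-radius values in the test are illustrative, and the model's actual calibration is not reproduced here.

```python
import math

# Sketch of the Lamb-Oseen vortex tangential velocity profile,
#   v_theta(r) = Gamma / (2 pi r) * (1 - exp(-(r / r_core)^2)),
# a standard form; parameter values used with it here are illustrative.

def lamb_oseen_vtheta(r, gamma, r_core):
    """Tangential velocity of a Lamb-Oseen vortex at radius r."""
    if r == 0.0:
        return 0.0  # the profile is regular at the axis
    return gamma / (2.0 * math.pi * r) * (1.0 - math.exp(-(r / r_core) ** 2))
```

Far from the core the profile approaches the potential vortex Gamma / (2 pi r), while inside the core it tends linearly to zero, which is what makes it a convenient regularized vortex model.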

QC 20120511

Styles: APA, Harvard, Vancouver, ISO, etc.
21

Pettersson, Lucas. "Localization with Time-of-Flight cameras". Thesis, KTH, Numerisk analys, NA, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-273788.

Full text source
Abstract:
Time-of-Flight (ToF) cameras are becoming an increasingly common sensor in smartphones. These sensors are able to produce depth measurements in a grid at a relatively high rate. Using these depth measurements, point-clouds representing the captured scene can be produced. Previous research has been conducted in using ToF or LIDAR images to localize the camera. In this paper, we investigate several methods to localize the camera using point-clouds and surface meshes. Small alterations were made to some of the algorithms but the concepts remain the same. The main algorithms consisted of ICP variants as well as a relatively recent method called Corrective Gradient Refinement (CGR). The results obtained from generated data indicate that some of the methods are applicable for real-time applications, and the position estimates are comparable to those found in previous results.
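As a rough illustration of the ICP variants compared above, here is a minimal point-to-point ICP sketch: brute-force nearest-neighbour matching followed by a Kabsch/SVD rigid alignment at each iteration. This is a generic textbook version, not the thesis implementation and not CGR.

```python
import numpy as np

# Minimal point-to-point ICP sketch (illustrative, 3-D points as rows).

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst
    (Kabsch method via SVD of the cross-covariance)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    h = (src - cs).T @ (dst - cd)
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))       # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return r, cd - r @ cs

def icp(src, dst, iters=20):
    """Iteratively match nearest neighbours and re-align src to dst."""
    est = src.copy()
    for _ in range(iters):
        # brute-force nearest-neighbour correspondences
        idx = np.argmin(((est[:, None] - dst[None]) ** 2).sum(-1), axis=1)
        r, t = best_rigid_transform(est, dst[idx])
        est = est @ r.T + t
    return est
```

With a good initial guess and mostly correct correspondences, each iteration reduces the alignment error; real implementations replace the brute-force matching with a k-d tree.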
Styles: APA, Harvard, Vancouver, ISO, etc.
22

Harding, Glen. "Shape from Gradients. A psychophysical and computational study of the role complex illumination gradients, such as shading and mutual illumination, play in three-dimensional shape perception". Thesis, University of Bradford, 2013. http://hdl.handle.net/10454/7266.

Full text source
Abstract:
The human visual system gathers information about three-dimensional object shape from a wide range of sources. How effectively we can use these sources, and how they are combined to form a consistent and accurate percept of the 3D world is the focus of much research. In complex scenes inter-reflections of light between surfaces (mutual illumination) can occur, creating chromatic illumination gradients. These gradients provide a source of information about 3D object shape, but little research has been conducted into the capabilities of the visual system to use such information. The experiments described here were conducted with the aim of understanding the influence of chromatic gradients from mutual illumination on 3D shape perception. Psychophysical experiments are described that were designed to investigate: If the human visual system takes account of mutual illumination when estimating 3D object shape, and how this might occur; How colour shading cues are integrated with other shape cues; The relative influence on 3D shape perception of achromatic (luminance) shading and chromatic shading from mutual illumination. In addition, one chapter explores a selection of mathematical models of cue integration and their applicability in this case. The results of the experiments suggest that the human visual system is able to quickly assess and take account of colour mutual illuminations when estimating 3D object shape, and use chromatic gradients as an independent and effective cue. Finally, mathematical modelling reveals that the chromatic gradient cue is likely integrated with other shape cues in a way that is close to statistically optimal.
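The "close to statistically optimal" integration referred to above is commonly formalized as minimum-variance fusion of independent Gaussian cues, with each cue weighted by the inverse of its variance. A small sketch with illustrative numbers (not the thesis's data or model code):

```python
# Minimum-variance linear fusion of independent cue estimates, the
# standard "statistically optimal" cue-combination rule. The cue names
# and values below are illustrative only.

def fuse_cues(estimates, variances):
    """Fuse independent Gaussian cues with inverse-variance weights."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    fused = sum(w * e for w, e in zip(weights, estimates)) / total
    fused_var = 1.0 / total        # fused variance never exceeds any cue's
    return fused, fused_var

# e.g. a luminance-shading depth cue vs a chromatic (mutual-illumination)
# shading cue, in arbitrary depth units.
depth, var = fuse_cues([10.0, 12.0], [1.0, 4.0])
```

The fused estimate is pulled towards the more reliable (lower-variance) cue, and its variance is smaller than that of either cue alone.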
Styles: APA, Harvard, Vancouver, ISO, etc.
23

Tavares, Jean Leite. "Calibração da dispersividade longitudinal de aquíferos através do método iterativo do gradiente de concentração" [Calibration of the longitudinal dispersivity of aquifers using the iterative concentration-gradient method]. Universidade Federal do Ceará, 2014. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=13317.

Full text source
Abstract:
The understanding of how the transport of solutes in groundwater occurs is critical to the proper management of this resource, which is increasingly subjected to anthropogenic pressures. In order to model this process, a key parameter is the coefficient of hydrodynamic dispersion, whose most important component in porous media is the longitudinal dispersivity (αL). The determination of αL usually takes place through experiments restricted to areas of interest and with often inaccurate results. Numerical methods are also used for parameter estimation. This study aimed mainly at calibrating αL in aquifers through the Iterative Method of the Gradient of Concentration (IMGC), derived from an adaptation of the methodology proposed for the Iterative Method of the Hydraulic Gradient (IMHG). The process starts from the hydrodynamic simulation. Later, concentrations obtained in the field, or hypothetical ones, are inserted into the solute-transport simulation model. Then, a matrix of concentrations is generated, which serves as the basis for the iterative process of the IMGC. Two models were structured: one called the observed or fixed model, in which the field concentrations are fixed, and another called the calculated or non-fixed model. Throughout the calibration process, matrices of concentrations are generated and used to calculate the spatial concentration gradients in the fixed and non-fixed models. The average angle between the observed and calculated concentration gradients and the mean squared error of the concentrations are the convergence criteria used to assess the calibration process. In the five models presented in this research, the results for these two criteria indicate the efficiency of the calibration method. Following the classical methodology of parameter calibration, in the IMGC the iterations continue while the minimum values for these criteria have not been attained, and the longitudinal dispersivity values are therefore updated between iterations.
This mechanism is based on the ratio between the observed and calculated concentration gradients in each model cell. The results were obtained using nitrate concentrations in two hypothetical examples with different degrees of complexity and in a real case applied in the municipality of Crato, located in the Araripe Sedimentary Basin in the south of the state of Ceará, Brazil. For the cases analyzed in this study, there was a significant decay of the average angle formed between the observed and calculated concentration gradients and of the mean squared error of the concentrations, yielding, as a final result, a matrix with most αL values close to those initially stipulated. The IMGC presents itself as a practical and fast calibration method. The development of a computer program that automates the IMGC is suggested, so that it can be applied to more complex situations.
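The two IMGC convergence criteria described above, the mean angle between observed and calculated concentration gradients and the mean squared error of the concentrations, can be sketched as follows. The gradients are treated as 2-D vectors and all values are illustrative; this is not the authors' code.

```python
import math

# Sketch of the IMGC convergence criteria: mean angle between paired
# observed/calculated concentration gradients, and the mean squared
# error of the concentrations. Inputs are illustrative 2-D vectors.

def mean_gradient_angle(grads_obs, grads_calc):
    """Mean angle (radians) between paired observed/calculated gradients."""
    angles = []
    for (ax, ay), (bx, by) in zip(grads_obs, grads_calc):
        cos_t = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
        angles.append(math.acos(max(-1.0, min(1.0, cos_t))))
    return sum(angles) / len(angles)

def mean_squared_error(c_obs, c_calc):
    """Mean squared error between observed and calculated concentrations."""
    return sum((a - b) ** 2 for a, b in zip(c_obs, c_calc)) / len(c_obs)
```

In the calibration loop, iterations would stop once both measures fall below chosen minimum values.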
Styles: APA, Harvard, Vancouver, ISO, etc.
24

Van, der Westhuizen Nicolé. "Gradient calculations of non-orthogonal meshes in the finite volume method / N. van der Westhuizen". Thesis, North-West University, 2013. http://hdl.handle.net/10394/9722.

Full text source
Abstract:
The handling of gradient calculations on non-orthogonal meshes in the Finite Volume Method (FVM) is important in the modelling of complex geometries, since different implementation methods have an influence on the accuracy and the stability of the solution. The application in the current study is the numerical solution of heat conduction in a complex geometry. It finds relevance in many engineering applications such as the Micro-Channel Heat Exchanger (MCHE) that acts as a recuperator in a High Temperature Reactor (HTR) power generation cycle. A program based on the FVM was developed in Excel for the solution of the diffusion equation on a non-orthogonal mesh. A test case of heat conduction in a rectangular block, meshed with a tetrahedral mesh, was solved with the Excel code. The same test case was solved with OpenFOAM. The results of the two codes were compared. Small differences were found and their origins were traced to slightly different implementation methods. Knowledge of the differences in implementation between the two codes resulted in a better understanding of the aspects that influenced accuracy and stability. Computations on meshes with the presence of mesh skewness and non-orthogonal mesh lines at boundaries were performed and an accompanying decrease in accuracy was observed. The results showed that the standard FVM as implemented in the Excel code and in OpenFOAM will need advanced methods to compensate for mesh skewness and non-orthogonality found at boundaries. During the study, a deeper knowledge and understanding was gained of the challenge of obtaining accurate solutions of heat conduction on non-orthogonal meshes. This knowledge may lead to the overall improvement of the simulation of heat transfer models in general and for the MCHE specifically.
Thesis (MIng (Nuclear Engineering))--North-West University, Potchefstroom Campus, 2013.
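One common way to compute cell-centre gradients on non-orthogonal meshes is a least-squares reconstruction from neighbouring cell values. The sketch below is a generic version of that idea, not the particular scheme implemented in the Excel code or in OpenFOAM.

```python
import numpy as np

# Generic least-squares gradient reconstruction at a cell centre from
# neighbour-cell centres and values; cell positions here are illustrative.

def least_squares_gradient(xc, phi_c, nbr_centres, nbr_values):
    """Gradient at cell centre xc fitted from neighbour differences."""
    d = np.asarray(nbr_centres, dtype=float) - np.asarray(xc, dtype=float)
    dphi = np.asarray(nbr_values, dtype=float) - phi_c
    grad, *_ = np.linalg.lstsq(d, dphi, rcond=None)
    return grad

# A linear field phi = 2x + 3y is recovered exactly, even with a skewed,
# non-orthogonal neighbour arrangement.
xc = (0.0, 0.0)
nbrs = [(1.0, 0.1), (0.2, 1.0), (-1.0, 0.3), (0.4, -1.0)]
vals = [2.0 * x + 3.0 * y for x, y in nbrs]
g = least_squares_gradient(xc, 0.0, nbrs, vals)
```

Because the fit uses displacement vectors rather than face-normal distances, the reconstruction is exact for linear fields regardless of mesh skewness, which is precisely why least-squares gradients are popular on distorted meshes.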
Styles: APA, Harvard, Vancouver, ISO, etc.
25

AZEGAMI, Hideyuki, and Akihiro KOKURA. "密度を設計変数に用いた形状適合問題の解法 [A solution method for shape-conforming problems using density as a design variable]". The Japan Society of Mechanical Engineers, 2010. http://hdl.handle.net/2237/21122.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
26

Truscott, Simon. "A heterogenous three-dimensional computational model for wood drying". Thesis, Queensland University of Technology, 2004. https://eprints.qut.edu.au/15960/1/Simon_Trustcott_Thesis.pdf.

Full text source
Abstract:
The objective of this PhD research program is to develop an accurate and efficient heterogeneous three-dimensional computational model for simulating the drying of wood at temperatures below the boiling point of water. The complex macroscopic drying equations comprise a coupled and highly nonlinear system of physical laws for liquid and energy conservation. Due to the heterogeneous nature of wood, the physical model parameters strongly depend upon the local pore structure, wood density variation within growth rings and variations in primary and secondary system variables. In order to provide a realistic representation of this behaviour, a set of previously determined parameters derived using sophisticated image analysis methods and homogenisation techniques is embedded within the model. From the literature it is noted that current three-dimensional computational models for wood drying do not take into consideration the heterogeneities of the medium. A significant advance made by the research conducted in this thesis is the development of a three-dimensional computational model that takes into account the heterogeneous board material properties which vary within the transverse plane with respect to the pith position that defines the radial and tangential directions. The development of an accurate and efficient computational model requires the consideration of a number of significant numerical issues, including the virtual board description, an effective mesh design based on triangular prismatic elements, the control volume finite element discretisation process for the coupled conservation laws, the derivation of an accurate flux expression based on gradient approximations together with flux limiting, and finally the solution of a large, coupled, nonlinear system using an inexact Newton method with a suitably preconditioned iterative linear solver for computing the Newton correction.
This thesis addresses all of these issues for the case of low temperature drying of softwood. Specific case studies are presented that highlight the efficiency of the proposed numerical techniques and illustrate the complex heat and mass transport processes that evolve throughout drying.
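The inexact Newton idea mentioned above, solving the Newton system J s = -F only approximately with an iterative linear solver, can be sketched on a toy nonlinear system. Plain conjugate gradients stand in for the thesis's preconditioned solver, and the system, names, and tolerances are invented for illustration.

```python
import numpy as np

# Toy inexact Newton sketch: each outer step solves J s = -F only down
# to a relative residual eta (the "forcing term"), rather than exactly.

def cg_solve(a, b, tol, max_iter=100):
    """Approximate solve of the SPD system a x = b by conjugate gradients."""
    x = np.zeros_like(b)
    r = b - a @ x
    p = r.copy()
    for _ in range(max_iter):
        rr = r @ r
        if np.sqrt(rr) < tol:
            break
        ap = a @ p
        alpha = rr / (p @ ap)
        x = x + alpha * p
        r = r - alpha * ap
        p = r + (r @ r / rr) * p
    return x

def inexact_newton(f, jac, x0, eta=0.1, tol=1e-10, max_newton=50):
    """Newton iteration with an inexact (eta-forcing) linear solve."""
    x = x0.copy()
    for _ in range(max_newton):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        # solve J s = -F only down to residual eta * ||F||
        s = cg_solve(jac(x), -fx, tol=eta * np.linalg.norm(fx))
        x = x + s
    return x

# Illustrative monotone nonlinear system F(x) = A x + x**3 - b.
a_mat = np.array([[4.0, 1.0], [1.0, 3.0]])
b_vec = np.array([1.0, 2.0])
f = lambda x: a_mat @ x + x**3 - b_vec
jac = lambda x: a_mat + np.diag(3.0 * x**2)
root = inexact_newton(f, jac, np.zeros(2))
```

Loosening the inner solve saves linear-solver work per step while preserving overall convergence, which is the point of the inexact Newton strategy on large coupled systems.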
Styles: APA, Harvard, Vancouver, ISO, etc.
27

Truscott, Simon. "A heterogenous three-dimensional computational model for wood drying". Queensland University of Technology, 2004. http://eprints.qut.edu.au/15960/.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
28

Weiss, Joseph. "A Solid-State 11B NMR and Computational Study of Boron Electric Field Gradient and Chemical Shift Tensors in Boronic Acids and Boronic Esters". Thesis, Université d'Ottawa / University of Ottawa, 2011. http://hdl.handle.net/10393/19737.

Full text source
Abstract:
The results of a solid-state 11B NMR study of a series of boronic acids, boronic esters, and boronic acid catechol cyclic esters with aromatic substituents are reported in this thesis. Boron-11 electric field gradient (EFG) and chemical shift (CS) tensors obtained from analyses of spectra acquired in magnetic fields of 9.4 T and 21.1 T are demonstrated to be useful for gaining insight into the molecular and electronic structure about the boron nucleus. It can be concluded that, when adequate electronic variation is present in the compounds being studied, the CS tensor span (Ω) is generally the boron NMR parameter most characteristic of the molecular and electronic environment for boronic acids and esters. Importantly, these data are only reliably accessible in ultrahigh magnetic fields. The experimental span values result from a delicate interplay of several competing factors, including hydrogen bonding, the value of the dihedral angle, and the type of aromatic ring system present.
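For reference, the two tensor quantities discussed above follow standard conventions: the EFG asymmetry parameter eta = (Vxx - Vyy) / Vzz with |Vzz| >= |Vyy| >= |Vxx|, and the CS tensor span Omega = delta11 - delta33. A sketch with made-up inputs, not the thesis's measurements:

```python
# Standard-convention helpers for EFG and CS tensor parameters;
# the numerical inputs used with them here are illustrative.

def efg_asymmetry(vxx, vyy, vzz):
    """Asymmetry parameter eta of a traceless EFG tensor,
    ordered |vzz| >= |vyy| >= |vxx|."""
    assert abs(vzz) >= abs(vyy) >= abs(vxx)
    return (vxx - vyy) / vzz

def span(delta_11, delta_22, delta_33):
    """CS tensor span Omega (ppm), with delta_11 >= delta_22 >= delta_33."""
    return delta_11 - delta_33
```
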
Styles: APA, Harvard, Vancouver, ISO, etc.
29

Khurelbaatar, Tsolmonbaatar. "Quantitative Imaging and Computational Modelling to Estimate the Relationship between Mechanical Strain and Changes within the Distal Tibia in First-Time Marathon Trainees". Digital WPI, 2019. https://digitalcommons.wpi.edu/etd-theses/1323.

Full text source
Abstract:
Background: Running is a popular form of exercise in which more than 55 million Americans actively participate. Endurance running, such as marathons and half-marathons, is becoming increasingly popular among active runners. Although the effect of running is considered beneficial to bone health, the direct relationship between the strains and strain gradients that occur during long-distance running and bone changes is still not clear. In particular, given the high rate of injury associated with first-time marathons, understanding the direct effect of strain stimuli on bone health is an important issue. Based on previous studies, we hypothesized that higher strain values will induce bone adaptation more effectively and will lead to greater osteogenic bone changes. Since osteocytes sense shear stress caused by the interstitial fluid flow, which is created by the deformations, and regulate the activities of the osteoblasts and osteoclasts that govern bone adaptation, we also hypothesized that the local strain gradient will create pressure differences within the interstitial fluid network and will increase fluid flow. Furthermore, due to that increased fluid flow, the regions with the higher strain gradient will experience a greater amount of bone adaptation. Thus, in this study, our purpose was to define the effect of the strains and strain gradients on bone changes within the distal tibia, the anatomical site most prone to low-risk stress fracture, during training for a first-time marathon. Methods: High-resolution and low-resolution computed tomographic (CT) images of the distal tibia were obtained before and after a self-selected training program from runners who were actively training to participate in their first marathon in the next calendar year. The low-resolution scan covered a 69.864 mm length of the distal end of the tibia, while the high-resolution CT scan covered a 9.02 mm region of the distal tibia.
Using subject-specific finite element (FE) models based on the low-resolution CT images, the strains and strain gradients of the distal tibia at the instant of the peak ground reaction force (GRF) were calculated. The baseline and follow-up high-resolution CT scans were used in high-resolution peripheral quantitative CT (HRpQCT) analysis to estimate the bone changes over the training period. Finally, the effect of the strains and strain gradients on the distal tibia bone changes was estimated from the FE-model-derived strain values and the HRpQCT-derived bone changes. We used a linear mixed model to define the relationship between the strain values and the bone changes in the distal tibia. Results: The strain values that occurred during marathon training had significant effects on bone changes in the distal tibia. In particular, the strain gradients showed a greater effect than the strains. In the cortical compartment, the strain gradients calculated as the strain difference of a node from the surrounding nodes (Strain Gradient-1) affected the bone mineral density (BMD) negatively, and each 1000 µε increase resulted in a 2.123% decrease in cortical BMD. The strain gradients calculated as the strain difference of a node from the surrounding nodes, normalized by the distance to those nodes (Strain Gradient-2), presented a positive effect on the cortical bone volume with a slope of 4.335% / 1000 µε. In the trabecular compartment, Strain Gradient-1 showed negative effects on the percent changes in BMD and bone mineral content (BMC), whereas Strain Gradient-2 showed positive effects on the percent changes in BMD and BMC. Conclusion: The linear mixed model analysis revealed a statistically significant (p < 0.05) relationship between the strain gradients that occurred during running and distal tibia bone changes. The strains, biometrics, and initial bone parameters did not show any significant effect on the bone changes.
The connection between the local strain environment and bone changes in the distal tibia investigated in this study is an important step toward understanding the mechanism of mechanically induced bone adaptation.
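The two strain-gradient measures defined above can be sketched schematically as follows. The exact stencils used in the thesis may differ, and the strain and coordinate values in the test below are illustrative (microstrain, mm).

```python
import math

# Schematic sketches of the two strain-gradient measures described in
# the abstract; these are illustrative reconstructions, not the
# thesis's definitions verbatim.

def strain_gradient_1(strain, nbr_strains):
    """Mean strain difference of a node from its surrounding nodes."""
    return sum(s - strain for s in nbr_strains) / len(nbr_strains)

def strain_gradient_2(strain, node_xyz, nbr_strains, nbr_xyz):
    """As above, but each difference is normalized by the distance
    to that neighbouring node."""
    total = 0.0
    for s, p in zip(nbr_strains, nbr_xyz):
        total += (s - strain) / math.dist(node_xyz, p)
    return total / len(nbr_strains)
```
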
Styles: APA, Harvard, Vancouver, ISO, etc.
30

PAGLIANA, NICOLO'. "On the Role of Regularization in Machine Learning: Classical Theory, Computational Aspects and Modern Regimes". Doctoral thesis, Università degli studi di Genova, 2022. http://hdl.handle.net/11567/1081700.

Full text source
Abstract:
In this work we study the performance of different machine learning models, focusing on their regularization properties in order to explain different phenomena that are observed in practice. We consider linear models on a possibly infinite-dimensional feature space that are trained by optimizing an empirical mean squared error. We study the regularization properties of accelerated methods such as Nesterov's method or the ν-method, and the properties of interpolating estimators, where the main sources of regularization vanish, and we explain different behaviours that can be seen in practical applications.
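A minimal sketch of the kind of accelerated method studied here, Nesterov's accelerated gradient descent on an empirical mean squared error, run on synthetic data. This illustrates the algorithm only; the function name and data are invented, and this is not the thesis's estimator.

```python
import numpy as np

# Nesterov's accelerated gradient method on the empirical mean squared
# error 0.5 * ||X w - y||^2 / n, with the classic (t-1)/(t+2) momentum
# schedule. Data and step size are illustrative.

def nesterov_least_squares(x_mat, y, step, iters):
    w = np.zeros(x_mat.shape[1])
    w_prev = w.copy()
    for t in range(1, iters + 1):
        momentum = (t - 1.0) / (t + 2.0)
        v = w + momentum * (w - w_prev)              # look-ahead point
        grad = x_mat.T @ (x_mat @ v - y) / len(y)    # MSE gradient at v
        w_prev, w = w, v - step * grad
    return w

rng = np.random.default_rng(0)
x_mat = rng.standard_normal((50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = x_mat @ w_true                                   # noiseless targets
w_hat = nesterov_least_squares(x_mat, y, step=0.1, iters=3000)
```

In the noiseless, well-conditioned regime above the iterates approach the interpolating solution; stopping the iteration early acts as the implicit regularizer whose properties the thesis analyses.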
Styles: APA, Harvard, Vancouver, ISO, etc.
31

Attrell, Robert J. "A Solid-State 35Cl and 81Br NMR and Computational Study of Chlorine and Bromine Electric Field Gradient and Chemical Shift Tensors in Haloanilinium Halides". Thesis, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/20546.

Full text source
Abstract:
The results of a systematic 35Cl, 81Br, and 127I SSNMR spectroscopic study of a series of halogen-substituted anilinium halide salts are presented. Solid-state NMR of these nuclides, bromine-81 and iodine-127 in particular, is not well established. Twenty-one compounds thought to exhibit halogen bonding were prepared based on modified literature procedures, and two crystal structures were solved. Experiments show that collection of SSNMR spectra of the anions is feasible, though ultrahigh magnetic fields (21.1 T) and variable-offset data acquisition were found to be essential. Electric field gradient and chemical shift tensors are measured experimentally for all 21 compounds, significantly expanding the body of data for the quadrupolar halogen nuclei. Quadrupolar coupling constants ranged from 2.12 to 6.04 MHz for chlorine-35, from 12.3 to 45.3 MHz for bromine-81, and from 57.50 to 152.50 MHz for iodine-127. Gauge-including projector-augmented wave density functional theory (GIPAW-DFT) calculations were used to provide insight into how the NMR parameters vary with the local environment and long-range crystal packing. Overall, the calculations reproduced the experimental trends in quadrupolar coupling constants and chemical shift tensor span (Ω) but failed to provide quantitative agreement within experimental error. Experimental and computational data were analyzed in order to provide insight into how halogen bonding influences the NMR parameters. Several trends were elucidated from this study, including an inverse correlation between Ω and the length of the shortest halogen-halide contact (d). In selected bromine compounds, for example, Ω (81Br) was measured to increase from 120 to 240 ppm as d decreased from 3.838 to 3.443 Å. In summary, this study has demonstrated the feasibility and utility of quadrupolar halogen SSNMR, and that these techniques may prove useful in characterizing halogen bonding interactions in solids.
32

Strydom, Willem Jacobus. "Recovery based error estimation for the Method of Moments". Thesis, Stellenbosch : Stellenbosch University, 2015. http://hdl.handle.net/10019.1/96881.

Abstract:
Thesis (MEng)--Stellenbosch University, 2015.
ENGLISH ABSTRACT: The Method of Moments (MoM) is routinely used for the numerical solution of electromagnetic surface integral equations. Solution errors are inherent to any numerical computational method, and error estimators can be effectively employed to reduce and control these errors. In this thesis, gradient recovery techniques of the Finite Element Method (FEM) are formulated within the MoM context, in order to recover a higher-order charge of a Rao-Wilton-Glisson (RWG) MoM solution. Furthermore, a new recovery procedure, based specifically on the properties of the RWG basis functions, is introduced by the author. These recovered charge distributions are used for a posteriori error estimation of the charge. It was found that the newly proposed charge recovery method has the highest accuracy of the considered recovery methods, and is the most suited for applications within recovery based error estimation. In addition to charge recovery, the possibility of recovery procedures for the MoM solution current are also investigated. A technique is explored whereby a recovered charge is used to find a higher-order divergent current representation. Two newly developed methods for the subsequent recovery of the solenoidal current component, as contained in the RWG solution current, are also introduced by the author. A posteriori error estimation of the MoM current is accomplished through the use of the recovered current distributions. A mixed second-order recovered current, based on a vector recovery procedure, was found to produce the most accurate results. The error estimation techniques developed in this thesis could be incorporated into an adaptive solver scheme to optimise the solution accuracy relative to the computational cost.
AFRIKAANSE OPSOMMING: Die Moment Metode (MoM) vind algemene toepassing in die numeriese oplossing van elektromagnetiese oppervlak integraalvergelykings. Numeriese foute is inherent tot die prosedure: foutberamingstegnieke is dus nodig om die betrokke foute te analiseer en te reduseer. Gradiënt verhalingstegnieke van die Eindige Element Metode word in hierdie tesis in die MoM konteks geformuleer. Hierdie tegnieke word ingespan om die oppervlaklading van 'n Rao-Wilton-Glisson (RWG) MoM oplossing na 'n verbeterde hoër-orde voorstelling te neem. Verder is 'n nuwe lading verhalingstegniek deur die outeur voorgestel wat spesifiek op die eienskappe van die RWG basis funksies gebaseer is. Die verhaalde ladingsverspreidings is geïmplementeer in a posteriori fout beraming van die lading. Die nuut voorgestelde tegniek het die akkuraatste resultate gelewer, uit die groep verhalingstegnieke wat ondersoek is. Addisioneel tot ladingsverhaling, is die moontlikheid van MoM-stroom verhalingstegnieke ook ondersoek. 'n Metode vir die verhaling van 'n hoër-orde divergente stroom komponent, gebaseer op die verhaalde lading, is geïmplementeer. Verder is twee nuwe metodes vir die verhaling van die solenodiale komponent van die RWG stroom deur die outeur voorgestel. A posteriori foutberaming van die MoM-stroom is met behulp van die verhaalde stroom verspreidings gerealiseer, en daar is gevind dat 'n gemengde tweede-orde verhaalde stroom, gebaseer op 'n vektor metode, die beste resultate lewer. Die foutberamingstegnieke wat in hierdie tesis ondersoek is, kan in 'n aanpasbare skema opgeneem word om die akkuraatheid van 'n numeriese oplossing, relatief tot die berekeningskoste, te optimeer.
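As a loose 1D finite-element analogue of the recovery-based error estimation described above (my own sketch; the thesis recovers RWG charges and currents in the MoM, not 1D gradients): the gradient of a piecewise-linear interpolant is element-wise constant, averaging it onto nodes ("patch recovery") gives a smoother higher-order field, and the mismatch between raw and recovered gradients serves as a Zienkiewicz–Zhu style a posteriori error indicator.

```python
import numpy as np

# 1D patch recovery sketch: raw element-wise gradients are averaged onto
# nodes; the recovered field is superconvergent for smooth solutions.
def recover_gradient(x, u):
    g_elem = np.diff(u) / np.diff(x)                  # raw gradient, per element
    g_node = np.empty_like(np.asarray(x, dtype=float))
    g_node[1:-1] = 0.5 * (g_elem[:-1] + g_elem[1:])   # interior node averaging
    g_node[0] = 2.0 * g_elem[0] - g_node[1]           # linear extrapolation at ends
    g_node[-1] = 2.0 * g_elem[-1] - g_node[-2]
    # error indicator: recovered gradient (back on elements) vs raw gradient
    eta = np.abs(0.5 * (g_node[:-1] + g_node[1:]) - g_elem) * np.diff(x)
    return g_elem, g_node, eta
```

On a uniform mesh with u = x², the recovered nodal gradient is exactly 2x and the indicator vanishes, illustrating the superconvergence that makes recovery useful for error estimation.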
33

Jiménez, Garay Gabriel Alexandro. "Deep Learning for Semantic Segmentation versus Classification in Computational Pathology: Application to mitosis analysis in Breast Cancer grading". Master's thesis, Pontificia Universidad Católica del Perú, 2019. http://hdl.handle.net/20.500.12404/13969.

Abstract:
Existing computational pathology approaches have not yet enabled effective and efficient computer-aided tools that pathologists can use as a second opinion in daily practice. Focusing on the case of computer-based qualification for breast cancer diagnosis, the present work proposes two deep learning architectures to efficiently and effectively detect and classify mitosis in a histopathological tissue sample. The first method consists of two parts: a preprocessing of the digital histological image and a handcrafted-feature-free Convolutional Neural Network (CNN) used for binary classification. Results show that the proposed methodology can achieve 95% accuracy in testing with an F1-score of 94.35%, which is higher than results from the literature using classical image processing techniques and also higher than approaches combining handcrafted features with CNNs. The second approach is an end-to-end methodology using semantic segmentation. Results show that this algorithm can achieve an accuracy higher than 95% in testing and an average Dice index of 0.6, which is higher than results from the literature using CNNs (0.9 F1-score). Additionally, owing to the semantic properties of the deep learning approach, an end-to-end deep learning framework is viable for performing both tasks: detection and classification of mitosis. The results show the potential of deep learning in the analysis of Whole Slide Images (WSI) and its integration into computer-aided systems. The extension of this work to whole slide images is addressed in the last two chapters, along with some computational key points that are useful when building a computer-aided system inspired by the described technology.
Research work
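For reference, the F1-score and Dice index reported above are computed from the same confusion-matrix quantities, and for binary detection they coincide; a small sketch (mine, independent of the thesis code):

```python
# F1-score vs Dice index for binary predictions: both equal
# 2*TP / (2*TP + FP + FN), i.e. the Dice coefficient of the predicted
# and true positive sets.

def confusion(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, fp, fn

def f1_and_dice(y_true, y_pred):
    tp, fp, fn = confusion(y_true, y_pred)
    f1 = 2 * tp / (2 * tp + fp + fn)
    dice = 2 * tp / ((tp + fp) + (tp + fn))   # |A∩B| over (|A|+|B|)/2: same value
    return f1, dice
```

In segmentation work the Dice index is usually evaluated per pixel rather than per object, which is one reason the abstract can report a high F1 for detection alongside a lower Dice for segmentation.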
34

Iglesias, Susana Marrero. "Simulação e análise de gradientes de tensão em materiais e ligas metálicas". Universidade do Estado do Rio de Janeiro, 2008. http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=788.

Abstract:
As tensões residuais influenciam o comportamento mecânico dos metais e ligas. O campo de tensões pode determinar o desempenho do material. Então, torna-se de grande importância o estudo e a determinação das tensões residuais. O método padrão da difração de raios X para medir tensões residuais (método sen²ψ) apresenta limitações na determinação de tensões caracterizadas por alto gradiente. É por isso que o estudo do gradiente de tensão superficial é um dos mais importantes problemas teóricos e experimentais em mecânica, especialmente no caso da análise de tensões superficiais que surgem após vários tipos de tratamentos superficiais, como: processamento por laser e tecnologia de implantação iônica. A simulação computacional e os métodos numéricos nos dão a possibilidade de resolver os problemas da determinação dos parâmetros do gradiente de tensão. Neste trabalho foi desenvolvida e aplicada a modelação e simulação computacional para o estudo do comportamento de materiais na presença de distribuições de tensão caracterizadas por alto gradiente. Foi aplicada a modelagem para diferentes tipos de tensões com gradiente para metais puros e ligas metálicas. Foi determinada uma forma para usar a modelagem no caso de materiais compósitos e são apresentados resultados para o caso do silumínio, mostrando o comportamento similar deste material compósito aos materiais puros como aço e alumínio. Usando a análise de Fourier, foi determinada a função de distorção nos casos de perfis alargados pela existência de gradiente de tensão. Foi determinada também a relação entre as funções de atenuação e de tensão na função de distorção e são apresentados resultados para diferentes tipos de distribuições de tensão. Ademais, foi desenvolvida uma metodologia para a determinação dos parâmetros do gradiente de tensão usando a simulação computacional das linhas de difração desenvolvida neste trabalho.
A metodologia desenvolvida é baseada na simulação dos perfis de difração distorcidos pelo gradiente de tensão superficial e na análise destas distorções. Das metodologias desenvolvidas para a determinação de distribuições de tensões para o caso de gradiente, a que usa transformada de Fourier é matematicamente correta, porém apresenta dificuldades na determinação das funções de gradiente para casos de distribuições muito complexas e na determinação dos relacionamentos das escalas das funções envolvidas no processo de deconvolução. A alta complexidade deste método e os problemas numéricos que acarreta nos levam a propor a metodologia desenvolvida usando as larguras integrais dos perfis de difração para a determinação aproximada da função de distribuição de tensões. Esta metodologia é de simples aplicação e a sua precisão dependerá da quantidade de dados determinados na simulação direta para a obtenção dos gráficos de calibração.
Residual stresses influence the mechanical behavior of metals and alloys, and the stress field can determine the material's performance; the study and determination of the residual stress field are therefore of great importance. The standard X-ray diffraction method for residual stress measurement (the sin²ψ method) has limitations in determining strong stress gradients. For that reason, the determination of surface stress gradients is one of the most important theoretical and experimental problems in engineering, especially in the analysis of stress gradients caused by surface treatments such as laser processing or ion implantation. Computational simulation and numerical methods make it possible to solve these problems. In this work, two methods of modeling and simulation are developed and applied to study the behavior of materials with strong stress gradients. The modeling technique is applied to different stress distributions in pure metals and metallic alloys. A methodology is established for the case of composite materials, and results are presented for silumin, showing behavior similar to that of the other metals tested, such as steel and aluminum. Using Fourier analysis, the distortion function is determined for profiles broadened in the presence of a stress gradient, and the relation between the attenuation and stress-distribution functions within the distortion is shown, with results for several stress-distribution functions. Moreover, a methodology is developed for determining stress-gradient parameters using the computational simulation of diffraction lines, also developed in this work. The methodology is based on simulating the profiles broadened by the stress gradient and analyzing these distortions.
Of the methodologies developed for determining stress-distribution functions in the gradient case, the one that uses Fourier analysis is mathematically correct, but it presents difficulties in determining the gradient functions for complex stress distributions and in determining the scales of the functions involved in the deconvolution process. The high complexity of this method and the numerical problems it entails lead us to propose instead the methodology that uses the integral breadths of the diffraction profiles for the approximate determination of the stress-distribution function. This methodology is simple to apply, and its accuracy depends on the amount of data generated in the direct simulation to obtain the calibration graphs.
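The sin²ψ method that the thesis takes as its point of departure fits measured lattice strains linearly against sin²ψ; the slope gives the in-plane stress. A minimal sketch (my illustration, not the thesis' code), where `half_S2` ≈ (1+ν)/E is the X-ray elastic constant, assumed known. A strong depth gradient of stress curves this plot, which is precisely the limitation discussed above:

```python
import numpy as np

# sin^2(psi) stress analysis sketch: for a biaxial surface stress sigma,
# the measured strain at tilt psi follows
#   eps(psi) = half_S2 * sigma * sin^2(psi) + const,
# so a linear least-squares fit of eps vs sin^2(psi) recovers sigma.
def sin2psi_stress(psi_deg, eps, half_S2):
    x = np.sin(np.radians(psi_deg)) ** 2
    slope, _ = np.polyfit(x, eps, 1)   # degree-1 fit: [slope, intercept]
    return slope / half_S2
```

With synthetic strains generated from a constant 200 MPa stress the fit recovers the stress exactly; with a depth-dependent stress the points deviate from the straight line, signalling that the gradient-aware analysis of the thesis is needed.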
35

Portelenelle, Brice. "La méthode LS-STAG avec schémas diamants pour l'approximation de la diffusion : une méthode de type "cut-cell" précise et efficace pour les écoulements incompressibles en géométries 3D complexes". Thesis, Université de Lorraine, 2019. http://www.theses.fr/2019LORR0136/document.

Abstract:
La méthode LS-STAG est une méthode cartésienne pour le calcul d’écoulements incompressibles en géométries complexes, qui propose une discrétisation précise des équations de Navier-Stokes dans les cut-cells, cellules polyédriques de forme complexe créées par l’intersection du maillage cartésien avec la frontière du solide immergé. Originalement développée pour les géométries 2D, son extension aux géométries 3D se heurte au défi posé par le grand nombre de types de cut-cells (108) à considérer. Récemment, la méthode LS-STAG a été étendue aux géométries complexes 3D dont la frontière est parallèle à l’un des axes du repère cartésien, où sont uniquement présentes les contreparties extrudées des cut-cells 2D. Cette étude a notamment souligné deux points à élucider pour le développement d’une méthode totalement 3D : premièrement, le calcul des flux diffusifs par un simple schéma à deux points s’est révélé insuffisamment précis dans les cut-cells 3D-extrudées du fait de la non orthogonalité. Ensuite, l’implémentation de ces flux à la paroi, qui s’effectue en imposant une discrétisation distincte pour chaque type de cut-cell extrudée, se révèle trop complexe pour être étendue avec succès aux nombreux types supplémentaires de cut-cells 3D, et doit être simplifiée et rationalisée. Dans cette thèse, le premier point est résolu en utilisant l’outil des schémas diamants, d’abord étudié en 2D pour l’équation de la chaleur puis les équations de Navier-Stokes dans l’approximation de Boussinesq, puis étendu en 3D. En outre, les schémas diamants ont permis de revisiter intégralement la discrétisation du tenseur des contraintes des équations de Navier-Stokes, où disparaît le traitement au cas par cas selon la disposition de la frontière solide dans les cut-cells. Cela a permis d’aboutir à une discrétisation systématique, précise et algorithmiquement efficace pour les écoulements en géométries totalement 3D. 
La validation numérique de la méthode LS-STAG avec schémas diamants est présentée pour une série de cas tests en géométries complexes 2D et 3D. Sa précision est d’abord évaluée par comparaison avec des solutions analytiques en 2D, puis en 3D par la simulation d’un écoulement de Stokes entre deux sphères concentriques. La robustesse de la méthode est notamment mise en évidence par l’étude d’écoulements autour d’une sphère en rotation, dans les régimes laminaires (stationnaire et instationnaire), ainsi que pour un régime faiblement turbulent
The LS-STAG method is a cartesian method for the computation of incompressible flows in complex geometries, which consists in an accurate discretisation of the Navier-Stokes equations in cut-cells, polyhedral cells with complex shape created by the intersection of the cartesian mesh with the immersed boundary. Originally developed for 2D geometries, where only three types of generic cut-cells appear, its extension to 3D geometries has to deal with the large number of cut-cell types (108). Recently, the LS-STAG method was extended to 3D complex geometries whose boundary is parallel to an axis of the cartesian coordinate system, where only the extruded counterparts of the 2D cut-cells are present. This study highlighted two points to address in order to develop a totally 3D method. First, the computation of diffusive fluxes by a simple two-point scheme proved insufficiently accurate in 3D-extruded cut-cells because of non-orthogonality. In addition, the implementation of these fluxes on the immersed boundary, which proceeds by a case-by-case discretisation according to the type of cut-cell, is too unwieldy to be extended successfully to the many additional types of 3D cut-cells, and needs to be simplified and rationalised. In this thesis, the first point is solved by using the diamond-scheme tool, first studied in 2D for the heat equation and then for the Navier-Stokes equations in the Boussinesq approximation, and finally extended to 3D. Moreover, the diamond schemes have been used to fully revisit the discretisation of the shear stresses in the Navier-Stokes equations, removing the case-by-case treatment. These modifications lead to a systematic discretisation that is accurate and algorithmically efficient for flows in totally 3D geometries. The numerical validation of the LS-STAG method with diamond schemes is presented for a series of test cases in 2D and 3D complex geometries.
The accuracy is first assessed by comparison with analytical solutions in 2D, then in 3D by the simulation of Stokes flow between two concentric spheres. The robustness of the method is highlighted by simulations of the flow past a rotating sphere, in the laminar regimes (steady and unsteady) as well as in a weakly turbulent regime.
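The inaccuracy of the two-point diffusive flux on non-orthogonal cells, which motivates the diamond schemes, can be seen on a linear field: the two-point scheme differentiates along the cell-centre line, which matches the face-normal derivative only when that line is parallel to the face normal. A sketch (mine, not LS-STAG code):

```python
import numpy as np

# Two-point flux approximation (TPFA) vs the exact diffusive flux of a
# linear field u(x) = grad . x across a face of given area and normal.
def two_point_flux(grad, ci, cj, area, k=1.0):
    """Finite difference of u along the cell-centre line ci -> cj."""
    ci, cj = np.asarray(ci, float), np.asarray(cj, float)
    d = cj - ci
    return -k * area * (np.dot(grad, cj) - np.dot(grad, ci)) / np.linalg.norm(d)

def exact_flux(grad, normal, area, k=1.0):
    """Exact flux -k * (grad u . n) * area for the linear field."""
    return -k * area * np.dot(grad, normal)
```

When the centre-to-centre vector is aligned with the face normal the two fluxes agree exactly; tilting it (as happens in cut-cells) introduces a consistency error even for a linear solution.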
36

Widdifield, Cory. "Multinuclear Solid-State Magnetic Resonance Studies on ‘Exotic’ Quadrupolar Nuclei: Acquisition Methods, High-Order Effects, Quantum Chemical Computations, and NMR Crystallography". Thèse, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/20722.

Abstract:
This dissertation attempts to extend the classes of halogen-containing systems which may be studied using solid-state nuclear magnetic resonance (SSNMR). As line shape broadening due to the quadrupolar interaction (QI) scales inversely with the applied field, high-field magnet technology is indispensable for this research. Combining advanced radiofrequency pulse sequences with high-field wideline data acquisition allowed for the collection of very broad SSNMR signals of all quadrupolar halogen nuclei (i.e., 35/37Cl, 79/81Br and 127I) within a reasonable amount of experimental time. The initial systems for study were of the MX2 variety (M = Mg, Ca, Sr, Ba; X = Cl, Br, I). In total, 9 anhydrous compounds were tested. The effects of hydrate formation were tested on 7 additional compounds. Systematic trends in the observed δiso values (and to a lesser extent, Ω and CQ) were found to be diagnostic of the extent of hydration in these materials. Resolving power was successfully tested using SrBr2, which possesses 4 magnetically unique sites. The composition of CaBr2•xH2O was convincingly determined using SSNMR data and the hydration trends noted above. The sensitivity of the QI to the local bonding environment (e.g., bond distance changes of less than 0.05 Å) was used to refine (when coupled with gauge-including projector augmented-wave density functional theory (GIPAW DFT) quantum chemical computations) the structure of MgBr2, and was used to correct prior NMR data for CaCl2 (earlier accounts had been performed upon a CaCl2 hydrate). During NMR data analysis of certain iodine-containing materials, it was found that standard fitting software (which uses perturbation theory) could not reproduce the observations. Proper analysis required the use of exact simulation software and allowed for the observation of high-order quadrupole-induced effects (HOQIE). This motivated further studies using rhenium-185/187 nuclei, where it was expected that HOQIE would be more dramatic. 
The observed rhenium SSNMR spectra possessed additional fine structure that had never been observed experimentally before, nor would it be predicted by currently available perturbation-theory analysis software. Lastly, preliminary results are shown where 127I SSNMR is used to study important supramolecular systems, and the composition of the popular synthetic reagent ‘GaI’ is elucidated.
37

Prost, Jean-Philippe. "Modelling Syntactic Gradience with Loose Constraint-based Parsing". Phd thesis, Université de Provence - Aix-Marseille I, 2008. http://tel.archives-ouvertes.fr/tel-00352828.

Abstract:
The grammaticality of a sentence is usually conceived as a binary notion: a sentence is either grammatical or ungrammatical. However, a growing body of work investigates intermediate degrees of acceptability, sometimes referred to as gradience. To date, most of this work has focused on human judgements of syntactic gradience. This study explores the possibility of building a robust model that agrees with those human judgements.
We propose extending to ill-formed language the concepts of Intersective Gradience and Subsective Gradience introduced by Aarts for modelling graded judgements. Under this new model, the problem raised by gradience is one of classifying an utterance into a particular category, according to criteria based on the utterance's syntactic characteristics. We extend the notion of Intersective Gradience (IG) so that it concerns choosing the best solution among a set of candidates, and that of Subsective Gradience (SG) so that it concerns computing the degree of typicality of that structure within its category. IG is then modelled by an optimality criterion, while SG is modelled by computing a degree of grammatical acceptability. As for the syntactic characteristics needed to classify an utterance, our review of different representational frameworks for natural-language syntax shows that they can readily be represented in a Model-Theoretic Syntax framework. We opt for Property Grammars (PG), which offer precisely the possibility of modelling the characterisation of an utterance. We present a fully automated solution for modelling syntactic gradience, which proceeds by characterising a well-formed or ill-formed sentence, generating an optimal parse tree, and computing a degree of grammatical acceptability for the utterance.
Through the development of this new model, this work makes three contributions.
First, we specify a logical system for PG that allows its formalisation to be revisited from a model-theoretic perspective. In particular, it formalises the constraint-satisfaction and constraint-relaxation mechanisms at work in PG, as well as the way they license the projection of a category during parsing. This new system introduces the notion of loose satisfaction, together with a first-order-logic formulation for reasoning about an utterance.
Second, we present our implementation of Loose Satisfaction Chart Parsing (LSCP), which we prove always yields a complete and optimal parse. The approach relies on dynamic programming and on the mechanisms described above. Although of high complexity, this algorithmic solution performs well enough for us to experiment with our gradience model.
Third, having postulated that the prediction of human acceptability judgements can rely on factors derived from LSCP, we present a numerical model for estimating the degree of grammatical acceptability of an utterance. We measure a good correlation between these scores and human acceptability judgements. Moreover, our model outperforms a pre-existing model that we use as a baseline, which itself was evaluated using manually generated parses.
38

Zhang, Meng. "Evolutionary Learning of Boosted Features for Visual Inspection Automation". BYU ScholarsArchive, 2018. https://scholarsarchive.byu.edu/etd/7324.

Abstract:
Feature extraction is one of the major challenges in object recognition. Features extracted from one type of object cannot always be used directly for a different type of object, which limits the performance of feature extraction. Having an automatic feature learning algorithm can therefore be a big advantage for an object recognition algorithm. This research first introduces several improvements to a fully automatic feature construction method called Evolution COnstructed Feature (ECO-Feature). These improvements are developed to construct more robust features and make the training process more efficient than the original version. The main weakness of the original ECO-Feature algorithm is that it is designed only for binary classification and cannot be directly applied to multi-class cases. We also observe that the recognition performance depends heavily on the size of the feature pool from which features can be selected and on the ability to select the best features. For these reasons, we have developed an enhanced evolutionary learning method for multi-class object classification to address these challenges. Our method, Evolutionary Learning of Boosted Features (ECO-Boost), is an efficient evolutionary learning algorithm developed to automatically construct highly discriminative image features from the training images for multi-class image classification. This unique method constructs image features that are often overlooked by humans, and is robust to minor image distortion and geometric transformations. We evaluate the algorithm on a few visual inspection datasets including specialty crops, fruits and road surface conditions. Results from extensive experiments confirm that ECO-Boost performs comparably to other methods and achieves a good balance between accuracy and simplicity for real-time multi-class object classification applications.
It is a hardware-friendly algorithm that can be optimized for hardware implementation in an FPGA for real-time embedded visual inspection applications.
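At its core, the evolutionary learning loop keeps a candidate feature construction and accepts mutations that do not decrease a classification fitness. A toy (1+1) evolution-strategy sketch of that idea (my illustration only; ECO-Boost's actual candidates are sequences of image transforms, not weight vectors):

```python
import random

# Fitness: fraction of samples correctly separated by the sign of a
# weighted sum of input features (a stand-in for a constructed feature).
def fitness(weights, samples, labels):
    correct = 0
    for x, y in zip(samples, labels):
        score = sum(w * xi for w, xi in zip(weights, x))
        correct += int((score > 0) == y)
    return correct / len(samples)

# (1+1) evolution strategy: mutate the current best candidate with
# Gaussian noise and keep the mutant whenever it is at least as fit.
def evolve(samples, labels, dim, steps=300, seed=0):
    rng = random.Random(seed)
    best = [rng.uniform(-1, 1) for _ in range(dim)]
    best_fit = fitness(best, samples, labels)
    for _ in range(steps):
        cand = [w + rng.gauss(0, 0.3) for w in best]
        f = fitness(cand, samples, labels)
        if f >= best_fit:
            best, best_fit = cand, f
    return best, best_fit
```

The accept-if-not-worse rule makes the fitness monotone non-decreasing, which is the property the boosting stage in ECO-Boost builds on when it weights the evolved features.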
39

Tavares, Paulo Roberto Lacerda. "Modelagem computacional e calibração da condutividade hidráulica horizontal em aqüífero da Bacia Sedimentar do Araripe utilizando o método iterativo do gradiente hidráulico". Universidade Federal do Ceará, 2010. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=4517.

Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico
O processo de calibração de parâmetros hidrodinâmicos representa parte importante dos modelos e sua abordagem geral consiste em minimizar uma função objetivo expressa pela diferença entre as cargas hidráulicas observadas e calculadas. Guo e Zhang, na década de 90 e, depois, Schuster e Araújo (2004) propuseram um processo alternativo para estimação dos parâmetros, através da minimização da diferença entre os gradientes hidráulicos observados e calculados e denominado Método Iterativo do Gradiente Hidráulico (MIGH). A geração da matriz de cargas observadas tem função determinante no sucesso do processo de calibração. Na abordagem tradicional, a matriz observada é constituída por meio de um processo de interpolação. Este trabalho propõe-se a investigar a geração da matriz observada, utilizando o próprio processo de simulação, fixando, no modelo, os pontos de cargas aferidas em campo. Perceberam-se, ainda, grandes esforços manuais na busca de integrar o software simulador (PMWIN) e o calibrador (MIGH); neste sentido, toda a metodologia de calibração foi implementada e automatizada no programa computacional desenvolvido na pesquisa, denominado UFC-MIGH. Acrescido ao processo de calibração, foram investigadas técnicas de geoprocessamento através de um Sistema de Informações Geográficas (SIG), para simplificar a geração das matrizes de entrada do modelo. Para aplicação da metodologia proposta, foram calibradas as condutividades hidráulicas horizontais de dois exemplos hipotéticos e um caso real, referente a uma porção do aqüífero da Bacia Sedimentar do Araripe, abrangendo a totalidade da área urbana do município do Crato. Em todos os casos simulados, os resultados com o emprego do MIGH demonstraram erros menores, confirmados através dos critérios de convergência e comparados à metodologia tradicional, implementada no software PEST.
Dessa forma, o MIGH demonstra maior flexibilidade, mostrando eficácia superior em problemas com ou sem o zoneamento das condutividades hidráulicas. Além disso, a nova metodologia de geração das matrizes de cargas observadas garante resultados hidrogeologicamente mais consistentes, comparada à utilização da interpolação, que não contempla singularidades do modelo, como rios, poços de bombeamento e contornos impermeáveis. Por fim, como importantes produtos finais, estão presentes no trabalho o mapa de condutividade hidráulica horizontal e o mapa potenciométrico da área do Crato.
The calibration of hydrodynamic parameters is an important part of computer modeling; its general approach consists of minimizing an objective function expressed by the difference between observed and simulated hydraulic heads. Guo and Zhang, in the 1990s, and later Schuster and Araújo (2004) proposed an alternative process for estimating the parameters through the minimization of the difference between observed and simulated hydraulic gradients, called the Iterative Method of the Hydraulic Gradient (IMHG). The generation of the matrix of observed heads plays a decisive role in the success of the calibration process. In the traditional approach, the observed matrix is obtained using mathematical interpolation. This work investigates the generation of the observed head matrix using the simulation process itself, fixing in the model the points where the heads are known. Since great manual effort was normally required to integrate the simulation software (PMWIN) with the IMHG calibrator, the entire calibration methodology was implemented and automated in a computer code developed in this research, called UFC-MIGH. Geoprocessing techniques were added to the calibration process in order to simplify the generation of the input matrices of the model. For the application of the proposed methodology, the horizontal hydraulic conductivities of two hypothetical examples and one real case were calibrated. The real case consists of a portion of the aquifer of the Sedimentary Basin of Araripe, including the entire urban area of the municipality of Crato. In all simulated cases, the results obtained with the IMHG showed smaller errors than the traditional methodology implemented in the software PEST.
Thus, the IMHG demonstrated greater flexibility, showing superior effectiveness in problems with or without zoning of the hydraulic conductivities. The new methodology for generating the observed head matrix is far more hydrogeologically consistent than using mathematical interpolation alone, as it takes into account singularities of the model such as rivers, pumping wells and impermeable boundaries. Finally, as important end products, the thesis presents the map of horizontal hydraulic conductivity and the contour map of the hydraulic heads of the area where the methodology was applied.
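The gradient-matching idea behind the IMHG can be sketched in one dimension: for Darcy flow q = −K dh/dx, scaling each cell's conductivity by the ratio of computed to observed hydraulic gradient drives K toward values that reproduce the observed gradients. A toy sketch (mine; the thesis couples the method with PMWIN-based groundwater models on real 2D grids):

```python
import numpy as np

# Steady 1D Darcy flow with prescribed flux q: head drops cell by cell
# according to dh/dx = -q/K.
def simulate_heads(K, q, dx, h0):
    h = np.empty(len(K) + 1)
    h[0] = h0
    for i, k in enumerate(K):
        h[i + 1] = h[i] - q * dx / k
    return h

# IMHG-style update: multiply each cell's K by the ratio of computed to
# observed hydraulic gradient, so matched gradients leave K unchanged.
def migh_calibrate(h_obs, q, dx, K_init, iters=5):
    K = np.array(K_init, dtype=float)
    g_obs = np.diff(h_obs) / dx
    for _ in range(iters):
        h = simulate_heads(K, q, dx, h_obs[0])
        g_calc = np.diff(h) / dx
        K *= g_calc / g_obs
    return K
```

In this 1D fixed-flux setting the ratio g_calc/g_obs equals K_true/K, so the update recovers the true conductivities in a single iteration; in 2D models the gradients are coupled and the iteration proceeds gradually, which is the regime the thesis works in.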
40

畔上, 秀幸, Hideyuki Azegami, 志強 呉 and Zhi Chang Wu. "線形弾性問題における領域最適化解析(力法によるアプローチ)". 日本機械学会, 1994. http://hdl.handle.net/2237/7239.

41

AZEGAMI, Hideyuki, and Zhi Chang WU. "Domain Optimization Analysis in Linear Elastic Problems (Approach Using Traction Method)". 日本機械学会, 1996. http://hdl.handle.net/2237/12156.

42

Ozturk, Tugce. "Experimental and Computational Investigation of the Microstructure-Mechanical Deformation Relationship in Polycrystalline Materials, Applied to Additively Manufactured Titanium Alloys". Research Showcase @ CMU, 2017. http://repository.cmu.edu/dissertations/900.

Abstract:
Parts made of titanium alloys demonstrate anisotropic mechanical properties when manufactured by electron beam melting, an emerging additive manufacturing technique. Understanding the process-history-dependent heterogeneous microstructure and its effect on mechanical properties is crucial in determining the performance of additively manufactured titanium alloys, since the mechanical behavior relies heavily on the underlying microstructural features. This thesis focuses on combined experimental and computational techniques for microstructure characterization, synthetic microstructure generation, mechanical property measurement, and mechanical behavior modeling of polycrystalline materials, with special focus on dual-phase titanium alloys. Macroscopic mechanical property measurements and multi-modal microstructure characterizations (high-energy X-ray diffraction, computed tomography and optical microscopy) are performed on additively manufactured Ti-6Al-4V parts, revealing the heterogeneity of the microstructure and properties with respect to build height. Because characterizing and testing every location within a build is not practical, a computational methodology is established to reduce the time and cost of creating a microstructure-property database. First, a statistical volume element size is determined for the Fast Fourier Transform based micromechanical modeling technique through a sensitivity study performed on an experimental Ni-based superalloy and synthetic W, Cu, Ni and Ti structures, showing that as the contrast of properties (e.g., texture, field localization, anisotropy, rate-sensitivity) increases, so does the minimum simulation domain size requirement. In all deformation regimes a minimum volume element is defined for both single- and dual-phase materials.
The database is then expanded by generating statistically representative Ti structures, modified for features of interest (e.g., lath thickness, grain size and orientation distribution), to be used in spectral full-field micromechanical modeling. The relative effect of the chosen microstructural features is quantified through comparisons of average and local field distributions. The Fast Fourier Transform based technique, a spectral full-field deformation modeling tool, is shown to capture the relative contributions of microstructural features such as phase fractions, grain morphology/size and texture to the overall mechanical properties; the results indicate that the mean-field behavior is predominantly controlled by the alpha phase fraction and the prior beta phase orientation.
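The statistical-volume-element sensitivity described in the abstract can be mimicked with a toy study. This is only a sketch: a harmonic (Reuss) average over random two-phase fields stands in for the FFT-based full-field solve, and the sizes, contrast values and 50/50 phase fraction are illustrative assumptions. The point it reproduces is that the scatter of the apparent property across realizations shrinks as the domain grows and widens as the phase contrast increases.

```python
import numpy as np

def effective_modulus(field):
    # Reuss (harmonic) average as a cheap stand-in for a full-field FFT solve
    return 1.0 / np.mean(1.0 / field)

def sve_study(contrast, sizes, n_real=200, seed=0):
    """Relative scatter of the apparent property across n_real realizations
    of an n-by-n two-phase field, for each domain size n in sizes."""
    rng = np.random.default_rng(seed)
    spread = []
    for n in sizes:
        vals = [effective_modulus(np.where(rng.random((n, n)) < 0.5, 1.0, contrast))
                for _ in range(n_real)]
        spread.append(np.std(vals) / np.mean(vals))
    return spread
```

A domain is "large enough" once this scatter falls below a chosen tolerance; higher contrast pushes that threshold to larger domains, which is the trend the dissertation quantifies.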
Style APA, Harvard, Vancouver, ISO itp.
43

Cai, Shang-Gui. "Computational fluid-structure interaction with the moving immersed boundary method". Thesis, Compiègne, 2016. http://www.theses.fr/2016COMP2276/document.

Pełny tekst źródła
Streszczenie:
In this thesis a novel non-body-conforming mesh formulation, called the moving immersed boundary method (MIBM), is developed for the numerical simulation of fluid-structure interaction (FSI). The primary goal is to enable solids of complex shape to move arbitrarily in an incompressible viscous fluid, without fitting the solid boundary motion with dynamic meshes. The method enforces the no-slip boundary condition exactly at the fluid-solid interface through a boundary force, without introducing any artificial constants into the rigid-body formulation; as a result, a large time step can be used. To determine the boundary force more efficiently in the case of moving boundaries, an additional moving-force equation is derived and the resulting linear system is solved by the conjugate gradient method. The proposed method is highly portable and can be integrated into any fluid solver as a plug-in. In the present thesis, the MIBM is implemented in a fluid solver based on the projection method; to obtain results of high accuracy, the rotational incremental pressure-correction projection method is adopted, which is free of numerical boundary layers and second-order accurate. To accelerate the solution of the pressure Poisson equation, the multigrid method is employed as a preconditioner together with the conjugate gradient method as a solver. The code is further parallelized on the graphics processing unit (GPU) with the CUDA library for high-performance computing. Finally, the proposed MIBM is applied to the study of two-way FSI problems. For stability and modularity reasons, a partitioned implicit scheme is selected for this strongly coupled problem, with the interface matching of fluid and solid variables realized through a fixed-point iteration. To reduce the computational cost, a novel coupling scheme is proposed that removes the time-consuming pressure Poisson equation from the fixed-point iteration.
The proposed method has shown promising performance in modeling complex FSI systems.
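The pressure step described above comes down to solving a Poisson equation with a conjugate gradient solver. A minimal unpreconditioned sketch is given below for the 5-point Laplacian with homogeneous Dirichlet boundaries; the thesis couples CG with a multigrid preconditioner and GPU parallelism, neither of which is shown here, and the unit grid spacing is an assumption for brevity.

```python
import numpy as np

def apply_laplacian(u):
    """5-point Laplacian on a unit-spaced grid, with homogeneous Dirichlet
    boundaries (u = 0 outside the array)."""
    Au = 4.0 * u
    Au[1:, :] -= u[:-1, :]
    Au[:-1, :] -= u[1:, :]
    Au[:, 1:] -= u[:, :-1]
    Au[:, :-1] -= u[:, 1:]
    return Au

def cg_poisson(b, tol=1e-10, max_iter=500):
    """Conjugate gradient for A u = b, with A the SPD 5-point Laplacian."""
    u = np.zeros_like(b)
    r = b - apply_laplacian(u)      # initial residual
    p = r.copy()                    # initial search direction
    rs = np.sum(r * r)
    for _ in range(max_iter):
        Ap = apply_laplacian(p)
        alpha = rs / np.sum(p * Ap)
        u += alpha * p
        r -= alpha * Ap
        rs_new = np.sum(r * r)
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # conjugate direction update
        rs = rs_new
    return u
```

A multigrid preconditioner, as used in the thesis, replaces the raw residual in the direction update with an approximate coarse-grid solve, which is what makes the iteration count nearly independent of the grid size.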
Style APA, Harvard, Vancouver, ISO itp.
44

Aklil, Nassim. "Apprentissage actif sous contrainte de budget en robotique et en neurosciences computationnelles. Localisation robotique et modélisation comportementale en environnement non stationnaire [Active learning under budget constraints in robotics and computational neuroscience: robot localization and behavioural modelling in a non-stationary environment]". Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066225/document.

Pełny tekst źródła
Streszczenie:
Decision-making is a highly researched field, be it in neuroscience, to understand the processes underlying animal decision-making, or in robotics, to model efficient and rapid decision-making processes in real environments. In neuroscience, this problem is addressed online with sequential decision-making models based on reinforcement learning. In robotics, the primary objective is efficiency, so that systems can be deployed in real environments. However, what can be called the budget, that is, the limitations inherent to the hardware (computation time, the limited set of actions available to the robot, or the lifetime of its battery), is often not taken into account at present. In this thesis we introduce the notion of budget as an explicit constraint in robotic learning processes applied to a localization task, implementing a model based on work in statistical learning that processes data under budget constraints, either by limiting the data input or by imposing a more explicit time constraint. To work toward an online version of such budgeted learning algorithms, we also discuss possible inspirations from computational neuroscience. In this context, the alternation between gathering information for localization and deciding to move can be indirectly linked to the exploration-exploitation trade-off. We present our contribution to modeling this trade-off in animals in a non-stationary task involving different levels of uncertainty, and relate it to multi-armed bandit methods.
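The exploration-exploitation trade-off under an explicit budget can be sketched with a simple epsilon-greedy bandit whose pulls consume a fixed budget. This is an illustrative toy, not the thesis model: the per-arm costs, the Bernoulli rewards, and the reward-per-unit-cost greedy rule are all assumptions.

```python
import numpy as np

def budgeted_epsilon_greedy(true_means, costs, budget, eps=0.1, seed=0):
    """Epsilon-greedy bandit that stops when the action budget is spent.
    Rewards are Bernoulli(true_means[a]); each pull of arm a costs costs[a]."""
    rng = np.random.default_rng(seed)
    n_arms = len(true_means)
    counts = np.zeros(n_arms)
    values = np.zeros(n_arms)       # running mean reward per arm
    spent, total_reward = 0.0, 0.0
    while spent + min(costs) <= budget:
        if rng.random() < eps or counts.min() == 0:
            a = rng.integers(n_arms)                 # explore (or force first pulls)
        else:
            a = int(np.argmax(values / np.maximum(costs, 1e-12)))  # exploit: reward per unit cost
        if spent + costs[a] > budget:
            break                                    # cannot afford the chosen arm
        r = float(rng.random() < true_means[a])
        counts[a] += 1
        values[a] += (r - values[a]) / counts[a]     # incremental mean update
        spent += costs[a]
        total_reward += r
    return total_reward, counts
```

In the localization setting of the thesis, "arms" would correspond to sensing or moving actions with different time and energy costs; the budget makes exploration expensive in a way plain epsilon-greedy ignores.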
Style APA, Harvard, Vancouver, ISO itp.
45

畔上, 秀幸, i Hideyuki Azegami. "領域最適化問題の一解法 [A Solution to Domain Optimization Problems]". 日本機械学会, 1994. http://hdl.handle.net/2237/7238.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
46

Brundin, Michelle, Peter Morris, Gustav Åhlman i Emil Rosén. "Implementation av webbsida för rekommendationssystem med användaruppbyggd databas [Implementation of a website for a recommendation system with a user-built database]". Thesis, Uppsala universitet, Institutionen för teknikvetenskaper, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-175489.

Pełny tekst źródła
Streszczenie:
The goal of this project was to create a web-based, crowd-sourced, correlational database that easily allowed users to submit objects and receive correlated objects as results. The web service was built with the web technologies HTML, CSS, PHP and JavaScript, with MySQL to handle the database. Simultaneous development was kept in check with the aid of the source code management system Git. Upon completion, the service contained several HTML views, the ability to add and rate objects, a dedicated page per object with information parsed from Wikipedia.org, and a view with objects ranked according to the preferences of the current user. Roughly a month after the beginning of development, the website was publicly launched and promoted in order to collect data, and improvements were added to the website as needed. Two weeks after the public launch, the collected data was measured and analyzed. The algorithm proved effective and scalable, especially with the introduction of tags and simultaneous computation of object features.
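A correlational recommendation of the kind described, serving correlated objects from user-submitted ratings, is commonly built on item-item similarity. The sketch below uses Python and a dense rating matrix rather than the project's PHP/MySQL stack; the cosine-similarity scoring and the matrix layout are illustrative assumptions, not the project's algorithm.

```python
import numpy as np

def item_similarity(ratings):
    """Cosine similarity between the item columns of a user-by-item matrix."""
    norms = np.linalg.norm(ratings, axis=0)
    norms[norms == 0] = 1.0            # avoid division by zero for unrated items
    R = ratings / norms
    return R.T @ R

def recommend(ratings, user, k=2):
    """Score unrated items by a similarity-weighted sum of the user's ratings."""
    sim = item_similarity(ratings)
    scores = sim @ ratings[user]
    scores[ratings[user] > 0] = -np.inf   # do not re-recommend already-rated items
    return np.argsort(scores)[::-1][:k]
```

On a live site the similarity matrix would be precomputed and updated incrementally as ratings arrive, rather than rebuilt per request.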
Style APA, Harvard, Vancouver, ISO itp.
47

Berguin, Steven Henri. "A method for reducing dimensionality in large design problems with computationally expensive analyses". Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53504.

Pełny tekst źródła
Streszczenie:
Strides in modern computational fluid dynamics and leaps in high-power computing have led to unprecedented capabilities for handling large aerodynamic problems. In particular, the emergence of adjoint design methods has been a breakthrough in the field of aerodynamic shape optimization. It enables expensive, high-dimensional optimization problems to be tackled efficiently using gradient-based methods in CFD; a task that was previously inconceivable. However, adjoint design methods are intended for gradient-based optimization; the curse of dimensionality is still very much alive when it comes to design space exploration, where gradient-free methods cannot be avoided. This research describes a novel approach for reducing dimensionality in large, computationally expensive design problems to a point where gradient-free methods become possible. This is done using an innovative application of Principal Component Analysis (PCA), in which PCA is applied to the gradient distribution of the objective function, something that had not been done before. This yields a linear transformation that maps a high-dimensional problem onto an equivalent low-dimensional subspace. None of the original variables are discarded; they are simply linearly combined into a new, smaller set of variables. The method is tested on a range of analytical functions, a two-dimensional staggered airfoil test problem and a three-dimensional Over-Wing Nacelle (OWN) integration problem. In all cases, the method performed as expected and was found to be cost effective, requiring only a relatively small number of samples to achieve large dimensionality reduction.
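The core idea, PCA applied to the gradient distribution of the objective function, can be sketched in a few lines. Here a toy quadratic objective stands in for the CFD problem; this is an illustration of the technique, not the dissertation's implementation.

```python
import numpy as np

def active_subspace(grads, k):
    """PCA of gradient samples: eigendecompose C = E[grad grad^T] and return
    the k dominant directions (columns) plus all eigenvalues, sorted."""
    C = grads.T @ grads / len(grads)
    eigvals, eigvecs = np.linalg.eigh(C)       # ascending order
    order = np.argsort(eigvals)[::-1]
    return eigvecs[:, order[:k]], eigvals[order]

# Toy objective f(x) = (a.x)^2 varies only along the direction a,
# so its gradient 2(a.x)a always points along a and PCA recovers it.
rng = np.random.default_rng(0)
a = np.array([3.0, 4.0, 0.0]) / 5.0
X = rng.standard_normal((500, 3))
G = 2.0 * (X @ a)[:, None] * a
W, lam = active_subspace(G, 1)
```

A gradient-free optimizer can then search over `y = W.T @ x` in the low-dimensional subspace; none of the original variables are discarded, they are only linearly recombined, which is exactly the property the abstract highlights.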
Style APA, Harvard, Vancouver, ISO itp.
48

Masoudi, Mohammad Amin. "Robust Deep Reinforcement Learning for Portfolio Management". Thesis, Université d'Ottawa / University of Ottawa, 2021. http://hdl.handle.net/10393/42743.

Pełny tekst źródła
Streszczenie:
In finance, the use of Automated Trading Systems (ATS) on markets is growing every year, and the trades generated by algorithms now account for most of the orders that arrive at stock exchanges (Kissell, 2020). Historically, these systems were based on advanced statistical methods and signal processing designed to extract trading signals from financial data. The recent success of machine learning has attracted the interest of the financial community, and reinforcement learning, a subcategory of machine learning, has been broadly applied by investors and researchers in building trading systems (Kissell, 2020). In this thesis, we address the issue that deep reinforcement learning may be susceptible to sampling errors and over-fitting, and propose a robust deep reinforcement learning method that integrates techniques from reinforcement learning and robust optimization. We back-test and compare the performance of the developed algorithm, Robust DDPG, with a UBAH (Uniform Buy and Hold) benchmark and other RL algorithms, and show that the robust algorithm can significantly reduce the downside risk of an investment strategy and ensure a safer path for the investor's portfolio value.
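The gap between nominal and robust decision rules that motivates Robust DDPG can be shown on a much smaller problem. This is an illustrative toy, not the thesis algorithm: picking an asset by its worst-case bootstrap mean return, instead of its nominal mean, avoids an asset whose average only looks good because rare large losses are diluted in the sample.

```python
import numpy as np

def nominal_choice(returns):
    """Pick the asset with the best in-sample mean return."""
    return int(np.argmax(returns.mean(axis=0)))

def robust_choice(returns, n_boot=200, seed=0):
    """Pick the asset with the best worst-case mean return over
    bootstrap-resampled return histories (a crude uncertainty set)."""
    rng = np.random.default_rng(seed)
    T, n = returns.shape
    worst = np.full(n, np.inf)
    for _ in range(n_boot):
        idx = rng.integers(T, size=T)           # resample the history with replacement
        worst = np.minimum(worst, returns[idx].mean(axis=0))
    return int(np.argmax(worst))
```

Robust RL methods formalize the same idea inside the learning loop: the policy is optimized against an adversarial or resampled model of the environment rather than the single observed history.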
Style APA, Harvard, Vancouver, ISO itp.
49

ElNady, Khaled. "Modèles de comportement non linéaire des matériaux architecturés par des méthodes d'homogénéisation discrètes en grandes déformations. Application à des biomembranes et des textiles". Thesis, Université de Lorraine, 2015. http://www.theses.fr/2015LORR0032/document.

Pełny tekst źródła
Streszczenie:
This thesis deals with the development of micromechanical schemes for computing the homogenized response of architectured materials, focusing on periodic lattice materials. Architectured and micro-architectured materials cover a wide range of mechanical properties according to their nodal connectivity, the geometrical arrangement of the structural elements, their moduli, and a possible structural hierarchy. The principal objective of the thesis is to account for the geometrical nonlinearities arising from large changes of the initial lattice geometry, due to the small bending stiffness of the structural elements compared with their tensile rigidity. The so-called discrete homogenization method is extended to the geometrically nonlinear setting for quasi-periodic lattices; incremental schemes are constructed based on a staggered localization-homogenization computation of the lattice response over a repetitive unit cell submitted to a controlled deformation loading. The resulting effective medium is an anisotropic micropolar continuum whose effective properties reflect the geometrical arrangement of the structural elements within the lattice and their mechanical properties. The non-affine response of the lattice leads to possible size effects, which can be captured by enriching the classical Cauchy continuum either with rotational degrees of freedom, as in the micropolar effective continuum, or with second-order gradients of the displacement field. Both strategies are followed in this work, the construction of second-gradient continua by discrete homogenization being done in a small-perturbations framework. We show that the two enrichment strategies are complementary, owing to the analogy in the construction of micropolar and second-gradient continua by homogenization.
The combination of both schemes further delivers tension, bending and torsion internal lengths, which reflect the lattice topology and the mechanical properties of its structural elements. Applications to textiles and to biological membranes described as quasi-periodic networks of filaments are considered. The computed effective response is validated by comparison with finite element simulations performed over a representative unit cell of the lattice. The homogenization schemes have been implemented in a dedicated code written in combined symbolic and numerical language, taking as input the lattice geometry and the microstructural mechanical properties. Being predictive, the developed micromechanical schemes offer a design tool for conceiving new architectured materials that expand the boundaries of the 'material-property' space.
Style APA, Harvard, Vancouver, ISO itp.
50

Van, Den Eijnden Bram. "Modélisation multi-échelle du comportement hydro-méchanique des roches argileuses [Multiscale modelling of the hydromechanical behaviour of clay rocks]". Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAI034/document.

Pełny tekst źródła
Streszczenie:
Feasibility studies for deep geological radioactive waste disposal facilities have led to an increased interest in the geomechanical modelling of the host rock. In France, a potential host rock is the Callovo-Oxfordian claystone. The low permeability of this material is of key importance, as the principle of deep geological disposal relies strongly on the sealing capacity of the host formation. Because the permeability is coupled to the mechanical state of the material, the hydromechanically coupled behaviour of the claystone becomes important when mechanical alterations are induced by gallery excavation in the so-called excavation damaged zone (EDZ). In materials with microstructure such as the Callovo-Oxfordian claystone [Robinet et al., 2012], the macroscopic behaviour has its origin in the interaction of the micromechanical constituents. In addition to the coupling between hydraulic and mechanical behaviour, a coupling between the micro scale (the material microstructure) and the macro scale is made. By developing a framework of computational homogenization for hydromechanical coupling, a double-scale modelling approach is formulated in which the macroscale constitutive relations are derived from the microscale by homogenization. An existing model for hydromechanical coupling, based on the distinct definition of grains and intergranular pore space [Frey, 2010], is adopted and modified to enable first-order computational homogenization to deliver macroscale stress and fluid-transport responses. This model is used to constitute a periodic representative elementary volume (REV) that represents the local macroscopic behaviour of the claystone. As a response to deformation loading, the behaviour of the REV is the numerical equivalent of a constitutive relation at the macroscale.
For the required consistent tangent operators, the framework of computational homogenization by static condensation [Kouznetsova et al., 2001] is extended to hydromechanical coupling. The theoretical developments of this extension are implemented in the finite element code Lagamine (Liège) as an independent constitutive relation. For the modelling of localization of deformation, which in classical FE methods suffers from the well-known mesh dependency, the double-scale approach of hydromechanical coupling is combined with a local second gradient model [Collin et al., 2006] to control the internal length scale of localized deformation. With the periodic boundary conditions accepted as a regularization of the microscale deformation, the multiscale model combined with the local second gradient model can be used for modelling localization phenomena in HM-coupled settings with material softening. The modelling capacities of the approach are demonstrated by means of simulations of oedometer tests and biaxial compression tests. The approach proves to be a powerful way to model anisotropy in the mechanical as well as the hydraulic behaviour of the material, both in the initial material state and as an effect of hydromechanical alterations. For the application to the Callovo-Oxfordian claystone, microstructural REVs are calibrated to the geometrical characteristics of the inclusions that form the microstructure under consideration and to macroscale experimental results for the mechanical behaviour. The calibrated constitutive relation is used in the simulation of gallery excavation processes. These computations give a proof of concept of the double-scale assessment of the hydromechanical behaviour of excavation damaged zones around galleries in the context of nuclear waste disposal.
Style APA, Harvard, Vancouver, ISO itp.
Oferujemy zniżki na wszystkie plany premium dla autorów, których prace zostały uwzględnione w tematycznych zestawieniach literatury. Skontaktuj się z nami, aby uzyskać unikalny kod promocyjny!

Do bibliografii