Dissertations / Theses on the topic 'Non-dimensional analysis'


Consult the top 49 dissertations / theses for your research on the topic 'Non-dimensional analysis.'


1

Pachas, Mauro Artemio Carrion. "Three-Dimensional Deterministic and Non-Deterministic Limit Analysis." Pontifícia Universidade Católica do Rio de Janeiro, 2004. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=5750@1.

Full text
Abstract:
This work studies the behavior of geotechnical structures by means of Numerical Limit Analysis. For this purpose, the program GEOLIMA (GEOtechnical LIMit Analysis) was developed, based on the theory of Numerical Limit Analysis using the Finite Element Method (FEM) and considering both two-dimensional and three-dimensional problems. Because soil properties are random variables, Non-Deterministic Analysis was also considered, by means of the Linear Statistical and Monte Carlo methods. The fundamentals of Deterministic Limit Analysis and its mixed finite element formulation are presented first, followed by the fundamentals of Non-Deterministic Analysis, in which the Linear Statistical and Monte Carlo methods are described. The development phases of GEOLIMA are briefly described, and the program is validated by comparing its results with analytical and other published solutions. A 2D application is then presented to illustrate Deterministic and Non-Deterministic Limit Analysis with both methods. Finally, two 3D applications are presented: a problem related to the excavation face of a tunnel and a study of mining panels. The results indicate the viability of using Deterministic and Non-Deterministic Limit Analysis in the study of geotechnical problems.
APA, Harvard, Vancouver, ISO, and other styles
2

Yablonsky, Eugene. "Characterization of operators in non-gaussian infinite dimensional analysis." The Ohio State University, 2003. http://rave.ohiolink.edu/etdc/view?acc_num=osu1054787409.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Starowicz, Sharon Ann. "A non-dimensional analysis of cardiovascular function and thermoregulation." Thesis, Virginia Polytechnic Institute and State University, 1986. http://hdl.handle.net/10919/101150.

Full text
Abstract:
The cardiovascular system plays a vital role in protecting the body from temperature extremes due to its unique ability to store, transport, and dissipate heat. A comprehensive study of the thermoregulatory aspects of the system is severely limited by its complexity and the interdependency of its many component variables. Before a formal study can be initiated, certain fundamental properties of the cardiovascular system must be established and the physical processes associated with heat and mass transport must first be understood. To this end, over six hundred variables relating to the system's heat transport characteristics were identified. The variables were grouped to form dimensionless quantities using the Buckingham Pi Theorem. Each dimensionless quantity, or parameter, is composed of definable physical quantities that reflect the interaction between various components of the system. From the analysis, a series of reference scales was identified and, in turn, used to facilitate the physical interpretation of each resulting parameter. As a result of this analysis, a working set of physical and experimental quantities was derived to identify significant heat and mass transport processes involved in cardiovascular thermoregulation and to establish the relative rate at which these processes occur.
M.S.
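As a hedged aside on the Buckingham Pi step described above (an illustrative sketch, not the thesis's variable set): the theorem can be applied mechanically by writing each variable's exponents in the base dimensions as the columns of a dimension matrix and reading dimensionless groups off its null space. The fluid-mechanics variables below (density, velocity, length, viscosity) are a generic stand-in for the several hundred cardiovascular variables the study grouped.

```python
# Illustrative sketch: Buckingham Pi groups from the null space of the
# dimension matrix. Variables and exponents are a generic example.
import numpy as np
from scipy.linalg import null_space

# Columns: rho, v, L, mu; rows: base dimensions M, L, T.
# e.g. viscosity mu has dimensions M^1 L^-1 T^-1.
D = np.array([
    [ 1,  0, 0,  1],   # mass
    [-3,  1, 1, -1],   # length
    [ 0, -1, 0, -1],   # time
])

pi = null_space(D)          # each null-space vector is one dimensionless group
print(pi[:, 0] / pi[0, 0])  # normalised exponents (1, 1, 1, -1): the Reynolds number
```

One group emerges here because there are 4 variables and 3 independent dimensions; the same machinery scales to the much larger variable sets used in the thesis.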
APA, Harvard, Vancouver, ISO, and other styles
4

Elfarra, Monier Ali. "Two Dimensional Finite Volume Weighted Essentially Non-Oscillatory Euler Schemes with Uniform and Non-Uniform Grid Coefficients." Supervisor: İ. Sinan Akmandor. Ankara: METU, 2005. http://etd.lib.metu.edu.tr/upload/2/12605898/index.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Elfarra, Monier Ali. "Two-dimensional Finite Volume Weighted Essentially Non-oscillatory Euler Schemes With Uniform And Non-uniform Grid Coefficients." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/2/12605898/index.pdf.

Full text
Abstract:
In this thesis, Finite Volume Weighted Essentially Non-Oscillatory (FV-WENO) codes for the one- and two-dimensional discretised Euler equations are developed. The construction and application of the FV-WENO scheme and codes are described, and the effects of the grid coefficients as well as of the Gaussian quadrature on the solution are tested and discussed. WENO schemes are high-order accurate schemes designed for problems with piecewise smooth solutions containing discontinuities. The key idea lies at the reconstruction level, where a convex combination of all the candidate stencils is used with certain weights; these weights effectively eliminate the stencils that contain a discontinuity. WENO schemes have been quite successful in applications, especially for problems containing both shocks and complicated smooth solution structures. The applications tested in this thesis are a diverging nozzle, shock-vortex interaction, supersonic channel flow, flow over a bump, and a supersonic staggered wedge cascade. The numerical solutions for the diverging nozzle and the supersonic channel flow are compared with analytical solutions, the results for the shock-vortex interaction are compared with Roe-scheme results, and the results for the bump flow and the supersonic staggered cascade are compared with results from the literature.
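To make the stencil-weighting idea concrete, here is a minimal sketch of the classic fifth-order Jiang-Shu WENO reconstruction at a cell face; it is a generic textbook version in Python, not the thesis's FV-WENO code, and the epsilon value and test data are arbitrary.

```python
# Illustrative sketch (assumed, not the thesis code): fifth-order WENO
# reconstruction, showing the "convex combination of candidate stencils"
# idea described in the abstract.
import numpy as np

def weno5_face(v, eps=1e-6):
    """Left-biased value at x_{i+1/2} from the five cell averages
    v = (v[i-2], v[i-1], v[i], v[i+1], v[i+2])."""
    vm2, vm1, v0, vp1, vp2 = v
    # Smoothness indicators: large on a stencil crossing a discontinuity.
    b0 = 13/12*(vm2 - 2*vm1 + v0)**2 + 1/4*(vm2 - 4*vm1 + 3*v0)**2
    b1 = 13/12*(vm1 - 2*v0 + vp1)**2 + 1/4*(vm1 - vp1)**2
    b2 = 13/12*(v0 - 2*vp1 + vp2)**2 + 1/4*(3*v0 - 4*vp1 + vp2)**2
    # Ideal (linear) weights, then the nonlinear weights that suppress
    # any candidate stencil containing a discontinuity.
    d = np.array([0.1, 0.6, 0.3])
    alpha = d / (eps + np.array([b0, b1, b2]))**2
    w = alpha / alpha.sum()
    # Third-order candidate reconstructions on each stencil.
    q = np.array([(2*vm2 - 7*vm1 + 11*v0) / 6,
                  ( -vm1 + 5*v0  +  2*vp1) / 6,
                  (2*v0  + 5*vp1 -    vp2) / 6])
    return w @ q

print(weno5_face(np.array([1.0, 1.0, 1.0, 0.0, 0.0])))  # near a jump
```

With the jump in the test data, the smoothness indicators drive nearly all the weight onto the fully upwind stencil, which is exactly the non-oscillatory behaviour the abstract refers to.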
APA, Harvard, Vancouver, ISO, and other styles
6

Le, Gros Brian Neil. "Three-dimensional, non-linear finite element analysis, and elastic modulus optimization of a geometry for a non-metallic femoral stem." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2002. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp05/MQ65632.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Dudziak, William James. "Presentation and Analysis of a Multi-Dimensional Interpolation Function for Non-Uniform Data: Microsphere Projection." University of Akron / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=akron1183403994.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Ho, John Rong Ming. "Higher-order kinematic error sensitivity analysis and optimum dimensional tolerancing of dyad and non-dyad mechanisms." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/mq23340.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Burger, Heidi. "Isogeometric Analysis: Fundamentals and details of implementation. From first steps to two-dimensional non-linear problems." Master's thesis, Faculty of Engineering and the Built Environment, 2018. http://hdl.handle.net/11427/30008.

Full text
Abstract:
Isogeometric analysis (IGA) is a computational analysis technique that can serve as an alternative to the traditional finite element method (FEM) in approximating solutions to differential equations. IGA is not necessarily more efficient than traditional FEM, but by its nature it can handle a greater variety of complex geometries. IGA is based on the use of NURBS (non-uniform rational B-splines), mathematical descriptions of geometry that are the standard representation in computer aided design (CAD) modeling software. IGA therefore links the CAD world to the world of analysis. Traditional FEM was developed in the 1950s, before NURBS, and the two therefore developed quite separately. This project focuses on the fundamentals and implementation of IGA for a sequence of problems: one-dimensional, two-dimensional scalar, two-dimensional vector-valued, and simple non-linear problems. For each new problem, the underlying mathematics is developed and the implementation is discussed in detail. One of the major contributions of this project is the detail in which the implementation of the Neumann boundary condition is described; this level of detail is not available in the existing literature. All problems solved are demonstrative, and the code was written in a modular way that is easy to read and understand. Furthermore, how to extract NURBS data from CAD software is discussed, which would prove useful for future problems with more complex geometry. While the work done in this project is not considered novel, the thoroughness with which the project was approached is hoped to be useful for future projects. From this project, the work can be expanded to more complex geometries, to multi-patch problems with the help of CAD programs, or to more complex non-linear problems.
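Because IGA builds its analysis space from the same B-spline/NURBS basis used in CAD, a small sketch of the Cox-de Boor recursion may help; it is a generic illustration (knot vector and degree chosen arbitrarily), not code from the dissertation, and a rational (NURBS) basis would additionally weight and normalise these values.

```python
# Illustrative sketch (assumed, not the dissertation's code): evaluating the
# B-spline basis functions that underlie NURBS via the Cox-de Boor recursion.
def bspline_basis(i, p, u, knots):
    """Value of the i-th B-spline basis function of degree p at parameter u."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left, right = 0.0, 0.0
    if knots[i + p] != knots[i]:
        left = (u - knots[i]) / (knots[i + p] - knots[i]) * \
               bspline_basis(i, p - 1, u, knots)
    if knots[i + p + 1] != knots[i + 1]:
        right = (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) * \
                bspline_basis(i + 1, p - 1, u, knots)
    return left + right

# Quadratic basis on an open knot vector; the values at any u sum to one.
knots = [0, 0, 0, 0.5, 1, 1, 1]
print([round(bspline_basis(i, 2, 0.25, knots), 4) for i in range(4)])
```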
APA, Harvard, Vancouver, ISO, and other styles
10

雅史, 齋藤. "Non-invasive assessment of arterial stiffness by pulse wave analysis : in vivo measurements and one-dimensional theoretical model." Thesis, https://doors.doshisha.ac.jp/opac/opac_link/bibid/BB12426688/?lang=0, 2012. https://doors.doshisha.ac.jp/opac/opac_link/bibid/BB12426688/?lang=0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Shaarbaf, Ihsan Ali Saib. "Three-dimensional non-linear finite element analysis of reinforced concrete beams in torsion : reinforced concrete members under torsion and bending are analysed up to failure : a non-linear concrete model for general states of stress including compressive strength degradation due to cracking is described." Thesis, University of Bradford, 1990. http://hdl.handle.net/10454/3576.

Full text
Abstract:
This thesis describes a non-linear finite element model suitable for the analysis of reinforced concrete, or steel, structures under general three-dimensional states of loading. The 20-noded isoparametric brick element has been used to model the concrete, and reinforcing bars are idealised as axial members embedded within the concrete elements. The compressive behaviour of concrete is simulated by an elasto-plastic work hardening model followed by a perfectly plastic plateau, which is terminated at the onset of crushing. In tension, a smeared crack model with fixed orthogonal cracks has been used, with the inclusion of models for the retained post-cracking stress and the reduced shear modulus. The non-linear equations of equilibrium have been solved using an incremental-iterative technique operating under load control. The solution algorithms used are the standard and the modified Newton-Raphson methods; line searches have been implemented to accelerate convergence. The numerical integration has generally been carried out using 15-point Gaussian-type rules. Results of a study to investigate the performance of these rules show that the 15-point rules are accurate and computationally efficient compared with the 27-point (3x3x3) Gaussian rule. The three-dimensional finite element model has been used to investigate the problem of elasto-plastic torsion of homogeneous members. The accuracy of the finite element solutions obtained for beams of different cross-sections subjected to pure and warping torsion has been assessed by comparing them with the available exact or approximate analytical solutions. Because the present work is devoted to the analysis of reinforced concrete members which fail in shear or torsional modes, the computer program incorporates three models to account for the degradation in the compressive strength of concrete due to the presence of tensile straining of transverse reinforcement. The numerical solutions obtained for reinforced concrete panels under pure shear and beams in torsion and combined torsion and bending reveal that the inclusion of a model for reducing the compressive strength of cracked concrete can significantly improve the correlation of the predicted post-cracking stiffness and the computed ultimate loads with the experimental results. Parametric studies to investigate the effects of some important material and solution parameters have been carried out. It is concluded that, in the presence of a compression strength reduction model, the tension-stiffening parameters required for reinforced concrete members under torsion should be similar to those used for members in which bending dominates.
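The incremental-iterative solution strategy described above can be sketched generically as follows; the residual R(u) and tangent K(u) below are hypothetical one-degree-of-freedom stand-ins, not the thesis's reinforced concrete finite element model.

```python
# Illustrative sketch (assumed, not the thesis code): one load increment of a
# Newton-Raphson iteration with a simple backtracking line search, the kind of
# solution strategy the abstract describes. R(u) is a hypothetical residual
# (internal minus external forces) and K(u) its tangent stiffness.
import numpy as np

def newton_increment(u, R, K, tol=1e-8, max_iter=50):
    for _ in range(max_iter):
        r = R(u)
        if np.linalg.norm(r) < tol:
            break
        du = np.linalg.solve(K(u), -r)      # Newton direction
        s = 1.0                              # line search: shrink the step
        while np.linalg.norm(R(u + s * du)) > np.linalg.norm(r) and s > 1e-4:
            s *= 0.5
        u = u + s * du
    return u

# Toy 1-DOF example with hardening response R(u) = u + u**3 - 2 (load = 2).
u = newton_increment(np.array([0.0]),
                     R=lambda u: u + u**3 - 2.0,
                     K=lambda u: np.array([[1.0 + 3.0 * u[0]**2]]))
print(u)  # approx 1.0, the root of u + u^3 = 2
```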
APA, Harvard, Vancouver, ISO, and other styles
12

Stringham, Bryan Jay. "Non-Dimensional Modeling of the Effects of Weld Parameters on Peak Temperature and Cooling Rate in Friction Stir Welding." BYU ScholarsArchive, 2017. https://scholarsarchive.byu.edu/etd/6710.

Full text
Abstract:
Methods for predicting weld properties from welding parameters are needed in friction stir welding (FSW), a joining process in which the resulting properties depend on the thermal cycle of the weld. Buckingham's Pi theorem and heat transfer analysis were used to identify dimensionless parameters relevant to the FSW process. Experimental data from Al 7075 and HSLA-65 on five different backing plate materials, over a wide range of travel speeds and weld powers, were used to create a dimensionless, empirical model relating critical weld parameters to the peak temperature rise and cooling rate of the weld. The models created have R-squared values greater than 0.99 for both the dimensionless peak temperature rise and the cooling rate correlations. The model can be used to identify the weld parameters needed to produce a desired peak temperature rise or cooling rate, and to explore the relative effects of welding parameters on the weld thermal response.
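As a hedged illustration of how such a dimensionless empirical model is typically built (not the thesis's actual correlation or data): one posits a power-law relation between the Pi groups and fits its exponents by least squares in log space. The group names and synthetic data below are assumptions for demonstration.

```python
# Illustrative sketch: fitting Pi_T = C * Pi_1^a * Pi_2^b to synthetic data
# by linear least squares in log space, the usual way empirical dimensionless
# correlations are built. Pi_1, Pi_2, Pi_T are hypothetical groups.
import numpy as np

rng = np.random.default_rng(0)
pi1, pi2 = rng.uniform(1, 10, 50), rng.uniform(1, 10, 50)
pi_T = 2.0 * pi1**0.5 * pi2**-0.3 * rng.lognormal(0, 0.02, 50)  # synthetic

A = np.column_stack([np.ones(50), np.log(pi1), np.log(pi2)])
coef, *_ = np.linalg.lstsq(A, np.log(pi_T), rcond=None)
C, a, b = np.exp(coef[0]), coef[1], coef[2]
print(C, a, b)  # recovers approximately (2.0, 0.5, -0.3)
```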
APA, Harvard, Vancouver, ISO, and other styles
13

Moarii, Matahi. "Apprentissage de données génomiques multiples pour le diagnostic et le pronostic du cancer." Thesis, Paris, ENMP, 2015. http://www.theses.fr/2015ENMP0086/document.

Full text
Abstract:
Several initiatives have been launched recently to characterise, at the molecular level, large cohorts of human cancers with various high-throughput technologies, in the hope of understanding the major biological alterations involved in tumorigenesis. The measured data include gene expression, mutations, copy-number variations, and epigenetic signals such as DNA methylation. Large consortia such as The Cancer Genome Atlas (TCGA) have already gathered and made publicly available thousands of cancerous and non-cancerous samples. In this thesis we contribute to the statistical analysis of the relationships between the different biological sources and to the validation and/or large-scale generalisation of biological phenomena through an integrative analysis of genetic and epigenetic data. First, we show that DNA methylation is a useful surrogate marker of clonality between cells, which enables a clinical tool for breast cancer recurrences that is more precise and more stable than current tools, allowing better patient care. Second, we quantify statistically the impact of DNA methylation on transcription, highlighting the importance of incorporating biological prior knowledge to compensate for the small number of samples relative to the number of variables. Finally, we address a biological phenomenon related to the appearance of a hypermethylator phenotype in several cancers; for this, we adapt regression methods that exploit the similarity between prediction tasks to obtain more accurate common genetic signatures predictive of the phenotype. In conclusion, we demonstrate the importance of collaboration between biology and statistics to establish methods suited to current questions in bioinformatics.
APA, Harvard, Vancouver, ISO, and other styles
14

Alizad, Vida. "Effects of transcranial direct current stimulation on gait in people with and without Parkinson's disease." Thesis, Queensland University of Technology, 2019. https://eprints.qut.edu.au/129454/1/Vida_Alizad_Thesis.pdf.

Full text
Abstract:
This study examined the effects of transcranial direct current stimulation (tDCS) on gait in people with and without Parkinson's disease (PD). tDCS is a non-invasive brain stimulation technique that applies a weak direct current (1–2 mA) to the brain via electrodes placed on the scalp. The findings provide future direction, particularly regarding the configuration of tDCS for gait improvement in people with PD, and help move this emerging brain stimulation approach closer to clinical practice.
APA, Harvard, Vancouver, ISO, and other styles
15

Artaria, Andrea. "Objective Bayesian Analysis for Differential Gaussian Directed Acyclic Graphs." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2014. http://hdl.handle.net/10281/55327.

Full text
Abstract:
Often we are confronted with heterogeneous multivariate data, i.e., data coming from several categories, and the interest may center on the differential structure of stochastic dependence among the variables between the groups. This work focuses on the two-group problem, which is addressed by modeling the system through a couple of Gaussian directed acyclic graphs (DAGs), linked so as to obtain a joint estimation that exploits, whenever they exist, similarities between the graphs. The model can be viewed as a set of separate regressions, and the proposal consists in assigning a non-local prior to the regression coefficients with the objective of enforcing stronger sparsity constraints on model selection. Model selection is based on the Moment Fractional Bayes Factor and is performed through a stochastic search algorithm over the space of DAG models.
APA, Harvard, Vancouver, ISO, and other styles
16

Tagwireyi, Paradzayi. "Ant and spider dynamics in complex riverine landscapes of the Scioto River basin, Ohio: implications for riparian ecosystem structure and function." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1398983906.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Borra, Chaitanya. "Dynamics of Large Array Micro/Nano Resonators." University of Akron / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=akron1590758736333883.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Manchon, Xavier. "Contribution à la prédiction du déroulement de scénarios d'accidents graves dans un RNR-Na." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSEC047/document.

Full text
Abstract:
Severe accident modeling is required for the design and safety analysis of ASTRID, a Generation IV sodium-cooled fast reactor under development in France; such scenarios involve melting of the reactor core. This thesis contributes to identifying the processes capable of changing the course of a severe accident scenario, and two phases of a scenario are treated. First, a stability criterion developed during the thesis is used to analyse the beginning of an unprotected loss-of-flow accident: it predicts whether the decreasing flow stabilises or becomes unstable, leading to core degradation. Unlike earlier criteria, it accounts for the effect of power variations on flow stability during the loss of flow, and it is verified with a computational tool dedicated to unprotected loss-of-flow accidents. Second, the dominant processes involved in a transient with liquid fuel vaporisation followed by expansion of its vapor, after an accidental power excursion, are identified through dimensional analysis. Building on this analysis, a numerical tool is developed to compute the mechanical energy transmitted to the reactor vessel during the expansion; the heat transfer between the expanding fuel vapor and the sodium coolant is studied in particular. The tool is validated by comparison with experimental measurements and with computations from another code. Finally, parametric studies quantify the variability of the results induced by modeling choices and by uncertainties in the physical data employed.
APA, Harvard, Vancouver, ISO, and other styles
19

Yu, Zuwei. "Analysis of polymer flows in the three dimensional extrusion dies." Ohio : Ohio University, 1994. http://www.ohiolink.edu/etd/view.cgi?ohiou1179857806.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Costa, Miguel Henrique de Oliveira. "Modelagem do comportamento estrutural de sistemas treliçados espaciais para escoramentos de estruturas de aço, concreto e mistas (aço-concreto)." Universidade do Estado do Rio de Janeiro, 2012. http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=5580.

Full text
Abstract:
The use of lattice structures for the shoring of steel, composite, and reinforced concrete structures is considered an effective solution in civil engineering construction. A change of attitude in the construction process, associated with cost reduction, has caused a considerable increase in the use of three-dimensional lattice steel truss systems with greater load capacity. Unfortunately, the design of these structural systems is based on very simplified calculations related to one-dimensional beams with constant inertia properties. Such simplified modeling cannot adequately represent the actual response of the structural system and can lead to uneconomic or even unsafe structural design. On the other hand, these lattice steel structures correspond to three-dimensional models of complex geometry and are designed to support very high loading levels. Therefore, this research proposed finite element models that represent the actual three-dimensional character of the shoring system, evaluating the static and dynamic structural behavior with more reliability and safety. The proposed computational model, developed for nonlinear static and dynamic analysis of the structural system, adopted the usual mesh refinement techniques present in finite element method simulations, based on the ANSYS program. The study considered linear-elastic and geometrically nonlinear analyses for serviceability actions, and physically and geometrically nonlinear analyses for ultimate actions. The results were compared with those supplied by the traditional simplified calculation methodology and with the limits recommended by design standards.
APA, Harvard, Vancouver, ISO, and other styles
21

Pesquet-Popescu, Béatrice. "Modélisation bidimensionnelle de processus non stationnaires et application à l'étude du fond sous-marin." Cachan, Ecole normale supérieure, 1998. http://www.theses.fr/1998DENS0021.

Full text
Abstract:
In this thesis, we propose anisotropic generalisations of 2D fractal-type fields. First, we introduce 2D fields with stationary fractional increments and show that fractional Brownian motion belongs to this class of processes. The value of a multiresolution analysis of these fields is demonstrated theoretically and on an example of application to underwater localisation. For data modelling, the structure function provides an effective means of characterising textures with stationary increments. We emphasise the possibility of controlling the anisotropy of these fields through this function, for which we also propose several models. The structure function is also used for the interpolation of non-stationary fields with stationary increments. Another aspect of this work concerns two-dimensional extensions of fractional ARIMA processes and their links with the continuous fields presented. Finally, we consider non-Gaussian self-similar processes and study the statistics of their wavelet coefficients.
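For reference, the second-order structure function mentioned above has the following standard form (a textbook definition under the stationary-increments assumption, not an equation quoted from the thesis); letting the prefactor depend on direction is one way anisotropy can be encoded:

```latex
% Reference definition (assumed standard form, not quoted from the thesis):
\[
  D(\mathbf{h}) = \mathbb{E}\!\left[\bigl(X(\mathbf{u}+\mathbf{h}) - X(\mathbf{u})\bigr)^{2}\right],
\]
% which does not depend on u for a field with stationary increments; for an
% isotropic 2D fractional Brownian field with Hurst exponent H in (0,1),
\[
  D(\mathbf{h}) = \sigma^{2}\,\lVert\mathbf{h}\rVert^{2H},
\]
% and letting sigma^2 depend on the direction of h yields anisotropic variants.
```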
APA, Harvard, Vancouver, ISO, and other styles
22

Kong, Xiaoli. "High Dimensional Multivariate Inference Under General Conditions." UKnowledge, 2018. https://uknowledge.uky.edu/statistics_etds/33.

Full text
Abstract:
In this dissertation, we investigate four distinct and interrelated problems for high-dimensional inference of mean vectors in multiple groups. The first problem concerns the profile analysis of high-dimensional repeated measures. We introduce new test statistics and derive their asymptotic distributions under normality for both equal and unequal covariance cases. Our derivations of the asymptotic distributions mimic that of the Central Limit Theorem, with some important peculiarities addressed with sufficient rigor. We also derive consistent and unbiased estimators of the asymptotic variances for the equal and unequal covariance cases, respectively. The second problem considered is accurate inference for high-dimensional repeated measures in factorial designs, as well as any comparisons among the cell means. We derive an asymptotic expansion for the null distribution and the quantiles of a suitable test statistic under normality, along with a second-order consistent estimator of the parameters of the approximate distribution. The most important contribution is the high accuracy of the methods, in the sense that p-values are accurate up to second order in the sample size as well as in the dimension. The third problem pertains to high-dimensional inference under non-normality. We relax the commonly imposed dependence conditions, which have become a standard assumption in high-dimensional inference; with the relaxed conditions, the scope of applicability of the results broadens. The fourth problem pertains to a fully nonparametric rank-based comparison of high-dimensional populations. To develop the theory in this context, we prove a novel result for studying the asymptotic behavior of quadratic forms in ranks. Simulation studies provide evidence that our methods perform reasonably well in high-dimensional situations. Real data from an electroencephalography (EEG) study of alcoholic and control subjects are analyzed to illustrate the application of the results.
APA, Harvard, Vancouver, ISO, and other styles
23

Heitz, Jean-François. "Propagation d'ondes en milieu non linéaire : applications à la reconnaissance des sols et au génie parasismique." Grenoble 1, 1992. http://www.theses.fr/1992GRE10120.

Full text
Abstract:
Wave propagation in non-linear media is studied from theoretical, experimental, and numerical standpoints, with applications to earthquake engineering and to in-situ soil investigation. After a review of current knowledge on soil behaviour under dynamic loading, a non-linear viscoelastic constitutive model is introduced into the fundamental equation of dynamics. The resulting equation of soil motion exhibits, on its right-hand side, a source term containing all the characteristic terms associated with soil non-linearity. A frequency-domain analysis shows that the non-linear deviatoric behaviour of the soil modifies the spectral content of the wave during propagation, relative to the frequency content of the excitation. An iterative method is used to solve the equation of motion; at each iteration the solution is obtained explicitly through appropriate integral functional transforms. Two in-situ dynamic tests with harmonic excitation, one at the surface and one in a borehole, are interpreted on the basis of the preceding theoretical approach. The first test essentially demonstrated the non-linear behaviour of soil in situ under sinusoidal loading. For the second test, computations simulate the soil response at a distance from the borehole, and a method for identifying parameters characteristic of the non-linear soil behaviour is proposed. Another application of the proposed theoretical approach is the study of one-dimensional site effects characteristic of the non-linear behaviour of a soil layer resting on a half-space and subjected to a seismic-type transient excitation. An extension of the computation to a two-dimensional configuration is then proposed.
APA, Harvard, Vancouver, ISO, and other styles
24

Liley, Albert James. "Statistical co-analysis of high-dimensional association studies." Thesis, University of Cambridge, 2017. https://www.repository.cam.ac.uk/handle/1810/270628.

Full text
Abstract:
Modern medical practice and science involve complex phenotypic definitions. Understanding patterns of association across this range of phenotypes requires co-analysis of high-dimensional association studies in order to characterise shared and distinct elements. In this thesis I address several problems in this area, with the general linking aim of making more efficient use of available data. The main application of these methods is in the analysis of genome-wide association studies (GWAS) and similar studies. Firstly, I developed methodology for a Bayesian conditional false discovery rate (cFDR) for leveraging GWAS results using summary statistics from a related disease. I extended an existing method to enable a shared-control design, increasing power and applicability, and developed an approximate bound on the false discovery rate (FDR) for the procedure. Using the new method I identified several new variant-disease associations. I then developed a second application of the shared-control design in the context of study replication, enabling improved power at the cost of changing the spectrum of sensitivity to systematic errors in study cohorts; this has application in studies of rare diseases or in between-case analyses. I then developed a method for partially characterising heterogeneity within a disease by modelling the bivariate distribution of case-control and within-case effect sizes. Using an adaptation of a likelihood-ratio test, this allows an assessment of whether disease heterogeneity corresponds to differences in disease pathology. I applied this method to a range of simulated and real datasets, gaining insight into the cause of heterogeneity in autoantibody positivity in type 1 diabetes (T1D). Finally, I investigated the relation of subtypes of juvenile idiopathic arthritis (JIA) to adult diseases, using modified genetic risk scores and linear discriminants in a penalised regression framework. The contribution of this thesis is a range of methodological developments in the co-analysis of high-dimensional association studies. Methods such as these will have wide application in the analysis of GWAS and similar areas, particularly in the development of stratified medicine.
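For context on the cFDR idea, a minimal sketch of the commonly used empirical estimator is given below (on synthetic data); the thesis's Bayesian method, its FDR bound, and the shared-control extension are not reproduced here.

```python
# Minimal sketch (assumed standard form, not the thesis method): the empirical
# conditional FDR estimator,
#   cfdr(i) = p1[i] * #{j: p2[j] <= p2[i]} / #{j: p1[j] <= p1[i], p2[j] <= p2[i]},
# an approximation of Pr(H0 for trait 1 | P1 <= p1[i], P2 <= p2[i]).
import numpy as np

def empirical_cfdr(p1, p2):
    p1, p2 = np.asarray(p1), np.asarray(p2)
    out = np.empty(len(p1))
    for i in range(len(p1)):
        cond = p2 <= p2[i]
        joint = cond & (p1 <= p1[i])
        out[i] = p1[i] * cond.sum() / max(joint.sum(), 1)
    return np.minimum(out, 1.0)

rng = np.random.default_rng(1)
p1, p2 = rng.uniform(size=2000), rng.uniform(size=2000)
p1[:20] /= 1000; p2[:20] /= 1000      # synthetic shared associations
print(empirical_cfdr(p1, p2)[:3])     # small values at the shared signals
```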
APA, Harvard, Vancouver, ISO, and other styles
25

Link, David John. "Bootstrap Enhanced N-Dimensional Deformation of Space with Acoustic Resonance Spectroscopy." UKnowledge, 2009. http://uknowledge.uky.edu/gradschool_diss/728.

Full text
Abstract:
Acoustic methods can often be used with limited or no sample preparation, making them ideal for rapid process analytical technologies (PATs). This dissertation focuses on the possible use of acoustic resonance spectroscopy (ARS) as a PAT in the pharmaceutical industry. Current good manufacturing practice (cGMP) needs new technologies that can perform quality-assurance testing on all products. ARS is a rapid, non-destructive method that has been used for qualitative studies, but it has a major drawback for quantitative studies: acoustic methods produce highly nonlinear correlations, which usually demand high-level computation and chemometrics. Quantification studies, covering powder contamination levels, hydration amounts, and active pharmaceutical ingredient (API) concentrations, were used to test the hypothesis that bootstrap enhanced n-dimensional deformation of space (BENDS) can overcome the highly nonlinear correlations that occur with ARS, eliminating this drawback and further promoting the device as a possible PAT in the pharmaceutical industry. BENDS is an algorithm created to calculate a reduced linear calibration model from the highly nonlinear relationships in ARS spectra. ARS has been shown to correctly identify pharmaceutical tablets and, with the incorporation of BENDS, to determine the hydration amount of aspirin tablets, the D-galactose contamination levels of D-tagatose powders, and the D-tagatose concentrations in resveratrol/D-tagatose combinatory tablets.
APA, Harvard, Vancouver, ISO, and other styles
26

Lee, Paul Chong Chan. "A Qualitative and Quantitative Analysis of Soft Tissue Change Evaluation by Orthodontists in Class II Non-Extraction Orthodontic Treatment Using the 3dMD System." Master's thesis, Temple University Libraries, 2013. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/217032.

Full text
Abstract:
Oral Biology
M.S.
With the advent of cephalometrics in the 1930s, numerous studies have focused on the profile of the face to achieve a more esthetic orthodontic treatment outcome. With such heavy emphasis on facial esthetics, a shift in focus from the profile view to the oblique view has become necessary, as the smile in the oblique view is what the general public evaluates. The purpose of this pilot study was to determine whether the current tools for diagnosis and treatment evaluation are sufficient. Currently, 2-dimensional composite photographs are utilized in evaluating the soft tissue. At Temple University, 3-dimensional images, which show all sides of the patient's face, are used adjunctively to 2-dimensional composite photographs. In this study, faculty members at the Temple University Department of Orthodontics were asked to complete surveys after viewing two different image modalities of the same patient: 2-dimensional images and a 3-dimensional video. They were asked to fill out the soft tissue goals for specific facial landmarks. Patient photos were in the smiling view, as the current literature lacks studies on this view. Faculty members' responses from analyzing the 2-dimensional images and the 3-dimensional video for each patient were compared to determine which areas showed frequent discrepancies between the two image modalities. During the survey, a voice recorder captured any comments regarding the images. The ultimate goal of this qualitative pilot study was to identify when 3-dimensional imaging is necessary in treatment planning and evaluation, with the added hope of further advancing research in 3-dimensional imaging and its vast possibilities for the field of orthodontics. Based on the data collected, the following conclusions were made: 1. The qualitative data highlighted that 3-dimensional imaging would be necessary in cases with skeletal deformities. 2. In the oblique view, 3-dimensional imaging is superior to 2-dimensional imaging in showing more accurate shadow, contour, and depth of the soft tissue. 3. Further improvement is necessary to create a virtual patient with treatment simulation abilities. 4. The comfort level among orthodontists was higher with 2-dimensional imaging than with 3-dimensional imaging; with more widespread use, more orthodontists may gradually reach a higher comfort level with this relatively new technology. 5. Faculty members expressed high willingness to use 3-dimensional imaging if improvements in the technology allowed more manipulation and accurate soft tissue prediction. 6. 3-dimensional imaging is superior in its efficiency, quick capture time, and lack of need for multiple images; its implementation could streamline the records process and improve practice efficiency without compromising image quality. 7. Both patients and orthodontists may benefit from 3-dimensional imaging: patients can see an accurate representation of themselves and, with further improvement of the technology, possibly view their own treatment simulation, while orthodontists gain much more accurate images that may serve as a virtual patient. 8. Besides the exorbitantly high cost, faculty members thought that more advances were needed and that the current benefit was not great enough to justify the investment. The results were consistent with other studies that used the oblique view, in that the 2-dimensional oblique view lacks depth and does not provide adequate information.
With further improvement in current 3-dimensional imaging, this technology can help orthodontists visualize their patients. In addition, patients may one day see a live and accurate simulation of themselves instantly as a virtual patient. With these benefits, 3-dimensional imaging may become the new standard for patient records in the field of orthodontics.
Temple University--Theses
APA, Harvard, Vancouver, ISO, and other styles
27

Sommer, Oliver. "Ein Beitrag zur Untersuchung des Verhaltens dünner Flüssigkeitsfilme nahe gekrümmten Substratoberflächen." Doctoral thesis, Universitätsbibliothek Chemnitz, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-154946.

Full text
Abstract:
In this study the behaviour of a thin liquid film near a curved substrate edge was examined through experimental coating investigations based on the non-invasive laser-induced fluorescence technique and through numerical film simulations based on the Volume-of-Fluid multiphase flow model. The main motivation was to find optimal combinations of influencing quantities to reduce the fat-edge effect. A parameter study was therefore performed in which application parameters, such as the edge radius of curvature and the applied layer thickness, as well as liquid properties, such as viscosity and surface tension, were varied. The results are described qualitatively through the resulting fat-edge shapes and quantified by suitable fat-edge parameters, which had to be identified and selected. It could be shown that both adverse and favourable parameter combinations exist, generating pronounced and barely noticeable fat edges respectively, especially in the laboratory experiments. Concrete hypotheses about the underlying flow mechanisms were formulated, in part to make the observed proportionalities between the fat-edge dimensions and the influencing quantities physically plausible, and an order of significance of the influencing quantities was established. Finally, a suitable dimensionless quantity was derived by dimensional analysis to describe the fat-edge effect across parameters, so that the fat-edge effect can also be estimated to some extent by means of similarity theory.
APA, Harvard, Vancouver, ISO, and other styles
28

Morlot, Jean-Baptiste. "Annotation of the human genome through the unsupervised analysis of high-dimensional genomic data." Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066641/document.

Full text
Abstract:
The human body has more than 200 different cell types, each containing an identical copy of the genome but expressing a different set of genes. The control of gene expression is ensured by a set of regulatory mechanisms acting at different scales of time and space. Several diseases are caused by a disturbance of this system, notably some cancers, and many therapeutic applications, such as regenerative medicine, rely on understanding the mechanisms of gene regulation. This thesis proposes, in a first part, an annotation algorithm (GABI) to identify recurrent patterns in high-throughput sequencing data. The particularity of this algorithm is that it takes into account the variability observed across experimental replicates by optimizing the false positive and false negative rates, significantly increasing the annotation reliability compared to the state of the art. The annotation provides simplified and robust information from a large dataset. Applied to a database of regulator activity in hematopoiesis, we report original results in agreement with previous studies. The second part of this work focuses on the 3D organization of the genome, which is intimately linked to gene expression and is now accessible thanks to 3D reconstruction algorithms applied to contact data between chromosomes. We propose improvements to ShRec3D, currently the most efficient algorithm in the field, allowing the reconstruction to be adjusted to the user's needs.
APA, Harvard, Vancouver, ISO, and other styles
29

Nassiri, Esmail. "Modelling nonlinear behaviour of two-dimensional steel structures subjected to cyclic loading." Thesis, Queensland University of Technology, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
30

Bussy, Simon. "Introduction of high-dimensional interpretable machine learning models and their applications." Thesis, Sorbonne université, 2019. http://www.theses.fr/2019SORUS488.

Full text
Abstract:
This dissertation introduces new interpretable machine learning methods in a high-dimensional setting. We first developed the C-mix, a mixture model of censored durations that automatically detects subgroups based on the risk that the event under study occurs early; then the binarsity penalty, combining a weighted total-variation penalty with a per-block linear constraint, which applies to the one-hot encoding of continuous features; and finally the binacox model, which uses the binarsity penalty within a Cox model to automatically detect cut-points in continuous features. For each method, theoretical properties are established, such as algorithm convergence and non-asymptotic oracle inequalities, and comparison studies with state-of-the-art methods are carried out on both simulated and real data. All proposed methods give good results in terms of prediction performance and computing time, and each offers interesting interpretability properties.
APA, Harvard, Vancouver, ISO, and other styles
31

Ayvazyan, Vigen. "Etude de champs de température séparables avec une double décomposition en valeurs singulières : quelques applications à la caractérisation des propriétés thermophysiques des matériaux et au contrôle non destructif." Thesis, Bordeaux 1, 2012. http://www.theses.fr/2012BOR14671/document.

Full text
Abstract:
Infrared thermography is a widely used method for the characterization of the thermophysical properties of materials. The advent of laser diodes, which are handy, inexpensive, and available with a broad spectrum of characteristics, extends the metrological possibilities of infrared cameras and provides a combination of new, powerful tools for thermal characterization and non-destructive evaluation. However, this new dynamic has also brought numerous difficulties that must be overcome, such as the processing of large volumes of noisy data and the low sensitivity of such data to the estimated parameters. This requires revisiting the existing methods of signal processing and adopting sophisticated mathematical tools for data compression and the processing of relevant information. The new strategies consist in using orthogonal transforms of the signal as prior data-compression tools, which allow the measurement noise to be reduced and controlled. Sensitivity analysis, based on the local study of correlations between the partial derivatives of the experimental signal, completes these new strategies. A theoretical analogy in Fourier space was developed in order to better understand the "physical" meaning of modal approaches. The response to an instantaneous point source of heat was revisited both numerically and experimentally. By exploiting the separability of temperature fields, a new inversion technique based on a double singular value decomposition of the experimental signal was introduced. In comparison with previous methods, it takes two- or three-dimensional heat diffusion into account and therefore offers a better exploitation of the spatial content of infrared images. Numerical and experimental examples allowed us to validate, in a first approach, this new estimation method for the characterization of longitudinal thermal diffusivities. Non-destructive testing applications based on the new technique are also introduced. An old issue, determining the initial temperature field from noisy data, was approached in a new light; it is complicated by the need to know the thermal diffusivities of the orthotropic material and to account for heat transfer that is often three-dimensional. The implementation of the double singular value decomposition achieved interesting results given the simplicity of the method. Indeed, modal approaches are statistical methods based on the processing of large volumes of data and are therefore supposedly more robust to measurement noise, as could be observed.
APA, Harvard, Vancouver, ISO, and other styles
32

Honnouvo, Gilbert. "Gabor analysis and wavelet-like transforms on some non-Euclidean 2-dimensional manifolds." Thesis, 2007. http://spectrum.library.concordia.ca/975285/1/NR30119.pdf.

Full text
Abstract:
Many problems in physics require the crafting of suitable tools for the analysis of data emanating from various non-Euclidean manifolds. The main tools, currently employed for this purpose, are Gabor type frames or general frames, and wavelets. Given this backdrop, the primary objective of this thesis is the development of wavelet-like and time frequency type transforms on certain non-Euclidean manifolds. An immediate example of such a manifold (in the sense that it is homeomorphic to several other two-dimensional manifolds of revolution) is the two-dimensional infinite cylinder, for which we construct here Gabor type frames and wavelets. The two-dimensional cylinder, as a surface of revolution, is naturally homeomorphic to several other two-dimensional manifolds (themselves also surfaces of revolution). Examples are the one-sheeted hyperboloid, the paraboloid with its apex removed, the sphere with two points removed, the ellipsoid with two points removed, the plane with the origin removed, the upper sheet, of the two sheeted hyperboloid, with one point removed, and so on. Using this fact, in this thesis we build Gabor type frames and wavelets on these manifolds. We also present a method for constructing wavelet-like transforms on a large class of such surfaces of revolution using a group theoretic approach. Finally, as a beginning to a related but different sort of study, we construct some localization operators associated to group representations, using symbols (in the sense of pseudo-differential operators) which are operator valued functions on the group.
APA, Harvard, Vancouver, ISO, and other styles
33

Liao, Xiaoyun. "Dimensional variation analysis and optimal process design for non-rigid sheet metal assemblies." 2006. http://hdl.handle.net/1993/20851.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

HUANG, JUN-QIN, and 黃俊欽. "Three dimensional finite element method applied in flow analysis of non-newtonian fluids." Thesis, 1992. http://ndltd.ncl.edu.tw/handle/33334679843475303303.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Singh, Jagdish Pratap. "Non-Dimensional Kinetoelastic Maps for Nonlinear Behavior of Compliant Suspensions." Thesis, 2014. http://hdl.handle.net/2005/3137.

Full text
Abstract:
Compliant suspensions are often used in micromechanical devices and precision mechanisms as substitutes for kinematic joints. While their small-displacement behavior is easily captured in simple formulae, large-displacement behavior requires nonlinear finite element analysis. In this work, we present a method that helps capture the geometrically nonlinear behavior of compliant suspensions using parameterized non-dimensional maps. The maps are created by performing one nonlinear finite element analysis for any one loading condition for one instance of a suspension of a given topology and fixed proportions. These maps help retrieve behavioral information for any other instance of the same suspension with changed size, cross-section dimensions, material, and loading. Such quantities as multi-axial stiffness, maximum stress, natural frequency, etc. ,can be quickly and accurately estimated from the maps. These quantities are non-dimensionalized using suitable factors that include loading, size, cross-section, and material properties. The maps are useful in not only understanding the limits of performance of the topology of a given suspension with fixed proportions but also in design. We have created the maps for 20 different suspensions. Case studies are included to illustrate the effectiveness of the method in microsystem design as well as in precision mechanisms. In particular, the method and 2D plots of non-dimensional kinetoelastic maps provide a comprehensive view of sensitivity, cross-axis sensitivity, linearity, maximum stress, and bandwidth for microsensors and microactuators.
APA, Harvard, Vancouver, ISO, and other styles
36

Narasimhan, S. "Three Dimensional Viscoplastic And Geomertrically Non-Linear Finite Element Analysis Of Adhesively Bonded Joints." Thesis, 1998. http://etd.iisc.ernet.in/handle/2005/2166.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Chen, Hsin-Cyuan, and 陳信銓. "Error Analysis of Non-contact Three-dimensional Object Scanning using Smartphone and Line Laser." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/37775263324747274702.

Full text
Abstract:
碩士
國立高雄應用科技大學
模具工程系碩士班
101
The purpose of this study is to design and apply computer vision methods to acquire the spatial three-dimensional coordinate values of objects in space. The first step is to find the relative relationship between CCD, line laser and calibration pattern, respectively. By using the designed calibration pattern, the extrinsic and intrinsic parameters of camera can be obtained. Then, the measurement results can be calculated by using the relation of parametric equations. Calculated results will be compared with the actual physical measuring results to verify the feasibility of this method and to explore its further development direction. This study is divided into three parts. The first part is to find the extrinsic and intrinsic parameters of camera which involve rotation, translation, scale factor, focal length and image center of the camera-laser configuration. The second part is to extract the lens distortion. The distortion of calibration plays an interesting role because CCD usually has manufacturing inaccuracy and assembled error. In order to reduce the measurement error, the lens distortion error has to be corrected. The final part is to obtain the point on the intersection of the projection line and the laser plane, which is the spatial coordinate value of object. The experiments have been carried out for different environment conditions, one is to reduce environment light and the other is to increase the object reflection light. Based on the experimental results, to increase the object reflection light significantly outperformed the other environment conditions with respect to the image process. In this experiment, gauge block (21mm, 23mm, 25mm, 50mm and 100mm) were selected to be as benchmark object. The results show that the average error on length is 0.38mm in the X-axis direction, 0.37mm in the Y-axis direction and 0.35mm in the Z-axis direction. Toe lasts were also used as object to be tested. The results showed that the maximum error is 0.33% in EUR36 while 0.218% in EUR37.
APA, Harvard, Vancouver, ISO, and other styles
38

Ameur, Makrem. "Three-dimensional model of non-load bearing LSF walls under fire." Master's thesis, 2019. http://hdl.handle.net/10198/23286.

Full text
Abstract:
Mestrado de dupla diplomação com a Université Libre de Tunis
The present work presents numerical study with the aim of analysing the fire performance on LSF non load bearing walls. Numerical validation of the full-scale fire test developed by Anthony Deloge Ariyanayagam, Mahen Mahendran [1] was developed using transient thermal analysis, assuming perfect contact between different materials to determine the fire insolation criteria (I). The insulation criterion is defined by the average temperature or by the maximum temperature determined on the unexposed side of the wall. Two extra 3D numerical analysis were developed with the objective of understanding the thermal effect of the cavity size and the number of protection layers. Two different types of errors were used to compare the numerical and experimental results. The absolute relative error has been applied to compare the fire resistance time obtained by the numerical simulation and the fire test. The Root mean square (RMS) was used to compare the time history temperature error, determined on different locations of the wall section on specific points.
O presente trabalho apresenta um estudo numérico com o objetivo de analisar o desempenho ao fogo em paredes não estruturais fabricadas em aço enformado a frio LSF. Será apresentada a validação numérica do ensaio experimental de resistência ao fogo, de um modelo em grande escala, desenvolvido por Anthony Deloge Ariyanayagam, Mahen Mahendran [15]. Este objetivo foi alcançado usando uma análise térmica transitória, assumindo contato perfeito entre diferentes materiais. Foi assim possível aplicar o critério de isulamento de fogo (I), determinada pela temperatura média ou pela temperatura máxima determinada do lado não exposto. Duas simulações numéricas 3D adicionais foram desenvolvidas com o objetivo de se conhecer a influência térmica da espessura da cavidade e a influência do número de camadas de proteção. A comparação entre os resultados numéricos e experimentais foi realizada com dois métodos. O erro relativo absoluto foi utilizado para comparar o tempo de resistência ao fogo obtido pela simulação numérica e o ensaio experimental. O erro quadrático médio (RMS) foi usado para comparar a evolução da temperatura em diferentes locais da secção da parede para determinados instantes de tempo.
APA, Harvard, Vancouver, ISO, and other styles
39

覃照基. "The establishment and analysis of the three dimensional model of aseismatic non-ballast track system." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/08323167545738933293.

Full text
Abstract:
碩士
國立彰化師範大學
機電工程學系
100
Due to the highly development of technology, the track industry is incessant progression. In railway transportation, the structure of track system is gradually changed from traditional ballast track to non-ballast track. This is due to that the maintenance work of traditional ballast track is dangerous and time consuming. In addition, due to the shortage of ballast resource, the traditional ballast track then has to be replaced. Since the structure non-ballast track is rigid, durable and free-maintenance, it becomes the mainstream of research in track structure engineering in the world; it is also the main choice of newly risen railway architecture in our country. Hence, a fully understanding of the structure behavior of non-ballast track is necessary for designing the track system. The computer aided design is first to be applied to construct two types of 3D aseismatic non-ballast track model. Then, the FEM is used to accomplish the static and dynamic analysis of the system. The result of this study indicates that the structural stability of the non-ballast track system increases with the increase of the Young’s modulus of the sleeper. However, the restriction is that the bending stress of the sleeper has to be under its limit. The result also shows that the dynamic displacement and acceleration of the non-ballast track system decrease with the increase of the area of the elastomer. In order to check the accuracy of the proposed model developed in this study, the result is compared with the references and the Track Engineering Standard of R.R.B.
APA, Harvard, Vancouver, ISO, and other styles
40

Huang, Yi-Jin, and 黃奕縉. "Analysis of Two-dimensional Potential Flows with Non-Smooth Boundaries by the Method of Fundamental Solutions." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/82943094330992174358.

Full text
Abstract:
碩士
國立臺灣大學
土木工程學研究所
101
The present thesis contributes to discuss the feasibility and properties of different forms of the basis functions of the method of fundamental solutions in potential problems. Formerly, the fundamental solutions of the Laplace equation derived from the Green’s function in the polar coordinate system consider only the function of radius. In order to preserve the completeness of the solution, the complex variable theory is employed to reconsider the fundamental solution. The real part of the complex analytic function is a function of radius and the imaginary part is the function of argument. We assume that the imaginary part to be the angular basis of the Laplace equation. For the properties of the angular basis function, we promote a nodes distribution way corresponding to the angular basis function. In a series of the approximation of the potential problems, the radial basis and the angular basis functions are compared in the same computational domains from the circular to cusp domains and thin-boundary geometries with different types of boundary conditions. From these numerical experiments, the angular basis function is found to be favorable of simulating the domains with acute and narrow regions. Furthermore, the basic aerodynamic problems of airfoils are also discussed in the present study through the potential problems. Numerical results in this thesis are compared favorably with the exact solutions. The results demonstrate the rationality and the feasibility of our assumptions and numerical model.
APA, Harvard, Vancouver, ISO, and other styles
41

Cheng, Sheng-Yin, and 鄭盛尹. "Two-dimensional analysis of non-orthogonal stagnation point flow over a moving plate with constant velocity." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/nx7x9r.

Full text
Abstract:
碩士
國立臺灣大學
應用力學研究所
107
When fluid flows over an object, it will creates friction, resistance and lift on the object. In addition to the shape of the object, different fluid speed and temperature must have different effect on the problem. Many things in our life can be viewed as this problem. So it is worth to study. The predecessors studied the fluid impinges vertically on a infinitely extending stationary plate, impinges vertically on a plate with constant velocity, and impinges obliquely on a stationary plate. However, no one studied the fluid obliquely impinges on a plate with constant velocity. Therefore, this paper refers to the predecessors, the two-dimensional analysis of non-orthogonal stagnation point flow over a moving plate with constant velocity is studied with the similar solution technique. The hypothesis is that the fluid is a steady-state and incompressible. The differential equation is derived from the Navier-Stokes equation. The boundary conditions of the boundary value problem are obtained by the setting of the stream function and the condition of the plate. However, the differential equation is nonlinear and must be solved by numerical method. In this paper, the boundary value problem solver bvp4c is used to solve the boundary value problem. In addition to the numerical solution solved by bvp4c, the paper also finds the function of the approximate solution. Chapter four shows the velocity distribution of different positions in the flow field by the approximate solution and discusses the influence of the angle and the plate speed on the velocity in the flow field. The result shows that if the angle is small, the change of velocity in the flow field will be small. Later, the stagnation point formula is found under the condition that the shear stress at the plate is zero. The stagnation point will shift to right as the plate speed increases. If the angle is small, the shift will be faster. Finally, we plot the streamline to understand the flow field under different angle and plate speed. If the speed of the plate is zero, the streamline of Ψ=0 will intersect the plate at the position of the stagnation point. However, if the plate speed is not zero, the streamline of Ψ=0 will shift to left and will not intersect the plate. Keywords: similarity solution, non-orthogonal stagnation point flow, moving plate, boundary value problem, numerical method
APA, Harvard, Vancouver, ISO, and other styles
42

Mayrink, Vinicius Diniz. "Factor Models to Describe Linear and Non-linear Structure in High Dimensional Gene Expression Data." Diss., 2011. http://hdl.handle.net/10161/3865.

Full text
Abstract:

An important problem in the analysis of gene expression data is the identification of groups of features that are coherently expressed. For example, one often wishes to know whether a group of genes, clustered because of correlation in one data set, is still highly co-expressed in another data set. For some microarray platforms there are many, relatively short, probes for each gene of interest. In this case, it is possible that a given probe is not measuring its targeted transcript, but rather a different gene with a similar region (called cross-hybridization). Similarly, the incorrect mapping of short nucleotide sequences to a target gene is a common issue related to the young technology producing RNA-Seq data. The expression pattern across samples is a valuable source of information, which can be used to address distinct problems through the application of factor models. Our first study is focused on the identification of the presence/absence status of a gene in a sample. We compare our factor model to state-of-the-art detection methods; the results suggest superior performance of the factor analysis for detecting transcripts. In the second study, we apply factor models to investigate gene modules (groups of coherently expressed genes). Variation in the number of copies of regions of the genome is a well known and important feature of most cancers. Copy number alteration is detected for a group of genes in breast cancer; our goal is to examine this abnormality in the same chromosomal region for other types of tumors (Ovarian, Lung and Brain). In the third application, the expression pattern related to RNA-Seq count data is evaluated through a factor model based on the Poisson distribution. Here, the presence/absence of coherent patterns is closely associated with the number of incorrect read mappings. The final study of this dissertation is dedicated to the analysis of multi-factor models with linear and non-linear structure of interactions between latent factors. The interaction terms can have important implications in the model; they represent relationships between genes which cannot be captured in an ordinary analysis.


Dissertation
APA, Harvard, Vancouver, ISO, and other styles
43

Cui, Kai. "Bayesian Modeling and Computation for Mixed Data." Diss., 2012. http://hdl.handle.net/10161/6168.

Full text
Abstract:

Multivariate or high-dimensional data with mixed types are ubiquitous in many fields of studies, including science, engineering, social science, finance, health and medicine, and joint analysis of such data entails both statistical models flexible enough to accommodate them and novel methodologies for computationally efficient inference. Such joint analysis is potentially advantageous in many statistical and practical aspects, including shared information, dimensional reduction, efficiency gains, increased power and better control of error rates.

This thesis mainly focuses on two types of mixed data: (i) mixed discrete and continuous outcomes, especially in a dynamic setting; and (ii) multivariate or high dimensional continuous data with potential non-normality, where each dimension may have different degrees of skewness and tail-behaviors. Flexible Bayesian models are developed to jointly model these types of data, with a particular interest in exploring and utilizing the factor models framework. Much emphasis has also been placed on the ability to scale the statistical approaches and computation efficiently up to problems with long mixed time series or increasingly high-dimensional heavy-tailed and skewed data.

To this end, in Chapter 1, we start with reviewing the mixed data challenges. We start developing generalized dynamic factor models for mixed-measurement time series in Chapter 2. The framework allows mixed scale measurements in different time series, with the different measurements having distributions in the exponential family conditional on time-specific dynamic latent factors. Efficient computational algorithms for Bayesian inference are developed that can be easily extended to long time series. Chapter 3 focuses on the problem of jointly modeling of high-dimensional data with potential non-normality, where the mixed skewness and/or tail-behaviors in different dimensions are accurately captured via the proposed heavy-tailed and skewed factor models. Chapter 4 further explores the properties and efficient Bayesian inference for the generalized semiparametric Gaussian variance-mean mixtures family, and introduce it as a potentially useful family for modeling multivariate heavy-tailed and skewed data.


Dissertation
APA, Harvard, Vancouver, ISO, and other styles
44

Latini, Marco. "Simulations and analysis of two- and three-dimensional single-mode Richtmyer-Meshkov instability using weighted essentially non-oscillatory and vortex methods." Thesis, 2007. https://thesis.library.caltech.edu/4868/1/marco-thesis.pdf.

Full text
Abstract:
An incompressible vorticity-streamfunction (VS) method is developed to investigate the single-mode Richtmyer-Meshkov instability in two and three dimensions. The initial vortex sheet (representing the initial shocked interface) is thickened to regularize the limit of classical Lagrangian vortex methods. In the limit of smaller thickness, the initial velocity converges to the velocity of a vortex sheet. The vorticity on the Cartesian grid follows the vorticity evolution equation augmented by the baroclinic vorticity production term (to capture the effects of the instability on the layer) and a viscous dissipation term. The equations are discretized using a fourth-order in space and third-order in time semi-implicit Adams-Bashforth backward differentiation scheme. The convergence properties of the method with respect to varying the diffuse interface thickness and viscosity are investigated. It is shown that the small-scale structures within the roll-up are more sensitive to the diffuse interface thickness than to the viscosity. By contrast, the large-scale quantities, including the perturbation, bubble, and spike amplitudes are less sensitive. Fourth-order point-wise convergence is achieved, provided that a sufficiently fine grid is used. In two dimensions, the VS method is applied to investigate late-time nonlinear effects of the single-mode Mach 1.3 air(acetone)/SF_6 shock tube experiment of Jacobs and Krivets. The results are also compared to those from compressible ninth-order weighted essentially non-oscillatory (WENO) simulations. The density fields from the WENO and VS methods agree with the experimental PLIF images in the large-scale structures but differ in the small-scale structures. The WENO method exhibits small-scale disordered structure similar to that in the experiment, while the VS method does not capture such structure, but shows a strong rotating core. The perturbation amplitudes from the two methods are in good agreement and match the experimental data points well. The WENO bubble amplitude is smaller than the VS amplitude and vice versa for the spike amplitude. Comparing amplitudes from simulations with varying Mach number shows that as the Mach number increases, the differences in the bubble and spike amplitudes increase due to intensifying pressure perturbations not present in the incompressible VS method. The perturbation amplitude from the WENO and VS methods is also compared to the predictions of nonlinear amplitude growth models in which the growth rate was reduced to account for the diffuse initial interface. In general, the model predictions agree with the simulation amplitudes at early-to-intermediate times and underpredict at later times, corresponding to the late nonlinear regime. The WENO simulation is used to investigate reshock, which occurs when the transmitted shock reflects from the end wall of the test section and interacts with the evolving layer. The post-reshock mixing layer width agrees well with the predictions of reshock models for short times until the interaction of the reflected rarefaction with the layer. The VS simulation was also compared to classical Lagrangian and vortex-in-cell simulations as the Atwood number was varied. For low Atwood numbers, all three simulations agree. As the Atwood number increases, the VS simulation shows differences in the bubble and spike amplitudes compared to the Lagrangian and VIC simulations, as the baroclinic vorticity production for a diffuse layer is different from that of a thin layer. 
The simulation amplitudes agree with the predictions of nonlinear amplitude growth models at early times. The growth models underpredict the amplitudes at later times. The investigation is extended to three dimensions, where the initial perturbation is a product of sinusoids and the initial vorticity deposition is given by linear instability analysis. The instability evolution and dynamics of vorticity are visualized using the mass fraction and enstrophy isosurface, respectively. For the WENO and VS methods, two roll-ups corresponding to the bubble and spike regions form, and the vorticity shows the formation of a ring-like structure. The perturbation amplitudes from the WENO and VS methods are in excellent agreement. The bubble and spike amplitude are in good agreement at early times. At later times, the WENO bubble amplitude is smaller than the VS amplitude and vice versa for the spike. The nonlinear three-dimensional Zhang-Sohn model agrees with the simulation amplitudes at early times, and underpredicts later. In three dimensions, the enstrophy iso-surface after reshock shows significant fragmentation and the formation of small, short, tubular structures. Simulations with different initial amplitudes show that the mixing layer width after reshock does not depend on the pre-shock amplitude. Finally, the effects of Atwood number are investigated using the VS method and the amplitudes are compared to the predictions of the Zhang-Sohn model. The simulation and the models are in agreement at early times, while the models underpredict later. The VS method constitutes a useful numerical approach to investigate the Richtmyer-Meshkov instability in two and three dimensions. The VS method and, more generally, vortex methods are valid tools for predicting the large-scale instability features, including the perturbation amplitudes, into the late nonlinear regime.
APA, Harvard, Vancouver, ISO, and other styles
45

MARMONTI, ENRICO. "“Thermodynamic Analysis and Simulation of an Interactive Façade -Studio del comportamento termofisico di una facciata in doppia pelle di vetro integrata ad un sistema impiantistico HVAC”." Doctoral thesis, 2015. http://hdl.handle.net/2158/1005444.

Full text
Abstract:
Obiettivo della tesi è lo studio delle prestazioni energetiche di una facciata a doppia pelle di vetro (DSF) interattiva e i vantaggi derivanti dalla integrazione tra il sistema edilizio e l’impianto di condizionamento ad aria (HVAC) dell’edificio. Questa tipologia di facciata ha permesso il controllo delle condizioni interne della cavità e la creazione di un buffer termico che isola, frapponendosi tra l’ambiente interno e le sollecitazioni climatiche esterne: una ventilazione meccanica controllata della facciata può rendere questo sistema un vero e proprio componente impiantistico, diffuso su tutto l’involucro edilizio che interagisce in modo dinamico con il sistema edificio-impianto, oltre ad essere fonte di recuperi energetici tra l’aria estratta dalla facciata e quella esterna di rinnovo, per un preriscaldamento di quest’ultima. Infine, l'applicazione dell’analisi dimensionale (utilizzando il teorema di Buckingham) ha permesso di determinare correlazioni tra numeri puri e di testare l’applicabilità e la robustezza del metodo proposto, determinando un modello fisico ed un utile strumento per valutare le prestazioni del sistema costruttivo sia in fase progettuale, al variare di semplici parametri fisici o progettuali, sia integrata in modelli di analisi dinamica del sistema edificioimpianto per determinare l’apporto energetico o le interazioni con il sistema edificio-impianto. Double Skin Façades are widely popular in modern architecture to inspire the collective imagination more contact with the surrounding environment: natural or forced ventilation techniques provide high performance and make it able to interact with outdoor climatic stress. The aim of thesis is to evaluate the energy performance of DSF interactive systems and the benefits provided by the integration between building glazed envelope and HVAC system. The state of art in the scientific literature allowed us to evaluate and compare different physical assumptions and CFD modeling techniques proposed: data from in-situ monitoring campaign into a office building in Eindhoven (the Netherland) allowed a verification of the physical assumptions. Then, CFD analysis was extended to different DSF system solution in order to optimize design criteria. The proposed façade guarantee a control of indoor conditions, creating a thermal buffer interposed between indoor and outdoor climatic stresses. A controlled mechanical ventilation make this facade a part of the HVAC plant system, widely diffused in the building envelope, that provides a dynamic interaction with the building-plant system: in addition it become a source of energy recovery between the exhausted and fresh air. Finally, the application of non-dimensional analysis (Buckingham theory) test the applicability and robustness of this method providing a physical model able to evaluate their energy performance both in the design phase (to evaluate their variation changing simple physical parameters) or in dynamic simulation to determine their energy intake or interactions when the facade is integrated in the building-plant systems.
APA, Harvard, Vancouver, ISO, and other styles
46

Burela, Ramesh Gupta. "Asymptotically Correct Dimensional Reduction of Nonlinear Material Models." Thesis, 2011. http://etd.iisc.ernet.in/2005/3909.

Full text
Abstract:
This work aims at dimensional reduction of nonlinear material models in an asymptotically accurate manner. The three-dimensional(3-D) nonlinear material models considered include isotropic, orthotropic and dielectric compressible hyperelastic material models. Hyperelastic materials have potential applications in space-based inflatable structures, pneumatic membranes, replacements for soft biological tissues, prosthetic devices, compliant robots, high-altitude airships and artificial blood pumps, to name a few. Such structures have special engineering properties like high strength-to-mass ratio, low deflated volume and low inflated density. The majority of these applications imply a thin shell form-factor, rendering the problem geometrically nonlinear as well. Despite their superior engineering properties and potential uses, there are no proper analysis tools available to analyze these structures accurately yet efficiently. The development of a unified analytical model for both material and geometric nonlinearities encounters mathematical difficulties in the theory but its results have considerable scope. Therefore, a novel tool is needed to dimensionally reduce these nonlinear material models. In this thesis, Prof. Berdichevsky’s Variational Asymptotic Method(VAM) has been applied rigorously to alleviate the difficulties faced in modeling thin shell structures(made of such nonlinear materials for the first time in the history of VAM) which inherently exhibit geometric small parameters(such as the ratio of thickness to shortest wavelength of the deformation along the shell reference surface) and physical small parameters(such as moderate strains in certain applications). Saint Venant-Kirchhoff and neo-Hookean 3-D strain energy functions are considered for isotropic hyperelastic material modeling. Further, these two material models are augmented with electromechanical coupling term through Maxwell stress tensor for dielectric hyperelastic material modeling. A polyconvex 3-D strain energy function is used for the orthotropic hyperelastic model. Upon the application of VAM, in each of the above cases, the original 3-D nonlinear electroelastic problem splits into a nonlinear one-dimensional (1-D) through-the-thickness analysis and a nonlinear two-dimensional(2-D) shell analysis. This greatly reduces the computational cost compared to a full 3-D analysis. Through-the-thickness analysis provides a 2-D nonlinear constitutive law for the shell equations and a set of recovery relations that expresses the 3-D field variables (displacements, strains and stresses) through thethicknessintermsof2-D shell variables calculated in the shell analysis (2-D). Analytical expressions (asymptotically accurate) are derived for stiffness, strains, stresses and 3-D warping field for all three material types. Consistent with the three types of 2-D nonlinear constitutive laws,2-D shell theories and corresponding finite element programs have been developed. Validation of present theory is carried out with a few standard test cases for isotropic hyperelastic material model. For two additional test cases, 3-Dfinite element analysis results for isotropic hyperelastic material model are provided as further proofs of the simultaneous accuracy and computational efficiency of the current asymptotically-correct dimensionally-reduced approach. Application of the dimensionally-reduced dielectric hyperelastic material model is demonstrated through the actuation of a clamped membrane subjected to an electric field. 
Finally, the through-the-thickness and shell analysis procedures are outlined for the orthotropic nonlinear material model.
APA, Harvard, Vancouver, ISO, and other styles
47

Araújo, Vital Nai Quei Pereira. "Análise comparativa de modelos de cálculo de estruturas de betão armado." Master's thesis, 2013. http://hdl.handle.net/10316/38510.

Full text
Abstract:
Dissertação de Mestrado Integrado em Engenharia Civil apresentada à Faculdade de Ciências e Tecnologia da Universidade de Coimbra
Nesta tese realizou-se uma abordagem não linear de uma viga contínua de betão armado, de dois tramos de um trabalho experimental anteriormente feito por Ana Maria S. Teixeira Bastos (1997, FEUP), e compara-se os resultados com programas comerciais de cálculo de elementos finitos. Na análise não linear utilizaram-se modelos elasto-plásticos e fendilhação para betão, aplicados através do Método de Elementos Finitos (MEF). Efectuou-se o dimensionamento de uma estrutura com modelos de análise-linear elástica com ou sem redistribuição. Comparou-se os resultados experimentais das vigas com os obtidos com os programas comerciais de software midas® FEA e Abaqus® CAE 6.10-1, usando elementos finitos bidimensionais, modelo elasto-plástico e modelo de fendilhação distribuída (“Smeared Crack”). E obteve-se conclusões relativas aos modelos utilizados, documentando de forma conveniente os casos de aplicação das ferramentas e modelos.
This thesis made an approach to linear and non-linear analysis of a reinforced concrete beam, the two spans of structure an experimental work previously done by Ana Maria S. Teixeira Bastos (1997, FEUP). The results were compared with commercial software’s of finite elements calculations. The elasto-plastic and smeared crack models are applied to twodimensional formulations of Finite Element Methods (FEM). The design of the structure considering the linear elastic behaviour with or without redistribution was made. The comparison of experimental results of beams with midas® FEA and Abaqus® CAE 6.10-1 commercial software´s was made, using two-dimensional finite elements with elasto-plasticity and the Smeared Crack models. The conclusions were made about the results obtained with the models used in the cases of application of the tools were documented in an appropriate way.
APA, Harvard, Vancouver, ISO, and other styles
48

Yao, Sung-Yi, and 姚松逸. "Three-Dimensional Electromagnetic Force Analyses and Driver Design of A Non-Contacting Steel Plate Conveyance System." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/97866530720702653623.

Full text
Abstract:
碩士
國立中山大學
電機工程學系研究所
90
Based on the design concepts of linear induction motors, a non-contacting steel plate conveyance system for steel mill application has been constructed. To reduce the noise and friction from conventional roller conveyance system, the designed system is aimed to simultaneously provide adequate lift, propulsive, and guide forces to the steel plate. At first, the preliminary understandings of the characteristics of lift force have been gained through the simple magnetic circuit analyses, and together with other mechanical concepts develop the laboratory prototype. Then, through three-dimensional finite element analyses and state model developments, the system’s static and quasi-dynamic/dynamic operational characteristics are investigated. Finally, the validity of this system has been verified by experimental measurement. Thus, the analyses and results of the experiment clearly show that the designed non-contacting steel plate conveyance system is certainly feasible.
APA, Harvard, Vancouver, ISO, and other styles
49

Ferreira, Larissa Lourdes Luz. "Fire effect on non-loadbearing light steel framing walls – numerical and simple calculation methods." Master's thesis, 2019. http://hdl.handle.net/10198/23310.

Full text
Abstract:
Mestrado de dupla diplomação com a UTFPR - Universidade Tecnológica Federal do Paraná
This work presents a study on the effects of fire in non-loadbearing Light Steel Framing (LSF) walls. Since there is still a desire to use a much more simplified method in routine fire resistance design, the objective of this study is to propose an equation that describes the effective width for calculating the fire resistance through one dimensional simplified analysis also described in this study. The one dimensional analysis was possible by considering fourteen layers, where five layers are presented on the gypsum exposed wall, four layers on the cavity and five more layers on the gypsum unexposed wall. The heat flow is considered as one path in both of the gypsum layers but divided into five different paths in the cavity, considering heat transfer between different materials, only in the y direction. Two different methods for the calculation of the effective width were proposed and validated with eleven different configurations of LSF walls, with experimental and numerical two-dimensional results, in order to find which method is more effective. Lastly, parametric studies were made using seven different cavity insulation materials, five different spacing between studs and five different cavity spacing, maintaining every other specification as constant, to understand their role in fire resistance of LSF non-loadbearing walls.
Este trabalho apresenta um estudo nos efeitos de incêndios em estruturas de paredes leves em aço enformado a frio não portantes. Como há ainda um desejo de usar um método muito mais simplificado para encontrar um design de resistência ao fogo, o objetivo deste estudo é propor uma equação que descreva a largura efetiva para o cálculo da resistência ao fogo através de análise uni-dimensional que também é descrita neste estudo. A análise uni-dimensional foi possível considerand quatorze camadas, onde cinco camadas estão apresentadas na parede de gesso exposta ao fogo, quatro camadas na cavidade e mais cinco camadas na parede de gesso não exposta ao fogo. O fluxo de calor é considerado com um único caminho em ambas as paredes de gesso, porém dividido em cinco caminhos diferentes na cavidade, considerando transferência de calor entre materiais diferentes, apenas na direção y. Dois diferentes métodos para o cálculo da largura efetiva foram propostos e validados com onze diferentes configurações de paredes leves em aço enformado a frio, com resultados experimentais e numéricos de duas dimensões, a fim de encontrar qual método é mais eficaz. Por fim, estudos paramétricos foram realizados utilizando sete diferentes materiais de isolamento na cavidade, cinco diferentes espaçamentos entre os montantes verticais e cinco diferentes espaçamentos de cavidade, mantendo todas as outras especificações constantes, a fim de entender seu papel na resistência ao fogo de paredes leves em aço enformado a frio não portantes.
APA, Harvard, Vancouver, ISO, and other styles
We offer discounts on all premium plans for authors whose works are included in thematic literature selections. Contact us to get a unique promo code!

To the bibliography