
Theses / dissertations on the topic "Curved meshes"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles


Consult the 39 best theses / dissertations for your research on the topic "Curved meshes".

Next to every source in the list of references, there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a .pdf file and read its abstract online, if it is present in the metadata.

Browse theses / dissertations on a wide variety of academic disciplines and compile a correct bibliography.

1

Gargallo Peiró, Abel. "Validation and generation of curved meshes for high-order unstructured methods". Doctoral thesis, Universitat Politècnica de Catalunya, 2014. http://hdl.handle.net/10803/275977.

Abstract:
In this thesis, a new framework to validate and generate curved high-order meshes for complex models is proposed. The main application of the proposed framework is to generate curved meshes that are suitable for finite element analysis with unstructured high-order methods. Note that the lack of a robust and automatic curved mesh generator is one of the main issues that has hampered the adoption of high-order methods in industry. Specifically, without curved high-order meshes composed of valid elements that match the domain boundary, the convergence rates and accuracy of high-order methods cannot be realized. The main motivation of this work is to propose a framework to address this issue. First, we propose a definition of distortion (quality) measures for curved meshes of any polynomial degree. The presented measures allow validating whether a high-order mesh is suitable for finite element analysis with an unstructured high-order method. In particular, given a high-order element, the measures assign zero quality if the element is invalid, and one if the element corresponds to the selected ideal configuration (desired shape and nodal distribution). Moreover, we prove that if the quality of an element is not zero, the region where the determinant of the Jacobian is not positive has measure zero. We present several examples to illustrate that the proposed measures can be used to validate high-order isotropic and boundary layer meshes. Second, we develop a smoothing and untangling procedure to improve the quality of curved high-order meshes. Specifically, we propose a global non-linear least squares minimization of the defined distortion measures. The distortion is regularized to allow untangling invalid meshes, and it ensures that if the initial configuration is valid, it never becomes invalid. Moreover, the optimization procedure preserves, whenever possible, some geometrical features of the linear mesh such as the shape, stretching, straight-sided edges, and element size. We demonstrate through examples that the implementation of the optimization problem is robust and capable of handling situations in which the mesh before optimization contains a large number of invalid elements. We consider cases with polynomial approximations up to degree ten, large deformations of the curved boundaries, concave boundaries, and highly stretched boundary layer elements. Third, we extend the definition of distortion and quality measures to curved high-order meshes with the nodes on parameterized surfaces. Using this definition, we also propose a smoothing and untangling procedure for meshes on CAD surfaces. This procedure is posed in terms of the parametric coordinates of the mesh nodes to enforce that the nodes are on the CAD geometry. In addition, we prove that the procedure is independent of the surface parameterization. Thus, it can optimize meshes on CAD surfaces defined by low-quality parameterizations. Finally, we propose a new mesh generation procedure by means of an a posteriori approach. The approach consists of modifying an initial linear mesh by first introducing high-order nodes, second displacing the boundary nodes to ensure that they are on the CAD surface, and third smoothing and untangling the resulting mesh to produce a valid curved high-order mesh. To conclude, we include several examples to demonstrate that the generated meshes are suitable for finite element analysis with unstructured high-order methods.
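As a point of reference, the validity test underlying such measures can be sketched in a few lines. The snippet below is an illustrative sketch, not the thesis's actual distortion measure: it samples the Jacobian determinant of the degree-2 reference-to-physical map of a quadratic triangle and flags the element as invalid when a non-positive determinant is found. The node ordering and sampling density are assumptions.

```python
import numpy as np

def quad_tri_jacobians(nodes, n=10):
    """Sample det(J) of the degree-2 reference-to-physical map.

    nodes: (6, 2) array -- 3 corner nodes then 3 edge mid-nodes,
    ordered (v1, v2, v3, mid12, mid23, mid31)."""
    dets = []
    for i in range(n + 1):
        for j in range(n + 1 - i):
            xi, eta = i / n, j / n
            l1, l2, l3 = 1.0 - xi - eta, xi, eta
            # derivatives of the 6 quadratic Lagrange shape functions w.r.t. (xi, eta)
            dN = np.array([
                [-(4 * l1 - 1), -(4 * l1 - 1)],
                [4 * l2 - 1, 0.0],
                [0.0, 4 * l3 - 1],
                [4 * (l1 - l2), -4 * l2],
                [4 * l3, 4 * l2],
                [-4 * l3, 4 * (l1 - l3)],
            ])
            J = nodes.T @ dN              # 2x2 Jacobian at (xi, eta)
            dets.append(np.linalg.det(J))
    return np.array(dets)

def is_valid(nodes):
    """Quality 0 (invalid) if det(J) is non-positive at any sample point."""
    return quad_tri_jacobians(nodes).min() > 0.0

straight = np.array([[0, 0], [1, 0], [0, 1],
                     [0.5, 0], [0.5, 0.5], [0, 0.5]], float)
tangled = straight.copy()
tangled[3] = [0.5, 0.6]   # push an edge mid-node far inward -> the map inverts
print(is_valid(straight), is_valid(tangled))   # True False
```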
2

Pester, M. "Behandlung gekrümmter Oberflächen in einem 3D-FEM-Programm für Parallelrechner". Universitätsbibliothek Chemnitz, 1998. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-199801386.

Abstract:
The paper presents a method for generating curved surfaces of 3D finite element meshes by mesh refinement, starting with a very coarse grid. This is useful for parallel implementations, where the finest meshes should be computed rather than read from large files. The paper deals with simple geometries such as spheres, cylinders and cones, but the method may be extended to more complicated geometries. (with 45 figures)
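The refinement-and-project idea is straightforward to sketch. Assuming a sphere centred at the origin (one of the simple geometries mentioned), each refinement step below splits every triangle into four through its edge midpoints and pushes the new nodes back onto the surface; this is an illustrative reconstruction, not Pester's original code.

```python
import numpy as np

def refine_on_sphere(verts, tris, radius=1.0):
    """One refinement step: 1 triangle -> 4, new nodes projected to the sphere."""
    verts = list(map(np.asarray, verts))
    midpoint_cache, new_tris = {}, []

    def midpoint(i, j):
        key = (min(i, j), max(i, j))
        if key not in midpoint_cache:
            m = 0.5 * (verts[i] + verts[j])
            m = radius * m / np.linalg.norm(m)   # projection onto the sphere
            midpoint_cache[key] = len(verts)
            verts.append(m)
        return midpoint_cache[key]

    for a, b, c in tris:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        new_tris += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return verts, new_tris

# start from an octahedron, a very coarse grid for the unit sphere
verts = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
tris = [(0, 2, 4), (2, 1, 4), (1, 3, 4), (3, 0, 4),
        (2, 0, 5), (1, 2, 5), (3, 1, 5), (0, 3, 5)]
for _ in range(3):
    verts, tris = refine_on_sphere(verts, tris)
print(len(verts), "nodes,", len(tris), "triangles")   # 258 nodes, 512 triangles
```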
3

Costa, João Marcos Silva da. "Properties of discrete silhouette curves on planar quad meshes". Pontifícia Universidade Católica do Rio de Janeiro, 2018. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=36007@1.

Abstract:
In this work we study discrete silhouette curves on planar quad meshes (PQ meshes), with the objective of evaluating properties of these curves. PQ meshes are quadrilateral meshes in which every face is planar; our interest focuses in particular on two kinds of meshes, conical and circular, which are of interest in architecture for design with glass structures. The meshes are generated by an optimization process; we then define discrete curves on the meshes as silhouette candidates and propose quality measures for these curves.
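For orientable meshes, a discrete silhouette can be extracted edge by edge: an interior edge is a silhouette candidate when its two incident faces point to opposite sides of the view direction. A minimal sketch, assuming a triangle mesh and an orthographic view direction (the thesis works with quad meshes and its own quality measures), is:

```python
import numpy as np

def silhouette_edges(verts, faces, view_dir):
    """Interior edges whose two incident faces face opposite sides of view_dir."""
    verts = np.asarray(verts, float)
    view_dir = np.asarray(view_dir, float)
    edge_faces = {}
    for f, (a, b, c) in enumerate(faces):
        for i, j in ((a, b), (b, c), (c, a)):
            edge_faces.setdefault((min(i, j), max(i, j)), []).append(f)

    def facing(f):
        a, b, c = faces[f]
        n = np.cross(verts[b] - verts[a], verts[c] - verts[a])  # face normal
        return np.dot(n, view_dir)

    return [e for e, fs in edge_faces.items()
            if len(fs) == 2 and facing(fs[0]) * facing(fs[1]) < 0]

# tetrahedron viewed along (1, 1, 1): prints the three edges of face (1, 2, 3),
# the only face whose normal points towards the viewer
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
faces = [(0, 2, 1), (0, 1, 3), (1, 2, 3), (0, 3, 2)]
print(silhouette_edges(verts, faces, (1, 1, 1)))   # [(1, 2), (1, 3), (2, 3)]
```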
4

Wu, Sing-on, and 胡成安. "Smoothing the silhouettes of polyhedral meshes by boundary curve interpolation". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1999. http://hub.hku.hk/bib/B2981554X.

5

Wu, Sing-on. "Smoothing the silhouettes of polyhedral meshes by boundary curve interpolation /". Hong Kong : University of Hong Kong, 1999. http://sunzi.lib.hku.hk/hkuto/record.jsp?B21790942.

6

Usai, Francesco. "Structured meshes: composition and remeshing guided by the Curve-Skeleton". Doctoral thesis, Università degli Studi di Cagliari, 2016. http://hdl.handle.net/11584/266879.

Abstract:
Virtual sculpting is currently a broadly used modeling metaphor with rising popularity, especially in the entertainment industry. While this approach unleashes the artists' inspiration and creativity and leads to wonderfully detailed and artistic 3D models, it has the purely technical side effect of producing highly irregular meshes that are not optimal for subsequent processing. Converting an unstructured mesh into a more regular and structured model in an automatic way is a challenging task and still an open problem. Since structured meshes are useful in different applications, it is of interest to be able to guarantee this property also in scenarios of part-based modeling, which aims to build digital objects by composition instead of modeling them from scratch. This thesis presents methods for obtaining structured meshes in two different ways. First, a coarse quad-layout computation method is presented, which starts from a triangle mesh and the curve-skeleton of the shape. The second approach allows building complex shapes by procedural composition of PAMs. Since both quad layouts and PAMs exploit their global structure, similarities between the two are discussed, especially how their structure corresponds to the curve-skeleton describing the topology of the shape being represented. Since both presented methods rely on the information provided by the skeleton, the difficulties of using automatically extracted curve-skeletons without processing are discussed, and an interactive tool for user-assisted processing is presented.
7

Guo, Ruchi. "A Linear Immersed Finite Element Space Defined by Actual Interface Curve on Triangular Meshes". Thesis, Virginia Tech, 2017. http://hdl.handle.net/10919/79946.

Abstract:
In this thesis, we develop a new immersed finite element (IFE) space formed by piecewise linear polynomials defined on sub-elements cut by the actual interface curve, for solving elliptic interface problems on interface-independent meshes. A group of geometric identities and estimates on interface elements is derived. Based on these geometric identities and estimates, we establish a multi-point Taylor expansion of the true solutions and show the estimates for the second-order terms in the expansion. Then, we construct the local IFE spaces by imposing the weak jump conditions and nodal value conditions on the piecewise polynomials. The unisolvence of the IFE shape functions is proven through the invertibility of the well-known Sherman-Morrison system. Furthermore, we derive a group of fundamental identities about the IFE shape functions, which show that the two polynomial components in an IFE shape function are highly related. Finally, we employ these fundamental identities and the multi-point Taylor expansion to derive the estimates for IFE interpolation errors in the L2 and semi-H1 norms.
Master of Science
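For orientation, the elliptic interface problem targeted by IFE methods is usually stated as follows, with Γ the interface separating Ω⁻ from Ω⁺ and β a piecewise-constant coefficient (a standard strong form; the weak jump and nodal conditions actually imposed in the thesis refine this):

```latex
-\nabla \cdot \bigl(\beta \,\nabla u\bigr) = f \quad \text{in } \Omega^{-} \cup \Omega^{+},
\qquad
[\![\, u \,]\!]_{\Gamma} = 0,
\qquad
[\![\, \beta \,\nabla u \cdot \mathbf{n} \,]\!]_{\Gamma} = 0,
\qquad
\beta =
\begin{cases}
\beta^{-} & \text{in } \Omega^{-},\\
\beta^{+} & \text{in } \Omega^{+}.
\end{cases}
```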
8

Machado, Luís Gustavo Pinheiro. "Malhas adaptativas em domínios definidos por fronteiras curvas". Universidade de São Paulo, 2007. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-13012008-215606/.

Abstract:
Two distinct methods are described and implemented. The first method, proposed by Ruppert, has theoretical guarantees on the quality of the elements when the domain boundary respects certain restrictions. The second method, proposed by Persson, makes it possible to have greater control over the density of the elements that make up the domain. The advantages, disadvantages and specific points of each method are described and detailed.
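Persson's method mentioned above can be sketched compactly: mesh nodes are treated as a truss whose bars (Delaunay edges) push nodes apart, while a signed distance function projects escaping nodes back onto the boundary. Below is a heavily simplified sketch for the unit disc with a uniform target edge length h; the actual algorithm (distmesh) also controls retriangulation frequency and supports spatially varying size functions.

```python
import numpy as np
from scipy.spatial import Delaunay

def distmesh_disc_sketch(n=200, h=0.12, steps=60, dt=0.2):
    """Force-based node smoothing on the unit disc (signed distance d = |p| - 1)."""
    rng = np.random.default_rng(0)
    p = rng.uniform(-1, 1, (n, 2))
    p = p[np.hypot(p[:, 0], p[:, 1]) < 1]          # keep points inside the disc
    for _ in range(steps):
        tri = Delaunay(p)
        edges = {tuple(sorted(e)) for t in tri.simplices
                 for e in ((t[0], t[1]), (t[1], t[2]), (t[2], t[0]))}
        force = np.zeros_like(p)
        for i, j in edges:
            bar = p[i] - p[j]
            L = np.linalg.norm(bar)
            f = max(h - L, 0.0)                    # repulsive-only bar force
            force[i] += f * bar / L
            force[j] -= f * bar / L
        p = p + dt * force
        d = np.hypot(p[:, 0], p[:, 1]) - 1.0       # signed distance to boundary
        out = d > 0
        p[out] /= (d[out] + 1.0)[:, None]          # project outside nodes back
    return p

pts = distmesh_disc_sketch()
print(pts.shape)
```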
9

Ghantous, Joyce. "Prise en compte de conditions aux bords d'ordre élevé et analyse numérique de problèmes de diffusion sur maillages courbes à l'aide d'éléments finis d'ordre élevé". Electronic Thesis or Diss., Pau, 2024. http://www.theses.fr/2024PAUU3024.

Abstract:
This thesis focuses on the numerical analysis of partial differential equations involving high-order boundary conditions of Ventcel type using the finite element method. To define the Laplace-Beltrami operator involved in the boundary condition, the domain is assumed to be smooth; the meshed domain therefore does not correspond to the initial physical domain, resulting in a geometric error. We then use curved meshes to reduce this error and define a lift operator that allows comparing the exact solution, defined on the initial domain, with the approximate solution, defined on the discretized domain. We obtain a priori error estimates, expressed in terms of the finite element approximation error and the geometric error. We study problems with source terms and spectral problems, as well as scalar equations and the vector equations of linear elasticity. Numerical experiments in 2D and 3D validate and complement these theoretical results, particularly highlighting the optimality of the obtained errors. These simulations also identify a super-convergence of the errors on quadratic meshes.
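For orientation, a scalar model problem with a Ventcel boundary condition takes the following form, where Δ_Γ denotes the Laplace-Beltrami operator on Γ = ∂Ω (one common variant; the coefficients and the elasticity analogue studied in the thesis may differ):

```latex
\begin{cases}
-\Delta u + u = f & \text{in } \Omega,\\[2pt]
-\beta\, \Delta_{\Gamma} u + \partial_{\mathbf{n}} u + \alpha\, u = g & \text{on } \Gamma = \partial\Omega, \qquad \beta > 0.
\end{cases}
```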
10

Nascimento, Filipe de Carvalho. "Aproximação poligonal robusta de curvas implícitas". Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-23112016-104719/.

Abstract:
Geometric modeling involving implicit objects is a topic of intense research in Computer Graphics, so obtaining efficient techniques for representing these objects is of the utmost importance. Two groups of implicit objects relevant to Computer Graphics are implicit curves and implicit surfaces. Traditional techniques for approximating implicit curves and surfaces involve splitting the domain and searching its partitions for parts of the curve or surface. In this project we propose a new method for the robust polygonization of implicit curves using a self-validated numerical tool called Affine Arithmetic. The method consists of the adaptive polygonization of implicit curves on three-dimensional triangular meshes.
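The adaptive subdivision at the core of such methods is easy to sketch with ordinary interval arithmetic standing in for the affine arithmetic used in the thesis (affine arithmetic tracks correlations between variables and yields tighter enclosures): a box is discarded when the computed bounds of f exclude zero, and subdivided otherwise.

```python
def interval_f(xlo, xhi, ylo, yhi):
    """Interval bound of f(x, y) = x^2 + y^2 - 1 on the box (illustrative f)."""
    def sq(lo, hi):                      # range of t^2 for t in [lo, hi]
        if lo <= 0.0 <= hi:
            return 0.0, max(lo * lo, hi * hi)
        return min(lo * lo, hi * hi), max(lo * lo, hi * hi)
    sxlo, sxhi = sq(xlo, xhi)
    sylo, syhi = sq(ylo, yhi)
    return sxlo + sylo - 1.0, sxhi + syhi - 1.0

def subdivide(box, depth, out):
    """Keep boxes that may contain the curve f = 0; split the rest adaptively."""
    xlo, xhi, ylo, yhi = box
    flo, fhi = interval_f(*box)
    if flo > 0.0 or fhi < 0.0:           # bounds reliably exclude the curve
        return
    if depth == 0:
        out.append(box)                  # candidate cell for a curve segment
        return
    xm, ym = 0.5 * (xlo + xhi), 0.5 * (ylo + yhi)
    for b in ((xlo, xm, ylo, ym), (xm, xhi, ylo, ym),
              (xlo, xm, ym, yhi), (xm, xhi, ym, yhi)):
        subdivide(b, depth - 1, out)

cells = []
subdivide((-2.0, 2.0, -2.0, 2.0), 6, cells)
print(len(cells), "cells near the unit circle")
```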
11

Belarif, Kamel. "Propriétés génériques des mesures invariantes en courbure négative". Thesis, Brest, 2017. http://www.theses.fr/2017BRES0059/document.

Abstract:
In this work, we study the properties satisfied by the probability measures invariant under the geodesic flow (φt)t∈R on non-compact manifolds M with pinched negative sectional curvature. First, we restrict our study to hyperbolic manifolds. In this case, φt is topologically mixing in restriction to its non-wandering set. Moreover, if M is convex cocompact, there exists a symbolic representation of the geodesic flow which allows us to prove that the set of φt-invariant, weakly-mixing probability measures is a dense Gδ-set in the set M1 of probability measures invariant under the geodesic flow. The question of the topological mixing of the geodesic flow is still open when the curvature of M is not constant, so the methods used on hyperbolic manifolds do not apply to manifolds with variable curvature. To generalize the previous result, we use thermodynamic-formalism tools developed recently by F. Paulin, M. Pollicott and B. Schapira. More precisely, the proof of our result relies on our capacity to construct, for every periodic orbit Op, a sequence of mixing, finite Gibbs measures converging to the Dirac measure supported on Op. We show that such a construction is possible when M is geometrically finite. If it is not, there is no known example of a geometrically infinite manifold with a finite Gibbs measure. We conjecture that it is always possible to construct a finite Gibbs measure on a pinched negatively curved manifold. To support this conjecture, we prove a finiteness criterion for Gibbs measures.
12

Lucci, Paulo Cesar de Alvarenga 1974. "Descrição matematica de geometrias curvas por interpolação transfinita". [s.n.], 2009. http://repositorio.unicamp.br/jspui/handle/REPOSIP/258706.

Advisor: Philippe Remy Bernard Devloo
Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Civil, Arquitetura e Urbanismo
Abstract:
In this work a methodology is developed for the mathematical representation of curved domains, applicable to any type of finite element geometry. The methodology is a generalization of the mathematical model of geometric representation presented in 1967 by Steven Anson Coons, called "Bilinearly Blended Coons Patches", which fits a rectangular surface patch to four arbitrary boundary curves. The proposed methodology is a kind of geometric transfinite interpolation applicable to elements of any topology, through a single, consistent procedure.
Master's degree in Civil Engineering (Structures)
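Coons's bilinearly blended patch, which the dissertation generalizes, interpolates four boundary curves c1(u) = S(u,0), c3(u) = S(u,1), c2(v) = S(0,v), c4(v) = S(1,v) and subtracts the bilinear interpolant of the corner points Pij = S(i,j):

```latex
S(u,v) = (1-v)\, c_1(u) + v\, c_3(u) + (1-u)\, c_2(v) + u\, c_4(v)
       - \begin{pmatrix} 1-u & u \end{pmatrix}
         \begin{pmatrix} P_{00} & P_{01} \\ P_{10} & P_{11} \end{pmatrix}
         \begin{pmatrix} 1-v \\ v \end{pmatrix},
\qquad (u,v) \in [0,1]^2.
```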
13

Gontijo, Ana Paula Bensemann. "Avaliação do desenvolvimento motor grosso em crianças de 0 a 18 meses de idade: criação de curvas de percentil para a população brasileira". Universidade Federal de Minas Gerais, 2012. http://hdl.handle.net/1843/BUOS-8TBPLW.

Abstract:
Professionals in the field of children's rehabilitation in Brazil have shown a growing interest in objective documentation of the neurodevelopment of infants. However, most standardized instruments for assessing children's performance were developed in North America, Canada and Europe. Problems related to the use of imported normative tests range from inaccurate translations to inadequate socio-cultural-ethnic validity. To contribute to the provision of adequate resources for child assessment, the general objective of this dissertation was to describe the gross motor performance of infants from the metropolitan region of Belo Horizonte/MG, from birth to the acquisition of independent walking, in order to examine the adequacy of the standardized norms of the Alberta Infant Motor Scale (AIMS) for Brazilian infants. In this regard, the specific objectives were to: 1) characterize the gross motor performance of Brazilian infants aged between zero and 18 months; 2) compare the gross motor performance of Brazilian children with the normative data from the Canadian standardization sample; 3) investigate the relationship between gross motor development and social conditions, represented by the human development index (HDI) and socioeconomic classification (ABEP). To achieve these goals, 660 infants were evaluated with the AIMS, in a sample comprising 330 females and 330 males, stratified by age groups from zero to 18 months in proportions similar to the original Canadian group, and distributed in three blocks according to the Human Development Index (HDI) of the metropolitan region of Belo Horizonte-MG. For each age group, from zero to 18 months, we calculated the mean total AIMS score and standard deviation. The data for each age were compared to the Canadian normative data by means of Student's t-test. Brazilian infants showed lower scores in the age groups 1<2 months (p=0.021), 4<5 months (p=0.000), 5<6 months (p=0.001) and 10<11 months (p=0.009), and a higher score at the age 0<1 month (p=0.045). Comparison of the percentile curves (5th, 10th, 25th, 50th, 75th and 90th), investigated with the binomial test, indicated statistically significant differences scattered over all the percentile curves. The greatest number of age differences was observed in the 75th percentile curve, but significant differences were also found in the 5th percentile at 9<10 and 10<11 months of age; in the 10th at 4<5, 9<10 and 10<11 months; in the 25th at 4<5, 5<6 and 11<12 months; in the 50th at 4<5 and 5<6 months; and in the 90th at 10<11, 11<12 and 13<14 months. There were no statistically significant differences between female and male groups (except at age 12<13 months, where girls had a higher score), between the three HDI groups (except at 13<14 months, where the medium-HDI group scored higher than the high-HDI group), or among the five levels of the ABEP socioeconomic classification. In conclusion, the differences found in the 5th and 10th percentile curves lead us to recommend the use of the percentile curves presented in the current study for both uses of the AIMS, in clinical practice and in scientific research, with Brazilian infants.
14

Sciaccitano, Marie. "Élasticités Environnementales d'Engel : Mesures, Estimations et Déterminants". Electronic Thesis or Diss., Université Côte d'Azur, 2023. http://www.theses.fr/2023COAZ0030.

Abstract:
In the context of climate change, the ecological transition relies in part on the adoption of sustainable consumption, in line with the United Nations' Sustainable Development Goals (2015). The diversity of definitions associated with sustainable consumption leads to a lack of consensus on the classification and scope of sustainable consumption. Consequently, data and measures related to sustainable consumption are limited, thereby restricting empirical studies on this subject. The thesis addresses these constraints through three chapters, offering an original methodology and results. Chapter 1 presents a methodology for measuring household sustainable consumption in 150 countries over the period 1995-2015. This measure is constructed using environmental goods and services, classified as CLEG and APEC, identified in the final demand of households from the Input-Output Tables Exiobase3rx. Chapter 2 explores the relationship between disposable income and sustainable consumption at the macroeconomic level, within the Environmental Engel Curve framework. The empirical analysis estimates the effect of disposable income on household sustainable consumption and determines Engelian elasticities which, depending on their values, categorize this consumption as either a luxury or a necessity. Furthermore, our econometric estimation introduces a Bartik instrument, suggesting a significant impact of green fiscal policies on household sustainable consumption. Performing a simulation exercise, we emphasize the importance of considering the "true value" of Engelian elasticities in the context of global redistribution policies, such as the Climate Fund. Chapter 3 evaluates the effect of income inequality on this type of consumption, thereby contributing to the debate on the trade-off between environmental quality and inequality. Using three inequality indicators, our results suggest that the impact of inequality on sustainable consumption depends on the income level and on the measure of inequality considered. By introducing a higher-order polynomial, we analyse the sensitivity of this consumption to changes in the level of inequality and determine an optimal level of inequality that maximizes household sustainable consumption. Overall, this thesis contributes to the measurement of household sustainable consumption by country and explores its determinants, while also providing insights for environmental and redistributive policies.
15

Giroldo, Fernanda. "Bond strength between mesh reinforcement and concrete at elevated temperatures". Thesis, University of Manchester, 2011. https://www.research.manchester.ac.uk/portal/en/theses/bond-strength-between-mesh-reinforcement-and-concrete-at-elevated-temperatures(1ed2c861-9c1a-44bb-a080-30cb7810a94c).html.

Abstract:
This thesis investigates, using finite element modelling and experimental investigation, the fracture of mesh reinforcement in composite floor slabs at elevated temperatures. The main objective of the research is the study of the bond strength between the welded mesh reinforcement and concrete at elevated temperatures, since this was found to be the principal behaviour governing the fracture of the reinforcement in a composite floor slab. The experimental programme included steady-state and transient pull-out tests carried out at temperatures varying from 20°C to 1000°C. However, unlike previous work, which concentrated on the bond of single bars, rectangular normal-concrete prisms were constructed with one longitudinal bar, ensuring a bond length of 200 mm, and one transverse bar welded centrally. As a result, the influence of the weld of the mesh reinforcement on the bond strength between reinforcement and concrete was studied. The bond strength-slip-temperature relationship was obtained for various sizes of ribbed and plain bars. It was found that the 6, 7 and 8mm diameter ribbed mesh failed by fracture of the longitudinal bar at all temperatures, including ambient temperature. It was shown that the reduction of bond strength of ribbed mesh was similar to the reduction in strength of the bar, which, together with the observed modes of failure, led to the conclusion that ribbed mesh can be assumed to be fully bonded at all temperatures. The 10mm diameter ribbed mesh failed by splitting due to the cover-bar diameter ratio being small. In contrast, all the plain bars failed by fracture of the weld followed by pull-out of the bar. Therefore the correct bond stress-slip relationship should be modelled for smooth bars to accurately predict global structural behaviour. The investigation using finite element modelling utilizes the DIANA program. The incorporation by the author of the bond strength-slip-temperature relationship within the models permits a better prediction of fracture of the reinforcement in composite floor slabs. It has been shown that smooth bars are more beneficial, since the bond is broken before fracture of the bar, allowing strains to be distributed along the bar. In the case of ribbed bars the bond is such that localised strain will occur in the bar at crack locations, leading quickly to fracture of the reinforcement.
16

Perrault, Matthieu. "Evaluation de la vulnérabilité sismique de bâtiments à partir de mesures in situ". Phd thesis, Université de Grenoble, 2013. http://tel.archives-ouvertes.fr/tel-00934454.

Abstract:
The objective of this thesis is to analyze and characterize the link between ground motion and the response of buildings. In particular, we focus on reducing the uncertainties involved in vulnerability methods, in order to estimate more precisely the vulnerability of structures to seismic risk. To do so, this thesis relies on data recorded inside buildings. Recordings of ambient vibrations and of low-amplitude earthquakes in buildings make it possible to characterize the elastic behavior of these structures. Models can then be defined to study the various components of the variability present in fragility curves. We first focus on the first damage level, beyond which the elastic characteristics of the buildings are modified and the defined models are no longer valid. Recordings of earthquakes that have occurred since the early 1970s in Californian buildings are then used. From these data, we were able to highlight relationships between the displacement within structures and parameters describing the damaging potential of seismic signals. Depending on the indicator used to represent earthquake severity, the response of buildings can be estimated more or less precisely. In particular, indicators involving the dynamic parameters of the buildings correlate better with the response of the structures, represented by the mean inter-story drift. The variability of the building response can be reduced by grouping buildings into typologies (defined according to their main construction material and their height); by providing more information on the structures, the epistemic component of the variability can thus be decreased. Moreover, by combining severity indicators, the precision of the prediction of the structural response can be improved. A functional form is thus proposed to estimate the mean inter-story drift within structures, for several building typologies, from four severity indicators. This functional form is used to establish fragility curves and can also be used to give a first estimate of damage following an earthquake, by comparing inter-story drift values with reference values (FEMA, 2003). Finally, a hybrid method is proposed for the construction of fragility curves, involving a non-linear behavior model whose parameters are defined so that the model response fits the earthquake recordings made in the buildings. This model is then used to evaluate the uncertainty components and to build fragility curves for all damage levels.
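Fragility curves of the kind discussed here are commonly written as a lognormal cumulative distribution in an intensity measure IM, with median capacity θ_k and dispersion β_k for damage state d_k (a standard textbook form, given for orientation; the thesis constructs its own functional form from four intensity indicators):

```latex
P\bigl[\, D \ge d_k \mid \mathrm{IM} \,\bigr]
= \Phi\!\left( \frac{\ln\!\left(\mathrm{IM}/\theta_k\right)}{\beta_k} \right),
```

where Φ is the standard normal cumulative distribution function.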
17

Di, Bernardino Éléna. "Modélisation de la dépendance et mesures de risque multidimensionnelles". Phd thesis, Université Claude Bernard - Lyon I, 2011. http://tel.archives-ouvertes.fr/tel-00838598.

Abstract:
The aim of this thesis is to develop certain aspects of dependence modeling in risk management in dimension greater than one. The first chapter is a general introduction. The second chapter consists of an article entitled "Estimating Bivariate Tail: a copula based approach", submitted for publication. It concerns the construction of an estimator of the tail of a bivariate distribution. The construction of this estimator is based on a Peaks Over Threshold method and thus on a bivariate version of the Pickands-Balkema-de Haan theorem. The dependence modeling is obtained via the Upper Tail Dependence Copula. We prove convergence properties for the estimator thus constructed. The third chapter is based on an article, "A multivariate extension of Value-at-Risk and Conditional-Tail-Expectation", submitted for publication. We address the problem of extending classical risk measures, such as Value-at-Risk and Conditional-Tail-Expectation, to a multidimensional setting using the multivariate Kendall function. Finally, in the fourth chapter of the thesis, we propose a plug-in estimator of the level curves of a bivariate distribution function. We prove convergence properties for the estimators thus constructed. This chapter is also based on an article, entitled "Plug-in estimation of level sets in a non-compact setting with applications in multivariate risk theory", accepted for publication in the journal ESAIM: Probability and Statistics.
18

Lebrat, Léo. "Projection au sens de Wasserstein 2 sur des espaces structurés de mesures". Thesis, Toulouse, INSA, 2019. http://www.theses.fr/2019ISAT0035.

Abstract:
Cette thèse s’intéresse à l’approximation pour la métrique de 2-Wasserstein de mesures de probabilité par une mesure structurée. Les mesures structurées étudiées sont des discrétisations consistantes de mesures portées par des courbes continues à vitesse et à accélération bornées. Nous comparons deux types d’approximations pour ces courbes continues : l’approximation constante par morceaux et linéaire par morceaux. Pour chaque méthode, des algorithmes rapides et fonctionnant pour une discrétisation fine ont été développés. Le problème d’approximation se divise en deux étapes avec leurs propres défis théoriques et algorithmiques : le calcul de la distance de Wasserstein 2 et son optimisation par rapport aux paramètres de structure. Ce travail est initialement motivé par la génération de trajectoires d’IRM en acquisition compressée, toutefois nous donnons de nouvelles applications potentielles pour ces méthodes
This thesis focuses on the approximation, for the 2-Wasserstein metric, of probability measures by structured measures. The set of structured measures under consideration consists of consistent discretizations of measures carried by a smooth curve with bounded speed and acceleration. We compare two different types of approximations of the curve: piecewise constant and piecewise linear. For each of these methods, we develop fast and scalable algorithms to compute the 2-Wasserstein distance between a given measure and the structured measure. The optimization procedure raises new theoretical and numerical challenges; it consists of two steps: first, the computation of the 2-Wasserstein distance, and second, the optimization of the structure parameters. This work was initially motivated by the design of trajectories in MRI acquisition; however, we also provide new applications of these methods.
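In compact form, the projection problem studied here reads as follows, with S the set of structured measures, ν the target measure and Π(μ, ν) the set of couplings (standard optimal transport notation):

```latex
\min_{\mu \in \mathcal{S}} \; W_2^2(\mu, \nu),
\qquad
W_2^2(\mu, \nu) = \min_{\pi \in \Pi(\mu, \nu)} \int \lVert x - y \rVert^2 \, \mathrm{d}\pi(x, y).
```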
19

Morand, Marion. "Surface de symétrie d’une structure 3D : application à l’étude des déformations scoliotiques du dos". Thesis, Montpellier, 2019. http://www.theses.fr/2019MONTS107.

Abstract:
In this thesis we are interested in the study of the symmetry of 3D meshes. Usually, symmetry is defined as an orthogonal symmetry with respect to a plane. However, this characterisation is only fully relevant for "straight" bilateral structures; in our application, scoliotic deformations of the back surface, the resulting analysis of asymmetries is very imprecise. We therefore propose to generalise the notion of 3D mesh symmetry by defining an orthogonal symmetry with respect to an arbitrary non-planar surface. After studying the limits of plane symmetry, we propose a new method to compute a surface of symmetry of a 3D mesh. This iterative algorithm is based on the decomposition of the studied structure into a set of adaptive bands, defined orthogonally to a symmetry curve, and then on the calculation of local symmetry planes for each of these bands. These planes are then interpolated to obtain the surface of symmetry. A particular focus is placed on the robustness of the algorithm, which must be able to adapt to the various possible deformations of the mesh. We then propose a method to compute a curved, standardised asymmetry map from the surface of symmetry. Lastly, we present an application of our contributions to the study of scoliosis-induced deformities. We show that the study of the surface of symmetry of the back makes it possible to categorise the different types of scoliosis and to build a 3D model of the spine, without resorting to radiative imaging.
20

Isel, Sandra. "Développement de méthodologies et d'outils numériques pour l'évaluation du débit en réseau hydraulique à surface libre". Thesis, Strasbourg, 2014. http://www.theses.fr/2014STRAD008/document.

Abstract:
L’évaluation du débit en réseaux hydrauliques à surface libre est une problématique actuelle sur le plan scientifique, à forts enjeux technologiques, économiques et écologiques. Dans cette thèse, de nouvelles méthodologies d’instrumentation, basées sur une synergie entre mesures non intrusives de hauteur d’eau et modélisation numérique ont été développées. Celles-ci s’appliquent d’une part à des collecteurs dont le fonctionnement hydraulique est complexe et, d’autre part, à des ouvrages non-standard (Venturi, déversoirs d’orage). Ce travail de thèse multidisciplinaire vise une meilleure compréhension de l’écoulement pour en déduire des relations Q=f(hi) plus robustes, spécifiques à chaque site et associées à leurs incertitudes; mais également l’identification de possibles modifications du site de mesure afin d’améliorer l’estimation du débit. Au final, l’applicabilité des méthodologies développées a été éprouvée au travers de plusieurs études sur sites réels
The evaluation of the flow rate in free-surface water systems is a current scientific problem with high technological, economic and ecological stakes. In this study, new instrumentation methods based on a synergy between non-intrusive water-level measurements and numerical modeling have been developed. These methods are applied first to sewer pipes with complex hydraulic conditions, then to non-standard hydraulic structures (Venturi flumes, combined sewer overflows). This multidisciplinary work aims at a better understanding of the flow in order to derive more robust, site-specific Q=f(hi) relationships together with their uncertainties. It also aims at identifying possible modifications of the measurement site in order to improve the flow rate evaluation. Finally, the applicability of the developed methodologies has been tested through several real-site studies.
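In the simplest single-sensor case, a site-specific Q=f(hi) relationship is a power-law rating curve fitted to paired stage-discharge data. The sketch below fits Q = a·h^b by least squares in log space; the data and the power-law form are illustrative assumptions, not results from the thesis.

```python
import numpy as np

def fit_rating_curve(h, q):
    """Fit Q = a * h**b by linear least squares on log(Q) = log(a) + b*log(h)."""
    A = np.column_stack([np.ones_like(h), np.log(h)])
    (log_a, b), *_ = np.linalg.lstsq(A, np.log(q), rcond=None)
    return np.exp(log_a), b

# synthetic stage/discharge pairs (m, m^3/s) roughly following Q = 1.7 * h^1.5
h = np.array([0.10, 0.20, 0.35, 0.50, 0.80, 1.20])
q = 1.7 * h**1.5 * (1 + 0.03 * np.random.default_rng(1).standard_normal(6))
a, b = fit_rating_curve(h, q)
print(f"Q ~ {a:.2f} * h^{b:.2f}")
```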
21

Malak, Karam Maurine. "A contribution to photonic MEMS : study of optical resonators and interferometers based on all-silicon Bragg reflectors". Phd thesis, Université Paris-Est, 2011. http://tel.archives-ouvertes.fr/tel-00769408.

Abstract:
This research work introduces a novel class of Fabry-Perot (FP) resonators: curved FP cavities based on coating-free Bragg mirrors of cylindrical shape, obtained by silicon micromachining. Another specificity is the rather large cavity length (L>200 µm) combined with a high quality factor Q (up to 10^4), for applications requiring cavity-enhanced absorption spectroscopy, in which the product Q.L is a figure of merit. In this context, the basic architecture was modeled analytically to characterize the high-order transverse modes supported by such cavities. The experimental conditions which lead to preferential excitation (or rejection) of these modes were then tested experimentally, leading to the validation of our theoretical model and to a better understanding of the cavity behaviour. A second architecture, based on the curved FP together with a fiber rod lens, was developed for the purpose of providing stable designs. It was also modeled, fabricated and characterized, leading to the expected performance improvements. In addition, we highlight one of the potential applications identified for the curved cavities by inserting the cavity into an electro-mechanical system: exciting and measuring tiny vibrations through opto-mechanical coupling in a MEMS mechanical resonator embedding an FP cavity. Finally, as a complement to our study of resonators, we started exploring applications of optical interferometers based on similar micromachined silicon Bragg mirrors. For this purpose, an optical measurement microsystem was designed, fabricated and characterized; it consists of an optical probe for surface profilometry in confined environments, based on an all-silicon Michelson interferometer.
22

Froehly, Algiane. "Couplage d’un schéma aux résidus distribués à l’analyse isogéométrique : méthode numérique et outils de génération et adaptation de maillage". Thesis, Bordeaux 1, 2012. http://www.theses.fr/2012BOR14563/document.

Abstract:
During high-order simulations, the approximation error may be dominated by the errors linked to the sub-parametric discretization used for the geometry representation. Many works propose to use an isogeometric analysis approach to better represent the geometry and hence solve this problem. In this work, we present the coupling between the limited, stabilized Lax-Friedrichs residual distributed scheme and isogeometric analysis. In particular, we build a family of basis functions defined on both triangular and quadrangular elements and allowing the exact representation of conics: the rational Bernstein basis functions. We then focus on how to generate accurate meshes for isogeometric analysis. Our idea is to create a curved mesh from a classical piecewise-linear mesh of the geometry. We obtain a conforming unstructured mesh which ensures the continuity of the basis functions over the entire domain. Lastly, we detail the curved mesh adaptation methods developed: order elevation and isotropic mesh refinement. Of course, the adaptation processes preserve the exact geometry of the initial curved mesh.
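The rational Bernstein basis referred to here is, in its one-dimensional form, the standard weighted Bernstein basis normalized to sum to one, which is what permits the exact representation of conics (the triangular and tensor-product variants follow the same pattern):

```latex
R_{i,n}(t) = \frac{w_i \binom{n}{i} t^{i} (1-t)^{\,n-i}}
                  {\displaystyle\sum_{j=0}^{n} w_j \binom{n}{j} t^{j} (1-t)^{\,n-j}},
\qquad t \in [0,1], \quad w_i > 0.
```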
23

Shepherd, Jason F. "Interval Matching and Control for Hexahedral Mesh Generation of Swept Volumes". BYU ScholarsArchive, 1999. https://scholarsarchive.byu.edu/etd/3452.

Abstract:
Surface meshing algorithms require certain relationships among the numbers of intervals on the curves that bound the surface. Assigning the number of intervals to all of the curves in the model such that all relationships are satisfied is called interval assignment. Volume meshing algorithms also require certain relationships among the numbers of intervals on each of the curves on the volume, and these relationships are not always captured by surface meshing requirements. This thesis presents a new technique for automatically identifying volume constraints. In this technique, volume constraints are grouped with surface constraints and solved simultaneously. A sweepable volume has source, target and linking surfaces. The technique described in this thesis uses graph algorithms to identify independent, parallel sets of linking surfaces and determine whether they correspond to through-holes or blind-holes. For blind-holes, the algorithm generates constraints that prevent the hole from being too deep in interval parameter space and, thus, penetrating opposite target surfaces. For each linking set, the adjoining source and target surfaces are partially ordered by the structure of the linking set. A small set of representative paths for each linking set is found, and the representative paths for all linking sets are gathered and distilled by Gaussian elimination into a small set of constraints.
24

Nyyssönen, V. (Virva). "Transvaginal mesh-augmented procedures in gynecology:outcomes after female urinary incontinence and pelvic organ prolapse surgery". Doctoral thesis, Oulun yliopisto, 2014. http://urn.fi/urn:isbn:9789526205632.

Abstract:
Problems of female urinary incontinence and pelvic organ prolapse are common. Traditional operative techniques for treating these conditions have unsatisfactory efficacy outcomes and involve complications. Attempts have been made to solve this problem with synthetic meshes, but mesh-related complications have appeared with their use. The situation is difficult because of the large number of different meshes, techniques and instrumentations. The present study was conducted to investigate the safety issues and complication rates of four structurally different polypropylene meshes used in transvaginal surgery to treat female urinary incontinence and apical or posterior vaginal prolapse. Vaginal mesh exposures were of special interest. The subjective outcome and patient satisfaction of the tension-free vaginal tape (TVT) and transobturator tape (TOT) methods in the treatment of female urinary incontinence are reported, and the objective and subjective cures of the posterior intravaginal sling (PIVS) and Elevate®Posterior procedures are investigated. The incidence of vaginal mesh exposure varied between meshes: the highest exposure incidence, 16–25%, was found with heavyweight microporous multifilament mesh, and the lowest, 0.9%, with lightweight macroporous monofilament mesh. The subjective cures of the TVT and TOT procedures were 84% and 80%, and patient satisfaction rates were 79% and 74%, respectively. The objective cure of the posterior IVS was only 69% and the patient satisfaction rate 62%, while Elevate®Posterior reached an 84–98% objective cure rate, depending on the definition used; the subjective efficacy of this procedure was 86%. According to this study, the use of heavyweight microporous multifilament mesh should be abandoned because of the intolerably high vaginal mesh exposure incidence. The subjective efficacy and patient satisfaction of the TVT and TOT procedures are satisfactory. Both the objective and subjective cure rates of the posterior IVS are poor, whereas the Elevate®Posterior technique with lightweight macroporous monofilament mesh presents promising results.
25

Lucas, Audrey. "Support logiciel robuste aux attaques passives et actives pour l'arithmétique de la cryptographie asymétrique sur des (très) petits coeurs de calcul". Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1S070.

Texto completo da fonte
Resumo:
This thesis deals with the development and evaluation of protections that counter fault attacks (FA) and side channel attacks (SCA) simultaneously. These protections have been developed for elliptic curve cryptography (ECC) and its main operation, the scalar multiplication (SM). Two protections have been proposed. The first, point verification (PV), checks that the current point is actually on the curve and gives the SM a uniform behavior; the resulting SM is thus robust against some FAs and, being uniform, against SPA as well. The second, called iteration counter (IC), protects the scalar against major FAs with a uniform behavior and a very small overhead. Our protections have been implemented on a Cortex M0 microcontroller for Weierstrass and Montgomery curves, and for different types of coordinates. The overhead is between 48 % and 62 % in the worst case (when the PV is performed at each SM iteration), which is smaller than the overhead of the usual basic protections against SPA. A theoretical activity simulator has also been developed. It reproduces the architecture of a simple 32-bit microcontroller, and the theoretical activity is modeled by the Hamming weight variations of the data manipulated during execution. Thanks to the simulator, the impact of the operands on the activity of the arithmetic units is illustrated. Moreover, SPA and DPA attacks were performed to evaluate the protections, which show a clear security improvement.
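As an illustration of the two countermeasures, here is a minimal sketch on a hypothetical toy Weierstrass curve (affine coordinates, insecure sizes). It shows only the fault-detection logic: the PV check that the running point stays on the curve, and the IC check on the loop count. The thesis's uniform, SPA-resistant ladder and its Cortex M0 implementation are not reproduced here.

```python
p, a, b = 97, 2, 3                     # toy curve y^2 = x^3 + 2x + 3 mod 97

def on_curve(P):
    if P is None:                      # point at infinity
        return True
    x, y = P
    return (y * y - (x ** 3 + a * x + b)) % p == 0

def add(P, Q):                         # textbook affine point addition
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x3 - x1) - y1) % p)

def scalar_mult_protected(k, P, nbits):
    R, count = None, 0
    for i in reversed(range(nbits)):   # fixed-length loop
        R = add(R, R)
        if (k >> i) & 1:
            R = add(R, P)
        if not on_curve(R):            # PV: a faulted point leaves the curve
            raise RuntimeError("fault detected: point off curve")
        count += 1
    if count != nbits:                 # IC: a skipped iteration is caught
        raise RuntimeError("fault detected: iteration count mismatch")
    return R

P = (3, 6)                             # 6^2 = 36 = 3^3 + 2*3 + 3 mod 97
print(scalar_mult_protected(13, P, 8))
```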
26

Mahdade, Mounir. "Vers une représentation parcimonieuse de la variabilité morphologique des rivières non-jaugées adaptée au problème inverse hauteur-débit". Electronic Thesis or Diss., Sorbonne université, 2022. http://www.theses.fr/2022SORUS168.

Texto completo da fonte
Resumo:
The lack of in situ measurements in ungauged rivers prevents the construction of rating curves, which are useful for several hydrological and hydraulic applications. In recent decades, the idea of estimating discharge by remote sensing methods has emerged, still based on the principle of constructing a link between water elevation and discharge. However, this change is accompanied by a change in the scale of the elevation measurement, which is no longer attached to a cross-section but to the reach, leading to the notion of a reach-averaged rating curve under the same assumptions as a cross-section rating curve. This thesis addresses the construction of such a curve. The friction, bathymetry and discharge parameters are unknown; to reduce the dimensionality of the problem, a hydromorphological study shows that the geometric variability of rivers can be represented by a 2D periodic model whose planform is based on a Kinoshita curve. In order to test and validate this model, a 2D reference simulation is produced on a 40 km reach of the Garonne River with continuous high-resolution topography. The simulated free surface can be considered as a set of "pseudo-observations" similar to those that will be produced by the SWOT mission. The 2D direct hydraulic model is based on a non-uniform geometric simplification (the periodic model) and a solver of the Saint-Venant equations (Basilisk). A stochastic inversion by genetic algorithm then estimates the reach-averaged rating curve in a stationary regime by searching for the geometric and friction parameters that best reproduce the observed signatures.
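The flavor of the inversion can be sketched with a toy example: fitting a reach-averaged power-law rating curve Q = a(h - h0)^b to noisy pseudo-observations with a small genetic algorithm. The power-law form, parameter ranges and data below are illustrative assumptions; the thesis inverts a full Saint-Venant model (Basilisk), not a closed-form law.

```python
import numpy as np

rng = np.random.default_rng(2)
a_true, b_true, h0_true = 40.0, 1.6, 0.5           # "unknown" truth
h_obs = np.linspace(1.0, 6.0, 20)                  # reach-averaged elevations
q_obs = a_true * (h_obs - h0_true) ** b_true * (1 + rng.normal(0, 0.02, 20))

def misfit(params):
    a, b, h0 = params
    q = a * np.clip(h_obs - h0, 1e-6, None) ** b
    return np.mean((q - q_obs) ** 2)

# toy genetic algorithm: keep the 10 fittest, refill with Gaussian mutants
pop = rng.uniform([1, 0.5, 0.0], [100, 3.0, 0.9], size=(50, 3))
for _ in range(200):
    parents = pop[np.argsort([misfit(p) for p in pop])[:10]]
    mutants = np.repeat(parents, 4, axis=0) + rng.normal(0, [1.0, 0.02, 0.01], (40, 3))
    pop = np.vstack([parents, mutants])

best = min(pop, key=misfit)
print("a, b, h0 ~", np.round(best, 2))             # approaches (40, 1.6, 0.5)
```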
27

Capdeville, Stéphanie. "Couches minces de ferrites spinelles à propriétés semiconductrices destinées à la réalisation de microbolomètres". Phd thesis, Université Paul Sabatier - Toulouse III, 2005. http://tel.archives-ouvertes.fr/tel-00009540.

Texto completo da fonte
Resumo:
The conditions for producing zinc-rich ferrite thin films were studied with a view to their integration as the thermometer material in microbolometers. Radio-frequency sputtering proved comparable to a high-temperature treatment under low oxygen pressure. A careful adjustment of the deposition conditions limited the reduction phenomena, thus avoiding the formation of the wustite-type mixed iron-zinc oxide. The ferrites obtained are nevertheless oxygen-deficient and exhibit an abnormally high Curie temperature. The evolution of the porosity of the films was established as a function of the deposition parameters used. Complex impedance measurements revealed that the intergranular regions govern conduction, their resistivity increasing with the porosity of the thin films. Annealing in air and changes in magnetic order also made it possible to modify the electrical properties of the films.
28

Dosso, Fangan Yssouf. "Contribution de l'arithmétique des ordinateurs aux implémentations résistantes aux attaques par canaux auxiliaires". Electronic Thesis or Diss., Toulon, 2020. http://www.theses.fr/2020TOUL0007.

Texto completo da fonte
Resumo:
This thesis focuses on two currently unavoidable elements of public key cryptography, namely modular arithmetic over large integers and elliptic curve scalar multiplication (ECSM). For the first, we are interested in the Adapted Modular Number System (AMNS), introduced by Bajard et al. in 2004, a system of representation in which the elements are polynomials. We show that this system allows modular arithmetic to be performed efficiently, and we explain how the AMNS can be used to randomize this arithmetic in order to protect implementations of cryptographic protocols against some side channel attacks. For the ECSM, we discuss the use of Euclidean Addition Chains (EAC) to take advantage of the efficient point addition formula proposed by Meloni in 2007. The goal is first to generalize the use of EAC for ECSM to any base point, which is achieved through curves equipped with an efficient endomorphism. Second, we propose an algorithm for scalar multiplication with EAC that enables the detection of faults injected by an attacker, following a fault model that we detail.
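A minimal sketch of how an EAC encodes a scalar with additions only (one of several conventions in the literature; the chain below is illustrative). On a curve, each step becomes a single point addition, which is what makes Meloni's efficient addition formula applicable and the operation sequence regular:

```python
def eac_value(chain):
    """Evaluate the integer reached by a Euclidean addition chain.
    Every step performs exactly one addition; the bit only selects
    which of the two running values is kept alongside the new sum."""
    a, b = 1, 2
    for bit in chain:
        a, b = (b, a + b) if bit else (a, a + b)
    return b

print(eac_value([1, 0, 0, 1, 0]))   # -> 16 under this encoding
```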
29

Venelli, Alexandre. "Contribution à la sécurite physique des cryptosystèmes embarqués". Thesis, Aix-Marseille 2, 2011. http://www.theses.fr/2011AIX22005/document.

Texto completo da fonte
Resumo:
This thesis focuses on the study of side-channel attacks as well as their consequences for the secure implementation of cryptographic algorithms. We first analyze different side-channel attacks and propose an improvement of a particularly interesting generic attack: mutual information analysis. We study the effect of state-of-the-art entropy estimation techniques on the results of the attack, and propose the use of B-spline functions as estimators, as they are well suited to the side-channel attack scenario. We also investigate the consequences of this kind of attack on a well-known symmetric cryptosystem, the Advanced Encryption Standard (AES), and propose a countermeasure based on the algebraic structure of AES. The main operation of elliptic curve cryptography (ECC) is the scalar multiplication, which consists of adding an elliptic curve point to itself a certain number of times. In the second part, we investigate how to secure this operation and propose a scalar multiplication algorithm that is both efficient and secure against the main side-channel attacks. We then study pairings, a mathematical construction based on elliptic curves with many properties of interest for the creation of new cryptographic protocols, and we finally evaluate the side-channel resistance of these constructions.
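As a rough illustration of mutual information analysis, the sketch below ranks key-byte guesses by the mutual information between modeled and simulated leakage. For brevity it uses a plain histogram entropy estimator instead of the B-spline estimator studied in the thesis, and a random placeholder S-box rather than the AES one; all trace parameters are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
SBOX = rng.permutation(256)                   # placeholder S-box (not AES)
hw = np.array([bin(v).count("1") for v in range(256)])  # Hamming-weight model

def mutual_information(x, y, bins=8):
    """Histogram-based MI estimate in bits (stand-in for B-splines)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(1, keepdims=True), pxy.sum(0, keepdims=True)
    mask = pxy > 0
    return float((pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])).sum())

# simulate leakage of SBOX[plaintext ^ true_key] plus Gaussian noise
true_key, n = 0x3C, 2000
pt = rng.integers(0, 256, n)
traces = hw[SBOX[pt ^ true_key]] + rng.normal(0, 1.0, n)

# the key guess maximizing MI between model and measurement wins
scores = [mutual_information(hw[SBOX[pt ^ k]], traces) for k in range(256)]
print(hex(int(np.argmax(scores))))            # usually recovers 0x3c
```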
30

El, Moustaine Ethmane. "Authentication issues in low-cost RFID". Phd thesis, Institut National des Télécommunications, 2013. http://tel.archives-ouvertes.fr/tel-00997688.

Texto completo da fonte
Resumo:
This thesis focuses on issues related to authentication in low-cost radio frequency identification technology, more commonly referred to as RFID. This technology is often referred to as the next technological revolution after the Internet. However, due to the very limited resources in terms of computation, memory and energy on RFID tags, conventional security algorithms cannot be implemented on low-cost tags, making security and privacy an important research subject today. First of all, we investigate scalability in low-cost RFID systems by developing an ns-3 module that simulates the universal low-cost RFID standard EPC Class-1 Generation-2, in order to establish a strict framework for secure identification in low-cost RFID systems. We show that symmetric key cryptography is excluded from use in any scalable low-cost RFID standard. Then, we propose a scalable authentication protocol based on our adaptation of the famous public key cryptosystem NTRU. This protocol is specially designed for low-cost RFID systems and can be efficiently implemented on low-cost tags. Finally, we consider zero-knowledge identification, i.e., the case where no secret needs to be shared between the tag and the reader. Such identification approaches are very helpful in many RFID applications where the tag constantly changes administrative domain. We propose two lightweight zero-knowledge identification approaches based on the GPS and randomized GPS schemes. The proposed approaches consist of storing precomputed values, in the form of coupons, in the back-end. The GPS-based variant can thus preserve privacy, and the number of coupons can be much higher than in other approaches, leading to better resistance to denial-of-service attacks with cheaper tags.
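A minimal sketch of one GPS-style identification round with precomputed coupons (toy group and parameter sizes, hypothetical and far below secure values): the tag answers a challenge with a single multiply-add, and all exponentiations are either precomputed as coupons or performed by the reader.

```python
import random

p, g = 2 ** 31 - 1, 7            # toy multiplicative group mod a prime
s = random.randrange(2 ** 16)    # tag's secret
V = pow(g, s, p)                 # public key

# coupons (r, x = g^r) precomputed offline; r is much larger than s*c
# so that the response statistically hides the secret
coupons = [(r, pow(g, r, p))
           for r in (random.randrange(2 ** 40) for _ in range(4))]

def gps_round(coupon, challenge):
    r, x = coupon
    y = r + s * challenge        # tag side: one multiply-add, no exponentiation
    return x, y

c = random.randrange(2 ** 8)     # reader's challenge
x, y = gps_round(coupons[0], c)
assert pow(g, y, p) == (x * pow(V, c, p)) % p   # reader checks g^y = x * V^c
```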
31

Robert, Jean-Marc. "Contrer l'attaque Simple Power Analysis efficacement dans les applications de la cryptographie asymétrique, algorithmes et implantations". Thesis, Perpignan, 2015. http://www.theses.fr/2015PERP0039/document.

Texto completo da fonte
Resumo:
The development of online communications and the Internet has made encrypted data exchange grow rapidly. This has been possible thanks to asymmetric cryptographic protocols, which make use of arithmetic computations such as modular exponentiation of large integers or elliptic curve scalar multiplication. These computations are performed by various platforms, from smart cards to large and powerful servers, and the platforms are subject to attacks that take advantage of information leaked through side channels, such as instantaneous power consumption or electromagnetic radiation. In this thesis, we improve the performance of cryptographic computations resistant to Simple Power Analysis. For modular exponentiation, we propose the use of multiple multiplications sharing a common operand. For elliptic curve scalar multiplication, we suggest three improvements: over binary fields, we make use of the improved combined operations AB,AC and AB+CD applied to the Double-and-add, Halve-and-add and Double/halve-and-add approaches and to the Montgomery ladder; over binary fields, we propose a parallel Montgomery ladder; and we implement a parallel approach based on the Right-to-left Double-and-add algorithm over binary and prime fields, extending it to Halve-and-add and Double/halve-and-add over binary fields.
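As a pointer to the kind of SPA-resistant primitive discussed here, below is a minimal sketch of the Montgomery ladder for modular exponentiation: every exponent bit triggers the same multiply-then-square pattern, so a simple power trace shows no bit-dependent operation sequence. The Python branch is kept for readability; a hardened implementation would also make the operand swap itself regular. Toy values only.

```python
def montgomery_ladder(base, exp, mod, nbits):
    """Compute base**exp % mod with one multiply and one square per bit."""
    r0, r1 = 1, base % mod          # invariant: r1 == r0 * base (mod mod)
    for i in reversed(range(nbits)):
        if (exp >> i) & 1:
            r0, r1 = (r0 * r1) % mod, (r1 * r1) % mod
        else:
            r1, r0 = (r0 * r1) % mod, (r0 * r0) % mod
    return r0

assert montgomery_ladder(7, 123, 1009, 8) == pow(7, 123, 1009)
```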
32

Cornelie, Marie-Angela. "Implantations et protections de mécanismes cryptographiques logiciels et matériels". Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAM029/document.

Texto completo da fonte
Resumo:
The protection of cryptographic mechanisms is an important challenge in the development of an information system, because they ensure the security of the processed data. Since both hardware and software supports are used, the protection techniques have to be adapted to the context. For a software target, legal means can be used to limit exploitation or use. Nevertheless, it is in general difficult to assert the rights of the owner and to prove that an unlawful act has occurred. An alternative consists of technical means, such as code obfuscation, which make reverse-engineering strategies more complex by directly modifying the parts that need to be protected. Concerning hardware implementations, the attacks can be passive (observation of physical properties) or active, the latter being destructive. Mathematical or hardware countermeasures can be implemented in order to reduce the information leakage during the execution of the algorithm, and thus protect the module against some side channel attacks. In this thesis, we present our contributions on these subjects. We study and present the software and hardware implementations realised to support elliptic curves in extended Jacobi quartic form. Then, we discuss issues linked to the generation of curves usable in cryptography, and we propose an adaptation to the extended Jacobi quartic form together with its implementation. In the second part, we address the notion of source code obfuscation. We detail the techniques we implemented to extend an existing tool, as well as the complexity-evaluation module that was developed.
33

El, Moustaine Ethmane. "Authentication issues in low-cost RFID". Electronic Thesis or Diss., Evry, Institut national des télécommunications, 2013. http://www.theses.fr/2013TELE0030.

Texto completo da fonte
Resumo:
This thesis focuses on issues related to authentication in low-cost radio frequency identification technology, more commonly referred to as RFID. This technology is often referred to as the next technological revolution after the Internet. However, due to the very limited resources in terms of computation, memory and energy on RFID tags, conventional security algorithms cannot be implemented on low-cost tags, making security and privacy an important research subject today. First of all, we investigate scalability in low-cost RFID systems by developing an ns-3 module that simulates the universal low-cost RFID standard EPC Class-1 Generation-2, in order to establish a strict framework for secure identification in low-cost RFID systems. We show that symmetric key cryptography is excluded from use in any scalable low-cost RFID standard. Then, we propose a scalable authentication protocol based on our adaptation of the famous public key cryptosystem NTRU. This protocol is specially designed for low-cost RFID systems and can be efficiently implemented on low-cost tags. Finally, we consider zero-knowledge identification, i.e., the case where no secret needs to be shared between the tag and the reader. Such identification approaches are very helpful in many RFID applications where the tag constantly changes administrative domain. We propose two lightweight zero-knowledge identification approaches based on the GPS and randomized GPS schemes. The proposed approaches consist of storing precomputed values, in the form of coupons, in the back-end. The GPS-based variant can thus preserve privacy, and the number of coupons can be much higher than in other approaches, leading to better resistance to denial-of-service attacks with cheaper tags.
34

Daněk, Michal. "Simulace toroidních cívek v Ansoft Maxwell 3D". Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2009. http://www.nusl.cz/ntk/nusl-218197.

Texto completo da fonte
Resumo:
This master's thesis focuses on the simulation of toroid coils in the Ansoft Maxwell 3D software, which uses the finite element method for electromagnetic field simulation. First, the creation of the geometric model of a toroid coil with seventy-five turns is presented; the model must be debugged and prepared for mesh generation. Physical properties are then assigned to it, giving rise to the physical model: boundaries, excitation current, core material, winding material and the parameters for mesh generation are set. A new material, Kashke K4000, is created in the materials library and its BH curve is defined on the basis of the datasheet. The analysis is performed in two modes. Direct currents (7.5 A; 10 A; 15 A; 20 A; 25 A) and (non)linear materials are used in the magnetostatic solution, while in the transient solution the toroid coil is excited by a current pulse. A source generating this pulse is created in the Ansoft Maxwell Circuit Editor and assigned to the toroid coil as an external source through a terminal. The core material is linear in the transient analysis, because Ansoft Maxwell 3D does not allow nonlinear materials in this solution. The settings also differ between the transient and magnetostatic analyses: in the transient analysis, the end time and time step are entered, together with the time points at which the flux density and electromagnetic field strength are calculated, so that the results can be viewed afterwards. The computed fields are shown as figures in the thesis, the procedure for using the field calculator in post-processing is given as well, and the results are summarized in the conclusion.
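As a hedged sanity check one might run beside such a simulation, the textbook ideal-toroid formulas give rough orders of magnitude. The core radius, cross-section and permeability below are assumed values, not the thesis's; notably, at these excitation currents the linear estimate lands far above ferrite saturation (roughly 0.4 T), which is precisely why the nonlinear BH curve must be supplied to the solver.

```python
import math

mu0 = 4e-7 * math.pi
N, I = 75, 7.5               # turns and the lowest excitation current [A]
r, A = 0.02, 1e-4            # assumed mean core radius [m] and cross-section [m^2]
mur = 4000                   # assumed initial relative permeability (K4000-class)

B = mu0 * mur * N * I / (2 * math.pi * r)        # ideal-toroid flux density
L = mu0 * mur * N ** 2 * A / (2 * math.pi * r)   # ideal-toroid inductance
print(f"linear estimate: B = {B:.1f} T, L = {L * 1e3:.1f} mH")
# B comes out orders of magnitude above saturation -> the core saturates,
# and only the nonlinear BH-curve simulation gives meaningful fields.
```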
35

Král, Petr. "Verifikace nelineárních materiálových modelů betonu". Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2015. http://www.nusl.cz/ntk/nusl-227601.

Texto completo da fonte
Resumo:
This diploma thesis focuses on describing the parameters of the nonlinear material models of concrete implemented in the LS-DYNA computational system, together with nonlinear test calculations in LS-DYNA on selected problems. These consist mainly of simulations of tests of the mechanical and physical properties of concrete under uniaxial compression and tension on cylinders with different boundary conditions, and of the simulation of a bending slab; selected results of the test calculations are then compared with experimental results. The thesis includes the creation of suitable geometric models of the selected problems, their meshing, the description of the parameters and the application of the nonlinear concrete material models, the application of loads and boundary conditions, and the execution of the nonlinear calculations in LS-DYNA. The results are evaluated on the basis of stress-strain and load-displacement diagrams from nonlinear calculations taking strain-rate effects into account, and on the basis of hysteresis curves from nonlinear calculations under cyclic loading. The nonlinear material models of concrete are verified by comparing selected test-calculation results with those obtained from the experiment.
36

Cadien, Adam Samuel. "Applications of the Wavelet Transform to B Mixing Analysis". 2008. http://hdl.handle.net/2429/868.

Texto completo da fonte
Resumo:
The neutral B mesons B0 and B0s can undergo flavor-changing oscillations due to interactions via the weak force. Experiments that measure the frequency of these state transitions produce extremely noisy results that are difficult to analyse. A method for extracting the frequency of B meson oscillations using the continuous wavelet transform is developed here. The physics of B meson mixing is first reviewed, leading to the derivation of a function describing the expected amount of mixing present in B0 and B0s meson decays. This result is then used to develop a new method for analysing the underlying oscillation frequency in B mixing. An introduction to wavelet theory is provided, in addition to details on interpreting daughter wavelet coefficient diagrams. Finally, the effectiveness of the resulting analysis technique, referred to as the Template Fitting Method, is investigated through an application to data generated using Monte Carlo methods.
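The core idea can be sketched numerically: build a noisy decaying oscillation, compute a continuous wavelet transform with a hand-coded Morlet wavelet, and read the frequency off the scale of maximum power. The decay constant, frequency and noise level below are illustrative choices, not Monte Carlo values from the thesis.

```python
import numpy as np

t = np.linspace(0, 10, 2000)
dm = 3.0                                      # oscillation frequency to recover
signal = np.exp(-t / 4) * (1 + np.cos(dm * t)) / 2
signal += np.random.default_rng(1).normal(0, 0.05, t.size)

def cwt_power(sig, times, scales, w0=6.0):
    """Max |CWT coefficient| per scale, with a directly coded Morlet."""
    dt = times[1] - times[0]
    sig = sig - sig.mean()
    power = np.empty(len(scales))
    for k, s in enumerate(scales):
        tau = np.arange(-4 * s, 4 * s, dt)
        morlet = np.exp(1j * w0 * tau / s - (tau / s) ** 2 / 2)
        coef = np.convolve(sig, morlet, mode="same") * dt / np.sqrt(s)
        power[k] = np.abs(coef).max()
    return power

scales = np.linspace(0.5, 10, 60)
best = scales[np.argmax(cwt_power(signal, t, scales))]
print("estimated frequency:", 6.0 / best)     # peaks near w0/scale = dm = 3
```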
37

LOVATO, Christian. "Three-dimensional body scanning: methods and applications for anthropometry". Doctoral thesis, 2013. http://hdl.handle.net/11562/540549.

Texto completo da fonte
Resumo:
In this thesis we describe the computer methods developed and the experiments performed in order to apply whole body 3D scanner technology in support of anthropometry. The output of a whole body scanner is a cloud of points, usually transformed into a triangulated mesh by specific algorithms in order to support 3D visualization of the surface and the extraction of meaningful anthropometric landmarks and measurements. Digital anthropometry has already been used in various studies to assess important health-related parameters. Digital anthropometric analysis is usually performed using device-specific and closed software solutions provided by scanner manufacturers, and often requires careful acquisition, with strong constraints on subject pose. This may create problems in comparing data acquired in different places, in performing large-scale multi-centric studies, and in applying advanced shape analysis tools to the captured models. The aim of our work is to overcome these problems by selecting and customizing geometry processing tools able to create an open and device-independent method for the analysis of body scanner data. We also developed and validated methods to automatically extract feature points, body segments and relevant measurements that can be used in anthropometric and metabolic research. In particular we present three experiments. In the first, using specific digital anthropometry software, we evaluated the performance of the Breuckmann BodySCAN in anthropometric measurement. The subjects were 12 young adults who underwent both manual and 3D digital anthropometry (25 measurements) while wearing close-fitting underwear. Duplicate manual measurements taken by one experienced anthropometrist showed correlations r = 0.975-0.999; their means were significantly different in four out of 25 measurements by Student's t test. Duplicate digital measurements taken by one experienced anthropometrist and two naïve anthropometrists showed individual correlation coefficients r ranging from 0.975 to 0.999, and means were significantly different in one out of 25 measurements. Most measurements taken by the experienced anthropometrist in the manual and digital modes showed significant correlation (intraclass correlation coefficients ranging from 0.855 to 0.995, p<0.0001). We conclude that the Breuckmann BodySCAN is a reliable and effective tool for digital anthropometry. In the second experiment, we compare easily detectable geometrical features obtained from 3D scans of obese female subjects (BMI > 30) with the body composition (measured with a DXA device) of the same subjects, in order to investigate which shape-descriptor measurements correlate best with torso and body fat. The results show that some of the tested geometrical parameters have a relevant correlation, while others do not strongly correlate with body fat. These results support the role of digital anthropometry in investigating health-related physical characteristics and encourage further studies analyzing the relationships between shape descriptors and body composition. Finally, we present a novel method to characterize 3D surfaces through the computation of a function called the Area Projection Transform, which measures the likelihood that points in 3D space are centers of radial symmetry at selected scales (radii).
The transform can be used to robustly detect and characterize salient regions (approximately spherical and cylindrical parts) and is therefore suitable for applications such as anatomical feature detection. In particular, we show that it is possible to build graphs joining these points by following maximal values of the MAPT (Radial Symmetry Graphs), and that these graphs can be used to extract relevant shape properties or to establish point correspondences on models, robustly against holes, topological noise and articulated deformations. It is concluded that the potential applications of whole body scanning technology to anthropometry are countless, limited only by the ability of science to connect the biological phenomenon with the appropriate mathematical/geometrical descriptions.
38

Βαβουράκης, Βασίλειος. "Χρήση μεθόδων συνοριακών στοιχείων και τοπικών ολοκληρωτικών εξισώσεων χωρίς διακριτοποίηση για την αριθμητική επίλυση προβλημάτων κυματικής διάδοσης σε εφαρμογές μη-καταστροφικού ελέγχου". Thesis, 2006. http://nemertes.lis.upatras.gr/jspui/handle/10889/845.

Texto completo da fonte
Resumo:
The aim of this doctoral thesis is twofold: the development and the implementation of numerical techniques for solving wave propagation problems in Non-Destructive Testing applications. In particular, the Boundary Element Method (BEM) and the Local Boundary Integral Equation Method are developed in order to numerically solve static and transient problems in the fields of elasticity and fluid-structure interaction in two dimensions. A major part of the present research is the construction of a computer program for solving such problems. The thesis consists of three sections. In the first section, a thorough description of the theory of the BEM and of Local Meshless Methods (LMM) is given. The second section is dedicated to the numerical implementation of the BEM and LMM for solving steady-state and time-harmonic two-dimensional elastic and acoustic problems, in order to verify the accuracy and the ability of the proposed methodologies to solve the above-mentioned problems. Finally, in the third section, the wave propagation problems of traction-free plates and cylindrical fuel storage tanks are studied from the perspective of Non-Destructive Testing. The BEM and LMM are applied, and spectral methods are also utilized, to draw useful conclusions on the wave propagation phenomena.
39

Ivan, Lucian. "Development of High-order CENO Finite-volume Schemes with Block-based Adaptive Mesh Refinement (AMR)". Thesis, 2011. http://hdl.handle.net/1807/29759.

Texto completo da fonte
Resumo:
A high-order central essentially non-oscillatory (CENO) finite-volume scheme in combination with a block-based adaptive mesh refinement (AMR) algorithm is proposed for the solution of hyperbolic and elliptic systems of conservation laws on body-fitted multi-block meshes. The spatial discretization of the hyperbolic (inviscid) terms is based on a hybrid solution reconstruction procedure that combines an unlimited high-order k-exact least-squares reconstruction technique, following from a fixed central stencil, with a monotonicity-preserving limited piecewise linear reconstruction algorithm. The limited reconstruction is applied to computational cells with under-resolved solution content, and the unlimited k-exact reconstruction procedure is used for cells in which the solution is fully resolved. Switching in the hybrid procedure is determined by a solution smoothness indicator. The hybrid approach avoids the complexity associated with other ENO schemes that require reconstruction on multiple stencils and therefore seems very well suited for extension to unstructured meshes. The high-order elliptic (viscous) fluxes are computed based on a k-order accurate average gradient derived from a (k+1)-order accurate reconstruction. A novel h-refinement criterion based on the solution smoothness indicator is used to direct the steady and unsteady refinement of the AMR mesh. The predictive capabilities of the proposed high-order AMR scheme are demonstrated for the Euler and Navier-Stokes equations governing two-dimensional compressible gaseous flows, as well as for advection-diffusion problems characterized by the full range of Peclet numbers, Pe. The ability of the scheme to accurately represent solutions with smooth extrema and yet robustly handle under-resolved and/or non-smooth solution content (i.e., shocks and other discontinuities) is shown for a range of problems, as is the ability to perform mesh refinement in regions of smooth but under-resolved and/or non-smooth solution content to achieve the desired resolution.
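A one-dimensional caricature of the hybrid reconstruction may help: an unlimited least-squares quadratic on a central stencil is kept where a smoothness indicator reports resolved data, and a minmod-limited linear reconstruction is used elsewhere. The indicator form and threshold below are simplified placeholders, not the thesis's exact definitions.

```python
import numpy as np

def minmod(a, b):
    if a * b <= 0:
        return 0.0
    return a if abs(a) < abs(b) else b

def ceno_reconstruct(u, dx, tol=0.1):
    """Slope/curvature of a per-cell quadratic reconstruction; cells whose
    smoothness indicator exceeds tol fall back to minmod-limited linear."""
    n = len(u)
    slopes, curvs = np.zeros(n), np.zeros(n)
    limited = np.zeros(n, dtype=bool)
    scale = u.max() - u.min() + 1e-12        # global variation for normalization
    for i in range(1, n - 1):
        s = (u[i + 1] - u[i - 1]) / (2 * dx)             # central slope
        c = (u[i + 1] - 2 * u[i] + u[i - 1]) / dx ** 2   # central curvature
        if abs(c) * dx ** 2 / scale < tol:               # resolved: keep k-exact
            slopes[i], curvs[i] = s, c                   # (smooth extrema too)
        else:                                            # under-resolved: limit
            limited[i] = True
            slopes[i] = minmod((u[i + 1] - u[i]) / dx, (u[i] - u[i - 1]) / dx)
    return slopes, curvs, limited

x = np.linspace(0.0, 1.0, 50)
u = np.where(x < 0.5, np.sin(2 * np.pi * x), 2.0)        # smooth part + a jump
_, _, flagged = ceno_reconstruct(u, x[1] - x[0])
print("cells switched to limited linear:", int(flagged.sum()))  # jump cells only
```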