Theses on the topic "Shape Analysis"

To see other types of publications on this topic, follow this link: Shape Analysis.

Create a correct reference in APA, MLA, Chicago, Harvard and several other styles


Consult the top 50 dissertations for your research on the topic "Shape Analysis".

Next to each source in the reference list there is an "Add to bibliography" button. Click this button, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online when this information is included in the metadata.

Browse dissertations from many disciplines and organise your bibliography correctly.

1

Krantz, Amanda. « Temporal Multivariate Distribution Analysis of Cell Shape Descriptors ». Thesis, Umeå universitet, Institutionen för fysik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-182264.

Texte intégral
Résumé :
In early drug discovery and the study of the effects of new chemical compounds on cancer cells, the change in cell shape over time provides vital information about cell health. Live-cell image analysis systems can be used to extract cell-shape describing parameters of individual cells during exposure to new drugs. Multivariate statistical analysis is then applied to understand cell morphology and the correlation between various shape descriptors. Principal component analysis integrated with histogram distribution analysis is a method to compress and summarize important cellular data features without loss of information about the individual cell shapes. A workflow for this kind of analysis is being developed at Sartorius and aims to aid in the biological interpretation of different experimental results. However, the time dimension of the experiments has not yet been fully explored, and a temporal view of the data would increase understanding of the change in cell morphology metrics over time. In this study, we apply the workflow to a data set generated by the IncuCyte microscope and investigate a possible extension towards time-series analysis of the data. The results demonstrate how we can use principal component analysis in two steps together with histogram distributions of different experimental conditions to study cell shapes over time. Scores and loadings from the analysis are used as new observations representing the original data, and the evolution of the score values can be backtracked to cell morphology metrics changing in time. The results show a comprehensive way of studying how cells from all experimental conditions relate to each other during the course of an experiment.
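The two-step idea sketched in this abstract can be illustrated in a few lines of Python. The snippet below is only a minimal, synthetic illustration (random "descriptor" values, arbitrary bin counts), not the Sartorius workflow or IncuCyte data.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_timepoints, n_cells, n_bins = 24, 500, 30
bin_edges = np.linspace(0.0, 1.0, n_bins + 1)

# Synthetic "eccentricity" values whose distribution drifts over time.
histograms = np.empty((n_timepoints, n_bins))
for t in range(n_timepoints):
    values = rng.beta(2.0 + 0.1 * t, 5.0, size=n_cells)   # drug effect ~ slow drift
    hist, _ = np.histogram(values, bins=bin_edges, density=True)
    histograms[t] = hist

# PCA step: each time point is one observation (a whole histogram).
pca = PCA(n_components=2)
scores = pca.fit_transform(histograms)

for t, (pc1, pc2) in enumerate(scores):
    print(f"t={t:2d}  PC1={pc1:+.3f}  PC2={pc2:+.3f}")
# Backtracking: pca.components_[0] shows which histogram bins drive PC1,
# i.e. which part of the shape distribution changes most over time.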
Styles APA, Harvard, Vancouver, ISO, etc.
2

黃美香 et Mee-heung Cecilia Wong. « Shape analysis ». Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1994. http://hub.hku.hk/bib/B31211999.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
3

Wong, Mee-heung Cecilia. « Shape analysis / ». [Hong Kong : University of Hong Kong], 1994. http://sunzi.lib.hku.hk/hkuto/record.jsp?B13637642.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
4

Bell, Jason. « An analysis of global shape processing using radial frequency contours ». University of Western Australia. School of Psychology, 2008. http://theses.library.uwa.edu.au/adt-WU2008.0051.

Texte intégral
Résumé :
Encoding the shape of objects within the visual environment is one of the important roles of the visual system. This thesis investigates the proposition that human sensitivity to a broad range of closed-contour shapes is underpinned by multiple shape channels (Loffler, Wilson, & Wilkinson, 2003). Radial frequency (RF) contours are a novel type of stimulus that can be used to represent simple and complex shapes; they are created by sinusoidally modulating the radius of a circle, where the number of cycles of modulation defines the RF number (Wilkinson, Wilson, & Habak, 1998). This thesis uses RF contours to enhance our understanding of the visual processes which support shape perception. The first part of the thesis combines low and high RF components, which Loffler et al. have suggested are detected by separate global and local processes respectively, onto a single contour and shows that, even when combined, the components are detected independently at threshold. The second part of the thesis combines low RF components from across the range where global processing has been demonstrated (up to approximately RF10) onto a single contour in order to test for interactions between them. The resulting data reveal that multiple narrow-band contour shape channels are required to account for performance, and also indicate that these shape channels have inhibitory connections between them. The third part of the thesis examines the local characteristics which are used to represent shape information within these channels. The results show that both the breadth (polar angle subtended) of individual curvature features, and their relative angular positions (in relation to object centre) are important for representing RF shapes; however, processing is not tuned for object size, or for modulation amplitude. In addition, we show that luminance and contrast cues are effectively combined at the level where these patterns are detected, indicating a single later processing stage is adequate to explain performance for these pattern characteristics. Overall the findings show that narrow-band shape channels are a useful way to explain sensitivity to a broad range of closed-contour shapes. Modifications to the current RF detection model (Poirier & Wilson, 2006) are required to incorporate inhibitory connections between shape channels and also, to accommodate the effective integration of luminance and contrast cues.
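For readers unfamiliar with radial frequency patterns, the short Python sketch below constructs such contours by sinusoidally modulating the radius of a circle; the base radius, amplitudes and phases are arbitrary illustrative values, not the stimuli used in the thesis.

import numpy as np

def rf_contour(r0, components, n_points=720):
    """Return (x, y) of a contour whose radius is a sum of sinusoidal modulations.

    components: list of (frequency, amplitude, phase) triples; one triple gives a
    pure RF pattern, several triples give a compound contour.
    """
    theta = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    r = np.full_like(theta, r0)
    for freq, amp, phase in components:
        r += r0 * amp * np.sin(freq * theta + phase)
    return r * np.cos(theta), r * np.sin(theta)

# An RF3 ("triangular") component plus a low-amplitude RF5 component on one contour.
x, y = rf_contour(r0=1.0, components=[(3, 0.05, 0.0), (5, 0.02, np.pi / 4)])
print("first contour points:", np.round(x[:3], 3), np.round(y[:3], 3))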
Styles APA, Harvard, Vancouver, ISO, etc.
5

Sroufe, Paul. « E‐Shape Analysis ». Thesis, University of North Texas, 2009. https://digital.library.unt.edu/ark:/67531/metadc12201/.

Texte intégral
Résumé :
The motivation of this work is to understand E-shape analysis and how it can be applied to various classification tasks. Its distinguishing feature is that it considers not only what information is contained, but how that information looks. This makes E-shape analysis language independent and, to some extent, size independent. In this thesis, I present a new mechanism, called E-shape analysis for email, to characterize an email without using content or context. I explore the applications of email shape through a case study on botnet detection and two further possible applications: spam filtering and social-context-based fingerprinting. The second part of this thesis applies E-shape analysis to the activity recognition of humans. Using the Android platform and a T-Mobile G1 phone, I collect data from the triaxial accelerometer and use it to classify the motion behavior of a subject.
Styles APA, Harvard, Vancouver, ISO, etc.
6

Vittert, Liberty. « Facial shape analysis ». Thesis, University of Glasgow, 2015. http://theses.gla.ac.uk/6669/.

Texte intégral
Résumé :
Stereophotogrammetric imaging systems produce representations of surfaces (two-dimensional manifolds in three-dimensional space) through triangulations of a large number of estimated surface points. Traditional forms of analysis of these surfaces are based on point locations (manually marked anatomical landmarks) as described in Chapter 1. An advanced application of these types of landmarks will be thoroughly examined in Chapter 2 through the concept of Ghost Imaging. The results of this chapter necessitated a reliability study of stereophotogrammetric imaging systems which is discussed in Chapter 3. Given the results of the reliability study, an investigation into new definitions of landmarks and facial shape description is undertaken in Chapter 4. A much richer representation is expressed by the curves which track the ridges and valleys of the dense surface and by the relatively smooth surface patches which lie between these curves. New automatic methods for identifying anatomical curves and the resulting full surface representation, based on shape index, curvature, smoothing techniques, warping, and bending energy, are described. Chapter 5 discusses new and extended tools of analysis that are necessary for this richer representation of facial shape. These methods will be applied in Chapter 6 to different shape objects, including the human face, mussel shells, and computational imaging comparisons. Issues of sexual dimorphism (differences in shapes between males and females), change in shape with age, as well as pre- and post-facial surgical intervention will be explored. These comparisons will be made using new methodological tools developed specifically for the new curve and surface identification method. In particular, the assessment of facial asymmetry and the questions involved in comparing facial shapes in general, at both the individual and the group level, will also be considered. In Chapter 7, Bayesian methods are explored to determine further ways in which to understand and compare human facial features. In summary, this thesis shows a novel method of curve and full facial mesh identification that is used, successfully, in pilot case studies of multiple types of surfaces. It then shows a novel proof of principle for using Bayesian methods to create a fully automatic process in facial shape characterisation.
Styles APA, Harvard, Vancouver, ISO, etc.
7

Sroufe, Paul Dantu Ram. « E-shape analysis ». [Denton, Tex.] : University of North Texas, 2009. http://digital.library.unt.edu/ark:/67531/metadc12201.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
8

Janan, Faraz. « Shape analysis in mammograms ». Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:96aaecce-a7bd-404f-9916-778603dbb396.

Texte intégral
Résumé :
The number of women diagnosed with breast cancer continues to rise year on year. Breast cancer is now the most common type of cancer in the UK, with over 55,000 cases reported last year. In most cases, mammography is the first step towards diagnosing breast cancer. However, it continues to have many practical limitations as compared to more sophisticated modalities such as MRI. The relatively low cost of mammography, together with the ever increasing risk of women contracting the disease, has led to many developed countries having a breast screening program. These routine breast screens are taken at different points in time and are called temporal mammograms. Currently, a radiologist tends to qualitatively assess temporal mammograms and look for any abnormalities or suspicious regions that might be of concern. In this thesis, we develop an automatic shape analysis model that can detect and quantify such changes inside the breast. This will not only help in early diagnosis of the disease, which is key to survival, but will potentially aid prognosis and post treatment care. The core of this thesis is the use of Circular Integral Invariants. We explore their multi-scale properties and use them for image smoothing to reduce image noise and enhance features for segmentation. We implement, modify and enhance a segmentation method which previously has been successfully used to acquire breast regions of interest. We apply such Integral Invariants for shape description, to be used for shape matching as well as for subdividing shapes into sub-regions and quantifying the differences between two such shapes. We combine boundary information with the information from inside a shape, thus eccentrically transforming shapes before describing their structure. We develop a novel false positives reduction method based on Integral Invariants scale space. A second aspect of the thesis is the evaluation of and emphasis on the use of breast density maps against the commonly used intensity maps or x-rays. We find density maps sufficient to use in clinical practice. The methods developed in this thesis aim to help clinicians in making diagnostic decisions at the point of care. Our shape analysis model is easy to compute, fast and general enough in nature that it could be deployed in a wide range of applications, beyond mammography.
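The circular Integral Invariant mentioned above has a simple geometric reading: for each boundary point, the fraction of a disc centred there that lies inside the shape. The Python sketch below computes that quantity on a synthetic ellipse mask; it illustrates the concept only and is not the thesis implementation or its scale-space extension.

import numpy as np
from scipy import ndimage

def integral_invariant(mask, radius):
    """Return the disc-area fraction lying inside `mask` for every pixel."""
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    disc = (xx**2 + yy**2 <= radius**2).astype(float)
    inside = ndimage.convolve(mask.astype(float), disc, mode="constant")
    return inside / disc.sum()

# Synthetic shape: a filled ellipse standing in for a breast region of interest.
h, w = 128, 128
yy, xx = np.mgrid[:h, :w]
mask = ((xx - 64) / 40.0) ** 2 + ((yy - 64) / 25.0) ** 2 <= 1.0

invariant = integral_invariant(mask, radius=8)
boundary = mask & ~ndimage.binary_erosion(mask)
values = invariant[boundary]          # convexities < 0.5, concavities > 0.5
print(f"{boundary.sum()} boundary pixels, invariant range "
      f"[{values.min():.2f}, {values.max():.2f}]")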
Styles APA, Harvard, Vancouver, ISO, etc.
9

Petty, Emma Marie. « Shape analysis in bioinformatics ». Thesis, University of Leeds, 2009. http://etheses.whiterose.ac.uk/822/.

Texte intégral
Résumé :
In this thesis we explore two main themes, both of which involve proteins. The first area of research focuses on the analyses of proteins displayed as spots on 2-dimensional planes. The second area of research focuses on a specific protein and how interactions with this protein can naturally prevent or, in the presence of a pesticide, cause toxicity. The first area of research builds on previously developed EM methodology to infer the matching and transformation necessary to superimpose two partially labelled point configurations, focusing on the application to 2D protein images. We modify the methodology to account for the possibility of missing and misallocated markers, where markers make up the labelled proteins manually located across images. We provide a way to account for the likelihood of an increased edge variance within protein images. We find that slight marker misallocations do not greatly influence the final output superimposition when considering data simulated to mimic the given dataset. The methodology is also successfully used to automatically locate and remove a grossly misallocated marker within the given dataset before further analysis is carried out. We develop a method to create a union of replicate images, which can then be used alone in further analyses to reduce computational expense. We describe how the data can be modelled to enable inference about the quality of a dataset, a property often overlooked in protein image analysis. To complete this line of research we provide a method to rank points that are likely to be present in one group of images but absent in a second group. The produced score is used to highlight the proteins that are not present in both image sets representing control or diseased tissue, therefore providing biological indicators which are vitally important to improve the accuracy of diagnosis. In the second area of research, we test the hypothesis that pesticide toxicity is related to the shape similarity between the pesticide molecule itself and the natural ligand of the protein to which a pesticide will bind (and ultimately cause toxicity). A ligand of a protein is simply a small molecule that will bind to that protein. It seems intuitive that the similarities between a naturally formed ligand and a synthetically developed ligand (the pesticide) may be an indicator of how well a pesticide and the protein bind, as well as provide an indicator of pesticide toxicity. A graphical matching algorithm is used to infer the atomic matches across ligands, with Procrustes methodology providing the final superimposition before a measure of shape similarity is defined considering the aligned molecules. We find evidence that the measure of shape similarity does provide a significant indicator of the associated pesticide toxicity, as well as providing a more significant indicator than previously found biological indicators. Previous research has found that the properties of a molecule in its bioactive form are more suitable indicators of an associated activity. Here, these findings dictate that the docked conformation of a pesticide within the protein will provide more accurate indicators of the associated toxicity. So next we use a docking program to predict the docked conformation of a pesticide. We provide a technique to calculate the similarity between the docks of both the pesticide and the natural ligand. A similar technique is used to provide a measure for the closeness of fit between a pesticide and the protein.
Both measures are then considered as independent variables for the prediction of toxicity. In this case the results show potential for the calculated variables to be useful toxicity predictors, though further analysis is necessary to properly explore their significance.
Styles APA, Harvard, Vancouver, ISO, etc.
10

Li, Bingjue. « Variable-Geometry Extrusion Die Synthesis and Morphometric Analysis Via Planar, Shape-Changing Rigid-Body Mechanisms ». University of Dayton / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1497529085483053.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
11

Boucher, Maxime. « Shape analysis of cortical folds ». Thesis, McGill University, 2011. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=104620.

Texte intégral
Résumé :
The cortical surface in humans is comprised of several folds which are juxtaposed together to form a biologically meaningful pattern. For many biological reasons, the geometry of this pattern changes in time: folds get longer, wider and deeper. The relationship between the shape of cortical folds and biological factors such as gender and aging can be studied using statistical shape analysis techniques.An essential step in this process is the matching of the folds on different cortical surfaces. Fold matching can be done using a scalar field describing the relative depth of folds on a surface. The match between folds is established by comparing the scalar field of each surface. The first contribution of this thesis is to define a function as a solution to a sparse linear system, which characterizes the relative depth of the folds on a surface. This thesis shows that the accuracy of surface registration is improved by 11% using the proposed scalar field as a shape descriptor to register surfaces. The second contribution of this thesis is to propose a statistical method to detect directional differences in the shape of folds on the cerebral cortex. A directional difference can be, for example, a difference in its length in the direction parallel to it. Previous statistical tests for shape analysis only determine if two folds are different locally, and they typically average across multiple local directions. The method proposed in this thesis provides directional information to better understand the factors that relate to differences in fold shape.The third contribution of this thesis is the development of anisotropic diffusion kernels on surfaces to highlight shape differences that affect fold shape. Diffusion of scalar and tensor fields is used in statistical shape analysis to increase the detection power of statistical tests. However, diffusion also decreases the capacity of statistical tests to localize significant shape differences. Prior to this thesis, diffusion kernels used on surfaces were isotropic in shape and blurred information over multiple folds. Anisotropic diffusion kernels, on the other hand, can increase statistical power by concentrating diffusion along fold orientation and highlighting the variability in shape that is localized to specific folds.In summary, this thesis provides tools that increase the amount of information that can be gathered about the morphometry of the cerebral cortex using statistical shape analysis. The accuracy of surface registration is increased, the analysis of the underlying deformation field allows us to determine if a difference in shape affects fold length or width and diffusion kernels produce statistical results that highlight the variability in shape that is localized to specific folds.
La surface corticale du cerveau humain contient plusieurs plis, ou sillons, qui juxtaposés forment un motif cohérent. Pour plusieurs raisons biologiques, la géométrie du motif formé par les plis corticaux change avec le temps: les plis deviennent plus longs, plus ouverts et plus creux. La relation entre la forme des plis corticaux et plusieurs facteurs biologiques, tels que le vieillissement et le genre du sujet peut être étudiée en utilisant des méthodes statistiques d'analyse de formes.Une étape essentielle de l'analyse statistique de la forme du cerveau humain est la mise en correspondance des sillons de surfaces corticales différentes. La mise en correspondance peut se faire à l'aide d'un champ scalaire décrivant la profondeur relative des sillons sur une surface. La correspondance entre les sillons est établie en comparant le champ scalaire respectif de chaque surface. La première contribution de cette thèse est de décrire un champ scalaire rapide à calculer et qui caractérise la profondeur relative des sillons sur une surface. L'utilisation du champ scalaire proposé dans cette thèse a amené une amélioration de 11% de la précision de la mise en correspondance. La seconde contribution de cette thèse est une méthode statistique permettant de localiser des différences directionnelles dans la forme des plis. Par exemple, un sillon plus long aura une différence de longueur dans la direction parallèle au sillon. La méthode statistique présentée dans cette thèse permet de déterminer la direction selon laquelle la forme des sillons diffère le plus. Les autres méthodes statistiques ne pouvant que déterminer si localement deux sillons sont différents, la méthode proposée dans cette thèse procure davantage d'information pour comprendre la forme des sillons.La troisième contribution de cette thèse est de proposer une méthode de diffusion anisotrope sur la surface corticale afin de faire ressortir les différences qui affectent la forme des sillons. La diffusion de champs scalaires et de tenseurs est utilisée afin d'augmenter la capacité de détection des tests statistiques. Par contre, la diffusion réduit aussi la capacité de localisation des méthodes statistiques. Avant cette thèse, la diffusion sur la surface se faisait de façon isotrope et l'information sur la forme des sillons était diffusée sur une région couvrant plusieurs sillons. La diffusion anisotrope permet d'augmenter le pouvoir de détection des tests statistiques sans pour autant réduire la capacité de mettre en évidence une différence dans la forme qui est localisée à un sillon spécifique.Le résultat de cette thèse est qu'il est possible d'analyser la forme des sillons du cortex cérébral en utilisant une méthode générale d'analyse de déformations. La précision de la mise en correspondance a été augmentée, l'analyse des champs de déformations permet de déterminer si une différence affecte la longueur ou la largeur du sillon et la diffusion utilisée pour augmenter la puissance des tests statistiques permet de mettre en évidence des différences dans la forme qui est localisée à un sillon spécifique.
Styles APA, Harvard, Vancouver, ISO, etc.
12

Miller, James. « Shape curve analysis using curvature ». Thesis, University of Glasgow, 2009. http://theses.gla.ac.uk/854/.

Texte intégral
Résumé :
Statistical shape analysis is a field for which there is growing demand. One of the major drivers for this growth is the number of practical applications which can use statistical shape analysis to provide useful insight. An example of one of these practical applications is investigating and comparing facial shapes. An ever improving suite of digital imaging technology can capture data on the three-dimensional shape of facial features from standard images. A field for which this offers a large amount of potential analytical benefit is the reconstruction of the facial surface of children born with a cleft lip or a cleft lip and palate. This thesis will present two potential methods for analysing data on the facial shape of children who were born with a cleft lip and/or palate using data from two separate studies. One form of analysis will compare the facial shape of one year old children born with a cleft lip and/or palate with the facial shape of control children. The second form of analysis will look for relationships between facial shape and psychological score for ten year old children born with a cleft lip and/or palate. While many of the techniques in this thesis could be extended to different applications much of the work is carried out with the express intention of producing meaningful analysis of the cleft children studies. Shape data can be defined as the information remaining to describe the shape of an object after removing the effects of location, rotation and scale. There are numerous techniques in the literature to remove the effects of location, rotation and scale and thereby define and compare the shapes of objects. A method which does not require the removal of the effects of location and rotation is to define the shape according to the bending of important shape curves. This method can naturally provide a technique for investigating facial shape. When considering a child's face there are a number of curves which outline the important features of the face. Describing these feature curves gives a large amount of information on the shape of the face. This thesis looks to define the shape of children's faces using functions of bending, called curvature functions, of important feature curves. These curvature functions are not only of use to define an object, they are apt for use in the comparison of two or more objects. Methods to produce curvature functions which provide an accurate description of the bending of face curves will be introduced in this thesis. Furthermore, methods to compare the facial shape of groups of children will be discussed. These methods will be used to compare the facial shape of children with a cleft lip and/or palate with control children. There is much recent literature in the area of functional regression where a scalar response can be related to a functional predictor. A novel approach for relating shape to a scalar response using functional regression, with curvature functions as predictors, is discussed and illustrated by a study into the psychological state of ten year old children who were born with a cleft lip or a cleft lip and palate. The aim of this example is to investigate whether any relationship exists between the bending of facial features and the psychological score of the children, and where relationships exist to describe their nature. The thesis consists of four parts. Chapters 1 and 2 introduce the data and give some background to the statistical techniques. 
Specifically, Chapter 1 briefly introduces the idea of shape and how the shape of objects can be defined using curvature. Furthermore, the two studies into facial shape are introduced which form the basis of the work in this thesis. Chapter 2 gives a broad overview of some standard shape analysis techniques, including Procrustes methods for alignment of objects, and gives further details of methods based on curvature. Functional data analysis techniques which are of use throughout the thesis are also discussed. Part 2 consists of Chapters 3 to 5 which describe methods to find curvature functions that define the shape of important curves on the face and compare these functions to investigate differences between control children and children born with a cleft lip and/or palate. Chapter 3 considers the issues with finding and further analysing the curvature functions of a plane curve whilst Chapter 4 extends the methods to space curves. A method which projects a space curve onto two perpendicular planes and then uses the techniques of Chapter 3 to calculate curvature is introduced to facilitate anatomical interpretation. Whilst the midline profile of a control child is used to illustrate the methods in Chapters 3 and 4, Chapter 5 uses curvature functions to investigate differences between control children and children born with a cleft lip and/or palate in terms of the bending of their upper lips. Part 3 consists of Chapters 6 and 7 which introduce functional regression techniques and use these to investigate potential relationships between the psychological score and facial shape, defined by curvature functions, of cleft children. Methods to both display graphically and formally analyse the regression procedure are discussed in Chapter 6 whilst Chapter 7 uses these methods to provide a systematic analysis of any relationship between psychological score and facial shape. The final part of the thesis presents conclusions discussing both the effectiveness of the methods and some brief anatomical/psychological findings. There are also suggestions of potential future work in the area.
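The basic object used throughout these chapters, a curvature function of a discretised plane curve, can be illustrated with a short finite-difference computation. The sketch below uses a synthetic sine-shaped "profile" rather than any study data.

import numpy as np

def curvature_function(x, y):
    """Signed curvature of a plane curve sampled at points (x[i], y[i])."""
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / np.power(dx * dx + dy * dy, 1.5)

t = np.linspace(0.0, 1.0, 200)
x = t
y = 0.1 * np.sin(2.0 * np.pi * t)          # stand-in for a midline facial profile
kappa = curvature_function(x, y)
print("max |curvature| =", float(np.max(np.abs(kappa))))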
Styles APA, Harvard, Vancouver, ISO, etc.
13

Mullan, Claire. « Shape analysis of synthetic diamond ». Thesis, Loughborough University, 1997. https://dspace.lboro.ac.uk/2134/10841.

Texte intégral
Résumé :
Two-dimensional images of synthetic industrial diamond particles were obtained using a camera, framegrabber and PC-based image analysis software. Various methods for shape quantification were applied, including two-dimensional shape factors, Fourier series expansion of radius as a function of angle, boundary fractal analysis, polygonal harmonics, and corner counting methods. The shape parameter found to be the most relevant was axis ratio, defined as the ratio of the minor axis to the major axis of the ellipse with the same second moments of area as the particle. Axis ratio was used in an analysis of the sorting of synthetic diamonds on a vibrating table. A model was derived based on the probability that a particle of a given axis ratio would travel to a certain bin. The model described the sorting of bulk material accurately but it was found not to be applicable if the shape mix of the feed material changed dramatically. This was attributed to the fact that particle-particle interference was not taken into account. An expert system and a neural network were designed in an attempt to classify particles by a combination of four shape parameters. These systems gave good results when discriminating between particles from bin 1 and bin 9 but not for neighbouring bins or for more than two classes. The table sorting process was discussed in light of the findings and it was demonstrated that the shape distributions of sorted diamond fractions can be quantified in a useful and meaningful way.
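The axis-ratio shape factor described above (minor over major axis of the ellipse with the same second moments of area) is straightforward to compute from image moments. The sketch below does so for a synthetic rotated ellipse standing in for a diamond particle; it is an illustration, not the original analysis software.

import numpy as np

def axis_ratio(mask):
    ys, xs = np.nonzero(mask)
    xs = xs - xs.mean()
    ys = ys - ys.mean()
    cov = np.cov(np.vstack([xs, ys]))       # central second moments (normalised)
    eigvals = np.linalg.eigvalsh(cov)       # ascending: [minor, major]
    return float(np.sqrt(eigvals[0] / eigvals[1]))

yy, xx = np.mgrid[:200, :200]
u = (xx - 100) * np.cos(0.5) + (yy - 100) * np.sin(0.5)
v = -(xx - 100) * np.sin(0.5) + (yy - 100) * np.cos(0.5)
particle = (u / 60.0) ** 2 + (v / 25.0) ** 2 <= 1.0

print(f"axis ratio = {axis_ratio(particle):.2f}")   # ~25/60 = 0.42 for this ellipse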
Styles APA, Harvard, Vancouver, ISO, etc.
14

Wood, Christopher Martin. « Shape analysis using Young's fringes ». Thesis, Liverpool John Moores University, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.261442.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
15

Alfahad, Mai F. A. M. « Statistical shape analysis of helices ». Thesis, University of Leeds, 2018. http://etheses.whiterose.ac.uk/21675/.

Texte intégral
Résumé :
Consider a sequence of equally spaced points along a helix in three-dimensional space, observed subject to statistical noise. In this thesis, a maximum likelihood (ML) method is developed to estimate the parameters of the helix. Statistical properties of the estimator are studied and comparisons are made to other estimators found in the literature. Methods are established here for fitting both unkinked and kinked helices. For an unkinked helix, an initial estimate of the helix axis is obtained by a modified eigen-decomposition or by a method from the literature. The Mardia-Holmes model can be used to estimate the initial helix axis, but it is often not very successful since it requires initial parameters. A better method for initial axis estimation is the Rotfit method. Given the axis, we minimise the residual sum of squares (RSS) to estimate the helix parameters and then optimise the axis estimate. For a kinked helix, we specify a test statistic by simulating the null distribution of unkinked helices. If the kink position is known, the test statistic approximately follows an F-distribution. If the null hypothesis is rejected, i.e. the helix has a change point, the helix is cut into two sub-helices at the point where the test statistic is maximised. Test statistics are then studied to assess how these two sub-helices differ from each other, using a parametric bootstrap procedure. The shapes of protein alpha-helices are used to illustrate the procedure.
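As a rough illustration of the set-up (not the estimators developed in the thesis), the snippet below generates equally spaced noisy points on a helix, takes the first principal direction of the point cloud as a crude initial axis estimate, and evaluates a residual sum of squares; all parameters are invented.

import numpy as np

rng = np.random.default_rng(1)
n, radius, pitch, sigma = 60, 1.0, 0.3, 0.05
t = np.arange(n) * 0.25                               # equally spaced parameter
helix = np.column_stack([radius * np.cos(t),
                         radius * np.sin(t),
                         pitch * t])
points = helix + rng.normal(scale=sigma, size=helix.shape)

# Crude initial axis estimate: principal direction of the centred point cloud
# (a stand-in for the modified eigen-decomposition mentioned in the abstract).
centred = points - points.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
axis = vt[0]                                          # should be close to (0, 0, 1)

rss = float(np.sum((points - helix) ** 2))
print("estimated axis:", np.round(axis, 3), " RSS against true helix:", round(rss, 3))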
Styles APA, Harvard, Vancouver, ISO, etc.
16

Zackula-David, Rosalee E. « Assessing schizophrenia with shape analysis / ». free to MU campus, to others for purchase, 2003. http://wwwlib.umi.com/cr/mo/fullcit?p1418079.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
17

Fonn, Eivind. « Computing Metrics on Riemannian Shape Manifolds : Geometric shape analysis made practical ». Thesis, Norwegian University of Science and Technology, Department of Mathematical Sciences, 2009. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9868.

Texte intégral
Résumé :

Shape analysis and recognition is a field ripe with creative solutions and innovative algorithms. We give a quick introduction to several different approaches, before basing our work on a representation introduced by Klassen et al., considering shapes as equivalence classes of closed curves in the plane under reparametrization, and invariant under translation, rotation and scaling. We extend this to a definition for nonclosed curves, and prove a number of results, mostly concerning under which conditions on these curves the set of shapes becomes a manifold. We then motivate the study of geodesics on these manifolds as a means to compute a shape metric, and present two methods for computing such geodesics: the shooting method from Klassen et al. and the "direct" method, new to this work. Some numerical experiments are performed, which indicate that the direct method performs better for realistically chosen parameters, albeit not asymptotically.
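The angle-function representation used in this line of work can be sketched very simply: the snippet below resamples the tangent direction of a closed polygon by arc length and compares two curves with a flat L2 distance. This is only a naive illustration of the representation, not the geodesic computed by the shooting or direct methods.

import numpy as np

def angle_function(points, n_samples=256):
    """Direction function of a closed polygon, resampled to uniform arc length."""
    diffs = np.diff(np.vstack([points, points[:1]]), axis=0)
    seg_len = np.hypot(diffs[:, 0], diffs[:, 1])
    arc = np.concatenate([[0.0], np.cumsum(seg_len)])[:-1] / seg_len.sum()
    theta = np.unwrap(np.arctan2(diffs[:, 1], diffs[:, 0]))
    s = np.linspace(0.0, 1.0, n_samples, endpoint=False)
    resampled = np.interp(s, arc, theta)
    return resampled - resampled.mean()          # quotient out rotation

t = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
circle = np.column_stack([np.cos(t), np.sin(t)])
ellipse = np.column_stack([1.5 * np.cos(t), 0.8 * np.sin(t)])

d = np.sqrt(np.mean((angle_function(circle) - angle_function(ellipse)) ** 2))
print(f"flat angle-function distance, circle vs ellipse: {d:.3f}")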

Styles APA, Harvard, Vancouver, ISO, etc.
18

Wang, Binhai. « Contour-based shape description and analysis for shape retrieval and classification ». Thesis, University of East Anglia, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.443090.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
19

Maurel, Pierre. « Shape gradients, shape warping and medical application to facial expression analysis ». Paris 7, 2008. http://www.theses.fr/2008PA077151.

Texte intégral
Résumé :
Cette thèse porte sur le domaine des statistiques de formes. Une forme peut être une courbe plane en 2D ou une surface en 3D. Afin de pouvoir définir ces statistiques (moyenne, modes de variation), nous avons étudié plus précisément, dans une première partie plutôt théorique, le recalage et la mise en correspondance de deux formes entre elle. Cela consiste à développer des moyens de déformer une forme sur une autre. Des distances sont définies entre deux formes et une descente de gradient est effectuée pour déformer la première en la seconde. Nous avons donc défini la notion de gradient sur l'espace des formes et généralisé cette définition pour définir des champs de déformations qui ne dérivent plus d'un gradient. Cette notion a été appliquée pour construire une méthode permettant de déformer une courbe en une autre en étant guidé par des points d'amers définissant des correspondances entre ces deux courbes. Dans une seconde partie, nous présentons une application de ces méthodes à l'analyse d'expressions faciales de patients épileptiques en collaboration avec l'équipe du Professeur Patrick Chauvel à l'hôpital de La Timone à Marseille. Nous avons développé des techniques pour quantifier ces expressions faciales, et ainsi pouvoir les comparer entre elles. Nous avons ensuite étudié un moyen de mettre en relation ces expressions faciales (enregistrées pendant des crises d'épilepsies) avec le signal électrique enregistré simultanément dans le cerveau des patients. Cette mise en relation répond à une demande de l'équipe médicale qui se sert de cette information parmi d'autres pour affiner leur diagnostic
This work focuses on the issue of modeling prior knowledge about shapes, an essential problem in Computer Vision. A shape can be a planar curve in 2D or a surface in 3D. In order to model shape statistics, we studied, in a first and rather theoretical part, shape warping and matching. We start by defining distances between shapes. Then, in order to deform one shape onto another, we define the gradient of this shape functional and apply a gradient descent scheme. We also developed a generalization of the gradient notion which can take priors into account and which does not derive from any inner product. We used this new notion to define an extension of the well-known level set method that can handle landmark knowledge. On the application side, and in collaboration with Professor Patrick Chauvel at La Timone Hospital, Marseille, we worked on the task of correlating facial expressions with the electrical activity in the brain during epileptic seizures. To this end, we developed a method for fitting a three-dimensional face model under uncontrolled imaging conditions and used this method to analyze the facial expressions of epileptic patients. Finally, we present a first step towards interrelating the electrical activity produced by the brain during a seizure (and recorded by stereoelectroencephalography electrodes) with the facial expressions.
Styles APA, Harvard, Vancouver, ISO, etc.
20

Nain, Delphine. « Scale-based decomposable shape representations for medical image segmentation and shape analysis ». Diss., Available online, Georgia Institute of Technology, 2006, 2006. http://etd.gatech.edu/theses/available/etd-11192006-184858/.

Texte intégral
Résumé :
Thesis (Ph. D.)--Computing, Georgia Institute of Technology, 2007.
Aaron Bobick, Committee Chair ; Allen Tannenbaum, Committee Co-Chair ; Greg Turk, Committee Member ; Steven Haker, Committee Member ; W. Eric. L. Grimson, Committee Member.
Styles APA, Harvard, Vancouver, ISO, etc.
21

Valdés, Amaro Daniel Alejandro. « Statistical shape analysis for bio-structures : local shape modelling, techniques and applications ». Thesis, University of Warwick, 2009. http://wrap.warwick.ac.uk/3810/.

Texte intégral
Résumé :
A Statistical Shape Model (SSM) is a statistical representation of a shape obtained from data to study variation in shapes. Work on shape modelling is constrained by many unsolved problems, for instance, difficulties in modelling local versus global variation. SSM have been successfully applied in medical image applications such as the analysis of brain anatomy. Since brain structure is so complex and varies across subjects, methods to identify morphological variability can be useful for diagnosis and treatment. The main objective of this research is to generate and develop a statistical shape model to analyse local variation in shapes. Within this particular context, this work addresses the question of what are the local elements that need to be identified for effective shape analysis. Here, the proposed method is based on a Point Distribution Model and uses a combination of other well known techniques: Fractal analysis; Markov Chain Monte Carlo methods; and the Curvature Scale Space representation for the problem of contour localisation. Similarly, Diffusion Maps are employed as a spectral shape clustering tool to identify sets of local partitions useful in the shape analysis. Additionally, a novel Hierarchical Shape Analysis method based on the Gaussian and Laplacian pyramids is explained and used to compare the featured Local Shape Model. Experimental results on a number of real contours such as animal, leaf and brain white matter outlines have been shown to demonstrate the effectiveness of the proposed model. These results show that local shape models are efficient in modelling the statistical variation of shape of biological structures. Particularly, the development of this model provides an approach to the analysis of brain images and brain morphometrics. Likewise, the model can be adapted to the problem of content based image retrieval, where global and local shape similarity needs to be measured.
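As a toy illustration of the Point Distribution Model mentioned above, the snippet below Procrustes-aligns a set of synthetic landmark contours to a reference shape and summarises the residual variation with PCA. It is not the Local Shape Model or the hierarchical analysis developed in the thesis, and the "contours" are perturbed ellipses rather than animal, leaf or brain-outline data.

import numpy as np
from scipy.spatial import procrustes
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
n_shapes, n_landmarks = 40, 32
t = np.linspace(0.0, 2.0 * np.pi, n_landmarks, endpoint=False)

shapes = []
for _ in range(n_shapes):
    a, b = 1.0 + 0.2 * rng.standard_normal(), 0.6 + 0.1 * rng.standard_normal()
    pts = np.column_stack([a * np.cos(t), b * np.sin(t)])
    pts += 0.02 * rng.standard_normal(pts.shape)      # landmarking noise
    shapes.append(pts)

reference = shapes[0]
aligned = [procrustes(reference, s)[1].ravel() for s in shapes]   # align to shape 0

pdm = PCA(n_components=3).fit(np.array(aligned))
print("variance explained by first 3 modes:",
      np.round(pdm.explained_variance_ratio_, 3))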
Styles APA, Harvard, Vancouver, ISO, etc.
22

Peter, Adrian M. « Information geometry for shape analysis probabilistic models for shape matching and indexing / ». [Gainesville, Fla.] : University of Florida, 2008. http://purl.fcla.edu/fcla/etd/UFE0022484.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
23

Lescoat, Thibault. « Geometric operators for 3D modeling using dictionary-based shape representations ». Electronic Thesis or Diss., Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAT005.

Texte intégral
Résumé :
Dans cette thèse, nous étudions les représentations haut-niveau de formes 3D et nous développons les primitives algorithmiques nécessaires à la manipulation d'objets représentés par composition d'éléments. Nous commençons par une revue de l'état de l'art, des représentations bas-niveau usuelles jusqu'à celles haut-niveau, utilisant des dictionnaires. En particulier, nous nous intéressons à la représentation de formes via la composition discrète d'atomes tirés d'un dictionnaire de formes.Nous observons qu'il n'existe aucune méthode permettant de fusionner des atomes (placés sans intersection) de manière plausible ; en effet, la plupart des méthodes requiert des intersections ou alors ne préservent pas les détails grossiers. De plus, très peu de techniques garantissent la préservation de l'entrée, une propriété importante lors du traitement de formes créées par des artistes. Nous proposons donc un opérateur de composition qui propage les détails grossiers tout en conservant l'entrée dans le résultat.Dans le but de permettre une édition interactive, nous cherchons à prévisualiser la composition d'objets lourds. Pour cela, nous proposons de simplifier les atomes avant de les composer. Nous introduisons donc une méthode de simplification de maillage qui préserve les détails grossiers. Bien que notre méthode soit plus contrainte que les approches précédentes qui ne produisent pas de maillage, elle résulte en des formes simplifiées fidèles aux formes détaillées
In this thesis, we study high-level 3D shape representations and develop the algorithmic primitives necessary to manipulate shapes represented as a composition of several parts. We first review existing representations, starting with the usual low-level ones and then expanding on a high-level family of shape representations based on dictionaries. Notably, we focus on representing shapes via a discrete composition of atoms from a dictionary of parts. We observe that there was no method to smoothly blend non-overlapping atoms while still looking plausible. Indeed, most methods either require overlapping parts or do not preserve large-scale details. Moreover, very few methods guarantee the exact preservation of the input, which is very important when dealing with artist-authored meshes to avoid destroying the artist's work. We address this challenge by proposing a composition operator that is guaranteed to keep the input exactly while also propagating large-scale details. To improve the speed of our composition operator and allow interactive editing, we propose to simplify the input parts prior to composing them. This allows us to interactively preview the composition of large meshes. For this, we introduce a method to simplify a detailed mesh to a coarse one while preserving the large-scale details. While more constrained than related approaches that do not produce a mesh, our method still yields faithful outputs.
Styles APA, Harvard, Vancouver, ISO, etc.
24

Noel, Laurent. « Discrete shape analysis for global illumination ». Thesis, Paris Est, 2015. http://www.theses.fr/2015PESC1130/document.

Texte intégral
Résumé :
Les images de synthèse sont présentes à travers un grand nombre d'applications tel que les jeux vidéo, le cinéma, l'architecture, la publicité, l'art, la réalité virtuelle, la visualisation scientifique, l'ingénierie en éclairage, etc. En conséquence, la demande en photoréalisme et techniques de rendu rapide ne cesse d'augmenter. Le rendu réaliste d'une scène virtuelle nécessite l'estimation de son illumination globale grâce à une simulation du transport de lumière, un processus coûteux en temps de calcul dont la vitesse de convergence diminue généralement lorsque la complexité de la scène augmente. En particulier, une forte illumination indirecte combinée à de nombreuses occlusions constitue une caractéristique globale de la scène que les techniques existantes ont du mal à gérer. Cette thèse s'intéresse à ce problème à travers l'application de techniques d'analyse de formes pour le rendu 3D.Notre principal outil est un squelette curviligne du vide de la scène, représenté par un graphe contenant des informations sur la topologie et la géométrie de la scène. Ce squelette nous permet de proposer de nouvelles méthodes pour améliorer des techniques de rendu temps réel et non temps réel. Concernant le rendu temps réel, nous utilisons les informations géométriques du squelette afin d'approximer le rendu des ombres projetés par un grand nombre de points virtuels de lumière représentant l'illumination indirecte de la scène 3D.Pour ce qui est du rendu non temps réel, nos travaux se concentrent sur des algorithmes basés sur l'échantillonnage de chemins, constituant actuellement le principal paradigme en rendu physiquement plausible. Notre squelette mène au développement de nouvelles stratégies d'échantillonnage de chemins, guidés par des caractéristiques topologiques et géométriques. Nous adressons également ce problème à l'aide d'un second outil d'analyse de formes: la fonction d'ouverture du vide de la scène, décrivant l'épaisseur locale du vide en chacun de ses points. Nos contributions offrent une amélioration des méthodes existantes and indiquent clairement que l'analyse de formes offre de nombreuses opportunités pour le développement de nouvelles techniques de rendu 3D
Nowadays, computer generated images can be found everywhere, through a wide range of applications such as video games, cinema, architecture, publicity, artistic design, virtual reality, scientific visualization, lighting engineering, etc. Consequently, the need for visual realism and fast rendering is increasingly growing. Realistic rendering involves the estimation of global illumination through light transport simulation, a time consuming process for which the convergence rate generally decreases as the complexity of the input virtual 3D scene increases. In particular, occlusions and strong indirect illumination are global features of the scene that are difficult to handle efficiently with existing techniques. This thesis addresses this problem through the application of discrete shape analysis to rendering. Our main tool is a curvilinear skeleton of the empty space of the scene, a sparse graph containing important geometric and topological information about the structure of the scene. By taking advantage of this skeleton, we propose new methods to improve both real-time and off-line rendering methods. Concerning real-time rendering, we exploit geometric information carried by the skeleton for the approximation of shadows casted by a large set of virtual point lights representing the indirect illumination of the 3D scene. Regarding off-line rendering, our works focus on algorithms based on path sampling, that constitute the main paradigm of state-of-the-art methods addressing physically based rendering. Our skeleton leads to new efficient path sampling strategies guided by topological and geometric features. Addressing the same problem, we also propose a sampling strategy based on a second tool from discrete shape analysis: the opening function of the empty space of the scene, describing the local thickness of that space at each point. Our contributions demonstrate improvements over existing approaches and clearly indicate that discrete shape analysis offers many opportunities for the development of new rendering techniques
Styles APA, Harvard, Vancouver, ISO, etc.
25

Chaussard, John. « Topological tools for discrete shape analysis ». Phd thesis, Université Paris-Est, 2010. http://tel.archives-ouvertes.fr/tel-00587411.

Texte intégral
Résumé :
L'analyse d'images est devenue ces dernières années une discipline de plus en plus riche de l'informatique. La topologie discrète propose un panel d'outils incontournables dans le traitement d'images, notamment grâce à l'outil du squelette, qui permet de simplifier des objets tout en conservant certaines informations intactes. Cette thèse étudie comment certains outils de la topologie discrète, notamment les squelettes, peuvent être utilisés pour le traitement d'images de matériaux.Le squelette d'un objet peut être vu comme une simplification d'un objet, possédant certaines caractéristiques identiques à celles de l'objet original. Il est alors possible d'étudier un squelette et de généraliser certains résultats à l'objet entier. Dans une première partie, nous proposons une nouvelle méthode pour conserver, dans un squelette, certaines caractéristiques géométriques de l'objet original (méthode nécessitant un paramètre de filtrage de la part de l'utilisateur) et obtenir ainsi un squelette possédant la même apparence que l'objet original. La seconde partie propose de ne plus travailler avec des objets constitués de voxels, mais avec des objets constitués de complexes cubiques. Dans ce nouveau cadre, nous proposons de nouveaux algorithmes de squelettisation, dont certains permettent de conserver certaines caractéristiques géométriques de l'objet de départ dans le squelette, de façon automatique (aucun paramètre de filtrage ne doit être donné par l'utilisateur). Nous montrerons ensuite comment un squelette, dans le cadre des complexes cubiques, peut être décomposé en différentes parties. Enfin, nous montrerons nos résultats sur différentes applications, allant de l'étude des matériaux à l'imagerie médicale
Styles APA, Harvard, Vancouver, ISO, etc.
26

Rezanejad, Morteza. « Flux graphs for 2D shape analysis ». Thesis, McGill University, 2013. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=117144.

Texte intégral
Résumé :
This thesis considers a method for computing skeletal representations based on the average outward flux (AOF) of the gradient of the Euclidean distance function to the boundary of a 2D object through the boundary of a region that is shrunk. It then shows how the original method, developed by Dimitrov et al. [17] can be optimized and made more efficient and proposes an algorithm for computing flux invariants which is a number of times faster. It further exploits a relationship between the AOF and the object angle at endpoints, branch points and regular points of the skeleton to obtain more complete boundary reconstruction results than those demonstrated in prior work. Using this optimized implementation, new measures for skeletal simplification are proposed based on two criteria: the uniqueness of an inscribed disk as a tool for defining salience, and the limiting average outward flux value. The simplified skeleton when abstracted as a directed graph is shown to be far less complex than popular skeletal graphs in the literature, such as the shock graph, by a number of graph complexity measures including: number of nodes, number of edges, depth of the graph, number of skeletal points, and the sum of topological signature vector (TSV) values. We conclude the thesis by applying the simplified graph to a view-based object recognition experiment previously arranged for shock graphs. The results suggest that our new simplified graph yields recognition scores very close to those obtained using shock graphs but with a smaller number of nodes, edges, and skeletal points.
Ce mémoire propose une méthode pour calculer des représentations squelettiques en fonction du flux moyen décrit par le gradient de la fonction de distance Euclidienne aux limites d'un objet 2D qui rétrécit. La méthode originale développée par Dimitrov et al. [17] est ensuite optimisée afin de calculer des invariants de flux plus rapidement. Une relation entre l'AOF et l'angle de l'objet aux extrémités (aux points de branches et des points réguliers du squelette) est exploitée afin d'obtenir une reconstruction plus précises des limites de l'objet par rapport aux travaux précédents. En utilisant cette implémentation optimisée, de nouvelles mesures de simplification de squelettes sont proposées selon deux critères: l'unicité d'un disque inscrit comme un outil permettant de définir la saillance, et la limitation de la moyenne du flux à l'extérieur. Il est démontré que le squelette simplifié, abstrait par un graphe orienté, est beaucoup moins complexe que des graphes squelettiques conventionnels mentionnés dans la littérature, tel que le graphe de choc. Les mesures de complexité de graphe comprennent le nombre de nœuds, le nombre de bords, la profondeur du graphe, le nombre de points du squelette et la somme des valeurs du vecteur des signes topologiques (TSV). La thèse se finit en appliquant le graphe simplifié au problème de reconnaissance d'objets basée sur la vue, préalablement adapté pour l'utilisation de graphes de choc. Les résultats suggèrent que notre nouveau graphe simplifié atteint des performances similaires à celles des graphes de choc, mais avec moins de nœuds, de bords et de points du squelette.
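A rough numerical illustration of the average outward flux idea described above: by the divergence theorem, the flux through a small circle can be approximated by the divergence of the normalised gradient of the Euclidean distance transform, which becomes strongly negative on medial pixels. The sketch below applies this to a synthetic rectangle; the threshold is arbitrary and this is not the optimised algorithm of the thesis.

import numpy as np
from scipy import ndimage

mask = np.zeros((100, 160), dtype=bool)
mask[30:70, 20:140] = True                     # a simple rectangular object

dist = ndimage.distance_transform_edt(mask)
gy, gx = np.gradient(dist)
norm = np.hypot(gx, gy) + 1e-12
gx, gy = gx / norm, gy / norm                  # unit gradient field

divergence = np.gradient(gx, axis=1) + np.gradient(gy, axis=0)
aof_like = np.where(mask, divergence, 0.0)

skeleton = mask & (aof_like < -0.4)            # crude threshold on "inward" flux
print("object pixels:", int(mask.sum()), " medial pixels kept:", int(skeleton.sum()))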
Styles APA, Harvard, Vancouver, ISO, etc.
27

Mei, Lin. « Statistical analysis of shape and deformation ». Thesis, Imperial College London, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.542932.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
28

Tsironis, P. « A shape descriptor for EEG analysis ». Thesis, University of Sussex, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.374476.

Texte intégral
Résumé :
The Shape Descriptor is a tool for shape analysis of EEG data. It combines shape analysis of wave forms with medical knowledge of EEG features (spikes, slow waves and artifacts). Its contribution is that it optimizes human recognition by providing an accurate shape representation (slopes, durations, amplitudes) using mathematical criteria (error norm, randomness of error etc.) and offering valuable information about their structural properties. The Shape Descriptor has been implemented on a Unix system using the Pascal language. The description of the EEG data by linear segments is achieved in two stages. Module 1 provides an initial segmentation of the wave form. The original data is approximated by a polynomial of low degree called the Uniform Approximation, using as criterion of closeness the minimum error norm. The extraction of linear segments is achieved through the use of fixed error approximation techniques. These allow the description of data by straight line segments whose pointwise error does not exceed a pre-assigned value (i.e. the minimum error obtained in the uniform approximation). The function of Module 2 is to obtain a better representation of the EEG data by minimizing the error norm. This is achieved by the split-and-merge algorithm which attempts to minimize the error by moving the junction points of the linear segments. Successive segments with similar approximating coefficients are merged while linear segments with great error are split, provided that these processes do not yield greater error. The Shape Descriptor is a good candidate for EEG shape analysis, not only for transients but also for artifacts and more complicated patterns.
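The segment-splitting idea behind Module 1 and the split step of Module 2 can be illustrated in a few lines. The sketch below recursively splits a synthetic waveform wherever the pointwise error of a straight-line fit exceeds a preset bound; it is a toy illustration in Python, not the Pascal implementation described above, and the signal and error bound are invented.

import numpy as np

def split_segments(t, x, max_error):
    """Return breakpoint indices of a piecewise-linear fit with bounded error."""
    def recurse(lo, hi):
        line = x[lo] + (x[hi] - x[lo]) * (t[lo:hi + 1] - t[lo]) / (t[hi] - t[lo])
        err = np.abs(x[lo:hi + 1] - line)
        worst = int(np.argmax(err))
        if err[worst] <= max_error or hi - lo < 2:
            return [lo, hi]
        mid = lo + worst
        left, right = recurse(lo, mid), recurse(mid, hi)
        return left[:-1] + right          # drop the duplicated breakpoint
    return recurse(0, len(x) - 1)

t = np.linspace(0.0, 1.0, 400)
signal = np.sin(2.0 * np.pi * 2.0 * t)                     # slow wave
signal += np.exp(-((t - 0.6) / 0.01) ** 2)                 # spike-like transient

breaks = split_segments(t, signal, max_error=0.05)
print(f"{len(breaks) - 1} linear segments, breakpoints at t =",
      np.round(t[breaks], 3))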
Styles APA, Harvard, Vancouver, ISO, etc.
29

MacDonald, M. « Analysis of shape using Delaunay triangulations ». Thesis, Queen's University Belfast, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.246340.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
30

Er, Fikret. « Robust methods in statistical shape analysis ». Thesis, University of Leeds, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.342394.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
31

Ruwanthi, Kolamunnage Dona Rasanga. « Statistical shape analysis for bilateral symmetry ». Thesis, University of Leeds, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.418233.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
32

Dryden, Ian Leslie. « The statistical analysis of shape data ». Thesis, University of Leeds, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.392774.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
33

Hughes, Alex. « Shape analysis and pose from contour ». Thesis, University of York, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.428406.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
34

Golland, Polina, 1971-. « Statistical shape analysis of anatomical structures ». Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/86776.

Texte intégral
Résumé :
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001.
Includes bibliographical references (p. 123-130).
In this thesis, we develop a computational framework for image-based statistical analysis of anatomical shape in different populations. Applications of such analysis include understanding developmental and anatomical aspects of disorders when comparing patients vs. normal controls, studying morphological changes caused by aging, or even differences in normal anatomy, for example, differences between genders. Once a quantitative description of organ shape is extracted from input images, the problem of identifying differences between the two groups can be reduced to one of the classical questions in machine learning, namely constructing a classifier function for assigning new examples to one of the two groups while making as few mistakes as possible. In the traditional classification setting, the resulting classifier is rarely analyzed in terms of the properties of the input data that are captured by the discriminative model. In contrast, interpretation of the statistical model in the original image domain is an important component of morphological analysis. We propose a novel approach to such interpretation that allows medical researchers to argue about the identified shape differences in anatomically meaningful terms of organ development and deformation. For each example in the input space, we derive a discriminative direction that corresponds to the differences between the classes implicitly represented by the classifier function.
For morphological studies, the discriminative direction can be conveniently represented by a deformation of the original shape, yielding an intuitive description of shape differences for visualization and further analysis. Based on this approach, we present a system for statistical shape analysis using distance transforms for shape representation and the Support Vector Machines learning algorithm for the optimal classifier estimation. We demonstrate it on artificially generated data sets, as well as real medical studies.
by Polina Golland.
Ph.D.
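A toy sketch of the classification step described above, using scikit-learn as a stand-in (an assumption, not the thesis software): a linear SVM is trained on vectorised shape descriptors and its weight vector is read off as a discriminative direction. Feature extraction from distance transforms is not shown.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in feature vectors (e.g. flattened distance transforms), two groups.
n, d = 40, 100
controls = rng.normal(0.0, 1.0, size=(n, d))
patients = rng.normal(0.0, 1.0, size=(n, d))
patients[:, :5] += 0.8          # the groups differ along the first 5 features

X = np.vstack([controls, patients])
y = np.array([0] * n + [1] * n)

clf = SVC(kernel="linear").fit(X, y)

# For a linear kernel, the normal of the separating hyperplane indicates
# the direction along which the two groups differ most.
w = clf.coef_.ravel()
discriminative_direction = w / np.linalg.norm(w)
print("largest components:", np.argsort(-np.abs(discriminative_direction))[:5])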
Styles APA, Harvard, Vancouver, ISO, etc.
35

Stone, J. V. « Shape from texture : a computational analysis ». Thesis, University of Sussex, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.240346.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
36

Kulikova, Maria. « Shape recognition for image scene analysis ». Nice, 2009. http://www.theses.fr/2009NICE4081.

Texte intégral
Résumé :
Cette thèse englobe deux parties principales. La première partie est dédiée au problème de la classification d'espèces d'arbres en utilisant des descripteurs de forme, en les combinant ou non avec ceux de radiométrie ou de texture. Nous montrons notamment que l'information sur la forme améliore la performance d'un classifieur. Pour cela, dans un premier temps, nous étudions les formes de couronnes d'arbres, extraites à partir d'images aériennes infrarouges à très haute résolution, représentées par des courbes fermées dans un espace de formes, en utilisant la notion de chemin géodésique sous deux métriques dans des espaces appropriés : une métrique non-élastique en utilisant la représentation par la fonction d'angle de la courbe, ainsi qu'une métrique élastique induite par une représentation par la racine carrée appelée q-fonction. Une étape préliminaire nécessaire à la classification est l'extraction des couronnes d'arbre. Dans une seconde partie nous abordons donc le problème de l'extraction d'objets à forme complexe arbitraire à partir des images de télédétection de très haute résolution. Nous construisons un modèle fondé sur les processus ponctuels marqués. Son originalité tient dans sa prise en compte d'objets à forme arbitraire par rapport aux objets à forme paramétrique, par exemple ellipses ou rectangles. Les formes sélectionnées sont obtenues par la minimisation locale d'une énergie de type contours actifs avec différents a priori sur la forme incorporés. Les objets de la configuration finale sont ensuite sélectionnés parmi les candidats par une dynamique de naissances et morts multiple, couplée à un schéma de recuit simulé. L'approche est validée sur des images de zones forestières à très haute résolution fournies par l'Université d'Agriculture en Suède.
This thesis includes two main parts. In the first part we address the problem of tree crown classification into species using shape features, alone or in combination with those of radiometry and texture, to demonstrate that shape information improves classification performance. For this purpose, we first study the shapes of tree crowns extracted from very high resolution aerial infra-red images. We choose a methodology based on the shape analysis of closed continuous curves on shape spaces using geodesic paths, under the bending metric with the angle-function curve representation, and under the elastic metric with the square-root q-function representation. A necessary preliminary step to classification is the extraction of the tree crowns. In the second part, we therefore address the problem of extraction of multiple objects with complex, arbitrary shapes from very high resolution remote sensing images. We develop a model based on marked point processes. Its originality lies in its use of arbitrarily-shaped objects as opposed to objects with parametric shapes, e.g. ellipses or rectangles. The candidate shapes are obtained by local minimisation of an active-contour-type energy with weak and strong shape prior knowledge included. The objects in the final (optimal) configuration are then selected from amongst these candidates by a birth-and-death dynamics embedded in an annealing scheme. The approach is validated on very high resolution images of forest areas provided by the Swedish University of Agriculture.
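For illustration, a small sketch of a square-root-type curve representation in the spirit of the q-function mentioned above; it assumes a discretely sampled closed curve and ignores the rotation and re-parameterisation registration that a true elastic comparison requires.

import numpy as np

def srvf(points):
    """Square-root velocity representation q = dβ/dt / sqrt(|dβ/dt|).

    points : (n, 2) array of samples along a closed curve.
    """
    beta = np.asarray(points, dtype=float)
    # Differentiate with periodic wrap-around (closed curve).
    vel = (np.roll(beta, -1, axis=0) - np.roll(beta, 1, axis=0)) / 2.0
    speed = np.linalg.norm(vel, axis=1)
    speed = np.where(speed < 1e-12, 1e-12, speed)
    return vel / np.sqrt(speed)[:, None]

def srvf_distance(p1, p2):
    """L2 distance between two SRVFs; registration over rotation and
    re-parameterisation is deliberately ignored in this toy version."""
    q1, q2 = srvf(p1), srvf(p2)
    return float(np.sqrt(np.sum((q1 - q2) ** 2) / len(q1)))

if __name__ == "__main__":
    t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
    circle = np.c_[np.cos(t), np.sin(t)]
    blob = np.c_[np.cos(t) * (1 + 0.2 * np.cos(5 * t)),
                 np.sin(t) * (1 + 0.2 * np.cos(5 * t))]
    print("d(circle, blob) =", srvf_distance(circle, blob))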
Styles APA, Harvard, Vancouver, ISO, etc.
37

Gkolias, Theodoros. « Shape analysis in protein structure alignment ». Thesis, University of Kent, 2018. https://kar.kent.ac.uk/66682/.

Texte intégral
Résumé :
In this thesis we explore the problem of structural alignment of protein molecules using statistical shape analysis techniques. The structural alignment problem can be divided into three smaller ones: the representation of protein structures, the sampling of possible alignments between the molecules and the evaluation of a given alignment. Previous work in this field can be divided into two approaches: an ad hoc algorithmic approach from the bioinformatics literature and an approach using statistical methods in either a likelihood or a Bayesian framework. The two approaches address the problem from different angles. For example, the algorithmic approach is easy to implement but lacks an overall modelling framework, while the Bayesian approach addresses this issue but is sometimes not straightforward to implement. We develop a method which is easy to implement and is based on statistical assumptions. In order to assess the quality of a given alignment we use a size-and-shape likelihood density which is based on the structural information of the molecules. This likelihood density is also extended to include sequence information and gap penalty parameters so that biologically meaningful solutions can be produced. Furthermore, we develop a search algorithm to explore possible alignments from a given starting point. The results suggest that our approach produces better or equal alignments when compared to the most recent structural alignment methods. In most cases we achieve a higher number of matched atoms combined with a high TM-score. Moreover, we extend our method using Bayesian techniques to perform alignments based on posterior modes. In our approach, we estimate directly the mode of the posterior distribution, which provides the final alignment between two molecules. We also choose a different approach for treating the mean parameter: in previous methods the mean was either integrated out of the likelihood density or considered fixed, whereas we assign a prior over it and obtain its posterior mode. Finally, we consider an extension of the likelihood model assuming a Normal density for both the matched and unmatched parts of a molecule and a diagonal covariance structure. We explore two variants: in the first we consider a fixed zero mean for the unmatched parts of the molecules, and in the second a common mean for both the matched and unmatched parts. Based on simulated and real data, both models seem to perform well in obtaining a high number of matched atoms and a high TM-score.
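Not the likelihood model of the thesis, but a minimal sketch of the geometric core such alignments evaluate: given two already-matched sets of atom coordinates, the optimal superposition rotation can be found with an SVD (Kabsch/Procrustes) and summarized by an RMSD.

import numpy as np

def kabsch_rmsd(A, B):
    """RMSD of B superposed onto A after optimal translation and rotation.

    A, B : (n, 3) arrays of matched atom coordinates.
    """
    A = A - A.mean(axis=0)
    B = B - B.mean(axis=0)
    U, _, Vt = np.linalg.svd(B.T @ A)
    # Reflection guard: keep a proper rotation (det = +1).
    d = np.sign(np.linalg.det(U @ Vt))
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    diff = A - B @ R
    return float(np.sqrt((diff ** 2).sum() / len(A)))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.normal(size=(50, 3))
    theta = 0.3
    R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0, 0.0, 1.0]])
    # B is a rotated, translated, slightly noisy copy of A.
    B = A @ R.T + np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.01, size=(50, 3))
    print("RMSD:", kabsch_rmsd(A, B))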
Styles APA, Harvard, Vancouver, ISO, etc.
38

Razib, Muhammad. « Structural Surface Mapping for Shape Analysis ». FIU Digital Commons, 2017. https://digitalcommons.fiu.edu/etd/3517.

Texte intégral
Résumé :
Natural surfaces are usually associated with feature graphs, such as the cortical surface with its anatomical atlas structure. Such a feature graph subdivides the whole surface into meaningful sub-regions. Existing brain mapping and registration methods do not integrate anatomical atlas structures; as a result, it is difficult to visualize and compare the atlas structures with existing brain mappings. Existing brain registration methods also cannot guarantee the best possible alignment of the cortical regions, which would help compute more accurate shape similarity metrics for neurodegenerative disease analysis, e.g., Alzheimer's disease (AD) classification. Moreover, little attention has been paid to tackling surface parameterization and registration with graph constraints in a rigorous way, even though these have many applications in graphics, e.g., surface and image morphing. This dissertation explores structural mappings for shape analysis of surfaces using feature graphs as constraints. (1) First, we propose structural brain mapping, which maps the brain cortical surface onto a planar convex domain using a Tutte embedding of a novel atlas graph and a harmonic map with atlas graph constraints, to facilitate visualization and comparison of the atlas structures. (2) Next, we propose a novel brain registration technique based on an intrinsic atlas-constrained harmonic map, which provides the best possible alignment of the cortical regions. (3) The proposed brain registration technique is then applied to compute shape similarity metrics for AD classification. (4) Finally, we propose techniques to compute intrinsic graph-constrained parameterization and registration for general genus-0 surfaces, which have been used in surface and image morphing applications.
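The Tutte embedding used in step (1) can be illustrated with a generic sketch (not the atlas-graph construction of the dissertation): boundary vertices are pinned to a convex polygon and interior vertices are placed at the average of their neighbours by solving a sparse linear system.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def tutte_embedding(num_vertices, edges, boundary):
    """Barycentric (Tutte-style) embedding: boundary vertices pinned to a
    circle, every interior vertex placed at the mean of its neighbours."""
    pos = np.zeros((num_vertices, 2))
    angles = np.linspace(0.0, 2.0 * np.pi, len(boundary), endpoint=False)
    pos[boundary] = np.c_[np.cos(angles), np.sin(angles)]

    on_boundary = set(boundary)
    interior = [v for v in range(num_vertices) if v not in on_boundary]
    index = {v: k for k, v in enumerate(interior)}

    adj = [[] for _ in range(num_vertices)]
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)

    # Sparse Laplacian system over the interior vertices only.
    L = sp.lil_matrix((len(interior), len(interior)))
    rhs = np.zeros((len(interior), 2))
    for v in interior:
        L[index[v], index[v]] = float(len(adj[v]))
        for w in adj[v]:
            if w in index:
                L[index[v], index[w]] = -1.0
            else:                      # neighbour is pinned on the boundary
                rhs[index[v]] += pos[w]

    solve = spla.factorized(L.tocsc())
    pos[interior, 0] = solve(rhs[:, 0])
    pos[interior, 1] = solve(rhs[:, 1])
    return pos

if __name__ == "__main__":
    # Square boundary (vertices 0-3) with one interior vertex (4) joined to all.
    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (4, 0), (4, 1), (4, 2), (4, 3)]
    print(tutte_embedding(5, edges, boundary=[0, 1, 2, 3]))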
Styles APA, Harvard, Vancouver, ISO, etc.
39

MORGERA, ANDREA. « Dominant points detection for shape analysis ». Doctoral thesis, Università degli Studi di Cagliari, 2012. http://hdl.handle.net/11584/266073.

Texte intégral
Résumé :
The growing interest in multimedia in recent years and the large amount of information exchanged across networks have pushed several fields of research towards the study of methods for automatic identification. One of the main objectives is to associate information content with images, using techniques for identifying the objects that compose them. Among image descriptors, contours are very important because most of the information can be extracted from them, and contour analysis also offers lower computational complexity. Contour analysis can be restricted to the study of a few salient points of high curvature from which it is possible to reconstruct the original contour. The thesis focuses on the polygonal approximation of closed digital curves. After an overview of the most common shape descriptors, distinguishing between simple descriptors, external methods, which focus on the analysis of the boundary points of objects, and internal methods, which also use the pixels inside the object, a description is given of the major dominant-point extraction methods studied so far and of the metrics typically used to evaluate the goodness of the resulting polygonal approximation. Three novel approaches to the problem are then discussed in detail: a fast iterative method (DPIL), more suitable for real-time processing, and two metaheuristic methods (GAPA, ACOPA) based on genetic algorithms and Ant Colony Optimization (ACO), more complex computationally but more precise. These techniques are then compared with the other main methods cited in the literature, in order to assess their performance in terms of computational complexity and polygonal approximation error, and compared among themselves, in order to evaluate their robustness with respect to affine transformations and noise. Two new techniques for shape matching, i.e. the identification of objects belonging to the same class in an image database, are then described. The first is based on shape alignment and the second on a correspondence found by ACO; both demonstrate the excellent results, in terms of computational time and recognition accuracy, obtained through the use of dominant points. In the first matching algorithm the results are compared with a selection of dominant points generated by a human operator, while in the second the dominant points are used instead of the constant sampling of the outline typically adopted for this kind of approach.
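For orientation only, a very simple curvature-style dominant-point detector is sketched below; it is not DPIL, GAPA or ACOPA, and the support size and angle threshold are arbitrary demo values.

import numpy as np

def dominant_points(contour, support=5, min_angle=0.3):
    """Indices of high-curvature points on a closed contour.

    contour : (n, 2) array of boundary points, ordered along the curve.
    """
    pts = np.asarray(contour, dtype=float)
    prev = np.roll(pts, support, axis=0) - pts
    nxt = np.roll(pts, -support, axis=0) - pts
    cosang = np.sum(prev * nxt, axis=1) / (
        np.linalg.norm(prev, axis=1) * np.linalg.norm(nxt, axis=1) + 1e-12)
    # Turning angle: 0 where the contour is locally straight, up to pi for a spike.
    turning = np.pi - np.arccos(np.clip(cosang, -1.0, 1.0))
    keep = []
    for i in range(len(pts)):
        window = turning[np.arange(i - support, i + support + 1) % len(pts)]
        if turning[i] >= min_angle and turning[i] == window.max():
            keep.append(i)
    return keep

if __name__ == "__main__":
    # A square sampled densely: the four corners should dominate.
    side = np.linspace(0.0, 1.0, 25, endpoint=False)
    square = np.concatenate([
        np.c_[side, np.zeros_like(side)],
        np.c_[np.ones_like(side), side],
        np.c_[1.0 - side, np.ones_like(side)],
        np.c_[np.zeros_like(side), 1.0 - side]])
    print(dominant_points(square))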
Styles APA, Harvard, Vancouver, ISO, etc.
40

Li, Huisong. « Shape abstractions with support for sharing and disjunctions ». Thesis, Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLEE060.

Texte intégral
Résumé :
L'analyse statique des programmes permet de calculer automatiquement des propriétés sémantiques valides pour toutes les exécutions. En particulier, dans le cas des programmes manipulant des structures de données complexes en mémoire, l'analyse statique peut inférer des invariants utiles pour prouver la sûreté des accès à la mémoire ou la préservation d'invariants structurels. Beaucoup d'analyses de ce type manipulent des états mémoires abstraits représentés par des conjonctions en logique de séparation dont les prédicats de base décrivent des blocs de mémoire atomiques ou bien résument des régions non-bornées de la mémoire telles que des listes ou des arbres. De telles analyses utilisent souvent des disjonctions finies d’états mémoires abstraits afin de mieux capturer leurs dissimilarités. Les analyses existantes permettent de raisonner localement sur les zones mémoires mais présentent les inconvénients suivants: (1) Les prédicats inductifs ne sont pas assez expressifs pour décrire précisément toutes les structures de données dynamiques, du fait de la présence de pointeurs vers des parties arbitraires (i.e., non-locales) de ces structures ; (2) Les opérations abstraites correspondant à des accès en lecture ou en écriture sur ces prédicats inductifs reposent sur une opération matérialisant les cellules mémoires correspondantes. Cette opération de matérialisation crée en général de nouvelles disjonctions, ce qui nuit à l'impératif d'efficacité. Hélas, les prédicats exprimant des contraintes de structure locale ne sont pas suffisants pour déterminer de façon adéquate les ensembles de disjonctions devant être fusionnés, ni pour définir les opérations d'union et d'élargissement d'états abstraits. Cette thèse est consacrée à l'étude et la mise au point de prédicats en logique de séparation permettant de décrire des structures de données dynamiques, ainsi que des opérations abstraites afférentes. Nous portons une attention particulière aux opérations d'union et d'élargissement d’états abstraits. Nous proposons une méthode pratique permettant de raisonner globalement sur ces états mémoires au sein des méthodes existantes d'analyse de propriétés structurelles et autorisant la fusion précise et efficace de disjonctions. Nous proposons et implémentons une abstraction structurelle basée sur les variables d'ensembles qui lorsque elle est utilisée conjointement avec les prédicats inductifs permet la spécification et l'analyse de propriétés structurelles globales. Nous avons utilisé ce domaine abstrait afin d'analyser une famille de programmes manipulant des graphes représentés par liste d'adjacence. Nous proposons un critère sémantique permettant de fusionner les états mémoires abstraits similaires en se basant sur leur silhouette, cette dernière représentant certaines propriétés structurelles globales vérifiées par l'état correspondant. Les silhouettes s'appliquent non seulement à la fusion de termes dans les disjonctions d'états mémoires mais également à l'affaiblissement de conjonctions de prédicats de logique de séparation en prédicats inductifs. Ces contributions nous permettent de définir des opérateurs d’union et d'élargissement visant à préserver les disjonctions requises pour que l'analyse se termine avec succès. 
Nous avons implémenté ces contributions au sein de l'analyseur MemCAD et nous en avons évalué l'impact sur l'analyse de bibliothèques existantes écrites en C et implémentant différentes structures de données, incluant des listes doublement chaînées, des arbres rouge-noir, des arbres AVL et des arbres "splay". Nos résultats expérimentaux montrent que notre approche est à même de contrôler la taille des disjonctions à des fins de performance sans pour autant nuire à la précision de l'analyse.
Shape analyses rely on expressive families of logical properties to infer complex structural invariants, such that memory safety, structure preservation and other memory properties of programs dealing with dynamic data structures can be automatically verified. Many such analyses manipulate abstract memory states that consist of separating conjunctions of basic predicates describing atomic blocks or summary predicates that describe unbounded heap regions like lists or trees using inductive definitions. Moreover, they use finite disjunctions of abstract memory states in order to take into account dissimilar shapes. Although existing analyses enable local reasoning about memory regions, they do, however, have the following issues: (1) The summary predicates are not expressive enough to describe precisely all the dynamic data structures. In particular, a fairly large number of data structures with unbounded sharing, such as graphs, cannot be described inductively in a local manner; (2) Abstract operations that read or write into summaries rely on materialization of memory cells. The materialization operation in general creates new disjunctions, yet the size of disjunctions should be kept small for the sake of efficiency. However, local predicates are not enough to determine the right set of disjuncts that should be clumped together and to define precise abstract join and widen operations. In this thesis, we study separating conjunction-based shape predicates and the related abstract operations, in particular, abstract joining and widening operations that lead to critical modifications of abstract states. We seek a lightweight way to enable some global reasoning in existing shape analyses such that shape predicates are more expressive for abstracting data structures with unbounded sharing and disjuncts can be clumped precisely and efficiently. We propose a shape abstraction based on set variables that, when integrated with inductive definitions, enables the specification and shape analysis of structures with unbounded sharing. We implemented the shape analysis domain by combining a separation logic-based shape abstract domain of the MemCAD analyzer and a set abstract domain, where the set abstractions are used to track unbounded pointer sharing properties. Based on this abstract domain, we analyzed a group of programs dealing with adjacency lists of graphs. We design a general semantic criterion to clump abstract memory states based on their silhouettes that express global shape properties, i.e., clumping abstract states when their silhouettes are similar. Silhouettes apply not only to the conservative union of disjuncts but also to the weakening of separating conjunctions of memory predicates into inductive summaries. Our approach allows us to define union and widening operators that aim at preserving the case splits that are required for the analysis to succeed. We implement this approach in the MemCAD analyzer and evaluate it on real-world C libraries for different data structures, including doubly-linked lists, red-black trees, AVL-trees and splay-trees. The experimental results show that our approach is able to keep the size of disjunctions small for scalability and preserve case splits that take into account dissimilar shapes for precision.
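The silhouette-guided clumping of disjuncts can be pictured with a deliberately schematic sketch; the data structures below are invented stand-ins (plain dictionaries), not MemCAD's separation-logic representation, and the join is a placeholder.

from collections import defaultdict

def silhouette(state):
    """Stand-in silhouette: for each variable, the set of summary-predicate
    names describing it (a global, shape-level fingerprint of the state)."""
    return frozenset((var, tuple(sorted(preds))) for var, preds in state.items())

def clump(disjuncts, join):
    """Group abstract states by silhouette and join each group, so the number
    of disjuncts stays small while dissimilar shapes remain separated."""
    groups = defaultdict(list)
    for state in disjuncts:
        groups[silhouette(state)].append(state)
    return [join(group) for group in groups.values()]

if __name__ == "__main__":
    # Each abstract state maps variables to the predicates that describe them.
    s1 = {"x": ["list"], "y": ["emp"]}
    s2 = {"x": ["list"], "y": ["emp"]}
    s3 = {"x": ["tree"], "y": ["emp"]}
    joined = clump([s1, s2, s3], join=lambda group: group[0])  # trivial join for the demo
    print(len(joined), "disjuncts after clumping")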
Styles APA, Harvard, Vancouver, ISO, etc.
41

Delyon, Alexandre. « Shape Optimisation Problems Around the Geometry of Branchiopod Eggs ». Electronic Thesis or Diss., Université de Lorraine, 2020. http://www.theses.fr/2020LORR0123.

Texte intégral
Résumé :
Dans cette thèse nous nous intéressons à un problème de mathématiques appliquées à la biologie. Le but est d'expliquer la forme des œufs d'Eulimnadia, un petit animal appartenant à la classe des Branchiopodes, et plus précisément les Limnadiides. En effet, d'après la théorie de l'évolution il est raisonnable de penser que la forme des êtres vivants ou des objets issus d'êtres vivants est optimisée pour garantir la survie et l'expansion de l'espèce en question. Pour ce faire nous avons opté pour la méthode de modélisation inverse. Cette dernière consiste à proposer une explication biologique à la forme des œufs, puis de la modéliser sous forme d'un problème de mathématique, et plus précisément d'optimisation de forme, que l'on cherche à résoudre pour enfin comparer la forme obtenue à la forme réelle des œufs. Nous avons étudié deux modélisations, l'une amenant à des problèmes de géométrie et de packing, l'autre à des problèmes d'optimisation de forme en élasticité linéaire. Durant la résolution du premier problème issu de la modélisation, une autre question mathématique s'est naturellement posée à nous, et nous sommes parvenus à la résoudre, donnant lieu à l'obtention du diagramme de Blaschke-Santaló (A,D,r) complet. En d'autres mots nous pouvons répondre à la question suivante : étant donné trois nombres A, D et r positifs, est-il possible de trouver un ensemble convexe du plan dont l'aire est égale à A, le diamètre égal à D, et le rayon du cercle inscrit égal à r ?
In this thesis we are interested in a problem of mathematics applied to biology. The aim is to explain the shape of the eggs of Eulimnadia, a small animal belonging to the class Branchiopoda, and more precisely to the Limnadiidae. Indeed, according to the theory of evolution it is reasonable to think that the shape of living beings, or of objects derived from living beings, is optimized to ensure the survival and expansion of the species in question. To do this we have opted for the inverse modeling method, which consists in proposing a biological explanation for the shape of the eggs, then modeling it as a mathematical problem, more precisely a shape optimisation problem, which we try to solve in order to finally compare the shape obtained to the real one. We have studied two models, one leading to geometry and packing problems, the other to shape optimisation problems in linear elasticity. After the resolution of the first modeling problem, another mathematical question naturally arose, and we managed to solve it, resulting in the complete Blaschke-Santaló (A,D,r) diagram. In other words, we can answer the following question: given three positive numbers A, D and r, is it possible to find a convex set of the plane whose area is equal to A, whose diameter is equal to D, and whose inradius is equal to r?
Styles APA, Harvard, Vancouver, ISO, etc.
42

Olsson, Karin, et Therese Persson. « Shape from Silhouette Scanner ». Thesis, Linköping University, Department of Science and Technology, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1348.

Texte intégral
Résumé :

The availability of digital models of real 3D objects is becoming more and more important in many different applications (e-commerce, virtual visits etc). Very often the objects to be represented cannot be modeled by means of the classical 3D modeling tools because of the geometrical complexity or color texture. In these cases, devices for the automatic acquisition of the shape and the color of the objects (3D scanners or range scanners) have to be used.

The scanner presented in this work, a Shape from silhouette scanner, is very cheap (it is based on the use of a simple digital camera and a turntable) and easy to use. While maintaining the camera on a tripod and the object on the turntable, the user acquires images with different rotation angles of the table. The fusion of all the acquired views enables the production of a digital 3D representation of the object.

Existing Shape from silhouette scanners operate in an indirect way. They subdivide the object definition space into a regular 3D grid and verify that a voxel belongs to the object by checking that its 2D projection falls inside the silhouette of the corresponding image. Our scanner adopts a direct method: by using a new 3D representation scheme and algorithm, the Marching Intersections data structure, we can directly intersect all the 3D volumes obtained from the silhouettes extracted from the images.
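To make the indirect, voxel-based variant described above concrete, here is a toy sketch; it assumes orthographic projections along two axes and a plain boolean grid, whereas the scanner itself uses calibrated camera views and the direct Marching Intersections structure.

import numpy as np

def carve(silhouettes, grid_size=32):
    """Indirect shape-from-silhouette: keep voxels whose projection lies
    inside every silhouette. Orthographic projections along axis 0 and
    axis 1 are assumed purely to keep the toy example short.

    silhouettes : dict mapping an axis (0 or 1) to a 2D boolean mask
                  of shape (grid_size, grid_size).
    """
    occupied = np.ones((grid_size,) * 3, dtype=bool)
    for axis, sil in silhouettes.items():
        # Project each voxel by dropping the coordinate along `axis`.
        for idx in range(grid_size):
            sl = [slice(None)] * 3
            sl[axis] = idx
            occupied[tuple(sl)] &= sil
    return occupied

if __name__ == "__main__":
    n = 32
    yy, zz = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    disc = (yy - n / 2) ** 2 + (zz - n / 2) ** 2 < (n / 3) ** 2
    square = np.zeros((n, n), dtype=bool)
    square[8:24, 8:24] = True
    # Cylinder along axis 0 intersected with a square prism along axis 1.
    model = carve({0: disc, 1: square})
    print("occupied voxels:", int(model.sum()))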

Styles APA, Harvard, Vancouver, ISO, etc.
43

Ukida, Hiroyuki. « Shape-from-shading analysis for reconstructing 3D object shape using an image scanner ». 京都大学 (Kyoto University), 2003. http://hdl.handle.net/2433/59293.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
44

Lima, Verônica Maria Cadena. « Resistant fitting methods for statistical shape comparison ». Thesis, University of Leeds, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.275749.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
45

Júnior, Iális Cavalcante de Paula. « Detecção de cantos em formas binárias planares e aplicação em recuperação de formas ». Universidade Federal do Ceará, 2013. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=11004.

Texte intégral
Résumé :
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Sistemas de recuperação de imagens baseada em conteúdo (do termo em inglês, Content-Based Image Retrieval - CBIR) que operam em bases com grande volume de dados constituem um problema relevante e desafiador em diferentes áreas do conhecimento, a saber, medicina, biologia, computação, catalogação em geral, etc. A indexação das imagens nestas bases pode ser realizada através de conteúdo visual como cor, textura e forma, sendo esta última característica a tradução visual dos objetos em uma cena. Tarefas automatizadas em inspeção industrial, registro de marca, biometria e descrição de imagens utilizam atributos da forma, como os cantos, na geração de descritores para representação, análise e reconhecimento da mesma, possibilitando ainda que estes descritores se adequem ao uso em sistemas de recuperação. Esta tese aborda o problema da extração de características de formas planares binárias a partir de cantos, na proposta de um detector multiescala de cantos e sua aplicação em um sistema CBIR. O método de detecção de cantos proposto combina uma função de angulação do contorno da forma, a sua decomposição não decimada por transformada wavelet Chapéu Mexicano e a correlação espacial entre as escalas do sinal de angulação decomposto. A partir dos resultados de detecção de cantos, foi realizado um experimento com o sistema CBIR proposto, em que informações locais e globais extraídas dos cantos detectados da forma foram combinadas à técnica Deformação Espacial Dinâmica (do termo em inglês, Dynamic Space Warping), para fins de análise de similaridade de formas com tamanhos distintos. Ainda com este experimento foi traçada uma estratégia de busca e ajuste dos parâmetros multiescala de detectores de cantos, segundo a maximização de uma função de custo. Na avaliação de desempenho da metodologia proposta, e outras técnicas de detecção de cantos, foram empregadas as medidas Precisão e Revocação. Estas medidas atestaram o bom desempenho da metodologia proposta na detecção de cantos verdadeiros das formas, em uma base pública de imagens cujas verdades terrestres estão disponíveis. Para a avaliação do experimento de recuperação de imagens, utilizamos a taxa Bull's eye em três bases públicas. Os valores alcançados desta taxa mostraram que o experimento proposto foi bem sucedido na descrição e recuperação das formas, dentre os demais métodos avaliados.
Content-based image retrieval (CBIR) applied to large scale datasets is a relevant and challenging problem present in medicine, biology, computer science, general cataloging etc. Image indexing can be done using visual information such as colors, textures and shapes (the visual translation of objects in a scene). Automated tasks in industrial inspection, trademark registration, biostatistics and image description use shape attributes, e.g. corners, to generate descriptors for representation, analysis and recognition, allowing those descriptors to be used in image retrieval systems. This thesis explores the problem of extracting information from binary planar shapes from corners, by proposing a multiscale corner detector and its use in a CBIR system. The proposed corner detection method combines an angulation function of the shape contour, its non-decimated decomposition using the Mexican hat wavelet and the spatial correlation among scales of the decomposed angulation signal. Using the information provided by our corner detection algorithm, we made experiments with the proposed CBIR. Local and global information extracted from the corners detected on shapes was used in a Dynamic Space Warping technique in order to analyze the similarity among shapes of different sizes. We also devised a strategy for searching and refining the multiscale parameters of the corner detector by maximizing an objective function. For performance evaluation of the proposed methodology and other techniques, we employed the Precision and Recall measures. These measures proved the good performance of our method in detecting true corners on shapes from a public image dataset with ground truth information. To assess the image retrieval experiments, we used the Bull's eye score in three public databases. Our experiments showed our method performed well when compared to the existing approaches in the literature.
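As a loose illustration of the multiscale ingredient (not the thesis parameters: the scale set, kernel length and product-across-scales rule below are assumptions), one can filter the angulation signal, or its derivative, with a Mexican-hat (Ricker) wavelet at several widths and keep responses that persist across scales.

import numpy as np

def ricker(width, length):
    """Mexican-hat (Ricker) wavelet sampled on `length` points."""
    t = np.arange(length) - (length - 1) / 2.0
    return (1.0 - (t / width) ** 2) * np.exp(-0.5 * (t / width) ** 2)

def multiscale_corner_response(angulation, widths=(2, 4, 8)):
    """Product of Mexican-hat responses across scales: only features that
    persist over all scales (candidate corners) keep a large response."""
    resp = np.ones_like(angulation, dtype=float)
    for w in widths:
        kernel = ricker(w, length=int(10 * w) | 1)   # odd-length kernel
        resp *= np.abs(np.convolve(angulation, kernel, mode="same"))
    return resp

if __name__ == "__main__":
    # Synthetic angulation signal of a square-like contour: step changes at corners.
    ang = np.zeros(400)
    for corner in (100, 200, 300):
        ang[corner:] += np.pi / 2.0
    ang += np.random.default_rng(0).normal(scale=0.05, size=ang.size)
    resp = multiscale_corner_response(np.diff(ang, prepend=ang[0]))
    print("strongest responses near:", np.argsort(-resp)[:3])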
Styles APA, Harvard, Vancouver, ISO, etc.
46

Yan, Dongming. « Variational shape segmentation and mesh generation ». Click to view the E-thesis via HKUTO, 2010. http://sunzi.lib.hku.hk/hkuto/record/B43932514.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
47

Yan, Dongming, et 严冬明. « Variational shape segmentation and mesh generation ». Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2010. http://hub.hku.hk/bib/B43932514.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
48

Taback, Nathan Asher. « Likelihood asymptotics and location-scale-shape analysis ». Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape15/PQDD_0012/NQ35340.pdf.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
49

Atkinson, Gary A. « Surface shape and reflectance analysis using polarisation ». Thesis, University of York, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.437614.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
50

Chinthapalli, Vamsi Krishna. « Face shape analysis in people with epilepsy ». Thesis, University College London (University of London), 2018. http://discovery.ucl.ac.uk/10043826/.

Texte intégral
Résumé :
Stereophotogrammetry and dense surface modelling are novel techniques that have been used to study face shape in genetic and neurodevelopmental disorders. In people with epilepsy, it has been recognised that the condition may be associated with underlying structural variants or malformations of cortical development in some cases. Here I recruited 869 people with epilepsy or unaffected relatives and control subjects to study face shape. I sought to explore whether face shape and symmetry, using new metrics for each, could help to predict those people with epilepsy who may have potential underlying genetic or structural causes. My reproducibility studies found that stereophotogrammetry and dense surface modelling were susceptible to error from changes in head position or face expression, but not from camera calibration, image acquisition and image landmarking. The next study found that in people with epilepsy, a measurement of atypical face shape, Face Shape Difference (FSD), was significantly increased in those with pathogenic structural variants compared to those without pathogenic structural variants. The FSD value was used to predict the presence of pathogenic structural variants with a sensitivity of 66-80% and specificity of 65-78%. Body mass index affects face shape in a partly predictable manner. The effect of body mass index differences was controlled for in a further analysis. I then analysed facial asymmetry and showed that it was increased in people with developmental lesions in the brain but not in people with pathogenic structural variants. A final study showed that stereophotogrammetry, dense surface modelling, FSD and reflected FSD could be used to study a single genetic disorder associated with epilepsy, to find previously unrecognised face shape changes. Stereophotogrammetry and dense surface modelling therefore appear to be promising tools to aid both in discovery of underlying causes for epilepsy and in understanding of such causes in terms of facial development.
Styles APA, Harvard, Vancouver, ISO, etc.