Academic literature on the topic 'Kendall shape spaces'

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Kendall shape spaces.'

Journal articles on the topic "Kendall shape spaces"

Goodall, Colin R., and Kanti V. Mardia. "A geometrical derivation of the shape density." Advances in Applied Probability 23, no. 3 (September 1991): 496–514. http://dx.doi.org/10.2307/1427619.

Abstract:
The density for the shapes of random configurations of N independent Gaussian-distributed landmarks in the plane with unequal means was first derived by Mardia and Dryden (1989a). Kendall (1984), (1989) describes a hierarchy of spaces for landmarks, including Euclidean figure space containing the original configuration, preform space (with location removed), preshape space (with location and scale removed), and shape space. We derive the joint density of the landmark points in each of these intermediate spaces, culminating in confirmation of the Mardia–Dryden result in shape space. This three-step derivation is an appealing alternative to the single-step original derivation, and also provides strong geometrical motivation and insight into Kendall's hierarchy. Preform space and preshape space are respectively Euclidean space with dimension 2(N–1) and the sphere in that space, and thus the first two steps are reasonably familiar. The third step, from preshape space to shape space, is more interesting. The quotient by the rotation group partitions the preshape sphere into equivalence classes of preshapes with the same shape. We introduce a canonical system of preshape coordinates that include 2(N–2) polar coordinates for shape and one coordinate for rotation. Integration over the rotation coordinate gives the Mardia–Dryden result. However, the usual geometrical intuition fails because the set of preshapes keeping the rotation coordinate (however chosen) fixed is not an integrable manifold. We characterize the geometry of the quotient operation through the relationships between distances in preshape space and distances among the corresponding shapes.

Varano, Valerio, Stefano Gabriele, Franco Milicchio, Stefan Shlager, Ian Dryden, and Paolo Piras. "Geodesics in the TPS Space." Mathematics 10, no. 9 (May 5, 2022): 1562. http://dx.doi.org/10.3390/math10091562.

Abstract:
In shape analysis, the interpolation of shape trajectories is often performed by means of geodesics in an appropriate Riemannian shape space. Over the past several decades, different metrics and shape spaces have been proposed, including Kendall shape space, LDDMM-based approaches, and elastic contour methods, among others. Once a Riemannian space is chosen, geodesics and parallel transports can be used to build splines or piecewise geodesic paths. In a recent paper, we introduced a new Riemannian shape space named TPS Space based on the Thin Plate Spline interpolant and characterized by an appropriate metric and parallel transport rule. In the present paper, we further explore the geometry of the TPS Space by characterizing the properties of its geodesics. Several applications show the capability of the proposed formulation to conserve important physical properties of deformation, such as local strains and global elastic energy.

Wang, Yunfan, Vic Patrangenaru, and Ruite Guo. "A Central Limit Theorem for extrinsic antimeans and estimation of Veronese–Whitney means and antimeans on planar Kendall shape spaces." Journal of Multivariate Analysis 178 (July 2020): 104600. http://dx.doi.org/10.1016/j.jmva.2020.104600.

Le, Huiling. "Locating Fréchet means with application to shape spaces." Advances in Applied Probability 33, no. 2 (June 2001): 324–38. http://dx.doi.org/10.1017/s0001867800010818.

Abstract:
We use Jacobi field arguments and the contraction mapping theorem to locate Fréchet means of a class of probability measures on locally symmetric Riemannian manifolds with non-negative sectional curvatures. This leads, in particular, to a method for estimating Fréchet mean shapes, with respect to the distance function ρ determined by the induced Riemannian metric, of a class of probability measures on Kendall's shape spaces. We then combine this with the technique of ‘horizontally lifting’ to the pre-shape spheres to obtain an algorithm for finding Fréchet mean shapes, with respect to ρ, of a class of probability measures on Kendall's shape spaces in terms of the vertices of random shapes. This gives us, for example, an algorithm for finding Fréchet mean shapes of samples of configurations on the plane which is expressed directly in terms of the vertices.
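
The Fréchet mean the abstract locates can be illustrated with the generic fixed-point iteration on a unit sphere (the preshape spheres mentioned above are such spheres). This is a standard Karcher-mean sketch, not the paper's horizontal-lifting algorithm; names are illustrative.

```python
import numpy as np

def sphere_log(p, q):
    """Log map of the unit sphere: the tangent vector at p pointing to q,
    with length equal to the great-circle distance."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    d = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))
    if d < 1e-12:
        return np.zeros_like(p)
    v = q - np.dot(p, q) * p
    return d * v / np.linalg.norm(v)

def sphere_exp(p, v):
    """Exp map of the unit sphere: follow the geodesic from p along v."""
    n = np.linalg.norm(v)
    if n < 1e-12:
        return p
    return np.cos(n) * p + np.sin(n) * v / n

def frechet_mean(points, steps=100):
    """Fixed-point iteration for the Fréchet (Karcher) mean: repeatedly
    move along the average of the log maps of the sample."""
    mu = np.asarray(points[0], float)
    mu = mu / np.linalg.norm(mu)
    for _ in range(steps):
        grad = np.mean([sphere_log(mu, q) for q in points], axis=0)
        mu = sphere_exp(mu, grad)
    return mu
```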

Klingenberg, Christian Peter. "Walking on Kendall’s Shape Space: Understanding Shape Spaces and Their Coordinate Systems." Evolutionary Biology 47, no. 4 (August 18, 2020): 334–52. http://dx.doi.org/10.1007/s11692-020-09513-x.

Abstract:
More and more analyses of biological shapes are using the techniques of geometric morphometrics based on configurations of landmarks in two or three dimensions. A fundamental concept at the core of these analyses is Kendall’s shape space and local approximations to it by shape tangent spaces. Kendall’s shape space is complex because it is a curved surface and, for configurations with more than three landmarks, multidimensional. This paper uses the shape space for triangles, which is the surface of a sphere, to explore and visualize some properties of shape spaces and the respective tangent spaces. Considerations about the dimensionality of shape spaces are an important step in understanding them, and can offer a coordinate system that can translate between positions in the shape space and the corresponding landmark configurations and vice versa. By simulation studies “walking” along great circles around the shape space, each of them corresponding to the repeated application of a particular shape change, it is possible to grasp intuitively why shape spaces are curved and closed surfaces. From these considerations and the available information on shape spaces for configurations with more than three landmarks, the conclusion emerges that the approach using a tangent space approximation in general is valid for biological datasets. The quality of approximation depends on the scale of variation in the data, but existing analyses suggest this should be satisfactory to excellent in most empirical datasets.
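
The spherical shape space for triangles used throughout this paper can be parameterized explicitly. A sketch of one standard construction (Helmert coordinates followed by inverse stereographic projection onto a sphere of radius 1/2; axis conventions vary across the literature, and the function name is illustrative):

```python
import numpy as np

def triangle_to_sphere(z1, z2, z3):
    """Labelled planar triangle (complex vertices, z1 != z2) -> point on
    Kendall's shape sphere of radius 1/2: form Helmert coordinates, then map
    the shape ratio u2/u1 by inverse stereographic projection."""
    u1 = (z2 - z1) / np.sqrt(2)
    u2 = (2 * z3 - z1 - z2) / np.sqrt(6)
    zeta = u2 / u1                  # the shape, as a point of the plane
    r2 = abs(zeta) ** 2
    return np.array([zeta.real, zeta.imag, (r2 - 1) / 2]) / (1 + r2)
```

In these particular coordinates, collinear (degenerate) triangles land on a great circle and the two equilateral triangles sit farthest from it, which is one way to see the sphere's geometry concretely.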

LE, HUILING, and DENNIS BARDEN. "ON SIMPLEX SHAPE SPACES." Journal of the London Mathematical Society 64, no. 2 (October 2001): 501–12. http://dx.doi.org/10.1112/s0024610701002332.

Abstract:
The right-invariant Riemannian metric on simplex shape spaces in fact makes them particular Riemannian symmetric spaces of non-compact type. In the paper, the general properties of such symmetric spaces are made explicit for simplex shape spaces. In particular, a global matrix coordinate representation is suggested, with respect to which several geometric features, important for shape analysis, have simple and easily computable expressions. As a typical application, it is shown how to locate the Fréchet means of a class of probability measures on the simplex shape spaces, a result analogous to that for Kendall's shape spaces.

Kume, Alfred, and Huiling Le. "Estimating Fréchet means in Bookstein's shape space." Advances in Applied Probability 32, no. 3 (September 2000): 663–74. http://dx.doi.org/10.1239/aap/1013540237.

Abstract:
In [8], Le showed that procrustean mean shapes of samples are consistent estimates of Fréchet means for a class of probability measures in Kendall's shape spaces. In this paper, we investigate the analogous case in Bookstein's shape space for labelled triangles and propose an estimator that is easy to compute and is a consistent estimate of the Fréchet mean, with respect to sinh(δ/√2), of any probability measure for which such a mean exists. Furthermore, for a certain class of probability measures, this estimate also tends almost surely to the Fréchet mean calculated with respect to the Riemannian distance δ.
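
Bookstein's shape space for labelled triangles, used in this paper, registers the two baseline landmarks to fixed positions; the third vertex then carries all the shape information. A minimal sketch of the common baseline convention sending z1 to -1/2 and z2 to +1/2 (an illustration, not the paper's estimator):

```python
def bookstein_coordinates(z1, z2, z3):
    """Bookstein shape coordinate of a labelled planar triangle: apply the
    unique similarity sending the baseline z1 -> -1/2, z2 -> +1/2 (assumes
    z1 != z2); the image of z3 is the shape coordinate."""
    return (z3 - (z1 + z2) / 2) / (z2 - z1)

# The coordinate is invariant under translation, rotation and scaling.
w = bookstein_coordinates(0j, 1 + 0j, 0.5 + 1j)   # -> 1j
```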

Huckemann, Stephan, and Herbert Ziezold. "Principal component analysis for Riemannian manifolds, with an application to triangular shape spaces." Advances in Applied Probability 38, no. 2 (June 2006): 299–319. http://dx.doi.org/10.1239/aap/1151337073.

Abstract:
Classical principal component analysis on manifolds, for example on Kendall's shape spaces, is carried out in the tangent space of a Euclidean mean equipped with a Euclidean metric. We propose a method of principal component analysis for Riemannian manifolds based on geodesics of the intrinsic metric, and provide a numerical implementation in the case of spheres. This method allows us, for example, to compare principal component geodesics of different data samples. In order to determine principal component geodesics, we show that in general, owing to curvature, the principal component geodesics do not pass through the intrinsic mean. As a consequence, means other than the intrinsic mean are considered, allowing for several choices of definition of geodesic variance. In conclusion we apply our method to the space of planar triangular shapes and compare our findings with those of standard Euclidean principal component analysis.
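
The tangent-space construction this abstract starts from is itself easy to sketch for data on a sphere such as the preshape sphere: lift the sample to the tangent space at a base point with the log map, then run ordinary Euclidean PCA there. A generic numpy sketch (illustrative of the classical approach, not the authors' geodesic PCA):

```python
import numpy as np

def sphere_log(p, q):
    """Log map of the unit sphere: tangent vector at p pointing to q."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    d = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))
    if d < 1e-12:
        return np.zeros_like(p)
    v = q - np.dot(p, q) * p
    return d * v / np.linalg.norm(v)

def tangent_pca(points, base):
    """Tangent-space PCA: lift the sample to the tangent space at `base`
    with the log map, then eigendecompose the empirical covariance."""
    X = np.array([sphere_log(base, q) for q in points])
    X = X - X.mean(axis=0)
    cov = X.T @ X / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending order
    order = np.argsort(eigvals)[::-1]        # re-sort to descending
    return eigvals[order], eigvecs[:, order]
```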

Dissertations / Theses on the topic "Kendall shape spaces"

Maignant, Elodie. "Plongements barycentriques pour l'apprentissage géométrique de variétés : application aux formes et graphes." Electronic Thesis or Diss., Université Côte d'Azur, 2023. http://www.theses.fr/2023COAZ4096.

Abstract:
An MRI image has over 60,000 pixels. The largest known human protein consists of around 30,000 amino acids. We call such data high-dimensional. In practice, most high-dimensional data is high-dimensional only artificially. For example, of all the images that could be randomly generated by coloring 256 x 256 pixels, only a very small subset would resemble an MRI image of a human brain. This is known as the intrinsic dimension of such data. Therefore, learning high-dimensional data is often synonymous with dimensionality reduction. There are numerous methods for reducing the dimension of a dataset, the most recent of which can be classified according to two approaches. A first approach, known as manifold learning or non-linear dimensionality reduction, is based on the observation that some of the physical laws behind the data we observe are non-linear. In this case, trying to explain the intrinsic dimension of a dataset with a linear model is sometimes unrealistic. Instead, manifold learning methods assume a locally linear model. Moreover, with the emergence of statistical shape analysis, there has been a growing awareness that many types of data are naturally invariant to certain symmetries (rotations, reparametrizations, permutations...). Such properties are directly mirrored in the intrinsic dimension of such data. These invariances cannot be faithfully transcribed by Euclidean geometry. There is therefore a growing interest in modeling such data using finer structures such as Riemannian manifolds. A second recent approach to dimension reduction consists in generalizing existing methods to non-Euclidean data. This is known as geometric learning. In order to combine both geometric learning and manifold learning, we investigated the method called locally linear embedding, which has the specificity of being based on the notion of barycenter, a notion defined a priori in Euclidean spaces but which generalizes to Riemannian manifolds.
In fact, the method called barycentric subspace analysis, which is one of those generalizing principal component analysis to Riemannian manifolds, is based on this notion as well. Here we rephrase both methods under the new notion of barycentric embeddings. Essentially, barycentric embeddings inherit the structure of most linear and non-linear dimension reduction methods, but rely on a (locally) barycentric, i.e. affine, model rather than a linear one. The core of our work lies in the analysis of these methods, both on a theoretical and practical level. In particular, we address the application of barycentric embeddings to two important examples in geometric learning: shapes and graphs. In addition to practical implementation issues, each of these examples raises its own theoretical questions, mostly related to the geometry of quotient spaces. In particular, we highlight that compared to standard dimension reduction methods in graph analysis, barycentric embeddings stand out for their better interpretability. In parallel with these examples, we characterize the geometry of locally barycentric embeddings, which generalize the projection computed by locally linear embedding. Finally, algorithms for geometric manifold learning, novel in their approach, complete this work.
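
The barycentre-based first step of locally linear embedding, which this thesis generalizes to Riemannian manifolds, reduces in the Euclidean case to a small constrained least-squares problem: find sum-to-one weights reconstructing a point from its neighbors. A minimal sketch of that Euclidean step (standard LLE, not the thesis's Riemannian version; the regularization constant is an illustrative choice):

```python
import numpy as np

def barycentric_weights(x, neighbors, reg=1e-3):
    """Affine (sum-to-one) weights reconstructing x from its neighbors,
    as in the first step of locally linear embedding: minimise
    |x - sum_j w_j n_j|^2 subject to sum_j w_j = 1."""
    Z = np.asarray(neighbors, float) - np.asarray(x, float)  # shift x to origin
    G = Z @ Z.T                                   # local Gram matrix
    G = G + reg * np.trace(G) * np.eye(len(Z))    # regularise: G may be singular
    w = np.linalg.solve(G, np.ones(len(Z)))
    return w / w.sum()                            # enforce the constraint
```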

Book chapters on the topic "Kendall shape spaces"

Guigui, Nicolas, Elodie Maignant, Alain Trouvé, and Xavier Pennec. "Parallel Transport on Kendall Shape Spaces." In Lecture Notes in Computer Science, 103–10. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-80209-7_12.

Maignant, Elodie, Alain Trouvé, and Xavier Pennec. "Riemannian Locally Linear Embedding with Application to Kendall Shape Spaces." In Lecture Notes in Computer Science, 12–20. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-38271-0_2.

Gaikwad, Akshay V., Saurabh J. Shigwan, and Suyash P. Awate. "A Statistical Model for Smooth Shapes in Kendall Shape Space." In Lecture Notes in Computer Science, 628–35. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-24574-4_75.

Paskin, Martha, Daniel Baum, Mason N. Dean, and Christoph von Tycowicz. "A Kendall Shape Space Approach to 3D Shape Estimation from 2D Landmarks." In Lecture Notes in Computer Science, 363–79. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-20086-1_21.

Rouahi, Hibat Allah, Riadh Mtibaa, and Ezzeddine Zagrouba. "Gaussian Bayes Classifier for 2D Shapes in Kendall Space." In Representations, Analysis and Recognition of Shape and Motion from Imaging Data, 152–60. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-60654-5_13.

Rouahi, Hibat’Allah, Riadh Mtibaa, and Ezzeddine Zagrouba. "Bayesian Approach in Kendall Shape Space for Plant Species Classification." In Pattern Recognition and Image Analysis, 322–31. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-58838-4_36.

Nava-Yazdani, Esfandiar, Hans-Christian Hege, and Christoph von Tycowicz. "A Geodesic Mixed Effects Model in Kendall’s Shape Space." In Multimodal Brain Image Analysis and Mathematical Foundations of Computational Anatomy, 209–18. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-33226-6_22.

Smallman-Raynor, Matthew, Andrew Cliff, Keith Ord, and Peter Haggett. "Epidemics as Diffusion Waves." In A Geography of Infection, 1–34. Oxford University Press, 2022. http://dx.doi.org/10.1093/oso/9780192848390.003.0001.

Abstract:
‘Epidemics as Diffusion Waves’ establishes the parameters of the book. It begins by defining the basic building blocks of the study: infection, contagion, and disease. Key epidemiological concepts and terms are introduced and defined, and classic compartmental (susceptible → infective → recovered; SIR) approaches to epidemic modelling are outlined. Epidemic waves in the time domain are examined in terms of their shape (logistic model, Kendall waves), as repetitive wave trains (Bartlett threshold model), and as branching networks (Reed–Frost model). The second half of the chapter examines epidemic waves in the space domain. The swash–backwash model of the single epidemic wave introduces the concept of the spatial basic reproduction number R0A, the geographical analogue of the basic reproduction number R 0. Spatial simulations of diffusion waves are explored through the work of Torsten Hägerstrand. The chapter concludes with a consideration of dyadic (lag correlation and spatial interaction) approaches to epidemic modelling.
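
The compartmental SIR chain mentioned in this abstract can be illustrated with a few lines of forward-Euler integration. This is a textbook sketch, not code from the chapter; the parameter values are illustrative.

```python
def sir(beta, gamma, s0, i0, r0, days, dt=0.1):
    """Forward-Euler integration of the classic SIR model:
    susceptible -> infective (rate beta*S*I/N) -> recovered (rate gamma*I)."""
    n = s0 + i0 + r0
    s, i, r = float(s0), float(i0), float(r0)
    history = [(s, i, r)]
    for _ in range(int(days / dt)):
        new_inf = beta * s * i / n * dt
        new_rec = gamma * i * dt
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        history.append((s, i, r))
    return history

# With basic reproduction number R0 = beta/gamma = 5, the epidemic wave
# sweeps through most of the population and then burns out.
trajectory = sir(beta=0.5, gamma=0.1, s0=990, i0=10, r0=0, days=200)
```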

Conference papers on the topic "Kendall shape spaces"

Hosni, Nadia, Hassen Drira, Faten Chaieb, and Boulbaba Ben Amor. "3D Gait Recognition based on Functional PCA on Kendall's Shape Space." In 2018 24th International Conference on Pattern Recognition (ICPR). IEEE, 2018. http://dx.doi.org/10.1109/icpr.2018.8545040.

Jacob, Geethu, and Sukhendu Das. "Moving Object Segmentation in Jittery Videos by Stabilizing Trajectories Modeled in Kendall's Shape Space." In British Machine Vision Conference 2017. British Machine Vision Association, 2017. http://dx.doi.org/10.5244/c.31.49.
