Scientific literature on the topic "Kendall shape spaces"
Create accurate references in APA, MLA, Chicago, Harvard, and many other citation styles.
Consult the thematic lists of journal articles, books, theses, conference proceedings, and other academic sources on the topic "Kendall shape spaces".
Journal articles on the topic "Kendall shape spaces"
Goodall, Colin R., and Kanti V. Mardia. "A geometrical derivation of the shape density". Advances in Applied Probability 23, no. 3 (September 1991): 496–514. http://dx.doi.org/10.2307/1427619.
Varano, Valerio, Stefano Gabriele, Franco Milicchio, Stefan Shlager, Ian Dryden, and Paolo Piras. "Geodesics in the TPS Space". Mathematics 10, no. 9 (May 5, 2022): 1562. http://dx.doi.org/10.3390/math10091562.
Wang, Yunfan, Vic Patrangenaru, and Ruite Guo. "A Central Limit Theorem for extrinsic antimeans and estimation of Veronese–Whitney means and antimeans on planar Kendall shape spaces". Journal of Multivariate Analysis 178 (July 2020): 104600. http://dx.doi.org/10.1016/j.jmva.2020.104600.
Le, Huiling. "Locating Fréchet means with application to shape spaces". Advances in Applied Probability 33, no. 2 (June 2001): 324–38. http://dx.doi.org/10.1017/s0001867800010818.
Klingenberg, Christian Peter. "Walking on Kendall's Shape Space: Understanding Shape Spaces and Their Coordinate Systems". Evolutionary Biology 47, no. 4 (August 18, 2020): 334–52. http://dx.doi.org/10.1007/s11692-020-09513-x.
Le, Huiling, and Dennis Barden. "On Simplex Shape Spaces". Journal of the London Mathematical Society 64, no. 2 (October 2001): 501–12. http://dx.doi.org/10.1112/s0024610701002332.
Kume, Alfred, and Huiling Le. "Estimating Fréchet means in Bookstein's shape space". Advances in Applied Probability 32, no. 3 (September 2000): 663–74. http://dx.doi.org/10.1239/aap/1013540237.
Huckemann, Stephan, and Herbert Ziezold. "Principal component analysis for Riemannian manifolds, with an application to triangular shape spaces". Advances in Applied Probability 38, no. 2 (June 2006): 299–319. http://dx.doi.org/10.1239/aap/1151337073.
Theses on the topic "Kendall shape spaces"
Maignant, Elodie. "Plongements barycentriques pour l'apprentissage géométrique de variétés : application aux formes et graphes". Electronic Thesis or Diss., Université Côte d'Azur, 2023. http://www.theses.fr/2023COAZ4096.
An MRI image has over 60,000 pixels. The largest known human protein consists of around 30,000 amino acids. We call such data high-dimensional. In practice, most high-dimensional data is only artificially high-dimensional. For example, of all the images that could be generated by randomly coloring 256 x 256 pixels, only a very small subset would resemble an MRI image of a human brain; the dimension of that subset is the intrinsic dimension of the data. Learning high-dimensional data is therefore often synonymous with dimensionality reduction. There are numerous methods for reducing the dimension of a dataset, and the most recent can be grouped into two approaches.

A first approach, known as manifold learning or non-linear dimensionality reduction, is based on the observation that some of the physical laws behind the data we observe are non-linear. In this case, trying to explain the intrinsic dimension of a dataset with a linear model is sometimes unrealistic; manifold learning methods instead assume a locally linear model. Moreover, with the emergence of statistical shape analysis, there has been a growing awareness that many types of data are naturally invariant to certain symmetries (rotations, reparametrizations, permutations...). These invariances are directly reflected in the intrinsic dimension of the data, but they cannot be faithfully captured by Euclidean geometry. There is therefore a growing interest in modeling such data using finer structures such as Riemannian manifolds. A second recent approach to dimension reduction then consists in generalizing existing methods to non-Euclidean data; this is known as geometric learning.

To combine geometric learning and manifold learning, we investigated the method called locally linear embedding, which has the specificity of being based on the notion of barycenter, a notion a priori defined in Euclidean spaces but which generalizes to Riemannian manifolds. In fact, the method called barycentric subspace analysis, one of those generalizing principal component analysis to Riemannian manifolds, is based on this notion as well. Here we rephrase both methods under the new notion of barycentric embeddings. Essentially, barycentric embeddings inherit the structure of most linear and non-linear dimension reduction methods, but rely on a (locally) barycentric, i.e. affine, model rather than a linear one.

The core of our work lies in the analysis of these methods, on both a theoretical and a practical level. In particular, we address the application of barycentric embeddings to two important examples in geometric learning: shapes and graphs. Beyond practical implementation issues, each of these examples raises its own theoretical questions, mostly related to the geometry of quotient spaces. In particular, we show that, compared with standard dimension reduction methods in graph analysis, barycentric embeddings stand out for their better interpretability. In parallel with these examples, we characterize the geometry of locally barycentric embeddings, which generalize the projection computed by locally linear embedding. Finally, algorithms for geometric manifold learning, novel in their approach, complete this work.
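The barycentric reading of locally linear embedding described in the abstract can be illustrated with a minimal Euclidean sketch: each point is reconstructed as a barycenter of its nearest neighbors, and the low-dimensional embedding is chosen to preserve those barycentric weights. The NumPy function below is our own illustration of the classic (Euclidean) algorithm, not the Riemannian generalization developed in the thesis.

```python
import numpy as np

def lle(X, n_neighbors=8, n_components=2, reg=1e-3):
    """Classic locally linear embedding (Roweis & Saul).

    1. Find each point's nearest neighbors.
    2. Compute barycentric weights reconstructing the point from them.
    3. Find low-dimensional coordinates preserving those weights.
    """
    n = X.shape[0]
    # Pairwise squared distances; column 0 of the argsort is the point itself.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    nbrs = np.argsort(d2, axis=1)[:, 1:n_neighbors + 1]

    W = np.zeros((n, n))
    for i in range(n):
        Z = X[nbrs[i]] - X[i]                 # neighbors centered on x_i
        G = Z @ Z.T                           # local Gram matrix
        G += reg * np.trace(G) * np.eye(n_neighbors)  # regularization
        w = np.linalg.solve(G, np.ones(n_neighbors))
        W[i, nbrs[i]] = w / w.sum()           # barycentric: weights sum to 1

    # Embedding: bottom eigenvectors of (I - W)^T (I - W),
    # skipping the constant eigenvector.
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    _, vecs = np.linalg.eigh(M)
    return vecs[:, 1:n_components + 1]

# Toy usage: points on a curved 2D surface embedded in 3D.
rng = np.random.default_rng(0)
t = rng.uniform(size=(60, 2))
X = np.c_[t, (t ** 2).sum(axis=1)]
Y = lle(X, n_neighbors=8, n_components=2)
```

The step the thesis generalizes is the weight computation: in a Riemannian manifold the linear reconstruction `X[nbrs[i]] - X[i]` is replaced by a barycentric condition expressed through the exponential map.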
Book chapters on the topic "Kendall shape spaces"
Guigui, Nicolas, Elodie Maignant, Alain Trouvé, and Xavier Pennec. "Parallel Transport on Kendall Shape Spaces". In Lecture Notes in Computer Science, 103–10. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-80209-7_12.
Maignant, Elodie, Alain Trouvé, and Xavier Pennec. "Riemannian Locally Linear Embedding with Application to Kendall Shape Spaces". In Lecture Notes in Computer Science, 12–20. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-38271-0_2.
Gaikwad, Akshay V., Saurabh J. Shigwan, and Suyash P. Awate. "A Statistical Model for Smooth Shapes in Kendall Shape Space". In Lecture Notes in Computer Science, 628–35. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-24574-4_75.
Paskin, Martha, Daniel Baum, Mason N. Dean, and Christoph von Tycowicz. "A Kendall Shape Space Approach to 3D Shape Estimation from 2D Landmarks". In Lecture Notes in Computer Science, 363–79. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-20086-1_21.
Rouahi, Hibat Allah, Riadh Mtibaa, and Ezzeddine Zagrouba. "Gaussian Bayes Classifier for 2D Shapes in Kendall Space". In Representations, Analysis and Recognition of Shape and Motion from Imaging Data, 152–60. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-60654-5_13.
Rouahi, Hibat'Allah, Riadh Mtibaa, and Ezzeddine Zagrouba. "Bayesian Approach in Kendall Shape Space for Plant Species Classification". In Pattern Recognition and Image Analysis, 322–31. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-58838-4_36.
Nava-Yazdani, Esfandiar, Hans-Christian Hege, and Christoph von Tycowicz. "A Geodesic Mixed Effects Model in Kendall's Shape Space". In Multimodal Brain Image Analysis and Mathematical Foundations of Computational Anatomy, 209–18. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-33226-6_22.
Smallman-Raynor, Matthew, Andrew Cliff, Keith Ord, and Peter Haggett. "Epidemics as Diffusion Waves". In A Geography of Infection, 1–34. Oxford University Press, 2022. http://dx.doi.org/10.1093/oso/9780192848390.003.0001.
Texte intégralActes de conférences sur le sujet "Kendall shape spaces"
Hosni, Nadia, Hassen Drira, Faten Chaieb, and Boulbaba Ben Amor. "3D Gait Recognition based on Functional PCA on Kendall's Shape Space". In 2018 24th International Conference on Pattern Recognition (ICPR). IEEE, 2018. http://dx.doi.org/10.1109/icpr.2018.8545040.
Jacob, Geethu, and Sukhendu Das. "Moving Object Segmentation in Jittery Videos by Stabilizing Trajectories Modeled in Kendall's Shape Space". In British Machine Vision Conference 2017. British Machine Vision Association, 2017. http://dx.doi.org/10.5244/c.31.49.