Academic literature on the topic "Kendall shape spaces"
Journal articles on the topic "Kendall shape spaces"
Goodall, Colin R., and Kanti V. Mardia. "A geometrical derivation of the shape density". Advances in Applied Probability 23, no. 3 (September 1991): 496–514. http://dx.doi.org/10.2307/1427619.
Varano, Valerio, Stefano Gabriele, Franco Milicchio, Stefan Shlager, Ian Dryden, and Paolo Piras. "Geodesics in the TPS Space". Mathematics 10, no. 9 (May 5, 2022): 1562. http://dx.doi.org/10.3390/math10091562.
Wang, Yunfan, Vic Patrangenaru, and Ruite Guo. "A Central Limit Theorem for extrinsic antimeans and estimation of Veronese–Whitney means and antimeans on planar Kendall shape spaces". Journal of Multivariate Analysis 178 (July 2020): 104600. http://dx.doi.org/10.1016/j.jmva.2020.104600.
Le, Huiling. "Locating Fréchet means with application to shape spaces". Advances in Applied Probability 33, no. 2 (June 2001): 324–38. http://dx.doi.org/10.1017/s0001867800010818.
Klingenberg, Christian Peter. "Walking on Kendall's Shape Space: Understanding Shape Spaces and Their Coordinate Systems". Evolutionary Biology 47, no. 4 (August 18, 2020): 334–52. http://dx.doi.org/10.1007/s11692-020-09513-x.
Le, Huiling, and Dennis Barden. "On Simplex Shape Spaces". Journal of the London Mathematical Society 64, no. 2 (October 2001): 501–12. http://dx.doi.org/10.1112/s0024610701002332.
Kume, Alfred, and Huiling Le. "Estimating Fréchet means in Bookstein's shape space". Advances in Applied Probability 32, no. 3 (September 2000): 663–74. http://dx.doi.org/10.1239/aap/1013540237.
Huckemann, Stephan, and Herbert Ziezold. "Principal component analysis for Riemannian manifolds, with an application to triangular shape spaces". Advances in Applied Probability 38, no. 2 (June 2006): 299–319. http://dx.doi.org/10.1239/aap/1151337073.
Theses on the topic "Kendall shape spaces"
Maignant, Elodie. "Plongements barycentriques pour l'apprentissage géométrique de variétés : application aux formes et graphes". Electronic Thesis or Diss., Université Côte d'Azur, 2023. http://www.theses.fr/2023COAZ4096.
An MRI image has over 60,000 pixels. The largest known human protein consists of around 30,000 amino acids. We call such data high-dimensional. In practice, most high-dimensional data is high-dimensional only artificially. For example, of all the images that could be randomly generated by coloring 256 x 256 pixels, only a very small subset would resemble an MRI image of a human brain. This is known as the intrinsic dimension of such data. Therefore, learning high-dimensional data is often synonymous with dimensionality reduction. There are numerous methods for reducing the dimension of a dataset, the most recent of which can be classified according to two approaches.

A first approach known as manifold learning or non-linear dimensionality reduction is based on the observation that some of the physical laws behind the data we observe are non-linear. In this case, trying to explain the intrinsic dimension of a dataset with a linear model is sometimes unrealistic. Instead, manifold learning methods assume a locally linear model.

Moreover, with the emergence of statistical shape analysis, there has been a growing awareness that many types of data are naturally invariant to certain symmetries (rotations, reparametrizations, permutations...). Such properties are directly mirrored in the intrinsic dimension of such data. These invariances cannot be faithfully transcribed by Euclidean geometry. There is therefore a growing interest in modeling such data using finer structures such as Riemannian manifolds. A second recent approach to dimension reduction consists then in generalizing existing methods to non-Euclidean data. This is known as geometric learning.

In order to combine both geometric learning and manifold learning, we investigated the method called locally linear embedding, which has the specificity of being based on the notion of barycenter, a notion a priori defined in Euclidean spaces but which generalizes to Riemannian manifolds. In fact, the method called barycentric subspace analysis, which is one of those generalizing principal component analysis to Riemannian manifolds, is based on this notion as well. Here we rephrase both methods under the new notion of barycentric embeddings. Essentially, barycentric embeddings inherit the structure of most linear and non-linear dimension reduction methods, but rely on a (locally) barycentric -- affine -- model rather than a linear one.

The core of our work lies in the analysis of these methods, both on a theoretical and practical level. In particular, we address the application of barycentric embeddings to two important examples in geometric learning: shapes and graphs. In addition to practical implementation issues, each of these examples raises its own theoretical questions, mostly related to the geometry of quotient spaces. In particular, we highlight that compared to standard dimension reduction methods in graph analysis, barycentric embeddings stand out for their better interpretability. In parallel with these examples, we characterize the geometry of locally barycentric embeddings, which generalize the projection computed by locally linear embedding. Finally, algorithms for geometric manifold learning, novel in their approach, complete this work.
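The barycentric notion at the heart of the abstract above can be illustrated in the classical Euclidean setting: in locally linear embedding, each point is reconstructed as an affine barycenter of its neighbors. The following numpy sketch (illustrative only; function names are ours, and the thesis's contribution is generalizing this step to Riemannian quotient spaces such as Kendall shape spaces) shows that reconstruction step:

```python
import numpy as np

def barycentric_weights(X, i, neighbors, reg=1e-3):
    """Weights w minimizing ||x_i - sum_j w_j x_j||^2 subject to sum_j w_j = 1.

    This is the local reconstruction step of locally linear embedding (LLE):
    the point x_i is expressed as an affine barycenter of its neighbors.
    """
    Z = X[neighbors] - X[i]                           # center neighbors on x_i
    G = Z @ Z.T                                       # local Gram matrix
    G += reg * np.trace(G) * np.eye(len(neighbors))   # regularize for stability
    w = np.linalg.solve(G, np.ones(len(neighbors)))
    return w / w.sum()                                # enforce the affine constraint

# Toy example: three collinear points; the middle one is the midpoint
# of its two neighbors, so the barycentric weights are [0.5, 0.5].
X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
w = barycentric_weights(X, 1, [0, 2])
```

In Euclidean space these weights come from a small linear system; on a Riemannian manifold the barycenter is instead defined implicitly (as a weighted Fréchet mean), which is precisely where the quotient-space geometry discussed in the thesis enters.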
Book chapters on the topic "Kendall shape spaces"
Guigui, Nicolas, Elodie Maignant, Alain Trouvé, and Xavier Pennec. "Parallel Transport on Kendall Shape Spaces". In Lecture Notes in Computer Science, 103–10. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-80209-7_12.
Maignant, Elodie, Alain Trouvé, and Xavier Pennec. "Riemannian Locally Linear Embedding with Application to Kendall Shape Spaces". In Lecture Notes in Computer Science, 12–20. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-38271-0_2.
Gaikwad, Akshay V., Saurabh J. Shigwan, and Suyash P. Awate. "A Statistical Model for Smooth Shapes in Kendall Shape Space". In Lecture Notes in Computer Science, 628–35. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-24574-4_75.
Paskin, Martha, Daniel Baum, Mason N. Dean, and Christoph von Tycowicz. "A Kendall Shape Space Approach to 3D Shape Estimation from 2D Landmarks". In Lecture Notes in Computer Science, 363–79. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-20086-1_21.
Rouahi, Hibat Allah, Riadh Mtibaa, and Ezzeddine Zagrouba. "Gaussian Bayes Classifier for 2D Shapes in Kendall Space". In Representations, Analysis and Recognition of Shape and Motion from Imaging Data, 152–60. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-60654-5_13.
Rouahi, Hibat'Allah, Riadh Mtibaa, and Ezzeddine Zagrouba. "Bayesian Approach in Kendall Shape Space for Plant Species Classification". In Pattern Recognition and Image Analysis, 322–31. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-58838-4_36.
Nava-Yazdani, Esfandiar, Hans-Christian Hege, and Christoph von Tycowicz. "A Geodesic Mixed Effects Model in Kendall's Shape Space". In Multimodal Brain Image Analysis and Mathematical Foundations of Computational Anatomy, 209–18. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-33226-6_22.
Texto completoSmallman-Raynor, Matthew, Andrew Cliff, Keith Ord y Peter Haggett. "Epidemics as Diffusion Waves". En A Geography of Infection, 1–34. Oxford University Press, 2022. http://dx.doi.org/10.1093/oso/9780192848390.003.0001.
Conference papers on the topic "Kendall shape spaces"
Hosni, Nadia, Hassen Drira, Faten Chaieb, and Boulbaba Ben Amor. "3D Gait Recognition based on Functional PCA on Kendall's Shape Space". In 2018 24th International Conference on Pattern Recognition (ICPR). IEEE, 2018. http://dx.doi.org/10.1109/icpr.2018.8545040.
Jacob, Geethu, and Sukhendu Das. "Moving Object Segmentation in Jittery Videos by Stabilizing Trajectories Modeled in Kendall's Shape Space". In British Machine Vision Conference 2017. British Machine Vision Association, 2017. http://dx.doi.org/10.5244/c.31.49.