Selection of scientific literature on the topic "Robust Representations"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Table of contents
Browse the lists of current articles, books, theses, reports, and other scholarly sources on the topic "Robust Representations."
Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, provided the relevant parameters are available in its metadata.
Journal articles on the topic "Robust Representations"
Kuo, Yen-Ling. „Learning Representations for Robust Human-Robot Interaction“. Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 20 (March 24, 2024): 22673. http://dx.doi.org/10.1609/aaai.v38i20.30289.
Yang, Shuo, Tianyu Guo, Yunhe Wang, and Chang Xu. „Adversarial Robustness through Disentangled Representations“. Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 4 (May 18, 2021): 3145–53. http://dx.doi.org/10.1609/aaai.v35i4.16424.
Iddianozie, Chidubem, and Gavin McArdle. „Towards Robust Representations of Spatial Networks Using Graph Neural Networks“. Applied Sciences 11, no. 15 (July 27, 2021): 6918. http://dx.doi.org/10.3390/app11156918.
Vu, Hung, Tu Dinh Nguyen, Trung Le, Wei Luo, and Dinh Phung. „Robust Anomaly Detection in Videos Using Multilevel Representations“. Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5216–23. http://dx.doi.org/10.1609/aaai.v33i01.33015216.
Ho, Edward Kei Shiu, and Lai Wan Chan. „Analyzing Holistic Parsers: Implications for Robust Parsing and Systematicity“. Neural Computation 13, no. 5 (May 1, 2001): 1137–70. http://dx.doi.org/10.1162/08997660151134361.
Yang, Qing, Jun Chen, and Najla Al-Nabhan. „Data representation using robust nonnegative matrix factorization for edge computing“. Mathematical Biosciences and Engineering 19, no. 2 (2021): 2147–78. http://dx.doi.org/10.3934/mbe.2022100.
Parlett, Beresford N., and Inderjit S. Dhillon. „Relatively robust representations of symmetric tridiagonals“. Linear Algebra and its Applications 309, no. 1-3 (April 2000): 121–51. http://dx.doi.org/10.1016/s0024-3795(99)00262-1.
Medina, Josep R., and Carlos R. Sanchez‐Carratala. „Robust AR Representations of Ocean Spectra“. Journal of Engineering Mechanics 117, no. 12 (December 1991): 2926–30. http://dx.doi.org/10.1061/(asce)0733-9399(1991)117:12(2926).
Higashi, Masatake, Fuyuki Torihara, Nobuhiro Takeuchi, Toshio Sata, Tsuyoshi Saitoh, and Mamoru Hosaka. „Robust algorithms for face-based representations“. Computer-Aided Design 29, no. 2 (February 1997): 135–46. http://dx.doi.org/10.1016/s0010-4485(96)00042-5.
Rostami, Mohammad. „Internal Robust Representations for Domain Generalization“. Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (June 26, 2023): 15451. http://dx.doi.org/10.1609/aaai.v37i13.26818.
Dissertations on the topic "Robust Representations"
Tran, Thi Quynh Nhi. „Robust and comprehensive joint image-text representations“. Thesis, Paris, CNAM, 2017. http://www.theses.fr/2017CNAM1096/document.
This thesis investigates the joint modeling of the visual and textual content of multimedia documents to address cross-modal problems. Such tasks require the ability to match information across modalities. A common representation space, obtained e.g. by Kernel Canonical Correlation Analysis, on which images and text can both be represented and directly compared, is the generally adopted solution. Nevertheless, such a joint space still suffers from several deficiencies that may hinder the performance of cross-modal tasks. An important contribution of this thesis is therefore to identify two major limitations of such a space. The first limitation concerns information that is poorly represented on the common space yet very significant for a retrieval task. The second limitation consists in a separation between modalities on the common space, which leads to coarse cross-modal matching. To deal with the first limitation concerning poorly represented data, we put forward a model which first identifies such information and then finds ways to combine it with data that is relatively well represented on the joint space. Evaluations on "text illustration" tasks show that by appropriately identifying and taking such information into account, the results of cross-modal retrieval can be strongly improved. The major work in this thesis aims to cope with the separation between modalities on the joint space to enhance the performance of cross-modal tasks. We propose two representation methods for bi-modal or uni-modal documents that aggregate information from both the visual and textual modalities projected on the joint space. Specifically, for uni-modal documents we suggest a completion process relying on an auxiliary dataset to find the corresponding information in the absent modality and then use such information to build a final bi-modal representation for a uni-modal document. Evaluations show that our approaches achieve state-of-the-art results on several standard and challenging datasets for cross-modal retrieval or bi-modal and cross-modal classification.
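The common-space idea described in this abstract can be pictured with a plain (linear) CCA projection. The sketch below is only an assumed, simplified analogue of the kernel CCA baseline the thesis refers to; the feature dimensions and data are placeholders, not values taken from the thesis.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Placeholder data: 500 paired documents with 512-d image features and
# 300-d text features (all dimensions here are assumptions).
rng = np.random.default_rng(0)
image_feats = rng.normal(size=(500, 512))
text_feats = rng.normal(size=(500, 300))

# Learn a 64-dimensional common representation space from the paired examples.
cca = CCA(n_components=64, max_iter=1000)
cca.fit(image_feats, text_feats)
img_proj, txt_proj = cca.transform(image_feats, text_feats)

def cosine_sim(a, b):
    """Cosine similarity between every row of a and every row of b."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

# Cross-modal retrieval: rank all texts for the first image by similarity in the
# joint space; items that project poorly into this space are exactly the
# "poorly represented" cases the thesis singles out.
scores = cosine_sim(img_proj[:1], txt_proj)
best_text_index = int(scores.argmax())
```

With random placeholder features the retrieved index is meaningless; the point is only the shape of the pipeline: fit on paired data, project both modalities into one space, then compare them directly.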
Tran, Brandon Vanhuy. „Building and using robust representations in image classification“. Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/127912.
Der volle Inhalt der QuelleCataloged from the official PDF of thesis.
Includes bibliographical references (pages 115-131).
One of the major appeals of the deep learning paradigm is the ability to learn high-level feature representations of complex data. These learned representations obviate manual data pre-processing, and are versatile enough to generalize across tasks. However, they are not yet capable of fully capturing abstract, meaningful features of the data. For instance, the pervasiveness of adversarial examples--small perturbations of correctly classified inputs causing model misclassification--is a prominent indication of such shortcomings. The goal of this thesis is to work towards building learned representations that are more robust and human-aligned. To achieve this, we turn to adversarial (or robust) training, an optimization technique for training networks less prone to adversarial inputs. Typically, robust training is studied purely in the context of machine learning security (as a safeguard against adversarial examples)--in contrast, we will cast it as a means of enforcing an additional prior onto the model. Specifically, it has been noticed that, in a similar manner to the well-known convolutional or recurrent priors, the robust prior serves as a "bias" that restricts the features models can use in classification--it does not allow for any features that change upon small perturbations. We find that the addition of this simple prior enables a number of downstream applications, from feature visualization and manipulation to input interpolation and image synthesis. Most importantly, robust training provides a simple way of interpreting and understanding model decisions. Besides diagnosing incorrect classification, this also has consequences in the so-called "data poisoning" setting, where an adversary corrupts training samples with the hope of causing misbehaviour in the resulting model. We find that in many cases, the prior arising from robust training significantly helps in detecting data poisoning.
by Brandon Vanhuy Tran.
Ph. D.
Ph.D. Massachusetts Institute of Technology, Department of Mathematics
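The abstract above casts adversarial (robust) training as an extra prior on the learned representation. As a rough point of reference, here is a minimal PGD-style adversarial-training step in PyTorch following the standard recipe; the epsilon, step size, and number of steps are common defaults assumed here, not values taken from the thesis.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=7):
    """Craft L-infinity bounded adversarial examples with projected gradient descent."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def robust_training_step(model, optimizer, x, y):
    """One adversarial-training step: fit the model on worst-case perturbed inputs,
    which implicitly disallows features that flip under small perturbations."""
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Training only on the attacked inputs is the "robust prior" reading: whatever features survive the inner maximization are, by construction, stable under small perturbations.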
Parekh, Sanjeel. „Learning representations for robust audio-visual scene analysis“. Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLT015/document.
The goal of this thesis is to design algorithms that enable robust detection of objects and events in videos through joint audio-visual analysis. This is motivated by humans' remarkable ability to meaningfully integrate auditory and visual characteristics for perception in noisy scenarios. To this end, we identify two kinds of natural associations between the modalities in recordings made using a single microphone and camera, namely motion-audio correlation and appearance-audio co-occurrence. For the former, we use audio source separation as the primary application and propose two novel methods within the popular non-negative matrix factorization framework. The central idea is to utilize the temporal correlation between audio and motion for objects/actions where the sound-producing motion is visible. The first proposed method focuses on soft coupling between audio and motion representations capturing temporal variations, while the second is based on cross-modal regression. We segregate several challenging audio mixtures of string instruments into their constituent sources using these approaches. To identify and extract many commonly encountered objects, we leverage appearance–audio co-occurrence in large datasets. This complementary association mechanism is particularly useful for objects where motion-based correlations are not visible or available. The problem is dealt with in a weakly-supervised setting wherein we design a representation learning framework for robust AV event classification, visual object localization, audio event detection and source separation. We extensively test the proposed ideas on publicly available datasets. The experiments demonstrate several intuitive multimodal phenomena that humans utilize on a regular basis for robust scene understanding.
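For orientation on the non-negative matrix factorization framework the abstract mentions, here is a minimal audio-only NMF separation sketch using librosa and scikit-learn. It is not the audio-visual method of the thesis: the file name, STFT settings, and component count are assumptions, and the grouping of components into a source is done by hand here, which is precisely the step the thesis automates by coupling the activations with visual motion.

```python
import numpy as np
import librosa
from sklearn.decomposition import NMF

# Load a mono mixture (placeholder path) and compute its complex STFT.
y, sr = librosa.load("mixture.wav", sr=16000, mono=True)
stft = librosa.stft(y, n_fft=1024, hop_length=256)
S = np.abs(stft)                              # magnitude spectrogram (freq x frames)

# Factorize |S| ~= W @ H into non-negative spectral templates W and activations H.
nmf = NMF(n_components=8, init="nndsvda", beta_loss="kullback-leibler",
          solver="mu", max_iter=400)
W = nmf.fit_transform(S)                      # (freq_bins, components)
H = nmf.components_                           # (components, frames)

# Rebuild one "source" by soft-masking with a hand-picked subset of components
# and reusing the mixture phase, then invert back to a waveform.
idx = [0, 1, 2]                               # assumed assignment of components
mask = (W[:, idx] @ H[idx, :]) / (W @ H + 1e-8)
source = librosa.istft(mask * stft, hop_length=256)
```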
Herdtweck, Christian [author], and Heinrich [academic supervisor] Bülthoff. „Learning Data-Driven Representations for Robust Monocular Computer Vision Applications / Christian Herdtweck ; Betreuer: Heinrich Bülthoff“. Tübingen: Universitätsbibliothek Tübingen, 2014. http://d-nb.info/1162897317/34.
Xu, Guanglin. „Optimization under uncertainty: conic programming representations, relaxations, and approximations“. Diss., University of Iowa, 2017. https://ir.uiowa.edu/etd/5881.
Der volle Inhalt der QuelleBarbano, Carlo Alberto Maria. „Collateral-Free Learning of Deep Representations : From Natural Images to Biomedical Applications“. Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT038.
Deep Learning (DL) has become one of the predominant tools for solving a variety of tasks, often with superior performance compared to previous state-of-the-art methods. DL models are often able to learn meaningful and abstract representations of the underlying data. However, it has been shown that they might also learn additional features which are not necessarily relevant or required for the desired task. This could pose a number of issues, as this additional information can contain bias, noise, or sensitive information that should not be taken into account by the model (e.g. gender, race, age). We refer to this information as collateral. The presence of collateral information translates into practical issues when deploying DL-based pipelines, especially if they involve private users' data. Learning robust representations that are free of collateral information can be highly relevant for a variety of fields and applications, like medical applications and decision support systems. In this thesis, we introduce the concept of Collateral Learning, which refers to all those instances in which a model learns more information than intended. The aim of Collateral Learning is to bridge the gap between different fields in DL, such as robustness, debiasing, generalization in medical imaging, and privacy preservation. We propose different methods for achieving robust representations free of collateral information. Some of our contributions are based on regularization techniques, while others are represented by novel loss functions. In the first part of the thesis, we lay the foundations of our work by developing techniques for robust representation learning on natural images. We focus on one of the most important instances of Collateral Learning, namely biased data. Specifically, we focus on Contrastive Learning (CL), and we propose a unified metric learning framework that allows us both to easily analyze existing loss functions and to derive novel ones. Here, we propose a novel supervised contrastive loss function, ε-SupInfoNCE, and two debiasing regularization techniques, EnD and FairKL, that achieve state-of-the-art performance on a number of standard vision classification and debiasing benchmarks. In the second part of the thesis, we focus on Collateral Learning in medical imaging, specifically on neuroimaging and chest X-ray images. For neuroimaging, we present a novel contrastive learning approach for brain age estimation. Our approach achieves state-of-the-art results on the OpenBHB dataset for age regression and shows increased robustness to the site effect. We also leverage this method to detect unhealthy brain aging patterns, showing promising results in the classification of brain conditions such as Mild Cognitive Impairment (MCI) and Alzheimer's Disease (AD). For chest X-ray images (CXR), we target Covid-19 classification, showing how Collateral Learning can effectively hinder the reliability of such models. To tackle this issue, we propose a transfer learning approach that, combined with our regularization techniques, shows promising results on an original multi-site CXR dataset. Finally, we provide some hints about Collateral Learning and privacy preservation in DL models. We show that some of our proposed methods can be effective in preventing certain information from being learned by the model, thus avoiding potential data leakage.
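The contrastive-learning part of this thesis builds on a supervised contrastive formulation. As a hedged reference point, here is a plain SupCon-style loss in PyTorch; it is a generic version for illustration, not the ε-SupInfoNCE loss or the EnD/FairKL debiasing regularizers proposed in the thesis, and the temperature is an assumed default.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(features, labels, temperature=0.1):
    """Plain supervised contrastive (SupCon-style) loss over a batch of embeddings."""
    z = F.normalize(features, dim=1)                     # (N, D) unit-norm embeddings
    sim = z @ z.t() / temperature                        # pairwise similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask

    # Log-probability of each candidate for every anchor, excluding the anchor itself.
    sim = sim.masked_fill(self_mask, float("-inf"))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    # Average log-probability of same-class positives, skipping anchors without any.
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0
    sum_pos = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)
    return -(sum_pos[valid] / pos_counts[valid]).mean()

# Hypothetical usage with some encoder producing embeddings for a labeled batch:
# loss = supervised_contrastive_loss(encoder(images), class_labels)
```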
Terzi, Matteo. „Learning interpretable representations for classification, anomaly detection, human gesture and action recognition“. Doctoral thesis, Università degli studi di Padova, 2019. http://hdl.handle.net/11577/3423183.
山本, 有作, and Yusaku Yamamoto. „密行列固有値解法の最近の発展(I) : Multiple Relatively Robust Representationsアルゴリズム“ [Recent developments in dense-matrix eigenvalue solvers (I): the Multiple Relatively Robust Representations algorithm]. 日本応用数理学会 (The Japan Society for Industrial and Applied Mathematics), 2005. http://hdl.handle.net/2237/10838.
Books on the topic "Robust Representations"
Li, Sheng, and Yun Fu. Robust Representation for Data Analytics. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-60176-2.
Del Pobil, Angel Pasqual, and Miguel Angel Serna, eds. Spatial representation and motion planning. Berlin: Springer-Verlag, 1995.
Velde, Walter Van de, ed. Toward learning robots. Cambridge, Mass.: MIT Press, 1993.
Segre, Alberto Maria. Machine learning of robot assembly plans. Boston: Kluwer Academic Publishers, 1988.
Burhans, Robert L. Robert L. Burhans. Springfield, Ill: The University, 1987.
Wallgrün, Jan Oliver. Hierarchical Voronoi graphs: Spatial representation and reasoning for mobile robots. Heidelberg: Springer, 2010.
Mullane, John. Random Finite Sets for Robot Mapping and SLAM: New Concepts in Autonomous Robotic Map Representations. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011.
Wolter, Diedrich. Spatial representation and reasoning for robot mapping: A shape-based approach. Berlin: Springer, 2008.
Heikkonen, Jukka. Subsymbolic representations, self-organizing maps, and object motion learning. Lappeenranta, Finland: Lappeenranta University of Technology, 1994.
Boswell, Sharon A., and Washington State Oral History Program, eds. Robert F. Goldsworthy: An oral history. Olympia: Washington State Oral History Program, Office of the Secretary of State, 1999.
Book chapters on the topic "Robust Representations"
Li, Sheng, and Yun Fu. „Fundamentals of Robust Representations“. In Advanced Information and Knowledge Processing, 9–16. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-60176-2_2.
Li, Sheng, and Yun Fu. „Robust Representations for Collaborative Filtering“. In Advanced Information and Knowledge Processing, 123–46. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-60176-2_7.
Li, Sheng, and Yun Fu. „Robust Representations for Response Prediction“. In Advanced Information and Knowledge Processing, 147–74. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-60176-2_8.
Li, Sheng, and Yun Fu. „Robust Representations for Outlier Detection“. In Advanced Information and Knowledge Processing, 175–201. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-60176-2_9.
Li, Sheng, and Yun Fu. „Robust Representations for Person Re-identification“. In Advanced Information and Knowledge Processing, 203–22. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-60176-2_10.
Li, Hengjian, Lianhai Wang, and Zutao Zhang. „Robust Palmprint Recognition Based on Directional Representations“. In Intelligent Information Processing VI, 372–81. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-32891-6_46.
Som, Anirudh, Kowshik Thopalli, Karthikeyan Natesan Ramamurthy, Vinay Venkataraman, Ankita Shukla, and Pavan Turaga. „Perturbation Robust Representations of Topological Persistence Diagrams“. In Computer Vision – ECCV 2018, 638–59. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01234-2_38.
Vemuri, Baba C., Jundong Liu, and José L. Marroquin. „Robust Multimodal Image Registration Using Local Frequency Representations“. In Lecture Notes in Computer Science, 176–82. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-45729-1_17.
Wallraven, Christian, and Heinrich Bülthoff. „Acquiring Robust Representations for Recognition from Image Sequences“. In Lecture Notes in Computer Science, 216–22. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-45404-7_29.
Chen, Xiang, Xin Xie, Zhen Bi, Hongbin Ye, Shumin Deng, Ningyu Zhang, and Huajun Chen. „Disentangled Contrastive Learning for Learning Robust Textual Representations“. In Artificial Intelligence, 215–26. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-93049-3_18.
Der volle Inhalt der QuelleKonferenzberichte zum Thema "Robust Representations"
Li, Yitong, Trevor Cohn, and Timothy Baldwin. „Learning Robust Representations of Text“. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA, USA: Association for Computational Linguistics, 2016. http://dx.doi.org/10.18653/v1/d16-1207.
Zhang, Yue, and Xiafei Lei. „Deeply learned electrocardiogram representations are robust“. In 2017 13th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD). IEEE, 2017. http://dx.doi.org/10.1109/fskd.2017.8393130.
Smyth, Aidan, Niall Lyons, Ted Wada, Robert Zopf, Ashutosh Pandey, and Avik Santra. „Robust Representations for Keyword Spotting Systems“. In 2022 26th International Conference on Pattern Recognition (ICPR). IEEE, 2022. http://dx.doi.org/10.1109/icpr56361.2022.9956211.
Zheng, Peng, Aleksandr Y. Aravkin, Jayaraman Jayaraman Thiagarajan, and Karthikeyan Natesan Ramamurthy. „Learning Robust Representations for Computer Vision“. In 2017 IEEE International Conference on Computer Vision Workshop (ICCVW). IEEE, 2017. http://dx.doi.org/10.1109/iccvw.2017.211.
Müller, Thomas, and Hinrich Schuetze. „Robust Morphological Tagging with Word Representations“. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg, PA, USA: Association for Computational Linguistics, 2015. http://dx.doi.org/10.3115/v1/n15-1055.
Kawakami, Kazuya, Luyu Wang, Chris Dyer, Phil Blunsom, and Aaron van den Oord. „Learning Robust and Multilingual Speech Representations“. In Findings of the Association for Computational Linguistics: EMNLP 2020. Stroudsburg, PA, USA: Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.findings-emnlp.106.
Li, Xiumei, and Guoan Bi. „Reassignment methods for robust time-frequency representations“. In Signal Processing (ICICS). IEEE, 2009. http://dx.doi.org/10.1109/icics.2009.5397517.
Li, Yitong, Timothy Baldwin, and Trevor Cohn. „Towards Robust and Privacy-preserving Text Representations“. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Stroudsburg, PA, USA: Association for Computational Linguistics, 2018. http://dx.doi.org/10.18653/v1/p18-2005.
Mahabal, Abhijit, Dan Roth, and Sid Mittal. „Robust Handling of Polysemy via Sparse Representations“. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics. Stroudsburg, PA, USA: Association for Computational Linguistics, 2018. http://dx.doi.org/10.18653/v1/s18-2031.
King, Brian, I.-Fan Chen, Yonatan Vaizman, Yuzong Liu, Roland Maas, Sree Hari Krishnan Parthasarathi, and Björn Hoffmeister. „Robust Speech Recognition via Anchor Word Representations“. In Interspeech 2017. ISCA: ISCA, 2017. http://dx.doi.org/10.21437/interspeech.2017-1570.
Organization reports on the topic "Robust Representations"
Sznaier, Mario. Multiobject Robust Control of Nonlinear Systems via State Dependent Coefficient Representations and Applications. Fort Belvoir, VA: Defense Technical Information Center, January 2000. http://dx.doi.org/10.21236/ada419042.
Barraquand, Jerome, and Jean-Claude Latombe. Robot Motion Planning: A Distributed Representation Approach. Fort Belvoir, VA: Defense Technical Information Center, May 1989. http://dx.doi.org/10.21236/ada209890.
Feeney, Patricia, Matthias Liffers, Estelle Cheng, and Paul Vierkant. Better Together: Complete Metadata as Robust Infrastructure. Crossref, November 2022. http://dx.doi.org/10.13003/m3237yt.
Bowyer, Kevin. Development of the Aspect Graph Representation for Use in Robot Vision. Fort Belvoir, VA: Defense Technical Information Center, October 1991. http://dx.doi.org/10.21236/ada247109.
Boyer, Marcel. Comments on competition policy and labour markets. CIRANO, February 2022. http://dx.doi.org/10.54932/iqio1721.
Ruvinsky, Alicia, Maria Seale, R. Salter, and Natàlia Garcia-Reyero. An ontology for an epigenetics approach to prognostics and health management. Engineer Research and Development Center (U.S.), March 2023. http://dx.doi.org/10.21079/11681/46632.
Balfour, Lindsay, Adrienne Evans, Marcus Maloney, and Sarah Merry. Postdigital Intimacies for Online Safety. Coventry University, May 2023. http://dx.doi.org/10.18552/pdc/2023/0001.
Zio, Enrico, and Nicola Pedroni. Literature review of methods for representing uncertainty. Fondation pour une culture de sécurité industrielle, December 2013. http://dx.doi.org/10.57071/124ure.
Бережна, Маргарита Василівна. Maleficent: from the Matriarch to the Scorned Woman (Psycholinguistic Image). Baltija Publishing, 2021. http://dx.doi.org/10.31812/123456789/5766.
Shukla, Indu, Rajeev Agrawal, Kelly Ervin, and Jonathan Boone. AI on digital twin of facility captured by reality scans. Engineer Research and Development Center (U.S.), November 2023. http://dx.doi.org/10.21079/11681/47850.