A ready-made bibliography on the topic "Robust Representations"
Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles
See the lists of recent articles, books, theses, conference papers, and other scholarly sources on the topic "Robust Representations".
Next to every work in the bibliography there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the publication as a .pdf file and read its abstract online, whenever these details are available in the source's metadata.
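For illustration, the sketch below shows roughly what such automatic formatting amounts to, using the first journal entry from the list that follows. It is a hand-written approximation with hypothetical record fields and helper functions, not the service's actual API.

```python
# Hypothetical sketch of automatic citation formatting; the record layout and
# the two formatters are illustrative assumptions, not the site's actual API.

record = {
    "authors": ["Kuo, Yen-Ling"],
    "title": "Learning Representations for Robust Human-Robot Interaction",
    "journal": "Proceedings of the AAAI Conference on Artificial Intelligence",
    "volume": 38,
    "issue": 20,
    "year": 2024,
    "pages": "22673",
    "doi": "10.1609/aaai.v38i20.30289",
}

def format_apa(r):
    """APA-like: Author (Year). Title. Journal, Volume(Issue), Pages."""
    return (f'{"; ".join(r["authors"])} ({r["year"]}). {r["title"]}. '
            f'{r["journal"]}, {r["volume"]}({r["issue"]}), {r["pages"]}. '
            f'https://doi.org/{r["doi"]}')

def format_mla(r):
    """MLA-like: Author. "Title." Journal, vol. V, no. N, Year, p. P."""
    return (f'{", and ".join(r["authors"])}. "{r["title"]}." {r["journal"]}, '
            f'vol. {r["volume"]}, no. {r["issue"]}, {r["year"]}, p. {r["pages"]}.')

print(format_apa(record))
print(format_mla(record))
```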
Journal articles on the topic "Robust Representations"
Kuo, Yen-Ling. "Learning Representations for Robust Human-Robot Interaction". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 20 (March 24, 2024): 22673. http://dx.doi.org/10.1609/aaai.v38i20.30289.
Yang, Shuo, Tianyu Guo, Yunhe Wang, and Chang Xu. "Adversarial Robustness through Disentangled Representations". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 4 (May 18, 2021): 3145–53. http://dx.doi.org/10.1609/aaai.v35i4.16424.
Iddianozie, Chidubem, and Gavin McArdle. "Towards Robust Representations of Spatial Networks Using Graph Neural Networks". Applied Sciences 11, no. 15 (July 27, 2021): 6918. http://dx.doi.org/10.3390/app11156918.
Vu, Hung, Tu Dinh Nguyen, Trung Le, Wei Luo, and Dinh Phung. "Robust Anomaly Detection in Videos Using Multilevel Representations". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5216–23. http://dx.doi.org/10.1609/aaai.v33i01.33015216.
Ho, Edward Kei Shiu, and Lai Wan Chan. "Analyzing Holistic Parsers: Implications for Robust Parsing and Systematicity". Neural Computation 13, no. 5 (May 1, 2001): 1137–70. http://dx.doi.org/10.1162/08997660151134361.
Yang, Qing, Jun Chen, and Najla Al-Nabhan. "Data representation using robust nonnegative matrix factorization for edge computing". Mathematical Biosciences and Engineering 19, no. 2 (2021): 2147–78. http://dx.doi.org/10.3934/mbe.2022100.
Parlett, Beresford N., and Inderjit S. Dhillon. "Relatively robust representations of symmetric tridiagonals". Linear Algebra and its Applications 309, no. 1-3 (April 2000): 121–51. http://dx.doi.org/10.1016/s0024-3795(99)00262-1.
Medina, Josep R., and Carlos R. Sanchez-Carratala. "Robust AR Representations of Ocean Spectra". Journal of Engineering Mechanics 117, no. 12 (December 1991): 2926–30. http://dx.doi.org/10.1061/(asce)0733-9399(1991)117:12(2926).
Higashi, Masatake, Fuyuki Torihara, Nobuhiro Takeuchi, Toshio Sata, Tsuyoshi Saitoh, and Mamoru Hosaka. "Robust algorithms for face-based representations". Computer-Aided Design 29, no. 2 (February 1997): 135–46. http://dx.doi.org/10.1016/s0010-4485(96)00042-5.
Rostami, Mohammad. "Internal Robust Representations for Domain Generalization". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (June 26, 2023): 15451. http://dx.doi.org/10.1609/aaai.v37i13.26818.
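The Parlett and Dhillon article above is the basis of the MRRR (Multiple Relatively Robust Representations) tridiagonal eigensolver, shipped in LAPACK as the ?stemr routines. A minimal sketch of calling it, assuming SciPy is installed:

```python
# Eigenvalues of a symmetric tridiagonal matrix via the MRRR-based LAPACK
# routine; a minimal sketch assuming SciPy is available.
import numpy as np
from scipy.linalg import eigh_tridiagonal

d = np.array([2.0, 2.0, 2.0, 2.0])  # main diagonal
e = np.array([-1.0, -1.0, -1.0])    # sub/superdiagonal

# lapack_driver='stemr' selects the MRRR-based routine
w, v = eigh_tridiagonal(d, e, lapack_driver='stemr')
print(w)  # eigenvalues; v holds the orthogonal eigenvectors
```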
Pełny tekst źródłaRozprawy doktorskie na temat "Robust Representations"
Tran, Thi Quynh Nhi. "Robust and comprehensive joint image-text representations". Thesis, Paris, CNAM, 2017. http://www.theses.fr/2017CNAM1096/document.
This thesis investigates the joint modeling of visual and textual content of multimedia documents to address cross-modal problems. Such tasks require the ability to match information across modalities. A common representation space, obtained by e.g. Kernel Canonical Correlation Analysis, on which images and text can be both represented and directly compared, is a generally adopted solution. Nevertheless, such a joint space still suffers from several deficiencies that may hinder the performance of cross-modal tasks. An important contribution of this thesis is therefore to identify two major limitations of such a space. The first limitation concerns information that is poorly represented on the common space yet very significant for a retrieval task. The second limitation consists in a separation between modalities on the common space, which leads to coarse cross-modal matching. To deal with the first limitation concerning poorly-represented data, we put forward a model which first identifies such information and then finds ways to combine it with data that is relatively well-represented on the joint space. Evaluations on "text illustration" tasks show that by appropriately identifying and taking such information into account, the results of cross-modal retrieval can be strongly improved. The major work in this thesis aims to cope with the separation between modalities on the joint space to enhance the performance of cross-modal tasks. We propose two representation methods for bi-modal or uni-modal documents that aggregate information from both the visual and textual modalities projected on the joint space. Specifically, for uni-modal documents we suggest a completion process relying on an auxiliary dataset to find the corresponding information in the absent modality and then use such information to build a final bi-modal representation for a uni-modal document. Evaluations show that our approaches achieve state-of-the-art results on several standard and challenging datasets for cross-modal retrieval and bi-modal and cross-modal classification.
Tran, Brandon Vanhuy. "Building and using robust representations in image classification". Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/127912.
Cataloged from the official PDF of the thesis.
Includes bibliographical references (pages 115-131).
One of the major appeals of the deep learning paradigm is the ability to learn high-level feature representations of complex data. These learned representations obviate manual data pre-processing, and are versatile enough to generalize across tasks. However, they are not yet capable of fully capturing abstract, meaningful features of the data. For instance, the pervasiveness of adversarial examples--small perturbations of correctly classified inputs causing model misclassification--is a prominent indication of such shortcomings. The goal of this thesis is to work towards building learned representations that are more robust and human-aligned. To achieve this, we turn to adversarial (or robust) training, an optimization technique for training networks less prone to adversarial inputs. Typically, robust training is studied purely in the context of machine learning security (as a safeguard against adversarial examples)--in contrast, we will cast it as a means of enforcing an additional prior onto the model. Specifically, it has been noticed that, in a similar manner to the well-known convolutional or recurrent priors, the robust prior serves as a "bias" that restricts the features models can use in classification--it does not allow for any features that change upon small perturbations. We find that the addition of this simple prior enables a number of downstream applications, from feature visualization and manipulation to input interpolation and image synthesis. Most importantly, robust training provides a simple way of interpreting and understanding model decisions. Besides diagnosing incorrect classification, this also has consequences in the so-called "data poisoning" setting, where an adversary corrupts training samples with the hope of causing misbehaviour in the resulting model. We find that in many cases, the prior arising from robust training significantly helps in detecting data poisoning.
Ph.D. thesis by Brandon Vanhuy Tran, Massachusetts Institute of Technology, Department of Mathematics.
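The robust (adversarial) training that the abstract above treats as an added prior is typically implemented as a min-max loop: an inner maximization finds a worst-case perturbation, and the outer step trains on it. A minimal sketch, assuming PyTorch, with a toy model and random data rather than anything from the thesis:

```python
# PGD-style adversarial training step: train on worst-case perturbations
# inside an eps-ball instead of on clean inputs. Illustrative sketch only.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()
x, y = torch.randn(16, 10), torch.randint(0, 2, (16,))

eps, step, k = 0.1, 0.02, 5               # budget, PGD step size, PGD iterations
delta = torch.zeros_like(x, requires_grad=True)
for _ in range(k):                        # inner maximization: find a bad perturbation
    loss_fn(model(x + delta), y).backward()
    with torch.no_grad():
        delta += step * delta.grad.sign() # gradient-ascent step on the input
        delta.clamp_(-eps, eps)           # project back into the eps-ball
    delta.grad.zero_()

opt.zero_grad()                           # outer minimization: fit the perturbed batch
loss_fn(model(x + delta.detach()), y).backward()
opt.step()
```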
Parekh, Sanjeel. "Learning representations for robust audio-visual scene analysis". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLT015/document.
The goal of this thesis is to design algorithms that enable robust detection of objects and events in videos through joint audio-visual analysis. This is motivated by humans' remarkable ability to meaningfully integrate auditory and visual characteristics for perception in noisy scenarios. To this end, we identify two kinds of natural associations between the modalities in recordings made using a single microphone and camera, namely motion-audio correlation and appearance-audio co-occurrence. For the former, we use audio source separation as the primary application and propose two novel methods within the popular non-negative matrix factorization framework. The central idea is to utilize the temporal correlation between audio and motion for objects/actions where the sound-producing motion is visible. The first proposed method focuses on soft coupling between audio and motion representations capturing temporal variations, while the second is based on cross-modal regression. We segregate several challenging audio mixtures of string instruments into their constituent sources using these approaches. To identify and extract many commonly encountered objects, we leverage appearance-audio co-occurrence in large datasets. This complementary association mechanism is particularly useful for objects where motion-based correlations are not visible or available. The problem is dealt with in a weakly-supervised setting wherein we design a representation learning framework for robust AV event classification, visual object localization, audio event detection and source separation. We extensively test the proposed ideas on publicly available datasets. The experiments demonstrate several intuitive multimodal phenomena that humans utilize on a regular basis for robust scene understanding.
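The non-negative matrix factorization framework the abstract refers to decomposes a non-negative matrix V (e.g., a magnitude spectrogram) into spectral templates W and activations H. A minimal NumPy sketch of the classic multiplicative updates (Lee-Seung, Euclidean loss), shown only as background for the methods described, not as the thesis's code:

```python
# Non-negative matrix factorization core: V ≈ W @ H via multiplicative
# updates (Lee-Seung, Euclidean loss). Background illustration only.
import numpy as np

rng = np.random.default_rng(0)
V = rng.random((64, 100))                    # e.g. a magnitude spectrogram (freq x time)
r = 8                                        # number of components
W = rng.random((64, r))                      # spectral templates
H = rng.random((r, 100))                     # temporal activations

for _ in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-12)   # update activations
    W *= (V @ H.T) / (W @ H @ H.T + 1e-12)   # update templates

print(np.linalg.norm(V - W @ H))             # reconstruction error shrinks over iterations
```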
Herdtweck, Christian [author], and Heinrich Bülthoff [academic supervisor]. "Learning Data-Driven Representations for Robust Monocular Computer Vision Applications / Christian Herdtweck ; Betreuer: Heinrich Bülthoff". Tübingen: Universitätsbibliothek Tübingen, 2014. http://d-nb.info/1162897317/34.
Xu, Guanglin. "Optimization under uncertainty: conic programming representations, relaxations, and approximations". Diss., University of Iowa, 2017. https://ir.uiowa.edu/etd/5881.
Barbano, Carlo Alberto Maria. "Collateral-Free Learning of Deep Representations: From Natural Images to Biomedical Applications". Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT038.
Deep Learning (DL) has become one of the predominant tools for solving a variety of tasks, often with superior performance compared to previous state-of-the-art methods. DL models are often able to learn meaningful and abstract representations of the underlying data. However, it has been shown that they might also learn additional features which are not necessarily relevant or required for the desired task. This can pose a number of issues, as the additional information may contain bias, noise, or sensitive attributes that should not be taken into account by the model (e.g. gender, race, age, etc.). We refer to this information as collateral. The presence of collateral information translates into practical issues when deploying DL-based pipelines, especially if they involve private users' data. Learning robust representations that are free of collateral information can be highly relevant for a variety of fields and applications, like medical applications and decision support systems. In this thesis, we introduce the concept of Collateral Learning, which refers to all those instances in which a model learns more information than intended. The aim of Collateral Learning is to bridge the gap between different fields in DL, such as robustness, debiasing, generalization in medical imaging, and privacy preservation. We propose different methods for achieving robust representations free of collateral information. Some of our contributions are based on regularization techniques, while others are novel loss functions. In the first part of the thesis, we lay the foundations of our work by developing techniques for robust representation learning on natural images. We focus on one of the most important instances of Collateral Learning, namely biased data. Specifically, we focus on Contrastive Learning (CL) and propose a unified metric learning framework that allows us both to easily analyze existing loss functions and to derive novel ones. Here, we propose a novel supervised contrastive loss function, ε-SupInfoNCE, and two debiasing regularization techniques, EnD and FairKL, that achieve state-of-the-art performance on a number of standard vision classification and debiasing benchmarks. In the second part of the thesis, we focus on Collateral Learning in medical imaging, specifically on neuroimaging and chest X-ray images. For neuroimaging, we present a novel contrastive learning approach for brain age estimation. Our approach achieves state-of-the-art results on the OpenBHB dataset for age regression and shows increased robustness to the site effect. We also leverage this method to detect unhealthy brain aging patterns, showing promising results in the classification of brain conditions such as Mild Cognitive Impairment (MCI) and Alzheimer's Disease (AD). For chest X-ray images (CXR), we target Covid-19 classification, showing how Collateral Learning can effectively hinder the reliability of such models. To tackle this issue, we propose a transfer learning approach that, combined with our regularization techniques, shows promising results on an original multi-site CXR dataset. Finally, we provide some hints about Collateral Learning and privacy preservation in DL models. We show that some of our proposed methods can be effective in preventing certain information from being learned by the model, thus avoiding potential data leakage.
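As background for the contrastive losses the abstract mentions, the sketch below implements a generic supervised contrastive (InfoNCE-style) objective, assuming PyTorch; it is an illustrative baseline, not the proposed ε-SupInfoNCE:

```python
# Generic supervised contrastive loss: pull same-label embeddings together,
# push different-label embeddings apart. Illustrative sketch only.
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(z, labels, tau=0.1):
    z = F.normalize(z, dim=1)                        # embeddings on the unit sphere
    sim = z @ z.T / tau                              # temperature-scaled cosine similarities
    self_mask = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf')) # exclude each sample from its own softmax
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    masked = torch.where(pos, log_prob, torch.zeros_like(log_prob))
    per_anchor = masked.sum(1) / pos.sum(1).clamp(min=1)  # mean log-prob of positives
    return -per_anchor[pos.any(1)].mean()            # anchors with at least one positive

z = torch.randn(8, 16, requires_grad=True)           # toy embeddings
labels = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
supervised_contrastive_loss(z, labels).backward()
```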
Terzi, Matteo. "Learning interpretable representations for classification, anomaly detection, human gesture and action recognition". Doctoral thesis, Università degli studi di Padova, 2019. http://hdl.handle.net/11577/3423183.
Yamamoto, Yusaku (山本有作). "密行列固有値解法の最近の発展(I): Multiple Relatively Robust Representationsアルゴリズム" [Recent developments in dense-matrix eigenvalue solvers (I): the Multiple Relatively Robust Representations algorithm; in Japanese]. 日本応用数理学会 (The Japan Society for Industrial and Applied Mathematics), 2005. http://hdl.handle.net/2237/10838.
Pełny tekst źródłaKsiążki na temat "Robust Representations"
Li, Sheng, and Yun Fu. Robust Representation for Data Analytics. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-60176-2.
Del Pobil, Angel Pasqual, and Miguel Angel Serna, eds. Spatial representation and motion planning. Berlin: Springer-Verlag, 1995.
Van de Velde, Walter, ed. Toward learning robots. Cambridge, Mass.: MIT Press, 1993.
Machine learning of robot assembly plans. Boston: Kluwer Academic Publishers, 1988.
Burhans, Robert L. Robert L. Burhans. Springfield, Ill.: The University, 1987.
Wallgrün, Jan Oliver. Hierarchical Voronoi graphs: Spatial representation and reasoning for mobile robots. Heidelberg: Springer, 2010.
Mullane, John. Random Finite Sets for Robot Mapping and SLAM: New Concepts in Autonomous Robotic Map Representations. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011.
Wolter, Diedrich. Spatial representation and reasoning for robot mapping: A shape-based approach. Berlin: Springer, 2008.
Heikkonen, Jukka. Subsymbolic representations, self-organizing maps, and object motion learning. Lappeenranta, Finland: Lappeenranta University of Technology, 1994.
Boswell, Sharon A., and Washington State Oral History Program, eds. Robert F. Goldsworthy: An oral history. Olympia: Washington State Oral History Program, Office of the Secretary of State, 1999.
Znajdź pełny tekst źródłaCzęści książek na temat "Robust Representations"
Li, Sheng, and Yun Fu. "Fundamentals of Robust Representations". In Advanced Information and Knowledge Processing, 9–16. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-60176-2_2.
Li, Sheng, and Yun Fu. "Robust Representations for Collaborative Filtering". In Advanced Information and Knowledge Processing, 123–46. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-60176-2_7.
Li, Sheng, and Yun Fu. "Robust Representations for Response Prediction". In Advanced Information and Knowledge Processing, 147–74. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-60176-2_8.
Li, Sheng, and Yun Fu. "Robust Representations for Outlier Detection". In Advanced Information and Knowledge Processing, 175–201. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-60176-2_9.
Li, Sheng, and Yun Fu. "Robust Representations for Person Re-identification". In Advanced Information and Knowledge Processing, 203–22. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-60176-2_10.
Li, Hengjian, Lianhai Wang, and Zutao Zhang. "Robust Palmprint Recognition Based on Directional Representations". In Intelligent Information Processing VI, 372–81. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-32891-6_46.
Som, Anirudh, Kowshik Thopalli, Karthikeyan Natesan Ramamurthy, Vinay Venkataraman, Ankita Shukla, and Pavan Turaga. "Perturbation Robust Representations of Topological Persistence Diagrams". In Computer Vision – ECCV 2018, 638–59. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01234-2_38.
Vemuri, Baba C., Jundong Liu, and José L. Marroquin. "Robust Multimodal Image Registration Using Local Frequency Representations". In Lecture Notes in Computer Science, 176–82. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-45729-1_17.
Wallraven, Christian, and Heinrich Bülthoff. "Acquiring Robust Representations for Recognition from Image Sequences". In Lecture Notes in Computer Science, 216–22. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-45404-7_29.
Chen, Xiang, Xin Xie, Zhen Bi, Hongbin Ye, Shumin Deng, Ningyu Zhang, and Huajun Chen. "Disentangled Contrastive Learning for Learning Robust Textual Representations". In Artificial Intelligence, 215–26. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-93049-3_18.
Pełny tekst źródłaStreszczenia konferencji na temat "Robust Representations"
Li, Yitong, Trevor Cohn, and Timothy Baldwin. "Learning Robust Representations of Text". In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA, USA: Association for Computational Linguistics, 2016. http://dx.doi.org/10.18653/v1/d16-1207.
Zhang, Yue, and Xiafei Lei. "Deeply learned electrocardiogram representations are robust". In 2017 13th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD). IEEE, 2017. http://dx.doi.org/10.1109/fskd.2017.8393130.
Smyth, Aidan, Niall Lyons, Ted Wada, Robert Zopf, Ashutosh Pandey, and Avik Santra. "Robust Representations for Keyword Spotting Systems". In 2022 26th International Conference on Pattern Recognition (ICPR). IEEE, 2022. http://dx.doi.org/10.1109/icpr56361.2022.9956211.
Zheng, Peng, Aleksandr Y. Aravkin, Jayaraman J. Thiagarajan, and Karthikeyan Natesan Ramamurthy. "Learning Robust Representations for Computer Vision". In 2017 IEEE International Conference on Computer Vision Workshop (ICCVW). IEEE, 2017. http://dx.doi.org/10.1109/iccvw.2017.211.
Müller, Thomas, and Hinrich Schuetze. "Robust Morphological Tagging with Word Representations". In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg, PA, USA: Association for Computational Linguistics, 2015. http://dx.doi.org/10.3115/v1/n15-1055.
Kawakami, Kazuya, Luyu Wang, Chris Dyer, Phil Blunsom, and Aaron van den Oord. "Learning Robust and Multilingual Speech Representations". In Findings of the Association for Computational Linguistics: EMNLP 2020. Stroudsburg, PA, USA: Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.findings-emnlp.106.
Li, Xiumei, and Guoan Bi. "Reassignment methods for robust time-frequency representations". In Signal Processing (ICICS). IEEE, 2009. http://dx.doi.org/10.1109/icics.2009.5397517.
Li, Yitong, Timothy Baldwin, and Trevor Cohn. "Towards Robust and Privacy-preserving Text Representations". In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Stroudsburg, PA, USA: Association for Computational Linguistics, 2018. http://dx.doi.org/10.18653/v1/p18-2005.
Mahabal, Abhijit, Dan Roth, and Sid Mittal. "Robust Handling of Polysemy via Sparse Representations". In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics. Stroudsburg, PA, USA: Association for Computational Linguistics, 2018. http://dx.doi.org/10.18653/v1/s18-2031.
King, Brian, I.-Fan Chen, Yonatan Vaizman, Yuzong Liu, Roland Maas, Sree Hari Krishnan Parthasarathi, and Björn Hoffmeister. "Robust Speech Recognition via Anchor Word Representations". In Interspeech 2017. ISCA, 2017. http://dx.doi.org/10.21437/interspeech.2017-1570.
Pełny tekst źródłaRaporty organizacyjne na temat "Robust Representations"
Sznaier, Mario. Multiobject Robust Control of Nonlinear Systems via State Dependent Coefficient Representations and Applications. Fort Belvoir, VA: Defense Technical Information Center, January 2000. http://dx.doi.org/10.21236/ada419042.
Barraquand, Jerome, and Jean-Claude Latombe. Robot Motion Planning: A Distributed Representation Approach. Fort Belvoir, VA: Defense Technical Information Center, May 1989. http://dx.doi.org/10.21236/ada209890.
Feeney, Patricia, Matthias Liffers, Estelle Cheng, and Paul Vierkant. Better Together: Complete Metadata as Robust Infrastructure. Crossref, November 2022. http://dx.doi.org/10.13003/m3237yt.
Bowyer, Kevin. Development of the Aspect Graph Representation for Use in Robot Vision. Fort Belvoir, VA: Defense Technical Information Center, October 1991. http://dx.doi.org/10.21236/ada247109.
Boyer, Marcel. Comments on competition policy and labour markets. CIRANO, February 2022. http://dx.doi.org/10.54932/iqio1721.
Ruvinsky, Alicia, Maria Seale, R. Salter, and Natàlia Garcia-Reyero. An ontology for an epigenetics approach to prognostics and health management. Engineer Research and Development Center (U.S.), March 2023. http://dx.doi.org/10.21079/11681/46632.
Balfour, Lindsay, Adrienne Evans, Marcus Maloney, and Sarah Merry. Postdigital Intimacies for Online Safety. Coventry University, May 2023. http://dx.doi.org/10.18552/pdc/2023/0001.
Zio, Enrico, and Nicola Pedroni. Literature review of methods for representing uncertainty. Fondation pour une culture de sécurité industrielle, December 2013. http://dx.doi.org/10.57071/124ure.
Berezhna, Marharyta Vasylivna (Бережна, Маргарита Василівна). Maleficent: from the Matriarch to the Scorned Woman (Psycholinguistic Image). Baltija Publishing, 2021. http://dx.doi.org/10.31812/123456789/5766.
Shukla, Indu, Rajeev Agrawal, Kelly Ervin, and Jonathan Boone. AI on digital twin of facility captured by reality scans. Engineer Research and Development Center (U.S.), November 2023. http://dx.doi.org/10.21079/11681/47850.