Academic literature on the topic 'Robust Representations'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Robust Representations.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Robust Representations":

1

Kuo, Yen-Ling. "Learning Representations for Robust Human-Robot Interaction." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 20 (March 24, 2024): 22673. http://dx.doi.org/10.1609/aaai.v38i20.30289.

Abstract:
For robots to robustly and flexibly interact with humans, they need to acquire skills they can use across scenarios. One way to enable the generalization of skills is to learn representations that are useful for downstream tasks. Learning a representation for interactions requires an understanding of what to interact with (e.g., objects) as well as how to interact (e.g., actions, controls, and manners). However, most existing language or visual representations mainly focus on objects. To enable robust human-robot interactions, we need a representation that is not just grounded at the object level but can reason at the action level. The ability to reason about an agent's own actions and others' actions will be crucial for long-tail interactions. My research focuses on leveraging the compositional nature of language and reward functions to learn representations that generalize to novel scenarios. Together with information from multiple modalities, the learned representation can reason about task progress, future behaviors, and the goals/beliefs of an agent. The above ideas have been demonstrated in my research on building robots that understand language and engage in social interactions.
2

Yang, Shuo, Tianyu Guo, Yunhe Wang, and Chang Xu. "Adversarial Robustness through Disentangled Representations." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 4 (May 18, 2021): 3145–53. http://dx.doi.org/10.1609/aaai.v35i4.16424.

Abstract:
Despite the remarkable empirical performance of deep learning models, their vulnerability to adversarial examples has been revealed in many studies: they are prone to making incorrect predictions on inputs carrying imperceptible adversarial perturbations. Although recent works have remarkably improved model robustness through the adversarial training strategy, an evident gap between natural accuracy and adversarial robustness inevitably persists. To mitigate this problem, in this paper we assume that robust and non-robust representations are two basic ingredients entangled in the integral representation. To achieve adversarial robustness, the robust representations of natural and adversarial examples should be disentangled from the non-robust part, and aligning the robust representations can bridge the gap between accuracy and robustness. Inspired by this motivation, we propose a novel defense method called the Deep Robust Representation Disentanglement Network (DRRDN). Specifically, DRRDN employs a disentangler to extract and align the robust representations from both adversarial and natural examples. Theoretical analysis guarantees the mitigation of the trade-off between robustness and accuracy given good disentanglement and alignment performance. Experimental results on benchmark datasets finally demonstrate the empirical superiority of our method.
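The adversarial examples that motivate this line of work can be illustrated in a few lines. The following is a toy sketch of my own (a one-step FGSM-style perturbation on a hand-picked logistic-regression model, not the paper's DRRDN method): a small signed step along the input gradient of the loss flips the model's prediction.

```python
import numpy as np

# Toy illustration: a fast-gradient-sign (FGSM-style) adversarial perturbation
# on a logistic-regression model. Weights, input, and epsilon are invented for
# demonstration; real attacks use far smaller perturbations on deep networks.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, x):
    return sigmoid(w @ x)

w = np.array([2.0, -1.0, 0.5])     # toy model weights
x = np.array([0.3, 0.2, 0.1])      # input the model classifies as positive
y = 1.0                            # true label

# Gradient of the cross-entropy loss w.r.t. the input is (p - y) * w.
p = predict(w, x)
grad_x = (p - y) * w

eps = 0.4                          # perturbation budget (exaggerated for the toy model)
x_adv = x + eps * np.sign(grad_x)  # one signed ascent step on the loss

print(predict(w, x), predict(w, x_adv))  # prediction flips across 0.5
```

Adversarial training, the strategy the paper builds on, simply trains against such perturbed inputs instead of the clean ones.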
3

Iddianozie, Chidubem, and Gavin McArdle. "Towards Robust Representations of Spatial Networks Using Graph Neural Networks." Applied Sciences 11, no. 15 (July 27, 2021): 6918. http://dx.doi.org/10.3390/app11156918.

Abstract:
The effectiveness of a machine learning model is impacted by the data representation used. Consequently, it is crucial to investigate robust representations for efficient machine learning methods. In this paper, we explore the link between data representations and model performance for inference tasks on spatial networks. We argue that representations which explicitly encode the relations between spatial entities would improve model performance. Specifically, we consider homogeneous and heterogeneous representations of spatial networks. We recognise that the expressive nature of the heterogeneous representation may benefit spatial networks and could improve model performance on certain tasks. Thus, we carry out an empirical study using Graph Neural Network models for two inference tasks on spatial networks. Our results demonstrate that heterogeneous representations improve model performance for downstream inference tasks on spatial networks.
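The homogeneous/heterogeneous distinction can be sketched concretely. Below is a minimal one-layer message-passing example of my own construction (not the authors' code): a homogeneous graph collapses all edges into one adjacency matrix with one weight matrix, while a heterogeneous graph keeps one adjacency and one weight matrix per edge type (the "road" vs "footpath" relation names are invented), in the spirit of relational GNNs.

```python
import numpy as np

# Sketch: one mean-aggregation message-passing step on a tiny spatial network,
# contrasting a homogeneous representation (edge types collapsed) with a
# heterogeneous one (separate adjacency and parameters per edge type).

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))   # 4 nodes with 3 input features each

# Two hypothetical edge types between the same 4 nodes.
A_road = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0]], float)
A_foot = np.array([[0, 0, 0, 1], [0, 0, 0, 0], [0, 0, 0, 1], [1, 0, 1, 0]], float)

def propagate(A, X, W):
    """Mean-aggregate neighbour features, then apply a linear map + ReLU."""
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)  # avoid divide-by-zero
    return np.maximum((A / deg) @ X @ W, 0.0)

# Homogeneous: one relation, one weight matrix.
W = rng.normal(size=(3, 2))
H_homo = propagate((A_road + A_foot > 0).astype(float), X, W)

# Heterogeneous: each relation has its own parameters; messages are summed.
W_road, W_foot = rng.normal(size=(3, 2)), rng.normal(size=(3, 2))
H_hetero = propagate(A_road, X, W_road) + propagate(A_foot, X, W_foot)

print(H_homo.shape, H_hetero.shape)
```

The heterogeneous variant has more parameters and can weight each relation differently, which is the expressiveness the paper's empirical study exploits.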
4

Vu, Hung, Tu Dinh Nguyen, Trung Le, Wei Luo, and Dinh Phung. "Robust Anomaly Detection in Videos Using Multilevel Representations." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5216–23. http://dx.doi.org/10.1609/aaai.v33i01.33015216.

Abstract:
Detecting anomalies in surveillance videos has long been an important but unsolved problem. In particular, many existing solutions are overly sensitive to (often ephemeral) visual artifacts in the raw video data, resulting in false positives and fragmented detection regions. To overcome such sensitivity and to capture true anomalies with semantic significance, one natural idea is to seek validation from abstract representations of the videos. This paper introduces a framework of robust anomaly detection using multilevel representations of both intensity and motion data. The framework consists of three main components: 1) representation learning using Denoising Autoencoders, 2) level-wise representation generation using Conditional Generative Adversarial Networks, and 3) consolidating anomalous regions detected at each representation level. Our proposed multilevel detector shows a significant improvement in pixel-level Equal Error Rate, namely 11.35%, 12.32% and 4.31% improvement on the UCSD Ped 1, UCSD Ped 2 and Avenue datasets respectively. In addition, the model allowed us to detect mislabeled anomalies in the UCSD Ped 1 dataset.
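The pixel-level Equal Error Rate used as the metric here is the operating point where the false-positive rate equals the miss rate. A small helper of my own (the scoring function and synthetic data are assumptions, not taken from the paper) shows how it is computed from per-pixel anomaly scores:

```python
import numpy as np

# Hypothetical helper: Equal Error Rate (EER) from anomaly scores and binary
# ground truth. We sweep thresholds over the sorted scores and pick the point
# where false-negative and false-positive rates are closest.

def equal_error_rate(scores, labels):
    order = np.argsort(-scores)        # thresholds in descending score order
    labels = labels[order]
    tp = np.cumsum(labels)             # true positives above each threshold
    fp = np.cumsum(1 - labels)         # false positives above each threshold
    fnr = 1.0 - tp / labels.sum()      # miss rate
    fpr = fp / (1 - labels).sum()      # false-alarm rate
    i = np.argmin(np.abs(fnr - fpr))   # closest crossing of the two curves
    return (fnr[i] + fpr[i]) / 2.0

rng = np.random.default_rng(1)
labels = (rng.random(1000) < 0.1).astype(float)   # ~10% anomalous "pixels"
scores = rng.normal(size=1000) + 2.0 * labels     # anomalies score higher
eer = equal_error_rate(scores, labels)
print(round(eer, 3))
```

A lower EER means better separation of anomalous from normal pixels, which is what the reported 11.35%, 12.32% and 4.31% improvements measure.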
5

Ho, Edward Kei Shiu, and Lai Wan Chan. "Analyzing Holistic Parsers: Implications for Robust Parsing and Systematicity." Neural Computation 13, no. 5 (May 1, 2001): 1137–70. http://dx.doi.org/10.1162/08997660151134361.

Abstract:
Holistic parsers offer a viable alternative to traditional algorithmic parsers. They have good generalization performance and are inherently robust. In a holistic parser, parsing is achieved by mapping the connectionist representation of the input sentence to the connectionist representation of the target parse tree directly. Little prior knowledge of the underlying parsing mechanism thus needs to be assumed. However, this also makes holistic parsing difficult to understand. In this article, an analysis is presented for studying the operations of the confluent pre-order parser (CPP). In the analysis, the CPP is viewed as a dynamical system, and holistic parsing is perceived as a sequence of state transitions through its state-space. The seemingly one-shot parsing mechanism can thus be elucidated as a step-by-step inference process, with the intermediate parsing decisions being reflected by the states visited during parsing. The study serves two purposes. First, it improves our understanding of how grammatical errors are corrected by the CPP. The occurrence of an error in a sentence will cause the CPP to deviate from the normal track that is followed when the original sentence is parsed. But as the remaining terminals are read, the two trajectories will gradually converge until finally the correct parse tree is produced. Second, it reveals that having systematic parse tree representations alone cannot guarantee good generalization performance in holistic parsing. More importantly, they need to be distributed in certain useful locations of the representational space. Sentences with similar trailing terminals should have their corresponding parse tree representations mapped to nearby locations in the representational space. The study provides concrete evidence that encoding the linearized parse trees as obtained via preorder traversal can satisfy such a requirement.
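The preorder linearization the abstract ends on is simple to make concrete. The sketch below is my own construction (the grammar and tuple encoding are invented, not the CPP's actual representation): a parse tree is flattened into the symbol sequence visited root-first.

```python
# Toy illustration: encoding a parse tree as the linearized sequence obtained
# via preorder traversal. Trees are nested tuples of the form
# (label, child1, child2, ...).

def preorder(tree):
    """Flatten a nested (label, children...) tuple in preorder."""
    label, *children = tree
    out = [label]
    for child in children:
        out.extend(preorder(child))
    return out

# A hypothetical parse of "the dog barks": S -> NP VP, NP -> det n, VP -> v
tree = ("S", ("NP", ("det",), ("n",)), ("VP", ("v",)))
print(preorder(tree))   # ['S', 'NP', 'det', 'n', 'VP', 'v']
```

The article's point is that trees sharing structure over trailing terminals yield similar such sequences, placing their connectionist encodings near each other in representational space.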
6

Yang, Qing, Jun Chen, and Najla Al-Nabhan. "Data representation using robust nonnegative matrix factorization for edge computing." Mathematical Biosciences and Engineering 19, no. 2 (2021): 2147–78. http://dx.doi.org/10.3934/mbe.2022100.

Abstract:
As a popular data representation technique, nonnegative matrix factorization (NMF) has been widely applied in edge computing, information retrieval and pattern recognition. Although it can learn parts-based data representations, existing NMF-based algorithms fail to integrate the local and global structures of the data to steer matrix factorization. Meanwhile, semi-supervised ones ignore the important role of instances from different classes in learning the representation. To address these issues, we propose a novel semi-supervised NMF approach via joint graph regularization and constraint propagation for edge computing, called robust constrained nonnegative matrix factorization (RCNMF), which learns robust discriminative representations by leveraging the power of both L2,1-norm NMF and constraint propagation. Specifically, RCNMF explicitly exploits the global and local structures of the data to bring the latent representations of instances of the same class closer and push those of instances of different classes farther apart. Furthermore, RCNMF introduces the L2,1-norm cost function to address noise and outliers. Moreover, L2,1-norm constraints on the factorial matrix are used to keep the new representation sparse in rows. Finally, we exploit an optimization algorithm to solve the proposed framework, whose convergence has been proven both theoretically and empirically. Empirical experiments show that the proposed RCNMF is superior to other state-of-the-art algorithms.
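For orientation, here is the plain NMF baseline that RCNMF extends, as a minimal sketch of my own using the classic multiplicative updates; it implements none of the paper's extensions (L2,1-norm cost, graph regularization, constraint propagation), only the vanilla factorization V ≈ WH with W, H ≥ 0.

```python
import numpy as np

# Minimal plain NMF (Lee-Seung multiplicative updates) on synthetic data.
# V is approximated by the product of two nonnegative factors W and H.

rng = np.random.default_rng(0)
V = rng.random((20, 30))     # nonnegative data matrix (synthetic)
k = 5                        # latent dimensionality
W = rng.random((20, k))
H = rng.random((k, 30))

eps = 1e-9                   # guards against division by zero
for _ in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + eps)   # update coefficient matrix
    W *= (V @ H.T) / (W @ H @ H.T + eps)   # update basis matrix

err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(round(err, 3))         # relative reconstruction error
```

The multiplicative form keeps both factors nonnegative throughout; RCNMF replaces the Frobenius cost here with an L2,1-norm objective plus regularizers to gain robustness to outliers and class structure.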
7

Parlett, Beresford N., and Inderjit S. Dhillon. "Relatively robust representations of symmetric tridiagonals." Linear Algebra and its Applications 309, no. 1-3 (April 2000): 121–51. http://dx.doi.org/10.1016/s0024-3795(99)00262-1.

8

Medina, Josep R., and Carlos R. Sanchez‐Carratala. "Robust AR Representations of Ocean Spectra." Journal of Engineering Mechanics 117, no. 12 (December 1991): 2926–30. http://dx.doi.org/10.1061/(asce)0733-9399(1991)117:12(2926).

9

Higashi, Masatake, Fuyuki Torihara, Nobuhiro Takeuchi, Toshio Sata, Tsuyoshi Saitoh, and Mamoru Hosaka. "Robust algorithms for face-based representations." Computer-Aided Design 29, no. 2 (February 1997): 135–46. http://dx.doi.org/10.1016/s0010-4485(96)00042-5.

10

Rostami, Mohammad. "Internal Robust Representations for Domain Generalization." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (June 26, 2023): 15451. http://dx.doi.org/10.1609/aaai.v37i13.26818.

Abstract:
Model generalization under distributional changes remains a significant challenge for machine learning. We present consolidating the internal representation of the training data in a model as a strategy for improving model generalization.

Dissertations / Theses on the topic "Robust Representations":

1

Tran, Thi Quynh Nhi. "Robust and comprehensive joint image-text representations." Thesis, Paris, CNAM, 2017. http://www.theses.fr/2017CNAM1096/document.

Abstract:
This thesis investigates the joint modeling of the visual and textual content of multimedia documents to address cross-modal problems. Such tasks require the ability to match information across modalities. A common representation space, obtained for example by Kernel Canonical Correlation Analysis, on which images and text can both be represented and directly compared, is a generally adopted solution. Nevertheless, such a joint space still suffers from several deficiencies that may hinder the performance of cross-modal tasks. An important contribution of this thesis is therefore to identify two major limitations of such a space. The first limitation concerns information that is poorly represented on the common space yet very significant for a retrieval task. The second limitation consists in a separation between modalities on the common space, which leads to coarse cross-modal matching. To deal with the first limitation, we put forward a model which first identifies such poorly-represented information and then finds ways to combine it with data that is relatively well represented on the joint space. Evaluations on text illustration tasks show that by appropriately identifying and taking such information into account, the results of cross-modal retrieval can be strongly improved. The major work in this thesis aims to cope with the separation between modalities on the joint space to enhance the performance of cross-modal tasks. We propose two representation methods for bi-modal or uni-modal documents that aggregate information from both the visual and textual modalities projected on the joint space. Specifically, for uni-modal documents we suggest a completion process relying on an auxiliary dataset to find the corresponding information in the absent modality, and then use that information to build a final bi-modal representation for a uni-modal document. Evaluations show that our approaches achieve state-of-the-art results on several standard and challenging datasets for cross-modal retrieval as well as bi-modal and cross-modal classification.
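The joint space the thesis builds on comes from canonical correlation analysis. Below is a minimal sketch of standard linear CCA of my own (the thesis uses the kernelized variant, and the two "views" here are synthetic stand-ins for image and text features): whiten each view, then take the SVD of the whitened cross-covariance; its singular values are the canonical correlations.

```python
import numpy as np

# Minimal linear CCA on two synthetic views sharing a 1-D latent signal.
rng = np.random.default_rng(0)
z = rng.normal(size=(500, 1))                                       # shared latent
X = z @ rng.normal(size=(1, 4)) + 0.1 * rng.normal(size=(500, 4))   # "image" view
Y = z @ rng.normal(size=(1, 3)) + 0.1 * rng.normal(size=(500, 3))   # "text" view

X -= X.mean(axis=0)
Y -= Y.mean(axis=0)
Cxx, Cyy, Cxy = X.T @ X, Y.T @ Y, X.T @ Y

# Whiten each view via Cholesky factors; singular values of the whitened
# cross-covariance are the canonical correlations (all in [0, 1]).
Lx = np.linalg.cholesky(Cxx)
Ly = np.linalg.cholesky(Cyy)
M = np.linalg.inv(Lx) @ Cxy @ np.linalg.inv(Ly).T
U, s, Vt = np.linalg.svd(M)

rho = s[0]
print(round(rho, 2))   # top canonical correlation; near 1 for this strong shared signal
```

Projecting images and text onto the leading canonical directions gives the common space on which the thesis diagnoses the two limitations above.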
2

Tran, Brandon Vanhuy. "Building and using robust representations in image classification." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/127912.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, May, 2020
Cataloged from the official PDF of thesis.
Includes bibliographical references (pages 115-131).
One of the major appeals of the deep learning paradigm is the ability to learn high-level feature representations of complex data. These learned representations obviate manual data pre-processing, and are versatile enough to generalize across tasks. However, they are not yet capable of fully capturing abstract, meaningful features of the data. For instance, the pervasiveness of adversarial examples--small perturbations of correctly classified inputs causing model misclassification--is a prominent indication of such shortcomings. The goal of this thesis is to work towards building learned representations that are more robust and human-aligned. To achieve this, we turn to adversarial (or robust) training, an optimization technique for training networks less prone to adversarial inputs. Typically, robust training is studied purely in the context of machine learning security (as a safeguard against adversarial examples)--in contrast, we will cast it as a means of enforcing an additional prior onto the model. Specifically, it has been noticed that, in a similar manner to the well-known convolutional or recurrent priors, the robust prior serves as a "bias" that restricts the features models can use in classification--it does not allow for any features that change upon small perturbations. We find that the addition of this simple prior enables a number of downstream applications, from feature visualization and manipulation to input interpolation and image synthesis. Most importantly, robust training provides a simple way of interpreting and understanding model decisions. Besides diagnosing incorrect classification, this also has consequences in the so-called "data poisoning" setting, where an adversary corrupts training samples with the hope of causing misbehaviour in the resulting model. We find that in many cases, the prior arising from robust training significantly helps in detecting data poisoning.
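The robust-training prior described above can be sketched as a two-level loop: an inner maximization that perturbs each input toward higher loss, and an outer minimization over the perturbed batch. The toy example below is my own (a logistic-regression stand-in with invented data and a one-step FGSM inner solver, not the thesis's deep-network setup):

```python
import numpy as np

# Hedged sketch of an adversarial (robust) training loop: at each step the
# inputs are replaced by worst-case perturbations inside an epsilon-ball
# before the gradient update. All data and hyperparameters are synthetic.

rng = np.random.default_rng(0)
n, d, eps, lr = 200, 5, 0.1, 0.5
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = (X @ w_true > 0).astype(float)   # linearly separable labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(d)
for _ in range(100):
    # Inner maximization: one signed gradient step moves each input uphill.
    p = sigmoid(X @ w)
    grad_x = (p - y)[:, None] * w[None, :]
    X_adv = X + eps * np.sign(grad_x)
    # Outer minimization: standard logistic-regression step on perturbed batch.
    p_adv = sigmoid(X_adv @ w)
    w -= lr * X_adv.T @ (p_adv - y) / n

acc = ((sigmoid(X @ w) > 0.5) == (y > 0.5)).mean()
print(round(acc, 2))   # clean accuracy of the robustly trained model
```

The thesis's observation is that this inner maximization acts as a prior: the learned weights cannot rely on features that flip under small perturbations.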
4

Parekh, Sanjeel. "Learning representations for robust audio-visual scene analysis." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLT015/document.

Abstract:
The goal of this thesis is to design algorithms that enable robust detection of objects and events in videos through joint audio-visual analysis. This is motivated by humans' remarkable ability to meaningfully integrate auditory and visual characteristics for perception in noisy scenarios. To this end, we identify two kinds of natural associations between the modalities in recordings made using a single microphone and camera, namely motion-audio correlation and appearance-audio co-occurrence. For the former, we use audio source separation as the primary application and propose two novel methods within the popular non-negative matrix factorization framework. The central idea is to utilize the temporal correlation between audio and motion for objects/actions where the sound-producing motion is visible. The first proposed method focuses on soft coupling between audio and motion representations capturing temporal variations, while the second is based on cross-modal regression. We segregate several challenging audio mixtures of string instruments into their constituent sources using these approaches. To identify and extract many commonly encountered objects, we leverage appearance-audio co-occurrence in large datasets. This complementary association mechanism is particularly useful for objects where motion-based correlations are not visible or available. The problem is dealt with in a weakly-supervised setting wherein we design a representation learning framework for robust audio-visual event classification, visual object localization, audio event detection and source separation. We extensively test the proposed ideas on publicly available datasets. The experiments demonstrate several intuitive multimodal phenomena that humans utilize on a regular basis for robust scene understanding.
5

Herdtweck, Christian [Verfasser], and Heinrich [Akademischer Betreuer] Bülthoff. "Learning Data-Driven Representations for Robust Monocular Computer Vision Applications / Christian Herdtweck ; Betreuer: Heinrich Bülthoff." Tübingen : Universitätsbibliothek Tübingen, 2014. http://d-nb.info/1162897317/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Xu, Guanglin. "Optimization under uncertainty: conic programming representations, relaxations, and approximations." Diss., University of Iowa, 2017. https://ir.uiowa.edu/etd/5881.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In practice, the presence of uncertain parameters in optimization problems introduces new modeling and solvability challenges for operations research. Three main paradigms have been proposed for optimization problems under uncertainty: stochastic programming, robust optimization, and sensitivity analysis. In this thesis, we examine, improve, and combine the latter two paradigms in several relevant models and applications. In the second chapter, we study a two-stage adjustable robust linear optimization problem in which the right-hand sides are uncertain and belong to a compact, convex, and tractable uncertainty set. Under standard and simple assumptions, we reformulate the two-stage problem as a copositive optimization program, which in turn leads to a class of tractable semidefinite-based approximations that are at least as strong as the affine policy, a well-studied tractable approximation in the literature. We examine our approach over several examples from the literature, and the results demonstrate that our tractable approximations significantly improve on the affine policy. In particular, our approach recovers the optimal values of a class of instances of increasing size for which the affine policy admits an arbitrarily large gap. In the third chapter, we leverage the concept of robust optimization to conduct sensitivity analysis of the optimal value of linear programming (LP). In particular, we propose a framework for sensitivity analysis of LP problems, allowing for simultaneous perturbations in the objective coefficients and right-hand sides, where the perturbations are modeled in a compact, convex, and tractable uncertainty set. This framework unifies and extends multiple approaches for LP sensitivity analysis in the literature and has close ties to worst-case LP and two-stage adjustable linear programming. We define the best-case and worst-case LP optimal values over the uncertainty set.
As the concept aligns well with the general spirit of robust optimization, we call our approach robust sensitivity analysis. While the best-case and worst-case optimal values are difficult to compute in general, we prove that they equal the optimal values of two separate, but related, copositive programs. We then develop tight, tractable conic relaxations to provide bounds on the best-case and worst-case optimal values, respectively. We also develop techniques to assess the quality of the bounds, and we validate our approach computationally on several examples from, and inspired by, the literature. We find that the bounds are very strong in practice and, in particular, are at least as strong as known results for specific cases from the literature. In the fourth chapter of this thesis, we study the expected optimal value of a mixed 0-1 programming problem with uncertain objective coefficients following a joint distribution. We assume that the true distribution is not known exactly, but that a set of independent samples can be observed. Using the Wasserstein metric, we construct an ambiguity set centered at the empirical distribution of the observed samples and containing all distributions that could have generated the observed samples with high confidence. The problem of interest is to bound the expected optimal value over the Wasserstein ambiguity set. Under standard assumptions, we reformulate the problem as a copositive programming problem, which naturally leads to a tractable semidefinite-based approximation. We compare our approach with a moment-based approach from the literature on two applications. The numerical results illustrate the effectiveness of our approach. Finally, we conclude the thesis with remarks on some interesting open questions in the field of optimization under uncertainty; in particular, we point out some topics that could potentially be studied using copositive programming techniques.
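The best-case/worst-case idea in this abstract can be mimicked at toy scale by brute force (an illustration only; the thesis's actual contribution is the copositive reformulation and its conic relaxations, none of which appear here). For a deliberately trivial LP whose optimal value has a closed form, we bound the best- and worst-case optimal values as the right-hand side ranges over a box:

```python
# Toy "robust sensitivity analysis": bound the best- and worst-case optimal
# values of an LP as its right-hand side b ranges over a box uncertainty
# set.  All numbers are made up.
from itertools import product

c = [2.0, 3.0]              # objective coefficients, c >= 0
b_nominal = [1.0, 2.0]
delta = 0.5                 # each b_i may move by +/- delta

def lp_opt_value(b):
    # Closed-form optimal value of:  min c^T x  s.t.  x_i >= b_i, x_i >= 0
    return sum(ci * max(bi, 0.0) for ci, bi in zip(c, b))

# This value function is monotone in each b_i, so for a box uncertainty set
# the extremes are attained at corners; enumerating them is exact here
# (it would not be for a general LP or a general uncertainty set).
corners = product(*[(bi - delta, bi + delta) for bi in b_nominal])
values = [lp_opt_value(list(b)) for b in corners]
best_case, worst_case = min(values), max(values)
```

For non-trivial LPs and general convex uncertainty sets, exactly this computation is intractable, which is what motivates the copositive programs and tractable relaxations developed in the thesis.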
8

Barbano, Carlo Alberto Maria. "Collateral-Free Learning of Deep Representations : From Natural Images to Biomedical Applications." Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT038.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Deep Learning (DL) has become one of the predominant tools for solving a variety of tasks, often with superior performance compared to previous state-of-the-art methods. DL models are often able to learn meaningful and abstract representations of the underlying data. However, it has been shown that they might also learn additional features, which are not necessarily relevant or required for the desired task. This could pose a number of issues, as this additional information can contain bias, noise, or sensitive information, that should not be taken into account (e.g. gender, race, age, etc.) by the model. We refer to this information as collateral. The presence of collateral information translates into practical issues when deploying DL-based pipelines, especially if they involve private users' data. Learning robust representations that are free of collateral information can be highly relevant for a variety of fields and applications, like medical applications and decision support systems.In this thesis, we introduce the concept of Collateral Learning, which refers to all those instances in which a model learns more information than intended. The aim of Collateral Learning is to bridge the gap between different fields in DL, such as robustness, debiasing, generalization in medical imaging, and privacy preservation. We propose different methods for achieving robust representations free of collateral information. Some of our contributions are based on regularization techniques, while others are represented by novel loss functions.In the first part of the thesis, we lay the foundations of our work, by developing techniques for robust representation learning on natural images. We focus on one of the most important instances of Collateral Learning, namely biased data. Specifically, we focus on Contrastive Learning (CL), and we propose a unified metric learning framework that allows us to both easily analyze existing loss functions, and derive novel ones. 
Here, we propose a novel supervised contrastive loss function, ε-SupInfoNCE, and two debiasing regularization techniques, EnD and FairKL, that achieve state-of-the-art performance on a number of standard vision classification and debiasing benchmarks. In the second part of the thesis, we focus on Collateral Learning in medical imaging, specifically on neuroimaging and chest X-ray images. For neuroimaging, we present a novel contrastive learning approach for brain age estimation. Our approach achieves state-of-the-art results on the OpenBHB dataset for age regression and shows increased robustness to the site effect. We also leverage this method to detect unhealthy brain aging patterns, showing promising results in the classification of brain conditions such as Mild Cognitive Impairment (MCI) and Alzheimer's Disease (AD). For chest X-ray (CXR) images, we target Covid-19 classification, showing how Collateral Learning can effectively hinder the reliability of such models. To tackle this issue, we propose a transfer learning approach that, combined with our regularization techniques, shows promising results on an original multi-site CXR dataset. Finally, we provide some hints about Collateral Learning and privacy preservation in DL models. We show that some of our proposed methods can be effective in preventing certain information from being learned by the model, thus avoiding potential data leakage.
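The ε-SupInfoNCE loss itself is not reproduced here, but the plain InfoNCE form it builds on is easy to sketch numerically (pure Python with made-up 2-D embeddings; the supervised positives and the ε margin of the thesis's loss are omitted):

```python
# Minimal numerical sketch of an InfoNCE-style contrastive loss:
#   -log( exp(sim(a,p)/t) / (exp(sim(a,p)/t) + sum_n exp(sim(a,n)/t)) )
# where a = anchor, p = positive, n = negatives, t = temperature.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, temperature=0.1):
    pos = math.exp(cosine(anchor, positive) / temperature)
    neg = sum(math.exp(cosine(anchor, n) / temperature) for n in negatives)
    return -math.log(pos / (pos + neg))

anchor    = [1.0, 0.0]
positive  = [0.9, 0.1]                   # nearly aligned with the anchor
negatives = [[-1.0, 0.0], [0.0, 1.0]]

loss_good = info_nce(anchor, positive, negatives)
# Swapping in a mismatched "positive" should raise the loss sharply:
loss_bad = info_nce(anchor, [-1.0, 0.0], [[0.9, 0.1], [0.0, 1.0]])
```

Minimizing such a loss pulls positives together and pushes negatives apart in embedding space, which is the mechanism the thesis's unified metric learning framework analyzes and extends.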
9

Terzi, Matteo. "Learning interpretable representations for classification, anomaly detection, human gesture and action recognition." Doctoral thesis, Università degli studi di Padova, 2019. http://hdl.handle.net/11577/3423183.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The goal of this thesis is to provide algorithms and models for classification, gesture recognition and anomaly detection, with a partial focus on human activity. In applications where humans are involved, it is of paramount importance to provide robust and understandable algorithms and models. One way to meet this requirement is to use relatively simple and robust approaches, especially when devices are resource-constrained. A second approach, when a large amount of data is available, is to adopt complex algorithms and models and make them robust and interpretable from a human point of view. This motivates our thesis, which is divided in two parts. The first part of this thesis is devoted to the development of parsimonious algorithms for action/gesture recognition in human-centric applications such as sports, and for anomaly detection in the artificial pancreas. The data sources employed for the validation of our approaches consist of collections of time-series data coming from sensors, such as accelerometers or glycemic sensors. The main challenge in this context is to discard (i.e., be invariant to) the many nuisance factors that make the recognition task difficult, especially when many different users are involved. Moreover, in some cases, data cannot be easily labelled, making supervised approaches not viable. Thus, we present the mathematical tools and background with a focus on the recognition problems, and then derive novel methods for: (i) gesture/action recognition using sparse representations for a sport application; (ii) gesture/action recognition using a symbolic representation and its extension to the multivariate case; (iii) model-free and unsupervised anomaly detection for detecting faults in the artificial pancreas. These algorithms are well suited to deployment on resource-constrained devices, such as wearables. In the second part, we investigate the feasibility of deep learning frameworks where human interpretation is crucial.
Standard deep learning models are not robust and, unfortunately, approaches in the literature that ensure robustness are typically detrimental to accuracy. However, real-world applications often require a minimum level of accuracy to be deployed. In view of this, after reviewing some results from the recent literature, we formulate a new algorithm able to trade off accuracy against robustness, given a cost-sensitive classification problem and a required accuracy threshold. In addition, we provide a link between robustness to input perturbations and interpretability, guided by a physical minimum-energy principle: leveraging optimal transport tools, we show that robust training is connected to the optimal transport problem. Thanks to these theoretical insights, we develop a new algorithm that provides robust, interpretable and more transferable representations.
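The "symbolic representation" mentioned for time-series gesture recognition is in the spirit of SAX (Symbolic Aggregate approXimation). A rough sketch, not the thesis's exact method (the breakpoints below are the standard Gaussian quartiles for a 4-letter alphabet; the signal is made up, and the series length is assumed divisible by the number of segments):

```python
# SAX-style symbolization: z-normalize, average over equal-length segments
# (piecewise aggregate approximation), then map each segment mean to a
# letter via fixed Gaussian breakpoints.
import statistics

def sax(series, n_segments=4, breakpoints=(-0.6745, 0.0, 0.6745),
        alphabet="abcd"):
    mu = statistics.fmean(series)
    sd = statistics.pstdev(series) or 1.0   # guard against constant series
    z = [(x - mu) / sd for x in series]
    seg = len(z) // n_segments              # assumes exact divisibility
    means = [statistics.fmean(z[i * seg:(i + 1) * seg])
             for i in range(n_segments)]

    def symbol(v):
        for i, bp in enumerate(breakpoints):
            if v < bp:
                return alphabet[i]
        return alphabet[len(breakpoints)]

    return "".join(symbol(m) for m in means)

word = sax([1, 1, 2, 2, 8, 8, 9, 9])  # low half then high half
```

Discretizing signals this way makes downstream matching cheap and noise-tolerant, which is why such representations suit resource-constrained wearables.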
10

山本, 有作, and Yusaku Yamamoto. "密行列固有値解法の最近の発展(I) : Multiple Relatively Robust Representationsアルゴリズム." 日本応用数理学会, 2005. http://hdl.handle.net/2237/10838.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Robust Representations":

1

Li, Sheng, and Yun Fu. Robust Representation for Data Analytics. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-60176-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

del, Pobil Angel Pasqual, and Serna Miguel Angel, eds. Spatial representation and motion planning. Berlin: Springer-Verlag, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

de, Velde Walter Van, ed. Toward learning robots. Cambridge, Mass: MIT Press, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Segre, Alberto Maria. Machine learning of robot assembly plans. Boston: Kluwer Academic Publishers, 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Burhans, Robert L. Robert L. Burhans. Springfield, Ill: The University, 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Wallgrün, Jan Oliver. Hierarchical Voronoi graphs: Spatial representation and reasoning for mobile robots. Heidelberg: Springer, 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Mullane, John. Random Finite Sets for Robot Mapping and SLAM: New Concepts in Autonomous Robotic Map Representations. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Wolter, Diedrich. Spatial representation and reasoning for robot mapping: A shape-based approach. Berlin: Springer, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Heikkonen, Jukka. Subsymbolic representations, self-organizing maps, and object motion learning. Lappeenranta, Finland: Lappeenranta University of Technology, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Goldsworthy, Robert F. Robert F. Goldsworthy: An oral history. Olympia: Washington State Oral History Program, Office of the Secretary of State, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Robust Representations":

1

Li, Sheng, and Yun Fu. "Fundamentals of Robust Representations." In Advanced Information and Knowledge Processing, 9–16. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-60176-2_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Li, Sheng, and Yun Fu. "Robust Representations for Collaborative Filtering." In Advanced Information and Knowledge Processing, 123–46. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-60176-2_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Li, Sheng, and Yun Fu. "Robust Representations for Response Prediction." In Advanced Information and Knowledge Processing, 147–74. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-60176-2_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Li, Sheng, and Yun Fu. "Robust Representations for Outlier Detection." In Advanced Information and Knowledge Processing, 175–201. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-60176-2_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Li, Sheng, and Yun Fu. "Robust Representations for Person Re-identification." In Advanced Information and Knowledge Processing, 203–22. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-60176-2_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Li, Hengjian, Lianhai Wang, and Zutao Zhang. "Robust Palmprint Recognition Based on Directional Representations." In Intelligent Information Processing VI, 372–81. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-32891-6_46.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Som, Anirudh, Kowshik Thopalli, Karthikeyan Natesan Ramamurthy, Vinay Venkataraman, Ankita Shukla, and Pavan Turaga. "Perturbation Robust Representations of Topological Persistence Diagrams." In Computer Vision – ECCV 2018, 638–59. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01234-2_38.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Vemuri, Baba C., Jundong Liu, and José L. Marroquin. "Robust Multimodal Image Registration Using Local Frequency Representations." In Lecture Notes in Computer Science, 176–82. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-45729-1_17.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Wallraven, Christian, and Heinrich Bülthoff. "Acquiring Robust Representations for Recognition from Image Sequences." In Lecture Notes in Computer Science, 216–22. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-45404-7_29.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Chen, Xiang, Xin Xie, Zhen Bi, Hongbin Ye, Shumin Deng, Ningyu Zhang, and Huajun Chen. "Disentangled Contrastive Learning for Learning Robust Textual Representations." In Artificial Intelligence, 215–26. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-93049-3_18.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Robust Representations":

1

Li, Yitong, Trevor Cohn, and Timothy Baldwin. "Learning Robust Representations of Text." In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA, USA: Association for Computational Linguistics, 2016. http://dx.doi.org/10.18653/v1/d16-1207.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Zhang, Yue, and Xiafei Lei. "Deeply learned electrocardiogram representations are robust." In 2017 13th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD). IEEE, 2017. http://dx.doi.org/10.1109/fskd.2017.8393130.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Smyth, Aidan, Niall Lyons, Ted Wada, Robert Zopf, Ashutosh Pandey, and Avik Santra. "Robust Representations for Keyword Spotting Systems." In 2022 26th International Conference on Pattern Recognition (ICPR). IEEE, 2022. http://dx.doi.org/10.1109/icpr56361.2022.9956211.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Zheng, Peng, Aleksandr Y. Aravkin, Jayaraman J. Thiagarajan, and Karthikeyan Natesan Ramamurthy. "Learning Robust Representations for Computer Vision." In 2017 IEEE International Conference on Computer Vision Workshop (ICCVW). IEEE, 2017. http://dx.doi.org/10.1109/iccvw.2017.211.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Müller, Thomas, and Hinrich Schuetze. "Robust Morphological Tagging with Word Representations." In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg, PA, USA: Association for Computational Linguistics, 2015. http://dx.doi.org/10.3115/v1/n15-1055.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Kawakami, Kazuya, Luyu Wang, Chris Dyer, Phil Blunsom, and Aaron van den Oord. "Learning Robust and Multilingual Speech Representations." In Findings of the Association for Computational Linguistics: EMNLP 2020. Stroudsburg, PA, USA: Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.findings-emnlp.106.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Li, Xiumei, and Guoan Bi. "Reassignment methods for robust time-frequency representations." In Signal Processing (ICICS). IEEE, 2009. http://dx.doi.org/10.1109/icics.2009.5397517.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Li, Yitong, Timothy Baldwin, and Trevor Cohn. "Towards Robust and Privacy-preserving Text Representations." In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Stroudsburg, PA, USA: Association for Computational Linguistics, 2018. http://dx.doi.org/10.18653/v1/p18-2005.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Mahabal, Abhijit, Dan Roth, and Sid Mittal. "Robust Handling of Polysemy via Sparse Representations." In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics. Stroudsburg, PA, USA: Association for Computational Linguistics, 2018. http://dx.doi.org/10.18653/v1/s18-2031.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

King, Brian, I.-Fan Chen, Yonatan Vaizman, Yuzong Liu, Roland Maas, Sree Hari Krishnan Parthasarathi, and Björn Hoffmeister. "Robust Speech Recognition via Anchor Word Representations." In Interspeech 2017. ISCA: ISCA, 2017. http://dx.doi.org/10.21437/interspeech.2017-1570.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Robust Representations":

1

Sznaier, Mario. Multiobject Robust Control of Nonlinear Systems via State Dependent Coefficient Representations and Applications. Fort Belvoir, VA: Defense Technical Information Center, January 2000. http://dx.doi.org/10.21236/ada419042.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Barraquand, Jerome, and Jean-Claude Latombe. Robot Motion Planning: A Distributed Representation Approach. Fort Belvoir, VA: Defense Technical Information Center, May 1989. http://dx.doi.org/10.21236/ada209890.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Feeney, Patricia, Matthias Liffers, Estelle Cheng, and Paul Vierkant. Better Together: Complete Metadata as Robust Infrastructure. Crossref, November 2022. http://dx.doi.org/10.13003/m3237yt.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
According to the survey we conducted prior to this webinar series dedicated to the APAC community, metadata quality was one of the most voted topics to be covered in the webinars, which is understandable: out of FAIRsFAIR's 15 assessment metrics for the FAIRness of research objects, 12 concern metadata. Rich and persistent metadata that incorporates identifiers and encodes generic and domain-specific information, accessibility and licensing, and links between objects using standardized vocabularies and communication protocols is the cornerstone of a versatile, equitable, and trustworthy scholarly infrastructure ecosystem. In this webinar, we want to focus on the various aspects of enriching the metadata of research outputs. What is considered rich or complete, what does it mean for metadata capture and curation workflows, how is this process supported, which services are underpinned by which parts of the metadata, etc.? In this webinar, we'll hear from Matthias Liffers from ARDC and representatives from Crossref, DataCite, and ORCID, who will share their perspectives and provide guidance toward a world with richer metadata. This webinar takes place on Nov 28, 2022, 06:00 AM Universal Time UTC / 14:00 Beijing. This webinar will last 90 minutes including time for Q&A. The slides and recording will be shared afterward with all who register for the event.
4

Bowyer, Kevin. Development of the Aspect Graph Representation for Use in Robot Vision. Fort Belvoir, VA: Defense Technical Information Center, October 1991. http://dx.doi.org/10.21236/ada247109.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Boyer, Marcel. Comments on competition policy and labour markets. CIRANO, February 2022. http://dx.doi.org/10.54932/iqio1721.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Traditionally, labour concerns have not been top-of-mind when considering competition policy, but the current approach to wage-fixing, anti-poaching, and anti-mobility agreements between firms has been one of the main reasons behind recent Parliamentary attention to competition policy and labour markets. Key stakeholders in academic and policy circles have called for more robust enforcement regarding monopsony / oligopsony power in labour markets, when assessing mergers and acquisitions for example, as well as regarding market power in labour representation (unions) and certification as entry barriers in labour markets. The objective here is to identify the numerous challenges and pitfalls in assessing the level of competition on labour markets, both supply and demand, and in addressing remedies if necessary.
6

Ruvinsky, Alicia, Maria Seale, R. Salter, and Natàlia Garcia-Reyero. An ontology for an epigenetics approach to prognostics and health management. Engineer Research and Development Center (U.S.), March 2023. http://dx.doi.org/10.21079/11681/46632.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Techniques in prognostics and health management have advanced considerably in the last few decades, enabled by breakthroughs in computational methods and supporting technologies. These predictive models, whether data-driven or physics-based, target the modeling of a system's aggregate performance. As such, they generalize assumptions about the modelled system's components, and are thus limited in their ability to represent individual components and the dynamic environmental factors that affect composite system health. To address this deficiency, we have developed an epigenetics-inspired knowledge representation for engineered system state that encompasses components and environmental factors. Epigenetics is concerned with explaining how environmental factors affect the expression of an organism's genetic material. The field has derived important insights into the development and progression of disease states based on how environmental factors impact genetic material, causing variations in how a gene is expressed. The health of an engineered system is similarly influenced by its environment. A foundation for a new approach to prognostics based on epigenetics must begin by representing the entities and relationships of an engineered system from the perspective of epigenetics. This paper presents an ontology for an epigenetics-inspired representation of an engineered system. An ontology describing the epigenetics of an engineered system will enable the composition of a formal model and the incremental development of a more robust, causal reasoning system.
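The kind of entity-relationship structure such an ontology formalizes can be hinted at with a bare triple store. Every class and relation name below is invented for illustration; a real implementation would use RDF/OWL tooling rather than a Python set:

```python
# Hypothetical (subject, predicate, object) triples relating an engineered
# system, its components, and environmental factors, echoing the report's
# epigenetic analogy: the environment alters how a component "expresses"
# its health state.
triples = {
    ("Pump-01",       "is_a",      "Component"),
    ("CoolingSystem", "is_a",      "EngineeredSystem"),
    ("Pump-01",       "part_of",   "CoolingSystem"),
    ("Vibration",     "is_a",      "EnvironmentalFactor"),
    ("Vibration",     "modulates", "Pump-01"),
}

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the (possibly wildcard) pattern."""
    return {t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)}

# Which environmental factors act on Pump-01?
factors = query(predicate="modulates", obj="Pump-01")
```

A reasoner built over such statements could then ask causal questions like "which environmental factors could explain this component's degradation?", which is the incremental path the report describes.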
7

Balfour, Lindsay, Adrienne Evans, Marcus Maloney, and Sarah Merry. Postdigital Intimacies for Online Safety. Coventry University, May 2023. http://dx.doi.org/10.18552/pdc/2023/0001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This report offers a multi-sector response to the Online Safety Bill (OSB). The shape and content of the OSB has generated discussion amongst policy specialists, stakeholders and lobbyists in key services and sectors, political advisors and appointed representatives, and academics and researchers – as well as a general public interested in what the OSB will mean for people made vulnerable or at risk of harms online. We report on the discussions that took place in four co-production workshops with representatives from the areas of: intimate digital health tools and services marketed to those who identify as women; image-based and technologically-enabled abuse; “toxic” internet communities; and protections for people with mental health conditions and neurodiversity. As the OSB reaches the final stages of approval through the UK government, this report provides a response from people working in these areas, highlighting the voices and perspectives of those invested in ensuring a vibrant, equal, inclusive, and safe digital society can flourish. Our recommendations include the need for: robust, transparent risk assessment and frameworks for preventing harm that work across life-stages; going above and beyond the current OSB legislation to raise awareness and educate to reduce harms; recognition in the OSB and elsewhere of the national threat of Violence Against Women and Girls (VAWG); and, an increase in information sharing and working across sectors of the technology industry, service providers, and charity, law, and government to generate new approaches for a better future.
8

Zio, Enrico, and Nicola Pedroni. Literature review of methods for representing uncertainty. Fondation pour une culture de sécurité industrielle, December 2013. http://dx.doi.org/10.57071/124ure.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This document provides a critical review of different frameworks for uncertainty analysis in a risk analysis context: classical probabilistic analysis, imprecise probability (interval analysis), probability bound analysis, evidence theory, and possibility theory. The driver of the critical analysis is the decision-making process and the need to feed it with representative information derived from the risk assessment, to robustly support the decision. Technical details of the different frameworks are exposed only to the extent necessary to analyze and judge how these contribute to the communication of risk and the representation of the associated uncertainties to decision-makers, in the typical settings of high-consequence risk analysis of complex systems with limited knowledge of their behaviour.
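To give a flavour of the interval-analysis framework the review covers, the following is a minimal sketch (not taken from the reviewed report): when component failure probabilities are known only as intervals, monotone system formulas propagate the bounds directly. The function name and the two-component series system are illustrative assumptions.

```python
def series_failure_bounds(p1, p2):
    """Bounds on the failure probability of a two-component series system
    with independent components, given interval-valued component failure
    probabilities p1 and p2 as (low, high) tuples.

    P(fail) = 1 - (1 - q1) * (1 - q2) is monotonically increasing in both
    q1 and q2, so the bounds are obtained at the interval endpoints."""
    lo = 1 - (1 - p1[0]) * (1 - p2[0])  # both components at their lower bound
    hi = 1 - (1 - p1[1]) * (1 - p2[1])  # both components at their upper bound
    return lo, hi

# Example: component failure probabilities in [0.01, 0.05] and [0.02, 0.10]
bounds = series_failure_bounds((0.01, 0.05), (0.02, 0.10))
```

Unlike a single-point probabilistic analysis, the result is an interval that makes the effect of limited knowledge visible to the decision-maker.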
9

Бережна, Маргарита Василівна. Maleficent: from the Matriarch to the Scorned Woman (Psycholinguistic Image). Baltija Publishing, 2021. http://dx.doi.org/10.31812/123456789/5766.

Abstract:
The aim of the research is to identify the elements of the psycholinguistic image of the leading character in the dark fantasy adventure film Maleficent, directed by Robert Stromberg (2014). The task consists of two stages: at the first, I identify the psychological characteristics of the character to determine which of the archetypes Maleficent belongs to, taking as a basis the classification of film archetypes by V. Schmidt; at the second, I distinguish the speech peculiarities of the character that reflect her psychological image. This paper explores 98 of Maleficent's dialogue turns in the film. According to V. Schmidt's classification, Maleficent belongs first to the Matriarch archetype and later in the plot to the Scorned Woman archetype. These archetypes are representations of Hera, the powerful goddess of marriage and fertility, being respectively her heroic and villainous embodiments. Several crucial characteristics are revealed by speech elements.
10

Shukla, Indu, Rajeev Agrawal, Kelly Ervin, and Jonathan Boone. AI on digital twin of facility captured by reality scans. Engineer Research and Development Center (U.S.), November 2023. http://dx.doi.org/10.21079/11681/47850.

Abstract:
The power of artificial intelligence (AI), coupled with optimization algorithms, can be linked to data-rich digital twin models to perform predictive analysis and make better-informed decisions about installation operations and quality of life for warfighters. In the current research, we developed AI-connected lifecycle building information models by creating a data-informed smart digital twin of one of the US Army Corps of Engineers (USACE) buildings as our test case. Digital twin (DT) technology involves creating a virtual representation of a physical entity. A digital twin is created by digitalizing data collected through sensors, is powered by machine learning (ML) algorithms, and is a continuously learning system. The exponential advance in digital technologies enables facility spaces to be fully and richly modeled in three dimensions and brought together in virtual space. Coupled with advances in reinforcement learning and computer graphics, this enables AI agents to learn visual navigation and interaction with objects. We used Habitat AI 2.0 to train an embodied agent in an immersive, photorealistic 3D environment. The embodied agent interacts with the 3D environment by receiving RGB, depth, and semantically segmented views of the environment, taking navigational actions, and interacting with objects in the 3D space. Instead of training robots in the physical world, we train embodied agents in simulated 3D space. Humans are superior at critical thinking, creativity, and managing people, whereas robots are superior at coping with harsh environments and performing highly repetitive work. Training robots in a controlled simulated world is faster and can increase their surveillance, reliability, efficiency, and survivability in physical space.

To the bibliography