Doctoral dissertations on the topic "Robust Representations"
Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles
Browse the top 50 doctoral dissertations on the topic "Robust Representations".
An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in whatever citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the publication as a .pdf file and read its abstract online, when the relevant details are provided in the work's metadata.
Browse doctoral dissertations from many fields of study and compile a suitable bibliography.
Tran, Thi Quynh Nhi. "Robust and comprehensive joint image-text representations". Thesis, Paris, CNAM, 2017. http://www.theses.fr/2017CNAM1096/document.
This thesis investigates the joint modeling of the visual and textual content of multimedia documents to address cross-modal problems. Such tasks require the ability to match information across modalities. A generally adopted solution is a common representation space, obtained for example by Kernel Canonical Correlation Analysis, on which images and text can both be represented and directly compared. Nevertheless, such a joint space still suffers from several deficiencies that may hinder the performance of cross-modal tasks. An important contribution of this thesis is therefore to identify two major limitations of such a space. The first limitation concerns information that is poorly represented in the common space yet very significant for a retrieval task. The second is a separation between modalities in the common space, which leads to coarse cross-modal matching. To deal with the first limitation, we put forward a model which first identifies poorly represented information and then finds ways to combine it with data that is relatively well represented in the joint space. Evaluations on text-illustration tasks show that by appropriately identifying and taking such information into account, the results of cross-modal retrieval can be strongly improved. The major work in this thesis aims to cope with the separation between modalities in the joint space. We propose two representation methods for bi-modal or uni-modal documents that aggregate information from both the visual and textual modalities projected onto the joint space. Specifically, for uni-modal documents we suggest a completion process relying on an auxiliary dataset to find the corresponding information in the absent modality, and then use that information to build a final bi-modal representation for the uni-modal document.
Evaluations show that our approaches achieve state-of-the-art results on several standard and challenging datasets for cross-modal retrieval and for bi-modal and cross-modal classification.
Tran, Brandon Vanhuy. "Building and using robust representations in image classification". Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/127912.
Cataloged from the official PDF of the thesis.
Includes bibliographical references (pages 115-131).
One of the major appeals of the deep learning paradigm is the ability to learn high-level feature representations of complex data. These learned representations obviate manual data pre-processing, and are versatile enough to generalize across tasks. However, they are not yet capable of fully capturing abstract, meaningful features of the data. For instance, the pervasiveness of adversarial examples--small perturbations of correctly classified inputs causing model misclassification--is a prominent indication of such shortcomings. The goal of this thesis is to work towards building learned representations that are more robust and human-aligned. To achieve this, we turn to adversarial (or robust) training, an optimization technique for training networks less prone to adversarial inputs. Typically, robust training is studied purely in the context of machine learning security (as a safeguard against adversarial examples)--in contrast, we will cast it as a means of enforcing an additional prior onto the model. Specifically, it has been noticed that, in a similar manner to the well-known convolutional or recurrent priors, the robust prior serves as a "bias" that restricts the features models can use in classification--it does not allow for any features that change upon small perturbations. We find that the addition of this simple prior enables a number of downstream applications, from feature visualization and manipulation to input interpolation and image synthesis. Most importantly, robust training provides a simple way of interpreting and understanding model decisions. Besides diagnosing incorrect classification, this also has consequences in the so-called "data poisoning" setting, where an adversary corrupts training samples with the hope of causing misbehaviour in the resulting model. We find that in many cases, the prior arising from robust training significantly helps in detecting data poisoning.
by Brandon Vanhuy Tran.
Ph.D., Massachusetts Institute of Technology, Department of Mathematics.
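Illustrative aside: the adversarial examples this thesis studies are commonly crafted with gradient-sign perturbations. A minimal FGSM-style sketch against a toy NumPy logistic-regression model (all data invented; this shows the generic attack, not the thesis's robust-training procedure):

```python
import numpy as np

rng = np.random.default_rng(1)
# Two linearly separable Gaussian blobs.
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain (non-robust) logistic-regression training by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * float(np.mean(p - y))

def fgsm(x, label, eps):
    # Fast Gradient Sign Method: step along the sign of the input-loss gradient.
    grad = (sigmoid(x @ w + b) - label) * w   # d(logloss)/dx for this linear model
    return x + eps * np.sign(grad)

x0 = np.array([1.0, 1.0])         # confidently classified as class 1
x_adv = fgsm(x0, 1, eps=2.5)      # eps chosen large enough to flip the label in 2-D
print(sigmoid(x0 @ w + b) > 0.5, sigmoid(x_adv @ w + b) > 0.5)
```

Robust (adversarial) training, the thesis's starting point, simply trains on such perturbed inputs instead of the clean ones.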
Parekh, Sanjeel. "Learning representations for robust audio-visual scene analysis". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLT015/document.
The goal of this thesis is to design algorithms that enable robust detection of objects and events in videos through joint audio-visual analysis. This is motivated by humans' remarkable ability to meaningfully integrate auditory and visual characteristics for perception in noisy scenarios. To this end, we identify two kinds of natural associations between the modalities in recordings made using a single microphone and camera, namely motion-audio correlation and appearance-audio co-occurrence. For the former, we use audio source separation as the primary application and propose two novel methods within the popular non-negative matrix factorization framework. The central idea is to utilize the temporal correlation between audio and motion for objects/actions where the sound-producing motion is visible. The first proposed method focuses on soft coupling between audio and motion representations capturing temporal variations, while the second is based on cross-modal regression. We segregate several challenging audio mixtures of string instruments into their constituent sources using these approaches. To identify and extract many commonly encountered objects, we leverage appearance-audio co-occurrence in large datasets. This complementary association mechanism is particularly useful for objects where motion-based correlations are not visible or available. The problem is dealt with in a weakly-supervised setting wherein we design a representation learning framework for robust AV event classification, visual object localization, audio event detection and source separation. We extensively test the proposed ideas on publicly available datasets. The experiments demonstrate several intuitive multimodal phenomena that humans utilize on a regular basis for robust scene understanding.
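Illustrative aside: the non-negative matrix factorization framework used above decomposes a magnitude spectrogram V into spectral templates W and temporal activations H. A toy sketch with scikit-learn (the "spectrogram" and its two sources are synthetic; real separation, as in the thesis, couples H with visual motion):

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(2)
# Toy magnitude "spectrogram": two sources with disjoint spectral bands
# switching on and off over time.
f, t = 40, 60
tmpl = np.zeros((f, 2))
tmpl[5:12, 0] = 1.0          # source 1: low band
tmpl[25:33, 1] = 1.0         # source 2: high band
act = rng.random((2, t)) * (rng.random((2, t)) > 0.5)   # sparse activations
V = tmpl @ act + 0.01        # small floor keeps V strictly positive

model = NMF(n_components=2, init="nndsvda", random_state=0, max_iter=500)
W = model.fit_transform(V)   # learned spectral templates, shape (f, 2)
H = model.components_        # learned temporal activations, shape (2, t)

# One "source" is reconstructed from a single template/activation pair.
source0 = np.outer(W[:, 0], H[0])
print(V.shape, W.shape, H.shape)
```

The rank-2 model recovers the two bands almost exactly because they never share frequency bins; real mixtures overlap, which is why the extra audio-motion coupling helps.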
Herdtweck, Christian [Verfasser], and Heinrich [Akademischer Betreuer] Bülthoff. "Learning Data-Driven Representations for Robust Monocular Computer Vision Applications / Christian Herdtweck ; Betreuer: Heinrich Bülthoff". Tübingen : Universitätsbibliothek Tübingen, 2014. http://d-nb.info/1162897317/34.
Xu, Guanglin. "Optimization under uncertainty: conic programming representations, relaxations, and approximations". Diss., University of Iowa, 2017. https://ir.uiowa.edu/etd/5881.
Barbano, Carlo Alberto Maria. "Collateral-Free Learning of Deep Representations : From Natural Images to Biomedical Applications". Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT038.
Deep Learning (DL) has become one of the predominant tools for solving a variety of tasks, often with superior performance compared to previous state-of-the-art methods. DL models are often able to learn meaningful and abstract representations of the underlying data. However, it has been shown that they might also learn additional features, which are not necessarily relevant or required for the desired task. This could pose a number of issues, as this additional information can contain bias, noise, or sensitive information (e.g. gender, race, age, etc.) that the model should not take into account. We refer to this information as collateral. The presence of collateral information translates into practical issues when deploying DL-based pipelines, especially if they involve private users' data. Learning robust representations that are free of collateral information can be highly relevant for a variety of fields and applications, such as medical applications and decision support systems. In this thesis, we introduce the concept of Collateral Learning, which refers to all those instances in which a model learns more information than intended. The aim of Collateral Learning is to bridge the gap between different fields in DL, such as robustness, debiasing, generalization in medical imaging, and privacy preservation. We propose different methods for achieving robust representations free of collateral information. Some of our contributions are based on regularization techniques, while others are novel loss functions. In the first part of the thesis, we lay the foundations of our work by developing techniques for robust representation learning on natural images. We focus on one of the most important instances of Collateral Learning, namely biased data.
Specifically, we focus on Contrastive Learning (CL), and we propose a unified metric learning framework that allows us both to easily analyze existing loss functions and to derive novel ones. Here, we propose a novel supervised contrastive loss function, ε-SupInfoNCE, and two debiasing regularization techniques, EnD and FairKL, that achieve state-of-the-art performance on a number of standard vision classification and debiasing benchmarks. In the second part of the thesis, we focus on Collateral Learning in medical imaging, specifically on neuroimaging and chest X-ray images. For neuroimaging, we present a novel contrastive learning approach for brain age estimation. Our approach achieves state-of-the-art results on the OpenBHB dataset for age regression and shows increased robustness to the site effect. We also leverage this method to detect unhealthy brain aging patterns, showing promising results in the classification of brain conditions such as Mild Cognitive Impairment (MCI) and Alzheimer's Disease (AD). For chest X-ray images (CXR), we target Covid-19 classification, showing how Collateral Learning can effectively hinder the reliability of such models. To tackle this issue, we propose a transfer learning approach that, combined with our regularization techniques, shows promising results on an original multi-site CXR dataset. Finally, we provide some hints about Collateral Learning and privacy preservation in DL models. We show that some of our proposed methods can be effective in preventing certain information from being learned by the model, thus avoiding potential data leakage.
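Illustrative aside: a generic supervised contrastive (InfoNCE-style) loss, the family that ε-SupInfoNCE belongs to, can be written in a few lines of NumPy. This is a plain textbook variant for intuition, not the thesis's own loss:

```python
import numpy as np

def sup_contrastive_loss(z, labels, tau=0.1):
    """Generic supervised contrastive loss on embeddings z of shape (n, d):
    for each anchor, pull same-label samples together and push the rest apart."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize embeddings
    n = len(z)
    eye = np.eye(n, dtype=bool)
    logits = np.where(eye, -np.inf, z @ z.T / tau)     # exclude self-similarity
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    pos = (labels[:, None] == labels[None, :]) & ~eye  # same-label positives
    return float(-np.where(pos, log_prob, 0.0).sum() / pos.sum())

labels = np.array([0, 0, 1, 1])
clustered = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
shuffled  = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1], [0.1, 0.9]])
# Embeddings clustered by label should incur a lower loss than shuffled ones.
print(sup_contrastive_loss(clustered, labels) < sup_contrastive_loss(shuffled, labels))
```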
Terzi, Matteo. "Learning interpretable representations for classification, anomaly detection, human gesture and action recognition". Doctoral thesis, Università degli studi di Padova, 2019. http://hdl.handle.net/11577/3423183.
山本, 有作, and Yusaku Yamamoto. "密行列固有値解法の最近の発展(I) : Multiple Relatively Robust Representationsアルゴリズム" [Recent developments in dense-matrix eigensolvers (I): the Multiple Relatively Robust Representations algorithm]. 日本応用数理学会 [The Japan Society for Industrial and Applied Mathematics], 2005. http://hdl.handle.net/2237/10838.
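Illustrative aside: the MRRR algorithm surveyed in this article is what LAPACK's `*syevr` drivers implement, and SciPy exposes it directly through `scipy.linalg.eigh` with `driver="evr"`:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(4)
A = rng.normal(size=(100, 100))
A = (A + A.T) / 2                      # dense symmetric test matrix

# driver="evr" selects LAPACK's syevr routine, i.e. the MRRR algorithm.
w, v = eigh(A, driver="evr")

# Residual check of each eigenpair: A v_i should equal w_i v_i.
residual = np.max(np.abs(A @ v - v * w))
print(residual < 1e-10)
```

MRRR's selling point, as the article discusses, is computing orthogonal eigenvectors in O(n²) without the reorthogonalization cost of inverse iteration.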
Huang, Weilin. "Robust facial representation for recognition". Thesis, University of Manchester, 2013. https://www.research.manchester.ac.uk/portal/en/theses/robust-facial-representation-for-recognition(ee2f295c-7b1a-4966-bd12-17edba43b2b4).html.
Drapeau, Samuel. "Risk preferences and their robust representation". Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2010. http://dx.doi.org/10.18452/16135.
The goal of this thesis is the conceptual study of risk and its quantification via robust representations. In a first part we concentrate on context-invariant features related to this notion: diversification and monotonicity. We introduce and study the general properties of three key concepts, risk order, risk measure and risk acceptance family, and their one-to-one relations. Our main result is a uniquely characterized dual robust representation of lower semicontinuous risk orders on topological vector spaces. We also provide automatic continuity and robust representation results on specific convex sets. This approach allows multiple interpretations of risk depending on the setting: model risk in the case of random variables, distributional risk in the case of lotteries, discounting risk in the case of consumption streams, and so on. Various explicit computations in those different settings are then treated (economic index of riskiness, certainty equivalent, VaR on lotteries, variational preferences...). In the second part, we consider preferences which might require additional information in order to be expressed. We provide a mathematical framework for this idea in terms of preorders, called conditional preference orders, which are locally compatible with the available information. This allows us to construct conditional numerical representations of conditional preferences. We obtain a conditional version of the von Neumann and Morgenstern representation for measurable stochastic kernels and extend it to a conditional version of the variational preferences. We finally clarify the interplay between model risk and distributional risk on the axiomatic level.
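Illustrative aside: two of the quantifications mentioned above, Value-at-Risk and (as a convex risk measure admitting a dual robust representation) expected shortfall, computed empirically on simulated losses. A toy sketch, not the thesis's axiomatic treatment:

```python
import numpy as np

def value_at_risk(losses, alpha=0.95):
    # Empirical VaR: the alpha-quantile of the loss distribution.
    return float(np.quantile(losses, alpha))

def expected_shortfall(losses, alpha=0.95):
    # ES/CVaR: the average loss beyond VaR. Unlike VaR it is convex,
    # hence it admits a dual robust representation as a supremum of
    # expectations over a set of test probabilities.
    var = value_at_risk(losses, alpha)
    return float(losses[losses >= var].mean())

rng = np.random.default_rng(5)
losses = rng.normal(0.0, 1.0, 100_000)   # simulated standard-normal losses
print(value_at_risk(losses), expected_shortfall(losses))
```

For a standard normal, the theoretical values are VaR₀.₉₅ ≈ 1.645 and ES₀.₉₅ ≈ 2.063, which the empirical estimates approach.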
Lee, Chia-ying (Chia-ying Jackie). "Closed-loop auditory-based representation for robust speech recognition". Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/60176.
Includes bibliographical references (p. 93-96).
A closed-loop auditory-based speech feature extraction algorithm is presented to address the problem of unseen noise for robust speech recognition. This closed-loop model is inspired by the possible role of the medial olivocochlear (MOC) efferent system of the human auditory periphery, which has been suggested in [6, 13, 42] to be important for human speech intelligibility in noisy environments. We propose that instead of using a fixed filter bank, the filters used in a feature extraction algorithm should be flexible enough to adapt dynamically to different types of background noise. Therefore, in the closed-loop model, a feedback mechanism is designed to regulate the operating points of the filters in the filter bank based on the background noise. The model is tested on a dataset created from the TIDigits database, in which five kinds of noise are added to synthesize noisy speech. Compared with the standard MFCC extraction algorithm, the proposed closed-loop feature extraction algorithm provides 9.7%, 9.1% and 11.4% absolute word-error-rate reduction on average for three kinds of filter banks, respectively.
by Chia-ying Lee.
S.M.
Siméoni, Oriane. "Robust image representation for classification, retrieval and object discovery". Thesis, Rennes 1, 2020. https://ged.univ-rennes1.fr/nuxeo/site/esupversions/415eb65b-d5f7-4be7-85e6-c2ecb2aba4dc.
Neural network representations have proved to be relevant for many computer vision tasks such as image classification, object detection, segmentation and instance-level image retrieval. A network is trained for one particular task and requires a large number of labeled data. In this thesis we propose solutions to extract the most information with the least supervision. Focusing first on the classification task, we examine the active learning process in the context of deep learning and show that combining it with semi-supervised and unsupervised techniques greatly boosts results. We then investigate the image retrieval task, and in particular we exploit the spatial localization information available "for free" in CNN feature maps. We first propose to represent an image by a collection of affine local features detected within activation maps, which are memory-efficient and robust enough to perform spatial matching. Again extracting information from feature maps, we discover objects of interest in the images of a dataset and gather their representations in a nearest-neighbor graph. Using a centrality measure on the graph, we are able to construct a saliency map per image which focuses on the repeating objects and allows us to compute a global representation that excludes clutter and background.
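Illustrative aside: the "similarity graph plus centrality" idea for discovering repeated objects can be sketched in NumPy. Toy 2-D vectors stand in for CNN region descriptors, and power iteration computes eigenvector centrality; the repeated "object" cluster forms a dense subgraph and dominates:

```python
import numpy as np

rng = np.random.default_rng(6)
# Toy "region descriptors": a tight cluster of repeated-object regions
# plus scattered clutter regions.
obj = rng.normal([5.0, 5.0], 0.3, (20, 2))
clutter = rng.uniform(-10, 10, (20, 2))
feats = np.vstack([obj, clutter])
feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)

# Similarity graph: connect only very similar descriptor pairs.
sim = feats @ feats.T
np.fill_diagonal(sim, 0.0)
adj = (sim > 0.99).astype(float)

# Eigenvector centrality by power iteration: nodes inside the dense
# repeated-object subgraph receive high scores, clutter stays near zero.
c = np.ones(len(feats))
for _ in range(50):
    c = adj @ c
    c /= np.linalg.norm(c) + 1e-12

print(c[:20].mean() > c[20:].mean())   # object regions more central than clutter
```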
Althaus, Philipp. "Indoor Navigation for Mobile Robots : Control and Representations". Doctoral thesis, KTH, Numerical Analysis and Computer Science, NADA, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3644.
This thesis deals with various aspects of indoor navigation for mobile robots. For a system that moves around in a household or office environment, two major problems must be tackled. First, an appropriate control scheme has to be designed in order to navigate the platform. Second, the form of representations of the environment must be chosen.
Behaviour-based approaches have become the dominant methodologies for designing control schemes for robot navigation. One of them is the dynamical systems approach, which is based on the mathematical theory of nonlinear dynamics. It provides a sound theoretical framework for both behaviour design and behaviour coordination. In the work presented in this thesis, the approach has been used for the first time to construct a navigation system for realistic tasks in large-scale real-world environments. In particular, the coordination scheme was exploited in order to combine continuous sensory signals and discrete events for decision-making processes. In addition, this coordination framework assures a continuous control signal at all times and permits the robot to deal with unexpected events.
In order to act in the real world, the control system makes use of representations of the environment. On the one hand, local geometrical representations parameterise the behaviours. On the other hand, context information and a predefined world model enable the coordination scheme to switch between subtasks. These representations constitute symbols, on the basis of which the system makes decisions. These symbols must be anchored in the real world, requiring the capability of relating to sensory data. A general framework for these anchoring processes in hybrid deliberative architectures is proposed. A distinction of anchoring on two different levels of abstraction reduces the complexity of the problem significantly.
A topological map was chosen as a world model. Through the advanced behaviour coordination system and a proper choice of representations, the complexity of this map can be kept at a minimum. This allows the development of simple algorithms for automatic map acquisition. When the robot is guided through the environment, it creates such a map of the area online. The resulting map is precise enough for subsequent use in navigation.
In addition, initial studies on navigation in human-robot interaction tasks are presented. These kinds of tasks pose different constraints on a robotic system than, for example, delivery missions. It is shown that the methods developed in this thesis can easily be applied to interactive navigation. Results show a personal robot maintaining formations with a group of persons during social interaction.
Keywords: mobile robots, robot navigation, indoor navigation, behaviour based robotics, hybrid deliberative systems, dynamical systems approach, topological maps, symbol anchoring, autonomous mapping, human-robot interaction
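Illustrative aside: in the dynamical systems approach mentioned above, behaviours are designed as attractors and repellors of a heading-direction variable. A minimal sketch (the gain values and the repellor form are illustrative choices, not taken from the thesis):

```python
import numpy as np

def heading_dynamics(phi, target, obstacle=None, lam=2.0, beta=4.0):
    """One evaluation of a toy heading behaviour: the target direction acts
    as an attractor of the heading phi, an obstacle direction as a local
    repellor. Illustrative gains only."""
    dphi = -lam * np.sin(phi - target)            # attractor term
    if obstacle is not None:
        diff = phi - obstacle
        dphi += beta * diff * np.exp(-diff**2)    # repellor, active only nearby
    return dphi

# Euler integration: the heading settles onto the target direction.
phi, target = 2.0, 0.5
for _ in range(200):
    phi += 0.05 * heading_dynamics(phi, target)
print(round(phi, 3))   # prints 0.5
```

Behaviour coordination then amounts to switching or blending such vector fields while the state evolves continuously, which is what keeps the control signal continuous at all times.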
Nielsen, Casper Falkenberg. "A robust framework for medical image segmentation through adaptable class-specific representation". Thesis, Middlesex University, 2002. http://eprints.mdx.ac.uk/13507/.
Laforgue, Pierre. "Deep kernel representation learning for complex data and reliability issues". Thesis, Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAT006.
The first part of this thesis aims at exploring deep kernel architectures for complex data. One of the known keys to the success of deep learning algorithms is the ability of neural networks to extract meaningful internal representations. However, the theoretical understanding of why these compositional architectures are so successful remains limited, and deep approaches are almost restricted to vectorial data. On the other hand, kernel methods provide functional spaces whose geometry is well studied and understood. Their complexity can be easily controlled by the choice of kernel or penalization. In addition, vector-valued kernel methods can be used to predict kernelized data. This allows predictions in complex structured spaces, as soon as a kernel can be defined on them. The deep kernel architecture we propose consists in replacing the basic neural mappings by functions from vector-valued Reproducing Kernel Hilbert Spaces (vv-RKHSs). Although very different at first glance, the two functional spaces are actually very similar, and differ only in the order in which linear/nonlinear functions are applied. Apart from gaining understanding and theoretical control on layers, considering kernel mappings allows for dealing with structured data, both in input and output, broadening the applicability scope of networks. We finally expose works that ensure a finite-dimensional parametrization of the model, opening the door to efficient optimization procedures for a wide range of losses.
The second part of this thesis investigates alternatives to the sample mean as substitutes for the expectation in the Empirical Risk Minimization (ERM) paradigm. Indeed, ERM implicitly assumes that the empirical mean is a good estimate of the expectation. However, in many practical use cases (e.g. heavy-tailed distributions, presence of outliers, biased training data), this is not the case. The Median-of-Means (MoM) is a robust mean estimator constructed as follows: the original dataset is split into disjoint blocks, empirical means on each block are computed, and the median of these means is finally returned. We propose two extensions of MoM, to randomized blocks and/or U-statistics, with provable guarantees. By construction, MoM-like estimators exhibit interesting robustness properties, which we further exploit in the design of robust learning strategies. The (randomized) MoM minimizers are shown to be robust to outliers, while the MoM tournament procedure is extended to the pairwise setting. We close this thesis by proposing an ERM procedure tailored to the sample bias issue. If training data comes from several biased samples, blindly computing the empirical mean yields a biased estimate of the risk. Alternatively, from the knowledge of the biasing functions, it is possible to reweight observations so as to build an unbiased estimate of the test distribution. We then derive non-asymptotic guarantees for the minimizers of the debiased risk estimate thus created. The soundness of the approach is also empirically supported.
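Illustrative aside: the Median-of-Means construction described above fits in a few lines and is easy to compare against the plain sample mean under gross outliers (toy data; the thesis's randomized-block and U-statistic extensions are not shown):

```python
import numpy as np

def median_of_means(x, n_blocks=10, rng=None):
    """Median-of-Means: split the sample into disjoint blocks, average each
    block, and return the median of the block means."""
    rng = np.random.default_rng(rng)
    idx = rng.permutation(len(x))
    blocks = np.array_split(idx, n_blocks)
    return float(np.median([x[b].mean() for b in blocks]))

rng = np.random.default_rng(7)
x = rng.normal(0.0, 1.0, 1000)   # true mean is 0
x[:3] = 1e6                      # a few gross outliers

# The sample mean is destroyed; MoM stays near the true mean because the
# outliers can corrupt at most 3 of the 10 blocks.
print(abs(x.mean()), abs(median_of_means(x, rng=0)))
```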
Wolter, Diedrich. "Spatial representation and reasoning for robot mapping: a shape-based approach". Berlin: Springer, 2008. http://www.myilibrary.com?id=186085.
Dondrup, Christian. "Human-robot spatial interaction using probabilistic qualitative representations". Thesis, University of Lincoln, 2016. http://eprints.lincoln.ac.uk/28665/.
Pełny tekst źródłaOliveira, José Ricardo Marques de. "World representation for an autonomous driving robot". Master's thesis, Universidade de Aveiro, 2009. http://hdl.handle.net/10773/2121.
Autonomous driving is the movement of an agent, robot or vehicle, from some point in space to another, without any human intervention, in order to achieve predetermined goals. To drive autonomously using trajectory planning, it is vital to have an abstraction of the knowledge about the world, be it a priori knowledge or information that the agent acquires while driving.
For this, we developed a system capable of abstractly representing not only the track of the Autonomous Driving Competition of the Portuguese Robotics Open, but also tracks with similar characteristics. The system was developed in a flexible and modular manner, in order to allow the addition of new elements to the stated track and easy expansion to support other types of tracks and circuits. The conclusion was that the most appropriate representation model for the system was a hybrid one: at the global level the representation is topological, and at the local level it is metrical. In other words, the track is divided into sections, which form the basis of the topological representation, and each section is then mapped internally using a metrical representation. By integrating the work of this dissertation into the global system, we aimed to achieve an autonomous driving system capable of short- and medium-term planning, improving the performance of the ROTA project robots over the previous solution, which was based on a reactive system with some memory and a notion of state, but without trajectory planning.
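Illustrative aside: a hybrid topological/metric track model of the kind described above can be sketched as a graph of sections, each carrying local metric data; planning happens on the topological layer. Section names and lengths below are invented:

```python
from collections import deque

# Hypothetical track sections as a topological graph; each node also carries
# a local metric attribute (here, just a section length in metres).
sections = {
    "start":     {"next": ["straight1"],          "length": 1.0},
    "straight1": {"next": ["curve1", "crossing"], "length": 4.0},
    "curve1":    {"next": ["straight2"],          "length": 2.5},
    "crossing":  {"next": ["straight2"],          "length": 1.5},
    "straight2": {"next": ["finish"],             "length": 4.0},
    "finish":    {"next": [],                     "length": 1.0},
}

def plan(start, goal):
    """Breadth-first search over the topological layer; the metric layer
    would then be used to drive within each returned section."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in sections[path[-1]]["next"]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

route = plan("start", "finish")
print(route, sum(sections[s]["length"] for s in route))
```

Keeping the global layer this coarse is what makes online map acquisition and medium-term planning cheap.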
Ko, W. Y. Albert, and 高永賢. "The design of a representation and analysis method for modular self-reconfigurable robots". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2003. http://hub.hku.hk/bib/B29513807.
Li, Wing Yin (Cherry). "Narrative and representation in Robert Schumann's Waldszenen, Op. 82". Thesis, University of British Columbia, 2009. http://hdl.handle.net/2429/11994.
Nguyen, Dong Hai Phuong. "Toward Robots with Peripersonal Space Representation for Adaptive Behaviors". Doctoral thesis, Università degli studi di Genova, 2019. http://hdl.handle.net/11567/942472.
Mesgarani, Nima. "Representation of speech in the primary auditory cortex and its implications for robust speech processing". College Park, Md.: University of Maryland, 2008. http://hdl.handle.net/1903/8586.
Thesis research directed by: Dept. of Electrical and Computer Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
Schlenoff, Craig. "Inferring intentions through state representations in cooperative human-robot environments". Thesis, Dijon, 2014. http://www.theses.fr/2014DIJOS064/document.
Humans and robots working safely and seamlessly together in a cooperative environment is one of the future goals of the robotics community. When humans and robots can work together in the same space, a whole class of tasks becomes amenable to automation, ranging from collaborative assembly to parts and material handling to delivery. Proposed standards exist for collaborative human-robot safety, but they focus on limiting the approach distances and contact forces between the human and the robot. These standards focus on reactive processes based only on current sensor readings. They do not consider future states or task-relevant information. A key enabler for human-robot safety in cooperative environments involves the field of intention recognition, in which the robot attempts to understand the intention of an agent (the human) by recognizing some or all of their actions to help predict the human's future actions. We present an approach to inferring the intention of an agent in the environment via the recognition and representation of state information. This approach to intention recognition is different than many ontology-based intention recognition approaches in the literature, as they primarily focus on activity (as opposed to state) recognition and then use a form of abduction to provide explanations for observations. We infer detailed state relationships using observations based on Region Connection Calculus 8 (RCC-8) and then infer the overall state relationships that are true at a given time. Once a sequence of state relationships has been determined, we use a Bayesian approach to associate those states with likely overall intentions to determine the next possible action (and associated state) that is likely to occur. We compare the output of the Intention Recognition Algorithm to those of an experiment involving human subjects attempting to recognize the same intentions in a manufacturing kitting domain.
The results show that the Intention Recognition Algorithm, in almost every case, performed as well as, if not better than, a human performing the same activity.
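The Bayesian step described in this abstract can be sketched generically (this is not the thesis's algorithm; the function name, the conditional-independence assumption, and the toy state labels are all illustrative): given a sequence of observed state relations, score each candidate intention by its prior times the likelihood of the observations, then normalize.

```python
def infer_intention(observed_states, priors, likelihoods):
    """Posterior over intentions given a sequence of observed state
    relations, assuming observations are conditionally independent
    given the intention (a naive-Bayes-style simplification)."""
    scores = {}
    for intention, prior in priors.items():
        p = prior
        for state in observed_states:
            # small floor for states never seen under this intention
            p *= likelihoods[intention].get(state, 1e-6)
        scores[intention] = p
    total = sum(scores.values())
    return {i: p / total for i, p in scores.items()}
```

Repeated observations consistent with one intention sharpen its posterior, which is what lets the predicted next action (and its associated state) be read off the most probable intention.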
Hafidi, Hakim. "Robust machine learning for Graphs/Networks". Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT004.
This thesis addresses advancements in graph representation learning, focusing on the challenges and opportunities presented by Graph Neural Networks (GNNs). It highlights the significance of graphs in representing complex systems and the necessity of learning node embeddings that capture both node features and graph structure. The study identifies key issues in GNNs, such as their dependence on high-quality labeled data, inconsistent performance across various datasets, and susceptibility to adversarial attacks. To tackle these challenges, the thesis introduces several innovative approaches. Firstly, it employs contrastive learning for node representation, enabling self-supervised learning that reduces reliance on labeled data. Secondly, a Bayesian-based classifier is proposed for node classification, which considers the graph's structure to enhance accuracy. Lastly, the thesis addresses the vulnerability of GNNs to adversarial attacks by assessing the robustness of the proposed classifier and introducing effective defense mechanisms. These contributions aim to improve both the performance and resilience of GNNs in graph representation learning.
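The contrastive self-supervised objective mentioned in this abstract can be illustrated with a generic InfoNCE-style loss (an assumption for illustration, not the thesis's exact formulation): embeddings of the same node under two graph views are pulled together while all other pairs act as negatives.

```python
import numpy as np

def info_nce(z1, z2, temperature=0.5):
    """Generic InfoNCE loss between two view matrices (n_nodes x dim).
    Row i of z1 and row i of z2 are a positive pair; every other
    cross-view pair serves as a negative."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature             # cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # positives on the diagonal
```

In a GNN setting, `z1` and `z2` would come from encoding two stochastic augmentations of the graph (e.g. edge dropout, feature masking); minimizing this loss trains the encoder without any node labels.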
McNeill, Dean K. "Adaptive visual representations for autonomous mobile robots using competitive learning algorithms". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp02/NQ35045.pdf.
Glover, Arren John. "Developing grounded representations for robots through the principles of sensorimotor coordination". Thesis, Queensland University of Technology, 2014. https://eprints.qut.edu.au/71763/1/Arren_Glover_Thesis.pdf.
Wallgrün, Jan Oliver. "Hierarchical Voronoi graphs spatial representation and reasoning for mobile robots". Berlin Heidelberg Springer, 2008. http://d-nb.info/99728210X/04.
Cosgun, Akansel. "Navigation behavior design and representations for a people aware mobile robot system". Diss., Georgia Institute of Technology, 2016. http://hdl.handle.net/1853/54944.
Sjöö, Kristoffer. "Functional understanding of space : Representing spatial knowledge using concepts grounded in an agent's purpose". Doctoral thesis, KTH, Datorseende och robotik, CVAP, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-48400.
Wu, Jianxin. "Visual place categorization". Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/29784.
Committee Chair: Rehg, James M.; Committee Member: Christensen, Henrik; Committee Member: Dellaert, Frank; Committee Member: Essa, Irfan; Committee Member: Malik, Jitendra. Part of the SMARTech Electronic Thesis and Dissertation Collection.
Ivan, Vladimir. "Topology based representations for motion synthesis and planning". Thesis, University of Edinburgh, 2015. http://hdl.handle.net/1842/10520.
Sundvall, Denise, and Sara Harila. "Rise of The Robots : En innehållsanalys om representation av virtuella influencers". Thesis, Luleå tekniska universitet, Institutionen för konst, kommunikation och lärande, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-73567.
Huang, Di. "Robust face recognition based on three dimensional data". Phd thesis, Ecole Centrale de Lyon, 2011. http://tel.archives-ouvertes.fr/tel-00693158.
Liemhetcharat, Somchaya. "Representation, Planning, and Learning of Dynamic Ad Hoc Robot Teams". Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/304.
Tan, Chee Khoon. "Fuzzy spatial representation and sensory integration for mobile robot task". Thesis, University of Nottingham, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.409387.
Twardon, Lukas [Verfasser]. "Bimanual Interaction with Clothes. Topology, Geometry, and Policy Representations in Robots / Lukas Twardon". Bielefeld : Universitätsbibliothek Bielefeld, 2019. http://d-nb.info/1200097610/34.
Wolter, Diedrich. "Spatial representation and reasoning for robot mapping a shape-based approach". Berlin Heidelberg Springer, 2006. http://d-nb.info/989966941/34.
Garg, Sourav. "Robust visual place recognition under simultaneous variations in viewpoint and appearance". Thesis, Queensland University of Technology, 2019. https://eprints.qut.edu.au/134410/1/Sourav%20Garg%20Thesis.pdf.
Vasudevan, Shrihari. "Spatial cognition for mobile robots : a hierarchical probabilistic concept-oriented representation of space". Zürich : ETH, 2008. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=17612.
Brook, James. "Robert Wilson and an aesthetic of human behaviour in the performing body". Thesis, University of Gloucestershire, 2013. http://eprints.glos.ac.uk/2836/.
Dantam, Neil Thomas. "A linguistic method for robot verification programming and control". Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/54284.
Yuan, Fang [Verfasser]. "Interactive acquisition of spatial representations with mobile robots / Fang Yuan. Technische Fakultät - AG Angewandte Informatik". Bielefeld : Universitätsbibliothek Bielefeld, Hochschulschriften, 2011. http://d-nb.info/101799630X/34.
Harris, John Steven. "Of Rauschenberg, policy and representation at the Vancouver Art Gallery : a partial history 1966-1983". Thesis, University of British Columbia, 1985. http://hdl.handle.net/2429/25419.
Pełny tekst źródłaArts, Faculty of
Art History, Visual Art and Theory, Department of
Graduate
Topp, Elin Anna. "Human-Robot Interaction and Mapping with a Service Robot : Human Augmented Mapping". Doctoral thesis, Stockholm : School of computer science and communication, KTH, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4899.
Pełny tekst źródłaMontavon, Grégoire Verfasser], Klaus-Robert [Akademischer Betreuer] [Müller, Yoshua [Akademischer Betreuer] Bengio i Léon [Akademischer Betreuer] Bottou. "On layer-wise representations in deep neural networks / Grégoire Montavon. Gutachter: Klaus-Robert Müller ; Yoshua Bengio ; Léon Bottou. Betreuer: Klaus-Robert Müller". Berlin : Technische Universität Berlin, 2013. http://d-nb.info/1065665458/34.
Wolter, Diedrich [Verfasser]. "Spatial representation and reasoning for robot mapping : a shape-based approach / Diedrich Wolter". Berlin, 2008. http://d-nb.info/989966941/34.
Dayoub, Feras. "An adaptive spherical view representation for mobile robot navigation in non-stationary environments". Thesis, University of Lincoln, 2011. https://eprints.qut.edu.au/105983/1/Dayoub_PhD_Thesis_2011.pdf.
Stening, John. "Exploring Internal Simulations of Perception in a Mobile Robot using Abstractions". Thesis, University of Skövde, School of Humanities and Informatics, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-907.
This thesis investigates the possibilities of explaining higher cognition as internal simulations of perception and action at an abstract level. Relatively recent findings in both neuroscience and psychology indicate that both perception and action can be internally simulated by activating sensory and motor areas in the brain in the absence of sensory input and without any resulting overt behavior. An investigation was conducted in order to test the hypothesis that perception can be simulated in a mobile robot using abstractions. The results from this investigation showed that this was indeed the case, but that the accuracy was limited. The simulations allowed the robot to anticipate long chains of future situations but were not accurate enough to support any overt behavior. To further improve the results there is a need for better training techniques and/or a more complex architecture.
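The core idea of internal simulation, chaining predicted perceptions without acting, can be sketched generically (an illustration only, not the thesis's architecture; the names `simulate_forward`, `policy`, and `model` are hypothetical): a forward model predicts the next abstract state from the current state and a chosen action, and the prediction is fed back in as if it were sensed.

```python
def simulate_forward(state, policy, model, steps):
    """Roll out an internal simulation: at each step select an action
    from the current (simulated) state, then predict the next state
    with the forward model instead of sensing it."""
    trajectory = [state]
    for _ in range(steps):
        action = policy(state)
        state = model(state, action)  # predicted, never executed
        trajectory.append(state)
    return trajectory
```

The abstract's finding that accuracy was limited corresponds to prediction error in `model` compounding over the rollout: long anticipation chains remain possible, but their endpoints drift too far from reality to drive overt behavior.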