Selected scientific literature on the topic "Representation learning (artificial intelligence)"

Cite a source in APA, MLA, Chicago, Harvard, and many other citation styles.

Browse the list of current articles, books, theses, conference papers, and other scholarly sources on the topic "Representation learning (artificial intelligence)".

Next to every source in the reference list there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf and read its abstract online, when one is available in the metadata.

Journal articles on the topic "Representation learning (artificial intelligence)":

1

Hamilton, William L. "Graph Representation Learning". Synthesis Lectures on Artificial Intelligence and Machine Learning 14, no. 3 (September 15, 2020): 1–159. http://dx.doi.org/10.2200/s01045ed1v01y202009aim046.

Full text
2

Konidaris, George, Leslie Pack Kaelbling, and Tomas Lozano-Perez. "From Skills to Symbols: Learning Symbolic Representations for Abstract High-Level Planning". Journal of Artificial Intelligence Research 61 (January 31, 2018): 215–89. http://dx.doi.org/10.1613/jair.5575.

Full text
Abstract:
We consider the problem of constructing abstract representations for planning in high-dimensional, continuous environments. We assume an agent equipped with a collection of high-level actions, and construct representations provably capable of evaluating plans composed of sequences of those actions. We first consider the deterministic planning case, and show that the relevant computation involves set operations performed over sets of states. We define the specific collection of sets that is necessary and sufficient for planning, and use them to construct a grounded abstract symbolic representation that is provably suitable for deterministic planning. The resulting representation can be expressed in PDDL, a canonical high-level planning domain language; we construct such a representation for the Playroom domain and solve it in milliseconds using an off-the-shelf planner. We then consider probabilistic planning, which we show requires generalizing from sets of states to distributions over states. We identify the specific distributions required for planning, and use them to construct a grounded abstract symbolic representation that correctly estimates the expected reward and probability of success of any plan. In addition, we show that learning the relevant probability distributions corresponds to specific instances of probabilistic density estimation and probabilistic classification. We construct an agent that autonomously learns the correct abstract representation of a computer game domain, and rapidly solves it. Finally, we apply these techniques to create a physical robot system that autonomously learns its own symbolic representation of a mobile manipulation task directly from sensorimotor data---point clouds, map locations, and joint angles---and then plans using that representation. Together, these results establish a principled link between high-level actions and abstract representations, a concrete theoretical foundation for constructing abstract representations with provable properties, and a practical mechanism for autonomously learning abstract high-level representations.
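As a reading aid for the abstract above, here is a minimal, hypothetical sketch (not the authors' code) of the kind of set-based plan evaluation it describes: each high-level option is summarized by an initiation set and an effect set, and plan executability reduces to subset tests. It assumes deterministic "subgoal" options whose outcomes do not depend on the exact start state; all names are illustrative.

```python
# Toy illustration (not the paper's code): checking whether a plan of
# high-level options is executable using only set operations over states.
# Assumes deterministic "subgoal" options, i.e. an option's effect set does
# not depend on where inside its initiation set it was started.
from dataclasses import dataclass

@dataclass(frozen=True)
class Option:
    name: str
    initiation: frozenset   # states from which the option can be executed
    effect: frozenset       # states the option can terminate in

def plan_is_executable(start_states: frozenset, plan) -> bool:
    """A plan is executable iff, at every step, every currently reachable
    state lies inside the next option's initiation set."""
    reachable = start_states
    for option in plan:
        if not reachable <= option.initiation:     # subset test
            return False
        reachable = option.effect                  # set of possible outcomes
    return True

# Tiny abstract domain with integer state labels.
go_to_door = Option("go_to_door", frozenset({0, 1, 2}), frozenset({3}))
open_door  = Option("open_door",  frozenset({3}),       frozenset({4}))
walk_out   = Option("walk_out",   frozenset({4}),       frozenset({5}))

print(plan_is_executable(frozenset({0, 1}), [go_to_door, open_door, walk_out]))  # True
print(plan_is_executable(frozenset({0, 5}), [go_to_door, open_door]))            # False
```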
3

Rezayi, Saed. "Learning Better Representations Using Auxiliary Knowledge". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (June 26, 2023): 16133–34. http://dx.doi.org/10.1609/aaai.v37i13.26927.

Full text
Abstract:
Representation Learning is the core of Machine Learning and Artificial Intelligence, as it summarizes input data points into low-dimensional vectors. These low-dimensional vectors should be accurate portrayals of the input data, so it is crucial to find the most effective and robust representation possible for a given input, since the performance of the ML task depends on the resulting representations. In this summary, we discuss an approach to augmenting representation learning that relies on external knowledge. We briefly describe the shortcomings of existing techniques and explain how an auxiliary knowledge source can yield improved representations.
4

FROMMBERGER, LUTZ. "LEARNING TO BEHAVE IN SPACE: A QUALITATIVE SPATIAL REPRESENTATION FOR ROBOT NAVIGATION WITH REINFORCEMENT LEARNING". International Journal on Artificial Intelligence Tools 17, no. 03 (June 2008): 465–82. http://dx.doi.org/10.1142/s021821300800400x.

Full text
Abstract:
The representation of the surrounding world plays an important role in robot navigation, especially when reinforcement learning is applied. This work uses a qualitative abstraction mechanism to create a representation of space consisting of the circular order of detected landmarks and the relative position of walls towards the agent's moving direction. The use of this representation does not only empower the agent to learn a certain goal-directed navigation strategy faster compared to metrical representations, but also facilitates reusing structural knowledge of the world at different locations within the same environment. Acquired policies are also applicable in scenarios with different metrics and corridor angles. Furthermore, gained structural knowledge can be separated, leading to a generally sensible navigation behavior that can be transferred to environments lacking landmark information and/or totally unknown environments.
5

Haghir Chehreghani, Morteza, and Mostafa Haghir Chehreghani. "Learning representations from dendrograms". Machine Learning 109, no. 9-10 (August 16, 2020): 1779–802. http://dx.doi.org/10.1007/s10994-020-05895-3.

Full text
Abstract:
Abstract We propose unsupervised representation learning and feature extraction from dendrograms. The commonly used Minimax distance measures correspond to building a dendrogram with single linkage criterion, with defining specific forms of a level function and a distance function over that. Therefore, we extend this method to arbitrary dendrograms. We develop a generalized framework wherein different distance measures and representations can be inferred from different types of dendrograms, level functions and distance functions. Via an appropriate embedding, we compute a vector-based representation of the inferred distances, in order to enable many numerical machine learning algorithms to employ such distances. Then, to address the model selection problem, we study the aggregation of different dendrogram-based distances respectively in solution space and in representation space in the spirit of deep representations. In the first approach, for example for the clustering problem, we build a graph with positive and negative edge weights according to the consistency of the clustering labels of different objects among different solutions, in the context of ensemble methods. Then, we use an efficient variant of correlation clustering to produce the final clusters. In the second approach, we investigate the combination of different distances and features sequentially in the spirit of multi-layered architectures to obtain the final features. Finally, we demonstrate the effectiveness of our approach via several numerical studies.
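For readers of the abstract above, a small illustrative sketch of the Minimax distances it mentions (the distances induced by a single-linkage dendrogram): the distance between two points is the minimum over all connecting paths of the maximum edge weight along the path, computed here with a simple O(n³) closure. This is a generic computation, not the authors' framework.

```python
# Illustrative sketch (not the authors' code): pairwise Minimax distances,
# i.e. for each pair the minimum over all connecting paths of the maximum
# edge weight along the path. These are the distances induced by a
# single-linkage dendrogram, as mentioned in the abstract.
import numpy as np

def minimax_distances(D: np.ndarray) -> np.ndarray:
    """Min-max path closure of a symmetric pairwise distance matrix D."""
    M = D.astype(float).copy()
    n = M.shape[0]
    for k in range(n):  # Floyd-Warshall-style update with a (min, max) semiring
        M = np.minimum(M, np.maximum(M[:, k][:, None], M[k, :][None, :]))
    return M

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
M = minimax_distances(D)
print(M.shape, (M <= D + 1e-12).all())  # minimax distances never exceed base distances
```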
6

Saitta, Lorenza. "Representation change in machine learning". AI Communications 9, no. 1 (1996): 14–20. http://dx.doi.org/10.3233/aic-1996-9102.

Full text
7

Rives, Alexander, Joshua Meier, Tom Sercu, Siddharth Goyal, Zeming Lin, Jason Liu, Demi Guo et al. "Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences". Proceedings of the National Academy of Sciences 118, no. 15 (April 5, 2021): e2016239118. http://dx.doi.org/10.1073/pnas.2016239118.

Full text
Abstract:
In the field of artificial intelligence, a combination of scale in data and model capacity enabled by unsupervised learning has led to major advances in representation learning and statistical generation. In the life sciences, the anticipated growth of sequencing promises unprecedented data on natural sequence diversity. Protein language modeling at the scale of evolution is a logical step toward predictive and generative artificial intelligence for biology. To this end, we use unsupervised learning to train a deep contextual language model on 86 billion amino acids across 250 million protein sequences spanning evolutionary diversity. The resulting model contains information about biological properties in its representations. The representations are learned from sequence data alone. The learned representation space has a multiscale organization reflecting structure from the level of biochemical properties of amino acids to remote homology of proteins. Information about secondary and tertiary structure is encoded in the representations and can be identified by linear projections. Representation learning produces features that generalize across a range of applications, enabling state-of-the-art supervised prediction of mutational effect and secondary structure and improving state-of-the-art features for long-range contact prediction.
8

Kang, Zhao, Xiao Lu, Jian Liang, Kun Bai, and Zenglin Xu. "Relation-Guided Representation Learning". Neural Networks 131 (November 2020): 93–102. http://dx.doi.org/10.1016/j.neunet.2020.07.014.

Full text
9

Prorok, Máté. "Applications of artificial intelligence systems". Deliberationes 15, Special Issue (2022): 76–88. http://dx.doi.org/10.54230/delib.2022.k.sz.76.

Full text
Abstract:
Nowadays artificial intelligence is a rapidly developing technology that encompasses the development of intelligent algorithms and machines capable of learning, so it is relevant and timely to examine the topic. These artificial intelligence algorithms and machines have the ability to perform tasks that traditionally relied on human intelligence. This study provides an in-depth exploration of artificial intelligence systems and their key components. It examines various aspects of artificial intelligence systems, including natural language processing, machine learning, detection and pattern recognition, and knowledge representation, as well as other forms of artificial intelligence systems. Natural language processing enables machines to understand and generate human language, while machine learning empowers systems to learn from data and improve their performance over time. Detection and pattern recognition allow artificial intelligence systems to interpret and understand complex sensory inputs, while knowledge representation enables the storage and utilization of information. Furthermore, other forms of artificial intelligence systems are also discussed. This study sheds light on the fundamental elements of artificial intelligence systems, paving the way for their practical applications and advancements.
10

Mazoure, Bogdan, Thang Doan, Tianyu Li, Vladimir Makarenkov, Joelle Pineau, Doina Precup, and Guillaume Rabusseau. "Low-Rank Representation of Reinforcement Learning Policies". Journal of Artificial Intelligence Research 75 (October 27, 2022): 597–636. http://dx.doi.org/10.1613/jair.1.13854.

Full text
Abstract:
We propose a general framework for policy representation for reinforcement learning tasks. This framework involves finding a low-dimensional embedding of the policy on a reproducing kernel Hilbert space (RKHS). The usage of RKHS based methods allows us to derive strong theoretical guarantees on the expected return of the reconstructed policy. Such guarantees are typically lacking in black-box models, but are very desirable in tasks requiring stability and convergence guarantees. We conduct several experiments on classic RL domains. The results confirm that the policies can be robustly represented in a low-dimensional space while the embedded policy incurs almost no decrease in returns.
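A hedged illustration of the general idea in the abstract above, under simplifying assumptions: each policy is described by its action probabilities on a fixed set of probe states, and those descriptions are embedded in a low-dimensional space with kernel PCA. This is not the paper's RKHS construction; all names and parameters are illustrative.

```python
# Hedged illustration, not the paper's method: embed policies in a low-
# dimensional space by (1) describing each policy through its action
# probabilities on a fixed set of probe states and (2) applying kernel PCA
# with an RBF kernel to those descriptions.
import numpy as np

def policy_features(policy, probe_states):
    """Concatenate action-probability vectors over probe states."""
    return np.concatenate([policy(s) for s in probe_states])

def kernel_pca(F, n_components=2, gamma=1.0):
    """Standard kernel PCA on feature rows F (n_policies x d)."""
    sq = ((F[:, None, :] - F[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    Kc = H @ K @ H
    vals, vecs = np.linalg.eigh(Kc)              # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

# Toy softmax policies over 3 actions, parameterized by a temperature.
rng = np.random.default_rng(1)
probe_states = rng.normal(size=(10, 4))

def make_policy(W, temp):
    def pi(s):
        logits = W @ s / temp
        e = np.exp(logits - logits.max())
        return e / e.sum()
    return pi

policies = [make_policy(rng.normal(size=(3, 4)), t) for t in (0.5, 1.0, 2.0, 4.0)]
F = np.stack([policy_features(p, probe_states) for p in policies])
print(kernel_pca(F, n_components=2).shape)  # (4, 2)
```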

Theses on the topic "Representation learning (artificial intelligence)":

1

Li, Hao. "Towards Fast and Efficient Representation Learning". Thesis, University of Maryland, College Park, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10845690.

Full text
Abstract:

The success of deep learning and convolutional neural networks in many fields is accompanied by a significant increase in computation cost. With increasing model complexity and the pervasive use of deep neural networks, there is a surge of interest in fast and efficient model training and inference on both cloud and embedded devices. Meanwhile, understanding the reasons for trainability and generalization is fundamental for further development. This dissertation explores approaches to fast and efficient representation learning with a better understanding of trainability and generalization. In particular, we ask the following questions and provide our solutions: 1) How can we reduce the computation cost for fast inference? 2) How can we train low-precision models on resource-constrained devices? 3) What does the loss surface look like for neural nets, and how does it affect generalization?

To reduce the computation cost for fast inference, we propose to prune filters from CNNs that are identified as having a small effect on the prediction accuracy. By removing filters with small norms together with their connected feature maps, the computation cost can be reduced accordingly without using special software or hardware. We show that simple filter pruning approach can reduce the inference cost while regaining close to the original accuracy by retraining the networks.
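A minimal sketch of norm-based filter pruning as summarized in the paragraph above, assuming a convolution-weight layout of (out_channels, in_channels, kH, kW); the keep ratio and names are illustrative, and retraining is assumed to happen elsewhere.

```python
# Minimal sketch (assumed layout: out_channels x in_channels x kH x kW) of
# L1-norm filter pruning: rank filters by the L1 norm of their weights and
# keep only the strongest fraction. Retraining after pruning is assumed to
# happen elsewhere.
import numpy as np

def prune_filters(conv_weight: np.ndarray, keep_ratio: float = 0.7):
    """Return the pruned weight tensor and the indices of the kept filters."""
    norms = np.abs(conv_weight).reshape(conv_weight.shape[0], -1).sum(axis=1)
    n_keep = max(1, int(round(keep_ratio * conv_weight.shape[0])))
    keep = np.sort(np.argsort(norms)[::-1][:n_keep])   # strongest filters, original order
    return conv_weight[keep], keep

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 32, 3, 3))          # 64 filters
W_pruned, kept = prune_filters(W, keep_ratio=0.5)
print(W_pruned.shape, kept[:5])              # (32, 32, 3, 3) and kept filter indices
# The feature maps produced by removed filters (and the corresponding input
# channels of the next layer) would be dropped as well.
```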

To further reduce the inference cost, quantizing model parameters with low-precision representations has shown significant speedup, especially for edge devices that have limited computing resources, memory capacity, and power consumption. To enable on-device learning on lower-power systems, removing the dependency of full-precision model during training is the key challenge. We study various quantized training methods with the goal of understanding the differences in behavior, and reasons for success or failure. We address the issue of why algorithms that maintain floating-point representations work so well, while fully quantized training methods stall before training is complete. We show that training algorithms that exploit high-precision representations have an important greedy search phase that purely quantized training methods lack, which explains the difficulty of training using low-precision arithmetic.

Finally, we explore the structure of neural loss functions, and the effect of loss landscapes on generalization, using a range of visualization methods. We introduce a simple filter normalization method that helps us visualize loss function curvature, and make meaningful side-by-side comparisons between loss functions. The sharpness of minimizers correlates well with generalization error when this visualization is used. Then, using a variety of visualizations, we explore how training hyper-parameters affect the shape of minimizers, and how network architecture affects the loss landscape.

2

Denize, Julien. "Self-supervised representation learning and applications to image and video analysis". Electronic Thesis or Diss., Normandie, 2023. http://www.theses.fr/2023NORMIR37.

Full text
Abstract:
In this thesis, we develop approaches to perform self-supervised learning for image and video analysis. Self-supervised representation learning allows pretraining neural networks to learn general concepts without labels before specializing them in downstream tasks faster and with few annotations. We present three contributions to self-supervised image and video representation learning. First, we introduce the theoretical paradigm of soft contrastive learning and its practical implementation called Similarity Contrastive Estimation (SCE), connecting contrastive and relational learning for image representation. Second, SCE is extended to global temporal video representation learning. Lastly, we propose COMEDIAN, a pipeline for local-temporal video representation learning for transformers. These contributions achieved state-of-the-art results on multiple benchmarks and led to several published academic and technical contributions.
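As a rough illustration of the soft contrastive idea mentioned in the abstract above (not the exact SCE objective), the sketch below builds a target distribution that mixes a one-hot positive with a relational similarity distribution and applies a cross-entropy loss; in practice the relational target would come from a separate (e.g. momentum) encoder.

```python
# Hedged sketch of a soft contrastive objective in the spirit described above
# (not the exact SCE formulation): the target for each anchor mixes a one-hot
# positive with a relational distribution over the other instances.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def soft_contrastive_loss(z1, z2, temperature=0.1, lam=0.5):
    """z1, z2: L2-normalized embeddings of two views, shape (n, d)."""
    n = z1.shape[0]
    sim = z1 @ z2.T / temperature                 # anchor vs. other-view similarities
    one_hot = np.eye(n)
    relational = softmax(sim, axis=1)             # stand-in for a relational target
    target = lam * one_hot + (1.0 - lam) * relational
    log_probs = np.log(softmax(sim, axis=1) + 1e-12)
    return -(target * log_probs).sum(axis=1).mean()

rng = np.random.default_rng(0)
a = rng.normal(size=(8, 16))
b = a + 0.05 * rng.normal(size=(8, 16))           # second "view" of each instance
z1 = a / np.linalg.norm(a, axis=1, keepdims=True)
z2 = b / np.linalg.norm(b, axis=1, keepdims=True)
print(round(float(soft_contrastive_loss(z1, z2)), 4))
```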
3

Aboul-Enien, Hisham Abdel-Ghaffer. "Neural network learning and knowledge representation in a multi-agent system". Thesis, Imperial College London, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.252040.

Full text
4

Carvalho, Micael. "Deep representation spaces". Electronic Thesis or Diss., Sorbonne université, 2018. http://www.theses.fr/2018SORUS292.

Full text
Abstract:
In recent years, Deep Learning techniques have swept the state of the art of many applications of Machine Learning, becoming the new standard approach for them. The architectures issued from these techniques have been used for transfer learning, which extended the power of deep models to tasks that did not have enough data to fully train them from scratch. This thesis' subject of study is the representation spaces created by deep architectures. First, we study properties inherent to them, with particular interest in dimensionality redundancy and the precision of their features. Our findings reveal a strong degree of robustness, pointing the path to simple and powerful compression schemes. Then, we focus on refining these representations. We choose to adopt a cross-modal multi-task problem, and design a loss function capable of taking advantage of data coming from multiple modalities, while also taking into account different tasks associated with the same dataset. In order to correctly balance these losses, we also develop a new sampling scheme that only takes into account examples contributing to the learning phase, i.e. those having a positive loss. Finally, we test our approach on a large-scale dataset of cooking recipes and associated pictures. Our method achieves a 5-fold improvement over the state of the art, and we show that the multi-task aspect of our approach promotes a semantically meaningful organization of the representation space, allowing it to perform subtasks never seen during training, like ingredient exclusion and selection. The results we present in this thesis open many possibilities, including feature compression for remote applications, robust multi-modal and multi-task learning, and feature space refinement. For the cooking application, in particular, many of our findings are directly applicable in a real-world context, especially for the detection of allergens, finding alternative recipes due to dietary restrictions, and menu planning.
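The sampling scheme mentioned above, keeping only examples with a positive loss, can be illustrated with a generic triplet margin loss; this is a hedged sketch under assumed shapes and margin, not the thesis code.

```python
# Illustrative sketch (assumptions, not the thesis code): a triplet margin
# loss over a batch where only triplets with a strictly positive loss are
# kept when averaging, i.e. the examples that actually contribute to learning.
import numpy as np

def triplet_loss_positive_only(anchor, positive, negative, margin=0.2):
    d_pos = np.linalg.norm(anchor - positive, axis=1)
    d_neg = np.linalg.norm(anchor - negative, axis=1)
    losses = np.maximum(0.0, d_pos - d_neg + margin)
    active = losses > 0                      # keep only contributing triplets
    if not active.any():
        return 0.0, 0
    return losses[active].mean(), int(active.sum())

rng = np.random.default_rng(2)
a = rng.normal(size=(32, 64))
p = a + 0.1 * rng.normal(size=(32, 64))      # positives close to anchors
n = rng.normal(size=(32, 64))                # random negatives
loss, n_active = triplet_loss_positive_only(a, p, n)
print(round(float(loss), 4), "active triplets:", n_active)
```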
5

Newman-Griffis, Denis R. "Capturing Domain Semantics with Representation Learning: Applications to Health and Function". The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1587658607378958.

Full text
6

Cao, Xi Hang. "On Leveraging Representation Learning Techniques for Data Analytics in Biomedical Informatics". Diss., Temple University Libraries, 2019. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/586006.

Full text
Abstract:
Representation Learning is ubiquitous in state-of-the-art machine learning workflows, including data exploration/visualization, data preprocessing, data model learning, and model interpretation. However, the majority of newly proposed Representation Learning methods are more suitable for problems with a large amount of data. Applying these methods to problems with a limited amount of data may lead to unsatisfactory performance. Therefore, there is a need for developing Representation Learning methods which are tailored for problems with "small data", such as clinical and biomedical data analytics. In this dissertation, we describe our studies of tackling the challenging clinical and biomedical data analytics problem from four perspectives: data preprocessing, temporal data representation learning, output representation learning, and joint input-output representation learning. Data scaling is an important component in data preprocessing. The objective in data scaling is to scale/transform the raw features into reasonable ranges such that each feature of an instance will be equally exploited by the machine learning model. For example, in a credit fraud detection task, a machine learning model may utilize a person's credit score and annual income as features, but because the ranges of these two features are different, a machine learning model may consider one more heavily than another. In this dissertation, I thoroughly introduce the problem in data scaling and describe an approach for data scaling which can intrinsically handle the outlier problem and lead to better model prediction performance. Learning new representations for data in an unstandardized form is a common task in data analytics and data science applications. Usually, data come in a tabular form, namely, the data is represented by a table in which each row is the feature vector of an instance. However, it is also common that the data are not in this form; for example, texts, images, and video/audio records. In this dissertation, I describe the challenge of analyzing imperfect multivariate time series data in healthcare and biomedical research and show that the proposed method can learn a powerful representation to handle various imperfections and lead to an improvement in prediction performance. Learning output representations is a new aspect of Representation Learning, and its applications have shown promising results in complex tasks, including computer vision and recommendation systems. The main objective of an output representation algorithm is to explore the relationship among the target variables, such that a prediction model can efficiently exploit the similarities and potentially improve prediction performance. In this dissertation, I describe a learning framework which incorporates output representation learning into time-to-event estimation. In particular, the approach learns the model parameters and time vectors simultaneously. Experimental results not only show the effectiveness of this approach but also show its interpretability through the visualizations of the time vectors in 2-D space. Learning the input (feature) representation, output representation, and predictive modeling are closely related to each other. Therefore, it is a very natural extension of the state of the art to consider them together in a joint framework.
In this dissertation, I describe a large-margin ranking-based learning framework for time-to-event estimation with joint input embedding learning, output embedding learning, and model parameter learning. In the framework, I cast the functional learning problem to a kernel learning problem, and by adopting the theories in Multiple Kernel Learning, I propose an efficient optimization algorithm. Empirical results also show its effectiveness on several benchmark datasets.
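The data-scaling discussion above can be illustrated with a generic outlier-robust scaler based on the median and interquartile range; this is an assumption-laden stand-in, not the specific scaling method proposed in the dissertation.

```python
# Hedged illustration of outlier-robust data scaling (median / interquartile
# range), in the spirit of the scaling discussion above; a generic robust
# scaler, not the dissertation's method.
import numpy as np

def robust_scale(X: np.ndarray) -> np.ndarray:
    """Scale each feature by its median and interquartile range."""
    median = np.median(X, axis=0)
    q75, q25 = np.percentile(X, [75, 25], axis=0)
    iqr = np.where((q75 - q25) == 0, 1.0, q75 - q25)   # avoid division by zero
    return (X - median) / iqr

rng = np.random.default_rng(0)
income = rng.lognormal(mean=10, sigma=1, size=(1000, 1))    # heavy-tailed feature
score = rng.normal(loc=650, scale=50, size=(1000, 1))       # roughly Gaussian feature
X = np.hstack([income, score])
Xs = robust_scale(X)
print(np.median(Xs, axis=0).round(3))   # approximately [0, 0]
```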
7

Panesar, Kulvinder. "Conversational artificial intelligence - demystifying statistical vs linguistic NLP solutions". Universitat Politécnica de Valéncia, 2020. http://hdl.handle.net/10454/18121.

Full text
Abstract:
This paper aims to demystify the hype and attention on chatbots and their association with conversational artificial intelligence. Both are slowly emerging as a real presence in our lives from the impressive technological developments in machine learning, deep learning and natural language understanding solutions. However, what is under the hood, and how far and to what extent chatbots/conversational artificial intelligence solutions can work, is our question. Natural language is the most easily understood knowledge representation for people, but certainly not the best for computers because of its inherently ambiguous, complex and dynamic nature. We will critique the knowledge representation of heavy statistical chatbot solutions against linguistic alternatives. In order to react intelligently to the user, natural language solutions must critically consider other factors such as context, memory, intelligent understanding, previous experience, and personalized knowledge of the user. We will delve into the spectrum of conversational interfaces and focus on a strong artificial intelligence concept. This is explored via a text-based conversational software agent with a deep strategic role to hold a conversation and enable the mechanisms needed to plan, decide what to do next, and manage the dialogue to achieve a goal. To demonstrate this, a deep linguistically aware and knowledge aware text-based conversational agent (LING-CSA) presents a proof-of-concept of a non-statistical conversational AI solution.
8

Tamaazousti, Youssef. "Vers l’universalité des représentations visuelle et multimodales". Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLC038/document.

Full text
Abstract:
Because of its key societal, economic and cultural stakes, Artificial Intelligence (AI) is a hot topic. One of its main goals is to develop systems that facilitate the daily life of humans, with applications such as household robots, industrial robots, autonomous vehicles and much more. The rise of AI is largely due to the emergence of tools based on deep neural networks which make it possible to simultaneously learn the representation of the data (which was traditionally hand-crafted) and the task to solve (traditionally learned with statistical models). This resulted from the conjunction of theoretical advances, the growing computational capacity, and the availability of many annotated data. A long-standing goal of AI is to design machines, inspired by humans, capable of perceiving the world and interacting with humans in an evolutionary way. We categorize, in this Thesis, the works around AI into the two following learning approaches: (i) Specialization: learn representations from a few specific tasks with the goal of carrying out very specific tasks (specialized in a certain field) with a very good level of performance; (ii) Universality: learn representations from several general tasks with the goal of performing as many tasks as possible in different contexts. While specialization was extensively explored by the deep-learning community, only a few implicit attempts were made towards universality. Thus, the goal of this Thesis is to explicitly address the problem of improving universality with deep-learning methods, for image and text data. We have addressed this topic of universality in two different forms: through the implementation of methods to improve universality (“universalizing methods”); and through the establishment of a protocol to quantify universality. Concerning universalizing methods, we proposed three technical contributions: (i) in a context of large semantic representations, we proposed a method to reduce redundancy between the detectors through adaptive thresholding and the relations between concepts; (ii) in the context of neural-network representations, we proposed an approach that increases the number of detectors without increasing the amount of annotated data; (iii) in a context of multimodal representations, we proposed a method to preserve the semantics of unimodal representations in multimodal ones. Regarding the quantification of universality, we proposed to evaluate universalizing methods in a transfer-learning scheme. Indeed, this technical scheme is relevant to assess the universal ability of representations. This also led us to propose a new framework as well as new quantitative evaluation criteria for universalizing methods.
9

Liu, Xudong. "MODELING, LEARNING AND REASONING ABOUT PREFERENCE TREES OVER COMBINATORIAL DOMAINS". UKnowledge, 2016. http://uknowledge.uky.edu/cs_etds/43.

Full text
Abstract:
In my Ph.D. dissertation, I have studied problems arising in various aspects of preferences: preference modeling, preference learning, and preference reasoning, when preferences concern outcomes ranging over combinatorial domains. Preferences is a major research component in artificial intelligence (AI) and decision theory, and is closely related to the social choice theory considered by economists and political scientists. In my dissertation, I have exploited emerging connections between preferences in AI and social choice theory. Most of my research is on qualitative preference representations that extend and combine existing formalisms such as conditional preference nets, lexicographic preference trees, answer-set optimization programs, possibilistic logic, and conditional preference networks; on learning problems that aim at discovering qualitative preference models and predictive preference information from practical data; and on preference reasoning problems centered around qualitative preference optimization and aggregation methods. Applications of my research include recommender systems, decision support tools, multi-agent systems, and Internet trading and marketing platforms.
10

Cleland, Benjamin George. "Reinforcement Learning for Racecar Control". The University of Waikato, 2006. http://hdl.handle.net/10289/2507.

Full text
Abstract:
This thesis investigates the use of reinforcement learning to learn to drive a racecar in the simulated environment of the Robot Automobile Racing Simulator. Real-life race driving is known to be difficult for humans, and expert human drivers use complex sequences of actions. There are a large number of variables, some of which change stochastically and all of which may affect the outcome. This makes driving a promising domain for testing and developing Machine Learning techniques that have the potential to be robust enough to work in the real world. Therefore the principles of the algorithms from this work may be applicable to a range of problems. The investigation starts by finding a suitable data structure to represent the information learnt. This is tested using supervised learning. Reinforcement learning is added and roughly tuned, and the supervised learning is then removed. A simple tabular representation is found satisfactory, and this avoids difficulties with more complex methods and allows the investigation to concentrate on the essentials of learning. Various reward sources are tested and a combination of three are found to produce the best performance. Exploration of the problem space is investigated. Results show exploration is essential but controlling how much is done is also important. It turns out the learning episodes need to be very long and because of this the task needs to be treated as continuous by using discounting to limit the size of the variables stored. Eligibility traces are used with success to make the learning more efficient. The tabular representation is made more compact by hashing and more accurate by using smaller buckets. This slows the learning but produces better driving. The improvement given by a rough form of generalisation indicates the replacement of the tabular method by a function approximator is warranted. These results show reinforcement learning can work within the Robot Automobile Racing Simulator, and lay the foundations for building a more efficient and competitive agent.
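The ingredients discussed above (a tabular value representation, discounting, and eligibility traces) correspond to the classic SARSA(λ) algorithm; below is a generic sketch on a toy corridor task, not the thesis' racecar agent, with illustrative hyperparameters.

```python
# Generic tabular SARSA(lambda) with eligibility traces on a toy corridor
# task -- an illustration of the ingredients discussed above (tabular values,
# discounting, eligibility traces), not the thesis' racecar agent.
import numpy as np

N_STATES, ACTIONS = 10, (-1, +1)           # corridor positions; move left/right
GAMMA, ALPHA, LAM, EPS = 0.95, 0.1, 0.9, 0.1
rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, len(ACTIONS)))

def step(state, action_idx):
    nxt = int(np.clip(state + ACTIONS[action_idx], 0, N_STATES - 1))
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else -0.01), done

def eps_greedy(state):
    if rng.random() < EPS:
        return int(rng.integers(len(ACTIONS)))
    return int(np.argmax(Q[state]))

for episode in range(200):
    E = np.zeros_like(Q)                   # eligibility traces
    s, a = 0, eps_greedy(0)
    done = False
    while not done:
        s2, r, done = step(s, a)
        a2 = eps_greedy(s2)
        td_error = r + (0.0 if done else GAMMA * Q[s2, a2]) - Q[s, a]
        E[s, a] += 1.0                     # accumulating trace
        Q += ALPHA * td_error * E
        E *= GAMMA * LAM                   # decay all traces
        s, a = s2, a2

print(np.argmax(Q, axis=1))               # learned greedy actions (mostly "move right")
```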

Books on the topic "Representation learning (artificial intelligence)":

1

Pacific Rim International Conference on Artificial Intelligence (4th: 1996: Cairns, Qld.). PRICAI '96: Topics in artificial intelligence: 4th Pacific Rim International Conference on Artificial Intelligence, Cairns, Australia, August 26-30, 1996: proceedings. Berlin: Springer, 1996.

Find full text
2

International Conference on Artificial Intelligence in Education (14th: 2009: Brighton, England). Artificial intelligence in education: Building learning systems that care: From knowledge representation to affective modelling. Amsterdam: IOS Press, 2009.

Find full text
3

Benjamin, D. Paul, ed. Change of representation and inductive bias. Boston: Kluwer Academic, 1990.

Find full text
4

Tiberghien, Andrée, Heinz Mandl, and NATO Advanced Research Workshop on Knowledge Acquisition in the Domain of Physics and Intelligent Learning Environments (1990: Lyon, France), eds. Intelligent learning environments and knowledge acquisition in physics. Berlin: Springer-Verlag, 1992.

Find full text
5

Tiberghien, Andrée. Intelligent Learning Environments and Knowledge Acquisition in Physics. Berlin, Heidelberg: Springer Berlin Heidelberg, 1992.

Find full text
6

Workshop on Reasoning with Incomplete and Changing Information (1996: Cairns, Qld.). Learning and reasoning with complex representations: PRICAI'96 Workshops on Reasoning with Incomplete and Changing Information and on Inducing Complex Representations, Cairns, Australia, August 26-30, 1996: selected papers. Berlin: Springer, 1998.

Find full text
7

Fisseler, Jens. Learning and modeling with probabilistic conditional logic. Heidelberg: IOS Press, 2010.

Find full text
8

KR4HC 2009 (2009: Verona, Italy). Knowledge representation for health-care: Data, processes and guidelines: AIME 2009 workshop KR4HC 2009, Verona, Italy, July 19, 2009: revised selected papers. Berlin: Springer, 2010.

Find full text
9

Riaño, David. Knowledge Representation for Health-Care: ECAI 2010 Workshop KR4HC 2010, Lisbon, Portugal, August 17, 2010, Revised Selected Papers. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011.

Find full text
10

United States. National Aeronautics and Space Administration, ed. Instructable autonomous agents: CSE-TR-193-94. [Washington, DC]: National Aeronautics and Space Administration, 1994.

Find full text

Book chapters on the topic "Representation learning (artificial intelligence)":

1

Li, Yifeng. "Sparse Representation for Machine Learning". In Advances in Artificial Intelligence, 352–57. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-38457-8_38.

Full text
2

Bao, Feng. "Disentangled Variational Information Bottleneck for Multiview Representation Learning". In Artificial Intelligence, 91–102. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-93049-3_8.

Full text
3

Mai, Gengchen, Ziyuan Li, and Ni Lao. "Spatial Representation Learning in GeoAI". In Handbook of Geospatial Artificial Intelligence, 99–120. Boca Raton: CRC Press, 2023. http://dx.doi.org/10.1201/9781003308423-6.

Full text
4

Sharifirad, Sima, and Stan Matwin. "Deep Multi-cultural Graph Representation Learning". In Advances in Artificial Intelligence, 407–10. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-57351-9_46.

Full text
5

Reynolds, Stuart I. "Adaptive Representation Methods for Reinforcement Learning". In Advances in Artificial Intelligence, 345–48. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-45153-6_34.

Full text
6

Joshi, Ameet V. "Data Understanding, Representation, and Visualization". In Machine Learning and Artificial Intelligence, 21–29. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-26622-6_3.

Full text
7

Joshi, Ameet V. "Data Understanding, Representation, and Visualization". In Machine Learning and Artificial Intelligence, 21–29. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-12282-8_3.

Full text
8

He, Yiming, and Wei Hu. "3D Hand Pose Estimation via Regularized Graph Representation Learning". In Artificial Intelligence, 540–52. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-93046-2_46.

Full text
9

Xiao, Chaojun, Zhiyuan Liu, Yankai Lin, and Maosong Sun. "Legal Knowledge Representation Learning". In Representation Learning for Natural Language Processing, 401–32. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1600-9_11.

Full text
Abstract:
The law guarantees the regular functioning of the nation and society. In recent years, legal artificial intelligence (legal AI), which aims to apply artificial intelligence techniques to perform legal tasks, has received significant attention. Legal AI can provide a handy reference and convenient legal services for legal professionals and non-specialists, thus benefiting real-world legal practice. Different from general open-domain tasks, legal tasks have a high demand for understanding and applying expert knowledge. Therefore, enhancing models with various legal knowledge is a key issue of legal AI. In this chapter, we summarize the existing knowledge-intensive legal AI approaches regarding knowledge representation, acquisition, and application. Besides, future directions and ethical considerations are also discussed to promote the development of legal AI.
10

Belle, Vaishak. "Representation Matters". In Synthesis Lectures on Artificial Intelligence and Machine Learning, 15–26. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-21003-7_2.

Full text

Conference papers on the topic "Representation learning (artificial intelligence)":

1

Xie, Ruobing, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. "Image-embodied Knowledge Representation Learning". In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/438.

Full text
Abstract:
Entity images could provide significant visual information for knowledge representation learning. Most conventional methods learn knowledge representations merely from structured triples, ignoring rich visual information extracted from entity images. In this paper, we propose a novel Image-embodied Knowledge Representation Learning model (IKRL), where knowledge representations are learned with both triple facts and images. More specifically, we first construct representations for all images of an entity with a neural image encoder. These image representations are then integrated into an aggregated image-based representation via an attention-based method. We evaluate our IKRL models on knowledge graph completion and triple classification. Experimental results demonstrate that our models outperform all baselines on both tasks, which indicates the significance of visual information for knowledge representations and the capability of our models in learning knowledge representations with images.
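The attention-based aggregation step described in the abstract above can be sketched roughly as follows, with illustrative dimensions and a simple dot-product scoring function; this is not the IKRL implementation.

```python
# Hedged sketch of attention-based aggregation of multiple image embeddings
# for one entity, in the spirit of the aggregation step described above; the
# scoring function and dimensions are illustrative assumptions.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def aggregate_images(image_embs: np.ndarray, query: np.ndarray) -> np.ndarray:
    """image_embs: (n_images, d); query: (d,), e.g. the entity's structural embedding.
    Returns an attention-weighted aggregated image-based representation."""
    scores = image_embs @ query               # relevance of each image to the entity
    weights = softmax(scores)
    return weights @ image_embs

rng = np.random.default_rng(0)
entity_embedding = rng.normal(size=64)
image_embeddings = rng.normal(size=(5, 64))   # e.g. outputs of a neural image encoder
agg = aggregate_images(image_embeddings, entity_embedding)
print(agg.shape)                              # (64,)
```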
2

Qian, Sheng, Guanyue Li, Wen-Ming Cao, Cheng Liu, Si Wu, and Hau San Wong. "Improving representation learning in autoencoders via multidimensional interpolation and dual regularizations". In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/453.

Full text
Abstract:
Autoencoders enjoy a remarkable ability to learn data representations. Research on autoencoders shows that the effectiveness of data interpolation can reflect the performance of representation learning. However, existing interpolation methods in autoencoders do not have enough capability of traversing a possible region between two datapoints on a data manifold, and the distribution of interpolated latent representations is not considered. To address these issues, we aim to fully exert the potential of data interpolation and further improve representation learning in autoencoders. Specifically, we propose the multidimensional interpolation to increase the capability of data interpolation by randomly setting interpolation coefficients for each dimension of latent representations. In addition, we regularize autoencoders in both the latent and the data spaces by imposing a prior on latent representations in the Maximum Mean Discrepancy (MMD) framework and encouraging generated datapoints to be realistic in the Generative Adversarial Network (GAN) framework. Compared to representative models, our proposed model has empirically shown that representation learning exhibits better performance on downstream tasks on multiple benchmarks.
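Two ingredients named in the abstract above, multidimensional interpolation and an MMD prior-matching term, can be sketched as follows under simplifying assumptions (RBF kernel, uniform per-dimension coefficients); this is illustrative, not the paper's model.

```python
# Illustrative sketch of two ingredients named above, under simplifying
# assumptions: (1) multidimensional interpolation, i.e. a separate random
# interpolation coefficient per latent dimension, and (2) an RBF-kernel MMD
# penalty between latent codes and a prior sample. Not the paper's model.
import numpy as np
rng = np.random.default_rng(0)

def multidimensional_interpolation(z1: np.ndarray, z2: np.ndarray) -> np.ndarray:
    """Interpolate two batches of latent codes with per-dimension coefficients."""
    alpha = rng.uniform(size=z1.shape)        # one coefficient per dimension
    return alpha * z1 + (1.0 - alpha) * z2

def mmd_rbf(x: np.ndarray, y: np.ndarray, gamma: float = 1.0) -> float:
    """Biased MMD^2 estimate with an RBF kernel."""
    def k(a, b):
        sq = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq)
    return float(k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean())

z1 = rng.normal(size=(16, 8))
z2 = rng.normal(size=(16, 8))
z_mix = multidimensional_interpolation(z1, z2)
prior = rng.normal(size=(16, 8))              # sample from the assumed prior
print(z_mix.shape, round(mmd_rbf(z_mix, prior), 4))
```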
3

Li, Sheng, and Handong Zhao. "A Survey on Representation Learning for User Modeling". In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/695.

Full text
Abstract:
Artificial intelligent systems are changing every aspect of our daily life. In the past decades, numerous approaches have been developed to characterize user behavior, in order to deliver personalized experience to users in scenarios like online shopping or movie recommendation. This paper presents a comprehensive survey of recent advances in user modeling from the perspective of representation learning. In particular, we formulate user modeling as a process of learning latent representations for users. We discuss both the static and sequential representation learning methods for the purpose of user modeling, and review representative approaches in each category, such as matrix factorization, deep collaborative filtering, and recurrent neural networks. Both shallow and deep learning methods are reviewed and discussed. Finally, we conclude this survey and discuss a number of open research problems that would inspire further research in this field.
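One of the representative approaches mentioned above, matrix factorization for user modeling, can be sketched in a few lines of SGD; the hyperparameters and synthetic data below are illustrative assumptions.

```python
# Minimal matrix-factorization sketch (one of the representative user-modeling
# approaches mentioned above): learn low-dimensional user and item vectors by
# SGD on observed ratings. Hyperparameters and data are illustrative.
import numpy as np
rng = np.random.default_rng(0)

n_users, n_items, dim = 50, 40, 8
U = 0.1 * rng.normal(size=(n_users, dim))     # user representations
V = 0.1 * rng.normal(size=(n_items, dim))     # item representations

# Synthetic observed (user, item, rating) triples.
truth_U, truth_V = rng.normal(size=(n_users, dim)), rng.normal(size=(n_items, dim))
obs = [(u, i, float(truth_U[u] @ truth_V[i]))
       for u in range(n_users) for i in rng.choice(n_items, size=10, replace=False)]

lr, reg = 0.01, 0.01
for epoch in range(30):
    for idx in rng.permutation(len(obs)):
        u, i, r = obs[idx]
        err = r - U[u] @ V[i]
        u_old = U[u].copy()
        U[u] += lr * (err * V[i] - reg * U[u])
        V[i] += lr * (err * u_old - reg * V[i])

rmse = np.sqrt(np.mean([(r - U[u] @ V[i]) ** 2 for u, i, r in obs]))
print("training RMSE:", round(float(rmse), 3))
```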
4

Nozawa, Kento, and Issei Sato. "Evaluation Methods for Representation Learning: A Survey". In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/776.

Full text
Abstract:
Representation learning enables us to automatically extract generic feature representations from a dataset to solve another machine learning task. Recently, extracted feature representations by a representation learning algorithm and a simple predictor have exhibited state-of-the-art performance on several machine learning tasks. Despite its remarkable progress, there exist various ways to evaluate representation learning algorithms depending on the application because of the flexibility of representation learning. To understand the current applications of representation learning, we review evaluation methods of representation learning algorithms. On the basis of our evaluation survey, we also discuss the future direction of representation learning. The extended version, https://arxiv.org/abs/2204.08226, gives more detailed discussions and a survey on theoretical analyses.
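A common evaluation protocol covered by such surveys is linear evaluation: freeze the feature extractor and train only a linear classifier on the extracted features. The sketch below uses a random projection as a stand-in "pretrained" encoder, purely for illustration.

```python
# Generic linear-evaluation sketch: extract features with a frozen encoder
# (here a random projection stand-in) and fit only a linear classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X_raw = rng.normal(size=(600, 50))
y = (X_raw[:, :5].sum(axis=1) > 0).astype(int)      # synthetic labels

W_enc = rng.normal(size=(50, 16))                   # frozen "pretrained" encoder
features = np.tanh(X_raw @ W_enc)                   # extracted representations

X_tr, X_te, y_tr, y_te = train_test_split(features, y, test_size=0.3, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("linear-probe accuracy:", round(probe.score(X_te, y_te), 3))
```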
5

Bai, Yang, Min Cao, Daming Gao, Ziqiang Cao, Chen Chen, Zhenfeng Fan, Liqiang Nie, and Min Zhang. "RaSa: Relation and Sensitivity Aware Representation Learning for Text-based Person Search". In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/62.

Full text
Abstract:
Text-based person search aims to retrieve the specified person images given a textual description. The key to tackling such a challenging task is to learn powerful multi-modal representations. Towards this, we propose a Relation and Sensitivity aware representation learning method (RaSa), including two novel tasks: Relation-Aware learning (RA) and Sensitivity-Aware learning (SA). For one thing, existing methods cluster representations of all positive pairs without distinction and overlook the noise problem caused by the weak positive pairs where the text and the paired image have noise correspondences, thus leading to overfitting learning. RA offsets the overfitting risk by introducing a novel positive relation detection task (i.e., learning to distinguish strong and weak positive pairs). For another thing, learning invariant representation under data augmentation (i.e., being insensitive to some transformations) is a general practice for improving representation's robustness in existing methods. Beyond that, we encourage the representation to perceive the sensitive transformation by SA (i.e., learning to detect the replaced words), thus promoting the representation's robustness. Experiments demonstrate that RaSa outperforms existing state-of-the-art methods by 6.94%, 4.45% and 15.35% in terms of Rank@1 on CUHK-PEDES, ICFG-PEDES and RSTPReid datasets, respectively. Code is available at: https://github.com/Flame-Chasers/RaSa.
6

Gao, Li, Hong Yang, Chuan Zhou, Jia Wu, Shirui Pan, and Yue Hu. "Active Discriminative Network Representation Learning". In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/296.

Full text
Abstract:
Most of current network representation models are learned in unsupervised fashions, which usually lack the capability of discrimination when applied to network analysis tasks, such as node classification. It is worth noting that label information is valuable for learning the discriminative network representations. However, labels of all training nodes are always difficult or expensive to obtain and manually labeling all nodes for training is inapplicable. Different sets of labeled nodes for model learning lead to different network representation results. In this paper, we propose a novel method, termed as ANRMAB, to learn the active discriminative network representations with a multi-armed bandit mechanism in active learning setting. Specifically, based on the networking data and the learned network representations, we design three active learning query strategies. By deriving an effective reward scheme that is closely related to the estimated performance measure of interest, ANRMAB uses a multi-armed bandit mechanism for adaptive decision making to select the most informative nodes for labeling. The updated labeled nodes are then used for further discriminative network representation learning. Experiments are conducted on three public data sets to verify the effectiveness of ANRMAB.
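The adaptive strategy selection described above can be illustrated with a standard multi-armed bandit (EXP3 is used here as a generic stand-in); the query strategies and reward signal below are toy assumptions, not ANRMAB's actual design.

```python
# Hedged sketch: a multi-armed bandit (EXP3) adaptively picks among active-
# learning query strategies, in the spirit of the mechanism described above.
# The strategies and the reward signal are toy stand-ins.
import numpy as np
rng = np.random.default_rng(0)

strategies = ["uncertainty", "centrality", "density"]   # candidate query strategies
K, gamma = len(strategies), 0.2
weights = np.ones(K)

def choose_arm():
    probs = (1 - gamma) * weights / weights.sum() + gamma / K
    arm = int(rng.choice(K, p=probs))
    return arm, probs

def exp3_update(arm, probs, reward):
    """reward in [0, 1], e.g. the validation improvement after labeling."""
    estimated = reward / probs[arm]                      # importance-weighted reward
    weights[arm] *= np.exp(gamma * estimated / K)

for step in range(100):
    arm, probs = choose_arm()
    # Toy environment: pretend "uncertainty" queries help a bit more on average.
    reward = float(np.clip(rng.normal(0.6 if arm == 0 else 0.4, 0.1), 0, 1))
    exp3_update(arm, probs, reward)

print(dict(zip(strategies, (weights / weights.sum()).round(3))))
```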
7

Dumancic, Sebastijan, e Hendrik Blockeel. "Clustering-Based Relational Unsupervised Representation Learning with an Explicit Distributed Representation". In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/226.

Full text
Abstract:
The goal of unsupervised representation learning is to extract a new representation of data, such that solving many different tasks becomes easier. Existing methods typically focus on vectorized data and offer little support for relational data, which additionally describes relationships among instances. In this work we introduce an approach for relational unsupervised representation learning. Viewing a relational dataset as a hypergraph, new features are obtained by clustering vertices and hyperedges. To find a representation suited for many relational learning tasks, a wide range of similarities between relational objects is considered, e.g. feature and structural similarities. We experimentally evaluate the proposed approach and show that models learned on such latent representations perform better, have lower complexity, and outperform the existing approaches on classification tasks.
8

Le, Lei, Raksha Kumaraswamy e Martha White. "Learning Sparse Representations in Reinforcement Learning with Sparse Coding". In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/287.

Abstract (summary):
A variety of representation learning approaches have been investigated for reinforcement learning; much less attention, however, has been given to investigating the utility of sparse coding. Outside of reinforcement learning, sparse coding representations have been widely used, with non-convex objectives that result in discriminative representations. In this work, we develop a supervised sparse coding objective for policy evaluation. Despite the non-convexity of this objective, we prove that all local minima are global minima, making the approach amenable to simple optimization strategies. We empirically show that it is key to use a supervised objective, rather than the more straightforward unsupervised sparse coding approach. We then compare the learned representations to a canonical fixed sparse representation, called tile-coding, demonstrating that the sparse coding representation outperforms a wide variety of tile-coding representations.
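A supervised sparse-coding objective of this kind can be written as a reconstruction term plus a value-prediction term and an L1 sparsity penalty. The sketch below is a toy gradient-descent version of such an objective, written as an assumption for illustration; the paper's actual formulation and solver may differ:

    # Illustrative sketch only: supervised sparse coding for policy evaluation.
    import torch

    n, d, k = 64, 8, 16                        # samples, input dim, code dim
    X = torch.randn(n, d)                      # observed states
    returns = torch.randn(n)                   # sampled returns / bootstrapped targets

    D = torch.randn(d, k, requires_grad=True)  # dictionary
    H = torch.randn(n, k, requires_grad=True)  # sparse codes (the learned representation)
    w = torch.randn(k, requires_grad=True)     # linear value weights on the codes

    opt = torch.optim.Adam([D, H, w], lr=1e-2)
    for step in range(200):
        recon = (X - H @ D.t()).pow(2).mean()     # unsupervised reconstruction term
        value = (H @ w - returns).pow(2).mean()   # supervised (value prediction) term
        sparsity = H.abs().mean()                 # L1 penalty encourages sparse codes
        loss = recon + value + 0.1 * sparsity
        opt.zero_grad(); loss.backward(); opt.step()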
9

Wang, Pengyang, Yanjie Fu, Yuanchun Zhou, Kunpeng Liu, Xiaolin Li, and Kien Hua. "Exploiting Mutual Information for Substructure-aware Graph Representation Learning". In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/472.

Abstract (summary):
In this paper, we design and evaluate a new substructure-aware Graph Representation Learning (GRL) approach. GRL aims to map graph structure information into low-dimensional representations. While extensive efforts have been made to model global and/or local structure information, GRL can be further improved by substructure information. Some recent studies exploit adversarial learning to incorporate substructure awareness, but are hindered by unstable convergence. This study addresses the major research question: is there a better way to integrate substructure awareness into GRL? As subsets of the graph structure, substructures of interest (i.e., subgraphs) are unique and representative for differentiating graphs, leading to high correlation between the representations of the graph-level structure and its substructures. Since mutual information (MI) measures the mutual dependence between two variables, we develop an MI-induced substructure-aware GRL method. We decompose the GRL pipeline into two stages: (1) node level, where we propose to maximize the MI between the original and learned representations, following the intuition that the two should be highly correlated; (2) graph level, where we preserve substructures by maximizing the MI between the graph-level structure and substructure representations. Finally, we present extensive experimental results on real-world data to demonstrate the improved performance of our method.
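A common way to maximize mutual information between two representations is a contrastive lower bound such as InfoNCE; the sketch below illustrates this for graph-level and substructure representations. It is an illustrative stand-in under assumed shapes and encoders, not the paper's exact estimator or architecture:

    # Illustrative sketch only: an InfoNCE-style MI lower bound between graph-level and
    # substructure representations. Matching rows in the two batches are positive pairs.
    import torch
    import torch.nn.functional as F

    def infonce(graph_repr, sub_repr, temperature=0.2):
        g = F.normalize(graph_repr, dim=-1)
        s = F.normalize(sub_repr, dim=-1)
        logits = g @ s.t() / temperature        # similarity of every graph/substructure pair
        labels = torch.arange(g.size(0))        # diagonal entries are the positives
        return F.cross_entropy(logits, labels)  # lower loss = tighter MI lower bound

    loss = infonce(torch.randn(32, 128), torch.randn(32, 128))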
10

Chu, Guanyi, Xiao Wang, Chuan Shi, and Xunqiang Jiang. "CuCo: Graph Representation with Curriculum Contrastive Learning". In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/317.

Abstract (summary):
Graph-level representation learning aims to learn low-dimensional representations of entire graphs and has shown a large impact on real-world applications. Recently, because labeled data is expensive, contrastive-learning-based graph-level representation learning has attracted considerable attention. However, these methods mainly focus on graph augmentation for positive samples, while the effect of negative samples is less explored. In this paper, we study the impact of negative samples on learning graph-level representations and propose a novel curriculum contrastive learning framework for self-supervised graph-level representation, called CuCo. Specifically, we introduce four graph augmentation techniques to obtain the positive and negative samples and utilize graph neural networks to learn their representations. A scoring function is then proposed to sort negative samples from easy to hard, and a pacing function automatically selects the negative samples used in each training step. Extensive experiments on fifteen real-world graph classification datasets, together with a parameter analysis, demonstrate that the proposed CuCo yields encouraging results in terms of classification performance and convergence.
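The scoring and pacing functions mentioned in the abstract can be illustrated in a few lines: negatives are ranked by similarity to the anchor (more similar = harder), and a pacing schedule decides how many of them enter the loss at each step. The sketch below is a simplified assumption for illustration, not the released CuCo code:

    # Illustrative sketch only: curriculum ordering and pacing of negative samples.
    import torch
    import torch.nn.functional as F

    def score_negatives(anchor, negatives):
        # Higher cosine similarity to the anchor = harder negative.
        sims = F.cosine_similarity(anchor.unsqueeze(0), negatives, dim=-1)
        return torch.argsort(sims)            # indices ordered easy -> hard

    def pacing(step, total_steps, n_negatives):
        # Linearly increase how many of the sorted negatives are used.
        frac = min(1.0, (step + 1) / total_steps)
        return max(1, int(frac * n_negatives))

    anchor, negatives = torch.randn(128), torch.randn(50, 128)
    order = score_negatives(anchor, negatives)
    for step in range(5):
        k = pacing(step, total_steps=5, n_negatives=negatives.size(0))
        selected = negatives[order[:k]]       # easy negatives first, harder ones later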

Reports by organizations on the topic "Representation learning (artificial intelligence)":

1

Goodwin, Sarah, Yigal Attali, Geoffrey LaFlair, Yena Park, Andrew Runge, Alina von Davier, and Kevin Yancey. Duolingo English Test - Writing Construct. Duolingo, March 2023. http://dx.doi.org/10.46999/arxn5612.

Abstract (summary):
Assessments, especially those used for high-stakes decision making, draw on evidence-based frameworks. Such frameworks inform every aspect of the testing process, from development to results reporting. The frameworks that language assessment professionals use draw on theory in language learning, assessment design, and measurement and psychometrics in order to provide underpinnings for the evaluation of language skills including speaking, writing, reading, and listening. This paper focuses on the construct, or underlying trait, of writing ability. The paper conceptualizes the writing construct for the Duolingo English Test, a digital-first assessment. “Digital-first” includes technology such as artificial intelligence (AI) and machine learning, with human expert involvement, throughout all item development, test scoring, and security processes. This work is situated in the Burstein et al. (2022) theoretical ecosystem for digital-first assessment, the first representation of its kind that incorporates design, validation/measurement, and security all situated directly in assessment practices that are digital first. The paper first provides background information about the Duolingo English Test and then defines the writing construct, including the purposes for writing. It also introduces principles underpinning the design of writing items and illustrates sample items that assess the writing construct.
2

Nguyen, Kim, and Jonathan Hambur. Adoption of Emerging Digital General-purpose Technologies: Determinants and Effects. Reserve Bank of Australia, December 2023. http://dx.doi.org/10.47688/rdp2023-10.

Abstract (summary):
This paper examines the factors associated with the adoption of cloud computing and artificial intelligence/machine learning, two emerging digital general-purpose technologies (GPT), as well as firms' post-adoption outcomes. To do so we identify adoption of GPT based on references to these technologies in listed company reports, and merge this with data on their Board of Directors, their hiring activities and their financial performance. We find that firms that have directors with relevant technological backgrounds, or female representation on their Board, are more likely to profitably adopt GPT, with the former being particularly important. Worker skills also appear important, with firms that adopt GPT, particularly those that do so profitably, being more likely to hire skilled staff following adoption. Finally, while early adopters of GPT experience a dip in profitability following adoption, this is not evident for more recent adopters. This suggests that GPT may have become easier to adopt over time, potentially due to changes in the technologies or the availability of relevant skills, which is encouraging in terms of future productivity outcomes.
3

Varastehpour, Soheil, Hamid Sharifzadeh, and Iman Ardekani. A Comprehensive Review of Deep Learning Algorithms. Unitec ePress, 2021. http://dx.doi.org/10.34074/ocds.092.

Abstract (summary):
Deep learning algorithms are a subset of machine learning algorithms that aim to explore several levels of distributed representations from the input data. Recently, many deep learning algorithms have been proposed to solve traditional artificial intelligence problems. In this review paper, some up-to-date algorithms on this topic in the fields of computer vision and image processing are reviewed. Following this, a brief overview of several different deep learning methods and their recent developments is given.
4

Shukla, Indu, Rajeev Agrawal, Kelly Ervin, and Jonathan Boone. AI on digital twin of facility captured by reality scans. Engineer Research and Development Center (U.S.), November 2023. http://dx.doi.org/10.21079/11681/47850.

Abstract (summary):
The power of artificial intelligence (AI), coupled with optimization algorithms, can be linked to data-rich digital twin models to perform predictive analysis and support better-informed decisions about installation operations and quality of life for warfighters. In the current research, we developed AI-connected lifecycle building information models through the creation of a data-informed smart digital twin of one of the US Army Corps of Engineers (USACE) buildings as our test case. Digital twin (DT) technology involves creating a virtual representation of a physical entity. A digital twin is created by digitizing data collected through sensors, is powered by machine learning (ML) algorithms, and is a continuously learning system. The exponential advance in digital technologies enables facility spaces to be fully and richly modeled in three dimensions and brought together in virtual space. Coupled with advances in reinforcement learning and computer graphics, this enables AI agents to learn visual navigation and interaction with objects. We used Habitat AI 2.0 to train an embodied agent in an immersive, photorealistic 3D environment. The embodied agent interacts with the 3D environment by receiving RGB, depth and semantically segmented views of the environment, taking navigational actions, and interacting with objects in the 3D space. Instead of training robots in the physical world, we train embodied agents in simulated 3D space. Humans are superior at critical thinking, creativity, and managing people, whereas robots are superior at coping with harsh environments and performing highly repetitive work. Training robots in a controlled simulated world is faster and can increase their surveillance capability, reliability, efficiency, and survivability in physical space.
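The training loop described here boils down to an observation-action cycle: the agent receives RGB, depth and semantic views and emits navigation or interaction actions. The sketch below uses a hypothetical stand-in environment rather than the real Habitat AI 2.0 API, purely to illustrate that loop:

    # Illustrative sketch only: a generic embodied-agent loop with a fake simulator.
    import random

    ACTIONS = ["move_forward", "turn_left", "turn_right", "grab", "stop"]

    class FakeSimEnv:
        """Stand-in for a photorealistic 3D simulator; returns dummy observations."""
        def reset(self):
            return {"rgb": [[0] * 3], "depth": [[0.0]], "semantic": [[0]]}
        def step(self, action):
            obs = self.reset()
            reward, done = 0.0, action == "stop"
            return obs, reward, done

    def random_policy(obs):
        # A trained agent would map the RGB/depth/semantic views to an action here.
        return random.choice(ACTIONS)

    env = FakeSimEnv()
    obs, done = env.reset(), False
    while not done:
        obs, reward, done = env.step(random_policy(obs))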
5

Daudelin, Francois, Lina Taing, Lucy Chen, Claudia Abreu Lopes, Adeniyi Francis Fagbamigbe, and Hamid Mehmood. Mapping WASH-related disease risk: A review of risk concepts and methods. United Nations University Institute for Water, Environment and Health, December 2021. http://dx.doi.org/10.53328/uxuo4751.

Abstract (summary):
The report provides a review of how risk is conceived of, modelled, and mapped in studies of infectious water, sanitation, and hygiene (WASH) related diseases. It focuses on the spatial epidemiology of cholera, malaria and dengue to offer recommendations for the field of WASH-related disease risk mapping. The report notes a lack of consensus on the definition of disease risk in the literature, which limits the interpretability of the resulting analyses and could affect the quality of the design and direction of public health interventions. In addition, existing risk frameworks that consider disease incidence separately from community vulnerability have conceptual overlap in their components and conflate the probability and severity of disease risk into a single component.

The report identifies four methods used to develop risk maps: (i) observational, (ii) index-based, (iii) associative modelling and (iv) mechanistic modelling. Observational methods are limited by a lack of historical data sets and by their assumption that historical outcomes are representative of current and future risks. The more general index-based methods offer a highly flexible approach based on observed and modelled risks and can be used for partially qualitative or difficult-to-measure indicators, such as socioeconomic vulnerability. For multidimensional risk measures, indices representing different dimensions can be aggregated to form a composite index or be considered jointly without aggregation. The latter approach can distinguish between different types of disease risk, such as outbreaks of high frequency/low intensity and low frequency/high intensity. Associative models, including machine learning and artificial intelligence (AI), are commonly used to measure current risk, future risk (short-term, for early warning systems) or risk in areas with low data availability, but concerns about bias, privacy, trust, and accountability in algorithms can limit their application. In addition, they typically do not account for the gender and demographic variables that allow risk analyses for different vulnerable groups. As an alternative, mechanistic models can be used for similar purposes, as well as to create spatial measures of disease transmission efficiency or to model risk outcomes from hypothetical scenarios. Mechanistic models, however, are limited by their inability to capture locally specific transmission dynamics.

The report recommends that future WASH-related disease risk mapping research:
- Conceptualise risk as a function of the probability and severity of a disease risk event. Probability and severity can be disaggregated into sub-components. For outbreak-prone diseases, probability can be represented by a likelihood component, while severity can be disaggregated into transmission and sensitivity sub-components, where sensitivity represents factors affecting the health and socioeconomic outcomes of infection.
- Employ jointly considered, unaggregated indices to map multidimensional risk. Individual indices representing multiple dimensions of risk should be developed using a range of methods to take advantage of their relative strengths.
- Develop and apply collaborative approaches with public health officials, development organizations and relevant stakeholders to identify appropriate interventions and priority levels for different types of risk, while ensuring the needs and values of users are met in an ethical and socially responsible manner.
- Enhance the identification of vulnerable populations by further disaggregating risk estimates, accounting for demographic and behavioural variables, and using novel data sources such as big data and citizen science.

This review is the first to focus solely on WASH-related disease risk mapping and modelling. The recommendations can be used as a guide for developing spatial epidemiology models in tandem with public health officials and to help detect and develop tailored responses to WASH-related disease outbreaks that meet the needs of vulnerable populations. The report's main target audience is modellers, public health authorities and partners responsible for co-designing and implementing multi-sectoral health interventions, with a particular emphasis on facilitating the integration of health and WASH services delivery, contributing to Sustainable Development Goals (SDG) 3 (good health and well-being) and 6 (clean water and sanitation).
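The recommended decomposition of risk into probability and severity (with severity split into transmission and sensitivity sub-components) can be illustrated with a short numerical example; all values and weights below are invented for illustration:

    # Illustrative sketch only: risk as a function of probability and severity.
    likelihood = 0.30          # probability an outbreak occurs in the area
    transmission = 0.60        # how efficiently the disease would spread
    sensitivity = 0.45         # health / socioeconomic vulnerability to infection

    severity = 0.5 * transmission + 0.5 * sensitivity   # one possible aggregation
    composite_risk = likelihood * severity               # single composite index

    # Jointly considered, unaggregated indices keep different risk types distinguishable,
    # e.g. high-frequency/low-intensity vs. low-frequency/high-intensity outbreaks.
    risk_profile = {"likelihood": likelihood, "transmission": transmission,
                    "sensitivity": sensitivity}
    print(composite_risk, risk_profile)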
