Scientific literature on the topic "Dynamic Representation Learning"
Create an accurate reference in APA, MLA, Chicago, Harvard, and several other styles
Contents
Consult topical lists of journal articles, books, theses, conference reports, and other academic sources on the topic "Dynamic Representation Learning".
Next to each source in the list of references there is an "Add to bibliography" button. Click this button, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the scholarly publication as a PDF and read its abstract online when this information is included in the metadata.
Journal articles on the topic "Dynamic Representation Learning"
Lee, Jungmin, and Wongyoung Lee. "Aspects of A Study on the Multi Presentational Metaphor Education Using Online Telestration." Korean Society of Culture and Convergence 44, no. 9 (September 30, 2022): 163–73. http://dx.doi.org/10.33645/cnc.2022.9.44.9.163.
Biswal, Siddharth, Cao Xiao, Lucas M. Glass, Elizabeth Milkovits, and Jimeng Sun. "Doctor2Vec: Dynamic Doctor Representation Learning for Clinical Trial Recruitment." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (April 3, 2020): 557–64. http://dx.doi.org/10.1609/aaai.v34i01.5394.
Wang, Xingqi, Mengrui Zhang, Bin Chen, Dan Wei, and Yanli Shao. "Dynamic Weighted Multitask Learning and Contrastive Learning for Multimodal Sentiment Analysis." Electronics 12, no. 13 (July 7, 2023): 2986. http://dx.doi.org/10.3390/electronics12132986.
Goyal, Palash, Sujit Rokka Chhetri, and Arquimedes Canedo. "dyngraph2vec: Capturing network dynamics using dynamic graph representation learning." Knowledge-Based Systems 187 (January 2020): 104816. http://dx.doi.org/10.1016/j.knosys.2019.06.024.
Han, Liangzhe, Ruixing Zhang, Leilei Sun, Bowen Du, Yanjie Fu, and Tongyu Zhu. "Generic and Dynamic Graph Representation Learning for Crowd Flow Modeling." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 4 (June 26, 2023): 4293–301. http://dx.doi.org/10.1609/aaai.v37i4.25548.
Jiao, Pengfei, Hongjiang Chen, Huijun Tang, Qing Bao, Long Zhang, Zhidong Zhao, and Huaming Wu. "Contrastive representation learning on dynamic networks." Neural Networks 174 (June 2024): 106240. http://dx.doi.org/10.1016/j.neunet.2024.106240.
Radulescu, Angela, Yeon Soon Shin, and Yael Niv. "Human Representation Learning." Annual Review of Neuroscience 44, no. 1 (July 8, 2021): 253–73. http://dx.doi.org/10.1146/annurev-neuro-092920-120559.
Liu, Dianbo, Alex Lamb, Xu Ji, Pascal Junior Tikeng Notsawo, Michael Mozer, Yoshua Bengio, and Kenji Kawaguchi. "Adaptive Discrete Communication Bottlenecks with Dynamic Vector Quantization for Heterogeneous Representational Coarseness." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 7 (June 26, 2023): 8825–33. http://dx.doi.org/10.1609/aaai.v37i7.26061.
Deng, Yongjian, Hao Chen, and Youfu Li. "A Dynamic GCN with Cross-Representation Distillation for Event-Based Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 2 (March 24, 2024): 1492–500. http://dx.doi.org/10.1609/aaai.v38i2.27914.
Li, Jintang, Zhouxin Yu, Zulun Zhu, Liang Chen, Qi Yu, Zibin Zheng, Sheng Tian, Ruofan Wu, and Changhua Meng. "Scaling Up Dynamic Graph Representation Learning via Spiking Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 7 (June 26, 2023): 8588–96. http://dx.doi.org/10.1609/aaai.v37i7.26034.
Theses on the topic "Dynamic Representation Learning"
Stefanidis, Achilleas. "Dynamic Graph Representation Learning on Enterprise Live Video Streaming Events." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-278817.
Companies use live video streaming for both internal and external communication. Streaming high-quality video to thousands of viewers within a corporate network is not trivial, since the bandwidth requirements often exceed the capacity of the network. Peer-to-Peer (P2P) networks have proven to be a way of reducing the load on the network: the P2P network adapts to the structure of the corporate network and can thereby exchange video data efficiently. Adapting to a corporate network is a challenging problem, because such networks are dynamic and change over time, and knowledge of the topology is not always available. In this project we propose a new solution, ABD, a dynamic approach based on graph representation learning. We aim to estimate the bandwidth capacity available between two peers, or viewers. The architecture of ABD adapts to the properties of the corporate network. The model behind ABD uses an attention mechanism and a decoder: the attention mechanism produces node embeddings, while the decoder converts those embeddings into bandwidth estimates. The model captures the dynamics and structure of the network through an advanced training process. The effectiveness of ABD was evaluated on two dynamic network graphs based on data from real corporate networks. According to our experiments, ABD outperforms other state-of-the-art models for dynamic graph representation learning.
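The abstract does not include code; purely as an illustration of the encoder-decoder idea it describes (attention-based node embeddings fed to a decoder that scores peer pairs), a minimal sketch might look as follows. Every name, dimension, and weight here is an assumption for illustration, not the thesis's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_embeddings(X, A, Wq, Wk, Wv):
    """One self-attention layer restricted to graph edges.

    X: (n, d) node features; A: (n, n) adjacency (1 = edge);
    Wq/Wk/Wv: projection matrices (random placeholders here).
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    scores = np.where(A > 0, scores, -1e9)        # attend only to neighbors
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ V                            # (n, d) node embeddings

def decode_bandwidth(z_u, z_v, w, b):
    """Map a pair of node embeddings to a non-negative bandwidth estimate."""
    pair = np.concatenate([z_u, z_v])
    return float(np.log1p(np.exp(pair @ w + b)))  # softplus keeps it >= 0

n, d = 4, 8
X = rng.normal(size=(n, d))
A = np.ones((n, n))                               # fully connected toy graph
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
Z = attention_embeddings(X, A, Wq, Wk, Wv)
w, b = rng.normal(size=2 * d), 0.0
print(decode_bandwidth(Z[0], Z[1], w, b))         # estimated bandwidth for peers 0 and 1
```

In a trained system the projections and decoder weights would be learned from measured peer-to-peer throughput rather than sampled at random.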
Liemhetcharat, Somchaya. "Representation, Planning, and Learning of Dynamic Ad Hoc Robot Teams." Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/304.
McGarity, Michael. "Heterogeneous representations for reinforcement learning control of dynamic systems." Awarded by: University of New South Wales, School of Computer Science and Engineering, 2004. http://handle.unsw.edu.au/1959.4/19350.
Woodbury, Nathan Scott. "Representation and Reconstruction of Linear, Time-Invariant Networks." BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/7402.
Ribeiro, Andre Figueiredo. "Graph dynamics: learning and representation." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/34184.
Texte intégralIncludes bibliographical references (p. 58-60).
Graphs are often used in artificial intelligence as means for symbolic knowledge representation. A graph is nothing more than a collection of symbols connected to each other in some fashion. For example, in computer vision a graph with five nodes and some edges can represent a table, where nodes correspond to particular shape descriptors for legs and a top, and edges to particular spatial relations. As a framework for representation, graphs invite us to simplify and view the world as objects of pure structure whose properties are fixed in time, while the phenomena they are supposed to model are actually often changing. A node alone cannot represent a table leg, for example, because a table leg is not one structure (it can have many different shapes, colors, or it can be seen in many different settings, lighting conditions, etc.). Theories of knowledge representation have in general concentrated on the stability of symbols, on the fact that people often use properties that remain unchanged across different contexts to represent an object (in vision, these properties are called invariants). However, on closer inspection, objects are variable as well as stable. How are we to understand such problems? How is it that assembling a large collection of changing components into a system results in something that is an altogether stable collection of parts?
The work here presents one approach that we came to encompass by the phrase "graph dynamics". Roughly speaking, dynamical systems are systems with states that evolve over time according to some lawful "motion". In graph dynamics, states are graphical structures, corresponding to different hypotheses for representation, and motion is the correction or repair of an antecedent structure. The adapted structure is an end product on a path of test and repair. In this way, a graph is not an exact record of the environment but a malleable construct that is gradually tightened to fit the form it is to reproduce. In particular, we explore the concept of attractors for the graph dynamical system. In dynamical systems theory, attractor states are states into which the system settles with the passage of time, and in graph dynamics they correspond to graphical states with many repairs (states that can cope with many different contingencies). In parallel with introducing the basic mathematical framework for graph dynamics, we define a game for its control, its attractor states, and a method to find the attractors. From these insights, we work out two new algorithms, one for Bayesian network discovery and one for active learning, which in combination we use to undertake the object recognition problem in computer vision. To conclude, we report competitive results in standard and custom-made object recognition datasets.
by Andre Figueiredo Ribeiro.
S.M.
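To make the "test and repair" picture above concrete, a toy illustration (not the thesis's actual algorithm, and with a deliberately simplistic repair rule) can iterate repairs on a graph hypothesis until it stops changing, the fixed point playing the role of an attractor:

```python
def repair(graph, observation):
    """One toy repair rule: add any observed edge missing from the hypothesis."""
    return graph | {observation} if observation not in graph else graph

def settle(graph, observations):
    """Apply repairs until the hypothesis stops changing (an attractor state)."""
    while True:
        new = graph
        for obs in observations:
            new = repair(new, obs)
        if new == graph:
            return graph
        graph = new

# Hypothesis: edges of a "table" graph; observations gradually tighten it.
hypothesis = {("leg1", "top")}
observed = [("leg2", "top"), ("leg3", "top"), ("leg4", "top")]
print(sorted(settle(hypothesis, observed)))
```

A realistic system would also remove or rewrite edges that conflict with observations; the point here is only the dynamics of settling into a structure that copes with all the contingencies seen so far.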
Terreau, Enzo. "Apprentissage de représentations d'auteurs et d'autrices à partir de modèles de langue pour l'analyse des dynamiques d'écriture." Electronic Thesis or Diss., Lyon 2, 2024. http://www.theses.fr/2024LYO20001.
The recent and massive democratization of digital tools has empowered individuals to generate and share information on the web through various means such as blogs, social networks, sharing platforms, and more. The exponential growth of available information, mostly textual data, requires the development of Natural Language Processing (NLP) models to represent it mathematically and subsequently classify, sort, or recommend it. This is the essence of representation learning. It aims to construct a low-dimensional space where the distances between projected objects (words, texts) reflect real-world distances, whether semantic, stylistic, and so on. The proliferation of available data, coupled with the rise in computing power and deep learning, has led to the creation of highly effective language models for word and document embeddings. These models incorporate complex semantic and linguistic concepts while remaining accessible to everyone and easily adaptable to specific tasks or corpora. One can use them to create author embeddings. However, it is challenging to determine the aspects on which a model will focus to bring authors closer or move them apart. In a literary context, it is preferable for similarities to relate primarily to writing style, which raises several issues: the definition of literary style is vague, and assessing the stylistic difference between two texts and their embeddings is complex. In computational linguistics, approaches aiming to characterize style are mainly statistical, relying on language markers. In light of this, our first contribution is a framework to evaluate the ability of language models to grasp writing style. Beforehand, we review text embedding models in machine learning and deep learning at the word, document, and author levels, and present how the notion of literary style is treated in Natural Language Processing, which forms the basis of our method.
Transferring knowledge between black-box large language models and these methods derived from linguistics remains a complex task. Our second contribution aims to reconcile these approaches through a representation learning model focusing on style, VADES (Variational Author and Document Embedding with Style). We compare our model to state-of-the-art ones and analyze their limitations in this context. Finally, we delve into dynamic author and document embeddings. Temporal information is crucial, allowing for a more fine-grained representation of writing dynamics. After presenting the state of the art, we elaborate on our last contribution, B²ADE (Brownian Bridge Author and Document Embedding), which models authors as trajectories. We conclude by outlining several avenues for improving our methods and highlighting potential research directions for the future.
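As a purely illustrative sketch of the Brownian-bridge idea behind modeling an author as a trajectory (everything here, including the function name and parameters, is a hypothetical simplification, not the B²ADE model itself): a bridge pins a stochastic path between a start and an end embedding, with noise that vanishes at both endpoints.

```python
import numpy as np

def brownian_bridge(z_start, z_end, n_steps, sigma=0.1, rng=None):
    """Sample a discrete Brownian bridge between two embedding vectors.

    At time t in [0, 1] the mean interpolates z_start -> z_end, and the
    noise scale sigma * sqrt(t * (1 - t)) vanishes at both ends, so the
    trajectory is pinned to its endpoints.
    """
    if rng is None:
        rng = np.random.default_rng()
    ts = np.linspace(0.0, 1.0, n_steps)
    means = np.outer(1 - ts, z_start) + np.outer(ts, z_end)
    stds = sigma * np.sqrt(ts * (1 - ts))
    return means + stds[:, None] * rng.normal(size=means.shape)

z0, z1 = np.zeros(16), np.ones(16)   # toy "early" and "late" author states
traj = brownian_bridge(z0, z1, n_steps=10, rng=np.random.default_rng(42))
print(traj.shape)                    # one embedding per time step along the author's path
```

Documents written at time t would then be represented near the bridge's position at t, giving the temporal fine-graining the abstract mentions.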
Pinder, Ross Andrew. "Representative learning design in dynamic interceptive actions." Thesis, Queensland University of Technology, 2012. https://eprints.qut.edu.au/59803/1/Ross_Pinder_Thesis.pdf.
Moremoholo, T. P. "A review of how to optimize learning from external representations." Journal for New Generation Sciences 11, no. 2. Central University of Technology, Free State, Bloemfontein, 2013. http://hdl.handle.net/11462/635.
This article reviews research on learning with external representations and provides a theoretical background on how to optimize learning from external representations. General factors, such as the type of material to be learned, learner characteristics, and the testing method, are some of the variables that determine whether a graphic medium can increase a subject's comprehension and whether such comprehension can be accurately measured. These factors are discussed and represented by a model to suggest how external representations can be used effectively in a learning environment. Two key conclusions are drawn from the observations made in these studies. First, the proper design of a particular external representation and supporting text can promote relevant activities that ultimately contribute to a fuller understanding of the content. Second, external representations must be developed to address the size, complexity, and variety of the content that must be analysed in order to extract knowledge for scientific discovery.
Delasalles, Edouard. "Inferring and Predicting Dynamic Representations for Structured Temporal Data." Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS296.
Temporal data constitute a large part of the data collected digitally. Predicting their future values is an important and challenging task in domains such as climatology, optimal control, and natural language processing. Standard statistical methods are based on linear models and are often limited to low-dimensional data. We instead use deep learning methods capable of handling high-dimensional structured data and of leveraging large quantities of examples. In this thesis, we are interested in latent variable models. Contrary to autoregressive models, which directly use past data to perform prediction, latent models infer low-dimensional vectorial representations of the data on which prediction is performed. Latent vectorial spaces allow us to learn dynamic models that are able to generate high-dimensional, structured data. First, we propose a structured latent model for spatio-temporal data forecasting. Given a set of spatial locations where data such as weather or traffic are collected, we infer latent variables for each location and use the spatial structure in the dynamic function. The model is also able to discover correlations between series without prior spatial information. Next, we focus on predicting data distributions rather than point estimates. We propose a model that generates latent variables used to condition a generative model. Text data are used to evaluate the model on diachronic language modeling. Finally, we propose a stochastic prediction model. It uses the first values of sequences to generate several possible futures. Here, the generative model is conditioned not on an absolute epoch but on a sequence. The model is applied to stochastic video prediction.
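The latent-variable recipe the abstract describes (infer a low-dimensional state, step the dynamics in latent space, decode back to observation space) can be sketched in a few lines. Everything below is a toy linear stand-in with made-up dimensions, not the thesis's actual model:

```python
import numpy as np

rng = np.random.default_rng(1)
obs_dim, lat_dim = 32, 4

# Toy linear stand-ins for a learned encoder, latent dynamics, and decoder.
enc = rng.normal(size=(obs_dim, lat_dim)) / np.sqrt(obs_dim)
dyn = 0.9 * np.eye(lat_dim)              # contractive latent transition
dec = rng.normal(size=(lat_dim, obs_dim)) / np.sqrt(lat_dim)

def forecast(x_t, horizon):
    """Predict future observations by rolling the dynamics in latent space."""
    z = x_t @ enc                        # infer the latent state
    preds = []
    for _ in range(horizon):
        z = z @ dyn                      # "motion" happens in the latent space
        preds.append(z @ dec)            # decode back to observation space
    return np.stack(preds)

x_now = rng.normal(size=obs_dim)
future = forecast(x_now, horizon=5)
print(future.shape)                      # (5, 32): five predicted time steps
```

The point of the design is that the dynamics operate on a 4-dimensional state rather than the 32-dimensional observations, which is what makes high-dimensional structured generation tractable.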
Mccosker, Christopher Mark. "Interacting constraints in high performance long jumping." Thesis, Queensland University of Technology, 2020. https://eprints.qut.edu.au/204284/1/Christopher_McCosker_Thesis.pdf.
Texte intégralLivres sur le sujet "Dynamic Representation Learning"
Baum, Susan, et Robin Schader. Using a Positive Lens. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780190645472.003.0003.
Texte intégralClavio, Galen. Social Media and Sports. Human Kinetics, 2017. http://dx.doi.org/10.5040/9781718221000.
Texte intégralFaflik, David. Urban Formalism. Fordham University Press, 2020. http://dx.doi.org/10.5422/fordham/9780823288045.001.0001.
Texte intégralThe Expected Knowledge : What can we know about anything and everything ? Tiruchirappalli : Sivashanmugam Palaniappan, 2012.
Trouver le texte intégralChapitres de livres sur le sujet "Dynamic Representation Learning"
Feng, Hao, Yan Liu, Ziqiao Zhou, and Jing Chen. "Dynamic Network Change Detection via Dynamic Network Representation Learning." In Communications and Networking, 642–58. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-41114-5_48.
Yu, Yanwei, Huaxiu Yao, Hongjian Wang, Xianfeng Tang, and Zhenhui Li. "Representation Learning for Large-Scale Dynamic Networks." In Database Systems for Advanced Applications, 526–41. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-91458-9_32.
Mendes, M. E. S., and L. Sacks. "Dynamic Knowledge Representation for e-Learning Applications." In Enhancing the Power of the Internet, 259–82. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-45218-8_12.
Fathy, Ahmed, and Kan Li. "TemporalGAT: Attention-Based Dynamic Graph Representation Learning." In Advances in Knowledge Discovery and Data Mining, 413–23. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-47426-3_32.
Zhang, Si, Yinglong Xia, Yan Zhu, and Hanghang Tong. "Representation Learning on Dynamic Network of Networks." In Proceedings of the 2023 SIAM International Conference on Data Mining (SDM), 298–306. Philadelphia, PA: Society for Industrial and Applied Mathematics, 2023. http://dx.doi.org/10.1137/1.9781611977653.ch34.
Lowe, Richard. "User-Controllable Animated Diagrams: The Solution for Learning Dynamic Content?" In Diagrammatic Representation and Inference, 355–59. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-25931-2_38.
Xu, Liancheng, Xiaoxiang Wang, Lei Guo, Jinyu Zhang, Xiaoqi Wu, and Xinhua Wang. "Candidate-Aware Dynamic Representation for News Recommendation." In Artificial Neural Networks and Machine Learning – ICANN 2023, 272–84. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44195-0_23.
Li, Yongfang, Liang Chang, Guanjun Rao, Phatpicha Yochum, Yiqin Luo, and Tianlong Gu. "Representation Learning for Knowledge Graph with Dynamic Step." In Communications in Computer and Information Science, 382–93. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-2203-7_29.
Denisov, Mikhail, Anton Anikin, and Oleg Sychev. "Dynamic Flowcharts for Enhancing Learners' Understanding of the Control Flow During Programming Learning." In Diagrammatic Representation and Inference, 408–11. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-86062-2_42.
Gong, Changwei, Changhong Jing, Yanyan Shen, and Shuqiang Wang. "Dynamic Community Detection via Adversarial Temporal Graph Representation Learning." In Neural Computing for Advanced Applications, 1–13. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-6135-9_1.
Conference papers on the topic "Dynamic Representation Learning"
Kose, Oyku Deniz, and Yanning Shen. "Dynamic Fair Node Representation Learning." In ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2023. http://dx.doi.org/10.1109/icassp49357.2023.10094834.
Zhang, Siwei, Yun Xiong, Yao Zhang, Yiheng Sun, Xi Chen, Yizhu Jiao, and Yangyong Zhu. "RDGSL: Dynamic Graph Representation Learning with Structure Learning." In CIKM '23: The 32nd ACM International Conference on Information and Knowledge Management. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3583780.3615023.
Tong, Minglei, and Han Hong. "Learning sparse representation for dynamic gesture recognition." In 2015 IEEE Workshop on Signal Processing Systems (SiPS). IEEE, 2015. http://dx.doi.org/10.1109/sips.2015.7345021.
Tan, Zhen, Xiang Zhao, Yang Fang, Weidong Xiao, and Jiuyang Tang. "Knowledge Representation Learning via Dynamic Relation Spaces." In 2016 IEEE 16th International Conference on Data Mining Workshops (ICDMW). IEEE, 2016. http://dx.doi.org/10.1109/icdmw.2016.0102.
Racz, Janos, and Tamas Klotz. "Knowledge representation by dynamic competitive learning techniques." In SPIE Proceedings, edited by Steven K. Rogers. SPIE, 1991. http://dx.doi.org/10.1117/12.45015.
Tian, Sheng, Ruofan Wu, Leilei Shi, Liang Zhu, and Tao Xiong. "Self-supervised Representation Learning on Dynamic Graphs." In CIKM '21: The 30th ACM International Conference on Information and Knowledge Management. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3459637.3482389.
Wu, Zian, Huijun Tang, and Huan Liu. "Bayesian Contrastive Representation Learning for Dynamic Graph." In 2022 IEEE Smartworld, Ubiquitous Intelligence & Computing, Scalable Computing & Communications, Digital Twin, Privacy Computing, Metaverse, Autonomous & Trusted Vehicles (SmartWorld/UIC/ScalCom/DigitalTwin/PriComp/Meta). IEEE, 2022. http://dx.doi.org/10.1109/smartworld-uic-atc-scalcom-digitaltwin-pricomp-metaverse56740.2022.00355.
Luo, Yiqin, Liang Chang, Guanjun Rao, Wei Chen, and Tianlong Gu. "Representation Learning for Knowledge Graph with Dynamic Margin." In 2018 11th International Symposium on Computational Intelligence and Design (ISCID). IEEE, 2018. http://dx.doi.org/10.1109/iscid.2018.10171.
Liu, Yang, Yan Liu, Shenghua Zhong, and Keith C. C. Chan. "Trajectory representation of dynamic texture via manifold learning." In 2010 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2010. http://dx.doi.org/10.1109/icassp.2010.5496195.
Liu, Zhijun, Chao Huang, Yanwei Yu, Peng Song, Baode Fan, and Junyu Dong. "Dynamic Representation Learning for Large-Scale Attributed Networks." In CIKM '20: The 29th ACM International Conference on Information and Knowledge Management. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3340531.3411945.
Organization reports on the topic "Dynamic Representation Learning"
Mishra, Umakant, and Sagar Gautam. Improving and testing machine learning methods for benchmarking soil carbon dynamics representation of land surface models. Office of Scientific and Technical Information (OSTI), September 2022. http://dx.doi.org/10.2172/1891184.
Daudelin, Francois, Lina Taing, Lucy Chen, Claudia Abreu Lopes, Adeniyi Francis Fagbamigbe, and Hamid Mehmood. Mapping WASH-related disease risk: A review of risk concepts and methods. United Nations University Institute for Water, Environment and Health, December 2021. http://dx.doi.org/10.53328/uxuo4751.