Selected scientific literature on the topic "Dynamic Representation Learning"
Cite a source in APA, MLA, Chicago, Harvard and many other citation styles
Consult the list of current articles, books, theses, conference proceedings and other scientific sources relevant to the topic "Dynamic Representation Learning".
Next to each source in the reference list there is an "Add to bibliography" button. Press it and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication as a .pdf and read the abstract (summary) of the work online, if it is included in the metadata.
Journal articles on the topic "Dynamic Representation Learning":
Lee, Jungmin, and Wongyoung Lee. "Aspects of A Study on the Multi Presentational Metaphor Education Using Online Telestration". Korean Society of Culture and Convergence 44, no. 9 (30 September 2022): 163–73. http://dx.doi.org/10.33645/cnc.2022.9.44.9.163.
Biswal, Siddharth, Cao Xiao, Lucas M. Glass, Elizabeth Milkovits, and Jimeng Sun. "Doctor2Vec: Dynamic Doctor Representation Learning for Clinical Trial Recruitment". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (3 April 2020): 557–64. http://dx.doi.org/10.1609/aaai.v34i01.5394.
Wang, Xingqi, Mengrui Zhang, Bin Chen, Dan Wei, and Yanli Shao. "Dynamic Weighted Multitask Learning and Contrastive Learning for Multimodal Sentiment Analysis". Electronics 12, no. 13 (7 July 2023): 2986. http://dx.doi.org/10.3390/electronics12132986.
Goyal, Palash, Sujit Rokka Chhetri, and Arquimedes Canedo. "dyngraph2vec: Capturing network dynamics using dynamic graph representation learning". Knowledge-Based Systems 187 (January 2020): 104816. http://dx.doi.org/10.1016/j.knosys.2019.06.024.
Han, Liangzhe, Ruixing Zhang, Leilei Sun, Bowen Du, Yanjie Fu, and Tongyu Zhu. "Generic and Dynamic Graph Representation Learning for Crowd Flow Modeling". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 4 (26 June 2023): 4293–301. http://dx.doi.org/10.1609/aaai.v37i4.25548.
Jiao, Pengfei, Hongjiang Chen, Huijun Tang, Qing Bao, Long Zhang, Zhidong Zhao, and Huaming Wu. "Contrastive representation learning on dynamic networks". Neural Networks 174 (June 2024): 106240. http://dx.doi.org/10.1016/j.neunet.2024.106240.
Radulescu, Angela, Yeon Soon Shin, and Yael Niv. "Human Representation Learning". Annual Review of Neuroscience 44, no. 1 (8 July 2021): 253–73. http://dx.doi.org/10.1146/annurev-neuro-092920-120559.
Liu, Dianbo, Alex Lamb, Xu Ji, Pascal Junior Tikeng Notsawo, Michael Mozer, Yoshua Bengio, and Kenji Kawaguchi. "Adaptive Discrete Communication Bottlenecks with Dynamic Vector Quantization for Heterogeneous Representational Coarseness". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 7 (26 June 2023): 8825–33. http://dx.doi.org/10.1609/aaai.v37i7.26061.
Deng, Yongjian, Hao Chen, and Youfu Li. "A Dynamic GCN with Cross-Representation Distillation for Event-Based Learning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 2 (24 March 2024): 1492–500. http://dx.doi.org/10.1609/aaai.v38i2.27914.
Li, Jintang, Zhouxin Yu, Zulun Zhu, Liang Chen, Qi Yu, Zibin Zheng, Sheng Tian, Ruofan Wu, and Changhua Meng. "Scaling Up Dynamic Graph Representation Learning via Spiking Neural Networks". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 7 (26 June 2023): 8588–96. http://dx.doi.org/10.1609/aaai.v37i7.26034.
Theses on the topic "Dynamic Representation Learning":
Stefanidis, Achilleas. "Dynamic Graph Representation Learning on Enterprise Live Video Streaming Events". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-278817.
Companies use live video streaming for both internal and external communication. Streaming high-quality video to thousands of viewers inside a corporate network is not trivial, since the bandwidth requirements often exceed the capacity of the network. To reduce the load on the network, Peer-to-Peer (P2P) networks have proven to be a solution. Here, the P2P network adapts to the structure of the corporate network and can thereby exchange video data efficiently. Adapting to a corporate network is a challenging problem, because such networks are dynamic, changing over time, and knowledge of the topology is not always available. In this project we propose a new solution, ABD, a dynamic approach based on graph representation learning. We attempt to estimate the bandwidth capacity available between two peers, i.e. viewers. The architecture of ABD adapts to the characteristics of the corporate network. The model behind ABD uses an attention mechanism and a decoder. The attention mechanism produces node embeddings, while the decoder converts the embeddings into bandwidth estimates. The model captures the dynamics and the structure of the network with the help of an advanced training process. The effectiveness of ABD is evaluated on two dynamic network graphs built from data from real corporate networks. According to our experiments, ABD achieves better results than other state-of-the-art models for dynamic graph representation learning.
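The abstract above describes a two-stage shape: an attention mechanism produces node embeddings, and a decoder turns pairs of embeddings into bandwidth estimates. A minimal, hypothetical NumPy sketch of that data flow only (the weights here are random rather than learned, and this is not the thesis's actual ABD architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention_embeddings(features, adj, W):
    """One attention layer: each node aggregates its neighbours'
    projected features, weighted by a compatibility score."""
    h = features @ W                          # project node features
    scores = h @ h.T                          # pairwise compatibility
    scores = np.where(adj > 0, scores, -1e9)  # mask non-neighbours
    return softmax(scores) @ h                # weighted aggregation

def decode_bandwidth(z, u, v):
    """Decoder: map a pair of node embeddings to a scalar bandwidth
    estimate (here a simple dot product)."""
    return float(z[u] @ z[v])

# Toy 4-node network; the adjacency matrix includes self-loops.
features = rng.normal(size=(4, 8))
adj = np.array([[1, 1, 0, 0],
                [1, 1, 1, 0],
                [0, 1, 1, 1],
                [0, 0, 1, 1]])
W = rng.normal(size=(8, 8))
z = attention_embeddings(features, adj, W)
estimate = decode_bandwidth(z, 0, 1)
```

In the real system the projection and decoder would be trained against measured peer-to-peer throughput; here they only illustrate the pipeline from graph to embeddings to a pairwise estimate.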
Liemhetcharat, Somchaya. "Representation, Planning, and Learning of Dynamic Ad Hoc Robot Teams". Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/304.
McGarity, Michael. "Heterogeneous representations for reinforcement learning control of dynamic systems". PhD diss., University of New South Wales, School of Computer Science and Engineering, 2004. http://handle.unsw.edu.au/1959.4/19350.
Woodbury, Nathan Scott. "Representation and Reconstruction of Linear, Time-Invariant Networks". BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/7402.
Ribeiro, Andre Figueiredo. "Graph dynamics : learning and representation". Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/34184.
Includes bibliographical references (p. 58-60).
Graphs are often used in artificial intelligence as means for symbolic knowledge representation. A graph is nothing more than a collection of symbols connected to each other in some fashion. For example, in computer vision a graph with five nodes and some edges can represent a table, where nodes correspond to particular shape descriptors for legs and a top, and edges to particular spatial relations. As a framework for representation, graphs invite us to simplify and view the world as objects of pure structure whose properties are fixed in time, while the phenomena they are supposed to model are actually often changing. A node alone cannot represent a table leg, for example, because a table leg is not one structure (it can have many different shapes, colors, or it can be seen in many different settings, lighting conditions, etc.). Theories of knowledge representation have in general concentrated on the stability of symbols: on the fact that people often use properties that remain unchanged across different contexts to represent an object (in vision, these properties are called invariants). However, on closer inspection, objects are variable as well as stable. How are we to understand such problems? How is it that assembling a large collection of changing components into a system results in something that is an altogether stable collection of parts?
The work here presents one approach that we came to encompass by the phrase "graph dynamics". Roughly speaking, dynamical systems are systems with states that evolve over time according to some lawful "motion". In graph dynamics, states are graphical structures, corresponding to different hypotheses for representation, and motion is the correction or repair of an antecedent structure. The adapted structure is an end product on a path of test and repair. In this way, a graph is not an exact record of the environment but a malleable construct that is gradually tightened to fit the form it is to reproduce. In particular, we explore the concept of attractors for the graph dynamical system. In dynamical systems theory, attractor states are states into which the system settles with the passage of time, and in graph dynamics they correspond to graphical states with many repairs (states that can cope with many different contingencies). In parallel with introducing the basic mathematical framework for graph dynamics, we define a game for its control, characterize its attractor states, and give a method to find the attractors. From these insights, we work out two new algorithms, one for Bayesian network discovery and one for active learning, which in combination we use to undertake the object recognition problem in computer vision. To conclude, we report competitive results on standard and custom-made object recognition datasets.
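The notions of "motion as repair" and attractors as fixed points can be made concrete with a toy graph dynamical system. In this illustrative sketch (my own simplification, not the thesis's algorithm), the repair step adds an edge whenever some node's degree falls below a requirement, and an attractor is any graph the repair step no longer changes:

```python
from itertools import combinations

def repair(graph, required_degree=2):
    """One 'motion' step: add a single edge touching a node whose
    degree is below the requirement; return the graph unchanged if
    every node already satisfies it."""
    nodes = sorted({n for edge in graph for n in edge})
    degree = {n: sum(n in edge for edge in graph) for n in nodes}
    for u, v in combinations(nodes, 2):
        e = frozenset((u, v))
        if e not in graph and (degree[u] < required_degree
                               or degree[v] < required_degree):
            return graph | {e}
    return graph  # fixed point: an attractor of the dynamics

def find_attractor(graph, max_steps=100):
    """Iterate the repair step until the graph stops changing."""
    for _ in range(max_steps):
        nxt = repair(graph)
        if nxt == graph:
            return graph
        graph = nxt
    return graph

# A path graph settles into a state where every node has degree >= 2.
path = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3)]}
attractor = find_attractor(path)
```

The attractor is exactly a state that "can cope" with the degree requirement everywhere, mirroring the abstract's description of attractors as states robust to many contingencies.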
Terreau, Enzo. "Apprentissage de représentations d'auteurs et d'autrices à partir de modèles de langue pour l'analyse des dynamiques d'écriture". Electronic Thesis or Diss., Lyon 2, 2024. http://www.theses.fr/2024LYO20001.
The recent and massive democratization of digital tools has empowered individuals to generate and share information on the web through various means such as blogs, social networks, sharing platforms, and more. The exponential growth of available information, mostly textual data, requires the development of Natural Language Processing (NLP) models to mathematically represent it and subsequently classify, sort, or recommend it. This is the essence of representation learning. It aims to construct a low-dimensional space where the distances between projected objects (words, texts) reflect real-world distances, whether semantic, stylistic, and so on. The proliferation of available data, coupled with the rise in computing power and deep learning, has led to the creation of highly effective language models for word and document embeddings. These models incorporate complex semantic and linguistic concepts while remaining accessible to everyone and easily adaptable to specific tasks or corpora. One can use them to create author embeddings. However, it is challenging to determine the aspects on which a model will focus to bring authors closer or move them apart. In a literary context, it is preferable for similarities to primarily relate to writing style, which raises several issues: the definition of literary style is vague, and assessing the stylistic difference between two texts and their embeddings is complex. In computational linguistics, approaches aiming to characterize it are mainly statistical, relying on language markers. In light of this, our first contribution is a framework to evaluate the ability of language models to grasp writing style. Beforehand, we review text embedding models in machine learning and deep learning, at the word, document, and author levels, and present how the notion of literary style has been treated in Natural Language Processing, which forms the basis of our method.
Transferring knowledge between black-box large language models and these methods derived from linguistics remains a complex task. Our second contribution aims to reconcile these approaches through a representation learning model focusing on style, VADES (Variational Author and Document Embedding with Style). We compare our model to state-of-the-art ones and analyze their limitations in this context. Finally, we delve into dynamic author and document embeddings. Temporal information is crucial, allowing for a more fine-grained representation of writing dynamics. After presenting the state of the art, we elaborate on our last contribution, B²ADE (Brownian Bridge Author and Document Embedding), which models authors as trajectories. We conclude by outlining several avenues for improving our methods and highlighting potential research directions for the future.
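B²ADE models authors as trajectories built on Brownian bridges. As an illustration of the underlying stochastic process only (not the model itself), a Brownian bridge between two embedding vectors is pinned at both endpoints, with noise whose variance peaks mid-trajectory:

```python
import numpy as np

rng = np.random.default_rng(42)

def brownian_bridge(z_start, z_end, steps, sigma=0.1):
    """Sample a Brownian bridge between two embedding vectors.
    At step t of T, the marginal is Gaussian with mean on the
    straight line between the endpoints and variance
    sigma^2 * t * (T - t) / T, which vanishes at both ends."""
    z_start = np.asarray(z_start, dtype=float)
    z_end = np.asarray(z_end, dtype=float)
    T = steps - 1
    path = [z_start]
    for t in range(1, T):
        mean = z_start + (t / T) * (z_end - z_start)
        std = sigma * np.sqrt(t * (T - t) / T)
        path.append(mean + rng.normal(scale=std, size=z_start.shape))
    path.append(z_end)
    return np.stack(path)

# A 5-step trajectory between two 3-dimensional author embeddings.
trajectory = brownian_bridge(np.zeros(3), np.ones(3), steps=5)
```

Pinning both endpoints is what makes the bridge attractive for modeling an author's drift between two anchored points in embedding space, rather than an unconstrained random walk.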
Pinder, Ross Andrew. "Representative learning design in dynamic interceptive actions". Thesis, Queensland University of Technology, 2012. https://eprints.qut.edu.au/59803/1/Ross_Pinder_Thesis.pdf.
Moremoholo, T. P. "A review of how to optimize learning from external representations". Journal for New Generation Sciences 11, no. 2. Bloemfontein: Central University of Technology, Free State, 2013. http://hdl.handle.net/11462/635.
This article reviews research on learning with external representations and provides a theoretical background on how to optimize learning from external representations. General factors, such as the type of material to be learned, learner characteristics and the testing method, are some of the variables that can determine whether a graphic medium can increase a subject's comprehension and whether such comprehension can be accurately measured. These factors are discussed and represented by a model to suggest how external representations can be effectively used in a learning environment. Two key conclusions are drawn from the observations made in these studies. Firstly, the proper design of a particular external representation and supporting text can promote relevant activities that ultimately contribute to fuller understanding of the content. Secondly, external representations must be developed to address the size, complexity and variety of the content that must be analysed in order to extract knowledge for scientific discovery.
Delasalles, Edouard. "Inferring and Predicting Dynamic Representations for Structured Temporal Data". Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS296.
Temporal data constitute a large part of data collected digitally. Predicting their next values is an important and challenging task in domains such as climatology, optimal control, or natural language processing. Standard statistical methods are based on linear models and are often limited to low-dimensional data. We instead use deep learning methods capable of handling high-dimensional structured data and of leveraging large quantities of examples. In this thesis, we are interested in latent variable models. Unlike autoregressive models, which directly use past data to perform prediction, latent models infer low-dimensional vectorial representations of data on which prediction is performed. Latent vectorial spaces allow us to learn dynamic models that are able to generate high-dimensional and structured data. First, we propose a structured latent model for spatio-temporal data forecasting. Given a set of spatial locations where data such as weather or traffic are collected, we infer latent variables for each location and use the spatial structure in the dynamic function. The model is also able to discover correlations between series without prior spatial information. Next, we focus on predicting data distributions rather than point estimates. We propose a model that generates latent variables used to condition a generative model. Text data are used to evaluate the model on diachronic language modeling. Finally, we propose a stochastic prediction model. It uses the first values of sequences to generate several possible futures. Here, the generative model is conditioned not on an absolute epoch but on a sequence. The model is applied to stochastic video prediction.
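The abstract's central contrast, predicting in a low-dimensional latent space rather than directly on high-dimensional data, can be sketched with a purely linear toy model. The matrices below are random, untrained stand-ins for the learned encoder, dynamics, and decoder; the thesis's models are deep and far richer:

```python
import numpy as np

rng = np.random.default_rng(1)

d_obs, d_lat = 16, 3   # high-dimensional observations, low-dim latents

E = rng.normal(size=(d_lat, d_obs)) * 0.1  # encoder: infer latent state
A = 0.9 * np.eye(d_lat)                    # stable linear latent dynamics
D = rng.normal(size=(d_obs, d_lat))        # decoder: latent -> data space

def forecast(x_last, horizon):
    """Roll the dynamics forward in latent space, decoding each
    latent state back into observation space."""
    z = E @ x_last               # inference step
    preds = []
    for _ in range(horizon):
        z = A @ z                # prediction happens on the latent
        preds.append(D @ z)      # generation of high-dim output
    return np.stack(preds)

preds = forecast(np.ones(d_obs), horizon=4)
```

The point of the design is that the recurrence runs in the 3-dimensional latent space; the 16-dimensional observations are only ever produced by decoding, which is what lets such models scale to structured, high-dimensional data.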
Mccosker, Christopher Mark. "Interacting constraints in high performance long jumping". Thesis, Queensland University of Technology, 2020. https://eprints.qut.edu.au/204284/1/Christopher_McCosker_Thesis.pdf.
Books on the topic "Dynamic Representation Learning":
Baum, Susan, e Robin Schader. Using a Positive Lens. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780190645472.003.0003.
Clavio, Galen. Social Media and Sports. Human Kinetics, 2017. http://dx.doi.org/10.5040/9781718221000.
Faflik, David. Urban Formalism. Fordham University Press, 2020. http://dx.doi.org/10.5422/fordham/9780823288045.001.0001.
The Expected Knowledge: What can we know about anything and everything? Tiruchirappalli: Sivashanmugam Palaniappan, 2012.
Book chapters on the topic "Dynamic Representation Learning":
Feng, Hao, Yan Liu, Ziqiao Zhou, and Jing Chen. "Dynamic Network Change Detection via Dynamic Network Representation Learning". In Communications and Networking, 642–58. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-41114-5_48.
Yu, Yanwei, Huaxiu Yao, Hongjian Wang, Xianfeng Tang, and Zhenhui Li. "Representation Learning for Large-Scale Dynamic Networks". In Database Systems for Advanced Applications, 526–41. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-91458-9_32.
Mendes, M. E. S., and L. Sacks. "Dynamic Knowledge Representation for e-Learning Applications". In Enhancing the Power of the Internet, 259–82. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-45218-8_12.
Fathy, Ahmed, and Kan Li. "TemporalGAT: Attention-Based Dynamic Graph Representation Learning". In Advances in Knowledge Discovery and Data Mining, 413–23. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-47426-3_32.
Zhang, Si, Yinglong Xia, Yan Zhu, and Hanghang Tong. "Representation Learning on Dynamic Network of Networks". In Proceedings of the 2023 SIAM International Conference on Data Mining (SDM), 298–306. Philadelphia, PA: Society for Industrial and Applied Mathematics, 2023. http://dx.doi.org/10.1137/1.9781611977653.ch34.
Lowe, Richard. "User-Controllable Animated Diagrams: The Solution for Learning Dynamic Content?" In Diagrammatic Representation and Inference, 355–59. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-25931-2_38.
Xu, Liancheng, Xiaoxiang Wang, Lei Guo, Jinyu Zhang, Xiaoqi Wu, and Xinhua Wang. "Candidate-Aware Dynamic Representation for News Recommendation". In Artificial Neural Networks and Machine Learning – ICANN 2023, 272–84. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44195-0_23.
Li, Yongfang, Liang Chang, Guanjun Rao, Phatpicha Yochum, Yiqin Luo, and Tianlong Gu. "Representation Learning for Knowledge Graph with Dynamic Step". In Communications in Computer and Information Science, 382–93. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-2203-7_29.
Denisov, Mikhail, Anton Anikin, and Oleg Sychev. "Dynamic Flowcharts for Enhancing Learners' Understanding of the Control Flow During Programming Learning". In Diagrammatic Representation and Inference, 408–11. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-86062-2_42.
Gong, Changwei, Changhong Jing, Yanyan Shen, and Shuqiang Wang. "Dynamic Community Detection via Adversarial Temporal Graph Representation Learning". In Neural Computing for Advanced Applications, 1–13. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-6135-9_1.
Conference papers on the topic "Dynamic Representation Learning":
Kose, Oyku Deniz, and Yanning Shen. "Dynamic Fair Node Representation Learning". In ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2023. http://dx.doi.org/10.1109/icassp49357.2023.10094834.
Zhang, Siwei, Yun Xiong, Yao Zhang, Yiheng Sun, Xi Chen, Yizhu Jiao, and Yangyong Zhu. "RDGSL: Dynamic Graph Representation Learning with Structure Learning". In CIKM '23: The 32nd ACM International Conference on Information and Knowledge Management. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3583780.3615023.
Tong, Minglei, and Han Hong. "Learning sparse representation for dynamic gesture recognition". In 2015 IEEE Workshop on Signal Processing Systems (SiPS). IEEE, 2015. http://dx.doi.org/10.1109/sips.2015.7345021.
Tan, Zhen, Xiang Zhao, Yang Fang, Weidong Xiao, and Jiuyang Tang. "Knowledge Representation Learning via Dynamic Relation Spaces". In 2016 IEEE 16th International Conference on Data Mining Workshops (ICDMW). IEEE, 2016. http://dx.doi.org/10.1109/icdmw.2016.0102.
Racz, Janos, and Tamas Klotz. "Knowledge representation by dynamic competitive learning techniques". In SPIE Proceedings, edited by Steven K. Rogers. SPIE, 1991. http://dx.doi.org/10.1117/12.45015.
Tian, Sheng, Ruofan Wu, Leilei Shi, Liang Zhu, and Tao Xiong. "Self-supervised Representation Learning on Dynamic Graphs". In CIKM '21: The 30th ACM International Conference on Information and Knowledge Management. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3459637.3482389.
Wu, Zian, Huijun Tang, and Huan Liu. "Bayesian Contrastive Representation Learning for Dynamic Graph". In 2022 IEEE Smartworld, Ubiquitous Intelligence & Computing, Scalable Computing & Communications, Digital Twin, Privacy Computing, Metaverse, Autonomous & Trusted Vehicles (SmartWorld/UIC/ScalCom/DigitalTwin/PriComp/Meta). IEEE, 2022. http://dx.doi.org/10.1109/smartworld-uic-atc-scalcom-digitaltwin-pricomp-metaverse56740.2022.00355.
Luo, Yiqin, Liang Chang, Guanjun Rao, Wei Chen, and Tianlong Gu. "Representation Learning for Knowledge Graph with Dynamic Margin". In 2018 11th International Symposium on Computational Intelligence and Design (ISCID). IEEE, 2018. http://dx.doi.org/10.1109/iscid.2018.10171.
Liu, Yang, Yan Liu, Shenghua Zhong, and Keith C. C. Chan. "Trajectory representation of dynamic texture via manifold learning". In 2010 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2010. http://dx.doi.org/10.1109/icassp.2010.5496195.
Liu, Zhijun, Chao Huang, Yanwei Yu, Peng Song, Baode Fan, and Junyu Dong. "Dynamic Representation Learning for Large-Scale Attributed Networks". In CIKM '20: The 29th ACM International Conference on Information and Knowledge Management. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3340531.3411945.
Organization reports on the topic "Dynamic Representation Learning":
Mishra, Umakant, and Sagar Gautam. Improving and testing machine learning methods for benchmarking soil carbon dynamics representation of land surface models. Office of Scientific and Technical Information (OSTI), September 2022. http://dx.doi.org/10.2172/1891184.
Daudelin, Francois, Lina Taing, Lucy Chen, Claudia Abreu Lopes, Adeniyi Francis Fagbamigbe, and Hamid Mehmood. Mapping WASH-related disease risk: A review of risk concepts and methods. United Nations University Institute for Water, Environment and Health, December 2021. http://dx.doi.org/10.53328/uxuo4751.