A selection of scholarly literature on the topic "Author and Document Representation Learning"
Format your source in APA, MLA, Chicago, Harvard, and other citation styles
Browse lists of current articles, books, dissertations, theses, and other scholarly sources on the topic "Author and Document Representation Learning".
Next to each work in the bibliography there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of a publication as a .pdf file and read its abstract online, when these are available in the metadata.
Journal articles on the topic "Author and Document Representation Learning":
Para, Upendar, and M. S. Patel. "A New Term Representation Method for Gender and Age Prediction." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 5s (May 17, 2023): 90–104. http://dx.doi.org/10.17762/ijritcc.v11i5s.6633.
Ma, Yingying, Youlong Wu, and Chengqiang Lu. "A Graph-Based Author Name Disambiguation Method and Analysis via Information Theory." Entropy 22, no. 4 (April 7, 2020): 416. http://dx.doi.org/10.3390/e22040416.
Stoean, Catalin, and Daniel Lichtblau. "Author Identification Using Chaos Game Representation and Deep Learning." Mathematics 8, no. 11 (November 2, 2020): 1933. http://dx.doi.org/10.3390/math8111933.
Pooja, Km, Samrat Mondal, and Joydeep Chandra. "Exploiting Higher Order Multi-dimensional Relationships with Self-attention for Author Name Disambiguation." ACM Transactions on Knowledge Discovery from Data 16, no. 5 (October 31, 2022): 1–23. http://dx.doi.org/10.1145/3502730.
Kavuri, Karunakar, and M. Kavitha. "A Word Embeddings based Approach for Author Profiling: Gender and Age Prediction." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 7s (July 13, 2023): 239–50. http://dx.doi.org/10.17762/ijritcc.v11i7s.6996.
Buffone, Brittany, Ilena Djuana, Katherine Yang, Kyle J. Wilby, Maguy S. El Hajj, and Kerry Wilbur. "Diversity in health professional education scholarship: a document analysis of international author representation in leading journals." BMJ Open 10, no. 11 (November 2020): e043970. http://dx.doi.org/10.1136/bmjopen-2020-043970.
Popova, Y. B., and A. V. Goloburda. "ALGORITHMIC AND PROGRAM IMPLEMENTATION OF THE PLAGIARISM DEFINITION IN LEARNING MANAGEMENT SYSTEMS." System Analysis and Applied Information Science, no. 1 (June 12, 2018): 71–78. http://dx.doi.org/10.21122/2309-4923-2018-1-71-78.
Popova, Oleksandra. "ECONOMIC AND LEGAL DISCOURSE: PARADIGM OF CHANGES IN THE XXI CENTURY (ON THE MATERIAL OF CHINESE, ENGLISH AND UKRAINIAN LANGUAGES)." Naukovy Visnyk of South Ukrainian National Pedagogical University named after K. D. Ushynsky: Linguistic Sciences 2022, no. 34 (July 2022): 61–73. http://dx.doi.org/10.24195/2616-5317-2022-34-6.
Dalyan, Tuğba, Hakan Ayral, and Özgür Özdemir. "A Comprehensive Study of Learning Approaches for Author Gender Identification." Information Technology and Control 51, no. 3 (September 23, 2022): 429–45. http://dx.doi.org/10.5755/j01.itc.51.3.29907.
Tarmizi, Nursyahirah, Suhaila Saee, and Dayang Hanani Abang Ibrahim. "TOWARDS CURBING CYBER-BULLYING IN MALAYSIA BY AUTHOR IDENTIFICATION OF IBAN AND KADAZANDUSUN OSN TEXT USING DEEP LEARNING." ASEAN Engineering Journal 13, no. 2 (May 31, 2023): 145–57. http://dx.doi.org/10.11113/aej.v13.19171.
Dissertations on the topic "Author and Document Representation Learning":
Terreau, Enzo. "Apprentissage de représentations d'auteurs et d'autrices à partir de modèles de langue pour l'analyse des dynamiques d'écriture." Electronic Thesis or Diss., Lyon 2, 2024. http://www.theses.fr/2024LYO20001.
The recent, massive democratization of digital tools has empowered individuals to generate and share information on the web through blogs, social networks, sharing platforms, and more. The exponential growth of available information, mostly textual data, requires the development of Natural Language Processing (NLP) models to represent it mathematically and subsequently classify, sort, or recommend it. This is the essence of representation learning: constructing a low-dimensional space in which the distances between projected objects (words, texts) reflect real-world distances, whether semantic, stylistic, and so on. The proliferation of available data, coupled with the rise in computing power and deep learning, has led to highly effective language models for word and document embeddings. These models capture complex semantic and linguistic notions while remaining accessible to everyone and easily adaptable to specific tasks or corpora, and they can be used to build author embeddings. However, it is difficult to determine which aspects a model relies on to bring authors closer together or move them apart. In a literary context, similarities should primarily reflect writing style, which raises several issues: the definition of literary style is vague, and assessing the stylistic difference between two texts and their embeddings is complex. In computational linguistics, approaches that aim to characterize style are mainly statistical, relying on language markers. In light of this, our first contribution is a framework to evaluate the ability of language models to capture writing style. Beforehand, we review text embedding models in machine learning and deep learning, at the word, document, and author levels, and present how the notion of literary style has been treated in Natural Language Processing, which forms the basis of our method.
Transferring knowledge between black-box large language models and these methods derived from linguistics remains a complex task. Our second contribution aims to reconcile the two approaches through a representation learning model focused on style, VADES (Variational Author and Document Embedding with Style). We compare our model to state-of-the-art ones and analyze their limitations in this context. Finally, we delve into dynamic author and document embeddings: temporal information is crucial, allowing for a finer-grained representation of writing dynamics. After presenting the state of the art, we describe our last contribution, B²ADE (Brownian Bridge Author and Document Embedding), which models authors as trajectories. We conclude by outlining several leads for improving our methods and highlighting potential research directions.
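As a rough illustration of the trajectory view behind B²ADE: an author's state between two observed embeddings can be interpolated with a discrete Brownian bridge. The parameterization below is an assumption made for illustration, not the thesis's actual model.

```python
import numpy as np

def brownian_bridge(z_start, z_end, n_steps, sigma=0.1, rng=None):
    """Sample a discrete Brownian bridge between two embedding vectors.

    The walk is pulled toward the endpoint at each step, and the noise
    shrinks to zero near it, so the path is pinned at both ends.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    z_start = np.asarray(z_start, dtype=float)
    z_end = np.asarray(z_end, dtype=float)
    path = [z_start]
    for t in range(1, n_steps + 1):
        remaining = n_steps - t + 1
        mean = path[-1] + (z_end - path[-1]) / remaining
        noise = sigma * np.sqrt((n_steps - t) / n_steps) \
            * rng.standard_normal(z_start.shape[0])
        path.append(mean + noise)
    return np.stack(path)

# A 10-step trajectory between two 4-dimensional author states.
path = brownian_bridge(np.zeros(4), np.ones(4), n_steps=10)
```

Intermediate points on the path can then be read as the author's embedding at intermediate times, which is what makes a trajectory-based representation finer-grained than a single static vector.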
Sayadi, Karim. "Classification du texte numérique et numérisé. Approche fondée sur les algorithmes d'apprentissage automatique." Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066079/document.
Different disciplines in the humanities, such as philology or palaeography, face complex and time-consuming tasks whenever it comes to examining their data sources. The introduction of computational approaches in the humanities makes it possible to address issues such as semantic analysis and systematic archiving. The conceptual models developed are based on algorithms that are then hard-coded in order to automate these tedious tasks. In the first part of the thesis, we propose a novel method to build a semantic space based on topic modeling. In the second part, in order to classify historical documents according to their script, we propose a novel representation learning method based on stacked convolutional auto-encoders. The goal is to automatically learn representations of the script or the written language.
Wauquier, Pauline. "Task driven representation learning." Thesis, Lille 3, 2017. http://www.theses.fr/2017LIL30005/document.
Machine learning proposes numerous algorithms to solve the different tasks that can be extracted from real-world prediction problems. To solve these tasks, most machine learning algorithms rely in some way on relationships between instances. Pairwise relationships can be obtained by computing a distance between the vectorial representations of the instances. Given the available vectorial representation of the data, none of the commonly used distances is guaranteed to be representative of the task to be solved. In this work, we investigate the gain of tuning the vectorial representation of the data to the distance in order to solve the task more optimally. We focus in particular on an existing graph-based algorithm for the classification task. We first introduce an algorithm that learns a mapping of the data into a representation space which allows an optimal graph-based classification. By projecting the data into a representation space in which the predefined distance is representative of the task, we aim to outperform the initial vectorial representation of the data when solving the task. A theoretical analysis of the introduced algorithm defines the conditions ensuring an optimal classification, and a set of empirical experiments allows us to evaluate the gain of the approach and to temper the theoretical analysis.
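A minimal sketch of the underlying idea (learning a projection under which the fixed Euclidean distance becomes representative of the labels); this is a generic pairwise metric-learning loop, not the thesis's actual algorithm:

```python
import numpy as np

def learn_projection(X, y, dim_out, lr=0.002, margin=4.0, n_iter=3000, seed=0):
    """Learn a linear map W: same-class pairs are pulled together in the
    projected space, different-class pairs are pushed apart up to a margin."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.standard_normal((d, dim_out)) * 0.1
    for _ in range(n_iter):
        i, j = rng.integers(0, n, size=2)
        if i == j:
            continue
        delta = X[i] - X[j]
        proj = delta @ W
        if y[i] == y[j]:
            grad = 2.0 * np.outer(delta, proj)       # shrink same-class distance
        elif proj @ proj < margin:
            grad = -2.0 * np.outer(delta, proj)      # grow it, up to the margin
        else:
            continue
        W -= lr * grad
    return W
```

With `W` learned this way, a k-nearest-neighbor graph built from distances in the projected space aligns better with the labels than one built from the raw features, which is the kind of gain the thesis quantifies for graph-based classification.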
Dos, Santos Ludovic. "Representation learning for relational data." Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066480/document.
The increasing use of social and sensor networks generates a large quantity of data that can be represented as complex graphs. Many tasks can be imagined on such data, from information analysis to prediction and retrieval, in which the relations between graph nodes should be informative. In this thesis, we propose models for three different tasks: graph node classification, relational time series forecasting, and collaborative filtering. All the proposed models use the representation learning framework in its deterministic or Gaussian variant. First, we propose two algorithms for the heterogeneous graph labeling task, one using deterministic representations and the other Gaussian representations. Contrary to other state-of-the-art models, our solution is able to learn edge weights while simultaneously learning the representations and the classifiers. Second, we propose an algorithm for relational time series forecasting, where the observations are correlated not only within each series but also across the different series; we use Gaussian representations in this contribution. This was an opportunity to see in which ways using Gaussian representations instead of deterministic ones is profitable. Finally, we apply the Gaussian representation learning approach to the collaborative filtering task. This is preliminary work to see whether the properties of Gaussian representations observed on the two previous tasks also hold for the ranking one, with the goal of then generalizing the approach to richer relational data and not only bipartite graphs between users and items.
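To make the Gaussian variant concrete: each node is represented by a mean and a variance vector, and a natural asymmetric dissimilarity between two such embeddings is the KL divergence between diagonal Gaussians. This is a standard choice in Gaussian-embedding work; the thesis's exact objective is not reproduced here.

```python
import numpy as np

def kl_diag_gaussians(mu1, var1, mu2, var2):
    """KL( N(mu1, diag(var1)) || N(mu2, diag(var2)) ), closed form
    for Gaussians with diagonal covariance."""
    return 0.5 * float(np.sum(np.log(var2 / var1)
                              + (var1 + (mu1 - mu2) ** 2) / var2
                              - 1.0))
```

Unlike a point embedding, the variance lets a node express uncertainty about its position, and the asymmetry of the divergence can encode directed relations between nodes.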
Belharbi, Soufiane. "Neural networks regularization through representation learning." Thesis, Normandie, 2018. http://www.theses.fr/2018NORMIR10/document.
Neural network models and deep models are among the leading, state-of-the-art models in machine learning and have been applied in many different domains. The most successful deep neural models are the ones with many layers, which greatly increases their number of parameters. Training such models requires a large number of training samples, which is not always available. One of the fundamental issues in neural networks is overfitting, the issue tackled in this thesis; the problem often occurs when large models are trained using few training samples. Many approaches have been proposed to prevent the network from overfitting and improve its generalization performance, such as data augmentation, early stopping, parameter sharing, unsupervised learning, dropout, and batch normalization. In this thesis, we tackle the overfitting issue from a representation learning perspective, considering the situation where few training samples are available, which is the case in many real-world applications. We propose three contributions. The first one, presented in chapter 2, is dedicated to structured output problems, performing multivariate regression when the output variable y contains structural dependencies between its components. Our proposal aims mainly at exploiting these dependencies by learning them in an unsupervised way. Validated on a facial landmark detection problem, learning the structure of the output data is shown to improve the network's generalization and speed up its training. The second contribution, described in chapter 3, deals with the classification task, where we propose to exploit prior knowledge about the internal representation of the hidden layers in neural networks. This prior is based on the idea that samples within the same class should have the same internal representation. We formulate this prior as a penalty that we add to the training cost to be minimized.
Empirical experiments on MNIST and its variants showed an improvement of the network's generalization when using only few training samples. Our last contribution, presented in chapter 4, shows the interest of transfer learning in applications where only few samples are available. The idea consists in re-using the filters of convolutional networks pre-trained on large datasets such as ImageNet. Such pre-trained filters are plugged into a new convolutional network with new dense layers, and the whole network is then trained on a new task. In this contribution, we provide an automatic system based on this learning scheme, with an application to the medical domain where the task consists in localizing the third lumbar vertebra in a 3D CT scan. A pre-processing of the 3D CT scan to obtain a 2D representation and a post-processing to refine the decision are included in the proposed system. This work was done in collaboration with the Rouen Henri Becquerel Center clinic, which provided us with the data.
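One simple way to write down such a "same class, same representation" penalty is the mean squared distance of each hidden representation to its class centroid. This is an illustrative formulation; the thesis's exact penalty term may differ.

```python
import numpy as np

def same_class_penalty(H, y):
    """H: (n, d) hidden-layer activations, y: (n,) class labels.

    Returns the mean squared distance of each sample to its class centroid;
    adding this term to the training cost pulls same-class representations
    toward each other.
    """
    penalty = 0.0
    for c in np.unique(y):
        Hc = H[y == c]
        penalty += float(((Hc - Hc.mean(axis=0)) ** 2).sum())
    return penalty / len(H)
```

The penalty is zero exactly when every sample of a class shares one internal representation, and it grows as representations within a class spread out.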
Vukotic, Verdran. "Deep Neural Architectures for Automatic Representation Learning from Multimedia Multimodal Data." Thesis, Rennes, INSA, 2017. http://www.theses.fr/2017ISAR0015/document.
In this dissertation, the thesis that deep neural networks are suited for the analysis of visual, textual, and fused visual-textual content is discussed. This work evaluates the ability of deep neural networks to learn multimodal representations automatically, in either unsupervised or supervised manners, and brings the following main contributions: 1) Recurrent neural networks for spoken language understanding (slot filling): different architectures are compared for this task with the aim of modeling both the input context and output label dependencies. 2) Action prediction from single images: we propose an architecture that allows us to predict human actions from a single image; it is evaluated on videos by using only one frame as input. 3) Bidirectional multimodal encoders: the main contribution of this thesis is a neural architecture that translates from one modality to the other and conversely, and offers an improved multimodal representation space where the initially disjoint representations can be translated and fused. This enables improved multimodal fusion of multiple modalities. The architecture was extensively studied and evaluated in international benchmarks within the task of video hyperlinking, where it defined the current state of the art. 4) Generative adversarial networks for multimodal fusion: continuing on the topic of multimodal fusion, we evaluate the possibility of using conditional generative adversarial networks to learn multimodal representations; in addition to providing such representations, generative adversarial networks make it possible to visualize the learned model directly in the image domain.
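A toy linear stand-in for the bidirectional encoders in contribution 3: the real architecture is a deep network, so the least-squares translators and the concatenation-based fusion below are assumptions made only to keep the sketch self-contained.

```python
import numpy as np

def fit_translators(A, B):
    """Fit linear cross-modal translators A->B and B->A by least squares.
    A and B are (n_samples, dim) matrices of paired modality features."""
    W_ab, *_ = np.linalg.lstsq(A, B, rcond=None)
    W_ba, *_ = np.linalg.lstsq(B, A, rcond=None)
    return W_ab, W_ba

def fused_representation(a, b, W_ab, W_ba):
    """Fuse each modality with its translation from the other one."""
    return np.concatenate([a, b, a @ W_ab, b @ W_ba])
```

The point of translating in both directions is that each modality's representation is enriched with what the other modality "would have said", which is what makes the fused space better than simply concatenating the raw features.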
Karpate, Yogesh. "Enhanced representation & learning of magnetic resonance signatures in multiple sclerosis." Thesis, Rennes 1, 2015. http://www.theses.fr/2015REN1S068/document.
Multiple Sclerosis (MS) is an acquired inflammatory disease which causes disabilities in young adults and is common in the northern hemisphere. This PhD work focuses on the characterization and modeling of multidimensional MRI signatures of MS lesions (MSL). The objective is to improve image representation and learning for visual recognition, where high-level information such as MSL contained in MRI is automatically extracted. We propose a new longitudinal intensity normalization algorithm for multichannel MRI in the presence of MS lesions, which provides consistent and reliable longitudinal detections; it is primarily based on learning the tissue intensities from multichannel MRI using robust Gaussian mixture modeling. Further, we propose two MSL detection methods, based on a statistical patient-to-population comparison framework and on probabilistic one-class learning. We evaluated the proposed algorithms on multi-center databases to verify their efficacy.
Soltan-Zadeh, Yasaman. "Improved rule-based document representation and classification using genetic programming." Thesis, Royal Holloway, University of London, 2011. http://repository.royalholloway.ac.uk/items/479a1773-779b-8b24-b334-7ed485311abe/8/.
Lu, Ying. "Transfer Learning for Image Classification." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSEC045/document.
When learning a classification model for a new target domain with only a small amount of training samples, brute-force application of machine learning algorithms generally leads to over-fitted classifiers with poor generalization ability. On the other hand, collecting a sufficient number of manually labeled training samples may prove very expensive. Transfer learning methods aim to solve this kind of problem by transferring knowledge from a related source domain, which has much more data, to help classification in the target domain. Depending on the assumptions about the target and source domains, transfer learning can be divided into three categories: inductive transfer learning (ITL), transductive transfer learning (domain adaptation), and unsupervised transfer learning. We focus on the first one, which assumes that the target task and source task are different but related. More specifically, we assume that both the target task and source task are classification tasks, while the target categories and source categories are different but related. We propose two different methods to approach this ITL problem. In the first work, we propose a new discriminative transfer learning method, namely DTL, combining a series of hypotheses made both by the model learned with target training samples and by the additional models learned with source category samples. Specifically, we use the sparse reconstruction residual as a basic discriminant and enhance its discriminative power by comparing two residuals from a positive and a negative dictionary. On this basis, we make use of similarities and dissimilarities by choosing both positively and negatively correlated source categories to form additional dictionaries. A new cost function based on the Wilcoxon-Mann-Whitney statistic is proposed to choose the additional dictionaries with unbalanced training data.
Also, two parallel boosting processes are applied to the positive and negative data distributions to further improve classifier performance. On two different image classification databases, the proposed DTL consistently outperforms other state-of-the-art transfer learning methods while maintaining a very efficient runtime. In the second work, we combine the power of optimal transport and deep neural networks to tackle the ITL problem. Specifically, we propose a novel method to jointly fine-tune a deep neural network with source data and target data. By adding an optimal transport loss (OT loss) between source and target classifier predictions as a constraint on the source classifier, the proposed Joint Transfer Learning Network (JTLN) can effectively learn knowledge useful for target classification from source data. Furthermore, by using different kinds of metrics as the cost matrix for the OT loss, JTLN can incorporate different prior knowledge about the relatedness between target categories and source categories. We carried out experiments with JTLN based on AlexNet on image classification datasets, and the results verify the effectiveness of the proposed JTLN in comparison with standard consecutive fine-tuning. To the best of our knowledge, the proposed JTLN is the first work to tackle ITL with deep neural networks while incorporating prior knowledge on the relatedness between target and source categories. This joint transfer learning with an OT loss is general and can also be applied to other kinds of neural networks.
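The OT loss above compares two prediction histograms under a cost matrix; a minimal entropy-regularized (Sinkhorn) version can be sketched as follows. The regularization strength and iteration count are illustrative choices, not JTLN's actual settings.

```python
import numpy as np

def sinkhorn_ot(p, q, C, eps=0.1, n_iter=200):
    """Entropy-regularized optimal transport cost between histograms p and q
    (strictly positive, each summing to 1) under cost matrix C."""
    K = np.exp(-C / eps)                  # Gibbs kernel
    u = np.ones_like(p)
    for _ in range(n_iter):               # Sinkhorn fixed-point iterations
        v = q / (K.T @ u)
        u = p / (K @ v)
    P = u[:, None] * K * v[None, :]       # transport plan with marginals p, q
    return float(np.sum(P * C))
```

In JTLN's setting, the cost matrix encodes how related each source category is to each target category, so prediction mass can move cheaply between related classes and expensively between unrelated ones.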
Alaverdyan, Zaruhi. "Unsupervised representation learning for anomaly detection on neuroimaging. Application to epilepsy lesion detection on brain MRI." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSEI005/document.
This work represents an attempt to develop a computer-aided diagnosis system for epilepsy lesion detection based on neuroimaging data, in particular T1-weighted and FLAIR MR sequences. Given the complexity of the task and the lack of a representative voxel-level labeled data set, the adopted approach, first introduced in Azami et al., 2016, consists in casting the lesion detection task as a per-voxel outlier detection problem. The system is based on training a one-class SVM model for each voxel in the brain on a set of healthy controls, so as to model the normality of that voxel. The main focus of this work is to design representation learning mechanisms capturing the most discriminant information from multimodal imaging. Manual features, designed to mimic the characteristics of certain epilepsy lesions on neuroimaging data, such as focal cortical dysplasia (FCD), are tailored to individual pathologies and cannot discriminate a large range of epilepsy lesions; they reflect the known characteristics of lesion appearance but might not be optimal for the task at hand. Our first contribution consists in proposing various unsupervised neural architectures as potential feature extraction mechanisms and, eventually, introducing a novel configuration of siamese networks to be plugged into the outlier detection context. The proposed system, evaluated on a set of T1-weighted MRIs of epilepsy patients, showed promising performance but also room for improvement. To this end, we considered extending the CAD system to accommodate multimodal data, which offers complementary information on the problem at hand. Our second contribution, therefore, consists in proposing strategies to combine representations of different imaging modalities into a single anomaly detection framework. The extended system showed a significant improvement on the task of epilepsy lesion detection on T1-weighted and FLAIR MR images.
Our last contribution focuses on the integration of PET data into the system. Given the small number of available PET images, we attempt to synthesize PET data from the corresponding MRI acquisitions. Finally, we show an improved performance of the system when it is trained on a mixture of synthesized and real images.
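The per-voxel outlier scheme can be sketched with a deliberately simplified detector: a Gaussian z-score per voxel standing in for the thesis's one-class SVMs (all function names and the threshold below are illustrative assumptions).

```python
import numpy as np

def fit_voxelwise_normality(controls):
    """controls: (n_subjects, n_voxels) features from healthy controls.
    Returns the per-voxel mean and standard deviation of the normal population."""
    mu = controls.mean(axis=0)
    sd = controls.std(axis=0) + 1e-8      # guard against zero variance
    return mu, sd

def anomaly_map(patient, mu, sd, thresh=3.0):
    """Per-voxel outlier score |z| and a binary detection map."""
    z = np.abs(patient - mu) / sd
    return z, z > thresh
```

The thesis's contribution is precisely what feeds this scheme: instead of raw intensities, each voxel's feature vector comes from learned (e.g. siamese) representations, which makes the per-voxel normality models far more discriminant.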
Books on the topic "Author and Document Representation Learning":
Edwards, Carolyn, Lella Gandini, and George Forman, eds. The Hundred Languages of Children. 3rd ed. ABC-CLIO, LLC, 2011. http://dx.doi.org/10.5040/9798400667664.
Book chapters on the topic "Author and Document Representation Learning":
Liu, Zhiyuan, Yankai Lin, and Maosong Sun. "Document Representation." In Representation Learning for Natural Language Processing, 91–123. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-5573-2_5.
Kamkarhaghighi, Mehran, Eren Gultepe, and Masoud Makrehchi. "Deep Learning for Document Representation." In Handbook of Deep Learning Applications, 101–10. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-11479-4_5.
Ding, Ning, Yankai Lin, Zhiyuan Liu, and Maosong Sun. "Sentence and Document Representation Learning." In Representation Learning for Natural Language Processing, 81–125. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1600-9_4.
Cosma, Adrian, Mihai Ghidoveanu, Michael Panaitescu-Liess, and Marius Popescu. "Self-supervised Representation Learning on Document Images." In Document Analysis Systems, 103–17. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-57058-3_8.
Wu, Xiaoyun, Rohini Srihari, and Zhaohui Zheng. "Document Representation for One-Class SVM." In Machine Learning: ECML 2004, 489–500. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-30115-8_45.
Saeidi, Mozhgan, Evangelos Milios, and Norbert Zeh. "Graph Representation Learning in Document Wikification." In Document Analysis and Recognition – ICDAR 2021 Workshops, 509–24. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-86159-9_37.
López-Monroy, Adrián Pastor, Manuel Montes-y-Gómez, Luis Villaseñor-Pineda, Jesús Ariel Carrasco-Ochoa, and José Fco Martínez-Trinidad. "A New Document Author Representation for Authorship Attribution." In Lecture Notes in Computer Science, 283–92. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-31149-9_29.
Zhang, Yue, Liying Zhang, and Yao Liu. "Linked Document Classification by Network Representation Learning." In Lecture Notes in Computer Science, 302–13. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01716-3_25.
Feng, Hao, Wengang Zhou, Jiajun Deng, Yuechen Wang, and Houqiang Li. "Geometric Representation Learning for Document Image Rectification." In Lecture Notes in Computer Science, 475–92. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19836-6_27.
Li, Luyang, Wenjing Ren, Bing Qin, and Ting Liu. "Learning Document Representation for Deceptive Opinion Spam Detection." In Lecture Notes in Computer Science, 393–404. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-25816-4_32.
Conference papers on the topic "Author and Document Representation Learning":
Li, Peizhao, Jiuxiang Gu, Jason Kuen, Vlad I. Morariu, Handong Zhao, Rajiv Jain, Varun Manjunatha, and Hongfu Liu. "SelfDoc: Self-Supervised Document Representation Learning." In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2021. http://dx.doi.org/10.1109/cvpr46437.2021.00560.
Frery, Jordan, Christine Largeron, and Mihaela Juganaru-Mathieu. "Author identification by automatic learning." In 2015 13th International Conference on Document Analysis and Recognition (ICDAR). IEEE, 2015. http://dx.doi.org/10.1109/icdar.2015.7333748.
Chen, Xiuying, Shen Gao, Chongyang Tao, Yan Song, Dongyan Zhao, and Rui Yan. "Iterative Document Representation Learning Towards Summarization with Polishing." In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA, USA: Association for Computational Linguistics, 2018. http://dx.doi.org/10.18653/v1/d18-1442.
Menon, Remya, K. Harikrishnan, and Ganesh Varier. "Parallel Approach for Document Representation using Dictionary Learning." In 2019 2nd International Conference on Intelligent Computing, Instrumentation and Control Technologies (ICICICT). IEEE, 2019. http://dx.doi.org/10.1109/icicict46008.2019.8993114.
Xu, Peng, Xinchi Chen, Xiaofei Ma, Zhiheng Huang, and Bing Xiang. "Contrastive Document Representation Learning with Graph Attention Networks." In Findings of the Association for Computational Linguistics: EMNLP 2021. Stroudsburg, PA, USA: Association for Computational Linguistics, 2021. http://dx.doi.org/10.18653/v1/2021.findings-emnlp.327.
Dewalkar, Swapnil, and Maunendra Sankar Desarkar. "Multi-Context Information for Word Representation Learning." In DocEng '19: ACM Symposium on Document Engineering 2019. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3342558.3345418.
Peng, Liwen, Siqi Shen, Dongsheng Li, Jun Xu, Yongquan Fu, and Huayou Su. "Author Disambiguation through Adversarial Network Representation Learning." In 2019 International Joint Conference on Neural Networks (IJCNN). IEEE, 2019. http://dx.doi.org/10.1109/ijcnn.2019.8852233.
"Author index." In 2018 First International Workshop on Deep and Representation Learning (IWDRL). IEEE, 2018. http://dx.doi.org/10.1109/iwdrl.2018.8358216.
Zhou, Xinjie, Xiaojun Wan, and Jianguo Xiao. "Cross-Lingual Sentiment Classification with Bilingual Document Representation Learning." In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Stroudsburg, PA, USA: Association for Computational Linguistics, 2016. http://dx.doi.org/10.18653/v1/p16-1133.
Tang, Duyu. "Sentiment-Specific Representation Learning for Document-Level Sentiment Analysis." In WSDM 2015: Eighth ACM International Conference on Web Search and Data Mining. New York, NY, USA: ACM, 2015. http://dx.doi.org/10.1145/2684822.2697035.
Reports of organizations on the topic "Author and Document Representation Learning":
Church, Joshua, LaKenya Walker, and Amy Bednar. Iterative Learning Algorithm for Records Analysis (ILARA) user manual. Engineer Research and Development Center (U.S.), September 2021. http://dx.doi.org/10.21079/11681/41845.
Learning About Women and Urban Services in Latin America and the Caribbean. Population Council, 1986. http://dx.doi.org/10.31899/pgy1986.1000.