To view other types of publications on this topic, follow the link: Dirichlet allocation.

Journal articles on the topic "Dirichlet allocation"

Consult the top 50 journal articles for your research on the topic "Dirichlet allocation."

Next to every work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, when these details are available in the metadata.

Browse journal articles across a wide range of disciplines and compile your bibliography correctly.

1

Du, Lan, Wray Buntine, Huidong Jin, and Changyou Chen. "Sequential latent Dirichlet allocation." Knowledge and Information Systems 31, no. 3 (June 10, 2011): 475–503. http://dx.doi.org/10.1007/s10115-011-0425-1.

2

Schwarz, Carlo. "Ldagibbs: A Command for Topic Modeling in Stata Using Latent Dirichlet Allocation." Stata Journal: Promoting communications on statistics and Stata 18, no. 1 (March 2018): 101–17. http://dx.doi.org/10.1177/1536867x1801800107.

Abstract:
In this article, I introduce the ldagibbs command, which implements latent Dirichlet allocation in Stata. Latent Dirichlet allocation is the most popular machine-learning topic model. Topic models automatically cluster text documents into a user-chosen number of topics. Latent Dirichlet allocation represents each document as a probability distribution over topics and represents each topic as a probability distribution over words. Therefore, latent Dirichlet allocation provides a way to analyze the content of large unclassified text data and an alternative to predefined document classifications.
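The abstract above summarizes the core LDA assumptions: each document is a distribution over topics, and each topic is a distribution over words. As a rough illustration only (this is not the ldagibbs command; the toy corpus, topic count, and hyperparameters below are invented for the sketch), a collapsed Gibbs sampler for LDA can be written in pure Python:

```python
import random
from collections import defaultdict

def lda_gibbs(docs, n_topics, alpha=0.1, beta=0.01, n_iter=200, seed=0):
    """Collapsed Gibbs sampling for LDA on tokenized documents."""
    rng = random.Random(seed)
    vocab = {w for doc in docs for w in doc}
    V = len(vocab)
    # Count tables: document-topic, topic-word, and per-topic token totals.
    nd = [[0] * n_topics for _ in docs]               # doc d, topic k
    nw = [defaultdict(int) for _ in range(n_topics)]  # topic k, word w
    nk = [0] * n_topics                               # tokens assigned to topic k
    z = []                                            # topic assignment per token
    for d, doc in enumerate(docs):
        zd = []
        for w in doc:
            k = rng.randrange(n_topics)
            zd.append(k)
            nd[d][k] += 1; nw[k][w] += 1; nk[k] += 1
        z.append(zd)
    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                nd[d][k] -= 1; nw[k][w] -= 1; nk[k] -= 1
                # Full conditional: p(k) ∝ (nd + alpha) * (nw + beta) / (nk + V*beta)
                weights = [(nd[d][t] + alpha) * (nw[t][w] + beta) / (nk[t] + V * beta)
                           for t in range(n_topics)]
                k = rng.choices(range(n_topics), weights=weights)[0]
                z[d][i] = k
                nd[d][k] += 1; nw[k][w] += 1; nk[k] += 1
    # Document-topic proportions (theta) estimated from the final counts.
    theta = [[(nd[d][k] + alpha) / (len(doc) + n_topics * alpha)
              for k in range(n_topics)] for d, doc in enumerate(docs)]
    return theta, nw

# Toy corpus with two themes (fruit vs. transport), invented for illustration.
docs = [["apple", "banana", "apple"], ["bus", "train", "bus"],
        ["banana", "apple", "fruit"], ["train", "bus", "rail"]]
theta, nw = lda_gibbs(docs, n_topics=2)
print([[round(p, 2) for p in row] for row in theta])
```

Each row of `theta` is one document's mixture over the two topics, which is exactly the "probability distribution over topics" representation the abstract describes.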
3

Yoshida, Takahiro, Ryohei Hisano, and Takaaki Ohnishi. "Gaussian hierarchical latent Dirichlet allocation: Bringing polysemy back." PLOS ONE 18, no. 7 (July 12, 2023): e0288274. http://dx.doi.org/10.1371/journal.pone.0288274.

Abstract:
Topic models are widely used to discover the latent representation of a set of documents. The two canonical models are latent Dirichlet allocation and Gaussian latent Dirichlet allocation, where the former uses multinomial distributions over words and the latter uses multivariate Gaussian distributions over pre-trained word embedding vectors as the latent topic representations. Compared with latent Dirichlet allocation, Gaussian latent Dirichlet allocation is limited in the sense that it does not capture the polysemy of a word such as “bank.” In this paper, we show that Gaussian latent Dirichlet allocation can recover the ability to capture polysemy by introducing a hierarchical structure in the set of topics that the model can use to represent a given document. Our Gaussian hierarchical latent Dirichlet allocation significantly improves polysemy detection compared with Gaussian-based models and provides more parsimonious topic representations compared with hierarchical latent Dirichlet allocation. Our extensive quantitative experiments show that our model also achieves better topic coherence and held-out document predictive accuracy over a wide range of corpora and word embedding vectors, significantly improving the capture of polysemy compared with GLDA and CGTM. Our model learns the underlying topic distribution and the hierarchical structure among topics simultaneously, which can be further used to understand the correlation among topics. Moreover, the added flexibility of our model does not necessarily increase the time complexity compared with GLDA and CGTM, which makes our model a good competitor to GLDA.
4

Archambeau, Cedric, Balaji Lakshminarayanan, and Guillaume Bouchard. "Latent IBP Compound Dirichlet Allocation." IEEE Transactions on Pattern Analysis and Machine Intelligence 37, no. 2 (February 2015): 321–33. http://dx.doi.org/10.1109/tpami.2014.2313122.

5

Pion-Tonachini, Luca, Scott Makeig, and Ken Kreutz-Delgado. "Crowd labeling latent Dirichlet allocation." Knowledge and Information Systems 53, no. 3 (April 19, 2017): 749–65. http://dx.doi.org/10.1007/s10115-017-1053-1.

6

Ramyadharshni, S. S., and P. Pabitha. "Topic Categorization on Social Network Using Latent Dirichlet Allocation." Bonfring International Journal of Software Engineering and Soft Computing 8, no. 2 (April 30, 2018): 16–20. http://dx.doi.org/10.9756/bijsesc.8390.

7

Syed, Shaheen, and Marco Spruit. "Exploring Symmetrical and Asymmetrical Dirichlet Priors for Latent Dirichlet Allocation." International Journal of Semantic Computing 12, no. 03 (September 2018): 399–423. http://dx.doi.org/10.1142/s1793351x18400184.

Abstract:
Latent Dirichlet Allocation (LDA) has gained much attention from researchers and is increasingly being applied to uncover underlying semantic structures from a variety of corpora. However, nearly all researchers use symmetrical Dirichlet priors, often unaware of the underlying practical implications that they bear. This research is the first to explore symmetrical and asymmetrical Dirichlet priors on topic coherence and human topic ranking when uncovering latent semantic structures from scientific research articles. More specifically, we examine the practical effects of several classes of Dirichlet priors on 2000 LDA models created from abstract and full-text research articles. Our results show that symmetrical or asymmetrical priors on the document–topic distribution or the topic–word distribution for full-text data have little effect on topic coherence scores and human topic ranking. In contrast, asymmetrical priors on the document–topic distribution for abstract data show a significant increase in topic coherence scores and improved human topic ranking compared to a symmetrical prior. Symmetrical or asymmetrical priors on the topic–word distribution show no real benefits for both abstract and full-text data.
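To make the symmetric-versus-asymmetric distinction above concrete, the effect of the two kinds of Dirichlet prior on a document–topic distribution can be shown with a stdlib-only sketch (the topic count and concentration values here are invented for illustration, not taken from the paper):

```python
import random

def dirichlet_sample(alphas, rng):
    """Draw one sample from Dirichlet(alphas) via normalized Gamma draws."""
    draws = [rng.gammavariate(a, 1.0) for a in alphas]
    total = sum(draws)
    return [g / total for g in draws]

rng = random.Random(42)
K = 5
symmetric = [0.1] * K                      # same concentration for every topic
asymmetric = [0.5, 0.2, 0.1, 0.05, 0.05]   # a few topics expected to dominate

def mean_sample(alphas, n=5000):
    """Average many draws to approximate the expected topic proportions."""
    acc = [0.0] * K
    for _ in range(n):
        for k, p in enumerate(dirichlet_sample(alphas, rng)):
            acc[k] += p
    return [a / n for a in acc]

print(mean_sample(symmetric))   # roughly uniform: each topic near 1/K
print(mean_sample(asymmetric))  # skewed toward the first topics
```

Under a symmetric prior every topic has the same expected share of a document; an asymmetric prior on the document–topic distribution builds in the expectation that some topics are systematically more prevalent, which is the practical implication the abstract examines.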
8

Li, Gen, and Hazri Jamil. "Teacher professional learning community and interdisciplinary collaborative teaching path under the informationization basic education model." Yugoslav Journal of Operations Research, no. 00 (2024): 29. http://dx.doi.org/10.2298/yjor2403029l.

Abstract:
The construction of a learning community cannot be separated from the participation of information technology. The current teacher learning community has problems of low interaction efficiency and insufficient enthusiasm for group cooperative teaching. This study adopts the Latent Dirichlet allocation method to process text data generated by teacher interaction from the evolution of knowledge topics in the learning community network space. At the same time, the interaction data of the network community learning space is used to extract the interaction characteristics between teachers, and a collaborative teaching group is formed using the K-means clustering algorithm. This study verifies the management effect of the Latent Dirichlet allocation and K-means algorithms in the learning community space through experiments. The experiment showed that the Latent Dirichlet allocation algorithm had the highest F1 value, 0.88, at a K value of 12, outperforming the collaborative filtering algorithm on the overall F1 value. At the same time, there were a total of 4 samples with incorrect judgments in Latent Dirichlet allocation, giving an accuracy of 86.7%, which is higher than other algorithm models. The results indicate that the proposed Latent Dirichlet allocation combined with the K-means algorithm has superior performance in the management of teacher professional learning communities and can effectively improve the service level of teacher work.
9

Garg, Mohit, and Priya Rangra. "Bibliometric Analysis of Latent Dirichlet Allocation." DESIDOC Journal of Library & Information Technology 42, no. 2 (February 28, 2022): 105–13. http://dx.doi.org/10.14429/djlit.42.2.17307.

Abstract:
Latent Dirichlet Allocation (LDA) has emerged as an important algorithm in big data analysis that finds groups of topics in text data. It posits that each text document consists of a group of topics, and each topic is a mixture of words related to it. With the emergence of a plethora of text data, LDA has become a popular algorithm for topic modeling among researchers from different domains. Therefore, it is essential to understand the trends of LDA research. Bibliometric techniques are established methods to study the research progress of a topic. In this study, bibliographic data of 18,715 publications that have cited LDA were extracted from the Scopus database. The software R and VOSviewer were used to carry out the analysis. The analysis revealed that research interest in LDA has grown exponentially. The results showed that most authors preferred “Book Series” followed by “Conference Proceedings” as the publication venue. The majority of the institutions and authors were from the USA, followed by China. The co-occurrence analysis of keywords indicated that text mining and machine learning are dominant topics in LDA research, with significant interest in social media. This study attempts to provide a more comprehensive analysis of the intellectual structure of LDA than previous studies.
10

Chauhan, Uttam, and Apurva Shah. "Topic Modeling Using Latent Dirichlet allocation." ACM Computing Surveys 54, no. 7 (September 30, 2022): 1–35. http://dx.doi.org/10.1145/3462478.

Abstract:
We cannot deal with a mammoth text corpus without summarizing it into a relatively small subset. A computational tool is sorely needed to understand such a gigantic pool of text. Probabilistic topic modeling discovers and explains an enormous collection of documents by reducing it to a topical subspace. In this work, we study the background and advancement of topic modeling techniques. We first introduce the preliminaries of topic modeling techniques and review their extensions and variations, such as topic modeling over various domains, hierarchical topic modeling, word-embedded topic models, and topic models from multilingual perspectives. In addition, research on topic modeling in distributed environments and topic visualization approaches is explored. We also cover implementation and evaluation techniques for topic models in brief. Comparison matrices are shown for the experimental results of the various categories of topic modeling, and diverse technical challenges and future directions are discussed.
11

Guo, Yunyan, and Jianzhong Li. "Distributed Latent Dirichlet Allocation on Streams." ACM Transactions on Knowledge Discovery from Data 16, no. 1 (July 3, 2021): 1–20. http://dx.doi.org/10.1145/3451528.

Abstract:
Latent Dirichlet Allocation (LDA) has been widely used for topic modeling, with applications spanning various areas such as natural language processing and information retrieval. While LDA on small and static datasets has been extensively studied, several real-world challenges are posed in practical scenarios where datasets are often huge and are gathered in a streaming fashion. As the state-of-the-art LDA algorithm on streams, Streaming Variational Bayes (SVB) introduced Bayesian updating to provide a streaming procedure. However, the utility of SVB is limited in applications since it ignored three challenges of processing real-world streams: topic evolution, data turbulence, and real-time inference. In this article, we propose a novel distributed LDA algorithm, referred to as StreamFed-LDA, to deal with challenges on streams. For topic modeling of streaming data, the ability to capture evolving topics is essential for practical online inference. To achieve this goal, StreamFed-LDA is based on a specialized framework that supports lifelong (continual) learning of evolving topics. On the other hand, data turbulence is commonly present in streams due to real-life events. In that case, the design of StreamFed-LDA allows the model to learn new characteristics from the most recent data while maintaining the historical information. On massive streaming data, it is difficult and crucial to provide real-time inference results. To increase the throughput and reduce the latency, StreamFed-LDA introduces additional techniques that substantially reduce both computation and communication costs in distributed systems. Experiments on four real-world datasets show that the proposed framework achieves significantly better performance of online inference compared with the baselines. At the same time, StreamFed-LDA also reduces the latency by orders of magnitude on real-world datasets.
12

Jonasson, Johan. "Slow mixing for Latent Dirichlet Allocation." Statistics & Probability Letters 129 (October 2017): 96–100. http://dx.doi.org/10.1016/j.spl.2017.05.011.

13

Tazibt, Ahmed Amir, and Farida Aoughlis. "Latent Dirichlet allocation-based temporal summarization." International Journal of Web Information Systems 15, no. 1 (April 4, 2019): 83–102. http://dx.doi.org/10.1108/ijwis-04-2018-0023.

Abstract:
Purpose: During crises such as accidents or disasters, an enormous volume of information is generated on the Web. Both people and decision-makers often need to identify relevant and timely content that can help in understanding what is happening and in taking the right decisions as soon as it appears online. However, relevant content can be disseminated in document streams, and the available information can contain redundant content published by different sources. Therefore, the need for automatic construction of summaries that aggregate important, non-redundant and non-outdated pieces of information is becoming critical.
Design/methodology/approach: The aim of this paper is to present a new temporal summarization approach based on a popular topic model in the information retrieval field, the Latent Dirichlet Allocation. The approach consists of filtering documents over streams, extracting relevant parts of information and then using topic modeling to reveal their underlying aspects in order to extract the most relevant and novel pieces of information to be added to the summary.
Findings: The performance evaluation of the proposed temporal summarization approach based on Latent Dirichlet Allocation, performed on the TREC Temporal Summarization 2014 framework, clearly demonstrates its effectiveness in providing short and precise summaries of events.
Originality/value: Unlike most state-of-the-art approaches, the proposed method determines the importance of the pieces of information to be added to the summaries solely by relying on their representation in the topic space provided by Latent Dirichlet Allocation, without the use of any external source of evidence.
14

Adegoke, M. A., J. O. A. Ayeni, and P. A. Adewole. "Empirical prior latent Dirichlet allocation model." Nigerian Journal of Technology 38, no. 1 (January 16, 2019): 223. http://dx.doi.org/10.4314/njt.v38i1.27.

15

Lukins, Stacy K., Nicholas A. Kraft, and Letha H. Etzkorn. "Bug localization using latent Dirichlet allocation." Information and Software Technology 52, no. 9 (September 2010): 972–90. http://dx.doi.org/10.1016/j.infsof.2010.04.002.

16

Christy, A., Anto Praveena, and Jany Shabu. "A Hybrid Model for Topic Modeling Using Latent Dirichlet Allocation and Feature Selection Method." Journal of Computational and Theoretical Nanoscience 16, no. 8 (August 1, 2019): 3367–71. http://dx.doi.org/10.1166/jctn.2019.8234.

Abstract:
In this information age, knowledge discovery and pattern matching play a significant role. Topic modeling, an area of text mining, is used to detect hidden patterns in a document collection. Topic modeling and document clustering are two important key terms that are similar in concept and functionality. In this paper, topic modeling is carried out using the Latent Dirichlet Allocation-Brute Force (LDA-BF) method, the Latent Dirichlet Allocation-Back Tracking (LDA-BT) method, the Latent Semantic Indexing (LSI) method and the Nonnegative Matrix Factorization (NMF) method. A hybrid model is proposed which uses Latent Dirichlet Allocation (LDA) for extracting feature terms and a Feature Selection (FS) method for feature reduction. The efficiency of document clustering depends upon the selection of good features. Topic modeling is performed by enriching the good features obtained through the feature selection method. The proposed hybrid model produces improved accuracy compared with the K-Means clustering method.
17

ZHANG, Zhifei, Duoqian MIAO, and Can GAO. "Short text classification using latent Dirichlet allocation." Journal of Computer Applications 33, no. 6 (October 29, 2013): 1587–90. http://dx.doi.org/10.3724/sp.j.1087.2013.01587.

18

Li, Chenchen, Xiang Yan, Xiaotie Deng, Yuan Qi, Wei Chu, Le Song, Junlong Qiao, Jianshan He, and Junwu Xiong. "Latent Dirichlet Allocation for Internet Price War." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 639–46. http://dx.doi.org/10.1609/aaai.v33i01.3301639.

Abstract:
Current Internet market makers are facing an intense competitive environment, where personalized price reductions or discounted coupons are provided by their peers to attract more customers. Much investment is spent on catching up with competitors, but participants in such a price-cut war are often incapable of winning due to their lack of information about others' strategies or customers' preferences. We formalize the problem as a stochastic game with imperfect and incomplete information and develop a variant of Latent Dirichlet Allocation (LDA) to infer latent variables under the current market environment, which represent the preferences of customers and the strategies of competitors. Tests on simulated experiments and an open dataset of real data show that, by subsuming all available market information about the market maker's competitors, our model exhibits a significant improvement in understanding the market environment and finding the best response strategies in the Internet price war. Our work marks the first successful learning method to infer latent information in the environment of a price war by LDA modeling, and sets an example for related competitive applications to follow.
19

Moss, Fabian C., and Martin Rohrmeier. "Discovering Tonal Profiles with Latent Dirichlet Allocation." Music & Science 4 (January 2021): 205920432110488. http://dx.doi.org/10.1177/20592043211048827.

Abstract:
Music analysis, in particular harmonic analysis, is concerned with the way pitches are organized in pieces of music, and a range of empirical applications have been developed, for example, for chord recognition or key finding. Naturally, these approaches rely on some operationalization of the concepts they aim to investigate. In this study, we take a complementary approach and discover latent tonal structures in an unsupervised manner. We use the topic model Latent Dirichlet Allocation and apply it to a large historical corpus of musical pieces from the Western classical tradition. This method conceives topics as distributions of pitch classes without assuming a priori that they correspond to either chords, keys, or other harmonic phenomena. To illustrate the generative process assumed by the model, we create an artificial corpus with arbitrary parameter settings and compare the sampled pieces to real compositions. The results we obtain by applying the topic model to the musical corpus show that the inferred topics have music-theoretically meaningful interpretations. In particular, topics cover contiguous segments on the line of fifths and mostly correspond to diatonic sets. Moreover, tracing the prominence of topics over the course of roughly 600 years of music history reflects changes in the ways pitch classes are employed in musical compositions and reveals particularly strong changes at the transition from common-practice to extended tonality in the 19th century.
20

Ohmura, Masahiro, Koh Kakusho, and Takeshi Okadome. "Tweet Sentiment Analysis with Latent Dirichlet Allocation." International Journal of Information Retrieval Research 4, no. 3 (July 2014): 66–79. http://dx.doi.org/10.4018/ijirr.2014070105.

Abstract:
The method proposed here analyzes the social sentiments from collected tweets that have at least 1 of 800 sentimental or emotional adjectives. By dealing with tweets posted in a half a day as an input document, the method uses Latent Dirichlet Allocation (LDA) to extract social sentiments, some of which coincide with our daily sentiments. The extracted sentiments, however, indicate lowered sensitivity to changes in time, which suggests that they are not suitable for predicting daily social or economic events. Using LDA for the representative 72 adjectives to which each of the 800 adjectives maps while preserving word frequencies permits us to obtain social sentiments that show improved sensitivity to changes in time. A regression model with autocorrelated errors in which the inputs are social sentiments obtained by analyzing the contracted adjectives predicts Dow Jones Industrial Average (DJIA) more precisely than autoregressive moving-average models.
21

Rasiwasia, N., and N. Vasconcelos. "Latent Dirichlet Allocation Models for Image Classification." IEEE Transactions on Pattern Analysis and Machine Intelligence 35, no. 11 (November 2013): 2665–79. http://dx.doi.org/10.1109/tpami.2013.69.

22

Liu, Hailin, Ling Xu, Mengning Yang, Meng Yan, and Xiaohong Zhang. "Predicting Component Failures Using Latent Dirichlet Allocation." Mathematical Problems in Engineering 2015 (2015): 1–15. http://dx.doi.org/10.1155/2015/562716.

Abstract:
Latent Dirichlet Allocation (LDA) is a statistical topic model that has been widely used to abstract semantic information from software source code. Failure refers to an observable error in program behavior. This work investigates whether semantic information and failures recorded in the history can be used to predict component failures. We use LDA to abstract topics from source code, and a new metric (topic failure density) is proposed by mapping failures to these topics. Exploring the basic information of topics from neighboring versions of a system, we obtain a similarity matrix; multiplying the Topic Failure Density (TFD) by the similarity matrix gives the TFD of the next version. The prediction results achieve an average 77.8% agreement with the real failures when considering the top 3 and last 3 components ordered descending by the number of failures. We use the Spearman coefficient to measure the statistical correlation between the actual and estimated failure rates. The validation results range from 0.5342 to 0.8337, which beats similar methods. This suggests that our predictor based on the similarity of topics does a fine job of component failure prediction.
23

Pan, Lili, Shen Cheng, Jian Liu, Peijun Tang, Bowen Wang, Yazhou Ren, and Zenglin Xu. "Latent Dirichlet allocation based generative adversarial networks." Neural Networks 132 (December 2020): 461–76. http://dx.doi.org/10.1016/j.neunet.2020.08.012.

24

Xia, Wei, and Hani Doss. "Scalable Hyperparameter Selection for Latent Dirichlet Allocation." Journal of Computational and Graphical Statistics 29, no. 4 (May 15, 2020): 875–95. http://dx.doi.org/10.1080/10618600.2020.1741378.

25

Li, Zhoujun, Haijun Zhang, Senzhang Wang, Feiran Huang, Zhenping Li, and Jianshe Zhou. "Exploit latent Dirichlet allocation for collaborative filtering." Frontiers of Computer Science 12, no. 3 (May 11, 2018): 571–81. http://dx.doi.org/10.1007/s11704-016-6078-1.

26

Anandkumar, Anima, Dean P. Foster, Daniel Hsu, Sham M. Kakade, and Yi-Kai Liu. "A Spectral Algorithm for Latent Dirichlet Allocation." Algorithmica 72, no. 1 (July 3, 2014): 193–214. http://dx.doi.org/10.1007/s00453-014-9909-1.

27

Yao, Jiangchao, Yanfeng Wang, Ya Zhang, Jun Sun, and Jun Zhou. "Joint Latent Dirichlet Allocation for Social Tags." IEEE Transactions on Multimedia 20, no. 1 (January 2018): 224–37. http://dx.doi.org/10.1109/tmm.2017.2716829.

28

Biggers, Lauren R., Cecylia Bocovich, Riley Capshaw, Brian P. Eddy, Letha H. Etzkorn, and Nicholas A. Kraft. "Configuring latent Dirichlet allocation based feature location." Empirical Software Engineering 19, no. 3 (August 29, 2012): 465–500. http://dx.doi.org/10.1007/s10664-012-9224-x.

29

Kim, Anastasiia, Sanna Sevanto, Eric R. Moore, and Nicholas Lubbers. "Latent Dirichlet Allocation modeling of environmental microbiomes." PLOS Computational Biology 19, no. 6 (June 8, 2023): e1011075. http://dx.doi.org/10.1371/journal.pcbi.1011075.

Abstract:
Interactions between stressed organisms and their microbiome environments may provide new routes for understanding and controlling biological systems. However, microbiomes are a form of high-dimensional data, with thousands of taxa present in any given sample, which makes untangling the interaction between an organism and its microbial environment a challenge. Here we apply Latent Dirichlet Allocation (LDA), a technique for language modeling, which decomposes the microbial communities into a set of topics (non-mutually-exclusive sub-communities) that compactly represent the distribution of full communities. LDA provides a lens into the microbiome at broad and fine-grained taxonomic levels, which we show on two datasets. In the first dataset, from the literature, we show how LDA topics succinctly recapitulate many results from a previous study on diseased coral species. We then apply LDA to a new dataset of maize soil microbiomes under drought, and find a large number of significant associations between the microbiome topics and plant traits, as well as associations between the microbiome and the experimental factors, e.g. watering level. This yields new information on plant-microbial interactions in maize and shows that the LDA technique is useful for studying the coupling between microbiomes and stressed organisms.
30

Damane, Moeti. "Topic Classification of Central Bank Monetary Policy Statements: Evidence from Latent Dirichlet Allocation in Lesotho." Acta Universitatis Sapientiae, Economics and Business 10, no. 1 (September 1, 2022): 199–227. http://dx.doi.org/10.2478/auseb-2022-0012.

Abstract:
This article develops a baseline for how to analyse the monetary policy statements of Lesotho's Central Bank using a method of topic classification that utilizes a machine learning algorithm known as Latent Dirichlet Allocation. To evaluate the changes in the policy distribution, the classification of topics is performed on a sample of policy statements spanning February 2017 to January 2021. The three-topic Latent Dirichlet Allocation model extracted topics that remained prominent throughout the sample period and were most closely reflective of the functions of the Central Bank of Lesotho Monetary Policy Committee. The topics identified are: (i) International Monetary and Financial Market Conditions; (ii) Monetary Policy Committee and International Reserves; (iii) Regional and International Economic Policy Conditions. The three-topic Latent Dirichlet Allocation model was determined to be the most appropriate model through which a consistent analysis of topic evolution in Central Bank of Lesotho Monetary Policy Statements can be performed.
31

Fatima-Zahrae, Sifi, Sabbar Wafae, and El Mzabi Amal. "Application of Latent Dirichlet Allocation (LDA) for clustering financial tweets." E3S Web of Conferences 297 (2021): 01071. http://dx.doi.org/10.1051/e3sconf/202129701071.

Abstract:
Sentiment classification is one of the hottest research areas among Natural Language Processing (NLP) topics. While it aims to detect the sentiment polarity and classification of a given opinion, it requires a large number of aspect extractions. However, extracting aspects takes human effort and a long time. To reduce this, the Latent Dirichlet Allocation (LDA) method has recently emerged to deal with this issue. In this paper, an efficient preprocessing method for sentiment classification is presented and is used for analyzing users' comments on the Twitter social network. For this purpose, different text preprocessing techniques have been applied to the dataset to achieve an acceptable standard text. Latent Dirichlet Allocation has been applied to the data obtained after this fast and accurate preprocessing phase. The implementations of different sentiment analysis methods and their results have been compared and evaluated. The experimental results show that the combined use of the preprocessing method of this paper and Latent Dirichlet Allocation gives acceptable results compared with other basic methods.
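As a loose illustration of the kind of preprocessing stage the abstract describes (the paper's exact pipeline is not given here; the regexes and the tiny stopword list below are assumptions for the sketch), tweets can be normalized before being fed to LDA:

```python
import re

STOPWORDS = {"the", "a", "an", "is", "to", "of", "and", "in", "on", "for"}  # toy list

def preprocess_tweet(text):
    """Lowercase, strip URLs/mentions/hashtag signs/punctuation, drop stopwords."""
    text = text.lower()
    text = re.sub(r"https?://\S+", " ", text)  # URLs
    text = re.sub(r"@\w+", " ", text)          # @mentions
    text = text.replace("#", " ")              # keep the hashtag word, drop the sign
    text = re.sub(r"[^a-z\s]", " ", text)      # punctuation, digits, symbols
    return [w for w in text.split() if w not in STOPWORDS and len(w) > 2]

print(preprocess_tweet("Gr8 earnings for $AAPL! https://t.co/xyz @trader #stocks rising"))
# → ['earnings', 'aapl', 'stocks', 'rising']
```

The resulting token lists are the "acceptable standard text" form that a topic model such as LDA expects as input.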
32

Suparyati, Suparyati, and Emma Utami. "Pengamatan Tren Ulasan Hotel Menggunakan Pemodelan Topik Berbasis Latent Dirichlet Allocation." JIKO (Jurnal Informatika dan Komputer) 6, no. 2 (September 19, 2022): 169. http://dx.doi.org/10.26798/jiko.v6i2.579.

Abstract:
Ketepatan dalam mengekstrak dan meringkas ribuan ulasan ke dalam beberapa topik menjadi kunci dalam pelaksanaan pengolahan data dan informasi lebih lanjut. Tidak terkecuali dalam industri perhotelan yang mana suatu ulasan merupakan sebuah aset yang apabila diolah dapat menghasilkan suatu informasi yang nantinya akan digunakan untuk kepentingan ekspansi bisnis dan keberlangsungan usahanya. Penelitian pemodelan topik ulasan hotel ini menggunakan Latent Dirichlet Allocation sebagai sarana untuk peringkasan dokumennya. Latent Dirichlet Allocation terbukti efektif dalam pengolahan peringkasan kata-kata dan banyak penelitian yang menggunakan metode ini. Adapun tujuan dari penelitian yang dilakukan untuk mendapatkan ringkasan kata-kata yang membentuk suatu topik yang mewakili keseluruhan ulasan yang mana dapat menghasilkan suatu data bagi manajemen hotel dalam mempertahankan eksistensinya dalam bisnis tersebut serta melakukan ekspansi dengan mempertimbangkan hasil dari pemodelan topik tersebut. Dari hasil pemodelan topik Latent Dirichlet Allocation yang telah dilakukan terhadap dataset review Tripadvisor dapat disimpulkan bahwa tren ulasan lebih banyak membahas mengenai lokasi, pelayanan, hotel, sarapan, resort dan pantai.
33

Duraivel, Samuel, Lavanya Lavanya, and Aby Augustine. "Understanding Vaccine Hesitancy with Application of Latent Dirichlet Allocation to Reddit Corpora." Indian Journal Of Science And Technology 15, no. 37 (October 7, 2022): 1868–75. http://dx.doi.org/10.17485/ijst/v15i37.687.

34

Rakhmawati, Nur Aini, Rekyan Bayu Waskitho, Dimas Arief Rahman, and Muhammad Fajrul Alam Ulin Nuha. "Klasterisasi Topik Konten Channel Youtube Gaming Indonesia Menggunakan Latent Dirichlet Allocation." Journal of Information Engineering and Educational Technology 5, no. 2 (December 31, 2021): 78–83. http://dx.doi.org/10.26740/jieet.v5n2.p78-83.

Abstract:
Youtube is the largest video-sharing platform on the internet. As the platform grows, ever more content becomes available on it, as the genres of its videos grow more diverse. One rising genre is gaming content, which is the object of this study. The study uses Latent Dirichlet Allocation (LDA) to map the topics of this gaming genre. The data come from the 10 gaming channels with the most subscribers in Indonesia, extracted through text mining, for a total of 12,757 videos. These videos were mapped onto the 5 most dominant topics. The number of topics was chosen based on perplexity, and the relatedness of words within a topic was measured using coherence. The topics include leaks of new updates and seasons, reviews of new heroes and skins, live gameplay during interesting events, discussion of bugs in newly released patches that affect gameplay, and content about buying up in-game skins. Keywords: Latent Dirichlet Allocation, gaming, topic analysis, dominant topics, Youtube content.
35

Kozlowski, Diego, Viktoriya Semeshenko, and Andrea Molinari. "Latent Dirichlet allocation model for world trade analysis." PLOS ONE 16, no. 2 (February 4, 2021): e0245393. http://dx.doi.org/10.1371/journal.pone.0245393.

Abstract:
International trade is one of the classic areas of study in economics. Its empirical analysis is a complex problem, given the number of products, countries and years. Nowadays, given the availability of data, the tools used for the analysis can be complemented and enriched with new methodologies and techniques that go beyond the traditional approach. This possibility opens a research gap, as new, data-driven ways of understanding international trade can improve our understanding of the underlying phenomena. The present paper shows the application of the Latent Dirichlet allocation model, a well-known technique in the area of Natural Language Processing, to search for latent dimensions in the product space of international trade, and their distribution across countries over time. We apply this technique to a dataset of countries' exports of goods from 1962 to 2016. The results show that this technique can encode the main specialisation patterns of international trade. At the country level, the findings show the changes in the specialisation patterns of countries over time. As traditional international trade analysis demands expert knowledge of a multiplicity of indicators, the possibility of encoding multiple known phenomena under a unique indicator is a powerful complement to traditional tools, as it allows top-down, data-driven studies.
36

Zhou, Qi, Haipeng Chen, Yitao Zheng, and Zhen Wang. "EvaLDA: Efficient Evasion Attacks Towards Latent Dirichlet Allocation." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 16 (May 18, 2021): 14602–11. http://dx.doi.org/10.1609/aaai.v35i16.17716.

Abstract:
As one of the most powerful topic models, Latent Dirichlet Allocation (LDA) has been used in a vast range of tasks, including document understanding, information retrieval and peer-reviewer assignment. Despite its tremendous popularity, the security of LDA has rarely been studied. This poses severe risks to security-critical tasks such as sentiment analysis and peer-reviewer assignment that are based on LDA. In this paper, we are interested in knowing whether LDA models are vulnerable to adversarial perturbations of benign document examples at inference time. We formalize the evasion attack on LDA models as an optimization problem and prove it to be NP-hard. We then propose a novel and efficient algorithm, EvaLDA, to solve it. We show the effectiveness of EvaLDA via extensive empirical evaluations. For instance, on the NIPS dataset, EvaLDA can, on average, promote the rank of a target topic from 10 to around 7 by replacing only 1% of the words in a victim document with similar words. Our work provides significant insights into the power and limitations of evasion attacks on LDA models.
37

Fernanda, Jerhi Wahyu. "PEMODELAN PERSEPSI PEMBELAJARAN ONLINE MENGGUNAKAN LATENT DIRICHLET ALLOCATION." Jurnal Statistika Universitas Muhammadiyah Semarang 9, no. 2 (December 31, 2021): 79. http://dx.doi.org/10.26714/jsunimus.9.2.2021.79-85.

Abstract:
Latent Dirichlet Allocation (LDA) is a topic modeling method based on the concept of probability, used to find similarities between documents and to group documents into several topics or clusters. The method is a form of unsupervised learning, since the analyzed data carry no labels or targets. This study aims to group perceptions of online learning into several topics using LDA. The data are primary data collected through an online form. The analysis shows that the LDA model with 6 topics has the highest coherence score. Visualizing the text data with a word cloud shows that the word "tidak" ("not") occurs most frequently. Based on the coherence score, the optimal LDA model is the one with 6 topics. Broadly speaking, several words overlap with other topics. The modeling results indicate that students' perceptions of online learning concern understanding of the material given by lecturers, internet signal or network quality, data quotas, and assignments. Among the words related to understanding the material, students expressed the view that they could not properly understand the material provided by their lecturers.
38

Junwei Zeng, Fajie Wei, and Anying Liu. "Employing Latent Dirichlet Allocation for Organizational Risk Identification." Journal of Convergence Information Technology 6, no. 12 (December 31, 2011): 114–21. http://dx.doi.org/10.4156/jcit.vol6.issue12.15.

39

Hong, Fan, Chufan Lai, Hanqi Guo, Enya Shen, Xiaoru Yuan, and Sikun Li. "FLDA: Latent Dirichlet Allocation Based Unsteady Flow Analysis." IEEE Transactions on Visualization and Computer Graphics 20, no. 12 (December 31, 2014): 2545–54. http://dx.doi.org/10.1109/tvcg.2014.2346416.

40

Ihler, A., and D. Newman. "Understanding Errors in Approximate Distributed Latent Dirichlet Allocation." IEEE Transactions on Knowledge and Data Engineering 24, no. 5 (May 2012): 952–60. http://dx.doi.org/10.1109/tkde.2011.29.

41

Chen, Chaotao, and Jiangtao Ren. "Forum latent Dirichlet allocation for user interest discovery." Knowledge-Based Systems 126 (June 2017): 1–7. http://dx.doi.org/10.1016/j.knosys.2017.04.006.

42

Bhutada, Sunil, V. V. S. S. S. Balaram, and Vishnu Vardhan Bulusu. "Semantic latent dirichlet allocation for automatic topic extraction." Journal of Information and Optimization Sciences 37, no. 3 (May 3, 2016): 449–69. http://dx.doi.org/10.1080/02522667.2016.1165000.

43

Yan, Jian-Feng, Jia Zeng, Yang Gao, and Zhi-Qiang Liu. "Communication-efficient algorithms for parallel latent Dirichlet allocation." Soft Computing 19, no. 1 (July 18, 2014): 3–11. http://dx.doi.org/10.1007/s00500-014-1376-8.

44

Jeong, Young-Seob, and Ho-Jin Choi. "Overlapped latent Dirichlet allocation for efficient image segmentation." Soft Computing 19, no. 4 (August 13, 2014): 829–38. http://dx.doi.org/10.1007/s00500-014-1410-x.

45

Wang, Jingdong, Jiazhen Zhou, Hao Xu, Tao Mei, Xian-Sheng Hua, and Shipeng Li. "Image tag refinement by regularized latent Dirichlet allocation." Computer Vision and Image Understanding 124 (July 2014): 61–70. http://dx.doi.org/10.1016/j.cviu.2014.02.011.

46

Zhang, Wei, Robert A. J. Clark, Yongyuan Wang, and Wen Li. "Unsupervised language identification based on Latent Dirichlet Allocation." Computer Speech & Language 39 (September 2016): 47–66. http://dx.doi.org/10.1016/j.csl.2016.02.001.

47

Momtazi, Saeedeh. "Unsupervised Latent Dirichlet Allocation for supervised question classification." Information Processing & Management 54, no. 3 (May 2018): 380–93. http://dx.doi.org/10.1016/j.ipm.2018.01.001.

48

Zhao, Fangyuan, Xuebin Ren, Shusen Yang, Qing Han, Peng Zhao, and Xinyu Yang. "Latent Dirichlet Allocation Model Training With Differential Privacy." IEEE Transactions on Information Forensics and Security 16 (2021): 1290–305. http://dx.doi.org/10.1109/tifs.2020.3032021.

49

Park, Hongju, Taeyoung Park, and Yung-Seop Lee. "Partially collapsed Gibbs sampling for latent Dirichlet allocation." Expert Systems with Applications 131 (October 2019): 208–18. http://dx.doi.org/10.1016/j.eswa.2019.04.028.

50

Li, Ximing, Jihong Ouyang, Xiaotang Zhou, You Lu, and Yanhui Liu. "Supervised labeled latent Dirichlet allocation for document categorization." Applied Intelligence 42, no. 3 (November 25, 2014): 581–93. http://dx.doi.org/10.1007/s10489-014-0595-0.

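Many of the studies listed above fit LDA by collapsed Gibbs sampling, the inference scheme behind tools such as the ldagibbs command and the subject of the partially collapsed variant in entry 49. As a minimal illustration of what such a sampler does — a sketch only, with all function and variable names invented here rather than taken from any cited paper — the core loop can be written in plain Python:

```python
import random

def lda_gibbs(docs, num_topics, alpha=0.1, beta=0.01, iters=200, seed=0):
    """Fit LDA to tokenized documents with collapsed Gibbs sampling."""
    rng = random.Random(seed)
    vocab = sorted({w for doc in docs for w in doc})
    V = len(vocab)
    wid = {w: i for i, w in enumerate(vocab)}
    ndk = [[0] * num_topics for _ in docs]        # doc-topic counts
    nkw = [[0] * V for _ in range(num_topics)]    # topic-word counts
    nk = [0] * num_topics                         # tokens per topic
    z = []                                        # topic assignment per token
    for d, doc in enumerate(docs):                # random initialization
        zs = []
        for w in doc:
            k = rng.randrange(num_topics)
            zs.append(k)
            ndk[d][k] += 1; nkw[k][wid[w]] += 1; nk[k] += 1
        z.append(zs)
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]                       # remove this token's assignment
                ndk[d][k] -= 1; nkw[k][wid[w]] -= 1; nk[k] -= 1
                # full conditional p(z_i = t | all other assignments)
                weights = [(ndk[d][t] + alpha) * (nkw[t][wid[w]] + beta) / (nk[t] + V * beta)
                           for t in range(num_topics)]
                k = rng.choices(range(num_topics), weights=weights)[0]
                z[d][i] = k                       # reassign under the sampled topic
                ndk[d][k] += 1; nkw[k][wid[w]] += 1; nk[k] += 1
    # smoothed point estimates: documents over topics, topics over words
    theta = [[(ndk[d][t] + alpha) / (len(docs[d]) + num_topics * alpha)
              for t in range(num_topics)] for d in range(len(docs))]
    phi = [[(nkw[t][v] + beta) / (nk[t] + V * beta) for v in range(V)]
           for t in range(num_topics)]
    return theta, phi, vocab
```

Each pass resamples every token's topic from its full conditional, and the final counts yield smoothed point estimates of the document-topic (theta) and topic-word (phi) distributions; production implementations add convergence checks, hyperparameter optimization, and far faster sampling.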