A selection of scholarly literature on the topic "Latent Semantic Indexing (LSI)"

Format your source according to APA, MLA, Chicago, Harvard, and other styles


Consult the lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "Latent Semantic Indexing (LSI)".

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, if these are available in the metadata.

Journal articles on the topic "Latent Semantic Indexing (LSI)"

1

Xu, Yanyan, Dengfeng Ke, and Kaile Su. "Contextualized Latent Semantic Indexing: A New Approach to Automated Chinese Essay Scoring." Journal of Intelligent Systems 26, no. 2 (April 1, 2017): 263–85. http://dx.doi.org/10.1515/jisys-2015-0048.

Abstract:
The writing part of Chinese language tests is badly in need of a mature automated essay scoring system. In this paper, we propose a new approach to automated Chinese essay scoring (ACES), called contextualized latent semantic indexing (CLSI), of which Genuine CLSI and Modified CLSI are two versions. The n-gram language model and the weighted finite-state transducer (WFST), two critical components, are used to extract context information in our ACES system. CLSI not only improves conventional latent semantic indexing (LSI), but also bridges the gap between latent semantics and their context information, which is absent in LSI. Moreover, CLSI can score essays from the perspectives of language fluency and content, and it addresses the local overrating and underrating problems caused by LSI. Experimental results show that CLSI outperforms LSI, Regularized LSI, and latent Dirichlet allocation in many aspects, and thus proves to be an effective approach.
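As a rough illustration of the conventional LSI scoring that CLSI builds on, the sketch below projects graded essays into a latent space and scores a new essay by its similarity to them. The tiny corpus, the grades, and the nearest-neighbour scoring rule are illustrative assumptions, not the CLSI method described in the paper.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

graded_essays = [
    "the city should invest in public transport to reduce traffic",
    "public transport reduces traffic jams and pollution in cities",
    "i like cars because they are fast",
]
grades = np.array([5.0, 4.5, 2.0])          # hypothetical human scores

vectorizer = TfidfVectorizer()
svd = TruncatedSVD(n_components=2, random_state=0)
latent = svd.fit_transform(vectorizer.fit_transform(graded_essays))

# Score a new essay by the grade of its nearest neighbour in the latent space.
new_essay = svd.transform(vectorizer.transform(["better buses would cut pollution and traffic"]))
scores = cosine_similarity(new_essay, latent).ravel()
print("predicted grade:", grades[scores.argmax()])
```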
2

Atreya, Avinash, and Charles Elkan. "Latent semantic indexing (LSI) fails for TREC collections." ACM SIGKDD Explorations Newsletter 12, no. 2 (March 31, 2011): 5–10. http://dx.doi.org/10.1145/1964897.1964900.

3

Srinivas, S., and Ch AswaniKumar. "Optimising the Heuristics in Latent Semantic Indexing for Effective Information Retrieval." Journal of Information & Knowledge Management 05, no. 02 (June 2006): 97–105. http://dx.doi.org/10.1142/s0219649206001359.

Abstract:
Latent Semantic Indexing (LSI) is a well-known Information Retrieval (IR) technique that tries to overcome the problems of lexical matching by using conceptual indexing. LSI is a variant of the vector space model and has proved to be 30% more effective. Many studies have reported that good retrieval performance is related to the use of various retrieval heuristics. In this paper, we focus on optimising two LSI retrieval heuristics: term weighting and rank approximation. The results obtained demonstrate that LSI performance improves significantly when optimised term weighting and rank approximation are combined.
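The two heuristics named in the abstract above can be prototyped in a few lines. The sketch below applies a log-entropy term weighting and then a rank-k approximation of the weighted term-document matrix via the SVD; the tiny matrix, the choice of log-entropy, and the value of k are assumptions made for illustration, not the optimised settings reported in the paper.

```python
import numpy as np

def log_entropy_weight(counts):
    """Apply local log weighting and global entropy weighting to a term-by-document matrix."""
    global_freq = np.maximum(counts.sum(axis=1, keepdims=True), 1.0)
    p = counts / global_freq
    n_docs = counts.shape[1]
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(p > 0, p * np.log(p), 0.0)
    global_weight = 1.0 + plogp.sum(axis=1, keepdims=True) / np.log(n_docs)
    return np.log(counts + 1.0) * global_weight

def rank_k_approximation(matrix, k):
    """Keep only the k largest singular triplets (the LSI rank approximation)."""
    u, s, vt = np.linalg.svd(matrix, full_matrices=False)
    return u[:, :k] @ np.diag(s[:k]) @ vt[:k, :]

counts = np.array([[2, 0, 1],      # raw term frequencies: 3 terms x 3 documents
                   [0, 3, 1],
                   [1, 1, 0]], dtype=float)
print(rank_k_approximation(log_entropy_weight(counts), k=2).round(2))
```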
4

Blynova, N. "Latent semantic indexing (LSI) and its impact on copywriting." Communications and Communicative Technologies, no. 19 (May 5, 2019): 4–12. http://dx.doi.org/10.15421/291901.

Abstract:
Latent semantic indexing (LSI) is becoming more and more popular in copywriting, gradually replacing texts written on SEO principles. LSI gained currency in the 2010s, when popular search engines switched to a qualitatively new way of ranking materials and sites. The difference between the SEO and LSI approaches to text creation is that search engines rank SEO materials by keywords, whereas LSI texts are ranked by how fully the topic is covered and how useful the article will be to the reader. Consequently, in addition to keywords and phrases, an associative core is involved. Materials written for people have replaced texts created for the search engine. The article describes the algorithm for creating the associative and thematic core and the ways in which this can be done. The basic steps that help to create an LSI text are also shown. The author underlines that, because a significant amount of information must be presented and the topic covered with maximum expertise, text writers accustomed to working on SEO principles have to learn to write within a new paradigm. The owners of websites that host articles created on LSI principles have discovered the advantages of this way of presenting information, since their resources have become better indexed and take leading positions in search results. Algorithms such as "Baden-Baden", "Korolev" and "Panda" have positively influenced the Internet environment as a whole, since over-optimized texts, which were stuffed with keys and of little use to the reader, have now dropped to the last positions in search results. The new LSI-based method of ranking allows specialists to create texts that are not only useful and expert but also lexically rich, using expressive and figurative means of language, which could not be expected of SEO materials. The article also highlights that the use of neural networks should bring the presentation of information even closer to the consumer's needs, producing techniques that will allow materials written in ordinary language to take leading positions without the need to incorporate key phrases into the text. We believe that the LSI method, which has proved itself in copywriting, is capable of unlocking the potential of media texts that are now being written on SEO principles.
5

Kontostathis, April, and William M. Pottenger. "A framework for understanding Latent Semantic Indexing (LSI) performance." Information Processing & Management 42, no. 1 (January 2006): 56–73. http://dx.doi.org/10.1016/j.ipm.2004.11.007.

6

Li, Min Song. "A Method Based on Support Vector Machine for Feature Selection of Latent Semantic Features." Advanced Materials Research 181-182 (January 2011): 830–35. http://dx.doi.org/10.4028/www.scientific.net/amr.181-182.830.

Abstract:
Latent Semantic Indexing (LSI) is an effective feature extraction method that can capture the underlying latent semantic structure between words in documents. However, it is probably not the most appropriate way to select a feature subspace for text categorization, since the method orders extracted features according to their variance, not their classification power. We propose a method based on a support vector machine to extract features and select a latent semantic subspace suited for classification. Experimental results indicate that the method improves classification performance with a more compact representation.
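A hedged sketch of the general idea in the abstract above follows: documents are projected into an LSI space, and a linear SVM's weights are used to judge which latent dimensions carry class information, instead of keeping dimensions purely by variance. The toy data and the weight-based ranking are illustrative assumptions, not the exact selection procedure of the paper.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.svm import LinearSVC

docs = [
    "cheap loans approved fast",
    "limited offer win money now",
    "meeting agenda for monday",
    "project report attached for review",
]
labels = [1, 1, 0, 0]                       # 1 = spam-like, 0 = ham-like

tfidf = TfidfVectorizer().fit_transform(docs)
latent = TruncatedSVD(n_components=3, random_state=0).fit_transform(tfidf)

# Rank the latent dimensions by the magnitude of the SVM weights rather than by variance.
classifier = LinearSVC(C=1.0).fit(latent, labels)
ranking = np.argsort(-np.abs(classifier.coef_).ravel())
print("LSI dimensions ordered by classification power:", ranking)
```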
7

Praus, Petr, and Pavel Praks. "Information retrieval in hydrochemical data using the latent semantic indexing approach." Journal of Hydroinformatics 9, no. 2 (March 1, 2007): 135–43. http://dx.doi.org/10.2166/hydro.2007.003b.

Abstract:
The latent semantic indexing (LSI) method was applied to the retrieval of similar samples (samples with a similar composition) in a dataset of groundwater samples. The LSI procedure was based on two steps: (i) reduction of the data dimensionality by principal component analysis (PCA) and (ii) calculation of the similarity between selected samples (queries) and the other samples. The similarity measures were the cosine similarity and the Euclidean and Manhattan distances. Five queries were chosen so as to represent different sampling localities. The original data space of 14 variables measured in 95 groundwater samples was reduced to the three-dimensional space of the three largest principal components, which explained nearly 80% of the total variance. The five samples closest to each query were evaluated. The LSI outputs were compared with retrievals in the orthogonal system of all variables transformed by PCA and in the system of standardized original variables. Most of these retrievals did not agree with the LSI ones, most likely because both systems contained interfering data noise that had not first been removed by dimensionality reduction. Therefore, the LSI approach, based on noise filtration, was considered a promising strategy for information retrieval in real hydrochemical data.
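For readers who want to try the two-step procedure described in the abstract above, the sketch below reduces a sample-by-variable matrix with PCA and then ranks the remaining samples against a query by cosine similarity and by Euclidean and Manhattan distances. The random data, the query index, and the use of scikit-learn are assumptions made for illustration; the paper's actual hydrochemical dataset is not reproduced here.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.metrics.pairwise import cosine_similarity, euclidean_distances, manhattan_distances

rng = np.random.default_rng(0)
samples = rng.normal(size=(95, 14))       # placeholder for 95 samples x 14 hydrochemical variables

# Step (i): reduce dimensionality to the three largest principal components.
reduced = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(samples))

# Step (ii): similarity between one query sample and all other samples.
query = reduced[[0]]
cosine = cosine_similarity(query, reduced).ravel()
euclid = euclidean_distances(query, reduced).ravel()
manhat = manhattan_distances(query, reduced).ravel()

# Five nearest samples under each measure (index 0 is the query itself and is skipped).
print("cosine   :", np.argsort(-cosine)[1:6])
print("euclidean:", np.argsort(euclid)[1:6])
print("manhattan:", np.argsort(manhat)[1:6])
```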
8

Aswani Kumar, Ch, M. Radvansky, and J. Annapurna. "Analysis of a Vector Space Model, Latent Semantic Indexing and Formal Concept Analysis for Information Retrieval." Cybernetics and Information Technologies 12, no. 1 (March 1, 2012): 34–48. http://dx.doi.org/10.2478/cait-2012-0003.

Abstract:
Latent Semantic Indexing (LSI), a variant of the classical Vector Space Model (VSM), is an Information Retrieval (IR) model that attempts to capture the latent semantic relationships between data items. Mathematical lattices, under the framework of Formal Concept Analysis (FCA), represent conceptual hierarchies in data and are used to retrieve information. Both LSI and FCA operate on data represented in the form of matrices. The objective of this paper is to systematically analyze VSM, LSI and FCA for the task of IR using standard and real-life datasets.
9

Al-Anzi, Fawaz, and Dia AbuZeina. "Enhanced Latent Semantic Indexing Using Cosine Similarity Measures for Medical Application." International Arab Journal of Information Technology 17, no. 5 (September 1, 2020): 742–49. http://dx.doi.org/10.34028/iajit/17/5/7.

Abstract:
The Vector Space Model (VSM) is widely used in data mining and Information Retrieval (IR) systems as a common document representation model. However, this technique faces challenges such as the high-dimensional space and the semantic looseness of the representation. Consequently, Latent Semantic Indexing (LSI) was suggested to reduce the feature dimensions and to generate semantically rich features that can represent conceptual term-document associations. LSI has been effectively employed in search engines and many other Natural Language Processing (NLP) applications, and researchers continue to seek better performance. In this paper, we propose an innovative method that can be used in search engines to find better-matched content in the retrieved documents. The proposed method introduces a new extension of the LSI technique based on cosine similarity measures. The performance evaluation was carried out using an Arabic-language data collection that contains 800 medical-related documents with more than 47,222 unique words. The proposed method was assessed using a small testing set containing five medical keywords. The results show that the performance of the proposed method is superior to that of standard LSI.
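The following sketch shows a plain LSI retrieval baseline of the kind the paper above extends: documents are projected into a low-rank latent space and a query is matched by cosine similarity. The toy English corpus, the vectorizer settings, and the number of components are assumptions; the paper itself works with an Arabic medical collection and a modified similarity measure.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "diabetes treatment with insulin therapy",
    "insulin resistance and blood sugar control",
    "knee surgery recovery and physiotherapy",
    "physiotherapy exercises after joint surgery",
]

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(docs)

svd = TruncatedSVD(n_components=2, random_state=0)   # the LSI projection
doc_vectors = svd.fit_transform(tfidf)

# Fold the query into the latent space and rank documents by cosine similarity.
query_vector = svd.transform(vectorizer.transform(["insulin for diabetes"]))
scores = cosine_similarity(query_vector, doc_vectors).ravel()
for i in scores.argsort()[::-1]:
    print(f"{scores[i]:6.3f}  {docs[i]}")
```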
10

Zhan, Jiaming, and Han Tong Loh. "Using Latent Semantic Indexing to Improve the Accuracy of Document Clustering." Journal of Information & Knowledge Management 06, no. 03 (September 2007): 181–88. http://dx.doi.org/10.1142/s0219649207001755.

Abstract:
Document clustering is a significant research issue in information retrieval and text mining. Traditionally, most clustering methods were based on the vector space model, which has a few limitations such as high dimensionality and weakness in handling synonymy and polysemy. Latent semantic indexing (LSI) is able to deal with such problems to some extent. Previous studies have shown that using LSI can reduce the time needed to cluster a large document set while having little effect on clustering accuracy. However, when clustering a small document set, accuracy is of more concern than efficiency. In this paper, we demonstrate that LSI can improve the clustering accuracy of a small document set, and we also recommend the number of dimensions needed to achieve the best clustering performance.
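The sketch below illustrates LSI-assisted clustering of a small document set in the spirit of the study above: tf-idf vectors are reduced with a truncated SVD and then clustered in the latent space. The toy corpus, the number of clusters, and the two latent dimensions are assumptions; choosing the best dimensionality is exactly what the paper investigates.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.preprocessing import Normalizer
from sklearn.cluster import KMeans

docs = [
    "stock markets fell on interest rate fears",
    "central bank raises interest rates again",
    "new vaccine shows strong trial results",
    "clinical trial confirms vaccine efficacy",
]

tfidf = TfidfVectorizer().fit_transform(docs)
latent = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)
latent = Normalizer(copy=False).fit_transform(latent)   # unit length, so k-means behaves like cosine clustering

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(latent)
for doc, label in zip(docs, labels):
    print(label, doc)
```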

Dissertations and theses on the topic "Latent Semantic Indexing (LSI)"

1

Zhu, Weizhong Allen Robert B. "Text clustering and active learning using a LSI subspace signature model and query expansion /." Philadelphia, Pa. : Drexel University, 2009. http://hdl.handle.net/1860/3077.

2

La, Fleur Magnus, and Fredrik Renström. "Conceptual Indexing using Latent Semantic Indexing : A Case Study." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-263029.

Abstract:
Information Retrieval is concerned with locating information (usually text) that is relevant to a user's information need. Retrieval systems based on word matching suffer from the vocabulary mismatch problem, a common phenomenon in the use of natural languages. This difficulty is especially severe in large, full-text databases, since such databases contain many different expressions of the same concept. One method aimed at reducing the negative effects of the vocabulary mismatch problem is for the retrieval system to exploit statistical relations. This report examines the utility of conceptual indexing for improving the retrieval performance of a domain-specific Information Retrieval system using Latent Semantic Indexing (LSI). Techniques like LSI attempt to exploit and model global usage patterns of terms, so that related documents that may not share common (literal) terms are still represented by nearby conceptual descriptors. Experimental results show that the method is noticeably more efficient, compared to the baseline, for relatively complete queries. However, the current implementation did not improve the effectiveness of short, yet descriptive, queries.
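The vocabulary mismatch problem discussed in this thesis can be demonstrated with a small LSI example: a query term that never occurs in a document can still match it through co-occurrence structure. The toy corpus and the two latent dimensions below are assumptions made for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "the car engine needs repair",          # no occurrence of "automobile"
    "automobile engine repair shop",
    "fresh bread from the bakery",
    "the bakery sells fresh bread",
]

vectorizer = TfidfVectorizer()
svd = TruncatedSVD(n_components=2, random_state=0)
doc_vectors = svd.fit_transform(vectorizer.fit_transform(docs))

# Fold the single-word query into the latent space and compare it to every document.
query = svd.transform(vectorizer.transform(["automobile"]))
scores = cosine_similarity(query, doc_vectors).ravel()
for doc, score in zip(docs, scores):
    print(f"{score:6.3f}  {doc}")
# In this toy corpus the first document tends to score well despite sharing no literal
# term with the query, because "car" and "automobile" co-occur with "engine" and "repair".
```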
3

Suwannajan, Pakinee. "Evaluating the performance of latent semantic indexing." Diss., Connect to online resource, 2005. http://wwwlib.umi.com/dissertations/fullcit/3178359.

4

Araújo, Hugo Rafael Teixeira Soares. "Exploring biomedical literature using latent semantic indexing." Master's thesis, Universidade de Aveiro, 2012. http://hdl.handle.net/10773/11298.

Abstract:
Master's degree in Computer and Telematics Engineering
The rapid increase in the amount of data available on the Internet, and the fact that this is mostly in the form of unstructured text, has brought successive challenges in information indexing and retrieval. Besides the Internet, specific literature databases are also faced with these problems. With the amount of information growing so rapidly, traditional methods for indexing and retrieving information become insufficient for the increasingly stringent requirements of users. These issues lead to the need to improve information retrieval systems using more powerful and efficient techniques. One of those methods is Latent Semantic Indexing (LSI), which has been suggested as a good solution for modeling and analyzing unstructured text. LSI allows discovering the semantic structure of a corpus by finding the relations between documents and terms. It is a robust solution for improving information retrieval systems, especially for identifying relevant documents for a user's query. Besides this, LSI can be useful in other tasks such as document indexing and the annotation of terms. The main goal of this project was to study and explore the LSI process for term annotation and for structuring the documents retrieved by an information retrieval system. The performance results of these algorithms are presented and, in addition, several new ways of visualizing these results are proposed.
5

Geiß, Johanna. "Latent Semantic Indexing and Information Retrieval a quest with BosSE /." [S.l. : s.n.], 2006. http://nbn-resolving.de/urn:nbn:de:bsz:16-opus-67536.

6

Geiss, Johanna. "Latent semantic sentence clustering for multi-document summarization." Thesis, University of Cambridge, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.609761.

7

Buys, Stephanus. "Log analysis aided by latent semantic mapping." Thesis, Rhodes University, 2013. http://hdl.handle.net/10962/d1002963.

Abstract:
In an age of zero-day exploits and increased on-line attacks on computing infrastructure, operational security practitioners are becoming increasingly aware of the value of the information captured in log events. Analysis of these events is critical during incident response, forensic investigations related to network breaches, hacking attacks and data leaks. Such analysis has led to the discipline of Security Event Analysis, also known as Log Analysis. There are several challenges when dealing with events, foremost being the increased volumes at which events are often generated and stored. Furthermore, events are often captured as unstructured data, with very little consistency in the formats or contents of the events. In this environment, security analysts and implementers of Log Management (LM) or Security Information and Event Management (SIEM) systems face the daunting task of identifying, classifying and disambiguating massive volumes of events in order for security analysis and automation to proceed. Latent Semantic Mapping (LSM) is a proven paradigm shown to be an effective method of, among other things, enabling word clustering, document clustering, topic clustering and semantic inference. This research is an investigation into the practical application of LSM in the discipline of Security Event Analysis, showing the value of using LSM to assist practitioners in identifying types of events, classifying events as belonging to certain sources or technologies and disambiguating different events from each other. The culmination of this research presents adaptations to traditional natural language processing techniques that resulted in improved efficacy of LSM when dealing with Security Event Analysis. This research provides strong evidence supporting the wider adoption and use of LSM, as well as further investigation into Security Event Analysis assisted by LSM and other natural language or computer-learning processing techniques.
8

Polyakov, Serhiy. "Enhancing User Search Experience in Digital Libraries with Rotated Latent Semantic Indexing." Thesis, University of North Texas, 2015. https://digital.library.unt.edu/ark:/67531/metadc804881/.

Abstract:
This study investigates a semi-automatic method for the creation of topical labels representing the topical concepts in information objects. The method is called rotated latent semantic indexing (rLSI). rLSI has found application in text mining but has not been used for topical label generation in digital libraries (DLs). The present study proposes a theoretical model and an evaluation framework based on the LSA theory of meaning and investigates rLSI in a DL environment. The proposed evaluation framework for rLSI topical labels is focused on human information-search behavior and satisfaction measures. Experimental systems that utilize those topical labels were built for the purpose of evaluating user satisfaction with the search process. A new instrument was developed for this study, and the experiment showed high reliability of the measurement scales and confirmed their construct validity. Data were collected through information search tasks performed by 122 participants using two experimental systems. A quantitative method of analysis, partial least squares structural equation modeling (PLS-SEM), was used to test a set of research hypotheses and to answer the research questions. The results showed a non-significant, indirect effect of topical label type on both guidance and satisfaction. The conclusion of the study is that topical labels generated using rLSI provide the same levels of alignment, guidance, and satisfaction with the search process as topical labels created by professional indexers using best practices.
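A sketch of the rotated-LSI idea described above follows: an LSI term-factor matrix is computed, a varimax rotation concentrates each factor on fewer terms, and the top-loading terms of each rotated factor are read off as candidate topical labels. The corpus, the rotation routine, and the label-extraction step are assumptions for illustration, not the exact rLSI pipeline evaluated in this dissertation.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = [
    "solar panels and wind turbines for clean energy",
    "wind energy costs keep falling",
    "novel antibiotics fight resistant bacteria",
    "bacteria develop resistance to antibiotics",
]

vectorizer = TfidfVectorizer()
svd = TruncatedSVD(n_components=2, random_state=0).fit(vectorizer.fit_transform(docs))
loadings = svd.components_.T                      # term-by-factor loading matrix

def varimax(phi, gamma=1.0, iters=50, tol=1e-6):
    """Standard varimax rotation of a loading matrix."""
    p, k = phi.shape
    rotation, d = np.eye(k), 0.0
    for _ in range(iters):
        lam = phi @ rotation
        u, s, vt = np.linalg.svd(
            phi.T @ (lam ** 3 - (gamma / p) * lam @ np.diag((lam ** 2).sum(axis=0)))
        )
        rotation = u @ vt
        d_old, d = d, s.sum()
        if d_old != 0 and d / d_old < 1 + tol:
            break
    return phi @ rotation

rotated = varimax(loadings)
terms = np.array(vectorizer.get_feature_names_out())
for j in range(rotated.shape[1]):
    top = np.argsort(-np.abs(rotated[:, j]))[:3]   # strongest terms of the rotated factor
    print(f"candidate label for factor {j}: {', '.join(terms[top])}")
```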
9

Spomer, Judith E. "Latent semantic analysis and classification modeling in applications for social movement theory /." Abstract Full Text (HTML) Full Text (PDF), 2008. http://eprints.ccsu.edu/archive/00000552/02/1996FT.htm.

Abstract:
Thesis (M.S.) -- Central Connecticut State University, 2008.
Thesis advisor: Roger Bilisoly. "... in partial fulfillment of the requirements for the degree of Master of Science in Data Mining." Includes bibliographical references (leaves 122-127). Also available via the World Wide Web.
10

Hockey, Andrew. "Computational modelling of the language production system : semantic memory, conflict monitoring, and cognitive control processes /." [St. Lucia, Qld.], 2006. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe20099.pdf.


Books on the topic "Latent Semantic Indexing (LSI)"

1

Bellegarda, Jerome R. Latent Semantic Mapping: Principles and Applications. Springer International Publishing AG, 2007.

2

Bellegarda, Jerome R. Latent Semantic Mapping: Principles and Applications. Morgan & Claypool Publishers, 2007.

3

Bellegarda, Jerome R. Latent Semantic Mapping: Principles And Applications (Synthesis Lectures on Speech and Audio Processing). Morgan & Claypool Publishers, 2007.


Book chapters on the topic "Latent Semantic Indexing (LSI)"

1

Rahman, Nurazzah Abd, Zulaile Mabni, Nasiroh Omar, Haslizatul Fairuz Mohamed Hanum, and Nik Nur Amirah Tuan Mohamad Rahim. "A Parallel Latent Semantic Indexing (LSI) Algorithm for Malay Hadith Translated Document Retrieval." In Communications in Computer and Information Science, 154–63. Singapore: Springer Singapore, 2015. http://dx.doi.org/10.1007/978-981-287-936-3_15.

2

McCarey, Frank, Mel Ó. Cinnéide, and Nicholas Kushmerick. "Recommending Library Methods: An Evaluation of the Vector Space Model (VSM) and Latent Semantic Indexing (LSI)." In Lecture Notes in Computer Science, 217–30. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11763864_16.

3

Chakraborti, Sutanu, Robert Lothian, Nirmalie Wiratunga, and Stuart Watt. "Sprinkling: Supervised Latent Semantic Indexing." In Lecture Notes in Computer Science, 510–14. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11735106_53.

4

Ishioka, Tsunenori. "Text Segmentation by Latent Semantic Indexing." In New Developments in Psychometrics, 689–96. Tokyo: Springer Japan, 2003. http://dx.doi.org/10.1007/978-4-431-66996-8_80.

5

Czyszczoń, Adam, and Aleksander Zgrzywa. "Latent Semantic Indexing for Web Service Retrieval." In Computational Collective Intelligence. Technologies and Applications, 694–702. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-11289-3_70.

6

Orengo, Viviane Moreira, and Christian Huyck. "Portuguese-English Experiments Using Latent Semantic Indexing." In Advances in Cross-Language Information Retrieval, 147–54. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-45237-9_11.

7

Gansterer, Wilfried N., Andreas G. K. Janecek, and Robert Neumayer. "Spam Filtering Based on Latent Semantic Indexing." In Survey of Text Mining II, 165–83. London: Springer London, 2008. http://dx.doi.org/10.1007/978-1-84800-046-9_9.

8

De, Arijit. "SMS Based FAQ Retrieval Using Latent Semantic Indexing." In Multilingual Information Access in South Asian Languages, 100–103. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-40087-2_10.

9

Liu, Xuezheng, Ming Chen, and Guangwen Yang. "Latent Semantic Indexing in Peer-to-Peer Networks." In Lecture Notes in Computer Science, 63–77. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-24714-2_7.

10

Kontostathis, April, William M. Pottenger, and Brian D. Davison. "Identification of Critical Values in Latent Semantic Indexing." In Foundations of Data Mining and knowledge Discovery, 333–46. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11498186_19.


Conference papers on the topic "Latent Semantic Indexing (LSI)"

1

Kontostathis, April. "Essential Dimensions of Latent Semantic Indexing (LSI)." In 2007 40th Annual Hawaii International Conference on System Sciences (HICSS'07). IEEE, 2007. http://dx.doi.org/10.1109/hicss.2007.213.

2

Al-Anzi, Fawaz S., and Dia AbuZeina. "Enhanced Search for Arabic Language Using Latent Semantic Indexing (LSI)." In 2018 International Conference on Intelligent and Innovative Computing Applications (ICONIC). IEEE, 2018. http://dx.doi.org/10.1109/iconic.2018.8601096.

3

Amirah, Nik Nur, Tuan Mohamad Rahim, Zulaile Mabni, Haslizatul Mohamed Hanum, and Nurazzah Abdul Rahman. "A Malay Hadith translated document retrieval using parallel Latent Semantic Indexing (LSI)." In 2016 Third International Conference on Information Retrieval and Knowledge Management (CAMP). IEEE, 2016. http://dx.doi.org/10.1109/infrkm.2016.7806346.

4

Lubis, Arif Ridho, Santi Prayudani, Yulia Fatmi, and Okvi Nugroho. "Latent Semantic Indexing (LSI) and Hierarchical Dirichlet Process (HDP) Models on News Data." In 2022 5th International Conference of Computer and Informatics Engineering (IC2IE). IEEE, 2022. http://dx.doi.org/10.1109/ic2ie56416.2022.9970067.

5

Souza, Erick Nilsen Pereira, and Daniela Barreiro Claro. "Detecção Multilíngue de Serviços Web Duplicados Baseada na Similaridade Textual." In X Simpósio Brasileiro de Sistemas de Informação. Sociedade Brasileira de Computação - SBC, 2014. http://dx.doi.org/10.5753/sbsi.2014.6140.

Abstract:
Similarity-based grouping is a relevant step in web service discovery and composition strategies. Many clustering methods process natural-language service descriptions to estimate the degree of correlation between services. However, the use of knowledge bases tied to specific languages limits the applicability of these methods. This paper proposes a multilingual model for clustering similar web services from their natural-language descriptions. In particular, Latent Semantic Indexing (LSI), a language- and domain-independent Information Retrieval (IR) method, was applied. In addition, an experimental analysis was carried out with three similarity measures in order to determine which of them is most suitable for detecting duplicate web services from service descriptions in two languages.
6

Papadimitriou, Christos H., Hisao Tamaki, Prabhakar Raghavan, and Santosh Vempala. "Latent semantic indexing." In the seventeenth ACM SIGACT-SIGMOD-SIGART symposium. New York, New York, USA: ACM Press, 1998. http://dx.doi.org/10.1145/275487.275505.

7

Krivenko, Mikhail, and Vitaly Vasilyev. "Sequential latent semantic indexing." In the 2nd Workshop. New York, New York, USA: ACM Press, 2009. http://dx.doi.org/10.1145/1581114.1581117.

8

Dasgupta, Anirban, Ravi Kumar, Prabhakar Raghavan, and Andrew Tomkins. "Variable latent semantic indexing." In Proceeding of the eleventh ACM SIGKDD international conference. New York, New York, USA: ACM Press, 2005. http://dx.doi.org/10.1145/1081870.1081876.

9

Hofmann, Thomas. "Probabilistic latent semantic indexing." In the 22nd annual international ACM SIGIR conference. New York, New York, USA: ACM Press, 1999. http://dx.doi.org/10.1145/312624.312649.

10

Wang, Quan, Jun Xu, Hang Li, and Nick Craswell. "Regularized latent semantic indexing." In the 34th international ACM SIGIR conference. New York, New York, USA: ACM Press, 2011. http://dx.doi.org/10.1145/2009916.2010008.


Reports of organizations on the topic "Latent Semantic Indexing (LSI)"

1

Simon, H. D., and H. Zha. On updating problems in latent semantic indexing. Office of Scientific and Technical Information (OSTI), November 1997. http://dx.doi.org/10.2172/650342.

2

Zha, H., and Z. Zhang. On matrices with low-rank-plus-shift structure: Partial SVD and latent semantic indexing. Office of Scientific and Technical Information (OSTI), August 1998. http://dx.doi.org/10.2172/663268.
