A ready-made bibliography on the topic "Text analysis"

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles

Select a source type:

Browse lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Text analysis".

An "Add to bibliography" button is available next to every work in the bibliography. Use it, and we will automatically create a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, provided the relevant details are available in the metadata.

Journal articles on the topic "Text analysis"

1

Orazimbetova, Z. K., and T. Mukhiyatdinova. "LINGUISTIC ANALYSIS OF NEWSPAPER TEXT". CURRENT RESEARCH JOURNAL OF PHILOLOGICAL SCIENCES 02, no. 10 (1.10.2021): 82–85. http://dx.doi.org/10.37547/philological-crjps-02-10-16.

2

Shah, Neha K. "Introduction of Text mining and an Analysis of Text mining Techniques". Paripex - Indian Journal Of Research 2, no. 2 (15.01.2012): 56–57. http://dx.doi.org/10.15373/22501991/feb2013/18.

3

Sellam, V. "Text Analysis Via Composite Feature Extraction". Journal of Advanced Research in Dynamical and Control Systems 24, no. 4 (31.03.2020): 310–20. http://dx.doi.org/10.5373/jardcs/v12i4/20201445.

4

H., D. P. "Text analysis". Nature 356, no. 6372 (April 1992): 740. http://dx.doi.org/10.1038/356740a0.

5

Mhamdi, D. "Job Recommendation System based on Text Analysis". Journal of Advanced Research in Dynamical and Control Systems 12, SP4 (31.03.2020): 1025–30. http://dx.doi.org/10.5373/jardcs/v12sp4/20201575.

6

Khan, Nida Zafar, and S. R. Yadav. "Analysis of Text Classification Algorithms: A Review". International Journal of Trend in Scientific Research and Development Volume-3, Issue-2 (28.02.2019): 579–81. http://dx.doi.org/10.31142/ijtsrd21448.

7

Adewumi, Sunday Eric. "Character Analysis Scheme for Compressing Text Files". International Journal of Computer Theory and Engineering 7, no. 5 (October 2015): 362–65. http://dx.doi.org/10.7763/ijcte.2015.v7.986.

8

Makhmudovna, Madirimova Sokhiba, Abdulhaq Rahimi Saripul, and Jamila Eisar. "ANALYSIS OF TEXT DIFFERENCES IN MUTRIB'S WORKS". American Journal Of Philological Sciences 03, no. 04 (1.04.2023): 41–47. http://dx.doi.org/10.37547/ajps/volume03issue04-07.

9

Ding, Yong, Yongsheng Han, Guozhen Lu, and Xinfeng Wu. "Boundedness of Singular Integrals on Multiparameter Weighted Hardy Spaces $H^p_w(\mathbb{R}^n \times \mathbb{R}^m)$". Potential Analysis 37, no. 1 (9.08.2011): 31–56. http://dx.doi.org/10.1007/s11118-011-9244-y.

10

Fréchet, Nadjim, Justin Savoie, and Yannick Dufresne. "Analysis of Text-Analysis Syllabi: Building a Text-Analysis Syllabus Using Scaling". PS: Political Science & Politics 53, no. 2 (29.11.2019): 338–43. http://dx.doi.org/10.1017/s1049096519001732.

Abstract:
In the last decade, text-analytic methods have become a fundamental element of a political researcher's toolkit. Today, text analysis is taught in most major universities; many have entire courses dedicated to the topic. This article offers a systematic review of 45 syllabi of text-analysis courses around the world. From these syllabi, we extracted data that allowed us to rank canonical sources and discuss the variety of software used in teaching. Furthermore, we argue that our empirical method for building a text-analysis syllabus could easily be extended to syllabi for other courses. For instance, scholars can use our technique to introduce their graduate students to the field of systematic reviews while improving the quality of their syllabi.

Doctoral dissertations on the topic "Text analysis"

1

Haggren, Hugo. "Text Similarity Analysis for Test Suite Minimization". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-290239.

Abstract:
Software testing is the most expensive phase in the software development life cycle. It is thus understandable why test optimization is a crucial area in the software development domain. In software testing, the gradual increase of test cases demands large portions of testing resources (budget and time). Test Suite Minimization is considered a potential approach to deal with the test suite size problem. Several test suite minimization techniques have been proposed to efficiently address the test suite size problem. Proposing a good solution for test suite minimization is a challenging task, where several parameters such as code coverage, requirement coverage, and testing cost need to be considered before removing a test case from the testing cycle. This thesis proposes and evaluates two different NLP-based approaches for similarity analysis between manual integration test cases, which can be employed for test suite minimization. One approach is based on syntactic text similarity analysis and the other is a machine learning based semantic approach. The feasibility of the proposed solutions is studied through analysis of industrial use cases at Ericsson AB in Sweden. The results show that the semantic approach barely manages to outperform the syntactic approach. While both approaches show promise, subsequent studies will have to be done to further evaluate the semantic similarity based method.
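The abstract above contrasts a syntactic and a semantic notion of test-case similarity. As a rough, hypothetical illustration of the syntactic side only (not the thesis implementation), the sketch below scores pairwise similarity between manual test-case descriptions with TF-IDF vectors and cosine similarity; scikit-learn is an assumed dependency, and the example test cases and threshold are invented.

```python
# Illustrative sketch only: syntactic similarity between manual test-case
# descriptions via TF-IDF vectors and cosine similarity (scikit-learn assumed).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

test_cases = [
    "Verify that the node restarts correctly after a software upgrade",
    "Check node restart behaviour following a software upgrade",
    "Measure downlink throughput under high load",
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(test_cases)
similarity = cosine_similarity(tfidf)  # (n_cases, n_cases) pairwise scores

THRESHOLD = 0.5  # invented cut-off for flagging near-duplicates
for i in range(len(test_cases)):
    for j in range(i + 1, len(test_cases)):
        if similarity[i, j] >= THRESHOLD:
            print(f"Test cases {i} and {j} look redundant (score={similarity[i, j]:.2f})")
```

A semantic variant in the spirit of the thesis would swap the TF-IDF vectors for sentence embeddings from a pretrained model while keeping the same pairwise comparison.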
2

Romsdorfer, Harald. "Polyglot text to speech synthesis: text analysis & prosody control". Aachen: Shaker, 2009. http://d-nb.info/993448836/04.

3

Kay, Roderick Neil. "Text analysis, summarising and retrieval". Thesis, University of Salford, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.360435.

4

Haselton, Curt B., and Gregory G. Deierlein. "Assessing seismic collapse safety of modern reinforced concrete moment-frame buildings". Berkeley, Calif.: Pacific Earthquake Engineering Research Center, 2008. http://nisee.berkeley.edu/elibrary/Text/200803261.

5

Ozsoy, Makbule Gulcin. "Text Summarization Using Latent Semantic Analysis". Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12612988/index.pdf.

Abstract:
Text summarization solves the problem of presenting the information needed by a user in a compact form. There are different approaches in the literature to creating well-formed summaries. One of the newest methods in text summarization is the Latent Semantic Analysis (LSA) method. In this thesis, different LSA-based summarization algorithms are explained and two new LSA-based summarization algorithms are proposed. The algorithms are evaluated on Turkish and English documents, and their performances are compared using their ROUGE scores.
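The entry above concerns LSA-based summarization in general. As a hedged sketch of the classic LSA selection idea (not the two algorithms proposed in the thesis), the snippet below builds a sentence-term TF-IDF matrix, takes its SVD, and keeps the sentence that loads most heavily on each leading latent topic; scikit-learn and NumPy are assumed, and the naive sentence splitting is for illustration only.

```python
# Generic LSA-style extractive summarization sketch (not the thesis's algorithms).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def lsa_summary(text: str, n_sentences: int = 2) -> str:
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    tfidf = TfidfVectorizer().fit_transform(sentences).toarray()  # sentences x terms
    u, s, vt = np.linalg.svd(tfidf, full_matrices=False)          # u: sentences x concepts
    chosen = []
    for concept in range(min(n_sentences, len(s))):
        idx = int(np.argmax(np.abs(u[:, concept])))  # sentence most tied to this concept
        if idx not in chosen:
            chosen.append(idx)
    return ". ".join(sentences[i] for i in sorted(chosen)) + "."

print(lsa_summary(
    "Text summarization presents the information a user needs in a compact form. "
    "Latent Semantic Analysis applies the SVD to a term-sentence matrix. "
    "Sentences that load strongly on the leading topics are selected. "
    "ROUGE scores compare the output against reference summaries."
))
```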
6

O'Connor, Brendan T. "Statistical Text Analysis for Social Science". Research Showcase @ CMU, 2014. http://repository.cmu.edu/dissertations/541.

Abstract:
What can text corpora tell us about society? How can automatic text analysis algorithms efficiently and reliably analyze the social processes revealed in language production? This work develops statistical text analyses of dynamic social and news media datasets to extract indicators of underlying social phenomena, and to reveal how social factors guide linguistic production. This is illustrated through three case studies: first, examining whether sentiment expressed in social media can track opinion polls on economic and political topics (Chapter 3); second, analyzing how novel online slang terms can be very specific to geographic and demographic communities, and how these social factors affect their transmission over time (Chapters 4 and 5); and third, automatically extracting political events from news articles, to assist analyses of the interactions of international actors over time (Chapter 6). We demonstrate a variety of computational, linguistic, and statistical tools that are employed for these analyses, and also contribute MiTextExplorer, an interactive system for exploratory analysis of text data against document covariates, whose design was informed by the experience of researching these and other similar works (Chapter 2). These case studies illustrate recurring themes toward developing text analysis as a social science methodology: computational and statistical complexity, and domain knowledge and linguistic assumptions.
7

Lin, Yuhao. "Text Analysis in Fashion : Keyphrase Extraction". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-290158.

Abstract:
The ability to extract useful information from texts and present it in the form of structured attributes is an important step towards making product comparison algorithms in fashion smarter and better. Some previous work exploits statistical features like word frequency and graph models to predict keyphrases. In recent years, deep neural networks have proved to be the state-of-the-art methods for language modeling. Successful examples include Long Short Term Memory (LSTM), Gated Recurrent Units (GRU), Bidirectional Encoder Representations from Transformers (BERT) and their variations. In addition, word embedding techniques like word2vec[1] are also helpful for improving performance. Besides these techniques, a high-quality dataset is also important to the effectiveness of models. In this project, we aim to develop reliable and efficient machine learning models for keyphrase extraction. At Norna AB, we have a collection of product descriptions from different vendors without keyphrase annotations, which motivates the use of unsupervised methods. They should be capable of extracting useful keyphrases that capture the features of a product. To further explore the power of deep neural networks, we also implement several deep learning models. The dataset has two parts: the first part is the fashion dataset, where keyphrases are extracted by our unsupervised method; the second part is a public dataset in the news domain. We find that deep learning models are also capable of extracting meaningful keyphrases and outperform the unsupervised model. Precision, recall and F1 score are used as evaluation metrics. The results show that the model that uses LSTM and CRF achieves the best performance. We also compare the performance of different models with respect to keyphrase length and the number of keyphrases. The results indicate that all models perform better at predicting short keyphrases. We also show that our refined model has the advantage of predicting long keyphrases, which is challenging in this field.
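The abstract above evaluates extracted keyphrases with precision, recall, and F1. A minimal sketch of that exact-match evaluation, with invented example phrases rather than the thesis data, could look like this:

```python
# Exact-match precision/recall/F1 for keyphrase extraction (illustrative only).
def evaluate_keyphrases(predicted: list[str], gold: list[str]) -> dict[str, float]:
    pred = {p.lower().strip() for p in predicted}
    true = {g.lower().strip() for g in gold}
    tp = len(pred & true)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(true) if true else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

print(evaluate_keyphrases(
    predicted=["slim fit", "organic cotton", "crew neck"],
    gold=["organic cotton", "crew neck", "long sleeve"],
))
```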
8

Maisto, Alessandro. "A Hybrid Framework for Text Analysis". Doctoral thesis, Università degli studi di Salerno, 2017. http://hdl.handle.net/10556/2481.

Abstract:
2015 - 2016
In Computational Linguistics there is an essential dichotomy between linguists and computer scientists. The former, with a strong knowledge of language structures, lack engineering skills; the latter, conversely, expert in computer science and mathematics, assign little value to the basic mechanisms and structures of language. This discrepancy has increased, especially in the last decades, due to the growth of computational resources and the gradual computerization of the world; the use of Machine Learning technologies for solving Artificial Intelligence problems, which allows machines, for example, to learn from manually generated examples, has been used more and more often in Computational Linguistics in order to overcome the obstacle represented by language structures and their formal representation. The dichotomy has resulted in the birth of two main approaches to Computational Linguistics that respectively prefer: rule-based methods, which try to imitate the way in which humans use and understand language, reproducing the syntactic structures on which the understanding process is based and building lexical resources such as electronic dictionaries, taxonomies or ontologies; and statistics-based methods which, conversely, treat language as a set of elements, quantifying words in a mathematical way and trying to extract information without identifying syntactic structures or, in some algorithms, trying to give the machine the ability to learn these structures. One of the main problems is the lack of communication between these two approaches, due to the substantial differences characterizing them: on the one hand there is a strong focus on how language works and on its characteristics, with a tendency towards analytical and manual work; on the other hand, the engineering perspective sees language as an obstacle and recognizes in algorithms the fastest way to overcome it. However, the lack of communication is not only an incompatibility: following Harris, the best way to approach natural language could result from taking the best of both. At the moment, there is a large number of open-source tools that perform text analysis and Natural Language Processing. A great part of these tools are based on statistical models and consist of separate modules which can be combined to create a text-processing pipeline. Many of these resources are code packages without a GUI (Graphical User Interface), and they are impossible to use for users without programming skills. Furthermore, the vast majority of these open-source tools support only English and, when Italian is included, their performance decreases significantly. Open-source tools for Italian, on the other hand, are very few. In this work we want to fill this gap by presenting a new hybrid framework for the analysis of Italian texts. It is not intended as a commercial tool; the purpose for which it was built is to help linguists and other scholars perform rapid text analysis and produce linguistic data. The framework, which performs both statistical and rule-based analysis, is called LG-Starship. The idea is to build a modular software package that initially includes the basic algorithms needed to perform different kinds of analysis. The modules perform the following tasks. Preprocessing Module: a module with which it is possible to load a text, normalize it or delete stop-words.
As output, the module presents the list of tokens and letters which compose the texts, with their respective occurrence counts, and the processed text. Mr. Ling Module: a module with which POS tagging and lemmatization are performed. The module also returns the table of lemmas with occurrence counts and the table with the quantification of grammatical tags. Statistic Module: with which it is possible to calculate the Term Frequency and TF-IDF of tokens or lemmas, extract bigram and trigram units, and export the results as tables. Semantic Module: which uses the Hyperspace Analogue to Language algorithm to calculate semantic similarity between words. The module returns word-by-word similarity matrices which can be exported and analyzed. Syntactic Module: which analyzes the syntactic structure of a selected sentence and tags the verbs and their arguments with semantic labels. The objective of the framework is to build an all-in-one platform for NLP which allows any kind of user to perform basic and advanced text analysis. To make the framework accessible to users who have no specific computer science and programming skills, the modules have been provided with an intuitive GUI. The framework can be considered hybrid in a double sense: as explained above, it uses both statistical and rule-based methods, relying on standard statistical algorithms and techniques and, at the same time, on Lexicon-Grammar syntactic theory. In addition, it has been written in both the Java and Python programming languages. The LG-Starship framework has a simple graphical user interface but will also be released as separate modules which may be included independently in any NLP pipeline. There are many resources of this kind, but the large majority work for English. There are very few free resources for Italian, and this work tries to cover this need by proposing a tool which can be used both by linguists or other scholars interested in language and text analysis who know nothing about programming languages, and by computer scientists, who can use the free modules in their own code or in combination with different NLP algorithms. The framework starts from a text or corpus written directly by the user or loaded from an external resource. The LG-Starship workflow is described in the flowchart shown in fig. 1. The pipeline shows that the Preprocessing Module is applied to the original imported or generated text in order to produce a clean and normalized preprocessed text. This module includes a function for text splitting, a stop-word list and a tokenization method. To the preprocessed text, the Statistic Module or the Mr. Ling Module can be applied. The first, which includes basic statistical algorithms such as Term Frequency, TF-IDF and n-gram extraction, produces as output databases of lexical and numerical data which can be used to produce charts or to perform further external analysis. The second is divided into two main tasks: a POS tagger, based on the Averaged Perceptron Tagger [?] and trained on the Paisà Corpus [Lyding et al., 2014], performs Part-of-Speech tagging and produces an annotated text; a lemmatization method, which relies on a set of electronic dictionaries developed at the University of Salerno [Elia, 1995, Elia et al., 2010], takes as input the POS-tagged text and produces a new lemmatized version of the original text with information about syntactic and semantic properties.
This lemmatized text, which can also be processed with the Statistic Module, serves as input for two deeper levels of text analysis carried out by the Syntactic Module and the Semantic Module. The first relies on Lexicon-Grammar theory [Gross, 1971, 1975] and uses a database of predicate structures under development at the Department of Political, Social and Communication Science. Its objective is to produce a dependency graph of the sentences that compose the text. The Semantic Module uses the Hyperspace Analogue to Language distributional semantics algorithm [Lund and Burgess, 1996], trained on the Paisà Corpus, to produce a semantic network of the words of the text. This workflow has been included in two different experiments involving two user-generated corpora. The first experiment is a statistical study of the language of rap music in Italy through the analysis of a large corpus of rap song lyrics downloaded from online databases of user-generated lyrics. The second experiment is a feature-based Sentiment Analysis project performed on user product reviews. For this project we integrated a large domain database of linguistic resources for Sentiment Analysis, developed in past years by the Department of Political, Social and Communication Science of the University of Salerno, which consists of polarized dictionaries of verbs, adjectives, adverbs and nouns. These two experiments underline how the linguistic framework can be applied to different levels of analysis and can produce both qualitative and quantitative data. As for the results obtained, the framework, which is only at a beta version, achieves fair results both in terms of processing time and in terms of precision. Nevertheless, the work is far from complete. More algorithms will be added to the Statistic Module and the Syntactic Module will be completed. The GUI will be improved and made more attractive and modern and, in addition, an open-source online version of the modules will be published. [edited by author]
XV n.s.
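The abstract above lists the statistics computed by the framework's Statistic Module (term frequency, TF-IDF, bigrams and trigrams). The plain-Python sketch below reproduces those basic computations on a toy two-document corpus; it makes no claim to match the actual LG-Starship code.

```python
# Toy term-frequency / bigram / TF-IDF computations (not the LG-Starship implementation).
import math
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    return re.findall(r"\w+", text.lower())

corpus = [
    "il testo viene caricato e normalizzato dal modulo di preprocessing",
    "il modulo statistico calcola frequenze, bigrammi e tf-idf",
]

tokenized = [tokenize(doc) for doc in corpus]
term_freq = [Counter(tokens) for tokens in tokenized]
bigrams = [Counter(zip(tokens, tokens[1:])) for tokens in tokenized]

# TF-IDF for the first document: tf * log(N / df)
N = len(corpus)
df = Counter(term for tokens in tokenized for term in set(tokens))
tfidf_doc0 = {t: tf * math.log(N / df[t]) for t, tf in term_freq[0].items()}

print(term_freq[0].most_common(3))
print(bigrams[0].most_common(2))
print(sorted(tfidf_doc0.items(), key=lambda kv: -kv[1])[:3])
```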
9

Algarni, Abdulmohsen. "Relevance feature discovery for text analysis". Thesis, Queensland University of Technology, 2011. https://eprints.qut.edu.au/48230/1/Abdulmohsen_Algarni_Thesis.pdf.

Abstract:
It is a big challenge to guarantee the quality of discovered relevance features in text documents for describing user preferences because of the large number of terms, patterns, and noise. Most existing popular text mining and classification methods have adopted term-based approaches. However, they have all suffered from the problems of polysemy and synonymy. Over the years, people have often held the hypothesis that pattern-based methods should perform better than term-based ones in describing user preferences, but many experiments do not support this hypothesis. This research presents a promising method, Relevance Feature Discovery (RFD), for solving this challenging issue. It discovers both positive and negative patterns in text documents as high-level features in order to accurately weight low-level features (terms) based on their specificity and their distributions in the high-level features. The thesis also introduces an adaptive model (called ARFD) to enhance the flexibility of using RFD in an adaptive environment. ARFD automatically updates the system's knowledge based on a sliding window over new incoming feedback documents. It can efficiently decide which incoming documents can bring new knowledge into the system. Substantial experiments using the proposed models on Reuters Corpus Volume 1 and TREC topics show that the proposed models significantly outperform both the state-of-the-art term-based methods underpinned by Okapi BM25, Rocchio or Support Vector Machine and other pattern-based methods.
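The abstract above weights low-level terms by their distribution across discovered positive and negative patterns. The toy sketch below conveys only that general idea through a naive support difference; it is not the RFD or ARFD algorithm, and the example patterns are invented.

```python
# Naive illustration of term weighting from positive vs. negative patterns
# (a simplification, not Relevance Feature Discovery itself).
from collections import Counter

positive_patterns = [("text", "mining"), ("pattern", "discovery"), ("text", "classification")]
negative_patterns = [("stock", "market"), ("market", "analysis")]

pos_support = Counter(term for pattern in positive_patterns for term in pattern)
neg_support = Counter(term for pattern in negative_patterns for term in pattern)

weights = {t: pos_support[t] - neg_support[t] for t in set(pos_support) | set(neg_support)}
for term, weight in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{term}: {weight:+d}")
```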
10

Romsdorfer, Harald [author]. "Polyglot Text-to-Speech Synthesis: Text Analysis & Prosody Control / Harald Romsdorfer". Aachen: Shaker, 2009. http://d-nb.info/1156517354/34.


Books on the topic "Text analysis"

1

Wachsmuth, Henning. Text Analysis Pipelines. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-25741-9.

2

Wildfeuer, Janina. Film Text Analysis. New York: Routledge, 2016. http://dx.doi.org/10.4324/9781315692746.

3

Sözer, Emel, ed. Text connexity, text coherence: Aspects, methods, results. Hamburg: H. Buske, 1985.

4

Jockers, Matthew L., and Rosamond Thalken. Text Analysis with R. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-39643-5.

5

Popping, R. Computer-assisted text analysis. London: Sage Publications, 2000.

6

Text and discourse analysis. London: Routledge, 1995.

7

Introducing electronic text analysis. New York: Routledge, 2006.

8

Cobham, David. Macroeconomic analysis: An intermediate text. London: Longman, 1987.

9

Charolles, Michel, János S. Petöfi, and Emel Sözer, eds. Research in text connexity and text coherence: A survey. Hamburg: H. Buske, 1986.

10

Warnke, Ingo, ed. Schnittstelle Text: Diskurs. Frankfurt am Main: Peter Lang, 1999.


Book chapters on the topic "Text analysis"

1

Bainbridge, William Sims. "Text Analysis". In Human–Computer Interaction Series, 151–76. London: Springer London, 2013. http://dx.doi.org/10.1007/978-1-4471-5604-8_7.

2

Arnold, Taylor, and Lauren Tilton. "Text Analysis". In Quantitative Methods in the Humanities and Social Sciences, 157–76. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-20702-5_10.

3

Petrocchi, Alessandra. "Text analysis". In The Gaṇitatilaka and its Commentary, 285–417. Abingdon, Oxon; New York, NY: Routledge, 2019. http://dx.doi.org/10.4324/9781351022262-5.

4

Dengah, H. J. François, Jeffrey G. Snodgrass, Evan R. Polzer, and William Cody Nixon. "Text analysis". In Systematic Methods for Analyzing Culture, 65–82. New York: Routledge, 2020. http://dx.doi.org/10.4324/9781003092179-7.

5

Leslie, Larry Z. "Text Analysis". In Communication Research Methods in Postmodern Culture, 2nd ed., 145–71. New York, NY: Routledge, 2017. http://dx.doi.org/10.4324/9781315231730-10.

6

Wang, Wei. "Text analysis". In The Routledge Handbook of Research Methods in Applied Linguistics, 453–63. New York: Routledge, 2019. http://dx.doi.org/10.4324/9780367824471-38.

7

Worch, Thierry, Julien Delarue, Vanessa Rios de Souza, and John Ennis. "Text Analysis". In Data Science for Sensory and Consumer Scientists, 279–300. New York: Chapman and Hall/CRC, 2023. http://dx.doi.org/10.1201/9781003028611-13.

8

Bonnell, Jerry, and Mitsunori Ogihara. "Text Analysis". In Exploring Data Science with R and the Tidyverse, 441–72. Boca Raton: Chapman and Hall/CRC, 2023. http://dx.doi.org/10.1201/9781003320845-10.

9

Beach, David. "Music and Text". In Schenkerian Analysis, 2nd ed., 252–90. New York; London: Routledge, 2019. http://dx.doi.org/10.4324/9780429453793-10.

10

Wachsmuth, Henning. "Text Analysis Pipelines". In Lecture Notes in Computer Science, 19–53. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-25741-9_2.


Conference papers on the topic "Text analysis"

1

Thi Xuan Lam, Thanh, Anh Duc Le, and Masaki Nakagawa. "User Interface for Text and Non-Text Classification". In 2019 International Conference on Document Analysis and Recognition Workshops (ICDARW). IEEE, 2019. http://dx.doi.org/10.1109/icdarw.2019.20044.

2

Xue, Zijun. "Scalable Text Analysis". In WSDM 2017: Tenth ACM International Conference on Web Search and Data Mining. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3018661.3022750.

3

Noronha, Perpetua F., Madhu Bhan, M. Niranjanamurthy, and D. Chandana. "Text Analysis Tool". In 2023 International Conference on Artificial Intelligence and Applications (ICAIA) Alliance Technology Conference (ATCON-1). IEEE, 2023. http://dx.doi.org/10.1109/icaia57370.2023.10169652.

4

Wang, Xiufei, Lei Huang, and Changping Liu. "A New Block Partitioned Text Feature for Text Verification". In 2009 10th International Conference on Document Analysis and Recognition. IEEE, 2009. http://dx.doi.org/10.1109/icdar.2009.61.

5

Zhao, Miao, Rui-Qi Wang, Fei Yin, Xu-Yao Zhang, Lin-Lin Huang, and Jean-Marc Ogier. "Fast Text/non-Text Image Classification with Knowledge Distillation". In 2019 International Conference on Document Analysis and Recognition (ICDAR). IEEE, 2019. http://dx.doi.org/10.1109/icdar.2019.00234.

6

Zhu, Xiangyu, Yingying Jiang, Shuli Yang, Xiaobing Wang, Wei Li, Pei Fu, Hua Wang, and Zhenbo Luo. "Deep Residual Text Detection Network for Scene Text". In 2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR). IEEE, 2017. http://dx.doi.org/10.1109/icdar.2017.137.

7

Nakamura, Toshiki, Anna Zhu, and Seiichi Uchida. "Scene Text Magnifier". In 2019 International Conference on Document Analysis and Recognition (ICDAR). IEEE, 2019. http://dx.doi.org/10.1109/icdar.2019.00137.

8

Romero, Veronica, Joan Andreu Sanchez, Vicente Bosch, Katrien Depuydt, and Jesse de Does. "Influence of text line segmentation in Handwritten Text Recognition". In 2015 13th International Conference on Document Analysis and Recognition (ICDAR). IEEE, 2015. http://dx.doi.org/10.1109/icdar.2015.7333819.

9

Zhang, Chengquan, Cong Yao, Baoguang Shi, and Xiang Bai. "Automatic discrimination of text and non-text natural images". In 2015 13th International Conference on Document Analysis and Recognition (ICDAR). IEEE, 2015. http://dx.doi.org/10.1109/icdar.2015.7333889.

10

Nicolaou, Anguelos, and Basilis Gatos. "Handwritten Text Line Segmentation by Shredding Text into its Lines". In 2009 10th International Conference on Document Analysis and Recognition. IEEE, 2009. http://dx.doi.org/10.1109/icdar.2009.243.


Organizational reports on the topic "Text analysis"

1

Spirling, Arthur. Text Analysis: Text as Data with R. Instats Inc., 2022. http://dx.doi.org/10.61700/a52fcasdqm1du469.

Abstract:
This seminar introduces "text as data" statistical methods using R. The course is very applied, with the primary aim of helping social science researchers understand the types of questions we can ask with text, and how to answer them. The seminar covers how texts may be modeled and compared as quantitative entities, and then moves to supervised and unsupervised methods, including topic models and embeddings. At the seminar's conclusion, participants will know how to conduct their own text-as-data research projects. An official Instats certificate of completion is provided at the conclusion of the seminar.
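The seminar described above works in R. As a language-neutral illustration of the same "text as data" steps (a document-term matrix followed by an unsupervised topic model), here is a small Python sketch; scikit-learn is an assumed dependency and the example documents are invented.

```python
# Document-term matrix plus a tiny topic model (illustrative of "text as data" workflows).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "the budget bill passed the senate after a long debate",
    "the team won the championship game in overtime",
    "senators debated taxes and the federal budget",
]

dtm = CountVectorizer(stop_words="english").fit_transform(docs)  # document-term matrix
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(dtm)
print(lda.transform(dtm))  # per-document topic proportions
```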
2

Spirling, Arthur. Text Analysis: Text as Data with R. Instats Inc., 2022. http://dx.doi.org/10.61700/lolq2hyg9sn6d469.

Abstract:
This seminar introduces "text as data" statistical methods using R. The course is very applied, with the primary aim of helping social science researchers understand the types of questions we can ask with text, and how to answer them. The seminar covers how texts may be modeled and compared as quantitative entities, and then moves to supervised and unsupervised methods, including topic models and embeddings. At the seminar's conclusion, participants will know how to conduct their own text-as-data research projects. An official Instats certificate of completion is provided at the conclusion of the seminar. For European PhD students, the seminar offers 2 ECTS-equivalent points.
3

Stevenson, Mark. Individual Profiling Using Text Analysis. Fort Belvoir, VA: Defense Technical Information Center, April 2016. http://dx.doi.org/10.21236/ad1009417.

4

Montiel Olea, César E., and Leonardo R. Corral. Text Analysis of Project Completion Reports. Inter-American Development Bank, June 2021. http://dx.doi.org/10.18235/0003611.

Abstract:
Project Completion Reports (PCRs) are the main instrument through which different multilateral organizations measure the success of a project once it closes. PCRs are important for development effectiveness as they serve to understand achievements, failures, and challenges within the project cycle that can feed back into the design and execution of new projects. The aim of this paper is to introduce text analysis tools for the exploration of PCR documents. We describe and apply different text analysis tools to explore the content of a sample of PCRs. We seek to illustrate a way in which PCRs can be summarized and analyzed using innovative tools applied to a unique dataset. We believe that the methods presented in this investigation have numerous potential applications to different types of text documents routinely prepared within the Inter-American Development Bank (IDB).
5

Bock, Geoffrey. Meta Tagging and Text Analysis from ClearForest. Boston, MA: Patricia Seybold Group, February 2002. http://dx.doi.org/10.1571/pr2-21-02cc.

6

Schryver, Jack C., Edmon Begoli, Ajith Jose, and Christopher Griffin. Inferring Group Processes from Computer-Mediated Affective Text Analysis. Office of Scientific and Technical Information (OSTI), February 2011. http://dx.doi.org/10.2172/1004442.

7

Bengston, David N. Applications of computer-aided text analysis in natural resources. St. Paul, MN: U.S. Department of Agriculture, Forest Service, North Central Research Station, 2000. http://dx.doi.org/10.2737/nc-gtr-211.

8

Giorcelli, Michela, Nicola Lacetera, and Astrid Marinoni. Does Scientific Progress Affect Culture? A Digital Text Analysis. Cambridge, MA: National Bureau of Economic Research, January 2019. http://dx.doi.org/10.3386/w25429.

9

Fischer, Eric, Rebecca McCaughrin, Saketh Prazad, and Mark Vandergon. Fed Transparency and Policy Expectation Errors: A Text Analysis Approach. Federal Reserve Bank of New York, November 2023. http://dx.doi.org/10.59576/sr.1081.

Abstract:
This paper seeks to estimate the extent to which market-implied policy expectations could be improved with further information disclosure from the FOMC. Using text analysis methods based on large language models, we show that if FOMC meeting materials with five-year lagged release dates—like meeting transcripts and Tealbooks—were accessible to the public in real time, market policy expectations could substantially improve forecasting accuracy. Most of this improvement occurs during easing cycles. For instance, at the six-month forecasting horizon, the market could have predicted as much as 125 basis points of additional easing during the 2001 and 2008 recessions, equivalent to a 40-50 percent reduction in mean squared error. This potential forecasting improvement appears to be related to incomplete information about the Fed’s reaction function, particularly with respect to financial stability concerns in 2008. In contrast, having enhanced access to meeting materials would not have improved the market’s policy rate forecasting during tightening cycles.
10

Han, Xuehua, Juanle Wang, and Yuelei Yuan. Extraction and Analysis of Earthquake Events Information based on Web Text. International Science Council, 2019. http://dx.doi.org/10.24948/2019.06.
