Scientific literature on the topic "LL. Automated language processing"

Create a correct reference in APA, MLA, Chicago, Harvard, and several other styles

Choose a source type:

Consult the thematic lists of journal articles, books, theses, conference reports, and other academic sources on the topic "LL. Automated language processing."

Next to every source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference to the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a pdf and read its abstract online when this information is included in the metadata.

Journal articles on the topic "LL. Automated language processing"

1

Lalleman, Josine A., Ariane J. van Santen, and Vincent J. van Heuven. "L2 Processing of Dutch regular and irregular Verbs." ITL - International Journal of Applied Linguistics 115-116 (1 January 1997): 1–26. http://dx.doi.org/10.1075/itl.115-116.01lal.

Full text
Abstract:
Do L1 and (advanced) L2 speakers of Dutch employ distinct processes — rule application for regulars and lexical lookup for irregulars — when producing Dutch past tense forms? Do L2 speakers of a language that observes the same dual conjugation system as in Dutch (e.g. English, German) produce Dutch past tenses by a different process (i.e. more like that of L1 speakers) than learners of Dutch with a different L1 verb system (e.g. Japanese and Chinese)? We studied the on-line past tense production performance of L1 speakers and of advanced L2 speakers of Dutch varying relative past tense frequency of regular and irregular Dutch verbs. Performance proved slower and less accurate with both L1 and L2 speakers for irregular verbs with relatively low past tense frequency. No frequency effects were found for regular verbs. The results were qualitatively the same for English/German and for Japanese/Chinese L2 speakers, with a striking tendency to overgeneralize the regular past tense formation. We conclude that the mental representation of the Dutch past tense rule is essentially the same for L1 and L2 language users.
APA, Harvard, Vancouver, ISO, etc. styles
2

Rodríguez-Fornells, Antoni, Toni Cunillera, Anna Mestres-Missé, and Ruth de Diego-Balaguer. "Neurophysiological mechanisms involved in language learning in adults." Philosophical Transactions of the Royal Society B: Biological Sciences 364, no. 1536 (27 December 2009): 3711–35. http://dx.doi.org/10.1098/rstb.2009.0130.

Full text
Abstract:
Little is known about the brain mechanisms involved in word learning during infancy and in second language acquisition and about the way these new words become stable representations that sustain language processing. In several studies we have adopted the human simulation perspective, studying the effects of brain-lesions and combining different neuroimaging techniques such as event-related potentials and functional magnetic resonance imaging in order to examine the language learning (LL) process. In the present article, we review this evidence focusing on how different brain signatures relate to (i) the extraction of words from speech, (ii) the discovery of their embedded grammatical structure, and (iii) how meaning derived from verbal contexts can inform us about the cognitive mechanisms underlying the learning process. We compile these findings and frame them into an integrative neurophysiological model that tries to delineate the major neural networks that might be involved in the initial stages of LL. Finally, we propose that LL simulations can help us to understand natural language processing and how the recovery from language disorders in infants and adults can be accomplished.
APA, Harvard, Vancouver, ISO, etc. styles
3

Drolia, Shristi, Shrey Rupani, Pooja Agarwal, and Abheejeet Singh. "Automated Essay Rater using Natural Language Processing." International Journal of Computer Applications 163, no. 10 (17 April 2017): 44–46. http://dx.doi.org/10.5120/ijca2017913766.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
4

Satomura, Y., and M. B. Do Amaral. "Automated diagnostic indexing by natural language processing." Medical Informatics 17, no. 3 (January 1992): 149–63. http://dx.doi.org/10.3109/14639239209096531.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
5

Rungrojsuwan, Sorabud. "Morphological Processing Difficulty of Thai Learners of English with Different Levels of English Proficiency." MANUSYA 18, no. 1 (2015): 73–92. http://dx.doi.org/10.1163/26659077-01801004.

Full text
Abstract:
English morphology is said to be one of the most difficult subjects of linguistic study Thai students can acquire. The present study aims at examining Thai learners of English with different levels of English language proficiency in terms of their 1) morphological knowledge and 2) morphological processing behaviors. Two experiments were designed to test 200 participants from Mae Fah Luang University. The results showed that students with low language proficiency (LL group) have less morphological knowledge than those with intermediate language proficiency (IL group). However, those in the IL group still show some evidence of morphological difficulty, though they have better skills in English. For morphological processing behavior, it was found that, with less knowledge, participants in the LL group employ a one-by-one word matching technique rather than chunking a package of information as do those in the IL group. Accordingly, unlike those in the IL group, students in the LL group could not generate well-organized outputs.
APA, Harvard, Vancouver, ISO, etc. styles
6

Choudhary, Jaytrilok, and Deepak Singh Tomar. "Semi-Automated Ontology building through Natural Language Processing." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 13, no. 8 (23 August 2014): 4738–46. http://dx.doi.org/10.24297/ijct.v13i8.7072.

Full text
Abstract:
Ontology is a backbone of semantic web which is used for domain knowledge representation. Ontology provides the platform for effective extraction of information. Usually, ontology is developed manually, but the manual ontology construction requires lots of efforts by domain experts. It is also time consuming and costly. Thus, an approach to build ontology in semi-automated manner has been proposed. The proposed approach extracts concept automatically from open directory Dmoz. The Stanford Parser is explored to parse natural language syntax and extract the parts of speech which are used to form the relationship among the concepts. The experimental result shows a fair degree of accuracy which may be improved in future with more sophisticated approach.
APA, Harvard, Vancouver, ISO, etc. styles
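The parsing step described in this abstract (parse sentences, use parts of speech to propose concepts and relations) can be illustrated with a rough, hedged sketch. The sketch below uses spaCy as a stand-in for the Stanford Parser named in the paper; the sentence and the subject-verb-object heuristic are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: propose candidate ontology concepts (noun chunks) and a simple
# subject-verb-object relation from one sentence, using spaCy in place of the
# Stanford Parser mentioned in the paper.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed
doc = nlp("A university offers degree programmes to students.")

concepts = [chunk.text for chunk in doc.noun_chunks]

relations = []
for token in doc:
    if token.pos_ == "VERB":
        subjects = [child.text for child in token.children if child.dep_ == "nsubj"]
        objects = [child.text for child in token.children if child.dep_ == "dobj"]
        if subjects and objects:
            relations.append((subjects[0], token.lemma_, objects[0]))

print("candidate concepts:", concepts)
print("candidate relations:", relations)  # e.g. [('university', 'offer', 'programmes')]
```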
7

Sugden, Don. "Machine Aids to Translation: Automated Language Processing System (ALPS)." Meta: Journal des traducteurs 30, no. 4 (1985): 403. http://dx.doi.org/10.7202/004310ar.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
8

Karhade, Aditya V., Michiel E. R. Bongers, Olivier Q. Groot, Erick R. Kazarian, Thomas D. Cha, Harold A. Fogel, Stuart H. Hershman et al. "Natural language processing for automated detection of incidental durotomy." Spine Journal 20, no. 5 (May 2020): 695–700. http://dx.doi.org/10.1016/j.spinee.2019.12.006.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
9

Mukherjee, Prasenjit, and Baisakhi Chakraborty. "Automated Knowledge Provider System with Natural Language Query Processing." IETE Technical Review 33, no. 5 (17 December 2015): 525–38. http://dx.doi.org/10.1080/02564602.2015.1119662.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
10

Garvin, Jennifer H., Youngjun Kim, Glenn T. Gobbel, Michael E. Matheny, Andrew Redd, Bruce E. Bray, Paul Heidenreich et al. "Automated Heart Failure Quality Measurement with Natural Language Processing." Journal of Cardiac Failure 22, no. 8 (August 2016): S92. http://dx.doi.org/10.1016/j.cardfail.2016.06.292.

Full text
APA, Harvard, Vancouver, ISO, etc. styles

Theses on the topic "LL. Automated language processing"

1

Allott, Nicholas Mark. "A natural language processing framework for automated assessment." Thesis, Nottingham Trent University, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.314333.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
2

Onyenwe, Ikechukwu Ekene. "Developing methods and resources for automated processing of the African language Igbo." Thesis, University of Sheffield, 2017. http://etheses.whiterose.ac.uk/17043/.

Full text
Abstract:
Natural Language Processing (NLP) research is still in its infancy in Africa. Most of languages in Africa have few or zero NLP resources available, of which Igbo is among those at zero state. In this study, we develop NLP resources to support NLP-based research in the Igbo language. The springboard is the development of a new part-of-speech (POS) tagset for Igbo (IgbTS) based on a slight adaptation of the EAGLES guideline as a result of language internal features not recognized in EAGLES. The tagset consists of three granularities: fine-grain (85 tags), medium-grain (70 tags) and coarse-grain (15 tags). The medium-grained tagset is to strike a balance between the other two grains for practical purpose. Following this is the preprocessing of Igbo electronic texts through normalization and tokenization processes. The tokenizer is developed in this study using the tagset definition of a word token and the outcome is an Igbo corpus (IgbC) of about one million tokens. This IgbTS was applied to a part of the IgbC to produce the first Igbo tagged corpus (IgbTC). To investigate the effectiveness, validity and reproducibility of the IgbTS, an inter-annotation agreement (IAA) exercise was undertaken, which led to the revision of the IgbTS where necessary. A novel automatic method was developed to bootstrap a manual annotation process through exploitation of the by-products of this IAA exercise, to improve IgbTC. To further improve the quality of the IgbTC, a committee of taggers approach was adopted to propose erroneous instances on IgbTC for correction. A novel automatic method that uses knowledge of affixes to flag and correct all morphologically-inflected words in the IgbTC whose tags violate their status as not being morphologically-inflected was also developed and used. Experiments towards the development of an automatic POS tagging system for Igbo using IgbTC show good accuracy scores comparable to other languages that these taggers have been tested on, such as English. Accuracy on the words previously unseen during the taggers’ training (also called unknown words) is considerably low, and much lower on the unknown words that are morphologically-complex, which indicates difficulty in handling morphologically-complex words in Igbo. This was improved by adopting a morphological reconstruction method (a linguistically-informed segmentation into stems and affixes) that reformatted these morphologically-complex words into patterns learnable by machines. This enables taggers to use the knowledge of stems and associated affixes of these morphologically-complex words during the tagging process to predict their appropriate tags. Interestingly, this method outperforms other methods that existing taggers use in handling unknown words, and achieves an impressive increase for the accuracy of the morphologically-inflected unknown words and overall unknown words. These developments are the first NLP toolkit for the Igbo language and a step towards achieving the objective of Basic Language Resources Kits (BLARK) for the language. This IgboNLP toolkit will be made available for the NLP community and should encourage further research and development for the language.
APA, Harvard, Vancouver, ISO, etc. styles
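The "morphological reconstruction" idea in this abstract (segmenting morphologically complex unknown words into stems and affixes so the tagger can reuse affix patterns seen during training) can be sketched very roughly as below. The affix lists, the example word, and the length heuristics are invented placeholders, not the thesis's actual Igbo resources or algorithm.

```python
# Hedged sketch: split a morphologically complex unknown word into prefix + stem
# + suffix so a POS tagger can fall back on affix patterns it has already seen.
# The affix inventories below are made-up placeholders, not real Igbo data.
KNOWN_PREFIXES = ("a", "e", "o")
KNOWN_SUFFIXES = ("ghi", "la", "ra")

def segment(word):
    prefix = next((p for p in KNOWN_PREFIXES
                   if word.startswith(p) and len(word) > len(p) + 2), "")
    rest = word[len(prefix):]
    suffix = next((s for s in KNOWN_SUFFIXES
                   if rest.endswith(s) and len(rest) > len(s) + 1), "")
    stem = rest[: len(rest) - len(suffix)] if suffix else rest
    return [part for part in (prefix, stem, suffix) if part]

# An unknown word becomes a learnable "prefix stem suffix" pattern for the tagger.
print(segment("emeghi"))  # -> ['e', 'me', 'ghi'] with these toy affix lists
```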
3

Leonhard, Annette Christa. "Automated question answering for clinical comparison questions." Thesis, University of Edinburgh, 2012. http://hdl.handle.net/1842/6266.

Full text
Abstract:
This thesis describes the development and evaluation of new automated Question Answering (QA) methods tailored to clinical comparison questions that give clinicians a rank-ordered list of MEDLINE® abstracts targeted to natural language clinical drug comparison questions (e.g. ”Have any studies directly compared the effects of Pioglitazone and Rosiglitazone on the liver?”). Three corpora were created to develop and evaluate a new QA system for clinical comparison questions called RetroRank. RetroRank takes the clinician’s plain text question as input, processes it and outputs a rank-ordered list of potential answer candidates, i.e. MEDLINE® abstracts, that is reordered using new post-retrieval ranking strategies to ensure the most topically-relevant abstracts are displayed as high in the result set as possible. RetroRank achieves a significant improvement over the PubMed recency baseline and performs equal to or better than previous approaches to post-retrieval ranking relying on query frames and annotated data such as the approach by Demner-Fushman and Lin (2007). The performance of RetroRank shows that it is possible to successfully use natural language input and a fully automated approach to obtain answers to clinical drug comparison questions. This thesis also introduces two new evaluation corpora of clinical comparison questions with “gold standard” references that are freely available and are a valuable resource for future research in medical QA.
APA, Harvard, Vancouver, ISO, etc. styles
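Post-retrieval ranking of the kind RetroRank performs can be illustrated with a toy sketch: already-retrieved abstracts are re-ordered so that those mentioning both compared drugs, and comparison wording, rise to the top. The scoring rule and the example abstracts are invented for illustration; RetroRank's actual strategies are more elaborate.

```python
# Toy post-retrieval re-ranking: prefer abstracts that cover both compared drugs,
# then those with explicit comparison wording (stand-in for RetroRank's strategies).
question_drugs = {"pioglitazone", "rosiglitazone"}

retrieved = [
    "Rosiglitazone monotherapy was evaluated for glycemic control.",
    "Pioglitazone and rosiglitazone were compared with respect to hepatic effects.",
    "A review of thiazolidinedione safety.",
]  # hypothetical MEDLINE abstracts, already retrieved

def score(abstract):
    text = abstract.lower()
    drug_hits = sum(drug in text for drug in question_drugs)
    comparison_cue = int("compared" in text or "versus" in text)
    return (drug_hits, comparison_cue)

ranked = sorted(retrieved, key=score, reverse=True)
for abstract in ranked:
    print(score(abstract), abstract)
```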
4

Xozwa, Thandolwethu. "Automated statistical audit system for a government regulatory authority." Thesis, Nelson Mandela Metropolitan University, 2015. http://hdl.handle.net/10948/6061.

Full text
Abstract:
Governments all over the world are faced with numerous challenges while running their countries on a daily basis. The predominant challenges which arise are those which involve statistical methodologies. Official statistics to South Africa’s infrastructure are very important and because of this it is important that an effort is made to reduce the challenges that occur during the development of official statistics. For official statistics to be developed successfully quality standards need to be built into an organisational framework and form a system of architecture (Statistics New Zealand 2009:1). Therefore, this study seeks to develop a statistical methodology that is appropriate and scientifically correct using an automated statistical system for audits in government regulatory authorities. The study makes use of Mathematica to provide guidelines on how to develop and use an automated statistical audit system. A comprehensive literature study was conducted using existing secondary sources. A quantitative research paradigm was adopted for this study, to empirically assess the demographic characteristics of tenants of Social Housing Estates and their perceptions towards the rental units they inhabit. More specifically a descriptive study was undertaken. Furthermore, a sample size was selected by means of convenience sampling for a case study on SHRA to assess the respondent’s biographical information. From this sample, a pilot study was conducted investigating the general perceptions of the respondents regarding the physical conditions and quality of their units. The technical development of an automated statistical audit system was discussed. This process involved the development and use of a questionnaire design tool, statistical analysis and reporting and how Mathematica software served as a platform for developing the system. The findings of this study provide insights on how government regulatory authorities can best utilise automated statistical audits for regulation purposes and achieved this by developing an automated statistical audit system for government regulatory authorities. It is hoped that the findings of this study will provide government regulatory authorities with practical suggestions or solutions regarding the generating of official statistics for regulatory purposes, and that the suggestions for future research will inspire future researchers to further investigate automated statistical audit systems, statistical analysis, automated questionnaire development, and government regulatory authorities individually.
APA, Harvard, Vancouver, ISO, etc. styles
5

Sommers, Alexander Mitchell. "EXPLORING PSEUDO-TOPIC-MODELING FOR CREATING AUTOMATED DISTANT-ANNOTATION SYSTEMS." OpenSIUC, 2021. https://opensiuc.lib.siu.edu/theses/2862.

Full text
Abstract:
We explore the use of a Latent Dirichlet Allocation (LDA)-imitating pseudo-topic-model, based on our original relevance metric, as a tool to facilitate distant annotation of short (often one to two sentences or less) documents. Our exploration manifests as annotating tweets for emotions, this being the current use-case of interest to us, but we believe the method could be extended to any multi-class labeling task of documents of similar length. Tweets are gathered via the Twitter API using "track" terms thought likely to capture tweets with a greater chance of exhibiting each emotional class, 3,000 tweets for each of 26 topics anticipated to elicit emotional discourse. Our pseudo-topic-model is used to produce relevance-ranked vocabularies for each corpus of tweets and these are used to distribute emotional annotations to those tweets not manually annotated, magnifying the number of annotated tweets by a factor of 29. The vector labels the annotators produce for the topics are cascaded out to the tweets via three different schemes which are compared for performance by proxy through the competition of bidirectional-LSTMs trained using the tweets labeled at a distance. An SVM and two emotionally annotated vocabularies are also tested on each task to provide context and comparison.
APA, Harvard, Vancouver, ISO, etc. styles
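The label-cascading idea above (rank each topic's vocabulary by relevance, then push the topic-level emotion labels out to unannotated tweets that share that vocabulary) might look roughly like the following sketch. The frequency-based relevance score, the toy tweets, and the labels are invented stand-ins for the thesis's pseudo-topic-model and data.

```python
# Toy distant-annotation cascade: label unannotated tweets by overlap with
# relevance-ranked topic vocabularies (stand-in for the thesis's method).
from collections import Counter

# Hypothetical topic-level data: tweets grouped by track term, plus a topic label.
topic_tweets = {
    "exam_week": ["so stressed about finals", "exam tomorrow and I am panicking"],
    "puppies":   ["my new puppy is the best thing ever", "puppy cuddles all day"],
}
topic_labels = {"exam_week": "fear", "puppies": "joy"}

def topic_vocabulary(tweets, top_n=10):
    """Crude relevance ranking: most frequent tokens in the topic's tweets."""
    counts = Counter(tok for t in tweets for tok in t.lower().split())
    return {tok for tok, _ in counts.most_common(top_n)}

vocab = {topic: topic_vocabulary(tweets) for topic, tweets in topic_tweets.items()}

def cascade_label(tweet):
    """Assign the label of the topic whose vocabulary overlaps the tweet most."""
    tokens = set(tweet.lower().split())
    best = max(vocab, key=lambda topic: len(tokens & vocab[topic]))
    return topic_labels[best]

print(cascade_label("panicking about my exam"))  # -> 'fear' under these toy data
```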
6

Wang, Wei. "Automated spatiotemporal and semantic information extraction for hazards." Diss., University of Iowa, 2014. https://ir.uiowa.edu/etd/1415.

Full text
Abstract:
This dissertation explores three research topics related to automated spatiotemporal and semantic information extraction about hazard events from Web news reports and other social media. The dissertation makes a unique contribution of bridging geographic information science, geographic information retrieval, and natural language processing. Geographic information retrieval and natural language processing techniques are applied to extract spatiotemporal and semantic information automatically from Web documents, to retrieve information about patterns of hazard events that are not explicitly described in the texts. Chapters 2, 3 and 4 can be regarded as three standalone journal papers. The research topics covered by the three chapters are related to each other, and are presented in a sequential way. Chapter 2 begins with an investigation of methods for automatically extracting spatial and temporal information about hazards from Web news reports. A set of rules is developed to combine the spatial and temporal information contained in the reports based on how this information is presented in text in order to capture the dynamics of hazard events (e.g., changes in event locations, new events occurring) as they occur over space and time. Chapter 3 presents an approach for retrieving semantic information about hazard events using ontologies and semantic gazetteers. With this work, information on the different kinds of events (e.g., impact, response, or recovery events) can be extracted as well as information about hazard events at different levels of detail. Using the methods presented in Chapter 2 and 3, an approach for automatically extracting spatial, temporal, and semantic information from tweets is discussed in Chapter 4. Four different elements of tweets are used for assigning appropriate spatial and temporal information to hazard events in tweets. Since tweets represent shorter, but more current information about hazards and how they are impacting a local area, key information about hazards can be retrieved through extracted spatiotemporal and semantic information from tweets.
APA, Harvard, Vancouver, ISO, etc. styles
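A heavily simplified sketch of the first step described above, pulling place and time expressions out of a hazard news sentence, is shown below using spaCy's pretrained named-entity recognizer as a stand-in for the dissertation's own extraction rules. The sentence, model name, and entity-label choices are illustrative assumptions.

```python
# Extract place (GPE/LOC) and time (DATE/TIME) mentions from a hazard report sentence.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed
text = "Flooding spread from Cedar Rapids to Iowa City on June 13, 2008."

doc = nlp(text)
spatial = [ent.text for ent in doc.ents if ent.label_ in ("GPE", "LOC")]
temporal = [ent.text for ent in doc.ents if ent.label_ in ("DATE", "TIME")]

print("where:", spatial)   # e.g. ['Cedar Rapids', 'Iowa City']
print("when:", temporal)   # e.g. ['June 13, 2008']
```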
7

Teske, Alexander. "Automated Risk Management Framework with Application to Big Maritime Data." Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/38567.

Full text
Abstract:
Risk management is an essential tool for ensuring the safety and timeliness of maritime operations and transportation. Some of the many risk factors that can compromise the smooth operation of maritime activities include harsh weather and pirate activity. However, identifying and quantifying the extent of these risk factors for a particular vessel is not a trivial process. One challenge is that processing the vast amounts of automatic identification system (AIS) messages generated by the ships requires significant computational resources. Another is that the risk management process partially relies on human expertise, which can be time-consuming and error-prone. In this thesis, an existing Risk Management Framework (RMF) is augmented to address these issues. A parallel/distributed version of the RMF is developed to efficiently process large volumes of AIS data and assess the risk levels of the corresponding vessels in near-real-time. A genetic fuzzy system is added to the RMF's Risk Assessment module in order to automatically learn the fuzzy rule base governing the risk assessment process, thereby reducing the reliance on human domain experts. A new weather risk feature is proposed, and an existing regional hostility feature is extended to automatically learn about pirate activity by ingesting unstructured news articles and incident reports. Finally, a geovisualization tool is developed to display the position and risk levels of ships at sea. Together, these contributions pave the way towards truly automatic risk management, a crucial component of modern maritime solutions. The outcomes of this thesis will contribute to enhance Larus Technologies' Total::Insight, a risk-aware decision support system successfully deployed in maritime scenarios.
APA, Harvard, Vancouver, ISO, etc. styles
8

Salov, Aleksandar. "Towards automated learning from software development issues: Analyzing open source project repositories using natural language processing and machine learning techniques." Thesis, Linnéuniversitetet, Institutionen för medieteknik (ME), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-66834.

Full text
Abstract:
This thesis presents an in-depth investigation on the subject of how natural language processing and machine learning techniques can be utilized in order to perform a comprehensive analysis of programming issues found in different open source project repositories hosted on GitHub. The research is focused on examining issues gathered from a number of JavaScript repositories based on their user generated textual description. The primary goal of the study is to explore how natural language processing and machine learning methods can facilitate the process of identifying and categorizing distinct issue types. Furthermore, the research goes one step further and investigates how these same techniques can support users in searching for potential solutions to these issues. For this purpose, an initial proof-of-concept implementation is developed, which collects over 30 000 JavaScript issues from over 100 GitHub repositories. Then, the system extracts the titles of the issues, cleans and processes the data, before supplying it to an unsupervised clustering model which tries to uncover any discernible similarities and patterns within the examined dataset. What is more, the main system is supplemented by a dedicated web application prototype, which enables users to utilize the underlying machine learning model in order to find solutions to their programming related issues. Furthermore, the developed implementation is meticulously evaluated through a number of measures. First of all, the trained clustering model is assessed by two independent groups of external reviewers - one group of fellow researchers and another group of practitioners in the software industry, so as to determine whether the resulting categories contain distinct types of issues. Moreover, in order to find out if the system can facilitate the search for issue solutions, the web application prototype is tested in a series of user sessions with participants who are not only representative of the main target group which can benefit most from such a system, but who also have a mixture of both practical and theoretical backgrounds. The results of this research demonstrate that the proposed solution can effectively categorize issues according to their type, solely based on the user generated free-text title. This provides strong evidence that natural language processing and machine learning techniques can be utilized for analyzing issues and automating the overall learning process. However, the study was unable to conclusively determine whether these same methods can aid the search for issue solutions. Nevertheless, the thesis provides a detailed account of how this problem was addressed and can therefore serve as the basis for future research.
APA, Harvard, Vancouver, ISO, etc. styles
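The pipeline sketched in this abstract (collect issue titles, clean them, and cluster them with an unsupervised model) can be approximated as follows. The example titles and the choice of TF-IDF features with k-means are assumptions standing in for the thesis's actual implementation.

```python
# Unsupervised clustering of issue titles: TF-IDF features + k-means (scikit-learn).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

titles = [
    "TypeError: cannot read property of undefined",
    "Build fails on Node 14",
    "Add dark mode to settings page",
    "npm install throws EACCES permission error",
    "Feature request: export data as CSV",
    "Uncaught TypeError in event handler",
]  # hypothetical GitHub issue titles

vectorizer = TfidfVectorizer(stop_words="english")
features = vectorizer.fit_transform(titles)

kmeans = KMeans(n_clusters=3, random_state=0, n_init=10).fit(features)
for title, cluster in zip(titles, kmeans.labels_):
    print(cluster, title)
```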
9

Sunil, Kamalakar FNU. "Automatically Generating Tests from Natural Language Descriptions of Software Behavior." Thesis, Virginia Tech, 2013. http://hdl.handle.net/10919/23907.

Full text
Abstract:
Behavior-Driven Development (BDD) is an emerging agile development approach where all stakeholders (including developers and customers) work together to write user stories in structured natural language to capture a software application's functionality in terms of required "behaviors". Developers then manually write "glue" code so that these scenarios can be executed as software tests. This glue code represents individual steps within unit and acceptance test cases, and tools exist that automate the mapping from scenario descriptions to manually written code steps (typically using regular expressions). Instead of requiring programmers to write manual glue code, this thesis investigates a practical approach to convert natural language scenario descriptions into executable software tests fully automatically. To show feasibility, we developed a tool called Kirby that uses natural language processing techniques, code information extraction and probabilistic matching to automatically generate executable software tests from structured English scenario descriptions. Kirby relieves the developer from the laborious work of writing code for the individual steps described in scenarios, so that both developers and customers can focus on the scenarios as pure behavior descriptions (understandable to all, not just programmers). Results from assessing the performance and accuracy of this technique are presented.
Master of Science
APA, Harvard, Vancouver, ISO, etc. styles
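For context on the manually written "glue" code that this thesis aims to generate automatically, here is what a conventional step definition looks like with the behave library; the scenario wording and step bodies are illustrative, and this is not Kirby's output format.

```python
# Conventional BDD glue code (behave): manually written step definitions that map
# natural-language steps to executable test code. Kirby's goal is to produce this
# kind of mapping automatically instead of requiring a programmer to write it.
from behave import given, when, then

@given('the shopping cart is empty')
def step_empty_cart(context):
    context.cart = []

@when('the user adds "{item}" to the cart')
def step_add_item(context, item):
    context.cart.append(item)

@then('the cart contains {count:d} item')
def step_check_count(context, count):
    assert len(context.cart) == count
```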
10

Mao, Jin, Lisa R. Moore, Carrine E. Blank, Elvis Hsin-Hui Wu, Marcia Ackerman, Sonali Ranade, and Hong Cui. "Microbial phenomics information extractor (MicroPIE): a natural language processing tool for the automated acquisition of prokaryotic phenotypic characters from text sources." BIOMED CENTRAL LTD, 2016. http://hdl.handle.net/10150/622562.

Full text
Abstract:
Background: The large-scale analysis of phenomic data (i.e., full phenotypic traits of an organism, such as shape, metabolic substrates, and growth conditions) in microbial bioinformatics has been hampered by the lack of tools to rapidly and accurately extract phenotypic data from existing legacy text in the field of microbiology. To quickly obtain knowledge on the distribution and evolution of microbial traits, an information extraction system needed to be developed to extract phenotypic characters from large numbers of taxonomic descriptions so they can be used as input to existing phylogenetic analysis software packages. Results: We report the development and evaluation of Microbial Phenomics Information Extractor (MicroPIE, version 0.1.0). MicroPIE is a natural language processing application that uses a robust supervised classification algorithm (Support Vector Machine) to identify characters from sentences in prokaryotic taxonomic descriptions, followed by a combination of algorithms applying linguistic rules with groups of known terms to extract characters as well as character states. The input to MicroPIE is a set of taxonomic descriptions (clean text). The output is a taxon-by-character matrix-with taxa in the rows and a set of 42 pre-defined characters (e.g., optimum growth temperature) in the columns. The performance of MicroPIE was evaluated against a gold standard matrix and another student-made matrix. Results show that, compared to the gold standard, MicroPIE extracted 21 characters (50%) with a Relaxed F1 score > 0.80 and 16 characters (38%) with Relaxed F1 scores ranging between 0.50 and 0.80. Inclusion of a character prediction component (SVM) improved the overall performance of MicroPIE, notably the precision. Evaluated against the same gold standard, MicroPIE performed significantly better than the undergraduate students. Conclusion: MicroPIE is a promising new tool for the rapid and efficient extraction of phenotypic character information from prokaryotic taxonomic descriptions. However, further development, including incorporation of ontologies, will be necessary to improve the performance of the extraction for some character types.
APA, Harvard, Vancouver, ISO, etc. styles
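The supervised character-prediction step described above (an SVM assigning sentences from taxonomic descriptions to phenotypic characters) can be sketched roughly as below. The training sentences, labels, and model settings are invented for illustration and are not MicroPIE's actual code or data.

```python
# Hedged sketch of an SVM sentence classifier in the spirit of MicroPIE's
# character prediction step (scikit-learn).
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Hypothetical sentences from taxonomic descriptions, paired with character labels.
sentences = [
    "Optimal growth occurs at 37 degrees Celsius.",
    "Cells are rod-shaped and motile.",
    "Growth is observed at pH 6.0 to 8.0.",
    "Colonies are yellow and circular.",
]
labels = ["optimum growth temperature", "cell shape", "pH range", "colony morphology"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(sentences, labels)

print(clf.predict(["The optimum temperature for growth is 30 C."]))
```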

Books on the topic "LL. Automated language processing"

1

Leacock, Claudia. Automated grammatical error detection for language learners. San Rafael, Calif. (1537 Fourth Street, San Rafael, CA 94901 USA): Morgan & Claypool, 2010.

Find full text
APA, Harvard, Vancouver, ISO, etc. styles
2

International Workshop on Natural Language Generation (6th 1992 Trento, Italy). Aspects of automated natural language generation: 6th international workshop, Trento, Italy, April 5-7, 1992: proceedings. Berlin: Springer-Verlag, 1992.

Find full text
APA, Harvard, Vancouver, ISO, etc. styles
3

International Workshop on Natural Language Generation (6th 1992 Trento, Italy). Aspects of automated natural language generation: 6th International Workshop on Natural Language Generation, Trento, Italy, April 5-7, 1992: proceedings. Berlin: Springer-Verlag, 1992.

Find full text
APA, Harvard, Vancouver, ISO, etc. styles
4

Wu, Chou, and Juang B. H., eds. Pattern recognition in speech and language processing. Boca Raton: CRC Press, 2003.

Find full text
APA, Harvard, Vancouver, ISO, etc. styles
5

Vernon, Elizabeth. Decision-making for automation: Hebrew and Arabic script materials in the automated library. [Champaign]: Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign, 1996.

Find full text
APA, Harvard, Vancouver, ISO, etc. styles
6

Vernon, Elizabeth. Decision-making for automation: Hebrew and Arabic script materials in the automated library. [Champaign]: Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign, 1996.

Find full text
APA, Harvard, Vancouver, ISO, etc. styles
7

W, McManus J., Bynum W. L., and Langley Research Center, eds. Automated concurrent blackboard system generation in C++. Hampton, Va.: National Aeronautics and Space Administration, Langley Research Center, 1999.

Find full text
APA, Harvard, Vancouver, ISO, etc. styles
8

Agenbroad, James Edward. Nonromanization: Prospects for improving automated cataloging of items in other writing systems. Washington: Cataloging Forum, Library of Congress, 1992.

Find full text
APA, Harvard, Vancouver, ISO, etc. styles
9

IFLA Satellite Meeting (2nd 1993 Madrid, Spain). Automated systems for access to multilingual and multiscript library materials: Proceedings of the Second IFLA Satellite Meeting, Madrid, August 18-19, 1993. München: K.G. Saur, 1994.

Find full text
APA, Harvard, Vancouver, ISO, etc. styles
10

Wooldridge, Michael, Sarit Kraus, and Shaheen Fatima. Principles of Automated Negotiation. Cambridge University Press, 2015.

Find full text
APA, Harvard, Vancouver, ISO, etc. styles

Book chapters on the topic "LL. Automated language processing"

1

Sneiders, Eriks. "Automated Email Answering by Text Pattern Matching." In Advances in Natural Language Processing, 381–92. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-14770-8_41.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
2

Gómez, José María, José Carlos Cortizo, Enrique Puertas, and Miguel Ruiz. "Concept Indexing for Automated Text Categorization." In Natural Language Processing and Information Systems, 195–206. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-27779-8_17.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
3

Bano, Muneera, Alessio Ferrari, Didar Zowghi, Vincenzo Gervasi, and Stefania Gnesi. "Automated Service Selection Using Natural Language Processing." In Requirements Engineering in the Big Data Era, 3–17. Berlin, Heidelberg: Springer Berlin Heidelberg, 2015. http://dx.doi.org/10.1007/978-3-662-48634-4_1.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
4

Merunka, Vojtěch, Oldřich Nouza, and Jiří Brožek. "Automated Model Transformations Using the C.C Language." In Lecture Notes in Business Information Processing, 137–51. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-68644-6_10.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
5

Peleato, Ramón Aragüés, Jean-Cédric Chappelier, and Martin Rajman. "Automated Information Extraction out of Classified Advertisements." In Natural Language Processing and Information Systems, 203–14. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-45399-7_17.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
6

Guo, Xiaoyu, Meng Chen, Yang Song, Xiaodong He, and Bowen Zhou. "Automated Thematic and Emotional Modern Chinese Poetry Composition." In Natural Language Processing and Chinese Computing, 433–46. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-32233-5_34.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
7

Gonçalves, Teresa, and Paulo Quaresma. "Using IR Techniques to Improve Automated Text Classification." In Natural Language Processing and Information Systems, 374–79. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-27779-8_34.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
8

Banik, Debajyoty, Asif Ekbal, and Pushpak Bhattacharyya. "Two-Phased Dynamic Language Model: Improved LM for Automated Language Translation." In Computational Linguistics and Intelligent Text Processing, 265–79. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-24337-0_19.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
9

Knapp, Melanie, and Jens Woch. "Towards a Natural Language Driven Automated Help Desk." In Computational Linguistics and Intelligent Text Processing, 96–105. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-45715-1_8.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
10

Gangavarapu, Tushaar, Aditya Jayasimha, Gokul S. Krishnan, and Sowmya Kamath S. "TAGS: Towards Automated Classification of Unstructured Clinical Nursing Notes." In Natural Language Processing and Information Systems, 195–207. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-23281-8_16.

Full text
APA, Harvard, Vancouver, ISO, etc. styles

Conference papers on the topic "LL. Automated language processing"

1

Sturim, Douglas, William Campbell, Najim Dehak, Zahi Karam, Alan McCree, Doug Reynolds, Fred Richardson, Pedro Torres-Carrasquillo, and Stephen Shum. "The MIT LL 2010 speaker recognition evaluation system: Scalable language-independent speaker recognition." In ICASSP 2011 - 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2011. http://dx.doi.org/10.1109/icassp.2011.5947547.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
2

Rokade, Amit, Bhushan Patil, Sana Rajani, Surabhi Revandkar, and Rajashree Shedge. "Automated Grading System Using Natural Language Processing." In 2018 Second International Conference on Inventive Communication and Computational Technologies (ICICCT). IEEE, 2018. http://dx.doi.org/10.1109/icicct.2018.8473170.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
3

Gorin, Allen L., H. Hanek, R. C. Rose, and L. Miller. "Spoken language acquisition for automated call routing." In 3rd International Conference on Spoken Language Processing (ICSLP 1994). ISCA: ISCA, 1994. http://dx.doi.org/10.21437/icslp.1994-385.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
4

Harsha, Tumula Mani, Gangaraju Sai Moukthika, Dudipalli Siva Sai, Mannuru Naga Rajeswari Pravallika, Satish Anamalamudi, and MuraliKrishna Enduri. "Automated Resume Screener using Natural Language Processing (NLP)." In 2022 6th International Conference on Trends in Electronics and Informatics (ICOEI). IEEE, 2022. http://dx.doi.org/10.1109/icoei53556.2022.9777194.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
5

Fell, Michael, Elena Cabrio, Michele Corazza, and Fabien Gandon. "Comparing Automated Methods to Detect Explicit Content in Song Lyrics." In Recent Advances in Natural Language Processing. Incoma Ltd., Shoumen, Bulgaria, 2019. http://dx.doi.org/10.26615/978-954-452-056-4_039.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
6

Yang, Y. P., and J. R. Deller Jr. "A tool for automated design of language models." In 4th International Conference on Spoken Language Processing (ICSLP 1996). ISCA: ISCA, 1996. http://dx.doi.org/10.21437/icslp.1996-137.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
7

Fernando, Nisaja, Abimani Kumarage, Vithyashagar Thiyaganathan, Radesh Hillary, and Lakmini Abeywardhana. "Automated vehicle insurance claims processing using computer vision, natural language processing." In 2022 22nd International Conference on Advances in ICT for Emerging Regions (ICTer). IEEE, 2022. http://dx.doi.org/10.1109/icter58063.2022.10024089.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
8

Jack, Mervyn A., J. C. Foster, and F. W. Stentiford. "Intelligent dialogues in automated telephone services." In 2nd International Conference on Spoken Language Processing (ICSLP 1992). ISCA: ISCA, 1992. http://dx.doi.org/10.21437/icslp.1992-241.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
9

Terry, Mark, Randall Sparks, and Patrick Obenchain. "Automated query identification in English dialogue." In 3rd International Conference on Spoken Language Processing (ICSLP 1994). ISCA: ISCA, 1994. http://dx.doi.org/10.21437/icslp.1994-237.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
10

Kaur, Amritpal, and Amrit Singh. "Conversational natural language processing for automated customer support services." In INSTRUMENTATION ENGINEERING, ELECTRONICS AND TELECOMMUNICATIONS – 2021 (IEET-2021): Proceedings of the VII International Forum. AIP Publishing, 2023. http://dx.doi.org/10.1063/5.0100875.

Full text
APA, Harvard, Vancouver, ISO, etc. styles

Reports of organizations on the topic "LL. Automated language processing"

1

Zelenskyi, Arkadii A. Relevance of research of programs for semantic analysis of texts and review of methods of their realization. [n.p.], December 2018. http://dx.doi.org/10.31812/123456789/2884.

Full text
Abstract:
One of the main tasks of applied linguistics is high-quality automated processing of natural language. The most promising methods for extracting and representing the semantics of natural-language text are systems based on an efficient combination of linguistic analysis technologies and statistical analysis methods. Among existing methods for analyzing text data, a common and valid approach relies on a vector-space model. Another effective and relevant means of extracting semantics from text and representing it is latent semantic analysis (LSA). The LSA method has been tested and has confirmed its effectiveness in such areas of natural language processing as modeling a person's conceptual knowledge and information retrieval, where LSA shows much better results than conventional vector methods.
APA, Harvard, Vancouver, ISO, etc. styles
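The vector-space and latent semantic analysis (LSA) methods named in this abstract can be illustrated with a minimal sketch: LSA is essentially a low-rank decomposition of a term-document matrix, after which documents are compared in the reduced "latent" space. The toy documents and parameter choices below are illustrative assumptions, not part of the report.

```python
# Minimal LSA sketch: TF-IDF term-document matrix + truncated SVD (scikit-learn).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "automated processing of natural language text",
    "semantic analysis of student answers",
    "vector space models for information search",
]  # toy corpus, purely illustrative

tfidf = TfidfVectorizer().fit_transform(docs)       # term-document matrix (docs x terms)
lsa = TruncatedSVD(n_components=2, random_state=0)  # low-rank latent semantic space
doc_vectors = lsa.fit_transform(tfidf)

# Compare documents in the reduced space rather than on raw term overlap.
print(cosine_similarity(doc_vectors))
```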
2

Salter, R., Quyen Dong, Cody Coleman, Maria Seale, Alicia Ruvinsky, LaKenya Walker, and W. Bond. Data Lake Ecosystem Workflow. Engineer Research and Development Center (U.S.), April 2021. http://dx.doi.org/10.21079/11681/40203.

Full text
Abstract:
The Engineer Research and Development Center, Information Technology Laboratory’s (ERDC-ITL’s) Big Data Analytics team specializes in the analysis of large-scale datasets with capabilities across four research areas that require vast amounts of data to inform and drive analysis: large-scale data governance, deep learning and machine learning, natural language processing, and automated data labeling. Unfortunately, data transfer between government organizations is a complex and time-consuming process requiring coordination of multiple parties across multiple offices and organizations. Past successes in large-scale data analytics have placed a significant demand on ERDC-ITL researchers, highlighting that few individuals fully understand how to successfully transfer data between government organizations; future project success therefore depends on a small group of individuals to efficiently execute a complicated process. The Big Data Analytics team set out to develop a standardized workflow for the transfer of large-scale datasets to ERDC-ITL, in part to educate peers and future collaborators on the process required to transfer datasets between government organizations. Researchers also aim to increase workflow efficiency while protecting data integrity. This report provides an overview of the created Data Lake Ecosystem Workflow by focusing on the six phases required to efficiently transfer large datasets to supercomputing resources located at ERDC-ITL.
APA, Harvard, Vancouver, ISO, etc. styles