Journal articles on the topic 'LL. Automated language processing'

Consult the top 50 journal articles for your research on the topic 'LL. Automated language processing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Lalleman, Josine A., Ariane J. van Santen, and Vincent J. van Heuven. "L2 Processing of Dutch regular and irregular Verbs." ITL - International Journal of Applied Linguistics 115-116 (January 1, 1997): 1–26. http://dx.doi.org/10.1075/itl.115-116.01lal.

Abstract:
Do L1 and (advanced) L2 speakers of Dutch employ distinct processes (rule application for regulars and lexical lookup for irregulars) when producing Dutch past tense forms? Do L2 speakers of a language that observes the same dual conjugation system as Dutch (e.g. English, German) produce Dutch past tenses by a different process (i.e. more like that of L1 speakers) than learners of Dutch with a different L1 verb system (e.g. Japanese and Chinese)? We studied the on-line past tense production performance of L1 speakers and of advanced L2 speakers of Dutch, varying the relative past tense frequency of regular and irregular Dutch verbs. Performance proved slower and less accurate for both L1 and L2 speakers on irregular verbs with relatively low past tense frequency. No frequency effects were found for regular verbs. The results were qualitatively the same for English/German and for Japanese/Chinese L2 speakers, with a striking tendency to overgeneralize the regular past tense formation. We conclude that the mental representation of the Dutch past tense rule is essentially the same for L1 and L2 language users.
2

Rodríguez-Fornells, Antoni, Toni Cunillera, Anna Mestres-Missé, and Ruth de Diego-Balaguer. "Neurophysiological mechanisms involved in language learning in adults." Philosophical Transactions of the Royal Society B: Biological Sciences 364, no. 1536 (December 27, 2009): 3711–35. http://dx.doi.org/10.1098/rstb.2009.0130.

Abstract:
Little is known about the brain mechanisms involved in word learning during infancy and in second language acquisition and about the way these new words become stable representations that sustain language processing. In several studies we have adopted the human simulation perspective, studying the effects of brain-lesions and combining different neuroimaging techniques such as event-related potentials and functional magnetic resonance imaging in order to examine the language learning (LL) process. In the present article, we review this evidence focusing on how different brain signatures relate to (i) the extraction of words from speech, (ii) the discovery of their embedded grammatical structure, and (iii) how meaning derived from verbal contexts can inform us about the cognitive mechanisms underlying the learning process. We compile these findings and frame them into an integrative neurophysiological model that tries to delineate the major neural networks that might be involved in the initial stages of LL. Finally, we propose that LL simulations can help us to understand natural language processing and how the recovery from language disorders in infants and adults can be accomplished.
3

Drolia, Shristi, Shrey Rupani, Pooja Agarwal, and Abheejeet Singh. "Automated Essay Rater using Natural Language Processing." International Journal of Computer Applications 163, no. 10 (April 17, 2017): 44–46. http://dx.doi.org/10.5120/ijca2017913766.

4

Satomura, Y., and M. B. Do Amaral. "Automated diagnostic indexing by natural language processing." Medical Informatics 17, no. 3 (January 1992): 149–63. http://dx.doi.org/10.3109/14639239209096531.

5

Rungrojsuwan, Sorabud. "Morphological Processing Difficulty of Thai Learners of English with Different Levels of English Proficiency." MANUSYA 18, no. 1 (2015): 73–92. http://dx.doi.org/10.1163/26659077-01801004.

Abstract:
English morphology is said to be one of the most difficult areas of linguistic study for Thai students to acquire. The present study aims at examining Thai learners of English with different levels of English language proficiency in terms of their 1) morphological knowledge and 2) morphological processing behaviors. Two experiments were designed to test 200 participants from Mae Fah Luang University. The results showed that students with low language proficiency (LL group) have less morphological knowledge than those with intermediate language proficiency (IL group). However, those in the IL group still show some evidence of morphological difficulty, though they have better skills in English. For morphological processing behavior, it was found that, with less knowledge, participants in the LL group employ a one-by-one word-matching technique rather than chunking a package of information as do those in the IL group. Accordingly, unlike those in the IL group, students in the LL group could not generate well-organized outputs.
6

Choudhary, Jaytrilok, and Deepak Singh Tomar. "Semi-Automated Ontology building through Natural Language Processing." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 13, no. 8 (August 23, 2014): 4738–46. http://dx.doi.org/10.24297/ijct.v13i8.7072.

Abstract:
Ontology is the backbone of the semantic web and is used for domain knowledge representation. Ontology provides the platform for effective extraction of information. Usually, an ontology is developed manually, but manual ontology construction requires a great deal of effort from domain experts and is also time consuming and costly. Thus, an approach to building ontology in a semi-automated manner has been proposed. The proposed approach extracts concepts automatically from the open directory Dmoz. The Stanford Parser is used to parse natural language syntax and extract parts of speech, which are used to form the relationships among the concepts. The experimental results show a fair degree of accuracy, which may be improved in the future with more sophisticated approaches.
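As a rough illustration of the parse-and-extract step described above, the sketch below uses spaCy as a stand-in for the Stanford Parser employed in the paper; the sample sentences and the subject-verb-object rule are illustrative assumptions, not the authors' pipeline.

```python
# Illustrative sketch only (spaCy substituted for the paper's Stanford Parser):
# pull subject-verb-object triples as candidate concept relationships.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def candidate_relations(text):
    """Yield (subject, verb, object) lemma triples as rough relation candidates."""
    doc = nlp(text)
    for token in doc:
        if token.pos_ == "VERB":
            subjects = [c for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
            objects = [c for c in token.children if c.dep_ in ("dobj", "obj", "attr")]
            for s in subjects:
                for o in objects:
                    yield (s.lemma_.lower(), token.lemma_.lower(), o.lemma_.lower())

print(list(candidate_relations("An ontology represents domain knowledge. The parser extracts concepts.")))
# e.g. [('ontology', 'represent', 'knowledge'), ('parser', 'extract', 'concept')]
```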
7

Sugden, Don. "Machine Aids to Translation: Automated Language Processing System (ALPS)." Meta: Journal des traducteurs 30, no. 4 (1985): 403. http://dx.doi.org/10.7202/004310ar.

8

Karhade, Aditya V., Michiel E. R. Bongers, Olivier Q. Groot, Erick R. Kazarian, Thomas D. Cha, Harold A. Fogel, Stuart H. Hershman, et al. "Natural language processing for automated detection of incidental durotomy." Spine Journal 20, no. 5 (May 2020): 695–700. http://dx.doi.org/10.1016/j.spinee.2019.12.006.

9

Mukherjee, Prasenjit, and Baisakhi Chakraborty. "Automated Knowledge Provider System with Natural Language Query Processing." IETE Technical Review 33, no. 5 (December 17, 2015): 525–38. http://dx.doi.org/10.1080/02564602.2015.1119662.

10

Garvin, Jennifer H., Youngjun Kim, Glenn T. Gobbel, Michael E. Matheny, Andrew Redd, Bruce E. Bray, Paul Heidenreich, et al. "Automated Heart Failure Quality Measurement with Natural Language Processing." Journal of Cardiac Failure 22, no. 8 (August 2016): S92. http://dx.doi.org/10.1016/j.cardfail.2016.06.292.

11

T.C., Sandanayake. "Automated Classroom Lecture Note Generation Using Natural Language Processing and Image Processing Techniques." International Journal of Advanced Trends in Computer Science and Engineering 8, no. 5 (October 15, 2019): 1920–26. http://dx.doi.org/10.30534/ijatcse/2019/16852019.

12

Gurbuz, Ozge, Fethi Rabhi, and Onur Demirors. "Process ontology development using natural language processing: a multiple case study." Business Process Management Journal 25, no. 6 (September 17, 2019): 1208–27. http://dx.doi.org/10.1108/bpmj-05-2018-0144.

Abstract:
Purpose: Integrating ontologies with process modeling has gained increasing attention in recent years since it enhances data representations and makes it easier to query, store and reuse knowledge at the semantic level. The authors focused on a process and ontology integration approach by extracting the activities, roles and other concepts related to the process models from organizational sources using natural language processing techniques. As part of this study, a process ontology population (PrOnPo) methodology and tool is developed, which uses natural language parsers for extracting and interpreting the sentences and populating an event-driven process chain ontology in a fully automated or semi-automated (user assisted) manner. The purpose of this paper is to present applications of the PrOnPo tool in different domains.
Design/methodology/approach: A multiple case study is conducted by selecting five different domains with different types of guidelines. Process ontologies are developed using the PrOnPo tool in a semi-automated and fully automated fashion and manually. The resulting ontologies are compared and evaluated in terms of time-effort and recall-precision metrics.
Findings: From five different domains, the results give an average of 70 percent recall and 80 percent precision for fully automated usage of the PrOnPo tool, showing that it is applicable and generalizable. In terms of efficiency, the effort spent for process ontology development is decreased from 250 person-minutes to 57 person-minutes (semi-automated).
Originality/value: The PrOnPo tool is the first one to automatically generate integrated process ontologies and process models from guidelines written in natural language.
13

A. Y. Alqaralleh, Bassam, Fahad Aldhaban, Feras Mohammed A-Matarneh, and Esam A. AlQaralleh. "Automated Handwriting Recognition and Speech Synthesizer for Indigenous Language Processing." Computers, Materials & Continua 72, no. 2 (2022): 3913–27. http://dx.doi.org/10.32604/cmc.2022.026531.

14

Moon, Seonghyeon, Gitaek Lee, and Seokho Chi. "Automated system for construction specification review using natural language processing." Advanced Engineering Informatics 51 (January 2022): 101495. http://dx.doi.org/10.1016/j.aei.2021.101495.

15

Sharma, Gourav. "Automated Brain Tumor Prediction System using Natural Language Processing (NLP)." International Journal for Research in Applied Science and Engineering Technology 9, no. VII (July 31, 2021): 3784–87. http://dx.doi.org/10.22214/ijraset.2021.37196.

Abstract:
In this paper, we propose an Automated Brain Tumor Prediction System which predicts brain tumor from symptoms shared across several diseases using Natural Language Processing (NLP). Term Frequency-Inverse Document Frequency (TF-IDF) is used to calculate term weights over the symptoms of the different diseases. Cosine similarity and Euclidean distance are used to calculate the angular and linear distance, respectively, between diseases and symptoms, yielding a ranking of brain tumor among the ranked diseases. A novel mathematical strategy is used for predicting the chance of brain tumor from symptoms occurring in several diseases. According to the proposed strategy, the chance of brain tumor is proportional to the similarity value obtained for brain tumor when the symptoms are queried, and inversely proportional to the rank of brain tumor among the diseases and to the maximum similarity value of brain tumor, where all symptoms of brain tumor are present.
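The TF-IDF and cosine-similarity ranking described in the abstract can be sketched as follows; the symptom "documents" and the final chance calculation are invented placeholders rather than the paper's data or exact formula.

```python
# Minimal sketch of symptom-based disease ranking with TF-IDF and cosine
# similarity, in the spirit of the abstract. Disease descriptions are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

disease_symptoms = {
    "brain tumor": "headache nausea vomiting blurred vision seizures memory loss",
    "migraine": "headache nausea sensitivity to light blurred vision",
    "epilepsy": "seizures confusion staring spells loss of awareness",
}
names = list(disease_symptoms)

vectorizer = TfidfVectorizer()
disease_matrix = vectorizer.fit_transform(disease_symptoms.values())

def rank_diseases(symptom_query):
    scores = cosine_similarity(vectorizer.transform([symptom_query]), disease_matrix)[0]
    return sorted(zip(names, scores), key=lambda kv: kv[1], reverse=True)

ranked = rank_diseases("headache with seizures and blurred vision")
# Rough stand-in for the abstract's rule: chance rises with the tumor's
# similarity score and falls with its rank among the candidate diseases.
similarity = dict(ranked)["brain tumor"]
rank_position = [name for name, _ in ranked].index("brain tumor") + 1
print(ranked, similarity / rank_position)
```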
16

Friedman, Carol, Lyudmila Shagina, Yves Lussier, and George Hripcsak. "Automated Encoding of Clinical Documents Based on Natural Language Processing." Journal of the American Medical Informatics Association 11, no. 5 (September 2004): 392–402. http://dx.doi.org/10.1197/jamia.m1552.

17

Leaman, Robert, Ritu Khare, and Zhiyong Lu. "Challenges in clinical natural language processing for automated disorder normalization." Journal of Biomedical Informatics 57 (October 2015): 28–37. http://dx.doi.org/10.1016/j.jbi.2015.07.010.

18

Shetty, Pranav, and Rampi Ramprasad. "Automated knowledge extraction from polymer literature using natural language processing." iScience 24, no. 1 (January 2021): 101922. http://dx.doi.org/10.1016/j.isci.2020.101922.

19

Mo, Yunjeong, Dong Zhao, Jing Du, Matt Syal, Azizan Aziz, and Heng Li. "Automated staff assignment for building maintenance using natural language processing." Automation in Construction 113 (May 2020): 103150. http://dx.doi.org/10.1016/j.autcon.2020.103150.

20

Li, Hao, Yu-Ping Wang, Jie Yin, and Gang Tan. "SmartShell: Automated Shell Scripts Synthesis from Natural Language." International Journal of Software Engineering and Knowledge Engineering 29, no. 02 (February 2019): 197–220. http://dx.doi.org/10.1142/s0218194019500098.

Abstract:
Modern shell scripts provide interfaces with rich functionality for system administration. However, it is not easy for end-users to write correct shell scripts; misusing commands may cause unpredictable results. In this paper, we present SmartShell, an automated function-based tool for shell script synthesis, which uses natural language descriptions as input. It can help the computer system to “understand” users’ intentions. SmartShell is based on two insights: (1) natural language descriptions for system objects (such as files and processes) and operations can be recognized by natural language processing tools; (2) system-administration tasks are often completed by short shell scripts that can be automatically synthesized from natural language descriptions. SmartShell synthesizes shell scripts in three steps: (1) using natural language processing tools to convert the description of a system-administration task into a syntax tree; (2) using program-synthesis techniques to construct a SmartShell intermediate-language script from the syntax tree; (3) translating the intermediate-language script into a shell script. Experimental results show that SmartShell can successfully synthesize 53.7% of tasks collected from shell-script helping forums.
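A toy sketch of the general idea (not the SmartShell system) is shown below: a dependency parse picks out a verb-object pair, which is looked up in a hand-written template table; the templates, the spaCy model and the example request are all assumptions for illustration.

```python
# Toy illustration of natural-language-to-shell mapping (not SmartShell):
# parse a request with spaCy and look up an invented (verb, object) template.
import spacy

nlp = spacy.load("en_core_web_sm")

TEMPLATES = {  # invented examples, not from the paper
    ("delete", "file"): "rm {arg}",
    ("count", "line"): "wc -l {arg}",
    ("list", "file"): "ls {arg}",
}

def synthesize(request):
    doc = nlp(request)
    verb = next((t.lemma_ for t in doc if t.pos_ == "VERB"), None)
    obj = next((t.lemma_ for t in doc if t.dep_ in ("dobj", "obj")), None)
    arg = next((t.text for t in doc if "." in t.text), ".")  # crude filename pick
    template = TEMPLATES.get((verb, obj))
    return template.format(arg=arg) if template else "# no matching template"

print(synthesize("count the lines in report.txt"))  # ideally: wc -l report.txt
```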
21

Ananth Rao, Ananya, and Prof Venkatesh S. "Identification of Aphasia using Natural Language Processing." Journal of University of Shanghai for Science and Technology 23, no. 06 (June 28, 2021): 1737–47. http://dx.doi.org/10.51201/jusst/21/06488.

Abstract:
Aphasia is a neurological disorder of language that impairs a person’s ability to speak, understand, read or write in any language. Because the disorder is inextricably connected to language, there is vast potential for the application of Natural Language Processing (NLP) to its diagnosis. This paper surveys automated machine-learning-based classification methodologies and then discusses a potential way in which an NLP-backed methodology could be implemented, along with its accompanying challenges. It is seen that the need for standardized technology-based diagnostic solutions necessitates the exploration of such a methodology.
22

Guo, Zhijiang, Michael Schlichtkrull, and Andreas Vlachos. "A Survey on Automated Fact-Checking." Transactions of the Association for Computational Linguistics 10 (2022): 178–206. http://dx.doi.org/10.1162/tacl_a_00454.

Abstract:
Fact-checking has become increasingly important due to the speed with which both information and misinformation can spread in the modern media ecosystem. Therefore, researchers have been exploring how fact-checking can be automated, using techniques based on natural language processing, machine learning, knowledge representation, and databases to automatically predict the veracity of claims. In this paper, we survey automated fact-checking stemming from natural language processing, and discuss its connections to related tasks and disciplines. In this process, we present an overview of existing datasets and models, aiming to unify the various definitions given and identify common concepts. Finally, we highlight challenges for future research.
23

Lagutina, Nadezhda S., Ksenia V. Lagutina, Aleksey S. Adrianov, and Ilya V. Paramonov. "Russian-Language Thesauri: Automated Construction and Application For Natural Language Processing Tasks." Modeling and Analysis of Information Systems 25, no. 4 (August 27, 2018): 435–58. http://dx.doi.org/10.18255/1818-1015-2018-4-435-458.

Abstract:
The paper reviews the existing Russian-language thesauri in digital form and methods for their automatic construction and application. The authors analyzed the main characteristics of open-access thesauri for scientific research, evaluated trends in their development, and assessed their effectiveness in solving natural language processing tasks. The statistical and linguistic methods of thesaurus construction that make it possible to automate development and reduce the labor costs of expert linguists were studied. In particular, the authors considered algorithms for extracting keywords and semantic thesaurus relationships of all types, as well as the quality of thesauri generated with the use of these tools. To illustrate features of various methods for constructing thesaurus relationships, the authors developed a combined method that generates a specialized thesaurus fully automatically, taking into account a text corpus in a particular domain and several existing linguistic resources. With the proposed method, experiments were conducted on two Russian-language text corpora from two subject areas: articles about migrants and tweets. The resulting thesauri were assessed using an integrated assessment developed in the authors’ previous study that allows various aspects of the thesaurus and the quality of the generation methods to be analyzed. The analysis revealed the main advantages and disadvantages of various approaches to the construction of thesauri and the extraction of semantic relationships of different types, and made it possible to determine directions for future study.
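The keyword-extraction step mentioned in the abstract can be illustrated with a minimal TF-IDF sketch; the three-document English corpus below stands in for the Russian-language corpora and is not the authors' combined method.

```python
# Minimal TF-IDF keyword extraction sketch (toy English stand-in for the
# Russian-language corpora described in the abstract).
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "migrants cross the border seeking work and housing",
    "officials discuss migrant labour policy and border controls",
    "tweets mention housing costs and new policy debates",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(corpus)
terms = vectorizer.get_feature_names_out()

for i, row in enumerate(tfidf.toarray()):
    top = sorted(zip(terms, row), key=lambda kv: kv[1], reverse=True)[:3]
    print(f"doc {i}:", [term for term, score in top if score > 0])
```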
24

A. Nwafor, Chidinma, and Ikechukwu E. Onyenwe. "An Automated Multiple-Choice Question Generation using Natural Language Processing Techniques." International Journal on Natural Language Computing 10, no. 02 (April 30, 2021): 1–10. http://dx.doi.org/10.5121/ijnlc.2021.10201.

Abstract:
Automatic multiple-choice question generation (MCQG) is a useful yet challenging task in Natural Language Processing (NLP). It is the task of automatically generating correct and relevant questions from textual data. Despite its usefulness, manually creating sizeable, meaningful and relevant questions is a time-consuming and challenging task for teachers. In this paper, we present an NLP-based system for automatic MCQG for Computer-Based Testing Examination (CBTE). We used NLP techniques to extract keywords that are important words in a given lesson material. To validate that the system is not perverse, five lesson materials were used to check the effectiveness and efficiency of the system. The keywords manually extracted by the teacher were compared to the auto-generated keywords, and the results show that the system was capable of extracting keywords from lesson materials for setting examinable questions. This outcome is presented in a user-friendly interface for easy accessibility.
25

Hassan, Fahad ul, and Tuyen Le. "Automated Requirements Identification from Construction Contract Documents Using Natural Language Processing." Journal of Legal Affairs and Dispute Resolution in Engineering and Construction 12, no. 2 (May 2020): 04520009. http://dx.doi.org/10.1061/(asce)la.1943-4170.0000379.

26

Arora, Chetan, Mehrdad Sabetzadeh, Lionel Briand, and Frank Zimmer. "Automated Checking of Conformance to Requirements Templates Using Natural Language Processing." IEEE Transactions on Software Engineering 41, no. 10 (October 1, 2015): 944–68. http://dx.doi.org/10.1109/tse.2015.2428709.

27

Besada, Juan A., Guillermo Frontera, Jesus Crespo, Enrique Casado, and Javier Lopez-Leones. "Automated Aircraft Trajectory Prediction Based on Formal Intent-Related Language Processing." IEEE Transactions on Intelligent Transportation Systems 14, no. 3 (September 2013): 1067–82. http://dx.doi.org/10.1109/tits.2013.2252343.

28

SOLORIO, T., M. SHERMAN, Y. LIU, L. M. BEDORE, E. D. PEÑA, and A. IGLESIAS. "Analyzing language samples of Spanish–English bilingual children for the automated prediction of language dominance." Natural Language Engineering 17, no. 3 (October 22, 2010): 367–95. http://dx.doi.org/10.1017/s1351324910000252.

Abstract:
In this work we study how features typically used in natural language processing tasks, together with measures from syntactic complexity, can be adapted to the problem of developing language profiles of bilingual children. Our experiments show that these features can provide high discriminative value for predicting language dominance from story retells in a Spanish–English bilingual population of children. Moreover, some of our proposed features are even more powerful than measures commonly used by clinical researchers and practitioners for analyzing spontaneous language samples of children. This study shows that the field of natural language processing has the potential to make significant contributions to communication disorders and related areas.
29

Somers, Rick, Samuel Cunningham-Nelson, and Wageeh Boles. "Applying natural language processing to automatically assess student conceptual understanding from textual responses." Australasian Journal of Educational Technology 37, no. 5 (December 6, 2021): 98–115. http://dx.doi.org/10.14742/ajet.7121.

Abstract:
In this study, we applied natural language processing (NLP) techniques, within an educational environment, to evaluate their usefulness for automated assessment of students’ conceptual understanding from their short answer responses. Assessing understanding provides insight into and feedback on students’ conceptual understanding, which is often overlooked in automated grading. Students and educators benefit from automated formative assessment, especially in online education and large cohorts, by providing insights into conceptual understanding as and when required. We selected the ELECTRA-small, RoBERTa-base, XLNet-base and ALBERT-base-v2 NLP machine learning models to determine the free-text validity of students’ justification and the level of confidence in their responses. These two pieces of information provide key insights into students’ conceptual understanding and the nature of their understanding. We developed a free-text validity ensemble using high performance NLP models to assess the validity of students’ justification with accuracies ranging from 91.46% to 98.66%. In addition, we proposed a general, non-question-specific confidence-in-response model that can categorise a response as high or low confidence with accuracies ranging from 93.07% to 99.46%. With the strong performance of these models being applicable to small data sets, there is a great opportunity for educators to implement these techniques within their own classes. Implications for practice or policy:
- Students’ conceptual understanding can be accurately and automatically extracted from their short answer responses using NLP to assess the level and nature of their understanding.
- Educators and students can receive feedback on conceptual understanding as and when required through the automated assessment of conceptual understanding, without the overhead of traditional formative assessment.
- Educators can implement accurate automated assessment of conceptual understanding models with fewer than 100 student responses for their short response questions.
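A hedged sketch of how such a transformer ensemble is commonly wired up with the Hugging Face transformers library is given below; the checkpoint paths are hypothetical placeholders for the study's fine-tuned ELECTRA, RoBERTa and ALBERT models, and the majority vote is an assumed ensembling rule rather than the authors' exact method.

```python
# Hedged sketch of a small ensemble of fine-tuned text classifiers using the
# Hugging Face `transformers` pipeline API. Checkpoint paths are hypothetical.
from collections import Counter
from transformers import pipeline

CHECKPOINTS = [
    "path/to/finetuned-electra-small",   # placeholder local checkpoints
    "path/to/finetuned-roberta-base",
    "path/to/finetuned-albert-base-v2",
]

classifiers = [pipeline("text-classification", model=ckpt) for ckpt in CHECKPOINTS]

def ensemble_label(student_response: str) -> str:
    """Majority vote over the individual model predictions."""
    votes = [clf(student_response)[0]["label"] for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]

# ensemble_label("The current is the same because the bulbs are in series.")
```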
30

Kamune, Kalyani Pradiprao, and Avinash Agrawal. "Hybrid Model of Automated Anaphora Resolution." IAES International Journal of Artificial Intelligence (IJ-AI) 3, no. 3 (September 1, 2014): 105. http://dx.doi.org/10.11591/ijai.v3.i3.pp105-111.

Abstract:
Anaphora resolution has proven to be a very difficult problem in natural language processing, and it is useful in discourse analysis, language understanding and processing, information extraction, machine translation and more. This paper presents a system that, instead of using a monolithic architecture for resolving anaphora, uses a hybrid model combining the constraint-based and preference-based architectures; each uses a different source of knowledge, and the combination proves effective on theoretical and computational grounds. An algorithm identifies both inter-sentential and intra-sentential antecedents of “third person pronoun anaphors”, “pleonastic it”, and “lexical noun phrase anaphora”. The algorithm uses the Charniak parser (parser05Aug16) as an associated tool and relies on the output it generates. Salience measures are derived from the parse tree in order to find accurate antecedents from the list of potential antecedents. We have tested the system extensively on the Reuters newspaper corpus.
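The hybrid constraint-plus-preference idea can be illustrated with a small sketch (not the paper's algorithm): hard agreement constraints first filter candidate antecedents, and invented salience weights then rank the survivors.

```python
# Toy sketch of constraint-plus-preference anaphora resolution (not the paper's
# system): agreement constraints filter candidates, invented salience weights rank them.
def resolve_pronoun(pronoun, candidates):
    """candidates: list of dicts with 'text', 'gender', 'number', 'role', 'distance'."""
    features = {"he": ("masc", "sg"), "she": ("fem", "sg"), "it": ("neut", "sg"), "they": (None, "pl")}
    gender, number = features[pronoun.lower()]

    # Constraint phase: keep only candidates that agree in gender and number.
    agreeing = [c for c in candidates
                if c["number"] == number and (gender is None or c["gender"] == gender)]

    # Preference phase: score by salience (subject role preferred, recency preferred).
    def salience(c):
        return (2.0 if c["role"] == "subject" else 1.0) - 0.5 * c["distance"]

    return max(agreeing, key=salience)["text"] if agreeing else None

candidates = [
    {"text": "the report", "gender": "neut", "number": "sg", "role": "object", "distance": 1},
    {"text": "Maria", "gender": "fem", "number": "sg", "role": "subject", "distance": 1},
]
print(resolve_pronoun("she", candidates))  # Maria
```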
31

Friedman, C., G. Hripcsak, W. DuMouchel, S. B. Johnson, and P. D. Clayton. "Natural language processing in an operational clinical information system." Natural Language Engineering 1, no. 1 (March 1995): 83–108. http://dx.doi.org/10.1017/s1351324900000061.

Abstract:
This paper describes a natural language text extraction system, called MEDLEE, that has been applied to the medical domain. The system extracts, structures, and encodes clinical information from textual patient reports. It was integrated with the Clinical Information System (CIS), which was developed at Columbia-Presbyterian Medical Center (CPMC) to help improve patient care. MEDLEE is currently used on a daily basis to routinely process radiological reports of patients at CPMC. In order to describe how the natural language system was made compatible with the existing CIS, this paper will also discuss engineering issues which involve performance, robustness, and accessibility of the data from the end users' viewpoint. Also described are the three evaluations that have been performed on the system. The first evaluation was useful primarily for further refinement of the system. The two other evaluations involved an actual clinical application which consisted of retrieving reports that were associated with specified diseases. Automated queries were written by a medical expert based on the structured output forms generated as a result of text processing. The retrievals obtained by the automated system were compared to the retrievals obtained by independent medical experts who read the reports manually to determine whether they were associated with the specified diseases. MEDLEE was shown to perform comparably to the experts. The technique used to perform the last two evaluations was found to be a realistic evaluation technique for a natural language processor.
32

Dangol, Dinesh, Rupesh Dahi Shrestha, and Arun Timalsina. "Automated News Classification using N-gram Model and Key Features of Nepali Language." SCITECH Nepal 13, no. 1 (September 30, 2018): 64–69. http://dx.doi.org/10.3126/scitech.v13i1.23504.

Abstract:
With the increasing trend of publishing news online on websites, automatic text processing becomes more and more important. Automatic text classification has been a focus of many researchers in different languages for decades. There is a huge repository of research on features of the English language and their use in automated text processing. This research implements key features of the Nepali language for automatic classification of Nepali news. In particular, studying the impact of Nepali-language-based features, which differ greatly from those of English, is more challenging because of the higher level of complexity to be resolved. The experiment, using a vector space model, an n-gram model, and key-feature-based processing specific to the Nepali language, shows promising results compared to a bag-of-words model for the task of automated Nepali news classification.
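The n-gram feature model compared in the study can be sketched with scikit-learn as follows; the two-document English training set is an invented placeholder for the Nepali news corpus.

```python
# Minimal n-gram text-classification sketch with scikit-learn. The tiny training
# "corpus" is an invented English placeholder; the study worked with Nepali news.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "the team wins the national football cup final",
    "parliament passes the new national budget bill",
]
train_labels = ["sports", "politics"]

model = make_pipeline(
    TfidfVectorizer(analyzer="word", ngram_range=(1, 2)),  # unigram + bigram features
    LogisticRegression(max_iter=1000),
)
model.fit(train_texts, train_labels)
print(model.predict(["minister presents the budget to parliament"]))  # likely ['politics']
```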
33

Waterworth, David, Subbu Sethuvenkatraman, and Quan Z. Sheng. "Advancing smart building readiness: Automated metadata extraction using neural language processing methods." Advances in Applied Energy 3 (August 2021): 100041. http://dx.doi.org/10.1016/j.adapen.2021.100041.

34

XIANG, Wei. "Automated debug for common information model defect using natural language processing algorithm." Journal of Computer Applications 33, no. 5 (October 14, 2013): 1446–49. http://dx.doi.org/10.3724/sp.j.1087.2013.01446.

35

Agaronnik, Nicole D., Anne Kwok, Andrew J. Schoenfeld, and Charlotta Lindvall. "Natural language processing for automated surveillance of intraoperative neuromonitoring in spine surgery." Journal of Clinical Neuroscience 97 (March 2022): 121–26. http://dx.doi.org/10.1016/j.jocn.2022.01.015.

36

Sadanand, Vijaya Shetty, Kadagathur Raghavendra Rao Guruvyas, Pranav Prashantha Patil, Jeevan Janardhan Acharya, and Sharvani Gunakimath Suryakanth. "An automated essay evaluation system using natural language processing and sentiment analysis." International Journal of Electrical and Computer Engineering (IJECE) 12, no. 6 (December 1, 2022): 6585. http://dx.doi.org/10.11591/ijece.v12i6.pp6585-6593.

Abstract:
An automated essay evaluation system is a machine-based approach leveraging a long short-term memory (LSTM) model to award grades to essays written in the English language. Natural language processing (NLP) is used to extract feature representations from the essays. The LSTM network learns from the extracted features and generates parameters for testing and validation. The main objectives of the research include proposing and training an LSTM model using a dataset of manually graded essays with scores. Sentiment analysis is performed to determine the sentiment of the essay as either positive, negative or neutral. The Twitter sample dataset is used to build a sentiment classifier that analyzes the sentiment based on the student’s approach towards a topic. Additionally, each essay is subjected to detection of syntactical errors as well as a plagiarism check to detect the novelty of the essay. The overall grade is calculated based on the quality of the essay, the number of syntactic errors, the percentage of plagiarism found and the sentiment of the essay. The corrected essay is provided as feedback to the students. This essay grading model has gained an average quadratic weighted kappa (QWK) score of 0.911 with 99.4% accuracy for the sentiment analysis classifier.
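The quadratic weighted kappa reported above can be computed with scikit-learn; the sketch below uses invented grades purely to show the metric, not the paper's results.

```python
# Quadratic weighted kappa (QWK), the agreement metric reported in the abstract,
# computed with scikit-learn on invented example grades.
from sklearn.metrics import cohen_kappa_score

human_grades = [2, 3, 4, 4, 1, 5, 3, 2]   # placeholder human scores
model_grades = [2, 3, 3, 4, 1, 5, 4, 2]   # placeholder model scores

qwk = cohen_kappa_score(human_grades, model_grades, weights="quadratic")
print(f"QWK = {qwk:.3f}")
```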
37

Conway, Lucian Gideon, Kathrene R. Conway, and Shannon C. Houck. "Validating automated integrative complexity: Natural language processing and the Donald Trump Test." Journal of Social and Political Psychology 8, no. 2 (September 2, 2020): 504–24. http://dx.doi.org/10.5964/jspp.v8i2.1307.

Abstract:
Computer algorithms that analyze language (natural language processing systems) have seen a great increase in usage recently. While use of these systems to score key constructs in social and political psychology has many advantages, it is also dangerous if we do not fully evaluate the validity of these systems. In the present article, we evaluate a natural language processing system for one particular construct that has implications for solving key societal issues: Integrative complexity. We first review the growing body of evidence for the validity of the Automated Integrative Complexity (AutoIC) method for computer-scoring integrative complexity. We then provide five new validity tests: AutoIC successfully distinguished fourteen classic philosophic works from a large sample of both lay populations and political leaders (Test 1) and further distinguished classic philosophic works from the rhetoric of Donald Trump at higher rates than an alternative system (Test 2). Additionally, AutoIC successfully replicated key findings from the hand-scored IC literature on smoking cessation (Test 3), U.S. Presidents’ State of the Union Speeches (Test 4), and the ideology-complexity relationship (Test 5). Taken in total, this large body of evidence not only suggests that AutoIC is a valid system for scoring integrative complexity, but it also reveals important theory-building insights into key issues at the intersection of social and political psychology (health, leadership, and ideology). We close by discussing the broader contributions of the present validity tests to our understanding of issues vital to natural language processing.
38

Melton, Genevieve B., and George Hripcsak. "Automated Detection of Adverse Events Using Natural Language Processing of Discharge Summaries." Journal of the American Medical Informatics Association 12, no. 4 (July 2005): 448–57. http://dx.doi.org/10.1197/jamia.m1794.

39

Feller, Daniel J., Jason Zucker, Michael T. Yin, Peter Gordon, and Noémie Elhadad. "Using Clinical Notes and Natural Language Processing for Automated HIV Risk Assessment." JAIDS Journal of Acquired Immune Deficiency Syndromes 77, no. 2 (February 2018): 160–66. http://dx.doi.org/10.1097/qai.0000000000001580.

40

Moon, Seonghyeon, Gitaek Lee, Seokho Chi, and Hyunchul Oh. "Automated Construction Specification Review with Named Entity Recognition Using Natural Language Processing." Journal of Construction Engineering and Management 147, no. 1 (January 2021): 04020147. http://dx.doi.org/10.1061/(asce)co.1943-7862.0001953.

41

Goff, Daniel J., and Thomas W. Loehfelm. "Automated Radiology Report Summarization Using an Open-Source Natural Language Processing Pipeline." Journal of Digital Imaging 31, no. 2 (October 30, 2017): 185–92. http://dx.doi.org/10.1007/s10278-017-0030-2.

42

Pawar, Prof P. Y. "Completely Automated Captcha Solver." International Journal for Research in Applied Science and Engineering Technology 9, no. VII (July 20, 2021): 1728–32. http://dx.doi.org/10.22214/ijraset.2021.36710.

Abstract:
This project is primarily aimed at creating an automated system for solving CAPTCHAs. CAPTCHAs (Completely Automated Public Turing test to tell Computers and Humans Apart) are the Internet’s first line of defence against automated account creation and service abuse. This paper presents unCaptcha, an automated system that can solve the most difficult auditory CAPTCHA challenges with a high success rate using deep learning and natural language processing. There are four types of CAPTCHA: audio CAPTCHAs, text-based CAPTCHAs, image CAPTCHAs, and maths-solver CAPTCHAs.
43

Mukherjee, Prasenjit, Atanu Chattopadhyay, Baisakhi Chakraborty, and Debashis Nandi. "Natural language query handling using extended knowledge provider system." International Journal of Knowledge-based and Intelligent Engineering Systems 25, no. 1 (April 9, 2021): 1–19. http://dx.doi.org/10.3233/kes-210049.

Abstract:
Extraction of knowledge data from a knowledge database using natural language queries is a difficult task. Different types of natural language processing (NLP) techniques have been developed to handle this knowledge data extraction task. This paper proposes an automated query-response model termed the Extended Automated Knowledge Provider System (EAKPS) that can manage various types of natural language queries from users. The EAKPS uses a combination-based technique and can handle assertive, interrogative, imperative, compound and complex query sentences. The EAKPS algorithm generates structured query language (SQL) for each natural language query to extract knowledge data from the knowledge database resident within the EAKPS. Extraction of nouns or noun phrases is another issue in natural language query processing. Most of the time, a determiner, preposition or conjunction is prefixed to a noun or noun phrase, and it is difficult to identify the noun or noun phrase with the prefix during query processing. The proposed system is able to identify these prefixes and extract exact nouns or noun phrases from natural language queries without any manual intervention.
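A toy illustration of natural-language-to-SQL mapping (not the EAKPS algorithm) is sketched below; the regex pattern and the knowledge(topic, description) table are assumptions made for the example.

```python
# Toy sketch of natural-language-to-SQL mapping (not the EAKPS algorithm):
# one regex pattern over an invented knowledge(topic, description) table.
import re

def to_sql(question: str) -> str:
    match = re.match(r"(?i)what is (?:a |an |the )?(.+?)\??$", question.strip())
    if match:
        topic = match.group(1).replace("'", "''")  # naive quoting for the literal
        return f"SELECT description FROM knowledge WHERE topic = '{topic}';"
    return "-- query pattern not recognised"

print(to_sql("What is an ontology?"))
# SELECT description FROM knowledge WHERE topic = 'ontology';
```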
44

Vogel, Sven C. "gsaslanguage: a GSAS script language for automated Rietveld refinements of diffraction data." Journal of Applied Crystallography 44, no. 4 (July 13, 2011): 873–77. http://dx.doi.org/10.1107/s0021889811023181.

Abstract:
A description of the gsaslanguage software is presented. The software provides input to and processes output from the GSAS package. It allows the development of scripts for the automatic evaluation of large numbers of data sets and provides documentation of the refinement strategies employed, thus fostering the development of efficient refinement strategies. Use of the bash shell and standard Unix text-processing tools, available natively on Linux and Mac OS X platforms and via the free cygwin software on Windows systems, makes this software platform independent.
45

Rychtyckyj, Nestor, and Craig Plesco. "Applying Automated Language Translation at a Global Enterprise Level." AI Magazine 34, no. 1 (December 6, 2012): 43. http://dx.doi.org/10.1609/aimag.v34i1.2436.

Abstract:
In 2007 we presented a paper that described the application of Natural Language Processing (NLP) and Machine Translation (MT) for the automated translation of process build instructions from English to other languages to support Ford’s assembly plants in non-English speaking countries. This project has continued to evolve with the addition of new languages and improvements to the translation process. However, we discovered that there was a large demand for automated language translation across all of Ford Motor Company and we decided to expand the scope of our project to address these requirements. This paper will describe our efforts to meet all of Ford’s internal translation requirements with AI and MT technology and focus on the challenges and lessons that we learned from applying advanced technology across an entire corporation.
46

Rychtyckyj, Nestor, and Craig Plesco. "Applying Automated Language Translation at a Global Enterprise Level." Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 2 (July 22, 2012): 2245–52. http://dx.doi.org/10.1609/aaai.v26i2.18965.

Abstract:
In 2007 we presented a paper that described the application of Natural Language Processing (NLP) and Machine Translation (MT) for the automated translation of process build instructions from English to other languages to support Ford’s assembly plants in non-English speaking countries. This project has continued to evolve with the addition of new languages and improvements to the translation process. However, we discovered that there was a large demand for automated language translation across all of Ford Motor Company and we decided to expand the scope of our project to address these requirements. This paper will describe our efforts to meet all of Ford’s internal translation requirements with AI and MT technology and focus on the challenges and lessons that we learned from applying advanced technology across an entire corporation.
47

Thessen, Anne E., Hong Cui, and Dmitry Mozzherin. "Applications of Natural Language Processing in Biodiversity Science." Advances in Bioinformatics 2012 (May 22, 2012): 1–17. http://dx.doi.org/10.1155/2012/391574.

Abstract:
Centuries of biological knowledge are contained in the massive body of scientific literature, written for human-readability but too big for any one person to consume. Large-scale mining of information from the literature is necessary if biology is to transform into a data-driven science. A computer can handle the volume but cannot make sense of the language. This paper reviews and discusses the use of natural language processing (NLP) and machine-learning algorithms to extract information from systematic literature. NLP algorithms have been used for decades, but require special development for application in the biological realm due to the special nature of the language. Many tools exist for biological information extraction (cellular processes, taxonomic names, and morphological characters), but none have been applied life wide and most still require testing and development. Progress has been made in developing algorithms for automated annotation of taxonomic text, identification of taxonomic names in text, and extraction of morphological character information from taxonomic descriptions. This manuscript will briefly discuss the key steps in applying information extraction tools to enhance biodiversity science.
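A deliberately naive sketch of binomial-name spotting illustrates why, as the abstract notes, biological text needs specially developed tools: a generic pattern over-matches ordinary capitalised phrases. The pattern and example sentence are illustrative assumptions.

```python
# Deliberately naive sketch of taxonomic (binomial) name spotting with a regex.
# It shows how a generic pattern over-matches ordinary capitalised phrases,
# which is why specialised biodiversity NLP tools are needed.
import re

BINOMIAL = re.compile(r"\b([A-Z][a-z]+)\s([a-z]{3,})\b")

text = "Specimens of Homo sapiens and Quercus alba were compared. Large samples were used."
print(BINOMIAL.findall(text))
# [('Homo', 'sapiens'), ('Quercus', 'alba'), ('Large', 'samples')]  <- false positive
```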
48

Topoleanu, Tudor, Gheorghe Leonte Mogan, and Cristian Postelnicu. "On Semantic Graph Language Processing for Mobile Robot Voice Interaction." Applied Mechanics and Materials 162 (March 2012): 286–93. http://dx.doi.org/10.4028/www.scientific.net/amm.162.286.

Abstract:
This paper describes a simple semantic-graph-based model for processing natural language commands issued to a mobile robot. The proposed model is intended for translating natural language commands given by naïve users into an action or sequence of actions that the robot can execute via its available functionality, in order to complete the commands. This approach to language processing is easily extensible through automated learning; it is also simpler and more scalable than hard-coded command-to-action mapping, while being flexible and covering any number of command formulations that could be generated by a user.
49

Su, Yu-Hsiang, Ching-Ping Chao, Ling-Chien Hung, Sheng-Feng Sung, and Pei-Ju Lee. "A Natural Language Processing Approach to Automated Highlighting of New Information in Clinical Notes." Applied Sciences 10, no. 8 (April 19, 2020): 2824. http://dx.doi.org/10.3390/app10082824.

Abstract:
Electronic medical records (EMRs) have been used extensively in most medical institutions for more than a decade in Taiwan. However, information overload associated with rapid accumulation of large amounts of clinical narratives has threatened the effective use of EMRs. This situation is further worsened by the use of “copying and pasting”, leading to lots of redundant information in clinical notes. This study aimed to apply natural language processing techniques to address this problem. New information in longitudinal clinical notes was identified based on a bigram language model. The accuracy of automated identification of new information was evaluated using expert annotations as the reference standard. A two-stage cross-over user experiment was conducted to evaluate the impact of highlighting of new information on task demands, task performance, and perceived workload. The automated method identified new information with an F1 score of 0.833. The user experiment found a significant decrease in perceived workload associated with a significantly higher task performance. In conclusion, automated identification of new information in clinical notes is feasible and practical. Highlighting of new information enables healthcare professionals to grasp key information from clinical notes with less perceived workload.
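The bigram-based idea can be sketched in a few lines of plain Python: a sentence in the current note is flagged as new when most of its word bigrams never occurred in earlier notes; the threshold and example notes are invented and are not the paper's model.

```python
# Minimal sketch of bigram-based "new information" highlighting: a sentence in
# today's note is flagged as new if few of its word bigrams appeared in prior
# notes. The 0.5 threshold and sample notes are invented illustrations.
def bigrams(text):
    words = text.lower().split()
    return set(zip(words, words[1:]))

def new_sentences(prior_notes, current_note, threshold=0.5):
    seen = set()
    for note in prior_notes:
        seen |= bigrams(note)
    flagged = []
    for sentence in current_note.split("."):
        grams = bigrams(sentence)
        if grams and len(grams - seen) / len(grams) > threshold:
            flagged.append(sentence.strip())
    return flagged

prior = ["patient admitted with chest pain", "chest pain resolved with nitroglycerin"]
today = "patient admitted with chest pain. new onset atrial fibrillation noted today."
print(new_sentences(prior, today))  # ['new onset atrial fibrillation noted today']
```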
50

Gimaletdinova, G. K., and E. Kh Dovtaeva. "Sentiment Analysis of Reader Comments: Automated vs Manual Text Processing." Uchenye Zapiski Kazanskogo Universiteta. Seriya Gumanitarnye Nauki 163, no. 1 (2021): 65–80. http://dx.doi.org/10.26907/2541-7738.2021.1.65-80.

Abstract:
The verbal and structural features of the reader comment, a genre of Internet communication, were studied. The method of sentiment analysis (ParallelDots API) was used to reveal and measure the emotive component of reader comments (N = 3000) in the English and Russian languages. The results obtained were verified by manual linguistic text analysis. The experts were specialists in the philology of the English and Russian languages (N = 6) and students of philology who are native speakers of Russian and use English as a foreign language at the C1 proficiency level (N = 4). Comparison of the data collected through automated and manual text processing revealed a number of factors that reduce the reliability of automated sentiment analysis of reader comments. Difficulties hindering the objective determination of sentiment by the program were found in the reader comments in both analyzed languages, which is indicative of structural similarities between English and Russian reader comments at the lexical and syntactic levels. The feasibility of mixed automated and manual text processing for obtaining more detailed and objective data was demonstrated. The results of this work can be used for comparative studies of two or more languages performed by the method of sentiment analysis, as well as for drawing parallels between the lexical, grammatical, and cultural components of languages.