Academic literature on the topic 'Events in natural language processing'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Events in natural language processing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Events in natural language processing"

1

Karttunen, Lauri, Kimmo Koskenniemi, and Gertjan van Noord. "Finite state methods in natural language processing." Natural Language Engineering 9, no. 1 (2003): 1–3. http://dx.doi.org/10.1017/s1351324903003139.

Abstract:
Finite state methods have been in common use in various areas of natural language processing (NLP) for many years. A series of specialized workshops in this area illustrates this. In 1996, András Kornai organized a very successful workshop entitled Extended Finite State Models of Language. One of the results of that workshop was a special issue of Natural Language Engineering (Volume 2, Number 4). In 1998, Kemal Oflazer organized a workshop called Finite State Methods in Natural Language Processing. A selection of submissions for this workshop were later included in a special issue of Computational Linguistics (Volume 26, Number 1). Inspired by these events, Lauri Karttunen, Kimmo Koskenniemi and Gertjan van Noord took the initiative for a workshop on finite state methods in NLP in Helsinki, as part of the European Summer School in Language, Logic and Information. As a related special event, the 20th anniversary of two-level morphology was celebrated. The appreciation of these events led us to believe that once again it should be possible, with some additional submissions, to compose an interesting special issue of this journal.
2

Li, Yong, Xiaojun Yang, Min Zuo, Qingyu Jin, Haisheng Li, and Qian Cao. "Deep Structured Learning for Natural Language Processing." ACM Transactions on Asian and Low-Resource Language Information Processing 20, no. 3 (2021): 1–14. http://dx.doi.org/10.1145/3433538.

Abstract:
The real-time and dissemination characteristics of network information make net-mediated public opinion an increasingly important resource for food safety early warning, but data growing at petabyte (PB) scale also make it difficult to research and judge network public opinion, in particular to extract the event roles of network public opinion from these data and to analyze the sentiment tendency of public opinion comments. First, taking food safety network public opinion as its research point, this article proposes a BLSTM-CRF model that automatically labels event roles by organically combining a BLSTM with a conditional random field. Second, an attention mechanism based on food-safety-domain vocabulary is introduced: distance-related sequence semantic features are extracted by the BLSTM, and their emotional classification is realized with a CNN, yielding an Att-BLSTM-CNN model for analyzing public opinion and emotional tendency in the food safety domain. Finally, based on the time series, the article combines the role extraction of food safety events with the analysis of emotional tendency and constructs a net-mediated public opinion early warning model for the food safety domain according to the heat of an event and the intensity of the public's emotion toward food safety public opinion events.
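As a rough, hypothetical illustration of the BLSTM-based sequence labelling this abstract describes (the CRF layer and the domain-vocabulary attention mechanism are omitted, and all names and hyperparameters are assumptions rather than the authors' implementation), a minimal PyTorch sketch could look like this:

```python
# Minimal BiLSTM sequence-tagger sketch (hypothetical; CRF and attention omitted).
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, tagset_size, embed_dim=100, hidden_dim=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # A bidirectional LSTM reads each sentence left-to-right and right-to-left.
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        # Project the concatenated hidden states onto the event-role tag space.
        self.out = nn.Linear(2 * hidden_dim, tagset_size)

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)   # (batch, seq_len, embed_dim)
        hidden, _ = self.lstm(embedded)        # (batch, seq_len, 2 * hidden_dim)
        return self.out(hidden)                # per-token tag scores

# Example: score a batch of two padded sentences of length six.
model = BiLSTMTagger(vocab_size=5000, tagset_size=9)
tokens = torch.randint(1, 5000, (2, 6))
print(model(tokens).shape)  # torch.Size([2, 6, 9])
```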
3

Ozonoff, Al, Carly E. Milliren, Kerri Fournier, et al. "Electronic surveillance of patient safety events using natural language processing." Health Informatics Journal 28, no. 4 (2022): 146045822211324. http://dx.doi.org/10.1177/14604582221132429.

Abstract:
Objective: We describe our approach to surveillance of reportable safety events captured in hospital data including free-text clinical notes. We hypothesize that a) some patient safety events are documented only in the clinical notes and not in any other accessible source; and b) large-scale abstraction of event data from clinical notes is feasible. Materials and Methods: We use regular expressions to generate a training data set for a machine learning model, apply this model to the full set of clinical notes, and conduct further review to identify safety events of interest. We demonstrate this approach on peripheral intravenous (PIV) infiltrations and extravasations (PIVIEs). Results: During Phase 1, we collected 21,362 clinical notes, of which 2342 were reviewed. We identified 125 PIV events, of which 44 cases (35%) were not captured by other patient safety systems. During Phase 2, we collected 60,735 clinical notes and identified 440 infiltrate events. Our classifier demonstrated accuracy above 90%. Conclusion: Our method to identify safety events from the free text of clinical documentation offers a feasible and scalable approach to enhance existing patient safety systems. Expert reviewers, using a machine learning model, can conduct routine surveillance of patient safety events.
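The two-phase workflow described above (regular expressions to seed a training set, then a machine learning model applied to all notes) can be illustrated with a small, hypothetical scikit-learn sketch; the pattern, example notes, and classifier choice below are assumptions for illustration only, not the study's actual expressions or model:

```python
# Hypothetical sketch: regex-seeded weak labels feeding a simple note classifier.
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Illustrative pattern only; the study's expressions are not reproduced here.
PIV_PATTERN = re.compile(r"\b(infiltrat\w+|extravasat\w+)\b", re.IGNORECASE)

def weak_label(note: str) -> int:
    """Return 1 if the note mentions a possible PIV infiltration/extravasation."""
    return int(bool(PIV_PATTERN.search(note)))

notes = [
    "IV site swollen, suspected infiltration, line removed.",
    "Patient resting comfortably, no acute events overnight.",
]
labels = [weak_label(n) for n in notes]

# Train a lightweight classifier on the regex-derived labels, then apply it to
# the full corpus of notes so positive predictions can be reviewed by experts.
classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
classifier.fit(notes, labels)
print(classifier.predict(["Extravasation noted at peripheral IV site."]))
```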
4

Guda, Vanitha, and SureshKumar Sanampudi. "Event Time Relationship in Natural Language Text." International Journal of Recent Contributions from Engineering, Science & IT (iJES) 7, no. 3 (2019): 4. http://dx.doi.org/10.3991/ijes.v7i3.10985.

Abstract:
Due to numerous information needs, retrieval of events from a given natural language text is inevitable. From a natural language processing (NLP) perspective, "events" are situations, occurrences, real-world entities or facts. Extracting events and arranging them on a timeline is helpful in various NLP applications such as summarizing news articles, processing health records, and question answering (QA) systems. This paper presents a framework for identifying the events and times in a given document and representing them using a graph data structure. As a result, a graph is derived that shows the event-time relationships in the given text. Events form the nodes of the graph, and edges represent the temporal relations among the nodes. The time of an event occurrence exists in two forms, namely qualitative (before, after, during, etc.) and quantitative (exact time points/periods). To build the event-time-event structure, quantitative time is normalized to the qualitative form, and the temporal information thus obtained is used to label the edges between events. The data set released in the shared task EvTExtract of the FIRE (Forum for Information Retrieval Extraction) 2018 conference is used to evaluate the framework. Precision and recall are used as evaluation metrics to assess the performance of the proposed framework against other state-of-the-art methods, achieving 85% accuracy and 90% precision.
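As a toy illustration of the event-time graph this abstract describes (events as nodes, qualitative temporal relations as edge labels), a hypothetical sketch using the networkx library might look as follows; the events and relations are invented examples, not data from the paper:

```python
# Hypothetical sketch: events as nodes, qualitative temporal relations as edge labels.
import networkx as nx

graph = nx.DiGraph()

# Invented events; the 'time' attribute stands in for normalised quantitative time.
graph.add_node("earthquake", time="2019-03-01")
graph.add_node("rescue_operation", time="2019-03-02")
graph.add_node("press_briefing", time="2019-03-02")

# Quantitative time points are normalised to qualitative relations on the edges.
graph.add_edge("earthquake", "rescue_operation", relation="before")
graph.add_edge("rescue_operation", "press_briefing", relation="during")

for source, target, data in graph.edges(data=True):
    print(f"{source} --{data['relation']}--> {target}")
```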
5

Balgi, Sanjana Madhav. "Fake News Detection using Natural Language Processing." International Journal for Research in Applied Science and Engineering Technology 10, no. 6 (2022): 4790–95. http://dx.doi.org/10.22214/ijraset.2022.45095.

Abstract:
Fake news is information that is false or misleading but is reported as news. The tendency for people to spread false information is influenced by human behaviour; research indicates that people are drawn to unexpected fresh events and information, which increases brain activity. Additionally, it was found that motivated reasoning helps spread incorrect information. This ultimately encourages individuals to repost or disseminate deceptive content, which is frequently identified by click-bait and attention-grabbing names. The proposed study uses machine learning and natural language processing approaches to identify false news, specifically false news items that come from unreliable sources. The dataset used here is the ISOT dataset, which contains real and fake news collected from various sources. Web scraping is used to extract text from news websites so that current news can be added to the dataset. Data pre-processing and feature extraction are applied to the data, followed by dimensionality reduction and classification using models such as Rocchio classification, Bagging classifier, Gradient Boosting classifier and Passive Aggressive classifier. To choose the best-performing model with an accurate prediction for fake news, we compared a number of algorithms.
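The processing pipeline listed in this abstract (feature extraction, dimensionality reduction, then classification, for example with a Passive Aggressive classifier) can be sketched hypothetically with scikit-learn; the toy texts and parameters below are assumptions, not the ISOT data or the authors' configuration:

```python
# Hypothetical sketch: TF-IDF features, dimensionality reduction, and a
# Passive Aggressive classifier, mirroring the pipeline stages listed above.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import PassiveAggressiveClassifier

# Toy stand-in for a labelled news corpus: 0 = real, 1 = fake.
texts = [
    "Government confirms new policy after official press conference.",
    "SHOCKING secret cure doctors don't want you to know about!",
    "Central bank publishes quarterly inflation report.",
    "You won't believe what this celebrity said about aliens!",
]
labels = [0, 1, 0, 1]

pipeline = make_pipeline(
    TfidfVectorizer(stop_words="english"),
    TruncatedSVD(n_components=2),  # small only because the toy corpus is tiny
    PassiveAggressiveClassifier(max_iter=1000),
)
pipeline.fit(texts, labels)
print(pipeline.predict(["Miracle diet pill melts fat overnight, experts stunned!"]))
```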
6

Hkiri, Emna, Souheyl Mallat, and Mounir Zrigui. "Events Automatic Extraction from Arabic Texts." International Journal of Information Retrieval Research 6, no. 1 (2016): 36–51. http://dx.doi.org/10.4018/ijirr.2016010103.

Abstract:
The event extraction task consists in determining and classifying events within an open-domain text. It is very new for the Arabic language, whereas it has attained maturity for languages such as English and French. Event extraction has also been shown to help natural language processing tasks such as information retrieval, question answering, text mining and machine translation obtain higher performance. In this article, we present an ongoing effort to build a system for event extraction from Arabic texts using the GATE platform and other tools.
7

Melton, Genevieve B., and George Hripcsak. "Automated Detection of Adverse Events Using Natural Language Processing of Discharge Summaries." Journal of the American Medical Informatics Association 12, no. 4 (2005): 448–57. http://dx.doi.org/10.1197/jamia.m1794.

8

Yli-Jyrä, Anssi, András Kornai, and Jacques Sakarovitch. "Finite-state methods and models in natural language processing." Natural Language Engineering 17, no. 2 (2011): 141–44. http://dx.doi.org/10.1017/s1351324911000015.

Abstract:
For the past two decades, specialised events on finite-state methods have been successful in presenting interesting studies on natural language processing to the public through journals and collections. The FSMNLP workshops have become well-known among researchers and are now the main forum of the Association for Computational Linguistics' (ACL) Special Interest Group on Finite-State Methods (SIGFSM). The current issue on finite-state methods and models in natural language processing was planned in 2008 in this context as a response to a call for special issue proposals. In 2010, the issue received a total of sixteen submissions, some of which were extended and updated versions of workshop papers, and others which were completely new. The final selection, consisting of only seven papers that could fit into one issue, is not fully representative, but complements the prior special issues in a nice way. The selected papers showcase a few areas where finite-state methods have less than obvious and sometimes even groundbreaking relevance to natural language processing (NLP) applications.
9

Abbood, Auss, Alexander Ullrich, Rüdiger Busche, and Stéphane Ghozzi. "EventEpi—A natural language processing framework for event-based surveillance." PLOS Computational Biology 16, no. 11 (2020): e1008277. http://dx.doi.org/10.1371/journal.pcbi.1008277.

Abstract:
According to the World Health Organization (WHO), around 60% of all outbreaks are detected using informal sources. In many public health institutes, including the WHO and the Robert Koch Institute (RKI), dedicated groups of public health agents sift through numerous articles and newsletters to detect relevant events. This media screening is one important part of event-based surveillance (EBS). Reading the articles, discussing their relevance, and putting key information into a database is a time-consuming process. To support EBS, but also to gain insights into what makes an article and the event it describes relevant, we developed a natural language processing framework for automated information extraction and relevance scoring. First, we scraped relevant sources for EBS as done at the RKI (WHO Disease Outbreak News and ProMED) and automatically extracted the articles' key data: disease, country, date, and confirmed-case count. For this, we performed named entity recognition in two steps: EpiTator, an open-source epidemiological annotation tool, suggested many different possibilities for each. We extracted the key country and disease using a heuristic with good results. We trained a naive Bayes classifier to find the key date and confirmed-case count, using the RKI's EBS database as labels, which performed modestly. Then, for relevance scoring, we defined two classes to which any article might belong: the article is relevant if it is in the EBS database and irrelevant otherwise. We compared the performance of different classifiers, using bag-of-words, document and word embeddings. The best classifier, a logistic regression, achieved a sensitivity of 0.82 and an index balanced accuracy of 0.61. Finally, we integrated these functionalities into a web application called EventEpi, where relevant sources are automatically analyzed and put into a database. The user can also provide any URL or text, which will be analyzed in the same way and added to the database. Each of these steps could be improved, in particular with larger labeled datasets and fine-tuning of the learning algorithms. The overall framework, however, already works well and can be used in production, promising improvements in EBS. The source code and data are publicly available under open licenses.
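As a minimal, hypothetical sketch of the relevance-scoring step described here (a bag-of-words representation with a logistic regression classifier, with labels taken from an EBS database), one might write something like the following; the articles, labels, and probability-as-score output are illustrative assumptions, not the EventEpi code:

```python
# Hypothetical sketch: bag-of-words relevance scoring for outbreak articles.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented examples; in the framework, labels come from the institute's EBS database.
articles = [
    "Cholera outbreak reported in coastal region, 120 confirmed cases.",
    "Ministry announces new hospital construction project.",
    "Novel influenza cases rising sharply, WHO notified.",
    "Annual health budget debated in parliament.",
]
relevant = [1, 0, 1, 0]

scorer = make_pipeline(CountVectorizer(), LogisticRegression())
scorer.fit(articles, relevant)

# The predicted probability of the positive class can serve as a relevance score.
new_article = ["Unusual cluster of measles cases detected near the border."]
print(scorer.predict_proba(new_article)[0][1])
```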
10

Kosiv, Yurii A., and Vitaliy S. Yakovyna. "Three language political leaning text classification using natural language processing methods." Applied Aspects of Information Technology 5, no. 4 (2022): 359–70. http://dx.doi.org/10.15276/aait.05.2022.24.

Abstract:
In this article, the problem of political leaning classification of a text resource is solved. First, a detailed analysis of ten studies on the work's topic was performed in the form of a comparative characterization of the methodologies used. Literary sources were compared according to the problem-solving methods, the learning that was carried out, the evaluation metrics, and the vectorizations. Thus, it was determined that machine learning algorithms and neural networks, as well as the vectorization methods TF-IDF and Word2Vec, were most often used to solve the problem. Next, various models for classifying whether textual information is pro-Ukrainian or pro-Russian were built on a dataset containing messages from social media users about the events of the large-scale Russian invasion of Ukraine from February 24, 2022. The problem was solved with the Support Vector Machines, Decision Tree, Random Forest, Naïve Bayes classifier, eXtreme Gradient Boosting and Logistic Regression machine learning algorithms; Convolutional Neural Networks, Long Short-Term Memory and BERT neural networks; the techniques for working with unbalanced data Random Oversampling, Random Undersampling, SMOTE and SMOTETomek; as well as stacking ensembles of models. Among the machine learning algorithms, LR performed best, showing a macro F1-score value of 0.7966 when features were transformed by TF-IDF vectorization and 0.7933 when BoW was used. Among neural networks, the best macro F1-score value of 0.76 was obtained using CNN and LSTM. Applying data balancing techniques failed to improve the results of the machine learning algorithms. Next, ensembles of models built from machine learning algorithms were determined. Two of the constructed ensembles achieved the same macro F1-score value of 0.7966 as LR. The ensembles that were able to do so consisted of the TF-IDF vectorization, the B-NBC meta-model, and the SVC, NuSVC, LR and SVC, LR base models, respectively. Thus, three classifiers, the LR machine learning algorithm and two ensembles of models defined as combinations of existing methods of solving the problem, demonstrated the largest macro F1-score value of 0.7966. The obtained models can be used for a detailed review of various news publications according to the political leaning characteristic, information which can help people recognize being isolated by a filter bubble.
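The best-performing configuration reported above (TF-IDF features with a stacking ensemble of base classifiers and a naive Bayes meta-model) can be sketched hypothetically with scikit-learn; the placeholder posts, the cv setting, and the specific estimator choices below are assumptions for illustration, not the authors' exact setup:

```python
# Hypothetical sketch: TF-IDF features feeding a stacking ensemble whose
# meta-model is a naive Bayes classifier, loosely echoing the setup described above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import StackingClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import BernoulliNB
from sklearn.pipeline import make_pipeline

# Placeholder posts; the real dataset holds social media messages about the invasion.
texts = [
    "example social media post with a pro-Ukrainian stance",
    "example social media post with a pro-Russian stance",
    "another post leaning pro-Ukrainian",
    "another post leaning pro-Russian",
]
labels = [0, 1, 0, 1]  # 0 = pro-Ukrainian, 1 = pro-Russian

stack = StackingClassifier(
    estimators=[("svc", SVC()), ("lr", LogisticRegression())],
    final_estimator=BernoulliNB(),
    cv=2,  # small cv only because the placeholder corpus is tiny
)
model = make_pipeline(TfidfVectorizer(), stack)
model.fit(texts, labels)
print(model.predict(["a new post whose leaning we want to classify"]))
```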
