Journal articles on the topic 'Digital Language Processing'

To see the other types of publications on this topic, follow the link: Digital Language Processing.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Digital Language Processing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Gonzalez-Dios, Itziar, and Begoña Altuna. "Natural Language Processing and Language Technologies for the Basque Language." Cuadernos Europeos de Deusto, no. 04 (July 22, 2022): 203–30. http://dx.doi.org/10.18543/ced.2477.

Abstract:
The presence of a language in the digital domain is crucial for its survival, as online communication and digital language resources have become the standard in recent decades and will gain more importance in the coming years. In order to develop the advanced systems that are considered the basis for efficient digital communication (e.g. machine translation systems, text-to-speech and speech-to-text converters and digital assistants), it is necessary to digitalise linguistic resources and create tools. In the case of Basque, scholars have studied the creation of digital linguistic resources and the tools that allow the development of those systems for the last forty years. In this paper, we present an overview of the natural language processing and language technology resources developed for Basque, their impact on the process of making Basque a “digital language” and the applications and challenges in multilingual communication. More precisely, we present the well-known products for Basque, the basic tools and the resources that are behind the products we use every day. Likewise, we would like this survey to serve as a guide for other minority languages that are making their way to digitalisation. Received: 5 April 2022. Accepted: 20 May 2022.
2

Bachate, Ravindra Parshuram, and Ashok Sharma. "Acquaintance with Natural Language Processing for Building Smart Society." E3S Web of Conferences 170 (2020): 02006. http://dx.doi.org/10.1051/e3sconf/202017002006.

Abstract:
Natural Language Processing (NLP) deals with spoken languages by using computers and Artificial Intelligence. As people from different regions use different digital platforms and express their views in their own spoken languages, it is now essential to work on the spoken languages of India to make our society smart and digital. NLP research has grown tremendously in the last decade, resulting in Siri, Google Assistant, Alexa, Cortana and many more automatic speech recognition and understanding (ASR) systems. Natural Language Processing can be understood by classifying it into Natural Language Generation and Natural Language Understanding. NLP is widely used in various domains such as health care, chatbots, ASR building, HR and sentiment analysis.
3

Embree, Paul M., Bruce Kimble, and James F. Bartram. "C Language Algorithms for Digital Signal Processing." Journal of the Acoustical Society of America 90, no. 1 (July 1991): 618. http://dx.doi.org/10.1121/1.401205.

4

Dolmans, Jeroen H. "C language algorithms for digital signal processing." Control Engineering Practice 4, no. 10 (October 1996): 1484–85. http://dx.doi.org/10.1016/0967-0661(96)85106-9.

5

Németh, Renáta, and Júlia Koltai. "Natural language processing." Intersections 9, no. 1 (April 26, 2023): 5–22. http://dx.doi.org/10.17356/ieejsp.v9i1.871.

Abstract:
Natural language processing (NLP) methods are designed to automatically process and analyze large amounts of textual data. The integration of this new-generation toolbox into sociology faces many challenges. NLP was institutionalized outside of sociology, while the expertise of sociology has been based on its own methods of research. Another challenge is epistemological: it is related to the validity of digital data and the different viewpoints associated with predictive and causal approaches. In our paper, we discuss the challenges and opportunities of the use of NLP in sociology, offer some potential solutions to the concerns and provide meaningful and diverse examples of its sociological application, most of which are related to research on Eastern European societies. The focus will be on the use of NLP in quantitative text analysis. Solutions are provided concerning how sociological knowledge can be incorporated into the new methods and how the new analytical tools can be evaluated against the principles of traditional quantitative methodology.
6

Allah, Fadoua Ataa, and Siham Boulaknadel. "NEW TRENDS IN LESS-RESOURCED LANGUAGE PROCESSING: CASE OF AMAZIGH LANGUAGE." International Journal on Natural Language Computing 12, no. 2 (April 29, 2023): 75–89. http://dx.doi.org/10.5121/ijnlc.2023.12207.

Abstract:
The coronavirus (COVID-19) pandemic has dramatically changed lifestyles in much of the world. It forced people to profoundly review their relationships and interactions with digital technologies. Nevertheless, people prefer using these technologies in their favorite languages. Unfortunately, most languages are considered low- or less-resourced and lack the means to keep up with these new needs. Therefore, this study explores how such languages, mainly Amazigh, will behave in a wholly digital environment, and what trends to expect. In contrast to past decades, the research gap for low- and less-resourced languages is continually narrowing. Nonetheless, the literature review reveals the need for innovative research that revisits their informatization roadmap while rethinking, in a valuable way, people’s behaviors in this rapidly changing environment. In this work, we first introduce the technology access challenges and explain how natural language processing contributes to overcoming them. Then, we give an overview of existing studies and research related to the informatization of under- and less-resourced languages, with an emphasis on the Amazigh language. Finally, based on these studies and the agile revolution, a new roadmap is presented.
7

Norilo, Vesa. "Kronos: A Declarative Metaprogramming Language for Digital Signal Processing." Computer Music Journal 39, no. 4 (December 2015): 30–48. http://dx.doi.org/10.1162/comj_a_00330.

Abstract:
Kronos is a signal-processing programming language based on the principles of semifunctional reactive systems. It is aimed at efficient signal processing at the elementary level, and built to scale towards higher-level tasks by utilizing the powerful programming paradigms of “metaprogramming” and reactive multirate systems. The Kronos language features expressive source code as well as a streamlined, efficient runtime. The programming model presented is adaptable for both sample-stream and event processing, offering a cleanly functional programming paradigm for a wide range of musical signal-processing problems, exemplified herein by a selection and discussion of code examples.
8

Lazebna, N. V. "ENGLISH-LANGUAGE SENTENCE PROCESSING: DIGITAL TOOLS AND PSYCHOLINGUISTIC PERSPECTIVE." International Humanitarian University Herald. Philology 1, no. 46 (2020): 204–6. http://dx.doi.org/10.32841/2409-1154.2020.46-1.48.

9

Müller, Marvin, Emanuel Alexandi, and Joachim Metternich. "Digital shop floor management enhanced by natural language processing." Procedia CIRP 96 (2021): 21–26. http://dx.doi.org/10.1016/j.procir.2021.01.046.

10

KIERNAN, K. S. "Digital Image Processing and the Beowulf Manuscript." Literary and Linguistic Computing 6, no. 1 (January 1, 1991): 20–27. http://dx.doi.org/10.1093/llc/6.1.20.

11

Wattamwar, Aniket. "Sign Language Recognition using CNN." International Journal for Research in Applied Science and Engineering Technology 9, no. 9 (September 30, 2021): 826–30. http://dx.doi.org/10.22214/ijraset.2021.38058.

Abstract:
This research work presents a prototype system that helps hearing people recognize hand gestures so that they can communicate more effectively with deaf people. The work focuses on real-time recognition of the gestures of the sign language used by the deaf community. The problem is addressed with digital image processing using CNNs (Convolutional Neural Networks), skin detection and image segmentation techniques. The system recognizes ASL (American Sign Language) gestures, including the alphabet and a subset of its words. Keywords: gesture recognition, digital image processing, CNN (Convolutional Neural Networks), image segmentation, ASL (American Sign Language), alphabet
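A minimal sketch of the kind of CNN classifier the abstract describes, written with Keras; the input size, number of classes and layer sizes are assumptions for illustration, not the authors' architecture, and the skin-detection/segmentation preprocessing is omitted:

```python
from tensorflow.keras import layers, models

NUM_CLASSES = 26   # assumed: one class per ASL alphabet letter

# Small convolutional network over 64x64 grayscale hand-region crops.
model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

# Training would then use segmented gesture images, e.g.:
# model.fit(train_images, train_labels, validation_split=0.1, epochs=10)
```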
12

DeHart, Kenneth, and John Holbrook. "Emergency department applications of digital dictation and natural language processing." Journal of Ambulatory Care Management 15, no. 4 (October 1992): 18–23. http://dx.doi.org/10.1097/00004479-199210000-00005.

13

J., Shruthi, and Suma Swamy. "A prior case study of natural language processing on different domain." International Journal of Electrical and Computer Engineering (IJECE) 10, no. 5 (October 1, 2020): 4928. http://dx.doi.org/10.11591/ijece.v10i5.pp4928-4936.

Abstract:
In the present digital world, computers do not understand ordinary human language. This is a great barrier between humans and digital systems. Hence, researchers developed an advanced technology that provides information to users from digital machines. Natural language processing (NLP) is a branch of AI that has significant implications for the ways computers and humans can interact. NLP has become an essential technology in bridging the communication gap between humans and digital data. Thus, this study presents the necessity of NLP in the current computing world along with different approaches and their applications. It also highlights the key challenges in the development of new NLP models.
14

Babatunde, A. N., A. A. Oke, B. F. Balogun, T. A. AbdulRahman, and R. O. Ogundokun. "A Deep Neural Network-Based Yoruba Intelligent Chatbot System." Advances in Multidisciplinary and scientific Research Journal Publication 10, no. 2 (June 15, 2022): 69–80. http://dx.doi.org/10.22624/aims/digital/v10n2p4.

Abstract:
Two Artificial Intelligence software systems, the bot and the chatbot, have recently debuted on the internet; they initiate communication between the user and a virtual agent. This research presents the modeling and performance of deep learning (DL) computation for an assistant conversational agent (chatbot). The deep neural network (DNN) technique is used to respond to a large number of tokens in an input sentence with more appropriate dialogue. The model was created to perform Yoruba-to-Yoruba translations. The major goal of this project is to improve the model's perplexity and learning rate, as well as to obtain a BLEU score for translation within the same language. The experiments are run with Keras, which is written in Python. The data was trained using a deep learning-based algorithm, and a collection of Yoruba phrases with various intents was produced from the training examples. The results demonstrate that the system can communicate in basic Yoruba terms and can learn simple Yoruba words. When evaluated, the system showed an 80% accuracy rate. Keywords: Chatbot, Natural Language Processing, Deep Learning, Artificial Neural Network, Yoruba Language
15

Sharma, Aryaman. "NATURAL LANGUAGE PROCESSING AND SENTIMENT ANALYSIS." International Research Journal of Computer Science 8, no. 10 (October 30, 2021): 237–42. http://dx.doi.org/10.26562/irjcs.2021.v0810.001.

Abstract:
Natural Language Processing is one of the branches of Artificial Intelligence that has only recently entered the spotlight. Apple Siri, Amazon Alexa, and, more recently, Google Duplex are just a few of the most well-known instances of NLP at work, with the technology delivering outstanding human-machine interactions. By 2023, there are estimated to be eight billion digital voice assistants in use, owing to their popularity. With such large-scale use, the data generated from these interactions is also immense. This untapped goldmine of data can further research and development in Natural Language Processing and can be used in many industries such as healthcare, technology and business. Sentiment analysis, with the help of Natural Language Processing, can help industries process these huge datasets faster and more efficiently; it can be used, for example, in the healthcare industry to diagnose patients and develop diagnostic models for detecting chronic disease in its early stages. Web 2.0 has allowed massive amounts of user data to be generated, which can be tapped to extract valuable information for the various purposes that individuals, policy makers, organizations and governments might need.
16

Funk, Burkhardt, Shiri Sadeh-Sharvit, Ellen E. Fitzsimmons-Craft, Mickey Todd Trockel, Grace E. Monterubio, Neha J. Goel, Katherine N. Balantekin, et al. "A Framework for Applying Natural Language Processing in Digital Health Interventions." Journal of Medical Internet Research 22, no. 2 (February 19, 2020): e13855. http://dx.doi.org/10.2196/13855.

Abstract:
Background: Digital health interventions (DHIs) are poised to reduce target symptoms in a scalable, affordable, and empirically supported way. DHIs that involve coaching or clinical support often collect text data from 2 sources: (1) open correspondence between users and the trained practitioners supporting them through a messaging system and (2) text data recorded during the intervention by users, such as diary entries. Natural language processing (NLP) offers methods for analyzing text, augmenting the understanding of intervention effects, and informing therapeutic decision making. Objective: This study aimed to present a technical framework that supports the automated analysis of both types of text data often present in DHIs. This framework generates text features and helps to build statistical models to predict target variables, including user engagement, symptom change, and therapeutic outcomes. Methods: We first discussed various NLP techniques and demonstrated how they are implemented in the presented framework. We then applied the framework in a case study of the Healthy Body Image Program, a Web-based intervention trial for eating disorders (EDs). A total of 372 participants who screened positive for an ED received a DHI aimed at reducing ED psychopathology (including binge eating and purging behaviors) and improving body image. These users generated 37,228 intervention text snippets and exchanged 4285 user-coach messages, which were analyzed using the proposed model. Results: We applied the framework to predict binge eating behavior, resulting in an area under the curve between 0.57 (when applied to new users) and 0.72 (when applied to new symptom reports of known users). In addition, initial evidence indicated that specific text features predicted the therapeutic outcome of reducing ED symptoms. Conclusions: The case study demonstrates the usefulness of a structured approach to text data analytics. NLP techniques improve the prediction of symptom changes in DHIs. We present a technical framework that can be easily applied in other clinical trials and clinical presentations and encourage other groups to apply the framework in similar contexts.
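As a rough, hypothetical illustration of the kind of pipeline such a framework builds (text features from user messages feeding a statistical model, evaluated with the area under the ROC curve), one could write something like the following with scikit-learn; the data, features and model choice here are placeholders, not the framework described in the paper:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Placeholder text snippets and binary outcome labels (e.g. symptom reported).
snippets = ["felt out of control around food today",
            "went for a walk and felt calm",
            "skipped lunch and binged in the evening",
            "had a normal dinner with friends"] * 25
outcome = [1, 0, 1, 0] * 25

X_train, X_test, y_train, y_test = train_test_split(
    snippets, outcome, test_size=0.3, random_state=0, stratify=outcome)

# TF-IDF text features feeding a logistic regression predictor.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

probs = model.predict_proba(X_test)[:, 1]
print("AUC:", round(roc_auc_score(y_test, probs), 2))
```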
17

Lazebna, Nataliia. "ENGLISH-LANGUAGE BASIS OF PYTHON PROGRAMMING LANGUAGE." Research Bulletin Series Philological Sciences 1, no. 193 (April 2021): 371–76. http://dx.doi.org/10.36550/2522-4077-2021-1-193-371-376.

Abstract:
The dynamic nature of the Python programming language and the accumulation of a certain linguosemiotic basis indicate the similarity of this language to English, which is the international language and mediates human communication in both real and virtual worlds. In this study, English is positioned as the linguistic basis of the Python programming language, which is widely used in industry, research, natural language processing, textual information retrieval, textual data processing, text corpora, and more. The English language, its lexical features, its textual representation and its interaction with the logical and functional basis of the Python programming language are considered further in this research. Thus, the unity of verbal units and symbols in modern English-language digital discourse indicates both the order and the variability of its constituents. The functionality of linguosemiotic elements produces a network of relationships in which each of these integrated elements can turn a word or symbol into a holistic set of units, which are extrapolated into English-language digital discourse and mediate human communication with a machine. An overview of the basic properties of the Python language, such as values, types, expressions and operations, is the focus of the study. Although users understand the responses of the Python interpreter, certain instructions and codes must be followed. To facilitate work with this programming language and its prescribed English-language commands, it is necessary for linguists to cooperate with programmers to devise a certain logical and reasonable principle for how Python commands operate.
18

Chen, Jian Xiang, and Hong Jun Sun. "The Digital Signal Processing Algorithm Implemented on ARM Embedded System." Advanced Materials Research 756-759 (September 2013): 3958–61. http://dx.doi.org/10.4028/www.scientific.net/amr.756-759.3958.

Abstract:
Digital signal processing technology is an important tool for modern signal processing. The rapidly developing ARM embedded processor has powerful computing capability for digital signal processing algorithms and has provided a common hardware platform in recent years. In this paper, assembly language is used to implement the algorithms and an FIR filter in an ARM-Linux embedded environment. The results show that the ARM can quickly and efficiently complete a series of digital signal processing algorithms, so implementing digital signal processing algorithms on an ARM embedded system is an effective approach.
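As a point of reference for the FIR filtering the abstract mentions, here is a minimal sketch of designing and applying an FIR low-pass filter in Python with NumPy/SciPy; the filter length, cutoff and sample rate are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy import signal

fs = 8000.0        # assumed sample rate in Hz
cutoff = 1000.0    # assumed low-pass cutoff in Hz
num_taps = 63      # assumed FIR filter length

# Design a linear-phase FIR low-pass filter with a Hamming window.
taps = signal.firwin(num_taps, cutoff, window="hamming", fs=fs)

# Test signal: a 500 Hz tone (passband) plus a 3 kHz tone (stopband).
t = np.arange(0, 0.1, 1.0 / fs)
x = np.sin(2 * np.pi * 500 * t) + 0.5 * np.sin(2 * np.pi * 3000 * t)

# Apply the filter: y[n] = sum_k taps[k] * x[n-k], the same convolution an
# assembly-language implementation on an ARM core would compute sample by sample.
y = signal.lfilter(taps, [1.0], x)

print("input RMS:", round(float(np.sqrt(np.mean(x**2))), 3),
      "output RMS:", round(float(np.sqrt(np.mean(y**2))), 3))
```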
19

Zhang, Xin. "Research and Design of Digital Signal Processing Virtual Experiment Platform." Applied Mechanics and Materials 401-403 (September 2013): 1415–18. http://dx.doi.org/10.4028/www.scientific.net/amm.401-403.1415.

Abstract:
An intelligent virtual experiment platform can effectively overcome the shortcomings of a traditional laboratory, which is limited by space, time and cost. Based on the relevant characteristics of digital signal processing experiments, this paper presents the design of a digital signal processing virtual experiment platform. The platform uses the B/S (browser/server) architecture model and the Java language, making it platform-independent. The platform allows users to independently build experimental processes, innovate and experiment; it also provides component registration, so users can develop components in the language they are familiar with. Through experiments and a registered instance, the feasibility and operability of the platform are verified, and it shows good practical value.
20

Et. al., Srinivasa Rao Dhanikonda,. "A Survey On Telugu Optical Character Recognition From Digital Images." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 6 (April 11, 2021): 999–1003. http://dx.doi.org/10.17762/turcomat.v12i6.2412.

Abstract:
Images play an essential role in electronic media for sharing information. Nowadays, almost every event is recorded in the form of digital images, but text inside an image file is not in a machine-readable format on the computer. OCR (Optical Character Recognition) for English is well developed. Currently, OCR for Indian languages is needed to preserve historical documents composed mainly in Indian languages, to organise publications in libraries and for application-form processing. OCR for the Telugu language is challenging because consonants and vowels, along with vattus and gunithas, play a vital role in forming words; a mixture of vowels and consonants may form a compound character. This paper surveys the methods used in OCR for the Telugu language to date.
21

Müller, Marvin, and Joachim Metternich. "Assistenzsysteme durch Natural Language Processing - Umsetzungsstrategien für den Shopfloor." Industrie 4.0 Management 2021, no. 6 (December 7, 2021): 11–14. http://dx.doi.org/10.30844/i40m_21-6_s11-14.

Abstract:
Shop floor management (SFM) increasingly draws on digitally captured data. In the best case, the deviations identified within SFM lead to a systematic and sustainable solution of the underlying problems. Particularly valuable here is the employees' knowledge documented as free text during root-cause analysis and the definition of countermeasures. In the transfer project TexPrax, approaches from Natural Language Processing (NLP) are therefore applied to these text data in order to realise assistance functions in SFM. This article presents several proven assistance systems in digital SFM (dSFM) and outlines situation-specific implementation strategies for companies.
22

Solomakha, Anzhelika. "APPLICATION OF DIGITAL TECHNOLOGIES FOR FORMATION OF FOREIGN LANGUAGE GRAMMAR COMPETENCE IN THE PROCESS OF EARLY LEARNING FOREIGN LANGUAGES (IN THE EXAMPLE OF THE GERMAN LANGUAGE)." OPEN EDUCATIONAL E-ENVIRONMENT OF MODERN UNIVERSITY, no. 8 (2020): 121–35. http://dx.doi.org/10.28925/2414-0325.2020.8.11.

Abstract:
The methodology of teaching foreign languages is constantly looking for ways to help primary school students master foreign languages effectively. The article deals with the possibility of using digital and multimedia technologies in forming the foreign language grammar competence of younger students, using German as an example. The analysis of foreign experience proved the relevance of introducing such technologies into the teaching process of educational institutions at all levels, but it also noted the lack of study of methods for using digital and multimedia tools in forming the foreign language competence of primary school students, in particular when learning German grammar. Under current conditions of educational development, it is important to take into account the features of modern students, who are digital natives and for whom the use of digital and multimedia technologies in German lessons is a natural and understandable tool. Future teachers and those already practising early language teaching need to overcome psychological barriers and doubts about the effectiveness of new tools in order to make digital and multimedia technology a daily practice. The article proposes digital and multimedia resources and programs that can be used in forming foreign grammar competence at different stages of grammar processing, while fully complying with the requirements of the Ukrainian programme "Foreign Languages for General and Specialty Educational Institutions, Grades 1-4". A comparative analysis of online resources intended for the study of foreign languages, including German, against the existing foreign language programme for the New Ukrainian School (2018), taking into account the level of foreign language communicative competence expected on graduation from the 4th grade, made it possible to systematize existing online and cartoon resources according to vocabulary stock and vocabulary topics. This will help to apply them effectively in German lessons, increase the motivation of younger students and encourage independent study of a foreign language.
23

Bai, Aruna. "Research on the Application of Computer Digital Technology in the Protection of Security Language." MATEC Web of Conferences 365 (2022): 01023. http://dx.doi.org/10.1051/matecconf/202236501023.

Abstract:
Making full use of computer digital technology to carry out machine translation research will enable conversion between national-language varieties and document information, allow the security language to achieve unrestricted communication, and lay a good foundation for protecting security language and writing information and for building a barrier-free system of national cultural exchange. Based on an analysis of the current situation of the security language and its characters, this paper examines concrete methods of protecting the security language and characters by using digital recording technology, digital image processing technology and other computer digital technologies.
24

Roy, Abhijit, Pamela E. Souza, and Ann Bradlow. "Exploring the need for language-specific hearing aid signal processing." Journal of the Acoustical Society of America 152, no. 4 (October 2022): A197. http://dx.doi.org/10.1121/10.0016012.

Abstract:
Speech phonemes in higher frequencies have varying acoustic characteristics in different languages. For example, /s/ and /ʃ/, fricatives commonly used in English, have spectral peaks at higher frequencies than /ʂ/ and /x/, fricatives commonly used in Mandarin. Data on the relationship between the acoustic characteristics of a listener’s native language and their hearing aid signal prescription requirements could provide important clinical guidance. Non-linear frequency compression is a digital signal processing tool that compresses auditory information above a pre-determined cutoff frequency. In hearing aids, this compression is applied with the intention of making higher-frequency speech sounds more audible to listeners with high-frequency hearing loss. Here, we studied the effect of different frequency compression settings on fricative perception between Mandarin and English listeners. Normal-hearing participants between age 18 to 50 years were presented with fricative identification and fricative discrimination tasks under various frequency compression settings. Participants were grouped between those with exclusive English language background birth to 6 years to those with exclusive Mandarin language background during the same period. Results display different responses for presented phonemes and frequency compression settings, suggesting that language specificity could be considered for hearing aid signal prescription.
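A crude numerical illustration of non-linear frequency compression (remapping spectral energy above a cutoff into a narrower band) can be written with NumPy; the cutoff, compression ratio and test signal below are invented for illustration and do not correspond to any particular hearing-aid prescription:

```python
import numpy as np

fs = 16000                      # assumed sample rate (Hz)
cutoff_hz = 2000.0              # assumed compression cutoff
ratio = 2.0                     # assumed compression ratio above the cutoff

# Test frame: a 1 kHz tone (below cutoff) plus a 6 kHz "fricative-like" tone.
t = np.arange(1024) / fs
frame = np.sin(2 * np.pi * 1000 * t) + 0.5 * np.sin(2 * np.pi * 6000 * t)

spectrum = np.fft.rfft(frame * np.hanning(len(frame)))
freqs = np.fft.rfftfreq(len(frame), 1 / fs)
bin_hz = fs / len(frame)

# Remap each bin above the cutoff to cutoff + (f - cutoff) / ratio.
compressed = np.zeros_like(spectrum)
for i, f in enumerate(freqs):
    new_f = f if f <= cutoff_hz else cutoff_hz + (f - cutoff_hz) / ratio
    compressed[int(round(new_f / bin_hz))] += spectrum[i]

# The 6 kHz component now appears near cutoff + (6000 - 2000)/2 = 4000 Hz.
start = int(cutoff_hz / bin_hz)
peak = start + int(np.argmax(np.abs(compressed[start:])))
print("dominant high-band energy near", round(freqs[peak]), "Hz")
```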
25

Márquez Suárez, Fernanda. "Nuevos códigos para la enseñanza del color. Adoptando el lenguaje del Processing." Economía Creativa, no. 7 (May 1, 2017): 11–31. http://dx.doi.org/10.46840/ec.2017.07.02.

Abstract:
This paper describes an experience of adopting digital code for teaching colour at the university level using Processing. It explains the context, the approach and the preliminary results achieved by three control groups from creative programmes (design, interior architecture, among others). It highlights the problems associated with acquiring this new language, which affect both teachers and students. Among other results of interest, a significant contrast was observed between the general expectations students placed on this resource and the practical applications they identify at a personal level; likewise, differentiated assimilation was observed according to the degree programme each participant was pursuing at the time of the study.
26

Tareva, Elena G. "Foreign Language Teaching Practices: Online Projection." European Journal of Social & Behavioural Sciences 30, no. 2 (April 30, 2021): 3409–20. http://dx.doi.org/10.15405/ejsbs.297.

Abstract:
Nowadays, educational technologies, in particular, digital means and methods of teaching have become the subject of research studied from different angles. Of interest are educational practices that most successfully disseminate a positive learning experience. The purpose of this paper is to consider the phenomenon of ‘best/effective educational practices’ and present a set of those that have successfully proven to be effective in teaching foreign languages in a digital environment, in the context of distance learning. The research methods comprised an analysis of existing approaches to productive educational practices, their classification and spheres of application and a survey of 48 experienced higher education teachers followed by the processing and systematization of the data obtained. The research results are an innovative classification of educational practices recommended for teaching foreign languages in a digital environment followed by recommendations for the development of new teaching technologies that can ensure the quality of development of foreign language communicative competence of students.
27

Dutsova, Ralitsa. "Web-based software system for processing bilingual digital resources." Cognitive Studies | Études cognitives, no. 14 (September 4, 2014): 33–43. http://dx.doi.org/10.11649/cs.2014.004.

Abstract:
The article describes a software management system developed at the Institute of Mathematics and Informatics, BAS, for the creation, storage and processing of digital language resources in Bulgarian. Independent components of the system are intended for the creation and management of bilingual dictionaries, for information retrieval and data mining from a bilingual dictionary, and for the presentation of aligned corpora. A module which connects these components is also being developed. The system, implemented as a web application, contains tools for compilation, editing and search within all components.
28

Zhao, Xingzhi. "Research and application of deep learning in image recognition." Journal of Physics: Conference Series 2425, no. 1 (February 1, 2023): 012047. http://dx.doi.org/10.1088/1742-6596/2425/1/012047.

Abstract:
In recent years, deep learning technology has been one of the most important representatives of progress in the field of artificial intelligence in China. It has made great achievements in many fields, such as language recognition, natural language processing, image processing and video analysis. This paper further discusses the applied research and practice of deep learning technology in modern digital image recognition, where deep learning plays an increasingly important role.
29

Solnyshkina, Marina Ivanovna, Danielle S. McNamara, and Radif Rifkatovich Zamaletdinov. "Natural language processing and discourse complexity studies." Russian Journal of Linguistics 26, no. 2 (June 29, 2022): 317–41. http://dx.doi.org/10.22363/2687-0088-30171.

Abstract:
The study presents an overview of discursive complexology, an integral paradigm of linguistics, cognitive studies and computer linguistics aimed at defining discourse complexity. The article comprises three main parts, which successively outline views on the category of linguistic complexity, history of discursive complexology and modern methods of text complexity assessment. Distinguishing the concepts of linguistic complexity, text and discourse complexity, we recognize an absolute nature of text complexity assessment and relative nature of discourse complexity, determined by linguistic and cognitive abilities of a recipient. Founded in the 19th century, text complexity theory is still focused on defining and validating complexity predictors and criteria for text perception difficulty. We briefly characterize the five previous stages of discursive complexology: formative, classical, period of closed tests, constructive-cognitive and period of natural language processing. We also present the theoretical foundations of Coh-Metrix, an automatic analyzer, based on a five-level cognitive model of perception. Computing not only lexical and syntactic parameters, but also text level parameters, situational models and rhetorical structures, Coh-Metrix provides a high level of accuracy of discourse complexity assessment. We also show the benefits of natural language processing models and a wide range of application areas of text profilers and digital platforms such as LEXILE and ReaderBench. We view parametrization and development of complexity matrix of texts of various genres as the nearest prospect for the development of discursive complexology which may enable a higher accuracy of inter- and intra-linguistic contrastive studies, as well as automating selection and modification of texts for various pragmatic purposes.
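For a flavour of the lexical and syntactic parameters such analyzers compute, here is a tiny, generic Python sketch of surface-level complexity features (average sentence length, average word length, type-token ratio); it is only a toy illustration and not Coh-Metrix, LEXILE or ReaderBench:

```python
import re

def surface_complexity(text: str) -> dict:
    """Compute a few surface-level text complexity indicators."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    return {
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "avg_word_length": sum(len(w) for w in words) / max(len(words), 1),
        "type_token_ratio": len(set(words)) / max(len(words), 1),
    }

sample = ("Text complexity depends on many levels of language. "
          "Longer sentences and rarer words usually make reading harder.")
print(surface_complexity(sample))
```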
30

A., Izay, and Onyejegbu L. N. "Digital Image Processing for Detecting and Classifying Plant Diseases." Circulation in Computer Science 2, no. 11 (December 20, 2017): 1–7. http://dx.doi.org/10.22632/ccs-2017-252-66.

Abstract:
Agriculture is the backbone of human sustenance in this world. With a growing population, increased agricultural productivity is needed to meet demand. Diseases can occur on any part of a plant, but this paper considers only the symptoms in the fruits of a plant, using a segmentation algorithm and edge/sizing detectors. We also looked at image processing using a fuzzy logic controller. The system was designed using an object-oriented analysis and design methodology and implemented using MySQL for the database and the PHP programming language. This system will be of great benefit to farmers and will encourage them to invest their resources, since crop diseases can be detected and eliminated early.
31

Hutchinson, Tim. "Natural language processing and machine learning as practical toolsets for archival processing." Records Management Journal 30, no. 2 (May 16, 2020): 155–74. http://dx.doi.org/10.1108/rmj-09-2019-0055.

Abstract:
Purpose: This study aims to provide an overview of recent efforts relating to natural language processing (NLP) and machine learning applied to archival processing, particularly appraisal and sensitivity reviews, and propose functional requirements and workflow considerations for transitioning from experimental to operational use of these tools. Design/methodology/approach: The paper has four main sections: 1) a short overview of the NLP and machine learning concepts referenced in the paper; 2) a review of the literature reporting on NLP and machine learning applied to archival processes; 3) an overview and commentary on key existing and developing tools that use NLP or machine learning techniques for archives; 4) a discussion, informed by this review and analysis, of functional requirements and workflow considerations for NLP and machine learning tools for archival processing. Findings: Applications for processing e-mail have received the most attention so far, although most initiatives have been experimental or project based. It now seems feasible to branch out to develop more generalized tools for born-digital, unstructured records. Effective NLP and machine learning tools for archival processing should be usable, interoperable, flexible, iterative and configurable. Originality/value: Most implementations of NLP for archives have been experimental or project based. The main exception that has moved into production is ePADD, which includes robust NLP features through its named entity recognition module. This paper takes a broader view, assessing the prospects and possible directions for integrating NLP tools and techniques into archival workflows.
32

Koprawi, Muhammad. "Parallel Computation in Uncompressed Digital Images Using Computer Unified Device Architecture and Open Computing Language." PIKSEL : Penelitian Ilmu Komputer Sistem Embedded and Logic 8, no. 1 (March 20, 2020): 31–38. http://dx.doi.org/10.33558/piksel.v8i1.2017.

Abstract:
In general, a computer program executes instructions serially. These instructions run on the CPU, which is referred to as serial computing. But when computations are run at large scale, the time required by serial computing becomes very long. Therefore, we need another form of computation that can reduce data processing time, such as parallel computing. Parallel computing can be done on GPUs (Graphics Processing Units) with the help of toolkits such as CUDA (Computer Unified Device Architecture) and OpenCL (Open Computing Language). CUDA can only be run on NVIDIA graphics cards, while OpenCL can be run on all types of graphics cards. This research compares the parallel computing performance of CUDA and OpenCL tested on uncompressed digital images of several different sizes. The results of the study are expected to serve as a reference for digital image processing methods.
33

Ramírez Sánchez, Julián, Alejandra Campo-Archbold, Andrés Zapata Rozo, Daniel Díaz-López, Javier Pastor-Galindo, Félix Gómez Mármol, and Julián Aponte Díaz. "Uncovering Cybercrimes in Social Media through Natural Language Processing." Complexity 2021 (December 10, 2021): 1–15. http://dx.doi.org/10.1155/2021/7955637.

Abstract:
Among the myriad of applications of natural language processing (NLP), assisting law enforcement agencies (LEA) in detecting and preventing cybercrimes is one of the most recent and promising ones. The promotion of violence or hate by digital means is considered a cybercrime as it leverages the cyberspace to support illegal activities in the real world. The paper at hand proposes a solution that uses neural network (NN) based NLP to monitor suspicious activities in social networks allowing us to identify and prevent related cybercrimes. An LEA can find similar posts grouped in clusters, then determine their level of polarity, and identify a subset of user accounts that promote violent activities to be reviewed extensively as part of an effort to prevent crimes and specifically hostile social manipulation (HSM). Different experiments were also conducted to prove the feasibility of the proposal.
34

Obiorah ,, Philip, Friday Onuodu, and Batholowmeo Eke. "Topic Modeling Using Latent Dirichlet Allocation & Multinomial Logistic Regression." Advances in Multidisciplinary and scientific Research Journal Publication 10, no. 4 (December 30, 2022): 99–112. http://dx.doi.org/10.22624/aims/digital/v10n4p11a.

Abstract:
Unsupervised categorization of datasets has benefits, but not without a few difficulties. Unsupervised algorithms cluster groups of documents and often output their findings as vectors containing distributions of words grouped according to their probability of occurring together. This technique requires human or domain-expert interpretation to correctly identify clusters of words as belonging to a certain topic. We propose combining Latent Dirichlet Allocation (LDA) with multi-class Logistic Regression for topic modelling, as a multi-step classification process that extracts and classifies topics from unseen texts without relying on human labelling or domain-expert interpretation. The findings suggest that the two procedures were complementary in identifying textual subjects and in overcoming the difficulty of comprehending the array of topics in the LDA output. Keywords: Natural Language Processing; Topic Modeling; Latent Dirichlet Allocation; Logistic Regression
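To make the two-step idea concrete, here is a minimal, hypothetical sketch in Python with scikit-learn: LDA turns documents into topic-distribution vectors, and a multinomial logistic regression is then trained on those vectors. The corpus, labels and parameter values are placeholders, not the authors' data or settings:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression

# Toy corpus and labels, purely illustrative.
docs = [
    "the striker scored a late goal in the match",
    "the court ruled on the new data protection law",
    "the team won the league after a penalty shootout",
    "parliament passed the bill after a long debate",
]
labels = ["sports", "politics", "sports", "politics"]

# Step 1: bag-of-words counts, then LDA topic distributions per document.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)   # shape: (n_docs, n_topics)

# Step 2: multinomial logistic regression over the topic vectors.
clf = LogisticRegression(multi_class="multinomial", max_iter=1000)
clf.fit(doc_topics, labels)

# Classify an unseen text through the same pipeline.
new = vectorizer.transform(["the referee awarded a free kick"])
print(clf.predict(lda.transform(new)))
```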
35

Chen, Keliang, Yunxiao Zu, and Weizheng Ren. "Research and Design of Knowledge System Construction System Based on Natural Language Processing." International Journal of Pattern Recognition and Artificial Intelligence 33, no. 12 (November 2019): 1959038. http://dx.doi.org/10.1142/s0218001419590389.

Abstract:
The digital processing of content resources has overturned the traditional paper-based content processing model and has spread widely. Digital resources organised as text need to be structured and processed with professional knowledge, so that they can be saved as professional digital content resources in a knowledge base and provide basic metadata for an intelligent knowledge service platform. The professional, domain-based knowledge system construction platform explored in this study is designed on the basis of natural language processing. Natural language processing is an important branch of artificial intelligence and the application of artificial intelligence technology to linguistics. The system first extracts the professional thesaurus and domain ontology from the digital resources, then uses a new-word discovery algorithm based on label weights, designed with artificial intelligence technology, to intelligently extract and clean new words for the basic thesaurus. At the same time, a system of relationships between knowledge points and elements is established to realise the association extraction of targeted knowledge points, and finally the output content is enriched from knowledge points into related knowledge systems. To improve the scalability and universality of the system, an extensible architecture for the thesaurus, algorithms, computational capabilities, tags and exception thesaurus was taken into account in the design. At the same time, an implementation combining artificial intelligence with manual assistance was adopted. On the basis of improved system availability, an experimental basis for the optimisation algorithm is provided. The results of this research will bring artificial-intelligence innovation to the publishing industry after digitisation and will transform the content service into an intelligent service based on the knowledge system.
36

Kyathanahalli Nanjappa, Sowmya, Sowmya Prakash, Aiswarya Burle, Nandish Nagabhushan, and Chaitanya Shashi Kumar. "mySmartCart: a smart shopping list for day-to-day supplies." IAES International Journal of Artificial Intelligence (IJ-AI) 12, no. 3 (September 1, 2023): 1484. http://dx.doi.org/10.11591/ijai.v12.i3.pp1484-1490.

Abstract:
Shopping for day-to-day items and keeping track of the shopping list can be a tedious and time-consuming procedure, especially if it has to be done frequently. mySmartCart is a proposed mobile application design that transforms the traditional way of writing a shopping list into a digitalized smart list, implementing voice recognition and handwriting recognition to process the user's natural language input. The system design comprises four modules: i) input, which takes voice and handwritten list image input from the user; ii) processing, in which the input data undergoes natural language processing and is converted into a digital shopping list; iii) classification, in which list items are classified into their respective categories using machine learning algorithms; iv) output, which searches e-commerce applications and adds items to the shopping cart. The proposed design uses natural language to communicate with the user, thus enhancing their shopping experience. Google Cloud speech recognition and Tesseract optical character recognition (OCR) are used for natural language processing in the prototype, along with a Support Vector Machine classifier for categorization.
37

Rasheed, Fahad, Mehmoon Anwar, and Imran Khan. "Detecting Cyberbullying in Roman Urdu Language Using Natural Language Processing Techniques." Pakistan Journal of Engineering and Technology 5, no. 2 (September 19, 2022): 198–203. http://dx.doi.org/10.51846/vol5iss2pp198-203.

Abstract:
Nowadays, social media platforms are the primary source of public communication and information. Social media platforms have become an integral part of our daily lives, and their user base is rapidly expanding as access is extended to more remote locations. Pakistan has around 71.70 million social media users who utilize Roman Urdu to communicate. With these improvements and the increasing number of users, there has been an increase in digital bullying, often known as cyberbullying. This research focuses on social media users who use Roman Urdu (Urdu written in the English alphabet) to communicate. In this research, we explored cyberbullying actions on the Twitter platform, where users employ Roman Urdu as a medium of communication. To our knowledge, this is one of the very few studies that address cyberbullying behavior in Roman Urdu. Our proposed study aims to identify a suitable model for classifying cyberbullying behavior in Roman Urdu. To begin, the dataset was built by extracting data from Twitter using Twitter's API. The targeted data was extracted using keywords based on Roman Urdu. The data was then annotated as bully and not-bully. After that, the dataset was pre-processed to reduce noise, including punctuation, stop-word, null-entry, and duplicate removal. Following that, features were extracted using two different methods, Count-Vectorizer and TF-IDF Vectorizer, and a set of ten different supervised learning algorithms, including SVM, MLP, and KNN, was applied to both types of extracted features. The Support Vector Machine (SVM) performed best among the implemented algorithms for both feature sets, with 97.8 percent on the TF-IDF features and 93.4 percent on the CV features. The proposed mechanism could help online social apps and chat rooms better detect bullying and design bully-word filters, making cyberspace safer for end users.
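As an illustration of the feature-extraction and classification pipeline described above (TF-IDF features fed to an SVM), here is a minimal scikit-learn sketch; the tiny Roman Urdu examples and labels are invented placeholders, not the study's dataset:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Invented placeholder samples labelled bully / not-bully.
texts = [
    "tum bohat bure ho",        # hypothetical bullying example
    "aaj ka din acha tha",      # hypothetical neutral example
    "tum pagal ho",             # hypothetical bullying example
    "shukriya dost",            # hypothetical neutral example
] * 10                           # repeat so the split has enough samples
labels = ["bully", "not-bully", "bully", "not-bully"] * 10

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.25, random_state=42, stratify=labels)

# TF-IDF features (word unigrams) followed by a linear SVM.
model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```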
38

Gamu, Dawit Tadesse, and Michael Melese Woldeyohannis. "Morphology-Based Spell Checker for Dawurootsuwa Language." Scientific Programming 2023 (May 10, 2023): 1–10. http://dx.doi.org/10.1155/2023/7625785.

Abstract:
Processing textual information with word-processing tools is greatly hampered by the presence of misspelled or erroneous words. To minimize these misspelled words in digital information, spellchecker tools are needed. Plenty of work has been done for technologically favored languages such as English and other European languages, but not for an under-resourced language like Dawurootsuwa. The primary idea behind a morphology-based spellchecker is to use a dictionary-lookup approach together with the morphological properties of the language, reducing dictionary size while also handling word inflection, derivation, and compounding. Two distinct tests were carried out in this work to evaluate the performance of the morphology-based spellchecker: error detection and error correction. The Hunspell dictionary format was used to construct the root words in this study, which included a total of 5,000 root words and more than 2,500 morphological rules, along with 3,156 unique words for testing. The experimental results showed an overall spelling error detection performance of 90.4% and an overall spelling error correction performance of 79.31%. Moreover, we are working towards developing a real-word spelling checker that incorporates more language rules.
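The dictionary-lookup idea with simple affix handling and edit-distance-based correction can be sketched as follows; this is a generic, hypothetical Python illustration (the root words and suffixes are invented), not the paper's Hunspell-based tool:

```python
# Hypothetical root lexicon and suffix list; a real system would load
# thousands of roots and morphological rules in Hunspell .dic/.aff format.
ROOTS = {"kitab", "gela", "worq"}
SUFFIXES = ["", "uwa", "tsuwa", "o"]

def is_known(word: str) -> bool:
    """Detection: accept the word if it is a root, or if stripping a suffix yields a root."""
    if word in ROOTS:
        return True
    for suf in SUFFIXES:
        if suf and word.endswith(suf) and word[: -len(suf)] in ROOTS:
            return True
    return False

def edits1(word: str) -> set[str]:
    """All strings one edit away (delete, transpose, replace, insert)."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = {L + R[1:] for L, R in splits if R}
    transposes = {L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1}
    replaces = {L + c + R[1:] for L, R in splits if R for c in letters}
    inserts = {L + c + R for L, R in splits for c in letters}
    return deletes | transposes | replaces | inserts

def suggest(word: str) -> list[str]:
    """Correction: propose known words within one edit of the misspelling."""
    return sorted(c for c in edits1(word) if is_known(c))

print(is_known("kitabuwa"))   # True under these toy rules
print(suggest("kitap"))       # e.g. ['kitab']
```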
39

Mohamed, Hassan, Nur Aisyah Abdul Fataf, and Tengku Mohd Tengku Sembok. "A Framework for Malay Computational Grammar Formalism based-on Enhanced Pola Grammar." JOIV : International Journal on Informatics Visualization 7, no. 2 (May 8, 2023): 363. http://dx.doi.org/10.30630/joiv.7.2.1172.

Abstract:
In the era of IR4.0, Natural Language Processing (NLP) is one of the major focuses because information is increasingly encoded as digitally stored text. Natural language understanding requires a computational grammar for the syntax and semantics of the language in question before this information can be manipulated digitally. Many languages around the world have their own computational grammars for processing syntax and semantics. However, when it comes to the Malay language, the researchers have yet to come across a substantial computational grammar that can process Malay syntax and semantics based on a computational theoretical framework and be applied in systems such as e-commerce. Hence, we propose a formalism framework based on an enhanced Pola Grammar with syntactic and semantic features. The objectives of this proposed framework are to create a linguistic computational formalism for the Malay language based on theoretical linguistics; to implement templates for Malay words that handle syntactic and semantic features in accordance with the enhanced Pola Grammar; and to create a Malay language parser algorithm that can be used in digital applications. To accomplish these objectives, the proposed framework will recursively formalise the computational Malay grammar and lexicon using a combination of solid theoretical linguistic foundations such as Dependency Grammar. A Malay parsing algorithm will be developed for the proposed model until the formalised grammar is deemed reliable. The findings of this indigenous Malay parser will help to advance Malay language applications in the digital economy.
40

Galiotou, Eleni. "Using digital corpora for preserving and processing cultural heritage texts: a case study." Library Review 63, no. 6/7 (August 26, 2014): 408–21. http://dx.doi.org/10.1108/lr-11-2013-0142.

Abstract:
Purpose – The purpose of this paper is to describe the creation and exploitation of a historical corpus in an attempt to contribute to the preservation and availability of cultural heritage documents. Design/methodology/approach – At first, the digitization process and attempts to the availability and awareness of the books and manuscripts in a historical library in Greece are presented. Then, processing and exploitation, taking into account natural language processing techniques of the digitized corpus, are discussed. Findings – In the course of the project, methods that take into account the state of the documents and the particularities of the Greek language were developed. Practical implications – In its present state, the use of the corpus facilitates the work of theologians, historians, philologists, paleographers, etc. and in the same time, prevents the original documents from further damage. Originality/value – The results of this undertaking can give useful insights as for the creation of corpora of cultural heritage documents and as for the methods for the processing and exploitation of the digitized documents which take into account the language in which the documents are written.
41

Pacheco-Guevara, Lizbeth, Ruth Reátegui, and Priscila Valdiviezo-Díaz. "Topic identification from news blog in Spanish language." Informática y Sistemas: Revista de Tecnologías de la Informática y las Comunicaciones 6, no. 1 (May 27, 2022): 22–34. http://dx.doi.org/10.33936/isrtic.v6i1.4514.

Abstract:
A large amount of news currently exists in digital format and needs to be classified or labeled automatically according to its content. LDA is an unsupervised technique that automatically creates topics based on the words in documents. The present work aims to apply LDA to analyze and extract topics from digital news in the Spanish language. A total of 198 digital news items were collected from a university news blog. Data pre-processing and representation in vector spaces were carried out, and k values were selected based on a coherence metric. A TF-IDF matrix and a combination of unigrams and bigrams produce topics with a variety of terms, and topics related to university activities such as study programs, research, and projects for innovation and social responsibility. Furthermore, in the manual validation process, the terms in the topics correspond to hashtags written by the communication professionals.
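Selecting the number of topics k with a coherence metric, as described above, typically looks like the following sketch using Gensim; the tokenised documents are placeholders, and the paper's corpus, pre-processing and exact settings are not reproduced here:

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel, CoherenceModel

# Placeholder tokenised documents; a real run would use the 198 news items.
texts = [
    ["beca", "programa", "estudio", "universidad"],
    ["investigacion", "proyecto", "innovacion", "laboratorio"],
    ["responsabilidad", "social", "proyecto", "comunidad"],
    ["programa", "estudio", "carrera", "admision"],
] * 5

dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(doc) for doc in texts]

# Train LDA for several candidate k values and keep the most coherent one.
best_k, best_score = None, float("-inf")
for k in range(2, 6):
    lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=k,
                   random_state=0, passes=10)
    score = CoherenceModel(model=lda, texts=texts, dictionary=dictionary,
                           coherence="c_v").get_coherence()
    if score > best_score:
        best_k, best_score = k, score

print("selected k:", best_k, "coherence:", round(best_score, 3))
```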
42

Spirintsev, Vyacheslav, Dmitry Popov, and Olga Spirintseva. "VIRTUAL DIGITAL ASSISTANT WITH VOICE INTERFACE SUPPORT." System technologies 2, no. 133 (March 1, 2021): 42–51. http://dx.doi.org/10.34185/1562-9945-2-133-2021-06.

Abstract:
A virtual digital assistant with voice interface support is proposed that can work with arbitrary systems and provide an effective solution to narrowly focused user tasks for interaction with Ukrainian services. The developed web service was implemented using the PHP programming language, the Wit.ai service for audio signal processing, the FANN library for neural network construction, and the Telegram service for creating the interface.
43

Mialkovska, Liudmyla, Liudmyla Zhvania, Mariia Rozhylо, Oksana Terebus, Maksym Yablonskyy, and Volodymyr Hrysiuk. "Digital Tools in Teaching the Mass Media Language." World Journal of English Language 13, no. 4 (April 20, 2023): 43. http://dx.doi.org/10.5430/wjel.v13n4p43.

Abstract:
The functioning of language in modern media is a complex set of different types of discourses. It involves using mental and cultural codes, concepts and archetypes, taking into account the specifics of Internet content and methods of its promotion, along with traditional newspaper journalism, knowledge of the basics of cognitive, communicative and information-theoretical theories and methods, etc. The purpose of the academic paper is to clarify the features and modern tendencies of teaching the mass media language with the help of digital tools, as well as to establish particular practical aspects of using such educational means in the process of teaching the mass media language. In the course of the research, the analytical-bibliographic method was used to study the scientific literature on teaching the mass media language with the help of digital tools. Along with this, induction, deduction, analysis, synthesis of information, system-structural, comparative, logical-linguistic methods, abstraction, and idealization were applied for studying and processing data. At the same time, the questionnaire survey was conducted in online mode by the research authors to practically clarify certain aspects of using digital educational tools in teaching the mass media language. Based on the research results, the primary and most significant theoretical aspects of the process of teaching the mass media language using digital educational tools were highlighted. Moreover, the standpoints of education seekers and teachers of higher educational institutions regarding the key aspects of this issue were clarified.
APA, Harvard, Vancouver, ISO, and other styles
44

Azzat, Media, Karwan Jacksi, and Ismael Ali. "The Kurdish Language Corpus: State of the Art." Science Journal of University of Zakho 11, no. 1 (February 20, 2023): 125–31. http://dx.doi.org/10.25271/sjuoz.2023.11.1.1123.

Full text
Abstract:
The notable growth of digital communities and of different online news streams has led to the growing availability of online natural language content. However, not all natural languages have received enough attention to be made readable and comprehensible to machines. The Kurdish language is among these under-resourced and under-studied languages. Creating machine-readable text is the first step toward text mining and semantic web applications such as translation, information retrieval and recommendation systems. Given the challenges facing the Kurdish language, such as the scarcity of linguistic resources and the absence of unified orthography rules, it lacks language processing tools. To overcome these challenges and enable intelligent applications, well-organized and annotated Kurdish text corpora are needed. This review paper surveys the available textual corpora for the Kurdish language and its dialects; the identified challenges are then discussed, open problems are listed, and future directions are suggested.
APA, Harvard, Vancouver, ISO, and other styles
45

Ebraheem, Sundus Khaleel. "Perform Measuring by Using Image Processing." International Journal of Informatics and Communication Technology (IJ-ICT) 5, no. 1 (April 1, 2016): 36. http://dx.doi.org/10.11591/ijict.v5i1.pp36-44.

Full text
Abstract:
Advances in computing have improved the capabilities of digital image processing. Traditional measuring work requires considerable effort on site in different fields and has its own difficulties and shortcomings; therefore, this paper introduces an improved method for solving the measuring problem using digital image processing. The paper improves a system for measuring building materials by calculating the volume, as well as the area and dimensions, of any position pointed to by the user. The improvement also includes correcting the position of the pointed point when the image is zoomed. The system was implemented in the MATLAB R2012b language. It was applied to different images with different dimensions, as well as to video, and showed accurate results in calculating the specific dimensions, areas and volumes defined by the user in the image.
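The system itself was built in MATLAB; as a language-agnostic illustration of the core idea, converting pixel distances between user-selected points into real-world units via a calibration factor with a correction for display zoom, here is a small Python sketch whose function names, calibration value and zoom handling are assumptions for demonstration.

```python
# Illustrative Python sketch (the paper's system is in MATLAB) of measuring a
# real-world distance between two user-selected pixel coordinates.
# The calibration factor (pixels per metre) is a hypothetical input obtained,
# for example, from an object of known size in the same image.
import math

def pixel_distance(p1, p2):
    """Euclidean distance between two (x, y) pixel coordinates."""
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1])

def real_distance(p1, p2, pixels_per_metre, zoom_factor=1.0):
    """Convert a pixel distance to metres, undoing any display zoom first."""
    return pixel_distance(p1, p2) / zoom_factor / pixels_per_metre

# Example: two points clicked on a wall, calibration of 120 px per metre,
# image displayed at 2x zoom -> 1.5 metres.
print(real_distance((100, 250), (460, 250), pixels_per_metre=120, zoom_factor=2.0))
```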
APA, Harvard, Vancouver, ISO, and other styles
46

Moreno, Fábio Carlos, Cinthyan Sachs C. de Barbosa, and Edio Roberto Manfio. "Hash Tables for a Digital Lexicon." Revista de Informática Teórica e Aplicada 28, no. 2 (August 29, 2021): 25–38. http://dx.doi.org/10.22456/2175-2745.107128.

Full text
Abstract:
This paper deals with the construction of digital lexicons within the scope of Natural Language Processing. Data structures called hash tables have been shown to produce good results for natural language interfaces to databases, with data dispersion, response speed and programming simplicity as their main features. The desired information is stored by associating it with a key through a hashing function, which is responsible for distributing the information across the table. The objective of this paper is to present a tool called Visual TaHs that applies a sparse table to a real lexicon (the Lexicon of Herbs), improving the performance results of several implemented hash functions. The structure has achieved satisfactory results in terms of speed and storage when compared to conventional databases and can work on various platforms, such as desktop, Web and mobile.
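To make the underlying technique concrete, and without reproducing the Visual TaHs tool itself, the following Python sketch shows a chained hash table for lexicon entries with a simple polynomial string hash; the class, method names and sample entry are illustrative assumptions only.

```python
# Minimal sketch of a chained hash table for lexicon lookup, illustrating the
# technique described in the abstract; it is not the Visual TaHs implementation.
class LexiconHashTable:
    def __init__(self, size=1024):
        self.size = size
        self.buckets = [[] for _ in range(size)]

    def _hash(self, key: str) -> int:
        # Simple polynomial rolling hash to disperse entries across the table.
        h = 0
        for ch in key:
            h = (h * 31 + ord(ch)) % self.size
        return h

    def put(self, lemma: str, entry: dict) -> None:
        bucket = self.buckets[self._hash(lemma)]
        for i, (key, _) in enumerate(bucket):
            if key == lemma:              # update an existing entry
                bucket[i] = (lemma, entry)
                return
        bucket.append((lemma, entry))     # or append a new one

    def get(self, lemma: str):
        for key, value in self.buckets[self._hash(lemma)]:
            if key == lemma:
                return value
        return None

lexicon = LexiconHashTable()
lexicon.put("camomila", {"pos": "noun", "gloss": "chamomile"})
print(lexicon.get("camomila"))
```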
APA, Harvard, Vancouver, ISO, and other styles
47

Roth, Camille. "Digital, digitized, and numerical humanities." Digital Scholarship in the Humanities 34, no. 3 (November 5, 2018): 616–32. http://dx.doi.org/10.1093/llc/fqy057.

Full text
Abstract:
The term ‘digital humanities’ may be understood in three different ways: as ‘digitized humanities’, by dealing essentially with the constitution, management, and processing of digitized archives; as ‘numerical humanities’, by putting the emphasis on mathematical abstraction and the development of numerical and formal models; and as ‘humanities of the digital’, by focusing on the study of computer-mediated interactions and online communities. Discussing their methods and actors, we show how these three potential acceptations cover markedly distinct epistemological endeavors and, eventually, non-overlapping scientific communities.
APA, Harvard, Vancouver, ISO, and other styles
48

Vikas, G., M. V. S. Koushik, M. Nithya, and Ch Sudha. "Mobile Message Classification Using Natural Language Processing and Machine Learning Algorithms." International Journal for Research in Applied Science and Engineering Technology 11, no. 6 (June 30, 2023): 3578–82. http://dx.doi.org/10.22214/ijraset.2023.54341.

Full text
Abstract:
SPAM (Stupid Pointless Annoying Malware) is any unwanted, unsolicited digital communication sent out in bulk. Although email is the most common method of spreading spam, it can also be communicated via social media, text messages, and phone calls. Unfortunately, spam messages irritate everyone with a mobile device, whether we like it or not. This project classifies spam messages. Understanding different spam text classification techniques, such as feature extraction, text preprocessing and NLTK stop words, is vital. The project mainly focuses on a spam classification approach using machine learning algorithms such as Random Forest, KNN, Naïve Bayes, Support Vector Machine and decision tree, together with the NLP feature-extraction techniques Count Vectorization and TF-IDF.
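As a minimal sketch of the general approach named in this abstract (TF-IDF features feeding a Naive Bayes classifier via scikit-learn), and not the authors' actual pipeline or dataset, the two-message training set below is only a placeholder.

```python
# Minimal sketch of spam/ham classification with TF-IDF features and a
# Naive Bayes classifier using scikit-learn; the messages are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = ["WIN a FREE prize, reply now!", "Are we still meeting for lunch today?"]
labels = ["spam", "ham"]

model = make_pipeline(
    TfidfVectorizer(stop_words="english", lowercase=True),
    MultinomialNB(),
)
model.fit(messages, labels)
print(model.predict(["Claim your free prize today"]))
```

The same pipeline object could be swapped to Random Forest, KNN, SVM or a decision tree classifier, or to a plain count vectorizer, to compare the algorithms listed in the abstract.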
APA, Harvard, Vancouver, ISO, and other styles
49

Kochanov, Andrey, Vyacheslav Zolotukhin, Vladislav Mironenko, Anastasia Savelyeva, and Anastasia Polyakova. "Digital processing of satellite images using neural network algorithms." Journal of Physics: Conference Series 2373, no. 6 (December 1, 2022): 062026. http://dx.doi.org/10.1088/1742-6596/2373/6/062026.

Full text
Abstract:
The article discusses a system whose basis can be divided into two parts: hardware and software. The hardware part is a single-board computer, and the software part makes use of neural network algorithms. The system is able to extract the necessary information, specified by parameters, from a photo and/or video frame when receiving images from the satellite. This capability is relevant because large enterprises currently tend to want at least a few small spacecraft of their own, and in this regard they face a number of problems associated with the limited mass and size of payload devices. Because of these limitations, satellite frames are usually processed on Earth rather than on the spacecraft itself. Image processing on the spacecraft makes it possible to send only individual fragments of data instead of clogging the transmission channel, which saves traffic and time, and it allows data to be processed on the spacecraft itself around the clock. This becomes possible thanks to the Python programming language and a trained convolutional neural network, connected through the free TensorFlow library. The purpose of this work is to describe the features of solving the problem of object classification (pattern recognition).
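As a hedged illustration of the kind of on-board classifier the article refers to (the authors' actual architecture, input size and class set are not given here, and the values below are placeholders), a small convolutional network in TensorFlow/Keras might look like this:

```python
# Illustrative sketch of a small convolutional classifier of the kind the
# article describes, built with TensorFlow/Keras; layer sizes and the number
# of classes are placeholders, not the authors' architecture.
import tensorflow as tf

num_classes = 4  # hypothetical number of object classes

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 3)),          # satellite image patch
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```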
APA, Harvard, Vancouver, ISO, and other styles
50

Hvelplund, Kristian Tangsgaard. "Digital resources in the translation process – attention, cognitive effort and processing flow." Perspectives 27, no. 4 (March 7, 2019): 510–24. http://dx.doi.org/10.1080/0907676x.2019.1575883.

Full text
APA, Harvard, Vancouver, ISO, and other styles