Academic literature on the topic 'Digital Language Processing'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Digital Language Processing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Digital Language Processing"

1

Gonzalez-Dios, Itziar, and Begoña Altuna. "Natural Language Processing and Language Technologies for the Basque Language." Cuadernos Europeos de Deusto, no. 04 (July 22, 2022): 203–30. http://dx.doi.org/10.18543/ced.2477.

Full text
Abstract:
The presence of a language in the digital domain is crucial for its survival, as online communication and digital language resources have become the standard in the last decades and will gain more importance in the coming years. In order to develop advanced systems that are considered basic for efficient digital communication (e.g. machine translation systems, text-to-speech and speech-to-text converters and digital assistants), it is necessary to digitalise linguistic resources and create tools. In the case of Basque, scholars have studied the creation of digital linguistic resources and the tools that allow the development of those systems for the last forty years. In this paper, we present an overview of the natural language processing and language technology resources developed for Basque, their impact on the process of making Basque a “digital language” and the applications and challenges in multilingual communication. More precisely, we present the well-known products for Basque, the basic tools and the resources that are behind the products we use every day. Likewise, we would like this survey to serve as a guide for other minority languages that are making their way to digitalisation. Received: 5 April 2022. Accepted: 20 May 2022.
APA, Harvard, Vancouver, ISO, and other styles
2

Bachate, Ravindra Parshuram, and Ashok Sharma. "Acquaintance with Natural Language Processing for Building Smart Society." E3S Web of Conferences 170 (2020): 02006. http://dx.doi.org/10.1051/e3sconf/202017002006.

Full text
Abstract:
Natural Language Processing (NLP) deals with spoken languages by using computers and Artificial Intelligence. As people from different regional areas use different digital platforms and express their views in their spoken language, it is now essential to focus on supporting India's spoken languages to make our society smart and digital. NLP research has grown tremendously in the last decade, resulting in Siri, Google Assistant, Alexa, Cortana, and many more automatic speech recognition and understanding (ASR) systems. Natural Language Processing can be understood by classifying it into Natural Language Generation and Natural Language Understanding. NLP is widely used in various domains such as health care, chatbots, ASR building, HR, sentiment analysis, etc.
3

Embree, Paul M., Bruce Kimble, and James F. Bartram. "C Language Algorithms for Digital Signal Processing." Journal of the Acoustical Society of America 90, no. 1 (July 1991): 618. http://dx.doi.org/10.1121/1.401205.

Full text
4

Dolmans, Jeroen H. "C language algorithms for digital signal processing." Control Engineering Practice 4, no. 10 (October 1996): 1484–85. http://dx.doi.org/10.1016/0967-0661(96)85106-9.

Full text
5

Németh, Renáta, and Júlia Koltai. "Natural language processing." Intersections 9, no. 1 (April 26, 2023): 5–22. http://dx.doi.org/10.17356/ieejsp.v9i1.871.

Full text
Abstract:
Natural language processing (NLP) methods are designed to automatically process and analyze large amounts of textual data. The integration of this new-generation toolbox into sociology faces many challenges. NLP was institutionalized outside of sociology, while the expertise of sociology has been based on its own methods of research. Another challenge is epistemological: it is related to the validity of digital data and the different viewpoints associated with predictive and causal approaches. In our paper, we discuss the challenges and opportunities of the use of NLP in sociology, offer some potential solutions to the concerns and provide meaningful and diverse examples of its sociological application, most of which are related to research on Eastern European societies. The focus will be on the use of NLP in quantitative text analysis. Solutions are provided concerning how sociological knowledge can be incorporated into the new methods and how the new analytical tools can be evaluated against the principles of traditional quantitative methodology.
6

Allah, Fadoua Ataa, and Siham Boulaknadel. "NEW TRENDS IN LESS-RESOURCED LANGUAGE PROCESSING: CASE OF AMAZIGH LANGUAGE." International Journal on Natural Language Computing 12, no. 2 (April 29, 2023): 75–89. http://dx.doi.org/10.5121/ijnlc.2023.12207.

Full text
Abstract:
The coronavirus (COVID-19) pandemic has dramatically changed lifestyles in much of the world. It forced people to profoundly review their relationships and interactions with digital technologies. Nevertheless, people prefer using these technologies in their favorite languages. Unfortunately, most languages are considered low- or less-resourced, and they do not have the potential to keep up with the new needs. Therefore, this study explores how such languages, mainly Amazigh, will behave in a wholly digital environment, and what to expect from new trends. Contrary to past decades, the research gap for low- and less-resourced languages is continually shrinking. Nonetheless, the literature review unveils the need for innovative research to revise their informatization roadmap, while rethinking, in a valuable way, people’s behaviors in this increasingly changing environment. In this work, we first introduce the technology access challenges and explain how natural language processing contributes to overcoming them. Then, we give an overview of existing studies and research related to under- and less-resourced languages’ informatization, with an emphasis on the Amazigh language. Finally, based on these studies and the agile revolution, a new roadmap is presented.
7

Norilo, Vesa. "Kronos: A Declarative Metaprogramming Language for Digital Signal Processing." Computer Music Journal 39, no. 4 (December 2015): 30–48. http://dx.doi.org/10.1162/comj_a_00330.

Full text
Abstract:
Kronos is a signal-processing programming language based on the principles of semifunctional reactive systems. It is aimed at efficient signal processing at the elementary level, and built to scale towards higher-level tasks by utilizing the powerful programming paradigms of “metaprogramming” and reactive multirate systems. The Kronos language features expressive source code as well as a streamlined, efficient runtime. The programming model presented is adaptable for both sample-stream and event processing, offering a cleanly functional programming paradigm for a wide range of musical signal-processing problems, exemplified herein by a selection and discussion of code examples.
8

Lazebna, N. V. "ENGLISH-LANGUAGE SENTENCE PROCESSING: DIGITAL TOOLS AND PSYCHOLINGUISTIC PERSPECTIVE." International Humanitarian University Herald. Philology 1, no. 46 (2020): 204–6. http://dx.doi.org/10.32841/2409-1154.2020.46-1.48.

Full text
9

Müller, Marvin, Emanuel Alexandi, and Joachim Metternich. "Digital shop floor management enhanced by natural language processing." Procedia CIRP 96 (2021): 21–26. http://dx.doi.org/10.1016/j.procir.2021.01.046.

Full text
10

KIERNAN, K. S. "Digital Image Processing and the Beowulf Manuscript." Literary and Linguistic Computing 6, no. 1 (January 1, 1991): 20–27. http://dx.doi.org/10.1093/llc/6.1.20.

Full text

Dissertations / Theses on the topic "Digital Language Processing"

1

Kakavandy, Hanna, and John Landeholt. "How natural language processing can be used to improve digital language learning." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-281693.

Full text
Abstract:
The world is facing globalization and, with that, companies are growing and need to hire accordingly. A great obstacle to this is the language barrier between job applicants and employers who want to hire competent candidates. One spark of light in this challenge is Lingio, which provides a product that teaches profession-specific digital Swedish. Lingio intends to make its existing product more interactive, and this research paper examines aspects involved in that. This study evaluates system utterances that are planned to be used in Lingio’s product for language learners to practice with, and studies the feasibility of using cosine similarity, a natural language processing measure, to classify the correctness of answers to these utterances. This report also looks at whether it is best to use crowd-sourced material or a golden standard as the benchmark for a correct answer. The results indicate that a number of improvements and developments need to be made to the model in order for it to accurately classify answers, owing to its formulation and the complexity of human language. It is also concluded that the utterances by Lingio might need to be further developed in order to be efficient for language learning, and that crowd-sourced material works better than a golden standard. The study makes several interesting observations from the collected data and analysis, aiming to contribute to further research in natural language engineering when it comes to text classification and digital language learning.
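The cosine-similarity classification the thesis evaluates can be illustrated with a minimal bag-of-words sketch. This is an assumption-laden toy (plain token counts, invented example sentences), not Lingio's actual pipeline:

```python
from collections import Counter
from math import sqrt

def cosine_similarity(a: str, b: str) -> float:
    """Bag-of-words cosine similarity between two short texts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va.keys() & vb.keys())
    norm = sqrt(sum(c * c for c in va.values())) * sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

# A learner's answer scored against a reference utterance.
reference = "i would like to book an appointment"
answer = "i want to book an appointment"
score = cosine_similarity(reference, answer)  # ≈ 0.77
```

A threshold on the score would then decide correctness; as the thesis notes, such surface overlap misses paraphrases, which is one source of the model weaknesses it reports.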
2

Katzir, Yoel. "PC software for the teaching of digital signal processing." Thesis, Monterey, California. Naval Postgraduate School, 1988. http://hdl.handle.net/10945/23346.

Full text
Abstract:
Approved for public release; distribution is unlimited
The Electrical and Computer Engineering Department at the Naval Postgraduate School has a need for additional software to be used in instructing students studying digital signal processing. This software will be used in a PC lab or at home. This thesis provides a set of disks written in APL (A Programming Language) which allows the user to input arbitrary signals from a disk, to perform various signal processing operations, to plot the results, and to save them without the need for complicated programming. The software is in the form of a digital signal processing toolkit. The user can select functions which can operate on the signals and interactively apply them in any order. The user can also easily develop new functions and include them in the toolkit. The thesis includes brief discussions about the library workspaces, a user manual, function listings with examples of their use, and an application paper. The software is modular and can be expanded by adding additional sets of functions.
http://archive.org/details/pcsoftwarefortea00katz
Major, Israeli Air Force
3

Ou, Shiyan, Christopher S. G. Khoo, and Dion H. Goh. "Automatic multi-document summarization for digital libraries." School of Communication & Information, Nanyang Technological University, 2006. http://hdl.handle.net/10150/106042.

Full text
Abstract:
With the rapid growth of the World Wide Web and online information services, more and more information is available and accessible online. Automatic summarization is an indispensable solution to reduce the information overload problem. Multi-document summarization is useful to provide an overview of a topic and allow users to zoom in for more details on aspects of interest. This paper reports three types of multi-document summaries generated for a set of research abstracts, using different summarization approaches: a sentence-based summary generated by a MEAD summarization system that extracts important sentences using various features, another sentence-based summary generated by extracting research objective sentences, and a variable-based summary focusing on research concepts and relationships. A user evaluation was carried out to compare the three types of summaries. The evaluation results indicated that the majority of users (70%) preferred the variable-based summary, while 55% of the users preferred the research objective summary, and only 25% preferred the MEAD summary.
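The sentence-based summaries described above rely on scoring and extracting sentences. A toy centroid-style extractor, loosely in the spirit of MEAD's feature-based sentence ranking (the function and example sentences here are illustrative assumptions, not the authors' system), might look like:

```python
from collections import Counter

def extract_summary(sentences: list[str], k: int = 2) -> list[str]:
    # Centroid score: how much a sentence's vocabulary overlaps the
    # word distribution of the whole document set.
    centroid = Counter(w for s in sentences for w in s.lower().split())
    def score(s: str) -> int:
        return sum(centroid[w] for w in set(s.lower().split()))
    top = set(sorted(sentences, key=score, reverse=True)[:k])
    return [s for s in sentences if s in top]  # keep original order

abstracts = [
    "Automatic summarization reduces the information overload problem.",
    "Multi-document summarization provides an overview of a research topic.",
    "A user evaluation compared three types of generated summaries.",
]
summary = extract_summary(abstracts, k=2)
```

The variable-based summary the paper prefers goes further than this, extracting research concepts and relationships rather than whole sentences.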
4

Ruiz, Fabo Pablo. "Concept-based and relation-based corpus navigation : applications of natural language processing in digital humanities." Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLEE053/document.

Full text
Abstract:
Social sciences and humanities research is often based on large textual corpora that would be unfeasible to read in detail. Natural Language Processing (NLP) can identify important concepts and actors mentioned in a corpus, as well as the relations between them. Such information can provide an overview of the corpus useful for domain experts, and help identify corpus areas relevant to a given research question. To automatically annotate corpora relevant for Digital Humanities (DH), the NLP technologies we applied are, first, Entity Linking, to identify corpus actors and concepts; second, the relations between actors and concepts were determined based on an NLP pipeline which provides semantic role labeling and syntactic dependencies, among other information. Part I outlines the state of the art, paying attention to how the technologies have been applied in DH. Generic NLP tools were used. As the efficacy of NLP methods depends on the corpus, some technological development was undertaken, described in Part II, in order to better adapt to the corpora in our case studies. Part II also shows an intrinsic evaluation of the technology developed, with satisfactory results. The technologies were applied to three very different corpora, as described in Part III. First, the manuscripts of Jeremy Bentham, an 18th-19th century corpus in political philosophy. Second, the PoliInformatics corpus, with heterogeneous materials about the American financial crisis of 2007-2008. Finally, the Earth Negotiations Bulletin (ENB), which covers international climate summits since 1995, where treaties like the Kyoto Protocol or the Paris Agreement were negotiated. For each corpus, navigation interfaces were developed. These user interfaces (UIs) combine networks, full-text search and structured search based on NLP annotations.
As an example, in the ENB corpus interface, which covers climate policy negotiations, searches can be performed based on relational information identified in the corpus: the negotiation actors having discussed a given issue using verbs indicating support or opposition can be searched, as well as all statements where a given actor has expressed support or opposition. Relational information is employed, beyond simple co-occurrence between corpus terms. The UIs were evaluated qualitatively with domain experts, to assess their potential usefulness for research in the experts' domains. First, we paid attention to whether the corpus representations we created correspond to experts' knowledge of the corpus, as an indication of the sanity of the outputs we produced. Second, we tried to determine whether experts could gain new insight into the corpus by using the applications, e.g. whether they found evidence unknown to them or new research ideas. Examples of insight gain were attested with the ENB interface; this constitutes a good validation of the work carried out in the thesis. Overall, the applications' strengths and weaknesses were pointed out, outlining possible improvements as future work.
5

Adam, Jameel. "Video annotation wiki for South African sign language." Thesis, University of the Western Cape, 2011. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_1540_1304499135.

Full text
Abstract:

The SASL project at the University of the Western Cape aims at developing a fully automated translation system between English and South African Sign Language (SASL). Three important aspects of this system require SASL documentation and knowledge. These are: recognition of SASL from a video sequence, linguistic translation between SASL and English and the rendering of SASL. Unfortunately, SASL documentation is a scarce resource and no official or complete documentation exists. This research focuses on creating an online collaborative video annotation knowledge management system for SASL where various members of the community can upload SASL videos to and annotate them in any of the sign language notation systems, SignWriting, HamNoSys and/or Stokoe. As such, knowledge about SASL structure is pooled into a central and freely accessible knowledge base that can be used as required. The usability and performance of the system were evaluated. The usability of the system was graded by users on a rating scale from one to five for a specific set of tasks. The system was found to have an overall usability of 3.1, slightly better than average. The performance evaluation included load and stress tests which measured the system response time for a number of users for a specific set of tasks. It was found that the system is stable and can scale up to cater for an increasing user base by improving the underlying hardware.

6

Kan'an, Tarek Ghaze. "Arabic News Text Classification and Summarization: A Case of the Electronic Library Institute SeerQ (ELISQ)." Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/74272.

Full text
Abstract:
Arabic news articles in heterogeneous electronic collections are difficult for users to work with. Two problems are: that they are not categorized in a way that would aid browsing, and that there are no summaries or detailed metadata records that could be easier to work with than full articles. To address the first problem, schema mapping techniques were adapted to construct a simple taxonomy for Arabic news stories that is compatible with the subject codes of the International Press Telecommunications Council. So that each article would be labeled with the proper taxonomy category, automatic classification methods were researched, to identify the most appropriate. Experiments showed that the best features to use in classification resulted from a new tailored stemming approach (i.e., a new Arabic light stemmer called P-Stemmer). When coupled with binary classification using SVM, the newly developed approach proved to be superior to state-of-the-art techniques. To address the second problem, i.e., summarization, preliminary work was done with English corpora. This was in the context of a new Problem Based Learning (PBL) course wherein students produced template summaries of big text collections. The techniques used in the course were extended to work with Arabic news. Due to the lack of high quality tools for Named Entity Recognition (NER) and topic identification for Arabic, two new tools were constructed: RenA for Arabic NER, and ALDA, an Arabic topic extraction tool (using Latent Dirichlet Allocation). Controlled experiments with each of RenA and ALDA, involving Arabic speakers and a randomly selected corpus of 1000 Qatari news articles, showed the tools produced very good results (i.e., names, organizations, locations, and topics).
Then the categorization, NER, topic identification, and additional information extraction techniques were combined to produce approximately 120,000 summaries for Qatari news articles, which are searchable, along with the articles, using LucidWorks Fusion, which builds upon Solr software. Evaluation of the summaries showed high ratings based on the 1000-article test corpus. Contributions of this research with Arabic news articles thus include a new: test corpus, taxonomy, light stemmer, classification approach, NER tool, topic identification tool, and template-based summarizer – all shown through experimentation to be highly effective.
Ph. D.
7

Matsubara, Shigeki, Tomohiro Ohno, and Masashi Ito. "Text-Style Conversion of Speech Transcript into Web Document for Lecture Archive." Fuji Technology Press, 2009. http://hdl.handle.net/2237/15083.

Full text
8

Segers, Vaughn Mackman. "The efficacy of the Eigenvector approach to South African sign language identification." Thesis, University of the Western Cape, 2010. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_2697_1298280657.

Full text
Abstract:

The communication barriers between deaf and hearing society mean that interaction between these communities is kept to a minimum. The South African Sign Language research group, Integration of Signed and Verbal Communication: South African Sign Language Recognition and Animation (SASL), at the University of the Western Cape aims to create technologies to bridge the communication gap. In this thesis we address the subject of whole hand gesture recognition. We demonstrate a method to identify South African Sign Language classifiers using an eigenvector approach. The classifiers researched within this thesis are based on those outlined by the Thibologa Sign Language Institute for SASL. Gesture recognition is achieved in real-time. Utilising a pre-processing method for image registration we are able to increase the recognition rates for the eigenvector approach.

9

Zahidin, Ahmad Zamri. "Using Ada tasks (concurrent processing) to simulate a business system." Virtual Press, 1988. http://liblink.bsu.edu/uhtbin/catkey/539634.

Full text
Abstract:
Concurrent processing has always been a traditional problem in developing operating systems. Today, concurrent algorithms occur in many application areas such as science and engineering, artificial intelligence, business system databases, and many more. The presence of concurrent processing facilities allows the natural expression of these algorithms as concurrent programs. This is a very distinct advantage if the underlying computer offers parallelism. On the other hand, the lack of concurrent processing facilities forces these algorithms to be written as sequential programs, thus destroying the structure of the algorithms and making them hard to understand and analyze. The first major programming language that offers high-level concurrent processing facilities is Ada. Ada is a complex, general-purpose programming language that provides an excellent concurrent programming facility, called the task, that is based on the rendezvous concept. In this study, concurrent processing is practiced by simulating a business system using the Ada language and its facilities. A warehouse (the business system), consisting of a number of employees, purchases microwave ovens from various vendors and distributes them to several retailers. Simulation of activities in the system is carried out by assigning each employee to a specific task, and all tasks run simultaneously. The programs written for this business system produce transactions and financial statements of a typical business day. They also examine the behavior of activities that occur simultaneously. The end results show that concurrency and Ada work efficiently and effectively.
Department of Computer Science
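Ada's rendezvous lets one task wait for another at an entry point. A rough analogue of the vendor/retailer simulation, using Python threads and a synchronized queue (names and quantities invented for illustration; this is not the thesis's Ada code), could look like:

```python
import threading
import queue

orders: queue.Queue = queue.Queue()
received: list[str] = []

def vendor(n: int) -> None:
    # Producer task: deliver n microwave ovens to the warehouse.
    for i in range(n):
        orders.put(f"oven-{i}")
    orders.put(None)  # sentinel: no more deliveries

def retailer() -> None:
    # Consumer task: accept ovens until the vendor signals completion.
    while (item := orders.get()) is not None:
        received.append(item)

tasks = [threading.Thread(target=vendor, args=(3,)),
         threading.Thread(target=retailer)]
for t in tasks:
    t.start()
for t in tasks:
    t.join()
print(received)  # → ['oven-0', 'oven-1', 'oven-2']
```

Where Ada's rendezvous blocks the caller at an `accept` statement, the queue here decouples the two tasks; the blocking `get` plays the role of waiting for the rendezvous.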
10

Vladimir, Ostojić. "Integrisana multiveličinska obrada radiografskih snimaka." Phd thesis, Univerzitet u Novom Sadu, Fakultet tehničkih nauka u Novom Sadu, 2018. https://www.cris.uns.ac.rs/record.jsf?recordId=107425&source=NDLTD&language=en.

Full text
Abstract:
The thesis focuses on digital radiography image processing. Multi-scale processing is proposed, which unifies detail visibility enhancement, local contrast enhancement and global contrast reduction, thus enabling additional amplification of local structures. In other words, the proposed multi-scale image processing integrates all steps of anatomical structures visibility enhancement. For the purpose of the proposed anatomical structures visibility enhancement analysis, a processing framework was developed. The framework consists of several stages, used to process the image from its raw form (signal obtained from the radiation detector), to the state where it will be presented to the medical diagnostician. Each stage is analyzed and for each an original solution or an improvement of an existing approach was proposed. Evaluation has shown that integrated processing provides results which surpass state-of-the-art processing methods, and that the entire processing pipeline can be controlled using just two parameters. In order to complete the comprehensive analysis of radiography image processing, processing artifacts removal and radiography image processing acceleration are analyzed in the thesis. Both issues are addressed through original solutions whose efficiency is experimentally confirmed.
APA, Harvard, Vancouver, ISO, and other styles
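The multi-scale enhancement the abstract above describes can be illustrated with a minimal sketch: decompose the image into band-pass detail layers, amplify them, attenuate the coarse residual that carries the global contrast, and recombine. Everything below (function names, parameter values, the Gaussian-pyramid-style decomposition) is a hypothetical illustration, not the thesis's actual pipeline.

```python
import numpy as np

def gaussian_blur(img, sigma):
    # Separable Gaussian blur via 1-D convolutions (illustrative, not optimized).
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)
    return out

def multiscale_enhance(img, detail_gain=1.5, global_gain=0.8, levels=3):
    """Split the image into band-pass detail layers, amplify the details,
    attenuate the coarse (global-contrast) residual, and recombine."""
    residual = img.astype(float)
    layers = []
    for i in range(levels):
        blurred = gaussian_blur(residual, sigma=2 ** i)
        layers.append(residual - blurred)   # band-pass detail layer
        residual = blurred                  # coarser approximation
    out = global_gain * residual            # reduced global contrast
    for layer in layers:
        out += detail_gain * layer          # amplified local structures
    return out
```

With `detail_gain = 1` and `global_gain = 1` the decomposition telescopes and the input is reconstructed exactly; raising `detail_gain` above 1 while lowering `global_gain` below 1 reproduces the trade-off the abstract describes: stronger local structures with reduced global contrast, controlled by just two parameters.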

Books on the topic "Digital Language Processing"

1

Lyon, Douglas A. Java digital signal processing. New York, N.Y: M&T Books, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Embree, Paul M., and Bruce Kimble. C language algorithms for digital signal processing. Englewood Cliffs, NJ: PTR Prentice Hall, 1991.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Lockhart, Gordon B. BASIC digital signal processing. London: Butterworths, 1989.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Bekavac, Božo, Kristina Kocijan, Max Silberztein, and Krešimir Šojat, eds. Formalising Natural Languages: Applications to Natural Language Processing and Digital Humanities. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-70629-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Bigey, Magali, Annabel Richeton, Max Silberztein, and Izabella Thomas, eds. Formalizing Natural Languages: Applications to Natural Language Processing and Digital Humanities. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-92861-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

González, Mariana, Silvia Susana Reyes, Andrea Rodrigo, and Max Silberztein, eds. Formalizing Natural Languages: Applications to Natural Language Processing and Digital Humanities. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-23317-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Embree, Paul M. C++ algorithms for digital signal processing. Upper Saddle River, NJ: Prentice Hall, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Capturing time & motion: The dynamic language of digital photography. New York, N.Y: Lark Books, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Rehm, Georg, and Hans Uszkoreit, eds. The Hungarian Language in the Digital Age. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Rehm, Georg, and Hans Uszkoreit, eds. The Greek Language in the Digital Age. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Digital Language Processing"

1

Ferilli, Stefano. "Natural Language Processing." In Automatic Digital Document Processing and Management, 199–222. London: Springer London, 2011. http://dx.doi.org/10.1007/978-0-85729-198-1_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Chanod, Jean-Pierre. "Natural Language Processing and Digital Libraries." In Information Extraction, 17–31. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/3-540-48089-7_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Wang, J. K., S. K. Wang, E. B. Lee, and R. T. Chang. "Natural Language Processing (NLP) in AI." In Digital Eye Care and Teleophthalmology, 243–49. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-24052-2_17.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Piotrowski, Michael. "NLP and Digital Humanities." In Natural Language Processing for Historical Texts, 5–10. Cham: Springer International Publishing, 2012. http://dx.doi.org/10.1007/978-3-031-02146-6_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Boudabous, Mohamed Mahdi, Mohamed Hédi Maaloul, and Lamia Hadrich Belguith. "Digital Learning for Summarizing Arabic Documents." In Advances in Natural Language Processing, 79–84. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-14770-8_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Ami Podrebarac, Anamarija. "Introduction to Natural Language Processing." In Fragmentation of the Photographic Image in the Digital Age, 204–11. Routledge History of Photography. New York, NY: Routledge, 2019. http://dx.doi.org/10.4324/9781351027946-15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Loukachevitch, Natalia, and Boris Dobrov. "RuThes Thesaurus for Natural Language Processing." In The Palgrave Handbook of Digital Russia Studies, 319–34. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-42855-6_18.

Full text
Abstract:
This chapter describes the Russian RuThes thesaurus, created as a linguistic and terminological resource for automatic document processing. Its structure combines two popular paradigms for computer thesauri: concept-based units, a small set of relation types, and rules for including multiword expressions, as in information retrieval thesauri; and language-motivated units, detailed sets of synonyms, and descriptions of ambiguous words, as in WordNet-like thesauri. The development of the RuThes thesaurus has been supported for many years: new concepts, new senses, and multiword expressions found in contemporary texts are introduced regularly. The chapter shows some examples of representing newly appeared concepts related to important domestic and international events.
APA, Harvard, Vancouver, ISO, and other styles
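The concept-based structure the RuThes abstract describes — concepts, a small fixed relation set, and text entries that include multiword expressions — can be sketched as a minimal data structure. All names below are hypothetical illustrations, not the RuThes format or API.

```python
from dataclasses import dataclass, field

@dataclass
class Concept:
    name: str                                          # language-independent concept label
    text_entries: list = field(default_factory=list)   # synonyms, incl. multiword expressions
    relations: dict = field(default_factory=dict)      # small fixed set of relation types

thesaurus = {}

def add_concept(name, synonyms):
    # Register a concept together with its set of text entries.
    thesaurus[name] = Concept(name, list(synonyms))
    return thesaurus[name]

def relate(child, parent, rel="hyponym-of"):
    # Link two concepts with one of the small set of relation types.
    thesaurus[child].relations.setdefault(rel, []).append(parent)

add_concept("FINANCIAL INSTITUTION", ["bank", "credit institution"])
add_concept("CENTRAL BANK", ["central bank", "bank of issue"])
relate("CENTRAL BANK", "FINANCIAL INSTITUTION")
```

Newly appeared concepts and multiword expressions from contemporary texts would be added through the same `add_concept`/`relate` calls, mirroring the regular update process the abstract mentions.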
8

Gottfried, Björn, and Lothar Meyer-Lerbs. "Towards the Processing of Historic Documents." In Advanced Language Technologies for Digital Libraries, 15–28. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-23160-5_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Abascal, Rocío, Béatrice Rumpler, and Jean-Marie Pinon. "Information Retrieval in Digital Theses Based on Natural Language Processing Tools." In Advances in Natural Language Processing, 172–82. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-30228-5_16.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

LaFlair, Geoff, Kevin Yancey, Burr Settles, and Alina A. von Davier. "Computational Psychometrics for Digital-First Assessments." In Advancing Natural Language Processing in Educational Assessment, 107–23. New York: Routledge, 2023. http://dx.doi.org/10.4324/9781003278658-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Digital Language Processing"

1

Kyprianou, Ross, Peter Schachte, and Bill Moran. "Dauphin: A Signal Processing Language - Statistical Signal Processing Made Easy." In 2015 International Conference on Digital Image Computing: Techniques and Applications (DICTA). IEEE, 2015. http://dx.doi.org/10.1109/dicta.2015.7371250.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Liu, Shun, Hongtao Xie, Jian Yin, and Yajun Chen. "Uyghur language text detection in images." In Eighth International Conference on Digital Image Processing (ICDIP 2016), edited by Charles M. Falco and Xudong Jiang. SPIE, 2016. http://dx.doi.org/10.1117/12.2244133.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Axelsson, Emil, Koen Claessen, Gergely Devai, Zoltan Horvath, Karin Keijzer, Bo Lyckegard, Anders Persson, Mary Sheeran, Josef Svenningsson, and Andras Vajda. "Feldspar: A domain specific language for digital signal processing algorithms." In 2010 8th IEEE/ACM International Conference on Formal Methods and Models for Codesign (MEMOCODE 2010). IEEE, 2010. http://dx.doi.org/10.1109/memcod.2010.5558637.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Gallardo-Antolin, Ascensión, Fernando Diaz-de-Maria, and Francisco J. Valverde-Albacete. "Recognition from GSM digital speech." In 5th International Conference on Spoken Language Processing (ICSLP 1998). ISCA, 1998. http://dx.doi.org/10.21437/icslp.1998-324.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Hamey, Leonard G. C. "Efficient Image Processing with the Apply Language." In 9th Biennial Conference of the Australian Pattern Recognition Society on Digital Image Computing Techniques and Applications (DICTA 2007). IEEE, 2007. http://dx.doi.org/10.1109/dicta.2007.4426843.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Bhagat, Neel Kamal, Y. Vishnusai, and G. N. Rathna. "Indian Sign Language Gesture Recognition using Image Processing and Deep Learning." In 2019 Digital Image Computing: Techniques and Applications (DICTA). IEEE, 2019. http://dx.doi.org/10.1109/dicta47822.2019.8945850.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Jin, Cong, Dong Xu, and Zhiguo Qu. "Applications of digital fingerprinting and digital watermarking for E-commerce security mechanism." In 2008 International Conference on Audio, Language and Image Processing (ICALIP). IEEE, 2008. http://dx.doi.org/10.1109/icalip.2008.4590183.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Zhao, Hui, and Yanjie Li. "Analysis on the language features of digital sculptures." In International Conference on Image Processing and Intelligent Control (IPIC 2021), edited by Feng Wu and Fengjie Cen. SPIE, 2021. http://dx.doi.org/10.1117/12.2611392.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

di Buono, M. P., M. Monteleone, P. Ronzino, V. Vassallo, and S. Hermon. "Decision making support systems for the Archaeological domain: A Natural Language Processing proposal." In 2013 Digital Heritage International Congress (DigitalHeritage). IEEE, 2013. http://dx.doi.org/10.1109/digitalheritage.2013.6744789.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Hubackova, Sarka. "Processing of multimedia applications and their use in foreign language teaching." In 2016 Eleventh International Conference on Digital Information Management (ICDIM). IEEE, 2016. http://dx.doi.org/10.1109/icdim.2016.7829793.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Digital Language Processing"

1

Alonso-Robisco, Andres, and Jose Manuel Carbo. Analysis of CBDC Narrative of Central Banks using Large Language Models. Madrid: Banco de España, August 2023. http://dx.doi.org/10.53479/33412.

Full text
Abstract:
Central banks are increasingly using verbal communication for policymaking, focusing not only on traditional monetary policy but also on a broad set of topics. One such topic is central bank digital currency (CBDC), which is attracting attention from the international community. The complex nature of this project means that it must be carefully designed to avoid unintended consequences, such as financial instability. We propose the use of different natural language processing (NLP) techniques to better understand central banks' stance towards CBDC, analyzing a set of central bank discourses from 2016 to 2022. We do this using traditional techniques, such as dictionary-based methods, and two large language models (LLMs), namely BERT and ChatGPT, concluding that LLMs better reflect the stance identified by human experts. In particular, we observe that ChatGPT exhibits a higher degree of alignment because it can capture subtler information than BERT. Our study suggests that LLMs are an effective tool to improve sentiment measurements for policy-specific texts, though they are not infallible and may be subject to new risks, such as higher sensitivity to text length and to prompt engineering.
APA, Harvard, Vancouver, ISO, and other styles
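The dictionary-based baseline that the abstract above compares against LLMs can be sketched in a few lines: count hits from positive and negative word lists and normalize the difference. The word lists here are illustrative stand-ins, not the authors' actual lexicon.

```python
# Illustrative stance lexicons (hypothetical, not the paper's dictionaries).
POSITIVE = {"benefit", "efficient", "innovation", "opportunity", "resilient"}
NEGATIVE = {"risk", "instability", "concern", "threat", "disintermediation"}

def stance_score(text):
    """Return (positive hits - negative hits) / total hits, in [-1, 1];
    0.0 when no lexicon word is matched."""
    tokens = [w.strip(".,;:!?").lower() for w in text.split()]
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    matched = pos + neg
    return 0.0 if matched == 0 else (pos - neg) / matched
```

The abstract's point is that such lexicon counts miss the subtler cues that an LLM can pick up; this sketch only shows what the weaker baseline looks like.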
2

Volkova, Nataliia P., Nina O. Rizun, and Maryna V. Nehrey. Data science: opportunities to transform education. [n.p.], September 2019. http://dx.doi.org/10.31812/123456789/3241.

Full text
Abstract:
The article concerns the implementation of data science tools, including text mining and natural language processing algorithms, to increase the value of higher education for the development of a modern and technologically flexible society. Data science is the field of study that uses tools, algorithms, and knowledge of mathematics and statistics to discover knowledge in raw data. Data science is developing fast and penetrating all spheres of life, and more people understand its importance and the need for its implementation in everyday life. Data science is used in business for analytics and production; in sales for offerings and sales forecasting; in marketing for customer profiling, purchase recommendations and digital marketing; in banking and insurance for risk assessment, fraud detection and scoring; in medicine for disease forecasting, process automation and patient health monitoring; and in tourism for price analysis, flight safety, opinion mining, etc. However, data science applications in education have been relatively limited, and many opportunities for advancing the field remain unexplored.
APA, Harvard, Vancouver, ISO, and other styles
3

Furey, John, Austin Davis, and Jennifer Seiter-Moser. Natural language indexing for pedoinformatics. Engineer Research and Development Center (U.S.), September 2021. http://dx.doi.org/10.21079/11681/41960.

Full text
Abstract:
The multiple schemas for the classification of soils rely on differing criteria, but the major soil science systems, including the United States Department of Agriculture (USDA) system and the internationally harmonized World Reference Base for Soil Resources classification system, are primarily based on inferred pedogenesis. Largely, these classifications are compiled from individual observations of soil characteristics within soil profiles, and the vast majority of this pedologic information is contained in non-quantitative text descriptions. We present initial text mining analyses of parsed text in the digitally available USDA soil taxonomy documentation and the Soil Survey Geographic database. Previous research has shown that latent information structure can be extracted from scientific literature using natural language processing techniques, and we show that this latent information can be used to expedite query performance by using syntactic elements and part-of-speech tags as indices. Technical vocabulary often poses a text mining challenge due to the rarity of its diction in the broader context. We introduce an extension to the common English vocabulary that allows for nearly complete indexing of USDA Soil Series Descriptions.
APA, Harvard, Vancouver, ISO, and other styles
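The idea of using part-of-speech tags as index keys, as the abstract above proposes for soil-survey text, can be sketched with a toy inverted index keyed on (token, tag) pairs. The suffix-based tagger below is a crude stand-in for a real POS tagger, and the example documents are invented.

```python
def crude_tag(word):
    # Toy suffix heuristic standing in for a real part-of-speech tagger.
    if word.endswith("ly"):
        return "ADV"
    if word.endswith(("ic", "ous", "ive", "able")):
        return "ADJ"
    return "NOUN"

def build_index(docs):
    """Map (token, tag) pairs to the ids of documents containing them."""
    index = {}
    for doc_id, text in docs.items():
        for word in text.lower().split():
            index.setdefault((word, crude_tag(word)), set()).add(doc_id)
    return index

# Hypothetical soil-description snippets, not Soil Survey Geographic data.
docs = {1: "calcareous loamy soil", 2: "sandy soil profile"}
idx = build_index(docs)
```

Keying on the tag as well as the token lets a query distinguish, say, adjectival from nominal uses of a technical term, which is the kind of syntactic filtering the abstract suggests can expedite queries.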