
Dissertations / Theses on the topic 'Digital Language Processing'

Consult the top 50 dissertations / theses for your research on the topic 'Digital Language Processing.'

1

Kakavandy, Hanna, and John Landeholt. "How natural language processing can be used to improve digital language learning." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-281693.

Full text
Abstract:
The world is facing globalization and, with that, companies are growing and need to hire according to their needs. A great obstacle is the language barrier between job applicants and employers who want to hire competent candidates. One bright spot in this challenge is Lingio, which provides a product that teaches digital, profession-specific Swedish. Lingio intends to make its existing product more interactive, and this paper investigates aspects involved in that. The study evaluates system utterances that are planned to be used in Lingio's product for language learners to practise with, and studies the feasibility of using cosine similarity as a natural language model for classifying the correctness of answers to these utterances. The report also examines whether it is best to use crowd-sourced material or a gold standard as the benchmark for a correct answer. The results indicate that a number of improvements and developments need to be made to the model before it can accurately classify answers, owing to its formulation and the complexity of human language. It is also concluded that the utterances by Lingio might need to be further developed to be effective for language learning, and that crowd-sourced material works better than a gold standard. The study makes several interesting observations from the collected data and analysis, aiming to contribute to further research in natural language engineering for text classification and digital language learning.
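A minimal sketch of the answer-checking idea evaluated in this thesis: represent the learner's answer and the reference answers as bags of words and accept the answer if its cosine similarity to any reference clears a threshold. The tokenisation, threshold, and acceptance rule are illustrative assumptions, not Lingio's or the authors' actual model.

```python
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two strings."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def is_correct(answer: str, references: list, threshold: float = 0.5) -> bool:
    """Accept an answer if it is close enough to any reference answer.
    The references could be a gold standard or crowd-sourced material."""
    return max(cosine_similarity(answer, r) for r in references) >= threshold
```

For example, `is_correct("jag arbetar på ett sjukhus", ["jag arbetar på sjukhus"])` accepts the answer, since the two token sets overlap almost completely.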
APA, Harvard, Vancouver, ISO, and other styles
2

Katzir, Yoel. "PC software for the teaching of digital signal processing." Thesis, Monterey, California. Naval Postgraduate School, 1988. http://hdl.handle.net/10945/23346.

Full text
Abstract:
Approved for public release; distribution is unlimited
The Electrical and Computer Engineering Department at the Naval Postgraduate School has a need for additional software to be used in instructing students studying digital signal processing. This software will be used in a PC lab or at home. This thesis provides a set of disks written in APL (A Programming Language) which allows the user to input arbitrary signals from a disk, to perform various signal processing operations, to plot the results, and to save them without the need for complicated programming. The software is in the form of a digital signal processing toolkit. The user can select functions which can operate on the signals and interactively apply them in any order. The user can also easily develop new functions and include them in the toolkit. The thesis includes brief discussions about the library workspaces, a user manual, function listings with examples of their use, and an application paper. The software is modular and can be expanded by adding additional sets of functions.
http://archive.org/details/pcsoftwarefortea00katz
Major, Israeli Air Force
3

Ou, Shiyan, Christopher S. G. Khoo, and Dion H. Goh. "Automatic multi-document summarization for digital libraries." School of Communication & Information, Nanyang Technological University, 2006. http://hdl.handle.net/10150/106042.

Full text
Abstract:
With the rapid growth of the World Wide Web and online information services, more and more information is available and accessible online. Automatic summarization is an indispensable solution to reduce the information overload problem. Multi-document summarization is useful to provide an overview of a topic and allow users to zoom in for more details on aspects of interest. This paper reports three types of multi-document summaries generated for a set of research abstracts, using different summarization approaches: a sentence-based summary generated by a MEAD summarization system that extracts important sentences using various features, another sentence-based summary generated by extracting research objective sentences, and a variable-based summary focusing on research concepts and relationships. A user evaluation was carried out to compare the three types of summaries. The evaluation results indicated that the majority of users (70%) preferred the variable-based summary, while 55% of the users preferred the research objective summary, and only 25% preferred the MEAD summary.
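A rough sketch of feature-based sentence extraction in the spirit of the MEAD-style summariser described above: score each sentence by features and keep the top scorers. The two features used here (centroid-word overlap and sentence position) and their weights are illustrative assumptions, not MEAD's actual feature set.

```python
from collections import Counter

def summarize(sentences, k=2, w_centroid=1.0, w_position=0.5):
    """Score sentences by centroid overlap and position, keep the top k.
    A sketch of feature-based extraction; features and weights are illustrative."""
    # Centroid: the most frequent words across the whole document set.
    counts = Counter(w for s in sentences for w in s.lower().split())
    centroid = {w for w, _ in counts.most_common(10)}
    scored = []
    for i, s in enumerate(sentences):
        words = set(s.lower().split())
        centroid_score = len(words & centroid) / len(centroid)
        position_score = 1.0 / (i + 1)  # earlier sentences score higher
        scored.append((w_centroid * centroid_score + w_position * position_score, i))
    # Keep the k best sentences, restored to document order.
    keep = sorted(i for _, i in sorted(scored, reverse=True)[:k])
    return [sentences[i] for i in keep]
```

The variable-based and research-objective summaries compared in the paper would replace this scoring step with concept extraction or sentence-type filtering, respectively.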
4

Ruiz, Fabo Pablo. "Concept-based and relation-based corpus navigation : applications of natural language processing in digital humanities." Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLEE053/document.

Full text
Abstract:
Social sciences and Humanities research is often based on large textual corpora that would be unfeasible to read in detail. Natural Language Processing (NLP) can identify important concepts and actors mentioned in a corpus, as well as the relations between them. Such information can provide an overview of the corpus useful for domain experts, and help identify corpus areas relevant for a given research question. To automatically annotate corpora relevant for Digital Humanities (DH), the NLP technologies we applied are, first, Entity Linking, to identify corpus actors and concepts. Second, the relations between actors and concepts were determined based on an NLP pipeline which provides semantic role labeling and syntactic dependencies, among other information. Part I outlines the state of the art, paying attention to how the technologies have been applied in DH. Generic NLP tools were used. As the efficacy of NLP methods depends on the corpus, some technological development was undertaken, described in Part II, in order to better adapt to the corpora in our case studies. Part II also shows an intrinsic evaluation of the technology developed, with satisfactory results. The technologies were applied to three very different corpora, as described in Part III. First, the manuscripts of Jeremy Bentham, an 18th-19th century corpus in political philosophy. Second, the PoliInformatics corpus, with heterogeneous materials about the American financial crisis of 2007-2008. Finally, the Earth Negotiations Bulletin (ENB), which covers international climate summits since 1995, where treaties like the Kyoto Protocol or the Paris Agreement were negotiated. For each corpus, navigation interfaces were developed. These user interfaces (UI) combine networks, full-text search and structured search based on NLP annotations.
As an example, in the ENB corpus interface, which covers climate policy negotiations, searches can be performed based on relational information identified in the corpus: the negotiation actors having discussed a given issue using verbs indicating support or opposition can be searched, as well as all statements where a given actor has expressed support or opposition. Relation information is employed, beyond simple co-occurrence between corpus terms. The UIs were evaluated qualitatively with domain experts, to assess their potential usefulness for research in the experts' domains. First, we paid attention to whether the corpus representations we created correspond to experts' knowledge of the corpus, as an indication of the sanity of the outputs we produced. Second, we tried to determine whether experts could gain new insight on the corpus by using the applications, e.g. whether they found evidence unknown to them or new research ideas. Examples of insight gain were attested with the ENB interface; this constitutes a good validation of the work carried out in the thesis. Overall, the applications' strengths and weaknesses were pointed out, outlining possible improvements as future work.
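The contrast between relational search and plain co-occurrence can be illustrated with a toy sketch: once NLP has produced (actor, stance, issue) triples, queries can target who supported or opposed what, rather than just which terms appear together. The triples and actor names below are invented examples, not outputs of the thesis pipeline.

```python
# Hypothetical NLP-extracted relation triples: (actor, stance, issue).
# Structured search over such triples goes beyond term co-occurrence:
# it distinguishes WHO supported or opposed WHAT.
triples = [
    ("Country A", "support", "emission targets"),
    ("Country B", "oppose", "emission targets"),
    ("Country A", "support", "adaptation fund"),
]

def search(actor=None, stance=None, issue=None):
    """Return triples matching every field that is specified."""
    return [t for t in triples
            if (actor is None or t[0] == actor)
            and (stance is None or t[1] == stance)
            and (issue is None or t[2] == issue)]
```

A co-occurrence index could only report that "Country B" and "emission targets" appear together; the relational query `search(stance="oppose")` additionally recovers the polarity of the statement.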
5

Adam, Jameel. "Video annotation wiki for South African sign language." Thesis, University of the Western Cape, 2011. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_1540_1304499135.

Full text
Abstract:

The SASL project at the University of the Western Cape aims at developing a fully automated translation system between English and South African Sign Language (SASL). Three important aspects of this system require SASL documentation and knowledge: recognition of SASL from a video sequence, linguistic translation between SASL and English, and the rendering of SASL. Unfortunately, SASL documentation is a scarce resource and no official or complete documentation exists. This research focuses on creating an online collaborative video annotation knowledge management system for SASL, to which members of the community can upload SASL videos and annotate them in any of the sign language notation systems SignWriting, HamNoSys and/or Stokoe. As such, knowledge about SASL structure is pooled into a central and freely accessible knowledge base that can be used as required. The usability and performance of the system were evaluated. The usability of the system was graded by users on a rating scale from one to five for a specific set of tasks; the system was found to have an overall usability of 3.1, slightly better than average. The performance evaluation included load and stress tests which measured the system response time for a number of users performing a specific set of tasks. It was found that the system is stable and can scale to cater for a growing user base by improving the underlying hardware.

6

Kan'an, Tarek Ghaze. "Arabic News Text Classification and Summarization: A Case of the Electronic Library Institute SeerQ (ELISQ)." Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/74272.

Full text
Abstract:
Arabic news articles in heterogeneous electronic collections are difficult for users to work with. Two problems are that they are not categorized in a way that would aid browsing, and that there are no summaries or detailed metadata records that would be easier to work with than full articles. To address the first problem, schema mapping techniques were adapted to construct a simple taxonomy for Arabic news stories that is compatible with the subject codes of the International Press Telecommunications Council. So that each article would be labeled with the proper taxonomy category, automatic classification methods were investigated to identify the most appropriate one. Experiments showed that the best features to use in classification resulted from a new tailored stemming approach (a new Arabic light stemmer called P-Stemmer). When coupled with binary classification using SVM, the newly developed approach proved superior to state-of-the-art techniques. To address the second problem, summarization, preliminary work was done with English corpora in the context of a new Problem Based Learning (PBL) course in which students produced template summaries of big text collections. The techniques used in the course were extended to work with Arabic news. Due to the lack of high-quality tools for Named Entity Recognition (NER) and topic identification in Arabic, two new tools were constructed: RenA, for Arabic NER, and ALDA, an Arabic topic extraction tool using Latent Dirichlet Allocation. Controlled experiments with each of RenA and ALDA, involving Arabic speakers and a randomly selected corpus of 1000 Qatari news articles, showed that the tools produced very good results (i.e., names, organizations, locations, and topics).
Then the categorization, NER, topic identification, and additional information extraction techniques were combined to produce approximately 120,000 summaries for Qatari news articles, which are searchable, along with the articles, using LucidWorks Fusion, which builds upon Solr software. Evaluation of the summaries showed high ratings based on the 1000-article test corpus. Contributions of this research with Arabic news articles thus include a new: test corpus, taxonomy, light stemmer, classification approach, NER tool, topic identification tool, and template-based summarizer – all shown through experimentation to be highly effective.
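A hedged sketch of what a light stemmer does: strip a small set of common Arabic prefixes and suffixes while refusing to reduce a word below a minimum stem length. The affix lists below are simplified illustrations of the technique; they are not the actual P-Stemmer rules.

```python
# Illustrative light stemmer: strips a few common Arabic prefixes and suffixes.
# The affix lists are simplified assumptions, not the actual P-Stemmer rules.
PREFIXES = ["وال", "بال", "كال", "فال", "ال", "و"]   # e.g. conjunction + article
SUFFIXES = ["ات", "ون", "ين", "ها", "ة"]             # e.g. plural/feminine endings

def light_stem(word: str) -> str:
    """Strip at most one prefix and one suffix, keeping stems >= 3 letters."""
    for p in PREFIXES:
        if word.startswith(p) and len(word) - len(p) >= 3:
            word = word[len(p):]
            break
    for s in SUFFIXES:
        if word.endswith(s) and len(word) - len(s) >= 3:
            word = word[:-len(s)]
            break
    return word
```

Stemmed tokens like these would then serve as features for the SVM classifier mentioned in the abstract.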
Ph. D.
7

Matsubara, Shigeki, Tomohiro Ohno, and Masashi Ito. "Text-Style Conversion of Speech Transcript into Web Document for Lecture Archive." Fuji Technology Press, 2009. http://hdl.handle.net/2237/15083.

Full text
8

Segers, Vaughn Mackman. "The efficacy of the Eigenvector approach to South African sign language identification." Thesis, University of the Western Cape, 2010. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_2697_1298280657.

Full text
Abstract:

The communication barriers between deaf and hearing society mean that interaction between these communities is kept to a minimum. The South African Sign Language research group, Integration of Signed and Verbal Communication: South African Sign Language Recognition and Animation (SASL), at the University of the Western Cape aims to create technologies to bridge the communication gap. In this thesis we address the subject of whole hand gesture recognition. We demonstrate a method to identify South African Sign Language classifiers using an eigenvector approach. The classifiers researched within this thesis are based on those outlined by the Thibologa Sign Language Institute for SASL. Gesture recognition is achieved in real time. Utilising a pre-processing method for image registration, we are able to increase the recognition rates for the eigenvector approach.
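The eigenvector approach can be sketched in the eigenface style: project images onto the top principal components of the training set and classify by nearest neighbour in that subspace. This is a generic illustration with toy data and NumPy, not the thesis's exact pipeline.

```python
import numpy as np

def train_eigen(images, labels, k=2):
    """images: (n, d) array of flattened gesture images."""
    X = np.asarray(images, dtype=float)
    mean = X.mean(axis=0)
    # Principal components via SVD of the mean-centred data.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = vt[:k]                          # top-k eigenvectors
    projections = (X - mean) @ basis.T      # training images in eigenspace
    return mean, basis, projections, list(labels)

def classify(image, model):
    """Nearest neighbour in the eigenvector subspace."""
    mean, basis, projections, labels = model
    p = (np.asarray(image, dtype=float) - mean) @ basis.T
    return labels[int(np.argmin(np.linalg.norm(projections - p, axis=1)))]
```

Image registration, as in the thesis, would be applied before projection so that the principal components capture hand shape rather than position.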

9

Zahidin, Ahmad Zamri. "Using Ada tasks (concurrent processing) to simulate a business system." Virtual Press, 1988. http://liblink.bsu.edu/uhtbin/catkey/539634.

Full text
Abstract:
Concurrent processing has always been a traditional problem in developing operating systems. Today, concurrent algorithms occur in many application areas such as science and engineering, artificial intelligence, business database systems, and many more. The presence of concurrent processing facilities allows the natural expression of these algorithms as concurrent programs, a distinct advantage if the underlying computer offers parallelism. On the other hand, the lack of concurrent processing facilities forces these algorithms to be written as sequential programs, destroying the structure of the algorithms and making them hard to understand and analyze. The first major programming language to offer high-level concurrent processing facilities is Ada. Ada is a complex, general-purpose programming language that provides an excellent concurrent programming facility, the task, which is based on the rendezvous concept. In this study, concurrent processing is practiced by simulating a business system using the Ada language and its facilities. A warehouse (the business system) with a number of employees purchases microwave ovens from various vendors and distributes them to several retailers. Activities in the system are simulated by assigning each employee a specific task, with all tasks running simultaneously. The programs written for this business system produce the transactions and financial statements of a typical business day and examine the behavior of activities that occur simultaneously. The end results show that concurrency and Ada work efficiently and effectively.
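The simulation structure, one task per employee with all tasks running concurrently, can be sketched with Python threads. Python has no Ada-style rendezvous, so a blocking queue stands in for task communication here, and the roles and quantities below are invented for illustration.

```python
import threading
import queue

orders = queue.Queue()
ledger = []

def vendor(n):
    # Each vendor task supplies n microwave ovens to the warehouse.
    for _ in range(n):
        orders.put(1)

def shipping_clerk():
    # The clerk task receives ovens and records each shipment; the
    # blocking get() loosely plays the role of an Ada accept statement.
    while True:
        item = orders.get()
        if item is None:        # sentinel: end of business day
            return
        ledger.append(item)

clerk = threading.Thread(target=shipping_clerk)
clerk.start()
vendors = [threading.Thread(target=vendor, args=(5,)) for _ in range(3)]
for v in vendors:
    v.start()
for v in vendors:
    v.join()
orders.put(None)                # close out the day
clerk.join()
```

Unlike a queue, a true Ada rendezvous is synchronous: the calling task blocks until the accepting task completes the entry, which is why the analogy above is only loose.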
Department of Computer Science
10

Vladimir, Ostojić. "Integrisana multiveličinska obrada radiografskih snimaka." Phd thesis, Univerzitet u Novom Sadu, Fakultet tehničkih nauka u Novom Sadu, 2018. https://www.cris.uns.ac.rs/record.jsf?recordId=107425&source=NDLTD&language=en.

Full text
Abstract:
The thesis focuses on digital radiography image processing. Multi-scale processing is proposed, which unifies detail visibility enhancement, local contrast enhancement and global contrast reduction, thus enabling additional amplification of local structures. In other words, the proposed multi-scale image processing integrates all steps of anatomical structures visibility enhancement. For the purpose of the proposed anatomical structures visibility enhancement analysis, a processing framework was developed. The framework consists of several stages, used to process the image from its raw form (the signal obtained from the radiation detector) to the state where it will be presented to the medical diagnostician. Each stage is analyzed and for each an original solution or an improvement of an existing approach was proposed. Evaluation has shown that integrated processing provides results which surpass state-of-the-art processing methods, and that the entire processing pipeline can be controlled using just two parameters. In order to complete the comprehensive analysis of radiography image processing, processing artifacts removal and radiography image processing acceleration are analyzed in the thesis. Both issues are addressed through original solutions whose efficiency is experimentally confirmed.
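A two-scale sketch of the integrated idea: split the image into a smooth base and a detail band, amplify the detail band, and compress the base toward its mean to reduce global contrast. The smoothing kernel and the two gain parameters below are illustrative, not the dissertation's actual operators, which work over many scales.

```python
import numpy as np

def blur(img, kernel):
    """Separable smoothing: convolve rows, then columns."""
    k = np.asarray(kernel, dtype=float)
    k = k / k.sum()
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)

def enhance(img, detail_gain=2.0, global_gain=0.5, kernel=(1, 2, 1)):
    """Two-scale sketch of integrated processing: amplify the detail band
    (image minus its smoothed version) while pulling the smooth base toward
    its mean to reduce global contrast. Gains are illustrative assumptions."""
    img = np.asarray(img, dtype=float)
    base = blur(img, kernel)
    detail = img - base
    return (base - base.mean()) * global_gain + base.mean() + detail_gain * detail
```

In a full multi-scale pipeline the image would be decomposed into several such bands, with one gain per band; controlling them through just two operative parameters is what the evaluated processing achieves.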
11

Hoffenberg, Steven. "The tug function : a method of context sensitive dot structuring for digital halftones /." Online version of thesis, 1990. http://hdl.handle.net/1850/11500.

Full text
12

Rice, Richard Aaron. "Teaching and learning first-year composition with digital portfolios." Virtual Press, 2002. http://liblink.bsu.edu/uhtbin/catkey/1239209.

Full text
Abstract:
The purpose of this study was to begin to define and describe some of the complex intersections between teaching and learning first-year composition with digital portfolios, focusing on the construction, presentation, and assessment processes in one first-year composition course at Ball State University. The study employed a qualitative ethnographic methodology with case study, and used grounded theory to develop a resultant guide to code the data collected through several methods: observation, interview, survey, and artifact assessment. The resultant coding guide included the core categories "reflective immediacy," "reflexive hypermediacy," and "active remediation." With the guide, findings indicate several effective "common tool" digital portfolio strategies for both teachers and learners. For teachers: introduce the digital portfolio as early in the course as possible; make connections between digital portfolios and personal pedagogical strategies; highlight rhetorical hyperlinking and constructing navigational schemes; emphasize scalability; create a sustainable support system. For learners: consider the instructor's objectives within the framework of the portfolio; synthesize the writing process with course content and portfolio construction; include each component of the writing process in the portfolio.
Department of English
13

Naidoo, Nathan Lyle. "South African sign language recognition using feature vectors and Hidden Markov Models." Thesis, University of the Western Cape, 2010. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_8533_1297923615.

Full text
Abstract:

This thesis presents a system for performing whole gesture recognition for South African Sign Language. The system uses feature vectors combined with Hidden Markov models. In order to construct a feature vector, dynamic segmentation must occur to extract the signer's hand movements. Techniques and methods for normalising variations that occur when recording a signer performing a gesture are investigated. The system has a classification rate of 69%.
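Gesture classification with HMMs can be sketched with the discrete-observation forward algorithm: score a feature-symbol sequence under each gesture's model and pick the most likely. The two toy models below are hand-set for illustration, not trained on sign-language data.

```python
import numpy as np

def forward_log_likelihood(obs, start, trans, emit):
    """log P(obs | model) via the scaled forward algorithm.
    obs: symbol indices; start: (n,), trans: (n, n), emit: (n, m)."""
    alpha = start * emit[:, obs[0]]
    log_p = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        log_p += np.log(alpha.sum())
        alpha = alpha / alpha.sum()   # rescale to avoid underflow
    return log_p

def classify_gesture(obs, models):
    """Pick the gesture whose HMM assigns the sequence the highest likelihood."""
    return max(models, key=lambda g: forward_log_likelihood(obs, *models[g]))

# Two hand-set 2-state models over a binary symbol alphabet (toy example):
models = {
    "wave": (np.array([1.0, 0.0]),
             np.array([[0.9, 0.1], [0.1, 0.9]]),
             np.array([[0.8, 0.2], [0.2, 0.8]])),
    "point": (np.array([1.0, 0.0]),
              np.array([[0.9, 0.1], [0.1, 0.9]]),
              np.array([[0.2, 0.8], [0.8, 0.2]])),
}
```

In a real system the observation symbols would come from the segmented, normalised feature vectors described in the abstract, and the model parameters would be estimated with Baum-Welch training.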

14

Rajah, Christopher. "Chereme-based recognition of isolated, dynamic gestures from South African sign language with Hidden Markov Models." Thesis, University of the Western Cape, 2006. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_4979_1183461652.

Full text
Abstract:

Much work has been done in building systems that can recognize gestures, e.g. as a component of sign language recognition systems. These systems typically use whole gestures as the smallest unit for recognition. Although high recognition rates have been reported, these systems do not scale well and are computationally intensive. The reason why these systems generally scale poorly is that they recognize gestures by building individual models for each separate gesture; as the number of gestures grows, so does the required number of models. Beyond a certain threshold number of gestures to be recognized, this approach becomes infeasible. This work proposes that similarly good recognition rates can be achieved by building models for subcomponents of whole gestures, so-called cheremes. Instead of building models for entire gestures, we build models for cheremes and recognize gestures as sequences of such cheremes. The assumption is that many gestures share cheremes and that the number of cheremes necessary to describe gestures is much smaller than the number of gestures. This small number of cheremes then makes it possible to recognize a large number of gestures with a small number of chereme models. This approach is akin to phoneme-based speech recognition systems, where utterances are recognized as phonemes which are in turn combined into words.
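The chereme idea can be illustrated with a toy lexicon: a small chereme inventory composes into a much larger gesture vocabulary, so only the sub-unit models need training. The chereme and gesture names here are invented examples.

```python
# Toy illustration of the chereme idea: a small inventory of sub-gesture
# units composes into a much larger gesture lexicon.
CHEREMES = {"up", "down", "left", "right", "open", "close"}

# Invented gesture lexicon: each gesture is a sequence of cheremes.
LEXICON = {
    ("open", "up", "close"): "hello",
    ("open", "down", "close"): "thanks",
    ("left", "right", "left"): "no",
}

def recognize(chereme_sequence):
    """Map a recognized chereme sequence to a gesture, as a phoneme
    sequence maps to a word in speech recognition."""
    return LEXICON.get(tuple(chereme_sequence), "<unknown>")

# With |C| cheremes, length-L sequences can distinguish up to |C|**L gestures,
# while only |C| sub-unit models ever need to be trained:
capacity = len(CHEREMES) ** 3
```

In the thesis the per-chereme recognizers are Hidden Markov Models and the lexicon lookup becomes a search over chereme-model sequences; the dictionary above only illustrates the combinatorial payoff.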

15

Kantemir, Ozkan. "VHDL modeling and simulation of a digital image synthesizer for countering ISAR." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03Jun%5FKantemir.pdf.

Full text
Abstract:
Thesis (M.S. in Electrical Engineering)--Naval Postgraduate School, June 2003.
Thesis advisor(s): Douglas J. Fouts, Phillip E. Pace. Includes bibliographical references (p. 143-144). Also available online.
16

Kiang, Kai-Ming Mechanical &amp Manufacturing Engineering Faculty of Engineering UNSW. "Natural feature extraction as a front end for simultaneous localization and mapping." Awarded by:University of New South Wales. School of Mechanical and Manufacturing Engineering, 2006. http://handle.unsw.edu.au/1959.4/26960.

Full text
Abstract:
This thesis is concerned with algorithms for finding natural features that are then used for simultaneous localisation and mapping, commonly known as SLAM in navigation theory. The task involves capturing raw sensory inputs, extracting features from these inputs and using the features for mapping and localising during navigation. The ability to extract natural features allows automatons such as robots to be sent to environments that no human being has previously explored, working in a way similar to how human beings understand and remember where they have been. In extracting natural features from images, the way that features are represented and matched is a critical issue, in that the computation involved could be wasted if the wrong method is chosen. While there are many techniques capable of matching pre-defined objects correctly, few of them can be used for real-time navigation in an unexplored environment, intelligently deciding on what is a relevant feature in the images. Normally, feature analysis that extracts relevant features from an image is a 2-step process, the steps being firstly to select interest points and then to represent these points based on the local region properties. A novel technique is presented in this thesis for extracting a set of natural features small enough yet robust enough for navigation purposes. The technique involves a 3-step approach. The first step involves an interest point selection method based on extrema of difference of Gaussians (DoG). The second step applies Textural Feature Analysis (TFA) on the local regions of the interest points. The third step selects the distinctive features using Distinctness Analysis (DA), based mainly on the probability of occurrence of the features extracted. The additional step of DA yields a significant improvement in processing speed over previous methods.
Moreover, TFA/DA has been applied in a SLAM configuration looking at an underwater environment where texture can be rich in natural features. The results demonstrated that an improvement in loop closure ability is attained compared to traditional SLAM methods. This suggests that real-time navigation in unexplored environments using natural features could now be a more plausible option.
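The first step of the 3-step approach, interest point selection at extrema of a difference of Gaussians, can be sketched in 1D with NumPy; the sigmas and kernel radius below are illustrative choices, and the thesis applies the idea to 2D images.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Normalised 1D Gaussian kernel."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def dog_interest_points(signal, sigma1=1.0, sigma2=2.0, radius=8):
    """Interest points as strict local extrema of the difference of two
    Gaussian smoothings (DoG), the first step of the 3-step pipeline."""
    s = np.asarray(signal, dtype=float)
    g1 = np.convolve(s, gaussian_kernel(sigma1, radius), mode="same")
    g2 = np.convolve(s, gaussian_kernel(sigma2, radius), mode="same")
    dog = g1 - g2
    interior = np.arange(1, len(dog) - 1)
    is_max = (dog[interior] > dog[interior - 1]) & (dog[interior] > dog[interior + 1])
    is_min = (dog[interior] < dog[interior - 1]) & (dog[interior] < dog[interior + 1])
    return interior[is_max | is_min]
```

The TFA and DA steps would then describe the local region around each detected point and keep only the statistically distinctive descriptors.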
APA, Harvard, Vancouver, ISO, and other styles
17

Matsubara, Shigeki, Tomohiro Ohno, and Masashi Ito. "Text Editing for Lecture Speech Archiving on the Web." Springer, 2009. http://hdl.handle.net/2237/15114.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Pollitt, Mark. "The Hermeneutics of the Hard Drive: Using Narratology, Natural Language Processing, and Knowledge Management to Improve the Effectiveness of the Digital Forensic Process." Doctoral diss., University of Central Florida, 2013. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/6004.

Full text
Abstract:
In order to protect the safety of our citizens and to ensure a civil society, we ask our law enforcement, judiciary and intelligence agencies, under the rule of law, to seek probative information which can be acted upon for the common good. This information may be used in court to prosecute criminals, or it can be used to conduct offensive or defensive operations to protect our national security. As the citizens of the world store more and more information in digital form, and as they live an ever-greater portion of their lives online, law enforcement, the judiciary and the Intelligence Community will continue to struggle with finding, extracting and understanding the data stored on computers. But this trend also affords greater opportunity for law enforcement. This dissertation describes how several disparate approaches (knowledge management, content analysis, narratology, and natural language processing) can be combined in an interdisciplinary way to address the growing difficulty of developing useful, actionable intelligence from the ever-increasing corpus of digital evidence. After exploring how these techniques might apply to the digital forensic process, I suggest two new theoretical constructs, the Hermeneutic Theory of Digital Forensics and the Narrative Theory of Digital Forensics, linking existing theories of forensic science, knowledge management, content analysis, narratology, and natural language processing in order to identify and extract narratives from digital evidence. An experimental approach is described and prototyped. The results of these experiments demonstrate the potential of natural language processing techniques for digital forensics.
Ph.D.
Doctorate
Dean's Office, Arts and Humanities
Arts and Humanities
Texts and Technology
APA, Harvard, Vancouver, ISO, and other styles
19

Neda, Milić. "Model optimizacije slike za korisnike sa poremećajima viđenja boja." Phd thesis, Univerzitet u Novom Sadu, Fakultet tehničkih nauka u Novom Sadu, 2016. http://www.cris.uns.ac.rs/record.jsf?recordId=99904&source=NDLTD&language=en.

Full text
Abstract:
Predmet disertacije jeste optimizacija digitalne slike kada ograničenje nije vezano za način reprodukcije već za samog posmatrača, odnosno optimizacija opaženog kvaliteta digitalne slike od strane osoba sa poremećajima viđenja boja. Predloženi model optimizacije slike poboljšava distinkciju boja i opseg boja slike za korisnike sa različitim težinama poremećaja viđenja boja uz očuvanje prirodnosti slike. Metodološki okvir ispitivanja, koji uključuje kvantitativnu analizu računarskih simulacija, analizu eye-tracking podataka i subjektivno ocenjivanje poboljšanja opaženog kvaliteta test slika, daje sistematičnu i pouzdanu verifikaciju efektnosti predloženih metoda adaptacije boja slike.
The subject of the thesis was digital image optimization when an observer represents the main image reproduction limitation or, in other words, the optimization of the perceived image quality by individuals with colour vision deficiencies. The proposed image optimization model enhances colour distinction and gamut for users with different severities of colour blindness while preserving the image naturalness. The methodological framework used, including a quantitative analysis of computer simulations, an analysis of eye-tracking data and a subjective evaluation of the perceived image quality, provides systematic and reliable verification of the effectiveness of the proposed colour adaptation methods.
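As one concrete illustration of the kind of colour adaptation this abstract describes, the following sketch applies the common daltonization pattern: recolour an image by redistributing the error between a colour and its dichromat simulation. The `SIM` and `SHIFT` matrices below are illustrative placeholders (a crude red-green collapse and a blue-channel redistribution), not the coefficients or the optimization model used in the thesis:

```python
# Illustrative daltonization-style recolouring (assumed placeholder matrices,
# not the thesis model): shift the information a dichromat loses into a
# channel they can still distinguish.

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

SIM = [  # placeholder simulation: collapses red-green contrast
    [0.6, 0.4, 0.0],
    [0.6, 0.4, 0.0],
    [0.0, 0.0, 1.0],
]
SHIFT = [  # placeholder: route the simulation error into the blue channel
    [0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0],
    [0.7, 0.7, 0.0],
]

def daltonize(rgb):
    simulated = mat_vec(SIM, rgb)               # what the dichromat would see
    err = [o - s for o, s in zip(rgb, simulated)]  # information lost
    shift = mat_vec(SHIFT, err)                 # redistribute that loss
    return [min(1.0, max(0.0, c + d)) for c, d in zip(rgb, shift)]

red, green = [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]
print(daltonize(red), daltonize(green))
```

After this recolouring, red and green (identical under `SIM`) differ in their blue component, so their simulated appearances are no longer confusable; a real system would use a validated simulation model and optimise the shift per image.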
APA, Harvard, Vancouver, ISO, and other styles
20

Wu, Christopher James. "SKEWER: Sentiment Knowledge Extraction With Entity Recognition." DigitalCommons@CalPoly, 2016. https://digitalcommons.calpoly.edu/theses/1615.

Full text
Abstract:
The California state legislature introduces approximately 5,000 new bills each legislative session. While the legislative hearings are recorded on video, the recordings are not easily accessible to the public. The lack of official transcripts or summaries also increases the effort required to gain meaningful insight from those recordings. Therefore, the news media and the general population are largely oblivious to what transpires during legislative sessions. Digital Democracy, a project started by the Cal Poly Institute for Advanced Technology and Public Policy, is an online platform created to bring transparency to the California legislature. It features a searchable database of state legislative committee hearings, with each hearing accompanied by a transcript that was generated by an internal transcription tool. This thesis presents SKEWER, a pipeline for building a spoken-word knowledge graph from those transcripts. SKEWER utilizes a number of natural language processing tools to extract named entities, phrases, and sentiments from the transcript texts and aggregates the results of those tools into a graph database. The resulting graph can be queried to discover knowledge regarding the positions of legislators, lobbyists, and the general public towards specific bills or topics, and how those positions are expressed in committee hearings. Several case studies are presented to illustrate the new knowledge that can be acquired from the knowledge graph.
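The aggregation step described above (collapsing per-utterance annotations into graph edges) can be sketched as follows. This is not the SKEWER codebase; the triples are hypothetical stand-ins for what NER and sentiment tools would emit over hearing transcripts:

```python
# Illustrative sketch: aggregate (speaker, entity, sentiment) annotations into
# a simple knowledge graph keyed by (speaker, entity) edges. The input triples
# are hypothetical; a real pipeline would derive them from NLP tools run over
# committee-hearing transcripts.
from collections import defaultdict

def build_graph(annotations):
    """annotations: iterable of (speaker, entity, sentiment_score) triples."""
    graph = defaultdict(list)
    for speaker, entity, score in annotations:
        graph[(speaker, entity)].append(score)
    # collapse repeated mentions into one edge carrying an average stance
    return {edge: sum(scores) / len(scores) for edge, scores in graph.items()}

annotations = [
    ("Sen. A", "AB 123", 0.8),       # supportive remark
    ("Sen. A", "AB 123", 0.4),
    ("Lobbyist B", "AB 123", -0.6),  # opposing remark
]
graph = build_graph(annotations)
print(graph)
```

Querying such a graph for an entity then reveals each actor's aggregate position towards a bill, which is the kind of question the thesis answers with a graph database rather than in-memory dictionaries.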
APA, Harvard, Vancouver, ISO, and other styles
21

Vladimir, Ilić. "Application of new shape descriptors and theory of uncertainty in image processing." Phd thesis, Univerzitet u Novom Sadu, Fakultet tehničkih nauka u Novom Sadu, 2019. https://www.cris.uns.ac.rs/record.jsf?recordId=111129&source=NDLTD&language=en.

Full text
Abstract:
The doctoral thesis deals with the study of quantitative aspects of shape attributes suitable for numerical characterization, i.e., shape descriptors, as well as the theory of uncertainty, particularly the theory of fuzzy sets, and their application in image processing. The original contributions and results of the thesis can be naturally divided into two groups, in accordance with the approaches used to obtain them. The first group of contributions relates to introducing new shape descriptors (of hexagonality and fuzzy squareness) and associated measures that evaluate to what extent the shape considered satisfies these properties. The introduced measures are naturally defined, theoretically well-founded, and satisfy most of the desirable properties expected of any well-defined shape measure. To mention some of them: they both range through (0,1] and achieve the largest possible value 1 if and only if the shape considered is a hexagon, respectively a fuzzy square; there is no non-zero-area shape whose measured hexagonality or fuzzy squareness equals 0; both introduced measures are invariant to similarity transformations; and both provide results that are consistent with theoretically proven results, as well as with human perception and expectation. Numerous experiments on synthetic and real examples are presented to illustrate the theoretically proven considerations and to provide clearer insight into the behaviour of the introduced shape measures. Their advantages and applicability are illustrated in various tasks of recognizing and classifying objects in images from several well-known and frequently used image datasets. Besides, the doctoral thesis contains research related to the application of the theory of uncertainty, in the narrower sense fuzzy set theory, in different tasks of image processing and shape analysis.
We distinguish between tasks relating to the extraction of shape features and those relating to performance improvement of different image processing and image analysis techniques. Regarding the first group of tasks, we deal with the application of fuzzy set theory in introducing a new fuzzy shape-based descriptor, named fuzzy squareness, and in measuring how fuzzy square a given fuzzy shape is. In the second group of tasks, we deal with improving the performance of estimates of both the Euclidean distance transform in three dimensions (3D EDT) and the centroid distance signature of a shape in two dimensions. The performance improvement is particularly reflected in the achieved accuracy and precision, increased invariance to geometrical transformations (e.g., rotation and translation), and robustness in the presence of noise and uncertainty resulting from the imperfection of devices or imaging conditions. The latter also relates to the second group of the original contributions and results of the thesis. It is motivated by the fact that shape analysis traditionally assumes that the objects appearing in the image have previously been uniquely and crisply extracted from the image. This is usually achieved in a process of sharp (i.e., binary) segmentation of the original image, where the decision on the membership of a point in an imaged object is made in a sharp manner. Nevertheless, due to the imperfections of imaging conditions or devices, the presence of noise, and various types of imprecision (e.g., the lack of a precise object boundary or of clear boundaries between the objects, errors in computation, lack of information, etc.), different levels of uncertainty and vagueness may arise in deciding the membership of an image point.
This is particularly noticeable in the case of discretization (i.e., sampling) of a continuous image domain, when a single image element, related to the corresponding image sample point, is covered by multiple objects in an image. In this respect, it is clear that this type of segmentation can potentially lead to a wrong decision on the membership of image points, and consequently to irreversible loss of information about the imaged objects. This stems from the fact that image segmentation performed in this way does not permit an image point to belong to a particular imaged object to some degree, which further leads to the potential risk that points partially contained in the object before segmentation will not be assigned to the object after segmentation. However, if, instead of binary segmentation, segmentation is performed where the decision about the membership of an image point is made in a gradual rather than crisp manner, enabling a point to belong to an object to some extent, then making a sharp decision on membership can be avoided at this early analysis step. As a result, a potentially large amount of object information can be preserved after segmentation and used in the following analysis steps. In this regard, we are interested in one specific type of fuzzy segmentation, named coverage image segmentation, resulting in a fuzzy digital image representation where the membership value assigned to each image element is proportional to its relative coverage by a continuous object present in the original image. In this thesis, we study a coverage digitization model providing this coverage digital image representation and show that significant improvements in estimating the 3D EDT, as well as the centroid distance signature of a continuous shape, can be achieved if the coverage information available in this type of image representation is appropriately taken into account.
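The coverage digitization model described above can be sketched as follows. This is an illustrative toy (our own assumptions, not the thesis algorithm): each pixel's membership value is its relative coverage by a continuous disk, estimated here by supersampling:

```python
# Illustrative coverage digitization of a continuous disk: each pixel's value
# is the fraction of the pixel covered by the object, approximated by a
# regular grid of sub-pixel samples.

def coverage_digitize(cx, cy, r, size=8, samples=4):
    img = []
    for y in range(size):
        row = []
        for x in range(size):
            inside = 0
            for sy in range(samples):
                for sx in range(samples):
                    # sub-sample positions inside pixel [x, x+1) x [y, y+1)
                    px = x + (sx + 0.5) / samples
                    py = y + (sy + 0.5) / samples
                    if (px - cx) ** 2 + (py - cy) ** 2 <= r * r:
                        inside += 1
            row.append(inside / (samples * samples))
        img.append(row)
    return img

img = coverage_digitize(4.0, 4.0, 2.5)
# interior pixels are fully covered; boundary pixels take fractional values
print(img[4][4], img[4][1])
```

A binary segmentation would force the fractional boundary values to 0 or 1, discarding exactly the sub-pixel information that the thesis exploits to improve 3D EDT and signature estimates.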
Докторска дисертација се бави проучавањем квантитативних аспеката атрибута облика погодних за нумеричку карактеризацију, то јест дескриптора облика, као и теоријом неодређености, посебно теоријом фази скупова, и њиховом применом у обради слике. Оригинални доприноси и резултати тезе могу се природно поделити у две групе, у складу са приступом и методологијом која је коришћена за њихово добијање. Прва група доприноса односи се на увођење нових дескриптора облика (шестоугаоности и фази квадратности) као и одговарајућих мера које нумерички оцењују у ком обиму разматрани облик задовољава разматрана својства. Уведене мере су природно дефинисане, теоријски добро засноване и задовољавају већину пожељних својстава које свака добро дефинисана мера облика треба да задовољава. Поменимо неке од њих: обе мере узимају вредности из интервала (0,1] и достижу највећу могућу вредност 1 ако и само ако је облик који се посматра шестоугао, односно фази квадрат; не постоји облик не-нула површине чија је измерена шестоугаоност, односно фази квадратност једнака 0; обе уведене мере су инваријантне у односу на трансформације сличности; и дају резултате који су у складу са теоријски доказаним резултатима, као и људском перцепцијом и очекивањима. Бројни експерименти на синтетичким и реалним примерима приказани су у циљу илустровања теоријски доказаних разматрања и пружања јаснијег увида у понашање уведених мера. Њихова предност и корисност илустровани су у различитим задацима препознавања и класификације слика објеката неколико познатих и најчешће коришћених база слика. Поред тога, докторска теза садржи истраживања везана за примену теорије неодређености, у ужем смислу теорије фази скупова, у различитим задацима обраде слике и анализе облика. Разликујемо задатке који се односе на издвајање карактеристика облика и оне који се односе на побољшање перформанси различитих техника обраде и анализе слике.
Што се тиче прве групе задатака, бавимо се применом теорије фази скупова у задацима дефинисања новог дескриптора фази облика, назван фази квадратност, и мерења колико је фази квадратан посматрани фази облик. У другој групи задатака бавимо се истраживањем побољшања перформанси оцене трансформације слике еуклидским растојањима у три димензије (3Д ЕДТ), као и сигнатуре непрекидног облика у две димензије засноване на растојању од центроида облика. Ово последње се посебно огледа у постигнутој тачности и прецизности оцене, повећаној инваријантности у односу на ротацију и транслацију објекта, као и робустности у присуству шума и неодређености које су последица несавршености уређаја или услова снимања. Последњи резултати се такође односе и на другу групу оригиналних доприноса тезе који су мотивисани чињеницом да анализа облика традиционално претпоставља да су објекти на слици претходно једнозначно и јасно издвојени из слике. Такво издвајање објеката се обично постиже у процесу јасне (то јест бинарне) сегментације оригиналне слике где се одлука о припадности тачке објекту на слици доноси на једнозначан и недвосмислени начин. Међутим, услед несавршености услова или уређаја за снимање, присуства шума и различитих врста непрецизности (на пример непостојање прецизне границе објекта или јасних граница између самих објеката, грешке у рачунању, недостатка информација, итд.), могу се појавити различити нивои несигурности и неодређености у процесу доношења одлуке у вези са припадношћу тачке слике. Ово је посебно видљиво у случају дискретизације (то јест узорковања) непрекидног домена слике када елемент слике, придружен одговарајућој тачки узорка домена, може бити делимично покривен са више објеката на слици. У том смислу, имамо да ова врста сегментације може потенцијално довести до погрешне одлуке о припадности тачака слике, а самим тим и неповратног губитка информација о објектима који се на слици налазе.
То произлази из чињенице да сегментација слике изведена на овај начин не дозвољава да тачка слике може делимично у одређеном обиму бити члан посматраног објекта на слици, што даље води потенцијалном ризику да тачке делимично садржане у објекту пре сегментације неће бити придружене објекту након сегментације. Међутим, ако се уместо бинарне сегментације изврши сегментација слике где се одлука о припадности тачке слике објекту доноси на начин који омогућава да тачка може делимично бити члан објекта у неком обиму, тада се доношење бинарне одлуке о чланству тачке објекту на слици може избећи у овом раном кораку анализе. То даље резултира да се потенцијално велика количина информација о објектима присутним на слици може сачувати након сегментације, и користити у следећим корацима анализе. С тим у вези, од посебног интереса за нас јесте специјална врста фази сегментације слике, сегментација заснована на покривености елемената слике, која као резултат обезбеђује фази дигиталну репрезентацију слике где је вредност чланства додељена сваком елементу пропорционална његовој релативној покривености непрекидним објектом на оригиналној слици. У овој тези бавимо се истраживањем модела дигитализације покривености који пружа овакву врсту репрезентације слике и представљамо како се могу постићи значајна побољшања у оцени 3Д ЕДТ, као и сигнатуре непрекидног облика засноване на растојању од центроида, ако су информације о покривености доступне у овој репрезентацији слике разматране на одговарајући начин.
APA, Harvard, Vancouver, ISO, and other styles
22

Chen, Wei-chun. "Simulation of a morphological image processor using VHDL. mathematical components /." Online version of thesis, 1993. http://hdl.handle.net/1850/11872.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Chen, Hao. "Simulation of a morphological image processor using VHDL. control mechanism /." Online version of thesis, 1993. http://hdl.handle.net/1850/11744.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Magdolna, Pal. "Razvoj modela objektivne kontrole površinskih oštećenja premaznih papira u procesu savijanja." Phd thesis, Univerzitet u Novom Sadu, Fakultet tehničkih nauka u Novom Sadu, 2014. http://www.cris.uns.ac.rs/record.jsf?recordId=90012&source=NDLTD&language=en.

Full text
Abstract:
U disertaciji se predstavljaju istraživanja koja su rezultirala razvojem modela objektivne kontrole otpornosti premaznih papira prema površinskom oštećenju u procesima savijanja. Na bazi analize niza izabranih parametara procesa kontrole, predložena su tri obeležja digitalnih uzoraka premaznih papira za opis i ocenu površinskog oštećenja. Rezultati predloženih obeležja, kao i korelacione analize omogućuju primenu tih obeležja u funkciji kontrole kvaliteta kao osnove razvoja objektivne procesne kontrole premaznih papira u procesu savijanja.
The research presented in this dissertation resulted in the development of an objective quality control model for the fold-cracking resistance of coated papers. Based on the analysis of chosen control process parameters, three different features of the digitalised coated paper samples were proposed for describing and classifying surface damage. The results of the proposed features, along with their correlation analysis, support their use in objective process quality control of coated papers in the folding process.
APA, Harvard, Vancouver, ISO, and other styles
25

Slobodan, Dražić. "Shape Based Methods for Quantification and Comparison of Object Properties from Their Digital Image Representations." Phd thesis, Univerzitet u Novom Sadu, Fakultet tehničkih nauka u Novom Sadu, 2019. https://www.cris.uns.ac.rs/record.jsf?recordId=107871&source=NDLTD&language=en.

Full text
Abstract:
The thesis investigates the development, improvement and evaluation of methods for the quantitative characterization of objects from their digital images, as well as similarity measures between digital images. Methods for quantitative characterization of objects from their digital images are increasingly used in applications in which error can have critical consequences, yet the traditional methods for shape quantification are of low precision and accuracy. The thesis shows that the coverage of a pixel by a shape can be used to greatly improve the accuracy and precision of using digital images to estimate the maximal distance between a shape's furthest points, measured in a given direction. It is highly desirable that a distance measure between digital images can be related to a certain shape property, and morphological operations are used when defining a distance for this purpose. Still, distances defined in this manner turn out to be insufficiently sensitive to the relevant data representing shape properties in images. We show that the idea of adaptive mathematical morphology can be used successfully to overcome the sensitivity problems of distances defined via morphological operations when comparing objects from their digital image representations.
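The claim about pixel coverage can be made concrete with a toy example (a sketch under our own assumptions, not the thesis's experiments): estimating the extent of a 1D interval from a pixel grid, first with crisp pixel membership and then with partial pixel coverage:

```python
# Illustrative comparison: estimating the length of the interval [a, b] from a
# pixel grid (pixel i covers [i, i+1)). Crisp membership loses the sub-pixel
# fractions at both ends; coverage recovers them.

def binary_length(a, b):
    # crisp membership: count pixels whose centre falls inside [a, b)
    return sum(1 for i in range(20) if a <= i + 0.5 < b)

def coverage_length(a, b):
    # sum each pixel's relative coverage by the interval [a, b]
    return sum(max(0.0, min(b, i + 1) - max(a, i)) for i in range(20))

true_len = 7.3 - 2.9
print(binary_length(2.9, 7.3), coverage_length(2.9, 7.3), true_len)
```

Here the coverage estimate matches the true length to floating-point precision, while the crisp count is off by almost half a pixel; the thesis makes the analogous argument for the maximal distance between a shape's furthest points measured in a given direction.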
У тези су размотрени развој, побољшање и евалуација метода за квантитативну карактеризацију објеката приказаних дигиталним сликама, као и мере растојања између дигиталних слика. Методе за квантитативну карактеризацију објеката представљених дигиталним сликама се  све више користе у применама у којима грешка може имати критичне последице, а традиционалне методе за  квантитативну карактеризацију су мале прецизности и тачности. У тези се показује да се коришћењем информације о покривеност пиксела обликом може значајно побољшати прецизност и тачност оцене растојања између две најудаљеније тачке облика мерено у датом правцу. Веома је пожељно да мера растојања између дигиталних слика може да се веже за одређену особину облика и морфолошке операције се користе приликом дефинисања растојања у ту сврху. Ипак, растојања дефинисана на овај начин показују се недовољно осетљива на релевантне податке дигиталних слика који представљају особине облика. У тези се показује да идеја адаптивне математичке морфологије може успешно да се користи да би се превазишао поменути  проблем осетљивости растојања дефинисаних користећи морфолошке операције.
APA, Harvard, Vancouver, ISO, and other styles
26

Schmidt, Natassja. ""Det ska vara rätt och riktigt!" : - En intervjustudie om textbearbetning i skolans yngre år." Thesis, Linnéuniversitetet, Institutionen för svenska språket (SV), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-53227.

Full text
Abstract:
Progress in technology has resulted in new aids in the form of writing tools which make it easier for pupils to revise their texts. This interview study investigates how four teachers view paper-based and screen-based text processing in the younger years of school. It is still paper-based text revision that dominates in schools, and the teachers point out the shortage of digital tools and competence. The choice of processing method is affected by the aim, whether it concerns content or form. The result shows that digital tools can be of help for pupils with difficulties, but the interviewed teachers also emphasize the value of mastering handwriting, and they would like to see screen-based text processing as a complement to traditional paper-based text processing.
APA, Harvard, Vancouver, ISO, and other styles
27

Ghaziasgar, Mehrdad. "The use of mobile phones as service-delivery devices in sign language machine translation system." Thesis, University of the Western Cape, 2010. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_7216_1299134611.

Full text
Abstract:

This thesis investigates the use of mobile phones as service-delivery devices in a sign language machine translation system. Four sign language visualization methods were evaluated on mobile phones. Three of the methods were synthetic sign language visualization methods. Three factors were considered: the intelligibility of the sign language as rendered by each method; the power consumption; and the bandwidth usage associated with each method. The average intelligibility rate was 65%, with some methods achieving intelligibility rates of up to 92%. The average file size was 162 KB and, on average, the power consumption increased to 180% of the idle state across all methods. This research forms part of the Integration of Signed and Verbal Communication: South African Sign Language Recognition and Animation (SASL) project at the University of the Western Cape and serves as an integration platform for the group's research. In order to perform this research, a machine translation system that uses mobile phones as service-delivery devices was developed, as well as a 3D avatar for mobile phones. It was concluded that mobile phones are suitable service-delivery platforms for sign language machine translation systems.

APA, Harvard, Vancouver, ISO, and other styles
28

Ofoghi, Bahadorreza. "Enhancing factoid question answering using frame semantic-based approaches." Thesis, University of Ballarat, 2009. http://researchonline.federation.edu.au/vital/access/HandleResolver/1959.17/55602.

Full text
Abstract:
FrameNet is used to enhance the performance of semantic QA systems. FrameNet is a linguistic resource that encapsulates Frame Semantics and provides scenario-based generalizations over lexical items that share similar semantic backgrounds.
Doctor of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
29

Ofoghi, Bahadorreza. "Enhancing factoid question answering using frame semantic-based approaches." University of Ballarat, 2009. http://innopac.ballarat.edu.au/record=b1503070.

Full text
Abstract:
FrameNet is used to enhance the performance of semantic QA systems. FrameNet is a linguistic resource that encapsulates Frame Semantics and provides scenario-based generalizations over lexical items that share similar semantic backgrounds.
Doctor of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
30

Svensson, Henrik, and Kalle Lindqvist. "Rättssäker Textanalys." Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20396.

Full text
Abstract:
Natural language processing is a research area in which new advances are constantly being made. A significant portion of the text analysis that takes place in this field aims at achieving a satisfactory application in the dialogue between human and computer. In this study, we instead focus on the impact natural language processing can have on the human learning process. At the same time, the context of our research has a future impact on one of the most basic principles of a legally secure society, namely the writing of the police report. By creating a theoretical foundation of ideas that combines aspects of natural language processing and official police report writing, and then implementing them in an educational web platform intended for police students, we are of the opinion that our research adds something new to the computer science and sociological fields. The purpose of this work is to act as the first steps towards a web application that supports Swedish police documentation.
APA, Harvard, Vancouver, ISO, and other styles
31

Kästel, Arne Morten, and Christian Vestergaard. "Comparing performance of K-Means and DBSCAN on customer support queries." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-260252.

Full text
Abstract:
In customer support, there are often many repeated questions, and questions that do not need novel answers. In a quest to increase productivity in the question-answering task within any business, there is clearly room for automatic answering to take on some of the workload of customer support functions. We look at clustering corpora of older queries and texts as a method for identifying groups of semantically similar questions and texts; a system could then match a new query to a specific cluster and return an associated automatic response. The approach compares the performance of K-means and density-based clustering algorithms on three different corpora, using document embeddings encoded with BERT. We also discuss the digital transformation process, why companies are unsuccessful in its implementation, and the possible room for a new, more iterative model.
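The comparison described in this abstract can be illustrated with a minimal sketch, assuming scikit-learn is available. Synthetic Gaussian blobs stand in for the BERT document embeddings (which are not reproduced here), and the parameters are illustrative choices, not the thesis' setup.

```python
# Sketch: K-means vs. DBSCAN on document-embedding-like vectors.
import numpy as np
from sklearn.cluster import KMeans, DBSCAN
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Three artificial "topics": tight Gaussian blobs in an 8-dim embedding space.
embeddings = np.vstack([
    rng.normal(loc=c, scale=0.05, size=(30, 8))
    for c in (0.0, 1.0, 2.0)
])

# K-means needs the number of clusters up front; DBSCAN discovers it
# from density (label -1 marks noise points).
kmeans_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(embeddings)
dbscan_labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(embeddings)

print("k-means silhouette:", silhouette_score(embeddings, kmeans_labels))
print("DBSCAN clusters found:", len(set(dbscan_labels) - {-1}))
```

On well-separated data like this, both algorithms recover the three groups; the interesting differences the thesis studies appear on real, noisier corpora.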
APA, Harvard, Vancouver, ISO, and other styles
32

Skurat, Harris Heidi A. "Digital students in the democratic classroom using technology to enhance critical pedagogy in first-year composition /." Muncie, Ind. : Ball State University, 2009. http://cardinalscholar.bsu.edu/831.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

McCullagh, Adrian J. "The incorporation of trust strategies in digital signature regimes." Thesis, Queensland University of Technology, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
34

Sibanda, Phathisile. "Connection management applications for high-speed audio networking." Thesis, Rhodes University, 2008. http://hdl.handle.net/10962/d1006532.

Full text
Abstract:
Traditionally, connection management applications (referred to as patchbays) for high-speed audio networking are predominantly developed using third-generation languages such as C, C# and C++. Due to the rapid increase in distributed audio/video network usage in the world today, connection management applications that control signal routing over these networks have also evolved in complexity to accommodate more functionality. As a result, high-speed audio networking application developers require a tool that will enable them to develop complex connection management applications easily and within the shortest possible time. In addition, this tool should provide them with the reliability and flexibility required to develop applications controlling signal routing in networks carrying real-time data. High-speed audio networks are used for various purposes, including audio/video production and broadcasting. This investigation evaluates the possibility of using Adobe Flash Professional 8, with ActionScript 2.0, for developing connection management applications. Three patchbays, namely the Broadcast patchbay, the Project studio patchbay, and the Hospitality/Convention Centre patchbay, were developed and tested for connection management in three sound installation networks: the Broadcast network, the Project studio network, and the Hospitality/Convention Centre network. Findings indicate that complex connection management applications can effectively be implemented using the Adobe Flash IDE and ActionScript 2.0.
APA, Harvard, Vancouver, ISO, and other styles
35

Yang, Seungwon. "Automatic Identification of Topic Tags from Texts Based on Expansion-Extraction Approach." Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/25111.

Full text
Abstract:
Identifying the topics of a textual document is useful for many purposes. We can organize the documents in digital libraries by topic, then browse and search for documents on specific topics. By examining the topics of a document, we can quickly understand what the document is about. To augment the traditional manual way of topic tagging, which is labor-intensive, computer-based solutions have been developed. This dissertation describes the design and development of a topic identification approach, in this case applied to disaster events. In a sense, this study represents the marriage of research analysis with an engineering effort, in that it combines inspiration from Cognitive Informatics with a practical model from Information Retrieval. A key design decision was to use the Web as a universal knowledge source, which was essential for accessing the information required to infer topics from texts. Retrieving specific information of interest from such a vast information source was achieved by querying a search engine's application programming interface. The information gathered was processed mainly by incorporating the Vector Space Model from the Information Retrieval field. As a proof of concept, we subsequently developed and evaluated a prototype tool, Xpantrac, which is able to run in batch mode to automatically process text documents. A user interface for Xpantrac was also constructed to support an interactive, semi-automatic topic tagging application, which was subsequently assessed via a usability study. Throughout the design, development, and evaluation of these study components, we detail how the hypotheses and research questions of this dissertation have been supported and answered. We also show that our overarching goal, the identification of topics in a human-comparable way without depending on a large training set or corpus, has been achieved.
Ph. D.
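The Vector Space Model step mentioned in this abstract can be sketched with a minimal cosine-similarity ranking over term-frequency vectors. The documents and candidate topics below are invented for illustration; this is not Xpantrac's actual pipeline.

```python
# Sketch: rank candidate topic terms by cosine similarity to a document
# vector in a bag-of-words Vector Space Model.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

doc = Counter("flood rescue teams evacuate residents after flood warning".split())
candidates = {
    "flooding": Counter("flood water rising river flood".split()),
    "election": Counter("vote ballot candidate election".split()),
}
ranked = sorted(candidates, key=lambda k: cosine(doc, candidates[k]), reverse=True)
print(ranked[0])  # the flood-related pseudo-topic ranks first
```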
APA, Harvard, Vancouver, ISO, and other styles
36

Darbhamulla, Lalitha. "A Java image editor and enhancer." CSUSB ScholarWorks, 2004. https://scholarworks.lib.csusb.edu/etd-project/2705.

Full text
Abstract:
The purpose of this project is to develop a Java applet that provides all the tools needed for creating image fantasies. It lets the user pick a template and an image and combine them. The user can then apply image processing techniques such as rotation, zooming, and blurring according to his or her requirements.
APA, Harvard, Vancouver, ISO, and other styles
37

Fuini, Mateus Guilherme. "Sistema de recuperação de imagens baseada na teoria computacional das percepções e em linguagens formais fuzzy." [s.n.], 2006. http://repositorio.unicamp.br/jspui/handle/REPOSIP/259063.

Full text
Abstract:
Advisor: Fernando Antônio Campos Gomide
Master's thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Abstract: In this work, fuzzy formal language techniques and Zadeh's computational theory of perceptions are used to allow the user to query graphic databases. The description of the graphic elements to be searched is encoded by means of fuzzy sentences accepted by a fuzzy grammar defined over a set of graphic primitives, recognized by specific computational routines that label the primitive graphic components of a given image. The computational theory of perceptions is used to allow the user to specify the spatial relations to be shared by the graphic elements in the scenes to be selected. The results obtained by querying a database of 22,000 graphic scenes support the claim that our approach provides an interesting solution for querying visual databases.
Master's degree
Computer Engineering
Master in Electrical Engineering
APA, Harvard, Vancouver, ISO, and other styles
38

Euawatana, Teerapong. "Implementation business-to-business electronic commercial website using ColdFusion 4.5." CSUSB ScholarWorks, 2001. https://scholarworks.lib.csusb.edu/etd-project/1917.

Full text
Abstract:
This project was created using ColdFusion 4.5 to build and implement a commercial web site to present a real picture of electronic commerce. This project is intended to provide enough information for other students who are looking for a guideline for further study and to improve their skills in business from an information management aspect.
APA, Harvard, Vancouver, ISO, and other styles
39

Ruan, Jianhua, Han-Shen Yuh, and Koping Wang. "Spider III: A multi-agent-based distributed computing system." CSUSB ScholarWorks, 2002. https://scholarworks.lib.csusb.edu/etd-project/2249.

Full text
Abstract:
The project, Spider III, presents the architecture and protocol of a multi-agent-based Internet distributed computing system, which provides a convenient development and execution environment for transparent task distribution, load balancing, and fault tolerance. Spider is an ongoing distributed computing project in the Department of Computer Science, California State University, San Bernardino. It was first proposed as an object-oriented distributed system by Han-Sheng Yuh in his master's thesis in 1997, and was further developed by Koping Wang in his master's project, in which he made a large contribution and implemented the Spider II system.
APA, Harvard, Vancouver, ISO, and other styles
40

Barragán, Guerrero Diego Orlando 1984. "Implementação em FPGA de algoritmos de sincronismo para OFDM." [s.n.], 2013. http://repositorio.unicamp.br/jspui/handle/REPOSIP/261503.

Full text
Abstract:
Advisor: Luís Geraldo Pedroso Meloni
Master's thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Abstract: OFDM systems are intrinsically sensitive to time and frequency synchronization errors. Synchronization is a key step for correct packet reception. This thesis describes how to implement several OFDM synchronization algorithms in FPGA, using the preamble symbols defined in the IEEE 802.11a standard. In addition, the CORDIC algorithm (required for the carrier frequency offset estimation and compensation step) was implemented in rotation and vectoring mode for a circular coordinate system, comparing the performance of various architectures in order to optimize the operating frequency and relate the error of the result to the number of iterations performed. As the results show, estimates with good approximations are obtained for offsets of 0, 100, and 200 kHz. The results are an important instrument for choosing synchronization algorithms for FPGA implementation. It was found that the different algorithms differ not only in the variance of their estimates, but also in operating frequency and consumption of FPGA resources. Throughout the project, a tapped-delay channel model was considered in the analysis.
Master's degree
Telecommunications and Telematics
Master in Electrical Engineering
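The circular-coordinate CORDIC mentioned in this abstract can be sketched in a few lines. This floating-point, rotation-mode version is a textbook illustration only; the thesis' fixed-point FPGA architectures would differ in representation and pipelining.

```python
# Sketch: rotation-mode CORDIC for the circular coordinate system.
# Iterative shift-and-add micro-rotations converge to cos/sin of the
# target angle; more iterations give smaller error.
import math

def cordic_sincos(theta: float, iterations: int = 24):
    # Elementary rotation angles atan(2^-i) and the accumulated CORDIC gain.
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    gain = 1.0
    for a in angles:
        gain *= math.cos(a)
    x, y, z = 1.0, 0.0, theta
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0        # rotate toward z = 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x * gain, y * gain              # (cos(theta), sin(theta))

c, s = cordic_sincos(math.pi / 6)
```

In hardware, the multiplications by `2^-i` become bit shifts and the gain is folded into a constant scaling, which is what makes CORDIC attractive for FPGA implementation.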
APA, Harvard, Vancouver, ISO, and other styles
41

Branko, Brkljač. "Препознавање облика са ретком репрезентацијом коваријансних матрица и коваријансним дескрипторима." Phd thesis, Univerzitet u Novom Sadu, Fakultet tehničkih nauka u Novom Sadu, 2017. https://www.cris.uns.ac.rs/record.jsf?recordId=104951&source=NDLTD&language=en.

Full text
Abstract:
This paper presents a new model for the sparse approximation of Gaussian components in statistical pattern recognition models based on Gaussian mixtures, with the aim of reducing the computational complexity of recognition. Approximations of the inverse covariance matrices are designed as sparse linear combinations of symmetric matrices from a learned redundant set, using an information criterion based on the principle of minimum discrimination information. Sparse representation assumes a relatively small number of active components in signal reconstruction, and it achieves that goal by simultaneously striving for preservation of information content and simplicity of representation.
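The idea of approximating a matrix as a sparse linear combination of symmetric dictionary matrices can be illustrated with a simple greedy pursuit over the vectorized matrices. The orthogonal dictionary, selection rule, and sizes below are illustrative assumptions; the thesis uses a learned redundant set and a minimum-discrimination-information criterion instead.

```python
# Sketch: greedy matching pursuit picking symmetric dictionary atoms and
# refitting their coefficients by least squares.
import numpy as np

def sym_basis(n):
    """Orthogonal basis of symmetric n x n matrices (the toy 'dictionary')."""
    mats = []
    for i in range(n):
        for j in range(i, n):
            M = np.zeros((n, n))
            M[i, j] = M[j, i] = 1.0
            mats.append(M)
    return mats

def sparse_matrix_combo(target, dictionary, n_atoms=2):
    """Pick atoms greedily by normalized correlation, refit, return (indices, coefs)."""
    t = target.reshape(-1)
    D = np.stack([A.reshape(-1) for A in dictionary], axis=1)
    chosen, residual, coef = [], t.copy(), np.array([])
    for _ in range(n_atoms):
        scores = np.abs(D.T @ residual) / np.linalg.norm(D, axis=0)
        chosen.append(int(np.argmax(scores)))
        coef, *_ = np.linalg.lstsq(D[:, chosen], t, rcond=None)
        residual = t - D[:, chosen] @ coef
    return chosen, coef

atoms = sym_basis(3)                      # 6 symmetric dictionary atoms
target = 2.0 * atoms[1] - 0.5 * atoms[4]  # a genuinely 2-sparse combination
idx, coef = sparse_matrix_combo(target, atoms, n_atoms=2)
```

With an orthogonal dictionary the pursuit recovers the two active atoms and their coefficients exactly; with a redundant learned set, as in the thesis, the selection criterion matters much more.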
APA, Harvard, Vancouver, ISO, and other styles
42

Marija, Delić. "Modeli neodređenosti u obradi digitalnih slika." Phd thesis, Univerzitet u Novom Sadu, Fakultet tehničkih nauka u Novom Sadu, 2020. https://www.cris.uns.ac.rs/record.jsf?recordId=114273&source=NDLTD&language=en.

Full text
Abstract:
The classification and segmentation of digital images are very active problems with many practical applications. In the past few decades, the demand for models that address these issues has been gaining momentum and reach in everyday life; such models are used in computer graphics, shape recognition, medical image analysis, traffic, document analysis, facial movements and expressions, and so on. The research within this doctoral dissertation was motivated by the application of the developed methods in classification and segmentation tasks. It covers two segments, linked by the notion of indeterminacy, which is incorporated into the image processing methods through an adequate mathematical apparatus, the theory of fuzzy sets. One direction of the research is founded on the theory of fuzzy sets, t-norms, t-conorms, aggregation operators, and aggregated distance functions. Within this framework, the research was conducted on a structured mathematical background: basic definitions, theorems, and properties of the operators used are presented, and the theoretical concepts of t-norms and t-conorms are extended. New types of aggregation operators are defined and used to construct new distance functions, whose contribution to the digital image segmentation process is explored and discussed. The second direction of the research involves a more engineering-oriented approach to the classification of digital image textures. To that end, the class of local binary pattern (LBP) texture descriptors is analyzed and discussed in detail. Inspired by the success of the LBP descriptor class, a new sub-family of α-texture descriptors is introduced. The introduced descriptor model is built on the conceptual principles of local binary codes and basic notions from fuzzy set theory. Its practical usage and importance are demonstrated through very successful classification results on several publicly available image datasets.
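The classical local binary pattern code that the descriptor family above builds on can be sketched directly: threshold the eight neighbours of a pixel against its centre and read the resulting bits as an integer. The neighbour ordering below is one common convention, not necessarily the one used in the dissertation.

```python
# Sketch: the basic 8-neighbour LBP code of a 3x3 grayscale patch.
import numpy as np

def lbp_code(patch: np.ndarray) -> int:
    """patch: 3x3 grayscale neighbourhood; returns the 8-bit LBP code."""
    center = patch[1, 1]
    # Clockwise neighbour order starting at the top-left pixel.
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    # Each neighbour >= centre contributes one bit of the code.
    return sum((1 << k) for k, p in enumerate(neighbours) if p >= center)

patch = np.array([[6, 5, 2],
                  [7, 6, 1],
                  [9, 8, 7]])
code = lbp_code(patch)  # histogram of such codes over an image is the texture descriptor
```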
APA, Harvard, Vancouver, ISO, and other styles
43

Boban, Bondžulić. "Процена квалитета слике и видеа кроз очување информација о градијенту." Phd thesis, Univerzitet u Novom Sadu, Fakultet tehničkih nauka u Novom Sadu, 2016. http://www.cris.uns.ac.rs/record.jsf?recordId=99807&source=NDLTD&language=en.

Full text
Abstract:
This thesis investigates objective image and video quality assessment with full and reduced reference to the original (source) signal. For quality evaluation purposes, reliable, computationally efficient measures based on the preservation of gradient information were developed. The proposed measures were tested on a large number of test images and video sequences with various types and degrees of degradation. Along with publicly available image and video quality datasets, new video quality datasets with more than 300 relevant test samples were created for the research. A comparison of the available subjective and objective quality scores shows that objective quality evaluation is a highly complex problem, but one that can be solved, with high performance, using the proposed image and video quality measures.
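A gradient-preservation quality index in the spirit of the measures described here can be sketched as follows. The stabilised similarity ratio below is a generic illustration (similar in shape to common gradient-similarity indices), not the thesis' exact formula.

```python
# Sketch: compare gradient magnitudes of a reference and a distorted image
# with a stabilised similarity ratio, then pool by averaging.
import numpy as np

def gradient_magnitude(img: np.ndarray) -> np.ndarray:
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def gradient_similarity(ref: np.ndarray, dist: np.ndarray, c: float = 1e-3) -> float:
    g1, g2 = gradient_magnitude(ref), gradient_magnitude(dist)
    # Per-pixel similarity in (0, 1]; equals 1 where gradients are preserved.
    sim = (2 * g1 * g2 + c) / (g1 ** 2 + g2 ** 2 + c)
    return float(sim.mean())

ref = np.tile(np.linspace(0, 1, 32), (32, 1))   # smooth horizontal ramp
same = gradient_similarity(ref, ref.copy())      # identical images score 1.0
noisy = gradient_similarity(ref, ref + np.random.default_rng(0).normal(0, 0.2, ref.shape))
```

Degradation disturbs the gradient field, so the noisy score drops below the identical-image score, which is the monotonic behaviour a quality measure needs.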
APA, Harvard, Vancouver, ISO, and other styles
44

Дмитренко, Тарас Васильович. "Експертна система для оцінки якості стиснення зображень та аудіоінформації." Thesis, Національний авіаційний університет, 2021. https://er.nau.edu.ua/handle/NAU/50354.

Full text
Abstract:
An expert system was proposed for evaluating the results of actions performed on data sets presented in the form of images and audio signals in real time. Modern technologies and approaches can help automate certain stages or processes by drawing on the knowledge of experts in a particular field, in accordance with the user's requirements and tasks. The proposed system makes it possible to obtain an assessment of the result of actions on data sets.
APA, Harvard, Vancouver, ISO, and other styles
45

De, Wilde Max. "From Information Extraction to Knowledge Discovery: Semantic Enrichment of Multilingual Content with Linked Open Data." Doctoral thesis, Universite Libre de Bruxelles, 2015. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/218774.

Full text
Abstract:
Discovering relevant knowledge in unstructured text is not a trivial task. Search engines relying on full-text indexing of content reach their limits when confronted with poor quality, ambiguity, or multiple languages. Some of these shortcomings can be addressed by information extraction and related natural language processing techniques, but these still fall short of adequate knowledge representation. In this thesis, we defend a generic approach striving to be as language-independent, domain-independent, and content-independent as possible. To reach this goal, we propose to disambiguate terms with their corresponding identifiers in Linked Data knowledge bases, paving the way for full-scale semantic enrichment of textual content. The added value of our approach is illustrated with a comprehensive case study based on a trilingual historical archive, addressing constraints of data quality, multilingualism, and language evolution. A proof-of-concept implementation is also proposed in the form of a Multilingual Entity/Resource Combiner & Knowledge eXtractor (MERCKX), demonstrating to a certain extent the general applicability of our methodology to any language, domain, and type of content.
Doctorate in Information and Communication
info:eu-repo/semantics/nonPublished
APA, Harvard, Vancouver, ISO, and other styles
46

Guardia, Filho Luiz Eduardo. "Sistema para controle de maquinas robotizadas utilizando dispositivos logicos programaveis." [s.n.], 2005. http://repositorio.unicamp.br/jspui/handle/REPOSIP/259017.

Full text
Abstract:
Advisor: Marconi Kolm Madrid
Master's thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Abstract: This work had as its purpose the design and construction of a hardware system able to perform real-time control of robotic machines. The approach used parallel processing techniques and reconfigurable electronics based on programmable logic devices. The implementation results show that the proposed system is efficient for controlling robots based on complex mathematical models, such as direct/inverse kinematics, dynamics, and artificial vision. The system is intended for the four hierarchical levels involved in industrial plants that use automatic control: supervision, tasks, trajectory/path, and servomechanisms. It has USB and RS-232 communication interfaces, A/D and D/A converters, image processing capabilities (with input/output for analog video signals), I/O ports, and general-purpose switches and LEDs. Its efficiency is demonstrated through practical experiments using real robotic systems: a driven pendulum system, a redundant 4-DOF robot called "Cobra", and a hardware solution for functions important for computing mathematical models in real time, such as transcendental functions.
Master's
Automation
Master in Electrical Engineering
APA, Harvard, Vancouver, ISO, and other styles
47

Sharoun, Assaid Othman. "Digitální programovatelné funkční bloky pracující v kódu zbytkových tříd." Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2011. http://www.nusl.cz/ntk/nusl-233545.

Full text
Abstract:
In a residue number system, the foundation is a set of mutually independent moduli. An integer is represented by shorter integers obtained as its remainders with respect to all the moduli, and arithmetic operations proceed independently on each modulus. Addition, subtraction and multiplication involve no carries into higher orders, which usually consume more machine time. Comparison, division and operations with fractions are complicated, and efficient algorithms are lacking. Residue number systems are therefore not used for general numerical computation, but they are very useful for digital signal processing. The dissertation concerns the design, simulation and microcomputer implementation of functional blocks for digital signal processing. The functional blocks studied are newly designed converters from binary to residue representation and back, a residue adder and a residue multiplier. New supporting algorithms were also designed.
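The carry-free residue arithmetic this abstract describes can be sketched as follows (the moduli 3, 5, 7 are chosen here only for illustration and are not taken from the thesis; conversion back to binary uses the standard Chinese Remainder Theorem):

```python
MODULI = (3, 5, 7)  # pairwise coprime moduli; dynamic range 3*5*7 = 105

def to_residues(x, moduli=MODULI):
    """Binary-to-residue conversion: remainders with respect to each modulus."""
    return tuple(x % m for m in moduli)

def add(a, b, moduli=MODULI):
    """Digit-wise, carry-free addition."""
    return tuple((ai + bi) % m for ai, bi, m in zip(a, b, moduli))

def mul(a, b, moduli=MODULI):
    """Digit-wise, carry-free multiplication."""
    return tuple((ai * bi) % m for ai, bi, m in zip(a, b, moduli))

def from_residues(r, moduli=MODULI):
    """Residue-to-binary conversion via the Chinese Remainder Theorem."""
    M = 1
    for m in moduli:
        M *= m
    x = 0
    for ri, mi in zip(r, moduli):
        Mi = M // mi
        x += ri * Mi * pow(Mi, -1, mi)  # modular inverse (Python 3.8+)
    return x % M
```

Each residue digit is small, so each channel's adder and multiplier is short and fast, which is why the scheme suits the hardware functional blocks the thesis implements.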
APA, Harvard, Vancouver, ISO, and other styles
48

Angelo, Tiago Novaes 1983. "Extrator de conhecimento coletivo : uma ferramenta para democracia participativa." [s.n.], 2014. http://repositorio.unicamp.br/jspui/handle/REPOSIP/259820.

Full text
Abstract:
Advisors: Ricardo Ribeiro Gudwin, Cesar José Bonjuani Pagan
Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Resumo: The emergence of Information and Communication Technologies has brought a new perspective for strengthening democracy in modern societies. Representative democracy, the prevailing model in today's societies, is going through a credibility crisis whose main consequence is citizens' withdrawal from political participation, weakening democratic ideals. In this context, technology emerges as a possibility for building a new model of popular participation that restores a more active citizenship, inaugurating what is called digital democracy. The objective of this research was to develop and implement a tool, named "Extrator de Conhecimento Coletivo" (Collective Knowledge Extractor), with the purpose of learning what a collective thinks about its reality from short reports by its participants, giving voice to the population in a process of participatory democracy. The theoretical foundations are based on data-mining methods, extractive summarizers and complex networks. The tool was implemented and tested using a database of customers' opinions about their stays at a hotel. The results were satisfactory. For future work, the proposal is for the Collective Knowledge Extractor to be the data-processing core of a virtual space where the population can express itself and actively exercise its citizenship
Abstract: The emergence of Information and Communication Technologies brought a new perspective to the strengthening of democracy in modern societies. Representative democracy, the prevailing model in today's societies, is going through a credibility crisis whose main consequence is citizens' withdrawal from political participation, weakening democratic ideals. In this context, technology emerges as a possibility for building a new model of popular participation that restores a more active citizenship, inaugurating what is called digital democracy. The objective of this research was to develop and implement a tool called "Collective Knowledge Extractor", with the purpose of learning what a collective thinks about its reality through short reports from its participants, giving voice to the people in a process of participatory democracy. The theoretical foundations are based on methods of data mining, extractive summarizers and complex networks. The tool was implemented and tested using a database consisting of customer reviews about their stays at a hotel. The results were satisfactory. For future work, the proposal is for the Collective Knowledge Extractor to serve as the data-processing core of a virtual space where people can express themselves and actively exercise their citizenship
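One of the theoretical foundations named in this abstract, extractive summarization, can be sketched with a simple frequency-based sentence scorer (an illustrative approximation, not the thesis's actual method) applied to hotel-review-like text:

```python
import re
from collections import Counter

def extract_summary(text, n=1):
    """Return the n sentences whose words are most frequent across the text."""
    sentences = [s.strip() for s in re.split(r'(?<=[.!?])\s+', text) if s.strip()]
    freq = Counter(re.findall(r'\w+', text.lower()))

    def score(sentence):
        # Average corpus frequency of the sentence's tokens.
        tokens = re.findall(r'\w+', sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    return sorted(sentences, key=score, reverse=True)[:n]
```

A real extractive summarizer would add stop-word removal and redundancy control, but the principle is the same: select, rather than generate, the sentences that best represent the collective's reports.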
Master's
Computer Engineering
Master in Electrical Engineering
APA, Harvard, Vancouver, ISO, and other styles
49

Buda, Bajić Papuga. "Methods for image restoration and segmentation by sparsity promoting energy minimization." PhD thesis, Univerzitet u Novom Sadu, Fakultet tehničkih nauka u Novom Sadu, 2019. https://www.cris.uns.ac.rs/record.jsf?recordId=110640&source=NDLTD&language=en.

Full text
Abstract:
The energy minimization approach is widely used in image processing applications. Many image processing problems can be modelled as minimization problems. This thesis deals with two crucial tasks of image analysis workflows: restoration and segmentation of images corrupted by blur and noise. Both image restoration and segmentation are modelled as energy minimization problems, where the energy function is composed of two parts: a data fidelity term and a regularization term. The main contribution of this thesis is the development of new data fidelity and regularization terms for both tasks. The image restoration methods (non-blind and blind deconvolution and super-resolution reconstruction) developed within this thesis are suited for mixed Poisson-Gaussian noise, which is encountered in many realistic imaging conditions. We use the generalized Anscombe variance stabilization transformation to remove the signal dependency of the noise, and propose a novel data fidelity term which takes the variance stabilization transformation into account. Turning to the regularization term for image restoration, we investigate how sparsity-promoting regularization in the gradient domain, formulated as Total Variation, can be improved in the presence of blur and mixed Poisson-Gaussian noise. We found that the Huber potential function leads to a significant improvement of restoration performance. In this thesis we also propose a new segmentation method, so-called coverage segmentation, which estimates the relative coverage of each pixel in a sensed image by each image component. Its data fidelity term takes blurring and down-sampling processes into account, and in that way provides robust segmentation in the presence of blur while allowing segmentation at increased spatial resolution.
In addition, new sparsity-promoting regularization terms are suggested: (i) Huberized Total Variation, which provides smooth object boundaries and noise removal, and (ii) non-edge image fuzziness, which responds to the assumption that imaged objects are crisp and that fuzziness is mainly due to the imaging and digitization process. The applicability of the proposed restoration and coverage segmentation methods is demonstrated on Transmission Electron Microscopy image enhancement and on segmentation of micro-computed tomography and hyperspectral images.
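The Huber potential referred to in this abstract has a standard form, quadratic near zero and linear in the tails, which is what softens Total Variation's penalty on small gradients (the threshold value below is illustrative, not taken from the thesis):

```python
def huber(t, delta=0.05):
    """Huber potential: quadratic for |t| <= delta, linear beyond (C1-continuous)."""
    if abs(t) <= delta:
        return t * t / (2 * delta)
    return abs(t) - delta / 2

def huberized_tv(image, delta=0.05):
    """Sum of Huber-penalized forward differences of a 2-D image (list of lists)."""
    rows, cols = len(image), len(image[0])
    energy = 0.0
    for i in range(rows):
        for j in range(cols):
            if i + 1 < rows:  # vertical difference
                energy += huber(image[i + 1][j] - image[i][j], delta)
            if j + 1 < cols:  # horizontal difference
                energy += huber(image[i][j + 1] - image[i][j], delta)
    return energy
```

Because the potential is differentiable at zero, minimizing this energy avoids the staircasing that plain Total Variation produces in smooth regions, while large jumps (edges) are still penalized only linearly.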
APA, Harvard, Vancouver, ISO, and other styles
50

Angelo, Alex Garcia Smith. "Considerações sobre um campo conceitual comum entre a formação básica escolar, projeto e as tecnologias digitais de modelagem e fabricação." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/16/16134/tde-07032016-155459/.

Full text
Abstract:
Resumo: The product of theoretical and empirical research, this dissertation addresses a common conceptual field among the project area belonging to architecture and design, basic school education, and digital modelling and fabrication technologies. The theme is grounded in contemporaneity, insofar as new means of supporting teaching and learning are being introduced into basic schooling. In this way, the computer, until then used as a text and image processor, is allied with modelling, digital fabrication and networked communication. As discussed in this dissertation, the term "design" is employed with distinct emphases in the field of basic schooling: the first more closely linked to the field of the arts, while the second belongs to the field of fabrication technologies and the study of the material culture of contemporary society. In view of the common field of these actions permeating the areas of design, basic school education and digital technologies, this investigation considers, from a theoretical basis, a group of eighteen points of convergence that supported the field work. The field work was structured around four types of workshops held on the outskirts of the city of Guarulhos, in the metropolitan region of São Paulo, organized in a free-access environment and detailed in this work. This theoretical-practical formulation aims to draw considerations about the common field under study, in which digital modelling and fabrication technologies have aided the development of languages and mental skills in an audience within basic schooling. The aim is thus to advance the debate on the formation of the individual today.
Abstract: The product of theoretical and empirical research, this paper discusses a common conceptual ground among the project area (which includes architecture and design), basic education, and digital modelling and manufacturing technologies. Its subject is grounded in contemporaneity, to the extent that new means of support for learning and teaching are being introduced in basic education. In this way, the computer, until then used exclusively for text and image processing, allies with modelling, digital manufacturing, and network communication. As herein discussed, the word "design" is used with different meanings in the field of basic education: the first has a stronger relation with the field of arts, whereas the second lies within the field of manufacturing technologies and the study of the material culture of contemporary society. Considering the common ground of said activities, which permeate the fields of design, basic education and digital technology, this research takes into account a group of eighteen points of convergence, drawn from a theoretical basis, that supported the field studies. Those studies were structured around four types of workshops held on the outskirts of Guarulhos, in the metropolitan region of São Paulo, organized in a free-access environment, and detailed herein. The purpose of this theoretical and practical formulation is to draw considerations from the studied common ground, insofar as digital modelling and manufacturing technologies have aided the development of languages and mental skills of a young audience still in basic education. The aim is to advance the debate on the formation of the individual nowadays.
APA, Harvard, Vancouver, ISO, and other styles
