Dissertations / Theses on the topic 'Worm Algorithm'

Consult the top 50 dissertations / theses for your research on the topic 'Worm Algorithm.'


1

Silva, Antônio Márcio Pereira. "Estudos sobre o modelo O(N) na rede quadrada e dinâmica de bolhas na célula de Hele-Shaw." Universidade Federal de Pernambuco, 2013. https://repositorio.ufpe.br/handle/123456789/17187.

Abstract:
In this thesis two classes of problems are discussed. First, we present computational studies of the O(n) spin model on the square lattice and determine its critical properties, whereas in the second part of the thesis we present new exact solutions for bubble dynamics in a Hele-Shaw cell. The O(n) model is investigated by using its loop representation, which is obtained from a high-temperature expansion of the original model. In this representation, the partition function admits a diagrammatic expansion in which each term depends on the number and total length of loops (closed graphs) as well as on the number of intersections between these loops. Critical properties of the O(n) model are obtained by employing concepts from percolation theory. To perform Monte Carlo simulations of the model, we use the WORM algorithm, an efficient algorithm that performs local updates through the motion of one of the ends (called the head) of an open chain (called the worm) and hence does not suffer from “critical slowing down”. To implement this algorithm efficiently for the O(n) model on the square lattice, we make use of a new data structure known as a satellite list. We present estimates for the critical point of the model for various values of n in the range 0 < n ≤ 2. We use the statistics of the loops and of the worm to extract the thermal and magnetic critical exponents of the model, respectively. In our study of interface dynamics, we present a rather general exact solution for a periodic array of bubbles moving with constant velocity in a Hele-Shaw cell. Using the periodicity of the solution, the relevant domain of the problem can be reduced to a unit cell containing a single bubble. No symmetry requirement is imposed on the bubble shape, so the solution is capable of generating completely asymmetrical bubbles. Our solution is obtained by using conformal mappings between doubly connected domains and employing the generalized Schwarz-Christoffel formula for this class of domains.
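To make the update rule concrete, below is a minimal sketch of a worm move for the simplest case, the n = 1 loop model (the high-temperature graphs of the Ising model) on a periodic square lattice. The per-bond weight x and all names are illustrative assumptions, not the thesis's code; the actual implementation handles general n and uses satellite lists for efficiency.

```python
import random

L = 16        # linear lattice size
x = 0.4       # weight per occupied bond from the high-temperature expansion
bonds = {}    # undirected edge -> occupied (True/False)

def neighbors(s):
    i, j = s
    return [((i + 1) % L, j), ((i - 1) % L, j),
            (i, (j + 1) % L), (i, (j - 1) % L)]

def edge(a, b):
    return (a, b) if a <= b else (b, a)

def worm(max_moves=100000):
    """Insert head = tail at a random site, random-walk the head while
    flipping bond occupations with Metropolis acceptance, and stop when
    the worm closes (head meets tail), leaving a loop configuration."""
    tail = head = (random.randrange(L), random.randrange(L))
    for _ in range(max_moves):
        nxt = random.choice(neighbors(head))
        e = edge(head, nxt)
        occupied = bonds.get(e, False)
        ratio = (1.0 / x) if occupied else x   # weight change of the flip
        if random.random() < min(1.0, ratio):
            bonds[e] = not occupied
            head = nxt
        if head == tail:
            return
    # in the rare unclosed case, this sketch simply abandons the worm

for _ in range(200):
    worm()
print("occupied bonds:", sum(bonds.values()))
```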
2

Meier, Hannes. "Phase transitions in novel superfluids and systems with correlated disorder." Doctoral thesis, KTH, Statistisk fysik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-160929.

Abstract:
Condensed matter systems undergoing phase transitions rarely allow exact solutions. The presence of disorder renders the situation even worse, but collective Monte Carlo methods and parallel algorithms allow numerical descriptions. This thesis considers classical phase transitions in disordered spin systems in general, and in effective models of superfluids with disorder and novel interactions in particular. Quantum phase transitions are considered via a quantum-to-classical mapping. Central questions are whether the presence of defects changes universal properties and what qualitative implications follow for experiments. Common to the cases considered is that the disorder maps out correlated structures. All results are obtained using large-scale Monte Carlo simulations of effective models capturing the relevant degrees of freedom at the transition. Considering a model system for superflow aided by a defect network, we find that the onset properties are significantly altered compared to the $\lambda$-transition in $^{4}$He. This has qualitative implications for the expected experimental signatures in a defect-supersolid scenario. For the Bose glass to superfluid quantum phase transition in 2D, we determine the quantum correlation time by an anisotropic finite-size scaling approach. Without a priori assumptions on critical parameters, we find the critical exponent $z = 1.8 \pm 0.05$, contradicting the long-standing result $z = d$. Using a 3D effective model for multi-band type-1.5 superconductors, we find that these systems possibly feature a strong first-order vortex-driven phase transition. Despite its short-range nature, details of the interaction are shown to play an important role. Phase transitions in disordered spin models exposed to correlated defect structures, obtained via rapid quenches of critical loop and spin models, are investigated. On long length scales the correlations are shown to decay algebraically, with decay exponents expressible through known critical exponents of the disorder-generating models. For cases where the disorder correlations imply the existence of a new long-range-disorder fixed point, we determine the critical exponents of the disordered systems via finite-size scaling of Monte Carlo data and find good agreement with theoretical expectations.

3

Saccani, Sebastiano. "Quantum Monte Carlo studies of soft Bosonic systems and Minimum Energy Pathways." Doctoral thesis, SISSA, 2013. http://hdl.handle.net/20.500.11767/4931.

Abstract:
In this thesis, we make use of Monte Carlo techniques to address two rather different subjects in condensed matter physics. The first study deals with the characterization of a relatively novel and elusive phase of matter, the so-called supersolid, in which crystalline order and dissipationless flow coexist. While supersolidity is a well-studied phenomenology in lattice models, we work here in continuous space, where far fewer results are available. Specifically, we study a soft-core Bosonic system, the quantum analog of thoroughly studied classical models, which displays an unambiguous supersolid phenomenology. In this system such behavior is not obtained through Bose condensation of lattice defects, but rather is mean-field in character. By computer simulations we characterize many properties of the system, the most prominent being its phase diagram and its excitation spectrum. This study is loosely related to the field of ultracold-atom experiments, as it is speculated that interparticle potentials of the same class as the one employed here may be realized in that context. After the recent (and apparently definitive) ruling out of supersolidity effects in ^4He, it seems fair to state that ultracold atoms are the most promising candidate for the observation of this phenomenology. In this part we employ our own implementation of the worm algorithm in the continuum. The second part of this thesis concerns electronic structure, more specifically the study of minimum energy pathways of reactions calculated via quantum Monte Carlo methods. In particular, we aim at assessing the computational feasibility and the accuracy of determining the most significant geometries of a reaction (initial/final and transition state) and its energy barrier via these stochastic techniques. To this end, we perform calculations on a set of simple reactions and compare the results with density functional theory and high-level quantum chemistry calculations. We show that the employed technique indeed performs better than density functional theory for both geometries and energy barriers. Our methodology is therefore a good candidate for studying reactions in which a high accuracy is needed but high-level quantum chemistry methods cannot be employed due to computational limitations. We believe that this study is significant also because of its systematic use of forces from Monte Carlo simulations. Although several studies have addressed various aspects of the problem of computing forces within quantum Monte Carlo accurately and efficiently, there is little awareness that such estimators are in fact mature, and consequently very few studies actually employ them. We hope to show here that these estimators are ready to be used and provide good results. In this part we mainly developed interfaces for existing quantum Monte Carlo codes.
4

Ohlsson, Patrik. "Computer Assisted Music Creation : A recollection of my work and thoughts on heuristic algorithms, aesthetics, and technology." Thesis, Kungl. Musikhögskolan, Institutionen för komposition, dirigering och musikteori, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kmh:diva-2090.

Abstract:
This text is partly a documentation of my own journey within computer-based composition, specifically algorithmic composition. It is also an exploration of the world of ideas connected to these methods, in which aesthetic concepts and consequences are discussed. The text mainly concerns methods that benefit from, or are made possible by, technology. I have tried to approach these subjects holistically by discussing everything from aesthetics and technology to concrete realizations of particular musical ideas. Numerous notated examples, some code, and illustrations are included, specifically to support the explanations of extra-musical concepts that are unfamiliar to many music students.
5

Stadtherr, Hans. "Work efficient parallel scheduling algorithms." [S.l. : s.n.], 1998. http://deposit.ddb.de/cgi-bin/dokserv?idn=962681369.

6

Thakkar, Darshan Suresh. "FPGA Implementation of Short Word-Length Algorithms." RMIT University. Electrical and Computer Engineering, 2008. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20080806.140908.

Abstract:
Short Word-Length (SWL) refers to single-bit, two-bit or ternary processing systems. SWL systems use the Sigma-Delta Modulation (SDM) technique to express an analogue or multi-bit input signal as a high-frequency single-bit stream. In Sigma-Delta Modulation, the input signal is coarsely quantized into a single-bit representation by sampling it at a rate much higher than twice the maximum input frequency, viz. the Nyquist rate. This single-bit representation is almost always filtered to remove conversion quantization noise and decimated to the Nyquist frequency in preparation for traditional signal processing. SWL algorithms have huge potential in a variety of applications, as they offer many advantages compared to multi-bit approaches, including efficient hardware implementation, increased flexibility and substantial cost savings. Field Programmable Gate Arrays (FPGAs) are SRAM/Flash-based integrated circuits that can be programmed and re-programmed by the end user. FPGAs are made up of arrays of logic gates, routing channels and I/O blocks. State-of-the-art FPGAs include features such as advanced clock management, dedicated multipliers, DSP slices, high-speed I/O and embedded microprocessors. A System-on-Programmable-Chip (SoPC) design approach uses some or all of the aforementioned resources to create a complete processing system on the device itself, ensuring maximum silicon area utilization and higher speed by eliminating inter-chip communication overheads. This dissertation focuses on the application of SWL processing systems to audio Class-D amplifiers and aims to substantiate the claims of efficient hardware implementation and higher speeds of operation. The analog Class-D amplifier is analyzed and an SWL equivalent of the system is derived by replacing the analogue components with DSP functions wherever possible. The SWL Class-D amplifier is implemented on an FPGA, the standard emulation platform, using the VHSIC Hardware Description Language (VHDL). The approach is taken a step further by adding re-configurability and media selectivity and by proposing SDM adaptivity to improve performance.
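As an illustration of the single-bit encoding the abstract describes, here is a minimal first-order sigma-delta modulator in Python. The dissertation targets VHDL on an FPGA; this toy model, with made-up signal parameters, only shows the oversample-quantize-decimate idea.

```python
import math

def sigma_delta(samples):
    """Integrate the error between the input and the fed-back single-bit
    output, then quantize the sign of the integrator (first-order SDM)."""
    integ, bits = 0.0, []
    fb = -1.0                      # initial feedback level (assumed)
    for s in samples:              # s assumed in [-1, 1]
        integ += s - fb
        bit = integ >= 0.0
        fb = 1.0 if bit else -1.0
        bits.append(bit)
    return bits

N, osr = 4096, 64                  # samples and oversampling ratio
signal = [0.5 * math.sin(2 * math.pi * k / (osr * 16)) for k in range(N)]
bits = sigma_delta(signal)
# crude decimation filter: average each block of `osr` bits
recovered = [sum(2.0 * b - 1.0 for b in bits[i:i + osr]) / osr
             for i in range(0, N, osr)]
print([round(v, 2) for v in recovered[:8]])
```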
7

Cecchini, Flavio Massimiliano. "Graph-based Clustering Algorithms for Word Sense Induction." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2017. http://hdl.handle.net/10281/151635.

Abstract:
This dissertation is about Word Sense Induction (WSI), a branch of Natural Language Processing concerned with the automated, unsupervised detection and listing of the possible senses that a word can assume relative to the different contexts in which it appears. To this end, no external resources like dictionaries or ontologies are used. Among the many existing approaches to WSI, we focus specifically on modelling the context of a word through a graph and on running a clustering algorithm on it: the resulting clusters are interpreted as implicitly describing the possible senses of the word. Fundamental notions of WSI, basic concepts and some WSI approaches selected from the literature are presented and examined in the first part of this work. In the second part, we introduce our threefold contribution. Firstly, we define and explore a weighted (together with an unweighted) Jaccard distance, i.e. a distance on the nodes of a positively weighted undirected graph, which we use to obtain second-order relations from the first-order ones modelled by the graph (e.g. co-occurrences). Moreover, we define the related notion of gangplank edge, a separator edge with weight greater than the mean weight of the edges incident to either of its two ends, and finally a new synthetic interpretation of curvature on a graph, seen as the difference between the weighted and unweighted Jaccard distances between node pairs. Our Jaccard distance is at the basis of the second contribution: three novel graph-based clustering algorithms expressly created for the task of WSI, namely a gangplank clustering algorithm, an aggregative clustering algorithm and a curvature-based clustering algorithm. The third contribution is a novel evaluation framework for graph-based clustering algorithms for WSI, consisting of two word-graph data sets (one for co-occurrences and one for semantic similarities) and a new ad hoc evaluation measure, both built around pseudowords. A pseudoword is the artificial conflation of two existing words, used as an ambiguous word whose (pseudo)senses are perfectly known. This makes it possible to evaluate WSI algorithms on an easily creatable and expandable data set. We carry out a pseudoword-based evaluation for a number of graph-based clustering algorithms, including our three proposed systems. The investigation of how the parameters of a pseudoword affect an algorithm's outcomes, the comparison of the scores obtained by different evaluation metrics together with the detection of their biases, the size of the clusterings and the trends put in evidence by the hyperclustering step, the influence of the type of a word graph (based on semantic similarities or co-occurrences) on the output of an algorithm: all these factors, preceded by a comprehensive description of the task and the definition of novel concepts and instruments to tackle it, combine to give a deeper insight into the functioning and pitfalls of graph-based Word Sense Induction. We highlight and isolate the elements that determine what the results of an algorithm look like, discuss their properties and behaviours in relation to word-graph features, and establish the pros and cons of each algorithm. Our analysis provides an experimental compass that helps pinpoint the characteristics required of a clustering algorithm for the task of Word Sense Induction, and that helps orient the construction of a word graph. In particular, we have put in evidence the syntagmatic versus paradigmatic contrast inherent in word graphs based on co-occurrences and on semantic similarities.
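To make the second-order relation concrete, here is a small sketch of a weighted Jaccard distance between the weighted neighbourhoods of two nodes. The formula (one minus the ratio of summed minima to summed maxima) and the toy co-occurrence graph are our illustrative assumptions; the thesis's exact definitions may differ in detail.

```python
def weighted_jaccard_distance(g, u, v):
    """1 - sum(min)/sum(max) over the union of the two nodes' weighted
    neighbourhoods: 0 for identical neighbourhoods, 1 for disjoint ones."""
    wu, wv = g.get(u, {}), g.get(v, {})
    keys = set(wu) | set(wv)
    num = sum(min(wu.get(k, 0.0), wv.get(k, 0.0)) for k in keys)
    den = sum(max(wu.get(k, 0.0), wv.get(k, 0.0)) for k in keys)
    return 1.0 if den == 0.0 else 1.0 - num / den

# toy co-occurrence graph: word -> {neighbour: weight}
g = {
    "bank":  {"money": 3.0, "river": 1.0, "loan": 2.0},
    "shore": {"river": 2.0, "sand": 1.0},
    "loan":  {"money": 2.0, "bank": 2.0},
}
print(weighted_jaccard_distance(g, "bank", "shore"))  # high: little overlap
print(weighted_jaccard_distance(g, "bank", "loan"))   # lower: shared contexts
```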
8

Costa, Karine Piacentini Coelho da. "Estudo do modelo de Bose-Hubbard usando o algoritmo Worm." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/43/43134/tde-27022012-085711/.

Abstract:
This work studies two-dimensional ultracold bosonic atoms loaded into a square optical lattice, without harmonic confinement. The dynamics of this system is described by the Bose-Hubbard model, which predicts a quantum phase transition from a superfluid to a Mott insulator at low temperatures that can be induced by varying the depth of the optical potential. We present the phase diagram of this transition built from a mean-field approach and from a numerical calculation using a Quantum Monte Carlo method, namely the Worm algorithm. We found the critical transition point for the first Mott lobe in both cases, in agreement with the standard literature.
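For reference, the mean-field boundary of the first Mott lobe mentioned above follows from second-order perturbation theory in the superfluid order parameter; the short script below evaluates that textbook formula. It is not the thesis's worm-algorithm calculation.

```python
# Mean-field boundary of the first Mott lobe (n = 1) of the Bose-Hubbard
# model, from second-order perturbation theory in the superfluid order
# parameter: 1/(z*t_c) = (n+1)/(U*n - mu) + n/(mu - U*(n-1)).
# z = 4 on the square lattice; everything is in units of U.
U, z, n = 1.0, 4, 1

def t_c(mu):
    return 1.0 / (z * ((n + 1) / (U * n - mu) + n / (mu - U * (n - 1))))

lobe = [(mu / 100, t_c(mu / 100)) for mu in range(1, 100)]
tip = max(lobe, key=lambda p: p[1])
print("lobe tip: mu/U = %.2f, t/U = %.4f" % tip)   # about (0.41, 0.043)
```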
9

Embretsén, Stefan. "Modifying a pure pursuit algorithm to work in three dimensions." Thesis, Umeå universitet, Institutionen för datavetenskap, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-142508.

Abstract:
Path tracking has been an important problem in robotics for many decades, and it is used to give a robot the ability to follow a path. The many different path-tracking algorithms all have specific behaviors and may not be optimal, or even applicable, in all settings. To broaden the use of an existing path-tracking algorithm, pure pursuit, this report sets out to modify it to work in three dimensions instead of two.
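A minimal sketch of the core pure-pursuit step lifted to three dimensions by doing the geometry on 3-vectors: find the point on the path at one lookahead distance from the robot and steer toward it. The path, lookahead value and names are illustrative, not taken from the thesis.

```python
import math

def lookahead_point(path, pos, lookahead):
    for a, b in zip(path, path[1:]):
        d = [bi - ai for ai, bi in zip(a, b)]       # segment direction
        f = [ai - pi for ai, pi in zip(a, pos)]     # start relative to robot
        # solve |f + t*d| = lookahead for t in [0, 1]
        A = sum(x * x for x in d)
        B = 2.0 * sum(x * y for x, y in zip(f, d))
        C = sum(x * x for x in f) - lookahead ** 2
        disc = B * B - 4.0 * A * C
        if A == 0.0 or disc < 0.0:
            continue                                # segment out of reach
        t = (-B + math.sqrt(disc)) / (2.0 * A)      # farther intersection
        if 0.0 <= t <= 1.0:
            return [ai + t * di for ai, di in zip(a, d)]
    return list(path[-1])                           # near the end: chase it

path = [(0, 0, 0), (2, 0, 1), (4, 1, 2), (6, 3, 2)]
goal = lookahead_point(path, (0.0, 0.0, 0.0), 2.5)
print("steer toward", goal)                         # feed to velocity control
```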
10

Chen, Wen-Tsong. "Word level training of handwritten word recognition systems." Free to MU campus, to others for purchase, 2000. http://wwwlib.umi.com/cr/mo/fullcit?p9974612.

11

Sulaiman, Nasri. "Genetic algorithms for word length optimization of FFT processors." Thesis, University of Edinburgh, 2007. http://hdl.handle.net/1842/14513.

Abstract:
Genetic algorithms (GAs) are a particular class of evolutionary algorithms that use techniques inspired by evolutionary biology, such as inheritance, mutation, selection, and crossover, to find the best solutions to optimization and search problems. GAs are used in a wide variety of applications, in fields ranging from computer science and engineering to evolvable hardware, economics, mathematics, physics and biogenetics, to name a few. A fast Fourier transform (FFT) is an efficient algorithm to compute the discrete Fourier transform (DFT) and its inverse. FFT processors are used in applications such as signal processing and telecommunications, and are among the most power-consuming blocks in wireless receivers such as Multi-Carrier Code Division Multiple Access (MC-CDMA). The portability requirement of these receiver systems imposes the need for low-power architectures, so designing an FFT processor with low power consumption is of crucial importance for overall system power. The power consumption of an FFT processor depends on the word length of the FFT coefficients. One way to reduce the power consumption in this processor is to reduce the switching activity in the FFT coefficients, which can be achieved using smaller word lengths for the coefficients. This in turn reduces the SNR of the output signals of the FFT. This thesis investigates the impact of word-length optimization of FFT coefficients on switching activity and SNR using GAs. The quality of the GA solutions is compared with non-GA solutions in order to determine the feasibility of using GAs to achieve optimum performance in terms of switching activity and SNR. Results show that GAs can find solutions with smaller word lengths and significant reductions in switching compared to the non-GA solutions. This thesis also investigates how varying parameter settings, such as mutation domain, population size, crossover rate and mutation probability, affect the quality of the GA search towards convergence and the speed of convergence.
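The toy GA below illustrates the kind of search described: chromosomes are vectors of per-coefficient word lengths, and the fitness trades a switching-activity proxy against a quantization (SNR) penalty. The fitness formula and all parameters are stand-ins; the thesis evaluates real switching counts and SNR of an FFT.

```python
import random

GENES, POP, GENS = 16, 30, 200     # 16 coefficient word lengths (assumed)
MIN_W, MAX_W = 4, 16

def fitness(ws):
    switching = sum(ws)                              # proxy: grows with width
    snr_penalty = sum((MAX_W - w) ** 2 for w in ws)  # proxy: short words hurt SNR
    return -(switching + 0.3 * snr_penalty)

def mutate(ws, p=0.1):
    return [min(MAX_W, max(MIN_W, w + random.choice((-1, 1))))
            if random.random() < p else w for w in ws]

def crossover(a, b):
    cut = random.randrange(1, GENES)                 # single-point crossover
    return a[:cut] + b[cut:]

pop = [[random.randint(MIN_W, MAX_W) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:POP // 2]                           # truncation selection
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(POP - len(elite))]
print("best word lengths:", max(pop, key=fitness))
```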
12

Isaac, Andreas. "Evaluation of word segmentation algorithms applied on handwritten text." Thesis, Uppsala universitet, Avdelningen för visuell information och interaktion, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-414609.

Abstract:
The aim of this thesis is to build and evaluate a word segmentation algorithm for extracting words from historical handwritten documents. Since historical documents often contain background noise, the aim is also to investigate whether applying a background removal algorithm affects the final result. Three different types of historical handwritten documents are used, in order to compare the output of two different word segmentation algorithms. The results indicate that the background removal algorithm increases the accuracy obtained with word segmentation. The segmentation algorithm developed in this work successfully extracts a majority of the words, while the pre-existing algorithm has difficulties with some documents. One conclusion is that the type of document plays the key role in whether a poor result is obtained; hence, different algorithms may be needed rather than a single one for all types of documents.
13

Sinha, Ravi Som. "Graph-based Centrality Algorithms for Unsupervised Word Sense Disambiguation." Thesis, University of North Texas, 2008. https://digital.library.unt.edu/ark:/67531/metadc9736/.

Abstract:
This thesis introduces an innovative methodology that combines traditional dictionary-based approaches to word sense disambiguation (semantic similarity measures and overlap of word glosses, both based on WordNet) with graph-based centrality methods, namely vertex degree, PageRank, closeness, and betweenness. The approach is completely unsupervised and is based on creating graphs for the words to be disambiguated. We experiment with several possible combinations of the semantic similarity measures as the first stage of our experiments. The next stage scores individual vertices in the previously created graphs using several graph connectivity measures. During the final stage, several voting schemes are applied to the results obtained from the different centrality algorithms. The most important contributions of this work are not only that the approach is novel and works well, but also that it has great potential for overcoming the knowledge-acquisition bottleneck which has apparently brought research in supervised WSD, as an explicit application, to a plateau. The type of research reported in this thesis, which does not require manually annotated data, holds promise of much that is new and interesting, and our work is one of the first steps, albeit a small one, in this direction. The complete system is built and tested on standard benchmarks, and is comparable with work done on graph-based word sense disambiguation as well as lexical chains. The evaluation indicates that the right combination of the above-mentioned metrics can be used to develop an unsupervised disambiguation engine as powerful as the state of the art in WSD.
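A toy version of this pipeline: vertices are candidate senses, edges carry sense-to-sense similarities, and each sense is scored by a centrality measure (weighted degree here, the simplest of those listed). The sense inventory and similarity values below are invented for illustration; the thesis derives them from WordNet-based measures.

```python
from itertools import combinations

senses = {
    "bank":  ["bank#finance", "bank#river"],
    "loan":  ["loan#money"],
    "water": ["water#liquid"],
}
similarity = {
    ("bank#finance", "loan#money"):   0.9,
    ("bank#river",   "water#liquid"): 0.8,
    ("bank#finance", "water#liquid"): 0.1,
    ("bank#river",   "loan#money"):   0.1,
    ("loan#money",   "water#liquid"): 0.2,
}

def sim(a, b):
    return similarity.get((a, b)) or similarity.get((b, a)) or 0.0

def disambiguate(words):
    """Score every candidate sense by its weighted degree in the sense
    graph, then pick the top-scoring sense for each word."""
    score = {s: 0.0 for w in words for s in senses[w]}
    for w1, w2 in combinations(words, 2):
        for s1 in senses[w1]:
            for s2 in senses[w2]:
                score[s1] += sim(s1, s2)
                score[s2] += sim(s1, s2)
    return {w: max(senses[w], key=score.get) for w in words}

print(disambiguate(["bank", "loan"]))   # -> bank#finance
print(disambiguate(["bank", "water"]))  # -> bank#river
```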
14

Sinha, Ravi Som. "Graph-based centrality algorithms for unsupervised word sense disambiguation." [Denton, Tex.]: University of North Texas, 2008. http://digital.library.unt.edu/permalink/meta-dc-9736.

15

Vu, Trong-Tuan. "Heterogeneity and locality-aware work stealing for large scale Branch-and-Bound irregular algorithms." Thesis, Lille 1, 2014. http://www.theses.fr/2014LIL10151/document.

Abstract:
Branch and Bound (B&B) algorithms are exact methods used to solve combinatorial optimization problems (COPs). The computation process of B&B is extremely time-intensive when solving large problem instances, since the algorithm must explore a very large space which can be viewed as a highly irregular tree. Consequently, B&B algorithms are usually parallelized on large-scale distributed computing environments in order to speed up their execution time. Large-scale distributed computing environments, such as grids and clouds, can provide a huge amount of computing resources, so that very large B&B instances can be tackled. However, achieving high performance is very challenging, mainly because of (i) the irregular characteristics of the B&B workload and (ii) the heterogeneity exposed by large-scale computing environments. This thesis addresses these issues in order to design high-performance parallel B&B on large-scale heterogeneous computing environments. We focus on dynamic load balancing techniques, which guarantee that no computing resources are underloaded or overloaded during execution. We also show how to tackle the irregularity of B&B while running on different computing environments, and compare our proposed solutions with state-of-the-art algorithms. In particular, we propose several dynamic load balancing algorithms for homogeneous, node-heterogeneous and link-heterogeneous computing platforms. In each context, our approach is shown to perform much better than the state-of-the-art approaches.
16

Willis, Timothy Alan. "A flexible expansion algorithm for user-chosen abbreviations." Thesis, University of Edinburgh, 2008. http://hdl.handle.net/1842/3477.

Abstract:
People with some types of motor disabilities who wish to generate text using a computer can find the process both fatiguing and time-consuming. These problems can be alleviated by reducing the quantity of keystrokes they must make, and one approach is to allow the user to enter shortened, abbreviated input, which is then re-expanded for them by a program ‘filling in the gaps’. Word Prediction is one approach, but comes with drawbacks, one of which is the requirement that generally the user must type the first letters of their intended word, regardless of how unrepresentative they may consider those letters to be. Abbreviation Expansion allows the user to type reduced forms of many words in a way they feel represents them more effectively. This can be done by the omission of one or more letters, or the replacement of letter sequences with other, usually shorter, sequences. For instance, the word ‘hyphenate’ might be shortened to ‘yfn8’, by leaving out some letters and replacing the ‘ph’ and ‘ate’ with the shorter but phonetically similar ‘f’ and ‘8’. ‘Fixed Abbreviation Expansion’ requires the user to memorise a set of correspondences between abbreviations and the full words which they represent. While this enables useful keystroke savings to be made, these come alongside an increased cognitive load and potential for error. Where a word is encountered for which there is no preset abbreviation, or for which the user cannot remember one, keystroke savings may be lost. ‘Flexible Abbreviation Expansion’ allows the user to leave out whichever letters they feel to be ‘less differentiating’ and jump straight ahead to type those they feel are most ‘salient’ and most characterise the word, choosing abbreviations ‘on the fly’. The need to memorise sets of correspondences is removed, as the user can be offered all candidates for which the abbreviation might be a representation, usually in small sets on screen. For useful savings to be made, the intended word must regularly be in the first or second set for quick selection, or the system might attempt to place the intended word at the very top of its list as frequently as possible. Thus it is important to generate and rank the candidates effectively, so that high-probability words can be offered in a shortlist. Lower-ranking candidates can be offered in secondary lists which are not immediately displayed. This can reduce both the cognitive load and the keystrokes needed for selection. The thesis addresses the task of reducing the number of keystrokes needed for text creation with a large, expressive vocabulary, using a new approach to flexible abbreviation expansion. To inform the solution, two empirical studies were run to gather letter-level statistics on the abbreviation methods of twenty-nine people, under different degrees of constriction (that is, different restrictions on the number of characters by which to reduce). These studies showed that with a small amount of priming, people would abbreviate in regular ways, both shared between users and repeated through the data from an individual. Analysis showed the most common strategies to be vowel deletion, phonetic replacement, loss of double letters, and word truncation. Participants reduced the number of letters in their texts by between 25% (judged to maintain a high degree of comprehensibility) and 40% (judged to be the maximum degree of brevity whilst still retaining comprehensibility). Informed by these results, an individual-word-level algorithm was developed.
For each input abbreviation, a set of candidates is produced, ranked in such a way as to potentially save substantial keystrokes when used across a whole text. A variety of statistical and linguistic techniques, often also used in spelling checking and correction, are used to rank them so that the most probable will be easiest to select, and with fewest keystrokes. The algorithm works at the level of the individual word, without looking at surrounding context. Evaluation of the algorithm demonstrated that it outperforms its nearest comparable alternative, of ranking word lists exclusively by word frequency. The evaluation was performed on the data from the second empirical study, using vocabulary sizes of 2-, 10-, 20- and 30-thousand words. The results show the algorithm to be of potential benefit for use as a component of a flexible abbreviation expansion system. Even with the overhead of selection of the intended word, useful keystroke savings could still be attained. It is envisaged that such a system could be implemented on many platforms, including as part of an AAC (Augmentative and Alternative Communication) device, and an email system on a standard PC, thus making typed communication for the user group more comfortable and expansive.
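As a concrete sketch of the candidate-generation step, the snippet below handles only the letter-omission case: any lexicon word that contains the typed abbreviation as a subsequence is offered, most frequent first. The thesis's algorithm also covers phonetic replacements (such as ‘ate’ becoming ‘8’) and uses richer ranking statistics; the tiny lexicon here is invented.

```python
def is_subsequence(abbr, word):
    """True if `abbr` can be obtained from `word` by deleting letters."""
    letters = iter(word)
    return all(ch in letters for ch in abbr)

def expand(abbr, lexicon):
    """Rank candidate expansions of a flexible abbreviation by frequency;
    in a real system the top few would fill the first selection list."""
    hits = [(freq, w) for w, freq in lexicon.items() if is_subsequence(abbr, w)]
    return [w for _, w in sorted(hits, reverse=True)]

lexicon = {"hyphenate": 120, "hyphen": 300, "heaven": 800, "haven": 150}
print(expand("hypnt", lexicon))   # -> ['hyphenate']
print(expand("hvn", lexicon))     # -> ['heaven', 'haven']
```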
17

McMillan, David Evans. "Time-varying linear prediction as a base for an isolated-word recognition algorithm." Thesis, Virginia Tech, 1987. http://hdl.handle.net/10919/45777.

Abstract:
There is a vast amount of research being done in the area of voice recognition. A large portion of this research concentrates on developing algorithms that will yield higher accuracy rates, such as algorithms based on dynamic time warping, vector quantization, and other mathematical methods [12][21][15]. In this research, the feasibility of using linear prediction (LP) with time-varying parameters as a base for a voice recognition algorithm is investigated. First, the development of an anti-aliasing filter is discussed, with some results from the filter hardware realization included. Then a brief discussion of LP is presented and a method for time-varying LP is derived from it. A comparison between time-varying and segmentation LP is made, and the algorithm developed to test time-varying LP as a recognition technique is described. The evaluation is conducted with the algorithm configured for speaker-dependent and speaker-independent isolated-word recognition. The conclusion drawn from this research is that this particular technique is very feasible as a base for a voice recognition algorithm. With the incorporation of other techniques, a complete algorithm can conceivably be developed that will yield very high accuracy rates. Recommendations for algorithm improvements are given, along with other techniques that might be added to make a complete recognition algorithm.
18

Shrestha, Joseph, and H. David Jeong. "Computational Algorithm to Automate As-Built Schedule Development Using Digital Daily Work Reports." Digital Commons @ East Tennessee State University, 2017. https://dc.etsu.edu/etsu-works/2717.

Abstract:
As-built schedules prepared during and after construction are valuable tools for State Highway Agencies (SHAs) to monitor construction progress, evaluate contractors' schedule performance, and defend against potential disputes. However, previous studies indicate that current as-built schedule development methods are manual and rely on information scattered across various field diaries and meeting minutes. SHAs have started to collect field activity data in digital databases that can be used to generate as-built schedules automatically if proper computational algorithms are developed. This study develops computational algorithms and a prototype system to automatically generate and visualize project-level and activity-level as-built schedules during and after construction. The algorithm is validated using data from a real highway project. The study is expected to significantly aid SHAs in making better use of field data, facilitate as-built schedule development, monitor construction progress with higher granularity, and utilize as-built schedules for productivity analysis.
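A minimal sketch of the core aggregation such a system automates, under an assumed daily-work-report schema of (activity, date, quantity) rows: the as-built start and finish of an activity are the first and last dates on which it appears. The paper's full algorithm is richer (for example, it must handle inactive periods and project-level rollups), and the rows below are invented.

```python
from collections import defaultdict
from datetime import date

reports = [
    ("Excavation",  date(2016, 5, 2),  120.0),
    ("Excavation",  date(2016, 5, 3),  140.0),
    ("Base Course", date(2016, 5, 10),  80.0),
    ("Excavation",  date(2016, 5, 12),  60.0),
    ("Base Course", date(2016, 5, 11),  95.0),
]

def as_built(rows):
    """Activity-level as-built schedule: first/last reported dates, number
    of distinct work days, and total installed quantity per activity."""
    acts = defaultdict(list)
    for activity, d, qty in rows:
        acts[activity].append((d, qty))
    schedule = {}
    for activity, entries in acts.items():
        dates = [d for d, _ in entries]
        schedule[activity] = {
            "start": min(dates),
            "finish": max(dates),
            "work_days": len(set(dates)),
            "quantity": sum(q for _, q in entries),
        }
    return schedule

for activity, summary in as_built(reports).items():
    print(activity, summary)
```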
19

Showalter, Mark Henry. "Work Space Analysis and Walking Algorithm Development for A Radially Symmetric Hexapod Robot." Thesis, Virginia Tech, 2008. http://hdl.handle.net/10919/34663.

Abstract:
The Multi-Appendage Robotic System (MARS) built for this research is a hexapod robotic platform capable of walking and performing manipulation tasks. Each of the six limbs of MARS incorporates a three-degree-of-freedom (DOF), kinematically spherical proximal joint, similar to a shoulder or hip joint, and a 1-DOF distal joint, similar to an elbow or knee joint. Designing walking gaits for such multi-limb robots requires a thorough understanding of the kinematics of the limbs, including their workspace. The specific abilities of a walking algorithm dictate the usable workspace for the limbs. Generally speaking, the more general the walking algorithm is, the less constricted the workspace becomes. However, the entire limb workspace cannot be used in a continuous, statically stable, alternating tripedal gait for such a robot; therefore a subset of the limb workspace is defined for walking algorithms. This thesis develops MARS limb workspaces in the knee-up configuration, and analyzes their limitations for walking on planar surfaces. The workspaces range from simple 2D geometry to complex 3D volumes. While MARS is a hexapedal robot, the tasks of defining the workspace and walking algorithm for all six limbs can be abstracted to a single limb using the constraint of a tripedal, statically stable gait. Based on understanding the behavior of an individual limb, a walking algorithm was developed to allow MARS to walk on level terrain. The algorithm is adaptive in that it continuously updates based on control inputs. Open Tech developed a similar algorithm, based on a 2D workspace. This simpler algorithm resulted in smooth gait generation, with near-instantaneous response to control input. This accomplishment demonstrated the feasibility of implementing a more sophisticated algorithm, allowing for inputs of all six DOF: x and y velocity, z velocity or walking height, yaw, pitch and roll. This latter algorithm uses a 3D workspace developed to afford near-maximum step length. The workspace analysis and walking algorithm development in this thesis can be applied to the further advancement of walking gait generation algorithms.
20

Faruque, Md Ehsanul. "A Minimally Supervised Word Sense Disambiguation Algorithm Using Syntactic Dependencies and Semantic Generalizations." Thesis, University of North Texas, 2005. https://digital.library.unt.edu/ark:/67531/metadc4969/.

Abstract:
Natural language is inherently ambiguous. For example, the word "bank" can mean a financial institution or a river shore. Finding the correct meaning of a word in a particular context is a task known as word sense disambiguation (WSD), which is essential for many natural language processing applications such as machine translation, information retrieval, and others. While most current WSD methods try to disambiguate a small number of words for which enough annotated examples are available, the method proposed in this thesis attempts to address all words in unrestricted text. The method is based on constraints imposed by syntactic dependencies and concept generalizations drawn from an external dictionary. The method was tested on standard benchmarks as used during the SENSEVAL-2 and SENSEVAL-3 WSD international evaluation exercises, and was found to be competitive.
21

Raza, Ghulam. "Algorithms for the recognition of poor quality documents." Thesis, Nottingham Trent University, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.241828.

22

Bekos, Michael A., Thomas C. van Dijk, Martin Fink, Philipp Kindermann, Stephen Kobourov, Sergey Pupyrev, Joachim Spoerhase, and Alexander Wolff. "Improved Approximation Algorithms for Box Contact Representations." Springer, 2016. http://hdl.handle.net/10150/623076.

Abstract:
We study the following geometric representation problem: Given a graph whose vertices correspond to axis-aligned rectangles with fixed dimensions, arrange the rectangles without overlaps in the plane such that two rectangles touch if the graph contains an edge between them. This problem is called Contact Representation of Word Networks (Crown) since it formalizes the geometric problem behind drawing word clouds in which semantically related words are close to each other. Crown is known to be NP-hard, and there are approximation algorithms for certain graph classes for the optimization version, Max-Crown, in which realizing each desired adjacency yields a certain profit. We present the first O(1)-approximation algorithm for the general case, when the input is a complete weighted graph, and for the bipartite case. Since the subgraph of realized adjacencies is necessarily planar, we also consider several planar graph classes (namely stars, trees, outerplanar, and planar graphs), improving upon the known results. For some graph classes, we also describe improvements in the unweighted case, where each adjacency yields the same profit. Finally, we show that the problem is APX-complete on bipartite graphs of bounded maximum degree.
23

Astrov, Sergey. "Optimization of algorithms for large vocabulary isolated word recognition in embedded devices." [S.l.] : [s.n.], 2007. http://mediatum2.ub.tum.de/doc/620864/document.pdf.

24

Young, Teresa. "A Model of Children's Acquisition of Grammatical Word Categories Using an Adaptation and Selection Algorithm." BYU ScholarsArchive, 2014. https://scholarsarchive.byu.edu/etd/4148.

Abstract:
Most children who follow a typical developmental timeline learn the grammatical categories of words in their native language by the time they enter school. Researchers have provided a number of explicit, testable models or algorithms in an attempt to model this language development, with varying success in determining grammatical word categories from transcripts of adult input to children. A new model of grammatical category acquisition based on evolutionary computing algorithms may provide further understanding in this area. This model implements aspects of evolutionary biology such as variation, adaptive change, self-regulation, and inheritance. The current thesis applies the model to six English-language corpora. The model created dictionaries based on the words in each corpus and matched the words with their grammatical tags. The dictionaries evolved over 5,000 generations, and four different mutation rates were used in creating offspring dictionaries. The accuracy achieved by the model in correctly matching words with tags reached 90%. Given this success, further research involving an evolutionary model appears warranted.
25

Cloughley, William R. "Evaluation of work distribution algorithms and hardware topologies in a multi-Transputer network." Thesis, Monterey, California. Naval Postgraduate School, 1988. http://hdl.handle.net/10945/23030.

Abstract:
This thesis presents the evaluation of work distribution algorithms and hardware topologies in a multi-Transputer network. The primary emphasis concerns a work distribution algorithm known as 'workfarm' that is effective on problems that are divisible into independent work packets. All the programs and examples presented in this thesis were implemented in the OCCAM programming language, using the Transputer Development System, D700C, Beta 2.0 March 1987 compiler version. Keywords: Optimization, Debugging.
26

Mukre, Prakash. "Hardware accelerator for DNA code word searching." Diss., 2008. Online access via UMI.

Abstract:
Thesis (M.S.)--State University of New York at Binghamton, Thomas J. Watson School of Engineering and Applied Science, Department of Electrical and Computer Engineering, 2008.
Includes bibliographical references.
27

Akbari, Masoomeh. "Probabilistic Transitive Closure of Fuzzy Cognitive Maps: Algorithm Enhancement and an Application to Work-Integrated Learning." Thesis, Université d'Ottawa / University of Ottawa, 2020. http://hdl.handle.net/10393/41401.

Abstract:
A fuzzy cognitive map (FCM) is made up of factors and direct impacts. In graph theory, a bipolar weighted digraph is used to model an FCM; its vertices represent the factors, and the arcs represent the direct impacts. Each direct impact is either positive or negative, and is assigned a weight; in the model considered in this thesis, each weight is interpreted as the probability of the impact. A directed walk from factor F to factor F' is interpreted as an indirect impact of F on F'. The probabilistic transitive closure (PTC) of an FCM (or bipolar weighted digraph) is a bipolar weighted digraph with the same set of factors, but with arcs corresponding to the indirect impacts in the given FCM. Fuzzy cognitive maps can be used to represent structured knowledge in diverse fields, which include science, engineering, and the social sciences. In [P. Niesink, K. Poulin, M. Sajna, Computing transitive closure of bipolar weighted digraphs, Discrete Appl. Math. 161 (2013), 217-243], it was shown that the transitive closure provides valuable new information for its corresponding FCM. In particular, it gives the total impact of each factor on each other factor, which includes both direct and indirect impacts. Furthermore, several algorithms were developed to compute the transitive closure of an FCM. Unfortunately, computing the PTC of an FCM is computationally hard and the implemented algorithms are not successful for large FCMs. Hence, the Reduction-Recovery Algorithm was proposed to make other (direct) algorithms more efficient. However, this algorithm has never been implemented before. In this thesis, we code the Reduction-Recovery Algorithm and compare its running time with the existing software. Also, we propose a new enhancement on the existing PTC algorithms, which we call the Separation-Reduction Algorithm. In particular, we state and prove a new theorem that describes how to reduce the input digraph to smaller components by using a separating vertex. In the application part of the thesis, we show how the PTC of an FCM can be used to compare different standpoints on the issue of work-integrated learning.
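For intuition about what the PTC expresses, the sketch below estimates it for a small bipolar weighted digraph by Monte Carlo sampling: each arc is kept with its probability in every sample, a walk's sign is the product of its arc signs, and sign-aware reachability is checked per sample. This estimator and the example arcs are our illustration, not the thesis's Reduction-Recovery or Separation-Reduction algorithms, which compute the closure exactly.

```python
import random
from collections import deque

# arcs of a toy FCM: (factor u, factor v, impact sign, impact probability)
arcs = [
    ("A", "B", +1, 0.9),
    ("B", "C", -1, 0.8),
    ("A", "C", +1, 0.3),
    ("C", "D", +1, 0.7),
]

def sign_reachable(sample, src):
    """BFS over (node, accumulated sign) states in one sampled digraph."""
    seen = {(src, +1)}
    queue = deque(seen)
    while queue:
        u, s = queue.popleft()
        for v, sgn in sample.get(u, ()):
            state = (v, s * sgn)
            if state not in seen:
                seen.add(state)
                queue.append(state)
    return seen

def ptc_estimate(src, dst, trials=20000):
    pos = neg = 0
    for _ in range(trials):
        sample = {}
        for u, v, sgn, p in arcs:          # keep each arc with probability p
            if random.random() < p:
                sample.setdefault(u, []).append((v, sgn))
        states = sign_reachable(sample, src)
        pos += (dst, +1) in states
        neg += (dst, -1) in states
    return pos / trials, neg / trials

# P(positive impact) ~ 0.3 (direct arc); P(negative) ~ 0.72 (via B)
print("A -> C:", ptc_estimate("A", "C"))
```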
28

Stenquist, Nicole Adele. "Modeling Children's Acquisition of Grammatical Word Categories from Adult Input Using an Adaptation and Selection Algorithm." BYU ScholarsArchive, 2015. https://scholarsarchive.byu.edu/etd/5830.

Abstract:
It is understood that children learn the grammatical categories of their native language, and previous models have been only partially successful in describing this acquisition. The present study uses an adaptation and selection algorithm to continue the work on this question. The input for the computer model is child-directed speech toward three children, whose ages ranged from 1;1 to 5;1 over the course of sampling. The output of the model consists of the input words labeled with a grammatical category. This output was evaluated at regular intervals through its ability to correctly identify the grammatical categories of the target child's language. The findings suggest that this type of model is effective, in terms of both accuracy and completeness, at assigning words to grammatical categories.
29

Moon, Gordon Euhyun. "Parallel Algorithms for Machine Learning." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1561980674706558.

30

Bellesia, Francesca <1990>. "Individuals in the Workplatform. Exploring Implications for Work Identity and Algorithmic Reputation Management." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amsdottorato.unibo.it/9259/1/Thesis%20Final%20February%202020.pdf.

Abstract:
In the new world of work, workers not only change jobs more frequently, but also perform independent work on online labor markets. As they accomplish smaller and shorter jobs at the boundaries of organizations, employment relationships become unstable and career trajectories less linear. These new working conditions question the validity of existing management theories and call for more studies explaining gig workers' behavior. The aim of this dissertation is to contribute to this emerging body of knowledge by (I) exploring how gig workers shape their work identity on online platforms, and (II) investigating how algorithmic reputation changes the dynamics of quality signaling and affects gig workers' behavior. Chapter 1 introduces the debate on gig work, detailing why existing theories and definitions cannot be applied to this emergent workforce. Chapter 2 provides a systematic review of studies on individual work in online labor markets and identifies areas for future research. Chapter 3 describes the exploratory, qualitative methodology applied to collect and analyze data. Chapter 4 presents the first empirical paper, investigating how the process of work identity construction unfolds for gig workers and exploring how digital platforms, intended both as providers of technological features and as online environments, affect this process. Findings reveal that the online environment constrains the actions of workers, who are pushed to take advantage of the platform's technological features to succeed; this interplay leads workers to develop an entrepreneurial orientation. Drawing on signaling theory, Chapter 5 examines how gig workers interpret algorithmically calculated reputation and with what consequences for their experience. Results show that, after complying with the platform's rules in a first period, freelancers respond to algorithmic management through different strategies: manipulation, nurturing relationships, and living with it. Although reputation scores standardize information on freelancers' quality and, apparently, on their work, this study shows that responses to algorithmic control can nonetheless be diverse.
APA, Harvard, Vancouver, ISO, and other styles
31

Tillmann, Christoph [Verfasser], and Hermann [Akademischer Betreuer] Ney. "Word re-ordering and dynamic programming based search algorithm for statistical machine translation / Christoph Tillmann ; Betreuer: Hermann Ney." Aachen : Universitätsbibliothek der RWTH Aachen, 2001. http://d-nb.info/1129260615/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Sinha, Ravi Som. "Finding Meaning in Context Using Graph Algorithms in Mono- and Cross-lingual Settings." Thesis, University of North Texas, 2013. https://digital.library.unt.edu/ark:/67531/metadc271899/.

Full text
Abstract:
Making computers automatically find the appropriate meaning of words in context is an interesting problem that has proven to be one of the most challenging tasks in natural language processing (NLP). Widespread potential applications of a possible solution to the problem could be envisaged in several NLP tasks such as text simplification, language learning, machine translation, query expansion, information retrieval and text summarization. Ambiguity of words has always been a challenge in these applications, and the traditional endeavor to solve the problem of this ambiguity, namely doing word sense disambiguation using resources like WordNet, has been fraught with debate about the feasibility of the granularity that exists in WordNet senses. The recent trend has therefore been to move away from enforcing any given lexical resource upon automated systems from which to pick potential candidate senses, and to instead encourage them to pick and choose their own resources. Given a sentence with a target ambiguous word, an alternative solution consists of picking potential candidate substitutes for the target, filtering the list of the candidates to a much shorter list using various heuristics, and trying to match these system predictions against a human-generated gold standard, with a view to ensuring that the meaning of the sentence does not change after the substitutions. This solution has manifested itself in the SemEval 2007 task of lexical substitution and the more recent SemEval 2010 task of cross-lingual lexical substitution (which I helped organize), where given an English context and a target word within that context, the systems are required to provide between one and ten appropriate substitutes (in English) or translations (in Spanish) for the target word. In this dissertation, I present a comprehensive overview of state-of-the-art research and describe new experiments to tackle the tasks of lexical substitution and cross-lingual lexical substitution. In particular I attempt to answer some research questions pertinent to the tasks, mostly focusing on completely unsupervised approaches. I present a new framework for unsupervised lexical substitution using graphs and centrality algorithms. An additional novelty in this approach is the use of directional similarity rather than the traditional, symmetric word similarity. Additionally, the thesis also explores the extension of the monolingual framework into a cross-lingual one, and examines how well this cross-lingual framework can work for the monolingual lexical substitution and cross-lingual lexical substitution tasks. A comprehensive set of comparative investigations are presented amongst supervised and unsupervised methods, several graph-based methods, and the use of monolingual and multilingual information.
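As a rough illustration of ranking substitute candidates with a graph centrality algorithm, the sketch below builds a directed word graph and scores candidates with PageRank; the similarity function, edge construction, and word lists are placeholder assumptions, not the thesis's actual resources or directional measure.

```python
# A minimal sketch of centrality-based candidate ranking in the spirit of the
# graph framework described above. similarity(u, v) is an assumed user-supplied
# (possibly asymmetric) scoring function, not the thesis's measure.
import networkx as nx

def rank_candidates(context_words, candidates, similarity):
    """Rank substitute candidates by PageRank over a directed word graph."""
    g = nx.DiGraph()
    for cand in candidates:
        for ctx in context_words:
            w = similarity(cand, ctx)
            if w > 0:
                # Directional: the two directions may carry different weights.
                g.add_edge(cand, ctx, weight=w)
                g.add_edge(ctx, cand, weight=similarity(ctx, cand))
    scores = nx.pagerank(g, weight="weight")
    return sorted(candidates, key=lambda c: scores.get(c, 0.0), reverse=True)

# Toy usage with a fake letter-overlap similarity.
sim = lambda u, v: len(set(u) & set(v)) / len(set(u) | set(v))
print(rank_candidates(["clever", "sun"], ["bright", "smart", "shiny"], sim))
```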
APA, Harvard, Vancouver, ISO, and other styles
33

Björkhammer, Cecilia. "Hur hanterar elever i år 4 subtraktionsuppgifter? : En studie med fokus på feltyper, räknemetod och modersmål." Thesis, Linköpings universitet, Institutionen för beteendevetenskap och lärande, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-143228.

Full text
Abstract:
The purpose of this study has been to describe and analyse how pupils with Swedish as their mother tongue and pupils with a mother tongue other than Swedish solve subtraction tasks and formulate word problems from a given expression. The empirical material consists of answers from 272 pupils, and on this basis a qualitative and a quantitative study were carried out in which error types, choice of computation method and the pupil's mother tongue were important parameters in the analysis. The qualitative analysis established that pupils compute subtraction tasks in different ways, and a flowchart was therefore developed and used as an analysis tool. The pupils' answers are sorted into different levels: Level 1 - legible answer or not; Level 2 - incorrect answers and the error types arising within each computation method; Level 3 - correct answers with or without shown work. At Level 2, 8 error types were identified for the vertical algorithm and 7 error types for the horizontal algorithm. When pupils formulate a subtraction event, five categories appear: Comparison, Take away, Complement, Misleading reasoning, and Other. The quantitative analysis reports solution frequency and computation method for the task groups created (green, blue, red and black), where the green tasks are considered the easiest and the black tasks the hardest. The vertical algorithm was the most successful computation method for all four task groups. It was also the most used for the green and black tasks, while the horizontal method was the most used for the blue and red tasks. In addition, the distribution of error types is compared between pupil groups: pupils with Swedish as their mother tongue and pupils with another mother tongue. In both groups the error type Smaller from larger is the most common, for both the horizontal and the vertical algorithm. Among pupils with Swedish as their mother tongue, the error types Minor computational error and Subtracting partial results appear to a greater extent than among pupils with another mother tongue. When pupils formulate their own word problems from a given subtraction task, the Take away perspective is overwhelmingly the most common category.
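The dominant error type found above, "smaller from larger", is a well-known buggy procedure in which the pupil subtracts the smaller digit from the larger in each column regardless of position. The sketch below is our own illustration of that bug, not code from the study:

```python
# Illustrative sketch of the "smaller from larger" error type: in each column
# the smaller digit is subtracted from the larger one, ignoring borrowing.
def buggy_column_subtraction(a: int, b: int) -> int:
    """Column-wise subtraction with the 'smaller from larger' bug (a >= b)."""
    da = str(a)
    db = str(b).rjust(len(da), "0")
    digits = [abs(int(x) - int(y)) for x, y in zip(da, db)]
    return int("".join(map(str, digits)))

print(buggy_column_subtraction(53, 27))  # correct answer: 26; buggy answer: 34
```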
APA, Harvard, Vancouver, ISO, and other styles
34

Leme, Rafael Reis [UNESP]. "Teoria quântica do campo escalar real com autoacoplamento quártico - simulações de Monte Carlo na rede com um algoritmo worm." Universidade Estadual Paulista (UNESP), 2011. http://hdl.handle.net/11449/92038.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
In this work we present results of Monte Carlo simulations of the φ⁴ quantum field theory on a (1+1)-dimensional lattice employing the recently proposed worm algorithm. In Monte Carlo simulations, the efficiency of an algorithm is measured in terms of a dynamical critical exponent ζ, which relates to the autocorrelation time τ between measurements through τ ∝ L^ζ, where L is the lattice length. The autocorrelation time provides a measure of the "memory" of the Monte Carlo updating process. The worm algorithm has a ζ comparable to those obtained with the efficient cluster-type algorithms, yet it uses only local update moves. We present results for observables as functions of the unrenormalized parameters of the model, λ and μ². Particular attention is devoted to the vacuum expectation value ⟨φ(x)⟩ and the two-point correlation function ⟨φ(x)φ(x′)⟩. We determine the critical line (λ_c, μ_c²) that separates the symmetric phase from the phase with spontaneous symmetry breaking, and compare the results with the literature.
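As a concrete picture of the kind of local worm update described above, the sketch below implements the simpler Ising loop-gas (high-temperature expansion) version of a worm algorithm, in the Prokof'ev-Svistunov style, rather than the φ⁴ theory itself; the lattice size, temperature and step count are arbitrary example values.

```python
# Minimal worm-update sketch for the 2D Ising high-temperature (loop-gas)
# representation: the worm head moves to a neighbor and toggles the bond it
# crosses, a purely local update. This is an illustration only, not the
# thesis's phi^4 code; observables and tuning are deliberately simplified.
import math
import random

L = 16                      # linear lattice size (periodic)
beta = 0.4                  # inverse temperature
w = math.tanh(beta)         # weight of an occupied bond
bonds = {}                  # sorted (site, site) -> occupied flag

def neighbors(s):
    x, y = s
    return [((x + 1) % L, y), ((x - 1) % L, y), (x, (y + 1) % L), (x, (y - 1) % L)]

def bond_key(a, b):
    return (a, b) if a < b else (b, a)

head = tail = (0, 0)
closed = 0                  # counts configurations with head == tail

for step in range(100000):
    if head == tail:
        closed += 1
        if random.random() < 0.5:          # occasionally relocate the worm
            head = tail = (random.randrange(L), random.randrange(L))
            continue
    nxt = random.choice(neighbors(head))
    k = bond_key(head, nxt)
    occupied = bonds.get(k, False)
    # Metropolis: toggling the bond changes the weight by w or 1/w.
    ratio = 1.0 / w if occupied else w
    if random.random() < min(1.0, ratio):
        bonds[k] = not occupied
        head = nxt                          # the head moves across the bond

print("closed-loop fraction:", closed / 100000)
```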
APA, Harvard, Vancouver, ISO, and other styles
35

Leme, Rafael Reis. "Teoria quântica do campo escalar real com autoacoplamento quártico - simulações de Monte Carlo na rede com um algoritmo worm /." São Paulo, 2011. http://hdl.handle.net/11449/92038.

Full text
Abstract:
Advisor: Gastão Inácio Krein
Committee: Sergio Novaes
Committee: Tereza Mendes
Committee: Antonio Mihara
Committee: Rogerio Rosenfeld
In this work we present results of Monte Carlo simulations of the φ⁴ quantum field theory on a (1+1)-dimensional lattice employing the recently proposed worm algorithm. In Monte Carlo simulations, the efficiency of an algorithm is measured in terms of a dynamical critical exponent ζ, which relates to the autocorrelation time τ between measurements through τ ∝ L^ζ, where L is the lattice length. The autocorrelation time provides a measure of the "memory" of the Monte Carlo updating process. The worm algorithm has a ζ comparable to those obtained with the efficient cluster-type algorithms, yet it uses only local update moves. We present results for observables as functions of the unrenormalized parameters of the model, λ and μ². Particular attention is devoted to the vacuum expectation value ⟨φ(x)⟩ and the two-point correlation function ⟨φ(x)φ(x′)⟩. We determine the critical line (λ_c, μ_c²) that separates the symmetric phase from the phase with spontaneous symmetry breaking, and compare the results with the literature.
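Since the record above defines efficiency through τ ∝ L^ζ, the sketch below shows one common way to estimate the integrated autocorrelation time from a series of measurements and to extract a crude ζ from two lattice sizes; the measurement series here are placeholder white-noise arrays and the windowing rule is deliberately simplistic.

```python
# Sketch: integrated autocorrelation time from a measurement series, plus a
# two-point zeta estimate via tau ~ L^zeta. The input series are hypothetical.
import numpy as np

def integrated_autocorr_time(x, window=None):
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    acf = np.correlate(x, x, mode="full")[n - 1:] / (x.var() * n)
    if window is None:
        # crude cutoff: stop where the ACF first dips below zero
        cut = int(np.argmax(acf < 0)) or n // 2
    else:
        cut = window
    return 0.5 + acf[1:cut].sum()

rng = np.random.default_rng(0)
series = {8: rng.standard_normal(10000), 16: rng.standard_normal(10000)}
taus = {L: integrated_autocorr_time(s) for L, s in series.items()}
zeta = np.log(taus[16] / taus[8]) / np.log(16 / 8)
print(taus, "zeta ~", zeta)   # ~0 here, since white noise has no memory
```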
Master's
APA, Harvard, Vancouver, ISO, and other styles
36

Berardi, Emily Marie. "A Model of Children's Acquisition of Grammatical Word Categories from Adult Language Input Using an Adaption and Selection Algorithm." BYU ScholarsArchive, 2016. https://scholarsarchive.byu.edu/etd/6198.

Full text
Abstract:
Previous models of language acquisition have had partial success describing the processes that children use to acquire knowledge of the grammatical categories of their native language. The present study used a computer model based on the evolutionary principles of adaptation and selection to gain further insight into children's acquisition of grammatical categories. Transcribed language samples of eight parents or caregivers each conversing with their own child served as the input corpora for the model. The model was tested on each child's language corpus three times: two fixed mutation rates as well as a progressively decreasing mutation rate, which allowed less adaptation over time, were examined. The output data were evaluated by measuring the computer model's ability to correctly identify the grammatical categories in 500 utterances from the language corpus of each child. The model's performance ranged between 78 and 88 percent correct; the highest performance overall was found for a corpus using the progressively decreasing mutation rate, but overall no clear pattern relative to mutation rate was found.
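The comparison above between fixed and progressively decreasing mutation rates can be made concrete with a generic adaptation-and-selection skeleton; the bit-string fitness below is a toy stand-in for category labeling, not the study's actual model or corpora.

```python
# Generic adaptation-and-selection loop illustrating the mutation-rate
# schedules compared above: a fixed rate versus one that decays over
# generations, which permits less adaptation over time.
import random

TARGET = [1] * 32                                   # toy "correct labeling"
fitness = lambda ind: sum(a == b for a, b in zip(ind, TARGET))

def evolve(generations=200, pop_size=50, schedule=lambda g: 0.05):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for g in range(generations):
        rate = schedule(g)                           # mutation rate this generation
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]               # selection: keep the fitter half
        children = [[1 - b if random.random() < rate else b for b in p]
                    for p in parents]
        pop = parents + children
    return fitness(max(pop, key=fitness))

print("fixed rate:     ", evolve(schedule=lambda g: 0.05))
print("decreasing rate:", evolve(schedule=lambda g: 0.05 / (1 + g / 20)))
```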
APA, Harvard, Vancouver, ISO, and other styles
37

Choudhury, Sabyasachy. "Hierarchical Data Structures for Pattern Recognition." Thesis, Indian Institute of Science, 1987. http://hdl.handle.net/2005/74.

Full text
Abstract:
Pattern recognition is an important area with potential applications in computer vision, speech understanding, knowledge engineering, bio-medical data classification, earth sciences, life sciences, economics, psychology, linguistics, etc. Clustering is an unsupervised classification process that comes under the area of pattern recognition. There are two types of clustering approaches: 1) non-hierarchical methods and 2) hierarchical methods. Non-hierarchical algorithms are iterative in nature and perform well in the context of isotropic clusters. The time complexity of these algorithms is of order O(n) and above. Hierarchical agglomerative algorithms, on the other hand, are effective when clusters are non-isotropic. The single linkage method of the hierarchical category produces a dendrogram which corresponds to the minimal spanning tree; conventional approaches are time consuming, requiring O(n²) computational time. In this thesis we propose an intelligent partitioning scheme for generating the minimal spanning tree in the coordinate space. This is computationally elegant as it avoids the computation of similarity between many pairs of samples. The minimal spanning tree generated can be used to produce C disjoint clusters by breaking the (C-1) longest edges in the tree. A systolic architecture has been proposed to increase the speed of the algorithm further. A simulation study has been conducted and the corresponding results are reported. The simulation package has been developed on a DEC-1090 in Pascal. Based on the simulation study, it is observed that the parallel implementation reduces the time enormously. The number of processors required for the parallel implementation is a constant, making the approach more attractive. Texture analysis and synthesis has been extensively studied in the context of computer vision. Two important approaches which have been studied extensively by earlier researchers are the statistical and structural approaches. Texture is understood to be a periodic pattern with primitive sub-patterns repeating in a particular fashion, and this has been used to characterize texture with the help of a hierarchical data structure, the tree. A tree data structure is convenient because, along with operations like merging, splitting, deleting a node, adding a node, etc., it is well suited to handling a periodic pattern. Various functions like the angular second moment, correlation, etc., which are used to characterize texture, have been translated into the new language of this hierarchical data structure.
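The clustering step described above (cut the C-1 longest MST edges to obtain C clusters) is easy to sketch; the version below uses a plain O(n²) Prim's algorithm and a union-find pass, not the thesis's intelligent partitioning scheme or systolic architecture.

```python
# Sketch of single-linkage clustering via the minimal spanning tree:
# build the MST, delete the (C-1) longest edges, read off the components.
import math

def mst_edges(points):
    n = len(points)
    dist = lambda i, j: math.dist(points[i], points[j])
    in_tree = {0}
    best = {i: (dist(0, i), 0) for i in range(1, n)}   # nearest tree neighbor
    edges = []
    while len(in_tree) < n:
        i = min(best, key=lambda k: best[k][0])
        d, j = best.pop(i)
        edges.append((d, i, j))
        in_tree.add(i)
        for k in list(best):
            if dist(i, k) < best[k][0]:
                best[k] = (dist(i, k), i)
    return edges

def mst_clusters(points, c):
    keep = sorted(mst_edges(points))[: len(points) - c]   # drop c-1 longest
    parent = list(range(len(points)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for _, i, j in keep:
        parent[find(i)] = find(j)
    return [find(i) for i in range(len(points))]

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11)]
print(mst_clusters(pts, 2))   # two clusters: the origin group and the far pair
```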
APA, Harvard, Vancouver, ISO, and other styles
38

Selkirk, Colin Gregory. "Optimal algorithms for single-shift workforce scheduling to avoid one-day work stretches in a cyclic schedule." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape9/PQDD_0004/NQ42973.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Hargroves, Ryan. "Reach Campaigns and Self-promotion on Social Networking Sites : hidden Algorithms at Work in Selected Vloggers' Videos." Diss., University of Pretoria, 2020. http://hdl.handle.net/2263/75553.

Full text
Abstract:
The ways in which people present themselves online to others is a growing point of interest for scholars in a multiplicity of academic fields. On the common ground of self-representation, the concept of reach campaigns is used as a hermeneutical tool to analyse and interpret the postings and uploaded videos of five selected vloggers, working towards a way to explain the hidden algorithms at work on Social Networking Sites. The purpose of reach campaigns is not to replace terms such as ideology or hegemony, nor does it serve to categorize or limit certain trends and currents; rather, it aims to provide a means to discuss human interactions with technology, and more specifically digital technology, working in and around the fields of cultural analytics and visual studies. One of the most notable visualities to emerge from the human-technology relationship is that of self-representation. Vlogging has become one of the most popular means of self-representation online, and it is proposed that, through the lens of reach campaigns, a contemporary understanding of online self-representation can be achieved. While the large majority of vlogging takes place online, algorithms can be seen as a predominant influencing factor. This dissertation seeks to explore how algorithms may affect the promotion of four YouTube vloggers' videos.
Dissertation (MA (Visual Studies))--University of Pretoria, 2020.
NRF
Visual Arts
MA (Visual Studies)
Unrestricted
APA, Harvard, Vancouver, ISO, and other styles
40

Mahajan, Rutvij Sanjay. "Empirical Analysis of Algorithms for the k-Server and Online Bipartite Matching Problems." Thesis, Virginia Tech, 2018. http://hdl.handle.net/10919/96725.

Full text
Abstract:
The k-server problem is of significant importance to the theoretical computer science and operations research communities. In this problem, we are given k servers, their initial locations and a sequence of n requests that arrive one at a time. All these locations are points from some metric space, and the cost of serving a request is given by the distance between the location of the request and the current location of the server selected to process it. We must immediately process each request by moving a server to the request location. The objective is to minimize the total distance traveled by the servers to process all the requests. In this thesis, we present an empirical analysis of a new online algorithm for the k-server problem. This algorithm maintains two solutions: an online solution and an approximately optimal offline solution. When a request arrives, we update the offline solution and use this update to inform the online assignment. The algorithm is motivated by the Robust-Matching algorithm [RM-Algorithm, Raghvendra, APPROX 2016] for the closely related online bipartite matching problem. We then give a comprehensive experimental analysis of this algorithm and also provide a graphical user interface which can be used to visualize execution instances of the algorithm. We also consider these problems in a stochastic setting and implement a lookahead strategy on top of the new online algorithm.
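For readers unfamiliar with the setting, the sketch below simulates the classical naive baseline on a line metric (move the closest server); this is the kind of heuristic such online algorithms are measured against, not the thesis's algorithm, which additionally maintains an offline solution.

```python
# Greedy baseline for the k-server problem on a line metric: always move the
# closest server. Total cost = total distance traveled, as defined above.
def greedy_k_server(servers, requests):
    servers = list(servers)
    total = 0.0
    for r in requests:
        i = min(range(len(servers)), key=lambda j: abs(servers[j] - r))
        total += abs(servers[i] - r)   # cost of serving this request
        servers[i] = r                 # the chosen server moves to the request
    return total

print(greedy_k_server([0.0, 100.0], [10, 20, 10, 20, 90]))  # -> 50.0
```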
MS
APA, Harvard, Vancouver, ISO, and other styles
41

Valeš, Ondřej. "Efektivní algoritmy pro stromové automaty." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2019. http://www.nusl.cz/ntk/nusl-403145.

Full text
Abstract:
In this work a novel algorithm for testing language equivalence and inclusion on tree automata is proposed and implemented as a module in the VATA library. First, existing approaches to equivalence and inclusion testing on both word and tree automata are examined. These existing approaches are then modified to create a bisimulation up-to congruence algorithm for tree automata, and a formal proof of the soundness of the new algorithm is provided. The efficiency of this new approach is compared with existing language equivalence and inclusion testing methods for tree automata, showing that the performance of our algorithm on hard cases is often superior.
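The word-automaton ancestor of this technique is easiest to see on deterministic automata: explore state pairs and merge those assumed bisimilar with a union-find. The sketch below shows that plain version, without the up-to-congruence closure the thesis adds for tree automata.

```python
# Hopcroft-Karp style DFA equivalence sketch: merge states assumed bisimilar
# in a union-find; a pair disagreeing on acceptance is a counterexample.
def dfa_equivalent(start1, start2, accepting, delta, alphabet):
    """delta: dict (state, symbol) -> state; accepting: set of states."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    work = [(start1, start2)]
    while work:
        p, q = work.pop()
        rp, rq = find(p), find(q)
        if rp == rq:
            continue
        if (p in accepting) != (q in accepting):
            return False               # distinguishing pair found
        parent[rp] = rq                # assume p ~ q, check the successors
        work.extend((delta[(p, a)], delta[(q, a)]) for a in alphabet)
    return True

# Two 2-state DFAs over {a}, both accepting words of even length:
delta = {("e1", "a"): "o1", ("o1", "a"): "e1",
         ("e2", "a"): "o2", ("o2", "a"): "e2"}
print(dfa_equivalent("e1", "e2", {"e1", "e2"}, delta, ["a"]))  # True
```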
APA, Harvard, Vancouver, ISO, and other styles
42

DESSI', STEFANIA. "Analysis and implementation of methods for the text categorization." Doctoral thesis, Università degli Studi di Cagliari, 2015. http://hdl.handle.net/11584/266782.

Full text
Abstract:
Text Categorization (TC) is the automatic classification of text documents under pre-defined categories, or classes. Popular TC approaches map categories into symbolic labels and use a training set of documents, previously labeled by human experts, to build a classifier which enables the automatic TC of unlabeled documents. Suitable TC methods come from the fields of data mining and information retrieval; however, the following issues remain unsolved. First, classifier performance depends heavily on hand-labeled documents, which are the only source of knowledge for learning the classifier. Being a labor-intensive and time-consuming activity, the manual attribution of documents to categories is extremely costly. This creates a serious limitation when a set of manually labeled data is not available, as happens in most cases. Second, even a moderately sized text collection often has tens of thousands of terms, making the classification cost prohibitive for learning algorithms that do not scale well to large problem sizes. Most importantly, TC should be based on the text content rather than on a set of hand-labeled documents whose categorization depends on the subjective judgment of a human classifier. This thesis aims at facing the above issues by proposing innovative approaches which leverage techniques from data mining and information retrieval. To face the problems of both the high dimensionality of the text collection and the large number of terms in a single text, the thesis proposes a hybrid model for term selection which combines and takes advantage of both filter and wrapper approaches. In detail, the proposed model uses a filter to rank the list of terms present in documents, ensuring that useful terms are unlikely to be screened out. Next, to limit classification problems due to correlation among terms, this ranked list is refined by a wrapper that uses a Genetic Algorithm (GA) to retain the most informative and discriminative terms. Experimental results compare well with some of the top-performing learning algorithms for TC and seem to confirm the effectiveness of the proposed model. To face the issues of the lack and subjectivity of manually labeled datasets, the basic idea is to use an ontology-based approach which does not depend on the existence of a training set and relies solely on a set of concepts within a given domain and the relationships between concepts. In this regard, the thesis proposes a text categorization approach that applies WordNet for selecting the correct sense of words in a document and utilizes domain names in WordNet Domains for classification purposes. Experiments show that the proposed approach performs well in classifying a large corpus of documents. This thesis contributes to the area of data mining and information retrieval. Specifically, it introduces and evaluates novel techniques for the field of text categorization. The primary objective of this thesis is to test the hypothesis that: (1) text categorization requires and benefits from techniques designed to exploit document content; (2) hybrid methods from data mining and information retrieval can better support problems of high dimensionality, the main characteristic of large document collections; and (3) in the absence of manually annotated documents, WordNet domain abstraction can be used, being both useful and general enough to categorize any document collection.
As a final remark, it is important to acknowledge that much of the inspiration and motivation for this work derived from the vision of the future of text categorization processes which are related to specific application domains such as the business area and the industrial sectors, just to cite a few. In the end, it is this vision that provided the guiding framework. However, it is equally important to understand that many of the results and techniques developed in this thesis are not limited to text categorization. For example, the evaluation of disambiguation methods is interesting in its own right and is likely to be relevant to other application fields.
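As a structural illustration of the filter-plus-GA-wrapper term selection described above, the sketch below first ranks terms with a simple class-difference filter and then evolves a term subset with a small GA; the filter score and the fitness function are placeholders, where the thesis would plug in its ranking measure and a real classifier evaluation.

```python
# Structural sketch of hybrid term selection: a filter prunes the vocabulary,
# then a GA wrapper searches subsets of the surviving terms.
import random

def filter_rank(counts_pos, counts_neg, top_k):
    """Keep the top_k terms by a simple class-difference filter score."""
    terms = set(counts_pos) | set(counts_neg)
    score = lambda t: abs(counts_pos.get(t, 0) - counts_neg.get(t, 0))
    return sorted(terms, key=score, reverse=True)[:top_k]

def ga_wrapper(terms, fitness, generations=50, pop=20, rate=0.1):
    """Evolve a bitmask over terms; fitness(subset) -> classifier quality."""
    population = [[random.random() < 0.5 for _ in terms] for _ in range(pop)]
    subset = lambda mask: [t for t, bit in zip(terms, mask) if bit]
    for _ in range(generations):
        population.sort(key=lambda m: fitness(subset(m)), reverse=True)
        parents = population[: pop // 2]                 # selection
        population = parents + [
            [not b if random.random() < rate else b for b in p]  # mutation
            for p in parents
        ]
    return subset(max(population, key=lambda m: fitness(subset(m))))

# Toy usage: pretend terms containing "x" help the (placeholder) classifier.
terms = filter_rank({"alpha": 9, "xray": 7}, {"alpha": 8, "beta": 5}, top_k=3)
print(ga_wrapper(terms, fitness=lambda s: sum("x" in t for t in s)))
```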
APA, Harvard, Vancouver, ISO, and other styles
43

Dahlin, Mathilda. "Avkodning av cykliska koder - baserad på Euklides algoritm." Thesis, Karlstads universitet, Institutionen för matematik och datavetenskap (from 2013), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-48248.

Full text
Abstract:
Today's society requires that the transfer of information is done effectively and correctly; in other words, the received message must correspond to the message being sent. There are many decoding methods to locate and correct errors. The main purpose of this degree project is to study one of these methods, based on the Euclidean algorithm, and thereafter to illustrate with an example how the method is used when decoding a three-error-correcting BCH code. To begin with, fundamental concepts of coding theory are introduced. Then linear codes, cyclic codes and BCH codes, in that order, are explained before advancing to the decoding process. The results show that correcting one or two errors is relatively simple, but when three or more errors occur it becomes much more complicated; in that case, a specific method is required.
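The engine of the decoding method studied above is the extended Euclidean recursion: a BCH decoder runs it on x^(2t) and the syndrome polynomial and stops early, once the remainder's degree drops below t, to obtain the error-locator polynomial. The sketch below shows the same recursion over the integers; a real decoder applies it to polynomials over GF(2^m), whose finite-field arithmetic is omitted here.

```python
# Extended Euclidean algorithm over the integers. In BCH decoding the same
# remainder/coefficient recursion is run on polynomials over GF(2^m) and
# halted when deg(remainder) < t, instead of running to the gcd.
def extended_euclid(a, b):
    """Return (g, s, t) with g = gcd(a, b) and g == s*a + t*b."""
    r0, r1 = a, b
    s0, s1 = 1, 0
    t0, t1 = 0, 1
    while r1 != 0:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1       # remainder sequence (degrees shrink
        s0, s1 = s1, s0 - q * s1       # monotonically in the polynomial case)
        t0, t1 = t1, t0 - q * t1
    return r0, s0, t0

g, s, t = extended_euclid(240, 46)
print(g, s, t, s * 240 + t * 46)       # gcd 2, and the Bezout identity checks
```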
APA, Harvard, Vancouver, ISO, and other styles
44

Freberg, Daniel. "Evaluating Statistical Machine Learning and Deep Learning Algorithms for Anomaly Detection in Chat Messages." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-235957.

Full text
Abstract:
Automatically detecting anomalies in text is of great interest for surveillance entities, as vast amounts of data can be analysed to find suspicious activity. In this thesis, three distinct machine learning algorithms are evaluated while a chat message classifier is implemented for the purpose of market surveillance. Naive Bayes and Support Vector Machine belong to the statistical class of machine learning algorithms evaluated in this thesis, and both require feature selection; a side objective of the thesis is thus to find a suitable feature selection technique to ensure that these algorithms achieve high performance. The Long Short-Term Memory network is the deep learning algorithm evaluated in the thesis; rather than depending on feature selection, the deep neural network is evaluated as it is trained using word embeddings. Each of the algorithms achieved high performance, but the findings of the thesis suggest that the Naive Bayes algorithm in conjunction with a term-frequency (feature-counting) selection technique is the most suitable choice for this particular learning problem.
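A minimal sketch of the statistical side of this comparison, with term-count features, a feature-selection step, and the two classifiers; the inline messages are placeholder data, and chi-square selection stands in for the thesis's term-frequency technique.

```python
# Naive Bayes vs. linear SVM on tiny placeholder chat data, with a
# feature-selection stage in the pipeline, mirroring the setup above.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

messages = ["buy now cheap stock tip", "meeting at noon",
            "guaranteed insider profit", "lunch tomorrow?"]
labels = [1, 0, 1, 0]                       # 1 = suspicious, 0 = benign

for clf in (MultinomialNB(), LinearSVC()):
    model = make_pipeline(CountVectorizer(), SelectKBest(chi2, k=5), clf)
    model.fit(messages, labels)
    print(type(clf).__name__, model.predict(["cheap insider tip"]))
```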
APA, Harvard, Vancouver, ISO, and other styles
45

Mor, Stefano Drimon Kurz. "Analysis of synchronizations in greedy-scheduled executions and applications to efficient generation of pseudorandom numbers in parallel." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2015. http://hdl.handle.net/10183/130529.

Full text
Abstract:
We present two contributions to the field of parallel programming. The first contribution is theoretical: we introduce SIPS analysis, a novel approach to estimate the number of synchronizations performed during the execution of a parallel algorithm. Based on the concept of logical clocks, it allows us, on one hand, to deliver new bounds on the number of synchronizations, in expectation, and, on the other hand, to design more efficient parallel programs by dynamic adaptation of the granularity. The second contribution is pragmatic: we present an efficient parallelization strategy for pseudorandom number generation, independent of the number of concurrent processes participating in a computation. As an alternative to the use of one sequential generator per process, we introduce a generic API called Par-R, which is designed and analyzed using SIPS. Its main characteristic is the use of a sequential generator that can perform a "jump-ahead" directly from one number to another at an arbitrary distance within the pseudorandom sequence. Thanks to SIPS, we show that, in expectation, within a work-stealing execution of a "very parallel" program (whose depth, or critical path, is small compared to the work, or total number of operations), these jump-ahead operations are rare. Par-R is compared with the parallel pseudorandom number generator DotMix, written for the Cilk Plus dynamic multithreading platform. The theoretical overhead of Par-R compares favorably to DotMix's overhead, which is confirmed experimentally, while not requiring a fixed underlying generator.
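The key capability Par-R requires is a generator that can jump ahead an arbitrary distance cheaply. For a linear congruential generator this is classically done in O(log n) by composing the affine update map with itself via repeated squaring, as sketched below; this illustrates the capability only and is not Par-R's actual API or underlying generator.

```python
# O(log n) jump-ahead for a linear congruential generator by composing the
# affine map x -> a*x + c with itself (exponentiation by squaring).
A, C, M = 6364136223846793005, 1442695040888963407, 2**64   # common LCG params

def lcg_next(state):
    return (A * state + C) % M

def lcg_jump(state, n):
    """Advance the LCG by n steps without generating the intermediate values."""
    a_acc, c_acc = 1, 0                       # identity map
    a, c = A, C                               # current power of the update map
    while n:
        if n & 1:                             # fold current power into the result
            a_acc, c_acc = (a_acc * a) % M, (a * c_acc + c) % M
        a, c = (a * a) % M, (a * c + c) % M   # square the current map
        n >>= 1
    return (a_acc * state + c_acc) % M

s = 42
for _ in range(1000):
    s = lcg_next(s)
print(s == lcg_jump(42, 1000))                # True: jump matches 1000 steps
```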
APA, Harvard, Vancouver, ISO, and other styles
46

Clark, Matthew David. "Electronic Dispersion Compensation For Interleaved A/D Converters in a Standard Cell ASIC Process." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/16269.

Full text
Abstract:
The IEEE 802.3aq standard recommends that a multi-tap decision feedback equalizer be implemented to remove inter-symbol interference and additive system noise from data transmitted over a 10 Gigabit per second (10 Gbps) multi-mode fiber-optic link (MMF). The recommended implementation produces a design in an analog process. This design process is difficult and time consuming, and it is expensive to modify if first-pass silicon success is not achieved. Performing the majority of the design in a well-characterized digital process with stable, evolutionary tools reduces the technical risk. ASIC design rule checking is more predictable than custom tool flows and produces regular, repeatable results. Register Transfer Language (RTL) changes can also be implemented relatively quickly when compared to the custom flow. However, standard cell methodologies are expected to achieve clock rates of roughly one-tenth of the corresponding analog process. The architecture and design for a parallel linear equalizer and decision feedback equalizer are presented. The presented design demonstrates an RTL implementation of 10 GHz filters operating in parallel at 625 MHz. The performance of the filters is characterized by testing the design against a set of 324 reference channels, and the results are compared against the IEEE standard group's recommended implementation. The linear equalizer design of 20 taps equalizes 88% of the reference channels. The decision feedback equalizer design of 20 forward and 1 reverse tap equalizes 93% of the reference channels. An analysis of the unequalized channels is performed, and areas for continuing research are presented.
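The forward-plus-feedback structure mentioned above (20 forward taps, 1 reverse tap in the thesis) can be illustrated at toy scale: forward FIR taps filter received samples, and a feedback tap subtracts trailing inter-symbol interference using past hard decisions. Tap values and the two-tap channel below are fabricated for illustration, with no adaptation.

```python
# Toy decision feedback equalizer: forward FIR over received samples plus one
# feedback tap over past decisions. Channel and taps are assumed examples.
import numpy as np

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, 200) * 2 - 1               # +/-1 symbols
rx = bits + 0.5 * np.concatenate(([0], bits[:-1]))   # ISI: y[n] = x[n] + 0.5 x[n-1]

fwd = np.array([1.0, 0.0, 0.0])                      # forward taps (assumed)
fb = 0.5                                             # single feedback tap (assumed)

decisions = []
for n in range(len(rx)):
    window = [rx[n - k] if n - k >= 0 else 0.0 for k in range(len(fwd))]
    y = np.dot(fwd, window)
    if decisions:                    # cancel trailing ISI with the previous
        y -= fb * decisions[-1]      # hard decision
    decisions.append(1 if y >= 0 else -1)

print("bit errors:", int(np.sum(np.array(decisions) != bits)))  # 0, noise-free
```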
APA, Harvard, Vancouver, ISO, and other styles
47

Jaime, Mérida Carlos. "Simulation of AGVs in MATLAB : Virtual 3D environment for testing different AGV kinematics and algorithms." Thesis, Högskolan i Skövde, Institutionen för ingenjörsvetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-18561.

Full text
Abstract:
The field of robotics is becoming increasingly important and, consequently, students need better tools to gain knowledge of and experience with it. The University of Skövde was interested in developing a learning tool focused on a virtual simulation of mobile robots. Although there are several programmes in which this tool could be created, MATLAB was preferable because of its strong presence in educational institutions. The objectives were oriented towards testing different robot kinematics in an adjustable virtual 3D environment. Moreover, the simulation needed a part in which future users could design their own algorithms to control the AGVs. Therefore, sensors such as LIDAR sensors were necessary to enable interaction between the robot and the scenario created. The project began with a study and comparison of existing MATLAB projects and tools, after which the scenario and the simulation were produced. As a result, a virtual simulation has been created in which the user can modify and adapt multiple parameters, such as the size of the AGV, the form of the virtual environment, or the selection of forward or inverse kinematics, in order to develop different types of algorithms. Other features, such as the type or number of sensors as well as SLAM conditions, can be adjusted manually. Finally, this thesis was conducted to provide a foundation on mobile robots and to serve as a first step towards operating real robots. The simulation also provides an easy-to-use interface in which students can keep working through the introduction of new applications related to image processing or more sophisticated algorithms and controllers.
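As an example of the kind of adjustable kinematic model such a simulation exposes, the sketch below integrates the forward kinematics of a differential-drive AGV (written here in Python for illustration, although the tool itself is built in MATLAB); wheel radius and axle length are assumed example values.

```python
# Forward kinematics of a differential-drive AGV, integrated with Euler steps.
import math

def diff_drive_step(x, y, theta, w_left, w_right, dt, r=0.05, axle=0.3):
    """One time step from wheel angular speeds (rad/s); r, axle in meters."""
    v = r * (w_right + w_left) / 2.0          # forward speed
    omega = r * (w_right - w_left) / axle     # turn rate
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)

pose = (0.0, 0.0, 0.0)
for _ in range(100):                          # drive along a gentle arc
    pose = diff_drive_step(*pose, w_left=9.5, w_right=10.5, dt=0.05)
print(pose)
```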
APA, Harvard, Vancouver, ISO, and other styles
48

Tramontina, Gregorio Baggio. "Analise de problemas de escalonamento de processos em workflow." [s.n.], 2004. http://repositorio.unicamp.br/jspui/handle/REPOSIP/276403.

Full text
Abstract:
Advisor: Jacques Wainer
Master's dissertation (mestrado) - Universidade Estadual de Campinas, Instituto de Computação
Ordering cases within a workflow can result in a significant decrease in the number of late cases and in the cases' mean processing time, for example. Recent publications on workflow recognize the lack of research in this topic and point to the literature on scheduling as a possible solution. This work applies scheduling techniques to a dynamic workflow scenario and evaluates their performance relative to the FIFO (First In First Out) rule, the most widely used work-allocation principle in today's workflow systems. Problems related to this approach are discussed and two of them are tackled: the uncertainties regarding the activities' processing times and the cases' routes within their process definition. A new technique to map these uncertainties, called "guess and solve", is proposed. It consists of making a guess on the activities' processing times and cases' routes and then solving the resulting deterministic scheduling problem with a suitable technique, for example priority rules or genetic algorithms. Careful simulation is performed and the numbers show that it is almost always advantageous to use ordering techniques other than FIFO, and that "guess and solve", at least when its error is bounded by 30%, gives very satisfactory results.
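The "guess and solve" idea described above can be sketched on a toy single-resource queue: guess each task's processing time (here within the 30% error band the abstract mentions), order the queue with a priority rule on the guesses (here SPT, shortest processing time first), and compare late-case counts against FIFO. The queue model and error distribution are toy assumptions, not the thesis's simulator.

```python
# Toy "guess and solve": solve the deterministic ordering on guessed times,
# then execute with the real times and count late cases versus FIFO.
import random

random.seed(3)
tasks = [{"due": 20 + 5 * i, "real": random.uniform(2, 10)} for i in range(10)]
for t in tasks:
    t["guess"] = t["real"] * random.uniform(0.7, 1.3)   # guess, <=30% error

def late_cases(order):
    clock, late = 0.0, 0
    for t in order:
        clock += t["real"]              # execution consumes the REAL time
        late += clock > t["due"]
    return late

fifo = late_cases(tasks)                                  # arrival order
spt = late_cases(sorted(tasks, key=lambda t: t["guess"])) # solve on the guess
print("late cases - FIFO:", fifo, "| guess-and-solve SPT:", spt)
```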
Master's
Computer Science
Master in Computer Science
APA, Harvard, Vancouver, ISO, and other styles
49

Tramontina, Gregorio Baggio. "Composicionalidade de tecnicas de escalonamento de processos em workflow." [s.n.], 2008. http://repositorio.unicamp.br/jspui/handle/REPOSIP/276017.

Full text
Abstract:
Advisor: Jacques Wainer
Doctoral thesis (doutorado) - Universidade Estadual de Campinas, Instituto de Computação
Workflow systems are present in today's companies to automate and optimize their business processes. One activity of these systems is to direct the execution of tasks to their executors, and when there is an excess of tasks, a decision must be made regarding their order of execution. Current workflow systems use the FIFO (First In First Out) policy to make those decisions, executing tasks in their arrival order, but measurable gains can be achieved by ordering tasks differently. This work presents a study of the behavior of scheduling techniques in workflow systems and proposes a methodology for applying these techniques to more complex workflow scenarios. To build this methodology, the authors analyzed the behavior of selected scheduling techniques in basic workflow scenarios. Both local and global scheduling techniques were chosen, ranging from dispatching rules to a genetic algorithm. The studied workflow scenarios have three important characteristics: they are dynamic, they have uncertainties in the processing time of the tasks in the process, and they have uncertainties in the route each task follows within its process. To handle these uncertainties, this work uses the guess and solve technique, proposed by the authors in previous works. Simulation experiments were performed to generate results on the behavior of the scheduling techniques in the basic workflow scenarios, and these results were analyzed numerically and statistically using ANOVA. The work then proposes a methodology to apply the best scheduling techniques in subcomponents of complex scenarios. The authors applied simulation to the complex scenarios using the proposed methodology, and the results show that it brings improvements when compared to the other tested techniques. When there is no parallel execution in the process, the worst case of the methodology is that it is no worse than the others, and in the general case it brings measurable improvements. When there is parallel execution, the methodology's results deteriorate, which points to future research in the field.
Doctorate
Doctor in Computer Science
APA, Harvard, Vancouver, ISO, and other styles
50

Aguirre, Guerrero Daniela. "Word-processing-based routing for Cayley graphs." Doctoral thesis, Universitat de Girona, 2019. http://hdl.handle.net/10803/667410.

Full text
Abstract:
This thesis focuses on the problem of generic routing in Cayley graphs (CGs). These graphs are a geometric representation of algebraic groups and have been used as topologies for a wide variety of communication networks. The problem is analyzed from the perspective of Automatic Group Theory (AGT), which states that the structure of CGs can be encoded in a set of automata. From this approach, word-processing techniques are used to design a generic routing scheme that has low complexity, guarantees packet delivery, and provides minimal routing, path diversity and fault tolerance. This scheme is supported by a set of low-complexity algorithms for path computation in CGs. The contributions of this thesis also include an analysis of the topological properties of CGs and their impact on the performance and robustness of networks that use them as a topology.
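To make the setting concrete: in a Cayley graph, nodes are group elements, edges are generators, and a route is a word in the generators. The sketch below builds the Cayley graph of the symmetric group S3 with two transposition generators and recovers a minimal routing word by BFS; BFS is only the naive baseline here, whereas the thesis derives routes from the group's automatic structure.

```python
# Naive routing on a small Cayley graph: BFS over the group S3 generated by
# two transpositions; the returned word in {a, b} labels a shortest path.
from collections import deque
from itertools import permutations

gens = {"a": (1, 0, 2), "b": (0, 2, 1)}          # swap(0,1) and swap(1,2)
apply_gen = lambda p, g: tuple(p[i] for i in gens[g])

def route(src, dst):
    """Return a word in the generators labelling a shortest src -> dst path."""
    queue, seen = deque([(src, "")]), {src}
    while queue:
        node, word = queue.popleft()
        if node == dst:
            return word
        for g in gens:
            nxt = apply_gen(node, g)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, word + g))

identity = (0, 1, 2)
for target in permutations(identity):
    print(target, "<-", route(identity, target) or "(empty word)")
```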
APA, Harvard, Vancouver, ISO, and other styles
