Dissertations on the topic "Distributed processing"

To view other types of publications on this topic, follow the link: Distributed processing.

Format your source according to APA, MLA, Chicago, Harvard, and other citation styles

Consult the top 50 dissertations for your research on the topic "Distributed processing".

Next to every source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, whenever these are available in the metadata.

Browse dissertations across a wide variety of disciplines and compile your bibliography correctly.

1

Lee, Li 1975. "Distributed signal processing." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/86436.

2

Lu, Yu-En. "Distributed proximity query processing." Thesis, University of Cambridge, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.612165.

3

Wu, Tsung-li. "Distributed processing on link enhancement." Thesis, Monterey, California. Naval Postgraduate School, 1992. http://hdl.handle.net/10945/23869.

4

De Errico, Luciano. "Agent-based distributed parallel processing." Thesis, University of Surrey, 1996. http://epubs.surrey.ac.uk/843822/.

Abstract:
This work concerns the design and prototype implementation of an agent-based parallel architecture for physically distributed systems. The generic goal is to combine the processing power of widely available, low-cost networks of workstations, providing parallelism inside single applications. The specific goal is to investigate ways of implementing agent-based parallel processing in distributed systems. In this context, an agent is a lightweight mobile process that can freely move in the network and execute when it reaches a processing node. The Swarm architecture addresses these points by providing an abstract environment that can span many or all machines in the network. The environment is structured as a virtual machine, whose organisation and instruction set are detailed. Swarm is based on the idea of process flow, in which mobile concurrent processes can move and execute asynchronously in a distributed space consisting of data nodes. Each node is capable of permanently storing arbitrary information and references to other nodes, permitting the creation of persistent and distributed data structures in the environment. The main advantage is a flexible programming environment, which combines characteristics of the message-passing and distributed shared-memory approaches. A subset of the Swarm architecture was implemented as a prototype, coded in the C language for operation under the Unix environment, to study and evaluate the model. The prototype executed on a single workstation, simulating the Swarm abstract environment and permitting the validation of the proposed architecture and implemented mechanisms. Both the implementation and the evaluation procedure are explained and discussed. Results suggest that agent-based processing is feasible in moderately- and tightly-coupled environments, and that the Swarm processing model can be successfully applied to local-area networks and massively parallel computing machines. In particular, applications that manipulate irregular and distributed data structures can benefit from the programming environment provided by the Swarm architecture. These comprise: symbolic processing (artificial intelligence and expert systems), distributed simulation, distributed databases, and intelligent networks.
5

Norcross, Stuart John. "Deriving distributed garbage collectors from distributed termination algorithms." Thesis, University of St Andrews, 2004. http://hdl.handle.net/10023/14986.

Abstract:
This thesis concentrates on the derivation of a modularised version of the DMOS distributed garbage collection algorithm and the implementation of this algorithm in a distributed computational environment. DMOS appears to exhibit a unique combination of attractive characteristics for a distributed garbage collector, but the original algorithm is known to contain a bug and, prior to this work, lacked a satisfactory, understandable implementation. The relationship between distributed termination detection algorithms and distributed garbage collectors is central to this thesis. A modularised DMOS algorithm is developed using a previously published distributed garbage collector derivation methodology that centres on mapping centralised collection schemes to distributed termination detection algorithms. In examining the utility and suitability of the derivation methodology, a family of six distributed collectors is developed and an extension to the methodology is presented. The research work described in this thesis incorporates the definition and implementation of a distributed computational environment based on the ProcessBase language and a generic definition of a previously unimplemented distributed termination detection algorithm called Task Balancing. The role of distributed termination detection in the DMOS collection mechanisms is defined through a process of step-wise refinement. The implementation of the collector is achieved in two stages: the first stage defines the implementation of two distributed termination mappings with the Task Balancing algorithm; the second stage defines the DMOS collection mechanisms.
6

Benelallam, Amine. "Model transformation on distributed platforms : decentralized persistence and distributed processing." Thesis, Nantes, Ecole des Mines, 2016. http://www.theses.fr/2016EMNA0288/document.

Abstract:
Model-Driven Engineering (MDE) is gaining ground in industrial environments, thanks to its promise of lowering software development and maintenance effort. It has been adopted with success in producing software for several domains like civil engineering, car manufacturing and modernization of legacy software systems. As the models that need to be handled in model-driven engineering grow in scale, it becomes necessary to design scalable algorithms for model transformation (MT) as well as well-suited persistence frameworks. One way to cope with these issues is to exploit the wide availability of distributed clusters in the Cloud for the distributed execution of model transformations and their persistence. On the one hand, programming models such as MapReduce and Pregel may simplify the development of distributed model transformations. On the other hand, the availability of different categories of NoSQL databases may help to store models efficiently. However, because of the dense interconnectivity of models and the complexity of transformation logic, scalability in distributed model processing is challenging. In this thesis, we propose an approach for scalable model transformation and persistence. We exploit the high level of abstraction of relational MT languages and the well-defined semantics of existing distributed programming models to provide a relational model transformation engine with implicit distributed execution. The syntax of the MT language is not modified and no primitive for distribution is added; hence developers are not required to have any acquaintance with distributed programming. We extend this approach with an efficient model distribution algorithm, based on the analysis of relational model transformations and recent results on balanced partitioning of streaming graphs. We applied our approach to a popular MT language, ATL, on top of a well-known distributed programming model, MapReduce. Finally, we propose a multi-persistence backend for manipulating and storing models in NoSQL databases according to the modeling scenario. In particular, we focus on decentralized model persistence for distributed model transformations.
7

孫昱東 and Yudong Sun. "A distributed object model for solving irregularly structured problems on distributed systems." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2001. http://hub.hku.hk/bib/B31243630.

8

Kumar, Rohit 1986. "Temporal graph mining and distributed processing." Doctoral thesis, Universitat Politècnica de Catalunya, 2018. http://hdl.handle.net/10803/620623.

Abstract:
With the recent growth of social media platforms and the human desire to interact with the digital world, a lot of human-human and human-device interaction data is generated every second. With the boom of Internet of Things (IoT) devices, device-device interactions are also on the rise. All these interactions are a representation of how the underlying network connects different entities over time. These interactions, when modeled as an interaction network, present unique opportunities to uncover interesting patterns and to understand the dynamics of the network. Understanding these dynamics is very important because they encapsulate the way we communicate, socialize, consume information and get influenced. To this end, in this PhD thesis, we focus on analyzing an interaction network to understand how the underlying network is being used. We define an interaction network as a sequence of time-stamped interactions E over the edges of a static graph G=(V, E). Interaction networks can be used to model many real-world networks: in a social or communication network, each interaction over an edge represents an interaction between two users (e.g., emailing, making a call, re-tweeting), while in a financial network an interaction between two accounts represents a transaction.

We analyze interaction networks under two settings. In the first setting, we study an interaction network under a sliding-window model. We assume a node can pass information to other nodes if they are connected to them using edges present in a time window. In this model, we study how the importance, or centrality, of a node evolves over time. In the second setting, we put additional constraints on how information flows between nodes. We assume a node can pass information to other nodes only if there is a temporal path between them. To restrict the length of the temporal paths, we consider a time window in this approach as well. We apply this model to solve the time-constrained influence maximization problem: by analyzing the interaction network data under our model, we find the top-k most influential nodes. We test our model both on human-human interaction, using social network data, and on location-location interaction, using location-based social network (LBSN) data. In the same setting, we also mine temporal cyclic paths to understand the communication patterns in a network. Temporal cycles have many applications and appear naturally in communication networks, where one person posts a message and after a while reacts to a thread of reactions from peers on the post. In financial networks, on the other hand, the presence of a temporal cycle could be indicative of certain types of fraud. We provide efficient algorithms for all our analyses and test their efficiency and effectiveness on real-world data.

Finally, given that many of the algorithms we study have huge computational demands, we also studied distributed graph processing algorithms. An important aspect of distributed graph processing is to correctly partition the graph data between different machines. A lot of research has been done on efficient graph partitioning strategies, but there is no single good partitioning strategy for all kinds of graphs and algorithms. Choosing the best partitioning strategy is nontrivial and is mostly a trial-and-error exercise. To address this problem, we provide a cost-model-based approach to give a better understanding of how a given partitioning strategy performs for a given graph and algorithm.
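As a rough illustration of the sliding-window setting described in this abstract, the following Python sketch (our illustration, not code from the thesis) retains only the interactions whose timestamps fall inside the last W time units and recomputes a simple degree count, a stand-in for the centrality measures the thesis studies, after every arrival.

    from collections import deque, defaultdict

    def sliding_window_degree(interactions, window):
        """Yield (t, degrees) after each interaction, counting only
        interactions with a timestamp in (t - window, t]."""
        active = deque()              # interactions inside the current window
        degree = defaultdict(int)     # node -> count of active incident edges
        for t, u, v in interactions:  # assumes timestamps are non-decreasing
            # Expire interactions that have fallen out of the window.
            while active and active[0][0] <= t - window:
                _, x, y = active.popleft()
                degree[x] -= 1
                degree[y] -= 1
            active.append((t, u, v))
            degree[u] += 1
            degree[v] += 1
            yield t, {n: d for n, d in degree.items() if d > 0}

    stream = [(1, 'a', 'b'), (2, 'b', 'c'), (10, 'a', 'c')]
    for t, deg in sliding_window_degree(stream, window=5):
        print(t, deg)
    # at t=10 only {'a': 1, 'c': 1} remains: the two early interactions expired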
9

Lei, Ma. "Distributed query processing using composite semijoins." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ62238.pdf.

10

Liu, Ying. "Query optimization for distributed stream processing." [Bloomington, Ind.] : Indiana University, 2007. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3274258.

Abstract:
Thesis (Ph.D.)--Indiana University, Dept. of Computer Science, 2007.
Source: Dissertation Abstracts International, Volume: 68-07, Section: B, page: 4597. Adviser: Beth Plale. Title from dissertation home page (viewed Apr. 21, 2008).
11

McCue, Daniel Lawrence. "Selective transparency in distributed transaction processing." Thesis, University of Newcastle Upon Tyne, 1992. http://hdl.handle.net/10443/2020.

Abstract:
Object-oriented programming languages provide a powerful interface for programmers to access the mechanisms necessary for reliable distributed computing. Using inheritance and polymorphism provided by the object model, it is possible to develop a hierarchy of classes to capture the semantics and inter-relationships of various levels of functionality required for distributed transaction processing. Using multiple inheritance, application developers can selectively apply transaction properties to suit the requirements of the application objects. In addition to the specific problems of (distributed) transaction processing in an environment of persistent objects, there is a need for a unified framework, or architecture in which to place this system. To be truly effective, not only the transaction manager, but the entire transaction support environment must be described, designed and implemented in terms of objects. This thesis presents an architecture for reliable distributed processing in which the management of persistence, provision of transaction properties (e.g., concurrency control), and organisation of support services (e.g., RPC) are all gathered into a unified design based on the object model.
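The selective application of transaction properties via multiple inheritance that this abstract describes can be sketched in Python along the following lines; the mixin and method names are invented for illustration and are not taken from the thesis.

    class Persistent:
        """Mixin: durable state."""
        def save(self):
            print(f"persisting {self.__dict__}")

    class Lockable:
        """Mixin: concurrency control."""
        def lock(self):
            print("lock acquired")
        def unlock(self):
            print("lock released")

    class Recoverable:
        """Mixin: checkpoint/rollback for atomicity."""
        def checkpoint(self):
            self._snapshot = dict(self.__dict__)
        def rollback(self):
            self.__dict__.update(self._snapshot)

    # An application object opts into exactly the transaction
    # properties it needs by listing the corresponding mixins.
    class BankAccount(Persistent, Lockable, Recoverable):
        def __init__(self, balance):
            self.balance = balance
        def withdraw(self, amount):
            self.lock()
            self.checkpoint()
            try:
                if amount > self.balance:
                    raise ValueError("insufficient funds")
                self.balance -= amount
                self.save()
            except Exception:
                self.rollback()
                raise
            finally:
                self.unlock()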
12

Joyce, Elizabeth Mary. "Security in a distributed processing environment." Thesis, University of Plymouth, 2001. http://hdl.handle.net/10026.1/1638.

Abstract:
Distribution plays a key role in telecommunication and computing systems today. It has become a necessity as a result of deregulation and anti-trust legislation, which has forced businesses to move from centralised, monolithic systems to distributed systems with the separation of applications and provisioning technologies, such as the service and transportation layers in the Internet. The need for reliability and recovery requires systems to use replication and secondary backup systems such as those used in e-commerce. There are consequences to distribution: it results in systems being implemented in heterogeneous environments; it requires systems to be scalable; and it results in some loss of control, which contributes to the increased security issues that result from distribution. Each of these issues has to be dealt with. A distributed processing environment (DPE) is middleware that allows heterogeneous environments to operate in a homogeneous manner. Scalability can be addressed by using object-oriented technology to distribute functionality. Security is more difficult to address because it requires the creation of a distributed trusted environment. The problem with security in a DPE currently is that it is treated as an adjunct service, i.e., an afterthought that is the last thing added to the system. As a result, it is not pervasive and is therefore unable to fully support the other DPE services. DPE security needs to provide the five basic security services (authentication, access control, integrity, confidentiality and non-repudiation) in a distributed environment, while ensuring simple and usable administration. The research detailed in this thesis starts by highlighting the inadequacies of the existing DPE and its services. A new management structure is introduced that provides greater flexibility and configurability, while promoting mechanism and service independence, together with a new secure interoperability framework which provides the ability to negotiate common mechanism and service-level configurations. New facilities were added to the non-repudiation and audit services. The research has shown that all services should be security-aware, and therefore able to interact with the Enhanced Security Service in order to provide a more secure environment within a DPE. As a proof of concept, the Trader service was selected: its security limitations were examined, new security behaviour policies were proposed, and it was then implemented as a Security-aware Trader, which could counteract the existing security limitations.
13

Argile, Andrew Duncan Stuart. "Distributed processing in decision support systems." Thesis, Nottingham Trent University, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.259647.

14

Newton, Ryan Rhodes 1980. "Language design for distributed stream processing." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/46795.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (p. 149-152).
Applications that combine live data streams with embedded, parallel, and distributed processing are becoming more commonplace. WaveScript is a domain-specific language that brings high-level, type-safe, garbage-collected programming to these domains. This is made possible by three primary implementation techniques, each of which leverages characteristics of the streaming domain. First, WaveScript employs an evaluation strategy that uses a combination of interpretation and reification to partially evaluate programs into stream dataflow graphs. Second, we use profile-driven compilation to enable many optimizations that are normally only available in the synchronous (rather than asynchronous) dataflow domain. Finally, an empirical, profile-driven approach also allows us to compute practical partitions of dataflow graphs, spreading them across embedded nodes and more powerful servers. We have used our language to build and deploy applications, including a sensor network for the acoustic localization of wild animals such as the yellow-bellied marmot. We evaluate WaveScript's performance on this application, showing that it yields good performance on both embedded and desktop-class machines. Our language allowed us to implement the application rapidly, while outperforming a previous C implementation by over 35%, using fewer than half the lines of code. We evaluate the contribution of our optimizations to this success. We also evaluate WaveScript's ability to extract parallelism from this and other applications.
15

Unnava, Vasundhara. "Query processing in distributed database systems." Connect to resource, 1992. http://rave.ohiolink.edu/etdc/view.cgi?acc%5Fnum=osu1261314105.

16

Lopes, Cassio Guimaraes. "Distributed cooperative strategies for adaptive processing." Diss., Restricted to subscribing institutions, 2008. http://proquest.umi.com/pqdweb?did=1581123071&sid=1&Fmt=2&clientId=1564&RQT=309&VName=PQD.

17

Kumar, Rohit. "Temporal Graph Mining and Distributed Processing." Doctoral thesis, Universite Libre de Bruxelles, 2018. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/271527.

Abstract:
With the recent growth of social media platforms and the human desire to interact with the digital world, a lot of human-human and human-device interaction data is generated every second. With the boom of Internet of Things (IoT) devices, device-device interactions are also on the rise. All these interactions are a representation of how the underlying network connects different entities over time. These interactions, when modeled as an interaction network, present unique opportunities to uncover interesting patterns and to understand the dynamics of the network. Understanding these dynamics is very important because they encapsulate the way we communicate, socialize, consume information and get influenced. To this end, in this PhD thesis, we focus on analyzing an interaction network to understand how the underlying network is being used. We define an interaction network as a sequence of time-stamped interactions E over the edges of a static graph G=(V, E). Interaction networks can be used to model many real-world networks: in a social or communication network, each interaction over an edge represents an interaction between two users (e.g., emailing, making a call, re-tweeting), while in a financial network an interaction between two accounts represents a transaction.

We analyze interaction networks under two settings. In the first setting, we study an interaction network under a sliding-window model. We assume a node can pass information to other nodes if they are connected to them using edges present in a time window. In this model, we study how the importance or centrality of a node evolves over time. In the second setting, we put additional constraints on how information flows between nodes. We assume a node can pass information to other nodes only if there is a temporal path between them. To restrict the length of the temporal paths, we consider a time window in this approach as well. We apply this model to solve the time-constrained influence maximization problem: by analyzing the interaction network data under our model, we find the top-k most influential nodes. We test our model both on human-human interaction, using social network data, and on location-location interaction, using location-based social network (LBSN) data. In the same setting, we also mine temporal cyclic paths to understand the communication patterns in a network. Temporal cycles have many applications and appear naturally in communication networks, where one person posts a message and after a while reacts to a thread of reactions from peers on the post. In financial networks, on the other hand, the presence of a temporal cycle could be indicative of certain types of fraud. We provide efficient algorithms for all our analyses and test their efficiency and effectiveness on real-world data.

Finally, given that many of the algorithms we study have huge computational demands, we also studied distributed graph processing algorithms. An important aspect of these algorithms is to correctly partition the graph data between different machines. A lot of research has been done on efficient graph partitioning strategies, but there is no single good partitioning strategy for all kinds of graphs and algorithms. Choosing the best partitioning strategy is nontrivial and is mostly a trial-and-error exercise. To address this problem, we provide a cost-model-based approach to give a better understanding of how a given partitioning strategy performs for a given graph and algorithm.
Doctorate in Engineering Sciences and Technology
18

Kotto, Kombi Roland. "Distributed query processing over fluctuating streams." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSEI050/document.

Abstract:
In a Big Data context, stream processing has become a very active research domain. In order to manage ephemeral data (Velocity) arriving at important rates (Volume), specific solutions, denoted data stream management systems (DSMSs), have been developed. DSMSs take as inputs queries, called continuous queries, defined on a set of data streams. A continuous query generates new results as long as new data arrive in input. In many application domains, data streams have input rates and distributions of values which change over time. These variations may significantly impact the processing requirements of each continuous query. This thesis takes place in the ANR project Socioplug (ANR-13-INFR-0003). In this context, we consider a collaborative platform for stream processing. Each user can submit multiple continuous queries and contributes to the execution support of the platform. However, as each processing unit supporting treatments has limited resources in terms of CPU and memory, a significant increase in input rate may cause the congestion of the system. The problem is then how to adjust resource usage to the processing requirements of each continuous query dynamically. This raises several challenges: i) how to detect a need for reconfiguration? ii) when to reconfigure the system to avoid its congestion at runtime? In this work, we are interested in the different processing steps involved in the treatment of a continuous query over a distributed infrastructure. From this global analysis, we extract mechanisms enabling dynamic adaptation of resource usage for each continuous query. We focus on automatic parallelization, or auto-parallelization, of the operators composing the execution plan of a continuous query. We suggest an original approach based on the monitoring of operators and an estimation of processing requirements in the near future. Thus, we can increase (scale out) or decrease (scale in) the parallelism degree of operators in a proactive manner, such that resource usage fits processing requirements dynamically. Compared to a static configuration defined by an expert, we show that it is possible to avoid the congestion of the system in many cases, or to delay it in the most critical cases. Moreover, we show that resource usage can be reduced significantly while delivering equivalent throughput and result quality. We also suggest combining this approach with complementary mechanisms for dynamic adaptation of continuous queries at runtime. These different approaches have been implemented within a widely used DSMS and have been tested over multiple reproducible micro-benchmarks.
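A minimal sketch of the proactive scale-out/scale-in decision described above, under the assumption of a naive linear-trend forecast (the thesis's actual estimator is more elaborate, and all names here are our own), might look like this in Python:

    import math

    def forecast_rate(recent_rates):
        """Naive linear-trend forecast of the input rate one step ahead."""
        if len(recent_rates) < 2:
            return recent_rates[-1]
        slope = recent_rates[-1] - recent_rates[-2]
        return max(0.0, recent_rates[-1] + slope)

    def target_parallelism(recent_rates, per_instance_capacity, headroom=0.8):
        """Parallelism degree keeping each operator instance below
        `headroom` of its maximum throughput for the predicted load."""
        predicted = forecast_rate(recent_rates)
        return max(1, math.ceil(predicted / (per_instance_capacity * headroom)))

    # Example: rates are climbing, so we scale out before saturation.
    rates = [800.0, 900.0, 1000.0]   # tuples/s observed per monitoring tick
    print(target_parallelism(rates, per_instance_capacity=400.0))  # -> 4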
19

Cabiddu, Daniela. "Distributed processing of large triangle meshes." Doctoral thesis, Università degli Studi di Cagliari, 2016. http://hdl.handle.net/11584/266876.

Abstract:
Thanks to modern high-resolution acquisition techniques, 3D digital representations of real objects are easily made of millions, or even billions, of elements. Processing and analysing such large datasets is often a non-trivial task, due to specific software and hardware requirements. Our system allows large triangle meshes to be processed by exploiting nothing more than a standard Web browser. A graphical interface allows users to select among available algorithms and to stack them into complex pipelines, while a central engine manages the overall execution by exploiting both hardware and software provided by a distributed network of servers. As an additional feature, our system allows workflows to be stored and made publicly available. A semantic-driven search mechanism is provided to allow the retrieval of specific workflows. Besides the technological contribution, an innovative mesh transfer protocol avoids possible bottlenecks during the transmission of data across scattered servers. Also, distributed parallel processing is enabled thanks to an innovative divide-and-conquer approach. A simplification algorithm based on this paradigm proves that the overhead due to data transmission is negligible.
20

Al-Shakarchi, Ahmad. "Scalable audio processing across heterogeneous distributed resources : an investigation into distributed audio processing for Music Information Retrieval." Thesis, Cardiff University, 2013. http://orca.cf.ac.uk/47855/.

Abstract:
Audio analysis algorithms and frameworks for Music Information Retrieval (MIR) are expanding rapidly, providing new ways to discover non-trivial information from audio sources, beyond that which can be ascertained from unreliable metadata such as ID3 tags. MIR is a broad field, and many aspects of the algorithms and analysis components that are used are more accurate given a larger dataset for analysis, and often require extensive computational resources. This thesis investigates whether, through the use of modern distributed computing techniques, it is possible to design an MIR system that is scalable as the number of participants increases, which adheres to copyright laws and restrictions, while at the same time enabling access to a global database of music for MIR applications and research. A scalable platform for MIR analysis would be of benefit to the MIR and scientific community as a whole. A distributed MIR platform that encompasses the creation of MIR algorithms and workflows, their distribution, results collection and analysis, is presented in this thesis. The framework, called DART (Distributed Audio Retrieval using Triana), is designed to facilitate the submission of MIR algorithms and computational tasks against either remotely held music and audio content, or audio provided and distributed by the MIR researcher. Initially a detailed distributed DART architecture is presented, along with simulations to evaluate the validity and scalability of the architecture. The idea of a parameter sweep experiment to find the optimal parameters of the Sub-Harmonic Summation (SHS) algorithm is presented, in order to test the platform and use it to perform useful real-world experiments that contribute new knowledge to the field. DART is tested on various pre-existing distributed computing platforms, and the feasibility of creating a scalable infrastructure for workflow distribution is investigated throughout the thesis, along with the different workflow distribution platforms that could be integrated into the system. The DART parameter sweep experiments begin on a small scale, working up towards the goal of running experiments on thousands of nodes, in order to truly evaluate the scalability of the DART system. The result of this research is a functional and scalable distributed MIR research platform that is capable of performing real-world MIR analysis, as demonstrated by the successful completion of several large-scale SHS parameter sweep experiments across a variety of different input data, using various distribution methods, and through finding the optimal parameters of the implemented SHS algorithm. DART is shown to be highly adaptable, both in terms of the distributed MIR analysis algorithm and the distribution method used.
21

Dahlberg, Tobias. "Distributed Storage and Processing of Image Data." Thesis, Linköpings universitet, Databas och informationsteknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-85109.

Abstract:
Systems operating in a medical environment need to maintain high standards regarding availability and performance. Large amounts of images are stored and studied to determine what is wrong with a patient. This puts hard requirements on the storage of the images. In this thesis, ways of incorporating distributed storage into a medical system are explored. Products, inspired by the success of Google, Amazon and others, are experimented with and compared to the current storage solutions. Several “non-relational databases” (NoSQL) are investigated for storing medically relevant metadata of images, while a set of distributed file systems are considered for storing the actual images. Distributed processing of the stored data is investigated by using Hadoop MapReduce to generate a useful model of the images' metadata.
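To make the final step concrete, here is a minimal map/reduce pair in the spirit of Hadoop MapReduce for summarizing image metadata; the record layout and the modality field are hypothetical, and a real job would run on a Hadoop cluster rather than through the in-process driver shown at the end.

    from itertools import groupby
    from operator import itemgetter

    def map_phase(records):
        """Emit (modality, 1) for each image metadata record."""
        for rec in records:
            yield rec["modality"], 1

    def reduce_phase(pairs):
        """Sum the counts per modality, as a Hadoop reducer would."""
        ordered = sorted(pairs, key=itemgetter(0))   # shuffle/sort stage
        for key, group in groupby(ordered, key=itemgetter(0)):
            yield key, sum(count for _, count in group)

    records = [{"modality": "CT"}, {"modality": "MR"}, {"modality": "CT"}]
    print(dict(reduce_phase(map_phase(records))))    # {'CT': 2, 'MR': 1}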
22

Gottemukkala, Vibby. "Scalability issues in distributed and parallel databases." Diss., Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/8176.

23

Bennett, John K. "Distributed Smalltalk: inheritance and reactiveness in distributed systems." Thesis, Connect to this title online; UW restricted, 1988. http://hdl.handle.net/1773/6923.

24

Juntunen, R. (Risto). "Tradeoffs in distributed databases." Bachelor's thesis, University of Oulu, 2016. http://urn.fi/URN:NBN:fi:oulu-201602231230.

Abstract:
In a distributed database, data is spread throughout the network into separate nodes with different DBMS systems (Date, 2000). According to the CAP theorem, three database properties (consistency, availability and partition tolerance) cannot all be achieved simultaneously in a distributed database system: two of these properties can be achieved, but not all three at the same time (Brewer, 2000). Since this theorem was formulated, there has been some development in network infrastructure, and new methods to achieve consistency in distributed databases have emerged. This paper discusses trade-offs in distributed databases.
25

Gunaseelan, L. "Debugging of Distributed object systems." Diss., Georgia Institute of Technology, 1994. http://hdl.handle.net/1853/9219.

26

Navaratnam, Srivallipuranandan. "Reliable group communication in distributed systems." Thesis, University of British Columbia, 1987. http://hdl.handle.net/2429/26505.

Abstract:
This work describes the design and implementation details of a reliable group communication mechanism. The mechanism guarantees that messages will be received by all the operational members of the group or by none of them (atomicity). In addition, the sequence of messages will be the same at each of the recipients (order). The message ordering property can be used to simplify distributed database systems and distributed processing algorithms. The proposed mechanism continues to operate despite process, host and communication link failures (survivability). Survivability is essential in fault-tolerant applications.
27

Fukuzono, Hayato. "Spatial Signal Processing on Distributed MIMO Systems." 京都大学 (Kyoto University), 2016. http://hdl.handle.net/2433/217206.

28

Belghoul, Abdeslem. "Optimizing Communication Cost in Distributed Query Processing." Thesis, Université Clermont Auvergne (2017-2020), 2017. http://www.theses.fr/2017CLFAC025/document.

Abstract:
In this thesis, we take a complementary look at the problem of optimizing the time for communicating query results in distributed query processing, by investigating the relationship between the communication time and the middleware configuration. Indeed, the middleware determines, among other things, how data is divided into batches and messages before being communicated over the network. Concretely, we focus on the research question: given a query Q and a network environment, what is the best middleware configuration that minimizes the time for transferring the query result over the network? To the best of our knowledge, the database research community does not have well-established strategies for middleware tuning. We first present an intensive experimental study that emphasizes the crucial impact of middleware configuration on the time for communicating query results. We focus on two middleware parameters that we empirically identified as having an important influence on the communication time: (i) the fetch size F (i.e., the number of tuples in a batch that is communicated at once to an application consuming the data) and (ii) the message size M (i.e., the size in bytes of the middleware buffer, which corresponds to the amount of data that can be communicated at once from the middleware to the network layer; a batch of F tuples can be communicated via one or several messages of M bytes). Then, we describe a cost model for estimating the communication time, which is based on how data is communicated between computation nodes. Precisely, our cost model is based on two crucial observations: (i) batches and messages are communicated differently over the network: batches are communicated synchronously, whereas messages in a batch are communicated in pipeline (asynchronously), and (ii) due to network latency, it is more expensive to communicate the first message in a batch compared to any other message that is not the first in its batch. We propose an effective strategy for calibrating the network-dependent parameters of the communication time estimation function, i.e., the costs of the first message and of a non-first message in its batch. Finally, we develop an optimization algorithm to effectively compute the values of the middleware parameters F and M that minimize the communication time. The proposed algorithm quickly finds (in a small fraction of a second) values of the middleware parameters F and M that strike a good trade-off between low resource consumption and low communication time. The proposed approach has been evaluated using a dataset issued from an application in Astronomy.
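A simplified reading of this cost model can be written down directly. The sketch below is our paraphrase under stated assumptions (a fixed cost for the first message of a batch, fully pipelined later messages), not the calibrated estimator from the thesis:

    import math

    def communication_time(n_tuples, tuple_size, F, M, c_first, c_next):
        """Estimate the time to ship a query result over the network.

        n_tuples   : number of tuples in the result
        tuple_size : average tuple size in bytes
        F          : fetch size (tuples per batch, fetched synchronously)
        M          : message size (bytes per middleware buffer flush)
        c_first    : cost of a batch's first message (pays the latency)
        c_next     : cost of any later, pipelined message in the batch
        """
        batches = math.ceil(n_tuples / F)
        msgs_per_batch = max(1, math.ceil(F * tuple_size / M))
        # Batches are sequential; within a batch, only the first message
        # pays the round-trip latency, the rest are pipelined.
        return batches * (c_first + (msgs_per_batch - 1) * c_next)

    # Example: sweep F to see the latency/throughput trade-off.
    for F in (100, 1000, 10000):
        t = communication_time(1_000_000, 200, F, M=64 * 1024,
                               c_first=2e-3, c_next=5e-4)
        print(F, round(t, 2))   # 100 -> 20.0, 1000 -> 3.5, 10000 -> 1.7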
29

Wang, Yang. "Distributed parallel processing in networks of workstations." Ohio : Ohio University, 1994. http://www.ohiolink.edu/etd/view.cgi?ohiou1174328416.

30

Vijayakumar, Nithya Nirmal. "Data management in distributed stream processing systems." [Bloomington, Ind.] : Indiana University, 2007. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3278228.

Abstract:
Thesis (Ph.D.)--Indiana University, Dept. of Computer Science, 2007.
Source: Dissertation Abstracts International, Volume: 68-09, Section: B, page: 6093. Adviser: Beth Plale. Title from dissertation home page (viewed May 9, 2008).
31

Jonassen, Simon. "Efficient Query Processing in Distributed Search Engines." Doctoral thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for datateknikk og informasjonsvitenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-20206.

Abstract:
Web search engines have to deal with a rapidly increasing amount of information, high query loads and tight performance constraints. The success of a search engine depends on the speed with which it answers queries (efficiency) and the quality of its answers (effectiveness). These two metrics have a large impact on the operational costs of the search engine and the overall user satisfaction, which determine the revenue of the search engine. In this context, any improvement in query processing efficiency can reduce the operational costs and improve user satisfaction, hence improve the overall benefit. In this thesis, we elaborate on query processing efficiency, address several problems within partitioned query processing, pruning and caching, and propose several novel techniques.

First, we look at term-wise partitioned indexes and address the main limitations of the state-of-the-art query processing methods. Our first approach combines the advantages of pipelined and traditional (non-pipelined) query processing. This approach assumes one disk access per posting list and traditional term-at-a-time processing. For the second approach, we follow an alternative direction and look at document-at-a-time processing of sub-queries and skipping. Subsequently, we present several skipping extensions to pipelined query processing, which, as we show, can improve the query processing performance and/or the quality of results. Then, we extend one of these methods with intra-query parallelism, which, as we show, can improve the performance at low query loads.

Second, we look at skipping and pruning optimizations designed for a monolithic index. We present an efficient self-skipping inverted index designed for modern index compression methods, together with several query processing optimizations. We show that these optimizations can provide a significant speed-up compared to a full (non-pruned) evaluation and reduce the performance gap between disjunctive (OR) and conjunctive (AND) queries. We also propose a linear programming optimization that can further improve the I/O, decompression and computation efficiency of Max-Score.

Third, we elaborate on caching in Web search engines in two independent contributions. First, we present an analytical model that finds the optimal split in a static memory-based two-level cache. Second, we present several strategies for selecting, ordering and scheduling prefetch queries and demonstrate that these can improve the efficiency and effectiveness of Web search engines.

We carefully evaluate our ideas either using a real implementation or by simulation using real-world text collections and query logs. Most of the proposed techniques are found to improve the state-of-the-art in the conducted empirical studies. However, the implications and applicability of these techniques in practice need further evaluation in real-life settings.
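For readers unfamiliar with Max-Score, the sketch below illustrates the underlying upper-bound pruning idea on toy in-memory posting lists. It is a deliberate simplification of ours: real implementations enumerate candidates from the "essential" lists only and skip over compressed on-disk lists, as the thesis discusses.

    import heapq

    def max_score_topk(postings, k):
        """Toy Max-Score-style top-k: a document is abandoned as soon as
        its partial score plus an upper bound on its remaining terms
        cannot beat the current entry threshold.

        postings: {term: {doc_id: score}}  (tiny in-memory inverted index)
        """
        bounds = {t: max(pl.values()) for t, pl in postings.items()}
        terms = sorted(postings, key=lambda t: -bounds[t])
        # suffix[i] = maximum extra score obtainable from terms[i:]
        suffix = [0.0] * (len(terms) + 1)
        for i in range(len(terms) - 1, -1, -1):
            suffix[i] = suffix[i + 1] + bounds[terms[i]]

        heap = []                                  # min-heap of (score, doc)
        for doc in sorted({d for pl in postings.values() for d in pl}):
            threshold = heap[0][0] if len(heap) == k else 0.0
            score, pruned = 0.0, False
            for i, t in enumerate(terms):
                if score + suffix[i] <= threshold:
                    pruned = True                  # cannot reach the top k
                    break
                score += postings[t].get(doc, 0.0)
            if not pruned and score > threshold:
                heapq.heappush(heap, (score, doc))
                if len(heap) > k:
                    heapq.heappop(heap)
        return sorted(heap, reverse=True)

    index = {"distributed": {1: 1.2, 2: 0.4, 3: 2.0},
             "query":       {1: 0.8, 3: 1.1},
             "processing":  {2: 0.3, 3: 0.7}}
    print(max_score_topk(index, k=2))   # [(3.8, 3), (2.0, 1)]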
32

Mühleisen, Hannes [Verfasser]. "Architecture-independent distributed query processing / Hannes Mühleisen." Berlin : Freie Universität Berlin, 2013. http://d-nb.info/1031100261/34.

33

Algire, Martin. "Distributed multi-processing for high performance computing." Thesis, McGill University, 2000. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=31180.

Abstract:
Parallel computing can take many forms. From a user's perspective, it is important to consider the advantages and disadvantages of each methodology. The following project attempts to provide some perspective on the methods of parallel computing and indicate where the tradeoffs lie along the continuum. Problems that are parallelizable enable researchers to maximize the computing resources available for a problem, and thus push the limits of the problems that can be solved. Solving any particular problem in parallel will require some very important design decisions to be made. These decisions may dramatically affect portability, performance, and the cost of implementing a software solution to the problem. The results gained from this work indicate that although performance improvements are indeed possible, they are heavily dependent on the application in question and may require much more programming effort and expertise to implement.
34

Al-Bassiouni, Abdel-Aziz Mahmoud. "Optimum signal processing in distributed sensor systems." Thesis, Monterey, California: U.S. Naval Postgraduate School, 1987. http://hdl.handle.net/10945/22401.

Abstract:
Approved for public release; distribution is unlimited.
We consider the problem of detection of known signals in noise using quantized, discrete sensor observations. Optimal design of the quantizers at the sensor sites, as well as the global fusion of the quantized observations, is presented. We also show the equivalence between a team of two sensors and their fusion centre, and another team of a primary decision maker and a second opinion. Since the fusion of information is a main pillar of the thesis, an early chapter is devoted to the optimum fusion policy. Extension of the results to the case of vector sensor observations is also considered.
35

Wong, Kar Leong. "A message controller for distributed processing systems." Thesis, Nottingham Trent University, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.312309.

36

Wang, Wei. "Distributed real-time processing for automotive applications." Thesis, Cranfield University, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.427159.

37

Murphy, Donald P. "Parallel Distributed Processing of Realtime Telemetry Data." International Foundation for Telemetering, 1987. http://hdl.handle.net/10150/615233.

Abstract:
International Telemetering Conference Proceedings / October 26-29, 1987 / Town and Country Hotel, San Diego, California
An architecture is described for processing multiple digital PCM telemetry streams. This architecture is implemented using a collection of Motorola mono-board microprocessor units (MPUs) in a single chassis called an Intermediate Processing Unit (IPU). Multiple IPUs can be integrated using a common input data bus. Each IPU is capable of processing a single PCM digital telemetry stream. Processing, in this context, includes conversion of raw sample count data to engineering units; computation of derived quantities from measurement sample data; calculation of minimum, maximum, average and cyclic [(maximum - minimum)/2] values for both measurement and derived data over a preselected time interval; out-of-limit, dropout and wildpoint detection; strip chart recording of selected data; transmission of both measurement and derived data to a high-speed, large-capacity disk storage subsystem; and transmission of compressed data to the host computer for realtime processing and display. All processing is done in realtime, with at most two PCM major frames of time latency.
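As a small worked example of the per-interval statistics listed above, including the cyclic value [(maximum - minimum)/2], a summary over one preselected interval could be computed as follows (illustrative Python with invented names, not the original MPU code):

    def interval_summary(samples):
        """Summarize one time interval of engineering-unit samples."""
        lo, hi = min(samples), max(samples)
        return {
            "min": lo,
            "max": hi,
            "avg": sum(samples) / len(samples),
            "cyclic": (hi - lo) / 2,   # the [(maximum - minimum)/2] value
        }

    print(interval_summary([2.0, 4.5, 3.1, 5.5]))
    # {'min': 2.0, 'max': 5.5, 'avg': 3.775, 'cyclic': 1.75}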
38

Millar, Dean Lee. "Parallel distributed processing in rock engineering systems." Thesis, Imperial College London, 2008. http://hdl.handle.net/10044/1/37116.

Abstract:
Rock Engineering Systems are a collection of ideas, mathematical tools and computer technology designed to solve rock engineering problems with interacting components. The interactions between components can be complex, and the rock engineering problems themselves contain a high degree of uncertainty. The research described in this thesis investigates the incorporation of computational techniques known as parallel distributed processing methods into the disciplines of rock mechanics and rock engineering. Two main applications of parallel distributed processing methods in rock engineering are investigated in this thesis. 1) Multilayered perceptron artificial neural networks are used successfully to encapsulate the laboratory behaviour of rocks under triaxial compression. Trained artificial neural networks are then used to replace conventional constitutive models within finite difference geomechanical numerical modelling codes. 2) Two multilayered perceptron artificial neural networks are developed to assist in discriminating rock fracture presence within digital imagery of rock exposures. The first is trained using image samples that contain fracture content and samples that do not, and provides a probability-like measure of fracture presence; it was sufficiently successful to permit estimation of a fracture intensity parameter. The second was developed specifically to identify fracture termination conditions by matching samples to a set of fracture termination condition templates. Seven original contributions to the rock mechanics and rock engineering disciplines have resulted across the application areas. These contributions are itemised, with details, at the beginning of the final chapter of the thesis.
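A minimal sketch of the surrogate idea in application 1: the numerical code queries a multilayer perceptron for stress given strain where a closed-form constitutive law would normally be evaluated. The random weights below are placeholders for a network trained on triaxial compression tests:

import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 3)), np.zeros(8)   # 3 strain components in
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)   # 3 stress components out

def mlp_constitutive(strain):
    # Forward pass of a small MLP standing in for the material law.
    h = np.tanh(W1 @ strain + b1)
    return W2 @ h + b2

# Inside a time-stepping loop, the solver calls the surrogate per element:
strain = np.array([0.001, 0.0005, -0.0002])
print(mlp_constitutive(strain))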
39

Kanagasabapathy, Shri. "Distributed adaptive signal processing for frequency estimation." Thesis, Imperial College London, 2016. http://hdl.handle.net/10044/1/49783.

Abstract:
It is widely recognised that future smart grids will rely heavily upon intelligent communication and signal processing as enabling technologies for their operation. Traditional tools for power system analysis, which have been built from a circuit theory perspective, are a good match for balanced system conditions; however, the unprecedented changes imposed by smart grid requirements are pushing the limits of these old paradigms. To this end, we provide new signal processing perspectives on some fundamental operations in power systems, such as frequency estimation, regulation and fault detection. Firstly, motivated by our finding that any excursion from nominal power system conditions results in a degree of non-circularity in the measured variables, we cast the frequency estimation problem into a distributed estimation framework for noncircular complex random variables. Next, we derive the required next-generation widely linear frequency estimators, which incorporate the so-called augmented data statistics and cater for the noncircularity and widely linear nature of system functions. Uniquely, we also show that, by virtue of augmented complex statistics, it is possible to treat frequency tracking and fault detection in a unified way. To address the ever-shortening time-scales in future frequency regulation tasks, the developed distributed widely linear frequency estimators are equipped with the ability to compensate for the fewer available temporal voltage data by exploiting spatial diversity in wide-area measurements. This contribution is further supported by new, physically meaningful theoretical results on the statistical behaviour of distributed adaptive filters; our approach avoids the restrictive assumptions routinely employed to simplify the analysis by making use of the collaborative learning strategies of distributed agents. The efficacy of the proposed distributed frequency estimators over standard strictly linear and stand-alone algorithms is illustrated in case studies on synthetic and real-world three-phase measurements. An overarching theme in this thesis is the elucidation of underlying commonalities between methodologies employed in classical power engineering and in signal processing. By revisiting fundamental power system ideas within the framework of augmented complex statistics, we provide a physically meaningful signal processing perspective on three-phase transforms and reveal their intimate connections with the spatial discrete Fourier transform (DFT), optimal dimensionality reduction and frequency demodulation techniques. Moreover, under the widely linear framework, we show that the two most widely used frequency estimators in the power grid are in fact special cases of frequency demodulation techniques. Finally, revisiting classic estimation problems in power engineering through the lens of non-circular complex estimation has made it possible to develop a new self-stabilising adaptive three-phase transformation, which enables algorithms designed for balanced operating conditions to be implemented straightforwardly in a variety of real-world unbalanced operating conditions. This thesis therefore aims to help bridge the gap between the signal processing and power communities by providing power system designers with advanced estimation algorithms and modern, physically meaningful interpretations of key power engineering paradigms, matching the dynamic and decentralised nature of the smart grid.
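For orientation, the sketch below shows a strictly linear complex LMS frequency tracker operating on Clarke-transformed three-phase voltages; the thesis's widely linear (augmented) estimators extend exactly this setup to unbalanced conditions. All signal parameters here are invented:

import numpy as np

fs, f0, mu = 5000.0, 50.0, 0.05
n = np.arange(2000)
theta = 2 * np.pi * f0 * n / fs
va = np.cos(theta)
vb = np.cos(theta - 2 * np.pi / 3)
vc = np.cos(theta + 2 * np.pi / 3)

# Clarke transform: collapse the three phases into one complex signal.
v = (2 / 3) * (va - 0.5 * vb - 0.5 * vc) + 1j * (vb - vc) / np.sqrt(3)

w = np.exp(1j * 0.01)                    # guess for e^{j*2*pi*f0/fs}
for k in range(1, len(v)):
    err = v[k] - w * v[k - 1]            # one-step prediction error
    w += mu * err * np.conj(v[k - 1])    # complex LMS update

print(np.angle(w) * fs / (2 * np.pi))    # converges to ~50.0 Hz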
40

Mühleisen, Hannes Fabian [author]. "Architecture-independent distributed query processing / Hannes Mühleisen." Berlin : Freie Universität Berlin, 2013. http://nbn-resolving.de/urn:nbn:de:kobv:188-fudissthesis000000042056-2.

41

CHEN, HONG. "A WEB-BASED DISTRIBUTED IMAGE PROCESSING SYSTEM." University of Cincinnati / OhioLINK, 2000. http://rave.ohiolink.edu/etdc/view?acc_num=ucin975338078.

42

Andersson, Sara. "Data Processing and Collection in Distributed Systems." Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-85313.

Abstract:
Distributed systems can be seen in a variety of applications in use today. Tritech provides several systems that to some extent consist of distributed systems of nodes. These nodes collect data, and the data have to be processed. A problem that often appears when designing these systems is deciding where the data should be processed, i.e., which architecture is the most suitable one for the system. Deciding the architecture for these systems is not simple, especially since the field changes rather quickly due to the development in these areas. The thesis aims to study which factors affect the choice of architecture in a distributed system and how these factors relate to each other. To analyze which factors affect the choice of architecture, and to what extent, a simulator was implemented. The simulator received information about the factors as input and returned one or several architecture configurations as output. The input factors to the simulator were chosen by performing qualitative interviews. The factors analyzed in the thesis were: security, storage, working memory, size of data, number of nodes, data processing per data set, robust communication, battery consumption, and cost. From the qualitative interviews as well as from the prestudy, five architecture configurations were chosen: thin-client server, thick-client server, three-tier client-server, peer-to-peer, and cloud computing. The simulator was validated against three given use cases: agriculture, the train industry, and the industrial Internet of Things. The validation consisted of five existing projects from Tritech, and the simulator produced correct results for three of the five. From the simulator results, it could be seen which factors affect the choice of architecture more than others and which are hard to provide in the same architecture because they conflict. The conflicting factors were security together with working memory and robust communication; working memory together with battery consumption also proved to be conflicting factors that are hard to provide within the same architecture. Therefore, according to the simulator, the factors that affect the choice of architecture most were working memory, battery consumption, security, and robust communication. Using the simulator results, a decision matrix was designed to facilitate the choice of architecture, as sketched below. The evaluation of the decision matrix consisted of four projects from Tritech, including the three given use cases, and showed that of the two architectures that received the most points, one was the architecture used in the validated project.
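A minimal sketch of a weighted decision matrix of this kind; the factor scores and weights below are invented for illustration, not Tritech's:

factors = ["security", "working_memory", "battery", "robust_comm"]
weights = {"security": 3, "working_memory": 2, "battery": 1, "robust_comm": 2}

scores = {  # per-architecture factor scores, e.g. on a 1-5 scale
    "thin-client server":  {"security": 4, "working_memory": 5, "battery": 2, "robust_comm": 2},
    "thick-client server": {"security": 3, "working_memory": 2, "battery": 3, "robust_comm": 4},
    "peer-to-peer":        {"security": 2, "working_memory": 3, "battery": 4, "robust_comm": 5},
}

totals = {arch: sum(weights[f] * s[f] for f in factors)
          for arch, s in scores.items()}
print(max(totals, key=totals.get), totals)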
43

Peng, Yanfeng. "Distributed processing, reconfigurable processes and active network." Thesis, Aston University, 2003. http://publications.aston.ac.uk/8005/.

Abstract:
The fast spread of the Internet and the increasing demands on its services are leading to radical changes in the structure and management of the underlying telecommunications systems. Active networks (ANs) offer the ability to program the network on a per-router, per-user, or even per-packet basis, and thus promise greater flexibility than current networks. For this new network paradigm to be widely accepted, many issues need to be solved; management of the active network is one of the challenges. This thesis investigates an adaptive management solution based on a genetic algorithm (GA). The solution uses a distributed, bacterium-inspired GA running on the active nodes within an active network to provide adaptive management for the network, especially for the service-provision problems associated with future networks. The thesis also reviews the concepts, theories and technologies associated with the management solution. By exploring the implementation of these active nodes in hardware, this thesis demonstrates the possibility of implementing GA-based adaptive management in the real networks in use today. The concurrent programming language Handel-C is used for the description of the design, and a reconfigurable computing platform based on an FPGA processing element is used for the hardware implementation. The experimental results demonstrate both the viability of the hardware implementation and the efficiency of the proposed management solution.
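A minimal sketch of the genetic-algorithm machinery involved (selection, crossover and mutation over a population of candidates); the bit-string encoding and fitness function are placeholders, not the thesis's service-provision model:

import random

def fitness(bits):
    # Placeholder objective: maximise the number of 1-bits.
    return sum(bits)

def evolve(pop_size=20, length=16, generations=40, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)
            child = a[:cut] + b[cut:]            # one-point crossover
            child = [bit ^ (random.random() < p_mut) for bit in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())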
44

Peterson, Krystal, Samuel Richter, Adam Schafer, Steve Grant, and Kurt Kosbar. "DISTRIBUTED COMPUTING PROCESSOR FOR SIGNAL PROCESSING APPLICATIONS." International Foundation for Telemetering, 2016. http://hdl.handle.net/10150/624191.

Abstract:
Many signal processing, data analysis and graphical user interface algorithms are computationally intensive. This paper investigates a method of off-loading some of the calculations to remotely located processors. Inexpensive, commercial off-the-shelf processors are used to perform operations such as fast Fourier transforms and other numerically intensive algorithms. The data is passed to the processors, and the results are collected, using conventional network interfaces such as TCP/IP. This allows the processors to be placed at any location, and also allows potentially large pools of processors to be shared between multiple applications.
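A minimal sketch of the offloading pattern described, assuming an invented host, port and pickle-based wire format (the paper does not specify a protocol, and a real system would need proper message framing):

import pickle
import socket
import threading
import time

import numpy as np

HOST, PORT = "127.0.0.1", 50007

def worker():
    # One-shot worker: accept a connection, FFT the payload, send it back.
    with socket.socket() as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            data = pickle.loads(conn.recv(1 << 20))
            conn.sendall(pickle.dumps(np.fft.fft(data)))

threading.Thread(target=worker, daemon=True).start()
time.sleep(0.2)                                  # let the worker listen

with socket.socket() as cli:
    cli.connect((HOST, PORT))
    cli.sendall(pickle.dumps(np.arange(8.0)))    # ship the samples out
    print(pickle.loads(cli.recv(1 << 20)))       # collect the FFT result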
45

Gao, Su. "Distributed signal processing using nested lattice codes." Thesis, Imperial College London, 2012. http://hdl.handle.net/10044/1/9238.

Abstract:
Multi-Terminal Source Coding (MTSC) addresses the problem of compressing correlated sources without communication links among them. In this thesis, the constructive approach to this problem is considered in an algebraic framework, and a system design is provided that is applicable in a variety of settings. The Wyner-Ziv problem is investigated first: coding of an independent and identically distributed (i.i.d.) Gaussian source with side information available only at the decoder, in the form of a noisy version of the source to be encoded. Theoretical models are first established for calculating distortion-rate functions. Several novel practical code implementations are then proposed using the strategy of multi-dimensional nested lattice/trellis coding. By investigating various lattices in the dimensions considered, an analysis is given of how lattice properties affect performance, and methods are proposed for choosing good sublattices in multiple dimensions. By introducing scaling factors, the relationship between distortion and scaling factor is examined for various rates. The best high-dimensional lattice using our scale-rotate method can achieve performance within 1 dB of the Wyner-Ziv limit at low rates, and random nested ensembles can achieve a 1.87 dB gap to the limit. Moreover, the code design is extended to incorporate distributed compressive sensing (DCS). A theoretical framework is proposed and practical designs using nested lattices/trellises are presented for various scenarios. Using a nested trellis, simulations show a 3.42 dB gap from our derived bound for the DCS plus Wyner-Ziv framework.
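A minimal one-dimensional sketch of the nested lattice idea: the encoder sends only the coset index of x in the fine lattice modulo the coarse lattice, and the decoder resolves the ambiguity with its side information. The thesis works with multi-dimensional lattices and trellises, so this scalar version is purely illustrative:

def encode(x, q=0.1, k=8):
    # Coset index of x in the fine lattice qZ modulo the coarse lattice
    # kqZ; only log2(k) = 3 bits are transmitted.
    return round(x / q) % k

def decode(index, y, q=0.1, k=8):
    # Pick the fine-lattice point in the signalled coset closest to y.
    m = round((y / q - index) / k)
    return (index + k * m) * q

x = 3.14159                  # source sample
y = x + 0.02                 # side information at the decoder
idx = encode(x)
print(idx, decode(idx, y))   # x recovered to ~3.1 from 3 bits plus y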
46

Ebrahimian, Mohammad Reza. "Power system operations : state estimation distributed processing /." Digital version accessible at:, 1999. http://wwwlib.umi.com/cr/utexas/main.

47

Drougas, Ioannis. "Rate allocation in distributed stream processing systems." Diss., [Riverside, Calif.] : University of California, Riverside, 2008. http://proquest.umi.com/pqdweb?index=0&did=1663077971&SrchMode=2&sid=1&Fmt=2&VInst=PROD&VType=PQD&RQT=309&VName=PQD&TS=1268240766&clientId=48051.

Abstract:
Thesis (Ph. D.)--University of California, Riverside, 2008.
Includes abstract. Title from first page of PDF file (viewed March 10, 2010). Available via ProQuest Digital Dissertations. Includes bibliographical references (p. 93-98). Also issued in print.
48

Xu, Songcen. "Distributed signal processing algorithms for wireless networks." Thesis, University of York, 2015. http://etheses.whiterose.ac.uk/9516/.

Abstract:
Distributed signal processing algorithms have become a key approach for statistical inference in wireless networks and applications such as wireless sensor networks and smart grids. It is well known that distributed processing techniques deal with the extraction of information from data collected at nodes distributed over a geographic area. In this context, the neighbours of each node collect their local information and transmit their estimates to that node, which then combines the collected information with its local estimate to generate an improved estimate. In this thesis, novel distributed cooperative algorithms for inference in ad hoc networks, wireless sensor networks and smart grids are investigated. Low-complexity and effective algorithms to perform statistical inference in a distributed way are devised, along with a number of innovative approaches for dealing with node failures, compression of data and exchange of information, summarized as follows. Firstly, distributed adaptive algorithms based on the conjugate gradient (CG) method for distributed networks are presented; both incremental and diffusion adaptive solutions are considered. Secondly, adaptive link selection algorithms for distributed estimation and their application to wireless sensor networks and smart grids are proposed. Thirdly, a novel distributed compressed estimation scheme is introduced for sparse signals and systems based on compressive sensing techniques; the proposed scheme consists of compression and decompression modules inspired by compressive sensing, a design procedure is presented, and an algorithm is developed to optimize measurement matrices. Lastly, a novel distributed reduced-rank scheme and adaptive algorithms are proposed for distributed estimation in wireless sensor networks and smart grids; the proposed scheme is based on a transformation that performs dimensionality reduction at each agent of the network, followed by a reduced-dimension parameter vector.
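For orientation, a minimal sketch of diffusion LMS, the adapt-then-combine cooperation underlying the diffusion solutions mentioned above (the thesis develops CG variants, link selection, compression and reduced-rank schemes on top of this idea); the topology and signal model are invented:

import numpy as np

rng = np.random.default_rng(1)
w_true = np.array([0.5, -1.0, 2.0])
neighbors = {0: [0, 1], 1: [0, 1, 2], 2: [1, 2]}   # small line topology
W = {k: np.zeros(3) for k in neighbors}
mu = 0.02

for _ in range(3000):
    # Adapt: each node runs one LMS step on its own noisy measurement.
    psi = {}
    for k in neighbors:
        x = rng.normal(size=3)
        d = x @ w_true + 0.05 * rng.normal()
        psi[k] = W[k] + mu * (d - x @ W[k]) * x
    # Combine: each node averages the intermediate estimates of neighbors.
    for k, nbrs in neighbors.items():
        W[k] = sum(psi[j] for j in nbrs) / len(nbrs)

print(W[0])   # every node approaches w_true = [0.5, -1.0, 2.0]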
49

Xia, Yu S. M. Massachusetts Institute of Technology. "Logical timestamps in distributed transaction processing systems." Thesis, Massachusetts Institute of Technology, 2018. https://hdl.handle.net/1721.1/122877.

Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 73-79).
Distributed transactions are transactions with remote data access. They usually suffer from high network latency (compared to the internal overhead) during data operations on remote data servers, which lengthens the overall transaction execution time. This increases the probability of conflicting with other transactions, causing high abort rates and, in turn, poor performance. In this work, we constructed Sundial, a distributed concurrency control algorithm that applies logical timestamps seamlessly with a cache protocol and works in a hybrid fashion, combining an optimistic approach with lock-based schemes. Sundial tackles the inefficiency problem in two ways. Firstly, Sundial decides the order of transactions on the fly: transactions get their commit timestamps according to their data access traces. Each data item in the database has logical leases maintained by the system, where a lease corresponds to a version of the item. At any logical time point, only a single transaction holds the lease for any particular data item, so lease holders do not have to worry about someone else writing to the item: in the logical timeline, a writer needs to acquire a new lease that is disjoint from the holder's. This lease information is used to calculate the logical commit time of transactions. Secondly, Sundial has a novel caching scheme that works together with logical leases, allowing a local data server to automatically cache data from remote servers while preserving data coherence. We benchmarked Sundial along with state-of-the-art distributed transactional concurrency control protocols. On YCSB, Sundial outperforms the second-best protocol by 57% under high data access contention. On TPC-C, Sundial has a 34% improvement over the state-of-the-art candidate. Our caching scheme delivers performance gains comparable with hand-optimized data replication; with high access skew, it speeds up the workload by up to 4.6x.
"This work was supported (in part) by the U.S. National Science Foundation (CCF-1438955)"
50

Sun, Yudong. "A distributed object model for solving irregularly structured problems on distributed systems /." Hong Kong : University of Hong Kong, 2001. http://sunzi.lib.hku.hk/hkuto/record.jsp?B23501662.

