To view the other types of publications on this topic, follow the link: Distributed space system.

Dissertations on the topic "Distributed space system"

Consult the top 50 dissertations for your research on the topic "Distributed space system".

Next to every source in the list of references there is an "Add to bibliography" option. Use it, and the bibliographic reference to the chosen work will be formatted automatically in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations on a wide variety of disciplines and organise your bibliography correctly.

1

Holsapple, Stephen Alan. „DSM64: A DISTRIBUTED SHARED MEMORY SYSTEM IN USER-SPACE“. DigitalCommons@CalPoly, 2012. https://digitalcommons.calpoly.edu/theses/725.

Abstract:
This paper presents DSM64: a lazy release consistent software distributed shared memory (SDSM) system built entirely in user-space. The DSM64 system is capable of executing threaded applications implemented with pthreads on a cluster of networked machines without any modifications to the target application. The DSM64 system features a centralized memory manager [1] built atop Hoard [2, 3]: a fast, scalable, and memory-efficient allocator for shared-memory multiprocessors. I present an SDSM system written in C++ for Linux operating systems and discuss a straightforward approach to implementing SDSM systems in a Linux environment using system-provided tools and concepts available entirely in user-space. I show that the SDSM system presented in this paper is capable of resolving page faults over a local area network in as little as 2 milliseconds. In my analysis, I present the following. I compare the performance characteristics of a matrix multiplication benchmark using various memory coherency models. I demonstrate that a matrix multiplication benchmark using an LRC model performs orders of magnitude quicker than the same application using a stricter coherency model. I show the effect of the coherency model on memory access patterns and memory contention. I compare the effects of different locking strategies on execution speed and memory access patterns. Lastly, I provide a comparison of the DSM64 system to a non-networked version using a system-provided allocator.
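
To make the lazy-release-consistency mechanism concrete, here is a minimal Python sketch of the twin-and-diff bookkeeping that LRC software DSM systems typically perform; the class and its simplifications are our illustration, not DSM64's actual C++ implementation, which operates on real memory pages.

```python
# Minimal sketch of the twin-and-diff idea behind lazy release consistency
# (LRC) in software DSM. Simplifications: a page is a plain bytearray and
# "network" transfer is a function call.

PAGE_SIZE = 4096

class LrcPage:
    def __init__(self):
        self.data = bytearray(PAGE_SIZE)
        self.twin = None                  # pristine snapshot, set on first write

    def on_first_write(self):
        # Before the first write in an interval, save a pristine copy (twin).
        self.twin = bytes(self.data)

    def make_diff(self):
        # At release time, encode only the bytes that changed since the twin.
        return [(i, b) for i, b in enumerate(self.data) if b != self.twin[i]]

    def apply_diff(self, diff):
        # At acquire time, a remote node patches its stale copy with the diff.
        for i, b in diff:
            self.data[i] = b

# Usage: the writer modifies a page and ships a compact diff, not the page.
writer, reader = LrcPage(), LrcPage()
writer.on_first_write()
writer.data[0:4] = b"ABCD"
reader.apply_diff(writer.make_diff())     # only 4 bytes travel, not 4096
assert bytes(reader.data[0:4]) == b"ABCD"
```

Propagating diffs lazily, only when a synchronization variable is acquired, is what keeps network traffic low between synchronization points.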
2

McDonald, Ian Lindsay. „Memory management in a distributed system of single address space operating systems supporting quality of service“. Thesis, University of Glasgow, 2001. http://theses.gla.ac.uk/5427/.

Abstract:
The choices provided by an operating system to the application developer for managing memory come in two forms: no choice at all, with the operating system making all decisions about managing memory; or the choice to implement virtual memory management specific to the individual application. The second of these choices is, for all intents and purposes, the same as the first: no choice at all. For many application developers, the cost of implementing a customised virtual memory management system is just too high. The result is that, regardless of the level of flexibility available, the developer ends up using the system-provided default. Further exacerbating the problem is the tendency for operating system developers to be extremely unimaginative when providing that same default. Advancements in virtual memory techniques such as prefetching, remote paging, compressed caching, and user-level page replacement, coupled with the provision of user-level virtual memory management, should have heralded a new era of choice and an application-centric approach to memory management. Unfortunately, this has failed to materialise. This dissertation describes the design and implementation of the Heracles virtual memory management system. The Heracles approach is one of inclusion rather than exclusion. The main goal of Heracles is to provide an extensible environment that is configurable to the extent of providing application-centric memory management without the need for application developers to implement their own. However, should the application developer wish to provide a more specialised implementation for all or any part of Heracles, the system is constructed around well-defined interfaces that allow new implementations to be "plugged in" where required. The result is a virtual memory management hierarchy that is highly configurable, highly flexible, and can be adapted at run-time to meet new phases in the application's behaviour. Furthermore, different parts of an application's address space can have different hierarchies associated with managing its memory.
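
The "plugged in" interface idea can be illustrated with a small sketch of a swappable page-replacement policy behind a fixed interface; the names are hypothetical and Heracles itself is not written in Python.

```python
# Sketch of a pluggable replacement-policy interface in the spirit of
# Heracles' well-defined, swappable components (names are invented).
from abc import ABC, abstractmethod

class ReplacementPolicy(ABC):
    @abstractmethod
    def touch(self, page: int) -> None: ...
    @abstractmethod
    def evict(self) -> int: ...

class FifoPolicy(ReplacementPolicy):
    def __init__(self):
        self.queue = []
    def touch(self, page):
        if page not in self.queue:
            self.queue.append(page)
    def evict(self):
        return self.queue.pop(0)          # oldest resident page goes first

class LruPolicy(ReplacementPolicy):
    def __init__(self):
        self.order = []
    def touch(self, page):
        if page in self.order:
            self.order.remove(page)
        self.order.append(page)           # most recently used at the tail
    def evict(self):
        return self.order.pop(0)          # least recently used goes first

def run(policy: ReplacementPolicy, refs):
    for p in refs:
        policy.touch(p)
    return policy.evict()

# The application picks (or supplies) a policy at run time.
print(run(FifoPolicy(), [1, 2, 3, 1]))    # -> 1
print(run(LruPolicy(),  [1, 2, 3, 1]))    # -> 2
```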
3

Rowe, Andrew W. „High-accuracy distributed sensor time-space-position information system for captive-carry field experiments“. Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1996. http://handle.dtic.mil/100.2/ADA324537.

4

Kassan, Mark W. „Distributed Interactive Simulation: The Answer to Interoperable Test and Training Instrumentation“. International Foundation for Telemetering, 1996. http://hdl.handle.net/10150/611445.

Abstract:
International Telemetering Conference Proceedings / October 28-31, 1996 / Town and Country Hotel and Convention Center, San Diego, California
This paper discusses Global Positioning System (GPS) Range Applications Joint Program Office (RAJPO) efforts to foster interoperability between airborne instrumentation, virtual simulators, and constructive simulations using Distributed Interactive Simulation (DIS). In the past, the testing and training communities developed separate airborne instrumentation systems primarily because available technology could not encompass both communities' requirements. As budgets get smaller, as requirements merge, and as technology advances, the separate systems can be used interoperably and possibly merged to meet common requirements. Using DIS to bridge the gap between the RAJPO test instrumentation system and the Air Combat Maneuvering Instrumentation (ACMI) training systems provides a de facto system-level interoperable interface while giving both communities the added benefits of interaction with the modeling and simulation world. The RAJPO leads the test community in using DIS. RAJPO instrumentation has already supported training exercises such as Roving Sands 95, Warfighter 95, and Combat Synthetic Test, Training, and Assessment Range (STTAR) and major tests such as the Joint Advanced Distributed Simulation (JADS) Joint Test and Evaluation (JT&E) program. Future efforts may include support of Warrior Flag 97 and upgrading the Nellis No-Drop Bomb Scoring Ranges. These exercises, combining the use of DIS and RAJPO instrumentation to date, demonstrate how a single airborne system can be used successfully to support both test and training requirements. The Air Combat Training System (ACTS) Program plans to build interoperability through DIS into existing and future ACMI systems. The RAJPO is committed to fostering interoperable airborne instrumentation systems as well as interfaces to virtual and constructive systems in the modeling and simulation world. This interoperability will provide a highly realistic combat training and test synthetic environment, enhancing the military's ability to train its warfighters and test its advanced weapon systems.
5

Bruhn, Fredrik. „Miniaturized Multifunctional System Architecture for Satellites and Robotics“. Doctoral thesis, Uppsala : Acta Universitatis Upsaliensis : Universitetsbiblioteket [distributör], 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-6130.

6

Briao, Eduardo Wenzel. „Métodos de Exploração de Espaço de Projeto em Tempo de Execução em Sistemas Embarcados de Tempo Real Soft baseados em Redes-Em-Chip“. Biblioteca Digital de Teses e Dissertações da UFRGS, 2008. http://hdl.handle.net/10183/13157.

Abstract:
The complexity of electronic systems design has been increasing due to the technological evolution, which now allows the inclusion of a complete system on a single chip (SoC – System-on-Chip). In order to cope with the corresponding design complexity and reduce design costs and time-to-market, systems are built by assembling pre-designed and pre-verified functional modules, called IP (Intellectual Property) cores. IP cores can be reused from previous designs or acquired from third-party vendors. However, an adequate communication architecture is required to interconnect these IP cores. Current communication architectures (busses) are unsuitable for the communication requirements of future SoCs (sharing of bandwidth, lack of scalability). Networks-on-Chip (NoC) arise as one of the solutions to fulfill these requirements. While developing NoC-based embedded systems, the NoC customization is mandatory to fulfill design constraints. This design space exploration (DSE), according to most approaches in the literature, is achieved at compile time (off-line DSE), assuming the profiles of the tasks that will be executed in the embedded system are known a priori. However, nowadays, embedded systems are becoming more and more similar to generic processing devices (such as palmtops), where the tasks to be executed are not completely known a priori. Due to the dynamic modification of the workload of the embedded system, the fulfillment of requirements can be accomplished by using adaptive mechanisms that implement the DSE dynamically (run-time or on-line DSE). In the scope of this work, DSE is on-line: while the system is running, adaptive mechanisms are executed to fulfill the requirements of the system. Consequently, on-line DSE can achieve better results than off-line DSE alone, especially considering embedded systems with tight constraints. It is thus possible to maximize the lifetime of the battery that feeds an embedded system, or even to decrease the deadline miss ratio in a soft real-time system, for example by relocating tasks dynamically in order to generate less communication among the processors, provided that the system runs for enough execution time to amortize the migration overhead. In this work, a combination of allocation heuristics from the domain of Distributed Computing Systems is applied, for instance bin-packing and linear clustering algorithms. Results show that applying task reallocation using the Worst-Fit and Linear Clustering combination reduces the energy consumption and deadline miss ratio by 17% and 37%, respectively, using the copy task migration model.
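
As a hedged illustration of one ingredient of the reported heuristic combination, here is a minimal worst-fit allocation sketch; the data and function are invented for illustration and do not reproduce the thesis' NoC migration machinery.

```python
# Worst-fit allocation sketch: each task goes to the processor with the most
# remaining capacity, which tends to spread communication load across the NoC.
def worst_fit(tasks, processors):
    """tasks: list of (name, load); processors: dict name -> capacity."""
    remaining = dict(processors)
    placement = {}
    for name, load in sorted(tasks, key=lambda t: -t[1]):  # heaviest first
        target = max(remaining, key=remaining.get)         # emptiest processor
        if remaining[target] < load:
            raise ValueError(f"no processor can host task {name}")
        remaining[target] -= load
        placement[name] = target
    return placement

print(worst_fit([("t1", 30), ("t2", 20), ("t3", 25)],
                {"p0": 60, "p1": 60}))
# -> {'t1': 'p0', 't3': 'p1', 't2': 'p1'}
```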
7

Hazra, Tushar K., Charles Sun, Arshad M. Mian und Louis M. Picinich. „Developing Communication and Data Systems for Space Station Facility Class Payloads“. International Foundation for Telemetering, 1995. http://hdl.handle.net/10150/608434.

Abstract:
International Telemetering Conference Proceedings / October 30-November 02, 1995 / Riviera Hotel, Las Vegas, Nevada
The driving force in modern space mission control has been directed towards developing cost-effective and reliable communication and data systems. The objective is to maintain and ensure error-free payload commanding and data acquisition as well as efficient processing of the payload data for concurrent, real-time and future use. While mainframe computing still comprises a majority of commercially available communication and data systems, a significant shift can be noticed towards utilizing a distributed network of workstations and commercially available software and hardware. This motivation reflects advances in modern computer technology and the trend in space mission control today and in the future. The development of communication and data systems involves the implementation of distributed and parallel processing concepts in a network of highly powerful client-server environments. This paper addresses major issues related to developing and integrating communication and data systems and their significance for future developments.
8

Puranik, Sachin Vishwas. „Development of a distributed model for the biological water processor of the water recovery system for NASA Advanced Life Support program“. Master's thesis, Mississippi State : Mississippi State University, 2004. http://library.msstate.edu/etd/show.asp?etd=etd-11152004-174325.

9

Lanzarini, Matteo. „Distributed optimization methods for cooperative beamforming in satellite communications“. Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23246/.

Abstract:
This thesis analyzes various beamforming techniques and theories of space information networking (SIN), with the aim of merging and using them in real applications. We then propose two algorithms to solve two different problems linked to satellites and beamforming. The first one considers a cluster of satellites that performs collaborative beamforming to reach an Earth user while reducing interference in secondary directions. Then we consider a problem for hybrid satellite-terrestrial relay networks (HSTRNs), where multiple geostationary satellites transmit signals to multiple Earth terminals, with the help of multiple single-antenna relays.
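
For orientation, the following toy numpy sketch shows the core idea behind collaborative beamforming: phase distributed transmitters so their signals add coherently at one user. The geometry, carrier frequency and free-space narrowband model are our assumptions, not the thesis' system model.

```python
import numpy as np

# Conjugate (matched-phase) beamforming for N distributed transmitters
# steering toward a single Earth user (toy free-space, narrowband model).
c, f = 3e8, 2.0e9                        # speed of light, carrier frequency
lam = c / f
rng = np.random.default_rng(0)
pos = rng.uniform(0, 50, size=(8, 3))    # 8 satellites in a 50 m cluster (toy)
user_dir = np.array([0.0, 0.0, -1.0])    # unit vector toward the user

# Phase each element so its contribution adds coherently at the user.
phase = 2 * np.pi / lam * pos @ user_dir
w = np.exp(-1j * phase) / np.sqrt(len(pos))

def array_gain(direction):
    steer = np.exp(1j * 2 * np.pi / lam * pos @ direction)
    return abs(w @ steer) ** 2

print(array_gain(user_dir))                    # ~8: coherent main lobe
print(array_gain(np.array([0.0, 1.0, 0.0])))   # much smaller sidelobe
```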
10

Banu, Shahera. „Examining the impact of climate change on dengue transmission in the Asia-Pacific region“. Thesis, Queensland University of Technology, 2013. https://eprints.qut.edu.au/66387/1/Shahera_Banu_Thesis.pdf.

Abstract:
Dengue fever (DF) is a serious public health concern in many parts of the world. An increase in DF incidence has been observed globally over the past decades. Multiple factors including urbanisation, increased international travel and global climate change are thought to be responsible for the increase in DF. However, little research has been conducted in the Asia-Pacific region about the impact of these changes on dengue transmission. The overarching aim of this thesis is to explore the spatiotemporal pattern of DF transmission in the Asia-Pacific region and project the future risk of DF attributable to climate change. Annual data of DF outbreaks for sixteen countries in the Asia-Pacific region over the last fifty years were used in this study. The results show that the geographic range of DF in this region increased significantly over the study period. Thailand, Vietnam and Laos were identified as the highest-risk areas, and there was a southward expansion observed in the transmission pattern of DF which might have originated from the Philippines or Thailand. Additionally, detailed DF data were obtained and the space-time clustering of DF transmission was examined in Bangladesh. Monthly DF data were used for the entire country at the district level during 2000-2009. Dhaka district was identified as the most likely DF cluster in Bangladesh, and several districts of the southern part of Bangladesh were identified as secondary clusters in the years 2000-2002. In order to examine the association between meteorological factors and DF transmission and to project the future risk of DF using different climate change scenarios, the climate-DF relationship was examined in Dhaka, Bangladesh. The results show that climate variability (particularly maximum temperature and relative humidity) was positively associated with DF transmission in Dhaka. The effects of climate variability were observed at a lag of four months, which might help to potentially control and prevent DF outbreaks through effective vector management and community education. Based on the quantitative assessment of the climate-DF relationship, projected climate change will likely increase mosquito abundance and activity and DF in this area. Assuming a temperature increase of 3.3°C without any adaptation measures or significant changes in socio-economic conditions, the consequence will be devastating, with a projected annual increase of 16,030 cases in Dhaka, Bangladesh by the end of this century. Therefore, public health authorities need to be prepared for a likely increase of DF transmission in this region. This study adds to the literature on the recent trends of DF and the impacts of climate change on DF transmission. These findings may have significant public health implications for the control and prevention of DF, particularly in the Asia-Pacific region.
11

Akopyan, Evelyne. „Fiabilité de l'architecture réseau des systèmes spatiaux distribués sur essaims de nanosatellites“. Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSEP102.

Abstract:
The study of the low-frequency range is essential for deep-space observation, as it extracts precious information from Dark Ages signals, which are signatures of the very early Universe. To this day, the majority of low-frequency radio interferometers are deployed in desert regions on the surface of the Earth. However, these signals are easily distorted by radio-frequency interference as well as the ionosphere, making them hardly observable when they are not completely masked. One solution to this problem would be to observe the low-frequency signals directly from space, by deploying a nanosatellite swarm in orbit around the Moon. This swarm is defined as a Distributed Space System (DSS) operating as an interferometer, while being shielded by the Moon from terrestrial interference and ionospheric distortions. However, the configuration of a nanosatellite swarm as a space observatory proves to be a challenging problem in terms of communication, mostly because of the lack of external infrastructure in space and the amount of observation data to propagate within the swarm. Thus, the objective of the thesis is to define a reliable network architecture that would comply with the requirements of a MANET and a distributed system simultaneously. This thesis starts by characterizing the network of the nanosatellite swarm and highlights its strong heterogeneity. Then, it introduces a set of algorithms, based on graph division, to fairly distribute the network load among the swarm, and compares their performance in terms of fairness. Finally, it assesses the fault tolerance of the system in terms of robustness (capacity to resist faults) and resilience (capacity to maintain functionality when faults occur) and evaluates the impact of graph division on the overall reliability of the swarm. The division algorithms developed in this thesis should ensure the Quality of Service (QoS) necessary for the proper functioning of a space interferometer. To this end, relevant routing protocols should be thoroughly studied and integrated, in order to meet the strict requirements of this advanced application in terms of performance and reliability.
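
As a rough illustration of scoring a graph division for load fairness, here is a toy Python sketch built around Jain's fairness index; the greedy split and the degree-based load model are our simplifications, not the algorithms developed in the thesis.

```python
# Score how evenly a division of the swarm's communication graph spreads
# load, using Jain's fairness index (1.0 = perfectly fair).
def jain_index(loads):
    s, sq = sum(loads), sum(x * x for x in loads)
    return s * s / (len(loads) * sq) if sq else 1.0

def greedy_split(degrees, k):
    """Assign nodes, heaviest degree first, to the lightest cluster."""
    cluster_load = [0.0] * k
    assignment = {}
    for node, deg in sorted(degrees.items(), key=lambda kv: -kv[1]):
        i = min(range(k), key=lambda j: cluster_load[j])
        cluster_load[i] += deg
        assignment[node] = i
    return assignment, cluster_load

# Toy swarm: node -> number of inter-satellite links (degree).
degrees = {"s1": 5, "s2": 4, "s3": 4, "s4": 2, "s5": 2, "s6": 1}
assignment, loads = greedy_split(degrees, k=2)
print(assignment, loads, jain_index(loads))    # loads [9.0, 9.0] -> index 1.0
```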
12

Jeffrey, Alan. „Observation spaces and timed processes“. Thesis, University of Oxford, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.302874.

13

Gull, Aarron. „Cherub : a hardware distributed single shared address space memory architecture“. Thesis, City University London, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.356981.

14

Menezes, Ronaldo Parente de. „Resource management in open tuple space systems“. Thesis, University of York, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.313869.

15

Tang, Yipeng. „DSP implementation of trellis coded modulation and distributed space time coding“. Morgantown, W. Va. : [West Virginia University Libraries], 2001. http://etd.wvu.edu/templates/showETD.cfm?recnum=1796.

Abstract:
Thesis (M.S.)--West Virginia University, 2001.
Title from document title page. Document formatted into pages; contains v, 118, [64] p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 114-118).
16

Corbin, Benjamin Andrew. „The value proposition of distributed satellite systems for space science missions“. Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/103442.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 382-402).
The resources available for planetary science missions are finite and subject to some uncertainty. Despite decreasing costs of spacecraft components and launch services, the cost of space science missions is increasing, causing some missions to be canceled or delayed, and fewer science groups have the opportunity to achieve their goals due to budget limits. New methods in systems engineering have been developed to evaluate flexible systems and their sustained lifecycle value, but these methods are not yet employed by space agencies in the early stages of a mission's design. Previous studies of distributed satellite systems (DSS) showed that they are rarely competitive with monolithic systems; however, comparatively little research has focused on how DSS can be used to achieve new, fundamental space science goals that simply cannot be achieved with monolithic systems. The Responsive Systems Comparison (RSC) method combines Multi-Attribute Tradespace Exploration with Epoch-Era Analysis to examine benefits, costs, and flexible options in complex systems over the mission lifecycle. Modifications to the RSC method as it exists in previously published literature were made in order to more accurately characterize how value is derived from space science missions. A tiered structure in multi-attribute utility theory allows attributes of complex systems to be mentally compartmentalized by stakeholders and more explicitly shows synergy between complementary science goals. New metrics help rank designs by the value derived over their entire mission lifecycle and show more accurate cumulative value distributions. A complete list of the emergent capabilities of DSS was defined through the examination of the potential benefits of DSS as well as other science campaigns that leverage multiple assets to achieve their scientific goals. Three distinct categories consisting of seven total unique capabilities related to scientific data sampling and collection were identified and defined. The three broad categories are fundamentally unique, analytically unique, and operationally unique capabilities. This work uses RSC to examine four case studies of DSS missions that achieve new space science goals by leveraging these emergent capabilities. ExoplanetSat leverages shared sampling to conduct observations of necessary frequency and length to detect transiting exoplanets. HOBOCOP leverages simultaneous sampling and stacked sampling to study the Sun in far greater detail than any previous mission. ÆGIR leverages census sampling and self-sampling to catalog asteroids for future ISRU and mining operations. GANGMIR leverages staged sampling with sacrifice sampling and stacked sampling to answer fundamental questions related to the future human exploration of Mars. In all four case studies, RSC showed how scientific value was gained that would be impossible or unsatisfactory with monolithic systems. Information gained in these studies helped stakeholders more accurately understand the risks and opportunities that arise as a result of the added flexibility in these missions. The wide scope of these case studies demonstrates how RSC can be applied to any science mission, especially one with goals that are more easily achieved with (or impossible to achieve without) DSS. Each study serves as a blueprint for how to conduct a Pre-Phase A study using these methods.
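
The tiered multi-attribute utility idea can be sketched in a few lines; the attributes, ranges and weights below are invented for illustration and are not the dissertation's actual value model.

```python
# Two-tier multi-attribute utility aggregation sketch: attributes roll up
# into per-goal utilities (tier 1), which roll up into mission utility (tier 2).
def linear_utility(x, lo, hi):
    """Map a raw attribute onto [0, 1]; clamp outside the accepted range."""
    return max(0.0, min(1.0, (x - lo) / (hi - lo)))

def tier_utility(attrs, specs):
    """specs: attribute -> (worst, best, weight); weights sum to 1."""
    return sum(w * linear_utility(attrs[a], lo, hi)
               for a, (lo, hi, w) in specs.items())

coverage = tier_utility({"revisit_h": 6, "sites": 40},
                        {"revisit_h": (24, 1, 0.6),   # 24 h worst, 1 h best
                         "sites": (1, 50, 0.4)})
resolution = tier_utility({"gsd_m": 3.0},
                          {"gsd_m": (10.0, 0.5, 1.0)})
mission_utility = 0.7 * coverage + 0.3 * resolution
print(round(mission_utility, 3))
```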
by Benjamin Andrew Corbin.
Ph. D.
17

Boone, Gary Noel. „Extreme dimensionality reduction for text learning : cluster-generated feature spaces“. Diss., Georgia Institute of Technology, 2000. http://hdl.handle.net/1853/8139.

18

Jacobs, Zachary A. „PROVIDING A PERSISTENT SPACE PLUG-AND-PLAY AVIONICS NETWORK ON THE INTERNATIONAL SPACE STATION“. UKnowledge, 2013. http://uknowledge.uky.edu/ece_etds/16.

Abstract:
The CubeLab is a new payload standard that greatly improves access to the International Space Station (ISS) for small, rapid turn-around microgravity experiments. CubeLabs are small (less than 16”x8”x4” and under 10kg) modular payloads that interface with the NanoRacks Platform aboard the ISS. CubeLabs receive power from the station and transfer data using the standard terrestrial plug-and-play Universal Serial Bus (USB). The Space Plug-and-play Avionics (SPA) architecture is a modular technology for spacecraft that provides an infrastructure for modular satellite components to reduce the time to orbit and development costs for satellites. This paper describes the development of a bus capable of interfacing SPA-1 payloads in the CubeLab form-factor aboard the ISS. This CubeLab also provides the “discover and join” functionality that is necessary for a SPA-1 network of devices. This will ultimately provide persistent SPA capabilities on the ISS which will allow users to send SPA-1 devices to orbit for on-the-fly installation by astronauts.
19

Sugiura, Shinya. „Coherent versus non-coherent space-time shift keying for co-located and distributed MIMO systems“. Thesis, University of Southampton, 2010. https://eprints.soton.ac.uk/165759/.

Abstract:
In this thesis, we propose the novel Space-Time Coding (STC) concept of Space-Time Shift Keying (STSK) and explore its characteristics in the contexts of both co-located and cooperative Multiple-Input Multiple-Output (MIMO) systems using both coherent and non-coherent detection. Furthermore, we conceive new serially-concatenated turbo-coding-assisted STSK arrangements for the sake of approaching the channel capacity limit, which are designed with the aid of EXtrinsic Information Transfer (EXIT) charts. The basic STSK concept is first proposed for the family of co-located MIMO systems employing coherent detection. More specifically, in order to generate space-time codewords, these Coherent STSK (CSTSK) encoding schemes activate one out of Q dispersion matrices. The CSTSK scheme is capable of striking an attractive tradeoff between the achievable diversity gain and the transmission rate, hence having the potential of outperforming other classic MIMO arrangements. Since no inter-channel interference is imposed at the CSTSK receiver, the employment of single-stream-based Maximum Likelihood (ML) detection becomes realistic. Furthermore, for the sake of achieving an infinitesimally low Bit-Error Ratio (BER) at low SNRs, we conceive a three-stage concatenated turbo CSTSK scheme. In order to mitigate the effects of potential Channel State Information (CSI) estimation errors as well as the high pilot overhead, the Differentially-encoded STSK (DSTSK) philosophy is conceived with the aid of the Cayley transform and differential unitary space-time modulation. The DSTSK receiver benefits from low-complexity non-coherent single-stream-based ML detection, while retaining the CSTSK scheme's fundamental benefits. In order to create a more flexible STSK architecture, the above-mentioned co-located CSTSK scheme is generalized so that P out of Q dispersion matrices are activated during each space-time block interval. Owing to its highly flexible structure, this generalized STSK scheme subsumes diverse other MIMO arrangements. Finally, the STSK concept is combined with cooperative MIMO techniques, which are capable of attaining the maximum achievable diversity gain by eliminating the undesired performance limitations imposed by uncorrelated fading. More specifically, considering the usual twin-phase cooperative transmission regime constituted by a broadcast phase and by a cooperative phase, the CSTSK and DSTSK schemes developed for co-located MIMO systems are employed during the cooperative transmission phase.
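
The core CSTSK encoding step, activating one of Q dispersion matrices and scaling it by a conventional modulation symbol, can be sketched as follows; the random dispersion matrices here are placeholders, whereas practical designs optimize them.

```python
import numpy as np

# STSK encoding sketch: information rides on both the index q of the
# activated dispersion matrix and the PSK symbol index l (toy matrices).
M, T, Q = 2, 2, 4                        # tx antennas, time slots, matrices
rng = np.random.default_rng(1)
A = [m * np.sqrt(T) / np.linalg.norm(m)  # power-normalised dispersion set
     for m in (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T))
               for _ in range(Q))]
psk = np.exp(2j * np.pi * np.arange(4) / 4)   # QPSK alphabet

def stsk_encode(q, l):
    """Space-time codeword S = A_q * s_l  (an M x T block)."""
    return A[q] * psk[l]

S = stsk_encode(q=2, l=1)    # log2(Q) + log2(4) = 4 bits per block
print(S.shape)               # (2, 2)
```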
20

Paques, Henrique Wiermann. „The Ginga Approach to Adaptive Query Processing in Large Distributed Systems“. Diss., Georgia Institute of Technology, 2003. http://hdl.handle.net/1853/5277.

Abstract:
Processing and optimizing ad-hoc and continual queries in an open environment with distributed, autonomous, and heterogeneous data servers (e.g., the Internet) pose several technical challenges. First, it is well known that optimized query execution plans constructed at compile time make some assumptions about the environment (e.g., network speed, data sources' availability). When such assumptions no longer hold at runtime, how can I guarantee the optimized execution of the query? Second, it is widely recognized that runtime adaptation is a complex and difficult task in terms of cost and benefit. How to develop an adaptation methodology that makes the runtime adaptation beneficial at an affordable cost? Last, but not least, are there any viable performance metrics and performance evaluation techniques for measuring the cost and validating the benefits of runtime adaptation methods? To address the new challenges posed by Internet query and search systems, several areas of computer science (e.g., database and operating systems) are exploring the design of systems that are adaptive to their environment. However, despite the large number of adaptive systems proposed in the literature up to now, most of them present a solution for adapting the system to a specific change to the runtime environment. Typically, these solutions are not easily "extendable" to allow the system to adapt to other runtime changes not predicted in their approach. In this dissertation, I study the problem of how to construct a framework where I can catalog the known solutions to query processing adaptation and how to develop an application that makes use of this framework. I call the solution to these two problems the Ginga approach. I provide in this dissertation three main contributions: The first contribution is the adoption of the Adaptation Space concept combined with feedback-based control mechanisms for coordinating and integrating different kinds of query adaptations to different runtime changes. The second contribution is the development of a systematic approach, called Ginga, to integrate the adaptation space with feedback control that allows me to combine the generation of predefined query plans (at compile time) with reactive adaptive query processing (at runtime), including policies and mechanisms for determining when to adapt, what to adapt, and how to adapt. The third contribution is a detailed study on how to adapt to two important runtime changes, and their combination, encountered during the execution of distributed queries: memory constraints and end-to-end delays.
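
A minimal sketch of the when/what/how decision in feedback-driven adaptive query processing might look like the following; the thresholds and plan names are invented, and Ginga's actual policies and adaptation spaces are considerably richer.

```python
# Feedback-style adaptation decision sketch for a running distributed query.
def choose_plan(stats):
    # WHEN: adapt only if the runtime drifted past a tolerance band.
    drift = abs(stats["delay_ms"] - stats["planned_delay_ms"])
    if drift < 50 and stats["free_mem_mb"] > 100:
        return "keep current plan"
    # WHAT / HOW: pick the precomputed plan matching the new conditions.
    if stats["free_mem_mb"] <= 100:
        return "switch to disk-based hash join"    # memory-constrained plan
    return "reorder joins to mask end-to-end delay"

print(choose_plan({"delay_ms": 400, "planned_delay_ms": 80,
                   "free_mem_mb": 512}))
```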
21

Varagnolo, Damiano. „Distributed Parametric-Nonparametric Estimation in Networked Control Systems“. Doctoral thesis, Università degli studi di Padova, 2011. http://hdl.handle.net/11577/3421610.

Abstract:
In the framework of parametric and nonparametric distributed estimation, we introduce and mathematically analyze some consensus-based regression strategies parameterized by a guess of the number of agents in the network. The parametric estimators assume a-priori information about the finite set of parameters to be estimated, while the nonparametric ones use a reproducing kernel Hilbert space as the hypothesis space. The analysis of the proposed distributed regressors offers some sufficient conditions ensuring that the estimators perform better, under the estimation-error-variance metric, than locally optimal ones. Moreover, it characterizes, under Euclidean distance metrics, the performance losses of the distributed estimators with respect to centralized optimal ones. We also offer a novel on-line algorithm that distributedly computes certificates of quality attesting the goodness of the estimation results, and show that the nonparametric distributed regressor is an approximate distributed Regularization Network requiring small computational, communication and data storage efforts. We then analyze the problem of estimating a function from different noisy data sets collected by spatially distributed sensors subject to unknown temporal shifts, and perform time delay estimation through the minimization of functions of inner products in reproducing kernel Hilbert spaces. Due to the importance of the knowledge of the number of agents in the previously analyzed algorithms, we also propose a design methodology for its distributed estimation. This methodology is based on the following paradigm: some locally and randomly generated values are exchanged among the various sensors, and are then modified by known consensus-based strategies. Statistical analysis of the post-consensus values allows the estimation of the number of sensors participating in the process. The first main feature of this approach is that the algorithms are completely distributed, since they do not require leader-election steps. Moreover, sensors are not requested to transmit authenticating information like identification numbers or similar data, and thus the strategy can be implemented even if privacy problems arise. After a rigorous formulation of the paradigm we analyze some practical examples, fully characterize them from a statistical point of view, and finally provide some general theoretical results along with asymptotic analyses.
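
The consensus primitive underlying such distributed estimators can be sketched in a few lines of Python; the ring topology and iteration count are arbitrary illustrative choices.

```python
import numpy as np

# Pairwise average consensus: repeated neighbour averaging drives every
# agent's local estimate toward the network-wide mean without a fusion center.
rng = np.random.default_rng(2)
x = rng.normal(5.0, 2.0, size=10)                 # each agent's local estimate
edges = [(i, (i + 1) % 10) for i in range(10)]    # ring network

for _ in range(2000):
    i, j = edges[rng.integers(len(edges))]
    x[i] = x[j] = (x[i] + x[j]) / 2               # neighbours average

print(x.round(3))    # all entries close to the initial mean
```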
22

Sundholm, Hillevi. „Spaces within Spaces : The Construction of a Collaborative Reality“. Doctoral thesis, Kista : Department of Computer and Systems Sciences (Stockholm University together with KTH), 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-6860.

23

Alotaibi, Faisal T. „Distributed space-time block coding in cooperative relay networks with application in cognitive radio“. Thesis, Loughborough University, 2012. https://dspace.lboro.ac.uk/2134/10965.

Abstract:
Spatial diversity is an effective technique to combat the effects of severe fading in wireless environments. Recently, cooperative communications has emerged as an attractive communications paradigm that can introduce a new form of spatial diversity, known as cooperative diversity, which can enhance system reliability without sacrificing the scarce bandwidth resource or consuming more transmit power. It enables single-antenna terminals in a wireless relay network to share their antennas to form a virtual antenna array on the basis of their distributed locations. As such, the same diversity gains as in multi-input multi-output systems can be achieved without requiring multiple-antenna terminals. In this thesis, a new approach to cooperative communications via distributed extended orthogonal space-time block coding (D-EO-STBC) based on limited partial feedback is proposed for cooperative relay networks with three and four relay nodes and then generalized for an arbitrary number of relay nodes. This scheme can achieve full cooperative diversity and full transmission rate in addition to array gain, and it has certain properties that make it attractive for practical systems such as orthogonality, flexibility, low computational complexity and decoding delay, and high robustness to node failure. Versions of the closed-loop D-EO-STBC scheme based on cooperative orthogonal frequency division multiplexing type transmission are also proposed for both flat and frequency-selective fading channels, which can overcome imperfect synchronization in the network. As such, this proposed technique can effectively cope with the effects of fading and timing errors. Moreover, to increase the end-to-end data rate, this scheme is extended to two-way relay networks through a three-time-slot framework. On the other hand, to substantially reduce the feedback channel overhead, limited feedback approaches based on parameter quantization are proposed. In particular, an optimal one-bit partial feedback approach is proposed for the generalized D-O-STBC scheme to maximize the array gain. To further enhance the end-to-end bit error rate performance of the cooperative relay system, a relay selection scheme based on D-EO-STBC is then proposed. Finally, to highlight the utility of the proposed D-EO-STBC scheme, an application to cognitive radio is studied.
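
As background, the classic 2x1 Alamouti space-time block code, the simplest orthogonal STBC that distributed EO-STBC schemes build on, can be sketched as follows (noise-free toy model; the thesis' distributed closed-loop schemes go well beyond this).

```python
import numpy as np

# Alamouti 2x1 STBC over a flat-fading channel h, noise-free for clarity.
rng = np.random.default_rng(3)
h = (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)
s = np.array([1 + 1j, 1 - 1j]) / np.sqrt(2)       # two QPSK symbols

# Slot 1 transmits (s1, s2); slot 2 transmits (-s2*, s1*).
r1 = h[0] * s[0] + h[1] * s[1]
r2 = -h[0] * np.conj(s[1]) + h[1] * np.conj(s[0])

# Orthogonality lets linear combining recover each symbol with diversity 2.
g = abs(h[0]) ** 2 + abs(h[1]) ** 2
s1_hat = (np.conj(h[0]) * r1 + h[1] * np.conj(r2)) / g
s2_hat = (np.conj(h[1]) * r1 - h[0] * np.conj(r2)) / g
print(np.allclose([s1_hat, s2_hat], s))           # True
```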
24

Jung, Jin Woo. „Modeling and control of fuel cell based distributed generation systems“. Connect to resource, 2005. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1116451881.

Abstract:
Thesis (Ph. D.)--Ohio State University, 2005.
Title from first page of PDF file. Document formatted into pages; contains xvi, 209 p.; also includes graphics. Includes bibliographical references (p. 202-209). Available online via OhioLINK's ETD Center.
25

Jones, Alistair. „Co-located collaboration in interactive spaces for preliminary design“. Phd thesis, Université de Technologie de Compiègne, 2013. http://tel.archives-ouvertes.fr/tel-01067774.

Abstract:
The preliminary design phase occurs near the launch of an engineering project, normally after an initial requirements-gathering phase. Through a series of meetings which gather the key actors of a project, effective preliminary design involves discussion and decision-making punctuated by group creativity techniques. These activities are designed to explore the potential solutions of the problem, such as brainstorming or causal analysis, or to address the project itself, such as collaborative project planning. Such activities are usually conducted in traditional meeting rooms with pen-and-paper media, which requires significant time and effort to prepare, perform, and later render into a digitally exploitable format. These processes have resisted previous attempts at computer-supported solutions, because any additional instruments risk obstructing the natural collaboration and workflow that make these activities so beneficial. Over the past decade, technologies such as interactive tabletops, interactive wall displays, speech recognition software, 3D motion-sensing cameras, and handheld tablets and smartphones have experienced significant advances in maturity. Their form factors resemble the physical configuration of traditional pen-and-paper environments, while their "natural" input devices (based on multi-touch, gestures, voice, tangibles, etc.) allow them to leverage a user's pre-existing verbal, spatial, social, motor and cognitive skills. Researchers hypothesize that having these devices working in concert inside interactive spaces could augment collaboration for co-located (i.e. physically present) groups of users. There currently exist several interactive spaces in the literature, illustrating a wide range of potential hardware configurations and interaction techniques. The goal of this thesis is first to explore what qualities these interactive spaces should exhibit in their interaction design, particularly with regard to preliminary design activities, and second, to investigate how their heterogeneous and distributed computing devices can be unified into a flexible and extensible distributed computing architecture. The first main contribution of this thesis is an extensive presentation of an interactive space, which at its core uses a configuration not yet fully explored in previous literature: a large multi-touch tabletop and a large multi-touch interactive board display. The design of this interactive space is driven by observations of groups engaged in preliminary design activities in traditional environments and a literature review aimed at extracting user-centered design guidelines. Special consideration is given to the user interface as it extends across multiple shared displays, and maintains a separation of concerns regarding personal and group work. Finally, evaluations using groups of five and six users show that using such an interactive space, coupled with our proposed multi-display interaction techniques, leads to a more effective construction of the digital artifacts used in preliminary design. The second main contribution of this thesis is a multi-agent infrastructure for the distributed computing environment which effectively accommodates a wide range of platforms and devices in concerted interaction. By using agent-oriented programming and by establishing a common content language for messaging, the infrastructure is especially tolerant of network faults and suitable for rapid prototyping of heterogeneous devices in the interactive space.
26

Bellachehab, Anass. „Pairwise gossip in CAT(k) metric spaces“. Thesis, Evry, Institut national des télécommunications, 2017. http://www.theses.fr/2017TELE0017/document.

Abstract:
This thesis deals with the problem of consensus on networks. The networks under study consist of identical agents that can communicate with each other and have memory and computational capacity. The network has no central node. Each agent stores a value that, initially, is not known by the other agents. The goal is to achieve consensus, i.e. all agents having the same value, in a fully distributed way; hence, only neighboring agents can have direct communication. This problem has a long and fruitful history. If all values belong to some vector space, several protocols are known to solve this problem. A well-known solution is the pairwise gossip protocol, which achieves consensus asymptotically. It is an iterative protocol that consists in choosing two adjacent nodes at each iteration and averaging them. The specificity of this Ph.D. thesis lies in the fact that the data stored by the agents does not necessarily belong to a vector space, but to some metric space. For instance, each agent stores a direction (the metric space is the projective space), a position on a sphere (the metric space is a sphere), or even a position on a metric graph (the metric space is the underlying graph). The mentioned pairwise gossip protocols then make no sense, since averaging implies additions and multiplications that are not available in metric spaces: what is the average of two directions, for instance? However, in metric spaces midpoints sometimes make sense, and when they do, they can advantageously replace averages. In this work, we realized that, if one wants midpoints to converge, curvature matters. We focused on the case where the data space belongs to some special class of metric spaces called CAT(k) spaces, and we were able to show that, provided the initial data is "close enough" in some precise sense, the midpoints-based gossip algorithm – which we refer to as Random Pairwise Midpoints – does converge to consensus asymptotically. Our generalization allows us to treat new cases of data spaces such as positive definite matrices, the rotation group and metamorphic systems.
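
A toy Python sketch of Random Pairwise Midpoints on the unit sphere illustrates the midpoint-for-average substitution; the complete graph and the one-octant restriction (our stand-in for "close enough" initial data) are simplifications.

```python
import numpy as np

# Random Pairwise Midpoints on the unit sphere: two random neighbours
# repeatedly move to the geodesic midpoint of their current values.
def geodesic_midpoint(p, q):
    m = p + q                        # chord midpoint, then project back
    return m / np.linalg.norm(m)     # valid for non-antipodal points

rng = np.random.default_rng(4)
pts = rng.normal(size=(6, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
pts = np.abs(pts)                    # one octant: initial data "close enough"

for _ in range(500):
    i, j = rng.choice(6, size=2, replace=False)
    pts[i] = pts[j] = geodesic_midpoint(pts[i], pts[j])

print(pts.std(axis=0).round(6))      # ~0: the agents have reached consensus
```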
27

Fan, Yang, Hidehiko Masuhara, Tomoyuki Aotani, Flemming Nielson und Hanne Riis Nielson. „AspectKE*: Security aspects with program analysis for distributed systems“. Universität Potsdam, 2010. http://opus.kobv.de/ubp/volltexte/2010/4136/.

Abstract:
Enforcing security policies in distributed systems is difficult, in particular when a system contains untrusted components. We designed AspectKE*, a distributed AOP language based on a tuple space, to tackle this issue. In AspectKE*, aspects can enforce access control policies that depend on the future behavior of running processes. One of the key language features is the predicates and functions that extract results of static program analysis, which are useful for defining security aspects that have to know about the future behavior of a program. AspectKE* also provides a novel variable binding mechanism for pointcuts, so that pointcuts can uniformly specify join points based on both static and dynamic information about the program. Our implementation strategy performs the fundamental static analysis at load time, so as to keep runtime overheads minimal. We implemented a compiler for AspectKE*, and demonstrate the usefulness of AspectKE* through a security aspect for a distributed chat system.
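
For orientation, here is a Linda-style tuple space sketch with a pre-write policy hook, loosely evoking how aspects can intercept tuple-space operations; the hook and names are invented, and AspectKE*'s static-analysis predicates go far beyond this.

```python
# Minimal tuple space with a policy hook checked before every write.
class TupleSpace:
    def __init__(self, policy=lambda t: True):
        self.tuples, self.policy = [], policy

    def out(self, t):                       # write a tuple, policy permitting
        if not self.policy(t):
            raise PermissionError(f"policy rejected {t}")
        self.tuples.append(t)

    def rd(self, pattern):                  # non-destructive pattern read
        for t in self.tuples:
            if len(t) == len(pattern) and all(
                    p is None or p == v for p, v in zip(pattern, t)):
                return t
        return None

# Deny publishing tuples tagged as secret; allow everything else.
ts = TupleSpace(policy=lambda t: t[0] != "secret")
ts.out(("chat", "alice", "hello"))
print(ts.rd(("chat", None, None)))          # ('chat', 'alice', 'hello')
```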
28

Borkar, Milind. „A distributed Monte Carlo method for initializing state vector distributions in heterogeneous smart sensor networks“. Diss., Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/22680.

Abstract:
The objective of this research is to demonstrate how an underlying system's state vector distribution can be determined in a distributed heterogeneous sensor network with reduced subspace observability at the individual nodes. We show how the network, as a whole, is capable of observing the target state vector even if the individual nodes are not capable of observing it locally. The initialization algorithm presented in this work can generate the initial state vector distribution for networks with a variety of sensor types as long as the measurements at the individual nodes are known functions of the target state vector. Initialization is accomplished through a novel distributed implementation of the particle filter that involves serial particle proposal and weighting strategies, which can be accomplished without sharing raw data between individual nodes in the network. The algorithm is capable of handling missed detections and clutter as well as compensating for delays introduced by processing, communication and finite signal propagation velocities. If multiple events of interest occur, their individual states can be initialized simultaneously without requiring explicit data association across nodes. The resulting distributions can be used to initialize a variety of distributed joint tracking algorithms. In such applications, the initialization algorithm can initialize additional target tracks as targets come and go during the operation of the system with multiple targets under track.
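
The serial proposal-then-weighting idea can be sketched with two toy heterogeneous sensors, neither of which observes the full state alone; the measurement models and numbers are invented for illustration.

```python
import numpy as np

# Serial particle proposal and weighting across two nodes: a bearing-only
# sensor proposes particles, a range-only sensor re-weights them, and only
# particles (never raw measurements) cross the network (toy 2-D example).
rng = np.random.default_rng(5)
target = np.array([3.0, 4.0])

# Node A (bearing-only): propose positions along its measured bearing.
bearing = np.arctan2(target[1], target[0]) + rng.normal(0, 0.02)
radii = rng.uniform(0, 10, size=500)
particles = np.c_[radii * np.cos(bearing), radii * np.sin(bearing)]

# Node B (range-only): weight received particles by its own likelihood.
range_meas = np.linalg.norm(target) + rng.normal(0, 0.1)
w = np.exp(-0.5 * ((np.linalg.norm(particles, axis=1) - range_meas) / 0.1) ** 2)
w /= w.sum()

print((w @ particles).round(2))   # weighted mean lands near [3. 4.]
```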
APA, Harvard, Vancouver, ISO und andere Zitierweisen
29

Bellachehab, Anass. „Pairwise gossip in CAT(k) metric spaces“. Electronic Thesis or Diss., Evry, Institut national des télécommunications, 2017. http://www.theses.fr/2017TELE0017.

Der volle Inhalt der Quelle
Annotation:
Cette thèse aborde le problème du consensus dans les réseaux. On étudie des réseaux composés d'agents identiques capables de communiquer entre eux, qui ont une mémoire et des capacités de calcul. Le réseau ne possède pas de nœud central de fusion. Chaque agent stocke une valeur qui n'est pas initialement connue par les autres agents. L'objectif est d'atteindre le consensus, i.e. tous les agents ont la même valeur, d'une manière distribuée. De plus, seuls les agents voisins peuvent communiquer entre eux. Ce problème a une longue et riche histoire. Si toutes les valeurs appartiennent à un espace vectoriel, il existe plusieurs protocoles pour résoudre le problème. Une des solutions connues est l'algorithme du gossip qui atteint le consensus de manière asymptotique. C'est un protocole itératif qui consiste à choisir deux nœuds adjacents à chaque itération et à les moyenner. La spécificité de cette thèse est dans le fait que les données stockées par les agents n'appartiennent pas nécessairement à un espace vectoriel, mais à un espace métrique. Par exemple, chaque agent stocke une direction (l'espace métrique est l'espace projectif) ou une position dans un graphe métrique (l'espace métrique est le graphe sous-jacent). Là, les protocoles de gossip mentionnés plus haut n'ont plus de sens, car l'addition n'est plus disponible dans les espaces métriques. Cependant, dans les espaces métriques, les points milieux ont du sens dans certains cas, et ils peuvent alors se substituer aux moyennes arithmétiques. Dans ce travail, on a compris que la convergence du gossip avec les points milieux dépend de la courbure. On s'est focalisé sur le cas où l'espace des données appartient à une classe d'espaces métriques appelés les espaces CAT(k). Et on a pu démontrer que si les données initiales sont suffisamment « proches » dans un sens bien précis, alors le gossip avec les points milieux - qu'on a appelé le Random Pairwise Midpoints - converge asymptotiquement vers un consensus.
This thesis deals with the problem of consensus on networks. The networks under study consist of identical agents that can communicate with each other and have memory and computational capacity. The network has no central node. Each agent stores a value that, initially, is not known by the other agents. The goal is to achieve consensus, i.e. all agents holding the same value, in a fully distributed way; hence, only neighboring agents can communicate directly. This problem has a long and fruitful history. If all values belong to some vector space, several protocols are known to solve the problem. A well-known solution is the pairwise gossip protocol, which achieves consensus asymptotically: it is an iterative protocol that consists in choosing two adjacent nodes at each iteration and averaging their values. The specificity of this Ph.D. thesis lies in the fact that the data stored by the agents do not necessarily belong to a vector space, but to some metric space. For instance, each agent may store a direction (the metric space is the projective space), a position on a sphere (the metric space is a sphere) or a position on a metric graph (the metric space is the underlying graph). The mentioned pairwise gossip protocols then make no sense, since averaging requires additions and multiplications that are not available in metric spaces: what is the average of two directions, for instance? However, in metric spaces midpoints sometimes make sense, and when they do, they can advantageously replace averages. In this work, we realized that, if one wants midpoints to converge, curvature matters. We focused on the case where the data space belongs to a special class of metric spaces called CAT(k) spaces, and we were able to show that, provided the initial data are "close enough" in a precise sense, the midpoint-based gossip algorithm - which we refer to as Random Pairwise Midpoints - does converge to consensus asymptotically. Our generalization allows us to treat new cases of data spaces such as positive definite matrices, the rotation group and metamorphic systems.
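The midpoint iteration itself fits in a few lines. Below is a minimal sketch of Random Pairwise Midpoints on the unit sphere, one of the curved data spaces mentioned above; the ring network, the clustered initial values and the iteration count are illustrative assumptions, and the convergence guarantee of the thesis requires the initial data to lie in a sufficiently small ball.

```python
# Random Pairwise Midpoints on the unit sphere: at each step a random
# edge wakes up and both endpoints move to their geodesic midpoint.
import numpy as np

rng = np.random.default_rng(1)

def midpoint(u, v):
    # Geodesic midpoint of two non-antipodal unit vectors: (u+v)/|u+v|.
    m = u + v
    return m / np.linalg.norm(m)

edges = [(i, (i + 1) % 6) for i in range(6)]   # ring network of 6 agents
vals = rng.normal([0.0, 0.0, 1.0], 0.1, size=(6, 3))  # clustered near north pole
vals /= np.linalg.norm(vals, axis=1, keepdims=True)

for _ in range(500):
    i, j = edges[rng.integers(len(edges))]     # wake a random edge
    vals[i] = vals[j] = midpoint(vals[i], vals[j])

spread = max(np.linalg.norm(a - b) for a in vals for b in vals)
print("max pairwise distance after gossip:", spread)   # ~0: consensus
```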
APA, Harvard, Vancouver, ISO und andere Zitierweisen
30

Kratz, Jonathan L. „Robust Control of Uncertain Input-Delayed Sample Data Systems through Optimization of a Robustness Bound“. The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1429149093.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
31

Born, Marc, und Olaf Kath. „CoRE - komponentenorientierte Entwicklung offener verteilter Softwaresysteme im Telekommunikationskontext“. Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät I; Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2002. http://dx.doi.org/10.18452/14744.

Der volle Inhalt der Quelle
Annotation:
Die Telekommunikation und die ihr zuliefernde Industrie stellen einen softwareintensiven Bereich dar, der durch einen sehr hohen Anteil von Eigenentwicklungen gekennzeichnet ist. Eine wesentliche Ursache dafür sind spezielle Anforderungen an Telekommunikationssoftwaresysteme, die i.allg. nicht durch Standardsoftwareprodukte sichergestellt werden können. Diese Anforderungen ergeben sich aus den besonderen Eigenschaften solcher Softwaresysteme wie die Verteilung der Komponenten von Softwaresystemen sowie die Verteilung der Entwicklung dieser Komponenten, die Heterogenität der Entwicklungs- und Einsatzumgebungen für diese Komponenten und die Komplexität der entwickelten Softwaresysteme hinsichtlich nichtfunktionaler Charakteristika. Die industrielle Entwicklung von Telekommunikationssoftwaresystemen ist ein schwieriger und bisher nicht zufriedenstellend gelöster Prozeß. Aktuelle Forschungsarbeiten thematisieren Softwareentwicklungsprozesse und -techniken sowie unterstützende Werkzeuge zur Erstellung und Integration wiederverwendbarer Softwarekomponenten ("Componentware"). Das Ziel dieser Dissertation besteht in der Unterstützung der industriellen Entwicklung offener, verteilter Telekommunikationssoftwaresysteme. Dazu wird die Entwicklungstechnik Objektorientierte Modellierung mit dem Einsatz von Komponentenarchitekturen durch die automatische Ableitung von Softwarekomponenten aus Modellen zusammengeführt. Die zentrale Idee ist dabei eine präzise Definition der zur Entwicklung von verteilten Softwaresystemen einsetzbaren Modellierungskonzepte in Form eines Metamodells. Von diesem Metamodell ausgehend werden dann zur Konstruktion und Darstellung objektorientierter Entwurfsmodelle eine graphische und eine textuelle Notation definiert. Da die Notationen die Konzepte des Metamodells visualisieren, haben sie diesem gegenüber einen sekundären Charakter. Für die Transformation von Entwurfsmodellen in ausführbare Applikationen wurde auf der Grundlage von CORBA eine Komponentenplattform realisiert, die zusätzlich zu Interaktionen zwischen verteilten Softwarekomponenten auch Entwicklungs-, Deployment- und Ausführungsaspekte unterstützt. Wiederum ausgehend vom Metamodell wird durch die Anwendung wohldefinierter Ableitungsregeln die automatische Überführung von Entwurfsmodellen in Softwarekomponenten des zu entwickelnden Systems ermöglicht. Die von den Autoren erarbeiteten Konzeptionen und Vorgehensweisen wurden praktisch in eine Werkzeugumgebung umgesetzt, die sich bereits erfolgreich in verschiedenen Softwareentwicklungsprojekten bewährt hat.
The telecommunication industry and its suppliers form a software-intensive domain, and a high percentage of the software is developed by the telecommunication enterprises themselves. A main contributing factor to this situation is the specific requirements of telecommunication software systems, which cannot be fulfilled by standard off-the-shelf products. These requirements result from particular properties of those software systems, e.g. distributed development and execution of their components, heterogeneity of execution and development environments, and complex non-functional characteristics like scalability, reliability, security and manageability. The development of telecommunication software systems is a complex process and is currently not satisfactorily supported. Current research topics in this area are software development processes and development techniques as well as tools that support the creation and integration of reusable software components (component ware). The goal of this thesis work is to support the industrial development and manufacturing of open distributed telecommunication software systems. For that purpose, the development technique of object-oriented modelling is combined with the implementation technique of using component architectures. The available modelling concepts are precisely defined as a metamodel. Based on that metamodel, graphical and textual notations for the presentation of models are developed. To enable a smooth transition from object-oriented models into executable components, a component architecture based on CORBA was also developed as part of the thesis. Besides interaction support for distributed components, this component architecture also covers deployment and execution aspects. Again on the basis of the metamodel, code generation rules are defined which make it possible to automate the transition from models to components. The development techniques described in this thesis have been implemented as a tool chain, which has been successfully used in several software development projects.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
32

Amoah, Raphael. „Formal security analysis of the DNP3-Secure Authentication Protocol“. Thesis, Queensland University of Technology, 2016. https://eprints.qut.edu.au/93798/1/Raphael_Amoah_Thesis.pdf.

Der volle Inhalt der Quelle
Annotation:
This thesis evaluates the security of Supervisory Control and Data Acquisition (SCADA) systems, which are one of the key foundations of many critical infrastructures. Specifically, it examines one of the standardised SCADA protocols, the Distributed Network Protocol Version 3, which provides a security mechanism intended to ensure that messages transmitted between devices are adequately secured from rogue applications. To achieve this, the thesis applies formal methods from theoretical computer science to formally analyse the correctness of the protocol.
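For orientation, the security mechanism in question (DNP3 Secure Authentication) is built around an HMAC challenge-response exchange for critical operations. The sketch below shows only that generic pattern; the framing, key-management procedures and message fields of the real protocol are omitted, and all names are illustrative.

```python
# Generic challenge-response with HMAC: the challenger sends a fresh
# nonce, the responder MACs the nonce plus the pending critical request
# with the shared session key, and the challenger verifies the tag.
import hmac, hashlib, os

session_key = os.urandom(32)                 # pre-established shared key

def respond(key, challenge, critical_request):
    """Responder: authenticate the pending critical request."""
    return hmac.new(key, challenge + critical_request, hashlib.sha256).digest()

def verify(key, challenge, critical_request, tag):
    """Challenger: constant-time check of the returned MAC."""
    expected = hmac.new(key, challenge + critical_request, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

challenge = os.urandom(16)                   # fresh nonce per critical operation
request = b"OPERATE relay 7"
tag = respond(session_key, challenge, request)
assert verify(session_key, challenge, request, tag)
assert not verify(session_key, challenge, b"OPERATE relay 8", tag)  # tampering fails
```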
APA, Harvard, Vancouver, ISO und andere Zitierweisen
33

Roriz, Junior Marcos Paulino. „C3S: uma plataforma de middleware de compartilhamento de conteúdo para espaços inteligentes“. Universidade Federal de Goiás, 2013. http://repositorio.bc.ufg.br/tede/handle/tede/3101.

Der volle Inhalt der Quelle
Annotation:
According to Mark Weiser, ubiquitous computing focuses on seamlessly integrating computing tasks into people's daily lives. Because of current technology limitations, realizations of ubiquitous computing address only a limited set of its aspects, such as mobility and context, which are based on services that integrate the users with the resources present in a delimited ubiquitous environment (such as a smart space). We explored a different approach, in which services are used not to integrate an individual user with the environment, but to integrate the users present in the environment with one another. One way to realize this aspect is content sharing: first-class application data that serves as the integration medium between users. However, due to the environment's complexity and the lack of middleware platforms, applications that follow this approach are repeatedly built from scratch using ad hoc techniques. Aiming to provide an infrastructure for the development of this kind of application, we propose Content Sharing for Smart Spaces (C3S), a middleware that offers a high-level programming model using primitives that are based on a set of content sharing semantics and ubiquitous application concepts. The primitives express a small set of behaviors, such as move, clone, and mirror, which serve as building blocks for developers to implement sharing and content ubiquity features, while the ubiquitous concepts supported by the middleware allow the manipulation of users, groups and ubiquitous applications. We validated our proposal using two case studies that allowed us to explore these features. Our results show that our middleware provides an easier way to develop sharing-based applications compared to related work found in the literature.
De acordo com Mark Weiser, a computação ubíqua se concentra na integração de maneira despercebida e sem rupturas (seamlessly) de tarefas da computação no cotidiano das pessoas. Por causa das atuais limitações tecnológicas, a realização dessa integração segue um ou mais aspectos da computação ubíqua, por exemplo, de mobilidade ou de contexto, que são baseados em serviços que integram o usuário em um ambiente ubíquo delimitado (como espaços inteligentes). Neste trabalho exploramos uma abordagem diferente, em que os serviços não são utilizados para integrar um usuário individual ao ambiente, mas são utilizados para integrar os usuários presentes no ambiente uns com os outros. Uma maneira de realizar esse aspecto é usando o compartilhamento de conteúdo, dados de primeira classe da aplicação que servem como meio de integrar os usuários. No entanto, devido à complexidade do ambiente de computação ubíqua e à falta de plataformas de middleware, aplicações que seguem esta abordagem são repetidamente construídas a partir “do zero”, usando técnicas não convencionais. Com o objetivo de fornecer uma infraestrutura para o desenvolvimento deste tipo de aplicação, propomos o Content Sharing for Smart Spaces (C3S), um middleware que oferece um modelo de programação de alto nível, usando primitivas baseadas em um conjunto de semânticas de compartilhamento de conteúdo e em conceitos de aplicações ubíquas. As primitivas expressam um conjunto de comportamentos, tais como mover, clonar, e espelhar, que servem como blocos de construção para os desenvolvedores implementarem funcionalidades de compartilhamento, enquanto que os conceitos de ubiquidade permitem a manipulação de usuários, grupos e aplicações ubíquas. A proposta foi validada por meio de dois estudos de caso que exploram esses recursos. Os resultados permitiram concluir que o middleware fornece uma maneira mais fácil de desenvolver aplicativos baseados em compartilhamento em comparação com trabalhos semelhantes encontrados na literatura.
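A hypothetical sketch of what the three primitive behaviors amount to is given below. This is not the actual C3S API; the names and types are invented to illustrate the move/clone/mirror semantics described in the abstract.

```python
# Invented sketch of content-sharing primitives between devices:
# move transfers content, clone copies it, mirror shares it live.
class Device:
    def __init__(self, name):
        self.name, self.contents = name, []

def move(content, src, dst):
    src.contents.remove(content)          # content leaves the source device
    dst.contents.append(content)

def clone(content, src, dst):
    dst.contents.append(dict(content))    # independent copy on the target

def mirror(content, src, dst):
    dst.contents.append(content)          # shared reference: edits show on both

tablet, wall = Device("tablet"), Device("wall display")
slide = {"title": "Q3 results"}
tablet.contents.append(slide)

mirror(slide, tablet, wall)
slide["title"] = "Q3 results (final)"     # both devices now show the update
print(wall.contents[0]["title"])          # Q3 results (final)
```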
APA, Harvard, Vancouver, ISO und andere Zitierweisen
34

Mehiaoui, Asma. „Techniques d'analyse et d'optimisation pour la synthèse architecturale de systèmes temps réel embarqués distribués : problèmes de placement, de partitionnement et d'ordonnancement“. Thesis, Brest, 2014. http://www.theses.fr/2014BRES0011/document.

Der volle Inhalt der Quelle
Annotation:
Dans le cadre industriel et académique, les méthodologies de développement logiciel exploitent de plus en plus le concept de “modèle” afin d'appréhender la complexité des systèmes temps réel critiques. En particulier, celles-ci définissent une étape dans laquelle un modèle fonctionnel, conçu comme un graphe de blocs fonctionnels communiquant via des échanges de signaux de données, est déployé sur un modèle de plateforme d'exécution matérielle et un modèle de plateforme d'exécution logicielle composé de tâches et de messages. Cette étape, appelée étape de déploiement, permet d'établir une architecture opérationnelle du système nécessitant une validation des propriétés temporelles du système. Dans le contexte des systèmes temps réel dirigés par les évènements, la vérification des propriétés temporelles est réalisée à l'aide de l'analyse d'ordonnançabilité basée sur l'analyse des temps de réponse. Chaque choix de déploiement effectué a un impact essentiel sur la validité et la qualité du système. Néanmoins, les méthodologies existantes n'offrent pas de support permettant de guider le concepteur d'applications durant l'exploration de l'espace des architectures possibles. L'objectif de ces travaux de thèse consiste à mettre en place des techniques d'analyse et de synthèse automatiques permettant de guider le concepteur vers une architecture opérationnelle valide et optimisée par rapport aux performances du système. Notre proposition est dédiée à l'exploration de l'espace des architectures en tenant compte à la fois des quatre degrés de liberté déterminés durant la phase de déploiement, à savoir (i) le placement des éléments fonctionnels sur les éléments de calcul et de communication de la plateforme d'exécution, (ii) le partitionnement des éléments fonctionnels en tâches temps réel et des signaux de données en messages, (iii) l'affectation de priorités d'exécution aux tâches et aux messages du système et (iv) l'attribution du mécanisme de protection des données partagées pour les systèmes temps réel périodiques. Nous nous intéressons principalement à la satisfaction des contraintes temporelles et celles liées aux capacités des ressources de la plateforme cible. De plus, nous considérons l'optimisation des latences de bout-en-bout et la consommation mémoire. Les approches d'exploration architecturale présentées dans cette thèse sont basées sur la technique d'optimisation PLNE (programmation linéaire en nombres entiers) et concernent à la fois les applications activées périodiquement et celles dont l'activation est pilotée par les données. Contrairement à de nombreuses approches antérieures fournissant une solution partielle au problème de déploiement, les méthodes proposées considèrent l'ensemble du problème de déploiement. Les approches proposées dans cette thèse sont évaluées à l'aide d'applications génériques et industrielles.
Modern development methodologies from industry and academia increasingly exploit the "model" concept to address the complexity of critical real-time systems. These methodologies define a key stage in which the functional model, designed as a network of function blocks communicating through exchanged data signals, is deployed onto a hardware execution platform model and implemented in a software model consisting of a set of tasks and messages. This so-called deployment stage establishes an operational architecture of the system and therefore requires evaluation and validation of the system's temporal properties. In the context of event-driven real-time systems, the verification of temporal properties is performed using schedulability analysis based on response time analysis. Each deployment choice has an essential impact on the validity and quality of the system. However, existing methodologies do not provide support to guide the application designer in the exploration of the space of operational architectures. The objective of this thesis is to develop techniques for the analysis and automatic synthesis of a valid operational architecture optimized with respect to system performance. Our proposition is dedicated to the exploration of the architecture space considering at the same time the four degrees of freedom determined during the deployment phase: (i) the placement of functional elements on the computing and communication resources of the execution platform, (ii) the partitioning of functional elements into real-time tasks and of data signals into messages, (iii) the assignment of priorities to system tasks and messages, and (iv) the assignment of the shared data protection mechanism for periodic real-time systems. We are mainly interested in meeting the temporal constraints and the memory capacity of the target platform. In addition, we focus on the optimization of end-to-end latencies and memory consumption. The design space exploration approaches presented in this thesis are based on the MILP (Mixed Integer Linear Programming) optimization technique and address both time-driven and data-driven applications. Unlike many earlier approaches providing a partial solution to the deployment problem, our methods consider the whole deployment problem. The proposed approaches are evaluated using both synthetic and industrial applications.
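As a flavor of the optimization machinery, the toy model below encodes just the placement degree of freedom (i) as an integer linear program using the PuLP solver: binary variables map function blocks to processors under memory capacity, minimizing a per-mapping cost. All numbers are invented, and the thesis's full formulation additionally covers partitioning, priority assignment and end-to-end latency.

```python
# Toy ILP placement: x[f][p] = 1 iff function block f runs on processor p.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

funcs, procs = ["f1", "f2", "f3"], ["p1", "p2"]
mem = {"f1": 4, "f2": 3, "f3": 5}            # memory demand of each block
cap = {"p1": 8, "p2": 6}                     # processor memory capacity
wcet = {("f1", "p1"): 2, ("f1", "p2"): 3, ("f2", "p1"): 4,
        ("f2", "p2"): 1, ("f3", "p1"): 3, ("f3", "p2"): 4}

prob = LpProblem("placement", LpMinimize)
x = LpVariable.dicts("x", (funcs, procs), cat=LpBinary)

prob += lpSum(wcet[f, p] * x[f][p] for f in funcs for p in procs)
for f in funcs:                              # each block placed exactly once
    prob += lpSum(x[f][p] for p in procs) == 1
for p in procs:                              # respect memory capacity
    prob += lpSum(mem[f] * x[f][p] for f in funcs) <= cap[p]

prob.solve()
print({f: next(p for p in procs if x[f][p].value() == 1) for f in funcs})
```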
APA, Harvard, Vancouver, ISO und andere Zitierweisen
35

Ben, Salem Aymen. „The Application of Multiuser Detection to Spectrally Efficient MIMO or Virtual MIMO SC-FDMA Uplinks in LTE Systems“. Thèse, Université d'Ottawa / University of Ottawa, 2013. http://hdl.handle.net/10393/30351.

Der volle Inhalt der Quelle
Annotation:
Single Carrier Frequency Division Multiple Access (SC-FDMA) is a multiple access transmission scheme that has been adopted in the 4th generation 3GPP Long Term Evolution (LTE) of cellular systems. In fact, its relatively low peak-to-average power ratio (PAPR) makes it ideal for the uplink transmission, where transmit power efficiency is of paramount importance. Multiple access among users is made possible by assigning different users to different sets of non-overlapping subcarriers. With the current LTE specifications, if an SC-FDMA system is operating at its full capacity and a new user requests channel access, the system redistributes the subcarriers in such a way that it can accommodate all of the users. Having fewer subcarriers for transmission, every user has to increase its modulation order (for example from QPSK to 16QAM) in order to keep the same transmission rate. However, increasing the modulation order is not always possible in practice and may introduce considerable complexity to the system. The technique presented in this thesis describes a new way of adding more users to an SC-FDMA system by assigning the same sets of subcarriers to different users. The main advantage of this technique is that it allows the system to accommodate more users than conventional SC-FDMA; this corresponds to increasing the spectral efficiency without requiring a higher modulation order or using more bandwidth. During this work, special attention was paid to the cases where two or three source signals are transmitted on the same set of subcarriers, which leads respectively to doubling and tripling the spectral efficiency. Simulation results show that by using the proposed technique, it is possible to add more users to any SC-FDMA system without increasing the bandwidth or the modulation order while keeping the same performance in terms of bit error rate (BER) as conventional SC-FDMA. This is realized by slightly increasing the energy per bit to noise power spectral density ratio (Eb/N0) at the transmitters.
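The DFT-spread structure that gives SC-FDMA its low PAPR, and on which the subcarrier assignment described above operates, can be sketched in a few lines of NumPy; the block sizes and the localized subcarrier mapping below are illustrative assumptions.

```python
# SC-FDMA (DFT-spread OFDM) transmit chain for one user: M-point DFT
# precoding, mapping onto M of N subcarriers, then an N-point IDFT.
import numpy as np

M, N = 4, 16                                  # user symbols per block, total subcarriers
bits = np.random.default_rng(2).integers(0, 2, (2, M))
symbols = ((1 - 2 * bits[0]) + 1j * (1 - 2 * bits[1])) / np.sqrt(2)   # QPSK

spread = np.fft.fft(symbols) / np.sqrt(M)     # M-point DFT precoding
grid = np.zeros(N, dtype=complex)
grid[4:4 + M] = spread                        # localized mapping to subcarriers 4..7
tx = np.fft.ifft(grid) * np.sqrt(N)           # N-point IDFT -> time-domain signal

# Receiver over an ideal channel: undo each step and recover the symbols.
rx_grid = np.fft.fft(tx) / np.sqrt(N)
rx = np.fft.ifft(rx_grid[4:4 + M]) * np.sqrt(M)
print(np.allclose(rx, symbols))               # True
```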
APA, Harvard, Vancouver, ISO und andere Zitierweisen
36

廖智瑋. „Partially Distributed Space-Time Coding System in Flat Fading Channels“. Thesis, 2016. http://ndltd.ncl.edu.tw/handle/95746082357936820576.

Der volle Inhalt der Quelle
Annotation:
Master's thesis
Feng Chia University
Department of Communications Engineering
Academic year 104 (2015-16)
In this thesis, we propose new coding rules for cooperative communication systems, comprising 2x1, 3x1, 4x1 and even multiple-relay systems, using a virtual relay to simplify and streamline the signal transmission process. The coding rule is named Alamouti Partially Distributed Space-Time Coding, and we study the Alamouti Partially Distributed Space-Time Coding system in flat fading channels. The symbol error rate performance of the Alamouti Partially Distributed Space-Time Coding system on flat fading channels using the Amplify-and-Forward scheme is simulated and compared with Alamouti Full Distributed Space-Time Coding. We show through simulation that the performance of Alamouti Partially Distributed Space-Time Coding is better than that of Alamouti Full Distributed Space-Time Coding.
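For reference, the Alamouti 2x1 building block that both the fully and partially distributed schemes rely on looks as follows in a noiseless sketch; the relay amplify-and-forward stages of the proposed scheme are omitted, and the channel values are invented.

```python
# Alamouti 2x1: two symbols over two slots from two (possibly relay)
# antennas; linear combining at the receiver decouples both symbols.
import numpy as np

rng = np.random.default_rng(3)
s1, s2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)
h1, h2 = (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)

# Slot 1 transmits [s1, s2]; slot 2 transmits [-s2*, s1*].
r1 = h1 * s1 + h2 * s2
r2 = -h1 * np.conj(s2) + h2 * np.conj(s1)

# Combining: each estimate sees only its own symbol, scaled by |h1|^2+|h2|^2.
g = abs(h1) ** 2 + abs(h2) ** 2
s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / g
s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / g
print(np.allclose([s1_hat, s2_hat], [s1, s2]))   # True in the noiseless case
```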
APA, Harvard, Vancouver, ISO und andere Zitierweisen
37

Hwu, Jiann-Ming, und 胡建銘. „An Extended Stub Library for Distributed Common Variable Space in the PVM System“. Thesis, 1994. http://ndltd.ncl.edu.tw/handle/00125454640822606120.

Der volle Inhalt der Quelle
Annotation:
Master's thesis
National Central University
Institute of Computer Science and Electronic Engineering
Academic year 82 (1993-94)
In recent years, research in the realm of distributed shared memory (DSM) has been very active, and the benefits of DSM over message passing mechanisms are widely acknowledged. Instead of working on DSM directly, we apply a related mechanism, the distributed shared variable (DSV), because the development of DSM usually requires extra modifications to the operating system kernel. The major differences between the DSV and DSM paradigms can be summarized as follows: the DSV paradigm only supports data sharing in a distributed computing environment, whereas the DSM paradigm can support both data and code sharing. Since the source code of widely used commercial operating systems is not easily available, pursuing a method that requires no kernel modification and is still capable of achieving data sharing in a heterogeneous computing environment is a reasonable choice. DSV is not only an efficient programming interface, but also a friendly high-level application programming interface; as software grows in scale, the DSV paradigm should help contain the complexity. PVM is a user-level software system which integrates many existing computing resources on the network into one large virtual entity; however, it lacks an efficient programming paradigm such as shared variables or even shared memory. Taking the above into account, we chose the PVM platform to develop a prototype DSV system.
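A rough sketch of the stub-library idea, reads and writes on a named shared variable become messages to an owning process so that no kernel change is needed, can be given with Python's multiprocessing standing in for PVM's message passing; the API below is invented for illustration.

```python
# User-space shared variable: a manager process owns the variable store,
# and workers access it through proxies (messages under the hood).
from multiprocessing import Process, Manager

def worker(store, name):
    # Read-modify-write on a named shared variable; as with DSV, the
    # library gives sharing, not atomicity, so races remain possible.
    store[name] = store.get(name, 0) + 1

if __name__ == "__main__":
    with Manager() as mgr:
        store = mgr.dict()                   # the "distributed" variable space
        ps = [Process(target=worker, args=(store, "counter")) for _ in range(4)]
        for p in ps: p.start()
        for p in ps: p.join()
        print(store["counter"])              # up to 4; lost updates can occur
```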
APA, Harvard, Vancouver, ISO und andere Zitierweisen
38

Hol, Wen-Chen, und 何文丞. „On the Study of Optimal File Assignment in Distributed System Under Memory Space Constraint“. Thesis, 1993. http://ndltd.ncl.edu.tw/handle/78119250464957123904.

Der volle Inhalt der Quelle
Annotation:
Master's thesis
National Chiao Tung University
Institute of Computer Science and Information Engineering
Academic year 81 (1992-93)
Distributed computing systems (DCS) have become a major trend in today's computer system design because of their high speed and high performance advantages. Reliability is an important performance parameter in DCS design. In this thesis, we develop a heuristic algorithm (HROFA) for the reliability-oriented file assignment problem, which uses a careful reduction method to shrink the problem space. Numerical results show that the HROFA algorithm obtains the exact solution in most cases while improving the computation time significantly. When it fails to give an exact solution, the deviation from the exact solution is very small.
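The objective such heuristics optimize can be illustrated with a tiny exhaustive search (the point of HROFA is precisely to avoid this enumeration): a program runs when its node can reach a copy of every file it needs over working links, and we pick the memory-feasible assignment maximizing that probability. The topology, link reliabilities and memory figures below are made-up assumptions.

```python
# Brute-force reliability-oriented file assignment on a 3-node network.
from itertools import product

nodes = [0, 1, 2]
link_rel = {(0, 1): 0.9, (1, 2): 0.8, (0, 2): 0.7}   # P(link is up)
files = ["F1", "F2"]
needs = ["F1", "F2"]                                  # program at node 0 needs both

def reachable(up):
    # Nodes reachable from node 0 given the up/down state of each link.
    seen, stack = {0}, [0]
    while stack:
        u = stack.pop()
        for (a, b), ok in up.items():
            if ok and u in (a, b):
                v = b if u == a else a
                if v not in seen:
                    seen.add(v); stack.append(v)
    return seen

def reliability(place):
    # Sum the probability of every link-state in which the program runs.
    total = 0.0
    for states in product([0, 1], repeat=len(link_rel)):
        up = dict(zip(link_rel, states))
        p = 1.0
        for e, s in up.items():
            p *= link_rel[e] if s else 1 - link_rel[e]
        if all(place[f] in reachable(up) for f in needs):
            total += p
    return total

best = max((dict(zip(files, a)) for a in product(nodes, repeat=len(files))
            if len(set(a)) == len(a)),               # memory: one copy per node
           key=reliability)
print(best, round(reliability(best), 4))             # e.g. {'F1': 0, 'F2': 1} 0.956
```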
APA, Harvard, Vancouver, ISO und andere Zitierweisen
39

Wen, Wan-Chun, und 溫婉均. „A High Capacity Cell Architecture Based on Distributed Antenna System And Cooperative Space-Time Coding“. Thesis, 2011. http://ndltd.ncl.edu.tw/handle/54104116699755088491.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
40

Yu, Kai. „Reduced state-space Markov decision process and the dynamic recovery and reconfiguration of a distributed real-time system“. 1996. https://scholarworks.umass.edu/dissertations/AAI9721499.

Der volle Inhalt der Quelle
Annotation:
The performance of a dynamic real-time system can be improved through dynamic system reconfiguration and recovery selection under the control of proper policies. Different actions (recovery or reconfiguration) have different short-term efficiencies (or reliability) and long-term effects on the system. A selected policy should take into account both of these factors to achieve better overall system behavior. A real-time system can fail because a single critical task misses its deadline; therefore, the selected control policies should be able to take care of every critical task during the system mission. This means that a decision is made not only on average system behavior values, but also on instantaneous operational information such as workload. With this consideration, the analytic state space of the Markov Decision Process model would be prohibitively large for any practical analysis. In this work, a Reduced State Space Markov Decision Process model is proposed to model the real-time system behavior. This model takes into account not only the system's internal hardware state (such as configuration and fault pattern), external parameters (such as task arrival rate), remaining mission time, and action overheads, but also the current instantaneous workload. Due to the use of a state space aggregation technique, our approach leads to a very efficient algorithm applicable to most real-time systems. With the help of our model, realistic optimal control policies for dynamic reconfiguration and recovery selection in a real-time system can be generated. It is illustrated, through numerical examples, that the dynamic recovery selection and reconfiguration approaches can significantly enhance the real-time system's performance. In addition to the theoretical work, a real-time system emulator has been developed; it provides a means for researchers to test and debug their theoretical results.
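The flavor of the decision model can be shown with a deliberately tiny example: value iteration on a two-state MDP where "recover" is cheap but unreliable and "reconfigure" is costly but dependable. All numbers are invented, and the thesis's reduced state space additionally folds in workload and remaining mission time.

```python
# Value iteration on a toy recovery-vs-reconfiguration MDP.
import numpy as np

states, actions = ["healthy", "faulty"], ["recover", "reconfigure"]
# P[a][s][s']: transition probabilities; R[a][s]: immediate reward.
P = {"recover":     np.array([[1.0, 0.0], [0.6, 0.4]]),
     "reconfigure": np.array([[1.0, 0.0], [0.95, 0.05]])}
R = {"recover":     np.array([1.0, -0.5]),
     "reconfigure": np.array([1.0, -2.0])}

gamma, V = 0.95, np.zeros(2)
for _ in range(500):                       # iterate to the fixed point
    V = np.max([R[a] + gamma * P[a] @ V for a in actions], axis=0)

policy = [actions[int(np.argmax([R[a][s] + gamma * P[a][s] @ V
                                 for a in actions]))] for s in range(2)]
print(dict(zip(states, policy)), V)        # greedy action per state
```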
APA, Harvard, Vancouver, ISO und andere Zitierweisen
41

Cheng, Tz-shin, und 程子勳. „A Simple Heuristic Method to maximize the System Reliability of the File Assignment problem in the distributed computing system under Memory Space Constraints“. Thesis, 1995. http://ndltd.ncl.edu.tw/handle/31234908069609808969.

Der volle Inhalt der Quelle
Annotation:
Master's thesis
National Chiao Tung University
Institute of Computer Science and Information Engineering
Academic year 83 (1994-95)
Distributed computing systems (DCS) have become a major trend in computer system design today because of their high speed and high reliability advantages. Reliability is an important performance parameter in DCS design. Typically, redundant copies of software can be added to a system to increase the system's reliability. The distribution of program and data files also affects the distributed program reliability (DPR) and distributed system reliability (DSR). The reliability-oriented file assignment problem is to find a file distribution such that program reliability or system reliability is maximal. In this thesis, we develop a simple heuristic file assignment algorithm which uses several simple heuristic assignment rules to achieve reliability-oriented file assignment. The proposed algorithm obtains the optimal solutions in most cases and reduces computation time significantly. Examples are given to illustrate the applicability and advantages of the proposed algorithm, and the time complexity is analyzed.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
42

Cheng, Zi-Xun, und 程子勳. „A simple heuristic method to maximize the system reliability of the file assignment problem in the distributed computing system under memory space constraints“. Thesis, 1995. http://ndltd.ncl.edu.tw/handle/99076355775167811615.

Der volle Inhalt der Quelle
Annotation:
Master's thesis
National Chiao Tung University
Institute of Computer Science and Information Engineering
Academic year 83 (1994-95)
Distributed computing systems (DCS) have become a major trend in computer system design today because of their high speed and high reliability advantages. Reliability is an important performance parameter in DCS design. Typically, redundant copies of software can be added to a system to increase the system's reliability. The distribution of program and data files also affects the distributed program reliability (DPR) and distributed system reliability (DSR). The reliability-oriented file assignment problem is to find a file distribution such that program reliability or system reliability is maximal. In this thesis, we develop a simple heuristic file assignment algorithm which uses several simple heuristic assignment rules to achieve reliability-oriented file assignment. The proposed algorithm obtains the optimal solutions in most cases and reduces computation time significantly. Examples are given to illustrate the applicability and advantages of the proposed algorithm, and the time complexity is analyzed.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
43

Maybodi, Farnam Khalili. „A Data-Flow Threads Co-Processor for MPSoC FPGA Clusters“. Doctoral thesis, 2021. http://hdl.handle.net/2158/1237446.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
44

Kumar, Sappati Vinodh. „Distributed Robust Estimation of Space-Time Varying Parameter over Distributed Network“. Thesis, 2016. http://ethesis.nitrkl.ac.in/9200/1/2016_MT_SVKumar.pdf.

Der volle Inhalt der Quelle
Annotation:
Distributed wireless sensor networks estimate some parameter of interest associated with the environment by processing spatial-temporal data. Adaptive algorithms are applied to distributed networks to endow the network with adaptation capabilities. The distributed network consists of many small sensors deployed randomly in a geographic area, which are adaptive and share their local information. However, all these distributed techniques depend on the least mean square cost function, which is sensitive to outliers, such as impulse noise and interference, present in the desired and/or input data. There are also situations where the parameters to be estimated vary over both space and time across the network. A set of basis functions, i.e. Chebyshev polynomials, is used to describe the space-varying nature of the parameters, and a DLMS strategy is developed to recover these parameters. The parameters of interest are estimated for both one-dimensional and two-dimensional networks. Stability and convergence of the developed algorithm have been analyzed, and expressions are derived to predict its behavior. Network stochastic matrices are used to combine the information exchanged between nodes. The resulting algorithm is distributed, cooperative and able to respond to real-time changes in the environment. The developed method uses the diffusion LMS algorithm, which delivers better performance than the incremental strategy in terms of communication and measurement sharing, but fails to estimate the parameter of interest in the presence of outliers; there is thus a need for an algorithm suitable for wireless sensor networks in terms of communication and computational complexity. This thesis deals with the development of space-time varying parameter estimation using diffusion LMS and the Huber error loss function, in order to obtain estimates that handle outliers in: (i) the desired data; (ii) the input data; (iii) both the input and desired data; and (iv) the desired data in the case of highly coloured input data. Comprehensive simulation studies show that the proposed methods are robust against 50% outliers in the data and deliver better convergence and low mean square deviation.
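A minimal sketch of the robust diffusion idea, each node performing a local LMS step with a Huber-clipped error and then averaging estimates over its neighborhood, is given below; the ring topology, step size, threshold and outlier model are illustrative assumptions, and the Chebyshev space-varying parameterization of the thesis is not reproduced.

```python
# Adapt-then-combine diffusion LMS with a Huber-type influence function.
import numpy as np

rng = np.random.default_rng(4)
w_true = np.array([1.0, -0.5])
N, mu, delta = 5, 0.05, 1.0                    # nodes, step size, Huber threshold
neighbors = {k: [k, (k - 1) % N, (k + 1) % N] for k in range(N)}  # ring
w = np.zeros((N, 2))

def psi(e):
    # Huber influence function: linear for small errors, clipped for big ones.
    return np.clip(e, -delta, delta)

for _ in range(2000):
    phi = np.empty_like(w)
    for k in range(N):                         # adapt: local robust LMS step
        u = rng.standard_normal(2)
        d = u @ w_true + 0.01 * rng.standard_normal()
        if rng.random() < 0.1:                 # occasional impulsive outlier
            d += 50 * rng.standard_normal()
        phi[k] = w[k] + mu * psi(d - u @ w[k]) * u
    # Combine: uniform averaging over each node's neighborhood.
    w = np.array([phi[neighbors[k]].mean(axis=0) for k in range(N)])

print(w.mean(axis=0))                          # close to w_true despite outliers
```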
APA, Harvard, Vancouver, ISO und andere Zitierweisen
45

YI, ZHIHANG. „Distributed Space-Time Block Codes in Wireless Cooperative Networks“. Thesis, 2009. http://hdl.handle.net/1974/1978.

Der volle Inhalt der Quelle
Annotation:
In cooperative networks, relays cooperate and form a distributed multi-antenna system to provide spatial diversity. In order to achieve high bandwidth efficiency, distributed space-time block codes (DSTBCs) are proposed and have been studied extensively. Among all DSTBCs, this thesis focuses on the codes which are single-symbol maximum likelihood (ML) decodable and can achieve the full diversity order. This thesis presents four works on single-symbol ML decodable DSTBCs. The first work proposes the row-monomial distributed orthogonal space-time block codes (DOSTBCs). We find an upper bound of the data-rate of the row-monomial DOSTBC and construct the codes achieving this upper bound. In the second work, we first study the general DOSTBCs and derive an upper bound of the data-rate of the DOSTBC. Secondly, we propose the row-monomial DOSTBCs with channel phase information (DOSTBCs-CPI) and derive an upper bound of the data-rate of those codes. Furthermore, we find the actual row-monomial DOSTBCs-CPI which achieve the upper bound of the data-rate. In the third and fourth works of this thesis, we focus on error performance analysis of single-symbol ML decodable DSTBCs. Specifically, we study the distributed Alamouti's code in dissimilar cooperative networks. In the third work, we assume that the relays are blind relays and we derive two very accurate approximate bit error rate (BER) expressions of the distributed Alamouti's code. In the fourth work, we assume that the relays are CSI-assisted relays. When those CSI-assisted relays adopt the amplifying coefficients that were proposed in [33] and widely used in many previous publications, upper and lower bounds of the BER of the distributed Alamouti's code are derived. Very surprisingly, the lower bound indicates that the code cannot achieve the full diversity order when the CSI-assisted relays adopt the amplifying coefficients proposed in [33]. Therefore, we propose a new threshold-based amplifying coefficient, which makes the code achieve the full diversity order of two. Moreover, three optimum schemes and one suboptimum scheme are developed to calculate the threshold used in this new amplifying coefficient.
Thesis (Ph.D., Electrical & Computer Engineering) -- Queen's University, 2009.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
46

„Distributed space-time block coding in wireless cooperative communications“. 2005. http://library.cuhk.edu.hk/record=b5892631.

Der volle Inhalt der Quelle
Annotation:
Cheng Ho Ting.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2005.
Includes bibliographical references (leaves 90-93).
Abstracts in English and Chinese.
Abstract
Acknowledgement
1 Introduction
1.1 Overview of Wireless Cooperative Communications
1.2 Motivation
1.3 Distributed Space-Time Block Coding
1.4 Imperfect Channel Estimation
1.5 Time-Varying Channels
1.6 Outline of the thesis
2 Background Study
3 Distributed Space-Time Block Coding
3.1 Introduction
3.2 System Model
3.3 BER Analysis by Characteristic Equations
3.4 BER Analysis by Error Terms
3.4.1 Non-fading R→D link
3.4.2 Fading R→D link
3.5 Performance
3.5.1 Accuracy of Analytical Expressions
3.5.2 Observation of Second-order Diversity
3.6 Summary
4 Distributed Space-Time Block Coding with Imperfect Channel Estimation
4.1 Introduction
4.2 System Model
4.3 BER Analysis
4.3.1 Non-fading R→D link
4.3.2 Fading R→D link
4.4 Numerical Results
4.5 Summary
5 Distributed Space-Time Block Coding with Time-Varying Channels
5.1 Introduction
5.2 System Model
5.3 Pilot Symbol Assisted Modulation (PSAM) for DSTBC
5.4 Reception Methods
5.4.1 Maximum-Likelihood Detection (ML) in [29]
5.4.2 Cooperative Maximum-Likelihood Detection (CML)
5.4.3 Alamouti's Receiver (AR)
5.4.4 Zero-forcing Linear Detection (ZF)
5.4.5 Decision-feedback Detection (DF)
5.5 BER Analysis for Time-varying Channels
5.5.1 Quasi-Static Channels (ρ = 1)
5.5.2 ZF: Uncorrelated Channel (ρ = 0)
5.5.3 ZF: General Channel
5.5.4 DF: General Channel
5.6 Numerical Results
5.7 Summary
6 Conclusion and Future Work
6.1 Conclusion
6.2 Future Work
6.2.1 Design of Code Matrix
6.2.2 Adaptive Protocols
A Derivation of (3.23)
B Derivation of (3.30) and (3.32)
C Derivation of (4.9) and (4.13)
D Derivation of (5.68)
Bibliography
APA, Harvard, Vancouver, ISO und andere Zitierweisen
47

Chen, Ming-Hsian, und 陳銘賢. „Estimation of Space-Distributed Parameters of the Shaft in Rotor Systems“. Thesis, 2003. http://ndltd.ncl.edu.tw/handle/56852336145125937526.

Der volle Inhalt der Quelle
Annotation:
Master's thesis
Chung Hua University
Institute of Mechanical and Aerospace Engineering
Academic year 91 (2002-03)
This research uses a general transfer matrix method (GTMM) and the Timoshenko beam model for estimating the spatial unbalance parameters in rotor systems. First, we derive the equations of motion of the flexible shaft, disks, and bearings. By assembling the transfer matrices of the flexible shaft, disks, and bearings, we obtain the global transfer matrix of the rotor system. The state variables of the transfer matrix include lateral deflections, the deflection angles caused by shears and bending moments respectively, shears, bending moments, torque, and torsional angle. The spatial unbalance parameters of the flexible shaft and the disk eccentricities can be estimated at two closely spaced rotating speeds. The theoretical development of the proposed method is presented together with simulation results and discussion.
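The assembly step, forming the global transfer matrix as the ordered product of element matrices, can be illustrated abstractly; the two-entry state and kinematics-only matrices below are a drastic simplification invented for brevity, whereas the thesis's state vector also carries shears, moments, torque and torsional angle.

```python
# Chaining element transfer matrices: state at one end of the rotor is
# mapped to the other end by the ordered product of element matrices.
import numpy as np

def shaft_field(L):
    # Propagates [deflection, slope] across a segment of length L
    # (kinematic part only: y -> y + L * theta).
    return np.array([[1.0, L], [0.0, 1.0]])

def disk_point():
    # Placeholder point matrix for a disk; a real one adds inertia,
    # gyroscopic and unbalance terms at the running speed.
    return np.eye(2)

elements = [shaft_field(0.3), disk_point(), shaft_field(0.5)]
T = np.eye(2)
for E in elements:                 # global matrix = ordered element product
    T = E @ T

print(T @ np.array([0.0, 0.01]))   # far-end state from a boundary state
```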
APA, Harvard, Vancouver, ISO und andere Zitierweisen
48

Liou, Jiun-Huei, und 劉俊輝. „Study of Distributed Space-Frequency Codes for Cooperative Communication Systems with Multiple Carrier Frequency Offsets“. Thesis, 2011. http://ndltd.ncl.edu.tw/handle/98926409743191155675.

Der volle Inhalt der Quelle
Annotation:
Master's thesis
National Chi Nan University
Graduate Institute of Communication Engineering
Academic year 99 (2010-11)
In this thesis, we study cooperative wireless communication systems combined with Orthogonal Frequency Division Multiplexing (OFDM), which reduces a frequency-selective fading channel to a set of flat fading channels. We consider three space-frequency codes (SFC) and use ZF and MMSE detection at the receiver in the frequency domain to achieve full cooperative diversity. In flat fading channels, we compare the performance of the three space-frequency codes when multiple relay nodes are present.
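The two frequency-domain detectors compared here differ only in one regularization term; a quick per-subcarrier sketch with invented channel and noise values:

```python
# ZF inverts the channel outright; MMSE regularizes by the noise level.
import numpy as np

rng = np.random.default_rng(5)
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
x = np.array([1 + 1j, -1 + 1j]) / np.sqrt(2)      # transmitted symbols
sigma2 = 0.01                                     # noise variance
n = np.sqrt(sigma2 / 2) * (rng.standard_normal(2) + 1j * rng.standard_normal(2))
y = H @ x + n

Hh = H.conj().T
x_zf = np.linalg.solve(H, y)                                   # zero-forcing
x_mmse = np.linalg.solve(Hh @ H + sigma2 * np.eye(2), Hh @ y)  # MMSE
print("ZF:  ", np.round(x_zf, 2))
print("MMSE:", np.round(x_mmse, 2))
```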
APA, Harvard, Vancouver, ISO und andere Zitierweisen
49

Mishra, Ashirbad. „Efficient betweenness Centrality Computations on Hybrid CPU-GPU Systems“. Thesis, 2016. http://etd.iisc.ac.in/handle/2005/2718.

Der volle Inhalt der Quelle
Annotation:
Analysis of networks is quite interesting, because they can be interpreted for several purposes. Various features require different metrics to measure and interpret them. Measuring the relative importance of each vertex in a network is one of the most fundamental building blocks in network analysis. Betweenness Centrality (BC) is one such metric that plays a key role in many real world applications. BC is an important graph analytics application for large-scale graphs. However, it is one of the most computationally intensive kernels to execute, and measuring centrality in billion-scale graphs is quite challenging. While there are several existing efforts towards parallelizing BC algorithms on multi-core CPUs and many-core GPUs, in this work we propose a novel fine-grained CPU-GPU hybrid algorithm that partitions a graph into two partitions, one each for the CPU and the GPU. Our method performs BC computations for the graph on both the CPU and GPU resources simultaneously, resulting in a very small number of CPU-GPU synchronizations and hence less time spent on communication. The BC algorithm consists of two phases, the forward phase and the backward phase. In the forward phase, we initially find the paths that are needed by either partition, after which each partition is executed on its processor in an asynchronous manner. We initially compute border matrices for each partition, which store the relative distances between each pair of border vertices in a partition. The matrices are used in the forward phase calculations for all the sources. In this way, our hybrid BC algorithm leverages the multi-source property inherent in the BC problem. We present a proof of correctness and bounds on the number of iterations for each source. We also perform a novel hybrid and asynchronous backward phase, in which each partition communicates with the other only when there is a path that crosses the partition, hence performing minimal CPU-GPU synchronizations. We use a variety of implementations for our work, like node-based and edge-based parallelism, which includes data-driven and topology-based techniques. In the implementation we show that our method also works with a variable partitioning technique, which partitions the graph into unequal parts accounting for the processing power of each processor; due to this technique, our implementations achieve an almost equal percentage of utilization on both processors. For large-scale graphs the border matrix also becomes large, so we present various techniques to accommodate it; these techniques use properties inherent in the shortest path problem for reduction. We discuss the drawbacks of performing shortest path computations at large scale and provide solutions to them. Evaluations using a large number of graphs with different characteristics show that our hybrid approach without variable partitioning and border matrix reduction gives a 67% improvement in performance, and 64-98.5% fewer CPU-GPU communications, than the state-of-the-art hybrid algorithm based on the popular Bulk Synchronous Parallel (BSP) approach implemented in TOTEM. This shows our algorithm's strength in reducing the need for large synchronizations. Implementing variable partitioning, border matrix reduction and backward phase optimizations on our hybrid algorithm provides up to 10x speedup.
We compare our optimized implementation with CPU and GPU standalone codes based on our forward phase and backward phase kernels, and show around 2-8x speedup over the CPU-only code, while accommodating large graphs that cannot be accommodated in the GPU-only code. We also show that our method's performance is competitive with state-of-the-art multi-core CPU implementations and performs 40-52% better than GPU implementations on large graphs. We discuss the drawbacks of CPU-only and GPU-only implementations and motivate the challenges that graph algorithms face in large-scale computing, suggesting that a hybrid or distributed approach is a better way of overcoming the hurdles.
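For orientation, the two phases being partitioned across CPU and GPU are those of Brandes' algorithm; a plain single-machine reference version (without the hybrid partitioning or border matrices) looks like this:

```python
# Brandes' betweenness centrality for an unweighted graph: a forward
# BFS phase counting shortest paths, then a backward phase accumulating
# dependencies in reverse BFS order.
from collections import deque

def betweenness(adj):
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # Forward phase: shortest-path counts (sigma) from source s.
        dist = {v: -1 for v in adj}
        sigma = {v: 0 for v in adj}
        preds = {v: [] for v in adj}
        dist[s], sigma[s] = 0, 1
        order, q = [], deque([s])
        while q:
            v = q.popleft()
            order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # Backward phase: dependency accumulation in reverse order.
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # a path graph
print(betweenness(graph))   # middle nodes lie on most shortest paths
```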
APA, Harvard, Vancouver, ISO und andere Zitierweisen
50

Mishra, Ashirbad. „Efficient betweenness Centrality Computations on Hybrid CPU-GPU Systems“. Thesis, 2016. http://hdl.handle.net/2005/2718.

Der volle Inhalt der Quelle
Annotation:
Analysis of networks is quite interesting, because they can be interpreted for several purposes. Various features require different metrics to measure and interpret them. Measuring the relative importance of each vertex in a network is one of the most fundamental building blocks in network analysis. Betweenness Centrality (BC) is one such metric that plays a key role in many real world applications. BC is an important graph analytics application for large-scale graphs. However, it is one of the most computationally intensive kernels to execute, and measuring centrality in billion-scale graphs is quite challenging. While there are several existing efforts towards parallelizing BC algorithms on multi-core CPUs and many-core GPUs, in this work we propose a novel fine-grained CPU-GPU hybrid algorithm that partitions a graph into two partitions, one each for the CPU and the GPU. Our method performs BC computations for the graph on both the CPU and GPU resources simultaneously, resulting in a very small number of CPU-GPU synchronizations and hence less time spent on communication. The BC algorithm consists of two phases, the forward phase and the backward phase. In the forward phase, we initially find the paths that are needed by either partition, after which each partition is executed on its processor in an asynchronous manner. We initially compute border matrices for each partition, which store the relative distances between each pair of border vertices in a partition. The matrices are used in the forward phase calculations for all the sources. In this way, our hybrid BC algorithm leverages the multi-source property inherent in the BC problem. We present a proof of correctness and bounds on the number of iterations for each source. We also perform a novel hybrid and asynchronous backward phase, in which each partition communicates with the other only when there is a path that crosses the partition, hence performing minimal CPU-GPU synchronizations. We use a variety of implementations for our work, like node-based and edge-based parallelism, which includes data-driven and topology-based techniques. In the implementation we show that our method also works with a variable partitioning technique, which partitions the graph into unequal parts accounting for the processing power of each processor; due to this technique, our implementations achieve an almost equal percentage of utilization on both processors. For large-scale graphs the border matrix also becomes large, so we present various techniques to accommodate it; these techniques use properties inherent in the shortest path problem for reduction. We discuss the drawbacks of performing shortest path computations at large scale and provide solutions to them. Evaluations using a large number of graphs with different characteristics show that our hybrid approach without variable partitioning and border matrix reduction gives a 67% improvement in performance, and 64-98.5% fewer CPU-GPU communications, than the state-of-the-art hybrid algorithm based on the popular Bulk Synchronous Parallel (BSP) approach implemented in TOTEM. This shows our algorithm's strength in reducing the need for large synchronizations. Implementing variable partitioning, border matrix reduction and backward phase optimizations on our hybrid algorithm provides up to 10x speedup.
We compare our optimized implementation with CPU and GPU standalone codes based on our forward phase and backward phase kernels, and show around 2-8x speedup over the CPU-only code, while accommodating large graphs that cannot be accommodated in the GPU-only code. We also show that our method's performance is competitive with state-of-the-art multi-core CPU implementations and performs 40-52% better than GPU implementations on large graphs. We discuss the drawbacks of CPU-only and GPU-only implementations and motivate the challenges that graph algorithms face in large-scale computing, suggesting that a hybrid or distributed approach is a better way of overcoming the hurdles.
APA, Harvard, Vancouver, ISO und andere Zitierweisen