Dissertations / Theses on the topic 'Decentralised'

To see the other types of publications on this topic, follow the link: Decentralised.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Decentralised.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Paul, Greig. "Secure decentralised storage networks." Thesis, University of Strathclyde, 2017. http://digitool.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=28763.

Full text
Abstract:
In recent years, cloud-based computing and storage have become increasingly popular, as they remove the need for users and developers to buy or rent expensive dedicated hardware on an ongoing basis. This has led to the increasing centralisation of both services and storage, where users are reliant upon a small number of cloud-based providers to hold their data and provide the services they use. Recent events have shown that security breaches of centralised data stores can lead to significant quantities of personal data being revealed. This centralisation can also result in inconvenience in the event of the failure of the service provider, resulting in potential data loss or a loss of utility of the service. In contrast, a decentralised service and storage architecture removes the single point of failure from a network, and allows users to remove their dependency on a single company or service provider. In addition, by preventing storage providers from having access to user data, as is inherently needed in a decentralised network to preserve confidentiality, it is possible for users to protect their data from theft or unauthorised access, giving rise to data security and privacy benefits. This thesis explores the challenges encountered in implementing a secure decentralised network, based around storage, and presents solutions to some of these problems. A security analysis of the MaidSafe network is first given, setting the context of the work and investigating the state of the art. Potential uses for decentralised services are considered, including use on mobile devices. The importance of client device security is also considered, and a number of vulnerabilities affecting the security of client-based software are identified and explored.
A practical design of decentralised architecture for preserving user privacy when discovering users is also contributed, to illustrate how decentralised service design can be used to enhance privacy of existing systems, and solve otherwise unsolved problems. A review and analysis of the privacy policies of popular web-based services then shows the extent to which user privacy is at risk from centralised web services. Finally, the concepts of identity and authentication within decentralised networks are considered, with a novel smartcard-based approach to securing user credentials within a decentralised network demonstrated.
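The core confidentiality idea in this abstract, that a decentralised network must deny storage providers access to user data, can be sketched as a toy content-addressed store in which clients encrypt before uploading. This is an illustrative sketch only (the keystream construction is deliberately simplistic and insecure, and the class and function names are invented), not MaidSafe's actual self-encryption scheme:

```python
import hashlib

def _toy_keystream(key: bytes, length: int) -> bytes:
    """Toy keystream from repeated hashing -- for illustration only, NOT secure."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    ks = _toy_keystream(key, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

decrypt = encrypt  # an XOR stream cipher is its own inverse

class ToyDecentralisedStore:
    """Providers hold only ciphertext, addressed by its own hash."""
    def __init__(self):
        self._chunks = {}                     # what a storage provider would see

    def put(self, ciphertext: bytes) -> str:
        address = hashlib.sha256(ciphertext).hexdigest()
        self._chunks[address] = ciphertext
        return address

    def get(self, address: str) -> bytes:
        return self._chunks[address]

# Client side: encrypt before upload, so the provider never sees plaintext.
store = ToyDecentralisedStore()
key = hashlib.sha256(b"user secret").digest()
addr = store.put(encrypt(key, b"personal data"))
assert decrypt(key, store.get(addr)) == b"personal data"
assert store.get(addr) != b"personal data"    # provider holds only ciphertext
```

Providers can still verify integrity by rehashing the ciphertext against its address, yet they learn nothing about the plaintext; only a client holding the key can decrypt.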
APA, Harvard, Vancouver, ISO, and other styles
2

Rodríguez-Cano, Guillermo. "Toward Privacy-Preserving Decentralised Systems." Licentiate thesis, KTH, Teoretisk datalogi, TCS, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-206444.

Full text
Abstract:
Privacy enhancing technologies have proven to be a beneficial area of research, lessening the threats to users' privacy in centralised systems such as online social networks. Decentralised solutions have been proposed to extend the control that users have over their data, as opposed to the centralised massive collection of personal and sensitive data. The power that the service provider has in centralised systems has been shown to diminish the user's privacy. Moreover, the disclosures in 2013 of a global surveillance program run in collaboration with some of the service providers of such centralised systems have accelerated the debate on how to take action to counteract the threats to privacy. Privacy-preserving decentralised systems are plausible solutions to such threats. However, the removal of the central authority comes with two main trade-offs: mimicking the features of the centralised system, and taking over the supervision of the security and privacy threats that were previously the responsibility of the central authority. In our thesis, we propose the use of privacy-preserving decentralised systems and develop three solutions in terms of decentralisation, functionality, and achievable security and privacy. For decentralised systems, we show a mechanism for user authentication via standard credentials. Within the realm of decentralised online social networks, we implement a coordination and cooperation mechanism to organise events without the need for a trusted third party. Finally, we improve one aspect of the user's privacy, anonymity, by showing an implementation of a privacy-preserving system to submit and grade documents anonymously in systems where the central authority is still required. Our solutions are concrete examples of how privacy as data control can be achieved to varying degrees. Nonetheless, we hope that the protocols we propose, and the evaluation of their security and privacy properties, can be useful in other scenarios to mitigate the diverse dangers to personal privacy.
Privacy enhancing technologies have proven to be a beneficial area of research aimed at lessening the threats to the privacy of users' personal data in centralised information systems such as online social networks. Consequently, decentralised solutions have been proposed to extend the control that users have over their data, as opposed to the massive centralised collection of personal and sensitive data. The power that the service provider holds in centralised information systems has been shown to diminish the user's privacy in cases of misuse, censorship or data leakage. Furthermore, the disclosures in 2013 of a global surveillance programme led by public intelligence agencies in collaboration with some of the providers of such centralised information systems have accelerated the debate on how to take action to counter threats to privacy, in particular the threat to the legal "right to be let alone", as defined by Samuel Warren and Louis Brandeis in 1890 in their influential law review article "The Right to Privacy". Privacy-preserving decentralised systems are plausible solutions to such threats and one of the most common alternatives pursued today. The removal of the central authority, however, comes with two main trade-offs: usefully replicating the features of the centralised information system, and taking over the supervision of the security and privacy threats that were once the responsibility of the central authority. In our thesis, we employ privacy-preserving decentralised systems and develop three solutions to centralised information systems in terms of decentralisation, functionality, and achievable security and privacy.
In decentralised information systems generally, we present a concrete mechanism for user authentication via standard username-password credentials, with usability comparable to standard centralised applications. Within the realm of practical decentralised systems, we give a specific example in the domain of decentralised online social networks, implementing a coordination and cooperation mechanism for organising events without the need for a trusted third party. Finally, we return to centralised systems, where the presence of the central authority is still required, and instead improve one aspect of the user's privacy, anonymity, by presenting an implementation of a system for submitting and grading documents anonymously in the academic sphere within a generic, privacy-preserving centralised system. Our solutions are concrete examples of how privacy as data control, the paradigm envisioned by Anita Allen, can be achieved to varying degrees in centralised and decentralised privacy-preserving information systems. Nonetheless, we hope that the privacy-preserving protocols we propose, and the evaluation of their security and privacy properties, can be useful in other scenarios to mitigate the diverse threats to personal privacy that we currently face.
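The first contribution described above, decentralised authentication with ordinary username and password credentials, can be sketched roughly as follows: the credentials deterministically derive both a storage locator and an encryption key, so the network stores only an opaque encrypted blob that no node can link to the user. This is a hypothetical illustration (the names and the toy XOR cipher are invented, and a real design would use a slow key-derivation function such as scrypt), not the protocol from the thesis:

```python
import hashlib
import json

def derive(username: str, password: str) -> tuple:
    """Derive a storage locator and an encryption key from standard
    credentials. (A real system would use a slow KDF; SHA-256 is for brevity.)"""
    seed = hashlib.sha256(f"{username}:{password}".encode()).digest()
    locator = hashlib.sha256(seed + b"locator").hexdigest()
    key = hashlib.sha256(seed + b"key").digest()
    return locator, key

def xor(data: bytes, key: bytes) -> bytes:
    """Toy XOR stream cipher (self-inverse) -- illustrative only, NOT secure."""
    ks = b""
    counter = 0
    while len(ks) < len(data):
        ks += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(d ^ k for d, k in zip(data, ks))

dht = {}  # stand-in for a distributed hash table; nodes see only opaque blobs

def register(username: str, password: str, account: dict) -> None:
    locator, key = derive(username, password)
    dht[locator] = xor(json.dumps(account).encode(), key)

def login(username: str, password: str) -> dict:
    locator, key = derive(username, password)
    return json.loads(xor(dht[locator], key).decode())

register("alice", "correct horse", {"keys": ["pk1"]})
assert login("alice", "correct horse") == {"keys": ["pk1"]}
```

Because both locator and key come from the credentials alone, no trusted server is needed to mediate the login, which is the usability point the abstract emphasises.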



PeerSoN: Privacy-Preserving P2P Social Networks
3

Grime, Stewart Harper. "Communication in decentralised sensing architectures." Thesis, University of Oxford, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.317875.

Full text
4

Yuan, Ye. "Decentralised network prediction and reconstruction algorithms." Thesis, University of Cambridge, 2012. https://www.repository.cam.ac.uk/handle/1810/243619.

Full text
Abstract:
This study concerns the decentralised prediction and reconstruction problems in a network. First, we propose a decentralised prediction algorithm in the framework of the network consensus problem. It allows any individual node to compute the consensus value of the whole network in finite time using only a minimal number of successive values of its own history. We further prove that this minimal number of steps can be characterised using other algebraic and graph-theoretical notions: the minimal external equitable partition (mEEP), which can be computed directly from the Laplacian matrix of the graph and from the underlying network structure. We then consider a number of possible theoretical extensions of the proposed algorithm to issues arising in practical applications, e.g., time-delays, noise, external inputs and nonlinearities in the network, and analyse how the proposed algorithm should be changed to accommodate such constraints. For the decentralised reconstruction problem, we first define a new representation, dynamical structure functions, which encodes structural information, and explore the properties of this representation for the purpose of solving the reconstruction problem. We study a number of theoretical problems (identification, realisation, reduction, etc.) for dynamical structure functions and show how these theoretical results can be used in solving decentralised network reconstruction problems. We then illustrate the results on a number of in-silico examples. We conclude the thesis with some ideas and future perspectives building on this research on decentralised prediction and reconstruction problems.
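The finite-time flavour of the prediction algorithm can be illustrated on a toy graph. In the sketch below, a node fits the linear recurrence obeyed by the differences of its own successive values and extrapolates the exact consensus value after only a few iterations; the recurrence order is hard-coded here for brevity, whereas the thesis characterises it via the minimal external equitable partition. This is an illustration of the general idea, not the thesis's exact algorithm:

```python
# Toy network: 3-node path graph, consensus weights W = I - 0.25 * Laplacian.
W = [[0.75, 0.25, 0.00],
     [0.25, 0.50, 0.25],
     [0.00, 0.25, 0.75]]
x = [1.0, 5.0, 2.0]                      # initial values; the average is 8/3

# Run the consensus iteration x(t+1) = W x(t); node 0 records its own history.
history = [x[0]]
for _ in range(4):
    x = [sum(W[i][j] * x[j] for j in range(3)) for i in range(3)]
    history.append(x[0])

# Node 0 alone recovers the exact average from 5 of its own values.
# The differences d(k) obey a linear recurrence d(k+2) = -a1*d(k+1) - a0*d(k);
# its order (2) is assumed known here, where the thesis derives it from the
# rank of a Hankel matrix of differences (the mEEP characterisation).
d = [history[k + 1] - history[k] for k in range(4)]
det = d[1] * d[1] - d[0] * d[2]          # 2x2 solve via Cramer's rule
a1 = (-d[2] * d[1] + d[0] * d[3]) / det
a0 = (-d[1] * d[3] + d[2] * d[2]) / det
consensus = (a0 * history[0] + a1 * history[1] + history[2]) / (a0 + a1 + 1)
assert abs(consensus - 8.0 / 3.0) < 1e-9  # exact average after only 4 steps
```

By contrast, the plain iteration only approaches 8/3 asymptotically; the recurrence fit turns local observation into an exact, finite-time prediction.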
5

Livingston, Daniel John Civil & Environmental Engineering Faculty of Engineering UNSW. "Institutions and decentralised urban water management." Publisher: University of New South Wales. Civil & Environmental Engineering, 2008. http://handle.unsw.edu.au/1959.4/41336.

Full text
Abstract:
Physically decentralised water management systems may contribute to improving the sustainability of urban water management. Any shift toward decentralised systems needs to consider not just physical system design but also social values, knowledge frames, and organisations, and their interconnections to the physical technology. Four cases of recent Australian urban water management improvement projects were researched using qualitative methods. Three cases were of decentralised water management innovation. The other was of a centralised system, although decentralised options had been considered. These cases were studied to identify institutional barriers and enablers for the uptake of decentralised systems, and to better understand how emerging environmental engineering knowledge might be applied to overcome an implementation gap for decentralised urban water technologies. Analysis of each case focused on the institutional elements of urban water management, namely: the values, knowledge frames and organisational structures. These elements were identified through in-depth interviews, document review, and an on-line survey. The alignment of these elements was identified as being a significant contributor to the stability of centralised systems, or to change toward decentralised systems. A new organisational home for innovative knowledge was found to be common to each case where decentralised innovation occurred. 'Institutional entrepreneurs', strong stakeholder engagement, and inter-organisational networks were all found to be linked to the creation of shared meaning and legitimacy for organisational and technological change. Existing planning frameworks focus on expert justification for change rather than institutional support for change. Institutional factors include shared understandings, values and organisational frameworks, and the alignment of each factor.
Principles for, and examples of, appropriate organisational design for enabling and managing decentralised technological innovation for urban water management are proposed. This research contributes to the understanding of the institutional basis and dynamics of urban water management, particularly in relation to physical centralisation and decentralisation of urban water management technologies and, to a lesser extent, in relation to user involvement in urban water management. Understanding of factors that contribute to enabling and constraining decentralised technologies is extended to include institutional and organisational factors. New and practical pathways for change for the implementation of decentralised urban water systems are provided.
6

Craig, Iain David. "Decentralised control in a blackboard system." Thesis, Lancaster University, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.235978.

Full text
7

Sadighi, Firozabadi Seyd Babak. "Decentralised privilege management for access control." Thesis, Imperial College London, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.424362.

Full text
8

Utete, Simukai. "Network management in decentralised sensing systems." Thesis, University of Oxford, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.297308.

Full text
9

Abdul-Rahman, Alfare. "A framework for decentralised trust reasoning." Thesis, University College London (University of London), 2005. http://discovery.ucl.ac.uk/1444477/.

Full text
Abstract:
Recent developments in the pervasiveness and mobility of computer systems in open computer networks have invalidated traditional assumptions about trust in computer communications security. In a fundamentally decentralised and open network such as the Internet, the responsibility for answering the question of whether one can trust another entity on the network now lies with the individual agent, and is no longer a decision governed a priori by a central authority. Online agents represent users' digital identities. Thus, we believe that it is reasonable to explore social models of trust for secure agent communication. The thesis of this work is that it is feasible to design and formalise a dynamic model of trust for secure communications based on the properties of social trust. In showing this, we divide this work into two phases. The aim of the first is to understand the properties and dynamics of social trust and its role in computer systems. To this end, a thorough review of trust, and its supporting concept, reputation, in the social sciences was carried out. We followed this with a rigorous analysis of current trust models, comparing their properties with those of social trust. We found that current models were designed on an ad-hoc basis with regard to trust properties. The aim of the second phase is to build a framework for trust reasoning in distributed systems. Knowledge from the previous phase is used to design and formally specify, in Z, a computational trust model. A simple model for the communication of recommendations, the recommendation protocol, is also outlined to complement the model. Finally, an analysis of possible threats to the model is carried out. Elements of this work have been incorporated into Sun's JXTA framework and Ericsson Research's prototype trust model.
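A minimal sketch of the kind of computational trust model described, direct experience supplemented by recommendations discounted by trust in the recommender, might look as follows. All names and weightings below are invented for illustration; the thesis's model is formally specified in Z and is considerably richer:

```python
# Toy trust reasoning: direct experience plus weighted recommendations.
class Agent:
    def __init__(self, name):
        self.name = name
        self.experiences = {}      # target -> list of outcomes in [0, 1]
        self.recommenders = {}     # recommender -> weight in [0, 1]

    def record(self, target, outcome):
        """Record the outcome of a first-hand interaction with a target."""
        self.experiences.setdefault(target, []).append(outcome)

    def direct_trust(self, target):
        obs = self.experiences.get(target)
        return sum(obs) / len(obs) if obs else None

    def trust(self, target, recommendations=()):
        """Combine direct experience with recommendations, each discounted
        by how much this agent trusts the recommender."""
        direct = self.direct_trust(target)
        if direct is not None:
            return direct          # first-hand evidence takes precedence
        weighted = [(self.recommenders.get(r, 0.0), value)
                    for r, value in recommendations]
        total = sum(w for w, _ in weighted)
        return sum(w * v for w, v in weighted) / total if total else None

alice = Agent("alice")
alice.recommenders = {"bob": 0.9, "mallory": 0.1}
# Direct experience of "dave" dominates any hearsay.
alice.record("dave", 1.0)
alice.record("dave", 0.5)
assert alice.trust("dave") == 0.75
# No experience of "carol": fall back on discounted recommendations.
t = alice.trust("carol", [("bob", 0.8), ("mallory", 0.2)])
assert abs(t - 0.74) < 1e-9
```

The discounting step captures the social intuition the work builds on: a recommendation is only as valuable as the recommender is trusted.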
10

Silva, Hasini De. "Decentralised group formation in pervasive environments." Thesis, University of Surrey, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.549652.

Full text
11

Smyrnakis, Michalis. "Game-theoretical approaches to decentralised optimisation." Thesis, University of Bristol, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.539774.

Full text
12

Engels, Wouter Peter. "Decentralised velocity feedback control of structures." Thesis, University of Southampton, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.427705.

Full text
13

Shin, Eunkyung. "Understanding institutional changes toward decentralised governance." Thesis, University of York, 2016. http://etheses.whiterose.ac.uk/18640/.

Full text
Abstract:
During the last decade, about 95 per cent of democracies implemented one or more types of decentralisation reforms. Decentralisation encompasses administrative, fiscal, and political dimensions and depths of deconcentration, delegation, and devolution. The extant literature deals with the origins, processes, and outcomes of decentralisation and demonstrates diverse outcomes such as subnational autonomy, accountability, economic growth, and the quality of public service delivery. This thesis investigated decentralisation empirically, methodologically, and theoretically. First, a measurement tool is developed to capture the degree of changes in subnational autonomy. Second, Falleti's theory was applied to the first wave of decentralisation in Japan and Korea. As the results demonstrate a lack of generalisability, the author developed an historical ideological framework which explains causality from powerful actors' ideological footholds to types of decentralisation. Finally, cross-country and cross-sector case studies confirm that powerful actors' motivations, public consensus and institutional factors shape types of decentralisation, which in turn determine the degree of changes in subnational autonomy. As a whole, the thesis contributes to knowledge by showing limitations of Falleti's sequential theory of decentralisation. Empirically, this thesis measures the degree of changes in subnational autonomy after the first and the second wave of decentralisation in Japan and Korea with a more nuanced and comprehensive measurement tool. Methodologically, the thesis shows limitations of theory-guided intensive process-tracing and potential advantages of extensive process-tracing. Theoretically, the thesis shows that ideas, combined with institutional factors, have causal power as strong as interests. Notwithstanding several contributions, the thesis contains some limitations and offers insights for future studies.
Historical ideological causality based on decentralisation in Japan and Korea should be tested in other locations to expand generalisability. The tool to measure subnational autonomy developed by the author should be improved by fine-tuning its technical issues. For the periodization of decentralisation, an economic perspective of post-developmental decentralisation as well as a social perspective of the expansion of the welfare state should be considered.
14

Kiddie, Paul David. "Decentralised soft-security in distributed systems." Thesis, University of Birmingham, 2011. http://etheses.bham.ac.uk//id/eprint/1731/.

Full text
Abstract:
Existing approaches to intrusion detection in imperfect wireless environments employ local monitoring, but are limited by their failure to reason about the imprecise monitoring within a radio environment that arises from unidirectional links and collisions. This compounds the challenge of detecting subtle behaviour and adds to uncertainty in the detection strategies employed. A simulation platform was developed, based on the Jist/SWANS environment, adopting a robust methodology that employed Monte-Carlo sampling in order to evaluate intrusion detection systems (IDS). A framework for simulating adversaries was developed, which enabled wormholes, black holes, selfishness, flooding and data modification to be simulated, as well as a random distribution thereof. A game-theoretically inspired IDS, sIDS, was developed, which applies reasoning between the detection and response components of a typical IDS in order to select more appropriate local responses. The implementation of sIDS is presented within the context of a generic IDS framework for MANET. Results showed a 5-15% reduction in false response rate compared to a baseline IDS over a number of attacking scenarios. sIDS was extended with immune system inspired features, namely a response over multiple timescales, as employed by the innate and adaptive components of the immune system, and the recruitment of neighbouring agents to participate in a co-ordinated response to an intrusion. Results showed a true response rate of 95-100% for all simulated attack scenarios. For random misbehaviour and assisted black hole scenarios, packet delivery ratio (PDR) gains of up to 30% and 15% respectively were observed compared to the pure game-theoretic approach, tracking the omniscient network performance in these scenarios. In all, this study has shown that applying game-theoretic reasoning to existing detection methods results in better discrimination of benign nodes from adversaries, which can be used to bias network operation towards the benign nodes.
When fused with immune system inspired features, the resulting IDS maintained this discrimination whilst substantially reducing attack efficacy.
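The idea of inserting reasoning between detection and response can be illustrated with a toy expected-utility rule: respond only when the detector's confidence makes the response worthwhile. The payoff values below are hypothetical, and this sketch is not the actual sIDS design:

```python
# Toy expected-utility response selection. Payoffs are invented: punishing a
# benign node is costly, but so is letting an adversary continue.
PAYOFF = {
    # (action, node is malicious?): utility to the network
    ("isolate", True): +1.0,   ("isolate", False): -2.0,
    ("ignore",  True): -1.5,   ("ignore",  False): +0.5,
}

def respond(p_malicious: float) -> str:
    """Pick the action with the highest expected utility given the
    detector's confidence that the monitored node is malicious."""
    def expected(action):
        return (p_malicious * PAYOFF[(action, True)]
                + (1 - p_malicious) * PAYOFF[(action, False)])
    return max(("isolate", "ignore"), key=expected)

# A weak, noisy detection no longer triggers a false response ...
assert respond(0.3) == "ignore"
# ... while strong evidence still does.
assert respond(0.9) == "isolate"
```

Tuning the payoffs shifts the response threshold, which is how such reasoning can trade false responses against missed detections in an imprecise radio environment.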
15

Kennedy, Laura (Laura Lynn) 1973. "Manufacturing initiatives in a decentralised company." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/82685.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management; and, (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2001.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (leaves 52-53).
by Laura Kennedy.
S.M.
16

Stranders, Ruben. "Decentralised coordination of information gathering agents." Thesis, University of Southampton, 2010. https://eprints.soton.ac.uk/172455/.

Full text
Abstract:
Unmanned sensors are rapidly becoming the de facto means of achieving situational awareness---the ability to make sense of, and predict what is happening in an environment---in disaster management, military reconnaissance, space exploration, and climate research. In these domains, and many others besides, their use reduces the need for exposing humans to hostile, impassable or polluted environments. Whilst these sensors are currently often pre-programmed or remotely controlled by human operators, there is a clear trend toward making these sensors fully autonomous, thus enabling them to make decisions without human intervention. Full autonomy has two clear benefits over pre-programming and human remote control. First, in contrast to sensors with pre-programmed motion paths, autonomous sensors are better able to adapt to their environment, and react to a priori unknown external events or hardware failure. Second, autonomous sensors can operate in large teams that would otherwise be too complex to control by human operators. The key benefit of this is that a team of cheap, small sensors can achieve through cooperation the same results as individual large, expensive sensors---with more flexibility and robustness. In light of the importance of autonomy and cooperation, we adopt an agent-based perspective on the operation of the sensors. Within this view, each sensor becomes an information gathering agent. As a team, these agents can then direct their collective activity towards collecting information from their environment with the aim of providing accurate and up-to-date situational awareness. Against this background, the central problem we address in this thesis is that of achieving accurate situational awareness through the coordination of multiple information gathering agents. To achieve general and principled solutions to this problem, we formulate a generic problem definition, which captures the essential properties of dynamic environments. 
Specific instantiations of this generic problem span a broad spectrum of concrete application domains, of which we study three canonical examples: monitoring environmental phenomena, wide area surveillance, and search and patrol. The main contributions of this thesis are decentralised coordination algorithms that solve this general problem with additional constraints and requirements, and can be grouped into two categories. The first category pertains to decentralised coordination of fixed information gathering agents. For these agents, we study the application of decentralised coordination during two distinct phases of the agents' life cycle: deployment and operation. For the former, we develop an efficient algorithm for maximising the quality of situational awareness, while simultaneously constructing a reliable communication network between the agents. Specifically, we present a novel approach to the NP-hard problem of frequency allocation, which deactivates certain agents such that the problem can be provably solved in polynomial time. For the latter, we address the challenge of coordinating these agents under the additional assumption that their control parameters are continuous. In so doing, we develop two extensions to the max-sum message passing algorithm for decentralised welfare maximisation, which constitute the first two algorithms for distributed constraint optimisation problems (DCOPs) with continuous variables---CPLF-MS (for linear utility functions) and HCMS (for non-linear utility functions). The second category relates to decentralised coordination of mobile information gathering agents whose motion is constrained by their environment. For these agents, we develop algorithms with a receding planning horizon, and a non-myopic planning horizon. 
The former is based on the max-sum algorithm, thus ensuring an efficient and scalable solution, and constitutes the first online agent-based algorithm for the domains of pursuit-evasion, patrolling and monitoring environmental phenomena. The second uses sequential decision making techniques for the offline computation of patrols---infinitely long paths designed to continuously monitor a dynamic environment---which are subsequently improved on at runtime through decentralised coordination. For both topics, the algorithms are designed to satisfy our design requirements of quality of situational awareness, adaptiveness (the ability to respond to a priori unknown events), robustness (the ability to degrade gracefully), autonomy (the ability of agents to make decisions without the intervention of a centralised controller), modularity (the ability to support heterogeneous agents) and performance guarantees (the ability to give a lower bound on the quality of the achieved situational awareness). When taken together, the contributions presented in this thesis represent an advance in the state of the art of decentralised coordination of information gathering agents, and a step towards achieving autonomous control of unmanned sensors.
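The max-sum message passing at the heart of these coordination algorithms can be sketched on a minimal two-agent example. On a tree-structured factor graph one message sweep in each direction is exact; the sensor-orientation utilities below are invented purely for illustration:

```python
from itertools import product

# Two sensor agents each choose an orientation; unary utilities score
# individual coverage and a pairwise utility penalises overlapping views.
DOMAIN = ["north", "south"]
def u1(a):     return {"north": 2.0, "south": 0.0}[a]
def u2(b):     return {"north": 1.0, "south": 0.5}[b]
def u12(a, b): return -3.0 if a == b else 0.0   # overlapping views are wasteful

def factor_to_var(factor, other_msg, fix_first):
    """Max-sum message: m(x) = max_y [factor(x, y) + other_msg(y)]."""
    table = {}
    for x in DOMAIN:
        if fix_first:
            table[x] = max(factor(x, y) + other_msg[y] for y in DOMAIN)
        else:
            table[x] = max(factor(y, x) + other_msg[y] for y in DOMAIN)
    return table

# One sweep in each direction is exact on a tree-structured factor graph.
m_to_x1 = factor_to_var(u12, {b: u2(b) for b in DOMAIN}, fix_first=True)
m_to_x2 = factor_to_var(u12, {a: u1(a) for a in DOMAIN}, fix_first=False)

# Each agent decides locally from its own incoming messages.
x1 = max(DOMAIN, key=lambda a: u1(a) + m_to_x1[a])
x2 = max(DOMAIN, key=lambda b: u2(b) + m_to_x2[b])

# The decentralised choice matches the global optimum found by brute force.
best = max(product(DOMAIN, DOMAIN),
           key=lambda ab: u1(ab[0]) + u2(ab[1]) + u12(*ab))
assert (x1, x2) == best
```

The appeal for information gathering agents is that each agent only ever exchanges small message tables with its neighbours, yet the team's joint choice maximises the summed utility (exactly on trees, approximately on cyclic graphs).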
APA, Harvard, Vancouver, ISO, and other styles
17

Kho, Johnsen. "Decentralised control of wireless sensor networks." Thesis, University of Southampton, 2009. https://eprints.soton.ac.uk/66078/.

Full text
Abstract:
Wireless sensor networks are receiving a considerable degree of research interest due to their deployment in an increasing number and variety of applications. However, the efficient management of the limited energy resources of such networks in a way that maximises the information value of the data collected is a significant research challenge. To date, most of these systems have adopted a centralised control mechanism, but from a system's perspective this raises concerns associated with scalability, robustness, and the ability to cope with dynamism. Given this, decentralised approaches are appealing. But the design of efficient decentralised regimes is challenging as it introduces an additional control issue related to the dynamic interactions between the network's interconnected nodes in the absence of a central coordinator. Within this context, this thesis first concentrates on decentralised approaches to adaptive sampling as a means of focusing a node's energy consumption on obtaining the most important data. Specifically, we develop a principled information metric based upon Fisher information and Gaussian process regression that allows the information content of a node's observations to be expressed. We then use this metric to derive three novel decentralised control algorithms for information-based adaptive sampling which represent a trade-off in computational cost and optimality. These algorithms are evaluated in the context of a deployed sensor network in the domain of flood monitoring. The most computationally efficient of the three is shown to increase the value of information gathered by approximately 83%, 27%, and 8% per day compared to benchmarks that sample in a naive non-adaptive manner, in a uniform non-adaptive manner, and using a state-of-the-art adaptive sampling heuristic (USAC), respectively. 
Moreover, our algorithm collects information whose total value is approximately 75% of the optimal solution (which requires an exponential, and thus impractical, amount of time to compute). The second major line of work then focuses on the adaptive sampling, transmitting, forwarding, and routing actions of each node in order to maximise the information value of the data collected in resource-constrained networks. This adds additional complexity because these actions are inter-related, since each node's energy consumption must be optimally allocated between sampling and transmitting its own data, receiving and forwarding the data of other nodes, and routing any data. Thus, in this setting we develop two optimal decentralised algorithms to solve this distributed constraint optimisation problem. The first assumes that the route by which data is forwarded to the base station is fixed (either because the underlying communication network is a tree, or because an arbitrary choice of route has been made) and then calculates the optimal integration of actions that each node should perform. The second deals with flexible routing, and makes optimal decisions regarding both the sampling, transmitting, and forwarding actions that each node should perform, and also the route by which this data should be forwarded to the base station. The two algorithms represent a trade-off in optimality, communication cost, and processing time. In an empirical evaluation on sensor networks (whose underlying communication networks exhibit loops), we show that the algorithm with flexible routing delivers approximately twice the quantity of information to the base station compared to the algorithm with fixed routing. However, this gain comes at a considerable communication and computational cost (increasing both by a factor of 100). 
Thus, while the algorithm with flexible routing is suitable for networks with a small number of nodes, it scales poorly; as the size of the network increases, the algorithm with fixed routing should be favoured.
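The flavour of information-driven adaptive sampling under an energy budget can be sketched as a greedy scheduler. This is an illustrative stand-in only: the "gap to nearest sample" value metric below replaces the thesis's principled Fisher-information and Gaussian-process metric, and the slot counts are invented.

```python
# Greedy adaptive-sampling sketch: a node with an energy budget picks, one at
# a time, the time slots whose (stand-in) information value is highest.
# Proxy metric: slots far from every already-chosen sample are worth more,
# mimicking the intuition that predictive uncertainty grows between samples.

def information_value(slot, chosen):
    # Distance to the nearest already-chosen sample (infinite if none yet).
    if not chosen:
        return float("inf")
    return min(abs(slot - c) for c in chosen)

def greedy_schedule(num_slots, budget):
    chosen = []
    for _ in range(budget):
        # Ties broken towards the earliest slot via the -s secondary key.
        best = max(range(num_slots),
                   key=lambda s: (information_value(s, chosen), -s))
        chosen.append(best)
    return sorted(chosen)
```

With nine slots and a budget of three, the schedule spreads samples evenly across the horizon, which is the qualitative behaviour an information metric should induce.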
APA, Harvard, Vancouver, ISO, and other styles
18

Ball, Rudi. "A framework for decentralised vehicular services." Thesis, Imperial College London, 2011. http://hdl.handle.net/10044/1/9699.

Full text
Abstract:
Traffic management is a long-standing and growing problem in cities, with inefficient road use resulting in significant economic costs. Existing traffic management solutions are typically large centralised systems which rely on central authorities for control. Furthermore, these systems are costly to set up, deploy and maintain. Within the near future it is expected that Vehicle-to-X (V2X) technologies will become integrated into both vehicles and transportation infrastructures. V2X technologies allow vehicles and road-side infrastructure to communicate with one another using ad-hoc wireless communications. In this thesis we present a unique vehicular framework for the development and prototyping of decentralised vehicular services which exploit available V2X technologies. The framework uses discrete event simulation to evaluate the operation of a decentralised vehicular service. The decentralised services presented within the thesis are unique as they require a combination of scaled ad-hoc inter-vehicle messaging, mobility data and cooperation to enable communities of road vehicles to provide services to one another. Vehicles manage themselves in their local space to approximate the outcomes of centralised services. As vehicular services are decentralised they reduce the costs associated with deploying and supporting traffic services. Using the framework we prototype two novel decentralised traffic control protocols which evaluate the problems of travel time estimation and intersection control. Each protocol is evaluated in a scaled scenario which emphasises the usage and requirement of fine-grained geographic mobility traces which mimic real city road maps. We show that decentralised approaches provide a feasible means of providing vehicular services to users. We evaluate the trade-offs and performance issues resulting from their use.
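A minimal sketch of the kind of cooperative, ad-hoc estimation such services rely on: vehicles in radio range repeatedly average their travel-time estimates for a shared road link (a gossip protocol). The averaging rule, vehicle names and values are illustrative assumptions, not the protocols evaluated in the thesis.

```python
# Decentralised travel-time estimation sketch: each vehicle holds a local
# estimate of a link's travel time and averages it pairwise with vehicles
# it encounters, driving all estimates towards a shared consensus value
# without any central traffic authority.

def gossip_round(estimates, neighbour_pairs):
    # Each listed pair of vehicles in radio range exchanges estimates
    # and both adopt the mean (the pairwise sum is conserved).
    est = dict(estimates)
    for a, b in neighbour_pairs:
        mean = (est[a] + est[b]) / 2.0
        est[a] = est[b] = mean
    return est

# Three vehicles with different observations of the same link (seconds).
estimates = {"v1": 120.0, "v2": 90.0, "v3": 60.0}
links = [("v1", "v2"), ("v2", "v3"), ("v1", "v3")]
for _ in range(20):  # repeated encounters drive estimates to consensus
    estimates = gossip_round(estimates, links)
```

After repeated encounters the three estimates converge to the shared mean of 90 seconds, approximating what a centralised aggregator would compute.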
APA, Harvard, Vancouver, ISO, and other styles
19

Steinheimer, Michael. "Autonomous decentralised M2M application service provision." Thesis, University of Plymouth, 2018. http://hdl.handle.net/10026.1/11957.

Full text
Abstract:
Machine-to-Machine Communication (M2M) service platforms integrate M2M devices and enable realisation of applications using the M2M devices to support processes, mostly in the business domain. Many application-specific vertical implementations of M2M service platforms exist as well as efforts to define horizontal M2M service platforms. Both approaches usually have central components or stakeholders on which the entire M2M system or the user depends. With regards to the end-user, more and more M2M devices provide resources, such as environmental information (e.g. energy consumption data) or control options (e.g. switching energy consumers). These resources offer great potential for supporting smart environments, and it would be advantageous if these resources could be used by end-users to create individual smart environments or be accessible to other users to integrate these resources into their processes. Furthermore, it would be advantageous to avoid centralised or domain-specific solutions in order to realise flexible and independent M2M service platforms. This thesis proposes a novel framework for autonomous and decentralised M2M application service provision based on native end-user integration and a distributed M2M system architecture. In order to actively involve end-users in M2M application development, an intuitive methodology for graphical application design through state machine-based application modelling is proposed. To achieve independence from the execution environments, a formal language for modelling M2M applications is introduced enabling a graphically designed M2M application to be represented by a formally described application model, which can be processed automatically and platform-independently. The design of a generalised interface definition enables local M2M applications to be provided as a service to other users. 
Based on this, an approach is introduced allowing end-users to combine the resources available in their personal environments in order to realise cooperative M2M applications and act as service providers. The M2M service platform architecture presented does not contain any central components or stakeholders. The roles of central entities and stakeholders are instead distributed across a decentralised system architecture implemented in the end-user domain. The various M2M service providers and consumers link via a Peer-to-Peer (P2P) network at both the communication level (using the Constrained Application Protocol (CoAP) or the Session Initiation Protocol (SIP)) and the data storage level (using structured or unstructured P2P overlay networks). An M2M Community concept complements the P2P network to enable a social network between different M2M service providers and consumers. The thesis also presents a prototypical proof-of-concept implementation used to verify the proposed framework components.
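The idea of a platform-independent, state machine-based application model can be sketched as plain data executed by a generic interpreter. The heating rule below is a hypothetical end-user application, not the thesis's formal modelling language; only the principle (model as data, separate interpreter) is illustrated.

```python
# Platform-independent application model sketch: an end-user-designed M2M
# application expressed as a state-machine transition table, executed by a
# generic interpreter that any execution environment could implement.

MODEL = {
    "initial": "idle",
    "transitions": {
        # (current state, event) -> next state
        ("idle", "temp_low"): "heating",
        ("heating", "temp_ok"): "idle",
        ("heating", "window_open"): "idle",
    },
}

def run(model, events):
    # Feed a stream of device events through the model; unknown events
    # leave the state unchanged. Returns the full state trace.
    state = model["initial"]
    trace = [state]
    for event in events:
        state = model["transitions"].get((state, event), state)
        trace.append(state)
    return trace
```

Because the model is pure data, it can be exchanged between users, stored in a P2P overlay, or executed on heterogeneous devices without change, which is the portability property the thesis's formal language targets.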
APA, Harvard, Vancouver, ISO, and other styles
20

Granby, Peter. "Decentralised preference-based scheduling for manufacturing systems." Thesis, Keele University, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.402641.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Ring, B. J. "Dispatch based pricing in decentralised power systems." Thesis, University of Canterbury. Management, 1995. http://hdl.handle.net/10092/4617.

Full text
Abstract:
This thesis investigates the application of marginal cost based spot pricing techniques to the short run coordination of decentralised, and potentially competitive, electricity markets. A Dispatch Based Pricing philosophy is proposed which requires that the dispatcher of a power system determine spot prices which are consistent with both the observed power system dispatch and the offers and bids issued by market participants. Whereas previous research has involved determining prices corresponding to an optimised power system dispatch, Dispatch Based Pricing is more flexible, requiring no such optimality assumption while generating incentives which encourage efficient dispatch. Pricing relationships are formed, and the resulting incentives analysed, by applying duality theory to mathematical programming formulations of the dispatch problem. A detailed theoretical description of a dispatch based pricing model, based on an Optimal Power Flow formulation, is presented. This model is an extension of the ex post pricing model of Hogan (1991). As well as presenting a more general representation of dispatch variable relationships, we demonstrate the underlying mathematical relationships which drive the economic interpretation of this model. In addition, we explore the behaviour of transmission flow constraints in cyclic networks, and describe the modifications needed to price for security requirements consistent with current operational practices in New Zealand. We explore the extension of Dispatch Based Pricing to situations beyond the scope of the Optimal Power Flow problem, and even to situations which are strictly incompatible with a pure marginal cost based analysis. We develop a "best compromise" pricing approach which, for (seemingly) economically inconsistent dispatches, minimises the side payments required to account for the difference between the market clearing spot prices and the offers and bids of the market participants. 
We develop and discuss methods for determining dispatch based prices which are consistent with primal inter-temporal constraints, uncertainty, and integer variables.
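A heavily simplified stand-in for the "best compromise" idea: if side payments are modelled as quantity-weighted absolute deviations between a uniform spot price and each dispatched participant's offer, minimising them reduces to a weighted median of the offers. The thesis's actual formulation works over a full Optimal Power Flow model via duality theory; the offers and quantities below are invented.

```python
# "Best compromise" pricing sketch: choose the uniform spot price that
# minimises total side payments, modelled here as quantity * |price - offer|
# summed over dispatched units. The minimiser of a weighted L1 objective is
# a weighted median of the offer prices.

def best_compromise_price(offers):
    # offers: list of (offer_price_per_MWh, dispatched_MWh)
    total = sum(q for _, q in offers)
    acc = 0.0
    for price, q in sorted(offers):
        acc += q
        if acc >= total / 2.0:  # cumulative quantity crosses half: weighted median
            return price
    raise ValueError("empty offer stack")

def side_payments(offers, price):
    return sum(q * abs(price - o) for o, q in offers)

# Invented example dispatch: three generators with offers in $/MWh.
offers = [(20.0, 100.0), (35.0, 50.0), (50.0, 30.0)]
```

Here the largest unit's offer carries more than half the dispatched quantity, so it sets the compromise price and the remaining units are reconciled through side payments.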
APA, Harvard, Vancouver, ISO, and other styles
22

Hellwig, Christian. "Money, intermediation and coordination in decentralised markets." Thesis, London School of Economics and Political Science (University of London), 2003. http://etheses.lse.ac.uk/1723/.

Full text
Abstract:
Overview: This thesis studies the coordination of individuals' transactions in a large, decentralized market. The first half of the thesis examines the role of market institutions in an environment with frictions. In particular, it studies the interaction between "intermediaries" (banks, shops) and decentralized "equilibrium arrangements" such as money or credit. The second half of the thesis studies the role of public and private information in coordinating individual actions, as well as the macro-economic effects of such coordination. First Half: I study a search economy in which intermediaries are the driving force co-ordinating the economy on the use of a unique, common medium of exchange for transactions. If search frictions delay trade, intermediaries offering immediate exchange opportunities can make arbitrage gains from a price spread, but they have to solve the search market's allocation problem. Intermediaries solve this problem best by imposing a common medium of exchange on other agents, and a Cash-in-Advance constraint arises in equilibrium: Agents trade twice in order to consume, once to exchange their production for the medium of exchange, and once to purchase their consumption. I extend my analysis to the study of fiat currencies and, in particular, free banking regimes. Second Half: The second half consists of two essays studying the role of public and private information in coordination games. In the first, I relate the convergence of equilibria to the convergence of higher-order beliefs. The central result of this essay relates the convergence of players' higher-order beliefs (and hence equilibrium convergence) to the parameters of the signal structure. This provides a limit condition determining the uniqueness or multiplicity of equilibria in the coordination game. 
Building on the previous chapter, the second essay studies the effects of information policies on output and inflation, when price-setters face higher-order uncertainty concerning the money supply. I show that this may lead to substantial delays in price-adjustment following a shock. To the extent that public announcements coordinate expectations, they reduce this delay, and thereby reduce the effect and persistence of monetary shocks on output in the short-run. On the other hand, public announcements introduce a second component of noise, and may therefore increase short-run volatility.
APA, Harvard, Vancouver, ISO, and other styles
23

Cimen, Hasan. "Decentralised power system load frequency controller design." Thesis, University of Sussex, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.244317.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Schardong, Frederico. "Taming NFV orchestration using decentralised cognitive components." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2018. http://hdl.handle.net/10183/184344.

Full text
Abstract:
Network Functions Virtualisation (NFV) decouples network functions from physical devices, simplifying the deployment of new services. Typical network functions, like firewalls, traffic accelerators, intrusion detection systems and intrusion prevention systems, are traditionally performed by proprietary physical appliances, which must be manually installed by network operators. Their deployment is challenging because they have specific chaining and ordering requirements. As opposed to traditional physical appliances, virtual network functions (VNFs) can be dynamically deployed and reconfigured on demand, posing strict management challenges to networked systems. The selection of the most appropriate VNFs to achieve a particular objective, the decision on where to deploy these VNFs and through which paths they will communicate are the responsibilities of an NFV orchestrator. In this dissertation, we propose to orchestrate VNFs using interacting cognitive components structured with the belief-desire-intention (BDI) architecture, leading to emergent solutions to address network challenges. The BDI architecture includes a reasoning cycle, which provides agents with rational behaviour, allowing them to deal with different scenarios in which flexible and intelligent behaviour is needed. We extend the NFV architecture, replacing its centralised orchestrator with BDI agents. Our proposal includes a reverse auction protocol and a novel bidding heuristic that allow agents to make decisions regarding the orchestration tasks. Finally, we provide a testbed that integrates a platform for developing BDI agents with a network emulator, allowing agents to control VNFs and perceive the network. This testbed is used to implement VNFs and empirically evaluate our theoretical model in a distributed denial-of-service (DDoS) attack. The evaluation results show that a solution to the DDoS attack emerges through the negotiation of agents, successfully mitigating the attack.
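The reverse-auction idea can be sketched as follows. The bid formula (a load-to-capacity ratio) and the agent data are invented stand-ins for the thesis's bidding heuristic; only the auction structure is illustrated.

```python
# Reverse-auction sketch: an agent announcing a VNF to deploy collects bids
# from candidate hosting agents and awards the task to the lowest bidder,
# so placement decisions emerge without a central orchestrator.

def bid(agent):
    # Hypothetical cost heuristic: busier, smaller hosts bid higher.
    return agent["load"] / agent["capacity"]

def reverse_auction(vnf_name, agents):
    winner = min(agents, key=bid)   # lowest bid wins the hosting task
    winner["load"] += 1             # winner now hosts the VNF
    return vnf_name, winner["id"]

agents = [
    {"id": "a1", "load": 3, "capacity": 4},
    {"id": "a2", "load": 1, "capacity": 4},
    {"id": "a3", "load": 2, "capacity": 8},
]
placements = [reverse_auction(v, agents) for v in ["firewall", "ids"]]
```

Because winners' loads rise after each award, successive auctions naturally spread VNFs across the least-loaded hosts.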
APA, Harvard, Vancouver, ISO, and other styles
25

Smith, George Fenwick. "Decentralised staff development roles in higher education." Thesis, University College London (University of London), 1990. http://discovery.ucl.ac.uk/10020774/.

Full text
Abstract:
The value of the role of the decentralised staff developer in higher education, and of the alternative ways by which it might be fulfilled, has not been addressed or decided. Of the alternative models of staff development practice in higher education, product-orientation, prescription-orientation, process-orientation, problem-orientation and eclecticism, all but the latter are considered to have significant weaknesses. Similarly, the alternative models of staff development responsibility in higher education, 'management', 'shopfloor' and 'partnership', are considered to have weaknesses. It is hypothesised that the 'partnership' model, modified by decentralisation and eclectic in practice, offers a means for overcoming these weaknesses and promoting effective staff development. To test the hypothesis, a case study method was adopted which comprised participant observation, interviews, documentation and a survey. A sustained investigation was made of Birmingham Polytechnic with more limited inquiries at Brighton and Coventry Polytechnics. The results of the research provide some qualified support for the hypothesis. It was found that eclecticism was the only model of practice that was capable of facilitating extensive professional development. Decentralisation was found to be partially successful in promoting staff development albeit with limited integration, low staff response and uncertain expertise. Further research was considered necessary to refine the model further. It was concluded that eclectic decentralised staff development offers a model for higher education which can adequately meet the challenges facing professional development in the future.
APA, Harvard, Vancouver, ISO, and other styles
26

Davy, Simon Mark. "Decentralised economic resource allocation for computational grids." Thesis, University of Leeds, 2008. http://etheses.whiterose.ac.uk/1369/.

Full text
Abstract:
Grid computing is the concept of harnessing the power of many computational resources in a transparent manner. It is currently an active research area, with significant challenges due to the scale and level of heterogeneity involved. One of the key challenges in implementing grid systems is resource allocation. Currently, centralised approaches are employed that have limited scalability and reliability, which are key factors in achieving a usable grid system. The field of economics is the study of allocating scarce resources using economic mechanisms. Such systems can be highly scalable, robust and adaptive and as such are a potential solution to the grid allocation problem. There is also a natural fit of the economic allocation metaphor to grid systems, given the diversity and autonomy of grid resources. We propose that an economic system is a suitable mechanism for grid resource allocation. We propose a simple market mechanism to explore this idea. Our system is a fully decentralised economic allocation scheme, which aims to achieve a high degree of scalability and reliability, and easily allows resources to retain their autonomy. We implement a simulation of a grid system to analyse this system, and explore its performance and scalability, with a comparison to existing systems. We use a network to facilitate communication between participating agents, and we pay particular attention to the topology of the network between participating agents, examining the effects of different topologies on the performance of the system.
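A minimal sketch of topology-constrained, decentralised allocation: each job is offered only to the resources its submitting node can reach over the network, so no global auctioneer is needed. The topology, prices, capacities and job sizes below are invented illustrations, not the thesis's market mechanism.

```python
# Decentralised market sketch: a submitting node queries only its visible
# neighbours (the network topology) and books the cheapest resource with
# spare capacity, keeping allocation decisions local.

# Which resources each submitting node can reach.
TOPOLOGY = {
    "nodeA": ["r1", "r2"],
    "nodeB": ["r2", "r3"],
}
PRICES = {"r1": 5.0, "r2": 3.0, "r3": 4.0}   # posted price per CPU
FREE_CPUS = {"r1": 2, "r2": 1, "r3": 2}

def allocate(origin, cpus_needed):
    # Cheapest visible resource with enough spare capacity wins the job;
    # returns None if no reachable resource can host it.
    visible = [r for r in TOPOLOGY[origin] if FREE_CPUS[r] >= cpus_needed]
    if not visible:
        return None
    winner = min(visible, key=lambda r: PRICES[r])
    FREE_CPUS[winner] -= cpus_needed
    return winner
```

Because each node sees only part of the market, the topology directly shapes allocation quality, which is the effect the thesis studies empirically.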
APA, Harvard, Vancouver, ISO, and other styles
27

Mason, Richard S. "A framework for fully decentralised cycle stealing." Thesis, Queensland University of Technology, 2007. https://eprints.qut.edu.au/26039/1/Richard_Mason_Thesis.pdf.

Full text
Abstract:
Ordinary desktop computers continue to obtain ever more resources – increased processing power, memory, network speed and bandwidth – yet these resources spend much of their time underutilised. Cycle stealing frameworks harness these resources so they can be used for high-performance computing. Traditionally cycle stealing systems have used client-server based architectures which place significant limits on their ability to scale and the range of applications they can support. By applying a fully decentralised network model to cycle stealing the limits of centralised models can be overcome. Using decentralised networks in this manner presents some difficulties which have not been encountered in their previous uses. Generally decentralised applications do not require any significant fault tolerance guarantees. High-performance computing on the other hand requires very stringent guarantees to ensure correct results are obtained. Unfortunately mechanisms developed for traditional high-performance computing cannot be simply translated because of their reliance on a reliable storage mechanism. In the highly dynamic world of P2P computing this reliable storage is not available. As part of this research a fault tolerance system has been created which provides considerable reliability without the need for a persistent storage. As well as increased scalability, fully decentralised networks offer the ability for volunteers to communicate directly. This ability provides the possibility of supporting applications whose tasks require direct, message passing style communication. Previous cycle stealing systems have only supported embarrassingly parallel applications and applications with limited forms of communication so a new programming model has been developed which can support this style of communication within a cycle stealing context. In this thesis I present a fully decentralised cycle stealing framework. 
The framework addresses the problems of providing a reliable fault tolerance system and supporting direct communication between parallel tasks. The thesis includes a programming model for developing cycle stealing applications with direct inter-process communication and methods for optimising object locality on decentralised networks.
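One standard way to obtain fault tolerance without persistent storage, broadly in the spirit described above, is task replication with majority voting across volunteers. This sketch is illustrative only; the volunteers, results and voting rule are invented and simpler than the thesis's actual scheme.

```python
# Fault-tolerance sketch without persistent storage: replicate each task on
# several volunteers and accept only a strict-majority result, so a failed
# or malicious peer cannot silently corrupt the computation.
from collections import Counter

def majority_result(results):
    # results: list of (volunteer_id, value) replies for the same task.
    counts = Counter(value for _, value in results)
    value, votes = counts.most_common(1)[0]
    if votes * 2 <= len(results):
        return None              # no strict majority: re-issue the task
    return value

# Three replicas of one task; one volunteer returns a wrong answer.
replicas = [("peer1", 42), ("peer2", 42), ("peer3", 7)]
```

The re-issue path (returning `None`) matters as much as the happy path: it is what lets the system recover from churn without any reliable store of intermediate state.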
APA, Harvard, Vancouver, ISO, and other styles
28

Mason, Richard S. "A framework for fully decentralised cycle stealing." Queensland University of Technology, 2007. http://eprints.qut.edu.au/26039/.

Full text
Abstract:
Ordinary desktop computers continue to obtain ever more resources – increased processing power, memory, network speed and bandwidth – yet these resources spend much of their time underutilised. Cycle stealing frameworks harness these resources so they can be used for high-performance computing. Traditionally cycle stealing systems have used client-server based architectures which place significant limits on their ability to scale and the range of applications they can support. By applying a fully decentralised network model to cycle stealing the limits of centralised models can be overcome. Using decentralised networks in this manner presents some difficulties which have not been encountered in their previous uses. Generally decentralised applications do not require any significant fault tolerance guarantees. High-performance computing on the other hand requires very stringent guarantees to ensure correct results are obtained. Unfortunately mechanisms developed for traditional high-performance computing cannot be simply translated because of their reliance on a reliable storage mechanism. In the highly dynamic world of P2P computing this reliable storage is not available. As part of this research a fault tolerance system has been created which provides considerable reliability without the need for a persistent storage. As well as increased scalability, fully decentralised networks offer the ability for volunteers to communicate directly. This ability provides the possibility of supporting applications whose tasks require direct, message passing style communication. Previous cycle stealing systems have only supported embarrassingly parallel applications and applications with limited forms of communication so a new programming model has been developed which can support this style of communication within a cycle stealing context. In this thesis I present a fully decentralised cycle stealing framework. 
The framework addresses the problems of providing a reliable fault tolerance system and supporting direct communication between parallel tasks. The thesis includes a programming model for developing cycle stealing applications with direct inter-process communication and methods for optimising object locality on decentralised networks.
APA, Harvard, Vancouver, ISO, and other styles
29

Djamaludin, Christopher I. "Decentralised key management for delay tolerant networks." Thesis, Queensland University of Technology, 2016. https://eprints.qut.edu.au/94983/1/Christopher_Djamaludin_Thesis.pdf.

Full text
Abstract:
Public key authentication is the verification of the identity-public key binding, and is foundational to the security of any network. The contribution of this thesis has been to provide public key authentication for a decentralised and resource challenged network such as an autonomous Delay Tolerant Network (DTN). It has resulted in the development and evaluation of a combined co-localisation trust system and key distribution scheme evaluated on a realistic large geographic scale mobility model. The thesis also addresses the problem of unplanned key revocation and replacement without any central authority.
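The co-localisation trust idea can be sketched as trust that accrues with each observed encounter until a key-acceptance threshold is crossed. The per-encounter weight and the threshold below are invented; the thesis evaluates a full trust system and key distribution scheme on a large-scale mobility model.

```python
# Co-localisation trust sketch: a node accrues trust in a peer's public key
# each time it observes the peer nearby, and accepts the identity-key
# binding once accumulated trust passes a threshold, with no central
# certificate authority involved.

TRUST_PER_ENCOUNTER = 0.25   # invented weight per co-location event
ACCEPT_THRESHOLD = 1.0       # invented acceptance threshold

def update_trust(trust, peer):
    trust[peer] = trust.get(peer, 0.0) + TRUST_PER_ENCOUNTER
    return trust

def key_accepted(trust, peer):
    return trust.get(peer, 0.0) >= ACCEPT_THRESHOLD

trust = {}
for _ in range(5):           # five co-location encounters with "peerX"
    trust = update_trust(trust, "peerX")
```

Revocation in such a scheme can be modelled symmetrically, by decaying or resetting trust on negative evidence, which is the decentralised replacement for a central revocation list.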
APA, Harvard, Vancouver, ISO, and other styles
30

Novell, Recasens Marta. "Paper-based potentiometric platforms for decentralised chemical analysis." Doctoral thesis, Universitat Rovira i Virgili, 2015. http://hdl.handle.net/10803/313994.

Full text
Abstract:
During the last decades, the world has undergone deep social and technological changes. Among these, the emerging trends of decentralised analysis and sensing networks stand out, and they are having a deep impact in many areas, especially in the healthcare system. The development of tools for performing measurements outside the lab in a robust, simple and cost-effective way will be of great help in generating, for example, affordable diagnostic tools. To complement these trends, this thesis presents the development of a novel analytical tool for decentralised measurements, using paper modified with carbon nanotubes (CNTs) as a substrate and potentiometry as the detection approach. CNTs have been successfully incorporated onto a conventional filter paper, making it conductive and giving it ion-to-electron transduction capability. On this platform, ion-selective electrodes for different ions have been developed (keeping the same analytical performance as conventional electrodes), as well as a reference electrode. That this platform can solve an analytical problem has been demonstrated through the development of a complete paper-based potentiometric cell for the detection of lithium in blood. These electrodes have also been combined with a radio frequency identification (RFID) potentiometer, which allows their use in a decentralised way. Other possible applications of this platform, together with its limitations, are also discussed. All in all, this work opens the possibility of substituting conventional sensors with these low-cost paper sensors, thus unlocking a whole new range of possibilities.
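The detection principle behind such ion-selective electrodes is Nernstian: the measured EMF varies linearly with the logarithm of the ion activity. As a minimal illustration (not code from the thesis; the 200 mV standard potential and the 1 mM lithium activity are made-up numbers), the calibration and its inverse can be sketched as:

```python
import math

R, F = 8.314, 96485.0  # gas constant (J/(mol*K)) and Faraday constant (C/mol)

def nernst_emf(e0_mv, activity, z, temp_k=298.15):
    """EMF (mV) of an ideal ion-selective electrode: E = E0 + (RT/zF) ln(a)."""
    slope_mv = 1000.0 * R * temp_k / (z * F)  # ~25.7 mV per ln-unit for z = 1
    return e0_mv + slope_mv * math.log(activity)

def activity_from_emf(e0_mv, emf_mv, z, temp_k=298.15):
    """Invert the calibration line to estimate ion activity from a reading."""
    slope_mv = 1000.0 * R * temp_k / (z * F)
    return math.exp((emf_mv - e0_mv) / slope_mv)

# Hypothetical Li+ cell (z = 1) with an assumed standard potential of 200 mV
emf = nernst_emf(200.0, 1e-3, 1)          # reading at 1 mM lithium activity
recovered = activity_from_emf(200.0, emf, 1)
```

For a monovalent ion at room temperature the slope works out to roughly 59 mV per decade of activity, which is the benchmark a paper-based electrode must match to "keep the same analytical performance as conventional electrodes".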
APA, Harvard, Vancouver, ISO, and other styles
31

Speiser, Sebastian [Verfasser]. "Usage Policies for Decentralised Information Processing / Sebastian Speiser." Karlsruhe : KIT Scientific Publishing, 2013. http://www.ksp.kit.edu.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Nikiforakis, Nikolaos. "On decentralised enforcement of cooperation : an experimental investigation." Thesis, Royal Holloway, University of London, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.433012.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Fernández, Mariano. "Failure detection and isolation in decentralised multisensor systems." Thesis, University of Oxford, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.260155.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Ajayi, Oluwafemi O. "Dynamic trust negotiation for decentralised e-health collaborations." Thesis, University of Glasgow, 2009. http://theses.gla.ac.uk/848/.

Full text
Abstract:
In the Internet age, the geographical boundaries that previously impinged upon inter-organisational collaborations have become decreasingly important. Of more importance for such collaborations is the notion and nature of security and trust - this is especially so in open collaborative environments like the Grid, where resources can be made available, and subsequently accessed and used, by remote users from a multitude of institutions with a variety of different privileges spanning the collaboration. In this context, the ability to dynamically negotiate and subsequently enforce security policies driven by various levels of inter-organisational trust is essential. Numerous access control solutions exist today to address aspects of inter-organisational security. These include the use of centralised access control lists, where all collaborating partners negotiate and agree on the privileges required to access shared resources. Other solutions involve delegating aspects of access right management to trusted remote individuals who assign privileges to their (remote) users. These solutions typically entail negotiations and delegations which are constrained by organisations, people and the static rules they impose. Such constraints often result in a lack of flexibility in what has been agreed, difficulties in reaching agreement or, once agreements are established, difficulties in maintaining them. Furthermore, these solutions often reduce the autonomous capacity of collaborating organisations because of the need to satisfy collaborating partners' demands. This can result in increased security risks or reduce the granularity of security policies. Underpinning this is the issue of trust. Specifically, trust realisation between organisations, between individuals, and/or between entities or systems present in multi-domain authorities. Trust negotiation is one approach that allows and supports trust realisation.
The thesis introduces a novel model called dynamic trust negotiation (DTN) that supports n-tier negotiation hops for trust realisation in multi-domain collaborative environments with specific focus on e-Health environments. DTN describes how trust pathways can be discovered and subsequently how remote security credentials can be mapped to local security credentials through trust contracts, thereby bridging the gap that makes decentralised security policies difficult to define and enforce. Furthermore, DTN shows how n-tier negotiation hops can limit the disclosure of access control policies and how semantic issues that exist with security attributes in decentralised environments can be reduced. The thesis presents the results from the application of DTN to various clinical trials and the implementation of DTN to Virtual Organisation for Trials of Epidemiological Studies (VOTES). The thesis concludes that DTN can address the issue of realising and establishing trust between systems or agents within the e-Health domain, such as the clinical trials domain.
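The credential-mapping idea behind DTN can be conveyed with a toy sketch (hypothetical class, domain and credential names, not the thesis's implementation): each trust contract translates a neighbouring domain's credential into a local one, and chaining contracts gives the n-tier negotiation hops, with each hop seeing only its neighbour's credential.

```python
class TrustContract:
    """One hop's agreed mapping from a neighbouring domain's credentials
    to local ones (all names here are hypothetical)."""
    def __init__(self, remote_domain, mapping):
        self.remote_domain = remote_domain
        self.mapping = mapping  # remote credential -> local credential

    def translate(self, credential):
        return self.mapping.get(credential)  # None: no agreed mapping

def negotiate_access(credential, hops):
    """Translate a credential along an n-tier chain of trust contracts; each
    hop sees only its neighbour's credential, limiting policy disclosure."""
    for contract in hops:
        credential = contract.translate(credential)
        if credential is None:
            return None  # the trust pathway breaks at this hop
    return credential

# A two-hop pathway: clinical site -> trials portal -> data host
hop1 = TrustContract("site-a", {"clinician": "trial-member"})
hop2 = TrustContract("portal", {"trial-member": "read-only-records"})
granted = negotiate_access("clinician", [hop1, hop2])  # "read-only-records"
```

A credential with no mapping at some hop yields no access at all, mirroring how a trust pathway must be discoverable end-to-end before privileges are granted.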
APA, Harvard, Vancouver, ISO, and other styles
35

Taylor, Robert George. "Decentralised object location and routing in sensor networks." Thesis, University of Manchester, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.528499.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Rafaeli, Sandro. "Architecture and protocols for decentralised group key management." Thesis, Lancaster University, 2003. http://eprints.lancs.ac.uk/12293/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Ahmed, Quamar F. "Study of decentralised decision models in distributed environments." Thesis, Durham University, 1994. http://etheses.dur.ac.uk/5674/.

Full text
Abstract:
Many of today's complex systems require effective decision making within uncertain distributed environments. The central theme of the thesis is the systematic analysis of representations of decision-making organisations. The basic concept of stochastic learning automata provides a framework for modelling decision making in complex systems. Models of interactive decision making are discussed, which result from interconnecting decision makers in both synchronous and sequential configurations. Concepts and viewpoints from learning theory and game theory are used to explain the behaviour of these structures. This work is then extended by presenting a quantitative framework based on Petri net theory. This formalism provides a powerful means of capturing the information flow in the decision-making process and demonstrating the explicit interactions between decision makers. Additionally, it is used for the description and analysis of systems characterised by concurrent, asynchronous, distributed, parallel and/or stochastic activities. The thesis discusses the limitations of each modelling framework and proposes an extension to the existing methodologies by presenting a new class of Petri nets. This extension has resulted in a novel structure with the additional feature of an embedded stochastic learning automaton. An application of this approach to a realistic decision problem demonstrates the impact that an artificial intelligence technique embedded within Petri nets can have on the performance of decision models.
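As a rough illustration of the stochastic learning automata underlying such decision models (a textbook linear reward-inaction scheme, not code from the thesis; the environment's reward probabilities are invented), an automaton keeps a probability vector over its actions and reinforces whichever action the environment rewards:

```python
import random

def linear_reward_inaction(env, n_actions, alpha=0.1, steps=5000, seed=1):
    """Linear reward-inaction (L_R-I) automaton: on reward, shift probability
    mass toward the chosen action; on penalty, leave the vector unchanged."""
    rng = random.Random(seed)
    p = [1.0 / n_actions] * n_actions
    for _ in range(steps):
        a = rng.choices(range(n_actions), weights=p)[0]
        if env(a, rng):  # environment signals reward (True) or penalty (False)
            p = [q + alpha * (1.0 - q) if i == a else q * (1.0 - alpha)
                 for i, q in enumerate(p)]
    return p

# Two-action stochastic environment: action 0 is rewarded with probability
# 0.8, action 1 with probability 0.2 (purely illustrative numbers)
env = lambda a, rng: rng.random() < (0.8 if a == 0 else 0.2)
probs = linear_reward_inaction(env, 2)  # probs[0] typically approaches 1
```

Interconnecting several such automata, each treating the others as part of its random environment, is the basic setting of the synchronous and sequential configurations the abstract describes.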
APA, Harvard, Vancouver, ISO, and other styles
38

Iqbal, Yasir. "A decentralised semantic architecture for social networking platforms." Thesis, Brunel University, 2018. http://bura.brunel.ac.uk/handle/2438/17129.

Full text
Abstract:
Social networking platforms (SNPs) are complex distributed software applications that present many challenges related to data portability. Since existing platforms are proprietary in design, users cannot easily share their data with other SNPs; decentralisation of social networking platforms can provide a solution to this problem. The research and developer communities differ in how they have pursued this issue. Existing approaches to decentralisation provide limited structural detail and lack a systematic framework of design activities. There is a need for an architectural framework, based on standardised software architectural principles and technologies, to guide the design and development of decentralised social networking platforms in order to improve both data portability and interoperability. The main aim of this research is to develop an architectural solution that achieves data portability among SNPs via decentralisation. Existing proposed decentralised platforms are based on a distributed structure and mainly target a specific aspect such as access control or security and privacy. In addition, existing approaches lack practicality due to underdeveloped and non-standardised designs. To solve these issues a new architectural framework is needed that can provide design and development guidelines for decentralised social networking platforms. The goal of this thesis is to study, design and develop an architectural framework for social networking platforms that can incorporate the requirements of decentralisation, to make portability possible. The synergies between software engineering principles and social web technologies are investigated to create a standard approach. The proposed architecture is based on component-based software development (CBSD) and aspect-oriented software development (AOSD), a unified approach known as CAM (Component Aspect Model).
The foundations of the proposed architecture are based on the decentralised social networking architecture (DSNA), an architectural style derived from CAM. Components and aspects are the building blocks of the proposed decentralised social networking platform architecture. From a development perspective, each component represents a social network functionality, and aspects represent the properties and preferences used to decentralise that functionality. The model for component composition is a major challenge because the use of CAM for social networks has not been attempted before. The proposed architecture comprehensively integrates the DSNA architectural style into each architectural component. Portability among SNPs by means of decentralisation can be summarised in three steps: (1) definition of the architectural style, (2) implementation of the architectural style in components and (3) integration of the component composition. To date, component composition approaches have not been used as a way to develop social network functionality. The concept of middleware has been adapted to achieve the composition feature of the architecture: the Social Network Support Layer (SNSL) functions as middleware to facilitate component composition. Existing middleware solutions still lack integration of CBSD and AOSD concepts. This limitation is characterised by a lack of explicit guidelines for composition, a lack of a declarative specification and definition model to express component composition, and a lack of support for role allocation. This research overcomes these limitations. The application of the architecture is based on the W3C SWAT (Social Web Acid Test) scenario. A messaging application is developed to evaluate the scenario, following the Design Science Research Methodology. The architectural style is defined in the first stage of design, followed by the component-based architecture.
The architectural style is defined to guide the architecture and the component composition model. In the second stage, the design and implementation of the composition technology (that is, the SNSL) are developed according to the architectural style and the rules defined in the first stage. The refined version of the architecture is evaluated in the third stage against the W3C SWAT test. The definitive version of the proposed architecture, with benchmarked results, can be used to design and build social networking platforms, allowing users to share information and collaborate across different social networking platforms.
APA, Harvard, Vancouver, ISO, and other styles
39

Alsughayyir, Aeshah Yahya. "Energy-aware scheduling in decentralised multi-cloud systems." Thesis, University of Leicester, 2018. http://hdl.handle.net/2381/42407.

Full text
Abstract:
Cloud computing is an emerging Internet-based computing paradigm that aims to provide the many on-demand services requested nowadays by almost all online users. Although it makes extensive use of virtualised environments so that applications can be executed efficiently on low-cost hosting, it has turned energy waste and overconsumption into major concerns. Many studies have projected that the energy consumption of cloud data centres will grow significantly, reaching 35% of the total energy consumed worldwide and threatening to deepen the world's energy crisis further. Moreover, cloud infrastructure is built on a great amount of server equipment, including high performance computing (HPC) machines, and these servers are naturally prone to failures. In this thesis, we study, practically as well as theoretically, the feasibility of optimising energy consumption in multi-cloud systems. We explore a deadline-based scheduling problem in which HPC applications are executed on a heterogeneous set of clouds that are geographically distributed worldwide. We assume that these clouds participate in a federated approach. The practical part of the thesis focuses on combining two energy dimensions while scheduling HPC applications (i.e., energy consumed for execution and for data transmission). It simultaneously considers minimising application rejections and deadline violations, to support resource reliability, alongside energy optimisation. In the theoretical part, we present the first online algorithms for the non-pre-emptive scheduling of jobs with agreeable deadlines on heterogeneous parallel processors. Through our simulations and experimental analysis using real parallel workloads from large-scale systems, the results show that a considerable amount of energy can be saved by carefully scheduling cloud applications over a multi-cloud system. We have shown that our practical approaches provide promising energy savings with an acceptable level of resource reliability.
We believe that our scheduling approaches are of particular importance in relation to the main aim of green cloud computing: the necessity of increasing energy efficiency.
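A hedged illustration of the kind of deadline-driven energy reasoning involved (a classical constant-speed lower bound from the speed-scaling literature, not one of the thesis's algorithms): under earliest-deadline-first, the minimum constant processor speed that meets every deadline is set by the densest time window, and energy then grows roughly as speed^alpha times busy time.

```python
def min_constant_speed(jobs):
    """Minimum constant speed letting EDF meet every deadline:
    s* = max over windows [r, d] of (work wholly inside the window) / (d - r).
    jobs: list of (release, deadline, work) tuples."""
    best = 0.0
    for r, _, _ in jobs:
        for _, d, _ in jobs:
            if d > r:
                # total work of jobs that must run entirely within [r, d]
                work = sum(w for (ri, di, w) in jobs if ri >= r and di <= d)
                best = max(best, work / (d - r))
    return best
```

For example, a single job of 2 work units due at time 1 forces speed 2, while two such jobs with deadlines 2 and 4 can share speed 1; a scheduler that ran faster than this bound would burn extra energy for no feasibility gain.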
APA, Harvard, Vancouver, ISO, and other styles
40

Thulnoon, A. A. T. "Efficient runtime security system for decentralised distributed systems." Thesis, Liverpool John Moores University, 2018. http://researchonline.ljmu.ac.uk/9043/.

Full text
Abstract:
Distributed systems can be defined as systems that are scattered over geographical distances and provide different activities through communication, processing, data transfer and so on, thus increasing cooperation, efficiency and reliability in dealing with users and data resources jointly. For this reason, distributed systems have been shown to be a promising infrastructure for most applications in the digital world. Despite their advantages, keeping these systems secure is a complex task because of the unconventional nature of distributed systems, which can give rise to many security problems such as phishing, denial of service or eavesdropping. Therefore, adopting security and privacy policies in distributed systems will increase the trustworthiness between users and these systems. However, adding or updating security is considered one of the most challenging concerns, owing to the various security vulnerabilities which exist in distributed systems. The most significant is inserting or modifying a new security concern, or even removing one, according to the security status that may appear at runtime. These problems are exacerbated when the system adopts the multi-hop concept for transmitting and processing information. This can pose significant security challenges, especially in decentralised distributed systems where security must be furnished end-to-end. Unfortunately, existing solutions are insufficient to deal with these problems: CORBA, for example, considers only one-to-one relationships, while DSAW deals with end-to-end security but without taking into account the possibility of information sensitivity changing at runtime. This thesis proposes a mechanism for enforcing security policies and dealing with distributed systems' security weaknesses from a software perspective.
The proposed solution utilises Aspect-Oriented Programming (AOP) to address security concerns at compile time and at runtime. It is based on a decentralised distributed system that adopts the multi-hop concept to deal with different requested tasks, and it focuses on achieving high accuracy, data integrity and high efficiency in real time. This is done by modularising the most efficient security solutions, access control and cryptography, using an aspect-oriented programming language. The experimental results show that the proposed solution overcomes the shortcomings of existing solutions by fully integrating with the decentralised distributed system to achieve dynamic, highly cooperative, high-performance and end-to-end holistic security.
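Python decorators can loosely mimic the aspect-oriented weaving described above (an analogy only, with hypothetical function and role names; the thesis targets AOP languages, not this code): the access-control and integrity concerns are modularised as advice wrapped around a hop's core task.

```python
import functools
import hashlib

def access_control(required_role):
    """Aspect-like 'before' advice: refuse the call unless the user holds the role."""
    def aspect(fn):
        @functools.wraps(fn)
        def wrapper(user, *args, **kwargs):
            if required_role not in user.get("roles", ()):
                raise PermissionError(f"{user.get('name')} lacks role {required_role!r}")
            return fn(user, *args, **kwargs)
        return wrapper
    return aspect

def integrity_tag(fn):
    """Aspect-like 'after' advice: attach a SHA-256 digest so the next hop
    can verify the payload was not altered in transit."""
    @functools.wraps(fn)
    def wrapper(user, payload):
        result = fn(user, payload)
        return {"data": result, "digest": hashlib.sha256(result.encode()).hexdigest()}
    return wrapper

@access_control("operator")
@integrity_tag
def process_hop(user, payload):
    # Stand-in for whatever work a hop performs on the forwarded data
    return payload.upper()

alice = {"name": "alice", "roles": ("operator",)}
out = process_hop(alice, "reading")
```

The security logic lives entirely outside `process_hop`, so a concern can be inserted, modified or removed without touching the hop's task code, which is the modularity AOP is prized for.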
APA, Harvard, Vancouver, ISO, and other styles
41

Macarthur, Kathryn. "Multi-agent coordination for dynamic decentralised task allocation." Thesis, University of Southampton, 2011. https://eprints.soton.ac.uk/209737/.

Full text
Abstract:
Coordination of multiple agents for dynamic task allocation is an important and challenging problem, which involves deciding how to assign a set of agents to a set of tasks, both of which may change over time (i.e., it is a dynamic environment). Moreover, it is often necessary for heterogeneous agents to form teams to complete certain tasks in the environment. In these teams, agents can often complete tasks more efficiently or accurately, as a result of their synergistic abilities. In this thesis we view these dynamic task allocation problems as a multi-agent system and investigate coordination techniques for such systems. In more detail, we focus specifically on the distributed constraint optimisation problem (DCOP) formalism as our coordination technique. A DCOP consists of agents, variables and functions; the agents must work together to find the optimal configuration of variable values. Given its ubiquity, a number of decentralised algorithms for solving such problems exist, including DPOP, ADOPT, and the GDL family of algorithms. In this thesis, we examine the anatomy of the above-mentioned DCOP algorithms and highlight their shortcomings with regard to dynamic task allocation scenarios. We then explain why the max-sum algorithm (a member of the GDL family) is the most appropriate for our setting, and define specific requirements for performing multi-agent coordination in a dynamic task allocation scenario: namely, scalability, robustness, efficiency in communication, adaptiveness, solution quality, and boundedness. In particular, we present three dynamic task allocation algorithms: fast-max-sum, branch-and-bound fast-max-sum and bounded fast-max-sum, which build on the basic max-sum algorithm. The first introduces storage and decision rules at each agent to reduce the overheads incurred by re-running the algorithm every time the environment changes.
However, the overall computational complexity of fast-max-sum is exponential in the number of agents that could complete a task in the environment. Hence, in branch-and-bound fast-max-sum, we give fast-max-sum significant new capabilities: namely, an online pruning procedure that simplifies the problem, and a branch-and-bound technique that reduces the search space. This allows us to scale to problems with hundreds of tasks and agents, at the expense of additional storage. Despite this, fast-max-sum is only proven to converge to an optimal solution on instances where the underlying graph contains no cycles. In contrast, bounded fast-max-sum builds on techniques found in bounded max-sum, another extension of max-sum, to find bounded approximate solutions on arbitrary graphs. Given such a graph, bounded fast-max-sum runs our iGHS algorithm, which computes a maximum spanning tree on subsections of a graph, in order to reduce overheads when there is a change in the environment. Bounded fast-max-sum then runs fast-max-sum on this maximum spanning tree in order to find a solution. We have found that fast-max-sum reduces the size of messages communicated and the amount of computation by up to 99% compared with the original max-sum. We also found that, even in large environments, branch-and-bound fast-max-sum finds a solution using 99% less computation and up to 58% fewer messages than fast-max-sum. Finally, we found that bounded fast-max-sum reduces the communication and computation cost of bounded max-sum by up to 99%, while obtaining 60-88% of the optimal utility, at the expense of more communication than using fast-max-sum alone. Thus, fast-max-sum or branch-and-bound fast-max-sum should be used where communication is expensive and provable solution quality is not necessary, and bounded fast-max-sum where communication is less expensive and provable solution quality is required.
Now, in order to achieve such improvements over max-sum, fast-max-sum exploits a particularly expressive model of the environment by modelling tasks as function nodes in a factor graph, each of which needs some communication and computation performed on its behalf. An equivalent problem can be found in operations research, where it is known as scheduling jobs on unrelated parallel machines (also known as R||Cmax). In this thesis, we draw parallels between unrelated parallel machine scheduling and this computation distribution problem and, in so doing, present the spanning-tree decentralised task distribution algorithm (ST-DTDA), the first decentralised solution to R||Cmax. Empirical evaluation of a number of heuristics for ST-DTDA shows that the solution quality achieved is, in the best case, up to 90% of the optimal on sparse graphs, whilst worst-case quality bounds can be estimated within 5% of the solution found.
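For readers unfamiliar with R||Cmax, a minimal centralised greedy baseline (only a sketch, and not ST-DTDA) assigns each job to the machine whose load would grow the least; `proc_times[j][m]` is the hypothetical time job j takes on machine m, so the matrix is "unrelated" in that each job can be fast on one machine and slow on another.

```python
def greedy_unrelated_schedule(proc_times):
    """Greedy list scheduling for R||Cmax: place each job on the machine
    whose load would be smallest after taking it; return the assignment
    and the resulting makespan (max machine load)."""
    n_machines = len(proc_times[0])
    loads = [0.0] * n_machines
    assignment = []
    for job in proc_times:
        m = min(range(n_machines), key=lambda i: loads[i] + job[i])
        loads[m] += job[m]
        assignment.append(m)
    return assignment, max(loads)

# Two machines, three jobs; each job is cheap on a different machine
assignment, makespan = greedy_unrelated_schedule([[2, 10], [10, 2], [3, 3]])
# job 0 -> machine 0, job 1 -> machine 1, job 2 breaks the tie to machine 0
```

The decentralised challenge the thesis addresses is reaching a comparable assignment without any node seeing the full load vector, which this centralised loop takes for granted.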
APA, Harvard, Vancouver, ISO, and other styles
42

Sivakumaran, Arun. "Malicious user attacks in decentralised cognitive radio networks." Diss., University of Pretoria, 2020. http://hdl.handle.net/2263/79657.

Full text
Abstract:
Cognitive radio networks (CRNs) have emerged as a solution for the looming spectrum crunch caused by the rapid adoption of wireless devices over the previous decade. This technology enables efficient spectrum utility by dynamically reusing existing spectral bands. A CRN achieves this by requiring its users – called secondary users (SUs) – to measure and opportunistically utilise the band of a legacy broadcaster – called a primary user (PU) – in a process called spectrum sensing. Sensing requires the distribution and fusion of measurements from all SUs, which is facilitated by a variety of architectures and topologies. CRNs possessing a central computation node are called centralised networks, while CRNs composed of multiple computation nodes are called decentralised networks. While simpler to implement, centralised networks are reliant on the central node – the entire network fails if this node is compromised. In contrast, decentralised networks require more sophisticated protocols to implement, while offering greater robustness to node failure. Relay-based networks, a subset of decentralised networks, distribute the computation over a number of specialised relay nodes – little research exists on spectrum sensing using these networks. CRNs are vulnerable to unique physical layer attacks targeted at their spectrum sensing functionality. One such attack is the Byzantine attack; these attacks occur when malicious SUs (MUs) alter their sensing reports to achieve some goal (e.g. exploitation of the CRN’s resources, reduction of the CRN’s sensing performance, etc.). Mitigation strategies for Byzantine attacks vary based on the CRN’s network architecture, requiring defence algorithms to be explored for all architectures. Because of the sparse literature regarding relay-based networks, a novel algorithm – suitable for relay-based networks – is proposed in this work. 
The proposed algorithm performs joint MU detection and secure sensing by large-scale probabilistic inference of a statistical model. The proposed algorithm's development is separated into the following two parts.
• The first part involves the construction of a probabilistic graphical model representing the likelihood of all possible outcomes in the sensing process of a relay-based network. This is done by discovering the conditional dependencies present between the variables of the model. Various candidate graphical models are explored, and the mathematical description of the chosen graphical model is determined.
• The second part involves the extraction of information from the graphical model to provide utility for sensing. Marginal inference is used to enable this information extraction. Belief propagation is used to infer the developed graphical model efficiently. Sensing is performed by exchanging the intermediate belief propagation computations between the relays of the CRN.
Through a performance evaluation, the proposed algorithm was found to be resistant to probabilistic MU attacks of all frequencies and proportions. The sensing performance was highly sensitive to the placement of the relays and honest SUs, with the performance improving when the number of relays was increased. The transient behaviour of the proposed algorithm was evaluated in terms of its dynamics and computational complexity, and the algorithm's results were deemed satisfactory in this regard. Finally, an analysis of the effectiveness of the graphical model's components was conducted; a few model components accounted for most of the performance, implying that further simplifications to the proposed algorithm are possible.
Dissertation (MEng)--University of Pretoria, 2020.
Electrical, Electronic and Computer Engineering
MEng
Unrestricted
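A far simpler baseline than the belief-propagation inference described above conveys the intuition of Byzantine-resilient fusion (hypothetical names and update rule, not the proposed algorithm): weight each SU's binary report by a reputation, and shrink the reputation of users that repeatedly contradict the fused decision, so persistent malicious reporters are gradually ignored.

```python
def weighted_fusion(reports, reputations, threshold=0.5):
    """Fuse binary sensing reports (True = 'PU occupies the band'),
    weighting each secondary user's vote by its current reputation."""
    total = sum(reputations)
    score = sum(rep for report, rep in zip(reports, reputations) if report)
    return score / total >= threshold

def update_reputations(reports, decision, reputations, step=0.1):
    """Users agreeing with the fused decision gain weight in [0, 1];
    dissenters lose it, so Byzantine reporters drift toward zero influence."""
    return [min(1.0, rep + step) if report == decision else max(0.0, rep - step)
            for report, rep in zip(reports, reputations)]
```

Unlike the thesis's graphical-model approach, this scheme cannot exploit the relay topology or the statistical structure of the attack, which is precisely what the probabilistic inference buys.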
APA, Harvard, Vancouver, ISO, and other styles
43

Riley, Luke. "Decentralised coalition formation methods for multi-agent systems." Thesis, University of Liverpool, 2015. http://livrepository.liverpool.ac.uk/2012139/.

Full text
Abstract:
Coalition formation is a process whereby agents recognise that cooperation with others can occur in a mutually beneficial manner, and therefore choose appropriate temporary groups (named coalitions) to form. The benefit of each coalition can be measured by the goals it achieves, the tasks it completes, or the utility it gains. Determining the set of coalitions that should form is difficult even in centralised cooperative circumstances due to: (a) the exponential number of different possible coalitions; (b) the "super-exponential" number of possible sets of coalitions; and (c) the many ways in which the agents of a coalition can agree to distribute its gains between its members (if this gain can be transferred between the agents). The inherently distributed and potentially self-interested nature of multi-agent systems further complicates the coalition formation process. How to design decentralised coalition formation methods for multi-agent systems is a significant challenge and is the topic of this thesis. The desirable characteristics for these methods to have are (among others): (i) a balanced computational load between the agents; (ii) an optimal solution found with distributed knowledge; (iii) bounded communication costs; and (iv) the ability to form coalitions even when the agents disagree on their values. The coalition formation methods presented in this thesis implement one or more of these desirable characteristics. The contribution of this thesis begins with a decentralised dialogue game that utilises argumentation to allow agents to reason over, and come to a conclusion on, the best coalitions to form when coalitions are valued qualitatively. Next, the thesis details two decentralised algorithms that allow the agents to complete the coalition formation process in a specific coalition formation model, named characteristic function games.
The first algorithm allows the coalition value calculations to be distributed between the agents of the system in an approximately equal manner using no communication, where each agent assigned to calculate the value of a coalition is included in that coalition as a member. The second algorithm allows the agents to find one of the most stable coalition formation solutions, even though each agent has only partial knowledge of the system. The final contribution of this thesis is a new coalition formation model, which allows the agents to find the expected payoff maximising coalitions to form, when each agent may disagree on the quantitative value of each coalition. This new model introduces more risk to agents valuing a coalition higher than the other agents, and so encourages pessimistic valuations.
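For characteristic function games, one classical way to divide a coalition's transferable gain is the Shapley value; a brute-force sketch (illustrative only, not one of the thesis's algorithms) averages each agent's marginal contribution over all join orders:

```python
from itertools import permutations

def shapley_values(players, value):
    """Exact Shapley values of a characteristic function game, averaging each
    player's marginal contribution over every join order (O(n!), so only
    sensible for small games)."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            phi[p] += value(coalition | {p}) - value(coalition)
            coalition = coalition | {p}
    return {p: total / len(orders) for p, total in phi.items()}

# Toy 3-player game: any coalition of two or more agents earns 1, alone earns 0
v = lambda s: 1.0 if len(s) >= 2 else 0.0
phi = shapley_values(["a", "b", "c"], v)  # symmetric game, so 1/3 each
```

The exponential cost of evaluating every coalition is exactly difficulty (a) from the abstract, and distributing these value calculations evenly across agents is what the thesis's first algorithm targets.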
APA, Harvard, Vancouver, ISO, and other styles
44

Loukarakis, Emmanouil. "Decentralised optimisation and control in electrical power systems." Thesis, Durham University, 2016. http://etheses.dur.ac.uk/11601/.

Full text
Abstract:
Emerging smart-grid-enabling technologies will allow an unprecedented degree of observability and control at all levels in a power system. Combined with flexible demand devices (e.g. electric vehicles or various household appliances), increased distributed generation, and the potential development of small-scale distributed storage, they could allow procuring energy at minimum cost and environmental impact. That, however, presupposes real-time coordination of the demand of individual households and industries at the distribution level with generation and renewables at the transmission level. In turn, this implies the need to solve energy management problems of a much larger scale than those we solve today, which of course raises significant computational and communications challenges. The need for an answer to these problems is reflected in today's power systems literature, where a significant number of papers cover subjects such as generation and/or demand management at the transmission and/or distribution level, electric vehicle charging, the setting of voltage control devices, etc. The methods used are centralised or decentralised, handle continuous and/or discrete controls, are approximate or exact, and incorporate a wide range of problem formulations. All these papers tackle aspects of the same problem, i.e. the close-to-real-time determination of operating set-points for all controllable devices available in a power system. Yet a consensus regarding the associated formulation and time-scale of application has not been reached. Of course, given the large scale of the problem, decentralisation is unavoidably part of the solution. In this work we explore the existing and developing trends in energy management and place them into perspective through a complete framework that allows energy usage to be optimised at all levels in a power system.
APA, Harvard, Vancouver, ISO, and other styles
45

Vijjappu, Srihaarika. "Distributed Decentralised Visual SLAM for Multi-Agent Systems." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-272102.

Full text
Abstract:
A key challenge in multi-robot systems is performing distributed SLAM (Simultaneous Localisation and Mapping). The aim of this thesis is to perform visual SLAM in a decentralised manner across multiple autonomous agents while minimising the inter-agent communication bandwidth requirement. For this purpose, the Contract NET protocol, a distributed multi-agent system communication protocol, has been adapted to define the interaction between the autonomous agents [38]. The agents in this context could be mobile devices such as robots or autonomous cars. The agents communicate with one another by means of proposals and bids, wherein each agent attempts to minimise the resources it has to spend while maximising the benefit derived from interacting with another agent in the system. With this in mind, a set of rules has been defined that lets the agents reason independently, based on the state information available to them, so that they can take appropriate decisions by themselves without the need for external intervention. This thesis work is directed towards the design and execution of a visual SLAM system in a multi-agent architecture that aims to improve the accuracy of the agents' pose estimates and maximise the explored map area and the accuracy of the map copies possessed by the agents, while attempting to minimise the bandwidth required for the data exchange between them. The proposed multi-agent system has been thoroughly studied and evaluated on multiple datasets to analyse its performance with respect to bandwidth requirements, stability, consistency, scalability, map accuracy, and the usage of temporal information.
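As a toy illustration of the Contract NET interaction described in the abstract above, the round below has a manager announce a task, collect bids from the other agents, and award it to the cheapest bidder. This is a minimal sketch under stated assumptions: the names (`Agent`, `allocate`, the cost functions) are illustrative and are not code from the thesis.

```python
# Minimal sketch of a Contract NET-style task allocation round.
# All class and attribute names here are illustrative assumptions.

class Agent:
    """A contractor that bids the cost it expects to spend on a task."""

    def __init__(self, name, cost_fn):
        self.name = name
        self.cost_fn = cost_fn  # maps a task to this agent's estimated cost

    def bid(self, task):
        return self.cost_fn(task)


def allocate(task, manager, contractors):
    """Manager announces the task, collects bids, awards to the cheapest bidder."""
    bids = {a.name: a.bid(task) for a in contractors if a is not manager}
    winner = min(bids, key=bids.get)
    return winner, bids[winner]


# Example: agent "r2" expects the lowest cost for the task, so it wins.
agents = [
    Agent("r1", lambda t: 5.0),
    Agent("r2", lambda t: 2.0),
    Agent("r3", lambda t: 8.0),
]
winner, cost = allocate("explore-region-A", agents[0], agents)
```

In a SLAM setting, the cost function would weigh communication bandwidth and exploration effort against the expected map or pose-accuracy gain, so each agent maximises benefit per resource spent, as the abstract describes.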
APA, Harvard, Vancouver, ISO, and other styles
46

Capadisli, Sarven [Verfasser]. "Linked Research on the Decentralised Web / Sarven Capadisli." Bonn : Universitäts- und Landesbibliothek Bonn, 2020. http://d-nb.info/1218301589/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Fornell, Tim, and Jacob Holmberg. "Target Tracking in Decentralised Networks with Bandwidth Limitations." Thesis, Linköpings universitet, Reglerteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-152200.

Full text
Abstract:
The number and size of sensor networks, e.g., used for monitoring of public places, are steadily increasing, introducing new demands on the algorithms used to process the collected measurements. The straightforward solution is centralised fusion, where all measurements are sent to a common node where all estimation is performed. This can be shown to be optimal, but it is resource intensive, scales poorly, and is sensitive to communication and sensor node failure. The alternative is to perform decentralised fusion, where the computations are spread out in the network. Distributing the computation results in an algorithm that scales better with the size of the network and that can be more robust to hardware failure. The price of decentralisation is that it is more difficult to provide optimal estimates. Hence, a decentralised method needs to be designed to maximise scaling and robustness while minimising the performance loss. This MSc thesis studies three aspects of the design of decentralised networks: the network topology, communication schemes, and methods to fuse the estimates from different sensor nodes. Results are obtained using simulations of a network consisting of radar sensors, where the quality of the estimates is compared using the root mean square error (RMSE) and their consistency using the normalised estimation error squared (NEES). Based on the simulations, it is recommended that a 2-tree network topology be used, and that estimates be communicated throughout the network using an algorithm that allows information to propagate. This is achieved by sending information in two steps. The first step is to let the nodes send information to their neighbours with a certain frequency, after which a fusion is performed. The second step is to let the nodes indirectly forward the information they receive by sending the result of the fusion.
This second step is not performed every time information is received, but rather at an interval, e.g., every fifth time. Furthermore, three sub-optimal methods to fuse possibly correlated estimates are evaluated: Covariance Intersection, Safe Fusion, and Inverse Covariance Intersection. The outcome is a recommendation to use Inverse Covariance Intersection.
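Covariance Intersection, the first of the three fusion methods named above, can be sketched as follows. This is the generic textbook form of CI, not code from the thesis; the weight is chosen here by a simple grid search, whereas other optimisation strategies are equally valid.

```python
# Sketch of Covariance Intersection (CI) for fusing two estimates (x1, P1)
# and (x2, P2) whose cross-correlation is unknown:
#   P^-1 = w * P1^-1 + (1 - w) * P2^-1
#   x    = P @ (w * P1^-1 @ x1 + (1 - w) * P2^-1 @ x2)
# with w in [0, 1] chosen to minimise trace(P), guaranteeing a consistent
# (non-overconfident) fused estimate for any unknown correlation.
import numpy as np

def covariance_intersection(x1, P1, x2, P2, n_grid=101):
    best = None
    for w in np.linspace(0.0, 1.0, n_grid):
        info = w * np.linalg.inv(P1) + (1.0 - w) * np.linalg.inv(P2)
        P = np.linalg.inv(info)
        if best is None or np.trace(P) < best[0]:
            x = P @ (w * np.linalg.inv(P1) @ x1
                     + (1.0 - w) * np.linalg.inv(P2) @ x2)
            best = (np.trace(P), x, P)
    return best[1], best[2]
```

CI never double-counts shared information, which is exactly the hazard when fused estimates are forwarded through the network as in the two-step scheme above, at the cost of a conservative (inflated) fused covariance.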
APA, Harvard, Vancouver, ISO, and other styles
48

Guy, Amy. "The presentation of self on a decentralised Web." Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/29537.

Full text
Abstract:
Self-presentation is evolving: with digital technologies, with the Web and personal publishing, and then with the mainstream adoption of online social media. Where are we going next? One possibility is towards a world where we log and own vast amounts of data about ourselves. We choose to share, or not, this data as part of our identity and in interactions with others; it contributes to our day-to-day personhood or sense of self. I imagine a world where the individual is empowered by their digital traces (not imprisoned), but this is a complex world. This thesis examines the many factors at play when we present ourselves through Web technologies. I optimistically look to a future where control over our digital identities is not in the hands of centralised actors but our own, and both survey and contribute to the ongoing technical work which strives to make this a reality. Decentralisation changes things in unexpected ways. In the context of the bigger picture of our online selves, and building on what we already know about self-presentation from decades of Social Science research, I examine what might change as we move towards decentralisation, how people could be affected, and what the possibilities are for positive change. Finally, I explore one possible way of self-presentation on a decentralised social Web through lightweight controls which allow an audience to set their expectations in order for the subject to meet them appropriately. I seek to acknowledge the multifaceted, complicated, messy, socially-shaped nature of the self in a way that makes sense to software developers. Technology may always fall short when dealing with humanness, but the framework outlined in this thesis can provide a foundation for more easily considering all of the factors surrounding individual self-presentation in order to build future systems which empower participants.
APA, Harvard, Vancouver, ISO, and other styles
49

Zimba, Anthony Andile. "Decentralised cooperative governance in the South African metropolitan municipalities." Thesis, University of Fort Hare, 2012. http://hdl.handle.net/10353/536.

Full text
Abstract:
The study emanates from the constitutional imperatives with regard to the role of local government in community development. The notion of cooperative governance is envisaged in the South African Constitution, which stipulates that all spheres of government must adhere to the principles of cooperative government and must conduct their activities within the parameters prescribed by the Constitution. The purpose is to support and strengthen the capacity of local governments to manage their own affairs and to perform their functions. The basic values and principles governing public administration entail that it must be broadly representative of the people of South Africa in order to redress past imbalances. The existing gaps in the legislation on decision-making power at the local level of the municipality, be it in a ward committee or sub-council, have not been adequately addressed in the post-1994 democratic dispensation. It is in this context that this study seeks to address these gaps and obstacles, and to contribute to the design and development of a decentralised cooperative governance model, specifically for the six metropolitan municipalities, while also providing a basis for further research. The findings of the research could be adopted as national policy for empowering municipalities through the dispersal of democratic power, which is an essential ingredient of inclusive governance. Based on a case study of six metropolitan municipalities, the research is intended to contribute to the development of empirically grounded, practical guidelines for decentralised cooperative governance which can be adopted and institutionalised in public administration. It is believed that a study of decentralised cooperative governance adds value in that it seeks to link decentralised power and local development. Rather than civil society organisations being seen as adversarial, a creative partnership with the state in local development is crucial.
This political assimilation is critical in the construction of democracy through fusing the substantive values of a political culture with the procedural requisites of democratic accountability. It serves to fragment and disperse political power and to maintain a system of checks and balances with regard to the exercise of governmental power. The capacity for innovation, flexibility and change can be enhanced at the local level, and local decision making is commonly viewed as more democratic than central, top-down decision-making processes. A syncretistic model for local government, based on the adaptation of political and inclusive decentralisation, is outlined.
APA, Harvard, Vancouver, ISO, and other styles
50

Athanasius, Germane Information Technology & Electrical Engineering Australian Defence Force Academy UNSW. "Robust decentralised output feedback control of interconnected grid system." Awarded by: University of New South Wales - Australian Defence Force Academy, 2008. http://handle.unsw.edu.au/1959.4/39591.

Full text
Abstract:
The novel contribution of the thesis is the design and implementation of decentralised output feedback power system controllers for power oscillation damping (POD) over the entire operating regime of the power system. The POD controllers are designed for linearised models of the nonlinear power system dynamics. The linearised models are combined and treated as a parameter-varying switched system. The thesis contains novel results for the controller design, bumpless switching, and stability analysis of such switched systems. The use of switched controllers, against the present trend of having a single controller, helps to reduce conservatism and to increase the uncertainty-handling capability of the power system controller design. The minimax-LQG control design method is used for the controller design. Minimax-LQG control combines the advantages of both the LQG and H∞ control methods with respect to robustness and the inclusion of uncertainty and noise in the controller design. Also, minimax-LQG control allows the use of multiple integral quadratic constraints to bound the different types of uncertainties in the power system application. During switching between controllers, switching stability of the system is guaranteed by constraining the minimum time between two consecutive switchings. An expression is developed to compute the minimum time required between switchings, including the effect of jumps in the states. A bumpless switching scheme is used to minimise the switching transients which occur when the controllers are switched. Another contribution of the thesis is to include the effect of on-load tap-changing transformers in the power system controller design. A simplified power system model linking generator and tap-changing transformer dynamics is developed for this purpose and included in the controller design. The performance of the proposed linear controllers is validated by nonlinear computer simulations and through real-time digital simulations.
The designed controllers improve power system damping and provide uniform performance over the entire operating regime of the generator.
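The minimum-dwell-time constraint the abstract describes can be illustrated with a small supervisor that refuses a controller switch until a minimum time has elapsed since the previous one. This is a schematic sketch only: the class name is an invented illustration, and the value of `tau_min` would in practice come from the stability analysis developed in the thesis, not be an arbitrary constant.

```python
# Sketch of dwell-time-constrained controller switching: a switch between
# distinct controllers is only granted once at least tau_min time units
# have passed since the last switch, which is the mechanism the abstract
# uses to guarantee stability of the switched system.

class SwitchingSupervisor:
    def __init__(self, tau_min):
        self.tau_min = tau_min   # minimum time between consecutive switches
        self.active = None       # index of the currently active controller
        self.last_switch = None  # time of the most recent switch

    def request_switch(self, t, controller):
        """Grant the switch only if the dwell-time constraint is satisfied."""
        if self.active is None or self.active == controller:
            # First activation, or no actual change of controller.
            if self.last_switch is None:
                self.last_switch = t
            self.active = controller
            return True
        if t - self.last_switch >= self.tau_min:
            self.active = controller
            self.last_switch = t
            return True
        return False  # too soon: keep the current controller active
```

A request arriving before the dwell time has elapsed is simply held off, so the operating-point scheduler above the controllers can retry once the constraint is met.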
APA, Harvard, Vancouver, ISO, and other styles
