Dissertations / Theses on the topic 'Model-based protocol'

Consult the top 36 dissertations / theses for your research on the topic 'Model-based protocol.'

1

Blom, Johan. "Model-Based Protocol Testing in an Erlang Environment." Doctoral thesis, Uppsala universitet, Avdelningen för datorteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-279489.

Abstract:
Testing is the dominant technique for quality assurance of software systems. It typically consumes considerable resources in development projects, and is often performed in an ad hoc manner. This thesis is concerned with model-based testing, an approach that makes testing more systematic and more automated. The general idea in model-based testing is to start from a formal model that captures the intended behavior of the software system to be tested. On the basis of this model, test cases can be generated in a systematic way. Since the model is formal, the generation of test suites can be automated, and with adequate tool support one can automatically quantify to what degree they exercise the tested software. Despite the significant improvements in model-based testing in the last 20 years, acceptance by industry has so far been limited. A number of commercially available tools exist, but most testing in industry still relies on manually constructed test cases. This thesis addresses this problem by presenting a methodology and associated tool support intended for model-based testing of communication protocol implementations in industry. A major goal was to make the developed tool suitable for industrial usage, implying that we had to consider several problems that are typically not addressed in the literature on model-based testing. The thesis presents several technical contributions to the area of model-based testing, including a new specification language based on the functional programming language Erlang, a novel technique for specifying coverage criteria for test suite generation, and a technique for automatically generating test suites. Based on these developments, we have implemented a complete tool chain that generates and executes complete test suites, given a model in our specification language. The thesis also presents a substantial industrial case study in which our technical contributions and the implemented tool chain are evaluated. Findings from the case study include that test suites generated using (model) coverage criteria have at least as good fault-detection capability as equally large random test suites, and that model-based testing could discover faults in previously well-tested software where earlier testing had employed a relaxed validation of requirements.
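To make the generation idea above concrete, here is a minimal Python sketch of coverage-guided test-suite generation from a behavioral model; the toy protocol model, state names, and the transition-coverage criterion are illustrative assumptions, not the thesis's Erlang-based notation.

```python
import random

# Hypothetical protocol model: state -> list of (input, next_state) transitions.
MODEL = {
    "idle":      [("connect", "connected")],
    "connected": [("send", "connected"), ("ping", "connected"), ("close", "idle")],
}

def generate_suite(model, start="idle", max_len=6):
    """Greedily emit test cases until every transition is covered.
    Assumes every transition is reachable from the start state."""
    uncovered = {(s, i) for s, ts in model.items() for i, _ in ts}
    suite = []
    while uncovered:
        state, case = start, []
        for _ in range(max_len):
            options = model[state]
            # The coverage criterion: prefer transitions not yet exercised.
            pending = [t for t in options if (state, t[0]) in uncovered]
            inp, nxt = random.choice(pending or options)
            uncovered.discard((state, inp))
            case.append(inp)
            state = nxt
        suite.append(case)
    return suite

print(generate_suite(MODEL))
```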
2

De Wet, Nico. "Model driven communication protocol engineering and simulation based performance analysis using UML 2.0." Master's thesis, University of Cape Town, 2004. http://hdl.handle.net/11427/6392.

Abstract:
Includes bibliographical references.
The automated functional and performance analysis of communication systems specified with some Formal Description Technique (FDT) has long been a goal of telecommunication engineers. In the past, SDL and Petri nets have been the most popular FDTs for this purpose. With the growth in popularity of UML, the obvious question to ask is whether one can translate one or more UML diagrams describing a system into a performance model. Until the advent of UML 2.0 this was an impossible task, since the semantics were not clear. Even though the UML semantics are still not entirely clear for this purpose, with UML 2.0 now released and using ITU recommendation Z.109, we describe in this dissertation a methodology and tool called proSPEX (protocol Software Performance Engineering using XMI) for the design and performance analysis of communication protocols specified with UML.
3

Laxmi, Vijaya. "Trust based QOS-aware packet forwarding model for ad hoc network independent of routing protocol." Thesis, Wichita State University, 2010. http://hdl.handle.net/10057/3732.

Abstract:
The need for users to be able to set up wireless networks as and when they require has led to a boom in mobile ad hoc networks (MANETs). The constantly changing status of wireless links, mobility, and resource scarcity pose serious problems when a node in an ad hoc network must not only communicate with neighbors multiple hops away, but also demand QoS from intermediate nodes for its delay-sensitive packets. As this technology has matured, resource starvation of best-effort traffic in the presence of priority traffic has become unacceptable. Moreover, a truly seamless wireless network would be one in which intermediate nodes do not always need to support the same routing protocol in their TCP/IP stack to allow communication between the source and destination nodes. This research proposes to solve the QoS issues in a wireless ad hoc network by enriching the nodes with trust databases and a pool table that keeps records of previous interactions with all malicious and trustworthy nodes. A node can assign trust points to well-behaving nodes and deduct points for misbehaving ones. A node can therefore always consult an intermediate node's trust points and previous performance to decide whether that node can be trusted to properly forward its multimedia traffic while satisfying the QoS request. QoS favors are also returned promptly, to provide incentives for nodes to become trustworthy. The author solves the QoS issues in a MANET in a unique way, and also tries to capture the dynamism of wireless channels by using a Best Effort (BE) timer to gain the best utilization of a costly channel and to provide fairness. A Universal Packet Format is used to ensure communication between two nodes that may be separated by nodes that do not support the same routing protocol in their TCP/IP stack. Hence, this work attempts a comprehensive solution for achieving the goals of a seamless QoS-aware ad hoc network.
Thesis (M.S.)--Wichita State University, College of Engineering, Dept. of Electrical Engineering and Computer Science.
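The trust bookkeeping described above can be pictured with a small Python sketch; the reward, penalty, and threshold values, and the node names, are illustrative assumptions rather than the thesis's actual parameters.

```python
# Toy sketch: nodes accrue or lose trust points based on observed forwarding
# behaviour, and a source only routes QoS-sensitive traffic through
# neighbours whose accumulated trust exceeds a threshold.

class TrustTable:
    def __init__(self, reward=1.0, penalty=2.0, threshold=3.0):
        self.points = {}          # node id -> accumulated trust points
        self.reward, self.penalty, self.threshold = reward, penalty, threshold

    def record(self, node, forwarded_ok):
        delta = self.reward if forwarded_ok else -self.penalty
        self.points[node] = self.points.get(node, 0.0) + delta

    def trusted_next_hops(self, candidates):
        """Keep only neighbours whose history suggests they honour QoS requests."""
        return [n for n in candidates if self.points.get(n, 0.0) >= self.threshold]

table = TrustTable()
for _ in range(4):
    table.record("B", True)       # B forwarded our packets correctly
table.record("C", False)          # C dropped a priority packet
print(table.trusted_next_hops(["B", "C"]))   # ['B']
```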
4

Santiago Pinazo, Sonia. "Advanced Features in Protocol Verification: Theory, Properties, and Efficiency in Maude-NPA." Doctoral thesis, Universitat Politècnica de València, 2015. http://hdl.handle.net/10251/48527.

Abstract:
The area of formal analysis of cryptographic protocols has been an active one since the mid-1980s. The idea is to verify communication protocols that use encryption to guarantee secrecy and authentication of data to ensure security. Formal methods are used in protocol analysis to provide formal proofs of security, and to uncover bugs and security flaws that in some cases had remained unknown long after the original protocol's publication, as in the case of the well-known Needham-Schroeder Public Key (NSPK) protocol. In this thesis we tackle problems regarding the three main pillars of protocol verification: modelling capabilities, verifiable properties, and efficiency. The thesis investigates advanced features in the analysis of cryptographic protocols tailored to the Maude-NPA tool, a model checker for cryptographic protocol analysis that allows the incorporation of different equational theories and operates in the unbounded session model without the use of data or control abstraction. An important contribution of this thesis concerns theoretical aspects of protocol verification in Maude-NPA. First, we define a forwards operational semantics, using rewriting logic as the theoretical framework and the Maude programming language as tool support; this is the first time that a forwards rewriting-based semantics has been given for Maude-NPA. Second, we study the problem that arises in cryptographic protocol analysis when it is necessary to guarantee that certain terms generated during a state exploration are in normal form with respect to the protocol's equational theory. We also study techniques to extend Maude-NPA's capabilities to support the verification of a wider class of protocols and security properties. First, we present a framework to specify and verify sequential protocol compositions in which one or more child protocols make use of information obtained from running a parent protocol. Second, we present a theoretical framework to specify and verify protocol indistinguishability in Maude-NPA; such properties aim to verify that an attacker cannot distinguish between two versions of a protocol, for example one using one secret and one using another, as happens in electronic voting protocols. Finally, this thesis contributes to improving the efficiency of protocol verification in Maude-NPA. We define several techniques which drastically reduce the state space and can often yield a finite state space, so that whether the desired security property holds can in fact be decided automatically, in spite of the general undecidability of such problems.
Santiago Pinazo, S. (2015). Advanced Features in Protocol Verification: Theory, Properties, and Efficiency in Maude-NPA [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/48527
5

Nguyen, Ngo Minh Thang. "Test case generation for Symbolic Distributed System Models : Application to Trickle based IoT Protocol." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLC092.

Abstract:
Distributed systems are composed of many distant subsystems. In order to achieve a common task, subsystems communicate both with the local environment, by external messages, and with other subsystems, by internal messages sent through a communication network. In practice, distributed systems are likely to exhibit many kinds of errors, arising within the constituent subsystems or in their internal communications, so we need to test them before reaching a certain level of confidence in them. However, testing distributed systems is complicated by their intrinsic characteristics: without global clocks, subsystems cannot easily synchronize their message exchanges, which leads to non-deterministic situations. Model-Based Testing (MBT) aims at checking whether the behavior of a system under test (SUT) is consistent with its model, which specifies the expected behaviors. MBT comprises two main steps: test case generation and verdict computation. In this thesis, we are mainly interested in the generation of test cases for distributed systems. To specify the desired behaviors, we use Timed Input Output Symbolic Transition Systems (TIOSTS), together with symbolic execution techniques to derive behaviors of the distributed system. Moreover, we assume that, in addition to external messages, a local test case observes the internal messages received and sent by the co-localized subsystem. Our testing framework includes several steps: selecting a global test purpose, defined as a particular behavior exhibited by symbolic execution of the global system; projecting the global test purpose onto each subsystem to obtain local test purposes; and deriving a unitary test case per subsystem. Test execution then consists of executing the local test cases on the subsystems, computing on the fly the test data to submit according to the local test purpose and the observed data, and computing a test verdict. Finally, we apply our testing framework to a case study based on a protocol popular in the context of IoT.
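As a rough illustration of the projection step described above, the following Python sketch projects a global test purpose onto each subsystem; the event notation, subsystem names, and ownership convention are invented for the example.

```python
# A global test purpose: a sequence of events, each owned by one subsystem.
# "in?"/"out!" mark external messages, "net!"/"net?" internal ones.
GLOBAL_PURPOSE = [
    ("A", "in?start"),      # external input to subsystem A
    ("A", "net!m1->B"),     # internal message from A to B
    ("B", "net?m1<-A"),
    ("B", "out!done"),      # external output of subsystem B
]

def project(purpose, subsystem):
    """Keep, in order, only the events the subsystem can observe locally."""
    return [event for owner, event in purpose if owner == subsystem]

for sub in ("A", "B"):
    print(sub, project(GLOBAL_PURPOSE, sub))
```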
6

Kuppusamy, Lakshmi Devi. "Modelling client puzzles and denial-of-service resistant protocols." Thesis, Queensland University of Technology, 2012. https://eprints.qut.edu.au/61032/1/Lakshmi_Kuppusamy_Thesis.pdf.

Abstract:
Denial-of-service (DoS) attacks are a growing concern for networked services like the Internet. In recent years, major Internet e-commerce and government sites have been disabled by various DoS attacks. A common form of DoS attack is a resource depletion attack, in which an attacker tries to overload the server's resources, such as memory or computational power, rendering the server unable to service honest clients. A promising way to deal with this problem is for a defending server to identify and segregate malicious traffic as early as possible. Client puzzles, also known as proofs of work, have been shown to be a promising tool to thwart DoS attacks in network protocols, particularly in authentication protocols. In this thesis, we design efficient client puzzles and propose a stronger security model for analysing client puzzles. We revisit a few key establishment protocols to analyse their DoS-resilience properties and strengthen them using existing and novel techniques. Our contributions in the thesis are manifold. We propose an efficient client puzzle that enjoys its security in the standard model under new computational assumptions. Assuming the presence of powerful DoS attackers, we find a weakness in the most recent security model proposed for analysing client puzzles, and this study leads us to introduce a better security model for analysing client puzzles. We demonstrate the utility of our new security definitions by presenting two stronger hash-based client puzzles. We also show that, using stronger client puzzles, any protocol can be converted into a provably secure DoS-resilient key exchange protocol. In other contributions, we analyse the DoS-resilience properties of network protocols such as Just Fast Keying (JFK) and Transport Layer Security (TLS). In the JFK protocol, we identify a new DoS attack by applying Meadows' cost-based framework, and we prove that the original security claim of JFK does not hold. We then combine an existing technique to reduce the server cost and prove that the new variant of JFK achieves perfect forward secrecy (a property not achieved by the original JFK protocol) and is secure under the original security assumptions of JFK. Finally, we introduce a novel cost-shifting technique which significantly reduces the computation cost of the server, and we employ the technique in the most important network protocol, TLS, to analyse the security of the resulting protocol. We also observe that the cost-shifting technique can be incorporated in any Diffie-Hellman based key exchange protocol to reduce the Diffie-Hellman exponentiation cost of a party by one multiplication and one addition.
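For intuition, here is a generic hash-based client puzzle (proof of work) in Python, of the kind the thesis analyses and strengthens; the SHA-256 construction and difficulty encoding below are a textbook sketch, not the specific puzzles proposed in the thesis.

```python
import hashlib, os, itertools

def make_puzzle(difficulty_bits=16):
    """Server side: a fresh random nonce plus a difficulty parameter."""
    return os.urandom(16), difficulty_bits

def solve(nonce, difficulty_bits):
    """Client side: find a counter whose hash falls below the target.
    Expected work is about 2^difficulty_bits hash evaluations."""
    target = 1 << (256 - difficulty_bits)
    for counter in itertools.count():
        digest = hashlib.sha256(nonce + str(counter).encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return counter

def verify(nonce, difficulty_bits, solution):
    """Server side again: verification costs a single hash."""
    digest = hashlib.sha256(nonce + str(solution).encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

nonce, d = make_puzzle()
s = solve(nonce, d)
assert verify(nonce, d, s)
```

The asymmetry between `solve` and `verify` is exactly what makes puzzles useful against resource depletion: the client pays before the server commits expensive resources.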
7

Rowden, Elizabeth Szydlo. "Response to Intervention: A Case Study Documenting one Elementary School's Successful Implementation." Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/97953.

Abstract:
The use of Response to Intervention (RTI) has become more prevalent as school systems look for ways to bridge the opportunity gap and provide support to those students who are not successful in their attempts to access the general education curriculum. More research is needed in order to better understand not only how schools implement RTI, but also how they utilize data, monitor student progress, and help ensure fidelity of implementation. The purpose of this study was to examine and explain how one elementary school with a high-quality RTI program implemented Response to Intervention while keeping all three essential components in consideration. The findings demonstrate that the subject elementary school combined several elements of Response to Intervention and, in turn, created its own hybrid RTI model that utilized components from both the standard protocol model and the problem-solving model. In order to monitor student progress, universal screeners were administered several times throughout the year for both reading and math. Reading was also monitored through running records, PALS Quick Checks, Orton-Gillingham assessments, and exit tickets, whereas math utilized formative assessments, anecdotal notes, and exit tickets to track student progress. Each math and reading CLT met weekly to engage in dialogue around student data. An important finding is that the subject elementary school made RTI implementation decisions around what was best for its students, which allowed for a more flexible and adaptable approach. The system targeted individual student needs and helped ensure that ALL students had access to the supports necessary for success.
Doctor of Education
As schools continue to face increasing demands, including how to meet the needs of students with diverse academic backgrounds, they have been charged with exploring new methods of ensuring that students are successful in their attempts to access the general education curriculum. Response to Intervention, more commonly referred to as RTI, has become more widely used in school systems as they work to ensure student success for all. RTI is seen as a tool to help accurately identify students who have a learning disability (Ciolfi and Ryan, 2011); however, more research is needed in order to better understand how schools implement RTI, as well as how they utilize the data collected and monitor student progress. This qualitative case study analyzes how one subject elementary school implemented RTI, how it utilized data, and how it monitored the progress of its students.
8

Riese, Marc. "Model-based diagnosis of communication protocols." [S.l.]: [s.n.], 1993. http://library.epfl.ch/theses/?nr=1173.

9

Pinheiro, Pedro Victor Pontes. "Teste baseado em modelos para serviços RESTful usando máquinas de estados de protocolos UML." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-14072014-165410/.

Abstract:
Service-Oriented Architecture (SOA) is an architectural style consisting of a set of constraints aimed at promoting the scalability and flexibility of a system by providing its functionalities as services. In recent years, an alternative style was proposed and widely adopted, which exposes a system's functionalities as resources. This resource-oriented architectural style is called REST. In general, testing web services presents several challenges due to their distributed nature, unreliable communication channel, low coupling, and lack of a user interface. Testing RESTful web services (services that use REST) shares these same challenges and additionally requires that the REST constraints be obeyed. These challenges demand a more systematic and formal testing approach. In this context, model-based testing presents itself as a viable process for addressing those needs: the model that represents the system should be simple and yet precise enough to generate quality test cases. Based on this context, this work proposes a model-based approach to test RESTful web services. The behavioral model adopted was the UML protocol state machine, which can formalize the service interface while hiding its internal behavior. A tool was developed to automatically generate test cases using state and transition coverage criteria to traverse the model.
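As an illustration of the generation step, the sketch below walks a protocol state machine describing a hypothetical RESTful resource until every transition is covered; the resource, requests, and states are invented, and the traversal is a simplified stand-in for the tool's algorithm.

```python
# Protocol state machine for a toy /orders resource:
# state -> list of (HTTP request, next_state).
PSM = {
    "absent":  [("POST /orders", "created")],
    "created": [("GET /orders/1", "created"),
                ("PUT /orders/1", "created"),
                ("DELETE /orders/1", "absent")],
}

def transition_coverage_paths(psm, start="absent"):
    """Depth-first search; a path becomes a test case when it can no longer
    be extended with an uncovered transition."""
    covered, paths = set(), []
    def dfs(state, path):
        extended = False
        for request, nxt in psm[state]:
            edge = (state, request)
            if edge not in covered:
                covered.add(edge)
                extended = True
                dfs(nxt, path + [request])
        if not extended and path:
            paths.append(path)   # maximal path: one complete test case
    dfs(start, [])
    return paths

for case in transition_coverage_paths(PSM):
    print(" -> ".join(case))
```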
10

Ponge, Julien Nicolas (Computer Science & Engineering, Faculty of Engineering, UNSW). "Model based analysis of time-aware web services interactions." University of New South Wales, Computer Science & Engineering, 2009. http://handle.unsw.edu.au/1959.4/43525.

Abstract:
Web services are increasingly gaining acceptance as a framework for facilitating application-to-application interactions within and across enterprises. It is commonly accepted that a service description should include not only the interface, but also the business protocol supported by the service. The present work focuses on the formalization of the important category of protocols that include time-related constraints (called timed protocols), and on the impact of time on compatibility and replaceability analysis. We formalized the following timing constraints: CInvoke constraints define time windows of availability, while MInvoke constraints define expiration deadlines. We extended techniques for compatibility and replaceability analysis between timed protocols by using a semantics-preserving mapping between timed protocols and timed automata, leading to the novel class of protocol timed automata (PTA). Specifically, PTA exhibit silent transitions that cannot be removed in general, yet they are closed under complementation, making every type of compatibility or replaceability analysis decidable. Finally, we implemented our approach in the context of a larger project called ServiceMosaic, a model-driven framework for web service life-cycle management.
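The two constraint kinds can be illustrated with a short Python sketch; the message names and time values are invented, and the clock handling is simplified compared with the timed-automata semantics used in the work.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Transition:
    message: str
    window: Optional[Tuple[float, float]] = None   # CInvoke: availability window (s)
    deadline: Optional[float] = None               # MInvoke: expiry of the offer (s)

    def enabled(self, elapsed: float) -> bool:
        if self.window and not (self.window[0] <= elapsed <= self.window[1]):
            return False   # outside the CInvoke availability window
        if self.deadline is not None and elapsed > self.deadline:
            return False   # past the MInvoke expiration deadline
        return True

quote = Transition("requestQuote", window=(0.0, 3600.0))
accept = Transition("acceptQuote", deadline=86400.0)
print(quote.enabled(1800.0), accept.enabled(90000.0))   # True False
```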
11

Gorantla, Malakondayya Choudary. "Design and analysis of group key exchange protocols." Thesis, Queensland University of Technology, 2010. https://eprints.qut.edu.au/37664/1/Malakondayya_Gorantla_Thesis.pdf.

Abstract:
A group key exchange (GKE) protocol allows a set of parties to agree upon a common secret session key over a public network. In this thesis, we focus on designing efficient GKE protocols using public key techniques and appropriately revising security models for GKE protocols. For the purpose of modelling and analysing the security of GKE protocols we apply the widely accepted computational complexity approach. The contributions of the thesis to the area of GKE protocols are manifold. We propose the first GKE protocol that requires only one round of communication and is proven secure in the standard model. Our protocol is generically constructed from a key encapsulation mechanism (KEM). We also suggest an efficient KEM from the literature, which satisfies the underlying security notion, to instantiate the generic protocol. We then concentrate on enhancing the security of one-round GKE protocols. A new model of security for forward secure GKE protocols is introduced and a generic one-round GKE protocol with forward security is then presented. The security of this protocol is also proven in the standard model. We also propose an efficient forward secure encryption scheme that can be used to instantiate the generic GKE protocol. Our next contributions are to the security models of GKE protocols. We observe that the analysis of GKE protocols has not been as extensive as that of two-party key exchange protocols. Particularly, the security attribute of key compromise impersonation (KCI) resilience has so far been ignored for GKE protocols. We model the security of GKE protocols addressing KCI attacks by both outsider and insider adversaries. We then show that a few existing protocols are not secure against KCI attacks. A new proof of security for an existing GKE protocol is given under the revised model assuming random oracles. Subsequently, we treat the security of GKE protocols in the universal composability (UC) framework. We present a new UC ideal functionality for GKE protocols capturing the security attribute of contributiveness. An existing protocol with minor revisions is then shown to realize our functionality in the random oracle model. Finally, we explore the possibility of constructing GKE protocols in the attribute-based setting. We introduce the concept of attribute-based group key exchange (AB-GKE). A security model for AB-GKE and a one-round AB-GKE protocol satisfying our security notion are presented. The protocol is generically constructed from a new cryptographic primitive called encapsulation policy attribute-based KEM (EP-AB-KEM), which we introduce in this thesis. We also present a new EP-AB-KEM with a proof of security assuming generic groups and random oracles. The EP-AB-KEM can be used to instantiate our generic AB-GKE protocol.
12

Lippold, Georg. "Encryption schemes and key exchange protocols in the certificateless setting." Thesis, Queensland University of Technology, 2010. https://eprints.qut.edu.au/41697/1/Georg_Lippold_Thesis.pdf.

Abstract:
The contributions of this thesis fall into three areas of certificateless cryptography. The first area is encryption, where we propose new constructions for both identity-based and certificateless cryptography. We construct an n-out-of-n group encryption scheme for identity-based cryptography that does not require any special means to generate the keys of the participating trusted authorities. We also introduce a new security definition for chosen ciphertext secure multi-key encryption. We prove that our construction is secure as long as at least one authority is uncompromised, and show that the existing constructions for chosen ciphertext security from identity-based encryption also hold in the group encryption case. We then consider certificateless encryption as the special case of 2-out-of-2 group encryption and give constructions for highly efficient certificateless schemes in the standard model. Among these is the first construction of a lattice-based certificateless encryption scheme. Our next contribution is a highly efficient certificateless key encapsulation mechanism (KEM), which we prove secure in the standard model. We introduce a new way of proving the security of certificateless schemes that are based on identity-based schemes: we leave the identity-based part of the proof intact and extend it to cover the part introduced by the certificateless scheme. We show that our construction is more efficient than any instantiation of the generic constructions for certificateless key encapsulation in the standard model. The third area where the thesis contributes to the advancement of certificateless cryptography is key agreement. Swanson showed that many certificateless key agreement schemes are insecure if considered in a reasonable security model. We propose the first provably secure certificateless key agreement schemes in the strongest model for certificateless key agreement. We extend Swanson's definition for certificateless key agreement and give more power to the adversary. Our new schemes are secure as long as each party has at least one uncompromised secret. Our first construction is in the random oracle model and gives the adversary slightly more capabilities than our second construction in the standard model. Interestingly, our standard model construction is as efficient as the random oracle model construction.
13

Furqan, Zeeshan. "DEVELOPING STRAND SPACE BASED MODELS AND PROVING THE CORRECTNESS OF THE IEEE 802.11I AUTHENTICATION PROTOCOL WITH RESTRICTED SEC." Doctoral diss., University of Central Florida, 2007. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2864.

Abstract:
Security objectives enforce the security policy, which defines what is to be protected in a network environment, and the violation of these security objectives induces security threats. We introduce an explicit notion of security objectives for a security protocol; this notion should precede the formal verification process, since in its absence a security protocol may be proven correct despite not being equipped to defend against all potential threats. In order to establish the correctness of security objectives, we present a formal model that provides a basis for the formal verification of security protocols. We also develop modal logic, proof-based, and multi-agent approaches using the Strand Space framework. In our modal logic approach, we present logical constructs to model a protocol's behavior in such a way that the participants can verify different security parameters by looking at their own runs of the protocol. In our proof-based model, we present a generic set of proofs to establish the correctness of a security protocol. We model the 802.11i protocol in our proof-based system and then perform formal verification of the authentication property. The intruder in our model is imbued with powerful capabilities, and the repercussions of possible attacks are evaluated. Our analysis proves that the authentication of 802.11i is not compromised in the presented model; we further demonstrate how changes in our model yield a successful man-in-the-middle attack. Our multi-agent approach adds an explicit notion of multiple agents, which was missing in the Strand Space framework. The limitation of the Strand Space framework is the assumption that all the information available to a principal is either supplied initially or contained in messages received by that principal; however, other important information may also be available to a principal in a security setting, for example when a principal combines information from the different roles it plays in a protocol to launch a powerful attack. Our approach models the behavior of a distributed system as a multi-agent system, capturing the combined information, a formal model of knowledge, and the beliefs of agents over time. After building this formal model, we present a formal proof of authentication of the 4-way handshake of the 802.11i protocol.
Ph.D.
School of Electrical Engineering and Computer Science
Engineering and Computer Science
Computer Science PhD
14

Ponge, Julien. "Model based analysis of Time-aware Web service interactions." Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2008. http://tel.archives-ouvertes.fr/tel-00730187.

Abstract:
Web services are gaining importance as a framework facilitating application integration within and across enterprise boundaries. It is accepted that a service description should include not only the interface, but also the business protocol supported by the service. In this work, we formalized the category of protocols that include time constraints (called timed protocols) and studied the impact of time on compatibility and replaceability analysis. We formalized the following constraints: CInvoke constraints define availability windows, while MInvoke constraints define expiration deadlines. We extended the techniques for compatibility and replaceability analysis between timed protocols using a semantics-preserving mapping between timed protocols and timed automata, which defined the class of protocol timed automata (PTA). PTA have silent transitions that cannot be removed in general, and yet they are closed under complementation, which makes the various types of compatibility and replaceability analysis decidable. Finally, we implemented our approach within the ServiceMosaic project, a platform for managing the web service life cycle.
15

Ayaida, Marwane. "Contribution aux communications intra-véhicule et inter-véhicules." Thesis, Reims, 2012. http://www.theses.fr/2012REIMS016/document.

Abstract:
Modern vehicles are equipped with various devices that automate tasks (shift transmission, cruise control, etc.) or provide services to the user (driver assistance, obstacle detection, etc.). Communications between vehicles help to expand these services through the collaboration of several vehicles (accident prevention, traffic management, etc.). The proliferation of these devices, their interfaces, and their protocols makes data exchange more complex. In addition, inter-vehicle communication is more constrained because of the vehicles' high mobility. In this work, we propose the design of a communication channel, Connect to All (C2A), that ensures interoperability between the embedded devices in a vehicle: it detects the hot-plugging of a piece of equipment, recognizes it, and allows it to exchange data with the other connected devices. The channel design starts with a modelling step using two different techniques (the model-checking tool UPPAAL and the Specification and Description Language, SDL), and the proposed models are verified to validate the design. We then detail a concrete implementation of the channel on an embedded board that demonstrates the feasibility of the C2A interoperability concept. We also studied the effects of mobility on inter-vehicle communication through a hybrid approach mixing routing and a location-based service. This approach provides a mechanism that reduces vehicle-localization costs while increasing routing performance. Moreover, we compare two applications of this approach, Hybrid Routing and Grid Location Service (HRGLS) and Hybrid Routing and Hierarchical Location Service (HRHLS), with the original approaches to demonstrate the added value. The approach is then enriched with a mobility-prediction algorithm, which gives a better picture of vehicle movements by estimating them. Similarly, the hybrid approach with mobility prediction, Predictive Hybrid Routing and Hierarchical Location Service (PHRHLS), is compared with HRHLS and the original approach in order to reveal the benefits of mobility prediction.
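As a toy illustration of the mobility-prediction ingredient of PHRHLS, the sketch below dead-reckons a vehicle's position from its last reported state; the constant-speed, constant-heading motion model is an assumption made for the example, not the thesis's actual predictor.

```python
import math

def predict(pos, speed, heading_deg, dt):
    """Estimate position dt seconds ahead, assuming constant speed/heading,
    so a location service can answer queries without a fresh update."""
    rad = math.radians(heading_deg)
    return (pos[0] + speed * math.cos(rad) * dt,
            pos[1] + speed * math.sin(rad) * dt)

last_update = ((120.0, 40.0), 15.0, 90.0)   # position (m), 15 m/s, heading 90 deg
print(predict(*last_update, dt=10))         # ~(120.0, 190.0)
```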
16

Strengbom, Kristoffer. "Mobile Services Based Traffic Modeling." Thesis, Linköpings universitet, Matematisk statistik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-116459.

Abstract:
Traditionally, communication systems have been dominated by voice applications. Today, with the emergence of smartphones, focus has shifted towards packet-switched networks. The Internet provides a wide variety of services such as video streaming, web browsing, e-mail, etc., and IP traffic models are needed in all stages of product development, from early research to system tests. In this thesis, we propose a multi-level model of IP traffic in which the user behavior and the actual IP traffic generated by different services are considered two independent random processes. The model is based on observations of IP packet header logs from live networks, so that models can be updated to reflect the ever-changing service and end-user equipment usage. The work can thus be divided into two parts. The first part is concerned with modeling the traffic from different services. A subscriber is interested in enjoying the services provided on the Internet, and traffic modeling should reflect the characteristics of these services; an underlying assumption is that different services generate their own characteristic patterns of data. The FFT is used to analyze the packet traces. We show that the traces contain strong periodicities and that some services are more or less deterministic. For some services this strong frequency content is due to the characteristics of the cellular network, and for others it is actually a programmed behavior of the service. The periodicities indicate that there are strong correlations between individual packets or bursts of packets. The second part is concerned with the user behavior, i.e. how users access the different services in time. We propose a model based on a Markov renewal process and estimate the model parameters. In order to evaluate the model we compare it to two simpler models, using the models' ability to predict future observations as the selection criterion. We show that the proposed Markov renewal model is the best of the three models in this sense. The model selection framework can be used to evaluate future models.
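A Markov renewal process of the kind proposed for the user-behavior layer can be sketched in a few lines of Python; the service set, transition probabilities, and exponential holding times below are illustrative assumptions, not parameters estimated in the thesis.

```python
import random

NEXT = {   # P(next service | current service), invented values
    "web":   [("web", 0.5), ("video", 0.3), ("idle", 0.2)],
    "video": [("video", 0.6), ("web", 0.2), ("idle", 0.2)],
    "idle":  [("web", 0.7), ("video", 0.3)],
}
MEAN_HOLD = {"web": 30.0, "video": 120.0, "idle": 300.0}   # seconds

def simulate(horizon=3600.0, state="idle"):
    """Alternate between a random holding time in the current service and a
    Markov jump to the next service: a Markov renewal process."""
    t, trace = 0.0, []
    while t < horizon:
        hold = random.expovariate(1.0 / MEAN_HOLD[state])
        trace.append((t, state, hold))
        t += hold
        states, weights = zip(*NEXT[state])
        state = random.choices(states, weights)[0]
    return trace

for start, service, dur in simulate()[:5]:
    print(f"t={start:7.1f}s  {service:5s} for {dur:.1f}s")
```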
17

Du, Rong. "Secure electronic tendering." Thesis, Queensland University of Technology, 2007. https://eprints.qut.edu.au/16606/1/Rong_Du_Thesis.pdf.

Abstract:
Tendering is a method for entering into a sales contract. Numerous electronic tendering systems have been established with the intent of improving the efficiency of the tendering process. Although providing adequate security services is a desired feature in an e-tendering system, current e-tendering systems are usually designed with little consideration of security and legal compliance. This research focuses on designing secure protocols for e-tendering systems. It involves developing methodologies for establishing security requirements, constructing security protocols, and using formal methods in protocol security verification; the implication is that it may prove suitable for developing secure protocols in other electronic business domains. In-depth investigations are conducted into a range of issues relating to the establishment of generic security requirements for e-tendering systems. The outcomes are presented in the form of basic and advanced security requirements for the e-tendering process. This analysis shows that advanced security services are required to secure e-tender negotiation integrity and the submission process. Two generic issues discovered in the course of this research, functional difference and functional limitations, are fundamental in constructing secure protocols for the tender negotiation and submission processes: functional difference identification derives advanced security requirements, while functional limitation assessment defines how the logic of generic security mechanisms should be constructed. These principles form a proactive analysis applied prior to the construction of security protocols. Security protocols have been successfully constructed using generic cryptographic security mechanisms; these are a secure e-tender negotiation integrity protocol suite and secure e-tender submission protocols. Their security has been verified progressively during the design, and verification results show that the protocols are secure against common threat scenarios. The primary contribution of this stage is the set of procedures developed for the analysis of complex e-business protocols using formal methods. The research shows that proactive analysis has made this formal security verification possible and practical for complex protocols. These outcomes have raised awareness of security issues in e-tendering. The security solutions proposed in protocol form are the first in e-tendering with verifiable security against common threat scenarios that are also practical for implementation. The procedures developed for securing the e-tendering process are generic and can be applied to other business domains. The study has made improvements in: establishing adequate security for a business process; applying proactive analysis prior to secure protocol construction; and verifying the security of complex e-business protocols using tool-aided formal methods.
18

Du, Rong. "Secure electronic tendering." Queensland University of Technology, 2007. http://eprints.qut.edu.au/16606/.

Abstract:
Tendering is a method for entering into a sales contract. Numerous electronic tendering systems have been established with the intent of improving the efficiency of the tendering process. Although providing adequate security services is a desired feature in an e-tendering system, current e-tendering systems are usually designed with little consideration of security and legal compliance. This research focuses on designing secure protocols for e-tendering systems. It involves developing methodologies for establishing security requirements, constructing security protocols, and using formal methods in protocol security verification; the implication is that it may prove suitable for developing secure protocols in other electronic business domains. In-depth investigations are conducted into a range of issues relating to the establishment of generic security requirements for e-tendering systems. The outcomes are presented in the form of basic and advanced security requirements for the e-tendering process. This analysis shows that advanced security services are required to secure e-tender negotiation integrity and the submission process. Two generic issues discovered in the course of this research, functional difference and functional limitations, are fundamental in constructing secure protocols for the tender negotiation and submission processes: functional difference identification derives advanced security requirements, while functional limitation assessment defines how the logic of generic security mechanisms should be constructed. These principles form a proactive analysis applied prior to the construction of security protocols. Security protocols have been successfully constructed using generic cryptographic security mechanisms; these are a secure e-tender negotiation integrity protocol suite and secure e-tender submission protocols. Their security has been verified progressively during the design, and verification results show that the protocols are secure against common threat scenarios. The primary contribution of this stage is the set of procedures developed for the analysis of complex e-business protocols using formal methods. The research shows that proactive analysis has made this formal security verification possible and practical for complex protocols. These outcomes have raised awareness of security issues in e-tendering. The security solutions proposed in protocol form are the first in e-tendering with verifiable security against common threat scenarios that are also practical for implementation. The procedures developed for securing the e-tendering process are generic and can be applied to other business domains. The study has made improvements in: establishing adequate security for a business process; applying proactive analysis prior to secure protocol construction; and verifying the security of complex e-business protocols using tool-aided formal methods.
19

Oliveira, André Luiz Machado de. "Estudo de um Sistema de Telefonia sem Infraestrutura através de Modelagem e Simulação baseada em Agentes." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/100/100132/tde-28122012-092709/.

Abstract:
The technological development of wireless networks points to more intelligent network structures. One can imagine a mobile data system consisting of autonomous mobile devices that do not require any pre-established infrastructure to exchange information with one another, limited mainly by the transmission radius. Data could thus be forwarded from node to node, forming a multihop network. The absence of a central entity could also improve fault tolerance by allowing redundant paths between nodes. We analyzed the performance of such a system in different scenarios and its sensitivity to parameters such as transmission radius, interference, the number of nodes, and the maximum allowed number of hops (TTL), and tested communication strategies with fixed radius, variable radius, a minimum number of neighbors to transmit, etc., through agent-based modeling and simulation. In general, the variable-radius strategy had the best rate of received messages and the lowest average number of hops to the destination, but at the highest level of system energy. The fixed-radius strategy presented the lowest total energy expended by the system to send messages, but with a lower rate of received messages. Furthermore, we found that the main causes of packet loss are associated with increased mobility, reduced TTL, and interference, each contributing more or less according to the scenario studied.
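A compact version of such an agent-based experiment can be sketched in Python: nodes are scattered on a plane and a message is flooded hop by hop subject to a TTL. The node count, radii, and TTL are illustrative parameters, and energy accounting and the other strategies are omitted.

```python
import math, random

def reachable(nodes, src, dst, radius, ttl):
    """Breadth-first flood from src; True if dst hears the message within ttl hops."""
    frontier, heard, hops = {src}, {src}, 0
    while frontier and hops < ttl:
        nxt = set()
        for u in frontier:
            for v, pos in enumerate(nodes):
                if v not in heard and math.dist(nodes[u], pos) <= radius:
                    nxt.add(v)
        heard |= nxt
        frontier, hops = nxt, hops + 1
        if dst in heard:
            return True
    return dst in heard

random.seed(1)
nodes = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(50)]
for radius in (15, 25, 35):
    ok = sum(reachable(nodes, 0, i, radius, ttl=8) for i in range(1, 50))
    print(f"radius={radius}: reached {ok}/49 nodes")
```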
20

Hitchcock, Yvonne Roslyn. "Elliptic curve cryptography for lightweight applications." Thesis, Queensland University of Technology, 2003. https://eprints.qut.edu.au/15838/1/Yvonne_Hitchcock_Thesis.pdf.

Abstract:
Elliptic curves were first proposed as a basis for public key cryptography in the mid-1980s. They provide public key cryptosystems based on the difficulty of the elliptic curve discrete logarithm problem (ECDLP), so called because of its similarity to the discrete logarithm problem (DLP) over the integers modulo a large prime. One benefit of elliptic curve cryptosystems (ECCs) is that they can use a much shorter key length than other public key cryptosystems to provide an equivalent level of security. For example, 160-bit ECCs are believed to provide about the same level of security as 1024-bit RSA. Also, the level of security provided by an ECC increases faster with key size than for integer-based discrete logarithm (DL) or RSA cryptosystems. ECCs can also provide a faster implementation than RSA or DL systems, and use less bandwidth and power. These issues can be crucial in lightweight applications such as smart cards. In the last few years, ECCs have been included or proposed for inclusion in internationally recognized standards. Thus elliptic curve cryptography is set to become an integral part of lightweight applications in the immediate future. This thesis presents an analysis of several important issues for ECCs on lightweight devices. It begins with an introduction to elliptic curves and the algorithms required to implement an ECC. It then gives an analysis of the speed, code size, and memory usage of various possible implementation options. Enough details are presented to enable an implementer to choose for implementation those algorithms which give the greatest speed whilst conforming to the code size and RAM restrictions of a particular lightweight device. Recommendations are made for new functions to be included on coprocessors for lightweight devices to support ECC implementations. Another issue of concern for implementers is the side-channel attacks that have recently been proposed. These attacks obtain information about the cryptosystem by measuring side-channel information such as power consumption and processing time, which is then used to break implementations that have not incorporated appropriate defences. A new method of defence to protect an implementation from the simple power analysis (SPA) method of attack is presented in this thesis. It requires 44% fewer additions and 11% more doublings than the commonly recommended defence of performing a point addition in every loop of the binary scalar multiplication algorithm. The algorithm forms a contribution to the current range of possible SPA defences, with good speed but low memory usage. Another topic of paramount importance to ECCs for lightweight applications is whether the security of fixed curves is equivalent to that of random curves. Because lightweight devices are unable to generate secure random curves, fixed curves are used in such devices. These curves provide the additional advantage of requiring less bandwidth, code size, and processing time. However, it is intuitively obvious that a large precomputation to aid in breaking the elliptic curve discrete logarithm problem (ECDLP) can be made for a fixed curve, which would be unavailable for a random curve. Therefore it would appear that fixed curves are less secure than random curves, but quantifying the loss of security is much more difficult.
The thesis examines fixed-curve security taking this observation into account, and includes a definition of equivalent security and an analysis of a variation of Pollard's rho method in which computations from solutions of previous ECDLPs can be used to solve subsequent ECDLPs on the same curve. A lower bound on the expected time to solve such ECDLPs using this method is presented, as well as an approximation of the expected time remaining to solve an ECDLP when a given size of precomputation is available. It is concluded that adding a total of 11 bits to the size of a fixed curve provides an equivalent level of security compared to random curves. The final part of the thesis deals with proofs of security of key exchange protocols in the Canetti-Krawczyk proof model. This model has been used since it offers the advantage of a modular proof with reusable components. Firstly, a password-based authentication mechanism and its security proof are discussed, followed by an analysis of the use of the authentication mechanism in key exchange protocols. The Canetti-Krawczyk model is then used to examine secure tripartite (three-party) key exchange protocols. Tripartite key exchange protocols are particularly suited to ECCs because of the availability of bilinear mappings on elliptic curves, which allow more efficient tripartite key exchange protocols.
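For context, the commonly recommended SPA defence mentioned above is the double-and-add-always form of binary scalar multiplication, sketched below in Python with the group operation abstracted to modular integer addition so the control flow stays visible; a real ECC implementation would use elliptic-curve point operations instead.

```python
N = 2**255 - 19   # an arbitrary illustrative modulus

def add(p, q):    return (p + q) % N   # stand-in for point addition
def double(p):    return (2 * p) % N   # stand-in for point doubling

def scalar_mul_always(k, point):
    """Process bits MSB-first; one double and one add per bit, regardless of
    the secret bit, so the power trace does not leak the key."""
    acc, dummy = 0, 0
    for bit in bin(k)[2:]:
        acc = double(acc)
        if bit == "1":
            acc = add(acc, point)      # real addition
        else:
            dummy = add(dummy, point)  # dummy addition to equalise the trace
    return acc

assert scalar_mul_always(123456789, 42) == (123456789 * 42) % N
```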
APA, Harvard, Vancouver, ISO, and other styles
21

Hitchcock, Yvonne Roslyn. "Elliptic Curve Cryptography for Lightweight Applications." Queensland University of Technology, 2003. http://eprints.qut.edu.au/15838/.

Full text
Abstract:
Elliptic curves were first proposed as a basis for public key cryptography in the mid-1980s. They provide public key cryptosystems based on the difficulty of the elliptic curve discrete logarithm problem (ECDLP), which is so called because of its similarity to the discrete logarithm problem (DLP) over the integers modulo a large prime. One benefit of elliptic curve cryptosystems (ECCs) is that they can use a much shorter key length than other public key cryptosystems to provide an equivalent level of security. For example, 160-bit ECCs are believed to provide about the same level of security as 1024-bit RSA. Also, the level of security provided by an ECC increases faster with key size than for integer-based discrete logarithm (DL) or RSA cryptosystems. ECCs can also provide a faster implementation than RSA or DL systems, and use less bandwidth and power. These issues can be crucial in lightweight applications such as smart cards. In the last few years, ECCs have been included or proposed for inclusion in internationally recognized standards. Thus elliptic curve cryptography is set to become an integral part of lightweight applications in the immediate future. This thesis presents an analysis of several important issues for ECCs on lightweight devices. It begins with an introduction to elliptic curves and the algorithms required to implement an ECC. It then gives an analysis of the speed, code size and memory usage of various possible implementation options. Enough detail is presented to enable an implementer to choose those algorithms which give the greatest speed whilst conforming to the code size and RAM restrictions of a particular lightweight device. Recommendations are made for new functions to be included on coprocessors for lightweight devices to support ECC implementations. Another issue of concern for implementers is the side-channel attacks that have recently been proposed. These attacks obtain information about the cryptosystem by measuring side-channel information such as power consumption and processing time, which is then used to break implementations that have not incorporated appropriate defences. A new method of defence to protect an implementation from the simple power analysis (SPA) method of attack is presented in this thesis. It requires 44% fewer additions and 11% more doublings than the commonly recommended defence of performing a point addition in every loop of the binary scalar multiplication algorithm. The algorithm contributes to the current range of possible SPA defences, offering good speed with low memory usage. Another topic of paramount importance to ECCs for lightweight applications is whether the security of fixed curves is equivalent to that of random curves. Because lightweight devices cannot generate secure random curves, fixed curves are used in such devices. These curves provide the additional advantage of requiring less bandwidth, code size and processing time. However, it is intuitively obvious that a large precomputation to aid in breaking the ECDLP can be made for a fixed curve, which would be unavailable for a random curve. Therefore, it would appear that fixed curves are less secure than random curves, but quantifying the loss of security is much more difficult.
The thesis examines fixed-curve security taking this observation into account, and includes a definition of equivalent security and an analysis of a variation of Pollard's rho method in which computations from solutions of previous ECDLPs can be used to solve subsequent ECDLPs on the same curve. A lower bound on the expected time to solve such ECDLPs using this method is presented, as well as an approximation of the expected time remaining to solve an ECDLP when a given size of precomputation is available. It is concluded that adding a total of 11 bits to the size of a fixed curve provides a level of security equivalent to that of random curves. The final part of the thesis deals with proofs of security of key exchange protocols in the Canetti-Krawczyk proof model. This model was chosen since it offers the advantage of a modular proof with reusable components. First, a password-based authentication mechanism and its security proof are discussed, followed by an analysis of the use of the authentication mechanism in key exchange protocols. The Canetti-Krawczyk model is then used to examine secure tripartite (three-party) key exchange protocols. Tripartite key exchange protocols are particularly suited to ECCs because of the availability of bilinear mappings on elliptic curves, which allow more efficient tripartite key exchange protocols.
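For context on the defence discussed in this abstract: the commonly recommended countermeasure is the "double-and-add-always" form of binary scalar multiplication, which performs a (possibly dummy) point addition in every iteration so the sequence of operations does not depend on the key bits. A minimal Python sketch over a toy curve follows; the curve parameters and function names are illustrative only, and this is the baseline defence, not the thesis's improved method.

# Toy short-Weierstrass curve y^2 = x^3 + ax + b over GF(p); illustrative only.
p, a, b = 97, 2, 3
O = None  # point at infinity

def add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def scalar_mult_always_add(k, P):
    R = O
    for bit in bin(k)[2:]:
        R = add(R, R)             # doubling in every iteration
        T = add(R, P)             # addition in every iteration (dummy if bit is 0)
        R = T if bit == '1' else R
    return R

print(scalar_mult_always_add(5, (0, 10)))

Here (0, 10) lies on y^2 = x^3 + 2x + 3 over GF(97). The improved defence described in the abstract reduces the number of additions relative to this baseline at the cost of slightly more doublings.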
APA, Harvard, Vancouver, ISO, and other styles
22

Godse, Aditi. "Petri net based model for protocol damage detection and protection." 2007. http://digital.library.okstate.edu/etd/umi-okstate-2188.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Cheng, Chia-Hsun, and 鄭嘉勳. "RTL-to-TL Model Generation Based on Protocol Abstraction Techniques." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/13397534790692267303.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Electronics Engineering
Academic year 102 (2013/14)
Simulation-based verification is a fundamental methodology for validating digital designs. The ever-increasing complexity of systems arises from designs growing from simple controllers to complex Systems-on-Chip (SoCs). This complexity makes system-level Register Transfer Level (RTL) simulation too slow to keep up with the growing number of integrated RTL blocks on an SoC. This work proposes techniques to increase simulation speed by transforming designs from RTL to transaction-level (TL) models in SystemC, a standard for modeling electronic systems. Between RTL and TL the timing granularity differs, so the notion of equivalence must be redefined across abstraction levels. To achieve this abstraction while maintaining equivalence, we define a Protocol Specification Language (PSL) that lets users specify the handshaking signals and the transaction boundaries of interest in the RTL. From the RTL description and the PSL specification, a formal model, an Extended Finite State Machine (EFSM), can be extracted and simplified using formal and compiler transformation techniques. In the final code generation phase, we perform several optimizations and generate the corresponding TL SystemC simulation models. The experimental results show that the simulation speed can be increased severalfold and that the manual effort of crafting a correct untimed SystemC model is alleviated.
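To make the abstraction concrete, the following hypothetical Python sketch shows the essence of collapsing a cycle-accurate handshake into transactions once the protocol boundaries are known; the actual flow described above does this at the EFSM level in SystemC, which this toy does not attempt to reproduce.

# Hypothetical sketch: collapsing a cycle-level valid/data handshake
# into whole transactions, the kind of abstraction an RTL-to-TL flow
# performs once transaction boundaries are specified.
def rtl_cycles(words):
    # Cycle-accurate view: one (valid, data) pair per clock.
    for w in words:
        yield (0, None)   # idle cycle
        yield (1, w)      # valid cycle carrying data

def to_transactions(cycle_stream):
    # TL view: idle cycles disappear; only completed transfers remain.
    return [data for valid, data in cycle_stream if valid]

print(to_transactions(rtl_cycles([0xA, 0xB, 0xC])))  # -> [10, 11, 12]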
APA, Harvard, Vancouver, ISO, and other styles
24

Chen, Yuenfu, and 陳元甫. "MAY: A Highly Fault Tolerant Routing Protocol Based on Hypercube Model." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/76739766666882234180.

Full text
Abstract:
Master's thesis
National Chung Cheng University
Graduate Institute of Communications Engineering
Academic year 99 (2010/11)
As more and more real wireless sensor network (WSN) applications have been tested and deployed over the last decade, the WSN research community has realized that several issues need to be revisited from a practical perspective, such as reliability and availability. Wireless sensor networks suffer from resource limitations and from high failure rates caused by the unreliable nature of wireless communication and of the sensor hardware itself. The design and analysis of fault-tolerant routing schemes for WSNs has been the focus of much recent research. In this paper, we propose a hypercube-based fault-tolerant WSN routing protocol, Majestic Active Yoda (MAY), which enables a sensor to deliver packets to their destination without a routing table. The simulation results show that the proposed MAY routing protocol improves the availability and robustness of the network by using the properties of the hypercube architecture.
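The table-free property mentioned in the abstract follows from hypercube addressing: a node can route by repeatedly correcting one bit in which its address differs from the destination's, and the many possible orders of bit corrections provide the alternative paths that fault-tolerant schemes exploit. A small Python sketch of the classic dimension-order variant (not the MAY algorithm itself):

# Classic table-free hypercube routing: repeatedly flip one bit in which
# the current node's address differs from the destination's.
def hypercube_route(src, dst):
    path, cur = [src], src
    while cur != dst:
        diff = cur ^ dst
        cur ^= diff & -diff      # flip the lowest differing address bit
        path.append(cur)
    return path

print(hypercube_route(0b000, 0b101))   # -> [0, 1, 5]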
APA, Harvard, Vancouver, ISO, and other styles
25

De Wet, Nico. "Model Driven Communication Protocol Engineering and Simulation based Performance Analysis using UML 2.0." Thesis, 2005. http://pubs.cs.uct.ac.za/archive/00000199/.

Full text
Abstract:
The automated functional and performance analysis of communication systems specified with some Formal Description Technique (FDT) has long been the goal of telecommunication engineers. In the past, SDL and Petri nets have been the most popular FDTs for the purpose. With the growth in popularity of UML, the most obvious question to ask is whether one can translate one or more UML diagrams describing a system into a performance model. Until the advent of UML 2.0 that was an impossible task, since the semantics were not clear. Even though the UML semantics are still not clear for the purpose, with UML 2.0 now released, and using ITU recommendation Z.109, we describe in this dissertation a methodology and tool called proSPEX (protocol Software Performance Engineering using XMI) for the design and performance analysis of communication protocols specified with UML. Our first consideration in the development of our methodology was to identify the roles of UML 2.0 diagrams in the performance modelling process. In addition, questions regarding the specification of non-functional duration constraints, or temporal aspects, were considered. We developed a semantic time model with which the lack of means of specifying communication delay and processing times in the language is addressed. Environmental characteristics such as channel bandwidth and buffer space can be specified, and realistic assumptions are made regarding time and signal transfer. With proSPEX we aimed to integrate a commercial UML 2.0 model editing tool and a discrete-event simulation library. Such an approach has been advocated as necessary in order to develop a closer integration of performance engineering with formal design and implementation methodologies. In order to realize the integration we first identified a suitable simulation library and then extended it with features required to represent high-level SDL abstractions, such as extended finite state machines (EFSM) and signal addressing. In implementing proSPEX we filtered the XML output of our editor and used text templates for code generation. The filtering of the XML output and the need to extend our simulation library with EFSM abstractions were found to be significant implementation challenges. Lastly, in order to illustrate the utility of proSPEX we conducted a performance analysis case study in which the efficient short remote operations (ESRO) protocol is used in a wireless e-commerce scenario.
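At the core of any discrete-event simulation library of the kind proSPEX extends is a timestamp-ordered event queue. A minimal Python sketch of that core follows; the class and method names are illustrative, not proSPEX's API.

# Minimal discrete-event simulation core (illustrative sketch only).
import heapq

class Simulator:
    def __init__(self):
        self.now, self._queue, self._seq = 0.0, [], 0

    def schedule(self, delay, action):
        # The sequence number breaks ties so same-time events stay FIFO.
        heapq.heappush(self._queue, (self.now + delay, self._seq, action))
        self._seq += 1

    def run(self, until):
        while self._queue and self._queue[0][0] <= until:
            self.now, _, action = heapq.heappop(self._queue)
            action()

sim = Simulator()
sim.schedule(1.5, lambda: print(f"signal delivered at t={sim.now}"))
sim.schedule(0.5, lambda: sim.schedule(2.0, lambda: print(f"timeout at t={sim.now}")))
sim.run(until=10)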
APA, Harvard, Vancouver, ISO, and other styles
26

Liang, Yu-Tsung, and 梁裕宗. "Development of Personal Computer Based Flight Simulator With Distributed Interactive Simulator Protocol-- Development of Mathematical Model." Thesis, 1997. http://ndltd.ncl.edu.tw/handle/57883413703920173088.

Full text
Abstract:
Master's thesis
Tamkang University
Department of Mechanical Engineering
Academic year 85 (1996/97)
Most flight simulators are based on expensive workstations or other high-end computing equipment. To provide a capable yet low-cost flight simulator, an experiment using networked personal computers (PCs) for fairly sophisticated flight simulation with the DIS protocol is presented. Two Pentium PCs connected via TCP/IP or IPX over a local area network are used as a flight simulator. One of them is used as the work platform for the computation of the flight dynamics and Dead Reckoning models and the implementation of the pilot interface. The other is used as the work platform for visual effects and the DIS system. In this study, the flight dynamics model of the simulator is derived. The aircraft is assumed to be a rigid body, so its flight behavior can be described by six-degree-of-freedom (6DOF) equations of motion. A real-time numerical integration method, together with coordinate systems and an earth model compatible with the DIS protocol, is also developed. The Dead Reckoning (DR) algorithm is an important technique widely used in DIS. The purpose of DR is to reduce the updates required by each simulator on the network to better utilize the available bandwidth. Extrapolation formulas are discussed based on network communication traffic and the amount of computation performed by simulators. The smoothing method used during the data update process is also discussed. This study shows that a low-cost, efficient, high-fidelity networked PC flight simulator is feasible.
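The DR idea is compact enough to sketch: each simulator extrapolates a remote entity from its last broadcast state, and a new update is sent only when the true and extrapolated positions diverge beyond a threshold. A first-order Python illustration follows; the threshold and state values are made up.

# First-order dead reckoning of the kind used in DIS (illustrative sketch):
# extrapolate from the last broadcast state, send an update only when the
# true position drifts beyond a threshold.
def dr_extrapolate(p0, v0, dt):
    return [p + v * dt for p, v in zip(p0, v0)]

def needs_update(true_pos, p0, v0, dt, threshold=1.0):
    est = dr_extrapolate(p0, v0, dt)
    err = sum((t - e) ** 2 for t, e in zip(true_pos, est)) ** 0.5
    return err > threshold

# Last update said: position (0, 0), velocity (10, 0) m/s.
print(needs_update((5.2, 0.4), (0.0, 0.0), (10.0, 0.0), dt=0.5))  # False
print(needs_update((9.0, 3.0), (0.0, 0.0), (10.0, 0.0), dt=0.5))  # True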
APA, Harvard, Vancouver, ISO, and other styles
27

Alvarez, Charles Conceicao, and 安查爾. "New Business Model on IP (Web) based Network - Voice over Internet Protocol (VoIP) Technology (Telecommunications Industry)." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/24714424707511802749.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Kumar, Vikas. "Construction of Secure and Efficient Private Set Intersection Protocol." Thesis, 2013. http://hdl.handle.net/2005/3277.

Full text
Abstract:
Private set intersection (PSI) is a two-party protocol where both parties possess a private set and, at the end of the protocol, one party (client) learns the intersection while the other party (server) learns nothing. Motivated by some interesting practical applications, several provably secure and efficient PSI protocols have appeared in the literature in the recent past. Some of the proposed solutions are secure in the honest-but-curious (HbC) model while the others are secure in the (stronger) malicious model. Security in the latter is traditionally achieved by following the classical approach of attaching a zero-knowledge proof of knowledge (ZKPoK) and/or using the so-called cut-and-choose technique. These approaches prevent the parties from deviating from normal protocol execution, albeit with significant computational overhead and increased complexity in the security argument, which, in the case of ZKPoK, includes knowledge extraction through rewinding. We critically investigate a subset of the existing protocols. Our study reveals some interesting points about the so-called provable security guarantee of some of the proposed solutions. Surprisingly, we point out some gaps in the security arguments of several protocols. We also discuss an attack on a protocol when executed multiple times between the same client and server. The attack, in fact, indicates a limitation in the existing security definition of PSI. On the positive side, we show how to correct the security arguments for the above-mentioned protocols and show that in the HbC model their security can be based on standard computational assumptions such as RSA and the gap Diffie-Hellman problem. For one protocol, we give an improved version and prove its security in the HbC model under a standard computational assumption. For the malicious model, we construct two PSI protocols using deterministic blind signatures, i.e., Boldyreva's and Chaum's blind signatures, which do not involve ZKPoK or the cut-and-choose technique. Chaum's blind signature yields a new protocol in the RSA setting, and Boldyreva's yields a protocol in the gap Diffie-Hellman setting that is quite similar to an existing protocol but is more efficient and does not involve ZKPoK.
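The blind-signature construction can be illustrated in a few lines: the client obtains deterministic RSA (Chaum) signatures on its elements without revealing them, then matches hashes of those signatures against tags the server computes on its own set. The following Python sketch uses deliberately tiny, insecure parameters and simplified hashing purely for illustration; it is not the protocol exactly as constructed in the thesis.

# Toy sketch of PSI from Chaum (RSA) blind signatures. Both roles run in
# one function here only for readability; parameters are far too small
# for any real security.
import hashlib, math, random

p, q = 1009, 1013
N, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))

def h(x): return int.from_bytes(hashlib.sha256(x.encode()).digest(), 'big') % N
def tag(sig): return hashlib.sha256(str(sig).encode()).hexdigest()

def blind_sign_psi(client_set, server_set):
    matches = set()
    server_tags = {tag(pow(h(s), d, N)) for s in server_set}  # server side
    for item in client_set:
        r = random.randrange(2, N)
        while math.gcd(r, N) != 1: r = random.randrange(2, N)
        blinded = h(item) * pow(r, e, N) % N          # client blinds element
        signed = pow(blinded, d, N)                   # server signs blindly
        sig = signed * pow(r, -1, N) % N              # client unblinds: h(item)^d
        if tag(sig) in server_tags: matches.add(item)
    return matches

print(blind_sign_psi({"alice", "bob"}, {"bob", "carol"}))  # -> {'bob'}

Because the signature is deterministic, equal elements yield equal signature tags, which is exactly what makes the intersection computable without revealing anything else.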
APA, Harvard, Vancouver, ISO, and other styles
29

"Development of a non-steroidal aromatase inhibitor-based protocol for the control of ovarian function using a bovine model." Thesis, 2013. http://hdl.handle.net/10388/ETD-2013-06-1065.

Full text
Abstract:
Five studies were designed to characterize the effects of a non-steroidal aromatase inhibitor, letrozole, on ovarian function in cattle. The general hypothesis was that non-steroidal aromatase inhibitors have potential as a steroid-free option for the control of ovarian function for the purposes of fixed-time artificial insemination and embryo production. The specific objectives were to determine the effect of route and vehicle, type of aromatase inhibitor, and duration of aromatase inhibitor treatment (short vs prolonged) on ovarian follicles in cattle, and to test the efficacy of an aromatase inhibitor-based protocol to synchronize ovulation in cattle. In the first experiment, heifers were treated with letrozole intravenously (n=10) or intramuscularly (n=10), or allocated to iv and im control groups (n=5/group). During the second experiment, heifers were divided randomly into two groups (n=15/group) and an intravaginal device containing 1 g of letrozole or a blank device (control) was inserted. The third experiment was designed with the goal of formulating and testing an intravaginal device that provides biologically active circulating concentrations of an aromatase inhibitor for a minimum of 4 days. The biological significance of the pharmacokinetic differences between the letrozole intravaginal devices resulting from the third study was evaluated during the fourth study. A final study was designed to determine the effect of stage of the estrous cycle on the proportion of animals that ovulated and the synchrony of ovulation in heifers treated with an aromatase inhibitor-based ovulation-synchronization protocol, and to determine subsequent pregnancy outcomes. In all the studies, the effects of the aromatase inhibitor on ovarian function were assessed by transrectal ultrasound examination of the ovaries, and blood samples were collected for hormone concentration determination. Results demonstrated that the route of administration, or more precisely, the nature of the vehicle used for the administration of letrozole (intravenous, intramuscular depot, short-release intravaginal or prolonged-release intravaginal), has an impact on the effects of letrozole on hormonal profiles and ovarian dynamics. The intramuscular route appeared to provide a prolonged release of letrozole from the injection site, which had a marked effect on estradiol production, dominant follicle lifespan, and CL form and function. Letrozole treatment during the ovulatory follicle wave by means of a gel-based intravaginal releasing device during the second study resulted in more rapidly growing dominant follicles and larger ovulatory follicles, delayed ovulation (by 24 h) of a single follicle, and formation of a CL that secreted higher levels of progesterone. A wax-based vehicle allowed for a steady and continuous delivery of the active compound over the treatment period. During the third study, the addition of a letrozole-containing gel coating increased the rate of initial absorption and hastened the rise in plasma concentrations of the active ingredient, while the letrozole-containing wax-based vehicle prolonged drug delivery from the intravaginal device.
When tested in vivo during the fourth study, we confirmed that letrozole-impregnated intravaginal devices formulated with a wax base plus a gel-coat vehicle were most suitable for the application of a letrozole-based protocol for the synchronization of ovulation in cattle, since they effectively delivered elevated concentrations of letrozole and reduced estradiol production, resulting in increased follicular growth and lifespan without adversely affecting progesterone production. The application of a letrozole-impregnated intravaginal device for 4 days, combined with PGF treatment at device removal and GnRH 24 h post-device removal, increased the percentage of ovulations and the synchrony of ovulation in cattle, regardless of the stage of the estrous cycle at initiation of treatment. As observed in previous studies, the effects observed could be associated with an increase in circulating LH concentrations. However, the effects of treatment on gonadotropin concentrations are inconclusive, possibly due to inadequate sampling frequency. The impact of letrozole treatment on oocyte fertility remains unknown. The results of the five experiments support our general hypothesis that non-steroidal aromatase inhibitors have potential as a steroid-free option for the control of ovarian function in cattle. However, further research is needed in order to elucidate the effects of letrozole treatment during proestrus on oocyte competence and the fertility of the resulting ovulations in cattle.
APA, Harvard, Vancouver, ISO, and other styles
30

Plappert, H., C. Hobson-Merrett, B. Gibbons, E. Baker, S. Bevan, M. Clark, S. Creanor, et al. "Evaluation of a primary care-based collaborative care model (PARTNERS2) for people with diagnoses of schizophrenia, bipolar, or other psychoses: study protocol for a cluster randomised controlled trial." 2021. http://hdl.handle.net/10454/18577.

Full text
Abstract:
Current NHS policy encourages an integrated approach to the provision of mental and physical care for individuals with long-term mental health problems. The 'PARTNERS2' complex intervention is designed to support individuals with psychosis in a primary care setting. The trial will evaluate the clinical and cost-effectiveness of the PARTNERS2 intervention. This is a cluster randomised controlled superiority trial comparing collaborative care (PARTNERS2) with usual care, with an internal pilot to assess feasibility. The setting will be primary care within four trial recruitment areas: Birmingham & Solihull, Cornwall, Plymouth, and Somerset. GP practices are randomised 1:1 to either (a) the PARTNERS2 intervention plus modified standard care ('intervention'); or (b) standard care only ('control'). PARTNERS2 is a flexible, general practice-based, person-centred, coaching-based intervention aimed at addressing mental health, physical health, and social care needs. Two hundred eligible individuals from 39 GP practices are taking part. They were recruited through identification from secondary and primary care databases. The primary outcome is quality of life (QOL). Secondary outcomes include: mental wellbeing, time use, recovery, and process of physical care. A process evaluation will assess fidelity of intervention delivery, test hypothesised mechanisms of action, and look for unintended consequences. An economic evaluation will estimate its cost-effectiveness. Intervention delivery and follow-up have been modified during the COVID-19 pandemic. The overarching aim is to establish the clinical and cost-effectiveness of the model for adults with a diagnosis of schizophrenia, bipolar, or other types of psychoses.
PARTNERS2 is funded by the National Institute for Health Research (NIHR) under its Programme Grants for Applied Research programme (grant number: RP-PG-200625). This research was also supported by the NIHR Collaboration for Leadership in Applied Health Research and Care South West Peninsula at the Royal Devon and Exeter NHS Foundation Trust.
APA, Harvard, Vancouver, ISO, and other styles
31

Plappert, H., C. Hobson-Merrett, B. Gibbons, E. Baker, S. Bevan, M. Clark, S. Creanor, et al. "Evaluation of a primary care-based collaborative care model (PARTNERS2) for people with diagnoses of schizophrenia, bipolar, or other psychoses: study protocol for a cluster randomised controlled trial." 2021. http://hdl.handle.net/10454/18577.

Full text
Abstract:
Current NHS policy encourages an integrated approach to the provision of mental and physical care for individuals with long-term mental health problems. The 'PARTNERS2' complex intervention is designed to support individuals with psychosis in a primary care setting. The trial will evaluate the clinical and cost-effectiveness of the PARTNERS2 intervention. This is a cluster randomised controlled superiority trial comparing collaborative care (PARTNERS2) with usual care, with an internal pilot to assess feasibility. The setting will be primary care within four trial recruitment areas: Birmingham & Solihull, Cornwall, Plymouth, and Somerset. GP practices are randomised 1:1 to either (a) the PARTNERS2 intervention plus modified standard care ('intervention'); or (b) standard care only ('control'). PARTNERS2 is a flexible, general practice-based, person-centred, coaching-based intervention aimed at addressing mental health, physical health, and social care needs. Two hundred eligible individuals from 39 GP practices are taking part. They were recruited through identification from secondary and primary care databases. The primary outcome is quality of life (QOL). Secondary outcomes include: mental wellbeing, time use, recovery, and process of physical care. A process evaluation will assess fidelity of intervention delivery, test hypothesised mechanisms of action, and look for unintended consequences. An economic evaluation will estimate its cost-effectiveness. Intervention delivery and follow-up have been modified during the COVID-19 pandemic. The overarching aim is to establish the clinical and cost-effectiveness of the model for adults with a diagnosis of schizophrenia, bipolar, or other types of psychoses.
PARTNERS2 is funded by the National Institute for Health Research (NIHR) under its Programme Grants for Applied Research programme (grant number: RP-PG-200625). This research was also supported by the NIHR Collaboration for Leadership in Applied Health Research and Care South West Peninsula at the Royal Devon and Exeter NHS Foundation Trust.
APA, Harvard, Vancouver, ISO, and other styles
32

Jeffrey, Annah Mandu. "A control theoretic approach to HIV/AIDS drug dosage design and timing the initiation of therapy." Thesis, 2006. http://hdl.handle.net/2263/30379.

Full text
Abstract:
Current research on HIV therapy is diverse and multi-disciplinary. Engineers, however, were late in joining the research movement, and as such, engineering literature related to HIV chemotherapy is limited. Control engineers in particular should have risen to the challenge, as it is apparent that HIV chemotherapy and control engineering have a lot in common. From a control-theoretic point of view, HIV chemotherapy is control of a time-varying nonlinear dynamical system with constrained controls. Once a suitable model has been developed or identified, control system theoretical concepts and design principles can be applied. The adopted control approach or strategy depends primarily on the control objectives, performance specifications and the control constraints. In principle, the designed control system can then be validated with clinical data. Obtaining measurements of the controlled variables, however, has the potential to hinder effective control. The first part of this research focused on the application of control system analytical tools to HIV/AIDS models. The intention was to gain some insights into the HIV infection dynamics from a control-theoretic perspective. The issues that needed to be addressed were: persistent virus replication under potent HAART, variability in response to therapy between individuals on the same regimen, transient rebounds of plasma viremia after periods of suppression, and the attainment, or lack thereof, of maximal and durable suppression of the viral load. The questions to answer were: when are the above-mentioned responses to therapy most likely to occur as the HIV infection progresses, and does attaining one necessarily imply the other? Furthermore, the prognostic markers of virologic success, the possibility of individualizing therapy, and the timing of the initiation of antiretroviral therapy such that the benefits of therapy are maximized were also investigated. The primary objective of this thesis was to analyze models for the eventual control of the HIV infection. HIV therapy has multiple and often conflicting objectives, and these objectives had to be prioritized. The intention of the proposed control strategy was to produce practical solutions to the current antiretroviral problems. To this end, the second part of the research focused on addressing the HIV/AIDS control issues of sampling for effective control, given the invasive nature of drawing blood from a patient, and the derivation of drug dosage sequences to strike a balance between maximal suppression and toxicity reduction when multiple drugs are concomitantly used to treat the infection.
Thesis (PhD)--University of Pretoria, 2006.
Electrical, Electronic and Computer Engineering
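Much control-oriented HIV work starts from the basic three-compartment model of healthy cells, infected cells, and free virus, with drug efficacy entering as a bounded control input. A Python sketch follows; the parameter values are standard illustrative choices, not taken from this thesis.

# Basic three-state HIV model with drug efficacy u in [0, 1] as the
# control input (illustrative parameters, Euler integration).
def hiv_step(T, I, V, u, dt=0.01):
    s, d, beta, mu, k, c = 10.0, 0.01, 2.4e-5, 0.24, 100.0, 2.4
    dT = s - d * T - (1 - u) * beta * T * V   # healthy CD4+ T cells
    dI = (1 - u) * beta * T * V - mu * I      # infected cells
    dV = k * I - c * V                        # free virus particles
    return T + dT * dt, I + dI * dt, V + dV * dt

T, I, V = 1000.0, 0.0, 1e-3          # seed a tiny amount of virus
for _ in range(20000):               # simulate 200 days in 0.01-day steps
    T, I, V = hiv_step(T, I, V, u=0.0)   # u=0: no drug; u near 1: potent therapy
print(f"day 200 (untreated): T={T:.0f}, I={I:.1f}, V={V:.0f}")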
APA, Harvard, Vancouver, ISO, and other styles
33

Shabut, Antesar R. M., Keshav P. Dahal, and Irfan U. Awan. "Friendship based trust model to secure routing protocols in mobile Ad Hoc networks." 2014. http://hdl.handle.net/10454/10787.

Full text
Abstract:
Trust management in mobile ad hoc networks (MANETs) has become a significant issue in securing routing protocols so that reliable and trusted paths can be chosen. Trust is used to cope with node defection and to stimulate nodes to cooperate. However, trust is a highly complex concept because of the subjective nature of trustworthiness, and it has several social properties due to its social origins. In this paper, a friendship-based trust model is proposed for MANETs to secure routing from source to destination, in which multiple social degrees of friendship are introduced to represent the degree of a node's trustworthiness. The model treats the behaviour of nodes as a human pattern to reflect the complexity of trust subjectivity and differing views. More importantly, the model considers the dynamic differentiation of friendship degree over time, and utilises both direct and indirect friendship-based trust information. The model overcomes the limitation of neglecting the social behaviours of nodes when evaluating trustworthiness. The empirical analysis shows the greater robustness and accuracy of the trust model in a dynamic MANET environment.
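Such models generally blend first-hand observations with recommendations from friends, discounting stale evidence. The following Python sketch shows that general pattern; the weights, decay rate, and function names are hypothetical choices, not the paper's actual formulas.

# Hypothetical sketch of the direct-plus-indirect trust pattern:
# blend own observations with friends' recommendations, discounting
# older evidence. All constants are illustrative.
import math, time

def direct_trust(observations, now, decay=0.01):
    # observations: list of (timestamp, outcome), outcome 1 = cooperated.
    if not observations: return 0.5           # neutral prior
    w = [(math.exp(-decay * (now - t)), o) for t, o in observations]
    return sum(weight * o for weight, o in w) / sum(weight for weight, _ in w)

def combined_trust(own_obs, recommendations, now, alpha=0.7):
    # alpha weights first-hand evidence over friends' reports.
    dt = direct_trust(own_obs, now)
    it = sum(recommendations) / len(recommendations) if recommendations else 0.5
    return alpha * dt + (1 - alpha) * it

now = time.time()
obs = [(now - 60, 1), (now - 10, 0)]
print(combined_trust(obs, recommendations=[0.8, 0.6], now=now))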
APA, Harvard, Vancouver, ISO, and other styles
34

Cheng, Zhengang. "Verifying commitment based business protocols and their compositions: model checking using promela and spin." 2006. http://www.lib.ncsu.edu/theses/available/etd-08092006-005135/unrestricted/etd.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Lin, Yi-Hui, and 林宜慧. "User Efficient Authentication Protocols with Provable Security Based on Standard Reduction and Model Checking." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/73312092348039907525.

Full text
Abstract:
Doctoral dissertation
National Sun Yat-sen University
Department of Computer Science and Engineering
Academic year 101 (2012/13)
Authentication protocols are used by two parties to authenticate each other and establish a secure channel over public wired or wireless links. However, the present standard authentication protocols are either insufficiently secure or inefficient for lightweight devices. Therefore, we propose two authentication protocols for improving security and user efficiency in wired and wireless environments, respectively. Traditionally, TLS/SSL is the standard authentication and key exchange protocol on the wired Internet. It is known that the security of TLS/SSL is insufficient due to all sorts of client-side attacks. To strengthen client-side security, multi-factor authentication is an effective solution. However, this solution raises the issue of biometric privacy, a public concern about revealing biometric data to an authentication server. Therefore, we propose a truly three-factor authentication protocol, where the authentication server can verify users' biometric data without knowledge of their templates and samples. Among the major wireless technologies, the Extensible Authentication Protocol (EAP) is an authentication framework widely used in IEEE 802.11 WLANs. Authentication mechanisms built on EAP are called EAP methods. The requirements for EAP methods in WLAN authentication have been defined in RFC 4017. To achieve user efficiency and robust security, lightweight computation and forward secrecy, both excluded from RFC 4017, are desirable in WLAN authentication. However, no EAP method or authentication protocol designed for WLANs so far satisfies all of the above properties. We present a complete EAP method that utilizes stored secrets and passwords to verify users so that it can (1) meet the requirements of RFC 4017, (2) provide lightweight computation, and (3) allow for forward secrecy. To analyze our proposed protocols thoroughly, we apply two different models to examine their security properties: Bellare's model, a standard-reduction approach in the computational model that reduces security properties to computationally hard problems, and the OFMC/AVISPA tool, a model-checking approach in the formal model that uses a search tree to systematically find weaknesses in a protocol. By adopting both Bellare's model and the OFMC/AVISPA tool, the security of our work is firmly established.
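Forward secrecy in a key exchange protocol typically comes from mixing an ephemeral Diffie-Hellman shared secret into the session key derivation, so that later theft of the stored long-term secret cannot expose past sessions. A toy Python sketch of that mechanism follows; the group, names, and derivation are illustrative, not the proposed EAP method.

# Sketch of where forward secrecy can come from: an ephemeral
# Diffie-Hellman exchange mixed with the stored secret in key
# derivation. Toy group for illustration only; real methods use
# standardized groups plus mutual authentication.
import hashlib, secrets

P = 2**127 - 1   # toy prime modulus (Mersenne prime), not for real use
G = 3

def session_key(stored_secret: bytes) -> str:
    a = secrets.randbelow(P - 2) + 1         # peer's ephemeral secret
    b = secrets.randbelow(P - 2) + 1         # server's ephemeral secret
    A, B = pow(G, a, P), pow(G, b, P)        # public values exchanged in the clear
    shared = pow(B, a, P)                    # equals pow(A, b, P)
    # Later theft of stored_secret alone cannot reconstruct `shared`,
    # so previously derived session keys remain secure.
    return hashlib.sha256(shared.to_bytes(16, "big") + stored_secret).hexdigest()

print(session_key(b"stored-long-term-secret")[:16])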
APA, Harvard, Vancouver, ISO, and other styles
36

Martins, Miguel Antonio Rodrigues Lopes. "Geração Automática de Código Fonte a Partir de Modelos Formais." Master's thesis, 2013. http://hdl.handle.net/10316/35590.

Full text
Abstract:
Master's dissertation in Informatics Engineering presented to the Faculty of Sciences and Technology of the University of Coimbra
The work presented in this document relates to an area of computer science described as the generation of source code from the specification of a formal model, which shall be referred to as "Code Generation from Formal Models". This area is associated with two background areas, one of which is "Model Checking". Model checking, as described by Edmund M. Clarke et al. [1], is "a technique for verifying finite state concurrent systems such as sequential circuit designs and communication protocols". This technique is appropriate for distributed and concurrent systems, since it aids developers in minimizing certain types of risks, such as the possibility that a deadlock will occur in the system at some point in time, preventing further progress, or the occurrence of a race condition. Given that only a model of a system is verified, but not the system itself, it naturally follows that it would be useful to generate source code from the model specification. This work thus encompasses the generation of source code from PROMELA models verified by the Spin model checker. In its essence, this project attempts to answer the following question: is it possible to generate runnable source code from PROMELA models related to round-based consensus protocols? The short answer is "yes, with limitations".
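The core idea, generating executable code from a verified state-machine model, can be miniaturized as follows; here a Python dict stands in for a PROMELA proctype, which is a deliberate simplification of the dissertation's actual generator.

# Miniature illustration: a verified transition system (a dict standing
# in for a PROMELA proctype) drives generation of an executable handler.
MODEL = {  # state -> {message: next_state}
    "idle":    {"propose": "voting"},
    "voting":  {"ack": "decided", "nack": "idle"},
    "decided": {},
}

def generate_handler(model, start):
    def handler(messages):
        state = start
        for msg in messages:
            if msg not in model[state]:
                raise ValueError(f"unexpected {msg!r} in state {state!r}")
            state = model[state][msg]
        return state
    return handler

run_round = generate_handler(MODEL, "idle")
print(run_round(["propose", "nack", "propose", "ack"]))  # -> decided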
APA, Harvard, Vancouver, ISO, and other styles