Doctoral dissertations on the topic "Metric Security"

Create an accurate citation in APA, MLA, Chicago, Harvard, and many other styles.

Consult the top 50 doctoral dissertations on the topic "Metric Security".

An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically generate a bibliographic citation for the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a ".pdf" file and read its abstract online, where these are available in the work's metadata.

Browse doctoral dissertations from a wide variety of disciplines and compile an accurate bibliography.

1

Khan, Moazzam. "Security metric based risk assessment". Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47527.

Full text of the source
Abstract:
Modern computer networks have become very complex, and attackers have benefited from this complexity, finding vulnerabilities and loopholes in the network architecture. In order to identify attacks, all aspects of the network architecture need to be carefully examined, such as packet headers, network scans, application versions, network anomalies, etc., and after this examination the attributes with a significant impact on the security posture of the organization need to be highlighted so that resources and efforts are directed toward those attributes. In this work we look extensively at network traffic on the dormitory network of a large campus and try to identify the attributes that play a significant role in the infection of a machine. Our scheme is to collect as many attributes as possible from the network traffic, applying a heuristic of network infection, and then devise a scheme called decision-centric rank ordering of security metrics, which prioritizes the security metrics so that network administrators can channel their efforts in the right direction. Another aspect of this research is to identify the probability of an attack on a communication infrastructure. A communication infrastructure becomes prone to attack if certain elements exist in it, such as vulnerabilities in the comprising elements of the system, the existence of an attacker, and a motivation to attack. The focus of this study is on vulnerability assessment and security metrics such as user behavior, operating systems, user applications, and software updates. To achieve a quantified value of risk, a set of machines is carefully observed for the security metrics. Statistical analysis is applied to the data collected from compromised machines, and a quantified value of risk is obtained.
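The abstract does not spell out the decision-centric rank ordering scheme, but the core idea (score each observed attribute by how strongly it is associated with machine infection, then direct effort toward the top-ranked attributes) can be sketched as follows. The attribute names, data, and association score below are illustrative assumptions, not the thesis's actual method.

```python
# Hypothetical sketch: rank security metrics (attributes) by how strongly
# each is associated with machine infection. Data is invented.

def rank_metrics(observations, infected):
    """observations: list of dicts {attribute: 0/1}; infected: list of 0/1."""
    scores = {}
    for attr in observations[0]:
        with_attr = [inf for obs, inf in zip(observations, infected) if obs[attr]]
        without = [inf for obs, inf in zip(observations, infected) if not obs[attr]]
        p1 = sum(with_attr) / len(with_attr) if with_attr else 0.0
        p0 = sum(without) / len(without) if without else 0.0
        scores[attr] = abs(p1 - p0)  # difference in infection rate
    # Highest-impact attributes first, so administrators' effort goes there.
    return sorted(scores, key=scores.get, reverse=True)

obs = [
    {"outdated_os": 1, "p2p_traffic": 1}, {"outdated_os": 1, "p2p_traffic": 0},
    {"outdated_os": 0, "p2p_traffic": 1}, {"outdated_os": 0, "p2p_traffic": 0},
]
inf = [1, 1, 0, 0]
print(rank_metrics(obs, inf))  # ['outdated_os', 'p2p_traffic']
```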
APA, Harvard, Vancouver, ISO, and other styles.
2

Owusu-Kesseh, Daniel. "The Relative Security Metric of Information Systems: Using AIMD Algorithms". University of Cincinnati / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1462278857.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles.
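No abstract is reproduced for this entry, but AIMD (additive increase, multiplicative decrease), named in the title, is a well-known control scheme, most familiar from TCP congestion control. A purely illustrative sketch of applying it to a relative security score, growing slowly while a system stays incident-free and dropping sharply on each incident, might look like this; the parameters and interpretation are assumptions, not taken from the thesis.

```python
def aimd_security_metric(events, alpha=1.0, beta=0.5, start=0.0):
    """events: iterable of booleans, True = security incident in that period.
    The score grows by `alpha` each clean period (additive increase)
    and is multiplied by `beta` on each incident (multiplicative decrease)."""
    score = start
    history = []
    for incident in events:
        score = score * beta if incident else score + alpha
        history.append(score)
    return history

# Two clean periods, an incident, then recovery.
history = aimd_security_metric([False, False, True, False])
print(history)  # [1.0, 2.0, 1.0, 2.0]
```

The asymmetry is the point of AIMD: trust is earned slowly and lost quickly, so a single incident has a larger effect than a single clean period.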
3

Karabey, Bugra. "Attack Tree Based Information Technology Security Metric Integrating Enterprise Objectives With Vulnerabilities". PhD thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12614100/index.pdf.

Full text of the source
Abstract:
Security is one of the key concerns in the domain of information technology systems. Maintaining the confidentiality, integrity, and availability of such systems mandates a rigorous prior analysis of the security risks that confront them. In order to analyze, mitigate, and recover from these risks, a metrics-based methodology is essential for prioritizing response strategies and for scheduling the resources allocated to mitigation. In addition, the Enterprise Objectives must be integrated into the definition, impact-calculation, and prioritization stages of this analysis to produce metrics that are useful to both the technical and managerial communities within an organization. This inclusion also acts as a preliminary filter that mitigates the real-life scalability issues inherent in such threat-modeling efforts. This study uses an attack-tree-based approach to offer an IT security risk evaluation method and metric called TEOREM (Tree-based Enterprise Objectives Risk Evaluation Method and Metric) that integrates the Enterprise Objectives with the vulnerability analysis of information assets within an organization. The applicability of the method has been analyzed in a real-life setting, and the findings are discussed as well.
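The abstract does not reproduce TEOREM itself, but attack trees are conventionally evaluated bottom-up: leaf vulnerabilities carry success probabilities, an AND gate requires all children to succeed, and an OR gate requires any one. A minimal sketch, with invented probabilities and assuming independent events:

```python
# Minimal attack-tree evaluation. Tree shape and numbers are illustrative.

def evaluate(node):
    if "prob" in node:                      # leaf vulnerability
        return node["prob"]
    child_probs = [evaluate(c) for c in node["children"]]
    if node["gate"] == "AND":               # all children must succeed
        p = 1.0
        for cp in child_probs:
            p *= cp
        return p
    # OR gate: 1 minus the probability that every child attack fails
    p_fail = 1.0
    for cp in child_probs:
        p_fail *= (1.0 - cp)
    return 1.0 - p_fail

steal_data = {"gate": "OR", "children": [
    {"prob": 0.3},                          # phishing succeeds
    {"gate": "AND", "children": [{"prob": 0.5}, {"prob": 0.4}]},  # exploit chain
]}
print(evaluate(steal_data))  # 1 - 0.7 * (1 - 0.5*0.4) = 0.44
```

Weighting each subtree by the enterprise objective it endangers, as the thesis proposes, would then turn these success probabilities into business-level risk figures.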
APA, Harvard, Vancouver, ISO, and other styles.
4

Erturk, Volkan. "A Framework Based On Continuous Security Monitoring". Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/12610139/index.pdf.

Full text of the source
Abstract:
Continuous security monitoring is the process of following up on IT systems by collecting measurements, then reporting and analyzing the results, so that the organization's security level can be compared along a continuous time axis and its progress observed over time. The related literature contains very little work on continuously monitoring the security of organizations. In this thesis, a continuous security monitoring framework based on security metrics is proposed. Moreover, to decrease the implementation burden, a software tool called SecMon is introduced. The implementation of the framework in a public organization shows that the proposed system succeeds in building an organizational memory and gives the security stakeholders insight into the IT security level of the organization.
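A toy sketch of the continuous-monitoring idea: values of a security metric are recorded period after period and compared across time windows, so stakeholders can see whether the posture is improving. The metric name, windowing, and numbers are illustrative assumptions, not SecMon's actual design.

```python
from statistics import mean

def trend(samples, window=3):
    """samples: chronological metric scores. Returns the change between
    the mean of the last window and the mean of the window before it."""
    if len(samples) < 2 * window:
        raise ValueError("need at least two full windows")
    previous = mean(samples[-2 * window:-window])
    current = mean(samples[-window:])
    return current - previous

patch_coverage = [60, 62, 61, 70, 74, 78]   # % of hosts fully patched, per week
delta = trend(patch_coverage)
print(f"patch coverage trend: {delta:+.1f} points")  # prints "+13.0 points"
```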
APA, Harvard, Vancouver, ISO, and other styles.
5

Rich, Ronald P., and Jonathan S. Holmgren. "Metric methodology for the creation of environments and processes to certify a component : specifically the Naval Research Laboratory Pump". Thesis, Monterey, California. Naval Postgraduate School, 2003. http://hdl.handle.net/10945/1102.

Full text of the source
Abstract:
This thesis was completed in cooperation with the Cebrowski Institute for Information Innovation and Superiority.
Approved for public release; distribution is unlimited
A of the NP, but the key requirement for Certification and Accreditation is the creation of a Protection Profile and an understanding of the DITSCAP requirements and process. This thesis creates a Protection Profile for the NP along with a draft Type SSAA for Certification and Accreditation of the NP.
Lieutenant, United States Navy
Lieutenant, United States Navy
APA, Harvard, Vancouver, ISO, and other styles.
6

Zhou, Luyuan. "Security Risk Analysis based on Data Criticality". Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-93055.

Full text of the source
Abstract:
Nowadays, security risk assessment has become an integral part of network security, as everyday life has become interconnected with and dependent on computer networks. Networks carry various types of data, often with different criticality in terms of the availability, confidentiality, or integrity of information. Critical data poses a higher risk when exploited, so data criticality has an impact on network security risks. The challenge of diminishing security risks in a specific network is how to conduct network security risk analysis based on data criticality. An interesting aspect of this challenge is how to integrate security metrics with threat modeling, and how to consider and combine the various elements that affect network security during security risk analysis. To the best of our knowledge, no existing security risk analysis techniques based on threat modeling consider the criticality of data. By extending security risk analysis with data criticality, we account for its impact on the network in security risk assessment. To acquire the corresponding security risk value, a method is needed for integrating data criticality into graphical attack models via relevant metrics. In this thesis, an approach for calculating the security risk value considering data criticality is proposed. Our solution integrates the impact of data criticality by extending the attack graph with data criticality. Vulnerabilities in the network pose potential threats to it. First, the combination of these vulnerabilities and data criticality is identified and precisely described. Thereafter, the interaction between the vulnerabilities is taken into account through the attack graph, and the final security metric is calculated and analyzed. The new security metric can be used by network security analysts to rank the security levels of objects in the network. By doing this, they can find objects that need additional attention in their daily network protection work. The security metric can also help them prioritize the vulnerabilities that need to be fixed when the network is under attack. In general, network security analysts can find effective ways to resolve exploits in the network based on the value of the security metric.
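The thesis's exact metric is not given in the abstract, but one common way to combine an attack graph with data criticality is to weight the likelihood of reaching a host, over its attack paths, by the criticality of the data it holds. A hypothetical sketch, assuming independent exploits and taking the maximum over paths; all names and numbers are invented:

```python
def path_probability(path, exploit_prob):
    """Probability that every exploit along an attack path succeeds."""
    p = 1.0
    for edge in path:
        p *= exploit_prob[edge]
    return p

def risk_score(paths_to_host, exploit_prob, criticality):
    """Risk = likelihood of the most probable attack path * data criticality."""
    likelihood = max(path_probability(p, exploit_prob) for p in paths_to_host)
    return likelihood * criticality

exploit_prob = {"cve_a": 0.8, "cve_b": 0.5, "cve_c": 0.9}
paths = [["cve_a", "cve_b"], ["cve_c"]]   # two ways to reach the database host
print(risk_score(paths, exploit_prob, criticality=10))  # 0.9 * 10 = 9.0
```

Ranking hosts by this score reproduces the abstract's use case: the same exploit likelihood yields a higher risk value on a host holding more critical data.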
APA, Harvard, Vancouver, ISO, and other styles.
7

Holmgren, Jonathan S., and Ronald P. Rich. "Metric methodology for the creation of environments and processes to certify a component : specifically the Naval Research Laboratory Pump /". Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03Mar%5FHolmgren.pdf.

Full text of the source
Abstract:
Thesis (M.S. in Information Technology Management)--Naval Postgraduate School, March 2003.
Thesis advisor(s): George Dinolt, Craig Rasmussen. Includes bibliographical references (p. 155-157). Also available online.
APA, Harvard, Vancouver, ISO, and other styles.
8

Homer, John. "A comprehensive approach to enterprise network security management". Diss., Manhattan, Kan. : Kansas State University, 2009. http://hdl.handle.net/2097/1372.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles.
9

Bilal, Muhammad, and Ganesh Sankar. "Trust & Security issues in Mobile banking and its effect on Customers". Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3166.

Full text of the source
Abstract:
Context: The invention of mobile phones has made human life easier. The purpose of this study is to identify security risks in mobile banking and to provide an authentication method for mobile banking transactions using a biometric mechanism. Objectives: Current mobile banking authentication is challenging and is identified as a major security risk. The literature review shows that customers distrust mobile banking due to security issues. The authors discuss the security risks of current authentication methods in mobile banking. Methods: There are different methods and approaches to handling authentication in mobile banking. In this thesis, we propose a new approach to authentication in mobile banking. The strengths and weaknesses of existing authentication approaches are identified with the help of a literature review and interviews. The authors present a basic transaction model that includes the security risks. The literature review found that a fingerprint mechanism is a suitable method for authentication. The authors focus on the authentication method and present a biometric scanning device that can identify the customer's fingerprint, thus enabling the customer to access the mobile banking facility. Results: An authentication model is proposed through a design process. The proposed biometric design was validated in a workshop. Analysis of the workshop's results showed that a fingerprint mechanism will increase customers' trust in the security of mobile banking. To promote mobile banking, it is necessary to improve customer trust in terms of security. Conclusions: The authors conclude that, by incorporating a biometric fingerprint mechanism, only authorized persons will be able to use mobile banking services. The literature review and interviews found the fingerprint mechanism more suitable than ordinary mechanisms such as login/password or SMS.
Using mobile phones for mobile banking, customers can push or pull details such as funds transfers, bill payments, share trades, and check orders, and make inquiries such as account balance, account statement, check status, and transaction history. This means that the customer is interacting with the bank's files, databases, etc. The database at the server end is sensitive in terms of security. Customers distrust mobile devices for transferring money or making transactions because security is a major concern for their fulfillment. The customer's main concern in using a mobile device for mobile banking is the authentication method used to ensure that the right person is accessing services such as transactions. The authors constructed a basic model for mobile banking transactions in which all security risks were included, and then focused on the authentication method. The literature review and interviews concluded that security can be improved by biometric methods. The authors examined different biometric mechanisms and concluded that the fingerprint mechanism is the most suitable, as it requires less database storage capacity and captures the uniqueness of each customer. They propose a fingerprint-mechanism model and design a biometric scanning device through which customers can interact with the banking system using their fingerprint. The workshop results show that the biometric fingerprint mechanism is more suitable and secure than other authentication methods for mobile banking.
APA, Harvard, Vancouver, ISO, and other styles.
10

Taylor, Christopher P. "A Security Framework for Logic Locking Through Local and Global Structural Analysis". The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1587681912604658.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles.
11

BATISTA, CARLOS FREUD ALVES. "SOFTWARE SECURITY METRICS". PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2007. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=10990@1.

Full text of the source
Abstract:
PETRÓLEO BRASILEIRO S. A.
Today's growing dependency on information technology (IT) makes software security a key element of IT services. In recent years, public and private institutions have raised their investment in information security; however, the number of attacks is growing faster than our power to face them, putting at risk intellectual property, customers' confidence, and the businesses that rely on IT services. Experts say that most information security incidents occur due to vulnerabilities that exist in software systems in the first place. Security metrics are essential to assess software dependability with respect to security, and also to understand and manage the impact of security initiatives in organizations. However, security metrics are shrouded in mystery and very hard to implement. This work intends to show that there are as yet no adequate metrics capable of indicating the security level that a piece of software will achieve. Hence, we need other practices to assess the security of software while developing it and before deploying it.
APA, Harvard, Vancouver, ISO, and other styles.
12

Deka, Bhaswati. "Secure Localization Topology and Methodology for a Dedicated Automated Highway System". DigitalCommons@USU, 2013. https://digitalcommons.usu.edu/etd/1995.

Full text of the source
Abstract:
Localization of nodes is an important aspect in a vehicular ad-hoc network (VANET). Research has been done on various localization methods. Some are more apt for a specific purpose than others. To begin with, we give an overview of a vehicular ad-hoc network, localization methods, and how they can be classified. The distance bounding and verifiable trilateration methods are explained further with their corresponding algorithms and steps used for localization. Distance bounding is a range-based distance estimation algorithm. Verifiable trilateration is a popular geometric method of localization. A dedicated automated highway infrastructure can use distance bounding and/or trilateration to localize an automated vehicle on the highway. We describe a highway infrastructure for our analysis and test how well each of the methods performs, according to a security measure defined as spoofing probability. The spoofing probability is, simply put, the probability that a given point on the highway will be successfully spoofed by an attacker that is located at any random position along the highway. Spoofing probability depends on different quantities depending on the method of localization used. We compare the distance bounding and trilateration methods to a novel method using friendly jamming for localization. Friendly jamming works by creating an interference around the region whenever communication takes place between a vehicle and a verifier (belonging to the highway infrastructure, which is involved in the localization process using a given algorithm and localization method). In case of friendly jamming, the spoofing probability depends both on the position and velocity of the attacker and those of the target vehicle (which the attacker aims to spoof). This makes the spoofing probability much less for friendly jamming. On the other hand, the distance bounding and trilateration methods have spoofing probabilities depending only on their position. 
The results are summarized at the end of the last chapter to give an idea about how the three localization methods, i.e. distance bounding, verifiable trilateration, and friendly jamming, compare against each other for a dedicated automated highway infrastructure. We observe that the spoofing probability of the friendly jamming infrastructure is less than 2% while the spoofing probabilities of distance bounding and trilateration are 25% and 11%, respectively. This means that the friendly jamming method is more secure for the corresponding automated transportation system (ATS) infrastructure than distance bounding and trilateration. However, one drawback of friendly jamming is that it has a high standard deviation because the range of positions that are most vulnerable is high. Even though the spoofing probability is much less, the friendly jamming method is vulnerable to an attack over a large range of distances along the highway. This can be overcome by defining a more robust infrastructure and using the infrastructure's resources judiciously. This can be the future scope of our research. Infrastructures that use the radio resources in a cost effective manner to reduce the vulnerability of the friendly jamming method are a promising choice for the localization of vehicles on an ATS highway.
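The distance-bounding method compared above rests on a simple physical limit: a challenge response cannot travel faster than light, so the measured round-trip time of a rapid bit exchange yields an upper bound on the prover's distance. A minimal sketch of that bound; the timing values are illustrative, not from the thesis:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def distance_upper_bound(rtt_seconds, processing_seconds=0.0):
    """Upper bound on the prover's distance, from the measured round-trip
    time minus the prover's (assumed known) processing delay."""
    return C * (rtt_seconds - processing_seconds) / 2

# A verifier measuring a 1-microsecond round trip with negligible
# processing delay knows the vehicle is within about 150 m.
bound = distance_upper_bound(1e-6)
print(f"{bound:.0f} m")  # prints "150 m"
```

An attacker can always delay a reply to appear farther away, but never speed it up to appear closer, which is why the method bounds the distance only from above.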
APA, Harvard, Vancouver, ISO, and other styles.
13

Calmon, Flavio du Pin. "Information-theoretic metrics for security and privacy". Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/101567.

Full text of the source
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 143-150).
In this thesis, we study problems in cryptography, privacy and estimation through the information-theoretic lens. We introduce information-theoretic metrics and associated results that shed light on the fundamental limits of what can be learned from noisy data. These metrics and results, in turn, are used to evaluate and design both symmetric-key encryption schemes and privacy-assuring mappings with provable information-theoretic security guarantees. We start by studying information-theoretic properties of symmetric-key encryption in the "small key" regime (i.e. when the key rate is smaller than the entropy rate of the message source). It is well known that security against computationally unbounded adversaries in such settings can only be achieved when the communicating parties share a key that is at least as long as the secret message (i.e. plaintext) being communicated, which is infeasible in practice. Nevertheless, even with short keys, we show that a certain level of security can be guaranteed, albeit not perfect secrecy. In order to quantify exactly how much security can be provided with short keys, we propose a new security metric, called symbol secrecy, that measures how much an adversary that observes only the encrypted message learns about individual symbols of the plaintext. Unlike most traditional rate-based information-theoretic metrics for security, symbol secrecy is non-asymptotic. Furthermore, we demonstrate how fundamental symbol secrecy performance bounds can be achieved through standard code constructions (e.g. Reed-Solomon codes). While much of information-theoretic security has considered the hiding of the plaintext, cryptographic metrics of security seek to hide functions thereof. Consequently, we extend the definition of symbol secrecy to quantify the information leaked about certain classes of functions of the plaintext. 
This analysis leads to a more general question: can security claims based on information metrics be translated into guarantees on what an adversary can reliably infer from the output of a security system? On the one hand, information metrics usually quantify how far the probability distribution between the secret and the disclosed information is from the ideal case where independence is achieved. On the other hand, estimation guarantees seek to assure that an adversary cannot significantly improve his estimate of the secret given the information disclosed by the system. We answer this question in the positive, and present formulations based on rate-distortion theory that allow security bounds given in terms of information metrics to be transformed into bounds on how well an adversary can estimate functions of secret variable. We do this by solving a convex program that minimizes the average estimation error over all possible distributions that satisfy the bound on the information metric. Using this approach, we are able to derive a set of general sharp bounds on how well certain classes of functions of a hidden variable can(not) be estimated from a noisy observation in terms of different information metrics. These bounds provide converse (negative) results: If an information metric is small, then any non-trivial function of the hidden variable cannot be estimated with probability of error or mean-squared error smaller than a certain threshold. The main tool used to derive the converse bounds is a set of statistics known as the Principal Inertia Components (PICs). The PICs provide a fine-grained decomposition of the dependence between two random variables. Since there are well-studied statistical methods for estimating the PICs, we can then determine the (im)possibility of estimating large classes of functions by using the bounds derived in this thesis and standard statistical tests. 
The PICs are of independent interest, and are applicable to problems in information theory, statistics, learning theory, and beyond. In the security and privacy setting, the PICs fulfill the dual goal of providing (i) a measure of (in)dependence between the secret and disclosed information of a security system, and (ii) a complete characterization of the functions of the secret information that can or cannot be reliably inferred given the disclosed information. We study the information-theoretic properties of the PICs, and show how they characterize the fundamental limits of perfect privacy. The results presented in this thesis are applicable to estimation, security and privacy. For estimation and statistical learning theory, they shed light on the fundamental limits of learning from noisy data, and can help guide the design of practical learning algorithms. Furthermore, as illustrated in this thesis, the proposed converse bounds are particularly useful for creating security and privacy metrics, and characterize the inherent trade-off between privacy and utility in statistical data disclosure problems. The study of security systems through the information-theoretic lens adds a new dimension for understanding and quantifying security against very powerful adversaries. Furthermore, the framework and metrics discussed here provide practical insight on how to design and improve security systems using well-known coding and optimization techniques. We conclude the thesis by presenting several promising future research directions.
by Flavio du Pin Calmon.
Ph. D.
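As a concrete instance of the information-theoretic leakage metrics discussed above, the mutual information I(S;Y) between a secret S and an observation Y is zero exactly when the adversary learns nothing. A small self-contained sketch; the joint distributions are toy examples, not from the thesis:

```python
from math import log2

def mutual_information(joint):
    """I(S;Y) in bits. joint: dict {(s, y): probability}, summing to 1."""
    ps, py = {}, {}
    for (s, y), p in joint.items():         # marginal distributions
        ps[s] = ps.get(s, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * log2(p / (ps[s] * py[y]))
               for (s, y), p in joint.items() if p > 0)

# Perfect secrecy: Y is independent of S, so leakage is 0 bits.
independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
# Full disclosure: Y reveals S exactly, leaking the whole bit.
revealing = {(0, 0): 0.5, (1, 1): 0.5}
print(mutual_information(independent), mutual_information(revealing))  # 0.0 1.0
```

Metrics such as symbol secrecy refine this picture by asking how much is leaked about individual plaintext symbols rather than about the message as a whole.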
APA, Harvard, Vancouver, ISO, and other styles.
14

Bouyahia, Tarek. "Metrics for security activities assisted by argumentative logic". Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2017. http://www.theses.fr/2017IMTA0013/document.

Full text of the source
Abstract:
The growth and diversity of the services offered by modern systems make the task of securing these systems a complex exercise. On the one hand, the growing number of system services increases the risk of introducing vulnerabilities, which malicious users can exploit to reach intrusion objectives. On the other hand, the most competitive modern systems are those that ensure a certain level of performance and quality of service while maintaining a safe state. Thus, modern security systems must consider user requirements during the security process. In addition, reacting against an attack after its execution cannot always mitigate its adverse effects in critical contexts. In such cases, security systems should be a phase ahead of the attacker in order to take the measures necessary to prevent him/her from reaching the intrusion objective. To address these problems, we argue in this thesis that the reaction process must follow a smart line of reasoning that allows the system, given a detected attack, to preview the related attacks that may occur and to apply the best possible countermeasures. First, we propose an approach that generates potential attack scenarios given a detected alert. We then focus on generating an appropriate set of countermeasures against the generated attack scenarios, chosen from among all responses defined for the system. A generated set of countermeasures is considered appropriate in the proposed approach if it is coherent (i.e., it contains no conflicting countermeasures) and satisfies the security administrator's requirements (e.g., performance, availability). We argue in this thesis that the reaction process can be seen as two agents arguing against each other.
On one side, the attacker chooses his arguments as a set of actions to try to reach an intrusion objective; on the other side, the agent defending the target chooses his arguments as a set of countermeasures to block the attacker's progress or mitigate the attack's effects. Second, we propose an approach based on a recommender system using a Multi-Criteria Decision Making (MCDM) method. This approach assists security administrators in selecting countermeasures from the appropriate set generated by the first approach. The assistance process is based on the security administrator's decision history. This approach also permits appropriate system responses to be selected automatically in critical cases where the security administrator is unable to select them (e.g., outside working hours, or due to a lack of knowledge about the ongoing attack). Finally, our approaches are implemented and tested in an automotive system use case to ensure that their implementation successfully meets real-time constraints.
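The abstract does not specify which MCDM method the recommender uses, but a weighted-sum score over administrator-weighted criteria is the simplest member of that family and illustrates the selection step. The countermeasures, criteria, and weights below are invented; every criterion is scored so that higher is better (so a countermeasure with little availability impact gets a high availability score):

```python
def best_countermeasure(candidates, weights):
    """candidates: {name: {criterion: score in [0, 1]}}, higher is better.
    Returns the candidate with the highest weighted-sum score."""
    def score(name):
        return sum(weights[c] * v for c, v in candidates[name].items())
    return max(candidates, key=score)

weights = {"effectiveness": 0.5, "availability": 0.3, "cost": 0.2}
candidates = {
    "block_ip":      {"effectiveness": 0.9, "availability": 0.4, "cost": 0.9},
    "patch_service": {"effectiveness": 0.8, "availability": 0.9, "cost": 0.5},
    "shutdown_host": {"effectiveness": 1.0, "availability": 0.1, "cost": 0.8},
}
print(best_countermeasure(candidates, weights))  # prints "patch_service"
```

In the thesis's setting, the weights would be learned from the administrator's decision history rather than fixed by hand, which is what allows the system to choose autonomously outside working hours.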
Styles: APA, Harvard, Vancouver, ISO, etc.
15

Ramos, Alex Lacerda. "Network security metrics for the Internet of things". Universidade de Fortaleza, 2018. http://dspace.unifor.br/handle/tede/108423.

Full text of the source
Abstract:
Made available in DSpace on 2019-03-30 (GMT). Previous issue date: 2018-11-26.
Recent advances in networking technologies, such as the IPv6 over Low-Power Wireless Personal Area Networks (6LoWPAN) standard, have made it possible to interconnect wireless sensor networks (WSNs) with the Internet, thus forming the Internet of Things (IoT). Despite the availability of different message security mechanisms, sensor networks are still vulnerable to several types of attack. To identify such attacks, an Intrusion Detection System (IDS) can be deployed. However, IDSs can generate many false positives and false negatives. Moreover, the alerts raised by IDSs provide no information about the impact an attack has on the security of a sensor network. As a consequence, it becomes difficult for WSN administrators and users to take proper responsive actions when attacks occur. To address these issues, this thesis proposes three security metrics. The first metric, called Trust Probability, quantifies how much an IDS output can be trusted to be correct. This metric can help administrators decide which alerts deserve careful attention and which can safely be ignored. Since it provides a measure of IDS effectiveness, it can also be used to compare different IDSs as well as to fine-tune a given IDS. The second metric, named Damage Level, quantifies the severity of an attack. This metric, when combined with Trust Probability, enables the administrator to correctly prioritize and respond to alerts by evaluating them in terms of accuracy and attack impact. Finally, the third metric, Data Security Level, quantifies the degree to which sensor data can be trusted when the sensor is under attack. The security situational awareness provided by this metric helps WSN users make better decisions about the use of the gathered sensor data. Experimental results show that the proposed metrics can accurately quantify security level with low performance overhead and power consumption.
Keywords: Network Security Metrics, Quantitative Security Analysis, Security Situational Awareness, Internet of Things, Wireless Sensor Networks.
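As an editorial illustration of how the first two metrics described above could work together to prioritize alerts, the sketch below ranks alerts by the product of trust probability and damage level. The product rule, the alert names, and the numeric values are all assumptions for illustration, not formulas or data from the thesis.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    name: str
    trust_probability: float  # confidence that the alert is correct, in [0, 1]
    damage_level: float       # severity of the reported attack, in [0, 1]

    @property
    def priority(self) -> float:
        # Expected damage: severity weighted by how trustworthy the alert is.
        return self.trust_probability * self.damage_level

# Hypothetical alerts; names and values are invented for illustration.
alerts = [
    Alert("selective-forwarding", 0.9, 0.4),
    Alert("sinkhole", 0.5, 0.9),
    Alert("probe-scan", 0.2, 0.1),
]

for alert in sorted(alerts, key=lambda a: a.priority, reverse=True):
    print(f"{alert.name}: {alert.priority:.2f}")
```

Under this assumed rule, a moderately trusted alert for a severe attack can outrank a highly trusted alert for a mild one, which matches the abstract's point that accuracy and impact must be weighed together.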
Styles: APA, Harvard, Vancouver, ISO, etc.
16

Alshammari, Bandar M. "Quality metrics for assessing security-critical computer programs". Thesis, Queensland University of Technology, 2011. https://eprints.qut.edu.au/49780/1/Bandar_Alshammari_Thesis.pdf.

Full text of the source
Abstract:
Existing secure software development principles tend to focus on coding vulnerabilities, such as buffer or integer overflows, that apply to individual program statements, or issues associated with the run-time environment, such as component isolation. Here we instead consider software security from the perspective of potential information flow through a program’s object-oriented module structure. In particular, we define a set of quantifiable "security metrics" which allow programmers to quickly and easily assess the overall security of a given source code program or object-oriented design. Although measuring quality attributes of object-oriented programs for properties such as maintainability and performance has been well-covered in the literature, metrics which measure the quality of information security have received little attention. Moreover, existing security-relevant metrics assess a system either at a very high level, i.e., the whole system, or at a fine level of granularity, i.e., with respect to individual statements. These approaches make it hard and expensive to recognise a secure system from an early stage of development. Instead, our security metrics are based on well-established compositional properties of object-oriented programs (i.e., data encapsulation, cohesion, coupling, composition, extensibility, inheritance and design size), combined with data flow analysis principles that trace potential information flow between high- and low-security system variables. We first define a set of metrics to assess the security quality of a given object-oriented system based on its design artifacts, allowing defects to be detected at an early stage of development. We then extend these metrics to produce a second set applicable to object-oriented program source code.
The resulting metrics make it easy to compare the relative security of functionally equivalent system designs or source code programs so that, for instance, the security of two different revisions of the same system can be compared directly. This capability is further used to study the impact of specific refactoring rules on system security more generally, at both the design and code levels. By measuring the relative security of various programs refactored using different rules, we thus provide guidelines for the safe application of refactoring steps to security-critical programs. Finally, to make it easy and efficient to measure a system design or program’s security, we have also developed a stand-alone software tool which automatically analyses and measures the security of UML designs and Java program code. The tool’s capabilities are demonstrated by applying it to a number of security-critical system designs and Java programs. Notably, the validity of the metrics is demonstrated empirically through measurements that confirm our expectation that program security typically improves as bugs are fixed, but worsens as new functionality is added.
Styles: APA, Harvard, Vancouver, ISO, etc.
17

Bengtsson, Mattias. "Mathematical foundation needed for development of IT security metrics". Thesis, Linköping University, Department of Electrical Engineering, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-9766.

Full text of the source
Abstract:

IT security metrics are used to assess certain parts of the IT security environment. There is neither a consensus on the definition of an IT security metric nor a natural scale type for IT security, which makes the interpretation of IT security values difficult. To accomplish a comprehensive IT security assessment, one must aggregate the IT security values into compounded values.

When developing IT security metrics it is important that only permissible mathematical operations are used, so that the information is maintained all the way through the metric. A sound mathematical foundation is needed for this purpose.

The main results produced by the efforts in this thesis are:

• Identification of activities needed for IT security assessment when using IT security metrics.

• A method for selecting a set of security metrics with respect to goals and criteria, which is also used to aggregate security values generated from a set of security metrics into compounded higher-level security values.

• A mathematical foundation needed for development of security metrics.

Styles: APA, Harvard, Vancouver, ISO, etc.
18

Lundholm, Kristoffer. "Design and implementation of a framework for security metrics creation". Thesis, Linköping University, Department of Electrical Engineering, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-18217.

Full text of the source
Abstract:

Measuring information security is the key to unlocking the knowledge of how secure information systems really are. In order to perform these measurements, security metrics can be used. Since all systems and organizations are different, there is no single set of metrics that is generally applicable. In order to help organizations create metrics, this thesis presents a metrics creation framework providing a structured way of creating the necessary metrics for any information system. The framework takes a high-level information security goal as input and transforms it into metrics by decomposing goals, which are then inserted into a template. The thesis also presents a set of metrics based on a minimum level of information security produced by the Swedish Emergency Management Agency. This set of metrics can be used to show compliance with the minimum level, or as a base when a more extensive metrics program is created.

Styles: APA, Harvard, Vancouver, ISO, etc.
19

Lemos, Maria Carmen, David Manuel-Navarrete, Bram Leo Willems, Rolando Diaz Caravantes i Robert G. Varady. "Advancing metrics: models for understanding adaptive capacity and water security". ELSEVIER SCI LTD, 2016. http://hdl.handle.net/10150/622827.

Full text of the source
Abstract:
We explore the relationship between water security (WS) and adaptive capacity (AC); the two concepts are connected because achieving the first may depend on building the second. We focus on how metrics of WS and AC are operationalized and what implications they may have for short- and long-term management. We argue that rather than static conceptualizations of WS and AC, we need to understand what combinations of capacities are needed as a function of how controllable the key parameters of WS are and the types of outcomes we seek to achieve. We offer a conceptual model of the relationship between WS and AC to clarify which aspects of human-water interactions each concept emphasizes, and suggest a hypothetical example of how decision-makers may use these ideas.
Styles: APA, Harvard, Vancouver, ISO, etc.
20

Nia, Ramadianti Putri Mganga i Medard Charles. "Enhancing Information Security in Cloud Computing Services using SLA based metrics". Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-1999.

Full text of the source
Abstract:
Context: Cloud computing is a prospering technology that most organizations are considering for adoption as a cost-effective strategy for managing IT. However, organizations still consider the technology to be associated with many unresolved business risks, including security, privacy, and legal and regulatory risks. As an initiative to address such risks, organizations can develop and implement an SLA to establish common expectations and goals between the cloud provider and the customer, and can use the SLA to measure the achievement of the outsourced service. However, many SLAs tend to focus on cloud computing performance whilst neglecting information security issues. Objective: We identify threats and security attributes applicable to cloud computing. We also select a framework suitable for identifying information security metrics. Moreover, we identify SLA-based information security metrics in the cloud in line with the COBIT framework. Methods: We conducted a systematic literature review (SLR) to identify studies focusing on information security threats in cloud computing, and also used the SLR to select frameworks available for identifying security metrics. We used the Engineering Village and Scopus online citation databases as primary sources of data for the SLR. Studies were selected based on the inclusion/exclusion criteria we defined, and a suitable framework was selected based on defined framework selection criteria. Based on the selected framework and a conceptual review of the COBIT framework, we identified SLA-based information security metrics in the cloud. Results: Based on the SLR, we identified security threats and attributes in the cloud. The Goal Question Metric (GQM) framework was selected as suitable for the identification of security metrics. Following the GQM approach and the COBIT framework, we identified ten areas that are essential to and related with information security in cloud computing.
In addition, covering the ten essential areas, we identified 41 SLA-based information security metrics that are relevant for measuring and monitoring the security performance of cloud computing services. Conclusions: Cloud computing faces threats similar to those of traditional computing. Depending on the service and deployment model adopted, addressing security risks in the cloud may become a more challenging and complex undertaking. This situation therefore calls on cloud providers to execute their key responsibility of creating not only a cost-effective but also a secure cloud computing service. In this study, we assist both cloud providers and customers with the security issues to be considered for inclusion in their SLAs. We have identified 41 SLA-based information security metrics to help both cloud providers and customers establish common security performance expectations and goals. We anticipate that adoption of these metrics can help cloud providers enhance security in the cloud environment. The metrics will also assist cloud customers in evaluating the security performance of the cloud for improvements.
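The abstract above adopts the Goal Question Metric framework. A minimal sketch of the GQM shape (goal, then questions, then metrics) might look like the following; the goal, questions, and metric names are invented examples, not the thesis's actual ten areas or 41 metrics.

```python
# The goal, questions, and metrics below are invented examples in the
# GQM (goal -> questions -> metrics) shape, not taken from the thesis.
gqm = {
    "goal": "Assess confidentiality of customer data stored in the cloud",
    "questions": {
        "Is data encrypted at rest?": [
            "Percentage of storage volumes encrypted",
            "Key rotation interval (days)",
        ],
        "Who can access the data?": [
            "Number of accounts with read access",
            "Percentage of accesses covered by audit logs",
        ],
    },
}

def list_metrics(model):
    """Flatten a GQM model into the metric candidates for an SLA."""
    return [m for metrics in model["questions"].values() for m in metrics]

print(len(list_metrics(gqm)))  # 4
```

The point of the structure is traceability: each SLA metric can be traced back through a question to the security goal it serves, which is what makes the derived metrics defensible to both provider and customer.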
Styles: APA, Harvard, Vancouver, ISO, etc.
21

Carl, Stephen J. "United States Foreign Assistance Programs: the Requirement of Metrics for Security Assistance and Security Cooperation Programs". Thesis, Monterey, California. Naval Postgraduate School, 2012. http://hdl.handle.net/10945/7316.

Full text of the source
Abstract:
Foreign aid has been a signal component of United States foreign policy since the creation of the Marshall Plan. Since that time, as new requirements emerged, numerous foreign aid programs and initiatives were created and subsequently pieced together under various U.S. agencies. The confluence of programs, initiatives, and agencies has created a confusing and overly bureaucratized environment for expending funds to support the democratization and modernization of other countries. This study examines U.S. aid provided to Ukraine and Georgia to determine whether they have progressed toward Westernized defense and military structures, in accordance with their stated national goals, within the realm of logistics. The question is whether U.S. security aid in these states has helped to achieve these goals. To address this question, this thesis proposes a hierarchical construct with differing assessment criteria based on how and where U.S. aid is applied. In the end, this analysis shows that U.S. aid and assistance programs and funds have assisted both Ukraine and Georgia with their modernization efforts. However, U.S. policy makers and policy implementers need to consider alternative and new methods to accurately assess how well those funds are spent in line with U.S. foreign policy goals.
Styles: APA, Harvard, Vancouver, ISO, etc.
22

Vasilevskaya, Maria. "Security in Embedded Systems : A Model-Based Approach with Risk Metrics". Doctoral thesis, Linköpings universitet, Institutionen för datavetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-122149.

Full text of the source
Abstract:
The increasing prevalence of embedded devices and a boost in sophisticated attacks against them make embedded system security an intricate and pressing issue. New approaches to support the development of security-enhanced systems need to be explored. We realise that efficient transfer of knowledge from security experts to embedded system engineers is vitally important, but hardly achievable in current practice. This thesis proposes a Security-Enhanced Embedded system Design (SEED) approach, which is a set of concepts, methods, and processes that together aim at addressing this challenge of bridging the gap between the two areas of expertise.  We introduce the concept of a Domain-Specific Security Model (DSSM) as a suitable abstraction to capture the knowledge of security experts in a way that this knowledge can be later reused by embedded system engineers. Each DSSM characterises common security issues of a specific application domain in a form of security properties linked to a range of solutions. Next, we complement a DSSM with the concept of a Performance Evaluation Record (PER) to account for the resource-constrained nature of embedded systems. Each PER characterises the resource overhead created by a security solution, a provided level of security, and other relevant information.  We define a process that assists an embedded system engineer in selecting a suitable set of security solutions. The process couples together (i) the use of the security knowledge accumulated in DSSMs and PERs, (ii) the identification of security issues in a system design, (iii) the analysis of resource constraints of a system and available security solutions, and (iv) model-based quantification of security risks to data assets associated with a design model. The approach is supported by a set of tools that automate certain steps. We use scenarios from a smart metering domain to demonstrate how the SEED approach can be applied. 
We show that our artefacts are rich enough to support security experts in describing knowledge about security solutions, and to support embedded system engineers in integrating an appropriate set of security solutions based on that knowledge. We demonstrate the effectiveness of the proposed method for quantifying security risks by applying it to a metering device, which shows its use for visualising and reasoning about the security risks inherent in a system design.

Grammatical and spelling errors have been corrected in the electronic version.

Styles: APA, Harvard, Vancouver, ISO, etc.
23

Doherty, Vincent J. "Metrics for success : using metrics in exercises to assess the preparedness of the fire service in Homeland Security". Thesis, Monterey, California. Naval Postgraduate School, 2004. http://handle.dtic.mil/100.2/ADA424982.

Full text of the source
Abstract:
Thesis (M.A.)--Naval Postgraduate School, 2004.
Title from title page of source document (viewed on April 23, 2008). "Approved for public release, distribution is unlimited." Includes bibliographical references (p. 73-74).
Styles: APA, Harvard, Vancouver, ISO, etc.
24

Třeštíková, Lenka. "Bezpečnostní metriky platformy SAP". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2017. http://www.nusl.cz/ntk/nusl-363799.

Full text of the source
Abstract:
The main goal of this thesis is to analyze potential security risks of the SAP NetWeaver platform and to identify vulnerabilities that result from poor system configuration, incorrect segregation of duties, or insufficient patch management. A methodology for evaluating the platform, defined in terms of vulnerabilities, security requirements, and controls, will be created.
Styles: APA, Harvard, Vancouver, ISO, etc.
25

Farhady, Ghalaty Nahid. "Fault Attacks on Cryptosystems: Novel Threat Models, Countermeasures and Evaluation Metrics". Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/72280.

Full text of the source
Abstract:
Recent research has demonstrated that there is no sharp distinction between passive attacks based on side-channel leakage and active attacks based on fault injection. Fault behavior can be processed as side-channel information, offering all the benefits of Differential Power Analysis including noise averaging and hypothesis testing by correlation. In fault attacks, the adversary induces faults into a device while it is executing a known program and observes the reaction. The abnormal reactions of the device are later analyzed to obtain the secrets of the program under execution. Fault attacks are a powerful threat. They are used to break cryptosystems, pay-TV systems, smart cards and other embedded applications. In fault-attack-resistant design, the fault is assumed to be induced by a smart, malicious, determined attacker who has deep knowledge of the design under attack. Moreover, the purpose of fault-attack-resistant design is for the system to work correctly under intentional fault injection without leaking any secret data. Towards building a fault-attack-resistant design, the problem can be divided into three main subjects:

• Investigating novel and more powerful threat models and attack procedures.
• Proposing countermeasures to build systems secure against fault attacks.
• Building evaluation metrics to measure the security of designs.

In this regard, my thesis has covered the first point by proposing Differential Fault Intensity Analysis (DFIA), based on the biased fault model. The biased fault model refers to the gradual change in fault behavior as the intensity of fault injection increases. The DFIA attack has been successfully launched on the AES, PRESENT and LED block ciphers. Our group has also recently applied this attack to the AES algorithm running on a LEON3 processor.
In our work, we also propose a countermeasure against one of the most powerful types of fault attacks, namely Fault Sensitivity Analysis (FSA). This countermeasure is based on balancing the delay of the circuit to destroy the correlation between secret data and the timing delay of the circuit. Additionally, we propose a framework for assessing the vulnerability of designs to fault attacks. An example of this framework is the Timing Violation Vulnerability Factor (TVVF), a metric for measuring the vulnerability of hardware to timing violation attacks. We compute TVVF for two implementations of the AES algorithm and measure the vulnerability of these designs to two types of fault attacks. For future work, we plan to propose an attack that combines power measurements and fault injections; this attack is more powerful in the sense that it has fewer fault injection restrictions and requires less information from the block cipher's data. We also plan to design more efficient and generic evaluation metrics than TVVF. As shown in this thesis, fault attacks are a more serious threat than the cryptography community has assumed. This thesis provides a deep understanding of fault behavior in the circuit and therefore better knowledge of powerful fault attacks. The techniques developed in this dissertation focus on different aspects of fault attacks on hardware architectures and microprocessors. Considering the fault models, attacks, and evaluation metrics proposed in this thesis, there is hope of developing robust, fault-attack-resistant microprocessors. We conclude this thesis by observing future areas and opportunities for research.
Ph.D.
Styles: APA, Harvard, Vancouver, ISO, etc.
26

Prosperi, Paolo. "Metrics of food security and sustainability: An indicator-based vulnerability and resilience approach". Doctoral thesis, Università di Catania, 2015. http://hdl.handle.net/10761/4012.

Full text of the source
Abstract:
Food crises and global climate change, along with the depletion of natural systems, have placed food security and environmental sustainability at the top of the political agenda. Analyses of the dynamic interlinkages between food consumption patterns and environmental concerns have recently received considerable attention from the international community. Using the lens of a broad sustainability approach and recognizing the systemic dimension of sustainability, understood as the capacity of a system to maintain its functions over time, the thesis aimed at developing a multidimensional framework to identify metrics for assessing the sustainability of food systems and diets, applicable at a subregional level. Viewed through Social-Ecological Systems frameworks, the Mediterranean Latin Arc presents several socioeconomic and biophysical drivers of change that make the food system vulnerable in its functions. A vulnerability/resilience approach was applied to analyze the main issues related to food and nutrition security. Formalizing the food system as a dynamic complex system, a model is derived from this framework. Several causal models of vulnerability were identified, describing the interactions by which drivers of change directly affect food and nutrition security outcomes, disentangling exposure, sensitivity, and resilience. This theoretical modeling exercise allowed the identification of a first suite of indicators. A reduced pool of metrics was then obtained through an expert-based elicitation process (Delphi survey), moving beyond subjective evaluation and reaching consensus.
Styles: APA, Harvard, Vancouver, ISO, etc.
27

Andersson, Rikard. "A Method for Assessment of System Security". Thesis, Linköping University, Department of Electrical Engineering, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-4386.

Full text of the source
Abstract:

With the increasing use of extensive IT systems for sensitive or safety-critical applications, the matter of IT security is becoming more important. In order to be able to make sensible decisions about security there is a need for measures and metrics for computer security. There currently exist no established methods to assess the security of information systems.

This thesis presents a method for assessing the security of computer systems. The basis of the method is that security relevant characteristics of components are modelled by a set of security features and connections between components are modelled by special functions that capture the relations between the security features of the components. These modelled components and relations are used to assess the security of each component in the context of the system and the resulting system dependent security values are used to assess the overall security of the system as a whole.
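To make the modelling idea above concrete, here is a toy sketch in which components carry security-feature scores, connections cap each shared feature at the weaker side, and the system value is the weakest resulting score. The feature names, the min-propagation rule, and the weakest-link aggregation are all illustrative assumptions, not the actual functions defined by the method.

```python
# Toy component model; names, values, and combination rules are invented.
components = {
    "web-server": {"auth": 0.8, "logging": 0.6},
    "database":   {"auth": 0.9, "logging": 0.3},
}
# A connection means an attacker can reach one component through the
# other, so each feature is capped by the weaker of the two sides.
connections = [("web-server", "database")]

def effective_security(comps, conns):
    """System-dependent security values for each component's features."""
    eff = {name: dict(feats) for name, feats in comps.items()}
    for a, b in conns:
        for feat in eff[a]:
            capped = min(comps[a][feat], comps[b][feat])
            eff[a][feat] = capped
            eff[b][feat] = capped
    return eff

def system_security(comps, conns):
    """Weakest-link aggregation over all effective feature values."""
    eff = effective_security(comps, conns)
    return min(min(feats.values()) for feats in eff.values())

print(system_security(components, connections))  # 0.3
```

The sketch illustrates the two-stage structure the abstract describes: component values are first re-evaluated in the context of the system, and only then aggregated into an overall system value.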

A software tool that implements the method has been developed and used to demonstrate the method. The examples studied show that the method delivers reasonable results, but the exact interpretation of the results is not clear, due to the lack of security metrics.

Styles: APA, Harvard, Vancouver, ISO, etc.
28

Lundin, Reine. "Towards Measurable and Tunable Security". Licentiate thesis, Karlstad : Faculty of Economic Sciences, Communication and IT, Computer Science, Karlstad University, 2007. http://www.diva-portal.org/kau/abstract.xsql?dbid=1200.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
29

Garfin, Gregg, Robert Varady, Robert Merideth, Margaret O. Wilder i Christopher Scott. "Metrics for assessing adaptive capacity and water security: Common challenges, diverging contexts, emerging consensus". ELSEVIER SCI LTD, 2016. http://hdl.handle.net/10150/622654.

Full text of the source
Abstract:
The rapid pace of climate and environmental changes requires some degree of adaptation, to forestall or avoid severe impacts. Adaptive capacity and water security are concepts used to guide the ways in which resource managers plan for and manage change. Yet the assessment of adaptive capacity and water security remains elusive, due to flaws in guiding concepts, paucity or inadequacy of data, and multiple difficulties in measuring the effectiveness of management prescriptions at scales relevant to decision-making. We draw on conceptual framings and empirical findings of the articles in this special issue and seek to respond to key questions with respect to metrics for the measurement, governance, information accessibility, and robustness of the knowledge produced in conjunction with ideas related to adaptive capacity and water security. Three overarching conclusions from this body of work are (a) systematic cross-comparisons of metrics, using the same models and indicators, are needed to validate the reliability of evaluation instruments for adaptive capacity and water security, (b) the robustness of metrics to applications across multiple scales of analysis can be enhanced by a “metrics plus” approach that combines well-designed quantitative metrics with in-depth qualitative methods that provide rich context and local knowledge, and (c) changes in the governance of science-policy can address deficits in public participation, foster knowledge exchange, and encourage the co-development of adaptive processes and approaches (e.g., risk-based framing) that move beyond development and use of static indicators and metrics.
Styles: APA, Harvard, Vancouver, ISO, etc.
30

Bengtsson, Jonna. "Scenario-Based Evaluation of a Method for System Security Assessment". Thesis, Linköping University, Department of Electrical Engineering, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-6004.

Full text of the source
Abstract:

This thesis evaluates a method for system security assessment (MASS), developed at the Swedish Defence Research Agency in Linköping. The evaluation has been carried out with the use of scenarios, consisting of three example networks and several modifications of those. The results from the scenarios are then compared to the author's expectations, and a general discussion follows on whether or not the results are realistic.

The evaluation is not meant to be exhaustive, so even if MASS had passed the evaluation with flying colors, it could not have been regarded as proof that the method works as intended. However, this was not the case; even though MASS responded well to the majority of the modifications, some issues indicating possible adjustments or improvements were found and commented on in this report.

The conclusion from the evaluation is therefore that there are issues to be solved and that the evaluated version of MASS is not ready to be used to evaluate real networks. The method has enough promise not to be discarded, though. With the aid of the issues found in this thesis, it should be developed further, along with the supporting tools, and be re-evaluated.

Styles: APA, Harvard, Vancouver, ISO, etc.
31

Lundin, Reine. "Guesswork and Entropy as Security Measures for Selective Encryption". Doctoral thesis, Karlstads universitet, Avdelningen för datavetenskap, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-14032.

Full text of the source
Abstract:
More and more effort is being spent on security improvements in today's computer environments, with the aim of achieving an appropriate level of security. However, for small computing devices it might be necessary to reduce the computational cost imposed by security in order to gain reasonable performance and/or energy consumption. To accomplish this, selective encryption can be used, which provides confidentiality by encrypting only chosen parts of the information. Previous work on selective encryption has chiefly focused on how to reduce the computational cost while still making the information perceptually secure, but not on how computationally secure the selectively encrypted information is. Despite the efforts made, and due to the harsh nature of computer security, good quantitative assessment methods for computer security are still lacking. New ways of measuring security are therefore needed in order to better understand, assess, and improve the security of computer environments. Two proposed probabilistic quantitative security measures are entropy and guesswork. Entropy gives the average number of guesses in an optimal binary search attack, and guesswork gives the average number of guesses in an optimal linear search attack. In information theory, a considerable amount of research has been carried out on entropy and on entropy-based metrics. However, the same does not hold for guesswork. In this thesis, we evaluate the performance improvement when using the proposed generic selective encryption scheme. We also examine the confidentiality strength of selectively encrypted information by using and adapting entropy and guesswork. Moreover, since guesswork has been less theoretically investigated than entropy, we extend guesswork in several ways and investigate some of its behaviors.
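The two measures can be made concrete with a small calculation. The sketch below is illustrative code, not taken from the thesis: it computes Shannon entropy and guesswork for a toy four-key distribution whose probabilities are invented for the example.

```python
import math

def entropy(probs):
    # Shannon entropy in bits: average number of guesses in an
    # optimal binary search attack over the distribution.
    return -sum(p * math.log2(p) for p in probs if p > 0)

def guesswork(probs):
    # Expected number of guesses in an optimal linear search attack:
    # try candidates in order of decreasing probability.
    ordered = sorted(probs, reverse=True)
    return sum(i * p for i, p in enumerate(ordered, start=1))

# Toy key distribution: one likely key, three unlikely ones.
keys = [0.7, 0.1, 0.1, 0.1]
print(entropy(keys))    # ≈ 1.357 bits
print(guesswork(keys))  # 0.7*1 + 0.1*2 + 0.1*3 + 0.1*4 = 1.6 guesses
```

For a skewed distribution like this, the two measures diverge: guesswork stays close to 1 when one key dominates, while entropy falls more slowly, which is one reason the thesis treats them as complementary.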
Styles: APA, Harvard, Vancouver, ISO, etc.
32

Miah, Abdul. "Product-based environmental metrics for use within aerospace, defence, space and security industries (ADS)". Thesis, University of Surrey, 2018. http://epubs.surrey.ac.uk/845983/.

Full text of the source
Abstract:
Within the aerospace, defence, space, and security (ADS) industries, there is a growing reporting requirement and interest in understanding and reducing the environmental impacts of products and the related risks to business. This dissertation presents research carried out in collaboration with six ADS companies (ADS Group, Airbus Group, BAE Systems, Bombardier Aerospace, Granta Design, and Rolls-Royce) to establish industry methods for consistently measuring and reporting two pre-selected product-based environmental indicators identified as important to the industry: energy consumption and access to resources. Following an action research approach, four potential methods for calculating and reporting the manufacturing energy footprint of ADS products were identified and industry-tested on three case study parts selected by Airbus Group, Bombardier Aerospace, and Rolls-Royce. The methods tested were: (1) direct measurement, (2) theoretical calculation, (3) facility-level allocation of energy consumption (based on annual production hours, quantity, and weight of parts manufactured), and (4) approximation based on generic data. Method 3 (production hours) was found to be the most suitable single method for immediately reporting the manufacturing energy footprint of parts, as it was quick to implement and based on widely available industry data. Regarding comparability, the methods were found to be incomparable, producing significantly different results when applied to calculate the manufacturing energy footprint of the same part; differences between two methods could be on the order of one magnitude. Such large differences are significant for understanding energy use/costs and environmental impacts (e.g. carbon footprint), and for reliably reporting and comparing information to inform decisions.
Therefore, methods for calculating the manufacturing energy footprint of products cannot be assumed to be interchangeable and stacked in LCAs, EPDs, and other standards. These findings challenge current LCA practices and the interpretation of product-based environmental declarations where multiple methods have been used and results stacked. Thus, existing standards and growing product-orientated environmental policies allowing the use of multiple methods (e.g. EPDs and PEFs) may indeed proliferate incomparability rather than engender comparability. Regarding approximating product energy footprints using generic data, the research was only able to approximate the machining energy consumption associated with the case study parts because of data gaps in the generic database. However, high comparability between generic data use and direct measurement (i.e. specific/primary data) was found. These limited findings challenge attitudes towards generic data use and indicate potential scope to replace expensive primary data collection with more cost-effective (and similarly accurate) generic data. With regard to proposing a method for measuring the access to resources (A2R) product-based environmental indicator, several supply risk indicators and methodological choices for measuring the indicator were identified. Methodological choices included decisions such as whether to normalise and aggregate supply risk indicators into a single score. A workshop with the industry consortium was consequently carried out to explore and agree: (1) which indicators should be selected to appropriately measure A2R, and (2) how the selected indicators should be measured. Out of 18 potential supply risk indicators, five were identified as key: conflict material risk, environmental country risk, price volatility risk, sourcing and geopolitical risk, and monopoly of supply risk. These were selected because of clear links to legislation, use of reliable data, and effect on material prices.
Regarding methodological choices for measuring A2R, the industry consortium preferred to avoid normalising and aggregating indicators, to prevent masking information. The dissertation highlights several major contributions to knowledge, industry, policy, and the development of standards as a result of the research. The main contribution to knowledge is the methods developed and the learnings derived from the process undertaken to determine them. The main contribution and benefit to the ADS industries are single, practical, research-informed, and industry-consortium-agreed methods for cost-effectively measuring two product-based environmental indicators (which support the informational requirements of a wide range of stakeholders and potential end-uses). The examined indicators and the 'case study' approach utilised with an industry consortium to identify the generic issues in developing suitable methods will be of value for: (1) other industries with similar product/value chain characteristics, and (2) the development of methods for measuring other product-based environmental indicators for industry use (e.g. water, waste, recyclability, etc.). The presented research outcomes provide valuable industry insights for informing the development of emerging product-orientated environmental policies and standards in a manner which benefits the ADS industries and the broader environment. Overall, the research has enhanced academic understanding and provides industry capability to support businesses and other similar industries to consistently assess, report, and improve the sustainability of their products and supply chains.
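The production-hours variant of Method 3 reduces to a simple proportional allocation. The sketch below is a hypothetical illustration of that calculation; the function name and all figures are invented, and the dissertation also considers allocating by quantity and weight of parts manufactured.

```python
def energy_footprint_production_hours(facility_annual_energy_kwh,
                                      facility_annual_production_hours,
                                      part_production_hours):
    # Facility-level allocation: apportion the facility's annual energy
    # to a part in proportion to the hours spent producing it.
    energy_per_hour = facility_annual_energy_kwh / facility_annual_production_hours
    return energy_per_hour * part_production_hours

# Hypothetical figures for illustration only: a 2 GWh/year facility
# running 50,000 production hours, with a part that takes 12 hours.
print(energy_footprint_production_hours(2_000_000, 50_000, 12))  # 480.0 kWh
```

Because the allocation basis (hours vs. quantity vs. weight) changes the result, two sites applying "Method 3" differently can still report incomparable footprints for the same part, which is consistent with the comparability findings above.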
Styles: APA, Harvard, Vancouver, ISO, etc.
33

Araujo, Neto Afonso Comba de. "Security Benchmarking of Transactional Systems". reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2012. http://hdl.handle.net/10183/143292.

Full text of the source
Abstract:
Most organizations nowadays depend on some kind of computer infrastructure to manage business-critical activities. This dependence grows as computer systems become more reliable and useful, but so do the complexity and size of systems. Transactional systems, which are database-centered applications used by most organizations to support daily tasks, are no exception. A typical solution to cope with systems complexity is to delegate the software development task and to use existing solutions independently developed and maintained (either proprietary or open source). The multiplicity of software and component alternatives available has boosted the interest in suitable benchmarks, able to assist in the selection of the best candidate solutions concerning several attributes. However, the huge success of performance and dependability benchmarking markedly contrasts with the small advances in security benchmarking, which has only sparsely been studied in the past. This thesis discusses the security benchmarking problem and its main characteristics, particularly comparing these with other successful benchmarking initiatives, like performance and dependability benchmarking. Based on this analysis, a general framework for security benchmarking is proposed. This framework, suitable for most types of software systems and application domains, includes two main phases: security qualification and trustworthiness benchmarking. Security qualification is a process designed to evaluate the most obvious and identifiable security aspects of the system, dividing the evaluated targets into acceptable and unacceptable, given the specific security requirements of the application domain. Trustworthiness benchmarking, on the other hand, consists of an evaluation process that is applied over the qualified targets to estimate the probability of the existence of hidden or hard-to-detect security issues in a system (the main goal is to cope with the uncertainties related to security aspects).
The framework is thoroughly demonstrated and evaluated in the context of transactional systems, which can be divided into two parts: the infrastructure and the business applications. As these parts have significantly different security goals, the framework is used to develop methodologies and approaches that fit their specific characteristics. First, the thesis proposes a security benchmark for transactional systems infrastructures and describes, discusses and justifies all the steps of the process. The benchmark is applied to four distinct real infrastructures, and the results of the assessment are thoroughly analyzed. Still in the context of transactional systems infrastructures, the thesis also addresses the problem of selecting software components. This is complex, as the security of an infrastructure cannot be evaluated before deployment. The proposed tool, aimed at helping in the selection of basic software packages to support the infrastructure, is used to evaluate seven different software packages, representative alternatives for the deployment of real infrastructures. Finally, the thesis discusses the problem of designing trustworthiness benchmarks for business applications, focusing specifically on the case of web applications. First, a benchmarking approach based on static code analysis tools is proposed. Several experiments are presented to evaluate the effectiveness of the proposed metrics, including a representative experiment where the challenge was the selection of the most secure application among a set of seven web forums. Based on the analysis of the limitations of such an approach, a generic approach for the definition of trustworthiness benchmarks for web applications is defined.
Styles: APA, Harvard, Vancouver, ISO, etc.
34

Ray, Donald James. "A Quantified Model of Security Policies, with an Application for Injection-Attack Prevention". Scholar Commons, 2016. http://scholarcommons.usf.edu/etd/6133.

Full text of the source
Abstract:
This dissertation generalizes traditional models of security policies, from specifications of whether programs are secure, to specifications of how secure programs are. This is a generalization from qualitative, black-and-white policies to quantitative, gray policies. Included are generalizations from traditional definitions of safety and liveness policies to definitions of gray-safety and gray-liveness policies. These generalizations preserve key properties of safety and liveness, including that the intersection of safety and liveness is a unique allow-all policy and that every policy can be written as the conjunction of a single safety and a single liveness policy. It is argued that the generalization provides several benefits, including that it serves as a unifying framework for disparate approaches to security metrics, and that it separates—in a practically useful way—specifications of how secure systems are from specifications of how secure users require their systems to be. To demonstrate the usefulness of the new model, policies for mitigating injection attacks (including both code- and noncode-injection attacks) are explored. These policies are based on novel techniques for detecting injection attacks that avoid many of the problems associated with existing mechanisms for preventing injection attacks.
Styles: APA, Harvard, Vancouver, ISO, etc.
35

Rieger, Pavel. "Právní a systémová analýza e-Governmentu". Doctoral thesis, Vysoká škola ekonomická v Praze, 2013. http://www.nusl.cz/ntk/nusl-202340.

Full text of the source
Abstract:
The goal of this doctoral thesis is to analyse some legal, technical and economic aspects related to the development of e-Government in the Czech Republic and to derive some general prerequisites for the successful development of e-Government services. The work contains a review of the legislation which has a direct or indirect impact on the development of e-Government, mainly the digital signature legislation, public administration information systems, general regulations on administrative procedures and some special regulations. The thesis analyses in detail currently implemented e-Government projects, especially the public administration contact points, the data inboxes, the authorized document conversion and the basic registers of public administration. Special attention is also devoted to questions of personal data security in information systems. Possible effects of e-Government are evaluated with regard to recipients of public administration, particularly citizens and legal persons. Measurable indicators (metrics) are proposed to measure the progress of individual e-Government projects, along with the procedure for their calculation. The thesis concludes that reliable measurement of efficiency requires quantifiable indicators on the cost side as well as on the benefits side. These indicators should be oriented both to costs and to the specific objectives of public administration. As for the methodology, the thesis uses the usual methods of research in the field of legal science; concretely, process analysis, cost analysis and metrics analysis, whose methodology is based primarily on economics, management and information management. Furthermore, case studies have been used to fulfil the objectives of the thesis and to verify its hypotheses.
Styles: APA, Harvard, Vancouver, ISO, etc.
36

Miani, Rodrigo Sanches 1983. "Um estudo sobre métricas e quantificação em segurança da informação". [s.n.], 2013. http://repositorio.unicamp.br/jspui/handle/REPOSIP/260948.

Full text of the source
Abstract:
Advisors: Leonardo de Souza Mendes, Bruno Bogaz Zarpelão
Doctoral thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Abstract: With the increase in the number and diversity of attacks, a critical concern for organizations is keeping their networks secure. To understand the actions that lead to successful attacks, and also how they can be mitigated, researchers should identify and measure the factors that influence both attackers and victims. Quantifying security is particularly important for constructing relevant metrics that support the decisions that need to be made to protect systems and networks. In this work, we aimed to propose solutions to support the development of security quantification models applied in real environments. Three different approaches were used to investigate the problem: identifying issues in existing methods, evaluating metrics using empirical analysis, and conducting a survey to investigate metrics in practice. Studies were conducted using data provided by the University of Maryland and by the Security Incident Response Team (CAIS) of the National Education and Research Network (RNP). Our results showed that organizations could better manage security by employing security metrics, and that future directions in this field lie in the development of studies on real systems.
Doctorate
Telecommunications and Telematics
Doctor in Electrical Engineering
Styles: APA, Harvard, Vancouver, ISO, etc.
37

Edge, Crystal. "Quantitative Assessment of the Modularization of Security Design Patterns with Aspects". NSUWorks, 2010. http://nsuworks.nova.edu/gscis_etd/142.

Full text of the source
Abstract:
Following the success of software engineering design patterns, security patterns are a promising approach to aid in the design and development of more secure software systems. At the same time, recent work on aspect-oriented programming (AOP) suggests that the cross-cutting nature of software security concerns makes it a good candidate for AOP techniques. This work uses a set of software metrics to evaluate and compare object-oriented and aspect-oriented implementations of five security patterns--Secure Base Action, Intercepting Validator, Authentication Enforcer, Authorization Enforcer, and Secure Logger. Results show that complete separation of concerns was achieved with the aspect-oriented implementations and the modularity of the base application was improved, but at a cost of increased complexity in the security pattern code. In most cases the cohesion, coupling, and size metrics were improved for the base application but worsened for the security pattern package. Furthermore, a partial aspect-oriented solution, where the pattern code is decoupled from the base application but not completely encapsulated by the aspect, demonstrated better modularity and reusability than a full aspect solution. This study makes several contributions to the fields of aspect-oriented programming and security patterns. It presents quantitative evidence of the effect of aspectization on the modularity of security pattern implementations. It augments four existing security pattern descriptions with aspect-oriented solution strategies, complete with new class and sequence diagrams based on proposed aspect-oriented UML extensions. Finally, it provides a set of role-based refactoring instructions for each security pattern, along with a proposal for three new basic generalization refactorings for aspects.
Styles: APA, Harvard, Vancouver, ISO, etc.
38

Wang, Hsiu-Chung. "Toward a Heuristic Model for Evaluating the Complexity of Computer Security Visualization Interface". Digital Archive @ GSU, 2006. http://digitalarchive.gsu.edu/cs_theses/35.

Full text of the source
Abstract:
Computer security visualization has gained much attention in the research community in the past few years. However, the advancement in security visualization research has been hampered by the lack of standardization in visualization design, centralized datasets, and evaluation methods. We propose a new heuristic model for evaluating the complexity of computer security visualizations. This complexity evaluation method is designed to evaluate the efficiency of performing visual search in security visualizations in terms of measuring critical memory capacity load needed to perform such tasks. Our method is based on research in cognitive psychology along with characteristics found in a majority of the security visualizations. The main goal for developing this complexity evaluation method is to guide computer security visualization design and compare different visualization designs. Finally, we compare several well known computer security visualization systems. The proposed method has the potential to be extended to other areas of information visualization.
Styles: APA, Harvard, Vancouver, ISO, etc.
39

Aparicio-Navarro, Francisco J. "Using metrics from multiple layers to detect attacks in wireless networks". Thesis, Loughborough University, 2014. https://dspace.lboro.ac.uk/2134/16309.

Full text of the source
Abstract:
The IEEE 802.11 networks are vulnerable to numerous wireless-specific attacks. Attackers can implement MAC address spoofing techniques to launch these attacks while masquerading behind a false MAC address. The implementation of Intrusion Detection Systems has become fundamental in the development of security infrastructures for wireless networks. This thesis proposes the design of a novel security system that makes use of metrics from multiple layers of observation to produce a collective decision on whether an attack is taking place. The Dempster-Shafer Theory of Evidence is the data fusion technique used to combine the evidence from the different layers. A novel, unsupervised and self-adaptive Basic Probability Assignment (BPA) approach, able to automatically adapt its belief assignment to the current characteristics of the wireless network, is proposed. This BPA approach is composed of three different and independent statistical techniques, which are capable of identifying the presence of attacks in real time. Despite the lightweight processing requirements, the proposed security system produces outstanding detection results, generating high intrusion detection accuracy and a very low number of false alarms. A thorough description of the generated results, for all the considered datasets, is presented in this thesis. The effectiveness of the proposed system is evaluated using different types of injection attacks. Regarding one of these attacks, to the best of the author's knowledge, the security system presented in this thesis is the first able to efficiently identify the Airpwn attack.
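The fusion step described above can be illustrated with Dempster's rule of combination. The following sketch is not the thesis's implementation: it fuses two hand-picked mass functions over the frame {attack, normal}, whereas in the actual system the BPA values are derived adaptively from network measurements rather than fixed as here.

```python
from itertools import product

def dempster_combine(m1, m2):
    # Dempster's rule of combination for mass functions keyed by frozensets.
    # Masses on intersecting focal elements are multiplied and summed;
    # conflicting (empty-intersection) mass is renormalised away.
    combined, conflict = {}, 0.0
    for (a, pa), (b, pb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + pa * pb
        else:
            conflict += pa * pb
    k = 1.0 - conflict
    return {s: v / k for s, v in combined.items()}

A, N = frozenset({"attack"}), frozenset({"normal"})
U = A | N  # uncertainty: mass assigned to the whole frame

# Invented BPAs standing in for evidence from two observation layers.
layer1 = {A: 0.6, N: 0.1, U: 0.3}
layer2 = {A: 0.5, N: 0.2, U: 0.3}
fused = dempster_combine(layer1, layer2)
print(fused[A] > fused[N])  # True: both layers lean towards "attack"
```

Combining evidence this way sharpens agreement (the fused mass on "attack" exceeds either layer's individual belief) while discounting the conflicting portion, which is the property the multi-layer decision relies on.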
Styles: APA, Harvard, Vancouver, ISO, etc.
40

Birch, Huw. "A study into the feasibility of local renewable energy systems with storage, using security and sustainability metrics for optimisation and evaluation". Thesis, University of Sheffield, 2016. http://etheses.whiterose.ac.uk/16725/.

Full text of the source
Abstract:
The aim of this thesis was to develop tools for evaluating the potential sustainability and security of renewable energy systems in the UK, with a long-term view of maximising the potential renewable energy penetration of wind and solar through the deployment of electrical energy storage. Using computer-modelled renewable energy systems, a number of system variables are considered, such as system size, energy sources (solar and/or wind), type of demand load, and capacity and type of storage technology. The results allow for a broad comparison of different types of renewable energy systems, and their optimisation. The optimisation methodology is also critically evaluated with consideration of its robustness and applicability, using two alternative metrics to measure system energy security and two different measurements of energy return on investment (EROI) to measure sustainability. When comparing renewable energy systems, the results showed that large systems that predominantly got their power from wind sources were the most sustainable and secure, using optimisation methods that penalised both their overproduction and underproduction. Nearly all systems benefit from the use of electrical energy storage, without impacting too much on sustainability levels, but larger wind systems used less storage, suffering lower energy security as a result. System performance can best be improved by developing solar power technologies with lower embodied energy costs, followed by a reduction in the embodied energy of storage technology. The former will enable more effective use of storage methods, while the latter allows for larger storage capacities with less environmental impact. Sustainability and energy security were given equal priority in the optimisation; however, it was found that more sustainable generation technologies were preferable to more secure technologies, as there is more scope to improve energy return on investment than security.
Therefore there is a limit, generally around 45-85% (depending on the size of the system and the choice of technology), to the proportion of time that renewable energy systems using variable energy sources can be autonomous, meaning that energy backup from the grid and/or dispatchable sources is still required.
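The EROI sustainability measure used above is, at its core, a ratio of lifetime energy delivered to lifetime embodied energy. The following is a minimal sketch with purely hypothetical figures for a wind-plus-storage system; the thesis itself uses two different EROI measurements, neither of which is reproduced here.

```python
def eroi(energy_delivered_kwh, embodied_energy_kwh):
    # Energy return on investment: usable energy delivered over the
    # lifetime, divided by the embodied energy of generation plus storage.
    return energy_delivered_kwh / embodied_energy_kwh

# Hypothetical lifetime figures, invented for illustration.
delivered = 900_000         # kWh delivered over the system lifetime
embodied = 30_000 + 15_000  # kWh embodied in turbines and in storage
print(eroi(delivered, embodied))  # 20.0
```

The sketch makes the trade-off visible: adding storage capacity raises the denominator, so storage with lower embodied energy directly improves the sustainability score, as the abstract argues.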
Styles: APA, Harvard, Vancouver, ISO, etc.
41

Miani, Rodrigo Sanches. "Aplicação de metricas a analise de segurança em redes metropolitanas de acesso aberto". [s.n.], 2009. http://repositorio.unicamp.br/jspui/handle/REPOSIP/259460.

Full text of the source
Abstract:
Advisor: Leonardo de Souza Mendes
Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Abstract: Information security has a direct influence on any successful deployment of metropolitan broadband access networks. Efficient methods are required for security analysis of metropolitan networks at all levels: organizational, physical and system. This work proposes the development and application of specific security metrics for metropolitan broadband access networks that aim to measure the efficiency of security programs and support action planning against detected problems. The approach presented here comprises metrics developed for these networks and parameters for their definition, such as models for calculating a metric's security indicator. This work also presents results achieved from applying the reported metrics to establish security policies in the metropolitan broadband access network of Pedreira, a city located in the state of São Paulo, Brazil. These results show that well-formed security metrics can be effective in detecting vulnerabilities and resolving security issues.
Master's degree
Telecommunications and Telematics
Master in Electrical Engineering
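The abstract mentions models for computing a metric's security indicator. As a purely hypothetical illustration (the metric names, weights, and weighted-average aggregation are our own assumptions, not taken from the dissertation), such an indicator calculation might look like:

```python
# Hypothetical sketch of a metric -> indicator calculation, in the spirit of
# the indicator models described in the abstract. All names and numbers here
# are illustrative assumptions.

def security_indicator(measurements, weights):
    """Aggregate normalized metric measurements (0..1) into one indicator."""
    if set(measurements) != set(weights):
        raise ValueError("each measurement needs a weight")
    total = sum(weights.values())
    return sum(measurements[m] * weights[m] for m in measurements) / total

metrics = {"patch_coverage": 0.9, "incident_rate": 0.4, "policy_compliance": 0.7}
weights = {"patch_coverage": 2.0, "incident_rate": 3.0, "policy_compliance": 1.0}
indicator = security_indicator(metrics, weights)
# (0.9*2.0 + 0.4*3.0 + 0.7*1.0) / 6.0 = 3.7 / 6
```

A network administrator could track such an indicator over time to see whether the security program is improving.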
42

Nascimento, Tiago Belmonte. "Uma proposta de desenvolvimento de métricas para a rede da Unipampa". Universidade Federal do Pampa, 2013. http://dspace.unipampa.edu.br:8080/xmlui/handle/riu/246.

Full text available
Summary:
One of the greatest challenges in establishing the Universidade Federal do Pampa as a public higher-education institution in the interior of Rio Grande do Sul is structuring its data network. Because of its peculiarities, the UNIPAMPA computer network needs efficient controls to guarantee stable and secure operation. The use of reliable communication systems interconnecting all of these decentralized units is therefore essential. In general, the reliability of communication systems can be improved on three major fronts: 1) manipulation and encoding of information, 2) improvement of resources such as power and bandwidth in the physical communication channels, and 3) collection of metrics at the transmission and reception points. To contribute to this process, our work consisted of elaborating a proposal for the use of metrics in this network's security policy, making vulnerability detection more efficient and guiding new security policies and investments. The 10 metrics presented, and the method used to generate them, can be applied to any network with characteristics similar to Unipampa's.
One of the biggest challenges in the implementation of the University of Pampa as a public university in the countryside of the state of Rio Grande do Sul is the structure of its data network. Due to its peculiarities, Unipampa's computer network needs efficient controls to ensure stable and secure operation. Thus, it becomes essential to use reliable communication systems that interconnect all of these decentralized units. In general, the reliability of communication systems can be improved in three major areas of action: 1) manipulation and encoding of information, 2) improving resources such as power and bandwidth in physical communication channels, and 3) surveying metrics at points of transmission and reception. Aiming to contribute to this process, our research consisted in elaborating a proposal for the use of metrics in the security policy of this network, making vulnerability detection more efficient and guiding new security policies and investments. The 10 metrics presented, and the method used to generate them, may be applied to any network with characteristics similar to the Unipampa network.
43

Voronkov, Artem. "Usable Firewall Rule Sets". Licentiate thesis, Karlstads universitet, Institutionen för matematik och datavetenskap (from 2013), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-64703.

Full text available
Summary:
Correct functioning is the most important requirement for any system. Nowadays there are many threats to computer systems that undermine confidence in them and, as a result, force users to abandon them. Hence, a system cannot be trusted if proper security is not provided. Firewalls are an essential component of network security, and there is a clear need for their use. The level of security a firewall provides depends on how well it is configured. Thus, to ensure the proper level of network security, firewalls must be configured correctly. However, setting up a firewall correctly is a very challenging task: firewall configuration files can be hard to understand even for system administrators. This is due to the structure of these files: the higher a rule's position in the rule set, the higher its priority. A challenging problem arises when a new rule is added to the set and a proper position for it must be found. Sooner or later a misconfiguration will be made, leading to inadequate system security. This brings us to the usability problems associated with the configuration of firewalls. The overall aim of this thesis is to identify existing firewall usability gaps and to mitigate them. To achieve the first part of the objective, we conducted a series of interviews with system administrators, asking them about the problems they face when dealing with firewalls. After having ascertained that the usability problems exist, we turned to the literature to get an understanding of the state of the art of the field and therefore conducted a systematic literature review. This review presents a classification of available solutions and identifies open challenges in this area. To achieve the second part of the objective, we started working on one identified challenge.
A set of usability metrics was proposed and mathematically formalized. A strong correlation was identified between our metrics and how system administrators describe usability.
Network security is an important aspect that must be taken into account. Firewalls are systems used to make sure that authorized network traffic is allowed and unauthorized traffic is prohibited. However, setting up a firewall correctly is a challenging task: firewall configuration files can be hard to understand even for system administrators. The overall aim of this thesis is to identify firewall usability gaps and to mitigate them. To achieve the first part of the objective, we conduct a series of interviews with system administrators, asking them about the problems they face when dealing with firewalls. After having ascertained that the usability problems exist, we conduct a systematic literature review to get an understanding of the state of the art of the field. This review classifies available solutions and identifies open challenges. To achieve the second part of the objective, a set of usability metrics is proposed and mathematically formalized. A strong correlation between our metrics and how system administrators describe usability is identified.
HITS, 4707
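The rule-ordering problem described above (first match wins, so a rule's position determines its priority) can be illustrated with a minimal first-match evaluator. The rule fields and packets below are simplified, invented examples, not the thesis's formalization:

```python
# Minimal first-match firewall model illustrating why rule order matters.
# Rule and packet fields are simplified assumptions for illustration.

def evaluate(rules, packet):
    """Return the action of the first rule matching the packet ('drop' default)."""
    for rule in rules:
        if all(rule[k] in (packet.get(k), "any") for k in ("src", "dst", "port")):
            return rule["action"]
    return "drop"  # default-deny policy

rules = [
    {"src": "10.0.0.5", "dst": "any", "port": "any", "action": "drop"},
    {"src": "any", "dst": "192.168.1.2", "port": 443, "action": "accept"},
]

pkt = {"src": "10.0.0.5", "dst": "192.168.1.2", "port": 443}
evaluate(rules, pkt)                  # 'drop': the specific rule shadows the accept
evaluate(list(reversed(rules)), pkt)  # 'accept': reordering changes the decision
```

The same packet is handled differently depending only on where a new rule is placed, which is exactly the kind of misconfiguration hazard the thesis's usability metrics target.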
44

Holm, Hannes. "A Framework and Calculation Engine for Modeling and Predicting the Cyber Security of Enterprise Architectures". Doctoral thesis, KTH, Industriella informations- och styrsystem, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-140525.

Full text available
Summary:
Information Technology (IT) is a cornerstone of our modern society and essential for governments' management of public services, economic growth and national security. Consequently, it is important that IT systems are kept in a dependable and secure state. Unfortunately, as modern IT systems are typically composed of numerous interconnected components, including the personnel and processes that use or support them (often referred to as an enterprise architecture), this is not a simple endeavor. To make matters worse, there are malicious actors who seek to exploit vulnerabilities in the enterprise architecture to conduct unauthorized activity within it. Various models have been proposed by academia and industry to identify and mitigate vulnerabilities in enterprise architectures; however, so far none has provided a sufficiently comprehensive scope. The contribution of this thesis is a modeling framework and calculation engine that can be used as support by enterprise decision makers, e.g., chief information security officers, in regard to cyber security matters. In summary, the contribution can be used to model and analyze the vulnerability of enterprise architectures, and to provide mitigation suggestions based on the resulting estimates. The contribution has been tested in real-world cases and has been validated on both a component level and a system level; the results of these studies show that it is adequate in terms of supporting enterprise decision making. This is a composite thesis of eight papers. Paper 1 describes a method and dataset that can be used to validate the contribution described in this thesis and models similar to it. Paper 2 identifies which statistical distributions best fit the time required to compromise computer systems. Paper 3 describes estimates of the effort required to discover novel web application vulnerabilities. Paper 4 describes estimates of the possibility of circumventing web application firewalls.
Paper 5 describes a study of the time required by an attacker to obtain critical vulnerabilities and exploits for compiled software. Paper 6 presents the effectiveness of seven commonly used automated network vulnerability scanners. Paper 7 describes the ability of the signature-based intrusion detection system Snort to detect attacks that are newer, or older, than its rule set. Finally, Paper 8 describes a tool that can be used to estimate the vulnerability of enterprise architectures; this tool is founded upon the results presented in Papers 1–7.
Information technology (IT) is a cornerstone of our modern society and fundamental to states' management of public services, economic growth and national security. It is therefore important that IT systems are kept in a dependable and secure state. Since modern IT systems usually consist of a multitude of different integrated components, including people and processes that use or support the system (often referred to as an enterprise architecture), this is unfortunately no simple task. To make matters worse, there are also malicious actors who aim to exploit vulnerabilities in the enterprise architecture to conduct unauthorized activity within it. Various models have been proposed by academia and industry to identify and treat vulnerabilities in enterprise architectures, but no model is yet sufficiently comprehensive. The contribution presented in this thesis is a modeling framework and a calculation engine that can be used as support by organizational decision makers with respect to security matters. In summary, the contribution can be used to model and analyze the vulnerability of enterprise architectures, and to provide improvement suggestions based on its estimates. The contribution has been tested in case studies and validated at both the component level and the system level; the results from these studies show that it is suitable for supporting organizational decision making. This is a composite thesis of eight papers. Paper 1 describes a method and a dataset that can be used to validate the thesis's contribution and other models like it. Paper 2 presents which statistical distributions are best suited to describe the time required to compromise a computer. Paper 3 describes estimates of the time required to discover new vulnerabilities in web applications.
Paper 4 describes estimates of the possibility of circumventing web application firewalls. Paper 5 describes a study of the time required for an attacker to acquire critical vulnerabilities and exploit programs for compiled software. Paper 6 presents the effectiveness of seven commonly used tools for automatically identifying vulnerabilities in networks. Paper 7 describes the ability of the signature-based intrusion detection system Snort to detect attacks that are newer, or older, than its rule set. Finally, Paper 8 describes a tool that can be used to estimate the vulnerability of enterprise architectures; this tool is based on the results presented in Papers 1–7.
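Paper 2 asks which statistical distribution best fits time-to-compromise data. A toy maximum-likelihood comparison, with invented data and only two candidate models (far simpler than the paper's analysis), can sketch the idea:

```python
import math

# Toy sketch of model selection for time-to-compromise data by maximum
# likelihood, in the spirit of Paper 2. The data values are invented.

times = [0.5, 1.0, 0.7, 6.0, 0.3, 1.5]  # hypothetical hours to compromise

def exp_loglik(data):
    lam = len(data) / sum(data)          # MLE rate for an exponential model
    return sum(math.log(lam) - lam * t for t in data)

def uniform_loglik(data):
    b = max(data)                        # MLE upper bound for Uniform(0, b)
    return -len(data) * math.log(b)

candidates = [("exponential", exp_loglik(times)), ("uniform", uniform_loglik(times))]
best = max(candidates, key=lambda kv: kv[1])
# 'best' names the model with the higher log-likelihood on this sample
```

A real study would compare many candidate distributions on empirical data and use a principled criterion (e.g. penalized likelihood) rather than raw log-likelihood.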


45

Wonjiga, Amir Teshome. "User-centric security monitoring in cloud environments". Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1S080.

Full text available
Summary:
Service providers acknowledge the trust issue and provide a guarantee through a contract called a Service Level Agreement (SLA). Almost all existing SLAs address only the functional features of the cloud and therefore do not guarantee the security aspect of tenants' hosted services. As the cloud continues to become ever more integrated, there is a need for user-specific security monitoring services based on tenants' requirements. This security monitoring aspect also requires providers to offer guarantees. In this thesis, we contribute to the inclusion of user-centric security monitoring terms in cloud SLAs. We design extensions to an existing SLA language called Cloud SLA (CSLA). Our extension, called Extended CSLA (ECSLA), allows tenants to describe their security monitoring requirements in terms of vulnerabilities. More precisely, a security monitoring service is described as a relation between the user's requirements expressed as vulnerabilities, a software product exhibiting these vulnerabilities, and an infrastructure where the software runs. We propose a solution that reduces the number of evaluations needed relative to the number of possible configurations. The proposed solution introduces two new ideas. First, we design a method for building a knowledge base that relies on clustering vulnerabilities using heuristics. Second, we propose a model to quantify the interference between detection rules associated with different vulnerabilities. Using these two methods, we can estimate the performance of a monitoring device with few evaluations compared to a naive approach.
The metrics used in our SLA terms take into account the operational environment of the security monitoring devices. To account for unpredictable parameters of the operational environment, we propose an estimation mechanism in which the performance of a monitoring device is measured using known values of these parameters, and the result is used to model its performance and estimate it for unknown parameter values. An SLA definition contains the model, which can be used whenever the measurement is performed. We propose an in situ evaluation method for the configuration of a security monitoring device. It makes it possible to verify the performance of a security monitoring infrastructure configuration in a production environment. The method uses an attack injection technique, but the injected attacks do not affect the production virtual machines. We have implemented and evaluated the proposed verification method. The method can be used by either party to compute the required metric. However, it requires cooperation between tenants and service providers. To reduce the dependency between them during verification, we propose using a secure logical component. The proposed use of a secure logical component for verification is illustrated in an SLA addressing data integrity in clouds. The method uses a secure, trusted and distributed ledger to store evidence of data integrity. It makes it possible to verify data integrity without relying on the other party. If there is a conflict between a tenant and a provider, the evidence can be used to resolve it.
Migrating to the cloud means losing full control of the physical infrastructure, as the cloud service provider (CSP) is responsible for managing the infrastructure, including its security. As this forces tenants to rely on CSPs for the security of their information system, it creates a trust issue. CSPs acknowledge the trust issue and provide a guarantee through a Service Level Agreement (SLA). The agreement describes the provided service and the penalties for cases of violation. Almost all existing SLAs address only the functional features of the cloud and thus do not guarantee the security aspect of tenants' hosted services. Security monitoring is the process of collecting and analyzing indicators of potential security threats, then triaging these threats with appropriate action. It is highly desirable for CSPs to provide user-specific security monitoring services based on the requirements of a tenant. In this thesis we present our contribution to include user-centric security monitoring terms in cloud SLAs. This requires performing different tasks in the cloud service life cycle, starting before the actual service deployment and lasting until the end of the service. Our contributions are as follows: we design extensions to an existing SLA language called Cloud SLA (CSLA). Our extension, called Extended CSLA (ECSLA), allows tenants to describe their security monitoring requirements in terms of vulnerabilities. More precisely, a security monitoring service is described as a relation between user requirements expressed as vulnerabilities, a software product having the vulnerabilities, and an infrastructure where the software is running. To offer security monitoring SLAs, CSPs need to measure the performance of their security monitoring capability under different configurations. We propose a solution that reduces the required number of evaluations compared to the number of possible configurations. The proposed solution introduces two new ideas.
First, we design a knowledge-base building method that uses clustering to group vulnerabilities together using heuristics. Second, we propose a model to quantify the interference between the operations of monitoring different vulnerabilities. Using these two methods we can estimate the performance of a monitoring device with few evaluations compared to the naive approach. The metrics used in our SLA terms consider the operational environment of the security monitoring devices. In order to account for non-deterministic operational environment parameters, we propose an estimation mechanism in which the performance of a monitoring device is measured using known parameter values, and the result is used to model its performance and estimate it for unknown values of that parameter. An SLA definition contains the model, which can be used whenever the measurement is performed. We propose an in situ evaluation method for the security monitoring configuration. It can evaluate the performance of a security monitoring setup in a production environment. The method uses an attack injection technique, but the injected attacks do not affect the production virtual machines. We have implemented and evaluated the proposed method. The method can be used by either of the parties to compute the required metric. However, it requires cooperation between tenants and CSPs. In order to reduce the dependency between tenants and CSPs during verification, we propose using a logical secure component. The proposed use of a logical secure component for verification is illustrated in an SLA addressing data integrity in clouds. The method uses a secure, trusted and distributed ledger (blockchain) to store evidence of data integrity. The method allows checking data integrity without relying on the other party. If there is any conflict between tenants and CSPs, the evidence can be used to resolve it.
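The data-integrity idea above, storing evidence in a ledger so that integrity can be checked without trusting the other party, can be illustrated with a hash-based toy. A plain dict stands in for the distributed ledger, and the digest scheme is our own assumption:

```python
import hashlib

# Hedged sketch of the integrity-evidence idea: store a digest of the data as
# evidence, then later verify without trusting the storage provider. The
# "ledger" is a dict standing in for the tamper-evident distributed ledger.

def evidence(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

ledger = {}
ledger["doc-1"] = evidence(b"tenant backup, version 1")

def verify(ledger, key, data: bytes) -> bool:
    return ledger.get(key) == evidence(data)

verify(ledger, "doc-1", b"tenant backup, version 1")  # True: data unchanged
verify(ledger, "doc-1", b"tenant backup, version 2")  # False: integrity violated
```

In the thesis's setting, the ledger's tamper evidence is what lets either party use the stored digest as proof in a dispute; here the dict merely illustrates the lookup-and-compare step.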
46

Habib, Zadeh Esmaeil. "Modelling and Quantitative Analysis of Performance vs Security Trade-offs in Computer Networks: An investigation into the modelling and discrete-event simulation analysis of performance vs security trade-offs in computer networks, based on combined metrics and stochastic activity networks (SANs)". Thesis, University of Bradford, 2017. http://hdl.handle.net/10454/17412.

Full text available
Summary:
Performance modelling and evaluation has long been considered of paramount importance to computer networks, from design through development, tuning and upgrading. These networks, however, have evolved significantly since their introduction a few decades ago. The ubiquitous Web in particular, with fast-emerging, unprecedented services, has become an integral part of everyday life. All of this, however, comes at the cost of substantially increased security risks, and cybercrime is now a pervasive threat for today's internet-dependent societies. Given the frequency and variety of attacks, as well as the threat of new, more sophisticated and destructive future attacks, security has become a prevalent and mounting concern in the design and management of computer networks; it is now equally important, if not more so, than performance. Unfortunately, there is no one-size-fits-all solution to security challenges: one security defence system can only help to battle against a certain class of security threats. For overall security, a holistic approach including both reactive and proactive security measures is commonly suggested. As such, network security may have to combine multiple layers of defence at the edge, in the network, and in its constituent individual nodes. Performance and security, however, are inextricably intertwined, as security measures require considerable amounts of computational resources to execute. Moreover, in the absence of appropriate security measures, frequent security failures are likely to occur, which may catastrophically affect network performance, not to mention serious data breaches among many other security-related risks. In this thesis, we study optimisation problems for the trade-offs between performance and security as they exist between performance and dependability. While performance metrics are widely studied and well established, those of security are rarely defined in a strict mathematical sense.
We therefore aim to conceptualise and formulate security by analogy with dependability so that, like performance, it can be modelled and quantified. Employing a stochastic modelling formalism, we propose a new model for a single node of a generic computer network that is subject to various security threats. We believe this nodal model captures both the performance and security aspects of a computer node more realistically, in particular the intertwinements between them. We adopt a simulation-based modelling approach in order to identify, on the basis of combined metrics, optimal trade-offs between performance and security and to facilitate more sophisticated trade-off optimisation studies in the field. We find that system parameters can be chosen that optimise these abstract combined metrics while being optimal neither for performance nor for security individually. Based on the proposed simulation modelling framework, credible numerical experiments are carried out, indicating scope for further extensions towards a systematic performance vs security tuning of computer networks.
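The core idea of a combined performance/security metric, whose optimum coincides with neither objective alone, can be shown with a deliberately simple invented model. The linear overhead and product aggregation below are illustrative assumptions, not the thesis's stochastic activity network model:

```python
# Toy illustration of a combined performance/security metric: more checking
# effort s raises security but costs throughput. All functions are invented
# assumptions for illustration.

def performance(s):  # normalized throughput under security overhead
    return 1.0 - 0.6 * s

def security(s):     # normalized protection level
    return s

def combined(s):     # product form rewards balancing both objectives
    return performance(s) * security(s)

grid = [i / 10 for i in range(11)]   # candidate settings for s in [0, 1]
best_s = max(grid, key=combined)
# best_s is interior: it maximizes neither performance (s=0) nor security (s=1)
```

This mirrors the thesis's observation that parameters optimizing a combined metric are optimal for neither performance nor security individually.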
47

Baker, Wade Henderson. "Toward a Decision Support System for Measuring and Managing Cybersecurity Risk in Supply Chains". Diss., Virginia Tech, 2017. http://hdl.handle.net/10919/85128.

Full text available
Summary:
Much of the confusion about the effectiveness of information security programs concerns not only how to measure, but also what to measure: an issue of equivocality. Thus, to lower uncertainty for improved decision-making, it is first essential to reduce equivocality by defining, expanding, and clarifying risk factors so that metrics, the "necessary measures," can be unambiguously applied. We formulate a system that (1) allows threats to be accurately measured and tracked, (2) enables the impacts and costs of successful threats to be determined, and (3) aids in evaluating the effectiveness and return on investment of countermeasures. We then examine the quality of controls implemented to mitigate cyber risk and study how effectively they reduce the likelihood of security incidents. Improved control quality was shown to reduce the likelihood of security incidents, yet the results indicate that investing in maximum quality is not necessarily the most efficient use of resources. The next manuscript expands the discussion of cyber risk management beyond single organizations by surveying perceptions and experiences of risk factors related to third parties. To validate these findings, we undertake an in-depth investigation of nearly 1,000 real-world data breaches occurring over a ten-year period. It provides the robust data model and rich database required by a decision support system for cyber risk in the extended enterprise. To our knowledge, it is the most comprehensive field study ever conducted on the subject. Finally, we incorporate these insights, data, and factors into a simulation model that enables us to study the transfer of cyber risk across different supply chain configurations and to draw important managerial implications.
Ph. D.
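The relationship studied here, better control quality lowering incident likelihood weighed against control cost, can be sketched with a toy Monte Carlo estimate of expected annual loss. All probabilities and loss figures below are invented for illustration:

```python
import random

# Hypothetical Monte Carlo sketch of the risk picture the dissertation
# quantifies: improved controls lower incident likelihood, reducing expected
# loss. Every number here is an invented assumption.

def simulate_annual_loss(p_incident, loss_per_incident, trials=10_000, seed=7):
    rng = random.Random(seed)  # fixed seed so both scenarios see the same draws
    total = sum(loss_per_incident for _ in range(trials) if rng.random() < p_incident)
    return total / trials

baseline = simulate_annual_loss(p_incident=0.30, loss_per_incident=100_000)
improved = simulate_annual_loss(p_incident=0.12, loss_per_incident=100_000)
risk_reduction = baseline - improved   # value delivered by the better controls
```

Comparing `risk_reduction` against the cost of the improved controls gives the kind of return-on-investment judgment the abstract describes, including why maximum control quality may not be the most efficient spend.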
48

Prosperi, Paolo. "Mesures de la sécurité alimentaire et de l'alimentation durable en Méditerranée, basées sur les approches de la vulnérabilité et de la résilience". Electronic Thesis or Diss., Montpellier, SupAgro, 2015. http://www.supagro.fr/theses/extranet/15-0003_Prosperi.pdf.

Full text available
Summary:
Recurrent food crises, global changes, the depletion of natural resources and micronutrient deficiencies place food security and environmental sustainability at the center of the political agenda. Analyses of the dynamic interactions between consumption patterns and environmental concerns have received considerable attention. Socioeconomic and biophysical changes affect the functions of the food system, including food and nutrition security. The sustainability of the food system is thus threatened. Building sustainable food systems has become a major effort to redirect food systems and policies towards suitable objectives and improved societal well-being. Identifying and modeling the intrinsic properties of the food system can help track progress towards sustainability and support the design of transformation and innovation policies. The general objective of this thesis is to analyze the sustainability of the food system through the identification of a set of indicators in the Mediterranean region. Specifically, it aims to develop a multidimensional framework for assessing the sustainability of food systems, to identify the main variables for formalizing and operationalizing the abstract and multidimensional concept of sustainable food systems, and to define a set of indicators for assessing the sustainability of these systems at a subregional level. Taking a broad view of sustainability, the methodological approach builds on the theories of vulnerability and resilience. Following the steps of vulnerability assessment for global change, a causal analysis was carried out at a geographic scale covering three Mediterranean countries: Spain, France and Italy.
Eight causal models of vulnerability were identified. A Delphi survey was then conducted to select indicators. A hierarchical conceptual framework was identified for modeling the complex relationships between food and nutrition security and sustainability. It was thus possible to formalize eight causal models of vulnerability and resilience. In addition, the intrinsic properties of the food system were identified that shape the direct interactions between drivers of change (depletion of water resources; loss of biodiversity; price volatility; changes in consumption patterns) and food and nutrition security outcomes at the subregional level (nutritional quality of the food supply; economic access; energy balance; satisfaction of preferences). Each interaction was broken down into exposure, sensitivity and resilience. This theoretical framework was operationalized through the identification of a set of 136 indicators. The Delphi study revealed majority-level, low, moderate or strong consensus on indicators in 75% of the interactions. The results obtained, in terms of overall response rates, expert participation rates, and consensus on indicators, are considered more than satisfactory. The experts also confirmed the validity of the proposed outcomes and interactions. This theoretical modeling exercise, together with the Delphi survey, enabled the identification of a first set of indicators of sustainable food systems, and the achievement of consensus while avoiding the risk of an individual and subjective evaluation, in order to support decision-making.
The operationalization of the theories of vulnerability and resilience, through an indicator-based approach, provided a specific route for analyzing problems of food system sustainability.
Recurrent food crises and global change, along with habitat loss and micronutrient deficiencies, have placed food security and environmental sustainability at the top of the political agenda. Analyses of the dynamic interlinkages between food consumption patterns and environmental concerns have recently received considerable attention from the international community. Socioeconomic and biophysical changes affect the functions of the food system, including food and nutrition security. The sustainability of the food system is at risk. Building sustainable food systems has become a key effort to redirect our food systems and policies towards better-adjusted goals and improved societal welfare. Food systems involve multiple interactions between human and natural components. The systemic nature of these interactions calls for systems approaches and integrated assessment tools. Identifying and modeling the intrinsic properties of the food system can help track progress towards sustainability and set policies towards positive transformations. The general objective of this thesis is to analyze and explore the sustainability of the food system through a set of metrics identified at the level of the Mediterranean region. The specific aims consist of developing a multidimensional framework to evaluate the sustainability of food systems and diets, identifying the main variables needed to formalize and operationalize the abstract and multidimensional concept of sustainable food systems, and defining metrics for assessing the sustainability of food systems and diets at a subregional level. Through a broad understanding of sustainability, the methodological approach of this thesis builds on the theories of vulnerability and resilience. Following the steps of global change vulnerability assessment, a causal factor analysis is presented for three Mediterranean countries, namely Spain, France and Italy.
Formulating "what is vulnerable to what" hypotheses, we identified eight causal models of vulnerability. A three-round Delphi survey was then applied to select indicators on the basis of the vulnerability/resilience theoretical framework. A conceptual hierarchical framework was identified for modeling the complex relationships between food and nutrition security and sustainability, and for developing potential indicators of sustainable diets and food systems. A feedback-structured framework of the food system formalized the eight selected causal models of vulnerability and resilience and identified intrinsic properties of the food system, shaping the interactions in which a set of drivers of change (water depletion; biodiversity loss; food price volatility; changes in food consumption patterns) directly affect food and nutrition security outcomes at a subregional level (nutritional quality of food supply; affordability of food; dietary energy balance; satisfaction of cultural food preferences). Each interaction was disentangled into exposure, sensitivity and resilience. This theoretical framework was operationalized through the identification of a set of 136 indicators. The Delphi study revealed low, medium, and high levels of consensus and majority on indicators in 75% of the 24 initial interactions. The results obtained in terms of global response, expert participation rates, and consensus on indicators were satisfactory. Experts also confirmed, with positive feedback, the appraisal of the framework's components. This theoretical modeling exercise and the Delphi survey allowed the identification of a first suite of indicators, moving beyond single and subjective evaluation and reaching consensus on metrics of sustainable diets and food systems to support decision-making.
The operationalization of the theories of vulnerability and resilience, through an indicator-based approach, can contribute to further analyses of the socioeconomic and biophysical aspects and interlinkages concerning the sustainability of diets and food systems.
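Consensus in a Delphi round is typically screened per indicator, e.g. by the share of experts rating it relevant and by the dispersion of ratings. A minimal sketch of such a screen (the function name and thresholds are illustrative assumptions, not the thesis protocol):

```python
import statistics

def delphi_consensus(ratings, agree_threshold=4, majority=0.75, max_iqr=1.0):
    """Classify consensus on one indicator from expert Likert ratings (1-5).

    Consensus requires both a qualified majority rating the indicator
    as relevant (>= agree_threshold) and low dispersion (IQR <= max_iqr).
    Thresholds here are illustrative, not the thesis's actual criteria.
    """
    share_agree = sum(r >= agree_threshold for r in ratings) / len(ratings)
    q = statistics.quantiles(ratings, n=4)  # Q1, median, Q3
    iqr = q[2] - q[0]
    return share_agree >= majority and iqr <= max_iqr

# Illustrative panel of nine expert ratings for one indicator
print(delphi_consensus([5, 4, 4, 5, 4, 3, 4, 5, 4]))  # prints True
```

In a three-round design, indicators failing this screen would be fed back to the panel with aggregate statistics for re-rating in the next round.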
49

Mohd, Saudi Madihah. "A new model for worm detection and response : development and evaluation of a new model based on knowledge discovery and data mining techniques to detect and respond to worm infection by integrating incident response, security metrics and apoptosis". Thesis, University of Bradford, 2011. http://hdl.handle.net/10454/5410.

Full text source
Abstract:
Worms have evolved, integrating a range of sophisticated techniques that make the detection and response processes much harder and longer than in the past. Therefore, in this thesis, a STAKCERT (Starter Kit for Computer Emergency Response Team) model is built to detect worm attacks and respond to worms more efficiently. The novelty and strength of the STAKCERT model lie in the method implemented, which consists of the STAKCERT KDD processes and the development of the STAKCERT worm classification, the STAKCERT relational model and the STAKCERT worm apoptosis algorithm. The new concept introduced in this model, named apoptosis, is borrowed from the human immune system and mapped into a security perspective. Furthermore, the encouraging results achieved by this research are validated by applying security metrics to assign the weight and severity values that trigger apoptosis. To optimise the performance results, standard operating procedures (SOP) for worm incident response involving static and dynamic analyses, knowledge discovery (KDD) techniques for modeling the STAKCERT model, and data mining algorithms were used. The STAKCERT model has produced encouraging results and outperformed comparable existing work on worm detection: it achieves an overall accuracy rate of 98.75%, with a 0.2% false positive rate and a 1.45% false negative rate. Worm response achieved an accuracy rate of 98.08%, which other researchers can use as a baseline for comparison in future work.
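The reported figures (98.75% accuracy, 0.2% false positive rate, 1.45% false negative rate) are standard confusion-matrix statistics. A minimal sketch of how such rates are derived (the counts below are hypothetical, not the thesis data):

```python
def detection_metrics(tp, tn, fp, fn):
    """Compute accuracy, false positive rate and false negative rate
    from confusion-matrix counts of a worm detector."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    fpr = fp / (fp + tn)   # benign samples wrongly flagged as worms
    fnr = fn / (fn + tp)   # worm samples missed by the detector
    return accuracy, fpr, fnr

# Hypothetical counts for illustration only
acc, fpr, fnr = detection_metrics(tp=985, tn=990, fp=2, fn=15)
print(f"accuracy={acc:.4f} fpr={fpr:.4f} fnr={fnr:.4f}")
```

Reporting the false positive and false negative rates alongside accuracy matters because, with highly imbalanced traffic, a detector can score high accuracy while still missing most worms.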
50

Barabas, Maroš. "Bezpečnostní analýza síťového provozu pomocí behaviorálních signatur". Doctoral thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2016. http://www.nusl.cz/ntk/nusl-412570.

Full text source
Abstract:
This thesis describes the current state of research in network attack detection and then improves the detection of specific attacks by establishing a formal definition of network metrics. These metrics approximate the progress of a network connection and create a signature based on the behavioral characteristics of the analyzed connection. The aim of this work is neither the prevention of ongoing attacks nor the response to them. The emphasis is on analyzing connections to maximize the information obtained, and on defining the basis of a detection system that minimizes the volume of data collected from the network while retaining the most important information for subsequent analysis. The main goal of this work is to create the concept of a detection system that uses the defined metrics to reduce network traffic to signatures, with an emphasis on the behavioral aspects of the communication. Another goal is to increase the autonomy of the detection system by developing expert knowledge from a honeypot system, under the condition of independence from the technological aspects of the analyzed data (e.g. encryption, protocols used, technology and environment). Using the honeypot system's expert knowledge as a teacher for classification algorithms makes the system autonomous in detecting unknown attacks. This concept also enables independent learning, with no human intervention, based on the knowledge collected from attacks on these systems. The thesis describes the process of creating the laboratory environment and the experiments with the defined network connection signature, using collected data and a downloaded test database. The results are compared with state-of-the-art network detection systems, and the benefits of the proposed approximation methods are highlighted.
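Reducing a connection to a behavioral signature can be pictured as mapping per-connection statistics onto a fixed feature vector. A minimal sketch with invented metric names (the thesis defines its own formal metric set; these are illustrative stand-ins):

```python
from dataclasses import dataclass

@dataclass
class Connection:
    duration: float   # seconds
    bytes_in: int
    bytes_out: int
    packets: int

def behavioral_signature(conn: Connection) -> tuple:
    """Map a connection onto a small payload-independent feature vector.

    The metrics are illustrative; they depend only on traffic shape,
    not on packet contents, so they remain usable under encryption.
    """
    mean_pkt_size = (conn.bytes_in + conn.bytes_out) / max(conn.packets, 1)
    byte_ratio = conn.bytes_out / max(conn.bytes_in, 1)  # upload/download asymmetry
    pkt_rate = conn.packets / max(conn.duration, 1e-6)
    return (round(mean_pkt_size, 2), round(byte_ratio, 3), round(pkt_rate, 2))

sig = behavioral_signature(Connection(duration=2.0, bytes_in=4000, bytes_out=1000, packets=50))
print(sig)  # (100.0, 0.25, 25.0)
```

Such vectors, labeled by a honeypot acting as the teacher, could then train a conventional classifier without any human-supplied attack signatures.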
