Dissertations / Theses on the topic 'Network measures'

Consult the top 50 dissertations / theses for your research on the topic 'Network measures.'

1

Traore, Abdoulaye S. "Mixed Network Interference Management with Multi-Distortion Measures." International Foundation for Telemetering, 2010. http://hdl.handle.net/10150/604294.

Full text
Abstract:
ITC/USA 2010 Conference Proceedings / The Forty-Sixth Annual International Telemetering Conference and Technical Exhibition / October 25-28, 2010 / Town and Country Resort & Convention Center, San Diego, California
This paper presents a methodology for interference and spectrum management in iNET. It anticipates heavily loaded test environments with Test Articles (TAs) operating over the horizon. In such cases, fixed and ad hoc networks are expected to be employed, and spectrum reuse and interference will limit performance. The methodology presented here demonstrates how interference and spectrum management can be accomplished in such mixed networks.
APA, Harvard, Vancouver, ISO, and other styles
2

Grando, Felipe. "Methods for the approximation of network centrality measures." Biblioteca Digital de Teses e Dissertações da UFRGS, 2018. http://hdl.handle.net/10183/186166.

Full text
Abstract:
Centrality measures are an important analysis mechanism to uncover vital information about complex networks. However, these metrics have high computational costs that hinder their application in large real-world networks. I propose and show that artificial neural learning algorithms can enable the application of such metrics to networks of arbitrary size. Moreover, I identify the best configuration and methodology for neural learning to optimize its accuracy, and present an easy way to acquire and generate plentiful, meaningful training data via a complex network model that is adaptable to any application. In addition, I compare the proposed neural learning technique with different centrality approximation methods from the literature, including sampling and other machine learning methodologies, and I also test the neural learning model in real case scenarios. The results show that the regression model generated by the neural network successfully approximates the metric values and is an effective alternative for real-world applications. The proposed methodology and machine learning model use only a fraction of the computing time required by sampling-based approximation algorithms and are more robust than the other machine learning techniques tested.
APA, Harvard, Vancouver, ISO, and other styles
3

Pellegrinet, Sarah <1988>. "Systemic Risk Measures and Connectedness: a network approach." Master's Degree Thesis, Università Ca' Foscari Venezia, 2015. http://hdl.handle.net/10579/6009.

Full text
Abstract:
The main objective of the thesis is the construction of a systemic risk indicator based on the dispersion of the risk measures of the individual institutions of the system. The thesis presents a brief introduction to the literature on systemic risk measures, focusing on the Marginal Expected Shortfall (MES) and the delta CoVaR, which consider not only the return of an individual firm but also the effect of its performance on the whole system, and on the connectedness measure between financial institutions, which captures the relationships between the institutions. We apply the systemic risk and connectedness measures to the daily returns of European financial institutions from 1st January 1985 to 12th May 2014. Finally, to aggregate the risk measures and build a systemic risk indicator, we estimate an entropy measure and find that the indicator has good forecasting ability for the financial crisis.
APA, Harvard, Vancouver, ISO, and other styles
4

Benbrook, Jimmie Glen, 1943. "A System Analysis of a Multilevel Secure Local Area Network (Computer)." Thesis, The University of Arizona, 1986. http://hdl.handle.net/10150/275531.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Kim, Hyoungshick. "Complex network analysis for secure and robust communications." Thesis, University of Cambridge, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.610134.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Fu, Zehua. "Confidence measures in deep neural network based stereo matching." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEC014.

Full text
Abstract:
Despite decades of enhancement since Barnard and Fischler's first proposal, stereo matching approaches still suffer from imprecision, especially in the presence of occlusion, extreme lighting conditions and ambiguity. To overcome these inaccuracies, many methods, called confidence measures, have been proposed to assess the accuracy of the matching. In this thesis, we study state-of-the-art confidence measures and propose two measures, based on neural networks and deep learning, to improve the performance of stereo matching. The first proposed approach uses multi-modal data including the initial disparity and reference RGB images. The multi-modal architecture is subsequently improved by enlarging the Effective Receptive Field (ERF), enabling learning with more contextual information and thus leading to better detection of matching errors. Evaluated on the KITTI 2012 and KITTI 2015 datasets, our multi-modal approach achieved the best performance at the time. As a second approach, a Recurrent Neural Network (RNN) is proposed in order to refine the result of the stereo matching step by step. Gated Recurrent Units (GRUs), combined with our multi-modal dilated convolutional network, use information from one step to guide refinement in the next. To the best of our knowledge, this is the first attempt to refine stereo matching with an RNN. The proposed approach is easily applicable to different Convolutional Neural Networks (CNNs) for stereo matching to produce an effective and precise end-to-end solution. The experimental results show significant improvements on both the KITTI 2012 and KITTI 2015 datasets.
APA, Harvard, Vancouver, ISO, and other styles
7

Olsson, Eric J. "Literature survey on network concepts and measures to support research in network-centric operations." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03Jun%5FOlsson.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Fuloria, Shailendra. "Robust security for the electricity network." Thesis, University of Cambridge, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.610100.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Woldearegay, Yonas, and Oumar Traore. "Optimization of Nodes in Mixed Network Using Three Distance Measures." International Foundation for Telemetering, 2011. http://hdl.handle.net/10150/595764.

Full text
Abstract:
ITC/USA 2011 Conference Proceedings / The Forty-Seventh Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2011 / Bally's Las Vegas, Las Vegas, Nevada
This paper presents a method for the management of mixed networks as envisioned in future iNET applications and develops a scheme for globally optimal performance over features that include Signal-to-Noise Ratio (SNR), Quality of Service (QoS), and interference. This scheme demonstrates potential for significant performance enhancement in the dense traffic environments envisioned for future telemetry applications. Previous research conducted at Morgan State University proposed a cellular and ad hoc mixed network for optimum capacity and coverage using two distance measures: QoS and SNR. This paper adds interference as a third distance measure, using an analytical approach and extensive simulation in MATLAB. The paper also addresses solutions where performance parameters are correlated and where they are uncorrelated. The simulations show the optimization of mixed network nodes using distance, traffic and interference measures simultaneously. This has great potential in mobile communication and iNET.
APA, Harvard, Vancouver, ISO, and other styles
10

Mooi, Roderick David. "A model for security incident response in the South African National Research and Education network." Thesis, Nelson Mandela Metropolitan University, 2014. http://hdl.handle.net/10948/d1017598.

Full text
Abstract:
This dissertation addresses the problem of a lack of a formal incident response capability in the South African National Research and Education Network (SA NREN). While investigating alternatives it was found that no clear method exists to solve this problem. Therefore, a second problem is identified: the lack of a definitive method for establishing a Computer Security Incident Response Team (CSIRT) or Computer Emergency Response Team (CERT) in general. Solving the second problem is important as we then have a means of knowing how to start when building a CSIRT. This sets the basis for addressing the initial problem, resulting in a prepared, improved and coordinated response to IT security incidents affecting the SA NREN. To commence, the requirements for establishing a CSIRT are identified via a comprehensive literature review. These requirements are categorized into five areas, namely, the basic business requirements followed by the four Ps of the IT Infrastructure Library (ITIL) -- People, Processes, Product and Partners -- adapted to suit the CSIRT context. Through the use of argumentation, the relationships between the areas are uncovered and explored. Thereafter, a Design Science Research-based process is utilised to develop a generic model for establishing a CSIRT. The model is based on the interactions uncovered between the business requirements and the adapted four Ps. These are summarised through two views -- strategic and tactical -- together forming a holistic model for establishing a CSIRT. The model highlights the decisions required for the business requirements, services, team model and staff, policies and processes, tools and technologies, and partners of a CSIRT respectively. Finally, to address the primary objective, the generic model is applied to the SA NREN environment. Thus, the second artefact is an instantiation, a specific model, which can be implemented to create a CSIRT for the SA NREN. To produce the specific model, insight into the nature of the SA NREN environment was required. The status quo was revealed through the use of a survey and argumentative analysis of the results. The specific decisions in each area required to establish an SA NREN CSIRT are explored throughout the development of the model. The result is a comprehensive framework for implementing a CSIRT in the SA NREN, detailing the decisions required in each of the areas. This model additionally acts as a demonstration of the utility of the generic model. The implications of this research are twofold. Firstly, the generic model is useful as a basis for anyone wanting to establish a CSIRT. It helps to ensure that all factors are considered and that no important decisions are neglected, thereby enabling a holistic view. Secondly, the specific model for the SA NREN CSIRT serves as a foundation for implementing the CSIRT going forward. It accelerates the process by addressing the important considerations and highlighting the concerns that must be addressed while establishing the CSIRT.
APA, Harvard, Vancouver, ISO, and other styles
11

Goyal, Ravi. "Estimating Network Features and Associated Measures of Uncertainty and Their Incorporation in Network Generation and Analysis." Thesis, Harvard University, 2012. http://dissertations.umi.com/gsas.harvard:10605.

Full text
Abstract:
The efficacy of interventions to control HIV spread depends upon many features of the communities where they are implemented, including not only prevalence, incidence, and per-contact risk of transmission, but also properties of the sexual or transmission network. For this reason, HIV epidemic models have to take into account network properties including degree distribution and mixing patterns. The use of sampled data to estimate properties of a network is a common practice; however, current network generation methods do not account for the uncertainty in the estimates due to sampling. In chapter 1, we present a framework for constructing collections of networks using sampled data collected from ego-centric surveys. The constructed networks not only target estimates for density, degree distributions and mixing frequencies, but also incorporate the uncertainty due to sampling. Our method is applied to the National Longitudinal Study of Adolescent Health and considers two sampling procedures. We demonstrate how collections of networks constructed using the proposed methods are useful in investigating variation in unobserved network topology, and therefore also insightful for studying processes that operate on networks. In chapter 2, we focus on the degree to which the impact of concurrency on HIV incidence in a community may be overshadowed by differences in unobserved, but local, network properties. Our results demonstrate that even after controlling for cumulative ego-centric properties, i.e. degree distribution and concurrency, other network properties, including degree mixing and clustering, can be very influential on the size of the potential epidemic. In chapter 3, we demonstrate the need to incorporate information about degree mixing patterns in such modeling. We present a procedure to construct collections of bipartite networks, given point estimates for the degree distribution, that either makes use of information on the degree mixing matrix or assumes that no such information is available. These methods permit a demonstration of the differences between these two network collections, even when the degree sequence is fixed. Methods are also developed to estimate degree mixing patterns, given a point estimate for the degree distribution.
APA, Harvard, Vancouver, ISO, and other styles
12

Park, Yongro. "A statistical process control approach for network intrusion detection." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/6835.

Full text
Abstract:
Intrusion detection systems (IDS) have a vital role in protecting computer networks and information systems. In this thesis we applied a statistical process control (SPC) monitoring concept to a certain type of traffic data in order to detect network intrusions. We developed a general SPC intrusion detection approach and described it together with the source and preparation of the data used in this thesis. We extracted sample data sets that represent various situations, calculated event intensities for each situation, and stored these sample data sets in a data repository for use in future research. A regular batch mean chart was used to remove the sample data's inherent 60-second cycles. However, this proved too slow in detecting a signal because the regular batch mean chart only monitors the statistic at the end of the batch. To gain faster results, a modified batch mean (MBM) chart was developed. Subsequently, we developed the Modified Batch Mean Shewhart chart, the Modified Batch Mean CUSUM chart, and the Modified Batch Mean EWMA chart and analyzed the performance of each on simulated data. The simulation studies showed that the MBM charts perform especially well with large signals, the type of signal typically associated with a DoS intrusion. The MBM charts can be applied in two ways: using actual control limits or robust control limits. The actual control limits must be determined by simulation, whereas the robust control limits require nothing more than the recommended limits. The robust MBM Shewhart chart was developed by choosing appropriate values based on the batch size; the robust MBM CUSUM and EWMA charts were developed by choosing appropriate values of the charting parameters.
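As a rough illustration of the batch-mean monitoring idea described in this abstract, here is a minimal Shewhart-style sketch under our own assumptions (3-sigma limits, a 60-observation batch to absorb the 60-second cycle, and synthetic traffic); it is not the thesis's MBM algorithm, which updates the statistic within the batch rather than only at its end:

```python
import numpy as np

def batch_mean_shewhart(x, batch_size, mu0, sigma0, k=3.0):
    """Flag batches whose mean falls outside mu0 +/- k*sigma0/sqrt(batch_size).

    x: event-intensity series; mu0, sigma0: in-control mean and standard
    deviation, assumed estimated from attack-free training traffic.
    """
    n_batches = len(x) // batch_size
    means = x[:n_batches * batch_size].reshape(n_batches, batch_size).mean(axis=1)
    half_width = k * sigma0 / np.sqrt(batch_size)
    alarms = np.abs(means - mu0) > half_width
    return means, alarms

# Illustration: stationary traffic with a large upward shift injected halfway,
# roughly the kind of signal a DoS flood would produce.
rng = np.random.default_rng(0)
traffic = rng.normal(100.0, 10.0, 6000)
traffic[3000:] += 40.0
means, alarms = batch_mean_shewhart(traffic, batch_size=60, mu0=100.0, sigma0=10.0)
print("first alarmed batch index:", int(np.argmax(alarms)))
```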
APA, Harvard, Vancouver, ISO, and other styles
13

Loutsios, Demetrios. "A holistic approach to network security in OGSA-based grid systems." Thesis, Nelson Mandela Metropolitan University, 2006. http://hdl.handle.net/10948/550.

Full text
Abstract:
Grid computing technologies facilitate complex scientific collaborations between globally dispersed parties which make use of heterogeneous technologies and computing systems. However, in recent years the commercial sector has developed a growing interest in Grid technologies. Prominent Grid researchers have predicted that Grids will grow into the commercial mainstream, even though their origins were in scientific research, much the same way the Internet started as a vehicle for research collaboration between universities and government institutions and grew into a technology with large commercial applications. Grids facilitate complex trust relationships between globally dispersed business partners, research groups, and non-profit organizations. Almost any dispersed "virtual organization" willing to share computing resources can make use of Grid technologies. Grid computing facilitates the networking of shared services; the inter-connection of a potentially unlimited number of computing resources within a "Grid" is possible. Grid technologies leverage a range of open standards and technologies to provide interoperability between heterogeneous computing systems. Newer Grids build on key capabilities of Web-Service technologies to provide easy and dynamic publishing and discovery of Grid resources. Due to the inter-organisational nature of Grid systems, there is a need to provide adequate security to Grid users and to Grid resources. This research proposes a framework, using a specific brokered pattern, which addresses several common Grid security challenges: providing secure and consistent cross-site authentication and authorization; single sign-on capabilities for Grid users; underlying platform and runtime security; and Grid network communications and messaging security. These Grid security challenges can be viewed as comprising two (proposed) logical layers of a Grid: a Common Grid Layer (higher-level Grid interactions) and a Local Resource Layer (lower-level technology security concerns). This research is concerned with providing a generic and holistic security framework to secure both layers. It makes extensive use of STRIDE -- an acronym for Microsoft's approach to classifying security threats -- as part of a holistic Grid security framework. STRIDE and key Grid-related standards, such as the Open Grid Services Architecture (OGSA), the Web Services Resource Framework (WS-RF), and the Globus Toolkit, are used to formulate the proposed framework.
APA, Harvard, Vancouver, ISO, and other styles
14

Aktop, Baris. "A framework for maximizing the survivability of network dependent services." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/02Mar%5FAktop.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Kobo, Hlabishi. "Situation-aware routing for wireless mesh networks with mobile nodes." Thesis, University of the Western Cape, 2012. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_6647_1370594682.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Kaiser, Edward Leo. "Addressing Automated Adversaries of Network Applications." PDXScholar, 2010. https://pdxscholar.library.pdx.edu/open_access_etds/4.

Full text
Abstract:
The Internet supports a perpetually evolving patchwork of network services and applications. Popular applications include the World Wide Web, online commerce, online banking, email, instant messaging, multimedia streaming, and online video games. Practically all networked applications have a common objective: to directly or indirectly process requests generated by humans. Some users employ automation to establish an unfair advantage over non-automated users. The perceived and substantive damages that automated, adversarial users inflict on an application degrade its enjoyment and usability by legitimate users, and result in reputation and revenue loss for the application's service provider. This dissertation examines three challenges critical to addressing the undesirable automation of networked applications. The first challenge explores individual methods that detect various automated behaviors. Detection methods range from observing unusual network-level request traffic to sensing anomalous client operation at the application level. Since many detection methods are not individually conclusive, the second challenge investigates how to combine detection methods to accurately identify automated adversaries. The third challenge considers how to leverage the available knowledge to disincentivize adversary automation by nullifying their advantage over legitimate users. The thesis of this dissertation is that there exist methods to detect automated behaviors with which an application's service provider can identify and then systematically disincentivize automated adversaries. This dissertation evaluates this thesis using research performed on two network applications that have different access to the client software: Web-based services and multiplayer online games.
APA, Harvard, Vancouver, ISO, and other styles
17

Moyo, Norbert. "The potential network effects of travellers' responses to travel demand management measures." Thesis, University of Southampton, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.274449.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Bränberg, Stefan. "Computing network centrality measures on fMRI data using fully weighted adjacency matrices." Thesis, Umeå universitet, Institutionen för datavetenskap, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-128177.

Full text
Abstract:
A lot of interesting research is currently being done in the field of neuroscience, a recent subject being the effort to analyse the human brain connectome and its functional connectivity. One way this is done is by applying graph-theory-based network analysis, such as centrality, to data from fMRI measurements. This involves creating a graph representation from a correlation matrix containing the correlations over time between all measured voxels. Since the input data can be very large, the resulting computations are too memory- and time-consuming for an ordinary computer. Researchers have used different techniques to work around this problem; examples include thresholding correlations when creating the adjacency matrix and using smaller input data with lower resolution. This thesis proposes three ways to compute two centrality measures, degree centrality and eigenvector centrality, on fully weighted adjacency matrices built from complete correlation matrices computed from high-resolution input data. The first is reducing the problem by doing the calculations in optimal order and avoiding the construction of the large correlation matrix. The second solution is to distribute the computations and run them in parallel on a large computer cluster using MPI. The third solution is to compute as large sets as possible on an ordinary laptop using shared-memory parallelism with OpenMP. Algorithms are presented for the different solutions, and the effectiveness of their implementations is tested.
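To make the first idea concrete: with z-scored voxel time series X (N voxels by T timepoints), the correlation matrix is C = XXᵀ/T, so products with C can be evaluated as X(Xᵀv)/T without ever forming the N-by-N matrix. A minimal NumPy sketch of this reordering (our illustration, not the thesis's implementation; eigenvector centrality on a signed correlation matrix is shown via plain power iteration, whereas real pipelines may first transform correlations to be non-negative):

```python
import numpy as np

def zscore_rows(ts):
    """Standardize each voxel time series to mean 0, std 1."""
    ts = ts - ts.mean(axis=1, keepdims=True)
    return ts / ts.std(axis=1, keepdims=True)

def degree_centrality(X):
    """Weighted degree d = C @ 1, computed as X @ (X.T @ 1) / T: O(N*T), not O(N^2).

    Includes each voxel's unit self-correlation; subtract 1 to exclude it."""
    T = X.shape[1]
    return X @ (X.T @ np.ones(X.shape[0])) / T

def eigenvector_centrality(X, iters=100):
    """Power iteration where C @ v is evaluated as X @ (X.T @ v) / T."""
    T = X.shape[1]
    v = np.ones(X.shape[0])
    for _ in range(iters):
        v = X @ (X.T @ v) / T
        v /= np.linalg.norm(v)
    return v

rng = np.random.default_rng(1)
X = zscore_rows(rng.normal(size=(5000, 200)))   # 5000 voxels, 200 timepoints
print(degree_centrality(X)[:3])
print(eigenvector_centrality(X)[:3])
```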
APA, Harvard, Vancouver, ISO, and other styles
19

Signorelli, Camilo Miguel. "Theoretical models and measures of conscious brain network dynamics : an integrative approach." Doctoral thesis, TDX (Tesis Doctorals en Xarxa), 2021. http://hdl.handle.net/10803/671858.

Full text
Abstract:
In the field of neuroscience of consciousness, there is a current trend to contrast and compare existing models of consciousness. Even though the final word is empirical, theoretical efforts are essential to place both, conceptual assumptions and experimental results in context. From that, we can design better assessments and answer the question about what model is optimal. In this direction, this thesis explores models and computational integrative approaches. The document classifies scientific models of consciousness according to their "explanatory profile". Empirical data is described in light of network theory. Then, computational tools inspired by the conceptual integration of two influential models are implemented to quantify differences between awake and anaesthetic conditions. Finally, the thesis introduces new concepts to avoid the current reductionism of some models, pushing the text to controversial discussions. This thesis is a theoretical and conceptual work inspired by empirical results, attempting to reveal the power of computational and mathematical models in order to develop testable hypotheses and understand better the neuroscience of consciousness.
APA, Harvard, Vancouver, ISO, and other styles
20

Zhang, Junjie. "Effective and scalable botnet detection in network traffic." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44837.

Full text
Abstract:
Botnets represent one of the most serious threats against Internet security since they serve as platforms that are responsible for the vast majority of large-scale and coordinated cyber attacks, such as distributed denial of service, spamming, and information theft. Detecting botnets is therefore of great importance, and a number of network-based botnet detection systems have been proposed. However, as botnets perform attacks in an increasingly stealthy way and the volume of network traffic is rapidly growing, existing botnet detection systems are faced with significant challenges in terms of effectiveness and scalability. The objective of this dissertation is to build novel network-based solutions that can boost both the effectiveness of existing botnet detection systems, by detecting botnets whose attacks are very hard to observe in network traffic, and their scalability, by adaptively sampling network packets that are likely to be generated by botnets. Specifically, this dissertation describes three unique contributions. First, we built a new system to detect drive-by download attacks, which represent one of the most significant and popular methods of botnet infection. The goal of our system is to boost the effectiveness of existing drive-by download detection systems by detecting a large number of drive-by download attacks that are missed by these existing detection efforts. Second, we built a new system to detect botnets with peer-to-peer (P2P) command-and-control (C&C) structures (i.e., P2P botnets), where P2P C&Cs represent currently the most robust C&C structures against disruption efforts. Our system aims to boost the effectiveness of existing P2P botnet detection by detecting P2P botnets in two challenging scenarios: i) botnets perform stealthy attacks that are extremely hard to observe in the network traffic; ii) bot-infected hosts are also running legitimate P2P applications (e.g., BitTorrent and Skype). Finally, we built a novel traffic analysis framework to boost the scalability of existing botnet detection systems. Our framework can effectively and efficiently identify a small percentage of hosts that are likely to be bots, and then forward network traffic associated with these hosts to existing detection systems for fine-grained analysis, thereby boosting the scalability of existing detection systems. Our traffic analysis framework includes a novel botnet-aware and adaptive packet sampling algorithm, and a scalable flow-correlation technique.
APA, Harvard, Vancouver, ISO, and other styles
21

Shumaker, Todd, and Dennis Rowlands. "Risk assessment of the Naval Postgraduate School gigabit network." Thesis, Monterey, California. Naval Postgraduate School, 2004. http://hdl.handle.net/10945/1351.

Full text
Abstract:
This research thoroughly examines the current Naval Postgraduate School Gigabit Network security posture, identifies any possible threats or vulnerabilities, and recommends appropriate safeguards that may be necessary to counter the identified threats and vulnerabilities. The research includes any portion of computer security, physical security, personnel security, and communication security that may be applicable to the overall security of both the .mil and .edu domains. The goal of the research was to ensure that the campus network is operating with the proper amount of security safeguards to adequately protect confidentiality, integrity, availability, and authenticity from both insider and outsider threats. Risk analysis was performed by assessing all of the possible threat and vulnerability combinations to determine the likelihood of exploitation and the potential impact the exploitation could have on the system, the information, and the mission of the Naval Postgraduate School. The results of the risk assessment performed on the network are to be used by the Designated Approving Authority of the Naval Postgraduate School Gigabit Network when deciding whether to accredit the system.
APA, Harvard, Vancouver, ISO, and other styles
22

Astatke, Yacob. "Distance Measures for QOS Performance Management in Mixed Networks." International Foundation for Telemetering, 2008. http://hdl.handle.net/10150/606197.

Full text
Abstract:
ITC/USA 2008 Conference Proceedings / The Forty-Fourth Annual International Telemetering Conference and Technical Exhibition / October 27-30, 2008 / Town and Country Resort & Convention Center, San Diego, California
The integrated Network Enhanced Telemetry effort (iNET) was launched to create a telemetry network that will enhance the traditional point-to-point telemetry link from test articles (TAs) to ground stations (GS). Two of the critical needs identified by the Central Test and Evaluation Investment Program (CTEIP) are "the need to be able to provide reliable coverage in potentially high capacity environments, even in Over-The-Horizon (OTH) settings", and "the need to make more efficient use of spectrum resources through dynamic sharing of said resources, based on instantaneous demand thereof". Research conducted at Morgan State University (MSU) has focused on providing solutions for both critical problems. The mixed network architecture developed by MSU has shown that a hybrid network can be used to provide coverage for TAs that are beyond the coverage area of the GS. The mixed network uses clustering techniques to partition the aggregate network into clusters or sub-networks based on properties of each TA, which currently include signal strength and location. The paper starts with a detailed analysis of two parameters that affect the performance of each sub-network: contention between the TAs in the mobile ad hoc network, and queuing at the gateway TAs that serve as the link between the mobile ad hoc and cellular networks. Contention and queuing are used to evaluate two performance (distance) measures for each sub-network: throughput and delay. We define a new distance measure known as "power", equal to the ratio of throughput over delay, which is used as a measure of mixed-network performance for Quality of Service (QoS). This paper describes the analytical foundation used to prove that the "power" performance measure is an excellent tool for optimizing the clustering of a mixed network to provide QoS.
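For reference, the "power" measure defined in this abstract is the throughput-to-delay ratio (the notation and worked numbers below are ours, not the paper's):

```latex
\[
  \mathrm{power} \;=\; \frac{\text{throughput}}{\text{delay}}
\]
```

So a sub-network carrying 2 Mb/s at 50 ms delay scores 2/0.05 = 40 in consistent units, while the same throughput at 100 ms scores 20; clustering decisions that raise throughput or cut delay both raise power, which is what makes it usable as a single QoS optimization objective.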
APA, Harvard, Vancouver, ISO, and other styles
23

Martina, Jean Everson. "Verification of security protocols based on multicast communication." Thesis, University of Cambridge, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.609650.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Nath, Madhurima. "Application of Network Reliability to Analyze Diffusive Processes on Graph Dynamical Systems." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/86841.

Full text
Abstract:
Moore and Shannon's reliability polynomial can be used as a global statistic to explore the behavior of diffusive processes on a graph dynamical system representing a finite-sized interacting system. It depends on both the network topology and the dynamics of the process, and gives the probability that the system has a particular desired property. Due to the complexity involved in evaluating the exact network reliability, the problem has been classified as NP-hard. The estimation of the reliability polynomials for large graphs is feasible using Monte Carlo simulations; however, the number of samples required for an accurate estimate increases with system size. Instead, an adaptive method using Bernstein polynomials as kernel density estimators proves useful. Network reliability has a wide range of applications, ranging from epidemiology to statistical physics, depending on the description of the functionality. For example, it serves as a measure to study the sensitivity of the outbreak of an infectious disease on a network to the structure of the network. It can also be used to identify important dynamics-induced contagion clusters in international food trade networks. Further, it is analogous to the partition function of the Ising model, which provides insights into the interpolation between the low and high temperature limits.
The research presented here explores the effects of the structural properties of an interacting system on the outcomes of a diffusive process using Moore-Shannon network reliability. The network reliability is a finite degree polynomial which provides the probability of observing a certain configuration for a diffusive process on networks. Examples of such processes analyzed here are outbreak of an epidemic in a population, spread of an invasive species through international trade of commodities and spread of a perturbation in a physical system with discrete magnetic spins. Network reliability is a novel tool which can be used to compare the efficiency of network models with the observed data, to find important components of the system as well as to estimate the functions of thermodynamic state variables.
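A minimal sketch of the Monte Carlo estimation mentioned above, under simple assumptions of our own: each edge operates independently with probability p, and the "desired property" is that the surviving graph stays connected (the thesis's adaptive Bernstein-polynomial estimator is not shown):

```python
import random

def connected_when_edges_fail(n, edges, p, rng):
    """Sample one realization: each edge operates with probability p; return
    True if the surviving edges connect all n nodes (union-find check)."""
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    components = n
    for u, v in edges:
        if rng.random() < p:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                components -= 1
    return components == 1

def reliability(n, edges, p, samples=20000, seed=0):
    """Monte Carlo estimate of R(p) = Pr[network has the desired property]."""
    rng = random.Random(seed)
    hits = sum(connected_when_edges_fail(n, edges, p, rng) for _ in range(samples))
    return hits / samples

# 4-cycle: the exact all-terminal reliability is p^4 + 4*p^3*(1-p),
# i.e. 0.9477 at p = 0.9; the estimate should land close to that.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(reliability(4, cycle, p=0.9))
```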
APA, Harvard, Vancouver, ISO, and other styles
25

Srivatsa, Mudhakar. "Security Architecture and Protocols for Overlay Network Services." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/16284.

Full text
Abstract:
Conventional wisdom suggests that in order to build a secure system, security must be an integral component in the system design. However, cost considerations drive most system designers to channel their efforts into the system's performance, scalability and usability. With little or no emphasis on security, such systems are vulnerable to a wide range of attacks that can potentially compromise confidentiality, integrity and availability of sensitive data. It is often cumbersome to redesign and implement massive systems with security as one of the primary design goals. This thesis advocates a proactive approach that cleanly retrofits security solutions into existing system architectures. The first step in this approach is to identify security threats, vulnerabilities and potential attacks on a system or an application. The second step is to develop security tools in the form of customizable and configurable plug-ins that address these security issues and minimally modify existing system code, while preserving its performance and scalability metrics. This thesis uses overlay network applications to shepherd through and address challenges involved in supporting security in large scale distributed systems. In particular, the focus is on two popular applications: publish/subscribe networks and VoIP networks. Our work on VoIP networks has for the first time identified and formalized caller identification attacks on VoIP networks. We have identified two attacks: a triangulation based timing attack on the VoIP network's route set up protocol and a flow analysis attack on the VoIP network's voice session protocol. These attacks allow an external observer (adversary) to uniquely (nearly) identify the true caller (and receiver) with high probability. Our work on the publish/subscribe networks has resulted in the development of a unified framework for handling event confidentiality, integrity, access control and DoS attacks, while incurring small overhead on the system. We have proposed a key isomorphism paradigm to preserve the confidentiality of events on publish/subscribe networks while permitting scalable content-based matching and routing. Our work on overlay network security has resulted in a novel information hiding technique on overlay networks. Our solution represents the first attempt to transparently hide the location of data items on an overlay network.
APA, Harvard, Vancouver, ISO, and other styles
26

Farquhar, MaryBeth Anne. "Actor Networks in Health Care: Translating Values into Measures of Hospital Performance." Diss., Virginia Tech, 2008. http://hdl.handle.net/10919/28312.

Full text
Abstract:
The health care system within the United States is in a state of transition. The industry, confronted with a variety of new technologies, new ways of organizing, spiraling costs, diminishing service quality and new actors, is changing almost on a daily basis. Reports issued by the Institute of Medicine raise quality issues such as avoidable errors and underuse/overuse of services; other studies document regional variation in care. Improvement in the quality of care, according to health care experts, is accomplished through measuring and comparing performance, but there are a number of disparate actors involved in this endeavor. Through a network of both public and private actors, collaboration on the development of a set of national performance measures is underway. Organizations such as the National Quality Forum (NQF), the Agency for Healthcare Research and Quality (AHRQ), the Centers for Medicare & Medicaid Services (CMS) and others have formed networks to develop and standardize performance measurement systems that can distinguish between quality services and substandard ones. While there is some available research about the processes involved in performance measurement system design, little is known about the factors that influence the development and work of the network, particularly the selection of hospital performance measures. This dissertation explored the development of a national performance measurement system for hospitals, using an institutional rational choice perspective and actor-network theory as frameworks for discussion. Through qualitative research methods such as direct observation, interviews, participant observation and document review, a theoretically informed case study of the NQF's Hospital Steering Committee was performed to address the following questions: How is a national performance measurement system developed, and what is the role of federal agencies (e.g., AHRQ and CMS) in the process?
APA, Harvard, Vancouver, ISO, and other styles
27

Mantrach, Amin. "Novel measures on directed graphs and applications to large-scale within-network classification." Doctoral thesis, Université Libre de Bruxelles, 2010. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210033.

Full text
Abstract:
In recent years, networks have become a major data source in various fields ranging from social sciences to mathematical and physical sciences. Moreover, the size of available networks has grown substantially as well. This has brought with it a number of new challenges, like the need for precise and intuitive measures to characterize and analyze large-scale networks in a reasonable time.

The first part of this thesis introduces a novel measure between two nodes of a weighted directed graph: The sum-over-paths covariance. It has a clear and intuitive interpretation: two nodes are considered as highly correlated if they often co-occur on the same -- preferably short -- paths. This measure depends on a probability distribution over the (usually infinite) countable set of paths through the graph which is obtained by minimizing the total expected cost between all pairs of nodes while fixing the total relative entropy spread in the graph. The entropy parameter allows to bias the probability distribution over a wide spectrum: going from natural random walks (where all paths are equiprobable) to walks biased towards shortest-paths. This measure is then applied to semi-supervised classification problems on medium-size networks and compared to state-of-the-art techniques.

The second part introduces three novel algorithms for within-network classification in large-scale networks, i.e. classification of nodes in partially labeled graphs. The algorithms have a computing time linear in the number of edges, classes and steps, and hence can be applied to large-scale networks. They obtained competitive results in comparison to state-of-the-art techniques on the large-scale U.S. patent citation network and on eight other data sets. Furthermore, during the thesis, we collected a novel benchmark data set: the U.S. patent citation network. This data set is now available to the community for benchmark purposes.

The final part of the thesis concerns the combination of a citation graph with information on its nodes. We show that citation-based data provide better classification results than content-based data. We also show empirically that combining both sources of information (content-based and citation-based) should be considered when facing a text categorization problem. For instance, when classifying journal papers, extracting an external citation graph beforehand may considerably boost performance. However, in another context, when directly classifying the nodes of the citation network, node features will not necessarily improve the results.

The theory, algorithms and applications presented in this thesis provide interesting perspectives in various fields.
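Returning to the sum-over-paths measure defined in the first part above: a hedged sketch of its path-weighting machinery, written in the randomized-shortest-paths style that this family of measures builds on (this is our reading of the abstract; the covariance itself requires further derivations from the thesis, and all names here are ours):

```python
import numpy as np

def sum_over_paths_kernel(A, C, theta):
    """Z[i, j] = sum over all paths i -> j of pi_ref(path) * exp(-theta * cost(path)).

    A: weighted adjacency matrix (every node needs out-degree > 0; its
    row-normalization is the reference random walk P_ref); C: strictly
    positive costs on existing edges; theta: the entropy parameter.
    theta -> 0 approaches the natural random walk weighting; large theta
    concentrates the weight on low-cost (shortest) paths.
    """
    P_ref = A / A.sum(axis=1, keepdims=True)
    W = P_ref * np.exp(-theta * C)  # weight contributed by one path step
    # Summing W^k over all path lengths k >= 0 is a geometric series:
    # Z = I + W + W^2 + ... = (I - W)^{-1}, convergent because W is
    # strictly substochastic when theta > 0 and edge costs are positive.
    return np.linalg.inv(np.eye(A.shape[0]) - W)

# Toy directed graph: a cheap two-hop route 0 -> 1 -> 2 versus a costly
# direct edge 0 -> 2; raising theta shifts weight toward the cheap route.
A = np.array([[0., 1., 1.], [0., 0., 1.], [1., 0., 0.]])
C = np.array([[0., 1., 5.], [0., 0., 1.], [1., 0., 0.]])
print(sum_over_paths_kernel(A, C, theta=0.1)[0, 2])
print(sum_over_paths_kernel(A, C, theta=5.0)[0, 2])
```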



APA, Harvard, Vancouver, ISO, and other styles
28

Chance, Christopher P. "Designing and implementing a network authentication service for providing a secure communication channel." Thesis, Kansas State University, 1986. http://hdl.handle.net/2097/9903.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Stapelberg, Nicolas Jacques Christian. "Quantitative Relationships between Clinical Measures of Depression and Heart Rate Variability as Measured by Linear and Nonlinear Methods." Thesis, Griffith University, 2016. http://hdl.handle.net/10072/367151.

Full text
Abstract:
A relationship exists between mood and cardiac control systems. This relationship has been established through correlations between medical pathology, such as Coronary Heart Disease (CHD), and psychopathological changes in mood, such as Major Depressive Disorder (MDD). Euthymic mood and normal cardiac regulation, as well as the diseases MDD and CHD, are linked by numerous interrelated physiological pathways. These pathways form part of a physiological regulatory network in the body called the psycho-immune-neuroendocrine (PINE) network. PINE network homeostasis can be disrupted by stress, resulting in pathological changes, which give rise to MDD, CHD, or both diseases. Heart Rate Variability (HRV) is a physiological measure of cardiac function which reflects autonomic cardiac control, an important component of the PINE network. This body of work set out to test three hypotheses, the first being that psychometric measures of mood and HRV measures are related across a broad range of mood, from non-pathological to pathological depressed mood. The second hypothesis is that psychometric measures of mood and HRV measures remain related over time, even when an individual’s mood changes across time. The third hypothesis is that in people with CHD, HRV measures would still be related to mood, but HRV may be globally reduced compared to people without CHD. A longitudinal cohort study, the Heart and Mind Study, recruited 88 participants and followed them up over six months.
APA, Harvard, Vancouver, ISO, and other styles
30

Yelne, Samir. "Measures of User Interactions, Conversations, and Attacks in a Crowdsourced Platform Offering Emotional Support." Wright State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=wright1482330888961028.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Naude, Kevin Alexander. "A method for the evaluation of similarity measures on graphs and network-structured data." Thesis, Nelson Mandela Metropolitan University, 2014.

Find full text
Abstract:
Measures of similarity play a subtle but important role in a large number of disciplines. For example, a researcher in bioinformatics may devise a new computed measure of similarity between biological structures, and use its scores to infer biological association. Other academics may use related approaches in structured text search, or for object recognition in computer vision. These are diverse and practical applications of similarity. A critical question is this: to what extent can a given similarity measure be trusted? This is a difficult problem, at the heart of which lies the broader issue: what exactly constitutes good similarity judgement? This research presents the view that similarity measures have properties of judgement that are intrinsic to their formulation, and that such properties are measurable. The problem of comparing similarity measures is one of identifying ground-truths for similarity. The approach taken in this work is to examine the relative ordering of graph pairs when compared with respect to a common reference graph. Ground-truth outcomes are obtained from a novel theory: the theory of irreducible change in graphs. This theory supports stronger claims than those made for edit distances. Whereas edit distances are sensitive to a configuration of costs, irreducible change under the new theory is independent of such parameters. Ground-truth data is obtained by isolating test cases for which a common outcome is assured for all possible least measures of change that can be formulated within a chosen change descriptor space. By isolating these specific cases, and excluding others, the research introduces a framework for evaluating similarity measures on mathematically defensible grounds. The evaluation method is demonstrated in a series of case studies which evaluate the similarity performance of known graph similarity measures. The findings of these experiments provide the first general characterisation of common similarity measures over a wide range of graph properties. The similarity computed from the maximum common induced subgraph (Dice-MCIS) is shown to provide good general similarity judgement. However, it is shown that Blondel's similarity measure can exceed the judgement sensitivity of Dice-MCIS, provided the graphs have both sufficient attribute label diversity and edge density. The final contribution is the introduction of a new similarity measure for graphs, which is shown to have statistically greater judgement sensitivity than all other measures examined. All of these findings are made possible through the theory of irreducible change in graphs. The research provides the first mathematical basis for reasoning about the quality of similarity judgements. This enables researchers to analyse similarity measures directly, making similarity measures first-class objects of scientific inquiry.
APA, Harvard, Vancouver, ISO, and other styles
32

Mjikeliso, Yolanda. "Guidelines to address the human factor in the South African National Research and Education Network beneficiary institutions." Thesis, Nelson Mandela Metropolitan University, 2014. http://hdl.handle.net/10948/9946.

Full text
Abstract:
Even if all the technical security solutions appropriate for an organisation’s network are implemented, for example, firewalls, antivirus programs and encryption, if the human factor is neglected then these technical security solutions will serve no purpose. The greatest challenge to network security is probably not the technological solutions that organisations invest in, but the human factor (non-technical solutions), which most organisations neglect. The human factor is often ignored even though humans are the most important resources of organisations and perform all the physical tasks, configure and manage equipment, enter data, manage people and operate the systems and networks. The same people that manage and operate networks and systems have vulnerabilities. They are not perfect and there will always be an element of mistake-making or error. In other words, humans make mistakes that could result in security vulnerabilities, and the exploitation of these vulnerabilities could in turn result in network security breaches. Human vulnerabilities are driven by many factors including insufficient security education, training and awareness, a lack of security policies and procedures in the organisation, a limited attention span and negligence. Network security may thus be compromised by this human vulnerability. In the context of this dissertation, both physical and technological controls should be implemented to ensure the security of the SANReN network. However, if the human factors are not adequately addressed, the network would become vulnerable to risks posed by the human factor which could threaten the security of the network. Accordingly, the primary research objective of this study is to formulate guidelines that address the information security related human factors in the rolling out and continued management of the SANReN network. An analysis of existing policies and procedures governing the SANReN network was conducted and it was determined that there are currently no guidelines addressing the human factor in the SANReN beneficiary institutions. Therefore, the aim of this study is to provide the guidelines for addressing the human factor threats in the SANReN beneficiary institutions.
APA, Harvard, Vancouver, ISO, and other styles
33

Van, Heerden Renier Pelser. "A formalised ontology for network attack classification." Thesis, Rhodes University, 2014. http://hdl.handle.net/10962/d1011603.

Full text
Abstract:
One of the most popular attack vectors against computers is their network connections. Attacks on computers through their networks are commonplace and have various levels of complexity. This research describes network-based computer attacks in the form of a story, formally, and within an ontology. The ontology categorises network attacks where attack scenarios are the focal class. This class consists of: Denial-of-Service, Industrial Espionage, Web Defacement, Unauthorised Data Access, Financial Theft, Industrial Sabotage, Cyber-Warfare, Resource Theft, System Compromise, and Runaway Malware. This ontology was developed by building a taxonomy and a temporal network attack model. Network attack instances (also known as individuals) are classified according to their respective attack scenarios, with the use of an automated reasoner within the ontology. The automated reasoner deductions are verified formally; and via the automated reasoner, a relaxed set of scenarios is determined, which is relevant in a near real-time environment. A prototype system (called Aeneas) was developed to classify network-based attacks. Aeneas integrates the sensors into a detection system that can classify network attacks in a near real-time environment. To verify the ontology and the prototype Aeneas, a virtual test bed was developed in which network-based attacks were generated to verify the detection system. Aeneas was able to detect incoming attacks and classify them according to their scenario. The novel part of this research is the attack scenarios that are described in the form of a story, as well as formally and in an ontology. The ontology is used in a novel way to determine to which class attack instances belong and how the network attack ontology is affected in a near real-time environment.
APA, Harvard, Vancouver, ISO, and other styles
34

Lim, Yu-Xi. "Secure Geolocation for Wireless Indoor Networks." Thesis, Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/11454.

Full text
Abstract:
The objective of the research is to develop an accurate system for indoor location estimation using a secure architecture based on the IEEE 802.11 standard for infrastructure networks. Elements of this secure architecture include: a server-oriented platform for greater trust and manageability; multiple wireless network parameters for improved accuracy; and Support Vector Regression (SVR) for accurate, high-resolution estimates. While these elements have been investigated individually in earlier research, none has combined them into a single security-oriented system. Thus, this research investigates the feasibility of using these elements together.
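As a rough illustration of the SVR element described above, the sketch below maps simulated signal-strength fingerprints to coordinates; the log-distance propagation model, kernel choice, and synthetic data are assumptions, not the thesis's actual architecture.

```python
# A minimal sketch of SVR-based indoor positioning: signal measurements in,
# coordinates out. All data here is synthetic and illustrative.
import numpy as np
from sklearn.svm import SVR
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(0)
positions = rng.uniform(0, 30, size=(200, 2))         # known (x, y) in metres
aps = np.array([[0, 0], [30, 0], [0, 30], [30, 30]])  # assumed AP locations
dists = np.linalg.norm(positions[:, None, :] - aps[None], axis=2)
rssi = -40 - 20 * np.log10(dists + 1) + rng.normal(0, 2, dists.shape)

# One SVR per coordinate, wrapped for multi-output regression.
model = MultiOutputRegressor(SVR(kernel="rbf", C=10.0, epsilon=0.5))
model.fit(rssi, positions)

print(model.predict(rssi[:1]), "vs true", positions[0])
```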
APA, Harvard, Vancouver, ISO, and other styles
35

Baratz, Joshua W. (Joshua William) 1981. "Regions Security Policy (RSP) : applying regions to network security." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/17933.

Full text
Abstract:
Thesis (M. Eng. and S.B.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004.
Includes bibliographical references (p. 51-54).
The Regions network architecture is a new look at network organization that groups nodes into regions based on common purposes. This shift away from strict network-topology groupings of nodes requires a change in security systems. This thesis designs and implements the Regions Security Policy (RSP). RSP allows a unified security policy to be set across a region, fully controlling data as it enters into, exits from, and transits within a region. In doing so, it brings together several existing security solutions, providing security comparable to existing systems while being more likely to function correctly.
by Joshua W. Baratz.
M.Eng. and S.B.
APA, Harvard, Vancouver, ISO, and other styles
36

Sung, Minho. "Scalable and efficient distributed algorithms for defending against malicious Internet activity." Diss., Available online, Georgia Institute of Technology, 2006. http://etd.gatech.edu/theses/available/etd-07172006-134741/.

Full text
Abstract:
Thesis (Ph. D.)--Computing, Georgia Institute of Technology, 2007.
Xu, Jun, Committee Chair ; Ahamad, Mustaque, Committee Member ; Ammar, Mostafa, Committee Member ; Bing, Benny, Committee Member ; Zegura, Ellen, Committee Member.
APA, Harvard, Vancouver, ISO, and other styles
37

Corte, Coi Claudio. "Network approaches for the analysis of resting state fMRI data." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amslaurea.unibo.it/10820/.

Full text
Abstract:
In recent years, network theory has been applied to the most diverse fields, revealing properties that characterise all real networks. In this work we applied the tools of network theory to brain data obtained through resting-state functional MRI, coming from two experiments. fMRI data are particularly well suited to study through complex networks, since a single experiment typically yields more than one hundred thousand time series per individual, each with more than 100 values. Human brain data are highly variable, and every data acquisition step, as well as every step in the construction of the network, requires particular care. To obtain a network from the raw data, each preprocessing step was carried out with dedicated software, as well as with new methods that we implemented. The first dataset analysed was used as a reference for characterising network properties, in particular centrality measures, since few studies on this topic have been conducted so far. Some of the measures used indicate significant centrality values when compared with a null model. This behaviour was also investigated at different time instants, using a sliding-window approach and applying a statistical test based on a more complex null model. The second dataset analysed concerns individuals in four different resting states, from a level of full consciousness to one of deep unconsciousness. We therefore investigated the power these centrality measures have in discriminating between different states, finding them to be potential biomarkers of states of consciousness. It was also found that not all measures have the same discriminating power. To the best of our knowledge, this is the first study that characterises differences between states of consciousness in the brains of healthy individuals by means of network theory.
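The pipeline this abstract implies (correlate regional time series, threshold the matrix into a graph, compute centralities) can be sketched briefly; the random data and threshold below are illustrative assumptions only.

```python
# A minimal sketch of a resting-state correlation network and its centralities.
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
ts = rng.normal(size=(90, 150))      # 90 regions x 150 time points (synthetic)
corr = np.corrcoef(ts)               # region-by-region correlation matrix

threshold = 0.2                      # assumed; studies tune or sweep this
adj = (np.abs(corr) > threshold).astype(int)
np.fill_diagonal(adj, 0)             # no self-loops
g = nx.from_numpy_array(adj)

degree = nx.degree_centrality(g)
betweenness = nx.betweenness_centrality(g)
print(max(degree, key=degree.get), max(betweenness, key=betweenness.get))
```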
APA, Harvard, Vancouver, ISO, and other styles
38

Opie, Jake Weyman. "Securing softswitches from malicious attacks." Thesis, Rhodes University, 2007. http://hdl.handle.net/10962/d1007714.

Full text
Abstract:
Traditionally, real-time communication, such as voice calls, has run on separate, closed networks. Of all the limitations that these networks had, the ability of malicious attacks to cripple communication was not a crucial one. This situation has changed radically now that real-time communication and data have merged to share the same network. The objective of this project is to investigate the securing of softswitches with functionality similar to Private Branch Exchanges (PBX) from malicious attacks. The focus of the project will be a practical investigation of how to secure ILANGA, an ASTERISK-based system under development at Rhodes University. The practical investigation that focuses on ILANGA is based on performing six varied experiments on the different components of ILANGA. Before the six experiments are performed, basic preliminary security measures and the restrictions placed on the access to the database are discussed. The outcomes of these experiments are discussed and the precise reasons why these attacks were either successful or unsuccessful are given. Suggestions of a theoretical nature on how to defend against the successful attacks are also presented.
APA, Harvard, Vancouver, ISO, and other styles
39

Alfraidi, Hanadi Humoud A. "Interactive System for Scientific Publication Visualization and Similarity Measurement based on Citation Network." Thesis, Université d'Ottawa / University of Ottawa, 2015. http://hdl.handle.net/10393/33135.

Full text
Abstract:
Online scientific publications are becoming more and more popular. The number of publications we can access almost instantaneously is rapidly increasing. This makes it more challenging for researchers to pursue a topic, review literature, track research history or follow research trends. Using online resources such as search engines and digital libraries is helpful for finding scientific publications; however, most of the time the user ends up with an overwhelming amount of linear results to go through. This thesis proposes an alternative system, which takes advantage of citation/reference relations between publications. This provides better insight into the hierarchical distribution of publications around a given topic. We also utilize information visualization techniques to represent the publications as a network. Our system is designed to automatically retrieve publications from Google Scholar and visualize them as a 2-dimensional graph representation using the citation relations. In this, the nodes represent the documents while the links represent the citation/reference relations between them. Our visualization system provides a better view of publications, making it easier to identify the research flow, connect publications, and assess similarities/differences between them. It is an interactive web based system, which allows the users to get more information about any selected publication and calculate a similarity score between two selected publications. Traditionally, similar documents are found using Natural Language Processing (NLP), which compares documents based on matching their contents. In the proposed method, similar documents are found using the citation/reference relations, which represent the relationships originally provided by the authors. We propose a new path based metric for measuring the similarity scores between any pair of publications. This is based on both the number of paths and the length of each path. More paths and shorter lengths increase the similarity score. We compare our similarity score results with another similarity score from Scurtu’s Document Similarity [1] that uses the NLP method. We then use the average of the similarity scores collected from 15 users as a ground truth to validate the efficiency of our method. The results indicate that our Citation Network approach yielded better scores than Scurtu’s approach.
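The path-based scoring idea (more paths and shorter paths raise similarity) admits a simple sketch; the decay weighting and path-length cutoff below are assumptions, not the thesis's exact metric.

```python
# A minimal sketch of path-based similarity between two publications.
import networkx as nx

def path_similarity(g: nx.Graph, a, b, cutoff: int = 4, decay: float = 0.5) -> float:
    """Sum decay**edges over all simple paths between a and b (shorter = heavier)."""
    score = 0.0
    for path in nx.all_simple_paths(g, a, b, cutoff=cutoff):
        score += decay ** (len(path) - 1)   # a path of n nodes has n - 1 edges
    return score

# Tiny citation graph, treated as undirected for path counting.
g = nx.Graph([("p1", "p2"), ("p2", "p3"), ("p1", "p4"), ("p4", "p3")])
print(path_similarity(g, "p1", "p3"))       # two 2-edge paths -> 0.25 + 0.25 = 0.5
```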
APA, Harvard, Vancouver, ISO, and other styles
40

Abdullah, Kulsoom B. "Scaling and Visualizing Network Data to Facilitate in Intrusion Detection Tasks." Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/10509.

Full text
Abstract:
As the trend of successful network attacks continues to rise, better forms of intrusion detection and prevention are needed. This thesis addresses network traffic visualization techniques that aid administrators in recognizing attacks. A view of port statistics and Intrusion Detection System (IDS) alerts has been developed. Each helps to address issues with analyzing large datasets involving networks. Due to the amount of traffic as well as the range of possible port numbers and IP addresses, scaling techniques are necessary. A port-based overview of network activity produces an improved representation for detecting and responding to malicious activity. We have found that presenting an overview using stacked histograms of aggregate port activity, combined with the ability to drill down for finer details, allows small yet important details to be noticed and investigated without being obscured by large volumes of usual traffic. Another problem administrators face is the cumbersome amount of alarm data generated from IDS sensors. As a result, important details are often overlooked, and it is difficult to get an overall picture of what is occurring in the network by manually traversing textual alarm logs. We have designed a novel visualization to address this problem by showing alarm activity within a network. Alarm data is presented in an overview from which system administrators can get a general sense of network activity and easily detect anomalies. They additionally have the option of then zooming and drilling down for details. Based on our system administrator requirements study, this graphical layout addresses what system administrators need to see, is faster and easier than analyzing text logs, and uses visualization techniques to effectively scale and display the data. With this design, we have built a tool that effectively uses operational alarm log data generated on the Georgia Tech campus network. For both of these systems, we describe the input data, the system design, and examples. Finally, we summarize potential future work.
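The stacked port-activity overview described above takes only a few lines to sketch; the port mix and synthetic counts are assumptions for illustration.

```python
# A minimal sketch of a stacked view of aggregate port activity, so heavy
# "usual" ports do not hide smaller, more interesting ones.
import numpy as np
import matplotlib.pyplot as plt

hours = np.arange(24)
ports = {"80/tcp": 500, "443/tcp": 400, "22/tcp": 40, "445/tcp": 15}
rng = np.random.default_rng(2)
counts = {p: rng.poisson(lam, size=24) for p, lam in ports.items()}

bottom = np.zeros(24)
for port, series in counts.items():
    plt.bar(hours, series, bottom=bottom, label=port)
    bottom += series
plt.xlabel("hour of day")
plt.ylabel("packets")
plt.legend()
plt.title("Aggregate port activity (stacked)")
plt.show()
```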
APA, Harvard, Vancouver, ISO, and other styles
41

Oh, Khoon Wee. "Wireless network security : design considerations for an enterprise network /." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Dec%5FOh.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Irwin, Barry Vivian William. "A framework for the application of network telescope sensors in a global IP network." Thesis, Rhodes University, 2011. http://hdl.handle.net/10962/d1004835.

Full text
Abstract:
The use of Network Telescope systems has become increasingly popular amongst security researchers in recent years. This study provides a framework for the utilisation of this data. The research is based on a primary dataset of 40 million events spanning 50 months collected using a small (/24) passive network telescope located in African IP space. This research presents a number of differing ways in which the data can be analysed, ranging from low-level protocol based analysis to higher-level analysis at the geopolitical and network topology level. Anomalous traffic and illustrative anecdotes are explored in detail and highlighted. A discussion relating to bogon traffic observed is also presented. Two novel visualisation tools are presented, which were developed to aid in the analysis of large network telescope datasets. The first is a three-dimensional visualisation tool which allows for live, near-realtime analysis, and the second is a two-dimensional fractal based plotting scheme which allows for plots of the entire IPv4 address space to be produced and manipulated. Using the techniques and tools developed for the analysis of this dataset, a detailed analysis of traffic recorded as destined for port 445/tcp is presented. This includes the evaluation of traffic surrounding the outbreak of the Conficker worm in November 2008. A number of metrics relating to the description and quantification of network telescope configuration and the resultant traffic captures are described, the use of which it is hoped will facilitate greater and easier collaboration among researchers utilising this network security technology. The research concludes with suggestions relating to other applications of the data and intelligence that can be extracted from network telescopes, and their use as part of an organisation’s integrated network security systems.
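At its lowest level, such analysis starts from simple aggregation of telescope events, for instance by destination port; the field names and sample records in this sketch are assumptions.

```python
# A minimal sketch of per-port aggregation over network telescope events.
import pandas as pd

events = pd.DataFrame({
    "dst_port": [445, 445, 80, 1433, 445, 22, 445],
    "proto": ["tcp"] * 7,
})
by_port = (events.groupby(["dst_port", "proto"]).size()
           .sort_values(ascending=False))
print(by_port)   # 445/tcp dominating, as during the Conficker outbreak
```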
APA, Harvard, Vancouver, ISO, and other styles
43

Bonora, Filippo. "Dynamic networks, text analysis and Gephi: the art math." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amslaurea.unibo.it/6327/.

Full text
Abstract:
In numerous scientific fields, the analysis of complex networks has led to many recent discoveries: in this thesis we experimented with this approach on human language, in particular written language, where words do not interact at random. We first present measures capable of extracting important topological structures from linguistic networks (Degree, Strength, Entropy, ...) and examine the software used to represent and visualise the graphs (Gephi). We then analyse the different statistical properties of the same text in several of its forms (shuffled, without stopwords, and without low-frequency words): our database contains five books by five authors who lived in the nineteenth century. Finally, we show how certain measures are important for distinguishing a real text from its modified versions, and why the Degree distributions of a normal text and of a shuffled one follow the same trend. These results may prove useful in the increasingly active analysis of linguistic phenomena such as authorship attribution and the recognition of shuffled texts.
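A linguistic network of the kind analysed above can be built from adjacent-word co-occurrences; the naive whitespace tokenisation below is an assumption, standing in for the thesis's fuller pipeline.

```python
# A minimal sketch of a word co-occurrence network with Degree and Strength.
from collections import Counter
import networkx as nx

text = "the quick brown fox jumps over the lazy dog the fox sleeps"
words = text.split()

g = nx.Graph()
for (w1, w2), n in Counter(zip(words, words[1:])).items():
    if w1 == w2:
        continue
    if g.has_edge(w1, w2):
        g[w1][w2]["weight"] += n       # merge reversed-order pairs
    else:
        g.add_edge(w1, w2, weight=n)

degree = dict(g.degree())                   # distinct neighbours per word
strength = dict(g.degree(weight="weight"))  # total co-occurrence weight
print(sorted(strength, key=strength.get, reverse=True)[:3])
```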
APA, Harvard, Vancouver, ISO, and other styles
44

Bataineh, Mohammad Hindi. "Artificial neural network for studying human performance." Thesis, University of Iowa, 2012. https://ir.uiowa.edu/etd/3259.

Full text
Abstract:
The vast majority of products and processes in industry and academia require human interaction. Thus, digital human models (DHMs) are becoming critical for improved designs, injury prevention, and a better understanding of human behavior. Although many capabilities in the DHM field continue to mature, there are still many opportunities for improvement, especially with respect to posture- and motion-prediction. Thus, this thesis investigates the use of artificial neural networks (ANNs) for improving predictive capabilities and for better understanding how and why humans behave the way they do. With respect to motion prediction, one of the most challenging opportunities for improvement concerns computation speed. Especially when considering dynamic motion prediction, the underlying optimization problems can be large and computationally complex. Even though the current optimization-based tools for predicting human posture are relatively fast and accurate and thus do not require as much improvement, posture prediction in general is a more tractable problem than motion prediction and can provide a test bed that can shed light on potential issues with motion prediction. Thus, we investigate the use of ANNs with posture prediction in order to discover potential issues. In addition, directly using ANNs with posture prediction provides a preliminary step towards using ANNs to predict the most appropriate combination of performance measures (PMs) - what drives human behavior. The PMs, which are the cost functions that are minimized in the posture prediction problem, are typically selected manually depending on the task. This is perhaps the most significant impediment when using posture prediction. How does the user know which PMs should be used? Neural networks provide tools for solving this problem. This thesis hypothesizes that the ANN can be trained to predict human motion quickly and accurately, to predict human posture (while considering external forces), and to determine the most appropriate combination of PM(s) for posture prediction. Such capabilities will in turn provide a new tool for studying human behavior. Based on initial experimentation, the general regression neural network (GRNN) was found to be the most effective type of ANN for DHM applications. A semi-automated methodology was developed to ease network construction, training and testing processes, and network parameters. This in turn facilitates use with DHM applications. With regards to motion prediction, use of ANN was successful. The results showed that the calculation time was reduced from between 1 and 40 minutes to a fraction of a second without reducing accuracy. With regards to posture prediction, ANN was again found to be effective. However, potential issues with certain motion-prediction tasks were discovered and shed light on necessary future development with ANNs. Finally, a decision engine was developed using GRNN for automatically selecting four human PMs, and was shown to be very effective. In order to train this new approach, a novel optimization formulation was used to extract PM weights from pre-existing motion-capture data. Eventually, this work will lead to automatically and realistically driving predictive DHMs in a general virtual environment.
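The GRNN the thesis settles on is, at its core, Gaussian-kernel regression over the stored training samples, with the kernel width as the only tuned parameter; the toy data below is an assumption for illustration.

```python
# A minimal sketch of a general regression neural network (GRNN):
# predictions are kernel-weighted averages of the training targets.
import numpy as np

def grnn_predict(X_train, Y_train, X_query, sigma=0.5):
    """Gaussian-kernel weighted average of training targets."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2 * sigma ** 2))             # (n_query, n_train) weights
    return (w @ Y_train) / w.sum(axis=1, keepdims=True)

rng = np.random.default_rng(3)
X = rng.uniform(-3, 3, size=(200, 1))
Y = np.sin(X) + rng.normal(0, 0.1, X.shape)        # noisy 1-D target

Xq = np.array([[0.0], [1.5]])
print(grnn_predict(X, Y, Xq, sigma=0.3))           # ~ sin(0) and sin(1.5)
```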
APA, Harvard, Vancouver, ISO, and other styles
45

Morissette, Laurence. "Auditory Object Segregation: Investigation Using Computer Modelling and Empirical Event-Related Potential Measures." Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/37856.

Full text
Abstract:
There are multiple factors that influence auditory streaming. Some, like frequency separation or rate of presentation, have effects that are well understood, while others remain contentious. Human behavioural studies and event-related potential (ERP) studies have shown dissociation between a pre-attentive sound segregation process and an attention-dependent process in forming perceptual objects and streams. This thesis first presents a model that synthesises the processes involved in auditory object creation. It includes sensory feature extraction based on research by Bregman (1990), sensory feature binding through an oscillatory neural network based on work by Wang (1995; 1996; 1999; 2005; 2008), work by Itti and Koch (2001a) for the saliency map, and finally, work by Wrigley and Brown (2004) for the architecture of single feature processing streams, the inhibition of return of the activation, and the attentional leaky integrate-and-fire neuron. The model was tested using stimuli and an experimental paradigm used by Carlyon, Cusack, Foxton and Robertson (2001). Several modifications were then implemented to the initial model to bring it closer to psychological and cognitive validity. The second part of the thesis furthers the knowledge available concerning the influence of the time spent attending to a task on streaming. Two deviant detection experiments using triplet stimuli are presented. The first experiment is a follow-up of Thompson, Carlyon and Cusack (2011) and replicated their behavioural findings, showing that the time spent attending to a task enhances streaming, and that deviant detection is easier when one stream is perceived. The ERP results showed double decision markers, indicating that subjects may have made their deviant detection based on the absence of the time-delayed deviant and confirmed their decision with its later presence. The second experiment investigated the effect of the time spent attending to the task in the presence of a continuity illusion on streaming. It was found that the presence of this illusion prevented streaming in such a way that the pattern of the triplet was strengthened through time instead of separated into two streams, and that deviant detection was easier the longer the subjects attended to the sound sequence.
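The leaky integrate-and-fire neuron mentioned in the model description follows a standard form, sketched here with Euler integration and illustrative constants; the thesis embeds it in a much larger attentional architecture.

```python
# A minimal sketch of a leaky integrate-and-fire neuron.
tau, v_rest, v_thresh, v_reset = 20.0, -65.0, -50.0, -65.0   # ms, mV (assumed)
dt, steps = 0.1, 5000
v = v_rest
spikes = []

for step in range(steps):
    t = step * dt
    i_input = 20.0 if 100 <= t <= 400 else 0.0    # injected current (a.u.)
    v += dt * (-(v - v_rest) + i_input) / tau     # leaky integration
    if v >= v_thresh:                             # threshold crossing -> spike
        spikes.append(t)
        v = v_reset                               # reset after the spike

print(f"{len(spikes)} spikes" + (f", first at {spikes[0]:.1f} ms" if spikes else ""))
```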
APA, Harvard, Vancouver, ISO, and other styles
46

Wu, Gang. Mechanical & Manufacturing Engineering, Faculty of Engineering, UNSW. "The impact of inter-company network technology on correlations between supply chain drivers and performance measures." Publisher: University of New South Wales. Mechanical & Manufacturing Engineering, 2009. http://handle.unsw.edu.au/1959.4/43645.

Full text
Abstract:
This research aims to examine how, and to what extent, advanced network technology such as custom-built large-scale networks or internet-based technology contributes to the correlations between supply chain drivers and performance measures. The uniqueness of the research is to use network technology as a leverage factor, instead of merely one of the supply chain drivers, to analyse how it would impact the correlations between supply chain drivers and performance measures. Through a literature review, we identified the key drivers in the supply chain and the key performance indicators as independent and dependent variables respectively for data analysis in the research. We consider the utilisation of network technology as a selection variable in the analysis. We also proposed a set of research questions and hypotheses resulting from the literature review. The subsequent data analyses attempted to find answers to these questions and test the validity of the hypotheses. This was achieved by a field survey of 1035 major Australian firms through a structured questionnaire. The response rate of the survey was 20.8%. All these data were analysed with statistical models such as reliability tests, multi-collinearity tests, MANOVA procedures, factor analysis, and multiple regression modelling to validate whether the survey was robust and how the leverage factor (network technology) would impact the correlations between supply chain drivers and performance measures. Each research question and hypothesis was reviewed, validated, and concluded based on the results from the data analysis. The key findings from the data analysis support the perception that the network technologies firms use with their external customers and suppliers dramatically affect the correlations between supply chain drivers and performance measures. Statistically, they actually determine whether the supply chain will succeed or fail when comparing firms using the technologies with firms not using them. In general, the impact on the correlations is directional and positive. A set of validated theoretical models was also proposed to depict the dynamics between supply chain variables under the influence of network technology. Implications of the findings are also provided in the thesis.
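Treating network technology as a leverage factor rather than another driver amounts, statistically, to testing interaction terms; in the sketch below the variable names and simulated data are assumptions, with the sample size loosely echoing the survey's roughly 215 usable responses.

```python
# A minimal sketch of a moderation test: performance regressed on a driver,
# a technology indicator, and their interaction (the leverage effect).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 215
driver = rng.normal(size=n)                # e.g. an information-sharing score
tech = rng.integers(0, 2, size=n)          # 1 if network technology is used
perf = 0.2 * driver + 0.5 * driver * tech + rng.normal(size=n)

X = sm.add_constant(np.column_stack([driver, tech, driver * tech]))
print(sm.OLS(perf, X).fit().params)        # interaction term carries the leverage
```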
APA, Harvard, Vancouver, ISO, and other styles
47

Scheepers, Christiaan Frederick. "Implementing energy efficiency measures on the compressed air network of old South African mines / Scheepers C.F." Thesis, North-West University, 2011. http://hdl.handle.net/10394/7569.

Full text
Abstract:
A simulation was done to indicate the effect of underground valve control in a compressed air network. The simulation presents the effect of increased pressure by active compressed air control on each of the mining levels. A decrease in pressure will result in lowered operation of the compressors. The implementation of EE/DSM was successfully completed for this case study, which resulted in an average saving of 1.80 MW (91.6% of the target savings) for the three performance assessment months. To achieve this saving, a Real-time Energy Management System (REMS) was installed to ensure automatic control of the newly installed infrastructure for the compressed air network. Due to the successful implementation of the project, the client benefitted from large financial savings. Furthermore, it was demonstrated that EE/DSM strategies could be successfully implemented on the compressed air systems of old gold mines.
Thesis (M.Ing. (Electrical and Electronic Engineering))--North-West University, Potchefstroom Campus, 2012.
APA, Harvard, Vancouver, ISO, and other styles
48

Shatnawi, Ibrahem Mahmoud. "Automated Vehicle Delay and Travel Time Estimation Techniques for Improved Performance Measures of Urban Network System." University of Akron / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=akron1446473677.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

PEDIO, MANUELA. "Essays on the Time Series and Cross-Sectional Predictive Power of Network-Based Volatility Spillover Measures." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2021. http://hdl.handle.net/10281/305198.

Full text
Abstract:
This thesis includes two essays that are devoted to studying the time-series and cross-sectional predictive power of a newly developed, forward-looking volatility spillover index based on option-implied volatilities. In the first essay, we focus on the estimation of the index and on the assessment of whether (changes in) the index can predict the time-series excess returns of (a set of) individual stocks and of the S&P 500. We also compare the in-sample and out-of-sample predictive power of this index with that of the volatility spillover index proposed by Diebold and Yilmaz (2008, 2012), which is instead based on realized, backward-looking volatilities. While both measures show evidence of in-sample predictive power, only the option-implied measure is able to produce out-of-sample forecasts that outperform a simple historical mean benchmark. We find this predictive power to be exploitable by an investor using simple trading strategies based on the sign of the predicted excess return and also by a mean-variance optimizer. We also show that, although the predictive outperformance of the implied volatility spillover index comes mostly from high-volatility periods, the additional forecast power is not subsumed by the inclusion of the VIX (as a proxy of aggregate volatility) in the predictive regressions. In the second essay, we investigate whether volatility spillover risk (in addition to aggregate volatility risk) is priced in the cross-section of US stock returns. For our purpose, we conduct several (parametric and non-parametric) asset pricing tests. First, we sort the stock universe into five quintile portfolios based on their exposure to the implied volatility spillover index that we developed in the first essay. Second, we use a conditional sorting procedure to control for variables that may have a confounding effect on our results. We find that stocks with a low exposure to volatility spillovers earn on average 6.45% per annum more than stocks with a high exposure to volatility spillovers. This difference persists also after adjusting for risk and when we control for the exposure to aggregate volatility shocks. Finally, we employ a Fama-MacBeth approach to estimate the risk premium associated with volatility spillover risk; this procedure partly confirms the results from the non-parametric portfolio-sorting analysis, although the premium is lower and generally imprecisely estimated.
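The univariate quintile sort from the second essay can be sketched as follows; the exposures and returns are simulated, so the low-minus-high spread here only mimics the shape of the analysis, not the thesis's reported numbers.

```python
# A minimal sketch of sorting stocks into quintiles by spillover exposure and
# averaging the low-minus-high return spread over time.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
months, stocks = 120, 500
beta = rng.normal(0, 1, size=(months, stocks))                 # exposures
ret = 0.005 - 0.002 * beta + rng.normal(0, 0.05, beta.shape)   # monthly returns

spreads = []
for t in range(months):
    q = pd.qcut(beta[t], 5, labels=False)   # quintile codes, 0 = low, 4 = high
    spreads.append(ret[t][q == 0].mean() - ret[t][q == 4].mean())

spread = np.mean(spreads)
t_stat = spread / (np.std(spreads, ddof=1) / np.sqrt(months))
print(f"low-minus-high: {12 * spread:.2%} p.a., t = {t_stat:.2f}")
```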
APA, Harvard, Vancouver, ISO, and other styles
50

De, Bruyn Daniel Nicholas. "Investigation and development of a system for secure synchronisation in a wireless mesh network." Thesis, [Bloemfontein?] : Central University of Technology, Free State, 2010. http://hdl.handle.net/11462/132.

Full text
Abstract:
Thesis (M. Tech. (Electrical Engineering)) -- Central University of Technology, Free State, 2010
This dissertation gives an overview of the research done in developing a protocol to synchronise information in a secure wireless mesh network. Alternative methods to control wireless devices were investigated in the non-controlled frequency spectrum. The aim of the research was to develop a protocol that can be loaded on a micro-controller with limited intelligence, controlling endpoints. The protocol minimises human interference and automatically negotiates which device becomes the master controller, as illustrated in the sketch below. The device is able to discover and locate neighbour devices in range. The device has the capability to be stationary or mobile and can host multiple control endpoints. Control endpoints can be digital or analogue, input or output, and belong to a group such as security, lighting or irrigation. These capabilities can change according to the solution’s requirements. Control endpoints with the same capabilities must be able to establish a connection between each other. An endpoint has a user-friendly name and can update the remote endpoint with the description. When a connection is established, both endpoints update each other with their user-friendly names and their status. A local endpoint can trigger a certain action on a receiving control point. The system was tested with a building monitoring system because it is static and less expensive, making the evaluation more practical. A simulator for a personal computer was developed to evaluate the new protocol. Finally, the protocol was implemented and tested on a micro-controller platform.
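One simple way such automatic master negotiation can work is a lowest-ID election over announced device IDs; this rule is an assumption chosen for illustration, not the dissertation's actual protocol.

```python
# A minimal sketch of master negotiation by lowest-ID election (assumed rule).
def elect_master(own_id: int, announced_ids: list[int]) -> bool:
    """Return True if this device should act as master controller."""
    return own_id <= min(announced_ids, default=own_id)

print(elect_master(3, [7, 12, 5]))   # True: 3 is the lowest ID in range
print(elect_master(9, [7, 12, 5]))   # False: device 5 wins the election
```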
APA, Harvard, Vancouver, ISO, and other styles