Dissertations / Theses on the topic 'Network traffic detection'

Consult the top 50 dissertations / theses for your research on the topic 'Network traffic detection.'

1

Brauckhoff, Daniela. "Network traffic anomaly detection and evaluation." Aachen: Shaker, 2010. http://d-nb.info/1001177746/04.

2

Udd, Robert. "Anomaly Detection in SCADA Network Traffic." Thesis, Linköpings universitet, Programvara och system, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-122680.

Abstract:
Critical infrastructure provides the most important parts of modern society: electricity, water, and transport. To increase efficiency and meet new customer demands, remote monitoring and control of these systems is necessary. This opens new ways for an attacker to reach the Supervisory Control And Data Acquisition (SCADA) systems that control and monitor the physical processes involved, and it increases the need for security features designed specifically for these settings. Anomaly-based detection is a technique well suited to the comparatively deterministic behavior of SCADA systems. This thesis uses a combination of two techniques to detect anomalies. The first is an automatic whitelist that learns the behavior of the network flows. The second exploits differences in the arrival times of network packets. A prototype anomaly detector was developed in Bro; to analyze the IEC 60870-5-104 protocol, a new parser for Bro was also developed. The resulting anomaly detector achieved a high detection rate for three of the four types of attacks evaluated. The studied detection methods are promising when used in a highly deterministic setting, such as a SCADA system.
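The flow-whitelist technique summarized in this abstract can be sketched as follows (illustrative Python, not code from the thesis; the class and method names are invented for the example):

```python
class FlowWhitelist:
    """Learn (src, dst, port) flow tuples, then flag unseen flows as anomalies."""

    def __init__(self):
        self.known = set()
        self.training = True

    def observe(self, src, dst, port):
        flow = (src, dst, port)
        if self.training:
            self.known.add(flow)       # learning phase: remember the flow
            return False               # nothing is flagged while training
        return flow not in self.known  # detection phase: flag unknown flows


wl = FlowWhitelist()
wl.observe("10.0.0.1", "10.0.0.2", 2404)  # 2404 is the IEC 60870-5-104 port
wl.training = False
assert wl.observe("10.0.0.1", "10.0.0.2", 2404) is False  # learned flow
assert wl.observe("10.0.0.9", "10.0.0.2", 2404) is True   # unseen flow
```

In a highly deterministic SCADA network the set of legitimate flows is small and stable, which is what makes such a learned whitelist practical there and impractical on general-purpose networks.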
3

Yellapragada, Ramani. "Probabilistic Model for Detecting Network Traffic Anomalies." Ohio University / OhioLINK, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1088538020.

4

Zhang, Junjie. "Effective and scalable botnet detection in network traffic." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44837.

Abstract:
Botnets represent one of the most serious threats to Internet security, since they serve as the platforms responsible for the vast majority of large-scale, coordinated cyber attacks, such as distributed denial of service, spamming, and information theft. Detecting botnets is therefore of great importance, and a number of network-based botnet detection systems have been proposed. However, as botnets perform attacks in increasingly stealthy ways and the volume of network traffic grows rapidly, existing botnet detection systems face significant challenges in effectiveness and scalability. The objective of this dissertation is to build novel network-based solutions that boost both the effectiveness of existing botnet detection systems, by detecting botnets whose attacks are very hard to observe in network traffic, and their scalability, by adaptively sampling network packets that are likely to be generated by botnets. Specifically, this dissertation describes three unique contributions. First, we built a new system to detect drive-by download attacks, which represent one of the most significant and popular methods of botnet infection. The goal of our system is to boost the effectiveness of existing drive-by download detection systems by detecting a large number of drive-by download attacks that are missed by those existing detection efforts. Second, we built a new system to detect botnets with peer-to-peer (P2P) command-and-control (C&C) structures (i.e., P2P botnets), where P2P C&Cs currently represent the most robust C&C structures against disruption efforts. Our system aims to boost the effectiveness of existing P2P botnet detection by detecting P2P botnets in two challenging scenarios: i) botnets perform stealthy attacks that are extremely hard to observe in the network traffic; ii) bot-infected hosts also run legitimate P2P applications (e.g., BitTorrent and Skype).
Finally, we built a novel traffic analysis framework to boost the scalability of existing botnet detection systems. Our framework can effectively and efficiently identify a small percentage of hosts that are likely to be bots, and then forward network traffic associated with these hosts to existing detection systems for fine-grained analysis, thereby boosting the scalability of existing detection systems. Our traffic analysis framework includes a novel botnet-aware and adaptive packet sampling algorithm, and a scalable flow-correlation technique.
5

Vu, Hong Linh. "DNS Traffic Analysis for Network-based Malware Detection." Thesis, KTH, Kommunikationssystem, CoS, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-93842.

Abstract:
Botnets are generally recognized as one of the most challenging threats on the Internet today. Botnets have been involved in many attacks targeting multinational organizations and even nationwide internet services. As more effective detection and mitigation approaches are proposed by security researchers, botnet developers are employing new techniques for evasion. It is not surprising that the Domain Name System (DNS) is abused by botnets for the purposes of evasion, because of the important role of DNS in the operation of the Internet. DNS provides a flexible mapping between domain names and IP addresses, thus botnets can exploit this dynamic mapping to mask the location of botnet controllers. Domain-flux and fast-flux (also known as IP-flux) are two emerging techniques which aim at exhausting the tracking and blacklisting effort of botnet defenders by rapidly changing the domain names or their associated IP addresses that are used by the botnet. In this thesis, we employ passive DNS analysis to develop an anomaly-based technique for detecting the presence of a domain-flux or fast-flux botnet in a network. To do this, we construct a lookup graph and a failure graph from captured DNS traffic and decompose these graphs into clusters which have a strong correlation between their domains, hosts, and IP addresses. DNS-related features are extracted for each cluster and used as input to a classification module to identify the presence of a domain-flux or fast-flux botnet in the network. The experimental evaluation on captured traffic traces verified that the proposed technique successfully detected domain-flux botnets in the traces. The proposed technique complements other techniques for detecting botnets through traffic analysis.
Botnets are regarded as one of the most difficult Internet threats today. Botnets have been used in many attacks against multinational organizations and even national authorities and other national Internet services. As security researchers develop more effective detection and protection techniques, botnet developers have devised new techniques to avoid detection. It is therefore not surprising that the Domain Name System (DNS) is abused by botnets to evade detection, given the important role DNS plays in the operation of the Internet: DNS provides a flexible binding between domain names and IP addresses. Domain-flux and fast-flux (also called IP-flux) are two relatively new techniques used to evade tracking and IP-address blacklisting by botnet defense mechanisms, by rapidly changing the bindings between the names and IP addresses used by the botnet. This report uses passive DNS analysis to develop an anomaly-based technique for detecting botnets that use domain-flux or fast-flux. The technique is based on building a lookup graph and a failure graph from captured DNS traffic and decomposing these graphs into clusters with strong correlation among their constituent domains, hosts, and IP addresses. DNS-related features are extracted for each cluster and used as input to a classification module for identifying domain-flux and fast-flux botnets in the network. Evaluation through experiments on captured traffic traces shows that the proposed technique succeeds in detecting domain-flux botnets in the traffic. By focusing on DNS information, the proposed technique complements other techniques for detecting botnets through traffic analysis.
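The failure-graph decomposition described above can be illustrated with a small sketch (a toy in plain Python, not the thesis implementation): hosts and the domains whose lookups failed form a bipartite graph, and its connected components become the clusters from which per-cluster DNS features would be extracted.

```python
from collections import defaultdict

def failure_clusters(failed_lookups):
    """Group (host, failed_domain) pairs into connected components."""
    adj = defaultdict(set)
    for host, domain in failed_lookups:
        adj[("host", host)].add(("domain", domain))
        adj[("domain", domain)].add(("host", host))
    seen, clusters = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:                       # iterative DFS over one component
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(adj[node] - comp)
        seen |= comp
        clusters.append(comp)
    return clusters

# Hosts polling the same nonexistent names (as domain-flux bots querying the
# same algorithmically generated domains would) collapse into one cluster.
pairs = [("h1", "x1.evil"), ("h2", "x1.evil"), ("h3", "blog.example")]
assert len(failure_clusters(pairs)) == 2
```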
6

Gupta, Vikas. "File Detection in Network Traffic Using Approximate Matching." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for telematikk, 2013. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-22696.

Abstract:
Virtually every day, data breach incidents are reported in the news. Scammers, fraudsters, hackers, and malicious insiders are raking in millions with sensitive business and personal information. Not all incidents involve cunning and astute hackers; the involvement of insiders is ever increasing. Data leakage is a critical issue for many companies, especially now that every employee has access to high-speed internet. In the past, email was the only gateway for sending out information, but with the advent of technologies like SaaS (e.g., Dropbox) and other similar services, the possible routes have become numerous and complicated for an organisation to guard. Data is valuable, for legitimate and criminal purposes alike. An intuitive approach to checking for data leakage is to scan the network traffic for the presence of any transmitted confidential information. Existing systems use a slew of techniques such as keyword matching, regular-expression pattern matching, cryptographic algorithms, or rolling hashes to prevent data leakage. These techniques are either trivial to evade or suffer from high false-alarm rates. In this thesis, 'known file content' detection in network traffic using approximate matching is presented. It performs content analysis on the fly. The approach is protocol-agnostic and filetype-independent. Compared to existing techniques, the proposed approach is straightforward and does not need comprehensive configuration. It is easy to deploy and maintain, as only a file fingerprint is required instead of verbose rules.
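As a rough illustration of 'known file content' detection, consider this sketch (it uses plain n-gram overlap as a much cruder stand-in for the approximate-matching algorithms the thesis evaluates; all names are invented): the monitor fingerprints the confidential file once, then scores how much of that fingerprint reappears in a reassembled payload.

```python
def ngrams(data: bytes, n: int = 8) -> set:
    """All length-n substrings of the input; a crude content fingerprint."""
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def leak_score(secret: bytes, payload: bytes, n: int = 8) -> float:
    """Fraction of the secret file's n-grams that appear in the payload."""
    fingerprint = ngrams(secret, n)
    if not fingerprint:
        return 0.0
    return len(fingerprint & ngrams(payload, n)) / len(fingerprint)

secret = b"CONFIDENTIAL: merger terms and conditions, draft 7"
assert leak_score(secret, b"preamble " + secret + b" trailer") == 1.0
assert leak_score(secret, b"completely unrelated traffic payload") < 0.1
```

Unlike keyword or regex matching, a fingerprint-overlap score degrades gracefully when the file is partially transmitted or lightly modified, which is the property approximate matching exploits.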
7

Brauckhoff, Daniela [Verfasser]. "Network Traffic Anomaly Detection and Evaluation / Daniela Brauckhoff." Aachen : Shaker, 2010. http://d-nb.info/1122546610/34.

8

Dandurand, Luc. "Detection of network infrastructure attacks using artificial traffic." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/mq44906.pdf.

9

Taggart, Benjamin T. "Incorporating neural network traffic prediction into freeway incident detection." Morgantown, W. Va. : [West Virginia University Libraries], 1999. http://etd.wvu.edu/templates/showETD.cfm?recnum=723.

Abstract:
Thesis (M.S.)--West Virginia University, 1999.
Title from document title page. Document formatted into pages; contains viii, 55 p. : ill. (some col.) Vita. Includes abstract. Includes bibliographical references (p. 52-55).
10

Kakavelakis, Georgios. "A real-time system for abusive network traffic detection." Thesis, Monterey, California. Naval Postgraduate School, 2011. http://hdl.handle.net/10945/5754.

Abstract:
Approved for public release; distribution is unlimited
Abusive network traffic--including unsolicited e-mail, malware propagation, and denial-of-service attacks--remains a constant problem on the Internet. Despite extensive research in, and subsequent deployment of, abusive-traffic detection infrastructure, none of the available techniques addresses the problem effectively or completely. The fundamental failing of existing methods is that spammers and attack perpetrators rapidly adapt to and circumvent new mitigation techniques. Analyzing network traffic by exploiting transport-layer characteristics can help remedy this and provide effective detection of abusive traffic. Within this framework, we develop a real-time, online system that integrates transport-layer characteristics into the existing SpamAssassin tool for detecting unsolicited commercial e-mail (spam). Specifically, we implement the previously proposed, but undeveloped, SpamFlow technique. We determine appropriate algorithms based on classification performance, training required, adaptability, and computational load. We evaluate system performance in a virtual test bed and a live environment and present analytical results. Finally, we evaluate our system in the context of SpamAssassin's auto-learning mode, providing an effective method to train the system without explicit user interaction or feedback.
11

Moe, Lwin P. "Cyber security risk analysis framework : network traffic anomaly detection." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/118536.

Abstract:
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 84-86).
Cybersecurity is a growing research area with direct commercial impact to organizations and companies in every industry. With all other technological advancements in the Internet of Things (IoT), mobile devices, cloud computing, 5G network, and artificial intelligence, the need for cybersecurity is more critical than ever before. These technologies drive the need for tighter cybersecurity implementations, while at the same time act as enablers to provide more advanced security solutions. This paper will discuss a framework that can predict cybersecurity risk by identifying normal network behavior and detect network traffic anomalies. Our research focuses on the analysis of the historical network traffic data to identify network usage trends and security vulnerabilities. Specifically, this thesis will focus on multiple components of the data analytics platform. It explores the big data platform architecture, and data ingestion, analysis, and engineering processes. The experiments were conducted utilizing various time series algorithms (Seasonal ETS, Seasonal ARIMA, TBATS, Double-Seasonal Holt-Winters, and Ensemble methods) and Long Short-Term Memory Recurrent Neural Network algorithm. Upon creating the baselines and forecasting network traffic trends, the anomaly detection algorithm was implemented using specific thresholds to detect network traffic trends that show significant variation from the baseline. Lastly, the network traffic data was analyzed and forecasted in various dimensions: total volume, source vs. destination volume, protocol, port, machine, geography, and network structure and pattern. The experiments were conducted with multiple approaches to get more insights into the network patterns and traffic trends to detect anomalies.
by Lwin P. Moe.
S.M. in Engineering and Management
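The thresholding step this abstract describes, flagging traffic that deviates significantly from a forecast baseline, can be sketched in a few lines (hypothetical code; the thesis builds its baselines with the richer time-series and LSTM models listed above):

```python
import statistics

def anomalies(observed, forecast, k=3.0):
    """Indices where the residual deviates more than k sigma from its mean."""
    residuals = [o - f for o, f in zip(observed, forecast)]
    mu = statistics.fmean(residuals)
    sigma = statistics.pstdev(residuals) or 1e-9  # avoid divide-by-zero on flat data
    return [i for i, r in enumerate(residuals) if abs(r - mu) > k * sigma]

baseline = [100.0] * 12  # forecast volume (e.g. Mbps) per interval
observed = [101, 99, 100, 102, 98, 100, 100, 400, 99, 101, 100, 100]
assert anomalies(observed, baseline, k=3.0) == [7]  # the traffic spike
```

The same residual test can be run per dimension (protocol, port, source vs. destination volume), which is how multi-dimensional baselining of the kind described above narrows down what changed, not just when.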
12

Carlsson, Oskar, and Daniel Nabhani. "User and Entity Behavior Anomaly Detection using Network Traffic." Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-14636.

13

Caulkins, Bruce. "SESSION-BASED INTRUSION DETECTION SYSTEM TO MAP ANOMALOUS NETWORK TRAFFIC." Doctoral diss., University of Central Florida, 2005. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3466.

Abstract:
Computer crime is a large problem (CSI, 2004; Kabay, 2001a; Kabay, 2001b). Security managers have a variety of tools at their disposal – firewalls, Intrusion Detection Systems (IDSs), encryption, authentication, and other hardware and software solutions – to combat computer crime. Many IDS variants exist that allow security managers and engineers to identify attack packets, primarily through the use of signature detection; i.e., the IDS recognizes attack packets by their well-known "fingerprints" or signatures as those packets cross the network's gateway threshold. Anomaly-based ID systems, on the other hand, determine what traffic is normal within a network and report abnormal traffic behavior. This paper describes a methodology for developing a more robust Intrusion Detection System through the use of data-mining techniques and anomaly detection. These data-mining techniques dynamically model what a normal network should look like and reduce the false-positive and false-negative alarm rates in the process. We use classification-tree techniques to accurately predict probable attack sessions. Overall, our goal is to model network traffic into network sessions and identify those sessions that have a high probability of being an attack and can be labeled "suspect sessions." Subsequently, we use these techniques in concert with signature detection methods and known signatures and patterns, in order to present a better model for the detection and protection of networks and systems.
Ph.D.
Other
Arts and Sciences
Modeling and Simulation
14

LUO, SONG. "CREATING MODELS OF INTERNET BACKGROUND TRAFFIC SUITABLE FOR USE IN EVALUATING NETWORK INTRUSION DETECTION SYSTEMS." Doctoral diss., University of Central Florida, 2005. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2790.

Abstract:
This dissertation addresses Internet background traffic generation and network intrusion detection. It is organized in two parts. Part one introduces a method to model realistic Internet background traffic and demonstrates how the models are used both in a simulation environment and in a lab environment. Part two introduces two different NID (Network Intrusion Detection) techniques and evaluates them using the modeled background traffic. To demonstrate the approach we modeled five major application layer protocols: HTTP, FTP, SSH, SMTP and POP3. The model of each protocol includes an empirical probability distribution plus estimates of application-specific parameters. Due to the complexity of the traffic, hybrid distributions (called mixture distributions) were sometimes required. The traffic models are demonstrated in two environments: NS-2 (a simulator) and HONEST (a lab environment). The simulation results are compared against the original captured data sets. Users of HONEST have the option of adding network attacks to the background. The dissertation also introduces two new template-based techniques for network intrusion detection. One is based on a template of autocorrelations of the investigated traffic, while the other uses a template of correlation integrals. Detection experiments have been performed on real traffic and attacks; the results show that the two techniques can achieve high detection probability and low false alarm in certain instances.
Ph.D.
Engineering and Computer Science
Computer Science
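The mixture-distribution modeling mentioned in this abstract can be sketched as follows (illustrative parameters only, not those fitted in the dissertation): a background-traffic generator draws, say, HTTP object sizes from a weighted mix of two lognormal components, one for small objects and one for bulk transfers, which is the kind of hybrid shape a single simple distribution cannot capture.

```python
import random

def sample_object_size(rng):
    """Draw one synthetic HTTP object size (bytes) from a 2-component lognormal mix."""
    if rng.random() < 0.8:                    # 80% component: small objects
        return rng.lognormvariate(7.0, 1.0)   # median around e^7 ≈ 1.1 kB
    return rng.lognormvariate(12.0, 1.5)      # 20% component: bulk transfers

rng = random.Random(42)
sizes = [sample_object_size(rng) for _ in range(10_000)]
assert len(sizes) == 10_000 and all(s > 0 for s in sizes)
```

Replaying sizes (and analogous inter-request times) drawn this way per protocol is what lets a simulator or lab testbed present an intrusion detector with realistic, attack-free background load.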
15

Cowan, KC Kaye. "Detecting Hidden Wireless Cameras through Network Traffic Analysis." Thesis, Virginia Tech, 2020. http://hdl.handle.net/10919/100148.

Abstract:
Wireless cameras dominate the home surveillance market, providing an additional layer of security for homeowners. Cameras are not limited to private residences; retail stores, public bathrooms, and public beaches represent only some of the possible locations where wireless cameras may be monitoring people's movements. When cameras are deployed into an environment, one would typically expect the user to disclose the presence of the camera as well as its location, which should be outside of a private area. However, adversarial camera users may withhold information and prevent others from discovering the camera, forcing others to determine if they are being recorded on their own. To uncover hidden cameras, a wireless camera detection system must be developed that will recognize the camera's network traffic characteristics. We monitor the network traffic within the immediate area using a separately developed packet sniffer, a program that observes and collects information about network packets. We analyze and classify these packets based on how well their patterns and features match those expected of a wireless camera. We used a Support Vector Machine classifier and a secondary-level of classification to reduce false positives to design and implement a system that uncovers the presence of hidden wireless cameras within an area.
Master of Science
Wireless cameras may be found almost anywhere, whether they are used to monitor city traffic and report on travel conditions or to act as home surveillance when residents are away. Regardless of their purpose, wireless cameras may observe people wherever they are, as long as a power source and Wi-Fi connection are available. While most wireless camera users install such devices for peace of mind, there are some who take advantage of cameras to record others without their permission, sometimes in compromising positions or places. Because of this, systems are needed that may detect hidden wireless cameras. We develop a system that monitors network traffic packets, specifically based on their packet lengths and direction, and determines if the properties of the packets mimic those of a wireless camera stream. A double-layered classification technique is used to uncover hidden wireless cameras and filter out non-wireless camera devices.
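A toy version of the packet-length and direction feature extraction described above (invented names; the actual system feeds such features to an SVM with a second classification layer): summarize a device's traffic into the kind of feature vector a classifier would consume.

```python
def camera_features(packets):
    """packets: list of (length_bytes, direction) with direction 'up' or 'down'.

    A streaming camera is dominated by large upstream packets, so the
    upstream byte ratio and mean upstream packet length are telling features.
    """
    up = [n for n, d in packets if d == "up"]
    down = [n for n, d in packets if d == "down"]
    total = sum(up) + sum(down) or 1          # guard against empty input
    return {
        "up_byte_ratio": sum(up) / total,
        "mean_up_len": sum(up) / len(up) if up else 0.0,
    }

stream = [(1400, "up")] * 50 + [(60, "down")] * 5  # camera-like traffic shape
feats = camera_features(stream)
assert feats["up_byte_ratio"] > 0.95
assert feats["mean_up_len"] == 1400.0
```

Because these features come from packet sizes and directions alone, the approach works on encrypted Wi-Fi traffic without decrypting the video stream.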
16

Ramadas, Manikantan. "Detecting Anomalous Network Traffic With Self-Organizing Maps." Ohio University / OhioLINK, 2003. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1049472005.

17

Kim, Seong Soo. "Real-time analysis of aggregate network traffic for anomaly detection." Texas A&M University, 2005. http://hdl.handle.net/1969.1/2312.

Abstract:
Frequent and large-scale network attacks have led to an increased need for techniques for analyzing network traffic. If efficient analysis tools were available, it would become possible to detect attacks and anomalies, and to take appropriate action to contain them before they have had time to propagate across the network. In this dissertation, we suggest a technique for traffic anomaly detection based on analyzing the correlation of destination IP addresses and the distribution of an image-based signal, both postmortem and in real time, by passively monitoring the packet headers of traffic. These address-correlation data are transformed using the discrete wavelet transform for effective detection of anomalies through statistical analysis. Results from trace-driven evaluation suggest that the proposed approach could provide an effective means of detecting anomalies close to the source. We present a multidimensional indicator using the correlation of port numbers as a means of detecting anomalies. We also present a network measurement approach that can simultaneously detect, identify, and visualize attacks and anomalous traffic in real time. We propose to represent samples of network packet header data as frames or images. With such a formulation, a series of samples can be seen as a sequence of frames, or video. This enables techniques from image processing and video compression, such as the DCT, to be applied to the packet header data to reveal interesting properties of the traffic. We show that "scene change analysis" can reveal sudden changes in traffic behavior or anomalies. We show that "motion prediction" techniques can be employed to understand the patterns of some of the attacks. We show that it may be feasible to represent multiple pieces of data as different colors of an image, enabling a uniform treatment of multidimensional packet header data.
Measurement-based techniques for analyzing network traffic treat traffic volume and traffic header data as signals or images in order to make the analysis feasible. In this dissertation, we propose an approach based on the classical Neyman-Pearson test employed in signal detection theory to evaluate these different strategies. We use both analytical models and trace-driven experiments to compare the performance of the different strategies. Our evaluations on real traces reveal differences in the effectiveness of different traffic header data as potential signals for traffic analysis, in terms of their detection rates and false-alarm rates. Our results show that address distributions and the number of flows are better signals than traffic volume for anomaly detection. Our results also show that statistical techniques can sometimes be more effective than the NP test when the attack patterns change over time.
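One level of the discrete wavelet transform used on the address-correlation signal can be sketched with the Haar basis (a simplified illustration; the dissertation's choice of wavelet and decomposition depth may differ): abrupt changes in the signal concentrate in the detail coefficients.

```python
def haar_level(signal):
    """One Haar DWT level: pairwise averages (approximation) and differences (detail)."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    return approx, detail

# A sudden jump mid-signal (e.g. an attack onset) surfaces as a large detail term.
sig = [4, 4, 4, 40, 40, 40, 4, 4]
approx, detail = haar_level(sig)
assert approx == [4.0, 22.0, 40.0, 4.0]
assert detail == [0.0, -18.0, 0.0, 0.0]
```

Thresholding the detail band, rather than the raw signal, is what makes abrupt anomalies stand out against slowly varying diurnal traffic patterns.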
18

El-Shehaly, Mai Hassan. "A Visualization Framework for SiLK Data exploration and Scan Detection." Thesis, Virginia Tech, 2009. http://hdl.handle.net/10919/34606.

Abstract:
Network packet traces, despite having a lot of noise, contain priceless information, especially for investigating security incidents or troubleshooting performance problems. However, given the gigabytes of flow data crossing a typical medium-sized enterprise network every day, spotting malicious activity and analyzing trends in network behavior becomes a tedious task. Further, computational mechanisms for analyzing such data usually take substantial time to reach interesting patterns and often mislead the analyst into reaching false positives, benign traffic being identified as malicious, or false negatives, where malicious activity goes undetected. Therefore, the appropriate representation of network traffic data to the human user has been an issue of concern recently. Much of the focus, however, has been on visualizing TCP traffic alone while adapting visualization techniques for the data fields that are relevant to this protocol's traffic, rather than on the multivariate nature of network security data in general, and the fact that forensic analysis, in order to be fast and effective, has to take into consideration different parameters for each protocol. In this thesis, we bring together two powerful tools from different areas of application: SiLK (System for Internet-Level Knowledge), for command-based network trace analysis; and ComVis, a generic information visualization tool. We integrate the power of both tools by aiding simplified interaction between them, using a simple GUI, for the purpose of visualizing network traces, characterizing interesting patterns, and fingerprinting related activity. To obtain realistic results, we applied the visualizations on anonymized packet traces from Lawrence Berkeley National Laboratory, captured on selected hours across three months. We used a sliding window approach in visually examining traces for two transport-layer protocols: ICMP and UDP. 
The main contribution of this research is a protocol-specific framework of visualization for ICMP and UDP data. We explored relevant header fields and the visualizations that worked best for each of the two protocols separately. The resulting views led us to a number of guidelines that can be vital in the creation of "smart books" describing best practices in using visualization and interaction techniques to maintain network security; while creating visual fingerprints which were found unique for individual types of scanning activity. Our visualizations use a multiple-views approach that incorporates the power of two-dimensional scatter plots, histograms, parallel coordinates, and dynamic queries.
Master of Science
19

Sathyanarayana, Supreeth. "Characterizing the effects of device components on network traffic." Thesis, Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47640.

Abstract:
When a network packet is formed by a computer's protocol stack, there are many components (e.g., Memory, CPU, etc.) of the computer that are involved in the process. The objective of this research is to identify, characterize and analyze the effects of the various components of a device (e.g., Memory, CPU, etc.) on the device's network traffic by measuring the changes in its network traffic with changes in its components. We also show how this characterization can be used to effectively perform counterfeit detection of devices which have counterfeit components (e.g., Memory, CPU, etc.). To obtain this characterization, we measure and apply statistical analyses like probability distribution functions (PDFs) on the interarrival times (IATs) of the device's network packets (e.g., ICMP, UDP, TCP, etc.). The device is then modified by changing just one component (e.g., Memory, CPU, etc.) at a time while holding the rest constant and acquiring the IATs again. This, over many such iterations, provides an understanding of the effect of each component on the overall device IAT statistics. Such statistics are captured for devices (e.g., field-programmable gate arrays (FPGAs) and personal computers (PCs)) of different types. Some of these statistics remain stable across different IAT captures for the same device and differ for different devices (completely different devices or even the same device with its components changed). Hence, these statistical variations can be used to detect changes in a device's composition, which lends itself well to counterfeit detection. Counterfeit devices are abundant in today's world and cause billions of dollars of loss in revenue. Device components are substituted with inferior quality components or are replaced by lower capacity components. 
Armed with our understanding of the effects of various device components on the device's network traffic, we show how such substitutions or alterations of legitimate device components can be detected and hence perform effective counterfeit detection by statistically analyzing the deviation of the device's IATs from that of the original legitimate device. We perform such counterfeit detection experiments on various types of device configurations (e.g., PC with changed CPU, RAM, etc.) to prove the technique's efficacy. Since this technique is a fully network-based solution, it is also a non-destructive technique which can quickly, inexpensively and easily verify the device's legitimacy. This research also discusses the limitations of network-based counterfeit detection.
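The IAT-based comparison can be sketched as follows (hypothetical code; the research applies fuller distributional analyses): derive inter-arrival times from packet timestamps and compare two devices with a simple maximum-CDF-distance statistic, in the style of a Kolmogorov-Smirnov test.

```python
def iats(timestamps):
    """Inter-arrival times from a sorted list of packet timestamps (seconds)."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def ks_distance(xs, ys):
    """Maximum distance between the empirical CDFs of two samples."""
    xs, ys = sorted(xs), sorted(ys)
    points = sorted(set(xs + ys))

    def cdf(sample, v):
        return sum(1 for s in sample if s <= v) / len(sample)

    return max(abs(cdf(xs, v) - cdf(ys, v)) for v in points)

device_a = iats([0.000, 0.010, 0.020, 0.030, 0.040])  # steady ~10 ms gaps
device_b = iats([0.000, 0.050, 0.100, 0.150, 0.200])  # steady ~50 ms gaps
assert ks_distance(device_a, device_b) == 1.0          # fully separated samples
assert ks_distance(device_a, device_a) == 0.0
```

A large distance between a suspect device's IAT sample and the reference fingerprint of the legitimate device is the kind of deviation that would flag a component substitution, entirely from the network side.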
20

Alizadeh, Hassan. "Intrusion detection and traffic classification using application-aware traffic profiles." Doctoral thesis, Universidade de Aveiro, 2018. http://hdl.handle.net/10773/23545.

Abstract:
Doctoral degree in Electrical Engineering within the MAP-tele doctoral program
Along with the ever-growing number of applications and end-users, online network attacks and advanced generations of malware have continuously proliferated. Many studies have addressed the issue of intrusion detection by inspecting aggregated network traffic with no knowledge of the responsible applications/services. Such systems may detect abnormal traffic, but fail to detect intrusions in applications whenever their abnormal traffic fits into the network normality profiles. Moreover, they cannot identify intrusion-infected applications responsible for the abnormal traffic. This work addresses the detection of intrusions in applications when their traffic exhibits anomalies. To do so, we need to: (1) bind traffic to applications; (2) have per-application traffic profiles; and (3) detect deviations from profiles given a set of traffic samples. The first requirement has been addressed in our previous works. Assuming that such binding is available, this thesis' work addresses the last two topics in the detection of abnormal traffic, thereby identifying its source (possibly malware-infected) application. Applications' traffic profiles are not a new concept, since researchers in the field of Traffic Identification and Classification (TIC) make use of them as a baseline of their systems to identify and categorize traffic samples by application (types-of-interest). But they do not seem to have received much attention in the scope of intrusion detection systems (IDS). We first provide a survey of TIC strategies, within a taxonomy framework, focusing on how the referred TIC techniques could help us build applications' traffic profiles. As a result of this study, we found that most TIC methodologies are based on some (well-known) statistical assumptions extracted from different traffic sources and make use of machine-learning techniques in order to build models (profiles) for recognition of either application types-of-interest or application-layer protocols. 
Moreover, the traffic classification literature has examined some traffic sources (e.g. the first few packets of flows, and multiple sub-flows) that do not seem to have received much attention in the scope of IDS research. An IDS can take advantage of such traffic sources in order to provide timely detection of intrusions before they propagate their infected traffic. First, we utilize conventional Gaussian Mixture Models (GMMs) to build per-application profiles. No prior information on the data distribution of each application is available. Despite the improvement in performance, stability on high-dimensional data and the calibration of a proper threshold for intrusion detection remain the main concerns. Therefore, we improve the framework by resorting to a universal background model (UBM) to robustly learn application-specific models. The proposed anomaly detection systems are based on class-specific and global thresholding mechanisms, where a threshold is set at the Equal Error Rate (EER) operating point to determine whether a flow claimed by an application is genuine. Our proposed modelling approaches can also be used in a traffic classification scenario, where the aim is to assign each specific flow to an application (type-of-interest). We also investigate the suitability of the proposed approaches with just a few initial packets from a traffic flow, in order to provide a more efficient and timely detection system. Several tests were conducted on multiple public datasets collected from real networks. The numerous experiments that are reported provide evidence of the effectiveness of the proposed approaches.
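The per-application profiling and EER thresholding described in this abstract can be illustrated with a deliberately simplified sketch: a single-Gaussian profile stands in for the thesis's GMM/UBM models, the flow feature is a scalar (e.g. mean packet size), and the threshold is chosen where false-accept and false-reject rates are closest. All names and numbers below are illustrative assumptions, not the thesis's implementation.

```python
import math
from statistics import mean, stdev

def fit_profile(samples):
    # Single-Gaussian stand-in for the per-application GMM described above
    return mean(samples), stdev(samples)

def log_likelihood(x, profile):
    # Log-density of the flow feature under the application's profile
    mu, sigma = profile
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def eer_threshold(genuine_scores, impostor_scores):
    # Scan candidate thresholds; pick the one where the false-accept and
    # false-reject rates are closest (the Equal Error Rate operating point).
    candidates = sorted(genuine_scores + impostor_scores)
    best_t, best_gap = candidates[0], float("inf")
    for t in candidates:
        frr = sum(s < t for s in genuine_scores) / len(genuine_scores)
        far = sum(s >= t for s in impostor_scores) / len(impostor_scores)
        if abs(far - frr) < best_gap:
            best_t, best_gap = t, abs(far - frr)
    return best_t
```

A flow "claimed" by an application is then accepted when its log-likelihood under that application's profile is at or above the EER threshold.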
APA, Harvard, Vancouver, ISO, and other styles
21

Syal, Astha. "Automatic Network Traffic Anomaly Detection and Analysis using Supervised Machine Learning Techniques." Youngstown State University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ysu1578259840945109.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Wang, Xiaoming. "Hierarchical TCP network traffic classification with adaptive optimisation." Thesis, Loughborough University, 2010. https://dspace.lboro.ac.uk/2134/8228.

Full text
Abstract:
Nowadays, with the increasing deployment of modern packet-switching networks, traffic classification is playing an important role in network administration. Identifying what kinds of traffic are transmitted across networks can improve network management in various ways, such as traffic shaping, differential services and enhanced security. By applying different policies to different kinds of traffic, Quality of Service (QoS) can be achieved, with granularity as fine as the flow level. Since illegal traffic can be identified and filtered, network security can also be enhanced by employing advanced traffic classification. There are various traditional techniques for traffic classification. However, some of them cannot handle traffic generated by applications using non-registered or forged ports, some cannot deal with encrypted traffic, and some require too much computational resource. A newer technique proposed by other researchers, which uses statistical methods, offers an alternative approach. It requires fewer resources, does not rely on ports and can deal with encrypted traffic. Nevertheless, the performance of classification using statistical methods can be further improved. In this thesis, we aim to optimise network traffic classification based on the statistical approach. Because of the popularity of the TCP protocol, and the difficulties for classification introduced by TCP traffic controls, our work focuses on classifying network traffic based on the TCP protocol. An architecture has been proposed for improving classification performance, in terms of accuracy and response time. Experiments have been conducted and the results evaluated to demonstrate the improved performance of the proposed optimised classifier. In our work, network packets are reassembled into TCP flows. Then, the statistical characteristics of flows are extracted.
Finally, the classes of input flows can be determined by comparing them with the profiled samples. Instead of using only one algorithm for classifying all traffic flows, our proposed system employs a series of binary classifiers, which use optimised algorithms to detect different traffic classes separately. A decision-making mechanism deals with controversial results from the binary classifiers. Machine learning algorithms including k-nearest neighbour, decision trees and artificial neural networks have been taken into consideration, together with a non-parametric statistical algorithm, the Kolmogorov-Smirnov test. Besides algorithms, some parameters are also optimised locally, such as detection windows and acceptance thresholds. This hierarchical architecture gives the traffic classifier more flexibility, higher accuracy and shorter response time.
APA, Harvard, Vancouver, ISO, and other styles
23

Lee, Robert. "ON THE APPLICATION OF LOCALITY TO NETWORK INTRUSION DETECTION: WORKING-SET ANALYSIS OF REAL AND SYNTHETIC NETWORK SERVER TRAFFIC." Doctoral diss., Orlando, Fla. : University of Central Florida, 2009. http://purl.fcla.edu/fcla/etd/CFE0002718.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Minton, Carl Edward. "Modeling and Estimation Techniques for Wide-Area Network Traffic with Atypical Components." Thesis, Virginia Tech, 2002. http://hdl.handle.net/10919/32044.

Full text
Abstract:
A critical first step to improving existing and designing future wide-area networks is an understanding of the load placed on these networks. Efforts to model traffic are often confounded by atypical traffic: traffic particular to the observation site and not ubiquitously applicable. The causes and characteristics of atypical traffic are explored in this thesis. Atypical traffic is found to interfere with parsimonious analytic traffic models. A detection and modeling technique is presented and studied for atypical traffic characterized by strongly clustered inliers. This technique is found to be effective on both real-world observations and simulated data.

Another form of atypical traffic is shown to result in multimodal distributions of connection statistics. Putative methods for bimodal estimation are reviewed and a novel technique, the midpoint-distance profile, is presented. The performance of these estimation techniques is studied via simulation and the methods are examined in the context of atypical network traffic. The advantages and disadvantages of each method are reported.
Master of Science
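The midpoint-distance profile is the thesis's own contribution and is not specified in the abstract; as a generic stand-in, the location of the two modes of a bimodal connection statistic can be sketched with a 1-D two-means iteration. This is a hypothetical illustration, not the method studied in the thesis.

```python
def two_means(xs, iters=20):
    # 1-D two-means: a generic stand-in for bimodal mode estimation.
    # Initialise the two means at the sample extremes, then alternate
    # between assigning points to the nearer mean and re-averaging.
    xs = sorted(xs)
    m1, m2 = xs[0], xs[-1]
    for _ in range(iters):
        left = [x for x in xs if abs(x - m1) <= abs(x - m2)]
        right = [x for x in xs if abs(x - m1) > abs(x - m2)]
        if left:
            m1 = sum(left) / len(left)
        if right:
            m2 = sum(right) / len(right)
    return m1, m2
```

On a clearly bimodal sample the two returned values settle near the two cluster centres; a real estimator would also have to decide whether the sample is bimodal at all, which is where techniques like the midpoint-distance profile come in.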

APA, Harvard, Vancouver, ISO, and other styles
25

Soysal, Murat. "A Novel Method For The Detection Of P2p Traffic In The Network Backbone Inspired By Intrusion Detection Systems." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/3/12607315/index.pdf.

Full text
Abstract:
The share of peer-to-peer (P2P) protocols in the total network traffic grows day by day in the Turkish Academic Network (UlakNet), similar to other networks in the world. This growth is mostly due to the popularity of the shared content and the great enhancement in P2P protocols since they first came out with Napster. The shared files are generally both large and copyrighted. Motivated by the problems of UlakNet with P2P traffic, we propose in this thesis a novel method for P2P traffic detection in the network backbone. Observing the similarity between detecting traffic that belongs to a specific protocol and detecting an intrusion in a computer system, we adopt an Intrusion Detection System (IDS) technique to detect P2P traffic. Our method is a passive detection procedure that uses traffic flows gathered from border routers. Hence, it is scalable and does not have the problems of other approaches that rely on packet payload data or transport-layer ports.
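The abstract does not spell out the IDS-derived detection rules, but one classic payload- and port-independent heuristic over border-router flow records, flagging hosts that contact many peers across many distinct ports, conveys the flavour of flow-based P2P detection. The thresholds and field layout below are assumptions, not the thesis's method.

```python
from collections import defaultdict

def p2p_suspects(flows, peer_threshold=50, port_spread=0.8):
    # flows: (src_ip, dst_ip, dst_port) tuples from border-router flow records.
    # A P2P host typically talks to many distinct peers, each on a different
    # (often ephemeral) port; a client-server host reuses a few service ports.
    peers = defaultdict(set)
    ports = defaultdict(set)
    for src, dst, dport in flows:
        peers[src].add(dst)
        ports[src].add(dport)
    return {h for h in peers
            if len(peers[h]) >= peer_threshold
            and len(ports[h]) / len(peers[h]) >= port_spread}
```

A host fanning out to 60 peers on 60 distinct ports would be flagged, while a busy web client contacting 60 servers all on port 80 would not.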
APA, Harvard, Vancouver, ISO, and other styles
26

Casas, Hernandez Pedro. "Statistical analysis of network traffic for anomaly detection and quality of service provisioning." Télécom Bretagne, 2010. http://www.theses.fr/2010TELB0111.

Full text
Abstract:
Network-wide traffic analysis and monitoring in large-scale networks is a challenging and expensive task. In this thesis work we have proposed to analyze the traffic of a large-scale IP network from aggregated traffic measurements, reducing measurement overheads and simplifying implementation issues. We have provided contributions in three different networking fields related to network-wide traffic analysis and monitoring in large-scale IP networks. The first contribution regards Traffic Matrix (TM) modeling and estimation, where we have proposed new statistical models and new estimation methods to analyze the Origin-Destination (OD) flows of a large-scale TM from easily available link traffic measurements. The second contribution regards the detection and localization of volume anomalies in the TM, where we have introduced novel methods with solid optimality properties that outperform current well-known techniques for network-wide anomaly detection proposed so far in the literature. The last contribution regards the optimization of the routing configuration in large-scale IP networks, particularly when the traffic is highly variable and difficult to predict. Using the notions of Robust Routing Optimization we have proposed new approaches for Quality of Service provisioning under highly variable and uncertain traffic scenarios. In order to provide strong evidence on the relevance of our contributions, all the methods proposed in this thesis work were validated using real traffic data from different operational networks. Additionally, their performance was compared against well-known works in each field, showing outperforming results in most cases. 
Taken together, the ensemble of developed TM models, the optimal network-wide anomaly detection and localization methods, and the routing optimization algorithms offer a complete solution for network operators to efficiently monitor large-scale IP networks from aggregated traffic measurements and to provide accurate QoS-based performance, even in the event of volume traffic anomalies.
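The ill-posed linear inverse problem at the heart of TM estimation (link loads y = A x, with far fewer links than OD flows) can be made concrete with a toy two-link, three-flow topology; the routing matrix and volumes below are invented for illustration. The minimum-norm least-squares solution reproduces the measured link loads exactly yet differs from the true TM, which is precisely why additional statistical modelling of the kind developed in the thesis is needed.

```python
import numpy as np

# Toy routing matrix: A[l, k] = 1 if OD flow k traverses link l (invented topology)
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
x_true = np.array([10.0, 5.0, 8.0])   # hypothetical OD-flow volumes (the "true" TM)
y = A @ x_true                        # the only thing SNMP-style counters give us

# 2 equations, 3 unknowns: the system is underdetermined, so lstsq returns
# the minimum-norm solution -- one of infinitely many TMs consistent with y.
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
```

Here x_hat fits the link measurements perfectly while being a different traffic matrix from x_true, a two-line demonstration of why the problem is called strongly ill-posed.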
APA, Harvard, Vancouver, ISO, and other styles
27

Wang, Qinghua. "Traffic analysis, modeling and their applications in energy-constrained wireless sensor networks: on network optimization and anomaly detection." Doctoral thesis, Sundsvall : Tryckeriet Mittuniversitetet, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-10690.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Khasgiwala, Jitesh. "Analysis of Time-Based Approach for Detecting Anomalous Network Traffic." Ohio University / OhioLINK, 2005. http://www.ohiolink.edu/etd/view.cgi?ohiou1113583042.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Thomas, Kim. "Incident detection on arterials using neural network data fusion of simulated probe vehicle and loop detector data /." [St. Lucia, Qld.], 2005. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe18433.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Damour, Gabriel. "Information-Theoretic Framework for Network Anomaly Detection: Enabling online application of statistical learning models to high-speed traffic." Thesis, KTH, Matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-252560.

Full text
Abstract:
With the current proliferation of cyber attacks, safeguarding internet-facing assets from network intrusions is becoming a vital task in our increasingly digitalised economies. Although recent successes of machine learning (ML) models herald the dawn of a new generation of intrusion detection systems (IDS), current solutions struggle to implement these in an efficient manner, leaving many IDSs to rely on rule-based techniques. In this paper we begin by reviewing the different approaches to feature construction and attack source identification employed in such applications. We refer to these steps as the framework within which models are implemented, and use it as a prism through which we can identify the challenges different solutions face when applied in modern network traffic conditions. Specifically, we discuss how the most popular framework, the so-called flow-based approach, suffers from significant overhead introduced by its resource-heavy pre-processing step. To address these issues, we propose the Information Theoretic Framework for Network Anomaly Detection (ITF-NAD), whose purpose is to facilitate the online application of statistical learning models to high-speed network links, as well as to provide a method of identifying the sources of traffic anomalies. Its development was inspired by previous work on information-theoretic anomaly and outlier detection, and it employs modern techniques of entropy estimation over data streams. Furthermore, a case study of the framework's detection performance over 5 different types of Denial of Service (DoS) attacks is undertaken, in order to illustrate its potential use for intrusion detection and mitigation. The case study resulted in state-of-the-art performance for timely anomaly detection of single-source as well as distributed attacks, and shows promising results regarding the framework's ability to identify underlying sources.
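The entropy-over-streams idea can be miniaturised as follows: compute the empirical Shannon entropy of a traffic feature (here, source IPs) per time window and flag windows that deviate sharply from an exponentially weighted baseline. The windowing, the EWMA baseline and the 1-bit deviation threshold are illustrative assumptions, not ITF-NAD's actual estimators.

```python
import math
from collections import Counter

def window_entropy(items):
    # Empirical Shannon entropy (in bits) of a feature within one time window
    counts = Counter(items)
    n = len(items)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def anomalous_windows(windows, delta=1.0):
    # Flag windows whose entropy jumps more than `delta` bits from the baseline
    ents = [window_entropy(w) for w in windows]
    baseline = ents[0]
    flagged = []
    for i, e in enumerate(ents[1:], start=1):
        if abs(e - baseline) > delta:
            flagged.append(i)
        else:
            baseline = 0.9 * baseline + 0.1 * e   # update baseline on normal windows
    return flagged
```

A single-source flood collapses the source-IP entropy towards zero, so the window containing it stands out against the diverse-traffic baseline; the same per-feature entropies also point back at the contributing sources, echoing the attack-source-identification step.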
APA, Harvard, Vancouver, ISO, and other styles
31

Kačic, Matej. "Analýza útoků na bezdrátové sítě." Doctoral thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2018. http://www.nusl.cz/ntk/nusl-412597.

Full text
Abstract:
This work describes the security mechanisms of wireless networks based on the 802.11 standard and the 802.11i security enhancement of these networks, known as WPA2, for which an analysis of vulnerabilities and attacks was performed. The work discusses two major security issues. The first is unsecured management frames, a vulnerability with a direct impact on availability; the other is a vulnerability that allows executing impersonation-type attacks. A system for generating attacks was designed so that any attack can be realized quickly and efficiently. The core of the thesis is the design of a system for attack analysis using the principle of trust and reputation computation. The conclusion of the work is devoted to experimenting with the proposed system, especially the selection of suitable metrics for calculating the trust value.
APA, Harvard, Vancouver, ISO, and other styles
32

Akhlaq, Monis. "Improved performance high speed network intrusion detection systems (NIDS) : a high speed NIDS architectures to address limitations of packet loss and low detection rate by adoption of dynamic cluster architecture and traffic anomaly filtration (IADF)." Thesis, University of Bradford, 2011. http://hdl.handle.net/10454/5377.

Full text
Abstract:
Intrusion Detection Systems (IDS) are considered a vital component in network security architecture. The system allows the administrator to detect unauthorized use of, or attack upon, a computer, network or telecommunication infrastructure. There is no second thought on the necessity of these systems; however, their performance remains a critical question. This research has focussed on designing a high-performance Network Intrusion Detection System (NIDS) model. The work begins with the evaluation of Snort, an open-source NIDS considered a de-facto IDS standard. The motive behind the evaluation strategy is to analyze the performance of Snort and ascertain the causes of its limited performance. The design and implementation of high-performance techniques are the final objective of this research. Snort has been evaluated on a highly sophisticated test bench by employing evasive and avoidance strategies to simulate real-life normal and attack-like traffic. The test methodology is based on the concept of stressing the system and degrading its performance in terms of its packet-handling capacity. This has been achieved by normal traffic generation; fuzzing; traffic saturation; parallel dissimilar attacks; and manipulation of background traffic, e.g. fragmentation, packet sequence disturbance and illegal packet insertion. The evaluation phase has led us to two high-performance designs: first, a distributed hardware architecture using cluster-based adoption, and second, a cascaded phenomenon of anomaly-based filtration and signature-based detection. The first high-performance mechanism is based on Dynamic Cluster adoption using refined policy routing and Comparator Logic. The design is a two-tier mechanism where the front end of the cluster is the load-balancer, which distributes traffic on pre-defined policy routing, ensuring maximum utilization of cluster resources.
The traffic load sharing mechanism reduces packet drops by exchanging state information between the load-balancer and cluster nodes and implementing switchovers between nodes in case the traffic exceeds a pre-defined threshold limit. Finally, the recovery evaluation concept using Comparator Logic also enhances overall efficiency by recovering data lost in switchovers; the retrieved data is then analyzed by the recovery NIDS to identify any leftover threats. Intelligent Anomaly Detection Filtration (IADF), using a cascaded architecture of anomaly-based filtration and a signature-based detection process, is the second high-performance design. The IADF design is used to preserve the resources of the NIDS by eliminating a large portion of the traffic on well-defined logic. In addition, the filtration concept augments the detection process by eliminating the part of malicious traffic which can otherwise go undetected by most signature-based mechanisms. We have evaluated the mechanism's ability to detect Denial of Service (DoS) and Probe attempts by analyzing its performance on the Defence Advanced Research Projects Agency (DARPA) dataset. The concept has also been supported by time-based normalized sampling mechanisms to accommodate normal traffic variations and reduce false alarms. Finally, we have observed that IADF augments the overall detection process by reducing false alarms, increasing the detection rate and incurring less data loss.
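The load-balancer's threshold switchover can be caricatured in a few lines: route each traffic burst to the least-loaded node, and refuse (handing over to the recovery path) when even that node would exceed its threshold. Node names, the threshold and the refusal semantics are invented for illustration, not taken from the thesis design.

```python
class ClusterBalancer:
    """Toy threshold-switchover balancer (a sketch, not the thesis design)."""

    def __init__(self, nodes, threshold):
        self.load = {n: 0 for n in nodes}
        self.threshold = threshold

    def route(self, pkts):
        # Pick the least-loaded node; if even it would exceed the threshold,
        # return None -- roughly where the Comparator Logic recovery path
        # described above would take over.
        node = min(self.load, key=self.load.get)
        if self.load[node] + pkts > self.threshold:
            return None
        self.load[node] += pkts
        return node
```

With two nodes and a threshold of 100, successive 60-packet bursts go to node 1, then switch over to node 2, and a third burst is refused for recovery handling.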
APA, Harvard, Vancouver, ISO, and other styles
33

Li, Zhi. "Fuzzy logic based robust control of queue management and optimal treatment of traffic over TCP/IP networks." University of Southern Queensland, Faculty of Sciences, 2005. http://eprints.usq.edu.au/archive/00001461/.

Full text
Abstract:
Improving network performance in terms of efficiency, fairness in the bandwidth, and system stability has been a research issue for decades. Current Internet traffic control maintains sophistication in end TCPs but simplicity in routers. In each router, incoming packets queue up in a buffer for transmission until the buffer is full, and then the packets are dropped. This router queue management strategy is referred to as Drop Tail. End TCPs eventually detect packet losses and slow down their sending rates to ease congestion in the network. This way, the aggregate sending rate converges to the network capacity. In the past, Drop Tail has been adopted in most routers in the Internet due to its simplicity of implementation and practicability with light traffic loads. However, under heavy traffic loads, Drop Tail causes not only a high loss rate and low network throughput, but also long packet delays and lengthy congestion conditions. To address these problems, active queue management (AQM) has been proposed, with the idea of proactively and selectively dropping packets before an output buffer is full. The essence of AQM is to drop packets in such a way that the congestion avoidance strategy of TCP works most effectively. Significant efforts in developing AQM have been made since random early detection (RED), the first prominent AQM other than Drop Tail, was introduced in 1993. Although various AQMs also tend to improve fairness in bandwidth among flows, the vulnerability of short-lived flows persists due to the conservative nature of TCP. It has been revealed that short-lived flows account for a relatively small percentage of traffic bytes but a large number of flows. From the user's point of view, there is an expectation of timely delivery of short-lived flows. Our approach is to apply artificial intelligence technologies, particularly fuzzy logic (FL), to address these two issues: an effective AQM scheme, and preferential treatment for short-lived flows.
Inspired by the success of FL in the robust control of nonlinear complex systems, our hypothesis is that the Internet is one of the most complex systems and FL can be applied to it. First of all, state-of-the-art AQM schemes outperform Drop Tail, but their performance is not consistent under different network scenarios. Research reveals that this inconsistency is due to the selection of congestion indicators. Most existing AQM schemes rely on queue length, input rate, and extreme events occurring in the routers, such as a full queue and an empty queue. This drawback might be overcome by introducing an indicator which takes account of not only input traffic but also queue occupancy for early congestion notification. The congestion indicator chosen in this research is the traffic load factor. The traffic load factor is dimensionless and thus independent of link capacity, and it is also easy to use in more complex networks where different traffic classes coexist. The traffic load indicator is a descriptive measure of the complex communication network, and is well suited for use in FL control theory. Based on the traffic load indicator, AQM using FL, or FLAQM, is explored and two FLAQM algorithms are proposed. Secondly, a mice and elephants (ME) strategy is proposed for addressing the problem of the vulnerability of short-lived flows. The idea behind ME is to treat short-lived flows preferentially over bulk flows. ME's operational location is chosen at user-premises gateways, where surplus processing resources are available compared to other places. By giving absolute priority to short-lived flows, both short and long-lived flows can benefit. One problem with ME is the starvation of elephants, or long-lived flows. This issue is addressed by dynamically adjusting the threshold distinguishing between mice and elephants, with the guarantee that a minimum capacity is maintained for elephants. The method used to dynamically adjust the threshold is to apply FL.
FLAQM is deployed to control the elephant queue with consideration of capacity usage of mice packets. In addition, flow states in a ME router are periodically updated to maintain the data storage. The application of the traffic load factor for early congestion notification and the ME strategy have been evaluated via extensive experimental simulations with a range of traffic load conditions. The results show that the proposed two FLAQM algorithms outperform some well-known AQM schemes in all the investigated network circumstances in terms of both user-centric measures and network-centric measures. The ME strategy, with the use of FLAQM to control long-lived flow queues, improves not only the performance of short-lived flows but also the overall performance of the network without disadvantaging long-lived flows.
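A minimal flavour of FL-based AQM driven by the traffic load factor: three triangular membership functions and a weighted-average defuzzification map the load factor to a drop probability. The rule base, breakpoints and output levels are invented for illustration; the thesis's FLAQM controllers are considerably richer.

```python
def tri(x, a, b, c):
    # Triangular membership function rising over [a, b] and falling over [b, c]
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_drop_probability(load):
    # load: traffic load factor (roughly, arrival rate / service capacity).
    # Illustrative rule base (not the thesis's actual rules):
    #   load LOW    -> drop probability 0.0
    #   load MEDIUM -> drop probability 0.1
    #   load HIGH   -> drop probability 0.5
    rules = [
        (tri(load, -0.5, 0.0, 0.9), 0.0),   # LOW
        (tri(load, 0.7, 1.0, 1.3), 0.1),    # MEDIUM
        (tri(load, 1.1, 2.0, 10.0), 0.5),   # HIGH
    ]
    num = sum(w * p for w, p in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

Because the input is the dimensionless load factor, the same controller shape applies regardless of link capacity, which mirrors the argument made in the abstract for choosing that indicator.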
APA, Harvard, Vancouver, ISO, and other styles
34

Hoelscher, Igor Gustavo. "Detecção e classificação de sinalização vertical de trânsito em cenários complexos." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2017. http://hdl.handle.net/10183/163777.

Full text
Abstract:
Mobility is an imprint of our civilization. Both freight and passenger transport share a huge infrastructure of connecting links operated with the support of a sophisticated logistic system. As an optimized symbiosis of mechanical and electrical modules, vehicles are evolving continuously with the integration of technological advances and are engineered to offer the best in comfort, safety, speed and economy. Regulations organize the flow of road transportation machines and help their interactions, stipulating rules to avoid conflicts. But driving can become stressful under different conditions, leaving human drivers prone to misjudgments and creating accident conditions. Efforts to reduce traffic accidents that may cause injuries and even deaths range from re-education campaigns to new technologies. These topics have increasingly attracted the attention of researchers and industries to image-based Intelligent Transportation Systems. This work presents a study on techniques for detecting and classifying traffic signs in images of complex traffic scenarios. The system for automatic visual recognition of signs is intended to be used as an aid for a human driver or as input to an autonomous vehicle. Based on the regulations for road signs, two approaches for image segmentation and selection of regions of interest were tested. The first, color thresholding combined with Fourier Descriptors, did not perform satisfactorily. However, using its principles, a new color-filtering method based on Fuzzy Logic was developed which, combined with an algorithm that selects regions stable across different gray levels (MSER), gained robustness to partial occlusion and varying lighting conditions. For classification, two short Convolutional Neural Networks are presented to recognize both Brazilian and German traffic signs.
The proposal is to skip complex calculations or handmade features for filtering false positives prior to recognition, performing confirmation (the detection step) and classification simultaneously. State-of-the-art methods for training and optimization improved the machine learning efficiency. In addition, this work provides a new dataset of traffic scenarios from different regions of Brazil, containing 2,112 images in WSXGA+ resolution. Qualitative analyses are shown on the Brazilian dataset, and a quantitative analysis on the German dataset produced results competitive with other methods: 94% accuracy in extraction and 99% accuracy in classification.
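The fuzzy color-filtering idea can be illustrated with a small sketch: instead of a hard color threshold, each pixel receives a fuzzy membership degree for "traffic-sign red". The hue and saturation breakpoints below are invented for illustration, not the thesis's tuned values.

```python
def red_membership(h, s, v):
    """Fuzzy degree to which an HSV pixel looks like traffic-sign red.

    Hue is in degrees [0, 360); red wraps around 0, so the angular
    distance to 0 is taken on the circle. Saturation and value ramps
    gate out washed-out or dark pixels.
    """
    hue_dist = min(h, 360.0 - h)                   # angular distance to pure red
    mu_hue = max(0.0, 1.0 - hue_dist / 30.0)       # full at 0 deg, zero beyond 30 deg
    mu_sat = min(1.0, max(0.0, (s - 0.2) / 0.3))   # ramp up between 0.2 and 0.5
    mu_val = min(1.0, max(0.0, (v - 0.2) / 0.3))
    return min(mu_hue, mu_sat, mu_val)             # fuzzy AND as the minimum

def red_mask(pixels, cut=0.5):
    """Keep indices of pixels whose red membership exceeds the cut level."""
    return [i for i, (h, s, v) in enumerate(pixels) if red_membership(h, s, v) > cut]

# Pure red and near-red pass the filter; green does not.
mask = red_mask([(0, 1.0, 1.0), (120, 1.0, 1.0), (350, 1.0, 1.0)])
```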
APA, Harvard, Vancouver, ISO, and other styles
35

Gustavsson, Vilhelm. "Machine Learning for a Network-based Intrusion Detection System : An application using Zeek and the CICIDS2017 dataset." Thesis, KTH, Hälsoinformatik och logistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-253273.

Full text
Abstract:
Cyber security is an emerging field in the IT sector. As more devices are connected to the internet, the attack surface for hackers is steadily increasing. Network-based Intrusion Detection Systems (NIDS) can be used to detect malicious traffic in networks, and machine learning is an increasingly popular approach for improving the detection rate. In this thesis the NIDS Zeek is used to extract features based on time and data size from network traffic. The features are then analyzed with machine learning in Scikit-Learn in order to detect malicious traffic. A 98.58% Bayesian detection rate was achieved on the CICIDS2017 dataset, which is about the same level as the results of previous work on CICIDS2017 (without Zeek). The best performing algorithms were K-Nearest Neighbors, Random Forest and Decision Tree.
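The flow-classification step can be illustrated with a minimal k-nearest-neighbours sketch over hypothetical (duration, kilobytes) flow features. The real work uses Zeek-extracted features and Scikit-Learn; this hand-rolled classifier and its toy flows only show the principle.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify a flow feature vector by majority vote among its k nearest
    training flows (Euclidean distance). `train` is [(features, label), ...]."""
    nearest = sorted(train, key=lambda fl: math.dist(fl[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy flows: (duration in s, kilobytes transferred) -> label.
flows = [
    ((0.2, 1.0), "benign"), ((0.3, 2.0), "benign"), ((0.25, 1.5), "benign"),
    ((30.0, 0.1), "malicious"), ((25.0, 0.2), "malicious"), ((28.0, 0.1), "malicious"),
]
label = knn_predict(flows, (27.0, 0.15))   # long, low-volume flow
```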
APA, Harvard, Vancouver, ISO, and other styles
36

Swaro, James E. "A Heuristic-Based Approach to Real-Time TCP State and Retransmission Analysis." Ohio University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1448030769.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Barabas, Maroš. "Bezpečnostní analýza síťového provozu pomocí behaviorálních signatur." Doctoral thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2016. http://www.nusl.cz/ntk/nusl-412570.

Full text
Abstract:
This thesis describes the current state of research in the detection of network attacks and subsequently improves the detection of specific attacks by establishing a formal definition of network metrics. These metrics approximate the progress of a network connection and create a signature based on the behavioral characteristics of the analyzed connection. The aim of this work is neither the prevention of ongoing attacks nor the response to them. The emphasis is on analyzing connections to maximize the information obtained, and on defining the basis of a detection system that can minimize the amount of data collected from the network while retaining the most important information for subsequent analysis. The main goal of this work is to create the concept of a detection system that uses the defined metrics to reduce network traffic to signatures, with an emphasis on the behavioral aspects of the communication. Another goal is to increase the autonomy of the detection system by developing expert knowledge from a honeypot system, under the condition of independence from the technological aspects of the analyzed data (e.g. encryption, protocols used, technology and environment). Using the honeypot system's expert knowledge as a teacher for classification algorithms makes the system autonomous in detecting unknown attacks. This concept also enables independent learning (with no human intervention) based on the knowledge collected from attacks on these systems. The thesis describes the process of creating a laboratory environment and experiments with the defined network connection signature using collected data and a downloaded test database. The results are compared with state-of-the-art network detection systems, and the benefits of the proposed approximation methods are highlighted.
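In simplified form, a behavioral connection signature of the kind described is a vector of summary metrics over a connection's packets. The metrics below are generic illustrations, not the thesis's formally defined metric set.

```python
import statistics

def connection_signature(packets):
    """Summarize a connection as a few behavioral metrics.

    `packets` is a list of (timestamp, size_bytes, direction) tuples,
    with direction +1 (client->server) or -1 (server->client).
    """
    times = [t for t, _, _ in packets]
    sizes = [s for _, s, _ in packets]
    gaps = [b - a for a, b in zip(times, times[1:])] or [0.0]
    out = sum(s for _, s, d in packets if d > 0)
    inn = sum(s for _, s, d in packets if d < 0)
    return {
        "duration": times[-1] - times[0],
        "mean_size": statistics.mean(sizes),
        "stdev_size": statistics.pstdev(sizes),
        "mean_gap": statistics.mean(gaps),
        "byte_ratio": out / max(inn, 1),   # upload/download asymmetry
    }

sig = connection_signature([(0.0, 100, 1), (0.1, 1400, -1), (0.3, 1400, -1), (0.4, 80, 1)])
```

Such a vector can then be fed to any classifier, independently of whether the payload itself is encrypted.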
APA, Harvard, Vancouver, ISO, and other styles
38

Číp, Pavel. "Detekce a rozpoznávání dopravních značek." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2009. http://www.nusl.cz/ntk/nusl-217772.

Full text
Abstract:
The thesis deals with traffic sign detection and recognition in urban environments and outside towns. A precondition for the system is a built-in camera, usually mounted on the car's rear-view mirror, which scans the scene in front of the vehicle. The image data are transferred to a connected PC, where they are processed and evaluated. If a sign is detected, the system visually warns the driver. The solution is divided into four separate blocks. The first is image data preparation: color segmentation based on the known color combinations of traffic signs in the Czech Republic. The second block deals with shape detection in the segmented image. The third deals with recognizing the inner pictogram and finding it in an image database. The final block is the visual output displaying the detected traffic signs. The thesis is designed to detect all relevant traffic signs in three basic color combinations, according to the applicable Decree of the Ministry of Transport of the Czech Republic. The result is source code for MATLAB.
APA, Harvard, Vancouver, ISO, and other styles
39

Mazel, Johan. "Unsupervised network anomaly detection." Thesis, Toulouse, INSA, 2011. http://www.theses.fr/2011ISAT0024/document.

Full text
Abstract:
Anomaly detection has become a vital component of any network in today's Internet. Ranging from non-malicious unexpected events such as flash crowds and failures, to network attacks such as denials of service and network scans, network traffic anomalies can have serious detrimental effects on the performance and integrity of the network. The continual emergence of new anomalies and attacks creates an ongoing challenge in coping with events that put network integrity at risk. Moreover, the inherently polymorphic nature of traffic, caused among other things by a rapidly changing protocol landscape, complicates the task of anomaly detection systems. In fact, most network anomaly detection systems proposed so far employ knowledge-dependent techniques, using either misuse (signature-based) detection methods or anomaly detection relying on supervised-learning techniques. However, both approaches present major limitations: the former fails to detect and characterize unknown anomalies (leaving the network unprotected for long periods), while the latter requires training over labeled normal traffic, a difficult and expensive stage that needs to be updated regularly to follow the evolution of network traffic. Such limitations impose a serious bottleneck on the problem presented above. We introduce an unsupervised approach to detect and characterize network anomalies, without relying on signatures, statistical training, or labeled traffic, which represents a significant step towards the autonomy of networks. Unsupervised detection is accomplished by means of robust data-clustering techniques, combining sub-space clustering with Evidence Accumulation or Inter-Clustering Results Association, to blindly identify anomalies in traffic flows. The results of several unsupervised detections are also correlated to improve detection robustness.
The correlation results are further used, along with other anomaly characteristics, to build an anomaly hierarchy in terms of dangerousness. Characterization is then achieved by building efficient filtering rules to describe a detected anomaly. The detection and characterization performance, and its sensitivity to parameters, is evaluated over a substantial subset of the MAWI repository, which contains real network traffic traces. Our work shows that unsupervised learning techniques allow anomaly detection systems to isolate anomalous traffic without any previous knowledge. We think that this contribution constitutes a great step towards autonomous network anomaly detection. This PhD thesis has been funded through the ECODE project by the European Commission under Framework Programme 7. The goal of this project is to develop, implement, and experimentally validate a cognitive routing system that meets the challenges experienced by the Internet in terms of manageability and security, availability and accountability, as well as routing system scalability and quality. The use case addressed within the ECODE project is network anomaly detection.
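The core unsupervised idea, flagging flows that sit far from every dense cluster without any labels, can be sketched with a simple k-distance outlier score. This is a drastic simplification of the sub-space clustering and evidence accumulation used in the thesis; the points and parameter below are invented.

```python
import math

def knn_outlier_scores(points, k=2):
    """Score each point by the distance to its k-th nearest neighbour.
    Isolated points (anomalies) get large scores, with no labels needed."""
    scores = []
    for i, p in enumerate(points):
        dists = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        scores.append(dists[k - 1])
    return scores

# A tight cluster of normal flows plus one far-away anomalous flow.
pts = [(0, 0), (0.1, 0), (0, 0.1), (0.1, 0.1), (5, 5)]
scores = knn_outlier_scores(pts)
anomaly = max(range(len(pts)), key=scores.__getitem__)
```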
APA, Harvard, Vancouver, ISO, and other styles
40

Šišmiš, Lukáš. "Optimalizace IDS/IPS systému Suricata." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2021. http://www.nusl.cz/ntk/nusl-445503.

Full text
Abstract:
In today's world of ever-faster network traffic, it is necessary to keep pace with its monitoring. Sufficient insight into what is happening in a network can prevent various attacks on the targets within it. IDS systems help with this by reporting events found in the analyzed traffic. Suricata was chosen for this work. The goal of the thesis is to tune the settings of Suricata with the AF_PACKET interface for optimal performance and then to design and implement an optimization of Suricata. The AF_PACKET measurement results serve as a baseline for comparison with the proposed improvement. The proposed optimization implements a new interface based on the Data Plane Development Kit (DPDK) project. DPDK can accelerate packet reception and is therefore expected to increase Suricata's performance. An evaluation of the results and a comparison of the AF_PACKET and DPDK interfaces can be found at the end of the thesis.
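For context, the capture interface in Suricata is selected in its configuration. Illustrative suricata.yaml fragments for the AF_PACKET interface and the DPDK interface might look roughly as follows (key names as in Suricata 7; consult the current documentation before use, as the DPDK schema may differ between versions):

```yaml
# AF_PACKET capture on a kernel interface
af-packet:
  - interface: eth0
    threads: 4
    cluster-id: 99
    cluster-type: cluster_flow
    ring-size: 200000

# DPDK capture on a PCI device bound to a DPDK-compatible driver
dpdk:
  eal-params:
    proc-type: primary
  interfaces:
    - interface: 0000:3b:00.0     # PCI address, not a kernel name
      threads: 4
      promisc: true
      mempool-size: 65535
      rx-descriptors: 1024
      tx-descriptors: 1024
```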
APA, Harvard, Vancouver, ISO, and other styles
41

Korczynski, Maciej. "Classification de flux applicatifs et détection d'intrusion dans le trafic Internet." Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00858571.

Full text
Abstract:
The subject of network traffic classification is of great importance for efficient network planning, rule-based traffic management, application priority management, and security control. Although it has received considerable attention in the research community, this topic still leaves many open questions, such as methods for classifying encrypted traffic flows. This thesis consists of four parts. The first presents some theoretical aspects related to traffic classification and intrusion detection. The following three parts address specific classification problems and propose precise solutions. In the second part, we propose an accurate sampling method for detecting SYN flooding and portscan attacks. The system examines TCP segments to find at least one of the multiple ACK segments coming from the server. The method is simple and scalable, as it achieves good detection with a false-positive rate close to zero, even at very low sampling rates. Our trace-based simulations show that the effectiveness of the proposed system depends only on the sampling rate, regardless of the sampling method. In the third part, we consider the problem of detecting and classifying Skype traffic and its service flows, such as voice calls, SkypeOut, video conferences, instant messages, and file downloads. We propose a classification method for encrypted Skype traffic based on the Statistical Protocol IDentification (SPID) technique, which analyzes statistical values of selected network traffic attributes. We evaluated our method on a dataset, showing excellent performance in terms of precision and recall.
The last part defines a framework based on two complementary methods for classifying application flows encrypted with TLS/SSL. The first models TLS/SSL session states as a first-order homogeneous Markov chain. The parameters of the Markov model differ significantly across the considered applications, which is the basis for discriminating between them. The second classification method estimates the deviation between the timestamp in the TLS/SSL Server Hello message and the packet arrival time. It improves the accuracy of application classification and allows efficient identification of Skype flows. We combine the two methods using a Naive Bayes Classifier (NBC). We validate the proposal with experiments on three recent datasets, applying our methods to the classification of seven popular applications that use TLS/SSL for security. The results show very good performance.
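The first TLS/SSL method, modeling session-state sequences as a first-order homogeneous Markov chain and classifying by likelihood, can be sketched as follows. The message types and training sequences are invented; real models would be trained on observed TLS message types per application.

```python
import math
from collections import defaultdict

def train_markov(sequences, smooth=1e-6):
    """Estimate first-order transition probabilities from state sequences."""
    counts = defaultdict(lambda: defaultdict(float))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    model = {a: {b: c / sum(row.values()) for b, c in row.items()}
             for a, row in counts.items()}
    return model, smooth

def log_likelihood(model_pair, seq):
    """Score a sequence under a model; unseen transitions fall back to `smooth`."""
    model, smooth = model_pair
    return sum(math.log(model.get(a, {}).get(b, smooth)) for a, b in zip(seq, seq[1:]))

# Hypothetical TLS message-type sequences for two applications.
app_a = train_markov([["hello", "cert", "data", "data", "close"]] * 5)
app_b = train_markov([["hello", "cert", "close"]] * 5)
observed = ["hello", "cert", "data", "close"]
guess = "A" if log_likelihood(app_a, observed) > log_likelihood(app_b, observed) else "B"
```

A flow is assigned to whichever application's Markov model gives its state sequence the highest likelihood.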
APA, Harvard, Vancouver, ISO, and other styles
42

Sedlo, Ondřej. "Vylepšení Adversariální Klasifikace v Behaviorální Analýze Síťové Komunikace Určené pro Detekci Cílených Útoků." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2020. http://www.nusl.cz/ntk/nusl-417204.

Full text
Abstract:
This thesis deals with improving network intrusion detection systems. Specifically, we focus on behavioral analysis that uses data extracted from individual network connections. The described framework uses this information to obfuscate targeted network attacks that exploit vulnerabilities in a set of contemporary vulnerable services. We select vulnerable services from NIST's National Vulnerability Database, restricting ourselves to the years 2018 and 2019. As a result, we create a new dataset consisting of direct and obfuscated attacks executed against the selected vulnerable services, together with their counterparts in the form of legitimate traffic. We evaluate the new dataset using several classification techniques and demonstrate how important it is to train these classifiers on obfuscated attacks to prevent such attacks from passing unnoticed. Finally, we perform a cross-dataset evaluation using the state-of-the-art ASNM-NPBO dataset and our dataset. The results show the importance of retraining classifiers on new vulnerabilities while retaining a good ability to detect attacks on old vulnerabilities.
APA, Harvard, Vancouver, ISO, and other styles
43

Hošták, Viliam Samuel. "Učení se automatů pro rychlou detekci anomálií v síťovém provozu." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2021. http://www.nusl.cz/ntk/nusl-449296.

Full text
Abstract:
The focus of this thesis is fast network anomaly detection based on automata learning. It describes and compares several chosen automata learning algorithms, including their adaptation to learning network characteristics. Various network anomaly detection methods based on learned automata are proposed that can detect sequential as well as statistical anomalies in the target communication. For this purpose, they utilize automata mechanisms, their transformations, and statistical analysis. The proposed detection methods were implemented and evaluated on network traffic of the IEC 60870-5-104 protocol, which is commonly used in industrial control systems.
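A much-simplified version of the idea, learning which transitions occur in normal traffic and flagging traces that use unseen ones, can be sketched with a 1-history automaton. The IEC 60870-5-104 message names below are placeholders, and the thesis's actual automata-learning algorithms are considerably more sophisticated.

```python
def learn_transitions(traces):
    """Collect the set of (state, symbol) transitions seen in normal traffic.
    The state here is simply the previous symbol (a 1-history automaton)."""
    allowed = set()
    for trace in traces:
        prev = "START"
        for sym in trace:
            allowed.add((prev, sym))
            prev = sym
    return allowed

def is_anomalous(allowed, trace):
    """A trace is anomalous if it uses a transition never seen in training."""
    prev = "START"
    for sym in trace:
        if (prev, sym) not in allowed:
            return True
        prev = sym
    return False

# Hypothetical command/response sequences from normal traffic.
normal = [["STARTDT", "ASDU", "ASDU", "STOPDT"], ["STARTDT", "ASDU", "STOPDT"]]
model = learn_transitions(normal)
```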
APA, Harvard, Vancouver, ISO, and other styles
44

Vopálenský, Radek. "Detekce, sledování a klasifikace automobilů." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2017. http://www.nusl.cz/ntk/nusl-413327.

Full text
Abstract:
The aim of this master thesis is to design and implement, in C++, a system for the detection, tracking and classification of vehicles from streams or recordings from traffic cameras. The system runs on the Robot Operating System platform and uses the OpenCV, FFmpeg, TensorFlow and Keras libraries. A cascade classifier is used for detection, a Kalman filter for tracking, and a convolutional neural network for classification. The success rate is 91.93% for detection, 81.94% for tracking and 63.72% for classification. This system is part of a comprehensive system that can also calibrate the video and measure vehicle speed. The resulting system can be used for traffic analysis.
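The tracking step rests on a Kalman filter. A minimal one-dimensional constant-velocity version can be sketched as follows; the thesis tracks 2-D image coordinates, and all tuning constants here are invented.

```python
def kalman_track(measurements, dt=1.0, q=1e-3, r=1.0):
    """Minimal 1-D constant-velocity Kalman filter: keeps a smooth position
    estimate for a vehicle between noisy per-frame detections."""
    x, v = measurements[0], 0.0               # state: position and velocity
    p00, p01, p10, p11 = 1.0, 0.0, 0.0, 1.0   # state covariance
    estimates = [x]
    for z in measurements[1:]:
        # predict with the constant-velocity motion model
        x = x + dt * v
        n00 = p00 + dt * (p10 + p01) + dt * dt * p11 + q
        n01 = p01 + dt * p11
        n10 = p10 + dt * p11
        n11 = p11 + q
        # update: blend the prediction with the new detection z
        gain0 = n00 / (n00 + r)
        gain1 = n10 / (n00 + r)
        innovation = z - x
        x += gain0 * innovation
        v += gain1 * innovation
        p00 = (1 - gain0) * n00
        p01 = (1 - gain0) * n01
        p10 = n10 - gain1 * n00
        p11 = n11 - gain1 * n01
        estimates.append(x)
    return estimates

# A vehicle moving roughly 10 px per frame, with noisy detections.
track = kalman_track([0.0, 10.5, 19.6, 30.2, 40.1, 49.8])
```

Between detections, the predicted position also gives a place to look for the vehicle in the next frame, which is how the filter keeps an identity across frames.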
APA, Harvard, Vancouver, ISO, and other styles
45

Štourač, Jan. "Rozpoznávaní aplikací v síťovém provozu." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2014. http://www.nusl.cz/ntk/nusl-413325.

Full text
Abstract:
This thesis introduces the reader to various methods that are currently used for the detection of network-based applications. A further part deals with the selection of an appropriate detection method and the implementation of a proof-of-concept script, including testing its reliability and accuracy. The chosen detection algorithm is based on statistical data from the network flows of the tested communication, so the final solution does not depend on whether the communication is encrypted. The next part presents several possible ways to integrate the proposed solution into the current architecture of the existing product Kernun UTM, a firewall produced by Trusted Network Solutions a.s. The most suitable variant is chosen and described in more detail. Finally, a plan for further development and possible improvements of the final solution is outlined.
APA, Harvard, Vancouver, ISO, and other styles
46

Vopálenský, Radek. "Detekce, sledování a klasifikace automobilů." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2018. http://www.nusl.cz/ntk/nusl-385899.

Full text
Abstract:
The aim of this master thesis is to design and implement, in C++, a system for the detection, tracking and classification of vehicles from streams or recordings from traffic cameras. The system runs on the Robot Operating System platform and uses the OpenCV, FFmpeg, TensorFlow and Keras libraries. A cascade classifier is used for detection, a Kalman filter for tracking, and a convolutional neural network for classification. Out of a total of 627 cars, 479 were tracked correctly, and of these, 458 were classified (trucks and lorries not included). The resulting system can be used for traffic analysis.
APA, Harvard, Vancouver, ISO, and other styles
47

Alkadi, Alaa. "Anomaly Detection in RFID Networks." UNF Digital Commons, 2017. https://digitalcommons.unf.edu/etd/768.

Full text
Abstract:
Available security standards for RFID networks (e.g. ISO/IEC 29167) are designed to secure individual tag-reader sessions and do not protect against active attacks that could also compromise the system as a whole (e.g. tag cloning or replay attacks). Proper traffic characterization models of the communication within an RFID network can lead to better understanding of operation under “normal” system state conditions and can consequently help identify security breaches not addressed by current standards. This study of RFID traffic characterization considers two piecewise-constant data smoothing techniques, namely Bayesian blocks and Knuth’s algorithms, over time-tagged events and compares them in the context of rate-based anomaly detection. This was accomplished using data from experimental RFID readings and comparing (1) the event counts versus time if using the smoothed curves versus empirical histograms of the raw data and (2) the threshold-dependent alert-rates based on inter-arrival times obtained if using the smoothed curves versus that of the raw data itself. Results indicate that both algorithms adequately model RFID traffic in which inter-event time statistics are stationary but that Bayesian blocks become superior for traffic in which such statistics experience abrupt changes.
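The rate-based detection idea, alerting when inter-arrival times become unusually short, can be sketched without the Bayesian-blocks smoothing itself. The threshold below is invented; in the study it would be derived from the smoothed model of normal traffic.

```python
def alert_rate(timestamps, threshold):
    """Fraction of inter-arrival gaps shorter than `threshold` seconds.
    Unusually many short gaps suggest bursts such as replayed tag reads."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if not gaps:
        return 0.0
    return sum(g < threshold for g in gaps) / len(gaps)

normal = [0.0, 1.0, 2.1, 3.0, 4.2]    # roughly one read per second
burst = [0.0, 0.05, 0.1, 0.15, 1.2]   # replay-like burst of reads
```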
APA, Harvard, Vancouver, ISO, and other styles
48

Anbaroglu, B. "Spatio-temporal clustering for non-recurrent traffic congestion detection on urban road networks." Thesis, University College London (University of London), 2013. http://discovery.ucl.ac.uk/1408826/.

Full text
Abstract:
Non-Recurrent Congestion events (NRCs) frustrate commuters, companies and traffic operators because they cause unexpected delays. Most existing studies consider NRCs to be an outcome of incidents on motorways. The differences between motorways and urban road networks, and the fact that incidents are not the only cause of NRCs, limit the usefulness of existing automatic incident detection methods for identifying NRCs on an urban road network. This thesis contributes to the literature by developing an NRC detection methodology to support the accurate detection of NRCs on large urban road networks. To achieve this, substantially high Link Journey Time estimates (LJTs) on adjacent links that occur at the same time are clustered. Substantially high LJTs are defined in two different ways: (i) those LJTs that are greater than a threshold, (ii) those LJTs that belong to a statistically significant Space-Time Region (STR). These two different ways of defining the term ‘substantially high LJT’ lead to different NRC detection methods. To evaluate these methods, two novel criteria are proposed. The first criterion, high-confidence episodes, assesses to what extent substantially high LJTs that last for a minimum duration are detected. The second criterion, the Localisation Index, assesses to what extent detected NRCs could be related to incidents. The proposed NRC detection methodology is tested for London’s urban road network, which consists of 424 links. Different levels of travel demand are analysed in order to establish a complete understanding of the developed methodology. Optimum parameter settings of the two proposed NRC detection methods are determined by sensitivity analysis. Related to the first method, LJTs that are at least 40% higher than their expected values are found to maintain the best balance between the proposed evaluation criteria for detecting NRCs. 
Related to the second method, it is found that constructing STRs by considering temporal adjacencies rather than spatial adjacencies improves the performance of the method. These findings are applied in real-life situations to demonstrate the advantages and limitations of the proposed NRC detection methods. Traffic operation centres could readily start using the proposed NRC detection methodology. In this way, traffic operators would be able to quantify the impact of incidents and develop effective NRC reduction strategies.
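The clustering step, grouping simultaneously congested adjacent links, can be sketched as a flood fill over the road-network adjacency. The link names, adjacency map, and congestion criterion below are invented for illustration.

```python
from collections import deque

def cluster_congested(congested, adjacency):
    """Group simultaneously congested links into connected components
    over the road-network adjacency (a BFS flood fill)."""
    congested = set(congested)
    clusters, seen = [], set()
    for start in congested:
        if start in seen:
            continue
        component, queue = [], deque([start])
        seen.add(start)
        while queue:
            link = queue.popleft()
            component.append(link)
            for nxt in adjacency.get(link, ()):
                if nxt in congested and nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        clusters.append(sorted(component))
    return sorted(clusters)

adj = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"], "E": ["F"], "F": ["E"]}
clusters = cluster_congested(["A", "B", "E"], adj)   # links with LJTs 40% over expectation
```

Each resulting component is one candidate NRC; its spatial extent and duration can then be measured against the evaluation criteria.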
APA, Harvard, Vancouver, ISO, and other styles
49

Teknős, Martin. "Rozšíření behaviorální analýzy síťové komunikace určené pro detekci útoků." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2015. http://www.nusl.cz/ntk/nusl-234931.

Full text
Abstract:
This thesis is focused on network behavior analysis (NBA) designed to detect network attacks. Its goal is to increase the detection accuracy of obfuscated network attacks. Methods and techniques used for network attack detection and network traffic classification are presented first. Intrusion detection systems (IDS) are then described in terms of their functionality and the possible attacks on them. The work also describes the principles of selected attacks against IDS, and suggests obfuscation methods that can be used to evade NBA. A tool for automatic exploitation, attack obfuscation, and collection of the resulting network communication was designed and implemented, and was used to execute network attacks. A dataset for experiments was obtained from the collected network communication. Finally, the achieved results emphasize the need to train NBA models on obfuscated malicious network traffic.
APA, Harvard, Vancouver, ISO, and other styles
50

Sikora, Marek. "Detekce slow-rate DDoS útoků." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2017. http://www.nusl.cz/ntk/nusl-317019.

Full text
Abstract:
This diploma thesis is focused on the detection of, and protection against, Slow DoS and DDoS attacks using computer network traffic analysis. The reader is introduced to the basic issues of this specific category of sophisticated attacks, and the characteristics of several specific attacks are clarified. A set of methods for detecting and protecting against these attacks is also presented. The proposed methods are used to implement a custom intrusion prevention system that is deployed on the border filtering server of a computer network in order to protect web servers against attacks from the Internet. The created system is then tested in a laboratory network. The test results show that the system is able to detect Slow GET, Slow POST, Slow Read and Apache Range Header attacks and protect web servers from having their services affected.
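A minimal sketch of one slow-rate detection heuristic, flagging long-lived connections whose average throughput stays below a floor, is shown below. The thresholds and connection representation are invented, not the thesis's implemented rules.

```python
def slow_dos_suspects(connections, min_rate=100.0, min_age=30.0):
    """Flag connections older than `min_age` seconds whose average
    throughput is below `min_rate` bytes/s, the hallmark of slow-rate
    attacks such as Slow GET/POST/Read that trickle data to hold sockets."""
    suspects = []
    for conn_id, age_s, bytes_seen in connections:
        if age_s >= min_age and bytes_seen / age_s < min_rate:
            suspects.append(conn_id)
    return suspects

conns = [
    ("c1", 120.0, 600),      # 5 B/s for two minutes: slow-rate suspect
    ("c2", 45.0, 900_000),   # a normal download
    ("c3", 5.0, 40),         # too young to judge
]
flagged = slow_dos_suspects(conns)
```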
APA, Harvard, Vancouver, ISO, and other styles