
Dissertations / Theses on the topic 'Analysis and filtering of network traffic'

Consult the top 50 dissertations / theses for your research on the topic 'Analysis and filtering of network traffic.'


1

Klečka, Jan. "Monitorovací sonda síťové komunikace." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2021. http://www.nusl.cz/ntk/nusl-442398.

Full text
Abstract:
This master's thesis deals with the analysis of single-board PCs that run Linux as their operating system. Individual NIDS systems were analysed and their properties examined in order to choose the right candidate for a single-board computer to be used as a network probe for the analysis, filtering, and logging of network traffic. Part of the work is aimed at the development of an interface used to configure the network probe through a web browser. The web interface allows basic operations to be performed on the network probe that influence network traffic or specify which information should be logged. Subsequently, network parsers for selected network protocols were implemented using the Scapy library. The thesis concludes with the design of a protective enclosure for the device that meets IP54 requirements.
2

Liu, Wei 1975. "Network traffic modelling and analysis." Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=82613.

Full text
Abstract:
In all-photonic networks, both transmission and switching are performed in the optical domain, without optoelectronic conversion of the data traversing the network. An accurate traffic model is critical in an agile all-photonic network (AAPN), which has the ability to dynamically allocate bandwidth to traffic flows as the demand varies.
This thesis focuses on traffic modelling and analysis. A novel traffic model is proposed which can capture traffic behaviours in all-photonic networks. The new model is based on a study of the existing traffic modelling literature; it combines a time-varying Poisson model, a gravity model, and fractional Gaussian noise, and can be used for short-range traffic prediction. We examine long-range dependence and test the time constancy of scaling parameters using the tools designed by Abry and Veitch to analyze empirical and synthesized traffic traces.
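As a small illustration of one ingredient of such a model, the sketch below draws per-second packet counts from a Poisson process with a time-varying (diurnal) rate. The rate profile and all parameters are invented for illustration; the gravity-model and fractional-Gaussian-noise components of the thesis's model are not shown.

```python
import numpy as np

# Minimal sketch: per-second packet counts from a time-varying Poisson process
# with a 24 h (diurnal) rate profile. Illustrative parameters only.

rng = np.random.default_rng(42)
seconds_per_day = 24 * 3600
t = np.arange(seconds_per_day)

base_rate = 200.0                                              # mean packets per second
diurnal = 1.0 + 0.6 * np.sin(2 * np.pi * t / seconds_per_day)  # daily cycle
rate = base_rate * diurnal                                     # lambda(t), packets/s

counts = rng.poisson(rate)                                     # one draw per one-second bin

print("mean pkts/s:", counts.mean())
print("peak-to-mean ratio:", counts.max() / counts.mean())
```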
3

Simhairi, Nather Zeki. "Traffic assignment and network analysis." Thesis, Royal Holloway, University of London, 1987. http://repository.royalholloway.ac.uk/items/a3377f99-4ed8-4000-91f8-0384aed4a3c6/1/.

Full text
Abstract:
This thesis studies the transportation network, and is divided into three sections. Initially an algorithm is described which finds the user-equilibrium assignment for networks with linear congestion functions where the cost of travel on a link is dependent on the flow in the whole network. Secondly it investigates the sensitivity of the cost of travel and of the flow distribution in the network, to changes in the link congestion function. Combinatorial methods are used for evaluating the results of the sensitivity analysis. This is done with the aim of obtaining fast and efficient algorithms for the evaluation of cost sensitive and paradoxical links. Finally, for networks where the demand is elastic, it describes the catastrophic behaviour of the point representing the user-equilibrium flow distribution under certain cost conditions.
4

Liu, Jian. "Fractal Network Traffic Analysis with Applications." Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/11477.

Full text
Abstract:
Today, the Internet is growing exponentially, with traffic statistics that mathematically exhibit fractal characteristics: self-similarity and long-range dependence. With these properties, data traffic shows high peak-to-average bandwidth ratios and makes networks inefficient. These problems make it difficult to predict, quantify, and control data traffic. In this thesis, two analytical methods are used to study fractal network traffic: second-order self-similarity analysis and multifractal analysis. First, self-similarity is an adaptive property of traffic in networks, and many factors are involved in creating this characteristic. A new view of this self-similar traffic structure related to multi-layer network protocols is provided. This view is an improvement over the theory used in most current literature. Second, the scaling region for traffic self-similarity is divided into two timescale regimes: short-range dependence (SRD) and long-range dependence (LRD). Experimental results show that the network transmission delay separates the two scaling regions. This gives us a physical source of the periodicity in the observed traffic. Also, bandwidth, TCP window size, and packet size have impacts on SRD, while the statistical heavy-tailedness (Pareto shape parameter) affects the structure of LRD. In addition, a formula to estimate traffic burstiness is derived from the self-similarity property. Furthermore, studies with multifractal analysis have shown the following results. At large timescales, increasing bandwidth does not improve throughput; the two factors affecting traffic throughput are network delay and TCP window size. On the other hand, more simultaneous connections smooth traffic, which could result in an improvement of network efficiency. At small timescales, in order to improve network efficiency, we need to control bandwidth, TCP window size, and network delay to reduce traffic burstiness. In general, network traffic processes have a Hölder exponent α ranging between 0.7 and 1.3, and their statistics differ from Poisson processes. From traffic analysis, a notion of the efficient bandwidth, EB, is derived. Above that bandwidth, traffic appears bursty and cannot be reduced by multiplexing; below it, traffic is congested. An important finding is that the relationship between the bandwidth and the transfer delay is nonlinear.
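A common first check for the self-similarity discussed in this abstract is an estimate of the Hurst exponent. The sketch below applies the generic aggregated-variance estimator to a synthetic packet-count series; it illustrates the concept only and is not the thesis's analysis, which also covers multifractal spectra and Hölder exponents.

```python
import numpy as np

# Minimal sketch: aggregated-variance estimate of the Hurst exponent H.
# For a self-similar process, Var of the m-aggregated series scales as m^(2H-2).

def hurst_aggregated_variance(x, block_sizes=(1, 2, 4, 8, 16, 32, 64, 128)):
    x = np.asarray(x, dtype=float)
    log_m, log_var = [], []
    for m in block_sizes:
        n_blocks = len(x) // m
        if n_blocks < 2:
            break
        blocks = x[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        log_m.append(np.log(m))
        log_var.append(np.log(blocks.var()))
    slope, _ = np.polyfit(log_m, log_var, 1)   # slope = 2H - 2
    return 1.0 + slope / 2.0

# Uncorrelated Poisson counts: the estimate should land near H = 0.5.
rng = np.random.default_rng(0)
print(hurst_aggregated_variance(rng.poisson(100, 65536)))
```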
5

Jiang, Michael Zhonghua. "Analysis of wireless data network traffic." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape4/PQDD_0012/MQ61444.pdf.

Full text
6

Heller, Mark D. "Behavioral analysis of network flow traffic." Thesis, Monterey, California. Naval Postgraduate School, 2010. http://hdl.handle.net/10945/5108.

Full text
Abstract:
Approved for public release, distribution unlimited
Network Behavior Analysis (NBA) is a technique to enhance network security by passively monitoring aggregate traffic patterns and noting unusual action or departures from normal operations. The analysis is typically performed offline, due to the huge volume of input data, in contrast to conventional intrusion prevention solutions based on deep packet inspection, signature detection, and real-time blocking. After establishing a benchmark for normal traffic, an NBA program monitors network activity and flags unknown, new, or unusual patterns that might indicate the presence of a potential threat. NBA also monitors and records trends in bandwidth and protocol use. Computer users in Department of Defense (DoD) operational networks may use the Hypertext Transport Protocol (HTTP) to stream video from multimedia sites like youtube.com, myspace.com, mtv.com, and blackplanet.com. Such streaming may hog bandwidth, a grave concern given that increasing amounts of operational data are exchanged over the Global Information Grid, and may inadvertently introduce malicious viruses. This thesis develops an NBA solution to identify and estimate the bandwidth usage of HTTP streaming video traffic entirely from flow records such as Cisco's NetFlow data.
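The flow-record idea described above can be illustrated in a few lines of Python: flag HTTP flows whose size and duration look like streaming video and estimate their aggregate bandwidth. The record fields, example values, and thresholds below are hypothetical, not NetFlow's export format or the thesis's criteria.

```python
# Toy flow records; a real system would read exported NetFlow data instead.
flows = [
    {"src": "10.0.0.5", "dst": "203.0.113.9", "dst_port": 80, "bytes": 55_000_000, "duration_s": 240.0},
    {"src": "10.0.0.7", "dst": "198.51.100.2", "dst_port": 80, "bytes": 120_000, "duration_s": 3.0},
]

MIN_BYTES = 5_000_000      # ignore short web transfers
MIN_DURATION_S = 60.0      # streaming sessions tend to be long-lived

streaming = [f for f in flows
             if f["dst_port"] == 80
             and f["bytes"] >= MIN_BYTES
             and f["duration_s"] >= MIN_DURATION_S]

total_bits = 8 * sum(f["bytes"] for f in streaming)
total_time = sum(f["duration_s"] for f in streaming)
print(f"suspected streaming flows: {len(streaming)}")
print(f"average streaming bandwidth: {total_bits / total_time / 1e6:.2f} Mbit/s")
```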
7

Zhang, Yichi. "Residential Network Traffic and User Behavior Analysis." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-27001.

Full text
Abstract:
Internet usage is changing and the demands on broadband networks are ever increasing, so it remains crucial to understand today's network traffic and the usage patterns of end users; this understanding leads to more efficient network design, energy and cost savings, and improvement of the service offered to end users. This thesis aims at finding hidden patterns of traffic and user behavior in a residential fiber-based access network. To address the problem, a systematic framework for traffic measurement and analysis is developed, involving traffic data collection with PacketLogic, storage in a MySQL database, and traffic and user behavior analysis using Python scripts. Our approach provides new insights into residential network traffic properties and the Internet habits of households, covering aggregated traffic patterns, household traffic modeling, traffic and user penetration for applications, grouping analysis by cluster and subscriber, and concurrent application analysis. The analysis solutions we provide are based on open-source tools without proprietary software, giving the most flexibility for code modification and distribution.
8

Kreibich, Christian Peter. "Structural traffic analysis for network security monitoring." Thesis, University of Cambridge, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.613090.

Full text
9

Yu, Han. "Analysis of network traffic in grid system." Thesis, Loughborough University, 2007. https://dspace.lboro.ac.uk/2134/35162.

Full text
Abstract:
The aim of this work was to conduct experiments to discover the characteristics of the network traffic generated by running Grid applications. In the experiments, the Grid application used discovers resources and services registered in a non-central resource. The Vega Grid was used as the experimental Grid platform, and Resource Discovery was run on this platform. Since the Resource Discovery application could generate continuous network traffic, it was useful to measure and analyse this traffic and find out its characteristics. Several experiments were conducted for the same purpose, using three, four, five, and six PCs respectively. Moreover, the same experiment was conducted over a WAN with seven PCs (four inside the campus and three outside). ETHEREAL was used to collect the data packets involved in the experiments.
10

Vu, Hong Linh. "DNS Traffic Analysis for Network-based Malware Detection." Thesis, KTH, Kommunikationssystem, CoS, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-93842.

Full text
Abstract:
Botnets are generally recognized as one of the most challenging threats on the Internet today. Botnets have been involved in many attacks targeting multinational organizations and even nationwide internet services. As more effective detection and mitigation approaches are proposed by security researchers, botnet developers are employing new techniques for evasion. It is not surprising that the Domain Name System (DNS) is abused by botnets for the purposes of evasion, because of the important role of DNS in the operation of the Internet. DNS provides a flexible mapping between domain names and IP addresses, thus botnets can exploit this dynamic mapping to mask the location of botnet controllers. Domain-flux and fast-flux (also known as IP-flux) are two emerging techniques which aim at exhausting the tracking and blacklisting effort of botnet defenders by rapidly changing the domain names or their associated IP addresses that are used by the botnet. In this thesis, we employ passive DNS analysis to develop an anomaly-based technique for detecting the presence of a domain-flux or fast-flux botnet in a network. To do this, we construct a lookup graph and a failure graph from captured DNS traffic and decompose these graphs into clusters which have a strong correlation between their domains, hosts, and IP addresses. DNS-related features are extracted for each cluster and used as input to a classification module to identify the presence of a domain-flux or fast-flux botnet in the network. The experimental evaluation on captured traffic traces verified that the proposed technique successfully detected domain-flux botnets in the traces. The proposed technique complements other techniques for detecting botnets through traffic analysis.
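A minimal sketch of the failure-graph construction mentioned in this abstract is shown below: client hosts are connected to domains whose resolution failed, the graph is split into connected components, and simple per-cluster counts are reported. The records are made up, and the clustering and feature set are far simpler than the correlation-based decomposition and classification used in the thesis.

```python
import networkx as nx

# Toy failed-lookup records (client host, queried domain); a real system
# would extract these from captured DNS traffic (NXDOMAIN responses).
failed_lookups = [
    ("10.0.0.5", "xkq1a.example-dga.com"),
    ("10.0.0.5", "p93zt.example-dga.com"),
    ("10.0.0.9", "p93zt.example-dga.com"),
    ("10.0.0.7", "typo-of-a-real-site.org"),
]

g = nx.Graph()
for host, domain in failed_lookups:
    g.add_edge(("host", host), ("domain", domain))   # bipartite failure graph

for cluster in nx.connected_components(g):
    hosts = {n for kind, n in cluster if kind == "host"}
    domains = {n for kind, n in cluster if kind == "domain"}
    # Many failed domains shared by a few hosts is one hint of domain-flux.
    print(f"hosts={len(hosts)} failed_domains={len(domains)}")
```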
11

Hunter, John B., and Holger Gromann. "Analysis and design of a universal traffic network." Monterey, Calif.: Springfield, Va.: Naval Postgraduate School; Available from National Technical Information Service, 2000. http://handle.dtic.mil/100.2/ADA384024.

Full text
Abstract:
Thesis (M.S. in Computer Science) Naval Postgraduate School, Sept. 2000.
"September 2000." Thesis advisor(s): Lundy, Gilbert M. Includes bibliographical references (p. 113-115). Also available online.
12

Hunter, John B., and Holger Gromann. "Analysis and design of a universal traffic network." Thesis, Monterey, California. Naval Postgraduate School, 2000. http://hdl.handle.net/10945/9406.

Full text
Abstract:
As the field of computer networking has evolved, so too has the use of these networks. Modern networks must be capable of performing more than simple data transfer. To be of value, a network must be able to handle the convergence of different types of traffic: voice, video, and data; and the Quality of Service requirements associated with each type. This thesis performs a detailed analysis of the different types of traffic, the two primary transmission media, fiber optical and copper based connections, and the connection-orientation technology to route the traffic. Presented in this thesis is a fiber-based hybrid network consisting of Asynchronous Transfer Mode at the backbone layer and Frame Relay and Passive Optical Networking at the local access layer. The proposed Universal Traffic Network, based on present-day technology, is a viable solution to the challenge imposed by the convergence of different traffic types.
13

Weaver, Cyrus-Charles. "Understanding information seeking behavior through network traffic analysis." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/46520.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008.
Includes bibliographical references (p. 150-151).
Many of today's information workers use the Internet as a valuable first-choice source for new knowledge. As such, Internet based information seeking is a key part of how information workers find information. This study develops techniques to quantify the information seeking patterns of information workers by looking at Web Site diversity, page rank, and general statistics of Web Site viewership. Future research by our group will build on these measurement techniques and explore the relationship between information worker productivity and Internet information seeking behavior.
by Cyrus-Charles Weaver.
M.Eng.
14

Cowan, KC Kaye. "Detecting Hidden Wireless Cameras through Network Traffic Analysis." Thesis, Virginia Tech, 2020. http://hdl.handle.net/10919/100148.

Full text
Abstract:
Wireless cameras dominate the home surveillance market, providing an additional layer of security for homeowners. Cameras are not limited to private residences; retail stores, public bathrooms, and public beaches represent only some of the possible locations where wireless cameras may be monitoring people's movements. When cameras are deployed into an environment, one would typically expect the user to disclose the presence of the camera as well as its location, which should be outside of a private area. However, adversarial camera users may withhold information and prevent others from discovering the camera, forcing others to determine on their own whether they are being recorded. To uncover hidden cameras, a wireless camera detection system must be developed that recognizes the camera's network traffic characteristics. We monitor the network traffic within the immediate area using a separately developed packet sniffer, a program that observes and collects information about network packets. We analyze and classify these packets based on how well their patterns and features match those expected of a wireless camera. Using a Support Vector Machine classifier with a secondary level of classification to reduce false positives, we design and implement a system that uncovers the presence of hidden wireless cameras within an area.
Master of Science
Wireless cameras may be found almost anywhere, whether they are used to monitor city traffic and report on travel conditions or to act as home surveillance when residents are away. Regardless of their purpose, wireless cameras may observe people wherever they are, as long as a power source and Wi-Fi connection are available. While most wireless camera users install such devices for peace of mind, there are some who take advantage of cameras to record others without their permission, sometimes in compromising positions or places. Because of this, systems are needed that may detect hidden wireless cameras. We develop a system that monitors network traffic packets, specifically based on their packet lengths and direction, and determines if the properties of the packets mimic those of a wireless camera stream. A double-layered classification technique is used to uncover hidden wireless cameras and filter out non-wireless camera devices.
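The classification step described above, an SVM over packet-length and direction features, can be sketched as follows with scikit-learn. The two features and all numeric values are synthetic placeholders rather than measured camera traffic, and the secondary classification layer used in the thesis is omitted.

```python
import numpy as np
from sklearn.svm import SVC

# Rows: [mean uplink packet length (bytes), packet-length variance].
# Values are synthetic stand-ins for per-device traffic features.
rng = np.random.default_rng(1)
camera_like = np.column_stack([rng.normal(1100, 50, 50), rng.normal(9000, 500, 50)])
other = np.column_stack([rng.normal(300, 80, 50), rng.normal(2000, 400, 50)])

X = np.vstack([camera_like, other])
y = np.array([1] * 50 + [0] * 50)            # 1 = camera-like stream

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
print(clf.predict([[1080.0, 8800.0], [250.0, 1800.0]]))
```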
15

Senthivel, Saranyan. "Automatic Forensic Analysis of PCCC Network Traffic Log." ScholarWorks@UNO, 2017. http://scholarworks.uno.edu/td/2394.

Full text
Abstract:
Most SCADA devices have few built-in self-defence mechanisms and tend to implicitly trust communications received over the network. Therefore, monitoring and forensic analysis of network traffic is a critical prerequisite for building an effective defense around SCADA units. In this thesis work, we provide a comprehensive forensic analysis of network traffic generated by the PCCC (Programmable Controller Communication Commands) protocol and present a prototype tool capable of extracting both updates to programmable logic and crucial configuration information. The results of our analysis show that more than 30 files are transferred to/from the PLC when downloading/uploading a ladder logic program using the RSLogix programming software, including configuration and data files. Interestingly, when RSLogix compiles a ladder-logic program, it does not create any low-level representation of a ladder-logic file. However, the low-level ladder logic is present and can be extracted from the network traffic log using our prototype tool. The tool also extracts the SMTP configuration from the network log and parses it to obtain email addresses, usernames, and passwords; the network log contains passwords in plain text.
16

Naboulsi, Diala. "Analysis and exploitation of mobile traffic datasets." Thesis, Lyon, INSA, 2015. http://www.theses.fr/2015ISAL0084/document.

Full text
Abstract:
Mobile devices are becoming an integral part of our everyday digitalized life. In 2014, the number of mobile devices connected to the Internet and consuming traffic exceeded the number of human beings on earth. These devices constantly interact with the network infrastructure, and their activity is recorded by network operators for monitoring and billing purposes. The resulting logs, collected as mobile traffic datasets, convey important information concerning spatio-temporal traffic dynamics, relating to large populations of millions of individuals. This thesis sheds light on the potential carried by mobile traffic datasets for future cellular networks. On one hand, we target the analysis of these datasets: we propose a usage-pattern characterization framework capable of defining meaningful categories of mobile traffic profiles and classifying network usages accordingly. On the other hand, we exploit mobile traffic datasets to evaluate two dynamic networking solutions. First, we focus on the reduction of energy consumption over typical Radio Access Networks (RAN). We introduce a power control mechanism that adapts the RAN's power configuration to user demand while maintaining geographical coverage, and show that our scheme significantly reduces power consumption over the network infrastructure. Second, we study the problem of topology management for future Cloud-RAN (C-RAN). We propose a mobility-driven dynamic association scheme for the C-RAN components which takes user traffic demand into account; the introduced strategy is observed to lead to important savings in the network in terms of handovers.
17

Dupasquier, Benoit. "Monitoring and analysis of network traffic for information security." Thesis, Queen's University Belfast, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.601445.

Full text
Abstract:
Network traffic monitoring and analysis has several practical implications. It can be used for malicious or legitimate purposes and aimed at improving the quality of communications, enhancing the security of a system, or extracting information via side-channels. Such analysis can even deal with the use of encryption and obfuscation and extract meaningful information from huge amounts of Internet traffic. First, this thesis explores its use to investigate the leakage of information from Skype, a widely used and encrypted VoIP application. VoIP has experienced tremendous growth over the last few years and is now widely used among the public and for business purposes. The security of such VoIP systems is often assumed, creating a false sense of privacy. Experiments have shown that isolated phonemes can be classified and given sentences identified. By using the DTW algorithm, frequently used in speech processing, an accuracy of 60% can be reached. The results can be further improved by choosing specific training data, reaching an accuracy of 83% under specific conditions. The initial results being speaker dependent, an approach involving the Kalman filter is proposed to extract the kernel of all training signals. Second, the use of traffic monitoring and analysis for network security is investigated to detect hosts infected with the ZeuS botnet, a recent infamous trojan that steals banking information and one of the most prominent cyber threats to date. Cyber threats are becoming ever more sophisticated, persistent, and difficult to detect. As highlighted by recent success stories of malware such as the ZeuS botnet, current defence solutions are not enough to thwart these threats. Therefore, it is of paramount importance to be able to detect and mitigate these kinds of malware. This work proposes a detailed analysis of the network communications that occur between a bot and its master as part of the command and control traffic. This research identifies six key attributes which provide a reliable way of detecting hosts infected by the ZeuS botnet. These discoveries are then used in combination with different machine learning algorithms in order to prove their validity. Finally, the use of IBM QRadar, a commercial SIEM product, to detect ZeuS-infected hosts is investigated.
18

Chang, Xuquan Stanley, and Kim Yong Chua. "A cyberciege traffic analysis extension for teaching network security." Monterey, California. Naval Postgraduate School, 2011. http://hdl.handle.net/10945/10578.

Full text
Abstract:
CyberCIEGE is an interactive game simulating realistic scenarios that teaches the players Information Assurance (IA) concepts. The existing game scenarios only provide a high-level abstraction of the networked environment, e.g., nodes do not have Internet protocol (IP) addresses or belong to proper subnets, and there is no packet-level network simulation. This research explored endowing the game with network level traffic analysis, and implementing a game scenario to take advantage of this new capability. Traffic analysis is presented to players in a format similar to existing tools such that learned skills may be easily transferred to future real-world situations. A network traffic analysis tool simulation within CyberCIEGE was developed and this new tool provides the player with traffic analysis capability. Using existing taxonomies of cyber-attacks, the research identified a subset of network-based attacks most amenable to modeling and representation within CyberCIEGE. From the attacks identified, a complementary CyberCIEGE scenario was developed to provide the player with new educational opportunities for network analysis and threat identification. From the attack scenario, players also learn about the effects of these cyber-attacks and glean a more informed understanding of appropriate mitigation measures.
19

Moe, Lwin P. "Cyber security risk analysis framework : network traffic anomaly detection." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/118536.

Full text
Abstract:
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 84-86).
Cybersecurity is a growing research area with direct commercial impact to organizations and companies in every industry. With all other technological advancements in the Internet of Things (IoT), mobile devices, cloud computing, 5G network, and artificial intelligence, the need for cybersecurity is more critical than ever before. These technologies drive the need for tighter cybersecurity implementations, while at the same time act as enablers to provide more advanced security solutions. This paper will discuss a framework that can predict cybersecurity risk by identifying normal network behavior and detect network traffic anomalies. Our research focuses on the analysis of the historical network traffic data to identify network usage trends and security vulnerabilities. Specifically, this thesis will focus on multiple components of the data analytics platform. It explores the big data platform architecture, and data ingestion, analysis, and engineering processes. The experiments were conducted utilizing various time series algorithms (Seasonal ETS, Seasonal ARIMA, TBATS, Double-Seasonal Holt-Winters, and Ensemble methods) and Long Short-Term Memory Recurrent Neural Network algorithm. Upon creating the baselines and forecasting network traffic trends, the anomaly detection algorithm was implemented using specific thresholds to detect network traffic trends that show significant variation from the baseline. Lastly, the network traffic data was analyzed and forecasted in various dimensions: total volume, source vs. destination volume, protocol, port, machine, geography, and network structure and pattern. The experiments were conducted with multiple approaches to get more insights into the network patterns and traffic trends to detect anomalies.
by Lwin P. Moe.
S.M. in Engineering and Management
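The thesis above combines several seasonal forecasting methods (Seasonal ETS, Seasonal ARIMA, TBATS, Double-Seasonal Holt-Winters, LSTM) with threshold-based anomaly flagging. The sketch below illustrates only the general baseline-and-threshold idea, using a single-seasonal Holt-Winters model from statsmodels on a synthetic hourly volume series; the data, seasonality, and the four-sigma threshold are assumptions for illustration, not the thesis's configuration.

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Synthetic hourly traffic volume with a daily cycle and one injected anomaly.
rng = np.random.default_rng(7)
hours = np.arange(24 * 28)                                  # four weeks, hourly
volume = 100 + 40 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 5, hours.size)
volume[500] += 80                                           # injected anomaly

# Fit a seasonal baseline, then flag hours whose residual exceeds a threshold.
model = ExponentialSmoothing(volume, trend="add",
                             seasonal="add", seasonal_periods=24).fit()
residuals = volume - model.fittedvalues
threshold = 4 * residuals.std()
anomalies = np.where(np.abs(residuals) > threshold)[0]
print("anomalous hours:", anomalies)   # early hours may appear while the model warms up
```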
20

Sun, Jie. "Locality of Internet Traffic : An analysis based upon traffic in an IP access network." Thesis, KTH, Kommunikationssystem, CoS, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-107686.

Full text
Abstract:
The rapid growth of Internet traffic has emerged as a major issue due to the rapid development of various network applications and Internet services. One of the challenges facing Internet Service Providers (ISPs) is to optimize the performance of their networks in the face of continuously increasing amounts of IP traffic while guaranteeing specific Quality of Service (QoS) levels. It is therefore necessary for ISPs to study the traffic patterns and user behaviors in different localities, to estimate application usage trends, and thereby to come up with solutions that can effectively, efficiently, and economically support their users' traffic. The main objective of this thesis is to analyze and characterize traffic in a local multi-service residential IP network in Sweden (referred to in this report as "Network North"). The data about the amount of traffic was measured using a real-time traffic-monitoring tool from PacketLogic. Traffic from the monitored network to various destinations was captured and classified into five ring-wise locality levels in accordance with the traffic's geographic destinations: traffic within Network North and traffic to the remainder of the North of Sweden, Sweden, Europe, and the World. Parameters such as traffic patterns (e.g., traffic volume distribution, application usage, and application popularity) and user behavior (e.g., usage habits, user interests, etc.) at different geographic localities were studied in this project. As a result of this systematic and in-depth measurement, and the fact that the number of content servers at the World, Europe, and Sweden levels is quite large, we recommend that an intelligent content distribution system be positioned at Level 1 localities in order to reduce the amount of duplicate traffic in the network and thereby remove this traffic load from the core network. The results of these measurements provide a temporal reference for ISPs of their present traffic and should allow them to better manage their networks. However, the analysis was limited by the set of available daily traffic traces. To provide a more trustworthy solution, a longer-term, periodic, and seasonal traffic analysis could be done in the future based on the established measurement framework.
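To make the ring-wise locality classification concrete, the sketch below maps destination addresses onto locality levels using simple per-level prefix lists. The prefixes shown are IETF documentation ranges used as placeholders; the real mapping in the thesis was geographic and derived from the PacketLogic measurements, not these addresses.

```python
import ipaddress

# Hypothetical locality levels, most specific first; unmatched traffic is "World".
LOCALITY_PREFIXES = [
    ("Network North", ["192.0.2.0/25"]),
    ("North of Sweden", ["192.0.2.128/25"]),
    ("Sweden", ["198.51.100.0/24"]),
    ("Europe", ["203.0.113.0/24"]),
]

def locality(dst_ip):
    addr = ipaddress.ip_address(dst_ip)
    for level, prefixes in LOCALITY_PREFIXES:
        if any(addr in ipaddress.ip_network(p) for p in prefixes):
            return level
    return "World"

for dst in ["192.0.2.10", "198.51.100.40", "8.8.8.8"]:
    print(dst, "->", locality(dst))
```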
21

Mawji, Afzal. "Achieving Scalable, Exhaustive Network Data Processing by Exploiting Parallelism." Thesis, University of Waterloo, 2004. http://hdl.handle.net/10012/779.

Full text
Abstract:
Telecommunications companies (telcos) and Internet Service Providers (ISPs) monitor the traffic passing through their networks for the purposes of network evaluation and planning for future growth. Most monitoring techniques currently use a form of packet sampling. However, exhaustive monitoring is a preferable solution because it ensures accurate traffic characterization and also allows encoding operations, such as compression and encryption, to be performed. To overcome the very high computational cost of exhaustive monitoring and encoding of data, this thesis suggests exploiting parallelism. By utilizing a parallel cluster in conjunction with load balancing techniques, a simulation is created to distribute the load across the parallel processors. It is shown that a very scalable system, capable of supporting a fairly high data rate can potentially be designed and implemented. A complete system is then implemented in the form of a transparent Ethernet bridge, ensuring that the system can be deployed into a network without any change to the network. The system focuses its encoding efforts on obtaining the maximum compression rate and, to that end, utilizes the concept of streams, which attempts to separate data packets into individual flows that are correlated and whose redundancy can be removed through compression. Experiments show that compression rates are favourable and confirms good throughput rates and high scalability.
22

Kim, Seong Soo. "Real-time analysis of aggregate network traffic for anomaly detection." Texas A&M University, 2005. http://hdl.handle.net/1969.1/2312.

Full text
Abstract:
The frequent and large-scale network attacks have led to an increased need for developing techniques for analyzing network traffic. If efficient analysis tools were available, it could become possible to detect the attacks and anomalies and to appropriately take action to contain the attacks before they have had time to propagate across the network. In this dissertation, we suggest a technique for traffic anomaly detection based on analyzing the correlation of destination IP addresses and the distribution of an image-based signal, in postmortem and in real-time, by passively monitoring the packet headers of traffic. This address correlation data are transformed using a discrete wavelet transform for effective detection of anomalies through statistical analysis. Results from trace-driven evaluation suggest that the proposed approach could provide an effective means of detecting anomalies close to the source. We present a multidimensional indicator using the correlation of port numbers as a means of detecting anomalies. We also present a network measurement approach that can simultaneously detect, identify and visualize attacks and anomalous traffic in real-time. We propose to represent samples of network packet header data as frames or images. With such a formulation, a series of samples can be seen as a sequence of frames or video. This enables techniques from image processing and video compression, such as DCT, to be applied to the packet header data to reveal interesting properties of traffic. We show that "scene change analysis" can reveal sudden changes in traffic behavior or anomalies. We show that "motion prediction" techniques can be employed to understand the patterns of some of the attacks. We show that it may be feasible to represent multiple pieces of data as different colors of an image, enabling a uniform treatment of multidimensional packet header data. Measurement-based techniques for analyzing network traffic treat traffic volume and traffic header data as signals or images in order to make the analysis feasible. In this dissertation, we propose an approach based on the classical Neyman-Pearson Test employed in signal detection theory to evaluate these different strategies. We use both analytical models and trace-driven experiments for comparing the performance of different strategies. Our evaluations on real traces reveal differences in the effectiveness of different traffic header data as potential signals for traffic analysis in terms of their detection rates and false alarm rates. Our results show that address distributions and number of flows are better signals than traffic volume for anomaly detection. Our results also show that sometimes statistical techniques can be more effective than the NP-test when the attack patterns change over time.
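As a toy illustration of the signal-based view described in this abstract, the sketch below treats a per-interval count of distinct destination addresses as a one-dimensional signal, applies a discrete wavelet transform (PyWavelets), and flags positions whose finest-scale detail energy is far above the median. The synthetic data, Haar wavelet, and threshold are assumptions for illustration; the dissertation's actual address-correlation signal and detection procedure are more elaborate.

```python
import numpy as np
import pywt

# Synthetic per-interval counts of distinct destination addresses,
# with a short elevated burst standing in for a scanning anomaly.
rng = np.random.default_rng(3)
distinct_dsts = rng.normal(500, 20, 1024)
distinct_dsts[701:721] += 300                     # simulated scanning burst

coeffs = pywt.wavedec(distinct_dsts, "haar", level=3)
detail = coeffs[-1]                               # finest-scale detail coefficients
energy = detail ** 2
flagged = np.where(energy > 25 * np.median(energy))[0]
print("flagged detail positions:", flagged)       # expect hits near the burst edges
```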
23

Stergiou, Ilias. "Novel computer-network traffic modelling techniques for analysis and simulation." Thesis, University of Sheffield, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.323059.

Full text
24

Khasgiwala, Jitesh. "Analysis of Time-Based Approach for Detecting Anomalous Network Traffic." Ohio University / OhioLINK, 2005. http://www.ohiolink.edu/etd/view.cgi?ohiou1113583042.

Full text
25

Nkhumeleni, Thizwilondi Moses. "Correlation and comparative analysis of traffic across five network telescopes." Thesis, Rhodes University, 2014. http://hdl.handle.net/10962/d1011668.

Full text
Abstract:
Monitoring unused IP address space by using network telescopes provides a favourable environment for researchers to study and detect malware, worms, denial of service and scanning activities. Research in the field of network telescopes has progressed over the past decade, resulting in the development of an increased number of overlapping datasets. Rhodes University's network of telescope sensors has continued to grow with additional network telescopes being brought online. At the time of writing, Rhodes University has a distributed network of five relatively small /24 network telescopes. With five network telescope sensors, this research focuses on comparative and correlation analysis of traffic activity across the network of telescope sensors. To aid summarisation and visualisation techniques, time series representing time-based traffic activity are constructed. By employing an iterative experimental process on the captured traffic, two natural categories of the five network telescopes are presented. Using the cross- and auto-correlation methods of time series analysis, moderate correlation of traffic activity was achieved between telescope sensors in each category. Weak to moderate correlation was calculated when comparing category A and category B network telescopes' datasets. Results were significantly improved by studying TCP traffic separately: moderate to strong correlation coefficients in each category were calculated when using TCP traffic only. UDP traffic analysis showed weaker correlation between sensors; however, the uniformity of ICMP traffic showed correlation of traffic activity across all sensors. The results confirmed the visual observation of traffic relativity in telescope sensors within the same category and quantitatively analysed the correlation of network telescopes' traffic activity.
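A minimal version of the sensor-to-sensor comparison described above can be expressed as correlation of hourly packet-count time series. The sketch below computes a lag-0 Pearson correlation and a few lagged cross-correlation values for two synthetic sensor series; real telescope traces would replace the synthetic data.

```python
import numpy as np

# Two synthetic hourly packet-count series that share a common background.
rng = np.random.default_rng(11)
common = rng.poisson(200, 24 * 7)                 # shared scanning background
sensor_a = common + rng.poisson(20, common.size)
sensor_b = common + rng.poisson(20, common.size)

print("lag-0 correlation:", np.corrcoef(sensor_a, sensor_b)[0, 1])

# Normalised cross-correlation at a few lags.
a = (sensor_a - sensor_a.mean()) / sensor_a.std()
b = (sensor_b - sensor_b.mean()) / sensor_b.std()
for lag in (-2, -1, 0, 1, 2):
    print(f"lag {lag:+d}: {np.mean(a * np.roll(b, lag)):.3f}")
```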
26

Vieira, Thiago Pereira de Brito. "An approach for profiling distributed applications through network traffic analysis." Universidade Federal de Pernambuco, 2013. https://repositorio.ufpe.br/handle/123456789/12454.

Full text
Abstract:
Distributed systems have been adopted for building modern Internet services and cloud computing infrastructures, in order to obtain services with high performance, scalability, and reliability. Cloud computing SLAs require a short time to identify, diagnose, and solve problems in a cloud computing production infrastructure, in order to avoid negative impacts on the quality of service provided to its clients. Thus, the detection of error causes, and the diagnosis and reproduction of errors, are challenges that motivate efforts towards the development of less intrusive mechanisms for monitoring and debugging distributed applications at runtime. Network traffic analysis is one option for measuring distributed systems, although there are limitations on the capacity to process large amounts of network traffic in a short time and on the scalability to process network traffic where resource demand varies. The goal of this dissertation is to analyse the processing capacity problem of measuring distributed systems through network traffic analysis, in order to evaluate the performance of distributed systems at a data center, using commodity hardware and cloud computing services, in a minimally intrusive way. We propose a new approach based on MapReduce for deep inspection of distributed application traffic, in order to evaluate the performance of distributed systems at runtime using commodity hardware. In this dissertation we evaluated the effectiveness of MapReduce for a deep packet inspection algorithm, its processing capacity, completion time speedup, and processing capacity scalability, as well as the behavior followed by the MapReduce phases when applied to deep packet inspection for extracting indicators of distributed applications.
27

Ntlangu, Mbulelo Brenwen. "Modelling computer network traffic using wavelets and time series analysis." Master's thesis, Faculty of Engineering and the Built Environment, 2019. http://hdl.handle.net/11427/30146.

Full text
Abstract:
Modelling of network traffic is a notoriously difficult problem. This is primarily due to the ever-increasing complexity of network traffic and the different ways in which a network may be excited by user activity. The ongoing development of new network applications, protocols, and usage profiles further necessitates models which are able to adapt to the specific networks in which they are deployed. These considerations have in large part driven the evolution of statistical profiles of network traffic from simple Poisson processes to non-Gaussian models that incorporate traffic burstiness, non-stationarity, self-similarity, long-range dependence (LRD) and multi-fractality. The need for ever more sophisticated network traffic models has since led to the specification of a myriad of traffic models, many of which are listed in [91, 14]. In networks composed of IoT devices, much of the traffic is generated by devices which function autonomously and in a more deterministic fashion. Thus, in this dissertation, the task of building time series models for IoT network traffic is undertaken. In the work that follows, a broad review of the historical development of network traffic modelling is presented, tracing a path that leads to the use of time series analysis for the said task. An introduction to time series analysis is provided in order to facilitate the theoretical discussion regarding the feasibility and suitability of time series analysis techniques for modelling network traffic. The theory is then followed by a summary of the techniques and methodology that might be followed to detect, remove and/or model the typical characteristics associated with network traffic, such as linear trends, cyclic trends, periodicity, fractality, and long-range dependence. A set of experiments is conducted in order to determine the effect of fractality on the estimation of the AR and MA components of a time series model. A comparison of various Hurst estimation techniques is also performed on synthetically generated data. The wavelet-based Abry-Veitch Hurst estimator is found to perform consistently well with respect to its competitors, and the subsequent removal of fractality via fractional differencing is found to provide a substantial improvement in the estimation of time series model parameters.
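The modelling step described above, removing long-range dependence by fractional differencing before fitting AR and MA components, can be sketched as follows. The fractional-differencing weights come from the binomial expansion of (1 - B)^d; the input series, the hand-picked value of d, and the ARMA(1,1) order are illustrative assumptions rather than the dissertation's estimates.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def frac_diff(x, d, n_weights=100):
    # Truncated binomial-expansion weights of (1 - B)^d.
    w = [1.0]
    for k in range(1, n_weights):
        w.append(-w[-1] * (d - k + 1) / k)
    w = np.array(w)
    # Apply the weights to each trailing window (most recent sample first).
    return np.array([np.dot(w, x[i - n_weights + 1: i + 1][::-1])
                     for i in range(n_weights - 1, len(x))])

# Synthetic stand-in for an LRD traffic series.
rng = np.random.default_rng(5)
series = np.cumsum(rng.normal(0, 1, 3000)) * 0.01 + rng.normal(0, 1, 3000)

d = 0.3                                   # d = H - 0.5 for an LRD process (chosen by hand here)
stationary = frac_diff(series, d)
fit = ARIMA(stationary, order=(1, 0, 1)).fit()
print(fit.params)
```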
28

Nordlöv, Anna, and Niklas Lindqvist. "Network based spatial analysis of traffic accidents in Stockholm, Sweden." Thesis, KTH, Geoinformatik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-188515.

Full text
29

Caldwell, Sean W. "On Traffic Analysis of 4G/LTE Traffic." Cleveland State University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=csu1632179249187618.

Full text
30

Liu, Lei. "Analytical Modelling of Scheduling Schemes under Self-similar Network Traffic. Traffic Modelling and Performance Analysis of Centralized and Distributed Scheduling Schemes." Thesis, University of Bradford, 2010. http://hdl.handle.net/10454/4863.

Full text
Abstract:
High-speed transmission over contemporary communication networks has drawn many research efforts. Traffic scheduling schemes, which play a critical role in managing network transmission, have been pervasively studied and widely implemented in various practical communication networks. In a sophisticated communication system, a variety of applications co-exist and require differentiated Quality-of-Service (QoS). Innovative scheduling schemes and hybrid scheduling disciplines which integrate multiple traditional scheduling mechanisms have emerged for QoS differentiation. This study aims to develop novel analytical models for commonly used scheduling schemes in communication systems under more realistic network traffic and to use the models to investigate issues in the design and development of traffic scheduling schemes. In the open literature, it is commonly recognized that network traffic exhibits a self-similar nature, which has a serious impact on the performance of communication networks and protocols. To study self-similar traffic in depth, real-world traffic datasets are measured and evaluated in this work. The results reveal that self-similar traffic is a ubiquitous phenomenon in high-speed communication networks and highlight the importance of the analytical models developed under self-similar traffic. Original analytical models are then developed for centralized scheduling schemes including Deficit Round Robin, the hybrid PQGPS scheme which integrates traditional Priority Queueing (PQ) and Generalized Processor Sharing (GPS), and the Automatic Repeat reQuest (ARQ) forward error control discipline in the presence of self-similar traffic. More recently, research on innovative Cognitive Radio (CR) techniques in wireless networks has become popular. However, most existing analytical models still employ traditional Poisson traffic to examine the performance of CR-based systems. In addition, few studies have been reported on estimating the residual service left by primary users; instead, many existing studies use an ON/OFF source to model the residual service regardless of the primary traffic. In this thesis, PQ theory is adopted to investigate and model the possible service left by self-similar primary traffic and to derive the queue length distribution of individual secondary users under a distributed spectrum random access protocol.
31

Ware, Ryan T. "An analysis of two layers of encryption to protect network traffic." Thesis, Monterey, California : Naval Postgraduate School, 2010. http://edocs.nps.edu/npspubs/scholarly/theses/2010/Jun/10Jun%5FWare.pdf.

Full text
Abstract:
Thesis (M.S. in Computer Science)--Naval Postgraduate School, June 2010.
Thesis Advisor(s): Dinolt, George ; Second Reader: Guild, Jennifer. "June 2010." Description based on title screen as viewed on July 15, 2010. Author(s) subject terms: encryption, computer security, network security, architecture, fault tree analysis, defense-in-depth Includes bibliographical references (p. 75-77). Also available in print.
32

Kalyar, Iftekhar A. "Prediction of Sunday afternoon traffic using neural network and regression analysis." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape15/PQDD_0005/MQ35841.pdf.

Full text
33

Banerji, Pratip K. "An analysis of network management traffic and requirements in wireless networks." Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/42744.

Full text
34

Iorliam, Aamo. "Application of power laws to biometrics, forensics and network traffic analysis." Thesis, University of Surrey, 2016. http://epubs.surrey.ac.uk/812720/.

Full text
Abstract:
Tampering of biometric samples is becoming an important security concern. Attacks can occur in behavioral modalities (e.g., keystroke dynamics) as well. Besides biometric data, other important security concerns relate to network traffic data on the Internet. In this thesis, we investigate the application of power laws to biometrics, forensics and network traffic analysis. Passive detection techniques such as Benford's law and Zipf's law have not previously been investigated for the detection and forensic analysis of malicious and non-malicious tampering of biometric, keystroke and network traffic data. Benford's law has been reported in the literature to be very effective in detecting tampering of natural images. In this thesis, our experiments show that biometric samples do follow Benford's law, and that the highest detection and localisation accuracies for biometric face images and fingerprint images are achieved at 97.41% and 96.40%, respectively. The divergence values of Benford's law are then used for the classification and source identification of fingerprint images with good accuracies in the range of 76.0357% to 92.4344%. Another research focus in this thesis is the application and analysis of Benford's law and Zipf's law for keystroke dynamics to differentiate between the behaviour of human beings and non-human beings. The divergence values of Benford's law and the P-values of Zipf's law based on the latency values of the keystroke data can be used effectively to differentiate between human and non-human behaviours. Finally, Benford's law and Zipf's law are analysed for the TCP flow size difference for the detection of malicious traffic on the Internet, with AUC values in the range of 0.6858 to 1. Furthermore, the P-values of Zipf's law have also been found to differentiate between malicious and non-malicious network traffic, which can potentially be exploited for intrusion detection system applications.
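As a small illustration of the first-digit analysis described in this abstract, the sketch below compares the leading-digit distribution of a heavy-tailed set of values (standing in for TCP flow sizes) against Benford's law and reports a chi-square-style divergence. The divergence measure and the synthetic Pareto data are assumptions for illustration, not the thesis's exact procedure.

```python
import numpy as np

def first_digits(values):
    # Leading decimal digit of each positive value, via scientific notation.
    return np.array([int(f"{v:.6e}"[0]) for v in values if v > 0])

# Benford's expected frequency of leading digit d is log10(1 + 1/d).
benford = np.log10(1 + 1 / np.arange(1, 10))

# Heavy-tailed synthetic values standing in for flow sizes.
rng = np.random.default_rng(2)
flow_sizes = rng.pareto(1.2, 10_000) * 1000

digits = first_digits(flow_sizes)
observed = np.array([(digits == d).mean() for d in range(1, 10)])
chi_sq = ((observed - benford) ** 2 / benford).sum()

print("observed:", np.round(observed, 3))
print("chi-square divergence from Benford:", round(chi_sq, 4))
```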
APA, Harvard, Vancouver, ISO, and other styles
35

Kälkäinen, J. (Juha). "Collection and analysis of malicious SSH traffic in Oulu University network." Bachelor's thesis, University of Oulu, 2018. http://urn.fi/URN:NBN:fi:oulu-201812053224.

Full text
Abstract:
Secure Shell (SSH) is a tool commonly used by many organizations to establish secure data communication and remote access to systems that store confidential information and resources. A honeypot can be used to assess, defend against and study the different threats to which systems using this protocol are exposed. This thesis studied malicious SSH traffic directed at the Oulu University network by using the SSH honeypot Cowrie. The honeypot was deployed in the panOULU network located at the Oulu University campus. Two other identical honeypots were deployed in different networks to study whether there are differences in the malicious traffic directed at the three honeypots. Additionally, the passive fingerprinting tool p0f was used to identify the operating systems from which the attacks originated. The honeypots remained active and collected data for 12 days. Based on the gathered data, malicious SSH traffic on all three networks was quite similar. The most common interaction on all three honeypots likely came from bots seeking to exploit common vulnerabilities to gain access to resources. Data from p0f revealed that the most common successfully fingerprinted operating system used by attackers on all three honeypots was Linux-based. The research showed that honeypots can be used to collect data that is valuable to any security researcher or network administrator. The university network and the other two networks studied are constantly subjected to a multitude of attacks and scans.
APA, Harvard, Vancouver, ISO, and other styles
36

Syal, Astha. "Automatic Network Traffic Anomaly Detection and Analysis using SupervisedMachine Learning Techniques." Youngstown State University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ysu1578259840945109.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Löfroth, Björn. "Mobile traffic dataset comparisons through cluster analysis of radio network event sequences." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-153914.

Full text
Abstract:
Ericsson regularly collects traffic datasets from different radio networks around the world. These datasets can be used for several research purposes, ranging from general statistics to more specific studies such as system troubleshooting and buffer-level analysis. Currently, a researcher may find it difficult to assess if a certain dataset is useful for a particular investigation, since there exists no easily accessible overview of the properties of the different datasets. This thesis project aims to make it easier to compare the existing traffic datasets in terms of general statistics, user and time coverage, data integrity and the patterns of sequences in radio network event logs. The key contribution is a method of clustering event sequences based on sequence duration and occurrences of a number of key events. A method called the Gap statistic was applied to determine that using 11 clusters was suitable for the analysis, although no strong evidence was found for the existence of well separated clusters. The results show that the method can work as a useful extension of basic comparative statistics. Two dense ranges of sequence durations discovered in the basic statistics could successfully be linked to corresponding clusters of sequences. Extensive statistics about the cluster members then revealed detailed properties of the sequences in these two dense areas, at a deeper level than could be understood from the basic statistics. A problematic part of interpreting the results of the method is that many different perspectives of the data need to be considered at the same time to find interesting links. Future work could include automating the process of linking features in the basic statistics to clusters.
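To make the clustering step concrete, the sketch below groups event sequences by duration and key-event counts with k-means and k = 11, as the abstract describes. The feature file, its column layout and the use of scikit-learn are assumptions for illustration, and the Gap-statistic selection of k is not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical input: one row per radio-event sequence, columns =
# [duration_s, count_of_key_event_1, count_of_key_event_2, ...]
X = np.loadtxt("sequence_features.csv", delimiter=",")

X_scaled = StandardScaler().fit_transform(X)      # put duration and counts on a comparable scale
kmeans = KMeans(n_clusters=11, n_init=10, random_state=0).fit(X_scaled)

# Per-cluster summaries, e.g. to link dense duration ranges to clusters
for k in range(kmeans.n_clusters):
    members = X[kmeans.labels_ == k]
    print(f"cluster {k}: {len(members)} sequences, mean duration {members[:, 0].mean():.1f} s")
```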
APA, Harvard, Vancouver, ISO, and other styles
38

Cunha, Rodrigo Lopes da. "Uplink video traffic determination and network optimization." Master's thesis, Universidade de Aveiro, 2017. http://hdl.handle.net/10773/23487.

Full text
Abstract:
Master's degree in Computer and Telematics Engineering
With the increase of live streaming platforms, service providers have been experiencing an increasing load on their networks. In order to provide better management of these networks, ensuring quality of service to all customers, it is necessary to prioritize video traffic using new concepts being introduced into the telecommunications field, such as Software-Defined Networking. Firstly, this dissertation presents a review of several topics related to video traffic determination, Software-Defined Networking and quality of service. Secondly, a monitoring application is presented, which aims to detect video traffic in order to help the prioritization of traffic and network optimization. The solution is validated through an implementation, focused on the system's performance and low latency, which tries to reply as quickly as possible with information about a given flow of network packets. Results related to this implementation are also presented.
APA, Harvard, Vancouver, ISO, and other styles
39

Trovini, Kevin L. "Analysis of network traffic and bandwidth capacity : load balancing and rightsizing of Wide Area Network links /." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1996. http://handle.dtic.mil/100.2/ADA322237.

Full text
Abstract:
Thesis (M.S. in Information Technology Management)--Naval Postgraduate School, September 1996.
"September 1996." Thesis advisor(s): S. Sridhar and Rex Buddenberg. Includes bibliographical references (p. 113-115). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
40

Wallentinsson, Emma. "Multiple Time Series Forecasting of Cellular Network Traffic." Thesis, Linköpings universitet, Statistik och maskininlärning, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-154868.

Full text
Abstract:
The mobile traffic in cellular networks is increasing at a steady rate as we go into a future where we are connected to the internet practically all the time in one way or another. To map the mobile traffic and the volume pressure on the base station during different time periods, it is useful to have the ability to predict the traffic volumes within cellular networks. The data in this work consists of 4G cellular traffic data spanning a coherent 7-day period, collected from cells in a moderately large city. The proposed method in this work is ARIMA modelling, in both its original form and with an extension where the coefficients of the ARIMA model are re-estimated by introducing some user characteristic variables. The re-estimated coefficients produce slightly lower forecast errors in general than an isolated ARIMA model where the volume forecasts depend only on time. This implies that the forecasts can be somewhat improved when we allow the influence of these variables to be a part of the model, and not only the time series itself.
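For reference, a plain ARIMA forecast of this kind can be produced with statsmodels as sketched below. The file name, hourly sampling, train/test split and the (2, 1, 1) order are purely illustrative assumptions and do not reflect the models fitted in the thesis.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical hourly traffic volumes for one cell over 7 days (168 observations)
series = pd.read_csv("cell_traffic.csv", index_col=0, parse_dates=True).squeeze("columns")

train, test = series[:-24], series[-24:]           # hold out the final day

model = ARIMA(train, order=(2, 1, 1))              # (p, d, q) chosen only for illustration
fitted = model.fit()

forecast = fitted.forecast(steps=len(test))
mae = np.abs(forecast.values - test.values).mean()
print(f"24-step-ahead MAE: {mae:.2f}")
```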
APA, Harvard, Vancouver, ISO, and other styles
41

Cassir, C. "A flow model for the analysis of transport network reliability." Thesis, University of Newcastle Upon Tyne, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.364764.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Darweesh, Turki H. "Capacity and performance analysis of a multi-user, mixed traffic GSM network." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape4/PQDD_0021/MQ48468.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Chevalier, Philippe B., and Lawrence M. Wein. "Scheduling Networks of Queues: Heavy Traffic Analysis of a Multistation Closed Network." Massachusetts Institute of Technology, Operations Research Center, 1990. http://hdl.handle.net/1721.1/5319.

Full text
Abstract:
We consider the problem of finding an optimal dynamic priority sequencing policy to maximize the mean throughput rate in a multistation, multiclass closed queueing network with general service time distributions and a general routing structure. Under balanced heavy loading conditions, this scheduling problem can be approximated by a control problem involving Brownian motion. Although a unique, closed form solution to the Brownian control problem is not derived, an analysis of the problem leads to an effective static sequencing policy, and to an approximate means of comparing the relative performance of arbitrary static policies. Three examples are given that illustrate the effectiveness of our procedure.
APA, Harvard, Vancouver, ISO, and other styles
44

Darweesh, Turki H. "Capacity and performance analysis of a multi-user, mixed traffic GSM network." Ottawa, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
45

Boppana, Neelima. "Simulation and analysis of network traffic for efficient and reliable information transfer." FIU Digital Commons, 2002. http://digitalcommons.fiu.edu/etd/1732.

Full text
Abstract:
With the growing commercial importance of the Internet and the development of new real-time, connection-oriented services like IP-telephony and electronic commerce, resilience is becoming a key issue in the design of IP-based networks. Two emerging technologies which can accomplish the task of efficient information transfer are Multiprotocol Label Switching (MPLS) and Differentiated Services. A main benefit of MPLS is the ability to introduce traffic-engineering concepts due to its connection-oriented characteristic. With MPLS it is possible to assign different paths for packets through the network. Differentiated Services divides traffic into different classes and treats them differently, especially when there is a shortage of network resources. In this thesis, a framework was proposed to integrate the above two technologies, and its performance in providing load balancing and improving QoS was evaluated. Simulation and analysis of this framework demonstrated that the combination of MPLS and Differentiated Services is a powerful tool for QoS provisioning in IP networks.
APA, Harvard, Vancouver, ISO, and other styles
46

Gan, Diane Elisabeth. "Performance analysis of an ATM network with multimedia traffic : a simulation study." Thesis, University of Greenwich, 1998. http://gala.gre.ac.uk/8653/.

Full text
Abstract:
Traffic and congestion control are important in enabling ATM networks to maintain the Quality of Service (QoS) required by end users. A Call Admission Control (CAC) strategy ensures that the network has sufficient resources available at the start of each call, but this does not prevent a traffic source from violating the negotiated contract. A policing strategy (User Parameter Control (UPC)) is also required to enforce the negotiated rates for a particular connection and to protect conforming users from network overload. The aim of this work is to investigate traffic policing and bandwidth management at the User to Network Interface (UNI). A policing function is proposed which is based on the leaky bucket (LB) and offers improved performance for both real-time (RT) traffic, such as speech and video, and non-real-time (non-RT) traffic, mainly data, by taking into account the QoS requirements. A video cell in violation of the negotiated bit rate causes the remainder of the slice to be discarded. This 'tail clipping' protects the decoder from damaged video slices. Speech cells are coded using a frequency domain coder, which places the most significant bits of a double speech sample into a high priority cell and the least significant bits into a low priority cell. In the case of congestion, the low priority cell can be discarded with little impact on the intelligibility of the received speech. However, data cells require loss-free delivery and are buffered rather than being discarded or tagged for subsequent deletion. This triple strategy is termed the super leaky bucket (S-LB). Separate queues for RT and non-RT traffic are also proposed at the multiplexer, with non-pre-emptive priority service for RT traffic if the queue exceeds a predetermined threshold. If the RT queue continues to grow beyond a second threshold, then all low priority cells (mainly speech) are discarded. This scheme protects non-RT traffic from being tagged and subsequently discarded, by queueing the cells and also by throttling back non-RT sources during periods of congestion. It also prevents the RT cells from being delayed excessively in the multiplexer queue. A simulation model has been designed and implemented to test the proposal. Realistic sources have been incorporated into the model to simulate the types of traffic which could be expected on an ATM network. The results show that the S-LB outperforms the standard LB for video cells. The number of cells discarded and the resulting number of damaged video slices are significantly reduced. Dual queues with cyclic service at the multiplexer also reduce the delays experienced by RT cells. The QoS for all categories of traffic is preserved.
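The policing mechanisms above all build on the generic leaky bucket. A minimal sketch of such a policer is given below; the rate and burst parameters are hypothetical, and the per-class actions of the proposed S-LB (tail clipping of video slices, discarding low-priority speech cells, buffering data) are only indicated in a comment rather than implemented.

```python
class LeakyBucketPolicer:
    """Generic leaky-bucket policer: a cell conforms if the bucket, drained at
    the negotiated rate, still has room for it; otherwise the cell is flagged
    as violating the traffic contract."""

    def __init__(self, rate_cells_per_s, burst_tolerance_cells):
        self.rate = rate_cells_per_s
        self.limit = burst_tolerance_cells
        self.level = 0.0
        self.last_arrival = 0.0

    def conforms(self, arrival_time_s):
        # Drain the bucket for the time elapsed since the previous cell
        elapsed = arrival_time_s - self.last_arrival
        self.level = max(0.0, self.level - elapsed * self.rate)
        self.last_arrival = arrival_time_s
        if self.level + 1 <= self.limit:
            self.level += 1
            return True      # conforming cell: forward unchanged
        return False         # violating cell: clip slice / discard / buffer, per traffic class

# Example: contract of 1000 cells/s with a burst tolerance of 10 cells,
# offered load of 2000 cells/s (one cell every 0.5 ms)
policer = LeakyBucketPolicer(rate_cells_per_s=1000, burst_tolerance_cells=10)
arrivals = [i * 0.0005 for i in range(30)]
violations = sum(not policer.conforms(t) for t in arrivals)
print(f"{violations} of {len(arrivals)} cells flagged as non-conforming")
```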
APA, Harvard, Vancouver, ISO, and other styles
47

El-Shehaly, Mai Hassan. "A Visualization Framework for SiLK Data exploration and Scan Detection." Thesis, Virginia Tech, 2009. http://hdl.handle.net/10919/34606.

Full text
Abstract:
Network packet traces, despite having a lot of noise, contain priceless information, especially for investigating security incidents or troubleshooting performance problems. However, given the gigabytes of flows crossing a typical medium-sized enterprise network every day, spotting malicious activity and analyzing trends in network behavior becomes a tedious task. Further, computational mechanisms for analyzing such data usually take substantial time to reach interesting patterns and often mislead the analyst into false positives, where benign traffic is identified as malicious, or false negatives, where malicious activity goes undetected. Therefore, the appropriate representation of network traffic data to the human user has recently become an issue of concern. Much of the focus, however, has been on visualizing TCP traffic alone, adapting visualization techniques for the data fields that are relevant to this protocol's traffic, rather than on the multivariate nature of network security data in general and the fact that forensic analysis, in order to be fast and effective, has to take into consideration different parameters for each protocol. In this thesis, we bring together two powerful tools from different areas of application: SiLK (System for Internet-Level Knowledge), for command-based network trace analysis, and ComVis, a generic information visualization tool. We integrate the power of both tools by enabling simplified interaction between them, using a simple GUI, for the purpose of visualizing network traces, characterizing interesting patterns, and fingerprinting related activity. To obtain realistic results, we applied the visualizations to anonymized packet traces from Lawrence Berkeley National Laboratory, captured during selected hours across three months. We used a sliding window approach in visually examining traces for two transport-layer protocols: ICMP and UDP. The main contribution of this research is a protocol-specific visualization framework for ICMP and UDP data. We explored relevant header fields and the visualizations that worked best for each of the two protocols separately. The resulting views led us to a number of guidelines that can be vital in the creation of "smart books" describing best practices in using visualization and interaction techniques to maintain network security, while creating visual fingerprints which were found to be unique to individual types of scanning activity. Our visualizations use a multiple-views approach that incorporates the power of two-dimensional scatter plots, histograms, parallel coordinates, and dynamic queries.
Master of Science
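As a small illustration of the multiple-views idea in the abstract above, the sketch below draws a parallel-coordinates plot of flow records with pandas and matplotlib. The input file, its columns and the 'label' class column are hypothetical, and SiLK is only assumed to have exported the records (e.g. with rwcut) beforehand.

```python
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

# Hypothetical per-flow records previously exported from SiLK
flows = pd.read_csv("udp_flows.csv")   # assumed columns: sport, dport, packets, bytes, duration, label

cols = ["sport", "dport", "packets", "bytes", "duration"]
scaled = flows.copy()
scaled[cols] = (flows[cols] - flows[cols].min()) / (flows[cols].max() - flows[cols].min())

parallel_coordinates(scaled, class_column="label", cols=cols, alpha=0.3)
plt.title("UDP flows: normal vs. suspected scan (illustrative)")
plt.show()
```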
APA, Harvard, Vancouver, ISO, and other styles
48

Durner, Raphael [Verfasser], Wolfgang [Akademischer Betreuer] Kellerer, Wolfgang [Gutachter] Kellerer, and Georg [Gutachter] Carle. "Fine-grained isolation and filtering of network traffic using SDN and NFV / Raphael Durner ; Gutachter: Wolfgang Kellerer, Georg Carle ; Betreuer: Wolfgang Kellerer." München : Universitätsbibliothek der TU München, 2020. http://d-nb.info/1241740143/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Maritz, Gert Stephanus Herman. "A network traffic analysis tool for the prediction of perceived VoIP call quality." Thesis, Stellenbosch : University of Stellenbosch, 2011. http://hdl.handle.net/10019.1/17897.

Full text
Abstract:
Thesis (MScEng)--University of Stellenbosch, 2011.
The perceived quality of Voice over Internet Protocol (VoIP) communication relies on the network which is used to transport voice packets between the end points. Variable network characteristics such as bandwidth, delay and loss are critical for real-time voice traffic and are not always guaranteed by networks. It is important for network service providers to determine the Quality of Service (QoS) they provide to their customers. The solution proposed here is to predict the perceived quality of a VoIP call in real time by using network statistics. The main objective of this thesis is to develop a network analysis tool which gathers meaningful statistics from network traffic. These statistics are then used to predict the perceived quality of a VoIP call. This study includes the investigation and deployment of two main components. Firstly, to determine call quality, it is necessary to extract the voice streams from captured network traffic. The extracted sound files can then be analysed by various VoIP quality models to determine the perceived quality of a VoIP call. The second component is the analysis of network characteristics. Loss, delay and jitter are all known to influence perceived call quality. These characteristics are, therefore, determined from the captured network traffic and compared with the call quality. Using the statistics obtained by the repeated comparison of call quality and network characteristics, a network-specific algorithm is generated. This Non-Intrusive Quality Prediction Algorithm (NIQPA) uses basic characteristics such as time of day, delay, loss and jitter to predict the quality of a real-time VoIP call quickly and non-intrusively. The realised algorithm will differ for each network, because every network is different. Prediction results can then be used to adapt either the network (more bandwidth, packet prioritising) or the voice stream (error correction, changing VoIP codecs) to assure QoS.
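The thesis's NIQPA is network-specific and derived from measured data, but a rough sense of how delay and loss map to perceived quality can be had from the simplified E-model (Cole–Rosenbluth parameterisation for G.711), sketched below as a generic reference point rather than as the method of the thesis.

```python
import math

def r_factor(one_way_delay_ms, loss_fraction):
    """Simplified E-model R-factor for G.711 from one-way delay and packet loss."""
    d = one_way_delay_ms
    delay_impairment = 0.024 * d + (0.11 * (d - 177.3) if d > 177.3 else 0.0)
    loss_impairment = 30.0 * math.log(1.0 + 15.0 * loss_fraction)
    return 94.2 - delay_impairment - loss_impairment

def mos(r):
    """Map an R-factor to an estimated Mean Opinion Score (1.0 .. 4.5)."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)

# Example: 150 ms one-way delay and 1% loss give a MOS of roughly 4.2
print(round(mos(r_factor(one_way_delay_ms=150, loss_fraction=0.01)), 2))
```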
APA, Harvard, Vancouver, ISO, and other styles
50

Kim, Young Yong. "A study on traffic and channel dynamics for wireless multimedia network performance analysis." 1999. Digital version accessible at http://wwwlib.umi.com/cr/utexas/main.

Full text
APA, Harvard, Vancouver, ISO, and other styles
