Dissertations / Theses on the topic 'INS data'

Consult the top 50 dissertations / theses for your research on the topic 'INS data.'

1

Adusumilli, Srujana. "Development of Statistical Learning Techniques for INS and GPS Data Fusion." University of Toledo / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1398772813.

2

Almshekhs, Rasha. "Data Modeling to Predict the Performance of Emerson Walk-in Freezer." University of Dayton / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1512143011742024.

3

Burman, Helén. "Calibration and orientation of airborne image and laser scanner data using GPS and INS." Doctoral thesis, KTH, Geodesy and Photogrammetry, 2000. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-2970.

Abstract:

GPS and INS measurements provide positions and attitudes that can be used for direct orientation of airborne sensors. This research improves the final results by performing simultaneous adjustments of GPS, INS and image or laser scanner data. The first part of this thesis deals with in-air initialisation of INS attitude using the GPS and INS velocity difference. This is an improvement over initialisation on the ground. Even better results can probably be obtained if accelerometer biases are modelled and horizontal accelerations made larger.

The second part of this thesis deals with GPS/INS orientation of aerial images. Theoretical investigations have been made to find the expected accuracy of stereo models and orthophotos oriented by GPS/INS. Direct orientation is compared to block triangulation. Triangulation can to a greater extent model systematic errors in image and GPS coordinates. Further, the precision in attitude after triangulation is better than that found in present INS performance. On the other hand, direct orientation can provide more effective data processing, since there is no need for finding or measuring tie points or ground control points. In strip triangulation, the number of ground control points can be reduced, since INS attitude measurements control error propagation through the strip. Even if consecutive images are strongly correlated in direct orientation, it is advisable to make a relative orientation to minimise stereo model deformations.

The third part of this thesis deals with matching laser scanner data. Both elevation and intensity data are used for matching, and the differences between overlapping strips are modelled as exterior orientation errors. Special attention is paid to determining the misalignment between the INS and the laser scanner coordinate systems. We recommend flying in four different directions over an area with elevation and/or intensity gradients. In this way, misalignment can be found without any ground control. This method can also be used with other imaging sensors, e.g. an aerial camera.

Keywords: Airborne, Camera, Laser scanner, GPS, INS, Adjustment, Matching.
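
As a concrete aside on the in-air initialisation idea described in the first part of this abstract, the sketch below shows one common coarse attitude initialisation: roll and pitch are levelled from the accelerometer-sensed specific force and heading is taken from the GPS velocity vector. It is a generic illustration under a quasi-unaccelerated-flight assumption, not the GPS/INS velocity-difference adjustment developed in the thesis, and all names and values are invented.

```python
import numpy as np

def coarse_attitude_from_gps_ins(f_b, v_ned):
    """Coarse in-flight attitude initialisation (illustrative only).

    f_b   : specific force sensed by the accelerometers in the body frame [m/s^2]
    v_ned : GPS velocity in the local north-east-down frame [m/s]
    Assumes roughly unaccelerated flight, so the specific force is dominated by gravity.
    Returns roll, pitch and heading in radians.
    """
    fx, fy, fz = f_b
    # Levelling: the gravity direction seen by the accelerometers gives roll and pitch.
    roll = np.arctan2(-fy, -fz)
    pitch = np.arctan2(fx, np.hypot(fy, fz))
    # Heading from the GPS ground velocity (course over ground), assuming small sideslip.
    heading = np.arctan2(v_ned[1], v_ned[0])
    return roll, pitch, heading

# Example: straight and level flight towards the north-east at about 50 m/s.
roll, pitch, heading = coarse_attitude_from_gps_ins(
    f_b=np.array([0.0, 0.0, -9.81]),           # gravity only
    v_ned=np.array([35.4, 35.4, 0.0]))
print(np.degrees([roll, pitch, heading]))      # -> roughly [0, 0, 45] degrees
```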

4

Singh, Mahendra, Stuart McNamee, Rick Navarro, Amy Fleishans, Louie Garcia, and Allen Khosrowabadi. "IMPROVING PERFORMANCE OF SINGLE OBJECT TRACKING RADAR WITH INTEGRATED GPS/INS." International Foundation for Telemetering, 2000. http://hdl.handle.net/10150/608266.

Abstract:
International Telemetering Conference Proceedings / October 23-26, 2000 / Town & Country Hotel and Conference Center, San Diego, California
A novel approach combines GPS receiver technology with micro-electromechanical inertial sensors to improve the performance of single object tracking radar. The approach enhances range safety by integrating an airborne Global Positioning System/Inertial Measurement Unit (GPS/IMU) with a C-band transponder to downlink time-space-position information (TSPI) via FPS-16 instrumentation radar. This improves current telemetry links and the Range Application Joint Program Office (RAJPO) data link for downlinking TSPI because of the inherent long-range advantage of the radar. The goal of the project is to provide distance-independent accuracy, and to demonstrate continuous 15-meter or better position accuracy over the entire flight envelope out to slant ranges of up to 1,000 km with at least 50 updates per second. This improves safety coverage for wide-area flight testing. It provides risk reduction for the Air Force Flight Test Center (AFFTC), Edwards Air Force Base, California, and other ranges planning TSPI system upgrades.
5

McIntyre, David S. "GPS effective data rate optimization with applications to integrated GPS/INS attitude and heading determination." Ohio : Ohio University, 1989. http://www.ohiolink.edu/etd/view.cgi?ohiou1182445154.

6

Persson, Joakim. "Bearbetning av GPS-data vid Flyg- och Systemprov." Thesis, Linköping University, Department of Electrical Engineering, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1166.

Abstract:

At Flight and Systems Test, Saab AB, post-processing software is used to process GPS data. New software named GrafNav has been purchased, and the purpose of this master's thesis was therefore threefold: to judge GrafNav's ability to estimate position, velocity and accuracy; to improve the estimates where needed; and to find one or several methods for estimating the position and velocity accuracy.

GrafNav was judged partly by comparison with the former post-processing software (PNAV) and partly by comparison with the airplane's inertial navigation system (INS). The experiments showed that GrafNav's ability to estimate position is comparable with PNAV's, but its capacity to estimate velocity is considerably worse; the velocity estimate is even noisier than the original velocity from the receiver. More work is needed to judge GrafNav's ability to estimate accuracy through its quality signals.

A few trials were made to improve the velocity estimate through Kalman filtering (Rauch-Tung-Striebel smoothing). The filtering was first performed using only the position data from GrafNav as measurements, and afterwards both position and velocity data from GrafNav were used. The results showed that the best estimate is obtained when only position data are used and that the estimate is in general comparable with PNAV's, although large deviations occur in conjunction with interruptions in the position data. Furthermore, more work is needed on smoothing with both position and velocity data, and on replacing the stationary Kalman filter with an adaptive one.

Finally, a method was developed to estimate the position precision and another to estimate the velocity accuracy. Both methods use the INS velocity to perform the estimation.
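
As a rough illustration of the smoothing evaluated above, the sketch below runs a constant-velocity Kalman filter over noisy one-dimensional position fixes and then applies a Rauch-Tung-Striebel backward pass to recover a smoothed velocity. It is a generic textbook formulation with made-up noise parameters, not Saab's or GrafNav's processing chain.

```python
import numpy as np

dt = 0.1                                   # sample interval [s]
F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity state transition
H = np.array([[1.0, 0.0]])                 # only position is measured
Q = np.diag([1e-4, 1e-2])                  # process noise (made-up values)
R = np.array([[4.0]])                      # position measurement noise [m^2]

# Synthetic truth (5 m/s) and noisy GPS-like position fixes.
t = np.arange(0, 20, dt)
z = 5.0 * t + np.random.normal(0.0, 2.0, t.size)

# Forward Kalman filter, keeping predicted and filtered estimates for the smoother.
x, P = np.zeros(2), np.eye(2) * 100.0
xs_pred, Ps_pred, xs_filt, Ps_filt = [], [], [], []
for zk in z:
    x_pred, P_pred = F @ x, F @ P @ F.T + Q
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x = x_pred + (K @ (np.array([zk]) - H @ x_pred)).ravel()
    P = (np.eye(2) - K @ H) @ P_pred
    xs_pred.append(x_pred); Ps_pred.append(P_pred)
    xs_filt.append(x); Ps_filt.append(P)

# Rauch-Tung-Striebel backward smoothing pass.
xs_smooth = [xs_filt[-1]]
for k in range(len(z) - 2, -1, -1):
    C = Ps_filt[k] @ F.T @ np.linalg.inv(Ps_pred[k + 1])
    xs_smooth.insert(0, xs_filt[k] + C @ (xs_smooth[0] - xs_pred[k + 1]))

print("smoothed velocity at mid-run:", xs_smooth[len(z) // 2][1])   # close to 5 m/s
```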

7

Pálenská, Markéta. "Návrh algoritmu pro fúzi dat navigačních systémů GPS a INS." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2013. http://www.nusl.cz/ntk/nusl-230495.

Abstract:
This master's thesis deals with the design of an extended Kalman filter algorithm that integrates data from an inertial navigation system (INS) and the Global Positioning System (GPS). The algorithm also includes the INS mechanization itself, which determines the aircraft's velocity, geographic position and attitude angles from accelerometer and gyroscope data. Because INS errors grow rapidly, the output is corrected with velocity and position values obtained from GPS. The resulting algorithm is implemented in the Simulink environment. The thesis also derives the individual state matrices of the extended Kalman filter.
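
To illustrate the predict/update idea behind such an INS/GPS filter, here is a deliberately simplified one-dimensional sketch: the accelerometer drives the prediction (the "mechanization") and occasional GPS position fixes correct the drifting solution. The thesis derives a full extended Kalman filter with attitude states in Simulink; the state model, noise values and names below are illustrative assumptions only.

```python
import numpy as np

def ins_gps_fusion_1d(acc_meas, gps_pos, dt, r_gps=9.0, q_acc=0.05):
    """Toy 1-D loosely coupled INS/GPS fusion (illustrative simplification).

    acc_meas : accelerometer samples [m/s^2], one per time step
    gps_pos  : GPS position fixes [m], or None when no fix is available
    Returns the fused position and velocity histories.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([0.5 * dt**2, dt])        # how acceleration enters the state
    H = np.array([[1.0, 0.0]])
    Q = q_acc * np.outer(B, B)             # process noise driven by accelerometer errors
    R = np.array([[r_gps]])

    x, P = np.zeros(2), np.eye(2)
    pos_hist, vel_hist = [], []
    for a, z in zip(acc_meas, gps_pos):
        # Predict: INS-style dead reckoning with the measured acceleration.
        x = F @ x + B * a
        P = F @ P @ F.T + Q
        # Update: correct the drifting INS solution whenever a GPS fix arrives.
        if z is not None:
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
            x = x + (K @ (np.array([z]) - H @ x)).ravel()
            P = (np.eye(2) - K @ H) @ P
        pos_hist.append(x[0]); vel_hist.append(x[1])
    return np.array(pos_hist), np.array(vel_hist)

# Constant 1 m/s^2 acceleration, 100 Hz IMU, one GPS fix per second.
dt, n = 0.01, 500
acc = 1.0 + np.random.normal(0, 0.1, n)
truth = 0.5 * (np.arange(n) * dt) ** 2
gps = [truth[k] + np.random.normal(0, 3) if k % 100 == 0 else None for k in range(n)]
pos, vel = ins_gps_fusion_1d(acc, gps, dt)
print("final position estimate:", pos[-1], "truth:", truth[-1])
```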
8

Vitkovskiy, Arseniy <1979>. "Memory hierarchy and data communication in heterogeneous reconfigurable SoCs." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2008. http://amsdottorato.unibo.it/1127/.

Abstract:
The miniaturization race in the hardware industry, aiming at a continuous increase of transistor density on a die, no longer brings corresponding application performance improvements. One of the most promising alternatives is to exploit the heterogeneous nature of common applications in hardware. Supported by reconfigurable computation, which has already proved its efficiency in accelerating data-intensive applications, this concept promises a breakthrough in contemporary technology development. Memory organization in such heterogeneous reconfigurable architectures becomes very critical. Two primary aspects introduce a sophisticated trade-off. On the one hand, a memory subsystem should provide a well-organized distributed data structure and guarantee the required data bandwidth. On the other hand, it should hide the heterogeneous hardware structure from the end user, in order to support feasible high-level programmability of the system. This thesis explores heterogeneous reconfigurable hardware architectures and presents possible solutions to cope with the problem of memory organization and data structure. Using the example of the MORPHEUS heterogeneous platform, the discussion follows the complete design cycle, starting from decision making and justification, until hardware realization. Particular emphasis is placed on the methods to support high system performance, meet application requirements, and provide a user-friendly programmer interface. As a result, the research introduces a complete heterogeneous platform enhanced with a hierarchical memory organization, which copes with its task by means of separating computation from communication, providing reconfigurable engines with computation and configuration data, and unifying heterogeneous computational devices using local storage buffers. It is distinguished from related solutions by its distributed data-flow organization, specifically engineered mechanisms to operate on data in local domains, a particular communication infrastructure based on Network-on-Chip, and thorough methods to prevent computation and communication stalls. In addition, a novel advanced technique to accelerate memory access was developed and implemented.
9

Moretti, Simone <1983>. "Data Processing and Fusion For Multi-Source Wireless Systems." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amsdottorato.unibo.it/7450/.

Abstract:
The constant evolution of telecommunication technologies is one fundamental aspect that characterizes the modern era. In the context of healthcare and security, different scenarios are characterized by the presence of multiple sources of information that can support a large number of innovative services. For example, in emergency scenarios, reliable transmission of heterogeneous information (health conditions, ambient and diagnostic videos) can be a valid support for managing first-aid operations. The presence of multiple sources of information requires careful communication management, especially in case of limited transmission resource availability. The objective of my Ph.D. activity is to develop new optimization techniques for multimedia communications, considering emergency scenarios characterized by wireless connectivity. Different criteria are defined in order to prioritize the available heterogeneous information before transmission. The proposed solutions are based on the modern concept of content/context awareness: the transmission parameters are optimized taking into account the informative content of the data and the general context in which the information sources are located. To this purpose, novel cross-layer adaptation strategies are proposed for multiple SVC videos delivered over a wireless channel. The objective is to optimize the resource allocation, dynamically adjusting the overall transmitted throughput to meet the actual available bandwidth. After introducing a realistic camera network, some numerical results obtained with the proposed techniques are shown. In addition, numerical simulations show the benefits, in terms of QoE, introduced by the proposed adaptive aggregation and transmission strategies applied in the context of emergency scenarios. The proposed solution is fully integrated in European research activities, including the FP7 ICT project CONCERTO. To implement, validate and demonstrate the functionalities of the proposed solutions, extensive transmission simulation campaigns were performed. Hence, the presented solutions are integrated in a common system simulator which has been developed within the CONCERTO project.
10

Giannuzzi, Fabio <1986>. "General-Purpose Data Acquisition Cards Based on FPGAs and High Speed Serial Protocols." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amsdottorato.unibo.it/7519/.

Abstract:
This thesis presents the results of my PhD Apprenticeship Program, carried out at the "Marposs S.p.a." firm, in the electronic research division, and at the Department of Physics and Astronomy of the University of Bologna, in the INFN electronics laboratories of the ATLAS group. During these three years of research, I worked on the development and realization of electronic boards dedicated to flexible data acquisition, designed to be applied in several contexts that need to share high-performance FPGAs and high-speed serial communications. The thesis describes the successful application of high-speed configurable electronic devices to two different fields, first in the particle-physics scenario and then in the industrial measurement of mechanical pieces, reaching the main goal of the PhD Apprenticeship Program. The common denominator is the development of high-speed electronics based on FPGAs for demanding data acquisition and data processing applications. The thesis describes the contribution to the luminosity monitor of the LHC at CERN and illustrates a multi-camera system developed for the automatic inspection of mechanical pieces made by a machine tool. The Apprenticeship Program allowed me to continue my academic course in parallel with my working activity, giving me the opportunity to finalize the project started during my internship and master's thesis. It also allowed me to achieve a higher level of education and training in two different contexts of excellence, the industrial company and academic research, where I concretely acquired the best technical knowledge. The chance of bringing together two distant worlds was the most exciting aspect of this PhD research. The worlds of industry and academic research face similar problems but with different points of view and goals. I had the opportunity to explore pure academic research, and also to apply the knowledge acquired in these years to industrial research.
11

Collina, Matteo <1984>. "Application Platforms for the Internet of Things: Theory, Architecture, Protocols, Data Formats, and Privacy." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amsdottorato.unibo.it/6251/.

Abstract:
The Internet of Things (IoT) is the next industrial revolution: we will interact naturally with real and virtual devices as a key part of our daily life. This technology shift is expected to be greater than the Web and Mobile combined. As extremely different technologies are needed to build connected devices, the Internet of Things field is a junction between electronics, telecommunications and software engineering. Internet of Things application development happens in silos, often using proprietary and closed communication protocols. There is a common belief that only if we can solve the interoperability problem can we have a real Internet of Things. After a deep analysis of the IoT protocols, we identified a set of primitives for IoT applications. We argue that each IoT protocol can be expressed in terms of those primitives, thus solving the interoperability problem at the application protocol level. Moreover, the primitives are network- and transport-independent and make no assumption in that regard. This dissertation presents our implementation of an IoT platform: the Ponte project. Privacy issues follow the rise of the Internet of Things: it is clear that the IoT must ensure resilience to attacks, data authentication, access control and client privacy. We argue that it is not possible to solve the privacy issue without solving the interoperability problem: enforcing privacy rules implies the need to limit and filter the data delivery process. However, filtering data requires knowledge of the format and the semantics of the data: after an analysis of the possible data formats and representations for the IoT, we identify JSON-LD and the Semantic Web as the best solution for IoT applications. Finally, this dissertation presents our approach to increase the throughput of filtering semantic data by a factor of ten.
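
As a small aside on the data-format conclusion above, the snippet below shows what a JSON-LD-annotated sensor reading can look like, built here as a plain Python dictionary. It borrows terms from the public W3C SOSA vocabulary purely for illustration; the identifiers and values are invented and none of this is taken from the Ponte project.

```python
import json

# A hypothetical IoT reading annotated with JSON-LD: the "@context" maps the short
# keys to vocabulary URIs, so a generic consumer can filter on meaning rather than
# on an application-specific schema.
reading = {
    "@context": {
        "sosa": "http://www.w3.org/ns/sosa/",
        "temperature": "sosa:hasSimpleResult",
        "observedAt": "sosa:resultTime",
        "sensor": "sosa:madeBySensor",
    },
    "@id": "urn:example:observation:42",
    "@type": "sosa:Observation",
    "sensor": "urn:example:thermometer:kitchen",
    "temperature": 21.5,
    "observedAt": "2014-03-01T10:15:00Z",
}

print(json.dumps(reading, indent=2))
```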
12

Trevisan, Riccardo <1987>. "Contactless Energy Transfer Techniques for Industrial Applications. Power and Data Transfer to Moving Parts." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amsdottorato.unibo.it/7503/.

Abstract:
Contactless energy transfer (CET) systems are gaining increasing interest in the automatic machinery industries. For this reason, circuit equivalent networks of CET systems considered in the literature are introduced with emphasis on their industrial applicability. The main operating principles and the required compensating networks, along with different topologies of power supplies optimised for wireless powering, are discussed. The analysis of the wireless transfer, at the maximum efficiency, of high power levels shows that, in the kHz range, highly coupled inductive links are needed and soft-switching power sources required. The employment of CET units in controlled systems requires combining a link for data communication with the wireless power channel. At low frequencies, capacitive and inductive couplings are integrated in a unique platform to implement the wireless data and power links, respectively. Differently, at UHF, an increased data channel transfer efficiency is made possible by exploiting auto-resonant structures, such as split-ring resonators instead of capacitances, one at each far-end side of the link. The design procedure of a power CET system, including the dc/ac converter, a rotary transformer and its windings, is discussed and the results presented. A different version of a WPT system, which involves multiple transmitting coils and a sliding receiver, is also presented. A low frequency RFID capacitive data link is then combined with the rotary CET unit to provide the temperature feedback of a controlled system, wherein the rectifying part of a passive tag is exploited to simultaneously power and read a temperature probe. Subsequently, a split-ring based near-field UHF data link is designed to ensure an improved temperature detection in terms of accuracy and resolution. The sensor readout is performed at the transmitter side by measuring the reflected power by the load rectifier.
13

Silva, Lidiana Mendes da. "Framework para interface e gerenciamento de bancos de dados." Universidade Federal de Uberlândia, 2009. https://repositorio.ufu.br/handle/123456789/14426.

Abstract:
The use of a database tied to a specific application does not allow the user to replace the database or to share information with other databases without rebuilding the entire application, which makes applying the solutions proposed in the literature non-trivial. This reduces interoperability among software from different suppliers of biomedical equipment. This work describes the design and development of a framework for the interface and management of biomedical signal databases. The system allows different databases to be used and the data to be stored for further analysis, and it was designed using object orientation, plug-ins and reflection, leading to applications capable of connecting to different databases. The developed system can store biomedical information through the use of adaptive systems and plug-ins, reducing the problems of incompatibility between databases and the difficulties of maintaining and integrating them. The experimental results showed that the framework is able to collect patients' biomedical data, which may be registered together with their clinical features, and to interact with different databases.
Master of Science
14

Mandurino, Claudia <1975>. "I dati metereologici per applicazioni energetiche e ambientali." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2009. http://amsdottorato.unibo.it/1451/.

Abstract:
Many energy and environmental evaluations need appropriate meteorological data as input to analysis and prediction software. In Italy, adequate meteorological data are often unavailable because, in many cases, they are incomplete, incorrect and also very expensive for a long-term analysis (which needs multi-year data sets). A possible solution to this problem is the use of a Typical Meteorological Year (TRY), generated for specific applications. Until now, TRYs have been created, using statistical criteria, only for the analysis of solar energy systems and for predicting the thermal performance of buildings, and they have also been applied to the study of photovoltaic (PV) plants, though not specifically created for this type of application. The present research has defined a methodology for the creation of TRYs for different applications. In particular, TRYs for environmental and wind-plant analysis have been created. This is the innovative aspect of this research, never explored before. In addition, the methodology for generating PV TRYs has been improved. The results are very good, and the TRYs generated for these applications are adequate to characterize the climatic conditions of a place over a long period and can be used for energy and environmental studies.
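
For readers unfamiliar with how typical months are usually selected, the sketch below computes a Finkelstein-Schafer-style statistic, the standard building block behind most TRY generation procedures; the variables, weights and selection criteria actually used in this thesis may differ, so treat this purely as background.

```python
import numpy as np

def finkelstein_schafer(candidate_month, long_term_months):
    """FS statistic for one weather variable (e.g. daily mean temperature).

    candidate_month  : daily values of the variable in one specific month of one year
    long_term_months : list of daily-value arrays for the same calendar month over all years
    Smaller values mean the candidate's distribution is closer to the long-term one.
    """
    pooled = np.sort(np.concatenate(long_term_months))
    cand = np.sort(np.asarray(candidate_month, dtype=float))
    # Empirical CDFs evaluated at the candidate's own sorted daily values.
    long_term_cdf = np.searchsorted(pooled, cand, side="right") / pooled.size
    candidate_cdf = np.arange(1, cand.size + 1) / cand.size
    return np.mean(np.abs(candidate_cdf - long_term_cdf))

# Toy example: pick, among 10 synthetic "Januaries", the one closest to the long-term CDF.
rng = np.random.default_rng(0)
januaries = [rng.normal(3.0 + rng.normal(0.0, 1.0), 2.0, 31) for _ in range(10)]
scores = [finkelstein_schafer(month, januaries) for month in januaries]
print("most typical January:", int(np.argmin(scores)))
```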
15

Nardini, Fabrizio <1985>. "Subject Specific Knee Joint Modelling Based on In Vivo Clinical Data." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amsdottorato.unibo.it/7588/.

Abstract:
The knee is one of the most complex and most studied joints of the musculoskeletal system, given its great importance in locomotion. Therefore, a deep understanding of its behaviour and of the role played by each of the structures composing it is fundamental. Knee joint models are an invaluable tool for understanding the behaviour of the knee, and their usefulness is proved in many fields such as surgical planning and prosthetic design. A huge number of models has been proposed in the literature focusing on the kinematic, kinetostatic and dynamic behaviour of the joint. Models can be based on in vivo or in vitro data. While kinematic and kinetostatic models can properly be defined on in vitro data, dynamic ones cannot. This discrepancy leads to a gap, a lack of coherence, between the usually in vitro defined kinematic and kinetostatic models and the study of the active structures of the joint. In order to achieve a comprehensive knee joint description in which the kinematic, kinetostatic and dynamic models coherently stem one from the other, a procedure is needed that allows reliable kinematic and kinetostatic models to be obtained in vivo. In the present dissertation, a procedure is defined that allows for the identification of a subject-specific knee joint model in vivo, starting from standard clinical data obtained with non-invasive techniques such as computed tomography (CT), magnetic resonance imaging (MRI) and fluoroscopy. This procedure leads to an accurate identification of the parameters needed to personalize the 5-5 parallel mechanism and its patello-femoral extension to a single patient, in order to accurately replicate the original motion of the knee joint. Furthermore, following the sequential approach to the modelling of the joint, a stiffness model of the knee is specialized to the specific subject's anatomy.
16

Pirini, Tommaso <1986>. "Distributed Information Systems and Data Mining in Self-Organizing Networks." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amsdottorato.unibo.it/7284/.

Abstract:
The diffusion of sensors and devices that generate and collect data is pervasive. The infrastructure that envelops the smart city has to react to contingent situations and to changes in the operating environment. At the same time, the complexity of a distributed system consisting of huge numbers of fixed and mobile components can generate unsustainable costs and latencies when robustness, scalability and reliability are ensured with middleware-type architectures. The distributed system must be able to self-organize and self-restore, adapting its operating strategies to optimize the use of resources and the overall efficiency. Peer-to-peer (P2P) systems can offer solutions to meet the requirements of managing, indexing, searching and analyzing data in scalable and self-organizing fashions, such as in cloud services and big data applications, to mention just two of the most strategic technologies for the coming years. In this thesis we present G-Grid, a multi-dimensional distributed data index able to efficiently execute arbitrary multi-attribute exact and range queries in decentralized P2P environments. G-Grid is a foundational structure and can be effectively used in a wide range of application environments, including grid computing, cloud and big data domains. Nevertheless, we propose some improvements on the basic structure, introducing a bit of randomness by using small-world networks, which are structures derived from social networks and show an almost uniform traffic distribution. This produces huge advantages in efficiency, cutting maintenance costs without losing efficacy. Experiments show how this new hybrid structure obtains the best performance in traffic distribution and is a good compromise for the overall performance with respect to the requirements of modern data systems.
17

Palmerini, Luca <1981>. "Data Mining in Clinical Practice for the Quantification of Motor Impairment in Parkinson's Disease." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2012. http://amsdottorato.unibo.it/4845/.

Abstract:
Advances in biomedical signal acquisition systems for motion analysis have led to low-cost and ubiquitous wearable sensors which can be used to record movement data in different settings. This implies the potential availability of large amounts of quantitative data. It is then crucial to identify and to extract the information of clinical relevance from the large amount of available data. This quantitative and objective information can be an important aid for clinical decision making. Data mining is the process of discovering such information in databases through data processing, selection of informative data, and identification of relevant patterns. The databases considered in this thesis store motion data from wearable sensors (specifically accelerometers) and clinical information (clinical data, scores, tests). The main goal of this thesis is to develop data mining tools which can provide quantitative information to the clinician in the field of movement disorders. This thesis will focus on motor impairment in Parkinson's disease (PD). Different databases related to Parkinson subjects in different stages of the disease were considered for this thesis. Each database is characterized by the data recorded during a specific motor task performed by different groups of subjects. The data mining techniques that were used in this thesis are feature selection (a technique which was used to find relevant information and to discard useless or redundant data), classification, clustering, and regression. The aims were to identify high-risk subjects for PD, characterize the differences between early PD subjects and healthy ones, characterize PD subtypes and automatically assess the severity of symptoms in the home setting.
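
To give a flavour of the quantitative information that can be extracted from a wearable accelerometer before any mining step, here is a small sketch that computes a few commonly used time- and frequency-domain features from one signal window; the feature list is generic and is not the set selected in this thesis.

```python
import numpy as np

def accel_window_features(acc, fs):
    """A few generic features of one accelerometer window (acc in m/s^2, fs in Hz)."""
    acc = np.asarray(acc, dtype=float)
    detrended = acc - acc.mean()
    rms = np.sqrt(np.mean(detrended ** 2))
    jerk_rms = np.sqrt(np.mean(np.diff(detrended) ** 2)) * fs   # rate of change of acceleration
    spectrum = np.abs(np.fft.rfft(detrended)) ** 2
    freqs = np.fft.rfftfreq(detrended.size, d=1.0 / fs)
    dominant_freq = freqs[np.argmax(spectrum[1:]) + 1]          # skip the DC bin
    power = spectrum / spectrum.sum()
    spectral_entropy = -np.sum(power * np.log2(power + 1e-12))
    return {"rms": rms, "jerk_rms": jerk_rms,
            "dominant_freq_hz": dominant_freq, "spectral_entropy": spectral_entropy}

# Synthetic 5-second window at 100 Hz: a 5 Hz tremor-like oscillation plus noise.
fs = 100.0
t = np.arange(0, 5, 1 / fs)
window = 0.3 * np.sin(2 * np.pi * 5.0 * t) + np.random.normal(0, 0.05, t.size)
print(accel_window_features(window, fs))
```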
18

Domeniconi, Giacomo <1986>. "Data and Text Mining Techniques for In-Domain and Cross-Domain Applications." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amsdottorato.unibo.it/7494/.

Abstract:
In the big data era, a vast amount of data has been generated in different domains, from social media to news feeds, from health care to genomic functionalities. When addressing a problem, we usually need to harness multiple disparate datasets. Data from different domains may follow different modalities, each of which has a different representation, distribution, scale and density. For example, text is usually represented as discrete sparse word count vectors, whereas an image is represented by pixel intensities, and so on. Nowadays, plenty of Data Mining and Machine Learning techniques are proposed in the literature, and they have already achieved significant success in many knowledge engineering areas, including classification, regression and clustering. However, some challenging issues remain when tackling a new problem: how should the problem be represented? Which approach is best among the huge number of possibilities? What information should be used in the Machine Learning task, and how should it be represented? Are there other domains from which to borrow knowledge? This dissertation proposes possible representation approaches for problems in different domains, from text mining to genomic analysis. In particular, one of the major contributions is a different way to represent a classical classification problem: instead of using an instance related to each object (a document, a gene, a social post, etc.) to be classified, it is proposed to use a pair of objects or an object-class pair, using the relationship between them as the label. The application of this approach is tested on both flat and hierarchical text categorization datasets, where it potentially allows the efficient addition of new categories during classification. Furthermore, the same idea is used to extract conversational threads from an unregulated pool of messages and also to classify the biomedical literature based on the genomic features treated.
19

Jílek, Tomáš. "Pokročilá navigace v heterogenních multirobotických systémech ve vnějším prostředí." Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2015. http://www.nusl.cz/ntk/nusl-234530.

Abstract:
The doctoral thesis discusses current options for the navigation of unmanned ground vehicles, with a focus on achieving close absolute agreement between the required motion trajectory and the obtained one. The current possibilities of key self-localization methods, such as global navigation satellite systems, inertial navigation systems, and odometry, are analyzed. The description of the navigation method, which allows centimeter-level accuracy of the required trajectory tracking to be achieved with the above-mentioned self-localization methods, forms the core of the thesis. The new navigation method was designed with regard to its very simple parameterization, respecting the limitations of the robot drive configuration used. Thus, after an appropriate parameterization of the navigation method, it can be applied to any drive configuration. The concept of the navigation method allows several self-localization systems and external navigation methods to be integrated and used simultaneously. This increases the overall robustness of the whole process of mobile robot navigation. The thesis also deals with the solution of cooperative convoying of heterogeneous mobile robots. The proposed algorithms were validated under real outdoor conditions in three different experiments.
20

Terranova, Nicholas <1986>. "Covariance Evaluation for Nuclear Data of Interest to the Reactivity Loss Estimation of the Jules Horowitz Material Testing Reactor." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amsdottorato.unibo.it/7550/.

Abstract:
In modern nuclear technology, the evaluation of integral reactor parameter uncertainties plays a crucial role for both economic and safety purposes. Target accuracies for operating and future nuclear facilities can be obtained only if the available simulation tools, such as computational platforms and nuclear data, are precise enough to produce reduced biases and uncertainties on the target reactor parameters. The quality of any engineering parameter uncertainty quantification analysis strongly depends on the reliability of the covariance information contained in the evaluated libraries. To properly propagate nuclear data uncertainty onto nuclear reactor parameters, science-based variance-covariance matrices are therefore indispensable. The present work is devoted to the generation of nuclear data covariance matrices for reactivity loss uncertainty estimations regarding the Jules Horowitz Reactor (JHR), a material testing facility under construction at CEA-Cadarache (France). During depletion, in fact, various fission products appear, and the related nuclear data are often barely known. In particular, the strenuous and worldwide recognized problem of generating fission product yield covariances has been the main focus. Present nuclear data libraries such as JEFF or ENDF/B do not have complete uncertainty information on fission yields, which is limited to variances only. The main goal of this work is to generate science-based and physically consistent fission yield covariances to be associated with the existing European library JEFF-3.1.1. Variance-covariance matrices have been evaluated using CONRAD (COde for Nuclear Reaction Analysis and Data assimilation, developed at CEA-Cadarache) for the most significant fissioning systems.
21

Lucchi, Francesca <1984>. "Reverse Engineering tools: development and experimentation of innovative methods for physical and geometrical data integration and post-processing." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amsdottorato.unibo.it/5837/.

Abstract:
In recent years, the use of Reverse Engineering systems has attracted considerable interest for a wide number of applications. Therefore, many research activities focus on the accuracy and precision of the acquired data and on improvements to the post-processing phase. In this context, this PhD thesis deals with the definition of two novel methods for data post-processing and for data fusion between physical and geometrical information. In particular, a technique has been defined for characterizing the error in the coordinates of 3D points acquired by an optical triangulation laser scanner, with the aim of identifying adequate correction arrays to apply under different acquisition parameters and operative conditions. The systematic error in the acquired data is thus compensated, in order to increase accuracy. Moreover, the definition of a 3D thermogram is examined. The geometrical information of an object and its thermal properties, coming from a thermographic inspection, are combined in order to obtain a temperature value for each recognizable point. Data acquired by the optical triangulation laser scanner are also used to normalize the temperature values and make the thermal data independent of the thermal camera's point of view.
22

Magagni, Matteo <1975>. "Valutazione delle performance degli scalpelli da perforazione: studi teorici, analisi dati e valutazioni tecnico-economiche." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2008. http://amsdottorato.unibo.it/858/.

Abstract:
The study aims to calculate an innovative numerical index for bit performance evaluation called the Bit Index (BI), applied to a new type of bit database named the Formation Drillability Catalogue (FDC). A dedicated research programme (developed by Eni E&P and the University of Bologna) studied a drilling model for bit performance evaluation named BI, derived from data recorded while drilling (bit records, master log, wireline log, etc.) and from dull bit evaluation. This index is calculated with data collected inside the FDC, a novel classification of Italian formations aimed at the geotechnical and geomechanical characterization and subdivision of the formations, called the Minimum Interval (MI). The FDC was conceived and prepared at the Eni E&P Division, and contains a large number of significant drilling parameters. Five wells have been identified inside the FDC and have been tested for bit performance evaluation. The values of BI are calculated for each bit run and are compared with the values of the cost per metre. The case study analyzes bits of the same type and diameter, run in the same formation. The BI methodology implemented on the MI classification of the FDC can consistently improve bit performance evaluation, and it helps to identify the best-performing bits. Moreover, the FDC turned out to be functional to the BI, since it discloses and organizes formation details that are not easily detectable or usable from bit records or master logs, allowing for targeted bit performance evaluations. At this stage of development, the BI methodology proved to be economic and reliable. The quality of bit performance analysis obtained with the BI also seems more effective than the traditional "quick look" analysis performed on bit records, or than pure cost-per-metre evaluation.
23

Lazarovich, Amir. "Invisible Ink : blockchain for data privacy." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/98626.

Abstract:
Thesis: S.M., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2015.
The problem of maintaining complete control over and transparency with regard to our digital identity is growing more urgent as our lives become more dependent on online and digital services. What once was rightfully ours and under our control is now spread among uncountable entities across many locations. We have built a platform that securely distributes encrypted user-sensitive data. It uses the Bitcoin blockchain to keep a trust-less audit trail for data interactions and to manage access to user data. Our platform offers advantages to both users and service providers. The user enjoys heightened transparency, control, and security of their personal data, while the service provider becomes much less vulnerable to single points of failure and breaches, which in turn decreases their exposure to information-security liability, thereby saving them money and protecting their brand. Our work extends an idea developed by the author and two collaborators: a peer-to-peer network that uses blockchain technology and off-blockchain storage to securely distribute sensitive data in a decentralized manner using a custom blockchain protocol. Our two main contributions are: 1. developing this platform and 2. analyzing its feasibility in real-world applications. This includes designing a protocol for data authentication that runs on an Internet-scale peer-to-peer network, abstracting complex interactions with encrypted data, building a dashboard for data auditing and management, as well as building servers and sample services that use this platform for testing and evaluation. This work has been supported by the MIT Communication Futures Program and the Digital Life Consortium.
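
The central storage idea, sensitive data kept encrypted off-blockchain while only an integrity-checking pointer and the access events go on-chain, can be sketched as below. The mock ledger, the toy cipher and all names are illustrative assumptions and do not reproduce the platform's actual protocol.

```python
import hashlib
import json
import os

off_chain_store = {}   # stands in for distributed off-blockchain storage
ledger = []            # stands in for blockchain transactions forming the audit trail

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    """Toy XOR 'encryption', used only to keep the example self-contained."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def store_user_data(user: str, payload: dict, key: bytes) -> str:
    """Encrypt the payload, keep it off-chain, record only its hash pointer on-chain."""
    ciphertext = xor_encrypt(json.dumps(payload).encode(), key)
    pointer = hashlib.sha256(ciphertext).hexdigest()
    off_chain_store[pointer] = ciphertext
    ledger.append({"user": user, "action": "store", "pointer": pointer})
    return pointer

def load_user_data(user: str, pointer: str, key: bytes) -> dict:
    """Fetch by pointer, verify integrity against the recorded hash, then decrypt."""
    ciphertext = off_chain_store[pointer]
    assert hashlib.sha256(ciphertext).hexdigest() == pointer, "tampered data"
    ledger.append({"user": user, "action": "read", "pointer": pointer})
    return json.loads(xor_encrypt(ciphertext, key).decode())

key = os.urandom(16)
ptr = store_user_data("alice", {"heart_rate": 72}, key)
print(load_user_data("alice", ptr, key), "| audit trail length:", len(ledger))
```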
24

Blagojevic, Rachel Venita. "Using data mining for digital ink recognition." Thesis, University of Auckland, 2011. http://hdl.handle.net/2292/7526.

Abstract:
Computational recognition of hand-drawn diagrams has come a long way but is still inadequate for general use. This research uses data mining techniques to improve the accuracy of recognition. We focus on text-shape division as a challenging example that benefits from this approach. Surprisingly, although text is a fundamental part of diagrams it has been largely ignored. A review of the literature will show that feature-based recognisers are ideal candidates for solving these types of problems. Such recognisers require a good feature set and a suitable algorithm. For recognition to be successful, the features fed into the algorithms must provide good distinguishing characteristics between classes of interest. While small feature sets have been reported, currently there is no extensive survey of existing features employed for sketch recognition. Such a survey could act as a library for algorithms to employ for a given problem in sketch recognition. In addition, while various algorithms have been tried, there has been no extensive study of algorithms to determine the most optimal fit for accurate text-shape dividers. To build our text-shape dividers, we have assembled a comprehensive library of ink features that can be used for sketch recognition problems and compiled a large repository of labelled sketch data. To collect this data we built our own tool, DataManager, which includes support for collecting and labelling sketches as well as automatically generating datasets. Using this feature library and data repository a systematic investigation and tuning of machine learning algorithms has identified the algorithms best suited to text-shape division. The extensive evaluation on diagrams from six different domains has shown that our resulting dividers, using LADTree and LogitBoost, are significantly more accurate than three existing dividers. To our knowledge, these algorithms have not been used for text-shape division before.
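
To illustrate the feature-based divider idea in code, the sketch below computes a few simple per-stroke ink features and trains a boosted-tree classifier on synthetic strokes. scikit-learn's GradientBoostingClassifier is used only as a convenient stand-in for the LADTree and LogitBoost learners evaluated in the thesis, and both the features and the synthetic data are invented.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def stroke_features(points):
    """Simple ink features for one stroke given as an (n, 2) array of x-y points."""
    points = np.asarray(points, dtype=float)
    diffs = np.diff(points, axis=0)
    path_len = np.linalg.norm(diffs, axis=1).sum()
    bbox = points.max(axis=0) - points.min(axis=0)
    diag = np.linalg.norm(bbox) + 1e-9
    angles = np.arctan2(diffs[:, 1], diffs[:, 0])
    curvature = np.abs(np.diff(angles)).sum()          # total absolute turning
    return [path_len, path_len / diag, curvature, bbox[0] / (bbox[1] + 1e-9)]

def synthetic_stroke(is_text, rng):
    """Toy data: text strokes are short and wiggly, shape strokes long and smooth."""
    n = int(rng.integers(10, 30))
    t = np.linspace(0, 1, n)
    if is_text:
        return np.c_[5 * t + rng.normal(0, 0.3, n), np.sin(12 * t) + rng.normal(0, 0.3, n)]
    return np.c_[60 * t, 20 * np.sin(2 * t) + rng.normal(0, 0.3, n)]

rng = np.random.default_rng(1)
labels = rng.integers(0, 2, 400)                        # 1 = text, 0 = shape
X = np.array([stroke_features(synthetic_stroke(y == 1, rng)) for y in labels])
clf = GradientBoostingClassifier().fit(X[:300], labels[:300])
print("held-out accuracy:", clf.score(X[300:], labels[300:]))
```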
25

Da, San Martino Giovanni <1979&gt. "Kernel Methods for Tree Structured Data." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2009. http://amsdottorato.unibo.it/1400/.

Abstract:
Machine learning comprises a series of techniques for the automatic extraction of meaningful information from large collections of noisy data. In many real-world applications, data is naturally represented in structured form. Since traditional methods in machine learning deal with vectorial information, they require an a priori form of preprocessing. Among all the learning techniques for dealing with structured data, kernel methods are recognized as having a strong theoretical background and as being effective approaches. They do not require an explicit vectorial representation of the data in terms of features, but rely on a measure of similarity between any pair of objects of a domain, the kernel function. Designing fast and good kernel functions is a challenging problem. In the case of tree-structured data two issues become relevant: kernels for trees should not be sparse and should be fast to compute. The sparsity problem arises when, given a dataset and a kernel function, most structures of the dataset are completely dissimilar to one another. In those cases the classifier has too little information for making correct predictions on unseen data. In fact, it tends to produce a discriminating function behaving like the nearest-neighbour rule. Sparsity is likely to arise for some standard tree kernel functions, such as the subtree and subset tree kernels, when they are applied to datasets with node labels belonging to a large domain. A second drawback of using tree kernels is the time complexity required both in the learning and classification phases. Such complexity can sometimes prevent the kernel from being applied in scenarios involving large amounts of data. This thesis proposes three contributions for resolving the above issues of kernels for trees. A first contribution aims at creating kernel functions which adapt to the statistical properties of the dataset, thus reducing their sparsity with respect to traditional tree kernel functions. Specifically, we propose to encode the input trees with an algorithm able to project the data onto a lower-dimensional space with the property that similar structures are mapped similarly. By building kernel functions on the lower-dimensional representation, we are able to perform inexact matchings between different inputs in the original space. A second contribution is the proposal of a novel kernel function based on the convolution kernel framework. Convolution kernels measure the similarity of two objects in terms of the similarities of their subparts. Most convolution kernels are based on counting the number of shared substructures, partially discarding information about their position in the original structure. The kernel function we propose is, instead, especially focused on this aspect. A third contribution is devoted to reducing the computational burden related to the calculation of a kernel function between a tree and a forest of trees, which is a typical operation in the classification phase and, for some algorithms, also in the learning phase. We propose a general methodology applicable to convolution kernels. Moreover, we show an instantiation of our technique when kernels such as the subtree and subset tree kernels are employed. In those cases, Directed Acyclic Graphs can be used to compactly represent shared substructures in different trees, thus reducing the computational burden and storage requirements.
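
To make the notion of a tree kernel concrete, here is a toy kernel that counts the complete subtrees shared by two trees written as nested tuples; it illustrates the general convolution-kernel idea of comparing substructures, but it is neither the subset tree kernel nor any of the kernels proposed in the thesis.

```python
from collections import Counter

def complete_subtrees(tree, bag=None):
    """Collect every complete subtree of a tree written as (label, child, child, ...)."""
    if bag is None:
        bag = Counter()
    if isinstance(tree, tuple):
        bag[tree] += 1
        for child in tree[1:]:
            complete_subtrees(child, bag)
    else:                       # a bare leaf label
        bag[tree] += 1
    return bag

def subtree_kernel(t1, t2):
    """Kernel value = dot product of the complete-subtree count vectors of the two trees."""
    b1, b2 = complete_subtrees(t1), complete_subtrees(t2)
    return sum(count * b2[sub] for sub, count in b1.items() if sub in b2)

# Two small parse-like trees sharing the ('NP', 'D', 'N') subtree and some leaves.
t1 = ('S', ('NP', 'D', 'N'), ('VP', 'V', ('NP', 'D', 'N')))
t2 = ('S', ('NP', 'D', 'N'), ('VP', 'V', 'N'))
print(subtree_kernel(t1, t2))
```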
26

Stejskal, Martin. "Polymorfní USB – I2S rozhraní." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2014. http://www.nusl.cz/ntk/nusl-220359.

Abstract:
The goal of this thesis was to become familiar with the 32-bit microcontrollers produced by ATMEL, with synchronous digital interfaces, and with the implementation of the USB protocol on an MCU, and, on the basis of this knowledge, to design a device that behaves as a USB audio device with fully adjustable parameters and digital output in the I2S format.
27

Vasques, Duarte Luís Sales. "O Statistical Data Warehouse do INE : uma observação participativa." Master's thesis, Instituto Superior de Economia e Gestão, 2019. http://hdl.handle.net/10400.5/19176.

Abstract:
Master's in Information Systems Management
We are living in the age of data, where data are compared to oil in value, and whoever holds the power of information holds enormous knowledge about how a society works in all its vicissitudes. This dissertation addresses what is probably the most important database a country may have: the database of its national statistics institute, also called a Statistical Data Warehouse. The aim was to study how the Statistical Data Warehouse architecture of the Portuguese National Statistics Institute is designed, how the official statistics production process works and how the two intertwine, seeking to clarify the dynamics between users, technology and processes. For this purpose, a qualitative study was conducted, with strong emphasis on participative observation, drawing on documentary analysis, interviews with experts and the researcher's own work on the studied object. The importance of technological advances and digitalization in the production of official statistics is duly noted, notwithstanding the financial and human-resource costs they carry. Also noteworthy is the importance of engaging the whole national statistics institute around the new statistics production environment, which shows promising results in terms of its adoption and development, without forgetting the training that its users require. This dissertation focuses on a niche type of Data Warehouse, demonstrating how its architecture works and how the official statistics production model connects with it, as well as with the key departments of the institute.
28

Carpineti, Samuele <1978>. "Data and behavioral contracts for web services." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2007. http://amsdottorato.unibo.it/368/.

Abstract:
The recent trend in Web services is fostering a computing scenario where loosely coupled parties interact in a distributed and dynamic environment. Such interactions are sequences of xml messages, and in order to assemble parties – either statically or dynamically – it is important to verify that the "contracts" of the parties are "compatible". The Web Service Description Language (wsdl) is a standard used for describing one-way (asynchronous) and request/response (synchronous) interactions. The Web Service Conversation Language (wscl) extends wsdl contracts by allowing the description of arbitrary, possibly cyclic sequences of messages exchanged between communicating parties. Unfortunately, neither wsdl nor wscl can effectively define a notion of compatibility, for the very simple reason that they do not provide any formal characterization of their contract languages. We define two contract languages for Web services. The first one is a data contract language and allows us to describe a Web service in terms of the messages (xml documents) that can be sent or received. The second one is a behavioral contract language and allows us to give an abstract definition of the Web service conversation protocol. Both these languages are equipped with a sort of "sub-typing" relation and, therefore, they are suitable for querying Web service repositories. In particular, a query for a service compatible with a given contract may safely return services with a "greater" contract.
29

Bujari, Armir <1984>. "Opportunistic Data Gathering and Dissemination in Urban Scenarios." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amsdottorato.unibo.it/6512/.

Abstract:
In the era of the Internet of Everything, a user with a handheld or wearable device equipped with sensing capability has become a producer as well as a consumer of information and services. The more powerful these devices get, the more likely it is that they will generate and share content locally, leading to the presence of distributed information sources and the diminishing role of centralized servers. In current practice, we rely on infrastructure acting as an intermediary, providing access to the data. However, infrastructure-based connectivity might not always be available or might not be the best alternative. Moreover, it is often the case that the data and the processes acting upon them are of local scope. A query about a nearby object, an information source, a process, an experience, an ability, etc. could be answered locally without reliance on infrastructure-based platforms. The data might have limited temporal validity, or be bound to a geographical area and/or to the social context in which the user is immersed. In this envisioned scenario, users could interact locally without the need for a central authority; hence the case for an infrastructure-less, provider-less platform. The data are owned by the users and consulted locally, as opposed to the current approach of making them available globally and keeping them online forever. From a technical viewpoint, this network resembles a Delay/Disruption Tolerant Network where consumers and producers might be spatially and temporally decoupled, exchanging information with each other in an ad hoc fashion. To this end, we propose some novel data gathering and dissemination strategies for use in urban-wide environments which do not rely on strict infrastructure mediation. While preserving the general aspects of our study and without loss of generality, we focus our attention on practical application scenarios which help us capture the characteristics of opportunistic communication networks.
APA, Harvard, Vancouver, ISO, and other styles
30

Consonni, Cristian. "The Dao of Wikipedia: Extracting Knowledge from the Structure of Wikilinks." Doctoral thesis, Università degli studi di Trento, 2019. http://hdl.handle.net/11572/243097.

Full text
Abstract:
Wikipedia is a multilingual encyclopedia written collaboratively by volunteers online, and it is now the largest, most visited encyclopedia in existence. Wikipedia has arisen through the self-organized collaboration of contributors, and since its launch in January 2001, its potential as a research resource has become apparent to scientists; its appeal lies in the fact that it strikes a middle ground between accurate, manually created, limited-coverage resources and noisy knowledge mined from the web. For this reason, Wikipedia's content has been exploited for a variety of applications: to build knowledge bases, to study interactions between users on the Internet, and to investigate social and cultural issues such as gender bias in history, or the spreading of information. Similarly to what happened for the Web at large, a structure has emerged from the collaborative creation of Wikipedia: its articles contain hundreds of millions of links. In Wikipedia parlance, these internal links are called wikilinks. These connections explain the topics being covered in articles and provide a way to navigate between different subjects, contextualizing the information and making additional information available. In this thesis, we argue that the information contained in the link structure of Wikipedia can be harnessed to gain useful insights by extracting it with dedicated algorithms. More prosaically, in this thesis we explore the link structure of Wikipedia with new methods. In the first part, we discuss in depth the characteristics of Wikipedia, and we describe the process and challenges we have faced to extract the network of links. Since Wikipedia is available in several language editions and its entire edit history is publicly available, we have extracted the wikilink network at various points in time, and we have performed data integration to improve its quality. In the second part, we show that the wikilink network can be effectively used to find the most relevant pages related to an article provided by the user. We introduce a novel algorithm, called CycleRank, that takes advantage of the link structure of Wikipedia by considering cycles of links, thus giving weight to both incoming and outgoing connections, to produce a ranking of articles with respect to an article chosen by the user. In the last part, we explore applications of CycleRank. First, we describe the Engineroom EU project, where we faced the challenge of finding the most relevant Wikipedia pages connected to the Wikipedia article about the Internet. Finally, we present another contribution that uses Wikipedia article accesses to estimate how information about diseases propagates. In conclusion, with this thesis we wanted to show that browsing Wikipedia's wikilinks is not only fascinating and serendipitous, but also an effective way to extract useful information that is latent in the user-generated encyclopedia.
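The CycleRank algorithm itself is defined in the thesis; the sketch below only illustrates the underlying intuition of crediting nodes that lie on bounded-length cycles through a chosen reference article, so that both incoming and outgoing links contribute. The toy graph is invented.

from collections import defaultdict

# Toy wikilink graph: article -> list of linked articles.
graph = {
    "Internet": ["Web", "TCP/IP"],
    "Web": ["Internet", "Browser"],
    "Browser": ["Web"],
    "TCP/IP": ["Internet"],
}

def cycles_through(graph, start, max_len=4):
    """Enumerate simple cycles of bounded length passing through `start`."""
    cycles, stack = [], [(start, [start])]
    while stack:
        node, path = stack.pop()
        for nxt in graph.get(node, []):
            if nxt == start and len(path) > 1:
                cycles.append(path[:])            # found a cycle back to start
            elif nxt not in path and len(path) < max_len:
                stack.append((nxt, path + [nxt]))
    return cycles

scores = defaultdict(int)
for cycle in cycles_through(graph, "Internet"):
    for node in cycle[1:]:
        scores[node] += 1                         # credit nodes on the cycle

print(sorted(scores.items(), key=lambda kv: -kv[1]))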
APA, Harvard, Vancouver, ISO, and other styles
31

Katti, Sachin (Katti Rajsekhar). "On attack correlation and the benefits of sharing IDS data." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/34363.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005.
Includes bibliographical references (p. 47-49).
This thesis presents the first wide-scale study of correlated attacks, i.e., attacks mounted by the same source IP against different networks. Using a large dataset from 1700 intrusion detection systems (IDSs), this thesis shows that correlated attacks are prevalent in the current Internet; 20% of all offending sources mount correlated attacks and they account for more than 40% of all the IDS alerts in our logs. Correlated attacks appear at different networks within a few minutes of each other, indicating the difficulty of warding off these attacks by occasional offline exchange of lists of malicious IP addresses. Furthermore, correlated attacks are highly targeted. The 1700 IDSs can be divided into small groups with 4-6 members that do not change with time; IDSs in the same group experience a large number of correlated attacks, while IDSs in different groups see almost no correlated attacks. These results have important implications for collaborative intrusion detection of common attackers. They show that collaborating IDSs need to exchange alert information in real time. Further, exchanging alerts among the few fixed IDSs in the same correlation group achieves almost the same benefits as collaborating with all IDSs, while dramatically reducing the overhead.
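A minimal sketch of how correlated attackers could be flagged from shared alert logs, under an invented log format: group alerts by source IP across IDSs and mark sources that hit several networks within a short window. It illustrates the notion of correlated attacks, not the analysis pipeline of the thesis.

from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical alert tuples: (ids_id, source_ip, timestamp).
alerts = [
    ("ids-A", "203.0.113.7", datetime(2005, 3, 1, 10, 0)),
    ("ids-B", "203.0.113.7", datetime(2005, 3, 1, 10, 3)),
    ("ids-C", "198.51.100.2", datetime(2005, 3, 1, 11, 0)),
]

WINDOW = timedelta(minutes=10)
by_source = defaultdict(list)
for ids_id, src, ts in alerts:
    by_source[src].append((ts, ids_id))

correlated = {}
for src, events in by_source.items():
    events.sort()
    first, last = events[0][0], events[-1][0]
    targets = {ids for _, ids in events}
    # A source attacking several distinct IDSs within the window is treated
    # as mounting a correlated attack.
    if len(targets) > 1 and last - first <= WINDOW:
        correlated[src] = sorted(targets)

print(correlated)  # {'203.0.113.7': ['ids-A', 'ids-B']}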
by Sachin Katti.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
32

Kulla-Mader, Julia. "Graphs via Ink: Understanding How the Amount of Non-data Ink in a Graph Affects Perception and Learning." Thesis, School of Information and Library Science, 2007. http://hdl.handle.net/1901/379.

Full text
Abstract:
There is much debate in the design community concerning how to make an easy-to-understand graph. While expert designers recommend including as little non-data ink as possible, there is little empirical evidence to support their arguments. Non-data ink refers to any ink on a graph that is not required to display the graph's data. As a result of the lack of strong evidence concerning how to design graphs, there is widespread confusion when it comes to best practices. This paper describes a preliminary study of graph perception and learning using an eye-tracking system at UNC's School of Information and Library Science.
APA, Harvard, Vancouver, ISO, and other styles
33

Fernandez, Maria del Mar, and Ignacio Porres. "An Evaluation of current IDS." Thesis, Linköping University, Department of Electrical Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-11635.

Full text
Abstract:

With the possibility of connecting several computers and networks came the need to protect data and machines from attackers (hackers) who try to obtain confidential information for their own benefit or simply to destroy or modify valuable information. This is where IDSs come in, helping users, companies and institutions detect when they are being compromised. This thesis covers two main parts: the first consists of an in-depth study of the world of IDSs and their environment. We conclude this part with some points where IDSs still need to be questioned and a set of desirable requirements for “the perfect” intrusion detection system, an adjective that can of course be debated. The second part of the thesis approaches the implementation of the most widely used open-source IDS: Snort. Some basic attacks are performed on the machine where Snort is installed in order to show prospective users what kind of protection it provides and how usable it is. A brief discussion of two of the main challenges in intrusion detection follows: analyzing large amounts of packets and handling encrypted traffic. Finally, conclusions are drawn about a safe computer environment, together with the suggestion that a skilled programmer should give Snort a friendlier interface for every kind of user and a built-in package including the web server, database and other libraries needed to run it properly with all its features.

APA, Harvard, Vancouver, ISO, and other styles
34

Gibson, David A., Newton B. Penrose, and Ralph B. Wade Jr. "HSTSS-DAC CUSTOM ICS IMPACT ON 2.75" MISSILE TELEMETRY." International Foundation for Telemetering, 1999. http://hdl.handle.net/10150/608742.

Full text
Abstract:
International Telemetering Conference Proceedings / October 25-28, 1999 / Riviera Hotel and Convention Center, Las Vegas, Nevada
We analyze several telemetry data acquisition systems to gauge the system impact of the denser custom ICs being developed under the HSTSS-DAC project. Our baseline is a telemetry system recently developed at Eglin AFB to support 16 analog input channels, signal conditioning and encoding for Pulse Code Modulation (PCM) using Commercial Off-the-Shelf (COTS) ICs. The data acquisition portion of the system occupies three double-sided, round circuit cards, each 2.3" in diameter. A comparable system using HSTSS-DAC custom ICs will occupy only one side of one card - a factor-of-six volume reduction compared to the COTS approach.
APA, Harvard, Vancouver, ISO, and other styles
35

Marques, Leonardo Fernando Félix. "Equity research - Verizon Communications Inc." Master's thesis, Instituto Superior de Economia e Gestão, 2019. http://hdl.handle.net/10400.5/19961.

Full text
Abstract:
Master's in Finance
Buy is my recommendation for Verizon Communications Inc. (Vz). Using a DCF method, I obtained a price target of USD 67.39 per share for the end of 2019, representing an upside potential of 19.86% compared with the last closing price of 2018, USD 56.22. The risk assessment estimates a medium risk for Verizon, with high barriers to entry and an oligopolistic market. The Wireless segment is the driver of Verizon and of the US telecom market: at the end of 2018 Verizon held a 34.05% share of the wireless market, making it the market leader, and a 17.86% share of the Wireline market. Despite the difference in market shares, Wireless revenues at the end of 2018 represented 162% of Wireline revenues. The Wireless segment is divided into Service, Wireless equipment and Other, with Service revenues being the main source of revenue for the segment. The telecom industry is preparing for the 5th generation (5G). This technology builds on the current 4G Long Term Evolution but offers higher speed and data quality, and it will be the key enabler of the Internet of Things (IoT), allowing more electronic devices, from household appliances to other equipment, to be connected to the internet. Verizon's strategy is to become the first company in the US to offer wireless and wireline 5G and to increase its revenues from Wireless equipment, which are expected to grow at a CAGR of 11% from 2018 to 2022F, mainly due to IoT.
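The reported upside follows directly from the price target and the 2018 closing price quoted in the abstract; a one-line check of the arithmetic in Python:

# Upside implied by the DCF price target, using the figures in the abstract.
price_target, last_close = 67.39, 56.22
upside = (price_target - last_close) / last_close
print(f"{upside:.2%}")   # 19.87%, matching the ~19.86% reported up to rounding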
APA, Harvard, Vancouver, ISO, and other styles
36

Navarin, Nicolò <1984>. "Learning with Kernels on Graphs: DAG-based kernels, data streams and RNA function prediction." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amsdottorato.unibo.it/6578/.

Full text
Abstract:
In many application domains data can be naturally represented as graphs. When the application of analytical solutions for a given problem is unfeasible, machine learning techniques could be a viable way to solve the problem. Classical machine learning techniques are defined for data represented in a vectorial form. Recently some of them have been extended to deal directly with structured data. Among those techniques, kernel methods have shown promising results both from the computational complexity and the predictive performance point of view. Kernel methods make it possible to avoid an explicit mapping into a vectorial form by relying on kernel functions, which informally are functions calculating a similarity measure between two entities. However, the definition of good kernels for graphs is a challenging problem because of the difficulty of finding a good tradeoff between computational complexity and expressiveness. Another problem we face is learning on data streams, where a potentially unbounded sequence of data is generated by some sources. There are three main contributions in this thesis. The first contribution is the definition of a new family of kernels for graphs based on Directed Acyclic Graphs (DAGs). We analyzed two kernels from this family, achieving state-of-the-art results from both the computational and the classification point of view on real-world datasets. The second contribution consists in making the application of learning algorithms to streams of graphs feasible. Moreover, we define a principled way to manage memory. The third contribution is the application of machine learning techniques for structured data to non-coding RNA function prediction. In this setting, the secondary structure is thought to carry relevant information. However, existing methods considering the secondary structure have prohibitively high computational complexity. We propose to apply kernel methods to this domain, obtaining state-of-the-art results.
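As a rough illustration of the graph-kernel idea (not the DAG-based kernels defined in the thesis), the sketch below scores the similarity of two small labelled graphs by counting shared one-step neighbourhood features, in the spirit of a single Weisfeiler-Lehman iteration; the graphs and labels are invented.

from collections import Counter

# Each graph: (node -> label, adjacency list). Invented toy data.
def neighbourhood_features(labels, adj):
    feats = Counter()
    for node, label in labels.items():
        neigh = sorted(labels[n] for n in adj.get(node, []))
        feats[(label, tuple(neigh))] += 1     # one-step "subtree" feature
    return feats

def kernel(g1, g2):
    f1, f2 = neighbourhood_features(*g1), neighbourhood_features(*g2)
    # Dot product in feature space: count matching features.
    return sum(f1[k] * f2[k] for k in f1)

g_a = ({"1": "C", "2": "O", "3": "C"}, {"1": ["2"], "2": ["1", "3"], "3": ["2"]})
g_b = ({"x": "C", "y": "O"}, {"x": ["y"], "y": ["x"]})

print(kernel(g_a, g_b))   # similarity score between the two toy graphs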
APA, Harvard, Vancouver, ISO, and other styles
37

Pasupathipillai, Sivam. "Modern Anomaly Detection: Benchmarking, Scalability and a Novel Approach." Doctoral thesis, Università degli studi di Trento, 2020. http://hdl.handle.net/11572/281952.

Full text
Abstract:
Anomaly detection consists in automatically detecting the most unusual elements in a data set. Anomaly detection applications emerge in domains such as computer security, system monitoring, fault detection, and wireless sensor networks. The strategic importance of detecting anomalies in these domains makes anomaly detection a critical data analysis task. Moreover, the contextual nature of anomalies, among other issues, makes anomaly detection a particularly challenging problem. Anomaly detection has received significant research attention in the last two decades. Much effort has been invested in the development of novel algorithms for anomaly detection. However, several open challenges still exist in the field. This thesis presents our contributions toward solving these challenges. These contributions include: a methodological survey of the recent literature, a novel benchmarking framework for anomaly detection algorithms, an approach for scaling anomaly detection techniques to massive data sets, and a novel anomaly detection algorithm inspired by the law of universal gravitation. Our methodological survey highlights open challenges in the field, and it provides some motivation for our other contributions. Our benchmarking framework, named BAD, tackles the problem of reliably assessing the accuracy of unsupervised anomaly detection algorithms. BAD leverages parallel and distributed computing to enable massive comparison studies and hyperparameter tuning tasks. The challenge of scaling unsupervised anomaly detection techniques to massive data sets is well known in the literature. In this context, our contributions are twofold: we investigate the trade-offs between a single-threaded implementation and a distributed approach considering price-performance metrics, and we propose an approach for scaling anomaly detection algorithms to arbitrary data volumes. Our results show that, when high scalability is required, our approach can handle arbitrarily large data sets without significantly compromising detection accuracy. We conclude our contributions by proposing a novel algorithm for anomaly detection, named Gravity. Gravity identifies anomalies by considering the attraction forces among massive data elements. Our evaluation shows that Gravity is competitive with other popular anomaly detection techniques on several benchmark data sets. Additionally, the properties of Gravity make it preferable in cases where hyperparameter tuning is challenging or unfeasible.
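The Gravity algorithm is specified in the thesis; the sketch below is only a loose illustration of the gravitational analogy, scoring each point by the total inverse-square attraction it receives from the rest of the data and flagging the least-attracted point as anomalous. Data and parameters are invented.

import math

# Toy data: most points cluster near the origin, one point is far away.
points = [(0.0, 0.1), (0.1, 0.0), (0.2, 0.1), (0.1, 0.2), (5.0, 5.0)]

def attraction(p, q, eps=1e-6):
    d2 = (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return 1.0 / (d2 + eps)          # unit masses, inverse-square law

scores = []
for i, p in enumerate(points):
    total = sum(attraction(p, q) for j, q in enumerate(points) if i != j)
    scores.append((total, p))

scores.sort()                         # lowest total attraction first
print("most anomalous point:", scores[0][1])   # (5.0, 5.0)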
APA, Harvard, Vancouver, ISO, and other styles
38

Svitek, Richard M. "SiGe BiCMOS RF ICs and Components for High Speed Wireless Data Networks." Diss., Virginia Tech, 2005. http://hdl.handle.net/10919/27375.

Full text
Abstract:
The advent of high-fT silicon CMOS/BiCMOS technologies has led to a dramatic upsurge in the research and development of radio and microwave frequency integrated circuits (ICs) in silicon. The integration of silicon-germanium heterojunction bipolar transistors (SiGe HBTs) into established "digital" CMOS processes has provided analog performance in silicon that is not only competitive with III-V compound-semiconductor technologies, but is also potentially lower in cost. Combined with improvements in silicon on-chip passives, such as high-Q metal-insulator-metal (MIM) capacitors and monolithic spiral inductors, these advanced RF CMOS and SiGe BiCMOS technologies have enabled complete silicon-based RF integrated circuit (RFIC) solutions for emerging wireless communication standards; indeed, both the analog and digital functionalities of an entire wireless system can now be combined in a single IC, also known as a wireless "system-on-a-chip" (SoC). This approach offers a number of potential benefits over multi-chip solutions, such as reductions of parasitics, size, power consumption, and bill-of-materials; however, a number of critical challenges must be considered in the integration of such SoC solutions. The focus of this research is the application of SiGe BiCMOS technology to on-going challenges in the development of receiver components for high speed wireless data networks. The research seeks to drive SoC integration by investigating circuit topologies that eliminate the need for off-chip components and are amenable to complete on-chip integration. The first part of this dissertation presents the design, fabrication, and measurement of a 5–6 GHz sub-harmonic direct-conversion-receiver (DCR) front-end, implemented in the IBM 0.5um 5HP SiGe BiCMOS process. The design consists of a fully-differential low-noise amplifier (LNA), a set of quadrature (I and Q) ×2 sub-harmonic mixers, and an LO conditioning chain. The front-end design provides a means to address performance limitations of the DCR architecture (such as DC-offsets, second-order distortion, and quadrature phase and amplitude imbalances) while enabling the investigation of high-frequency IC design complications, such as package parasitics and limited on-chip isolation. The receiver front-end has a measured conversion gain of ~18dB, an input second-order intercept point of +17.5dBm, and a noise figure of 7.2dB. The quadrature phase balance at the sub-harmonic mixer IF outputs was measured in the presence of digital switching noise; 90° balance was achieved, over a specific range of LO power levels, with a square wave noise signal injected onto the mixer DC supply rails. The susceptibility of receiver I/Q balance to mixed-signal effects in a SoC environment motivates the second part of this dissertation: the design of a phase and amplitude tunable, quadrature voltage-controlled oscillator (QVCO) for the on-chip synthesis of quadrature signals. The QVCO design, implemented in the Freescale (formerly Motorola) 0.18um SiGe:C RF BiCMOS process, uses two identical, differential LC-tank VCOs connected such that the two oscillator outputs lock in quadrature to the same frequency. The QVCO designs proposed in this work provide the additional feature of phase tunability, i.e. the relative phase balance between the quadrature outputs can be adjusted dynamically, offering a simulated tuning range of ~90°±10°; in addition, a variable-gain buffer/amplifier circuit that provides amplitude tunability is introduced.
One potential application of the QVCO is in a self-correcting RF receiver architecture, which, using the phase and amplitude tunability of the QVCO, could dynamically adjust the IF output quadrature phase and amplitude balance, in near real time, in the analog domain. The need for high-quality inductors in both the DCR and QVCO designs motivates the third aspect of this dissertation: the characterization and modeling of on-chip spiral inductors with patterned ground shields, which are placed between the inductor coil and the underlying substrate in order to improve the inductor quality factor (Q). The shield prevents the coupling of energy away from the inductor spiral to the typically lossy Si substrate, while the patterning disrupts the flow of induced image currents within the shield. The experimental effort includes the fabrication and testing of a range of inductors with different values, and different types of patterned ground shields in different materials. Two-port measurements show a ~50% improvement in peak Q and a ~20% degradation in self-resonant frequency for inductors with shields. From the measured results, a scalable lumped-element model is developed for the rapid simulation of spiral inductors with and without patterned ground shields. The knowledge gained from this work can be combined and applied to a range of future RF/wireless SoC applications. The designs developed in this dissertation can be ported to other technologies (e.g. RF CMOS) and scaled to other frequency ranges (e.g. the 24 GHz ISM band) to provide solutions for emerging applications that require low-cost, low-power RF/microwave circuit implementations.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
39

Donini, Elena. "Advanced methods for simulation-based performance assessment and analysis of radar sounder data." Doctoral thesis, Università degli studi di Trento, 2021. http://hdl.handle.net/11572/304147.

Full text
Abstract:
Radar sounders (RSs) are active sensors that transmit electromagnetic (EM) waves toward the nadir at low frequencies, in the high-frequency (HF) and very-high-frequency (VHF) ranges, with a relatively wide bandwidth. Such a signal penetrates the surface and propagates in the subsurface, interacting with dielectric interfaces. This interaction yields backscattered echoes detectable by the antenna, which are coherently summed and stored in radargrams. RSs are used for planetary exploration and Earth observation for their value in investigating subsurface geological structures and processes, which reveal the past geomorphological history and possible future evolution. RS instruments have several parameter configurations that have to be designed to achieve the mission science goals. On Mars, visual analyses of radargrams revealed icy layered deposits and evidence of liquid water at the poles. On Earth, RSs revealed relevant structures and processes in the cryosphere and in arid areas that help monitor subsurface geological evolution, which is critical for climate-change studies. Despite these valuable results, visual analysis is subjective and not feasible for processing large amounts of data. Therefore, a need emerges for automatic methods that extract fast and reliable information from radargrams. The thesis addresses two main open issues of the radar-sounding literature: i) assessing target detectability in simulated orbiting radargrams to guide the design of RS instruments, and ii) designing automatic methods for information extraction from RS data. The RS design is based on assessing the performance of a given instrument parameter configuration in achieving the mission science goals and detecting critical targets. The assessment guides the parameter selection by determining the appropriate trade-off between the achievable performance and the technical limitations. We propose assessing the detectability of subsurface targets (e.g., englacial layering and the basal interface) from satellite radar sounders with novel performance metrics. This performance assessment strategy can be applied to guide the design of the SNR budget at the surface, which can further support the selection of the main EORS instrument parameters. The second contribution is the design of automatic methods for analyzing radargrams based on fuzzy logic and deep learning. The first method aims at identifying buried cavities, such as lava tubes, exploiting their geometric and EM models. A fuzzy system built on these models detects candidate reflections from the surface and the lava-tube boundary. The second and third proposed methods are based on deep learning, as it has shown groundbreaking results in several applications. We contribute an automatic technique for analyzing radargrams acquired in icy areas to investigate the basal layer. To this end, radargrams are segmented with a deep learning network into the classes used in the literature, including englacial layers, bedrock, echo-free zone (EFZ) and thermal noise, as well as the new classes of basal ice and signal perturbation. The third method proposes an unsupervised segmentation of radargrams with deep learning for detecting subsurface features. Qualitative and quantitative experimental results obtained on planetary and terrestrial radargrams confirm the effectiveness of the proposed methods, which investigate new subsurface targets and allow an improvement in accuracy when compared to other state-of-the-art methods.
APA, Harvard, Vancouver, ISO, and other styles
40

Prestberg, Lars. "Automatisk sammanställning av mätbara data : Intrusion detection system." Thesis, Mittuniversitetet, Avdelningen för informations- och kommunikationssystem, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-28254.

Full text
Abstract:
The project is carried out at IT-säkerhetsbolaget i Skandinavien AB. Part of their offering is a Cyberlarm (cyber alarm), parts of which are to be automated in order to present information to customers in a smoother way. The purpose is to offer customers more value for money, which at the same time provides an extra selling point for the product. The Cyberlarm is, simply put, an intrusion detection system that reads traffic on a network and alerts the operator if something suspicious happens on the network. From the database in which all information is stored, graphs and tables are created as an overview of the network; this information is to be sent to customers on a weekly basis, which is done through a Python script and a number of open-source programs. The results show that the automated way of performing the task takes 5.5% of the time it took to create a delivered graph page with the original method. Compared with the proposed manual method, for three sensors, the automated method took 11% of the time. When only the PDF generation was performed, the automated method took 82.1% and 69.7% of the manual time for one and three sensors, respectively.
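A minimal sketch of the kind of automation described above, reading alert counts from a database and writing a chart to a PDF with open-source Python libraries; the schema, sample data and file names are hypothetical and are not taken from the thesis.

import sqlite3
import matplotlib
matplotlib.use("Agg")                      # no display needed on a server
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages

# Hypothetical schema: alerts(day TEXT, count INTEGER); sample data inserted
# here only so the sketch runs end to end.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE alerts (day TEXT, count INTEGER)")
conn.executemany("INSERT INTO alerts VALUES (?, ?)",
                 [("Mon", 12), ("Tue", 7), ("Wed", 19)])
rows = conn.execute("SELECT day, count FROM alerts ORDER BY rowid").fetchall()
conn.close()

days, counts = zip(*rows)
with PdfPages("weekly_report.pdf") as pdf:
    fig, ax = plt.subplots()
    ax.bar(days, counts)
    ax.set_title("Alerts per day")
    ax.set_xlabel("Day")
    ax.set_ylabel("Number of alerts")
    pdf.savefig(fig)                       # one graph page in the delivered PDF
    plt.close(fig)
print("wrote weekly_report.pdf")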
APA, Harvard, Vancouver, ISO, and other styles
41

Abdulrazzaq, Mohammed, and Yuan Wei. "Industrial Control System (ICS) Network Asset Identification and Risk Management." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-38198.

Full text
Abstract:
Set against the background of Industry 4.0, industrial control systems (ICSs) accelerate and enrich the upgrade of existing production infrastructure. To make infrastructures “smart”, large parts of manual operations have been automated in this upgrade and, more importantly, previously isolated controlled processes have been connected through the ICS. This has also raised issues in asset management and security concerns. As the starting point of securing an ICS, asset identification is first addressed by exploring the definition of assets in the ICS domain, since documentation is insufficient, followed by an introduction of ICS constituents and their status in the whole network. Once the definition is clear, a well-received categorization of assets in the ICS domain is introduced, mapping out their important attributes and their significance in relation to the core service they perform. To effectively tackle the ever-increasing number of assets, identification approaches are compared, and a case study was performed to test the effectiveness of two open-source software tools. Beyond identification, this thesis describes a framework for efficient asset management based on the CRR. The four cyclic modules proposed give an overview of how assets should be managed according to their dynamics in the production environment.
APA, Harvard, Vancouver, ISO, and other styles
42

Topham, Debra Ann. "Decentralised wireless data dissemination for vehicle-to-vehicle communications." Thesis, University of Birmingham, 2012. http://etheses.bham.ac.uk//id/eprint/3335/.

Full text
Abstract:
This thesis is concerned with inter-vehicle communications supporting the deployment of future safety-related applications. Through use case analysis of the specific communications requirements of safety related and traffic efficiency applications, a data dissemination framework is proposed that is able to meet the various message delivery requirements. More specifically, this thesis focuses on the subset of the proposed framework, which provides geocasting, i.e. addressing a geographical area on the road network, and local zone connectivity, providing neighbour awareness, for safety related applications. The enabling communications technology for inter-vehicle communications based on IEEE 802.11 wireless local area network devices and the associated lack of reliability it presents for the distribution of safety messages in broadcast mode, form the main topic of this thesis. A dissemination scheme for safety related inter-vehicular communication applications, using realistic vehicular traffic patterns, is proposed, implemented and evaluated to demonstrate mechanisms for efficient, reliable and timely delivery of safety messages over an unreliable channel access scheme. The original contribution of this thesis is to propose a novel data dissemination protocol for vehicular environments, capable of simultaneously achieving significant economy of messaging, whilst maintaining near 100% reliable message delivery in a timely manner for a wide variety of highway traffic flow scenarios, ranging from sparse, fragmented networks to dense, congested road networks. This is achieved through increased protocol complexity in inferring and tracking each vehicular node’s local environment, coupled with implementing adaptation to both local data traffic intensity and vehicular density. Adaptivity is achieved through creating and employing an empirical channel access delay model and embedding the stochastic delay distribution in decisions made at the network layer; this method of adaptivity is novel in itself. Moreover, unnecessary retransmissions arising from the inherent uncertainty of the wireless medium are suppressed through a novel three-step mechanism.
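As a generic illustration of retransmission suppression (not the three-step mechanism or the adaptation scheme proposed in the thesis), the sketch below shows a counter-based rebroadcast decision whose threshold depends on the perceived neighbour density, plus a random assessment delay scaled by an estimated channel access delay; all values are invented.

import random

random.seed(1)

def should_rebroadcast(copies_overheard: int, neighbour_count: int) -> bool:
    """Counter-based suppression: the denser the neighbourhood, the fewer
    rebroadcasts are needed, so the suppression threshold is lowered."""
    threshold = 3 if neighbour_count < 10 else 1
    return copies_overheard < threshold

def assessment_delay(channel_delay_ms: float) -> float:
    # Random waiting time scaled by the locally estimated channel access delay.
    return random.uniform(0.0, 2.0) * channel_delay_ms

# A sparse vehicle that heard the message once rebroadcasts after its delay;
# a vehicle in a dense platoon that already heard two copies stays silent.
print(should_rebroadcast(copies_overheard=1, neighbour_count=4),
      round(assessment_delay(channel_delay_ms=5.0), 2))
print(should_rebroadcast(copies_overheard=2, neighbour_count=25))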
APA, Harvard, Vancouver, ISO, and other styles
43

Alberi, Matteo. "La PCA per la riduzione dei dati di SPHERE IFS." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amslaurea.unibo.it/6563/.

Full text
Abstract:
The first chapter of this thesis presents an overview of the main exoplanet detection methods: the radial velocity method, the astrometric method, the pulsar timing method, the transit method, the microlensing method and, finally, the direct imaging method, which is examined in depth in the following chapters. The second chapter presents the principles of diffraction, shows how starlight can be attenuated with the use of a coronagraph, describes the aberration phenomena caused by the instrumentation and by the distorting effects of the atmosphere that give rise to the so-called speckles, and then presents modern technical solutions such as active and adaptive optics, which have allowed a considerable improvement in the quality of observations. The third chapter illustrates the differential imaging techniques that make it possible to effectively remove speckles and improve image contrast. The fourth chapter presents a mathematical description of Principal Component Analysis, the statistical method used for the reduction of the astronomical data. The fifth chapter is dedicated to SPHERE, the instrument designed for the Very Large Telescope (VLT); in particular, it describes its IFS spectrograph, with which the data analyzed in this thesis were obtained during the test phase. The sixth chapter presents the data reduction procedures and the application of the IDL LA_SVD algorithm, which applies Principal Component Analysis and, analogously to the differential imaging methods discussed earlier, made it possible to remove the speckles and improve the contrast of the images. The concluding part discusses the results.
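A compact sketch of PCA-based speckle subtraction on a cube of frames, in the spirit of the reduction described above; the actual pipeline applies the IDL LA_SVD routine to SPHERE IFS test data, whereas the synthetic cube and component count below are invented.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic data cube: n_frames images of 32x32 pixels sharing a quasi-static
# speckle pattern plus frame-dependent noise.
n_frames, ny, nx = 20, 32, 32
speckles = rng.normal(size=(ny, nx))
cube = np.stack([speckles + 0.1 * rng.normal(size=(ny, nx))
                 for _ in range(n_frames)])

X = cube.reshape(n_frames, -1)
Xc = X - X.mean(axis=0)                  # centre the data

# Principal components via SVD (the role LA_SVD plays in the IDL pipeline).
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 5                                    # number of components to subtract
model = (Xc @ Vt[:k].T) @ Vt[:k]         # projection onto the first k PCs
residual = (Xc - model).reshape(n_frames, ny, nx)

print("residual rms:", residual.std())   # speckle pattern largely removed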
APA, Harvard, Vancouver, ISO, and other styles
44

Metz, Dirk. "Remotely detecting submarine volcanic activity at Monowai : insights from International Monitoring System hydroacoustic data." Thesis, University of Oxford, 2018. https://ora.ox.ac.uk/objects/uuid:996bdece-49b6-495c-b758-33ec105f37a5.

Full text
Abstract:
Monowai is an active submarine volcanic center in the Kermadec Arc, Southwest Pacific Ocean. We show, using cross-correlation and time-difference-of-arrival techniques, that low-frequency underwater sound waves from the volcano travel in the Sound Fixing and Ranging (SOFAR) channel and can be detected by bottom-moored hydrophone arrays of the International Monitoring System (IMS), a global sensor network maintained by the Comprehensive Nuclear-Test-Ban Treaty Organization. Hydroacoustic phases associated with the May 2011 eruption at Monowai are identified in the record of the IMS station at Ascension Island, Equatorial Atlantic Ocean. The source-receiver distance of ~15,800 km is the furthest documented range of any naturally occurring underwater signal ever observed. Our observations are consistent with results from transmission loss modeling, which suggest that acoustic propagation at southern latitudes is facilitated by the anomalous temperature regime of the Antarctic Circumpolar Current. Subsequently, we examine the 3.5-year record of the IMS hydrophone station near Juan Fernández Islands, Southeast Pacific Ocean, for volcanic activity at Monowai. Density-based clustering of arrivals during the time periods when data is available, i.e. from July 2003 to March 2004, and between April 2014 and January 2017, reveals 82 discrete episodes that are spaced days to weeks apart, typically ranging from a few hours to days in length. The resolution of the hydrophone data for seismic events at the volcano is estimated at 2.2 mb and exceeds regional broadband networks by one order of magnitude. Considering the results and techniques developed in the study of Monowai, we investigate the 2014 submarine eruption of Ahyi volcano in the Northern Mariana Islands. Acoustic phases of the 15-day episode are identified in the record of an IMS hydrophone array located at Wake Island in the northwestern Pacific Ocean. Explosive volcanic activity occurred in two bursts, accompanied by a decrease in low-frequency arrivals that is interpreted as a shift in signal source parameters. Acoustic energy released during the event is on the order of 9.7 × 10^13 J.
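A small sketch of the cross-correlation step used in such hydroacoustic analyses: the relative arrival time of a common signal on two hydrophone channels is estimated from the lag of the correlation peak. The synthetic signals, noise level and sampling rate are invented.

import numpy as np

fs = 250.0                                  # Hz, sampling rate (assumed)
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(42)

pulse = np.exp(-((t - 20) ** 2) / 0.5) * np.sin(2 * np.pi * 8 * t)
true_delay = 3.2                            # seconds between the two channels

h1 = pulse + 0.05 * rng.normal(size=t.size)
h2 = np.roll(pulse, int(true_delay * fs)) + 0.05 * rng.normal(size=t.size)

# Cross-correlate and read the time difference of arrival from the peak lag.
xcorr = np.correlate(h2 - h2.mean(), h1 - h1.mean(), mode="full")
lags = np.arange(-t.size + 1, t.size)
tdoa = lags[np.argmax(xcorr)] / fs

print(f"estimated delay: {tdoa:.2f} s")     # ~3.20 s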
APA, Harvard, Vancouver, ISO, and other styles
45

Kim, Taekyu. "Ontology/Data Engineering Based Distributed Simulation Over Service Oriented Architecture For Network Behavior Analysis." Diss., The University of Arizona, 2008. http://hdl.handle.net/10150/193678.

Full text
Abstract:
As network use increases rapidly and high quality of service (QoS) is required, efficient network management methods become important. Many previous studies and commercial network management tools such as tcpdump, Ethereal, and other applications have weaknesses: limited file sizes, command-line execution, and large memory and computational power requirements. Researchers struggle to find fast and effective analysis methods to save maintenance budgets and recover from systemic problems caused by the rapid increase of network traffic or by intrusions. The main objective of this study is to propose an approach for analyzing a large amount of network behavior quickly and efficiently. We study a network analysis system based on an ontology/data engineering methodology. We design a behavior model, which represents network traffic activity and network packet information such as IP addresses, protocols, and packet length, based on the System Entity Structure (SES) methodology. A significant characteristic of SES, its hierarchical tree structure, enables systems to access network packet information quickly and efficiently. Presenting an automated system design is a secondary purpose of this study. Our approach shows adaptive awareness of pragmatic frames (contexts) and yields a network traffic analysis system with high throughput and a fast response time that is ready to respond to user applications. We build models and run simulations for specific purposes, i.e., analyzing network protocol use, evaluating network throughput, and examining intrusion detection algorithms, based on the Discrete Event System Specification (DEVS) formalism. To study speed-up, we apply a web-based distributed simulation methodology. DEVS/Service Oriented Architecture (DEVS/SOA) facilitates deploying workloads across multiple servers and consequently increases overall system performance. In addition to their scalability limitations, both tcpdump and Ethereal have a security issue: besides basic network traffic information, files captured by these tools contain sensitive information such as user identification numbers and passwords. Therefore, captured files must not be leaked. However, in some cases network analyses need to be performed outside the target networks. The distributed simulation, allocating distribution models inside networks and assigning analysis models outside networks, also allows analysis of network behavior outside the networks while keeping important information secure.
APA, Harvard, Vancouver, ISO, and other styles
46

Cremaschi, Andrea. "Comparing computational approaches to the analysis of high-frequency trading data using Bayesian methods." Thesis, University of Kent, 2017. https://kar.kent.ac.uk/60839/.

Full text
Abstract:
Financial prices are usually modelled as continuous, often involving geometric Brownian motion with drift, leverage, and possibly jump components. An alternative modelling approach allows financial observations to take discrete values when they are interpreted as integer multiples of a fixed quantity, the tick size, i.e. the monetary value associated with a single change in the asset evolution. These samples are usually collected at very high frequency, exhibiting several trading operations per second. In this context, the observables are modelled in two different ways: on one hand, via the Skellam process, defined as the difference between two independent Poisson processes; on the other, using a stochastic process whose conditional law is that of a mixture of Geometric distributions. The parameters of the two stochastic processes are modelled as functions of a stochastic volatility process, which is in turn described by a discretised Gaussian Ornstein-Uhlenbeck AR(1) process. The work first presents a parametric model for independent and identically distributed data, in order to motivate the algorithmic choices used as a basis for the following chapters. These include adaptive Metropolis-Hastings algorithms and the interweaving strategy. The central chapters of the work are devoted to the illustration of Particle Filtering methods for MCMC posterior computations (or PMCMC methods). The discussion starts by presenting the existing Particle Gibbs and Particle Marginal Metropolis-Hastings samplers. Additionally, we propose two extensions to the existing methods. Posterior inference and out-of-sample prediction obtained with the different methodologies are discussed and compared to the methodologies existing in the literature. To allow for more flexibility in the modelling choices, the work continues with a presentation of a semi-parametric version of the original model. Comparative inference obtained via the previously discussed methodologies is presented. The work concludes with a summary and an account of topics for further research.
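A brief sketch of the first modelling choice described above: tick-by-tick price changes simulated as a Skellam process, i.e. the difference of two independent Poisson counts, with the intensity driven by a discretised Gaussian AR(1) log-volatility; all parameter values are invented.

import numpy as np

rng = np.random.default_rng(7)

n = 1_000                            # number of high-frequency observations
mu, phi, sigma = -1.0, 0.98, 0.1     # AR(1) parameters for the log-intensity

# Stochastic volatility: discretised Gaussian AR(1) process for log-intensity.
h = np.empty(n)
h[0] = mu
for t in range(1, n):
    h[t] = mu + phi * (h[t - 1] - mu) + sigma * rng.normal()

lam = np.exp(h)

# Skellam increments: difference of two independent Poisson counts with
# intensity lambda_t, giving integer multiples of the tick size.
up = rng.poisson(lam)
down = rng.poisson(lam)
ticks = up - down

tick_size = 0.01
price = 100.0 + tick_size * np.cumsum(ticks)
print(price[:5])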
APA, Harvard, Vancouver, ISO, and other styles
47

Bai, Zhenlong, and 白真龍. "A study on a goal oriented detection and verification based approach for image and ink document analysis." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2005. http://hub.hku.hk/bib/B36600003.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Andersson, Jonathan. "Meltdowns påverkan på PHP-prestanda under IIS i Hyper-V miljöer : En kvantitativ studie som undersöker Meltdown-patchens effekter på PHP under IIS." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-15303.

Full text
Abstract:
Meltdown and Spectre are two vulnerabilities discovered in 2017. These vulnerabilities allow unauthorized users to extract confidential information from systems. Security patches have been developed to resolve these vulnerabilities, but with a potential performance loss. This report investigates how PHP running under Microsoft's web server IIS is affected when the security patch developed by Microsoft to protect systems against Meltdown and Spectre is applied. In a practical laboratory experiment, the JMeter tool is used to create a simulated load on the web server system, while PHP's CPU usage is monitored and recorded for further analysis. The results show a 13.74% performance loss with Microsoft's security patch applied. This opens up a discussion about what Microsoft's security patch actually does in a system, and whether the degradation is sufficiently large to consider a transition to another solution.
APA, Harvard, Vancouver, ISO, and other styles
49

Conocimiento, Dirección de Gestión del. "Up to Date." UpToDate Inc, 2004. http://hdl.handle.net/10757/655399.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Hautsalo, Jesper. "Using Supervised Learning and Data Fusion to Detect Network Attacks." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-54957.

Full text
Abstract:
Network attacks remain a constant threat to organizations around the globe. Intrusion detection systems provide a vital piece of the protection needed in order to fend off these attacks. Machine learning has become a popular method for developing new anomaly-based intrusion detection systems, and in recent years, deep learning has followed suit. Additionally, data fusion is often applied to intrusion detection systems in research, most often in the form of feature reduction, which can improve the accuracy and training times of classifiers. Another, less common form of data fusion is decision fusion, where the outputs of multiple classifiers are fused into a more reliable result. Recent research has produced some contradictory results regarding the efficiency of traditional machine learning algorithms compared to deep learning algorithms. This study aims to investigate this problem and provide some clarity about the relative performance of a selection of classifier algorithms, namely artificial neural networks, long short-term memory and random forest. Furthermore, two feature selection methods, namely the correlation coefficient method and principal component analysis, as well as one decision fusion method, Dempster-Shafer (D-S) evidence theory, are tested. The majority of the feature selection methods fail to increase the accuracy of the implemented models, although the accuracy is not drastically reduced. Among the individual classifiers, random forest shows the best performance, obtaining an accuracy of 87.87%. Fusing the results with D-S evidence theory further improves this result, obtaining an accuracy of 88.56%, and proves particularly useful for reducing the number of false positives.
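A condensed sketch of such a pipeline with scikit-learn models standing in for those used in the study (random forest plus logistic regression instead of the neural networks), PCA for feature reduction, and a two-class Dempster combination of the classifiers' probability outputs; the dataset is synthetic and the exact D-S formulation of the thesis is not reproduced.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Feature reduction with PCA, then two different classifiers.
pca = PCA(n_components=10).fit(X_tr)
X_tr_p, X_te_p = pca.transform(X_tr), pca.transform(X_te)

rf = RandomForestClassifier(random_state=0).fit(X_tr_p, y_tr)
lr = LogisticRegression(max_iter=1000).fit(X_tr_p, y_tr)

def dempster(p1, p2):
    """Combine two basic probability assignments over {normal, attack}."""
    m = p1 * p2                                  # agreement masses
    conflict = p1[:, 0] * p2[:, 1] + p1[:, 1] * p2[:, 0]
    return m / (1.0 - conflict)[:, None]         # renormalise

fused = dempster(rf.predict_proba(X_te_p), lr.predict_proba(X_te_p))
pred = fused.argmax(axis=1)

print("random forest:", accuracy_score(y_te, rf.predict(X_te_p)))
print("fused:", accuracy_score(y_te, pred))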
APA, Harvard, Vancouver, ISO, and other styles