
Dissertations / Theses on the topic 'Information filters'



Consult the top 50 dissertations / theses for your research on the topic 'Information filters.'




1

Kondo, Daishi. "Preventing information leakage in NDN with name and flow filters." Thesis, Université de Lorraine, 2018. http://www.theses.fr/2018LORR0233/document.

Full text
Abstract:
In recent years, Named Data Networking (NDN) has emerged as one of the most promising future networking architectures. To be adopted at Internet scale, NDN needs to resolve the inherent issues of the current Internet. Information leakage from an enterprise is a major issue even in today's Internet, and it is crucial to assess this risk before replacing the Internet with NDN; this thesis therefore investigates whether a new security threat causing information leakage can arise in NDN. Assuming that (i) a computer is located in an enterprise network based on an NDN architecture, (ii) the computer has already been compromised through suspicious media such as a malicious email, and (iii) the company installs a firewall connected to the NDN-based future Internet, this thesis focuses on the situation in which the compromised computer (i.e., malware) attempts to send leaked data to an outside attacker. The contributions of this thesis are fivefold. Firstly, it proposes an information leakage attack through a Data packet and through an Interest packet in NDN. Secondly, to counter this attack, it proposes an NDN firewall that monitors and processes the NDN traffic coming from consumers using a whitelist and a blacklist. Thirdly, it proposes an NDN name filter to classify a name in an Interest as legitimate or not. The name filter can indeed reduce the throughput per Interest, but to increase the speed of the attack, malware can send numerous Interests within a short period of time. Moreover, the malware can even exploit an Interest with an explicit payload in the name (like an HTTP POST message in the Internet), which is out of the scope of the proposed name filter and can increase the information leakage throughput by adopting a longer payload. Fourthly, to take the traffic flow from the consumer to the NDN firewall into account, this thesis proposes an NDN flow monitored at the NDN firewall. Fifthly, to deal with the drawbacks of the NDN name filter, it proposes an NDN flow filter to classify a flow as legitimate or not. The performance evaluation shows that the flow filter complements the name filter and greatly throttles the information leakage throughput.
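The whitelist/blacklist name-filtering idea in this abstract can be sketched in a few lines of Python. The prefixes and blacklist tokens below are invented for illustration; they are not the actual filters proposed in the thesis.

```python
# Minimal sketch of a firewall-style NDN name filter: an Interest name is
# accepted only if it matches a whitelisted prefix and none of its name
# components appear on the blacklist. Rules below are assumed examples.

WHITELIST_PREFIXES = {"/company/docs", "/company/web"}   # assumed prefixes
BLACKLIST_COMPONENTS = {"leak", "exfil"}                 # assumed tokens

def is_legitimate(name: str) -> bool:
    """Classify an NDN Interest name as legitimate or not."""
    components = [c for c in name.split("/") if c]
    if any(c in BLACKLIST_COMPONENTS for c in components):
        return False
    return any(name.startswith(p) for p in WHITELIST_PREFIXES)

print(is_legitimate("/company/docs/report.pdf"))   # True
print(is_legitimate("/company/docs/leak/0a3f"))    # False
```

As the abstract notes, such a per-name filter cannot catch payloads smuggled inside otherwise well-formed names, which is why the thesis complements it with a flow filter.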
APA, Harvard, Vancouver, ISO, and other styles
2

Eskiyerli, Mirat Hayri. "Square root domain filters." Thesis, Imperial College London, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.299973.

Full text
3

Fischer, Peter Michael. "Adaptive optimization techniques for context-aware information filters." Zürich : ETH, 2006. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=16671.

Full text
4

Walter, Matthew R. "Sparse Bayesian information filters for localization and mapping." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/46498.

Full text
Abstract:
Thesis (S.M.)--Joint Program in Oceanography/Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Dept. of Mechanical Engineering; and the Woods Hole Oceanographic Institution), 2008.
Includes bibliographical references (p. 159-170).
This thesis formulates an estimation framework for Simultaneous Localization and Mapping (SLAM) that addresses the problem of scalability in large environments. We describe an estimation-theoretic algorithm that achieves significant gains in computational efficiency while maintaining consistent estimates for the vehicle pose and the map of the environment. We specifically address the feature-based SLAM problem, in which the robot represents the environment as a collection of landmarks. The thesis takes a Bayesian approach whereby we maintain a joint posterior over the vehicle pose and feature states, conditioned upon measurement data. We model the distribution as Gaussian and parametrize the posterior in the canonical form, in terms of the information (inverse covariance) matrix. When sparse, this representation is amenable to computationally efficient Bayesian SLAM filtering. However, while a large majority of the elements within the normalized information matrix are very small in magnitude, the matrix is nonetheless fully populated. Recent feature-based SLAM filters achieve the scalability benefits of a sparse parametrization by explicitly pruning these weak links in an effort to enforce sparsity. We analyze one such algorithm, the Sparse Extended Information Filter (SEIF), which has laid much of the groundwork concerning the computational benefits of the sparse canonical form. The thesis performs a detailed analysis of the process by which the SEIF approximates the sparsity of the information matrix and reveals key insights into the consequences of different sparsification strategies. We demonstrate that the SEIF yields a sparse approximation to the posterior that is inconsistent, suffering from exaggerated confidence estimates.
This overconfidence has detrimental effects on important aspects of the SLAM process and affects the higher-level goal of producing accurate maps for subsequent localization and path planning. This thesis proposes an alternative scalable filter that maintains sparsity while preserving the consistency of the distribution. We leverage insights into the natural structure of the feature-based canonical parametrization and derive a method that actively maintains an exactly sparse posterior. Our algorithm exploits the structure of the parametrization to achieve gains in efficiency, with a computational cost that scales linearly with the size of the map. Unlike similar techniques that sacrifice consistency for improved scalability, our algorithm performs inference over a posterior that is conservative relative to the nominal Gaussian distribution. Consequently, we preserve the consistency of the pose and map estimates and avoid the effects of an overconfident posterior. We demonstrate our filter alongside the SEIF and the standard EKF both in simulation and on two real-world datasets. While we maintain the computational advantages of an exactly sparse representation, the results show convincingly that our method yields conservative estimates for the robot pose and map that are nearly identical to those of the original Gaussian distribution as produced by the EKF, but at much less computational expense. The thesis concludes with an extension of our SLAM filter to a complex underwater environment. We describe a systems-level framework for localization and mapping relative to a ship hull with an Autonomous Underwater Vehicle (AUV) equipped with a forward-looking sonar. The approach utilizes our filter to fuse measurements of vehicle attitude and motion from onboard sensors with data from sonar images of the hull. We employ the system to perform three-dimensional, 6-DOF SLAM on a ship hull.
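The canonical (information) parametrization the abstract describes can be illustrated with a one-dimensional toy example, a sketch under assumed numbers rather than the thesis's actual filter: in information form, fusing a measurement is a pure addition, which is what makes sparsity of the information matrix so computationally attractive.

```python
# With information matrix lam = 1/sigma^2 and information vector eta = lam*mu,
# a scalar measurement update is purely additive; no inversion is needed
# until the moments (mean, variance) are actually read out.

def info_update(eta, lam, z, r):
    """Fuse measurement z with noise variance r into (eta, lam)."""
    return eta + z / r, lam + 1.0 / r

# Prior: mean 0.0, variance 4.0  ->  eta = 0.0, lam = 0.25 (assumed values)
eta, lam = 0.0, 0.25
eta, lam = info_update(eta, lam, z=2.0, r=1.0)  # one direct state observation

mean = eta / lam          # recover moments only when needed
variance = 1.0 / lam
print(mean, variance)     # 1.6 0.8
```

This matches the usual Gaussian fusion of N(0, 4) with N(2, 1); in the multivariate SLAM case the same additive update touches only the rows and columns of the observed features, which is the structure the thesis exploits.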
by Matthew R. Walter.
S.M.
5

Mohieldin, Ahmed Nader. "High performance continuous-time filters for information transfer systems." Texas A&M University, 2003. http://hdl.handle.net/1969/233.

Full text
6

Wicks, Tony. "Design and implementation of PCAS filters." Thesis, University of Warwick, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.308510.

Full text
7

Alexiou, Ioannis. "Complex filters and higher-order spatial information for image categorization." Thesis, Imperial College London, 2013. http://hdl.handle.net/10044/1/14488.

Full text
Abstract:
This thesis applies complex spatial filters at the front-end filtering stage of a computer vision framework for object recognition and scene categorization. This involves careful filter design in the Fourier domain based on discrete frame properties. The biological plausibility of the suggested filtering is compared against a common model found in the computer vision literature. The designed complex filter bank is equipped with focus-of-attention operators. Specifically, two possible keypoint detection methodologies are examined and compared with state-of-the-art keypoint detection methods, including an investigation of scale-estimation methods. In addition, three image-patch descriptor arrangements are proposed to sample the complex filter responses, and an initial evaluation of categorization performance is undertaken. Next, the spatial pooling arrangement of the best-performing descriptor is further optimised and the performance of different complex filter bandwidths is examined in class separation tasks. A further study is conducted on the effects of a winner-take-all (WTA) approach to modifying filter responses before pooling. A thorough evaluation of descriptor performance is undertaken to reveal any advantages or disadvantages from a variety of perspectives. Next, the clustering behaviour of descriptors of various types is inspected in the descriptor feature space. A reverse look-up of visual words attempts to relate clustering behaviour to descriptor performance. Typical grouping approaches, such as spatial pyramids, are then compared with a novel method for coupling visual words in which a linear-kernel SVM learns class separability. A final evaluation of this stage is presented and discussed, leading to conclusive arguments about the importance of careful approaches to word-pairing for good-quality categorization.
8

Beam, Michael A. "Personalized News: How Filters Shape Online News Reading Behavior." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1315716858.

Full text
9

Alam, Dawood. "Design and VLSI implementation of two-dimensional allpass digital filters." Thesis, University of Warwick, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.319797.

Full text
10

Pfann, Eugen. "Design and analysis of oversampled ΣΔ adaptive LMS filters." Thesis, University of Strathclyde, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.273397.

Full text
11

Ma, Qiang. "The application of genetic algorithms to the adaptation of IIR filters." Thesis, Loughborough University, 1995. https://dspace.lboro.ac.uk/2134/32269.

Full text
Abstract:
The adaptation of an IIR filter is a very difficult problem due to its non-quadratic performance surface and potential instability. Conventional adaptive IIR algorithms suffer from potential instability problems and a high cost for stability monitoring. Therefore, there is much interest in adaptive IIR filters based on alternative algorithms. Genetic algorithms are a family of search algorithms based on natural selection and genetics, and have been successfully used in many different areas. This thesis studies the application of genetic algorithms to the adaptation of IIR filters, and shows that the genetic algorithm approach has a number of advantages over conventional gradient algorithms, particularly for the adaptation of high-order adaptive IIR filters, IIR filters with poles close to the unit circle, and IIR filters with multi-modal error surfaces, all problems that conventional gradient algorithms have difficulty solving. Coefficient results are presented for various orders of IIR filters. In the computer simulations presented in this thesis, the direct, cascade, parallel, and lattice form IIR filter structures have been used and compared. The lattice form IIR filter structure shows its superiority over the cascade and parallel form IIR filter structures in terms of its mean square error convergence performance.
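The GA-based adaptation described above can be sketched for a toy first-order IIR filter. The population size, mutation settings, and target coefficient below are illustrative assumptions, not the thesis's configuration; the point is that candidates evolve by selection and mutation, so no gradient of the (possibly multi-modal) error surface is required.

```python
import random

# Toy GA adapting the coefficient a of y[n] = a*y[n-1] + x[n] to match an
# assumed target response. Candidates are clamped to |a| < 1, which keeps
# the pole inside the unit circle, so no stability monitoring is needed.

random.seed(1)
TARGET_A = 0.5
X = [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0]  # probe input

def iir(a, x):
    y, out = 0.0, []
    for s in x:
        y = a * y + s
        out.append(y)
    return out

DESIRED = iir(TARGET_A, X)

def fitness(a):  # negative mean squared error vs. the desired response
    return -sum((d - y) ** 2 for d, y in zip(DESIRED, iir(a, X))) / len(X)

pop = [random.uniform(-0.99, 0.99) for _ in range(20)]
for _ in range(60):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:5]                      # elitist selection
    pop = parents + [max(-0.99, min(0.99, p + random.gauss(0, 0.05)))
                     for p in parents for _ in range(3)]

best = max(pop, key=fitness)
print(round(best, 2))  # close to 0.5
```

A real adaptive-filter GA would evolve full coefficient vectors (or lattice reflection coefficients), but the selection-and-mutation loop has the same shape.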
12

Alm, Erik. "Area and Power Efficiency of Multiplier-Free Finite Impulse Response Filters." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-237417.

Full text
Abstract:
In digital radio systems, a large number of finite impulse response filters are typically used. Due to their nature of operation, such filters require many multiplication operations, leading to great costs in terms of both chip area and power consumption. For cost reduction reasons, there is a strong business case for implementing these filters without general multipliers so as to reduce the area and power consumption of the overall system. This thesis explores a method of implementing finite impulse response halfband filters without general multipliers, by using a special filter structure and replacing multipliers with sequences of binary shifts and additions. The savings in terms of area and power consumption are estimated and compared to a conventional filter (with a common structure) implementation containing general multipliers, as well as the same conventional filter implemented without general multipliers by means of manipulating its coefficients such that they can be implemented with shifts and additions. The results show that while using the special filter structure with shifts and additions consumes less area and power than a conventional filter with general multipliers, employing simpler methods to obtain coefficients implementable with shifts and additions in a conventional filter structure produces smaller filters consuming less power. Moreover, the results of this thesis show that using methods allowing for multiplier-free filter implementations with conventional filter structures seems favorable, hence further investigation of such methods is recommended. Future studies could also focus on methods applicable to filters with support for dynamic coefficients.
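The shift-and-add substitution at the core of this approach can be sketched as follows. The coefficients 10 and 6 are invented examples, not taken from the thesis; any coefficient expressible as a short sum of powers of two admits the same trick.

```python
# Multiplier-free idea: a coefficient such as 10 = 2**3 + 2**1 turns the
# product 10*x into (x << 3) + (x << 1), i.e. only shifts and additions.

def shift_add_multiply(sample, shifts):
    """Multiply `sample` by sum(2**s for s in shifts) using shifts/adds."""
    return sum(sample << s for s in shifts)

# A tiny 2-tap FIR y[n] = 10*x[n] + 6*x[n-1], coefficients as shift lists:
TAPS = [[3, 1], [2, 1]]          # 10 = 8 + 2, 6 = 4 + 2 (assumed taps)

def fir(x):
    out, prev = [], 0
    for s in x:
        out.append(shift_add_multiply(s, TAPS[0]) +
                   shift_add_multiply(prev, TAPS[1]))
        prev = s
    return out

print(shift_add_multiply(7, [3, 1]))  # 70
print(fir([1, 0, 0]))                 # [10, 6, 0]
```

In hardware each shift is just wiring, so the area and power cost reduces to that of the adders, which is the saving the thesis quantifies.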
13

Lingtorp, Alexander. "Real Time Voxel Cone Tracing using Bilateral Filters and 3D Clipmaps." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-276680.

Full text
Abstract:
Global illumination is a collective term for photorealistic lighting in computer graphics. Computing global illumination is expensive in real-time applications such as computer games. Voxel cone tracing is a technique that has shown promise in providing some of the most visually important effects of global illumination in close to real time. Reducing the computational cost of global illumination enables more visually impressive scenes to be computed. This thesis evaluates the runtime performance and image quality of global illumination computed using voxel cone tracing at lower resolutions in conjunction with bilateral filtering and joint bilateral upsampling. A qualitative and quantitative evaluation is performed using both images and error metrics, comparing the method against bilinear upsampling. The results show that runtime performance is increased while image quality is reduced. Filtering is shown to reduce specular artifacts. The performance and image-quality characteristics of the implemented method are discussed. Image quality is concluded to be maintained, since the filtering reduces artifacts in the low-resolution global illumination and a lower error metric is measured compared to bilinear upsampling.
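The bilateral weighting that makes this filtering edge-preserving can be sketched in one dimension. The toy signal and sigma values are illustrative assumptions; the thesis applies the same idea in 2-D to low-resolution illumination buffers.

```python
import math

# Bilateral filter sketch: each neighbor is weighted by spatial closeness
# AND intensity similarity, so noise is smoothed but edges survive.

def bilateral(signal, sigma_s=1.0, sigma_r=0.2, radius=2):
    out = []
    for i, center in enumerate(signal):
        wsum = vsum = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            w = (math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)) *
                 math.exp(-((center - signal[j]) ** 2) / (2 * sigma_r ** 2)))
            wsum += w
            vsum += w * signal[j]
        out.append(vsum / wsum)
    return out

# A noisy step edge: smoothing flattens the noise but keeps the 0 -> 1 jump.
noisy_step = [0.05, 0.0, 0.02, 1.0, 0.97, 1.03]
print([round(v, 2) for v in bilateral(noisy_step)])
```

Joint bilateral upsampling uses the same two-term weight, but takes the range term from a full-resolution guide image while the values come from the low-resolution result.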
14

Casbeer, David W. "Decentralized Estimation Using Information Consensus Filters with a Multi-static UAV Radar Tracking System." Diss., CLICK HERE for online access, 2009. http://contentdm.lib.byu.edu/ETD/image/etd2779.pdf.

Full text
15

Roy, Emmanuel. "Investigation of the theory and implementation of adaptive recursive second order polynomial filters." Thesis, University of Strathclyde, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.366350.

Full text
16

Ho, Peter. "Organization in decentralized sensing." Thesis, University of Oxford, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.306873.

Full text
17

Mountain, David Michael. "Exploring mobile trajectories : an investigation of individual spatial behaviour and geographic filters for information retrieval." Thesis, City University London, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.435947.

Full text
18

Junler, Ludvig. "Evaluation of Tracking Filters for Tracking of Manoeuvring Targets." Thesis, Linköpings universitet, Reglerteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-166957.

Full text
Abstract:
This thesis evaluates different solutions to the target tracking problem using airborne radar measurements. The purpose of this report is to present and compare options that can improve tracking performance when the target performs various manoeuvres and the radar measurements are noisy. A simulation study is done to evaluate and compare the presented solutions, where the evaluation criteria are the estimation errors and the computational complexity. The algorithms investigated are the generalized pseudo-Bayesian filter of order one (GPB(1)) and the interacting multiple model (IMM) filter, each using three motion models, along with several single-model Kalman filters. Additionally, the impact of different choices of radar parameters on tracking performance is also examined. The results show that filters using multiple models are best suited for tracking targets performing different manoeuvres. Tracking performance is improved with both the GPB(1) and IMM algorithms compared to the filters using a single model. Looking at the estimation errors, IMM outperforms the other algorithms and achieves better general performance for different kinds of manoeuvres. However, IMM has a much higher computational complexity than the single-model filters. GPB(1) could therefore be more suited for applications where computational power poses a problem, since it is less computationally demanding than IMM. Furthermore, it is shown that different radar parameters have an impact on tracking performance. The choice of pulse repetition frequency (PRF) and duty cycle used by the radar affects the accuracy of the measurements. The estimation errors of the tracking filters become larger with poor measurements, which also makes it more difficult for the multiple-model algorithms to make good use of the different motion models. In most cases, IMM is however less sensitive to the choice of PRF, in relation to how the models are used in the algorithm, compared to GPB(1). Nevertheless, the study shows that there are cases where some combinations of radar parameters drastically reduce tracking performance and no clear improvement can be seen, not even for IMM.
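The mode-probability bookkeeping at the heart of an IMM filter can be sketched as follows. The transition matrix and likelihood values are illustrative assumptions, not the thesis's configuration; a full IMM would also mix and update the per-model Kalman states.

```python
# IMM mode-probability update: predicted mode probabilities follow the
# Markov transition matrix, then are reweighted by how well each motion
# model explained the latest measurement.

P_TRANS = [[0.95, 0.05],   # P(stay constant-velocity), P(switch to turn)
           [0.10, 0.90]]   # P(switch back),            P(stay in turn)

def imm_mode_update(mu, likelihoods):
    """One cycle of IMM mode-probability prediction and update."""
    predicted = [sum(mu[i] * P_TRANS[i][j] for i in range(2))
                 for j in range(2)]
    unnorm = [predicted[j] * likelihoods[j] for j in range(2)]
    total = sum(unnorm)
    return [w / total for w in unnorm]

mu = [0.9, 0.1]                       # currently confident in constant velocity
mu = imm_mode_update(mu, [0.2, 1.5])  # measurement fits the turn model better
print([round(m, 3) for m in mu])      # [0.461, 0.539]
```

GPB(1) performs a comparable reweighting but collapses the model-conditioned estimates into a single state after every cycle, which is what makes it cheaper than IMM.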
19

Mulholland, P. J. "Adaptive filters and their application to an adaptive receiving array for an underwater acoustic data link." Thesis, University of Newcastle Upon Tyne, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.234508.

Full text
20

Fantinato, Denis Gustavo 1985. "Equalização não supervisionada : contribuições ao estudo do critério do módulo constante e de métodos baseados na teoria da informação." [s.n.], 2013. http://repositorio.unicamp.br/jspui/handle/REPOSIP/259838.

Full text
Abstract:
Advisors: Romis Ribeiro de Faissol Attux, Aline de Oliveira Neves Panazio
Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Abstract (Portuguese): The abstract can be viewed in the full text of the digital thesis.
Abstract: The complete abstract is available with the full electronic document.
Master's degree
Computer Engineering
Master in Electrical Engineering
21

Martinson, Tiina, and Johanna Åkesson. "Designing with only second hand information- An evaluation of the filters and formatters in the Billing Gateway GUI." Thesis, Blekinge Tekniska Högskola, Institutionen för arbetsvetenskap och medieteknik, 2001. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-1595.

Full text
Abstract:
This Bachelor's thesis comprises 20 points at the MDA programme (People, Computers and Work) at Blekinge Institute of Technology in Ronneby. The MDA programme focuses on how people use information technology and on its design and development. In this Bachelor's thesis we describe a study process at the Billing Gateway Department at Ericsson Software Technology in Ronneby, in which we investigate how to design for the intended users without direct contact with them. The aim of the study was to evaluate and produce a design suggestion for a graphical representation of the filters and formatters for the end-users. With only second-hand information about them, we found the task impossible to accomplish. Instead, this Bachelor's thesis is an investigation of how a design process develops without direct contact with the end-users. The results are based on second-hand information about the needs of the end-users. We give Ericsson suggestions on how to involve the end-users in the design process.
22

Bae, Cheolyong, and Madhur Gokhale. "Implementation of High-Speed 512-Tap FIR Filters for Chromatic Dispersion Compensation." Thesis, Linköpings universitet, Datorteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-153435.

Full text
Abstract:
A digital filter is a system or device that modifies a signal, an essential function in digital communication. Using optical fibers for communication has various advantages over copper wires, such as higher bandwidth and longer reach. However, at high transmission rates, chromatic dispersion becomes a problem that must be mitigated in an optical communication system. It is therefore necessary to have a filter that compensates for chromatic dispersion. In this thesis, we introduce the implementation of a new filter architecture and compare it with a previously proposed architecture.
23

Medran, del Rio Jose Luis. "Study of the effects of applying higher symmetries to printed filters." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-234914.

Full text
Abstract:
In recent years, a new field of study has arisen around the application of the so-called higher symmetries to structures that were already thought to be completely studied, with no room for improvement in terms of benefits. There are two types of higher symmetries, named glide and twist symmetries. The objective of this Master's thesis is to study the effects of applying glide symmetry to a printed filter in microstrip technology, and to compare it with a filter with conventional reflective symmetry. The filter was adapted to the glide characteristics and we examine its behavior from different points of view with the purpose of reaching a better understanding of the consequences of applying glide symmetry to this type of printed filter. Three prototypes were manufactured and measured to validate the simulated results.
24

Skepetzis, Vasilios, and Pontus Hedman. "The Effect of Beautification Filters on Image Recognition : "Are filtered social media images viable Open Source Intelligence?"." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-44799.

Full text
Abstract:
In light of the emergence of social media and its abundance of facial imagery, facial recognition finds itself useful from an Open Source Intelligence standpoint. Images uploaded to social media are likely to be filtered, which can destroy or modify biometric features. This study looks at the effort of identifying individuals from their facial images after filters have been applied. The social media image filters studied occlude parts of the nose and eyes, with a particular interest in filters occluding the eye region. Our proposed method uses a residual neural network model to extract features from images, with recognition of individuals based on distance measures over the extracted features. Classification of individuals is further done using a linear support vector machine and an XGBoost classifier. To increase recognition performance for images completely occluded in the eye region, we present a method to reconstruct this information using a variation of a U-Net; from the classification perspective, we also train the classifier on filtered images to increase recognition performance. Our experimental results showed good recognition of individuals when filters did not occlude important landmarks, especially around the eye region. Our proposed solution shows an ability to mitigate the occlusion introduced by filters through either reconstruction or training on manipulated images, in some cases increasing the classifier's accuracy by approximately 17 percentage points with reconstruction alone, 16 percentage points when the classifier was trained on filtered data, and 24 percentage points when both were used together. When training on filtered images, we observe an average performance increase of 9.7 percentage points across all datasets.
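The distance-based recognition step described in this abstract can be sketched as follows. The gallery vectors and names are invented toy examples, not real ResNet embeddings, which would be high-dimensional feature vectors extracted by the network.

```python
import math

# Sketch of nearest-neighbor identification over feature embeddings:
# the probe is assigned to the enrolled identity whose embedding has the
# smallest cosine distance to the probe's embedding.

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return 1.0 - dot / (norm(a) * norm(b))

GALLERY = {  # assumed enrollments (toy 3-D "embeddings")
    "alice": [0.9, 0.1, 0.4],
    "bob":   [0.1, 0.8, 0.5],
}

def identify(probe):
    return min(GALLERY, key=lambda name: cosine_distance(GALLERY[name], probe))

print(identify([0.85, 0.15, 0.35]))  # alice
```

A filter that occludes the eye region perturbs the probe embedding, pushing it away from the correct gallery vector; reconstruction and training on filtered images both aim to shrink that perturbation.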
APA, Harvard, Vancouver, ISO, and other styles
25

Favero, Andre Luis. "Metodologia para detecção de incoerências entre regras em filtros de pacotes." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2007. http://hdl.handle.net/10183/12073.

Full text
Abstract:
Embora firewall seja um assunto bastante discutido na área de segurança da informação, existem lacunas em termos de verificação de firewalls. Verificações de firewalls têm o intuito de garantir a correta implementação dos mecanismos de filtragem e podem ser realizadas em diferentes níveis: sintaxe das regras; conformidade com a política; e relacionamento entre as regras. Os aspectos referentes a erros de sintaxe das regras são geralmente tratados pela ferramenta de filtragem em uso. O segundo nível, no entanto, depende da existência de uma política formal e documentada, algo não encontrado na maioria das organizações, e de uma metodologia eficaz que, através da entrada da política de segurança e do conjunto de regras de firewall implementado, possa comparálas e apontar as discrepâncias lógicas entre a especificação (política) e a implementação (regras). O último, verificação dos relacionamentos entre regras, não requer qualquer documentação, uma vez que somente o conjunto de regras do firewall é necessário para a identificação de incoerências entre elas.Baseado nessas considerações, este trabalho objetivou o estudo e a definição de uma metodologia para a análise de relacionamentos entre regras, apontando erros e possíveis falhas de configuração. Três metodologias já existentes foram estudadas, analisadas e utilizadas como base inicial para uma nova metodologia que atingisse os requisitos descritos neste trabalho. Para garantir a efetividade da nova metodologia, uma ferramenta protótipo foi implementada e aplicada a três estudos de caso.
Although firewalls are a well discussed issue in the information security field, there are gaps in terms of firewall verification. Firewall verification is aimed at enforcing the correct implementation of filtering mechanisms and can be performed at three distinct levels: rule syntax, policy compliance and rule relationships. Aspects related to rule syntax errors are usually addressed by the filtering tool in use. The second level, however, depends on the existence of a formalized and documented policy, something unusual in most organizations, and on an efficient methodology that, taking the security policy and the firewall rule set as inputs, can compare them and point out logical discrepancies between the specification (policy) and the implementation (rule set). The last level, rule relationship verification, does not require any prior documentation; only the firewall rule set is needed to identify incoherencies among the rules. Based on these considerations, this work aimed at studying and defining a methodology for the analysis of rule relationships, pointing out errors and possible misconfigurations. Three existing methodologies were studied, analyzed and used as the initial foundation for a new methodology that meets the requirements described in this work. To ensure the effectiveness of the new methodology, a prototype tool was implemented and applied to three case studies.
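The rule-relationship level mentioned above can be illustrated with a toy check for two classic incoherencies, shadowing and redundancy. This is our own deliberately simplified model: rules match only on a protocol and a port interval, and first match wins.

```python
def covers(a, b):
    # True if rule a's match space contains rule b's (same or "any" protocol, wider port range).
    return a["proto"] in (b["proto"], "any") and a["lo"] <= b["lo"] and b["hi"] <= a["hi"]

def find_shadowed(rules):
    # Report rules that can never match because an earlier rule fully covers them:
    # "shadowing" when the actions conflict, "redundancy" when they agree.
    issues = []
    for i, later in enumerate(rules):
        for j, earlier in enumerate(rules[:i]):
            if covers(earlier, later):
                kind = "shadowing" if earlier["action"] != later["action"] else "redundancy"
                issues.append((j, i, kind))
                break
    return issues
```

Real packet filters match on source/destination addresses and more, so a full analysis intersects several dimensions, but the pairwise-coverage idea is the same.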
APA, Harvard, Vancouver, ISO, and other styles
26

Kokaram, Anil Christopher. "Motion picture restoration." Thesis, University of Cambridge, 1993. https://www.repository.cam.ac.uk/handle/1810/256798.

Full text
Abstract:
This dissertation presents algorithms for restoring some of the major corruptions observed in archived film or video material. The two principal problems of impulsive distortion (Dirt and Sparkle, or Blotches) and noise degradation are considered. There is also an algorithm for suppressing the inter-line jitter common in images decoded from noisy video signals. In the case of noise reduction and Blotch removal, the thesis considers image sequences to be three-dimensional signals involving the evolution of features in time and space. This is necessary if any process presented is to show an improvement over standard two-dimensional techniques. It is important to recognize that consideration of image sequences must involve an appreciation of the problems incurred by the motion of objects in the scene. The most obvious implication is that, due to motion, useful three-dimensional processing does not necessarily proceed in a direction 'orthogonal' to the image frames. Therefore, attention is given to discussing motion estimation as it is used for image sequence processing. Some discussion is given to image sequence models, and the 3D autoregressive (AR) model is investigated. A multiresolution block-matching (BM) scheme is used for motion estimation throughout the major part of the thesis. Impulsive noise removal in image processing has traditionally been achieved by the use of median filter structures. A new three-dimensional multilevel median structure is presented in this work, with the additional use of a detector which limits the distortion caused by the filters. This technique is found to be extremely effective in practice and is an alternative to the traditional global median operation. The new median filter is shown to be superior to those previously presented with respect to the ability to reject the kind of distortion found in practice. A model-based technique using the 3D AR model is also developed for detecting and removing Blotches. 
This technique achieves better fidelity at the expense of a heavier computational load. Motion-compensated 3D IIR and FIR Wiener filters are investigated with respect to their ability to reject noise in an image sequence. They are compared to several previously presented algorithms which are purely temporal in nature. The filters presented are found to be effective and compare favourably to the other algorithms. The 3D filtering process is superior to the purely temporal process, as expected. The algorithm presented for suppressing inter-line jitter uses a 2D AR model to estimate and correct the relative displacements between the lines. The output image is much more satisfactory to the observer, although in a severe case some drift of image features is to be expected. A suggestion for removing this drift is presented in the conclusions. There are several remaining problems in moving video, in particular line scratches and picture shake/roll. Line scratches cannot be detected successfully by the detectors presented and so cannot be removed efficiently. Suppressing shake and roll involves compensating the entire frame for motion, and there is a need to separate global from local motion. These difficulties provide ample opportunity for further research.
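A detector-gated temporal median of the kind described above can be sketched in a few lines. This is a deliberately simplified version (three co-located frames, no motion compensation, and an invented threshold), not the thesis's multilevel structure.

```python
import numpy as np

def despike(prev, cur, nxt, threshold=30.0):
    # Temporal median of three co-located frames.
    stack = np.stack([prev, cur, nxt])
    med = np.median(stack, axis=0)
    # Crude impulsive-distortion detector: only pixels that deviate strongly
    # from the temporal median are treated as Blotches and replaced;
    # everything else passes through untouched, limiting filter distortion.
    blotch = np.abs(cur - med) > threshold
    return np.where(blotch, med, cur)
```

Without motion compensation, moving objects would trigger false detections, which is exactly why the thesis couples the median structure with motion estimation.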
APA, Harvard, Vancouver, ISO, and other styles
27

Karlsson, Casper. "Analysis of harmonic cross-modulation in HVDC line-commutated converters for practical design purposes." Thesis, Högskolan Väst, Avdelningen för Industriell ekonomi, Elektro- och Maskinteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:hv:diva-12308.

Full text
Abstract:
The use of HVDC has many benefits over HVAC for some specific power transmission purposes. However, two major problems with HVDC links are that the converters produce harmonics and consume reactive power. Electrical filters are used to compensate for this and are a major cost driver of HVDC links. An accurate analysis of the modulation makes it possible to design an optimal filter solution in order to make the link financially viable. This report investigates how the complex cross-modulation phenomenon affects the harmonics. The investigation is limited to line-commutated converter (LCC) technology and excludes the newer voltage-source converter (VSC) technology, for which the cross-modulation behavior is very different and requires completely different analysis techniques. This report refers to several MATLAB programs, all of which have been developed by the author as part of this thesis work. The report starts by explaining analytically, with the help of switching functions, how harmonics are cross-modulated across line-commutated converters. It then explains and shows how a MATLAB program, BOWSER, can be used in the time domain to calculate accurate switching functions when the converter is supplied with a general voltage source and when the DC current contains ripple. After that, it explains and shows how a second MATLAB program, DONKEYKONG, can be created to model an HVDC link in the frequency domain by iterating BOWSER. The cross-modulation phenomenon is then analyzed in the frequency domain with the help of DONKEYKONG. The result is that the cross-modulation phenomenon can be divided into two groups, one caused by grid and DC-side impedance and one by overlap-angle variation, which affect the characteristic and non-characteristic harmonics in different ways.
It was found that the characteristic harmonics are affected by cross-modulation due to grid and DC-side impedance by up to 12 %, and that low-order non-characteristic harmonics can diverge by up to 900 % when the converter is supplied with 1 % fundamental unbalance. It was also shown that the non-characteristic harmonics have almost the same amplitude at all power transmission levels of the HVDC link. The report shows that the cross-modulation caused by the grid and DC-side impedance, which is sometimes ignored or treated in a simplified way, can significantly affect the practical filter design. It also shows step by step how the MATLAB programs are created.
APA, Harvard, Vancouver, ISO, and other styles
28

Jesus, Gildson Queiroz de. "Filtragem robusta recursiva para sistemas lineares a tempo discreto com parâmetros sujeitos a saltos Markovianos." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/18/18153/tde-03102011-091822/.

Full text
Abstract:
Este trabalho trata de filtragem robusta para sistemas lineares sujeitos a saltos Markovianos discretos no tempo. Serão desenvolvidas estimativas preditoras e filtradas baseadas em algoritmos recursivos que são úteis para aplicações em tempo real. Serão desenvolvidas duas classes de filtros robustos, uma baseada em uma estratégia do tipo H∞ e a outra baseada no método dos mínimos quadrados regularizados robustos. Além disso, serão desenvolvidos filtros na forma de informação e seus respectivos algoritmos array para estimar esse tipo de sistema. Neste trabalho assume-se que os parâmetros de saltos do sistema Markoviano não são acessíveis.
This work deals with the problem of robust state estimation for discrete-time uncertain linear systems subject to Markovian jumps. Predicted and filtered estimates are developed based on recursive algorithms, which are useful in online applications. We develop two classes of filters: the first is based on an H∞ approach, and the second on a robust regularized least-squares method. Moreover, we develop information filters and their respective array algorithms to estimate this kind of system. We assume that the jump parameters of the Markovian system are not accessible.
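The information-filter form mentioned above propagates the inverse covariance Y = P⁻¹ and the information vector y = Yx, so a measurement update becomes a simple addition. Below is a minimal sketch of that update for the standard linear-Gaussian case only, without the robust or Markovian-jump machinery of the thesis; all names are generic.

```python
import numpy as np

def information_update(Y, y, H, R_inv, z):
    # Measurement update in information form: the information matrix Y = P^-1
    # and the information vector y = Y x are updated additively, which is what
    # makes the information filter convenient for fusing many measurements.
    Y_new = Y + H.T @ R_inv @ H
    y_new = y + H.T @ R_inv @ z
    return Y_new, y_new
```

The state estimate is recovered when needed by solving Y x = y.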
APA, Harvard, Vancouver, ISO, and other styles
29

Bancal, Sylvain. "Basic design of an HVDC interconnection in Brazil." Thesis, KTH, Skolan för elektro- och systemteknik (EES), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-187693.

Full text
Abstract:
HVDC technologies are very effective for long-distance power transmission but generally raise difficult questions about how to determine an effective configuration. This thesis proposes part of an optimization process for determining an optimal configuration for an HVDC installation, emphasizing in this report the impact of conductor selection and of filter design. Conductor selection, including the choice of voltage, requires knowledge of how the energy cost will evolve in order to determine the optimum, and a sensitivity analysis is proposed to evaluate the impact of this factor on the final design. The conductor selection approach made it possible to determine the key parameters in choosing a conductor, namely the radius and the number of sub-conductors. Different types of conductors and configurations were compared under different energy-cost scenarios in order to determine the most economical conductor. Filter design concerns both the internal components and the AC components of the HVDC station, but it can also be treated as an optimization process, considering the total losses of the filters and the total harmonic distortion and using a minimax approach. The optimization approach, based on a Newton-Raphson algorithm, made it possible to determine an optimal combination of filters covering the whole power range of the HVDC link. It was observed that even though the actual design choice was close to the final design selected, it was not optimal for low-power harmonics.
HVDC är en mycket effektiv teknik för kraftöverföring av elektrisk energi på långa avstånd, men ställer generellt stora krav på hur man genomför en effektiv konfiguration. Denna avhandling föreslår en del av en optimeringsprocess för att bestämma en optimal konfiguration för en HVDC-anläggning. Det som betonas i denna rapport är effekterna av val av ledare och design av filter. Ledarvalet, inklusive val av spänning, kräver en prognostisering av energikostnadernasutvecklingen för att optimera designen och göra en känslighetsanalys för att utvärdera effekterna av dessa faktorer på den slutliga utformningen. Tillvägagångssättet för ledarval gjorde det möjligt att fastställa de viktigaste parametrarna att välja en ledare som är radien och antal delledare. Olika typer av ledare och konfigurationer jämfördes med olika scenario för energikostnaden för att bestämma denmest ekonomiska ledaren. Filterdesignen är en fråga som berör både de inre komponenterna och AC komponenter i HVDC-stationen, men kan också betraktas som en optimeringsprocess, med avseende på de totala förlusterna av filtren och total harmonisk distorsion och med hjälp av en minimax tillvägagångssätt. Optimeringsstrategin som bygger på en Newton-Raphsons algoritm, gjorde det möjligt att fastställa en optimal kombination av filter för att ta hänsyn till alla effektområden i HVDC-förbindelsen. Det observerades att även om det faktiska valet för konstruktionen var nära den slutliga utformning som valdes, så var den inte optimala för låga  övertoner.
APA, Harvard, Vancouver, ISO, and other styles
30

Karlsson, Linda S. "Spatio-Temporal Pre-Processing Methods for Region-of-Interest Video Coding." Licentiate thesis, Mid Sweden University, Department of Information Technology and Media, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-51.

Full text
Abstract:

In video transmission at low bit rates the challenge is to compress the video with a minimal reduction of the perceived quality. The compression can be adapted to knowledge of which regions in the video sequence are of most interest to the viewer. Region of interest (ROI) video coding uses this information to control the allocation of bits between the background and the ROI. The aim is to increase the quality in the ROI at the expense of the quality in the background. In order for this to occur, the typical content of an ROI for a particular application is first determined, and the actual detection is performed based on this information. The allocation of bits can then be controlled based on the result of the detection.

In this licentiate thesis, existing methods to control bit allocation in ROI video coding are investigated, in particular pre-processing methods that are applied independently of the codec or standard. This makes it possible to apply the method directly to the video sequence without modifying the codec. Three filters, based on previous approaches, are proposed in this thesis: a spatial filter that only modifies the background within a single frame, a temporal filter that uses information from the previous frame, and a combination of the two, a spatio-temporal filter. The abilities of these filters to reduce the number of bits necessary to encode the background and to successfully re-allocate these to the ROI are investigated. In addition, the computational complexities of the algorithms are analysed.

The theoretical analysis is verified by quantitative tests. These include measuring the quality using both the PSNR of the ROI and the border of the background, as well as subjective tests with human test subjects and an analysis of motion vector statistics.

The quantitative analysis shows that the spatio-temporal filter has a better coding efficiency than the other filters and that it successfully re-allocates bits from the background to the ROI. The spatio-temporal filter gives an improvement in average PSNR in the ROI of more than 1.32 dB, or a reduction in bitrate of 31 %, compared to the encoding of the original sequence. This result is similar to or slightly better than that of the spatial filter. The spatio-temporal filter nevertheless performs better overall, since its computational complexity is lower than that of the spatial filter.
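The idea behind such a codec-independent pre-filter can be sketched in a few lines: smooth only the pixels outside a binary ROI mask, so a standard encoder naturally spends fewer bits on the background. This is our own minimal spatial-only version with a plain 3x3 box blur; the thesis's filters are more elaborate and include temporal smoothing.

```python
import numpy as np

def blur3x3(img):
    # 3x3 box blur with edge replication.
    p = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def roi_prefilter(frame, roi_mask):
    # The ROI keeps full detail; the blurred background has less high-frequency
    # content, so the encoder allocates fewer bits to it.
    return np.where(roi_mask, frame, blur3x3(frame))
```

Because the filtering happens before encoding, no modification of the codec is required, which is precisely the property the thesis exploits.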

APA, Harvard, Vancouver, ISO, and other styles
31

Einarsson, Henrik. "Implementation and Performance Analysis of Filternets." Thesis, Linköping University, Department of Biomedical Engineering, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-5601.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Trevino, Alberto. "Improving Filtering of Email Phishing Attacks by Using Three-Way Text Classifiers." BYU ScholarsArchive, 2012. https://scholarsarchive.byu.edu/etd/3103.

Full text
Abstract:
The Internet has been plagued with endless spam for over 15 years. However, in the last five years spam has morphed from an annoying advertising tool into a social engineering attack vector. Much of today's unwanted email tries to deceive users into replying with passwords or bank account information, or into visiting malicious sites which steal login credentials and spread malware. These email-based attacks are known as phishing attacks. Much has been published about these attacks, which try to appear legitimate not only to users but also to spam filters. Several sources indicate that traditional content filters have a hard time detecting phishing attacks because the emails lack the traditional features and characteristics of spam messages. This thesis tests the hypothesis that by separating the messages into three categories (ham, spam and phish), content filters will yield better filtering performance. Even though experimentation showed that three-way classification did not improve performance, several additional premises were tested, including the validity of the claim that phishing emails are too much like legitimate emails and the ability of Naive Bayes classifiers to properly classify emails.
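A three-way content filter of the kind tested here can be sketched with a tiny multinomial Naive Bayes classifier. The training messages, tokenization, and Laplace smoothing below are our own toy choices, not the thesis's corpus or feature set; the point is only that the same code handles two-way (ham/spam) or three-way (ham/spam/phish) classification.

```python
from collections import Counter, defaultdict
import math

def train_nb(examples):
    # Count word occurrences per label; examples is a list of (label, text) pairs.
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    vocab = set()
    for label, text in examples:
        label_counts[label] += 1
        for w in text.split():
            word_counts[label][w] += 1
            vocab.add(w)
    return word_counts, label_counts, vocab

def classify(model, text):
    # Pick the label with the highest log-posterior under Laplace smoothing.
    word_counts, label_counts, vocab = model
    total = sum(label_counts.values())
    def log_posterior(label):
        lp = math.log(label_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.split():
            lp += math.log((word_counts[label][w] + 1) / denom)
        return lp
    return max(label_counts, key=log_posterior)
```

Splitting "not spam" into ham and phish simply adds a third label; nothing else in the classifier changes.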
APA, Harvard, Vancouver, ISO, and other styles
33

Wettermark, Emma, and Linda Berglund. "Multi-Modal Visual Tracking Using Infrared Imagery." Thesis, Linköpings universitet, Datorseende, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-176540.

Full text
Abstract:
Generic visual object tracking is the task of tracking one or several objects through all frames of a video, knowing only the location and size of the target in the initial frame. Visual tracking can be carried out in both the infrared and the visual spectrum simultaneously; this is known as multi-modal tracking. Utilizing both spectra can result in a more versatile tracker, since visual tracking in infrared imagery makes it possible to detect objects even in poor visibility or in complete darkness. However, infrared imagery lacks the level of detail present in visual images. A common method for visual tracking is to use discriminative correlation filters (DCF). These correlation filters are then used to detect the object in every frame of an image sequence. This thesis focuses on investigating aspects of a DCF-based tracker operating in the two different modalities, infrared and visual imagery. First, it was investigated whether the tracking benefits from using two channels instead of one, and what happens to the tracking result if one of those channels is degraded by an external cause. It was also investigated whether the addition of image features can further improve the tracking. The results show that tracking improves when using two channels instead of a single channel. They also show that utilizing two channels is a good way to create a robust tracker, which is still able to perform even though one of the channels is degraded. Deep features, extracted from a pre-trained convolutional neural network, were the image features that improved the tracking the most, although their use made the tracking significantly slower.
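At its core, a DCF tracker learns a filter whose correlation with an image patch peaks at the target location, and the correlation is computed cheaply in the Fourier domain. Below is a minimal single-channel sketch in the spirit of MOSSE-style correlation filters; the regularization constant and sizes are arbitrary, and real DCF trackers add multi-channel features, windowing, and online updates.

```python
import numpy as np

def make_gaussian(shape, center, sigma=2.0):
    # Desired correlation response: a Gaussian peaked at the target position.
    ys, xs = np.indices(shape)
    return np.exp(-((ys - center[0])**2 + (xs - center[1])**2) / (2 * sigma**2))

def train(patch, response, lam=1e-2):
    # Closed-form filter in the Fourier domain: H* = G F* / (F F* + lambda).
    F = np.fft.fft2(patch)
    G = np.fft.fft2(response)
    return G * np.conj(F) / (F * np.conj(F) + lam)

def locate(H, patch):
    # Correlation response; its argmax is the detected target position.
    r = np.fft.ifft2(np.fft.fft2(patch) * H).real
    return np.unravel_index(np.argmax(r), r.shape)
```

A multi-modal variant would train one such filter per channel (infrared and visual) and sum the responses, which is what makes the tracker tolerant to one degraded channel.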
APA, Harvard, Vancouver, ISO, and other styles
34

Gafarov, Timur. "Kontrola kvality finančních výkazů pro zavedení systému vnitřní kontroly." Doctoral thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2009. http://www.nusl.cz/ntk/nusl-233732.

Full text
Abstract:
Although the financial condition of an enterprise is assessed annually, it is necessary to develop, continuously improve and evaluate the system of internal control, and to devise a methodology for assessing the quality of the enterprise's reporting, tailored to the particular enterprise with all its features in view, making use of statistical data, drawing the corresponding conclusions and maintaining constant monitoring. The purpose of developing this mechanism is to detect deviations of the reported data from the actual results of activity; to identify the items that distort the real financial condition of the enterprise; to measure the influence of these distortions, and of reporting quality as a whole, on decision-making; to reveal the causes of these deviations and distortions; and to develop recommendations for correcting particular areas in order to improve reporting quality. How can high reporting quality and internal control create an advantage? A survey of institutional investors reports that investors apply a penalty if they believe a company's internal control to be insufficient: sixty-one percent of respondents said they had avoided investing in companies, and 48% had divested from companies, where internal control was considered inadequate. As additional support, the study goes on to report that 82% of respondents agreed that good internal control was worth a premium on the share price. These institutional investors are pushing for greater transparency on risk issues and the related internal control efforts. Simply put, an organization's ability to implement and maintain a leading-class control framework can create competitive advantage in today's market. A system of financial reporting supported by strong management, quality control and a good legislative base is a key factor in economic development.
The trust of investors in financial and non-financial information is based on strong internal control and on high-quality standards of financial reporting, audit and ethics; standards and internal control thus play a leading role in supporting economic growth and financial stability in a country. Nevertheless, every company meets problems in implementing internal control, among them problems of labor qualification, legislation and so on. It is also necessary to examine successful experience at the micro level.
APA, Harvard, Vancouver, ISO, and other styles
35

Frighetto, Michele. "Regras de associação aplicadas aos filtros de mensagens e canais de informação do projeto direto." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2003. http://hdl.handle.net/10183/96975.

Full text
Abstract:
Neste trabalho é apresentado um breve estudo sobre o processo de descoberta de conhecimento em banco de dados, com enfoque na etapa de mineração de dados através de regras de associação. Propostas por Agrawal em 1993, num estudo chamado análise de cesta de mercado, as regras de associação representam que com um certo grau de suporte e confiança um conjunto de itens pode estar presente numa transação visto que outro conjunto está presente. A necessidade de análise semelhante às realizadas por Agrawal surgiu em outros campos e estas foram estendidas a outras aplicações. Neste, são apresentadas as principais variações sobre o tema regras de associação encontradas na literatura. É proposta a mineração de dados através de regras de associação sobre filtros de mensagens e canais de informação do software de catálogo, agenda e correio eletrônico Direto. Para as pesquisas são utilizadas três ferramentas: Intelligent Miner, CBA e Magnus Opus. Elas foram aplicadas sobre uma lista de discussão da Linguagem Java, pois o projeto Direto ainda não possui mensagens públicas. As ferramentas possuem características distintas: o Intelligent Miner permite a definição de hierarquias sobre os dados que serão minerados; o Magnus Opus trabalha com diversos filtros e com a definição de intervalos para o tratamento de campos numéricos; o CBA permite que sejam especificados suportes múltiplos para os itens.
This work presents a brief review of knowledge discovery in databases, with association rules as the data mining process. Association rules were proposed by Agrawal in 1993 in a basket-data analysis; they express that, with a certain degree of support and confidence, one set of items may be present in a transaction given that another set is present. Association rules have been extended to other applications because analyses similar to Agrawal's are needed in different domains. The main variations on association rules proposed in the literature are presented here, along with the main algorithms. This work proposes applying association rules to the message filters and information channels of Direto, a catalog, schedule and e-mail software package. Three data mining tools were used: Intelligent Miner, CBA and Magnus Opus. They were applied to a Java discussion list because the Direto project does not yet have public messages. Each tool has distinct features: Intelligent Miner allows a hierarchy to be defined over the data to be mined; Magnus Opus works with many filters over the data and permits ranges to be defined over numeric fields; and CBA allows multiple minimum supports to be specified for the items.
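Support and confidence, the two measures Agrawal's rules are built on, can be computed directly from a transaction list. A minimal sketch with invented market-basket data:

```python
def support(transactions, itemset):
    # Fraction of transactions containing every item of the itemset.
    itemset = set(itemset)
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(transactions, antecedent, consequent):
    # Estimated P(consequent | antecedent): support of the union over
    # support of the antecedent alone.
    joint = support(transactions, set(antecedent) | set(consequent))
    return joint / support(transactions, antecedent)
```

A rule {bread} -> {milk} is reported only when both measures exceed user-chosen minimum thresholds, which is exactly what the mining tools above let the analyst tune.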
APA, Harvard, Vancouver, ISO, and other styles
36

Jonsson, Per. "Parallelization of the Kalman filter for multi-output systems on multicore platforms." Thesis, Uppsala universitet, Avdelningen för systemteknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-205553.

Full text
Abstract:
The Kalman filter is a very commonly used signal processing tool for estimating state variables from noisy observations in linear systems. Its cubic complexity motivates the search for more computationally efficient Kalman filter implementations. In this thesis work, previous attempts to parallelize the Kalman filter have been investigated to determine whether any of them could run efficiently on modern multi-core computers. Two of the most interesting methods from a multi-core perspective have undergone further analysis to study how they perform in a multi-core environment. In the analysis, both state estimation accuracy and algorithm speedup have been considered. The experimental results indicate that one of the evaluated algorithms, denoted the Fusion Gain method in this report, is faster on a quad-core CPU than a straightforward implementation of the original Kalman filter when the number of output signals is large. It should be noted, however, that this algorithm is not identical to the true Kalman filter due to an approximation used in the derivation of the method. Despite this detail, it might still be of use in applications where speed is more important than accurate state estimates. The other evaluated method is based on a fast Givens rotation. It was originally implemented on a so-called systolic array, which exploits parallelism differently than multi-core computers do. Unfortunately, this algorithm turned out to run very slowly in the benchmarks, even though its number of floating-point operations should be far lower than that of many of the other methods according to the theoretical analysis. More attention could be devoted to this implementation to locate possible bottlenecks.
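For reference, one predict/update cycle of the standard sequential Kalman filter that the parallel methods are measured against can be sketched as follows (generic notation of our own; A is the state transition matrix, C the output matrix):

```python
import numpy as np

def kalman_step(x, P, z, A, C, Q, R):
    # Predict.
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update: the matrix products and the inverse below are what give the
    # filter its cubic complexity in the state/output dimension.
    S = C @ P_pred @ C.T + R                  # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ (z - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new
```

When the number of output signals is large, the inversion of S dominates, which is the cost the Fusion Gain method attacks.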
APA, Harvard, Vancouver, ISO, and other styles
37

Hrbáček, Jan. "Navigation and Information System for Visually Impaired." Doctoral thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2018. http://www.nusl.cz/ntk/nusl-391817.

Full text
Abstract:
Visual impairment is one of the most common disabilities: up to 3% of the population is reported to suffer from severe visual impairment or loss of sight. Blindness substantially degrades the ability to orient oneself and move through the surrounding environment; without the knowledge of the spatial layout that is otherwise acquired mainly through sight, the affected person simply does not know which way to move towards the goal. The usual solution to the problem of orientation in unknown environments is for the blind person to be accompanied by a sighted guide; this service is, however, very demanding, and the blind person must rely entirely on the guide. This thesis investigates ways of easing spatial orientation for the visually impaired by exploiting existing sensing devices and suitable processing of their data. The topic is treated through an analogy with mobile robotics, and in that spirit it is divided into a localization part and a path-planning part. While path-planning methods are largely available, pedestrian localization often suffers from considerable position inaccuracies, which complicates the use of standard navigation devices by blind users. The position estimate can be improved in several ways, which are examined in the analysis chapter. The thesis first proposes fusing an ordinary GPS receiver with a pedestrian odometry unit, which preserves a faithful trajectory shape at the local level. To mitigate the remaining drift of the estimate, the use of natural landmarks of the environment, tied to a global position reference, is proposed. Building on existing graph-search formalisms, optimality criteria suitable for choosing a blind person's route through an urban environment are examined. A high-level instruction generator based on fuzzy logic is then built with the motivation of a user interface that feels human; it is complemented by immediate haptic output correcting heading deviations. The behavior of the proposed principles was evaluated in realistic experiments capturing the specifics of the target urban environment. The results show considerable improvements in both the maximum and the mean indicators of position error.
APA, Harvard, Vancouver, ISO, and other styles
38

Rocha, Elise Amador. "Diversidade funcional em comunidades de peixes lagunares no Sul do Brasil." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2014. http://hdl.handle.net/10183/96863.

Full text
Abstract:
The paradigms of metacommunity theory attribute different degrees of importance to dispersal, environmental filters, biotic interactions and stochastic processes in community assembly. Including functional traits together with spatial aspects of landscape structure can provide a powerful tool for investigating the different processes that shape biological communities. In this study, we used the analytical and hypothesis-generating potential of functional traits in a fish metacommunity formed by 37 lagoons in a river basin in the coastal region of southern Brazil (29°37' to 30°30' south latitude and 49°74' to 50°24' west longitude). The aims of this work were to identify the relationship between taxonomic diversity and indices of functional redundancy and functional diversity; to verify whether spatial variables determine the variation in functional indices and in taxonomic and functional composition; and to test for trait-convergence and trait-divergence assembly patterns. Using geographic information systems (Spot and Landsat-TM5 images), the lagoons were mapped and structural variables were obtained (area, shape, distance to the sea, coefficient of variation of area, primary connectivity and estuarine connectivity). Ichthyofauna data were obtained through standardized gillnet sampling, and a set of traits related to dispersal ability and use of food resources was recorded. Taxonomic diversity proved to be strongly correlated with functional redundancy and functional diversity. The models that best explain functional redundancy are those that include lagoon shape and the coefficient of variation of lagoon area, but functional diversity was not significantly predicted by any spatial variable.
No trait-convergence or trait-divergence patterns were found, and lagoons with similar spatial characteristics do not have similar functional composition. Our results suggest that the neutral metacommunity paradigm, which predicts functional equivalence among species, is the approach that best explains the structuring of this system.
The metacommunity theory paradigms attribute different degrees of importance to dispersal, environmental filtering, biotic interactions and stochastic processes in community assembly. Including functional traits jointly with the spatial aspects of landscape structure can be a powerful tool for investigating the different processes involved in the organization of biological communities. In this study, we used the analytical and hypothesis-generating potential of functional traits in a fish metacommunity composed of 37 lagoons in a river basin in the coastal region of southern Brazil (29°37' to 30°30' south latitude and 49°74' to 50°24' west longitude). The aims of this work were to identify the relation between taxonomic diversity and indices of functional redundancy and functional diversity, and to verify whether spatial variables determine the variation in functional indices and in the taxonomic and functional composition of fishes. We also looked for trait-convergence and trait-divergence assembly patterns. Using geographic information systems (Spot and Landsat-TM5 images), these lagoons were mapped and structural variables were quantified (area, shape, distance to the ocean, coefficient of variation of area, primary connectivity and estuarine connectivity). Ichthyofauna data were obtained through standardized sampling using gillnets, and a set of traits related to dispersal ability and use of food resources was recorded. Taxonomic diversity proved to be strongly correlated with functional diversity and redundancy. The models that best explain functional redundancy are those involving lagoon shape and the coefficient of variation of area; however, functional diversity was not significantly predicted by any spatial variable. We did not find trait-convergence or trait-divergence assembly patterns, and lagoons that share similar spatial features do not have similar functional composition. 
Our results suggest that the neutral metacommunity paradigm is the approach which has the best explanation for the structure of this system, which predicts functional equivalence among species.
APA, Harvard, Vancouver, ISO, and other styles
39

Wihlney, Kristina. "Ja eller nej till filter? En kvalitativ undersökning i frågan om svenska folkbibliotek bör filtrera Internet eller ej." Thesis, Högskolan i Borås, Institutionen Biblioteks- och informationsvetenskap / Bibliotekshögskolan, 2001. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-20611.

Full text
Abstract:
The focus of this work is to study whether public libraries should offer their patrons unlimited access to the Internet, or restrict which sites they can visit. The most common tool used to limit Internet access is filtering software. The author conducted four interviews with public librarians working in libraries in Southern Sweden. All four librarians had different opinions about restricting Internet access. The most important question asked was whether they had chosen to use filtering, and the reasons for their decision. The Internet is becoming more frequently used in modern libraries. It is a useful source for quickly finding information, but it has several disadvantages compared to traditional media. The information can be erroneous, and the amount of information is enormous. These two problems can make it difficult to find the desired information and to be sure of its accuracy. The Internet has also made it much easier for children to access inappropriate material. These facts demonstrate why librarians have to think carefully when offering their patrons Internet access. The librarians interviewed did not think that filtering is a kind of censorship. Three of the librarians mentioned children when asked about filtering. Two of them were in favor of filtering, but the third was against it. She stated that in order to learn, children need to be exposed to information which is suspect, so that they can be more critical about what they see on the Internet. The main conclusion is that regardless of the choice of whether or not to use filtering, librarians need to be aware of the problems associated with Internet access so they can make an informed decision. Filtering, in some instances, results in a positive effect, while in other cases it is inappropriate.
Uppsatsnivå: D
APA, Harvard, Vancouver, ISO, and other styles
40

Islam, Md Monowarul, and Muftadi Ullah Arpon. "Image Reconstruction Techniques using Kaiser Window in 2D CT Imaging." Thesis, Linnéuniversitetet, Institutionen för fysik och elektroteknik (IFE), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-94135.

Full text
Abstract:
The traditional Computed Tomography (CT) is based on the Radon Transform and its inversion. The Radon transform uses parallel beam geometry and its inversion is based on the Fourier slice theorem. In practice, it is very efficient to employ a back-projection algorithm in connection with the Fast Fourier Transform, which can be interpreted as 1-D filtering along the radial dimension of the 2-D Fourier plane of the transformed image. This approach can easily be adapted to windowing techniques in the frequency domain, giving the capability to reduce image noise. In this work we investigate the capabilities of the so-called Kaiser window (which gives an optimal trade-off between main-lobe energy and sidelobe suppression) to achieve a near optimal trade-off between noise reduction and image sharpness in the context of Radon inversion. Finally, we simulate our image reconstruction using MATLAB software and evaluate our results using the normalized Least Squares Error (LSE). We conclude that the Kaiser window can be used to achieve an optimal trade-off between noise reduction and sharpness in the image, and hence outperforms all the other classical window functions in this regard.
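The windowed filtering step described in this abstract can be illustrated in a few lines. The following is a minimal sketch (not the authors' code, which uses MATLAB), assuming NumPy's `np.kaiser` window applied across the radial frequency axis of the ramp filter:

```python
import numpy as np

def kaiser_ramp_filter(n_samples, beta):
    """Ramp (Ram-Lak) filter tapered by a Kaiser window.

    beta = 0 reduces to the plain ramp (sharpest image, most noise);
    larger beta attenuates high frequencies (less noise, softer edges).
    """
    freqs = np.fft.fftfreq(n_samples)              # radial frequency axis
    ramp = np.abs(freqs)                           # |w| ramp filter
    window = np.fft.ifftshift(np.kaiser(n_samples, beta))
    return ramp * window

sharp = kaiser_ramp_filter(256, beta=0.0)    # plain ramp filter
smooth = kaiser_ramp_filter(256, beta=8.0)   # heavily tapered variant
```

In filtered back-projection, each projection's FFT would be multiplied by such a filter before the inverse transform; sweeping `beta` traces out the noise/sharpness trade-off the thesis studies.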
APA, Harvard, Vancouver, ISO, and other styles
41

Refors, Michael. "Information filter based sensor fusion for estimation of vehicle velocity." Thesis, KTH, Maskinkonstruktion (Inst.), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-192157.

Full text
Abstract:
In this thesis, the possibility of estimating the velocity of a Heavy-duty vehicle (HDV) based on the Global Positioning System (GPS), an Inertial Measurement Unit (IMU) and the propeller shaft tachometer is investigated. The thesis was performed at Scania CV AB. The objective was to find an alternative to the wheel encoders that are currently used for velocity estimation. Three different sensor configurations were tested: the first (SC1) was based on GPS and an accelerometer, the second (SC2) was based on GPS, an accelerometer and a gyroscope, and the third (SC3) was based on GPS, an accelerometer and the propeller shaft tachometer. An experimental sensor architecture for collection of measurement data was built. The sensor configurations were evaluated in simulations based on measurement data collected from a test vehicle at Scania's test track in Södertälje. An Information filter (IF) was used for decentralized fusion of sensor measurements. The sensor configurations were evaluated against the wheel encoders and a high quality GPS/IMU reference system using the Root Mean Squared Error (RMSE), Mean Signed Deviation (MSD) and maximum error. It was concluded that the sensor configurations based solely on GPS and IMU are not robust enough during GPS outages because of the IMU's drift. An alternative source to GPS for correction of the IMU errors was thus necessary; the propeller shaft tachometer was used for this. The RMSE for this sensor configuration (SC3) was reduced by 37% and the MSD by 60% in comparison to the wheel-encoder-based velocity in the most extreme test performed, in which the wheels slip and the GPS signal is erroneous on two occasions. SC3 is thus proposed for further development. This work lays the basis for real-time implementation of the proposed sensor configuration and shows the feasibility of using the IF for decentralized multi-sensor fusion. 
It is also suggested to use the IF for integration of multiple sensors to create a refined and redundant velocity estimation.
This thesis investigates the possibility of estimating the velocity of a heavy-duty vehicle based on GPS, an IMU and the drive shaft tachometer. The project was carried out at Scania CV AB. The objective was to find an alternative to the wheel speed sensors currently used for velocity estimation. Three different sensor configurations were tested. The first (SC1) was based on GPS and a longitudinal accelerometer, the second (SC2) on GPS, a longitudinal accelerometer and a gyroscope measuring pitch. The third (SC3) was based on GPS, a longitudinal accelerometer and the drive shaft tachometer. An experimental sensor architecture was built for collecting measurement data. The sensor configurations were evaluated in simulations based on measurement data from a test vehicle, collected at Scania's test track in Södertälje. An information filter (IF) was used for decentralized fusion of sensor data. The sensor configurations were evaluated against the wheel speed sensors and a high-quality GPS/IMU reference system using the statistical measures Root Mean Square Error (RMSE), Mean Signed Deviation (MSD) and maximum error. The results showed that the sensor configurations based solely on GPS and IMU were not robust enough when the GPS signal was lost, owing to the IMU's tendency to drift. An alternative source to GPS for correcting the IMU errors was therefore necessary; the drive shaft tachometer was used for this. This sensor configuration (SC3) showed a 37% improvement in RMSE and a 60% improvement in MSD compared to the wheel speed sensors in the most extreme test performed, in which the wheels spin and the GPS signal is erroneous on two occasions. SC3 is therefore proposed for further development. This work lays the foundation for continued development of a real-time implementation of the proposed sensor configuration, and demonstrates the feasibility of using an IF for decentralized multi-sensor fusion. 
It is also suggested to use the IF to integrate multiple sensors to create a refined and redundant velocity estimate.
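The additive structure that makes the information filter attractive for decentralized fusion can be shown with a minimal sketch (an illustration, not Scania's implementation): the state is kept as an information vector and matrix, and each independent sensor's contribution simply adds.

```python
import numpy as np

class InformationFilter:
    """Information-form Kalman filter: the state is stored as the
    information matrix Y = inv(P) and information vector y = Y @ x."""

    def __init__(self, x0, P0):
        self.Y = np.linalg.inv(P0)
        self.y = self.Y @ x0

    def update(self, z, H, R):
        """Measurement update. Contributions from independent sensors
        are additive, which is what makes decentralized fusion easy."""
        Rinv = np.linalg.inv(R)
        self.Y = self.Y + H.T @ Rinv @ H
        self.y = self.y + H.T @ Rinv @ z

    def state(self):
        return np.linalg.solve(self.Y, self.y)

# Two independent velocity sensors (say, GPS and a shaft tachometer)
# observing the same scalar state; their information just adds up.
f = InformationFilter(x0=np.array([0.0]), P0=np.array([[100.0]]))
H = np.array([[1.0]])
f.update(z=np.array([20.0]), H=H, R=np.array([[4.0]]))
f.update(z=np.array([22.0]), H=H, R=np.array([[4.0]]))
v_est = f.state()[0]
```

The estimate lands between the two measurements, weighted by their information content, with a small pull toward the (vague) prior.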
APA, Harvard, Vancouver, ISO, and other styles
42

Zhao, Yubin [Verfasser]. "Adaptive Particle Filters for Wireless Indoor Target Tracking / Yubin Zhao." Berlin : Freie Universität Berlin, 2014. http://d-nb.info/1062950186/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Penberthy, Harris Stephen. "Natural algorithms in digital filter design." Thesis, University of Plymouth, 2001. http://hdl.handle.net/10026.1/2752.

Full text
Abstract:
Digital filters are an important part of Digital Signal Processing (DSP), which plays vital roles within the modern world, but their design is a complex task requiring a great deal of specialised knowledge. An analysis of this design process is presented, which identifies opportunities for the application of optimisation. The Genetic Algorithm (GA) and Simulated Annealing are problem-independent and increasingly popular optimisation techniques. They do not require detailed prior knowledge of the nature of a problem, and are unaffected by a discontinuous search space, unlike traditional methods such as calculus and hill-climbing. Potential applications of these techniques to the filter design process are discussed, and presented with practical results. Investigations into the design of Frequency Sampling (FS) Finite Impulse Response (FIR) filters using a hybrid GA/hill-climber proved especially successful, improving on published results. An analysis of the search space for FS filters provided useful information on the performance of the optimisation technique. The ability of the GA to trade off a filter's performance with respect to several design criteria simultaneously, without intervention by the designer, is also investigated. Methods of simplifying the design process by using this technique are presented, together with an analysis of the difficulty of the non-linear FIR filter design problem from a GA perspective. This gave an insight into the fundamental nature of the optimisation problem, and also suggested future improvements. The results gained from these investigations allowed the framework for a potential 'intelligent' filter design system to be proposed, in which embedded expert knowledge, Artificial Intelligence techniques and traditional design methods work together. This could deliver a single tool capable of designing a wide range of filters with minimal human intervention, and of proposing solutions to incomplete problems. 
It could also provide the basis for the development of tools for other areas of DSP system design.
APA, Harvard, Vancouver, ISO, and other styles
44

Othman, Mohd Ridzal. "DSP-based active power filter." Thesis, Loughborough University, 1998. https://dspace.lboro.ac.uk/2134/6966.

Full text
Abstract:
Harmonics in systems are conventionally suppressed using passive tuned filters, which have practical limitations in terms of overall cost, size and performance, and are particularly unsatisfactory when a large number of harmonics is involved. Active power filtering is an alternative approach in which the filter injects suitable compensation currents to cancel the harmonic currents, usually through the use of power electronic converters. This type of filter does not exhibit the drawbacks normally associated with its passive counterpart, and a large number of harmonics can be compensated by a single unit without incurring additional cost or performance degradation. This thesis investigates an active power filter configuration incorporating instantaneous reactive power theory to calculate the compensation currents. Since the original equations for determining the reference compensation currents are defined in two imaginary phases, considerable computation time is necessary to transform them from the real three-phase values. The novel approach described in the thesis minimises the required computation time by calculating the equations directly in terms of the phase values, i.e. three-phase currents and voltages. Furthermore, by utilising a sufficiently fast digital signal processor (DSP) to perform the calculation, real-time compensation can be achieved with greater accuracy. The results obtained show that the proposed approach leads to further harmonic suppression in both the current and voltage waveforms compared to the original approach, due to a considerable reduction in the computation time of the reference compensation currents.
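The phase-domain formulation the abstract refers to can be illustrated with the textbook instantaneous p-q computation written directly in three-phase quantities, skipping the Clarke (alpha-beta) transform. This is a generic sketch, not the thesis's DSP code:

```python
import numpy as np

def instantaneous_pq(va, vb, vc, ia, ib, ic):
    """Instantaneous real power p and imaginary power q computed
    directly from three-phase voltages and currents."""
    p = va * ia + vb * ib + vc * ic
    q = ((vb - vc) * ia + (vc - va) * ib + (va - vb) * ic) / np.sqrt(3.0)
    return p, q

# Balanced sinusoidal three-phase set, currents in phase with voltages
# (unity power factor), over one 50 Hz cycle:
t = np.linspace(0.0, 0.02, 1000)
w = 2 * np.pi * 50.0
va, vb, vc = [np.cos(w * t - k * 2 * np.pi / 3) for k in range(3)]
p, q = instantaneous_pq(va, vb, vc, va, vb, vc)
```

For this balanced, in-phase case p is constant and q vanishes; in an active power filter, the oscillating part of p and all of q would be fed back as the reference compensation currents.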
APA, Harvard, Vancouver, ISO, and other styles
45

Del, Vigna Matteo. "Information asymmetry and equilibrium models in behavioral finance." Paris 9, 2012. http://basepub.dauphine.fr/xmlui/handle/123456789/9075.

Full text
Abstract:
In this thesis we address two recent topics in behavioral finance concerning portfolio optimization and the existence of equilibria in financial markets. We first introduce theories developed to represent the preferences of behavioral agents. In the second chapter, we study continuous-time portfolio optimization for an insider who follows Cumulative Prospect Theory (CPT). We provide a solution in the strong, incomplete and weak information cases and compare it with an agent who maximizes expected utility (EU). In the third chapter, we study static equilibrium models in a simple financial market. The agents have heterogeneous preferences and interact with one another. We give sufficient conditions for existence in the case of a large EU agent, a large CPT agent and an accommodating market maker. Finally, the case of several EU and CPT agents is considered.
In this thesis we explore two recent topics in behavioral finance, namely portfolio optimization by non-expected utility insiders and the existence of equilibria in financial markets populated by heterogeneous agents. Firstly, we review a number of theories which have been used to model behavioral decision makers' preferences. In the second chapter, we set and solve a portfolio optimization problem in continuous time for an insider trader following Cumulative Prospect Theory (CPT). We provide an analysis in the strong as well as partial and weak information cases, and we perform a comparison with respect to an Expected Utility (EU) decision maker. In the third chapter, we study equilibrium models in a one-period stylized financial market where agents with different preference structures can interact. We give sufficient conditions for existence when a large EU investor, a large CPT investor and an accommodating market maker trade. Finally, the case of many EU and many CPT agents is presented.
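For readers unfamiliar with CPT, its preference ingredients can be sketched with the standard Tversky-Kahneman functional forms. The parameter values below are the commonly cited experimental estimates, not values taken from this thesis:

```python
import numpy as np

def cpt_value(x, alpha=0.88, lam=2.25):
    """Tversky-Kahneman value function: concave over gains, convex and
    steeper over losses (loss aversion lam > 1), reference point at 0."""
    return np.where(x >= 0, 1.0, -lam) * np.abs(x) ** alpha

def prob_weight(p, gamma=0.61):
    """Inverse-S probability weighting: small probabilities are
    overweighted, large ones underweighted."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

gl = cpt_value(np.array([10.0, -10.0]))   # a gain and an equal-sized loss
```

The loss looms larger than the equal-sized gain, and rare events get more decision weight than their probability warrants; these are the two departures from expected utility that drive the CPT agent's behavior.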
APA, Harvard, Vancouver, ISO, and other styles
46

Udovenko, Nikita. "3D geometrijos atstatymas panaudojant Kinect jutiklį." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2012. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2012~D_20120723_110949-72095.

Full text
Abstract:
This thesis investigates the possibilities of selective 3D reconstruction of environment geometry using the Kinect sensor's combined image-depth camera: a mathematical reconstruction model is presented together with the coefficients needed to parametrize it, the magnitude of the expected errors is characterized, a procedure for extracting the relevant scene data from the scene is proposed, and the noise in the resulting model and the possibilities and methods for removing it are studied. The reconstructed geometry is expressed in the metric system of measurement, and each 3D scene point additionally stores its color information. The application developed in the practical part is implemented using C++ and the OpenCV mathematical programming libraries. It performs 3D geometry reconstruction according to the presented theoretical model, extracts the relevant scene data, removes noise, and can save the resulting data to a PLY file readable by 3D modeling applications. The thesis consists of an introduction, 3 chapters, conclusions and a list of references. Scope of the work: 61 pages of text without appendices, 43 figures, 4 tables, 22 bibliographic sources.
The purpose of this thesis is to investigate the possibilities of selective 3D geometry reconstruction using the Kinect combined image-depth camera: a mathematical reconstruction model is provided, together with coefficients to parametrize it and estimates of the expected precision; a procedure for filtering the background out of the depth image is proposed, and depth-image noise and the possibilities for its removal are studied. The reconstructed geometry is expressed in the metric system of measurement, and each 3D point also retains its color data. The resulting application is implemented in the C++ programming language and uses the OpenCV programming library. It implements 3D geometry reconstruction as described in the theory section, removes the background and noise from the depth image, and can save the resulting 3D geometry to a file format readable by 3D modeling applications. Structure: introduction, 3 chapters, conclusions, references. The thesis consists of 61 pages of text, 43 figures, 4 tables, and 22 bibliographical entries.
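The depth-to-geometry step that such a reconstruction model formalizes amounts to pinhole back-projection. A minimal sketch follows; the intrinsics used are illustrative placeholders, since real values come from calibrating the sensor, not from this thesis:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) to 3D points using the
    pinhole camera model; fx, fy, cx, cy are calibrated intrinsics."""
    v, u = np.indices(depth.shape)        # pixel row/column grids
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)

# Illustrative intrinsics and a flat "wall" 2 m from the camera:
depth = np.full((480, 640), 2.0)
pts = depth_to_points(depth, fx=580.0, fy=580.0, cx=320.0, cy=240.0)
```

Each output point keeps its pixel correspondence, so attaching the color channel (as the thesis does) is a matter of stacking the RGB image alongside the coordinates before writing the PLY file.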
APA, Harvard, Vancouver, ISO, and other styles
47

Meindersma, Johannes. "Inferring competitiveness without price information : an application to the motor insurance portfolio of Fidelidade." Master's thesis, Instituto Superior de Economia e Gestão, 2020. http://hdl.handle.net/10400.5/20488.

Full text
Abstract:
Mestrado em Actuarial Science
This work proposes new methods for inferring the competitiveness of motor insurance policies when the available price information is limited. State-space functionalities are used to filter noise from the observations, introducing time dependencies for transitions between insurers and for conversion. Data on transitions between insurance companies in the Portuguese market were collected to estimate transition probabilities between insurers. The binomial hidden Markov model proved somewhat limited by its assumption of a discrete state space. The Kalman filter was more successful in removing noise from the observations, and provided intuitive results that are interpretable even for a non-technical audience. Conversion data were also used to infer weekly estimates of competitiveness changes. We proposed penalized regression models in which time is included as a random-walk structure. The model uses credibility weights to combine the changes in each segment with the changes of the whole portfolio. The hierarchical structure of the model yields estimates of competitiveness changes that are more interpretable than those of generalized linear models, where time is included as a categorical variable. Moreover, the proposed method outperforms generalized linear models in terms of predictive performance. Both methods can serve as a tool to support insurers' pricing decision-making process when the availability of reliable price information is limited.
This work proposes several novel methods for inferring the competitiveness of motor insurance policies in a setting of limited availability of price information. State-space functionalities are employed to filter noise from observations by introducing underlying time-dependent structures for transition and conversion data. Transition data on the insurance companies of vehicles in the Portuguese insurance market were collected to analyze the evolution of insurers' incoming transition probabilities. The binomial hidden Markov model is somewhat restricted by its assumption of a discrete state space. The Kalman smoother is more successful in removing noise from the observations, and provides intuitive results that are interpretable for a non-technical audience. Furthermore, conversion data were used to infer weekly segment-specific estimates of competitiveness changes. We have proposed a penalized regression framework in which time is included as a random-walk structure. The model applies credibility weighting to each segment's changes, using the full portfolio's changes as the complement. The hierarchical structure of the model produces estimates of competitiveness changes that are more interpretable than those of generalized linear models, where time is included as a categorical variable. Moreover, the proposed method outperforms the generalized linear models in terms of predictive performance. Both methods can serve as tools to support insurers' pricing decision-making process when the availability of reliable price information is limited.
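The state-space smoothing idea behind this abstract can be illustrated with a minimal scalar Kalman filter for a random-walk level observed with noise. This is a generic sketch with made-up parameters, not the thesis's model:

```python
import numpy as np

def kalman_filter_rw(obs, q=0.01, r=0.25):
    """Scalar Kalman filter for a random-walk state observed with noise:
    x_t = x_{t-1} + w_t (variance q),  z_t = x_t + v_t (variance r)."""
    x, p = obs[0], 1.0
    out = []
    for z in obs:
        p = p + q                 # predict: the level may have drifted
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update toward the new observation
        p = (1 - k) * p
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(1)
# A slowly drifting "competitiveness" level, observed through noisy
# weekly conversion-based estimates:
true_level = np.cumsum(rng.normal(0.0, 0.05, 100)) + 0.3
noisy = true_level + rng.normal(0.0, 0.5, 100)
filtered = kalman_filter_rw(noisy)
```

Because the true level moves slowly relative to the observation noise, the filtered series tracks it far more closely than the raw weekly estimates do, which is the behavior the thesis exploits for conversion data.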
APA, Harvard, Vancouver, ISO, and other styles
48

Kramer, Joseph E. "PSK shift timing information detection using image processing and a matched filter." Thesis, Monterey, California : Naval Postgraduate School, 2009. http://edocs.nps.edu/npspubs/scholarly/theses/2009/Sep/09Sep%5FKramer.pdf.

Full text
Abstract:
Thesis (M.S. in Electrical Engineering)--Naval Postgraduate School, September 2009.
Thesis Advisor(s): Fargues, Monique P. "September 2009." Description based on title screen as viewed on 5 November 2009. Author(s) subject terms: Phase shift keyed signals, image processing, temporal correlation function, edge detection, morphological operations, two-dimensional matched filter. Includes bibliographical references (p. 125). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
49

Sannelli, Claudia [Verfasser], and Klaus-Robert [Akademischer Betreuer] Müller. "Optimizing Spatial Filters to reduce BCI Inefficiency / Claudia Sannelli. Betreuer: Klaus-Robert Müller." Berlin : Universitätsbibliothek der Technischen Universität Berlin, 2013. http://d-nb.info/1036263010/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Sannelli, Claudia [Verfasser], and Klaus-Robert [Akademischer Betreuer] Müller. "Optimizing Spatial Filters to reduce BCI Inefficiency / Claudia Sannelli. Betreuer: Klaus-Robert Müller." Berlin : Universitätsbibliothek der Technischen Universität Berlin, 2013. http://d-nb.info/1036263010/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
