To view other types of publications on this topic, follow the link: Signal processing Data processing.

Dissertations on the topic "Signal processing Data processing"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Consult the top 50 dissertations for your research on the topic "Signal processing Data processing".

Next to every entry in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, whenever these details are available in the metadata.

Browse dissertations from a wide variety of disciplines and compile your bibliography correctly.

1

Bañuelos, Saucedo Miguel Angel. "Signal and data processing for THz imaging." Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/signal-and-data-processing-for-thz-imaging(58a646f3-033b-4771-b1dc-d1f9fc6dfbf0).html.

Full text of the source
Abstract:
This thesis presents the research carried out on signal and data processing for THz imaging, with emphasis on noise analysis and tomography in amplitude contrast using a THz time-domain spectrometry system. A THz computerized tomography system was built, tested and characterized. The system is controlled from a personal computer using a program developed ad hoc. Detail is given on the operating principles of the system’s numerous optical and THz components, the design of a computer-based fast lock-in amplifier, the proposal of a local apodization method for reducing spurious oscillations in a THz spectrum, and the use of a parabolic interpolation of integrated signals as a method for estimating THz pulse delay. It is shown that our system can achieve a signal-to-noise ratio of 60 dB in spectrometry tests and 47 dB in tomography tests. Styrofoam phantoms of different shapes and up to 50x60 mm in size are used for analysis. Tomographic images are reconstructed at different frequencies from 0.2 THz to 2.5 THz, showing that volume scattering and edge contrast increase with wavelength. Evidence is given that refractive losses and surface scattering are responsible for the high edge contrast in THz tomography images reconstructed in amplitude contrast. A modified Rayleigh roughness factor is proposed to model surface transmission scattering. It is also shown that volume scattering can be modelled by the material’s attenuation coefficient. The use of 4 mm apertures as spatial filters is compared against full beam imaging, and the limitations of the Rayleigh range are also addressed. It was estimated that for some frequencies between 0.5 THz and 1 THz the Rayleigh range is sufficient for the tested phantoms. Results on the influence of attenuation and scattering at different THz frequencies can be applied to the development of THz CW imaging systems and as a point of departure for the development of more complex scattering models.
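The abstract singles out parabolic interpolation of integrated signals as the pulse-delay estimator. A minimal Python sketch of the generic technique (three-point parabolic interpolation around the sample maximum) is given below; the function name, the array interface and the use of the waveform maximum as the coarse peak are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np

def parabolic_delay(waveform, dt):
    """Sub-sample delay estimate: fit a parabola through the peak sample
    and its two neighbours and return the time of the parabola's vertex."""
    k = int(np.argmax(waveform))              # coarse (sample-level) peak location
    if k == 0 or k == len(waveform) - 1:
        return k * dt                         # peak at the edge: no interpolation possible
    y0, y1, y2 = waveform[k - 1], waveform[k], waveform[k + 1]
    offset = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)   # vertex of the fitted parabola
    return (k + offset) * dt
```

The delay between two channels would then be the difference of two such estimates, refined to a fraction of the sampling interval.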
2

Varnavas, Andreas Soteriou. "Signal processing methods for EEG data classification." Thesis, Imperial College London, 2008. http://hdl.handle.net/10044/1/11943.

Full text of the source
3

Ma, Ding. "Miniature data acquisition system for multi-channel sensor arrays." Pullman, Wash. : Washington State University, 2010. http://www.dissertations.wsu.edu/Thesis/Spring2010/d_ma_042610.pdf.

Full text of the source
Abstract:
Thesis (M.S. in electrical engineering)--Washington State University, May 2010.
Title from PDF title page (viewed on July 23, 2010). "School of Electrical Engineering and Computer Science." Includes bibliographical references (p. 55-57).
4

Cena, Bernard Maria. "Reconstruction for visualisation of discrete data fields using wavelet signal processing." University of Western Australia. Dept. of Computer Science, 2000. http://theses.library.uwa.edu.au/adt-WU2003.0014.

Full text of the source
Abstract:
The reconstruction of a function and its derivative from a set of measured samples is a fundamental operation in visualisation. Multiresolution techniques, such as wavelet signal processing, are instrumental in improving the performance and algorithm design for data analysis, filtering and processing. This dissertation explores the possibilities of combining traditional multiresolution analysis and processing features of wavelets with the design of appropriate filters for reconstruction of sampled data. On the one hand, a multiresolution system allows data feature detection, analysis and filtering. Wavelets have already been proven successful in these tasks. On the other hand, a choice of discrete filter which converges to a continuous basis function under iteration permits efficient and accurate function representation by providing a “bridge” from the discrete to the continuous. A function representation method capable of both multiresolution analysis and accurate reconstruction of the underlying measured function would make a valuable tool for scientific visualisation. The aim of this dissertation is not to try to outperform existing filters designed specifically for reconstruction of sampled functions. The goal is to design a wavelet filter family which, while retaining properties necessary to perform multiresolution analysis, possesses features to enable the wavelets to be used as efficient and accurate “building blocks” for function representation. The application to visualisation is used as a means of practical demonstration of the results. Wavelet and visualisation filter design is analysed in the first part of this dissertation and a list of wavelet filter design criteria for visualisation is collated. Candidate wavelet filters are constructed based on a parameter space search of the BC-spline family and direct solution of equations describing filter properties. Further, a biorthogonal wavelet filter family is constructed based on point and average interpolating subdivision and using the lifting scheme. The main feature of these filters is their ability to reconstruct arbitrary degree piecewise polynomial functions and their derivatives using measured samples as direct input into a wavelet transform. The lifting scheme provides an intuitive, interval-adapted, time-domain filter and transform construction method. A generalised factorisation for arbitrary primal and dual order point and average interpolating filters is a result of the lifting construction. The proposed visualisation filter family is analysed quantitatively and qualitatively in the final part of the dissertation. Results from wavelet theory are used in the analysis which allow comparisons among wavelet filter families and between wavelets and filters designed specifically for reconstruction for visualisation. Lastly, the performance of the constructed wavelet filters is demonstrated in the visualisation context. One-dimensional signals are used to illustrate reconstruction performance of the wavelet filter family from noiseless and noisy samples in comparison to other wavelet filters and dedicated visualisation filters. The proposed wavelet filters converge to basis functions capable of reproducing functions that can be represented locally by arbitrary order piecewise polynomials. They are interpolating, smooth and provide asymptotically optimal reconstruction in the case when samples are used directly as wavelet coefficients.
The reconstruction performance of the proposed wavelet filter family approaches that of continuous spatial domain filters designed specifically for reconstruction for visualisation. This is achieved in addition to retaining multiresolution analysis and processing properties of wavelets.
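Because the abstract builds its biorthogonal filters from interpolating subdivision via the lifting scheme, here is a minimal generic Python sketch of one lifting step (linear interpolating predict, mean-preserving update, periodic boundary handling); it illustrates the scheme itself, not the thesis's particular filter family or its interval-adapted boundary treatment.

```python
import numpy as np

def lifting_forward(x):
    """One level of a lifting-scheme wavelet transform: split into even/odd samples,
    predict each odd sample from its even neighbours, then update the evens so the
    coarse signal keeps the running average (length of x is assumed even)."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    pred = 0.5 * (even + np.roll(even, -1))               # linear interpolating prediction
    detail = odd - pred                                   # wavelet (detail) coefficients
    approx = even + 0.25 * (detail + np.roll(detail, 1))  # update step (mean preserving)
    return approx, detail

approx, detail = lifting_forward(np.sin(np.linspace(0, 4 * np.pi, 64)))
```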
5

T, N. Santhosh Kumar, K. Abdul Samad A, and M. Sarojini K. "DSP BASED SIGNAL PROCESSING UNIT FOR REAL TIME PROCESSING OF VIBRATION AND ACOUSTIC SIGNALS OF SATELLITE LAUNCH VEHICLES." International Foundation for Telemetering, 1995. http://hdl.handle.net/10150/608530.

Full text of the source
Abstract:
International Telemetering Conference Proceedings / October 30-November 02, 1995 / Riviera Hotel, Las Vegas, Nevada
Measurement of vibration and acoustic signals at various locations in the launch vehicle is important to establish the vibration and acoustic environment encountered by the launch vehicle during flight. The vibration and acoustic signals are wideband and require very large telemetry bandwidth if directly transmitted to ground. The DSP-based Signal Processing Unit is designed to measure and analyse acoustic and vibration signals onboard the launch vehicle and transmit the computed spectrum to ground through the centralised baseband telemetry system. The analysis techniques employed are power spectral density (PSD) computations using the Fast Fourier Transform (FFT) and 1/3rd octave analysis using digital Infinite Impulse Response (IIR) filters. The programmability of all analysis parameters is achieved using EEPROM. This paper discusses the details of measurement and analysis techniques, design philosophy, tools used and implementation schemes. The paper also presents the performance results of flight models.
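As a rough illustration of the two analysis techniques named above, the Python sketch below estimates a PSD with an FFT-based (Welch) estimator and computes the mean-square level in a single 1/3-octave band realised as a digital IIR band-pass filter; the sample rate, test signal, band centre and filter order are assumptions for the example only.

```python
import numpy as np
from scipy import signal

fs = 20000.0                                        # assumed sample rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 1200 * t) + 0.5 * np.random.randn(t.size)   # stand-in vibration record

f, pxx = signal.welch(x, fs=fs, nperseg=1024)       # PSD via averaged FFT periodograms

centre = 1000.0                                     # the 1 kHz third-octave band
lo, hi = centre / 2 ** (1 / 6), centre * 2 ** (1 / 6)
sos = signal.butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")   # digital IIR band-pass
band_level = np.mean(signal.sosfilt(sos, x) ** 2)   # mean-square level in that band
```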
6

Roberts, G. "Some aspects of seismic signal processing and analysis." Thesis, Bangor University, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.379692.

Full text of the source
7

Sungoor, Ala M. H. "Genomic signal processing for enhanced microarray data clustering." Thesis, Kingston University, 2009. http://eprints.kingston.ac.uk/20310/.

Full text of the source
Abstract:
Genomic signal processing is a new area of research that combines genomics with digital signal processing methodologies for enhanced genetic data analysis. Microarray is a well known technology for the evaluation of thousands of gene expression profiles. By considering these profiles as digital signals, the power of DSP methods can be applied to produce robust and unsupervised clustering of microarray samples. This can be achieved by transforming expression profiles into spectral components which are interpreted as a measure of profile similarity. This thesis introduces enhanced signal processing algorithms for robust clustering of microarray gene expression samples. The main aim of the research is to design and validate novel genomic signal processing methodologies for microarray data analysis based on different DSP methods. More specifically, clustering algorithms based on linear prediction coding, wavelet decomposition and fractal dimension methods, combined with a vector quantisation algorithm, are applied and compared on a set of test microarray datasets. These techniques take microarray gene expression samples as input and produce predictive coefficient arrays associated with the microarray data, which are quantised into discrete levels and consequently used for sample clustering. A variety of standard microarray datasets are used in this work to validate the robustness of these methods compared to conventional methods. Two well-known validation approaches, i.e. the Silhouette and Davies-Bouldin index methods, are applied to evaluate the genomic signal processing clustering results internally and externally. In conclusion, the results demonstrate that genomic signal processing based methods outperform traditional methods by providing higher clustering accuracy. Moreover, the study shows that the local features of the gene expression signals are better clustered using wavelets compared to the other DSP methods.
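To make the pipeline concrete, here is a minimal Python sketch of one of the named feature extractors (linear prediction coding via the Yule-Walker equations) followed by clustering of the coefficient vectors; k-means stands in for the thesis's vector quantisation step, and the random "profiles" array, model order and cluster count are assumptions.

```python
import numpy as np
from scipy.linalg import solve_toeplitz
from sklearn.cluster import KMeans

def lpc_coeffs(x, order=8):
    """Autocorrelation-method LPC: solve the Yule-Walker equations for one expression profile."""
    x = x - x.mean()
    r = np.correlate(x, x, mode="full")[x.size - 1:x.size + order]   # r[0] ... r[order]
    return solve_toeplitz((r[:-1], r[:-1]), r[1:])                   # predictor coefficients

profiles = np.random.randn(40, 500)                      # stand-in microarray samples treated as signals
features = np.array([lpc_coeffs(p) for p in profiles])   # predictive coefficient arrays

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)   # VQ-style clustering
```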
8

Hloupis, Georgios. "Seismological data acquisition and signal processing using wavelets." Thesis, Brunel University, 2009. http://bura.brunel.ac.uk/handle/2438/3470.

Full text of the source
Abstract:
This work deals with two main fields: a) the design, construction, installation, testing, evaluation, deployment and maintenance of the Seismological Network of Crete (SNC) of the Laboratory of Geophysics and Seismology (LGS) at the Technological Educational Institute (TEI) at Chania; and b) the use of the Wavelet Transform (WT) in several applications during the operation of the aforementioned network. SNC began its operation in 2003. It is designed and built to provide denser network coverage, real-time data transmission to CRC, real-time telemetry, use of wired ADSL lines and dedicated private satellite links, real-time data processing and estimation of source parameters, as well as rapid dissemination of results. All the above are implemented using commercial hardware and software, modified where necessary, with the author designing and deploying additional software modules. Up to now (July 2008) SNC has recorded 5500 identified events (around 970 more than those reported by the national bulletin over the same period), and its seismic catalogue is complete for magnitudes over 3.2, whereas the national catalogue was complete only for magnitudes over 3.7 before the operation of SNC. During its operation, several applications at SNC used the WT as a signal processing tool. These applications benefited from the suitability of the WT for non-stationary signals such as seismic signals. These applications are: the HVSR method, where the WT is used to reveal otherwise undetectable non-stationarities in order to eliminate errors in the estimation of a site's fundamental frequency; denoising, where several wavelet denoising schemes are compared with the band-pass filtering widely used in seismology in order to demonstrate the superiority of wavelet denoising and to choose the most appropriate scheme for different signal-to-noise ratios of seismograms; EEWS, where the WT is used for producing magnitude prediction equations and epicentral estimations from the first 5 seconds of the P-wave arrival; and, as an alternative analysis tool, the detection of significant indicators in temporal patterns of seismicity, where multiresolution wavelet analysis of seismicity is used to estimate (over a period of several years) the time when the maximum emitted earthquake energy was observed.
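Since the abstract compares wavelet denoising schemes with conventional band-pass filtering, the Python sketch below shows one common scheme (soft thresholding with the universal threshold); the wavelet, decomposition level and threshold rule are assumptions and not necessarily those evaluated in the thesis.

```python
import numpy as np
import pywt

def wavelet_denoise(trace, wavelet="db4", level=5):
    """Soft-threshold wavelet denoising of a seismogram using the universal threshold."""
    coeffs = pywt.wavedec(trace, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745           # noise scale from the finest detail level
    thresh = sigma * np.sqrt(2 * np.log(trace.size))         # universal threshold
    shrunk = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(shrunk, wavelet)[:trace.size]

clean = wavelet_denoise(np.sin(np.linspace(0, 60, 4096)) + 0.4 * np.random.randn(4096))
```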
9

Kolb, John. "SIGNAL PROCESSING ABOUT A DISTRIBUTED DATA ACQUISITION SYSTEM." International Foundation for Telemetering, 2002. http://hdl.handle.net/10150/605610.

Full text of the source
Abstract:
International Telemetering Conference Proceedings / October 21, 2002 / Town & Country Hotel and Conference Center, San Diego, California
Because modern data acquisition systems use digital backplanes, it is logical for more and more data processing to be done in each Data Acquisition Unit (DAU) or even in each module. The processing related to an analog acquisition module typically takes the form of digital signal conditioning for range adjust, linearization and filtering. Some of the advantages of this are discussed in this paper. The next stage is powerful processing boards within DAUs for data reduction and third-party algorithm development. Once data is being written to and from powerful processing modules, an obvious next step is networking and decom-less access to data. This paper discusses some of the issues related to these types of processing.
10

Chen, Siheng. "Data Science with Graphs: A Signal Processing Perspective." Research Showcase @ CMU, 2016. http://repository.cmu.edu/dissertations/724.

Full text of the source
Abstract:
A massive amount of data is being generated at an unprecedented level from a diversity of sources, including social media, internet services, biological studies, physical infrastructure monitoring and many others. The necessity of analyzing such complex data has led to the birth of an emerging framework, graph signal processing. This framework offers a unified and mathematically rigorous paradigm for the analysis of high-dimensional data with complex and irregular structure. It extends fundamental signal processing concepts such as signals, Fourier transform, frequency response and filtering, from signals residing on regular lattices, which have been studied by the classical signal processing theory, to data residing on general graphs, which are called graph signals. In this thesis, we consider five fundamental tasks on graphs from the perspective of graph signal processing: representation, sampling, recovery, detection and localization. Representation, aiming to concisely model shapes of graph signals, is at the heart of the proposed techniques. Sampling followed by recovery, aiming to reconstruct an original graph signal from a few selected samples, is applicable in semi-supervised learning and user profiling in online social networks. Detection followed by localization, aiming to identify and localize targeted patterns in noisy graph signals, is related to many real-world applications, such as localizing virus attacks in cyber-physical systems, localizing stimuli in brain connectivity networks, and mining traffic events in city street networks, to name just a few. We illustrate the power of the proposed tools on two real-world problems: fast resampling of 3D point clouds and mining of urban traffic data.
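As a toy illustration of the framework described above, the Python sketch below builds a graph Fourier transform from the eigendecomposition of a small directed adjacency matrix and applies a simple spectral filter; the graph, the signal and the choice of which components to keep are assumptions, and the sketch presumes a diagonalizable adjacency matrix.

```python
import numpy as np

A = np.array([[0, 1, 0, 0],        # toy directed graph: 0->1, 1->2, 1->3, 2->0, 3->1
              [0, 0, 1, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)

evals, V = np.linalg.eig(A)                    # graph Fourier basis from the adjacency matrix
x = np.array([1.0, 0.2, -0.5, 0.8])            # a graph signal (one value per node)
x_hat = np.linalg.solve(V, x)                  # graph Fourier transform of the signal

keep = np.argsort(-np.abs(evals))[:2]          # keep the two strongest spectral components
mask = np.zeros_like(x_hat)
mask[keep] = x_hat[keep]
x_filtered = np.real(V @ mask)                 # filtered signal back in the vertex domain
```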
11

Javidi, Soroush. "Adaptive signal processing algorithms for noncircular complex data." Thesis, Imperial College London, 2010. http://hdl.handle.net/10044/1/6328.

Full text of the source
Abstract:
The complex domain provides a natural processing framework for a large class of signals encountered in communications, radar, biomedical engineering and renewable energy. Statistical signal processing in C has traditionally been viewed as a straightforward extension of the corresponding algorithms in the real domain R; however, recent developments in augmented complex statistics show that, in general, this leads to under-modelling. This direct treatment of complex-valued signals has led to advances in so-called widely linear modelling and the introduction of a generalised framework for the differentiability of both analytic and non-analytic complex and quaternion functions. In this thesis, supervised and blind complex adaptive algorithms capable of processing the generality of complex and quaternion signals (both circular and noncircular) in both noise-free and noisy environments are developed; their usefulness in real-world applications is demonstrated through case studies. The focus of this thesis is on the use of augmented statistics and widely linear modelling. The standard complex least mean square (CLMS) algorithm is extended to perform optimally for the generality of complex-valued signals, and the extended algorithm is shown to outperform standard CLMS. Next, extraction of latent complex-valued signals from large mixtures is addressed. This is achieved by developing several classes of complex blind source extraction algorithms based on fundamental signal properties such as smoothness, predictability and degree of Gaussianity, with the analysis of the existence and uniqueness of the solutions also provided. These algorithms are shown to facilitate real-time applications, such as those in brain-computer interfacing (BCI). Due to their modified cost functions and the widely linear mixing model, this class of algorithms performs well in both noise-free and noisy environments. Next, based on a widely linear quaternion model, the FastICA algorithm is extended to the quaternion domain to provide separation of the generality of quaternion signals. The enhanced performances of the widely linear algorithms are illustrated in renewable energy and biomedical applications, in particular, for the prediction of wind profiles and extraction of artifacts from EEG recordings.
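The following Python sketch is a minimal widely linear (augmented) complex LMS predictor of the general kind the abstract contrasts with standard CLMS: the output depends on both the regressor and its conjugate. The filter order, step size and the synthetic noncircular test signal are assumptions, not the thesis's configuration.

```python
import numpy as np

def aclms_predict(x, order=4, mu=0.01):
    """One-step-ahead prediction with a widely linear complex LMS filter:
    y[n] = h^H u[n] + g^H conj(u[n]), with LMS updates for both h and g."""
    h = np.zeros(order, dtype=complex)
    g = np.zeros(order, dtype=complex)
    y = np.zeros(x.size, dtype=complex)
    for n in range(order, x.size):
        u = x[n - order:n][::-1]                        # most recent samples first
        y[n] = np.vdot(h, u) + np.vdot(g, np.conj(u))   # widely linear output
        e = x[n] - y[n]                                 # prediction error
        h += mu * u * np.conj(e)                        # strictly linear part
        g += mu * np.conj(u) * np.conj(e)               # conjugate (augmented) part
    return y

rng = np.random.default_rng(0)
noise = rng.normal(size=2000) + 0.2j * rng.normal(size=2000)    # unequal powers -> noncircular
x = np.convolve(noise, np.ones(5) / 5, mode="same")             # correlated improper signal
y_hat = aclms_predict(x)
```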
12

Naulleau, Patrick. "Optical signal processing and real world applications /." Online version of thesis, 1993. http://hdl.handle.net/1850/12136.

Full text of the source
13

Pan, Jian Jia. "EMD/BEMD improvements and their applications in texture and signal analysis." HKBU Institutional Repository, 2013. https://repository.hkbu.edu.hk/etd_oa/75.

Full text of the source
Abstract:
The combination of the well-known Hilbert spectral analysis (HSA) and the recently developed Empirical Mode Decomposition (EMD), designated as the Hilbert-Huang Transform (HHT) by Huang in 1998, represents a paradigm shift in data analysis methodology. The HHT is designed specifically for analyzing nonlinear and nonstationary data. The key part of the HHT is EMD, with which any complicated data set can be decomposed into a finite and often small number of Intrinsic Mode Functions (IMFs). In two dimensions, bidimensional IMFs (BIMFs) are obtained by means of bidimensional EMD (BEMD). However, the HHT has some limitations in signal and image processing, and this thesis addresses those problems. To reduce the end effect in EMD, we propose a boundary extension method: a linear-prediction-based method combined with information from the boundary extrema points is employed to extend the signal, which reduces the end effect in the EMD sifting process. It is a simple and effective method. In the EMD decomposition, the interpolation method is another key point for obtaining ideal components. The envelope mean in EMD is computed from the upper and lower envelopes by cubic spline interpolation, which suffers from overshooting and is time-consuming. Based on the linear (straight-line) interpolation method, we propose using the extrema point information to obtain the mean envelope, giving Extrema Mean Empirical Mode Decomposition (EMEMD). The mean envelope obtained by EMEMD is smoother than that of EMD, the undershooting and overshooting problems of the cubic spline are reduced, and the computational complexity is lower. Experimental results show that the IMFs of EMEMD present more, and clearer, time-frequency information than EMD, and the Hilbert spectrum of EMEMD is also clearer and more meaningful. Furthermore, based on the EMEMD procedure, a fast method to detect the locations of frequency changes in piecewise stationary signals is also proposed: Extrema Points Empirical Mode Decomposition (EPEMD). Two applications based on the improved EMD/BEMD methods are then proposed. One application is texture classification in image processing. A saddle-points-added BEMD is developed to supply multi-scale components (BIMFs), and the Riesz transform is used to obtain the frequency-domain characteristics of these BIMFs. Based on the Local Binary Pattern (LBP) descriptor, two new features (one based on BIMFs and one based on monogenic-BIMF signals) are developed. In these new multi-scale and frequency-domain components, the LBP descriptor achieves better performance than in the original image. Experimental results show that the texture recognition rates of our methods are better than those of other texture feature methods. The other application is forecasting of one-dimensional time series. EMEMD combined with a Local Linear Wavelet Neural Network (LLWNN) is proposed for signal forecasting. The architecture is a decomposition-trend detection-forecasting-ensemble methodology; the EMEMD-based decomposition forecasting method decomposes the time series into its basic components, and more accurate forecasts are obtained.
In short, the main contributions of this thesis are summarized as follows: 1. A boundary extension method is developed for one-dimensional EMD, based on linear prediction and end-point adjustment, which reduces the end effect in EMD. 2. A saddle-points-added BEMD is developed to analyse and classify texture images; this new BEMD detects more high-frequency oscillations in the BIMFs and contributes to texture analysis. 3. A new texture analysis and classification method is proposed, based on BEMD (with and without saddle points), LBP and the Riesz transform; texture features based on BIMFs and on the BIMFs' frequency-domain 2D monogenic phase are developed, and performances and comparisons on the Brodatz, KTH-TIPS2a, CUReT and Outex databases are reported. 4. An improved EMD method, EMEMD, is proposed to overcome the shortcomings of the interpolation; EMEMD provides more meaningful IMFs, is also a fast decomposition method, and its decomposition results on a simulated temperature signal are compared with the Fourier transform and the wavelet transform. 5. A forecasting methodology based on EMEMD and LLWNN is proposed, following the decomposition-trend detection-forecasting-ensemble architecture; predicted results for the Hong Kong Hang Seng Index and the Global Land-Ocean Temperature Index are reported.
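To make the sifting step discussed above concrete, here is a minimal Python sketch of one EMD sifting iteration with cubic-spline envelopes; the test signal is an assumption, and the spline envelopes are exactly the ingredient that the proposed EMEMD replaces with an extrema-based mean (note how the splines must extrapolate at the ends, which is the source of the end effect).

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.interpolate import CubicSpline

def sift_once(x, t):
    """One EMD sifting step: upper/lower cubic-spline envelopes through the extrema,
    then subtraction of their mean from the signal."""
    peaks, _ = find_peaks(x)
    troughs, _ = find_peaks(-x)
    if peaks.size < 2 or troughs.size < 2:
        return x                                       # too few extrema to build envelopes
    upper = CubicSpline(t[peaks], x[peaks])(t)
    lower = CubicSpline(t[troughs], x[troughs])(t)
    return x - 0.5 * (upper + lower)                   # candidate intrinsic mode function

t = np.linspace(0, 1, 1000)
x = np.sin(2 * np.pi * 5 * t) + 0.3 * np.sin(2 * np.pi * 40 * t)
imf_candidate = sift_once(x, t)
```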
14

Aviran, Sharon. "Constrained coding and signal processing for data storage systems." Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2006. http://wwwlib.umi.com/cr/ucsd/fullcit?p3214776.

Full text of the source
Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2006.
Title from first page of PDF file (viewed July 11, 2006). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references.
15

Bengtsson, Mats. "Antenna array signal processing for high rank data models." Doctoral thesis, KTH, Signaler, sensorer och system, 2000. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-2903.

Full text of the source
16

Guttman, Michael. "Sampled-data IIR filtering via time-mode signal processing." Thesis, McGill University, 2010. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=86770.

Full text of the source
Abstract:
In this work, the design of sampled-data infinite impulse response filters based on time-mode signal processing circuits is presented. Time-mode signal processing (TMSP), defined as the processing of sampled analog information using time-difference variables, has become one of the more popular emerging technologies in circuit design. As TMSP is still relatively new, there is still much development needed to extend the technology into a general signal-processing tool. In this work, a set of general building blocks will be introduced that perform the most basic mathematical operations in the time mode. By arranging these basic structures, higher-order time-mode systems, specifically time-mode filters, will be realized. Three second-order time-mode filters (low-pass, band-reject, high-pass) are modeled using MATLAB, and simulated in Spectre to verify the design methodology. Finally, a damped integrator and a second-order low-pass time-mode IIR filter are both implemented using discrete components.
17

Battersby, Nicholas Charles. "Switched-current techniques for analogue sampled-data signal processing." Thesis, Imperial College London, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.394048.

Full text of the source
18

Valdivia, Paola Tatiana Llerena. "Graph signal processing for visual analysis and data exploration." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-15102018-165426/.

Full text of the source
Abstract:
Signal processing is used in a wide variety of applications, ranging from digital image processing to biomedicine. Recently, some tools from signal processing have been extended to the context of graphs, allowing their use on irregular domains. Among others, the Fourier Transform and the Wavelet Transform have been adapted to this context. Graph signal processing (GSP) is a new field with many potential applications in data exploration. In this dissertation we show how tools from graph signal processing can be used for visual analysis. Specifically, we proposed a data filtering method, based on spectral graph filtering, that led to high-quality visualizations, which were attested qualitatively and quantitatively. On the other hand, we relied on the graph wavelet transform to enable the visual analysis of massive time-varying data, revealing interesting phenomena and events. The proposed applications of GSP to visually analyze data are a first step towards incorporating the use of this theory into information visualization methods. Many possibilities from GSP can be explored by improving the understanding of static and time-varying phenomena that are yet to be uncovered.
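A minimal Python sketch of spectral graph filtering of the kind used for data filtering above: a signal on the vertices of a small graph is smoothed by attenuating its high graph-frequency components in the Laplacian eigenbasis. The path graph, the noisy signal and the low-pass kernel are assumptions chosen only to keep the example short.

```python
import numpy as np

n = 100
W = np.zeros((n, n))
idx = np.arange(n - 1)
W[idx, idx + 1] = W[idx + 1, idx] = 1.0            # path-graph adjacency (toy domain)
L = np.diag(W.sum(axis=1)) - W                     # combinatorial graph Laplacian

evals, U = np.linalg.eigh(L)                       # graph Fourier basis (Laplacian eigenvectors)
x = np.sin(np.linspace(0, 3 * np.pi, n)) + 0.3 * np.random.randn(n)   # noisy vertex signal
h = 1.0 / (1.0 + 10.0 * evals)                     # low-pass spectral response
x_smooth = U @ (h * (U.T @ x))                     # filtering in the graph spectral domain
```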
19

Nagahara, Masaaki. "Multirate digital signal processing via sampled-data H∞ optimization." 京都大学 (Kyoto University), 2003. http://hdl.handle.net/2433/120982.

Full text of the source
20

Fookes, Gregory Peter Gwyn. "Interactive geophysical data processing with eigendecomposition methods." Thesis, Birkbeck (University of London), 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.336344.

Full text of the source
21

Gudmundson, Erik. "Signal Processing for Spectroscopic Applications." Doctoral thesis, Uppsala universitet, Avdelningen för systemteknik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-120194.

Full text of the source
Abstract:
Spectroscopic techniques allow for studies of materials and organisms on the atomic and molecular level. Examples of such techniques are nuclear magnetic resonance (NMR) spectroscopy—one of the principal techniques to obtain physical, chemical, electronic and structural information about molecules—and magnetic resonance imaging (MRI)—an important medical imaging technique for, e.g., visualization of the internal structure of the human body. The less well-known spectroscopic technique of nuclear quadrupole resonance (NQR) is related to NMR and MRI but with the difference that no external magnetic field is needed. NQR has found applications in, e.g., detection of explosives and narcotics. The first part of this thesis is focused on detection and identification of solid and liquid explosives using both NQR and NMR data. Methods allowing for uncertainties in the assumed signal amplitudes are proposed, as well as methods for estimation of model parameters that allow for non-uniform sampling of the data. The second part treats two medical applications. Firstly, new, fast methods for parameter estimation in MRI data are presented. MRI can be used for, e.g., the diagnosis of anomalies in the skin or in the brain. The presented methods allow for a significant decrease in computational complexity without loss in performance. Secondly, the estimation of blood flow velocity using medical ultrasound scanners is addressed. Information about anomalies in the blood flow dynamics is an important tool for the diagnosis of, for example, stenosis and atherosclerosis. The presented methods make no assumption on the sampling schemes, allowing for duplex mode transmissions where B-mode images are interleaved with the Doppler emissions.
22

Purahoo, K. "Maximum entropy data analysis." Thesis, Cranfield University, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.260038.

Full text of the source
23

Archer, Cynthia. "A framework for representing non-stationary data with mixtures of linear models /." Full text open access at:, 2002. http://content.ohsu.edu/u?/etd,585.

Full text of the source
24

Deri, Joya A. "Graph Signal Processing: Structure and Scalability to Massive Data Sets." Research Showcase @ CMU, 2016. http://repository.cmu.edu/dissertations/725.

Full text of the source
Abstract:
Large-scale networks are becoming more prevalent, with applications in healthcare systems, financial networks, social networks, and traffic systems. The detection of normal and abnormal behaviors (signals) in these systems presents a challenging problem. State-of-the-art approaches such as principal component analysis and graph signal processing address this problem using signal projections onto a space determined by an eigendecomposition or singular value decomposition. When a graph is directed, however, applying methods based on the graph Laplacian or singular value decomposition causes information from unidirectional edges to be lost. Here we present a novel formulation and graph signal processing framework that addresses this issue and that is well suited for application to extremely large, directed, sparse networks. In this thesis, we develop and demonstrate a graph Fourier transform for which the spectral components are the Jordan subspaces of the adjacency matrix. In addition to admitting a generalized Parseval’s identity, this transform yields graph equivalence classes that can simplify the computation of the graph Fourier transform over certain networks. Exploration of these equivalence classes provides the intuition for an inexact graph Fourier transform method that dramatically reduces computation time over real-world networks with nontrivial Jordan subspaces. We apply our inexact method to four years of New York City taxi trajectories (61 GB after preprocessing) over the NYC road network (6,400 nodes, 14,000 directed edges). We discuss optimization strategies that reduce the computation time of taxi trajectories from raw data by orders of magnitude: from 3,000 days to less than one day. Our method yields a fine-grained analysis that pinpoints the same locations as the original method while reducing computation time and decreasing energy dispersal among spectral components. This capability to rapidly reduce raw traffic data to meaningful features has important ramifications for city planning and emergency vehicle routing.
25

Lane, Dallas W. "Signal processing methods for airborne lidar bathymetry." Title page, table of contents and abstract only, 2001. http://web4.library.adelaide.edu.au/theses/09ENS/09ensl265.pdf.

Full text of the source
Abstract:
"August 2001." Includes bibliographical references (leaves 77-80). Examines the susceptibility of existing signal processing methods to errors and identifies other possible causes of depth error not accounted for by existing signal processing methods, by analysis of the detected laser return waveform data. Methods to improve depth accuracy are investigated.
26

Famorzadeh, Shahram. "BEEHIVE : an adaptive, distributed, embedded signal processing environment." Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/14803.

Full text of the source
27

González, Arcelus Isabel. "Advanced signal processing schemes for high density optical data storage." Thesis, University of Exeter, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.413895.

Full text of the source
28

Dietl, Hubert. "Digital signal processing techniques for detection applied to biomedical data." Thesis, University of Southampton, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.419141.

Full text of the source
29

Kunicki, Theodora C. "Novel data-processing techniques for signal extraction in Project 8." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/105596.

Full text of the source
Abstract:
Thesis: S.B., Massachusetts Institute of Technology, Department of Physics, 2016.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 49-50).
Project 8 presents a new modality of electron spectroscopy with the potential to exceed the resolution of the most precise electron spectrometers in operation today, potentially at much lower cost. Project 8, being a novel method, has different computational demands from existing experiments. This thesis explores the use of the Hough Transform as a tool in data processing in Project 8 and discusses its utility and function generally.
by Theodora C. Kunicki.
S.B.
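As the abstract discusses the Hough transform only in general terms, here is a minimal Python sketch of the technique on a synthetic time-frequency-style image with a single linear track; the image, threshold and peak count are assumptions, and the code uses scikit-image's standard straight-line Hough transform rather than anything specific to Project 8.

```python
import numpy as np
from skimage.transform import hough_line, hough_line_peaks

img = 0.2 * np.random.rand(200, 200)              # noisy background
rows = np.arange(200)
cols = np.clip((0.6 * rows + 20).astype(int), 0, 199)
img[rows, cols] += 1.0                            # a faint rising linear "track"

h, angles, dists = hough_line(img > 0.8)          # binarise, then accumulate votes
_, best_angles, best_dists = hough_line_peaks(h, angles, dists, num_peaks=1)
theta = best_angles[0]                            # (theta, rho) of the strongest line
slope = -np.cos(theta) / np.sin(theta)            # rows per column, assuming a non-vertical line
```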
30

Orellana, Marco Antônio Pinto. "Seizure detection in electroencephalograms using data mining and signal processing." Universidade Federal de Viçosa, 2017. http://www.locus.ufv.br/handle/123456789/11589.

Full text of the source
Abstract:
Agencia Boliviana Espacial
Epilepsy is one of the most common neurological diseases and is defined as the predisposition to suffer unprovoked seizures. The World Health Organization estimates that 50 million people are suffering from this condition worldwide. Epilepsy diagnosis implies an expensive and long process based on the opinion of specialist personnel about electroencephalograms (EEGs) and video recordings. We have developed two methods for automatic seizure detection using EEG and data mining. The first system is a patient-specific method that consists of extracting spectro-temporal features of 23 EEG channels, applying a dimension reduction algorithm, recovering the envelope of the signal, and creating a model using a random forest classifier. Testing this system against a large dataset, we reached 97% specificity and 99% sensitivity. Thus, our first proposal showed great potential for diagnosis support in a clinical context. The other developed system is a non-patient-specific method that consists of selecting the differential signal of two electrodes, applying an array of filter banks to that signal, extracting time series features, and creating a predictive model using a decision tree. The performance of this method was 95% specificity and 87% sensitivity. Although the performance is lower than that of previous proposals, due to its design conditions and characteristics our method allows an easier implementation with low hardware requirements. Both proposals presented here, using distinct approaches, prove to be seizure prediction alternatives with very satisfactory performances under different circumstances and requirements.
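A minimal Python sketch in the spirit of the first (patient-specific) pipeline: simple spectral features per epoch feed a random forest classifier. The band definitions, single-channel random stand-in data and in-sample score are assumptions; the thesis's actual pipeline uses spectro-temporal features from 23 channels, dimension reduction and envelope recovery.

```python
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import RandomForestClassifier

def band_powers(epoch, fs=256):
    """Spectral features for one EEG epoch: power in the delta/theta/alpha/beta bands."""
    f, pxx = welch(epoch, fs=fs, nperseg=fs)
    bands = [(0.5, 4), (4, 8), (8, 13), (13, 30)]
    return [pxx[(f >= lo) & (f < hi)].sum() for lo, hi in bands]

rng = np.random.default_rng(1)
epochs = rng.normal(size=(200, 512))               # stand-in single-channel EEG epochs
labels = rng.integers(0, 2, size=200)              # seizure / non-seizure labels (random here)

X = np.array([band_powers(e) for e in epochs])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.score(X, labels))                        # in-sample accuracy of the toy model
```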
31

Xia, Bing 1972 Nov 7. "A direct temporal domain approach for ultrafast optical signal processing and its implementation using planar lightwave circuits /." Thesis, McGill University, 2006. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=103007.

Full text of the source
Abstract:
Ultrafast optical signal processing, which shares the same fundamental principles as electrical signal processing, can realize numerous important functionalities required in both academic research and industry. Due to the extremely fast processing speed, all-optical signal processing and pulse shaping have been widely used in ultrafast telecommunication networks, photonically assisted RF/microwave waveform generation, microscopy, biophotonics, and studies of the transient and nonlinear properties of atoms and molecules. In this thesis, we investigate two types of optical spectrally-periodic (SP) filters that can be fabricated on planar lightwave circuits (PLC) to perform pulse repetition rate multiplication (PRRM) and arbitrary optical waveform generation (AOWG).
First, we present a direct temporal domain approach for PRRM using SP filters. We show that the repetition rate of an input pulse train can be multiplied by a factor N using an optical filter with a free spectral range that does not need to be constrained to an integer multiple of N. Furthermore, the amplitude of each individual output pulse can be manipulated separately to form an arbitrary envelope at the output by optimizing the impulse response of the filter.
Next, we use lattice-form Mach-Zehnder interferometers (LF-MZI) to implement the temporal domain approach for PRRM. The simulation results show that PRRM with uniform profiles, binary-code profiles and triangular profiles can be achieved. Three silica based LF-MZIs are designed and fabricated, which incorporate multi-mode interference (MMI) couplers and phase shifters. The experimental results show that 40 GHz pulse trains with a uniform envelope pattern, a binary code pattern "1011" and a binary code pattern "1101" are generated from a 10 GHz input pulse train.
Finally, we investigate 2D ring resonator arrays (RRA) for ultrafast optical signal processing. We design 2D RRAs to generate a pair of pulse trains with different binary-code patterns simultaneously from a single pulse train at a low repetition rate. We also design 2D RRAs for AOWG using the modified direct temporal domain approach. To demonstrate the approach, we provide numerical examples to illustrate the generation of two very different waveforms (square waveform and triangular waveform) from the same hyperbolic secant input pulse train. This powerful technique based on SP filters can be very useful for ultrafast optical signal processing and pulse shaping.
32

Vu, Viet Thuy. "Practical Consideration on Ultrawideband Synthetic Aperture Radar Data Processing." Licentiate thesis, Karlskrona : Blekinge Institute of Technology, 2009. http://www.bth.se/fou/Forskinfo.nsf/Sok/681bd71aea5abe8dc125765f00321457!OpenDocument.

Full text of the source
33

Lee, Jae-Min. "Characterization of spatial and temporal brain activation patterns in functional magnetic resonance imaging data." [Gainesville, Fla.] : University of Florida, 2005. http://purl.fcla.edu/fcla/etd/UFE0013024.

Full text of the source
34

Shen, Mengzhe, and 沈梦哲. "Parametric wavelength exchange and its application in high speed optical signal processing." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2009. http://hub.hku.hk/bib/B42841616.

Full text of the source
35

Shen, Mengzhe. "Parametric wavelength exchange and its application in high speed optical signal processing." Click to view the E-thesis via HKUTO, 2009. http://sunzi.lib.hku.hk/hkuto/record/B42841616.

Full text of the source
36

Devlin, Steve. "Telemetry Data Processing: A Modular, Expandable Approach." International Foundation for Telemetering, 1988. http://hdl.handle.net/10150/615091.

Full text of the source
Abstract:
International Telemetering Conference Proceedings / October 17-20, 1988 / Riviera Hotel, Las Vegas, Nevada
The growing complexity of missile, aircraft, and space vehicle systems, along with the advent of fly-by-wire and ultra-high performance unstable airframe technology, has created an exploding demand for real time processing power. Recent VLSI developments have allowed addressing these needs in the design of a multi-processor subsystem supplying 10 MIPS and 5 MFLOPS per processor. To provide up to 70 MIPS, a Digital Signal Processing subsystem may be configured with up to 7 processors. Multiple subsystems may be employed in a data processing system to give the user virtually unlimited processing power. Within the DSP module, communication between cards is over a high speed, arbitrated Private Data bus. This prevents the saturation of the system bus with intermediate results, and allows a multiple processor configuration to make full use of each processor. Design goals for a single processor included executing number system conversions, data compression algorithms and 1st order polynomials in under 2 microseconds, and 5th order polynomials in under 4 microseconds. The processor design meets or exceeds all of these goals. Recently upgraded VLSI is available, and makes possible a performance enhancement to 11 MIPS and 9 MFLOPS per processor with reduced power consumption. Design tradeoffs and example applications are presented.
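Since the design goals above are phrased around evaluating calibration polynomials within tight time budgets, here is a small Python sketch of Horner's rule, the usual way to evaluate such polynomials with n multiply-adds; the coefficient values and the raw count are made-up examples, not figures from the paper.

```python
def horner(coeffs, x):
    """Evaluate c[0] + c[1]*x + ... + c[n]*x**n with Horner's rule (n multiply-adds)."""
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

# 5th-order engineering-unit conversion of a raw telemetry count (illustrative coefficients)
eu = horner([0.1, 2.5e-3, -1.2e-6, 3.0e-9, -4.0e-12, 1.0e-15], 2048)
```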
37

Smith, Stewart Gresty. "Serial-data computation in VLSI." Thesis, University of Edinburgh, 1987. http://hdl.handle.net/1842/11922.

Full text of the source
38

Crook, Alex, and Gregory Kissinger. "Using COTS Graphics Processing Units in Signal Analysis Workstations." International Foundation for Telemetering, 2011. http://hdl.handle.net/10150/595798.

Full text of the source
Abstract:
ITC/USA 2011 Conference Proceedings / The Forty-Seventh Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2011 / Bally's Las Vegas, Las Vegas, Nevada
Commercial off-the-shelf (COTS) graphics processing units (GPU) perform the signal processing operations needed for video games and similar consumer applications. The high volume and competitive nature of that industry have produced inexpensive GPUs with impressive amounts of signal processing power. These devices use parallel processing architectures to execute DSP algorithms far faster than the single-core, or even multi-core, central processing units typically found in workstations. This paper describes a project which improves the performance of a radar telemetry application using the NVidia™ brand GPU and CUDA™ software, although the results could be extended to other devices.
39

Gomes, Ricardo Rafael Baptista. "Long-term biosignals visualization and processing." Master's thesis, Faculdade de Ciências e Tecnologia, 2011. http://hdl.handle.net/10362/7979.

Full text of the source
Abstract:
Thesis submitted in the fulfillment of the requirements for the Degree of Master in Biomedical Engineering
Long-term biosignal acquisitions are an important source of information about the patients' state and its evolution. However, long-term biosignal monitoring involves managing extremely large datasets, which makes signal visualization and processing a complex task. To overcome these problems, a new data structure to manage long-term biosignals was developed. Based on this new data structure, dedicated tools for long-term biosignal visualization and processing were implemented. A multilevel visualization tool for any type of biosignal, based on subsampling, is presented, focused on four representative signal parameters (mean, maximum, minimum and standard deviation error). The visualization tool enables an overview of the entire signal and a more detailed visualization of specific parts that we want to highlight, allowing a user-friendly interaction that leads to easier signal exploration. The "map" and "reduce" concept is also explored for long-term biosignal processing. A processing tool (ECG peak detection) was adapted for long-term biosignals. In order to test the developed algorithm, long-term biosignal acquisitions (approximately 8 hours each) were carried out. The visualization tool has proven to be faster than the standard methods, allowing fast navigation over the different visualization levels of biosignals. Regarding the developed processing algorithm, it detected the peaks of long-term ECG signals in less time than the non-parallel processing algorithm. The non-specific characteristics of the new data structure and visualization tool, and the speed improvement in signal processing introduced by these algorithms, make them powerful tools for long-term biosignal visualization and processing.
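A minimal Python sketch of the multilevel overview idea described above: each block of samples is collapsed into the four summary statistics so that hours of data can be drawn with a few thousand points per statistic. The block size, sample rate and random stand-in signal are assumptions.

```python
import numpy as np

def overview(signal, block):
    """One visualization level: per-block mean, max, min and standard deviation."""
    n = (signal.size // block) * block
    blocks = signal[:n].reshape(-1, block)
    return {"mean": blocks.mean(axis=1), "max": blocks.max(axis=1),
            "min": blocks.min(axis=1), "std": blocks.std(axis=1)}

x = np.random.randn(4 * 3600 * 250)        # stand-in for ~4 hours of a 250 Hz biosignal
level1 = overview(x, block=4096)           # coarse level for the whole-signal overview
```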
40

Noh, Seongmin, and Hoyoung Yoo. "IMPLEMENTATION OF DIGITAL SIGNAL PROCESSING WITH HIGH DATA RATE SENSORS FOR DATA COMPRESSION." International Foundation for Telemetering, 2017. http://hdl.handle.net/10150/626959.

Full text of the source
Abstract:
The onboard telemetry system of the Korea Space Launch Vehicle-II (KSLV-II) acquires data from acoustic, vibration, and piezoelectric pressure sensors that require high data rates of several kilosamples per second, so a compression method is needed to expand the link margin of the telemetry system. This paper implements third-octave and FFT signal processing algorithms to reduce the sensor data with a high compression ratio, depending on the data acquisition requirements. The developed signal processing hardware module is composed of an analog signal conditioning block and a digital signal processing block on an FPGA, and the digital block is fully implemented with dedicated hardware using HDL. For the digital hardware implementation, a multistage structure with an ANSI-standard octave filter bank is used for the third-octave processing, and a pipelined architecture is used for the FFT. The performance of the data acquisition and signal processing is evaluated and compared to commercial data acquisition equipment.
41

Goldschneider, Jill R. "Lossy compression of scientific data via wavelets and vector quantization /." Thesis, Connect to this title online; UW restricted, 1997. http://hdl.handle.net/1773/5881.

Full text of the source
42

Koriziz, Hariton. "Signal processing methods for the modelling and prediction of financial data." Thesis, Imperial College London, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.504921.

Full text of the source
43

Frogner, Gary Russell. "Monitoring of global acoustic transmissions : signal processing and preliminary data analysis." Thesis, Monterey, California. Naval Postgraduate School, 1991. http://hdl.handle.net/10945/28379.

Full text of the source
44

Landström, Anders. "Adaptive tensor-based morphological filtering and analysis of 3D profile data." Licentiate thesis, Luleå tekniska universitet, Signaler och system, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-26510.

Full text of the source
Abstract:
Image analysis methods for processing 3D profile data have been investigated and developed. These methods include: image reconstruction by prioritized incremental normalized convolution, morphology-based crack detection for steel slabs, and adaptive morphology based on the local structure tensor. The methods have been applied to a number of industrial applications. An issue with 3D profile data captured by laser triangulation is occlusion, which occurs when the line-of-sight between the projected laser light and the camera sensor is obstructed. To overcome this problem, interpolation of missing surface data in rock piles has been investigated, and a novel interpolation method has been developed for filling in missing pixel values iteratively from the edges of the reliable data, using normalized convolution. 3D profile data of the steel surface has been used to detect longitudinal cracks in cast steel slabs. Segmentation of the data is done using mathematical morphology, and the resulting connected regions are assigned a crack probability estimate based on a statistical logistic regression model. More specifically, the morphological filtering locates trenches in the data, excludes scale regions from further analysis, and finally links crack segments together in order to obtain a segmented region which receives a crack probability based on its depth and length. Also suggested is a novel method for adaptive mathematical morphology intended to improve crack segment linking, i.e. for bridging gaps in the crack signature in order to increase the length of potential crack segments. Standard morphology operations rely on a predefined structuring element which is repeatedly used for each pixel in the image. The outline of a crack, however, can range from a straight line to a zig-zag pattern. A more adaptive method for linking regions with a large enough estimated crack depth would therefore be beneficial. More advanced morphological approaches, such as morphological amoebas and path openings, adapt better to curvature in the image. For our purpose, however, we investigate how the local structure tensor can be used to adaptively assign to each pixel an elliptical structuring element based on the local orientation within the image. The information from the local structure tensor directly defines the shape of the elliptical structuring element, and the resulting morphological filtering successfully enhances crack signatures in the data.
Approved; 2012; 20121017 (andlan); LICENTIATE SEMINAR. Subject: Signal Processing. Examiner: Senior Lecturer Matthew Thurley, Institutionen för system- och rymdteknik, Luleå University of Technology. Discussant: Associate Professor Cris Luengo, Centre for Image Analysis, Uppsala. Time: Wednesday, 21 November 2012, 12:30. Place: A1545, Luleå University of Technology.
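As an illustration of the local-structure-tensor idea mentioned in the abstract above, the following Python/NumPy sketch estimates a per-pixel orientation and coherence from a 2D image. The Sobel gradients, Gaussian window and sigma value are assumptions made for this example; it is not the thesis's implementation of adaptive morphology.

```python
import numpy as np
from scipy import ndimage

def structure_tensor_orientation(image, sigma=2.0):
    """Per-pixel orientation and coherence from the local structure tensor of a 2D image."""
    img = np.asarray(image, dtype=float)
    # Image gradients (Sobel derivatives along x and y)
    Ix = ndimage.sobel(img, axis=1)
    Iy = ndimage.sobel(img, axis=0)
    # Structure tensor components, averaged with a Gaussian window of width sigma
    Jxx = ndimage.gaussian_filter(Ix * Ix, sigma)
    Jxy = ndimage.gaussian_filter(Ix * Iy, sigma)
    Jyy = ndimage.gaussian_filter(Iy * Iy, sigma)
    # Orientation of the dominant eigenvector; an elliptical structuring element
    # could be aligned with (theta + pi/2), i.e. along the local edge/crack direction
    theta = 0.5 * np.arctan2(2.0 * Jxy, Jxx - Jyy)
    # Coherence close to 1 indicates a strongly oriented neighbourhood
    coherence = np.sqrt((Jxx - Jyy) ** 2 + 4.0 * Jxy ** 2) / (Jxx + Jyy + 1e-12)
    return theta, coherence
```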
APA, Harvard, Vancouver, ISO and other styles
45

Nedstrand, Paul, and Razmus Lindgren. "Test Data Post-Processing and Analysis of Link Adaptation." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-121589.

Full text of the source
Abstract:
Analysing the performance of cell phones and other devices wirelessly connected to mobile networks is key when validating whether the standard of the system is achieved. This justifies having testing tools that can produce a good overview of the traffic data exchanged between base stations and cell phones, so that the performance of the cell phone can be assessed. This master thesis involves developing a tool that produces graphs with statistics from the traffic data in the communication link between a connected mobile device and a base station. The statistics are the correlation between two parameters in the traffic data in the channel (e.g. throughput over the channel condition). The tool is oriented towards the analysis of link adaptation, and from the produced graphs the testing personnel at Ericsson will be able to analyse the performance of one or several mobile devices. We performed our own analysis of link adaptation using the tool to show that this type of analysis is possible with it. To show that the tool is useful for Ericsson, we let test personnel answer a survey on its usability and user-friendliness.
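To make the kind of statistic described above concrete, here is a minimal Python sketch that correlates two hypothetical traffic-log parameters, throughput against a channel-quality indicator, and plots the relationship. The parameter names and the synthetic data are assumptions for illustration; the actual tool works on Ericsson's logged trace data, not generated values.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical post-processed log: channel quality indicator (CQI) and throughput per sample.
# Both arrays are synthetic placeholders standing in for parsed trace data.
rng = np.random.default_rng(0)
cqi = rng.integers(1, 16, size=1000)
throughput = 2.5 * cqi + rng.normal(0.0, 3.0, size=1000)  # Mbit/s, loosely tied to CQI

# Pearson correlation between the two traffic parameters
r = np.corrcoef(cqi, throughput)[0, 1]

# Scatter plot of throughput versus channel condition, one of the graphs such a tool might emit
plt.scatter(cqi, throughput, s=4, alpha=0.4)
plt.xlabel("Channel quality indicator (CQI)")
plt.ylabel("Throughput [Mbit/s]")
plt.title(f"Throughput vs. channel condition (Pearson r = {r:.2f})")
plt.show()
```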
APA, Harvard, Vancouver, ISO and other styles
46

Pentaris, Fragkiskos. "Digital signal processing for structural health monitoring of buildings." Thesis, Brunel University, 2014. http://bura.brunel.ac.uk/handle/2438/10560.

Full text of the source
Abstract:
Structural health monitoring (SHM) is a relatively new discipline concerned with the structural condition of buildings and other constructions. Current SHM systems are either wired or wireless, with a relatively high cost and low accuracy. This thesis exploits a blend of digital signal processing methodologies for structural health monitoring and develops a wireless SHM system that provides a low-cost yet reliable and robust implementation. Existing technologies of wired and wireless sensor network platforms with high-sensitivity accelerometers are combined in order to create a system for monitoring the structural characteristics of buildings very economically and functionally, so that it can be easily implemented at low cost in buildings. Well-known and established statistical time series methods are applied to SHM data collected from real concrete structures subjected to earthquake excitation, and their strong and weak points are investigated. The necessity to combine parametric and non-parametric approaches is justified, and to this end novel and improved digital signal processing techniques and indexes are applied to vibration data recordings in order to eliminate noise and reveal structural properties and characteristics of the buildings under study that deteriorate due to environmental, seismic or anthropogenic impact. A characteristic and potentially damaging case study is presented, in which the consequences to structures of a strong earthquake of magnitude 6.4 M are investigated. Furthermore, a seismic influence profile of the buildings under study is introduced, related to the seismic sources that exist in the broad region of study.
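As one example of the kind of non-parametric time-series processing applied to SHM vibration records, the sketch below estimates dominant structural frequencies from an acceleration signal using Welch's method in Python/SciPy. It is an assumption-based illustration, not the thesis's specific techniques or indexes.

```python
import numpy as np
from scipy import signal

def dominant_frequencies(accel, fs, n_peaks=3):
    """Rough modal-frequency indicators from one acceleration record via Welch's PSD."""
    accel = np.asarray(accel, dtype=float)
    f, pxx = signal.welch(accel, fs=fs, nperseg=min(4096, len(accel)))
    # Keep the strongest spectral peaks; shifts in these over time may hint at deterioration
    peaks, _ = signal.find_peaks(pxx)
    strongest = peaks[np.argsort(pxx[peaks])[::-1][:n_peaks]]
    return f[strongest], pxx[strongest]

# Example on a synthetic two-mode signal sampled at 200 Hz
fs = 200.0
t = np.arange(0, 60, 1.0 / fs)
x = np.sin(2 * np.pi * 2.1 * t) + 0.4 * np.sin(2 * np.pi * 6.8 * t) + 0.1 * np.random.randn(t.size)
print(dominant_frequencies(x, fs))
```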
APA, Harvard, Vancouver, ISO and other styles
47

Appelgren, Filip, and Måns Ekelund. "Performance Evaluation of a Signal Processing Algorithm with General-Purpose Computing on a Graphics Processing Unit." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-253816.

Full text of the source
Abstract:
Graphics Processing Units (GPUs) are increasingly being used for general-purpose programming, instead of their traditional graphical tasks. This is because of their raw computational power, which in some cases gives them an advantage over the traditionally used Central Processing Unit (CPU). This thesis therefore sets out to identify the performance of a GPU in a correlation algorithm, and what parameters have the greatest effect on GPU performance. The method used for determining performance was quantitative, utilizing a clock library in C++ to measure the performance of the algorithm as the problem size increased. The initial problem size was set to 2^8 and increased exponentially to 2^21. The results show that smaller sample sizes perform better on the serial CPU implementation, but that the parallel GPU implementations start outperforming the CPU between problem sizes of 2^9 and 2^10. It became apparent that GPUs benefit from larger problem sizes, mainly because of the memory overhead costs involved with allocating and transferring data. Further, the algorithm under evaluation is not well suited for a parallelized implementation due to a high amount of branching. Branching logic can lead to warp divergence, which can drastically lower performance. Keeping logic to a minimum and minimizing the number of memory transfers are vital in order to reach high performance with a GPU.
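The measurement methodology, timing a correlation kernel while the problem size grows exponentially from 2^8 to 2^21, can be sketched as follows. This Python/NumPy version only mirrors the CPU-side procedure, with an FFT-based correlation chosen here for illustration; the thesis itself compared a C++ implementation against a GPU one.

```python
import time
import numpy as np

def fft_cross_correlate(x, y):
    """FFT-based cross-correlation of two equal-length 1-D signals (an illustrative kernel)."""
    n = 2 * len(x) - 1
    X = np.fft.rfft(x, n)
    Y = np.fft.rfft(y, n)
    return np.fft.irfft(X * np.conj(Y), n)

# Time the kernel for problem sizes growing exponentially from 2^8 to 2^21
for exp in range(8, 22):
    n = 2 ** exp
    x = np.random.rand(n)
    y = np.random.rand(n)
    t0 = time.perf_counter()
    fft_cross_correlate(x, y)
    elapsed_ms = (time.perf_counter() - t0) * 1e3
    print(f"n = 2^{exp:2d}: {elapsed_ms:8.3f} ms")
```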
APA, Harvard, Vancouver, ISO and other styles
48

Umesh, R. S. "Algorithms for processing polarization-rich optical imaging data." Thesis, Indian Institute of Science, 2004. http://hdl.handle.net/2005/96.

Full text of the source
Abstract:
This work mainly focuses on signal processing issues related to continuous-wave, polarization-based direct imaging schemes. Here, we present a mathematical framework to analyze the performance of Polarization Difference Imaging (PDI) and Polarization Modulation Imaging (PMI). We have considered three visualization parameters, namely the polarization intensity (PI), degree of linear polarization (DOLP) and polarization orientation (PO), for comparing these schemes. The first two parameters appear frequently in the literature, possibly under different names. The last parameter, polarization orientation, has been introduced and elaborated in this thesis. We have also proposed some extensions/alternatives for the existing imaging and processing schemes and analyzed their advantages. Theoretically and through Monte-Carlo simulations, we have studied the performance of these schemes under white and coloured noise conditions, concluding that, in general, PMI gives better estimates of all the parameters. Experimental results corroborate our theoretical arguments. PMI is shown to give asymptotically efficient estimates of these parameters, whereas PDI is shown to give biased estimates of the first two and is also shown to be incapable of estimating PO. Moreover, it is shown that PDI is a particular case of PMI. The property of PDI that it can yield estimates at lower variances has been recognized as its major strength. We have also shown that the three visualization parameters can be fused to form a colour image, giving a holistic view of the scene. We report the advantages of analyzing chunks of data and bootstrapped data under various circumstances. Experiments were conducted to image objects through calibrated scattering media and natural media like mist, with successful results. Scattering media prepared with polystyrene microspheres of diameters 2.97 µm, 0.06 µm and 0.13 µm dispersed in water were used in our experiments. An intensified charge-coupled device (CCD) camera was used to capture the images. Results showed that imaging could be performed beyond an optical thickness of 40 for particles of 0.13 µm diameter. For larger particles, the depth to which we could image was much smaller. An experiment using an incoherent source yielded better results than with coherent sources, which we attribute to the speckle noise induced by coherent sources. We have suggested a harmonic-based imaging scheme, which can perhaps be used when we have a mixture of scattering particles. We have also briefly touched upon the possible post-processing that can be performed on the obtained results and, as an example, shown segmentation based on a PO imaging result.
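For orientation, the three visualization parameters above are closely related to the standard linear Stokes description, and a minimal Python sketch computing PI, DOLP and PO from three analyser-angle intensity images is given below. This textbook formulation is an assumption made for illustration and is not the thesis's PDI or PMI estimator.

```python
import numpy as np

def polarization_parameters(I0, I45, I90):
    """PI, DOLP and PO images from intensities measured at 0, 45 and 90 degree analyser angles."""
    I0, I45, I90 = (np.asarray(a, dtype=float) for a in (I0, I45, I90))
    S0 = I0 + I90                 # total intensity
    S1 = I0 - I90                 # the PDI-style difference image
    S2 = 2.0 * I45 - S0           # 45-degree linear component
    pi_img = np.sqrt(S1**2 + S2**2)                 # polarization intensity (PI)
    dolp = pi_img / np.maximum(S0, 1e-12)           # degree of linear polarization (DOLP)
    po = 0.5 * np.arctan2(S2, S1)                   # polarization orientation (PO), radians
    return pi_img, dolp, po
```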
APA, Harvard, Vancouver, ISO and other styles
49

Hoya, Tetsuya. "Graph theoretic methods for data partitioning." Thesis, Imperial College London, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.286542.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
50

Lanciani, Christopher A. "Compressed-domain processing of MPEG audio signals." Diss., Georgia Institute of Technology, 1999. http://hdl.handle.net/1853/13760.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles