
Theses on the topic "Digital techniques"

Cite a source in APA, MLA, Chicago, Harvard, and many other styles


See the top 50 dissertations (degree or doctoral theses) for research on the topic "Digital techniques".

Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic citation of the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract (summary) of the work online, if it is included in the metadata.

Browse theses from many scientific fields and compile an accurate bibliography.

1

England, Janine V. "Digital filter design techniques". Thesis, Monterey, California. Naval Postgraduate School, 1988. http://hdl.handle.net/10945/23177.

Full text
Abstract:
An overview and investigation of the more popular digital filter design techniques are presented, with the intent of providing the filter design engineer with a complete and concise source of information. Advantages and disadvantages of the various techniques are discussed, and extensive design examples are used to illustrate their application to specific design problems. Both IIR (Butterworth, Chebyshev, and elliptic) and FIR (Fourier coefficient design, windows, and frequency sampling) design methods are featured, as well as the Optimum FIR Filter Design Program of Parks and McClellan and the Minimum p-Error IIR Filter Design Method of Deczky. Keywords: Digital filter design, IIR, FIR, Butterworth, Chebyshev, Elliptic, Fourier coefficient, Windows, Frequency sampling, Remez exchange algorithm, Minimum p-error, and IIR filter design
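As a generic illustration of two of the design families this abstract surveys (not code from the thesis), the sketch below designs a Butterworth IIR low-pass filter and an equiripple FIR filter with the Parks-McClellan (Remez exchange) algorithm using SciPy; the sample rate, band edges, and orders are arbitrary assumptions.

```python
# Sketch only: generic IIR/FIR designs of the kind surveyed above; all
# specifications (sample rate, band edges, orders) are illustrative assumptions.
import numpy as np
from scipy import signal

fs = 8000.0          # assumed sample rate in Hz

# IIR: 4th-order Butterworth low-pass with a 1 kHz cutoff.
b_iir, a_iir = signal.butter(4, 1000.0, btype="low", fs=fs)

# FIR: 63-tap equiripple low-pass via the Parks-McClellan / Remez exchange algorithm.
b_fir = signal.remez(63, [0, 1000, 1300, fs / 2], [1, 0], fs=fs)

# Compare magnitude responses at a few frequencies.
w, h_iir = signal.freqz(b_iir, a_iir, worN=512, fs=fs)
_, h_fir = signal.freqz(b_fir, worN=512, fs=fs)
for f in (500, 1000, 2000):
    k = np.argmin(np.abs(w - f))
    print(f"{f:5.0f} Hz  IIR {20*np.log10(abs(h_iir[k])):6.1f} dB   "
          f"FIR {20*np.log10(abs(h_fir[k])):6.1f} dB")
```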
APA, Harvard, Vancouver, ISO, and other styles
2

Ge, He. "Flexible Digital Authentication Techniques". Thesis, University of North Texas, 2006. https://digital.library.unt.edu/ark:/67531/metadc5277/.

Full text
Abstract:
This dissertation investigates authentication techniques in some emerging areas. Specifically, authentication schemes have been proposed that are well-suited for embedded systems, and privacy-respecting pay Web sites. With embedded systems, a person could own several devices which are capable of communication and interaction, but these devices use embedded processors whose computational capabilities are limited as compared to desktop computers. Examples of this scenario include entertainment devices or appliances owned by a consumer, multiple control and sensor systems in an automobile or airplane, and environmental controls in a building. An efficient public key cryptosystem has been devised, which provides a complete solution to an embedded system, including protocols for authentication, authenticated key exchange, encryption, and revocation. The new construction is especially suitable for the devices with constrained computing capabilities and resources. Compared with other available authentication schemes, such as X.509, identity-based encryption, etc., the new construction provides unique features such as simplicity, efficiency, forward secrecy, and an efficient re-keying mechanism. In the application scenario for a pay Web site, users may be sensitive about their privacy, and do not wish their behaviors to be tracked by Web sites. Thus, an anonymous authentication scheme is desirable in this case. That is, a user can prove his/her authenticity without revealing his/her identity. On the other hand, the Web site owner would like to prevent a group of users from sharing a single subscription while hiding behind user anonymity. The Web site should be able to detect these possible malicious behaviors, and exclude corrupted users from future service. This dissertation extensively discusses anonymous authentication techniques, such as group signature, direct anonymous attestation, and traceable signature. Three anonymous authentication schemes have been proposed, which include a group signature scheme with signature claiming and variable linkability, a scheme for direct anonymous attestation in trusted computing platforms with sign and verify protocols nearly seven times more efficient than the current solution, and a state-of-the-art traceable signature scheme with support for variable anonymity. These three schemes greatly advance research in the area of anonymous authentication. The authentication techniques presented in this dissertation are based on common mathematical and cryptographical foundations, sharing similar security assumptions. We call them flexible digital authentication schemes.
APA, Harvard, Vancouver, ISO, and other styles
3

Tan, B. T. "Digital transmission using transform techniques". Thesis, University of Cambridge, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.384556.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Liu, Wei. "Digital beamforming employing subband techniques". Thesis, University of Southampton, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.422950.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Shankar, Udaya. "Implementation of digital modulation techniques using direct digital synthesis". Master's thesis, This resource online, 1993. http://scholar.lib.vt.edu/theses/available/etd-03302010-020333/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Majidi, Rabeeh. "DIGITALLY ASSISTED TECHNIQUES FOR NYQUIST RATE ANALOG-to-DIGITAL CONVERTERS". Digital WPI, 2015. https://digitalcommons.wpi.edu/etd-dissertations/275.

Full text
Abstract:
With the advance of technology and the rapid growth of digital systems, low-power, high-speed analog-to-digital converters with great accuracy are in demand. To achieve a high effective number of bits, Analog-to-Digital Converter (ADC) calibration, a time-consuming process, is a potential bottleneck for designs. This dissertation presents a fully digital background calibration algorithm for a 7-bit redundant flash ADC using a split structure and look-up-table-based correction. Redundant comparators are used in the flash ADC design of this work in order to tolerate large offset voltages while minimizing signal input capacitance. The split ADC structure helps by eliminating the unknown input signal from the calibration path. The flash ADC has been designed in 180 nm IBM CMOS technology and fabricated through MOSIS. This work was supported by Analog Devices, Wilmington, MA. While much research on ADC design has concentrated on increasing resolution and sample rate, there are many applications (e.g. biomedical devices and sensor networks) that do not require high performance but do require low-power, energy-efficient ADCs. This dissertation also explores the design of a low-quiescent-current 100 kSps Successive Approximation (SAR) ADC that has been used as an error detection ADC for an automotive application in 350 nm CD (CMOS-DMOS) technology. This work was supported by ON Semiconductor Corp., East Greenwich, RI.
APA, Harvard, Vancouver, ISO, and other styles
7

Suen, Tsz-yin Simon, and 孫子彥. "Curvature domain stitching of digital photographs". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2007. http://hub.hku.hk/bib/B38800901.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Keskinarkaus, A. (Anja). "Digital watermarking techniques for printed images". Doctoral thesis, Oulun yliopisto, 2013. http://urn.fi/urn:isbn:9789526200583.

Full text
Abstract:
During the last few decades, digital watermarking techniques have gained a lot of interest. Such techniques enable hiding imperceptible information in images; information which can be extracted later from those images. As a result, digital watermarking techniques have many interesting applications, for example in Internet distribution. Contents such as images are today manipulated mainly in digital form; thus, traditionally, the focus of watermarking research has been the digital domain. However, a vast amount of images will still appear in some physical format such as in books, posters or labels, and there are a number of possible applications of hidden information also in image printouts. In this case, an additional level of challenge is introduced, as the watermarking technique should be robust to extraction from printed output. In this thesis, methods are developed, where a watermarked image appears in a printout and the invisible information can be later extracted using a scanner or mobile phone camera and watermark extraction software. In these cases, the watermarking method has to be carefully designed because both the printing and capturing process cause distortions that make watermark extraction challenging. The focus of the study is on developing blind, multibit watermarking techniques, where the robustness of the algorithms is tested in an office environment, using standard office equipment. The possible effect of the background of the printed images, as well as compound attacks, are both paid particular attention to, since these are considered important in practical applications. The main objective is thus to provide technical means to achieve high robustness and to develop watermarking methods robust to the printing and scanning process. A secondary objective is to develop methods where the extraction is possible with the aid of a mobile phone camera. The main contributions of the thesis are: (1) Methods to increase watermark extraction robustness with perceptual weighting; (2) Methods to robustly synchronize the extraction of a multibit message from a printout; (3) A method to encode a multibit message, utilizing directed periodic patterns and a method to decode the message after attacks; (4) A demonstrator of an interactive poster application and a key-based robust and secure identification method from a printout.
APA, Harvard, Vancouver, ISO, and other styles
9

Gou, Hongmei. "Digital forensic techniques for graphic data". College Park, Md. : University of Maryland, 2007. http://hdl.handle.net/1903/7361.

Full text
Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2007.
Thesis research directed by: Electrical Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
APA, Harvard, Vancouver, ISO, and other styles
10

Honary, Souroush. "Advanced techniques for digital video broadcasting". Thesis, University of Leeds, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.509846.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Damianos, J. "Testing hybrid circuits using digital techniques". Thesis, University of Southampton, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.483107.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Kempson, C. N. "Statistical techniques for digital modulation recognition". Thesis, Cranfield University, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.277938.

Full text
Abstract:
Automatic modulation recognition is an important part of communications electronic monitoring and surveillance systems, where it is used for signal sorting and receiver switching. This thesis introduces a novel application of multivariate statistical techniques to the problem of automatic modulation classification. The classification technique uses modulation features derived from time-domain parameters of instantaneous signal envelope, frequency and phase. Principal component analysis (PCA) is employed for data reduction and multivariate analysis of variance (MANOVA) is used to investigate the data and to construct a discriminant function to enable the classification of modulation type. MANOVA is shown to offer advantages over the techniques already used for modulation recognition, even when simple features are used. The technique is used to construct a universal discriminator which is independent of the unknown signal-to-noise ratio (SNR) of the received signal. The universal discriminator is shown to extend the range of signal-to-noise ratios (SNRs) over which discrimination is possible, being effective over an SNR range of 0-40 dB. Development of discriminant functions using MANOVA is shown to be an extensible technique, capable of application to more complex problems.
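A minimal sketch of the kind of pipeline the abstract describes, not the author's implementation: time-domain features from the instantaneous envelope, phase, and frequency of the analytic signal, reduced with PCA and classified with a linear discriminant. The two simulated signal classes, the noise level, and the feature set are assumptions.

```python
# Sketch only: envelope/phase/frequency features + PCA + linear discriminant,
# in the spirit of the statistical classifier described above. Signal models,
# noise level, and feature choices are illustrative assumptions.
import numpy as np
from scipy.signal import hilbert
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
fs, n = 10_000, 2048

def features(x):
    z = hilbert(x)                                # analytic signal
    env = np.abs(z)
    phase = np.unwrap(np.angle(z))
    inst_f = np.diff(phase) * fs / (2 * np.pi)    # instantaneous frequency
    return [np.std(env) / np.mean(env), np.std(inst_f), np.std(np.diff(phase, 2))]

def make_signal(kind):
    t = np.arange(n) / fs
    if kind == 0:   # BPSK-like: random +/-1 symbols on a 1 kHz carrier
        sym = np.repeat(rng.choice([-1.0, 1.0], n // 64), 64)
        x = sym * np.cos(2 * np.pi * 1000 * t)
    else:           # FM-like: carrier frequency-modulated by integrated noise
        m = np.cumsum(rng.standard_normal(n)) / fs
        x = np.cos(2 * np.pi * 1000 * t + 2 * np.pi * 200 * m)
    return x + 0.1 * rng.standard_normal(n)       # additive noise (assumed level)

X = np.array([features(make_signal(k % 2)) for k in range(200)])
y = np.array([k % 2 for k in range(200)])

Xr = PCA(n_components=2).fit_transform(X)         # data reduction
clf = LinearDiscriminantAnalysis().fit(Xr[:150], y[:150])
print("held-out accuracy:", clf.score(Xr[150:], y[150:]))
```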
APA, Harvard, Vancouver, ISO, and other styles
13

Kale, Izzet. "Techniques for reducing digital filter complexity". Thesis, University of Westminster, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.319680.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Webb, William. "QAM techniques for digital mobile radio". Thesis, University of Southampton, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.385448.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Traiola, Marcello. "TEST TECHNIQUES FOR APPROXIMATE DIGITAL CIRCUITS". Thesis, Montpellier, 2019. http://www.theses.fr/2019MONTS060.

Full text
Abstract:
Despite great improvements in the semiconductor industry in terms of energy efficiency, computer systems' energy consumption is constantly growing. Many widely used applications, usually referred to as Recognition, Mining and Synthesis (RMS) applications, are more and more deployed as mobile applications and on Internet of Things (IoT) structures. Therefore, it is mandatory to improve the future silicon devices and architectures on which these applications will run. The inherent resiliency property of RMS applications has been thoroughly investigated over the last few years. This interesting property leads applications to be tolerant to errors, as long as their results remain close enough to the expected ones. Approximate Computing (AxC) is an emerging computing paradigm which takes advantage of this property. AxC has gained increasing interest in the scientific community in recent years. It is based on the intuitive observation that introducing selective relaxation of non-critical specifications may lead to efficiency gains in terms of power consumption, run time, and/or chip area. So far, AxC has been applied across the whole digital system stack, from hardware to application level. This work focuses on approximate integrated circuits (AxICs), which are the result of applying AxC at the hardware level. Functional approximation has been successfully applied to integrated circuits (ICs) in order to efficiently design AxICs. Specifically, we focus on testing aspects of functionally approximate ICs. Since approximation changes the functional behavior of ICs, techniques to test them have to be revisited. Some previous works have shown that circuit approximation brings along some challenges for testing procedures, but also some opportunities. In particular, approximation procedures intrinsically lead the circuit to produce errors, which have to be taken into account in test procedures. Error can be measured according to different error metrics. On the one hand, the occurrence of a defect in the circuit can lead it to produce unexpected catastrophic errors. On the other hand, some defects can be tolerated, when they do not induce errors over a certain threshold. This phenomenon could lead to a yield increase, if properly investigated and managed. To deal with such aspects, the conventional test flow should be revisited. Therefore, we introduce Approximation-Aware testing (AxA testing). We identify three main AxA testing phases: (i) AxA fault classification, (ii) AxA test pattern generation, and (iii) AxA test set application. Briefly, the first phase has to classify faults into catastrophic and acceptable; the test pattern generation has to produce test vectors able to cover all the catastrophic faults and, at the same time, to leave acceptable faults undetected; finally, the test set application needs to correctly classify AxICs under test into catastrophically faulty, acceptably faulty, and fault-free. Only AxICs falling into the first group will be rejected. In this thesis, we thoroughly discuss the three phases of AxA testing, and we present a set of AxA test techniques for approximate circuits. Firstly, we work on the classification of AxIC faults into catastrophic and acceptable according to an error threshold (i.e. the maximum tolerable amount of error). This classification provides two lists of faults (i.e. catastrophic and acceptable). Then, we propose an approximation-aware (ax-aware) Automatic Test Pattern Generation.
The obtained test patterns prevent catastrophic failures by detecting catastrophic defects. At the same time, they minimize the detection of acceptable ones. Finally, since the AxIC structure often leads to a yield gain lower than expected, we propose a technique to correctly classify AxICs into “catastrophically faulty”, “acceptably faulty”, and “fault-free” after the test application. To evaluate the proposed techniques, we perform extensive experiments on state-of-the-art AxICs.
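The fault-classification step described above can be illustrated with a toy example: every modeled fault of a small approximate adder is simulated exhaustively and labeled acceptable or catastrophic by comparing its worst-case error against a threshold. The circuit, the output stuck-at fault model, and the threshold value are all assumptions made for illustration.

```python
# Sketch only: classify faults as "acceptable" or "catastrophic" by comparing the
# worst-case error they induce against an error threshold, as in the AxA test flow
# described above. The toy circuit (4-bit adder with output stuck-at faults) and
# the threshold value are illustrative assumptions.
from itertools import product

WIDTH = 5                 # a 4-bit + 4-bit sum fits in 5 output bits
THRESHOLD = 2             # maximum tolerable absolute error (assumed)

def golden(a, b):
    return a + b

def faulty(a, b, bit, stuck):
    s = a + b
    mask = 1 << bit
    return (s | mask) if stuck else (s & ~mask)   # force one output bit to 1 or 0

acceptable, catastrophic = [], []
for bit, stuck in product(range(WIDTH), (0, 1)):
    worst = max(abs(faulty(a, b, bit, stuck) - golden(a, b))
                for a in range(16) for b in range(16))
    (acceptable if worst <= THRESHOLD else catastrophic).append((bit, stuck, worst))

print("acceptable faults:  ", acceptable)
print("catastrophic faults:", catastrophic)
```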
APA, Harvard, Vancouver, ISO, and other styles
16

Chan, Tsang Hung. "Digital signal processing in optical fibre digital speckle pattern interferometry". HKBU Institutional Repository, 1996. http://repository.hkbu.edu.hk/etd_ra/269.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Samsheerali, P. T. "Investigations on improved digital holographic imaging techniques". Thesis, IIT Delhi, 2015. http://localhost:8080/iit/handle/2074/6928.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Arani, Faramarz Shayan. "Trellis coded modulation techniques". Thesis, University of Warwick, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.387324.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Shi, Junhao. "Boolean techniques in testing of digital circuits". [S.l.] : [s.n.], 2006. http://deposit.ddb.de/cgi-bin/dokserv?idn=98361816X.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Alahakoon, Sanath. "Digital motion control techniques for electrical drives". Doctoral thesis, KTH, Electric Power Systems, 2000. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-2954.

Full text
Abstract:

The digital motion control area today is a well-established one, which is believed to have been first initiated by power electronic engineers in the early seventies. Modern digital control theory, advances in digital signal processor and microcontroller technology, and recent developments in power electronic devices have made this field a very competitive one. The objective of this thesis is to present some digital motion control techniques that can be applied to electrical drives. This is done by investigating two motion control problems associated with electrical drives, namely precision motion control and sensorless motion control.

Application of digital motion control techniques for precise eccentric rotor positioning of an induction machine with Active Magnetic Bearings (AMB) is the first application problem addressed in the thesis. The final goal is to prepare a flexible test rig for the study of acoustic noise in standard induction machines with rotor eccentricity. AMB control has been a challenging task for control engineers since its invention. Various types of control techniques, both analog and digital, have been attempted with a lot of success over the past years. In the application area of rotating machines, the whole concept of AMB control means stabilizing the rotor of the machine in the exact center of the radial AMBs and maintaining that position under the magnetic disturbance forces exerted on it by the stator under running conditions. The aim of the first part of the thesis is to present several digital motion control techniques that give the user the flexibility of moving the rotor to any arbitrary position in the air gap and maintaining that eccentric position.

The second part of the thesis deals with sensorless control of Permanent Magnet Synchronous Motors (PMSM) for high-speed applications. Conventional PMSM drives employ a shaft-mounted encoder or a resolver to identify the rotor flux position. It is advantageous, for many reasons, to eliminate the shaft-mounted sensor by incorporating sensorless control schemes into PMSM drive systems. A sensorless control scheme must be sufficiently robust and computationally light for it to be successful. However, reliable performance of a sensorless control drive strategy is always an integration of many digital motion control techniques. Implementation of fast current control by overcoming sampling delay in the discrete system is a key issue in this respect. Suitable speed control with a reliable controller anti-windup mechanism is also essential. Compensation techniques for the inverter non-idealities must also be incorporated to achieve better performance. In this part of the thesis, all these aspects of a well-performing sensorless control strategy for a PMSM are investigated. Frequency-dependent machine parameter variation, which is a significant practical obstacle to achieving the expected performance of these control strategies, is also addressed.

Most of the problems addressed in the thesis are related to implementation issues of a successful control method. The approach in this work is to find solutions to those application issues from automatic control theory.

Keywords: Eccentric rotor positioning, modeling, integrator anti-windup, bumpless transfer, identification, periodic disturbance cancellation, sampling delay compensation, cascaded control, speed and position estimation, compensations for non-idealities, parameter estimation, start-up technique
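One of the implementation issues listed in the keywords, integrator anti-windup, can be illustrated with a short discrete-time PI controller that saturates its output and uses back-calculation to bleed off the integrator. The gains, limits, sample time, and first-order plant below are assumed values, not the thesis's design.

```python
# Sketch only: discrete PI speed controller with back-calculation anti-windup,
# illustrating one of the motion-control building blocks mentioned above.
# Gains, limits, sample time, and the first-order plant are assumed values.
kp, ki, kaw = 2.0, 20.0, 5.0      # proportional, integral, anti-windup gains
u_min, u_max = -10.0, 10.0        # actuator limits
Ts = 1e-3                         # sample time [s]

integ, y = 0.0, 0.0               # integrator state, plant output
ref = 1.0                         # reference value

for k in range(2000):
    e = ref - y
    u_unsat = kp * e + integ
    u = min(max(u_unsat, u_min), u_max)            # saturate the command
    # back-calculation: drive the integrator toward the saturated output
    integ += Ts * (ki * e + kaw * (u - u_unsat))
    # assumed first-order plant: tau*dy/dt + y = u  (tau = 50 ms), forward Euler
    y += Ts / 0.05 * (u - y)

print(f"final output {y:.3f} (reference {ref})")
```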

APA, Harvard, Vancouver, ISO, and other styles
21

Rockliff, Simon C. "Frequency hopping techniques for digital mobile radio /". Title page, contents and abstract only, 1990. http://web4.library.adelaide.edu.au/theses/09PH/09phr683.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Brisbane, Gareth Charles Beattie. "On information hiding techniques for digital images". Access electronically, 2004. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20050221.122028/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Papaspiridis, Alexandros. "Digital signal processing techniques for gene prediction". Thesis, Imperial College London, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.590037.

Full text
Abstract:
The purpose of this research is to apply existing Digital Signal Processing techniques to DNA sequences, with the objective of developing improved methods for gene prediction. Sections of DNA sequences are analyzed in the frequency domain and frequency components that distinguish intron regions are identified (2π/10.4). Novel detectors are created using digital filters and autocorrelation, capable of identifying the location of intron regions in a sequence. The resulting signal from these detectors is used as a dynamic threshold in existing gene detectors, resulting in improved accuracies of 12% and 25%, respectively. Finally, DNA sequences are analyzed in terms of their amino acid composition, and new gene prediction algorithms are introduced.
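A minimal sketch of the general approach, not the author's detector: the DNA string is mapped to binary indicator sequences and a single-frequency spectral measure is evaluated in a sliding window at an assumed target angular frequency of 2π/10.4. The random test sequence, window length, and step size are also assumptions.

```python
# Sketch only: map a DNA string to binary indicator sequences and measure the
# spectral content at an assumed target angular frequency (2*pi/10.4) in a
# sliding window. Sequence, window length, and step size are illustrative.
import numpy as np

def indicator(seq, base):
    return np.array([1.0 if c == base else 0.0 for c in seq])

def band_power(seq, omega, win=120, step=10):
    """Single-frequency magnitude-squared measure over sliding windows."""
    n = np.arange(win)
    probe = np.exp(-1j * omega * n)
    xs = [indicator(seq, b) for b in "ACGT"]
    out = []
    for start in range(0, len(seq) - win + 1, step):
        out.append(sum(abs(np.dot(x[start:start + win], probe)) ** 2 for x in xs))
    return np.array(out)

rng = np.random.default_rng(1)
dna = "".join(rng.choice(list("ACGT"), 3000))      # random stand-in sequence
profile = band_power(dna, omega=2 * np.pi / 10.4)
print(profile.round(1)[:10])
```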
APA, Harvard, Vancouver, ISO, and other styles
24

Plummer, Andrew Robert. "Digital control techniques for electro-hydraulic servosystems". Thesis, University of Bath, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.280263.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Sari, Hayri. "Underwater acoustic voice communications using digital techniques". Thesis, Loughborough University, 1997. https://dspace.lboro.ac.uk/2134/13854.

Full text
Abstract:
An underwater acoustic voice communications system can provide a vital communication link between divers and surface supervisors. There are numerous situations in which a communication system is essential. In the event of an emergency, a diver's life may depend on fast and effective action at the surface. The design and implementation of a digital underwater acoustic voice communication system using a digital signal processor (DSP) is described. The use of a DSP enables the adoption of computationally complex speech signal processing algorithms and the transmission and reception of digital data through an underwater acoustic channel. The system is capable of operating in both transmitting and receiving modes by using a mode selection scheme. During the transmission mode, by using linear predictive coding (LPC), the speech signal is compressed whilst transmitting the compressed data in digital pulse position modulation (DPPM) format at a transmission rate of 2400 bps. At the receiver, a maximum energy detection technique is employed to identify the pulse position, enabling correct data decoding which in turn allows the speech signal to be reconstructed. The advantage of the system is to introduce advances in digital technology to underwater acoustic voice communications and update the present analogue systems employing AM and SSB modulation. Since the DSP-based system is designed in modular sections, the hardware and software can be modified if the performance of the system is inadequate. The communication system was tested successfully in a large indoor tank to simulate the effect of a short and very shallow underwater channel with severe multipath reverberation. The other objective of this study was to improve the quality of the transmitted speech signal. When the system is used by SCUBA divers, the speech signal is produced in a mask with a high pressure air environment, and bubble and breathing noise affect the speech clarity. Breathing noise is cancelled by implementing a combination of zero crossing rate and energy detection. In order to cancel bubble noise spectral subtraction and adaptive noise cancelling algorithms were simulated; the latter was found to be superior and was adopted for the current system.
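A highly simplified sketch of the modulation and detection mechanisms named in the abstract: 4-ary digital pulse position modulation decoded by maximum-energy slot detection. The slot sizes, pulse shape, and noise level are assumptions, and the LPC speech-coding stage of the real system is omitted.

```python
# Sketch only: 4-ary digital pulse position modulation (DPPM) over a noisy channel,
# decoded by picking the maximum-energy slot, as described above. Slot sizes,
# pulse shape, and noise level are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
SLOTS, SLOT_LEN = 4, 16                      # 2 bits/symbol, samples per slot

def dppm_encode(symbols):
    frames = np.zeros((len(symbols), SLOTS * SLOT_LEN))
    for i, s in enumerate(symbols):
        frames[i, s * SLOT_LEN:(s + 1) * SLOT_LEN] = 1.0   # rectangular pulse
    return frames.ravel()

def dppm_decode(rx, n_symbols):
    frames = rx.reshape(n_symbols, SLOTS, SLOT_LEN)
    energy = (frames ** 2).sum(axis=2)       # energy per slot
    return energy.argmax(axis=1)             # maximum-energy detection

tx_syms = rng.integers(0, SLOTS, 500)
rx = dppm_encode(tx_syms) + 0.4 * rng.standard_normal(500 * SLOTS * SLOT_LEN)
rx_syms = dppm_decode(rx, len(tx_syms))
print("symbol error rate:", np.mean(rx_syms != tx_syms))
```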
APA, Harvard, Vancouver, ISO, and other styles
26

Bibby, Geoffrey Thomas. "Digital image processing using parallel processing techniques". Thesis, Liverpool John Moores University, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.304539.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Greenfield, Richard Glentworth. "Application of digital techniques to loudspeaker equalization". Thesis, University of Essex, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.292175.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Amphlett, Robert W. "Multiprocessor techniques for high quality digital audio". Thesis, University of Bristol, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.337273.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Pitchford, Randall S. "Telemetry Simulation Using Direct Digital Synthesis Techniques". International Foundation for Telemetering, 1990. http://hdl.handle.net/10150/613393.

Full text
Abstract:
International Telemetering Conference Proceedings / October 29-November 02, 1990 / Riviera Hotel and Convention Center, Las Vegas, Nevada
Direct digital synthesis technology has been employed in the development of a telemetry data simulator constructed for the Western Space and Missile Center (WSMC). The telemetry simulator, known as TDVS II, is briefly described to provide background; however, the principal subject is related to the development of programmable synthesizer modules employed in the TDVS II system. The programmable synthesizer modules (or PSMs) utilize direct digital synthesizer (DDS) technology to generate a variety of common telemetry signals for simulation output. The internal behavior of DDS devices has been thoroughly examined in the literature for nearly 20 years. The author is aware of significant work in this area by every major aerospace contractor, as well as a broad range of activity by semiconductor developers, and in the universities. The purpose here is to expand awareness of the subject and its basic concepts in support of applications for the telemetry industry. During the TDVS II application development period, new DDS devices have appeared and several advances in device technology (in terms of both speed and technique) have been effected. Many fundamental communications technologies will move into greater capacity and offer new capabilities over the next few years as a direct result of DDS technology. Among these are: cellular telephony, high-definition television and video delivery systems in general, data communications down to the general business facsimile and home modem level, and other communications systems of various types to include telemetry systems. A recent literature search of the topic, limited only to documents available in English, indicates that some 25 articles and dissertations of significance have appeared since 1985, with over 30% of these appearing in international forums (including Germany, Japan, Great Britain, Portugal, Finland...). Product advertisements can readily be found in various publications on test instruments, amateur radio, etc., which indicate that international knowledge and product application of the technology is becoming increasingly widespread.
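The core of a direct digital synthesizer is a phase accumulator driving a sine lookup table; the sketch below shows that mechanism in software. The word lengths, clock rate, and output frequency are assumed values, and this is not the TDVS II hardware design.

```python
# Sketch only: direct digital synthesis with an N-bit phase accumulator and a
# sine lookup table. Word lengths, clock rate, and output frequency are assumed
# values for illustration; this is not the TDVS II programmable synthesizer module.
import numpy as np

ACC_BITS, LUT_BITS = 32, 10
f_clk = 50e6                                  # assumed reference clock [Hz]
f_out = 1.25e6                                # desired output frequency [Hz]

# Frequency tuning word: f_out = FTW * f_clk / 2**ACC_BITS
ftw = round(f_out * 2**ACC_BITS / f_clk)
lut = np.sin(2 * np.pi * np.arange(2**LUT_BITS) / 2**LUT_BITS)

acc, samples = 0, []
for _ in range(4000):
    acc = (acc + ftw) % 2**ACC_BITS           # phase accumulator wraps modulo 2^N
    samples.append(lut[acc >> (ACC_BITS - LUT_BITS)])   # truncated phase indexes the LUT

# Verify the synthesized frequency from the spectrum peak.
spec = np.abs(np.fft.rfft(samples))
print("synthesized ~", np.fft.rfftfreq(len(samples), 1 / f_clk)[spec.argmax()], "Hz")
```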
APA, Harvard, Vancouver, ISO, and other styles
30

Sirousi, Sorena. "Distributed Digital Beamforming Techniques in Satellite Networks". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Search for the full text
Abstract:
In recent years, satellite communication systems, in particular LEO constellations, have been the subject of increased attention in the new space race; this is substantiated by the numerous industrial endeavors aiming at providing high-speed broadband access anywhere at any time, e.g., SpaceX Starlink. In 5G systems, there has been an increased focus on integrating a non-terrestrial component into the broader wireless communication infrastructure. It is expected that this trend will continue in the future. Satellites can provide coverage in areas where a terrestrial infrastructure is congested or unavailable; however, their energy resources are limited and, due to the sidelobes in the multiple beam coverage, co-channel interference arises. Here, beamforming is an effective remedy for both problems. In this thesis, a distributed beamforming solution is investigated and compared with classic centralized methods. The distributed solution benefits from the fact that beamforming is not performed in a centralized manner in a single satellite, but is done collectively. So, if one satellite malfunctions, others can still provide coverage. Lastly, numerical simulations performed in MATLAB substantiate the advantages of the distributed beamforming approach.
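For reference, a conventional (centralized) narrowband beamformer for a uniform linear array can be sketched in a few lines; the distributed schemes studied in the thesis build on the same steering-vector idea. The element count, spacing, and steering direction below are assumptions.

```python
# Sketch only: a conventional (centralized) narrowband delay-and-sum beamformer for
# a uniform linear array, as a baseline for the distributed schemes discussed above.
# Element count, spacing, and steering direction are assumed values.
import numpy as np

N, d = 8, 0.5                      # elements, spacing in wavelengths
theta0 = np.deg2rad(20.0)          # desired steering direction

def steering(theta):
    n = np.arange(N)
    return np.exp(-2j * np.pi * d * n * np.sin(theta))

w = steering(theta0) / N           # conventional (delay-and-sum) weights

# Array gain toward a few directions: the maximum occurs at the steered angle.
for deg in (0, 10, 20, 40):
    g = abs(np.vdot(w, steering(np.deg2rad(deg)))) ** 2
    print(f"{deg:3d} deg : {10*np.log10(g):6.1f} dB")
```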
APA, Harvard, Vancouver, ISO, and other styles
31

Bhadra, Jayanta. "Abstraction techniques for verification of digital designs". Access restricted to users with UT Austin EID Full text (PDF) from UMI/Dissertation Abstracts International, 2001. http://wwwlib.umi.com/cr/utexas/fullcit?p3024993.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Marokkey, Sajan Raphael. "Digital techniques for dynamic visualization in photomechanics". Thesis, Hong Kong : University of Hong Kong, 1995. http://sunzi.lib.hku.hk/hkuto/record.jsp?B14670896.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Emmanouilidi, Archontia [Verfasser], and Wael [Akademischer Betreuer] Att. "Accuracy of various intraoral digital impression techniques". Freiburg : Universität, 2020. http://d-nb.info/1215031769/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Musoke, David. "Digital image processing with the Motorola 56001 digital signal processor". Scholarly Commons, 1992. https://scholarlycommons.pacific.edu/uop_etds/2236.

Full text
Abstract:
This report describes the design and testing of the Image56 system, an IBM-AT based system which consists of an analog video board and a digital board. The former contains all analog and video support circuitry to perform real-time image processing functions. The latter is responsible for performing non real-time, complex image processing tasks using a Motorola DSP56001 digital signal processor. It is supported by eight image data buffers and 512K words of DSP memory (see Appendix A for schematic diagram).
APA, Harvard, Vancouver, ISO, and other styles
35

Cloete, Eric. "Nonlinear smoothers for digital image processing". Thesis, Cape Technikon, 1997. http://hdl.handle.net/20.500.11838/2073.

Full text
Abstract:
Thesis (DTech(Business Informatics))--Cape Technikon, Cape Town, 1997
Modern applications in computer graphics and telecommunications command high-performance filtering and smoothing to be implemented. The recent development of a new class of max-min selectors for digital image processing is investigated with special emphasis on the practical implications for hardware and software design.
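A one-dimensional sketch of a max-min smoother of the general kind the abstract refers to (an opening followed by a closing, built from sliding minima and maxima); the window size and the impulse-corrupted test signal are assumptions, and this is not the thesis's operator.

```python
# Sketch only: a simple one-dimensional max-min smoother (an erosion/dilation
# "opening" followed by a "closing"), illustrative of the nonlinear smoother
# family discussed above. Window size and test signal are assumed.
import numpy as np

def sliding(x, k, fn):
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    return np.array([fn(xp[i:i + k]) for i in range(len(x))])

def max_min_smooth(x, k=5):
    opened = sliding(sliding(x, k, np.min), k, np.max)       # removes positive spikes
    return sliding(sliding(opened, k, np.max), k, np.min)    # then negative spikes

rng = np.random.default_rng(3)
clean = np.repeat([0.0, 1.0, 0.0], 50)                       # piecewise-constant signal
noisy = clean.copy()
noisy[rng.choice(150, 10, replace=False)] += rng.choice([-2.0, 2.0], 10)  # impulses
print("max |error| after smoothing:", np.max(np.abs(max_min_smooth(noisy) - clean)))
```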
APA, Harvard, Vancouver, ISO, and other styles
36

Nell, Raymond D. "Design and analysis of a system for 3D fabrication of synthetic anatomical structures". Thesis, Cape Peninsula University of Technology, 2005. http://hdl.handle.net/20.500.11838/1149.

Full text
Abstract:
Thesis (MTech (Electrical Engineering))--Cape Peninsula University of Technology, 2005
This dissertation concerns the reading and display of DICOM (Digital Imaging and Communication in Medicine) medical images and the production of model artifacts of anatomical organs using Rapid Prototyping. An algorithm to read these DICOM medical images was developed. It also displays pixel information of the image. When the DICOM image has been read and displayed, the information required to produce the anatomical artifact is extracted. These 2D slice images, MRI (Magnetic Resonance Imaging) and CT scan (Computed Tomography) images, are written to a 3D file in SLC (Slice file) and STL (Stereolithography file) format. A 3D softcopy of the anatomical structure is created. At this stage, the clinician or surgeon can make any changes or require additional information to be added to the anatomical structure. With the 3D model available in STL format, a physical artifact is produced using Rapid Prototyping. The external edge of the anatomical structure can be produced using Rapid Prototyping, as well as the outer rim with the internal structures. To produce the external surface of the structure, an outer rim edge detection algorithm has been developed. This will only extract the external surface of the structure. In addition to the softcopy of the structure, multiple organs can be displayed on the same image, and this will give a representation of the interaction of neighboring organs and structures. This is useful as both the normal anatomy and the infiltration of the abnormal pathology can be viewed simultaneously. One of the major limitations of displaying the information in a 3D image is that the files are very large. Since 3D STL files use triangles to display the outer surface of a structure, a method to reduce the file size and still keep the image information was developed. The triangle reduction method is a method to display the 3D information and to decrease the STL file size depending on the complexity of the outer surface of the structure. To ensure that the anatomical model is represented as in the DICOM files, an interpolation algorithm was developed to reconstruct the outer surface of the model from 2D MRI or CT-scan images. A word about computer models: some of the programs and presentations are based on the real world. They model the real world and anatomical structures. It is very important to note that the models are created with software. Obviously a model is useful if it resembles reality closely, but it is only a prediction about the model itself. Models are useful because they help to explain why certain things happen and how interaction takes place. Models provide suggestions for how structures might look. Computer models provide answers very quickly. These are computer models representing the real structure. (Czes Kosniowski, 1983)
APA, Harvard, Vancouver, ISO, and other styles
37

Rosenthal, Jordan. "Filters and filterbanks for hexagonally sampled signals". Diss., Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/13347.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Delic-Ibukic, Alma. "Digital Background Calibration Techniques for High-Resolution, Wide Bandwidth Analog-to-Digital Converters". Fogler Library, University of Maine, 2008. http://www.library.umaine.edu/theses/pdf/Delic-IbukicA2008.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Karanicolas, Andrew N. (Andrew Nicholas). "Digital self-calibration techniques for high-accuracy, high speed analog-to-digital converters". Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/12010.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1994.
Includes bibliographical references (leaves 219-224).
by Andrew Nicholas Karanicolas.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
40

Solanke, Abiodun Abdullahi <1983>. "Digital Forensics AI: on Practicality, Optimality, and Interpretability of Digital Evidence Mining Techniques". Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amsdottorato.unibo.it/10400/1/SOLANKE-ABIODUN-ABDULLAHI-Tesi.pdf.

Full text
Abstract:
Digital forensics as a field has progressed alongside technological advancements over the years, just as digital devices have gotten more robust and sophisticated. However, criminals and attackers have devised means for exploiting the vulnerabilities or sophistication of these devices to carry out malicious activities in unprecedented ways. Their belief is that electronic crimes can be committed without identities being revealed or trails being established. Several applications of artificial intelligence (AI) have demonstrated interesting and promising solutions to seemingly intractable societal challenges. This thesis aims to advance the concept of applying AI techniques in digital forensic investigation. Our approach involves experimenting with a complex case scenario in which suspects corresponded by e-mail and deleted, suspiciously, certain communications, presumably to conceal evidence. The purpose is to demonstrate the efficacy of Artificial Neural Networks (ANN) in learning and detecting communication patterns over time, and then predicting the possibility of missing communication(s) along with potential topics of discussion. To do this, we developed a novel approach and included other existing models. The accuracy of our results is evaluated, and their performance on previously unseen data is measured. Second, we proposed conceptualizing the term “Digital Forensics AI” (DFAI) to formalize the application of AI in digital forensics. The objective is to highlight the instruments that facilitate the best evidential outcomes and presentation mechanisms that are adaptable to the probabilistic output of AI models. Finally, we enhanced our notion in support of the application of AI in digital forensics by recommending methodologies and approaches for bridging trust gaps through the development of interpretable models that facilitate the admissibility of digital evidence in legal proceedings.
APA, Harvard, Vancouver, ISO, and other styles
41

Farahati, Nader. "New techniques for adaptive equalisation". Thesis, University of Cambridge, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.302950.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Hamlett, Neil A. "Comparison of multiresolution techniques for digital signal processing". Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from the National Technical Information Service, 1993. http://edocs.nps.edu/npspubs/scholarly/theses/1993/Mar/93Mar_Hamlett.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Scraggs, David Peter Thomas. "Digital signal processing techniques for semiconductor Compton cameras". Thesis, University of Liverpool, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.491364.

Full text
Abstract:
The work presented in this thesis has focused on the development of a low-dose Compton camera for nuclear medicine. A Compton camera composed of two high-purity planar germanium orthogonal-strip detectors has been constructed. Fast digital data acquisition has been utilised for the application of pulse shape analysis techniques. A simple back projection imaging code has been developed and validated with a Geant4 radiation transport simulation of the Compton camera configuration. A 137Cs isotropic source and a 22Na anisotropic source have been experimentally reconstructed. Parametric pulse shape analysis was applied to both data sets and has been shown to increase the detector spatial resolution from a raw granularity of 5 x 5 x 20 mm to a spatial resolution that can be represented by a Gaussian distribution with a standard deviation of 1.5 mm < σ < 2 mm in all dimensions; this result was in part derived from Geant4 simulations. Qualitatively poor images have been shown to result, based wholly on simulation, from Gaussian spatial-resolution distributions that have a standard deviation of greater than 4 mm. A partial experimental basis data set has been developed and proved capable of providing 1.9 mm FWHM average spatial resolution through the depth axis of a single detector crystal. A novel technique to identify gamma-ray scattering within single detector closed-face pixels, hitherto unrecognised, has also been introduced in this thesis. This technique, henceforth known as Digital Compton Suppression (DieS), is based on spectral analysis and has demonstrated the ability to identify events in which the Compton scattering and photoelectric absorption sites are separated by 13 mm in the direction of the electric field.
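A worked sketch of the basic Compton-camera kinematics that underlies the back projection imaging described above: the opening angle of the Compton cone is recovered from the energies deposited in the scatter and absorber detectors. The example energies are illustrative values, not data from the thesis.

```python
# Sketch only: recover the Compton cone opening angle from the energies deposited in
# the scatterer (E1) and absorber (E2), using cos(theta) = 1 - m_e*c^2*(1/E2 - 1/(E1+E2)).
# The example energies are illustrative, not measurements from the thesis.
import math

ME_C2 = 511.0  # electron rest energy [keV]

def compton_angle(e1_kev, e2_kev):
    """Scattering angle (degrees) for a photon depositing e1, then fully absorbed with e2."""
    e0 = e1_kev + e2_kev                      # incident photon energy
    cos_t = 1.0 - ME_C2 * (1.0 / e2_kev - 1.0 / e0)
    if not -1.0 <= cos_t <= 1.0:
        raise ValueError("energies are not kinematically consistent")
    return math.degrees(math.acos(cos_t))

# Example: a 662 keV (Cs-137) photon depositing 150 keV in the scatter detector.
print(f"cone half-angle: {compton_angle(150.0, 512.0):.1f} degrees")
```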
APA, Harvard, Vancouver, ISO, and other styles
44

Hashimi, Seyed Bahauddin. "Coded modulation techniques for digital mobile communication systems". Thesis, Staffordshire University, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.321787.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Ali, Mohammad Athar. "Digital rights management techniques for H.264 video". Thesis, Loughborough University, 2011. https://dspace.lboro.ac.uk/2134/8890.

Full text
Abstract:
This work aims to present a number of low-complexity digital rights management (DRM) methodologies for the H.264 standard. Initially, requirements to enforce DRM are analyzed and understood. Based on these requirements, a framework is constructed which puts forth different possibilities that can be explored to satisfy the objective. To implement computationally efficient DRM methods, watermarking and content based copy detection are then chosen as the preferred methodologies. The first approach is based on robust watermarking which modifies the DC residuals of 4×4 macroblocks within I-frames. Robust watermarks are appropriate for content protection and proving ownership. Experimental results show that the technique exhibits encouraging rate-distortion (R-D) characteristics while at the same time being computationally efficient. The problem of content authentication is addressed with the help of two methodologies: irreversible and reversible watermarks. The first approach utilizes the highest frequency coefficient within 4×4 blocks of the I-frames after CAVLC entropy encoding to embed a watermark. The technique was found to be very effective in detecting tampering. The second approach applies the difference expansion (DE) method on IPCM macroblocks within P-frames to embed a high-capacity reversible watermark. Experiments prove the technique to be not only fragile and reversible but also exhibiting minimal variation in its R-D characteristics. The final methodology adopted to enforce DRM for H.264 video is based on the concept of signature generation and matching. Specific types of macroblocks within each predefined region of an I-, B- and P-frame are counted at regular intervals in a video clip and an ordinal matrix is constructed based on their count. The matrix is considered to be the signature of that video clip and is matched with longer video sequences to detect copies within them. Simulation results show that the matching methodology is capable of not only detecting copies but also their location within a longer video sequence. Performance analysis depicts acceptable false positive and false negative rates and encouraging receiver operating characteristics. Finally, the time taken to match and locate copies is significantly low, which makes it ideal for use in broadcast and streaming applications.
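A simplified sketch of the signature idea summarized above: per-interval macroblock-type counts are converted to an ordinal (rank) matrix, and a short query signature is slid along a longer sequence's signature to locate a copy. The synthetic counts and the L1 rank distance are assumptions, not necessarily the thesis's exact measure.

```python
# Sketch only: ordinal-matrix video signatures. Macroblock-type counts per interval
# are rank-ordered, and a short query signature is slid over a longer sequence's
# signature to locate a copy. Counts are synthetic; the L1 rank distance is an
# assumed choice of dissimilarity measure.
import numpy as np

def ordinal_signature(counts):
    # counts: (intervals, macroblock_types) -> rank of each type within its interval
    return np.argsort(np.argsort(counts, axis=1), axis=1)

def locate(query_counts, long_counts):
    q = ordinal_signature(query_counts)
    s = ordinal_signature(long_counts)
    n = len(q)
    dists = [np.abs(s[i:i + n] - q).sum() for i in range(len(s) - n + 1)]
    return int(np.argmin(dists))

rng = np.random.default_rng(4)
long_counts = rng.integers(0, 200, size=(300, 4))                 # 300 intervals, 4 MB types
clip = long_counts[120:150] + rng.integers(-3, 4, size=(30, 4))   # noisy copy of a segment
print("copy located at interval", locate(clip, long_counts), "(expected 120)")
```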
APA, Harvard, Vancouver, ISO, and other styles
46

Hobson, David Mark. "Characterisation of rice grains using digital imaging techniques". Thesis, University of Kent, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.509657.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Al-Mbaideen, Amneh Ahmed. "Digital signal processing techniques for NIR spectroscopy analysis". Thesis, University of Sheffield, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.538095.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Zenteno, Efrain. "Digital Compensation Techniques for Transmitters inWireless Communications Networks". Doctoral thesis, KTH, Signalbehandling, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-167971.

Full text
Abstract:
Since they appeared, wireless technologies have deeply transformed our society. Today, wireless internet access and other wireless applications demand increasingly more traffic. However, the continuous traffic increase can be unbearable and requires rethinking and redesigning the wireless technologies in many different aspects. Aiming to respond to the increasing needs of wireless traffic, we are witnessing a rapidly evolving wireless technology scenario. This thesis addresses various aspects of the transmitters used in wireless communications. Transmitters present several hardware (HW) impairments that create distortions, polluting the radio spectrum and decreasing the achievable traffic in the network. Digital platforms are now flexible, robust and cheap enough to enable compensation of HW impairments at the digital base-band signal. This has been coined as ’dirty radio’. Dirty radio is expected in future transmitters where HW impairments may arise to reduce transmitter cost or to enhance power efficiency. This thesis covers the software (SW) compensation schemes of dirty radio developed for wireless transmitters. As described in the thesis, these schemes can be further enhanced with knowledge of the specific signal transmission or scenarios, e.g., developing cognitive digital compensation schemes. This can be valuable in today’s rapidly evolving scenarios where multiple signals may co-exist, sharing the resources at the same radio frequency (RF) front-end. In the first part, this thesis focuses on the instrumentation challenges and HW impairments encountered at the transmitter. A synthetic instrument (SI) that performs network analysis is designed to suit the instrumentation needs. Furthermore, how to perform nonlinear network analysis using the developed instrument is discussed. Two transmitter HW impairments are studied: the measurement noise and the load impedance mismatch at the transmitter, as is their coupling with the state-of-the-art digital compensation techniques. These two studied impairments are inherent to measurement systems and are expected in future wireless transmitters. In the second part, the thesis surveys the area of behavioral modeling and digital compensation techniques for wireless transmitters. Emphasis is placed on low computational complexity techniques. The low complexity is motivated by a predicted increase in the number of transmitters deployed in the network, from base stations (BS), access points and hand-held devices. A modeling methodology is developed that allows modeling transmitters to achieve both reduced computational complexity and low modeling error. Finally, the thesis discusses the emerging architectures of multi-channel transmitters and describes their digital compensation techniques. It revises the MIMO Volterra series formulation to address the general modeling problem and drafts possible solutions to tackle its dimensionality. In the framework of multi-channel transmitters, a technique to compensate nonlinear multi-carrier satellite transponders is presented. This technique is cognitive because it uses the frequency link planning and the pulse-shaping filters of the individual carriers. This technique shows enhanced compensation ability at reduced computational complexity compared to the state-of-the-art techniques and enables the efficient operation of satellite transponders.
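As a compact illustration of the low-complexity behavioral models discussed in the abstract (not the thesis's own models), the sketch below identifies a memory-polynomial model of a transmitter by least squares; the synthetic nonlinearity, memory depth, and nonlinearity order are assumptions.

```python
# Sketch only: identify a memory-polynomial behavioral model of a transmitter by
# least squares, one of the low-complexity model classes discussed above. The
# synthetic PA nonlinearity, memory depth M, and nonlinearity order K are assumptions.
import numpy as np

rng = np.random.default_rng(5)
N, K, M = 5000, 5, 3                     # samples, odd-order terms up to 2K-1, memory taps

x = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

# Assumed "true" transmitter: mild odd-order nonlinearity, one memory tap, small noise.
y = (x - 0.05 * x * np.abs(x) ** 2 + 0.1 * np.roll(x, 1)
     + 0.001 * (rng.standard_normal(N) + 1j * rng.standard_normal(N)))

def mp_matrix(x, K, M):
    cols = []
    for m in range(M):
        xm = np.roll(x, m)
        for k in range(K):
            cols.append(xm * np.abs(xm) ** (2 * k))   # basis x[n-m]*|x[n-m]|^(2k)
    return np.column_stack(cols)

Phi = mp_matrix(x, K, M)
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)        # least-squares identification
nmse = 10 * np.log10(np.mean(np.abs(y - Phi @ coef) ** 2) / np.mean(np.abs(y) ** 2))
print(f"model NMSE: {nmse:.1f} dB with {len(coef)} coefficients")
```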
APA, Harvard, Vancouver, ISO, and other styles
49

Goldfarb, Gilad. "DIGITAL SIGNAL PROCESSING TECHNIQUES FOR COHERENT OPTICAL COMMUNICATION". Doctoral diss., University of Central Florida, 2008. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2893.

Full text
Abstract:
Coherent detection with subsequent digital signal processing (DSP) is developed, analyzed theoretically and numerically and experimentally demonstrated in various fiber‐optic transmission scenarios. The use of DSP in conjunction with coherent detection unleashes the benefits of coherent detection which rely on the preservation of full information of the incoming field. These benefits include high receiver sensitivity, the ability to achieve high spectral‐efficiency and the use of advanced modulation formats. With the immense advancements in DSP speeds, many of the problems hindering the use of coherent detection in optical transmission systems have been eliminated. Most notably, DSP alleviates the need for hardware phase‐locking and polarization tracking, which can now be achieved in the digital domain. The complexity previously associated with coherent detection is hence significantly diminished and coherent detection is once again considered a feasible detection alternative. In this thesis, several aspects of coherent detection (with or without subsequent DSP) are addressed. Coherent detection is presented as a means to extend the dispersion limit of a duobinary signal using an analog decision‐directed phase‐lock loop. Analytical bit‐error ratio estimation for quadrature phase‐shift keying signals is derived. To validate the promise for high spectral efficiency, the orthogonal‐wavelength‐division multiplexing scheme is suggested. In this scheme the WDM channels are spaced at the symbol rate, thus achieving the spectral efficiency limit. Theory, simulation and experimental results demonstrate the feasibility of this approach. Infinite impulse response filtering is shown to be an efficient alternative to finite impulse response filtering for chromatic dispersion compensation. Theory, design considerations, simulation and experimental results relating to this topic are presented. Interaction between fiber dispersion and nonlinearity remains the last major challenge deterministic effects pose for long‐haul optical data transmission. Experimental results which demonstrate the possibility to digitally mitigate both dispersion and nonlinearity are presented. Impairment compensation is achieved using backward propagation by implementing the split‐step method. Efficient realizations of the dispersion compensation operator used in this implementation are considered. Infinite‐impulse response and wavelet‐based filtering are both investigated as a means to reduce the required computational load associated with signal backward‐propagation. Possible future research directions conclude this dissertation.
Ph.D.
Optics and Photonics
Optics PhD
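One of the DSP blocks discussed in the abstract, frequency-domain chromatic dispersion compensation, can be sketched as follows: the fiber's all-pass response exp(-j·β2·ω²·z/2) is applied to a QPSK test signal and then inverted at the receiver. The fiber length, β2 value, symbol rate, and test signal are assumed values, not the dissertation's implementation.

```python
# Sketch only: frequency-domain chromatic dispersion (CD) compensation for a coherent
# receiver. The fiber is modeled by the all-pass response H(w) = exp(-j*beta2*w^2*z/2)
# and compensated by its inverse. Fiber length, beta2, symbol rate, and the QPSK test
# signal are assumed values.
import numpy as np

rng = np.random.default_rng(6)
n, fs = 4096, 28e9                       # samples, sample rate (1 sample/symbol assumed)
beta2 = -21.7e-27                        # s^2/m, typical SMF at 1550 nm (assumed)
z = 100e3                                # 100 km of fiber (assumed)

tx = (rng.choice([-1, 1], n) + 1j * rng.choice([-1, 1], n)) / np.sqrt(2)   # QPSK symbols

w = 2 * np.pi * np.fft.fftfreq(n, d=1 / fs)          # angular frequency grid
H_fiber = np.exp(-1j * beta2 / 2 * w**2 * z)         # CD transfer function
rx = np.fft.ifft(np.fft.fft(tx) * H_fiber)           # dispersed signal

rx_comp = np.fft.ifft(np.fft.fft(rx) / H_fiber)      # static frequency-domain equalizer

print("max error before compensation:", np.max(np.abs(rx - tx)).round(3))
print("max error after  compensation:", np.max(np.abs(rx_comp - tx)).round(3))
```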
APA, Harvard, Vancouver, ISO, and other styles
50

Erk, Patrick P. (Patrick Peter). "Digital signal processing techniques for laser-doppler anemometry". Thesis, Massachusetts Institute of Technology, 1990. http://hdl.handle.net/1721.1/43026.

Full text
APA, Harvard, Vancouver, ISO, and other styles