Dissertations / Theses on the topic 'Signal efficiency'




Consult the top 50 dissertations / theses for your research on the topic 'Signal efficiency.'



Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Berndsen, Kevin J. "Signal Optimization for Efficient High-Power Amplifier Operation." University of Cincinnati / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1307985518.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Alonso, Kevin. "A high efficiency method for exploiting an ultra fast opto-electric signal generator." FIU Digital Commons, 1999. http://digitalcommons.fiu.edu/etd/1109.

Abstract:
Simulations suggest that photomixing in resonant laser-assisted field emission could be used to generate and detect signals from DC to 100 THz. The objective of this research is to develop a system that efficiently couples the microwave signals generated on an emitting tip by optical mixing. Four different coupling methods are studied, and tapered Goubau line is found to be the most suitable. Goubau line theory is reviewed, and programs are written to determine loss on the line. From this, Goubau tapers with a 1:100 bandwidth are designed. These tapers are then simulated using the finite-difference time-domain (FDTD) method to find the optimum design parameters. Tapered Goubau line is an effective method for coupling power from the field-emitting tip: it has large bandwidth and acceptable loss, and it is the easiest to manufacture of the four possibilities studied, an important quality for any prototype.
3

Pan, Lu. "Coding and Signal Processing Techniques for High Efficiency Data Storage and Transmission Systems." Diss., The University of Arizona, 2013. http://hdl.handle.net/10150/293753.

Abstract:
Generally speaking, a communication channel refers to a medium through which an information-bearing signal is corrupted by noise and distortion. A communication channel may result from data storage over time or data transmission through space. A primary task for communication engineers is to mathematically characterize the channel to facilitate the design of appropriate detection and coding systems. In this dissertation, two different channel modeling challenges for ultra-high-density magnetic storage are investigated: two-dimensional magnetic recording (TDMR) and bit-patterned magnetic recording (BPMR). In the case of TDMR, we characterize the error mechanisms during the write/read process of data on a TDMR medium by a finite-state machine, and then design a state-based detector that provides soft decisions for use by an outer decoder. In the case of BPMR, we employ an insertion/deletion (I/D) model. We propose an LDPC-CRC product coding scheme that enables error detection without the involvement of marker codes specifically designed for an I/D channel. We also propose a generalized Gilbert-Elliott (GE) channel to approximate the I/D channel in the sense of an equivalent I/D event rate. A lower bound on the channel capacity of the BPMR channel is derived, which supports our claim that commonly used error-correction codes are effective on the I/D channel under the assumption that I/D events are limited to a finite length. Another channel model we investigated is the perpendicular magnetic recording model, where our focus is advanced signal processing for pattern-dependent noise-predictive channel detectors. Specifically, we propose an adaptive scheme for a hardware design that reduces the complexity of the detector and the truncation/saturation error caused by a fixed-point representation of values in the detector. Lastly, we design a sequence detector for compressively sampled Bluetooth signals, thus allowing data recovery via sub-Nyquist sampling. This detector skips the conventional step of reconstructing the original signal from the compressive samples prior to detection. We also propose an adaptive design of the sampling matrix, which nearly achieves Nyquist-sampling performance at a relatively high compression ratio. Additionally, this adaptive scheme can automatically choose an appropriate compression ratio as a function of E_b/N₀ without explicit knowledge of it.
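The generalized Gilbert-Elliott approximation mentioned in this abstract can be illustrated with a minimal two-state burst-error channel simulation. The transition and error probabilities below are illustrative placeholders, not values from the dissertation:

```python
import random

def gilbert_elliott(n_bits, p_gb=0.01, p_bg=0.2, e_good=1e-4, e_bad=0.2, seed=1):
    """Simulate a two-state Gilbert-Elliott burst-error channel.

    p_gb / p_bg   : transition probabilities good->bad and bad->good
    e_good / e_bad: bit-error probabilities in each state
    Returns a list of error indicators (1 = bit flipped).
    """
    rng = random.Random(seed)
    state_bad = False
    errors = []
    for _ in range(n_bits):
        # Advance the hidden two-state Markov chain
        if state_bad:
            if rng.random() < p_bg:
                state_bad = False
        elif rng.random() < p_gb:
            state_bad = True
        # Draw an error event from the state-dependent error rate
        e = e_bad if state_bad else e_good
        errors.append(1 if rng.random() < e else 0)
    return errors

errs = gilbert_elliott(10_000)
ber = sum(errs) / len(errs)
```

Because errors cluster in the "bad" state, the average bit-error rate stays close to the stationary weight of that state times its error probability, which is the bursty behavior a GE model captures.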
4

Gonzalez, Maria C., and George R. Branner. "EFFECTS OF NON-LINEAR AMPLIFICATION ON N-GMSK AND N-FQPSK SIGNAL STATISTICS." International Foundation for Telemetering, 2004. http://hdl.handle.net/10150/605306.

Abstract:
International Telemetering Conference Proceedings / October 18-21, 2004 / Town & Country Resort, San Diego, California
Digital modulation schemes that are both power and bandwidth efficient are highly desirable. Under non-linear amplification, modulation schemes with constant or quasi-constant envelopes are less susceptible to spectral regrowth than those with non-constant envelopes. Since such distortion generates interference in adjacent channels, the amplifier is typically backed off for non-constant-envelope modulations, resulting in systems with reduced power efficiency. Constant-envelope modulations, on the other hand, may have different bandwidth spectra. This paper examines the statistical characteristics of N-GMSK and N-FQPSK [1] signals to assess their bandwidth efficiency in the presence of amplifier nonlinearities.
5

Schmitz, Michael J. "Multisine Excitation Design to Increase the Efficiency of System Identification Signal Generation and Analysis." Diss., North Dakota State University, 2012. https://hdl.handle.net/10365/26701.

Abstract:
Reducing sample frequencies in measurement systems can save power, but reduction to the point of undersampling results in aliasing and possible signal distortion. Nonlinearities of the system under test can also lead to distortions prior to measurement. In this dissertation, a first algorithm is presented for designing multisine excitation signals that can be undersampled without distortion from the aliasing of excitation frequencies or select harmonics. Next, a second algorithm is presented for designing undersampled distributions that approximate target frequency distributions. Results for pseudo-logarithmically spaced frequency distributions designed for undersampling without distortion from select harmonics show a considerable decrease in the required sampling frequency and an improvement in discrete Fourier transform (DFT) bin utilization compared to similar Nyquist-sampled output signals. Specifically, DFT bin utilization is shown to improve eleven-fold when the second algorithm is applied to a 25-tone target logarithmically spaced frequency distribution that can be applied to a nonlinear system with 2nd- and 3rd-order harmonics without distorting the excitation frequencies at the system output. This dissertation also presents a method for optimizing the generation of multisine excitation signals to allow for significant simplifications in hardware. The proposed algorithm demonstrates that a summation of square waves can sufficiently approximate a target multisine frequency distribution while simultaneously optimizing the frequency distribution to prevent corruption from some non-fundamental harmonic frequencies. Furthermore, a technique for improving the crest factor of a multisine signal composed of square waves shows superior results compared to random phase optimization, even when the set of obtainable signal phases is restricted to a limited set to further reduce hardware complexity.
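The core design constraint described above — choosing excitation tones so that undersampling folds neither the tones nor their low-order harmonics onto the excitation bins — can be sketched as a simple feasibility check. This is a rough illustration of the idea, not the dissertation's actual algorithm; the function names and example frequencies are assumptions:

```python
def alias(f, fs):
    """Frequency observed after sampling a tone of frequency f at rate fs."""
    f_mod = f % fs
    return min(f_mod, fs - f_mod)

def undersampling_ok(tones_hz, fs_hz, harmonics=(2, 3)):
    """Check that undersampling at fs_hz is distortion-free for this design:
    all excitation tones must alias to distinct bins, and no 2nd/3rd-order
    harmonic may land on any excitation image."""
    images = {round(alias(f, fs_hz), 9) for f in tones_hz}
    if len(images) != len(tones_hz):
        return False  # two excitation tones collide after aliasing
    for f in tones_hz:
        for h in harmonics:
            if round(alias(h * f, fs_hz), 9) in images:
                return False  # a harmonic corrupts an excitation bin
    return True

# A 3-tone set that survives undersampling at 1 kHz (illustrative values)
ok = undersampling_ok([110.0, 230.0, 470.0], 1000.0)
# Here the 3rd harmonic of 100 Hz (300 Hz) lands on the 300 Hz tone
bad = undersampling_ok([100.0, 300.0], 1000.0)
```

A design algorithm like the one described would search over candidate tone sets (or shift individual tones) until this predicate holds at the lowest acceptable sample rate.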
6

Qian, Hua. "Power Efficiency Improvements for Wireless Transmissions." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/11649.

Abstract:
Many communications signal formats are not power efficient because of their large peak-to-average power ratios (PARs). Moreover, in the presence of nonlinear devices such as power amplifiers (PAs) or mixers, the non-constant-modulus signals may generate both in-band distortion and out-of-band interference. Backing off the signal to the linear region of the device further reduces the system power efficiency. To improve the power efficiency of the communication system, one can pursue two approaches: i) linearize the PA; ii) reduce the high PAR of the input signal. In this dissertation, we first explore the optimal nonlinearity under the peak power constraint. We show that the optimal nonlinearity is a soft limiter with a specific gain calculated based on the peak power limit, noise variance, and the probability density function of the input amplitude. The result is also extended to the fading channel case. Next, we focus on digital baseband predistortion linearization for power amplifiers with memory effects. We build a high-speed wireless test-bed and carry out digital baseband predistortion linearization experiments. To implement adaptive PA linearization in wireless handsets, we propose an adaptive digital predistortion linearization architecture that utilizes existing components of the wireless transceiver to fulfill the adaptive predistorter training functionality. We then investigate the topic of PAR reduction for OFDM signals and forward link CDMA signals. To reduce the PAR of the OFDM signal, we propose a dynamic selected mapping (DSLM) algorithm with a two-buffer structure to reduce the computational requirement of the SLM method without sacrificing the PAR reduction capability. To reduce the PAR of the forward link CDMA signal, we propose a new PAR reduction algorithm by introducing a relative offset between the in-phase branch and the quadrature branch of the transmission system.
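The two quantities at the heart of this abstract — the peak-to-average power ratio and the soft-limiter nonlinearity — can be sketched in a few lines. A toy real-valued multicarrier waveform stands in for an OFDM signal, and the clipping threshold is arbitrary; none of this reproduces the dissertation's optimal-gain derivation:

```python
import math
import random

def multicarrier(n_sub, n_samp, seed=0):
    """Real multicarrier (OFDM-like) waveform with random BPSK phases."""
    rng = random.Random(seed)
    phases = [rng.choice((0.0, math.pi)) for _ in range(n_sub)]
    return [sum(math.cos(2 * math.pi * (k + 1) * t / n_samp + phases[k])
                for k in range(n_sub)) / n_sub
            for t in range(n_samp)]

def par_db(x):
    """Peak-to-average power ratio in dB."""
    peak = max(v * v for v in x)
    avg = sum(v * v for v in x) / len(x)
    return 10.0 * math.log10(peak / avg)

def soft_limit(x, a_max):
    """Soft limiter: linear below a_max, clipped above (sign preserved)."""
    return [max(-a_max, min(a_max, v)) for v in x]

x = multicarrier(16, 256)
# Clip at half the original peak amplitude (illustrative threshold)
x_clip = soft_limit(x, 0.5 * max(abs(v) for v in x))
```

Clipping reduces the peak power by more than the average power, so the PAR drops; the price, as the abstract notes, is in-band distortion and out-of-band interference, which is why the dissertation derives the specific optimal limiter gain rather than clipping arbitrarily.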
7

Flanagan, T. B. "Signal controlled roundabouts : An investigation into the efficiency of signal controlled roundabouts utilizing simulation techniques with particular reference to junctions with three approaches." Thesis, University of Bradford, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.376678.

8

Deshpande, Vinit Vinod. "Evaluating the Impacts of Transit Signal Priority Strategies on Traffic Flow Characteristics:Case Study along U.S.1, Fairfax County, Virginia." Thesis, Virginia Tech, 2003. http://hdl.handle.net/10919/31319.

Abstract:
Transportation engineers and planners worldwide face the challenge of improving transit services in urban areas using low-cost means. Transit signal priority is considered an effective way to improve transit service reliability and efficiency. In light of the interest in testing and deploying transit signal priority on a major arterial in Northern Virginia, this research focuses on the impacts of transit signal priority in the U.S. 1 corridor in Fairfax County in terms of benefits to transit and impacts on other traffic. Using the simulation tool VISSIM, these impacts were assessed for a ten-second green extension priority strategy. The results of the simulation analysis indicated that the Fairfax Connector buses benefit from the green extension strategy with little to no impact on other, non-transit traffic. Overall, improvements of 3.61% were found for bus service reliability and 2.64% for bus efficiency, while the negative impacts took the form of increases in queue lengths on side streets by a maximum of approximately one vehicle. This research provides a foundation for the evaluation of transit signal priority for VDOT and Fairfax County engineers and planners, and future research can build upon this effort. Areas identified for future research include the provision of priority along the entire bus route; the combination of emergency preemption and transit priority strategies; the evaluation of other priority strategies using system-wide priority concepts; and the impacts of priority strategies in monetary terms.
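The green extension strategy evaluated in this study can be sketched as a simple decision rule: when a bus is detected near the end of its approach's green, the phase is held for a bounded extra time. The function name, detector semantics, and timings below are illustrative assumptions, not the study's implementation:

```python
def green_extension(base_green_s, bus_detected, time_in_green_s,
                    max_extension_s=10.0):
    """Return the remaining green time (s) for the transit approach.

    If a bus is detected while less than max_extension_s of base green
    remains, the phase is held for max_extension_s extra seconds on top
    of the remaining base green; otherwise timing is unchanged.
    """
    remaining = base_green_s - time_in_green_s
    if bus_detected and 0.0 <= remaining < max_extension_s:
        return remaining + max_extension_s
    return max(remaining, 0.0)
```

In a microsimulation such as the VISSIM runs described above, a rule like this trades a small added delay on the cross street (the longer held green) against fewer stops for buses arriving just after the nominal end of green.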
Master of Science
9

Kamdar, Vaibhavi Killol. "Evaluating the Transit Signal Priority Impacts along the U.S. 1 Corridor in Northern Virginia." Thesis, Virginia Tech, 2004. http://hdl.handle.net/10919/30845.

Abstract:
Heavy traffic volumes in peak hours, accompanied by closely spaced signalized intersections and nearside bus stops on U.S. 1, result in congestion and traffic delays that bus transit may be able to alleviate to some extent. The capital investment and operating costs of other transit solutions such as 'Bus Rapid Transit' and 'Heavy Rail Transit' projects were found to be cost-prohibitive compared to bus transit signal priority (TSP) options. Successful implementation of a limited TSP pilot project led local authorities to conclude that TSP should be extended to the full length of the Fairfax Connector bus routes on U.S. 1. This research focused on testing the impacts of a ten-second green extension priority strategy for all northbound transit buses in the morning peak period at twenty-six signalized intersections along U.S. 1. The microsimulation model VISSIM 3.7 was used to analyze the impacts of TSP. The simulation analysis indicates that the Fairfax Connector buses might benefit from the green extension strategy. Overall, improvements of up to 4% in transit travel time savings and a 5-13% reduction in control delay for transit vehicles were observed. Considering all side-street traffic, the total increase in maximum queue length might be up to 1.23%. Proposed future research includes the evaluation of different priority strategies such as early green, red truncation, and queue jumps. The impacts of using a dedicated lane for transit buses along with TSP could also be evaluated, and conditional transit signal priority might additionally account for bus occupancy levels and bus lateness.
Master of Science
10

Mallavarpu, Navin. "Large signal model development and high efficiency power amplifier design in cmos technology for millimeter-wave applications." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44711.

Abstract:
This dissertation presents a novel large signal modeling approach which can be used to accurately model CMOS transistors used in millimeter-wave CMOS power amplifiers. The large signal model presented in this work is classified as an empirical compact device model which incorporates temperature-dependency and device periphery scaling. These added features allow for efficient design of multi-stage CMOS power amplifiers by virtue of the process-scalability. Prior to the presentation of the details of the model development, background is given regarding the 90nm CMOS process, device test structures, de-embedding methods and device measurements, all of which are necessary preliminary steps for any device modeling methodology. Following discussion of model development, the design of multi-stage 60GHz Class AB CMOS power amplifiers using the developed model is shown, providing further model validation. The body of research concludes with an investigation into designing a CMOS power amplifier operating at frequencies close to the millimeter-wave range with a potentially higher-efficiency class of power amplifier operation. Specifically, a 24GHz 130nm CMOS Inverse Class F power amplifier is simulated using a modified version of the device model, fabricated and compared with simulations. This further demonstrates the robustness of this device modeling method.
11

Netzer, Gilbert. "Efficient LU Factorization for Texas Instruments Keystone Architecture Digital Signal Processors." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-170445.

Abstract:
The energy consumption of large-scale high-performance computing (HPC) systems has become one of the foremost concerns of both data-center operators and computer manufacturers. This has renewed interest in alternative computer architectures that could offer substantially better energy efficiency. Yet the well-optimized implementations of typical HPC benchmarks needed to evaluate the potential of these architectures are often not available for architectures that are novel to the HPC industry. The LU factorization benchmark implementation presented in this work aims to provide such a high-quality tool for the HPC industry-standard high-performance LINPACK benchmark (HPL) on the eight-core Texas Instruments TMS320C6678 digital signal processor (DSP). The presented implementation performs the LU factorization at up to 30.9 GF/s at a 1.25 GHz core clock frequency using all eight DSP cores of the System-on-Chip (SoC). This is 77% of the attainable peak double-precision floating-point performance of the DSP, a level of efficiency comparable to that expected on traditional x86-based processor architectures. A detailed performance analysis shows that this is largely due to the optimized implementation of the embedded generalized matrix-matrix multiplication (GEMM). For this operation, the on-chip direct memory access (DMA) engines were used to transfer the necessary data from the external DDR3 memory to the core-private and shared scratchpad memories, allowing the data transfers to be overlapped with computations on the DSP cores. The computations were in turn optimized using software pipelining techniques and were partly implemented in assembly language. With these optimizations, the performance of the matrix multiplication reached up to 95% of the attainable peak. A detailed description of these two key optimization techniques and their application to the LU factorization is included.
Using a specially instrumented Advantech TMDXEVM6678L evaluation module, described in detail in related work, the SoC's energy efficiency was measured at up to 2.92 GF/J while executing the presented benchmark. Results from the verification of the benchmark execution using standard HPL correctness checks, together with an uncertainty analysis of the experimentally gathered data, are also presented.
The energy consumption of large-scale high-performance computing (HPC) systems has become one of the foremost concerns of both the owners of these systems and computer manufacturers. This has led to renewed interest in alternative computer architectures that may be substantially more energy efficient. Detailed analyses of the performance and energy consumption of these architectures, which are new to the HPC industry, require well-optimized implementations of standard HPC benchmark problems. The purpose of this thesis is to provide such a high-quality tool in the form of an implementation of an LU factorization benchmark for the eight-core Texas Instruments TMS320C6678 digital signal processor (DSP). The benchmark problem is the same as that of the well-known high-performance LINPACK (HPL) benchmark. The implementation presented here reached a performance of up to 30.9 GF/s at a 1.25 GHz clock frequency using all eight DSP cores simultaneously. This corresponds to 77% of the theoretically attainable performance, which is comparable to the efficiency expected of more traditional x86-based systems. A detailed performance analysis shows that this is largely achieved through the highly optimized implementation of the embedded matrix-matrix multiplication. Using dedicated direct memory access (DMA) hardware engines to copy data between the external DDR3 memory and the internal core-private and shared working memory allowed these transfers to be overlapped with computations. Optimized software implementations of these computations, partly written in assembly language, performed the matrix multiplication at up to 95% of the theoretically attainable performance. The report gives a detailed description of these two key techniques. Using a purpose-instrumented Advantech TMDXEVM6678L evaluation module, the energy efficiency while executing the implemented benchmark was measured at up to 2.92 GF/J.
Results from the verification of the benchmark implementation and an estimate of the measurement uncertainty of the experiments are also presented.
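The structure that makes the embedded GEMM dominate the work — a right-looking blocked LU factorization — can be sketched in pure Python. This is purely illustrative of the blocking: it omits the partial pivoting that HPL requires (so it assumes nonzero diagonal pivots), and a real implementation would dispatch the trailing update to an optimized GEMM:

```python
def lu_blocked(a, nb=2):
    """In-place blocked LU factorization without pivoting.

    a is a square list-of-lists; on return its lower triangle holds the
    unit-lower factor L (diagonal implicit) and its upper triangle holds U.
    """
    n = len(a)
    for k0 in range(0, n, nb):
        k1 = min(k0 + nb, n)
        # 1) Factor the panel a[k0:n, k0:k1] with unblocked LU
        for k in range(k0, k1):
            for i in range(k + 1, n):
                a[i][k] /= a[k][k]
                for j in range(k + 1, k1):
                    a[i][j] -= a[i][k] * a[k][j]
        # 2) Triangular solve for the block row U12: L11 * U12 = A12
        for k in range(k0, k1):
            for i in range(k + 1, k1):
                for j in range(k1, n):
                    a[i][j] -= a[i][k] * a[k][j]
        # 3) Trailing update A22 -= L21 * U12 -- the GEMM that dominates
        for i in range(k1, n):
            for k in range(k0, k1):
                for j in range(k1, n):
                    a[i][j] -= a[i][k] * a[k][j]
    return a
```

Step 3 touches O(n³) elements in total while steps 1-2 touch only O(n²·nb), which is why optimizing the GEMM (here, with DMA-fed scratchpads and software pipelining) determines the overall fraction of peak performance reached.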
12

Chen, Xiaohui. "Bahadur Efficiencies for Statistics of Truncated P-value Combination Methods." Digital WPI, 2018. https://digitalcommons.wpi.edu/etd-theses/1238.

Abstract:
Combination of p-values from multiple independent tests has been widely studied since the 1930s. In the search for optimal combination methods, various combiners such as Fisher's method, the inverse normal transformation, the maximal p-value, and the minimal p-value have been compared under different criteria. In this work, we focus on the criterion of Bahadur efficiency and compare various methods within the TFisher family. As a recently developed general family of combiners, TFisher covers Fisher's method, the rank truncated product method (RTP), the truncated product method (TPM, or hard-thresholding method), the soft-thresholding method, the minimal p-value method, etc. Through Bahadur asymptotics, we better understand the relative performance of these methods. In particular, by calculating the Bahadur exact slopes for the problem of detecting sparse signals, we reveal the relative advantages of truncation versus non-truncation and of hard thresholding versus soft thresholding. As a result, the soft-thresholding method is shown to be superior when the signal strength is relatively weak and the ratio between the sample size underlying each p-value and the number of combined p-values is small.
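The TFisher family discussed above can be sketched as a single statistic with two truncation parameters; the form below follows the TFisher literature, but treat the details (and the example p-values) as a sketch rather than the thesis's exact definitions:

```python
import math

def tfisher(pvals, tau1=1.0, tau2=1.0):
    """TFisher family statistic: sum of -2*log(p/tau2) over p <= tau1.

    tau1 = tau2 = 1.0       -> Fisher's method
    tau1 < 1, tau2 = 1.0    -> hard thresholding (truncated product method)
    tau1 = tau2 < 1.0       -> soft thresholding
    """
    return sum(-2.0 * math.log(p / tau2) for p in pvals if p <= tau1)

pvals = [0.001, 0.02, 0.4, 0.7, 0.9]
fisher = tfisher(pvals)                       # combines all p-values
hard = tfisher(pvals, tau1=0.05)              # keeps only small p-values
soft = tfisher(pvals, tau1=0.05, tau2=0.05)   # shrinks kept terms toward 0
```

Soft thresholding both discards large p-values and continuously shrinks the contribution of each kept p-value toward zero at the threshold, which is the property behind its advantage for weak, sparse signals noted in the abstract.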
13

Abdelghaffar, Hossam Mohamed Abdelwahed. "Developing and Testing a Novel De-centralized Cycle-free Game Theoretic Traffic Signal Controller: A Traffic Efficiency and Environmental Perspective." Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/100681.

Abstract:
Traffic congestion negatively affects traveler mobility and air quality. Stop and go vehicular movements associated with traffic jams typically result in higher fuel consumption levels compared to cruising at a constant speed. The first objective in the dissertation is to investigate the spatial relationship between air quality and traffic flow patterns. We developed and applied a recursive Bayesian estimation algorithm to estimate the source location (associated with traffic jam) of an airborne contaminant (aerosol) in a simulation environment. This algorithm was compared to the gradient descent algorithm and an extended Kalman filter algorithm. Results suggest that Bayesian estimation is less sensitive to the choice of the initial state and to the plume dispersion model. Consequently, Bayesian estimation was implemented to identify the location (correlated with traffic flows) of the aerosol (soot) that can be attributed to traffic in the vicinity of the Old Dominion University campus, using data collected from a remote sensing system. Results show that the source location of soot pollution is located at congested intersections, which demonstrate that air quality is correlated with traffic flows and congestion caused by signalized intersections. Sustainable mobility can help reduce traffic congestion and vehicle emissions, and thus, optimizing the performance of available infrastructure via advanced traffic signal controllers has become increasingly appealing. The second objective in the dissertation is to develop a novel de-centralized traffic signal controller, achieved using a Nash bargaining game-theoretic framework, that operates a flexible phasing sequence and free cycle length to adapt to dynamic changes in traffic demand levels. The developed controller was implemented and tested in the INTEGRATION microscopic traffic assignment and simulation software. 
The proposed controller was compared to an optimum fixed-time coordinated plan, an actuated controller, a centralized adaptive phase-split controller, a decentralized phase-split and cycle-length controller, and a fully coordinated adaptive phase-split, cycle-length, and offset-optimization controller. Testing was initially conducted on an isolated intersection, showing a 77% reduction in queue length, a 17% reduction in vehicle emission levels, and a 64% reduction in total delay. In addition, the developed controller was tested on an arterial network, producing statistically significant reductions in total delay ranging between 36% and 67% and vehicle emission reductions ranging between 6% and 13%. Analysis of variance, Tukey, and pairwise comparison tests were conducted to establish the significance of the proposed controller's improvements. Moreover, the controller was tested on a network of 38 intersections, producing a significant 23.6% reduction in travel time, a 37.6% reduction in queue length, and a 10.4% reduction in CO2 emissions. Finally, the controller was tested on the downtown Los Angeles network of 457 signalized intersections, producing a 35% reduction in travel time, a 54.7% reduction in queue length, and a 10% reduction in CO2 emissions. These results demonstrate that the proposed decentralized controller produces major improvements over other state-of-the-art centralized and de-centralized controllers and is capable of alleviating congestion as well as reducing emissions and enhancing air quality.
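The Nash bargaining idea behind the controller can be illustrated for a single intersection with two conflicting movements: pick the green split that maximizes the product of the movements' utilities. The utility function, saturation flow, and grid search below are illustrative simplifications, not the dissertation's decentralized, cycle-free formulation:

```python
def nash_bargaining_split(demand_a, demand_b, cycle_s=60.0,
                          sat_flow=0.5, step=1.0):
    """Grid-search the green time (s) for movement A that maximizes the
    Nash product of served demand for two conflicting movements.

    demand_a, demand_b : arrival rates (veh/s); sat_flow : saturation
    flow (veh/s). Movement B receives the remainder of the cycle.
    """
    best_g, best_prod = None, -1.0
    g = step
    while g < cycle_s:
        served_a = min(demand_a * cycle_s, sat_flow * g)
        served_b = min(demand_b * cycle_s, sat_flow * (cycle_s - g))
        prod = served_a * served_b  # Nash product of the two utilities
        if prod > best_prod:
            best_prod, best_g = prod, g
        g += step
    return best_g

# Equal demands yield a balanced split; the heavier movement gets more green
g_eq = nash_bargaining_split(0.2, 0.2)
g_skew = nash_bargaining_split(0.3, 0.1)
```

Maximizing the product (rather than the sum) of utilities is what gives the Nash bargaining solution its fairness property: no movement can be starved without collapsing the objective, which aligns with the queue and delay reductions reported above.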
Ph.D.
14

Holmström, Johnny. "GOVERNOR ELECTRONICS FOR DIESEL ENGINES : High availability platform for real-time control and advanced fuel efficiency algorithms." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-20282.

Abstract:
Fossil fuel is a scarce commodity, and its combustion results in negative environmental effects. This thesis evaluates and validates the electronics needed to run intelligent algorithms that lower fuel consumption for commercial vessels. This is done by integrating advanced fuel-saving functions into the electronic device that controls the fuel injection of large diesel engines, known as a diesel engine governor. The control system is classified as safety-critical, which means the electronics need to be designed for fail-safe operation. To allow for future research and development, the platform needs flexibility with respect to hardware reconfiguration and software changes, i.e., it is the basis for a system that allows hardware-software co-design. For efficient installation and easy commissioning, the system shall support auto-calibration combined with programmable jumper selections to attain a cost-effective solution. The computation of the fuel-saving algorithm requires accurate data to build a model of the vessel's motions. This is achieved by integrating state-of-the-art sensors and a multitude of communication interfaces; among other things, gyroscopes versus accelerometers were evaluated to find the best solution with respect to cost and performance. This design replaces the current product, DEGO III. The new product requires the same functionality and shall allow for more functions, with focus on communication, methods of acquiring sensor data, and greater computation speed. Creating a new generation of a product involves tasks such as selecting components, questions pertaining to the layout of the printed circuit board, and an evaluation of supply chains. Manufacturing aspects are considered to rationalize production and testing.
Fossil fuels are a costly commodity, and their combustion leads to negative environmental effects. This thesis evaluates and verifies the electronics needed to compute intelligent algorithms that reduce fuel consumption for commercial vessels. This is done by combining advanced functions into a single electronic unit that controls the fuel injection of large diesel engines; this electronics is known as a speed governor. The control system is classified as safety-critical, which means the electronics must be designed to be fail-safe. To allow future research and development, the platform needs to be flexible: it must allow hardware configuration and software changes, i.e., hardware-software co-design. For efficient installation and commissioning, the system must be self-calibrating and equipped with programmable jumpers that enable a cost-effective solution. Computing the fuel optimization requires a detailed model of the vessel's motion, which is made possible by integrating modern sensors and a range of communication media. Among other things, gyroscopes were evaluated against accelerometers to find the best solution with respect to cost and quality. This design replaces the current product, DEGO III. The new product needs the same functionality plus a number of new functions; the focus has been on communication, methods of collecting sensor data, and increased computing power. Developing a new generation of a product involves tasks such as selecting components, questions concerning the printed circuit board layout, and an evaluation of suppliers. Manufacturing the prototype includes evaluating production methods to streamline manufacturing and verification.
15

Jang, Haedong. "NONLINEAR EMBEDDING FOR HIGH EFFICIENCY RF POWER AMPLIFIER DESIGN AND APPLICATION TO GENERALIZED ASYMMETRIC DOHERTY AMPLIFIERS." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1406269587.

16

Samy, Md Arif Abdulla. "Characterization of 3D Silicon Pixel Detectors for the ATLAS ITk." Doctoral thesis, Università degli studi di Trento, 2022. http://hdl.handle.net/11572/347623.

Abstract:
After ten years of massive success, the Large Hadron Collider (LHC) at CERN is going for an upgrade to the next phase, The High Luminosity Large Hadron Collider (HL-LHC) which is planned to start its operation in 2029. This is expected to have a fine boost to its performance, with an instantaneous luminosity of 5.0×1034 cm-2s -1 (ultimate value 7.5×1034 cm-2s -1 ) with 200 average interactions per bunch crossing which will increase the fluences up to more than 1016 neq/ cm2 , resulting in high radiation damage in ATLAS detector. To withstand this situation, it was proposed to make the innermost layer with 3D silicon sensors, which will have radiation tolerance up to 2×1016 neq/cm2 with a Total Ionization Dose of 9.9 MGy. Two-pixel geometries have been selected for 3D sensors, 50 × 50 µm2 for Endcap (ring), which will be produced by FBK (Italy) and SINTEF (Norway), and 25 × 100 µm2 for Barrel (stave), will be produced by CNM (Spain). A discussion is made in this thesis about the production of FBK on both geometries, as they have made a breakthrough with their Stepper lithography process. The yield improved, specifically for the geometry 25 × 100 µm2 with two electrode readouts, which was problematic in the mask aligner approach. Their sensors were characterized electrically at waferlevel as well as after integration with RD53a readout chip (RoC) on single-chip cards (SCC) and were verified against Innermost Tracker criteria. The SCCs were sent for irradiation up to 1×1016 neq/cm2 and were tested under electron test beam, and a hit efficiency of 97% was presented. Some more SCCs have been sent to Los Alamos for irradiating them up to 1.5×1016 neq/cm2 fluence. As the 3D sensors will be mounted as Triplets, a discussion is also made on their assembly and QA/QC process. A reception testing and electrical testing setup both at room temperature and the cold temperature was made and discussed, with results from some early RD53a RoC-based triplets. 
The pre-production sensors have already been evaluated, and they will soon be available bump-bonded with the ITkPixV1 RoC for further testing.
APA, Harvard, Vancouver, ISO, and other styles
17

Schindler, Stefan [Verfasser]. "An improved signal model for a dual-phase xenon TPC using Bayesian inference and studies on the software trigger efficiency of the XENON1T DAQ system / Stefan Schindler." Mainz : Universitätsbibliothek der Johannes Gutenberg-Universität Mainz, 2021. http://d-nb.info/1227048599/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Jirattigalachote, Amornrat. "Provisioning Strategies for Transparent Optical Networks Considering Transmission Quality, Security, and Energy Efficiency." Doctoral thesis, KTH, Optical Network Laboratory (ON Lab), 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-94011.

Full text
Abstract:
The continuous growth of traffic demand driven by the brisk increase in the number of Internet users and emerging online services creates new challenges for communication networks. The latest advances in Wavelength Division Multiplexing (WDM) technology make it possible to build Transparent Optical Networks (TONs), which are expected to be able to satisfy this rapidly growing capacity demand. Moreover, with the ability of TONs to transparently carry the optical signal from source to destination, electronic processing of the tremendous amount of data can be avoided and optical-to-electrical-to-optical (O/E/O) conversion at intermediate nodes can be eliminated. Consequently, transparent WDM networks consume relatively little power compared to their electronic-based IP network counterpart. Furthermore, TONs also bring additional benefits in terms of bit rate, signal format, and protocol transparency. However, the absence of O/E/O processing at intermediate nodes in TONs also has some drawbacks. Without regeneration, the quality of the optical signal transmitted from a source to a destination might be degraded by physical-layer impairments induced by transmission through optical fibers and network components. For this reason, routing approaches specifically tailored to account for the effect of physical-layer impairments are needed to avoid setting up connections that do not satisfy the required signal quality at the receiver. Transparency also makes TONs highly vulnerable to deliberate physical-layer attacks. Malicious attacking signals can cause a severe impact on the traffic, and for this reason proactive mechanisms (e.g., network design strategies) able to limit their effect are required. Finally, even though the energy consumption of transparent WDM networks is lower than that of networks processing the traffic at the nodes in the electronic domain, they have the potential to consume even less power.
This can be accomplished by targeting the inefficiencies of the current provisioning strategies applied in WDM networks. The work in this thesis addresses the three important aspects mentioned above. In particular, this thesis focuses on routing and wavelength assignment (RWA) strategies specifically devised to target: (i) the lightpath transmission quality, (ii) the network security (i.e., in terms of vulnerability to physical-layer attacks), and (iii) the reduction of the network energy consumption. Our contributions are summarized below. A number of Impairment Constraint Based Routing (ICBR) algorithms have been proposed in the literature to consider physical-layer impairments during the connection provisioning phase. Their objective is to prevent the selection of optical connections (referred to as lightpaths) with poor signal quality. These ICBR approaches always assign each connection request the least impaired lightpath and support only a single threshold of transmission quality, used for all connection requests. However, next-generation networks are expected to support a variety of services with disparate requirements for transmission quality. To address this issue, in this thesis we propose an ICBR algorithm supporting differentiation of services at the Bit Error Rate (BER) level, referred to as ICBR-Diff. Our approach takes into account the effect of physical-layer impairments during the connection provisioning phase, where various BER thresholds are considered for accepting/blocking connection requests, depending on their signal quality requirements. We tested the proposed ICBR-Diff approach in different network scenarios, including fiber heterogeneity. It is shown that it can achieve a significant improvement in network performance in terms of connection blocking, compared to previously published non-differentiated RWA and ICBR algorithms.
Another important challenge to be considered in TONs is their vulnerability to physical-layer attacks. Deliberate attacking signals, e.g., high-power jamming, can cause severe service disruption or even service denial, due to their ability to propagate in the network. Detecting and locating the source of such attacks is difficult, since monitoring must be done in the optical domain, and it is also very expensive. Several attack-aware RWA algorithms have been proposed in the literature to proactively reduce the disruption caused by high-power jamming attacks. However, even with attack-aware network planning mechanisms, the uncontrollable propagation of the attack still remains an issue. To address this problem, we propose the use of power equalizers inside the network nodes in order to limit the propagation of high-power jamming attacks. Because of the high cost of such equipment, we develop a series of heuristics (including a Greedy Randomized Adaptive Search Procedure (GRASP)) aiming at minimizing the number of power equalizers needed to reduce the network attack vulnerability to a desired level by optimizing the location of the equalizers. Our simulation results show that the equalizer placement obtained by the proposed GRASP approach allows for a 50% reduction in the number of sites with power equalizers while offering the same level of attack propagation limitation as can be achieved with this additional equipment installed at all nodes. In turn, this potentially yields a significant cost saving.

Energy consumption in TONs has been the target of several studies focusing on the energy-aware and survivable network design problem for both dedicated and shared path protection. However, survivability and energy efficiency in a dynamic provisioning scenario have not been addressed. To fill this gap, in this thesis we focus on the power consumption of survivable WDM networks with dynamically provisioned 1:1 dedicated-path-protected connections.
We first investigate the potential energy savings that are achievable by setting all unused protection resources into a lower-power, stand-by state (or sleep mode) during normal network operations. It is shown that in this way the network power consumption can be significantly reduced. Thus, to optimize the energy savings, we propose and evaluate a series of energy-efficient strategies, specifically tailored around the sleep mode functionality. The performance evaluation results reveal the existence of a trade-off between energy saving and connection blocking. Nonetheless, they also show that with the right provisioning strategy it is possible to save a considerable amount of energy with a negligible impact on the connection blocking probability. In order to evaluate the performance of our proposed ICBR-Diff and energy-aware RWA algorithms, we develop two custom-made discrete-event simulators. In addition, the GRASP approach for the power equalizer placement problem is implemented in Matlab.

QC 20120508

APA, Harvard, Vancouver, ISO, and other styles
19

Shehata, Mohamed. "Hybrid Analogue and digital techniques applied to massive MIMO systems for 5G transmission at millimeter waves." Thesis, Rennes, INSA, 2019. http://www.theses.fr/2019ISAR0026.

Full text
Abstract:
The main aim of this work is to analytically analyze the performance of Hybrid Beamforming (HBF) in Millimeter Wave (mmWave) massive Multiple Input Multiple Output (MIMO) systems, to develop low-complexity HBF algorithms, combining analog and digital processing, adapted to such systems, and finally to verify the practical validity of these algorithms. The massive MIMO antenna array provides high transmit gain, overcoming the severe path-loss limitation of mmWave systems. On the other hand, applying HBF in sparse channels achieves Spectral Efficiency (SE) performance close to that of full digital beamforming, with lower hardware cost and power consumption. In this thesis we start by defining the conditions under which both HBF and full digital beamforming can achieve exactly the same SE performance. Then, we analyze the SE performance gap that arises between them in sparse mmWave MIMO channels. Moreover, we provide closed-form SE models for basic analog and HBF techniques in typical mmWave MIMO channels. Finally, we consider a Multi-User (MU) massive MIMO HBF framework that combines multiple spatial signal processing techniques for analog-domain processing, digital-domain processing, power allocation and user scheduling. We develop low-complexity algorithms for this framework in order to provide a practical, low-complexity HBF framework for future wireless communication networks that can cope with the challenges of mmWave channels.
APA, Harvard, Vancouver, ISO, and other styles
20

Anany, Hossam. "Effectiveness of a Speed Advisory Traffic Signal System for Conventional and Automated vehicles in a Smart City." Thesis, Linköpings universitet, Kommunikations- och transportsystem, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-156650.

Full text
Abstract:
This thesis project investigates the state of the art in traffic management, "Green Light Optimal Speed Advisory (GLOSA)", for vehicles in a smart city. GLOSA utilizes communication between infrastructure and vehicles, using current signal plan settings and updated vehicular information in order to influence intersection approach speeds. The project involves microscopic traffic simulations of a mixed traffic environment of conventional and automated vehicles (AVs), both connected to the intersection control and guided by a speed advisory traffic management system. Among the project goals is to assess the effects on traffic performance when human drivers comply with the speed advice. The GLOSA management approach is assessed for its potential to improve traffic efficiency at full market penetration of connected AVs with absolute compliance. The project also aims to determine the possible outcome of enhancing AV capabilities, such as implementing short time headways between vehicles in the future. The best traffic performance achieved by operating GLOSA is obtained for connected AVs with the lowest simulated time headway (0.3 sec): the waiting-time reduction reaches 95% and the trip-delay reduction 88%.
APA, Harvard, Vancouver, ISO, and other styles
21

Rukpakavong, Wilawan. "Energy-efficient and lifetime aware routing in WSNs." Thesis, Loughborough University, 2014. https://dspace.lboro.ac.uk/2134/14497.

Full text
Abstract:
Network lifetime is an important performance metric in Wireless Sensor Networks (WSNs). Transmission Power Control (TPC) is a well-established method to minimise energy consumption in transmission in order to extend node lifetime and, consequently, lead to solutions that help extend network lifetime. The accurate lifetime estimation of sensor nodes is useful for routing to make more energy-efficient decisions and prolong lifetime. This research proposes an Energy-Efficient TPC (EETPC) mechanism using the measured Received Signal Strength (RSS) to calculate the ideal transmission power. This includes the investigation of the impact factors on RSS, such as distance, height above ground, multipath environment, the capability of node, noise and interference, and temperature. Furthermore, a Dynamic Node Lifetime Estimation (DNLE) technique for WSNs is also presented, including the impact factors on node lifetime, such as battery type, model, brand, self-discharge, discharge rate, age, charge cycles, and temperature. In addition, an Energy-Efficient and Lifetime Aware Routing (EELAR) algorithm is designed and developed for prolonging network lifetime in multihop WSNs. The proposed routing algorithm includes transmission power and lifetime metrics for path selection in addition to the Expected Transmission Count (ETX) metric. Both simulation and real hardware testbed experiments are used to verify the effectiveness of the proposed schemes. The simulation experiments run on the AVRORA simulator for two hardware platforms: Mica2 and MicaZ. The testbed experiments run on two real hardware platforms: the N740 NanoSensor and Mica2. The corresponding implementations are on two operating systems: Contiki and TinyOS. The proposed TPC mechanism covers those investigated factors and gives an overall performance better than the existing techniques, i.e. it gives lower packet loss and power consumption rates, while delays do not significantly increase. 
It can be applied to single-hop networks with multihoming as well as to multihop networks. Using the DNLE technique, node lifetime can be predicted more accurately, for both static and dynamic loads. EELAR gives the best performance in packet loss rate, average node lifetime and network lifetime compared to the other algorithms, with no significant difference between the algorithms in packet delay.
APA, Harvard, Vancouver, ISO, and other styles
22

Shibata, Takafumi, Masaaki Katayama, and Akira Ogawa. "Performance of Asynchronous Band-Limited DS/SSMA Systems." IEICE, 1993. http://hdl.handle.net/2237/7200.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Chataut, Robin. "Optimization of Massive MIMO Systems for 5G Networks." Thesis, University of North Texas, 2020. https://digital.library.unt.edu/ark:/67531/metadc1707262/.

Full text
Abstract:
In the first part of the dissertation, we provide an extensive overview of sub-6 GHz wireless access technology known as massive multiple-input multiple-output (MIMO) systems, highlighting its benefits, deployment challenges, and the key enabling technologies envisaged for 5G networks. We investigate the fundamental issues that degrade the performance of massive MIMO systems such as pilot contamination, precoding, user scheduling, and signal detection. In the second part, we optimize the performance of the massive MIMO system by proposing several algorithms, system designs, and hardware architectures. To mitigate the effect of pilot contamination, we propose a pilot reuse factor scheme based on the user environment and the number of active users. The results through simulations show that the proposed scheme ensures the system always operates at maximal spectral efficiency and achieves higher throughput. To address the user scheduling problem, we propose two user scheduling algorithms based upon the measured channel gain. The simulation results show that our proposed user scheduling algorithms achieve better error performance, improve sum capacity and throughput, and guarantee fairness among the users. To address the uplink signal detection challenge in massive MIMO systems, we propose four algorithms and their system designs. We show through simulations that the proposed algorithms are computationally efficient and can achieve near-optimal bit error rate performance. Additionally, we propose hardware architectures for all the proposed algorithms to identify the required physical components and their interrelationships.
APA, Harvard, Vancouver, ISO, and other styles
24

Dallmeyer, Matthew John. "Reducing Fir Filter Costs: A Review of Approaches as Applied to Massive Fir Filter Arrays." University of Dayton / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1417544448.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Kandukuri, Somasekhar Reddy. "Spatio-Temporal Adaptive Sampling Techniques for Energy Conservation in Wireless Sensor Networks." Thesis, La Réunion, 2016. http://www.theses.fr/2016LARE0021/document.

Full text
Abstract:
Wireless sensor network (WSN) technology has been demonstrated to be a useful measurement system for numerous applications, both indoor and outdoor. A vast number of applications operate with WSN technology, such as environmental monitoring for forest fire detection, weather forecasting and water supplies, as well as emerging, population-sensitive domains such as care for the elderly or recently operated patients. Since WSN architectures are independent of existing infrastructure, they can be deployed in virtually any location and provide sensor samples accordingly in both time and space; manual deployments, by contrast, can only be achieved at high cost and involve significant work. In real-world applications, the operation of wireless sensor networks can only be maintained if certain challenges are overcome. Among these is the lifetime limitation of the distributed sensor nodes, which is strongly coupled to battery autonomy and hence to the energy optimization of the network nodes. Propositions addressing these challenges are the objective of this thesis. In summary, the contributions presented in this thesis address the overall network lifetime, the exploitation of redundant and correlated data messages, and the operation of the sensor node itself. The work has led to hierarchical clustering-based routing and filtering algorithms, simple yet efficient, that suppress redundancies by exploiting the spatio-temporal correlations of the measured data. Finally, an implementation of a multihop sensor network integrating these new features is proposed, supported by both analytical proofs and software-level validation.
APA, Harvard, Vancouver, ISO, and other styles
26

Desmond, Allan Peter. "An analytical signal transform derived from the Walsh Transform for efficient detection of dual tone multiple frequency (DTMF) signals." Thesis, Bucks New University, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.401474.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Chiu, Leung Kin. "Efficient audio signal processing for embedded systems." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44775.

Full text
Abstract:
We investigated two design strategies that would allow us to efficiently process audio signals on embedded systems such as mobile phones and portable electronics. In the first strategy, we exploit properties of the human auditory system to process audio signals. We designed a sound enhancement algorithm to make piezoelectric loudspeakers sound "richer" and "fuller," using a combination of bass extension and dynamic range compression. We also developed an audio energy reduction algorithm for loudspeaker power management by suppressing signal energy below the masking threshold. In the second strategy, we use low-power analog circuits to process the signal before digitizing it. We designed an analog front-end for sound detection and implemented it on a field programmable analog array (FPAA). The sound classifier front-end can be used in a wide range of applications because programmable floating-gate transistors are employed to store classifier weights. Moreover, we incorporated a feature selection algorithm to simplify the analog front-end. A machine learning algorithm AdaBoost is used to select the most relevant features for a particular sound detection application. We also designed the circuits to implement the AdaBoost-based analog classifier.
APA, Harvard, Vancouver, ISO, and other styles
28

Benjebbour, Anass. "Efficient Signal Processing Techniques for MIMO Systems." 京都大学 (Kyoto University), 2004. http://hdl.handle.net/2433/147580.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Zaidi, Syed Izhar Hussain. "Power Efficient Signal Processing in Reconfigurable Computing." Thesis, University of Bristol, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.520204.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Hwang, Suk-seung, John J. Shynk, and Hua Lee. "Efficient AOA Estimation Techniques for GPS Signal." International Foundation for Telemetering, 2015. http://hdl.handle.net/10150/596458.

Full text
Abstract:
ITC/USA 2015 Conference Proceedings / The Fifty-First Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2015 / Bally's Hotel & Convention Center, Las Vegas, NV
Global Positioning System (GPS) interference signals are suppressed using angle-of-arrival (AOA) techniques, while at the same time the power of the GPS signal is enhanced. After estimating all AOAs from the received signal, we must determine which AOA corresponds to the GPS signal of interest, even in the presence of high-power interference signals. In this paper, we describe an algorithm for selecting the GPS AOA by first comparing all AOAs derived from the received signals before despreading. Although this approach has excellent performance, it has a high computational complexity. In order to overcome this drawback, we introduce a modification that yields an efficient GPS AOA estimation algorithm based on a modified despreader and the constant modulus (CM) array cost function. The CM array is capable of selecting signals that have a constant modulus while rejecting non-CM interference signals. The modified despreader is the mechanism that allows this to be achieved: unlike the interference signals, the GPS signal of interest maintains a constant modulus.
APA, Harvard, Vancouver, ISO, and other styles
31

Gouba, Oussoulare. "Approche conjointe de la réduction du facteur de crête et de la linéarisation dans le contexte OFDM." Phd thesis, Supélec, 2013. http://tel.archives-ouvertes.fr/tel-00931304.

Full text
Abstract:
Power amplifiers are at the heart of today's telecommunication systems. Their linearity (to preserve the quality of the transmitted data) and their power efficiency (to save energy) are very important and are the major concerns of designers. However, they are intrinsically nonlinear analog components, and their use with non-constant-envelope signals generates distortions, namely out-of-band spectral regrowth and a degradation of the error rate. OFDM signals, which underlie many standards such as Wi-Fi, WiMAX, digital television, LTE, etc., exhibit large power variations, characterized by the PAPR (Peak-to-Average Power Ratio), which aggravate these amplifier nonlinearity problems and reduce its efficiency. Jointly treating the nonlinearities and improving the amplifier's efficiency is the objective of this thesis. To this end, the focus is on a joint approach to linearization and PAPR reduction. These two methods, until now addressed separately in the literature, are in fact complementary and interdependent, as proved by an analytical study we conducted. With the joint approach, they can simply be combined, in which case we speak of a non-collaborative approach, or additionally be allowed to exchange information and adapt to one another, which is the collaborative approach. We then propose collaborative joint-approach algorithms based on adding-signal techniques. PAPR reduction and predistortion (chosen as the linearization method) are merged into a single adding-signal formulation. A joint additional signal is then generated to both compensate for the power amplifier's nonlinearities and reduce the dynamic range of the signal to be amplified.
APA, Harvard, Vancouver, ISO, and other styles
32

Weaver, Ben. "Computationally-efficient Signal Processing Algorithms for Communications Systems." Thesis, University of York, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.503326.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Holm, Rasmus. "Energy-Efficient Mobile Communication with Cached Signal Maps." Thesis, Linköpings universitet, Programvara och system, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-124607.

Full text
Abstract:
Data communication over cellular networks is expensive for the mobile device in terms of energy, especially when the received signal strength (RSS) is low. The mobile device needs to amplify its transmission power to compensate for noise, leading to increased energy consumption. This thesis focuses on developing an RSS map for the third-generation cellular technology (3G) that can be stored locally on the mobile device and used to avoid expensive communication in low-RSS areas. The proposed signal map is created from crowdsourced information collected from several mobile devices. An application collects data on the user's mobile device and periodically sends the information back to the server, which computes the total signal map. The signal map is composed of three levels of information: RSS information, data-rate tests and estimated energy levels. The energy level categorizes the energy consumption of an area as "High", "Medium" or "Low" based on the RSS, the data-rate test information and an energy model developed from physical power measurements. This coarse categorization provides an estimate of the energy consumption at each location. It is evaluated by collecting data traces on a smartphone at different locations and comparing the measured energy consumption at each location to the energy-level categories of the map. The RSS prediction is preliminarily evaluated by collecting new data along a path and comparing how well it correlates with the signal map. The evaluation in this thesis shows that with the currently collected data there are not enough observations in the map to properly estimate the RSS. However, we believe that with more observations a more accurate evaluation could be done.
APA, Harvard, Vancouver, ISO, and other styles
34

Brandon, Mathilde. "Optimisation conjointe de méthodes de linéarisation de l'émetteur pour des modulations multi-porteuses." Phd thesis, Université de Cergy Pontoise, 2012. http://tel.archives-ouvertes.fr/tel-00762747.

Full text
Abstract:
Multicarrier modulations now appear as a proven technology for high-data-rate transmission over channels that can be severely impaired. OFDM (Orthogonal Frequency Division Multiplexing) has indeed been chosen in several telecommunication standards (ADSL, WiMAX, IEEE 802.11a/g/n, LTE, DVB, ...). However, one drawback of this type of modulation is the large variation of the instantaneous power to be transmitted. This property makes these modulations very sensitive to the nonlinearities of analog components, in particular those of the power amplifier at the transmitter. The power amplifier is a decisive element in a communication chain, as it has a dominant influence on the overall transmission budget in terms of power, efficiency and distortion. The smaller the desired impact of its nonlinearities, the lower its efficiency, and vice versa; a linearity/efficiency trade-off is therefore necessary. The objective of this thesis is to avoid this efficiency degradation while retaining good linearity performance, moreover for OFDM signals. To do so, we propose to jointly use linearization (digital baseband predistortion) and efficiency-enhancement (envelope tracking) methods for the power amplifier, together with a method for reducing the dynamic range of the signal (active constellation extension). Since conventional digital predistortion fails at high powers, we propose a method to improve this technique at those powers. Our results are validated by measurements on a 50 W power amplifier. We also propose a combination of the methods that simultaneously improves out-of-band linearity and efficiency performance while minimizing the degradation of the bit error rate performance.
APA, Harvard, Vancouver, ISO, and other styles
35

Manda, Manoj Sai. "Communication Channel Analysis for Efficient Beamforming." Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-20776.

Full text
Abstract:
In this modern communication era we are surrounded by countless electronic devices, and the need to connect with everyone and everything is increasing dramatically. As the number of electronic devices grows, the amount of data to process increases and the need for higher data speeds arises. After 1G and 2G, LTE (Long Term Evolution) came with technological improvements that allow these new high data rates to be reached, followed by an upgraded version, LTE-Advanced, launched to boost speeds further. In this thesis, a 4G LTE environment has been created using a graphical user interface (GUI) in MATLAB. Many characteristics and parameters can be tuned, such as the type of modulation, the number of UEs, the type of channel and the channel scenario, among others, to observe how the system behaves and how the results vary. The presence or absence of a line of sight between transmitter and receiver distinguishes the Rician and Rayleigh scenarios. In this thesis, different channel models are simulated and various beamforming algorithms are tested to estimate the line-of-sight component (K-factor) and the error vector magnitude. The main aim of the thesis is to understand the communication channel behaviour in the static condition (line of sight between transmitter and receiver) and the high-speed-train condition, along with the EPA, EVA and ETU models. A further aim is to use channel knowledge, comprising the signal-to-noise ratio (SNR), bit error rate (BER) and error vector magnitude (EVM), to reduce the number of computations required when performing beamforming by varying the beam-weight resolution.
APA, Harvard, Vancouver, ISO, and other styles
36

Kawala-Janik, Aleksandra. "Efficiency evaluation of external environments control using bio-signals." Thesis, University of Greenwich, 2013. http://gala.gre.ac.uk/9810/.

Full text
Abstract:
There are many types of bio-signals with various prospective control applications. This dissertation concerns the possible application domain of the electroencephalographic (EEG) signal. The use of EEG signals as a source of information for controlling external devices has recently attracted growing interest in the scientific world. The application of electroencephalographic signals in Brain-Computer Interfaces (BCI) (a variant of Human-Computer Interfaces (HCI)), as a means of enabling direct and fast communication between the human brain and an external device, has recently become very popular. BCI solutions currently available on the market require complex signal-processing methodology, which results in the need for expensive equipment with high computing power. In this work, a study of various types of EEG equipment was conducted in order to select the most appropriate one. The analysis of EEG signals is very complex due to the presence of various internal and external artifacts. The signals are also sensitive to disturbances and non-stochastic, which makes the analysis a complicated task. The research was performed on customised equipment (built by the author of this dissertation), on a professional medical device, and on the Emotiv EPOC headset. This work concentrated on the application of an inexpensive, easy-to-use Emotiv EPOC headset as a tool for acquiring EEG signals. The project also involved the application of an embedded system platform, the TS-7260. That choice constrained the selection of an appropriate signal-processing method, as embedded platforms are characterised by limited efficiency and low computing power. This aspect was the most challenging part of the whole work. The use of the embedded platform makes it possible to extend the future applications of the proposed BCI. It also gives more flexibility, as the platform is able to simulate various environments. The study did not involve traditional statistical or complex signal-processing methods.
The novelty of the solution lies in the implementation of basic mathematical operations. The efficiency of this method is also presented in this dissertation. Another important aspect of the study is that the research was carried out not only in a laboratory, but also in an environment reflecting real-life conditions. The results proved the efficiency and suitability of the proposed solution in real-life environments. Further study will focus on improving the signal-processing method and applying other bio-signals, in order to extend applicability and improve effectiveness.
APA, Harvard, Vancouver, ISO, and other styles
37

Shoji, Seiichiro. "Efficient individualisation of binaural audio signals." Thesis, University of York, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.442378.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Cui, Xian. "Efficient radio frequency power amplifiers for wireless communications." Columbus, Ohio : Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1195652135.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Asai, Takahiro. "Spatiotemporal signal processing for highly-efficient broadband wireless communications." 京都大学 (Kyoto University), 2008. http://hdl.handle.net/2433/136019.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Variyam, Pramodchandran. "Efficient testing techniques for analog and mixed-signal circuits." Diss., Georgia Institute of Technology, 1999. http://hdl.handle.net/1853/13457.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Fisher, Andrew N. "Efficient, sound formal verification for analog/mixed-signal circuits." Thesis, The University of Utah, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10003590.

Full text
Abstract:

The increasing demand for smaller, more efficient circuits has created a need for both digital and analog designs to scale down. Digital technologies have been successful in meeting this challenge, but analog circuits have lagged behind due to smaller transistor sizes having a disproportionate negative effect. Since many applications require small, low-power analog circuits, the trend has been to take advantage of digital's ability to scale by replacing as much of the analog circuitry as possible with digital counterparts. The results are known as digitally-intensive analog/mixed-signal (AMS) circuits. Though such circuits have helped the scaling problem, they have further complicated verification. This dissertation improves techniques for AMS property specification and develops sound, efficient extensions to formal AMS verification methods. With the language for analog/mixed-signal properties (LAMP), one has a simple, intuitive language for specifying AMS properties. LAMP provides a procedural method for describing properties that is more straightforward than temporal-logic-like languages. However, LAMP is still a nascent language and is limited in the types of properties it is capable of describing. This dissertation extends LAMP by adding statements to ignore transient periods and to reset the property check when the environment conditions change. After specifying a property, one needs to verify that the circuit satisfies it. An efficient method for formally verifying AMS circuits is to use the restricted polyhedral class of zones. Zones have simple operations for exploring the reachable state space, but they are only applicable to circuit models that utilize constant rates. To extend zones to more general models, this dissertation provides the theory and implementation needed to soundly handle models with ranges of rates.
As a second improvement to the state representation, this dissertation describes how octagons can be adapted to model checking of AMS circuit models. Though zones have efficient algorithms, this efficiency comes at the cost of over-approximating the reachable state space. Octagons have similarly efficient algorithms while adding flexibility that reduces the necessary over-approximations. Finally, the full methodology described in this dissertation is demonstrated on two examples. The first example is a switched-capacitor integrator that has been studied in the context of transforming the original formal model to use only single-rate assignments. The property of not saturating is written in LAMP, the circuit is learned, and the property is checked against a faulty and a correct circuit. In addition, it is shown that the zone extension, and its implementation with octagons, recovers all previous conclusions about the switched-capacitor integrator without the need to translate the model. In particular, the method applies generally to all the models produced and does not require the soundness check needed by the translational approach to accept positive verification results. As a second example, the full tool flow is demonstrated on a digital C-element driven by a pair of RC networks, creating an AMS circuit. The RC networks are chosen so that the inputs to the C-element are ordered. LAMP is used to codify this behavior, and it is verified that the input signals change in the correct order for the provided SPICE simulation traces.
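Zones are commonly represented as difference-bound matrices (DBMs), whose canonical form is obtained by a Floyd-Warshall-style closure. A minimal sketch of that standard operation (a textbook illustration, not the dissertation's tool):

```python
def dbm_close(m):
    """Canonicalise a difference-bound matrix (zone) via Floyd-Warshall.
    Entry m[i][j] encodes the constraint x_i - x_j <= m[i][j]; index 0 is
    the special zero variable, so row/column 0 holds absolute bounds."""
    n = len(m)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if m[i][k] + m[k][j] < m[i][j]:
                    m[i][j] = m[i][k] + m[k][j]
    # The zone is non-empty iff no diagonal entry became negative.
    return all(m[i][i] >= 0 for i in range(n))

# One variable x with 1 <= x <= 3:  x - x0 <= 3  and  x0 - x <= -1.
zone = [[0, -1], [3, 0]]
print(dbm_close(zone), zone)
```

An inconsistent set of bounds (e.g. x <= 1 together with x >= 2) produces a negative cycle, which the closure exposes as a negative diagonal entry, mirroring how a model checker detects an empty zone.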

APA, Harvard, Vancouver, ISO, and other styles
42

Maraš, Mirjana. "Learning efficient signal representation in sparse spike-coding networks." Thesis, Paris Sciences et Lettres (ComUE), 2019. http://www.theses.fr/2019PSLEE023.

Full text
Abstract:
The complexity of sensory input is paralleled by the complexity of its representation in the neural activity of biological systems. Starting from the hypothesis that biological networks are tuned to achieve maximal efficiency and robustness, we investigate how efficient representation can be accomplished in networks with experimentally observed local connection probabilities and synaptic dynamics. We develop a Lasso regularized local synaptic rule, which optimizes the number and efficacy of recurrent connections. The connections that impact the efficiency the least are pruned, and the strength of the remaining ones is optimized for efficient signal representation. Our theory predicts that the local connection probability determines the trade-off between the number of population spikes and the number of recurrent synapses, which are developed and maintained in the network. The more sparsely connected networks represent signals with higher firing rates than those with denser connectivity. The variability of observed connection probabilities in biological networks could then be seen as a consequence of this trade-off, and related to different operating conditions of the circuits. The learned recurrent connections are structured, with most connections being reciprocal. The dimensionality of the recurrent weights can be inferred from the network’s connection probability and the dimensionality of the feedforward input. The optimal connectivity of a network with synaptic delays is somewhere at an intermediate level, neither too sparse nor too dense. Furthermore, when we add another biological constraint, adaptive regulation of firing rates, our learning rule leads to an experimentally observed scaling of the recurrent weights. Our work supports the notion that biological micro-circuits are highly organized and principled. A detailed examination of the local circuit organization can help us uncover the finer aspects of the principles which govern sensory representation
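The pruning step of a Lasso-regularised rule corresponds to the soft-thresholding proximal operator of the L1 penalty: small weights are driven exactly to zero while the rest shrink. A one-step illustration with random weights (the numbers are invented for the sketch; this is not the thesis's full local synaptic rule):

```python
import numpy as np

def soft_threshold(w, lam):
    """Proximal step for an L1 penalty: shrink weights toward zero and
    prune those whose magnitude falls below the regularisation strength."""
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

rng = np.random.default_rng(1)
w = rng.normal(0, 1, 100)           # recurrent weights (illustrative)
w_sparse = soft_threshold(w, 0.8)
print(np.count_nonzero(w_sparse), "of", w.size, "connections survive")
```

Iterating such a step inside a learning loop trades off representation error against the number of maintained synapses, which is the trade-off the abstract describes.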
APA, Harvard, Vancouver, ISO, and other styles
43

Heyne, Benjamin. "Efficient CORDIC based implementation of selected signal processing algorithms." Aachen Shaker, 2008. http://d-nb.info/991790073/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Yatawatta, Sarod Petropulu Athina P. "Efficient signal processing techniques for future wireless communications systems /." Philadelphia, Pa. : Drexel University, 2004. http://dspace.library.drexel.edu/handle/1860/374.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Galluzzo, Francesca <1985&gt. "Efficient ultrasonic signal processing techniques for aided medical diagnostics." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amsdottorato.unibo.it/5731/1/galluzzo_francesca_tesi.pdf.

Full text
Abstract:
Ultrasound imaging is widely used in medical diagnostics as it is the fastest, least invasive, and least expensive imaging modality. However, ultrasound images are intrinsically difficult to interpret. In this scenario, Computer Aided Detection (CAD) systems can be used to support physicians during diagnosis by providing a second opinion. This thesis discusses efficient ultrasound processing techniques for computer-aided medical diagnostics, focusing on two major topics: (i) Ultrasound Tissue Characterization (UTC), aimed at characterizing and differentiating between healthy and diseased tissue; (ii) Ultrasound Image Segmentation (UIS), aimed at detecting the boundaries of anatomical structures in order to automatically measure organ dimensions and compute clinically relevant functional indices. The research on UTC produced a CAD tool for prostate cancer detection intended to improve the biopsy protocol. In particular, this thesis contributes: (i) the development of a robust classification system; (ii) the exploitation of parallel computing on GPU for real-time performance; (iii) the introduction of both an innovative semi-supervised learning algorithm and a novel supervised/semi-supervised learning scheme for CAD system training that improve system performance while reducing the data-collection effort and avoiding wasting collected data. The tool provides physicians with a risk map highlighting suspect tissue areas, allowing them to perform a lesion-directed biopsy. Clinical validation demonstrated the system's validity as a diagnostic support tool and its effectiveness at reducing the number of biopsy cores required for an accurate diagnosis. For UIS, the research developed a heart-disease diagnostic tool based on real-time 3D echocardiography. The thesis contributions to this application are: (i) the development of an automated GPU-based level-set segmentation framework for 3D images; (ii) the application of this framework to myocardium segmentation.
Experimental results showed the high efficiency and flexibility of the proposed framework. Its effectiveness as a tool for quantitative analysis of 3D cardiac morphology and function was demonstrated through clinical validation.
APA, Harvard, Vancouver, ISO, and other styles
46

Galluzzo, Francesca <1985&gt. "Efficient ultrasonic signal processing techniques for aided medical diagnostics." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amsdottorato.unibo.it/5731/.

Full text
Abstract:
Ultrasound imaging is widely used in medical diagnostics as it is the fastest, least invasive, and least expensive imaging modality. However, ultrasound images are intrinsically difficult to interpret. In this scenario, Computer Aided Detection (CAD) systems can be used to support physicians during diagnosis by providing a second opinion. This thesis discusses efficient ultrasound processing techniques for computer-aided medical diagnostics, focusing on two major topics: (i) Ultrasound Tissue Characterization (UTC), aimed at characterizing and differentiating between healthy and diseased tissue; (ii) Ultrasound Image Segmentation (UIS), aimed at detecting the boundaries of anatomical structures in order to automatically measure organ dimensions and compute clinically relevant functional indices. The research on UTC produced a CAD tool for prostate cancer detection intended to improve the biopsy protocol. In particular, this thesis contributes: (i) the development of a robust classification system; (ii) the exploitation of parallel computing on GPU for real-time performance; (iii) the introduction of both an innovative semi-supervised learning algorithm and a novel supervised/semi-supervised learning scheme for CAD system training that improve system performance while reducing the data-collection effort and avoiding wasting collected data. The tool provides physicians with a risk map highlighting suspect tissue areas, allowing them to perform a lesion-directed biopsy. Clinical validation demonstrated the system's validity as a diagnostic support tool and its effectiveness at reducing the number of biopsy cores required for an accurate diagnosis. For UIS, the research developed a heart-disease diagnostic tool based on real-time 3D echocardiography. The thesis contributions to this application are: (i) the development of an automated GPU-based level-set segmentation framework for 3D images; (ii) the application of this framework to myocardium segmentation.
Experimental results showed the high efficiency and flexibility of the proposed framework. Its effectiveness as a tool for quantitative analysis of 3D cardiac morphology and function was demonstrated through clinical validation.
APA, Harvard, Vancouver, ISO, and other styles
47

Tervo, O. (Oskari). "Transceiver optimization for energy-efficient multiantenna cellular networks." Doctoral thesis, Oulun yliopisto, 2018. http://urn.fi/urn:isbn:9789526219356.

Full text
Abstract:
This thesis focuses on the timely problem of energy-efficient transmission for wireless multiantenna cellular systems. The emphasis is on transmit beamforming (BF) and active antenna set optimization to maximize the network-wide energy efficiency (EE) metric, i.e., the number of transmitted bits per energy unit. The fundamental novelty of EE optimization is that it incorporates the transceivers' processing power in addition to the actual transmit power in the BF design. The key features of the thesis are that it focuses on sophisticated power consumption models (PCMs), giving useful insights into the EE of current cellular systems in particular, and provides mathematical tools for EE optimization in future wireless networks generally. The BF problem is first studied in a multiuser multiple-input single-output system by using a PCM scaling with transmit power and the number of active radio frequency (RF) chains. To find the best performance, a globally optimal solution based on a branch-reduce-and-bound (BRB) method is proposed, and two efficient designs based on zero-forcing and successive convex approximation (SCA) are derived for practical applications. Next, joint BF and antenna selection (JBAS) is studied, which can switch off some RF chains for further EE improvements. An optimal BRB method and efficient SCA-based algorithms exploiting continuous relaxation (CR) or sparse BF are proposed to solve the resulting mixed-Boolean nonconvex problem (MBNP). In a multi-cell system, energy-efficient coordinated BF is explored under two optimization targets: 1) the network EE maximization and 2) the weighted sum EEmax (WsumEEmax). A more sophisticated PCM scaling also with the data rate and the associated computational complexity is assumed. The SCA-based methods are derived to solve these problems in a centralized manner, and distributed algorithms relying only on the local channel state information and limited backhaul signaling are then proposed.
The WsumEEmax problem is solved using SCA combined with an alternating direction method of multipliers, and iterative closed-form algorithms having easily derivable computational complexity are developed to solve both problems. The work is subsequently extended to a multi-cell multigroup multicasting system, where user groups request multicasting data. For the MBNP, a modeling method to improve the performance of the SCA for solving the CR is proposed, aiming at encouraging the relaxed Boolean variables to converge at the binary values. A second approach based on sparse BF, which introduces no Boolean variables, is also derived. The methods are then modified to solve the EE and sum rate trade-off problem. Finally, the BF design with multiantenna receivers is considered, where the users can receive both unicasting and multicasting data simultaneously. The performances of the developed algorithms are assessed via thorough computer simulations. The results show that the proposed algorithms provide 30-300% EE improvements over various conventional methods in the BF optimization, and that JBAS techniques can offer further gains of more than 100%
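The network EE metric described above is bits per joule: achievable rate divided by total consumed power. With a nonzero circuit power the EE-optimal transmit power is interior rather than maximal, which is what makes EE optimization a distinct problem from rate maximization. A sketch with invented link parameters (not values from the thesis):

```python
import numpy as np

def energy_efficiency(p_tx, p_circuit, bandwidth=1e6, channel_gain=1e-10,
                      noise_power=1e-13):
    """Bits per joule: Shannon rate divided by total consumed power."""
    rate = bandwidth * np.log2(1.0 + channel_gain * p_tx / noise_power)
    return rate / (p_tx + p_circuit)

p = np.logspace(-3, 2, 500)              # transmit power sweep (W)
ee = energy_efficiency(p, p_circuit=1.0)
best = p[np.argmax(ee)]
print(f"EE-optimal transmit power ~ {best:.2f} W")
```

Because the objective is a ratio of a concave rate and an affine power, it is quasi-concave in the transmit power, which is the structure that fractional-programming and SCA-based methods exploit.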
APA, Harvard, Vancouver, ISO, and other styles
48

Nordlund, Per-Johan. "Efficient Estimation and Detection Methods for Airborne Applications." Doctoral thesis, Linköpings universitet, Institutionen för systemteknik, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-15826.

Full text
Abstract:
The overall purpose of this thesis is to investigate and provide computationally efficient methods for estimation and detection. The focus is on airborne applications, and we seek estimation and detection methods which are accurate and reliable yet effective with respect to computational load. In particular, the methods are optimized for terrain-aided navigation and collision avoidance, respectively. The estimation part focuses on particle filtering and the, in general, much more efficient marginalized particle filter. The detection part focuses on finding efficient methods for evaluating the probability of extreme values. This is achieved by considering the, in general, much easier task of computing the probability of level-crossings. The concept of aircraft navigation using terrain height information is attractive because of its independence of external information sources. Typically, terrain-aided navigation consists of an inertial navigation unit supported by position estimates from a terrain-aided positioning (TAP) system. TAP integrated with an inertial navigation system is challenging due to its highly nonlinear nature. Today, the particle filter is an accepted method for estimation of more or less nonlinear systems, at least when the requirements on computational load are not rigorous. In many on-line processing applications the requirements are such that they prevent the use of the particle filter. We need more efficient estimation methods to overcome this issue, and the marginalized particle filter constitutes a possible solution. The basic principle of the marginalized particle filter is to utilize linear and discrete substructures within the overall nonlinear system. These substructures are used for efficient estimation by applying optimal filters such as the Kalman filter. The computationally demanding particle filter can then be concentrated on a smaller part of the estimation problem.
The concept of an aircraft collision avoidance system is to assist or ultimately replace the pilot in order to minimize the resulting collision risk. Detection is needed in aircraft collision avoidance because of the stochastic nature of the sensor readings; here we use information from video cameras. Conflict is declared if the minimum distance between two aircraft is less than a given level, where the level is given by the radius of a safety sphere surrounding the aircraft. We use the fact that the probability of conflict, for the process studied here, is identical to the probability of a down-crossing of the surface of the sphere. In general, it is easier to compute the probability of down-crossings than of extremes. The Monte Carlo method provides a way forward to compute the probability of conflict. However, to provide a computationally tractable solution we approximate the crossing of the safety sphere with the crossing of a circular disc. The approximate method yields a result which is as accurate as the Monte Carlo method, but the computational load is decreased significantly.
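The conflict-probability computation can be illustrated in heavily simplified form: sample noisy relative trajectories and count how often the minimum distance falls inside the safety sphere. This is the plain Monte Carlo baseline the thesis improves upon, with all geometry and noise figures invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

def conflict_probability(rel_pos, rel_vel, pos_std, radius,
                         n_mc=5000, t_max=60.0):
    """Monte Carlo estimate of P(min distance < radius), i.e. the probability
    that the relative trajectory down-crosses the safety sphere."""
    t = np.linspace(0.0, t_max, 121)
    hits = 0
    for _ in range(n_mc):
        p0 = rel_pos + rng.normal(0.0, pos_std, 3)    # noisy initial state
        traj = p0[None, :] + t[:, None] * rel_vel[None, :]
        if np.min(np.linalg.norm(traj, axis=1)) < radius:
            hits += 1
    return hits / n_mc

# Near-head-on geometry with a small lateral offset (illustrative numbers).
p = conflict_probability(rel_pos=np.array([5000.0, 120.0, 0.0]),
                         rel_vel=np.array([-100.0, 0.0, 0.0]),
                         pos_std=50.0, radius=150.0)
print(f"estimated conflict probability ~ {p:.3f}")
```

The thesis's disc approximation replaces this sampling over whole trajectories with a crossing computation through a circular disc at the predicted encounter, cutting the computational load substantially.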
APA, Harvard, Vancouver, ISO, and other styles
49

Mahdi, Abdul-Hussain Ebrahim. "Efficient generalized transform algorithms for digital implementation." Thesis, Bangor University, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.277612.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Turnes, Christopher Kowalczyk. "Efficient solutions to Toeplitz-structured linear systems for signal processing." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/51878.

Full text
Abstract:
This research develops efficient solution methods for linear systems with scalar and multi-level Toeplitz structure. Toeplitz systems are common in one-dimensional signal-processing applications, and typically correspond to temporal or spatial invariance in the underlying physical phenomenon. Over time, a number of algorithms have been developed to solve these systems economically by exploiting their structure. These developments began with the Levinson-Durbin recursion, a classical fast method for solving Toeplitz systems that has become a standard algorithm in signal processing. Subsequently, more advanced routines known as superfast algorithms were introduced that are capable of solving Toeplitz systems with even lower asymptotic complexity. For multi-dimensional signals, temporally and spatially invariant systems have linear-algebraic descriptions characterized by multi-level Toeplitz matrices, which exhibit Toeplitz structure on multiple levels. These matrices lack the algebraic properties and structural simplicity of their scalar analogs. As a result, it has proven exceedingly difficult to extend the existing scalar Toeplitz algorithms to treat them. This research presents algorithms to solve scalar and two-level Toeplitz systems through a constructive approach, using methods devised for specialized cases to build more general solution methods. These methods extend known scalar Toeplitz inversion results to more general scalar least-squares problems and to multi-level Toeplitz problems. The resulting algorithms have the potential to provide substantial computational gains for a large class of problems in signal processing, such as image deconvolution, non-uniform resampling, and the reconstruction of spatial volumes from non-uniform Fourier samples.
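As a sketch of the classical starting point the abstract names, the following is one common formulation of the Levinson-Durbin recursion, which solves the symmetric Toeplitz (Yule-Walker) system R a = r[1:n+1] in O(n^2) operations rather than the O(n^3) of a general dense solver (variable names and the test values are my own):

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve R a = r[1:order+1] in O(order^2), where R is the symmetric
    Toeplitz matrix with first column r[0:order] (the Yule-Walker equations)."""
    a = np.zeros(order)
    e = r[0]  # prediction-error variance
    for k in range(order):
        # reflection coefficient from the current residual correlation
        acc = r[k + 1] - np.dot(a[:k], r[k:0:-1])
        kappa = acc / e
        # order update: combine previous coefficients with their reversal
        a[:k] = a[:k] - kappa * a[:k][::-1]
        a[k] = kappa
        e *= 1.0 - kappa ** 2
    return a, e

# verify against a dense solve on a small example
r = np.array([4.0, 2.0, 1.5, 0.8])
a, e = levinson_durbin(r, 3)
R = np.array([[r[abs(i - j)] for j in range(3)] for i in range(3)])
print(np.allclose(R @ a, r[1:4]))  # prints True
```

The superfast algorithms the abstract refers to go further, reaching roughly O(n log^2 n) complexity, and the multi-level (e.g. two-level) case treated in the thesis does not reduce to this scalar recursion.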
APA, Harvard, Vancouver, ISO, and other styles