Dissertations / Theses on the topic 'Reduction technique'

To see the other types of publications on this topic, follow the link: Reduction technique.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Reduction technique.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Wise, Michael Anthony. "A variance reduction technique for production cost simulation." Ohio : Ohio University, 1989. http://www.ohiolink.edu/etd/view.cgi?ohiou1182181023.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Panwher, Mohammad Ibrahim. "A novel technique for tube sinking." Thesis, Sheffield Hallam University, 1986. http://shura.shu.ac.uk/20183/.

Full text
Abstract:
A new technique for tube sinking has been developed which should, in a number of ways, help to solve the problems associated with conventional tube sinking processes, e.g. die wear and the need for a swaged-down leading end for easy insertion through the die. The conventional reduction die is replaced altogether by a die-less reduction unit of stepped bore configuration. The deformation is induced by means of the hydrodynamic pressure and drag force generated in the unit by the motion of the tube through a viscous fluid medium (polymer melt). The dimensions of the die-less reduction unit are such that the smallest bore size is dimensionally greater than the nominal diameter of the undeformed tube; thus metal-to-metal contact, and hence wear, should no longer be a problem. As no conventional reduction die is used, the need for a reduced-diameter leading end is also eliminated. Experimental results show that greater reductions in tube diameter and coating thickness were obtained at slower drawing speeds (about 0.1 m/s). The maximum reduction in diameter noted was about 7 per cent. Analytical models have been developed, assuming both Newtonian and non-Newtonian characteristics of the pressure medium, which enabled prediction of the length of the deformation zone, the percentage reduction in diameter and the drawing stress. The non-Newtonian analysis took account of the pressure coefficient of viscosity, derived from the available data; the limiting shear stress, which manifests itself as slip in the polymer melt; and the strain hardening and strain-rate sensitivity of the tube material. The percentage reductions in diameter predicted using the Newtonian analyses appear to differ considerably from the experimental results in both trend and magnitude. The non-Newtonian analysis predicted theoretical results much closer to those observed experimentally.
APA, Harvard, Vancouver, ISO, and other styles
3

Coupland, Jeremy. "Particle image velocimetry : data reduction using optical correlation." Thesis, University of Southampton, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.255649.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Koh, Jeongwook. "Low-frequency-noise reduction technique for linear analog CMOS IC's." [S.l.] : [s.n.], 2005. http://deposit.ddb.de/cgi-bin/dokserv?idn=979089980.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Miah, Tunu. "Vanishing windows : a technique for adaptive screen management." Thesis, Loughborough University, 1998. https://dspace.lboro.ac.uk/2134/27081.

Full text
Abstract:
Windowing systems offer many benefits to users, such as being able to work on multiple tasks concurrently; or working with a number of windows, each connected to different remote machines or applications. Unless these windows are managed efficiently, users can easily become overwhelmed by the number of currently open windows and lose their way round the desktop. This can lead to a state where the desktop is cluttered with windows. At this stage "window thrashing" occurs, as users begin to perform window management operations (move, resize, minimise etc.) in order to locate relevant pieces of information contained in one of several open windows.
APA, Harvard, Vancouver, ISO, and other styles
6

Barbarulo, Andrea. "On a PGD model order reduction technique for mid-frequency acoustic." PhD thesis, École normale supérieure de Cachan - ENS Cachan, 2012. http://tel.archives-ouvertes.fr/tel-00822643.

Full text
Abstract:
In many industrial contexts, such as aerospace applications or car design, numerical prediction techniques are becoming more and more useful. They restrict the use of real prototypes to a minimum and ease the design phase. In such industries, and specifically in acoustics, engineers are interested in computing the responses of systems over frequency bands. In order to predict the vibration behavior of systems over frequency bands, standard numerical techniques usually involve many frequency-fixed computations at many different frequencies. Although this is a straightforward and natural way to answer the posed problem, such a strategy can easily lead to huge computations, and the amount of data to store often increases significantly. This is particularly true in the context of medium frequency bands, where these responses have a strong sensitivity to the frequency. In this work, PGD (Proper Generalized Decomposition) is first applied to find a separated functional representation, over frequency and space, of the unknown amplitude of the VTCR (Variational Theory of Complex Rays) formulation on a reduced frequency space. This allows a high-quality mid-frequency response to be calculated over a wide band without a fine frequency discretization, saving computational resources. Moreover, the PGD representation of the solution saves a huge amount of space in terms of stored data. In a second stage, the PGD technique has been extended to mid-frequency wide bands with uncertainty.
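The separated frequency/space representation at the heart of the PGD can be illustrated on a plain matrix of sampled responses. The greedy alternating fixed point below is a generic sketch under simplifying assumptions (a dense precomputed field, no VTCR coupling), not the thesis's actual algorithm; the function and variable names are illustrative.

```python
import numpy as np

def pgd_separate(field, n_modes, n_iter=50):
    """Greedy rank-one separation field[x, w] ~ sum_i space_i[x] * freq_i[w],
    built mode by mode with an alternating fixed point (PGD-style)."""
    residual = np.array(field, dtype=float)
    space_modes, freq_modes = [], []
    for _ in range(n_modes):
        s = np.ones(residual.shape[0])
        for _ in range(n_iter):
            w = residual.T @ s / (s @ s)   # best frequency mode for fixed s
            s = residual @ w / (w @ w)     # best space mode for fixed w
        space_modes.append(s)
        freq_modes.append(w)
        residual = residual - np.outer(s, w)
    return np.array(space_modes), np.array(freq_modes)

# A toy "space x frequency" response that is exactly rank one.
xs = np.linspace(0.0, 1.0, 40)
ws = np.linspace(1.0, 2.0, 25)
field = np.outer(np.sin(np.pi * xs) + 1.5, 1.0 / ws)
space, freq = pgd_separate(field, n_modes=1)
print(np.abs(field - np.outer(space[0], freq[0])).max())
```

Storing the few space and frequency vectors instead of the full matrix is what yields the data-storage savings the abstract mentions.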
APA, Harvard, Vancouver, ISO, and other styles
7

Sim, Zee Ang. "PAPR Reduction in Multicarrier Communication Systems Using Efficient Pulse Shaping Technique." Thesis, Curtin University, 2020. http://hdl.handle.net/20.500.11937/79917.

Full text
Abstract:
Emerging multicarrier modulation schemes have been considered for the fifth generation (5G) communication systems. However, existing designs often suffer from a high peak-to-average power ratio (PAPR) in the transmitted signal. This thesis aims to (i) design pulse shaping filters that reduce the PAPR using a computationally efficient optimisation approach, (ii) investigate the performance of multicarrier systems employing the designed filters, and (iii) study the power utilisation efficiency of the nonlinear amplifier with the use of the designed filters.
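The PAPR metric the thesis targets is straightforward to compute. As a minimal sketch (measuring the PAPR of a randomly modulated, oversampled OFDM symbol, not the thesis's filter design), one might write:

```python
import numpy as np

def papr_db(signal):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    power = np.abs(signal) ** 2
    return 10.0 * np.log10(power.max() / power.mean())

rng = np.random.default_rng(0)
n_sub = 64
# Random QPSK symbols on each subcarrier of one OFDM symbol.
symbols = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=n_sub)
# 4x oversampled time-domain signal via a zero-padded IFFT.
time_signal = np.fft.ifft(symbols, n=4 * n_sub)
print(f"PAPR of this OFDM symbol: {papr_db(time_signal):.2f} dB")
```

A pulse shaping filter of the kind the thesis designs would be applied to the time-domain signal before measuring the PAPR again.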
APA, Harvard, Vancouver, ISO, and other styles
8

Houlis, Pantazis Constantine. "A novel parametrized controller reduction technique based on different closed-loop configurations." University of Western Australia. School of Electrical, Electronic and Computer Engineering, 2009. http://theses.library.uwa.edu.au/adt-WU2010.0052.

Full text
Abstract:
This thesis is concerned with the approximation of high-order controllers, i.e. the controller reduction problem. We first consider approximating high-order controllers by low-order controllers based on closed-loop system approximation. By approximating the closed-loop system transfer function, we derive a new parametrized double-sided frequency-weighted model reduction problem. The formulas for the input and output weights are derived using three closed-loop system configurations: (i) by placing a controller in cascade with the plant, (ii) by placing a controller in the feedback path, and (iii) by using the linear fractional transformation (LFT) representation. One of the weights is a function of a free parameter which can be varied in the resultant frequency-weighted model reduction problem. We show that by using standard frequency-weighted model reduction techniques, the approximation error can easily be reduced by varying the free parameter to give more accurate low-order controllers. A method for choosing the free parameter to obtain optimal results is suggested. A number of practical examples are used to show the effectiveness of the proposed controller reduction method. We then consider the relationships between the closed-loop system configurations, which can be expressed using a classical control block diagram or a modern control block diagram (LFT). Formulas are derived to convert a closed-loop system represented by a classical control block diagram to one represented by a modern control block diagram and vice versa.
APA, Harvard, Vancouver, ISO, and other styles
9

Rosa, Thiago Raupp da. "Reduction of energy consumption in MPSOCS through a dynamic frequency scaling technique." Pontifícia Universidade Católica do Rio Grande do Sul, 2012. http://hdl.handle.net/10923/1482.

Full text
Abstract:
NoC-based MPSoCs are employed in several embedded systems due to the high performance achieved by using multiple processing elements (PEs). However, power and energy restrictions, especially in mobile applications, may render the design of MPSoCs over-constrained. Thus, the use of power management techniques is mandatory. Moreover, due to the high variability present in the application workloads executed by these devices, this management must be performed dynamically. The use of traditional dynamic voltage and frequency scaling (DVFS) techniques has proved useful in several scenarios to save energy. Nonetheless, due to technology scaling, which limits the voltage variation, and to the slow response of DVFS schemes, such techniques may become inadequate in newer DSM technology nodes. As an alternative, the use of dynamic frequency scaling (DFS) may provide a good trade-off between power savings and power overhead. This work proposes a self-adaptable distributed DFS scheme for NoC-based MPSoCs. Both the NoC and the PEs have an individual frequency control scheme. The DFS scheme for PEs takes into account the PE computation and communication loads to dynamically change the operating frequency. In the NoC, a DFS controller uses packet information and router activity to decide the router operating frequency. Also, the clock generation module is designed to provide a clock signal to the PEs and NoC routers. The clock generation method is simple, based on local selective clock gating of a single global clock; it provides a wide range of generated clocks, induces low area and power overheads, and presents a small response time. Synthetic and real applications were used to evaluate the proposed scheme. Results show that the number of executed instructions can be reduced by up to 65% (28% on average), with an execution time overhead of only up to 14% (9% on average). The consequent power dissipation reduction reaches up to 52% in the PEs (23% on average) and up to 76% in the NoC (71% on average). The power overhead induced by the proposed scheme is around 3% in the PEs and around 10% in the NoC.
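The per-PE frequency decision described above can be sketched in a few lines. This is a hedged toy model, assuming a hypothetical set of integer dividers of the single global clock and normalized load inputs; the thesis's actual controller uses hardware computation and communication monitors not modeled here.

```python
def select_divisor(cpu_load, net_load, divisors=(1, 2, 4, 8, 16)):
    """Pick the largest divider of the global clock whose resulting
    frequency (1/div of nominal) still covers the PE's combined
    computation + communication demand, both given in [0.0, 1.0]."""
    demand = min(1.0, cpu_load + net_load)
    for div in reversed(divisors):        # try the slowest clock first
        if 1.0 / div >= demand:
            return div
    return divisors[0]                    # fall back to full speed

# A lightly loaded PE can run at a quarter of the global clock...
print(select_divisor(cpu_load=0.15, net_load=0.05))  # → 4
# ...while a saturated one stays at full frequency.
print(select_divisor(cpu_load=0.8, net_load=0.4))    # → 1
```

Gating cycles of the global clock to realize the selected divider is what keeps the scheme's area and power overhead low.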
APA, Harvard, Vancouver, ISO, and other styles
10

Buckley, Richard James. "A digital signal processing-based predistortion technique for reduction of intermodulation distortion /." Online version of print, 1993. http://hdl.handle.net/1850/11455.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Louvin, Henri. "Development of an adaptive variance reduction technique for Monte Carlo particle transport." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS351/document.

Full text
Abstract:
The Adaptive Multilevel Splitting algorithm (AMS) has recently been introduced to the field of applied mathematics as a variance reduction scheme for Monte Carlo Markov chain simulation. This Ph.D. work implements this adaptive variance reduction method in the particle transport Monte Carlo code TRIPOLI-4, dedicated among other things to radiation shielding and nuclear instrumentation studies. Those studies are characterized by strong radiation attenuation in matter, so they fall within the scope of rare-event analysis. In addition to its unprecedented implementation in the field of particle transport, two new features were developed for the AMS. The first is an on-the-fly scoring procedure, designed to optimize the estimation of multiple scores in a single AMS simulation. The second is an extension of the AMS to branching processes, which are common in radiation shielding simulations; for example, in coupled neutron-photon simulations, the neutrons have to be transported alongside the photons they produce. The efficiency and robustness of the AMS in this new framework have been demonstrated in physically challenging configurations (particle flux attenuations larger than 10 orders of magnitude), which highlights the promising advantages of the AMS algorithm over existing variance reduction techniques.
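The AMS idea can be illustrated on a toy problem. The sketch below is a "last-particle" splitting estimator for the probability that a Gaussian random walk ever exceeds a threshold; the random-walk model, particle count and thresholds are illustrative assumptions, not TRIPOLI-4's transport physics.

```python
import numpy as np

def ams_probability(n_particles, threshold, n_steps, seed=0):
    """'Last-particle' Adaptive Multilevel Splitting estimate of
    P(max of an n_steps-step standard Gaussian random walk > threshold)."""
    rng = np.random.default_rng(seed)
    # Each particle is a full trajectory of the random walk.
    paths = np.cumsum(rng.normal(size=(n_particles, n_steps)), axis=1)
    scores = paths.max(axis=1)
    estimate = 1.0
    while scores.min() < threshold:
        worst = int(scores.argmin())
        level = scores[worst]
        # Clone a surviving particle and resample it after its first
        # crossing of the current level (the Markov property makes the
        # regenerated tail a valid continuation).
        donor = worst
        while donor == worst:
            donor = int(rng.integers(n_particles))
        t = int(np.argmax(paths[donor] > level))  # first index above level
        new_path = paths[donor].copy()
        if t + 1 < n_steps:
            new_path[t + 1:] = new_path[t] + np.cumsum(
                rng.normal(size=n_steps - t - 1))
        paths[worst] = new_path
        scores[worst] = new_path.max()
        # Each splitting step multiplies the estimate by (N - 1) / N.
        estimate *= (n_particles - 1) / n_particles
    return estimate

p = ams_probability(n_particles=64, threshold=20.0, n_steps=50)
print(f"estimated rare-event probability: {p:.2e}")
```

Because the levels adapt to the particles themselves, no importance map has to be chosen in advance, which is the practical appeal of the method over fixed-level splitting.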
APA, Harvard, Vancouver, ISO, and other styles
12

Phillips, Rhonda D. "Improving the Performance of a Hybrid Classification Method Using a Parallel Algorithm and a Novel Data Reduction Technique." Thesis, Virginia Tech, 2007. http://hdl.handle.net/10919/42680.

Full text
Abstract:
This thesis presents both a shared-memory parallel version of the hybrid classification algorithm IGSCR (iterative guided spectral class rejection) and a novel data reduction technique that can be used in conjunction with pIGSCR (parallel IGSCR). The parallel algorithm is motivated by a demonstrated need for more computing power, driven by the increasing size of remote sensing datasets due to higher resolution sensors, larger study regions, and the like. Even with a fast algorithm such as pIGSCR, reducing the dimension of a dataset is desirable in order to decrease the processing time further and possibly improve overall classification accuracy. pIGSCR was developed to produce fast and portable code using Fortran 95, OpenMP, and the Hierarchical Data Format version 5 (HDF5) and its accompanying data access library. The applicability of the faster pIGSCR algorithm is demonstrated by classifying Landsat data covering most of Virginia, USA into forest and non-forest classes with approximately 90 percent accuracy. Parallel results are given using the SGI Altix 3300 shared memory computer and the SGI Altix 3700 with as many as 64 processors, reaching speedups of almost 77. This fast algorithm allows an analyst to perform and assess multiple classifications to refine parameters. As an example, pIGSCR was used for a factorial analysis consisting of 42 classifications of a 1.2 gigabyte image to select the number of initial classes (70) and the class purity (70%) used for the remaining two images. A feature selection or reduction method may be appropriate for a specific classification method depending on the properties and training required for that method, or an alternative band selection method may be derived from the classification method itself. This thesis introduces a feature reduction method based on the singular value decomposition (SVD). 
This feature reduction technique was applied to training data from two multitemporal datasets of Landsat TM/ETM+ imagery acquired over forested areas in Virginia, USA and Rondonia, Brazil. Subsequent parallel iterative guided spectral class rejection (pIGSCR) forest/non-forest classifications were performed to determine the quality of the feature reduction. The classifications of the Virginia data were five times faster using SVD-based feature reduction without affecting classification accuracy. Feature reduction using the SVD was also compared to feature reduction using principal components analysis (PCA). The highest average accuracies for the Virginia dataset (88.34%) and for the Amazon dataset (93.31%) were achieved using the SVD. The results presented here indicate that SVD-based feature reduction can produce statistically significantly better classifications than PCA.
Master of Science
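An SVD-based feature reduction of the general kind described can be sketched as projecting training samples onto the leading right singular vectors of the training matrix. The matrix sizes are invented for illustration, and the omission of mean-centering (which is one way SVD reduction differs from PCA) is an assumption about the approach rather than the thesis's exact procedure.

```python
import numpy as np

def svd_reduce(train, n_features):
    """Project samples (rows) onto the top right singular vectors of the
    training matrix; columns are spectral bands.  Unlike PCA, the data
    is not mean-centred first."""
    _, _, vt = np.linalg.svd(train, full_matrices=False)
    basis = vt[:n_features].T            # (n_bands, n_features)
    return train @ basis, basis

rng = np.random.default_rng(1)
pixels = rng.normal(size=(500, 12))      # e.g. 500 training pixels, 12 bands
reduced, basis = svd_reduce(pixels, n_features=3)
print(reduced.shape)                     # → (500, 3)
```

The same `basis` would then be applied to the full image before classification, so the reduction cost is paid once on the (small) training set.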
APA, Harvard, Vancouver, ISO, and other styles
13

Yang, Yuchen. "Transformer Shielding Technique for Common Mode Noise Reduction in Switch Mode Power Supplies." Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/49263.

Full text
Abstract:
Switch mode power supplies are widely used in different applications. High efficiency and high power density are two driving forces for power supply systems. However, the high dv/dt and di/dt in switch mode power supplies cause severe EMI noise issues. In a typical front-end converter, the EMI filter usually occupies 1/3 to 1/4 of the total converter volume. Hence, reducing the EMI noise of a power converter can help shrink the EMI filter and improve the total power density of the converter. For off-line switch mode power supplies, DM noise is dominated by the PFC converter. CM noise is a more complicated issue: it is contributed by both the PFC converter and the DC/DC converter. While much research has focused on reducing the CM noise of PFC converters, the CM noise of DC/DC converters remains a challenge. The main objective of this thesis is to provide a solution for the best CM noise reduction in DC/DC converters. The shielding concept and the balance concept are combined to propose a novel balance double-shielding technique. This method achieves effective CM noise reduction at the circuit level. In addition, it is easy to design and implement in real production: the balance condition is easily controlled, guaranteeing effective CM noise reduction in mass production. Then, a novel one-layer shielding method for PCB winding transformers is provided. This shielding technique can block CM noise from the primary side and also cancel the CM noise from the secondary side, without adding much loss to the converter. Furthermore, this shielding technique can be applied to the matrix transformer structure. For a matrix transformer LLC converter, the inter-winding capacitance is very large and causes a severe CM noise problem. By adding a shielding layer, the CM noise is greatly reduced. In addition, by modifying the secondary winding, the loss on the shielding layer is minimized, and experiments show that the total efficiency of the converter is almost unaffected. 
Finally, although this thesis uses flyback and LLC resonant converters as examples to demonstrate the concept, the novel shielding technique can also be applied to other topologies with a similar transformer structure.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
14

Kitchen, Christine A. "Analysis and reduction of grid errors in the finite difference Poisson-Boltzmann technique." Thesis, University of Sheffield, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.247197.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Thorne, Simon. "An alternative modelling technique for the reduction of error in decision support spreadsheets." Thesis, Cardiff Metropolitan University, 2008. http://hdl.handle.net/10369/6492.

Full text
Abstract:
Spreadsheet applications are currently the most prevalent end user tool in organisations across the world. Surveys on spreadsheet use show spreadsheets are used as decision making tools in a range of organisations from credit liability assessment in the business world to patient cardiovascular-anaesthesia risk in the medical community. However, there is strong evidence to suggest a significant proportion of spreadsheets contain errors that affect the validity of their operation and results. In addition most end users receive no relevant information systems training and consequently have no concept of creating reliable software. This can result in poorly designed untested spreadsheets that are potentially full of errors. This thesis presents an alternative novel modelling technique to decision support spreadsheets. The novel technique uses attribute classifications (user defined examples) to create a model of a problem. This technique is coined "Example Driven Modelling" (EDM). Through experimentation, the relative benefits and useful limits of EDM are explored and established. The practical application of EDM to real world spreadsheets demonstrates how EDM outperforms equivalent spreadsheet models in a medical decision making spreadsheet used to determine the anaesthesia risk of a patient undergoing cardiovascular surgery.
APA, Harvard, Vancouver, ISO, and other styles
16

Lollini, Emanuele. "Analysis of multi-station technique for noise reduction in Deep Space Doppler tracking." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amslaurea.unibo.it/25914/.

Full text
Abstract:
Precision measurements of spacecraft range-rate enabled by two-way microwave links are used in navigation and in radio science experiments such as planetary geodesy. The final accuracy of the observables depends almost linearly on the Doppler noise in the link. Among all the types of noise that enter a Doppler measurement, the most important are thermal noise, spacecraft antenna buffeting and ground antenna mechanical noise. Several effects at different time scales are responsible for the antenna mechanical noise, such as wind loading, bulk motion due to irregularities in the supporting azimuth ring, unmodeled subreflector motion and long-term differential thermal expansion. Therefore, it is not always simple to prevent and relieve this source of noise. This thesis aims to improve the Doppler measurements by exploiting a noise-cancellation technique proposed by John W. Armstrong et al. and elaborated by Virginia Notaro et al. of the mechanical and aerospace engineering department at Sapienza University. The Time-Delay Mechanical Noise Cancellation (TDMC) technique consists of combining the Doppler measurements from a two-way antenna with those from an additional antenna, which should be stiffer, smaller and placed at a site with good tropospheric conditions. The antenna considered for the two-way link is NASA's DSS 25 in Goldstone, CA; the 12-m Atacama Pathfinder Experiment (APEX) antenna in Chajnantor, Chile, was taken as the three-way antenna. The simulation is performed for a 1000 s integration time.
APA, Harvard, Vancouver, ISO, and other styles
17

Signorello, Concetta. "Reduction of Switching Losses in IGBT Power Modules." Doctoral thesis, Università di Catania, 2015. http://hdl.handle.net/10761/4056.

Full text
Abstract:
The purpose of this work is to study in depth the transition phenomena of IGBTs in order to evaluate different optimization strategies for loss reduction and to propose a novel technique. In power applications, particular attention must be paid to phenomena such as commutation losses, overcurrents during turn-ON and overvoltages at turn-OFF of the devices. These phenomena are connected to the non-ideal behavior of real devices and to stray circuit parameters. Steep current profiles lead to large electromagnetic interference (EMI) and overvoltages, while rapid voltage variations can produce latch-up in a single IGBT or unwanted commutations. On the other hand, slow commutations are characterized by low values of dv/dt and di/dt, causing excessive commutation losses in power applications. It is therefore essential to balance these opposing requirements at the design stage to obtain an optimal trade-off.
APA, Harvard, Vancouver, ISO, and other styles
18

Roy, Soumyaroop. "A compiler-based leakage reduction technique by power-gating functional units in embedded microprocessors." [Tampa, Fla] : University of South Florida, 2006. http://purl.fcla.edu/usf/dc/et/SFE0001832.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Zhao, Yuxin. "Efficient erasure marking technique for delay reduction in DSL systems impaired by impulse noise." Thesis, McGill University, 2012. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=110650.

Full text
Abstract:
Digital Subscriber Line (DSL) technologies have experienced rapid development. Protecting DSL systems against Impulse Noise (IN) is an important issue that has recently received considerable attention. A combination of Reed–Solomon (RS) codes and interleaving is used to mitigate the destructive effects of IN. However, it is shown that the interleaving structure introduces a long delay, which is certainly undesirable in high-rate transmission systems supporting interactive applications such as Internet Protocol Television (IPTV). Different techniques have therefore been proposed to reduce the interleaving delay while still effectively protecting the systems from IN. In particular, Error and Erasure Decoding (EED) can be used instead of Error Decoding (ED) to improve the decoder's correction capability, which in turn helps reduce the required interleaving depth and delay. To fully exploit the error correction capacity of the EED, reliable erasure marking becomes essential. This thesis proposes an erasure marking technique that fully exploits the correction capacity of the EED and, correspondingly, facilitates a shorter interleaving. We first study the sources that generate impulse noise and the statistics of impulse noise in DSL systems. Analytical models for the distribution of the amplitude and inter-arrival time of impulse noise are also provided. Based on the statistics of impulse noise, a squared-distance-based erasure marking technique is then proposed. Furthermore, an analysis of selecting proper parameters for the proposed technique is developed. Finally, the Peak Signal-to-Noise Ratio (PSNR) performance of IPTV over DSL in the presence of IN is investigated with the proposed erasure marking technique employed.
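A squared-distance erasure marking rule of the kind described can be sketched as follows. The QPSK constellation, noise levels and threshold are illustrative assumptions (the thesis derives the proper parameters analytically); symbols far from every constellation point are flagged as likely impulse-noise hits and handed to the errors-and-erasures decoder as erasures.

```python
import numpy as np

def mark_erasures(received, constellation, threshold):
    """Flag received symbols whose squared Euclidean distance to the
    nearest constellation point exceeds `threshold`."""
    d2 = np.abs(received[:, None] - constellation[None, :]) ** 2
    return d2.min(axis=1) > threshold

qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
rng = np.random.default_rng(2)
sent = rng.choice(qpsk, size=1000)
# Background AWGN plus rare, strong impulse-noise events.
noisy = sent + 0.05 * (rng.normal(size=1000) + 1j * rng.normal(size=1000))
hits = rng.random(1000) < 0.01
noisy[hits] += 3 * (rng.normal(size=hits.sum())
                    + 1j * rng.normal(size=hits.sum()))
erased = mark_erasures(noisy, qpsk, threshold=0.5)
print(f"marked {erased.sum()} erasures for {hits.sum()} impulse hits")
```

Since an RS code can correct twice as many erasures as errors, reliable marks of this kind directly shorten the interleaving depth needed for a given protection level.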
APA, Harvard, Vancouver, ISO, and other styles
20

Ellis, Geoffrey. "Random sampling as a clutter reduction technique to facilitate interactive visualisation of large datasets." Thesis, Lancaster University, 2008. http://eprints.lancs.ac.uk/42392/.

Full text
Abstract:
Within our physical world lies a digital world populated with an ever increasing number of sizeable data collections. Exploring these large datasets for patterns or trends is a difficult and complex task, especially when users do not always know what they are looking for. Information visualisation can facilitate this task through an interactive visual representation, thus making the data easier to interpret. However, we can soon reach a limit on the amount of data that can be plotted before the visual display becomes overcrowded or cluttered, hence potentially important information becomes hidden. The main theme of this work is to investigate the use of dynamic random sampling for reducing display clutter. Although randomness has been successfully applied in many areas of computer science and sampling has been used in data processing, the use of random sampling as a dynamic clutter reduction technique is novel. In addition, random sampling is particularly suitable for exploratory tasks as it offers a way of reducing the amount of data without the user having to decide what data is important. Sampling-based scatterplot and parallel coordinate visualisations are developed to experiment with various options and tools. These include simple, dynamic sampling controls with density feedback; a method of checking the reality of the representative sample; the option of global and/or localised clutter reduction using a variety of novel lenses and an auto-sampling option of automatically maintaining a reasonable view of the data within the lens. Furthermore, this work showed that sampling can be added to existing tools and used effectively in conjunction with other clutter reduction techniques. Sampling is evaluated both analytically, using a taxonomy of clutter reduction developed for the purpose, and experimentally using large datasets. 
The analytic route was prompted by an exploratory analysis, which showed that evaluations of information visualisation based on user studies are problematic. This thesis has contributed to several areas of research: the feasibility and flexibility of global or lens-based sampling as a clutter reduction technique are demonstrated through sampling-based scatterplot and parallel coordinate visualisations; the novel method of calculating the density for overlapping lines in parallel coordinate plots is both accurate and efficient, and enables constant density within a sampling lens to be maintained without user intervention; and the novel criteria-based taxonomy of clutter reduction for information visualisation provides designers with a method to critique existing visualisations and think about new ones.
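The global and lens-based sampling described above can be sketched in a few lines — a minimal illustration in which the lens shape, sampling rates and function names are invented, not the thesis's implementation:

```python
import random

def sample_points(points, rate, lens=None, lens_rate=None):
    """Keep each point with probability `rate`; inside an optional
    circular lens (cx, cy, r), use `lens_rate` instead, so clutter can
    be reduced globally and/or locally without the user having to
    decide which individual points matter."""
    kept = []
    for (x, y) in points:
        p = rate
        if lens is not None:
            cx, cy, r = lens
            if (x - cx) ** 2 + (y - cy) ** 2 <= r * r:
                p = lens_rate
        if random.random() < p:
            kept.append((x, y))
    return kept

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(10000)]
# Draw ~20% of points overall, but only ~5% inside the lens.
view = sample_points(pts, rate=0.2, lens=(0.5, 0.5, 0.2), lens_rate=0.05)
print(len(view), "of", len(pts), "points drawn")
```

Re-sampling on every redraw (rather than fixing one subset) is what makes the technique dynamic: over successive frames the user effectively sees different representatives of the full dataset.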
APA, Harvard, Vancouver, ISO, and other styles
21

Kim, Hongman. "Statistical Modeling of Simulation Errors and Their Reduction via Response Surface Techniques." Diss., Virginia Tech, 2001. http://hdl.handle.net/10919/28390.

Full text
Abstract:
Errors of computational simulations in the design of a high-speed civil transport (HSCT) are investigated. First, discretization error from a supersonic panel code, WINGDES, is considered. Second, convergence error from a structural optimization procedure using GENESIS is considered along with the Rosenbrock test problem. A grid convergence study is performed to estimate the order of the discretization error in the lift coefficient (CL) of the HSCT calculated from WINGDES. A response surface (RS) model using several mesh sizes is applied to reduce the noise magnification problem associated with Richardson extrapolation. The RS model is shown to be more efficient than Richardson extrapolation via careful use of design of experiments. A programming error caused inaccurate optimization results for the Rosenbrock test function, while inadequate convergence criteria of the structural optimization produced error in the wing structural weight of the HSCT. The Weibull distribution is successfully fit to the optimization errors of both problems. The probabilistic model enables us to estimate average errors, without performing very accurate optimization runs that can be expensive, by using differences between two sets of results with different optimization control parameters such as initial design points or convergence criteria. Optimization results with large errors, outliers, produced inaccurate RS approximations. A robust regression technique, M-estimation implemented by iteratively reweighted least squares (IRLS), is used to identify the outliers, which are then repaired by higher fidelity optimizations. The IRLS procedure is applied to the results of the Rosenbrock test problem, and to the wing structural weight from the structural optimization of the HSCT. A nonsymmetric IRLS (NIRLS), utilizing the one-sidedness of optimization errors, is more effective than IRLS in identifying outliers. Detection and repair of the outliers improve the accuracy of the RS approximations.
Finally, configuration optimizations of the HSCT are performed using the improved wing bending material weight RS models.
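The Richardson extrapolation step mentioned in the abstract works from solutions on systematically refined meshes — a minimal sketch on synthetic data (the values are illustrative, not the WINGDES results):

```python
import math

def richardson_order(f_h, f_2h, f_4h, ratio=2.0):
    """Estimate the observed order of convergence p from solutions on
    three refined meshes (spacings h, 2h, 4h):
        p = log(|f_4h - f_2h| / |f_2h - f_h|) / log(ratio)."""
    return math.log(abs(f_4h - f_2h) / abs(f_2h - f_h)) / math.log(ratio)

def richardson_extrapolate(f_h, f_2h, p, ratio=2.0):
    """Extrapolate to the zero-mesh-size limit given order p."""
    return f_h + (f_h - f_2h) / (ratio ** p - 1.0)

# Synthetic solver output with a pure second-order error: f(h) = 1 + 0.5 h^2.
f = lambda h: 1.0 + 0.5 * h * h
p = richardson_order(f(0.1), f(0.2), f(0.4))
print(round(p, 3), round(richardson_extrapolate(f(0.1), f(0.2), p), 6))  # -> 2.0 1.0
```

The dissertation's point is that the differences in the numerator and denominator magnify simulation noise; fitting a response surface across many mesh sizes averages that noise out instead.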
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
22

Kalb, Arthur J. (Arthur Joseph). "An open-loop method for reduction of torque ripple and an associated thermal-management technique." Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/42595.

Full text
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1997.
Includes bibliographical references (leaves 202-204).
by Arthur Joseph Kalb.
M.S.
APA, Harvard, Vancouver, ISO, and other styles
23

Eubanks, Sean Gilrea. "Development and application of a rapid screening technique for the isolation of selernium reduction-deficient mutants of Shewanella putrefaciens." Thesis, Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/25636.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Ravi, Ajaay. "Run-Time Active Leakage Control Mechanism based on a Light Threshold Voltage Hopping Technique (LITHE)." University of Cincinnati / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1302550444.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Merwe, Andria van der. "A novel signal processing technique for clutter reduction in GPR measurements of small, shallow land mines /." The Ohio State University, 2000. http://rave.ohiolink.edu/etdc/view?acc_num=osu1488193272068967.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Setlur, Nagesh Swetadri Vasan. "Improved imaging for x-ray guided interventions: A high resolution detector system and patient dose reduction technique." Thesis, State University of New York at Buffalo, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=3613101.

Full text
Abstract:

Over the past couple of decades there have been tremendous advancements in the fields of medicine and engineering technology. Increases in the level of integration between these two branches of science have led to a better understanding of the physiology and anatomy of living organisms, thus allowing for a better understanding of diseases along with their cures and treatments. The work presented in this dissertation aims at improving the imaging aspects of x-ray image guided interventions, with endovascular image guided intervention as the primary area of application.

Minimally invasive treatments for neurovascular conditions such as aneurysms, stenosis, etc., involve guidance of catheters to the treatment area and deployment of treatment devices such as stents, coils, balloons, etc., all under x-ray image guidance. The features in these devices are on the order of a few tens to a few hundreds of micrometers and hence demand higher resolution imaging than the current state-of-the-art flat panel detector. To address this issue three high resolution x-ray cameras were developed: the Micro Angiography Fluoroscope (MAF) based on a Charge Coupled Device (MAF-CCD), the MAF based on Complementary Metal Oxide Semiconductors (MAF-CMOS) and the Solid State X-ray Image Intensifier based on Electron Multiplying CCDs. The construction details along with performance evaluations are presented. The MAF-CCD was successfully used in interventions on human patients to treat neurovascular conditions, primarily aneurysms. Images acquired by the MAF-CCD during these procedures are presented.

A software platform, CAPIDS, was previously developed to facilitate the use of the high resolution MAF-CCD in a clinical environment. In this work the platform was modified to be used with any camera. The upgrades to CAPIDS, along with parallel programming on both the Graphics Processing Unit (GPU) and the Central Processing Unit (CPU), are presented.

With the increasing use of x-ray guidance for minimally invasive interventions, a major cause of concern is prolonged exposure to x-ray radiation, which can cause biological damage to the patient. Hence, during x-ray guided procedures, necessary steps must be taken to minimize the dose to the patient. In this work a novel dose reduction technique is presented, using a combination of Region of Interest (ROI) fluoroscopy to reduce dose and spatially different temporal filtering to restore image quality.

Finally, a novel ROI imaging technique for biplane imaging in interventional suites is presented, combining the high resolution detector with the dose reduction technique using ROI fluoroscopy and spatially different temporal filtering.
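As a rough sketch of the spatially different temporal filtering idea summarized above — the recursive filter form, weights and ROI handling are illustrative assumptions, not the dissertation's implementation:

```python
def temporal_filter(frames, roi, alpha_in=0.8, alpha_out=0.2):
    """First-order recursive (IIR) temporal filter:
        out = alpha * current + (1 - alpha) * previous_out
    Outside the ROI the dose is reduced, so a small alpha averages more
    frames to recover the lost signal-to-noise ratio; inside the ROI
    (rows r0..r1, cols c0..c1) a large alpha preserves temporal
    resolution where the intervention is actually happening."""
    r0, r1, c0, c1 = roi
    out = [row[:] for row in frames[0]]
    for frame in frames[1:]:
        for i, row in enumerate(frame):
            for j, v in enumerate(row):
                a = alpha_in if r0 <= i < r1 and c0 <= j < c1 else alpha_out
                out[i][j] = a * v + (1 - a) * out[i][j]
    return out

# Two 2x2 frames; the ROI covers only the top-left pixel.
frames = [[[0.0, 0.0], [0.0, 0.0]], [[1.0, 1.0], [1.0, 1.0]]]
print(temporal_filter(frames, roi=(0, 1, 0, 1)))  # -> [[0.8, 0.2], [0.2, 0.2]]
```

The ROI pixel tracks the new frame quickly while the low-dose periphery responds slowly, trading temporal resolution for noise suppression exactly where the dose was cut.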

APA, Harvard, Vancouver, ISO, and other styles
27

Azevedo, Francisco Bernardo. "Cost Reduction Technique for Mutation Testing." Master's thesis, 2020. https://hdl.handle.net/10216/129884.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Azevedo, Francisco Bernardo. "Cost Reduction Technique for Mutation Testing." Dissertation, 2020. https://hdl.handle.net/10216/129884.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Chen, Wen-Chi, and 陳文祺. "Noise Reduction Technique for Zero Crossing Detection." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/00579949748941245643.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Electrical Engineering
103
One of the many issues in developing modern power electronics applications is keeping spikes and EMI to a minimum, especially when switching AC mains in and out. Switching the mains in and out at the zero crossing point requires a precise way of predicting when the next crossing will occur. This raises the need for a cost-efficient way to detect zero crossing points. Accurate detection of true zero crossings is important in many fields of industrial electronics, such as motor drives, power factor correctors and grid-integrated inverters, where satisfactory line synchronization requires reliable zero crossing information. Zero crossing detection can also be used for other purposes, such as frequency calculation and relative phase measurement. Most of today's power electronics applications are controlled by microcontrollers, which makes it possible to suppress noise in zero crossing detection (ZCD) in a simple and cost-efficient way. In this thesis, a simple digital signal processing method is proposed for efficient noise reduction in zero crossing detectors. The proposed method is very robust against strong impulsive noise. A systematic design procedure is described first; a test platform and prototyping circuit are then developed to validate the correctness of the proposed algorithm. The proposed algorithm is realized on a TMS320F28069 digital signal controller (DSC) from Texas Instruments. According to the experimental results, for a 50 Hz AC input waveform the steady-state maximum error is 0.023% and the smallest error is 0.002%; compared with conventional ZCD methods (3.272% and 0.345%), the improvements are about 140 and 170 times. For a 60 Hz AC input waveform the steady-state maximum error is 0.023% and the smallest error is 0.004%; compared with conventional ZCD methods (2.029% and 0.685%), the improvements are about 80 and 170 times, respectively.
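One simple impulse-robust approach in this spirit is to median-filter the sampled waveform (medians reject isolated outliers) before looking for sign changes — a sketch, not the thesis's exact DSC algorithm; the window size and sampling setup are illustrative:

```python
import math
import statistics

def zero_crossings(samples, window=5):
    """Median-filter the waveform, then report the indices where the
    filtered signal changes sign.  A single impulsive spike cannot
    dominate a 5-sample median, so it cannot create a false crossing."""
    half = window // 2
    filt = [statistics.median(samples[max(0, i - half): i + half + 1])
            for i in range(len(samples))]
    return [i for i in range(1, len(filt))
            if filt[i - 1] < 0.0 <= filt[i] or filt[i - 1] >= 0.0 > filt[i]]

fs, f = 5000, 50                       # 5 kHz sampling of a 50 Hz mains wave
wave = [math.sin(2 * math.pi * f * n / fs) for n in range(fs // f)]
wave[30] = -5.0                        # strong impulse injected mid-half-cycle
print(zero_crossings(wave))            # -> [51] (true crossing; impulse rejected)
```

A naive sign-change detector on the raw samples would report two spurious crossings around the injected impulse at sample 30; the filtered version reports only the genuine crossing near sample 50.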
APA, Harvard, Vancouver, ISO, and other styles
30

Lee, Guan-Yi, and 李冠毅. "A Series-Compensator Based Fault Current Reduction Technique." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/95175321685287515833.

Full text
Abstract:
Master's thesis
National Tsing Hua University
Department of Electrical Engineering
99
When voltage sags occur, unexpected shutdowns of sensitive equipment often result in substantial economic losses, so series compensators are usually used to restore the voltage during a sag, improving power quality and allowing equipment to ride through the sag. However, a voltage restorer cannot protect sensitive loads when a short-circuit fault occurs at the load side. A fault current mitigation strategy based on the structure of a series compensator has been proposed: when the compensator is activated, it acts like a virtual inductor in the system. Not only can the fault current be mitigated effectively, but the voltage sag problem also does not occur on other feeders. In this thesis, a fault current mitigation technique is proposed based on the concept of the virtual inductor, in which both positive- and negative-sequence inductors are used to mitigate the fault current. When a grounding fault is detected, the positive- and negative-sequence fault currents can be mitigated by virtual inductors inserted into the system by the series compensator. The values of the positive- and negative-sequence inductors are selected based on the rating of the converter. Additionally, the optimal virtual inductor values are designed for single-phase and two-phase faults, respectively, and these values are used to effectively limit the fault current with a finite converter rating. In this thesis, the principles of operation of the proposed fault current mitigation technique are first presented in detail, and the influence of the virtual inductor on the fault current is derived. Finally, the proposed fault current mitigation algorithm has been tested in simulation and validated by experiments in the laboratory.
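A back-of-the-envelope phasor sketch of why an inserted virtual inductance limits fault current — the source voltage, feeder impedance, inductance value and function names here are hypothetical, chosen only to show the magnitudes involved:

```python
import math

def fault_current(v_source, z_line, l_virtual=0.0, freq=60.0):
    """Phasor fault current for a bolted fault at the load side: the
    series compensator behaves as a virtual reactance j*w*L in series
    with the line impedance, so the fault current drops to
    V / |Z_line + j*w*L|."""
    z_total = z_line + 1j * 2 * math.pi * freq * l_virtual
    return v_source / z_total

v = 220.0                      # 220 V source phasor (reference angle 0)
z = 0.05 + 0.10j               # hypothetical feeder impedance (ohms)
i_fault = abs(fault_current(v, z))
i_limited = abs(fault_current(v, z, l_virtual=5e-3))  # 5 mH virtual inductor
print(round(i_fault, 1), "A ->", round(i_limited, 1), "A")
```

Even a few millihenries of virtual inductance dominates a short feeder's own impedance, which is why the thesis can trade the inductor value against the converter's finite voltage rating.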
APA, Harvard, Vancouver, ISO, and other styles
31

"An Adaptive Time Reduction Technique for Video Lectures." Master's thesis, 2016. http://hdl.handle.net/2286/R.I.38760.

Full text
Abstract:
Lecture videos are a widely used resource for learning. A simple way to create videos is to record live lectures, but these videos end up being lengthy and include long pauses and repetitive words, making the viewing experience time consuming. While pauses are useful in live learning environments where students take notes, I question the value of pauses in video lectures. Techniques and algorithms that can shorten such videos can have a huge impact in saving students' time and reducing storage space. I study this problem of shortening videos by removing long pauses and adaptively modifying the playback rate, emphasizing the most important sections of the video, and I examine its effect on the student community. The playback rate is designed in such a way as to play uneventful sections faster and significant sections slower. Important and unimportant sections of a video are identified using textual analysis. I use an existing speech-to-text algorithm to extract the transcript and apply latent semantic analysis and standard information retrieval techniques to identify the relevant segments of the video. I compute relevance scores of different segments and propose a variable playback rate for each of these segments. The aim is to reduce the amount of time students spend on passive learning while watching videos without harming their ability to follow the lecture. I validate the approach by conducting a user study among computer science students and measuring their engagement. The results indicate no significant difference in their engagement when this method is compared to the original unedited video.
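The relevance-to-rate mapping can be sketched with a crude term-frequency score standing in for the latent semantic analysis the thesis uses — every rate, threshold and segment below is invented for illustration:

```python
from collections import Counter

def playback_rates(segments, slow=0.9, fast=1.8):
    """Score each transcript segment by its overlap with the lecture's
    most frequent terms (a crude stand-in for latent semantic
    analysis), then play low-relevance segments fast and
    high-relevance segments slow."""
    words = [w for seg in segments for w in seg.lower().split()]
    top = {w for w, _ in Counter(words).most_common(5)}
    rates = []
    for seg in segments:
        toks = seg.lower().split()
        score = sum(t in top for t in toks) / max(len(toks), 1)
        rates.append(slow if score >= 0.4 else fast)  # 0.4: illustrative cut-off
    return rates

segs = ["gradient descent updates the weights using the gradient",
        "okay um let me adjust the projector for a moment",
        "the descent step size controls how the weights converge"]
print(playback_rates(segs))  # -> [0.9, 1.8, 0.9]
```

The off-topic aside about the projector gets the fast rate, while the two content-bearing segments are slowed down slightly, mirroring the thesis's uneventful-fast / significant-slow design.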
Dissertation/Thesis
Masters Thesis Computer Science 2016
APA, Harvard, Vancouver, ISO, and other styles
32

Chih, Kechiang, and 遲克強. "Enhancing Distributed Resource Monitor Via Monotonic State Reduction Technique." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/73783243346380461936.

Full text
Abstract:
Master's thesis
Tunghai University
Department of Computer Science and Information Engineering
100
With the continuous evolution of network applications, analyzing whether distributed components satisfy a certain global predicate has become an important issue. A global predicate is a logical statement defined on the states of the processes in a distributed system. Detecting global predicates has been difficult due to the combinatorial nature of process states. This thesis discusses state compression approaches, which combine a sequence of execution states into a single representative state such that the detection results remain correct. This thesis develops an efficient state consolidation algorithm for the global predicates of distributed resource management applications.
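A minimal sketch of the state-collapsing idea: consecutive local states that agree on every predicate-relevant variable can be represented by one state without changing the detection outcome. The variable names and state layout are invented for illustration:

```python
def reduce_states(states, relevant):
    """Collapse runs of consecutive states that agree on every
    predicate-relevant variable into a single representative state, so
    a global-predicate detector has far fewer state combinations to
    examine across processes."""
    reduced = []
    last = None
    for s in states:
        key = tuple(s[v] for v in relevant)
        if key != last:          # only keep states that change the key
            reduced.append(s)
            last = key
    return reduced

# Local history of one process; the global predicate only mentions `mem`.
history = [{"mem": 10, "tick": 1}, {"mem": 10, "tick": 2},
           {"mem": 10, "tick": 3}, {"mem": 42, "tick": 4}]
print(reduce_states(history, relevant=("mem",)))
```

Since predicate detection cost grows with the product of per-process state counts, shrinking each process's history from four states to two cuts the global search space combinatorially.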
APA, Harvard, Vancouver, ISO, and other styles
33

Wu, Jiun-Kuan, and 吳俊寬. "A Flip-Flop Replacement Technique for Peak Current Reduction." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/07957997948075907290.

Full text
Abstract:
Master's thesis
National Changhua University of Education
Department of Electronic Engineering
95
As process technology progresses to ultra-deep sub-micron, the number of transistors per unit area rises sharply, and the number of transistors switching simultaneously in a circuit also increases. Many issues challenge ultra-deep sub-micron circuits, such as the enormous peak current, which causes problems including electromigration, self-heating in the chip, IR drop and ground bounce. These seriously affect the lifespan and reliability of the chip, as well as the execution speed and signal accuracy of the circuit. A huge peak current is observed at each clock rising edge due to simultaneous switching of the combinational circuit in a high-speed synchronous digital system. We design three groups of new-type delay flip-flops with different transmission times. We replace selected normal flip-flops that are not on a critical path of the circuit with the new-type delay flip-flops, so that the switching times of the flip-flops are spread out. This reduces the peak current and the IR drop effect in the circuit. In this thesis, we propose a peak current optimization technique and an algorithm that finds suitable flip-flops in the circuit and replaces them with the new-type delay flip-flops. After the algorithm is applied, the peak current of the circuit is effectively reduced. Keywords: ultra-deep sub-micron, peak current, electromigration, self-heating, IR drop, ground bounce, critical path.
APA, Harvard, Vancouver, ISO, and other styles
34

Chen, Pin-Yi, and 陳品頤. "Combination of Variance Reduction Technique on Stochastic Edge Networks." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/d3rqz8.

Full text
Abstract:
Master's thesis
National Central University
Institute of Industrial Management
106
The Monte Carlo method has apparent drawbacks, including comparatively low sampling efficiency and the need for a large number of samples; variance reduction techniques are used to address these drawbacks. Since the method of sampling affects precision, sampling methods are also part of variance reduction. Variance reduction techniques include antithetic random variables (ARV), the control variates technique (CV), stratification, importance sampling (IS), Latin hypercube sampling (LHS) and common random numbers (CRN). The cross-entropy (CE) strategy provides a general, simple and efficient method for solving problems such as the quadratic assignment problem and rare-event simulation. Before sampling, the CE strategy is used to choose an importance probability density function (IPDF); LHS is then used to sample from the chosen IPDF, after which the ARV technique further reduces the variance of the test function. Simulation results on stochastic edge network examples show that this combined sampling method with CE strategy (CSMCES) can enhance efficiency at a given level of precision and effectively reduce the sample size.
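Two of the ingredients named above — LHS and antithetic variables — compose naturally; here is a minimal sketch estimating E[g(U)] for U ~ Uniform(0,1). The CE-chosen IPDF step is omitted and the integrand is illustrative, not one of the thesis's network examples:

```python
import math
import random

def lhs_antithetic_mean(g, n, seed=0):
    """Latin hypercube sample of size n on (0,1): exactly one uniform
    draw per stratum [k/n, (k+1)/n), visited in shuffled order (the
    shuffle only matters in higher dimensions; kept for form).  Each
    draw u is paired with its antithetic counterpart 1-u, and g is
    averaged over both to cancel monotone variation."""
    rng = random.Random(seed)
    strata = list(range(n))
    rng.shuffle(strata)
    total = 0.0
    for k in strata:
        u = (k + rng.random()) / n           # one point per stratum
        total += 0.5 * (g(u) + g(1.0 - u))   # antithetic pair
    return total / n

# Estimate E[exp(U)] = e - 1 ~= 1.71828.
est = lhs_antithetic_mean(math.exp, 100)
print(round(est, 5))
```

With only 100 strata the stratification already confines each draw to a narrow interval, and the antithetic pairing cancels the remaining monotone error, so the estimate lands within a fraction of a percent of e - 1; plain Monte Carlo would need far more samples for the same precision.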
APA, Harvard, Vancouver, ISO, and other styles
35

Li, Yu-Shu, and 李昱樞. "Design of DC-DC Converter with spur reduction technique." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/8fdcdx.

Full text
Abstract:
Master's thesis
Yuan Ze University
Department of Electrical Engineering
106
A DC-DC converter with a spur reduction technique is presented in this thesis. To achieve this goal, we use the spread spectrum technique and design an up-down counter to change and switch the frequency, so that spurs can be reduced effectively. A buck converter with the spur reduction technique has been simulated in a standard TSMC 0.18 µm process. The system switching frequency spreads from 650 kHz to 1.295 MHz across 16 switching frequencies with a frequency step of 43 kHz, and the changing rate is 15.25 kHz. The converter regulates an output voltage of 1.2 V while the load current changes from 50 mA to 500 mA, and the maximum spur reduction reaches 27 dB.
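The up-down counter schedule can be sketched directly from the frequency plan stated in the abstract (650 kHz start, 43 kHz step, 16 frequencies topping out at 1.295 MHz); the update mechanics and names below are assumptions, not the thesis's circuit:

```python
def frequency_schedule(steps, f0=650e3, f_step=43e3, n=16):
    """Up-down counter: count 0..n-1 and back down, so the switching
    frequency sweeps 650 kHz -> 1.295 MHz -> 650 kHz.  The triangular
    sweep spreads the spur energy that a fixed-frequency converter
    would concentrate at one tone across the whole band."""
    freqs, count, direction = [], 0, +1
    for _ in range(steps):
        freqs.append(f0 + count * f_step)
        if count == n - 1:       # reverse at the top...
            direction = -1
        elif count == 0:         # ...and at the bottom
            direction = +1
        count += direction
    return freqs

sched = frequency_schedule(31)   # one full up-down sweep
print(min(sched) / 1e3, "kHz ...", max(sched) / 1e3, "kHz")
```

Because each of the 16 frequencies is visited for the same dwell time, the fundamental's energy is divided roughly evenly among 16 tones, which is the mechanism behind the reported spur reduction.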
APA, Harvard, Vancouver, ISO, and other styles
36

Mokwatlo, Peter Noko. "Patient satisfaction in breast reduction using the medial pedicle technique versus the inferior pedicle technique." Thesis, 2018. https://hdl.handle.net/10539/29127.

Full text
Abstract:
A dissertation submitted to the Faculty of Health Sciences, University of the Witwatersrand, Johannesburg, in partial fulfillment of the requirements for the degree of Master of Medicine in Plastic and Reconstructive Surgery. Johannesburg, 2018
Background: Breast reduction surgery is an accepted and commonly performed procedure for addressing gigantomastia for cosmetic and functional purposes. It has been proven to have a high rate of patient satisfaction. It is a functional operation, improving quality of life in symptomatic patients. Aims: This study evaluated patient satisfaction in subjects who had undergone breast reduction surgery between June 2017 and June 2018 at Chris Hani Baragwanath Academic Hospital (CHBAH), Helen Joseph Academic Hospital (HJAH) and Netcare Rand Clinic, using the medial pedicle technique versus the inferior pedicle technique. Methods: Patient satisfaction was evaluated by assessing the following domains: satisfaction with breasts, satisfaction with nipples, satisfaction with outcome, psychosocial well-being, sexual well-being and physical well-being. The BREAST-Q questionnaire is a measuring tool employed to evaluate patient satisfaction after breast reduction that meets international and federal standards. A total of 30 patients completed the BREAST-Q questionnaire in the clinics as they came for their follow-ups post-surgery. Fifteen participants had undergone breast reduction through the medial pedicle technique whilst the other 15 had had the procedure performed using the inferior pedicle technique. Results: The pedicles used were medial (n = 15) and inferior (n = 15). The findings were: breast satisfaction: medial pedicle technique 68.9 ± 17.6, inferior pedicle technique 69.6 ± 18.7, with a p-value of 0.926; physical well-being: medial pedicle technique 62.7 ± 19.6, inferior pedicle technique 84.2 ± 14.2, with a p-value of 0.002. The two techniques performed equally on average in all the domains except physical well-being, where the inferior pedicle technique showed a statistically significant superiority over the medial pedicle technique. Conclusions: The use of different techniques in breast reduction will continue.
Through the use of tools like the BREAST-Q questionnaire in patient related outcome measurements, we will gain a window into the patients’ feeling about the different techniques and in the process learn or change to techniques that offer better patient satisfaction. The resected breast tissue should have been weighed at the time of operation. Symptom relief is based on the volume of tissue resected.
MT 2020
APA, Harvard, Vancouver, ISO, and other styles
37

Hsieh, Ching-Ming, and 謝志明. "The Application of Microbubble Drag Reduction Technique on ship Model." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/39723177743908830237.

Full text
Abstract:
Master's thesis
National Taiwan University
Department of Engineering Science and Ocean Engineering
92
How to reduce drag and raise the efficiency of a ship is one of the most important research topics in ship hydrodynamics. Studies conducted on flat plates showed that drag reduction can be achieved by injecting microbubbles into the turbulent boundary layer of the plate; drag reductions of 70% to 80% were reported. The microbubble drag reduction technique is applied to a ship model in this study. A microbubble injection mechanism is installed on the bottom of the ship model to reduce its resistance. The test results show that the drag reduction effect depends on the location of microbubble injection, the pore diameter of the porous material and the model speed. Better drag reduction was found with a porous material of 100 micrometer pore diameter and microbubble injection at the rear part of the model only. The best result in this study was an 18% frictional drag reduction. Thus the drag reduction of the microbubble injection method is validated on the ship model.
APA, Harvard, Vancouver, ISO, and other styles
38

Liang, Li Huang, and 梁立煌. "Multi-channel noise reduction technique from the inverse reconstruction perspective." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/9ucmuh.

Full text
Abstract:
Master's thesis
National Tsing Hua University
Department of Power Mechanical Engineering
105
In this thesis, a noise reduction algorithm is presented from the perspective of source localization and separation. The Minimum Power Distortionless Response (MPDR) algorithm is utilized to determine the bearings of the signal and noise sources. Tikhonov regularization (TIKR) and a compressive sensing (CS) algorithm are employed to extract the amplitudes of the signal and noise sources. In order to evaluate the proposed method, the log minimum mean-square error (log-MMSE) algorithm, the Generalized Sidelobe Canceller (GSC) and the regulated multiple-input/output inverse theorem (R-MINT) are adopted as benchmark methods. The log-MMSE is used to estimate an optimal gain correction function as a post-filter. To enhance the GSC, subband (SB) filtering and internal iteration (IIT) are incorporated, termed the GSC-SB-IIT method. The R-MINT has previously been applied to room response inverse filtering. Numerical simulations and experiments are conducted for a 24-channel uniform circular microphone array. White noise and traffic noise are used to simulate the background noise. Objective tests based on the segmental signal-to-noise ratio (segSNR) and Perceptual Evaluation of Speech Quality (PESQ), as well as subjective listening tests, are conducted to compare the noise reduction approaches. The results show that the CS algorithm achieves the highest noise reduction.
APA, Harvard, Vancouver, ISO, and other styles
39

CAI, FENG-ZHOU, and 蔡豐洲. "A pipeline bubbles reduction technique for the Monsoon dataflow architecture." Thesis, 1993. http://ndltd.ncl.edu.tw/handle/32228626885367156032.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Lin, Wen-xiang, and 林文祥. "Modified Selective Mapping Technique for PAPR Reduction in OFDM Systems." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/17986805567975920926.

Full text
Abstract:
Master's thesis
National Central University
Institute of Communication Engineering
100
Orthogonal frequency division multiplexing (OFDM) is a desirable technique for wireless communication. The subcarriers of an OFDM signal are used orthogonally on the frequency spectrum and can resist multipath effects. Recently, OFDM has been used widely in many kinds of communication standards. However, the OFDM signal has a major disadvantage: a high peak-to-average power ratio (PAPR), which can result in significant nonlinear distortion when the signal passes through the nonlinear region of a power amplifier, degrading the bit error rate (BER). Among PAPR reduction methods, selective mapping (SLM) is effective and uncomplicated. SLM is a linear operation and does not destroy the signal itself, so the received signal can be demodulated perfectly at the receiver. However, SLM suffers from high computational complexity. This thesis introduces a modified SLM that uses the concept of partitioning the subcarriers, called partial-sequence SLM (P-SLM), which considerably reduces the computational complexity with PAPR reduction performance similar to the conventional SLM scheme. The simulation results show that it achieves a greater reduction in computational complexity than the conventional SLM scheme.
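For reference, the conventional SLM baseline that P-SLM improves on works as follows — a minimal pure-Python sketch (small N, O(N²) IDFT); the candidate count, seed and QPSK mapping are illustrative, and this shows plain SLM, not the thesis's partial-sequence variant:

```python
import cmath
import math
import random

def idft(X):
    """Inverse DFT (O(N^2), adequate for a small illustration)."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)) / N for n in range(N)]

def papr_db(x):
    """Peak-to-average power ratio of a complex waveform, in dB."""
    powers = [abs(v) ** 2 for v in x]
    return 10 * math.log10(max(powers) / (sum(powers) / len(powers)))

def slm(X, num_candidates=8, seed=0):
    """Conventional SLM: rotate the subcarriers by candidate +-1 phase
    sequences (candidate 0 is the unmodified signal), transform each to
    the time domain, and keep the candidate with the lowest PAPR.  The
    winning index is the side information the receiver needs."""
    rng = random.Random(seed)
    best = None
    for c in range(num_candidates):
        phases = [1] * len(X) if c == 0 else [rng.choice((1, -1)) for _ in X]
        papr = papr_db(idft([x * p for x, p in zip(X, phases)]))
        if best is None or papr < best[0]:
            best = (papr, c)
    return best

rng = random.Random(42)
X = [rng.choice((1, -1)) + 1j * rng.choice((1, -1)) for _ in range(32)]  # QPSK
baseline = papr_db(idft(X))
best_papr, idx = slm(X)
print(round(baseline, 2), "dB ->", round(best_papr, 2), "dB")
```

The cost that motivates P-SLM is visible here: every candidate needs its own full-length IDFT, so the complexity scales with the number of candidates times the transform size.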
APA, Harvard, Vancouver, ISO, and other styles
41

Lin, Ya-Min, and 林亞民. "A Circuit Design for Dynamic Power Reduction Using Transparent Technique." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/94888141609221554787.

Full text
Abstract:
Master's thesis
National Changhua University of Education
Department of Computer Science and Information Engineering
100
This thesis presents a pipeline structure based on a clock-gating technique which can save dynamic power consumption. The structure fully uses the transparent technique, dynamically putting the registers of the intermediate stages into transparent mode to reduce the dynamic power consumed by switching signal states. The proposed structure is based on two circuits: an interlock pipeline and an elastic circuit. These two circuits have characteristics similar to asynchronous circuits, giving them good low-power performance. The main design purpose of the interlock pipeline is to emulate an asynchronous circuit and obtain its advantages. One of these advantages is locality: stages work locally without affecting the operation of the whole circuit, so signals also switch locally, decreasing dynamic power consumption. The elastic circuit works similarly to the interlock pipeline; the main difference between them is the control logic. The interlock pipeline has additional control logic to control the valid latches, whereas the elastic circuit has no such control logic and uses the clock signal directly. Finally, based on clock gating, the transparent technique and locality, we implement two pipeline structures which save dynamic power consumption and have the characteristics of asynchronous circuits, and we compare the two structures at the end of the thesis.
APA, Harvard, Vancouver, ISO, and other styles
42

Ju, Wei-Zu, and 朱韋儒. "An Efficient SLM Technique for PAPR Reduction of OFDM Signals." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/97745398874734266705.

Full text
Abstract:
Master's thesis
National Kaohsiung First University of Science and Technology
Institute of Computer and Communication Engineering
100
In wireless systems, as technology advances, ever faster transmission rates are required. Orthogonal frequency division multiplexing (OFDM) is a multi-carrier modulation scheme. It supports high data rates over orthogonal frequency channels and effectively mitigates multipath fading on wireless channels. However, a main drawback of OFDM systems is the high peak-to-average power ratio (PAPR) of the signal. The power amplifier (PA) needs a large linear range to avoid producing nonlinear distortion, which increases the bit error rate and out-of-band spreading, but enlarging the linear operating range wastes amplifier efficiency. To solve the PAPR problem in OFDM systems, we first discuss the selective mapping method for reducing high PAPR values and the issues associated with it, including both side information and computational complexity. In this thesis, a selected mapping (SLM) scheme using exhaust-based dynamic programming (EDP) is proposed. To reduce the computational complexity of phase generation, dynamic programming (DP) is used to obtain the final result of the exhaustive solution. Because the phase sequences are created with a particular structure, the receiver can regenerate them easily. The proposed method avoids transmitting side information, thus improving the data rate, and is easy to realize.
APA, Harvard, Vancouver, ISO, and other styles
43

張閔超. "A Novel Body-Effect Reduction Technique for Linear Charge Pump." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/82247319319787064658.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Yang, Chien-Po, and 楊千柏. "Design Technique for Error Reduction On Automatic Segmentation In Microarray Image." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/00239591573186501581.

Full text
Abstract:
Master's thesis
National Central University
Institute of Electrical Engineering
93
DNA microarray hybridization is a popular high-throughput technique in academic as well as industrial genomics research. The microarray image is an important and powerful tool for large-scale gene-sequence and gene-expression analysis. Many methods analyze microarray images by automatic segmentation or spot gridding, but they share the problems of noise and tilt in the spot array, and strongly noisy images are difficult to process automatically. In this work, we reduce the edge-detection error caused by noise and by a tilted spot array. We adapt an automatic segmentation method usually applied to video segmentation to process the microarray image, reducing automatic spot-segmentation errors and obtaining more exact spot positions. We use only low-complexity methods and simple concepts derived from microarray properties, namely that the two image scans come from the same microarray and that the spots form a regular array. Finally, we compare our results with the ScanAlyze tool, which extracts spot positions and edges through a manual interface, and obtain an average difference of 1.43% in the spot-analysis ratio. The proposed method yields more accurate spot-edge segmentation and lower error in automatic microarray image analysis.
APA, Harvard, Vancouver, ISO, and other styles
45

Kuo-ChenChiang and 江國振. "Recovery and Reduction of Spent Metal Catalyst via Plasma Sintering Technique." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/46192475608883941499.

Full text
Abstract:
Doctoral dissertation
National Cheng Kung University
Department of Resources Engineering (Master's and Doctoral Program)
98
Plasma is a quasi-neutral gas consisting of a large number of charged and neutral active species. These diverse active species, together with the high-energy radiation of the plasma, can substantially enhance chemical reactions and make some reactions possible. Gasification is commonly applied in industry to convert coal, biomass, and waste materials into syngas and useful chemicals. Supported metallic catalysts are widely used in industry for hydrogenation, hydrotreating, and steam-reforming reactions. The metallic catalyst is supported on ceramic substrates (i.e., silica and alumina) to increase its surface area and achieve better catalytic efficiency. After repeated catalytic reactions, the surface of the metal-supported catalyst becomes completely covered with organic tar, and the catalyst loses its efficiency. This thesis addresses the recovery of metal and the reduction of metal oxide from spent catalysts in a nitrogen medium under plasma conditions. The spent catalysts are sintered, and the organic wastes are converted to syngas, in a thermal plasma reactor. The gases evolved during the recovery of metal (Pt) and the reduction of metal oxides (NiO2, Co3O4-MoO2) to metal are continuously pumped out of the system, clarifying the spent alumina-supported platinum catalyst. The results demonstrate that thermal plasma treatment reduces metal oxide to metal in a single processing step. Keywords: oxidation catalyst; alkali metal transition elements; support carrier material; plasma sintering; syngas; catalyst reduction
APA, Harvard, Vancouver, ISO, and other styles
46

Lin, Chin-Yang, and 林芷瑩. "Slope stability analysis under earthquake using the shear strength reduction technique." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/76319134578648368980.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Department of Resources Engineering (Master's and Doctoral Program)
94
The objective of this thesis is to investigate the stability of soil slopes using a numerical procedure that combines deformation analysis with the shear strength reduction technique, so that the stress-strain characteristics of the soil can be considered. The focus is on the stability of soil slopes under both static and dynamic conditions; the key tasks are to find the failure surface and the corresponding safety factor. It should be noted that the critical slip surface is not unique: a narrow yielding zone develops when the slope starts to fail, any slip surface passing through that zone can be the failure surface, and the corresponding safety factor is the shear strength reduction factor (SRF). The static analysis results agreed well with limit equilibrium analysis. The plastic yield zone developed from the toe to the top, connecting the two free surfaces. To simulate the soil slope under earthquake loading, real-time acceleration histories recorded during the 921 earthquake were applied in the dynamic model. The simulation results show that the failure surface under seismic loading is deeper than in the static analysis, and the shear failure zone grows with increasing peak ground acceleration, extending from shallow positions toward deeper zones.
APA, Harvard, Vancouver, ISO, and other styles
47

Shu, Jaw-Shi, and 徐炤旭. "Finite Element Analysis & Optimization on Springback Reduction -- "Double-Bend" Technique." Thesis, 1994. http://ndltd.ncl.edu.tw/handle/06500075738329071197.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

KUO, CHIEH-HAN, and 郭玠含. "An Output Ripple Reduction Technique For Switched-Capacitor Dc-Dc Converters." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/02050213291278706967.

Full text
Abstract:
Master's thesis
National Chung Cheng University
Institute of Electrical Engineering
104
This thesis introduces a novel output-ripple reduction technique for switched-capacitor DC-DC converters. A new algorithm is presented: the minimal output ripple is obtained by decreasing the peak of the output ripple through switch-size adjustment and by improving the lowest point of the output ripple through an increased switching frequency. In response to different load currents and output voltages, the analysis mechanism automatically searches for the best switch size. Without sacrificing efficiency, phase interleaving is applied to stagger the current phases supplied by the flying capacitors. Moreover, under ultra-light loads, superfluous flying capacitors can be turned off to obtain the minimum output ripple. This algorithm keeps the output ripple below 20 mV under all operating conditions, and the technique can be applied to any switched-capacitor converter. Based on a recursive SC topology, the 2:1 fundamental topology is cascaded to produce 2^N - 1 conversion ratios; the best ratio is selected according to load demand to obtain higher conversion efficiency. With a 1.8 V input, the converter provides output voltages from 0.1 V to 1.68 V with very good efficiency. Keywords: switched capacitor, DC-DC converter, ripple reduction technique, wide voltage range
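The interplay of switching frequency and phase interleaving described in this abstract can be illustrated with a first-order ripple estimate. This is a textbook-style approximation sketched in Python, not the thesis's search algorithm, and all parameter values are hypothetical.

```python
def output_ripple(i_load, f_sw, c_out, n_phases=1):
    # First-order estimate: over one switching period T = 1/f_sw the load
    # discharges the output capacitor by roughly I * T, giving a ripple of
    # I / (f_sw * C).  Interleaving N phases refreshes the output N times
    # per period, shortening the effective period to T/N and cutting the
    # ripple roughly N-fold.
    return i_load / (f_sw * c_out * n_phases)

# Hypothetical operating point: 10 mA load, 1 MHz switching, 1 uF output cap.
single      = output_ripple(10e-3, 1e6, 1e-6)     # one phase
interleaved = output_ripple(10e-3, 1e6, 1e-6, 4)  # four interleaved phases
```

The same estimate shows why raising `f_sw` and adding phases both push the ripple down, which is the lever the thesis's algorithm adjusts automatically per load condition.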
APA, Harvard, Vancouver, ISO, and other styles
49

Huang, Ying-Yin, and 黃瀅瑛. "Trajectory Piecewise-linear Model Order Reduction Technique for Nonlinear Bistable Mechanism." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/13564381871769849441.

Full text
Abstract:
Master's thesis
National Taiwan University
Institute of Mechanical Engineering
95
In this work, we investigate the application of the trajectory piecewise-linear model order reduction (TPWLMOR) technique to bistable mechanisms. The bistable mechanism, composed of a double curved-beam mechanism (DCBM), is employed in a MEMS (micro-electro-mechanical systems) optical switch. When an external force larger than a certain critical value is applied, the DCBM quickly snaps through from one stable state to the other. We focus on analyzing the static and transient behavior of the DCBM. First, we used the finite element method (FEM) software ABAQUS to construct a 3-D solid model of the DCBM and to run both static and transient analyses. We then developed a 3-D dynamic nonlinear FEM numerical model and used TPWLMOR to reduce the full-mesh FEM models to low-order models. TPWLMOR combines the concept of piecewise-linear approximation with an Arnoldi-based model order reduction (MOR) algorithm: reduced models are generated with the Arnoldi algorithm at appropriate linearization points and then superposed into a compact model (the trajectory piecewise-linear model) using a weighted sum. Compared with traditional FEM modeling, the TPWLMOR models increase computational efficiency. The bistable device was fabricated with a simple one-mask SOI MEMS process, and a Doppler laser interferometer system was set up to measure the dynamics of the DCBM. We then compared the empirical data with the simulation results obtained from ABAQUS, the FEM model, and the TPWLMOR algorithm. The simulated DCBM displacements of the reduced models agree with the ABAQUS results, while the computation runs about 200 times faster than ABAQUS. However, when the force exceeds the critical value, the DCBM does not snap through as expected; this inaccuracy may be caused by several factors, which are subject to further study.
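The Arnoldi step at the heart of the MOR algorithm mentioned in this abstract can be sketched as a generic Krylov-subspace construction. This plain-Python sketch builds only the orthonormal projection basis; the TPWL linearization points, weighting, and the projected system matrices are omitted, and the small test matrix is an illustrative assumption.

```python
import math

def matvec(A, v):
    # Dense matrix-vector product on nested lists.
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def arnoldi(A, b, m):
    # Build an orthonormal basis V of the Krylov subspace
    # span{b, A b, ..., A^(m-1) b} via modified Gram-Schmidt.
    # Projecting the state onto V yields the reduced-order model.
    V = [[x / math.sqrt(dot(b, b)) for x in b]]
    for _ in range(m - 1):
        w = matvec(A, V[-1])
        for v in V:                      # orthogonalize against the basis
            h = dot(v, w)
            w = [wi - h * vi for wi, vi in zip(w, v)]
        nw = math.sqrt(dot(w, w))
        if nw < 1e-12:                   # Krylov subspace exhausted
            break
        V.append([x / nw for x in w])
    return V

# Small illustrative system matrix and input vector.
A = [[4.0, 1.0, 0.0, 0.0],
     [1.0, 3.0, 1.0, 0.0],
     [0.0, 1.0, 2.0, 1.0],
     [0.0, 0.0, 1.0, 1.0]]
b = [1.0, 0.0, 0.0, 0.0]
V = arnoldi(A, b, 3)  # 3 orthonormal basis vectors for a reduced model
```

In TPWLMOR, a basis like `V` is generated at each linearization point along the training trajectory, and the per-point reduced models are then blended with state-dependent weights.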
APA, Harvard, Vancouver, ISO, and other styles
50

Chang, Yuan-Liang, and 張元良. "A Study on Slope Stability Analysis Using Shear Strength Reduction Technique." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/41279116386438620948.

Full text
Abstract:
Doctoral dissertation
National Chung Hsing University
Department of Civil Engineering
92
In the stability analysis of soil slopes, there are basically two deterministic approaches. The first is the limit equilibrium method (Bishop, 1955; Morgenstern and Price, 1965; Spencer, 1967; Janbu, 1973) and the second is stress-strain analysis (Brown and King, 1966; Dunlop and Duncan, 1970; Wright et al., 1973; Giam and Donald, 1988; Huang and Yamasaki, 1993; Zou et al., 1995). The limit equilibrium method is concerned with limit-state analysis and has received wide acceptance because of its simplicity. However, this method has some theoretical shortcomings (Wright et al., 1973). With the advent of the finite element technique and powerful computers, given the material properties and the geometry of the slope, it is not difficult to analyze the soil slope for deformation and safety by computing the stress and strain in the slope using stress-strain analysis. The main advantages of the finite element approach to slope stability analysis over the traditional limit equilibrium method are that no assumptions need to be made in advance about slice side forces, or about the location or shape of the slip surface. Failure occurs "naturally" through the zones within the soil slope in which the soil shear strength is unable to sustain the acting shear stresses (Wong, 1984). As for converting the finite element analysis into a safety factor calculation, there are mainly two types of procedure. One considers the stress level on the potential failure surface (Wright et al., 1973; Giam and Donald, 1988; Huang and Yamasaki, 1993). The other is the strength reduction technique (Zienkiewicz et al., 1975; Donald and Giam, 1988; Ugai, 1989; Matsui and San, 1992; Griffiths and Lane, 1999; Manzari and Nour, 2000). In the former, a searching scheme, as commonly adopted in the limit equilibrium method, is needed to search over many slip surfaces through analysis of the stress states.
The safety factor is defined as the ratio of the total available shear strength to the shear stress acting over the slip surface. The critical slip surface is the one corresponding to the minimum safety factor value. In a strict sense, the merit of this method over the traditional limit equilibrium method lies in the reliable estimation of the stress states in the soil slope. In the strength reduction technique, the original shear strength parameters are divided by a factor in order to bring the slope to failure (Duncan, 1996). Slope failure may be identified from the bulging of the deformed shape, localized in a narrow zone, as the slope reaches the verge of instability (Donald and Giam, 1988), or from non-convergence of the solution in a mathematical sense (Griffiths and Lane, 1999). Compared with the former method, only a limited number of analysis steps are needed to determine the critical slip surface and the corresponding safety factor using the strength reduction technique. However, in the existing procedure the sharp bend of the displacement curve in the unstable region near the failure site may develop over several steps of the shear strength reduction process, so there is no perfectly delineated break-off point in the curve of displacement versus strength reduction factor (SRF) for determining the critical safety factor value. In this study, an approach using the strength reduction technique within finite element analysis, with a prescribed failure criterion and graphical output for examining the developed failure zone, was used to determine slope failure and the corresponding safety factor. The state of effective stresses in the slope is calculated by the finite element method using eight-node quadrilateral elements of elastic-plastic soil with the Drucker-Prager nonlinear stress-strain relationship and a non-associated flow rule.
The soil's self-weight is modeled by a gravity "turn-on" procedure (Smith and Griffiths, 1988) with nodal loads added in a single increment. The shear strength of the soil is estimated from the Mohr-Coulomb failure criterion. Slope failure occurs when the yield zone spreads over the entire slip surface, and the corresponding SRF is the safety factor of the soil slope. Slope failure could be clearly defined, and progressive failure was also observed. It should be kept in mind that the critical slip surface is not unique: a narrow yielding zone developed when the slope started to fail, and any slip surface passing through the yield zone could be the failure surface. The factor of safety obtained by the proposed procedure is in good agreement with that determined by Bishop's and Spencer's methods. Moreover, the proposed procedure gives designers a more solidly grounded understanding of slope stability analysis.
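The core of the strength reduction procedure described above (divide c and tan φ by a trial factor and find the largest SRF for which the slope remains stable) can be sketched with a closed-form infinite-slope check standing in for the finite element solver. The stand-in stability check and all soil parameters below are illustrative assumptions, not the dissertation's FEM model.

```python
import math

def infinite_slope_stable(c, phi_deg, gamma=18.0, z=5.0, beta_deg=30.0):
    # Stand-in for the FEM solver: infinite-slope limit equilibrium on a
    # plane at depth z under a slope of angle beta.  Stable when the
    # available Mohr-Coulomb strength matches or exceeds the acting stress.
    beta = math.radians(beta_deg)
    tau = gamma * z * math.sin(beta) * math.cos(beta)   # acting shear stress
    strength = c + gamma * z * math.cos(beta) ** 2 * math.tan(math.radians(phi_deg))
    return strength >= tau

def safety_factor_by_ssr(c, phi_deg, lo=0.1, hi=10.0, tol=1e-6):
    # Shear strength reduction: divide c and tan(phi) by a trial SRF and
    # bisect on the largest SRF for which the slope is still stable.
    tan_phi = math.tan(math.radians(phi_deg))
    while hi - lo > tol:
        srf = 0.5 * (lo + hi)
        phi_red = math.degrees(math.atan(tan_phi / srf))
        if infinite_slope_stable(c / srf, phi_red):
            lo = srf     # still stable: strength can be reduced further
        else:
            hi = srf     # failed: SRF has passed the safety factor
    return 0.5 * (lo + hi)

# Hypothetical soil: c = 10 kPa, phi = 30 degrees.
fs = safety_factor_by_ssr(c=10.0, phi_deg=30.0)
```

Because both strength terms scale as 1/SRF here, the bisection recovers exactly the classical factor of safety of the stand-in model; in the dissertation's procedure the stability check is instead the spread of the FEM yield zone over the slip surface.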
APA, Harvard, Vancouver, ISO, and other styles
