Dissertations / Theses on the topic 'Embedded Systems, Algorithms, Optimization Techniques'

To see the other types of publications on this topic, follow the link: Embedded Systems, Algorithms, Optimization Techniques.

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 28 dissertations / theses for your research on the topic 'Embedded Systems, Algorithms, Optimization Techniques.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Loiacono, Carmelo. "Algorithm Optimization and Applications for Embedded Systems." Doctoral thesis, Politecnico di Torino, 2016. http://hdl.handle.net/11583/2642819.

Full text
Abstract:
Optimizations for embedded systems imply ad-hoc tunings and performance improvements, as well as the development of specific algorithms and data structures well suited to embedded platforms. This work first focuses on the efficient and accurate implementation of visual search algorithms on embedded GPUs using the OpenGL ES and OpenCL languages. Considering embedded GPUs that support only low-precision computations, we discuss the problems arising during the design phase and detail our implementation choices, focusing on two well-known key-point detectors, SIFT and ALP. We illustrate how to re-engineer standard Gaussian Scale Space computations for mobile multi-core parallel GPUs using the OpenGL ES language, transforming standard (i.e., single- or double-precision) floating-point computations into reduced-precision GPU arithmetic without precision loss. We also concentrate on an efficient and accurate OpenCL implementation of the main MPEG Compact Descriptors for Visual Search (CDVS) stages. We introduce new techniques to adapt sequential algorithms to parallel processing. Furthermore, to reduce memory accesses and efficiently distribute the OpenCL kernels' workload, we use new approaches to store and retrieve CDVS information in proper GPU data structures. Secondly, we focus on improving the scalability of formal verification algorithms for embedded system design models. We address the problem of reducing the size of Craig interpolants generated within inner steps of SAT-based Unbounded Model Checking. Craig interpolants are obtained from refutation proofs of unsatisfiable SAT runs, in the form of AND/OR circuits of linear size w.r.t. the proof. We also consider the issues of property grouping, property decomposition, and property coverage in model checking problems. Property grouping, i.e., clustering, is a valuable solution whenever (very) large sets of properties have to be proven for a given model. On the other hand, property decomposition can be effective whenever a given property turns out (or is expected) to be "hard to prove". Overall, experimental results are promising and demonstrate that our solutions provide a speed-up over existing ones and can be highly beneficial when appropriately integrated into a new environment.
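As a rough illustration of the reduced-precision trade-off discussed above, the sketch below (not the thesis's OpenGL ES implementation; scale counts and sigmas are illustrative) builds a standard Gaussian Scale Space in full precision and again quantized per scale to float16, mimicking low-precision GPU arithmetic, then measures the deviation:

```python
# A minimal sketch, assuming nothing about the thesis code: the same
# Gaussian Scale Space is built in float64 and then quantized per scale
# to float16 to mimic reduced-precision GPU arithmetic.
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_scale_space(image, n_scales=5, sigma0=1.6, dtype=np.float32):
    """Progressively blurred copies of `image`, quantized to `dtype`."""
    scales = []
    for s in range(n_scales):
        sigma = sigma0 * 2.0 ** (s / (n_scales - 1))
        blurred = gaussian_filter(image.astype(np.float64), sigma=sigma)
        scales.append(blurred.astype(dtype))
    return scales

img = np.random.default_rng(0).random((128, 128))
full = gaussian_scale_space(img, dtype=np.float32)
low = gaussian_scale_space(img, dtype=np.float16)
err = max(np.abs(f - l.astype(np.float32)).max() for f, l in zip(full, low))
print(f"max deviation of the float16 pipeline: {err:.5f}")
```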
APA, Harvard, Vancouver, ISO, and other styles
2

Ahmadinia, Ali. "Optimization algorithms for dynamically reconfigurable embedded systems." Berlin : Köster, 2006. http://deposit.ddb.de/cgi-bin/dokserv?id=2793299&prov=M&dok_var=1&dok_ext=htm.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Levy, Renato. "Optimization Techniques for Energy-Aware Memory Allocation in Embedded Systems." Diss., Computer Science, George Washington University, 2004. http://hdl.handle.net/1961/116.

Full text
Abstract:
Degree awarded (2004): DScCS, Computer Science, George Washington University
A common practice to save power and energy in embedded systems is to "put to sleep" or disable parts of the hardware. The memory system consumes a significant portion of the energy budget of the overall system, so it is a natural target for energy optimization techniques. The principle of software locality makes the memory subsystem an even better choice, since all memory blocks but the ones immediately required can be disabled at any given time. This opportunity is the motivation for developing energy optimization techniques to dynamically and selectively control the power state of the different parts of the memory system. This dissertation develops a set of algorithms and techniques that can be organized into a hardware/software co-development tool to help designers apply the selective powering of memory blocks to minimize energy consumption. In data-driven embedded systems, most of the data memory is used either by global static variables or by dynamic variables. Although techniques already exist for energy-aware allocation of global static arrays under certain constraints, very little work has focused on dynamic variables, which are actually more important to event-driven/data-driven embedded systems than their static counterparts. This dissertation addresses this gap, and extends and consolidates previous allocation techniques in a unique framework. A formal model for memory energy optimization for dynamic and global static variables and efficient algorithms for energy-aware allocation of variables to memory are presented. Dependencies between generic code and data are uncovered, and this information is exploited to fine-tune a system. A framework is presented for retrieving this profile information, which is then used to design energy-aware allocation algorithms for dynamic variables, including heuristics for segmentation and control of the memory heap. By working at the assembly code level, these techniques can be integrated into any compiler regardless of the source language. The proposed techniques were implemented and tested against data-intensive benchmarks, and experimental results indicate significant savings of up to 50% in memory system energy consumption.
Advisory Committee: Professor Bhagirath Narahari, Professor Hyeong-Ah Choi (Chair), Professor Rahul Simha, Professor Shmuel Rotenstreich, Professor Can E. Korman, Dr. Yul Williams
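To make the idea of energy-aware allocation concrete, here is a hypothetical greedy sketch (not the dissertation's algorithms; the data and bank model are invented): variables are packed into banks by descending access count, so accesses concentrate in few banks and the remaining banks can stay asleep:

```python
# Toy illustration, assuming a simple banked memory: greedily pack
# variables into banks by descending access count, concentrating traffic
# so that cold banks can be powered down.
def allocate(variables, bank_size, n_banks):
    """variables: list of (name, size, access_count) tuples."""
    banks = [{"free": bank_size, "vars": [], "accesses": 0} for _ in range(n_banks)]
    for name, size, accesses in sorted(variables, key=lambda v: -v[2]):
        # First-fit into the most-used bank that still has room, keeping
        # cold banks completely idle.
        for bank in sorted(banks, key=lambda b: -b["accesses"]):
            if bank["free"] >= size:
                bank["free"] -= size
                bank["vars"].append(name)
                bank["accesses"] += accesses
                break
        else:
            raise MemoryError(f"no bank can hold {name}")
    return banks

demo = [("buf", 512, 900), ("log", 256, 10), ("tbl", 256, 700), ("cfg", 64, 2)]
for i, b in enumerate(allocate(demo, bank_size=1024, n_banks=2)):
    print(f"bank {i}: {b['vars']}, accesses: {b['accesses']}")
```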
APA, Harvard, Vancouver, ISO, and other styles
4

Bautista-Quintero, Ricardo. "Techniques for the implementation of control algorithms using low-cost embedded systems." Thesis, University of Leicester, 2009. http://hdl.handle.net/2381/8220.

Full text
Abstract:
The feedback control literature has reported success in numerous implementations of systems that employ state-of-the-art components. In such systems, the performance of the computer controller, actuators and sensors is largely unaffected by nonlinear effects, external disturbances and the finite precision of the digital computer. Overall, this type of control system can be designed and implemented with comparative ease. By contrast, when the implementation is based on limited resources, such as low-cost computer hardware along with simple actuators and sensors, the developer faces significant challenges. This thesis has the goal of simplifying the design of mechatronic systems implemented using low-cost hardware. The approach involves design techniques that enhance the links between feedback control algorithms (in theory) and reliable real-time implementation (in practice). The outcome of this research provides part of a framework that can be used to design and implement efficient control algorithms for resource-constrained embedded computers. The scope of the thesis is limited to situations where 1) the computer hardware has limited memory and CPU performance; 2) sensor-related uncertainties may affect the stability of the plant; and 3) unmodelled actuator dynamics limit the performance of the plant. The thesis concludes by emphasising the importance of finding mechanisms to integrate low-cost components with nontrivial robust control algorithms in order to satisfy multi-objective requirements simultaneously.
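To give a flavor of the finite-precision constraint, here is a minimal PID controller in Q16.16 fixed-point arithmetic, the kind of implementation a low-cost embedded computer without a floating-point unit would run (gains and the toy plant are illustrative, not from the thesis):

```python
# A hedged sketch, assuming Q16.16 fixed point: all controller arithmetic
# uses integers, as on an FPU-less microcontroller.
Q = 16                                        # fractional bits
def to_fix(x): return int(round(x * (1 << Q)))
def to_float(x): return x / (1 << Q)
def fmul(a, b): return (a * b) >> Q           # floor rounding, fine for a sketch
def fdiv(a, b): return (a << Q) // b

class FixedPointPID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = map(to_fix, (kp, ki, kd, dt))
        self.integral = 0
        self.prev_err = 0

    def update(self, setpoint, measurement):
        err = to_fix(setpoint - measurement)
        self.integral += fmul(err, self.dt)
        deriv = fdiv(err - self.prev_err, self.dt)
        self.prev_err = err
        out = fmul(self.kp, err) + fmul(self.ki, self.integral) + fmul(self.kd, deriv)
        return to_float(out)

pid = FixedPointPID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
y = 0.0
for _ in range(5):
    u = pid.update(1.0, y)                    # drive a toy first-order plant to 1.0
    y += 0.01 * (u - y)
    print(round(y, 4))
```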
APA, Harvard, Vancouver, ISO, and other styles
5

McCurrey, Michael. "Probabilistic Algorithms, Lean Methodology Techniques, and Cell Optimization Results." ScholarWorks, 2019. https://scholarworks.waldenu.edu/dissertations/7939.

Full text
Abstract:
There is a significant technology deficiency within the U.S. manufacturing industry compared to other countries. To adequately compete in the global market, lean manufacturing organizations in the United States need to look beyond their traditional methods of evaluating their processes to optimize their assembly cells for efficiency. Utilizing task-technology fit theory, this quantitative correlational study examined the relationships among software using probabilistic algorithms, lean methodology techniques, and manufacturer cell optimization results. Participants consisted of individuals performing the role of the systems analyst within a manufacturing organization using lean methodologies in the Southwestern United States. Data were collected from 118 responses from systems analysts through a survey instrument, which was an integration of two instruments with proven reliability. Multiple regression analysis revealed significant positive relationships among software using probabilistic algorithms, lean methodology, and cell optimization results. These findings may provide management with information regarding the skillsets required for systems analysts to implement software using probabilistic algorithms and lean manufacturing techniques to improve cell optimization results. The findings of this study may contribute to society through the potential to bring sustainable economic improvement to impoverished communities through the implementation of efficient manufacturing solutions with lower capital expenditures.
APA, Harvard, Vancouver, ISO, and other styles
6

DeBardelaben, James Anthony. "An optimization-based approach for cost-effective embedded DSP system design." Diss., Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/15757.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Kandi, Jayavardhan R. "Embedded Cryptography: An Analysis and Evaluation of Performance and Code Optimization Techniques for Encryption and Decryption in Embedded Systems." [Tampa, Fla.] : University of South Florida, 2003. http://purl.fcla.edu/fcla/etd/SFE0000151.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Cazzaniga, Paolo. "Stochastic algorithms for biochemical processes." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2010. http://hdl.handle.net/10281/7820.

Full text
Abstract:
After the completion of the sequencing of the human genome (and of many other genomes), the main challenge for modern biology is to understand complex biological processes such as metabolic pathways, gene regulatory networks and cell signalling pathways, which are the basis of the functioning of living cells. This goal can only be achieved by using mathematical modelling tools and computer simulation techniques to integrate experimental data and to make predictions on the system behaviour that will then be experimentally checked, so as to gain insights into the workings and general principles of organization of biological systems. In the study of biological systems, the use of stochastic methods is motivated by the fact that these systems are usually composed of many chemical interactions among a large number of chemical species, but the molecular quantities involved can be small (a few tens of molecules), and noise plays a major role in the system's dynamics. One major problem related to stochastic methods is that they are difficult to treat analytically; hence, they are implemented by means of numerical simulations whose computation time is usually very high. In this thesis we provide a discrete and stochastic framework for the modelling, simulation and analysis of biological and chemical systems, which overcomes the limitations of a variant of membrane systems, called DPPs, and of the classic stochastic algorithms. This novel method combines, in particular, the descriptive power of DPPs with the efficiency of the tau-leaping algorithm. This approach, called tau-DPP, exploits the membrane structure and the system definition of DPPs, with the aim of describing multiple-volume systems, and uses a modified version of the tau-leaping algorithm for the efficient description of the system behaviour. The tau-DPP framework has been applied to ecological, biological and chemical systems. In general, the study of such models requires the knowledge of many numerical factors for a complete and accurate description of biological systems, like molecular species quantities and reaction rates, which represent indispensable quantitative information for computational investigations of the system behaviour. The lack and inaccuracy of this information bring about the challenging problem of developing suitable techniques to automatically estimate the correct values of all parameters in order to reproduce the expected dynamics in the best possible way. In this thesis, we consider the application of two optimisation techniques, genetic algorithms and particle swarm optimization, to tackle this problem. In particular, we test and compare the performances of genetic algorithms and particle swarm optimization with the aim of identifying the most suitable optimisation technique for parameter estimation. Finally, the problem related to the exploration of the parameter space of a biochemical system is described. Usually, this kind of analysis is achieved by means of large numbers of independent simulations, where each execution is performed with a particular parametrisation. To efficiently tackle this problem, we present the implementation of a parameter sweep application on a grid framework.
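A minimal sketch of the tau-leaping idea underlying tau-DPP (toy reaction system and constants, not the thesis code): each reaction fires a Poisson-distributed number of times over a leap of length tau, instead of one event per step as in Gillespie's exact algorithm:

```python
# Hedged sketch of tau-leaping on a toy system: A + B -> C (rate c1),
# C -> A + B (rate c2). A real tau-leaping scheme adapts tau to bound
# propensity changes; a fixed tau and a clamp keep this sketch short.
import numpy as np
rng = np.random.default_rng(0)

state = np.array([1000, 800, 0])           # molecules of A, B, C
stoich = np.array([[-1, -1, +1],           # state change of reaction 1
                   [+1, +1, -1]])          # state change of reaction 2
c = np.array([0.001, 0.1])                 # stochastic rate constants

def propensities(x):
    return np.array([c[0] * x[0] * x[1], c[1] * x[2]])

t, tau = 0.0, 0.01
for _ in range(1000):
    a = propensities(state)
    if a.sum() == 0:
        break
    k = rng.poisson(a * tau)               # firings of each reaction in [t, t+tau)
    state = np.maximum(state + stoich.T @ k, 0)
    t += tau
print(t, state)
```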
APA, Harvard, Vancouver, ISO, and other styles
9

Pratap, Rana Jitendra. "Design and Optimization of Microwave Circuits and Systems Using Artificial Intelligence Techniques." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/7225.

Full text
Abstract:
In this thesis, a new approach combining neural networks and genetic algorithms is presented for microwave design. In this method, an accurate neural network model is developed from experimental data. This neural network model is used to perform sensitivity analysis and derive response surfaces. An innovative technique is then applied in which genetic algorithms are coupled with the neural network model to assist in synthesis and optimization. The proposed method is used for modeling and analysis of circuit parameters for flip-chip interconnects up to 35 GHz, as well as for the design of multilayer inductors and capacitors at 1.9 GHz and 2.4 GHz. The method was also used to synthesize mm-wave low-pass filters in the range of 40-60 GHz. The devices obtained from layout parameters predicted by the neuro-genetic design method yielded electrical responses close to the desired values (95% accuracy). The proposed method also implements a weighted priority scheme to account for tradeoffs in microwave design. This scheme was used to synthesize bandpass filters for 802.11a and HIPERLAN wireless LAN applications in the range of 5-6 GHz. This research also develops a novel neuro-genetic design centering methodology for yield enhancement and design for manufacturability of microwave devices and circuits. A neural network model is used to calculate yield using Monte Carlo methods. A genetic algorithm is then used for yield optimization. The proposed method has been used for yield enhancement of a SiGe heterojunction bipolar transistor and a mm-wave voltage-controlled oscillator, resulting in significant yield enhancement of the SiGe HBTs (from 25% to 75%) and VCOs (from 8% to 85%). The proposed method can be extended to device, circuit, package, and system-level integrated co-design, since it can handle a large number of design variables without any assumptions about component behavior. The proposed algorithm could be used by the microwave community for design and optimization of microwave circuits and systems with greater accuracy and less computational time.
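The neuro-genetic loop can be sketched in a few lines (a toy: the `surrogate` function stands in for the trained neural network model, and the parameter names and bounds are hypothetical):

```python
# Hedged sketch of the neuro-genetic idea: a genetic algorithm searches
# layout parameters, scoring candidates with a cheap surrogate model in
# place of slow EM simulation. In the thesis the surrogate is a neural
# network trained on measured data; here it is a stand-in function.
import random
random.seed(1)

def surrogate(params):                      # stand-in for the NN model
    w, l = params                           # e.g., inductor width/length
    return -((w - 3.2) ** 2 + (l - 7.5) ** 2)   # peaks at the "desired response"

def ga(pop_size=30, gens=40, bounds=((1, 10), (1, 20))):
    pop = [[random.uniform(*b) for b in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=surrogate, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]           # crossover
            child = [min(max(g + random.gauss(0, 0.2), lo), hi)   # mutation
                     for g, (lo, hi) in zip(child, bounds)]
            children.append(child)
        pop = parents + children
    return max(pop, key=surrogate)

print(ga())   # should approach (3.2, 7.5)
```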
APA, Harvard, Vancouver, ISO, and other styles
10

Bao, Min. "System-Level Techniques for Temperature-Aware Energy Optimization." Licentiate thesis, Linköpings universitet, ESLAB - Laboratoriet för inbyggda system, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-60855.

Full text
Abstract:
Energy consumption has become one of the main design constraints in today's integrated circuits. Techniques for energy optimization, from circuit level up to system level, have been intensively researched. The advent of large-scale integration with deep sub-micron technologies has led to both high power densities and high chip working temperatures. At the same time, leakage power is becoming the dominant power consumption source of circuits, due to continuously lowered threshold voltages as technology scales. In this context, temperature is an important parameter. One aspect of particular interest for this thesis is the strong inter-dependency between leakage and temperature. Apart from leakage power, temperature also has an important impact on circuit delay and, implicitly, on frequency, mainly through its influence on carrier mobility and threshold voltage. For power-aware design techniques, temperature has become a major factor to be considered. In this thesis, we address the issue of system-level energy optimization for real-time embedded systems taking temperature aspects into consideration. We have investigated two problems: (1) energy optimization via temperature-aware dynamic voltage/frequency scaling (DVFS); (2) energy optimization through temperature-aware idle time (or slack) distribution (ITD). For these two problems, we have proposed off-line techniques where only static slack is considered. To further improve energy efficiency, we have also proposed online techniques, which make use of both static and dynamic slack. Experimental results have demonstrated that considerable improvement of energy efficiency can be achieved by applying our temperature-aware optimization techniques. Another contribution of this thesis is an analytical temperature analysis approach which is both accurate and sufficiently fast to be used inside an energy optimization loop.
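The leakage-temperature inter-dependency mentioned above can be illustrated numerically (the constants below are invented for illustration, not from the thesis): leakage grows with temperature and temperature grows with total power, so the steady state is a fixed point:

```python
# A small numerical sketch, assuming a one-node thermal model and a
# simple exponential leakage law.
import math

T_AMB, R_TH = 45.0, 0.8        # ambient temperature (C), thermal resistance (C/W)
P_DYN = 20.0                   # dynamic power at the chosen frequency (W)

def p_leak(temp_c):            # leakage rises exponentially with temperature
    return 2.0 * math.exp(0.02 * (temp_c - 25.0))

temp = T_AMB
for _ in range(50):            # fixed-point iteration to the steady state
    new_temp = T_AMB + R_TH * (P_DYN + p_leak(temp))
    if abs(new_temp - temp) < 1e-6:
        break
    temp = new_temp
print(f"steady-state temperature: {temp:.2f} C, leakage: {p_leak(temp):.2f} W")
```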
APA, Harvard, Vancouver, ISO, and other styles
11

Katsifodimos, Asterios. "Scalable view-based techniques for web data : algorithms and systems." Phd thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00870456.

Full text
Abstract:
XML was recommended by the W3C in 1998 as a markup language to be used by device- and system-independent methods of representing information. XML is nowadays used as a data model for storing and querying large volumes of data in database systems. In spite of significant research and systems development, many performance problems are raised by processing very large amounts of XML data. Materialized views have long been used in databases to speed up queries. Materialized views can be seen as precomputed query results that can be re-used to evaluate (part of) another query, and they have been a topic of intensive research, in particular in the context of relational data warehousing. This thesis investigates the applicability of materialized view techniques to optimize the performance of Web data management tools, in particular in distributed settings, considering XML data and queries. We make three contributions. We first consider the problem of choosing the best views to materialize within a given space budget in order to improve the performance of a query workload. Our work is the first to address the view selection problem for a rich subset of XQuery. The challenges we face stem from the expressive power and features of both the query and view languages and from the size of the search space of candidate views to materialize. While the general problem has prohibitive complexity, we propose and study a heuristic algorithm and demonstrate its superior performance compared to the state of the art. Second, we consider the management of large XML corpora in peer-to-peer networks based on distributed hash tables (DHTs, in short). We consider a platform leveraging distributed materialized XML views, defined by arbitrary XML queries, filled with data published anywhere in the network, and exploited to efficiently answer queries issued by any network peer. This thesis has contributed important scalability-oriented optimizations, as well as a comprehensive set of experiments deployed in a country-wide WAN. These experiments outgrow similar competitor systems by orders of magnitude in terms of data volumes and data dissemination throughput. Thus, they are the most advanced in understanding the performance behavior of DHT-based XML content management in real settings. Finally, we present a novel approach for scalable content-based publish/subscribe (pub/sub, in short) in the presence of constraints on the available computational resources of data publishers. We achieve scalability by off-loading subscriptions from the publisher and leveraging view-based query rewriting to feed these subscriptions from the data accumulated in others. Our main contribution is a novel algorithm for organizing subscriptions in a multi-level dissemination network in order to serve large numbers of subscriptions, respect capacity constraints, and minimize latency. The efficiency and effectiveness of our algorithm are confirmed through extensive experiments and a large deployment in a WAN.
APA, Harvard, Vancouver, ISO, and other styles
12

Castellini, Alberto. "Algorithms and Software for Biological MP Modeling by Statistical and Optimization Techniques." Doctoral thesis, Università degli Studi di Verona, 2010. http://hdl.handle.net/11562/342895.

Full text
Abstract:
Biological systems are groups of biological entities (e.g., molecules and organisms) that interact to produce specific dynamics. These systems are usually characterized by high complexity, since they involve a large number of components with many interconnections. Understanding the mechanisms of biological systems, and predicting their behaviors in normal and pathological conditions, is a crucial challenge in systems biology, a central research area at the border between biology, medicine, mathematics and computer science. In this thesis metabolic P systems, also called MP systems, have been employed as a discrete modeling framework for the analysis of biological system dynamics. They are a deterministic class of P systems employing rewriting rules to represent chemical reactions and "flux regulation functions" to tune reaction reactivity according to the amount of substances present in the system. After an excursus on the literature about some conventional (i.e., differential equations, Gillespie's models) and unconventional (i.e., P systems and metabolic P systems) modeling frameworks, the results of my research are presented. They concern three research topics: i) equivalences between MP systems and hybrid functional Petri nets, ii) statistical and optimization perspectives in the generation of MP models from experimental data, iii) development of the virtual laboratory MetaPlab, a Java software based on MP systems. The equivalence between MP systems and hybrid functional Petri nets is proved by two theorems and some in silico experiments for the case study of the lac operon gene regulatory mechanism and the glycolytic pathway. The second topic concerns new approaches to the synthesis of flux regulation functions. Stepwise linear regression and neural networks are employed as function approximators, and classical/evolutionary optimization algorithms (e.g., backpropagation, genetic algorithms, particle swarm optimization, memetic algorithms) as learning techniques. A complete pipeline for data analysis is also presented, which addresses the entire process of flux regulation function synthesis, from data preparation to feature selection, model generation and statistical validation. The proposed methodologies have been successfully tested by means of in silico experiments on the mitotic oscillator in early amphibian embryos and on non-photochemical quenching (NPQ). The last research topic is more applicative, and pertains to the design and development of a Java plugin architecture and several plugins which automate many tasks related to MP modeling, such as dynamics computation, flux discovery, and regulation function synthesis.
APA, Harvard, Vancouver, ISO, and other styles
13

Chiu, Leung Kin. "Efficient audio signal processing for embedded systems." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44775.

Full text
Abstract:
We investigated two design strategies that allow audio signals to be processed efficiently on embedded systems such as mobile phones and portable electronics. In the first strategy, we exploit properties of the human auditory system to process audio signals. We designed a sound enhancement algorithm to make piezoelectric loudspeakers sound "richer" and "fuller," using a combination of bass extension and dynamic range compression. We also developed an audio energy reduction algorithm for loudspeaker power management by suppressing signal energy below the masking threshold. In the second strategy, we use low-power analog circuits to process the signal before digitizing it. We designed an analog front-end for sound detection and implemented it on a field-programmable analog array (FPAA). The sound classifier front-end can be used in a wide range of applications because programmable floating-gate transistors are employed to store classifier weights. Moreover, we incorporated a feature selection algorithm to simplify the analog front-end. The machine learning algorithm AdaBoost is used to select the most relevant features for a particular sound detection application. We also designed the circuits to implement the AdaBoost-based analog classifier.
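As an example of the first strategy, a minimal dynamic range compressor of the kind combined with bass extension above (threshold, ratio and attack are illustrative parameters, not the thesis design):

```python
# A hedged sketch, assuming a feed-forward compressor: a one-pole
# envelope follower drives a static gain curve that attenuates the
# signal once its level exceeds a threshold.
import numpy as np

def compress(x, fs, threshold_db=-20.0, ratio=4.0, attack_ms=5.0):
    alpha = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    env, out = 0.0, np.empty_like(x)
    for n, s in enumerate(x):
        env = max(abs(s), alpha * env)           # envelope follower
        level_db = 20 * np.log10(max(env, 1e-9))
        over = max(0.0, level_db - threshold_db)
        gain_db = -over * (1.0 - 1.0 / ratio)    # static compression curve
        out[n] = s * 10 ** (gain_db / 20.0)
    return out

fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t) * np.linspace(0.05, 1.0, fs)  # swelling tone
y = compress(x, fs)
print(f"input peak {x.max():.2f} -> output peak {y.max():.2f}")
```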
APA, Harvard, Vancouver, ISO, and other styles
14

Diaz, Leiva Juan Esteban. "Simulation-based optimization for production planning : integrating meta-heuristics, simulation and exact techniques to address the uncertainty and complexity of manufacturing systems." Thesis, University of Manchester, 2016. https://www.research.manchester.ac.uk/portal/en/theses/simulationbased-optimization-for-production-planning-integrating-metaheuristics-simulation-and-exact-techniques-to-address-the-uncertainty-and-complexity-of-manufacturing-systems(9ef8cb33-99ba-4eb7-aa06-67c9271a50d0).html.

Full text
Abstract:
This doctoral thesis investigates the application of simulation-based optimization (SBO) as an alternative to conventional optimization techniques when the inherent uncertainty and complex features of real manufacturing systems need to be considered. Inspired by a real-world production planning setting, we provide a general formulation of the situation as an extended knapsack problem. We proceed by proposing a solution approach based on single and multi-objective SBO models, which use simulation to capture the uncertainty and complexity of the manufacturing system and employ meta-heuristic optimizers to search for near-optimal solutions. Moreover, we consider the design of matheuristic approaches that combine the advantages of population-based meta-heuristics with mathematical programming techniques. More specifically, we consider the integration of mathematical programming techniques during the initialization stage of the single and multi-objective approaches as well as during the actual search process. Using data collected from a manufacturing company, we provide evidence for the advantages of our approaches over conventional methods (integer linear programming and chance-constrained programming) and highlight the synergies resulting from the combination of simulation, meta-heuristics and mathematical programming methods. In the context of the same real-world problem, we also analyse different single and multi-objective SBO models for robust optimization. We demonstrate that the choice of robustness measure and the sample size used during fitness evaluation are crucial considerations in designing an effective multi-objective model.
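The core SBO loop can be sketched in a few lines (a toy stochastic knapsack, not the thesis models; order data and noise levels are invented): a genetic algorithm proposes order selections, and each candidate's fitness is estimated by Monte Carlo simulation of uncertain processing times:

```python
# Hedged sketch of simulation-based optimization: GA for the search,
# Monte Carlo simulation for the (noisy) fitness of each candidate.
import random
random.seed(7)

orders = [(12, 4.0), (9, 3.0), (7, 2.5), (15, 6.0), (5, 1.5)]  # (profit, mean hours)
CAPACITY = 10.0

def simulate_fitness(mask, n_runs=200):
    """Expected profit; a run earns nothing if sampled load exceeds capacity."""
    total = 0.0
    for _ in range(n_runs):
        load = sum(random.gauss(h, 0.3 * h) for bit, (_, h) in zip(mask, orders) if bit)
        profit = sum(p for bit, (p, _) in zip(mask, orders) if bit)
        total += profit if load <= CAPACITY else 0.0
    return total / n_runs

def ga(pop_size=20, gens=30):
    pop = [[random.randint(0, 1) for _ in orders] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=simulate_fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        pop = survivors + [[g ^ (random.random() < 0.1)           # bit-flip mutation
                            for g in random.choice(survivors)]
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=simulate_fitness)

best = ga()
print(best, simulate_fitness(best))
```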
APA, Harvard, Vancouver, ISO, and other styles
15

Powell, Keith. "Next generation wavefront controller for the MMT adaptive optics system: Algorithms and techniques for mitigating dynamic wavefront aberrations." Diss., The University of Arizona, 2012. http://hdl.handle.net/10150/222838.

Full text
Abstract:
Wavefront controller optimization is important in achieving the best possible image quality for adaptive optics systems on the current generation of large and very large aperture telescopes. This will become even more critical when we consider the demands of the next generation of extremely large telescopes currently under development. These telescopes will be capable of providing resolution significantly greater than the current generation of optical/IR telescopes. However, reaching the full resolving potential of these instruments will require a careful analysis of all disturbance sources, then optimizing the wavefront controller to provide the best possible image quality given the desired science goals and system constraints. Along with atmospheric turbulence and sensor noise, structural vibration will play an important part in determining the overall image quality obtained. The next generation of very large aperture telescopes currently being developed will require assessing the effects of structural vibration on closed-loop AO system performance as an integral part of the overall system design. Telescope structural vibrations can seriously degrade image quality, resulting in actual spot full width at half maximum (FWHM) and angular resolution much worse than the theoretical limit. Strehl ratio can also be significantly degraded by structural vibration as energy is dispersed over a much larger area of the detector. In addition to increasing telescope diameter to obtain higher resolution, there has also been significant interest in adaptive optics systems which observe at shorter wavelengths, from the near-infrared to visible (VNIR) range, at or near 0.7 microns. This will require a significant reduction in the overall wavefront residuals compared with current systems, and will therefore make assessment and optimization of the wavefront controller even more critical for obtaining good AO system performance in the VNIR regime.
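For context, the dependence of Strehl ratio on residual RMS wavefront error that underlies these concerns is commonly summarized by the Maréchal approximation (a standard rule of thumb, not specific to this dissertation):

```latex
% Marechal approximation: Strehl ratio S for RMS residual wavefront
% error \sigma, with \sigma and the wavelength \lambda in the same units.
S \approx \exp\!\left[-\left(\frac{2\pi\sigma}{\lambda}\right)^{2}\right]
```

This makes the VNIR challenge concrete: halving the observing wavelength quadruples the exponent for the same residual error, which is why shorter-wavelength operation demands a significant reduction in overall wavefront residuals.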
APA, Harvard, Vancouver, ISO, and other styles
16

Niezen, Gerrit. "The optimization of gesture recognition techniques for resource-constrained devices." Diss., Pretoria : [s.n.], 2008. http://upetd.up.ac.za/thesis/available/etd-01262009-125121/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Donfack, Simplice. "Methods and algorithms for solving linear systems of equations on massively parallel computers." Thesis, Paris 11, 2012. http://www.theses.fr/2012PA112042.

Full text
Abstract:
Multicore processors are nowadays considered to be the future of computing, and they will have an important impact on scientific computing. In this thesis, we study methods and algorithms for efficiently solving sparse and dense large linear systems on future petascale machines, in particular those having a significant number of cores. Due to the increasing cost of communication compared to the time processors take to perform arithmetic operations, our approach embraces the communication-avoiding algorithm principle by doing some redundant computations, and uses several adaptations to achieve better performance on multicore machines. We decompose the problem to solve into several phases that are then designed or optimized separately. In the first part, we present an algorithm based on hypergraph partitioning which considerably reduces the fill-in incurred in the LU factorization of sparse unsymmetric matrices. In the second part, we present two communication-avoiding algorithms for the LU and QR factorizations that are adapted to multicore environments. The main contribution of this part is to reorganize the computations so as to reduce bus contention and use resources efficiently. We then extend this work to clusters of multicore processors. In the third part, we present a new scheduling and optimization approach. Data locality and load balancing are a serious trade-off in the choice of the scheduling strategy. On NUMA machines, for example, where data locality is not an option, we have observed that in the presence of system perturbations ("OS noise"), performance could quickly deteriorate and become difficult to predict. To overcome this bottleneck, we present an approach that combines static and dynamic scheduling to schedule the tasks of our algorithms. Our results, obtained on several architectures, show that all our algorithms are efficient and lead to significant performance gains. We achieve improvements of 30% up to 110% over the corresponding routines of well-known numerical libraries.
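The task decomposition that such scheduling operates on can be illustrated with a tiled LU factorization (a simplified sketch without pivoting on a well-conditioned matrix, not the thesis's communication-avoiding algorithms): each block operation below is an independent task a static or dynamic scheduler could place:

```python
# Hedged sketch: right-looking tiled LU without pivoting. Each tile
# operation (factor, triangular solve, update) is a schedulable task.
import numpy as np

def dense_lu(t):
    """Unblocked Doolittle LU of one tile (no pivoting)."""
    t = t.copy()
    for p in range(t.shape[0] - 1):
        t[p + 1:, p] /= t[p, p]
        t[p + 1:, p + 1:] -= np.outer(t[p + 1:, p], t[p, p + 1:])
    return t

def tiled_lu(a, bs):
    n = a.shape[0]
    for k in range(0, n, bs):
        kk = slice(k, k + bs)
        a[kk, kk] = dense_lu(a[kk, kk])                 # task: factor diagonal tile
        l_kk = np.tril(a[kk, kk], -1) + np.eye(min(bs, n - k))
        u_kk = np.triu(a[kk, kk])
        for j in range(k + bs, n, bs):                  # tasks: row-panel solves
            a[kk, j:j + bs] = np.linalg.solve(l_kk, a[kk, j:j + bs])
        for i in range(k + bs, n, bs):                  # tasks: column-panel solves
            a[i:i + bs, kk] = np.linalg.solve(u_kk.T, a[i:i + bs, kk].T).T
        for i in range(k + bs, n, bs):                  # tasks: trailing updates
            for j in range(k + bs, n, bs):
                a[i:i + bs, j:j + bs] -= a[i:i + bs, kk] @ a[kk, j:j + bs]
    return a

rng = np.random.default_rng(0)
a = rng.random((8, 8)) + 8 * np.eye(8)                  # diagonally dominant
lu = tiled_lu(a.copy(), bs=4)
l, u = np.tril(lu, -1) + np.eye(8), np.triu(lu)
print(np.allclose(l @ u, a))
```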
APA, Harvard, Vancouver, ISO, and other styles
18

Seo, Chung-Seok. "Physical Design of Optoelectronic System-on-a-Chip/Package Using Electrical and Optical Interconnects: CAD Tools and Algorithms." Diss., Available online, Georgia Institute of Technology, 2005, 2004. http://etd.gatech.edu/theses/available/etd-11102004-150844/.

Full text
Abstract:
Thesis (Ph. D.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2005.
David E. Schimmel, Committee Member ; C.P. Wong, Committee Member ; John A. Buck, Committee Member ; Abhijit Chatterjee, Committee Chair ; Madhavan Swaminathan, Committee Member. Vita. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
19

Hadj, Salem Khadija. "Optimisation du fonctionnement d'un générateur de hiérarchies mémoires pour les systèmes de vision embarquée." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM023/document.

Full text
Abstract:
This thesis focuses on the application of Operations Research (OR) methodology to design new optimization algorithms enabling low-cost and efficient embedded vision systems, or more generally devices for multimedia applications such as signal and image processing. The design of embedded vision systems faces the "Memory Wall" challenge regarding the high latency of memories holding big image data. For the case of non-linear image accesses, one solution has been proposed by Mancini et al. (Proc. DATE 2012) in the form of a software tool, called Memory Management Optimization (MMOpt), that creates ad-hoc memory hierarchies for such treatments. It produces a circuit called a Tile Processing Unit (TPU) that contains the circuit for the treatment. In this context, we address the optimization challenge set by the efficient operation of the circuits produced by MMOpt, in order to enhance the three main electronic design characteristics: the energy consumption, performance and size/production cost of the circuit. This electronic problem is formalized as a 3-objective scheduling problem, called the 3-objective Process Scheduling and Data Prefetching Problem (3-PSDPP), reflecting the three main electronic design characteristics under consideration. To the best of our knowledge, this problem has not been studied before in the OR literature. A review of the state of the art, including the previous work proposed by Mancini et al. (Proc. DATE, 2012) as well as a brief overview of related problems found in the OR literature, is then made. In addition, the complexity of some of the mono-objective sub-problems of 3-PSDPP is established. Several resolution approaches, including exact methods (ILP) and polynomial constructive heuristics, are then proposed. Finally, the performance of these methods is compared, on benchmarks available in the literature as well as those provided by Mancini et al. (Proc. DATE, 2012), against the one currently in use in the MMOpt tool. The results show that our algorithms perform well in terms of computational efficiency and solution quality. They present a promising track to optimize the performance of the TPUs produced by MMOpt. However, since the needs of the MMOpt tool's user are contradictory, such as low cost, low energy and high performance, it is difficult to find a unique solution that simultaneously optimizes the three criteria under consideration. A set of good compromise solutions between these three criteria is provided, from which the MMOpt user can then choose the solution that best fits his needs.
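In the spirit of 3-PSDPP, here is a toy greedy heuristic (hypothetical, not one of the thesis's algorithms; the task/tile data is invented) that orders tasks to reuse already-buffered input tiles, jointly trimming prefetches (energy), steps (time) and buffers (area):

```python
# A hedged toy sketch: schedule the task needing the fewest new tiles
# next, so prefetch traffic and buffer growth stay small. No eviction
# is modeled, which keeps the sketch far simpler than real 3-PSDPP.
def greedy_schedule(tasks):
    """tasks: dict task -> set of required input tiles."""
    remaining = dict(tasks)
    buffered, order, prefetches = set(), [], 0
    while remaining:
        task = min(remaining, key=lambda t: len(remaining[t] - buffered))
        new_tiles = remaining.pop(task) - buffered
        prefetches += len(new_tiles)
        buffered |= new_tiles
        order.append(task)
    return order, prefetches, len(buffered)

tasks = {"t1": {1, 2, 3}, "t2": {2, 3, 4}, "t3": {7, 8}, "t4": {3, 4, 5}}
order, prefetches, buffers = greedy_schedule(tasks)
print(order, "prefetches:", prefetches, "buffers:", buffers)
```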
APA, Harvard, Vancouver, ISO, and other styles
20

Karásek, Jan. "Vysokoúrovňové objektově orientované genetické programování pro optimalizaci logistických skladů." Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2014. http://www.nusl.cz/ntk/nusl-233624.

Full text
Abstract:
This dissertation focuses on the optimization of work operations in logistics warehouses and distribution centers. The main goal is to optimize the planning, scheduling and dispatching processes. Since the problem belongs to the NP-hard complexity class, finding an optimal solution is computationally very demanding. The motivation for this work is to fill the imaginary gap between the methods investigated in scientific and academic settings and the methods used in commercial production environments. The core of the optimization algorithm is based on genetic programming driven by a context-free grammar. The main contributions of this work are: a) to propose a new optimization algorithm that respects the following optimization criteria: total processing time, resource utilization, and the congestion of warehouse aisles that may occur during task processing; b) to analyze historical data from warehouse operations and develop a set of test cases that can serve as reference results for further research; and c) to attempt to outperform the reference results achieved by a qualified and trained operations manager of one of the largest warehouses in Central Europe.
APA, Harvard, Vancouver, ISO, and other styles
21

Ouni, Bassem. "Caractérisation, modélisation et estimation de la consommation d'énergie à haut-niveau des OS embarqués." Phd thesis, Université Nice Sophia Antipolis, 2013. http://tel.archives-ouvertes.fr/tel-01059814.

Full text
Abstract:
Energy consumption has become a major problem in system design, both in terms of circuit reliability and of the autonomy of embedded equipment. This thesis aims to characterize and model the energy cost of an embedded operating system (OS) in order to explore low-power solutions. The first contribution consists in defining a global approach for modeling the consumption of the basic services of the OS: the execution of these services, such as context switching, scheduling and interprocess communication, is stimulated through suitable test programs. Based on energy consumption measurements on the OMAP35x EVM board, relevant hardware and software parameters were identified to derive consumption models. In a second step, these parameters must be taken into account at the highest level of design. The objective is to exploit the features offered by AADL, an architecture analysis and design language, while modeling the software and hardware aspects in order to estimate energy consumption. The OS energy models were then integrated into STORM, a multiprocessor scheduling-policy simulator, in order to quantify the OS consumption under scheduling policies that implement consumption-reduction techniques such as DVFS and DPM. Finally, a number of real-time and energy constraints were specified and verified with constraint specification languages (QAML, RDAL).
APA, Harvard, Vancouver, ISO, and other styles
22

Vacher, Blandine. "Techniques d'optimisation appliquées au pilotage de la solution GTP X-PTS pour la préparation de commandes intégrant un ASRS." Thesis, Compiègne, 2020. http://www.theses.fr/2020COMP2566.

Full text
Abstract:
The work presented in this PhD thesis deals with optimization problems in the context of internal warehouse logistics. The field is subject to strong competition and extensive growth, driven by the growing needs of the market and favored by automation. SAVOYE builds warehouse storage and handling equipment and offers its own GTP (Goods-To-Person) solution for order picking. The solution uses an Automated Storage and Retrieval System (ASRS) called X-Picking Tray System (X-PTS) and automatically routes loads to workstations via carousels to perform sequenced operations. It is a highly complex system of systems with many applications for operational research techniques. All this defines the applicative and theoretical scope of the work carried out in this thesis. We first dealt with a specific Job Shop scheduling problem with precedence constraints. The particular context of this problem allowed us to solve it in polynomial time with exact algorithms. These algorithms made it possible to calculate the injection schedule of the loads coming from the different storage output streams so that they aggregate on a carousel in a given order. Thus, the inter-aisle management of the X-PTS storage was improved and the throughput of the load flow was maximized, from the storage to a station. In the sequel of this work, the LSD (Least Significant Digit) radix sort algorithm was studied and a dedicated online sorting algorithm was developed; the latter is used to drive autonomous sorting systems called Buffer Sequencers (BS), which are placed upstream of each workstation in the GTP solution. Finally, a sequencing problem was considered, consisting of finding a linear extension of a partial order that minimizes a distance to a given order. An integer linear programming approach, different variants of dynamic programming and greedy algorithms were proposed to solve it. An efficient heuristic was developed based on iterative calls of dynamic programming routines, reaching a solution close or equal to the optimum in a very short time. The application of this problem to the unordered output streams of X-PTS storage allows pre-sorting at the carousel level. The various solutions developed have been validated by simulation, and some have been patented and/or already implemented in warehouses.
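For reference, the classical LSD radix sort studied in the thesis, in its textbook form for non-negative integers (the thesis's online variant driving the Buffer Sequencers is not reproduced here):

```python
# Textbook LSD radix sort: one stable bucket pass per digit, starting
# from the least significant digit.
def radix_sort_lsd(values, base=10):
    if not values:
        return values
    digits = 1
    while max(values) >= base ** digits:
        digits += 1
    for d in range(digits):
        buckets = [[] for _ in range(base)]
        for v in values:
            buckets[(v // base ** d) % base].append(v)
        values = [v for bucket in buckets for v in bucket]
    return values

print(radix_sort_lsd([170, 45, 75, 90, 802, 24, 2, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]
```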
APA, Harvard, Vancouver, ISO, and other styles
23

Kroetz, Marcel Giovani. "Sistema de apoio na inspeção radiográfica computadorizada de juntas soldadas de tubulações de petróleo." Universidade Tecnológica Federal do Paraná, 2012. http://repositorio.utfpr.edu.br/jspui/handle/1/509.

Full text
Abstract:
Petrobras
Weld bead radiographic inspection is the activity of meticulously observing radiographic images, looking for small defects and discontinuities in welded joints that can compromise the mechanical resistance of those joints. Like any other activity that requires constant attention, weld bead inspection is error prone due to visual fatigue, repetition and other distractions inherent to the activity. In this work, two new methodologies for assisting the inspection activity are presented: the automatic detection of the weld bead and the highlighting of weld bead discontinuities. These, among other functionalities, are included in a complete software solution for assisting weld bead inspection, including macro programming to automate the most common image-processing routines and subsequently process batches of images automatically. The results of automatic weld bead detection are more than satisfactory, detecting weld beads produced by all the usual radiographic techniques. As for the discontinuity highlighting results, although they are not yet suited for fully unsupervised weld bead inspection, the correlation between highlight intensity and the probability of the presence of a discontinuity makes them a helpful tool in weld bead inspection. In conclusion, the proposed methodologies, combined with a fully featured interactive software solution, contribute greatly to the weld bead inspection activity, with a decreased error rate due to visual fatigue and better overall performance thanks to the automation of the most common procedures involved in this activity.
APA, Harvard, Vancouver, ISO, and other styles
24

Chen, Shih-Chang, and 陳世璋. "Developing GEN_BLOCK Redistribution Algorithms and Optimization Techniques on Parallel, Distributed and Multi-Core Systems." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/31998536558283534640.

Full text
Abstract:
Doctorate
Chung Hua University
Ph.D. Program in Engineering Science
99 (ROC academic year; 2010)
Parallel computing systems have been used to solve complex scientific problems over aggregates of data, such as arrays, that extend sequential programming languages. With improvements in hardware architecture, a parallel system can be a single cluster, multiple clusters, or a multi-cluster of multi-core machines. Under this paradigm, appropriate data distribution is critical to the performance of each phase in a multi-phase program. Because the phases of a program differ from one another, the optimal distribution changes with the characteristics of each phase as well as those of the following phase. Data redistribution at runtime is therefore essential for good load balancing, improved data locality, and reduced inter-processor communication. This study proposes formulas for message generation, three scheduling algorithms (for a single cluster, for multiple clusters, and for a multi-cluster system with multi-core machines), and a power saving technique for GEN_BLOCK redistribution. The message generation formulas supply the source, destination, and data information that the scheduling algorithms need before they can produce effective results; each node can evaluate the formulas simply, efficiently, and independently. The scheduling algorithm for a single cluster targets heterogeneous environments: it guarantees a minimal number of schedule steps and also shortens communication cost. Multi-cluster computing adds complex networks and heterogeneous processors to GEN_BLOCK redistribution; to fit this architecture, a second scheduling algorithm classifies inter-cluster transmissions into three types and schedules the transmissions inside a node together to avoid synchronization delay, yielding better communication cost. When multi-core machines become part of a parallel system, existing scheduling algorithms can no longer be assumed to deliver good performance, and they do not take efficient power saving into account. Four kinds of transmission time are therefore defined for messages to increase scheduling efficiency, and while the proposed scheduling algorithm runs, a power saving technique evaluates the voltage of each core to save energy on this complicated system.
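For intuition about what message generation must produce, the sketch below computes the messages of a GEN_BLOCK redistribution as intersections between source and destination index intervals. The block sizes are invented, and the dissertation's closed-form formulas let each node derive its own portion directly, whereas this illustrative version simply scans all pairs.

```python
# Illustrative GEN_BLOCK message generation: a (source, destination) pair
# exchanges one message whenever their index intervals overlap. Block sizes
# below are invented; the dissertation's closed-form formulas avoid the
# pairwise scan used here.
from itertools import accumulate

def gen_block_messages(src_sizes, dst_sizes):
    """Return (src_rank, dst_rank, start_index, length) for every message."""
    src = [0] + list(accumulate(src_sizes))      # prefix sums -> interval bounds
    dst = [0] + list(accumulate(dst_sizes))
    msgs = []
    for i in range(len(src_sizes)):
        for j in range(len(dst_sizes)):
            lo = max(src[i], dst[j])             # intersect source interval i
            hi = min(src[i + 1], dst[j + 1])     # with destination interval j
            if lo < hi:                          # non-empty overlap -> message
                msgs.append((i, j, lo, hi - lo))
    return msgs

# Redistribute 100 elements from GEN_BLOCK [40, 35, 25] to [20, 50, 30]:
print(gen_block_messages([40, 35, 25], [20, 50, 30]))
# [(0, 0, 0, 20), (0, 1, 20, 20), (1, 1, 40, 30), (1, 2, 70, 5), (2, 2, 75, 25)]
```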
APA, Harvard, Vancouver, ISO, and other styles
25

Tsai, Allan Yingming. "Advanced Techniques for High-Throughput Cellular Communications." Thesis, 2018. https://doi.org/10.7916/D88K8NMB.

Full text
Abstract:
Next generation wireless communication systems require ubiquitous high-throughput mobile connectivity under a range of challenging network settings (urban versus rural, high device density, mobility, etc.). Physical layer design is of great importance in improving system performance. Previous research on the physical layer includes: a) highly directional transmissions that enhance throughput and spatial reuse; b) enhanced MIMO that eliminates contention, enabling a linear increase of capacity with the number of antennas; c) mmWave technologies that operate over GHz of bandwidth to offer substantially higher throughput; d) better cooperative spectrum sharing with cognitive radios; e) better multiple access methods that mitigate multiuser interference and support more simultaneous users. This dissertation addresses several techniques in the physical layer design of next generation wireless communication systems. In chapter two, an orthogonal frequency division multiplexing with code division multiple access (OFDM-CDMA) system is proposed, and a polyphase code is used to improve multiple access performance while keeping the OFDM signal within the peak-to-average power ratio (PAPR) constraint. Chapter three studies I/Q imbalance in direct down-conversion: for wideband transmitters and receivers that use direct conversion for I/Q sampling, I/Q imbalance becomes a critical issue, since greater imbalance causes greater degradation of quadrature amplitude modulation and severely reduces throughput. Chapter four investigates a spectrum sharing problem for cognitive wideband communication; an energy-efficient sub-Nyquist sampling algorithm is developed for optimal sampling and spectrum sensing. In chapter five, we study channel estimation for millimeter wave full-dimensional MIMO communication; the problem is formulated as an atomic-norm minimization, and algorithms are derived for channel estimation in different situations. Throughout the thesis, mathematical optimization is the main approach used to analyze and solve problems in the physical layer of wireless communication so that high throughput is achieved. The algorithms are derived alongside theoretical analysis and validated with numerical results.
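As an aside on the PAPR constraint, the sketch below estimates the PAPR of an OFDM symbol whose subcarriers carry a constant-amplitude polyphase (Zadoff-Chu) sequence; the sequence length, root, and oversampling factor are assumptions for illustration, not the dissertation's exact code design.

```python
# Hedged sketch: PAPR of an OFDM symbol whose subcarriers carry a
# constant-amplitude Zadoff-Chu polyphase sequence. Length, root, and
# oversampling factor are illustrative assumptions.
import numpy as np

def zadoff_chu(n, root=1):
    """Length-n Zadoff-Chu sequence (n odd, gcd(root, n) == 1)."""
    k = np.arange(n)
    return np.exp(-1j * np.pi * root * k * (k + 1) / n)

def papr_db(subcarriers, oversample=4):
    """PAPR (dB) of the oversampled time-domain OFDM waveform."""
    n = subcarriers.size
    padded = np.concatenate([subcarriers, np.zeros((oversample - 1) * n)])
    x = np.fft.ifft(padded)              # zero-padding interpolates in time
    p = np.abs(x) ** 2
    return 10.0 * np.log10(p.max() / p.mean())

print(f"Zadoff-Chu PAPR: {papr_db(zadoff_chu(63)):.2f} dB")
```

Because polyphase sequences are constant-amplitude, the time-domain envelope stays nearly flat, so the printed PAPR is well below what random QAM subcarrier loads typically produce.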
APA, Harvard, Vancouver, ISO, and other styles
26

Leke, Collins Achepsah. "Empirical evaluation of optimization techniques for classification and prediction tasks." Thesis, 2014. http://hdl.handle.net/10210/9858.

Full text
Abstract:
M.Ing. (Electrical and Electronic Engineering)
Missing data is an issue that leads to a variety of problems in the analysis and processing of datasets in almost every aspect of day-to-day life, and for this reason missing data, and ways of handling it, have been an active research area across several disciplines in recent times. This thesis presents a method aimed at approximating missing values in a dataset using Genetic Algorithms (GA), Simulated Annealing (SA), Particle Swarm Optimization (PSO), Random Forests (RF), and Negative Selection (NS) in combination with auto-associative neural networks, and provides a comparative analysis of these algorithms. The suggested methods use the optimization algorithms to minimize an error function derived from training an auto-associative neural network, during which the interrelationships between the inputs and the outputs are captured in the weights connecting the layers of the network. The error function is the square of the difference between the actual observations and the values predicted by the auto-associative neural network. When data are missing, not all values of the actual observations are known, so the error function is decomposed into parts that depend on the known and unknown variable values. A multilayer perceptron (MLP) neural network trained with the scaled conjugate gradient (SCG) method is employed. The research focuses primarily on predicting missing entries in two datasets, the Manufacturing dataset and the Forest Fire dataset; prediction is a representation of how things will occur in the future based on past occurrences and experiences. The research also investigates the use of the proposed technique in approximating and classifying missing data from five classification datasets (Australian Credit, German Credit, Japanese Credit, Heart Disease, and Car Evaluation), as well as the impact of different neural network architectures on training and on the approximations found for the missing values, using the best architecture for evaluation. The research reveals that the values approximated by the proposed models are accurate: the correlation between the actual missing values and the corresponding approximations on the Manufacturing dataset ranges between 94.7% and 95.2%, with the exception of the Negative Selection algorithm, which yielded a correlation coefficient of 49.6%. On the Forest Fire dataset, the correlation between actual and approximated values was low, in the range 0.95% to 4.49%, owing to the nature of the variable values in that dataset; the Negative Selection algorithm showed a fully negative correlation of 100% between actual and approximated values. The approximations found for missing data also depend on the particular neural network architecture employed in training. Further analysis revealed that the Random Forest algorithm on average outperformed the GA, SA, PSO, and NS algorithms, yielding the lowest Mean Square Error, Root Mean Square Error, and Mean Absolute Error values.
At the other end of the scale, the NS algorithm produced the highest values for the three error metrics (for these metrics, lower values indicate better performance). Evaluation on the classification datasets revealed that the most accurate algorithm for assigning a new observation to one of a set of categories on the basis of the training data is the Random Forest algorithm, which yielded the highest AUC percentages on all five classification datasets. The differences between its AUC values and those of the GA, SA, PSO, and NS algorithms were statistically significant, most markedly when the Random Forest AUC values were compared to those of the Negative Selection algorithm on all five datasets. The GA, SA, and PSO algorithms produced AUC values that differed little from one another across the five datasets. Overall, the algorithm that performed best on both the prediction and classification problems was the Random Forest algorithm; at the other end of the scale was the Negative Selection algorithm, which produced the highest error metric values for the prediction problems and the lowest AUC values for the classification problems.
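The estimation loop the abstract describes can be pictured with a short sketch: a particle swarm searches over the unknown entries of a record so as to minimize the squared reconstruction error of an auto-associative network. Everything concrete below is assumed for illustration, including the random, untrained network standing in for the SCG-trained MLP and the PSO constants.

```python
import numpy as np

rng = np.random.default_rng(0)

# Untrained random network standing in for the SCG-trained MLP (assumption).
W1 = rng.normal(size=(6, 8))
W2 = rng.normal(size=(8, 6))

def autoencoder(x):
    """Stand-in auto-associative network: 6 inputs -> 6 reconstructed outputs."""
    return np.tanh(x @ W1) @ W2

def reconstruction_error(record, missing_idx, guess):
    x = record.copy()
    x[missing_idx] = guess                    # substitute candidate values
    return float(np.sum((x - autoencoder(x)) ** 2))

def impute_pso(record, missing_idx, particles=20, iters=100):
    """Search the unknown entries minimizing the autoencoder error (basic PSO)."""
    d = len(missing_idx)
    pos = rng.uniform(-1.0, 1.0, (particles, d))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([reconstruction_error(record, missing_idx, p) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, particles, d))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([reconstruction_error(record, missing_idx, p) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest

record = rng.normal(size=6)                   # record with entries 1 and 4 "missing"
print(impute_pso(record, missing_idx=[1, 4])) # PSO estimates for the two entries
```

Swapping the PSO loop for a GA, SA, RF, or NS search while keeping the same error function is exactly the kind of comparison the thesis carries out.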
APA, Harvard, Vancouver, ISO, and other styles
27

Kommaraju, Ananda Varadhan. "Designing Energy-Aware Optimization Techniques through Program Behaviour Analysis." Thesis, 2014. http://hdl.handle.net/2005/3133.

Full text
Abstract:
Green computing techniques aim to reduce the power footprint of modern embedded devices, with particular emphasis on processors, the power hot-spots of these devices. In this thesis we propose compiler-driven and profile-driven optimizations that reduce power consumption in a modern embedded processor; we show that these optimizations reduce power consumption in functional units and memory subsystems with very low performance loss. We present three new techniques: transition aware scheduling, leakage reduction in data caches using criticality analysis, and dynamic power reduction in data caches using locality analysis of data regions. The novel instruction scheduling technique, transition aware scheduling, addresses leakage power consumption in functional units and is motivated by the idle periods that arise in functional unit utilization during program execution: a sufficiently long idle period can be exploited to place the unit in a low power state. The scheduling algorithm lengthens idle periods without hampering performance and drives power gating during these periods. A power model parameterized by idle cycles shows that this technique saves up to 25% of leakage power with very low performance impact. In modern embedded programs, data regions can be classified as critical or non-critical, where critical data regions significantly impact performance. A new profiling technique identifies such regions and, together with a new criticality-based cache policy, controls the power state of the data cache: non-critical data regions are allocated to low-power cache regions, reducing leakage power consumption by up to 40% without compromising performance. The profiling technique is further extended to classify data regions by locality, since some regions exhibit high data reuse and others little. A locality-based cache policy driven by cache parameters such as size and associativity then reduces both dynamic and static power consumption in the cache subsystem, cutting 25% of the total power consumed in the data caches without lengthening execution time. In this thesis, the power consumption of a program is decoupled from the number of processor cores: the underlying architecture model is simplified to abstract away a variety of processor scenarios and can be scaled up to various multi-core architecture models such as Chip Multi-Processors, Simultaneous Multi-Threaded Processors, and Chip Multi-Threaded Processors, to name a few. The three proposed techniques leverage underlying hardware features such as low power functional units, drowsy caches, and split data caches, and they reduce the power consumption of a wide range of benchmarks with low performance loss.
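A toy model of the idle-period argument behind transition aware scheduling is sketched below: power gating pays off only when an idle run exceeds a break-even length, so a scheduler that merges many short idle cycles into fewer long runs converts unusable slack into leakage savings. The break-even threshold and the trace are invented numbers, not figures from the thesis.

```python
# Toy leakage model behind transition aware scheduling: an idle run is
# power-gated only if it is at least `breakeven` cycles long, so short,
# scattered idle cycles keep leaking. Numbers are invented for illustration.
def gated_leakage_cycles(busy_trace, breakeven=5):
    """Leakage-burning idle cycles that remain after power gating."""
    leak, run = 0, 0
    for busy in busy_trace + [True]:     # trailing sentinel flushes last run
        if not busy:
            run += 1                     # extend the current idle run
        else:
            leak += run if run < breakeven else 0   # long runs are gated off
            run = 0
    return leak

trace = [True, False, False, True] + [False] * 12 + [True]
print(trace.count(False), "->", gated_leakage_cycles(trace))   # 14 -> 2
```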
APA, Harvard, Vancouver, ISO, and other styles
28

Nagpal, Rahul. "Compiler-Assisted Energy Optimization For Clustered VLIW Processors." Thesis, 2008. http://hdl.handle.net/2005/684.

Full text
Abstract:
Clustered architecture processors are preferred for embedded systems because centralized register file architectures scale poorly in terms of clock rate, chip area, and power consumption. Although clustering helps by improving clock speed, reducing the energy consumption of the logic, and simplifying the design, it introduces extra overhead in the form of inter-cluster communication. This communication happens over long wires with high load capacitance, which delays execution and consumes significant energy; it also introduces many short idle cycles, thereby significantly increasing the overall leakage energy consumption in the functional units. The trend toward miniaturization of devices (and the associated reduction in threshold voltage) makes energy consumption in interconnects and functional units even worse and limits the usability of clustered architectures at smaller technology nodes. In the past, architecture-level study of leakage energy management has mostly focused on storage structures such as caches; relatively little work has addressed architecture-level leakage energy management in functional units in the context of superscalar processors, or energy-efficient scheduling in the context of VLIW architectures. In the absence of any high-level model for interconnect energy estimation, research on interconnects has primarily aimed at reducing communication latency and evaluating inter-cluster communication models. To the best of our knowledge, no prior work has targeted energy efficiency for clustered VLIW architectures with a specific focus on smaller technologies. Technological advancements now permit the design of interconnects and functional units with varying performance and power modes. In this thesis we propose scheduling algorithms that aggregate the scheduling slack of instructions and the communication slack of data values to exploit the low power modes of interconnects and functional units. We also propose a high-level model for estimating interconnect delay and energy (in contrast to the low-level, circuit-level models proposed earlier) that makes it possible to carry out architectural and compiler optimizations specifically targeting the interconnect. Finally, we present a synergistic combination of these algorithms that simultaneously saves energy in functional units and interconnects, improving the usability of clustered architectures by achieving better overall energy-performance trade-offs. Our compiler-assisted leakage energy management scheme for functional units reduces their energy consumption by approximately 15% and 17% for 2-clustered and 4-clustered VLIW architectures respectively, with negligible performance degradation over and above that offered by a hardware-only scheme. The interconnect energy optimization scheme improves interconnect energy consumption on average by 41% and 46% for 2-clustered and 4-clustered machines respectively, with 2% and 1.5% performance degradation. The combined scheme obtains slightly better energy savings in the functional units, and 37% and 43% energy savings in the interconnect, with slightly higher performance degradation.
Even with conservative estimates of the contribution of functional units and interconnects to overall processor energy consumption, the proposed combined scheme obtains, on average, 8% and 10% improvements in overall energy-delay product, with 3.5% and 2% performance degradation, for 2-clustered and 4-clustered machines respectively. We present a detailed experimental evaluation of the proposed schemes using the Trimaran compiler infrastructure.
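To illustrate the slack-aggregation idea in miniature, the sketch below assigns each inter-cluster transfer the cheapest interconnect mode whose added latency still fits within that transfer's communication slack; the two modes and their latency and energy figures are invented for the example, not the dissertation's measured values.

```python
# Invented two-mode interconnect: each transfer takes the cheapest mode
# whose extra latency fits inside the transfer's communication slack.
MODES = [
    ("fast", 0, 1.00),        # (name, extra cycles, relative energy)
    ("low-power", 2, 0.55),
]

def assign_modes(transfers):
    """transfers: [(name, slack_cycles)] -> [(name, chosen_mode)]."""
    plan = []
    for name, slack in transfers:
        feasible = [m for m in MODES if m[1] <= slack]   # latency fits slack
        mode = min(feasible, key=lambda m: m[2])         # cheapest energy
        plan.append((name, mode[0]))
    return plan

print(assign_modes([("a->b", 0), ("c->d", 3), ("e->f", 5)]))
# [('a->b', 'fast'), ('c->d', 'low-power'), ('e->f', 'low-power')]
```

The scheduling algorithms in the thesis go further by actively aggregating slack so that more transfers qualify for the low-power mode.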
APA, Harvard, Vancouver, ISO, and other styles