
Dissertations / Theses on the topic 'Implementation and Optimization'

Consult the top 50 dissertations / theses for your research on the topic 'Implementation and Optimization.'

1

Bruccoleri, Christian. "Flower constellation optimization and implementation." College Station, Tex.: Texas A&M University, 2007. http://hdl.handle.net/1969.1/ETD-TAMU-2404.

2

Jerez, Juan Luis. "Custom optimization algorithms for efficient hardware implementation." Thesis, Imperial College London, 2013. http://hdl.handle.net/10044/1/12791.

Abstract:
The focus is on real-time optimal decision making with application in advanced control systems. These computationally intensive schemes, which involve the repeated solution of (convex) optimization problems within a sampling interval, require more efficient computational methods than currently available for extending their application to highly dynamical systems and setups with resource-constrained embedded computing platforms. A range of techniques is proposed to exploit synergies between digital hardware, numerical analysis and algorithm design. These techniques build on top of parameterisable hardware code generation tools that generate VHDL code describing custom computing architectures for interior-point methods and a range of first-order constrained optimization methods. Since memory limitations are often important in embedded implementations, we develop a custom storage scheme for KKT matrices arising in interior-point methods for control, which reduces memory requirements significantly and prevents I/O bandwidth limitations from affecting the performance in our implementations. To take advantage of the trend towards parallel computing architectures, and to exploit the special characteristics of our custom architectures, we propose several high-level parallel optimal control schemes that can reduce computation time. A novel optimization formulation was devised for reducing the computational effort in solving certain problems independently of the computing platform used. In order to be able to solve optimization problems in fixed-point arithmetic, which is significantly more resource-efficient than floating-point, tailored linear algebra algorithms were developed for solving the linear systems that form the computational bottleneck in many optimization methods. These methods come with guarantees for reliable operation. We also provide finite-precision error analysis for fixed-point implementations of first-order methods that can be used to minimize the use of resources while meeting accuracy specifications. The suggested techniques are demonstrated on several practical examples, including a hardware-in-the-loop setup for optimization-based control of a large airliner.
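As an illustrative aside on the fixed-point arithmetic mentioned in this abstract, the sketch below shows the basic idea of storing values as scaled integers so that a dot product — the core of many linear algebra kernels — needs only integer operations. This is a toy Q16.16 illustration, not the thesis's actual design or number format.

```python
# Toy Q16.16 fixed-point illustration (assumed format, purely for intuition):
# values are stored as integers scaled by 2**16, so arithmetic is integer-only.
FRAC_BITS = 16

def to_fix(x):
    return int(round(x * (1 << FRAC_BITS)))

def from_fix(x):
    return x / (1 << FRAC_BITS)

def fix_dot(a, b):
    """Dot product of two Q16.16 vectors, rescaling after each product."""
    return sum((ai * bi) >> FRAC_BITS for ai, bi in zip(a, b))

u = [to_fix(v) for v in (0.5, -1.25, 3.0)]
v = [to_fix(w) for w in (2.0, 0.4, -0.1)]
print(from_fix(fix_dot(u, v)))  # ~ 0.5*2 - 1.25*0.4 + 3*(-0.1) = 0.2
```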
3

Irshad, Saba, and Purna Chandra Nepal. "Rise Over Thermal Estimation Algorithm Optimization and Implementation." Thesis, Blekinge Tekniska Högskola, Sektionen för ingenjörsvetenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4023.

Abstract:
The uplink load for the scheduling of Enhanced-Uplink (E-UL) channels determines the achievable data rate for Wideband Code Division Multiple Access (WCDMA) systems; its accurate measurement is therefore of prime significance. The uplink load, also known as Rise-over-Thermal (RoT), is the quotient of the Received Total Wideband Power (RTWP) and the thermal noise power floor. It is a major parameter, calculated at each Transmission Time Interval (TTI), for maintaining cell coverage and stability. The RoT algorithm for evaluating the uplink load is considered one of the most complex and resource-demanding among the several Radio Resource Management (RRM) algorithms running in a radio system. The main focus of this thesis is to study the RoT algorithm presently deployed in radio units and its possible optimization by reducing the complexity of the algorithm in terms of memory usage and processing power. The calculation of RoT comprises three main blocks: a Kalman filter, a noise floor estimator and the RoT computation. After analyzing the complexity of each block, it was established that the noise floor estimator block consumes most of the processing power and produces the peak processor load, since it involves many complex floating-point calculations; the other blocks do not affect the processing load significantly. It was also observed that some block updates can be reduced in order to decrease the average load on the processor. Three techniques are proposed for reducing the complexity of the RoT algorithm: two for the reduction of peak load and one for the reduction of average load. For reducing the peak load, an interpolation approach is used instead of performing transcendental mathematical calculations. Also, the calculations involving noise floor estimation are extended over several TTIs, since the estimation is not time critical. For the reduction of average load, the update rate of the Kalman filter block is reduced. Based on these optimization steps, a modified algorithm for RoT computation with reduced complexity is proposed. The proposed changes are tested by means of MATLAB simulations, demonstrating improved performance with consistent output results. Finally, an arithmetic operation count is done using the hardware manual of the PowerPC (PPC405) used in Platform 4, which gives a rough estimate of the percentage decrease in calculations after optimization.
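As a rough illustration of the quotient and the table-lookup idea described in this abstract (a sketch under assumed units, not the thesis's deployed code), the RoT value in dB can be computed per TTI without a transcendental call by interpolating a pre-computed table:

```python
import numpy as np

# Pre-computed interpolation table for 10*log10(x); the lookup replaces the
# transcendental call at every TTI, mirroring the peak-load reduction idea.
_ratios = np.linspace(1.0, 1000.0, 4096)
_db_table = 10.0 * np.log10(_ratios)

def rot_db(rtwp_watts, noise_floor_watts):
    """Rise-over-Thermal in dB via table interpolation (illustrative only)."""
    ratio = rtwp_watts / noise_floor_watts
    return float(np.interp(ratio, _ratios, _db_table))

# Example: RTWP of 2e-13 W over a 1e-14 W (-110 dBm) noise floor -> ~13 dB.
print(rot_db(2e-13, 1e-14))
```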
4

Karlsson, Victor, and Susanna Olsson. "Implementation of dynamic route optimization - drivers and barriers." Thesis, Linköpings universitet, Kommunikations- och transportsystem, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-150134.

Abstract:
Svevia is a company working with the construction, maintenance and operation of infrastructure. It is currently testing and developing a new system called dynamic route optimization (DynOpt) in cooperation with B and M Systemutveckling. This system can contribute to a series of improvements for the company, such as automating certain processes, determining demand at a more local level, and creating dynamically optimized routes that best handle the determined local demands. Svevia sees great potential for profit with the system and therefore has an interest in decision support regarding what may or may not be problematic during a potential implementation of such a system. This report presents a case study of the effects that DynOpt can have on its future users and the impact such effects might have on Svevia, with the goal of determining the drivers and barriers of DynOpt related to soft parameters. The soft parameters in this case study relate to how the changes DynOpt entails may affect the users and why they may or may not desire or be willing to accept these changes. The method used to establish these drivers and barriers is to first gather information from the users through interviews and surveys, which is then analyzed in order to determine what advantages and disadvantages the users see in the system. The second step of the data processing consists of executing a SWOT analysis. The strategic effects are determined through consultation with personnel with insight into the project. Lastly, the remaining significant results are converted into drivers and barriers by first eliminating the information that does not translate into any driver or barrier and then grouping results describing similar effects. Eleven drivers and six barriers are identified. One example is the driver that the potential future users' interest in technology may ease the implementation, since DynOpt is a technological implementation. One of the barriers, on the other hand, is that the total driving distance will be reduced through optimization, which results in less available work for the chauffeurs, worsens their working conditions and can create resistance to the implementation. These drivers and barriers, together with a discussion, constitute the final result of the report and describe aspects that may hinder or ease a successful implementation of DynOpt.
5

Banjac, Goran. "Operator splitting methods for convex optimization : analysis and implementation." Thesis, University of Oxford, 2018. https://ora.ox.ac.uk/objects/uuid:17ac73af-9fdf-4cf6-a946-3048da3fc9c2.

Abstract:
Convex optimization problems are a class of mathematical problems which arise in numerous applications. Although interior-point methods can in principle solve these problems efficiently, they may become intractable for solving large-scale problems or be unsuitable for real-time embedded applications. Iterations of operator splitting methods are relatively simple and computationally inexpensive, which makes them suitable for these applications. However, some of their known limitations are slow asymptotic convergence, sensitivity to ill-conditioning, and inability to detect infeasible problems. The aim of this thesis is to better understand operator splitting methods and to develop reliable software tools for convex optimization. The main analytical tool in our investigation of these methods is their characterization as the fixed-point iteration of a nonexpansive operator. The fixed-point theory of nonexpansive operators has been studied for several decades. By exploiting the properties of such an operator, it is possible to show that the alternating direction method of multipliers (ADMM) can detect infeasible problems. Although ADMM iterates diverge when the problem at hand is unsolvable, the differences between subsequent iterates converge to a constant vector which is also a certificate of primal and/or dual infeasibility. Reliable termination criteria for detecting infeasibility are proposed based on this result. Similar ideas are used to derive necessary and sufficient conditions for linear (geometric) convergence of an operator splitting method and a bound on the achievable convergence rate. The new bound turns out to be tight for the class of averaged operators. Next, the OSQP solver is presented. OSQP is a novel general-purpose solver for quadratic programs (QPs) based on ADMM. The solver is very robust, is able to detect infeasible problems, and has been extensively tested on many problem instances from a wide variety of application areas. Finally, operator splitting methods can also be effective in nonconvex optimization. The developed algorithm significantly outperforms a common approach based on convex relaxation of the original nonconvex problem.
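OSQP, the solver presented in this abstract, is distributed as an open-source package. A minimal sketch of its Python interface on a deliberately infeasible QP shows the infeasibility detection described above (interface details may vary between versions):

```python
import numpy as np
import scipy.sparse as sp
import osqp

# A tiny QP whose constraints (x >= 1 and x <= 0) are contradictory.
P = sp.csc_matrix([[1.0]])
q = np.array([0.0])
A = sp.csc_matrix([[1.0], [1.0]])
l = np.array([1.0, -np.inf])
u = np.array([np.inf, 0.0])

solver = osqp.OSQP()
solver.setup(P=P, q=q, A=A, l=l, u=u, verbose=False)
results = solver.solve()
print(results.info.status)  # expected: 'primal infeasible'
```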
6

Safi, Mohammed. "Bridge Life Cycle Cost Optimization : Analysis, Evaluation & Implementation." Thesis, KTH, Civil and Architectural Engineering, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-11908.

Abstract:

In infrastructure construction projects, especially bridge investments, the early-stage decisions are the most critical ones and significantly affect the whole bridge LCC. Clearly, it is more beneficial to correctly choose the optimum bridge than to choose the optimum construction or repair method.

The ability of a bridge to provide service over time demands appropriate maintenance by the agency. Thus the investment decision should consider not only the initial activity that creates a public good, but also all future activities that will be required to keep that investment available to the public.

This research aims to support bridge sustainability, enhance bridge-related decision making, and facilitate the use of bridge-related feedback. The development of a reliable and usable computer tool for bridge LCC and LCA evaluation is the main target.

Toward this goal, several steps have been completed. A unique integrated bridge LCC evaluation methodology was developed, along with two systematic evaluation methods: one for bridge user cost and one for bridge aesthetic and cultural value. To put these methods into practice, two preliminary computer programs were developed. Current and future work focuses on developing a methodology and a preliminary computer tool for bridge agency cost as well as bridge LCA evaluation. The KTH LCC evaluation system will enable decision makers to correctly choose the optimum bridge in the early decision-making phases, as well as any later repair method.
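As a hedged sketch of the life-cycle cost idea in this abstract (the cost figures and discount rate below are invented for illustration), LCC evaluation discounts all future activities back to present value alongside the initial investment:

```python
# Hypothetical bridge LCC: initial cost plus the discounted cost of all
# future maintenance/repair activities. All figures are made up.
initial_cost = 12_000_000            # construction cost (SEK)
discount_rate = 0.04
future = [(10, 800_000), (25, 2_500_000), (40, 800_000)]  # (year, cost)

lcc = initial_cost + sum(c / (1 + discount_rate) ** t for t, c in future)
print(round(lcc))   # present-value life-cycle cost
```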


7

Abed, El-Fattah Safi Mohammed. "Bridge Life Cycle Cost Optimization : Analysis, Evaluation, & Implementation." Thesis, KTH, Bro- och stålbyggnad, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-36944.

8

Müllner, Marie. "Optimization and implementation of gold-nanoparticles for medical imaging." Diss., Ludwig-Maximilians-Universität München, 2015. http://nbn-resolving.de/urn:nbn:de:bvb:19-182484.

9

Gina, Ervin. "Implementation and Optimization of an Inverse Photoemission Spectroscopy Setup." Scholar Commons, 2012. http://scholarcommons.usf.edu/etd/4050.

Abstract:
Inverse photoemission spectroscopy (IPES) is utilized for determining the unoccupied electron states of materials. It is a complementary technique to the widely used photoemission spectroscopy (PES), as it analyzes what PES cannot: the states above the Fermi energy. This method is essential to investigating the structure of a solid and its states. IPES has a broad range of uses but has only recently come into wider use. This thesis describes the setup, calibration and operation of an IPES experiment. The IPES setup consists of an electron gun which emits electrons towards a sample, where photons are released; these are measured in isochromat mode via a photon detector with a set energy bandwidth. By varying the electron energy at the source, a spectrum of the unoccupied density of states can be obtained. Since IPES is not commonly commercially available, the design consists of many custom-made components. The photon detector operates as a bandpass filter, with a mixture of acetone/argon and a CaF2 window setting the cutoff energies. The counter electronics consist of a pre-amplifier, amplifier and analyzer to detect the count rate at each energy level above the Fermi energy. Along with designing the hardware components, a LabVIEW program was written to capture and log the data for further analysis. The software features several operating modes, including automated scanning, which allows the user to enter the desired scan parameters and have the program scan the sample accordingly. Also implemented in the program is the control of various external components such as the electron gun and the high-voltage power supply. The new setup was tested for different gas mixtures and an optimum ratio was determined. Subsequently, IPES scans of several sample materials were performed for testing and optimization. A scan of Au was utilized for the determination of the Fermi edge energy and for comparison to literature spectra. The Fermi edge energy was then used in a measurement of indium tin oxide (ITO) to determine the conduction band onset. This allowed the determination of the "transfer gap" of ITO. Future experiments will allow further application of IPES to materials and interfaces where characterization of their electronic structure is desired.
10

Lee, Bin Hong Alex. "Empty container logistics optimization : an implementation framework and methods." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/90715.

Abstract:
Empty container logistics is a huge cost component of an ocean carrier's operations, and managing this cost is important to ensure the profitability of the business. This thesis proposes a three-stage framework to handle empty container logistics with cost management as the objective. The first stage studies the forecasting of laden shipment demand, which provides the empty container supply requirement. Based on the supply needs, the problem of optimizing the fleet size is then addressed by using an inventory model to establish the optimal safety stock level. Simulations were used to understand the sensitivity of the safety stock to the desired service level. The final stage involves using mathematical programming to optimize the repositioning costs incurred by carriers to ship empty containers to places which need them due to trade imbalance; at the same time, costs incurred due to leasing and storage are considered. A comparison between just-in-time and pre-emptive replenishment was performed and the impact of uncertainties was investigated. The framework is then implemented in a Decision Support System for an actual ocean carrier and is used to assist the empty container logistics team to take the best course of action in daily operations. The results from the optimizations show that there are opportunities for the carrier to reduce its fleet size and cut costs related to empty container logistics.
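A hedged sketch of the fleet-sizing stage follows; the demand figures are invented, and the thesis's actual inventory model may differ, but the classical safety-stock rule it builds on reads:

```python
from math import sqrt
from statistics import NormalDist

# Safety stock ss = z * sigma * sqrt(L): buffer above expected demand needed
# to hit a service level, under normally distributed daily demand (assumed).
daily_demand_std = 120.0     # std dev of daily empty-container demand (TEU)
lead_time_days = 9.0         # repositioning/replenishment lead time
service_level = 0.95

z = NormalDist().inv_cdf(service_level)
safety_stock = z * daily_demand_std * sqrt(lead_time_days)
print(round(safety_stock))   # ~592 TEU held above expected demand
```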
11

Hanson, Clyde Russell 1959. "Implementation of Fletcher-Reeves in the GOSPEL optimization package." Thesis, The University of Arizona, 1989. http://hdl.handle.net/10150/277144.

Abstract:
Implementation of the Fletcher-Reeves optimization strategy into the GOSPEL optimization package is examined. An explanation of the GOSPEL package is provided, followed by a presentation of the Fletcher-Reeves strategy. The performance of all strategies in the updated GOSPEL package is compared for nine test cases. A user manual for GOSPEL operation as well as the source code are also included.
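For readers unfamiliar with the strategy, here is a compact generic sketch of the Fletcher-Reeves nonlinear conjugate gradient method (a textbook version with a simple Armijo line search, not GOSPEL's own code):

```python
import numpy as np

def fletcher_reeves(f, grad, x0, tol=1e-8, max_iter=2000):
    """Generic Fletcher-Reeves nonlinear conjugate gradient (sketch)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g.dot(d) >= 0:            # safeguard: restart with steepest descent
            d = -g
        t = 1.0                      # simple Armijo backtracking line search
        for _ in range(60):
            if f(x + t * d) <= f(x) + 1e-4 * t * g.dot(d):
                break
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = g_new.dot(g_new) / g.dot(g)   # the Fletcher-Reeves coefficient
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Example: minimize the Rosenbrock function from a standard starting point.
f = lambda z: (1 - z[0])**2 + 100*(z[1] - z[0]**2)**2
grad = lambda z: np.array([-2*(1 - z[0]) - 400*z[0]*(z[1] - z[0]**2),
                           200*(z[1] - z[0]**2)])
print(fletcher_reeves(f, grad, [-1.2, 1.0]))   # approaches the minimizer (1, 1)
```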
12

Aytekin, Arda. "Asynchronous Algorithms for Large-Scale Optimization : Analysis and Implementation." Licentiate thesis, KTH, Reglerteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-203812.

Abstract:
This thesis proposes and analyzes several first-order methods for convex optimization, designed for parallel implementation in shared and distributed memory architectures. The theoretical focus is on designing algorithms that can run asynchronously, allowing computing nodes to execute their tasks with stale information without jeopardizing convergence to the optimal solution. The first part of the thesis focuses on shared memory architectures. We propose and analyze a family of algorithms to solve an unconstrained, smooth optimization problem consisting of a large number of component functions. Specifically, we investigate the effect of information delay, inherent in asynchronous implementations, on the convergence properties of the incremental prox-gradient descent method. Contrary to related proposals in the literature, we establish delay-insensitive convergence results: the proposed algorithms converge under any bounded information delay, and their constant step-size can be selected independently of the delay bound. Then, we shift focus to solving constrained, possibly non-smooth, optimization problems in a distributed memory architecture. This time, we propose and analyze two important families of gradient descent algorithms: asynchronous mini-batching and incremental aggregated gradient descent. In particular, for asynchronous mini-batching, we show that, by suitably choosing the algorithm parameters, one can recover the best-known convergence rates established for delay-free implementations, and expect a near-linear speedup with the number of computing nodes. Similarly, for incremental aggregated gradient descent, we establish global linear convergence rates for any bounded information delay. Extensive simulations and actual implementations of the algorithms in different platforms on representative real-world problems validate our theoretical results.
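A serial sketch of the incremental aggregated gradient idea analyzed in this abstract may help; the asynchrony is only mimicked by the staleness of the per-component gradient table, and the step size and least-squares problem are illustrative assumptions:

```python
import numpy as np

# Incremental aggregated gradient (IAG) sketch: keep the most recent gradient
# of every component function f_i(x) = 0.5*(a_i.x - b_i)^2 in a table and
# step along the (partly stale) aggregate, refreshing one entry per iteration.
rng = np.random.default_rng(3)
n, d = 50, 5
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

x = np.zeros(d)
table = np.array([a * (a @ x - bi) for a, bi in zip(A, b)])  # per-component grads
agg = table.sum(axis=0)
step = 0.5 / (n * np.linalg.norm(A, 2) ** 2)   # conservative constant step
for k in range(20000):
    i = k % n                                  # cyclic component selection
    g_i = A[i] * (A[i] @ x - b[i])
    agg += g_i - table[i]                      # refresh one table entry
    table[i] = g_i
    x = x - step * agg
print(np.linalg.norm(A.T @ (A @ x - b)))       # near-zero gradient at optimum
```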

13

Avventi, Enrico. "Spectral Moment Problems : Generalizations, Implementation and Tuning." Doctoral thesis, KTH, Optimeringslära och systemteori, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-39026.

Abstract:
Spectral moment interpolation finds application in a wide array of use cases: robust control, system identification and model reduction, to name the most notable ones. This thesis aims to expand the theory of such methods in three different directions. The first main contribution concerns practical applicability; from this point of view, various solving algorithms and their properties are considered. This study led to the identification of a globally convergent method with excellent numerical properties. The second main contribution is the introduction of an extended interpolation problem that allows ARMA spectra to be modelled without any explicit information about the positions of the zeros. To this end it was necessary, for practical reasons, to consider an approximate interpolation instead. Finally, the third main contribution is the application to problems such as graphical model identification and ARMA spectral approximation.
14

Jones, Martin Stuart. "An object-oriented framework for the implementation of search techniques." Thesis, University of East Anglia, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.323313.

15

Daroui, Danesh. "Implementation and optimization of partial element equivalent circuit-based solver /." Luleå, 2010. http://pure.ltu.se/ws/fbspretrieve/4634257.

16

Zhang, Jiaxiang. "Computational models of perceptual decision : neural representation, optimization, and implementation." Thesis, University of Bristol, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.495775.

Abstract:
Much experimental evidence indicates that perceptual decisions are made by integrating sensory information in cortical areas until the accumulated evidence satisfies certain criteria. Recently proposed theories further suggest that the brain performs statistically optimal strategies during decision processes. This thesis extends and develops biologically inspired decision models from different aspects.
17

Leroyer, Pierre S. M. Massachusetts Institute of Technology. "Robust crew pairing : delays analysis and implementation of optimization approaches." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/36168.

Abstract:
With increasing delays and airport congestion disturbing airline operations, the development of robust schedules is becoming crucial. Increased traffic and poor weather are among the causes of airport congestion, rising delays and lengthening passenger trips. In this thesis, we identify the latest trends in flight arrival and departure delays, differentiating major U.S. airports from other smaller airports. We also quantify the types of delays airlines should work to mitigate. We then analyze the effects of schedule changes that were implemented by a major U.S. airline at its largest hub. We measure the effects of these schedule changes on on-time performance, taxi time, plane utilization, and passenger connection and total travel times. We also analyze the extent of the practice of adding buffer time to flight times to improve schedule reliability. Finally, we propose and implement a new model to achieve robust crew schedules, that is, crew schedules that are less likely to be inoperable due to disruptions during operations. We show that with an increase in crew costs of 0.2%, we can decrease the number of times crews must connect between different aircraft by 32%.
18

Kandari, Venkesh. "Implementation and Performance Optimization of WebRTC Based Remote Collaboration System." Thesis, Blekinge Tekniska Högskola, Institutionen för kommunikationssystem, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-12891.

19

Wang, Shuo. "Development and Implementation of a Network-Level Pavement Optimization Model." University of Toledo / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1321624751.

20

Fuller, Samuel Gaylin. "Optimization and Hardware Implementation of SYBA-An Efficient Feature Descriptor." BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/7520.

Abstract:
Feature detection, description and matching are crucial steps in many computer vision algorithms, which rely on feature descriptors to match image features across sets of images. This paper discusses a hardware implementation and various optimizations of our lab's previous work on the SYnthetic BAsis feature descriptor (SYBA). Previous work has shown that SYBA can offer superior performance to other binary descriptors, such as BRIEF. This hardware implementation on an FPGA is a high-throughput and low-latency solution, which is critical for applications such as high-speed object detection and tracking, stereo vision, visual odometry, structure from motion, and optical flow. Finally, we compare our solution to other hardware methods. We believe that our implementation of SYBA as a feature descriptor in hardware offers superior image feature matching performance and uses fewer resources than most binary feature descriptor implementations.
21

Curti, Nico <1992&gt. "Implementation and optimization of algorithms in Biomedical Big Data Analytics." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amsdottorato.unibo.it/9371/1/PhDThesis.pdf.

Abstract:
Big Data Analytics poses many challenges to the research community, which has to handle several computational problems related to the vast amount of data. There is increasing interest in biomedical data, with the aim of achieving so-called personalized medicine, where therapy plans are designed on the specific genotype and phenotype of an individual patient; algorithm optimization plays a key role to this purpose. In this work we discuss several topics related to Biomedical Big Data Analytics, with special attention to numerical issues and the algorithmic solutions related to them. We introduce a novel feature selection algorithm tailored to omics datasets, proving its efficiency on synthetic and real high-throughput genomic datasets. We tested our algorithm against other state-of-the-art methods, obtaining better or comparable results. We also implemented and optimized different types of deep learning models, testing their efficiency on biomedical image processing tasks. Three novel frameworks for the development of deep learning neural network models are discussed and used to describe the numerical improvements proposed on various topics. In the first implementation we optimize two super-resolution models, showing their results on NMR images and proving their efficiency in generalization tasks without retraining. The second optimization involves a state-of-the-art object detection neural network architecture, obtaining a significant speedup in computational performance. In the third application we discuss the femur head segmentation problem on CT images using deep learning algorithms. The last section of this work involves the implementation of a novel biomedical database obtained by the harmonization of multiple data sources, which provides network-like relationships between biomedical entities. Data related to diseases and other biological entities were mined using web-scraping methods, and a novel natural language processing pipeline was designed to maximize the overlap between the different data sources involved in this project.
22

Kim, Sunwook. "Multigrid Accelerated Cellular Automata for Structural Optimization: A 1-D Implementation." Thesis, Virginia Tech, 2002. http://hdl.handle.net/10919/9971.

Abstract:
Multigrid acceleration is typically used for the iterative solution of partial differential equations in physics and engineering. A typical multigrid implementation uses a base discretization method, such as finite elements or finite differences, and a set of successively coarser grids that is used for accelerating the convergence of the iterative solution on the base grid. The presented thesis extends the use of multigrid acceleration to the design optimization of a sample structural system and demonstrates it within the context of the recently introduced Cellular Automata paradigm for design optimization. Within the design context, the multigrid scheme is not only used for accelerating the analysis iterations, but is also used to help refine the design across multiple grid levels to accelerate the design convergence. A comparison of computational efficiencies achieved by different multigrid implementations, including the multigrid accelerated nested design iteration scheme, is presented. The method is described in its generic form which can be applicable not only to the Cellular Automata paradigm but also to more general finite element analysis based design schemes as well.
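A minimal 1-D two-grid cycle for a model Poisson problem (an illustrative sketch of the analysis-acceleration building block, not the thesis's cellular-automaton design code) looks like this:

```python
import numpy as np

def smooth(u, f, h, sweeps=3, w=2/3):
    """Weighted-Jacobi relaxation for -u'' = f with homogeneous Dirichlet BCs."""
    for _ in range(sweeps):
        u[1:-1] = (1 - w)*u[1:-1] + w*0.5*(u[:-2] + u[2:] + h*h*f[1:-1])
    return u

def two_grid_cycle(u, f, h):
    """Pre-smooth, coarse-grid correction, post-smooth (one cycle)."""
    u = smooth(u, f, h)
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2*u[1:-1] - u[:-2] - u[2:])/(h*h)
    # Full-weighting restriction of the residual to the coarse grid.
    rc = 0.25*(r[1:-3:2] + 2*r[2:-2:2] + r[3:-1:2])
    # Direct solve of the coarse error equation A_c e_c = r_c.
    m, hc = rc.size, 2*h
    Ac = (2*np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1))/(hc*hc)
    ec = np.zeros((u.size + 1)//2)
    ec[1:-1] = np.linalg.solve(Ac, rc)
    # Linear-interpolation prolongation of the coarse error to the fine grid.
    ef = np.zeros_like(u)
    ef[::2] = ec
    ef[1::2] = 0.5*(ec[:-1] + ec[1:])
    return smooth(u + ef, f, h)

n = 129
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.pi**2 * np.sin(np.pi*x)     # -u'' = f has exact solution u = sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = two_grid_cycle(u, f, h)
print(np.max(np.abs(u - np.sin(np.pi*x))))   # error near discretization level
```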
23

Söderström, Ola. "Testing and Tuning of Optimization Algorithms : On the implementation of Radiotherapy." Thesis, Uppsala universitet, Analys och sannolikhetsteori, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-255478.

Abstract:
When treating cancer patients using radiotherapy, careful planning is essential to ensure that the tumour region is treated while surrounding healthy tissue is not injured in the process. The radiation dose to the tumour, along with the dose limitations for healthy tissue, can be expressed as a constrained optimization problem. The goal of this project has been to create prototype environments in C++ for both testing and parameter tuning of optimization algorithms intended to solve radiotherapy problems. A library of test problems has been implemented on which the optimization algorithms can be tested. For the sake of simplicity, the problem solving and parameter tuning have only been carried out with the interior-point solver IPOPT. The results of a parameter tuning process are displayed in tables where the effect of the tuning can be analysed. By using the implemented parameter tuning process, some settings have been found that are better than the default values when solving the implemented test problems.
24

Madrigal, Marcelino. "Optimization models and techniques for implementation and pricing of electricity markets." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/NQ60555.pdf.

25

Bhanot, Sunil. "Implementation and optimization of a Global Navigation Satellite System software radio." Ohio : Ohio University, 1998. http://www.ohiolink.edu/etd/view.cgi?ohiou1176840392.

26

Sekerka-Bajbus, Michael A. (Michael Alexander). "The BPM design optimization and implementation of 3-branch waveguide devices /." Thesis, McGill University, 1989. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=59606.

Abstract:
The Beam Propagation Method (BPM) together with an accurate Effective Index model is used as a Computer Aided Design (CAD) tool for the analysis, design and optimization of integrated optical devices. The performance of both active and passive 3-branch Ti:LiNbO$ sb3$ waveguide devices as a function of various design parameters is investigated. Then, the device design required to match a specified performance is deduced, and the theoretical performance predictions are verified experimentally through device fabrication and measurements.
In particular, a 3-branch passive power divider, a linear mode confinement modulator, and an active 3-branch switch were studied through their design and implementation. The theoretically predicted performance parameters were found to agree well with the measured values.
27

Wan, Jin Hao M. Eng Massachusetts Institute of Technology. "Geometric modeling and optimization in 3D solar cells : implementation and algorithms." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/92087.

Abstract:
Conversion of solar energy in three-dimensional (3D) devices has been essentially untapped. In this thesis, I design and implement a C++ program that models and optimizes a 3D solar cell ensemble embedded in a given landscape. The goal is to find the optimum arrangement of these solar cells with respect to the landscape buildings so as to maximize the total energy collected. On the modeling side, in order to calculate the energies generated from both direct and reflected sunlight, I store all the geometric inputs in a binary space partition tree; this data structure in turn efficiently supports a crucial polygon clipping algorithm. On the optimization side, I deploy simulated annealing (SA). Both advantages and limitation of SA lead me to restrict the solar cell docking sites to orthogonal grids imposed on the building surfaces. The resulting program is an elegant trade-off between accuracy and efficiency.
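A stripped-down sketch of simulated annealing over orthogonal docking sites conveys the optimization loop described in this abstract; the energy function below is a placeholder for the actual direct-plus-reflected solar energy model, and all parameters are assumptions:

```python
import math
import random

GRID = [(i, j) for i in range(10) for j in range(10)]  # orthogonal docking sites

def energy(placement):
    """Stand-in objective: total pairwise separation of the placed cells.
    In the thesis this would be the total solar energy collected."""
    return sum(math.dist(p, q) for i, p in enumerate(placement)
               for q in placement[i+1:])

def anneal(n_cells=5, steps=20000, t0=5.0, cooling=0.9995):
    placement = random.sample(GRID, n_cells)
    temp = t0
    for _ in range(steps):
        # Propose moving one randomly chosen cell to a free docking site.
        cand = placement.copy()
        cand[random.randrange(n_cells)] = random.choice(
            [g for g in GRID if g not in placement])
        delta = energy(cand) - energy(placement)
        # Always accept improvements; accept worsening moves with
        # probability exp(delta / temp), since we are maximizing.
        if delta > 0 or random.random() < math.exp(delta / temp):
            placement = cand
        temp *= cooling
    return placement

print(anneal())
```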
28

Xu, Shiyu. "TOMOGRAPHIC IMAGE RECONSTRUCTION: IMPLEMENTATION, OPTIMIZATION AND COMPARISON IN DIGITAL BREAST TOMOSYNTHESIS." OpenSIUC, 2014. https://opensiuc.lib.siu.edu/dissertations/979.

Abstract:
Conventional 2D mammography was the most effective approach to detecting early-stage breast cancer in past decades. Tomosynthetic breast imaging is a potentially more valuable 3D technique for breast cancer detection. The limitations of current tomosynthesis systems include a longer scanning time than a conventional digital X-ray modality and a low spatial resolution due to the movement of the single X-ray source. Dr. Otto Zhou's group proposed the concept of stationary digital breast tomosynthesis (s-DBT) using a Carbon Nano-Tube (CNT) based X-ray source array. Instead of mechanically moving a single X-ray tube, s-DBT applies a stationary X-ray source array, which generates X-ray beams from different view angles by electronically activating the individual source prepositioned at the corresponding view angle, thereby eliminating focal spot motion blurring. The scanning speed is determined only by the detector readout time and the number of sources, regardless of the angular coverage span, such that the blur from the patient's motion can be reduced due to the quick scan. S-DBT is potentially a promising modality to improve early breast cancer detection by providing decent image quality with fast scans and low radiation dose. A DBT system acquires a limited number of noisy 2D projections over a limited angular range and then mathematically reconstructs a 3D breast. 3D reconstruction is faced with the challenges of cone-beam and flat-panel geometry, highly incomplete sampling and a huge reconstructed volume. In this research, we investigated several representative reconstruction methods such as the filtered backprojection method (FBP), the simultaneous algebraic reconstruction technique (SART) and maximum likelihood (ML). We also compared our proposed statistical iterative reconstruction (IR), with its particular prior and computational technique, to these representative methods. Of all the reconstruction methods available in this research, our proposed statistical IR appears particularly promising, since it provides the flexibility of accurate physical noise modeling and geometric system description. In the following chapters, we present multiple key techniques of applying statistical IR to tomosynthesis imaging data that demonstrate significant image quality improvement over conventional techniques. These techniques include physical modeling with a local voxel-pair-based prior whose parameters can be fine-tuned for image quality; a pre-computed parameter κ incorporated into the prior to remove the data dependence and achieve a predictable resolution property; an effective ray-driven technique to compute the forward and backprojection; and an over-sampled ray-driven method to perform high-resolution reconstruction with a practical region of interest (ROI) technique. In addition, to solve the estimation problem with fast computation, we also present a semi-quantitative method to optimize the relaxation parameter in a relaxed ordered-subsets framework and an optimization-transfer-based algorithm framework which potentially allows fewer iterations to achieve acceptable convergence. Phantom data acquired with the s-DBT prototype system are used to assess the performance of these techniques and to compare our proposed method to the representative ones. The value of IR is demonstrated in improving the detectability of low-contrast and tiny micro-calcifications, in reducing cross-plane artifacts, and in improving resolution and lowering noise in reconstructed images.
In particular, noise power spectrum (NPS) analysis indicates a superior noise spectral property of our proposed statistical IR, especially in the high-frequency range. With this decent noise property, statistical IR also provides a remarkable reconstruction MTF overall and in different areas within a focus plane. Although the computational load remains a significant challenge for practical development, combined with advancing computational techniques such as graphics computing, the superior image quality provided by statistical IR can be realized to benefit diagnostics in real clinical applications.
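Of the representative methods named above, SART has a particularly compact update; a toy sketch follows, with a random nonnegative matrix standing in for the actual cone-beam system matrix (an illustration, not the thesis's implementation):

```python
import numpy as np

def sart(A, b, iters=50, lam=1.0):
    """Simultaneous algebraic reconstruction technique (sketch).
    Row and column sums normalize the update, as in standard SART."""
    x = np.zeros(A.shape[1])
    row_sum = A.sum(axis=1); row_sum[row_sum == 0] = 1.0
    col_sum = A.sum(axis=0); col_sum[col_sum == 0] = 1.0
    for _ in range(iters):
        residual = (b - A @ x) / row_sum
        x = x + lam * (A.T @ residual) / col_sum
    return x

rng = np.random.default_rng(2)
A = rng.uniform(0.0, 1.0, size=(300, 100))   # stand-in "ray weight" matrix
x_true = rng.uniform(0.0, 1.0, 100)
b = A @ x_true                               # consistent, noise-free data
print(np.linalg.norm(sart(A, b) - x_true))   # error shrinks with iterations
```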
29

Tengberg, Oskar. "Implementation of Hydro Power Plant Optimization for Operation and Production Planning." Thesis, Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-74274.

Abstract:
The output power of a hydro power plant was modelled and an optimization algorithm was implemented in a tool for optimizing hydro power plants. The tool maximizes the power output of a hydro power plant by distributing water over a set of active units in the plant, and it will be used in the planning of electricity production. The tool was built in a MATLAB environment, using the Optimization Toolbox, and a GUI was developed for Vattenfall. The optimization tool was based on the same architecture as the current tool used for this kind of optimization, which is to be replaced by the work presented in this thesis; therefore, the goal was to achieve the same optimal results as the current optimization tool. The power output of three of Vattenfall's hydro power plants was computed and two of these plants were optimized. These power output results were compared to results from the optimization tool currently in use, showing differences within the measurement inaccuracy of ≤ 0.3%. These three power plants demonstrated that the new tool is sufficient to replace the current one, but further testing on more of Vattenfall's hydro power plants is recommended to prove its consistency.
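The core dispatch problem here — splitting a fixed total discharge over active units to maximize plant output — can be sketched as a small constrained optimization; the unit curves and limits below are invented stand-ins for Vattenfall's unit data:

```python
import numpy as np
from scipy.optimize import minimize

def unit_power(q, a, b):
    """Hypothetical concave discharge-to-power curve for one unit."""
    return a * q - b * q**2

units = [(0.9, 0.004), (0.85, 0.003), (0.8, 0.002)]   # (a, b) per unit
Q = 150.0                                             # total discharge to split

def neg_total_power(q):
    return -sum(unit_power(qi, a, b) for qi, (a, b) in zip(q, units))

res = minimize(neg_total_power,
               x0=np.full(len(units), Q / len(units)),
               bounds=[(0.0, 80.0)] * len(units),     # per-unit flow limits
               constraints=[{"type": "eq", "fun": lambda q: q.sum() - Q}])
print(res.x, -res.fun)   # optimal water distribution and total power
```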
30

Tappenden, Rachael Elizabeth Helen. "Development & Implementation of Algorithms for Fast Image Reconstruction." Thesis, University of Canterbury. Mathematics and Statistics, 2011. http://hdl.handle.net/10092/5998.

Abstract:
Signal and image processing is important in a wide range of areas, including medical and astronomical imaging, and speech and acoustic signal processing. There is often a need for the reconstruction of these objects to be very fast, as they have some cost (perhaps a monetary cost, although often it is a time cost) attached to them. This work considers the development of algorithms that allow these signals and images to be reconstructed quickly and without perceptual quality loss. The main problem considered here is that of reducing the amount of time needed for images to be reconstructed, by decreasing the amount of data necessary for a high quality image to be produced. In addressing this problem two basic ideas are considered. The first is a subset selection problem where the aim is to extract a subset of data, of a predetermined size, from a much larger data set. To do this we first need some metric with which to measure how `good' (or how close to `best') a data subset is. Then, using this metric, we seek an algorithm that selects an appropriate data subset from which an accurate image can be reconstructed. Current algorithms use a criterion based upon the trace of a matrix. In this work we derive a simpler criterion based upon the determinant of a matrix. We construct two new algorithms based upon this new criterion and provide numerical results to demonstrate their accuracy and efficiency. A row exchange strategy is also described, which takes a given subset and performs interchanges to improve the quality of the selected subset. The second idea is, given a reduced set of data, how can we quickly reconstruct an accurate signal or image? Compressed sensing provides a mathematical framework that explains that if a signal or image is known to be sparse relative to some basis, then it may be accurately reconstructed from a reduced set of data measurements. The reconstruction process can be posed as a convex optimization problem. We introduce an algorithm that aims to solve the corresponding problem and accurately reconstruct the desired signal or image. The algorithm is based upon the Barzilai-Borwein algorithm and tailored specifically to the compressed sensing framework. Numerical experiments show that the algorithm is competitive with currently used algorithms. Following the success of compressed sensing for sparse signal reconstruction, we consider whether it is possible to reconstruct other signals with certain structures from reduced data sets. Specifically, signals that are a combination of a piecewise constant part and a sparse component are considered. A reconstruction process for signals of this type is detailed and numerical results are presented.
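In the spirit of the Barzilai-Borwein-based reconstruction described in this abstract (a rough sketch under stated assumptions, not the thesis's tailored algorithm), a BB-stepped iterative shrinkage loop for sparse recovery can be written as:

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def bb_ist(A, b, lam, iters=200):
    """Iterative shrinkage for min 0.5||Ax-b||^2 + lam*||x||_1 with a
    Barzilai-Borwein step size (illustrative; no convergence safeguards)."""
    x = np.zeros(A.shape[1])
    g = A.T @ (A @ x - b)
    alpha = 1.0 / np.linalg.norm(A, 2) ** 2   # safe initial step, 1/L
    for _ in range(iters):
        x_new = soft(x - alpha * g, alpha * lam)
        g_new = A.T @ (A @ x_new - b)
        s, y = x_new - x, g_new - g
        if s @ y > 1e-12:                     # BB1 step: <s,s>/<s,y>
            alpha = (s @ s) / (s @ y)
        x, g = x_new, g_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200))            # compressed measurements
x_true = np.zeros(200)
x_true[rng.choice(200, 8, replace=False)] = 1.0
b = A @ x_true
x_hat = bb_ist(A, b, lam=0.05)
print(np.linalg.norm(x_hat - x_true))         # close to the sparse signal
```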
31

Ramachandran, Adithya. "HEV fuel optimization using interval back propagation based dynamic programming." Thesis, Georgia Institute of Technology, 2016. http://hdl.handle.net/1853/55054.

Abstract:
In this thesis, the primary powertrain components of a power-split hybrid electric vehicle are modeled. In particular, the dynamic model of the energy storage element (i.e., the traction battery) is exactly linearized through an input transformation method to take advantage of the proposed optimal control algorithm. A Lipschitz continuous and nondecreasing cost function is formulated in order to minimize the net amount of consumed fuel. The globally optimal solution is obtained using a dynamic programming routine that produces the optimal input based on the current state of charge and the future power demand. It is shown that the globally optimal control solution can be expressed in closed form for a time-invariant and convex incremental cost function utilizing the interval back propagation approach. The global optimality of both the time-varying and time-invariant solutions is rigorously proved. The optimal closed-form solution is further shown to be applicable to the time-varying case provided that the time variations of the incremental cost function are sufficiently small. The real-time implementation of this algorithm in Simulink is discussed, and a 32.84% improvement in fuel economy is observed compared to existing rule-based methods.
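A coarse backward-induction sketch of a dynamic programming routine of this kind follows; the drive cycle, fuel model and battery parameters are invented, nearest-grid rounding stands in for interpolation, and the thesis's exact formulation with input transformation is not reproduced:

```python
import numpy as np

SOC = np.linspace(0.3, 0.9, 61)                # state-of-charge grid
demand = [20, 35, 50, 30, 15, 40]              # kW, hypothetical drive cycle
u_batt = np.linspace(-10, 20, 31)              # battery power choices, kW
dt, cap = 0.05, 10.0                           # step (h) and capacity (kWh)

def fuel(p_engine):
    """Hypothetical convex incremental fuel cost of engine power."""
    return 0.08 * max(p_engine, 0.0) + 0.002 * max(p_engine, 0.0) ** 2

V = np.zeros(SOC.size)                         # terminal cost-to-go = 0
for p_dem in reversed(demand):                 # backward induction in time
    V_new = np.full(SOC.size, np.inf)
    for i, soc in enumerate(SOC):
        for p_b in u_batt:
            soc_next = soc - p_b * dt / cap    # battery discharge lowers SOC
            if not SOC[0] <= soc_next <= SOC[-1]:
                continue                       # keep the battery in bounds
            j = int(round((soc_next - SOC[0]) / (SOC[1] - SOC[0])))
            V_new[i] = min(V_new[i], fuel(p_dem - p_b) + V[j])
    V = V_new
print(V[30])   # minimal fuel cost from the mid-range initial state of charge
```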
32

Khalid, Adeel S. "Development and Implementation of Rotorcraft Preliminary Design Methodology using Multidisciplinary Design Optimization." Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/14013.

Abstract:
A formal framework is developed and implemented in this research for preliminary rotorcraft design using the IPPD methodology. All technical aspects of design are considered, including vehicle engineering, dynamic analysis, stability and control, aerodynamic performance, propulsion, transmission design, weight and balance, noise analysis and economic analysis. The design loop starts with a detailed analysis of requirements. A baseline is selected and upgrade targets are identified depending on the mission requirements. An Overall Evaluation Criterion (OEC) is developed that is used to measure the goodness of the design or to compare the design with competitors. The requirements analysis and baseline upgrade targets lead to the initial sizing and performance estimation of the new design. The digital information is then passed to disciplinary experts, where the detailed disciplinary analyses are performed. Information is transferred from one discipline to another as the design loop is iterated. To coordinate all the disciplines in the product development cycle, Multidisciplinary Design Optimization (MDO) techniques, e.g., All At Once (AAO) and Collaborative Optimization (CO), are suggested. The methodology is implemented on a Light Turbine Training Helicopter (LTTH) design. Detailed disciplinary analyses are integrated through a common platform for efficient and centralized transfer of design information from one discipline to another in a collaborative manner. Several disciplinary and system-level optimization problems are solved. After all the constraints of the multidisciplinary problem have been satisfied and an optimal design has been obtained, it is compared with the initial baseline, using the earlier developed OEC, to measure the level of improvement achieved. Finally, a digital preliminary design is proposed. The proposed design methodology provides an automated design framework and facilitates parallel design by removing disciplinary interdependency; current and updated information is made available to all disciplines at all times through a central collaborative repository, overall design time is reduced, and an optimized design is achieved.
33

Chen, Min. "Implementation and optimization of a modulated filter bank based on allpass filters." Thesis, University of Ottawa (Canada), 2001. http://hdl.handle.net/10393/9192.

Abstract:
A filter bank based on an allpass IIR filter with a brick-wall response was designed by A. J. Van Leest in [17]; however, the delay in the filter bank is too long for real-time applications. In order to reduce the delay, the coefficient orders, the transition bandwidth and the filter bank structure must be optimized. The coefficient order can be reduced by increasing the stopband attenuation; to further reduce the delay, the sharpness of the filter bank has to be reduced. This thesis also discusses the number of bands and the filter bank structure in relation to the filter bank delay. The filter bank can be used in non-real-time applications, such as CD compression, with high-order coefficients. The minimum transition bandwidth that can be reached is 0.03257π divided by the number of bands. This thesis expands upon DCT modulations of IIR-based modulated filter banks and investigates the Hartley transformation as a new filter bank modulation technique. These modulation techniques generate real output signals from real input signals. The quantization errors from quantizing the coefficients are studied. It is concluded that at least 16 bits are required for the filter bank to perform as designed without quantization effects.
34

Xiong, Xiao. "Power optimization for wireless sensor networks security based on an FPGA implementation." Thesis, University of Salford, 2009. http://usir.salford.ac.uk/26973/.

Abstract:
Security is a key consideration when deploying Wireless Sensor Networks (WSNs). Due to the constrained hardware resources of sensor network nodes, lightweight cryptographic primitives are often implemented for fast execution and small memory usage. Apart from the computational complexity of cryptographic algorithms, the technique used to implement them in practice is another crucial aspect of WSN security development, one which directly affects power efficiency. Since battery life confines the lifetime of a sensor node, energy conservation is normally the first priority in developing a security solution. This means that the optimal security operation for WSNs has to consume the smallest amount of energy possible while it is active. A novel power optimization methodology is proposed, intended to provide a guideline for evaluating cryptographic primitives and implementation techniques when constructing a power-optimal security solution. Noting the inevitable limitations of traditional security implementation techniques, an FPGA-based hybrid technique was developed to offer high efficiency with flexibility. An interesting impact of the operating frequency on the power consumption of security implementations was also identified. Using this methodology, the most suitable cryptographic primitive and hardware implementation configurations were suggested, based on systematic experiments, for building an environment monitoring application. This produced an optimal security solution that proved the feasibility and effectiveness of the proposed methodology, as its actual measurements fully satisfied the predetermined criteria. The outcome showed that the FPGA implementation technique was the most power-efficient implementation solution for applications where flexibility is essential, and that adjusting the operating frequency can be a useful tool to further optimize the security solution for low power.
35

Kordik, Andrew Michael. "Hardware Implementation of Post-Compression Rate-Distortion Optimization for EBCOT in JPEG2000." University of Dayton / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1313791202.

36

Kalas, Vinayak Jagannathrao. "Application-oriented optimization of robot elastostatic calibration : implementation on hexapod positioning systems." Thesis, Montpellier, 2020. http://www.theses.fr/2020MONTS018.

Abstract:
Hexapods are increasingly being used for high-precision 6-DOF positioning applications, such as positioning mirrors in telescopes and positioning samples in synchrotrons. These robots are designed and controlled to be very repeatable and accurate. However, the structural compliance of these positioning systems limits their positioning accuracy. As accuracy requirements become more stringent in emerging applications, compensating for the inaccuracy due to structural compliance becomes necessary. In this regard, firstly, a method for elastostatic calibration of hexapods is presented. This method uses a lumped stiffness parameter model to parametrize the relationship between the platform deflections and the force/moment applied to it. These parameters can be estimated using deflection measurements performed with known forces/moments applied to the platform. The estimated parameters can then be used to predict and correct the hexapod's positioning errors due to compliance. Secondly, a new approach is presented to optimize stiffness identification for robot elastostatic calibration. A framework is proposed to formulate criteria for choosing the best set of poses and forces for the stiffness identification experiment. The parameters identified under the experimental conditions (poses and forces) suggested by these criteria ensure minimum impact of the errors influencing stiffness identification (uncertainty of deflection measurements and errors in the applied forces) on compensation quality. Additionally, the approach maximizes accuracy after compensation at the desired pose(s), along the desired axes of the platform, and with the desired forces/moments on the platform. This stiffness identification optimization framework ensures the best compensation for positioning errors due to compliance, as per the positioning requirements of the application at hand. Lastly, a method is presented to eliminate the influence of thermal deflection of a hexapod on the measured 6-DOF pose of its platform. This method is necessary when the thermal deflections of the hexapod are large enough to impact the results of a study, which was the case with some tests performed to validate the methods developed in this thesis. The efficacy of the presented methods has been validated by means of simulation studies on a bipod and experimental studies on a high-precision hexapod positioning system.
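The estimation step described here lends itself to ordinary least squares; a synthetic sketch follows (the compliance values, wrench ranges and noise level are assumptions, and the thesis's lumped parametrization is richer than a plain 6x6 matrix):

```python
import numpy as np

# Least-squares identification of a compliance matrix C mapping the 6-DOF
# wrench on the platform to its 6-DOF deflection: delta = C @ w.
rng = np.random.default_rng(1)
C_true = np.diag([2e-6, 2e-6, 1e-6, 5e-5, 5e-5, 8e-5])   # "true" compliance

W = rng.uniform(-100, 100, size=(40, 6))        # applied wrenches (40 trials)
D = W @ C_true.T + 1e-8 * rng.standard_normal((40, 6))   # noisy deflections

# Solve D = W @ C.T in the least-squares sense.
C_hat = np.linalg.lstsq(W, D, rcond=None)[0].T
print(np.max(np.abs(C_hat - C_true)))           # small identification error
```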
APA, Harvard, Vancouver, ISO, and other styles
37

Hasan, Md Mubashwar. "Design, Optimization and Implementation of a High Frequency Link Multilevel Cascaded Inverter." Thesis, Curtin University, 2018. http://hdl.handle.net/20.500.11937/69364.

Full text
Abstract:
This thesis presents a new concept for device reduction in cascaded multilevel inverters (CMLIs) by utilizing low- and high-frequency transformer links. Two CMLI topologies, one symmetric and one asymmetric, are proposed. Compared with counterpart CMLI topologies available in the literature, the two inverter topologies proposed in this thesis have the advantage of using the fewest electronic components without compromising overall performance, particularly when a high number of levels is required in the output voltage waveform.
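For context, the standard level-count relations for conventional cascaded H-bridge MLIs (not the specific transformer-linked topologies of this thesis) show why asymmetric configurations are attractive when many output levels are needed. With $n$ cascaded cells,

\[
m_{\text{sym}} = 2n + 1, \qquad
m_{\text{binary}} = 2^{\,n+1} - 1, \qquad
m_{\text{trinary}} = 3^{n},
\]

where $m$ is the number of output voltage levels for equal, binary-ratio ($1{:}2{:}4{:}\dots$), and trinary-ratio ($1{:}3{:}9{:}\dots$) DC sources, respectively; asymmetric configurations thus reach far more levels per switching device.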
APA, Harvard, Vancouver, ISO, and other styles
38

Olsson, Daniel. "Applications and Implementation of Kernel Principal Component Analysis to Special Data Sets." Thesis, KTH, Optimeringslära och systemteori, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-31130.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Kingston, Derek Bastian. "Implementation issues of real-time trajectory generation on small UAVs /." Diss., CLICK HERE for online access, 2004. http://contentdm.lib.byu.edu/ETD/image/etd357.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Djehiche, Younes, and Erik Bröte. "Implementation of mean-variance and tail optimization based portfolio choice on risky assets." Thesis, KTH, Skolan för teknikvetenskap (SCI), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-198071.

Full text
Abstract:
An asset manager's goal is to provide a high return relative to the risk taken, and thus the manager faces the challenge of how to choose an optimal portfolio. Many mathematical methods have been developed to achieve a good balance between these attributes using different risk measures. In this thesis, we test a relatively simple and common approach, the Markowitz mean-variance method, and a more quantitatively demanding approach, the tail optimization method. Using an active portfolio based on data provided by the Swedish fund management company Enter Fonder, we implement these approaches and compare the results. We analyze how each method weights the underlying assets in order to obtain an optimal portfolio.
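A minimal sketch of the classical mean-variance step (not the thesis's implementation): given estimated mean returns and a covariance matrix, it finds the minimum-variance, fully invested portfolio achieving a target return. The numbers are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def mean_variance_weights(mu, cov, target_return):
    """Minimize portfolio variance w' cov w subject to
    full investment (sum w = 1) and a target expected return."""
    n = len(mu)
    constraints = [
        {"type": "eq", "fun": lambda w: np.sum(w) - 1.0},
        {"type": "eq", "fun": lambda w: mu @ w - target_return},
    ]
    result = minimize(
        fun=lambda w: w @ cov @ w,       # portfolio variance
        x0=np.full(n, 1.0 / n),          # start from equal weights
        constraints=constraints,
        method="SLSQP",
    )
    return result.x

# Hypothetical inputs: annualized mean returns and covariance.
mu = np.array([0.08, 0.12, 0.05])
cov = np.array([[0.040, 0.006, 0.002],
                [0.006, 0.090, 0.004],
                [0.002, 0.004, 0.010]])
w = mean_variance_weights(mu, cov, target_return=0.09)
print(w, "variance:", w @ cov @ w)
```

Tail optimization replaces the variance objective with a tail-risk measure such as expected shortfall; the quadratic program above is only the mean-variance half of the comparison.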
APA, Harvard, Vancouver, ISO, and other styles
41

Zhang, Yu. "Implementation of Reliability Centered Asset Management method on Power Systems." Thesis, KTH, Skolan för elektro- och systemteknik (EES), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-201717.

Full text
Abstract:
Asset management is becoming increasingly important in nearly all fields, especially in electric power engineering, mainly for two reasons. The first is the high investment cost, which includes design, construction, and equipment costs as well as high maintenance costs. The second is that the system operator always faces a high penalty fee if an interruption occurs in the system. Moreover, due to the deregulation of the electricity market in recent years, electricity utilities are paying more attention to investment and maintenance costs, and one of their main goals is to maximize maintenance performance. The challenge for the utilities is therefore to provide highly reliable power to customers while remaining cost-effective for the suppliers. Reliability Centered Asset Management (RCAM) is one of the best methods for solving this problem. The basic RCAM method is introduced first in this thesis. The model includes the maintenance strategy definition, the maintenance cost calculation, and an optimization model. Based on the basic model, several improvements are added and a new model is proposed. The improvements include a new improvement-maintenance strategy, an increasing failure rate, and a new objective function. The new model is also able to provide a time-based maintenance plan. The simulation is applied to a Swedish distribution system, the Birka system, using GAMS. The results and a sensitivity analysis are presented. A maintenance strategy for 58 components over 120 months is finally found. The impact of the changing failure rate is also shown for the whole period.
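A toy time-based maintenance planner, as a flavour of the cost trade-off RCAM formalizes; the thesis itself solves an optimization model in GAMS, whereas this greedy heuristic and all its numbers and names are made up for illustration.

```python
HORIZON = 12  # planning horizon in months

# name: (base monthly failure rate, aging increment, maintenance cost, penalty)
components = {
    "breaker":     (0.010, 0.0020, 5.0, 200.0),
    "transformer": (0.004, 0.0010, 8.0, 800.0),
    "cable":       (0.006, 0.0015, 3.0, 150.0),
}

def expected_penalty(name, age):
    """Expected interruption cost this month if the component is left alone."""
    base, aging, _cost, penalty = components[name]
    return (base + aging * age) * penalty

age = {c: 0 for c in components}
plan, total_cost = [], 0.0

for month in range(1, HORIZON + 1):
    # Greedy rule: maintain the component with the best penalty-to-cost
    # ratio, but only when its expected penalty justifies the expense.
    best = max(components, key=lambda c: expected_penalty(c, age[c]) / components[c][2])
    if expected_penalty(best, age[best]) > 0.1 * components[best][2]:
        plan.append((month, best))
        total_cost += components[best][2]
        age[best] = 0  # maintenance restores the component's condition
    for c in components:
        total_cost += expected_penalty(c, age[c])
        age[c] += 1

print(plan)
print(f"total expected cost over {HORIZON} months: {total_cost:.1f}")
```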
APA, Harvard, Vancouver, ISO, and other styles
42

Grünauer, Florian. "Design, optimization, and implementation of the new neutron radiography facility at FRM-II." [S.l.] : [s.n.], 2005. http://deposit.ddb.de/cgi-bin/dokserv?idn=978958950.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Boisard, Olivier. "Optimization and implementation of bio-inspired feature extraction frameworks for visual object recognition." Thesis, Dijon, 2016. http://www.theses.fr/2016DIJOS016/document.

Full text
Abstract:
Industry has growing needs for so-called "intelligent systems", capable of not only acquiring data, but also analysing it and making decisions accordingly. Such systems are particularly useful for video-surveillance, in which case alarms must be raised in case of an intrusion. For cost-saving and power-consumption reasons, it is better to perform that processing as close to the sensor as possible. To address that issue, a promising approach is to use bio-inspired frameworks, which consist in applying computational biology models to industrial applications. The work carried out during this thesis consisted in selecting bio-inspired feature extraction frameworks and optimizing them with the aim of implementing them on a dedicated hardware platform for computer vision applications. First, we propose a generic algorithm, which may be used in several use-case scenarios, with acceptable complexity and a low memory footprint. Then, we propose optimizations for a more global framework, based on precision degradation in computations, which ease its implementation on embedded systems. Results suggest that while the framework we developed may not be as accurate as the state of the art, it is more generic. Furthermore, the optimizations we propose for the more complex framework are fully compatible with other optimizations from the literature, and provide encouraging perspectives for future developments. Finally, both contributions have a scope that goes beyond the sole frameworks that we studied, and may be used in other, more widely used frameworks as well
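As a flavour of the precision-degradation idea (a generic illustration, not the thesis's actual scheme), the snippet below quantizes a feature map to a signed fixed-point format with a given number of fractional bits and measures the induced error:

```python
import numpy as np

def to_fixed_point(x, int_bits=3, frac_bits=4):
    """Quantize to signed fixed-point Q(int_bits).(frac_bits):
    round to the nearest representable step, then saturate."""
    step = 2.0 ** -frac_bits
    lo = -(2.0 ** int_bits)
    hi = 2.0 ** int_bits - step
    return np.clip(np.round(x / step) * step, lo, hi)

rng = np.random.default_rng(1)
features = rng.normal(size=(8, 8))      # stand-in for extracted features
quantized = to_fixed_point(features, 3, 4)
# Error is bounded by step/2 unless a value saturates.
print("max abs error:", np.max(np.abs(features - quantized)))
```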
APA, Harvard, Vancouver, ISO, and other styles
44

Zhu, Di. "Large Scale ETL Design, Optimization and Implementation Based On Spark and AWS Platform." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-215702.

Full text
Abstract:
Nowadays, the amount of data generated by users within an Internet product is increasing exponentially: for instance, the clickstream of a website with millions of users, geospatial information from GIS-based apps on Android and iPhone, or sensor data from cars and other electronic equipment. Billions of such events may be produced every day, so it is unsurprising that valuable insights can be extracted from them, for instance for monitoring systems, fraud detection, user behavior analysis, and feature verification. Nevertheless, technical issues emerge accordingly. The heterogeneity and massiveness of the data, and the miscellaneous requirements for using it across different dimensions, make the design of data pipelines, transformations, and persistence in a data warehouse much harder. Undeniably, there are traditional ways to build ETLs, from mainframes [1] and RDBMSs to MapReduce and Hive. Yet with the emergence and popularization of the Spark framework and AWS, this procedure can evolve into a more robust, efficient, less costly, and easier-to-implement architecture for collecting data, building dimensional models, and performing analytics on massive data. With the advantage of being at a car transportation company where billions of user behavior events come in every day, this thesis contributes an exploratory way of building and optimizing ETL pipelines based on AWS and Spark, and compares it with current mainstream data pipelines from different aspects.
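A minimal PySpark sketch of the kind of extract-transform-load pipeline the thesis studies; the paths, schema, and column names are hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-etl").getOrCreate()

# Extract: raw JSON events, e.g. landed in S3 by an ingestion service.
raw = spark.read.json("s3://example-bucket/raw/events/2017-10-01/")

# Transform: drop malformed rows, parse timestamps, derive a date partition.
events = (
    raw.filter(F.col("user_id").isNotNull())
       .withColumn("ts", F.to_timestamp("event_time"))
       .withColumn("dt", F.to_date("ts"))
)

# Load: columnar storage partitioned by date for cheap downstream scans.
(events.write
       .mode("overwrite")
       .partitionBy("dt")
       .parquet("s3://example-bucket/warehouse/events/"))
```

On AWS such a job would typically run on EMR with S3 as the storage layer, which is the kind of architecture the thesis evaluates.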
APA, Harvard, Vancouver, ISO, and other styles
45

Moberg, My. "Liquid Chromatography Coupled to Mass Spectrometry : Implementation of Chemometric Optimization and Selected Applications." Doctoral thesis, Uppsala : Acta Universitatis Upsaliensis, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-7071.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Zhang, Yi. "Implementation and optimization of thread-local variables for a race-free Java dialect." Thesis, McGill University, 2012. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=107849.

Full text
Abstract:
Despite the popularity of Java, problems may arise from potential data-race conditions during execution of a Java program. Data-races are considered errors in concurrent programming languages and greatly complicate both programming and runtime optimization efforts. A race-free version of Java is therefore desirable as a way of avoiding this complexity and simplifying the programming model. This thesis is part of work trying to build a race-free version of Java. It implements and optimizes thread-local accesses and comes up with a new semantics for this language. An important part of implementing a language without races is to distinguish thread-local data from shared data, because these two groups of data need to be treated differently. This is complex in Java because in the current Java semantics all objects are allocated on a single heap and implicitly shared by multiple threads. Furthermore, while Java does provide a mechanism for thread-local storage, it is awkward to use and inefficient. Many of the new concurrent programming languages, such as OpenMP, UPC, and D, use "sharing directives" to distinguish shared data from thread-local data, and have features that make heavy use of thread-local data. Our goal here is to apply some of these language ideas to a Java context in order to provide a simpler and less error-prone programming model. When porting such features as part of a language extension to Java, however, performance can suffer due to the simple, map-based implementation of Java's built-in ThreadLocal class. We implement an optimized mechanism based on programmer annotations that can efficiently ensure class and instance variables are only accessed by their owner thread. Both class and instance variables inherit values from the parent thread through deep copying, allowing all the reachable objects of child threads to have local copies if syntactically specified. In particular, class variable access involves direct access to thread-local variables through a localized heap, which is faster and easier than the default map mechanism defined for ThreadLocal objects. Our design improves performance significantly over the traditional thread-local access method for class variables and provides a simplified and more appealing syntax for doing so. We further evaluate our approach by modifying non-trivial existing benchmarks to make better use of thread-local features, illustrating feasibility and allowing us to measure performance in realistic contexts. This work is intended to bring us closer to designs for a complete race-free version of Java, as well as to show how improved support for thread-local data could be implemented in other languages.
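For readers unfamiliar with thread-local storage, the sketch below shows the semantics the thesis optimizes for Java, using Python's threading.local (analogous to java.lang.ThreadLocal). It illustrates the concept only, not the thesis's annotation-based mechanism; note in particular that a child thread does not inherit the parent's value, which is one behaviour the thesis changes via deep copying.

```python
import threading

local = threading.local()   # each thread sees its own attribute values
local.counter = 100         # set in the main thread

def worker(name):
    # The attribute set by the main thread is absent here: thread-local
    # storage starts empty in each new thread (no parent inheritance).
    local.counter = 0
    for _ in range(3):
        local.counter += 1
    print(f"{name}: counter = {local.counter}")

threads = [threading.Thread(target=worker, args=(f"t{i}",)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"main: counter = {local.counter}")   # still 100, untouched by workers
```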
APA, Harvard, Vancouver, ISO, and other styles
47

Blanchard, Roxann Russell. "Recovered energy logic--device optimization for circuit implementation in silicon and heterostructure technologies." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/34066.

Full text
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1994.
Includes bibliographical references (p. 91-93).
by Roxann Russell Blanchard.
M.S.
APA, Harvard, Vancouver, ISO, and other styles
48

Roozbehani, Mardavij. "Optimization of Lyapunov invariants in analysis and implementation of safety-critical software systems." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/46515.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2008.
Includes bibliographical references (leaves 168-176).
This dissertation contributes to two major research areas in safety-critical software systems, namely, software analysis and software implementation. In reference to the software analysis problem, the main contribution of the dissertation is the development of a novel framework, based on Lyapunov invariants and convex optimization, for verification of various safety and performance specifications for software systems. The enabling elements of the framework for software analysis are: (i) dynamical system interpretation and modeling of computer programs, (ii) Lyapunov invariants as behavior certificates for computer programs, and (iii) a computational procedure for finding the Lyapunov invariants. (i) The view in this dissertation is that software defines a rule for iterative modification of the operating memory at discrete instances of time. Hence, it can be modeled as a discrete-time dynamical system with the program variables as the state variables and the operating memory as the state space. Three specific modeling languages are introduced which can represent a broad range of computer programs of interest to the control community: Mixed Integer-Linear Models, Graph Models, and Linear Models with Conditional Switching. (ii) Inspired by the concept of Lyapunov functions in stability analysis of nonlinear dynamical systems, Lyapunov invariants are introduced and proposed for analysis of behavioral properties and verification of various safety and performance specifications for computer programs. In the same spirit as standard Lyapunov functions, a Lyapunov invariant is an appropriately defined function of the state which satisfies a difference inequality along the trajectories. It is shown that variations of Lyapunov invariants satisfying certain technical conditions can be formulated for verification of several common specifications, including but not limited to: absence of overflow, absence of division-by-zero, termination in finite time, and certain user-specified program assertions. (iii) A computational procedure based on convex relaxation techniques and numerical optimization is proposed for finding the Lyapunov invariants that prove the specifications. The framework is complemented by the introduction of a notion of optimality for the graph models, which can be used for constructing efficient graph models that improve the analysis in a systematic way. It is observed that applying the framework to (graph models of) programs that are semantically identical but syntactically different does not produce identical results, suggesting that the success or failure of the method is contingent on the choice of the graph model. Based on this observation, the concepts of graph reduction, irreducible graphs, and minimal and maximal realizations of graph models are introduced, and several new theorems are presented that compare the performance of the original graph model of a computer program and its reduced offsprings. In reference to the software implementation problem for safety-critical systems, the main contribution of the dissertation is the introduction of an algorithm, based on optimization of quadratic Lyapunov functions and semidefinite programming, for computing optimal state-space implementations for digital filters. The particular implementation considered is a finite word-length implementation on a fixed-point processor with quantization before or after multiplication. The objective is to minimize the effects of finite word-length constraints on performance deviation while respecting the overflow limits. The problem is first formulated as a special case of controller synthesis where the controller has a specific structure, which is known to be a hard non-convex problem in general. It is then shown that this special case can be convexified exactly and the optimal implementation can be computed by solving a semidefinite optimization problem. It is observed that the optimal state-space implementation of a digital filter on a machine with finite memory does not necessarily define the same transfer function as that of an ideal implementation.
by Mardavij Roozbehani.
Ph.D.
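In schematic form (a standard rendering of the idea, not the dissertation's exact technical conditions): modeling the program as a discrete-time system $x_{k+1} = f(x_k)$, a Lyapunov invariant is a function $V$ satisfying a difference inequality along all trajectories,

\[
V\bigl(f(x)\bigr) \le \theta\, V(x) \quad \text{for all reachable } x, \qquad \theta \in (0, 1].
\]

If, in addition, $V(x_0) < 1$ for every initial state and $V(x) \ge 1$ on every unsafe state (say, those producing an overflow), then no execution can ever reach the unsafe set. Restricting $V$ to a parameterized family (e.g., linear or quadratic in the state) turns the search for such a certificate into a convex feasibility problem.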
APA, Harvard, Vancouver, ISO, and other styles
49

Jakšić Krüger, Tatjana. "Development, implementation and theoretical analysis of the bee colony optimization meta-heuristic method." Phd thesis, Univerzitet u Novom Sadu, Fakultet tehničkih nauka u Novom Sadu, 2017. https://www.cris.uns.ac.rs/record.jsf?recordId=104550&source=NDLTD&language=en.

Full text
Abstract:
The Ph.D. thesis presents a comprehensive study of the bee colony optimization meta-heuristic method (BCO). A theoretical analysis of the method is conducted with the tools of probability theory, and necessary and sufficient conditions are established for convergence of the BCO method towards an optimal solution. Three parallelization strategies and five corresponding implementations of the constructive variant of BCO are proposed for distributed-memory systems. The influence of the method's parameters on the performance of the BCO algorithm is analyzed experimentally for two combinatorial optimization problems: a scheduling problem and the satisfiability problem.
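A minimal sketch of the constructive BCO loop, in one common Teodorović-style variant rather than the thesis's exact algorithm, shown on a toy 0/1 knapsack instance; all parameters and the loyalty rule are illustrative.

```python
import random

values  = [10, 7, 4, 9, 3]
weights = [ 5, 4, 2, 6, 2]
CAPACITY = 10
B, NC, ITERS = 5, len(values), 40   # bees, construction steps, iterations

def quality(sol):
    """Total value if feasible, zero if the capacity is exceeded."""
    w = sum(wi for wi, s in zip(weights, sol) if s)
    return sum(vi for vi, s in zip(values, sol) if s) if w <= CAPACITY else 0

best = [0] * len(values)
for _ in range(ITERS):
    sols = [[] for _ in range(B)]
    for step in range(NC):
        # Forward pass: each bee extends its partial solution.
        for sol in sols:
            sol.append(random.randint(0, 1))
        # Backward pass: bees compare; better bees tend to stay loyal,
        # the rest abandon their partial solution and follow a loyal bee.
        quals = [quality(sol + [0] * (NC - len(sol))) for sol in sols]
        qmax = max(quals) or 1
        loyal = [random.random() < q / qmax for q in quals]
        recruiters = [i for i in range(B) if loyal[i]] or list(range(B))
        for i in range(B):
            if not loyal[i]:
                sols[i] = sols[random.choice(recruiters)][:]
    best = max(sols + [best], key=quality)

print(best, quality(best))
```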
APA, Harvard, Vancouver, ISO, and other styles
50

Kanaglekar, Rohit. "Development and Implementation of Discontinuous Galerkin (DG) Finite Element Methods for Topology Optimization." University of Cincinnati / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1115995821.

Full text
APA, Harvard, Vancouver, ISO, and other styles