To see the other types of publications on this topic, follow the link: OPTIMIZER ALGORITHM.

Dissertations / Theses on the topic 'OPTIMIZER ALGORITHM'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'OPTIMIZER ALGORITHM.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Bhandare, Ashray Sadashiv. "Bio-inspired Algorithms for Evolving the Architecture of Convolutional Neural Networks." University of Toledo / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1513273210921513.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Lakshminarayanan, Srivathsan. "Nature Inspired Grey Wolf Optimizer Algorithm for Minimizing Operating Cost in Green Smart Home." University of Toledo / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1438102173.

3

Martz, Matthew. "Preliminary Design of an Autonomous Underwater Vehicle Using a Multiple-Objective Genetic Optimizer." Thesis, Virginia Tech, 2008. http://hdl.handle.net/10919/33291.

Abstract:
The process developed herein uses a Multiple Objective Genetic Optimization (MOGO) algorithm. The optimization is implemented in ModelCenter (MC) from Phoenix Integration. It uses a genetic algorithm that searches the design space for optimal, feasible designs by considering three Measures of Performance (MOPs): Cost, Effectiveness, and Risk. The complete synthesis model is comprised of an input module, the three primary AUV synthesis modules, a constraint module, three objective modules, and a genetic algorithm. The effectiveness rating determined by the synthesis model is based on nine attributes identified in the US Navy's UUV Master Plan and four performance-based attributes calculated by the synthesis model. To solve multi-attribute decision problems, the Analytical Hierarchy Process (AHP) is used. Once the MOGO has generated a final generation of optimal, feasible designs, the decision-maker(s) can choose candidate designs for further analysis. A sample AUV synthesis was performed and five candidate AUVs were analyzed.
Master of Science
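The multi-attribute weighting step that the abstract attributes to AHP can be sketched minimally: derive attribute weights from a pairwise-comparison matrix via the row geometric-mean approximation of the principal eigenvector. The three attributes follow the abstract (Cost, Effectiveness, Risk), but every comparison value below is invented for illustration and is not taken from the thesis.

```python
from math import prod

# Pairwise comparisons (rows/cols: Cost, Effectiveness, Risk).
# A value of 3.0 at [1][0] means "Effectiveness matters 3x more than Cost".
comparisons = [
    [1.0, 1 / 3, 3.0],
    [3.0, 1.0, 5.0],
    [1 / 3, 1 / 5, 1.0],
]

def ahp_weights(matrix):
    n = len(matrix)
    geo = [prod(row) ** (1.0 / n) for row in matrix]  # row geometric means
    total = sum(geo)
    return [g / total for g in geo]                   # normalize to sum to 1

weights = ahp_weights(comparisons)
print([round(w, 3) for w in weights])  # Effectiveness gets the largest weight
```

With these illustrative comparisons, the middle attribute dominates, which is exactly the behavior a decision-maker would inspect before feeding the weights into an effectiveness score.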
4

Parandekar, Amey V. "Development of a Decision Support Framework forIntegrated Watershed Water Quality Management and a Generic Genetic Algorithm Based Optimizer." NCSU, 1999. http://www.lib.ncsu.edu/theses/available/etd-19990822-032656.

Abstract:

PARANDEKAR, AMEY VIJAY. Development of a Decision Support Framework for Integrated Watershed Water Quality Management and a Generic Genetic Algorithm Based Optimizer. (Under the direction of Dr. S. Ranji Ranjithan.) The watershed management approach is a framework for addressing water quality problems at a watershed scale in an integrated manner that considers many conflicting issues, including cost, environmental impact, and equity, in evaluating alternative control strategies. This framework enhances the capabilities of current environmental analysis frameworks through the inclusion of additional systems-analytic tools, such as optimization algorithms that enable an efficient search for cost-effective control strategies, and uncertainty analysis procedures that estimate the reliability of achieving water quality targets. Traditional optimization procedures impose severe restrictions on using complex nonlinear environmental processes within a systematic search. Hence, genetic algorithms (GAs), a class of general, probabilistic, heuristic, global search procedures, are used. The current implementation of this framework is coupled with US EPA's BASINS software system. A component of the current research is also the development of GA object classes and optimization model classes for generic use. A graphical user interface allows users to formulate mathematical programming problems and solve them using the GA methodology. This set of GA object classes and user interface classes together comprises the Generic Genetic Algorithm Based Optimizer (GeGAOpt), which is demonstrated through interactive applications solving several unconstrained as well as constrained function optimization problems. The design of these systems is based on the object-oriented paradigm and current software engineering practices such as object-oriented analysis (OOA) and object-oriented design (OOD). The development follows the waterfall model for software development. The Unified Modeling Language (UML) is used for the design. The implementation is carried out using the Java programming environment.

5

Parandekar, Amey V. "Development of a decision support framework for integrated watershed water quality management and a Generic Genetic Algorithm Based Optimizer." Raleigh, NC : North Carolina State University, 1999. http://www.lib.ncsu.edu/etd/public/etd-492632279902331/etd.pdf.

6

Pillai, Ajit Chitharanjan. "On the optimization of offshore wind farm layouts." Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/25470.

Abstract:
Layout optimization of offshore wind farms seeks to automate the design of the wind farm and the placement of wind turbines so that the proposed wind farm maximizes its potential. Optimizing an offshore wind farm layout therefore means minimizing the costs of the wind farm while maximizing energy extraction, taking into account the effects of wakes on the resource; the electrical infrastructure required to collect the energy generated; the variation of costs across the site; and all technical and consenting constraints that the wind farm developer must adhere to. As wakes, electrical losses, and costs are non-linear, this produces a complex optimization problem. This thesis describes the design, development, validation, and initial application of a new framework for the optimization of offshore wind farm layouts using either a genetic algorithm or a particle swarm optimizer. The methodology and analysis tool have been developed such that individual components can either be used to analyze a particular wind farm layout or used in conjunction with the optimization algorithms to design and optimize wind farm layouts. To accomplish this, separate modules have been developed and validated for the design and optimization of the necessary electrical infrastructure, the assessment of energy production accounting for losses, and the estimation of project costs. By including site-dependent parameters and project-specific constraints, the framework is capable of exploring the influence the wind farm layout has on the levelized cost of energy (LCOE) of the project. Deploying the integrated framework with two common engineering metaheuristics on hypothetical, existing, and future wind farms highlights the advantages of this holistic layout optimization framework over the industry-standard approaches commonly used in offshore wind farm design, leading to a reduction in LCOE.
Application of the tool to a UK Round 3 site recently under development has also highlighted how the use of this tool can aid in the development of future regulations by considering various constraints on the placement of wind turbines within the site and exploring how these impact the levelized cost of energy.
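As a rough illustration of the particle swarm option the abstract mentions, here is a minimal PSO minimizing a toy stand-in for the layout cost. The objective, bounds, and coefficients (`w`, `c1`, `c2`) are illustrative assumptions, not the thesis's LCOE model.

```python
import random

random.seed(1)

def pso(objective, dim=2, particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    # Initialize particle positions randomly, velocities at zero.
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]                  # per-particle best positions
    gbest = min(pbest, key=objective)[:]         # swarm-wide best position
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                # Velocity update: inertia + pull toward pbest + pull toward gbest.
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
                if objective(pbest[i]) < objective(gbest):
                    gbest = pbest[i][:]
    return gbest

# Toy "layout cost": a quadratic bowl with its minimum at (1, -2).
toy_cost = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
best = pso(toy_cost)
print([round(v, 2) for v in best])
```

In the thesis's setting, `toy_cost` would be replaced by the full chain of wake, electrical-loss, and cost modules evaluating a candidate turbine layout.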
7

Luo, Hui Long. "Optimized firefly algorithm and application." Thesis, University of Macau, 2015. http://umaclib3.umac.mo/record=b3335707.

8

Thulo, Motlatsi Isaac. "Optimized Security-aware VM placement algorithm." Diss., University of Pretoria, 2019. http://hdl.handle.net/2263/73387.

Abstract:
The rapidly increasing dependency on clouds results in Cloud Service Providers (CSPs) having to deal with high demand for cloud services. To meet these demands, CSPs take advantage of virtualization technology to provide a seemingly unlimited pool of computing resources. This technology consolidates multiple instances of Virtual Machines (VMs) onto the same Physical Machines (PMs), sharing physical computing resources. To guarantee customer satisfaction, CSPs need to ensure an optimized cloud environment that provides good Quality of Service (QoS) conforming to the performance levels stipulated in Service Level Agreements (SLAs). However, vulnerabilities associated with virtualization make it difficult to ensure optimization, especially in multi-tenant clouds, where VMs belonging to adversarial users may be consolidated onto the same PMs. This enables inter-VM attacks that take advantage of shared resources to spy on, disrupt, or corrupt co-located VMs. In this regard, it is important to place VMs in a manner that minimizes inter-VM attacks while still meeting the original objective of providing good QoS. The aim of this study is to implement a VM placement algorithm that reduces the architectural vulnerabilities brought by multi-tenancy while observing optimization objectives. It surveys currently available VM placement algorithms and evaluates them to identify the algorithm with the strongest optimization objectives. The identified VM placement algorithm is then augmented with security features to implement the Optimized Security-aware (O-Sec) VM placement algorithm. CloudSim Plus is used to evaluate and validate the implemented O-Sec VM placement algorithm. The evaluations in this study show that the O-Sec VM placement algorithm retains the optimization objectives inherited from the VM placement algorithm it augments.
Dissertation (MSc)--University of Pretoria, 2019.
Computer Science
MSc
Unrestricted
9

陳從輝 and Chung-fai Chan. "MOS parameter extraction globally optimized with genetic algorithm." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1996. http://hub.hku.hk/bib/B31212785.

10

Pomerleau, François. "Registration algorithm optimized for simultaneous localization and mapping." Mémoire, Université de Sherbrooke, 2008. http://savoirs.usherbrooke.ca/handle/11143/1465.

Abstract:
Building maps of an unknown environment while keeping track of the current position is a major step toward safe and autonomous robot navigation. Within the last 20 years, Simultaneous Localization And Mapping (SLAM) became a topic of great interest in robotics. The basic idea of this technique is to combine proprioceptive robot motion information with external environmental information to minimize global positioning errors. Because the robot is moving in its environment, exteroceptive data comes from different points of view and must be expressed in the same coordinate system to be combined. The latter process is called registration. Iterative Closest Point (ICP) is a registration algorithm with very good performance in several 3D model reconstruction applications, and was recently applied to SLAM. However, SLAM has specific needs in terms of real-time operation and robustness compared to 3D model reconstruction, leaving room for optimizations specific to robot mapping. After reviewing existing SLAM approaches, this thesis introduces a new registration variant called Kd-ICP. This registration technique iteratively decreases the error between misaligned point clouds without extracting specific environmental features. Results demonstrate that the new rejection technique used to achieve map registration is more robust to large initial positioning errors. Experiments with simulated and real environments suggest that Kd-ICP is more robust than other ICP variants. Moreover, Kd-ICP is fast enough for real-time applications and is able to deal with sensor occlusions and partially overlapping maps. Realizing fast and robust local map registrations opens the door to new opportunities in SLAM: it becomes feasible to minimize the accumulation of robot positioning errors, to fuse local environmental information, to reduce memory usage when the robot revisits the same location, and to evaluate the network constraints needed to minimize global mapping errors.
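The iterate-match-align structure that ICP variants share can be sketched minimally. This translation-only, brute-force 2D version is an illustration only: the thesis's Kd-ICP additionally uses a kd-tree for matching and a robust outlier-rejection step, and none of the values below come from the thesis.

```python
def icp_translation(cloud, reference, iters=20):
    """Estimate the translation aligning `cloud` onto `reference`."""
    tx, ty = 0.0, 0.0
    for _ in range(iters):
        moved = [(x + tx, y + ty) for x, y in cloud]
        # Data association: nearest reference point for each moved point
        # (brute force here; a kd-tree makes this step fast).
        pairs = [min(reference, key=lambda r: (r[0] - x) ** 2 + (r[1] - y) ** 2)
                 for x, y in moved]
        # Alignment: shift the whole cloud by the mean residual.
        tx += sum(r[0] - m[0] for r, m in zip(pairs, moved)) / len(moved)
        ty += sum(r[1] - m[1] for r, m in zip(pairs, moved)) / len(moved)
    return tx, ty

reference = [(0, 0), (1, 0), (0, 1), (2, 2)]
cloud = [(x - 0.4, y + 0.3) for x, y in reference]  # same scene, offset
tx, ty = icp_translation(cloud, reference)
print(round(tx, 2), round(ty, 2))  # recovers the (0.4, -0.3) offset
```

A full ICP also estimates rotation and rejects bad correspondences; the point here is only the repeated match-then-align loop that registration builds on.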
11

Chan, Chung-fai. "MOS parameter extraction globally optimized with genetic algorithm /." Hong Kong : University of Hong Kong, 1996. http://sunzi.lib.hku.hk/hkuto/record.jsp?B1900011X.

12

Zheng, Haoxuan Ph D. Massachusetts Institute of Technology. "21 cm cosmology with optimized instrumentation and algorithms." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/104536.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2016.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 213-236).
Precision cosmology has made tremendous progress in the past two decades thanks to a large amount of high-quality data from the Cosmic Microwave Background (CMB), galaxy surveys, and other cosmological probes. However, most of our universe's volume, corresponding to the period between the CMB and when the first stars formed, remains unexplored. Since there were no luminous objects during that period, it is called the cosmic "dark ages". 21 cm cosmology is the study of the high-redshift universe using the hyperfine transition of neutral hydrogen, and it has the potential to probe that uncharted volume of our universe and the ensuing cosmic dawn, placing unprecedented constraints on our cosmic history as well as on fundamental physics. My Ph.D. thesis work tackles the most pressing observational challenges in the field of 21 cm cosmology: precision calibration and foreground characterization. I led the design, deployment, and data analysis of the MIT Epoch of Reionization (MITEoR) radio telescope, an interferometric array of 64 dual-polarization antennas whose goal was to test technology and algorithms for incorporation into the Hydrogen Epoch of Reionization Array (HERA). In four papers, I develop, test, and improve many algorithms in low-frequency radio interferometry that are optimized for 21 cm cosmology. These include a set of calibration algorithms forming a redundant calibration pipeline, which I created and demonstrated to be the most precise and robust calibration method currently available. By applying this redundant calibration to high-quality data collected by the Precision Array for Probing the Epoch of Reionization (PAPER), we have produced the tightest upper bound on the redshifted 21 cm signal to date. I have also created new imaging algorithms specifically tailored to the latest generation of radio interferometers, allowing them to make Galactic foreground maps that are not accessible through traditional radio interferometry.
Lastly, I have improved on the algorithm that synthesizes foreground maps into the Global Sky Model (GSM), and used it to create an improved model of diffuse sky emission from 10 MHz through 5 THz.
by Haoxuan Zheng.
Ph. D.
13

Kopel, Ariel. "NEURAL NETWORKS PERFORMANCE AND STRUCTURE OPTIMIZATION USING GENETIC ALGORITHMS." DigitalCommons@CalPoly, 2012. https://digitalcommons.calpoly.edu/theses/840.

Abstract:
Artificial neural networks have found many applications in various fields such as function approximation, time-series prediction, and adaptive control. The performance of a neural network depends on many factors, including the network structure, the selection of activation functions, the learning rate of the training algorithm, and the initial synaptic weight values. Genetic algorithms are inspired by Charles Darwin's theory of natural selection ("survival of the fittest"). They are heuristic search techniques based on aspects of natural evolution, such as inheritance, mutation, selection, and crossover. This research utilizes a genetic algorithm to optimize multi-layer feedforward neural network performance and structure. The goal is to minimize both the output error and the number of network connections. The algorithm is modeled in C++ and tested on several different data sets. Computer simulation results show that the proposed algorithm can successfully determine the appropriate network size for optimal performance. This research also includes studies of the effects of population size, crossover type, probability of bit mutation, and the error scaling factor.
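The two-objective idea in this abstract (minimize output error together with the number of network connections) can be sketched with a toy genetic algorithm. Everything below is illustrative, not the thesis's C++ implementation: the "network" is reduced to a weight vector, a "connection" is any weight that is not near zero, and the constants are invented.

```python
import random

random.seed(0)

TARGET = [0.0, 1.0, 0.0, -1.0]  # weights a perfectly fitted network would have
CONN_COST = 0.05                 # penalty per active connection (illustrative)

def fitness(genome):
    # Combined objective: squared error plus a cost for each active connection.
    error = sum((g - t) ** 2 for g, t in zip(genome, TARGET))
    connections = sum(1 for g in genome if abs(g) > 1e-3)
    return error + CONN_COST * connections

def crossover(a, b):
    cut = random.randrange(1, len(a))     # one-point crossover
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.2):
    return [g + random.gauss(0, 0.3) if random.random() < rate else g
            for g in genome]

def evolve(pop_size=40, generations=60):
    pop = [[random.uniform(-2, 2) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 4]      # truncation selection
        pop = elite + [mutate(crossover(random.choice(elite),
                                        random.choice(elite)))
                       for _ in range(pop_size - len(elite))]
    return min(pop, key=fitness)

best = evolve()
print(round(fitness(best), 3))
```

Because `CONN_COST` rewards pruning, the search is pushed toward genomes whose irrelevant weights sit near zero, which mirrors the thesis's goal of finding a small network that still fits.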
14

Mei, Zhenyu, and Ye Tian. "Optimized combination model and algorithm of parking guidance information configuration." SpringerOpen, 2011. http://hdl.handle.net/10150/610136.

Abstract:
Operators of parking guidance and information (PGI) systems often have difficulty providing the best car park availability information to drivers in periods of high demand. A new PGI configuration model based on an optimized combination method was proposed by analyzing parking choice behavior. This article first describes a parking choice behavioral model incorporating drivers' perceptions of waiting times at car parks based on PGI signs. This model was used to predict the influence of PGI signs on the overall performance of the traffic system. Relationships were then developed for estimating arrival rates at car parks based on driver characteristics, car park attributes, and the car park availability information displayed on PGI signs. A mathematical program was formulated to determine the optimal PGI sign display configuration that minimizes total travel time. A genetic algorithm was used to identify solutions that significantly reduced queue lengths and total travel time compared with existing practices. These procedures were applied to an existing PGI system operating in Deqing Town and Xiuning City, where significant reductions in the total travel time of parking vehicles were achieved with the PGI system configured accordingly. This would reduce traffic congestion and lead to various environmental benefits.
15

Nilsson, Mattias. "Evaluation of Computer Vision Algorithms Optimized for Embedded GPU:s." Thesis, Linköpings universitet, Datorseende, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-112575.

Abstract:
Interest in using GPUs as general processing units for heavy computations (GPGPU) has increased in the last couple of years. Manufacturers such as Nvidia and AMD make GPUs powerful enough to outperform CPUs by an order of magnitude for suitable algorithms. For embedded systems, GPUs are not as popular yet. The embedded GPUs available on the market have often not been able to justify hardware changes from the current systems (CPUs and FPGAs) to systems using embedded GPUs: they have been too hard to obtain, too energy-consuming, and not suitable for some algorithms. At SICK IVP, advanced computer vision algorithms run on FPGAs. This master thesis optimizes two such algorithms for embedded GPUs and evaluates the result. It also evaluates the status of the embedded GPUs on the market today. The results indicate that embedded GPUs perform well enough to run the evaluated algorithms as fast as needed. The implementations are also easy to understand compared to implementations for FPGAs, which are the competing hardware.
16

Azar, Danielle. "Using genetic algorithms to optimize software quality estimation models." Thesis, McGill University, 2004. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=84985.

Abstract:
Assessing software quality is fundamental in the software developing field. Most software quality characteristics cannot be measured before a certain period of use of the software product. However, they can be predicted or estimated based on other measurable quality attributes. Software quality estimation models are built and used extensively for this purpose. Most such models are constructed using statistical or machine learning techniques. However, in this domain it is very hard to obtain data sets on which to train such models; often such data sets are proprietary, and the publicly available data sets are too small, or not representative. Hence, the accuracy of the models often deteriorates significantly when they are used to classify new data.
This thesis explores the use of genetic algorithms for the problem of optimizing existing rule-based software quality estimation models. The main contributions of this work are two evolutionary approaches to this optimization problem. In the first approach, we assume the existence of several models, and we use a genetic algorithm to combine them, and adapt them to a given data set. The second approach optimizes a single model. The core concept of this thesis is to consider existing models that have been constructed on one data set and adapt them to new data. In real applications, this can be seen as adapting already existing software quality estimation models that have been constructed on data extracted from common domain knowledge to context-specific data. Our technique maintains the white-box nature of the models which can be used as guidelines in future software development processes.
17

Leong, Sio Hong. "Kinematics control of redundant manipulators using CMAC neural networks combined with Descent Gradient Optimizers & Genetic Algorithm Optimizers." Thesis, University of Macau, 2003. http://umaclib3.umac.mo/record=b1446170.

18

Nguyen, Quoc Tuan. "Using the genetic algorithm to optimize Web search: Lessons from biology." Thesis, University of Ottawa (Canada), 2006. http://hdl.handle.net/10393/27160.

Abstract:
Searching for information on the Web is a relatively inefficient process. My goal is to develop a method that optimizes web search queries without user intervention. Developing intelligent ways to automate this process includes the development of algorithms that automatically manipulate the use of keywords to produce the desired output. Genetic algorithms (GA) provide a potentially useful approach in this area. However, these approaches have not fully exploited the biological concepts associated with genetic reproduction and evolution. I hypothesize that an approach that uses GA but modifies it to include the biological concepts of structural and regulatory gene types and the use of a combination of deletion operator and silent genes will improve GA performance in optimizing Web search. In this paper, I describe this approach and its implementation in simulations of Web search tasks using three popular Web search engines (Google, Yahoo and Netscape). The results of this implementation are presented and are compared to the performance of a similar, but unmodified GA in the same tasks. (Abstract shortened by UMI.)
19

Simaremare, Harris. "A development of secure and optimized AODV routing protocol using ant algorithm." Thesis, Mulhouse, 2013. http://www.theses.fr/2013MULH6753/document.

Abstract:
Wireless networks have grown significantly in the field of telecommunication networks. Their main characteristic is to provide access to information regardless of the geographical and topological position of a user. One of the most popular wireless network technologies is the mobile ad hoc network (MANET). A MANET is a decentralized, self-organizing, and infrastructure-less network. Every node acts as a router, establishing communication between nodes over wireless links. Since there is no administrative node to control the network, every node participating in the network is responsible for the reliable operation of the whole network. Nodes forward communication packets between each other to find or establish a communication route. As in all networks, a MANET is managed and becomes functional through routing protocols. Some MANET routing protocols are Ad Hoc on Demand Distance Vector (AODV), Optimized Link State Routing (OLSR), Topology Dissemination Based on Reverse-Path Forwarding (TBRPF), and Dynamic Source Routing (DSR). Due to the unique characteristics of mobile ad hoc networks, the major issues in designing a routing protocol are security and network performance. In terms of performance, AODV outperforms the other MANET routing protocols. In terms of security, secure routing protocols fall into two categories based on the security method: cryptographic mechanisms and trust-based mechanisms. We chose a trust mechanism to secure the protocol because it performs better than cryptographic methods. In the first part, we combine the gateway feature of AODV+ and the reverse method from R-AODV to obtain an optimized protocol for hybrid networks, called AODV-UI. The reverse request mechanism of R-AODV is employed to optimize the performance of the AODV routing protocol, and the gateway module from AODV+ is added to communicate with infrastructure nodes.
We perform simulations using NS-2 to evaluate the performance of AODV-UI. The performance evaluation parameters are packet delivery rate, end-to-end delay, and routing overhead. Simulation results show that AODV-UI outperforms AODV+ in terms of performance. Energy consumption and performance are evaluated in simulation scenarios with different numbers of source nodes, different maximum speeds, and different mobility models. We compare these scenarios under the Random Waypoint (RWP) and Reference Point Group Mobility (RPGM) models. The simulation results show that under the RWP mobility model, AODV-UI consumes little energy when the speed and the number of nodes accessing the gateway are increased. The performance comparison across mobility models shows that AODV-UI performs better under the RWP mobility model; overall, AODV-UI is more suitable for the RWP mobility model. In the second part, we propose a new secure AODV protocol called Trust AODV, which uses a trust mechanism. Communication packets are sent only to trusted neighbor nodes. The trust calculation is based on the behavior and activity information of each node, and is divided into Trust Global (TG) and Trust Local (TL). TG is a trust calculation based on the total number of routing packets received and sent. TL is a comparison between the total number of packets received from and forwarded by a neighbor node for specific nodes. Nodes determine the total trust level of their neighbors by accumulating the TL and TG values. When a node is suspected of being an attacker, the security mechanism isolates it from the network before communication is established. [...]
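The TG/TL bookkeeping the abstract describes can be sketched as simple packet-count ratios. The 0.5 threshold and the equal weighting of TG and TL below are assumptions made for illustration; the thesis's actual formulas and parameters may differ.

```python
def trust_global(rx_routing, tx_routing):
    # TG: compare total routing packets received vs. sent by the neighbor;
    # balanced behavior gives a ratio near 1.
    return min(rx_routing, tx_routing) / max(rx_routing, tx_routing, 1)

def trust_local(received_from_us, forwarded_by_neighbor):
    # TL: fraction of our packets the neighbor actually forwarded on.
    if received_from_us == 0:
        return 1.0
    return min(forwarded_by_neighbor / received_from_us, 1.0)

def is_trusted(rx_routing, tx_routing, received, forwarded, threshold=0.5):
    # Accumulate TG and TL (equal weighting assumed) into a total trust level.
    total = 0.5 * trust_global(rx_routing, tx_routing) \
          + 0.5 * trust_local(received, forwarded)
    return total >= threshold

# A cooperative neighbor forwards most of what it receives from us...
print(is_trusted(100, 95, 40, 38))  # True
# ...while a black-hole-like neighbor receives but forwards almost nothing.
print(is_trusted(100, 95, 40, 1))   # False
```

An untrusted neighbor would then be excluded from route discovery, which is the isolation step the abstract describes.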
20

Morvan, Hervé. "Contribution à l'élaboration d'un optimiseur de structures mécaniques sous contraintes statiques et/ou dynamiques : Application a l'étude des véhicules de transport guidés." Valenciennes, 1996. https://ged.uphf.fr/nuxeo/site/esupversions/59a5dd4a-0308-4b43-a6ea-4ea3950f7e67.

Abstract:
The design of guided transport vehicles draws on an ever-growing number of areas of mechanics. We therefore study the potential of a structural optimizer. The study proceeds in two stages: the development of an industrial optimization tool that is at once general, robust, efficient, and easy to use in a design office; and the application of this software at a rail-vehicle design site. The proposed software is based on a convex linearization method solved by a dual approach. The improvements introduced are an automatic constraint-relaxation system, a quadratic penalization factor, and a method for minimizing constraint violations. In addition, a system monitors the shape of the eigenmodes during optimization under dynamic constraints. A genetic algorithm used in a pre-optimization phase increases the generality of the algorithm. Three optimizations are carried out on industrial structures. They were performed with different analysis codes and demonstrate the reliability of the developed product.
APA, Harvard, Vancouver, ISO, and other styles
21

Sohangir, Soroosh. "Optimized feature selection using NeuroEvolution of Augmenting Topologies (NEAT)." OpenSIUC, 2011. https://opensiuc.lib.siu.edu/theses/767.

Full text
Abstract:
AN ABSTRACT OF THE THESIS OF SOROOSH SOHANGIR, for the MASTER OF SCIENCE degree in COMPUTER SCIENCE, presented on 9 November 2011, at Southern Illinois University Carbondale. TITLE: OPTIMIZED FEATURE SELECTION USING NEUROEVOLUTION OF AUGMENTING TOPOLOGIES (NEAT) MAJOR PROFESSOR: Dr. Shahram Rahimi. Feature selection using NeuroEvolution of Augmenting Topologies (NEAT) is a new approach. In this thesis an investigation is carried out into an implementation based on optimization of the network topology and protection of innovation through speciation, similar to what happens in nature. NEAT is implemented through the JNEAT package, and Utans's method for feature selection is deployed. The performance of this novel method is compared with feature selection using a Multilayer Perceptron (MLP), where the Belue, Tekto, and Utans feature selection methods are adopted. According to the data unveiled in this thesis, the number of species, the training time, the accuracy and the number of hidden neurons are notably improved compared with conventional networks; for instance, the training time is reduced by a factor of three.
APA, Harvard, Vancouver, ISO, and other styles
22

Elliott, Donald M. "Application of a genetic algorithm to optimize quality assurance in software development." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from the National Technical Information Service, 1993. http://handle.dtic.mil/100.2/ADA273193.

Full text
Abstract:
Thesis (M.S. in Information Technology Management) Naval Postgraduate School, September 1993.
Thesis advisor(s): Ramesh, B. ; Abdel-Hamid, Tarek K. "September 1993." Includes bibliographical references. Also available online.
APA, Harvard, Vancouver, ISO, and other styles
23

Johnson, Donald C. "Application of a genetic algorithm to optimize staffing levels in software development." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1994. http://handle.dtic.mil/100.2/ADA293725.

Full text
Abstract:
Thesis (M.S. in Information Technology Management) Naval Postgraduate School, December 1994.
"December 1994." Thesis advisor(s): B. Ramesh, T. Hamid. Includes bibliographical references. Also available online.
APA, Harvard, Vancouver, ISO, and other styles
24

Jeong, Woo Yong. "Structural analysis and optimized design of general nonprismatic I-section members." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/53020.

Full text
Abstract:
Tapered I-section members have been employed widely for the design of long-span structures such as large clear-span buildings, stadiums, and bridges because of their structural efficiency. For optimized member design providing maximum strength and stiffness at minimum cost, general non-prismatic (tapered and/or stepped cross-sections) as well as singly-symmetric cross-sections have been commonly employed. Fabricators equipped to produce web-tapered members can create a wide range of optimized members from a minimal stock of different plates and coil. Linearly tapered web plates can be nested to minimize scrap. In many cases, the savings in material and manufacturing efficiencies lead to significant cost savings relative to the use of comparable rolled shapes. To employ Design Guide 25 (DG25) which provides guidance for the application of the provisions of the AISC Specification to the design of frames composed of general non-prismatic members, designers need a robust and general capability for determining the elastic buckling loads. Furthermore, robust tools are needed to facilitate the selection of optimum non-prismatic member designs based on minimum cost. This research addresses the calculation of the elastic buckling loads for general non-prismatic members subjected to general loadings and bracing conditions (typically involving multiple brace points along a given member). This research develops an elastic buckling analysis tool (SABRE2) that can be used to define general geometries, loadings and bracing conditions and obtain a rigorous calculation of the elastic buckling load levels. The three-dimensional finite element equations using open section thin-walled beam theory are derived and formulated using a co-rotational approach including load height effects of transverse loads, stepped flange dimensions, and bracing and support height effects. 
In addition, this research addresses an algorithmic means to obtain automatic optimized member and frame designs using the above types of members based on Genetic Algorithms (GA). These capabilities are implemented in the tool SABRE2D, which provides a graphical user interface for optimized member and frame design based on updated DG25 provisions and the elastic buckling load calculations from SABRE2.
APA, Harvard, Vancouver, ISO, and other styles
25

Mnasri, Sami. "Contributions to the optimized deployment of connected sensors on the Internet of Things collection networks." Thesis, Toulouse 2, 2018. http://www.theses.fr/2018TOU20046/document.

Full text
Abstract:
IoT collection networks raise many optimization problems, in particular because the sensors have limited energy, processing and memory capacities. In order to improve the performance of the network, we are interested in a contribution related to the optimization of the 3D indoor deployment of nodes using multi-objective mathematical models relying on hybrid meta-heuristics. Our main objective is therefore to propose hybridizations and modifications of optimization algorithms in order to achieve an appropriate 3D positioning of the nodes in wireless sensor networks while satisfying a set of constraints and objectives that are often antagonistic. We focus our contribution on meta-heuristics hybridized and combined with procedures for reducing dimensionality and for incorporating user preferences. These hybridization schemes are all validated by numerical tests. The simulations are then completed by, and confronted with, experiments on real testbeds.
APA, Harvard, Vancouver, ISO, and other styles
26

Guan, C. "Evolutionary and swarm algorithm optimized density-based clustering and classification for data analytics." Thesis, University of Liverpool, 2017. http://livrepository.liverpool.ac.uk/3021212/.

Full text
Abstract:
Clustering is one of the most widely used pattern recognition technologies for data analytics. Density-based clustering is a category of clustering methods which can find arbitrarily shaped clusters. A well-known density-based clustering algorithm is Density-Based Spatial Clustering of Applications with Noise (DBSCAN). DBSCAN has three drawbacks: firstly, its parameters are hard to set; secondly, the number of clusters cannot be controlled by the users; and thirdly, DBSCAN cannot directly be used as a classifier. To address these drawbacks, a novel framework, Evolutionary and Swarm Algorithm optimised Density-based Clustering and Classification (ESA-DCC), is proposed. Evolutionary and Swarm Algorithms (ESAs) have been applied in various research fields involving optimisation problems, including data analytics. Numerous categories of ESAs have been proposed, such as Genetic Algorithms (GAs), Particle Swarm Optimization (PSO), Differential Evolution (DE) and Artificial Bee Colony (ABC). In this thesis, ESA is used to search for the best parameters of density-based clustering and classification in the ESA-DCC framework, addressing the first drawback of DBSCAN. To address the second drawback, four types of fitness functions are defined to enable users to set the number of clusters as input. A supervised fitness function is defined to use ESA-DCC as a classifier, addressing the third drawback. Four ESA-DCC methods, GA-DCC, PSO-DCC, DE-DCC and ABC-DCC, are developed. The performance of the ESA-DCC methods is compared with K-means and DBSCAN using ten datasets. The experimental results indicate that the proposed ESA-DCC methods can find the optimised parameters in both supervised and unsupervised contexts. The proposed methods are applied in a product recommender system and in image segmentation cases.
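The core idea of ESA-DCC, an evolutionary search over density-based clustering parameters, can be illustrated with a minimal GA. The toy fitness function below merely stands in for a real clustering-quality measure (in the framework it would be one of the four defined fitness functions, evaluated by actually running DBSCAN); all operators and constants are assumptions:

```python
import random

def fitness(eps: float, min_pts: int) -> float:
    """Toy stand-in for a clustering-quality score, peaked at the
    (assumed) ideal parameters eps = 0.5, min_pts = 4."""
    return -((eps - 0.5) ** 2) - 0.1 * (min_pts - 4) ** 2

def ga_search(generations=60, pop_size=20, seed=1):
    """Evolve (eps, min_pts) pairs toward the fitness optimum."""
    rng = random.Random(seed)
    pop = [(rng.uniform(0.01, 2.0), rng.randint(2, 20)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(*p), reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)       # crossover: average the parents
            eps = max(1e-3, (a[0] + b[0]) / 2 + rng.gauss(0, 0.05))  # + mutation
            mp = max(2, (a[1] + b[1]) // 2 + rng.choice([-1, 0, 1]))
            children.append((eps, mp))
        pop = parents + children
    return max(pop, key=lambda p: fitness(*p))

best_eps, best_min_pts = ga_search()
print(round(best_eps, 2), best_min_pts)  # converges near eps = 0.5, min_pts = 4
```

Swapping the GA loop for PSO, DE or ABC updates over the same (eps, min_pts) encoding gives the other three ESA-DCC variants.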
APA, Harvard, Vancouver, ISO, and other styles
27

MUKHERJEE, NANDINI. "3D DEFORMABLE CONTOUR SURFACE RECONSTRUCTION: AN OPTIMIZED ESTMATION METHOD." University of Cincinnati / OhioLINK, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1078255615.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Yan, Yu Pei. "A path planning algorithm for the mobile robot in the indoor and dynamic environment based on the optimized RRT algorithm." Thesis, University of Macau, 2018. http://umaclib3.umac.mo/record=b3951594.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Fan, Jin. "Using genetic algorithms to optimise wireless sensor network design." Thesis, Loughborough University, 2009. https://dspace.lboro.ac.uk/2134/6312.

Full text
Abstract:
Wireless Sensor Networks (WSNs) have gained a lot of attention because of their potential to immerse deeper into people's lives. The applications of WSNs range from small home environment networks to large habitat monitoring. These highly diverse scenarios impose different requirements on WSNs and lead to distinct design and implementation decisions. This thesis presents an optimization framework for WSN design which selects a proper set of protocols and number of nodes before a practical network deployment. A Genetic Algorithm (GA)-based Sensor Network Design Tool (SNDT) is proposed in this work for wireless sensor network design in terms of performance, considering application-specific requirements, deployment constraints and energy characteristics. SNDT relies on offline simulation analysis to help resolve design decisions. A GA is used as the optimization tool of the proposed system and an appropriate fitness function is derived to incorporate many aspects of network performance. The configuration attributes optimized by SNDT comprise the communication protocol selection and the number of nodes deployed in a fixed area. Three specific cases, a periodic-measuring application, an event-detection application and a tracking-based application, are considered to demonstrate and assess how the proposed framework performs. Considering the initial requirements of each case, the solutions provided by SNDT were proven to be favourable in terms of energy consumption, end-to-end delay and loss. The user-defined application requirements were successfully achieved.
APA, Harvard, Vancouver, ISO, and other styles
30

Huntsman-Labed, Alice. "Algorithmes de constructions hierarchiques cherchant à optimiser le critère des moindres carrés." Aix-Marseille 3, 1997. http://www.theses.fr/1997AIX30037.

Full text
Abstract:
Given a set of n objects whose pairwise dissimilarities are all known, this thesis searches for algorithms that build hierarchical representations of these n objects yielding the hierarchies that best fit the data. The criterion chosen to evaluate hierarchies is the least-squares criterion, in the form of the mean squared deviation between the initial dissimilarity and the ultrametric associated with the final dendrogram. The number of possible hierarchies on a set of n objects grows exponentially. Despite eliminating a large number of hierarchies, the exact branch-and-bound algorithm of Chandon, Lemaire and Pouget (1980), which yields hierarchies optimizing our evaluation criterion, remains very costly in computing time for n > 12. This is a Monte Carlo study: we work on a set of 100 data sets drawn at random, hence without structure. The optimal hierarchies are built by the exact algorithm, and our quality criterion is the number of successes obtained, that is, the number of times an algorithm finds the optimal hierarchy. The algorithms evaluated essentially comprise branch-and-bound, agglomerative and divisive algorithms. A variety of new algorithms are proposed by the author.
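The least-squares criterion can be made concrete with a small sketch: build a dendrogram (here by average linkage, just one of the many agglomerative rules the thesis compares), read the ultrametric off the merge heights, and measure the root-mean-square gap to the input dissimilarities. The data and the linkage choice are illustrative assumptions:

```python
import itertools
import math

def average_linkage_ultrametric(d, n):
    """Build a dendrogram by average-linkage agglomeration and return the
    induced ultrametric: u[(i, j)] is the height at which i and j merge."""
    clusters = {i: [i] for i in range(n)}
    u = {}
    while len(clusters) > 1:
        # pick the pair of clusters with the smallest average dissimilarity
        (a, b), h = min(
            (((a, b),
              sum(d[tuple(sorted((x, y)))]
                  for x in clusters[a] for y in clusters[b])
              / (len(clusters[a]) * len(clusters[b])))
             for a, b in itertools.combinations(clusters, 2)),
            key=lambda t: t[1])
        for x in clusters[a]:
            for y in clusters[b]:
                u[tuple(sorted((x, y)))] = h   # merge height = ultrametric distance
        clusters[a] += clusters.pop(b)
    return u

def ls_criterion(d, u):
    """Root-mean-square gap between dissimilarity and ultrametric,
    i.e. the least-squares fit criterion described above."""
    return math.sqrt(sum((d[k] - u[k]) ** 2 for k in d) / len(d))

d = {(0, 1): 1.0, (0, 2): 4.0, (0, 3): 5.0,
     (1, 2): 4.5, (1, 3): 5.5, (2, 3): 2.0}
u = average_linkage_ultrametric(d, 4)
print(round(ls_criterion(d, u), 3))  # → 0.456
```

The thesis's question is then which construction algorithm, over many random dissimilarity matrices, most often reaches the hierarchy minimizing this criterion.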
APA, Harvard, Vancouver, ISO, and other styles
31

ALQADAH, HATIM FAROUQ. "OPTIMIZED TIME-FREQUENCY CLASSIFICATION METHODS FOR INTELLIGENT AUTOMATIC JETTISONING OF HELMET-MOUNTED DISPLAY SYSTEMS." University of Cincinnati / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1185838368.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Pinheiro, Tibério Magalhães. "Estudo da operação otimizada de um sistema de reservatórios considerando a evaporação através de algoritmo genético híbrido." Universidade de São Paulo, 2003. http://www.teses.usp.br/teses/disponiveis/18/18138/tde-18112016-161124/.

Full text
Abstract:
The problems of water shortage in the Northeast region of Brazil are mainly due to the weather conditions, characterized by scattered rainfall both in time (the highest annual precipitation is concentrated in a few months) and in space. Very high evaporation rates in the region and the geological structure of the soil, mainly of crystalline origin, are factors that worsen the shortage of water. Thus, there is a need for optimal operation of water resources systems, so as to obtain the highest benefit at low cost for society. The present study performs optimized operation of the Fortaleza (Ceará) metropolitan area water-supply reservoirs with special attention to water losses by evaporation. The problem has been handled through a recently proposed hybrid procedure combining a genetic algorithm and linear programming. The method permitted extraction of operational rules without having to hypothesize their structure a priori. Further, it was applied to the Fortaleza water supply under different hydrologic conditions and inter-basin water transfers to verify the robustness of the method.
APA, Harvard, Vancouver, ISO, and other styles
33

Wang, Sean. "Use of GPU architecture to optimize Rabin fingerprint data chunking algorithm by concurrent programming." Thesis, California State University, Long Beach, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10108186.

Full text
Abstract:

Data deduplication is a popular technique used to increase storage efficiency in various data centers and corporate backup environments. Various caching techniques and metadata checks are available to prevent excessive file scanning. Because the content-addressable chunking algorithm is inherently a serial operation, the data-deduplication chunking process often becomes the performance bottleneck. This project introduces a parallelized Rabin fingerprint algorithm suited to the GPU hardware architecture that aims to optimize the performance of the deduplication process.
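A minimal serial version of the content-defined chunking step that the project parallelizes might look as follows. The window size, hash base and boundary mask are illustrative assumptions, not the project's parameters:

```python
import random

# Serial sketch of content-defined chunking with a rolling polynomial
# hash. A chunk boundary is declared wherever the hash of the last
# WINDOW bytes matches a bit pattern, so boundaries depend only on
# local content, not on absolute file offsets.

WINDOW = 16          # bytes in the rolling window (assumed)
BASE = 257           # polynomial base (assumed)
MOD = 2 ** 32        # hash width
MASK = (1 << 6) - 1  # boundary when hash & MASK == MASK (~64-byte avg chunks)

def chunk_boundaries(data: bytes) -> list:
    """Return the end offsets of content-defined chunks."""
    pow_out = pow(BASE, WINDOW - 1, MOD)
    boundaries, h = [], 0
    for i, byte in enumerate(data):
        if i >= WINDOW:                        # drop the byte leaving the window
            h = (h - data[i - WINDOW] * pow_out) % MOD
        h = (h * BASE + byte) % MOD            # slide the new byte in
        if i >= WINDOW - 1 and h & MASK == MASK:
            boundaries.append(i + 1)           # a chunk ends after this byte
    if not boundaries or boundaries[-1] != len(data):
        boundaries.append(len(data))           # final partial chunk
    return boundaries

rng = random.Random(42)
data = bytes(rng.randrange(256) for _ in range(4096))
print(len(chunk_boundaries(data)))  # roughly len(data) / 64 chunks
```

Because each boundary depends only on a 16-byte window, inserting bytes early in a file shifts only nearby boundaries, which is what makes chunk-level deduplication effective; a GPU version can evaluate the rolling hash at many offsets concurrently.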

APA, Harvard, Vancouver, ISO, and other styles
34

White, William E. "Use of Empirically Optimized Perturbations for Separating and Characterizing Pyloric Neurons." Ohio University / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1368055391.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Huyan, Pengfei. "Electromagnetic digital actuators array : characterization of a planar conveyance application and optimized design." Thesis, Compiègne, 2015. http://www.theses.fr/2015COMP2178/document.

Full text
Abstract:
In mechanical or mechatronic systems, actuators are the components used to convert input energy, generally electrical energy, into mechanical tasks such as motion, force or a combination of both. Analogical and digital actuators are two common types of actuator. Digital actuators have the advantages of open-loop control and low energy consumption compared to analogical actuators. However, digital actuators present two main drawbacks. The manufacturing errors of these actuators have to be precisely controlled because, unlike analogical actuators, a manufacturing error cannot be compensated by the control law. Another drawback is their inability to realize continuous tasks because of their discrete stroke. An assembly of several digital actuators can nevertheless realize multi-discrete tasks. This thesis focuses on the experimental characterization and optimized design of a digital actuators array for a planar conveyance application. The first main objective of the present thesis is the characterization of the existing actuators array and of a planar conveyance application based on that array. For that purpose, a modeling of the actuators array and experimental tests have been carried out in order to determine the influence of some parameters on the behavior of the array. The second objective is to design a new version of the actuators array based on the experience gained with the first prototype. An optimization of the design has then been realized using genetic algorithm techniques while considering several criteria.
APA, Harvard, Vancouver, ISO, and other styles
36

Nagi, Alla. "Optimized semi-active PID controller for offshore cranes." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022.

Find full text
Abstract:
The oil and gas industry is one of the most hazardous and demanding industries, with inherent dangers that can be difficult to manage. This is owing to the dangerous nature of the materials used, as well as the importance of the jobs that personnel must do. Cranes are commonly used in this industry for a variety of tasks, including transporting equipment, performing maintenance, and providing management services for offshore areas. The evidence shows that improper load lifting or handling can result in catastrophic situations such as fires, explosions, and hazardous dispersion, so the safety and correct control of lifting activities and their modeling become critical. This work aims to complete an existing model of an offshore crane by replacing the rigid beam in the old model with a flexible beam and then comparing the results, transforming the time-amplitude diagram into a Bode frequency diagram so it can be easily interpreted. When studying complex mechanical systems, it is useful to build models able to simulate both the dynamics of the phenomenon and the control system applied; typically, the bodies involved are modeled as rigid bodies. This task is discussed in Chapter 2. The second task of this work, which concerns the safety of the structure, is to optimize the controller parameters. A PID controller is used to minimize the position errors. In the original model, the values of the parameters were obtained by the trial-and-error method; here, artificial intelligence algorithms were used to optimize the controller parameters.
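The PID loop mentioned in the abstract can be sketched on a toy plant. The gains, the first-order plant and the time step below are assumptions for illustration, not the crane model's values; in the thesis the gains come from an AI-driven search rather than trial and error:

```python
# Discrete PID sketch driving a toy first-order plant (x' = -x + u)
# toward a unit setpoint. All numerical values are assumed.

def pid_step(err, state, kp, ki, kd, dt):
    """One PID update; state carries the error integral and previous error."""
    integ, prev = state
    integ += err * dt                      # integral term removes steady error
    deriv = (err - prev) / dt              # derivative term damps the motion
    return kp * err + ki * integ + kd * deriv, (integ, err)

x, state = 0.0, (0.0, 0.0)
dt = 0.01
for _ in range(2000):                      # 20 s of simulated time
    u, state = pid_step(1.0 - x, state, kp=2.0, ki=1.0, kd=0.1, dt=dt)
    x += (-x + u) * dt                     # explicit-Euler plant update
print(round(x, 3))  # settles at the setpoint
```

An optimizer-tuned PID simply treats (kp, ki, kd) as the decision variables and the accumulated position error of a run like this as the cost to minimize.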
APA, Harvard, Vancouver, ISO, and other styles
37

Kaylani, Assem. "AN ADAPTIVE MULTIOBJECTIVE EVOLUTIONARY APPROACH TO OPTIMIZE ARTMAP NEURAL NETWORKS." Doctoral diss., University of Central Florida, 2008. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2538.

Full text
Abstract:
This dissertation deals with the evolutionary optimization of ART neural network architectures. ART (adaptive resonance theory) was introduced by Grossberg in 1976. In the last 20 years (1987-2007) a number of ART neural network architectures were introduced into the literature (Fuzzy ARTMAP (1992), Gaussian ARTMAP (1996 and 1997) and Ellipsoidal ARTMAP (2001)). In this dissertation, we focus on the evolutionary optimization of ART neural network architectures with the intent of optimizing the size and the generalization performance of the ART neural network. A number of researchers have focused on the evolutionary optimization of neural networks, but no research had been performed on the evolutionary optimization of ART neural networks prior to 2006, when Daraiseh used evolutionary techniques for the optimization of ART structures. This dissertation extends and expands the evolution of ART architectures in many directions: (a) it uses a multi-objective optimization of ART structures, thus providing the user with multiple solutions (ART networks) with varying degrees of merit, instead of a single solution; (b) it uses GA parameters that are adaptively determined throughout the ART evolution; (c) it identifies a proper size of the validation set used to calculate the fitness function needed for ART's evolution, thus speeding up the evolutionary process; (d) it produces experimental results that demonstrate the evolved ART's effectiveness (good accuracy and small size) and efficiency (speed) compared with other competitive ART structures, as well as other classifiers (CART (Classification and Regression Trees) and SVM (Support Vector Machines)).
The overall methodology to evolve ART using a multi-objective approach, the chromosome representation of an ART neural network, the genetic operators used in ART's evolution, and the automatic adaptation of some of the GA parameters in ART's evolution could also be applied in the evolution of other exemplar based neural network classifiers such as the probabilistic neural network and the radial basis function neural network.
Ph.D.
School of Electrical Engineering and Computer Science
Engineering and Computer Science
Computer Engineering PhD
APA, Harvard, Vancouver, ISO, and other styles
38

Wanis, Paul, and John S. Fairbanks. "Analysis of Optimized Design Tradeoffs in Application of Wavelet Algorithms to Video Compression." International Foundation for Telemetering, 2004. http://hdl.handle.net/10150/605769.

Full text
Abstract:
International Telemetering Conference Proceedings / October 18-21, 2004 / Town & Country Resort, San Diego, California
Because all video compression schemes introduce artifacts into the compressed video images, degradation occurs. These artifacts, generated by a wavelet-based compression scheme, will vary with the compression ratio and input imagery, but do show some consistent patterns across applications. There are a number of design trade-offs that can be made to mitigate the effect of these artifacts. By understanding the artifacts introduced by video compression and being able to anticipate the amount of image degradation, the video compression can be configured in a manner optimal to the application under consideration in telemetry.
APA, Harvard, Vancouver, ISO, and other styles
39

SINGH, MANVENDRA PRATAP. "A NEW APPROACH FOR DATA CLUSTERING USING MULTI-VERSE OPTIMIZER ALGORITHM." Thesis, 2016. http://dspace.dtu.ac.in:8080/jspui/handle/repository/15077.

Full text
Abstract:
Nature has always been a source of inspiration. Over the last few decades, it has stimulated many successful algorithms and computational tools for dealing with complex optimization problems. This work proposes a new heuristic algorithm inspired by the multi-verse theory, i.e., the phenomenon of there being more than one universe. Like other population-based algorithms, the Multi-Verse Optimizer (MVO) starts with an initial population of candidate solutions to an optimization problem and an objective function that is calculated for them. At each iteration of the MVO, the best candidate is selected as the Best Universe, which then starts exchanging objects with the other universes. Universes with a high inflation rate also move their objects to universes with a low inflation rate in order to make abrupt changes. To evaluate the performance of the MVO algorithm, it is applied to the clustering problem, which is NP-hard. The experimental results show that the proposed MVO clustering algorithm outperforms other traditional heuristic algorithms on five benchmark datasets.
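A simplified version of the MVO loop described above, minimizing a toy objective instead of a clustering criterion, might look like this. The wormhole schedule and all constants are assumptions based on commonly published MVO formulations, not necessarily the thesis's configuration:

```python
import random

def sphere(x):
    """Toy objective standing in for the clustering cost ("inflation rate")."""
    return sum(v * v for v in x)

def mvo(dim=3, universes=20, iters=100, lo=-5.0, hi=5.0, seed=3):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(universes)]
    best = min(pop, key=sphere)[:]                  # the Best Universe so far
    for t in range(1, iters + 1):
        wep = 0.2 + t * (1.0 - 0.2) / iters         # wormhole existence prob.
        tdr = 1 - (t / iters) ** (1 / 6)            # travelling distance rate
        pop.sort(key=sphere)                        # best universes first
        for i, u in enumerate(pop):
            for j in range(dim):
                if rng.random() < i / universes:    # worse universes receive
                    donor = rng.choice(pop[: max(1, i)])  # objects from better ones
                    u[j] = donor[j]
                if rng.random() < wep:              # wormhole jump around the best
                    step = tdr * ((hi - lo) * rng.random() + lo)
                    u[j] = best[j] + step if rng.random() < 0.5 else best[j] - step
                u[j] = min(hi, max(lo, u[j]))
        cand = min(pop, key=sphere)
        if sphere(cand) < sphere(best):
            best = cand[:]
    return best

best = mvo()
print(round(sphere(best), 3))
```

For the clustering application, each universe would encode a set of cluster centroids and the inflation rate would be the clustering objective (e.g. within-cluster distance) over the dataset.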
APA, Harvard, Vancouver, ISO, and other styles
40

Darera, Pooja N. "Reduction Of Query Optimizer Plan Diagrams." Thesis, 2007. https://etd.iisc.ac.in/handle/2005/533.

Full text
Abstract:
Modern database systems use a query optimizer to identify the most efficient strategy, called a "plan", to execute declarative SQL queries. Optimization is a mandatory exercise since the cost of the best plan and that of a random choice can differ by orders of magnitude. The role of query optimization is especially critical for the decision support queries featured in data warehousing and data mining applications. For a query on a given database and system configuration, the optimizer's plan choice is primarily a function of the selectivities of the base relations participating in the query. A pictorial enumeration of the execution plan choices of a database query optimizer over this relational selectivity space is called a "plan diagram". It has been shown recently that these diagrams are often remarkably complex and dense, with a large number of plans covering the space. An interesting research problem that immediately arises is whether complex plan diagrams can be reduced to a significantly smaller number of plans, without materially compromising the query processing quality. The motivation is that reduced plan diagrams provide several benefits, including quantifying the redundancy in the plan search space, enhancing the applicability of parametric query optimization, identifying error-resistant and least-expected-cost plans, and minimizing the overhead of multi-plan approaches. In this thesis, we investigate the plan diagram reduction issue from theoretical, statistical and empirical perspectives. Our analysis shows that optimal plan diagram reduction, w.r.t. minimizing the number of plans in the reduced diagram, is an NP-hard problem, and remains so even for a storage-constrained variation. We then present CostGreedy, a greedy reduction algorithm that has tight and optimal performance guarantees, and whose complexity scales linearly with the number of plans in the diagram.
Next, we construct an extremely fast estimator, AmmEst, for identifying the location of the best tradeoff between the reduction in plan cardinality and the impact on query processing quality. Both CostGreedy and AmmEst have been incorporated in the publicly-available Picasso optimizer visualization tool. Through extensive experimentation with benchmark query templates on industrial-strength database optimizers, we demonstrate that with only a marginal increase in query processing costs, CostGreedy reduces even complex plan diagrams running to hundreds of plans to "anorexic" levels (small absolute number of plans). While these results are produced using a highly conservative upper-bounding of plan costs based on a cost monotonicity constraint, when the costing is done on "actuals" using remote plan costing, the reduction obtained is even greater - in fact, often resulting in a single plan in the reduced diagram. We also highlight how anorexic reduction provides enhanced resistance to selectivity estimate errors, a long-standing bane of good plan selection. In summary, this thesis demonstrates that complex plan diagrams can be efficiently converted to anorexic reduced diagrams, a result with useful implications for the design and use of next-generation database query optimizers.
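The "swallowing" idea behind greedy plan-diagram reduction can be illustrated on a toy diagram. The data, the threshold and the swallowing rule below are simplified assumptions in the spirit of CostGreedy, not its actual algorithm:

```python
# Toy greedy plan-diagram reduction: a plan may be swallowed by another
# plan if, at every selectivity point it covers, the replacement's cost
# stays within a cost-increase threshold (here an assumed 20%).

def reduce_diagram(costs, assigned, threshold=1.2):
    """costs[point][plan] = execution cost; assigned[point] = optimal plan.
    Greedily re-assign points so fewer distinct plans remain, never
    raising any point's cost by more than `threshold`."""
    plans = sorted(set(assigned))
    kept = set(plans)
    # try to remove plans that cover the fewest points first
    for plan in sorted(plans, key=lambda p: sum(a == p for a in assigned)):
        points = [i for i, a in enumerate(assigned) if a == plan]
        for alt in kept - {plan}:
            if all(costs[i][alt] <= threshold * costs[i][assigned[i]] for i in points):
                for i in points:
                    assigned[i] = alt        # swallow: alt re-covers plan's points
                kept.discard(plan)
                break
    return assigned, kept

# 4 selectivity points, 3 plans; plans 1 and 2 are each optimal at only
# one point where plan 0 is nearly as cheap, so both get swallowed.
costs = [{0: 10, 1: 30, 2: 40},
         {0: 12, 1: 11, 2: 50},
         {0: 15, 1: 60, 2: 70},
         {0: 21, 1: 90, 2: 20}]
assigned = [0, 1, 0, 2]
assigned, kept = reduce_diagram(costs, assigned, threshold=1.2)
print(sorted(kept))  # → [0]
```

The "anorexic" result in the thesis is exactly this effect at scale: under a small cost-increase budget, diagrams with hundreds of plans collapse to a handful.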
APA, Harvard, Vancouver, ISO, and other styles
41

Darera, Pooja N. "Reduction Of Query Optimizer Plan Diagrams." Thesis, 2007. http://hdl.handle.net/2005/533.

Full text
Abstract:
Modern database systems use a query optimizer to identify the most efficient strategy, called "plan", to execute declarative SQL queries. Optimization is a mandatory exercise since the difference between the cost of best plan and a random choice could be in orders of magnitude. The role of query optimization is especially critical for the decision support queries featured in data warehousing and data mining applications. For a query on a given database and system configuration, the optimizer's plan choice is primarily a function of the selectivities of the base relations participating in the query. A pictorial enumeration of the execution plan choices of a database query optimizer over this relational selectivity space is called a "plan diagram". It has been shown recently that these diagrams are often remarkably complex and dense, with a large number of plans covering the space. An interesting research problem that immediately arises is whether complex plan diagrams can be reduced to a significantly smaller number of plans, without materially compromising the query processing quality. The motivation is that reduced plan diagrams provide several benefits, including quantifying the redundancy in the plan search space, enhancing the applicability of parametric query optimization, identifying error-resistant and least-expected-cost plans, and minimizing the overhead of multi-plan approaches. In this thesis, we investigate the plan diagram reduction issue from theoretical, statistical and empirical perspectives. Our analysis shows that optimal plan diagram reduction, w.r.t. minimizing the number of plans in the reduced diagram, is an NP-hard problem, and remains so even for a storage-constrained variation. We then present CostGreedy, a greedy reduction algorithm that has tight and optimal performance guarantees, and whose complexity scales linearly with the number of plans in the diagram. 
Next, we construct an extremely fast estimator, AmmEst, for identifying the location of the best tradeoff between the reduction in plan cardinality and the impact on query processing quality. Both CostGreedy and AmmEst have been incorporated in the publicly-available Picasso optimizer visualization tool. Through extensive experimentation with benchmark query templates on industrial-strength database optimizers, we demonstrate that with only a marginal increase in query processing costs, CostGreedy reduces even complex plan diagrams running to hundreds of plans to "anorexic" levels (small absolute number of plans). While these results are produced using a highly conservative upper-bounding of plan costs based on a cost monotonicity constraint, when the costing is done on "actuals" using remote plan costing, the reduction obtained is even greater - in fact, often resulting in a single plan in the reduced diagram. We also highlight how anorexic reduction provides enhanced resistance to selectivity estimate errors, a long-standing bane of good plan selection. In summary, this thesis demonstrates that complex plan diagrams can be efficiently converted to anorexic reduced diagrams, a result with useful implications for the design and use of next-generation database query optimizers.
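The flavor of such a reduction can be illustrated with a toy sketch (not the thesis's CostGreedy algorithm): each point in the selectivity space has an estimated cost under every plan, and a plan may be "swallowed" by another when, at every point it covers, the replacement cost stays within a threshold factor `lam`. The plan names, the `lam` parameter, and the smallest-coverage-first order are illustrative assumptions.

```python
def reduce_plan_diagram(diagram, costs, lam=1.1):
    """Greedily swallow plans whose points can be re-covered by another
    plan at a cost increase of at most `lam` (a toy sketch only, not the
    thesis's CostGreedy algorithm).

    diagram: dict plan -> set of selectivity-space points it covers
    costs:   dict (plan, point) -> estimated execution cost at that point
    """
    # Heuristic: try to eliminate plans with the smallest coverage first.
    plans = sorted(diagram, key=lambda p: len(diagram[p]))
    for victim in plans:
        if victim not in diagram:
            continue  # already swallowed
        for survivor in list(diagram):
            if survivor == victim:
                continue
            # Swallowing is allowed only if every point of `victim` costs
            # at most lam * its original cost under `survivor`.
            if all(costs.get((survivor, pt), float("inf"))
                   <= lam * costs[(victim, pt)]
                   for pt in diagram[victim]):
                diagram[survivor] |= diagram.pop(victim)
                break
    return diagram
```

CostGreedy itself carries tight performance guarantees and linear scaling in the number of plans; the sketch above only conveys the "anorexic reduction" idea of trading a bounded cost increase for far fewer plans.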
APA, Harvard, Vancouver, ISO, and other styles
42

Janson, Stefan, and Martin Middendorf. "A Hierarchical Particle Swarm Optimizer and Its Adaptive Variant." 2005. https://ul.qucosa.de/id/qucosa%3A33064.

Full text
Abstract:
A hierarchical version of the particle swarm optimization (PSO) metaheuristic is introduced in this paper. In the new method, called H-PSO, the particles are arranged in a dynamic hierarchy that is used to define a neighborhood structure. Depending on the quality of their so-far best-found solution, the particles move up or down the hierarchy. This gives good particles that move up in the hierarchy a larger influence on the swarm. We introduce a variant of H-PSO in which the shape of the hierarchy is dynamically adapted during the execution of the algorithm. Another variant is to assign different behavior to the individual particles with respect to their level in the hierarchy. H-PSO and its variants are tested on a commonly used set of optimization functions and are compared to PSO using different standard neighborhood schemes.
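The parent-child promotion rule described above can be sketched in a few lines. The array-based binary-tree layout and the convention that lower fitness is better are illustrative choices, not details from the paper:

```python
def update_hierarchy(particles, best_fitness):
    """One hierarchy-update pass of the kind described for H-PSO
    (illustrative sketch): a particle moves up by swapping with its
    parent whenever its best-found fitness is strictly better.

    particles:    list of particle ids arranged as an array-based binary
                  tree (the parent of index i is (i - 1) // 2)
    best_fitness: dict particle id -> best objective value (lower is better)
    """
    for i in range(1, len(particles)):
        parent = (i - 1) // 2
        # Promote the child one level if it has found a better solution.
        if best_fitness[particles[i]] < best_fitness[particles[parent]]:
            particles[i], particles[parent] = particles[parent], particles[i]
    return particles
```

After such a pass, the best particles sit near the root and hence exert a larger influence on the velocity updates of the particles below them.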
APA, Harvard, Vancouver, ISO, and other styles
43

Javidsharifi, M., T. Niknam, J. Aghaei, Geev Mokryani, and P. Papadopoulos. "Multi-objective day-ahead scheduling of microgrids using modified grey wolf optimizer algorithm." 2018. http://hdl.handle.net/10454/16610.

Full text
Abstract:
Investigation of the environmental/economic optimal operation management of a microgrid (MG), as a case study for applying a novel modified multi-objective grey wolf optimizer (MMOGWO) algorithm, is presented in this paper. MGs can be considered a fundamental solution for the management of distributed generators (DGs) in future smart grids. In multi-objective problems, since the objective functions are in conflict, the best compromise solution should be extracted through an efficient approach. Accordingly, a proper method is applied for exploring the best compromise solution. Additionally, a novel distance-based method is proposed to control the size of the repository within an aimed limit, which leads to fast and precise convergence along with a well-distributed Pareto optimal front. The proposed method is implemented in a typical grid-connected MG with non-dispatchable units including renewable energy sources (RESs), along with a hybrid power source (micro-turbine, fuel cell and battery) as dispatchable units, to accumulate excess energy or to equalize power mismatch, by optimal scheduling of DGs and the power exchange between the utility grid and the storage system. The efficiency of the suggested algorithm in satisfying the load and optimizing the objective functions is validated through comparison with different methods, including PSO and the original GWO.
Supported in part by Royal Academy of Engineering Distinguished Visiting Fellowship under Grant DVF1617\6\45
APA, Harvard, Vancouver, ISO, and other styles
44

Lin, Chin-Han, and 林金漢. "Optimal Relay Antenna Location in Indoor Environment Using Particle Swarm Optimizer and Genetic Algorithm." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/59479269710316523475.

Full text
Abstract:
Master's
Tamkang University
Master's Program, Department of Electrical Engineering
97
An optimization procedure for the location of the relay transceiver in an ultra-wideband (UWB) wireless communication system is presented. The impulse responses for different transceiver locations are computed by shooting and bouncing ray/image (SBR/Image) techniques and the inverse fast Fourier transform (IFFT). Using the impulse responses of these multi-path channels, the bit error rate (BER) performance of a binary pulse amplitude modulation (BPAM) impulse radio UWB communication system is calculated. Based on the BER performance, the outage probability for any given relay location of the transceiver can be computed. The optimal relay antenna location for minimizing the outage probability is searched by a genetic algorithm (GA) and a particle swarm optimizer (PSO). The transmitter is at the center of the indoor environment and the receivers are uniformly distributed at 1.5-meter intervals throughout it. Two cases are considered, as follows. (I) Two relay transceivers are employed, in two different configurations. First, the whole space is divided into two areas and one relay transceiver is used in each area; the optimal relay antenna locations are searched in each area respectively. Second, the two optimal relay locations are searched in the whole space directly, without any prior division. (II) Four relay transceivers are employed, in two different configurations. First, the whole space is divided into four areas and one relay transceiver is used in each area; the optimal relay antenna locations are searched in each area respectively. Second, the four optimal relay locations are searched in the whole space directly, without any prior division. Numerical results show that the proposed method is effective for finding the optimal relay antenna location to reduce the BER and outage probability.
APA, Harvard, Vancouver, ISO, and other styles
45

Tsai, Shang-Jeng, and 蔡尚錚. "The Study of Optimal Strategy in an Advanced Pursuit Problem: An example of Automatic Fighter Tracking Algorithm based on Particle Swarm Optimizer." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/14690457550178170317.

Full text
Abstract:
Ph.D.
National Dong Hwa University
Department of Electrical Engineering
97
The main focus of this dissertation is to develop an optimization technique and method for advanced pursuit problems. The automatic fighter tracking (AFT) problem has been chosen as the research experiment to test and confirm the fidelity of the proposed method. This research utilizes a Particle Swarm Optimizer based optimal control strategy (PSO-based OCS) as its solution to AFT problems. The PSO-based OCS is designed to obtain the control value of a pursuer through an error-feedback gain controller. Once the conditions for closed-loop system stability have been satisfied, the optimal feedback gains can be obtained through PSO, and the actual control values can be derived from the obtained gains. Simulation results confirm the capabilities of the proposed method; it is compared with two other methods in the field, the Riccati equation with predefined weight matrices and the Linear Matrix Inequality (LMI) based Linear Quadratic Regulator (LQR). The performance of the proposed method is superior to that of its alternatives.
APA, Harvard, Vancouver, ISO, and other styles
46

Javidsharifi, M., T. Niknam, J. Aghaei, and Geev Mokryani. "Multi-objective short-term scheduling of a renewable-based microgrid in the presence of tidal resources and storage devices." 2017. http://hdl.handle.net/10454/15243.

Full text
Abstract:
The daily increasing use of tidal power generation proves its outstanding features as a renewable source. Due to environmental concerns, tidal current energy, which has no greenhouse emissions, has attracted researchers' attention in the last decade. Additionally, tidal technologies have substantial potential to economically benefit the utility over long-term periods. Tidal energy can be forecast accurately from short-term data, and hence it is a reliable renewable resource that can be fitted into power systems. In this paper, the effects of a practical tidal stream turbine in Lake Saroma, in the eastern area of Hokkaido, Japan, allocated in a real microgrid (MG), are investigated in order to solve an environmental/economic bi-objective optimization problem. For this purpose, an intelligent evolutionary multi-objective modified bird mating optimizer (MMOBMO) algorithm is proposed. Additionally, a detailed economic model of the storage devices is considered in the problem. Results show the efficiency of the suggested algorithm in satisfying economic/environmental objectives. The effectiveness of the proposed approach is validated by comparison with the original BMO and PSO on a practical MG.
Iran National Science Foundation; Royal Academy of Engineering Distinguished Visiting Fellowship under Grant DVF1617\6\45
APA, Harvard, Vancouver, ISO, and other styles
47

Fu, Jen-Li, and 傅仁力. "Using Genetic Algorithms to Optimize Signal Parameters for the Dendritic Cell Algorithm." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/77029292850415926924.

Full text
Abstract:
Master's
Fu Jen Catholic University
Master's Program, Department of Computer Science and Information Engineering
104
The dendritic cell algorithm (DCA) is inspired by the human immune system and has been successfully applied to various applications. The DCA is derived from behavioral models of natural dendritic cells, whose primary role is to correlate disparate input signals and antigen, and to label groups of identical antigen as normal or anomalous. To perform the DCA, a data preprocessing phase including feature selection and signal categorization is needed. Each selected feature can be categorized as a Pathogen-Associated Molecular Pattern (PAMP), as a Danger Signal (DS), or as a Safe Signal (SS). The DCA combines these signals and produces the co-stimulatory molecule signal, the semi-mature signal, and the mature signal. Each output signal value is a weighted sum of the three categories of input signals (PAMP, DS, and SS). The performance of the DCA depends on the feature selection, the signal categorization, and the weight parameters. A genetic algorithm (GA) is suitable for problems in which the solution space is large and an exhaustive search is impossible. Genetic algorithms have successfully helped artificial neural networks determine their structures and parameter settings. Thus, the aim of this thesis is to design and develop an improved genetic algorithm for optimizing the structure and parameters of the DCA. Experimental results indicate the effectiveness of the proposed GA approach. In other words, the DCA is more robust with the help of our GA approach and can be applied to more applications of anomaly detection.
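The weighted-sum combination described here can be written compactly as a small matrix-style product. The weight values below are placeholders for illustration, not the GA-tuned parameters of the thesis:

```python
def dca_output_signals(pamp, ds, ss, weights):
    """Combine the DCA input signals (PAMP, danger, safe) into the three
    output signals (co-stimulatory molecule, semi-mature, mature) as
    weighted sums, one row of `weights` per output signal.
    """
    signals = (pamp, ds, ss)
    return tuple(sum(w * s for w, s in zip(row, signals)) for row in weights)

# Illustrative placeholder weights; rows correspond to the
# co-stimulatory, semi-mature, and mature output signals.
W = [
    [2.0, 1.0, 2.0],   # co-stimulatory molecule signal
    [0.0, 0.0, 3.0],   # semi-mature signal (driven by safe signals)
    [2.0, 1.0, -3.0],  # mature signal (suppressed by safe signals)
]
```

In the thesis's setting, it is exactly these row coefficients, together with the feature-to-signal categorization, that the GA is evolved to choose.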
APA, Harvard, Vancouver, ISO, and other styles
48

Fan, Chih-Ta, and 范志達. "Optimized Data Clustering Using Genetic Algorithm." Thesis, 1995. http://ndltd.ncl.edu.tw/handle/60493822852102480871.

Full text
Abstract:
Master's
National Chiao Tung University
Institute of Control Engineering
83
Data clustering is a complex optimization problem with applications ranging from speech and image processing to data transmission and storage in technical as well as biological systems. We discuss a genetic clustering algorithm that jointly optimizes the initial seed points and the number of clusters of the fuzzy c-means algorithm with respect to distortion error and complexity cost. A genetic algorithm and a cost function (i.e., complexity cost plus distortion cost) are used to determine the initial seed points and the number of clusters for the fuzzy c-means clustering algorithm. Experiments demonstrate that the algorithm can reach the optimal or a near-optimal solution.
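The cost the GA minimizes, distortion plus a complexity penalty on the number of clusters, can be sketched as follows; the squared-Euclidean distortion and the `complexity_weight` parameter are illustrative assumptions rather than the thesis's exact formulation:

```python
def clustering_cost(points, seeds, complexity_weight=1.0):
    """Cost of a candidate clustering: distortion (sum of squared
    distances from each point to its nearest seed) plus a penalty
    proportional to the number of clusters. A GA would evolve the seed
    set, including its size, to minimize this value.
    """
    distortion = sum(
        min(sum((p - s) ** 2 for p, s in zip(pt, seed)) for seed in seeds)
        for pt in points
    )
    return distortion + complexity_weight * len(seeds)
```

The complexity term is what lets the search trade off tighter clusters against using fewer of them, so the number of clusters need not be fixed in advance as in plain fuzzy c-means.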
APA, Harvard, Vancouver, ISO, and other styles
49

Lee, Chien-Hsing, and 李建興. "Optimized CORDIC algorithm and architecture designs." Thesis, 1993. http://ndltd.ncl.edu.tw/handle/28730787776968833076.

Full text
Abstract:
Master's
National Chiao Tung University
Institute of Electronics
81
In this thesis, efficient pipelined, high-throughput-rate architectures for the CORDIC algorithm are presented. Since each CORDIC operation depends on the sign of the remaining rotation angle or of the Y-axis component, the computation time of the CORDIC algorithm is limited by this inherently sequential relationship. However, in the CORDIC algorithm the remaining rotation angle or Y-component is always required to approach zero. The key idea of our approach is to separate the sign-detection operation on the remaining rotation angle or Y-axis component from the rotation operation. By taking the absolute values of these variables, the angle or Y-component iteration is fixed to a subtraction operation. Therefore, we can successively subtract the residues without knowing the signs of the preceding remaining rotation angle or Y-axis component, while their signs are detected in parallel, independently, and in a pipelined fashion. In this way the sequential dependence of the CORDIC algorithm between the angle computation and the rotation operation is eliminated, and the time for CORDIC operations can be greatly reduced. The corresponding CORDIC processor we propose consists of regular pipelined slices. Each pipeline slice contains only one or two signed-digit adders and one digit-level absolute-value unit. Therefore, the duration of a clock cycle is very short, about two or three signed-digit adder delays, and each iteration is completed within two or three clock cycles. Since the pipeline slice is regular, the CORDIC processor is well suited for VLSI implementation. The sequential architecture for the rotation mode of the CORDIC algorithm was realized in 0.8um CMOS technology to verify our design. The chip, which has a 32-bit operand wordlength, is 6.2mm*5.3mm in area and can operate at a 10MHz clock frequency.
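For reference, the conventional sequential CORDIC iteration that such designs accelerate can be sketched as below; each step's rotation direction depends on the sign of the remaining angle, which is exactly the data dependence the thesis removes by separating sign detection from the rotation:

```python
import math

def cordic_rotate(angle, iterations=32):
    """Conventional (sequential) CORDIC in rotation mode: returns
    (cos(angle), sin(angle)) for |angle| < pi/2. Each iteration rotates
    by +/- arctan(2**-i) according to the sign of the remaining angle z,
    so step i+1 cannot start before the sign of z at step i is known.
    """
    x, y, z = 1.0, 0.0, angle
    k = 1.0  # accumulated CORDIC gain
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0          # sign decision (the bottleneck)
        x, y = x - d * y * 2.0**-i, y + d * x * 2.0**-i
        z -= d * math.atan(2.0**-i)          # drive the residual angle to zero
        k *= math.sqrt(1.0 + 2.0**(-2 * i))  # scale factor of this micro-rotation
    return x / k, y / k
```

A hardware version replaces the multiplications by powers of two with wired shifts; the thesis's contribution is to break the serial chain of sign decisions, not to change this underlying recurrence.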
APA, Harvard, Vancouver, ISO, and other styles
50

Li, Chieh-Chih, and 李建智. "Optimized Arithmetic Algorithm and Architecture Design." Thesis, 1993. http://ndltd.ncl.edu.tw/handle/14469782495305233173.

Full text
Abstract:
碩士
國立交通大學
電子研究所
81
In this thesis, fast radix-2 division and square-root algorithms are proposed. They achieve the best performance in both area and speed over existing algorithms and implementations. The proposed architecture basically consists of N simple carry-save adders (CSAs) for a bit-serial implementation, and N*N CSAs for a bit-parallel implementation. It finishes an N-bit division or square root in 5N (O(N)) carry-save addition time, and the result bits are in binary representation. In addition, a most-significant-digit (MSD) first multiplication is combined with the division and square-root algorithms into an optimal unified algorithm. Hence, the three operations can be implemented in a single compatible arithmetic unit, which yields fast multi-function capability with minimal area. The hardware is composed of a highly regular cellular array, which is suitable for VLSI implementation; a hardware implementation of a 24-bit divider using the "Magic" CAD tool is also presented.
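As a baseline for comparison, plain bit-serial radix-2 restoring division (a textbook scheme, not the thesis's carry-save design) can be sketched as:

```python
def radix2_divide(dividend, divisor, n):
    """Bit-serial radix-2 restoring division producing an n-bit integer
    quotient MSB first. Assumes 0 <= dividend < divisor * 2**n; the
    thesis's architecture replaces the full-width compare/subtract of
    each step with carry-save additions.
    """
    remainder, quotient = dividend, 0
    for i in range(n - 1, -1, -1):
        trial = remainder - (divisor << i)  # try subtracting the shifted divisor
        if trial >= 0:                      # restoring step: keep only on success
            remainder = trial
            quotient |= 1 << i
    return quotient, remainder
```

The cost of each step here is a full-precision subtraction and sign test; using carry-save adders defers carry propagation, which is what lets the proposed design reach O(N) carry-save addition time overall.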
APA, Harvard, Vancouver, ISO, and other styles