Dissertations / Theses on the topic 'Gradient search'

To see the other types of publications on this topic, follow the link: Gradient search.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 31 dissertations / theses for your research on the topic 'Gradient search.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Kevorkiants, Rouslan. "Linear scaling conjugate gradient density matrix search: implementation, validation, and application with semiempirical molecular orbital methods." [S.l. : s.n.], 2003. http://deposit.ddb.de/cgi-bin/dokserv?idn=968547028.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Resmer, Frank. "A gradient and RF system for open access low field MRI." Thesis, University of Aberdeen, 2004. http://digitool.abdn.ac.uk/R?func=search-advanced-go&find_code1=WSN&request1=AAIU208240.

Full text
Abstract:
This thesis describes work carried out to improve the operation of a 0.01 T open access low field MRI system. Two planar surface gradient coil sets and a new three-channel gradient amplifier were designed, constructed and tested. Surface transmit coils, surface receive coils and a balanced preamplifier were developed to allow RF pulse generation and signal detection without compromising the single-sided access of the system. The properties of litz wire were investigated by simulation and testing, and it was shown that the established litz wire theory can also be applied to the design of cooled coils. Images were obtained to show the operation of the new gradient amplifier and to demonstrate the combined operation of a surface gradient coil, surface receive coil, surface transmit coil and the low noise preamplifier. A large surface receive coil was used to acquire in vivo images, for the first time, of surface regions of a volunteer's head. Images obtained with a cryogenic receive coil showed an SNR improvement relative to a room temperature coil, in agreement with theoretical predictions.
APA, Harvard, Vancouver, ISO, and other styles
3

Bedida, Kirthi. "AN APPROACH TO INVERSE MODELING THROUGH THE INTEGRATION OF ARTIFICIAL NEURAL NETWORKS AND GENETIC ALGORITHMS." UKnowledge, 2007. http://uknowledge.uky.edu/gradschool_theses/493.

Full text
Abstract:
A hybrid model integrating the predictive capabilities of an Artificial Neural Network (ANN) and the optimization features of a Genetic Algorithm (GA) is developed for the purpose of inverse modeling. The proposed approach is applied to superplastic forming of materials to predict the material properties which characterize the performance of a material. The study is carried out on two problems. For the first problem, an ANN is trained to predict the strain rate sensitivity index m given the temperature and the strain rate. The performance of different gradient search methods used in training the ANN model is demonstrated. A similar approach is used for the second problem, the objective of which is to predict the input parameters, i.e. strain rate and temperature, corresponding to a given flow stress value. An attempt to address one of the major drawbacks of ANNs, namely the black-box behavior of the model, is made by collecting information about the weights and biases used in training and formulating a mathematical expression. The results from the two problems are compared to the experimental data and validated; they show close agreement with the experiment.
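The inverse-modelling idea in this abstract, searching for inputs that reproduce a target output, can be sketched in a few lines. This is a generic illustration (finite-difference gradient descent on a toy forward model), not the ANN/GA machinery of the thesis:

```python
import numpy as np

def invert(model, target, p0, lr=0.05, iters=2000):
    """Inverse modelling as optimisation: find inputs p such that
    model(p) matches a target output, by descending the squared error
    with a central finite-difference gradient."""
    p = np.asarray(p0, float)
    eps = 1e-6
    for _ in range(iters):
        err = lambda q: (model(q) - target) ** 2
        # finite-difference gradient of the squared mismatch
        g = np.array([
            (err(p + eps * e) - err(p - eps * e)) / (2 * eps)
            for e in np.eye(len(p))
        ])
        p = p - lr * g
    return p
```

In the thesis the forward model is a trained ANN and the search is done by a GA; here any callable stands in for it.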
APA, Harvard, Vancouver, ISO, and other styles
4

Al-Mudhaf, Ali F. "A feed forward neural network approach for matrix computations." Thesis, Brunel University, 2001. http://bura.brunel.ac.uk/handle/2438/5010.

Full text
Abstract:
A new neural network approach for performing matrix computations is presented. The idea of this approach is to construct a feed-forward neural network (FNN) and then train it by matching a desired set of patterns. The solution of the problem is the converged weight of the FNN. Accordingly, unlike conventional FNN research, which concentrates on the external properties (mappings) of the networks, this study concentrates on the internal properties (weights) of the network. The present network is linear and its weights are usually strongly constrained; hence, a complicated overlapped network needs to be constructed. It should be noted, however, that the present approach depends highly on the training algorithm of the FNN. Unfortunately, the available training methods, such as the original back-propagation (BP) algorithm, encounter many deficiencies when applied to matrix algebra problems, e.g., slow convergence due to improper choice of learning rates (LR). Thus, this study focuses on the development of new efficient and accurate FNN training methods. One improvement suggested to alleviate the problem of LR choice is the use of a line search with the steepest descent method, namely bracketing with the golden section method. This provides an optimal LR as training progresses. Another improvement proposed in this study is the use of conjugate gradient (CG) methods to speed up the training process of the neural network. The computational feasibility of these methods is assessed on two matrix problems, namely the LU-decomposition of both band and square ill-conditioned unsymmetric matrices and the inversion of square ill-conditioned unsymmetric matrices. In this study, two performance indexes have been considered, namely learning speed and convergence accuracy.
Extensive computer simulations have been carried out using the following training methods: the steepest descent with line search (SDLS) method, the conventional back-propagation (BP) algorithm, and conjugate gradient (CG) methods, specifically the Fletcher-Reeves (CGFR) and Polak-Ribière (CGPR) methods. The performance comparisons between these minimization methods have demonstrated that the CG training methods give better convergence accuracy and are by far superior with respect to learning time; they offer speed-ups of between 3 and 4 over SDLS, depending on the severity of the error goal chosen and the size of the problem. Furthermore, when Powell's restart criteria are used with the CG methods, the problem of wrong convergence directions usually encountered in pure CG learning methods is alleviated. In general, CG methods with restarts have shown the best performance among all the methods tested for training the FNN for LU-decomposition and matrix inversion. Consequently, it is concluded that CG methods are good candidates for training FNNs for matrix computations, in particular the Polak-Ribière conjugate gradient method with Powell's restart criteria.
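The training methods compared in this abstract can be illustrated with a compact nonlinear CG loop. The Fletcher-Reeves and Polak-Ribière formulas and Powell's restart test below are standard textbook forms; the backtracking line search and constants are illustrative, not those of the thesis:

```python
import numpy as np

def conjugate_gradient(f, grad, x0, beta_rule="PR", iters=200, tol=1e-8):
    """Nonlinear CG with a Fletcher-Reeves or Polak-Ribiere beta and
    Powell's restart test (constants here are illustrative)."""
    x = np.asarray(x0, float)
    g = grad(x)
    d = -g
    for _ in range(iters):
        # crude backtracking line search (Armijo condition)
        t = 1.0
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d) and t > 1e-12:
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        if np.linalg.norm(g_new) < tol:
            return x_new
        if beta_rule == "FR":
            beta = (g_new @ g_new) / (g @ g)
        else:                                    # PR+ (non-negative PR)
            beta = max(0.0, g_new @ (g_new - g) / (g @ g))
        # Powell restart: drop the history when successive gradients
        # are far from orthogonal
        if abs(g_new @ g) >= 0.2 * (g_new @ g_new):
            beta = 0.0
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x
```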
APA, Harvard, Vancouver, ISO, and other styles
5

Jacmenovic, Dennis, and dennis_jacman@yahoo com au. "Optimisation of Active Microstrip Patch Antennas." RMIT University. Electrical and Computer Engineering, 2004. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20060307.144507.

Full text
Abstract:
This thesis presents a study of impedance optimisation of active microstrip patch antennas to multiple frequency points. A single layered aperture coupled microstrip patch antenna has been optimised to match the source reflection coefficient of a transistor in designing an active antenna. The active aperture coupled microstrip patch antenna was optimised to satisfy Global Positioning System (GPS) frequency specifications. A rudimentary aperture coupled microstrip patch antenna consists of a rectangular antenna element etched on the top surface of two dielectric substrates. The substrates are separated by a ground plane and a microstrip feed is etched on the bottom surface. A rectangular aperture in the ground plane provides coupling between the feed and the antenna element. This type of antenna, which conveniently isolates any circuit at the feed from the antenna element, is suitable for integrated circuit design and is simple to fabricate. An active antenna design directly couples an antenna to an active device, therefore saving real estate and power. This thesis focuses on designing an aperture coupled patch antenna directly coupled to a low noise amplifier as part of the front end of a GPS receiver. In this work an in-house software package, dubbed ACP by its creator Dr Rod Waterhouse, for calculating aperture coupled microstrip patch antenna performance parameters was linked to HP-EEsof, a microwave computer aided design and simulation package by Hewlett-Packard. An ANSI C module in HP-EEsof was written to bind the two packages. This process affords the client the benefit of powerful analysis tools offered in HP-EEsof and the fast analysis of ACP for seamless system design. Moreover, the optimisation algorithms in HP-EEsof were employed to investigate which algorithms are best suited for optimising patch antennas. 
The active antenna design presented in this study dispenses with an input matching network; this is accomplished by designing the antenna to present the desired source termination to the transistor. It has been demonstrated that a dual-band microstrip patch antenna can be successfully designed to match the source reflection coefficient, avoiding the need to insert a matching network. Maximum power transfer in electrical circuits is accomplished by matching the impedance between entities, which is generally achieved with the use of a matching network. Passive matching networks employed in amplifier design generally consist of discrete components up to the low GHz frequency range or distributed elements at greater frequencies. The source termination of a low noise amplifier greatly influences its noise, gain and linearity, which are controlled by designing a suitable input matching network. Ten diverse search methods offered in HP-EEsof were used to optimise an active aperture coupled microstrip patch antenna. This study has shown that the algorithms based on randomised search techniques and the Genetic algorithm provide the most robust performance. The optimisation results were used to design an active dual-band antenna.
APA, Harvard, Vancouver, ISO, and other styles
6

Fischer, Manfred M. "Learning in neural spatial interaction models: A statistical perspective." Springer, 2002. http://epub.wu.ac.at/5503/1/neural.pdf.

Full text
Abstract:
In this paper we view learning as an unconstrained non-linear minimization problem in which the objective function is defined by the negative log-likelihood function and the search space by the parameter space of an origin constrained product unit neural spatial interaction model. We consider Alopex based global search, as opposed to local search based upon backpropagation of gradient descents, each in combination with the bootstrapping pairs approach to solve the maximum likelihood learning problem. Interregional telecommunication traffic flow data from Austria are used as a test bed for comparing the performance of the two learning procedures. The study illustrates the superiority of Alopex based global search, measured in terms of Kullback and Leibler's information criterion.
APA, Harvard, Vancouver, ISO, and other styles
7

Clausner, André. "Anwendung von Line-Search-Strategien zur Formoptimierung und Parameteridentifikation." Master's thesis, Universitätsbibliothek Chemnitz, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-114858.

Full text
Abstract:
The continuous development and improvement of technical processes is carried out today on the basis of stochastic and deterministic optimization strategies in combination with numerical simulation of these processes. Since the FE simulation of forming processes is generally very time-consuming, deterministic methods are well suited for optimizing such processes, because fewer optimization steps, and hence fewer FE simulations, are required. An important requirement for such optimization methods is global convergence to local minima, since the optimal parameter sets are not always approximately known. The two most important strategies for extending the limited convergence radius of the basic optimization methods (Newton-step-based methods and gradient methods) are the line-search strategy and the trust-region strategy. The fundamentals of the line-search strategy are worked through and the most important sub-algorithms are implemented. The method is then examined with respect to an efficient combination of sub-algorithms and method parameters. Subsequently, the performance of an optimization method with a line-search strategy is compared with that of a likewise implemented optimization method with a scaled trust-region strategy. After integration of the implemented methods into the program SPC-Opt, the tests are carried out on the solution of a least-squares problem from material parameter identification and on the shape optimization of a forming tool.
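The bracketing ingredient of the line-search strategy discussed here can be illustrated with golden-section search on a unimodal one-dimensional function; this is a generic textbook sketch, not the SPC-Opt implementation:

```python
import math

def golden_section(phi, a, b, tol=1e-6):
    """Golden-section search for the minimum of a unimodal function
    phi on [a, b], the classic exact line-search building block."""
    inv_phi = (math.sqrt(5) - 1) / 2          # ~0.618
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while b - a > tol:
        if phi(c) < phi(d):                   # minimum lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                                 # minimum lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2
```

In a line search, phi(t) would be f(x + t*d) along the current search direction d.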
APA, Harvard, Vancouver, ISO, and other styles
8

O'Neal, Jerome W. "The use of preconditioned iterative linear solvers in interior-point methods and related topics." Diss., Available online, Georgia Institute of Technology, 2005, 2005. http://etd.gatech.edu/theses/available/etd-06242005-162854/.

Full text
Abstract:
Thesis (Ph. D.)--Industrial and Systems Engineering, Georgia Institute of Technology, 2006.
Parker, R. Gary, Committee Member ; Shapiro, Alexander, Committee Member ; Nemirovski, Arkadi, Committee Member ; Green, William, Committee Member ; Monteiro, Renato, Committee Chair.
APA, Harvard, Vancouver, ISO, and other styles
9

Zoltan, Pap. "Projektivni postupci tipa konjugovanih gradijenata za rešavanje nelinearnih monotonih sistema velikih dimenzija." Phd thesis, Univerzitet u Novom Sadu, Prirodno-matematički fakultet u Novom Sadu, 2019. https://www.cris.uns.ac.rs/record.jsf?recordId=110614&source=NDLTD&language=en.

Full text
Abstract:
U disertaciji su posmatrani projektivni postupci tipa konjugovanih gradijenata za rešavanje nelinearnih monotonih sistema velikih dimenzija. Ovi postupci kombinuju projektivnu metodu sa pravcima pretraživanja tipa konjugovanih gradijenata. Zbog osobine monotonosti sistema, projektivna metoda omogućava jednostavnu globalizaciju, a pravci pretraživanja tipa konjugovanih gradijenata zahtevaju malo računarske memorije pa su pogodni za rešavanje sistema velikih dimenzija. Projektivni postupci tipa konjugovanih gradijenata ne koriste izvode niti funkciju cilja i zasnovani su samo na izračunavanju vrednosti funkcije sistema, pa su pogodni i za rešavanje neglatkih monotonih sistema. Pošto se globalna konvergencija dokazuje bez pretpostavki o regularnosti, ovi postupci se mogu koristiti i za rešavanje sistema sa singularnim rešenjima. U disertaciji su definisana tri nova tročlana pravca pretraživanja tipa Flečer-Rivs i dva nova hibridna pravca tipa Hu-Stori. Formulisani su projektivni postupci sa novim pravcima pretraživanja i dokazana je njihova globalna konvergencija. Numeričke performanse postupaka testirane su na relevantnim primerima i poređene sa poznatim postupcima iz literature. Numerički rezultati potvrđuju da su novi postupci robusni, efikasni i uporedivi sa postojećim postupcima.
Projection-based CG methods for solving large-scale nonlinear monotone systems are considered in this thesis. These methods combine the hyperplane projection technique with conjugate gradient (CG) search directions. The hyperplane projection method is suitable for monotone systems because it enables simple globalization, while CG directions are efficient for large-scale nonlinear systems due to their low memory requirements. Projection-based CG methods are function-value based; they use neither a merit function nor derivatives, and are therefore also suitable for solving nonsmooth monotone systems. The global convergence of these methods is ensured without additional regularity assumptions, so they can also be used for solving singular systems. Three new three-term search directions of Fletcher-Reeves type and two new hybrid search directions of Hu-Storey type are defined. A PCG algorithm with the five new CG-type directions is proposed and its global convergence is established. The numerical performance of the methods is tested on relevant examples from the literature. These results show that the new projection-based CG methods have good computational performance: they are efficient, robust and competitive with other methods.
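The hyperplane-projection globalization described in the abstract can be sketched as follows. The step test, the constants and the plain direction d = -F(x) are illustrative simplifications; the thesis uses CG-type search directions instead:

```python
import numpy as np

def projection_method(F, x, iters=500, tol=1e-8):
    """Derivative-free hyperplane-projection iteration for a monotone
    system F(x) = 0 (bare-bones sketch)."""
    x = np.asarray(x, float)
    for _ in range(iters):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        d = -Fx                              # CG-type directions go here
        # backtrack until F(z)^T d <= -sigma * t * ||d||^2
        t, sigma = 1.0, 1e-4
        while True:
            z = x + t * d
            Fz = F(z)
            if -(Fz @ d) >= sigma * t * (d @ d) or t < 1e-12:
                break
            t *= 0.5
        # project x onto the hyperplane {y : F(z)^T (y - z) = 0},
        # which separates x from the solution set by monotonicity
        if Fz @ Fz > 0:
            x = x - ((Fz @ (x - z)) / (Fz @ Fz)) * Fz
    return x
```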
APA, Harvard, Vancouver, ISO, and other styles
10

Beddiaf, Salah. "Continuous steepest descent path for traversing non-convex regions." Thesis, University of Hertfordshire, 2016. http://hdl.handle.net/2299/17175.

Full text
Abstract:
In this thesis, we investigate methods of finding a local minimum for unconstrained problems of non-convex functions with n variables by following the solution curve of a system of ordinary differential equations. The motivation for this was the fact that existing methods (e.g. those based on Newton methods with line search) sometimes terminate at a non-stationary point when applied to functions f(x) whose Hessian ∇²f is not positive definite for all x. Even when such methods terminate at a stationary point, it could be a saddle or maximum rather than a minimum. The only method which makes intuitive sense in a non-convex region is the trust region approach, where we seek a step which minimises a quadratic model subject to a restriction on the two-norm of the step size. This gives a well-defined search direction, but at the expense of a costly evaluation. The algorithms derived in this thesis are gradient-based methods which require systems of equations to be solved at each step but which do not use a line search in the usual sense. Progress along the Continuous Steepest Descent Path (CSDP) is governed both by the decrease in the function value and by measures of accuracy of a local quadratic model. Numerical results on specially constructed test problems and a number of standard test problems from CUTEr [38] show that the approaches we have considered are more promising when compared with routines in the optimization toolbox of MATLAB [46], namely the trust region method and the quasi-Newton method. In particular, they perform well in comparison with the superficially similar gradient-flow method proposed by Behrman [7].
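Following the continuous steepest-descent path amounts to integrating the gradient-flow ODE dx/dt = -∇f(x). A forward-Euler sketch (a crude stand-in for the model-controlled integration developed in the thesis):

```python
import numpy as np

def csdp(grad, x0, h=0.01, steps=20000, tol=1e-8):
    """Follow the continuous steepest-descent path dx/dt = -grad f(x)
    with forward Euler until the gradient is (numerically) zero."""
    x = np.asarray(x0, float)
    for _ in range(steps):
        g = grad(x)
        if np.linalg.norm(g) < tol:      # reached a stationary point
            break
        x = x - h * g                    # one Euler step along the flow
    return x
```

On a non-convex function such as f(x) = x⁴ - 2x², the path starting at x = 0.5 flows downhill to the local minimum at x = 1 rather than stopping at the stationary point x = 0.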
APA, Harvard, Vancouver, ISO, and other styles
11

Snežana, Đorđević. "Izbor parametara kod gradijentnih metoda za probleme optimizacije bez ograničenja." Phd thesis, Univerzitet u Novom Sadu, Prirodno-matematički fakultet u Novom Sadu, 2015. https://www.cris.uns.ac.rs/record.jsf?recordId=94106&source=NDLTD&language=en.

Full text
Abstract:
Posmatra se problem optimizacije bez ograničenja. Za rešavanje problema optimizacije bez ograničenja postoji mnoštvo raznovrsnih metoda. Istraživanje ovde motivisano je potrebom za metodama koje će brzo konvergirati. Cilj je sistematizacija poznatih rezultata, kao i teorijska i numerička analiza mogućnosti uvođenja parametra u gradijentne metode. Najpre se razmatra problem minimizacije konveksne funkcije više promenljivih. Problem minimizacije konveksne funkcije više promenljivih ovde se rešava bez izračunavanja matrice hesijana, što je naročito aktuelno za sisteme velikih dimenzija, kao i za probleme optimizacije kod kojih ne raspolažemo ni tačnom vrednošću funkcije cilja, ni tačnom vrednošću gradijenta. Deo motivacije za istraživanjem ovde leži i u postojanju problema kod kojih je funkcija cilja rezultat simulacija. Numerički rezultati, predstavljeni u Glavi 6, pokazuju da uvođenje izvesnog parametra može biti korisno, odnosno, dovodi do ubrzanja određenog metoda optimizacije. Takođe se predstavlja jedan novi hibridni metod konjugovanog gradijenta, kod koga je parametar konjugovanog gradijenta konveksna kombinacija dva poznata parametra konjugovanog gradijenta. U prvoj glavi opisuje se motivacija kao i osnovni pojmovi potrebni za praćenje preostalih glava. U drugoj glavi daje se pregled nekih gradijentnih metoda prvog i drugog reda. Četvrta glava sadrži pregled osnovnih pojmova i nekih rezultata vezanih za metode konjugovanih gradijenata. Pomenute glave su tu radi pregleda nekih poznatih rezultata, dok se originalni doprinos predstavlja u trećoj, petoj i šestoj glavi. U trećoj glavi se opisuje izvesna modifikacija određenog metoda u kome se koristi multiplikativni parametar, izabran na slučajan način. Dokazuje se linearna konvergencija tako formiranog novog metoda. Peta glava sadrži originalne rezultate koji se odnose na metode konjugovanih gradijenata. Naime, u ovoj glavi predstavlja se novi hibridni metod konjugovanih gradijenata, koji je konveksna kombinacija dva poznata metoda konjugovanih gradijenata. U šestoj glavi se daju rezultati numeričkih eksperimenata, izvršenih na izvesnom skupu test funkcija, koji se odnose na metode iz treće i pete glave. Implementacija svih razmatranih algoritama rađena je u paketu MATHEMATICA. Kriterijum upoređivanja je vreme rada centralne procesorske jedinice.
The problem under consideration is an unconstrained optimization problem. There are many different methods designed to solve optimization problems. The investigation made here is motivated by the need for methods which converge fast. The main goal is the systematization of some known results, together with a theoretical and numerical analysis of the possibilities of introducing parameters into gradient methods. Firstly, the minimization problem is considered where the objective function is a convex multivariable function. This problem is solved here without the calculation of the Hessian, which is very important, for example, when large-dimension systems are solved, and also for solving optimization problems with unknown values of the objective function and its gradient. In part, this investigation is motivated by the existence of problems in which the objective function is the result of simulations. Numerical results, presented in Chapter 6, show that the introduction of a parameter is useful, i.e. it accelerates the known optimization method. Further, a new hybrid conjugate gradient method is presented, in which the conjugate gradient parameter is a convex combination of two known conjugate gradient parameters. The first chapter presents the motivation and the basic concepts necessary for the other chapters. The second chapter contains a survey of some first-order and second-order gradient methods. The fourth chapter contains a survey of basic concepts and results concerning conjugate gradient methods. The first, second and fourth chapters review known results, while the original results are presented in Chapters 3, 5 and 6. In the third chapter, a modification of one unconstrained optimization method is presented, in which a randomly chosen multiplicative parameter is used. The linear convergence of this modification is also proved. The fifth chapter contains the original results concerning conjugate gradient methods. Namely, a new hybrid conjugate gradient method is presented, which is the convex combination of two known conjugate gradient methods. The sixth chapter contains the numerical results, obtained on a set of test functions, for the methods from Chapters 3 and 5. All considered algorithms are implemented in Mathematica. The comparison criterion is CPU time.
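The hybrid conjugate gradient idea described above, taking a convex combination of two known CG parameters, reduces to one line. The Fletcher-Reeves/Polak-Ribière pair and theta = 0.5 below are illustrative choices, not necessarily the pair combined in the thesis:

```python
import numpy as np

def hybrid_beta(g_new, g_old, theta=0.5):
    """Convex combination of two standard CG parameters: for
    theta = 1 this is pure Fletcher-Reeves, for theta = 0 pure
    Polak-Ribiere, and values in between give the hybrid."""
    beta_fr = (g_new @ g_new) / (g_old @ g_old)
    beta_pr = g_new @ (g_new - g_old) / (g_old @ g_old)
    return theta * beta_fr + (1 - theta) * beta_pr
```

The resulting beta is then used as usual in the direction update d_new = -g_new + beta * d_old.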
APA, Harvard, Vancouver, ISO, and other styles
12

Lienemann, Matthew A. "Automated Multi-Modal Search and Rescue using Boosted Histogram of Oriented Gradients." DigitalCommons@CalPoly, 2015. https://digitalcommons.calpoly.edu/theses/1507.

Full text
Abstract:
Unmanned Aerial Vehicles (UAVs) provide a platform for many automated tasks, and with ever-increasing advances in computing these tasks can become more complex. The use of UAVs is expanded in this thesis with the goal of Search and Rescue (SAR), where a UAV can assist first responders in searching for a lost person and relay possible search areas back to SAR teams. To identify a person from an aerial perspective, low-level Histogram of Oriented Gradients (HOG) feature descriptors are used over a segmented region, provided from thermal data, to increase classification speed. This thesis also introduces a dataset to support a Bird's-Eye-View (BEV) perspective and tests the viability of low-level HOG feature descriptors on this dataset. The low-level feature descriptors are known as Boosted Histogram of Oriented Gradients (BHOG) features, which discretize gradients over varying-sized cells and blocks and are trained with a Cascaded Gentle AdaBoost Classifier using our compiled BEV dataset. The classification is supported by multiple sensing modes, with color and thermal videos, to increase classification speed. The thermal video is segmented to indicate any Regions of Interest (ROIs), which are mapped to the color video where classification occurs. The ROIs decrease the classification time needed on the aerial platform by eliminating a per-frame sliding window. Testing reveals that with the use of only color data and a classifier trained for a profile of a person, there is an average recall of 78%, while thermal detection yields an average recall of 76%, but with a speed-up of 2 on video of 240x320 resolution. The BEV testing reveals that higher resolutions are favored, with a recall rate of 71% using BHOG features and 92% using Haar features. In the lower-resolution BEV testing, the recall rates are 42% and 55% for BHOG and Haar features, respectively.
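The low-level HOG descriptor underlying the BHOG features can be sketched for a single cell as follows (an unsigned 9-bin orientation histogram with L2 normalisation; the cell/block arrangement and the boosting stage are omitted):

```python
import numpy as np

def hog_cell_histogram(patch, bins=9):
    """Histogram of oriented gradients for one cell: accumulate
    gradient magnitude into unsigned-orientation bins, then
    L2-normalise the histogram."""
    gy, gx = np.gradient(patch.astype(float))      # row, column gradients
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    idx = (ang / (180.0 / bins)).astype(int) % bins
    hist = np.zeros(bins)
    for b in range(bins):
        hist[b] = mag[idx == b].sum()
    return hist / (np.linalg.norm(hist) + 1e-9)
```

A full descriptor concatenates such histograms over a grid of cells, normalised per block; the thesis then feeds boosted selections of these features to the cascade.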
APA, Harvard, Vancouver, ISO, and other styles
13

Liut, Daniel Armando. "Neural-Network and Fuzzy-Logic Learning and Control of Linear and Nonlinear Dynamic Systems." Diss., Virginia Tech, 1999. http://hdl.handle.net/10919/29163.

Full text
Abstract:
The goal of this thesis is to develop nontraditional strategies to provide motion control for different engineering applications. We focus our attention on three topics: 1) roll reduction of ships in a seaway; 2) response reduction of buildings under seismic excitations; 3) new training strategies and neural-network configurations. The first topic of this research is based on a multidisciplinary simulation, which includes ship-motion simulation by means of a numerical model called LAMP, the modeling of fins and computation of the hydrodynamic forces produced by them, and a neural-network/fuzzy-logic controller. LAMP is based on a source-panel method to model the flowfield around the ship, whereas the fins are modeled by a general unsteady vortex-lattice method. The ship is considered to be a rigid body and the complete equations of motion are integrated numerically in the time domain. The motion of the ship and the complete flowfield are calculated simultaneously and interactively. The neural-network/fuzzy-logic controller can be progressively trained. The second topic is the development of a neural-network-based approach for the control of seismic structural response. To this end, a two-dimensional linear model and a hysteretic model of a multistory building are used. To control the response of the structure a tuned mass damper is located on the roof of the building. Such devices provide a good passive reduction. Once the mass damper is properly tuned, active control is added to improve the already efficient passive controller. This is achieved by means of a neural network. As part of the last topic, two new flexible and expeditious training strategies are developed to train the neural-network and fuzzy-logic controllers for both naval and civil engineering applications. 
The first strategy is based on a load-matching procedure, which seeks to adjust the controller in order to counteract the loads (forces and moments) which generate the motion that is to be reduced. A second training strategy provides training by means of an adaptive gradient search. This technique provides wide flexibility in defining the parameters to be optimized. Also, a novel neural-network approach called the modal neural network is designed as a suitable controller for multiple-input multiple-output (MIMO) control systems.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
14

Strachan, N. J. C. "Automatic fish species grading using image processing and pattern recognition techniques." Thesis, University of Aberdeen, 1990. http://digitool.abdn.ac.uk/R?func=search-advanced-go&find_code1=WSN&request1=AAIU546342.

Full text
Abstract:
Size and species grading of fish (e.g. on board a fishing vessel) might in future be done entirely automatically using image analysis and pattern recognition techniques. Three methods of discriminating between pictures of seven different species of fish have been compared: using invariant moments, optimisation of the mismatch, and shape descriptors. A novel method of obtaining the moments of a polygon is described. It was found that the shape descriptors gave the best results, with a sorting reliability of 90%. Different methods of producing symmetry lines from the shape of fish have been studied in order to describe fish bending and deformations. The simple thinning algorithm was found to work best to provide a reference axis. This axis was then used as a basis for constructing a deformation-independent position reference system. Using this reference system, position-specific colour measurements of fish could be taken. For this to be done, the video digitising system was first calibrated in the CIELUV colour space using the Macbeth colour chart. Colour and shape measurements were then made on 18 species of demersal and 5 species of pelagic fish. The simple shape measurements of length/width and front-area/back-area ratios were used for some introductory separation of the fish. The variables produced by the shape descriptors and colour measurements were then analysed by discriminant analysis. It was found that all of the demersal fish were sorted correctly (sorting reliability of 100%) and all of the pelagic fish were sorted correctly except one (sorting reliability of 98%). A prototype machine is now being constructed based on the methods described in this work.
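The closed-form polygon moments mentioned in the abstract can be illustrated with the shoelace-type formulas for area and centroid (only the lowest-order moments are shown; the thesis derives general moments):

```python
def polygon_moments(pts):
    """Signed area (m00) and centroid of a simple polygon given as a
    list of (x, y) vertices, via Green's theorem / shoelace sums."""
    a = cx = cy = 0.0
    n = len(pts)
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]
        cross = x0 * y1 - x1 * y0          # edge contribution
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return a, cx / (6 * a), cy / (6 * a)
```

Working directly on the boundary polygon avoids rasterising the fish outline before computing its moments.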
APA, Harvard, Vancouver, ISO, and other styles
15

Stynsberg, John. "Incorporating Scene Depth in Discriminative Correlation Filters for Visual Tracking." Thesis, Linköpings universitet, Datorseende, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-153110.

Full text
Abstract:
Visual tracking is a computer vision problem where the task is to follow a target through a video sequence. Tracking has many important real-world applications in fields such as autonomous vehicles and robot vision. Since visual tracking does not assume any prior knowledge about the target, it faces challenges such as occlusion, appearance change, background clutter and scale change. In this thesis we try to improve the capabilities of tracking frameworks based on discriminative correlation filters by incorporating scene depth information. We utilize scene depth information on three main levels. First, we use raw depth information to segment the target from its surroundings, enabling occlusion detection and scale estimation. Second, we investigate different visual features calculated from depth data to decide which features are good at encoding the geometric information available solely in depth data. Third, we investigate handling missing data in the depth maps using a modified version of the normalized convolution framework. Finally, we introduce a novel approach to parameter search using genetic algorithms to find the best hyperparameters for our tracking framework. Experiments show that depth data can be used to estimate scale changes and handle occlusions. In addition, visual features calculated from depth are more representative when combined with color features. It is also shown that utilizing normalized convolution improves the overall performance in some cases. Lastly, the usage of genetic algorithms for hyperparameter search leads to accuracy gains as well as some insights into the performance of different components within the framework.
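The genetic-algorithm hyperparameter search described in the abstract is not specified in detail; a minimal real-coded GA of the kind commonly used for such searches might look like the following sketch (the score function, bounds, and operator choices are all assumptions, not the thesis's implementation):

```python
import random

def ga_search(score, bounds, pop_size=20, gens=30, seed=0):
    """Minimal real-coded genetic algorithm for hyperparameter search.

    score: maps a parameter vector (list of floats) to a fitness, higher = better.
    bounds: list of (low, high) ranges, one per hyperparameter.
    """
    rng = random.Random(seed)
    sample = lambda: [rng.uniform(lo, hi) for lo, hi in bounds]
    pop = [sample() for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=score, reverse=True)
        elite = scored[: pop_size // 2]            # truncation selection (elitist)
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            # uniform crossover of two elite parents
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            i = rng.randrange(len(child))          # mutate one gene, clamped to bounds
            lo, hi = bounds[i]
            child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        pop = elite + children
    return max(pop, key=score)
```

In practice `score` would wrap a full tracker evaluation on a benchmark, which is why population sizes and generation counts are kept small.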
APA, Harvard, Vancouver, ISO, and other styles
16

Senate, University of Arizona Faculty. "Faculty Senate Minutes March 6, 2017." University of Arizona Faculty Senate (Tucson, AZ), 2017. http://hdl.handle.net/10150/623059.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Kuo, Wen-Wei, and 郭文偉. "Quasi Gradient Search Evolution Strategy Method." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/92080086894239835355.

Full text
Abstract:
Master's thesis
Tunghai University
Department of Industrial Engineering and Enterprise Information
93
Evolution strategies use the computer to simulate the phenomenon of evolution, searching for the best solution of a complicated problem through evolutionary computation. At present, evolution strategies are applied to search, optimization, machine learning, scheduling, manufacturing, medicine, and so on. Searching efficiently for the globally optimal solution of an objective function is the difficult part of solving optimization problems. In general, the efficiency of an evolution strategy depends on the size of the mutation strength. With a small mutation strength, the search of fine details is relatively efficient, but evolution is inefficient when the parent lands on a smooth region of the objective function; moreover, a small mutation strength makes it hard for the parents to escape a local solution when the objective function has multiple local optima. A large mutation strength can escape local solutions and is relatively efficient for the search as a whole, but it becomes difficult to close in on the optimal solution efficiently once the parents approach its region and need to search the details. This paper proposes the Quasi Gradient Search Evolution Strategy (QGSES) method. The main idea of QGSES is that parents should take the environmental information around themselves into account before searching for the optimal solution: the gradient vector and the historical moving vector of the parents are used to judge the best direction and distance for the offspring to approach the region of the optimal solution. With QGSES, we aim to speed up the whole evolutionary process and improve the search efficiency of evolution strategies.
To achieve fast convergence, the whole population should gather information around the optimal solution efficiently and quickly surround its neighbourhood. Experimental results show that the proposed method has good convergence efficiency on multi-variable functions and excellent global search capability on functions with multiple local optima, together with stable convergence toward the optimal solution. It is an effective search algorithm.
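The abstract's idea of biasing offspring along a gradient estimate and the parent's historical moving vector can be illustrated with the following toy sketch; the constants, sampling scheme, and finite-difference gradient are assumptions for illustration, not the thesis's exact QGSES:

```python
import random

def qges_minimize(f, x, step=0.5, lam=10, iters=60, seed=1):
    """Illustrative quasi-gradient evolution strategy (toy version).

    Offspring are sampled around the parent, biased along a finite-difference
    gradient estimate and the parent's previous (historical) moving vector.
    Only improving offspring replace the parent, so f(x) never increases.
    """
    rng = random.Random(seed)
    n = len(x)
    hist = [0.0] * n                    # historical moving vector
    fx = f(x)
    for _ in range(iters):
        h = 1e-4                        # forward-difference gradient estimate
        grad = [(f(x[:i] + [x[i] + h] + x[i + 1:]) - fx) / h for i in range(n)]
        best, fbest = x, fx
        for _ in range(lam):            # lam offspring per generation
            cand = [x[i] - step * grad[i] + 0.5 * hist[i] + rng.gauss(0, step)
                    for i in range(n)]
            fc = f(cand)
            if fc < fbest:
                best, fbest = cand, fc
        hist = [best[i] - x[i] for i in range(n)]
        x, fx = best, fbest
        step *= 0.95                    # slowly anneal the mutation strength
    return x, fx
```

On a smooth function the gradient term pulls offspring downhill even where random mutation alone would stall, which is the behaviour the abstract argues for.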
APA, Harvard, Vancouver, ISO, and other styles
18

Wen, Shing Chau, and 文星超. "Split-Path Gradient Search Adaptive FIR Filter." Thesis, 1993. http://ndltd.ncl.edu.tw/handle/65274814698915494171.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Ku, Chi-Chuan, and 古季詮. "A Study of Orthogonal Quasi-Gradient Search Algorithm." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/92448253674482719356.

Full text
Abstract:
Master's thesis
Tunghai University
Department of Industrial Engineering and Enterprise Information
95
This paper proposes the Orthogonal Quasi Gradient Search (OQGS) evolution strategy method. Instead of generating the initial solution randomly as in traditional methods, OQGS applies orthogonal experimental design (OED) and orthogonal crossover in the global search to build a better mechanism for the initial solution. The main idea of the evolution is that parents should take the environmental information around themselves into account before searching for the optimal solution: the gradient vector and historical moving vector of the parents are used to judge the best direction and distance for the offspring to approach the region of the optimal solution. In this way, the whole population can gather information around the optimal solution efficiently and quickly surround its neighbourhood. To achieve fast convergence, this paper uses orthogonal pitch adjustment in the local search to approximate the nearby local optimum. With OQGS, we aim to speed up the whole evolutionary process and improve the search efficiency of evolution strategies. We apply the proposed algorithm to eight test functions, both unimodal and multi-modal. The experiments include fixed parameters, varied parameters, randomly varied variable bounds, and a varied number of variables. Compared with the Orthogonal Genetic Algorithm (OGA) and the Orthogonal Simulated Annealing algorithm (OSA), OQGS solves the problems faster and finds optimal or close-to-optimal solutions to the eight test functions more stably. All of the performance tables show that OQGS achieves outstanding results, and they demonstrate that OQGS has the best overall performance among the compared methods.
APA, Harvard, Vancouver, ISO, and other styles
20

Lin, Yu-Chieh, and 林于傑. "Gradient Search Method for Frequency Estimation and Accuracy Analysis." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/49543818822762636242.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Department of Mechanical Engineering
99
Frequency estimation for single-tone signals is an important technique in engineering. The DFT (or FFT) is the traditional method for measuring a signal's frequency approximately. However, the estimated frequency is usually imprecise because of spectral leakage and noise, and such an imprecise estimate is not sufficient for engineering applications. To improve the precision of the estimated frequency and the robustness against noise, many frequency estimators have been developed; the gradient search method (GSM) is one of them. In this thesis, the principle of GSM is discussed first. Then the error bound of GSM is analysed: the equation for the error bound is derived and compared with the Cramér-Rao bound. In the final chapter, GSM is compared with other estimators by simulation, and the comparison clearly shows the advantages of GSM.
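The two-stage idea behind such estimators (a coarse FFT bin, then a search that climbs the periodogram) can be sketched as follows; this is an illustrative derivative-guided search, not the thesis's exact GSM formulation:

```python
import cmath, math

def estimate_freq(x, fs):
    """Single-tone frequency estimate: coarse DFT peak, then a
    derivative-guided search on the continuous periodogram."""
    n = len(x)

    def power(f):
        # Periodogram value at an arbitrary (off-grid) frequency f in Hz.
        s = sum(x[k] * cmath.exp(-2j * math.pi * f * k / fs) for k in range(n))
        return abs(s) ** 2 / n

    # Coarse stage: strongest positive-frequency DFT bin (leakage-limited).
    k_max = max(range(1, n // 2), key=lambda k: power(k * fs / n))
    f = k_max * fs / n
    p = power(f)
    step = fs / (2 * n)                 # start at half a bin width
    for _ in range(60):
        h = 1e-3 * step
        slope = (power(f + h) - power(f - h)) / (2 * h)  # numerical derivative
        cand = f + (step if slope > 0 else -step)        # move uphill
        pc = power(cand)
        if pc > p:
            f, p = cand, pc
        else:
            step *= 0.5                 # overshot the peak: shrink the step
    return f
```

The refinement stage resolves frequencies well below the FFT bin spacing fs/n, which is exactly the limitation of the coarse DFT estimate the abstract points out.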
APA, Harvard, Vancouver, ISO, and other styles
21

Yu-XianZheng and 鄭宇翔. "Detection for the MIMO System Based on Gradient Search." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/6wx54b.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Institute of Computer and Communication Engineering
105
Multiple-input multiple-output (MIMO) technology is a model that describes multiple-antenna wireless communication systems. MIMO can significantly increase the throughput and the transmission distance without increasing the bandwidth or the total transmitted power, and with an appropriate detection algorithm it can effectively increase the spectral efficiency. Usually, however, good performance comes with higher computational complexity, as with sphere decoding (SD). SD attains the maximum-likelihood (ML) solution, but its computational complexity increases exponentially at low and medium SNRs. In this thesis, we study a detection algorithm based on the differential metric, whose search process needs only additions after some pre-processing. Based on the differential metric, we study a gradient algorithm with a stop condition, called the modified gradient algorithm, to reach near-ML performance. Based on the properties of the differential metric, we further study indicative functions, which help determine the ML bits in the updated initial sequence before the higher-order gradient search and thereby reduce the computational complexity.
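A greedy descent over a discrete metric of the kind the abstract discusses can be sketched for BPSK symbols as follows. This toy version recomputes ||y - Hs||^2 directly, whereas the thesis's differential metric makes each flip an addition-only update; it can also stop in a local minimum, which the indicative functions and multiple initial sequences are meant to mitigate:

```python
def bitflip_detect(H, y, s0=None):
    """Greedy steepest-descent bit-flip search for BPSK MIMO detection.

    Repeatedly applies the single symbol flip that most reduces ||y - H s||^2,
    stopping when no flip improves the metric (a local minimum).
    H: list of rows; y: received vector; s0: optional initial +/-1 sequence.
    """
    rows, n = len(H), len(H[0])

    def metric(s):
        return sum((y[i] - sum(H[i][j] * s[j] for j in range(n))) ** 2
                   for i in range(rows))

    if s0 is None:  # initial hard decision from the matched filter H^T y
        s0 = [1.0 if sum(H[i][j] * y[i] for i in range(rows)) >= 0 else -1.0
              for j in range(n)]
    s, cur = list(s0), metric(s0)
    while True:
        best_j, best_m = None, cur
        for j in range(n):
            s[j] = -s[j]                  # trial flip
            mt = metric(s)
            s[j] = -s[j]                  # undo
            if mt < best_m:
                best_j, best_m = j, mt
        if best_j is None:                # no flip improves: stop
            return s, cur
        s[best_j] = -s[best_j]
        cur = best_m
```

Running several such searches in parallel from different initial sequences, as the thesis proposes, trades a little extra work for a much better chance of reaching the ML solution.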
APA, Harvard, Vancouver, ISO, and other styles
22

Millam, John Mark. "Conjugate gradient density matrix search: A linear scaling alternative to diagonalization." Thesis, 1997. http://hdl.handle.net/1911/19185.

Full text
Abstract:
Advances in the computation of the Coulomb, exchange, and correlation contributions to Gaussian-based Hartree-Fock and density functional theory Hamiltonians have demonstrated near-linear scaling with molecular size for these steps. These advances leave the O(N^3) diagonalization bottleneck as the rate-determining step for very large systems. In this work, a conjugate gradient density matrix search (CG-DMS) method has been successfully extended and computationally implemented for use with first principles calculations. A Cholesky decomposition of the overlap matrix and its inverse, which can be formed in near-linear time for sparse systems, is used to transform to and back from an orthonormal basis. Linear scaling of CPU time for the density matrix search and crossover of CPU time with diagonalization is demonstrated for polyglycine chains containing up to 493 atoms and water clusters up to 900 atoms.
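The Cholesky-based change of basis mentioned in the abstract can be illustrated densely with NumPy; the linear-scaling implementation exploits sparsity and, of course, never diagonalizes, so this is only a sketch of the transformation itself:

```python
import numpy as np

def to_orthonormal(H, S):
    """Transform a symmetric Hamiltonian H from a non-orthogonal basis with
    overlap matrix S into an orthonormal basis via Cholesky factorization.

    With S = L L^T, the generalized problem H C = S C diag(w) becomes the
    ordinary problem H' V = V diag(w) where H' = L^{-1} H L^{-T} and
    C = L^{-T} V maps solutions back. Returns (H', L).
    """
    L = np.linalg.cholesky(S)
    Linv = np.linalg.inv(L)          # dense for illustration only
    return Linv @ H @ Linv.T, L
```

Any quantity obtained in the orthonormal basis (here, eigenvectors V are used just to verify the transform) maps back through L^{-T}, which is the "transform to and back" step the abstract refers to.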
APA, Harvard, Vancouver, ISO, and other styles
23

Yu-PangChung and 鍾裕邦. "Detection for the MIMO System Based on Differential Metrics and Gradient Search." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/md64bh.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Institute of Computer and Communication Engineering
104
In recent years, with the increasing demand for transferring large amounts of data rapidly and reliably, the multiple-input multiple-output (MIMO) system has become more attractive in modern wireless communications. The MIMO system provides efficient use of spectrum and achieves high throughput. Therefore, finding fast and efficient detectors for high-rate MIMO transmission is an important issue in wireless communications. The sphere decoding (SD) algorithm is an efficient approach to obtain the optimal maximum-likelihood (ML) detection. However, the SD algorithm has very high complexity, especially at low signal-to-noise ratio (SNR). In this thesis, we study low-complexity detection algorithms for the MIMO system based on differential metrics. First, we introduce the differential metrics of different orders and their recursive relations. Then we apply the indicative functions, which can be used to determine some ML bits of the initial sequence. By parallel search with different initial sequences, we can avoid the complexity of high-order search. These approaches can be used to efficiently reduce the complexity of MIMO detection. In the proposed algorithm, we apply a stop condition and indicative functions in the detection of the MIMO system. The simulation results validate our study and algorithm.
APA, Harvard, Vancouver, ISO, and other styles
24

Hao-ChengChu and 朱浩誠. "Detection of the MIMO System Based on the Gradient Search with Different Initial Sequences." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/vydzse.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Tsai, Zong-You, and 蔡宗祐. "Research on the Correction of Misaligned and Lens Aberration CCD Images by Gradient Descent Search Method." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/djr7g7.

Full text
Abstract:
Master's thesis
Lunghwa University of Science and Technology
Master's Program, Department of Electronic Engineering
107
In this research, the gradient descent search method is used to correct CCD (charge-coupled device) image distortion so that the corrected image can be used to determine the precise positions of objects in the image. Two major factors lead to image distortion. One is the non-parallelism and misalignment between the image plane and the object plane caused by small rotations or displacements of the camera. The other is the nonlinear distortion caused by lens aberrations. These distortions can be represented by simple geometric models and parameters. By comparing a standard object with its real, distorted image, the geometric parameters can be extracted with a gradient descent search algorithm. With these parameters, the distorted image can be corrected to an undistorted one for further processing. The results are intended for use on desktop or laptop computers, so we selected Microsoft Visual Studio as the development platform, C# as the programming language, and WPF (Windows Presentation Foundation) as the GUI (graphical user interface) framework, to make flexible use of the graphics card's hardware resources.
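A one-parameter version of such a fit (pure radial lens distortion, with gradient descent on the squared error between modelled and observed radii) might look like this; the model and constants are hypothetical simplifications of the thesis's full misalignment-plus-aberration model:

```python
def fit_k1(r_true, r_obs, lr=0.05, iters=500):
    """Fit the first radial-distortion coefficient k1 in the model
    r_obs = r * (1 + k1 * r^2) by gradient descent on the squared error.

    r_true: undistorted radii of calibration points (from a standard object).
    r_obs:  corresponding radii measured in the distorted CCD image.
    """
    k1 = 0.0
    for _ in range(iters):
        # dE/dk1 of E = sum (r (1 + k1 r^2) - r_obs)^2  is  sum 2 (...) r^3
        g = sum(2 * (r * (1 + k1 * r * r) - ro) * r ** 3
                for r, ro in zip(r_true, r_obs))
        k1 -= lr * g
    return k1
```

Once k1 is known, each observed radius can be inverted back to its undistorted value, which is the correction step the abstract describes for the full parameter set.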
APA, Harvard, Vancouver, ISO, and other styles
26

Kevorkiants, Rouslan [Verfasser]. "Linear scaling conjugate gradient density matrix search: implementation, validation, and application with semiempirical molecular orbital methods / vorgelegt von Rouslan Kevorkiants." 2003. http://d-nb.info/968547028/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Merk, Angelica Maria. "Optimization of carrots and celery, acidified and thermally processed in glass jars, using response surface methodology and gradient search (senspro)." 1986. http://catalog.hathitrust.org/api/volumes/oclc/15525040.html.

Full text
Abstract:
Thesis (M.S.)--University of Wisconsin--Madison, 1986.
Typescript. Includes bibliographical references (leaves 91-100).
APA, Harvard, Vancouver, ISO, and other styles
28

"Performance comparison of conjugate gradient density matrix search and Chebyshev expansion methods for avoiding diagonalization in large-scale electronic structure calculations." Thesis, 1998. http://hdl.handle.net/1911/20474.

Full text
Abstract:
We report a performance comparison of two linear-scaling methods which avoid the diagonalization bottleneck of traditional electronic structure algorithms. The Chebyshev expansion method (CEM) is implemented for carbon tight-binding calculations of large systems and its memory and timing requirements compared to those of our previously implemented conjugate gradient density matrix search (CG-DMS). Benchmark calculations are carried out on icosahedral fullerenes from C60 to C8640 and the linear scaling memory and CPU requirements of CEM demonstrated. We show that the CPU requirements of CEM and CG-DMS are similar for calculations with comparable accuracy.
APA, Harvard, Vancouver, ISO, and other styles
29

Clausner, André. "Anwendung von Line-Search-Strategien zur Formoptimierung und Parameteridentifikation." Master's thesis, 2007. https://monarch.qucosa.de/id/qucosa%3A19910.

Full text
Abstract:
The continuous development and improvement of technical processes is today carried out on the basis of stochastic and deterministic optimization strategies in combination with numerical simulation of these processes. Since FE simulation of forming processes is generally very time-consuming, deterministic methods are attractive for optimizing such processes, as they require fewer optimization steps and hence fewer FE simulations. An important requirement for such optimization methods is global convergence to local minima, since the optimal parameter sets are not always approximately known. The two most important strategies for extending the limited convergence radius of the natural optimization methods (Newton-step-based methods and gradient methods) are the line-search strategy and the trust-region strategy. The fundamentals of the line-search strategy are reviewed and the most important sub-algorithms implemented. The method is then examined with respect to an efficient combination of sub-algorithms and method parameters. Finally, the performance of an optimization method with a line-search strategy is compared with that of an optimization method with a scaled trust-region strategy, which was also implemented.
After integrating the implemented methods into the program SPC-Opt, the tests are carried out on the solution of a least-squares problem from material parameter identification and on the shape optimization of a forming tool.
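One of the classical building blocks this thesis analyses, a backtracking line search enforcing the (classical) Armijo descent condition f(x + a d) <= f(x) + c a g^T d, can be sketched as:

```python
def armijo_line_search(f, grad_f, x, d, alpha0=1.0, c=1e-4, rho=0.5):
    """Backtracking line search with the classical Armijo condition.

    f: objective, grad_f: its gradient; x: current point (list of floats);
    d: search direction, which must be a descent direction (g^T d < 0),
    e.g. the negative gradient or a (quasi-)Newton step.
    Returns an accepted step length a.
    """
    fx = f(x)
    slope = sum(g * di for g, di in zip(grad_f(x), d))   # g^T d, negative
    a = alpha0
    # Shrink the step geometrically until sufficient decrease holds.
    while f([xi + a * di for xi, di in zip(x, d)]) > fx + c * a * slope:
        a *= rho
    return a
```

This is the Armijo criterion of Chapter 5 in its simplest form; the thesis additionally studies Goldstein and (strong) Wolfe criteria and polynomial interpolation for choosing the trial steps.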
APA, Harvard, Vancouver, ISO, and other styles
30

Molisse, Giulia. "Above ground biomass and carbon sequestration estimation -Implementation of a sentinel-2 based exploratory workflow." Master's thesis, 2021. http://hdl.handle.net/10362/113902.

Full text
Abstract:
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies
This work presents a Sentinel-2 based exploratory workflow for the estimation of Above Ground Biomass (AGB) and Carbon Sequestration (CS) in a subtropical forest. In the last decades, remote sensing-based studies on AGB have been widely investigated alongside a variety of sensors, features and Machine Learning (ML) algorithms. Up-to-date and reliable mapping of such measures has been increasingly required by international commitments under the climate convention as well as by sustainable forest management practices. The proposed approach consists of 5 major steps: 1) generation of several Vegetation Indices (VI), biophysical parameters and texture measures; 2) feature selection with Mean Decrease in Impurity (MDI), Mean Decrease in Accuracy (MDA), L1 Regularization (LASSO), and Principal Component Analysis (PCA); 3) feature selection testing with k-Nearest Neighbour (kNN), Random Forest (RF), Extreme Gradient Boosting (XGB), and Artificial Neural Network (ANN); 4) hyper-parameter fine-tuning with Grid Search, Random Search and Bayesian Optimization; and finally, 5) model explanation with the SHapley Additive exPlanations (SHAP) package, which to this day has not been investigated in the context of AGB mapping. The following results were obtained: 1) MDI was chosen as the best performing feature selection method by the XGB and the Deep Neural Network (DNN), MDA was chosen by the RF and the kNN, while LASSO was chosen by the Shallow Neural Network (SNN) and the Linear Neural Network (LNN); 2) before hyper-parameter optimization, the Deep Neural Network (DNN) yielded the best performance with a Root Mean Squared Error (RMSE) of 42.30 t/ha; 3) after hyper-parameter fine-tuning with Bayesian Optimization, the XGB model yielded the best performance with a RMSE of 37.79 t/ha; 4) model explanation with SHAP allowed for a deeper understanding of the features' impact on the model predictions.
Finally, the predicted AGB throughout the study area showed an average value of 83 t/ha, ranging from 0 t/ha to 346.56 t/ha. The related CS was estimated by using a conversion factor of 0.47.
APA, Harvard, Vancouver, ISO, and other styles
31

Κωστόπουλος, Αριστοτέλης. "Νέοι αλγόριθμοι εκπαίδευσης τεχνητών νευρωνικών δικτύων και εφαρμογές [New training algorithms for artificial neural networks and applications]." Thesis, 2012. http://hdl.handle.net/10889/5462.

Full text
Abstract:
In this dissertation the problem of the training of feedforward artificial neural networks and its applications are considered. The presentation of the topics and the results are organized as follows: In the first chapter, the artificial neural networks are introduced. Initially, the benefits of the use of artificial neural networks are presented. In the sequence, the structure and their functionality are presented. More specifically, the derivation of the artificial neurons from the biological ones is presented followed by the presentation of the architecture of the feedforward neural networks. The historical notes and the use of neural networks in real world problems are concluding the first chapter. In Chapter 2, the existing training algorithms for the feedforward neural networks are considered. First, a summary of the training problem and its mathematical formulation, that corresponds to the uncostrained minimization of a cost function, are given. In the sequence, training algorithms based on the steepest descent, Newton, variable metric and conjugate gradient methods are presented. Furthermore, the weight space, the error surface and the techniques of the initialization of the weights are described. Their influence in the training procedure is discussed. In Chapter 3, a new training algorithm for feedforward neural networks based on the backpropagation algorithm and the automatic two-point step size (learning rate) is presented. The algorithm uses the steepest descent search direction while the learning rate parameter is calculated by minimizing the standard secant equation. Furthermore, a new learning rate parameter is derived by minimizing the modified secant equation introduced by Zhang, that uses both gradient and function value information. In the sequece a switching mechanism is incorporated into the algorithm so that the appropriate stepsize to be chosen according to the status of the current iterative point. 
Finally, the global convergence of the proposed algorithm is studied and the results of some numerical experiments are presented. In Chapter 4, some efficient training algorithms based on conjugate gradient optimization methods are presented. In addition to the existing conjugate gradient training algorithms, we introduce Perry's conjugate gradient method as a training algorithm. Furthermore, a new class of conjugate gradient methods is proposed, called self-scaled conjugate gradient methods, derived from the principles of the Hestenes-Stiefel, Fletcher-Reeves, Polak-Ribiere and Perry methods. This class is based on the spectral scaling parameter. We also incorporate into the conjugate gradient training algorithms an efficient line search technique based on the Wolfe conditions and on safeguarded cubic interpolation, while the initial learning rate parameter fed to the line search is automatically adapted at each iteration by a closed formula. Finally, an efficient restarting procedure is employed in order to further improve the effectiveness of the conjugate gradient training algorithms and to prove their global convergence. Experimental results show that, in general, the new class of methods performs better, with a much lower computational cost and a higher success rate. In the last chapter of this dissertation, Perry's self-scaled conjugate gradient training algorithm presented in the previous chapter is isolated and modified. More specifically, the main characteristics of the training algorithm are maintained, but a different line search strategy based on the nonmonotone Wolfe conditions is utilized. Furthermore, a new initial learning rate parameter is introduced for use in conjunction with the self-scaled conjugate gradient training algorithm, which seems to be more effective than the initial learning rate parameter proposed by Shanno when used with the nonmonotone line search technique.
Subsequently, experimental results for different training problems are presented. Finally, a feedforward neural network is trained with the proposed algorithm on the problem of brain astrocytoma grading, and the results are compared with those achieved by a probabilistic neural network. The dissertation concludes with Appendix A', where the training problems used for the evaluation of the proposed training algorithms are presented.
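The nonmonotone Wolfe conditions used in the last chapter relax the usual sufficient decrease test: the new function value is compared against the maximum of the last few values rather than the current one, so occasional increases are tolerated. A minimal backtracking sketch of the nonmonotone sufficient decrease test is shown below; the names and constants are illustrative assumptions, and the full algorithm would also enforce the Wolfe curvature condition, which is omitted here for brevity.

```python
import numpy as np

def nonmonotone_line_search(f, grad_f, w, d, f_hist,
                            c1=1e-4, alpha=1.0, shrink=0.5, max_iter=30):
    """Backtracking search with a nonmonotone sufficient decrease condition.

    The candidate point is accepted when
        f(w + alpha * d) <= max(f_hist) + c1 * alpha * g^T d,
    where f_hist holds the function values of the last few iterates, so the
    reference value is a sliding-window maximum instead of the current value.
    """
    slope = grad_f(w) @ d          # directional derivative; negative for a descent d
    f_ref = max(f_hist)            # nonmonotone reference value
    for _ in range(max_iter):
        if f(w + alpha * d) <= f_ref + c1 * alpha * slope:
            return alpha
        alpha *= shrink            # backtrack
    return alpha

# usage on a simple quadratic with a steepest-descent direction
f = lambda w: 0.5 * (w @ w)
grad = lambda w: w
w = np.array([2.0, -1.0])
d = -grad(w)
step = nonmonotone_line_search(f, grad, w, d, f_hist=[f(w)])
```

In a conjugate gradient training algorithm, `d` would be the conjugate gradient direction and `f_hist` the error values of the most recent epochs.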
APA, Harvard, Vancouver, ISO, and other styles