Dissertations / Theses on the topic 'Gradient search'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 31 dissertations / theses for your research on the topic 'Gradient search.'
Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Kevorkiants, Rouslan. "Linear scaling conjugate gradient density matrix search: implementation, validation, and application with semiempirical molecular orbital methods." [S.l. : s.n.], 2003. http://deposit.ddb.de/cgi-bin/dokserv?idn=968547028.
Resmer, Frank. "A gradient and RF system for open access low field MRI." Thesis, University of Aberdeen, 2004. http://digitool.abdn.ac.uk/R?func=search-advanced-go&find_code1=WSN&request1=AAIU208240.
Bedida, Kirthi. "An Approach to Inverse Modeling through the Integration of Artificial Neural Networks and Genetic Algorithms." UKnowledge, 2007. http://uknowledge.uky.edu/gradschool_theses/493.
Al-Mudhaf, Ali F. "A feed forward neural network approach for matrix computations." Thesis, Brunel University, 2001. http://bura.brunel.ac.uk/handle/2438/5010.
Jacmenovic, Dennis, and dennis_jacman@yahoo com au. "Optimisation of Active Microstrip Patch Antennas." RMIT University. Electrical and Computer Engineering, 2004. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20060307.144507.
Fischer, Manfred M. "Learning in neural spatial interaction models: A statistical perspective." Springer, 2002. http://epub.wu.ac.at/5503/1/neural.pdf.
Clausner, André. "Anwendung von Line-Search-Strategien zur Formoptimierung und Parameteridentifikation." Master's thesis, Universitätsbibliothek Chemnitz, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-114858.
O'Neal, Jerome W. "The use of preconditioned iterative linear solvers in interior-point methods and related topics." Diss., Available online, Georgia Institute of Technology, 2005, 2005. http://etd.gatech.edu/theses/available/etd-06242005-162854/.
Full textParker, R. Gary, Committee Member ; Shapiro, Alexander, Committee Member ; Nemirovski, Arkadi, Committee Member ; Green, William, Committee Member ; Monteiro, Renato, Committee Chair.
Pap, Zoltan. "Projektivni postupci tipa konjugovanih gradijenata za rešavanje nelinearnih monotonih sistema velikih dimenzija" [Projection-type conjugate gradient methods for solving large-scale nonlinear monotone systems]. PhD thesis, Univerzitet u Novom Sadu, Prirodno-matematički fakultet u Novom Sadu, 2019. https://www.cris.uns.ac.rs/record.jsf?recordId=110614&source=NDLTD&language=en.
Projection-based CG methods for solving large-scale nonlinear monotone systems are considered in this thesis. These methods combine the hyperplane projection technique with conjugate gradient (CG) search directions. The hyperplane projection method is suitable for monotone systems because it enables simple globalization, while CG directions are efficient for large-scale nonlinear systems due to their low memory requirements. Projection-based CG methods are function-value based; they use neither a merit function nor derivatives, and are therefore also suitable for solving nonsmooth monotone systems. The global convergence of these methods is ensured without additional regularity assumptions, so they can also be used for solving singular systems. Three new three-term search directions of Fletcher-Reeves type and two new hybrid search directions of Hu-Storey type are defined. A PCG algorithm with five new CG-type directions is proposed and its global convergence is established. The numerical performance of the methods is tested on relevant examples from the literature. The results indicate that the new projection-based CG methods have good computational performance: they are efficient, robust and competitive with other methods.
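The projection step the abstract describes can be sketched in a few lines. This is a generic hyperplane-projection iteration in the style of Solodov and Svaiter, with a plain steepest-descent-like direction d = -F(x) standing in for the thesis's CG-type directions; the function name and all constants are illustrative assumptions, not the thesis's algorithm.

```python
import numpy as np

def solve_monotone(F, x0, tol=1e-8, max_iter=2000, beta=0.5, sigma=1e-4):
    """Hyperplane projection method for a monotone system F(x) = 0.

    Uses the derivative-free direction d = -F(x); the thesis replaces
    this with CG-type directions."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        d = -Fx
        # Backtracking: find z = x + t*d with -F(z)^T d >= sigma*t*||d||^2
        t = 1.0
        while True:
            z = x + t * d
            Fz = F(z)
            if -Fz @ d >= sigma * t * (d @ d) or t < 1e-12:
                break
            t *= beta
        if np.linalg.norm(Fz) < tol:
            return z
        # Project x onto the separating hyperplane {y : F(z)^T (y - z) = 0}
        x = x - (Fz @ (x - z)) / (Fz @ Fz) * Fz
    return x
```

For a monotone linear system F(x) = Ax - b with positive-definite (nonsymmetric) A, the iterates are Fejér-monotone toward the solution, which is what makes the simple globalization work.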
Beddiaf, Salah. "Continuous steepest descent path for traversing non-convex regions." Thesis, University of Hertfordshire, 2016. http://hdl.handle.net/2299/17175.
Đorđević, Snežana. "Izbor parametara kod gradijentnih metoda za probleme optimizacije bez ograničenja" [Choice of parameters in gradient methods for unconstrained optimization problems]. PhD thesis, Univerzitet u Novom Sadu, Prirodno-matematički fakultet u Novom Sadu, 2015. https://www.cris.uns.ac.rs/record.jsf?recordId=94106&source=NDLTD&language=en.
The problem under consideration is an unconstrained optimization problem. Many different methods have been devised to solve optimization problems; the investigation made here is motivated by the need for fast-converging methods. The main goal is the systematization of some known results, together with a theoretical and numerical analysis of the possibilities of introducing parameters within gradient methods. First, the minimization problem is considered where the objective function is a convex multivariable function. The problem is solved without calculating the Hessian, which is important, for example, when large-dimensional systems are solved, and also for optimization problems where the values of the objective function and its gradient are unknown; this part of the investigation is partly motivated by the existence of problems where the objective function is the result of simulations. Numerical results, presented in Chapter 6, show that the introduction of a parameter is useful, i.e., it accelerates the known optimization method. Further, a new hybrid conjugate gradient method is presented, in which the conjugate gradient parameter is a convex combination of two known conjugate gradient parameters. The first chapter gives the motivation and the basic concepts needed in the other chapters. The second chapter surveys some first-order and second-order gradient methods. The fourth chapter surveys basic concepts and results concerning conjugate gradient methods. The first, second and fourth chapters review known results; the original results are presented in Chapters 3, 5 and 6. In the third chapter, a modification of an unconstrained optimization method is presented in which a randomly chosen multiplicative parameter is used.
The linear convergence of this modification is also proved. The fifth chapter contains the original results concerning conjugate gradient methods: a new hybrid conjugate gradient method, which is a convex combination of two known conjugate gradient methods. The sixth chapter presents numerical results for the methods of Chapters 3 and 5, obtained on a set of test functions. All considered algorithms are implemented in Mathematica, with CPU time as the comparison criterion.
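The hybrid idea above, a CG parameter formed as a convex combination of two known parameters, can be sketched as follows. The abstract does not say which two parameters are combined, so Fletcher-Reeves and nonnegative Polak-Ribiere are used here as stand-ins, with a simple Armijo backtracking line search; everything in this sketch is illustrative.

```python
import numpy as np

def hybrid_cg(f, grad, x0, theta=0.5, tol=1e-8, max_iter=1000):
    """Nonlinear CG with the hybrid parameter
    beta = theta*beta_PR+ + (1-theta)*beta_FR."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Armijo backtracking line search along the descent direction d
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + 1e-4 * t * (g @ d) and t > 1e-12:
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta_fr = (g_new @ g_new) / (g @ g)
        beta_pr = max(0.0, (g_new @ (g_new - g)) / (g @ g))  # PR+
        beta = theta * beta_pr + (1 - theta) * beta_fr       # convex combination
        d = -g_new + beta * d
        if g_new @ d >= 0:          # safeguard: restart if not a descent direction
            d = -g_new
        x, g = x_new, g_new
    return x
```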
Lienemann, Matthew A. "Automated Multi-Modal Search and Rescue using Boosted Histogram of Oriented Gradients." DigitalCommons@CalPoly, 2015. https://digitalcommons.calpoly.edu/theses/1507.
Liut, Daniel Armando. "Neural-Network and Fuzzy-Logic Learning and Control of Linear and Nonlinear Dynamic Systems." Diss., Virginia Tech, 1999. http://hdl.handle.net/10919/29163.
Ph.D.
Strachan, N. J. C. "Automatic fish species grading using image processing and pattern recognition techniques." Thesis, University of Aberdeen, 1990. http://digitool.abdn.ac.uk/R?func=search-advanced-go&find_code1=WSN&request1=AAIU546342.
Stynsberg, John. "Incorporating Scene Depth in Discriminative Correlation Filters for Visual Tracking." Thesis, Linköpings universitet, Datorseende, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-153110.
University of Arizona Faculty Senate. "Faculty Senate Minutes March 6, 2017." University of Arizona Faculty Senate (Tucson, AZ), 2017. http://hdl.handle.net/10150/623059.
Full textKuo, Wen-Wei, and 郭文偉. "Quasi Gradient Search Evolution Strategy Method." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/92080086894239835355.
Tunghai University (東海大學)
Department of Industrial Engineering and Enterprise Information (工業工程與經營資訊學系)
Academic year 93 (2004/05)
Evolution strategies use the computer to simulate evolution, searching for the best solution of a complicated problem through evolutionary computation. At present, evolution strategies are applied to search, optimization, machine learning, scheduling, manufacturing, medicine, and other fields. Efficiently searching for the global optimum of an objective function is a difficult part of solving optimization problems. In general, the efficiency of evolution strategies depends on the size of the mutation strength. A small mutation strength is relatively efficient for searching details, but evolution becomes inefficient when the parent lands on a smooth region of the objective function, and a small mutation strength also makes it hard for parents to escape local solutions when the objective function has multiple local optima. A large mutation strength can escape local solutions and is relatively efficient for global search, but it has difficulty closing in on the optimum when parents approach its region and must search the details. This paper proposes the Quasi Gradient Search Evolution Strategy Method (QGSES). Its main idea is that parents should take the environmental information around them into account before searching for the optimum: the gradient vector and the historical moving vector of the parents determine the best direction and distance for the offspring to approach the region of the optimum. With QGSES we aim to accelerate the whole evolutionary process and improve the search efficiency of evolution strategies.
To achieve fast convergence, the whole population efficiently gathers information around the optimum and quickly surrounds its neighborhood. Experimental results show that the proposed method has good convergence efficiency on multi-variable functions and excellent global search capability on functions with multiple local optima, and that it converges steadily to the optimum. It is a good search algorithm.
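A toy version of the gradient-biased mutation idea can be sketched as follows, assuming an elitist best-of-lambda selection and forward-difference quasi-gradients; the function name `qgses` and every parameter value are illustrative assumptions, not the thesis's exact algorithm.

```python
import numpy as np

def qgses(f, x0, sigma=0.3, lam=10, max_gen=200, seed=0):
    """Toy evolution strategy whose mutation is biased along a
    finite-difference quasi-gradient and the parent's last accepted
    move, loosely following the QGSES idea described above."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    n = x.size
    history = np.zeros(n)
    eps = 1e-6
    for _ in range(max_gen):
        # Quasi-gradient via forward differences
        g = np.array([(f(x + eps * e) - f(x)) / eps for e in np.eye(n)])
        # Bias offspring downhill, blended with the historical moving vector
        bias = -0.5 * g / (np.linalg.norm(g) + 1e-12) + 0.5 * history
        kids = x + sigma * (bias + rng.standard_normal((lam, n)))
        best = kids[np.argmin([f(k) for k in kids])]
        if f(best) < f(x):                     # elitist acceptance
            history = (best - x) / (np.linalg.norm(best - x) + 1e-12)
            x = best
            sigma *= 1.05                      # grow step on success
        else:
            sigma *= 0.7                       # shrink step on failure
    return x
```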
Wen, Shing Chau, and 文星超. "Split-Path Gradient Search Adaptive FIR Filter." Thesis, 1993. http://ndltd.ncl.edu.tw/handle/65274814698915494171.
Full textKu, Chi-Chuan, and 古季詮. "A Study of Orthogonal Quasi-Gradient Search Algorithm." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/92448253674482719356.
Tunghai University (東海大學)
Department of Industrial Engineering and Enterprise Information (工業工程與經營資訊學系)
Academic year 95 (2006/07)
This paper proposes the Orthogonal Quasi Gradient Search Evolution Strategy Method (OQGS). Instead of generating the initial solution randomly, as traditional methods do, OQGS applies orthogonal experimental design (OED) and orthogonal crossover for global search to build a better mechanism for the initial solution. The main idea of the evolution is that parents should take the environmental information around them into account before searching for the optimum: the gradient vector and the historical moving vector of the parents determine the best direction and distance for the offspring to approach the region of the optimum. In this way, the whole population can efficiently gather information around the optimum and quickly surround its neighborhood. To achieve fast convergence, this paper uses orthogonal pitch adjustment for local search to approximate the nearby local optimum. With OQGS we aim to accelerate the whole evolutionary process and improve the search efficiency of evolution strategies. We executed the proposed algorithm on eight test functions, both unimodal and multi-modal. The experiments include fixed parameters, changed parameters, randomly changed variable bounds, and a changed number of variables. Compared with the Orthogonal Genetic Algorithm (OGA) and the Orthogonal Simulated Annealing Algorithm (OSA), OQGS solves the eight test functions faster and finds optimal or close-to-optimal solutions more stably. All performance tables show that OQGS delivers outstanding results, proving that it has the best overall performance.
Lin, Yu-Chieh, and 林于傑. "Gradient Search Method for Frequency Estimation and Accuracy Analysis." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/49543818822762636242.
National Chiao Tung University (國立交通大學)
Department of Mechanical Engineering (機械工程學系)
Academic year 99 (2010/11)
Frequency estimation for single-tone signals is an important technique in engineering. The DFT (or FFT) is a traditional method for measuring a signal's frequency approximately. However, the estimated frequency is usually imprecise because of spectrum leakage and noise, and is therefore often insufficient for engineering applications. To improve the precision of the estimated frequency and the robustness against noise, many frequency estimators have been developed; the gradient search method (GSM) is one of them. In this thesis, the principle of GSM is discussed first. It is then necessary to analyze the error bound of GSM: the error-bound equation is derived and compared with the Cramér-Rao bound. In the final chapter, GSM is compared with other estimators by simulation, and this comparison clearly shows the advantages of GSM.
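A generic gradient-search frequency estimator of this kind, a coarse DFT peak followed by gradient ascent on the periodogram, can be sketched as follows; this is an illustration of the idea for a complex single tone, not the thesis's exact formulation.

```python
import numpy as np

def gsm_frequency(x, iters=100):
    """Coarse FFT peak, then gradient ascent on the periodogram |S(f)|^2
    with an analytic derivative and simple backtracking."""
    N = len(x)
    n = np.arange(N)

    def S(f):                       # DTFT at frequency f (cycles/sample)
        return np.sum(x * np.exp(-2j * np.pi * f * n))

    def P(f):                       # periodogram value
        return np.abs(S(f)) ** 2

    def dP(f):                      # analytic derivative of |S(f)|^2
        s = S(f)
        ds = np.sum(x * (-2j * np.pi * n) * np.exp(-2j * np.pi * f * n))
        return 2 * np.real(np.conj(s) * ds)

    f = np.argmax(np.abs(np.fft.fft(x))) / N    # coarse DFT estimate
    t = 1.0 / N ** 3                            # conservative base step
    for _ in range(iters):
        step = t * dP(f)
        while P(f + step) < P(f) and abs(step) > 1e-15:
            step *= 0.5                         # backtrack until improvement
        f += step
    return f
```

The coarse estimate is accurate only to half a DFT bin (the spectrum-leakage problem mentioned above); the gradient ascent then refines it far below bin resolution.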
Zheng, Yu-Xian, and 鄭宇翔. "Detection for the MIMO System Based on Gradient Search." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/6wx54b.
National Cheng Kung University (國立成功大學)
Institute of Computer and Communication Engineering (電腦與通信工程研究所)
Academic year 105 (2016/17)
Multiple-input multiple-output (MIMO) technology is a model describing multiple-antenna wireless communication systems. MIMO can significantly increase throughput and transmission distance without increasing the bandwidth or the total transmitted power, and with an appropriate detection algorithm it can effectively increase spectrum efficiency. Usually, however, good performance comes with high computational complexity, as with sphere decoding (SD): SD attains the maximum likelihood (ML) solution, but its computational complexity increases exponentially at low and medium SNRs. In this thesis, we study a detection algorithm based on the differential metric, whose search process needs only additions after some pre-processing. Based on the differential metric, we study a gradient algorithm with a stop condition, the so-called modified gradient algorithm, to reach near-ML performance. From the properties of the differential metric we further derive indicative functions, which help determine the ML bits in the updated initial sequence before the higher-order gradient search, thereby reducing the computational complexity.
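For intuition, a greatly simplified relative of such a search is a greedy bit-flip descent on the ML metric ||y - Hs||^2 with a stop condition when no single flip improves it. This sketch assumes BPSK symbols and is an illustrative stand-in, not the thesis's differential-metric algorithm.

```python
import numpy as np

def bitflip_detect(H, y, s0):
    """Greedy single-bit-flip descent on ||y - H s||^2 for BPSK
    symbols (+1/-1), stopping when no flip lowers the metric."""
    s = s0.copy()
    while True:
        r = y - H @ s
        base = r @ r
        # Flipping s[i] changes the residual to r + 2*s[i]*H[:, i]
        gains = []
        for i in range(len(s)):
            d = r + 2 * s[i] * H[:, i]
            gains.append(d @ d - base)
        i = int(np.argmin(gains))
        if gains[i] >= 0:           # stop condition: no improving flip
            return s
        s[i] = -s[i]
```

Like the modified gradient algorithm above, the stop condition guarantees termination, since the metric strictly decreases over a finite symbol set; the trade-off is possible convergence to a local minimum rather than the exact ML solution.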
Millam, John Mark. "Conjugate gradient density matrix search: A linear scaling alternative to diagonalization." Thesis, 1997. http://hdl.handle.net/1911/19185.
Full textYu-PangChung and 鍾裕邦. "Detection for the MIMO System Based on Differential Metrics and Gradient Search." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/md64bh.
National Cheng Kung University (國立成功大學)
Institute of Computer and Communication Engineering (電腦與通信工程研究所)
Academic year 104 (2015/16)
In recent years, with the increasing demand for transferring large amounts of data rapidly and reliably, the multiple-input multiple-output (MIMO) system has become more attractive in modern wireless communications. The MIMO system provides efficient use of spectrum and achieves high throughput; therefore, finding a fast and efficient detector for high-rate MIMO transmission is an important issue in wireless communications. The sphere decoding (SD) algorithm is an efficient approach to optimal maximum-likelihood (ML) detection, but it has very high complexity, especially at low signal-to-noise ratio (SNR). In this thesis, we first study low-complexity detection algorithms for the MIMO system based on differential metrics. We introduce the differential metrics of different orders and their recursive relations, and then apply indicative functions, which can determine some ML bits of the initial sequence. By parallel search with different initial sequences, we avoid the complexity of high-order search. These approaches can be used to efficiently reduce the complexity of MIMO detection. In the proposed algorithm, we apply a stop condition and indicative functions in the detection of the MIMO system. The simulation results validate our study and algorithm.
Chu, Hao-Cheng, and 朱浩誠. "Detection of the MIMO System Based on the Gradient Search with Different Initial Sequences." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/vydzse.
Full textTsai, Zong-You, and 蔡宗祐. "Research on the Correction of Misaligned and Lens Aberration CCD Images by Gradient Descent Search Method." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/djr7g7.
Lunghwa University of Science and Technology (龍華科技大學)
Master's Program, Department of Electronic Engineering (電子工程系碩士班)
Academic year 107 (2018/19)
In this research, the gradient descent search method is used to correct CCD (charge-coupled device) image distortion so that the corrected image can be used to determine the precise positions of objects in the image. Two major factors lead to image distortion. One is the non-parallelism and misalignment between the image plane and the object plane caused by small rotations or displacements of the camera; the other is the nonlinear distortion caused by lens aberrations. These distortions can be represented by simple geometric models and parameters. By comparing a standard object with its real, distorted image, one can extract the geometric parameters with a gradient descent search algorithm, and with these parameters correct the distorted image for further processing. The results are intended for use on desktop or laptop computers; we therefore selected Microsoft Visual Studio as the development platform, C# as the programming language, and WPF (Windows Presentation Foundation) as the GUI (graphical user interface) framework to make flexible use of the graphics card's hardware resources.
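The parameter-extraction step can be illustrated on a one-parameter model. The sketch below fits the first radial-distortion coefficient k in the common model r_d = r(1 + k r^2) by gradient descent on the squared error between predicted and observed radii; the model choice, learning rate, and iteration count are illustrative assumptions, and the thesis's C# implementation estimates more geometric parameters than this.

```python
import numpy as np

def fit_distortion(r_ideal, r_observed, lr=1e-2, iters=2000):
    """Fit k in r_d = r * (1 + k * r^2) by gradient descent on the
    squared error sum((pred - observed)^2)."""
    k = 0.0
    for _ in range(iters):
        pred = r_ideal * (1 + k * r_ideal ** 2)
        resid = pred - r_observed
        grad = 2 * np.sum(resid * r_ideal ** 3)   # d(loss)/dk
        k -= lr * grad
    return k
```

Because the loss is quadratic in k, plain gradient descent converges linearly here; the multi-parameter case in the thesis (rotation, displacement, and aberration terms) follows the same pattern with a gradient vector instead of a scalar.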
Kevorkiants, Rouslan [Verfasser]. "Linear scaling conjugate gradient density matrix search: implementation, validation, and application with semiempirical molecular orbital methods / vorgelegt von Rouslan Kevorkiants." 2003. http://d-nb.info/968547028/34.
Full textMerk, Angelica Maria. "Optimization of carrots and celery, acidified and thermally processed in glass jars, using response surface methodology and gradient search (senspro)." 1986. http://catalog.hathitrust.org/api/volumes/oclc/15525040.html.
Typescript. Description based on print version record. Includes bibliographical references (leaves 91-100).
"Performance comparison of conjugate gradient density matrix search and Chebyshev expansion methods for avoiding diagonalization in large-scale electronic structure calculations." Thesis, 1998. http://hdl.handle.net/1911/20474.
Full textClausner, André. "Anwendung von Line-Search-Strategien zur Formoptimierung und Parameteridentifikation." Master's thesis, 2007. https://monarch.qucosa.de/id/qucosa%3A19910.
Full textMolisse, Giulia. "Above ground biomass and carbon sequestration estimation -Implementation of a sentinel-2 based exploratory workflow." Master's thesis, 2021. http://hdl.handle.net/10362/113902.
This work presents a Sentinel-2 based exploratory workflow for the estimation of Above Ground Biomass (AGB) and Carbon Sequestration (CS) in a subtropical forest. In recent decades, remote sensing-based studies of AGB have been widely investigated alongside a variety of sensors, features and Machine Learning (ML) algorithms. Up-to-date and reliable mapping of such measures is increasingly required by international commitments under the climate convention as well as by sustainable forest management practices. The proposed approach consists of 5 major steps: 1) generation of several Vegetation Indices (VI), biophysical parameters and texture measures; 2) feature selection with Mean Decrease in Impurity (MDI), Mean Decrease in Accuracy (MDA), L1 Regularization (LASSO), and Principal Component Analysis (PCA); 3) feature selection testing with k-Nearest Neighbour (kNN), Random Forest (RF), Extreme Gradient Boosting (XGB), and Artificial Neural Network (ANN); 4) hyper-parameter fine-tuning with Grid Search, Random Search and Bayesian Optimization; and finally, 5) model explanation with the SHapley Additive exPlanations (SHAP) package, which to this day has not been investigated in the context of AGB mapping. The following results were obtained: 1) MDI was chosen as the best performing feature selection method by the XGB and the Deep Neural Network (DNN), MDA was chosen by the RF and the kNN, while LASSO was chosen by the Shallow Neural Network (SNN) and the Linear Neural Network (LNN); 2) before hyper-parameter optimization, the Deep Neural Network (DNN) yielded the best performance with a Root Mean Squared Error (RMSE) of 42.30 t/ha; 3) after hyper-parameter fine-tuning with Bayesian Optimization, the XGB model yielded the best performance with a RMSE of 37.79 t/ha; 4) model explanation with SHAP allowed for a deeper understanding of the features' impact on the model predictions.
Finally, the predicted AGB throughout the study area showed an average value of 83 t/ha, ranging from 0 t/ha to 346.56 t/ha. The related CS was estimated by using a conversion factor of 0.47.
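The feature-selection step can be illustrated with a permutation-importance (MDA-style) ranking. The thesis uses tree ensembles and regression models from the ML literature; the sketch below substitutes a plain least-squares model so it stays self-contained, and the function name and data are illustrative.

```python
import numpy as np

def permutation_importance(X, y, seed=0):
    """MDA-style feature ranking: fit a least-squares model, then
    measure how much the error grows when each feature column is
    shuffled (breaking its link to the target)."""
    rng = np.random.default_rng(seed)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    base = np.mean((X @ w - y) ** 2)
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        rng.shuffle(Xp[:, j])       # permute one feature in the copy
        scores.append(np.mean((Xp @ w - y) ** 2) - base)
    return np.array(scores)         # larger score = more important feature
```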
Κωστόπουλος, Αριστοτέλης. "Νέοι αλγόριθμοι εκπαίδευσης τεχνητών νευρωνικών δικτύων και εφαρμογές" [New training algorithms for artificial neural networks and applications]. Thesis, 2012. http://hdl.handle.net/10889/5462.
In this dissertation, the problem of training feedforward artificial neural networks and its applications are considered. The presentation of the topics and results is organized as follows. In the first chapter, artificial neural networks are introduced: the benefits of using them are presented, followed by their structure and functionality. More specifically, the derivation of artificial neurons from biological ones is presented, followed by the architecture of feedforward neural networks. Historical notes and the use of neural networks in real-world problems conclude the first chapter. In Chapter 2, existing training algorithms for feedforward neural networks are considered. First, a summary of the training problem and its mathematical formulation, which corresponds to the unconstrained minimization of a cost function, is given. Then, training algorithms based on the steepest descent, Newton, variable metric and conjugate gradient methods are presented. Furthermore, the weight space, the error surface and techniques for initializing the weights are described, and their influence on the training procedure is discussed. In Chapter 3, a new training algorithm for feedforward neural networks based on the backpropagation algorithm and the automatic two-point step size (learning rate) is presented. The algorithm uses the steepest descent search direction, while the learning rate parameter is calculated by minimizing the standard secant equation. Furthermore, a new learning rate parameter is derived by minimizing the modified secant equation introduced by Zhang, which uses both gradient and function value information. A switching mechanism is then incorporated into the algorithm so that the appropriate stepsize is chosen according to the status of the current iterative point.
Finally, the global convergence of the proposed algorithm is studied and the results of some numerical experiments are presented. In Chapter 4, some efficient training algorithms based on conjugate gradient optimization methods are presented. In addition to the existing conjugate gradient training algorithms, we introduce Perry's conjugate gradient method as a training algorithm. Furthermore, a new class of conjugate gradient methods is proposed, called self-scaled conjugate gradient methods, which are derived from the principles of the Hestenes-Stiefel, Fletcher-Reeves, Polak-Ribiere and Perry methods. This class is based on the spectral scaling parameter. We also incorporate into the conjugate gradient training algorithms an efficient line search technique based on the Wolfe conditions and on safeguarded cubic interpolation, with the initial learning rate parameter fed to the line search automatically adapted at each iteration by a closed formula. Finally, an efficient restarting procedure is employed to further improve the effectiveness of the conjugate gradient training algorithms, and their global convergence is proved. Experimental results show that, in general, the new class of methods performs better with a much lower computational cost and a better success rate. In the last chapter of this dissertation, Perry's self-scaled conjugate gradient training algorithm presented in the previous chapter is isolated and modified. More specifically, the main characteristics of the training algorithm are maintained, but a different line search strategy based on the nonmonotone Wolfe conditions is utilized. Furthermore, a new initial learning rate parameter is introduced for use in conjunction with the self-scaled conjugate gradient training algorithm, which seems to be more effective than the initial learning rate parameter proposed by Shanno when used with the nonmonotone line search technique.
Subsequently, experimental results for different training problems are presented. Finally, a feedforward neural network was trained with the proposed algorithm for the problem of grading brain astrocytomas, and the results were compared with those achieved by a probabilistic neural network. The dissertation concludes with Appendix A, where the training problems used for the evaluation of the proposed training algorithms are presented.
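The two-point step size mentioned in the Chapter 3 summary, obtained by minimizing the standard secant equation, is the classical Barzilai-Borwein step alpha = s^T s / s^T y. It can be sketched in a generic gradient-descent loop; this is an illustration of the step-size rule, not the dissertation's full training algorithm with its switching mechanism.

```python
import numpy as np

def bb_gradient_descent(grad, x0, alpha0=1e-3, max_iter=500, tol=1e-8):
    """Gradient descent with the two-point (Barzilai-Borwein) step
    alpha_k = s^T s / s^T y, where s = x_k - x_{k-1} and
    y = g_k - g_{k-1} come from the standard secant equation."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    alpha = alpha0
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - alpha * g
        g_new = grad(x_new)
        s, yv = x_new - x, g_new - g
        if s @ yv > 1e-12:                   # keep the step positive
            alpha = (s @ s) / (s @ yv)       # BB1 step size
        x, g = x_new, g_new
    return x
```

The appeal for network training is the same as noted in the abstract: the step adapts to local curvature using only gradient differences, with no Hessian and no line search per iteration (though convergence is nonmonotone).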