Contents
A selection of scholarly literature on the topic "Neurodynamic optimization"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the lists of relevant articles, books, theses, reports, and other scholarly sources on the topic "Neurodynamic optimization."
Next to every work in the list of references there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the publication as a PDF and read its online abstract whenever the relevant parameters are available in the metadata.
Journal articles on the topic "Neurodynamic optimization"
Ji, Zheng, Xu Cai, and Xuyang Lou. "A Quantum-Behaved Neurodynamic Approach for Nonconvex Optimization with Constraints." Algorithms 12, no. 7 (July 5, 2019): 138. http://dx.doi.org/10.3390/a12070138.
Le, Xinyi, Sijie Chen, Fei Li, Zheng Yan, and Juntong Xi. "Distributed Neurodynamic Optimization for Energy Internet Management." IEEE Transactions on Systems, Man, and Cybernetics: Systems 49, no. 8 (August 2019): 1624–33. http://dx.doi.org/10.1109/tsmc.2019.2898551.
Li, Guocheng, and Zheng Yan. "Reconstruction of sparse signals via neurodynamic optimization." International Journal of Machine Learning and Cybernetics 10, no. 1 (May 18, 2017): 15–26. http://dx.doi.org/10.1007/s13042-017-0694-4.
Leung, Man-Fai, and Jun Wang. "A Collaborative Neurodynamic Approach to Multiobjective Optimization." IEEE Transactions on Neural Networks and Learning Systems 29, no. 11 (November 2018): 5738–48. http://dx.doi.org/10.1109/tnnls.2018.2806481.
Ma, Litao, Jiqiang Chen, Sitian Qin, Lina Zhang, and Feng Zhang. "An Efficient Neurodynamic Approach to Fuzzy Chance-constrained Programming." International Journal on Artificial Intelligence Tools 30, no. 01 (January 29, 2021): 2140001. http://dx.doi.org/10.1142/s0218213021400017.
Yan, Zheng, Jun Wang, and Guocheng Li. "A collective neurodynamic optimization approach to bound-constrained nonconvex optimization." Neural Networks 55 (July 2014): 20–29. http://dx.doi.org/10.1016/j.neunet.2014.03.006.
Wang, Tong, Hao Cui, Zhongyi Zhang, and Jian Wei. "A Neurodynamic Approach for SWIPT Power Splitting Optimization." Journal of Physics: Conference Series 2517, no. 1 (June 1, 2023): 012010. http://dx.doi.org/10.1088/1742-6596/2517/1/012010.
Liu, Bao, Xuehui Mei, Haijun Jiang, and Lijun Wu. "A Nonpenalty Neurodynamic Model for Complex-Variable Optimization." Discrete Dynamics in Nature and Society 2021 (February 16, 2021): 1–10. http://dx.doi.org/10.1155/2021/6632257.
Zhao, You, Xiaofeng Liao, and Xing He. "Novel projection neurodynamic approaches for constrained convex optimization." Neural Networks 150 (June 2022): 336–49. http://dx.doi.org/10.1016/j.neunet.2022.03.011.
Yan, Zheng, Jianchao Fan, and Jun Wang. "A Collective Neurodynamic Approach to Constrained Global Optimization." IEEE Transactions on Neural Networks and Learning Systems 28, no. 5 (May 2017): 1206–15. http://dx.doi.org/10.1109/tnnls.2016.2524619.
Dissertations on the topic "Neurodynamic optimization"
Tassouli, Siham. "Neurodynamic chance-constrained geometric optimization." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG062.
In many real-world scenarios, decision-makers face uncertainties that can affect the outcomes of their decisions. These uncertainties arise from various sources, such as variability in demand, fluctuating market conditions, or incomplete information about system parameters. Traditional deterministic optimization approaches assume that all parameters are known with certainty, which may not accurately reflect the reality of the problem. Chance-constrained optimization provides a more realistic and robust approach by explicitly accounting for the uncertainty in decision-making. Geometric programming is often misunderstood as a technique exclusively designed for posynomial problems. However, it is a versatile mathematical theory with significant value in addressing a broad range of separable problems. In fact, its true strength lies in its ability to effectively tackle seemingly inseparable problems by leveraging their linear algebraic structure. This general applicability of geometric programming makes it a valuable tool for studying and solving various optimization problems, extending its practical usefulness beyond its initial perception. Recurrent neural networks (RNNs) offer a biologically inspired computational framework with great optimization potential. By emulating the interconnected structure of neurons in the brain, RNNs excel in modeling complex and dynamic systems. This capability allows them to capture temporal dependencies and feedback loops, making them well-suited for optimization scenarios that involve sequential decision-making or iterative processes. Moreover, one of the key advantages of neurodynamic approaches is their hardware implementation feasibility. The primary objective of this thesis is to develop neurodynamic algorithms that are efficient and effective in solving chance-constrained geometric optimization problems.
The thesis begins by focusing on chance-constrained geometric programs involving independent random variables. In addition, a specific type of geometric programs known as rectangular programs is examined in detail, with the objective of understanding the characteristics and complexities of this subclass. Subsequently, the thesis applies copula theory to chance-constrained geometric programs with dependent random variables. Copula theory provides a mathematical framework for modeling and analyzing the dependence structure between random variables, thereby enhancing the understanding and optimization of these problems. Lastly, the thesis investigates distributionally robust geometric optimization, which considers uncertain distributions of random variables. This approach focuses on developing optimization algorithms that are robust against uncertainty in the underlying probability distributions, ensuring more reliable and stable solutions.
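To make the neurodynamic (recurrent-network) idea above concrete, the following minimal sketch integrates the classical projection neural network dx/dt = -x + P(x - a * grad f(x)) for a toy box-constrained convex program. The problem data and step sizes are invented for illustration; this is not the thesis's chance-constrained algorithm.

```python
import numpy as np

# Toy problem (assumed for illustration): minimize f(x) = ||x - c||^2
# over the box Omega = [0, 1]^2, with unconstrained minimizer c outside the box.
c = np.array([2.0, -1.0])

def grad_f(x):
    # Gradient of f(x) = ||x - c||^2
    return 2.0 * (x - c)

def project(y):
    # Projection onto the box [0, 1]^2
    return np.clip(y, 0.0, 1.0)

def neurodynamic_solve(x0, alpha=0.5, dt=0.05, steps=2000):
    """Euler integration of the projection dynamics
    dx/dt = -x + P(x - alpha * grad f(x)).
    For convex f, the equilibrium is the constrained minimizer."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x += dt * (-x + project(x - alpha * grad_f(x)))
    return x

x_star = neurodynamic_solve([0.5, 0.5])
print(x_star)  # converges to the constrained minimizer (1, 0)
```

The state converges to (1, 0), the projection of c onto the box; in a hardware realization the Euler loop is replaced by an analog circuit whose settling state plays the same role.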
Wu, Dawen. „Solving Some Nonlinear Optimization Problems with Deep Learning“. Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG083.
This thesis considers four types of nonlinear optimization problems, namely bimatrix games, nonlinear projection equations (NPEs), nonsmooth convex optimization problems (NCOPs), and chance-constrained games (CCGs). These four classes of nonlinear optimization problems find extensive applications in domains such as engineering, computer science, economics, and finance. We aim to introduce deep learning-based algorithms that efficiently compute optimal solutions for these nonlinear optimization problems. For bimatrix games, we use Convolutional Neural Networks (CNNs) to compute Nash equilibria. Specifically, we design a CNN architecture where the input is a bimatrix game and the output is the predicted Nash equilibrium for the game. We generate a set of bimatrix games from a given probability distribution and use the Lemke-Howson algorithm to find their true Nash equilibria, thereby constructing a training dataset. The proposed CNN is trained on this dataset to improve its accuracy.
Upon completion of training, the CNN is capable of predicting Nash equilibria for unseen bimatrix games. Experimental results demonstrate the exceptional computational efficiency of our CNN-based approach, at the cost of some accuracy. NPEs, NCOPs, and CCGs are more complex optimization problems that cannot be fed directly into neural networks. Therefore, we resort to advanced tools, namely neurodynamic optimization and Physics-Informed Neural Networks (PINNs), to solve these problems. Specifically, we first use a neurodynamic approach to model a nonlinear optimization problem as a system of Ordinary Differential Equations (ODEs). Then, we utilize a PINN-based model to solve the resulting ODE system, where the end state of the model represents the predicted solution to the original optimization problem. The neural network is trained toward solving the ODE system, thereby solving the original optimization problem. A key contribution of the proposed method lies in transforming a nonlinear optimization problem into a neural network training problem. As a result, nonlinear optimization problems can be solved using only PyTorch, without relying on classical convex optimization solvers such as CVXPY, CPLEX, or Gurobi.
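The first stage of the pipeline above, casting an optimization problem as an ODE whose end state is the solution, can be sketched for a toy nonlinear projection equation. The problem data are invented for the example, and for brevity the ODE is integrated with plain Euler steps rather than the PINN the thesis uses.

```python
import numpy as np

# Assumed toy NPE for illustration: find x with x = P(x - F(x)), where
# P projects onto the nonnegative orthant and F(x) = M x + q is monotone
# (M symmetric positive definite). This is equivalent to the linear
# complementarity problem: x >= 0, F(x) >= 0, x . F(x) = 0.
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])
q = np.array([-1.0, -1.0])

def F(x):
    return M @ x + q

def project(y):
    # Projection onto the nonnegative orthant
    return np.maximum(y, 0.0)

def npe_end_state(x0, dt=0.05, steps=2000):
    """Neurodynamic model dx/dt = P(x - F(x)) - x; the end state of the
    trajectory approximates the solution of the projection equation.
    (The thesis trains a PINN on this ODE; Euler integration stands in here.)"""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x += dt * (project(x - F(x)) - x)
    return x

x_star = npe_end_state([0.0, 0.0])
# Here F(x*) = 0 is attainable with x* >= 0, so x* = M^{-1}(-q) = (1/3, 1/3).
```

In the PINN variant, a network u(t; theta) is trained so that its time derivative matches the right-hand side of the same ODE, and u at the final time plays the role of `x_star`.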
"A neurodynamic optimization approach to constrained pseudoconvex optimization." 2011. http://library.cuhk.edu.hk/record=b5894791.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2011.
Includes bibliographical references (p. 71-82).
Abstracts in English and Chinese.
Abstract
Acknowledgement
Chapter 1: Introduction
1.1 Constrained Pseudoconvex Optimization
1.2 Recurrent Neural Networks
1.3 Thesis Organization
Chapter 2: Literature Review
2.1 Pseudoconvex Optimization
2.2 Recurrent Neural Networks
Chapter 3: Model Description and Convergence Analysis
3.1 Model Descriptions
3.2 Global Convergence
Chapter 4: Numerical Examples
4.1 Gaussian Optimization
4.2 Quadratic Fractional Programming
4.3 Nonlinear Convex Programming
Chapter 5: Real-time Data Reconciliation
5.1 Introduction
5.2 Theoretical Analysis and Performance Measurement
5.3 Examples
Chapter 6: Real-time Portfolio Optimization
6.1 Introduction
6.2 Model Description
6.3 Theoretical Analysis
6.4 Illustrative Examples
Chapter 7: Conclusions and Future Works
7.1 Concluding Remarks
7.2 Future Works
Appendix A: Publication List
Bibliography
"Collective Neurodynamic Systems: Synchronization and Optimization." 2016. http://repository.lib.cuhk.edu.hk/en/item/cuhk-1292660.
Book chapters on the topic "Neurodynamic optimization"
Wang, Jun. "Neurodynamic Optimization and Its Applications in Robotics." In Advances in Robotics, 2. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-03983-6_2.
Qin, Sitian, Xinyi Le, and Jun Wang. "A Neurodynamic Optimization Approach to Bilevel Linear Programming." In Advances in Neural Networks – ISNN 2015, 418–25. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-25393-0_46.
Leung, Man-Fai, and Jun Wang. "A Collaborative Neurodynamic Optimization Approach to Bicriteria Portfolio Selection." In Advances in Neural Networks – ISNN 2019, 318–27. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-22796-8_34.
Fan, Jianchao, and Jun Wang. "A Collective Neurodynamic Optimization Approach to Nonnegative Tensor Decomposition." In Advances in Neural Networks – ISNN 2017, 207–13. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-59081-3_25.
Le, Xinyi, Sijie Chen, Yu Zheng, and Juntong Xi. "A Multiple-objective Neurodynamic Optimization to Electric Load Management Under Demand-Response Program." In Advances in Neural Networks – ISNN 2017, 169–77. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-59081-3_21.
Yan, Zheng, Jie Lu, and Guangquan Zhang. "Distributed Model Predictive Control of Linear Systems with Coupled Constraints Based on Collective Neurodynamic Optimization." In AI 2018: Advances in Artificial Intelligence, 318–28. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-03991-2_31.
Wang, Jiasen, Jun Wang, and Dongbin Zhao. "Dynamically Weighted Model Predictive Control of Affine Nonlinear Systems Based on Two-Timescale Neurodynamic Optimization." In Advances in Neural Networks – ISNN 2020, 96–105. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-64221-1_9.
Le, Xinyi, and Jun Wang. "A Neurodynamic Optimization Approach to Robust Pole Assignment for Synthesizing Linear Control Systems Based on a Convex Feasibility Problem Reformulation." In Neural Information Processing, 284–91. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-42054-2_36.
Chang, Hung-Jen, and Walter J. Freeman. "Parameter Optimization of Olfactory Neurodynamics." In The Neurobiology of Computation, 191–96. Boston, MA: Springer US, 1995. http://dx.doi.org/10.1007/978-1-4615-2235-5_31.
Fu, Chunjiang, Rubin Wang, and Jianting Cao. "Re-optimization Contributes to the Adaption of External VF Field." In Advances in Cognitive Neurodynamics (II), 473–77. Dordrecht: Springer Netherlands, 2010. http://dx.doi.org/10.1007/978-90-481-9695-1_75.
Der volle Inhalt der QuelleKonferenzberichte zum Thema "Neurodynamic optimization"
Li, Xinqi, Jun Wang, and Sam Kwong. "Alternative Mutation Operators in Collaborative Neurodynamic Optimization." In 2020 10th International Conference on Information Science and Technology (ICIST). IEEE, 2020. http://dx.doi.org/10.1109/icist49303.2020.9202136.
Wang, Yadi, Xiaoping Li, and Jun Wang. "A Neurodynamic Approach to L0-Constrained Optimization." In 2020 12th International Conference on Advanced Computational Intelligence (ICACI). IEEE, 2020. http://dx.doi.org/10.1109/icaci49185.2020.9177499.
Fang, Xiaomeng, Xinyi Le, and Fei Li. "Distributed Neurodynamic Optimization for Coordination of Redundant Robots." In 2019 9th International Conference on Information Science and Technology (ICIST). IEEE, 2019. http://dx.doi.org/10.1109/icist.2019.8836826.
Wang, Jun. "Neurodynamic optimization and its applications for winners-take-all." In 2009 2nd IEEE International Conference on Computer Science and Information Technology. IEEE, 2009. http://dx.doi.org/10.1109/iccsit.2009.5235008.
Fan, Jianchao, Ye Wang, Jianhua Zhao, Xiang Wang, and Xinxin Wang. "Blind source separation based on collective neurodynamic optimization approach." In 2017 36th Chinese Control Conference (CCC). IEEE, 2017. http://dx.doi.org/10.23919/chicc.2017.8027656.
Wang, Jun. "Neurodynamic optimization with its application for model predictive control." In 2009 3rd International Workshop on Soft Computing Applications (SOFA). IEEE, 2009. http://dx.doi.org/10.1109/sofa.2009.5254883.
Leung, Man-Fai, and Jun Wang. "A Two-Timescale Neurodynamic Approach to Minimax Portfolio Optimization." In 2021 11th International Conference on Information Science and Technology (ICIST). IEEE, 2021. http://dx.doi.org/10.1109/icist52614.2021.9440640.
Che, Hangjun, and Jun Wang. "Sparse Nonnegative Matrix Factorization Based on Collaborative Neurodynamic Optimization." In 2019 9th International Conference on Information Science and Technology (ICIST). IEEE, 2019. http://dx.doi.org/10.1109/icist.2019.8836758.
Pan, Yunpeng, and Jun Wang. "A neurodynamic optimization approach to nonlinear model predictive control." In 2010 IEEE International Conference on Systems, Man and Cybernetics - SMC. IEEE, 2010. http://dx.doi.org/10.1109/icsmc.2010.5642367.
Tassouli, Siham, and Abdel Lisser. "A Neurodynamic Duplex for Distributionally Robust Joint Chance-Constrained Optimization." In 13th International Conference on Operations Research and Enterprise Systems. SCITEPRESS - Science and Technology Publications, 2024. http://dx.doi.org/10.5220/0012262100003639.