Academic literature on the topic 'Neurodynamic optimization'
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Neurodynamic optimization.'
Journal articles on the topic "Neurodynamic optimization"
Ji, Zheng, Xu Cai, and Xuyang Lou. "A Quantum-Behaved Neurodynamic Approach for Nonconvex Optimization with Constraints." Algorithms 12, no. 7 (July 5, 2019): 138. http://dx.doi.org/10.3390/a12070138.
Le, Xinyi, Sijie Chen, Fei Li, Zheng Yan, and Juntong Xi. "Distributed Neurodynamic Optimization for Energy Internet Management." IEEE Transactions on Systems, Man, and Cybernetics: Systems 49, no. 8 (August 2019): 1624–33. http://dx.doi.org/10.1109/tsmc.2019.2898551.
Li, Guocheng, and Zheng Yan. "Reconstruction of sparse signals via neurodynamic optimization." International Journal of Machine Learning and Cybernetics 10, no. 1 (May 18, 2017): 15–26. http://dx.doi.org/10.1007/s13042-017-0694-4.
Leung, Man-Fai, and Jun Wang. "A Collaborative Neurodynamic Approach to Multiobjective Optimization." IEEE Transactions on Neural Networks and Learning Systems 29, no. 11 (November 2018): 5738–48. http://dx.doi.org/10.1109/tnnls.2018.2806481.
Ma, Litao, Jiqiang Chen, Sitian Qin, Lina Zhang, and Feng Zhang. "An Efficient Neurodynamic Approach to Fuzzy Chance-constrained Programming." International Journal on Artificial Intelligence Tools 30, no. 01 (January 29, 2021): 2140001. http://dx.doi.org/10.1142/s0218213021400017.
Yan, Zheng, Jun Wang, and Guocheng Li. "A collective neurodynamic optimization approach to bound-constrained nonconvex optimization." Neural Networks 55 (July 2014): 20–29. http://dx.doi.org/10.1016/j.neunet.2014.03.006.
Wang, Tong, Hao Cui, Zhongyi Zhang, and Jian Wei. "A Neurodynamic Approach for SWIPT Power Splitting Optimization." Journal of Physics: Conference Series 2517, no. 1 (June 1, 2023): 012010. http://dx.doi.org/10.1088/1742-6596/2517/1/012010.
Liu, Bao, Xuehui Mei, Haijun Jiang, and Lijun Wu. "A Nonpenalty Neurodynamic Model for Complex-Variable Optimization." Discrete Dynamics in Nature and Society 2021 (February 16, 2021): 1–10. http://dx.doi.org/10.1155/2021/6632257.
Zhao, You, Xiaofeng Liao, and Xing He. "Novel projection neurodynamic approaches for constrained convex optimization." Neural Networks 150 (June 2022): 336–49. http://dx.doi.org/10.1016/j.neunet.2022.03.011.
Yan, Zheng, Jianchao Fan, and Jun Wang. "A Collective Neurodynamic Approach to Constrained Global Optimization." IEEE Transactions on Neural Networks and Learning Systems 28, no. 5 (May 2017): 1206–15. http://dx.doi.org/10.1109/tnnls.2016.2524619.
Dissertations / Theses on the topic "Neurodynamic optimization"
Tassouli, Siham. "Neurodynamic chance-constrained geometric optimization." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG062.
In many real-world scenarios, decision-makers face uncertainties that can affect the outcomes of their decisions. These uncertainties arise from various sources, such as variability in demand, fluctuating market conditions, or incomplete information about system parameters. Traditional deterministic optimization approaches assume that all parameters are known with certainty, which may not accurately reflect the reality of the problem. Chance-constrained optimization provides a more realistic and robust approach by explicitly accounting for the uncertainty in decision-making. Geometric programming is often misunderstood as a technique exclusively designed for posynomial problems. However, it is a versatile mathematical theory with significant value in addressing a broad range of separable problems. In fact, its true strength lies in its ability to effectively tackle seemingly inseparable problems by leveraging their linear algebraic structure. This general applicability of geometric programming makes it a valuable tool for studying and solving various optimization problems, extending its practical usefulness beyond its initial perception. Recurrent neural networks (RNNs) offer a biologically inspired computational framework with great optimization potential. By emulating the interconnected structure of neurons in the brain, RNNs excel in modeling complex and dynamic systems. This capability allows them to capture temporal dependencies and feedback loops, making them well-suited for optimization scenarios that involve sequential decision-making or iterative processes. Moreover, one of the key advantages of neurodynamic approaches is their hardware implementation feasibility. The primary objective of this thesis is to develop neurodynamic algorithms that are efficient and effective in solving chance-constrained geometric optimization problems. The thesis begins by focusing on chance-constrained geometric programs involving independent random variables. In addition, a specific type of geometric programs known as rectangular programs is also examined in detail. The objective is to understand the characteristics and complexities associated with this subclass of geometric programs. Subsequently, the thesis explores applying copula theory to address chance-constrained geometric programs with dependent random variables. Copula theory provides a mathematical framework for modeling and analyzing the dependence structure between random variables, thereby enhancing the understanding and optimization of these problems. Lastly, the thesis investigates distributionally robust geometric optimization, which considers uncertain distributions of random variables. This approach focuses on developing optimization algorithms that are robust against uncertainty in the underlying probability distributions, ensuring more reliable and stable solutions.
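As background for readers unfamiliar with the neurodynamic idea the abstract alludes to, the following is a minimal, hypothetical sketch (not taken from the thesis): a projection neural network whose state evolves by an ODE and whose equilibrium solves a box-constrained problem. The posynomial-style objective, the bounds, and the forward-Euler integration are illustrative assumptions only.

```python
# Hypothetical sketch: a projection neural network whose equilibrium solves
#   min f(x)  subject to  lo <= x <= hi,
# with state dynamics  dx/dt = -x + P_Omega(x - grad_f(x)),  integrated by forward Euler.
import numpy as np

def grad_f(x):
    # Illustrative objective: f(x) = x1/x2 + x2 (a simple posynomial-like function)
    return np.array([1.0 / x[1], -x[0] / x[1]**2 + 1.0])

def project(x, lo, hi):
    # Projection onto the box constraint set Omega = [lo, hi]
    return np.clip(x, lo, hi)

lo, hi = np.array([0.1, 0.1]), np.array([5.0, 5.0])
x = np.array([2.0, 2.0])          # initial neuronal state
dt = 0.01                         # integration step size
for _ in range(20000):            # integrate until the state settles at an equilibrium
    x = x + dt * (-x + project(x - grad_f(x), lo, hi))

print("equilibrium (approximate minimizer):", x)   # roughly [0.1, 0.316]
```

In the chance-constrained and geometric settings studied in the thesis, the right-hand side of the dynamics would be derived from the specific program (for example via its optimality conditions) rather than this toy gradient-projection rule.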
Wu, Dawen. "Solving Some Nonlinear Optimization Problems with Deep Learning." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG083.
This thesis considers four types of nonlinear optimization problems, namely bimatrix games, nonlinear projection equations (NPEs), nonsmooth convex optimization problems (NCOPs), and chance-constrained games (CCGs). These four classes of nonlinear optimization problems find extensive applications in various domains such as engineering, computer science, economics, and finance. We aim to introduce deep learning-based algorithms to efficiently compute the optimal solutions for these nonlinear optimization problems. For bimatrix games, we use Convolutional Neural Networks (CNNs) to compute Nash equilibria. Specifically, we design a CNN architecture where the input is a bimatrix game and the output is the predicted Nash equilibrium for the game. We generate a set of bimatrix games from a given probability distribution and use the Lemke-Howson algorithm to find their true Nash equilibria, thereby constructing a training dataset. The proposed CNN is trained on this dataset to improve its accuracy. Upon completion of training, the CNN is capable of predicting Nash equilibria for unseen bimatrix games. Experimental results demonstrate the exceptional computational efficiency of our CNN-based approach, at the cost of sacrificing some accuracy. NPEs, NCOPs, and CCGs are more complex optimization problems that cannot be fed directly into neural networks. Therefore, we resort to advanced tools, namely neurodynamic optimization and Physics-Informed Neural Networks (PINNs), for solving these problems. Specifically, we first use a neurodynamic approach to model a nonlinear optimization problem as a system of Ordinary Differential Equations (ODEs). Then, we utilize a PINN-based model to solve the resulting ODE system, where the end state of the model represents the predicted solution to the original optimization problem. The neural network is trained toward solving the ODE system, thereby solving the original optimization problem. A key contribution of our proposed method lies in transforming a nonlinear optimization problem into a neural network training problem. As a result, we can now solve nonlinear optimization problems using only PyTorch, without relying on classical convex optimization solvers such as CVXPY, CPLEX, or Gurobi.
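To make the ODE-plus-PINN pipeline described above concrete, here is a minimal sketch under simplifying assumptions (a toy least-squares objective with gradient-flow dynamics; not the thesis code): a small PyTorch network is trained so that its trajectory x(t) satisfies the neurodynamic ODE, and the end state x(T) approximates the optimizer.

```python
# Hypothetical sketch: train a small PINN in PyTorch so that its trajectory x(t)
# satisfies a gradient-flow ODE  dx/dt = -A^T (A x - b),
# whose end state approximates the minimizer of 0.5 * ||A x - b||^2.
import torch

torch.manual_seed(0)
A = torch.tensor([[2.0, 0.0], [0.0, 1.0]])
b = torch.tensor([4.0, 2.0])
x0 = torch.zeros(2)               # initial state of the neurodynamic system
T = 5.0                           # time horizon

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 2))

def trajectory(t):
    # Ansatz x(t) = x0 + t * N(t): the initial condition x(0) = x0 holds exactly.
    return x0 + t * net(t)

def rhs(x):
    # Batched right-hand side -A^T (A x - b) for states x of shape (batch, 2)
    return -(A.T @ (A @ x.T - b.unsqueeze(1))).T

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(5000):
    t = (torch.rand(128, 1) * T).requires_grad_(True)   # collocation times in [0, T]
    x = trajectory(t)
    # dx/dt for each state coordinate via automatic differentiation
    dxdt = torch.stack([torch.autograd.grad(x[:, i].sum(), t, create_graph=True)[0].squeeze(1)
                        for i in range(2)], dim=1)
    loss = ((dxdt - rhs(x)) ** 2).mean()                 # ODE residual loss
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    print("predicted minimizer x(T):", trajectory(torch.tensor([[T]])))   # should approach [2., 2.]
```

Because the initial condition is hard-wired into the ansatz, the training loss only needs to penalize the ODE residual at sampled collocation times; the optimization problem is thus solved entirely by training the network.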
"A neurodynamic optimization approach to constrained pseudoconvex optimization." 2011. http://library.cuhk.edu.hk/record=b5894791.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2011.
Includes bibliographical references (p. 71-82).
Abstracts in English and Chinese.
Contents:
Abstract --- p. i
Acknowledgement --- p. ii
Chapter 1: Introduction --- p. 1
1.1 Constrained Pseudoconvex Optimization --- p. 1
1.2 Recurrent Neural Networks --- p. 4
1.3 Thesis Organization --- p. 7
Chapter 2: Literature Review --- p. 8
2.1 Pseudoconvex Optimization --- p. 8
2.2 Recurrent Neural Networks --- p. 10
Chapter 3: Model Description and Convergence Analysis --- p. 17
3.1 Model Descriptions --- p. 18
3.2 Global Convergence --- p. 20
Chapter 4: Numerical Examples --- p. 27
4.1 Gaussian Optimization --- p. 28
4.2 Quadratic Fractional Programming --- p. 36
4.3 Nonlinear Convex Programming --- p. 39
Chapter 5: Real-time Data Reconciliation --- p. 42
5.1 Introduction --- p. 42
5.2 Theoretical Analysis and Performance Measurement --- p. 44
5.3 Examples --- p. 45
Chapter 6: Real-time Portfolio Optimization --- p. 53
6.1 Introduction --- p. 53
6.2 Model Description --- p. 54
6.3 Theoretical Analysis --- p. 56
6.4 Illustrative Examples --- p. 58
Chapter 7: Conclusions and Future Works --- p. 67
7.1 Concluding Remarks --- p. 67
7.2 Future Works --- p. 68
Appendix A: Publication List --- p. 69
Bibliography --- p. 71
"Collective Neurodynamic Systems: Synchronization and Optimization." 2016. http://repository.lib.cuhk.edu.hk/en/item/cuhk-1292660.
Book chapters on the topic "Neurodynamic optimization"
Wang, Jun. "Neurodynamic Optimization and Its Applications in Robotics." In Advances in Robotics, 2. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-03983-6_2.
Qin, Sitian, Xinyi Le, and Jun Wang. "A Neurodynamic Optimization Approach to Bilevel Linear Programming." In Advances in Neural Networks – ISNN 2015, 418–25. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-25393-0_46.
Leung, Man-Fai, and Jun Wang. "A Collaborative Neurodynamic Optimization Approach to Bicriteria Portfolio Selection." In Advances in Neural Networks – ISNN 2019, 318–27. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-22796-8_34.
Fan, Jianchao, and Jun Wang. "A Collective Neurodynamic Optimization Approach to Nonnegative Tensor Decomposition." In Advances in Neural Networks - ISNN 2017, 207–13. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-59081-3_25.
Le, Xinyi, Sijie Chen, Yu Zheng, and Juntong Xi. "A Multiple-objective Neurodynamic Optimization to Electric Load Management Under Demand-Response Program." In Advances in Neural Networks - ISNN 2017, 169–77. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-59081-3_21.
Yan, Zheng, Jie Lu, and Guangquan Zhang. "Distributed Model Predictive Control of Linear Systems with Coupled Constraints Based on Collective Neurodynamic Optimization." In AI 2018: Advances in Artificial Intelligence, 318–28. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-03991-2_31.
Wang, Jiasen, Jun Wang, and Dongbin Zhao. "Dynamically Weighted Model Predictive Control of Affine Nonlinear Systems Based on Two-Timescale Neurodynamic Optimization." In Advances in Neural Networks – ISNN 2020, 96–105. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-64221-1_9.
Le, Xinyi, and Jun Wang. "A Neurodynamic Optimization Approach to Robust Pole Assignment for Synthesizing Linear Control Systems Based on a Convex Feasibility Problem Reformulation." In Neural Information Processing, 284–91. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-42054-2_36.
Chang, Hung-Jen, and Walter J. Freeman. "Parameter Optimization of Olfactory Neurodynamics." In The Neurobiology of Computation, 191–96. Boston, MA: Springer US, 1995. http://dx.doi.org/10.1007/978-1-4615-2235-5_31.
Fu, Chunjiang, Rubin Wang, and Jianting Cao. "Re-optimization Contributes to the Adaption of External VF Field." In Advances in Cognitive Neurodynamics (II), 473–77. Dordrecht: Springer Netherlands, 2010. http://dx.doi.org/10.1007/978-90-481-9695-1_75.
Conference papers on the topic "Neurodynamic optimization"
Li, Xinqi, Jun Wang, and Sam Kwong. "Alternative Mutation Operators in Collaborative Neurodynamic Optimization." In 2020 10th International Conference on Information Science and Technology (ICIST). IEEE, 2020. http://dx.doi.org/10.1109/icist49303.2020.9202136.
Wang, Yadi, Xiaoping Li, and Jun Wang. "A Neurodynamic Approach to L0-Constrained Optimization." In 2020 12th International Conference on Advanced Computational Intelligence (ICACI). IEEE, 2020. http://dx.doi.org/10.1109/icaci49185.2020.9177499.
Fang, Xiaomeng, Xinyi Le, and Fei Li. "Distributed Neurodynamic Optimization for Coordination of Redundant Robots." In 2019 9th International Conference on Information Science and Technology (ICIST). IEEE, 2019. http://dx.doi.org/10.1109/icist.2019.8836826.
Wang, Jun. "Neurodynamic optimization and its applications for winners-take-all." In 2009 2nd IEEE International Conference on Computer Science and Information Technology. IEEE, 2009. http://dx.doi.org/10.1109/iccsit.2009.5235008.
Fan, Jianchao, Ye Wang, Jianhua Zhao, Xiang Wang, and Xinxin Wang. "Blind source separation based on collective neurodynamic optimization approach." In 2017 36th Chinese Control Conference (CCC). IEEE, 2017. http://dx.doi.org/10.23919/chicc.2017.8027656.
Wang, Jun. "Neurodynamic optimization with its application for model predictive control." In 2009 3rd International Workshop on Soft Computing Applications (SOFA). IEEE, 2009. http://dx.doi.org/10.1109/sofa.2009.5254883.
Leung, Man-Fai, and Jun Wang. "A Two-Timescale Neurodynamic Approach to Minimax Portfolio Optimization." In 2021 11th International Conference on Information Science and Technology (ICIST). IEEE, 2021. http://dx.doi.org/10.1109/icist52614.2021.9440640.
Che, Hangjun, and Jun Wang. "Sparse Nonnegative Matrix Factorization Based on Collaborative Neurodynamic Optimization." In 2019 9th International Conference on Information Science and Technology (ICIST). IEEE, 2019. http://dx.doi.org/10.1109/icist.2019.8836758.
Pan, Yunpeng, and Jun Wang. "A neurodynamic optimization approach to nonlinear model predictive control." In 2010 IEEE International Conference on Systems, Man and Cybernetics - SMC. IEEE, 2010. http://dx.doi.org/10.1109/icsmc.2010.5642367.
Tassouli, Siham, and Abdel Lisser. "A Neurodynamic Duplex for Distributionally Robust Joint Chance-Constrained Optimization." In 13th International Conference on Operations Research and Enterprise Systems. SCITEPRESS - Science and Technology Publications, 2024. http://dx.doi.org/10.5220/0012262100003639.