Academic literature on the topic "Neurodynamic optimization"

Consult the thematic lists of articles, books, theses, conference papers, and other academic sources on the topic "Neurodynamic optimization". For each source in the reference list, a bibliographic reference can be generated automatically in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc. Where the metadata make it available, you can also download the full text of the publication in PDF and read its abstract online.

Journal articles on the topic "Neurodynamic optimization"

1

Ji, Zheng, Xu Cai, and Xuyang Lou. "A Quantum-Behaved Neurodynamic Approach for Nonconvex Optimization with Constraints". Algorithms 12, no. 7 (July 5, 2019): 138. http://dx.doi.org/10.3390/a12070138.

Abstract
This paper presents a quantum-behaved neurodynamic swarm optimization approach to solve the nonconvex optimization problems with inequality constraints. Firstly, the general constrained optimization problem is addressed and a high-performance feedback neural network for solving convex nonlinear programming problems is introduced. The convergence of the proposed neural network is also proved. Then, combined with the quantum-behaved particle swarm method, a quantum-behaved neurodynamic swarm optimization (QNSO) approach is presented. Finally, the performance of the proposed QNSO algorithm is evaluated through two function tests and three applications including the hollow transmission shaft, heat exchangers and crank–rocker mechanism. Numerical simulations are also provided to verify the advantages of our method.
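The feedback-network component that methods in this family build on can be illustrated with a minimal sketch (this is not the paper's QNSO algorithm; the objective, bounds, and all names below are illustrative): a projection neural network whose state obeys dx/dt = -x + P_Ω(x - ∇f(x)), Euler-integrated for a toy convex problem with box constraints.

```python
# Minimal projection-neural-network sketch (illustrative, not the paper's QNSO):
# state dynamics dx/dt = -x + P(x - grad f(x)), where P projects onto the box [0,1]^2.
# Equilibria satisfy x = P(x - grad f(x)), i.e. the optimality condition of the
# constrained problem.

def grad_f(x):
    # f(x) = (x0 - 2)^2 + (x1 + 1)^2, a simple convex objective
    return [2.0 * (x[0] - 2.0), 2.0 * (x[1] + 1.0)]

def project(x, lo=0.0, hi=1.0):
    # P_Omega: componentwise clipping onto the feasible box
    return [min(max(v, lo), hi) for v in x]

def solve(x0, h=0.05, steps=2000):
    # forward-Euler integration of the projection dynamics
    x = list(x0)
    for _ in range(steps):
        g = grad_f(x)
        target = project([x[i] - g[i] for i in range(2)])
        x = [x[i] + h * (target[i] - x[i]) for i in range(2)]
    return x

x_star = solve([0.5, 0.5])  # converges to the constrained minimizer (1, 0)
```

The quantum-behaved swarm layer of QNSO then runs many such networks from different initial states; the sketch above shows only one trajectory.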
2

Le, Xinyi, Sijie Chen, Fei Li, Zheng Yan, and Juntong Xi. "Distributed Neurodynamic Optimization for Energy Internet Management". IEEE Transactions on Systems, Man, and Cybernetics: Systems 49, no. 8 (August 2019): 1624–33. http://dx.doi.org/10.1109/tsmc.2019.2898551.

3

Li, Guocheng, and Zheng Yan. "Reconstruction of sparse signals via neurodynamic optimization". International Journal of Machine Learning and Cybernetics 10, no. 1 (May 18, 2017): 15–26. http://dx.doi.org/10.1007/s13042-017-0694-4.

4

Leung, Man-Fai, and Jun Wang. "A Collaborative Neurodynamic Approach to Multiobjective Optimization". IEEE Transactions on Neural Networks and Learning Systems 29, no. 11 (November 2018): 5738–48. http://dx.doi.org/10.1109/tnnls.2018.2806481.

5

Ma, Litao, Jiqiang Chen, Sitian Qin, Lina Zhang, and Feng Zhang. "An Efficient Neurodynamic Approach to Fuzzy Chance-constrained Programming". International Journal on Artificial Intelligence Tools 30, no. 1 (January 29, 2021): 2140001. http://dx.doi.org/10.1142/s0218213021400017.

Abstract
In both practical applications and theoretical analysis, many optimization problems carry fuzzy chance constraints, and real-time algorithms for solving them are currently scarce. Therefore, in this paper, a continuous-time neurodynamic approach is proposed for solving a class of fuzzy chance-constrained optimization problems. First, an equivalent deterministic problem with an inequality constraint is derived, and a continuous-time neurodynamic approach is proposed. Second, a necessary and sufficient optimality condition for the considered optimization problem is obtained. Third, the boundedness, global existence, and Lyapunov stability of the state solution of the proposed approach are proved, and convergence to the optimal solution of the considered problem is established. Finally, several experiments illustrate the performance of the proposed approach.
6

Yan, Zheng, Jun Wang, and Guocheng Li. "A collective neurodynamic optimization approach to bound-constrained nonconvex optimization". Neural Networks 55 (July 2014): 20–29. http://dx.doi.org/10.1016/j.neunet.2014.03.006.

7

Wang, Tong, Hao Cui, Zhongyi Zhang, and Jian Wei. "A Neurodynamic Approach for SWIPT Power Splitting Optimization". Journal of Physics: Conference Series 2517, no. 1 (June 1, 2023): 012010. http://dx.doi.org/10.1088/1742-6596/2517/1/012010.

Abstract
Simultaneous wireless information and power transfer (SWIPT) systems, which harvest energy from RF signals, can effectively alleviate the energy shortage of wireless devices. However, existing SWIPT optimization methods based on numerical algorithms struggle to solve the underlying non-convex problem and to adapt to dynamic communication environments. In this paper, a duplex neurodynamic optimization method is used to address the SWIPT system's power-splitting problem. The information-rate maximization problem of the SWIPT system is framed as a biconvex problem. A duplex recurrent neural network concurrently executes local search, and the initial states of the neural network are updated by a particle swarm optimization method to reach the global optimum. The experimental results demonstrate that the duplex neurodynamic-based SWIPT system maximizes the information rate while satisfying the minimum harvested-energy requirement across a variety of channel states.
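The collaborative pattern described in this abstract — several neurodynamic local searches whose equilibria are compared by a population-level rule — can be sketched deterministically (an illustration of the general idea only, not the paper's duplex RNN/PSO method; the objective and grid of starts are invented for the example):

```python
# Multi-start neurodynamic sketch (illustrative): each "network" runs the
# gradient flow dx/dt = -f'(x) projected onto [-2, 2]; a selection step then
# keeps the best equilibrium, mimicking the collective search idea.

def f(x):
    # nonconvex objective with local minima near x ~ 1.13 and x ~ -1.30
    return x**4 - 3.0 * x**2 + x

def df(x):
    return 4.0 * x**3 - 6.0 * x + 1.0

def gradient_flow(x, h=0.01, steps=3000, lo=-2.0, hi=2.0):
    # forward-Euler integration of the projected gradient flow
    for _ in range(steps):
        x = min(max(x - h * df(x), lo), hi)
    return x

# deterministic "swarm": a grid of initial states instead of random particles
candidates = [gradient_flow(x0) for x0 in (-2.0, -1.0, 0.0, 1.0, 2.0)]
x_best = min(candidates, key=f)  # global minimum near x ~ -1.30
```

In the paper the reinitialization is adaptive (PSO updates the networks' initial states between rounds); the fixed grid above only shows why running several trajectories escapes the inferior local minimum.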
8

Liu, Bao, Xuehui Mei, Haijun Jiang, and Lijun Wu. "A Nonpenalty Neurodynamic Model for Complex-Variable Optimization". Discrete Dynamics in Nature and Society 2021 (February 16, 2021): 1–10. http://dx.doi.org/10.1155/2021/6632257.

Abstract
In this paper, a complex-variable neural network model described by a differential inclusion is developed for solving complex-variable optimization problems. Based on a nonpenalty idea, the constructed algorithm requires no penalty parameters, making it easier to apply in practice. Convergence theorems for the proposed model are given under suitable conditions. Finally, two numerical examples illustrate the correctness and effectiveness of the proposed optimization model.
9

Zhao, You, Xiaofeng Liao, and Xing He. "Novel projection neurodynamic approaches for constrained convex optimization". Neural Networks 150 (June 2022): 336–49. http://dx.doi.org/10.1016/j.neunet.2022.03.011.

10

Yan, Zheng, Jianchao Fan, and Jun Wang. "A Collective Neurodynamic Approach to Constrained Global Optimization". IEEE Transactions on Neural Networks and Learning Systems 28, no. 5 (May 2017): 1206–15. http://dx.doi.org/10.1109/tnnls.2016.2524619.


Theses on the topic "Neurodynamic optimization"

1

Tassouli, Siham. "Neurodynamic chance-constrained geometric optimization". Electronic thesis or dissertation, université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG062.

Abstract
In many real-world scenarios, decision-makers face uncertainties that can affect the outcomes of their decisions. These uncertainties arise from various sources, such as variability in demand, fluctuating market conditions, or incomplete information about system parameters. Traditional deterministic optimization approaches assume that all parameters are known with certainty, which may not accurately reflect the reality of the problem. Chance-constrained optimization provides a more realistic and robust approach by explicitly accounting for the uncertainty in decision-making. Geometric programming is often misunderstood as a technique exclusively designed for posynomial problems. However, it is a versatile mathematical theory with significant value in addressing a broad range of separable problems. In fact, its true strength lies in its ability to effectively tackle seemingly inseparable problems by leveraging their linear algebraic structure. This general applicability of geometric programming makes it a valuable tool for studying and solving various optimization problems, extending its practical usefulness beyond its initial perception. Recurrent neural networks (RNNs) offer a biologically inspired computational framework with great optimization potential. By emulating the interconnected structure of neurons in the brain, RNNs excel in modeling complex and dynamic systems. This capability allows them to capture temporal dependencies and feedback loops, making them well-suited for optimization scenarios that involve sequential decision-making or iterative processes. Moreover, one of the key advantages of neurodynamic approaches is their hardware implementation feasibility. The primary objective of this thesis is to develop neurodynamic algorithms that are efficient and effective in solving chance-constrained geometric optimization problems. The thesis begins by focusing on chance-constrained geometric programs involving independent random variables. 
In addition, a specific type of geometric programs known as rectangular programs is also examined in detail. The objective is to understand the characteristics and complexities associated with this subclass of geometric programs. Subsequently, the thesis explores applying copula theory to address chance-constrained geometric programs with dependent random variables. Copula theory provides a mathematical framework for modeling and analyzing the dependence structure between random variables, thereby enhancing the understanding and optimization of these problems. Lastly, the thesis investigates distributionally robust geometric optimization, which considers uncertain distributions of random variables. This approach focuses on developing optimization algorithms that are robust against uncertainty in the underlying probability distributions, ensuring more reliable and stable solutions.
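The geometric-programming setting can be made concrete with a small sketch (illustrative and independent of the thesis's chance-constrained algorithms; the posynomial below is invented for the example): a posynomial objective becomes convex under the change of variables x = e^y, after which a plain gradient flow, the simplest neurodynamic system, finds the minimum.

```python
import math

# Geometric-programming sketch (illustrative): minimize the posynomial
# f(x) = x + 2/x over x > 0. With x = e^y this becomes the convex function
# g(y) = e^y + 2 e^{-y}, minimized by the gradient flow dy/dt = -g'(y).

def dg(y):
    return math.exp(y) - 2.0 * math.exp(-y)

y = 0.0
h = 0.05
for _ in range(500):
    y -= h * dg(y)  # forward-Euler step of the gradient flow

x_star = math.exp(y)  # analytic minimizer: x* = sqrt(2)
```

The log-variable transform is what makes seemingly nonconvex posynomial problems tractable; the chance-constrained programs studied in the thesis add probabilistic constraints on top of this convex reformulation.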
2

Wu, Dawen. "Solving Some Nonlinear Optimization Problems with Deep Learning". Electronic thesis or dissertation, université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG083.

Abstract
This thesis considers four types of nonlinear optimization problems, namely bimatrix games, nonlinear projection equations (NPEs), nonsmooth convex optimization problems (NCOPs), and chance-constrained games (CCGs). These four classes of nonlinear optimization problems find extensive applications in various domains such as engineering, computer science, economics, and finance. We aim to introduce deep learning-based algorithms to efficiently compute the optimal solutions for these nonlinear optimization problems. For bimatrix games, we use Convolutional Neural Networks (CNNs) to compute Nash equilibria. Specifically, we design a CNN architecture where the input is a bimatrix game and the output is the predicted Nash equilibrium for the game. We generate a set of bimatrix games by a given probability distribution and use the Lemke-Howson algorithm to find their true Nash equilibria, thereby constructing a training dataset. The proposed CNN is trained on this dataset to improve its accuracy. Upon completion of training, the CNN is capable of predicting Nash equilibria for unseen bimatrix games. Experimental results demonstrate the exceptional computational efficiency of our CNN-based approach, at the cost of sacrificing some accuracy.
For NPEs, NCOPs, and CCGs, which are more complex optimization problems, they cannot be directly fed into neural networks. Therefore, we resort to advanced tools, namely neurodynamic optimization and Physics-Informed Neural Networks (PINNs), for solving these problems. Specifically, we first use a neurodynamic approach to model a nonlinear optimization problem as a system of Ordinary Differential Equations (ODEs). Then, we utilize a PINN-based model to solve the resulting ODE system, where the end state of the model represents the predicted solution to the original optimization problem. The neural network is trained toward solving the ODE system, thereby solving the original optimization problem. A key contribution of our proposed method lies in transforming a nonlinear optimization problem into a neural network training problem. As a result, we can now solve nonlinear optimization problems using only PyTorch, without relying on classical convex optimization solvers such as CVXPY, CPLEX, or Gurobi.
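The first step of this pipeline — recasting a nonlinear projection equation as an ODE system — can be sketched as follows (illustrative only; the thesis trains a PINN to solve the resulting ODE, whereas this sketch uses a plain Euler integrator, and the mapping F, set Ω, and all constants are invented for the example):

```python
# Neurodynamic model for a nonlinear projection equation (illustrative sketch).
# We seek x in Omega = [0,1]^2 satisfying P_Omega(x - F(x)) = x, where
# F(x) = A x - b is monotone (A positive definite). The associated ODE
# dx/dt = P_Omega(x - F(x)) - x has that solution as its stable equilibrium.

A = [[1.0, 0.5],
     [0.5, 1.0]]
b = [1.0, 1.0]

def F(x):
    return [A[i][0] * x[0] + A[i][1] * x[1] - b[i] for i in range(2)]

def project(x):
    # P_Omega: componentwise clipping onto the unit box
    return [min(max(v, 0.0), 1.0) for v in x]

def integrate(x0, h=0.1, steps=500):
    # forward-Euler integration; a PINN would instead learn this trajectory
    x = list(x0)
    for _ in range(steps):
        Fx = F(x)
        rhs = project([x[i] - Fx[i] for i in range(2)])
        x = [x[i] + h * (rhs[i] - x[i]) for i in range(2)]
    return x

x_star = integrate([0.0, 0.0])  # equilibrium at x = (2/3, 2/3), where F(x) = 0
```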
3

"A neurodynamic optimization approach to constrained pseudoconvex optimization". 2011. http://library.cuhk.edu.hk/record=b5894791.

Abstract
Guo, Zhishan.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2011.
Includes bibliographical references (p. 71-82).
Abstracts in English and Chinese.
Contents: 1. Introduction (Constrained Pseudoconvex Optimization; Recurrent Neural Networks; Thesis Organization) -- 2. Literature Review (Pseudoconvex Optimization; Recurrent Neural Networks) -- 3. Model Description and Convergence Analysis (Model Descriptions; Global Convergence) -- 4. Numerical Examples (Gaussian Optimization; Quadratic Fractional Programming; Nonlinear Convex Programming) -- 5. Real-time Data Reconciliation (Introduction; Theoretical Analysis and Performance Measurement; Examples) -- 6. Real-time Portfolio Optimization (Introduction; Model Description; Theoretical Analysis; Illustrative Examples) -- 7. Conclusions and Future Works (Concluding Remarks; Future Works) -- A. Publication List -- Bibliography.
4

"Collective Neurodynamic Systems: Synchronization and Optimization". 2016. http://repository.lib.cuhk.edu.hk/en/item/cuhk-1292660.


Book chapters on the topic "Neurodynamic optimization"

1

Wang, Jun. "Neurodynamic Optimization and Its Applications in Robotics". In Advances in Robotics, 2. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-03983-6_2.

2

Qin, Sitian, Xinyi Le, and Jun Wang. "A Neurodynamic Optimization Approach to Bilevel Linear Programming". In Advances in Neural Networks – ISNN 2015, 418–25. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-25393-0_46.

3

Leung, Man-Fai, and Jun Wang. "A Collaborative Neurodynamic Optimization Approach to Bicriteria Portfolio Selection". In Advances in Neural Networks – ISNN 2019, 318–27. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-22796-8_34.

4

Fan, Jianchao, and Jun Wang. "A Collective Neurodynamic Optimization Approach to Nonnegative Tensor Decomposition". In Advances in Neural Networks - ISNN 2017, 207–13. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-59081-3_25.

5

Le, Xinyi, Sijie Chen, Yu Zheng, and Juntong Xi. "A Multiple-objective Neurodynamic Optimization to Electric Load Management Under Demand-Response Program". In Advances in Neural Networks - ISNN 2017, 169–77. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-59081-3_21.

6

Yan, Zheng, Jie Lu, and Guangquan Zhang. "Distributed Model Predictive Control of Linear Systems with Coupled Constraints Based on Collective Neurodynamic Optimization". In AI 2018: Advances in Artificial Intelligence, 318–28. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-03991-2_31.

7

Wang, Jiasen, Jun Wang, and Dongbin Zhao. "Dynamically Weighted Model Predictive Control of Affine Nonlinear Systems Based on Two-Timescale Neurodynamic Optimization". In Advances in Neural Networks – ISNN 2020, 96–105. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-64221-1_9.

8

Le, Xinyi, and Jun Wang. "A Neurodynamic Optimization Approach to Robust Pole Assignment for Synthesizing Linear Control Systems Based on a Convex Feasibility Problem Reformulation". In Neural Information Processing, 284–91. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-42054-2_36.

9

Chang, Hung-Jen, and Walter J. Freeman. "Parameter Optimization of Olfactory Neurodynamics". In The Neurobiology of Computation, 191–96. Boston, MA: Springer US, 1995. http://dx.doi.org/10.1007/978-1-4615-2235-5_31.

10

Fu, Chunjiang, Rubin Wang, and Jianting Cao. "Re-optimization Contributes to the Adaption of External VF Field". In Advances in Cognitive Neurodynamics (II), 473–77. Dordrecht: Springer Netherlands, 2010. http://dx.doi.org/10.1007/978-90-481-9695-1_75.


Conference papers on the topic "Neurodynamic optimization"

1

Li, Xinqi, Jun Wang, and Sam Kwong. "Alternative Mutation Operators in Collaborative Neurodynamic Optimization". In 2020 10th International Conference on Information Science and Technology (ICIST). IEEE, 2020. http://dx.doi.org/10.1109/icist49303.2020.9202136.

2

Wang, Yadi, Xiaoping Li, and Jun Wang. "A Neurodynamic Approach to L0-Constrained Optimization". In 2020 12th International Conference on Advanced Computational Intelligence (ICACI). IEEE, 2020. http://dx.doi.org/10.1109/icaci49185.2020.9177499.

3

Fang, Xiaomeng, Xinyi Le, and Fei Li. "Distributed Neurodynamic Optimization for Coordination of Redundant Robots". In 2019 9th International Conference on Information Science and Technology (ICIST). IEEE, 2019. http://dx.doi.org/10.1109/icist.2019.8836826.

4

Wang, Jun. "Neurodynamic optimization and its applications for winners-take-all". In 2009 2nd IEEE International Conference on Computer Science and Information Technology. IEEE, 2009. http://dx.doi.org/10.1109/iccsit.2009.5235008.

5

Fan, Jianchao, Ye Wang, Jianhua Zhao, Xiang Wang, and Xinxin Wang. "Blind source separation based on collective neurodynamic optimization approach". In 2017 36th Chinese Control Conference (CCC). IEEE, 2017. http://dx.doi.org/10.23919/chicc.2017.8027656.

6

Wang, Jun. "Neurodynamic optimization with its application for model predictive control". In 2009 3rd International Workshop on Soft Computing Applications (SOFA). IEEE, 2009. http://dx.doi.org/10.1109/sofa.2009.5254883.

7

Leung, Man-Fai, and Jun Wang. "A Two-Timescale Neurodynamic Approach to Minimax Portfolio Optimization". In 2021 11th International Conference on Information Science and Technology (ICIST). IEEE, 2021. http://dx.doi.org/10.1109/icist52614.2021.9440640.

8

Che, Hangjun, and Jun Wang. "Sparse Nonnegative Matrix Factorization Based on Collaborative Neurodynamic Optimization". In 2019 9th International Conference on Information Science and Technology (ICIST). IEEE, 2019. http://dx.doi.org/10.1109/icist.2019.8836758.

9

Pan, Yunpeng, and Jun Wang. "A neurodynamic optimization approach to nonlinear model predictive control". In 2010 IEEE International Conference on Systems, Man and Cybernetics - SMC. IEEE, 2010. http://dx.doi.org/10.1109/icsmc.2010.5642367.

10

Tassouli, Siham, and Abdel Lisser. "A Neurodynamic Duplex for Distributionally Robust Joint Chance-Constrained Optimization". In 13th International Conference on Operations Research and Enterprise Systems. SCITEPRESS - Science and Technology Publications, 2024. http://dx.doi.org/10.5220/0012262100003639.
