Journal articles on the topic "Optimisation nonconvexe"


Consult the top 22 journal articles for your research on the topic "Optimisation nonconvexe".

Next to every source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference to the chosen source in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, whenever such details are available in the source's metadata.

Browse journal articles across a wide range of disciplines and compile your bibliography correctly.

1

Smith, E. "Global Optimisation of Nonconvex MINLPs." Computers & Chemical Engineering 21, no. 1-2 (1997): S791—S796. http://dx.doi.org/10.1016/s0098-1354(97)00146-4.

2

Smith, Edward M. B., and Constantinos C. Pantelides. "Global Optimisation of Nonconvex MINLPs." Computers & Chemical Engineering 21 (May 1997): S791–S796. http://dx.doi.org/10.1016/s0098-1354(97)87599-0.

3

Martínez-Legaz, J. E., and A. Seeger. "A formula on the approximate subdifferential of the difference of convex functions." Bulletin of the Australian Mathematical Society 45, no. 1 (February 1992): 37–41. http://dx.doi.org/10.1017/s0004972700036984.

Abstract:
We give a formula for the ε-subdifferential of the difference of two convex functions. As a by-product of this formula, one recovers a recent result of Hiriart-Urruty, namely, a necessary and sufficient condition for global optimality in nonconvex optimisation.
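
For context, the Hiriart-Urruty condition mentioned in this abstract is usually stated as follows for a difference of convex functions f = g − h (a standard formulation; the notation below is ours, not quoted from the paper):

\[
\bar{x} \in \operatorname*{arg\,min}_{x} \bigl( g(x) - h(x) \bigr)
\iff
\partial_{\varepsilon} h(\bar{x}) \subseteq \partial_{\varepsilon} g(\bar{x})
\quad \text{for all } \varepsilon > 0,
\]
\[
\text{where } \partial_{\varepsilon} f(\bar{x}) = \bigl\{ s : f(x) \ge f(\bar{x}) + \langle s, x - \bar{x} \rangle - \varepsilon \ \text{for all } x \bigr\}
\text{ is the } \varepsilon\text{-subdifferential.}
\]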
4

Scott, Carlton H., and Thomas R. Jefferson. "Duality for linear multiplicative programs." ANZIAM Journal 46, no. 3 (January 2005): 393–97. http://dx.doi.org/10.1017/s1446181100008336.

Abstract:
Linear multiplicative programs are an important class of nonconvex optimisation problems that are currently the subject of considerable research as regards the development of computational algorithms. In this paper, we show that mathematical programs of this nature are, in fact, a special case of the more general signomial programming problem, which in turn implies that research on this latter problem may be valuable in analysing and solving linear multiplicative programs. In particular, we use signomial programming duality theory to establish a dual program for a nonconvex linear multiplicative program. An interpretation of the dual variables is given.
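
For reference, a linear multiplicative program can be written generically as the minimisation of a product of affine functions over a polyhedron (our notation, not necessarily the paper's):

\[
\min_{x} \; \prod_{i=1}^{p} \bigl( c_i^{\top} x + d_i \bigr)
\quad \text{subject to} \quad A x \le b, \; x \ge 0,
\]

where each factor \(c_i^{\top} x + d_i\) is assumed positive on the feasible set. A product of positive affine terms is a signomial, which is the embedding into signomial programming that the duality result exploits.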
5

Sultanova, Nargiz. "Aggregate Subgradient Smoothing Methods for Large Scale Nonsmooth Nonconvex Optimisation and Applications." Bulletin of the Australian Mathematical Society 91, no. 3 (March 16, 2015): 523–24. http://dx.doi.org/10.1017/s0004972715000143.

6

Ali, Elaf J. "Canonical Dual Finite Element Method for Solving Nonconvex Mechanics and Topology Optimisation Problems." Bulletin of the Australian Mathematical Society 101, no. 1 (November 25, 2019): 172–73. http://dx.doi.org/10.1017/s0004972719001205.

7

Thi, Hoai An Le, Hoai Minh Le, and Tao Pham Dinh. "Fuzzy clustering based on nonconvex optimisation approaches using difference of convex (DC) functions algorithms." Advances in Data Analysis and Classification 1, no. 2 (July 25, 2007): 85–104. http://dx.doi.org/10.1007/s11634-007-0011-2.

8

Ma, Kai, Congshan Wang, Jie Yang, Chenliang Yuan, and Xinping Guan. "A pricing strategy for demand-side regulation with direct load control: a nonconvex optimisation approach." International Journal of System Control and Information Processing 2, no. 1 (2017): 74. http://dx.doi.org/10.1504/ijscip.2017.084264.

9

Ma, Kai, Congshan Wang, Jie Yang, Chenliang Yuan, and Xinping Guan. "A pricing strategy for demand-side regulation with direct load control: a nonconvex optimisation approach." International Journal of System Control and Information Processing 2, no. 1 (2017): 74. http://dx.doi.org/10.1504/ijscip.2017.10005200.

10

Smith, E. M. B., and C. C. Pantelides. "A symbolic reformulation/spatial branch-and-bound algorithm for the global optimisation of nonconvex MINLPs." Computers & Chemical Engineering 23, no. 4–5 (May 1999): 457–78. http://dx.doi.org/10.1016/s0098-1354(98)00286-5.

11

Pereira-Neto, A., C. Unsihuay, and O. R. Saavedra. "Efficient evolutionary strategy optimisation procedure to solve the nonconvex economic dispatch problem with generator constraints." IEE Proceedings - Generation, Transmission and Distribution 152, no. 5 (2005): 653. http://dx.doi.org/10.1049/ip-gtd:20045287.

12

Shaqfa, Mahmoud, and Katrin Beyer. "Pareto-like sequential sampling heuristic for global optimisation." Soft Computing 25, no. 14 (May 29, 2021): 9077–96. http://dx.doi.org/10.1007/s00500-021-05853-8.

Abstract:
In this paper, we propose a simple global optimisation algorithm inspired by Pareto’s principle. This algorithm samples most of its solutions within prominent search domains and is equipped with a self-adaptive mechanism to control the dynamic tightening of the prominent domains while the greediness of the algorithm increases over time (iterations). Unlike traditional metaheuristics, the proposed method has no direct mutation- or crossover-like operations. It depends solely on sequential random sampling, which can be used in diversification and intensification processes while keeping the information flow between generations and the structural bias at a minimum. By using a simple topology, the algorithm avoids premature convergence by sampling new solutions every generation. A simple theoretical derivation revealed that the exploration of this approach is unbiased and the rate of diversification is constant during the runtime. The trade-off balance between diversification and intensification is explained theoretically and experimentally. The proposed approach has been benchmarked against standard optimisation problems as well as a selected set of simple and complex engineering applications. We used 26 standard benchmarks with different properties that cover most types of optimisation problems, three traditional engineering problems, and one complex real-world engineering problem from the state-of-the-art literature. The algorithm performs well in finding global minima for nonconvex and multimodal functions, especially for high-dimensional problems, and was found to be very competitive in comparison with recent algorithmic proposals. Moreover, the algorithm outperforms and scales better than recent algorithms when benchmarked under a limited number of iterations on the composite CEC2017 problems. The design of the algorithm is kept simple so that it can easily be coupled or hybridised with other search paradigms. The code of the algorithm is provided in C++14, Python3.7, and Octave (Matlab).
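
As a rough illustration of the sampling idea described in this abstract (most samples drawn from a "prominent" domain around the incumbent that tightens over the iterations, plus a few global samples for diversification), here is a minimal Python sketch. It is not the authors' implementation; the tightening schedule, the population split and all parameter names are our own assumptions.

import numpy as np

def sequential_sampling_minimise(f, lower, upper, pop=30, iters=200, seed=0):
    """Sketch: sample mostly within a prominent box around the incumbent;
    the box tightens linearly as the iteration count grows (greediness)."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    best_x = rng.uniform(lower, upper)
    best_f = f(best_x)
    for t in range(iters):
        alpha = 1.0 - t / iters                  # shrinking half-width factor
        half = 0.5 * alpha * (upper - lower)
        lo = np.clip(best_x - half, lower, upper)
        hi = np.clip(best_x + half, lower, upper)
        local = rng.uniform(lo, hi, size=(pop - pop // 5, lower.size))
        glob = rng.uniform(lower, upper, size=(pop // 5, lower.size))
        for x in np.vstack([local, glob]):       # intensification + diversification
            fx = f(x)
            if fx < best_f:
                best_x, best_f = x, fx
    return best_x, best_f

# usage: minimise a 5-dimensional sphere function
x_best, f_best = sequential_sampling_minimise(
    lambda v: float(np.sum(v**2)), lower=[-5] * 5, upper=[5] * 5)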
13

Ławryńczuk, Maciej. "Efficient Nonlinear Predictive Control Based on Structured Neural Models." International Journal of Applied Mathematics and Computer Science 19, no. 2 (June 1, 2009): 233–46. http://dx.doi.org/10.2478/v10006-009-0019-1.

Abstract:
This paper describes structured neural models and a computationally efficient (suboptimal) nonlinear Model Predictive Control (MPC) algorithm based on such models. The structured neural model has the ability to make future predictions of the process without being used recursively. Thanks to the nature of the model, the prediction error is not propagated. This is particularly important in the case of noise and underparameterisation. Structured models have much better long-range prediction accuracy than the corresponding classical Nonlinear Auto Regressive with eXternal input (NARX) models. The described suboptimal MPC algorithm requires solving online only a quadratic programming problem. Nevertheless, it gives closed-loop control performance similar to that obtained with fully fledged nonlinear MPC, which hinges on online nonconvex optimisation. In order to demonstrate the advantages of structured models as well as the accuracy of the suboptimal MPC algorithm, a polymerisation reactor is studied.
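
For orientation, MPC algorithms of this kind minimise at each sampling instant a cost of the generic form below (standard MPC notation, not taken from the paper). The predicted outputs depend nonlinearly on the future control moves, which is what makes exact nonlinear MPC a nonconvex problem; in suboptimal schemes of this type the predictions are typically linearised with respect to the future control moves at each step, so that only a quadratic programme remains:

\[
\min_{\Delta u} \; J(k) =
\sum_{p=N_1}^{N} \bigl( y^{\mathrm{ref}}(k+p \mid k) - \hat{y}(k+p \mid k) \bigr)^{2}
+ \lambda \sum_{p=0}^{N_u - 1} \bigl( \Delta u(k+p \mid k) \bigr)^{2}.
\]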
14

Gustafson, Sven-Åke. "Investigating semi-infinite programs using penalty functions and Lagrangian methods." Journal of the Australian Mathematical Society. Series B. Applied Mathematics 28, no. 2 (October 1986): 158–69. http://dx.doi.org/10.1017/s0334270000005270.

Abstract:
In this paper the relations between semi-infinite programs and optimisation problems with finitely many variables and constraints are reviewed. Two classes of convex semi-infinite programs are defined: one based on the fact that a convex set may be represented as the intersection of closed halfspaces, while the other class is defined using the representation of the elements of a convex set as convex combinations of points and directions. An extension to nonconvex problems is given. A common technique for solving a semi-infinite program computationally is to derive necessary conditions for optimality in the form of a nonlinear system with finitely many equations and unknowns. In the three-phase algorithm, this system is constructed from the optimal solution of a discretised version of the given semi-infinite program, i.e. a problem with finitely many variables and constraints. The system is solved numerically, often by means of some linearisation method. One option is to use a direct analogue of the familiar SOLVER method.
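
A semi-infinite program in the sense used here has finitely many variables but infinitely many constraints; a generic form (our notation) is

\[
\min_{x \in \mathbb{R}^{n}} f(x)
\quad \text{subject to} \quad g(x, t) \le 0 \ \text{ for all } t \in T,
\]

where \(T\) is an infinite (typically compact) index set. The discretised version replaces \(T\) by a finite grid \(T_{m} \subset T\), which yields an ordinary nonlinear program with finitely many constraints.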
15

Immanuel Selvakumar, A., and K. Thanushkodi. "Comment: Efficient evolutionary strategy optimisation procedure to solve the nonconvex economic dispatch problem with generator constraints." IET Generation, Transmission & Distribution 1, no. 2 (2007): 364. http://dx.doi.org/10.1049/iet-gtd:20060015.

16

Pereira-Neto, A., C. Unsihuay, and O. R. Saavedra. "Reply: Concerning the comments on ‘Efficient evolutionary strategy optimisation procedure to solve the nonconvex economic dispatch problem with generator constraints’." IET Generation, Transmission & Distribution 1, no. 2 (2007): 366. http://dx.doi.org/10.1049/iet-gtd:20060469.

17

Riedl, Konstantin. "Leveraging memory effects and gradient information in consensus-based optimisation: On global convergence in mean-field law." European Journal of Applied Mathematics, October 20, 2023, 1–32. http://dx.doi.org/10.1017/s0956792523000293.

Abstract:
In this paper, we study consensus-based optimisation (CBO), a versatile, flexible and customisable optimisation method suitable for performing nonconvex and nonsmooth global optimisations in high dimensions. CBO is a multi-particle metaheuristic, which is effective in various applications and at the same time amenable to theoretical analysis thanks to its minimalistic design. The underlying dynamics, however, is flexible enough to incorporate different mechanisms widely used in evolutionary computation and machine learning, as we show by analysing a variant of CBO which makes use of memory effects and gradient information. We rigorously prove that this dynamics converges to a global minimiser of the objective function in mean-field law for a vast class of functions under minimal assumptions on the initialisation of the method. The proof in particular reveals how to leverage further, in some applications advantageous, forces in the dynamics without losing provable global convergence. To demonstrate the benefit of the memory effects and gradient information investigated here, we present numerical evidence for the superiority of this CBO variant in applications such as machine learning and compressed sensing, which en passant widens the scope of applications of CBO.
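
To make the baseline dynamics concrete, the following Python sketch implements plain CBO, i.e. the memoryless, gradient-free variant that this paper extends. The parameter values and the Euler–Maruyama discretisation details are our own illustrative choices, not the paper's.

import numpy as np

def cbo_minimise(f, dim, n=100, lam=1.0, sigma=0.8, alpha=30.0,
                 dt=0.01, steps=2000, seed=0):
    """Plain consensus-based optimisation: particles drift towards a
    Gibbs-weighted consensus point and diffuse anisotropically."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-3, 3, size=(n, dim))        # particle positions
    for _ in range(steps):
        fx = np.apply_along_axis(f, 1, X)
        w = np.exp(-alpha * (fx - fx.min()))     # shifted for numerical stability
        v = w @ X / w.sum()                      # consensus point
        drift = -lam * (X - v) * dt
        noise = sigma * (X - v) * np.sqrt(dt) * rng.standard_normal(X.shape)
        X = X + drift + noise
    return v

# usage: the nonconvex, multimodal Rastrigin function in 2D
rastrigin = lambda x: 10 * x.size + float(np.sum(x**2 - 10 * np.cos(2 * np.pi * x)))
print(cbo_minimise(rastrigin, dim=2))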
18

Riis, Erlend S., Matthias J. Ehrhardt, G. R. W. Quispel, and Carola-Bibiane Schönlieb. "A Geometric Integration Approach to Nonsmooth, Nonconvex Optimisation." Foundations of Computational Mathematics, July 29, 2021. http://dx.doi.org/10.1007/s10208-020-09489-2.

Abstract:
The optimisation of nonsmooth, nonconvex functions without access to gradients is a particularly challenging problem that is frequently encountered, for example in model parameter optimisation problems. Bilevel optimisation of parameters is a standard setting in areas such as variational regularisation problems and supervised machine learning. We present efficient and robust derivative-free methods called randomised Itoh–Abe methods. These are generalisations of the Itoh–Abe discrete gradient method, a well-known scheme from geometric integration, which has previously only been considered in the smooth setting. We demonstrate that the method and its favourable energy dissipation properties are well defined in the nonsmooth setting. Furthermore, we prove that whenever the objective function is locally Lipschitz continuous, the iterates almost surely converge to a connected set of Clarke stationary points. We present an implementation of the methods and apply it to various test problems. The numerical results indicate that the randomised Itoh–Abe methods can be superior to state-of-the-art derivative-free optimisation methods in solving nonsmooth problems while still remaining competitive in terms of efficiency.
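
The Itoh–Abe discrete gradient update that underlies these methods replaces the gradient by difference quotients and leads to one scalar implicit equation per coordinate: the step d along coordinate i must satisfy f(x + d e_i) − f(x) = −d²/τ, which enforces monotone energy dissipation. The following Python sketch of a randomised coordinate version is our own illustration, with a simple bracketing root solve and illustrative parameters, not the authors' implementation.

import numpy as np
from scipy.optimize import brentq

def itoh_abe_step(f, x, i, tau=1.0, span=1.0, eps=1e-8):
    """Solve f(x + d*e_i) - f(x) = -d**2/tau for a nonzero step d."""
    e = np.zeros_like(x)
    e[i] = 1.0
    phi = lambda d: f(x + d * e) - f(x) + d**2 / tau
    for a, b in [(eps, span), (-span, -eps)]:    # try both directions
        if phi(a) * phi(b) < 0:
            return x + brentq(phi, a, b) * e
    return x  # no admissible nonzero root bracketed; leave coordinate unchanged

def randomised_itoh_abe(f, x0, tau=1.0, iters=500, seed=0):
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        i = int(rng.integers(x.size))            # random coordinate choice
        x = itoh_abe_step(f, x, i, tau)
    return x

# usage: a nonsmooth, nonconvex test function
f = lambda x: abs(x[0]) + 0.1 * abs(x[1]) * (2 + np.sin(x[0]))
print(randomised_itoh_abe(f, [2.0, -3.0]))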
19

Mafakheri, Behnam, Jonathan H. Manton, and Iman Shames. "On Distributed Nonconvex Optimisation Via Modified ADMM." IEEE Control Systems Letters, 2023, 1. http://dx.doi.org/10.1109/lcsys.2023.3341100.

20

Sturm, Kevin. "First-order differentiability properties of a class of equality constrained optimal value functions with applications." Journal of Nonsmooth Analysis and Optimization, November 25, 2020. http://dx.doi.org/10.46298/jnsao-2020-6034.

Abstract:
In this paper we study the right differentiability of a parametric infimum function over a parametric set defined by equality constraints. We present a new theorem with sufficient conditions for the right differentiability with respect to the parameter. Target applications are nonconvex objective functions with equality constraints arising in optimal control and shape optimisation. The theorem makes use of the averaged adjoint approach in conjunction with the variational approach of Kunisch, Ito and Peichl. We provide two examples of our abstract result: (a) a shape optimisation problem involving a semilinear partial differential equation which exhibits infinitely many solutions, (b) a finite dimensional quadratic function subject to a nonlinear equation.
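
In generic notation (ours, not the paper's), the object of study is an optimal value function of the form

\[
v(p) = \inf \bigl\{ f(p, x) \; : \; x \in X, \; g(p, x) = 0 \bigr\},
\]

and the theorem gives sufficient conditions under which the one-sided derivative \(\lim_{t \searrow 0} \bigl( v(p + t q) - v(p) \bigr) / t\) exists.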
21

Liu, Chao, Yingjie Ma, Dongda Zhang, and Jie Li. "A feasible path-based branch and bound algorithm for strongly nonconvex MINLP problems." Frontiers in Chemical Engineering 4 (September 19, 2022). http://dx.doi.org/10.3389/fceng.2022.983162.

Abstract:
In this paper, a feasible path-based branch and bound (B&B) algorithm is proposed to solve mixed-integer nonlinear programming problems of a highly nonconvex nature through the integration of the previously proposed hybrid feasible-path optimisation algorithm with the branch and bound method. The main advantage of this novel algorithm is that our previously proposed hybrid steady-state and time-relaxation-based optimisation algorithm is employed to solve a nonlinear programming (NLP) subproblem at each node during B&B. The solution from a parent node in B&B is used to initialise the NLP subproblems at the child nodes to improve computational efficiency. This approach makes it possible to circumvent complex initialisation procedures and to overcome difficulties in the convergence of process simulation. The capability of the proposed algorithm is illustrated by several process synthesis and intensification problems using rigorous models.
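
Structurally, the warm-starting idea can be sketched as a generic B&B loop in which each child node's NLP subproblem is initialised from the solution of its parent. In the Python sketch below, the solve_nlp callback, the data structures and the branching rule are hypothetical placeholders, and valid pruning is simply assumed; in the paper, obtaining reliable subproblem solutions is precisely where the feasible-path machinery comes in.

import heapq

def branch_and_bound(solve_nlp, n_bin, tol=1e-6):
    """solve_nlp(fixed, warm_start) -> (obj, x) or None if infeasible,
    where `fixed` maps binary-variable indices to 0/1 and `warm_start`
    is the parent solution used to initialise the NLP (hypothetical)."""
    best_obj, best_x = float("inf"), None
    root = solve_nlp({}, None)
    if root is None:
        return best_obj, best_x
    heap, counter = [(root[0], 0, {}, root[1])], 1
    while heap:
        bound, _, fixed, x_parent = heapq.heappop(heap)
        if bound >= best_obj - tol:
            continue                              # prune by bound
        frac = [i for i in range(n_bin)
                if i not in fixed and tol < x_parent[i] < 1 - tol]
        if not frac:                              # integer feasible: new incumbent
            best_obj, best_x = bound, x_parent
            continue
        for v in (0, 1):                          # branch on first fractional binary
            child = solve_nlp({**fixed, frac[0]: v}, x_parent)  # warm start
            if child is not None:
                heapq.heappush(heap, (child[0], counter,
                                      {**fixed, frac[0]: v}, child[1]))
                counter += 1
    return best_obj, best_x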
22

Janakiraman, Bhavithra, S. Prabu, M. Senthil Vadivu, and Dhineshkumar Krishnan. "Detection of ovarian follicles cancer cells using hybrid optimization technique with deep convolutional neural network classifier." Journal of Intelligent & Fuzzy Systems, July 13, 2023, 1–16. http://dx.doi.org/10.3233/jifs-231322.

Abstract:
Early detection of a life-threatening disease such as ovarian cancer is critically important. Existing denoising approaches face several difficulties: it is hard to achieve high performance without sacrificing computational efficiency; the results of the denoising process are often not as good as they could be; the underlying models are nonconvex and involve several manually chosen parameters, which leaves some leeway to boost denoising performance; and the methods generally involve a complex optimisation problem in the testing stage. The authors develop their own version of the DnCNN deep learning model, a discriminative learning technique, with the goal of eliminating the iterative optimisation procedure at the testing stage. A deep CNN model is recommended, and its efficacy is evaluated by comparison with traditional filters and a pre-trained DnCNN. The deep CNN strategy, which is built on convolutional neural networks trained on data, is shown to be the best of the compared solutions for minimising noise when an image is corrupted by Gaussian or speckle noise with known or unknown noise levels. The deep CNN achieves a 98.45% accuracy rate during testing, with an error rate of just 0.002%.
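
For readers unfamiliar with DnCNN, the architecture referred to is, in its standard form, a feed-forward convolutional network trained to predict the noise residual, so that denoising at test time is a single forward pass with no iterative optimisation. Below is a minimal PyTorch sketch of that standard architecture; the depth and width are common defaults, not necessarily the paper's configuration.

import torch
import torch.nn as nn

class DnCNN(nn.Module):
    """DnCNN-style residual denoiser: the network estimates the noise,
    and the clean image is recovered as input minus that estimate."""
    def __init__(self, channels=1, features=64, depth=17):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1),
                  nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1, bias=False),
                       nn.BatchNorm2d(features),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, channels, 3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return x - self.net(x)   # residual learning

# usage: denoise a batch of noisy single-channel images
model = DnCNN()
noisy = torch.randn(4, 1, 64, 64)
clean_estimate = model(noisy)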
