Journal articles on the topic "Optimisation nonconvexe"

Consult the 22 best journal articles on the topic "Optimisation nonconvexe".

An "Add to bibliography" button is available next to every work in the list. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication in ".pdf" format and read the work's abstract online, provided the relevant details are available in the metadata.

Browse journal articles from many different disciplines and compile your bibliography correctly.

1

Smith, E. "Global Optimisation of Nonconvex MINLPs". Computers & Chemical Engineering 21, no. 1-2 (1997): S791–S796. http://dx.doi.org/10.1016/s0098-1354(97)00146-4.

2

Smith, Edward M. B., and Constantinos C. Pantelides. "Global optimisation of nonconvex MINLPs". Computers & Chemical Engineering 21 (May 1997): S791–S796. http://dx.doi.org/10.1016/s0098-1354(97)87599-0.

3

Martínez-Legaz, J. E., and A. Seeger. "A formula on the approximate subdifferential of the difference of convex functions". Bulletin of the Australian Mathematical Society 45, no. 1 (February 1992): 37–41. http://dx.doi.org/10.1017/s0004972700036984.

Abstract:
We give a formula on the ε-subdifferential of the difference of two convex functions. As a by-product of this formula, one recovers a recent result of Hiriart-Urruty, namely, a necessary and sufficient condition for global optimality in nonconvex optimisation.
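
For reference, the Hiriart-Urruty result recovered here is usually stated as follows; this is the standard formulation rather than a quotation from the paper.

```latex
% Let f = g - h with g, h : \mathbb{R}^n \to \mathbb{R} proper, convex, lsc.
% Global optimality (Hiriart-Urruty):
\[
  \bar{x} \ \text{is a global minimiser of } g - h
  \quad\Longleftrightarrow\quad
  \partial_{\varepsilon} h(\bar{x}) \subseteq \partial_{\varepsilon} g(\bar{x})
  \ \text{ for all } \varepsilon > 0,
\]
% where the epsilon-subdifferential is
\[
  \partial_{\varepsilon} g(x) =
  \{\, s : g(y) \ge g(x) + \langle s,\, y - x \rangle - \varepsilon
      \ \ \forall y \in \mathbb{R}^n \,\}.
\]
```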
4

Scott, Carlton H., and Thomas R. Jefferson. "Duality for linear multiplicative programs". ANZIAM Journal 46, no. 3 (January 2005): 393–97. http://dx.doi.org/10.1017/s1446181100008336.

Abstract:
Linear multiplicative programs are an important class of nonconvex optimisation problems that are currently the subject of considerable research as regards the development of computational algorithms. In this paper, we show that mathematical programs of this nature are, in fact, a special case of more general signomial programming, which in turn implies that research on this latter problem may be valuable in analysing and solving linear multiplicative programs. In particular, we use signomial programming duality theory to establish a dual program for a nonconvex linear multiplicative program. An interpretation of the dual variables is given.
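
For context, a linear multiplicative program is commonly written in the generic form below (my notation, not the paper's); each factor is affine and assumed positive on the feasible set, so the objective expands into a signomial, which is what embeds these problems in signomial programming.

```latex
% Generic linear multiplicative program (LMP):
\[
  \min_{x \in \mathbb{R}^n} \ \prod_{j=1}^{p} \bigl( c_j^{\top} x + d_j \bigr)
  \quad \text{s.t.} \quad A x \le b, \quad x \ge 0,
\]
% with c_j^T x + d_j > 0 assumed for every feasible x.
```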
5

SULTANOVA, NARGIZ. "AGGREGATE SUBGRADIENT SMOOTHING METHODS FOR LARGE SCALE NONSMOOTH NONCONVEX OPTIMISATION AND APPLICATIONS". Bulletin of the Australian Mathematical Society 91, no. 3 (16 March 2015): 523–24. http://dx.doi.org/10.1017/s0004972715000143.

6

ALI, ELAF J. "CANONICAL DUAL FINITE ELEMENT METHOD FOR SOLVING NONCONVEX MECHANICS AND TOPOLOGY OPTIMISATION PROBLEMS". Bulletin of the Australian Mathematical Society 101, no. 1 (25 November 2019): 172–73. http://dx.doi.org/10.1017/s0004972719001205.

7

Thi, Hoai An Le, Hoai Minh Le, and Tao Pham Dinh. "Fuzzy clustering based on nonconvex optimisation approaches using difference of convex (DC) functions algorithms". Advances in Data Analysis and Classification 1, no. 2 (25 July 2007): 85–104. http://dx.doi.org/10.1007/s11634-007-0011-2.

8

Ma, Kai, Congshan Wang, Jie Yang, Chenliang Yuan, and Xinping Guan. "A pricing strategy for demand-side regulation with direct load control: a nonconvex optimisation approach". International Journal of System Control and Information Processing 2, no. 1 (2017): 74. http://dx.doi.org/10.1504/ijscip.2017.084264.

9

Ma, Kai, Congshan Wang, Jie Yang, Chenliang Yuan, and Xinping Guan. "A pricing strategy for demand-side regulation with direct load control: a nonconvex optimisation approach". International Journal of System Control and Information Processing 2, no. 1 (2017): 74. http://dx.doi.org/10.1504/ijscip.2017.10005200.

10

Smith, E. M. B., and C. C. Pantelides. "A symbolic reformulation/spatial branch-and-bound algorithm for the global optimisation of nonconvex MINLPs". Computers & Chemical Engineering 23, no. 4-5 (May 1999): 457–78. http://dx.doi.org/10.1016/s0098-1354(98)00286-5.

11

Pereira-Neto, A., C. Unsihuay, and O. R. Saavedra. "Efficient evolutionary strategy optimisation procedure to solve the nonconvex economic dispatch problem with generator constraints". IEE Proceedings - Generation, Transmission and Distribution 152, no. 5 (2005): 653. http://dx.doi.org/10.1049/ip-gtd:20045287.

12

Shaqfa, Mahmoud, and Katrin Beyer. "Pareto-like sequential sampling heuristic for global optimisation". Soft Computing 25, no. 14 (29 May 2021): 9077–96. http://dx.doi.org/10.1007/s00500-021-05853-8.

Abstract:
In this paper, we propose a simple global optimisation algorithm inspired by Pareto’s principle. This algorithm samples most of its solutions within prominent search domains and is equipped with a self-adaptive mechanism to control the dynamic tightening of the prominent domains while the greediness of the algorithm increases over time (iterations). Unlike traditional metaheuristics, the proposed method has no direct mutation- or crossover-like operations. It depends solely on the sequential random sampling that can be used in diversification and intensification processes while keeping the information-flow between generations and the structural bias at a minimum. By using a simple topology, the algorithm avoids premature convergence by sampling new solutions every generation. A simple theoretical derivation revealed that the exploration of this approach is unbiased and the rate of the diversification is constant during the runtime. The trade-off balance between the diversification and the intensification is explained theoretically and experimentally. This proposed approach has been benchmarked against standard optimisation problems as well as a selected set of simple and complex engineering applications. We used 26 standard benchmarks with different properties that cover most of the optimisation problems’ nature, three traditional engineering problems, and one real complex engineering problem from the state-of-the-art literature. The algorithm performs well in finding global minima for nonconvex and multimodal functions, especially with high dimensional problems, and was found to be very competitive in comparison with recent algorithmic proposals. Moreover, the algorithm outperforms and scales better than recent algorithms when it is benchmarked under a limited number of iterations for the composite CEC2017 problems. The design of this algorithm is kept simple so it can be easily coupled or hybridised with other search paradigms. The code of the algorithm is provided in C++14, Python3.7, and Octave (Matlab).
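
The mechanism described above, drawing most samples inside a gradually tightening "prominent" region around the incumbent while the remainder keep exploring globally, can be sketched as follows. This is a minimal illustration of the idea, not the authors' published implementation; the 80/20 split, the linear shrink schedule, and the sphere test function are illustrative choices.

```python
import numpy as np

def pareto_like_sampling(f, lb, ub, pop=40, iters=200, alpha=0.8, seed=0):
    """Minimal sketch: each generation, a fraction `alpha` of samples is
    drawn inside a shrinking box around the best point (intensification);
    the rest are drawn uniformly over the full domain (diversification)."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = lb.size
    best_x = rng.uniform(lb, ub)
    best_f = f(best_x)
    for k in range(iters):
        shrink = 1.0 - k / iters            # prominent box tightens over time
        half = 0.5 * shrink * (ub - lb)
        lo = np.clip(best_x - half, lb, ub)
        hi = np.clip(best_x + half, lb, ub)
        n_prom = int(alpha * pop)
        X = np.vstack([rng.uniform(lo, hi, (n_prom, dim)),         # prominent
                       rng.uniform(lb, ub, (pop - n_prom, dim))])  # global
        fX = np.apply_along_axis(f, 1, X)
        i = np.argmin(fX)
        if fX[i] < best_f:
            best_x, best_f = X[i], fX[i]
    return best_x, best_f

# usage: 10-dimensional sphere function
x, fx = pareto_like_sampling(lambda z: float(np.sum(z ** 2)),
                             lb=-5 * np.ones(10), ub=5 * np.ones(10))
print(x, fx)
```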
13

Ławryńczuk, Maciej. "Efficient Nonlinear Predictive Control Based on Structured Neural Models". International Journal of Applied Mathematics and Computer Science 19, no. 2 (1 June 2009): 233–46. http://dx.doi.org/10.2478/v10006-009-0019-1.

Abstract:
This paper describes structured neural models and a computationally efficient (suboptimal) nonlinear Model Predictive Control (MPC) algorithm based on such models. The structured neural model has the ability to make future predictions of the process without being used recursively. Thanks to the nature of the model, the prediction error is not propagated. This is particularly important in the case of noise and underparameterisation. Structured models have much better long-range prediction accuracy than the corresponding classical Nonlinear Auto Regressive with eXternal input (NARX) models. The described suboptimal MPC algorithm requires solving online only a quadratic programming problem. Nevertheless, it gives closed-loop control performance similar to that obtained in fully-fledged nonlinear MPC, which hinges on online nonconvex optimisation. In order to demonstrate the advantages of structured models as well as the accuracy of the suboptimal MPC algorithm, a polymerisation reactor is studied.
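
The computational point, replacing online nonconvex optimisation by a quadratic problem, is easiest to see in the unconstrained case, where the quadratic programme has a closed-form solution. The sketch below is a generic GPC-style step, not Ławryńczuk's algorithm; `G` stands for a dynamic matrix assumed to come from linearising the (neural) model at the current operating point.

```python
import numpy as np

def suboptimal_mpc_step(G, y_free, y_ref, lam=0.1):
    """One MPC step with a linearised prediction y = y_free + G @ du:
    minimise ||y_ref - y||^2 + lam * ||du||^2 over the control moves du.
    Without inequality constraints the QP reduces to least squares."""
    n_u = G.shape[1]
    H = G.T @ G + lam * np.eye(n_u)           # positive definite Hessian
    du = np.linalg.solve(H, G.T @ (y_ref - y_free))
    return du[0]                              # apply only the first move

# usage with an illustrative 3x2 dynamic matrix
G = np.array([[0.4, 0.0], [0.7, 0.4], [0.9, 0.7]])
print(suboptimal_mpc_step(G, y_free=np.zeros(3), y_ref=np.ones(3)))
```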
14

Gustafson, Sven-Åke. "Investigating semi-infinite programs using penalty functions and Lagrangian methods". Journal of the Australian Mathematical Society. Series B. Applied Mathematics 28, no. 2 (October 1986): 158–69. http://dx.doi.org/10.1017/s0334270000005270.

Abstract:
In this paper the relations between semi-infinite programs and optimisation problems with finitely many variables and constraints are reviewed. Two classes of convex semi-infinite programs are defined, one based on the fact that a convex set may be represented as the intersection of closed halfspaces, while the other class is defined using the representation of the elements of a convex set as convex combinations of points and directions. Extension to nonconvex problems is given. A common technique of solving a semi-infinite program computationally is to derive necessary conditions for optimality in the form of a nonlinear system of equations with finitely many equations and unknowns. In the three-phase algorithm, this system is constructed from the optimal solution of a discretised version of the given semi-infinite program, i.e. a problem with finitely many variables and constraints. The system is solved numerically, often by means of some linearisation method. One option is to use a direct analog of the familiar SOLVER method.
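
The discretisation step described above is easy to illustrate: a semi-infinite constraint family g(x, t) ≤ 0 for all t in T is replaced by finitely many constraints on a grid of T, giving a problem with finitely many variables and constraints. The toy problem below (minimise x1 + x2/2 subject to x1 + t·x2 ≥ t² for all t in [0, 1]) is my own illustration, not an example from the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Semi-infinite program:  min x1 + 0.5*x2
#                         s.t. x1 + t*x2 >= t**2  for all t in [0, 1].
# Discretise t on a grid; each grid point yields one linear constraint.
t = np.linspace(0.0, 1.0, 101)
A_ub = np.column_stack([-np.ones_like(t), -t])   # -(x1 + t*x2) <= -t**2
b_ub = -t ** 2
res = linprog(c=[1.0, 0.5], A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * 2)
print(res.x)   # discretised optimum, here (0, 1); refine the grid to tighten
```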
15

Immanuel Selvakumar, A., and K. Thanushkodi. "Comment: Efficient evolutionary strategy optimisation procedure to solve the nonconvex economic dispatch problem with generator constraints". IET Generation, Transmission & Distribution 1, no. 2 (2007): 364. http://dx.doi.org/10.1049/iet-gtd:20060015.

16

Pereira-Neto, A., C. Unsihuay, and O. R. Saavedra. "Reply: Concerning the comments on ‘Efficient evolutionary strategy optimisation procedure to solve the nonconvex economic dispatch problem with generator constraints’". IET Generation, Transmission & Distribution 1, no. 2 (2007): 366. http://dx.doi.org/10.1049/iet-gtd:20060469.

17

Riedl, Konstantin. "Leveraging memory effects and gradient information in consensus-based optimisation: On global convergence in mean-field law". European Journal of Applied Mathematics, 20 October 2023, 1–32. http://dx.doi.org/10.1017/s0956792523000293.

Abstract:
In this paper, we study consensus-based optimisation (CBO), a versatile, flexible and customisable optimisation method suitable for performing nonconvex and nonsmooth global optimisations in high dimensions. CBO is a multi-particle metaheuristic, which is effective in various applications and at the same time amenable to theoretical analysis thanks to its minimalistic design. The underlying dynamics, however, is flexible enough to incorporate different mechanisms widely used in evolutionary computation and machine learning, as we show by analysing a variant of CBO which makes use of memory effects and gradient information. We rigorously prove that this dynamics converges to a global minimiser of the objective function in mean-field law for a vast class of functions under minimal assumptions on the initialisation of the method. The proof in particular reveals how to leverage further forces in the dynamics, advantageous in some applications, without losing provable global convergence. To demonstrate the benefit of the herein investigated memory effects and gradient information in certain applications, we present numerical evidence for the superiority of this CBO variant in applications such as machine learning and compressed sensing, which en passant widen the scope of applications of CBO.
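
For readers meeting CBO for the first time, here is a minimal numpy sketch of the basic, memoryless and gradient-free dynamics (an Euler-Maruyama discretisation); the memory effects and gradient information analysed in the paper are deliberately omitted, and all parameter values are illustrative.

```python
import numpy as np

def cbo_minimise(f, dim, n=100, steps=2000, dt=0.01,
                 lam=1.0, sigma=1.0, alpha=30.0, seed=0):
    """Basic consensus-based optimisation (CBO): particles drift towards a
    softmin-weighted consensus point, with isotropic noise whose magnitude
    scales with each particle's distance to the consensus."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-3, 3, (n, dim))                 # particle ensemble
    for _ in range(steps):
        fX = np.apply_along_axis(f, 1, X)
        w = np.exp(-alpha * (fX - fX.min()))         # softmin weights
        v = (w[:, None] * X).sum(axis=0) / w.sum()   # consensus point
        D = X - v
        X = X - lam * D * dt \
              + sigma * np.linalg.norm(D, axis=1, keepdims=True) \
                * np.sqrt(dt) * rng.standard_normal((n, dim))
    return v

# usage: Rastrigin function in 5 dimensions (global minimum at the origin)
rastrigin = lambda x: 10 * x.size + float(np.sum(x**2 - 10 * np.cos(2 * np.pi * x)))
print(cbo_minimise(rastrigin, dim=5))
```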
18

Riis, Erlend S., Matthias J. Ehrhardt, G. R. W. Quispel, and Carola-Bibiane Schönlieb. "A Geometric Integration Approach to Nonsmooth, Nonconvex Optimisation". Foundations of Computational Mathematics, 29 July 2021. http://dx.doi.org/10.1007/s10208-020-09489-2.

Abstract:
The optimisation of nonsmooth, nonconvex functions without access to gradients is a particularly challenging problem that is frequently encountered, for example in model parameter optimisation problems. Bilevel optimisation of parameters is a standard setting in areas such as variational regularisation problems and supervised machine learning. We present efficient and robust derivative-free methods called randomised Itoh–Abe methods. These are generalisations of the Itoh–Abe discrete gradient method, a well-known scheme from geometric integration, which has previously only been considered in the smooth setting. We demonstrate that the method and its favourable energy dissipation properties are well defined in the nonsmooth setting. Furthermore, we prove that whenever the objective function is locally Lipschitz continuous, the iterates almost surely converge to a connected set of Clarke stationary points. We present an implementation of the methods, and apply it to various test problems. The numerical results indicate that the randomised Itoh–Abe methods can be superior to state-of-the-art derivative-free optimisation methods in solving nonsmooth problems while still remaining competitive in terms of efficiency.
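
A minimal sketch of the plain, coordinate-wise Itoh–Abe discrete gradient update, of which the paper's randomised methods are generalisations. The bracketing heuristic for the scalar equation is an illustrative choice, not the authors' solver; the defining property is that any nonzero root yields f(x + αeᵢ) − f(x) = −α²/τ ≤ 0, so the energy dissipates without any gradient evaluations.

```python
import numpy as np
from scipy.optimize import brentq

def itoh_abe_sweep(f, x, tau=0.5):
    """One coordinate sweep of the Itoh-Abe discrete gradient method.
    For coordinate i we look for a nonzero alpha solving
        (f(x + alpha*e_i) - f(x)) / alpha + alpha / tau = 0,
    which enforces f(x + alpha*e_i) - f(x) = -alpha**2 / tau <= 0."""
    x = np.asarray(x, float).copy()
    for i in range(x.size):
        fx = f(x)

        def h(a, i=i, fx=fx):
            e = np.zeros_like(x)
            e[i] = a
            return (f(x + e) - fx) / a + a / tau

        for s in (1.0, -1.0):              # probe both directions
            a, b = 1e-8 * s, s
            while h(a) * h(b) > 0 and abs(b) < 64:
                b *= 2.0                   # expand bracket until sign change
            if h(a) * h(b) < 0:
                x[i] += brentq(h, min(a, b), max(a, b))
                break                      # coordinate updated, move on
    return x

# usage: a few sweeps on a simple nonsmooth test function
f = lambda z: abs(z[0] - 1.0) + (z[1] + 2.0) ** 2
x = np.array([5.0, 5.0])
for _ in range(25):
    x = itoh_abe_sweep(f, x)
print(x, f(x))    # should approach [1, -2], the global minimiser
```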
19

Mafakheri, Behnam, Jonathan H. Manton i Iman Shames. "On Distributed Nonconvex Optimisation Via Modified ADMM". IEEE Control Systems Letters, 2023, 1. http://dx.doi.org/10.1109/lcsys.2023.3341100.

20

Sturm, Kevin. "First-order differentiability properties of a class of equality constrained optimal value functions with applications". Journal of Nonsmooth Analysis and Optimization, 25 November 2020. http://dx.doi.org/10.46298/jnsao-2020-6034.

Abstract:
In this paper we study the right differentiability of a parametric infimum function over a parametric set defined by equality constraints. We present a new theorem with sufficient conditions for the right differentiability with respect to the parameter. Target applications are nonconvex objective functions with equality constraints arising in optimal control and shape optimisation. The theorem makes use of the averaged adjoint approach in conjunction with the variational approach of Kunisch, Ito and Peichl. We provide two examples of our abstract result: (a) a shape optimisation problem involving a semilinear partial differential equation which exhibits infinitely many solutions, (b) a finite dimensional quadratic function subject to a nonlinear equation.
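
In symbols, the object of study is the right derivative of an equality-constrained optimal value function; the notation below is generic rather than the paper's.

```latex
% Parametric optimal value function with equality constraints:
\[
  v(t) = \inf \{\, f(x, t) : x \in E,\ g(x, t) = 0 \,\},
  \qquad
  dv(0^{+}) = \lim_{t \searrow 0} \frac{v(t) - v(0)}{t},
\]
% and the paper gives sufficient conditions for dv(0^+) to exist,
% computed via an averaged adjoint state.
```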
21

Liu, Chao, Yingjie Ma, Dongda Zhang, and Jie Li. "A feasible path-based branch and bound algorithm for strongly nonconvex MINLP problems". Frontiers in Chemical Engineering 4 (19 September 2022). http://dx.doi.org/10.3389/fceng.2022.983162.

Abstract:
In this paper, a feasible path-based branch and bound (B&B) algorithm is proposed to solve mixed-integer nonlinear programming problems of a highly nonconvex nature through integration of the previously proposed hybrid feasible-path optimisation algorithm and the branch and bound method. The main advantage of this novel algorithm is that our previously proposed hybrid steady-state and time-relaxation-based optimisation algorithm is employed to solve a nonlinear programming (NLP) subproblem at each node during B&B. The solution from a parent node in B&B is used to initialise the NLP subproblems at the child nodes to improve computational efficiency. This approach allows circumventing a complex initialisation procedure and overcoming difficulties in the convergence of process simulation. The capability of the proposed algorithm is illustrated by several process synthesis and intensification problems using rigorous models.
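
The warm-starting idea, each child node's NLP being initialised from its parent's relaxed solution, can be separated from the feasible-path machinery and shown on a toy B&B. The sketch below is generic (SLSQP relaxations, first-fractional branching, an illustrative test problem); the paper's algorithm additionally solves each node through its hybrid steady-state and time-relaxation simulation layer.

```python
import numpy as np
from scipy.optimize import minimize

def bnb_minlp(obj, x0, int_idx, bounds, tol=1e-6):
    """Tiny branch-and-bound sketch: each node solves an NLP relaxation
    (SLSQP) and passes its solution down as the child nodes' warm start."""
    best_x, best_f = None, np.inf
    stack = [(list(bounds), np.asarray(x0, float))]    # (box, warm start)
    while stack:
        box, xw = stack.pop()
        lo = np.array([b[0] for b in box], float)
        hi = np.array([b[1] for b in box], float)
        res = minimize(obj, np.clip(xw, lo, hi), bounds=box, method="SLSQP")
        if not res.success or res.fun >= best_f:
            continue                                   # prune this node
        frac = [i for i in int_idx if abs(res.x[i] - round(res.x[i])) > tol]
        if not frac:                                   # integer feasible
            best_x, best_f = res.x, res.fun
            continue
        i = frac[0]                                    # branch on a fractional var
        down, up = list(box), list(box)
        down[i] = (box[i][0], float(np.floor(res.x[i])))
        up[i] = (float(np.ceil(res.x[i])), box[i][1])
        stack += [(down, res.x), (up, res.x)]          # warm-start the children
    return best_x, best_f

# usage: minimise a smooth nonlinear objective with x1 restricted to integers
obj = lambda x: (x[0] - 2.3) ** 2 + (x[1] - 1.7) ** 2
print(bnb_minlp(obj, x0=[0.0, 0.0], int_idx=[1], bounds=[(0, 4), (0, 4)]))
```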
22

Janakiraman, Bhavithra, S. Prabu, M. Senthil Vadivu, and Dhineshkumar Krishnan. "Detection of ovarian follicles cancer cells using hybrid optimization technique with deep convolutional neural network classifier". Journal of Intelligent & Fuzzy Systems, 13 July 2023, 1–16. http://dx.doi.org/10.3233/jifs-231322.

Abstract:
Ovarian cancer is a life-threatening disease, which makes its reliable detection from ovarian follicle images especially important. Existing denoising approaches struggle to achieve high performance without sacrificing computational efficiency: the models are nonconvex, involve several manually chosen parameters, and generally require solving a complex optimisation problem at the testing stage. We therefore develop a variant of the DnCNN deep learning model, a discriminative learning technique, with the goal of eliminating the iterative optimisation procedure at evaluation time. A deep CNN model is recommended, and its efficacy is evaluated by comparison with traditional filters and a pre-trained DnCNN. Because it is a convolutional neural network trained directly on data, the deep CNN strategy proves the most effective at suppressing both Gaussian and speckle noise, at known or unknown noise levels. In testing, the deep CNN achieves an accuracy of 98.45% with an error rate of just 0.002%.
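
For reference, the DnCNN residual-learning architecture the abstract builds on is sketched below in PyTorch, assuming the commonly used configuration of 17 layers and 64 feature maps (Zhang et al.'s design); the paper's exact variant, training setup, and the downstream follicle classifier are not reproduced here.

```python
import torch
import torch.nn as nn

class DnCNN(nn.Module):
    """Standard DnCNN: the network predicts the noise residual, and the
    denoised image is obtained by subtracting it from the input."""
    def __init__(self, channels=1, depth=17, features=64):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1, bias=False),
                       nn.BatchNorm2d(features),
                       nn.ReLU(True)]
        layers += [nn.Conv2d(features, channels, 3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return x - self.net(x)     # residual learning: subtract predicted noise

# usage: denoise a random single-channel image
model = DnCNN()
noisy = torch.randn(1, 1, 64, 64)
print(model(noisy).shape)          # torch.Size([1, 1, 64, 64])
```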