Selection of scientific literature on the topic "Optimisation nonconvexe"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Contents
Consult the lists of current articles, books, theses, reports, and other scientific sources on the topic "Optimisation nonconvexe".
Next to every entry in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scientific publication as a PDF and read its online abstract, where these are available in the work's metadata.
Journal articles on the topic "Optimisation nonconvexe"
Smith, E. "Global Optimisation of Nonconvex MINLPs". Computers & Chemical Engineering 21, no. 1-2 (1997): S791–S796. http://dx.doi.org/10.1016/s0098-1354(97)00146-4.
Smith, Edward M. B., and Constantinos C. Pantelides. "Global optimisation of nonconvex MINLPs". Computers & Chemical Engineering 21 (May 1997): S791–S796. http://dx.doi.org/10.1016/s0098-1354(97)87599-0.
Martínez-Legaz, J. E., and A. Seeger. "A formula on the approximate subdifferential of the difference of convex functions". Bulletin of the Australian Mathematical Society 45, no. 1 (February 1992): 37–41. http://dx.doi.org/10.1017/s0004972700036984.
Scott, Carlton H., and Thomas R. Jefferson. "Duality for linear multiplicative programs". ANZIAM Journal 46, no. 3 (January 2005): 393–97. http://dx.doi.org/10.1017/s1446181100008336.
Sultanova, Nargiz. "Aggregate subgradient smoothing methods for large scale nonsmooth nonconvex optimisation and applications". Bulletin of the Australian Mathematical Society 91, no. 3 (16 March 2015): 523–24. http://dx.doi.org/10.1017/s0004972715000143.
Ali, Elaf J. "Canonical dual finite element method for solving nonconvex mechanics and topology optimisation problems". Bulletin of the Australian Mathematical Society 101, no. 1 (25 November 2019): 172–73. http://dx.doi.org/10.1017/s0004972719001205.
Le Thi, Hoai An, Hoai Minh Le, and Tao Pham Dinh. "Fuzzy clustering based on nonconvex optimisation approaches using difference of convex (DC) functions algorithms". Advances in Data Analysis and Classification 1, no. 2 (25 July 2007): 85–104. http://dx.doi.org/10.1007/s11634-007-0011-2.
Ma, Kai, Congshan Wang, Jie Yang, Chenliang Yuan, and Xinping Guan. "A pricing strategy for demand-side regulation with direct load control: a nonconvex optimisation approach". International Journal of System Control and Information Processing 2, no. 1 (2017): 74. http://dx.doi.org/10.1504/ijscip.2017.084264.
Smith, E. M. B., and C. C. Pantelides. "A symbolic reformulation/spatial branch-and-bound algorithm for the global optimisation of nonconvex MINLPs". Computers & Chemical Engineering 23, no. 4-5 (May 1999): 457–78. http://dx.doi.org/10.1016/s0098-1354(98)00286-5.
Dissertations on the topic "Optimisation nonconvexe"
Jerad, Sadok. "Approches du second ordre et d'ordre élevé pour l'optimisation nonconvexe avec variantes sans évaluation de la fonction objectif". Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSEP024.
Even though nonlinear optimization seems, a priori, to be a mature field, new minimization schemes are still being proposed or rediscovered for modern large-scale problems. As an example, and looking back over the last decade, we have seen a surge of first-order methods with different analyses, even though the theoretical limitations of such methods had been thoroughly discussed. This thesis explores two main lines of research in the field of nonconvex optimization, with a focus on second- and higher-order methods. In the first series of works, we focus on algorithms that do not compute function values and operate without knowledge of any parameters, since the most popular first-order methods currently in use fall into the latter category. We start by recasting the well-known Adagrad algorithm in a trust-region framework, and use that paradigm to study two first-order deterministic OFFO (Objective-Free Function Optimization) classes. To enable faster exact OFFO algorithms, we then propose a pth-order deterministic adaptive regularization method that avoids the computation of function values. This approach recovers the well-known convergence rate of the standard framework when searching for stationary points, while using significantly less information. In the second set of papers, we analyze adaptive algorithms in the more classical framework where function values are used to adapt parameters. We extend adaptive regularization methods to a specific class of Banach spaces by developing a Hölder gradient descent algorithm. In addition, we investigate a second-order algorithm that alternates between negative-curvature and Newton steps with a near-optimal convergence rate.
To handle large problems, we propose subspace versions of the algorithm that show promising numerical performance. Overall, this research covers a wide range of optimization techniques and provides valuable insights and contributions to both parameter-free and adaptive optimization algorithms for nonconvex functions. It also opens the door to subsequent theoretical developments and the introduction of faster numerical algorithms.
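The OFFO methods described in this thesis adapt their steps from gradient information alone, without ever evaluating the objective. As a rough illustration of that idea, here is plain Adagrad on a hypothetical two-well test function (our own toy example, not the thesis's trust-region variant):

```python
import numpy as np

def adagrad(grad, x0, eta=1.0, eps=1e-8, iters=500):
    """Adagrad: per-coordinate steps scaled by accumulated squared gradients.
    Note: only gradients are used -- the objective value is never evaluated."""
    x = np.asarray(x0, dtype=float)
    g2 = np.zeros_like(x)              # running sum of squared gradients
    for _ in range(iters):
        g = grad(x)
        g2 += g * g
        x = x - eta * g / (np.sqrt(g2) + eps)
    return x

# Hypothetical nonconvex test function f(x, y) = (x^2 - 1)^2 + y^2
grad = lambda v: np.array([4 * v[0] * (v[0] ** 2 - 1), 2 * v[1]])
x_star = adagrad(grad, [0.5, 1.0])
# x_star ends near a stationary point, here the minimiser (1, 0)
```

The per-coordinate denominator grows monotonically, so the steps shrink automatically and no step-size parameter has to be tuned against function values.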
Giagkiozis, Ioannis. "Nonconvex many-objective optimisation". Thesis, University of Sheffield, 2012. http://etheses.whiterose.ac.uk/3683/.
Wahid, Faisal. "Optimisation de la rivière : enchères à court terme de l'hydroélectricité sous incertitude". Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLX030.
The hydro-bidding problem is that of computing optimal offer policies to maximize the expected profit of a hydroelectric producer participating in an electricity market. It combines the decision-making processes of the trader and the hydro-dispatcher into one stochastic optimization problem. It is a sequential decision-making problem and can be formulated as a multistage stochastic program. These models can be difficult to solve when the value function is not concave. In this thesis, we study some of the limitations of the hydro-bidding problem and propose a new stochastic optimization method called the Mixed-Integer Dynamic Approximation Scheme (MIDAS). MIDAS solves nonconvex stochastic programs with monotonic value functions. It works in a similar fashion to Stochastic Dual Dynamic Programming (SDDP), but instead of using cutting planes, it uses step functions to create an outer approximation of the value function. MIDAS converges almost surely to a (T+1)ε-optimal policy for continuous state variables, and to the exact optimal policy for integer state variables. We use MIDAS to solve three types of nonconvex hydro-bidding problems. The first hydro-bidding model we solve has integer state variables due to discrete production states. In this model we demonstrate that MIDAS constructs offer policies that are better than those of SDDP. The next hydro-bidding model has a mean-reverting autoregressive price process instead of a Markov chain. The last hydro-bidding model incorporates headwater effects, where the power generation function depends on both the reservoir storage level and the turbine water flow. In all of these models, we demonstrate convergence of MIDAS in finitely many iterations. MIDAS takes significantly longer to converge than SDDP because of its mixed-integer programming (MIP) subproblems. For hydro-bidding models with continuous state variables, its computation time depends on the value of δ.
A larger δ reduces the computation time for convergence but also increases the optimality error ε. In order to speed up MIDAS, we introduce two heuristics. The first is a step-function selection heuristic, similar to the cut selection scheme in SDDP, which improves the solution time by up to 64%. The second iteratively solves the MIP subproblems in MIDAS using smaller MIPs rather than one large MIP, improving the solution time by up to 60%. Applying both heuristics, we were able to use MIDAS to solve a hydro-bidding problem for a 4-reservoir, 4-station hydro scheme with integer state variables.
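The step-function idea above can be illustrated in a few lines: if the value function is known to be nondecreasing, every sampled point yields an upper bound on all states to its left. The sketch below, with a hypothetical value function of our own choosing, builds such an outer approximation; it illustrates only the monotonicity argument, not the full MIDAS algorithm:

```python
def step_outer_approx(samples, upper_bound):
    """Upper (outer) approximation of a nondecreasing value function V
    from sampled pairs (s, V(s)): monotonicity gives V(x) <= V(s) for all s >= x."""
    def Q(x):
        bounds = [v for s, v in samples if x <= s]
        return min(bounds) if bounds else upper_bound  # fall back to a global bound
    return Q

# Hypothetical nondecreasing value function, sampled at a few points
V = lambda x: x ** 0.5
samples = [(s, V(s)) for s in [0.25, 1.0, 4.0]]
Q = step_outer_approx(samples, upper_bound=10.0)
# Q overestimates V everywhere and matches it at the sample points
```

Adding more samples tightens the staircase from above, which is the role played by the step functions that MIDAS accumulates in place of SDDP's cutting planes.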
Kleniati, Polyxeni M. "Decomposition schemes for polynomial optimisation, semidefinite programming and applications to nonconvex portfolio decisions". Thesis, Imperial College London, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.509792.
Wood, Derren Wesley. "Dual sequential approximation methods in structural optimisation". Thesis, Stellenbosch: Stellenbosch University, 2012. http://hdl.handle.net/10019.1/20033.
ENGLISH ABSTRACT: This dissertation addresses a number of topics that arise from the use of a dual method of sequential approximate optimisation (SAO) to solve structural optimisation problems. Said approach is widely used because it allows relatively large problems to be solved efficiently by minimising the number of expensive structural analyses required. Some extensions to traditional implementations are suggested that can serve to increase the efficacy of such algorithms. The work presented herein is concerned primarily with three topics: the use of nonconvex functions in the definition of SAO subproblems, the global convergence of the method, and the application of the dual SAO approach to large-scale problems. Additionally, a chapter is presented that focuses on the interpretation of Sigmund’s mesh independence sensitivity filter in topology optimisation. It is standard practice to formulate the approximate subproblems as strictly convex, since strict convexity is a sufficient condition to ensure that the solution of the dual problem corresponds with the unique stationary point of the primal. The incorporation of nonconvex functions in the definition of the subproblems is rarely attempted. However, many problems exhibit nonconvex behaviour that is easily represented by simple nonconvex functions. It is demonstrated herein that, under certain conditions, such functions can be fruitfully incorporated into the definition of the approximate subproblems without destroying the correspondence or uniqueness of the primal and dual solutions. Global convergence of dual SAO algorithms is examined within the context of the CCSA method, which relies on the use and manipulation of conservative convex and separable approximations. This method currently requires that a given problem and each of its subproblems be relaxed to ensure that the sequence of iterates that is produced remains feasible.
A novel method, called the bounded dual, is presented as an alternative to relaxation. Infeasibility is catered for in the solution of the dual, and no relaxation-like modification is required. It is shown that when infeasibility is encountered, maximising the dual subproblem is equivalent to minimising a penalised linear combination of its constraint infeasibilities. Upon iteration, a restorative series of iterates is produced that gains feasibility, after which convergence to a feasible local minimum is assured. Two instances of the dual SAO solution of large-scale problems are addressed herein. The first is a discrete problem regarding the selection of the point-wise optimal fibre orientation in the two-dimensional minimum compliance design for fibre-reinforced composite plates. It is solved by means of the discrete dual approach, and the formulation employed gives rise to a partially separable dual problem. The second instance involves the solution of planar material distribution problems subject to local stress constraints. These are solved in a continuous sense using a sparse solver. The complexity and dimensionality of the dual is controlled by employing a constraint selection strategy in tandem with a mechanism by which inconsequential elements of the Jacobian of the active constraints are omitted. In this way, both the size of the dual and the amount of information that needs to be stored in order to define the dual are reduced.
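The primal-dual correspondence that this approach relies on can be illustrated with a one-dimensional convex subproblem. The toy sketch below (our own example, unrelated to the thesis's SAO subproblems) maximises the dual function over a grid and reads the primal minimiser off the dual solution:

```python
import numpy as np

def dual_solve(lam_grid):
    """Dual of: min x^2 subject to x >= 1.  The Lagrangian x^2 + lam*(1 - x)
    is minimised at x = lam/2, so evaluating it there gives the (concave)
    dual function q(lam) = lam - lam^2/4, maximised over lam >= 0."""
    x_of = lambda lam: lam / 2.0
    q = lambda lam: x_of(lam) ** 2 + lam * (1.0 - x_of(lam))
    lam_star = max(lam_grid, key=q)    # crude grid maximisation of the dual
    return lam_star, x_of(lam_star)

lam_star, x_star = dual_solve(np.linspace(0.0, 4.0, 401))
# strong duality: lam* is near 2, and the primal solution x* is near 1
```

Because the subproblem is strictly convex, the dual maximiser pins down the unique primal stationary point; this is exactly the correspondence that nonconvex subproblem functions threaten to break, as discussed above.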
Sutton, Matthew William. "Variable selection and dimension reduction for structured large datasets". Thesis, Queensland University of Technology, 2019. https://eprints.qut.edu.au/129460/1/Matthew_Sutton_Thesis.pdf.
Repetti, Audrey. "Algorithmes d'optimisation en grande dimension : applications à la résolution de problèmes inverses". Thesis, Paris Est, 2015. http://www.theses.fr/2015PESC1032/document.
An efficient approach for solving an inverse problem is to define the recovered signal/image as a minimizer of a penalized criterion, which is often split into a sum of simpler functions composed with linear operators. In situations of practical interest, these functions may be neither convex nor smooth. In addition, large-scale optimization problems often have to be faced. This thesis is devoted to the design of new methods to solve such difficult minimization problems, while paying attention to computational issues and theoretical convergence properties. A first idea to build fast minimization algorithms is to make use of a preconditioning strategy by adapting, at each iteration, the underlying metric. We incorporate this technique into the forward-backward algorithm and provide an automatic method for choosing the preconditioning matrices, based on a majorization-minimization principle. The convergence proofs rely on the Kurdyka-Łojasiewicz inequality. A second strategy consists of splitting the involved data into different blocks of reduced dimension. This approach allows us to control the number of operations performed at each iteration of the algorithms, as well as the required memory. For this purpose, block alternating methods are developed in the context of both nonconvex and convex optimization problems. In the nonconvex case, a block alternating version of the preconditioned forward-backward algorithm is proposed, where the blocks are updated according to an acyclic deterministic rule. When additional convexity assumptions can be made, various alternating proximal primal-dual algorithms are obtained by using an arbitrary random sweeping rule. The theoretical analysis of these stochastic convex optimization algorithms is grounded in the theory of monotone operators. A key ingredient in the solution of high-dimensional optimization problems lies in the possibility of performing some of the computation steps in parallel.
This parallelization is made possible in the proposed block alternating primal-dual methods, where the primal variables, as well as the dual ones, can be updated in a quite flexible way. As a by-product of these results, new distributed algorithms are derived, where the computations are spread over a set of agents connected through a general hypergraph topology. Finally, our methodological contributions are validated on a number of applications in signal and image processing. First, we focus on optimization problems involving nonconvex criteria, in particular image restoration when the original image is corrupted by signal-dependent Gaussian noise, spectral unmixing, phase reconstruction in tomography, and blind deconvolution in seismic sparse signal reconstruction. Then, we address convex minimization problems arising in the context of 3D mesh denoising and in query optimization for database management.
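The forward-backward algorithm on which these preconditioned and block-alternating variants build alternates a gradient step with a proximal step. A minimal unpreconditioned sketch for the l1-penalized least-squares problem (a standard textbook example, not taken from the thesis):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(A, b, lam, iters=200):
    """Forward-backward splitting for min_x 0.5*||Ax - b||^2 + lam*||x||_1:
    a gradient (forward) step on the smooth term, then a proximal
    (backward) step on the nonsmooth term."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)              # forward step
        x = soft_threshold(x - g / L, lam / L)  # backward step
    return x

# Tiny example: with A = I the solution is soft_threshold(b, lam)
A = np.eye(2)
b = np.array([2.0, 0.05])
x = forward_backward(A, b, lam=0.1)
```

A preconditioned variant would replace the scalar step 1/L by a metric chosen at each iteration, as the thesis does via majorization-minimization.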
Garrigos, Guillaume. "Descent dynamical systems and algorithms for tame optimization, and multi-objective problems". Thesis, Montpellier, 2015. http://www.theses.fr/2015MONTS191/document.
In a first part, we focus on gradient dynamical systems governed by nonsmooth and nonconvex functions satisfying the so-called Kurdyka-Łojasiewicz inequality. After obtaining preliminary results for a continuous steepest descent dynamic, we study a general descent algorithm. We prove, under a compactness assumption, that any sequence generated by this general scheme converges to a critical point of the function. We also obtain new convergence rates, both for the values and for the iterates. The analysis covers alternating versions of the forward-backward method, with variable metric and relative errors. As an example, a nonsmooth and nonconvex version of the Levenberg-Marquardt algorithm is detailed. Applications to nonconvex feasibility problems and to sparse inverse problems are discussed. In a second part, the thesis explores descent dynamics associated with constrained vector optimization problems. For this, we adapt the classic steepest descent dynamic to functions with values in a vector space ordered by a solid closed convex cone. It can be seen as the continuous analogue of various descent algorithms developed in recent years. We have a particular interest in multi-objective decision problems, for which the dynamic makes all the objective functions decrease over time. We prove the existence of trajectories for this continuous dynamic and show their convergence to weakly efficient points. Then, we explore an inertial dynamic for multi-objective problems, with the aim of providing fast methods converging to Pareto points.
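For two objectives, the steepest-descent direction used by such dynamics is the negative of the minimum-norm element of the convex hull of the two gradients (as in the Fliege-Svaiter method). A small discrete-time sketch on a hypothetical bi-objective problem of our own:

```python
import numpy as np

def common_descent_direction(g1, g2):
    """Negative of the minimum-norm element of conv{g1, g2}: a direction
    of descent for both objectives (Fliege-Svaiter steepest descent)."""
    d = g1 - g2
    denom = d @ d
    lam = 0.5 if denom == 0 else float(np.clip(g2 @ (g2 - g1) / denom, 0.0, 1.0))
    return -(lam * g1 + (1 - lam) * g2)

# Hypothetical bi-objective problem: two quadratics with distinct minimisers
f1_grad = lambda x: 2 * (x - np.array([1.0, 0.0]))
f2_grad = lambda x: 2 * (x - np.array([0.0, 1.0]))

x = np.array([2.0, 2.0])
for _ in range(200):
    x = x + 0.1 * common_descent_direction(f1_grad(x), f2_grad(x))
# from this start, x converges to (0.5, 0.5), a Pareto-critical point
```

At a Pareto-critical point the minimum-norm combination of the gradients vanishes, which is the discrete counterpart of the trajectories' convergence to weakly efficient points.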
Ho, Vinh Thanh. "Techniques avancées d'apprentissage automatique basées sur la programmation DC et DCA". Electronic Thesis or Diss., Université de Lorraine, 2017. http://www.theses.fr/2017LORR0289.
In this dissertation, we develop advanced machine learning techniques in the framework of online learning and reinforcement learning (RL). The backbones of our approaches are DC (Difference of Convex functions) programming and DCA (DC Algorithm), together with their online versions, which are known as powerful nonsmooth, nonconvex optimization tools. The dissertation is composed of two parts: the first part studies online machine learning techniques, and the second part concerns RL in both batch and online modes. The first part includes two chapters, corresponding to online classification (Chapter 2) and prediction with expert advice (Chapter 3). These two chapters present a unified DC approximation approach to different online learning algorithms in which the observed objective functions are 0-1 loss functions. We thoroughly study how to develop efficient online DCA algorithms from both theoretical and computational points of view. The second part consists of four chapters (Chapters 4, 5, 6, 7). After a brief introduction to RL and related work in Chapter 4, Chapter 5 provides effective RL techniques in batch mode based on DC programming and DCA. In particular, we first consider four different DC optimization formulations, for which corresponding DCA-based algorithms are developed; we then carefully address the key issues of DCA and finally show the computational efficiency of these algorithms through various experiments. Continuing this study, in Chapter 6 we develop DCA-based RL techniques in online mode and propose their alternating versions. As an application, we tackle the stochastic shortest path (SSP) problem in Chapter 7. Notably, a particular class of SSP problems can be reformulated in two directions: as a cardinality minimization formulation and as an RL formulation. The cardinality formulation involves the zero-norm in the objective and binary variables.
We propose a DCA-based algorithm that exploits a DC approximation of the zero-norm and an exact penalty technique for the binary variables. We also make use of the aforementioned DCA-based batch RL algorithm. All proposed algorithms are tested on artificial road networks.
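The DCA iteration underlying these methods writes the objective as a difference of convex functions f = g - h, linearizes the concave part -h, and minimizes the resulting convex surrogate. A toy one-dimensional sketch (our own example, not from the dissertation):

```python
import numpy as np

def dca(x0, iters=50):
    """DCA for the DC function f(x) = g(x) - h(x) with g(x) = x^2 and
    h(x) = |x|.  Each iteration picks y in the subdifferential of h at x_k
    and minimises the convex surrogate g(x) - y*x."""
    x = float(x0)
    for _ in range(iters):
        y = float(np.sign(x))   # subgradient of |x| at x (0 at x = 0)
        x = y / 2.0             # argmin_x x^2 - y*x  (set 2x - y = 0)
    return x

# f has global minima at x = +/-0.5; DCA converges to a critical point that
# depends on the start (x0 = 0 is itself critical, though a local max of f)
x_star = dca(0.3)
```

The example also shows DCA's characteristic behaviour on nonconvex problems: it is guaranteed to reach a critical point of the DC decomposition, not necessarily a global minimizer.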
Book chapters on the topic "Optimisation nonconvexe"
Smith, Edward M. B., and Constantinos C. Pantelides. "Global Optimisation of General Process Models". In Nonconvex Optimization and Its Applications, 355–86. Boston, MA: Springer US, 1996. http://dx.doi.org/10.1007/978-1-4757-5331-8_12.
Shao, Weijia, and Sahin Albayrak. "Adaptive Zeroth-Order Optimisation of Nonconvex Composite Objectives". In Machine Learning, Optimization, and Data Science, 573–95. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-25599-1_42.
Byrne, R. P., and I. D. L. Bogle. "Solving Nonconvex Process Optimisation Problems Using Interval Subdivision Algorithms". In Nonconvex Optimization and Its Applications, 155–74. Boston, MA: Springer US, 1996. http://dx.doi.org/10.1007/978-1-4757-5331-8_5.
Peri, Daniele, Antonio Pinto, and Emilio F. Campana. "Multi-Objective Optimisation of Expensive Objective Functions with Variable Fidelity Models". In Nonconvex Optimization and Its Applications, 223–41. Boston, MA: Springer US, 2006. http://dx.doi.org/10.1007/0-387-30065-1_14.
Chachuat, B., and M. A. Latifi. "A New Approach in Deterministic Global Optimisation of Problems with Ordinary Differential Equations". In Nonconvex Optimization and Its Applications, 83–108. Boston, MA: Springer US, 2004. http://dx.doi.org/10.1007/978-1-4613-0251-3_5.
Mora-Camino, Félix, and Luiz Gustavo Zelaya Cruz. "Advances in Data Processing for Airlines Revenue Management". In Computational Models, Software Engineering, and Advanced Technologies in Air Transportation, 132–45. IGI Global, 2010. http://dx.doi.org/10.4018/978-1-60566-800-0.ch008.
Der volle Inhalt der QuelleKonferenzberichte zum Thema "Optimisation nonconvexe"
Lourenço, Pedro, Hugo Costa, João Branco, Pierre-Loïc Garoche, Arash Sadeghzadeh, Jonathan Frey, Gianluca Frison, Anthea Comellini, Massimo Barbero, and Valentin Preda. "Verification & validation of optimisation-based control systems: methods and outcomes of VV4RTOS". In ESA 12th International Conference on Guidance Navigation and Control and 9th International Conference on Astrodynamics Tools and Techniques. ESA, 2023. http://dx.doi.org/10.5270/esa-gnc-icatt-2023-155.