Academic literature on the topic "Optimisation SDP"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles

Choose a source type:

Consult the thematic lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Optimisation SDP".

Next to each source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication in PDF format and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Optimisation SDP"

1

Celeste, Alcigeimes B. and Luísa A. Ventura. "Simple simulation–optimisation vs SDP for reservoir operation". Proceedings of the Institution of Civil Engineers - Water Management 170, no. 3 (June 2017): 128–38. http://dx.doi.org/10.1680/jwama.15.00018.

2

Chiu, Ching-Sheng and Chris Rizos. "Towards a Multi Objective Path Optimisation for Car Navigation". Journal of Navigation 65, no. 1 (25 November 2011): 125–44. http://dx.doi.org/10.1017/s0373463311000579.

In a car navigation system the conventional information used to guide drivers in selecting their driving routes typically considers only one criterion, usually the Shortest Distance Path (SDP). However, drivers may apply multiple criteria to decide their driving routes. In this paper, possible route selection criteria together with a Multi Objective Path Optimisation (MOPO) model and algorithms for solving the MOPO problem are proposed. Three types of decision criteria were used to present the characteristics of the proposed model. They relate to the cumulative SDP, passed intersections (Least Node Path – LNP) and number of turns (Minimum Turn Path – MTP). A two-step technique which incorporates shortest path algorithms for solving the MOPO problem was tested. To demonstrate the advantage that the MOPO model provides drivers to assist in route selection, several empirical studies were conducted using two real road networks with different roadway types. With the aid of a Geographic Information System (GIS), drivers can easily and quickly obtain the optimal paths of the MOPO problem, despite the fact that these paths are highly complex and difficult to solve manually.
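As a simplified illustration of combining route-selection criteria such as the SDP and LNP mentioned above, the sketch below runs Dijkstra's algorithm on a toy network with a weighted-sum cost. The network, the weights and the scalarisation itself are assumptions made for this example; they are not the paper's two-step MOPO technique.

    import heapq

    # Toy road network: node -> list of (neighbour, distance_km).
    # The graph and the weighting below are illustrative assumptions only.
    GRAPH = {
        "A": [("B", 2.0), ("C", 1.0)],
        "B": [("D", 2.5)],
        "C": [("B", 0.5), ("D", 4.0)],
        "D": [],
    }

    def dijkstra(graph, source, target, alpha=1.0, beta=0.0):
        """Shortest path under a weighted-sum scalarisation:
        cost = alpha * distance + beta * (number of passed nodes).
        alpha=1, beta=0 gives the SDP criterion; beta>0 mixes in the LNP criterion."""
        queue = [(0.0, source, [source])]
        best = {}
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == target:
                return cost, path
            if best.get(node, float("inf")) <= cost:
                continue
            best[node] = cost
            for nbr, dist in graph[node]:
                heapq.heappush(queue, (cost + alpha * dist + beta, nbr, path + [nbr]))
        return float("inf"), []

    print(dijkstra(GRAPH, "A", "D"))                        # pure shortest-distance path
    print(dijkstra(GRAPH, "A", "D", alpha=1.0, beta=2.0))   # penalise passed intersections

With the node penalty switched on, the cheapest route changes from the longest-hop but shortest-distance path to one with fewer intersections, which is the kind of trade-off the MOPO model formalises.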
3

Kostyukova, Olga I. and Tatiana V. Tchemisova. "Linear semidefinite programming problems: regularisation and strong dual formulations". Journal of the Belarusian State University. Mathematics and Informatics, no. 3 (8 December 2020): 17–27. http://dx.doi.org/10.33581/2520-6508-2020-3-17-27.

Regularisation consists in reducing a given optimisation problem to an equivalent form where certain regularity conditions, which guarantee the strong duality, are fulfilled. In this paper, for linear problems of semidefinite programming (SDP), we propose a regularisation procedure which is based on the concept of an immobile index set and its properties. This procedure is described in the form of a finite algorithm which converts any linear semidefinite problem to a form that satisfies the Slater condition. Using the properties of the immobile indices and the described regularization procedure, we obtained new dual SDP problems in implicit and explicit forms. It is proven that for the constructed dual problems and the original problem the strong duality property holds true.
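For readers unfamiliar with the linear SDP format discussed in this abstract, the following sketch sets up a small primal SDP in standard form with the cvxpy modelling library (assuming it and an SDP-capable solver such as the bundled SCS are installed) and reads off the dual variable. The data C, A1, b1 are arbitrary example values, and the remark about the Slater condition refers to this toy instance only, not to the regularisation procedure of the paper.

    import cvxpy as cp
    import numpy as np

    # Small linear SDP in standard primal form:
    #   minimise  trace(C X)  s.t.  trace(A1 X) = b1,  X >> 0.
    n = 3
    rng = np.random.default_rng(0)
    C = rng.standard_normal((n, n)); C = (C + C.T) / 2
    A1 = np.eye(n)
    b1 = 1.0

    X = cp.Variable((n, n), PSD=True)
    constraints = [cp.trace(A1 @ X) == b1]
    primal = cp.Problem(cp.Minimize(cp.trace(C @ X)), constraints)
    primal.solve()

    # X = I/n is strictly feasible here, so the Slater condition holds for this
    # toy instance and strong duality is expected: the dual multiplier below
    # certifies the optimal value.
    print("primal optimal value:", primal.value)
    print("dual variable of the equality constraint:", constraints[0].dual_value)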
4

Azam, Khalide, Nureisha Cadir, Carla Madeira, Stephen H. Gillespie and Wilber Sabiiti. "OMNIgene.SPUTUM suppresses contaminants while maintaining Mycobacterium tuberculosis viability and obviates cold-chain transport". ERJ Open Research 4, no. 1 (January 2018): 00074–2017. http://dx.doi.org/10.1183/23120541.00074-2017.

Tuberculosis (TB) diagnostics are centralised, requiring long-distance transportation of specimens in most resource-limited settings. We evaluated the ability of OMNIgene.SPUTUM (OM-S) to obviate cold-chain transport of TB specimens. A two-arm (same-day and after 5 days sample processing) study was conducted to assess contamination rates and Mycobacterium tuberculosis viability in OM-S-treated samples against the standard decontamination procedure (SDP) in Mozambique, using Lowenstein Jensen (LJ) and mycobacterial growth indicator tube (MGIT) culture and molecular bacterial load assay. 270 specimens were processed using OM-S and SDP in same-day and 5-day arms. Contamination was lower in OM-S-treated than SDP-treated cultures: 12% versus 15% and 2% versus 27% in the same-day and 5-day arms, respectively. M. tuberculosis recovery in OM-S-treated LJ cultures was 10% and 56% higher in the same-day and 5-day arms, respectively, than SDP-treated cultures, but lower in MGIT (52% and 28% lower in the same-day and 5-day arms, respectively). M. tuberculosis viable count was 1 log estimated CFU·mL−1 lower in 5-day OM-S-treated sputa. OM-S was more effective at liquefying sputum with a shorter sample processing time: 22 min for culture. OM-S is simple to use and has demonstrated a high potency to suppress contaminants, maintenance of viability at ambient temperatures and higher M. tuberculosis recovery, particularly in the solid LJ cultures. Optimisation of OM-S to achieve higher MGIT culture positivity and shorter time to result will increase its application and utility in the clinical management of TB.
5

Lim, Y. W., H. A. Majid, A. A. Samah, M. H. Ahmad, D. R. Ossen, M. F. Harun and F. Shahsavari. "BIM and genetic algorithm optimisation for sustainable building envelope design". International Journal of Sustainable Development and Planning 13, no. 01 (1 January 2018): 151–59. http://dx.doi.org/10.2495/sdp-v13-n1-151-159.

6

A. J., Anju and J. E. Judith. "Optimized Deeplearning Algorithm for Software Defects Prediction". International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 9s (31 August 2023): 173–88. http://dx.doi.org/10.17762/ijritcc.v11i9s.7409.

Accurate software defect prediction (SDP) helps to enhance the quality of the software by identifying potential flaws early in the development process. However, existing approaches face challenges in achieving reliable predictions. To address this, a novel approach is proposed that combines a two-tier-deep learning framework. The proposed work includes four major phases:(a) pre-processing, (b) Dimensionality reduction, (c) Feature Extraction and (d) Two-fold deep learning-based SDP. The collected raw data is initially pre-processed using a data cleaning approach (handling null values and missing data) and a Decimal scaling normalisation approach. The dimensions of the pre-processed data are reduced using the newly developed Incremental Covariance Principal Component Analysis (ICPCA), and this approach aids in solving the “curse of dimensionality” issue. Then, onto the dimensionally reduced data, the feature extraction is performed using statistical features (standard deviation, skewness, variance, and kurtosis), Mutual information (MI), and Conditional entropy (CE). From the extracted features, the relevant ones are selected using the new Euclidean Distance with Mean Absolute Deviation (ED-MAD). Finally, the SDP (decision making) is carried out using the optimized Two-Fold Deep Learning Framework (O-TFDLF), which encapsulates the RBFN and optimized MLP, respectively. The weight of MLP is fine-tuned using the new Levy Flight Cat Mouse Optimisation (LCMO) method to improve the model's prediction accuracy. The final detected outcome (forecasting the presence/ absence of defect) is acquired from optimized MLP. The implementation has been performed using the MATLAB software. By using certain performance metrics such as Sensitivity, Accuracy, Precision, Specificity and MSE the proposed model’s performance is compared to that of existing models. The accuracy achieved for the proposed model is 93.37%.
7

Allacker, K. and F. de Troyer. "Optimisation of the environmental and financial cost of two dwellings in Belgium". International Journal of Sustainable Development and Planning 7, no. 2 (29 June 2012): 186–208. http://dx.doi.org/10.2495/sdp-v7-n2-186-208.

8

Li, Jiangfeng, Lina Stankovic, Vladimir Stankovic, Stella Pytharouli, Cheng Yang and Qingjiang Shi. "Graph-Based Feature Weight Optimisation and Classification of Continuous Seismic Sensor Array Recordings". Sensors 23, no. 1 (26 December 2022): 243. http://dx.doi.org/10.3390/s23010243.

Slope instabilities caused by heavy rainfall, man-made activity or earthquakes can be characterised by seismic events. To minimise mortality and infrastructure damage, a good understanding of seismic signal properties characterising slope failures is therefore crucial to classify seismic events recorded from continuous recordings effectively. However, there are limited contributions towards understanding the importance of feature selection for the classification of seismic signals from continuous noisy recordings from multiple channels/sensors. This paper first proposes a novel multi-channel event-detection scheme based on Neyman–Pearson lemma and Multi-channel Coherency Migration (MCM) on the stacked signal across multi-channels. Furthermore, this paper adapts graph-based feature weight optimisation as feature selection, exploiting the signal’s physical characteristics, to improve signal classification. Specifically, we alternatively optimise the feature weight and classification label with graph smoothness and semidefinite programming (SDP). Experimental results show that with expert interpretation, compared with the conventional short-time average/long-time average (STA/LTA) detection approach, our detection method identified 614 more seismic events in five days. Furthermore, feature selection, especially via graph-based feature weight optimisation, provides more focused feature sets with less than half of the original number of features, at the same time enhancing the classification performance; for example, with feature selection, the Graph Laplacian Regularisation classifier (GLR) raised the rockfall and slide quake sensitivities to 92% and 88% from 89% and 85%, respectively.
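As a small, hedged illustration of the Graph Laplacian Regularisation (GLR) classification idea mentioned in this abstract, the sketch below propagates two labels over a toy five-node graph by solving the usual regularised least-squares system. The graph, labels and regularisation weight are invented for the example; the paper's SDP-based feature-weight optimisation is not reproduced here.

    import numpy as np

    # Tiny semi-supervised GLR example: labels on a few nodes are propagated by
    # minimising ||f_labelled - y||^2 + gamma * f^T L f, which has the closed-form
    # solution below.
    W = np.array([[0, 1, 1, 0, 0],
                  [1, 0, 1, 0, 0],
                  [1, 1, 0, 1, 0],
                  [0, 0, 1, 0, 1],
                  [0, 0, 0, 1, 0]], dtype=float)       # adjacency (similarity) matrix
    L = np.diag(W.sum(axis=1)) - W                     # combinatorial graph Laplacian

    labelled = np.array([0, 4])                        # nodes with known labels
    y = np.zeros(5); y[0] = +1.0; y[4] = -1.0          # +1 / -1 class labels
    M = np.zeros((5, 5)); M[labelled, labelled] = 1.0  # selects the labelled nodes

    gamma = 0.5
    f = np.linalg.solve(M + gamma * L, M @ y)          # smooth label scores on all nodes
    print(np.sign(f))                                  # predicted classes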
9

Fletcher, Tom and Kambiz Ebrahimi. "The Effect of Fuel Cell and Battery Size on Efficiency and Cell Lifetime for an L7e Fuel Cell Hybrid Vehicle". Energies 13, no. 22 (11 November 2020): 5889. http://dx.doi.org/10.3390/en13225889.

The size of the fuel cell and battery of a Fuel Cell Hybrid Electric Vehicle (FCHEV) will heavily affect the overall performance of the vehicle, its fuel economy, driveability, and the rates of fuel cell degradation observed. An undersized fuel cell may experience accelerated ageing of the fuel cell membrane and catalyst due to excessive heat and transient loading. This work describes a multi-objective design exploration exercise of fuel cell size and battery capacity comparing hydrogen fuel consumption, fuel cell lifetime, vehicle mass and running cost. For each system design considered, an individually optimised Energy Management Strategy (EMS) has been generated using Stochastic Dynamic Programming (SDP) in order to prevent bias to the results due to the control strategy. It has been found that the objectives of fuel efficiency, lifetime and running cost are largely complementary, but degradation and running costs are much more sensitive to design changes than fuel efficiency and therefore should be included in any optimisation. Additionally, due to the expense of the fuel cell, combined with the dominating effect of start/stop cycling degradation, the optimal design from an overall running cost perspective is slightly downsized from one which is optimised purely for high efficiency.
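Because the EMS in this study is generated with Stochastic Dynamic Programming (SDP), a minimal backward-induction sketch may help clarify the idea. The state grid, admissible fuel-cell powers, demand scenarios, costs and battery model below are all illustrative assumptions, not the paper's vehicle model.

    import numpy as np

    # Backward value iteration for a toy EMS: choose fuel-cell power at each step
    # to meet an uncertain demand, with the battery absorbing the difference.
    soc_grid = np.linspace(0.2, 0.8, 13)          # usable state-of-charge levels
    p_fc = np.array([0.0, 1.0, 2.0, 3.0])         # admissible fuel-cell power [kW]
    demand_scen = np.array([1.0, 2.0, 3.0])       # possible demand per step [kW]
    prob = np.array([0.3, 0.4, 0.3])
    T, dt, capacity = 20, 0.1, 2.0                # horizon steps, hours, kWh

    V = np.zeros((T + 1, soc_grid.size))          # terminal cost = 0
    policy = np.zeros((T, soc_grid.size))
    for t in range(T - 1, -1, -1):
        for i, soc in enumerate(soc_grid):
            best, best_u = np.inf, 0.0
            for u in p_fc:
                cost = 0.0                        # expected stage cost + cost-to-go
                for d, p in zip(demand_scen, prob):
                    soc_next = soc + (u - d) * dt / capacity        # battery balance
                    soc_next = np.clip(soc_next, soc_grid[0], soc_grid[-1])
                    j = np.abs(soc_grid - soc_next).argmin()        # nearest grid point
                    cost += p * (0.5 * u + V[t + 1, j])             # fuel use ~ 0.5*u
                if cost < best:
                    best, best_u = cost, u
            V[t, i], policy[t, i] = best, best_u

    print(policy[0])   # optimal fuel-cell power at t=0 for each initial SOC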
10

Ovesná, J., K. Poláková, L. Kučera and J. Rulcová. "SNP Typing in Cereals: Comparison of SSCP and SNaPShot Markers Using the Barley Mlo Locus as a Model". Czech Journal of Genetics and Plant Breeding 39, no. 4 (23 November 2011): 109–12. http://dx.doi.org/10.17221/3727-cjgpb.

We compared the applicability of SSCP and SNaPShot markers in barley Mlo loci as a model of SNP detection in cereals. Whereas the development of SSCP markers required optimisation steps, the ddNTP primer extension (SNaPShot) procedure, based on knowledge of the target sequence, provided the expected results without any previous optimisation. We have shown that SNPs can be easily scored using the ABI PRISM 310 Genetic Analyser.

Theses on the topic "Optimisation SDP"

1

Niu, Yi Shuai. "Programmation DC et DCA en optimisation combinatoire et optimisation polynomiale via les techniques de SDP : codes et simulations numériques". PhD thesis, INSA de Rouen, 2010. http://tel.archives-ouvertes.fr/tel-00557911.

This thesis is devoted to theoretical and algorithmic research on local and global optimisation via DC programming & DCA, Branch-and-Bound (B&B) and DC/SDP relaxation techniques, in order to solve several classes of non-convex optimisation problems (in particular in Combinatorial Optimisation and Polynomial Optimisation). The thesis consists of four parts. The first part presents the fundamental tools and essential techniques of DC programming & the DC Algorithm (DCA), as well as SDP relaxation techniques and Branch-and-Bound methods. In the second part, we focus on solving mixed-integer quadratic and linear programming problems. We propose new local and global approaches based on DCA, B&B and SDP. Software implementation and numerical simulations are also studied. The third part explores DC programming & DCA approaches, combined with B&B and SDP techniques, for the local and global solution of polynomial programs. Polynomial programming with homogeneous polynomial functions and its application to portfolio management with higher-order moments in financial optimisation are discussed in depth in this part. Finally, in the last part we study an optimisation problem with semidefinite matrix constraints via our DC programming approaches, focusing on the feasibility problem for BMI and QMI constraints in optimal control. All of this work has been implemented in MATLAB, C/C++, etc., allowing us to confirm its practical use and to enrich our research.
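As a rough illustration of the DC Algorithm (DCA) mentioned in this abstract, the sketch below applies the standard DCA iteration to a one-dimensional toy function written as a difference of convex functions. The function, starting point and tolerance are arbitrary choices for the example and are not taken from the thesis.

    import numpy as np

    # DCA on f(x) = x**4 - x**2, written as g(x) = x**4 minus h(x) = x**2.
    # Each step linearises h at the current iterate and minimises g(x) - h'(x_k)*x,
    # which here has the closed-form solution x = cbrt(h'(x_k) / 4).
    def dca(x0=2.0, tol=1e-10, max_iter=100):
        x = x0
        for _ in range(max_iter):
            grad_h = 2.0 * x                    # (sub)gradient of h at x_k
            x_new = np.cbrt(grad_h / 4.0)       # argmin of x**4 - grad_h * x
            if abs(x_new - x) < tol:
                break
            x = x_new
        return x

    x_star = dca()
    print(x_star, 1 / np.sqrt(2))   # DCA converges to the stationary point 1/sqrt(2)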
2

Fuentes, Marc. "Analyse et optimisation de problèmes sous contraintes d'autocorrélation". PhD thesis, Université Paul Sabatier - Toulouse III, 2007. http://tel.archives-ouvertes.fr/tel-00195013.

In this thesis we study, in a context of convex analysis and optimisation, how to take into account so-called autocorrelation constraints, that is, situations where the vectors representing the variables to be optimised are constrained to be the autocorrelation coefficients of a discrete signal with finite support. This set of vectors with autocorrelated components turns out to be a convex cone; we try to establish as many of its properties as possible: its boundary (smooth/polyhedral), its faces, pointedness, the expression of the polar cone, the evaluation of the normal cone at a point, etc. We then study various algorithms for solving optimisation problems in which the cone of vectors with autocorrelated components comes into play. Our main object of study is the problem of projecting onto this cone, for which we propose three different algorithms: path-following algorithms, alternating projections, and a non-convex relaxation. Finally, we address the generalisation of the autocorrelation setting to two-dimensional signals, with all the complexity that this entails: multiple possible definitions, non-convexity of the resulting problems, and increased computational cost of the algorithms.
3

Fletcher, Thomas P. "Optimal energy management strategy for a fuel cell hybrid electric vehicle". Thesis, Loughborough University, 2017. https://dspace.lboro.ac.uk/2134/25567.

The Energy Management Strategy (EMS) has a huge effect on the performance of any hybrid vehicle because it determines the operating point of almost every component associated with the powertrain. This means that its optimisation is an incredibly complex task which must consider a number of objectives including the fuel consumption, drive-ability, component degradation and straight-line performance. The EMS is of particular importance for Fuel Cell Hybrid Electric Vehicles (FCHEVs), not only to minimise the fuel consumption, but also to reduce the electrical stress on the fuel cell and maximise its useful lifetime. This is because the durability and cost of the fuel cell stack is one of the major obstacles preventing FCHEVs from being competitive with conventional vehicles. In this work, a novel EMS is developed, specifically for Fuel Cell Hybrid Electric Vehicles (FCHEVs), which considers not only the fuel consumption, but also the degradation of the fuel cell in order to optimise the overall running cost of the vehicle. This work is believed to be the first of its kind to quantify the effect of decisions made by the EMS on the fuel cell degradation, inclusive of multiple causes of voltage degradation. The performance of this new strategy is compared in simulation to a recent strategy from the literature designed solely to optimise the fuel consumption. It is found that the inclusion of the degradation metrics results in a 20% increase in fuel cell lifetime for only a 3.7% increase in the fuel consumption, meaning that the overall running cost is reduced by 9%. In addition to direct implementation on board a vehicle, this technique for optimising the degradation alongside the fuel consumption also allows alternative vehicle designs to be compared in an unbiased way. In order to demonstrate this, the novel optimisation technique is subsequently used to compare alternative system designs in order to identify the optimal economic sizing of the fuel cell and battery pack. It is found that the overall running cost can be minimised by using the smallest possible fuel cell stack that will satisfy the average power requirement of the duty cycle, and by using an oversized battery pack to maximise the fuel cell efficiency and minimise the transient loading on the stack. This research was undertaken at Loughborough University as part of the Doctoral Training Centre (DTC) in Hydrogen, Fuel Cells and Their Applications in collaboration with the University of Birmingham and Nottingham University and with sponsorship from HORIBA-MIRA (Nuneaton, UK). A Microcab H4 test vehicle has been made available for use in testing for this research which was previously used for approximately 2 years at the University of Birmingham. The Microcab H4 is a small campus-based vehicle designed for passenger transport and mail delivery at low speeds as seen on a university campus. It has a top speed of approximately 30mph, and is fitted with a 1.2kW fuel cell and a 2kWh battery pack.
4

Melo, Ricardo Brito. "Modelos de otimização para a gestão de ativos e passivos nos fundos de pensões". Master's thesis, Instituto Superior de Economia e Gestão, 2015. http://hdl.handle.net/10400.5/9046.

Mestrado em Decisão Económica e Empresarial (Master's in Economic and Business Decision Making)
Retirement pensions may be at risk in the not-so-distant future because of ageing populations and the late start of working life. This means that an increase in contributions by the sponsors, employers and employees may be required. Another way to mitigate this risk is to apply optimization models to the management of the assets and liabilities of pension funds and pension plans. This document is essentially a theoretical text, in two parts. The first part presents the main features of pension funds and pension plans and the fundamental concepts of asset and liability management (ALM), because these are "new" topics, in the sense that there is no course in the Masters program (or in the Bachelors program) in which they are covered. The second part reviews the extensive existing literature; particular attention is given to eight contributions that take very interesting and distinct approaches to solving the ALM problem for pension funds.
5

Vincent, Hugues. "Développement d'un modèle de calcul de la capacité ultime d'éléments de structure (3D) en béton armé, basé sur la théorie du calcul à la rupture". Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC1038/document.

To evaluate the load bearing capacity of structures, civil engineers often make use of empirical methods, which are often manuals, instead of nonlinear finite element methods available in existing civil engineering softwares, which are long to process and difficult to handle. Yield design (or limit analysis) approach, formalized by J. Salençon, is a rigorous method to evaluate the capacity of structures and can be used to answer the question of structural failure. It was, yet, not possible to take advantage of these theoretical methods due to the lack of efficient numerical methods. Recent progress in this field and notably in interior point algorithms allows one to rethink this opportunity. Therefore, the main objective of this thesis is to develop a numerical model, based on the yield design approach, to evaluate the ultimate capacity of massive (3D) reinforced concrete structural elements. Both static and kinematic approaches are implemented and expressed as an optimization problem that can be solved by a mathematical optimization solver in the framework of Semi-Definite Programming (SDP).A large part of this work is on modelling the resistance of the different components of the reinforced concrete composite material. The modelling assumptions taken to model the resistance of concrete are discussed. And the method used to model reinforcement is also questioned. The homogenization method is used to model periodic reinforcement and an adaptation of this technique is developed for isolated rebars. To conclude this work, a last part is dedicated to illustrate the power and potentialities of the numerical tool developed during this PhD thesis through various examples of massive structures
6

Halalchi, Houssem. "Commande linéaire à paramètres variants des robots manipulateurs flexibles". PhD thesis, Université de Strasbourg, 2012. http://tel.archives-ouvertes.fr/tel-00762367.

Flexible robots are increasingly used in practical applications. These robots are characterised by a lightweight mechanical design, which reduces their bulk and energy consumption and improves their safety. However, the presence of transient vibrations makes accurate trajectory control of these systems difficult. This thesis is devoted precisely to the position control of flexible manipulators in the joint and operational spaces. Advanced control methods, based on tools from robust control and convex optimisation, are proposed. These methods rely in particular on the theory of linear parameter-varying (LPV) systems and on linear matrix inequalities (LMIs). Compared with nonlinear control laws available in the literature, the proposed LPV control laws make it possible to take performance and robustness constraints into account in a simple and systematic way. Our work puts the emphasis on the appropriate handling of the parameter dependence of the LPV model, in particular polynomial and rational dependence. Numerical simulations carried out under realistic conditions showed better robustness of the LPV control than of nonlinear model-inversion control with respect to measurement noise, high-frequency excitations and model uncertainties.
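Since the abstract relies on linear matrix inequalities (LMIs), a minimal feasibility sketch may help fix ideas: the code below checks a Lyapunov-type LMI for a small stable matrix with the cvxpy modelling library. The matrix A, the slack-variable formulation and the margins are illustrative assumptions, not the synthesis conditions used in the thesis.

    import cvxpy as cp
    import numpy as np

    A = np.array([[-1.0, 2.0],
                  [ 0.0, -3.0]])          # an example stable (Hurwitz) matrix

    P = cp.Variable((2, 2), symmetric=True)
    Q = cp.Variable((2, 2), PSD=True)     # slack certificate, Q >= 0
    constraints = [P >> np.eye(2),        # P positive definite (normalised)
                   A.T @ P + P @ A == -Q - 1e-3 * np.eye(2)]   # Lyapunov LMI
    prob = cp.Problem(cp.Minimize(cp.trace(P)), constraints)
    prob.solve()
    print(prob.status)                    # "optimal" here means the LMI is feasible
    print(P.value)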
7

Thévenet, Jean-Baptiste. "Techniques d'optimisation avancées pour la synthèse de lois de commande". Toulouse 3, 2005. http://www.theses.fr/2005TOU30125.

This thesis research area belongs to the class of nonlinear semidefinite programming, an emerging and challenging domain in optimization which is of central importance in robust control and relaxation of hard decision problems. Our contribution addresses convergence of algorithms, practical implementations and testing on applications in the field of reduced-order output feedback control. Firstly, our augmented Lagrangian-type "spectral SDP" method has shown to be extremely efficient on a variety of middle-scale BMI programs, including simultaneous, structured, or mixed H2/Hinf synthesis problems. Local convergence properties of the algorithm were studied as well, as far as classical nonlinear programs are concerned. On the other hand, we then focused on nonsmooth strategies for large bilinear matrix inequalities. Problems with up to a few thousand variables were successfully handled through this method, where alternative approaches usually give failure
8

Krémé, Ama Marina. "Modification locale et consistance globale dans le plan temps-fréquence". Electronic Thesis or Diss., Aix-Marseille, 2021. http://www.theses.fr/2021AIXM0340.

Nowadays, it has become easy to edit images, such as blurring an area, or changing it to hide or add an object, a person, etc. Image editing is one of the basic tools of most image processing software. In the context of audio signals, it is often more natural to perform such an editing in a transformed domain, in particular the time-frequency domain. Again, this is a fairly common practice, but not necessarily based on sound theoretical arguments. Application cases include the restoration of regions of the time-frequency plane where information has been lost (e.g. phase information), the reconstruction of a degraded signal by an additive perturbation well localized in the time-frequency plane, or the separation of signals localized in different regions of the time-frequency plane. In this thesis, we propose and develop theoretical and algorithmic methods to solve this issue. We first formulate the problem as a missing data reconstruction problem in which the missing data are only the phases of the time-frequency coefficients. We formulate it mathematically, then we propose three methods to solve it. Secondly, we propose an approach that consists in attenuating a source of degradation with the assumption that it is localized in a specific region of the time-frequency plane. We consider the case where the signal of interest is perturbed by an additive signal and has an energy that is more widely spread in the time-frequency plane. We formulate it as an optimization problem designed to attenuate the perturbation with precise control of the level of attenuation. We obtain the exact solution of the problem which involves operators called Gabor multipliers
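As a rough sketch of the kind of time-frequency editing discussed above, the code below attenuates the STFT coefficients of a synthetic signal inside a chosen time-frequency region and resynthesises the result with scipy. The signal, window length and masked region are arbitrary examples, and this mask-and-resynthesise step is only a simplified stand-in for the exact Gabor-multiplier formulation of the thesis.

    import numpy as np
    from scipy.signal import stft, istft

    fs = 8000
    t = np.arange(0, 1.0, 1 / fs)
    x = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 1500 * t)

    f, tau, Z = stft(x, fs=fs, nperseg=256)
    mask = np.ones_like(Z, dtype=float)
    # Attenuate (rather than zero) a rectangular time-frequency region.
    region = (f[:, None] > 1000) & (f[:, None] < 2000) & \
             (tau[None, :] > 0.4) & (tau[None, :] < 0.6)
    mask[region] = 0.1
    _, x_edited = istft(Z * mask, fs=fs, nperseg=256)
    print(x_edited.shape)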
9

Lannoy, Damien. "Optimisation de la qualité et de l'efficacité des dispositifs médicaux de perfusion simple et complexe". PhD thesis, Université du Droit et de la Santé - Lille II, 2010. http://tel.archives-ouvertes.fr/tel-00610119.

Intravenous infusion, whether continuous or intermittent, is a common procedure in care units, although it is not without risk. Various medical devices can be used to allow the administration, sometimes simultaneous, of several active substances. Because of their own characteristics, these devices can generate more or less significant fluctuations in the mass flow rate of the active ingredient, that is, the amount of drug delivered to the patient per unit of time. The first line of work on these medical devices is the study of the requirements laid down in the standards, in particular the definitions, the test methods and the expected conformity thresholds. The main elements of physiology and fluid mechanics are covered in order to grasp the problem. This study is completed by an analysis of the literature on the impact of medical devices on the mass flow rate of intravenously delivered active ingredients. A systematic literature review was carried out, covering in vitro and in vivo studies on the subject and on any element likely to modify the flow rate or the concentration of the infused drug. The first experimental work, carried out in vitro, concerns the simultaneous infusion of three drugs through a single infusion device with several access points. The three drugs were infused by syringe pumps and a hydration solution by gravity. The aim of this study was to assess the impact of the characteristics (residual volume and anti-reflux valve) of two infusion devices (one with a very low residual volume (0.046 ml) and an anti-reflux valve, and the other with a high residual volume (6.16 ml) and no anti-reflux valve) on the mass flow rate of three active ingredients. The simultaneous quantification of three active ingredients in solution (isosorbide dinitrate, midazolam and noradrenaline) required the development of a multivariate method on UV spectra (partial least squares (PLS) regression). This technique made it possible to assay the three active ingredients continuously (one assay per second) at the outlet of the infusion line. The method was validated over concentration ranges of 5-60, 10-80 and 2.5-20 µg.mL-1 for isosorbide dinitrate, midazolam and noradrenaline, respectively, in binary mixtures, and 6.67 to 30, 0.83 to 7.5 and 1.67 to 23.33 µg.ml−1 for the same products in ternary mixtures. The development of the model led to retaining the region of the spectrum between 220 and 300 nm, associated with an optimal Q2cum index. The recovery study, using the model to predict the composition of 8 ternary mixtures, found concentration values within a range of 99.5 to 101% of the theoretical values. The main parameters in this study were 1) the evolution of the mass flow rate of the three drugs, 2) the value of the mass flow rate plateau at steady state, and 3) the flow change efficiency (FCE). The FCE is obtained by dividing the area under the curve of the experimental mass flow rate as a function of time by the area under the curve of the expected mass flow rate as a function of time. This parameter is calculated for each 5-minute interval after the start of the infusion.
Infusion systems with a reduced residual volume provide a significantly better FCE (53.0 ± 15.4% with a very low residual volume after 5 minutes of infusion, compared with 5.6 ± 8.2% for an infusion system with a high residual volume), whatever the flow-rate change conditions. A non-linear relationship was established between the residual volume, the time since the start of the infusion and the FCE. [...]
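As a rough, hedged illustration of the multivariate PLS calibration described above (simultaneous quantification of several actives from UV spectra), the following sketch uses scikit-learn's PLSRegression on simulated spectra. The pure-component spectra, concentration ranges and noise level are invented for the example and are not the thesis data.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(1)
    wavelengths = 200                       # e.g. a 220-300 nm grid
    pure = np.abs(rng.standard_normal((3, wavelengths)))     # 3 pure "spectra"
    conc = rng.uniform(0, 30, size=(40, 3))                  # training concentrations
    spectra = conc @ pure + 0.01 * rng.standard_normal((40, wavelengths))

    pls = PLSRegression(n_components=3)
    pls.fit(spectra, conc)                  # calibrate on mixtures of known composition

    test_conc = np.array([[10.0, 5.0, 15.0]])
    test_spec = test_conc @ pure
    print(pls.predict(test_spec))           # should be close to [10, 5, 15]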
10

Kirby, Hugh Christopher. "The optimisation of Daimlerchrysler's SAP-MRP system through systems analysis, design, and simulation". Thesis, Stellenbosch : University of Stellenbosch, 2004. http://hdl.handle.net/10019.1/16364.

Thesis (MSc)--University of Stellenbosch, 2004.
ENGLISH ABSTRACT: This report presents the findings of a study that started as an evaluation of the possible implementation of the Options Inventory Management Model (OIMM), developed by van Wijck and Bekker [4], at DaimlerChrysler South Africa (DCSA). The OIMM System was developed as a possible alternative to the SAP-MRP System to ensure a high Customer Service Level, with the lowest possible inventory level, under the 10 Day Option Freeze Environment. DCSA indicated that although the OIMM System may be an ideal solution, in terms of optimising Plant Inventory levels whilst maximising Customer Service Levels, the practical problems associated with the possible implementation of this system would outweigh the associated benefits. This being the case, a directive was given to investigate the SAP-MRP System’s ability to provide a high Customer Service Level under the 10 Day Option Freeze Environment and not to pursue the OIMM implementation option. The objectives of this directive were to evaluate and establish the performance capabilities of the SAPMRP System under the 10 Day Option Freeze Environment as well as develop a system to aid in the customisation of the system. Design of Experiments (DOE) was utilised to plan the evaluation procedure and to ensure that a consistent approach was followed. The DOE generated huge amounts of output data that represented the Usage Category Behaviour Characteristics of the SAP-MRP System. Regression Analysis was utilised to investigate this data. A part-by-part analysis was avoided and the analysis approach followed presented results that could be applied to almost the entire range of parts, excluding bulk parts, at DCSA. The results showed that Coverage Profile alone could be used as a proactive inventory management tool to ensure maximum Customer Service Level. The Regression Analysis revealed that various combinations of Safety Time, Minimum, and Target Coverage resulted in similar or equal Avg. Plant Inventories, Avg. Number of Orders, and Avg. Order Sizes. These findings were used to develop a Decision Support Tool that could be used by DCSA when evaluating the resultant changes caused by the proposed changes in the aforementioned Input Parameters.

Book chapters on the topic "Optimisation SDP"

1

Yadav, Preksha, Luis Antonio Millán Tudela and Bartolomé Marco-Lajara. "The Role of AI in Assessing and Achieving the Sustainable Development Goals (SDGs)". In Issues of Sustainability in AI and New-Age Thematic Investing, 1–17. IGI Global, 2024. http://dx.doi.org/10.4018/979-8-3693-3282-5.ch001.

Facing intertwined challenges like poverty, inequality, and climate change, humanity seeks solutions in technology. This chapter delves into AI's role in the Sustainable Development Goals (SDGs), balancing potential and risks. AI's strengths in data analysis, trend prediction, and resource optimisation are evident. It aids SDGs like zero hunger, forecasting crop yields for SDG 2, and anticipating climate events for SDG 13. It also enhances healthcare and education for SDGs 4 and 10. Yet, AI poses risks. Data bias reinforces inequalities (SDG 5), automation threatens jobs (SDG 1), and transparency issues affect trust (SDG 16). To unlock AI's potential, responsible deployment is vital, prioritising human-centred approaches. Addressing limitations, leveraging strengths, and ethical considerations can make AI a force for good. This chapter calls for further research on specific AI applications for each SDG, social and environmental impacts, and ethical frameworks.

Conference papers on the topic "Optimisation SDP"

1

DIETRICH, UDO and LAURA GARCIA RIOS. "PASSIVE ADAPTIVE STRATEGIES FOR THE OPTIMISATION OF COMFORT AND ENERGY DEMAND IN TRADITIONAL AND CONTEMPORARY BUILDINGS IN HOT, HUMID CLIMATES". In SDP 2018. Southampton UK: WIT Press, 2018. http://dx.doi.org/10.2495/sdp180041.

2

Kühl, Roland W. A. and H. D. Knöll. "Evaluation of Topical Approaches to the Implementation of Standardised ERP-Systems". In ASME 2002 Engineering Technology Conference on Energy. ASMEDC, 2002. http://dx.doi.org/10.1115/etce2002/comp-29058.

In spite of the great variety of potential advantages, it is also necessary to examine the real effects of Standard Software in practice. Recent studies have revealed that 81% of the companies interviewed that use SAP do not fully exploit the software's ability to optimise business processes, even though 61% stated that SAP offers very good process optimisation opportunities [CS01]. This paper therefore evaluates popular life-cycle models with respect to their suitability for implementing Standard Software in a process-driven way.
3

Palally, H. R., S. Chen, W. Yao and L. Hanzo. "Particle swarm optimisation aided semi-blind joint maximum likelihood channel estimation and data detection for MIMO systems". In 2009 IEEE/SP 15th Workshop on Statistical Signal Processing (SSP). IEEE, 2009. http://dx.doi.org/10.1109/ssp.2009.5278578.

4

Pauley, Michael and Jonathan H. Manton. "Global Optimisation for Time of Arrival-Based Localisation". In 2018 IEEE Statistical Signal Processing Workshop (SSP). IEEE, 2018. http://dx.doi.org/10.1109/ssp.2018.8450751.

5

Al-Janabi, T. A. and H. S. Al-Raweshidy. "Efficient whale optimisation algorithm-based SDN clustering for IoT focused on node density". In 2017 16th Annual Mediterranean Ad Hoc Networking Workshop (Med-Hoc-Net). IEEE, 2017. http://dx.doi.org/10.1109/medhocnet.2017.8001651.

6

Yan, Shuangyi, Reza Nejabati and Dimitra Simeonidou. "Data-driven network analytics and network optimisation in SDN-based programmable optical networks". In 2018 International Conference on Optical Network Design and Modeling (ONDM). IEEE, 2018. http://dx.doi.org/10.23919/ondm.2018.8396137.

7

Bernardino, Anabela Moreira, Eugénia Moreira Bernardino, Juan M. Sánchez-Pérez, Juan A. Gómez-Pulido and Miguel A. Vega-Rodríguez. "Using a hybrid honey bees mating optimisation algorithm for solving SONET/SDH design problems". In the 4th International Symposium. New York, New York, USA: ACM Press, 2011. http://dx.doi.org/10.1145/2093698.2093820.

8

Danjuma, Isah, Allan Mertolla Mertolla, Ali Alabdullah, Buhari Muhammad, Abdalfettah asharaa and Raed Abd-Alhameed. "A Compact UWB Antenna Array for Breast Cancer Imaging Application using Optimisation Algorithm". In Proceedings of the 1st International Multi-Disciplinary Conference Theme: Sustainable Development and Smart Planning, IMDC-SDSP 2020, Cyperspace, 28-30 June 2020. EAI, 2020. http://dx.doi.org/10.4108/eai.28-6-2020.2298124.

9

Abu Talib, M. Ashraf, M. Safwan Zulkifli, M. Nasrullah Annuar, M. Qayyum Ahmad Ani, Mondali Mondali, Nooraini Mohamed, Daren James Shields et al. "De-Risking and Optimisation During Gas Field Development Using Integrated Ultra-Deep Reservoir Mapping and Advanced Surveying Technology: A Case Study from Offshore Malaysia". In SPE Asia Pacific Oil & Gas Conference and Exhibition. SPE, 2022. http://dx.doi.org/10.2118/210730-ms.

Abstract Recently, Petronas Carigali Sdn. Bhd. in Malaysia has successfully drilled two horizontal wells to boost the gas production of the brown field X, offshore Malaysia. The field X has been producing for many years and production started to decrease since early 2013, requiring immediate infill drilling to cover for the production gap. However, the well planning and execution are very challenging. Due to the target location, the Extended Reach Drilling (ERD) well trajectory design was considered. It would need high drilling efficiency to minimize the extended stationary time to reduce the stuck pipe and ensure the accuracy of landing and geo-steering the well. Moreover, the high uncertainty of subsurface data like gas water contact (GWC), the complexity of carbonate reservoir heterogeneities, reservoir rugose geomorphology caused by fractures or karst and the large variation of reservoir resistivity profile added more difficulties for the pre-drill modeling and real-time execution. The planning methodology combined between several planned trajectories and possible reservoir geological models to achieve the best fit of the current reservoir condition and the planned well objectives. Then, the well placement pre-drill modeling would be performed to optimize the geo-steering execution to maximize the reservoir exposure and place the well in the desired position inside the target layer. Eventually, drilling execution was smoothly and successfully performed. The first well was drilled 614m MD horizontally at approximately 21m TVD above the GWC, exceeding the target objective of 12m TVD standoff. The second well, which was the ERD well, drilled 349m MD horizontally, approximately 6.6m TVD below the top of carbonate. Utilizing the ultra-deep reservoir mapping to identify the top of carbonate, carbonate heterogeneity layers and GWC helped precisely optimize the well in the desired position. Combination of definitive dynamic survey (DDS) technology not only provided better trajectory TVD calculation for improving reservoir mapping boundaries, but also helped to speed up the drilling operation by reducing the standard surveying time, ultimately minimizing the risk exposure for stuck pipe. This paper will describe how the combination of well placement technology ultra-deep reservoir mapping tool and latest definitive dynamic surveying technology helped Petronas achieve the objectives and de-risk and optimize the horizontal wells from planning to operation.
10

Cuciumita, Cleopatra, Ning Qin, Shahrokh Shahpar and Howoong Namgoong. "Acoustic and Aerodynamic Performance of Serrated Leading Edges on the Bypass Outlet Guide Vanes". In ASME Turbo Expo 2023: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2023. http://dx.doi.org/10.1115/gt2023-103070.

Abstract The reduction of noise for commercial aircraft has become an important challenge for engineers, with engine noise being one of the major contributors to the overall sound levels near airport. For modern turbofan engines, characterized by high bypass ratios, one of the main noise sources is the interaction between the turbulent rotor wake and the leading edge of the downstream outlet guide vanes. A passive noise control method that has been gaining interest in the research community is the use of serrated leading edges for the vanes. Studies done on flat plates and airfoils with serrated leading edges showed a reduction in airfoil turbulent flow interaction noise, with an increased benefit when increasing the serrations amplitude and the existence of an optimal wavelength. From an aerodynamic standpoint, some studies show that serrations delay stall, but with a cost in aerodynamic performance at design point. Very little data exists for assessing the acoustic benefits of serrations on the leading edge of outlet guide vanes, where the noise predictions require addressing both the broadband and tonal noise. Preliminary results indicate some noise reduction, but without quantifying the benefits against the serrations parameters. It is also unclear how the total pressure losses in the outlet guide vanes row are affected by the presence of serrations. In this paper, we report a parametric study based on leading edge serrations to investigate the effect of the amplitude and wavelength on noise levels, as well as on aerodynamic performance. The NASA source diagnostic test (SDT) fan with swept outlet guide vanes was used as the test case, being representative of a modern turbofan. The unsteady flow and radiated noise were computed using a hybrid Lattice-Boltzmann/very-Large-Eddy-Simulation model implemented in the PowerFlow computational fluid dynamics solver. The flow characteristics and the noise levels were validated for the datum geometry against existing experimental data. Correlations for the far-field overall sound pressure level and the total pressure loss coefficient were built in relation to the serration parameters and a discussion on the best combination of serrations amplitude and wavelength has been given. These correlations can be further used in a multidisciplinary optimisation process.