A ready-made bibliography on the topic "Online convex optimisation"

Create accurate references in APA, MLA, Chicago, Harvard, and many other citation styles

Select a source type:

Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Online convex optimisation".

Next to every work in the bibliography there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication as a .pdf file and read its abstract online, whenever such data are available in the metadata.

Journal articles on the topic "Online convex optimisation"

1. Fan, Wenhui, Hongwen He, and Bing Lu. "Online Active Set-Based Longitudinal and Lateral Model Predictive Tracking Control of Electric Autonomous Driving". Applied Sciences 11, no. 19 (October 5, 2021): 9259. http://dx.doi.org/10.3390/app11199259.

Full text available
Abstract:
Autonomous driving is a breakthrough technology in the automobile and transportation fields. The characteristics of planned trajectories and tracking accuracy affect the development of autonomous driving technology. To improve the measurement accuracy of the vehicle state and realise the online application of the predictive control algorithm, an online active set-based longitudinal and lateral model predictive tracking control method is proposed for electric autonomous vehicles. Integrated with the vehicle inertial measurement unit (IMU) and global positioning system (GPS) information, a vehicle state estimator is designed based on an extended Kalman filter. Based on the 3-degree-of-freedom vehicle dynamics model and the curvilinear road coordinate system, dimensionality reduction of the longitudinal and lateral errors is carried out. A fast rolling optimisation algorithm for longitudinal and lateral tracking control of autonomous vehicles is designed and implemented based on convex optimisation, online active set theory, and a QP solver. Finally, the performance of the proposed tracking control method is verified in a reconstructed curved road scene based on real GPS data. The hardware-in-the-loop simulation results show that the proposed MPC controller has clear advantages over a PID-based controller.
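The rolling-optimisation structure described in the abstract — solve a finite-horizon quadratic tracking problem at every sampling instant and apply only the first input — can be sketched on a toy problem. Everything below (a 1-D double integrator, horizon length, weights) is an invented minimal example, not the paper's 3-DoF vehicle model or its active-set solver:

```python
import numpy as np

# Toy receding-horizon tracking: a 1-D double integrator chasing a
# reference position. This sketch only shows the rolling-optimisation
# structure: build a horizon-N quadratic cost, solve, apply first input.
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # state: [position, velocity]
B = np.array([[0.005], [0.1]])
N = 10                                    # prediction horizon

def mpc_step(x, ref):
    # Stack the horizon dynamics: pos_k = (A^k x)[0] + sum_j (A^{k-1-j} B)[0] u_j,
    # then minimise sum_k (pos_k - ref)^2 + 1e-3 * ||u||^2 in closed form.
    Phi = np.zeros((N, N))        # maps inputs to predicted positions
    free = np.zeros(N)            # free response of the position
    powers = [np.linalg.matrix_power(A, k) for k in range(N + 1)]
    for k in range(1, N + 1):
        free[k - 1] = (powers[k] @ x)[0]
        for j in range(k):
            Phi[k - 1, j] = (powers[k - 1 - j] @ B)[0, 0]
    H = Phi.T @ Phi + 1e-3 * np.eye(N)
    g = Phi.T @ (free - ref)
    u = np.linalg.solve(H, -g)    # unconstrained QP minimiser
    return u[0]                   # receding horizon: apply first input only

x = np.array([0.0, 0.0])
for _ in range(80):               # roll the optimisation forward in time
    u = mpc_step(x, ref=1.0)
    x = A @ x + (B * u).ravel()
```

A constrained variant would hand `H` and `g` to a QP solver together with input and state bounds, which is exactly where online active-set methods enter.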
2. Goudarzi, Pejman, Mehdi Hosseinpour, Roham Goudarzi, and Jaime Lloret. "Holistic Utility Satisfaction in Cloud Data Centre Network Using Reinforcement Learning". Future Internet 14, no. 12 (December 8, 2022): 368. http://dx.doi.org/10.3390/fi14120368.

Full text available
Abstract:
Cloud computing leads to efficient resource allocation for network users. In order to achieve efficient allocation, many research activities have been conducted so far. Some researchers focus on classical optimisation theory techniques (such as multi-objective optimisation, evolutionary optimisation, game theory, etc.) to satisfy network providers' and network users' service-level agreement (SLA) requirements. Normally, in a cloud data centre network (CDCN), it is difficult to jointly satisfy both the cloud provider's and the cloud customers' utilities, and this leads to complex combinatorial problems, which are usually NP-hard. Recently, machine learning and artificial intelligence techniques have received much attention from the networking community because of their capability to solve complicated networking problems. In the current work, at first, the holistic utility satisfaction for the cloud data centre provider and customers is formulated as a reinforcement learning (RL) problem with a specific reward function, which is a convex summation of the users' utility functions and the cloud provider's utility. The user utility functions are modelled as a function of cloud virtualised resources (such as storage, CPU, RAM), connection bandwidth, and also the network-based expected packet loss and round-trip time factors associated with the cloud users. The cloud provider utility function is modelled as a function of resource prices and energy dissipation costs. Afterwards, a Q-learning implementation of the mentioned RL algorithm is introduced, which is able to converge to the optimal solution in an online and fast manner. The simulation results exhibit the enhanced convergence speed and computational complexity properties of the proposed method in comparison with similar approaches from the joint cloud customer/provider utility satisfaction perspective. To evaluate the scalability of the proposed method, the results are also repeated for different cloud user population scenarios (small, medium, and large).
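The core formulation above — a reward that is a convex blend of user and provider utilities, optimised by Q-learning — can be illustrated on a toy resource-allocation MDP. The states, utilities, and parameters below are all invented for illustration; the paper's reward additionally involves bandwidth, packet loss, RTT, prices, and energy:

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma = 5, 3, 0.9           # allocation levels 0..4; actions -1/0/+1

def reward(s):
    # Invented toy utilities: the user utility peaks at the demanded
    # allocation level 3, the provider pays a linear provisioning cost;
    # the reward blends the two, as in a convex combination.
    user = 1.0 - 0.25 * (s - 3) ** 2
    provider_cost = 0.1 * s
    return user - provider_cost

Q = np.zeros((S, A))
s = 0
for t in range(20000):
    if t % 100 == 0:              # occasional restart so all states are seen
        s = int(rng.integers(S))
    a = int(rng.integers(A)) if rng.random() < 0.1 else int(np.argmax(Q[s]))
    s2 = min(S - 1, max(0, s + a - 1))
    # Standard Q-learning update towards r + gamma * max_a' Q(s', a')
    Q[s, a] += 0.1 * (reward(s2) + gamma * Q[s2].max() - Q[s, a])
    s = s2

# Greedy rollout: the learned policy climbs to the level that maximises
# the blended utility reward and stays there.
s = 0
for _ in range(10):
    s = min(S - 1, max(0, s + int(np.argmax(Q[s])) - 1))
```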
3. Bakhsh, Pir, Muhammad Ismail, Muhammad Asif Khan, Muhammad Ali, and Raheel Ahmed Memon. "Optimisation of Sentiment Analysis for E-Commerce". VFAST Transactions on Software Engineering 12, no. 3 (September 30, 2024): 243–62. http://dx.doi.org/10.21015/vtse.v12i3.1907.

Full text available
Abstract:
Sentiment analysis is widely used today to make data-driven decisions in different industries, from marketing to brand management, reputation monitoring, and customer satisfaction analysis. Its growing importance is closely linked with 'word-of-mouth' communication, from reading online reviews to writing comments on social networks. Effective separation of sentiments ensures that companies' responses are timely and that critical patterns are seen in big data sets. Statistical measures, information gain, correlation-based approaches, etc., have been employed for feature selection. Still, the problem associated with text data mining is that these features do not convey the text's relative difficulty or additional characteristics. To fill this gap, our research proposes a new feature selection technique combining Ant Colony Optimization (ACO) and K Nearest Neighbour (KNN), evaluated on 28,000 customer reviews across different product categories. The results showed an overall accuracy of 80.1%, with the Support Vector Machine (SVM) reaching 80.5% on the selected features, slightly higher than the Convolutional Neural Network (CNN), which scored 78.41%. Applied to the entire dataset, SVM reached 83% and CNN 80.8%. These findings indicate that neither simple nor complex algorithms used singly are infallible for sentiment classification, and that optimisation techniques such as ACO with KNN can help businesses improve service delivery based on customers' feedback.
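The ACO-plus-KNN idea — pheromone-guided sampling of feature subsets scored by nearest-neighbour accuracy — can be sketched on tiny synthetic data. Everything below (the data, subset size, pheromone schedule) is invented; the paper works on 28,000 real reviews with text features:

```python
import numpy as np

rng = np.random.default_rng(6)
# Invented toy data: only features 0 and 1 carry the label signal,
# features 2-5 are pure noise.
X = rng.standard_normal((120, 6))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

def knn_loo_accuracy(cols):
    # Leave-one-out 1-nearest-neighbour accuracy on the chosen columns.
    Z = X[:, cols]
    d2 = np.sum((Z[:, None, :] - Z[None, :, :]) ** 2, axis=2)
    np.fill_diagonal(d2, np.inf)
    return float(np.mean(y[np.argmin(d2, axis=1)] == y))

tau = np.ones(6)                      # pheromone level per feature
best, best_acc = None, -1.0
for it in range(40):
    for ant in range(10):
        # Each ant samples a 2-feature subset, biased by pheromone.
        cols = rng.choice(6, size=2, replace=False, p=tau / tau.sum()).tolist()
        acc = knn_loo_accuracy(cols)
        if acc > best_acc:
            best, best_acc = cols, acc
    tau *= 0.9                        # pheromone evaporation
    for j in best:                    # elitist deposit on the global best
        tau[j] += best_acc
```

With the signal concentrated in the first two features, the pheromone mass drifts towards them and the best subset found scores well above the noise-only baseline of roughly 50%.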
4. Yu, Jichi, Jueyou Li, and Guo Chen. "Online bandit convex optimisation with stochastic constraints via two-point feedback". International Journal of Systems Science, June 15, 2023, 1–17. http://dx.doi.org/10.1080/00207721.2023.2209566.

Full text available
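The "two-point feedback" in the title refers to the classical bandit gradient estimator: query the loss at two symmetric perturbations and form a directional estimate. A minimal sketch on a fixed quadratic loss (the paper handles adversarial losses and stochastic constraints, which this toy omits):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5
x = np.zeros(d)
x_star = np.linspace(-1.0, 1.0, d)        # minimiser of the toy loss

def loss(z):
    return 0.5 * float(np.sum((z - x_star) ** 2))

delta, eta = 1e-3, 0.05
for t in range(4000):
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)                # uniform random unit direction
    # Two-point (bandit) feedback: only two loss *values* per round are
    # observed, never the gradient itself.
    g = d * (loss(x + delta * u) - loss(x - delta * u)) / (2 * delta) * u
    x = x - eta * g                       # online gradient descent step
```

The estimator `d/(2*delta) * (f(x+delta*u) - f(x-delta*u)) * u` is an unbiased gradient estimate of a smoothed version of the loss, which is what makes gradient-free online learning possible with just two evaluations per round.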
5. Bao, C. Y., X. Zhou, P. Wang, R. Z. He, and G. J. Tang. "A deep reinforcement learning-based approach to onboard trajectory generation for hypersonic vehicles". Aeronautical Journal, February 8, 2023, 1–21. http://dx.doi.org/10.1017/aer.2023.4.

Full text available
Abstract:
An onboard three-dimensional (3D) trajectory generation approach based on the reinforcement learning (RL) algorithm and a deep neural network (DNN) is proposed for hypersonic vehicles in the glide phase. Multiple trajectory samples are generated offline through the convex optimisation method. Deep learning (DL) is employed to pre-train the DNN, initialising the actor network and accelerating the RL process. Based on the offline deep deterministic actor-critic algorithm, a flight target-oriented reward function with path constraints is designed. The actor network is optimised by end-to-end RL and the policy gradients of the critic network until the reward function converges to the maximum. The actor network is then used as the onboard trajectory generator to compute optimal control values online from the real-time motion states. The simulation results show that the single-step online planning time meets the real-time requirements of onboard trajectory generation. Significant improvement in the terminal accuracy of the online trajectory and better generalisation under biased initial states are observed for hypersonic vehicles in the glide phase.
6. Gasparin, Andrea, Federico Julian Camerota Verdù, Daniele Catanzaro, and Lorenzo Castelli. "An evolution strategy approach for the Balanced Minimum Evolution Problem". Bioinformatics, October 27, 2023. http://dx.doi.org/10.1093/bioinformatics/btad660.

Full text available
Abstract:
Motivation: The Balanced Minimum Evolution (BME) is a powerful distance-based phylogenetic estimation model introduced by Desper and Gascuel and nowadays implemented in popular tools for phylogenetic analyses. It was proven to be computationally less demanding than more sophisticated estimation methods, e.g. maximum likelihood or Bayesian inference, while preserving statistical consistency and the ability to run with almost any kind of data for which a dissimilarity measure is available. BME can be stated in terms of a nonlinear, non-convex combinatorial optimisation problem, usually referred to as the Balanced Minimum Evolution Problem (BMEP). Currently, the state of the art among approximate methods for the BMEP is represented by FastME (version 2.0), a software package which implements several deterministic phylogenetic construction heuristics combined with a local search on specific neighbourhoods derived from classical topological tree rearrangements. These combinations, however, may not guarantee convergence to close-to-optimal solutions due to the lack of solution-space exploration, a phenomenon which is exacerbated when tackling molecular datasets characterised by a large number of taxa. Results: To overcome such convergence issues, in this article we propose a novel metaheuristic, named PhyloES, which combines an exploration phase based on Evolution Strategies, a special type of evolutionary algorithm, with a refinement phase based on two local search algorithms. Extensive computational experiments show that PhyloES consistently outperforms FastME, especially on larger datasets, providing solutions characterised by a shorter tree length but also significantly different from a topological perspective. Availability: The software is available at https://github.com/andygaspar/PHYLOES. Supplementary information: Supplementary data are available at Bioinformatics online.
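The exploration/selection loop of an evolution strategy can be shown in a few lines. PhyloES searches tree space with problem-specific operators; the sketch below is only a generic (mu, lambda) strategy on an invented continuous objective standing in for tree length:

```python
import numpy as np

rng = np.random.default_rng(2)

def objective(x):
    # Toy continuous stand-in for the tree-length objective; minimise.
    return float(np.sum(x ** 2))

mu, lam, dim, sigma = 5, 20, 5, 0.3
parents = 2.0 * rng.standard_normal((mu, dim))
for gen in range(150):
    # Exploration: each offspring is a mutated copy of a random parent.
    picks = rng.integers(mu, size=lam)
    offspring = parents[picks] + sigma * rng.standard_normal((lam, dim))
    # (mu, lambda) selection: only the best offspring survive.
    scores = np.array([objective(c) for c in offspring])
    parents = offspring[np.argsort(scores)[:mu]]
    sigma *= 0.98                 # simple geometric step-size decay
best = min(objective(p) for p in parents)
```

The combination with a refinement phase, as in PhyloES, would pass the surviving candidates to a local search before the next generation.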
Doctoral dissertations on the topic "Online convex optimisation"

1. Deswarte, Raphaël. "Régression linéaire et apprentissage : contributions aux méthodes de régularisation et d'agrégation" [Linear regression and learning: contributions to regularisation and aggregation methods]. Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLX047/document.

Full text available
Abstract:
This thesis tackles the topic of linear regression within several frameworks, mainly linked to statistical learning. The first and second chapters present the context, the contributions, and the mathematical tools of the manuscript. In the third chapter, we provide a way of building an optimal regularisation function, improving for instance, in a theoretical way, the LASSO estimator. The fourth chapter presents, in the field of online convex optimisation, speed-ups for a recent and promising algorithm, MetaGrad, and shows how to transfer its guarantees from a so-called "online deterministic setting" to a "stochastic batch setting". In the fifth chapter, we introduce a new method to forecast successive intervals by aggregating predictors, without intermediate feedback or stochastic modelling. The sixth chapter applies several aggregation methods to an oil production dataset, yielding short-term point forecasts and long-term prediction intervals.
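The "online deterministic to stochastic batch" conversion mentioned for MetaGrad rests, in its simplest form, on the classical online-to-batch argument: run an online learner on i.i.d. losses and return the averaged iterate. A minimal sketch with plain online gradient descent standing in for MetaGrad (the data, target, and step sizes are invented):

```python
import numpy as np

rng = np.random.default_rng(3)
theta_star = np.array([1.0, -2.0, 0.5])   # invented target parameter
T = 5000

w = np.zeros(3)
avg = np.zeros(3)
for t in range(T):
    x = rng.standard_normal(3)                  # i.i.d. stochastic example
    y = x @ theta_star + 0.1 * rng.standard_normal()
    grad = (w @ x - y) * x                      # squared-loss gradient
    w -= 0.05 / np.sqrt(t + 1) * grad           # online gradient step
    avg += (w - avg) / (t + 1)                  # running average iterate
# Online-to-batch: averaging the online iterates turns the regret bound
# into an excess-risk bound, so `avg` estimates theta_star.
```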
2. Fernandez, Camila. "Contributions and applications to survival analysis". Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS230.

Full text available
Abstract:
Survival analysis has attracted interest from a wide range of disciplines, spanning from medicine and predictive maintenance to various industrial applications. Its growing popularity can be attributed to significant advancements in computational power and the increased availability of data. Diverse approaches have been developed to address the challenge of censored data, from classical statistical tools to contemporary machine learning techniques. However, there is still considerable room for improvement. This thesis aims to introduce innovative approaches that provide deeper insights into survival distributions and to propose new methods with theoretical guarantees that enhance prediction accuracy. Notably, we notice the lack of models able to treat sequential data, a setting that is relevant due to its ability to adapt quickly to new information and its efficiency in handling large data streams without requiring significant memory resources. The first contribution of this thesis is to propose a theoretical framework for modeling online survival data. We model the hazard function as a parametric exponential that depends on the covariates, and we use online convex optimization algorithms to minimize the negative log-likelihood of our model, an approach that is novel in this field. We propose a new adaptive second-order algorithm, SurvONS, which ensures robustness in hyperparameter selection while maintaining fast regret bounds. Additionally, we introduce a stochastic approach that enhances the convexity properties to achieve faster convergence rates. The second contribution of this thesis is to provide a detailed comparison of diverse survival models, including semi-parametric, parametric, and machine learning models. We study the dataset characteristics that influence the methods' performance, and we propose an aggregation procedure that enhances prediction accuracy and robustness. Finally, we apply the different approaches discussed throughout the thesis to an industrial case study: predicting employee attrition, a fundamental issue in modern business. Additionally, we study the impact of employee characteristics on attrition predictions using permutation feature importance and Shapley values.
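The online survival setting described above can be sketched as follows: model the hazard as a parametric exponential of the covariates and minimise the per-observation negative log-likelihood with projected online gradient steps. SurvONS is a second-order (Newton-style) method; this toy uses plain first-order updates on simulated data, so it only illustrates the convex online formulation:

```python
import numpy as np

rng = np.random.default_rng(4)
theta_star = np.array([0.8, -0.5])      # invented true parameter
theta = np.zeros(2)
theta_bar = np.zeros(2)                 # averaged iterate

for t in range(20000):
    x = rng.uniform(-1.0, 1.0, 2)       # covariates of one arriving subject
    rate = np.exp(theta_star @ x)       # true hazard exp(theta* . x)
    time = rng.exponential(1.0 / rate)
    cens = rng.exponential(2.0)         # independent censoring time
    obs, delta = min(time, cens), float(time <= cens)
    # Per-observation negative log-likelihood (convex in theta):
    #   -delta * (theta @ x) + exp(theta @ x) * obs
    grad = x * (np.exp(theta @ x) * obs - delta)
    theta -= 0.1 / np.sqrt(t + 1) * grad        # online gradient step
    norm = np.linalg.norm(theta)
    if norm > 2.0:                      # projection onto a ball, as usual in OCO
        theta *= 2.0 / norm
    theta_bar += (theta - theta_bar) / (t + 1)  # running average
```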
3. Karimi, Belhal. "Non-Convex Optimization for Latent Data Models: Algorithms, Analysis and Applications". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLX040/document.

Full text available
Abstract:
Many problems in machine learning pertain to tackling the minimization of a possibly non-convex and non-smooth function defined on a Euclidean space. Examples include topic models, neural networks, and sparse logistic regression. Optimization methods used to solve those problems have been widely studied in the literature for convex objective functions and are extensively used in practice. However, recent breakthroughs in statistical modeling, such as deep learning, coupled with an explosion of data samples, require improvements of non-convex optimization procedures for large datasets. This thesis is an attempt to address those two challenges by developing algorithms with cheaper updates, ideally independent of the number of samples, and by improving the theoretical understanding of non-convex optimization, which remains rather limited. In this manuscript, we are interested in the minimization of such objective functions for latent data models, i.e., when the data is partially observed, which includes the conventional sense of missing data but is much broader than that. In the first part, we consider the minimization of a (possibly) non-convex and non-smooth objective function using incremental and online updates. To that end, we propose several algorithms exploiting the latent structure to efficiently optimize the objective and illustrate our findings with numerous applications. In the second part, we focus on the maximization of a non-convex likelihood using the EM algorithm and its stochastic variants. We analyze several faster and cheaper algorithms and propose two new variants aiming at speeding up the convergence of the estimated parameters.
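The latent-data setting can be illustrated by the textbook EM algorithm on a two-component Gaussian mixture with known unit variances (vanilla batch EM on simulated data, not the thesis's incremental or stochastic variants):

```python
import numpy as np

rng = np.random.default_rng(5)
# Latent-data model: z ~ Bernoulli(0.5) selects one of two unit-variance
# Gaussians; only x is observed, the component label z is the missing datum.
n = 4000
z = rng.integers(2, size=n)
x = np.where(z == 0, rng.normal(-2.0, 1.0, n), rng.normal(3.0, 1.0, n))

mu = np.array([-0.5, 0.5])                 # crude initialisation
for _ in range(50):
    # E-step: posterior responsibility of component 1 for each point
    d0 = np.exp(-0.5 * (x - mu[0]) ** 2)
    d1 = np.exp(-0.5 * (x - mu[1]) ** 2)
    r = d1 / (d0 + d1)
    # M-step: responsibility-weighted means
    mu = np.array([np.sum((1 - r) * x) / np.sum(1 - r),
                   np.sum(r * x) / np.sum(r)])
```

Each iteration increases the observed-data likelihood; the estimated means approach the true values (-2, 3) up to sampling error.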
4. Akhavanfoomani, Aria. "Derivative-free stochastic optimization, online learning and fairness". Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAG001.

Full text available
Abstract:
In this thesis, we first study the problem of zero-order optimization in the active setting for smooth functions in three different classes: i) functions that satisfy the Polyak-Łojasiewicz condition, ii) strongly convex functions, and iii) the larger class of highly smooth non-convex functions. Furthermore, we propose a novel algorithm based on l1-type randomization, and we study its properties for Lipschitz convex functions in an online optimization setting. Our analysis relies on deriving a new Poincaré-type inequality for the uniform measure on the l1-sphere with explicit constants. Then, we study the zero-order optimization problem in the passive setting. We propose a new method for estimating the minimizer and the minimum value of a smooth and strongly convex regression function f. We derive upper bounds for this algorithm and prove minimax lower bounds for such a setting. Finally, we study the linear contextual bandit problem under fairness constraints, where an agent has to select one candidate from a pool, and each candidate belongs to a sensitive group. We propose a novel notion of fairness which is practical in the aforementioned example. We design a greedy policy that computes an estimate of the relative rank of each candidate using the empirical cumulative distribution function, and we prove its optimality.
5. Reiffers-Masson, Alexandre. "Compétition sur la visibilité et la popularité dans les réseaux sociaux en ligne" [Competition over visibility and popularity in online social networks]. Thesis, Avignon, 2016. http://www.theses.fr/2016AVIG0210/document.

Full text available
Abstract:
This Ph.D. thesis applies game theory to understanding user behaviour in Online Social Networks (OSNs). It addresses three main questions: "How to maximize content popularity?"; "How to model the distribution of messages across sources and topics in OSNs?"; and "How to minimize gossip propagation while maximizing content diversity?". After a survey of research on these questions in Chapter 1, Chapter 2 studies a competition over visibility. Chapter 3 models and provides insight into the posting behaviour of publishers in OSNs using the stochastic approximation framework. Chapter 4 describes a popularity competition using a differential game formulation. Chapter 5 formulates two convex optimization problems in the context of online social networks. Finally, conclusions and perspectives are given in Chapter 6.
6. Ho, Vinh Thanh. "Techniques avancées d'apprentissage automatique basées sur la programmation DC et DCA" [Advanced machine learning techniques based on DC programming and DCA]. Electronic Thesis or Diss., Université de Lorraine, 2017. http://www.theses.fr/2017LORR0289.

Full text available
Abstract:
In this dissertation, we develop advanced machine learning techniques in the framework of online learning and reinforcement learning (RL). The backbone of our approaches is DC (Difference of Convex functions) programming and DCA (DC Algorithm), together with their online versions, which are best known as powerful nonsmooth, nonconvex optimization tools. This dissertation is composed of two parts: the first part studies some online machine learning techniques, and the second part concerns RL in both batch and online modes. The first part includes two chapters corresponding to online classification (Chapter 2) and prediction with expert advice (Chapter 3). These two chapters present a unified DC approximation approach to different online learning problems whose objective functions are 0-1 loss functions. We thoroughly study how to develop efficient online DCA algorithms in terms of theoretical and computational aspects. The second part consists of four chapters (Chapters 4, 5, 6, 7). After a brief introduction to RL and its related works in Chapter 4, Chapter 5 aims to provide effective RL techniques in batch mode based on DC programming and DCA. In particular, we first consider four different DC optimization formulations in RL for which corresponding DCA-based algorithms are developed, then carefully address the key issues of DCA, and finally show the computational efficiency of these algorithms through various experiments. Continuing this study, in Chapter 6 we develop DCA-based RL techniques in online mode and propose their alternating versions. As an application, we tackle the stochastic shortest path (SSP) problem in Chapter 7. In particular, a class of SSP problems can be reformulated in two directions: as a cardinality minimization formulation and as an RL formulation. The cardinality formulation involves the zero-norm in the objective and binary variables; we propose a DCA-based algorithm by exploiting a DC approximation of the zero-norm and an exact penalty technique for the binary variables. For the RL formulation, we make use of the aforementioned DCA-based batch RL algorithm. All proposed algorithms are tested on artificial road networks.
APA, Harvard, Vancouver, ISO, etc. styles
7

Ho, Vinh Thanh. "Techniques avancées d'apprentissage automatique basées sur la programmation DC et DCA". Thesis, Université de Lorraine, 2017. http://www.theses.fr/2017LORR0289/document.

Full text of the source
Abstract:
In this dissertation, we develop advanced machine learning techniques in the framework of online learning and reinforcement learning (RL). The backbone of our approaches is DC (Difference of Convex functions) programming and DCA (DC Algorithm), together with their online versions, which are well known as powerful tools for nonsmooth, nonconvex optimization. The dissertation is composed of two parts: the first part studies online machine learning techniques, while the second concerns RL in both batch and online modes. The first part includes two chapters, on online classification (Chapter 2) and on prediction with expert advice (Chapter 3). These two chapters present a unified DC approximation approach to different online learning problems in which the observed objective functions are 0-1 loss functions, and thoroughly study how to develop efficient online DCA algorithms from both theoretical and computational standpoints. The second part consists of four chapters (Chapters 4, 5, 6, 7). After a brief introduction to RL and related work in Chapter 4, Chapter 5 provides effective batch-mode RL techniques based on DC programming and DCA: we first consider four different DC optimization formulations for which corresponding DCA-based algorithms are developed, then carefully address the key issues of DCA, and finally show the computational efficiency of these algorithms through various experiments. Continuing this study, in Chapter 6 we develop DCA-based RL techniques in online mode and propose alternating versions of them. As an application, we tackle the stochastic shortest path (SSP) problem in Chapter 7. Notably, a particular class of SSP problems can be reformulated in two ways: as a cardinality minimization problem and as an RL problem. For the first formulation, which involves the zero-norm in the objective together with binary variables, we propose a DCA-based algorithm that exploits a DC approximation of the zero-norm and an exact penalty technique for the binary variables. For the second, we make use of the aforementioned DCA-based batch RL algorithm. All proposed algorithms are tested on artificial road networks.
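The DCA scheme referred to in this abstract iterates between linearizing the concave part of a DC decomposition f = g - h and solving the resulting convex subproblem. A minimal one-dimensional sketch (a toy illustration only, not any of the thesis's algorithms; the function and decomposition below are chosen purely for simplicity):

```python
import math

def dca(x0, grad_h, argmin_convex, tol=1e-10, max_iter=1000):
    """Generic DCA loop for minimizing f(x) = g(x) - h(x), with g and h convex.

    At each step the concave part -h is linearized at the current iterate,
    and the resulting convex subproblem min_x g(x) - y*x is solved exactly.
    """
    x = x0
    for _ in range(max_iter):
        y = grad_h(x)             # (sub)gradient of h at the current iterate
        x_new = argmin_convex(y)  # argmin_x g(x) - y*x  (convex subproblem)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x_new

# Toy DC decomposition: f(x) = x**4 - x**2 with g(x) = x**4 and h(x) = x**2.
# The subproblem argmin_x x**4 - y*x solves 4*x**3 = y, i.e. x = cbrt(y / 4).
x_star = dca(
    x0=1.0,
    grad_h=lambda x: 2.0 * x,
    argmin_convex=lambda y: math.copysign(abs(y / 4.0) ** (1.0 / 3.0), y),
)
print(round(x_star, 4))  # → 0.7071, a local minimizer of f (x = 1/sqrt(2))
```

Each iteration solves a convex surrogate of the nonconvex objective, so the objective value is non-increasing along the iterates; this monotone-descent property is what makes DCA attractive as a building block for the batch and online variants studied in the thesis.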
APA, Harvard, Vancouver, ISO, etc. styles
8

El, Gueddari Loubna. "Proximal structured sparsity regularization for online reconstruction in high-resolution accelerated Magnetic Resonance imaging". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS573.

Full text of the source
Abstract:
Magnetic resonance imaging (MRI) is the reference medical imaging technique for probing soft tissues of the human body, notably the brain, in vivo and non-invasively. Improving MR image resolution within a standard scanning time (e.g., 400 µm isotropic in 15 minutes) would allow medical doctors to significantly improve both their diagnoses and patient follow-up. However, scanning times in MRI remain long, especially in the high-resolution context. To reduce this time, the recent Compressed Sensing (CS) theory has revolutionized the way data are acquired in several fields, including MRI, by overcoming the Shannon-Nyquist theorem. Using CS, data can be massively under-sampled while still ensuring conditions for optimal image recovery. In this context, previous PhD theses in the laboratory were dedicated to the design and implementation of physically plausible acquisition scenarios to accelerate the scan. Those projects delivered a new optimization algorithm for the design of advanced non-Cartesian trajectories called SPARKLING (Spreading Projection Algorithm for Rapid K-space samplING). The generated SPARKLING trajectories led to acceleration factors of up to 20 for 2D and 60 for 3D acquisitions on highly resolved T₂*-weighted images acquired at 7 Tesla. Those accelerations were only accessible thanks to the high input signal-to-noise ratio delivered by multi-channel reception coils (parallel MRI, pMRI), but they came at the price of long and complex reconstruction. The objective of this thesis is to propose an online approach for non-Cartesian multi-channel MR image reconstruction, in which acquisition and reconstruction are interleaved: the reconstruction starts from incomplete data before the end of the acquisition, and partial feedback is delivered during the scan. The entire pipeline is compatible with a real implementation through the Gadgetron interface, so that reconstructed images can be produced at the scanner console. After exposing the Compressed Sensing theory, we present the state of the art of methods dedicated to multi-channel image reconstruction. In particular, we first focus on self-calibrating methods, which have the advantage of being adapted to non-Cartesian sampling, and we propose a simple yet efficient method to estimate the coil sensitivity profiles. However, owing to its dependence on user-defined parameters, this two-step approach (extraction of sensitivity maps followed by image reconstruction) is not compatible with the timing constraints of online reconstruction. We then study calibration-less reconstruction methods, split into two categories: k-space-based and image-domain-based. Since k-space calibration-less methods are sub-optimal for non-Cartesian reconstruction, owing to the gridding procedure, we retain image-domain calibration-less reconstruction and assess its suitability for online use. Hence, in the second part, we first show the advantage of mixed norms in improving recovery guarantees in the pMRI setting, then study the impact of structured-sparsity-inducing norms on multi-channel reconstruction and adapt different penalties based on structured sparsity to handle these highly correlated images. Finally, the retained method is applied to online reconstruction.
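The mixed norms promoting structured sparsity mentioned in this abstract are typically handled through their proximal operators inside iterative schemes such as FISTA or primal-dual algorithms. A minimal sketch of the group soft-thresholding (ℓ2,1) prox, assuming for illustration that each row of a coefficient matrix is one group (this is not the thesis's exact pipeline):

```python
import numpy as np

def prox_l21(X, lam):
    """Proximal operator of lam * sum over rows i of ||X[i, :]||_2.

    Group soft-thresholding: each row (one group of coefficients) is
    shrunk toward zero by lam in Euclidean norm, and zeroed entirely
    when its norm falls below lam -- the structured-sparsity effect.
    """
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - lam / np.maximum(norms, 1e-30), 0.0)
    return scale * X

X = np.array([[3.0, 4.0],    # norm 5.0 -> shrunk to norm 4.0
              [0.3, 0.4]])   # norm 0.5 <= lam -> zeroed out
print(prox_l21(X, 1.0))
```

In a pMRI-style setting the groups would instead gather, for each spatial wavelet coefficient, its values across all coil channels, so that whole cross-channel groups are kept or discarded together.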
APA, Harvard, Vancouver, ISO, etc. styles

Conference abstracts on the topic "Online convex optimisation"

1

Lourenço, Pedro, Hugo Costa, João Branco, Pierre-Loïc Garoche, Arash Sadeghzadeh, Jonathan Frey, Gianluca Frison, Anthea Comellini, Massimo Barbero and Valentin Preda. "Verification & validation of optimisation-based control systems: methods and outcomes of VV4RTOS". In ESA 12th International Conference on Guidance Navigation and Control and 9th International Conference on Astrodynamics Tools and Techniques. ESA, 2023. http://dx.doi.org/10.5270/esa-gnc-icatt-2023-155.

Full text of the source
Abstract:
VV4RTOS is an activity supported by the European Space Agency aimed at the development and validation of a framework for the verification and validation of spacecraft guidance, navigation, and control (GNC) systems based on embedded optimisation, tailored to handle different layers of abstraction, from guidance and control (G&C) requirements down to hardware level. This is grounded on the parallel design and development of real-time optimisation-based G&C software, making it possible to concurrently identify, develop, consolidate, and validate a set of engineering practices and analysis & verification tools that ensure safe code execution, with the designed G&C software serving as test cases aimed at streamlining general industrial V&V processes. This paper presents: 1) a review of the challenges and state of the art of formal verification methods applicable to optimization-based software; 2) the implementation of a conic optimization solver for an embedded application and its analysis from a V&V standpoint; 3) the technical approach devised towards an enhanced V&V process; and 4) experimental results up to processor-in-the-loop tests, and conclusions.
In general, this activity aims to contribute to the widespread usage of convex optimisation-based techniques across the space industry by 1) augmenting the traditional GNC software Design & Development Verification & Validation (DDVV) methodologies to explicitly address the iterative embedded optimisation algorithms that are paramount for the success of new and highly relevant space applications (from powered landing to active debris removal, from actuator allocation to attitude guidance & control), guaranteeing safe, reliable, repeatable, and accurate execution of the software; and 2) consolidating the tools needed for fast prototyping and qualification of G&C software, grounded on strong theoretical foundations for the solution of the convex optimisation problems generated by posing, discretising, convexifying, and transcribing nonlinear nonconvex optimal control problems into online-solvable optimisation problems. Sound guidelines are provided for the high-to-low-level translation of mission requirements and objectives, aiming at interfacing them with verifiable embedded solvers tailored to the underlying hardware and exploiting the structure present in common optimisation/optimal control problems. To fulfil this mandate, two avenues of research and development were followed: the development of a benchmarking framework with optimisation-based G&C, and the improvement of the V&V process, two radical advances with respect to traditional GNC DDVV. On the first topic, the new optimisation-based hierarchy was exploited, from high-level requirements and objectives that can be mathematically posed as optimal control problems, themselves organised in different levels of abstraction, complexity, and time-criticality depending on how close to the actuator level they are.
The main line of this work then focuses on the core component of optimisation-based G&C, the optimisation solver, starting with a formal analysis of its mathematical properties, which allowed meaningful V&V requirements to be identified, and, concurrently, with a thorough, step-by-step design and implementation for embedding on a space target board. This application-agnostic analysis and development was paired with the DDVV of specific use cases of optimisation-based G&C for common space applications of growing complexity, exploring different challenges in the form of convex problem complexity (up to second-order cone programs), problem size (model predictive control and trajectory optimization), and nonlinearity (both translation and attitude control problems). The novel V&V approach relies on combining and exploiting the two main approaches: classical testing of the global on-board software, and local, compositional, formal, math-driven verification. While the former sees systems as black boxes, feeding them comprehensive inputs and analysing the outputs statistically, the latter delves deep into the sub-components of the software, effectively seeing them as white boxes whenever mathematically possible. In between the two approaches lies the optimal path to a thorough, dependable, mathematically sound verification and validation process: local, potentially application-agnostic validation of the building blocks with respect to mathematical specifications, leading up to application-specific testing of global complex systems, this time informed by the results of local validation and testing. The deep analysis of the mathematical properties of the optimisation algorithm allows requirements of increasing complexity to be derived (e.g., from "the code implements the proper computations" to higher-level mathematical properties such as optimality, convergence, and feasibility).
These requirements relate to quantities of interest that can be verified with e-ACSL specifications and Frama-C on a C-code implementation of the solver, and also observed in online monitors in Simulink or in post-processing during model/software-in-the-loop testing. Finally, the activity applies the devised V&V process to the benchmark designs, from model-in-the-loop Monte Carlo testing, followed by autocoding and software-in-the-loop equivalence testing in parallel with Frama-C runtime analysis, and concluded by processor-in-the-loop testing on a Hyperion on-board computer based around a Xilinx Zynq 7000 SoC.
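The runtime-checkable optimality properties described above can be illustrated on a far simpler solver than the activity's embedded conic solver. A toy sketch, assuming a box-constrained QP solved by projected gradient descent, with the projected-gradient fixed-point residual playing the role of a monitorable optimality certificate:

```python
import numpy as np

def box_qp_pgd(Q, c, lo, hi, steps=500):
    """Projected gradient descent for min 0.5 x'Qx + c'x  s.t.  lo <= x <= hi.

    Returns the iterate together with the residual ||x - proj(x - grad)||_inf:
    this vanishes exactly at a KKT point, so it is the kind of quantity a
    runtime monitor (or an e-ACSL-style annotation) could check online.
    """
    L = np.linalg.norm(Q, 2)                  # Lipschitz constant of the gradient
    x = np.clip(np.zeros_like(c), lo, hi)     # feasible starting point
    for _ in range(steps):
        g = Q @ x + c
        x = np.clip(x - g / L, lo, hi)        # gradient step + projection on the box
    residual = np.linalg.norm(x - np.clip(x - (Q @ x + c), lo, hi), np.inf)
    return x, residual

Q = np.array([[2.0, 0.0], [0.0, 1.0]])
c = np.array([-2.0, -5.0])                    # unconstrained minimizer is (1, 5)
x, res = box_qp_pgd(Q, c, lo=np.array([0.0, 0.0]), hi=np.array([2.0, 2.0]))
print(x.round(4), res < 1e-6)                 # the box clips the solution to (1, 2)
```

Embedded solvers used in flight software are of course far more sophisticated, but the principle is the same: optimality, feasibility, and convergence can be phrased as checkable numerical predicates rather than trusted implicitly.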
APA, Harvard, Vancouver, ISO, etc. styles
2

Filipski, Tatiana. "The valorization of students museum education within the school – museum – family – community interconnectivity during the pandemic crisis". In Condiții pedagogice de optimizare a învățării în post criză pandemică prin prisma dezvoltării gândirii științifice. "Ion Creanga" State Pedagogical University, 2021. http://dx.doi.org/10.46728/c.18-06-2021.p262-267.

Full text of the source
Abstract:
The article reflects on the impact of the pandemic crisis on the museum education of students and addresses the problem of collaboration between the educational institution, the museum, the family, and the community during this period, which is difficult for society as a whole. In this context, multiple collaboration agreements were developed and signed with a view to optimising the museum education process and the educational institution-museum-family-community interoperability, building an online-oriented museum education methodology that actively and systematically involved pupils, students, professors, school managers, professionals from different domains, and parents. The assessment of the museum education activities showed in particular that all the actors involved took an active interest in the national and universal heritage, highly appreciating the partnership developed.
APA, Harvard, Vancouver, ISO, etc. styles
3

Baert, Lieven, Ingrid Lepot, Caroline Sainvitu, Emmanuel Chérière, Arnaud Nouvellon and Vincent Leonardon. "Aerodynamic Optimisation of the Low Pressure Turbine Module: Exploiting Surrogate Models in a High-Dimensional Design Space". In ASME Turbo Expo 2019: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/gt2019-91570.

Full text of the source
Abstract:
Further improvement of state-of-the-art Low Pressure (LP) turbines has become progressively more challenging. LP design is more than ever confronted with the need to further integrate complex models and to shift from single-component design to designing the complete LPT module at once. This leads to high-dimensional design spaces and challenges applicability in an industrial context, where CPU resources are limited and cycle time is crucial. The aerodynamic design of a multistage LP turbine is discussed for a design space defined by 350 parameters. Using an online surrogate-based optimisation (SBO) approach, a significant efficiency gain of almost 0.5 pt has been achieved. By discussing the sampling of the design space, the quality of the surrogate models, and the application of adequate data mining capabilities to steer the optimisation, it is shown that, despite the high-dimensional nature of the design space, the approach followed yields performance gains beyond target. The ability to control both global and local characteristics of the flow throughout the full LP turbine, combined with an agile reaction of the search process after dynamically strengthening and/or enforcing new constraints to adapt to review feedback, illustrates not only the feasibility but also the potential of a global design space for the LP module. It is demonstrated that intertwining the capabilities of dynamic SBO and efficient data mining allows high-fidelity simulations to be incorporated into the design cycle practices of certified engines or novel engine concepts, to jointly optimise the multiple stages of the LPT.
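The online SBO loop described above alternates between fitting a surrogate to all evaluated samples and optimising that surrogate to choose the next expensive evaluation. A deliberately tiny one-dimensional caricature (a quadratic surrogate and a toy objective, nothing like the paper's 350-parameter design space or its high-fidelity CFD evaluations):

```python
import numpy as np

def sbo_1d(f, x_lo, x_hi, n_init=5, iters=10, seed=0):
    """Tiny online surrogate-based optimisation loop (1-D, quadratic surrogate).

    Each iteration refits the surrogate to every sample gathered so far,
    minimizes it in closed form, and spends one 'expensive' evaluation of f
    at the surrogate minimizer.
    """
    rng = np.random.default_rng(seed)
    X = rng.uniform(x_lo, x_hi, n_init)            # initial design of experiments
    y = np.array([f(x) for x in X])
    for _ in range(iters):
        a, b, _ = np.polyfit(X, y, deg=2)          # refit surrogate to all samples
        if a > 0:                                  # convex fit: closed-form minimizer
            x_new = np.clip(-b / (2 * a), x_lo, x_hi)
        else:                                      # degenerate fit: random probe
            x_new = rng.uniform(x_lo, x_hi)
        X = np.append(X, x_new)
        y = np.append(y, f(x_new))                 # one expensive evaluation per loop
    return X[np.argmin(y)]

# Cheap stand-in for an expensive black-box objective, minimum at x = 0.7.
best = sbo_1d(lambda x: (x - 0.7) ** 2, 0.0, 2.0)
print(round(best, 3))  # → 0.7
```

Industrial SBO replaces the quadratic fit with richer models (e.g., RBF or Gaussian-process surrogates), balances exploration against exploitation, and handles constraints, but the sample-fit-optimise-evaluate cycle is the same.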
APA, Harvard, Vancouver, ISO, etc. styles
4

Tyrrell, Grainne, Donna Curley, Leonard O' Sullivan and Eoin White. "Comparing Perceptions of Human Factors - Priorities of Cardiologists and Biomedical Engineers in the Design of Cardiovascular Devices". In 15th International Conference on Applied Human Factors and Ergonomics (AHFE 2024). AHFE International, 2024. http://dx.doi.org/10.54941/ahfe1005076.

Full text of the source
Abstract:
This study aimed to understand perceptions of Human Factors (HF) within the Product Development Process (PDP) of catheter-based cardiovascular therapies. Attitudes of biomedical engineers were compared with those of the clinicians who use these devices. The main objectives were to: 1) determine how Engineers and Cardiologists perceive the impact of HF on user experience; 2) gain an understanding of how various design factors affect the user experience; and 3) identify Engineers' familiarity with HF resources and understand what HF data they seek during the PDP. By identifying, and later filling, data gaps and barriers to optimised design, these findings can improve how HF is implemented during the PDP, leading to improved user experience and better patient outcomes.
Methods: Data were gathered from 57 Biomedical R&D Engineers and 20 Interventional Cardiologists via questionnaires and semi-structured interviews. An online form was distributed at an internal medical device company Global Catheter Summit in November 2023, targeting engineers with experience developing catheter-based devices. Data from Cardiologists were gathered at two in-person events between February and April 2024. Parameters gauging specialty and experience were gathered from both cohorts (Engineers and Interventional Cardiologists). Quantitative data were collected in Excel and statistically analysed using SPSS; qualitative data were thematically analysed using NVivo.
Discussion: The results highlighted that the Engineers' priorities in the PDP differ from the prioritised needs of the Cardiologists, although both groups identified grasps/manipulations as important factors influencing user experience. Engineers focused on factors specific to the device itself, believing the device is what the user cares most about; the Cardiologists, however, ranked highly the impact of having multiple operators and of the surgical access site being used, pointing to the importance of considering the use scenario and environment.
75% of Engineers strongly agreed with the statement "I feel user centred design is important when developing a new product", indicating that project teams place value on HF activities, but there are several challenges in implementing them. Engineers often struggle to find the data and expertise they need to implement HF activities in a meaningful and impactful way without compromising on timeline, budget, and other product development activities. Of those who identified themselves as R&D or Design Engineers, 69% (N=33) struggled to find the data they wanted. User-specific and context-specific data, torque strength, and dynamic force data were highlighted as key gaps in user data. The differences in priorities further underline the need for user-centred design and the implementation of an iterative design approach that engages the end user from design conception through design implementation and beyond.
Conclusion: Overall, both Engineers and Cardiologists respect the impact of HF on the optimisation of user interaction. They agreed on the need for further innovation to improve the user experience for catheter-based cardiovascular devices. The priorities of Biomedical Engineers during the design process differed from the prioritised needs of Cardiologists when using devices; however, both cohorts felt that the manipulations required to operate devices are an important factor to consider during design. The Engineers reported a paucity of specific user-related data regarding handle interaction in this field. There is a need for easily accessible literature reporting user force data for dynamic motion (i.e. torque, push, and pull), force data for female users, and general human body measurements applicable to device design. These data can serve as an indicator of where academia and industry should focus their research efforts to improve the implementation of HF and, ultimately, optimise the user experience.
APA, Harvard, Vancouver, ISO, etc. styles

Organizational reports on the topic "Online convex optimisation"

1

Kaufmann, Joachim, Peter Kaufmann and Simone Maria Grabner. Assessment of completed BRIDGE Discovery projects Synthesis at programme level. BMK - Federal Ministry for Climate Action, Environment, Energy, Mobility, Innovation and Technology, December 2023. http://dx.doi.org/10.22163/fteval.2023.640.

Full text of the source
Abstract:
The aim of this short evaluation was to systematically collect information from the completed projects of the BRIDGE Discovery programme as of June 2023. This will be used for strategic optimisation and decision making for the funding period 2025-2028. BRIDGE Discovery is an open-topic funding programme at the interface between basic and applied research, which is jointly funded and implemented by the Swiss National Science Foundation (SNSF) and Innosuisse - Swiss Agency for Innovation Promotion. The study used a mixed-methods approach to gather information about the programme context and the funded projects: analyses of programme documents; interviews with programme managers, members of the steering committee, the evaluation panel, etc.; analyses of project data and reports; interviews with principal investigators and implementation partners; and an online survey. The results of the study indicate a rather large gap between the portfolios of the SNSF and Innosuisse, resulting in a high demand for BRIDGE Discovery. Given the current budget constraints, options for a more comprehensive picture of portfolio integration are outlined.
APA, Harvard, Vancouver, ISO, etc. styles
