Selected scientific literature on the topic "Primal-Dual learning algorithm"

Cite a source in APA, MLA, Chicago, Harvard, and many other citation styles.


Browse the list of current articles, books, theses, conference proceedings, and other scientific sources on the topic "Primal-Dual learning algorithm".

Next to every source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract (summary) of the work online, whenever it is present in the metadata.

Journal articles on the topic "Primal-Dual learning algorithm"

1

Overman, Tom, Garrett Blum, and Diego Klabjan. "A Primal-Dual Algorithm for Hybrid Federated Learning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (March 24, 2024): 14482–89. http://dx.doi.org/10.1609/aaai.v38i13.29363.

Full text
Abstract:
Very few methods exist for hybrid federated learning, where clients only hold subsets of both features and samples. Yet this scenario is very important in practical settings. We provide a fast, robust algorithm for hybrid federated learning that hinges on Fenchel duality. We prove convergence of the algorithm to the same solution as if the model were trained centrally in a variety of practical regimes. Furthermore, we provide experimental results that demonstrate the performance improvements of the algorithm over a commonly used method in federated learning, FedAvg, and an existing hybrid FL algorithm, HyFEM. We also provide privacy considerations and the necessary steps to protect client data.
2

Yang, Peng, and Ping Li. "Distributed Primal-Dual Optimization for Online Multi-Task Learning". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 6631–38. http://dx.doi.org/10.1609/aaai.v34i04.6139.

Full text
Abstract:
Conventional online multi-task learning algorithms suffer from two critical limitations: 1) heavy communication caused by delivering high-velocity sequential data to a central machine; 2) expensive runtime complexity for building task relatedness. To address these issues, in this paper we consider a setting where multiple tasks are geographically located in different places, and one task can synchronize data with others to leverage knowledge of related tasks. Specifically, we propose an adaptive primal-dual algorithm, which not only captures task-specific noise in adversarial learning but also carries out a projection-free update with runtime efficiency. Moreover, our model is well suited to decentralized, periodically connected tasks, as it allows energy-starved or bandwidth-constrained tasks to postpone the update. Theoretical results demonstrate the convergence guarantee of our distributed algorithm with an optimal regret. Empirical results confirm that the proposed model is highly effective on various real-world datasets.
3

Wang, Shuai, Yanqing Xu, Zhiguo Wang, Tsung-Hui Chang, Tony Q. S. Quek, and Defeng Sun. "Beyond ADMM: A Unified Client-Variance-Reduced Adaptive Federated Learning Framework". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (June 26, 2023): 10175–83. http://dx.doi.org/10.1609/aaai.v37i8.26212.

Full text
Abstract:
As a novel distributed learning paradigm, federated learning (FL) faces serious challenges in dealing with massive clients with heterogeneous data distributions and computation and communication resources. Various client-variance-reduction schemes and client sampling strategies have been introduced to improve the robustness of FL. Among others, primal-dual algorithms such as the alternating direction method of multipliers (ADMM) have been found to be resilient to the data distribution and to outperform most primal-only FL algorithms. However, the reason behind this has remained a mystery. In this paper, we first reveal that federated ADMM is essentially a client-variance-reduced algorithm. While this explains the inherent robustness of federated ADMM, its vanilla version lacks the ability to adapt to the degree of client heterogeneity. Moreover, the global model at the server under client sampling is biased, which slows down practical convergence. To go beyond ADMM, we propose a novel primal-dual FL algorithm, termed FedVRA, that allows one to adaptively control the variance-reduction level and the bias of the global model. In addition, FedVRA unifies several representative FL algorithms in the sense that they are either special instances of FedVRA or are close to it. Extensions of FedVRA to semi-/un-supervised learning are also presented. Experiments on (semi-)supervised image classification tasks demonstrate the superiority of FedVRA over existing schemes in learning scenarios with massive heterogeneous clients and client sampling.
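As a reading aid, the sketch below shows a minimal consensus-ADMM loop for federated ridge regression, illustrating the primal-dual structure (local primal solve, server averaging, dual ascent) that federated ADMM-type methods like the one above build on. It is not the paper's FedVRA algorithm; the quadratic local solve, rho, lam, and the synthetic client data are illustrative assumptions.

```python
import numpy as np

# Minimal consensus-ADMM sketch for federated ridge regression (NOT FedVRA):
# each client keeps a local primal variable w_i and a scaled dual u_i; the
# server only averages. rho, lam and the data split are assumptions.

def local_update(X, y, z, u, rho, lam=0.1):
    # Client i solves: min_w ||Xw - y||^2 + lam*||w||^2 + (rho/2)*||w - z + u||^2
    d = X.shape[1]
    A = X.T @ X + (lam + rho / 2) * np.eye(d)
    b = X.T @ y + (rho / 2) * (z - u)
    return np.linalg.solve(A, b)

def federated_admm(clients, d, rho=1.0, rounds=100):
    z = np.zeros(d)                                   # global (server) model
    w = [np.zeros(d) for _ in clients]                # local primal variables
    u = [np.zeros(d) for _ in clients]                # scaled dual variables
    for _ in range(rounds):
        for i, (X, y) in enumerate(clients):
            w[i] = local_update(X, y, z, u[i], rho)   # primal step, on the clients
        z = np.mean([wi + ui for wi, ui in zip(w, u)], axis=0)  # server aggregation
        for i in range(len(clients)):
            u[i] += w[i] - z                          # dual ascent step
    return z

# Toy usage: 4 clients holding disjoint samples of the same regression task.
rng = np.random.default_rng(0)
w_true = rng.normal(size=5)
clients = []
for _ in range(4):
    X = rng.normal(size=(50, 5))
    clients.append((X, X @ w_true + 0.01 * rng.normal(size=50)))
print(np.round(federated_admm(clients, d=5), 3))
```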
4

Lai, Hanjiang, Yan Pan, Cong Liu, Liang Lin, and Jie Wu. "Sparse Learning-to-Rank via an Efficient Primal-Dual Algorithm". IEEE Transactions on Computers 62, no. 6 (June 2013): 1221–33. http://dx.doi.org/10.1109/tc.2012.62.

Full text
5

Tao, Wei, Wei Li, Zhisong Pan, and Qing Tao. "Gradient Descent Averaging and Primal-dual Averaging for Strongly Convex Optimization". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 11 (May 18, 2021): 9843–50. http://dx.doi.org/10.1609/aaai.v35i11.17183.

Full text
Abstract:
Averaging schemes have attracted extensive attention in deep learning as well as in traditional machine learning. They achieve theoretically optimal convergence and also improve empirical model performance. However, there is still a lack of sufficient convergence analysis for strongly convex optimization. Typically, the convergence of the last iterate of gradient descent methods, referred to as individual convergence, fails to attain optimality due to a logarithmic factor. To remove this factor, we first develop gradient descent averaging (GDA), a general projection-based dual averaging algorithm for the strongly convex setting. We further present primal-dual averaging for strongly convex cases (SC-PDA), where primal and dual averaging schemes are utilized simultaneously. We prove that GDA yields the optimal convergence rate in terms of output averaging, while SC-PDA attains the optimal individual convergence. Several experiments on SVMs and deep learning models validate the correctness of the theoretical analysis and the effectiveness of the algorithms.
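For readers unfamiliar with output averaging, the toy sketch below runs SGD on a strongly convex least-squares problem and maintains a polynomially weighted average of the iterates alongside the last iterate. The quadratic objective, step sizes, and weights are illustrative assumptions, not the paper's GDA or SC-PDA updates.

```python
import numpy as np

# Toy illustration of iterate averaging for strongly convex SGD (an assumption-laden
# sketch, not GDA/SC-PDA): weights proportional to t give the classic polynomial
# averaging often used to obtain O(1/T) rates.

rng = np.random.default_rng(0)
w_star = np.array([1.0, -2.0])
w, avg = np.zeros(2), np.zeros(2)
T = 5000
for t in range(1, T + 1):
    x = rng.normal(size=2)
    y = w_star @ x + 0.5 * rng.normal()
    g = (w @ x - y) * x                                  # stochastic gradient of 0.5*(w.x - y)^2
    w -= g / t                                           # step size 1/(mu*t), with mu taken as 1
    avg = (1 - 2 / (t + 1)) * avg + (2 / (t + 1)) * w    # average with weights proportional to t

print("last-iterate error:", round(float(np.linalg.norm(w - w_star)), 4))
print("averaged error:    ", round(float(np.linalg.norm(avg - w_star)), 4))
```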
6

Ding, Yuhao, and Javad Lavaei. "Provably Efficient Primal-Dual Reinforcement Learning for CMDPs with Non-stationary Objectives and Constraints". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (June 26, 2023): 7396–404. http://dx.doi.org/10.1609/aaai.v37i6.25900.

Full text
Abstract:
We consider primal-dual-based reinforcement learning (RL) in episodic constrained Markov decision processes (CMDPs) with non-stationary objectives and constraints, which plays a central role in ensuring the safety of RL in time-varying environments. In this problem, the reward/utility functions and the state transition functions are both allowed to vary arbitrarily over time as long as their cumulative variations do not exceed certain known variation budgets. Designing safe RL algorithms in time-varying environments is particularly challenging because of the need to integrate constraint violation reduction, safe exploration, and adaptation to the non-stationarity. To this end, we identify two alternative conditions on the time-varying constraints under which we can guarantee safety in the long run. We also propose the Periodically Restarted Optimistic Primal-Dual Proximal Policy Optimization (PROPD-PPO) algorithm, which works with either of the two conditions. Furthermore, a dynamic regret bound and a constraint violation bound are established for the proposed algorithm in both the linear kernel CMDP function approximation setting and the tabular CMDP setting under the two alternative conditions. This paper provides the first provably efficient algorithm for non-stationary CMDPs with safe exploration.
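The alternating Lagrangian updates that primal-dual CMDP methods of this kind rely on can be seen in the following toy sketch on a one-state CMDP (a constrained bandit) with known reward and cost vectors. It is not PROPD-PPO; the action values, budget, and step sizes are assumptions made purely for illustration.

```python
import numpy as np

# Primal-dual template on a constrained bandit (one-state CMDP), NOT PROPD-PPO:
# ascend the Lagrangian in the policy parameters, then ascend the projected
# multiplier on the constraint violation. All numbers below are assumptions.

rewards = np.array([1.0, 0.6, 0.2])       # per-action expected rewards (assumed)
costs   = np.array([0.9, 0.4, 0.1])       # per-action expected costs (assumed)
budget  = 0.5                             # constraint: E_pi[cost] <= budget

theta, lam = np.zeros(3), 0.0             # policy logits (primal), multiplier (dual)
for _ in range(2000):
    pi = np.exp(theta - theta.max()); pi /= pi.sum()        # softmax policy
    g = rewards - lam * costs                               # Lagrangian payoff per action
    theta += 0.05 * pi * (g - pi @ g)                       # exact policy-gradient ascent step
    lam = max(0.0, lam + 0.05 * (pi @ costs - budget))      # projected dual ascent

print("policy:", np.round(pi, 3), " E[cost]:", round(float(pi @ costs), 3), " lambda:", round(lam, 3))
```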
7

Bai, Qinbo, Amrit Singh Bedi, Mridul Agarwal, Alec Koppel, and Vaneet Aggarwal. "Achieving Zero Constraint Violation for Constrained Reinforcement Learning via Primal-Dual Approach". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 4 (June 28, 2022): 3682–89. http://dx.doi.org/10.1609/aaai.v36i4.20281.

Full text
Abstract:
Reinforcement learning is widely used in applications where one needs to make sequential decisions while interacting with the environment. The problem becomes more challenging when the decisions must also satisfy safety constraints. The problem is mathematically formulated as a constrained Markov decision process (CMDP). In the literature, various algorithms are available to solve CMDP problems in a model-free manner and achieve an epsilon-optimal cumulative reward with epsilon-feasible policies. An epsilon-feasible policy suffers from constraint violation. An important question is whether we can achieve an epsilon-optimal cumulative reward with zero constraint violation. To achieve that, we advocate the use of a randomized primal-dual approach to solving CMDP problems and propose a conservative stochastic primal-dual algorithm (CSPDA), which is shown to exhibit O(1/epsilon^2) sample complexity for achieving an epsilon-optimal cumulative reward with zero constraint violation. In prior work, the best available sample complexity for an epsilon-optimal policy with zero constraint violation was O(1/epsilon^5). Hence, the proposed algorithm provides a significant improvement over the state of the art.
8

Bai, Qinbo, Amrit Singh Bedi, and Vaneet Aggarwal. "Achieving Zero Constraint Violation for Constrained Reinforcement Learning via Conservative Natural Policy Gradient Primal-Dual Algorithm". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (June 26, 2023): 6737–44. http://dx.doi.org/10.1609/aaai.v37i6.25826.

Full text
Abstract:
We consider the problem of constrained Markov decision processes (CMDPs) in continuous state-action spaces, where the goal is to maximize the expected cumulative reward subject to some constraints. We propose a novel Conservative Natural Policy Gradient Primal-Dual Algorithm (CNPGPD) that achieves zero constraint violation while matching state-of-the-art convergence results for the objective value function. For general policy parametrization, we prove convergence of the value function to the global optimum up to an approximation error due to the restricted policy class. We improve the sample complexity of the existing constrained NPGPD algorithm. To the best of our knowledge, this is the first work to establish zero constraint violation with natural policy gradient-style algorithms for infinite-horizon discounted CMDPs. We demonstrate the merits of the proposed algorithm via experimental evaluations.
9

Gupta, Ankita, Lakhwinder Kaur, and Gurmeet Kaur. "Drought stress detection technique for wheat crop using machine learning". PeerJ Computer Science 9 (May 19, 2023): e1268. http://dx.doi.org/10.7717/peerj-cs.1268.

Full text
Abstract:
The workflow of this research is based on numerous hypotheses involving the use of pre-processing methods, wheat canopy segmentation methods, and whether existing models from past research can be adapted to classify wheat crop water stress. To construct an automated model for water stress detection, it was found that the pre-processing operations of total variation denoising with an L1 data fidelity term (TV-L1), solved with a primal-dual algorithm, and min-max contrast stretching are the most useful. For wheat canopy segmentation, a curve-fit-based K-means algorithm (Cfit-kmeans) was validated as the most accurate segmentation method using the intersection-over-union metric. For automated water stress detection, rapid prototyping of machine learning models revealed that only nine models needed to be explored. After extensive grid-search-based hyper-parameter tuning and 10-fold cross-validation, it was found that, out of the nine machine learning algorithms tested, the random forest algorithm has the highest global diagnostic accuracy of 91.164% and is the most suitable for constructing water stress detection models.
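Since the abstract only names the TV-L1 primal-dual step, the sketch below shows a textbook Chambolle-Pock style iteration for TV-L1 denoising. The regularization weight, step sizes, and synthetic image are assumptions, and this is not necessarily the exact variant the authors used.

```python
import numpy as np

# Textbook Chambolle-Pock style primal-dual iteration for TV-L1 denoising
# (a generic sketch with assumed parameters, not the authors' exact pipeline).

def grad(u):                                   # forward differences
    gx, gy = np.zeros_like(u), np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def div(px, py):                               # negative adjoint of grad
    dx, dy = np.zeros_like(px), np.zeros_like(py)
    dx[:, 0] = px[:, 0]; dx[:, 1:-1] = px[:, 1:-1] - px[:, :-2]; dx[:, -1] = -px[:, -2]
    dy[0, :] = py[0, :]; dy[1:-1, :] = py[1:-1, :] - py[:-2, :]; dy[-1, :] = -py[-2, :]
    return dx + dy

def tv_l1_denoise(f, lam=1.0, tau=0.25, sigma=0.35, iters=300):
    u, u_bar = f.copy(), f.copy()
    px, py = np.zeros_like(f), np.zeros_like(f)
    for _ in range(iters):
        gx, gy = grad(u_bar)
        px, py = px + sigma * gx, py + sigma * gy
        scale = np.maximum(1.0, np.sqrt(px**2 + py**2) / lam)          # dual prox: project onto lam-ball
        px, py = px / scale, py / scale
        u_old = u
        v = u + tau * div(px, py)                                      # primal descent direction
        u = f + np.sign(v - f) * np.maximum(np.abs(v - f) - tau, 0.0)  # prox of the L1 data term
        u_bar = 2 * u - u_old                                          # extrapolation step
    return u

noisy = np.eye(64) + 0.3 * np.random.default_rng(1).normal(size=(64, 64))
print(tv_l1_denoise(noisy).shape)
```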
10

Liu, Bo, Ian Gemp, Mohammad Ghavamzadeh, Ji Liu, Sridhar Mahadevan, and Marek Petrik. "Proximal Gradient Temporal Difference Learning: Stable Reinforcement Learning with Polynomial Sample Complexity". Journal of Artificial Intelligence Research 63 (November 15, 2018): 461–94. http://dx.doi.org/10.1613/jair.1.11251.

Full text
Abstract:
In this paper, we introduce proximal gradient temporal difference learning, which provides a principled way of designing and analyzing true stochastic gradient temporal difference learning algorithms. We show how gradient TD (GTD) reinforcement learning methods can be formally derived, not by starting from their original objective functions, as previously attempted, but rather from a primal-dual saddle-point objective function. We also conduct a saddle-point error analysis to obtain finite-sample bounds on their performance. Previous analyses of this class of algorithms use stochastic approximation techniques to prove asymptotic convergence and do not provide any finite-sample analysis. We also propose an accelerated algorithm, called GTD2-MP, that uses proximal "mirror maps" to yield an improved convergence rate. The results of our theoretical analysis imply that the GTD family of algorithms is comparable to, and may indeed be preferred over, existing least squares TD methods for off-policy learning, due to their linear complexity. We provide experimental results showing the improved performance of our accelerated gradient TD methods.
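For context, the GTD2 recursion that the paper recasts as a primal-dual saddle-point method has a very compact form with linear features; the random-walk environment, feature matrix, and step sizes below are assumptions made only to show the two-timescale update.

```python
import numpy as np

# Compact GTD2 sketch with linear features (the recursion the paper reinterprets
# as a primal-dual saddle-point method); the toy MDP, features and step sizes are
# illustrative assumptions, not the paper's experiments.

rng = np.random.default_rng(0)
n_states, d, gamma = 5, 3, 0.9
phi = rng.normal(size=(n_states, d))       # fixed feature map
theta, w = np.zeros(d), np.zeros(d)        # primal (value) and auxiliary/dual weights
alpha, beta = 0.01, 0.05

s = 0
for _ in range(20000):
    s_next = rng.integers(n_states)        # toy dynamics: uniformly random next state
    r = 1.0 if s_next == n_states - 1 else 0.0
    delta = r + gamma * phi[s_next] @ theta - phi[s] @ theta        # TD error
    theta += alpha * (phi[s] - gamma * phi[s_next]) * (phi[s] @ w)  # GTD2 primal update
    w += beta * (delta - phi[s] @ w) * phi[s]                       # auxiliary (dual) update
    s = s_next

print("learned value weights:", np.round(theta, 3))
```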

Theses on the topic "Primal-Dual learning algorithm"

1

Bouvier, Louis. "Apprentissage structuré et optimisation combinatoire : contributions méthodologiques et routage d'inventaire chez Renault". Electronic Thesis or Diss., Marne-la-vallée, ENPC, 2024. http://www.theses.fr/2024ENPC0046.

Full text
Abstract:
This thesis stems from operations research challenges faced by the Renault supply chain. To address them, we make methodological contributions to the architecture and training of neural networks with combinatorial optimization (CO) layers, and we combine them with new matheuristics to solve Renault's industrial inventory routing problems. In Part I, we detail applications of neural networks with CO layers in operations research. We notably introduce a methodology to approximate constraints. We also solve some off-policy learning issues that arise when using such layers to encode policies for Markov decision processes with large state and action spaces. While most studies on CO layers rely on supervised learning, we introduce a primal-dual alternating minimization scheme for empirical risk minimization. Our algorithm is deep learning-compatible, scalable to large combinatorial spaces, and generic. In Part II, we consider Renault's European packaging return logistics. Our rolling-horizon policy for the operational-level decisions is based on a new large neighborhood search for the deterministic variant of the problem. We demonstrate its efficiency on large-scale industrial instances, which we release publicly together with our code and solutions. We combine historical data and experts' predictions to improve performance. A version of our policy has been used daily in production since March 2023. We also consider the tactical-level route contracting process. The sheer scale of this industrial problem prevents the use of classic stochastic optimization approaches. We introduce a new algorithm, based on the methodological contributions of Part I, for empirical risk minimization.
2

Hendrich, Christopher. "Proximal Splitting Methods in Nonsmooth Convex Optimization". Doctoral thesis, Universitätsbibliothek Chemnitz, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-149548.

Full text
Abstract:
This thesis is concerned with the development of novel numerical methods for solving nondifferentiable convex optimization problems in real Hilbert spaces and with the investigation of their asymptotic behavior. To this end, we also make use of monotone operator theory, as some of the provided algorithms are originally designed to solve monotone inclusion problems. After introducing basic notation and preliminary results in convex analysis, we derive two numerical methods based on different smoothing strategies for solving nondifferentiable convex optimization problems. The first approach, known as the double smoothing technique, solves the optimization problem with some given a priori accuracy by applying two regularizations to its conjugate dual problem. A special fast gradient method then solves the regularized dual problem such that an approximate primal solution can be reconstructed from it. The second approach affects the primal optimization problem directly by applying a single regularization to it and is capable of using variable smoothing parameters, which lead to a more accurate approximation of the original problem as the iteration counter increases. We then derive and investigate different primal-dual methods in real Hilbert spaces. In general, one considerable advantage of primal-dual algorithms is that they provide a complete splitting philosophy, in that the resolvents arising in the iterative process are taken separately for each maximally monotone operator occurring in the problem description. We first analyze the forward-backward-forward algorithm of Combettes and Pesquet in terms of its convergence rate for the objective of a nondifferentiable convex optimization problem. Additionally, we propose accelerations of this method under the additional assumption that certain monotone operators occurring in the problem formulation are strongly monotone. Subsequently, we derive two Douglas–Rachford type primal-dual methods for solving monotone inclusion problems involving finite sums of linearly composed parallel sum type monotone operators. To prove their asymptotic convergence, we use a common product Hilbert space strategy by reformulating the corresponding inclusion problem suitably such that the Douglas–Rachford algorithm can be applied to it. Finally, we propose two primal-dual algorithms relying on forward-backward and forward-backward-forward approaches for solving monotone inclusion problems involving parallel sums of linearly composed monotone operators. The last part of this thesis deals with different numerical experiments in which we compare our methods against algorithms from the literature. The problems that arise in this part are manifold, and they reflect the importance of this field of research, as convex optimization problems appear in many applications of interest.

Conference papers on the topic "Primal-Dual learning algorithm"

1

Qin, Jingsheng, Lingjian Ye, Xinmin Zhang, Feifan Shen, Wei Wang, and Longying Mao. "A Twin Primal-Dual DDPG Algorithm for Safety-Constrained Reinforcement Learning". In 2024 China Automation Congress (CAC), 2400–2405. IEEE, 2024. https://doi.org/10.1109/cac63892.2024.10864510.

Full text
2

Lee, Donghwan, and Niao He. "Stochastic Primal-Dual Q-Learning Algorithm For Discounted MDPs". In 2019 American Control Conference (ACC). IEEE, 2019. http://dx.doi.org/10.23919/acc.2019.8815275.

Full text
3

Lee, Donghwan, Hyungjin Yoon, and Naira Hovakimyan. "Primal-Dual Algorithm for Distributed Reinforcement Learning: Distributed GTD". In 2018 IEEE Conference on Decision and Control (CDC). IEEE, 2018. http://dx.doi.org/10.1109/cdc.2018.8619839.

Full text
4

Bianchi, Pascal, Walid Hachem, and Franck Iutzeler. "A stochastic coordinate descent primal-dual algorithm and applications". In 2014 IEEE 24th International Workshop on Machine Learning for Signal Processing (MLSP). IEEE, 2014. http://dx.doi.org/10.1109/mlsp.2014.6958866.

Full text
5

Wang, Shijun, Baocheng Zhu, Lintao Ma, and Yuan Qi. "A Riemannian Primal-dual Algorithm Based on Proximal Operator and its Application in Metric Learning". In 2019 International Joint Conference on Neural Networks (IJCNN). IEEE, 2019. http://dx.doi.org/10.1109/ijcnn.2019.8852367.

Full text
6

Qu, Yang, Jinming Ma, and Feng Wu. "Safety Constrained Multi-Agent Reinforcement Learning for Active Voltage Control". In Thirty-Third International Joint Conference on Artificial Intelligence {IJCAI-24}. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/21.

Full text
Abstract:
Active voltage control presents a promising avenue for relieving power congestion and enhancing voltage quality, taking advantage of the distributed controllable generators in the power network, such as roof-top photovoltaics. While Multi-Agent Reinforcement Learning (MARL) has emerged as a compelling approach to address this challenge, existing MARL approaches tend to overlook the constrained optimization nature of this problem, failing to guarantee safety constraints. In this paper, we formalize the active voltage control problem as a constrained Markov game and propose a safety-constrained MARL algorithm. We extend the primal-dual optimization RL method to multi-agent settings and augment it with a novel double safety estimation approach to learn the policy and to update the Lagrange multiplier. In addition, we propose different cost functions and investigate their influence on the behavior of our constrained MARL method. We evaluate our approach in a power distribution network simulation environment with real-world-scale scenarios. Experimental results demonstrate the effectiveness of the proposed method compared with state-of-the-art MARL methods.
7

Sankaran, Raman, Francis Bach, and Chiranjib Bhattacharyya. "Learning With Subquadratic Regularization: A Primal-Dual Approach". In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/272.

Full text
Abstract:
Subquadratic norms have been studied recently in the context of structured sparsity, where they have been shown to be more beneficial than conventional regularizers in applications such as image denoising, compressed sensing, and banded covariance estimation. While existing works have been successful in learning structured sparse models such as trees and graphs, their associated optimization procedures have been inefficient because of hard-to-evaluate proximal operators of the norms. In this paper, we study the computational aspects of learning with subquadratic norms in a general setup. Our main contributions are two proximal-operator-based algorithms, ADMM-η and CP-η, which apply generically to these learning problems with convex loss functions and achieve a proven convergence rate of O(1/T) after T iterations. These algorithms are derived in a primal-dual framework, which has not been examined for subquadratic norms. We illustrate the efficiency of the developed algorithms in the context of tree-structured sparsity, where they comprehensively outperform relevant baselines.
8

Wan, Yuanyu, Nan Wei, and Lijun Zhang. "Efficient Adaptive Online Learning via Frequent Directions". In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/381.

Full text
Abstract:
By employing time-varying proximal functions, adaptive subgradient methods (ADAGRAD) have improved the regret bound and been widely used in online learning and optimization. However, ADAGRAD with full-matrix proximal functions (ADA-FULL) cannot deal with large-scale problems due to impractical time and space complexities, though it has better performance when gradients are correlated. In this paper, we propose ADA-FD, an efficient variant of ADA-FULL based on a deterministic matrix sketching technique called frequent directions. Following ADA-FULL, we incorporate our ADA-FD into both the primal-dual subgradient method and the composite mirror descent method to develop two efficient methods. By maintaining and manipulating low-rank matrices, at each iteration the space complexity is reduced from $O(d^2)$ to $O(\tau d)$ and the time complexity is reduced from $O(d^3)$ to $O(\tau^2d)$, where $d$ is the dimensionality of the data and $\tau \ll d$ is the sketching size. Theoretical analysis reveals that the regret of our methods is close to that of ADA-FULL as long as the outer product matrix of gradients is approximately low-rank. Experimental results show that our ADA-FD is comparable to ADA-FULL and outperforms other state-of-the-art algorithms in online convex optimization as well as in training convolutional neural networks (CNNs).
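As background, the frequent-directions sketch that ADA-FD maintains in place of the full gradient outer-product matrix can be written in a few lines; the gradient stream and sketch size below are assumptions, and this shows only the sketching primitive, not the ADA-FD regret-minimizing update itself.

```python
import numpy as np

# Frequent-directions sketching primitive (the building block of ADA-FD, not the
# ADA-FD update itself); the gradient stream and sketch size ell are assumptions.

def frequent_directions(stream, d, ell):
    B = np.zeros((ell, d))                       # sketch with ell << d rows
    for g in stream:                             # g: one d-dimensional gradient
        empty = np.where(~B.any(axis=1))[0]
        if len(empty) == 0:                      # sketch full: shrink via SVD
            _, s, Vt = np.linalg.svd(B, full_matrices=False)
            s = np.sqrt(np.maximum(s**2 - s[ell // 2]**2, 0.0))
            B = s[:, None] * Vt                  # at least half the rows become zero
            empty = np.where(~B.any(axis=1))[0]
        B[empty[0]] = g                          # insert new gradient into a free row
    return B                                     # B.T @ B approximates sum_t g_t g_t^T

rng = np.random.default_rng(0)
grads = [rng.normal(size=16) for _ in range(200)]
print(frequent_directions(grads, d=16, ell=4).shape)
```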
