
Journal articles on the topic 'Fair combinatorial optimization'


Consult the top 47 journal articles for your research on the topic 'Fair combinatorial optimization.'


1

Bourdache, Nadjet, and Patrice Perny. "Active Preference Learning Based on Generalized Gini Functions: Application to the Multiagent Knapsack Problem." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 7741–48. http://dx.doi.org/10.1609/aaai.v33i01.33017741.

Abstract:
We consider the problem of actively eliciting preferences from a decision maker supervising a collective decision process in the context of fair multiagent combinatorial optimization. Individual preferences are assumed to be known and represented by linear utility functions defined on a combinatorial domain, and the social utility is defined as a generalized Gini social evaluation function (GSF) for the sake of fairness. The GSF is a non-linear aggregation function parameterized by weighting coefficients which allow fine control of the equity requirement in the aggregation of individual utilities. The paper focuses on the elicitation of these weights by active learning in the context of the fair multiagent knapsack problem. We introduce and compare several incremental decision procedures interleaving an adaptive preference elicitation procedure with a combinatorial optimization algorithm to determine a GSF-optimal solution. We establish an upper bound on the number of queries and provide numerical tests to show the efficiency of the proposed approach.
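The generalized Gini social evaluation function described in this abstract is an ordered weighted average: individual utilities are sorted in increasing order and multiplied by decreasing weights, so the worst-off agents carry the largest coefficients. A minimal sketch, with function name and weight values chosen for illustration (not taken from the paper):

```python
def gsf(utilities, weights):
    """Generalized Gini social evaluation function: a weighted sum of
    utilities sorted in increasing order. Decreasing weights put the
    largest coefficients on the worst-off agents, rewarding equity."""
    # weights must be non-increasing for the function to favour equity
    assert all(w1 >= w2 for w1, w2 in zip(weights, weights[1:]))
    return sum(w * u for w, u in zip(weights, sorted(utilities)))

# With weights (0.75, 0.25), the even profile scores higher even though
# both utility profiles have the same total of 12.
unequal = gsf([2, 10], [0.75, 0.25])  # 0.75*2 + 0.25*10 = 4.0
equal = gsf([6, 6], [0.75, 0.25])     # 0.75*6 + 0.25*6 = 6.0
```

Tuning how fast the weights decrease is exactly the "fine control of the equity requirement" the abstract mentions: steeper weights penalize inequality more.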
2

Wang, Kai, Haoyu Liu, Zhipeng Hu, Xiaochuan Feng, Minghao Zhao, Shiwei Zhao, Runze Wu, Xudong Shen, Tangjie Lv, and Changjie Fan. "EnMatch: Matchmaking for Better Player Engagement via Neural Combinatorial Optimization." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (March 24, 2024): 9098–106. http://dx.doi.org/10.1609/aaai.v38i8.28760.

Abstract:
Matchmaking is a core task in e-sports and online games, as it contributes to player engagement and further influences the game's lifecycle. Previous methods focus on creating fair games at all times. They divide players into different tiers based on skill levels and only select players from the same tier for each game. Though this strategy can ensure fair matchmaking, it is not always good for player engagement. In this paper, we propose a novel Engagement-oriented Matchmaking (EnMatch) framework to ensure fair games and simultaneously enhance player engagement. Two main issues need to be addressed. First, it is unclear how to measure the impact of different team compositions and confrontations on player engagement during the game considering the variety of player characteristics. Second, such a detailed consideration on every single player during matchmaking will result in an NP-hard combinatorial optimization problem with non-linear objectives. In light of these challenges, we turn to real-world data analysis to reveal engagement-related factors. The resulting insights guide the development of engagement modeling, enabling the estimation of quantified engagement before a match is completed. To handle the combinatorial optimization problem, we formulate the problem into a reinforcement learning framework, in which a neural combinatorial optimization problem is built and solved. The performance of EnMatch is finally demonstrated through the comparison with other state-of-the-art methods based on several real-world datasets and online deployments on two games.
3

Moulin, Hervé. "Cost Sharing in Networks: Some Open Questions." International Game Theory Review 15, no. 02 (June 2013): 1340001. http://dx.doi.org/10.1142/s021919891340001x.

Abstract:
The fertile application of cooperative game techniques to cost sharing problems on networks has so far concentrated on the Stand Alone core test of fairness and/or stability, and ignored many combinatorial optimization problems where this core can be empty. I submit there is much room for an axiomatic discussion of fair division in the latter problems, where Stand Alone objections are not implementable. But the computational complexity of optimal solutions is still a very severe obstacle to this approach.
4

Adubi, Stephen A., Olufunke O. Oladipupo, and Oludayo O. Olugbara. "Evolutionary Algorithm-Based Iterated Local Search Hyper-Heuristic for Combinatorial Optimization Problems." Algorithms 15, no. 11 (October 31, 2022): 405. http://dx.doi.org/10.3390/a15110405.

Abstract:
Hyper-heuristics are widely used for solving numerous complex computational search problems because of their intrinsic capability to generalize across problem domains. The fair-share iterated local search is one of the most successful hyper-heuristics for cross-domain search with outstanding performances on six problem domains. However, it has recorded low performances on three supplementary problems, namely knapsack, quadratic assignment, and maximum-cut problems, which undermines its credibility across problem domains. The purpose of this study was to design an evolutionary algorithm-based iterated local search (EA-ILS) hyper-heuristic that applies a novel mutation operator to control the selection of perturbative low-level heuristics in searching for optimal sequences for performance improvement. The algorithm was compared to existing ones in the hyper-heuristics flexible (HyFlex) framework to demonstrate its performance across the problem domains of knapsack, quadratic assignment, and maximum cut. The comparative results have shown that the EA-ILS hyper-heuristic can obtain the best median objective function values on 22 out of 30 instances in the HyFlex framework. Moreover, it has achieved superiority in its generalization capability when compared to the reported top-performing hyper-heuristic algorithms.
5

Maleš, Uroš, Dušan Ramljak, Tatjana Jakšić Krüger, Tatjana Davidović, Dragutin Ostojić, and Abhay Haridas. "Controlling the Difficulty of Combinatorial Optimization Problems for Fair Proof-of-Useful-Work-Based Blockchain Consensus Protocol." Symmetry 15, no. 1 (January 3, 2023): 140. http://dx.doi.org/10.3390/sym15010140.

Abstract:
The wide range of Blockchain (BC) applications and BC’s ubiquity come from the fact that BC, as a collection of records linked to each other, is strongly resistant to alteration, protected using cryptography, and maintained autonomously. All these benefits come with a cost, which in BC is expressed by a very high use of energy needed to execute consensus protocols. Traditionally, consensus protocols based on Proof-of-Work (PoW) ensure fairness, but are not very useful. The paradigm proposed in the recent literature, known as Proof-of-Useful-Work (PoUW), assumes the completion of additional useful work for the same amount of resources (energy) used. However, the majority of the proposed PoUW approaches do not adequately consider fairness in balancing and controlling the difficulty of the work miners need to perform. A minority of the studies that do address fairness in miners’ work utilize PoW as a tool to ensure it. Therefore, a general framework to provide a structure for understanding the difficulty of useful work and how it can be used to fine-tune the complexity of miners’ effort in PoUW-based consensus protocols is proposed in this paper. The main characteristic of the proposed framework is that controlling the difficulty and fairness of miners’ work in PoUW-based consensus protocols is achieved exclusively through the useful work. The modules of the framework are discussed, and many research challenges and opportunities are articulated. The benefits of the proposed approach are illustrated taking as an example two optimization algorithms for a variant of the scheduling problem. In addition, the steps that should be taken to make this general framework applicable to any PoUW-based consensus protocols are identified.
6

Wang, Zhenzhong, Qingyuan Zeng, Wanyu Lin, Min Jiang, and Kay Chen Tan. "Generating Diagnostic and Actionable Explanations for Fair Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21690–98. http://dx.doi.org/10.1609/aaai.v38i19.30168.

Abstract:
A plethora of fair graph neural networks (GNNs) have been proposed to promote algorithmic fairness in high-stakes real-life contexts. Meanwhile, explainability is generally proposed to help machine learning practitioners debug models by providing human-understandable explanations. However, little work on explainability has addressed generating explanations for fairness diagnosis in GNNs. From the explainability perspective, this paper explores two questions: what subgraph patterns cause the biased behavior of GNNs, and what actions can practitioners take to rectify the bias? By answering these two questions, this paper aims to produce compact, diagnostic, and actionable explanations for discriminatory behavior. Specifically, we formulate the generation of diagnostic and actionable explanations as a multi-objective combinatorial optimization problem. To solve it, a dedicated multi-objective evolutionary algorithm is presented to ensure GNNs' explainability and fairness in one go. In particular, an influenced-nodes-based gradient approximation is developed to boost the computational efficiency of the evolutionary algorithm. We provide a theoretical analysis to illustrate the effectiveness of the proposed framework. Extensive experiments have been conducted to demonstrate the superiority of the proposed method in terms of classification performance, fairness, and interpretability.
7

Rokbani, Nizar, Pavel Kromer, Ikram Twir, and Adel M. Alimi. "A Hybrid Hierarchical Heuristic-ACO With Local Search Applied to Travelling Salesman Problem, AS-FA-Ls." International Journal of System Dynamics Applications 9, no. 3 (July 2020): 58–73. http://dx.doi.org/10.4018/ijsda.2020070104.

Abstract:
Combinatorial optimization problems attract research interest because they have a wide variety of applications, ranging from route planning and supply-chain optimization to industrial scheduling and the IoT. Solving such problems with heuristics and bio-inspired techniques is an alternative to exact methods, offering acceptable solutions at a fair computational cost. In this article, a new hierarchical hybrid method is proposed that combines Ant Colony Optimization (ACO), the Firefly Algorithm (FA), and local search (AS-FA-Ls). ACO is used in a hierarchical collaboration schema together with FA, which adapts the ACO parameters, and a 2-opt local search strategy is applied to escape suboptimal solutions. The proposed method is compared to similar techniques on the traveling salesman problem (TSP) through a comparative review and experimental investigations on TSP benchmarks. The results show that AS-FA-Ls returned better results than the listed works on berlin52, st70, eil76, rat99, kroA100, and kroA200. The computational investigations also yielded a set of recommended ACO parameters for the TSP instances of the study.
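The local search step referred to here is the classic 2-opt move: reverse the tour segment between two cities whenever doing so shortens the tour. A self-contained sketch of that move (an illustrative baseline, not the authors' AS-FA-Ls code):

```python
import math
from itertools import combinations

def tour_length(tour, pts):
    """Total length of the closed tour visiting pts in the given order."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, pts):
    """Repeatedly reverse the segment between two positions while it
    shortens the tour; stops at a 2-opt local optimum."""
    improved = True
    while improved:
        improved = False
        for i, j in combinations(range(len(tour)), 2):
            if j - i < 2:
                continue  # reversing a single city changes nothing
            cand = tour[:i] + tour[i:j][::-1] + tour[j:]
            if tour_length(cand, pts) < tour_length(tour, pts) - 1e-9:
                tour, improved = cand, True
    return tour
```

On a unit square with a crossing tour, one segment reversal uncrosses the edges and recovers the optimal perimeter of 4.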
8

Lujak, Marin, Stefano Giordani, Andrea Omicini, and Sascha Ossowski. "Decentralizing Coordination in Open Vehicle Fleets for Scalable and Dynamic Task Allocation." Complexity 2020 (July 16, 2020): 1–21. http://dx.doi.org/10.1155/2020/1047369.

Abstract:
One of the major challenges in the coordination of large, open, collaborative, and commercial vehicle fleets is dynamic task allocation. Self-concerned individually rational vehicle drivers have both local and global objectives, which require coordination using some fair and efficient task allocation method. In this paper, we review the literature on scalable and dynamic task allocation focusing on deterministic and dynamic two-dimensional linear assignment problems. We focus on multiagent system representation of open vehicle fleets where dynamically appearing vehicles are represented by software agents that should be allocated to a set of dynamically appearing tasks. We give a comparison and critical analysis of recent research results focusing on centralized, distributed, and decentralized solution approaches. Moreover, we propose mathematical models for dynamic versions of the following assignment problems well known in combinatorial optimization: the assignment problem, bottleneck assignment problem, fair matching problem, dynamic minimum deviation assignment problem, Σk-assignment problem, the semiassignment problem, the assignment problem with side constraints, and the assignment problem while recognizing agent qualification; all while considering the main aspect of open vehicle fleets: random arrival of tasks and vehicles (agents) that may become available after assisting previous tasks or by participating in the fleet at times based on individual interest.
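The assignment variants this survey lists differ mainly in the objective applied to the selected cost entries. For tiny instances, the contrast between the linear (minimize the sum) and bottleneck (minimize the worst single cost) versions can be shown by brute force; this sketch is purely illustrative and is not one of the paper's dynamic models:

```python
from itertools import permutations

def best_assignment(cost, objective):
    """Assign agent i to task p[i], minimizing `objective` over the
    chosen costs: `sum` gives the linear assignment problem, `max`
    gives the bottleneck assignment problem. Exponential-time sketch."""
    n = len(cost)
    return min(permutations(range(n)),
               key=lambda p: objective(cost[i][p[i]] for i in range(n)))

cost = [[1, 4],
        [4, 6]]
linear = best_assignment(cost, sum)      # total cost 7 via tasks (0, 1)
bottleneck = best_assignment(cost, max)  # worst cost 4 via tasks (1, 0)
```

The two objectives pick different assignments on the same cost matrix: the linear version tolerates one expensive agent to cut the total, while the bottleneck version is the "fair matching" flavor that protects the worst-off agent.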
9

Li, Xia, and Buhong Wang. "Thinned Virtual Array for Cramer Rao Bound Optimization in MIMO Radar." International Journal of Antennas and Propagation 2021 (January 15, 2021): 1–13. http://dx.doi.org/10.1155/2021/1408498.

Abstract:
By transmitting multiple independent waveforms at the transmit side and processing echoes of spatial targets at the receive side, Multiple Input Multiple Output (MIMO) radar enjoys virtual array aperture expansion and more degrees of freedom (DOF), both of which favor direction finding, i.e., estimation of the direction of arrival (DOA). The expanded virtual aperture provides higher angular resolution, which improves the precision of DOA estimation, and the extra DOF brought by waveform diversity can be leveraged to focus energy in certain spatial regions for better direction-finding capacity. However, beamspace methods that match certain beampatterns suffer from deteriorated performance and complexity of implementation, and the advantage of the virtual array aperture is limited by virtual element redundancy. As an important performance indicator of DOA estimation, the Cramer–Rao Bound (CRB) is closely connected to the array configuration of the system. To reduce the complexity of the system and improve CRB performance at the same time, in this paper the virtual array of MIMO radar is designed directly by selecting outputs from matched filters at the receive side. For the sake of fair comparison, scenarios both with and without a priori directions are considered to obtain optimized virtual array configurations. The original combinatorial problems are approximated by sequential convex approximation methods, which produce solutions efficiently. Numerical results demonstrate that the proposed method can provide thinned virtual arrays with excellent CRB performance.
10

Khaled, Smail, and Djebbar Bachir. "Electromagnetism-like mechanism algorithm for hybrid flow-shop scheduling problems." Indonesian Journal of Electrical Engineering and Computer Science 32, no. 3 (December 1, 2023): 1614. http://dx.doi.org/10.11591/ijeecs.v32.i3.pp1614-1620.

Abstract:
Given the interest in and complexity of the hybrid flow shop (HFS) problem in industry, it has been extensively studied; the HFS is a complex combinatorial problem that underlies many real-world applications. In this paper we consider a hybrid flow shop FH4(P3, P2)||C_max and attempt to optimize the makespan, i.e., the completion time of the last task, with an adequate meta-heuristic algorithm based on the electromagnetism-like mechanism (EM). We also present an analysis of the performance of the EM algorithm adapted to HFS scheduling problems. The electromagnetism-like mechanism method gave efficient and fair results compared to particle swarm optimization and a genetic algorithm.
11

Alireza, Valipour, Yadollahi Mohammadreza, Rosli Mohamad Zin, Nordin Yahaya, and Norhazilan Md Noor. "An enhanced multi-objective optimization approach for risk allocation in public–private partnership projects: a case study of Malaysia." Canadian Journal of Civil Engineering 41, no. 2 (February 2014): 164–77. http://dx.doi.org/10.1139/cjce-2013-0342.

Abstract:
The decision making for risk allocation problems in public–private partnership (PPP) projects is a vital process that directly affects the timeliness, cost, and quality of the project. Fair risk allocation is a vital factor to achieve success in the implementation of these projects. It is essential for private and public sectors to apply efficient risk allocation approaches to experience a more effective process of agreement arbitration and to reduce the appearance of dispute during the concession period. The aim of this study is to develop an optimization approach to enhance risk allocation process in PPP projects. The shared risks in projects are identified through comprehensive literature review and questionnaire survey obtained from Malaysian professionals involved in PPP projects. Objective functions are then developed to minimize the total time and cost of the project and maximize the quality while satisfying risk threshold constraints. The combinatorial nature of the risk allocation problem describes a multi-objective situation that can be simulated as a knapsack problem (KP). The formulation of the KP is described and solved applying genetic algorithm (GA). Due to the flexibility of GA, the results are Pareto Optimal solutions that describe the combinations of risk percentages for shared risks in PPP projects.
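The paper maps risk allocation to a knapsack problem (KP) solved with a genetic algorithm. For the single-objective 0/1 knapsack underlying that model, the textbook exact baseline is a dynamic program; this generic sketch is not the paper's GA, just the base model it builds on:

```python
def knapsack(values, weights, capacity):
    """Classic 0/1 knapsack dynamic program: best[c] holds the highest
    total value achievable with total weight <= c."""
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # iterate capacities downwards so each item is used at most once
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

# Three risks/items: picking items 2 and 3 (value 100 + 120) fits the
# capacity of 50 and beats any other combination.
optimum = knapsack([60, 100, 120], [10, 20, 30], 50)
```

The GA becomes attractive precisely where this DP does not scale, i.e., when multiple objectives (time, cost, quality) replace the single value function.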
12

Porter, Jason M., Marvin E. Larsen, J. Wesley Barnes, and John R. Howell. "Metaheuristic Optimization of a Discrete Array of Radiant Heaters." Journal of Heat Transfer 128, no. 10 (March 23, 2006): 1031–40. http://dx.doi.org/10.1115/1.2345430.

Abstract:
The design of radiant enclosures is an active area of research in radiation heat transfer. When design variables are discrete such as for radiant heater arrays with on-off control of individual heaters, current methods of design optimization fail. This paper reports the development of a metaheuristic thermal radiation optimization approach. Two metaheuristic optimization methods are explored: simulated annealing and tabu search. Both approaches are applied to a combinatorial radiant enclosure design problem. Configuration factors are used to develop a dynamic neighborhood for the tabu search algorithm. Results are presented from the combinatorial optimization problem. Tabu search with a problem specific dynamic neighborhood definition is shown to find better solutions than the benchmark simulated annealing approach in less computation time.
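Simulated annealing over a discrete on/off array, as benchmarked in this paper, can be sketched on a toy stand-in for the heater problem: choose which heaters to switch on so that the total power hits a setpoint. All names and the toy objective are illustrative, not the paper's enclosure model:

```python
import math
import random

def anneal(powers, target, steps=5000, t0=10.0, seed=0):
    """Simulated annealing over on/off heater states: flip one heater
    per step, accept worse states with probability exp(-delta/T), and
    cool T geometrically while tracking the best state seen."""
    rng = random.Random(seed)

    def err(s):
        return abs(sum(p for p, on in zip(powers, s) if on) - target)

    state = [0] * len(powers)
    best, best_err = state[:], err(state)
    t = t0
    for _ in range(steps):
        cand = state[:]
        cand[rng.randrange(len(cand))] ^= 1  # flip one heater
        delta = err(cand) - err(state)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            state = cand
            if err(state) < best_err:
                best, best_err = state[:], err(state)
        t *= 0.999  # geometric cooling
    return best, best_err

# best should switch on the 3, 5 and 7 W heaters (3 + 5 + 7 == 15)
best, gap = anneal([3, 5, 7, 11], target=15)
```

Tabu search would replace the temperature-based acceptance with a memory of recently flipped heaters, which is the comparison the paper carries out.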
13

Kuner, Peter, and Birgit Ueberreiter. "Pattern Recognition by Graph Matching—Combinatorial versus Continuous Optimization." International Journal of Pattern Recognition and Artificial Intelligence 02, no. 03 (September 1988): 527–42. http://dx.doi.org/10.1142/s0218001488000303.

Abstract:
A generalization of subgraph isomorphism for the fault-tolerant interpretation of disturbed line images has been achieved. Object recognition is effected by optimal matching of a reference graph to the graph of a distorted image. This optimization is based on the solution of linear and quadratic assignment problems. The efficiency of the procedures developed for this objective has been proved in practical applications. NP-complete problems such as subgraph recognition need exhaustive computation if exact (branch-and-bound) algorithms are used. In contrast to this, heuristics are very fast and sufficiently reliable for less complex relational structures of the kind investigated in the first part of this paper. Constrained continuous optimization techniques, such as relaxation labeling and neural network strategies, solve recognition problems within a reasonable time, even in rather complex relational structures where heuristics can fail. They are also well suited to parallelism. The second part of this paper is devoted exclusively to them.
14

Singh, Vinay Kumar, and Vidushi Sharma. "Elitist Genetic Algorithm Based Energy Balanced Routing Strategy to Prolong Lifetime of Wireless Sensor Networks." Chinese Journal of Engineering 2014 (March 16, 2014): 1–6. http://dx.doi.org/10.1155/2014/437625.

Abstract:
Wireless sensor networks have gained worldwide attention in recent years due to advances in wireless communication. Unequal energy dissipation causes nodes to fail; the factors behind it are, firstly, the distance between the nodes and the base station and, secondly, the distances between the nodes themselves. Using traditional methods, it is difficult to obtain high-precision solutions because the problem is NP-hard. Routing in wireless networks is a combinatorial optimization problem, so genetic algorithms can provide optimized, energy-efficient shortest paths. The proposed algorithm has the inherent advantage that it keeps the elite solutions in the next generation, allowing quick convergence towards the global optimum, and during path selection it takes the energy balance of the network into account so that the lifetime of the network can be prolonged. The results show that the algorithm is efficient at finding the optimal energy-constrained route, converging faster than other traditional methods used for combinatorial optimization problems.
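The elitism this abstract relies on — copying the best individuals unchanged into the next generation so the best fitness never degrades — can be sketched on a stand-in bit-string fitness; the routing encoding itself is omitted, and all names and parameters below are illustrative:

```python
import random

def elitist_ga(fitness, n_bits=12, pop_size=20, elite=2,
               generations=60, seed=1):
    """Generational GA that copies the `elite` best individuals
    unchanged into each new generation, so the best fitness is
    monotone non-decreasing (the property used to speed convergence)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        nxt = [ind[:] for ind in pop[:elite]]          # elitism step
        while len(nxt) < pop_size:
            a, b = rng.sample(pop[:pop_size // 2], 2)  # truncation selection
            cut = rng.randrange(1, n_bits)
            child = a[:cut] + b[cut:]                  # one-point crossover
            child[rng.randrange(n_bits)] ^= 1          # bit-flip mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# "max-ones" fitness stands in for a route-quality score here
best = elitist_ga(sum)
```

In the paper's setting, the fitness would score a candidate route by remaining node energy and path length instead of counting ones.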
15

Rajeswari, Muniyan, Rajakumar Ramalingam, Shakila Basheer, Keerthi Samhitha Babu, Mamoon Rashid, and Ramar Saranya. "Multi-Objective ABC-NM Algorithm for Multi-Dimensional Combinatorial Optimization Problem." Axioms 12, no. 4 (April 19, 2023): 395. http://dx.doi.org/10.3390/axioms12040395.

Abstract:
This article addresses the problem of converting a single-objective combinatorial problem into a multi-objective one using the Pareto front approach. Although existing algorithms can identify the optimal solution in a multi-objective space, they fail to satisfy constraints while achieving optimal performance. To address this issue, we propose a multi-objective artificial bee colony optimization algorithm with a classical multi-objective theme called fitness sharing. This approach helps the Pareto solution set converge towards a single optimal solution that satisfies multiple objectives. This article introduces multi-objective optimization with an example of a non-dominated sorting technique and the fitness sharing approach. The experimentation is carried out in MATLAB 2018a. In addition, we applied the proposed algorithm to two different real-time datasets, namely the knapsack problem and the nurse scheduling problem (NSP). The outcome of the proposed MBABC-NM algorithm is evaluated using standard performance indicators such as average distance, number of reference solutions (NRS), overall count of attained solutions (TNS), and overall non-dominated generation volume (ONGV). The results show that it outperforms other algorithms.
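The Pareto front mentioned here consists of exactly the non-dominated solutions. A minimal dominance filter (minimization in every objective; names and sample points are illustrative):

```python
def pareto_front(points):
    """Keep the non-dominated points when minimizing every objective:
    p dominates q if p is no worse everywhere and strictly better
    somewhere."""
    def dominates(p, q):
        return (all(a <= b for a, b in zip(p, q))
                and any(a < b for a, b in zip(p, q)))
    return [p for p in points
            if not any(dominates(q, p) for q in points)]

# (3, 3) and (4, 4) are dominated by (2, 2); the rest trade off
# one objective against the other and survive.
front = pareto_front([(1, 5), (2, 2), (5, 1), (3, 3), (4, 4)])
```

Fitness sharing, the paper's added ingredient, would then spread the population across this front rather than letting it collapse onto one point.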
16

Vallet, B., B. Soheilian, and M. Brédif. "Combinatorial Clustering and Its Application to 3D Polygonal Traffic Sign Reconstruction From Multiple Images." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences II-3 (August 7, 2014): 165–72. http://dx.doi.org/10.5194/isprsannals-ii-3-165-2014.

Abstract:
The 3D reconstruction of similar 3D objects detected in 2D faces a major issue when it comes to grouping the 2D detections into clusters to be used to reconstruct the individual 3D objects. Simple clustering heuristics fail as soon as similar objects are close. This paper formulates a framework to use the geometric quality of the reconstruction as a hint to do a proper clustering. We present a methodology to solve the resulting combinatorial optimization problem with some simplifications and approximations in order to make it tractable. The proposed method is applied to the reconstruction of 3D traffic signs from their 2D detections to demonstrate its capacity to solve ambiguities.
17

Ishiguro, Hisatoshi, and Kakuro Amasaka. "Proposal And Effectiveness Of A Highly Compelling Direct Mail Method - Establishment And Deployment Of PMOS-DM." International Journal of Management & Information Systems (IJMIS) 16, no. 1 (December 22, 2011): 1. http://dx.doi.org/10.19030/ijmis.v16i1.6717.

Abstract:
No clear processes are used at car dealers when deciding target customers for direct mail campaigns, and individual sales representatives rely on their experience when making such decisions. This means that dealer strategies lose their effectiveness and dealers fail to achieve the desired increase in customer visits. Thus, for this study, the author has established the Practical Method using Optimization and Statistics for Direct Mail (PMOS-DM) as a method of deciding the most suitable target customers for direct mail campaigns. Specifically, in order to both clarify the dealers target customer types and increase the number of customer visits, the author applied mathematical programming (combinatorial optimization) using statistical science to establish a model for determining the most suitable target customers for direct mail campaigns. This model has subsequently been applied at company M dealers, demonstrating significant effectiveness in increasing customer visits.
18

Kickmeier-Rust, Michael, and Andreas Holzinger. "Interactive Ant Colony Optimization to Support Adaptation in Serious Games." International Journal of Serious Games 6, no. 3 (September 20, 2019): 37–50. http://dx.doi.org/10.17083/ijsg.v6i3.308.

Abstract:
The success of serious games usually depends on their capability to engage learners and to provide them with personalized gaming and learning experiences. It is therefore important to equip a game, as an autonomous computer system, with a certain level of understanding of individual learning trajectories and gaming processes. AI and machine learning technologies increasingly enter the field; these technologies often fail, however, since serious games either pose highly complex problems (combining gaming and learning processes) or do not provide the extensive databases that would be required. An interesting new direction is augmenting the strength of AI technologies with human intuition and human cognition. In the present paper, we investigated the performance of the MAX-MIN Ant System, a combinatorial optimization algorithm, with and without human interventions in the algorithmic procedure. As a testbed, we used a clone of the Travelling Salesman Problem, the Travelling Snakesman game. We found some evidence that human interventions result in better performance than the algorithm alone. The results are discussed with regard to the applicability of this pathfinding algorithm in adaptive games, exemplified by Micro Learning Space adaptation systems.
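The MAX-MIN Ant System named in this abstract differs from the plain ant system mainly in its pheromone update: only the best tour deposits pheromone, and every trail is clamped to a range [tau_min, tau_max] to keep exploration alive. A simplified sketch of that update (the deposit amount here is illustrative; MMAS typically deposits an amount proportional to 1/L_best):

```python
def mmas_update(pheromone, best_edges, rho=0.02, tau_min=0.01, tau_max=1.0):
    """One MAX-MIN Ant System pheromone step: evaporate all trails,
    deposit only on the best tour's edges, and clamp every trail to
    [tau_min, tau_max] to avoid premature convergence."""
    for edge in pheromone:
        tau = (1 - rho) * pheromone[edge]  # evaporation
        if edge in best_edges:
            tau += rho * tau_max           # simplified best-tour deposit
        pheromone[edge] = min(tau_max, max(tau_min, tau))
    return pheromone
```

The lower clamp is what leaves room for the human interventions studied in the paper: no edge's trail ever decays to zero, so redirected ants can still revive it.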
19

Grinshpoun, T., A. Grubshtein, R. Zivan, A. Netzer, and A. Meisels. "Asymmetric Distributed Constraint Optimization Problems." Journal of Artificial Intelligence Research 47 (July 30, 2013): 613–47. http://dx.doi.org/10.1613/jair.3945.

Abstract:
Distributed Constraint Optimization (DCOP) is a powerful framework for representing and solving distributed combinatorial problems, where the variables of the problem are owned by different agents. Many multi-agent problems include constraints that produce different gains (or costs) for the participating agents. Asymmetric gains of constrained agents cannot be naturally represented by the standard DCOP model. The present paper proposes a general framework for Asymmetric DCOPs (ADCOPs). In ADCOPs different agents may have different valuations for constraints that they are involved in. The new framework bridges the gap between multi-agent problems which tend to have asymmetric structure and the standard symmetric DCOP model. The benefits of the proposed model over previous attempts to generalize the DCOP model are discussed and evaluated. Innovative algorithms that apply to the special properties of the proposed ADCOP model are presented in detail. These include complete algorithms that have a substantial advantage in terms of runtime and network load over existing algorithms (for standard DCOPs) which use alternative representations. Moreover, standard incomplete algorithms (i.e., local search algorithms) are inapplicable to the existing DCOP representations of asymmetric constraints and when they are applied to the new ADCOP framework they often fail to converge to a local optimum and yield poor results. The local search algorithms proposed in the present paper converge to high quality solutions. The experimental evidence that is presented reveals that the proposed local search algorithms for ADCOPs achieve high quality solutions while preserving a high level of privacy.
20

Ranjan, Rajesh, and Jitender Kumar Chhabra. "A Modified Binary Arithmetic Optimization Algorithm for Feature Selection." WSEAS TRANSACTIONS ON COMPUTER RESEARCH 11 (July 25, 2023): 199–205. http://dx.doi.org/10.37394/232018.2023.11.18.

Abstract:
Feature selection chooses an optimal subset of the feature set without sacrificing the information carried by the dataset. It is a complex combinatorial problem, so classical optimization techniques fail to solve it when the feature set becomes large. Meta-heuristic approaches are well known for solving complex optimization problems; hence these algorithms have been successfully applied to extract optimal feature subsets. The Arithmetic Optimization Algorithm (AOA) is a newly proposed mathematics-based meta-heuristic search algorithm successfully applied to solve optimization problems. However, it has been observed that AOA suffers from a poor exploration phase. Hence, in the present work, a Modified Binary Arithmetic Optimization Algorithm (MB-AOA) is proposed, which solves the poor exploration problem of standard AOA. In the MB-AOA, instead of utilizing a single best solution, an optimal solution set that gradually shrinks after each successive iteration is applied for better exploration during the initial iterations. Also, instead of a fixed search parameter (μ), the MB-AOA utilizes a variable parameter suitable for binary optimization problems. The proposed method is evaluated as a wrapper feature selection method on seven real-life datasets from the UCI repository and compared with standard AOA on three performance metrics: average accuracy, F-score, and the generated feature subset size. MB-AOA performed better on six datasets with regard to F-score and average accuracy. The results obtained from the simulation process demonstrate that the MB-AOA can select the relevant features, thus improving the classification task's overall accuracy.
APA, Harvard, Vancouver, ISO, and other styles
21

Botea, Adi, and Vadim Bulitko. "Scaling Up Search with Partial Initial States in Optimization Crosswords." Proceedings of the International Symposium on Combinatorial Search 12, no. 1 (July 21, 2021): 20–27. http://dx.doi.org/10.1609/socs.v12i1.18547.

Full text
Abstract:
Heuristic search remains a leading approach to difficult combinatorial optimization problems. Search algorithms can utilize pruning based on comparing a target score with an admissible (optimistic) estimate of the best score that can be achieved from a given state. If the former is larger, the state is pruned. However, when the target score is too high, the search can fail by exhausting the space without finding a solution. In this paper we show that such failed searches can still be valuable. Specifically, the best partial solutions encountered in such failed searches can often bear a high similarity to the corresponding part of a full high-quality or even optimal solution. Thus, a new search for a full solution, with a lower target score, can start with the best known partial solution rather than from scratch. We demonstrate our ideas on a constraint optimization problem modelled on the Romanian Crosswords Competition, a challenging problem where humans perform much better than computers. Utilizing partial solutions produced by a failed search cuts the running time of an existing state-of-the-art solver by orders of magnitude on competition-level crossword puzzle instances and allows more instances to be solved.
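The prune-and-retry idea is easy to sketch on a toy subset-selection problem (the instance, the admissible bound, and the `search` function below are illustrative stand-ins, not the authors' crossword solver):

```python
def search(items, cap, target):
    """DFS over item subsets of total weight <= cap, pruning states whose
    admissible (optimistic) bound falls below the target score.  Even a
    failed search remembers the best partial solution it saw."""
    best_partial, best_score = [], 0

    def dfs(i, chosen, score, weight):
        nonlocal best_partial, best_score
        if score > best_score:
            best_partial, best_score = list(chosen), score
        if score >= target:
            return list(chosen)  # target reached
        if i == len(items):
            return None
        # admissible bound: pretend every remaining item still fits
        if score + sum(v for v, _ in items[i:]) < target:
            return None  # prune: target unreachable from here
        v, w = items[i]
        if weight + w <= cap:
            sol = dfs(i + 1, chosen + [i], score + v, weight + w)
            if sol is not None:
                return sol
        return dfs(i + 1, chosen, score, weight)

    return dfs(0, [], 0, 0), best_partial
```

With a target of 10 the search fails but returns its best partial subset, which a second search with a lowered target can build on.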
APA, Harvard, Vancouver, ISO, and other styles
22

Crawford, Broderick, Ricardo Soto, Eric Monfroy, Carlos Castro, Wenceslao Palma, and Fernando Paredes. "A Hybrid Soft Computing Approach for Subset Problems." Mathematical Problems in Engineering 2013 (2013): 1–12. http://dx.doi.org/10.1155/2013/716069.

Full text
Abstract:
Subset problems (set partitioning, packing, and covering) are formal models for many practical optimization problems. A set partitioning problem determines how the items in one set (S) can be partitioned into smaller subsets. All items in S must be contained in one and only one partition. Related problems are set packing (all items must be contained in zero or one partitions) and set covering (all items must be contained in at least one partition). Here, we present a hybrid solver based on ant colony optimization (ACO) combined with arc consistency for solving these kinds of problems. ACO is a swarm intelligence metaheuristic inspired by the behavior of ants searching for food. It makes it possible to solve complex combinatorial problems for which traditional mathematical techniques may fail. On the other hand, in constraint programming, the solving process of constraint satisfaction problems can dramatically reduce the search space by means of arc consistency, enforcing constraint consistencies either prior to or during search. Our hybrid approach was tested with set covering and set partitioning dataset benchmarks. It was observed that the performance of ACO improved when this filtering technique was embedded in its constructive phase.
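For intuition, the constructive step an artificial ant performs can be caricatured by its deterministic greedy counterpart for set covering (the instance below is made up; an ACO ant would make the same kind of choice probabilistically, biased by pheromone):

```python
def greedy_set_cover(universe, subsets):
    """Greedy construction for set covering: repeatedly pick the subset
    covering the most still-uncovered items."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # index of the subset with the largest marginal coverage
        i = max(range(len(subsets)),
                key=lambda j: len(uncovered & set(subsets[j])))
        if not uncovered & set(subsets[i]):
            return None  # no subset covers the remaining items: infeasible
        chosen.append(i)
        uncovered -= set(subsets[i])
    return chosen
```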
APA, Harvard, Vancouver, ISO, and other styles
23

Akagi, Yasunori, Takuya Nishimura, Yusuke Tanaka, Takeshi Kurashima, and Hiroyuki Toda. "Exact and Efficient Inference for Collective Flow Diffusion Model via Minimum Convex Cost Flow Algorithm." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 3163–70. http://dx.doi.org/10.1609/aaai.v34i04.5713.

Full text
Abstract:
Collective Flow Diffusion Model (CFDM) is a general framework for finding the hidden movements underlying aggregated population data. The key procedure in CFDM analysis is MAP inference of hidden variables. Unfortunately, existing approaches fail to offer exact MAP inference, only approximate versions, and take a lot of computation time when applied to large-scale problems. In this paper, we propose an exact and efficient method for MAP inference in CFDM. Our key idea is formulating the MAP inference problem as a combinatorial optimization problem called the Minimum Convex Cost Flow Problem (C-MCFP), with no approximation or continuous relaxation. On the basis of this formulation, we propose an efficient inference method that employs a C-MCFP algorithm as a subroutine. Our experiments on synthetic and real datasets show that the proposed method is effective both in single MAP inference and in people flow estimation with the EM algorithm.
APA, Harvard, Vancouver, ISO, and other styles
24

Glynn, Peter W., Andrey Dolgin, Reuven Y. Rubinstein, and Radislav Vaisman. "HOW TO GENERATE UNIFORM SAMPLES ON DISCRETE SETS USING THE SPLITTING METHOD." Probability in the Engineering and Informational Sciences 24, no. 3 (April 23, 2010): 405–22. http://dx.doi.org/10.1017/s0269964810000057.

Full text
Abstract:
The goal of this work is twofold. We show the following: 1. In spite of the common consensus on the classic Markov chain Monte Carlo (MCMC) as a universal tool for generating samples on complex sets, it fails to generate points uniformly distributed on discrete ones, such as those defined by the constraints of integer programming. In fact, we demonstrate empirically that not only does it fail to generate uniform points on the desired set, but typically it misses some of the points of the set. 2. The splitting method, also called the cloning method (originally designed for combinatorial optimization and for counting on discrete sets, combining MCMC, such as the Gibbs sampler, with a specially designed splitting mechanism), can also be efficiently used for generating uniform samples on these sets. Without the appropriate splitting mechanism, MCMC fails. Although we do not have a formal proof, we conjecture that the main reason the classic MCMC does not work is that its resulting chain is not irreducible. We provide valid statistical tests supporting the uniformity of the samples generated by the splitting method and present supportive numerical results.
APA, Harvard, Vancouver, ISO, and other styles
25

Berman, Kenneth A., and Chad Yoshikawa. "Why locally-fair maximal flows in client-server networks perform well." Journal of Combinatorial Optimization 22, no. 3 (May 22, 2010): 426–37. http://dx.doi.org/10.1007/s10878-010-9321-y.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Farhi, Edward, Jeffrey Goldstone, Sam Gutmann, and Leo Zhou. "The Quantum Approximate Optimization Algorithm and the Sherrington-Kirkpatrick Model at Infinite Size." Quantum 6 (July 7, 2022): 759. http://dx.doi.org/10.22331/q-2022-07-07-759.

Full text
Abstract:
The Quantum Approximate Optimization Algorithm (QAOA) is a general-purpose algorithm for combinatorial optimization problems whose performance can only improve with the number of layers p. While QAOA holds promise as an algorithm that can be run on near-term quantum computers, its computational power has not been fully explored. In this work, we study the QAOA applied to the Sherrington-Kirkpatrick (SK) model, which can be understood as energy minimization of n spins with all-to-all random signed couplings. There is a recent classical algorithm by Montanari that, assuming a widely believed conjecture, can efficiently find an approximate solution for a typical instance of the SK model to within (1−ε) times the ground state energy. We hope to match its performance with the QAOA. Our main result is a novel technique that allows us to evaluate the typical-instance energy of the QAOA applied to the SK model. We produce a formula for the expected value of the energy, as a function of the 2p QAOA parameters, in the infinite size limit that can be evaluated on a computer with O(16^p) complexity. We evaluate the formula up to p=12, and find that the QAOA at p=11 outperforms the standard semidefinite programming algorithm. Moreover, we show concentration: with probability tending to one as n→∞, measurements of the QAOA will produce strings whose energies concentrate at our calculated value. As an algorithm running on a quantum computer, there is no need to search for optimal parameters on an instance-by-instance basis since we can determine them in advance. What we have here is a new framework for analyzing the QAOA, and our techniques can be of broad interest for evaluating its performance on more general problems where classical algorithms may fail.
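The classical side of the setup is easy to state in code. A minimal sketch of the SK energy and a greedy single-spin-flip baseline (the QAOA itself needs a quantum simulator; everything below is an illustrative classical toy, with couplings drawn as ±1):

```python
import random

def sk_instance(n, seed=0):
    """All-to-all random +/-1 couplings J_ij (i < j) of the SK model."""
    rng = random.Random(seed)
    return {(i, j): rng.choice([-1, 1]) for i in range(n) for j in range(i + 1, n)}

def energy(J, s):
    """SK energy E(s) = sum_{i<j} J_ij * s_i * s_j over spins s_i in {-1, +1}."""
    return sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())

def local_search(J, s):
    """Greedy single-spin flips until no flip lowers the energy."""
    s = list(s)
    improved = True
    while improved:
        improved = False
        for i in range(len(s)):
            # local field acting on spin i
            h = sum(Jij * s[b if a == i else a]
                    for (a, b), Jij in J.items() if i in (a, b))
            if s[i] * h > 0:  # flipping spin i lowers the energy by 2*s_i*h
                s[i] = -s[i]
                improved = True
    return s
```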
APA, Harvard, Vancouver, ISO, and other styles
27

Ghosh, Bishwamittra, Dmitry Malioutov, and Kuldeep S. Meel. "Efficient Learning of Interpretable Classification Rules." Journal of Artificial Intelligence Research 74 (August 30, 2022): 1823–63. http://dx.doi.org/10.1613/jair.1.13482.

Full text
Abstract:
Machine learning has become omnipresent with applications in various safety-critical domains such as medicine, law, and transportation. In these domains, high-stakes decisions provided by machine learning necessitate researchers to design interpretable models, where the prediction is understandable to a human. In interpretable machine learning, rule-based classifiers are particularly effective in representing the decision boundary through a set of rules comprising input features. Examples of such classifiers include decision trees, decision lists, and decision sets. The interpretability of rule-based classifiers is in general related to the size of the rules, where smaller rules are considered more interpretable. To learn such a classifier, the brute-force direct approach is to consider an optimization problem that tries to learn the smallest classification rule that has close to maximum accuracy. This optimization problem is computationally intractable due to its combinatorial nature and thus is not scalable to large datasets. To this end, in this paper we study the triangular relationship among the accuracy, interpretability, and scalability of learning rule-based classifiers. The contribution of this paper is an interpretable learning framework, IMLI, that is based on maximum satisfiability (MaxSAT) for synthesizing classification rules expressible in propositional logic. IMLI considers a joint objective function to optimize the accuracy and the interpretability of classification rules and learns an optimal rule by solving an appropriately designed MaxSAT query. Despite the progress of MaxSAT solving in the last decade, the straightforward MaxSAT-based solution cannot scale to practical classification datasets containing thousands to millions of samples. Therefore, we incorporate an efficient incremental learning technique inside the MaxSAT formulation by integrating mini-batch learning and iterative rule-learning. The resulting framework learns a classifier by iteratively covering the training data, wherein, in each iteration, it solves a sequence of smaller MaxSAT queries corresponding to each mini-batch. In our experiments, IMLI achieves the best balance among prediction accuracy, interpretability, and scalability. For instance, IMLI attains a competitive prediction accuracy and interpretability w.r.t. existing interpretable classifiers and demonstrates impressive scalability on large datasets where both interpretable and non-interpretable classifiers fail. As an application, we deploy IMLI in learning popular interpretable classifiers such as decision lists and decision sets. The source code is available at https://github.com/meelgroup/mlic.
APA, Harvard, Vancouver, ISO, and other styles
28

Yang, Lei, Xi Yu, Jiannong Cao, Xuxun Liu, and Pan Zhou. "Exploring Deep Reinforcement Learning for Task Dispatching in Autonomous On-Demand Services." ACM Transactions on Knowledge Discovery from Data 15, no. 3 (April 12, 2021): 1–23. http://dx.doi.org/10.1145/3442343.

Full text
Abstract:
Autonomous on-demand services, such as GOGOX (formerly GoGoVan) in Hong Kong, provide a platform for users to request services and for suppliers to meet such demands. In such a platform, the suppliers have the autonomy to accept or reject the demands dispatched to them, so it is challenging to make an online matching between demands and suppliers. Existing methods use round-based approaches to dispatch demands. In these works, the dispatching decision is based on the predicted response patterns of suppliers to demands in the current round, but they all fail to consider the impact of future demands and suppliers on the current dispatching decision. This can lead to a dispatching decision that is suboptimal from the future perspective. To solve this problem, we propose a novel demand dispatching model using deep reinforcement learning. In this model, we model each demand as an agent. The action of each agent, i.e., the dispatching decision of each demand, is determined by a centralized algorithm in a coordinated way. The model works in the following two steps. (1) It learns the demand's expected value in each spatiotemporal state using historical transition data. (2) Based on the learned values, it conducts a many-to-many dispatching using a combinatorial optimization algorithm, considering both immediate rewards and expected values of demands in the next round. To obtain a higher total reward, demands with a high expected value (short response time) in the future may be delayed to the next round. On the contrary, demands with a low expected value (long response time) in the future are dispatched immediately. Through extensive experiments using real-world datasets, we show that the proposed model outperforms the existing models in terms of cancellation rate and average response time.
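The round-based objective (immediate reward plus learned future value, with delay as an explicit option) can be sketched by brute force for tiny instances. The numbers and the `future_value` table below are placeholders standing in for the values a learned model would supply:

```python
from itertools import product

def dispatch(rewards, future_value):
    """Choose, for each demand, a supplier or None (= delay to the next
    round), maximizing immediate reward plus the expected value of
    delayed demands; each supplier serves at most one demand.
    rewards[d][s]: immediate reward of sending demand d to supplier s."""
    n_d, n_s = len(rewards), len(rewards[0])
    best_value, best_plan = float("-inf"), None
    for plan in product([None] + list(range(n_s)), repeat=n_d):
        used = [s for s in plan if s is not None]
        if len(used) != len(set(used)):
            continue  # supplier conflict: skip this plan
        value = sum(rewards[d][s] if s is not None else future_value[d]
                    for d, s in enumerate(plan))
        if value > best_value:
            best_value, best_plan = value, plan
    return best_plan, best_value
```

In the test below, demand 0 has a high future value, so the best plan delays it to the next round even though serving it now would pay more than serving demand 1.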
APA, Harvard, Vancouver, ISO, and other styles
29

Wang, Wei, and Tongbin Li. "Optimization of Nursing Care Resource Allocation in an Aging Population Based on Combinatorial Optimization Algorithm." Applied Mathematics and Nonlinear Sciences, October 28, 2023. http://dx.doi.org/10.2478/amns.2023.2.00823.

Full text
Abstract:
This paper first takes population aging as a point of departure, analyzes the demand for long-term care arising from population aging, and classifies the models and characteristics of long-term care. Secondly, the combinatorial optimization algorithm and its classical cases are analyzed, and a care resource allocation system is built based on solution-set performance metrics and a multi-objective combinatorial optimization algorithm for an aging society. A research experiment in city A examines the elderly population and the allocation of nursing care resources. The results show that the Gini coefficient of the population distribution of nursing institutions is 0.027 and the Gini coefficient of the population distribution of the number of institutional beds is 0.296, both of which are less than 0.3, indicating that the overall structure of the current social nursing resource allocation is relatively fair.
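The Gini coefficient used here as the fairness yardstick is straightforward to compute. A minimal sketch using the standard sorted-rank formula (not code from the paper):

```python
def gini(x):
    """Gini coefficient of a non-negative distribution:
    0 = perfectly equal; values below 0.3 are commonly read as fair."""
    x = sorted(x)
    n = len(x)
    total = sum(x)
    if total == 0:
        return 0.0
    # G = 2 * sum_i(i * x_i) / (n * total) - (n + 1) / n, with ranks i = 1..n
    return 2 * sum((i + 1) * xi for i, xi in enumerate(x)) / (n * total) - (n + 1) / n
```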
APA, Harvard, Vancouver, ISO, and other styles
30

Kawase, Yasushi, and Hanna Sumita. "Randomized Strategies for Robust Combinatorial Optimization with Approximate Separation." Algorithmica, October 10, 2023. http://dx.doi.org/10.1007/s00453-023-01175-3.

Full text
Abstract:
In this paper, we study the following robust optimization problem. Given a set family representing feasibility and candidate objective functions, we choose a feasible set, and then an adversary chooses one objective function, knowing our choice. The goal is to find a randomized strategy (i.e., a probability distribution over the feasible sets) that maximizes the expected objective value in the worst case. This problem is fundamental in wide areas such as artificial intelligence, machine learning, game theory, and optimization. To solve the problem, we provide a general framework based on the dual linear programming problem. In the framework, we utilize the ellipsoid algorithm with the approximate separation algorithm. We prove that there exists an α-approximation algorithm for our robust optimization problem if there exists an α-approximation algorithm for finding a (deterministic) feasible set that maximizes a nonnegative linear combination of the candidate objective functions. Using our result, we provide approximation algorithms for the max-min fair randomized allocation problem and the maximum cardinality robustness problem with a knapsack constraint.
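For two feasible sets, the randomized max-min strategy can even be found by a crude grid search over the mixing probability; this is a toy stand-in for the ellipsoid-based LP framework in the paper:

```python
def max_min_strategy(values, grid=10000):
    """values[s][o] = value of objective o on feasible set s (two sets here).
    Grid-search the probability p of playing set 0 that maximizes the
    worst-case expected objective value."""
    best_p, best_v = 0.0, float("-inf")
    for k in range(grid + 1):
        p = k / grid
        worst = min(p * values[0][o] + (1 - p) * values[1][o]
                    for o in range(len(values[0])))
        if worst > best_v:
            best_p, best_v = p, worst
    return best_p, best_v
```

With two "opposite" objectives, the optimal strategy mixes both sets equally, beating either deterministic choice (whose worst case is 0).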
APA, Harvard, Vancouver, ISO, and other styles
31

Li, Su, Hrayer Aprahamian, Maher Nouiehed, and Hadi El-Amine. "An Optimization-Based Order-and-Cut Approach for Fair Clustering of Data Sets." INFORMS Journal on Data Science, August 16, 2023. http://dx.doi.org/10.1287/ijds.2022.0005.

Full text
Abstract:
Machine learning algorithms have been increasingly integrated into applications that significantly affect human lives. This has spurred interest in designing algorithms that train machine learning models to minimize training error while imposing a certain level of fairness. In this paper, we consider the problem of fair clustering of data sets. In particular, given a set of items, each associated with a vector of nonsensitive attribute values and a categorical sensitive attribute (e.g., gender, race, etc.), our goal is to find a clustering of the items that minimizes the loss (i.e., clustering objective) function and imposes fairness measured by Rényi correlation. We propose an efficient and scalable in-processing algorithm, driven by findings from the field of combinatorial optimization, that heuristically solves the underlying optimization problem and allows for regulating the trade-off between clustering quality and fairness. The approach does not restrict the analysis to a specific loss function, but instead considers a more general form that satisfies certain desirable properties. This broadens the scope of the algorithm's applicability. We demonstrate the effectiveness of the algorithm for the specific case of k-means clustering, as it is one of the most extensively studied and widely adopted clustering schemes. Our numerical experiments reveal that the proposed algorithm significantly outperforms existing methods by providing a more effective mechanism to regulate the trade-off between loss and fairness. History: Rema Padman served as the senior editor for this article. Data Ethics & Reproducibility Note: The code capsule is available on Code Ocean at https://codeocean.com/capsule/9556728/tree and in the e-Companion to this article (available at https://doi.org/10.1287/ijds.2022.0005).
APA, Harvard, Vancouver, ISO, and other styles
32

Ali, Sura Mazin, Noor Thamer Mahmood, and Samer Amil Yousif. "Meerkat Clan Algorithm for Solving N-Queen Problems." Iraqi Journal of Science, July 1, 2021, 2082–89. http://dx.doi.org/10.24996/ijs.2021.62.6.33.

Full text
Abstract:
Swarm intelligence and evolutionary methods are commonly utilized by researchers to solve difficult combinatorial and Non-Deterministic Polynomial (NP) problems. The N-Queen problem can be defined as a combinatorial problem that becomes intractable for large 'n' values and is thereby placed in the NP class of problems. In the present study, a solution is suggested for the N-Queen problem based on the Meerkat Clan Algorithm (MCA). The n-Queen problem is a generalized form of the 8-Queen problem, in which the aim is to place 8 queens in such a way that no queen can capture another using the standard moves of the chess queen. The Meerkat Clan environment is a directed graph, called the search space, produced for the efficient search of valid n-queens placements such that the queens do not threaten one another. This paper also presents the development of an intelligent heuristic function that helps to find the solution quickly and effectively. This study includes a detailed discussion of the problem background, problem complexity, the Meerkat Clan Algorithm, and comparisons of the problem solution with Particle Swarm Optimization (PSO) and the Genetic Algorithm (GA). The suggested designs and architectures of the methods were implemented, and a fair amount of experimental results is reported.
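For reference, a plain backtracking solver for the n-Queens constraint (one queen per column, no shared rows or diagonals); this is the classical baseline the metaheuristics are measured against, not the MCA itself:

```python
def n_queens(n):
    """Backtracking search: place one queen per column so that no two
    queens share a row or a diagonal; returns one valid placement
    (row index per column) or None if no solution exists."""
    placement = []

    def safe(row):
        col = len(placement)
        return all(r != row and abs(r - row) != col - c
                   for c, r in enumerate(placement))

    def place(col):
        if col == n:
            return True
        for row in range(n):
            if safe(row):
                placement.append(row)
                if place(col + 1):
                    return True
                placement.pop()  # backtrack
        return False

    return placement if place(0) else None
```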
APA, Harvard, Vancouver, ISO, and other styles
33

Bassett, Kimberly L., Tylan Watkins, Jonathan Coleman, Nathan Bianco, Lauren S. Bailey, Jamin Pillars, Samuel Garrett Williams, et al. "A Workflow for Accelerating Multimodal Data Collection for Electrodeposited Films." Integrating Materials and Manufacturing Innovation, October 26, 2023. http://dx.doi.org/10.1007/s40192-023-00315-5.

Full text
Abstract:
Future machine learning strategies for materials process optimization will likely replace human capital-intensive artisan research with autonomous and/or accelerated approaches. Such automation enables accelerated multimodal characterization that simultaneously minimizes human errors, lowers costs, enhances statistical sampling, and allows scientists to allocate their time to critical thinking instead of repetitive manual tasks. Previous acceleration efforts to synthesize and evaluate materials have often employed elaborate robotic self-driving laboratories or used specialized strategies that are difficult to generalize. Herein we describe an implemented workflow for accelerating the multimodal characterization of a combinatorial set of 915 electroplated Ni and Ni–Fe thin films resulting in a data cube with over 160,000 individual data files. Our acceleration strategies do not require manufacturing-scale resources and are thus amenable to typical materials research facilities in academic, government, or commercial laboratories. The workflow demonstrated the acceleration of six characterization modalities: optical microscopy, laser profilometry, X-ray diffraction, X-ray fluorescence, nanoindentation, and tribological (friction and wear) testing, each with speedup factors ranging from 13–46x. In addition, automated data upload to a repository using FAIR data principles was accelerated by 64x.
APA, Harvard, Vancouver, ISO, and other styles
34

Neria, Gal, and Michal Tzur. "The Dynamic Pickup and Allocation with Fairness Problem." Transportation Science, May 30, 2024. http://dx.doi.org/10.1287/trsc.2023.0228.

Full text
Abstract:
Urban logistic applications that involve pickup and distribution of goods require making routing and allocation decisions with respect to a set of sites. In cases where the supply quantities and the time in which they become available are unknown in advance, these decisions must be determined in real time based on information that arrives gradually. Furthermore, in many applications that satisfy the described setting, fair allocation is desired in addition to system effectiveness. In this paper, we consider the problem of determining a vehicle route that visits two types of sites in any order: pickup points (PPs), from which the vehicle collects supplies, and demand points (DPs), to which these supplies are delivered. The supply quantities offered by each PP are uncertain, and the information on their value arrives gradually over time. We model this problem as a stochastic dynamic routing and resource allocation problem, with the aim of delivering as many goods as possible while obtaining equitable allocations to DPs. We present a Markov decision process formulation for the problem; however, it suffers from the curse of dimensionality. Therefore, we develop a heuristic framework that presents a novel combination of operations research and machine learning and is applicable for many dynamic stochastic combinatorial optimization problems. Specifically, we use a large neighborhood search (LNS) to explore possible decisions combined with a neural network (NN) model that approximates the future value given any state and action. We present a new reinforcement learning method to train the NN when the decision space is too large to enumerate. A numerical experiment with 38 to 180 site instances, based on data from the Berlin Foodbank and randomly generated data sets, confirms that the heuristic obtains solutions that are on average approximately 28.2%, 41.6%, and 57.9% better than three benchmark solutions. 
Funding: This research was partially supported by the Israel Science Foundation [Grant 463/15], by the Shlomo Shmeltzer Institute for Smart Transportation at Tel Aviv University, by the Israeli Smart Transportation Research Center (ISTRC), and by the Council for Higher Education in Israel (VATAT). Supplemental Material: The online appendix is available at https://doi.org/10.1287/trsc.2023.0228 .
APA, Harvard, Vancouver, ISO, and other styles
35

Patel, Yash J., Sofiene Jerbi, Thomas Bäck, and Vedran Dunjko. "Reinforcement learning assisted recursive QAOA." EPJ Quantum Technology 11, no. 1 (January 17, 2024). http://dx.doi.org/10.1140/epjqt/s40507-023-00214-w.

Full text
Abstract:
In recent years, variational quantum algorithms such as the Quantum Approximate Optimization Algorithm (QAOA) have gained popularity as they provide the hope of using NISQ devices to tackle hard combinatorial optimization problems. It is, however, known that at low depth, certain locality constraints of QAOA limit its performance. To go beyond these limitations, a non-local variant of QAOA, namely recursive QAOA (RQAOA), was proposed to improve the quality of approximate solutions. RQAOA has been studied comparatively less than QAOA, and it is less understood, for instance, for what family of instances it may fail to provide high-quality solutions. However, as we are tackling NP-hard problems (specifically, the Ising spin model), it is expected that RQAOA does fail, raising the question of designing even better quantum algorithms for combinatorial optimization. In this spirit, we identify and analyze cases where (depth-1) RQAOA fails and, based on this, propose a reinforcement learning enhanced RQAOA variant (RL-RQAOA) that improves upon RQAOA. We show that the performance of RL-RQAOA improves over RQAOA: RL-RQAOA is strictly better on the identified instances where RQAOA underperforms and performs similarly on instances where RQAOA is near-optimal. Our work exemplifies the potentially beneficial synergy between reinforcement learning and quantum (inspired) optimization in the design of new, even better heuristics for complex problems.
APA, Harvard, Vancouver, ISO, and other styles
36

Okamoto, Yasuharu. "Maximizing gerrymandering through ising model optimization." Scientific Reports 11, no. 1 (December 2021). http://dx.doi.org/10.1038/s41598-021-03050-z.

Full text
Abstract:
By using the Ising model formulation for combinatorial optimization with 0–1 binary variables, we investigated the extent to which partisan gerrymandering is possible from a random but even distribution of supporters. Assuming that an electoral district consists of square subareas and that each subarea shares at least one edge with other subareas in the district, it was possible to find the most tilted assignment of seats in most cases. However, in cases where the supporters' distribution included many enclaves, finding the most tilted assignment usually failed. We also discuss how the proposed algorithm is applicable to other fields, such as the redistribution of delivery destinations.
APA, Harvard, Vancouver, ISO, and other styles
37

Coudert, David, Stéphane Pérennes, Hervé Rivano, and Marie-Emilie Voge. "Combinatorial optimization in networks with Shared Risk Link Groups." Discrete Mathematics & Theoretical Computer Science Vol. 18 no. 3, Distributed Computing and... (May 3, 2016). http://dx.doi.org/10.46298/dmtcs.1297.

Full text
Abstract:
The notion of Shared Risk Link Groups (SRLG) captures survivability issues when a set of links of a network may fail simultaneously. The theory of survivable network design relies on basic combinatorial objects that are rather easy to compute in the classical graph models: shortest paths, minimum cuts, or pairs of disjoint paths. In the SRLG context, the optimization criterion for these objects is no longer the number of edges they use, but the number of SRLGs involved. Unfortunately, computing these combinatorial objects is NP-hard and hard to approximate with this objective in general. Nevertheless some objects can be computed in polynomial time when the SRLGs satisfy certain structural properties of locality which correspond to practical ones, namely the star property (all links affected by a given SRLG are incident to a unique node) and the span 1 property (the links affected by a given SRLG form a connected component of the network). The star property is defined in a multi-colored model where a link can be affected by several SRLGs while the span property is defined only in a mono-colored model where a link can be affected by at most one SRLG. In this paper, we extend these notions to characterize new cases in which these optimization problems can be solved in polynomial time. We also investigate the computational impact of the transformation from the multi-colored model to the mono-colored one. Experimental results are presented to validate the proposed algorithms and principles.
APA, Harvard, Vancouver, ISO, and other styles
38

Gebler, Colin, Jochen Rethmann, and Peer Ueberholz. "QUBO Models for the FIFO Stack-Up Problem and Experimental Evaluation on a Quantum Annealer." SN Computer Science 5, no. 7 (August 23, 2024). http://dx.doi.org/10.1007/s42979-024-03082-y.

Full text
Abstract:
Quantum annealing has been applied to combinatorial optimization problems in recent years. In this paper we study the possibility of using quantum annealing to solve the combinatorial FIFO Stack-Up problem, where bins have to be stacked up from a conveyor belt onto pallets. The problem is NP-hard and can be solved using linear programming approaches. We developed two QUBO (quadratic unconstrained binary optimization) objective functions for this problem, based on a bin stack-up solution and a pallet stack-up solution, suitable for a quantum annealer. The number of variables was minimized to increase performance, and their dependence on the number of bins and pallets was discussed. The performance of both methods was studied for various small problem sizes on a D-Wave quantum annealer. We found that only tiny instances could be solved, and we examined the terms of the QUBO formulations that cause the quantum annealer to fail for larger problem sizes. Furthermore, we compare the results to the performance of a classical computer using the same QUBO formulations.
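A QUBO instance is just a matrix Q minimized over binary vectors. The exhaustive solver below is the kind of classical check feasible only for the smallest instances; the example Q (a one-hot penalty over two bits, with the constant term dropped) is illustrative, not one of the paper's stack-up formulations:

```python
from itertools import product

def solve_qubo(Q):
    """Exhaustive minimization of x^T Q x over binary vectors x."""
    n = len(Q)
    best_x, best_e = None, float("inf")
    for x in product((0, 1), repeat=n):
        e = sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e
```

Here Q = [[-1, 2], [0, -1]] encodes the penalty (x0 + x1 - 1)^2 up to a constant, so the minimizers are exactly the one-hot assignments.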
APA, Harvard, Vancouver, ISO, and other styles
39

Oki, Taihei. "Improved structural methods for nonlinear differential-algebraic equations via combinatorial relaxation." IMA Journal of Numerical Analysis, December 22, 2021. http://dx.doi.org/10.1093/imanum/drab094.

Full text
Abstract:
Abstract Differential-algebraic equations (DAEs) are widely used for modelling dynamical systems. In the numerical analysis of DAEs, consistent initialization and index reduction are important preprocessing steps prior to numerical integration. Existing DAE solvers commonly adopt structural preprocessing methods based on combinatorial optimization. Unfortunately, structural methods fail if the DAE has a singular system Jacobian matrix. For such DAEs, methods have been proposed to modify them to other DAEs to which structural methods are applicable, based on the combinatorial relaxation technique. Existing modification methods, however, work only for DAEs that are linear or close to linear. This paper presents two new modification methods for nonlinear DAEs: the substitution method and the augmentation method. Both methods are based on the combinatorial relaxation approach and are applicable to a large class of nonlinear DAEs. The substitution method symbolically solves equations for some derivatives based on the implicit function theorem and substitutes the solution back into the system. Instead of solving equations, the augmentation method modifies DAEs by appending new variables and equations. Our methods are implemented as a MATLAB library using MuPAD, and through its application to practical DAEs, we show that our methods can be used as a promising preprocessing of DAEs that the index reduction procedure in MATLAB cannot handle.
APA, Harvard, Vancouver, ISO, and other styles
40

Bürgisser, Peter, and Christian Ikenmeyer. "A max-flow algorithm for positivity of Littlewood-Richardson coefficients." Discrete Mathematics & Theoretical Computer Science DMTCS Proceedings vol. AK,..., Proceedings (January 1, 2009). http://dx.doi.org/10.46298/dmtcs.2749.

Full text
Abstract:
Littlewood-Richardson coefficients are the multiplicities in the tensor product decomposition of two irreducible representations of the general linear group $\mathrm{GL}(n,\mathbb{C})$. They have a wide variety of interpretations in combinatorics, representation theory and geometry. Mulmuley and Sohoni pointed out that the positivity of Littlewood-Richardson coefficients can be decided in polynomial time. This follows by combining the saturation property of Littlewood-Richardson coefficients (shown by Knutson and Tao in 1999) with the well-known fact that linear optimization is solvable in polynomial time. We design an explicit $\textit{combinatorial}$ polynomial-time algorithm for deciding the positivity of Littlewood-Richardson coefficients. This algorithm is highly adapted to the problem and is based on ideas from the theory of optimizing flows in networks.
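The flow machinery the authors adapt can be illustrated with a generic shortest-augmenting-path max-flow routine. The sketch below is a plain Edmonds-Karp implementation on a toy network, not the specialized network construction of the paper:

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp max flow. capacity[u][v] is the capacity of edge u->v;
    a residual copy is built internally, so the input graph is untouched."""
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u in list(residual):
        for v in residual[u]:
            residual.setdefault(v, {}).setdefault(u, 0)  # reverse edges
    flow = 0
    while True:
        # BFS for a shortest augmenting path from s to t.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # Recover the path, find its bottleneck, and augment.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

# Toy network: 2 units fit through a, 2 through b, so the max s-t flow is 4.
caps = {"s": {"a": 3, "b": 2}, "a": {"t": 2}, "b": {"t": 3}}
value = max_flow(caps, "s", "t")
```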
APA, Harvard, Vancouver, ISO, and other styles
41

Wang, Fei, Xianglong Cheng, Xin Xia, Chunhou Zheng, and Yansen Su. "Adaptive Space Search-based Molecular Evolution Optimization Algorithm." Bioinformatics, July 23, 2024. http://dx.doi.org/10.1093/bioinformatics/btae446.

Full text
Abstract:
Motivation: In the drug development process, a significant portion of the budget and research time is dedicated to the lead compound optimization procedure in order to identify potential drugs. This procedure focuses on enhancing the pharmacological and bioactive properties of compounds by optimizing their local substructures. However, due to the vast and discrete chemical structure space and the unpredictable element combinations within this space, the optimization process is inherently complex. Various structure enumeration-based combinatorial optimization methods have shown certain advantages, but they still have limitations: they fail to consider the differences between molecules and struggle to explore the unknown outer search space. Results: In this study, we propose an adaptive space search-based molecular evolution optimization algorithm (ASSMOEA). It consists of three key modules: construction of a molecule-specific search space, molecular evolutionary optimization, and adaptive expansion of the molecule-specific search space. Specifically, we design a fragment similarity tree in the molecule-specific search space and apply a dynamic mutation strategy in this space to guide molecular optimization. We then utilize an encoder-encoder structure to adaptively expand the space. These three modules are iterated cyclically to optimize molecules. Our experiments demonstrate that ASSMOEA outperforms existing methods in terms of molecular optimization. It not only enhances the efficiency of the molecular optimization process but also exhibits a robust ability to search for correct solutions. Availability and implementation: The code is freely available at https://github.com/bbbbb-b/MEOAFST. Supplementary information: Supplementary data are available at Bioinformatics online.
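The evolutionary-optimization module can be illustrated, in heavily simplified form, by a generic (1+λ) loop. The string "molecule", mutation operator, and fitness below are toy stand-ins, not the paper's fragment-tree machinery:

```python
import random

def evolve(init, mutate, fitness, iters=300, lam=8, seed=0):
    """Generic (1+lambda) evolutionary loop: mutate the incumbent lam times
    per generation and keep the best offspring only if it improves fitness."""
    rng = random.Random(seed)
    best, best_fit = init, fitness(init)
    for _ in range(iters):
        offspring = [mutate(best, rng) for _ in range(lam)]
        cand = max(offspring, key=fitness)
        if fitness(cand) > best_fit:
            best, best_fit = cand, fitness(cand)
    return best, best_fit

# Toy stand-in for a molecule: a string over a 4-letter "fragment" alphabet;
# fitness counts positions matching a hidden target (a hypothetical score).
TARGET = "ACCGTTAC"

def mutate(s, rng):
    i = rng.randrange(len(s))
    return s[:i] + rng.choice("ACGT") + s[i + 1:]

best, score = evolve("AAAAAAAA", mutate,
                     lambda s: sum(a == b for a, b in zip(s, TARGET)))
```

The real algorithm replaces the random single-letter mutation with fragment substitutions drawn from a molecule-specific search space; the skeleton of propose-evaluate-select is the same.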
APA, Harvard, Vancouver, ISO, and other styles
42

Zhong, Rui, Enzhi Zhang, and Masaharu Munetomo. "Cooperative coevolutionary differential evolution with linkage measurement minimization for large-scale optimization problems in noisy environments." Complex & Intelligent Systems, January 17, 2023. http://dx.doi.org/10.1007/s40747-022-00957-6.

Full text
Abstract:
Many optimization problems suffer from noise, and the noise combined with large-scale attributes makes the problem complexity explode. Cooperative coevolution (CC), based on divide and conquer, decomposes a problem and solves the sub-problems alternately; it is a popular framework for solving large-scale optimization problems (LSOPs). Many studies show that the CC framework is sensitive to decomposition, and high-accuracy decomposition methods such as differential grouping (DG), DG2, and recursive DG (RDG) are extremely sensitive to sampling accuracy and will fail to detect interactions in noisy environments. Therefore, solving LSOPs in noisy environments based on the CC framework faces unprecedented challenges. In this paper, we propose a novel decomposition method named linkage measurement minimization (LMM). We regard the decomposition problem as a combinatorial optimization problem and design the linkage measurement function (LMF) based on Linkage Identification by Non-linearity Check for Real-coded GA (LINC-R). A detailed theoretical analysis explains why our proposal can determine the interactions in noisy environments. For the optimization, we introduce an advanced optimizer named modified differential evolution with distance-based selection (MDE-DS); its varied mutation strategy and distance-based selection endow it with strong anti-noise ability. Numerical experiments show that our proposal is competitive with state-of-the-art decomposition methods in noisy environments and that the introduction of MDE-DS can significantly accelerate optimization in noisy environments.
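The LINC-R idea underlying the linkage measurement can be sketched with finite differences: two variables are judged to interact when the effect of perturbing one depends on the value of the other. A minimal version, assuming a noise-free function (unlike the paper's noisy setting):

```python
def interacts(f, x, i, j, delta=1.0, tol=1e-9):
    """LINC-R-style non-linearity check (simplified sketch): variables i and j
    are separable if perturbing x[i] changes f by the same amount whether or
    not x[j] has also been perturbed."""
    def perturb(x, k):
        y = list(x)
        y[k] += delta
        return y
    d1 = f(perturb(x, i)) - f(x)                          # effect of moving x[i]
    d2 = f(perturb(perturb(x, i), j)) - f(perturb(x, j))  # same, after moving x[j]
    return abs(d1 - d2) > tol

# Separable function: coordinates contribute independently.
f_sep = lambda x: x[0] ** 2 + x[1] ** 2
# Non-separable function: coordinates are linked by a product term.
f_link = lambda x: x[0] ** 2 + x[0] * x[1]
```

Under noise, `d1` and `d2` differ even for separable variables, which is why naive thresholding fails and the paper instead minimizes a linkage measurement function over candidate groupings.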
APA, Harvard, Vancouver, ISO, and other styles
43

Jiang, Ling, Shengfu Wang, Jun Kang, Xiaoku Yang, Xiuyuan Yao, Yitao Zhang, Zhenghong Bao, and Yongsheng Zhang. "Study on debugging of lightning impulse voltage waveform for large capacity ±800kV converter transformer in high altitude area." Frontiers in Energy Research 11 (October 13, 2023). http://dx.doi.org/10.3389/fenrg.2023.1226519.

Full text
Abstract:
The Qinghai-Henan ±800kV UHV DC power transmission project is a UHV channel constructed specifically for clean energy delivery. The ±800kV converter transformers of the project are assembled and tested in Xining at an altitude of 2500 m. The full-wave lightning impulse test is one of the main means of evaluating converter transformer insulation performance, but the valve-side lightning impulse voltage waveform parameters often fail to meet the IEC 60076-3 standard because of the large winding-to-ground capacitance. In order to effectively assess the lightning tolerance of the converter transformer, a simulation model of the impulse test circuit of the ±800kV converter transformer was built based on the existing 6000kV/560kJ impulse voltage generator and the ATP-EMTP simulation software. Three optimization methods are proposed: increasing the wavetail resistance, increasing the load capacitor, and connecting different inductors in parallel with the wave head. The results show that no single optimization method can meet the waveform requirements. Therefore, a combinatorial optimization test method that increases the wavetail resistance and connects different inductors in parallel with the wave head is proposed; it was completed and verified in the full-wave lightning impulse test on the valve side of the Qinghai ±800kV high-end converter transformer, and the results show that the optimized test waveform parameters meet the standard requirements.
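For context, the waveform parameters being debugged are the front time and time to half-value of the standard double-exponential lightning impulse. The sketch below estimates them numerically for textbook 1.2/50 µs constants; it is an illustration of the waveform definition, not the paper's ATP-EMTP circuit model:

```python
import math

def impulse_times(a, b, t_max=200e-6, n=200001):
    """Sample the double-exponential impulse v(t) = k(e^{-at} - e^{-bt}) and
    estimate the front time T1 (1.67x the 30%-90% rise interval) and the
    time to half-value T2 (measured here from t=0, a simplification of the
    IEC virtual-origin convention)."""
    ts = [t_max * i / (n - 1) for i in range(n)]
    vs = [math.exp(-a * t) - math.exp(-b * t) for t in ts]
    peak = max(vs)
    vs = [v / peak for v in vs]          # normalize to unit peak
    i_peak = vs.index(1.0)

    def first_crossing(level, lo, hi, rising):
        for i in range(lo, hi):
            if (vs[i] >= level) == rising:
                return ts[i]
        return None

    t30 = first_crossing(0.3, 0, i_peak, True)
    t90 = first_crossing(0.9, 0, i_peak, True)
    t50 = first_crossing(0.5, i_peak, n, False)  # falls below 50% on the tail
    return 1.67 * (t90 - t30), t50

# Approximate textbook constants for a 1.2/50 us impulse:
# a = 1/68.2 us, b = 1/0.405 us.
T1, T2 = impulse_times(a=1.466e4, b=2.469e6)
```

Increasing the tail resistance stretches T2 while a series/parallel inductance reshapes the front, which is why the paper combines both adjustments.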
APA, Harvard, Vancouver, ISO, and other styles
44

Zhu, Kaiyuan, Alejandro A. Schäffer, Welles Robinson, Junyan Xu, Eytan Ruppin, A. Funda Ergun, Yuzhen Ye, and S. Cenk Sahinalp. "Strain level microbial detection and quantification with applications to single cell metagenomics." Nature Communications 13, no. 1 (October 28, 2022). http://dx.doi.org/10.1038/s41467-022-33869-7.

Full text
Abstract:
Computational identification and quantification of distinct microbes from high-throughput sequencing data is crucial for our understanding of human health. Existing methods either use accurate but computationally expensive alignment-based approaches or less accurate but computationally fast alignment-free approaches, which often fail to correctly assign reads to genomes. Here we introduce CAMMiQ, a combinatorial optimization framework to identify and quantify distinct genomes (specified by a database) in a metagenomic dataset. As a key methodological innovation, CAMMiQ uses substrings of variable length, including those that appear in exactly two genomes in the database, as opposed to the commonly used fixed-length, unique substrings. These substrings make it possible to accurately decouple mixtures of highly similar genomes, resulting in higher accuracy than the leading alternatives without requiring additional computational resources, as demonstrated on commonly used benchmarking datasets. Importantly, we show that CAMMiQ can distinguish closely related bacterial strains in simulated metagenomic and real single-cell metatranscriptomic data.
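The two substring classes CAMMiQ relies on can be sketched with a fixed k-mer length (the actual method uses variable-length substrings and far larger indexes):

```python
from collections import defaultdict

def signature_kmers(genomes, k):
    """Map each k-mer to the set of genome ids containing it, and keep those
    present in exactly one genome ('unique') or exactly two ('doubly-unique'),
    the two substring classes the framework builds on."""
    occurs = defaultdict(set)
    for gid, seq in genomes.items():
        for s in range(len(seq) - k + 1):
            occurs[seq[s:s + k]].add(gid)
    unique = {kmer: gids for kmer, gids in occurs.items() if len(gids) == 1}
    doubly = {kmer: gids for kmer, gids in occurs.items() if len(gids) == 2}
    return unique, doubly

# Tiny toy "database" of three genomes.
genomes = {"g1": "ACGTAC", "g2": "ACGTTT", "g3": "TTTTAC"}
unique, doubly = signature_kmers(genomes, 4)
```

Reads hitting a doubly-unique substring constrain the abundance of a *pair* of similar genomes, which is what lets the optimization decouple mixtures that unique substrings alone cannot separate.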
APA, Harvard, Vancouver, ISO, and other styles
45

Naveen Babu, M. "Analysis of Radial Distribution Systems by using Particle Swarm Optimization under Uncertain Conditions." Contemporary Mathematics, October 19, 2023. http://dx.doi.org/10.37256/cm.5120243478.

Full text
Abstract:
Losses are one of the most important aspects of a power distribution network, and work should be done to lower their value. This research uses the Particle Swarm Optimization (PSO) metaheuristic algorithm to investigate the impact of simultaneously optimizing phase balance and conductor size on the planning issues and objective functions of an unbalanced distribution system. These objective functions include power loss, voltage unbalance, total neutral current, and complex power unbalance. First, the optimization process is applied to each objective function separately; the objectives are then combined with weights to form a multi-objective optimization problem. This study examines how to minimize losses in unbalanced electrical power distribution networks. Power flow and optimal DG placement are two PSO techniques that may be used to reduce losses, and these changes may be applied to existing distribution systems using an effective load-flow method for a three-phase unbalanced radial distribution network. Knowing the node voltage, angle, branch current, real power loss, reactive power loss, branch losses, etc. helps determine the network's true state. Simple formulae may be used to describe the relationship between the voltage at one end of the distribution system, the voltage at the other end, and the voltage drops throughout the whole system. An approach is developed to identify the relevant variables: the voltage angle at the target node is calculated along with its magnitude, a process that requires time and effort. A constant voltage of 1 p.u. is assumed from the substation to each terminal node; voltage magnitude and phase angle are varied between iterations, and voltage drops are computed using the new parameters. The suggested approach has been applied to unbalanced 19- and 25-node networks.
To demonstrate its efficacy, the recommended approach's speed requirements were compared to those of another recently developed technique. Good outcomes are achieved, and DG proves to be a viable option for reducing costs and improving performance.
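A minimal global-best PSO loop of the kind used in such studies can be sketched as follows; the quadratic "loss" is a toy stand-in for the network loss objective, not the paper's load-flow model:

```python
import random

def pso(loss, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best PSO sketch: each particle tracks its personal best
    position, and all particles are attracted toward the swarm-wide best."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [loss(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = loss(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy "loss" standing in for network power loss: minimum 0 at (1, 2).
best, best_val = pso(lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2, dim=2)
```

In a distribution-system application, `loss` would evaluate a load-flow run for a candidate configuration (e.g. a DG placement) and return the resulting total power loss.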
APA, Harvard, Vancouver, ISO, and other styles
46

Lorch, Lars, Heiner Kremer, William Trouleau, Stratis Tsirtsis, Aron Szanto, Bernhard Schölkopf, and Manuel Gomez-Rodriguez. "Quantifying the Effects of Contact Tracing, Testing, and Containment Measures in the Presence of Infection Hotspots." ACM Transactions on Spatial Algorithms and Systems, April 21, 2022. http://dx.doi.org/10.1145/3530774.

Full text
Abstract:
Multiple lines of evidence strongly suggest that infection hotspots, where a single individual infects many others, play a key role in the transmission dynamics of COVID-19. However, most of the existing epidemiological models fail to capture this aspect by neither representing the sites visited by individuals explicitly nor characterizing disease transmission as a function of individual mobility patterns. In this work, we introduce a temporal point process modeling framework that specifically represents visits to the sites where individuals get in contact and infect each other. Under our model, the number of infections caused by an infectious individual naturally emerges to be overdispersed. Using an efficient sampling algorithm, we demonstrate how to estimate the transmission rate of infectious individuals at the sites they visit and in their households using Bayesian optimization and longitudinal case data. Simulations using fine-grained and publicly available demographic data and site locations from Bern, Switzerland showcase the flexibility of our framework. To facilitate research and analyses of other cities and regions, we release an open-source implementation of our framework.
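The overdispersion point can be illustrated with a toy simulation (not the paper's temporal point process): when the number of contacts an individual makes depends on the crowding of the sites they visit, the variance of secondary-infection counts exceeds the mean, unlike a fixed-rate Poisson model.

```python
import random
import statistics

def secondary_infections(n=20000, p=0.05, seed=1):
    """Simulate secondary-infection counts when contact numbers are driven by
    site occupancy: most sites are small, but rare 'hotspot' visits expose an
    individual to many contacts, inflating the variance of the counts."""
    rng = random.Random(seed)
    counts = []
    for _ in range(n):
        # Three visits to sites whose occupancy varies wildly (rare hotspots).
        contacts = sum(rng.choice([2, 2, 2, 2, 80]) for _ in range(3))
        counts.append(sum(1 for _ in range(contacts) if rng.random() < p))
    return counts

counts = secondary_infections()
mean = statistics.mean(counts)
dispersion = statistics.variance(counts) / mean  # > 1 indicates overdispersion
```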
APA, Harvard, Vancouver, ISO, and other styles
47

Sreenivasula Reddy, T., R. Sathya, and Mallikhanjuna Rao Nuka. "Mining the High Dimensional Biological Dataset Using Optimized Colossal Pattern with Dimensionality Reduction." Contemporary Mathematics, March 8, 2024, 645–64. http://dx.doi.org/10.37256/cm.5120242460.

Full text
Abstract:
Recent years have seen a lot of attention paid to the mining of colossal itemsets from high-dimensional databases. Traditional algorithms take a long time to mine small and mid-sized itemsets, which do not include the complete and relevant information needed for decision making. Many applications, particularly in bioinformatics, benefit greatly from the extraction of Frequent Colossal Closed Itemsets (FCCI) from a large dataset. Present preprocessing strategies fail to completely remove all extraneous features and rows from a dataset before extracting FCCI from it. In addition, the most recent algorithms of this kind are sequential and computationally expensive. In this paper, a high-dimensional dataset is pruned of all extraneous features and rows using two alternative dimensionality reduction strategies. Then, an optimal feature value is identified by using the Equilibrium Optimizer (EO) to determine the threshold value for the reduced features. If the feature value is smaller than the threshold, the IFRS frequency mining algorithm, in conjunction with the Fruit Fly Algorithm (FFA), is used to discover common items and build association rules. If the feature value exceeds the optimal threshold, the colossal pattern (CP) mining problem can be solved using optimized length constraints (LC). Random search is utilized to identify the optimal threshold values of the constraints, and the Differential Evolutionary Arithmetic Optimization Algorithm is used to extract the colossal patterns. The experiments are carried out on twenty biological datasets extracted from UCI websites, and the proposed models are validated in terms of various metrics.
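The role of the support threshold in such miners can be illustrated with a minimal Apriori-style pass; real colossal-pattern miners prune far more aggressively, but the threshold plays the same gatekeeping role:

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Minimal Apriori-style sketch: count single items, keep those meeting
    the support threshold, then count only the pairs built from survivors."""
    counts = {}
    for t in transactions:
        for item in t:
            counts[item] = counts.get(item, 0) + 1
    frequent = {frozenset([i]) for i, c in counts.items() if c >= min_support}
    items = sorted(i for s in frequent for i in s)
    pair_counts = {}
    for t in transactions:
        present = [i for i in items if i in t]
        for pair in combinations(present, 2):
            pair_counts[pair] = pair_counts.get(pair, 0) + 1
    frequent |= {frozenset(p) for p, c in pair_counts.items()
                 if c >= min_support}
    return frequent

# Five toy transactions; with min_support=3 only the singletons survive.
data = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a"}]
result = frequent_itemsets(data, min_support=3)
```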
APA, Harvard, Vancouver, ISO, and other styles