Journal articles on the topic 'Operator Execution Algorithms'

Consult the top 50 journal articles for your research on the topic 'Operator Execution Algorithms.'


1

MARION-POTY, VIRGINIE, and SERGE MIGUET. "DATA ALLOCATION STRATEGIES FOR PARALLEL IMAGE PROCESSING ALGORITHMS." International Journal of Pattern Recognition and Artificial Intelligence 09, no. 04 (August 1995): 615–34. http://dx.doi.org/10.1142/s0218001495000237.

Abstract:
This paper discusses several data allocation strategies used for the parallel implementation of basic imaging operators. It shows that, depending on the operator (sequential or parallel, with regular or irregular execution time), the image data must be partitioned in very different ways: square sub-domains are best adapted for minimizing the communication volume, but rectangles can perform better when the time for constructing messages is taken into account. Block allocations are well adapted to inherently parallel operators since they minimize interprocessor interactions, but in the case of recursive operators they lead to nearly sequential executions. In this framework, we show the usefulness of block-cyclic allocations. Finally, we illustrate the fact that allocating the same amount of image data to each processor can lead to severe load imbalance for operators with data-dependent execution times.
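As an illustration of the trade-off described in this abstract, the following sketch contrasts a plain block allocation of image rows with a block-cyclic one (a minimal 1-D illustration; the function names and parameters are ours, not the paper's):

```python
# Sketch: block vs. block-cyclic allocation of image rows to processors,
# assuming a simple 1-D row partitioning (illustrative only).

def block_allocation(n_rows, n_procs):
    """Assign contiguous chunks of rows to each processor."""
    chunk = -(-n_rows // n_procs)  # ceiling division
    return [min(r // chunk, n_procs - 1) for r in range(n_rows)]

def block_cyclic_allocation(n_rows, n_procs, block=2):
    """Assign fixed-size blocks of rows to processors in round-robin order."""
    return [(r // block) % n_procs for r in range(n_rows)]

print(block_allocation(8, 2))         # [0, 0, 0, 0, 1, 1, 1, 1]
print(block_cyclic_allocation(8, 2))  # [0, 0, 1, 1, 0, 0, 1, 1]
```

With a recursive operator that sweeps the rows top to bottom, the block allocation leaves processor 1 idle for the entire first half, while the block-cyclic allocation keeps both processors busy.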
2

Guo, Wenyan, Mingfei Wu, Fang Dai, and Yufan Qiang. "Improved Environmental Stimulus and Biological Competition Tactics Interactive Artificial Ecological Optimization Algorithm for Clustering." Biomimetics 8, no. 2 (June 7, 2023): 242. http://dx.doi.org/10.3390/biomimetics8020242.

Abstract:
An interactive artificial ecological optimization algorithm (SIAEO) based on environmental stimulus and a competition mechanism was devised for complex problems, on which the artificial ecological optimization algorithm can often become bogged down in a local optimum because of the sequential execution of its consumption and decomposition stages. Firstly, an environmental stimulus defined by population diversity makes the population interactively execute the consumption operator and decomposition operator, abating the inhomogeneity of the algorithm. Secondly, the three different predation modes of the consumption stage are regarded as three different tasks, and the task execution mode is determined by the maximum cumulative success rate of each individual task execution. Furthermore, a biological competition operator is introduced to modify the regeneration strategy, so that the SIAEO algorithm gives consideration to exploitation during the exploration stage, breaks the equal-probability execution mode of the AEO, and promotes competition among operators. Finally, stochastic mean suppression with alternating exploitation is introduced in the later exploitation process of the algorithm, which greatly strengthens the ability of the SIAEO algorithm to escape local optima. SIAEO is compared with other improved algorithms on the CEC2017 and CEC2019 test sets.
3

Nobre, Tiago, Silvia Regina Vergilio, and Aurora Pozo. "Reducing Interface Mutation Costs with Multiobjective Optimization Algorithms." International Journal of Natural Computing Research 3, no. 3 (July 2012): 21–40. http://dx.doi.org/10.4018/jncr.2012070102.

Abstract:
To reduce mutation testing costs, different strategies have been proposed to find a set of essential operators that generates a reduced number of mutants without decreasing the mutation score. However, operator selection is influenced by other factors, such as the number of test data, execution time, and number of revealed faults. In fact, this is a multiobjective problem for which different good solutions exist. To properly deal with this problem, a selection strategy based on multiobjective algorithms was proposed and investigated for unit testing. This work explores the use of such a strategy in the integration testing phase. Three multiobjective algorithms are used and evaluated with real programs: one based on tabu search (MTabu), one based on a Genetic Algorithm (NSGA-II), and a third based on Ant Colony Optimization (PACO). The results are compared with traditional strategies and contrasted with essential operators obtained at the unit testing level.
4

Lougmiri, Zekri. "A New Progressive Method for Computing Skyline Queries." Journal of Information Technology Research 10, no. 3 (July 2017): 1–21. http://dx.doi.org/10.4018/jitr.2017070101.

Abstract:
Skyline queries are important in many fields, especially for decision making. In this context, objects or tuples of databases are described by numerical and non-numerical attributes, and the skyline operator acts on the numerical ones. Algorithms that implement the skyline operator are generally either progressive or non-progressive: progressive algorithms return skyline points during execution, while non-progressive algorithms return the result only at the end of execution. This paper presents a new progressive algorithm for computing the skyline points. The algorithm is based on sorting as a preprocessing of the input. The authors present new theorems for promptly deducing the first skyline points and reducing the candidate space. A new version of the Divide-and-Conquer algorithm is used for computing the final skyline. Intensive experiments on both real and synthetic datasets show that the algorithm outperforms other methods.
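The dominance test at the heart of any skyline operator, together with a minimal sort-based computation in the spirit of the abstract, can be sketched as follows (maximization on all attributes is assumed; the paper's actual algorithm and theorems differ):

```python
# Sketch: a simple sort-based skyline computation (illustrative only).

def dominates(p, q):
    """p dominates q if p is >= q in every dimension and > in at least one."""
    return all(a >= b for a, b in zip(p, q)) and any(a > b for a, b in zip(p, q))

def skyline(points):
    # Sorting by the attribute sum in descending order guarantees that no
    # later point can dominate an earlier one already accepted, so each
    # accepted point can be emitted progressively.
    result = []
    for p in sorted(points, key=sum, reverse=True):
        if not any(dominates(s, p) for s in result):
            result.append(p)
    return result

pts = [(5, 4), (3, 5), (4, 4), (1, 1)]
print(skyline(pts))  # [(5, 4), (3, 5)]
```

The sort-based preprocessing is what makes the scheme progressive: the first point after sorting is always a skyline point and can be returned immediately.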
5

Lenin, K. "REDUCTION OF ACTIVE POWER LOSS BY VOLITION PARTICLE SWARM OPTIMIZATION." International Journal of Research -GRANTHAALAYAH 6, no. 6 (June 30, 2018): 346–56. http://dx.doi.org/10.29121/granthaalayah.v6.i6.2018.1379.

Abstract:
This paper presents a Volition Particle Swarm Optimization (VP) algorithm for solving the optimal reactive power problem. The Particle Swarm Optimization (PSO) algorithm is hybridized with the Fish School Search (FSS) algorithm to improve its capability. FSS provides an operator, called the collective volition operator, which can auto-regulate the exploration-exploitation trade-off during algorithm execution. Since the PSO algorithm converges faster than FSS but cannot auto-adapt the granularity of the search, the FSS volition operator is applied to the PSO in order to mitigate this weakness and improve PSO performance on dynamic optimization problems. To evaluate the efficiency of the proposed VP algorithm, it was tested on the standard IEEE 30-bus test system and compared to other reported standard algorithms. Simulation results show that the VP algorithm is more efficient than the other algorithms in reducing real power losses while keeping the control variables within their limits.
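A rough sketch of how an FSS-style collective volition move can be bolted onto a PSO step, as the abstract describes: the swarm contracts toward its barycenter when overall fitness improved and expands away from it otherwise (the update rule, parameter names, and improvement test below are our simplifications, not the authors' formulation):

```python
# Sketch: PSO update followed by a collective-volition step (illustrative).
import random

def pso_volition_step(positions, velocities, pbest, gbest, improved,
                      w=0.7, c1=1.5, c2=1.5, step=0.1):
    n, dim = len(positions), len(positions[0])
    # Standard PSO velocity/position update.
    for i in range(n):
        for d in range(dim):
            r1, r2 = random.random(), random.random()
            velocities[i][d] = (w * velocities[i][d]
                                + c1 * r1 * (pbest[i][d] - positions[i][d])
                                + c2 * r2 * (gbest[d] - positions[i][d]))
            positions[i][d] += velocities[i][d]
    # Collective volition: contract toward the swarm barycenter on
    # improvement (finer search), expand away from it otherwise.
    bary = [sum(p[d] for p in positions) / n for d in range(dim)]
    sign = -1.0 if improved else 1.0
    for i in range(n):
        for d in range(dim):
            positions[i][d] += sign * step * (positions[i][d] - bary[d])
    return positions, velocities
```

The volition step is what auto-adapts the search granularity: repeated improvement shrinks the swarm's spread, while stagnation grows it again.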
6

Khamprapai, Wanida, Cheng-Fa Tsai, and Paohsi Wang. "Analyzing the Performance of the Multiple-Searching Genetic Algorithm to Generate Test Cases." Applied Sciences 10, no. 20 (October 17, 2020): 7264. http://dx.doi.org/10.3390/app10207264.

Abstract:
Software testing using traditional genetic algorithms (GAs) minimizes the required number of test cases and reduces the execution time. Currently, GAs are adapted to enhance performance when finding optimal solutions. The multiple-searching genetic algorithm (MSGA) improves upon current GAs and is used to find optimal multicast routing in network systems. This paper presents an analysis of the optimization of test case generation using the MSGA by defining suitable values of MSGA parameters, including population size, crossover operator, and mutation operator. Moreover, we compare the performance of the MSGA with a traditional GA and a hybrid GA (HGA). The experimental results demonstrate that the MSGA reaches the maximum number of executed branch statements in the lowest execution time and with the smallest number of test cases compared to the GA and HGA.
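The crossover and mutation operators whose settings such studies tune can be sketched as plain single-point crossover and bit-flip mutation (a generic GA illustration, not the MSGA itself; rates and names are ours):

```python
# Sketch: the two GA operators whose rates are typically tuned.
import random

def crossover(p1, p2, rate=0.8):
    """Single-point crossover: swap tails of the two parents."""
    if random.random() < rate and len(p1) > 1:
        cut = random.randrange(1, len(p1))
        return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
    return p1[:], p2[:]

def mutate(chrom, rate=0.01):
    """Bit-flip mutation: each gene flips independently with given rate."""
    return [1 - g if random.random() < rate else g for g in chrom]
```

In test-case generation, each chromosome encodes a candidate test input, and fitness is typically the branch coverage it achieves.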
7

Ebrahimi, Maryam, Parviz Dinari, Mohamad Samaei, Rouhollah Sohrabi, and Soheil Sherafatianfini. "A New Approach Based on the Learning Effect for Sequence-Dependent Parallel Machine Scheduling Problem under Uncertainty." Discrete Dynamics in Nature and Society 2022 (September 10, 2022): 1–9. http://dx.doi.org/10.1155/2022/2648936.

Abstract:
Production system design has lots of restrictions and complex assumptions that cause difficulty in decision making. One of the most important of them is the complexity of the relationship between man and machine. In this regard, operator learning is recognized as an effective element in completing tasks in the production system. In this research, a mathematical model for scheduling the parallel machines in terms of job degradation and operator learning is presented. As one of the most important assumptions, the sequence-dependent setup time is of concern. In other words, jobs are processed sequentially, and there is a sequence-dependent setup time. Moreover, the processing time and delivery due date are considered uncertain, and a fuzzy conversion method is used to deal with this uncertainty. The proposed mathematical model is a multiobjective one and tries to minimize speed and completion time. In order to optimize this mathematical model, the genetic algorithm (GA) and variable neighborhood search (VNS) algorithms have been used. A new hybrid algorithm has also been developed for this problem. The results show that the hybrid algorithm can provide more substantial results than classical algorithms. Moreover, it is revealed that a large percentage of Pareto solutions in the proposed algorithm have a generation time of more than 80% of the algorithm’s execution time.
8

García, José, Gino Astorga, and Víctor Yepes. "An Analysis of a KNN Perturbation Operator: An Application to the Binarization of Continuous Metaheuristics." Mathematics 9, no. 3 (January 24, 2021): 225. http://dx.doi.org/10.3390/math9030225.

Abstract:
Optimization methods, and metaheuristics in particular, must be constantly improved to reduce execution times, improve the results, and thus be able to address broader instances. In particular, addressing combinatorial optimization problems is critical in the areas of operational research and engineering. In this work, a perturbation operator that uses the k-nearest neighbors technique is proposed and studied with the aim of improving the diversification and intensification properties of metaheuristic algorithms in their binary versions. Random operators are designed to study the contribution of the perturbation operator. To verify the proposal, large instances of the well-known set covering problem are studied. Box plots, convergence charts, and the Wilcoxon statistical test are used to determine the operator's contribution. Furthermore, a comparison is made with metaheuristic techniques that use general binarization mechanisms, such as transfer functions or db-scan, as binarization methods. The results obtained indicate that the KNN perturbation operator significantly improves the results.
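One plausible reading of a KNN-based perturbation operator on binary solutions is a per-bit majority vote over a solution's k nearest neighbours in Hamming distance (an illustrative guess at the mechanism; the paper's operator may differ):

```python
# Sketch: KNN-style perturbation of a binary solution (illustrative).

def hamming(a, b):
    """Number of positions where the two binary vectors differ."""
    return sum(x != y for x, y in zip(a, b))

def knn_perturb(solution, population, k=3):
    """Replace each bit with the majority value among the k nearest
    neighbours of the solution within the population."""
    neighbours = sorted(population, key=lambda s: hamming(s, solution))[:k]
    return [1 if sum(n[i] for n in neighbours) * 2 > k else 0
            for i in range(len(solution))]
```

The intent of such an operator is to pull a solution toward bit patterns shared by nearby good solutions (intensification) while still allowing jumps when neighbours disagree (diversification).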
9

Fan, Yi, Minghui Cai, Naiqi Li, and Yongmei Liu. "A First-Order Interpreter for Knowledge-Based Golog with Sensing based on Exact Progression and Limited Reasoning." Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 734–42. http://dx.doi.org/10.1609/aaai.v26i1.8230.

Abstract:
While founded on the situation calculus, current implementations of Golog are mainly based on the closed-world assumption or its dynamic versions or the domain closure assumption. Also, they are almost exclusively based on regression. In this paper, we propose a first-order interpreter for knowledge-based Golog with sensing based on exact progression and limited reasoning. We assume infinitely many unique names and handle first-order disjunctive information in the form of the so-called proper+ KBs. Our implementation is based on the progression and limited reasoning algorithms for proper+ KBs proposed by Liu, Lakemeyer and Levesque. To improve efficiency, we implement the two algorithms by grounding via a trick based on the unique name assumption. The interpreter is online but the programmer can use two operators to specify offline execution for parts of programs. The search operator returns a conditional plan, while the planning operator is used when local closed-world information is available and calls a modern planner to generate a sequence of actions.
10

GURMAN, Ivan, Pavlo MOTORNYI, Viktor CHESHUN, Andrii DZHULIY, and Vitalii CHORNENKYI. "ALGORITHM OF SYNTHESIS OF DISCRETE TESTS BASED ON EVOLUTIONARY THEORIES." Herald of Khmelnytskyi National University. Technical sciences 309, no. 3 (May 26, 2022): 40–43. http://dx.doi.org/10.31891/2307-5732-2022-309-3-40-43.

Abstract:
The complexity of modern digital components and devices as objects of diagnosis makes it impossible, in most cases, for experts to synthesize tests manually within the time allowed for organizing diagnostic experiments. This makes the development and implementation of intelligent methods and tools for the automated synthesis of test sequences highly relevant. One promising option for the synthesis and optimization of discrete tests is to solve this problem using evolutionary theory, i.e., genetic algorithms. The paper describes an algorithm for the synthesis of discrete tests using genetic algorithms. The diagnostic experiment comprises the stages of forming the initial population, crossover, processing the obtained results, and checking the stopping condition of the genetic algorithm. The main characteristics of the algorithm are: the type of evolutionary algorithm used is hybrid; binary chromosome code words are generated by the crossover operator from the values of the diagnostic vectors of elementary tests; candidates for crossover are selected proportionally, based on a fitness probability function and a compatibility-assessment control operator; the crossover operator is uniform; a new generation is formed from the offspring of crossovers between individuals chosen by proportional selection, together with elitist-selected individuals for which the compatibility-assessment operator does not permit crossover; the main stopping criterion of the evolutionary algorithm is the loss of the ability to perform crossovers among individuals in the resulting generation; additional (optional) stopping criteria are limits on execution time or on the number of iterations (generations).
11

Suzuki, Hideaki. "An Approach to Biological Computation: Unicellular Core-Memory Creatures Evolved Using Genetic Algorithms." Artificial Life 5, no. 4 (October 1999): 367–86. http://dx.doi.org/10.1162/106454699568827.

Abstract:
A novel machine language genetic programming system that uses one-dimensional core memories is proposed and simulated. The core is compared to a biochemical reaction space, and in imitation of biological molecules, four types of data words (Membrane, Pure data, Operator, and Instruction) are prepared in the core. A program is represented by a sequence of Instructions. During execution of the core, Instructions are transcribed into corresponding Operators, and Operators modify, create, or transfer Pure data. The core is hierarchically partitioned into sections by the Membrane data, and the data transfer between sections by special channel Operators constitutes a tree data-flow structure among sections in the core. In the experiment, genetic algorithms are used to modify program information. A simple machine learning problem is prepared for the environment data set of the creatures (programs), and the fitness value of a creature is calculated from the Pure data excreted by the creature. Breeding of programs that can output the predefined answer is successfully carried out. Several future plans to extend this system are also discussed.
12

Abdel-Basset, Mohamed, Reda Mohamed, Waleed Abd Abd Elkhalik, Marwa Sharawi, and Karam M. Sallam. "Task Scheduling Approach in Cloud Computing Environment Using Hybrid Differential Evolution." Mathematics 10, no. 21 (October 31, 2022): 4049. http://dx.doi.org/10.3390/math10214049.

Abstract:
Task scheduling is one of the most significant challenges in the cloud computing environment and has attracted the attention of various researchers over the last decades, in order to achieve cost-effective execution and improve resource utilization. The task scheduling challenge is categorized as a nondeterministic polynomial time (NP)-hard problem, which cannot be tackled with classical methods due to their inability to find a near-optimal solution within a reasonable time. Therefore, metaheuristic algorithms have recently been employed to overcome this problem, but these algorithms still suffer from falling into local minima and from a low convergence speed. In this study, a new task scheduler, known as hybrid differential evolution (HDE), is presented as a solution to the challenge of task scheduling in the cloud computing environment. This scheduler is based on two proposed enhancements to traditional differential evolution. The first improvement modifies the scaling factor to include numerical values generated dynamically from the current iteration, in order to improve both the exploration and exploitation operators; the second improvement strengthens the exploitation operator of the classical DE, in order to achieve better results in fewer iterations. Multiple tests utilizing randomly generated datasets and the CloudSim simulator were conducted to demonstrate the efficacy of HDE. In addition, HDE was compared to a variety of heuristic and metaheuristic algorithms, including the slime mold algorithm (SMA), equilibrium optimizer (EO), sine cosine algorithm (SCA), whale optimization algorithm (WOA), grey wolf optimizer (GWO), classical DE, first come first served (FCFS), round robin (RR) algorithm, and shortest job first (SJF) scheduler. During the trials, makespan and total execution time values were acquired for various task sizes, ranging from 100 to 3000. Compared to the other metaheuristic and heuristic algorithms considered, HDE generated superior outcomes and was found to be the most efficient metaheuristic scheduling algorithm among the methods researched.
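The first enhancement, an iteration-dependent scaling factor, can be sketched on top of standard DE/rand/1 mutation (the linear decay below is an assumed schedule for illustration, not the authors' formula):

```python
# Sketch: DE/rand/1 mutation with a dynamic scaling factor (illustrative).
import random

def dynamic_scaling_factor(t, t_max, f_min=0.2, f_max=0.9):
    """Decay F linearly so early iterations explore and later ones exploit."""
    return f_max - (f_max - f_min) * t / t_max

def de_rand_1_mutation(pop, t, t_max):
    """Build one mutant per individual from three distinct random peers."""
    f = dynamic_scaling_factor(t, t_max)
    mutants = []
    for i in range(len(pop)):
        a, b, c = random.sample([j for j in range(len(pop)) if j != i], 3)
        mutants.append([pop[a][d] + f * (pop[b][d] - pop[c][d])
                        for d in range(len(pop[i]))])
    return mutants
```

A large F early on produces long difference vectors (exploration); as F shrinks, mutants stay close to existing solutions (exploitation).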
13

Chen, Yingguo, Ming Chen, Jun Wen, Yingwu Chen, and Wen Xiang. "An Adaptive Large Neighborhood Search Algorithm for the Satellite Data Transmission Scheduling Problem." International Journal of Aerospace Engineering 2020 (September 16, 2020): 1–12. http://dx.doi.org/10.1155/2020/5243749.

Abstract:
A satellite completes data transmission by establishing a satellite-to-ground-station link. However, in China, satellite ground stations exist only within the country. With the rapid increase in the number of satellites, it is necessary to dispatch satellite communications effectively and maximize the performance of ground stations. We propose a heuristic adaptive large neighborhood search algorithm (H-ALNS) to solve the satellite data transmission scheduling (SDTS) problem. The algorithm includes two heuristic rules for generating the initial scheme and a conflict-reduction process applied after updating the neighborhood. A heuristic task-assignment method is used to select the execution time window and position for the task sequence. An adaptive operator updates the weights of the delete and insert operators in the H-ALNS. The quality of the generated planning scheme improves through continuous neighborhood destruction and repair. Experimental analysis shows that the proposed algorithm is feasible for solving SDTS problems and surpasses the planning results acquired using other comparison algorithms. The H-ALNS has good prospects for practical engineering applications.
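The adaptive-weight machinery common to ALNS schedulers, roulette-wheel operator selection plus a reaction-factor weight update, can be sketched as follows (the scores and reaction factor are illustrative assumptions, not the paper's values):

```python
# Sketch: adaptive operator selection and weight update in ALNS (illustrative).
import random

def select_operator(weights):
    """Roulette-wheel selection: probability proportional to weight."""
    total = sum(weights)
    r = random.uniform(0, total)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1

def update_weight(weights, i, score, reaction=0.3):
    """Blend the old weight with the score the operator earned recently."""
    weights[i] = (1 - reaction) * weights[i] + reaction * score
```

Destroy and repair operators that repeatedly improve the incumbent schedule thus accumulate weight and are chosen more often in later iterations.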
14

Dan Liu, Dan Liu, Shu-Wen Yao Dan Liu, Yan Zeng Shu-Wen Yao, Ke Zhao Yan Zeng, Yan-An Guo Ke Zhao, and Li Li Yan-An Guo. "Research of the Service Composition Optimization Model Based on Petri Net." 電腦學刊 32, no. 5 (October 2021): 101–14. http://dx.doi.org/10.53106/199115992021103205008.

Abstract:
Cloud service systems bring together a wide variety of flexible and scalable mass services. A service combination scheme that meets the user's needs is provided to the user through the flexible and changing service modes of service composition technology. However, service combination failures often occur. In view of these problems, this paper establishes a cloud service combination optimization model. Firstly, a Service Combination Optimization Petri Net is proposed to model and analyze service selection and combination. To avoid local convergence, the Service Combination Optimization Petri Net is combined with an improved genetic algorithm: a service combination model based on a Local Search Operator Genetic Algorithm is proposed, and the legitimacy of the service combination sequence is verified with the Petri net. The experimental results show that the Petri net service composition model can effectively verify the constraints of service composition and the logical rationality of the system, and that the Local Search Operator Genetic Algorithm effectively reduces the search space and improves the convergence rate. Compared with other algorithms, the execution efficiency and accuracy of the Local Search Operator Genetic Algorithm are improved.
15

García, José, Paola Moraga, Broderick Crawford, Ricardo Soto, and Hernan Pinto. "Binarization Technique Comparisons of Swarm Intelligence Algorithm: An Application to the Multi-Demand Multidimensional Knapsack Problem." Mathematics 10, no. 17 (September 3, 2022): 3183. http://dx.doi.org/10.3390/math10173183.

Abstract:
In order to minimize execution times, improve the quality of solutions, and address more extensive target situations, optimization techniques, particularly metaheuristics, are continually improved. Hybridizing procedures are one of these noteworthy strategies due to their wide range of applications. This article describes a hybrid algorithm that combines the k-means method to produce a binary version of the cuckoo search and sine cosine algorithms. The binary algorithms are applied on the NP-hard multi-demand multidimensional knapsack problem. This problem is of particular interest because it has two types of constraints. The first group of constraints is related to the capacity of the knapsacks, and a second type is associated with the demand that must be met. Experiments were undertaken to acquire insight into the contribution of the k-means technique and the local search operator to the final results. Additionally, a comparison is made with two other types of binarization, the first based on a random method and the second based on the percentile concept. The results reveal that the k-means hybrid algorithm consistently provides superior results in most cases studied. In particular, incorporating the local search operator improved the results by an average of 0.23%. On the other hand, when comparing the results with 100 items and 30-30 restrictions, k-means was 1.06% better on average than the random operator.
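A transfer-function binarization of the kind such comparisons use as a baseline can be sketched with an S-shaped (sigmoid) function mapping each continuous component to a bit probability (a standard baseline scheme, not the paper's k-means method):

```python
# Sketch: S-shaped transfer-function binarization of a continuous
# metaheuristic position (illustrative baseline).
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def binarize(position, rng=random.random):
    """Set each bit to 1 with probability sigmoid(x); rng is injectable
    so the mapping can be made deterministic for testing."""
    return [1 if rng() < sigmoid(x) else 0 for x in position]
```

Large positive components map to 1 almost surely and large negative ones to 0, so the continuous search dynamics carry over to the binary space.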
16

Agnes Shifani, S., G. Ramkumar, V. Nanammal, and R. Thandaiah Prabu. "Exploration of Morphological Procedure on the Recognition of Fundus Image." Journal of Computational and Theoretical Nanoscience 17, no. 5 (May 1, 2020): 2014–20. http://dx.doi.org/10.1166/jctn.2020.8841.

Abstract:
An efficient fuzzy k-means clustering algorithm under Morphological Image Processing (MIP) is presented. Image processing is one of the most rapidly developing research areas today and is now deeply integrated with fields related to medical science. Image processing can be used to analyze medical and MRI images to detect rare features and abnormalities in the image. This work deals with blood-vessel segmentation of fundus images. The segmentation is performed using a k-means clustering algorithm, a c-means clustering algorithm, and morphological operators for better execution. This enhances vessel abnormalities progressively and in a relatively short time compared with many other clustering algorithms.
17

Adamski, Mirosław, Andrzej Żyluk, and Marcin Chodnicki. "Analysis of Methods and Control Systems of Unmanned Platforms." Journal of KONBiN 51, no. 4 (December 1, 2021): 143–56. http://dx.doi.org/10.2478/jok-2021-0048.

Abstract:
A key aspect affecting the safety of routing and of unmanned platform mission execution is the autonomy of control systems. To achieve the mission goal, control algorithms supported by advanced sensors have to estimate obstacle locations. Moreover, algorithms are needed to identify potential obstacles and to plan trajectories in two- or three-dimensional space. The use of these algorithms makes it possible to create an intelligent object that performs tasks in difficult conditions where communication between the platform and the operator is constricted. The article focuses mainly on unmanned aerial vehicle (UAV) control systems.
18

Li, Xiang, Nuozhou Sun, Yunqian Luo, and Mingyu Gao. "SODA: A Set of Fast Oblivious Algorithms in Distributed Secure Data Analytics." Proceedings of the VLDB Endowment 16, no. 7 (March 2023): 1671–84. http://dx.doi.org/10.14778/3587136.3587142.

Abstract:
Cloud systems are now a prevalent platform to host large-scale big-data analytics applications such as machine learning and relational database. However, data privacy remains as a critical concern for public cloud systems. Existing trusted hardware could provide an isolated execution domain on an untrusted platform, but also suffers from access-pattern-based side channels at various levels including memory, disks, and networking. Oblivious algorithms can address these vulnerabilities by hiding the program data access patterns. Unfortunately, current oblivious algorithms for data analytics are limited to single-machine execution, only support simple operations, and/or suffer from significant performance overheads due to the use of expensive global sort and excessive data padding. In this work, we propose SODA, a set of efficient and oblivious algorithms for distributed data analytics operators, including filter, aggregate, and binary equi-join. To improve performance, SODA completely avoids the expensive oblivious global sort primitive, and minimizes the data padding overheads. SODA makes use of low-cost (pseudo-)random communication instead of expensive global sort to ensure uniform data traffic in oblivious filter and aggregate. It also adopts a novel two-level bin-packing approach in oblivious join to alleviate both input redistribution and join product skewness, thus minimizing necessary data padding. Compared to the state-of-the-art system, SODA not only extends the functionality but also improves the performance. It achieves 1.1× to 14.6× speedups on complex multi-operator data analytics workloads.
19

García, José, José Lemus-Romani, Francisco Altimiras, Broderick Crawford, Ricardo Soto, Marcelo Becerra-Rozas, Paola Moraga, et al. "A Binary Machine Learning Cuckoo Search Algorithm Improved by a Local Search Operator for the Set-Union Knapsack Problem." Mathematics 9, no. 20 (October 16, 2021): 2611. http://dx.doi.org/10.3390/math9202611.

Abstract:
Optimization techniques, especially metaheuristics, are constantly refined in order to decrease execution times, increase the quality of solutions, and address larger target cases. Hybridizing techniques are one of these strategies, particularly noteworthy due to their breadth of applications. In this article, a hybrid algorithm is proposed that integrates the k-means algorithm to generate a binary version of the cuckoo search technique, strengthened by a local search operator. The binary cuckoo search algorithm is applied to the NP-hard Set-Union Knapsack Problem. This problem has recently attracted great attention from the operational research community due to the breadth of its applications and the difficulty it presents in solving medium and large instances. Numerical experiments were conducted to gain insight into the contribution of the k-means technique and the local search operator to the final results. Furthermore, a comparison to state-of-the-art algorithms is made. The results demonstrate that the hybrid algorithm consistently produces superior results in the majority of the analyzed medium instances, and its performance is competitive but degrades in large instances.
20

Иванова, Е. В., and Л. Б. Соколинский. "Using Intel Xeon Phi coprocessors for execution of natural join on compressed data." Numerical Methods and Programming (Vychislitel'nye Metody i Programmirovanie), no. 4 (December 18, 2015): 534–42. http://dx.doi.org/10.26089/nummet.v16r450.

Abstract:
A database coprocessor for high-performance cluster computing systems with many-core accelerators is described. The coprocessor uses distributed columnar indexes with interval fragmentation. The operation of the coprocessor engine is considered using natural join processing as an example. The parallel decomposition of the natural join operator is performed using distributed columnar indexes. The proposed approach allows relational operators to be executed on computing clusters without massive data exchange. The results of computational experiments on Intel Xeon Phi coprocessors confirm the efficiency of the developed methods and algorithms.
APA, Harvard, Vancouver, ISO, and other styles
21

Sundaram, Ramakrishnan, Ravichandran KS, Premaladha Jayaraman, and Venkatraman B. "Extraction of Blood Vessels in Fundus Images of Retina through Hybrid Segmentation Approach." Mathematics 7, no. 2 (February 13, 2019): 169. http://dx.doi.org/10.3390/math7020169.

Full text
Abstract:
A hybrid segmentation algorithm is proposed in this paper to extract the blood vessels from the fundus image of the retina. A fundus camera captures the posterior surface of the eye, and the captured images are used to diagnose diseases like Diabetic Retinopathy, Retinoblastoma, Retinal haemorrhage, etc. Segmentation or extraction of blood vessels is highly required, since the analysis of vessels is crucial for diagnosis, treatment planning, and execution of clinical outcomes in the field of ophthalmology. It is derived from the literature review that no unique segmentation algorithm is suitable for images of different eye-related diseases, and the degradation of the vessels differs from patient to patient. If the blood vessels are extracted from the fundus images, it will make the diagnosis process easier. Hence, this paper aims to frame a hybrid segmentation algorithm exclusively for the extraction of blood vessels from the fundus image. The proposed algorithm is hybridized with morphological operations, bottom-hat transform, the multi-scale vessel enhancement (MSVE) algorithm, and image fusion. After execution of the proposed segmentation algorithm, the area-based morphological operator is applied to highlight the blood vessels. To validate the proposed algorithm, the results are compared with the ground truth of the High-Resolution Fundus (HRF) images dataset. Upon comparison, it is inferred that the proposed algorithm segments the blood vessels with more accuracy than the existing algorithms.
APA, Harvard, Vancouver, ISO, and other styles
22

Sosimi, Adeyanju, Folorunso Oladimeji Ogunwolu, and Taoreed Adegbola. "A Makespan Optimization Scheme for NP-Hard Gari Processing Job Scheduling Using Improved Genetic Algorithm." Journal of Industrial Engineering 2014 (April 6, 2014): 1–10. http://dx.doi.org/10.1155/2014/628640.

Full text
Abstract:
An optimization scheme for minimizing the makespan of Gari processing jobs using an improved-initial-population Genetic Algorithm (GA) is proposed. The initial GA population was improved using the job sequencing and dispatching rules First Come First Served (FCFS), Shortest Processing Time (SPT), Longest Processing Time (LPT), and the Modified Johnson's Algorithm for m machines, in order to obtain better schedules than are affordable by a GA with a freely generated initial population or by the individual traditional sequencing and dispatching rules. The traditional GA crossover and mutation operators, as well as a custom-made remedial operator, were used together with a hybrid of elitism and roulette wheel algorithms in the selection process based on job completion times. A test problem of 20 jobs with specified processing and arrival times was simulated through the integral 5-process Gari production routine using the sequencing and dispatching rules, the GA with freely generated initial population, and the improved GA. Comparisons based on performance measures such as optimal makespan, mean makespan, execution time, and solution improvement rate established the superiority of the improved-initial-population GA over the traditional sequencing and dispatching rules and the freely generated initial population GA.
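Two of the dispatching rules named above, SPT and LPT, amount to sorting jobs by processing time; a minimal sketch (with made-up job data) could look like this:

```python
# Sketch of two dispatching rules used to seed an initial GA population:
# SPT (Shortest Processing Time) and LPT (Longest Processing Time).
# Job names and processing times are made up for illustration.

def spt(jobs):
    """Order jobs by ascending processing time."""
    return sorted(jobs, key=lambda j: j[1])

def lpt(jobs):
    """Order jobs by descending processing time."""
    return sorted(jobs, key=lambda j: j[1], reverse=True)

jobs = [("J1", 5), ("J2", 2), ("J3", 8), ("J4", 3)]
print([name for name, _ in spt(jobs)])  # ['J2', 'J4', 'J1', 'J3']
print([name for name, _ in lpt(jobs)])  # ['J3', 'J1', 'J4', 'J2']
```

Seeding the GA with such rule-generated sequences, instead of purely random permutations, is what the abstract calls the improved initial population.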
APA, Harvard, Vancouver, ISO, and other styles
23

Guerrero, José L., Antonio Berlanga, and José M. Molina. "A Guided Mutation Operator for Dynamic Diversity Enhancement in Evolutionary Strategies." International Journal of Natural Computing Research 4, no. 2 (April 2014): 20–39. http://dx.doi.org/10.4018/ijncr.2014040102.

Full text
Abstract:
Diversity in evolutionary algorithms is a critical issue related to the performance obtained during the search process and strongly linked to convergence issues. A lack of the required diversity has traditionally been linked to problematic situations such as early stopping in the presence of local optima (usually faced when the number of individuals in the population is insufficient to deal with the search space). The current proposal introduces a guided mutation operator to cope with these diversity issues, along with tracking mechanisms over the search space that feed the required information to this mutation operator. The objective of the proposed mutation operator is to guarantee a certain degree of coverage over the search space before the algorithm is stopped, attempting to prevent the early convergence that may be introduced by a lack of population diversity. A dynamic mechanism is included to determine, at execution time, the degree of application of the technique, adapting the number of cycles in which it is applied. The results have been tested over a dataset of ten standard single-objective functions with different characteristics regarding dimensionality, presence of multiple local optima, and search space range, at three different dimensionality values: 30D, 300D, and 1000D. Thirty different runs have been performed to cover the effect of the introduced operator and the statistical relevance of the measured results.
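One plausible reading of such a coverage-guided mutation (the binning scheme here is an assumption for illustration, not the authors' exact mechanism) is to track how often each region of the search range has been visited and resample a gene inside the least-visited region:

```python
import random

# Illustrative coverage-guided mutation: the search range is split into bins,
# visits are tallied, and the mutated gene is resampled inside the least
# visited bin. The binning scheme is an assumption made for this sketch.

def guided_mutate(lo, hi, visit_counts, rng=random):
    bins = len(visit_counts)
    width = (hi - lo) / bins
    target = min(range(bins), key=lambda i: visit_counts[i])  # least explored
    return lo + (target + rng.random()) * width

counts = [5, 0, 7, 3]                    # bin 1 has never been visited
value = guided_mutate(0.0, 1.0, counts)
print(0.25 <= value < 0.5)               # True: lands in the empty bin
```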
APA, Harvard, Vancouver, ISO, and other styles
24

Xing, Lining, Rui Wu, Jiaxing Chen, and Jun Li. "Knowledge-Based Evolutionary Optimizing Makespan and Cost for Cloud Workflows." Mathematics 11, no. 1 (December 22, 2022): 38. http://dx.doi.org/10.3390/math11010038.

Full text
Abstract:
Workflow scheduling is essential to simultaneously optimize the makespan and economic cost for cloud services and has attracted intensive interest. Most of the existing multi-objective cloud workflow scheduling algorithms regard the focused problems as black-boxes and design evolutionary operators to perform random searches, which are inefficient in dealing with the elasticity and heterogeneity of cloud resources as well as complex workflow structures. This study explores the characteristics of cloud resources and workflow structures to design a knowledge-based evolutionary optimization operator, named KEOO, with two novel features. First, we develop a task consolidation mechanism to reduce the number of cloud resources used, reducing the economic cost of workflow execution without delaying its finish time. Then, we develop a critical task adjustment mechanism to selectively move the critical predecessors of some tasks to the same resources to eliminate the data transmission overhead between them, striving to improve the economic cost and finish time simultaneously. At last, we embed the proposed KEOO into four classical multi-objective algorithms, i.e., NSGA-II, HypE, MOEA/D, and RVEA, forming four variants: KEOO-NSGA-II, KEOO-HypE, KEOO-MOEA/D, and KEOO-RVEA, for comparative experiments. The comparison results demonstrate the effectiveness of the KEOO in improving these four algorithms in solving cloud workflow scheduling problems.
APA, Harvard, Vancouver, ISO, and other styles
25

Li, Hongbo, Zhe Xu, and Wenchao Wei. "Bi-Objective Scheduling Optimization for Discrete Time/Cost Trade-Off in Projects." Sustainability 10, no. 8 (August 7, 2018): 2802. http://dx.doi.org/10.3390/su10082802.

Full text
Abstract:
In sustainable project management, time and cost are two critical factors affecting the success of a project. Time/cost trade-offs in projects accelerate the execution of some activities by increasing the amount of non-renewable resources committed to them and therefore shorten the project duration. The discrete time/cost trade-off problem (DTCTP) has been extensively studied during the past 20 years. However, due to its complexity, the DTCTP—especially the DTCTP curve problem (DTCTP-C)—has only been solved for relatively small instances. To the best of our knowledge, there is no computational performance analysis for solving the DTCTP-C on large project instances with up to 500 activities. This paper aims to fill this gap. We present two bi-objective heuristic algorithms for the DTCTP-C where both project duration and cost are minimized. The objective is to obtain a good approximate efficient set for the large-scale instances. The first algorithm is based on the non-dominated sorting genetic algorithm II (NSGA-II) and uses a specially designed critical path-based crossover operator. The second algorithm is a steepest descent heuristic which generates efficient solutions by iteratively solving the DTCTP with different deadlines. Computational experiments are conducted to validate the proposed algorithms on a large set of randomly generated problem instances.
APA, Harvard, Vancouver, ISO, and other styles
26

Salman, Ayed A., Kishan Mehrotra, and Chilukuri K. Mohan. "Adaptive Linkage Crossover." Evolutionary Computation 8, no. 3 (September 2000): 341–70. http://dx.doi.org/10.1162/106365600750078817.

Full text
Abstract:
Problem-specific knowledge is often implemented in search algorithms using heuristics to determine which search paths are to be explored at any given instant. As in other search methods, utilizing this knowledge will more quickly lead a genetic algorithm (GA) towards better results. In many problems, crucial knowledge is not found in individual components, but in the interrelations between those components. For such problems, we develop an interrelation (linkage) based crossover operator that has the advantage of liberating GAs from the constraints imposed by the fixed representations generally chosen for problems. The strength of linkages between components of a chromosomal structure can be explicitly represented in a linkage matrix and used in the reproduction step to generate new individuals. For some problems, such a linkage matrix is known a priori from the nature of the problem. In other cases, the linkage matrix may be learned by successive minor adaptations during the execution of the evolutionary algorithm. This paper demonstrates the success of such an approach for several problems.
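A minimal sketch of a linkage-based crossover, under one simple assumption (genes whose pairwise linkage strength exceeds a threshold are inherited as a block from the same parent), could look like this:

```python
import random

# Hypothetical sketch of linkage-based crossover: strongly linked genes are
# grouped by flood fill over the linkage matrix, and each group is inherited
# wholesale from one randomly chosen parent. The threshold-based grouping is
# an assumption for illustration, not the paper's exact operator.

def linkage_groups(linkage, threshold):
    n = len(linkage)
    seen, groups = set(), []
    for i in range(n):
        if i in seen:
            continue
        group, stack = {i}, [i]
        while stack:                      # flood fill over strong links
            g = stack.pop()
            for j in range(n):
                if j not in group and linkage[g][j] >= threshold:
                    group.add(j)
                    stack.append(j)
        seen |= group
        groups.append(sorted(group))
    return groups

def linkage_crossover(p1, p2, linkage, threshold=0.5, rng=random):
    child = list(p1)
    for group in linkage_groups(linkage, threshold):
        parent = p1 if rng.random() < 0.5 else p2
        for g in group:
            child[g] = parent[g]
    return child

L = [[1.0, 0.9, 0.1],
     [0.9, 1.0, 0.1],
     [0.1, 0.1, 1.0]]
print(linkage_groups(L, 0.5))  # [[0, 1], [2]]: genes 0 and 1 travel together
```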
APA, Harvard, Vancouver, ISO, and other styles
27

Liu, Chunsheng, and Chunping Ren. "Research on Coal-Rock Fracture Image Edge Detection Based on Tikhonov Regularization and Fractional Order Differential Operator." Journal of Electrical and Computer Engineering 2019 (May 2, 2019): 1–13. http://dx.doi.org/10.1155/2019/9594301.

Full text
Abstract:
In conventional image edge detection, first-order differential methods easily lose image details, while second-order differential methods are more sensitive to noise. To deal with this problem, the Tikhonov regularization method is adopted to reconstruct the input coal-rock infrared images so as to reduce noise interference, and the reconstructed image is then transformed in gray level. Finally, considering the frequency characteristics and long-memory properties of the fractional differential, the classical first-order Sobel and second-order Laplacian edge detection algorithms are extended to a fractional-order pattern, and a new fractional-order differential image edge detection scheme is constructed to identify coal-rock fracture edge features. The results show that, compared with the integer-order differential, the fractional-order differential algorithm has smaller error and omission rates, a larger quality factor, and smaller execution time and memory footprint. From the point of view of location criteria and location accuracy, the fractional-order differential algorithm is better than the integer-order one. In addition, the proposed method is compared with the Canny algorithm, B-spline wavelet transform, and a multidirection fuzzy morphological edge detection method; it detects more coal-rock fracture infrared image edge details and is more robust to noise.
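The fractional-order extension rests on the Grünwald-Letnikov coefficients c_k = (-1)^k C(v, k), which generalize the first-order difference mask [1, -1] to non-integer order v; a short sketch of their recurrence:

```python
# Sketch of the Grünwald-Letnikov coefficient recurrence behind a
# fractional-order difference mask: c_0 = 1, c_k = c_{k-1} * (k - 1 - v) / k.
# At v = 1 the coefficients reduce to the familiar [1, -1] mask (the rest
# vanish); at fractional v the tail persists, giving the long-memory property.

def gl_coefficients(v, n):
    """First n coefficients c_k = (-1)^k * binom(v, k) of order v."""
    coeffs = [1.0]
    for k in range(1, n):
        coeffs.append(coeffs[-1] * (k - 1 - v) / k)
    return coeffs

print(gl_coefficients(0.5, 4))  # [1.0, -0.5, -0.125, -0.0625]
```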
APA, Harvard, Vancouver, ISO, and other styles
28

Zuhtuogullari, Kursat, Novruz Allahverdi, and Nihat Arikan. "A Soft Computing Based Approach Using Modified Selection Strategy for Feature Reduction of Medical Systems." Computational and Mathematical Methods in Medicine 2013 (2013): 1–13. http://dx.doi.org/10.1155/2013/587564.

Full text
Abstract:
Systems with high-dimensional input spaces require long processing times and high memory usage. Most attribute selection algorithms suffer from input-dimension limits and information storage problems. These problems are eliminated by the developed feature reduction software, which uses a new modified selection mechanism that adds middle-region solution candidates. The hybrid system software is constructed for reducing the input attributes of systems with a large number of input variables. The designed software also supports the roulette wheel selection mechanism. Linear order crossover is used as the recombination operator. In genetic-algorithm-based soft computing methods, locking into local solutions is also a problem, which is eliminated by the developed software. Faster and more effective results are obtained in the test procedures. Twelve input variables of the urological system have been reduced to reducts (reduced input attributes) with seven, six, and five elements. It can be seen from the obtained results that the developed software with modified selection has advantages in memory allocation, execution time, classification accuracy, sensitivity, and specificity when compared with the other reduction algorithms on the urological test data.
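The roulette wheel selection mechanism mentioned above can be sketched in a few lines: each individual's selection probability is proportional to its fitness (assumed positive here):

```python
import random

# Minimal roulette-wheel selection sketch: a random point on the cumulative
# fitness "wheel" picks the winner. Population and fitness values are made up.

def roulette_select(population, fitnesses, rng=random):
    total = sum(fitnesses)
    pick = rng.uniform(0, total)
    acc = 0.0
    for individual, fit in zip(population, fitnesses):
        acc += fit
        if pick <= acc:
            return individual
    return population[-1]          # guard against float rounding at the edge

random.seed(0)
winners = [roulette_select("abcd", [1, 1, 1, 97]) for _ in range(100)]
print(winners.count("d") > 80)     # the fittest individual dominates the draws
```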
APA, Harvard, Vancouver, ISO, and other styles
29

Ramirez-Atencia, Cristian, and David Camacho. "Extending QGroundControl for Automated Mission Planning of UAVs." Sensors 18, no. 7 (July 18, 2018): 2339. http://dx.doi.org/10.3390/s18072339.

Full text
Abstract:
Unmanned Aerial Vehicles (UAVs) have become very popular in the last decade due to advantages such as strong terrain adaptation, low cost, zero casualties, and so on. One of the most interesting advances in this field is the automation of mission planning (task allocation) and real-time replanning, which are highly useful for increasing the autonomy of the vehicle and reducing the operator workload. These automated mission planning and replanning systems require a Human Computer Interface (HCI) that facilitates the visualization and selection of plans that will be executed by the vehicles. In addition, most missions should be assessed before their real-life execution. This paper extends QGroundControl, an open-source simulation environment for flight control of multiple vehicles, by adding a mission designer that permits the operator to build complex missions with tasks and other scenario items; an interface for automated mission planning and replanning, which works as a test bed for different algorithms; and a Decision Support System (DSS) that helps the operator in the selection of the plan. In this work, a complete guide to these systems and some practical use cases are provided.
APA, Harvard, Vancouver, ISO, and other styles
30

Minnetti, Elisa, Paolo Chiariotti, Nicola Paone, Gisela Garcia, Helder Vicente, Luca Violini, and Paolo Castellini. "A Smartphone Integrated Hand-Held Gap and Flush Measurement System for in Line Quality Control of Car Body Assembly." Sensors 20, no. 11 (June 10, 2020): 3300. http://dx.doi.org/10.3390/s20113300.

Full text
Abstract:
This paper presents the design and the characterization of a portable laser triangulation measurement system for measuring gap and flush in the car body assembly process. Targeting Human in the Loop (HILT) operations in the manufacturing sector, and in line with the vision of human empowerment with Industry 4.0 technologies, the instrument embeds features to ease operators' activity and compensate possible misuse that could affect the robustness and the quality of data acquired. The device is based on a smartphone integrated with a miniaturized laser triangulation system installed in a cover. The device embodies additional sensors and control systems in order to guarantee operators' safety (switching the laser line on and off based on specific conditions), support operators during the measurement execution task, and optimize the image acquisition process for minimizing the uncertainty associated with the measurement. The smartphone performs on-board processing and allows Wi-Fi communication with the plant IT infrastructure. Compliance with Industry 4.0 requirements is guaranteed using the OPC-UA (Open Platform Communications—Unified Architecture) communication protocol, enabling the exchange of live data with the plant middleware. The smartphone also provides an advanced high-resolution color display and well-proven, ergonomic human–machine interfaces, which have been fully exploited in the design. The paper introduces the system optical layout and then presents the algorithms implemented to realize the gap and flush measurement. The paper finally presents the calibration of the instrument and estimates its calibration uncertainty in laboratory conditions. Then it discusses how performance decays when the operator handles the instrument on a reference car body. Finally, it shows the analysis of uncertainty when the device is used on real car bodies of different colors in a production line.
It is observed that the measurement uncertainty of the whole measurement chain (measurand + instrument + operator + uncontrolled environmental conditions) is larger than the instrument calibration uncertainty because the measurement process is affected by the operator and the variable conditions of the production line.
APA, Harvard, Vancouver, ISO, and other styles
31

Piskunov, M. A., and D. V. Adamov. "Kinematic analysis of connection of hydraulic cylinders of a forwarder crane under execution of defined motion of grapple." Traktory i sel hozmashiny 83, no. 4 (April 15, 2016): 42–47. http://dx.doi.org/10.17816/0321-4443-66153.

Full text
Abstract:
The article deals with the kinematic analysis of a forwarder crane; it determines the parameters of extension of the hydraulic-cylinder rods for executing a defined trajectory of grapple motion. Current forwarder crane control systems do not allow an operator to realize optimal trajectories and the full speed capability of crane link motion. This disadvantage can be partly corrected, for example, with automatic control of crane motion in certain phases of the technological cycle. The analysis of the matched connection of the hydraulic cylinders of a typical forwarder crane is presented in the article. It determines how the hydraulic-cylinder rods should move to provide a linear grapple trajectory at the stage of motion from the moment of log gripping to the moment of its lodging in the middle of the upper edge of the forwarder load space. It is necessary to provide smooth acceleration at the start and smooth deceleration at the end of grapple motion. The hydraulic-cylinder rods should likewise smoothly accelerate at the start of motion and smoothly decelerate at its end. The kinematic analysis uses well-known methods that are presented in the article in shortened form. It is shown that for the typical design of a forwarder crane, the conditions of smooth motion and linearity of trajectory cannot both be fulfilled simultaneously. The results can be used for developing algorithms for automatic crane control in a separate phase of log loading, when there are numerous points of log gripping (gripping control provided by the operator) and one delivery point. No additional systems for recognizing environmental objects are necessary at this phase, which allows these algorithms to be integrated into the forwarder control system with low-cost facilities.
APA, Harvard, Vancouver, ISO, and other styles
32

Mohiuddin, Khalid, Mohamed Nadhmi Miladi, Mohiuddin Ali Khan, Mohammad Abdul Khaleel, Sajid Ali Khan, Samreen Shahwar, A. Nasr, and Mohammad Aminul Islam. "Mobile Learning New Trends in Emerging Computing Paradigms: An Analytical Approach Seeking Performance Efficiency." Wireless Communications and Mobile Computing 2022 (September 5, 2022): 1–17. http://dx.doi.org/10.1155/2022/6151168.

Full text
Abstract:
Mobile learning (m-learning) adoption has increased, and m-learning can demonstrate superior performance by implementing related computing paradigms, such as IoT, edge, mobile edge, fog, AI, and 5G. Mobile cloud architectures (MCAs) enable m-learning with several benefits but face limitations while executing real-time applications. This study investigates the state-of-the-art m-learning architectures, derives a layered m-learning MCA that obtains numerous benefits of related computing paradigms, and expands the m-learning functional structure. It evaluates m-learning performance across the four physical-layer MCAs (distance cloud, cloudlet, operator-centric cloud, and ad hoc cloud) and emerging computing architectures. Surprisingly, only the distance-cloud MCA has been adopted for developing m-learning systems, ignoring the other three. The performance evaluation shows that m-learning obtains substantial benefits and user QoE in related computing paradigms. Mobile edge computing offers ultralow latency, whereas the current architecture improves task execution time (1.87, 2.01, 2.63, and 3.97) for the resource-intensive application (i.e., 4.2 MB). Fog using AI algorithms is exceptional for more complex learning objects, IoT is superior for intelligent learning tools, and 5G ultrawideband services are more significant for intelligent video analytics. These findings help learners, educators, and institutions adopt an appropriate model for achieving their academic objectives across educational disciplines. The presented approach enables future research to design innovative architectures considering resource-intensive m-learning application execution requirements, such as video content analytics and virtual reality learning models.
APA, Harvard, Vancouver, ISO, and other styles
33

Smirnova, Ekaterina, and Boris Spassky. "Organization of supervisory control in scenarios of extreme robotics." Robotics and Technical Cybernetics 8, no. 4 (December 30, 2020): 245–58. http://dx.doi.org/10.31776/rtcj.8401.

Full text
Abstract:
The article describes the general principles of supervisory control organization in extreme robotics scenarios with different levels of autonomy. It is shown that increasing the robot's level of autonomy reduces the operator's load thanks to the division of labor between human and machine and to the implementation of the supervisory control mode, the essence of which is the decomposition of the mission into scenarios that describe algorithms for solving functionally complete tasks and consist of sequences of simple technological operations. Such scenarios are executed automatically. In this case, the operator mainly implements the functions of monitoring and diagnostics, and the control functions are reduced to making decisions about executing the next scenario or intervening in the execution of the current scenario when the robot makes an obvious mistake or encounters a problem that it cannot solve on its own. Examples of supervisory control implementation are given for work on critical infrastructure facilities and for search-and-rescue and other special operations under challenging conditions, including by heterogeneous groups of robots.
APA, Harvard, Vancouver, ISO, and other styles
34

Khosravi, Faramarz, Alexander Rass, and Jürgen Teich. "Efficient Computation of Probabilistic Dominance in Multi-objective Optimization." ACM Transactions on Evolutionary Learning and Optimization 1, no. 4 (December 31, 2021): 1–26. http://dx.doi.org/10.1145/3469801.

Full text
Abstract:
Real-world problems typically require the simultaneous optimization of multiple, often conflicting objectives. Many of these multi-objective optimization problems are characterized by wide ranges of uncertainties in their decision variables or objective functions. To cope with such uncertainties, stochastic and robust optimization techniques are widely studied aiming to distinguish candidate solutions with uncertain objectives specified by confidence intervals, probability distributions, sampled data, or uncertainty sets. In this scope, this article first introduces a novel empirical approach for the comparison of candidate solutions with uncertain objectives that can follow arbitrary distributions. The comparison is performed through accurate and efficient calculations of the probability that one solution dominates the other in terms of each uncertain objective. Second, such an operator can be flexibly used and combined with many existing multi-objective optimization frameworks and techniques by just substituting their standard comparison operator, thus easily enabling the Pareto front optimization of problems with multiple uncertain objectives. Third, a new benchmark for evaluating uncertainty-aware optimization techniques is introduced by incorporating different types of uncertainties into a well-known benchmark for multi-objective optimization problems. Fourth, the new comparison operator and benchmark suite are integrated into an existing multi-objective optimization framework that features a selection of multi-objective optimization problems and algorithms. Fifth, the efficiency in terms of performance and execution time of the proposed comparison operator is evaluated on the introduced uncertainty benchmark. Finally, statistical tests are applied giving evidence of the superiority of the new comparison operator in terms of ε-dominance and attainment surfaces in comparison to previously proposed approaches.
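One simplified ingredient of such a comparison, estimating from sampled data the probability that one solution beats another on a single minimized objective, can be sketched as:

```python
# Illustrative sketch only: empirical P(a < b) for one minimized objective,
# computed over all pairs of sampled objective values of two candidates.
# The sample values below are made up for the example.

def prob_better(samples_a, samples_b):
    """Fraction of sample pairs (a, b) with a strictly better (smaller)."""
    wins = sum(1 for a in samples_a for b in samples_b if a < b)
    return wins / (len(samples_a) * len(samples_b))

a = [1.0, 2.0, 3.0]
b = [2.5, 3.5]
print(prob_better(a, b))  # 5 of the 6 pairs favor solution a
```

A multi-objective operator of the kind described would combine such per-objective probabilities; this sketch shows only the single-objective building block.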
APA, Harvard, Vancouver, ISO, and other styles
35

Lee, Jongseok, Ribin Balachandran, Konstantin Kondak, Andre Coelho, Marco De Stefano, Matthias Humt, Jianxiang Feng, Tamim Asfour, and Rudolph Triebel. "Virtual Reality via Object Pose Estimation and Active Learning: Realizing Telepresence Robots with Aerial Manipulation Capabilities." Field Robotics 3, no. 1 (January 10, 2023): 323–67. http://dx.doi.org/10.55417/fr.2023010.

Full text
Abstract:
This paper presents a novel telepresence system for advancing aerial manipulation in dynamic and unstructured environments. The proposed system not only features a haptic device, but also a virtual reality (VR) interface that provides real-time 3D displays of the robot’s workspace as well as a haptic guidance to its remotely located operator. To realize this, multiple sensors, namely, a LiDAR, cameras, and IMUs are utilized. For processing of the acquired sensory data, pose estimation pipelines are devised for industrial objects of both known and unknown geometries. We further propose an active learning pipeline in order to increase the sample efficiency of a pipeline component that relies on a Deep Neural Network (DNN) based object detector. All these algorithms jointly address various challenges encountered during the execution of perception tasks in industrial scenarios. In the experiments, exhaustive ablation studies are provided to validate the proposed pipelines. Methodologically, these results commonly suggest how an awareness of the algorithms’ own failures and uncertainty (“introspection”) can be used to tackle the encountered problems. Moreover, outdoor experiments are conducted to evaluate the effectiveness of the overall system in enhancing aerial manipulation capabilities. In particular, with flight campaigns over days and nights, from spring to winter, and with different users and locations, we demonstrate over 70 robust executions of pick-and-place, force application and peg-in-hole tasks with the DLR cable-Suspended Aerial Manipulator (SAM). As a result, we show the viability of the proposed system in future industrial applications.
APA, Harvard, Vancouver, ISO, and other styles
36

Sun, Xue, Ping Chou, Chao-Chin Wu, and Liang-Rui Chen. "Quality-Oriented Study on Mapping Island Model Genetic Algorithm onto CUDA GPU." Symmetry 11, no. 3 (March 2, 2019): 318. http://dx.doi.org/10.3390/sym11030318.

Full text
Abstract:
Genetic algorithm (GA), a global search method, has widespread applications in various fields. One very promising variant of GA is the island model GA (IMGA), which introduces the key idea of migration to explore a wider search space. Migration exchanges chromosomes between islands, resulting in better-quality solutions. However, IMGA takes a long time to solve large-scale NP-hard problems. In order to shorten the computation time, the modern graphics processing unit (GPU), a highly parallel architecture, has been widely adopted to accelerate the execution of NP-hard algorithms. However, most previous studies on GPUs focused on performance only, because the solution qualities found by the CPU and GPU implementations of the same method are exactly the same; consequently, previous work usually did not report on quality. In this paper, we investigate how to find a better solution within a reasonable time when parallelizing IMGA on a GPU, taking the UA-FLP as a study example. Firstly, we propose an efficient parallel tournament selection operator on the GPU to achieve better solution quality in a shorter amount of time. Secondly, we focus on how to tune three important parameters of IMGA to obtain a better solution efficiently: the number of islands, the number of generations, and the number of chromosomes. In particular, different parameters have different impacts on solution quality improvement and execution time increment. We address the challenge of how to trade off solution quality against execution time for these parameters. Finally, experiments and statistics are conducted to help researchers set parameters more efficiently to obtain better solutions when GPUs are used to accelerate IMGA.
It has been observed that the order of influence on solution quality is: the number of chromosomes, the number of generations, and the number of islands, which can guide users to obtain better solutions efficiently with a moderate increment of execution time. Furthermore, if we give higher priority to reducing execution time on the GPU, the quality of the best solution can be improved by about 3%, with an acceleration 29 times faster than the CPU counterpart, after applying our suggested parameter settings. However, if we give solution quality higher priority, i.e., the GPU execution time is close to the CPU's, the solution quality can be improved by up to 8%.
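The tournament selection operator at the core of the first contribution can be sketched sequentially (the GPU parallelization itself is omitted): k individuals are sampled at random and the fittest one wins:

```python
import random

# Minimal sequential tournament selection sketch; population and fitness
# values are made up, and fitness is maximized here.

def tournament_select(population, fitness, k=3, rng=random):
    contestants = rng.sample(range(len(population)), k)   # k random entrants
    return population[max(contestants, key=lambda i: fitness[i])]

pop = ["s0", "s1", "s2", "s3"]
fit = [10, 40, 20, 30]
print(tournament_select(pop, fit, k=4))  # with k = len(pop), the best always wins: s1
```

On a GPU, many such tournaments can run in parallel, one per thread, which is the direction the abstract's parallel operator takes.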
APA, Harvard, Vancouver, ISO, and other styles
37

Salhi, Dhai Eddine, Abelkamel Tari, and Mohand Tahar Kechadi. "Using E-Reputation for Sentiment Analysis." International Journal of Cloud Applications and Computing 11, no. 2 (April 2021): 32–47. http://dx.doi.org/10.4018/ijcac.2021040103.

Full text
Abstract:
In a competitive world, companies look to gain a positive reputation through their clients. Electronic reputation is part of this reputation, mainly in social networks, where everyone is free to express their opinion. Sentiment analysis of the data collected in these networks is necessary to identify and assess the reputation of a company. This paper focused on one type of data, tweets on Twitter, which the authors analyzed for the company Djezzy (a mobile operator in Algeria) to gauge customer satisfaction. The study is divided into two parts: the first part was the pre-processing phase, where the tweets were filtered (eliminating useless words, applying tokenization) to keep the necessary information for better accuracy. The second part was the application of machine learning algorithms (SVM and logistic regression) for supervised classification, since the results are binary. The strong point of this study was the possibility of running the chosen algorithms on a cloud in order to save execution time; the solution also supports three languages: Arabic, English, and French.
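A pre-processing step of the kind described (lowercasing, stripping URLs, mentions, and hashtags, then tokenizing and removing stop words) can be sketched as follows; the tiny stop-word list and the sample tweet are stand-ins, not the authors' resources:

```python
import re

# Illustrative tweet pre-processing sketch: noise removal, tokenization,
# and stop-word filtering before the classification stage.

STOP_WORDS = {"the", "is", "a", "to", "and"}   # tiny stand-in list

def preprocess(tweet):
    # Drop URLs, @mentions, and #hashtags, then lowercase and tokenize.
    tweet = re.sub(r"https?://\S+|[@#]\w+", "", tweet.lower())
    tokens = re.findall(r"[a-z']+", tweet)
    return [t for t in tokens if t not in STOP_WORDS]

print(preprocess("The network is great! Thanks @Djezzy http://t.co/x #happy"))
# ['network', 'great', 'thanks']
```

The resulting token lists would then be vectorized and fed to the SVM or logistic regression classifier.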
APA, Harvard, Vancouver, ISO, and other styles
38

Muñoz, Daniel M., Diego F. Sanchez, Carlos H. Llanos, and Mauricio Ayala-Rincón. "Tradeoff of FPGA Design of a Floating-point Library for Arithmetic Operators." Journal of Integrated Circuits and Systems 5, no. 1 (November 21, 2010): 42–52. http://dx.doi.org/10.29292/jics.v5i1.309.

Full text
Abstract:
Many scientific and engineering applications require performing a large number of arithmetic operations that must be computed efficiently, with high precision and a large dynamic range. Commonly, these applications are implemented on personal computers, taking advantage of floating-point arithmetic and high operating frequencies. However, most common software architectures execute instructions sequentially due to the von Neumann model, and consequently several delays are introduced in the data transfer between the program memory and the Arithmetic Logic Unit (ALU). Several mobile applications require high performance in terms of computational accuracy and execution time as well as low power consumption. Modern Field Programmable Gate Arrays (FPGAs) are a suitable solution for high-performance embedded applications given the flexibility of their architectures and their parallel capabilities, which allow the implementation of complex algorithms and performance improvements. This paper describes a parameterizable floating-point library of arithmetic operators based on FPGAs. A general architecture was implemented for addition/subtraction and multiplication, and two different architectures, based on Goldschmidt's and the Newton-Raphson algorithms, were implemented for division and square root. Additionally, a tradeoff analysis of the hardware implementation was performed, which enables the designer to choose, for general-purpose applications, a suitable bit-width representation and associated error, as well as the area cost, elapsed time, and power consumption of each arithmetic operator. Synthesis results demonstrated the effectiveness of the implemented cores on commercial FPGAs and showed that the most critical parameter is the consumption of dedicated Digital Signal Processing (DSP) slices. Simulation results computed the mean square error (MSE) and maximum absolute error, demonstrating the correctness of the implemented floating-point library and providing an experimental error analysis. The Newton-Raphson algorithm achieves MSE results similar to Goldschmidt's algorithm and operates at similar frequencies; however, it saves more logic area and dedicated DSP blocks.
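The two division schemes compared in the abstract can be sketched in software. This is a minimal illustration, not the FPGA cores themselves: both assume the divisor has been pre-scaled into (0, 2), as hardware normalization would do, and the seed and iteration counts are crude stand-ins for the lookup tables used in hardware.

```python
# Software sketch of the two iterative division schemes. Hardware
# implementations seed from lookup tables and fix the iteration count;
# here we use a trivial seed valid for divisors in (0, 2).

def nr_reciprocal(d, iterations=6):
    """Newton-Raphson: x <- x * (2 - d * x) converges quadratically to 1/d."""
    x = 1.0  # valid seed for d in (0, 2)
    for _ in range(iterations):
        x = x * (2.0 - d * x)
    return x

def goldschmidt_divide(n, d, iterations=6):
    """Goldschmidt: scale numerator and divisor by factors driving d -> 1.
    The ratio n/d is invariant, so n converges to the quotient."""
    for _ in range(iterations):
        f = 2.0 - d
        n, d = n * f, d * f
    return n
```

Both iterations converge quadratically; the hardware tradeoff discussed in the paper lies in how the multiplications and the seed generation map onto DSP slices.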
APA, Harvard, Vancouver, ISO, and other styles
39

Zapridou, Eleni, Ioannis Mytilinis, and Anastasia Ailamaki. "Dalton." Proceedings of the VLDB Endowment 16, no. 3 (November 2022): 491–504. http://dx.doi.org/10.14778/3570690.3570699.

Full text
Abstract:
To sustain the input rate of high-throughput streams, modern stream processing systems rely on parallel execution. However, skewed data yield imbalanced load assignments and create stragglers that hinder scalability. Deciding on a static partitioning for a given set of "hot" keys is not sufficient, as these keys are not known in advance and, even worse, the data distribution can change unpredictably. Existing algorithms either optimize for a specific distribution or, in order to adapt, assume a centralized partitioner that processes every incoming tuple and observes the whole workload. However, this is not realistic in a distributed environment, where multiple parallel upstream operators exist, as the centralized partitioner itself becomes the bottleneck and limits scalability. In this work, we propose Dalton: a lightweight, adaptive, yet scalable partitioning operator that relies on reinforcement learning. By memoizing state and dynamically keeping track of recent experience, Dalton: i) adjusts its policy at runtime and quickly adapts to the workload, ii) avoids redundant computations and minimizes the per-tuple partitioning overhead, and iii) efficiently scales out to multiple instances that learn cooperatively and converge to a joint policy. Our experiments indicate that Dalton scales regardless of the input data distribution and sustains 1.3X - 6.7X higher throughput than existing approaches.
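Dalton's reinforcement-learning policy does not fit in a short snippet, but the core idea of adaptive, skew-aware partitioning can be sketched with a simple frequency-based stand-in: keys detected as hot from recent counts are spread over several workers, while cold keys stay hash-partitioned. All names and thresholds here are illustrative assumptions, not Dalton's API.

```python
# Toy adaptive partitioner: hot keys (by recent frequency) are spread
# round-robin across workers; cold keys use stable hash partitioning.
# A frequency threshold stands in for Dalton's learned policy.
from collections import Counter

class AdaptivePartitioner:
    def __init__(self, n_workers, hot_threshold=3):
        self.n = n_workers
        self.hot_threshold = hot_threshold
        self.counts = Counter()  # memoized recent experience
        self.rr = 0              # round-robin cursor for hot keys

    def route(self, key):
        self.counts[key] += 1
        if self.counts[key] >= self.hot_threshold:
            self.rr = (self.rr + 1) % self.n  # spread the hot key
            return self.rr
        return hash(key) % self.n             # stable for cold keys
```

A real implementation would also age the counters and coordinate the policy across parallel partitioner instances, which is where the reinforcement learning comes in.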
APA, Harvard, Vancouver, ISO, and other styles
40

Koutsoukos, Dimitrios, Ingo Müller, Renato Marroquín, Ana Klimovic, and Gustavo Alonso. "Modularis." Proceedings of the VLDB Endowment 14, no. 13 (September 2021): 3308–21. http://dx.doi.org/10.14778/3484224.3484229.

Full text
Abstract:
The enormous quantity of data produced every day together with advances in data analytics has led to a proliferation of data management and analysis systems. Typically, these systems are built around highly specialized monolithic operators optimized for the underlying hardware. While effective in the short term, such an approach makes the operators cumbersome to port and adapt, which is increasingly required due to the speed at which algorithms and hardware evolve. To address this limitation, we present Modularis, an execution layer for data analytics based on sub-operators, i.e., composable building blocks resembling traditional database operators but at a finer granularity. To demonstrate the feasibility and advantages of our approach, we use Modularis to build a distributed query processing system supporting relational queries running on an RDMA cluster, a serverless cloud platform, and a smart storage engine. Modularis requires minimal code changes to execute queries across these three diverse hardware platforms, showing that the sub-operator approach reduces the amount and complexity of the code to maintain. In fact, changes in the platform affect only those sub-operators that depend on the underlying hardware (in our use cases, mainly the sub-operators related to network communication). We show the end-to-end performance of Modularis by comparing it with a framework for SQL processing (Presto), a commercial cluster database (SingleStore), as well as Query-as-a-Service systems (Athena, BigQuery). Modularis outperforms all these systems, proving that the design and architectural advantages of a modular design can be achieved without degrading performance. We also compare Modularis with a hand-optimized implementation of a join for RDMA clusters. We show that Modularis has the advantage of being easily extensible to a wider range of join variants and group-by queries, none of which are supported in the hand-tuned join.
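The sub-operator idea can be rendered as a toy pipeline: fine-grained, composable building blocks chained into a query, where only the hardware-dependent pieces would be swapped per platform. The function names below are illustrative, not Modularis APIs.

```python
# Toy composable sub-operators: each is a small generator stage, and a
# query is just a chain of them. In Modularis, only hardware-dependent
# stages (e.g. network exchange) change across platforms.

def scan(rows):               # sub-operator: produce tuples
    yield from rows

def filter_gt(it, col, v):    # sub-operator: predicate evaluation
    return (r for r in it if r[col] > v)

def project(it, cols):        # sub-operator: column selection
    return (tuple(r[c] for c in cols) for r in it)

rows = [(1, 10), (2, 25), (3, 40)]
result = list(project(filter_gt(scan(rows), 1, 20), [0]))
# result -> [(2,), (3,)]
```

The design point the paper makes is that such small stages are cheap to port and recompose, unlike a monolithic hand-fused operator.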
APA, Harvard, Vancouver, ISO, and other styles
41

Moroz, Serhii, Anatolii Tkachuk, Mykola Khvyshchun, Stanislav Prystupa, and Mykola Yevsiuk. "METHODS FOR ENSURING DATA SECURITY IN MOBILE STANDARDS." Informatyka, Automatyka, Pomiary w Gospodarce i Ochronie Środowiska 12, no. 1 (March 31, 2022): 4–9. http://dx.doi.org/10.35784/iapgos.2877.

Full text
Abstract:
The analysis of mobile communication standards is carried out, and the functional structure and interfaces of interaction between the structural elements of the cellular network are considered. To explain the principle of communication under the GSM standard, a block diagram of the mobile switching center (MSC), base station equipment (BSS), control and service center (MCC), and mobile stations (MS) is presented. The main algorithms for ensuring the confidentiality and security of mobile subscribers' data in different standards, as well as the vulnerabilities of information flows, are considered. In particular, the following dangerous types of attacks, to which mobile network subscribers are susceptible, have been identified: sniffing; leakage of personal data; leakage of geolocation data; spoofing; remote capture of the SIM card; execution of arbitrary code (RCE); and denial of service (DoS). It is established that a necessary function of the mobile network is subscriber identification, which is performed via the IMSI recorded in the subscriber's SIM card and the operator's HLR. To protect against spoofing, the network authenticates the subscriber before starting service. With subscriber identification, both the subscriber and the network operator are protected from fraudulent access. In addition, the user must be protected from eavesdropping, which is achieved by encrypting the data transmitted over the radio interface. Thus, user authentication in UMTS, as in the GSM network, is carried out using encryption with a shared key via a challenge-response protocol (the authenticating party sends a random number to the authenticated party, which encrypts it according to a certain algorithm using the shared key and returns the result).
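The challenge-response exchange described in the parenthetical can be sketched as follows. HMAC-SHA256 here is a stand-in for the operator-specific GSM/UMTS authentication algorithm (A3), and the key and function names are illustrative.

```python
# Minimal challenge-response sketch: the network issues a random
# challenge (RAND analogue), the SIM signs it with the shared key
# (Ki analogue), and the network verifies the response (SRES analogue).
# HMAC-SHA256 stands in for the operator-specific A3 algorithm.
import hashlib
import hmac
import os

SHARED_KEY = b"subscriber-secret-key"  # stored in both SIM and HLR

def sign_challenge(key, challenge):
    """Compute the signed response for a random challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

# Network side: issue a challenge and verify the subscriber's response.
rand = os.urandom(16)
sres = sign_challenge(SHARED_KEY, rand)  # computed on the SIM
ok = hmac.compare_digest(sres, sign_challenge(SHARED_KEY, rand))
```

Note that the shared key never crosses the radio interface; only the challenge and the signed response do.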
APA, Harvard, Vancouver, ISO, and other styles
42

P., Kiran, and Parameshachari B. D. "Resource-Optimized Selective Image Encryption of Medical Images Using Multiple Chaotic Systems." International Journal of e-Collaboration 18, no. 1 (January 1, 2022): 1–20. http://dx.doi.org/10.4018/ijec.304379.

Full text
Abstract:
Securing medical images has become a major concern in avoiding leaks of confidential data. This problem has motivated the development of many low-computational-complexity methods to encrypt medical images. In this research work, block-cipher-based region-of-interest medical image encryption with multiple maps is proposed. First, Region of Interest (ROI) regions and the Region of Background (ROB) are extracted with the help of the Laplacian edge detection operator. The important ROI regions are then permuted in a circular fashion with the help of the Arnold cat map and an angle value. The permuted ROI part is encrypted using the Duffing system, while unimportant regions are left unchanged. The advantage of the proposed work is that only the selected/important part is encrypted, which achieves fast execution speed and a reduction in computational complexity. The approach presented here enables the storage and transmission of medical image data within an open network. The results show that the security of the proposed method is much better than that of many chaotic encryption algorithms proposed in recent times.
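The Arnold cat map used for the ROI permutation is a standard construction: on an N x N block, each pixel at (x, y) moves to ((x + y) mod N, (x + 2y) mod N). Since the map matrix has determinant 1, the permutation is invertible and can be undone at decryption time. The sketch below shows one iteration; the paper's scheme additionally parameterizes the permutation with an angle value.

```python
# One Arnold cat map iteration over a square pixel grid. The mapping
# matrix [[1, 1], [1, 2]] has determinant 1, so the shuffle is a
# bijection on the grid and therefore reversible.

def arnold_cat(pixels):
    """Apply one Arnold cat map iteration to an n x n pixel grid."""
    n = len(pixels)
    out = [[0] * n for _ in range(n)]
    for x in range(n):
        for y in range(n):
            out[(x + y) % n][(x + 2 * y) % n] = pixels[x][y]
    return out
```

On any finite grid the map is periodic: iterating it enough times returns the original image, which is why schemes like this one pair it with a chaotic encryption stage rather than relying on the permutation alone.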
APA, Harvard, Vancouver, ISO, and other styles
43

Barry, Abdoul, Wantao Li, Juan A. Becerra, and Pere L. Gilabert. "Comparison of Feature Selection Techniques for Power Amplifier Behavioral Modeling and Digital Predistortion Linearization." Sensors 21, no. 17 (August 27, 2021): 5772. http://dx.doi.org/10.3390/s21175772.

Full text
Abstract:
The power amplifier (PA) is the most critical subsystem in terms of linearity and power efficiency. Digital predistortion (DPD) is commonly used to mitigate nonlinearities while the PA operates at levels close to saturation, where the device presents its highest power efficiency. Since the DPD is generally based on Volterra series models, its number of coefficients is high, producing ill-conditioned and over-fitted estimations. Recently, a plethora of techniques have been independently proposed for reducing their dimensionality. This paper is devoted to presenting a fair benchmark of the most relevant order reduction techniques in the literature, categorized as follows: (i) greedy pursuits, including Orthogonal Matching Pursuit (OMP), Doubly Orthogonal Matching Pursuit (DOMP), Subspace Pursuit (SP) and Random Forest (RF); (ii) regularization techniques, including ridge regression and least absolute shrinkage and selection operator (LASSO); (iii) heuristic local search methods, including hill climbing (HC) and dynamic model sizing (DMS); and (iv) global probabilistic optimization algorithms, including simulated annealing (SA), genetic algorithms (GA) and adaptive Lipschitz optimization (adaLIPO). The comparison is carried out in terms of modeling and linearization performance as well as runtime. The results show that greedy pursuits, particularly the DOMP, provide the best trade-off between execution time and linearization robustness against dimensionality reduction.
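The simplest member of the greedy-pursuit family benchmarked here, OMP, can be sketched in a few lines of NumPy (the study's best performer, DOMP, adds a second orthogonalization step). At each iteration the dictionary column most correlated with the residual joins the support, and the coefficients are re-fit by least squares.

```python
# Minimal Orthogonal Matching Pursuit (OMP) sketch: greedily grow the
# support by the column most correlated with the residual, then re-fit
# all selected coefficients by least squares.
import numpy as np

def omp(A, y, n_terms):
    """Select n_terms columns of A that greedily explain y."""
    residual, support = y.astype(float), []
    for _ in range(n_terms):
        corr = np.abs(A.T @ residual)
        corr[support] = 0.0  # never reselect a chosen column
        support.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    return sorted(support)
```

In the DPD setting, the columns of `A` would be the Volterra basis regressors and `n_terms` the reduced model order.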
APA, Harvard, Vancouver, ISO, and other styles
44

Vasilev, Vladimir S., and Alexander I. Legalov. "Loop-invariant Optimization in the Pifagor Language." Modeling and Analysis of Information Systems 25, no. 4 (August 27, 2018): 347–57. http://dx.doi.org/10.18255/1818-1015-2018-4-347-357.

Full text
Abstract:
The paper considers methods of program transformation equivalent to loop-invariant optimization, applied to the functional data-flow model implemented in the Pifagor programming language. Loop-invariant optimization in imperative programming languages amounts to moving out of the loop those computations that do not depend on variables changed within the loop. A feature of the functional data-flow parallel programming language Pifagor is the absence of explicitly specified cyclic computations (a loop operator). However, repeated calculations in this language can be specified recursively or through specific language constructs (parallel lists). Both mechanisms allow parallel execution. In the case of optimizing a recursive function, the repeated calculations are moved into an auxiliary function, with the main function computing only the invariant. When optimizing the invariant in computations over parallel lists, the calculation of the invariant moves from the function applied to the list items to the function containing the call. The paper provides a definition of "invariant" as applied to the Pifagor language, algorithms for its optimization, and examples of program source code with their graph representations (program dependence graphs) before and after optimization. The algorithm shown for computations over parallel lists is applicable only to the Pifagor language, because it rests upon specific data structures and the computational model of this language. However, the algorithm for transforming recursive functions may be applied to other programming languages.
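The recursive-function transformation described above can be illustrated in Python (Pifagor sources being niche): the invariant is computed once in the main function, and the recursion is carried by an auxiliary function that receives the invariant as an argument instead of recomputing it at every level. The function names and the invariant expression are illustrative.

```python
# Illustration of hoisting a loop invariant out of a recursion: the
# main function computes the invariant once, and the auxiliary
# recursive function receives it precomputed.
import math

def scale_all(xs, angle):
    # Invariant: depends only on `angle`, not on the recursion state.
    factor = math.cos(angle) ** 2 + 1.0
    return _scale_rec(xs, factor)

def _scale_rec(xs, factor):
    """Auxiliary recursive function; the invariant arrives as an argument."""
    if not xs:
        return []
    return [xs[0] * factor] + _scale_rec(xs[1:], factor)
```

Without the split, `factor` would be recomputed at every recursion level, which is exactly the nearly-sequential redundancy the paper's transformation removes.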
APA, Harvard, Vancouver, ISO, and other styles
45

Andrabi, Umer Mukhtar, M. G. Kanishcheva, and S. N. Stepanov. "OBSERVATION SYSTEM RESOURCE PLANNING IN PRESENCE OF ACCESS CONTROL BASED ON VOLUME OF RESOURCE OCCUPIED BY TRAFFIC FLOWS." T-Comm 16, no. 8 (2022): 54–62. http://dx.doi.org/10.36724/2072-8735-2022-16-8-54-62.

Full text
Abstract:
Due to various recent improvements in cellular technology, communication networks now offer information flows with a wide range of transmission speeds and service quality characteristics. These characteristics are especially noticeable in intelligent networks, particularly when IoT technology is used. Regulating and transmitting such massive volumes of data with high quality presents serious issues for virtual operators in the communications sector that use cellular network infrastructure to provide video surveillance services to their consumers. Unfortunately, no single approach exists that enables high-quality maintenance of incoming traffic flows and the allocation of transmission resources in diverse networks. One way to address these issues is mathematical modelling, which takes into consideration the unique characteristics of the traffic flows that arise and are accepted for service. Our work addresses this problem by designing a resource allocation strategy for an isolated LTE network cell whose radio resources are shared among heterogeneous traffic flows. The traffic sources considered are an operator surveillance system's video cameras, which use cellular network infrastructure to deliver data to analytical centers. A mathematical model is used to study the resource-sharing process, taking into consideration the priority of real-time traffic, the elasticity of file transfers, and access restriction for all flows based on the quantity of resource held by each flow. Algorithms for evaluating the service characteristics of requests are constructed, and the dependence of these characteristics on the access restriction is investigated. 
Using the developed model, a scenario of dynamic distribution of the access node resource among incoming sessions has been developed, which creates conditions for differentiated servicing of heterogeneous traffic flows based on access restrictions and the resource load of each of the considered flows.
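The access-restriction idea, admitting a new session only while the cell has free capacity and the requesting flow's occupied share stays under its cap, can be sketched as a toy admission controller. The class, units, and thresholds below are illustrative assumptions, not the paper's model.

```python
# Toy admission control based on the volume of resource occupied by
# each traffic flow: a session is admitted only if the cell has free
# capacity AND the flow's own share stays under its per-flow cap.

class Cell:
    def __init__(self, capacity, per_flow_cap):
        self.capacity = capacity          # total resource units
        self.per_flow_cap = per_flow_cap  # max units per flow type
        self.used = {}                    # flow -> occupied units

    def admit(self, flow, demand):
        total = sum(self.used.values())
        share = self.used.get(flow, 0)
        if total + demand > self.capacity or share + demand > self.per_flow_cap:
            return False                  # blocked by access control
        self.used[flow] = share + demand
        return True
```

The paper's model additionally handles real-time priority and the elasticity of file transfers; this sketch only shows the per-flow restriction mechanism.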
APA, Harvard, Vancouver, ISO, and other styles
46

Selvarani, P., and N. Malarvizhi. "Secure data in cloud with multimodal key generation." International Journal of Engineering & Technology 7, no. 1.7 (February 5, 2018): 27. http://dx.doi.org/10.14419/ijet.v7i1.7.9382.

Full text
Abstract:
Data security is a major problem in cloud computing. To overcome it, the proposed technique provides effective data storage using biometric-based cryptographic authentication to support user authentication in the cloud environment. For user authentication, the iris and fingerprint are considered. Initially, feature values are extracted from the iris and fingerprint using the local binary pattern and minutiae extraction, respectively. The local binary pattern operator works with the eight neighbours of a pixel, using the value of the center pixel as a threshold. Minutiae points are the major features of a fingerprint image and are used in fingerprint matching; they determine the uniqueness of a fingerprint image. On this basis, the proposed feature values are extracted from the iris and fingerprint images. To improve security, the suggested technique uses optimal features, selected via a hybrid of particle swarm optimization and a genetic algorithm (HPSOGA). Particle swarm optimization (PSO) is a population-based stochastic optimization technique: the system is initialized with a population of random solutions and searches for optima by updating generations, with potential solutions, called particles, flying through the problem space by following the current optimum particles. Genetic algorithms (GAs) are adaptive heuristic search algorithms based on the evolutionary ideas of natural selection and genetics. In the proposed method, these two optimization algorithms are hybridized for greater security. The suggested technique selects the optimal features via the hybrid optimization, and the optimal features are then used to encrypt the input data. For encryption and decryption, the proposed technique uses the Triple DES algorithm. Finally, the encrypted data is stored in the cloud. 
The performance of the proposed technique is evaluated in terms of encryption and decryption time, memory utilization, and overall execution time. The proposed data storage with biometric-based authentication is implemented with the help of a cloud simulator on the Java platform.
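The local binary pattern operator described above, thresholding the eight neighbours of a pixel against the centre value and packing the results into one byte, can be sketched directly. The clockwise neighbour ordering below is one common convention; implementations vary.

```python
# Local binary pattern (LBP) code for one pixel: each of the eight
# neighbours contributes one bit, set when the neighbour is >= the
# centre value. Neighbour ordering is one common clockwise convention.

def lbp_code(img, x, y):
    """8-bit LBP code for the interior pixel at (x, y) of a 2-D grid."""
    center = img[y][x]
    neighbours = [(-1, -1), (0, -1), (1, -1), (1, 0),
                  (1, 1), (0, 1), (-1, 1), (-1, 0)]  # clockwise offsets
    code = 0
    for bit, (dx, dy) in enumerate(neighbours):
        if img[y + dy][x + dx] >= center:
            code |= 1 << bit
    return code
```

Histograms of these per-pixel codes over image regions form the iris feature vector that the optimization stage then prunes.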
APA, Harvard, Vancouver, ISO, and other styles
47

Tavarageri, Sanket, Alexander Heinecke, Sasikanth Avancha, Bharat Kaul, Gagandeep Goyal, and Ramakrishna Upadrasta. "PolyDL." ACM Transactions on Architecture and Code Optimization 18, no. 1 (January 21, 2021): 1–27. http://dx.doi.org/10.1145/3433103.

Full text
Abstract:
Deep Neural Networks (DNNs) have revolutionized many aspects of our lives. The use of DNNs is becoming ubiquitous, including in software for image recognition, speech recognition, speech synthesis, and language translation, to name a few. The training of DNN architectures, however, is computationally expensive. Once the model is created, its use in the intended application (the inference task) is computationally heavy too, and the inference needs to be fast for real-time use. For obtaining high performance today, the norm is code for Deep Learning (DL) primitives optimized for specific architectures by expert programmers and exposed via libraries. However, given the constant emergence of new DNN architectures, creating hand-optimized code is expensive, slow, and not scalable. To address this performance-productivity challenge, in this article we present compiler algorithms to automatically generate high-performance implementations of DL primitives that closely match the performance of hand-optimized libraries. We develop novel data-reuse analysis algorithms using the polyhedral model to derive efficient execution schedules automatically. In addition, because most DL primitives use some variant of matrix multiplication at their core, we develop a flexible framework where it is possible to plug in library implementations in lieu of a subset of the loops. We show that such a hybrid compiler-plus-minimal-library approach results in state-of-the-art performance. We also develop compiler algorithms to perform operator fusions that reduce data movement through the memory hierarchy of the computer system. Using Convolutional Neural Network (CNN) models and matrix multiplication operations, we demonstrate that our approach automatically creates high-performing DNN building blocks whose performance matches that of hand-crafted kernels in Intel's oneDNN library on high-end CPUs. At the same time, our techniques take only a fraction of the time (1/20 or less) of AutoTVM, a deep learning auto-tuner, to create optimized implementations.
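The operator-fusion idea can be illustrated with a toy example: instead of materializing an intermediate array between two elementwise operators, the fused version applies both per element in a single pass, halving the memory traffic. Real fusion happens inside the compiler's loop nests, not at this level; the functions are illustrative.

```python
# Toy illustration of operator fusion. The unfused version writes an
# intermediate buffer between the two operators; the fused version
# applies both in one pass, reducing data movement.

def relu_then_scale_unfused(xs, s):
    tmp = [max(x, 0.0) for x in xs]       # intermediate buffer materialized
    return [t * s for t in tmp]

def relu_then_scale_fused(xs, s):
    return [max(x, 0.0) * s for x in xs]  # one pass, no intermediate
```

Both compute the same result; the win is purely in memory-hierarchy traffic, which is exactly what the article's fusion algorithms target.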
APA, Harvard, Vancouver, ISO, and other styles
48

Triantafyllis, Nikolaos, Ioannis E. Venetis, Ioannis Fountoulakis, Erion-Vasilis Pikoulis, Efthimios Sokos, and Christos P. Evangelidis. "Gisola: A High-Performance Computing Application for Real-Time Moment Tensor Inversion." Seismological Research Letters 93, no. 2A (December 22, 2021): 957–66. http://dx.doi.org/10.1785/0220210153.

Full text
Abstract:
Automatic moment tensor (MT) determination is essential for real-time seismological applications. In this article, Gisola, a highly evolved software package for MT determination oriented toward high-performance computing, is presented. The program employs enhanced algorithms for waveform data selection via quality metrics, such as signal-to-noise ratio, waveform clipping, data and metadata inconsistency, long-period disturbances, and station evaluation based on power spectral density measurements, in parallel execution. The inversion code, derived from ISOLated Asperities, an extensively used manual MT retrieval utility, has been improved by exploiting the performance efficiency of multiprocessing on the CPU and GPU. Gisola offers a 4D spatiotemporal adjustable MT grid search and interconnection to multiple data resources: the International Federation of Digital Seismograph Networks Web Services (FDSNWS), the SeedLink protocol, and the SeisComP Data Structure standard. The new software publishes its results in various formats such as QuakeML and SC3ML, and includes a website suite for reviewing MT solutions, an e-mail notification system, and an integrated FDSNWS-event service for distributing MT solutions. Moreover, it supports user-defined scripts, such as dispatching the MT solution to SeisComP. The operator has full control of all calculation aspects through an extensive and adjustable configuration. The MT quality performance, measured against 531 manual MT solutions in Greece between 2012 and 2021, proved to be highly efficient.
APA, Harvard, Vancouver, ISO, and other styles
49

Yang, Weiwei, Jing Yang, and Haifeng Song. "The Improved SIFT Algorithm Based on Rectangular Operator and Its Parallel Implementation." Journal of Information Technology Research 12, no. 1 (January 2019): 1–17. http://dx.doi.org/10.4018/jitr.2019010101.

Full text
Abstract:
The SIFT algorithm is one of the most widely used algorithms based on local feature extraction. However, it cannot meet the requirements of real-time applications due to its high time complexity and low execution efficiency. To address these drawbacks, the authors optimized the SIFT algorithm by using the Gaussian convolution scale of an adaptive scale space. The authors also describe the execution of the improved SIFT algorithm on the MapReduce programming model and compare its performance in stand-alone and cluster environments. The experimental results showed that, compared to the traditional algorithm, the improved algorithm has high execution efficiency, good speedup, and scalability, and is suitable for processing massive amounts of image data.
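The parallel execution model used here can be rendered as a toy map/reduce: the map phase extracts features independently per image (a stub stands in for the improved SIFT extractor), and the reduce phase merges the per-image feature lists. Hadoop/MapReduce plumbing is omitted and all names are illustrative.

```python
# Toy MapReduce-style feature extraction: map tasks run independently
# per image, and the reduce step merges their outputs. A brightness
# threshold stands in for the actual SIFT keypoint detector.

def extract_features(image):
    """Map task: stub feature extractor for one image partition."""
    return [(image["id"], px) for px in image["pixels"] if px > 128]

def merge_features(parts):
    """Reduce task: merge feature lists from all map tasks."""
    return [f for part in parts for f in part]

images = [{"id": 0, "pixels": [10, 200]}, {"id": 1, "pixels": [150, 90]}]
features = merge_features([extract_features(img) for img in images])
# features -> [(0, 200), (1, 150)]
```

Because each map task touches only its own image, the job scales with the number of cluster nodes, which is the speedup the experiments measure.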
APA, Harvard, Vancouver, ISO, and other styles
50

Bannour, Ahmed, Ahmed Harbaoui, and Fawaz Alsolami. "Connected Objects Geo-Localization Based on SS-RSRP of 5G Networks." Electronics 10, no. 22 (November 11, 2021): 2750. http://dx.doi.org/10.3390/electronics10222750.

Full text
Abstract:
The Global Positioning System (GPS) is not the only way to solve the geo-localization problem for connected objects; it is also possible to use the mobile network infrastructure to geo-locate objects connected to the network, using antennas and signals designed for voice and data transfer, such as the 5th generation (5G) network. 5G is considered the least expensive solution because no specific equipment needs to be set up. As long as the object is in an area covered by the network, it connects to the nearest 5G micro-cell (MC), and through an exchange of signals with the MC node we can locate the object. Currently, this localization is very fast (under 5 s) but not very precise, because it depends on the number of MC antennas of the operator in question and their distances. This paper presents a novel technique to geo-locate connected objects in a covered 5G area. We exploit the 5G SS-RSRP, used for signal quality measurement, to estimate the distance between two Connected Objects (COs) in motion in a dense urban area. The overall goal is to present a new concept relying on 5G SS-RSRP signalling. The proposed solution takes into consideration both the deterministic and the stochastic effects of the received signals, which were not treated in previous works. The accuracy remains optimal even as the distance approaches one meter, which previous works also did not achieve. Our method can also be deployed in the upcoming 5G network since it relies on 5G signals themselves. This work and that of Wang are both based on RSRP and have comparable theoretical complexities, and therefore comparable theoretical execution times; however, Wang's approach requires a huge amount of data to obtain reliable learning, which makes real-time operation difficult. The use of RSRP and the elimination of the learning phase give our work a better chance of achieving the desired performance. 
Numerical results show the appropriateness of the proposed algorithms and a good location accuracy of around one meter. The Cramer-Rao lower bound derivations show the robustness of the proposed estimator and consolidate the work.
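The deterministic part of RSRP-based ranging can be sketched with the textbook log-distance path-loss model: invert the received power to recover distance. This is a stand-in for the paper's estimator, which additionally models the stochastic component of the channel; the reference power and path-loss exponent below are assumed values for a dense urban area.

```python
# Distance estimation from a received-power measurement such as
# SS-RSRP, via the log-distance path-loss model
# P(d) = p0 - 10 * n * log10(d), inverted for d. The parameters
# (power at 1 m, path-loss exponent) are illustrative assumptions.

def distance_from_rsrp(rsrp_dbm, p0_dbm=-40.0, n=3.0):
    """Estimate distance in meters from a received power in dBm."""
    return 10 ** ((p0_dbm - rsrp_dbm) / (10.0 * n))
```

With these parameters, a reading of -40 dBm maps to 1 m and -70 dBm to 10 m; in practice shadowing noise makes a single reading unreliable, which is why the paper models the stochastic effect explicitly.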
APA, Harvard, Vancouver, ISO, and other styles