Journal articles on the topic 'Computer algorithms'

Consult the top 50 journal articles for your research on the topic 'Computer algorithms.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of each academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Ataeva, Gulsina Isroilovna, and Lola Dzhalolovna Yodgorova. "METHODS AND ALGORITHMS OF COMPUTER GRAPHICS." Scientific Reports of Bukhara State University 4, no. 1 (February 26, 2020): 43–47. http://dx.doi.org/10.52297/2181-1466/2020/4/1/3.

Abstract:
The article considers methods and algorithms of computer graphics: the implementation of transformations of graphic objects through translation, scaling, and rotation operations, and the main types of geometric models. The methods of computer graphics covered include converting graphic objects, rasterizing (scan-converting) lines, window selection (clipping), hidden-line removal, projection, and image painting.
2

Xu, Zheng Guang, Chen Chen, and Xu Hong Liu. "An Efficient View-Point Invariant Detector and Descriptor." Advanced Materials Research 659 (January 2013): 143–48. http://dx.doi.org/10.4028/www.scientific.net/amr.659.143.

Abstract:
Many computer vision applications need keypoint correspondence between images taken under different viewing conditions. Traditional algorithms generally target either strong invariance to affine transformation or speed of computation. The widespread use of computer vision algorithms on handheld and embedded devices with limited memory and computing capability has set a new target: making descriptors faster to compute and more compact while remaining robust to affine transformation and noise. To address the whole pipeline, this paper covers keypoint detection, description, and matching. Binary descriptors are computed by comparing the intensities of pairs of sampling points in image patches, and they are matched by Hamming distance using an SSE 4.2-optimized popcount. Experimental results show that our algorithm is fast to compute, uses less memory, and is invariant to viewpoint change, blur, brightness change, and JPEG compression.
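
The descriptor-matching step described above reduces to a popcount of the XOR of two bit strings. A minimal Python sketch of that idea follows; the descriptor length, packing into integers, and the brute-force matcher are illustrative assumptions, with int.bit_count standing in for the SSE popcount.

```python
# Minimal sketch of binary-descriptor matching by Hamming distance.
# Descriptors are assumed to be packed into Python ints (e.g. 256 bits each);
# a real pipeline would fill those bits with pairwise intensity comparisons.

def hamming(a: int, b: int) -> int:
    """Number of differing bits; int.bit_count() (Python 3.10+) plays the role of popcount."""
    return (a ^ b).bit_count()

def match(query: list[int], train: list[int], max_dist: int = 64) -> list[tuple[int, int]]:
    """Brute-force nearest-neighbour matching under Hamming distance."""
    matches = []
    for qi, q in enumerate(query):
        ti, d = min(((i, hamming(q, t)) for i, t in enumerate(train)), key=lambda x: x[1])
        if d <= max_dist:
            matches.append((qi, ti))
    return matches
```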
3

Cropper, Andrew. "The Automatic Computer Scientist." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (June 26, 2023): 15434. http://dx.doi.org/10.1609/aaai.v37i13.26801.

Abstract:
Algorithms are ubiquitous: they track our sleep, help us find cheap flights, and even help us see black holes. However, designing novel algorithms is extremely difficult, and we do not have efficient algorithms for many fundamental problems. The goal of my research is to accelerate algorithm discovery by building an automatic computer scientist. To work towards this goal, my research focuses on inductive logic programming, a form of machine learning in which my collaborators and I have demonstrated major advances in automated algorithm discovery over the past five years. In this talk and paper, I survey these advances.
4

Moosakhah, Fatemeh, and Amir Massoud Bidgoli. "Congestion Control in Computer Networks with a New Hybrid Intelligent Algorithm." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 13, no. 8 (August 23, 2014): 4688–706. http://dx.doi.org/10.24297/ijct.v13i8.7068.

Abstract:
The invention of computer networks made it possible to transfer data from one computer to another, but as the number of communicating computers grew while the bandwidth of the shared communication channel remained limited, a phenomenon called congestion emerged, in which some data packets are dropped and never reach their destination. Different algorithms have been proposed to overcome congestion; they fall into two general groups: (1) flow-based algorithms and (2) class-based algorithms. In the present study, using a class-based algorithm whose control is optimized by fuzzy logic and the new Cuckoo algorithm, we increased the number of packets that reach the destination and considerably reduced the number of packets dropped during congestion. Simulation results indicate a great improvement in efficiency.
5

Pelter, Michele M., and Mary G. Carey. "ECG Computer Algorithms." American Journal of Critical Care 17, no. 6 (November 1, 2008): 581–82. http://dx.doi.org/10.4037/ajcc2008.17.6.581.

6

Kaltofen, E. "Computer Algebra Algorithms." Annual Review of Computer Science 2, no. 1 (June 1987): 91–118. http://dx.doi.org/10.1146/annurev.cs.02.060187.000515.

7

Rakhimov, Bakhtiyar Saidovich, Feroza Bakhtiyarovna Rakhimova, Sabokhat Kabulovna Sobirova, Furkat Odilbekovich Kuryazov, and Dilnoza Boltabaevna Abdirimova. "Review And Analysis Of Computer Vision Algorithms." American Journal of Applied sciences 03, no. 05 (May 31, 2021): 245–50. http://dx.doi.org/10.37547/tajas/volume03issue05-39.

Abstract:
As a scientific discipline, computer vision refers to the theories and technologies for creating artificial systems that obtain information from images. Although the discipline is quite young, its results have penetrated almost all areas of life. Computer vision is closely related to practical fields such as image processing, whose input is two-dimensional images obtained from a camera or created artificially. Such image transformation is aimed at noise suppression, filtering, color correction, and image analysis, which makes it possible to obtain specific information directly from the processed image, such as the locations of objects, keypoints, and segments.
8

Schlingemann, D. "Cluster states, algorithms and graphs." Quantum Information and Computation 4, no. 4 (July 2004): 287–324. http://dx.doi.org/10.26421/qic4.4-4.

Abstract:
The present paper is concerned with the concept of the one-way quantum computer beyond binary systems and its relation to the concept of stabilizer quantum codes. This relation is exploited to analyze a particular class of quantum algorithms, called graph algorithms, which correspond in the binary case to the Clifford-group part of a network and which can be implemented efficiently on a one-way quantum computer. These algorithms can be "completely solved" in the sense that the manipulation of quantum states in each step can be computed explicitly. Graph algorithms are precisely those which implement encoding schemes for graph codes. Starting from a given initial graph, which represents the underlying resource of multipartite entanglement, each step of the algorithm corresponds to an explicit transformation of the graph.
9

Handayani, Dwipa, and Abrar Hiswara. "KAMUS ISTILAH ILMU KOMPUTER DENGAN ALGORITMA BOYER MOORE BERBASIS WEB." Jurnal Informatika 19, no. 2 (December 26, 2019): 90–97. http://dx.doi.org/10.30873/ji.v19i2.1519.

Abstract:
A dictionary is a reference book containing words and phrases, usually arranged in alphabetical order together with explanations of their meaning, usage, and translation, that helps readers recognize new terms. Computer science has many specific terms related to computers, so a dictionary of computer terms is needed; existing dictionaries are still conventional, making their use ineffective and inefficient. The application was designed and built around an algorithm, a sequence of logical, systematically arranged steps for solving a problem. Search algorithms continue to develop day by day. The Boyer-Moore algorithm is considered one of the best-performing search algorithms; it matches strings from right to left. With this web-based dictionary, users are expected to obtain information quickly, without limitations of space and time. Keywords: Boyer-Moore algorithm, computer science, glossary of terms, web.
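
For readers unfamiliar with the right-to-left matching idea summarized above, here is a minimal sketch of Boyer-Moore using only the bad-character rule; the full algorithm also uses the good-suffix rule, which is omitted here, and this is a generic illustration rather than code from the article.

```python
def boyer_moore_search(text: str, pattern: str) -> list[int]:
    """Return start indices of pattern in text using the bad-character rule only."""
    if not pattern:
        return []
    last = {ch: i for i, ch in enumerate(pattern)}  # rightmost position of each character
    m, n = len(pattern), len(text)
    hits, s = [], 0
    while s <= n - m:
        j = m - 1
        while j >= 0 and pattern[j] == text[s + j]:  # compare right to left
            j -= 1
        if j < 0:
            hits.append(s)
            s += 1                                   # simple shift after a full match
        else:
            # shift so the mismatched text character lines up with its rightmost
            # occurrence in the pattern (or moves past it if it does not occur)
            s += max(1, j - last.get(text[s + j], -1))
    return hits
```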
10

Bunin, Y. V., E. V. Vakulik, R. N. Mikhaylusov, V. V. Negoduyko, K. S. Smelyakov, and O. V. Yasinsky. "Estimation of lung standing size with the application of computer vision algorithms." Experimental and Clinical Medicine 89, no. 4 (December 17, 2020): 87–94. http://dx.doi.org/10.35339/ekm.2020.89.04.13.

Abstract:
Evaluation of spiral computed tomography data is important for improving the diagnosis of gunshot wounds and the development of further surgical tactics. The aim of the work is to improve the diagnosis of foreign bodies in the lungs by using computer vision algorithms. Image gradation correction, interval segmentation, threshold segmentation, a three-dimensional wave method, and the principal components method are used as the computer vision toolkit. Using these computer vision algorithms makes it possible to clearly determine the size of a foreign body in the lung with an error of 6.8 to 7.2%, which is important for in-depth diagnosis and the development of further surgical tactics. Computer vision techniques increase the level of detail with which foreign bodies in the lungs are resolved and offer significant prospects for in-depth processing of spiral computed tomography data. Keywords: computer vision, spiral computed tomography, lungs, foreign bodies.
11

Singh, Varun, Varun Sharma, and Vasu Bachchas. "Sudoku Solving Using Quantum Computer." International Journal for Research in Applied Science and Engineering Technology 11, no. 2 (February 28, 2023): 622–29. http://dx.doi.org/10.22214/ijraset.2023.49094.

Abstract:
We use the abilities of quantum computing, such as superposition and entanglement, to solve Sudoku. In recent years, quantum computers have shown promise as a new technology for solving complex problems in various fields, including optimization and cryptography. In this paper, we investigate the potential of quantum computers for solving Sudoku puzzles. We present a quantum algorithm for solving Sudoku puzzles and compare its performance to classical algorithms. Our results show that the quantum algorithm outperforms classical algorithms in terms of both speed and accuracy and provides a new tool for solving Sudoku puzzles efficiently. Additionally, we discuss the implications of our results for the development of quantum algorithms for solving other combinatorial problems.
12

STEWART, IAIN A. "ON TWO APPROXIMATION ALGORITHMS FOR THE CLIQUE PROBLEM." International Journal of Foundations of Computer Science 04, no. 02 (June 1993): 117–33. http://dx.doi.org/10.1142/s0129054193000080.

Abstract:
We look at well-known polynomial-time approximation algorithms for the optimization problem MAX-CLIQUE (“find the size of the largest clique in a graph”) with regard to how easy it is to compute the actual cliques yielded by these approximation algorithms. We show that even for two “pretty useless” deterministic polynomial-time approximation algorithms, it is unlikely that the resulting clique can be computed efficiently in parallel. We also show that for each non-deterministic algorithm, it is unlikely that there is some deterministic polynomial-time algorithm that decides whether any given vertex appears in some clique yielded by that nondeterministic algorithm.
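
As a generic illustration of what a deterministic polynomial-time clique heuristic looks like (this is not one of the two approximation algorithms analyzed in the paper, just the simplest greedy kind):

```python
def greedy_clique(adj: dict[str, set[str]]) -> set[str]:
    """Greedy heuristic: repeatedly add the highest-degree vertex that remains
    adjacent to everything already chosen. Polynomial time, no quality guarantee."""
    clique: set[str] = set()
    candidates = set(adj)
    while candidates:
        v = max(candidates, key=lambda u: len(adj[u]))
        clique.add(v)
        candidates = {u for u in candidates if u != v and u in adj[v]}
    return clique

graph = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
print(greedy_clique(graph))   # a maximal clique, here {'a', 'b', 'c'}
```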
13

Figueiredo, Marco A., Clay S. Gloster, Mark Stephens, Corey A. Graves, and Mouna Nakkar. "Implementation of Multispectral Image Classification on a Remote Adaptive Computer." VLSI Design 10, no. 3 (January 1, 2000): 307–19. http://dx.doi.org/10.1155/2000/31983.

Abstract:
As the demand for higher performance computers for the processing of remote sensing science algorithms increases, the need to investigate new computing paradigms is justified. Field Programmable Gate Arrays enable the implementation of algorithms at the hardware gate level, leading to orders of magnitude performance increase over microprocessor based systems. The automatic classification of spaceborne multispectral images is an example of a computation intensive application that can benefit from implementation on an FPGA-based custom computing machine (adaptive or reconfigurable computer). A probabilistic neural network is used here to classify pixels of a multispectral LANDSAT-2 image. The implementation described utilizes Java client/server application programs to access the adaptive computer from a remote site. Results verify that a remote hardware version of the algorithm (implemented on an adaptive computer) is significantly faster than a local software version of the same algorithm (implemented on a typical general-purpose computer).
14

He, Bo. "Fast Distributed Algorithm of Mining Global Frequent Itemsets." Advanced Materials Research 219-220 (March 2011): 191–94. http://dx.doi.org/10.4028/www.scientific.net/amr.219-220.191.

Abstract:
Most distributed algorithms for mining global frequent itemsets work on a net-structured network and adopt an Apriori-like algorithm, which leads to two problems: a large number of candidate itemsets and heavy communication traffic. Aiming at these problems, this paper proposes a fast distributed algorithm for mining global frequent itemsets, the FDMGFI algorithm, which designates a centre node. In FDMGFI, the computer nodes compute local frequent itemsets independently with the FP-growth algorithm; the centre node then exchanges data with the other nodes and combines the results, and finally the global frequent itemsets are obtained. FDMGFI requires far less communication traffic thanks to its top-down and bottom-up searching strategies. Theoretical analysis and experimental results suggest that the FDMGFI algorithm is fast and effective.
15

Popov, Oleksandr, and Oleksiy Chystiakov. "On the Efficiency of Algorithms with Multi-level Parallelism." Physico-mathematical modelling and informational technologies, no. 33 (September 5, 2021): 133–37. http://dx.doi.org/10.15407/fmmit2021.33.133.

Abstract:
The paper investigates the efficiency of algorithms for solving computational mathematics problems that use a multilevel model of parallel computing on heterogeneous computer systems. A methodology for estimating the acceleration of algorithms for computers using a multilevel model of parallel computing is proposed. As an example, the parallel algorithm of the iteration method on a subspace for solving the generalized algebraic problem of eigenvalues of symmetric positive definite matrices of sparse structure is considered. For the presented algorithms, estimates of acceleration coefficients and efficiency were obtained on computers of hybrid architecture using graphics accelerators, on multi-core computers with shared memory and multi-node computers of MIMD-architecture.
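
The acceleration (speedup) and efficiency coefficients estimated in the paper are conventionally defined as follows; these are the standard definitions, stated here for the reader rather than taken from the paper, whose contribution is how they are estimated for a multilevel parallel model.

```latex
S_p = \frac{T_1}{T_p}, \qquad E_p = \frac{S_p}{p}
```

Here T_1 is the running time of the sequential algorithm, T_p the running time on p processors; the multilevel model refines how T_p is estimated across the different levels of parallelism (multi-node, shared-memory, and GPU).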
16

García-Sánchez, Pedro A., Christopher O’Neill, and Gautam Webb. "The computation of factorization invariants for affine semigroups." Journal of Algebra and Its Applications 18, no. 01 (January 2019): 1950019. http://dx.doi.org/10.1142/s0219498819500191.

Abstract:
We present several new algorithms for computing factorization invariant values over affine semigroups. In particular, we give (i) the first known algorithm to compute the delta set of any affine semigroup, (ii) an improved method of computing the tame degree of an affine semigroup, and (iii) a dynamic algorithm to compute catenary degrees of affine semigroup elements. Our algorithms rely on theoretical results from combinatorial commutative algebra involving Gröbner bases, Hilbert bases, and other standard techniques. Implementation in the computer algebra system GAP is discussed.
17

Jiang, Dazhi, and Zhun Fan. "The Algorithm for Algorithms: An Evolutionary Algorithm Based on Automatic Designing of Genetic Operators." Mathematical Problems in Engineering 2015 (2015): 1–15. http://dx.doi.org/10.1155/2015/474805.

Abstract:
At present there is a wide range of evolutionary algorithms available to researchers and practitioners. Despite the great diversity of these algorithms, virtually all of them share one feature: they have been designed manually. A fundamental question is "are there any algorithms that can design evolutionary algorithms automatically?" A more complete formulation of the question is "can a computer construct an algorithm which will generate algorithms according to the requirements of a problem?" In this paper, a novel evolutionary algorithm based on the automatic design of genetic operators is presented to address these questions. The resulting algorithm not only explores solutions in the problem space, as most traditional evolutionary algorithms do, but also automatically generates genetic operators in the operator space. To verify the performance of the proposed algorithm, comprehensive experiments on 23 well-known benchmark optimization problems are conducted. The results show that the proposed algorithm can outperform a standard differential evolution algorithm in terms of convergence speed and solution accuracy, which shows that an algorithm designed automatically by a computer can compete with algorithms designed by human beings.
18

Arguello, F. "Quantum wavelet transforms of any order." Quantum Information and Computation 9, no. 5&6 (May 2009): 414–22. http://dx.doi.org/10.26421/qic9.5-6-5.

Abstract:
Many classical algorithms are known to compute wavelet transforms efficiently. However, those classical algorithms cannot be directly translated into quantum algorithms. Recently, efficient and complete quantum algorithms for two representative wavelet transforms (the quantum Haar transform and the quantum Daubechies transform of fourth order) have been proposed. In this paper, we generalize these algorithms so that they can be applied to Daubechies wavelet kernels of any order. Specifically, we develop a method that efficiently factorizes those kernels. The factorization is compatible with the existing pyramidal and packet quantum wavelet algorithms. All steps of the algorithm are unitary and easily implementable on a quantum computer.
19

Wadhai, Prajwal Ashok. "Algolizer Using ReactJS." INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 04 (April 17, 2024): 1–5. http://dx.doi.org/10.55041/ijsrem30733.

Abstract:
The Algorithm Visualizer Project is an interactive and educational tool designed to illustrate various algorithms' functionality and efficiency through visual representations. Algorithms are fundamental to computer science, but their abstract nature can be challenging to comprehend. This project aims to bridge that gap by providing a user-friendly interface that visually demonstrates algorithms in action. The visualizer offers a platform where users can select from a range of algorithms, such as sorting (e.g., Bubble Sort, Merge Sort). Each algorithm is showcased step-by-step, allowing users to observe how data structures evolve and how the algorithms operate on them. Through dynamic visualizations, users can track the algorithm's progress, see how data is manipulated, and understand the underlying logic behind each step. Additionally, the tool provides options for adjusting parameters, such as input size or speed, enabling users to experiment with different scenarios and grasp the impact on algorithm performance. This project not only serves as a learning resource for students studying computer science and programming but also appeals to enthusiasts seeking a deeper understanding of algorithms. By offering an intuitive and engaging visual representation, the Algorithm Visualizer Project aims to make complex algorithms accessible and comprehensible to a wider audience.
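
A visualizer of this kind needs each algorithm to expose its intermediate states. A minimal illustration of one way to do that (a generic Python sketch, not the project's actual ReactJS code) is a sorting routine written as a generator that yields a snapshot after every swap, which a front end can then animate:

```python
from typing import Iterator

def bubble_sort_steps(values: list[int]) -> Iterator[list[int]]:
    """Yield a copy of the array after every swap so a UI can animate each step."""
    a = list(values)
    for end in range(len(a) - 1, 0, -1):
        for i in range(end):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                yield list(a)

for frame in bubble_sort_steps([5, 1, 4, 2]):
    print(frame)   # [1, 5, 4, 2], [1, 4, 5, 2], [1, 4, 2, 5], [1, 2, 4, 5]
```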
20

SOBHY, MOHAMED I., and ALAA-EL-DIN SHEHATA. "SECURE COMPUTER COMMUNICATION USING CHAOTIC ALGORITHMS." International Journal of Bifurcation and Chaos 10, no. 12 (December 2000): 2831–39. http://dx.doi.org/10.1142/s021812740000181x.

Abstract:
In this paper the application of chaotic algorithms to sending computer messages is described. The communication is achieved through email, although other transmission media can also be used. The algorithm has a degree of security many orders of magnitude higher than systems based on physical electronic circuitry. Text, image, and recorded voice messages can all be transmitted. The algorithm can be used for computer communication and for secure databases.
21

PREVE, NIKOLAOS P., and EMMANUEL N. PROTONOTARIOS. "MONTE CARLO SIMULATION ON COMPUTATIONAL FINANCE FOR GRID COMPUTING." International Journal of Modeling, Simulation, and Scientific Computing 03, no. 03 (May 17, 2012): 1250010. http://dx.doi.org/10.1142/s1793962312500109.

Abstract:
Monte Carlo methods are a class of computational algorithms that rely on repeated random sampling to compute their results. Monte Carlo methods are often used in simulating complex systems. Because of their reliance on repeated computation of random or pseudo-random numbers, these methods are most suited to calculation by a computer and tend to be used when it is infeasible or impossible to compute an exact result with a deterministic algorithm. In finance, Monte Carlo simulation method is used to calculate the value of companies, to evaluate economic investments and financial derivatives. On the other hand, Grid Computing applies heterogeneous computer resources of many geographically disperse computers in a network in order to solve a single problem that requires a great number of computer processing cycles or access to large amounts of data. In this paper, we have developed a simulation based on Monte Carlo method which is applied on grid computing in order to predict through complex calculations the future trends in stock prices.
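
A single-machine sketch of the kind of computation the paper distributes over a grid is simulating stock-price paths and averaging the results; the geometric Brownian motion model and all parameter values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def mc_terminal_price(s0: float, mu: float, sigma: float, t: float,
                      n_paths: int, rng: np.random.Generator) -> np.ndarray:
    """Terminal stock prices under geometric Brownian motion."""
    z = rng.standard_normal(n_paths)
    return s0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)

rng = np.random.default_rng(0)
prices = mc_terminal_price(s0=100.0, mu=0.05, sigma=0.2, t=1.0, n_paths=100_000, rng=rng)
print(prices.mean())   # close to 100 * exp(0.05) ≈ 105.1
```

In a grid setting, each node would run an independent batch of paths with its own random stream, and the batch means would then be combined.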
22

Vorobeichikova, O. V. "APPLICATION OF COMPUTER TECHNOLOGIES IN TEACHING OF MEDICAL STUDENTS." Bulletin of Siberian Medicine 13, no. 4 (August 28, 2014): 27–31. http://dx.doi.org/10.20538/1682-0363-2014-4-27-31.

Abstract:
The purpose of the given research is situational tasks, considered from the point of view of the algorithms for their solution and of the application of computer technologies for implementing such algorithms. First, the concept of a situational task and the possibility of using such tasks for training medical students are considered. An analysis of existing situational clinical tasks is carried out and a classification of solution algorithms is given. The possibility of applying computer technologies to implement such algorithms is then considered. Among all existing solution algorithms, one is singled out in which the same algorithm can be applied to solving all tasks of one class. The technology for constructing such an algorithm is presented, and a software package that implements this algorithm for solving situational tasks is described.
23

Tian, Pengyi, Dinggen Xu, and Xiuyuan Zhang. "Computer-Based Electronic Engineering Technology." Journal of Physics: Conference Series 2146, no. 1 (January 1, 2022): 012038. http://dx.doi.org/10.1088/1742-6596/2146/1/012038.

Abstract:
Most current image fusion algorithms process the original image directly and neglect the analysis of its principal components, which strongly affects the quality of image fusion. In this paper, principal component analysis is used to decompose the image into a low-rank matrix and a sparse matrix; compressed sensing technology and the NSST transform algorithm are introduced to process the two types of matrices, and image fusion is achieved according to the corresponding fusion rules. Experimental results show that, compared with traditional algorithms, this algorithm achieves greater mutual information, structural similarity, and average gradient.
24

Fitt, Alistair, Keith O. Geddes, Stephen R. Czapor, and George Labahn. "Algorithms for Computer Algebra." Mathematical Gazette 79, no. 484 (March 1995): 242. http://dx.doi.org/10.2307/3620124.

25

Holt, D. "ALGORITHMS FOR COMPUTER ALGEBRA." Bulletin of the London Mathematical Society 26, no. 1 (January 1994): 107–8. http://dx.doi.org/10.1112/blms/26.1.107.

26

Leach, Ronald J. "Complexity of computer algorithms." Rocky Mountain Journal of Mathematics 17, no. 1 (March 1987): 167–88. http://dx.doi.org/10.1216/rmj-1987-17-1-167.

27

Tron, Roberto, and Rene Vidal. "Distributed Computer Vision Algorithms." IEEE Signal Processing Magazine 28, no. 3 (May 2011): 32–45. http://dx.doi.org/10.1109/msp.2011.940399.

28

Maulana Hasan and Yahfizham. "Pengenalan Algoritma pada Pembelajaran Pemrograman Komputer." Comit: Communication, Information and Technology Journal 2, no. 2 (December 31, 2023): 285–99. http://dx.doi.org/10.47467/comit.v2i2.1386.

Abstract:
An algorithm is defined as a series of actions for solving a problem; the solution steps must be sequential, logical, and systematic. In this way, algorithms become the basis for creating computer programs, while a computer program is the set of commands a computer is instructed to execute when solving a problem. Algorithms play a very important role in computer programming, because algorithms are at the heart of computer science, and discussions across the various branches of computing refer back to the definition of an algorithm. The purpose of this work is to help students first grasp the definition of an algorithm, so as to ease their understanding of computer programming and increase their motivation to learn. The author used the literature study method in writing this work. Keywords: algorithms, programming.
29

Zhang, Hongxin. "Optimization Strategies for Mathematical Algorithms in Computer Programming." Journal of Big Data and Computing 1, no. 1 (March 2023): 16–19. http://dx.doi.org/10.62517/jbdc.202301104.

Abstract:
Computer programming is an important part of computing and information technology, and mathematical operations form one of its main modules; optimizing mathematical operations to simplify programming algorithms can improve the efficiency of computer software. Therefore, in order to improve computational efficiency, it is particularly important to optimize mathematical algorithms. On this basis, this paper studies optimization strategies for mathematical algorithms in computer programming. First, a brief overview of mathematical algorithms and computer programming is given; second, the role of mathematical algorithms in computer programming is analyzed; finally, optimization strategies for mathematical algorithms in computer programming are presented.
30

Porsani, Milton J., and Bjørn Ursin. "Direct multichannel predictive deconvolution." GEOPHYSICS 72, no. 2 (March 2007): H11—H27. http://dx.doi.org/10.1190/1.2432260.

Abstract:
The Levinson principle generally can be used to compute recursively the solution of linear equations. It can also be used to update the error terms directly. This is used to do single-channel deconvolution directly on seismic data without computing or applying a digital filter. Multichannel predictive deconvolution is used for seismic multiple attenuation. In a standard procedure, the prediction-error filter matrices are computed with a Levinson recursive algorithm, using a covariance matrix of the input data. The filtered output is the prediction errors or the nonpredictable part of the data. Starting with the classical Levinson recursion, we have derived new algorithms for direct recursive calculation of the prediction errors without computing the data covariance matrix or computing the prediction-error filters. One algorithm generates recursively the one-step forward and backward prediction errors and the L-step forward prediction error, computing only the filter matrices with the highest index. A numerically more stable algorithm uses reduced QR decomposition or singular-value decomposition (SVD) in a direct recursive computation of the prediction errors without computing any filter matrix. The new, stable, predictive algorithms require more arithmetic operations in the computer, but the computer programs and data flow are much simpler than for standard predictive deconvolution.
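
The classical scalar Levinson recursion that the paper generalizes can be sketched as follows; this is the textbook single-channel Levinson-Durbin form, not the authors' new multichannel error-updating algorithms.

```python
import numpy as np

def levinson_durbin(r: np.ndarray, order: int):
    """Classical scalar Levinson-Durbin recursion.
    r : autocorrelation sequence r[0] ... r[order]
    Returns the prediction-error filter a (with a[0] == 1) and the final error power."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for m in range(1, order + 1):
        # reflection coefficient from the current prediction error
        k = -(r[m] + np.dot(a[1:m], r[m - 1:0:-1])) / err
        # order update: a_new[i] = a[i] + k * a[m - i], and a_new[m] = k
        a[1:m + 1] += k * a[m - 1::-1][:m]
        err *= (1.0 - k * k)
    return a, err
```

The prediction errors themselves can then be obtained by filtering the data with a; the paper's contribution is to update those errors directly, without forming the filters at all.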
31

Lu, Xuandiyang. "Research on Biological Population Evolutionary Algorithm and Individual Adaptive Method Based on Quantum Computing." Wireless Communications and Mobile Computing 2022 (March 22, 2022): 1–9. http://dx.doi.org/10.1155/2022/5188335.

Abstract:
Quantum computers have been developed on the foundation of classical computers. In dealing with some large-scale parallel problems, a quantum computer is simpler and faster than a traditional computer, although today's physical qubit computers still have many limitations. Classical computers have many ways to simulate quantum computing, the most effective of which involve quantum supremacy and quantum algorithms. Ensuring computational efficiency, accuracy, and precision is of great significance to the study of large-scale quantum computing. Compared with other algorithms, the genetic algorithm has additional advantages, such as strong adaptability and global optimization ability, so it can be applied more widely. From the research in Chapter 4, we can conclude that the variance of A2C is clearly smaller than that of PPO, and furthermore that A2C has better robustness.
32

Sun, Yuqin, Songlei Wang, Dongmei Huang, Yuan Sun, Anduo Hu, and Jinzhong Sun. "A multiple hierarchical clustering ensemble algorithm to recognize clusters arbitrarily shaped." Intelligent Data Analysis 26, no. 5 (September 5, 2022): 1211–28. http://dx.doi.org/10.3233/ida-216112.

Abstract:
As a research hotspot in ensemble learning, clustering ensembles obtain robust and highly accurate algorithms by integrating multiple basic clustering algorithms. Most existing clustering ensemble algorithms take linear clustering algorithms as the base clusterings. As a typical unsupervised learning technique, clustering has difficulty properly defining the accuracy of its findings, which makes it difficult to significantly enhance the performance of the final algorithm. In this article, the AGglomerative NESting method is used to build the base clusters, and an integration strategy for combining multiple AGglomerative NESting clusterings is proposed. The algorithm has three main steps: evaluating the credibility of labels, producing multiple base clusters, and constructing the relations among clusters. The proposed algorithm builds on the original advantages of AGglomerative NESting and further compensates for its inability to identify arbitrarily shaped clusters. Comparing the proposed algorithm's clustering performance to that of existing clustering algorithms on different datasets establishes its superiority in terms of clustering performance.
33

Ahmed, Asad, Osman Hasan, Falah Awwad, Nabil Bastaki, and Syed Rafay Hasan. "Formal Asymptotic Analysis of Online Scheduling Algorithms for Plug-In Electric Vehicles’ Charging." Energies 12, no. 1 (December 21, 2018): 19. http://dx.doi.org/10.3390/en12010019.

Abstract:
A large-scale integration of plug-in electric vehicles (PEVs) into the power grid system has necessitated the design of online scheduling algorithms to accommodate the after-effects of this new type of load, i.e., PEVs, on the overall efficiency of the power system. In online settings, the low computational complexity of the corresponding scheduling algorithms is of paramount importance for the reliable, secure, and efficient operation of the grid system. Generally, the computational complexity of an algorithm is computed using asymptotic analysis. Traditionally, the analysis is performed using the paper-pencil proof method, which is error-prone and thus not suitable for analyzing the mission-critical online scheduling algorithms for PEV charging. To overcome these issues, this paper presents a formal asymptotic analysis approach for online scheduling algorithms for PEV charging using higher-order-logic theorem proving, which is a sound computer-based verification approach. For illustration purposes, we present the complexity analysis of two state-of-the-art online algorithms: the Online cooRdinated CHARging Decision (ORCHARD) algorithm and online Expected Load Flattening (ELF) algorithm.
34

Zadiraka, Valerii, Oleksandr Khimich, and Inna Shvidchenko. "Models of Computer Calculations." Cybernetics and Computer Technologies, no. 2 (September 30, 2022): 38–51. http://dx.doi.org/10.34229/2707-451x.22.2.4.

Abstract:
Introduction. The complexity of computational algorithms for solving typical problems of computational, applied, and discrete mathematics is analyzed from the perspective of the theory of computation, depending on the computer architecture and the computing model used: single-processor, multiprocessor, or quantum. The following classes of problems are considered: systems of linear algebraic equations, the Cauchy problem for systems of ordinary differential equations, numerical integration, boundary value problems for ordinary differential equations, factorization of numbers, finding the discrete logarithm of a number in multiplicative integer groups, searching for the necessary record in an unordered database, etc.
The purposes of the paper are: 1) to investigate how computational complexity depends on the computer architecture and the computational model; 2) to show that constructing the computational process under the given conditions of calculation is related to solving the following problems: the existence of an ε-solution to the problem, the existence of T-effective computing algorithms, and the possibility of building a real computing process under the given computing conditions; 3) to investigate the effect of rounding numbers on computational complexity (especially when solving problems of transcomputational complexity); 4) to give the complexity estimates and total error of the computational algorithm for a number of typical problems of computational, applied, and discrete mathematics.
The results. Complexity estimates of computational algorithms for the listed classes of problems are given for single-processor, multiprocessor, and quantum computing models. The main focus is on high-performance computing, using the principles of parallel data processing and quantum mechanics.
Conclusions. The connection between complexity estimates of computational algorithms and the architecture of computers and the models of calculation is demonstrated. The characteristics of the first quantum computers (2016–2022) that have gone beyond laboratory research are given. Keywords: computer technologies, rounding error, sequential, parallel and quantum computing models, complexity estimate.
35

Shultz, Thomas R., Marcel Montrey, and Lucy M. Aplin. "Modelling the spread of innovation in wild birds." Journal of The Royal Society Interface 14, no. 131 (June 2017): 20170215. http://dx.doi.org/10.1098/rsif.2017.0215.

Abstract:
We apply three plausible algorithms in agent-based computer simulations to recent experiments on social learning in wild birds. Although some of the phenomena are simulated by all three learning algorithms, several manifestations of social conformity bias are simulated by only the approximate majority (AM) algorithm, which has roots in chemistry, molecular biology and theoretical computer science. The simulations generate testable predictions and provide several explanatory insights into the diffusion of innovation through a population. The AM algorithm's success raises the possibility of its usefulness in studying group dynamics more generally, in several different scientific domains. Our differential-equation model matches simulation results and provides mathematical insights into the dynamics of these algorithms.
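
The approximate majority (AM) algorithm referenced above can be stated as a three-state population protocol. The sketch below is a common textbook formulation in which a randomly chosen initiator converts the responder; the state names, population sizes, and step budget are illustrative, and this is not the authors' simulation code.

```python
import random

def approximate_majority(pop: list[str], rng: random.Random, max_steps: int = 1_000_000) -> str:
    """Approximate-majority protocol over states 'A', 'B' and the undecided state '_'."""
    pop = list(pop)
    for _ in range(max_steps):
        if all(s == pop[0] for s in pop):        # consensus reached
            return pop[0]
        i, r = rng.sample(range(len(pop)), 2)    # initiator, responder
        a, b = pop[i], pop[r]
        if a != '_' and b != '_' and a != b:
            pop[r] = '_'                         # disagreeing responder becomes undecided
        elif a != '_' and b == '_':
            pop[r] = a                           # undecided responder adopts initiator's view
    return '?'                                   # no consensus within the step budget

rng = random.Random(1)
print(approximate_majority(['A'] * 60 + ['B'] * 40, rng))   # usually 'A'
```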
36

Ahmad Al-hafiz Sagala and Yahfizham Yahfizham. "Analisis Pengenalan Konsep Algoritma Pemrograman Matematika Pada Kehidupan Sehari Hari." Morfologi: Jurnal Ilmu Pendidikan, Bahasa, Sastra dan Budaya 2, no. 1 (January 3, 2024): 01–16. http://dx.doi.org/10.61132/morfologi.v2i1.267.

Abstract:
Algorithms and programming play a crucial role in problem solving and software development. The word "algorithm" derives from "algorism," and its history can be traced to Abu Ja'far Muhammad Ibn Musa Al-Khuwarizmi. This article presents definitions of algorithms from several sources, such as Kani (2020), Jando & Nani (2018), Munir & Lidya (2016), and Sismoro (2005). A programming algorithm, as a series of logical and systematic steps, is the core of problem solving. The research method applied is descriptive and qualitative, collecting information from various sources, including books, journal articles, and research reports. The discussion in this article covers the significance of algorithms in everyday life and their application in computer programming. In addition, the article details the basic concepts of algorithms, the role of programming languages, and algorithm representation techniques. The discussion shows that algorithms involve logical and systematic steps to solve a task or problem. The basic structures of an algorithm include repetition, branching, and sequential order. Programming languages contribute to simplifying human interaction with computers, increasing productivity, and facilitating software maintenance and development. Algorithms can be represented as descriptive sentences, pseudocode, or flowcharts; flowcharts use symbols to illustrate the steps of solving a problem, and repetition, branching, and sequential structures can be clearly described through them. This article provides a comprehensive overview of algorithms and programming, giving readers insight into the basic concepts, implementation, and importance of algorithms in the context of computer programming.
37

Bi, Bo, Muhammad Kamran Jamil, Khawaja Muhammad Fahd, Tian-Le Sun, Imran Ahmad, and Lei Ding. "Algorithms for Computing Wiener Indices of Acyclic and Unicyclic Graphs." Complexity 2021 (May 3, 2021): 1–6. http://dx.doi.org/10.1155/2021/6663306.

Abstract:
Let G = (V(G), E(G)) be a molecular graph, where V(G) and E(G) are the sets of vertices (atoms) and edges (bonds). A topological index of a molecular graph is a numerical quantity which helps to predict the chemical/physical properties of the molecules. The Wiener, Wiener polarity, and terminal Wiener indices are distance-based topological indices. In this paper, we describe a linear-time algorithm (LTA) that computes the Wiener index for acyclic graphs and extend this algorithm to unicyclic graphs. The same algorithms are modified to compute the terminal Wiener index and the Wiener polarity index. All these algorithms compute the indices in time O(n).
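
For acyclic graphs (trees), linear-time computation of the Wiener index rests on a standard observation: an edge whose removal separates a subtree of size s from the remaining n − s vertices lies on exactly s(n − s) shortest paths. The following sketch is based on that observation and is not necessarily the paper's exact LTA.

```python
def wiener_index_tree(n: int, edges: list[tuple[int, int]]) -> int:
    """Wiener index of a tree on vertices 0..n-1 in O(n) time."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    size, parent, order = [1] * n, [-1] * n, []
    seen = [False] * n
    seen[0] = True
    stack = [0]
    while stack:                       # iterative DFS to obtain a processing order
        u = stack.pop()
        order.append(u)
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                parent[v] = u
                stack.append(v)
    total = 0
    for u in reversed(order):          # accumulate subtree sizes bottom-up
        if parent[u] >= 0:
            size[parent[u]] += size[u]
            total += size[u] * (n - size[u])
    return total

# Path on 3 vertices 0-1-2: distances 1 + 1 + 2 = 4
print(wiener_index_tree(3, [(0, 1), (1, 2)]))   # 4
```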
38

Rejer, Izabela. "Genetic Algorithms for Feature Selection for Brain–Computer Interface." International Journal of Pattern Recognition and Artificial Intelligence 29, no. 05 (July 9, 2015): 1559008. http://dx.doi.org/10.1142/s0218001415590089.

Abstract:
The crucial problem that has to be solved when designing an effective brain–computer interface (BCI) is: how to reduce the huge space of features extracted from raw electroencephalography (EEG) signals. One of the strategies for feature selection that is often applied by BCI researchers is based on genetic algorithms (GAs). The two types of GAs that are most commonly used in BCI research are the classic algorithm and the Culling algorithm. This paper presents both algorithms and their application for selecting features crucial for the correct classification of EEG signals recorded during imagery movements of the left and right hand. The results returned by both algorithms are compared to those returned by an algorithm with aggressive mutation and an algorithm with melting individuals, both of which have been proposed by the author of this paper. While the aggressive mutation algorithm has been published previously, the melting individuals algorithm is presented here for the first time.
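
The classic GA formulation that the paper's variants build on encodes each feature subset as a binary chromosome. A minimal sketch follows; the fitness function here is a toy placeholder that a BCI experiment would replace with classifier accuracy on the selected EEG features, and none of this is the author's aggressive-mutation or melting-individuals code.

```python
import random

def ga_feature_selection(n_features: int, fitness, pop_size: int = 20,
                         generations: int = 50, p_mut: float = 0.05, seed: int = 0):
    """Classic GA over binary chromosomes; chromosome[i] == 1 means feature i is selected."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]                    # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_features)           # one-point crossover
            child = [bit ^ (rng.random() < p_mut)        # bit-flip mutation
                     for bit in a[:cut] + b[cut:]]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Toy fitness: reward the first three features, penalise large subsets.
best = ga_feature_selection(10, lambda c: sum(c[:3]) - 0.1 * sum(c))
print(best)   # tends towards [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
```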
39

Khatri, Sumeet, Ryan LaRose, Alexander Poremba, Lukasz Cincio, Andrew T. Sornborger, and Patrick J. Coles. "Quantum-assisted quantum compiling." Quantum 3 (May 13, 2019): 140. http://dx.doi.org/10.22331/q-2019-05-13-140.

Abstract:
Compiling quantum algorithms for near-term quantum computers (accounting for connectivity and native gate alphabets) is a major challenge that has received significant attention both by industry and academia. Avoiding the exponential overhead of classical simulation of quantum dynamics will allow compilation of larger algorithms, and a strategy for this is to evaluate an algorithm's cost on a quantum computer. To this end, we propose a variational hybrid quantum-classical algorithm called quantum-assisted quantum compiling (QAQC). In QAQC, we use the overlap between a target unitary U and a trainable unitary V as the cost function to be evaluated on the quantum computer. More precisely, to ensure that QAQC scales well with problem size, our cost involves not only the global overlap Tr(V†U) but also the local overlaps with respect to individual qubits. We introduce novel short-depth quantum circuits to quantify the terms in our cost function, and we prove that our cost cannot be efficiently approximated with a classical algorithm under reasonable complexity assumptions. We present both gradient-free and gradient-based approaches to minimizing this cost. As a demonstration of QAQC, we compile various one-qubit gates on IBM's and Rigetti's quantum computers into their respective native gate alphabets. Furthermore, we successfully simulate QAQC up to a problem size of 9 qubits, and these simulations highlight both the scalability of our cost function as well as the noise resilience of QAQC. Future applications of QAQC include algorithm depth compression, black-box compiling, noise mitigation, and benchmarking.
40

Ivancova, Olga, Vladimir Korenkov, Olga Tyatyushkina, Sergey Ulyanov, and Toshio Fukuda. "Quantum supremacy in end-to-end intelligent IT. PT. III. Quantum software engineering – quantum approximate optimization algorithm on small quantum processors." System Analysis in Science and Education, no. 2 (2020) (June 30, 2020): 115–76. http://dx.doi.org/10.37005/2071-9612-2020-2-115-176.

Abstract:
Principles and methodologies of quantum algorithmic gate-based design on a small quantum computer are described. The possibilities of simulating quantum algorithmic gates on classical computers are discussed. A new approach to the circuit-level design of quantum algorithm gates for fast, massively parallel quantum computing is presented. The software and hardware of a sophisticated smart toolkit, a supercomputing accelerator for simulating quantum algorithms on a small programmable quantum computer (which can be programmed in software to implement arbitrary quantum algorithms by executing any sequence of universal quantum logic gates), are also described.
41

Wang, Yang, Shi Jun Ji, and Li Jun Yang. "General Subdivision Inferred from Catmull-Clark Subdivision Algorithm." Materials Science Forum 532-533 (December 2006): 789–92. http://dx.doi.org/10.4028/www.scientific.net/msf.532-533.789.

Abstract:
Subdivision algorithms have recently emerged as a powerful and useful technique for modeling free-form surfaces. However, existing subdivision algorithms have disadvantages: they cannot meet the demands of wide application in surface modeling and do not yet form a general theory. In this paper, a general subdivision algorithm is presented that is a generalization inferred from the classical Catmull-Clark subdivision algorithm and can reproduce existing subdivision algorithms by selecting suitable vertical and horizontal weights. In contrast to all existing subdivision algorithms, the proposed scheme is well suited to preserving shape features such as creases, corners, and darts; it also has the advantages of flexible weight selection, easy shape control, and high computation speed. The algorithm is therefore widely applicable to shape modeling in computer-aided geometric design, industrial prototype design, and reverse engineering.
42

Pu, Chun Wang. "Research on the Design of Football Teaching System Based on Computer 3D Human Motion Recognition." Advanced Materials Research 791-793 (September 2013): 2013–17. http://dx.doi.org/10.4028/www.scientific.net/amr.791-793.2013.

Abstract:
As the performance of computer hardware and software has improved, 3D virtual simulation is gradually being applied in all walks of life. 3D computer simulation must handle large amounts of data, and in the data storage and calculation process efficient algorithms must be chosen to deal with redundant data. On this basis, the paper applies two algorithms, the BHIK algorithm and the CCD algorithm, to the simulation of 3D computer motion recognition and compares their efficiency. The comparison shows that the calculation speed and convergence of the BHIK algorithm are significantly better than those of the CCD algorithm: the execution time of the BHIK algorithm is only 1/10 that of the CCD algorithm, and its convergence speed is 4 times that of the CCD algorithm. We therefore choose the BHIK algorithm as the 3D computer simulation algorithm. Finally, a football teaching system is used as an example to verify the validity and reliability of the algorithm.
43

Mihelič, Jurij, and Uroš Čibej. "EXPERIMENTAL COMPARISON OF MATRIX ALGORITHMS FOR DATAFLOW COMPUTER ARCHITECTURE." Acta Electrotechnica et Informatica 18, no. 3 (September 27, 2018): 47–56. http://dx.doi.org/10.15546/aeei-2018-0025.

44

Amaro, David, Carlo Modica, Matthias Rosenkranz, Mattia Fiorentini, Marcello Benedetti, and Michael Lubasch. "Filtering variational quantum algorithms for combinatorial optimization." Quantum Science and Technology 7, no. 1 (January 1, 2022): 015021. http://dx.doi.org/10.1088/2058-9565/ac3e54.

Abstract:
Current gate-based quantum computers have the potential to provide a computational advantage if algorithms use quantum hardware efficiently. To make combinatorial optimization more efficient, we introduce the filtering variational quantum eigensolver which utilizes filtering operators to achieve faster and more reliable convergence to the optimal solution. Additionally we explore the use of causal cones to reduce the number of qubits required on a quantum computer. Using random weighted MaxCut problems, we numerically analyze our methods and show that they perform better than the original VQE algorithm and the quantum approximate optimization algorithm. We also demonstrate the experimental feasibility of our algorithms on a Quantinuum trapped-ion quantum processor powered by Honeywell.
45

Suryadibrata, Alethea, and Julio Christian Young. "Visualisasi Algoritma sebagai Sarana Pembelajaran K-Means Clustering." Ultimatics : Jurnal Teknik Informatika 12, no. 1 (July 2, 2020): 25–29. http://dx.doi.org/10.31937/ti.v12i1.1523.

Abstract:
Algorithm Visualization (AV) is often used in computer science to represents how an algorithm works. Educators believe that visualization can help students to learn difficult algorithms. In this paper, we put our interest in visualizing one of Machine Learning (ML) algorithms. ML algorithms are used in various fields. Some of the algorithms are used to classify, predict, or cluster data. Unfortunately, many students find that ML algorithms are hard to learn since some of these algorithms include complicated mathematical equations. We hope this research can help computer science students to understand K-Means Clustering in an easier way.
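
For reference, the K-Means iteration (Lloyd's algorithm) that such a teaching visualization would animate can be sketched as follows; this is a generic implementation with toy data, not the authors' tool.

```python
import numpy as np

def kmeans(points: np.ndarray, k: int, iters: int = 100, seed: int = 0):
    """Lloyd's algorithm: alternate assignment and centroid-update steps."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # assignment step: nearest centroid for every point
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # update step: move each centroid to the mean of its assigned points
        new_centroids = np.array([points[labels == j].mean(axis=0) if np.any(labels == j)
                                  else centroids[j] for j in range(k)])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids

pts = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9]])
labels, cents = kmeans(pts, k=2)
print(labels)   # two points per cluster, e.g. [0 0 1 1] or [1 1 0 0]
```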
46

Niu, Yiming, Wenyong Du, and Zhenying Tang. "Computer Network Security Defense Model." Journal of Physics: Conference Series 2146, no. 1 (January 1, 2022): 012041. http://dx.doi.org/10.1088/1742-6596/2146/1/012041.

Abstract:
With the rapid development of the Internet industry, hundreds of millions of online resources are booming as well. In an information space with huge and complex resources, it is necessary to help users quickly find the resources they are interested in and save them time. At this stage, applying recommendation models in the content distribution process has become mainstream in the content industry. A content recommendation model gives users a highly efficient and satisfying reading experience and solves the problem of information redundancy to a certain extent. Personalized dynamic recommendation based on knowledge tags is currently widely used in the field of e-commerce. The purpose of this article is to study the optimization of a knowledge-tag personalized dynamic recommendation system based on artificial intelligence algorithms. The article first proposes a hybrid recommendation algorithm based on a comparison between content-based filtering and collaborative filtering algorithms. It mainly covers the analysis and design of user browsing behaviour, the design of a KNN-based item similarity algorithm, and the implementation of the hybrid recommendation algorithm. Finally, algorithm simulation experiments verify the effectiveness of the proposed algorithm and show improved recommendation accuracy.
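
The KNN-based item-similarity component mentioned above usually reduces to cosine similarity between item interaction vectors. The following sketch shows the idea on a toy user-item matrix; the data are illustrative, neighbourhood truncation to the k nearest items is omitted for brevity, and this is not the paper's implementation.

```python
import numpy as np

def item_similarity(ratings: np.ndarray) -> np.ndarray:
    """Cosine similarity between the item columns of a user-item rating matrix."""
    norms = np.linalg.norm(ratings, axis=0, keepdims=True)
    norms[norms == 0] = 1.0                      # avoid division by zero for unrated items
    normalized = ratings / norms
    return normalized.T @ normalized

def recommend(ratings: np.ndarray, user: int, top_n: int = 1) -> list[int]:
    """Score unseen items by similarity-weighted sums over the user's rated items."""
    sim = item_similarity(ratings)
    scores = sim @ ratings[user]
    scores[ratings[user] > 0] = -np.inf          # do not recommend already-seen items
    return list(np.argsort(scores)[::-1][:top_n])

# rows = users, columns = items
R = np.array([[5, 4, 0, 0],
              [4, 5, 1, 0],
              [0, 0, 5, 4]], dtype=float)
print(recommend(R, user=0))   # [2] — item 2 shares raters with items 0 and 1
```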
47

Khimich, Alexander, Victor Polyanko, and Tamara Chistyakova. "Parallel Algorithms for Solving Linear Systems on Hybrid Computers." Cybernetics and Computer Technologies, no. 2 (July 24, 2020): 53–66. http://dx.doi.org/10.34229/2707-451x.20.2.6.

Abstract:
Introduction. At present, new computational problems with large volumes of data constantly arise in science and technology, and their solution requires powerful supercomputers. Most of these problems come down to solving systems of linear algebraic equations (SLAE). The main difficulty in solving problems on a computer is obtaining reliable solutions with minimal computing resources. However, the problem that is solved on a computer always contains data that are approximate with respect to the original task (due to errors in the initial data, errors when entering numerical data into the computer, etc.). Thus, the mathematical properties of the computer problem can differ significantly from the properties of the original problem. It is therefore necessary to solve problems taking into account the approximate data and to analyze the computer results. Despite significant research results in the field of linear algebra, the problems of computer solution of problems with approximate data are further aggravated by the use of contemporary supercomputers; they do not lose their significance and require further development. Today, the most high-performance supercomputers are parallel computers with graphics processors. The architectural and technological features of these computers make it possible to significantly increase the efficiency of solving large problems at relatively low energy costs.
The purpose of the article is to develop new parallel algorithms for solving systems of linear algebraic equations with approximate data on supercomputers with graphics processors that automatically adjust to the effective computer architecture and to the mathematical properties of the problem identified in the computer, and that provide estimates of the reliability of the results.
Results. A methodology is described for creating parallel algorithms for supercomputers with graphics processors that investigate the mathematical properties of linear systems with approximate data, together with algorithms that analyze the reliability of the results. The results of computational experiments on the SKIT-4 supercomputer are presented.
Conclusions. Parallel algorithms have been created for investigating and solving linear systems with approximate data on supercomputers with graphics processors. Numerical experiments with the new algorithms showed a significant acceleration of calculations with a guarantee of the reliability of the results. Keywords: systems of linear algebraic equations, hybrid algorithm, approximate data, reliability of the results, GPU computers.
48

Luan, Yuxuan, Junjiang He, Jingmin Yang, Xiaolong Lan, and Geying Yang. "Uniformity-Comprehensive Multiobjective Optimization Evolutionary Algorithm Based on Machine Learning." International Journal of Intelligent Systems 2023 (November 10, 2023): 1–21. http://dx.doi.org/10.1155/2023/1666735.

Abstract:
When solving real-world optimization problems, the uniformity of Pareto fronts is an essential requirement in multiobjective optimization problems (MOPs). However, it is a common challenge for many existing multiobjective optimization algorithms due to the skewed distribution of solutions and biases towards specific objective functions. This paper proposes a uniformity-comprehensive multiobjective optimization evolutionary algorithm based on machine learning to address this limitation. Our algorithm utilizes uniform initialization and a self-organizing map (SOM) to enhance population diversity and uniformity. We track the IGD value and use K-means and CNN refinement with crossover and mutation techniques during the evolutionary stages. The superiority of our algorithm in uniformity and objective function balance was verified through comparative analysis with 13 other algorithms, including eight traditional multiobjective optimization algorithms, three machine learning-based enhanced multiobjective optimization algorithms, and two algorithms with objective initialization improvements. Based on these comprehensive experiments, it has been proven that our algorithm outperforms the other existing algorithms in these areas.
49

Michael, James Bret, and Jeffrey Voas. "Algorithms, Algorithms, Algorithms." Computer 53, no. 11 (November 2020): 13–15. http://dx.doi.org/10.1109/mc.2020.3016534.

50

Sokolov, Sergey, Andrey Boguslavsky, and Sergei Romanenko. "Implementation of the visual data processing algorithms for onboard computing units." Robotics and Technical Cybernetics 9, no. 2 (June 30, 2021): 106–11. http://dx.doi.org/10.31776/rtcj.9204.

Abstract:
Based on a short analysis of the modern hardware and software used for autonomous mobile robots, the role of computer vision systems in the structure of such robots is considered. A number of onboard computer configurations and implementations of algorithms for visual data capture and processing are described; in the original configuration space, the «algorithms-hardware» plane is considered. The real-time vision system framework is used for software design. Experiments with a computing module based on the Intel/Altera Cyclone IV FPGA (implementing the histogram computation algorithm and the Canny algorithm) and with a computing module based on a Xilinx FPGA (implementing sparse and dense optical flow algorithms) are described. An implementation of a graph segmentation algorithm for grayscale images is also considered and analyzed. Results of the first experiments are presented.