Academic literature on the topic 'Computer algorithms'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Computer algorithms.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Computer algorithms":

1

Ataeva, Gulsina Isroilovna, and Lola Dzhalolovna Yodgorova. "METHODS AND ALGORITHMS OF COMPUTER GRAPHICS." Scientific Reports of Bukhara State University 4, no. 1 (February 26, 2020): 43–47. http://dx.doi.org/10.52297/2181-1466/2020/4/1/3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The article considers methods and algorithms of computer graphics: the implementation of transformations of graphic objects by means of translation, scaling, and rotation operations, and the types of geometric models. Methods of computer graphics include converting graphic objects, representing (rasterizing) lines, selecting a window, removing hidden lines, projecting, and painting images.
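The transformations this abstract lists (translation, scaling, rotation) are conventionally composed as homogeneous-coordinate matrices. A minimal illustrative sketch of that standard technique (not code from the cited article):

```python
import math

def translate(tx, ty):
    # 3x3 homogeneous translation matrix.
    return [[1.0, 0.0, tx], [0.0, 1.0, ty], [0.0, 0.0, 1.0]]

def scale(sx, sy):
    # 3x3 homogeneous scaling matrix.
    return [[sx, 0.0, 0.0], [0.0, sy, 0.0], [0.0, 0.0, 1.0]]

def rotate(theta):
    # 3x3 homogeneous rotation matrix (angle in radians, about the origin).
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(m, point):
    # Apply a homogeneous transform to a 2D point.
    x, y = point
    v = [x, y, 1.0]
    rx, ry, rw = (sum(m[i][k] * v[k] for k in range(3)) for i in range(3))
    return (rx / rw, ry / rw)

# Compose: rotate 90 degrees about the origin, then translate by (2, 0).
m = matmul(translate(2.0, 0.0), rotate(math.pi / 2))
```

Because the matrices compose by multiplication, an entire chain of transformations collapses to a single matrix applied once per point.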
2

Pelter, Michele M., and Mary G. Carey. "ECG Computer Algorithms." American Journal of Critical Care 17, no. 6 (November 1, 2008): 581–82. http://dx.doi.org/10.4037/ajcc2008.17.6.581.

3

Kaltofen, E. "Computer Algebra Algorithms." Annual Review of Computer Science 2, no. 1 (June 1987): 91–118. http://dx.doi.org/10.1146/annurev.cs.02.060187.000515.

4

Rakhimov, Bakhtiyar Saidovich, Feroza Bakhtiyarovna Rakhimova, Sabokhat Kabulovna Sobirova, Furkat Odilbekovich Kuryazov, and Dilnoza Boltabaevna Abdirimova. "Review And Analysis Of Computer Vision Algorithms." American Journal of Applied sciences 03, no. 05 (May 31, 2021): 245–50. http://dx.doi.org/10.37547/tajas/volume03issue05-39.

Abstract:
Computer vision as a scientific discipline refers to the theories and technologies for creating artificial systems that obtain information from images. Although the discipline is quite young, its results have penetrated almost all areas of life. Computer vision is closely related to practical fields such as image processing, whose input is two-dimensional images obtained from a camera or created artificially. This form of image transformation is aimed at noise suppression, filtering, color correction and image analysis, which makes it possible to obtain specific information directly from the processed image. This information may include searching for objects, keypoints, segments, and annexes.
5

Xu, Zheng Guang, Chen Chen, and Xu Hong Liu. "An Efficient View-Point Invariant Detector and Descriptor." Advanced Materials Research 659 (January 2013): 143–48. http://dx.doi.org/10.4028/www.scientific.net/amr.659.143.

Abstract:
Many computer vision applications need keypoint correspondences between images taken under different viewing conditions. Generally speaking, traditional algorithms target applications needing either good invariance to affine transformation or speed of computation. Nowadays, the widespread use of computer vision algorithms on handheld devices such as mobile phones and on embedded devices with low memory and computation capability has set the goal of making descriptors faster to compute and more compact while remaining robust to affine transformation and noise. To best address the whole process, this paper covers keypoint detection, description and matching. Binary descriptors are computed by comparing the intensities of pairs of sampling points in image patches, and they are matched by Hamming distance using an SSE 4.2-optimized popcount. In the experimental results, we show that our algorithm is fast to compute with lower memory usage and invariant to viewpoint change, blur change, brightness change, and JPEG compression.
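The matching step this abstract describes (XOR two binary descriptors, then count the set bits; the paper uses an SSE 4.2-optimized popcount) can be sketched in plain Python, with `bin(...).count("1")` standing in for the hardware POPCNT instruction:

```python
def hamming(d1: int, d2: int) -> int:
    # XOR leaves a 1 bit wherever the two descriptors disagree;
    # counting the set bits gives the Hamming distance.
    return bin(d1 ^ d2).count("1")

def match(query: int, database):
    # Brute-force nearest-neighbour matching by Hamming distance.
    return min(database, key=lambda d: hamming(query, d))
```

The same XOR-then-popcount loop is what SIMD implementations vectorize; the algorithmic idea is unchanged.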
6

Moosakhah, Fatemeh, and Amir Massoud Bidgoli. "Congestion Control in Computer Networks with a New Hybrid Intelligent Algorithm." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 13, no. 8 (August 23, 2014): 4688–706. http://dx.doi.org/10.24297/ijct.v13i8.7068.

Abstract:
With the invention of computer networks, transferring data from one computer to another became possible. But as the number of computers exchanging data increased while the bandwidth of the shared communication channel remained limited, a phenomenon called congestion emerged, in which some data packets are dropped and never arrive at their destination. Different algorithms have been proposed for overcoming congestion. These are divided into two general groups: (1) flow-based algorithms and (2) class-based algorithms. In the present study, using a class-based algorithm whose control is optimized by fuzzy logic and the new Cuckoo algorithm, we increased the number of packets that reach their destination and considerably reduced the number of dropped packets during congestion. Simulation results indicate a great improvement in efficiency.
7

Bunin, Y. V., E. V. Vakulik, R. N. Mikhaylusov, V. V. Negoduyko, K. S. Smelyakov, and O. V. Yasinsky. "Estimation of lung standing size with the application of computer vision algorithms." Experimental and Clinical Medicine 89, no. 4 (December 17, 2020): 87–94. http://dx.doi.org/10.35339/ekm.2020.89.04.13.

Abstract:
Evaluation of spiral computed tomography data is important for improving the diagnosis of gunshot wounds and developing further surgical tactics. The aim of the work is to improve the results of the diagnosis of foreign bodies in the lungs by using computer vision algorithms. Image gradation correction, interval segmentation, threshold segmentation, a three-dimensional wave method, and the principal components method are used as computer vision tools. The use of these computer vision algorithms makes it possible to clearly determine the size of a foreign body in the lung with an error of 6.8 to 7.2%, which is important for in-depth diagnosis and the development of further surgical tactics. Computer vision techniques increase the detail of foreign bodies in the lungs and have significant prospects for the in-depth processing of spiral computed tomography data. Keywords: computer vision, spiral computed tomography, lungs, foreign bodies.
8

STEWART, IAIN A. "ON TWO APPROXIMATION ALGORITHMS FOR THE CLIQUE PROBLEM." International Journal of Foundations of Computer Science 04, no. 02 (June 1993): 117–33. http://dx.doi.org/10.1142/s0129054193000080.

Abstract:
We look at well-known polynomial-time approximation algorithms for the optimization problem MAX-CLIQUE (“find the size of the largest clique in a graph”) with regard to how easy it is to compute the actual cliques yielded by these approximation algorithms. We show that even for two “pretty useless” deterministic polynomial-time approximation algorithms, it is unlikely that the resulting clique can be computed efficiently in parallel. We also show that for each non-deterministic algorithm, it is unlikely that there is some deterministic polynomial-time algorithm that decides whether any given vertex appears in some clique yielded by that nondeterministic algorithm.
9

Schlingemann, D. "Cluster states, algorithms and graphs." Quantum Information and Computation 4, no. 4 (July 2004): 287–324. http://dx.doi.org/10.26421/qic4.4-4.

Abstract:
The present paper is concerned with the concept of the one-way quantum computer beyond binary systems, and its relation to the concept of stabilizer quantum codes. This relation is exploited to analyze a particular class of quantum algorithms, called graph algorithms, which correspond in the binary case to the Clifford-group part of a network and which can be implemented efficiently on a one-way quantum computer. These algorithms can be "completely solved" in the sense that the manipulation of quantum states in each step can be computed explicitly. Graph algorithms are precisely those which implement encoding schemes for graph codes. Starting from a given initial graph, which represents the underlying resource of multipartite entanglement, each step of the algorithm is related to an explicit transformation on the graph.
10

Singh, Varun, Varun Sharma, and Vasu Bachchas. "Sudoku Solving Using Quantum Computer." International Journal for Research in Applied Science and Engineering Technology 11, no. 2 (February 28, 2023): 622–29. http://dx.doi.org/10.22214/ijraset.2023.49094.

Abstract:
We use the abilities of quantum computing, such as superposition and entanglement, to solve Sudoku. In recent years, quantum computers have shown promise as a new technology for solving complex problems in various fields, including optimization and cryptography. In this paper, we investigate the potential of quantum computers for solving Sudoku puzzles. We present a quantum algorithm for solving Sudoku puzzles and compare its performance to classical algorithms. Our results show that the quantum algorithm outperforms classical algorithms in terms of both speed and accuracy, and provides a new tool for solving Sudoku puzzles efficiently. Additionally, we discuss the implications of our results for the development of quantum algorithms for solving other combinatorial problems.

Dissertations / Theses on the topic "Computer algorithms":

1

Mosca, Michele. "Quantum computer algorithms." Thesis, University of Oxford, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.301184.

2

Nyman, Peter. "Representation of Quantum Algorithms with Symbolic Language and Simulation on Classical Computer." Licentiate thesis, Växjö University, School of Mathematics and Systems Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-2329.

Abstract:



Quantum computing is an extremely promising project combining theoretical and experimental quantum physics, mathematics, quantum information theory and computer science. At the first stage of development of quantum computing, the main attention was paid to creating a few algorithms which might have applications in the future, clarifying fundamental questions and developing experimental technologies for toy quantum computers operating with a few quantum bits. At that time expectations of quick progress in the quantum computing project dominated in the quantum community. However, it seems that such high expectations were not totally justified. Numerous fundamental and technological problems, such as the decoherence of quantum bits and the instability of quantum structures even with a small number of registers, led to doubts about a quick development of really working quantum computers. Although it cannot be denied that great progress has been made in quantum technologies, it is clear that there is still a huge gap between the creation of toy quantum computers with 10-15 quantum registers and, e.g., satisfying the technical conditions of the project of 100 quantum registers announced a few years ago in the USA. It is also evident that difficulties increase nonlinearly with an increasing number of registers. Therefore the simulation of quantum computations on classical computers became an important part of the quantum computing project. Of course, it cannot be expected that quantum algorithms would help to solve NP problems in polynomial time on classical computers. However, this is not at all the aim of classical simulation. Classical simulation of quantum computations will cover part of the gap between the theoretical mathematical formulation of quantum mechanics and the realization of quantum computers.
One of the most important problems in "quantum computer science" is the development of new symbolic languages for quantum computing and the adaptation of existing symbolic languages for classical computing to quantum algorithms. The present thesis is devoted to the adaptation of the Mathematica symbolic language to known quantum algorithms and corresponding simulation on the classical computer. Concretely, we shall represent in the Mathematica symbolic language Simon's algorithm, the Deutsch-Jozsa algorithm, Grover's algorithm, Shor's algorithm and quantum error-correcting codes. We shall see that the same framework can be used for all these algorithms. This framework will contain the characteristic property of the symbolic language representation of quantum computing and it will be a straightforward matter to include this framework in future algorithms.
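Classical simulation of the kind the thesis describes rests on representing an n-qubit state as a vector of 2^n amplitudes and applying gates to it. A minimal sketch of that representation (in Python rather than Mathematica, and only illustrative):

```python
import math

def apply_gate(state, gate, target):
    """Apply a single-qubit gate (2x2 matrix) to qubit `target` of a
    state vector of length 2**n, where n is the number of qubits."""
    new = [0.0] * len(state)
    for idx, amp in enumerate(state):
        bit = (idx >> target) & 1
        for out_bit in (0, 1):
            # Index with the target bit replaced by out_bit.
            j = (idx & ~(1 << target)) | (out_bit << target)
            new[j] += gate[out_bit][bit] * amp
    return new

# Hadamard gate: maps |0> to (|0> + |1>)/sqrt(2).
H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

# Two Hadamards in a row return |0> to |0>.
state = [1.0, 0.0]                 # one qubit in state |0>
state = apply_gate(state, H, 0)
state = apply_gate(state, H, 0)
```

The exponential size of the state vector is exactly why, as the abstract notes, classical simulation cannot be expected to make quantum speedups free.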

3

Rhodes, Daniel Thomas. "Hardware accelerated computer graphics algorithms." Thesis, Nottingham Trent University, 2008. http://irep.ntu.ac.uk/id/eprint/201/.

Abstract:
The advent of shaders in the latest generations of graphics hardware, which has made consumer level graphics hardware partially programmable, makes now an ideal time to investigate new graphical techniques and algorithms as well as attempting to improve upon existing ones. This work looks at areas of current interest within the graphics community such as Texture Filtering, Bump Mapping and Depth of Field simulation. These are all areas which have enjoyed much interest over the history of computer graphics but which provide a great deal of scope for further investigation in the light of recent hardware advances. A new hardware implementation of a texture filtering technique, aimed at consumer level hardware, is presented. This novel technique utilises Fourier space image filtering to reduce aliasing. Investigation shows that the technique provides reduced levels of aliasing along with comparable levels of detail to currently popular techniques. This adds to the community's knowledge by expanding the range of techniques available, as well as increasing the number of techniques which offer the potential for easy integration with current consumer level graphics hardware along with real-time performance. Bump mapping is a long-standing and well-understood technique. Variations and extensions of it have been popular in real-time 3D computer graphics for many years. A new hardware implementation of a technique termed Super Bump Mapping (SBM) is introduced. Expanding on the work of Cant and Langensiepen [1], the SBM technique adopts the novel approach of using normal maps which supply multiple vectors per texel. This allows the retention of much more detail and overcomes some of the aliasing deficiencies of standard bump mapping caused by the standard single-vector approach and the non-linearity of the bump mapping process. A novel depth of field algorithm is proposed, which is an extension of the author's previous work [2][3][4].
The technique is aimed at consumer level hardware and attempts to raise the bar for realism by providing support for the 'see-through' effect. This effect is a vital factor in the realistic appearance of simulated depth of field and has been overlooked in real time computer graphics due to the complexities of an accurate calculation. The implementation of this new algorithm on current consumer level hardware is investigated and it is concluded that while current hardware is not yet capable enough, future iterations will provide the necessary functional and performance increases.
4

Mims, Mark McGrew. "Dynamical stability of quantum algorithms /." Full text (PDF) from UMI/Dissertation Abstracts International, 2000. http://wwwlib.umi.com/cr/utexas/fullcit?p3004342.

5

Li, Quan Ph D. Massachusetts Institute of Technology. "Algorithms and algorithmic obstacles for probabilistic combinatorial structures." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/115765.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 209-214).
We study efficient average-case (approximation) algorithms for combinatorial optimization problems, as well as explore the algorithmic obstacles for a variety of discrete optimization problems arising in the theory of random graphs, statistics and machine learning. In particular, we consider the average-case optimization for three NP-hard combinatorial optimization problems: Large Submatrix Selection, Maximum Cut (Max-Cut) of a graph and Matrix Completion. The Large Submatrix Selection problem is to find a k × k submatrix of an n × n matrix with i.i.d. standard Gaussian entries which has the largest average entry. It was shown in [13] using non-constructive methods that the largest average value of a k × k submatrix is 2(1 + o(1))√(log n / k) with high probability (w.h.p.) when k = O(log n / log log n). We show that a natural greedy algorithm called Largest Average Submatrix (LAS) produces a submatrix with average value (1 + o(1))√(2 log n / k) w.h.p. when k is constant and n grows, namely approximately √2 smaller. Then, by drawing an analogy with the problem of finding cliques in random graphs, we propose a simple greedy algorithm which produces a k × k matrix with asymptotically the same average value (1 + o(1))√(2 log n / k) w.h.p., for k = o(log n). Since the maximum clique problem is a special case of the largest submatrix problem and the greedy algorithm is the best known algorithm for finding cliques in random graphs, it is tempting to believe that beating the factor-√2 performance gap suffered by both algorithms might be very challenging. Surprisingly, we show the existence of a very simple algorithm which produces a k × k matrix with average value (1 + o_k(1) + o(1))(4/3)√(2 log n / k) for k = o((log n)^1.5), that is, with asymptotic factor 4/3 when k grows.
To get an insight into the algorithmic hardness of this problem, and motivated by methods originating in the theory of spin glasses, we conduct the so-called expected overlap analysis of matrices with average value asymptotically (1 + o(1))α√(2 log n / k) for a fixed value α ∈ [1, √2]. The overlap corresponds to the number of common rows and common columns for pairs of matrices achieving this value. We discover numerically an intriguing phase transition at α* = 5√2/(3√3) ≈ 1.3608 ∈ [4/3, √2]: when α < α* the space of overlaps is a continuous subset of [0, 1]², whereas α = α* marks the onset of discontinuity, and as a result the model exhibits the Overlap Gap Property (OGP) when α > α*, appropriately defined. We conjecture that the OGP observed for α > α* also marks the onset of algorithmic hardness: no polynomial-time algorithm exists for finding matrices with average value at least (1 + o(1))α√(2 log n / k) when α > α* and k is a growing function of n. Finding a maximum cut of a graph is a well-known canonical NP-hard problem. We consider the problem of estimating the size of a maximum cut in a random Erdős-Rényi graph on n nodes and ⌊cn⌋ edges. We establish that the size of the maximum cut normalized by the number of nodes belongs to the interval [c/2 + 0.47523√c, c/2 + 0.55909√c] w.h.p. as n increases, for all sufficiently large c. We observe that every maximum-size cut satisfies a certain local optimality property, and we compute the expected number of cuts with a given value satisfying this local optimality property. Estimating this expectation amounts to solving a rather involved multi-dimensional large deviations problem.
We solve this underlying large deviations problem asymptotically as c increases and use it to obtain an improved upper bound on the Max-Cut value. The lower bound is obtained by application of the second moment method, coupled with the same local optimality constraint, and is shown to work up to the stated lower bound value c/2 + 0.47523√c. We also obtain an improved lower bound of 1.36000n on the Max-Cut for the random cubic graph or any cubic graph with large girth, improving the previous best bound of 1.33773n. Matrix Completion is the problem of reconstructing a rank-k n × n matrix M from a sampling of its entries. We propose a new matrix completion algorithm using a novel sampling scheme based on a union of independent sparse random regular bipartite graphs. We show that under a certain incoherence assumption on M, and when both the rank and the condition number of M are bounded, w.h.p. our algorithm recovers an ε-approximation of M in terms of the Frobenius norm using O(n log²(1/ε)) samples and in linear time O(n log²(1/ε)). This provides the best known bounds on both the sample complexity and the computational cost for reconstructing (approximately) an unknown low-rank matrix. The novelty of our algorithm is two new steps of thresholding singular values and rescaling singular vectors in the application of the "vanilla" alternating minimization algorithm. The structure of sparse random regular graphs is used heavily for controlling the impact of these regularization steps.
by Quan Li.
Ph. D.
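The LAS heuristic described in this abstract alternates between choosing the best rows for the current columns and the best columns for the current rows. A simplified sketch of that alternating greedy idea (an illustration under stated assumptions, not the thesis's exact procedure):

```python
import random

def las(matrix, k, iters=20):
    """Largest Average Submatrix heuristic: alternately pick the k rows
    with the largest sums over the current columns, then the k columns
    with the largest sums over the current rows."""
    n = len(matrix)
    cols = random.sample(range(n), k)  # random initial column set
    for _ in range(iters):
        rows = sorted(range(n),
                      key=lambda i: -sum(matrix[i][j] for j in cols))[:k]
        cols = sorted(range(n),
                      key=lambda j: -sum(matrix[i][j] for i in rows))[:k]
    avg = sum(matrix[i][j] for i in rows for j in cols) / (k * k)
    return rows, cols, avg
```

Each alternation can only increase the submatrix sum, so the procedure converges to a local optimum, which is exactly why, as the abstract notes, it falls short of the non-constructive global value by a constant factor.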
6

Tran, Chan-Hung. "Fast clipping algorithms for computer graphics." Thesis, University of British Columbia, 1986. http://hdl.handle.net/2429/26336.

Abstract:
Interactive computer graphics allow a high-bandwidth man-machine communication only if the graphics system meets certain speed requirements. Clipping plays an important role in the viewing process, as well as in the zooming and panning functions; thus, it is desirable to develop a fast clipper. In this thesis, the intersection problem of a line segment against a convex polygonal object has been studied. Adaptation of the clipping algorithms for parallel processing has also been investigated. Based on the conventional parametric clipping algorithm, two families of 2-D generalized line clipping algorithms are proposed: the t-para method and the s-para method. Depending on the implementation, both run either linearly in time using a sequential tracing or logarithmically in time by applying the numerical bisection method. The intersection problem is solved after the sector locations of the endpoints of a line segment are determined by a binary search. Three-dimensional clipping with a sweep-defined object using translational sweeping or conic sweeping is also discussed. Furthermore, a mapping method is developed for rectangular clipping. The endpoints of a line segment are first mapped onto the clip boundaries by an interval-clip operation. Then a pseudo window is defined and a set of conditions is derived for trivial acceptance and rejection. The proposed algorithms are implemented and compared with the Liang-Barsky algorithm to estimate their practical efficiency. Vectorization of the 2-D and 3-D rectangular clipping algorithms on an array processor has also been attempted.
Faculty of Applied Science; Department of Electrical and Computer Engineering; Graduate.
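The Liang-Barsky algorithm that this thesis benchmarks against is itself a parametric clipper: it expresses the segment as P(t) = P0 + t(P1 - P0) and shrinks the valid parameter interval [t0, t1] edge by edge. A compact sketch of that parametric idea for an axis-aligned window (a textbook illustration, not the thesis's t-para or s-para method):

```python
def clip_parametric(p0, p1, xmin, ymin, xmax, ymax):
    """Parametric (Liang-Barsky-style) clipping of segment p0->p1 against
    an axis-aligned rectangle. Returns the clipped endpoints, or None if
    the segment lies entirely outside the window."""
    x0, y0 = p0
    x1, y1 = p1
    dx, dy = x1 - x0, y1 - y0
    t0, t1 = 0.0, 1.0
    # Each (p, q) pair tests the segment against one window edge.
    for p, q in ((-dx, x0 - xmin), (dx, xmax - x0),
                 (-dy, y0 - ymin), (dy, ymax - y0)):
        if p == 0:
            if q < 0:          # parallel to this edge and outside it
                return None
        else:
            t = q / p
            if p < 0:          # entering intersection: raise t0
                t0 = max(t0, t)
            else:              # leaving intersection: lower t1
                t1 = min(t1, t)
            if t0 > t1:        # interval emptied: trivial reject
                return None
    return ((x0 + t0 * dx, y0 + t0 * dy), (x0 + t1 * dx, y0 + t1 * dy))
```

The appeal of the parametric formulation is that rejection can happen before any intersection point is ever computed.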
7

Viloria, John A. (John Alexander) 1978. "Optimizing clustering algorithms for computer vision." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/86847.

8

Khungurn, Pramook. "Shirayanagi-Sweedler algebraic algorithm stabilization and polynomial GCD algorithms." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/41662.

Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007.
Includes bibliographical references (p. 71-72).
Shirayanagi and Sweedler [12] proved that a large class of algorithms on the reals can be modified slightly so that they also work correctly on floating-point numbers. Their main theorem states that, for each input, there exists a precision, called the minimum converging precision (MCP), at and beyond which the modified "stabilized" algorithm follows the same sequence of steps as the original "exact" algorithm. In this thesis, we study the MCP of two algorithms for finding the greatest common divisor of two univariate polynomials with real coefficients: the Euclidean algorithm, and an algorithm based on QR factorization. We show that, if the coefficients of the input polynomials are allowed to be any computable numbers, then the MCPs of the two algorithms are not computable, implying that there are no "simple" bounding functions for the MCP of all pairs of real polynomials. For the Euclidean algorithm, we derive upper bounds on the MCP for pairs of polynomials whose coefficients are members of Z, Q, Z[ξ], and Q[ξ], where ξ is a real algebraic integer. The bounds are quadratic in the degrees of the input polynomials or worse. For the QR-factorization algorithm, we derive a bound on the minimal precision at and beyond which the stabilized algorithm gives a polynomial with the same degree as that of the exact GCD, and another bound on the minimal precision at and beyond which the algorithm gives a polynomial with the same support as that of the exact GCD. The bounds are linear in (1) the degree of the polynomial and (2) the sum of the logarithms of the diagonal entries of the matrix R in the QR factorization of the Sylvester matrix of the input polynomials.
by Pramook Khungurn.
M.Eng.
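The Euclidean algorithm studied in this thesis, run naively on floating-point coefficients, already shows why stabilization matters: every zero-test on a remainder needs a tolerance, and the right tolerance depends on the working precision. A hypothetical sketch (the tolerance and monic normalization are illustrative choices, not the thesis's algorithm):

```python
def poly_divmod(num, den):
    # Polynomial long division; coefficient lists, highest degree first.
    num = list(num)
    out = []
    while len(num) >= len(den):
        factor = num[0] / den[0]
        out.append(factor)
        # Subtract factor * den aligned at the front, drop the cancelled lead.
        num = [nc - factor * dc
               for nc, dc in zip(num, den + [0.0] * (len(num) - len(den)))][1:]
    return out, num

def poly_gcd(a, b, tol=1e-9):
    """Euclidean GCD of two univariate polynomials with float coefficients.
    `tol` decides when a remainder counts as zero -- the kind of decision
    that Shirayanagi-Sweedler stabilization makes precision-dependent."""
    while b and max(abs(c) for c in b) > tol:
        _, r = poly_divmod(a, b)
        while r and abs(r[0]) <= tol:   # strip leading near-zero coefficients
            r.pop(0)
        a, b = b, r
    return [c / a[0] for c in a]        # normalize to a monic polynomial
```

With exact rationals the `tol` test would be a true zero-test; in floating point, choosing it wrongly changes the sequence of steps, which is precisely what the MCP quantifies.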
9

O'Brien, Neil. "Algorithms for scientific computing." Thesis, University of Southampton, 2012. https://eprints.soton.ac.uk/355716/.

Abstract:
There has long been interest in algorithms for simulating physical systems. We are concerned with two areas within this field: fast multipole methods and meshless methods. Since Greengard and Rokhlin's seminal paper in 1987, considerable interest has arisen in fast multipole methods for finding the energy of particle systems in two and three dimensions, and more recently in many other applications where fast matrix-vector multiplication is called for. We develop a new fast multipole method that allows the calculation of the energy of a system of N particles in O(N) time, where the particles' interactions are governed by the 2D Yukawa potential, which takes the form of a modified Bessel function Kν. We then turn our attention to meshless methods. We formulate and test a new radial basis function finite difference method for solving an eigenvalue problem on a periodic domain. We then apply meshless methods to modelling photonic crystals. After an initial background study of the field, we detail the Maxwell equations, which govern the interaction of the light with the photonic crystal, and show how photonic band gaps may arise. We present a novel meshless weak-strong form method with reduced computational cost compared to the existing meshless weak form method. Furthermore, we develop a new radial basis function finite difference method for photonic band gap calculations. Throughout the work we demonstrate the application of cutting-edge technologies such as cloud computing to the development and verification of algorithms for physical simulations.
10

Nofal, Samer. "Algorithms for argument systems." Thesis, University of Liverpool, 2013. http://livrepository.liverpool.ac.uk/12173/.

Abstract:
Argument systems are computational models that enable an artificial intelligent agent to reason via argumentation. Basically, the computations in argument systems can be viewed as search problems. In general, for a wide range of such problems existing algorithms lack five important features. Firstly, there is no comprehensive study that shows which algorithm among existing others is the most efficient in solving a particular problem. Secondly, there is no work that establishes the use of cost-effective heuristics leading to more efficient algorithms. Thirdly, mechanisms for pruning the search space are understudied, and hence, further pruning techniques might be neglected. Fourthly, diverse decision problems, for extended models of argument systems, are left without dedicated algorithms fine-tuned to the specific requirements of the respective extended model. Fifthly, some existing algorithms are presented in a high level that leaves some aspects of the computations unspecified, and therefore, implementations are rendered open to different interpretations. The work presented in this thesis tries to address all these concerns. Concisely, the presented work is centered around a widely studied view of what computationally defines an argument system. According to this view, an argument system is a pair: a set of abstract arguments and a binary relation that captures the conflicting arguments. Then, to resolve an instance of argument systems the acceptable arguments must be decided according to a set of criteria that collectively define the argumentation semantics. For different motivations there are various argumentation semantics. Equally, several proposals in the literature present extended models that stretch the basic two components of an argument system usually by incorporating more elements and/or broadening the nature of the existing components. 
This work designs algorithms that solve decision problems both in the basic form of argument systems and in some extended models. Likewise, new algorithms are developed for different argumentation semantics. We evaluate our algorithms experimentally against existing ones; the results indicate that the new algorithms are superior with respect to running time.
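The model the abstract describes, a set of abstract arguments paired with a binary attack relation, can be sketched directly. Below is a minimal brute-force Python illustration of two standard semantics notions (conflict-free and admissible sets); all names are illustrative, and this is not the thesis's own algorithm, which is designed to be far more efficient than exhaustive enumeration.

```python
from itertools import combinations

def conflict_free(args, attacks):
    """Enumerate the conflict-free subsets of an abstract argument system.

    `args` is a set of argument labels; `attacks` is a set of
    (attacker, target) pairs, i.e. the binary attack relation.
    """
    result = []
    for r in range(len(args) + 1):
        for subset in combinations(sorted(args), r):
            s = set(subset)
            # Conflict-free: no member of the set attacks another member.
            if not any((a, b) in attacks for a in s for b in s):
                result.append(s)
    return result

def admissible(args, attacks):
    """Conflict-free sets that also defend each of their members."""
    out = []
    for s in conflict_free(args, attacks):
        attackers = {a for (a, b) in attacks if b in s}
        # Admissible: every attacker of the set is counter-attacked
        # by some member of the set.
        if all(any((d, a) in attacks for d in s) for a in attackers):
            out.append(s)
    return out

# Tiny example: a attacks b, b attacks c.
args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "c")}
print(admissible(args, attacks))  # the empty set, {a}, and {a, c}
```

Even this toy version makes the abstract's point concrete: naive enumeration is exponential in the number of arguments, which is why heuristics and search-space pruning matter.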

Books on the topic "Computer algorithms":

1

Horowitz, Ellis. Computer algorithms. 2nd ed. Summit, NJ: Silicon Press, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Horowitz, Ellis. Computer algorithms. New York: Computer Science Press, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Horowitz, Ellis. Computer algorithms. New York: Computer Science Press, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Baase, Sara. Computer algorithms: Introduction to design and analysis. 2nd ed. Reading, Mass.: Addison-Wesley Pub. Co., 1991.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Baase, Sara. Computer algorithms: Introduction to design and analysis. 2nd ed. Reading, Mass.: Addison-Wesley Pub. Co., 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Baase, Sara. Computer algorithms: Introduction to design and analysis. 3rd ed. Delhi: Pearson Education, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Koren, Israel. Computer arithmetic algorithms. Englewood Cliffs, N.J.: Prentice Hall, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Computer algorithms":

1

Phan, Vinhthuy. "Algorithms, Computer." In Encyclopedia of Sciences and Religions, 71–74. Dordrecht: Springer Netherlands, 2013. http://dx.doi.org/10.1007/978-1-4020-8265-8_1476.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Zobel, Justin. "Algorithms." In Writing for Computer Science, 115–28. London: Springer London, 2004. http://dx.doi.org/10.1007/978-0-85729-422-7_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Zobel, Justin. "Algorithms." In Writing for Computer Science, 145–55. London: Springer London, 2014. http://dx.doi.org/10.1007/978-1-4471-6639-9_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Lim, Daniel. "Algorithms." In Philosophy through Computer Science, 22–29. New York: Routledge, 2023. http://dx.doi.org/10.4324/9781003271284-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Baratz, Alan, Inder Gopal, and Adrian Segall. "Fault tolerant queries in computer networks." In Distributed Algorithms, 30–40. Berlin, Heidelberg: Springer Berlin Heidelberg, 1988. http://dx.doi.org/10.1007/bfb0019792.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Roosta, Seyed H. "Computer Architecture." In Parallel Processing and Parallel Algorithms, 1–56. New York, NY: Springer New York, 2000. http://dx.doi.org/10.1007/978-1-4612-1220-1_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Mehlhorn, Kurt. "The Physarum Computer." In WALCOM: Algorithms and Computation, 8. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-19094-0_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Erciyes, K. "Algorithms." In Undergraduate Topics in Computer Science, 41–61. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-61115-6_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Sutinen, Erkki, and Matti Tedre. "ICT4D: A Computer Science Perspective." In Algorithms and Applications, 221–31. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-12476-1_16.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Srivastav, Anand, Axel Wedemeyer, Christian Schielke, and Jan Schiemann. "Algorithms for Big Data Problems in de Novo Genome Assembly." In Lecture Notes in Computer Science, 229–51. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-21534-6_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
De novo genome assembly is a fundamental task in the life sciences. It is typically a big-data problem, sometimes involving billions of reads: a big puzzle in which the genome is hidden. Memory- and time-efficient algorithms are sought, preferably ones that run even on lab desktops. In this chapter we address some algorithmic problems related to genome assembly. We first present an algorithm that heavily reduces the size of the input data with no essential compromise on assembly quality. In this and many other bioinformatics algorithms, the counting of k-mers is a bottleneck. We discuss counting in external memory. The construction of large parts of the genome, called contigs, can be modelled as the longest-path problem or the Euler-tour problem in graphs built on reads or k-mers. We present a linear-time streaming algorithm for constructing long paths in undirected graphs, and a streaming algorithm for the Euler-tour problem with optimal one-pass complexity.
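The k-mer-counting step the abstract names as the bottleneck can be sketched in a few lines; this in-memory Python version (illustrative names, not the chapter's external-memory algorithm) shows exactly the computation that blows up when the reads run into the billions.

```python
from collections import Counter

def count_kmers(reads, k):
    """Count every length-k substring (k-mer) across a collection of reads.

    Purely in-memory sketch: real assembly inputs are what force this
    counting out to external memory, as the chapter discusses.
    """
    counts = Counter()
    for read in reads:
        # Each read of length n contributes n - k + 1 overlapping k-mers.
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

reads = ["ACGTAC", "GTACGT"]
counts = count_kmers(reads, 3)
print(counts["ACG"])  # 3-mer ACG appears once in each read: 2
```

The Counter here holds one entry per distinct k-mer; with realistic k (say 21 to 31) and billions of reads, that table no longer fits in RAM, which motivates the external-memory counting the chapter covers.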

Conference papers on the topic "Computer algorithms":

1

Efimov, Aleksey Igorevich, and Dmitry Igorevich Ustukov. "Comparative Analysis of Stereo Vision Algorithms Implementation on Various Architectures." In 32nd International Conference on Computer Graphics and Vision. Keldysh Institute of Applied Mathematics, 2022. http://dx.doi.org/10.20948/graphicon-2022-484-489.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
A comparative analysis of the functionality of stereo vision algorithms on various hardware architectures has been carried out. Quantitative results of implementing the stereo vision algorithms are presented, taking into account the specifics of the underlying hardware. An original algorithm for calculating the depth map using a summed-area table is described; its complexity does not depend on the size of the search window. The article presents the content and results of implementing the stereo vision method on standard-architecture computers (including a multi-threaded implementation), a single-board computer, and an FPGA. The proposed results may be of interest in the design of vision systems for practical applications.
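The summed-area-table trick behind the window-size independence claimed above works as follows: precompute a 2-D prefix sum once, after which the sum over any rectangular window costs four table lookups regardless of window size. A minimal Python sketch (not the authors' implementation):

```python
def summed_area_table(img):
    """Build a 2-D prefix-sum table for a list-of-lists image.

    The table has a one-cell zero border, so sat[y][x] holds the sum of
    img[0:y][0:x].
    """
    h, w = len(img), len(img[0])
    sat = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            sat[y + 1][x + 1] = (img[y][x] + sat[y][x + 1]
                                 + sat[y + 1][x] - sat[y][x])
    return sat

def window_sum(sat, y0, x0, y1, x1):
    """Sum of img[y0:y1][x0:x1] in O(1): four lookups, any window size."""
    return sat[y1][x1] - sat[y0][x1] - sat[y1][x0] + sat[y0][x0]

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
sat = summed_area_table(img)
print(window_sum(sat, 0, 0, 2, 2))  # 1 + 2 + 4 + 5 = 12
```

In block-matching stereo, this lets per-window cost aggregation (e.g. sum of absolute differences) run at the same speed for a 5x5 or a 21x21 window, which is the property the abstract highlights.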
2

Spector, Lee. "Evolving quantum computer algorithms." In the 11th annual conference companion. New York, New York, USA: ACM Press, 2009. http://dx.doi.org/10.1145/1570256.1570420.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Spector, Lee. "Evolving quantum computer algorithms." In the 13th annual conference companion. New York, New York, USA: ACM Press, 2011. http://dx.doi.org/10.1145/2001858.2002128.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Freeman, William T. "Where computer vision needs help from computer science." In Proceedings of the Twenty-Second Annual ACM-SIAM Symposium on Discrete Algorithms. Philadelphia, PA: Society for Industrial and Applied Mathematics, 2011. http://dx.doi.org/10.1137/1.9781611973082.64.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Fantacci, M. E., S. Bagnasco, N. Camarlinghi, E. Fiorina, E. Lopez Torres, F. Pennanzio, C. Peroni, et al. "A Web-based Computer Aided Detection System for Automated Search of Lung Nodules in Thoracic Computed Tomography Scans." In International Conference on Bioinformatics Models, Methods and Algorithms. SCITEPRESS - Science and Technology Publications, 2015. http://dx.doi.org/10.5220/0005280102130218.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Bulavintsev, Vadim, and Dmitry Zhdanov. "Method for Adaptation of Algorithms to GPU Architecture." In 31st International Conference on Computer Graphics and Vision. Keldysh Institute of Applied Mathematics, 2021. http://dx.doi.org/10.20948/graphicon-2021-3027-930-941.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
We propose a generalized method for adapting and optimizing algorithms for efficient execution on modern graphics processing units (GPUs). The method consists of several steps. First, build a control-flow graph (CFG) of the algorithm. Next, transform the CFG into a tree of loops and merge non-parallelizable loops into parallelizable ones. Finally, map the resulting loop tree to the tree of GPU computational units, unrolling the algorithm's loops as necessary for the match. The mapping should be performed bottom-up, from the lowest levels of the GPU architecture to the highest, to minimize off-chip memory access and maximize register-file usage. The method provides the programmer with a convenient and robust mental framework and strategy for GPU code optimization. We demonstrate the method by adapting the DPLL backtracking search algorithm for the Boolean satisfiability problem (SAT) to a GPU. The resulting GPU version of DPLL outperforms the CPU version in raw tree-search performance sixfold on regular Boolean satisfiability problems and twofold on irregular ones.
7

"Computer aspects of numerical algorithms." In 2008 International Multiconference on Computer Science and Information Technology. IEEE, 2008. http://dx.doi.org/10.1109/imcsit.2008.4747248.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

"Computer aspects of numerical algorithms." In 2010 International Multiconference on Computer Science and Information Technology (IMCSIT 2010). IEEE, 2010. http://dx.doi.org/10.1109/imcsit.2010.5680064.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Sobelman, G. E. "Computer algebra and fast algorithms." In [Proceedings] ICASSP-92: 1992 IEEE International Conference on Acoustics, Speech, and Signal Processing. IEEE, 1992. http://dx.doi.org/10.1109/icassp.1992.226404.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Kiktenko, A. A., M. N. Lunkovskiy, and K. A. Nikiforov. "Confidence complexity of computer algorithms." In 2014 2nd International Conference on Emission Electronics (ICEE). IEEE, 2014. http://dx.doi.org/10.1109/emission.2014.6893971.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Computer algorithms":

1

Poggio, Tomaso, and James Little. Parallel Algorithms for Computer Vision. Fort Belvoir, VA: Defense Technical Information Center, March 1988. http://dx.doi.org/10.21236/ada203947.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Leach, Ronald J. Analysis of Blending Algorithms in Computer Graphics. Fort Belvoir, VA: Defense Technical Information Center, October 1988. http://dx.doi.org/10.21236/ada201921.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Dixon, L. C., and R. C. Price. Optimisation Algorithms for Highly Parallel Computer Architectures. Fort Belvoir, VA: Defense Technical Information Center, December 1990. http://dx.doi.org/10.21236/ada235911.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Leach, Ronald J. Analysis of Blending Algorithms in Computer Graphics. Fort Belvoir, VA: Defense Technical Information Center, November 1991. http://dx.doi.org/10.21236/ada244279.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Kupinski, Matthew A. Investigation of Genetic Algorithms for Computer-Aided Diagnosis. Fort Belvoir, VA: Defense Technical Information Center, October 2000. http://dx.doi.org/10.21236/ada393995.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Schnabel, R. Concurrent Algorithms for Numerical Computation on Hypercube Computer. Fort Belvoir, VA: Defense Technical Information Center, February 1988. http://dx.doi.org/10.21236/ada195502.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Kupinski, Matthew A. Investigation of Genetic Algorithms for Computer-Aided Diagnosis. Fort Belvoir, VA: Defense Technical Information Center, October 1999. http://dx.doi.org/10.21236/ada391457.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Ainsworth, James S., and Steven Kubala. Computer Simulation Modeling: A Method for Predicting the Utilities of Alternative Computer-Aided Threat Evaluation Algorithms. Fort Belvoir, VA: Defense Technical Information Center, September 1990. http://dx.doi.org/10.21236/ada230252.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Stroup, David W. A catalog of compartment fire model algorithms and associated computer subroutines. Gaithersburg, MD: National Bureau of Standards, 1987. http://dx.doi.org/10.6028/nbs.ir.87-3607.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Kennington, Jeffrey L. Optimization Algorithms for New Computer Architectures with Applications to Routing and Scheduling. Fort Belvoir, VA: Defense Technical Information Center, February 1992. http://dx.doi.org/10.21236/ada251959.

Full text
APA, Harvard, Vancouver, ISO, and other styles

To the bibliography