Selected scientific literature on the topic "Computer algorithms"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the list of current articles, books, theses, conference proceedings, and other relevant scientific sources on the topic "Computer algorithms".

Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if it is present in the metadata.

Journal articles on the topic "Computer algorithms"

1

Ataeva, Gulsina Isroilovna, and Lola Dzhalolovna Yodgorova. "METHODS AND ALGORITHMS OF COMPUTER GRAPHICS". Scientific Reports of Bukhara State University 4, no. 1 (February 26, 2020): 43–47. http://dx.doi.org/10.52297/2181-1466/2020/4/1/3.

Full text of the source
Abstract:
The article considers methods and algorithms of computer graphics: the transformation of graphic objects by means of translation, scaling, and rotation operations, and the types of geometric models. The methods of computer graphics covered include converting graphic objects, representing (scanning) lines in raster form, selecting a window, removing hidden lines, projecting, and painting images.
ABNT, Harvard, Vancouver, APA, etc. styles
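The transformations this abstract mentions (translation, scaling, rotation) are conventionally expressed as 3×3 homogeneous matrices. A minimal Python sketch of that standard formulation (illustrative only, not code from the article):

```python
import math

def translate(tx, ty):
    """Homogeneous 3x3 matrix for translation by (tx, ty)."""
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def scale(sx, sy):
    """Homogeneous 3x3 matrix for scaling by (sx, sy)."""
    return [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]

def rotate(theta):
    """Homogeneous 3x3 matrix for rotation by theta radians about the origin."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def apply(m, point):
    """Apply a 3x3 transform to a 2D point via homogeneous coordinates."""
    x, y = point
    v = (x, y, 1)
    rx, ry, rw = (sum(m[i][j] * v[j] for j in range(3)) for i in range(3))
    return (rx / rw, ry / rw)
```

For example, `apply(translate(2, 3), (1, 1))` yields `(3.0, 4.0)`; composite transforms are obtained by multiplying the matrices.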
2

Xu, Zheng Guang, Chen Chen, and Xu Hong Liu. "An Efficient View-Point Invariant Detector and Descriptor". Advanced Materials Research 659 (January 2013): 143–48. http://dx.doi.org/10.4028/www.scientific.net/amr.659.143.

Full text of the source
Abstract:
Many computer vision applications need keypoint correspondence between images taken under different viewing conditions. Generally speaking, traditional algorithms target applications with either good invariance to affine transformation or speed of computation. Nowadays, the widespread use of computer vision algorithms on handheld devices such as mobile phones and on embedded devices with low memory and computation capability has set the goal of making descriptors faster to compute and more compact while remaining robust to affine transformation and noise. To best address the whole process, this paper covers keypoint detection, description and matching. Binary descriptors are computed by comparing the intensities of two sampling points in image patches, and they are matched by Hamming distance using an SSE 4.2 optimized popcount. In experimental results, we show that our algorithm is fast to compute with lower memory usage and invariant to view-point change, blur change, brightness change, and JPEG compression.
ABNT, Harvard, Vancouver, APA, etc. styles
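The matching step this abstract describes, Hamming distance over binary descriptors computed with a population count, can be sketched in plain Python. The paper's SSE 4.2 popcount is a hardware intrinsic; `bin(...).count("1")` below is a portable stand-in:

```python
def hamming(a: int, b: int) -> int:
    """Hamming distance between two binary descriptors packed into integers:
    XOR the bit strings, then count the set bits (population count)."""
    return bin(a ^ b).count("1")

def match(queries, references):
    """Brute-force nearest-neighbour matching: for each query descriptor,
    return the index of the reference descriptor at minimum Hamming distance."""
    return [min(range(len(references)), key=lambda i: hamming(q, references[i]))
            for q in queries]
```

Real systems replace the brute-force loop with approximate nearest-neighbour indexes, but the distance computation itself is exactly this XOR-plus-popcount.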
3

Cropper, Andrew. "The Automatic Computer Scientist". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (June 26, 2023): 15434. http://dx.doi.org/10.1609/aaai.v37i13.26801.

Full text of the source
Abstract:
Algorithms are ubiquitous: they track our sleep, help us find cheap flights, and even help us see black holes. However, designing novel algorithms is extremely difficult, and we do not have efficient algorithms for many fundamental problems. The goal of my research is to accelerate algorithm discovery by building an automatic computer scientist. To work towards this goal, my research focuses on inductive logic programming, a form of machine learning in which my collaborators and I have demonstrated major advances in automated algorithm discovery over the past five years. In this talk and paper, I survey these advances.
ABNT, Harvard, Vancouver, APA, etc. styles
4

Moosakhah, Fatemeh, and Amir Massoud Bidgoli. "Congestion Control in Computer Networks with a New Hybrid Intelligent Algorithm". INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 13, no. 8 (August 23, 2014): 4688–706. http://dx.doi.org/10.24297/ijct.v13i8.7068.

Full text of the source
Abstract:
With the invention of computer networks, transferring data from one computer to another became possible. But as the number of computers exchanging data grew while the bandwidth of the communication channel they share remained limited, a phenomenon called congestion arose, in which some data packets are dropped and never reach their destination. Different algorithms have been proposed for overcoming congestion. These are divided into two general groups: 1) flow-based algorithms and 2) class-based algorithms. In the present study, using a class-based algorithm whose control is optimized by fuzzy logic and the new Cuckoo algorithm, we increased the number of packets that reach their destination and considerably reduced the number of packets dropped during congestion. Simulation results indicate a great improvement in efficiency.
ABNT, Harvard, Vancouver, APA, etc. styles
5

Pelter, Michele M., and Mary G. Carey. "ECG Computer Algorithms". American Journal of Critical Care 17, no. 6 (November 1, 2008): 581–82. http://dx.doi.org/10.4037/ajcc2008.17.6.581.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
6

Kaltofen, E. "Computer Algebra Algorithms". Annual Review of Computer Science 2, no. 1 (June 1987): 91–118. http://dx.doi.org/10.1146/annurev.cs.02.060187.000515.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
7

Rakhimov, Bakhtiyar Saidovich, Feroza Bakhtiyarovna Rakhimova, Sabokhat Kabulovna Sobirova, Furkat Odilbekovich Kuryazov, and Dilnoza Boltabaevna Abdirimova. "Review And Analysis Of Computer Vision Algorithms". American Journal of Applied Sciences 03, no. 05 (May 31, 2021): 245–50. http://dx.doi.org/10.37547/tajas/volume03issue05-39.

Full text of the source
Abstract:
Computer vision as a scientific discipline refers to the theories and technologies for creating artificial systems that obtain information from images. Although this discipline is quite young, its results have penetrated almost all areas of life. Computer vision is closely related to other practical fields such as image processing, whose input is two-dimensional images obtained from a camera or created artificially. This form of image transformation is aimed at noise suppression, filtering, color correction and image analysis, which allows specific information to be obtained directly from the processed image. This information may include searching for objects, keypoints, segments, and annexes.
ABNT, Harvard, Vancouver, APA, etc. styles
8

Schlingemann, D. "Cluster states, algorithms and graphs". Quantum Information and Computation 4, no. 4 (July 2004): 287–324. http://dx.doi.org/10.26421/qic4.4-4.

Full text of the source
Abstract:
The present paper is concerned with the concept of the one-way quantum computer, beyond binary systems, and its relation to the concept of stabilizer quantum codes. This relation is exploited to analyze a particular class of quantum algorithms, called graph algorithms, which correspond in the binary case to the Clifford group part of a network and which can efficiently be implemented on a one-way quantum computer. These algorithms can be "completely solved" in the sense that the manipulation of quantum states in each step can be computed explicitly. Graph algorithms are precisely those which implement encoding schemes for graph codes. Starting from a given initial graph, which represents the underlying resource of multipartite entanglement, each step of the algorithm is related to an explicit transformation on the graph.
ABNT, Harvard, Vancouver, APA, etc. styles
9

Handayani, Dwipa, and Abrar Hiswara. "KAMUS ISTILAH ILMU KOMPUTER DENGAN ALGORITMA BOYER MOORE BERBASIS WEB". Jurnal Informatika 19, no. 2 (December 26, 2019): 90–97. http://dx.doi.org/10.30873/ji.v19i2.1519.

Full text of the source
Abstract:
A dictionary is a reference book containing words and phrases, usually arranged in alphabetical order, together with explanations of their meaning, usage and translation; it helps readers recognize new terms. The field of computer science has specific terms related to computers, so a dictionary of computer terms is needed; the currently existing dictionaries are still conventional, which makes their use ineffective and inefficient. The application was designed and built using an algorithm, a systematically arranged sequence of logical steps for solving a problem. Search algorithms are growing day by day. The Boyer-Moore algorithm is considered one of the best-performing search algorithms; it matches strings from right to left. With this web-based dictionary, users are expected to be able to get information quickly, without limitations of space and time. Keywords: Boyer Moore's Algorithm, Computer Science, Glossary of Terms, Web.
ABNT, Harvard, Vancouver, APA, etc. styles
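The right-to-left matching the abstract attributes to Boyer-Moore can be illustrated with the classic bad-character rule, a simplified sketch of the general algorithm rather than the paper's implementation:

```python
def boyer_moore_search(text, pattern):
    """Boyer-Moore search with the bad-character rule: compare the pattern
    against the text right to left; on a mismatch, shift the pattern so the
    rightmost occurrence of the mismatched text character lines up."""
    if not pattern:
        return 0
    last = {ch: i for i, ch in enumerate(pattern)}  # rightmost index of each char
    m, n = len(pattern), len(text)
    s = 0  # current shift of the pattern over the text
    while s <= n - m:
        j = m - 1
        while j >= 0 and pattern[j] == text[s + j]:
            j -= 1
        if j < 0:
            return s  # full match at shift s
        # shift past the mismatch; max(..., 1) guarantees forward progress
        s += max(1, j - last.get(text[s + j], -1))
    return -1
```

For a term-dictionary lookup, `boyer_moore_search("glossary of computer terms", "computer")` returns the offset of the match (12 here); the full algorithm adds the good-suffix rule for stronger shifts.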
10

Bunin, Y. V., E. V. Vakulik, R. N. Mikhaylusov, V. V. Negoduyko, K. S. Smelyakov, and O. V. Yasinsky. "Estimation of lung standing size with the application of computer vision algorithms". Experimental and Clinical Medicine 89, no. 4 (December 17, 2020): 87–94. http://dx.doi.org/10.35339/ekm.2020.89.04.13.

Full text of the source
Abstract:
Evaluation of spiral computed tomography data is important for improving the diagnosis of gunshot wounds and developing further surgical tactics. The aim of the work is to improve the diagnosis of foreign bodies in the lungs by using computer vision algorithms. Image gradation correction, interval segmentation, threshold segmentation, a three-dimensional wave method, and the principal components method are used as computer vision tools. The computer vision algorithm makes it possible to determine the size of a foreign body in the lung with an error of 6.8 to 7.2%, which is important for in-depth diagnosis and the development of further surgical tactics. Computer vision techniques increase the detail of foreign bodies in the lungs and have significant prospects for the use of spiral computed tomography for in-depth data processing. Keywords: computer vision, spiral computed tomography, lungs, foreign bodies.
ABNT, Harvard, Vancouver, APA, etc. styles
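The interval/threshold segmentation step named in the abstract can be sketched generically: keep only pixels whose intensity falls inside a band, then measure the marked region. The intensity bounds below are illustrative placeholders, not values from the study:

```python
def interval_segment(image, lo, hi):
    """Interval segmentation: binary mask of pixels with intensity in [lo, hi]."""
    return [[1 if lo <= v <= hi else 0 for v in row] for row in image]

def region_size(mask, pixel_area=1.0):
    """Estimate the area of the segmented region from the binary mask,
    given the physical area covered by one pixel."""
    return sum(v for row in mask for v in row) * pixel_area
```

On CT data the band would be chosen in Hounsfield units (metal fragments appear far brighter than lung tissue); a full pipeline would follow this with connected-component analysis rather than a raw pixel count.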

Theses / dissertations on the topic "Computer algorithms"

1

Mosca, Michele. "Quantum computer algorithms". Thesis, University of Oxford, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.301184.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
2

Nyman, Peter. "Representation of Quantum Algorithms with Symbolic Language and Simulation on Classical Computer". Licentiate thesis, Växjö University, School of Mathematics and Systems Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-2329.

Full text of the source
Abstract:


Quantum computing is an extremely promising project combining theoretical and experimental quantum physics, mathematics, quantum information theory and computer science. At the first stage of development of quantum computing, the main attention was paid to creating a few algorithms which might have applications in the future, clarifying fundamental questions and developing experimental technologies for toy quantum computers operating with a few quantum bits. At that time, expectations of quick progress in the quantum computing project dominated in the quantum community. However, it seems that such high expectations were not totally justified. Numerous fundamental and technological problems, such as the decoherence of quantum bits and the instability of quantum structures even with a small number of registers, led to doubts about a quick development of really working quantum computers. Although it cannot be denied that great progress has been made in quantum technologies, it is clear that there is still a huge gap between the creation of toy quantum computers with 10-15 quantum registers and, e.g., satisfying the technical conditions of the project of 100 quantum registers announced a few years ago in the USA. It is also evident that difficulties increase nonlinearly with an increasing number of registers. Therefore the simulation of quantum computations on classical computers became an important part of the quantum computing project. Of course, it cannot be expected that quantum algorithms would help to solve NP problems in polynomial time on classical computers. However, this is not at all the aim of classical simulation. Classical simulation of quantum computations will cover part of the gap between the theoretical mathematical formulation of quantum mechanics and the realization of quantum computers. 
One of the most important problems in "quantum computer science" is the development of new symbolic languages for quantum computing and the adaptation of existing symbolic languages for classical computing to quantum algorithms. The present thesis is devoted to the adaptation of the Mathematica symbolic language to known quantum algorithms and the corresponding simulation on the classical computer. Concretely, we represent in the Mathematica symbolic language Simon's algorithm, the Deutsch-Jozsa algorithm, Grover's algorithm, Shor's algorithm and quantum error-correcting codes. We show that the same framework can be used for all these algorithms. This framework contains the characteristic property of the symbolic language representation of quantum computing, and it is a straightforward matter to include this framework in future algorithms.

ABNT, Harvard, Vancouver, APA, etc. styles
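As an illustration of simulating a known quantum algorithm on a classical computer (the thesis uses Mathematica; Python serves as a stand-in here), a small state-vector simulation of the Deutsch-Jozsa algorithm with a phase oracle:

```python
import math

def hadamard_all(state):
    """Apply a Hadamard gate to every qubit of an n-qubit state vector."""
    n = int(math.log2(len(state)))
    for q in range(n):
        step = 1 << q
        new = state[:]
        for i in range(len(state)):
            if i & step:  # this basis state has qubit q set to |1>
                new[i] = (state[i ^ step] - state[i]) / math.sqrt(2)
            else:
                new[i] = (state[i] + state[i ^ step]) / math.sqrt(2)
        state = new
    return state

def deutsch_jozsa(f, n):
    """Return the probability of measuring |0...0>: 1 if f is constant,
    0 if f is balanced (the promise of the Deutsch-Jozsa problem)."""
    state = [0.0] * (1 << n)
    state[0] = 1.0                       # start in |0...0>
    state = hadamard_all(state)          # uniform superposition
    state = [(-1) ** f(x) * amp for x, amp in enumerate(state)]  # phase oracle
    state = hadamard_all(state)
    return abs(state[0]) ** 2
```

A constant oracle such as `lambda x: 0` yields probability 1, while a balanced one such as `lambda x: x & 1` yields 0, distinguishing the two cases with a single oracle call.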
3

Rhodes, Daniel Thomas. "Hardware accelerated computer graphics algorithms". Thesis, Nottingham Trent University, 2008. http://irep.ntu.ac.uk/id/eprint/201/.

Full text of the source
Abstract:
The advent of shaders in the latest generations of graphics hardware, which has made consumer-level graphics hardware partially programmable, makes now an ideal time to investigate new graphical techniques and algorithms, as well as to attempt to improve existing ones. This work looks at areas of current interest within the graphics community such as Texture Filtering, Bump Mapping and Depth of Field simulation. These are all areas which have enjoyed much interest over the history of computer graphics but which provide a great deal of scope for further investigation in the light of recent hardware advances. A new hardware implementation of a texture filtering technique, aimed at consumer-level hardware, is presented. This novel technique utilises Fourier space image filtering to reduce aliasing. Investigation shows that the technique provides reduced levels of aliasing along with comparable levels of detail to currently popular techniques. This adds to the community's knowledge by expanding the range of techniques available, as well as increasing the number of techniques which offer the potential for easy integration with current consumer-level graphics hardware along with real-time performance. Bump mapping is a long-standing and well understood technique. Variations and extensions of it have been popular in real-time 3D computer graphics for many years. A new hardware implementation of a technique termed Super Bump Mapping (SBM) is introduced. Expanding on the work of Cant and Langensiepen [1], the SBM technique adopts the novel approach of using normal maps which supply multiple vectors per texel. This allows the retention of much more detail and overcomes some of the aliasing deficiencies of standard bump mapping caused by the standard single-vector approach and the non-linearity of the bump mapping process. A novel depth of field algorithm is proposed, which is an extension of the author's previous work [2][3][4]. 
The technique is aimed at consumer-level hardware and attempts to raise the bar for realism by providing support for the 'see-through' effect. This effect is a vital factor in the realistic appearance of simulated depth of field and has been overlooked in real-time computer graphics due to the complexities of an accurate calculation. The implementation of this new algorithm on current consumer-level hardware is investigated, and it is concluded that while current hardware is not yet capable enough, future iterations will provide the necessary functional and performance increases.
ABNT, Harvard, Vancouver, APA, etc. styles
4

Mims, Mark McGrew. "Dynamical stability of quantum algorithms /". Full text (PDF) from UMI/Dissertation Abstracts International, 2000. http://wwwlib.umi.com/cr/utexas/fullcit?p3004342.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
5

Li, Quan Ph D. Massachusetts Institute of Technology. "Algorithms and algorithmic obstacles for probabilistic combinatorial structures". Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/115765.

Full text of the source
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 209-214).
We study efficient average-case (approximation) algorithms for combinatorial optimization problems, as well as explore the algorithmic obstacles for a variety of discrete optimization problems arising in the theory of random graphs, statistics and machine learning. In particular, we consider the average-case optimization for three NP-hard combinatorial optimization problems: Large Submatrix Selection, Maximum Cut (Max-Cut) of a graph and Matrix Completion. The Large Submatrix Selection problem is to find a k x k submatrix of an n x n matrix with i.i.d. standard Gaussian entries which has the largest average entry. It was shown in [13] using non-constructive methods that the largest average value of a k x k submatrix is 2(1 + o(1))√(log n / k) with high probability (w.h.p.) when k = O(log n / log log n). We show that a natural greedy algorithm called Largest Average Submatrix (LAS) produces a submatrix with average value (1 + o(1))√(2 log n / k) w.h.p. when k is constant and n grows, namely approximately √2 smaller. Then, by drawing an analogy with the problem of finding cliques in random graphs, we propose a simple greedy algorithm which produces a k x k matrix with asymptotically the same average value (1 + o(1))√(2 log n / k) w.h.p., for k = o(log n). Since the maximum clique problem is a special case of the largest submatrix problem and the greedy algorithm is the best known algorithm for finding cliques in random graphs, it is tempting to believe that beating the factor-√2 performance gap suffered by both algorithms might be very challenging. Surprisingly, we show the existence of a very simple algorithm which produces a k x k matrix with average value (1 + o_k(1) + o(1))(4/3)√(2 log n / k) for k = o((log n)^1.5), that is, with asymptotic factor 4/3 when k grows. 
To get an insight into the algorithmic hardness of this problem, and motivated by methods originating in the theory of spin glasses, we conduct the so-called expected overlap analysis of matrices with average value asymptotically (1 + o(1))α√(2 log n / k) for a fixed value α ∈ [1, √2]. The overlap corresponds to the number of common rows and common columns for pairs of matrices achieving this value. We discover numerically an intriguing phase transition at α* := 5√2/(3√3) ≈ 1.3608 ∈ [4/3, √2]: when α < α* the space of overlaps is a continuous subset of [0, 1]², whereas α = α* marks the onset of discontinuity, and as a result the model exhibits the Overlap Gap Property (OGP) when α > α*, appropriately defined. We conjecture that the OGP observed for α > α* also marks the onset of algorithmic hardness: no polynomial-time algorithm exists for finding matrices with average value at least (1 + o(1))α√(2 log n / k) when α > α* and k is a growing function of n. Finding a maximum cut of a graph is a well-known canonical NP-hard problem. We consider the problem of estimating the size of a maximum cut in a random Erdős-Rényi graph on n nodes and ⌊cn⌋ edges. We establish that the size of the maximum cut normalized by the number of nodes belongs to the interval [c/2 + 0.47523√c, c/2 + 0.55909√c] w.h.p. as n increases, for all sufficiently large c. We observe that every maximum size cut satisfies a certain local optimality property, and we compute the expected number of cuts with a given value satisfying this local optimality property. Estimating this expectation amounts to solving a rather involved multi-dimensional large deviations problem. 
We solve this underlying large deviation problem asymptotically as c increases and use it to obtain an improved upper bound on the Max-Cut value. The lower bound is obtained by application of the second moment method, coupled with the same local optimality constraint, and is shown to work up to the stated lower bound value c/2 + 0.47523√c. We also obtain an improved lower bound of 1.36000n on the Max-Cut for the random cubic graph or any cubic graph with large girth, improving the previous best bound of 1.33773n. Matrix Completion is the problem of reconstructing a rank-k n x n matrix M from a sampling of its entries. We propose a new matrix completion algorithm using a novel sampling scheme based on a union of independent sparse random regular bipartite graphs. We show that under a certain incoherence assumption on M, and for the case when both the rank and the condition number of M are bounded, w.h.p. our algorithm recovers an ε-approximation of M in terms of the Frobenius norm using O(n log²(1/ε)) samples and in linear time O(n log²(1/ε)). This provides the best known bounds on both the sample complexity and computational cost for reconstructing (approximately) an unknown low-rank matrix. The novelty of our algorithm is two new steps of thresholding singular values and rescaling singular vectors in the application of the "vanilla" alternating minimization algorithm. The structure of sparse random regular graphs is used heavily for controlling the impact of these regularization steps.
by Quan Li.
Ph. D.
ABNT, Harvard, Vancouver, APA, etc. styles
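The alternating greedy procedure behind LAS, repeatedly choosing the best k rows for the current columns and then the best k columns for those rows until a fixed point, can be sketched as follows (a simplified illustration of the idea, not the thesis's code):

```python
def las(matrix, k, iters=50):
    """Largest Average Submatrix, greedy version: alternate between picking
    the k rows with the largest sums over the current columns and the k
    columns with the largest sums over the current rows."""
    n = len(matrix)
    cols = list(range(k))  # arbitrary starting column set
    for _ in range(iters):
        row_scores = [(sum(matrix[i][j] for j in cols), i) for i in range(n)]
        rows = [i for _, i in sorted(row_scores, reverse=True)[:k]]
        col_scores = [(sum(matrix[i][j] for i in rows), j) for j in range(n)]
        new_cols = [j for _, j in sorted(col_scores, reverse=True)[:k]]
        if sorted(new_cols) == sorted(cols):
            break  # fixed point reached: rows and columns mutually optimal
        cols = new_cols
    avg = sum(matrix[i][j] for i in rows for j in cols) / (k * k)
    return rows, cols, avg
```

On a matrix with a planted high-valued k x k block, the alternation locks onto the block; on Gaussian noise it converges to a locally optimal submatrix whose average value the thesis analyzes.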
6

Tran, Chan-Hung. "Fast clipping algorithms for computer graphics". Thesis, University of British Columbia, 1986. http://hdl.handle.net/2429/26336.

Full text of the source
Abstract:
Interactive computer graphics achieve a high-bandwidth man-machine communication only if the graphics system meets certain speed requirements. Clipping plays an important role in the viewing process, as well as in the zooming and panning functions; thus, it is desirable to develop a fast clipper. In this thesis, the intersection problem of a line segment against a convex polygonal object has been studied. Adaptation of the clipping algorithms for parallel processing has also been investigated. Based on the conventional parametric clipping algorithm, two families of 2-D generalized line clipping algorithms are proposed: the t-para method and the s-para method. Depending on the implementation, both run either linearly in time using a sequential tracing or logarithmically in time by applying the numerical bisection method. The intersection problem is solved after the sector locations of the endpoints of a line segment are determined by a binary search. Three-dimensional clipping with a sweep-defined object using translational sweeping or conic sweeping is also discussed. Furthermore, a mapping method is developed for rectangular clipping. The endpoints of a line segment are first mapped onto the clip boundaries by an interval-clip operation. Then a pseudo window is defined and a set of conditions is derived for trivial acceptance and rejection. The proposed algorithms are implemented and compared with the Liang-Barsky algorithm to estimate their practical efficiency. Vectorization of the 2-D and 3-D rectangular clipping algorithms on an array processor has also been attempted.
Faculty of Applied Science
Department of Electrical and Computer Engineering
Graduate
ABNT, Harvard, Vancouver, APA, etc. styles
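For context, the Liang-Barsky parametric clipper that the thesis benchmarks against can be sketched as follows (this is the standard published algorithm, not the thesis's t-para/s-para methods): the segment is written as P(t) = P0 + t(P1 - P0), and the parameter interval [0, 1] is narrowed by each window boundary.

```python
def liang_barsky(x0, y0, x1, y1, xmin, ymin, xmax, ymax):
    """Clip segment (x0,y0)-(x1,y1) against an axis-aligned window.
    Returns the clipped endpoints, or None if the segment lies outside."""
    dx, dy = x1 - x0, y1 - y0
    t0, t1 = 0.0, 1.0  # parameter interval of the visible portion
    for p, q in ((-dx, x0 - xmin), (dx, xmax - x0),
                 (-dy, y0 - ymin), (dy, ymax - y0)):
        if p == 0:
            if q < 0:
                return None  # parallel to this boundary and outside it
        else:
            t = q / p
            if p < 0:          # entering this boundary
                if t > t1:
                    return None
                t0 = max(t0, t)
            else:              # leaving this boundary
                if t < t0:
                    return None
                t1 = min(t1, t)
    return (x0 + t0 * dx, y0 + t0 * dy, x0 + t1 * dx, y0 + t1 * dy)
```

Trivially rejected segments exit early without computing any intersection point, which is the source of the algorithm's speed over Cohen-Sutherland-style repeated subdivision.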
7

Viloria, John A. (John Alexander) 1978. "Optimizing clustering algorithms for computer vision". Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/86847.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
8

Khungurn, Pramook. "Shirayanagi-Sweedler algebraic algorithm stabilization and polynomial GCD algorithms". Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/41662.

Full text of the source
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007.
Includes bibliographical references (p. 71-72).
Shirayanagi and Sweedler [12] proved that a large class of algorithms on the reals can be modified slightly so that they also work correctly on floating-point numbers. Their main theorem states that, for each input, there exists a precision, called the minimum converging precision (MCP), at and beyond which the modified "stabilized" algorithm follows the same sequence of steps as the original "exact" algorithm. In this thesis, we study the MCP of two algorithms for finding the greatest common divisor of two univariate polynomials with real coefficients: the Euclidean algorithm, and an algorithm based on QR-factorization. We show that, if the coefficients of the input polynomials are allowed to be any computable numbers, then the MCPs of the two algorithms are not computable, implying that there are no "simple" bounding functions for the MCP of all pairs of real polynomials. For the Euclidean algorithm, we derive upper bounds on the MCP for pairs of polynomials whose coefficients are members of Z, Q, Z[θ], and Q[θ], where θ is a real algebraic integer. The bounds are quadratic in the degrees of the input polynomials or worse. For the QR-factorization algorithm, we derive a bound on the minimal precision at and beyond which the stabilized algorithm gives a polynomial with the same degree as that of the exact GCD, and another bound on the minimal precision at and beyond which the algorithm gives a polynomial with the same support as that of the exact GCD. The bounds are linear in (1) the degree of the polynomial and (2) the sum of the logarithms of the diagonal entries of the matrix R in the QR factorization of the Sylvester matrix of the input polynomials.
by Pramook Khungurn.
M.Eng.
ABNT, Harvard, Vancouver, APA, etc. styles
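The exact Euclidean algorithm for polynomial GCD that the thesis stabilizes can be sketched over rational coefficients (exact arithmetic only; the floating-point stabilization and the MCP analysis are not reproduced here). Polynomials are coefficient lists, highest degree first, with nonzero leading coefficients assumed:

```python
from fractions import Fraction

def poly_divmod(a, b):
    """Divide polynomial a by b; returns (quotient, remainder) coefficients."""
    a = a[:]
    q = []
    while len(a) >= len(b):
        coef = a[0] / b[0]          # cancel the leading term
        q.append(coef)
        for i, bc in enumerate(b):
            a[i] -= coef * bc
        a.pop(0)                    # leading coefficient is now zero
    return q, a

def poly_gcd(a, b):
    """Euclidean algorithm: repeatedly replace (a, b) by (b, a mod b)."""
    while any(b):
        _, r = poly_divmod(a, b)
        while r and r[0] == 0:
            r.pop(0)                # strip leading zeros of the remainder
        a, b = b, r or [Fraction(0)]
    return [c / a[0] for c in a]    # normalize the GCD to be monic
```

For example, gcd(x² + x - 2, x² - 4x + 3) is x - 1; with Fraction coefficients every intermediate remainder is exact, which is precisely what the stabilized floating-point variant has to emulate at sufficient precision.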
9

O'Brien, Neil. "Algorithms for scientific computing". Thesis, University of Southampton, 2012. https://eprints.soton.ac.uk/355716/.

Full text of the source
Abstract:
There has long been interest in algorithms for simulating physical systems. We are concerned with two areas within this field: fast multipole methods and meshless methods. Since Greengard and Rokhlin's seminal paper in 1987, considerable interest has arisen in fast multipole methods for finding the energy of particle systems in two and three dimensions, and more recently in many other applications where fast matrix-vector multiplication is called for. We develop a new fast multipole method that allows the calculation of the energy of a system of N particles in O(N) time, where the particles' interactions are governed by the 2D Yukawa potential, which takes the form of a modified Bessel function Kv. We then turn our attention to meshless methods. We formulate and test a new radial basis function finite difference method for solving an eigenvalue problem on a periodic domain. We then apply meshless methods to modelling photonic crystals. After an initial background study of the field, we detail the Maxwell equations, which govern the interaction of the light with the photonic crystal, and show how photonic band gaps may be given rise to. We present a novel meshless weak-strong form method with reduced computational cost compared to the existing meshless weak form method. Furthermore, we develop a new radial basis function finite difference method for photonic band gap calculations. Throughout the work we demonstrate the application of cutting-edge technologies such as cloud computing to the development and verification of algorithms for physical simulations.
ABNT, Harvard, Vancouver, APA, etc. styles
10

Nofal, Samer. "Algorithms for argument systems". Thesis, University of Liverpool, 2013. http://livrepository.liverpool.ac.uk/12173/.

Full text of the source
Abstract:
Argument systems are computational models that enable an artificially intelligent agent to reason via argumentation. Basically, the computations in argument systems can be viewed as search problems. In general, for a wide range of such problems, existing algorithms lack five important features. Firstly, there is no comprehensive study that shows which of the existing algorithms is the most efficient at solving a particular problem. Secondly, there is no work that establishes the use of cost-effective heuristics leading to more efficient algorithms. Thirdly, mechanisms for pruning the search space are understudied, and hence further pruning techniques might be neglected. Fourthly, diverse decision problems for extended models of argument systems are left without dedicated algorithms fine-tuned to the specific requirements of the respective extended model. Fifthly, some existing algorithms are presented at a high level that leaves some aspects of the computations unspecified, and therefore implementations are open to different interpretations. The work presented in this thesis addresses all of these concerns. Concisely, it is centered on a widely studied view of what computationally defines an argument system. According to this view, an argument system is a pair: a set of abstract arguments and a binary relation that captures the conflicts between arguments. To resolve an instance of an argument system, the acceptable arguments must be decided according to a set of criteria that collectively define the argumentation semantics. Various argumentation semantics exist, each with its own motivation. Equally, several proposals in the literature present extended models that stretch the two basic components of an argument system, usually by incorporating more elements and/or broadening the nature of the existing components.
This work designs algorithms that solve decision problems in the basic form of argument systems as well as in some extended models. Likewise, new algorithms are developed that deal with different argumentation semantics. We evaluate our algorithms experimentally against existing ones; the results indicate that the new algorithms are superior with respect to running time.
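The basic model described in the abstract (a set of abstract arguments plus a binary attack relation) is compact enough to illustrate directly. As a hedged sketch, not one of the thesis's algorithms, the grounded extension under one standard argumentation semantics can be computed by iterating the characteristic function to a least fixpoint:

```python
def grounded_extension(arguments, attacks):
    """Compute the grounded extension of an abstract argumentation
    framework by least-fixpoint iteration.  `arguments` is a set of
    argument names; `attacks` is a set of (attacker, target) pairs."""
    attackers_of = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    accepted = set()
    changed = True
    while changed:
        changed = False
        # Arguments defeated by the currently accepted set.
        defeated = {y for (x, y) in attacks if x in accepted}
        for a in arguments:
            # `a` is acceptable if all of its attackers are defeated
            # (vacuously true for unattacked arguments).
            if a not in accepted and attackers_of[a] <= defeated:
                accepted.add(a)
                changed = True
    return accepted
```

For `arguments = {"a", "b", "c"}` with `attacks = {("a", "b"), ("b", "c")}`, the fixpoint accepts `a` (unattacked) and then `c` (defended by `a`), yielding `{"a", "c"}`.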
ABNT, Harvard, Vancouver, APA, etc. styles

Books on the topic "Computer algorithms"

1

Horowitz, Ellis. Computer algorithms. 2nd ed. Summit, NJ: Silicon Press, 2008.

Find the full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
2

Horowitz, Ellis. Computer algorithms. New York: Computer Science Press, 1997.

Find the full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
3

Horowitz, Ellis. Computer algorithms. 2nd ed. Summit, NJ: Silicon Press, 2008.

Find the full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
4

Horowitz, Ellis. Computer algorithms. 2nd ed. Summit, NJ: Silicon Press, 2008.

Find the full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
5

Horowitz, Ellis. Computer algorithms. 2nd ed. Summit, NJ: Silicon Press, 2008.

Find the full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
6

Horowitz, Ellis. Computer algorithms. New York: Computer Science Press, 1998.

Find the full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
7

Baase, Sara. Computer algorithms: Introduction to design and analysis. 2nd ed. Reading, Mass: Addison-Wesley Pub. Co., 1991.

Find the full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
8

Baase, Sara. Computer algorithms: Introduction to design and analysis. 2nd ed. Reading, Mass: Addison-Wesley Pub. Co., 1988.

Find the full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
9

Salander, Elisabeth C. Computer search algorithms. Hauppauge, N.Y: Nova Science Publishers, 2010.

Find the full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
10

Horowitz, Ellis. Computer algorithms/C++. 2nd ed. Summit, NJ: Silicon Press, 2008.

Find the full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles

Book chapters on the topic "Computer algorithms"

1

Phan, Vinhthuy. "Algorithms, Computer". In Encyclopedia of Sciences and Religions, 71–74. Dordrecht: Springer Netherlands, 2013. http://dx.doi.org/10.1007/978-1-4020-8265-8_1476.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
2

Zobel, Justin. "Algorithms". In Writing for Computer Science, 115–28. London: Springer London, 2004. http://dx.doi.org/10.1007/978-0-85729-422-7_7.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
3

Zobel, Justin. "Algorithms". In Writing for Computer Science, 145–55. London: Springer London, 2014. http://dx.doi.org/10.1007/978-1-4471-6639-9_10.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
4

Lim, Daniel. "Algorithms". In Philosophy through Computer Science, 22–29. New York: Routledge, 2023. http://dx.doi.org/10.4324/9781003271284-3.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
5

Baratz, Alan, Inder Gopal, and Adrian Segall. "Fault tolerant queries in computer networks". In Distributed Algorithms, 30–40. Berlin, Heidelberg: Springer Berlin Heidelberg, 1988. http://dx.doi.org/10.1007/bfb0019792.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
6

Roosta, Seyed H. "Computer Architecture". In Parallel Processing and Parallel Algorithms, 1–56. New York, NY: Springer New York, 2000. http://dx.doi.org/10.1007/978-1-4612-1220-1_1.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
7

Mehlhorn, Kurt. "The Physarum Computer". In WALCOM: Algorithms and Computation, 8. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-19094-0_3.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
8

Erciyes, K. "Algorithms". In Undergraduate Topics in Computer Science, 41–61. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-61115-6_3.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
9

Srivastav, Anand, Axel Wedemeyer, Christian Schielke, and Jan Schiemann. "Algorithms for Big Data Problems in de Novo Genome Assembly". In Lecture Notes in Computer Science, 229–51. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-21534-6_13.

Full text of the source
Abstract:
De novo genome assembly is a fundamental task in life sciences. It is typically a big data problem, sometimes with billions of reads: a big puzzle in which the genome is hidden. Memory- and time-efficient algorithms are sought, preferably ones that run even on desktops in labs. In this chapter we address some algorithmic problems related to genome assembly. We first present an algorithm that heavily reduces the size of the input data with no essential compromise on assembly quality. In this and many other algorithms in bioinformatics, the counting of k-mers is a bottleneck. We discuss counting in external memory. The construction of large parts of the genome, called contigs, can be modelled as the longest path problem or the Euler tour problem in graphs built on reads or k-mers. We present a linear-time streaming algorithm for constructing long paths in undirected graphs, and a streaming algorithm for the Euler tour problem with optimal one-pass complexity.
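The k-mer counting step named as a bottleneck in this abstract can be illustrated with a naive in-memory counter (the chapter itself discusses external-memory counting; this sketch only shows what is being counted):

```python
from collections import Counter

def count_kmers(reads, k):
    """Naive in-memory k-mer counting: slide a length-k window over
    each read and tally every substring seen."""
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts
```

`count_kmers(["ACGTACGT"], 4)` tallies the five length-4 windows of the read; real assemblers stream billions of such windows, which is why external-memory counting matters.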
ABNT, Harvard, Vancouver, APA, etc. styles
10

Sutinen, Erkki, and Matti Tedre. "ICT4D: A Computer Science Perspective". In Algorithms and Applications, 221–31. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-12476-1_16.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles

Conference papers on the topic "Computer algorithms"

1

Efimov, Aleksey Igorevich, and Dmitry Igorevich Ustukov. "Comparative Analysis of Stereo Vision Algorithms Implementation on Various Architectures". In 32nd International Conference on Computer Graphics and Vision. Keldysh Institute of Applied Mathematics, 2022. http://dx.doi.org/10.20948/graphicon-2022-484-489.

Full text of the source
Abstract:
A comparative analysis of the functionality of stereo vision algorithms on various hardware architectures has been carried out. Quantitative results of the stereo vision algorithms' implementation are presented, taking into account the specifics of the underlying hardware. An original algorithm for calculating the depth map using a summed-area table is described; its complexity does not depend on the size of the search window. The article presents the content and results of the implementation of the stereo vision method on standard-architecture computers (including a multi-threaded implementation), a single-board computer, and an FPGA. The proposed results may be of interest in the design of vision systems for practical applications.
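The summed-area table underlying the depth-map algorithm described above can be sketched as follows; the point is that any rectangular window sum is obtained in O(1), independent of the window size (a minimal illustration, not the paper's implementation):

```python
def summed_area_table(img):
    """Build an integral image: sat[y][x] = sum of img over the
    rectangle [0..y] x [0..x]."""
    h, w = len(img), len(img[0])
    sat = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            sat[y][x] = row_sum + (sat[y - 1][x] if y > 0 else 0)
    return sat

def window_sum(sat, y0, x0, y1, x1):
    """Sum of img over [y0..y1] x [x0..x1] in O(1) via inclusion-
    exclusion -- the property used for fast cost aggregation."""
    total = sat[y1][x1]
    if y0 > 0:
        total -= sat[y0 - 1][x1]
    if x0 > 0:
        total -= sat[y1][x0 - 1]
    if y0 > 0 and x0 > 0:
        total += sat[y0 - 1][x0 - 1]
    return total
```

Building the table costs one pass over the image; afterwards every matching-cost window, whatever its size, is four lookups.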
ABNT, Harvard, Vancouver, APA, etc. styles
2

Spector, Lee. "Evolving quantum computer algorithms". In the 11th annual conference companion. New York, New York, USA: ACM Press, 2009. http://dx.doi.org/10.1145/1570256.1570420.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
3

Spector, Lee. "Evolving quantum computer algorithms". In the 13th annual conference companion. New York, New York, USA: ACM Press, 2011. http://dx.doi.org/10.1145/2001858.2002128.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
4

Milne, Darran. "Computer-Generated Holography Algorithms". In Frontiers in Optics. Washington, D.C.: Optica Publishing Group, 2023. http://dx.doi.org/10.1364/fio.2023.fm1a.4.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
5

Freeman, William T. "Where computer vision needs help from computer science". In Proceedings of the Twenty-Second Annual ACM-SIAM Symposium on Discrete Algorithms. Philadelphia, PA: Society for Industrial and Applied Mathematics, 2011. http://dx.doi.org/10.1137/1.9781611973082.64.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
6

Kosovskaya, Tatiana, and Juan Zhou. "Algorithms for Checking Isomorphism of Two Elementary Conjunctions". In Computer Science and Information Technologies 2023. Institute for Informatics and Automation Problems, 2023. http://dx.doi.org/10.51408/csit2023_01.

Full text of the source
Abstract:
When solving AI problems related to the study of complex structured objects, the predicate calculus language is a convenient tool for describing such objects. The paper presents two algorithms for checking two elementary conjunctions of predicate formulas for isomorphism (coincidence up to the names of variables and the order of conjunctive terms). The first algorithm checks for isomorphism between elementary conjunctions containing a single predicate symbol; in addition, if the formulas are isomorphic, it finds a one-to-one correspondence between their arguments. If all predicates are binary, the proposed algorithm is an algorithm for checking two directed graphs for isomorphism. The second algorithm checks for isomorphism between elementary conjunctions containing several predicate symbols. Estimates of time complexity are given for both algorithms.
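For the binary-predicate case mentioned in the abstract, the problem reduces to directed-graph isomorphism. A brute-force sketch (illustrative only; the paper's algorithms and complexity bounds are not reproduced here):

```python
from itertools import permutations

def digraphs_isomorphic(nodes1, edges1, nodes2, edges2):
    """Brute-force directed-graph isomorphism: try every bijection
    between the node sets and compare the mapped edge sets.
    Returns a node mapping if one exists, else None.  Exponential in
    the number of nodes, so usable only for small instances."""
    nodes1, nodes2 = list(nodes1), list(nodes2)
    if len(nodes1) != len(nodes2) or len(set(edges1)) != len(set(edges2)):
        return None
    for perm in permutations(nodes2):
        mapping = dict(zip(nodes1, perm))
        if {(mapping[u], mapping[v]) for (u, v) in edges1} == set(edges2):
            return mapping
    return None
```

The returned mapping plays the role of the one-to-one correspondence between arguments that the first algorithm in the paper produces.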
ABNT, Harvard, Vancouver, APA, etc. styles
7

Fantacci, M. E., S. Bagnasco, N. Camarlinghi, E. Fiorina, E. Lopez Torres, F. Pennazio, C. Peroni, et al. "A Web-based Computer Aided Detection System for Automated Search of Lung Nodules in Thoracic Computed Tomography Scans". In International Conference on Bioinformatics Models, Methods and Algorithms. SCITEPRESS - Science and Technology Publications, 2015. http://dx.doi.org/10.5220/0005280102130218.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
8

Czakoova, Krisztina. "DEVELOPING ALGORITHMIC THINKING BY EDUCATIONAL COMPUTER GAMES". In eLSE 2020. University Publishing House, 2020. http://dx.doi.org/10.12753/2066-026x-20-003.

Full text of the source
Abstract:
The basics of algorithmic thinking should not be limited to creating correct solutions and expressing them as a computer program; a suitable methodology based on problem solving, preferably in a playful way, should also be used. At school, many learners consider the topic of algorithms hard and not very attractive. For beginners in programming, knowledge of specific algorithms is not so important; the ability to understand the principles of algorithms, and to find one's own algorithms for new problems, is more desirable. One main educational objective is to know that an algorithm prescribes exactly what to do in every possible situation. Educational computer games based on the use of basic control structures do a good service here: pupils can understand how to reach a solution using clearly defined steps with immediate feedback, and with the possibility of visualizing (and correcting) the sequence of steps. Students gain new knowledge through their own observation and discovery. The games also motivate students to improve their algorithms and to find more efficient solutions within the game's strategy. The aim is for pupils to acquire new knowledge by exploring and learning by doing. The main aim of the paper is to show a way of learning the principles and concepts of algorithms through a computer game that is much easier for learners to comprehend and makes learning more fun. During the creation of the game, which was inspired by the well-known programmable toy Bee-bot, we tried to comply with the didactic principles of illustrativeness, appropriateness, and an individual approach.
ABNT, Harvard, Vancouver, APA, etc. styles
9

Bulavintsev, Vadim, and Dmitry Zhdanov. "Method for Adaptation of Algorithms to GPU Architecture". In 31st International Conference on Computer Graphics and Vision. Keldysh Institute of Applied Mathematics, 2021. http://dx.doi.org/10.20948/graphicon-2021-3027-930-941.

Full text of the source
Abstract:
We propose a generalized method for adapting and optimizing algorithms for efficient execution on modern graphics processing units (GPUs). The method consists of several steps. First, build a control flow graph (CFG) of the algorithm. Next, transform the CFG into a tree of loops and merge non-parallelizable loops into parallelizable ones. Finally, map the resulting loop tree to the tree of GPU computational units, unrolling the algorithm's loops as necessary for the match. The mapping should be performed bottom-up, from the lowest GPU architecture levels to the highest, to minimize off-chip memory access and maximize register file usage. The method provides the programmer with a convenient and robust mental framework and strategy for GPU code optimization. We demonstrate the method by adapting the DPLL backtracking search algorithm for solving the Boolean satisfiability problem (SAT) to a GPU. The resulting GPU version of DPLL outperforms the CPU version in raw tree-search performance sixfold for regular Boolean satisfiability problems and twofold for irregular ones.
ABNT, Harvard, Vancouver, APA, etc. styles
10

"Computer aspects of numerical algorithms". In 2008 International Multiconference on Computer Science and Information Technology. IEEE, 2008. http://dx.doi.org/10.1109/imcsit.2008.4747248.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles

Reports by organizations on the topic "Computer algorithms"

1

Poggio, Tomaso, and James Little. Parallel Algorithms for Computer Vision. Fort Belvoir, VA: Defense Technical Information Center, March 1988. http://dx.doi.org/10.21236/ada203947.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
2

Leach, Ronald J. Analysis of Blending Algorithms in Computer Graphics. Fort Belvoir, VA: Defense Technical Information Center, October 1988. http://dx.doi.org/10.21236/ada201921.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
3

Dixon, L. C., and R. C. Price. Optimisation Algorithms for Highly Parallel Computer Architectures. Fort Belvoir, VA: Defense Technical Information Center, December 1990. http://dx.doi.org/10.21236/ada235911.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
4

Leach, Ronald J. Analysis of Blending Algorithms in Computer Graphics. Fort Belvoir, VA: Defense Technical Information Center, November 1991. http://dx.doi.org/10.21236/ada244279.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
5

Kupinski, Matthew A. Investigation of Genetic Algorithms for Computer-Aided Diagnosis. Fort Belvoir, VA: Defense Technical Information Center, October 2000. http://dx.doi.org/10.21236/ada393995.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
6

Schnabel, R. Concurrent Algorithms for Numerical Computation on Hypercube Computer. Fort Belvoir, VA: Defense Technical Information Center, February 1988. http://dx.doi.org/10.21236/ada195502.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
7

Kupinski, Matthew A. Investigation of Genetic Algorithms for Computer-Aided Diagnosis. Fort Belvoir, VA: Defense Technical Information Center, October 1999. http://dx.doi.org/10.21236/ada391457.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
8

Lewis, Dustin, Naz Modirzadeh, and Gabriella Blum. War-Algorithm Accountability. Harvard Law School Program on International Law and Armed Conflict, August 2016. http://dx.doi.org/10.54813/fltl8789.

Full text of the source
Abstract:
In War-Algorithm Accountability (August 2016), we introduce a new concept—war algorithms—that elevates algorithmically-derived “choices” and “decisions” to a, and perhaps the, central concern regarding technical autonomy in war. We thereby aim to shed light on and recast the discussion regarding “autonomous weapon systems” (AWS). We define “war algorithm” as any algorithm that is expressed in computer code, that is effectuated through a constructed system, and that is capable of operating in relation to armed conflict. In introducing this concept, our foundational technological concern is the capability of a constructed system, without further human intervention, to help make and effectuate a “decision” or “choice” of a war algorithm. Distilled, the two core ingredients are an algorithm expressed in computer code and a suitably capable constructed system. Through that lens, we link international law and related accountability architectures to relevant technologies. We sketch a three-part (non-exhaustive) approach that highlights traditional and unconventional accountability avenues. We focus largely on international law because it is the only normative regime that purports—in key respects but with important caveats—to be both universal and uniform. In this way, international law is different from the myriad domestic legal systems, administrative rules, or industry codes that govern the development and use of technology in all other spheres. By not limiting our inquiry only to weapon systems, we take an expansive view, showing how the broad concept of war algorithms might be susceptible to regulation—and how those algorithms might already fit within the existing regulatory system established by international law.
ABNT, Harvard, Vancouver, APA, etc. styles
9

Varastehpour, Soheil, Hamid Sharifzadeh e Iman Ardekani. A Comprehensive Review of Deep Learning Algorithms. Unitec ePress, 2021. http://dx.doi.org/10.34074/ocds.092.

Full text of the source
Abstract:
Deep learning algorithms are a subset of machine learning algorithms that aim to learn several levels of distributed representations from the input data. Recently, many deep learning algorithms have been proposed to solve traditional artificial intelligence problems. In this review paper, some up-to-date algorithms on this topic in the fields of computer vision and image processing are reviewed. Following this, a brief overview of several different deep learning methods and their recent developments is given.
ABNT, Harvard, Vancouver, APA, etc. styles
10

Ainsworth, James S., and Steven Kubala. Computer Simulation Modeling: A Method for Predicting the Utilities of Alternative Computer-Aided Threat Evaluation Algorithms. Fort Belvoir, VA: Defense Technical Information Center, September 1990. http://dx.doi.org/10.21236/ada230252.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
