Academic literature on the topic 'Algorithmic structures'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Algorithmic structures.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Algorithmic structures"

1

Esponda-Argüero, Margarita. "Techniques for Visualizing Data Structures in Algorithmic Animations." Information Visualization 9, no. 1 (January 29, 2009): 31–46. http://dx.doi.org/10.1057/ivs.2008.26.

Abstract:
This paper deals with techniques for the design and production of appealing algorithmic animations and their use in computer science education. A good visual animation is both a technical artifact and a work of art that can greatly enhance the understanding of an algorithm's workings. In the first part of the paper, I show that awareness of the composition principles used by other animators and visual artists can help programmers to create better algorithmic animations. The second part shows how to incorporate those ideas in novel animation systems, which represent data structures in a visually intuitive manner. The animations described in this paper have been implemented and used in the classroom for courses at university level.
2

Zhu, Guo Jin, Kai Zhang, and Ji Yun Li. "Discovering Algorithmic Relationship between Programming Resources on the Web." Applied Mechanics and Materials 347-350 (August 2013): 2430–35. http://dx.doi.org/10.4028/www.scientific.net/amm.347-350.2430.

Abstract:
Algorithmic relationships are discovered here for programming tutoring. There are two kinds of algorithmic relationships between programming resources on the web: the associative relationship and the structural-similarity relationship, and they can be organized hierarchically. An algorithm can solve different programming problems, and a programming problem can likewise be solved by different algorithms; this gives rise to an associative relationship between programming resources on the web. The algorithmic structures of source code can be mined by neural computing. Different source codes may have a structural-similarity relationship between them, meaning that they are similar in their algorithmic structures. A learner can thus progress from simple to complicated algorithmic structures, or learn from similarities between structures. In our experiment, we use a tree structure to organize the algorithmic relationships.
3

Kalimullin, I. "Algorithmic reducibilities of algebraic structures." Journal of Logic and Computation 22, no. 4 (September 10, 2010): 831–43. http://dx.doi.org/10.1093/logcom/exq046.

4

Cohn, H., and A. Kumar. "Algorithmic design of self-assembling structures." Proceedings of the National Academy of Sciences 106, no. 24 (June 16, 2009): 9570–75. http://dx.doi.org/10.1073/pnas.0901636106.

5

Mikhailovsky, George. "Structuredness as a Measure of the Complexity of the Structure and the Role of Post-Dissipative Structures and Ratchet Processes in Evolution." Journal of Evolutionary Science 1, no. 2 (January 23, 2020): 40–52. http://dx.doi.org/10.14302/issn.2689-4602.jes-19-3155.

Abstract:
As shown earlier, the algorithmic complexity, like Shannon information and Boltzmann entropy, tends to increase in accordance with the general law of complification. However, the algorithmic complexity of most material systems does not reach its maximum, i.e. a chaotic state, due to the various laws of nature that create certain structures. The complexity of such structures is very different from the algorithmic complexity, and we intuitively feel that its maximal value should lie somewhere between order and chaos. I propose a formula for calculating such structural complexity, which can be called structuredness. The structuredness of any material system is determined by structures of three main types: stable, dissipative, and post-dissipative. The latter are defined as stable structures created by dissipative ones, directly or indirectly. Post-dissipative structures, like stable ones, can exist for an unlimited time, but at the micro level only, without an energy influx. The appearance of such structures leads to a "ratchet" process, which determines structure genesis in non-living and, especially, in living systems. This process allows systems with post-dissipative structures to develop in the direction of maximum structuring through the gradual accumulation of these structures, even when such structuring contradicts the general law of complification.
6

Harizanov, Valentina S. "Computability-Theoretic Complexity of Countable Structures." Bulletin of Symbolic Logic 8, no. 4 (December 2002): 457–77. http://dx.doi.org/10.2178/bsl/1182353917.

Abstract:
Computable model theory, also called effective or recursive model theory, studies algorithmic properties of mathematical structures, their relations, and isomorphisms. These properties can be described syntactically or semantically. One of the major tasks of computable model theory is to obtain, whenever possible, computability-theoretic versions of various classical model-theoretic notions and results. For example, in the 1950's, Fröhlich and Shepherdson realized that the concept of a computable function can make van der Waerden's intuitive notion of an explicit field precise. This led to the notion of a computable structure. In 1960, Rabin proved that every computable field has a computable algebraic closure. However, not every classical result “effectivizes”. Unlike Vaught's theorem that no complete theory has exactly two nonisomorphic countable models, Millar's and Kudaibergenov's result establishes that there is a complete decidable theory that has exactly two nonisomorphic countable models with computable elementary diagrams. In the 1970's, Metakides and Nerode [58], [59] and Remmel [71], [72], [73] used more advanced methods of computability theory to investigate algorithmic properties of fields, vector spaces, and other mathematical structures.
7

Zenil, Hector, Fernando Soler-Toscano, Jean-Paul Delahaye, and Nicolas Gauvrit. "Two-dimensional Kolmogorov complexity and an empirical validation of the Coding theorem method by compressibility." PeerJ Computer Science 1 (September 30, 2015): e23. http://dx.doi.org/10.7717/peerj-cs.23.

Abstract:
We propose a measure based upon the fundamental theoretical concept in algorithmic information theory that provides a natural approach to the problem of evaluating n-dimensional complexity by using an n-dimensional deterministic Turing machine. The technique is interesting because it provides a natural algorithmic process for symmetry breaking, generating complex n-dimensional structures from perfectly symmetric and fully deterministic computational rules, producing a distribution of patterns as described by algorithmic probability. Algorithmic probability also elegantly connects the frequency of occurrence of a pattern with its algorithmic complexity, hence effectively providing estimations of the complexity of the generated patterns. Experiments to validate estimations of algorithmic complexity based on these concepts are presented, showing that the measure is stable in the face of some changes in computational formalism and that results are in agreement with those obtained using lossless compression algorithms when both methods overlap in their range of applicability. We then use the output frequency of the set of 2-dimensional Turing machines to classify the algorithmic complexity of the space-time evolutions of Elementary Cellular Automata.
8

Jarrahi, Mohammad Hossein, Gemma Newlands, Min Kyung Lee, Christine T. Wolf, Eliscia Kinder, and Will Sutherland. "Algorithmic management in a work context." Big Data & Society 8, no. 2 (July 2021): 205395172110203. http://dx.doi.org/10.1177/20539517211020332.

Abstract:
The rapid development of machine-learning algorithms, which underpin contemporary artificial intelligence systems, has created new opportunities for the automation of work processes and management functions. While algorithmic management has been observed primarily within the platform-mediated gig economy, its transformative reach and consequences are also spreading to more standard work settings. Exploring algorithmic management as a sociotechnical concept, which reflects both technological infrastructures and organizational choices, we discuss how algorithmic management may influence existing power and social structures within organizations. We identify three key issues. First, we explore how algorithmic management shapes pre-existing power dynamics between workers and managers. Second, we discuss how algorithmic management demands new roles and competencies while also fostering oppositional attitudes toward algorithms. Third, we explain how algorithmic management impacts knowledge and information exchange within an organization, unpacking the concept of opacity on both a technical and organizational level. We conclude by situating this piece in broader discussions on the future of work, accountability, and identifying future research steps.
9

Ratsaby, Joel, and J. Chaskalovic. "On the algorithmic complexity of static structures." Journal of Systems Science and Complexity 23, no. 6 (December 2010): 1037–53. http://dx.doi.org/10.1007/s11424-010-8465-2.

10

Chen, Chun-Teh, Francisco J. Martin-Martinez, Gang Seob Jung, and Markus J. Buehler. "Polydopamine and eumelanin molecular structures investigated with ab initio calculations." Chemical Science 8, no. 2 (2017): 1631–41. http://dx.doi.org/10.1039/c6sc04692d.

Abstract:
A set of computational methods that combines brute-force algorithmic generation of chemical isomers, molecular dynamics (MD) simulations, and density functional theory (DFT) calculations is reported and applied to investigate nearly 3000 probable molecular structures of polydopamine (PDA) and eumelanin.

Dissertations / Theses on the topic "Algorithmic structures"

1

Li, Quan. "Algorithms and algorithmic obstacles for probabilistic combinatorial structures." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/115765.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 209-214).
We study efficient average-case (approximation) algorithms for combinatorial optimization problems, as well as explore the algorithmic obstacles for a variety of discrete optimization problems arising in the theory of random graphs, statistics and machine learning. In particular, we consider the average-case optimization for three NP-hard combinatorial optimization problems: Large Submatrix Selection, Maximum Cut (Max-Cut) of a graph and Matrix Completion. The Large Submatrix Selection problem is to find a k × k submatrix of an n × n matrix with i.i.d. standard Gaussian entries which has the largest average entry. It was shown in [13] using non-constructive methods that the largest average value of a k × k submatrix is (1 + o(1))·2√(log n / k) with high probability (w.h.p.) when k = O(log n / log log n). We show that a natural greedy algorithm called Largest Average Submatrix (LAS) produces a submatrix with average value (1 + o(1))√(2 log n / k) w.h.p. when k is constant and n grows, namely approximately √2 smaller. Then, by drawing an analogy with the problem of finding cliques in random graphs, we propose a simple greedy algorithm which produces a k × k matrix with asymptotically the same average value (1 + o(1))√(2 log n / k) w.h.p., for k = o(log n). Since the maximum clique problem is a special case of the largest submatrix problem and the greedy algorithm is the best known algorithm for finding cliques in random graphs, it is tempting to believe that beating the factor-√2 performance gap suffered by both algorithms might be very challenging. Surprisingly, we show the existence of a very simple algorithm which produces a k × k matrix with average value (1 + o_k(1) + o(1))(4/3)√(2 log n / k) for k = o((log n)^1.5), that is, with asymptotic factor 4/3 when k grows.
To get an insight into the algorithmic hardness of this problem, and motivated by methods originating in the theory of spin glasses, we conduct the so-called expected overlap analysis of matrices with average value asymptotically (1 + o(1))α√(2 log n / k) for a fixed value α ∈ [1, √2]. The overlap corresponds to the number of common rows and common columns for pairs of matrices achieving this value. We discover numerically an intriguing phase transition at α* = 5√2/(3√3) ≈ 1.3608 ∈ [4/3, √2]: when α < α* the space of overlaps is a continuous subset of [0, 1]², whereas α = α* marks the onset of discontinuity, and as a result the model exhibits the Overlap Gap Property (OGP) when α > α*, appropriately defined. We conjecture that the OGP observed for α > α* also marks the onset of algorithmic hardness: no polynomial-time algorithm exists for finding matrices with average value at least (1 + o(1))α√(2 log n / k) when α > α* and k is a growing function of n. Finding a maximum cut of a graph is a well-known canonical NP-hard problem. We consider the problem of estimating the size of a maximum cut in a random Erdős–Rényi graph on n nodes and ⌊cn⌋ edges. We establish that the size of the maximum cut normalized by the number of nodes belongs to the interval [c/2 + 0.47523√c, c/2 + 0.55909√c] w.h.p. as n increases, for all sufficiently large c. We observe that every maximum-size cut satisfies a certain local optimality property, and we compute the expected number of cuts with a given value satisfying this local optimality property. Estimating this expectation amounts to solving a rather involved multi-dimensional large deviations problem.
We solve this underlying large deviations problem asymptotically as c increases and use it to obtain an improved upper bound on the Max-Cut value. The lower bound is obtained by an application of the second moment method, coupled with the same local optimality constraint, and is shown to work up to the stated lower bound value c/2 + 0.47523√c. We also obtain an improved lower bound of 1.36000n on the Max-Cut for the random cubic graph, or any cubic graph with large girth, improving the previous best bound of 1.33773n. Matrix Completion is the problem of reconstructing a rank-k n × n matrix M from a sampling of its entries. We propose a new matrix completion algorithm using a novel sampling scheme based on a union of independent sparse random regular bipartite graphs. We show that under a certain incoherence assumption on M, and for the case when both the rank and the condition number of M are bounded, w.h.p. our algorithm recovers an ε-approximation of M in terms of the Frobenius norm using O(n log²(1/ε)) samples and in linear time O(n log²(1/ε)). This provides the best known bounds on both the sample complexity and the computational cost for reconstructing (approximately) an unknown low-rank matrix. The novelty of our algorithm lies in two new steps, thresholding singular values and rescaling singular vectors, in the application of the "vanilla" alternating minimization algorithm. The structure of sparse random regular graphs is used heavily to control the impact of these regularization steps.
by Quan Li.
Ph. D.
2

Vialette, Stéphane. "Algorithmic Contributions to Computational Molecular Biology." Habilitation à diriger des recherches, Université Paris-Est, 2010. http://tel.archives-ouvertes.fr/tel-00862069.

3

King, Stephen. "Higher-level algorithmic structures in the refinement calculus." Thesis, University of Oxford, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.300129.

4

Hashemolhosseini, Sepehr. "Algorithmic component and system reliability analysis of truss structures." Thesis, Stellenbosch : Stellenbosch University, 2013. http://hdl.handle.net/10019.1/85710.

Abstract:
Thesis (MScEng)-- Stellenbosch University, 2013.
ENGLISH ABSTRACT: Most of the parameters involved in the design and analysis of structures are stochastic in nature. It is therefore of paramount importance to be able to perform a fully stochastic analysis of structures, at both component and system level, to take into account the uncertainties involved in structural analysis and design. In practice, by contrast, the (computerised) analysis of structures is based on a deterministic analysis which fails to address the randomness of design and analysis parameters. This means that an investigation of algorithmic methodologies for component and system reliability analysis can help pave the way towards the implementation of fully stochastic analysis of structures in a computer environment. This study is focused on algorithm development for component and system reliability analysis based on the various proposed methodologies. Truss structures were selected for this purpose due to their simplicity as well as their wide use in industry. Nevertheless, the algorithms developed in this study can be used for other types of structures, such as moment-resisting frames, with some simple modifications. For a component-level reliability analysis of structures, different methods such as First Order Reliability Methods (FORM) and simulation methods are proposed. However, implementation of these methods for statically indeterminate structures is complex due to the implicit relation between the response of the structural system and the load effect. As a result, the algorithm developed for the purpose of component reliability analysis should be based on the concepts of Stochastic Finite Element Methods (SFEM), where a proper link between the finite element analysis of the structure and the reliability analysis methodology is ensured. In this study, various algorithms are developed based on the FORM method, Monte Carlo simulation, and the Response Surface Method (RSM).
Using the FORM method, two methodologies are considered: one is based on the development of a finite element code, where the required alterations are made to the FEM code, and the other is based on the use of a commercial FEM package. Different simulation methods are also implemented: Direct Monte Carlo Simulation (DMCS), Latin Hypercube Sampling Monte Carlo (LHCSMC), and Updated Latin Hypercube Sampling Monte Carlo (ULHCSMC). Moreover, RSM is used together with the simulation methods. Throughout the thesis, the efficiency of these methods is investigated. A Fully Stochastic Finite Element Method (FSFEM) with alterations to the finite element code seems the fastest approach, since the linking between the FEM package and the reliability analysis is avoided. Simulation methods can also be used effectively for the reliability evaluation, where ULHCSMC seemed to be the most efficient method, followed by LHCSMC and DMCS. The response surface method is the least straightforward method for an algorithmic component reliability analysis; however, it is useful for the system reliability evaluation. For a system-level reliability analysis, two methods were considered: the β-unzipping method and the branch and bound method. The β-unzipping method is based on a level-wise system reliability evaluation where the structure is modelled at different damage levels according to its degree of redundancy. In each level, so-called unzipping intervals are defined for the identification of the critical elements. The branch and bound method is based on the identification of different failure paths of the structure by the expansion of the structural failure tree. The evaluation of the damaged states is the same for both methods. Furthermore, both methods lead to the development of a parallel-series model for the structural system. The only difference between the two methods is in the search approach used for the failure sequence identification.
It was shown that the β-unzipping method provides a better algorithmic approach for evaluating the system reliability than the branch and bound method. Nevertheless, the branch and bound method is more robust in the identification of structural failure sequences. One possible way to increase the efficiency of the β-unzipping method is to define bigger unzipping intervals in each level, which is possible through a computerised analysis. For such an analysis, four major modules are required: a general intact-structure module, a damaged-structure module, a reliability analysis module, and a system reliability module. In this thesis, different computer programs were developed for both system and component reliability analysis based on the developed algorithms. The computer programs are presented in the appendices of the thesis.
5

Breuils, Stéphane. "Structures algorithmiques pour les opérateurs d'algèbre géométrique et application aux surfaces quadriques." Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC1142/document.

Abstract:
Geometric Algebra is considered a very intuitive tool for dealing with geometric problems, and it appears to be increasingly efficient and useful for computer graphics problems. The Conformal Geometric Algebra includes circles, spheres, planes and lines as algebraic objects, and intersections between these objects are also algebraic objects. More complex objects such as conics and quadric surfaces can also be expressed and manipulated using an extension of the Conformal Geometric Algebra. However, due to the high dimension of their representations in Geometric Algebra, currently available implementations of Geometric Algebra do not allow efficient realizations of these objects. In this thesis, we first present a Geometric Algebra implementation dedicated to both low and high dimensions. The proposed method is a hybrid solution that includes precomputed code with fast execution for low-dimensional vector spaces, which is broadly equivalent to the state-of-the-art method. For high-dimensional vector spaces, we propose runtime computations with low memory requirements. For these high-dimensional vector spaces, we introduce a new recursive scheme and prove that the associated algorithms are efficient both in terms of computational and memory complexity. Furthermore, some rules are defined to select the most appropriate choice, according to the dimension of the algebra and the type of multivectors involved in the product. We show that the resulting implementation is well suited for high-dimensional spaces (e.g. an algebra of dimension 15) as well as for lower-dimensional spaces. The next part presents an efficient representation of quadric surfaces using Geometric Algebra. We define a novel Geometric Algebra framework, the Geometric Algebra of R^(9,6), to deal with quadric surfaces, where an arbitrary quadric surface is constructed by merely the outer product of nine points.
We show that the proposed framework enables us not only to represent quadric surfaces intuitively but also to construct objects using the Conformal Geometric Algebra. In the proposed framework, the computation of the intersection of quadric surfaces, the normal vector, and the tangent plane of a quadric surface is provided. Finally, a computational framework for quadric surfaces is presented with the main operations required in computer graphics.
APA, Harvard, Vancouver, ISO, and other styles
6

Raymond, Jean-Florent. "Structural and algorithmic aspects of partial orderings of graphs." Thesis, Montpellier, 2016. http://www.theses.fr/2016MONTT289.

Full text
Abstract:
The central theme of this thesis is the study of the properties of classes of graphs defined by forbidden substructures, and their applications. The first direction that we follow concerns well-quasi-orders. Using decomposition theorems on graph classes forbidding one substructure, we identify those that are well-quasi-ordered. The orders and substructures that we consider are those related to the notions of contraction and induced minor. Then, still considering classes of graphs defined by forbidden substructures, we obtain bounds on invariants such as degree, treewidth, tree-cut width, and a new invariant generalizing the girth. The third direction is the study of the links between the combinatorial invariants related to packing and covering problems on graphs. In this direction, we establish new connections between these invariants for some classes of graphs. We also present algorithmic applications of these results.
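The packing–covering invariants mentioned here can be illustrated on the smallest classical pair: the maximum matching (a packing of disjoint edges) and the minimum vertex cover (a covering of all edges) always satisfy ν ≤ τ ≤ 2ν. A brute-force sketch on a tiny graph (illustrative only; the thesis concerns far more general invariants):

```python
from itertools import combinations

def max_matching(edges):
    """Brute-force maximum matching size: largest set of pairwise disjoint edges."""
    for k in range(len(edges), 0, -1):
        for subset in combinations(edges, k):
            verts = [v for e in subset for v in e]
            if len(verts) == len(set(verts)):  # edges share no endpoint
                return k
    return 0

def min_vertex_cover(edges, vertices):
    """Brute-force minimum vertex cover size: smallest set hitting every edge."""
    for k in range(len(list(vertices)) + 1):
        for subset in combinations(vertices, k):
            s = set(subset)
            if all(u in s or v in s for u, v in edges):
                return k

# C5, the 5-cycle: its packing number is 2 and its covering number is 3.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
nu = max_matching(edges)
tau = min_vertex_cover(edges, range(5))
assert nu <= tau <= 2 * nu  # the classical packing-covering sandwich
print(nu, tau)  # 2 3
```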
APA, Harvard, Vancouver, ISO, and other styles
7

Bessy, Stéphane. "Some problems in graph theory and graphs algorithmic theory." Habilitation à diriger des recherches, Université Montpellier II - Sciences et Techniques du Languedoc, 2012. http://tel.archives-ouvertes.fr/tel-00806716.

Full text
Abstract:
This document is a long abstract of my research work concerning graph theory and algorithms on graphs. It summarizes some results, gives proof ideas for some of them, and presents the context of the different topics together with some interesting open questions connected to them. The first part specifies the notation used in the rest of the paper; the second part deals with some problems on cycles in digraphs; the third part is an overview of two graph coloring problems and one problem on structures in colored graphs; finally, the fourth part focuses on some results in algorithmic graph theory, mainly in parameterized complexity.
APA, Harvard, Vancouver, ISO, and other styles
8

Mohan, Rathish. "Algorithmic Optimization of Sensor Placement on Civil Structures for Fault Detection and Isolation." University of Cincinnati / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1353156107.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Ballage, Marion. "Algorithmes de résolution rapide de problèmes mécaniques sur GPU." Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30122/document.

Full text
Abstract:
Generating a conformal mesh on complex geometries leads to large model sizes in structural finite element simulations. The meshing time is directly linked to the geometry's complexity and can contribute significantly to the total turnaround time. Graphics processing units (GPUs) are highly parallel programmable processors that deliver real performance gains on computationally complex, large problems. GPUs are used here to implement a new finite element method on a Cartesian mesh. A Cartesian mesh is well adapted to the parallelism offered by GPUs and reduces the meshing time to almost zero. The novel method relies on the finite element method and the extended finite element formulation. The extended finite element method was introduced in the field of fracture mechanics. It consists in enriching the basis functions to take the geometry and the interfaces into account. This method does not need a conformal mesh to represent cracks and avoids remeshing during their propagation. Our method is based on the extended finite element method with an implicitly defined geometry, which allows for a good approximation of the geometry and boundary conditions without a conformal mesh. To represent the model on a Cartesian grid, we use a level set representing a density. This density is greater than 0.5 inside the domain, less than 0.5 outside, and takes the value 0.5 on the boundary. A new integration technique, adapted to this geometric representation, is proposed. For elements cut by the level set, only the part full of material has to be integrated, so Gauss quadrature is no longer adequate. We introduce a quadrature method with integration points on a dense Cartesian grid. To reduce the computational effort, a learning approach is then used to form the elementary stiffness matrices as functions of the density values at the vertices of each element. This learning method greatly reduces the computation time of the stiffness matrices.
Results obtained by either the standard finite element method or the novel one can require substantial storage, depending on the model's complexity and the accuracy of the resolution scheme. Because of the limited memory of graphics processing units, the result data are compressed. We compress the models and the finite element results with a wavelet transform. The compression helps with storage issues by reducing the size of the generated files, and with data visualization.
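The "integration points on a dense grid, keep only the material part" idea can be sketched in a few lines: sample a regular grid of quadrature points, keep those where the level-set density exceeds 0.5, and sum their cell areas. Here the density is a hypothetical sigmoid level set for a disk (a simplified 2D stand-in for the stiffness-matrix integration described in the thesis):

```python
import math

def density(x, y, r0=0.4):
    """Level-set 'density': > 0.5 inside a disk of radius r0, < 0.5 outside,
    exactly 0.5 on the boundary (a hypothetical smooth level set)."""
    r = math.hypot(x - 0.5, y - 0.5)
    return 1.0 / (1.0 + math.exp(50.0 * (r - r0)))

def integrate_full_part(n=400):
    """Quadrature on a dense regular grid over the unit square: keep only
    the integration points lying in the 'full of material' region."""
    h = 1.0 / n
    inside = sum(
        1
        for i in range(n)
        for j in range(n)
        if density((i + 0.5) * h, (j + 0.5) * h) > 0.5
    )
    return inside * h * h  # each kept point contributes one cell's area

area = integrate_full_part()
print(area)  # close to pi * 0.4**2, the exact disk area
```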
APA, Harvard, Vancouver, ISO, and other styles
10

Bricage, Marie. "Modélisation et Algorithmique de graphes pour la construction de structures moléculaires." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLV031/document.

Full text
Abstract:
In this thesis, we present an algorithmic approach for generating construction guides for organic molecular cages. These semi-molecular architectures have a defined internal space capable of trapping a target molecule called the substrate. Many works propose generating organic molecular cages from symmetrical structures, which have good complexity but are not specific, because they do not take precise targets into account. The proposed approach makes it possible to generate construction guides for organic molecular cages specific to a given substrate. To ensure the specificity of the molecular cage for the target substrate, an intermediate structure, an expansion of the envelope of the target substrate, is used. This structure defines the shape of the space in which the substrate is trapped. Small sets of atoms, called molecular binding patterns, are then integrated into this intermediate structure. These molecular patterns are the sets of atoms that molecular cages need in order to interact with the substrate and capture it.
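The "expansion of the envelope" step can be pictured as offsetting each envelope point outward. A minimal sketch, assuming a point-cloud envelope and a uniform radial margin (the thesis's actual expansion scheme is more elaborate; `expand_envelope` and its centroid-based offset are illustrative inventions):

```python
def expand_envelope(points, margin):
    """Push each envelope point away from the centroid by a fixed margin,
    yielding an intermediate structure enclosing the substrate's envelope."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    expanded = []
    for x, y, z in points:
        dx, dy, dz = x - cx, y - cy, z - cz
        d = (dx * dx + dy * dy + dz * dz) ** 0.5 or 1.0  # guard centroid point
        s = (d + margin) / d
        expanded.append((cx + dx * s, cy + dy * s, cz + dz * s))
    return expanded

# Unit octahedron vertices expanded by 0.5: each vertex moves to distance 1.5.
octa = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
print(expand_envelope(octa, 0.5)[0])  # (1.5, 0.0, 0.0)
```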
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Algorithmic structures"

1

Hajnicz, Elżbieta. Time structures: Formal description and algorithmic representation. Berlin: Springer, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Time structures: Formal description and algorithmic representation. Berlin: Springer, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Engeler, Erwin. Algorithmic properties of structures: Selected papers of E. Engeler. Singapore: World Scientific, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Algorithmic properties of structure: Selected papers of Erwin Engeler. Singapore: World Scientific, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Bunt, Richard B., ed. An introduction to computer science: An algorithmic approach. 2nd ed. New York: McGraw-Hill, 1989.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Jean-Paul, Tremblay. An introduction to computer science: An algorithmic approach. New York: McGraw-Hill, 1989.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Conceptual data modeling and database design: A fully algorithmic approach : The shortest advisable path. Oakville, ON: Apple Academic Press, 2015.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Atallah, Mikhail. Frontiers in Algorithmics and Algorithmic Aspects in Information and Management: Joint International Conference, FAW-AAIM 2011, Jinhua, China, May 28-31, 2011. Proceedings. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Pinyan, Lu, Su Kaile, Wang Lusheng, and SpringerLink (Online service), eds. Frontiers in Algorithmics and Algorithmic Aspects in Information and Management: Joint International Conference, FAW-AAIM 2012, Beijing, China, May 14-16, 2012. Proceedings. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Fellows, Michael. Frontiers in Algorithmics and Algorithmic Aspects in Information and Management: Third Joint International Conference, FAW-AAIM 2013, Dalian, China, June 26-28, 2013. Proceedings. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Algorithmic structures"

1

Mahout, Vincent. "Algorithmic and Data Structures." In Assembly Language Programming, 87–118. Hoboken, NJ USA: John Wiley & Sons, Inc., 2013. http://dx.doi.org/10.1002/9781118562123.ch6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Möller, Bernhard. "Calculating With Pointer Structures." In Algorithmic Languages and Calculi, 24–48. Boston, MA: Springer US, 1997. http://dx.doi.org/10.1007/978-0-387-35264-0_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Novosád, Tomáš, Václav Snášel, Ajith Abraham, and Jack Y. Yang. "Discovering 3D Protein Structures for Optimal Structure Alignment." In Algorithmic and Artificial Intelligence Methods for Protein Bioinformatics, 281–98. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2013. http://dx.doi.org/10.1002/9781118567869.ch14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Chen, Danny Z., and Ewa Misiołek. "Algorithms for Interval Structures with Applications." In Frontiers in Algorithmics and Algorithmic Aspects in Information and Management, 196–207. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-21204-8_23.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Nievergelt, Jürg, and Peter Widmayer. "Spatial data structures: Concepts and design choices." In Algorithmic Foundations of Geographic Information Systems, 153–97. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/3-540-63818-0_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Botorog, George Horatiu, and Herbert Kuchen. "Using algorithmic skeletons with dynamic data structures." In Parallel Algorithms for Irregularly Structured Problems, 263–76. Berlin, Heidelberg: Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/bfb0030116.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Sioutas, Spyros, Gerasimos Vonitsanos, Nikolaos Zacharatos, and Christos Zaroliagis. "Scalable and Hierarchical Distributed Data Structures for Efficient Big Data Management." In Algorithmic Aspects of Cloud Computing, 122–60. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58628-7_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Sioutas, Spyros, Phivos Mylonas, Alexandros Panaretos, Panagiotis Gerolymatos, Dimitrios Vogiatzis, Eleftherios Karavaras, Thomas Spitieris, and Andreas Kanavos. "Survey of Machine Learning Algorithms on Spark Over DHT-based Structures." In Algorithmic Aspects of Cloud Computing, 146–56. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-57045-7_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Schwank, Inge. "Cognitive Structures and Cognitive Strategies in Algorithmic Thinking." In NATO ASI Series, 249–59. Berlin, Heidelberg: Springer Berlin Heidelberg, 1993. http://dx.doi.org/10.1007/978-3-662-11334-9_22.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Segura, Clara, Isabel Pita, Rafael del Vado Vírseda, Ana Isabel Saiz, and Pablo Soler. "Interactive Learning of Data Structures and Algorithmic Schemes." In Computational Science – ICCS 2008, 800–809. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-69384-0_85.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Algorithmic structures"

1

Huang, Yijiang, Latifa Alkhayat, Catherine De Wolf, and Caitlin Mueller. "Algorithmic circular design with reused structural elements: method and tool." In International fib Symposium - Conceptual Design of Structures 2021. fib. The International Federation for Structural Concrete, 2021. http://dx.doi.org/10.35789/fib.proc.0055.2021.cdsymp.p056.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Gao, Jiawei, Russell Impagliazzo, Antonina Kolokolova, and Ryan Williams. "Completeness for First-Order Properties on Sparse Structures with Algorithmic Applications." In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms. Philadelphia, PA: Society for Industrial and Applied Mathematics, 2017. http://dx.doi.org/10.1137/1.9781611974782.141.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Markov, Igor L., and Dong-Jin Lee. "Algorithmic tuning of clock trees and derived non-tree structures." In 2011 IEEE/ACM International Conference on Computer-Aided Design (ICCAD). IEEE, 2011. http://dx.doi.org/10.1109/iccad.2011.6105342.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Melnyk, Sergiy I., Sergiy M. Labazov, and Serhii S. Melnyk. "Algorithmic Method of Reconstruction of Subsurface Structures in Georadar Studies." In 2020 IEEE Ukrainian Microwave Week (UkrMW). IEEE, 2020. http://dx.doi.org/10.1109/ukrmw49653.2020.9252683.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Gantovnik, Vladimir, Georges Fadel, and Zafer Gürdal. "An Improved Genetic Algorithm for the Optimization of Composite Structures." In ASME 2006 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2006. http://dx.doi.org/10.1115/detc2006-99423.

Full text
Abstract:
This paper describes a new approach for reducing the number of the fitness and constraint function evaluations required by a genetic algorithm (GA) for optimization problems with mixed continuous and discrete design variables. The proposed modification improves the efficiency of the memory constructed in terms of the continuous variables. The work presents the algorithmic implementation of the proposed memory scheme and demonstrates the efficiency of the proposed multivariate approximation procedure for the weight optimization of a segmented open cross section composite beam subjected to axial tension load. Results are generated to demonstrate the advantages of the proposed improvements to a standard genetic algorithm.
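The memory scheme described, reusing past evaluations to skip expensive fitness calls for mixed discrete/continuous genomes, can be sketched as a cache keyed on the discrete part with a tolerance match on the continuous part (a simplified stand-in; the paper's multivariate approximation is more sophisticated, and `fitness` here is a hypothetical placeholder for the structural analysis):

```python
expensive_calls = 0

def fitness(discrete, continuous):
    """Stand-in for an expensive structural analysis (hypothetical)."""
    global expensive_calls
    expensive_calls += 1
    return sum(d * d for d in discrete) + sum(c * c for c in continuous)

memory = {}  # discrete genome -> list of (continuous genome, fitness value)

def fitness_with_memory(discrete, continuous, tol=1e-3):
    """Reuse a stored evaluation when a nearby continuous point is known."""
    key = tuple(discrete)
    for stored, value in memory.get(key, []):
        if max(abs(a - b) for a, b in zip(stored, continuous)) < tol:
            return value  # approximate from memory, skip the expensive call
    value = fitness(discrete, continuous)
    memory.setdefault(key, []).append((tuple(continuous), value))
    return value

f1 = fitness_with_memory((1, 2), (0.5, 0.25))
f2 = fitness_with_memory((1, 2), (0.5, 0.25))  # served from memory
assert f1 == f2 and expensive_calls == 1
```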
APA, Harvard, Vancouver, ISO, and other styles
6

"EDAPPLETS: A WEB TOOL FOR TEACHING DATA STRUCTURES AND ALGORITHMIC TECHNIQUES." In International Conference on Computer Supported Education. SciTePress - Science and and Technology Publications, 2009. http://dx.doi.org/10.5220/0001980203090312.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Poulsen, Seth. "Using Spatio-Algorithmic Problem Solving Strategies to Increase Access to Data Structures." In ITiCSE '20: Innovation and Technology in Computer Science Education. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3341525.3394004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Iliopoulos, Athanasios, and John G. Michopoulos. "High Performance Parallelized Centroid Estimation of Image Components for Full Field Measurements." In ASME 2014 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/detc2014-34937.

Full text
Abstract:
Full field measurement methods require digital image processing algorithms to accomplish centroid identification of components of the image of a deforming structure and track them through subsequent video frames in order to establish displacement and strain measurements. Unfortunately, these image processing algorithms are the most computationally expensive tasks performed in such methods. In this work we present a set of new algorithms that can be used to identify centroids of image features that are shown to be orders of magnitude faster than conventional algorithms. These algorithms are based on employing efficient data structures and algorithmic flows tailored to optimally fit in shared memory parallel architectures.
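The core image-processing task named here, finding the centroid of each image component, reduces to connected-component labeling followed by an average of pixel coordinates. A minimal serial sketch (the paper's contribution is the fast parallel version, which this does not attempt to reproduce):

```python
from collections import deque

def component_centroids(image):
    """Label 4-connected components of a binary image and return the
    centroid (mean row, mean col) of each one, in scan order."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    centroids = []
    for r in range(rows):
        for c in range(cols):
            if image[r][c] and not seen[r][c]:
                queue, pixels = deque([(r, c)]), []
                seen[r][c] = True
                while queue:  # BFS flood fill of one component
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    for ny, nx in ((y+1, x), (y-1, x), (y, x+1), (y, x-1)):
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and image[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                n = len(pixels)
                centroids.append((sum(p[0] for p in pixels) / n,
                                  sum(p[1] for p in pixels) / n))
    return centroids

image = [[1, 1, 0, 0],
         [1, 1, 0, 1],
         [0, 0, 0, 1]]
print(component_centroids(image))  # [(0.5, 0.5), (1.5, 3.0)]
```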
APA, Harvard, Vancouver, ISO, and other styles
9

del Vado Vírseda, Rafael. "A visualization tool for tutoring the interactive learning of data structures and algorithmic schemes." In the 41st ACM technical symposium. New York, New York, USA: ACM Press, 2010. http://dx.doi.org/10.1145/1734263.1734325.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Bei, Xiaohui, Youming Qiao, and Shengyu Zhang. "Networked Fairness in Cake Cutting." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/508.

Full text
Abstract:
We introduce a graphical framework for fair division in cake cutting, where comparisons between agents are limited by an underlying network structure. We generalize the classical fairness notions of envy-freeness and proportionality to this graphical setting. An allocation is called envy-free on a graph if no agent envies any of her neighbors' shares, and is called proportional on a graph if every agent values her own share no less than the average among her neighbors, with respect to her own measure. These generalizations enable new research directions in developing simple and efficient algorithms that can produce fair allocations under specific graph structures. On the algorithmic frontier, we first propose a moving-knife algorithm that outputs an envy-free allocation on trees. The algorithm is significantly simpler than the discrete and bounded envy-free algorithm introduced in [Aziz and Mackenzie, 2016] for complete graphs. Next, we give a discrete and bounded algorithm for computing a proportional allocation on transitive closures of trees, a class of graphs obtained by taking a rooted tree and connecting all its ancestor-descendant pairs.
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Algorithmic structures"

1

Shadwick, B. A., W. F. Buell, and J. C. Bowman. Structure-Preserving Integration Algorithms. Fort Belvoir, VA: Defense Technical Information Center, November 2000. http://dx.doi.org/10.21236/ada384935.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Yan, Yujie, and Jerome F. Hajjar. Automated Damage Assessment and Structural Modeling of Bridges with Visual Sensing Technology. Northeastern University, May 2021. http://dx.doi.org/10.17760/d20410114.

Full text
Abstract:
Recent advances in visual sensing technology have gained much attention in the field of bridge inspection and management. Coupled with advanced robotic systems, state-of-the-art visual sensors can be used to obtain accurate documentation of bridges without the need for any special equipment or traffic closure. The captured visual sensor data can be post-processed to gather meaningful information for the bridge structures and hence to support bridge inspection and management. However, state-of-the-practice data postprocessing approaches require substantial manual operations, which can be time-consuming and expensive. The main objective of this study is to develop methods and algorithms to automate the post-processing of the visual sensor data towards the extraction of three main categories of information: 1) object information such as object identity, shapes, and spatial relationships - a novel heuristic-based method is proposed to automate the detection and recognition of main structural elements of steel girder bridges in both terrestrial and unmanned aerial vehicle (UAV)-based laser scanning data. Domain knowledge on the geometric and topological constraints of the structural elements is modeled and utilized as heuristics to guide the search as well as to reject erroneous detection results. 2) structural damage information, such as damage locations and quantities - to support the assessment of damage associated with small deformations, an advanced crack assessment method is proposed to enable automated detection and quantification of concrete cracks in critical structural elements based on UAV-based visual sensor data. In terms of damage associated with large deformations, based on the surface normal-based method proposed in Guldur et al. (2014), a new algorithm is developed to enhance the robustness of damage assessment for structural elements with curved surfaces. 
3) three-dimensional volumetric models - the object information extracted from the laser scanning data is exploited to create a complete geometric representation for each structural element. In addition, mesh generation algorithms are developed to automatically convert the geometric representations into conformal all-hexahedron finite element meshes, which can be finally assembled to create a finite element model of the entire bridge. To validate the effectiveness of the developed methods and algorithms, several field data collections have been conducted to collect both the visual sensor data and the physical measurements from experimental specimens and in-service bridges. The data were collected using both terrestrial laser scanners combined with images, and laser scanners and cameras mounted to unmanned aerial vehicles.
APA, Harvard, Vancouver, ISO, and other styles
3

Chun, Joohwan. Fast Array Algorithms for Structured Matrices. Fort Belvoir, VA: Defense Technical Information Center, June 1989. http://dx.doi.org/10.21236/ada238977.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Thomas, Robin. Graph Minors: Structure Theory and Algorithms. Fort Belvoir, VA: Defense Technical Information Center, January 1993. http://dx.doi.org/10.21236/ada271851.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

GEORGIA INST OF TECH ATLANTA. Graph Minors: Structure Theory and Algorithms. Fort Belvoir, VA: Defense Technical Information Center, April 1993. http://dx.doi.org/10.21236/ada266033.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Gazonas, George A., Daniel S. Weile, Raymond Wildman, and Anuraag Mohan. Genetic Algorithm Optimization of Phononic Bandgap Structures. Fort Belvoir, VA: Defense Technical Information Center, September 2006. http://dx.doi.org/10.21236/ada456655.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Smith, Douglas R. Theory of Algorithm Structure and Design. Fort Belvoir, VA: Defense Technical Information Center, September 1992. http://dx.doi.org/10.21236/ada257948.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Ercegovac, Miloes D., and Tomas Lang. On-Line Arithmetic Algorithms and Structures for VLSI. Fort Belvoir, VA: Defense Technical Information Center, November 1988. http://dx.doi.org/10.21236/ada203421.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Dickinson, Bradley W. Efficient Algorithms and Structures for Robust Signal Processing. Fort Belvoir, VA: Defense Technical Information Center, May 1985. http://dx.doi.org/10.21236/ada166147.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Dickinson, Bradley W. Efficient Algorithms and Structures for Robust Signal Processing. Fort Belvoir, VA: Defense Technical Information Center, September 1986. http://dx.doi.org/10.21236/ada190311.

Full text
APA, Harvard, Vancouver, ISO, and other styles
