Dissertations on the topic "Algorithmic structures"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Browse the top 50 dissertations for research on the topic "Algorithmic structures".
Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, if the relevant parameters are available in the metadata.
Browse dissertations from a variety of disciplines and compile your bibliography correctly.
Li, Quan, Ph.D. „Algorithms and algorithmic obstacles for probabilistic combinatorial structures“. Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/115765.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 209-214).
We study efficient average-case (approximation) algorithms for combinatorial optimization problems, as well as explore the algorithmic obstacles for a variety of discrete optimization problems arising in the theory of random graphs, statistics and machine learning. In particular, we consider the average-case optimization for three NP-hard combinatorial optimization problems: Large Submatrix Selection, Maximum Cut (Max-Cut) of a graph and Matrix Completion. The Large Submatrix Selection problem is to find a k × k submatrix of an n × n matrix with i.i.d. standard Gaussian entries which has the largest average entry. It was shown in [13] using non-constructive methods that the largest average value of a k × k submatrix is (1 + o(1)) 2√(log n/k) with high probability (w.h.p.) when k = O(log n / log log n). We show that a natural greedy algorithm called Largest Average Submatrix (LAS) produces a submatrix with average value (1 + o(1)) √(2 log n/k) w.h.p. when k is constant and n grows, namely approximately a factor √2 smaller. Then, by drawing an analogy with the problem of finding cliques in random graphs, we propose a simple greedy algorithm which produces a k × k matrix with asymptotically the same average value (1 + o(1)) √(2 log n/k) w.h.p., for k = o(log n). Since the maximum clique problem is a special case of the largest submatrix problem and the greedy algorithm is the best known algorithm for finding cliques in random graphs, it is tempting to believe that beating the factor-√2 performance gap suffered by both algorithms might be very challenging. Surprisingly, we show the existence of a very simple algorithm which produces a k × k matrix with average value (1 + o_k(1) + o(1)) (4/3) √(2 log n/k) for k = o((log n)^1.5), that is, with asymptotic factor 4/3 when k grows.
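The LAS procedure described above is easy to sketch: starting from arbitrary k columns, alternately pick the k rows with the largest sums restricted to the current columns, then the k columns with the largest sums restricted to the current rows, until a fixed point. A minimal illustration of that alternation (the stopping rule and parameter names here are simplifications, not the thesis's exact procedure):

```python
import numpy as np

def las_submatrix(A, k, max_iter=100, seed=0):
    """Greedy Largest Average Submatrix: alternate row/column improvement."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    cols = rng.choice(n, size=k, replace=False)          # arbitrary start columns
    rows = np.argsort(A[:, cols].sum(axis=1))[-k:]       # best rows for them
    for _ in range(max_iter):
        new_cols = np.argsort(A[rows, :].sum(axis=0))[-k:]
        new_rows = np.argsort(A[:, new_cols].sum(axis=1))[-k:]
        if set(new_cols) == set(cols) and set(new_rows) == set(rows):
            break                                        # local optimum reached
        rows, cols = new_rows, new_cols
    return rows, cols, A[np.ix_(rows, cols)].mean()

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 200))
rows, cols, avg = las_submatrix(A, k=3)
```

Each half-step maximizes the submatrix sum given the other side fixed, so the objective is monotone and the alternation terminates. For n = 200, k = 3 the returned average is typically near √(2 log n / k) ≈ 1.88, about a √2 factor below the non-constructive optimum 2√(log n / k) ≈ 2.66, matching the abstract's gap.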
To get an insight into the algorithmic hardness of this problem, and motivated by methods originating in the theory of spin glasses, we conduct the so-called expected overlap analysis of matrices with average value asymptotically (1 + o(1)) α √(2 log n/k) for a fixed value α ∈ [1, √2]. The overlap corresponds to the number of common rows and common columns for pairs of matrices achieving this value. We discover numerically an intriguing phase transition at α* ≜ 5√2/(3√3) ≈ 1.3608 ∈ [4/3, √2]: when α < α*, the space of overlaps is a continuous subset of [0, 1]², whereas α = α* marks the onset of discontinuity, and as a result the model exhibits the Overlap Gap Property (OGP), appropriately defined, when α > α*. We conjecture that the OGP observed for α > α* also marks the onset of algorithmic hardness: no polynomial-time algorithm exists for finding matrices with average value at least (1 + o(1)) α √(2 log n/k) when α > α* and k is a growing function of n. Finding a maximum cut of a graph is a well-known canonical NP-hard problem. We consider the problem of estimating the size of a maximum cut in a random Erdős-Rényi graph on n nodes and ⌊cn⌋ edges. We establish that the size of the maximum cut normalized by the number of nodes belongs to the interval [c/2 + 0.47523√c, c/2 + 0.55909√c] w.h.p. as n increases, for all sufficiently large c. We observe that every maximum-size cut satisfies a certain local optimality property, and we compute the expected number of cuts with a given value satisfying this local optimality property. Estimating this expectation amounts to solving a rather involved multi-dimensional large deviations problem.
We solve this underlying large deviation problem asymptotically as c increases and use it to obtain an improved upper bound on the Max-Cut value. The lower bound is obtained by an application of the second moment method, coupled with the same local optimality constraint, and is shown to work up to the stated lower bound value c/2 + 0.47523√c. We also obtain an improved lower bound of 1.36000n on the Max-Cut for a random cubic graph or any cubic graph with large girth, improving the previous best bound of 1.33773n. Matrix Completion is the problem of reconstructing a rank-k n × n matrix M from a sampling of its entries. We propose a new matrix completion algorithm using a novel sampling scheme based on a union of independent sparse random regular bipartite graphs. We show that under a certain incoherence assumption on M, and for the case when both the rank and the condition number of M are bounded, w.h.p. our algorithm recovers an ε-approximation of M in terms of the Frobenius norm using O(n log²(1/ε)) samples and in linear time O(n log²(1/ε)). This provides the best known bounds on both the sample complexity and the computational cost for reconstructing (approximately) an unknown low-rank matrix. The novelty of our algorithm lies in two new steps: thresholding singular values and rescaling singular vectors in the application of the "vanilla" alternating minimization algorithm. The structure of sparse random regular graphs is used heavily for controlling the impact of these regularization steps.
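The "vanilla" alternating minimization that the abstract's algorithm builds on can be sketched in a few lines: factor M ≈ UVᵀ and alternately solve least-squares problems for the rows of U and of V over the observed entries only. The thresholding/rescaling steps and the regular bipartite sampling scheme from the thesis are omitted; this baseline with a uniform random mask is only a sketch of the starting point:

```python
import numpy as np

def altmin_complete(M_obs, mask, k, n_iters=30, seed=0):
    """Vanilla alternating minimization for matrix completion.

    M_obs: matrix with observed entries (zero elsewhere),
    mask:  boolean matrix of observed positions, k: target rank.
    """
    rng = np.random.default_rng(seed)
    n, m = M_obs.shape
    U = rng.standard_normal((n, k))
    V = rng.standard_normal((m, k))
    for _ in range(n_iters):
        for i in range(n):                       # best row U[i] given V,
            idx = mask[i]                        # using only observed columns
            if idx.any():
                U[i], *_ = np.linalg.lstsq(V[idx], M_obs[i, idx], rcond=None)
        for j in range(m):                       # symmetric step for V
            idx = mask[:, j]
            if idx.any():
                V[j], *_ = np.linalg.lstsq(U[idx], M_obs[idx, j], rcond=None)
    return U @ V.T

# Rank-2 ground truth, roughly 60% of entries observed uniformly at random.
rng = np.random.default_rng(1)
M = rng.standard_normal((40, 2)) @ rng.standard_normal((2, 40))
mask = rng.random((40, 40)) < 0.6
M_hat = altmin_complete(M * mask, mask, k=2)
rel_err = np.linalg.norm(M_hat - M) / np.linalg.norm(M)
```

With a well-conditioned low-rank M and this sampling density, the relative Frobenius error drops quickly over the sweeps; the thesis's extra regularization steps are what make such convergence provable under its sampling scheme.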
Vialette, Stéphane. „Algorithmic Contributions to Computational Molecular Biology“. Habilitation à diriger des recherches, Université Paris-Est, 2010. http://tel.archives-ouvertes.fr/tel-00862069.
King, Stephen. „Higher-level algorithmic structures in the refinement calculus“. Thesis, University of Oxford, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.300129.
Hashemolhosseini, Sepehr. „Algorithmic component and system reliability analysis of truss structures“. Thesis, Stellenbosch : Stellenbosch University, 2013. http://hdl.handle.net/10019.1/85710.
ENGLISH ABSTRACT: Most of the parameters involved in the design and analysis of structures are of a stochastic nature. It is, therefore, of paramount importance to be able to perform a fully stochastic analysis of structures, both at component and at system level, to take into account the uncertainties involved in structural analysis and design. To the contrary, in practice, the (computerised) analysis of structures is based on a deterministic analysis which fails to address the randomness of design and analysis parameters. This means that an investigation of algorithmic methodologies for component and system reliability analysis can help pave the way towards the implementation of fully stochastic analysis of structures in a computer environment. This study is focused on algorithm development for component and system reliability analysis based on various proposed methodologies. Truss structures were selected for this purpose due to their simplicity as well as their wide use in industry. Nevertheless, the algorithms developed in this study can be used for other types of structures, such as moment-resisting frames, with some simple modifications. For a component-level reliability analysis of structures, different methods such as First Order Reliability Methods (FORM) and simulation methods are proposed. However, implementation of these methods for statically indeterminate structures is complex due to the implicit relation between the response of the structural system and the load effect. As a result, the algorithm developed for the purpose of component reliability analysis should be based on the concepts of Stochastic Finite Element Methods (SFEM), where a proper link between the finite element analysis of the structure and the reliability analysis methodology is ensured. In this study various algorithms are developed based on the FORM method, Monte Carlo simulation, and the Response Surface Method (RSM).
Using the FORM method, two methodologies are considered: one based on the development of a finite element code where the required alterations are made to the FEM code, and the other based on the use of a commercial FEM package. Different simulation methods are also implemented: Direct Monte Carlo Simulation (DMCS), Latin Hypercube Sampling Monte Carlo (LHCSMC), and Updated Latin Hypercube Sampling Monte Carlo (ULHCSMC). Moreover, RSM is used together with the simulation methods. Throughout the thesis, the efficiency of these methods is investigated. A Fully Stochastic Finite Element Method (FSFEM) with alterations to the finite element code seems the fastest approach, since the linking between the FEM package and the reliability analysis is avoided. Simulation methods can also be effectively used for the reliability evaluation, where ULHCSMC appears to be the most efficient method, followed by LHCSMC and DMCS. The response surface method is the least straightforward method for an algorithmic component reliability analysis; however, it is useful for the system reliability evaluation. For a system-level reliability analysis two methods were considered: the β-unzipping method and the branch-and-bound method. The β-unzipping method is based on a level-wise system reliability evaluation where the structure is modelled at different damage levels according to its degree of redundancy. In each level, so-called unzipping intervals are defined for the identification of the critical elements. The branch-and-bound method is based on the identification of different failure paths of the structure by expansion of the structural failure tree. The evaluation of the damaged states is the same for both methods. Furthermore, both methods lead to the development of a parallel-series model for the structural system. The only difference between the two methods is in the search approach used for the failure sequence identification.
It was shown that the β-unzipping method provides a better algorithmic approach for evaluating the system reliability compared to the branch-and-bound method. Nevertheless, the branch-and-bound method is more robust in the identification of structural failure sequences. One possible way to increase the efficiency of the β-unzipping method is to define bigger unzipping intervals in each level, which is possible through a computerised analysis. For such an analysis four major modules are required: a general intact-structure module, a damaged-structure module, a reliability analysis module, and a system reliability module. In this thesis different computer programs were developed for both system and component reliability analysis based on the developed algorithms. The computer programs are presented in the appendices of the thesis.
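Direct Monte Carlo Simulation, the baseline the abstract compares against, reduces component reliability to estimating the failure probability P_f = P(g(X) ≤ 0) for a limit state function g of the random parameters. A hedged toy example with a made-up linear limit state g = R − S (member resistance minus load effect) standing in for the finite element model:

```python
import numpy as np

def dmcs_failure_prob(limit_state, sample, n=100_000, seed=0):
    """Direct Monte Carlo estimate of P(limit_state(X) <= 0)."""
    rng = np.random.default_rng(seed)
    x = sample(rng, n)
    return np.mean(limit_state(x) <= 0.0)

# Toy limit state g = R - S with R ~ N(10, 1) and S ~ N(6, 1.5);
# here the exact answer is Phi(-(10 - 6) / sqrt(1 + 1.5**2)) ~ 0.013.
def sample(rng, n):
    return np.stack([rng.normal(10.0, 1.0, n), rng.normal(6.0, 1.5, n)], axis=1)

def g(x):
    return x[:, 0] - x[:, 1]          # failure when load effect exceeds resistance

p_f = dmcs_failure_prob(g, sample, n=200_000)
```

For statically indeterminate trusses, `g` is implicit (each sample triggers a finite element solve), which is exactly why the thesis couples the sampling loop to an FEM code; variance-reduced designs such as LHCSMC change only how `sample` draws its points.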
Breuils, Stéphane. „Structures algorithmiques pour les opérateurs d'algèbre géométrique et application aux surfaces quadriques“. Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC1142/document.
Geometric Algebra is considered a very intuitive tool for dealing with geometric problems, and it appears increasingly efficient and useful for computer graphics problems. Conformal Geometric Algebra includes circles, spheres, planes and lines as algebraic objects, and intersections between these objects are also algebraic objects. More complex objects such as conics and quadric surfaces can also be expressed and manipulated using an extension of Conformal Geometric Algebra. However, due to the high dimension of their representations in Geometric Algebra, currently available implementations of Geometric Algebra do not allow efficient realizations of these objects. In this thesis, we first present a Geometric Algebra implementation dedicated to both low and high dimensions. The proposed method is a hybrid solution that includes precomputed code with fast execution for low-dimensional vector spaces, which is roughly equivalent to the state-of-the-art method. For high-dimensional vector spaces, we propose runtime computations with low memory requirements. For these high-dimensional vector spaces, we introduce a new recursive scheme and we prove that the associated algorithms are efficient both in terms of computational and memory complexity. Furthermore, some rules are defined to select the most appropriate choice, according to the dimension of the algebra and the type of multivectors involved in the product. We show that the resulting implementation is well suited for high-dimensional spaces (e.g. an algebra of dimension 15) as well as for lower-dimensional spaces. The next part presents an efficient representation of quadric surfaces using Geometric Algebra. We define a novel Geometric Algebra framework, the Geometric Algebra of ℝ^{9,6}, to deal with quadric surfaces, where an arbitrary quadric surface is constructed by merely the outer product of nine points.
We show that the proposed framework enables us not only to intuitively represent quadric surfaces but also to construct objects using Conformal Geometric Algebra. In the proposed framework, the computation of the intersection of quadric surfaces, the normal vector, and the tangent plane of a quadric surface are provided. Finally, a computational framework for quadric surfaces is presented with the main operations required in computer graphics.
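The runtime products that such implementations must evaluate have a compact classical baseline: represent each basis blade as a bitmask over basis vectors and compute the sign of a product by counting the transpositions needed to put the basis vectors back in canonical order. This is the standard dictionary-of-blades approach (for a Euclidean signature), not the thesis's recursive scheme; it is the kind of baseline such work improves on:

```python
def reorder_sign(a_bits, b_bits):
    """Sign from reordering the basis vectors of blade b past those of blade a."""
    a = a_bits >> 1
    swaps = 0
    while a:
        swaps += bin(a & b_bits).count("1")   # vectors of b that a must cross
        a >>= 1
    return -1 if swaps & 1 else 1

def wedge(A, B):
    """Outer product of multivectors stored as {blade-bitmask: coefficient}."""
    out = {}
    for ab, ac in A.items():
        for bb, bc in B.items():
            if ab & bb:                       # shared basis vector: e_i ^ e_i = 0
                continue
            bits = ab | bb
            out[bits] = out.get(bits, 0.0) + reorder_sign(ab, bb) * ac * bc
    return {bits: c for bits, c in out.items() if c != 0.0}

e1, e2 = {0b001: 1.0}, {0b010: 1.0}
wedge(e1, e2)   # the bivector e1 ^ e2
wedge(e2, e1)   # anticommutativity falls out of the swap count
```

The cost of this dictionary representation grows with the number of nonzero blades (up to 2^d in dimension d), which is precisely the memory/time pressure that motivates precomputed code in low dimensions and recursive runtime schemes in high ones.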
Raymond, Jean-Florent. „Structural and algorithmic aspects of partial orderings of graphs“. Thesis, Montpellier, 2016. http://www.theses.fr/2016MONTT289.
The central theme of this thesis is the study of the properties of classes of graphs defined by forbidden substructures, and their applications. The first direction that we follow concerns well-quasi-orders. Using decomposition theorems on graph classes forbidding one substructure, we identify those that are well-quasi-ordered. The orders and substructures that we consider are those related to the notions of contraction and induced minor. Then, still considering classes of graphs defined by forbidden substructures, we obtain bounds on invariants such as degree, treewidth, tree-cut width, and a new invariant generalizing the girth. The third direction is the study of the links between the combinatorial invariants related to problems of packing and covering of graphs. In this direction, we establish new connections between these invariants for some classes of graphs. We also present algorithmic applications of the results.
Bessy, Stéphane. „Some problems in graph theory and graphs algorithmic theory“. Habilitation à diriger des recherches, Université Montpellier II - Sciences et Techniques du Languedoc, 2012. http://tel.archives-ouvertes.fr/tel-00806716.
Mohan, Rathish. „Algorithmic Optimization of Sensor Placement on Civil Structures for Fault Detection and Isolation“. University of Cincinnati / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1353156107.
Ballage, Marion. „Algorithmes de résolution rapide de problèmes mécaniques sur GPU“. Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30122/document.
Generating a conformal mesh on complex geometries leads to large model sizes in structural finite element simulations. The meshing time is directly linked to the complexity of the geometry and can contribute significantly to the total turnaround time. Graphics processing units (GPUs) are highly parallel programmable processors, delivering real performance gains on computationally complex, large problems. GPUs are used here to implement a new finite element method on a Cartesian mesh. A Cartesian mesh is well adapted to the parallelism needed by GPUs and reduces the meshing time to almost zero. The novel method relies on the finite element method and the extended finite element formulation. The extended finite element method was introduced in the field of fracture mechanics. It consists in enriching the basis functions to take care of the geometry and the interface. This method does not need a conformal mesh to represent cracks and avoids remeshing during their propagation. Our method is based on the extended finite element method, with the geometry defined implicitly, which allows for a good approximation of the geometry and boundary conditions without a conformal mesh. To represent the model on a Cartesian grid, we use a level set representing a density. This density is greater than 0.5 inside the domain and less than 0.5 outside; it takes the value 0.5 on the boundary. A new integration technique is proposed, adapted to this geometrical representation. For an element cut by the level set, only the part full of material has to be integrated, so Gauss quadrature is no longer suitable. We introduce a quadrature method with integration points on a dense Cartesian grid. In order to reduce the computational effort, a learning approach is then considered to form the elementary stiffness matrices as functions of the density values at the vertices of the elements. This learning method reduces the computation time of the stiffness matrices.
Results obtained after analysis by the finite element method or by the novel finite element method can have a large storage size, depending on the model complexity and the accuracy of the resolution scheme. Due to the limited onboard memory of graphics processing units, the result data are compressed. We compress the model and the finite element results with a wavelet transform. The compression helps with the storage issue and also with data visualization.
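The integration rule described above (only the material part of a cut element contributes, evaluated at points of a dense Cartesian grid) can be illustrated on a toy case: recovering the area of a disk whose geometry is given implicitly by a density field that is above 0.5 inside and below 0.5 outside. The 0.5 threshold follows the abstract; everything else is a simplified stand-in for the FEM pipeline:

```python
import numpy as np

def density_disk(x, y, cx=0.5, cy=0.5, r=0.3, eps=0.02):
    """Smooth density: > 0.5 inside the disk, < 0.5 outside, 0.5 on the boundary."""
    d = r - np.hypot(x - cx, y - cy)       # signed distance to the boundary
    return 0.5 * (1.0 + np.tanh(d / eps))

def material_area(density, n=400):
    """Quadrature on a dense Cartesian grid over the unit square:
    each integration point with density > 0.5 contributes its cell area."""
    h = 1.0 / n
    xs = (np.arange(n) + 0.5) * h          # cell-centered integration points
    X, Y = np.meshgrid(xs, xs)
    return np.sum(density(X, Y) > 0.5) * h * h

area = material_area(density_disk)         # exact area is pi * 0.3**2 ~ 0.2827
```

The error of this piecewise-constant rule scales with the grid spacing times the boundary length, which is why a dense grid is needed only inside cut elements; fully interior elements can keep standard Gauss quadrature.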
Bricage, Marie. „Modélisation et Algorithmique de graphes pour la construction de structures moléculaires“. Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLV031/document.
In this thesis, we present an algorithmic approach for generating construction guides for organic molecular cages. These semi-molecular architectures have a defined internal space capable of trapping a target molecule called the substrate. Many works propose to generate organic molecular cages obtained from symmetrical structures, which have a good complexity, but they are not specific because they do not take precise targets into account. The proposed approach makes it possible to generate guides for the construction of organic molecular cages specific to a given substrate. In order to ensure the specificity of the molecular cage for the target substrate, an intermediate structure, which is an expansion of the envelope of the target substrate, is used. This structure defines the shape of the space in which the substrate is trapped. Small sets of atoms, called molecular binding patterns, are then integrated into this intermediate structure. These molecular patterns are the sets of atoms needed by molecular cages to allow them to interact with the substrate and capture it.
Gaillard, Anne-Laure. „Identification de motifs au sein des structures biologiques arborescentes“. Phd thesis, Université Sciences et Technologies - Bordeaux I, 2011. http://tel.archives-ouvertes.fr/tel-00652227.
Pimenta, Tales Cleber. „Optimal design of VLSI structures with built-in self test based on reduced pseudo-exhaustive testing“. Ohio : Ohio University, 1992. http://www.ohiolink.edu/etd/view.cgi?ohiou1173755841.
Kropf, Carsten. „Efficient Reorganisation of Hybrid Index Structures Supporting Multimedia Search Criteria“. Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-216425.
Keyhani, Ali. „A Study On The Predictive Optimal Active Control Of Civil Engineering Structures“. Thesis, Indian Institute of Science, 2000. http://hdl.handle.net/2005/223.
Der volle Inhalt der QuelleTchvagha, Zeine Ahmed. „Contribution à l’optimisation multi-objectifs sous contraintes : applications à la mécanique des structures“. Thesis, Normandie, 2018. http://www.theses.fr/2018NORMIR13/document.
The objective of this thesis is the development of multi-objective optimization methods for solving mechanical design problems. Indeed, most real problems in the field of mechanical structures have several objectives that are often antagonistic. For example, one may design structures by optimizing their weight, their size, and their production costs. The goal of multi-objective optimization methods is the search for compromise solutions between objectives, given the impossibility of satisfying all of them simultaneously. Metaheuristics are optimization methods capable of solving multi-objective optimization problems in a reasonable computation time without guaranteeing the optimality of the solution(s). In recent years, these algorithms have been successfully applied to problems of structural mechanics. In this thesis, two metaheuristics have been developed for the resolution of multi-objective optimization problems in general, and of mechanical structure design in particular. The first algorithm, called MOBSA, uses the crossover and mutation operators of the BSA algorithm. The second, named NNIA+X, is a hybridization of an immune algorithm and three crossover operators inspired by the original crossover operator of the BSA algorithm. To evaluate the effectiveness and efficiency of these two algorithms, tests on benchmark problems from the literature were carried out, with a comparison against algorithms well known in the field of multi-objective optimization. The comparison results, using metrics widely used in the literature, have shown that our two algorithms can compete with their predecessors.
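Both algorithms return compromise solutions in the Pareto sense: a solution dominates another if it is no worse in every objective and strictly better in at least one. A small, generic non-dominated filter, independent of BSA/NNIA+X (whose operators are not reproduced here) and assuming all objectives are minimized:

```python
def dominates(a, b):
    """a dominates b (minimization): a <= b everywhere and a < b somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep the non-dominated points, e.g. (weight, cost) pairs for a design."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

designs = [(2.0, 9.0), (3.0, 7.0), (4.0, 8.0), (5.0, 4.0), (6.0, 4.5)]
pareto_front(designs)  # → [(2.0, 9.0), (3.0, 7.0), (5.0, 4.0)]
```

The comparison metrics mentioned in the abstract (hypervolume, spread, etc.) all score approximations of exactly this front.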
Hachimi, Hanaa. „Hybridations d'algorithmes métaheuristiques en optimisation globale et leurs applications“. Phd thesis, INSA de Rouen, 2013. http://tel.archives-ouvertes.fr/tel-00905604.
Saleh, Amgad. „Parallel algorithms for integrated structural/control optimization of large structures“. The Ohio State University, 1996. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487940308432707.
Bae, Sung Eun. „Sequential and Parallel Algorithms for the Generalized Maximum Subarray Problem“. Thesis, University of Canterbury. Computer Science and Software Engineering, 2007. http://hdl.handle.net/10092/1202.
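For context, the one-dimensional maximum subarray problem that this thesis generalizes has a classical linear-time solution, Kadane's algorithm; the generalized variants (two-dimensional arrays, K disjoint subarrays) build on the same recurrence:

```python
def max_subarray(xs):
    """Kadane's algorithm: maximum sum over nonempty contiguous subarrays, O(n)."""
    best = cur = xs[0]
    for x in xs[1:]:
        cur = max(x, cur + x)    # extend the current run or restart at x
        best = max(best, cur)
    return best

max_subarray([3, -5, 2, 4, -1, 6, -9, 4])  # → 11, from the subarray [2, 4, -1, 6]
```

In two dimensions, fixing a pair of rows and running this recurrence over column sums gives the straightforward O(n³) algorithm that the sequential and parallel improvements in the thesis start from.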
Bonichon, Nicolas. „Quelques algorithmes entre le monde des graphes et les nuages de points“. Habilitation à diriger des recherches, Université Sciences et Technologies - Bordeaux I, 2013. http://tel.archives-ouvertes.fr/tel-00922501.
Lopes, Rodrigo Aranha Pereira. „Computational strategies applied to product design“. Master's thesis, Universidade de Lisboa, Faculdade de Arquitetura, 2018. http://hdl.handle.net/10400.5/17993.
ABSTRACT: As pointed out on different occasions by both Richard Sennett and Vilém Flusser, practice and theory, technique and expression, art and technology, maker and user once shared a common ground. Throughout history, however, they have become divided, and design stands in between. This research firstly aims to contribute to diminishing this historical inheritance by providing a workflow for designers based on computational strategies. The present study applies this approach to the design and building of a surfboard. The goal is to develop a co-design platform allowing users to generate their own tailor-made surfboard by means of algorithmic/parametric modeling (Grasshopper and ShapeDiver). A second aspect critically considers the materials used in the surf industry, with the objective of developing products using materials that are less harmful to the environment and with a greater capacity for control and alteration with regard to performance capabilities. In particular, this proposal aims to develop an algorithm that can be used to generate objects whose inner core is composed of paper structures. The specific object to be generated in this case is a surfboard.
Pham, Hong Phong. „Studies on Optimal Colorful Structures in Vertex-Colored Graphs“. Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS528.
In this thesis, we study different maximum colorful problems in vertex-colored graphs. We focus on finding structures with the maximum possible number of colors by efficient polynomial-time algorithms, or on proving that these problems are NP-hard for specific graph classes. In particular, we first study the maximum colorful matching problem. We show that this problem can be efficiently solved in polynomial time. Moreover, we also consider a specific version of this problem, namely tropical matching, that is, finding a matching containing all colors of the original graph, if one exists. Similarly, polynomial-time algorithms are also provided for the problem of tropical matching with minimum cardinality and the problem of maximal tropical matching with minimum cardinality. Then, we study the maximum colorful paths problem. There are two versions of this problem: the shortest tropical path problem, i.e., finding a tropical path with the minimum total weight, and the maximum colorful path problem, i.e., finding a path with the maximum possible number of colors. We show that both versions of this problem are NP-hard for directed acyclic graphs, cactus graphs and interval graphs, for which the longest path problem is easy. Moreover, we also provide a fixed-parameter algorithm for the former in general graphs, and several polynomial-time algorithms for the latter in specific graphs, including bipartite chain graphs, threshold graphs, trees, block graphs, and proper interval graphs. Next, we consider the maximum colorful cycles problem. We first show that the problem is NP-hard even for simple classes of graphs such as split graphs, biconnected graphs, and interval graphs. Then we provide polynomial-time algorithms for the classes of threshold graphs, bipartite chain graphs and proper interval graphs. Later, we study the maximum colorful cliques problem.
We first show that the problem is NP-hard even for several cases where the maximum clique problem is easy, such as complement graphs of bipartite permutation graphs, complement graphs of bipartite convex graphs, and unit disk graphs, and also for properly vertex-colored graphs. Next, we propose an XP parameterized algorithm and polynomial-time algorithms for complement graphs of bipartite chain graphs, complete multipartite graphs and complement graphs of cycle graphs. Finally, we focus on the maximum colorful independent set problem. We first prove that the problem is NP-hard even for some cases where the maximum independent set problem is easy, such as cographs and P₅-free graphs. Next, we provide polynomial-time algorithms for cluster graphs and trees.
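As a concrete illustration of the objective in these problems (the number of distinct colors covered by a matching), here is a brute-force sketch; it is exhaustive, suitable only for tiny instances, and is not the thesis's polynomial-time algorithm — the function and variable names are ours.

```python
from itertools import combinations

def colorful_matching_bruteforce(edges, color):
    """Exhaustively search all matchings of a small vertex-colored graph
    and return one covering the maximum number of distinct colors.
    edges: list of (u, v) pairs; color: dict vertex -> color."""
    best, best_colors = [], 0
    for k in range(len(edges), 0, -1):
        for subset in combinations(edges, k):
            verts = [v for e in subset for v in e]
            if len(verts) != len(set(verts)):        # shared vertex: not a matching
                continue
            ncol = len({color[v] for v in verts})
            if ncol > best_colors:
                best, best_colors = list(subset), ncol
    return best, best_colors
```

On a path 1-2-3-4 with colors r, g, b, r, the matching {(1,2), (3,4)} covers all three colors.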
Bliven, Spencer Edward. „Structure-Preserving Rearrangements: Algorithms for Structural Comparison and Protein Analysis“. Thesis, University of California, San Diego, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3716489.
Protein structure is fundamental to a deep understanding of how proteins function. Since structure is highly conserved, structural comparison can provide rich information about the evolution and function of protein families. The Protein Data Bank (PDB) continues to grow rapidly, providing copious opportunities for advancing our understanding of proteins through large-scale searches and structural comparisons. In this work I present several novel structural comparison methods for specific applications, and apply structure comparison tools systematically to better understand global properties of protein fold space.
Circular permutation describes a relationship between two proteins where the N-terminal portion of one protein is related to the C-terminal portion of the other. Proteins that are related by a circular permutation generally share the same structure despite the rearrangement of their primary sequence. This non-sequential relationship makes them difficult for many structure alignment tools to detect. Combinatorial Extension for Circular Permutations (CE-CP) was developed to align proteins that may be related by a circular permutation. It is widely available due to its incorporation into the RCSB PDB website.
Symmetry and structural repeats are common in protein structures at many levels. The CE-Symm tool was developed in order to detect internal pseudosymmetry within individual polypeptide chains. Such internal symmetry can arise from duplication events, so aligning the individual symmetry units provides insights about conservation and evolution. In many cases, internal symmetry can be shown to be important for a number of functions, including ligand binding, allostery, folding, stability, and evolution.
Structural comparison tools were applied comprehensively across all PDB structures for systematic analysis. Pairwise structural comparisons of all proteins in the PDB have been computed using the Open Science Grid computing infrastructure, and are kept continually up-to-date with the release of new structures. These provide a network-based view of protein fold space. CE-Symm was also applied to systematically survey the PDB for internally symmetric proteins. It is able to detect symmetry in ~20% of all protein families. Such PDB-wide analyses give insights into the complex evolution of protein folds.
Maria, Clément. „Algorithmes et structures de données en topologie algorithmique“. Thesis, Nice, 2014. http://www.theses.fr/2014NICE4081/document.
The theory of homology generalizes the notion of connectivity in graphs to higher dimensions. It defines a family of groups on a domain, described discretely by a simplicial complex that captures the connected components, the holes, the cavities and their higher-dimensional equivalents. In practice, the generality and flexibility of homology allow the analysis of complex data, interpreted as point clouds in metric spaces. The theory of persistent homology introduces a robust notion of homology for topology inference. Its applications are varied and range from the description of high-dimensional configuration spaces of complex dynamical systems, to classification of shapes under deformations and learning in medical imaging. In this thesis, we explore the algorithmic ramifications of persistent homology. We first introduce the simplex tree, an efficient data structure to construct and maintain high-dimensional simplicial complexes. We then present a fast implementation of persistent cohomology via the compressed annotation matrix data structure. We also refine the computation of persistence by describing ideas of homological torsion in this framework, and introduce the modular reconstruction method for computation. Finally, we present an algorithm to compute zigzag persistent homology, an algebraic generalization of persistence. To do so, we introduce new local transformation theorems in quiver representation theory, called diamond principles. All algorithms are implemented in the computational library Gudhi.
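A minimal sketch of the simplex-tree idea may help fix intuition: each simplex is stored as the path of its sorted vertex labels in a trie. This is our simplified illustration, not the Gudhi implementation.

```python
from itertools import combinations

class SimplexTree:
    """Minimal simplex-tree sketch: a simplex {v0 < v1 < ...} is the trie
    path v0 -> v1 -> ...; inserting a simplex also inserts all its faces."""
    def __init__(self):
        self.root = {}                       # vertex label -> child dict

    def insert(self, simplex):
        vs = sorted(simplex)
        for k in range(1, len(vs) + 1):
            for face in combinations(vs, k): # every subsequence is a face
                node = self.root
                for v in face:
                    node = node.setdefault(v, {})

    def contains(self, simplex):
        node = self.root
        for v in sorted(simplex):
            if v not in node:
                return False
            node = node[v]
        return True
```

Membership queries then walk a single root-to-node path, one step per vertex of the query simplex.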
Gilardet, Mathieu. „Étude d'algorithmes de restauration d'images sismiques par optimisation de forme non linéaire et application à la reconstruction sédimentaire“. Phd thesis, Université de Pau et des Pays de l'Adour, 2013. http://tel.archives-ouvertes.fr/tel-00952964.
Salikhov, Kamil. „Algorithmes et structures de données efficaces pour l’indexation de séquences d’ADN“. Thesis, Paris Est, 2017. http://www.theses.fr/2017PESC1232/document.
The amount of data generated by Next Generation Sequencing technologies has increased exponentially in recent years. Storing, processing and transferring this data become more and more challenging tasks. To cope with them, data scientists must develop ever more efficient approaches and techniques. In this thesis we present efficient data structures and algorithmic methods for the problems of approximate string matching, genome assembly, read compression and taxonomy-based metagenomic classification. Approximate string matching is an extensively studied problem with countless published papers, both theoretical and practical. In bioinformatics, the read mapping problem can be regarded as approximate string matching. Here we study string matching strategies based on bidirectional indices. We define a framework, called search schemes, to work with search strategies of this type, then provide a probabilistic measure for the efficiency of search schemes, prove several combinatorial properties of efficient search schemes and provide experimental computations supporting the superiority of our strategies. Genome assembly is one of the basic problems of bioinformatics. Here we present the Cascading Bloom filter data structure, which improves on the standard Bloom filter and can be applied to several problems such as genome assembly. We provide theoretical and experimental results proving properties of the Cascading Bloom filter. We also show how the Cascading Bloom filter can be used to solve another important problem, read compression. Another problem studied in this thesis is metagenomic classification. We present a BWT-based approach that improves the BWT-index for quick and memory-efficient k-mer search. We mainly focus on data structures that improve the speed and memory usage of the classical BWT-index for our application.
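Since the Cascading Bloom filter builds on the standard Bloom filter, a plain textbook Bloom filter sketch may help fix ideas; this is a generic version with our own class name and parameters, not the thesis's data structure (which chains several such filters, each storing the false positives of the previous level).

```python
import hashlib

class BloomFilter:
    """Plain Bloom filter sketch: k hash positions per item in an m-bit
    array; membership queries may yield false positives, never false
    negatives."""
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = bytearray(m)

    def _positions(self, item):
        # derive k independent positions by salting a single hash
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def __contains__(self, item):
        return all(self.bits[p] for p in self._positions(item))
```

For k-mer sets, items would be the k-mers themselves; the false-positive rate is tuned via m and k.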
Borges, André de Ávila. „Otimização de forma e paramétrica de estruturas treliçadas através dos métodos meta-heurísticos Harmony Search e Firefly Algorithm“. reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2013. http://hdl.handle.net/10183/96635.
Structural optimization is a relatively new area that has been increasingly exploited. There are many classical methods, and newer ones are emerging to compete on efficiency, reliability and speed in obtaining an optimal result. The algorithms are classified into deterministic algorithms, which use gradient information, i.e., the values of the functions and their derivatives, and meta-heuristic algorithms, which are probabilistic optimization methods not based on gradients, i.e., they use only objective-function evaluations. Two relatively recent meta-heuristic algorithms are presented: Harmony Search, based on musical improvisation in search of the perfect harmony, and the Firefly Algorithm, which is inspired by the behavior of the light of fireflies. Several benchmarks of 2-D and 3-D trusses considering size and shape optimization, with stress, displacement, buckling and natural frequency constraints, are presented to demonstrate the effectiveness of the methods. The results are compared with those of other authors using different methods found in the literature. The results indicate that the optimization algorithms studied in this work are better than, or as efficient as, the others. Finally, the methods are applied to the structure of an adapted engineering design.
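To fix ideas, here is a minimal Firefly Algorithm sketch for continuous minimization, following Yang's generic scheme rather than the exact variant used in the thesis; all parameter values are illustrative assumptions.

```python
import math, random

def firefly_minimize(f, dim=2, n=20, iters=50, alpha=0.25,
                     beta0=1.0, gamma=0.5, seed=7):
    """Generic Firefly Algorithm sketch: dimmer fireflies move toward
    brighter (lower-cost) ones with attractiveness beta0*exp(-gamma*r^2),
    plus a small random step that is gradually reduced."""
    rng = random.Random(seed)
    X = [[rng.uniform(-2, 2) for _ in range(dim)] for _ in range(n)]
    fit = [f(x) for x in X]
    best_i = min(range(n), key=fit.__getitem__)
    best_x, best_val = list(X[best_i]), fit[best_i]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if fit[j] < fit[i]:          # j is brighter: move i toward j
                    r2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    X[i] = [a + beta * (b - a) + alpha * (rng.random() - 0.5)
                            for a, b in zip(X[i], X[j])]
                    fit[i] = f(X[i])
                    if fit[i] < best_val:    # keep an elitist best-so-far
                        best_x, best_val = list(X[i]), fit[i]
        alpha *= 0.95                         # cool the random step
    return best_x, best_val
```

On the sphere benchmark (global minimum 0 at the origin) the population contracts toward the brightest firefly while the decaying random step refines the solution.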
Sigrist, Zoé. „Contribution à l'identification de systèmes non-linéaires en milieu bruité pour la modélisation de structures mécaniques soumises à des excitations vibratoires“. Thesis, Bordeaux 1, 2012. http://www.theses.fr/2012BOR14655/document.
This PhD deals with the characterisation of mechanical structures, through their structural parameters, when only observations disturbed by additive measurement noises, assumed to be zero-mean, white and Gaussian, are available. For this purpose, we suggest using discrete-time models with distinct linear and nonlinear parts. The first allows the structural parameters to be retrieved, whereas the second gives information on the nonlinearity. When dealing with non-recursive Volterra series, we propose an errors-in-variables (EIV) method to jointly estimate the noise variances and the Volterra kernels. We also suggest a modified unbiased LMS algorithm to estimate the model parameters, provided that the input-noise variance is known. When dealing with recursive polynomial models, we propose two methods using evolutionary algorithms. The first includes a stop protocol that takes into account the output-noise variance. In the second, the fitness functions are based on correlation criteria in which the noise influence is removed or compensated for.
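For orientation, a plain (uncompensated) LMS identification loop looks as follows; this is the textbook algorithm, not the thesis's bias-compensated variant for noisy inputs, and the FIR signal model in the usage note is our own example.

```python
def lms_identify(x, d, order=3, mu=0.05):
    """Textbook LMS system identification sketch: adapt FIR coefficients w
    so that the dot product of w with the last `order` input samples
    tracks the desired output d[n]."""
    w = [0.0] * order
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]                 # most recent sample first
        y = sum(wi * ui for wi, ui in zip(w, u)) # filter output
        e = d[n] - y                             # a-priori error
        w = [wi + mu * e * ui for wi, ui in zip(w, u)]
    return w
```

With a noiseless desired signal d[n] = 0.5·x[n-1] − 0.2·x[n-2] + 0.1·x[n-3] and white input, the estimate converges to those three coefficients; with noisy inputs this plain update is biased, which is what the thesis's modified algorithm corrects.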
Renaud, Yoan. „Quelques aspects algorithmiques sur les systèmes de fermeture“. Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2008. http://tel.archives-ouvertes.fr/tel-00731341.
Kakani, Naveen Kumar. „Algorithms for Efficient Utilization of Wireless Bandwidth and to Provide Quality-of-Service in Wireless Networks“. Thesis, University of North Texas, 2000. https://digital.library.unt.edu/ark:/67531/metadc2635/.
Mirza, Shemila. „Algorithms for finite structures“. Thesis, Royal Holloway, University of London, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.369096.
Robert, Michel. „Contribution au développement du compilateur structurel P.R.I.N.T. algorithmes d'évaluation des performances temporelles des structures CMOS“. Grenoble 2 : ANRT, 1987. http://catalogue.bnf.fr/ark:/12148/cb376094250.
Kosová, Barbora. „Design parametrické ortézy horní končetiny“. Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2017. http://www.nusl.cz/ntk/nusl-319490.
Vera, Antonio. „Analyses de l'algorithme de Gauss : applications à l'analyse de l'algorithme LLL“. Phd thesis, Université de Caen, 2009. http://tel.archives-ouvertes.fr/tel-01073359.
Do, Viet Phuong. „Développement et applications de méthodes bioinformatiques pour l'identification des répétitions en tandem dans les structures des protéines“. Thesis, Montpellier, 2016. http://www.theses.fr/2016MONTT072.
In general, protein structures can be divided into repetitive and aperiodic structures. Most of the aperiodic structures are globular proteins. The repetitive proteins contain arrays of repeats that are adjacent to each other, called Tandem Repeats (TRs). Proteins containing TRs are abundant and have fundamental functional importance. Numerous studies have demonstrated the involvement of such TR-containing proteins in human diseases. Furthermore, genetic instability of these regions can lead to emerging infection threats. Additionally, TR-containing structures have generated significant interest with respect to protein design, as they can make excellent scaffolds for specific recognition of target molecules. Therefore, the discovery of these domains and the understanding of their sequence–structure–function relationship promise to be a fertile direction for research. The growth of structural genomics initiatives, in combination with improvements in crystallographic and NMR techniques aimed at non-globular proteins, has resulted in an increase in structurally elucidated TR proteins. This has necessitated the development of classification schemes. Structural repeats were broadly divided into five classes, mainly based on repeat length: Class I – crystalline aggregates; Class II – fibrous structures such as collagen; Class III – elongated structures where the repetitive units require each other for structural stability, such as solenoid proteins; Class IV – closed repetitive structures, such as TIM-barrels; and Class V – beads-on-a-string structures, such as tandems of Ig-fold domains. Despite this progress, the majority of bioinformatics approaches have focused on non-repetitive globular proteins. In recent years, efforts have been made to develop bioinformatics tools for the detection and analysis of repetitive elements in protein structures (3D TRs).
Depending on the size and character of the repeats, some methods perform better than others, but currently no single approach covers the whole range of repeats. This served as a motivation for the development of our method, called the TAndem PrOtein detector (TAPO). TAPO exploits periodicities of atomic coordinates and other types of structural representation, including strings generated by conformational alphabets, residue contact maps, and arrangements of vectors of secondary structure elements. Currently, seven feature-based scores produced by TAPO are combined using a Support Vector Machine, producing a score that enables the differentiation between proteins with and without 3D TRs. TAPO shows an improved performance over other cutting-edge methods, achieving 94% sensitivity and 97% specificity on the current benchmark. The development of TAPO provided new opportunities for large-scale analysis of proteins with 3D TRs. According to our analysis of the PDB using TAPO, 19% of proteins contain 3D TRs. The large-scale analysis of the 3D TR structures in the PDB also allowed us to discover several new types of TR structures that were absent from the existing classification. Some of them are described in the thesis manuscript. This suggests that TAPO can be used to regularly update the collection and classification of existing repetitive structures. In particular, a comprehensive analysis of 3D TRs related to the Rossmann Fold (RF) was undertaken. Our special interest in RFs was based on the observation that many proteins with RFs represent borderline cases between repetitive and non-repetitive structures. In principle, α-helix-β-strand units of RFs should have a strong potential to stack one over the other, forming repetitive structures. To probe the question of how frequently RFs form long arrays of stacked repeats, we used TAPO to select known RF-containing structures and classified them.
Our analysis shows that typical RFs cannot be clearly defined as repetitive; rather, they are part of globular structures with up to 3 αβ-repeats. We provide some explanations for this surprising observation.
Benbernou, Nadia M. „Geometric algorithms for reconfigurable structures“. Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/68480.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 159-163).
In this thesis, we study three problems related to geometric algorithms for reconfigurable structures. In the first problem, strip folding, we present two universal hinge patterns for a strip of material that enable the folding of any integral orthogonal polyhedron of only a constant factor smaller surface area, and using only a constant number of layers at any point. These geometric results offer a new way to build programmable matter that is substantially more efficient than what is possible with a square N x N sheet of material, which can achieve all integral orthogonal shapes only of surface area O(N) and may use Θ(N²) layers at one point [BDDO10]. To achieve these results, we develop new approximation algorithms for milling the surface of an integral orthogonal polyhedron of genus 0, which simultaneously give a 2-approximation in tour length and an 8/3-approximation in the number of turns. Both length and turns consume area when folding a strip, so we build on previous approximation algorithms for these two objectives from 2D milling. In the second problem, maxspan, the goal is to maximize the distance between the endpoints of a fixed-angle chain. We prove a necessary and sufficient condition characterizing maxspan configurations. The condition states that a fixed-angle chain is in maxspan configuration if and only if the configuration is line-piercing (that is, the line through each of the links intersects the line segment through the endpoints of the chain in the natural order). We call this the Line-Piercing Theorem. The Line-Piercing Theorem was originally proved by [BS08] using Morse-Bott theory and Mayer-Vietoris sequences, but we give an elementary proof based on purely geometric arguments. The Line-Piercing Theorem also leads to efficient algorithms for computing the maxspan of fixed-angle chains.
In the third problem, efficient reconfiguration of pivoting tiles, we present an algorithmic framework for reconfiguring a modular robot consisting of identical 2D tiles, where the basic move is to pivot one tile around another at a shared vertex. The robot must remain connected and avoid collisions throughout all moves. For square tiles, and hexagonal tiles on either a triangular or hexagonal lattice, we obtain optimal O(n²)-move reconfiguration algorithms. In particular, we give the first proofs of universal reconfigurability for the first two cases, and generalize a previous result for the third case. We also consider a model analyzed by Dumitrescu and Pach [DP06] where tiles slide instead of pivot (making it easier to avoid collisions), and obtain an optimal O(n²)-move reconfiguration algorithm, improving their earlier bound.
by Nadia M. Benbernou.
Ph.D.
Chen, Calvin Ching-Yuen. „Efficient Parallel Algorithms and Data Structures Related to Trees“. Thesis, University of North Texas, 1991. https://digital.library.unt.edu/ark:/67531/metadc332626/.
Wang, Wei. „Alignement pratique de structure-séquence d'ARN avec pseudonœuds“. Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS563/document.
Aligning macromolecules such as proteins, DNAs and RNAs in order to reveal, or conversely exploit, their functional homology is a classic challenge in bioinformatics, with far-reaching applications in structure modelling and genome annotation. In the specific context of complex RNAs, featuring pseudoknots, multiple interactions and non-canonical base pairs, multiple algorithmic solutions and tools have been proposed for the structure-sequence alignment problem. However, such tools are seldom used in practice, due in part to their extreme computational demands, and because of their inability to support general types of structures. Recently, Rinaudo et al. gave a fully general parameterised algorithm for structure-sequence comparison, which is able to take as input any type of pseudoknotted structure. The parameterised algorithm is a dynamic programming algorithm based on tree decomposition. To accelerate the dynamic programming without losing too much accuracy, we introduce a banded dynamic programming scheme. Three algorithms are then introduced to compute suboptimal structure-sequence alignments. Furthermore, we introduce the notion of Maximum Expected structure-sequence Alignment (MEA) to compute an alignment with maximum expected accuracy over a set of alignments. The Boltzmann match probabilities are computed with the inside-outside algorithm. The algorithms are implemented in a software package named LiCoRNA (aLignment of Complex RNAs). We first evaluate the performance of LiCoRNA on the seed alignments of the pseudoknotted RFAM families. Compared to state-of-the-art algorithms, LiCoRNA shows generally equivalent or better results than its competitors. Given the high accuracy shown by LiCoRNA, we further curate the full RFAM pseudoknotted alignments. The reason we realign the full alignments is that the covariance model does not support pseudoknots, which may lead to misalignments when building the full alignment.
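The banding idea can be illustrated on a simpler dynamic program (our illustration; the thesis applies banding to a tree-decomposition DP, not to edit distance): cells farther than `band` from the diagonal are never computed, so the result is exact only when the optimal alignment stays inside the band.

```python
def banded_edit_distance(a, b, band=3):
    """Banded DP sketch: compute edit distance while restricting the DP
    matrix to cells with |i - j| <= band; returns None when the band
    provably cannot contain an alignment."""
    INF = float("inf")
    n, m = len(a), len(b)
    if abs(n - m) > band:
        return None                     # the optimum must leave the band
    prev = {0: 0}
    for j in range(1, min(m, band) + 1):
        prev[j] = j                     # first row, inside the band
    for i in range(1, n + 1):
        cur = {}
        lo, hi = max(0, i - band), min(m, i + band)
        for j in range(lo, hi + 1):
            best = INF
            if j == 0:
                best = i                                 # first column
            if j - 1 in cur:
                best = min(best, cur[j - 1] + 1)         # insertion
            if j in prev:
                best = min(best, prev[j] + 1)            # deletion
            if j - 1 in prev:
                best = min(best, prev[j - 1] + (a[i - 1] != b[j - 1]))
            cur[j] = best
        prev = cur
    return prev.get(m)
```

The work per row drops from O(m) to O(band), which is the same speed/accuracy trade-off as in the banded structure-sequence DP.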
Jenatton, Rodolphe. „Structured sparsity-inducing norms : statistical and algorithmic properties with applications to neuroimaging“. Phd thesis, École normale supérieure de Cachan - ENS Cachan, 2011. http://tel.archives-ouvertes.fr/tel-00668379.
Scholefield, Adam. „Quadtree structured approximation algorithms“. Thesis, Imperial College London, 2014. http://hdl.handle.net/10044/1/18900.
Thiebaut, Jocelyn. „Algorithmic and structural results on directed cycles in dense digraphs“. Thesis, Montpellier, 2019. http://www.theses.fr/2019MONTS059.
In this thesis, we are interested in algorithmic and structural problems of (oriented) cycle packing in dense digraphs. These problems are mainly motivated by understanding the structure of such graphs, but also by the fact that many algorithmic problems are easy (i.e., solvable in polynomial time) on acyclic digraphs while they are NP-hard in the general case. More specifically, we first study the packing of cycles and the packing of triangles in tournaments. These problems are the two dual problems (from a linear programming point of view) of feedback arc set and feedback vertex set, which have received a lot of attention in the literature. Among other things, we show that there is no polynomial algorithm to find a maximum collection of vertex- or arc-disjoint cycles (respectively triangles) in tournaments, unless P = NP. We are also interested in approximation algorithms and the parameterized complexity of these problems. Then, we study these problems in the specific case where the tournament admits a feedback arc set which is a matching. Such tournaments are said to be sparse. Surprisingly, the problem remains difficult in the case of vertex-disjoint triangles, but the packing of arc-disjoint triangles and the packing of arc-disjoint cycles become polynomial. Thus, we explore the approximation and parameterized complexity of the vertex-disjoint case in sparse tournaments. Finally, we answer positively a structural conjecture on k-regular bipartite tournaments by Manoussakis, Song and Zhang from 1994. Indeed, we show that every digraph of this class that is not isomorphic to a particular digraph has, for every even p with 4 ≤ p ≤ |V(D)| − 4, a cycle C of size p such that D − V(C) is Hamiltonian.
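As a toy illustration of arc-disjoint triangle packing in a tournament, here is a simple greedy heuristic of our own (not one of the thesis's algorithms): scan vertex triples and collect directed triangles whose three arcs are all still unused.

```python
from itertools import combinations, permutations

def greedy_arc_disjoint_triangles(n, beats):
    """Greedy heuristic sketch for arc-disjoint triangle packing in a
    tournament on vertices 0..n-1; beats[(u, v)] is True iff arc u -> v
    exists. Returns a list of directed triangles (u, v, w)."""
    used, packing = set(), []
    for tri in combinations(range(n), 3):
        for u, v, w in permutations(tri):
            arcs = [(u, v), (v, w), (w, u)]   # the directed 3-cycle u->v->w->u
            if all(beats.get(a, False) for a in arcs) and not used.intersection(arcs):
                packing.append((u, v, w))
                used.update(arcs)
                break
    return packing
```

A transitive tournament contains no directed triangle, so the heuristic correctly returns an empty packing there; in general it gives a feasible but not necessarily maximum packing.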
Correa, Leonardo de Lima. „Uma proposta de algoritmo memético baseado em conhecimento para o problema de predição de estruturas 3-D de proteínas“. reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2017. http://hdl.handle.net/10183/156640.
Memetic algorithms are evolutionary metaheuristics intrinsically concerned with exploiting and incorporating all available knowledge about the problem under study. In this dissertation, we present a knowledge-based memetic algorithm to tackle the three-dimensional protein structure prediction problem without the explicit use of experimentally determined template structures. The algorithm is divided into two main processing steps: (i) sampling and initialization of the algorithm's solutions; and (ii) optimization of the structural models from the previous stage. The first step aims to generate and classify several structural models for a given target protein, using the Angle Probability List strategy, in order to define different structural groups and to create better structures to initialize the individuals of the memetic algorithm. The Angle Probability List takes advantage of structural knowledge stored in the Protein Data Bank in order to reduce the complexity of the conformational search space. The second step of the method consists in the optimization of the structures generated in the first stage, through the application of the proposed memetic algorithm, which uses a tree-structured population, where each node can be seen as an independent subpopulation that interacts with the others via global search operations, aiming at information sharing, population diversity, and better exploration of the multimodal search space of the problem. The method also encompasses ad hoc global search operators, whose objective is to increase the exploration capacity of the method with respect to the characteristics of the protein structure prediction problem, combined with the Artificial Bee Colony algorithm, used as a local search technique applied to each node of the tree.
The proposed algorithm was tested on a set of 24 amino acid sequences and compared with two reference methods in the protein structure prediction area, Rosetta and QUARK. The results show the ability of the method to predict three-dimensional protein structures with folds similar to the experimentally determined structures, with respect to the structural metrics Root-Mean-Square Deviation and Global Distance Test Total Score. We also show that our method reached results comparable to Rosetta and QUARK, and in some cases outperformed them, corroborating the effectiveness of our proposal.
Clément, Julien. „Algorithmes, mots et textes aléatoires“. Habilitation à diriger des recherches, Université de Caen, 2011. http://tel.archives-ouvertes.fr/tel-00913127.
Ceolin, Celina. „A equação unidimensional de difusão de nêutrons com modelo multigrupo de energia e meio heterogêneo : avaliação do fluxo para problemas estacionários e de cinética“. reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2014. http://hdl.handle.net/10183/96762.
In the present dissertation the one-dimensional neutron diffusion equation for stationary and kinetic problems in a multi-layer slab is solved considering the multi-group energy model. One of the objectives and innovations of this work is to obtain an approximate solution with error estimation and accuracy control, in the form of an analytical expression. With such a solution there is no need for the interpolation schemes usually required when the domain is discretized. The neutron flux is expanded in a Taylor series whose coefficients are found using the differential equation and the boundary and interface conditions. The domain is divided into several layers, whose size and polynomial order can be adjusted according to the required accuracy. To solve the eigenvalue problem the conventional power method is used. The methodology is applied to a benchmark problem consisting in solving the diffusion equation as an initial condition and then solving kinetic problems for different transients. The results are compared successfully with those in the literature. The convergence of the series is guaranteed by applying a criterion based on the Lipschitz criterion for continuous functions. Note that the solution obtained, together with the convergence analysis, shows the robustness and accuracy of this methodology.
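The conventional power method mentioned above can be sketched in a few lines; this is a generic version (our names and normalization choice), which iterates x ← Ax and reads the dominant-eigenvalue estimate off the normalization factor.

```python
def power_method(matvec, n, iters=200):
    """Generic power-method sketch: repeatedly apply the operator and
    normalize by the largest component; lam converges to the dominant
    eigenvalue (in magnitude) for a generic starting vector."""
    x = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        y = matvec(x)
        lam = max(abs(v) for v in y)     # dominant-eigenvalue estimate
        x = [v / lam for v in y]         # renormalize the iterate
    return lam, x
```

In neutronics the operator would be the iteration matrix of the discretized diffusion problem and lam the multiplication factor; here a 2x2 example suffices to see the convergence.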
Lévêque, Benjamin. „Coloration de graphes : structures et algorithmes“. Phd thesis, Université Joseph Fourier (Grenoble), 2007. http://tel.archives-ouvertes.fr/tel-00187797.
Durand, de Gevigney Olivier. „Orientations des graphes : structures et algorithmes“. Phd thesis, Université de Grenoble, 2013. http://tel.archives-ouvertes.fr/tel-00989808.
Durand, de Gevigney Olivier. „Orientations des graphes : structures et algorithmes“. Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENM027/document.
Orienting an undirected graph means replacing each edge by an arc with the same ends. We investigate the connectivity of the resulting directed graph. Orientations with arc-connectivity constraints are now deeply understood, but very few results are known in terms of vertex-connectivity. Thomassen conjectured that sufficiently highly vertex-connected graphs have a k-vertex-connected orientation, while Frank conjectured a characterization of the graphs admitting such an orientation. The results of this thesis are structured around the concepts of orientation, packing, connectivity and matroid. First, we disprove a conjecture of Recski on decomposing a graph into trees having orientations with specified indegrees. We also prove a new result on packing rooted arborescences with matroid constraints, generalizing a fundamental result of Edmonds. Moreover, we show a new packing theorem for the bases of count matroids that yields an improvement of the only known result on Thomassen's conjecture. Secondly, we give a construction and an augmentation theorem for a family of graphs related to Frank's conjecture. To conclude, we disprove Frank's conjecture and prove that, for every integer k >= 3, the problem of deciding whether a graph admits a k-vertex-connected orientation is NP-complete.
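A classical building block behind such orientation results is Robbins' theorem: a connected bridgeless graph has a strongly connected orientation, obtained by orienting DFS tree edges away from the root and all remaining edges toward ancestors. A sketch of that construction (our own code, assuming the input graph is connected and bridgeless):

```python
def strong_orientation(n, edges):
    """Robbins-style orientation sketch via iterative DFS: tree edges are
    oriented downward, non-tree edges upward (descendant -> ancestor).
    On a connected bridgeless graph the result is strongly connected."""
    adj = [[] for _ in range(n)]
    for idx, (u, v) in enumerate(edges):
        adj[u].append((v, idx))
        adj[v].append((u, idx))
    seen, oriented, used = [False] * n, [], [False] * len(edges)
    stack = [(0, iter(adj[0]))]
    seen[0] = True
    while stack:
        u, it = stack[-1]
        advanced = False
        for v, idx in it:
            if used[idx]:
                continue
            used[idx] = True
            oriented.append((u, v))      # tree edge down, or back edge up
            if not seen[v]:
                seen[v] = True
                stack.append((v, iter(adj[v])))
                advanced = True
                break
        if not advanced:
            stack.pop()
    return oriented
```

This handles the 1-arc-connected case; the thesis's subject, k-vertex-connected orientations, is precisely where such simple constructions stop working.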
Cledat, Romain. „Programming models for speculative and optimistic parallelism based on algorithmic properties“. Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42749.
Rinaudo, Philippe. „Algorithmique de l'alignement structure-séquence d'ARN : une approche générale et paramétrée“. Phd thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00847745.
Abbas, Boushra. „Méthode de Newton régularisée pour les inclusions monotones structurées : étude des dynamiques et algorithmes associés“. Thesis, Montpellier, 2015. http://www.theses.fr/2015MONTS250/document.
This thesis is devoted to finding zeroes of structured maximal monotone operators, using discrete and continuous dissipative dynamical systems. The solutions are obtained as the limits of trajectories as the time t tends towards infinity. We pay special attention to the dynamics obtained by Levenberg-Marquardt regularization of Newton's method, and also revisit approaches based on related dynamical systems. In a Hilbert framework, we are interested in finding zeroes of a structured maximal monotone operator M = A + B, where A is a general maximal monotone operator, and B is monotone and locally Lipschitz continuous. We introduce discrete and continuous dynamical systems that are linked to Newton's method. They involve B and the resolvents of A separately, and lend themselves to splitting methods. Based on the Minty representation of A as a Lipschitz manifold, we show that these dynamics can be formulated as differential systems to which the Cauchy-Lipschitz theorem applies. We focus on the particular case where A is the subdifferential of a convex, lower semicontinuous, proper function, and B is the gradient of a convex, continuously differentiable function. We study the asymptotic behavior of trajectories. When the regularization parameter does not tend to zero too rapidly, we show the convergence of trajectories using Lyapunov asymptotic analysis. We also show the Lipschitz continuous dependence of the solution with respect to the regularization term. We then extend our study by considering various classes of dynamical systems aimed at solving inclusions governed by structured monotone operators M = ∂Φ + B, where ∂Φ is the subdifferential of a convex lower semicontinuous function, and B is a monotone cocoercive operator. By a Lyapunov analysis, we show the convergence properties of the orbits of these systems.
The time discretization of these dynamics yields various forward-backward splitting methods (some of them new). Finally, we focus on the asymptotic behavior of trajectories of the regularized Newton dynamics, in which we introduce an additional vanishing Tikhonov-like viscosity term. We thus obtain asymptotic selection of the solution of minimal norm.
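A generic forward-backward iteration of the kind referred to above can be sketched as follows (our simplified finite-dimensional version, with the l1 proximal map as the backward step; the step size and example problem are illustrative assumptions, not the thesis's schemes).

```python
def soft_threshold(v, t):
    """Proximal map of t * ||x||_1, applied componentwise."""
    return [max(abs(x) - t, 0.0) * (1 if x > 0 else -1) for x in v]

def forward_backward(grad_f, prox_g, x0, step, iters=100):
    """Forward-backward splitting sketch for 0 in dPhi(x) + B(x):
    a forward (gradient) step on the smooth part B = grad f, then a
    backward (proximal/resolvent) step on dPhi."""
    x = list(x0)
    for _ in range(iters):
        y = [xi - step * gi for xi, gi in zip(x, grad_f(x))]   # forward step
        x = prox_g(y, step)                                    # backward step
    return x
```

For the one-dimensional model problem min 0.5(x-1)^2 + 0.5|x|, the iteration converges to the soft-thresholded solution x = 0.5.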
Fischer, Johannes. „Data Structures for Efficient String Algorithms“. Diss., lmu, 2007. http://nbn-resolving.de/urn:nbn:de:bvb:19-75053.