Dissertations / Theses on the topic "Complete search"

Follow this link to see other types of publications on the topic: Complete search.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic "Complete search".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a .pdf file and read the abstract of the work online, if the relevant parameters are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and compile the corresponding bibliographies.

1

Ferreira, Junior Valnir. "Improvements to Clause Weighting Local Search for Propositional Satisfiability". Griffith University. Institute for Integrated and Intelligent Systems, 2007. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20070823.123257.

Full text source
Abstract:
The propositional satisfiability (SAT) problem is of considerable theoretical and practical relevance to the artificial intelligence (AI) community and has been used to model many pervasive AI tasks such as default reasoning, diagnosis, planning, image interpretation, and constraint satisfaction. Computational methods for SAT have historically fallen into two broad categories: complete search and local search. Within the local search category, clause weighting methods are amongst the best alternatives for SAT, becoming particularly attractive on problems where a complete search is impractical or where there is a need to find good candidate solutions within a short time. The thesis is concerned with the study of improvements to clause weighting local search methods for SAT. The main contributions are: A component-based framework for the functional analysis of local search methods. A clause weighting local search heuristic that exploits longer-term memory arising from clause weight manipulations. The approach first learns which clauses are globally hardest to satisfy and then uses this information to treat these clauses differentially during weight manipulation [Ferreira Jr and Thornton, 2004]. A study of heuristic tie breaking in the domain of additive clause weighting local search methods, and the introduction of a competitive method that uses heuristic tie breaking instead of the random tie breaking approach used in most existing methods [Ferreira Jr and Thornton, 2005]. An evaluation of backbone guidance for clause weighting local search, and the introduction of backbone guidance to three state-of-the-art clause weighting local search methods [Ferreira Jr, 2006]. 
A new clause weighting local search method for SAT that successfully exploits synergies between the longer-term memory and tie breaking heuristics developed in the thesis to significantly improve on the performance of current state-of-the-art local search methods for SAT-encoded instances containing identifiable CSP structure. Portions of this thesis have appeared in the following refereed publications: Longer-term memory in clause weighting local search for SAT. In Proceedings of the 17th Australian Joint Conference on Artificial Intelligence, volume 3339 of Lecture Notes in Artificial Intelligence, pages 730-741, Cairns, Australia, 2004. Tie breaking in clause weighting local search for SAT. In Proceedings of the 18th Australian Joint Conference on Artificial Intelligence, volume 3809 of Lecture Notes in Artificial Intelligence, pages 70–81, Sydney, Australia, 2005. Backbone guided dynamic local search for propositional satisfiability. In Proceedings of the Ninth International Symposium on Artificial Intelligence and Mathematics, AI&M, Fort Lauderdale, Florida, 2006.
APA, Harvard, Vancouver, ISO, and other styles
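The clause-weighting idea at the core of this thesis can be sketched in a few lines. The following is a generic, illustrative solver, not the author's method; the encoding of clauses as lists of signed integers (positive for a variable, negative for its negation) is an assumption:

```python
import random

def unsat_weight(clauses, weights, assign):
    """Total weight of clauses not satisfied under the assignment."""
    return sum(w for c, w in zip(clauses, weights)
               if not any(assign[abs(lit)] == (lit > 0) for lit in c))

def clause_weighting_search(clauses, n_vars, max_flips=20000, seed=0):
    """Toy clause-weighting local search: greedily flip the variable that
    most reduces the weighted unsatisfied count; in a local minimum,
    increment the weights of the currently unsatisfied clauses instead."""
    rng = random.Random(seed)
    assign = {v: rng.choice([True, False]) for v in range(1, n_vars + 1)}
    weights = [1] * len(clauses)
    for _ in range(max_flips):
        cost = unsat_weight(clauses, weights, assign)
        if cost == 0:
            return assign          # all clauses satisfied
        best_var, best_cost = None, cost
        for v in range(1, n_vars + 1):
            assign[v] = not assign[v]
            flipped = unsat_weight(clauses, weights, assign)
            assign[v] = not assign[v]
            if flipped < best_cost:
                best_var, best_cost = v, flipped
        if best_var is None:       # stuck: reweight the hard clauses
            for i, c in enumerate(clauses):
                if not any(assign[abs(lit)] == (lit > 0) for lit in c):
                    weights[i] += 1
        else:
            assign[best_var] = not assign[best_var]
    return None                    # give up after max_flips
```

For example, `[[1, 2], [-1, 2], [1, -2]]` encodes (x1 ∨ x2) ∧ (¬x1 ∨ x2) ∧ (x1 ∨ ¬x2); the weight bumps are what let the search escape the formula's local minima.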
2

Ferreira, Junior Valnir. "Improvements to Clause Weighting Local Search for Propositional Satisfiability". Thesis, Griffith University, 2007. http://hdl.handle.net/10072/365857.

Full text source
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
Institute for Integrated and Intelligent Systems
Full Text
APA, Harvard, Vancouver, ISO, and other styles
3

McGowan, Robert. "A Search for Understanding Why Male, Long Term High School Dropouts Resist Returning to Complete a Secondary Credential". Thesis, University of Arkansas, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3702758.

Full text source
Abstract:

Much of the resistance to returning to education seems to be related to the same reasons students left school in the first place. Dropping out and resisting a return to school may result from too much emphasis on academic preparation and too little emphasis on satisfying the perceived needs of preparing a youth for adulthood. Four themes emerged from the field-note-based interviews: (1) not all students learn the same way, (2) there is a need for more participatory learning, (3) learning should be relevant to life as perceived by the student, and (4) there is a dislike of computer content that is not supported by personal instruction. While these themes support past research efforts and may not seem remarkable, the solutions the participants offered for these problems are worthy of consideration and may be of use to both secondary and adult education.

APA, Harvard, Vancouver, ISO, and other styles
4

Cowhig, Patrick Carpenter. "A Complete & Practical Approach to Ensure the Legality of a Signal Transmitted by a Cognitive Radio". Thesis, Virginia Tech, 2006. http://hdl.handle.net/10919/34969.

Full text source
Abstract:
The computational power and algorithms needed to create a cognitive radio are quickly becoming available. There are many advantages to having a radio operated by a cognitive engine, and so cognitive radios are likely to become very popular in the future. One of the main difficulties associated with the cognitive radio is ensuring that the transmitted signal will follow all FCC rules. The work presented in this thesis provides a methodology to guarantee that all signals will be legal and valid. The first part of achieving this is a practical and easy-to-use software testing program, based on the tabu search algorithm, that tests the software off-line. The primary purpose of the software testing program is to find most of the errors, especially structural errors, while the radio is not in use, so that testing does not affect the performance of the system. The software testing program does not provide complete assurance that no errors exist, so to supplement this deficit a built-in self-test (BIST) is employed. The BIST is designed with two parts, one embedded in the cognitive engine and one placed in the radio's API. These two systems ensure that all signals transmitted by the cognitive radio will follow FCC rules while consuming a minimal amount of computational power. The software testing approach based on tabu search is shown to be a viable method to test software, with improved results over previous methods. The software BIST also demonstrated its ability to find errors in signal production and is demonstrated to require only an insignificant amount of computational power. Overall, the methods presented in this paper provide a complete and practical approach to assure the FCC of the legality of all signals in order to obtain a license for the product.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
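The tabu search at the heart of this testing methodology follows a standard skeleton. Below is a generic, minimal sketch of tabu search over an arbitrary state space, not the thesis's software-testing tool; the `cost` and `neighbours` callables are assumptions standing in for a real objective:

```python
from collections import deque

def tabu_search(cost, neighbours, start, max_iters=200, tenure=5):
    """Generic tabu search skeleton: always move to the best non-tabu
    neighbour, even when it is worse than the current state, and keep
    the last few visited states on a fixed-length tabu list so the
    search can climb out of local minima."""
    current = best = start
    tabu = deque([start], maxlen=tenure)
    for _ in range(max_iters):
        candidates = [s for s in neighbours(current) if s not in tabu]
        if not candidates:
            break                       # every neighbour is tabu: stop
        current = min(candidates, key=cost)
        tabu.append(current)
        if cost(current) < cost(best):
            best = current
    return best
```

For instance, minimising `(x - 7) ** 2` over the integers with moves `x - 1` and `x + 1` from a start of 0 reaches the optimum 7; the tabu list then keeps the search from oscillating back over visited states.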
5

Simões, Manuel Areias Sobrinho. "In search of the original leukemic clone in chronic myeloid leukemia patients in complete molecular remission after stem cell transplantation or imatinib". Doctoral thesis, Faculdade de Medicina da Universidade do Porto, 2010. http://hdl.handle.net/10216/56734.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
6

Simões, Manuel Areias Sobrinho. "In search of the original leukemic clone in chronic myeloid leukemia patients in complete molecular remission after stem cell transplantation or imatinib". Thesis, Faculdade de Medicina da Universidade do Porto, 2010. http://hdl.handle.net/10216/56734.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
7

Jurčík, Lukáš. "Evoluční algoritmy při řešení problému obchodního cestujícího". Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2014. http://www.nusl.cz/ntk/nusl-224447.

Full text source
Abstract:
This diploma thesis deals with evolutionary algorithms used for the travelling salesman problem (TSP). The first section lays the theoretical foundations of graph theory and computational complexity theory. The next section contains a description of the chosen optimization algorithms. The aim of the diploma thesis is to implement an application that solves the TSP using evolutionary algorithms.
APA, Harvard, Vancouver, ISO, and other styles
8

Eyono, Obono Séraphin Désiré. "Recherche efficace d'images morphiques de mots". Rouen, 1995. http://www.theses.fr/1995ROUE5014.

Full text source
Abstract:
In this thesis we study the search for a homomorphic image of a pattern in a text and show that this problem is NP-complete. We establish a first classification of patterns: simple patterns (or elementary patterns) and non-simple patterns. We give a characterisation of the rationality of simple patterns. We then propose a classification of simple patterns by degree, together with an efficient algorithm for searching a text for the images of any pattern of degree one or two, which gains, or nearly gains, one or two degrees of complexity over the naive algorithm that we also present. Finally, we propose a generalisation of this algorithm to any class of patterns of degree greater than or equal to two. Drawing on the case of the class of degree-one patterns, we propose an efficient algorithm for solving one-variable word equations.
APA, Harvard, Vancouver, ISO, and other styles
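The problem studied here, deciding whether some factor of a text is a morphic image of a pattern, has a short naive formulation whose exponential behaviour is exactly what the thesis's degree-based algorithms improve on. A brute-force sketch (illustrative only, non-erasing morphisms assumed):

```python
def match_here(pattern, s, subst):
    """Try to map each pattern variable to a non-empty word so that the
    concatenated images spell out exactly the string s (backtracking)."""
    if not pattern:
        return not s
    v = pattern[0]
    if v in subst:                       # variable already bound
        w = subst[v]
        return s.startswith(w) and match_here(pattern[1:], s[len(w):], subst)
    for k in range(1, len(s) + 1):       # try every non-empty image
        subst[v] = s[:k]
        if match_here(pattern[1:], s[k:], subst):
            return True
        del subst[v]
    return False

def has_morphic_image(pattern, text):
    """Naive search: does any factor of the text arise as a morphic
    image of the pattern? Exponential in general, consistent with the
    problem being NP-complete."""
    return any(match_here(pattern, text[i:j], {})
               for i in range(len(text))
               for j in range(i + 1, len(text) + 1))
```

For the pattern "aa" (a square), `has_morphic_image("aa", "abcbc")` is True because "bcbc" = "bc"·"bc", while "xyzxy" contains no square.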
9

Kopřiva, Jan. "Srovnání algoritmů při řešení problému obchodního cestujícího". Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2009. http://www.nusl.cz/ntk/nusl-222126.

Full text source
Abstract:
The Master's thesis deals with an innovation of the logistics module of an ERP information system. The innovation is based on the implementation of heuristic algorithms that solve the Travelling Salesman Problem (TSP). The software MATLAB is used for the analysis and testing of these algorithms. The goal of the thesis is a comparison of selected algorithms suitable for economic purposes (accuracy of solution, speed of calculation, and memory demands).
APA, Harvard, Vancouver, ISO, and other styles
10

Arendt, Dustin Lockhart. "In Search of Self-Organization". Diss., Virginia Tech, 2012. http://hdl.handle.net/10919/26465.

Full text source
Abstract:
Many who study complex systems believe that the complexity we observe in the world around us is frequently the product of a large number of interactions between components following a simple rule. However, the task of discerning the rule governing the evolution of any given system is often quite difficult, requiring intuition, guesswork, and a great deal of expertise in that domain. To circumvent this issue, researchers have considered the inverse problem where one searches among many candidate rules to reveal those producing interesting behavior. This approach has its own challenges because the search space grows exponentially and interesting behavior is rare and difficult to rigorously define. Therefore, the contribution of this work includes tools and techniques for searching for dimer automaton rules that exhibit self-organization (the transformation of disorder into structure in the absence of centralized control). Dimer automata are simple, discrete, asynchronous rewriting systems that operate over the edges of an arbitrary graph. Specifically, these contributions include a number of novel, surprising, and useful applications of dimer automata, practical methods for measuring self-organization, advanced techniques for searching for dimer automaton rules, and two efficient GPU parallelizations of dimer automata to make searching and simulation more tractable.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
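The abstract defines dimer automata as discrete, asynchronous rewriting systems over the edges of a graph. A minimal simulator fits in a few lines; the graph, state encoding, and example "copy" rule below are all illustrative assumptions, not rules from the dissertation:

```python
import random

def run_dimer_automaton(edges, states, rule, steps=2000, seed=0):
    """Minimal dimer automaton: repeatedly pick a random edge of the
    graph and rewrite the state pair of its two endpoints using a
    rule table mapping (state_u, state_v) -> (new_u, new_v)."""
    rng = random.Random(seed)
    states = dict(states)                # don't mutate the caller's dict
    for _ in range(steps):
        u, v = rng.choice(edges)         # asynchronous update: one edge
        states[u], states[v] = rule[(states[u], states[v])]
    return states
```

With a "copy" rule `(a, b) -> (a, a)`, a connected graph is driven from disorder to a uniform consensus state, a simple instance of the disorder-to-structure behaviour the dissertation searches for.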
11

Rees, Leigh H. "Chirality : in search of organometallic second order non-linear optic materials". Thesis, University of Bristol, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.265324.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
12

AXENOVICH, Tatiana I., and Pavel M. BORODIN. "Search for Complex Disease Genes: Achievements and Failures". Research Institute of Environmental Medicine, Nagoya University, 2002. http://hdl.handle.net/2237/2772.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
13

Helleren, Caroline Anne. "A search for bridging-dinitrogen heterobimetallic complexes containing iron and molybdenum or tungsten". Thesis, University of Sussex, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.241719.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
14

Fielden, D. J. "The role of Spt4/5 and the search for antitermination complexes in archaea". Thesis, University College London (University of London), 2014. http://discovery.ucl.ac.uk/1455740/.

Full text source
Abstract:
Spt4/5 and its bacterial homologue NusG are the only known universally conserved RNAP-associated transcription elongation factors. In the hyperthermophilic archaeon Methanocaldococcus jannaschii, Spt5 comprises an N-terminal NGN domain and a C-terminal KOW domain, and is bound at its NGN domain by Spt4. NusG and Spt5 increase the processivity of RNAP by binding to the RNAP clamp via the NGN domain. This maintains the RNAP clamp in a closed conformation, thereby enabling RNAP to remain bound to the template DNA. The NusG KOW domain interacts with ribosomes, thereby coupling transcription to translation. The functions of Spt4/5 in archaea are less well characterised. The work contained within this thesis demonstrates that in the context of M. jannaschii cell extract, Spt4/5 is found in the same fractions as ribosomes and RNAP, and therefore has the potential to couple transcription and translation. Furthermore, data obtained by microscale thermophoresis suggest that the KOW domain of Spt5 interacts with purified ribosomes. Electron paramagnetic resonance was performed on Spt4/5, demonstrating that Spt5 is conformationally flexible, and that the presence of Spt4 restricts its mobility. Limited proteolysis and thermofluor assays support the notion that Spt4 stabilises the Spt5 NGN domain. In E. coli, NusA binds to RNAP as a component of the antitermination complex, along with NusG, NusB, and NusE. This enables RNAP to enter a pause- and termination-resistant state. M. jannaschii NusA consists of two KH domains. Mutational analysis identified the contribution of the two KH domains to RNA binding and identified additional residues involved in the interaction. Archaeal NusA does not coelute with RNAP, raising the possibility that archaeal NusA does not have antitermination functions. In summary, this thesis argues that Spt4/5 likely couples transcription and translation in archaea and indicates that archaeal NusA binds to RNA via a novel binding site.
APA, Harvard, Vancouver, ISO, and other styles
15

Duong, Thach-Thao Nguyen. "Improving Diversification in Local Search for Propositional Satisfiability". Thesis, Griffith University, 2014. http://hdl.handle.net/10072/365717.

Full text source
Abstract:
In recent years, Propositional Satisfiability (SAT) has become a standard for encoding complex real-world constrained problems. SAT has had a significant impact on various research fields in Artificial Intelligence (AI) and Constraint Programming (CP). SAT algorithms have also been successfully used in solving many practical and industrial applications, including electronic design automation, default reasoning, diagnosis, planning, scheduling, image interpretation, circuit design, and hardware and software verification. The most common representation of a SAT formula is Conjunctive Normal Form (CNF). A CNF formula is a conjunction of clauses, where each clause is a disjunction of Boolean literals. A SAT formula is satisfiable if there is a truth assignment to the variables such that all clauses in the formula are satisfied. Solving a SAT problem means determining a truth assignment that satisfies a CNF formula. SAT was the first problem proved to be NP-complete [20]. There are many algorithmic methodologies for solving SAT. The most obvious is systematic search; another popular and successful approach is stochastic local search (SLS). Systematic search is usually referred to as complete search or backtrack-style search. In contrast, SLS explores the search space through randomisation and perturbation operations. Although SLS is an incomplete search method, it is able to find solutions effectively using limited time and resources. Moreover, some SLS solvers can solve in a few minutes hard SAT problems that may be beyond the capacity of systematic search solvers.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Information and Communication Technology
Science, Environment, Engineering and Technology
Full Text
APA, Harvard, Vancouver, ISO, and other styles
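The stochastic local search approach the abstract contrasts with systematic search can be sketched in the WalkSAT style: pick an unsatisfied clause, then mix greedy and random flips. This is a generic illustration, not the thesis's diversification methods; the clause encoding as lists of signed integers is an assumption:

```python
import random

def walksat(clauses, n_vars, p=0.5, max_flips=50000, seed=0):
    """WalkSAT-style stochastic local search: pick a random unsatisfied
    clause, then with probability p flip a random variable from it
    (the noise step), otherwise flip the variable in it whose flip
    leaves the fewest clauses unsatisfied (the greedy step)."""
    rng = random.Random(seed)
    assign = {v: rng.choice([True, False]) for v in range(1, n_vars + 1)}

    def unsatisfied():
        return [c for c in clauses
                if not any(assign[abs(lit)] == (lit > 0) for lit in c)]

    def broken_after_flip(v):
        assign[v] = not assign[v]
        n = len(unsatisfied())
        assign[v] = not assign[v]
        return n

    for _ in range(max_flips):
        broken = unsatisfied()
        if not broken:
            return assign                       # satisfying assignment
        clause = rng.choice(broken)
        if rng.random() < p:
            var = abs(rng.choice(clause))       # noise step: diversify
        else:
            var = min((abs(lit) for lit in clause), key=broken_after_flip)
        assign[var] = not assign[var]
    return None
```

The noise parameter p is precisely a diversification knob: higher p escapes local minima more readily at the cost of slower convergence.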
16

Wåhlén, Herje. "Voice Assisted Visual Search". Thesis, Umeå universitet, Institutionen för informatik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-38204.

Full text source
Abstract:
The amount and variety of visual information presented on electronic displays is ever-increasing. Finding and acquiring relevant information in the most effective manner possible is of course desirable. While there are advantages to presenting a large number of information objects on a screen at the same time, it can also hinder fast detection of objects of interest. One way of addressing that problem is Voice Assisted Visual Search (VAVS). A user supported by VAVS calls out an object of interest and is immediately guided to the object by a highlighting cue. This thesis is an initial study of the VAVS user interface technique. The findings suggest that VAVS is a promising approach, supported by theory and practice. A working prototype shows that locating objects of interest can be sped up significantly, requiring only half the amount of time taken without the use of VAVS, on average.
Voice-Assisted Visual Search
APA, Harvard, Vancouver, ISO, and other styles
17

White, Bradley Michael. "Experimental Development of Automated Search Techniques for Discrete Combinatorial Optimisation". Thesis, Griffith University, 2009. http://hdl.handle.net/10072/365420.

Full text source
Abstract:
A suite of techniques for finding the optimal solutions for a set of discrete combinatorial problems was developed. An experimental approach was used, with a suitable test-bed found in a class of word-puzzles. The crux of such research is that seeking optimal solutions to discrete combinatorial problems requires the use of deterministic algorithms. Attention was focused on the development of new techniques capable of exhausting the search space more efficiently. Although research was restricted to tractable problems, exhaustion of the search space was recognised to be practically infeasible for all but small problem instances. Thus the size and complexity of the problems examined was necessarily restricted. On these grounds the selection of an appropriate test-bed was fundamental to the research. Complex word problems were used because they encompass a wide range of discrete combinatorial problems, but have only a small literature. The specific puzzle examples employed as test-beds had all been used in public competitions with solutions submitted by thousands of humans, with the winning solutions and scores published. This allowed a simple and independent initial benchmark of success. The techniques developed could be judged to be at least partially successful in that they were able to at least equal and in some cases beat the highest recorded scores. The general problem of benchmarking is discussed. It was observed that small changes to the test bed puzzles or to the techniques would often impact dramatically on the results. In an attempt to isolate the reasons for this, a focused view of the search algorithms was adopted. Complex holistic algorithms were broken into smaller sub-algorithmic categories, such as: node selection, domain maintenance, forward tracking, backtracking, branch-and-bound, primary slot selection, variable ordering, value ordering, and constraint ordering. Within each of these categories a range of variations is presented. 
Techniques for removing inconsistencies prior to search were also experimented with. These consistency pre-processors were found to have a minimal, and at times detrimental, effect on search times when a good selection of search techniques was used. However, they were found to offer considerable benefits in instances where a poor selection of search techniques was chosen. As such, these consistency pre-processors may be viewed as useful in terms of a risk-management strategy for solving these problems. Whilst not the primary focus of this research, experimentation with stochastic techniques within a deterministic framework was performed, the purpose of which was to gauge the impact of generating good solutions prior to an exhaustive search. One technique developed was observed to frequently improve the time taken to form an optimal solution, and to improve the total time taken to exhaust the search space. While the major effort in the research was necessarily spent in developing and testing these algorithms and their implementations, specific attention was paid to the methodological problems inherent in experimental approaches to program development.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
Griffith Business School
Griffith Business School
Full Text
APA, Harvard, Vancouver, ISO, and other styles
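The sub-algorithmic components the abstract enumerates (node selection, variable ordering, backtracking) combine into a standard backtracking skeleton. The sketch below uses graph colouring with minimum-remaining-values (MRV) ordering as a stand-in for the thesis's word-puzzle slot filling; the choice of problem and heuristic is an assumption for illustration:

```python
def backtrack_colour(graph, colours, assignment=None):
    """Backtracking search sketch with minimum-remaining-values (MRV)
    variable ordering: always branch on the vertex with the fewest
    consistent colours left, a common node-selection heuristic."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(graph):
        return dict(assignment)          # complete, consistent solution
    def options(v):
        return [c for c in colours
                if all(assignment.get(n) != c for n in graph[v])]
    # MRV node selection: the most constrained unassigned vertex
    v = min((u for u in graph if u not in assignment),
            key=lambda u: len(options(u)))
    for c in options(v):
        assignment[v] = c
        solution = backtrack_colour(graph, colours, assignment)
        if solution is not None:
            return solution
        del assignment[v]                # undo and backtrack
    return None
```

Exhausting the search space without finding a solution returns None, which is how such deterministic searches certify optimality or infeasibility.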
18

Mason, Chris. "The search for low-valent transition metal complexes for oligomerization and polymerization of ethylene". Thesis, University of Ottawa (Canada), 2010. http://hdl.handle.net/10393/28515.

Full text source
Abstract:
Combination of the potassium salt of the triazenide anion (R)NNN(R)− with CrCl2(THF)2 leads to the formation of the dimeric complexes {Cr[μ,η,η'-bis-1,3-(2',5'-diisopropylphenyl)triazenide]2}2 (2.1), [Cr(μ,η,η'-1,3-diphenyltriazenide)2]2 (2.2), and {Cr[μ,η,η'-bis-1,3-(2'-methoxyphenyl)triazenide]2}2 (2.3), which are paddlewheel-type structures featuring bridging triazenide anions. These complexes showed very low activity for the polymerization and oligomerization of ethylene upon activation with the known activators MAO and TIBAO. Combination of the potassium salt of the triazenide anion with CrCl3(THF)3 leads to the formation of the mononuclear complexes [bis-1,3-(2',5'-diisopropylphenyl)triazenide]Cr(THF)2Cl2 (2.5) and [bis-1,3-(2'-methoxyphenyl)triazenide]Cr(THF)2Cl2 (2.6). These complexes showed high activity for the non-selective oligomerization of ethylene, fitting a Schulz-Flory distribution. Reduction of these complexes led to the dinuclear CrII species {[bis-1,3-(2',5'-diisopropylphenyl)triazenide]Cr(THF)(μ-Cl)}2 (2.7), but did not lead to the target CrI complexes, as ligand salts were isolated upon further reduction. One-pot reaction of ZrCl4 with 2 equivalents of mono-pyrrole-based ligands and 4 equivalents of AlR3 leads to the formation of the zirconocene-type structures (η5-[2,5-Me2C4H2N{AlCl2Et}])2ZrCl2 (3.1), (η5-[2,5-Me2C4H2N{AlClMe2}])2ZrClMe (3.2), (η5-[2,5-Me2C4H2N{AlClMe2}])2ZrCl2 (3.3), and (η5-[C4H4N{AlClMe2}])2ZrMe2 (3.4). These ZrIV complexes produced lower-than-expected activity for polymerization of ethylene when activated by MAO, as compared to their Cp analogues. When dipyrrole ligands were combined with ZrCl4 and 4 equivalents of AlR3 in one pot, the formation of ZrIII compounds was observed. The octameric ([{Ph2C(C4H3N)2}Zr2]4(μ-Cl)16)([μ-Cl]2AlCl2)4 (3.5) provides a ZrIII complex unlike any other observed in the literature.
Reduction was performed by Al(Me)3 in this case; however, the coordination of Al in the structure is unlike that in the previous 3.1, 3.2, 3.3 and 3.4. The formation of ([AlEtCl{μ-Cl}2][{C4H4N}2AlClEt]Zr2(μ-Cl)4)2·4(C7H8) (3.7) has many interesting features. The Zr-Zr distance of 3.045(2) Å is the shortest yet observed for Zr. Additionally, the dipyrrole starting material was cleaved during the reaction to basic pyrrole, which gives insight into the mechanism of formation of 3.5 and suggests the possibility of in situ ZrII formation.
APA, Harvard, Vancouver, ISO, and other styles
19

Dunbar, Alexander. "The search for submarine fan complexes in the Upper Cretaceous, Browse Basin, Northwest Shelf, Australia /". Adelaide, 2000. http://web4.library.adelaide.edu.au/theses/09SB/09sbd898.pdf.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
20

Combe, Caroline Jane. "Hepatic receptor(s) for serine protease-inhibitor complexes". Thesis, University of Aberdeen, 1995. http://digitool.abdn.ac.uk/R?func=search-advanced-go&find_code1=WSN&request1=AAIU549619.

Full text source
Abstract:
A number of questions about the hepatic mechanisms of tissue-type plasminogen activator (t-PA) clearance still remain unanswered. Although certain liver endothelial cell receptors have been implicated, the parenchymal cell system, which is responsible for most clearance, still remains a mystery. The aim of this project, in the simplest terms, was to solve this mystery. The foundation upon which this project was built was that t-PA is cleared, by a hepatic receptor, in complex with its primary inhibitor, plasminogen activator inhibitor type 1 (PAI-1). The affinity of binding was estimated to be 0.8-1.0 nM and the number of binding sites per cell, 35 000-70 000. Affinity chromatography and chemical cross-linking resulted in a band of ~70 kDa which was presumed to be the receptor. This project was designed to characterize this hepatic receptor for t-PA-PAI-1 and determine whether plasmin-α2-antiplasmin (PAP) is recognised by the same receptor. Characterizing the receptor was attempted initially by employing cell binding assays using the human hepatoma cell line, Hep G2. This methodology required the formation and characterization of pure pre-formed ligands, which was achieved by overcoming preliminary problems. The binding assays showed that competition between t-PA-PAI-1 and PAP was occurring, but high non-specific binding and error between duplicate samples suggested that this system was not suitable for characterization of the receptor. The data accumulated in this study suggested that LRP was primarily responsible for hepatic uptake of t-PA and that proteases were recognised preferentially in complex with their inhibitors.
APA, Harvard, Vancouver, ISO, and other styles
21

Matthews, Cameron. "Synthesis, crystal structures and molecular modelling of rare earth complexes with bis(2-pyridylmethyl)amine: aim topological analysis and ligand conformation search". Thesis, Nelson Mandela Metropolitan University, 2017. http://hdl.handle.net/10948/8230.

Full text source
Abstract:
Eight rare earth complexes with bis(2-pyridylmethyl)amine (DPA) were synthesised and recrystallised under air-sensitive or low-moisture conditions. The crystal structures were successfully determined via SC-XRD, and the asymmetric units of five complexes (1, 3, 5, 6 and 7) were submitted for DFT molecular modelling calculations, which involved geometry optimisation and frequency calculations. The neutral complexes obtained were bis(bis(2-pyridylmethyl)amine)-trichloro-lanthanum(III) [LaCl3(DPA)2] (1), bis(bis(2-pyridylmethyl)amine)-trichloro-cerium(III) [CeCl3(DPA)2] (2), bis(μ2-chloro)-diaqua-tetrachloro-bis(bis(2-pyridylmethyl)amine)-di-praseodymium(III) [PrCl2(μ-Cl)(DPA)(OH2)]2 (3) and bis(μ2-methoxo)-bis(bis(2-pyridylmethyl)amine)-tetrachloro-di-dysprosium(III) [DyCl2(μ-OCH3)(DPA)]2 (4). The cationic complexes obtained in this study were dichloro-bis(bis(2-pyridylmethyl)amine)-neodymium(III) chloride methanol solvate [NdCl2(DPA)2]Cl·CH3OH (5), dichloro-bis(bis(2-pyridylmethyl)amine)-dysprosium(III) chloride methanol solvate [DyCl2(DPA)2]Cl·CH3OH (6), dichloro-bis(bis(2-pyridylmethyl)amine)-yttrium(III) chloride methanol solvate [YCl2(DPA)2]Cl·CH3OH (7) and dichloro-bis(bis(2-pyridylmethyl)amine)-lutetium(III) chloride methanol solvate [LuCl2(DPA)2]Cl·CH3OH (8). The ‘Quantum theory of atoms in molecules’ approach was used to investigate the electron density topology, primarily in order to examine the hydrogen and coordination bonds for three of the eight complexes. Two of the neutral complexes contain the ‘early’ rare earth elements lanthanum and praseodymium, and one cationic complex contains the ‘late’ lanthanide element dysprosium. Noncovalent interaction analysis was also performed on the aforementioned complexes in order to gain a deeper understanding of the intra-molecular stereo-electronic interactions.
Spin density analysis was used to investigate the distribution of unpaired electron density at and around the metal centres of the aforementioned paramagnetic Pr- and Dy-complexes. A ligand conformation search for DPA was undertaken; 32 low-energy conformers were identified, and their relative energies were determined using two DFT functionals, namely M06 and M06-2X.
APA, Harvard, Vancouver, ISO, etc. styles
22

Metay, Estelle. "Méthodologie d'accès à des benzolactones de taille moyenne". Paris 12, 2005. https://athena.u-pec.fr/primo-explore/search?query=any,exact,990002281830204611&vid=upec.

Full text of the source
Abstract:
Medium-ring compounds are important in organic chemistry: they occur in a large number of natural products and serve as intermediates in many synthetic applications. We focused on the preparation of benzolactones, which are little described in the literature. Lactones of up to six members are easily prepared, and macrocyclic lactones larger than 12-membered rings can be obtained efficiently under high-dilution conditions when necessary. Medium-ring lactones, by contrast, like other medium-size ring compounds, are reported to be the most difficult to prepare by cyclisation. Our purpose was to apply an efficient C,C bond-forming electrochemical reaction, the nickel-catalysed arylation of activated olefins, to the formation of medium-ring benzolactones. The first approach was to perform this reaction intramolecularly, on an aromatic molecule bearing both a C,Br bond on the aromatic ring and an electrophilic double bond: the activated olefin is first tethered to the aryl moiety before the electrochemical arylation of the C,C double bond, leading to the expected fused benzolactone in only two steps. Preliminary trials showed this electrochemical step to be inefficient, the main products arising from reduction of the starting compound. As an alternative, the C,C bond can first be formed by a bimolecular electrochemical arylation before closing the ring by lactonisation. This simple and efficient method gave access to four benzolactones. To reduce the number of steps, and notably to avoid protection/deprotection, a second method was developed: the carboxylic group is first tethered to the aromatic ring, and an alkyl chain bearing the hydroxyl-group precursor, in the form of a carbonyl, is then introduced electrochemically and subsequently reduced; the ortho-halogenated derivative, already bearing an ester function, is easily saponified before lactonisation. Besides reducing the number of steps, this very efficient approach opens access to chiral lactones. Further benzolactones were prepared, notably bearing a double bond or structural variations on the aromatic or heteroaromatic ring.
APA, Harvard, Vancouver, ISO, etc. styles
23

Le, Gac Jean-Pierre. "Etude comparative des concepts de GYSI et de SEARS en prothèse complète". Brest, 1992. http://www.theses.fr/1992BRES4014.

Full text of the source
APA, Harvard, Vancouver, ISO, etc. styles
24

Altman, Michael Darren. "Computational ligand design and analysis in protein complexes using inverse methods, combinatorial search, and accurate solvation modeling". Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/36258.

Full text of the source
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Chemistry, 2006.
Vita.
Includes bibliographical references (p. 207-230).
This thesis presents the development and application of several computational techniques to aid in the design and analysis of small molecules and peptides that bind to protein targets. First, an inverse small-molecule design algorithm is presented that can explore the space of ligands compatible with binding to a target protein using fast combinatorial search methods. The inverse design method was applied to design inhibitors of HIV-1 protease that should be less likely to induce resistance mutations because they fit inside a consensus substrate envelope. Fifteen designed inhibitors were chemically synthesized, and four of the tightest binding compounds to the wild-type protease exhibited broad specificity against a panel of drug resistance mutant proteases in experimental tests. Inverse protein design methods and charge optimization were also applied to improve the binding affinity of a substrate peptide for an inactivated mutant of HIV-1 protease, in an effort to learn more about the thermodynamics and mechanisms of peptide binding. A single mutant peptide calculated to have improved binding electrostatics exhibited greater than 10-fold improved affinity experimentally.
The second half of this thesis presents an accurate method for evaluating the electrostatic component of solvation and binding in molecular systems, based on curved boundary-element method solutions of the linearized Poisson-Boltzmann equation. Using the presented FFTSVD matrix compression algorithm and other techniques, a full linearized Poisson-Boltzmann equation solver is described that is capable of solving multi-region problems in molecular continuum electrostatics to high precision.
Michael Darren Altman.
Ph.D.
APA, Harvard, Vancouver, ISO, etc. styles
25

Moraes, Maurício Coutinho. "Towards completely automatized HTML form discovery on the web". reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2013. http://hdl.handle.net/10183/70194.

Full text of the source
Abstract:
The forms discovered by our proposal can be directly used as training data by some form classifiers. Our experimental validation used thousands of real Web forms, divided into six domains, including a representative subset of the publicly available DeepPeep form base (DEEPPEEP, 2010; DEEPPEEP REPOSITORY, 2011). Our results show that it is feasible to mitigate the demanding manual work required by two cutting-edge form classifiers (i.e., GFC and DSFC (BARBOSA; FREIRE, 2007a)), at the cost of a relatively small loss in effectiveness.
APA, Harvard, Vancouver, ISO, etc. styles
26

Bouvet, Diane. "Études structurales de la stabilité en solution de complexes de platine anticancéreux". Paris 12, 2003. https://athena.u-pec.fr/primo-explore/search?query=any,exact,990002110950204611&vid=upec.

Full text of the source
Abstract:
Three platinum complexes are currently used in France in numerous anticancer protocols. These drugs appear to react readily with nucleophiles in solution. Their degradation has two consequences: in vitro, it can compromise the stability of the drug in solution before administration; in vivo, the structural modification of these molecules can induce notable changes in their modes of action. Because the degradation products of carboplatin and oxaliplatin in the presence of various nucleophiles are non-crystalline, their structural characterisation cannot be undertaken by X-ray diffraction and must instead rely on X-ray absorption spectroscopy (EXAFS). We demonstrated the degradation of the complexes by nucleophiles in solution; in almost all cases, EXAFS provided the elements needed for the structural characterisation of the products formed.
APA, Harvard, Vancouver, ISO, etc. styles
27

Zalaket, Joseph. "Planification dans des strucures complexes". Toulouse 3, 2004. http://www.theses.fr/2004TOU30188.

Full text of the source
Abstract:
AI planning is essential to give an autonomous agent the ability to reason about how to achieve a goal. Several assumptions added to the modelling of planning problems have eased the planning task on certain benchmark domains, but they have kept planning at a distance from real-world domains. Several works have sought to relax one or another of these assumptions. In this thesis I propose relaxing the assumption that the set of states in the space is finite, by allowing planning over numerical knowledge that can lead to the generation of new objects in the world. I first propose planning in a world described in object-oriented form, in which functions are used to represent the relations between objects and to define the effects of actions. I then propose a functional approach in which numerical data can be updated by the application of functions.
APA, Harvard, Vancouver, ISO, etc. styles
28

Amatore, Muriel. "Synthèse de liaisons carbone-carbone via l'utilisation d'une catalyse par des complexes du cobalt". Paris 12, 2006. https://athena.u-pec.fr/primo-explore/search?query=any,exact,990002520930204611&vid=upec.

Full text of the source
Abstract:
This thesis is devoted to direct chemical cross-coupling reactions using cobalt(II) salts, associated or not with 2,2'-bipyridine or triphenylphosphine ligands. These reactions involve aromatic and heteroaromatic halides or pseudo-halides, as well as a broad range of reagents such as vinylic acetates, activated olefins and alkyl halides. All these cross-couplings proceed through catalytic organometallic aryl-cobalt species. These catalytic intermediates allow the reactions to be directed either mainly towards direct carbon-carbon bond formation or towards the synthesis of arylzinc reagents, which we obtained in good yields starting from the corresponding aromatic chlorides. This manuscript, divided into four chapters, presents the results obtained during this work. Various catalytic systems, such as CoBr2(bpy) or CoBr2(PPh3) associated with manganese as the reducing metal, or CoBr2(bpy) or CoBr2 associated with zinc powder, allowed us to develop vinylation reactions (chapter I), conjugate additions (chapter II), alkylations, whether or not via an organozinc reagent (chapter III), and finally the synthesis of non-symmetric biaryls (chapter IV).
APA, Harvard, Vancouver, ISO, etc. styles
29

Marie, Benjamin. "Exploitation d’informations riches pour guider la traduction automatique statistique". Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLS066/document.

Full text of the source
Abstract:
Although machine translation (MT) has undeniably made communication between languages easier, especially given the recent advances in statistical MT systems, the quality of the translations produced by MT systems is still well below that obtained through human translation. This gap is partly due to the way statistical MT systems operate: the types of models they can use are constrained by the need to construct and evaluate a great number of partial hypotheses before producing a complete translation hypothesis. More "complex" models, learnt from richer information, do exist, but integrating them when translation hypotheses are first built is not always possible, as they may require complete hypotheses or be too computationally expensive. Such complex models are therefore typically used in MT only to rerank lists of the best complete hypotheses. Although reranking does in practice exploit a better modelling of certain aspects of the translations, the approach is inherently limited: reranked hypothesis lists represent only a tiny portion of the decoder's search space, contain hypotheses that vary little from one another, and were obtained with models whose nature may be very different from the complex models used during reranking. We therefore put forward the hypothesis that such translation hypothesis lists are poorly adapted to exploiting the full potential of complex features.
The aim of this thesis is to establish new and better methods of exploiting rich information to improve the translations produced by statistical MT systems. Our first contribution is a rewriting system guided by rich information: sequences of rewriting operations, applied to the best hypotheses obtained by a reranking framework with access to the same rich information, allow our system to improve translation quality. The originality of our second contribution lies in a multi-pass construction of hypothesis lists that exploits information derived from the evaluation of previously produced translation hypotheses using our set of complex features. Our system thereby produces hypothesis lists that are more diverse and of better quality overall, and hence better suited to a reranking step based on rich information. Moreover, our rewriting system further improves the hypotheses produced by this multi-pass approach. Our third contribution rests on the simulation of an idealised, perfect type of information that identifies exactly which parts of a translation hypothesis are correct. This idealisation gives an indication of the best performance attainable with the approaches introduced above, were the available rich information to describe perfectly what constitutes a good translation.
This approach is also presented as a form of interactive translation, coined "pre-post-editing", reduced to its simplest form: a statistical MT system produces its best translation hypothesis, a human then indicates which parts of the hypothesis are correct, and this information is exploited during a new search to identify a better translation.
APA, Harvard, Vancouver, ISO, etc. styles
30

Galant, Céline. "Nouveaux complexes polyélectrolytes impliquant un polymère de ß-cyclodextrine, un tensioactif cationique et un polyanion". Paris 12, 2003. https://athena.u-pec.fr/primo-explore/search?query=any,exact,990002111230204611&vid=upec.

Full text of the source
Abstract:
A new model of polyelectrolyte complex has been elaborated, based on the one hand on inclusion complexes between a neutral β-CD polymer and a cationic surfactant (DTAC), and on the other hand on oppositely charged complexes between DTAC and a polyanion. Inclusion interactions between poly(β-CD) and DTAC alone in solution were first characterised using several techniques, including conductimetric, fluorimetric and surface-tension measurements. The structure of the resulting aggregates was studied by viscosimetry and SANS as a function of the ionic strength and the stoichiometry of the mixture. The addition of a polyanion to the poly(β-CD)/DTAC mixture was then shown to form water-soluble ternary complexes. For three polyanions of different natures and architectures (NaPSS, NaDxS and DNA), the structural properties of the complexes were analysed by SANS, viscosimetry and DLS as a function of DTAC concentration. The stability and reversibility of the complexes were also studied by varying the ionic strength of the medium and the concentration of competitors.
APA, Harvard, Vancouver, ISO, etc. styles
31

Vincent, Fanny. "Interactions protéine-protéine au cours de l'étirement des cardiomyocytes : mise en évidence d'une interaction entre la calcineurine et la PKC ε". Paris 12, 2005. https://athena.u-pec.fr/primo-explore/search?query=any,exact,990002323710204611&vid=upec.

Full text of the source
Abstract:
Myocardial stretch, the main stimulus of cardiac hypertrophy, activates a number of pathways, including protein kinase C (PKC), which in turn activates the mitogen-activated protein kinases (MAPK), leading to stimulation of gene expression and ventricular hypertrophy. The major finding of this study is the evidence, in neonatal rat cardiomyocytes, of a dual level of interaction between calcineurin and PKCε, both being involved in stretch-induced ERK and JNK activation. At the first level, stretch induced a calcineurin-dependent translocation of PKCε from the cytosolic to the particulate fraction. The second level of interaction is a novel stretch-induced protein-protein complex in which PKCε and calcineurin co-localise at the perinuclear membrane, as shown by immunofluorescence studies. Therapeutic agents designed to effectively promote or disrupt the formation of complexes involved in pathological phenotypes would likely achieve a level of specificity and efficacy not possible with present strategies.
APA, Harvard, Vancouver, ISO, etc. styles
32

Tran, Công Tâm. "Simulations de fluides complexes à l'échelle mésoscopique sur GPU". Thesis, Limoges, 2018. http://www.theses.fr/2018LIMO0024/document.

Full text of the source
Abstract:
Colloidal suspensions have been studied by numerical simulation using two physical models: Brownian dynamics (BD) and Stochastic Rotation Dynamics - Molecular Dynamics (SRD-MD). These studies consisted in porting colloidal simulations from previous works to the GPU, while seeking optimisations suited to these specific simulations. An improved neighbourhood search from the literature was used in all the BD-type simulations. An SRD-MD simulation with force coupling, not previously parallelised on GPU in the literature, was implemented using a new decomposition scheme adapted to this simulation, which considerably improves its performance. These simulations were then used to study more complex colloidal suspensions: heteroaggregation between two suspensions of particles of the same size, heteroaggregation between two populations of colloids of very different sizes, and, outside colloidal suspensions, a simulation of nanoalloys. Finally, the SRD model was adapted to realistic physically based fluid animation in the computer graphics context; this required incorporating notions such as compressibility and surface tension into the model. First results allowed a few simulations to be produced, including water pouring into a glass.
APA, Harvard, Vancouver, ISO, etc. styles
33

Morice, Élise. "Études comportementales des souris invalidées pour le transporteur de la dopamine utilisées comme modèle d'analyse génétique de traits complexes". Paris 12, 2004. https://athena.u-pec.fr/primo-explore/search?query=any,exact,990003943180204611&vid=upec.

Full text of the source
Abstract:
During my thesis, I carried out a behavioural analysis of mice lacking the dopamine transporter (DAT), which is responsible for the rapid reuptake of dopamine into presynaptic terminals. We showed that changing the genetic background reveals the extent of the phenotypic variation associated with the DAT mutation, both at the physiological level (survival, growth rate, lactation) and at the behavioural level (spontaneous activity, maternal behaviour, sensitivity to psychostimulants). The data emphasise the key role of dopaminergic transmission in the development of behavioural lateralisation and confirm the involvement of this system in behavioural flexibility as well as in associative learning and memory. Lateralisation and cognitive impairments are markers of susceptibility to various psychiatric disorders; understanding the contribution of dopaminergic processes to each of these endophenotypes will allow progress on questions raised in the clinic.
34

Ayadim, Abderrahime. "Structure et thermodynamique des suspensions colloïdales en phase volumique et confinée par la théorie des mélanges binaires". Paris 12, 2005. https://athena.u-pec.fr/primo-explore/search?query=any,exact,990002513970204611&vid=upec.

Abstract:
The Ornstein-Zernike integral equations are used to study the structure and the thermodynamics of colloidal suspensions modelled as a highly asymmetric solute-solvent mixture. We propose a new closure to remedy the non-convergence problem of the RHNC closure in the variant based on Rosenfeld's fundamental measure theory. After having tested it on the radial distribution functions, we determine the fluid-fluid coexistence line of asymmetric binary hard spheres with diameter ratio R=10. We thus confirm from the phase diagram the validity of the effective one-component approach. Then, we examine these systems in the inhomogeneous phase (colloids near a wall or in a slit pore) and include attractions between the various components. The RHNC closure is then tested at the level of the structure and of the potential of mean force. The method is finally used to study solvation effects of confined colloids.
35

Amri, Anis. "Autour de quelques statistiques sur les arbres binaires de recherche et sur les automates déterministes". Thesis, Université de Lorraine, 2018. http://www.theses.fr/2018LORR0301.

Abstract:
This PhD thesis is divided into two independent parts. In the first part, we provide an asymptotic analysis of some statistics on binary search trees. In the second part, we study the coupon collector problem with a constraint. In the first part, following the model introduced by Aguech, Lasmar and Mahmoud [Probab. Engrg. Inform. Sci. 21 (2007) 133—141], the weighted depth of a node in a labelled rooted tree is the sum of all labels on the path connecting the node to the root. We analyze the following statistics: the weighted depths of nodes with given labels, of the last inserted node, and of nodes ordered as visited by the depth-first search process, the weighted path length, the weighted Wiener index, and the weighted depths of nodes with at most one child in a random binary search tree. In the second part, we study the asymptotic shape of the completion curve of the collection conditioned on T_n ≤ (1+Λ)n ln n, Λ>0, where T_n ≃ n ln n is the time needed to complete the collection. Then, as an application, we study deterministic and accessible automata and provide a new derivation of a formula due to Korsunov [Kor78, Kor86].
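The weighted-depth statistic analysed above has a simple operational definition: sum every key on the root-to-node path. The following sketch (not the thesis's code; the toy tree and function names are invented for illustration) computes it in a plain binary search tree:

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    # Standard BST insertion; returns the (possibly new) root.
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def weighted_depth(root, key):
    # Sum of all keys on the path from the root down to the node
    # holding `key` (the node's own key included).
    total = 0
    node = root
    while node is not None:
        total += node.key
        if key == node.key:
            return total
        node = node.left if key < node.key else node.right
    raise KeyError(key)

root = None
for k in [5, 3, 8, 1, 4]:
    root = insert(root, k)
# The path to 4 is 5 -> 3 -> 4, so weighted_depth(root, 4) == 12.
```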
36

Serrà, Julià Joan. "Identification of versions of the same musical composition by processing audio descriptions". Doctoral thesis, Universitat Pompeu Fabra, 2011. http://hdl.handle.net/10803/22674.

Abstract:
This work focuses on the automatic identification of musical piece versions (alternate renditions of the same musical composition like cover songs, live recordings, remixes, etc.). In particular, we propose two core approaches for version identification: model-free and model-based ones. Furthermore, we introduce the use of post-processing strategies to improve the identification of versions. To this end, we employ nonlinear signal analysis tools and concepts, complex networks, and time series models. Overall, our work brings automatic version identification to an unprecedented stage where high accuracies are achieved and, at the same time, explores promising directions for future research. Although our steps are guided by the nature of the considered signals (music recordings) and the characteristics of the task at hand (version identification), we believe our methodology can be easily transferred to other contexts and domains.
37

Lorett, Velasquez Vaneza Paola Verfasser], Matthias [Akademischer Betreuer] Westerhausen i Alexander [Akademischer Betreuer] [Schiller. "Synthesis of ruthenium and manganese carbonyl complexes : a search for new carbon monoxide releasing molecules (CORMs) / Vaneza Paola Lorett Velasquez. Gutachter: Matthias Westerhausen ; Alexander Schiller". Jena : Thüringer Universitäts- und Landesbibliothek Jena, 2014. http://d-nb.info/104709696X/34.

38

Zhou, Yi. "Optimization Algorithms for Clique Problems". Thesis, Angers, 2017. http://www.theses.fr/2017ANGE0013/document.

Abstract:
This thesis considers four clique problems: the maximum vertex weight clique problem (MVWCP), the maximum s-plex problem (MsPlex), the maximum balanced biclique problem (MBBP) and the clique partitioning problem (CPP). The first three are generalizations or relaxations of the classic maximum clique problem (MCP), while the last is a clique grouping problem. These combinatorial problems have numerous practical applications. Given that they all belong to the NP-hard family, it is computationally difficult to solve them in the general case. For this reason, this thesis is devoted to developing effective algorithms to tackle these challenging problems. Specifically, we propose two restart tabu search algorithms based on a generalized PUSH operator for MVWCP, a frequency-driven local search algorithm for MsPlex, a graph-reduction-based tabu search as well as effective exact branch-and-bound algorithms for MBBP and, lastly, a three-phase local search algorithm for CPP. In addition to the design of efficient move operators for local search algorithms, we also integrate components like graph reduction or upper bound propagation in order to deal with very large real-life networks. Experimental tests on a wide range of instances show that our algorithms compete favorably with the main state-of-the-art algorithms.
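For readers unfamiliar with the s-plex relaxation treated above: a vertex set S is an s-plex when every vertex of S is adjacent to at least |S| - s other vertices of S, so a 1-plex is an ordinary clique. A minimal, hypothetical membership check (not the thesis's algorithm), with the graph stored as a dict of neighbour sets, might look like:

```python
def is_s_plex(adj, S, s):
    # S is an s-plex iff every vertex in S has at least |S| - s
    # neighbours inside S; s = 1 recovers the usual clique condition.
    S = set(S)
    return all(len(adj[v] & S) >= len(S) - s for v in S)

# Toy graph: triangle {0, 1, 2} plus a pendant vertex 3 attached to 2.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
```

A local search for MsPlex would use such a predicate (or an incrementally maintained version of it) to decide which vertex moves keep the candidate solution feasible.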
39

Del, Val Noguera Elena. "Semantic Service management for service-oriented MAS". Doctoral thesis, Universitat Politècnica de València, 2013. http://hdl.handle.net/10251/27556.

Abstract:
Complex computer systems are currently described in terms of entities that act as providers and consumers. These entities offer their functionality through services and interact with each other to offer or request those services. The integration of open multi-agent systems (MAS) and service-oriented systems is well suited to implementing this kind of system. In open MAS, agents enter and leave the system, interact with each other in a flexible way, and are considered reactive and proactive entities, capable of reasoning about what happens in their environment and of carrying out local actions based on their observations in order to reach their goals. The field of service-oriented computing provides the basic building blocks for complex enterprise applications, namely services. Services are platform-independent and can be discovered and composed dynamically. These features make services well suited to coping with the high rate of change in business demands. However, the complexity of computer systems, changes in environmental conditions and the agents' partial knowledge of the system require agents to be equipped with mechanisms that facilitate tasks such as service discovery, the self-organisation of their structural relations as service demand changes, and the promotion and maintenance of cooperative behaviour among agents to guarantee the smooth operation of service discovery in the system. The main contribution of this doctoral thesis is the proposal of a framework for service-oriented open multi-agent systems. This framework integrates agents located in a network without any predefined structure, and agents that, in addition to being in that network, form part of more complex dynamic groups
Del Val Noguera, E. (2013). Semantic Service management for service-oriented MAS [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/27556
40

Åberg, Johan. "Open Quantum Systems : Effects in Interferometry, Quantum Computation, and Adiabatic Evolution". Doctoral thesis, Uppsala University, Quantum Chemistry, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-5893.

Abstract:

The effects of open system evolution on single particle interferometry, quantum computation, and the adiabatic approximation are investigated.

Single particle interferometry: Three concepts concerning completely positive maps (CPMs) and trace preserving CPMs (channels), named subspace preserving (SP) CPMs, subspace local channels, and gluing of CPMs, are introduced. SP channels preserve probability weights on given orthogonal sum decompositions of the Hilbert space of a quantum system. Subspace locality determines what channels act locally with respect to such decompositions. Gluings are the possible total channels obtainable if two evolution devices, characterized by channels, act jointly on a superposition of a particle in their inputs. It is shown that gluings are not uniquely determined by the two channels. We determine all possible interference patterns in single particle interferometry for given channels acting in the interferometer paths. It is shown that the standard interferometric setup cannot distinguish all gluings, but a generalized setup can.

Quantum computing: The robustness of local and global adiabatic quantum search subject to decoherence in the instantaneous eigenbasis of the search Hamiltonian is examined. In both the global and the local search case the asymptotic time-complexity of the ideal closed case is preserved, as long as the Hamiltonian dynamics is present. In the case of pure decoherence, where the environment monitors the search Hamiltonian, it is shown that the local adiabatic quantum search performs as the classical search with scaling N, and that the global search scales like N^{3/2}, where N is the list length. We consider success probabilities p < 1 and prove bounds on the run-time with the same scaling as in the conditions for the p → 1 limit.

Adiabatic evolution: We generalize the adiabatic approximation to the case of open quantum systems in the joint limit of slow change and weak open system disturbances.

41

Unsworth, David I. "Individual differences in complex memory span and episodic retrieval examining the dynamics of delayed and continuous distractor free recall / by David I. Unsworth". Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/10463.

Abstract:
Individual differences on complex memory spans predict a variety of higher-order cognitive tasks (e.g. reading comprehension, reasoning, following directions) as well as low-level attention tasks (e.g. Stroop, dichotic listening, antisaccade). The current study attempted to better determine the role of individual differences in complex memory span in episodic retrieval. Specifically, two experiments explored the possibility that individual differences in complex memory span reflect differences in the ability to successfully retrieve items from secondary memory via a cue-dependent search process. High and low complex span participants were tested in delayed (Experiment 1) and continuous distractor (Experiment 2) free recall with varying list lengths. Across both experiments low spans recalled fewer items than high spans, recalled more previous-list intrusions than high spans, and recalled at a slower rate than high spans. It is argued that low spans search through a larger set of items than high spans and that, thus, low spans' episodic retrieval deficits are associated with an inability to use cues to guide a search and retrieval process of secondary memory. Implications for dual-component models of memory are discussed.
42

Ruiz, Echartea Maria Elisa. "Pairwise and Multi-Component Protein-Protein Docking Using Exhaustive Branch-and-Bound Tri-Dimensional Rotational Searches". Electronic Thesis or Diss., Université de Lorraine, 2019. http://www.theses.fr/2019LORR0306.

Abstract:
Determination of tri-dimensional (3D) structures of protein complexes is crucial to advancing research on biological processes that help, for instance, to understand the development of diseases and their possible prevention or treatment. The difficulties and high costs of experimental methods to determine protein 3D structures, and the importance of protein complexes for research, have encouraged the use of computer science for developing tools to help fill this gap, such as protein docking algorithms. The protein docking problem has been studied for over 40 years. However, developing accurate and efficient protein docking algorithms remains challenging due to the size of the search space, the approximate nature of the scoring functions used, and often the inherent flexibility of the protein structures to be docked. This thesis presents an algorithm to rigidly dock proteins using a series of exhaustive 3D branch-and-bound rotational searches in which non-clashing orientations are scored using ATTRACT. The rotational space is represented as a quaternion "π-ball", which is systematically sub-divided in a branch-and-bound manner, allowing efficient pruning of rotations that would give steric clashes. The contribution of this thesis can be described in three main parts as follows. 1) The algorithm called EROS-DOCK to assemble two proteins. It was tested on 173 Docking Benchmark complexes. According to the CAPRI quality criteria, EROS-DOCK typically gives more acceptable- or medium-quality solutions than ATTRACT and ZDOCK. 2) The extension of the EROS-DOCK algorithm to allow the use of atom-atom or residue-residue distance restraints. The results show that using even just one residue-residue restraint in each interaction interface is sufficient to increase the number of cases with acceptable solutions within the top-10 from 51 to 121 out of 173 pairwise docking cases.
Hence, EROS-DOCK offers a new, improved search strategy for incorporating experimental data, of which a proof of principle using data-driven computational restraints is demonstrated in this thesis, and this might be especially important for multi-body complexes. 3) The extension of the algorithm to dock trimeric complexes. Here, the proposed method is based on the premise that all of the interfaces in a multi-body docking solution should be similar to at least one interface in each of the lists of pairwise docking solutions. The algorithm was tested on a home-made benchmark of 11 three-body cases. Seven complexes obtained at least one acceptable-quality solution in the top-50. In the future, the EROS-DOCK algorithm can evolve by integrating improved scoring functions and other types of restraints. Moreover, it can be used as a component in elaborate workflows to efficiently solve complex problems of multi-protein assemblies.
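The quaternion representation of the rotational search space mentioned above rests on the standard fact that a unit quaternion q rotates a vector v via q (0, v) q*. A small illustrative sketch of that rotation (plain Python, not EROS-DOCK code):

```python
import math

def quat_rotate(q, v):
    # Rotate 3-vector v by unit quaternion q = (w, x, y, z) via q(0,v)q*.
    w, x, y, z = q
    vx, vy, vz = v
    # a = q * (0, v)  (quaternion product)
    aw = -x * vx - y * vy - z * vz
    ax = w * vx + y * vz - z * vy
    ay = w * vy + z * vx - x * vz
    az = w * vz + x * vy - y * vx
    # vector part of a * conj(q), with conj(q) = (w, -x, -y, -z)
    rx = -aw * x + ax * w - ay * z + az * y
    ry = -aw * y + ay * w - az * x + ax * z
    rz = -aw * z + az * w - ax * y + ay * x
    return (rx, ry, rz)

# 90-degree rotation about the z-axis: q = (cos 45°, 0, 0, sin 45°).
h = math.sqrt(0.5)
r = quat_rotate((h, 0.0, 0.0, h), (1.0, 0.0, 0.0))
# r is (approximately) (0.0, 1.0, 0.0)
```

A branch-and-bound search over the "π-ball" subdivides the set of such unit quaternions, evaluating or pruning each sub-region rather than enumerating rotations one by one.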
43

Mora, Campos Armando. "Estudio de Arquitecturas VLSI de la etapa de predicción de la compensación de movimiento, para compresión de imágenes y video con Algoritmos full-search. Aplicación al estándar H.264/AVC". Doctoral thesis, Universitat Politècnica de València, 2008. http://hdl.handle.net/10251/3446.

Abstract:
This doctoral thesis presents the design and implementation of VLSI motion estimation architectures, in their integer- and fractional-pixel versions, for the motion compensation prediction stage of the H.264/AVC video coding standard. The proposed architectures are pipelined-parallel processing structures with a highly efficient data path and optimal memory management. Using the full-search block-matching algorithm, the designs meet the standard's requirements of variable block size and ¼-pixel resolution at maximum quality. The motion estimators combine the features of the state-of-the-art architectures with new hardware schemes and algorithms applied to the coding of the luma component of the video signal. Designed as hardware acceleration coprocessors for 32-bit processors, the presented architectures have been simulated and synthesised for Xilinx Virtex-4 FPGAs using the VHDL hardware description language.
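The full-search block-matching criterion underlying the abstract above can be sketched in a few lines: for every candidate displacement in a search window, compute the sum of absolute differences (SAD) against the reference frame and keep the minimum. The toy Python below (frame contents are invented; a real H.264/AVC estimator runs this in hardware over variable block sizes) illustrates the idea:

```python
def sad(cur, ref, cx, cy, rx, ry, B):
    # Sum of absolute differences between the BxB block of `cur` at
    # (cx, cy) and the BxB block of `ref` at (rx, ry).
    return sum(abs(cur[cy + j][cx + i] - ref[ry + j][rx + i])
               for j in range(B) for i in range(B))

def full_search(cur, ref, cx, cy, B, w):
    # Exhaustively test every integer displacement in [-w, w]^2 and
    # return the motion vector with minimal SAD (plus that SAD).
    H, W = len(ref), len(ref[0])
    best_mv, best_cost = None, float("inf")
    for dy in range(-w, w + 1):
        for dx in range(-w, w + 1):
            rx, ry = cx + dx, cy + dy
            if 0 <= rx <= W - B and 0 <= ry <= H - B:
                cost = sad(cur, ref, cx, cy, rx, ry, B)
                if cost < best_cost:
                    best_mv, best_cost = (dx, dy), cost
    return best_mv, best_cost

# Toy frames: the reference has a bright 2x2 patch at (1, 1); in the
# current frame the same patch appears shifted to (2, 2).
ref = [[0] * 6 for _ in range(6)]
cur = [[0] * 6 for _ in range(6)]
for j in range(2):
    for i in range(2):
        ref[1 + j][1 + i] = 255
        cur[2 + j][2 + i] = 255
mv, cost = full_search(cur, ref, 2, 2, 2, 2)
# Best match is displacement (-1, -1) with SAD 0.
```

Fractional-pixel refinement, as in the thesis, would interpolate the reference frame and repeat the same search at ½- and ¼-pixel positions around this integer optimum.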
Mora Campos, A. (2008). Estudio de Arquitecturas VLSI de la etapa de predicción de la compensación de movimiento, para compresión de imágenes y video con Algoritmos full-search. Aplicación al estándar H.264/AVC [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/3446
44

Nguyen, Khac Duy. "Structural damage identification using experimental modal parameters via correlation approach". Thesis, Queensland University of Technology, 2018. https://eprints.qut.edu.au/117289/2/Khac%20Duy%20Nguyen.pdf.

Abstract:
This research provides a new damage identification strategy using experimental modal parameters via a correlation approach. Two damage identification algorithms using the modal strain energy-eigenvalue ratio (MSEE) are presented. Firstly, a method using a simplified term of MSEE called the geometric modal strain energy-eigenvalue ratio (GMSEE) is developed. Secondly, the original method is modified using the full term of MSEE, providing better damage identification capability when used with fewer vibration modes. The performance of the proposed damage identification algorithms has been successfully validated with a numerical model and with experimental models of various scales from small to large.
45

Jacomini, Ricardo de Souza. "Inferência de redes gênicas por agrupamento, busca exaustiva e análise de predição intrinsecamente multivariada". Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-05092017-111639/.

Abstract:
Gene network (GN) inference from temporal gene expression data is a crucial and challenging problem in Systems Biology. Expression datasets usually consist of dozens of temporal samples, while networks consist of thousands of genes, thus rendering many inference methods infeasible in practice. To improve the scalability of GN inference methods, this work proposes a framework called GeNICE, based on Probabilistic Gene Networks; the main novelty is the introduction of a clustering procedure to group genes with related expression profiles, to provide an approximate solution with reduced computational complexity. The defined clusters are used to perform an exhaustive search that retrieves the best predictor gene subsets for each target gene, according to multivariate criterion functions. GeNICE greatly reduces the search space because predictor candidates are restricted to one representative gene per cluster. Finally, a multivariate analysis is performed for each defined predictor subset to retrieve minimal subsets and to simplify the network. In experiments with in silico generated datasets, GeNICE achieved a substantial reduction in computational time compared to an existing solution without the clustering step, while preserving the gene expression prediction accuracy even when the number of clusters is small (about fifty) relative to the number of genes (order of thousands). For a Plasmodium falciparum microarray dataset, the prediction accuracy achieved by GeNICE was roughly 97% on average.
The inferred networks for the apicoplast and glycolytic target genes reflect the topological properties of the "small-world" and "scale-free" complex network models, in which a large part of the connections is established between genes of the same functional module (small-world networks) and the degree distribution of the connections between genes tends to follow a power law, in which most genes have few connections and a few genes (hubs) have a large number of connections (scale-free networks), as expected.
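The exhaustive predictor search at the core of the framework described above can be illustrated with a deliberately simplified sketch: candidate predictor subsets are enumerated, and each is scored by how consistently its joint (binarised) state determines the target gene's next value. Function names, the scoring rule and the toy series are all invented for illustration and are far simpler than GeNICE's multivariate criterion functions:

```python
from itertools import combinations

def best_predictors(target, candidates, series, k):
    # Exhaustively score every k-subset of candidate genes: a subset is
    # better when the target's next value is more consistently determined
    # by the subset's current joint state.
    best, best_err = None, float("inf")
    for subset in combinations(candidates, k):
        outcomes = {}
        for t in range(len(series[target]) - 1):
            state = tuple(series[g][t] for g in subset)
            outcomes.setdefault(state, []).append(series[target][t + 1])
        # error = fraction of transitions that disagree with the majority
        # outcome observed for their predictor state
        wrong = sum(len(outs) - outs.count(max(set(outs), key=outs.count))
                    for outs in outcomes.values())
        err = wrong / (len(series[target]) - 1)
        if err < best_err:
            best, best_err = subset, err
    return best, best_err

# Toy binarised profiles: gene "t" copies gene "a" with a one-step delay.
series = {
    "a": [0, 1, 1, 0, 1, 0, 0, 1],
    "b": [1, 1, 0, 0, 1, 1, 0, 0],
    "t": [1, 0, 1, 1, 0, 1, 0, 0],
}
best, err = best_predictors("t", ["a", "b"], series, 1)
# "a" predicts "t" perfectly, so best == ("a",) and err == 0.0
```

GeNICE's clustering step shrinks `candidates` to one representative per cluster, which is what makes the exhaustive enumeration above tractable for thousands of genes.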
46

Mamani, Alexander Victor Ocsa. "Soluções aproximadas para algoritmos escaláveis de mineração de dados em domínios de dados complexos usando GPGPU". Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-22112011-132339/.

Abstract:
The increasing availability of data in diverse domains has created a necessity to develop techniques and methods to discover knowledge from huge volumes of complex data, motivating many research works in databases, data mining and information retrieval communities. Recent studies have suggested that searching in complex data is an interesting research field because many data mining tasks such as classification, clustering and motif discovery depend on nearest neighbor search algorithms. Thus, many deterministic approaches have been proposed to solve the nearest neighbor search problem in complex domains, aiming to reduce the effects of the well-known curse of dimensionality. On the other hand, probabilistic algorithms have been slightly explored. Recently, new techniques aim to reduce the computational cost relaxing the quality of the query results. Moreover, in large-scale problems, an approximate solution with a solid theoretical analysis seems to be more appropriate than an exact solution with a weak theoretical model. On the other hand, even though several exact and approximate solutions have been proposed, single CPU architectures impose limits on performance to deliver these kinds of solution. An approach to improve the runtime of data mining and information retrieval techniques by an order-of-magnitude is to employ emerging many-core architectures such as CUDA-enabled GPUs. In this work we present a massively parallel kNN query algorithm based on hashing and CUDA implementation. Our method, based on the LSH scheme, is an approximate method which queries high-dimensional datasets with sub-linear computational time. By using the massively parallel implementation we improve data mining tasks, specifically we create solutions for (soft) realtime time series motif discovery. Experimental studies on large real and synthetic datasets were carried out thanks to the highly CUDA parallel implementation. 
Our performance evaluation on GeForce GTX 470 GPU resulted in average runtime speedups of up to 7x on the state-of-art of similarity search and motif discovery solutions
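As a rough illustration of the LSH idea behind this abstract, below is a minimal single-threaded Python sketch of random-projection locality-sensitive hashing for approximate kNN. The class name, parameters, and hashing scheme are illustrative assumptions, not the thesis's CUDA implementation:

```python
import numpy as np
from collections import defaultdict

class LSHIndex:
    """Approximate kNN via random-projection LSH (illustrative sketch)."""

    def __init__(self, dim, n_planes=8, n_tables=4, seed=0):
        rng = np.random.default_rng(seed)
        # One set of random hyperplanes per hash table.
        self.planes = [rng.normal(size=(n_planes, dim)) for _ in range(n_tables)]
        self.tables = [defaultdict(list) for _ in range(n_tables)]
        self.data = []

    def _key(self, planes, x):
        # Hash = pattern of signs of the projections onto the hyperplanes.
        return tuple((planes @ x > 0).astype(int))

    def add(self, x):
        idx = len(self.data)
        self.data.append(np.asarray(x, float))
        for planes, table in zip(self.planes, self.tables):
            table[self._key(planes, self.data[idx])].append(idx)

    def query(self, q, k=1):
        q = np.asarray(q, float)
        # Candidates = union of the buckets q falls into.
        candidates = set()
        for planes, table in zip(self.planes, self.tables):
            candidates.update(table.get(self._key(planes, q), []))
        if not candidates:                     # fallback: scan everything
            candidates = range(len(self.data))
        # Exact re-ranking of the (hopefully small) candidate set.
        ranked = sorted(candidates, key=lambda i: np.linalg.norm(self.data[i] - q))
        return ranked[:k]
```

The sub-linear behaviour comes from re-ranking only the candidate buckets instead of the whole dataset; on a GPU the per-table hashing and the candidate re-ranking are the naturally parallel steps.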
Styles: APA, Harvard, Vancouver, ISO, etc.
47

Lima, Danielli Araújo. "Autômatos celulares e sistemas bio-inspirados aplicados ao controle inteligente de robôs". Universidade Federal de Uberlândia, 2017. http://dx.doi.org/10.14393/ufu.te.2018.26.

Full text source
Abstract:
CNPq - Conselho Nacional de Desenvolvimento Científico e Tecnológico
In several situations, the volume of tasks to be accomplished cannot be handled by a single robot. A field that has attracted growing interest is therefore the investigation of the behaviour of swarms of search robots. Cooperation and control strategies for such a swarm must be considered for efficient performance of the robot team. Several classic artificial intelligence techniques are able to solve this problem. In this work, a set of bio-inspired techniques, including a model based on cellular automata with memory and an inverted pheromone, was initially considered to coordinate a team of robots in a foraging task over previously known environments. The team's robots share the same environment and communicate through the inverted pheromone, which is deposited by all agents at each time step, resulting in repulsive forces and greater coverage of the environment. The return to the nest, by contrast, is based on the social behaviour observed in pedestrian evacuation, resulting in attractive forces. All movements in this process are first-choice moves, and conflict resolution gives the model a non-deterministic character. Subsequently, the base model was adapted to selective collection and search-and-rescue tasks. Simulation results were presented under different environment conditions, and parameters such as the amount and arrangement of food, nest position and width, pheromone-related constants, and memory size were analysed in the experiments. The base model proposed for the foraging task was then implemented, with appropriate adaptations, using e-Puck robots in the Webots simulation environment. Finally, a theoretical analysis of the model was carried out using graph theory and queuing theory. The method proposed in this work proved efficient and capable of being implemented with a high degree of parallelism and distribution, making the model attractive for other robotic tasks, especially problems involving parallel multi-objective search.
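The inverted-pheromone coordination described in this abstract can be sketched as a simple grid update in Python. The grid representation, evaporation rate, and random tie-breaking rule below are illustrative assumptions, not the thesis's model:

```python
import numpy as np

def step(grid, robots, deposit=1.0, evap=0.1, rng=None):
    """One step of an inverted-pheromone foraging sketch: each robot moves
    (sequentially within the step) to the free neighbouring cell with the
    least pheromone, then deposits pheromone that repels the others."""
    rng = rng or np.random.default_rng(0)
    h, w = grid.shape
    grid *= (1.0 - evap)                       # pheromone evaporation
    occupied = set(robots)
    new_robots = []
    for (r, c) in robots:
        # Von Neumann neighbourhood, excluding walls and occupied cells.
        nbrs = [(r + dr, c + dc)
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                if 0 <= r + dr < h and 0 <= c + dc < w
                and (r + dr, c + dc) not in occupied]
        if nbrs:
            # Move toward the least-marked cell; random tie-breaking
            # gives the model its non-deterministic character.
            best = min(grid[p] for p in nbrs)
            choices = [p for p in nbrs if grid[p] == best]
            nxt = choices[rng.integers(len(choices))]
        else:
            nxt = (r, c)                       # blocked: stay in place
        occupied.discard((r, c))
        occupied.add(nxt)
        grid[nxt] += deposit                   # inverted pheromone repels
        new_robots.append(nxt)
    return new_robots
```

Because every agent applies the same local rule to its neighbourhood, the update parallelises naturally, which matches the abstract's remark about a high degree of parallelism and distribution.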
Thesis (Doctorate)
Styles: APA, Harvard, Vancouver, ISO, etc.
48

Randau, Emma, i Frida Tordsson. "Är detaljhandlarna fast i det förflutna? : En fallstudie av IKEAs köksavdelning". Thesis, Högskolan i Borås, Akademin för textil, teknik och ekonomi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-23980.

Full text source
Abstract:
Consumer behaviour has changed: today consumers no longer search for product-related information solely in the physical store before making a complex purchase. Technological development and online stores have enabled consumers to search for information whenever and wherever they want, leaving them more informed than ever. This change in consumer behaviour is an important research subject, as companies must understand their consumers in order to create the best possible business strategies and business models. The aim of this thesis is therefore to gain a deeper understanding of this changed behaviour and of whether the physical store plays a different role in consumer preparation prior to a complex purchase. The study is designed as a case study of IKEA's kitchen department. Our intention was to understand important aspects of consumer preparation, information search, and behaviour prior to a complex purchase, so a mixed-method strategy was used, allowing us to examine whether the role of the physical store has changed. This was studied through observations, questionnaires, and interviews. Thanks to the use of eye-tracking technology during the observations, we could study respondents' behaviour in more detail than previous research has done. The study concludes that the physical store still plays an important role when consumers purchase complex products, foremost because consumers have a strong need to touch and feel the product and to gain a complete picture of what they intend to buy. The main conclusion is therefore that the role of the physical store prior to a complex purchase is to complement the information available online by enabling consumers to interact with the products in a real-world store environment.
Styles: APA, Harvard, Vancouver, ISO, etc.
49

Мінакова, О. О. "Економічне планування та прогнозування розвитку харчової промисловості України". Thesis, Одесса, 2019. http://ir.stu.cn.ua/123456789/17631.

Full text source
Abstract:
Мінакова, О. О. Економічне планування та прогнозування розвитку харчової промисловості України : дис. ... канд. екон. наук : 08.00.03 / О. О. Мінакова. - Одесса, 2019. - 231 с.
The dissertation is devoted to systematising the theoretical foundations and substantiating practical recommendations for developing modern directions of economic planning and forecasting for the food industry of Ukraine. The essence of the economic category of "planning" is interpreted as a component of managerial decision-making aimed at ensuring coordination and consistency of the actions of interested parties, given the parameters of economic processes at the level of the state, region, industry, or enterprise, which allows the necessary proportions in the development of the food industry to be maintained in line with established goals. The commonalities and differences between planning and forecasting in economics are identified with respect to scale, objects, purpose, goals, and specific setting. A scientific and methodical approach to planning industry development is advanced, based on a comprehensive analysis of the experience of applying socio-economic forecasting and planning methods in the world's leading countries, which makes it possible to substantiate promising directions for the industry's development. Based on a generalisation of the practice of organising and operating the food industry, the necessity of creating integrated structures of a multi-profile character, from simple to complex formations involving several enterprises, is substantiated. A method of normative forecasting of food industry development is developed on the basis of indicative planning and formal methods of socio-economic statistics, together with a model of daily energy value based on multiple regression. In reforming the structure and economic mechanism of food industry management, the work further develops the problem of providing the population with basic foodstuffs by constructing a multiple regression of daily energy value per person on the consumption of basic food products across the regions of Ukraine. The resulting model made it possible to examine the regional structure of consumption of products of animal and plant origin and to identify regions with significant deviations from recommended values. Priority directions of state governance are substantiated on the basis of forecast models, proceeding from the need to improve the methodology for determining the capacity of the internal food market and for planning industry revenues. These calculations rest on long-term forecasting of the state's economic growth and household incomes under three-year budget planning, and of population size by demographic methods.
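The daily-energy-value model mentioned in this abstract is a multiple regression. A minimal ordinary-least-squares sketch follows; all numbers and the three food-consumption columns are purely illustrative placeholders, not the thesis's regional data:

```python
import numpy as np

# Hypothetical per-region rows: consumption of three staple food groups
# (kg per capita per year); target = daily energy value (kcal per person).
X = np.array([
    [100.0, 50.0, 200.0],
    [ 90.0, 60.0, 180.0],
    [110.0, 55.0, 210.0],
    [ 95.0, 45.0, 190.0],
    [105.0, 65.0, 205.0],
])
y = np.array([2800.0, 2750.0, 2900.0, 2650.0, 2950.0])

# Fit y ≈ b0 + b1*x1 + b2*x2 + b3*x3 by ordinary least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(x):
    """Predicted daily energy value for one region's consumption vector."""
    return coef[0] + coef[1:] @ np.asarray(x, float)
```

Comparing predictions against recommended energy values per region is then a matter of checking which residuals exceed a chosen threshold, which is how deviating regions could be flagged.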
Styles: APA, Harvard, Vancouver, ISO, etc.
50

Кравчук, Володимир Вікторович. "Комплекс програм для визначення нероздільних завадостійких кодів". Bachelor's thesis, КПІ ім. Ігоря Сікорського, 2020. https://ela.kpi.ua/handle/123456789/35023.

Full text source
Abstract:
The bachelor's project includes an explanatory note (55 pages, 45 figures, 4 appendices). The work investigates error-correcting coding and the search for a maximum clique in a graph. Different types of coding are considered, the problem of the analytical code rate is described, and the Bron–Kerbosch clique-search algorithm is analysed. Based on the specifics of equivalent codes and the Hamming graph, ways of improving the algorithm for finding a maximum non-separable error-correcting code are proposed. It was decided to develop a suite of programs to simplify the determination and study of non-separable error-correcting codes. Concrete requirements and functionality were formulated for the suite: the ability to search for maximum non-separable error-correcting codes according to user-specified parameters; stopping the suite at any moment while saving the intermediate data the algorithm is working with; loading the saved data and resuming after a stop; and the ability to perform various operations on codes, such as computing the minimum code distance, computing the distance from a codeword to a code, and sorting a code; together with a simple, understandable graphical interface for convenient work with the program. The suite is implemented in the Java programming language, which is supported by all popular operating systems, using the standard JavaFX library for graphical interface development.
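The connection between code search and maximum cliques described in this abstract can be sketched briefly: take all words of length n as vertices of a Hamming graph, join two words when their Hamming distance is at least d, and a maximum clique is then a largest code with minimum distance d. The basic Bron–Kerbosch recursion below (in Python rather than the project's Java, and without the thesis's pivoting and equivalence-based improvements) illustrates the idea:

```python
from itertools import product

def hamming(a, b):
    """Hamming distance between two equal-length tuples."""
    return sum(x != y for x, y in zip(a, b))

def bron_kerbosch(R, P, X, adj, best):
    """Classic Bron–Kerbosch without pivoting; records the largest clique."""
    if not P and not X:
        if len(R) > len(best[0]):
            best[0] = list(R)
        return
    for v in list(P):
        bron_kerbosch(R | {v}, P & adj[v], X & adj[v], adj, best)
        P.remove(v)
        X.add(v)

def max_code(n, d):
    """Largest binary code of length n with minimum distance d,
    found as a maximum clique in the Hamming graph."""
    words = list(product((0, 1), repeat=n))
    # Two words are adjacent iff they may coexist in a distance-d code.
    adj = {w: {u for u in words if hamming(u, w) >= d} for w in words}
    best = [[]]
    bron_kerbosch(set(), set(words), set(), adj, best)
    return best[0]
```

For example, `max_code(3, 2)` recovers a code of size 4 (the even-weight words), matching the known bound A(3, 2) = 4; the exponential growth of the Hamming graph is what motivates the equivalence- and symmetry-based prunings mentioned in the abstract.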
Styles: APA, Harvard, Vancouver, ISO, etc.
