Theses on the topic « Algorithme de tri » (sorting algorithms)
Create an accurate reference in APA, MLA, Chicago, Harvard, and several other citation styles.
Consult the top 50 theses for your research on the topic « Algorithme de tri ».
Next to every source in the list of references there is an "Add to bibliography" button. Click it, and the bibliographic reference for the chosen source will be generated automatically in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the scholarly publication as a PDF and consult its abstract online whenever this information is included in the metadata.
Browse theses on a wide variety of disciplines and organise your bibliography correctly.
Durand, Marie. « PaVo Un tri parallèle adaptatif ». Phd thesis, Université de Grenoble, 2013. http://tel.archives-ouvertes.fr/tel-00959995.
Guedj, Mikaël. « Peut-on déléguer le tri des urgences ophtalmologiques à un algorithme informatisé auto-implémenté par le patient ? : le projet ICARE (Interactive Care Assessment of Risk factors and Emergency levels) ». Thesis, Sorbonne Paris Cité, 2018. http://www.theses.fr/2018USPCB190.
Every year in France, 4 million emergency consultations are not justified from a medical point of view, and almost half of the patients presenting to the Emergency Department could be treated elsewhere, thus freeing the Emergency Departments to take care of the genuinely urgent situations. The overcrowding of the ERs led the services to set up a prioritization of care for the reception of patients; this prioritization is neither standardized nor rationalized. We designed a computerized tool to sort emergency levels based on a patient's symptoms, background, and medical context. With this tool, called iCare, the patient, alone or assisted by a third party, must be able to detect and prioritize the symptoms leading to an urgent consultation, as opposed to less urgent or non-urgent symptoms. The evaluation of the iCare algorithm on ocular pathologies aims to provide a generalizable and reproducible sorting tool for the different care units, but also to improve patient autonomy in the understanding of their symptoms and their use of the healthcare system (the e-health concept of empowerment). The main objective of our "interventional research involving only minimal risks and constraints" was to validate the iCare sorting algorithm, which determines the appropriate level of urgency corresponding to the clinical situations encountered. This validity was based on the calculation of sensitivity, specificity, and positive and negative predictive values. The chosen gold standard was the level of emergency determined by the doctor after the consultation. A total of 1000 patients presenting with an ophthalmological emergency were offered participation in the research upon arrival at the reception of two Parisian health centers (Rothschild Foundation, Vernes Institute), from the date of protocol acceptance by the Committee for the Protection of Persons in biomedical research (CPP). If the patient consented to participate, a Clinical Study Technician (CST) had him fill in the iCare questionnaire, presented as an interactive questionnaire on a touchscreen tablet, which took less than two minutes to complete. At the end of this step, a level of emergency A, B, C or D was provided by the program. The level of emergency attributed by the algorithm was not disclosed to either the patient or the doctor who was going to examine him. The level of emergency determined by the physician at the end of the clinical examination (gold standard of the primary endpoint) was collected as a level A, B, C, D or as a binary choice Urgent / Not Urgent (U / NU). Other parameters, such as the time required and the need for filling assistance, the reason for consultation, demographics and on-site waiting time, were also analyzed. This thesis outlines a state of the art of the term "e-health" in 2018, addresses the current public health issues related to high traffic in emergency services in France, and presents the iCare tool as a potential solution to simplify and rationalize the sorting of emergency levels in ophthalmology (public health feature), as a means of health education and empowerment of patients in the reading of their symptoms (empowerment feature), but also as a generalizable tool for big-data reporting of the reasons for consultation in emergency wards, private practices or even health-related internet searches made at home (epidemiological feature).
Guewouo, Thomas. « Système Intégré et Multi-Fonctionnel de Stockage Electrique-Thermique avec l’Option de Tri-Génération ». Thesis, Nantes, 2018. http://www.theses.fr/2018NANT4009/document.
To address climate change, the transition to a decarbonized energy system is self-evident. The renewable energy sources supporting this energy transition are intermittent; therefore, they should be coupled with an electrical storage system to ensure the reliability of the power systems that use them. Compressed air energy storage (CAES) is one of these energy storage technologies. Unfortunately, in its current configuration, CAES requires the combustion of natural gas during the discharging periods to improve the global energy efficiency of the system. This work contributes to the reduction of the environmental footprint of compressed air energy storage by proposing a small-scale CAES using no fossil fuel energy source. Initially, a careful thermodynamic modeling of such a storage system is made according to the types of components chosen and to their thermal behavior (adiabatic or polytropic). Subsequently, to demonstrate its feasibility, a comprehensive experimental investigation was performed on an experimental prototype existing in our lab. The very low experimental conversion efficiency obtained (4%), although confirming the technical feasibility, suggested that the proposed storage system should be optimized. A real-coded genetic algorithm, modified to stabilize and accelerate its convergence, is documented here and used to identify a set of thirteen parameters that maximize the global exergy efficiency of the proposed electric energy storage system. The result of the optimization indicates that at the optimum operating point, the electrical efficiency of the storage system is about 20% for a round-trip efficiency of 75%.
Chibirova, Olga. « Interactions fonctionnelles dans les ganglions de la base étudiées par l'enregistrement simultané des activités unitaires discriminées par un algorithme non supervisé de tri de potentiels d'action ». Phd thesis, Université Joseph Fourier (Grenoble), 2006. http://tel.archives-ouvertes.fr/tel-00115720.
The method presented in the first part of this thesis is a new approach to this problem, describing action potentials by means of differential equations with a perturbation term that characterizes the internal variation of their shape. The unsupervised spike-sorting software developed from this method includes an automatic algorithm for estimating class templates and their radii.
The second part presents the application of the method to the analysis of neuronal activity in the basal ganglia. The data for the analyses were collected in the operating theatre of the neurosurgery department of Grenoble University Hospital during intra-operative electrophysiology, and cover the STN (950 recordings), the GPI (183) and the SNR (105) of 13 parkinsonian patients and 2 patients suffering from dystonia. The analyses are intended to define the typical action potential shapes and to reveal a parallel between the nature of the neuronal activity and the severity of Parkinson's disease symptoms.
Moine, Pascale. « Analyse tri-dimensionnelle de la structure de l'atmosphère à partir d'observations satellitaires : classification des masses d'air observées et extension à la haute stratosphère (30-50 km) ». Paris 6, 1986. http://www.theses.fr/1986PA066329.
Abdayem, Anthony. « Stratégies de contrôle optimisées pour les convertisseurs multiniveaux modulaires (MMCs) connectés au réseau basse tension ». Electronic Thesis or Diss., CY Cergy Paris Université, 2024. http://www.theses.fr/2024CYUN1301.
The modular multilevel converter (MMC) has emerged as one of the most promising topologies for medium- to high-voltage, high-power applications. Recently, it has also shown potential for applications requiring low voltages, known as mini MMCs, which contain a smaller number of submodules per arm. Key features of MMCs include modularity, voltage and power scalability, fault tolerance, transformer-less operation, and high-quality output waveforms. In recent years, numerous research studies have been conducted to address the technical challenges associated with MMC operation, control, and topology. One of the most significant applications for MMCs is in grid-connected systems. These converters offer the advantage of reducing current and voltage harmonics without the need for bulky passive components. Moreover, MMCs demonstrate reliability due to their structure, which enables them to continue operating even if one or more power switches fail. However, their control is complex due to the numerous switching configurations, necessitating sophisticated control algorithms. This thesis focuses on implementing advanced control techniques to enhance MMC performance. It aims to explore MMCs, improve existing power structures for novel applications, and increase efficiency and reliability through control design and modulation techniques. The research also investigates controlling MMCs using novel Model Predictive Control methods. Specifically, this thesis comprises a series of investigations addressing challenges and enhancing MMC performance across various applications. The first set of studies focuses on a new control design for MMCs, allowing separate control of capacitor voltages in the upper and lower arms. The research also targets single-phase MMCs, enabling control under unbalanced power conditions between the upper and lower arms. Additionally, the study addresses modulation and voltage balancing techniques. A new modulation technique, the Integral Modulation Technique, an advancement of the Nearest Level Modulation Technique, is introduced. A novel sorting algorithm is also proposed to enhance MMC efficiency by reducing the number of commutations per second for existing modulation techniques such as NLM, IM, and PWM. The research extends to fault-tolerant operation in three-phase MMCs, proposing a method that injects DC and frequency harmonic circulating currents to sustain operation in the event of a faulty arm. A significant contribution involves developing a single-horizon finite control set model predictive control (FCS-MPC) algorithm for single-phase MMCs, which outperforms traditional methods in terms of commutations, grid current waveform quality, and capacitor voltage variance. Six FCS-MPC algorithms for MMCs are introduced, offering insights into their performance compared to a classic cascaded control scheme. The thesis concludes with a novel configuration for a Cascaded H-Bridge (CHB) converter designed for renewable energy integration, demonstrating effectiveness through simulations. In summary, this thesis presents a comprehensive exploration of MMCs, addressing control challenges, fault tolerance, modulation techniques, and innovative configurations for renewable energy integration. The findings contribute to advancing MMC technologies in various applications.
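The thesis's own sorting algorithm is not spelled out in the abstract above; as background only, here is a minimal Python sketch of the conventional sorting-based capacitor-voltage balancing that such modulation schemes typically refine (function and variable names are illustrative, not taken from the thesis).

```python
def select_submodules(cap_voltages, n_on, arm_current):
    """Pick which submodules of one arm to insert.

    Conventional sorting-based balancing: when the arm current charges the
    capacitors, insert the least-charged submodules; when it discharges them,
    insert the most-charged ones.
    """
    order = sorted(range(len(cap_voltages)),
                   key=cap_voltages.__getitem__,
                   reverse=(arm_current < 0))  # ascending if charging, descending otherwise
    return sorted(order[:n_on])

# 8 submodules, 3 to insert, positive (charging) arm current
print(select_submodules([1.02, 0.97, 1.05, 0.99, 1.01, 0.96, 1.03, 1.00], 3, +12.5))  # [1, 3, 5]
```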
Le, Trung-Dung. « Gestion de masses de données dans une fédération de nuages informatiques ». Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1S101.
Cloud federations can be seen as major progress in cloud computing, in particular in the medical domain. Indeed, sharing medical data would improve healthcare. Federating resources makes it possible to access any information, even on a mobile person, with hospital data distributed on several sites. Besides, it enables us to consider larger volumes of data on more patients and thus provide finer statistics. Medical data usually conform to the Digital Imaging and Communications in Medicine (DICOM) standard. DICOM files can be stored on different platforms, such as Amazon, Microsoft, Google Cloud, etc. The management of the files on such platforms, including sharing and processing, follows the pay-as-you-go model, according to distinct pricing models and relying on various systems (relational DBMSs or NoSQL systems). In addition, DICOM data can be structured following traditional (row or column) or hybrid (row-column) data storage. As a consequence, medical data management in cloud federations raises Multi-Objective Optimization Problems (MOOPs) for (1) query processing and (2) data storage, according to user preferences related to various measures, such as response time, monetary cost, quality, etc. These problems are complex to address because of the heterogeneous database engines, the variability (due to virtualization, large-scale communications, etc.) and the high computational complexity of a cloud federation. To solve these problems, we propose a MedIcal system on clouD federAtionS (MIDAS). First, MIDAS extends IReS, an open-source platform for complex analytics workflows executed over multi-engine environments, to solve the MOOP over heterogeneous database engines. Second, we propose an algorithm for estimating cost values in a cloud environment, called Dynamic REgression AlgorithM (DREAM). This approach adapts to the variability of the cloud environment by changing the size of the training and testing data, so as to avoid using outdated information about the systems. Third, a Non-dominated Sorting Genetic Algorithm based on Grid partitioning (NSGA-G) is proposed to solve MOOPs whose candidate space is large. NSGA-G aims to find an approximate optimal solution while improving the quality of the optimal Pareto set of the MOOP. In addition to query processing, we propose to use NSGA-G to find an approximate optimal solution for the DICOM data configuration. We provide experimental evaluations to validate DREAM and NSGA-G with various test problems and datasets. DREAM is compared with other machine learning algorithms in providing accurate estimated costs. The quality of NSGA-G is compared to other NSGAs on many problems in the MOEA framework. The DICOM dataset is also experimented with NSGA-G to find optimal solutions. Experimental results show the good quality of our solutions in estimating and optimizing multi-objective problems in a cloud federation.
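As an illustration of the sliding-window idea attributed to DREAM above (retrain the cost model only on recent observations so that stale measurements of a changing cloud are ignored), here is a minimal least-squares sketch; the window size, feature layout and function names are assumptions, not the thesis's actual design.

```python
import numpy as np

def fit_recent(features, costs, window=50):
    """Fit a linear cost model on only the most recent observations."""
    X = np.asarray(features, dtype=float)[-window:]
    y = np.asarray(costs, dtype=float)[-window:]
    A = np.hstack([X, np.ones((len(X), 1))])        # add an intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict(coef, x):
    """Estimate the cost (e.g. response time) of one new query or configuration."""
    return float(np.dot(np.append(np.asarray(x, dtype=float), 1.0), coef))

coef = fit_recent([[1.0, 2.0], [2.0, 3.0], [3.0, 5.0], [4.0, 4.0]],
                  [5.1, 8.0, 12.9, 12.2], window=3)
print(predict(coef, [2.5, 4.0]))
```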
Auger, Nicolas. « Analyse réaliste d'algorithmes standards ». Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC1110/document.
At first, we were interested in TimSort, a sorting algorithm designed in 2002, at a time when it was hard to imagine new results on sorting. Although it is used in many programming languages, the efficiency of this algorithm had not been studied formally before our work. The fine-grain study of TimSort led us to take into account, in our theoretical models, some modern features of computer architecture. In particular, we propose a study of the mechanisms of branch prediction. This theoretical analysis allows us to design variants of some elementary algorithms (like binary search or exponentiation by squaring) that rely on this feature to achieve better performance on recent computers. Even if uniform distributions are usually considered for the average-case analysis of algorithms, they may not be the best framework for studying sorting algorithms. The choice of using TimSort in many programming languages such as Java and Python is probably driven by its efficiency on almost-sorted input. To conclude this dissertation, we propose a mathematical model of non-uniform distributions on permutations, for which permutations that are almost sorted are more likely, and provide a detailed probabilistic analysis.
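As a concrete illustration of the branch-prediction-aware variants mentioned above, here is a generic "branch-reduced" binary search, not the thesis's exact algorithm: the comparison result is folded into an index update instead of an if/else on the search range, which is what lets compiled code use conditional moves.

```python
def branch_reduced_binary_search(a, key):
    """Binary search where the comparison result feeds an index computation
    rather than an unpredictable branch on the search range; the speed-up
    only shows up in compiled code, this Python version just illustrates
    the control structure."""
    lo, n = 0, len(a)
    while n > 1:
        half = n // 2
        # advance 'lo' by 'half' exactly when the key lies in the upper part
        lo += (a[lo + half - 1] < key) * half
        n -= half
    return lo if a and a[lo] == key else -1

print(branch_reduced_binary_search([1, 3, 5, 7, 9, 11], 7))  # -> 3
```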
McNally, Jeffrey Mark. « Fast parallel algorithms for tri-diagonal symmetric Toeplitz systems ». Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0017/MQ54635.pdf.
Dias, Vieira Braga Marilia. « L'espace de solutions du tri par inversions et son utilisation dans l'analyse de réarrangements de génomes ». Lyon 1, 2009. http://n2t.net/ark:/47881/m6rv0kvd.
Calculating the reversal distance and searching for optimal sequences of reversals to transform a genome into another when gene duplications are not allowed are useful algorithmic tools to analyse real evolutionary scenarios. However, the number of sorting sequences is usually huge. Using a model previously proposed to group the sorting sequences into classes of equivalence, we developed an algorithm to directly generate the classes without enumerating all sequences, thus reducing the size of the set to be handled. We then propose the use of different biological constraints, such as the common intervals detected initially and progressively, to reduce the universe of sequences and classes, and show how to apply these methods to analyse real cases in evolution. In particular, we analysed the evolution of the Rickettsia bacterium and of the sexual chromosomes X and Y in human. We obtain a better characterization of the evolutionary scenarios of these genomes, with respect to the results of previous studies that were based on a single sorting sequence. All the algorithms developed in this work are implemented and integrated into baobabLUNA, a Java framework to deal with genomes and reversals. Download and tutorial for baobabLUNA are available online.
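To make the reversal operation underlying these sorting sequences concrete, here is a small sketch of a signed reversal applied to a permutation; the example sequence is illustrative, and the class-generation algorithm itself is not reproduced.

```python
def apply_reversal(perm, i, j):
    """Reverse the segment perm[i..j] (0-based, inclusive) and flip its signs,
    as in the signed-reversal model used in genome rearrangement studies."""
    return perm[:i] + [-g for g in reversed(perm[i:j + 1])] + perm[j + 1:]

# one (illustrative) sorting sequence for a small signed permutation
p = [-3, 1, 2]
for i, j in [(1, 2), (0, 2)]:
    p = apply_reversal(p, i, j)
    print(p)   # [-3, -2, -1] then [1, 2, 3]
```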
Bouguila, Maissa. « Modélisation numérique et optimisation des matériaux à changement de phase : applications aux systèmes complexes ». Electronic Thesis or Diss., Normandie, 2024. http://www.theses.fr/2024NORMIR05.
Phase-change materials exhibit considerable potential in the field of thermal management. These materials offer a significant thermal storage capacity. Excessive heat dissipated by miniature electronic components could lead to serious failures. A cooling system based on phase-change materials is among the most recommended solutions to guarantee the reliable performance of these microelectronic components. However, the low conductivity of these materials is considered a major limitation to their use in thermal management applications. The primary objective of this thesis is to address the challenge of improving the thermal conductivity of these materials. Numerical modeling is conducted, in the first chapters, to determine the optimal configuration of a heat sink, based on the study of several parameters such as fin insertion, nanoparticle dispersion, and the use of multiple phase-change materials. The innovation in this parametric study lies in the modeling of heat transfer in phase-change materials with a relatively high nanoparticle concentration, compared to the low concentrations found in recent experimental literature. Significant conclusions are deduced from this parametric study, enabling us to propose a new model based on multiple phase-change materials enhanced with nanoparticles (NANOMCP). Reliable optimization studies are then conducted. Initially, a mono-objective reliability optimization study is carried out to propose a reliable and optimal model based on multiple NANOMCPs. The Robust Hybrid Method (RHM) proposes a reliable and optimal model, compared with the Deterministic Design Optimization method (DDO) and various Reliability-Based Design Optimization (RBDO) methods. Furthermore, the integration of a developed RBDO method (RHM) for the thermal management application is considered an innovation in the recent literature. Additionally, a reliable multi-objective optimization study is proposed, considering two objectives: the total volume of the heat sink and the discharge time to reach ambient temperature. The RHM optimization method and the non-dominated sorting genetic algorithm (C-NSGA-II) were adopted to search for the optimal and reliable model that offers the best trade-off between the two objectives. Besides, an advanced metamodel is developed to reduce simulation time, considering the large number of iterations involved in finding the optimal model.
Kaufmann, Benoit. « Spécification et conception d'un système auto-stéréoscopique multi-vues pour l'affichage tri-dimensionnel ». Marne-la-Vallée, 2006. http://www.theses.fr/2006MARN0319.
There are currently many 3D display systems. Each of them has its advantages and disadvantages. For example, many of them require the observers to wear special glasses or limit the observation positions to a certain distance. We improved one of these types of display systems: autostereoscopic displays, which combine the advantages of all the existing systems for the observer without their disadvantages. This thesis presents the principle of this new system and discusses its advantages and disadvantages compared to the existing 3D display systems. It also proposes a solution to the different problems arising from this principle, related to the significant number of views to be computed: the computing time needed to generate these views and the memory size needed to store them before display. The solution consists of a lossless 3D compression algorithm and its real-time implementation on a GPU.
Hermant, Audrey. « Sur l'algorithme de tir pour les problèmes de commande optimale avec contraintes sur l'état ». Phd thesis, Ecole Polytechnique X, 2008. http://tel.archives-ouvertes.fr/tel-00348227.
Catelli, Ezio. « Isomorfismo tra alberi : algoritmi e complessità computazionale ». Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amslaurea.unibo.it/6940/.
Sandberg, Roland. « Generation of floating islands using height maps ». Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3183.
Galvan, Javier. « Simulation of Tri-generation Systems with application of optimization ». Thesis, KTH, Tillämpad termodynamik och kylteknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-95191.
Barros, Fabien de. « Conception, Réalisation et Caractérisation de papiers fonctionnels pour des applications de filtrage électromagnétique ». Thesis, Grenoble, 2011. http://www.theses.fr/2011GRENT118/document.
The electromagnetic smog in which we live today is a real issue, because it limits the use of certain technologies and also because there are some potential health risks associated with it, even if the latter is still a controversial subject. The importance of the interferences between wireless networks or the possibility of data hacking on the same networks are two examples. The aim of this thesis is to develop a new way to protect buildings against some of these electromagnetic waves. More specifically, this work focuses on a technology able to filter only the WiFi and GSM waves through large areas of a home, like a wall for example. To do this, the functionalization of a standard component of buildings, the wallpaper, was studied. The use of frequency selective surfaces (FSS) was chosen. These patterns are printed directly on paper with a conductive-ink printing technology: flexography. The study also focuses on the realization of innovative filter designs. Simulation results show that these novel FSS are able to filter two or three bands. They are almost insensitive to the polarization and to the angle of incidence in the range of 0° to ±80°. The feasibility of realizing this concept in a laboratory or under industrial conditions was demonstrated. Next, an experimental demonstration of this concept in the WiFi bands was carried out; in this context, a transmission coefficient of -30 dB was reached. Finally, an experimental validation of the product in real conditions of use was conducted, namely the wallpaper was put over plasterboards or over wood panels. Also, the influence of the glue on the general performances and the placement of a decorative wallpaper over the FSS wallpaper were studied. In conclusion, the practical results obtained confirm and validate the theoretical predictions of this new concept, called metapaper, and show that the practical realizations are efficient enough to allow the reduction of WiFi or GSM signals.
Piselli, Riccardo. « Innovazione finanziaria e algoritmi : tra trasparenza e opacità ». Doctoral thesis, Luiss Guido Carli, 2020. http://hdl.handle.net/11385/204275.
Mu, Shin-Cheng. « A calculational approach to program inversion / ». Oxford : Oxford University Computing Laboratory, 2004. http://web.comlab.ox.ac.uk/oucl/publications/tr/rr-04-03.html.
Maia, Júnior Antonio Geraldo Pinto. « Uso do tempo de resposta para melhorar a convergência do algoritmo de testes adaptativos informatizados ». reponame:Repositório Institucional da UnB, 2015. http://repositorio.unb.br/handle/10482/18834.
Computerized adaptive tests (CATs) are tests administered by computer which adjust the test items as the test is carried out. This work proposes to improve CATs by taking into account the time that respondents take to answer the different questions when obtaining provisional estimates of their ability in order to choose the next item. This information is used to modify the classical criteria (maximum information, overall maximum information or maximum expected information). It is believed that the use of this covariate may improve the convergence of the CAT algorithm, thus allowing for shorter tests. The dissertation presents a review of IRT (Item Response Theory) and CAT and the new model which takes the response time into account. An application using simulated data is used to compare the convergence of a traditional CAT algorithm and that of the model using the response time.
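For context, the classical maximum-information criterion that this work modifies with response times can be sketched as follows for a 2PL item bank; the item parameters and helper names are illustrative, not the dissertation's code.

```python
import math

def p_correct(theta, a, b):
    """2PL item response function."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def next_item(theta_hat, item_bank, administered):
    """Classical maximum-information rule: pick the unused item that is most
    informative at the current ability estimate."""
    candidates = [k for k in range(len(item_bank)) if k not in administered]
    return max(candidates, key=lambda k: item_information(theta_hat, *item_bank[k]))

bank = [(1.2, -0.5), (0.8, 0.0), (1.5, 0.7), (1.0, 1.3)]  # (a, b) pairs, illustrative
print(next_item(0.4, bank, administered={2}))  # -> 0
```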
Ben, Jmaa Chtourou Yomna. « Implémentation temps réel des algorithmes de tri dans les applications de transports intelligents en se basant sur l'outil de synthèse haut niveau HLS ». Thesis, Valenciennes, 2019. http://www.theses.fr/2019VALE0013.
Intelligent transport systems play an important role in minimizing accidents, traffic congestion, and air pollution. Among these systems is the avionics domain, which in several cases uses sorting algorithms, one of the important operations for real-time embedded applications. However, technological evolution is moving towards more and more complex architectures to meet the application requirements. In this respect, designers find their ideal solution in reconfigurable computing, based on heterogeneous CPU/FPGA architectures that house multi-core processors (CPUs) and FPGAs offering high performance and adaptability to the real-time constraints of the application. The main objective of my work is to develop hardware implementations of sorting algorithms on the heterogeneous CPU/FPGA architecture by using the high-level synthesis tool to generate the RTL design from the behavioral description. This step requires additional effort on the part of the designer in order to obtain an efficient hardware implementation, using several optimizations with different use cases: software, optimized and non-optimized hardware, and for several permutations/vectors generated using a permutation generator based on the Lehmer method. To assess performance, we measured the runtime, standard deviation and number of resources used for the sorting algorithms, considering several data sizes ranging from 8 to 4096 items. Finally, we compared the performance of these algorithms. These algorithms will be integrated into decision-support applications such as flight-plan planning.
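The abstract mentions generating test permutations with a generator based on the Lehmer method; a minimal sketch of that idea (decoding a factorial-base index into a permutation) could look like the following, assuming the real generator used in the thesis is more elaborate.

```python
def lehmer_to_permutation(index, n):
    """Decode a factorial-base (Lehmer) index into the corresponding
    permutation of 0..n-1, handy for generating reproducible test inputs
    for sorting kernels."""
    digits = []
    for radix in range(1, n + 1):          # digits of 'index' in the factorial number system
        index, d = divmod(index, radix)
        digits.append(d)
    digits.reverse()
    available = list(range(n))
    return [available.pop(d) for d in digits]

print(lehmer_to_permutation(0, 4))   # [0, 1, 2, 3] (identity)
print(lehmer_to_permutation(23, 4))  # [3, 2, 1, 0] (last of the 24 permutations)
```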
Sedlář, František. « Algoritmy pro vyhledání nejdelšího shodného prefixu ». Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2013. http://www.nusl.cz/ntk/nusl-236363.
Texte intégralPolanský, Jan. « Návrh automatického obchodního systému pro měnový trh ». Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2016. http://www.nusl.cz/ntk/nusl-254259.
Texte intégralManyatsi, Sanele Mduduzi Innocent. « Investigating some heuristic solutions for the two-dimensional cutting stock problem / S.M. Manyatsi ». Thesis, North-West University, 2010. http://hdl.handle.net/10394/4390.
Texte intégralThesis (M.Sc. (Computer Science))--North-West University, Potchefstroom Campus, 2011.
Kuna, Martin. « Využití umělé inteligence na kapitálových trzích ». Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2010. http://www.nusl.cz/ntk/nusl-433196.
Texte intégralBirgersson, Emil. « Applicering av en 2D dungeon algoritm i en 3D rymd : Hur bra presterar TinyKeeps dungeon algoritm i tre dimensioner ? » Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-19862.
Texte intégralDet finns övrigt digitalt material (t.ex. film-, bild- eller ljudfiler) eller modeller/artefakter tillhörande examensarbetet som ska skickas till arkivet.
Ficiarà, Lorenzo. « Implementazione di un algoritmo per il calcolo della condizione di trim di elicotteri convenzionali ». Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/19855/.
Texte intégralJaramillo, Juan Carlos Burbano. « Otimização exergoeconômica de sistema tetra-combinado de trigeração ». Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/3/3150/tde-11082011-134622/.
Texte intégralEnergy is the largest contributor to operating costs of any industry; therefore, studies for improving systems efficiency that use some energy source are essential. This work aims to obtain optimal configurations in order to satisfy required demands for electricity and thermal loads for heating and cooling from a primary source of energy, evaluating the impact of the electricity, steam and chilled water production costs. These types of systems are known as trigeneration systems. The performance evaluation of trigeneration systems is carried out by the application of exergy and exergoeconomic analysis of the proposed alternatives in order to determine exergy efficiency and exergy based costs on production of this type of system utilities. After presenting a brief discussion about efficient and rational use of primary energies and an overview of situation for trigeneration systems application, various technologies involved in this type of systems and some configurations proposed by several authors are described. This research shows the impact of trigeneration technologies in exergy-based costs of products: electricity, steam process and chilled water. Absorption refrigeration systems of simple effect, double effect and the hybrid absorption/ejecto compression are analyzed, as part of the trigeneration systems study. Several trigeneration systems, including the tetra-combined system, are compared with each other, satisfying energetic demands for three different applications: a dairy industry, a hospital and a drinks industry. The configurations in study are optimized using the Genetic Algorithm method. The results show that the hybrid absorption/ejecto compression refrigeration system is a good alternative for chilled water production due to that the coefficient of performance (COP) and the exergetic efficiency are higher than simple effect absorption refrigeration system. Observing the impact in the formation of the energy conversion costs for trigeneration systems proposed, the systems that use a double effect absorption refrigeration system presents the less impact. When tetra-combined system is compared with the system using a simple effect absorption refrigeration system, the results show a reduction in the impact of costs formation. The fuel consumption and exergy destruction of the different systems is reflected in the exergy based costs of the different products. The optimization with genetic algorithms shown important profits in the exergy based costs of products, by means of the exergetic efficiency maximization of the different trigeneration systems. The genetic algorithm method is a robust method for energy conversion systems optimization, even that it demands a great computational effort.
Aronna, Maria Soledad. « Analyse du second ordre des problèmes de commande optimale avec des arcs singuliers. Conditions d'optimalité et un algorithme de tir ». Phd thesis, Ecole Polytechnique X, 2011. http://tel.archives-ouvertes.fr/tel-00682276.
Texte intégralSeverini, Silvia. « Studio preliminare per l'individuazione di regioni genomiche più diffuse tra le popolazioni ». Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/14473/.
Texte intégralWinck, Ana Trindade. « 3D-Tri : um algoritmo de indução de árvore de regressão para propriedades tridimensionais - um estudo sobre dados de docagem molecular considerando a flexibilidade do receptor ». Pontifícia Universidade Católica do Rio Grande do Sul, 2012. http://hdl.handle.net/10923/1662.
With the growth of biological experiments, handling and analyzing the massive amount of data being generated has been one of the challenges in bioinformatics, where one important research area is rational drug design (RDD). The interaction between biological macromolecules called receptors and small molecules called ligands is the fundamental principle of RDD. In in-silico molecular docking experiments, the best binding and conformation of a ligand into a receptor is investigated. A docking result can be discriminated by a continuous value called Free Energy of Binding (FEB). We focus on mining data from molecular docking results, aiming at selecting promising receptor conformations for the next docking experiments. In this sense, we have developed a comprehensive repository to store our molecular docking data. Having such a repository, we were able to apply preprocessing strategies to the stored data and submit them to different data mining tasks. Among the techniques applied, the most promising results were obtained with regression model trees. Although we have already addressed important issues and achieved significant results, some properties of these experiments make it difficult to properly select conformations. Hence, a strategy was proposed that considers the three-dimensional (3D) properties of the receptor conformations to predict FEB. This thesis presents 3D-Tri, a novel regression tree induction algorithm able to handle spatial coordinates in an x, y, z format and induce a tree that predicts the FEB value by representing such properties. The algorithm uses these coordinates to split a node in two parts, where the edges evaluate whether the atom being tested by the node lies within a given interval [(xi, xf); (yi, yf); (zi, zf)], where i indicates the initial position of the coordinate and f its final position. The induced model can help a domain specialist select promising conformations, based on the region of the atoms in the model, to perform new molecular docking experiments.
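A minimal sketch of the split rule described above, where a node tests whether a chosen atom's (x, y, z) coordinates fall inside a block [(xi, xf); (yi, yf); (zi, zf)], could look like this; the numbers and dictionary layout are purely illustrative and this is not the 3D-Tri implementation.

```python
def inside_block(atom, block):
    """Is the atom's (x, y, z) position inside [(xi, xf), (yi, yf), (zi, zf)]?"""
    return all(lo <= c <= hi for c, (lo, hi) in zip(atom, block))

def route(conformation, node):
    """Follow one tree edge: conformations whose tested atom falls inside the
    node's block go left, the others go right."""
    atom = conformation[node["atom_index"]]
    return node["left"] if inside_block(atom, node["block"]) else node["right"]

node = {"atom_index": 2,
        "block": [(10.1, 12.7), (-3.0, 0.5), (22.4, 25.9)],
        "left": "predict FEB = -8.3", "right": "predict FEB = -6.1"}
conformation = {2: (11.0, -1.2, 24.0)}
print(route(conformation, node))  # -> 'predict FEB = -8.3'
```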
Thames, John Lane. « Advancing cyber security with a semantic path merger packet classification algorithm ». Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/45872.
Texte intégralRuiz, Echartea Maria Elisa. « Pairwise and Multi-Component Protein-Protein Docking Using Exhaustive Branch-and-Bound Tri-Dimensional Rotational Searches ». Electronic Thesis or Diss., Université de Lorraine, 2019. http://www.theses.fr/2019LORR0306.
Determination of tri-dimensional (3D) structures of protein complexes is crucial to increase research advances on biological processes that help, for instance, to understand the development of diseases and their possible prevention or treatment. The difficulties and high costs of experimental methods to determine protein 3D structures and the importance of protein complexes for research have encouraged the use of computer science for developing tools to help fill this gap, such as protein docking algorithms. The protein docking problem has been studied for over 40 years. However, developing accurate and efficient protein docking algorithms remains a challenging problem due to the size of the search space, the approximate nature of the scoring functions used, and often the inherent flexibility of the protein structures to be docked. This thesis presents an algorithm to rigidly dock proteins using a series of exhaustive 3D branch-and-bound rotational searches in which non-clashing orientations are scored using ATTRACT. The rotational space is represented as a quaternion "π-ball", which is systematically sub-divided in a "branch-and-bound" manner, allowing efficient pruning of rotations that will give steric clashes. The contribution of this thesis can be described in three main parts as follows. 1) The algorithm called EROS-DOCK to assemble two proteins. It was tested on 173 Docking Benchmark complexes. According to the CAPRI quality criteria, EROS-DOCK typically gives more acceptable or medium quality solutions than ATTRACT and ZDOCK. 2) The extension of the EROS-DOCK algorithm to allow the use of atom-atom or residue-residue distance restraints. The results show that using even just one residue-residue restraint in each interaction interface is sufficient to increase the number of cases with acceptable solutions within the top-10 from 51 to 121 out of 173 pairwise docking cases. Hence, EROS-DOCK offers a new improved search strategy to incorporate experimental data, of which a proof-of-principle using data-driven computational restraints is demonstrated in this thesis, and this might be especially important for multi-body complexes. 3) The extension of the algorithm to dock trimeric complexes. Here, the proposed method is based on the premise that all of the interfaces in a multi-body docking solution should be similar to at least one interface in each of the lists of pairwise docking solutions. The algorithm was tested on a home-made benchmark of 11 three-body cases. Seven complexes obtained at least one acceptable quality solution in the top-50. In the future, the EROS-DOCK algorithm can evolve by integrating improved scoring functions and other types of restraints. Moreover, it can be used as a component in elaborate workflows to efficiently solve complex problems of multi-protein assemblies.
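To make the rotational-search primitive concrete, here is a small sketch of how one candidate orientation (a unit quaternion from the subdivided π-ball) would be applied to ligand coordinates before clash checking and scoring; this is generic quaternion math, not EROS-DOCK code.

```python
import math

def quaternion_to_matrix(q):
    """Rotation matrix of a unit quaternion (w, x, y, z)."""
    w, x, y, z = q
    return [[1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
            [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
            [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)]]

def rotate(atoms, q):
    """Apply the orientation encoded by q to a list of (x, y, z) coordinates,
    e.g. one candidate ligand pose to be clash-checked and scored."""
    m = quaternion_to_matrix(q)
    return [tuple(sum(m[r][c] * p[c] for c in range(3)) for r in range(3)) for p in atoms]

q90z = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))  # 90 degrees about the z axis
print(rotate([(1.0, 0.0, 0.0)], q90z))  # approximately [(0.0, 1.0, 0.0)]
```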
Nivollet, Pierre-Yves. « Un habitat capable de reconnaître les activités planifiées dans un calendrier électronique ». Mémoire, Université de Sherbrooke, 2013. http://savoirs.usherbrooke.ca/handle/11143/46.
Winck, Ana Trindade. « 3D-Tri : um algoritmo de indução de árvore de regressão para propriedades tridimensionais - um estudo sobre dados de docagem molecular considerando a flexibilidade do receptor ». Pontifícia Universidade Católica do Rio Grande do Sul, 2012. http://tede2.pucrs.br/tede2/handle/tede/5156.
Sastre, Javier M. « Efficient finite-state algorithms for the application of local grammars ». Phd thesis, Université de Marne la Vallée, 2011. http://tel.archives-ouvertes.fr/tel-00621249.
Texte intégralMartucci, Marco. « Quantificazione delle Biomasse residuali agricole nel Sud Italia e successivo confronto tra algoritmi di mappatura nativi ed esterni in QGIS ». Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20658/.
Texte intégralSpilla, Emma. « Caratterizzazione del cammino sulla sabbia per mezzo di sensori inerziali indossabili : confronto tra algoritmi per la stima degli eventi del passo ». Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2019.
Dupuy, Grégor. « Les collections volumineuses de documents audiovisuels : segmentation et regroupement en locuteurs ». Thesis, Le Mans, 2015. http://www.theses.fr/2015LEMA1006/document.
The task of speaker diarization, as defined by NIST, considers the recordings from a corpus as independent processes. The recordings are processed separately, and the overall error rate is a weighted average. In this context, detected speakers are identified by anonymous labels specific to each recording. Therefore, a speaker appearing in several recordings will be identified by a different label in each of the recordings. Yet, this situation is very common in broadcast news data: hosts, journalists and other guests may appear recurrently. Consequently, speaker diarization has recently been considered in a broader context, where recurring speakers must be uniquely identified in every recording that composes a corpus. This generalization of the speaker partitioning problem goes hand in hand with the emergence of the concept of collections, which refers, in the context of speaker diarization, to a set of recordings sharing one or more common characteristics. The work proposed in this thesis concerns speaker clustering of large audiovisual collections (several tens of hours of recordings). The main objective is to propose (or adapt) clustering approaches in order to efficiently process large volumes of data while detecting recurrent speakers. The effectiveness of the proposed approaches is discussed from two points of view: first, the quality of the produced clustering (in terms of error rate), and second, the time required to perform the process. For this purpose, we propose two architectures designed to perform cross-show speaker diarization on collections of recordings. We propose a simplifying approach that decomposes a large clustering problem into several independent sub-problems. Solving these sub-problems is done with either of two clustering approaches which take advantage of the recent advances in speaker modeling.
Martin, Hugo. « Optimisation multi-objectifs et élicitation de préférences fondées sur des modèles décisionnels dépendants du rang et des points de référence ». Electronic Thesis or Diss., Sorbonne université, 2022. http://www.theses.fr/2022SORUS101.
This thesis work falls within the research field of algorithmic decision theory, which is defined at the junction of decision theory, artificial intelligence and operations research. This work focuses on taking sophisticated behaviors into account in complex decision environments (multicriteria decision making, collective decision making, and decision under risk and uncertainty). We first propose methods for multi-objective optimization on implicit sets when preferences are represented by rank-dependent models (Choquet integral, bipolar OWA, Cumulative Prospect Theory and bipolar Choquet integral). These methods are based on mathematical programming and discrete algorithmics approaches. Then, we present methods for the incremental parameter elicitation of rank-dependent models that take into account the presence of a reference point in the decision maker's preferences (bipolar OWA, Cumulative Prospect Theory, Choquet integral with capacities and bicapacities). Finally, we address the structural modification of solutions under constraints (cost, quality) in multiple reference point sorting methods. The different approaches proposed in this thesis have been tested, and we present the obtained numerical results to illustrate their practical efficiency.
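For readers unfamiliar with the rank-dependent models cited above, a minimal computation of the discrete Choquet integral (one of those models) can be sketched as follows, with an illustrative two-criterion capacity; this is textbook material, not the thesis's elicitation or optimization code.

```python
def choquet(values, capacity):
    """Discrete Choquet integral of non-negative criterion values with respect
    to a capacity given as a dict {frozenset of criteria: weight}."""
    order = sorted(range(len(values)), key=values.__getitem__)  # ascending values
    total, previous = 0.0, 0.0
    for rank, i in enumerate(order):
        upper = frozenset(order[rank:])            # criteria scoring at least values[i]
        total += (values[i] - previous) * capacity[upper]
        previous = values[i]
    return total

# two criteria with a sub-additive capacity (illustrative numbers)
capacity = {frozenset(): 0.0, frozenset({0}): 0.4, frozenset({1}): 0.4, frozenset({0, 1}): 1.0}
print(choquet([0.6, 0.9], capacity))  # 0.6*1.0 + 0.3*0.4 = 0.72
```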
Quaglia, Roberto. « Calibrazione di modelli afflussi-deflussi su un set di bacini austriaci : confronto tra modelli spazialmente distribuiti e concentrati e scelta degli algoritmi di ottimizzazione ». Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018.
Câmara, Paulo César de Souza. « Implementação de um algoritmo para a análise de curtos-circuitos de alta impedância baseado no fluxo de carga soma de potências ». Universidade Federal do Rio Grande do Norte, 2002. http://repositorio.ufrn.br:8080/jspui/handle/123456789/15248.
The main objective of this work is to show all the particularities of the Three-phase Power Summation Method, used for load flow calculation, with respect to the influence of the magnetic coupling among the phases, as well as to the losses present in all the transformers existing in the feeder to be analyzed. In addition, its application is detailed for the study of short circuits that occur in the presence of high impedance values, which pose an extra problem: they are difficult to detect and, consequently, to clear with common protection devices. This happens because the short-circuit current is generally of the same order of magnitude as the load currents. Results of simulations carried out in several situations are shown, aiming at a complete analysis of the behavior of the proposed method for several types of short circuits. The results obtained by the method are compared with results from other works in order to verify its effectiveness.
Cénac, Peggy. « Récursivité au carrefour de la modélisation de séquences, des arbres aléatoires, des algorithmes stochastiques et des martingales ». Habilitation à diriger des recherches, Université de Bourgogne, 2013. http://tel.archives-ouvertes.fr/tel-00954528.
Texte intégralRobin, Cyril. « Modèles et algorithmes pour systèmes multi-robots hétérogènes : application à la patrouille et au suivi de cible ». Thesis, Toulouse, INSA, 2015. http://www.theses.fr/2015ISAT0037/document.
Texte intégralDetecting, localizing or following targets is at the core of numerous robotic applications in industrial, civilian and military application contexts. Much work has been devoted in various research communities to planning for such problems, each community with a different standpoint. Our thesis first provides a unifying taxonomy to go beyond the frontiers of specific communities and specific problems, and to enlarge the scope of prior surveys. We review various work related to each class of problems identified in the taxonomy, highlighting the different approaches, models and results. This analysis specifically points out the lack of representativityof the exploited models, which are in vast majority only 2D single-layer models where motion and sensing are mixed up. We consider those unrealistic models as too restrictive to handle the full synergistic potential of an heterogeneous team of cooperative robots. In response to this statement, we suggest a new organisation of the necessary models, stating clearly the links and separation between models and planning algorithms. This has lead to the development of a C++ library that structures the available models and defines the requests required by the planning process. We then exploit this library through a set of algorithms tackling area patrolling and target tracking. These algorithms are supported by a sound formalism and we study the impact of the models on the observed performances, with an emphasis on the complexity and the quality of the resultingsolutions. As a more general consideration, models are an essential link between Artificial Intelligence and applied Robotics : improving their expressiveness and studying them rigorously are the keys leading toward better robot behaviours and successful robotic missions. This thesis help to show how important the models are for planning and other decision processes formulti-robot missions
Haushalterová, Gabriela. « Vysokofrekvenční obchodovaní a jeho dopad na stabilitu finančního trhu ». Master's thesis, Vysoká škola ekonomická v Praze, 2017. http://www.nusl.cz/ntk/nusl-359578.
Texte intégralJabeur, Mohamed. « Méthodes géometriques en mécanique spatiale et aspects numériques ». Phd thesis, Université de Bourgogne, 2005. http://tel.archives-ouvertes.fr/tel-00012145.
This work covers two research projects on the optimal control of space vehicles. The first is devoted to the orbital transfer problem. The model studied is the minimum-time control of a satellite to be inserted into a geostationary orbit. This classical problem has regained relevance with the evolution of low, continuous thrust engine technology. Our contribution is twofold. Geometric, first of all, since we study the controllability of the system as well as the geometry of the transfers (structure of the control) using geometric control tools (minimum principle). The shooting algorithm and the continuation method are then presented. These approaches make it possible to treat numerically the orbital transfer problem for strong to low thrust. The second concerns the computation of atmospheric re-entry trajectories for the space shuttle. The problem describing the trajectories is of dimension 6, the control is the kinematic bank angle or its derivative, and the cost is the integral of the thermal flux. Moreover, there are state constraints (thermal flux, normal acceleration and dynamic pressure). Our study is based on obtaining the necessary optimality conditions (minimum principle with state constraints) applicable to our case, on the computation of the parameters $(\eta,\nu,u_b)$ associated with the state constraint, and on the analysis of the optimal syntheses in the neighborhood of the constraint. Once the optimal trajectory is determined, the multiple shooting algorithm and the continuation method are used for the numerical evaluations.
Točevová, Radka. « Testování úspěšnosti trading a trending indikátorů technické analýzy ». Master's thesis, Vysoká škola ekonomická v Praze, 2017. http://www.nusl.cz/ntk/nusl-360518.
Texte intégralVacher, Blandine. « Techniques d'optimisation appliquées au pilotage de la solution GTP X-PTS pour la préparation de commandes intégrant un ASRS ». Thesis, Compiègne, 2020. http://www.theses.fr/2020COMP2566.
Texte intégralThe work presented in this PhD thesis deals with optimization problems in the context of internal warehouse logistics. The field is subject to strong competition and extensive growth, driven by the growing needs of the market and favored by automation. SAVOYE builds warehouse storage handling equipment and offers its own GTP (Goods-To-Person) solution for order picking. The solution uses an Automated Storage and Retrieval System (ASRS) called X-Picking Tray System (X-PTS) and automatically routes loads to workstations via carousels to perform sequenced operations. It is a highly complex system of systems with many applications for operational research techniques. All this defines the applicative and theoretical scope of the work carried out in this thesis. In this thesis, we have first dealt with a specific scheduling Job Shop problem with precedence constraints. The particular context of this problem allowed us to solve it in polynomial time with exact algorithms. These algorithms made it possible to calculate the injection schedule of the loads coming from the different storage output streams to aggregate on a carousel in a given order. Thus, the inter-aisle management of the X-PTS storage was improved and the throughput of the load flow was maximized, from the storage to a station. In the sequel of this work, the radix sort LSD (Least Significant Digit) algorithm was studied and a dedicated online sorting algorithm was developed. The second one is used to drive autonomous sorting systems called Buffers Sequencers (BS), which are placed upstream of each workstation in the GTP solution. Finally, a sequencing problem was considered, consisting of finding a linear extension of a partial order minimizing a distance with a given order. An integer linear programming approach, different variants of dynamic programming and greedy algorithms were proposed to solve it. An efficient heuristic was developed based on iterative calls of dynamic programming routines, allowing to reach a solution close or equal to the optimum in a very short time. The application of this problem to the unordered output streams of X-PTS storage allows pre-sorting at the carousel level. The various solutions developed have been validated by simulation and some have been patented and/or already implemented in warehouses
Šalovský, Vojtěch. « Aplikace pro algoritmické obchodování ». Master's thesis, Vysoká škola ekonomická v Praze, 2017. http://www.nusl.cz/ntk/nusl-359188.
Texte intégralBrnka, Radim. « Využití umělé inteligence na kapitálových trzích ». Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2012. http://www.nusl.cz/ntk/nusl-223594.