Theses on the topic "Caching algorithms"
Consult the top 50 theses for your research on the topic "Caching algorithms".
Boussofiane, Fatiha. « Algorithmes de mise en correspondance pour la reconnaissance d'objets partiellement cachés ». Paris 11, 1992. http://www.theses.fr/1992PA112451.
Zhao, Hui. « High performance cache-aided downlink systems : novel algorithms and analysis ». Electronic Thesis or Diss., Sorbonne université, 2022. http://www.theses.fr/2022SORUS366.
The thesis first addresses the worst-user bottleneck of wireless coded caching, which is known to severely diminish cache-aided multicasting gains. We present a novel scheme, called aggregated coded caching, which can fully recover the coded caching gains by capitalizing on the shared side information brought about by the effectively unavoidable file-size constraint. The thesis then transitions to scenarios with multi-antenna transmitters. In particular, we now consider the multi-antenna cache-aided multi-user scenario, where the multi-antenna transmitter delivers coded caching streams, thus being able to serve multiple users at a time with a reduced number of radio frequency (RF) chains. By doing so, coded caching can assist a simple analog beamformer (only a single RF chain), thus incurring considerable power and hardware savings. Finally, after removing the RF-chain limitation, the thesis studies the performance of the vector coded caching technique, and reveals that this technique can achieve, under several realistic assumptions, a multiplicative sum-rate boost over the optimized cacheless multi-antenna counterpart. In particular, for a given downlink MIMO system already optimized to exploit both multiplexing and beamforming gains, our analysis answers a simple question: what is the multiplicative throughput boost obtained from introducing reasonably-sized receiver-side caches?
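The multicast gain that coded caching exploits can be illustrated with a toy sketch of the classical two-user delivery scheme (this is the textbook Maddah-Ali and Niesen example, not the aggregated scheme proposed in the thesis; the file contents and the cache placement are purely illustrative):

```python
def xor(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Two files, each split into two equal subfiles (hypothetical toy data).
A1, A2 = b"AAAA", b"aaaa"
B1, B2 = b"BBBB", b"bbbb"

# Placement phase: user 1 caches the first subfile of every file,
# user 2 caches the second subfile of every file.
cache1 = {"A1": A1, "B1": B1}
cache2 = {"A2": A2, "B2": B2}

# Delivery phase: user 1 requests file A, user 2 requests file B.
# A single multicast message serves both users at once.
coded = xor(A2, B1)

# Each user cancels the part it already holds in its cache.
decoded_A2 = xor(coded, cache1["B1"])   # user 1 recovers the missing A2
decoded_B1 = xor(coded, cache2["A2"])   # user 2 recovers the missing B1

assert decoded_A2 == A2 and decoded_B1 == B1
```

One coded transmission replaces two uncoded unicasts; this is the multicasting gain that the worst-user bottleneck erodes in the wireless setting.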
Abousabea, Emad Mohamed Abd Elrahman. « Optimization algorithms for video service delivery ». Thesis, Evry, Institut national des télécommunications, 2012. http://www.theses.fr/2012TELE0030/document.
The aim of this thesis is to provide optimization algorithms for accessing video services in either unmanaged or managed ways. We study recent statistics about unmanaged video services like YouTube and propose suitable optimization techniques that could enhance file access and reduce its cost. Moreover, this cost analysis plays an important role in decision making about video file caching and hosting periods on the servers. For managed video services, called IPTV, we conducted experiments on an open-IPTV collaborative architecture between different operators. This model is analyzed in terms of CAPEX and OPEX costs inside the domestic sphere. Moreover, we introduce a dynamic way of optimizing the Minimum Spanning Tree (MST) for the multicast IPTV service. With nomadic access, static trees may be unable to provide the service efficiently as bandwidth utilization increases towards the streaming points (the roots of the topologies). Finally, we study reliable security measures in video streaming based on the hash chain methodology and propose a new algorithm. We then compare different ways of achieving the reliability of hash chains, based on generic classifications.
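The hash-chain mechanism referred to in this abstract can be sketched in a few lines: a sender builds a one-way chain, a receiver is bootstrapped with the chain anchor, and values disclosed in reverse order authenticate themselves by hashing back to the previously accepted value (a generic one-way chain sketch, not the specific algorithm of the thesis):

```python
import hashlib

def build_chain(seed: bytes, n: int) -> list[bytes]:
    """Build a hash chain: chain[0] = seed, chain[i] = H(chain[i-1])."""
    chain = [seed]
    for _ in range(n):
        chain.append(hashlib.sha256(chain[-1]).digest())
    return chain

# The receiver is bootstrapped with the chain anchor (the last element),
# which can be distributed over an authenticated channel once.
chain = build_chain(b"secret-seed", 5)
anchor = chain[-1]

# Values are disclosed in reverse order; each one is verified by hashing
# it and comparing against the previously accepted value.
accepted = anchor
for value in reversed(chain[:-1]):
    assert hashlib.sha256(value).digest() == accepted
    accepted = value
```

Reliability schemes for such chains typically add redundancy so that a lost disclosure does not break the verification sequence.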
Tsigkari, Dimitra. « Algorithms and Cooperation Models in Caching and Recommendation Systems ». Electronic Thesis or Diss., Sorbonne université, 2022. http://www.theses.fr/2022SORUS210.
In the context of on-demand video streaming services, both the caching allocation and the recommendation policy have an impact on user satisfaction, and financial implications for the Content Provider (CP) and the Content Delivery Network (CDN). Although caching and recommendations are traditionally decided independently of each other, co-designing these decisions can lead to lower delivery costs and less traffic in the Internet backbone. This thesis follows this direction of exploiting the interplay of caching and recommendations in the setting of streaming services. It approaches the subject first from the perspective of the users, and then from a network-economical point of view. First, we study the problem of jointly optimizing caching and recommendations with the goal of maximizing the overall experience of the users. This joint optimization is possible for CPs that simultaneously act as CDN owners in today's or future architectures. Although we show that this problem is NP-hard, through a careful analysis, we provide the first approximation algorithm for the joint problem. We then study the case where recommendations and caching are decided by two separate entities (the CP and the CDN, respectively) that want to maximize their individual profits. Based on tools from game theory and optimization theory, we propose a novel cooperation mechanism between the two entities on the grounds of recommendations. This cooperation allows them to design a cache-friendly recommendation policy that ensures a fair split of the resulting gains.
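The approximation algorithm of the thesis is not reproduced here, but the interplay it exploits can be illustrated with a deliberately simple baseline: cache the most popular items, then let the recommender break near-ties in favour of cached content (the item names, utilities and tolerance parameter are all illustrative):

```python
def cache_top(popularity: dict[str, float], capacity: int) -> set[str]:
    """Popularity-greedy caching baseline: keep the most-requested items."""
    ranked = sorted(popularity, key=popularity.get, reverse=True)
    return set(ranked[:capacity])

def recommend(candidates: dict[str, float], cached: set[str],
              tolerance: float) -> str:
    """Pick the best candidate, breaking near-ties (within `tolerance`)
    in favour of cached items: a toy 'cache-friendly' recommendation."""
    best = max(candidates.values())
    good = [c for c, u in candidates.items() if u >= best - tolerance]
    in_cache = [c for c in good if c in cached]
    return max(in_cache or good, key=candidates.get)

pop = {"a": 0.5, "b": 0.3, "c": 0.2}
cached = cache_top(pop, 1)                                  # {"a"}
choice = recommend({"a": 0.93, "b": 1.0}, cached, tolerance=0.1)
assert choice == "a"   # "a" is almost as good as "b" and is already cached
```

The joint problem studied in the thesis is much harder because the recommendation policy itself reshapes the popularity that the cache allocation is optimized against.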
Abousabea, Emad Mohamed Abd Elrahman. « Optimization algorithms for video service delivery ». Electronic Thesis or Diss., Evry, Institut national des télécommunications, 2012. http://www.theses.fr/2012TELE0030.
Furis, Mihai Alexandru Johnson Jeremy. « Cache miss analysis of Walsh-Hadamard Transform algorithms / ». Philadelphia : Drexel University, 2003. http://dspace.library.drexel.edu/handle/1721.1/109.
Wang, Yun. « Stéréovision robotique algorithmes de mise en correspondance / ». Grenoble 2 : ANRT, 1987. http://catalogue.bnf.fr/ark:/12148/cb37610700h.
Adelbaset, Ahmed Yaser. « Algorithmes de mise en correspondance en stéréovision passive ». Rennes 1, 2001. http://www.theses.fr/2001REN10013.
Prevost, Donald. « Rétines artificielles stochastiques : algorithmes et mise en œuvre ». Paris 11, 1995. http://www.theses.fr/1995PA112505.
Issa, Hazem. « Mise en correspondance stéréoscopique par algorithmes génétiques : nouveaux codages ». Lille 1, 2004. https://pepite-depot.univ-lille.fr/LIBRE/Th_Num/2004/50376-2004-3-4.pdf.
Issa, Hazem Postaire Jack-Gérard Ruichek Yassine. « Mise en correspondance stéréoscopique par algorithmes génétiques nouveaux codages / ». Villeneuve d'Ascq : Université des sciences et technologies de Lille, 2004. https://iris.univ-lille1.fr/dspace/handle/1908/412.
Order number (Lille 1): 3441. Abstract in French and English. Title from the title page of the digitized document. Bibliography p. [164]-170. List of publications.
Wang, Yun. « Stéréovision robotique : algorithmes de mise en correspondance ». Paris 12, 1987. http://www.theses.fr/1987PA120006.
Annichini, Collomb Aurore. « Vérification d'automates étendus : algorithmes d'analyse symbolique et mise en oeuvre ». Phd thesis, Université Joseph Fourier (Grenoble), 2001. http://tel.archives-ouvertes.fr/tel-00004334.
Texte intégralBen, Ammar Hamza. « On models for performance evaluation and cache resources placement in multi-cache networks ». Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1S006/document.
In the last few years, Content Providers (CPs) have experienced a strong increase in requests for video content and rich media services. In view of the network scaling limitations, and beyond Content Delivery Networks (CDNs), Internet Service Providers (ISPs) are developing their own caching systems in order to improve network performance. These factors explain the enthusiasm around the Content-Centric Networking (CCN) concept and its in-network caching feature. The analytical quantification of caching performance is, however, not sufficiently explored in the literature. Moreover, setting up an efficient caching system within a network infrastructure is very complex and remains an open problem. To address these issues, we first provide in this thesis a fairly generic and accurate model of caching nodes named MACS (Markov chain-based Approximation of Caching Systems) that can be adapted very easily to represent different caching schemes and can be used to compute different performance metrics of multi-cache networks. We then tackle the problem of cache resource allocation in cache-enabled networks. By means of our analytical tool MACS, we present an approach that solves the trade-off between different performance metrics using multi-objective optimization, and we propose an adaptation of the metaheuristic GRASP to solve the optimization problem.
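MACS itself is not detailed in this abstract. As a flavour of what an analytical cache model computes, here is the standard Che approximation for the hit ratios of a single LRU cache under the independent reference model (a classical result from the literature, not the Markov-chain model of the thesis):

```python
import math

def che_hit_ratios(rates: list[float], capacity: int) -> list[float]:
    """Che approximation for an LRU cache under IRM: find the
    characteristic time T with sum_i (1 - exp(-rates[i] * T)) = capacity,
    then the hit ratio of item i is 1 - exp(-rates[i] * T).
    Requires capacity < number of items."""
    def occupancy(t: float) -> float:
        return sum(1.0 - math.exp(-r * t) for r in rates)

    lo, hi = 0.0, 1.0
    while occupancy(hi) < capacity:      # bracket the root
        hi *= 2.0
    for _ in range(100):                 # bisection on T
        mid = (lo + hi) / 2.0
        if occupancy(mid) < capacity:
            lo = mid
        else:
            hi = mid
    t = (lo + hi) / 2.0
    return [1.0 - math.exp(-r * t) for r in rates]

# Zipf(1) popularity over 50 items, cache holding 10 of them.
rates = [1.0 / k for k in range(1, 51)]
hits = che_hit_ratios(rates, 10)
assert abs(sum(hits) - 10) < 1e-6   # expected occupancy equals the capacity
assert hits[0] > hits[-1]           # popular items are found in cache more often
```

Models of this kind are the building blocks that multi-cache network analyses compose, feeding the miss stream of one cache as the input of the next.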
Constantin, Camélia. « Classement de services et de données par leur utilisation ». Paris 6, 2007. http://www.theses.fr/2007PA066321.
The emergence of peer-to-peer systems and the possibility of using web services to perform computations and exchange data lead to large-scale integration systems where query evaluation and other complex tasks are performed through service composition. A crucial problem in such systems is the lack of global knowledge. It is therefore difficult to find the best peer for query routing or the best service for composition, or to decide which local data of a peer must be refreshed or cached. Making a choice implies performing a ranking. Although it is possible to rank entities according to their content or to other associated metadata, these techniques are generally based on homogeneous and semantically rich descriptions. An interesting alternative in the context of large-scale systems is a link-based ranking that exploits relations between the different entities and allows choices to be made according to global information. This thesis presents a new generic model for ranking services based on their collaboration links. We define a global service importance by exploiting specific knowledge about its contribution to other services through received calls and exchanged data. The importance can be computed efficiently by an asynchronous algorithm without additional messages. Our notion of contribution is abstract, and we study its instantiation in the context of three applications: (i) service ranking based on calls, where the contribution reflects the service semantics and usage; (ii) service ranking based on data usage, where the service contribution is based on the usage of its data during query evaluations in a distributed warehouse; (iii) distributed cache strategies based on the contribution of a data cache on a peer to reduce the cost of the system workload.
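The contribution-based importance described here belongs to the family of link-analysis rankings. A minimal synchronous PageRank-style iteration over a toy service call graph gives the idea (the thesis uses an asynchronous variant that needs no extra messages; the graph and damping value below are illustrative):

```python
def importance(links: dict[str, list[str]],
               damping: float = 0.85, iters: int = 100) -> dict[str, float]:
    """PageRank-style importance over a service call graph: a service
    gains importance from the services that call (contribute to) it."""
    nodes = list(links)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for src, dests in links.items():
            if dests:
                share = damping * rank[src] / len(dests)
                for d in dests:
                    new[d] += share
            else:   # dangling service: spread its rank uniformly
                for n in nodes:
                    new[n] += damping * rank[src] / len(nodes)
        rank = new
    return rank

calls = {"s1": ["s2"], "s2": ["s3"], "s3": ["s1", "s2"]}
r = importance(calls)
assert abs(sum(r.values()) - 1.0) < 1e-9
assert r["s2"] > r["s1"]   # s2 is called by both s1 and s3
```

Replacing the uniform call shares with weights derived from exchanged data or cache usage yields the different instantiations of "contribution" listed in the abstract.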
Nguer, El Hadji Mamadou. « Bibliothèques numériques à taxonomie centrale : modélisation et mise en oeuvre ». Paris 11, 2010. http://www.theses.fr/2010PA112105.
During the last decade, the development of large digital libraries has seen steadily rising activity. Based on the success of the U.S. programs on digital libraries (Digital Library Initiative 1 and 2 and the National Science Digital Library Program), many national and international research projects have been financed by national libraries, public archives or various academic and research institutions. Among the recent projects, the "European Initiative" currently brings together a large number of research institutes, museums, cultural institutions and European national archives for the design and implementation of the European digital library Europeana. These projects have given rise to a number of digital libraries, collaborative systems, and e-learning environments storing very large volumes of data. Faced with such large amounts of data, users need support in two ways, among others: (a) an easy-to-use query facility for searching the data and (b) a personalized presentation of the data returned. The work presented in this thesis is in the area of digital libraries and has three major contributions: (a) it supports users of digital libraries in customizing their queries (preference queries), (b) it proposes efficient algorithms for evaluating preference queries on large databases and (c) it presents experimental results validating the performance of the proposed algorithms. Moreover, a prototype digital library has been implemented, based on a central taxonomy and offering the following personalization services to its users: (a) querying with preferences and (b) a profile-based notification service alerting users when events of interest to them (such as insertion, modification or deletion of a document) occur in the library.
Yakoubi, Driss. « Analyse et mise en oeuvre de nouveaux algorithmes en méthodes spectrales ». Phd thesis, Université Pierre et Marie Curie - Paris VI, 2007. http://tel.archives-ouvertes.fr/tel-00361368.
The second part is devoted to an extension of spectral methods to complex geometries. This new method relies on two ideas: handling Dirichlet boundary conditions by penalization, following Nitsche's method, and approximating the geometry by blocks, using an octree (for example). We give polynomial projection errors and a priori estimates. Finally, the last part is devoted to scientific computing, where this method was implemented in C++ and validated in the FreeFem3d software.
Bengoechea, Endika. « Mise en correspondance inexacte de graphes par algorithmes d'estimation de distributions ». Paris, ENST, 2002. http://www.theses.fr/2002ENST0034.
Bengoetxea, Endika. « Mise en correspondance inexacte de graphes par algorithmes d'estimation de distributions / ». Paris : École nationale supérieure des télécommunications, 2003. http://catalogue.bnf.fr/ark:/12148/cb39066885q.
Bibliography p. 193-211. Abstract in French and English. Index.
Loudni, Samir. « Conception et mise en oeuvre d'algorithmes anytime : une approche à base de contraintes ». Nantes, 2002. http://www.theses.fr/2002NANT2063.
Sharify, Meisam. « Algorithmes de mise à l'échelle et méthodes tropicales en analyse numérique matricielle ». Phd thesis, Ecole Polytechnique X, 2011. http://pastel.archives-ouvertes.fr/pastel-00643836.
Krajecki, Michaël Gardan Yvon. « Équilibre de charge dynamique : étude et mise en œuvre dans le cadre des applications à nombre fini de tâches indépendantes et irrégulières / ». [S.l.] : [s.n.], 1998. ftp://ftp.scd.univ-metz.fr/pub/Theses/1998/Krajecki.Michael.SMZ9818.pdf.
Gatineau, Laurent. « Algorithmes concurrents pour les problèmes de mise en correspondance dans les séquences d'images ». Evry-Val d'Essonne, 2000. http://www.theses.fr/2000EVRY0004.
Texte intégralBonnier, Nicolas. « Contribution aux algorithmes de mise en correspondance de gammes de couleurs spatialement adaptatifs ». Paris, ENST, 2008. http://pastel.archives-ouvertes.fr/pastel-00004856.
Achieving an accurate print reproduction of a color present in a given image may be impossible when this color is not part of the gamut of colors that the printer can reproduce. Usually the reproduction is then achieved by replacing this color with a color perceived as close within the color gamut of the printer. This mapping to another color is performed by a gamut mapping algorithm. In this thesis we describe the work carried out in the development of new spatially and color adaptive gamut mapping algorithms. These algorithms act locally in the image to generate a reproduction perceived as close to the original. Their goal is to preserve both the color values of the pixels and the colorimetric relations between neighbors. We first propose a mathematical framework encompassing the existing spatial gamut mapping algorithms. Next we introduce two new algorithms within the proposed mathematical framework. In the spatial and color adaptive compression, we project each color pixel lying outside the output gamut toward the center, more or less deeply inside the gamut depending on its neighbors. In the spatial and color adaptive clipping, the direction of the projection of each color pixel is a variable, set per pixel to better preserve the local energy in the resulting image. We then consider the role of the Modulation Transfer Function (MTF) of the printing system in the perceived quality of the reproduction, and design a bias-dependent algorithm to optimally compensate for the MTF of the printing system. Lastly we present the evaluation of the proposed algorithms, conducted within a psychophysical experiment; its results demonstrate the improvement in the quality of reproduction.
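Gamut mapping by projection toward a centre point, the starting point of the adaptive algorithms above, can be sketched as a bisection along the segment from the out-of-gamut colour to the centre (a toy RGB-cube gamut stands in for a real printer gamut; the thesis's algorithms additionally adapt the projection per pixel using the neighbourhood):

```python
def clip_toward_center(color, center, in_gamut, steps=40):
    """Project an out-of-gamut colour toward a centre point assumed to lie
    in the gamut, stopping at the gamut boundary (bisection along the
    segment). `in_gamut` is a membership predicate."""
    if in_gamut(color):
        return color
    lo, hi = 0.0, 1.0           # 0 = original colour, 1 = centre
    for _ in range(steps):
        mid = (lo + hi) / 2.0
        candidate = tuple(c + mid * (g - c) for c, g in zip(color, center))
        if in_gamut(candidate):
            hi = mid            # still inside: move back toward the colour
        else:
            lo = mid            # outside: move toward the centre
    return tuple(c + hi * (g - c) for c, g in zip(color, center))

# Toy gamut: the unit RGB cube.
inside = lambda c: all(0.0 <= x <= 1.0 for x in c)
mapped = clip_toward_center((1.4, 0.5, 0.5), (0.5, 0.5, 0.5), inside)
assert inside(mapped)
assert abs(mapped[0] - 1.0) < 1e-6   # lands on the gamut boundary
```

Spatially adaptive variants replace the fixed centre and projection depth with values computed from neighbouring pixels, which is what preserves local contrast.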
Skhiri, Faouzi. « Étude et mise en oeuvre d'algorithmes de navigation d'un robot mobile autonome dans un environnement partiellement connu ». Paris 12, 1994. http://www.theses.fr/1994PA120051.
Muracciole, Vincent. « Définition et mise en place d'un outil temps réel d'analyse des caractéristiques physiques des semences sèches ». Phd thesis, Université d'Angers, 2009. http://tel.archives-ouvertes.fr/tel-00466401.
Texte intégralAuger, Nicolas. « Analyse réaliste d'algorithmes standards ». Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC1110/document.
At first, we were interested in TimSort, a sorting algorithm which was designed in 2002, at a time when it was hard to imagine new results on sorting. Although it is used in many programming languages, the efficiency of this algorithm had not been studied formally before our work. The fine-grained study of TimSort leads us to take into account, in our theoretical models, some modern features of computer architecture. In particular, we propose a study of the mechanisms of branch prediction. This theoretical analysis allows us to design variants of some elementary algorithms (like binary search or exponentiation by squaring) that rely on this feature to achieve better performance on recent computers. Even if uniform distributions are usually considered for the average-case analysis of algorithms, they may not be the best framework for studying sorting algorithms. The choice of TimSort in many programming languages such as Java and Python is probably driven by its efficiency on almost-sorted input. To conclude this dissertation, we propose a mathematical model of non-uniform distributions on permutations, for which almost-sorted permutations are more likely, and provide a detailed probabilistic analysis.
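The branch-prediction-friendly binary search mentioned in this abstract follows a well-known pattern: make the loop's control flow independent of the data so that the only data-dependent operation is a conditional move. A Python transcription of that structure (the speedup itself only materialises in compiled code, where the ternary becomes a cmov):

```python
def branchless_search(a: list[int], key: int) -> int:
    """Binary-search variant with data-independent control flow: the loop
    always runs the same number of steps for a given length, and only the
    base offset depends on the comparison. Returns the index of the first
    element >= key (lower bound)."""
    base, n = 0, len(a)
    while n > 1:
        half = n // 2
        # On x86 this line compiles to a conditional move, avoiding a
        # hard-to-predict branch inside the hot loop.
        base = base + half if a[base + half - 1] < key else base
        n -= half
    return base + (1 if a and a[base] < key else 0)

data = [1, 3, 5, 7]
assert branchless_search(data, 5) == 2
assert branchless_search(data, 0) == 0
assert branchless_search(data, 8) == 4
```

The function is equivalent to `bisect.bisect_left`; only the shape of the control flow differs, which is exactly the property the branch-prediction analysis exploits.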
Gildemyn, Éric. « Caractérisation des procédés de fabrication de pièces de sécurité automobile. Optimisation multiobjectifs de la mise en forme ». Paris, ENSAM, 2008. https://pastel.archives-ouvertes.fr/pastel-00004895.
Abstract: Automotive safety parts manufactured from steel sheets are more and more expensive due to the increase in the price of raw materials. Moreover, these parts are subject to increasingly demanding European standards. This is why automotive equipment suppliers, such as the company DEVILLÉ S.A., are looking to develop numerical tools in order to optimize and predict the in-service behavior of these parts by integrating the entire manufacturing process. The work presented here is a contribution to this end. The use of optimization methods, in particular a genetic algorithm called NSGA-2, coupled with CAD and finite element codes, made it possible to improve various cost functions such as the maximum material damage value obtained in the part at the time of its design or the maximum effort necessary for unbending. The use of neural networks to reduce the total calculation time has also been studied in this work. These numerical methods require material behavior and damage laws, which were the subject of an experimental study as well as of an identification of the model parameters. In parti
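The defining step of NSGA-2, the algorithm named in this abstract, is non-dominated sorting of the population into successive Pareto fronts. A compact (quadratic, purely illustrative) version for minimisation problems:

```python
def dominates(p, q):
    """p dominates q (minimisation): no worse in every objective,
    strictly better in at least one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def non_dominated_fronts(points):
    """Partition objective vectors into successive Pareto fronts,
    as in the sorting step at the core of NSGA-2."""
    remaining = list(points)
    fronts = []
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q is not p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

# Toy bi-objective values, e.g. (damage, unbending effort).
pts = [(1, 5), (2, 2), (4, 1), (3, 3), (5, 5)]
fronts = non_dominated_fronts(pts)
assert fronts[0] == [(1, 5), (2, 2), (4, 1)]   # the Pareto-optimal trade-offs
```

NSGA-2 then ranks individuals by front index and, within a front, by a crowding-distance measure; the real implementation uses a faster bookkeeping scheme than this quadratic sketch.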
Hoxha, Fatmir. « Calcul simultané des racines d'un polynôme complexe : contribution à l'algorithmique et mise en oeuvre sur un réseau de processeurs ». Toulouse, INPT, 1988. http://www.theses.fr/1988INPT048H.
Awad, Mohamad M. « Mise en oeuvre d'un système coopératif adaptatif de segmentation d'images multicomposantes ». Rennes 1, 2008. http://www.theses.fr/2008REN1S031.
In the field of remote sensing, exploiting images acquired by various sensors opens a wide field of investigation and raises many problems at every level of the image processing chain. The development of optimized, adaptive segmentation and fusion approaches is therefore essential. Segmentation and fusion are two key steps in any vision-based recognition or interpretation system: the identification rate and the quality of the interpretation depend closely on the quality of the analysis and the relevance of the results of these phases. Although the subject has been studied in detail in the literature, there is no universal, effective segmentation and fusion method that allows an accurate identification of the classes of a real image when it contains both uniform regions (low local variation of luminance) and textured regions. Moreover, most of these methods require a priori knowledge that is difficult to obtain in practice, and some of them assume the existence of models whose parameters must be estimated. Such a parametric approach, however, is not robust, and its performance is severely degraded by the use of parametric models. In this thesis, a cooperative and adaptive segmentation system for multicomponent images is developed. The system is non-parametric and uses minimal a priori knowledge. It analyzes the image at several hierarchical levels according to its complexity, while integrating several methods in the cooperation mechanisms. The cooperative process integrates the following approaches: a hybrid genetic algorithm, fuzzy C-means clustering, the Kohonen self-organizing map (SOM), and geometric modeling with Non-Uniform Rational B-Splines.
To fuse the results of the cooperating methods, the genetic algorithm is applied. The system is evaluated on multicomponent satellite and aerial images; the results show its efficiency and accuracy.
Hamila, Nahiène. « Simulation de la mise en forme des renforts composites mono et multi plis ». Lyon, INSA, 2007. http://theses.insa-lyon.fr/publication/2007ISAL0092/these.pdf.
The simulation of the forming process of composite reinforcements has several objectives. It allows one to determine the feasibility, or the conditions of this feasibility, and above all to know the positions of the fibers after forming. This is important to determine the mechanical characteristics of the composite in service and to calculate the permeability after draping, necessary for a correct analysis of the injection moulding process. Simulations avoid expensive experimental trial-and-error studies. The work presented in this document relates to the first point: the forming process of woven reinforcements. The contributions of this work are as follows: definition of a three-node element with arbitrary directions of the yarns with regard to the element sides, consideration of the bending stiffness, and implementation of a contact management allowing simultaneous multi-ply forming. Finally, a set of forming-process simulations emphasizes the importance of the different stiffnesses of woven reinforcements and the possibility of neglecting some of them.
Hamila, Nahiène Boisse Philippe Brunet Michel. « Simulation de la mise en forme des renforts composites mono et multi plis ». Villeurbanne : Doc'INSA, 2008. http://docinsa.insa-lyon.fr/these/pont.php?id=hamila.
Métivier, Jean-Philippe. « Relaxation de contraintes globales : mise en œuvre et application ». Caen, 2010. http://www.theses.fr/2010CAEN2012.
In Constraint Programming, global constraints have led to major changes in terms of modeling (synthesizing sets of constraints) and solving (using filtering techniques inherited from other areas such as Operations Research or Artificial Intelligence). Moreover, many real-life problems are over-constrained (they have no solution). In this case, it is necessary to soften some constraints. Many studies have been conducted on unary and binary constraints, but very few on global constraints. In this thesis, we study global constraint softening with preferences. We propose different semantics of violation for several global constraints (i.e. AllDifferent, Gcc and Regular). For each softening semantics, we propose algorithms to check consistency and to remove inconsistent values (filtering). The results of this thesis have been successfully applied to Nurse Rostering Problems (NRPs), which are generally over-constrained and very difficult to solve.
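Two standard violation semantics for a soft AllDifferent constraint, of the kind studied in such work, can be computed directly from the value counts (these are the usual decomposition-based and variable-based measures from the soft-constraint literature, not the thesis's filtering algorithms):

```python
from collections import Counter

def alldiff_violation_pairs(values):
    """Decomposition-based violation of a soft AllDifferent:
    the number of pairs of variables sharing the same value."""
    return sum(c * (c - 1) // 2 for c in Counter(values).values())

def alldiff_violation_vars(values):
    """Variable-based violation: the minimum number of variables whose
    value must change so that all values become distinct."""
    return sum(c - 1 for c in Counter(values).values())

assert alldiff_violation_pairs([1, 1, 1, 2]) == 3   # three equal pairs among the 1s
assert alldiff_violation_vars([1, 1, 1, 2]) == 2    # change two of the three 1s
assert alldiff_violation_pairs([1, 2, 3]) == 0      # satisfied: no violation
```

A softened constraint then bounds or minimises such a violation measure, and the associated filtering removes values that would force the violation above the allowed cost.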
Danoy, Grégoire. « Une approche multi-agent pour les algorithmes génétiques coévolutionnaires hybrides et dynamiques : modèle d'organisation multi-agent et mise en oeuvre sur des problèmes métiers ». Saint-Etienne, EMSE, 2008. http://tel.archives-ouvertes.fr/docs/00/78/56/95/PDF/2008_these_G_Danoy.pdf.
In this dissertation we assert that modeling Coevolutionary Genetic Algorithms (CGAs) as organizational multi-agent systems overcomes the lack of explicitness, at the level of the algorithm's structure, interactions and adaptation, of existing models and platforms. We therefore introduce MAS4EVO, Multi-Agent Systems for EVolutionary Optimization, a new agent (re-)organizational model based on Moise+ and dedicated to evolutionary optimization. This model was used to describe existing CGAs as well as to build two new variants, hybrid and dynamic, of a competitive CGA. MAS4EVO is implemented in DAFO (Distributed Agent Framework for Optimization), which permits the use, manipulation and distribution of these CGAs on hard optimization problems. The CGA experiments were conducted on two business problems, the first being an inventory management problem and the second a new topology control problem in wireless ad hoc networks.
Constantin, Camelia. « Classement de Services et de Données par leur Utilisation ». Phd thesis, Université Pierre et Marie Curie - Paris VI, 2007. http://tel.archives-ouvertes.fr/tel-00809638.
Agosto, Franco Layda J. « Optimisation d'un réseau social d'échange d'information par recommandation de mise en relation ». Chambéry, 2005. http://www.theses.fr/2005CHAMS051.
Today, the World Wide Web has become essential when looking for information, and surfers have different information needs. Thanks to results from social network analysis and to observations of existing recommender systems, we have concluded that the tendency is to prefer information that carries a certain approval: "asking a friend" means pointing to the person with a good level of knowledge about the information needed. As others before us, we have also verified that in many information exchange systems (such as mailing groups) only a few people actively produce information while many others only consume it. Can we really modify this strong tendency? Trying to answer this question positively is the principal objective of our work. To do so, we have imagined a way to influence users' motivation to exchange information, using regulation mechanisms intended to promote a dynamic information exchange, to allow users to control their personal information (through bookmarks) and to exhibit social awareness. To this end we have proposed recommender algorithms that exploit the topology of the network formed by the relations between persons exchanging information, together with the information they handle. Our approach is supported by a collaborative web system named SoMeONe (Social Media using Opinions through a trust Network). We think that our most important contribution is the idea of recommending contacts instead of information. We therefore want to validate the efficiency of the information flow in the social exchange network, and we propose postulates, principles and hypotheses to validate our approach. The hypotheses take into account the users' objectives (information needs), and quality criteria have been developed in order to also validate the system's objectives (optimizing the social network structure).
To reach those objectives we have introduced social indicators (which are our algorithms) that we named SocialRank.
Duvinage, Isabelle. « Création et mise en cohérence de modèles structuraux à partir d'horizons extraits de données sismiques tridimensionnelles ». Vandoeuvre-les-Nancy, INPL, 2000. http://www.theses.fr/2000INPL049N.
Bicking, Frédérique. « Définition et mise au point d'un nouvel algorithme de type génétique : application à la conduite d'un bioprocédé semi-continu ». Vandoeuvre-les-Nancy, INPL, 1994. http://www.theses.fr/1994INPL061N.
Cousin, Jean-Gabriel. « Méthodologies de conception de cœurs de processeurs spécifiques (ASIP) : mise en œuvre sous contraintes, estimation de la consommation ». Rennes 1, 1999. http://www.theses.fr/1999REN10085.
Benay, Stephan. « Mise au point des outils analytiques et formels utilisés dans la recherche préclinique en oncologie ». Thesis, Aix-Marseille, 2014. http://www.theses.fr/2014AIXM5501.
A nonlinear pharmacokinetic-pharmacodynamic model has been devised to simultaneously describe the loss of erlotinib and its effect on cell growth over time, in order to analyze impedance-based data of the effect of erlotinib on A431 cell growth in vitro. Since the model's non-linearity requires iterative methods for parameter estimation, several steps of the model identification were studied and solutions proposed, with application examples to cancer drugs. First, the choice of the optimization criterion, showing the superiority of the geometric mean functional relationship for non-linear model identification (real data application: calibration curve of a bevacizumab ELISA quantification experiment). Second, the choice of the most appropriate algorithm for the pharmacokinetic identification problem, where derivative-based algorithms perform better (real data application: simultaneous identification of the pharmacokinetic system of 5-fluorouracil and its main metabolite). Third, the transformation of the initial continuous-time differential model into a recursive discrete-time model. The transformed model becomes linear with respect to its parameters, allowing straightforward parameter estimation without any optimization algorithm; it is then also possible to track parameter variations over time (real data application: pharmacokinetic model parameter estimation of fotemustine, mitoxantrone and 5-fluorouracil).
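The geometric mean functional relationship mentioned in this abstract (also known as reduced major axis regression) fits a line whose slope is the geometric mean of the y-on-x and x-on-y least-squares slopes, treating both variables as subject to error; a minimal sketch with illustrative data:

```python
import math
import statistics

def gmfr_fit(x, y):
    """Geometric mean functional relationship (reduced major axis) fit:
    slope = sign(cov(x, y)) * sqrt(Syy / Sxx), the geometric mean of the
    y-on-x slope (Sxy/Sxx) and the inverse of the x-on-y slope (Syy/Sxy)."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    slope = math.copysign(math.sqrt(syy / sxx), sxy)
    return slope, my - slope * mx    # (slope, intercept)

# On exact data the fit recovers the underlying line y = 2x + 1.
slope, intercept = gmfr_fit([1, 2, 3], [3, 5, 7])
assert abs(slope - 2.0) < 1e-12 and abs(intercept - 1.0) < 1e-12
```

Unlike ordinary least squares, this criterion is symmetric in x and y, which is what makes it attractive when both the measured concentration and the response carry error, as in a calibration curve.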
Brown, Naïma. « Vérification et mise en œuvre distribuée des programmes unity ». Nancy 1, 1994. http://www.theses.fr/1994NAN10384.
Blondel, Jean-Louis. « Étude et mise en oeuvre d'une méthode de décomposition additive pour le calcul des valeurs propres d'une matrice ». Caen, 1988. http://www.theses.fr/1988CAEN2035.
Robin, Frédéric. « Etude d'architectures VLSI numériques parallèles et asynchrones pour la mise en oeuvre de nouveaux algorithmes d'analyse et rendu d'images ». Phd thesis, Ecole nationale supérieure des telecommunications - ENST, 1997. http://tel.archives-ouvertes.fr/tel-00465691.
Michel, Dominique. « Contribution à la conception, la mise en oeuvre et l'amélioration des algorithmes de calcul des intersections de carreaux NURBS ». Metz, 1992. http://docnum.univ-lorraine.fr/public/UPV-M/Theses/1992/Michel.Dominique.SMZ9211.pdf.
Solid object shape modelling is a basic aspect of CAD. The shape of a solid is usually defined by assembling several surfaces (boundary representation). Many mathematical surface models suitable for CAD/CAM have been built. This thesis deals with NURBS (Non-Uniform Rational B-Spline) surfaces; they generalize Bézier patches and provide an exact representation of quadric surfaces. Building objects by assembling several patches requires the solid modeller to be able to trim surface patches and, hence, to compute intersections between surfaces. In this report we study the different ways to solve the problem of intersecting two parametric surfaces. The standard methods are recalled: the well-known divide-and-conquer method, which reduces the general problem to the intersection of flat surfaces; Farouki's method, which deals with the intersection of a parametric surface with an implicit surface; and the tracing algorithm, which computes a sequence of closely spaced points along the intersection curve. The three approaches are interesting; they suit different aspects of the same problem. Some solutions that try to combine their respective advantages are developed. A new NURBS patch simplification algorithm generalizing the first standard method is designed; it combines recursive subdivision with degree reduction. The theory of resultants provides a complete solution to the intersection of a degree-2 rational curve with a biquadratic rational patch and can be applied efficiently to the computation of intersection curves. Finally, an algorithm based on the curve-tracing scheme is implemented. In this algorithm the curve being built is parameterized by arc length, which yields, at each point, geometric information used to refine the estimated behaviour of the curve and improves the performance of the algorithm. Uniting these three operations leads to a general algorithm.
The main difficulties of building intersection curves, namely the detection of closed curves and the handling of singularities, are discussed and partially solved in some cases.
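The divide-and-conquer scheme recalled in this abstract can be illustrated in miniature. The sketch below is an illustration, not the thesis implementation, and applies the idea to two planar Bézier curves rather than NURBS patches (patches work the same way, subdividing in both parameters): the convex-hull property lets bounding boxes of control polygons certify non-intersection, and overlapping pieces are split by de Casteljau subdivision until they are small enough.

```python
# Hedged sketch: divide-and-conquer intersection test for two planar Bezier
# curves.  Control points bound the curve (convex-hull property), so disjoint
# bounding boxes prove non-intersection; otherwise both curves are split by
# de Casteljau subdivision and the four resulting pairs recurse.

def bbox(pts):
    xs = [p[0] for p in pts]; ys = [p[1] for p in pts]
    return min(xs), min(ys), max(xs), max(ys)

def boxes_overlap(a, b):
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def split(pts):
    """de Casteljau subdivision at t = 0.5 -> two control polygons."""
    left, right, cur = [], [], list(pts)
    while cur:
        left.append(cur[0]); right.append(cur[-1])
        cur = [((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
               for p, q in zip(cur, cur[1:])]
    return left, right[::-1]

def intersects(c1, c2, eps=1e-3, depth=40):
    b1, b2 = bbox(c1), bbox(c2)
    if not boxes_overlap(b1, b2):
        return False
    size = (b1[2] - b1[0]) + (b1[3] - b1[1]) + (b2[2] - b2[0]) + (b2[3] - b2[1])
    if depth == 0 or size < eps:
        return True  # both pieces are "flat enough": report a hit
    l1, r1 = split(c1); l2, r2 = split(c2)
    return any(intersects(a, b, eps, depth - 1)
               for a in (l1, r1) for b in (l2, r2))

# Two quadratic Beziers that cross, and one pair whose boxes cannot meet.
crossing = intersects([(0, 0), (1, 2), (2, 0)], [(0, 1), (1, -1), (2, 1)])
disjoint = intersects([(0, 0), (1, 2), (2, 0)], [(0, 5), (1, 6), (2, 5)])
print(crossing, disjoint)
```

A tracing algorithm would instead march along the curve from a seed point found this way; the subdivision test above is the part that makes detection of all intersection branches reliable.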
Robin, Frédéric. « Etude d'architectures vlsi numériques parallèles et asynchrones pour la mise en œuvre de nouveaux algorithmes d'analyse et rendu d'images ». Paris, ENST, 1997. http://www.theses.fr/1997ENST0021.
Robin, Frédéric. « Étude d'architectures VLSI numériques parallèles et asynchrones pour la mise en oeuvre de nouveaux algorithmes d'analyse et rendu d'images / ». Paris : École nationale supérieure des télécommunications, 1997. http://catalogue.bnf.fr/ark:/12148/cb367038459.
Michel, Dominique Gardan Yvon. « CONTRIBUTION A LA CONCEPTION, LA MISE EN ŒUVRE ET L'AMELIORATION DES ALGORITHMES DE CALCUL DES INTERSECTIONS DE CARREAUX NURBS ». [S.l.] : [s.n.], 1992. ftp://ftp.scd.univ-metz.fr/pub/Theses/1992/Michel.Dominique.SMZ9211.pdf.
Abouzahir, Mohamed. « Algorithmes SLAM : Vers une implémentation embarquée ». Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS058/document.
Autonomous navigation is a main axis of research in mobile robotics. In this context, the robot must run an algorithm that allows it to move autonomously in complex and unfamiliar environments. Mapping an environment in advance by a human operator is a tedious and time-consuming task, and it is not always reliable, especially when the structure of the environment changes. SLAM algorithms allow a robot to map its environment while localizing itself in space. SLAM algorithms are becoming more efficient, but no complete hardware or architectural implementation has yet taken place. Such an implementation must take into account energy consumption, embeddability and computing power. This work aims to evaluate embedded systems implementing localization and scene reconstruction (SLAM). The methodology adopts an AAM (Algorithm-Architecture Matching) approach to improve the efficiency of algorithm implementation, especially for highly constrained systems. An embedded SLAM system must have an electronic and software architecture that ensures the production of relevant data from sensor information while guaranteeing the localization of the robot in its environment. The objective is therefore to define, for a chosen algorithm, an architecture model that meets the constraints of embedded systems. The first part of this thesis explores the different algorithmic approaches for solving the SLAM problem; a deeper study of these algorithms allows us to evaluate four of them: FastSLAM2.0, ORB-SLAM, RatSLAM and Linear SLAM. These algorithms were then evaluated on multiple embedded architectures to study their portability to low-power, resource-limited systems. The comparison takes into account execution time and consistency of results.
After a deep analysis of the timing evaluations for each algorithm, FastSLAM2.0 was finally chosen, for its compromise between consistency of the localization result and execution time, as the candidate for further study on an embedded heterogeneous architecture. The second part of this thesis is devoted to the study of an embedded implementation of monocular FastSLAM2.0 dedicated to large-scale environments. An algorithmic modification of FastSLAM2.0 was necessary to better adapt it to the constraints imposed by large-scale environments. The resulting system is designed around a parallel multi-core architecture. Using an algorithm-architecture matching approach, FastSLAM2.0 was implemented on a heterogeneous CPU-GPU architecture; with an effective partitioning of the algorithm, an overall acceleration factor of about 22 was obtained on a recent architecture dedicated to embedded systems. Since the nature of the FastSLAM2.0 algorithm could benefit from a highly parallel architecture, a second hardware instance based on a programmable FPGA architecture is proposed. The implementation was performed using high-level synthesis tools to reduce development time. The results of the implementation on the hardware architecture were compared to those obtained on GPU-based architectures. The gains obtained are promising, even compared to a high-end GPU that currently has a large number of cores. The resulting system can map large environments while maintaining the balance between the consistency of the localization results and real-time performance. Using multiple computing units involves a means of data exchange between them, which requires strong coupling (communication bus and shared memory). This work has put forward the interest of parallel heterogeneous architectures (multi-core, GPU) for embedding SLAM algorithms.
FPGA-based heterogeneous architectures in particular can become potential candidates for embedding complex algorithms dealing with massive data.
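As a rough illustration of the particle-filter core that makes FastSLAM-style algorithms amenable to parallel hardware (each particle's prediction and weighting is independent), here is a minimal 1D localization filter. It is a hedged sketch: the motion model, noise levels and all numbers are hypothetical and not taken from the thesis.

```python
import math
import random

random.seed(0)

# Hedged sketch (hypothetical numbers, not the thesis code): a bootstrap
# particle filter localizing a robot on a line.  Each particle is an
# independent pose hypothesis; the per-particle predict and weight steps
# are the embarrassingly parallel core that maps onto GPU/FPGA targets.
N = 500
true_x = 0.0
particles = [random.uniform(-10.0, 10.0) for _ in range(N)]

def likelihood(p, z, sigma=0.2):
    """Gaussian measurement likelihood (unnormalized)."""
    return math.exp(-0.5 * ((p - z) / sigma) ** 2)

for _ in range(30):
    true_x += 1.0                                  # robot advances one unit
    z = true_x + random.gauss(0.0, 0.2)            # noisy position reading
    # Predict: motion model plus process noise, independently per particle.
    particles = [p + 1.0 + random.gauss(0.0, 0.3) for p in particles]
    # Weight: how well each hypothesis explains the measurement.
    w = [likelihood(p, z) + 1e-12 for p in particles]
    # Resample: keep hypotheses in proportion to their weights.
    particles = random.choices(particles, weights=w, k=N)

estimate = sum(particles) / N
print(f"error after 30 steps: {abs(estimate - true_x):.3f}")
```

In FastSLAM2.0 each particle additionally carries its own map of landmark estimates, but the predict/weight/resample loop above is the structure whose per-particle independence the thesis exploits on multi-core, GPU and FPGA targets.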
Chabanis, Manuel. « Estimation des variations de la pression motrice respiratoire à partir des mesures du débit ventilatoire : mise au point et validation d'un algorithme ». Université Joseph Fourier (Grenoble), 1992. http://www.theses.fr/1992GRE19003.
Bergogne, Laurent. « Quelques algorithmes parallèles sur des "séquences de" pour différents modèles de calcul parallèle ». Amiens, 1999. http://www.theses.fr/1999AMIE0130.