Doctoral dissertations on the topic "Recommandation à grande échelle"
Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles
Consult the top 50 doctoral dissertations on the topic "Recommandation à grande échelle".
An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the publication as a ".pdf" file and read its abstract online, whenever such details are available in the metadata.
Browse doctoral dissertations from a wide range of disciplines and compile your bibliography correctly.
Moin, Afshin. "Les techniques de recommandation et de visualisation pour les données à une grande échelle". Rennes 1, 2012. https://tel.archives-ouvertes.fr/tel-00724121.
We have witnessed the rapid development of information technology over the last decade. On the one hand, the processing and storage capacity of digital devices keeps increasing thanks to advances in manufacturing methods. On the other hand, the interaction between these powerful devices has been made possible by networking technology. A natural consequence of this progress is that the volume of data generated in different applications has grown at an unprecedented rate. We now face new challenges in processing and representing efficiently the enormous mass of data at our disposal. This thesis is centered around the two axes of recommending relevant content and of visualizing it properly. The role of recommender systems is to help users in the decision-making process of finding items with relevant content and satisfactory quality within the vast set of possibilities existing on the Web. The proper representation of the processed data, for its part, is central both to increasing the utility of the data for the end user and to designing efficient analysis tools. In this work, the main approaches to recommender systems as well as the most important techniques for visualizing data in the form of graphs are discussed. In addition, it is shown how some of the same techniques applied to recommender systems can be modified to take visualization requirements into account.
Draidi, Fady. "Recommandation Pair-à-Pair pour Communautés en Ligne à Grande Echelle". Phd thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2012. http://tel.archives-ouvertes.fr/tel-00766963.
Sakhi, Otmane. "Offline Contextual Bandit : Theory and Large Scale Applications". Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAG011.
This thesis presents contributions to the problem of learning from logged interactions using the offline contextual bandit framework. We are interested in two related topics: (1) offline policy learning with performance certificates, and (2) fast and efficient policy learning applied to large-scale, real-world recommendation. For (1), we first leverage results from the distributionally robust optimisation framework to construct asymptotic, variance-sensitive bounds to evaluate policies' performances. These bounds lead to new, more practical learning objectives thanks to their composite nature and straightforward calibration. We then analyse the problem from the PAC-Bayesian perspective, and provide tighter, non-asymptotic bounds on the performance of policies. Our results motivate new strategies that offer performance certificates before deploying the policies online. The newly derived strategies rely on composite learning objectives that do not require additional tuning. For (2), we first propose a hierarchical Bayesian model that combines different signals to efficiently estimate the quality of recommendation. We provide proper computational tools to scale the inference to real-world problems, and demonstrate empirically the benefits of the approach in multiple scenarios. We then address the question of accelerating common policy optimisation approaches, particularly focusing on recommendation problems with catalogues of millions of items. We derive optimisation routines, based on new gradient approximations, computed in logarithmic time with respect to the catalogue size. Our approach improves on common, linear-time gradient computations, yielding fast optimisation with no loss in the quality of the learned policies.
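To make the offline policy evaluation setting above concrete, here is a minimal sketch of the clipped inverse-propensity-scoring (IPS) estimator that certificate-driven objectives of this kind typically start from; the thesis's actual bounds and objectives are more refined, and all names and numbers below are illustrative assumptions.

```python
import numpy as np

def clipped_ips_value(rewards, logging_probs, policy_probs, clip=10.0):
    # Importance weights re-weight logged rewards toward the new policy;
    # clipping bounds the variance at the price of a pessimistic bias,
    # the kind of trade-off that variance-sensitive bounds formalize.
    weights = np.minimum(policy_probs / logging_probs, clip)
    return float(np.mean(weights * rewards))

rng = np.random.default_rng(0)
logging_probs = rng.uniform(0.1, 0.9, size=1000)  # pi_0(a|x) of logged actions
policy_probs = rng.uniform(0.1, 0.9, size=1000)   # pi(a|x) of the new policy
rewards = rng.integers(0, 2, size=1000).astype(float)
print(clipped_ips_value(rewards, logging_probs, policy_probs))
```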
Griesner, Jean-Benoit. "Systèmes de recommandation de POI à large échelle". Electronic Thesis or Diss., Paris, ENST, 2018. http://www.theses.fr/2018ENST0037.
The task of point-of-interest (POI) recommendation has become an essential feature in location-based social networks. However, it remains a challenging problem because of specific constraints of these networks. In this thesis I investigate new approaches to solve the personalized POI recommendation problem. Three main contributions are proposed in this work. The first contribution is a new matrix factorization model that integrates geographical and temporal influences. This model is based on a specific processing of geographical data. The second contribution is an innovative solution to the implicit feedback problem, which corresponds to the difficulty of distinguishing, among unvisited POIs, the actually "unknown" ones from the "negative" ones. Finally, the third contribution of this thesis is a new method to generate recommendations with large-scale datasets. In this approach I propose to combine a new geographical clustering algorithm with users' implicit social influences in order to define local and global mobility scales.
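A hypothetical sketch of how a geographical influence can be folded into a factorization score, as the abstract describes at a high level; the thesis's actual model, clustering, and parameters differ, and every name here is invented for illustration.

```python
import numpy as np

def poi_score(user_vec, poi_vec, poi_xy, visited_xy, alpha=0.5, scale=1.0):
    latent = float(user_vec @ poi_vec)                # factorization term
    dists = np.linalg.norm(visited_xy - poi_xy, axis=1)
    geo = float(np.mean(np.exp(-dists / scale)))      # distance-decay influence
    return (1 - alpha) * latent + alpha * geo

rng = np.random.default_rng(1)
user_vec, poi_vec = rng.normal(size=8), rng.normal(size=8)
visited_xy = rng.uniform(0, 10, size=(20, 2))         # past check-in coordinates
print(poi_score(user_vec, poi_vec, np.array([5.0, 5.0]), visited_xy))
```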
Gueye, Modou. "Gestion de données de recommandation à très large échelle". Electronic Thesis or Diss., Paris, ENST, 2014. http://www.theses.fr/2014ENST0083.
In this thesis, we address the scalability problem of recommender systems. We propose accurate and scalable algorithms. We first consider the case of matrix factorization techniques in a dynamic context, where new ratings are continuously produced. In such a case, it is not possible to have an up-to-date model, due to the incompressible time needed to compute it. This happens even if a distributed technique is used for matrix factorization: at the very least, the ratings produced during the model computation will be missing. Our solution reduces the loss of recommendation quality over time by introducing stable biases which track users' behavior deviation. These biases are continuously updated with the new ratings, in order to maintain the quality of recommendations at a high level for a longer time. We also consider the context of online social networks and tag recommendation. We propose an algorithm that takes into account the popularity of the tags and the opinions of the user's neighborhood. But, unlike common nearest-neighbor approaches, our algorithm does not rely on a fixed number of neighbors when computing a recommendation. We use a heuristic that bounds the network traversal in a way that allows computing the recommendations faster while preserving their quality. Finally, we propose a novel approach that improves the accuracy of the recommendations for top-k algorithms. Instead of a fixed list size, we adjust the number of items to recommend in a way that optimizes the likelihood that all the recommended items will be chosen by the user, and find the best candidate sub-list to recommend to the user.
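The bias-tracking idea lends itself to a very small sketch: while the factor model stays frozen between costly recomputations, a per-user bias is updated online from incoming ratings. This is only an assumed, simplified form of the mechanism the abstract describes; the learning rate and regularization values are placeholders.

```python
def update_bias(bias, rating, prediction, lr=0.05, reg=0.01):
    error = rating - prediction          # deviation not explained by the model
    return bias + lr * (error - reg * bias)

user_bias = 0.0
model_prediction = 3.2                   # stale score from the last factorization
for rating in [4.0, 4.5, 5.0]:           # the user drifts upward after the snapshot
    user_bias = update_bias(user_bias, rating, model_prediction + user_bias)
print(round(user_bias, 3))               # the bias absorbs the observed drift
```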
Tauvel, Claire. "Optimisation stochastique à grande échelle". Phd thesis, Grenoble 1, 2008. http://www.theses.fr/2008GRE10305.
In this thesis we study iterative algorithms for solving constrained and unconstrained convex optimization problems, variational inequalities with monotone operators, and saddle point problems. We study these problems when the dimension of the search space is high and when the values of the functions of interest are unknown, so that we can only query a stochastic oracle. The algorithms we study are stochastic adaptations of two algorithms: the first is a variant of the mirror descent algorithm proposed by Nemirovski and Yudin, and the second a variant of the dual extrapolation algorithm by Nesterov. For both of them, we provide bounds on the expected value and on moderate deviations of the approximation error under different regularity hypotheses for all the unconstrained problems we study, and we propose adaptive versions of the algorithms that dispense with the knowledge of parameters which depend on the problem and are unavailable in practice. Finally, we show how to solve constrained stochastic optimization problems thanks to an auxiliary algorithm inspired by Newton descent and thanks to the results we obtained for saddle point problems.
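For readers unfamiliar with the mirror descent variant mentioned above, here is a minimal sketch of stochastic entropic mirror descent on the probability simplex, the classical Nemirovski-Yudin setting; the step size, averaging scheme, and noisy oracle are illustrative assumptions, not the thesis's algorithms.

```python
import numpy as np

def stochastic_mirror_descent(grad_oracle, dim, steps=500, lr=0.1):
    x = np.full(dim, 1.0 / dim)
    avg = np.zeros(dim)
    for _ in range(steps):
        g = grad_oracle(x)            # noisy gradient estimate from the oracle
        x = x * np.exp(-lr * g)       # multiplicative (entropic prox) update
        x /= x.sum()                  # renormalize: stays on the simplex
        avg += x
    return avg / steps                # iterate averaging controls the noise

rng = np.random.default_rng(2)
c = np.array([0.3, 0.1, 0.5, 0.2])    # minimize the linear objective c.x
noisy_grad = lambda x: c + 0.1 * rng.normal(size=c.size)
print(stochastic_mirror_descent(noisy_grad, dim=4).round(3))
```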
Tauvel, Claire. "Optimisation stochastique à grande échelle". Phd thesis, Université Joseph Fourier (Grenoble), 2008. http://tel.archives-ouvertes.fr/tel-00364777.
Bleuse, Raphaël. "Appréhender l'hétérogénéité à (très) grande échelle". Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM053/document.
The demand for computation power is steadily increasing, driven by the need to simulate more and more complex phenomena with an increasing amount of consumed/produced data. To meet this demand, High Performance Computing platforms grow in both size and heterogeneity. Indeed, heterogeneity allows splitting problems for a more efficient resolution of sub-problems with ad hoc hardware or algorithms. This heterogeneity arises in the platforms' architecture and in the variety of processed applications. Consequently, performance becomes more sensitive to the execution context. We study in this thesis how to qualitatively bring, at a reasonable cost, context-awareness/obliviousness into allocation and scheduling policies. This study is conducted from two standpoints: within single applications, and at the whole platform scale from an inter-application perspective. We first study the minimization of the makespan of sequential tasks on platforms with a mixed architecture composed of multiple CPUs and GPUs. We integrate context-awareness into schedulers with an affinity mechanism that improves local behavior. This mechanism has been implemented in a parallel run-time, and experiments show that it is able to reduce the memory transfers while maintaining a low makespan. We then extend the model to implicitly consider parallelism on the CPUs with the moldable-task model. We propose an efficient algorithm formulated as an integer linear program with a constant performance guarantee of 3/2+ε. Second, we devise a new modeling framework where constraints are a first-class tool. Rather than extending existing models to consider all possible interactions, we reduce the set of feasible schedules by further constraining existing models. We propose a set of reasonable constraints to model application spreading and I/O traffic. We then instantiate this framework for unidimensional topologies, and propose a comprehensive case study of makespan minimization under convex and local constraints.
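As a baseline for the CPU/GPU makespan problem discussed above, a greedy earliest-finish-time list scheduler can be sketched in a few lines; the thesis's affinity mechanism and (3/2+ε)-approximation go well beyond this, and the heuristic, task durations, and platform sizes below are only illustrative assumptions.

```python
import heapq

def greedy_cpu_gpu_schedule(tasks, n_cpu=4, n_gpu=2):
    # Min-heaps of (time the unit becomes free, unit id), one per architecture.
    cpus = [(0.0, i) for i in range(n_cpu)]
    gpus = [(0.0, i) for i in range(n_gpu)]
    makespan = 0.0
    for cpu_t, gpu_t in sorted(tasks, key=lambda t: -min(t)):
        cpu_free, ci = cpus[0]                     # earliest-available CPU
        gpu_free, gi = gpus[0]                     # earliest-available GPU
        if cpu_free + cpu_t <= gpu_free + gpu_t:   # would finish earlier on CPU
            heapq.heapreplace(cpus, (cpu_free + cpu_t, ci))
            makespan = max(makespan, cpu_free + cpu_t)
        else:
            heapq.heapreplace(gpus, (gpu_free + gpu_t, gi))
            makespan = max(makespan, gpu_free + gpu_t)
    return makespan

tasks = [(5.0, 1.0)] * 6 + [(2.0, 4.0)] * 8   # GPU-friendly and CPU-friendly tasks
print(greedy_cpu_gpu_schedule(tasks))
```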
Cameron, Alexandre. "Effets de grande échelle en turbulence". Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLEE029/document.
This manuscript describes how solutions of the Navier-Stokes equations behave at large scales when forced at small scales. It also analyzes the large-scale behavior of magnetic fields governed by the kinematic induction equation when the velocity field lives at small scales. The results were acquired through direct numerical simulation (DNS) of the equations, using pseudo-spectral algorithms, as well as of their Floquet expansion. In the hydrodynamical case, the Floquet DNS confirmed the results of the AKA effect at low Reynolds number and extended them to Reynolds numbers of order one. The DNS were also used to study AKA-stable flows and identified a new instability that can be interpreted as a negative viscosity effect. In the magnetic case, the alpha effect is observed for a range of scale separations exceeding known results by several orders of magnitude. It is also shown that the growth rate of the instability becomes independent of the scale separation once the magnetic field is destabilized at its small scales. The energy spectrum and the correlation time of absolute equilibrium solutions of the truncated Euler equation are presented. A new regime where the correlation time is governed by helicity is exhibited. These results are also compared with those coming from large-scale modes of solutions of the Navier-Stokes equations forced at small scales. They show that the correlation time increases with the helicity of the flow.
Touré, Mahamadou Abdoulaye. "Administration d'applications réparties à grande échelle". Electronic Thesis or Diss., Toulouse, INPT, 2010. http://www.theses.fr/2010INPT0039.
Administration of distributed systems is an increasingly complex and expensive task. It consists in carrying out two main activities: deployment, and management of the application while it is running. The deployment activity is itself subdivided into several activities: description of hardware and software, configuration, installation, and starting of the application. This thesis work focuses on large-scale administration, which consists in deploying and managing a distributed legacy application composed of several thousands of software entities on a physical grid infrastructure made up of hundreds or thousands of machines. The administration of this type of infrastructure raises many problems of expressiveness, performance, heterogeneity, and dynamicity (failure of machines, network, ...). These problems are generally caused by the scale and the geographical distribution of the sites (sets of clusters). This thesis contributes to solving the problems cited above. We propose higher-level description formalisms to describe the structure of hardware and software infrastructures. To reduce the load and increase the performance of the administration, we propose to distribute the deployment system in a hierarchical way. This thesis work falls within the scope of the TUNe (autonomic management system) project. We therefore propose to hierarchize TUNe in order to adapt it to the context of large-scale administration. We show how to describe the hierarchy of systems. We also show how to take into account the specificity of the hardware infrastructure at deployment time, notably the topology, characteristics, and types of machines. We define a process language for describing the installation process, which allows administrators to define their own installation constraints according to their needs and preferences. We explore the management of heterogeneity during deployment. Finally, our prototype is validated by an implementation and in the context of a real experiment.
Le, Chat Gaétan. "Étude du vent solaire à grande échelle". Phd thesis, Université Paris-Diderot - Paris VII, 2010. http://tel.archives-ouvertes.fr/tel-00547571.
Benoit, Gaëtan. "Métagénomique comparative de novo à grande échelle". Thesis, Rennes 1, 2017. http://www.theses.fr/2017REN1S088/document.
Metagenomics studies the genomic content of a sample extracted from a natural environment. Among available analyses, comparative metagenomics aims at estimating the similarity between two or more environmental samples at the genomic level. The traditional approach compares the samples based on their content in known identified species. However, this method is biased by the incompleteness of reference databases. By contrast, de novo comparative metagenomics does not rely on a priori knowledge. Sample similarity is estimated by counting the number of similar DNA sequences between datasets. A metagenomic project typically generates hundreds of datasets. Each dataset contains tens of millions of short DNA sequences ranging from 100 to 150 base pairs (called reads). In the context of this thesis, it would require years to compare such an amount of data with usual methods. This thesis presents novel de novo approaches to quickly compute the similarity between numerous datasets. The main idea underlying our work is to use the k-mer (word of size k) as a comparison unit of the metagenomes. The main method developed during this thesis, called Simka, computes several similarity measures by replacing species counts by k-mer counts (k > 21). Simka scales up to today's metagenomic projects thanks to a new parallel k-mer counting strategy on multiple datasets. Experiments on data from the Human Microbiome Project and Tara Oceans show that the similarities computed by Simka are well correlated with reference-based and OTU-based similarities. Simka processed these projects (more than 30 billion reads distributed in hundreds of datasets) in a few hours. It is currently the only tool able to scale up to such projects while providing precise and extensive comparison results.
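To illustrate the k-mer counting idea behind Simka, here is a toy computation of an abundance-based Bray-Curtis dissimilarity from k-mer counts; Simka itself counts k-mers across hundreds of datasets in parallel and supports several other measures, so treat this only as a sketch with made-up reads.

```python
from collections import Counter

def kmer_counts(reads, k=21):
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

def bray_curtis(c1, c2):
    # 1 - 2*shared/total, computed on k-mer abundances rather than species.
    shared = sum(min(c1[kmer], c2[kmer]) for kmer in c1.keys() & c2.keys())
    total = sum(c1.values()) + sum(c2.values())
    return 1.0 - 2.0 * shared / total

a = kmer_counts(["ACGTACGTACGTACGTACGTACGT"], k=21)
b = kmer_counts(["ACGTACGTACGTACGTACGTAGGT"], k=21)  # one substitution
print(round(bray_curtis(a, b), 3))
```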
Bethune, William. "Dynamique à grande échelle des disques protoplanétaires". Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAY022/document.
This thesis is devoted to the transport of angular momentum and magnetic flux through weakly ionized and weakly magnetized accretion disks; the role of microphysical effects on the large-scale dynamics of the disk is of primary importance. As a first step, I exclude stratification effects and examine the impact of non-ideal MHD effects on the turbulent properties near the disk midplane. I show that the flow can spontaneously organize itself if the ionization fraction is low enough; in this case, accretion is halted and the disk exhibits axisymmetric structures, with possible consequences on planetary formation. As a second step, I study the disk-wind interaction via a global model of a stratified disk. This model is the first to compute non-ideal MHD effects from a simplified chemical network in a global geometry. It reveals that the flow is essentially laminar, and that the magnetic field can adopt different global configurations, drastically affecting the transport processes. A new self-organization process is identified, also leading to the formation of axisymmetric structures, whereas the previous mechanism is discarded by the action of the wind. The properties of magneto-thermal winds are examined for various magnetizations, allowing discrimination between magnetized and photo-evaporative winds based upon their ejection efficiency.
Le, Chat Gaétan. "Etude du vent solaire à grande échelle". Paris 7, 2010. http://www.theses.fr/2010PA077162.
Some features of the solar wind remain poorly understood, such as the transport of energy in collisionless plasmas. Quasi-thermal noise spectroscopy is a reliable tool for accurately measuring the electron density, temperature, and non-thermal properties, which can give important clues for understanding the transport properties. This noise is produced by the quasi-thermal fluctuations of the particles and allows measuring the moments of their velocity distributions. This method, using a sum of Maxwellians as the electron velocity distribution, has produced a large number of results from the Ulysses mission. Nevertheless, some limitations of the radio receiver prevent an accurate measurement of the total temperature of the electrons with this model. A new method using a kappa distribution is proposed, and its application to the Ulysses data shows a variation of the temperature between adiabatic and isothermal behaviour, and a constant kappa parameter. Then, two examples of plasma-dust interactions are studied: the acceleration of nano dust by the solar wind and its detection in the solar wind at one astronomical unit; and the interplanetary magnetic field enhancements possibly due to an interaction between the solar wind and cometary dust. Finally, a more global point of view is taken. The energy flux of the solar wind is almost constant, nearly independent of wind speed and solar activity. A comparison of the energy fluxes of a range of stellar winds is made. A shared process at the origin and acceleration of the winds of main-sequence stars and cool giants is suggested. T-Tauri star winds show a possible signature of an accretion-powered wind.
Tixeuil, Sébastien. "Vers l'auto-stabilisation des systèmes à grande échelle". Habilitation à diriger des recherches, Université Paris Sud - Paris XI, 2006. http://tel.archives-ouvertes.fr/tel-00124848.
Ralambondrainy, Tiana. "Observation de simulations multi-agents à grande échelle". La Réunion, 2009. http://elgebar.univ-reunion.fr/login?url=http://thesesenligne.univ.run/09_02ralambondrainy.pdf.
The goal of simulating ecological or social complex systems is to help observers answer questions about these systems. In individual-based models, the complexity of the system represented, the entities, and their interactions produce a huge mass of results. These results are paradoxically as difficult to understand as the real system that the model is supposed to simplify. My research interest is to facilitate the observation and analysis of these results by the user, for a better understanding of multi-agent simulations. I have identified a list of requirements that a multi-agent simulation platform should satisfy in order to facilitate observation by the user. A domain ontology dedicated to observation formalizes the concepts relative to the observation task. This observation ontology is useful both for humans involved in the simulation process and for software entities, which can use this ontology as a common vocabulary in their interactions. Several means are proposed to improve observation management in multi-agent simulation platforms, in terms of architecture and visualization. The interactions between agents are the source of emerging global phenomena: it is necessary to observe them at every relevant scale, ranging from global to local. Hence, I have proposed the concept of conversation, and generic visual representations dedicated to large-scale interactions. These proposals have been validated through the simulation of the management of animal waste fluxes between farms at the territory scale of Reunion Island.
Stref, Philippe. "Application à grande échelle d'un modèle hydrodispersif tridimensionnel". Montpellier 2, 1987. http://www.theses.fr/1987MON20068.
Rawat, Subhandu. "Dynamique cohérente de mouvements turbulents à grande échelle". Thesis, Toulouse, INPT, 2014. http://www.theses.fr/2014INPT0116/document.
My thesis work focused on a 'dynamical systems' understanding of the large-scale dynamics in fully developed turbulent shear flows. In plane Couette flow, large-eddy simulation (LES) is used to model small-scale motions and resolve only the large-scale motions, in order to compute nonlinear traveling waves (NTW) and relative periodic orbits (RPO). Artificial over-damping has been used to quench an increasing range of small-scale motions and prove that the large-scale motions are self-sustained. The lower-branch traveling-wave solutions that lie on the laminar-turbulent basin boundary are obtained for these over-damped simulations and further continued in parameter space to upper-branch solutions. This approach would not have been possible if, as conjectured in some previous investigations, large-scale motions in wall-bounded shear flows were forced by a mechanism based on the existence of active structures at smaller scales. In Poiseuille flow, relative periodic orbits with shift-reflection symmetry on the laminar-turbulent basin boundary are computed using DNS. We show that the RPO found are connected to a pair of traveling-wave (TW) solutions via a global bifurcation (a saddle-node infinite-period bifurcation). The lower branch of this TW solution evolves into a spanwise-localized state when the spanwise domain is increased. The upper-branch solution develops multiple streaks with spanwise spacing consistent with large-scale motions in the turbulent regime.
Chopin, Pierre. "Modélisation à grande échelle pour les phénomènes éruptifs". Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLX063/document.
The object of this thesis is the modeling of the magnetic field of the solar corona using the nonlinear reconstruction code XTRAPOLS, with a special emphasis on eruptive-phenomena environments. The innovative nature of the studies we undertook lies in the spherical, global aspect of the method. Three main works are presented in this dissertation. The first one is about the geoeffective events of February 2011, featuring a large active region. We highlight several twisted flux rope structures and characterize their relationship with large-scale structures. The second work is about the events of August 3rd and 4th. Several active regions are present on the disk, and two of them feature high eruptive activity. Here again, we find twisted flux ropes in each of the active regions, and we highlight the topological relationship between them. The third is a study performed in the context of an NLFFF group, in order to study the nonlinear modeling of the global corona. The reconstruction is performed at a date corresponding to the total solar eclipse of March 20th, 2015. We discuss the impact of the different types of data and models used, and emphasize the importance of the temporal coherence of the data and of taking coronal currents into account. Thus, the works presented in this dissertation allowed us to characterize the global environment of eruptive active regions and to study the relationship between features at different scales. To go further, we present different methods for extending the model beyond the source surface.
Vergassola, Massimo. "Dynamique à grande échelle en turbulence et cosmologie". Nice, 1994. http://www.theses.fr/1994NICE4712.
Cattan, Oralie. "Systèmes de questions-réponses interactifs à grande échelle". Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG090.
Informational search has evolved with our need for immediacy and intuitiveness into a form of natural language querying, no longer solely focused on the use of relevant keywords. The study of these interactions raises major issues in the field of machine understanding with regard to the contextualization of questions. Indeed, questions are rarely asked in isolation. Grouped together, they form a dialogue that is built and structured over the course of the conversation. In the following series of questions: "How much does a hotel room cost in Montreal?", "how to prepare a Basque cake?", "what are black bees?", "do they sleep?", the interpretation of some questions depends on the questions and answers previously asked. In this context, designing an interactive question-answering system capable of sustaining a conversation that is not limited to a simple succession of sporadic questions and answers constitutes a challenge in terms of contextual modeling and high-performance computing. The evolution of intensive computing techniques and solutions, and the availability of large volumes of raw data (in the case of unsupervised learning) or data enriched with linguistic or semantic information (in the case of supervised learning), have allowed machine learning methods to develop significantly, with considerable applications in the industrial sector. Despite their success, these domain and language models, learned from massive amounts of data with large numbers of parameters, raise questions of usability and today appear less than optimal, given the new challenges of digital sobriety. In a real business scenario, where systems are developed rapidly and are expected to work robustly for an increasing variety of domains, tasks, and languages, fast and efficient learning from a limited number of examples is essential. In this thesis we examine each of the aforementioned issues and propose approaches based on knowledge transfer from latent and contextual representations to optimize performance and facilitate cost-effective large-scale deployment of systems.
Akata, Zeynep. "Contributions à l'apprentissage grande échelle pour la classification d'images". Phd thesis, Université de Grenoble, 2014. http://tel.archives-ouvertes.fr/tel-00873807.
Akata, Zeynep. "Contributions à l'apprentissage grande échelle pour la classification d'images". Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM003/document.
Building algorithms that classify images on a large scale is an essential task due to the difficulty of searching the massive amount of unlabeled visual data available on the Internet. We aim at classifying images based on their content to simplify the manageability of such large-scale collections. Large-scale image classification is a difficult problem, as datasets are large with respect to both the number of images and the number of classes. Some of these classes are fine-grained and they may not contain any labeled representatives. In this thesis, we use state-of-the-art image representations and focus on efficient learning methods. Our contributions are (1) a benchmark of learning algorithms for large-scale image classification, and (2) a novel learning algorithm based on label embedding for learning with scarce training data. Firstly, we propose a benchmark of learning algorithms for large-scale image classification in the fully supervised setting. It compares several objective functions for learning linear classifiers, such as one-vs-rest, multiclass, ranking, and weighted average ranking, using stochastic gradient descent optimization. The output of this benchmark is a set of recommendations for large-scale learning. We experimentally show that online learning is well suited for large-scale image classification. With simple data rebalancing, one-vs-rest performs better than all other methods. Moreover, in online learning, using a small enough step size with respect to the learning rate is sufficient for state-of-the-art performance. Finally, regularization through early stopping results in fast training and good generalization performance. Secondly, when dealing with thousands of classes, it is difficult to collect sufficient labeled training data for each class. For some classes we might not even have a single training example. We propose a novel algorithm for this zero-shot learning scenario. Our algorithm uses side information, such as attributes, to embed classes in a Euclidean space. We also introduce a function to measure the compatibility between an image and a label. The parameters of this function are learned using a ranking objective. Our algorithm outperforms the state of the art for zero-shot learning. It is flexible and can accommodate other sources of side information, such as hierarchies. It also allows for a smooth transition from zero-shot to few-shot learning.
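A minimal sketch of the label-embedding prediction rule underlying the zero-shot contribution: classes are scored through a bilinear compatibility between image features and class attribute vectors, so unseen classes only need attributes, not training images. The matrix W is assumed to have been learned with a ranking objective; all dimensions and names below are illustrative.

```python
import numpy as np

def predict_zero_shot(x, W, class_attributes):
    # Score each class y by the compatibility x^T W phi(y),
    # where phi(y) is the class's attribute vector.
    scores = class_attributes @ (W.T @ x)   # one score per class
    return int(np.argmax(scores))

rng = np.random.default_rng(3)
d_img, d_attr, n_classes = 16, 6, 4
W = rng.normal(size=(d_img, d_attr))                      # assumed pre-learned
class_attributes = rng.normal(size=(n_classes, d_attr))   # phi(y), given side info
x = rng.normal(size=d_img)                                # image feature vector
print(predict_zero_shot(x, W, class_attributes))
```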
Donnet, Benoit. "Algorithmes pour la découverte de topologie à grande échelle". Paris 6, 2006. http://www.theses.fr/2006PA066562.
Campusano, Luis Eduardo. "Inhomogénéités à grande échelle dans la distribution des quasars". Toulouse 3, 1992. http://www.theses.fr/1992TOU30196.
Pełny tekst źródłaGürgen, Levent. "Gestion à grande échelle de données de capteurs hétérogènes". Grenoble INPG, 2007. http://www.theses.fr/2007INPG0093.
This dissertation deals with the issues related to scalable management of heterogeneous sensor data. In fact, sensors are becoming less and less expensive, and more and more numerous and heterogeneous. This naturally raises the scalability problem and the need for integrating data gathered from heterogeneous sensors. We propose a distributed and service-oriented architecture in which data processing tasks are distributed at several levels in the architecture. Data management functionalities are provided in terms of "services", in order to hide sensor heterogeneity behind generic services. We also deal with system management issues in sensor farms, a subject not yet explored in this context.
Gomez, Castano Mayte. "Métamatériaux optiques : conception, fabrication à grande échelle et caractérisation". Thesis, Bordeaux, 2020. http://www.theses.fr/2020BORD0094.
Metamaterials are artificially structured materials, thoroughly designed to achieve electromagnetic properties not observed in nature, such as a negative refractive index. The purpose of this thesis is the development of up-scalable optical metamaterials that can be easily incorporated into actual devices. By combining colloidal lithography and electrodeposition, we report an entirely bottom-up fishnet metamaterial made of gold and air layers. A proper theoretical and experimental design gives rise to a tunable refractive index, from positive to negative values in the near infrared. This structure is extended to multilayered fishnet metamaterials made by nanoimprint lithography and electrodeposition. We thoroughly analyze the optical response of the structures, which leads to a strong negative index from the visible to the near infrared. Their performance as optical sensors is studied by infiltrating different liquids through the air cavities. These techniques are also used to fabricate nanostructured metallic substrates for studying the collective spontaneous emission of fluorescent molecules.
Bland, Céline. "Innovations pour l'annotation protéogénomique à grande échelle du vivant". Thesis, Montpellier 1, 2013. http://www.theses.fr/2013MON13508.
Proteogenomics is a recent field at the junction of genomics and proteomics which consists in refining the annotation of the genomes of model organisms with the help of high-throughput proteomic data. Structural and functional errors are still frequent and have been reported on several occasions. Innovative methodologies to prevent such errors are essential. N-terminomics enables experimental validation of initiation codons and certification of the annotation data. With this objective in mind, two innovative strategies have been developed combining: i) selective N-terminal labeling of proteins, ii) multienzymatic digestion in parallel, and iii) specific enrichment of the labeled N-terminal-most peptides using either successive liquid chromatography steps or immunocapture directed towards the N-terminal label. The efficiency of these methodologies has been demonstrated using Roseobacter denitrificans as a bacterial model organism. After enrichment with chromatography, 480 proteins were validated and 46 re-annotated. Several start sites for translation initiation were detected, and homology-driven annotation was challenged in some cases. After immunocapture, 269 proteins were characterized, of which 40% were identified specifically after enrichment. Three novel genes were also annotated for the first time. Complementary results obtained by tandem mass spectrometry analysis allow easier data interpretation to reveal the real start sites of translation initiation of proteins and to identify novel expressed products. In this way, the re-annotation process may become automatic and systematic to improve protein databases.
Gramoli, Vincent. "Mémoire partagée distribuée pour systèmes dynamiques à grande échelle". Phd thesis, Rennes 1, 2007. ftp://ftp.irisa.fr/techreports/theses/2007/gramoli-e.pdf.
This thesis focuses on newly arising challenges in the context of data sharing due to the recent scale shift of distributed systems. Distributed systems keep growing very rapidly. Not only do users tend to communicate with more people around the world, but the number of individual objects that get connected is also increasing. Such large-scale systems experience an inherent dynamism due to the unpredictability of user behavior. This drawback prevents traditional solutions from being adapted to this challenging context. More fundamentally, it affects the communication among distinct computing entities. This thesis investigates the existing research work and proposes research directions to solve a fundamental issue, the distributed shared memory problem, in such a large-scale and inherently dynamic environment.
Gramoli, Vincent. "Mémoire partagée distribuée pour systèmes dynamiques à grande échelle". Phd thesis, Université Rennes 1, 2007. http://tel.archives-ouvertes.fr/tel-00491439.
Oudinet, Johan. "Approches combinatoires pour le test statistique à grande échelle". Paris 11, 2010. http://www.theses.fr/2010PA112347.
This thesis focuses on the development of combinatorial methods for testing and formal verification, particularly on probabilistic approaches, because exhaustive verification is often not tractable for complex systems. For model-based testing, I guide the random exploration of the model to ensure that the expected coverage criterion is satisfied with a desired probability, regardless of the underlying topology of the explored model. Regarding model checking, I show how to generate a number of random finite paths to check whether a property is satisfied with a certain probability. In the first part, I compare different algorithms for generating paths in an automaton uniformly at random. Then I propose a new algorithm that offers a good compromise, with sub-linear space complexity in the path length and almost-linear time complexity. This algorithm allows the exploration of large models (tens of millions of states) by generating long paths (hundreds of thousands of transitions). In the second part, I present a way to combine partial order reduction and on-the-fly generation techniques to explore concurrent systems without constructing the global model, relying on models of the components only. Finally, I show how to bias the previous algorithms to satisfy other coverage criteria. When a criterion is not based on paths but on a set of states or transitions, we use a mixed solution to ensure both varied ways of exploring those states or transitions and the satisfaction of the criterion with a desired probability.
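For context, the classical recursive (counting-based) method for generating paths uniformly at random in an automaton, which the thesis improves upon in space and time, can be sketched as follows; the tiny automaton and path length are toy assumptions.

```python
import random
from functools import lru_cache

def uniform_path(transitions, start, length, seed=4):
    """Sample a path of `length` transitions uniformly among all such paths,
    by weighting each successor with its number of completions."""
    @lru_cache(maxsize=None)
    def count(state, remaining):
        if remaining == 0:
            return 1
        return sum(count(nxt, remaining - 1) for nxt in transitions[state])

    rng = random.Random(seed)
    path = [start]
    for remaining in range(length, 0, -1):
        succs = transitions[path[-1]]
        # Weighted choice makes every length-n path equally likely overall.
        weights = [count(s, remaining - 1) for s in succs]
        path.append(rng.choices(succs, weights=weights)[0])
    return path

transitions = {0: (0, 1), 1: (0,)}  # path counts grow like Fibonacci numbers
print(uniform_path(transitions, start=0, length=10))
```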
Lombard, Pierre. "NFSP : Une solution de stockage distribué pour architectures grande échelle". Phd thesis, Grenoble INPG, 2003. http://tel.archives-ouvertes.fr/tel-00004373.
Vargas-Magaña, Mariana. "Analyse des structures à grande échelle avec SDSS-III/BOSS". Phd thesis, Université Paris-Diderot - Paris VII, 2012. http://tel.archives-ouvertes.fr/tel-00726113.
Pełny tekst źródłaSamaké, Abdoulaye. "Méthodes non-conformes de décomposition de domaine à grande échelle". Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM066/document.
This thesis investigates domain decomposition methods, commonly classified as either overlapping Schwarz methods or iterative substructuring methods relying on nonoverlapping subdomains. We mainly focus on the mortar finite element method, a nonconforming approach to substructuring involving weak continuity constraints on the approximation space. We introduce a finite element framework for the design and the analysis of substructuring preconditioners for an efficient solution of the linear system arising from such a discretization method. Particular consideration is given to the construction of the coarse grid preconditioner, specifically the main variant proposed in this work, using a Discontinuous Galerkin interior penalty method as the coarse problem. Other domain decomposition methods, such as Schwarz methods and the so-called three-field method, are surveyed with the purpose of establishing a generic teaching and research programming environment for a wide range of these methods. We develop an advanced computational framework dedicated to the parallel implementation of the numerical methods and preconditioners introduced in this thesis. The efficiency and scalability of the preconditioners, and the performance of the parallel algorithms, are illustrated by numerical experiments performed on large-scale parallel architectures.
Signorini, Jacqueline. "Programmation par configurations des ordinateurs cellulaires à très grande échelle". Paris 8, 1992. http://www.theses.fr/1992PA080699.
Côté, Benoit. "Modèle d'évolution de galaxies pour simulations cosmologiques à grande échelle". Doctoral thesis, Université Laval, 2015. http://hdl.handle.net/20.500.11794/25550.
Tableau d'honneur de la Faculté des études supérieures et postdoctorales, 2014-2015 (Honor Roll of the Faculty of Graduate and Postdoctoral Studies).
We present a semi-analytical model (SAM) designed to be used in a large-scale hydrodynamical simulation as a sub-grid treatment in order to generate the evolution of galaxies in a cosmological context. The ultimate goal of this project is to study the chemical enrichment history of the intergalactic medium (IGM) and the interactions between galaxies and their surrounding. Presently, the SAM takes into account all the ingredients needed to compute the evolution of low- and intermediate-mass galaxies. This includes the accretion of the galactic halo and the IGM, radiative cooling, star formation, chemical enrichment, and the production of galactic outflows driven by the mechanical energy and the radiation of massive stars. The physics of interstellar bubbles is applied to every stellar population which forms in the model in order to link the stellar activity to the production of outflows driven by mechanical energy. We use up-to-date stellar models to generate the evolution of each stellar population as a function of their mass, metallicity, and age. This enables us to include, in the enrichment process, the stellar winds from massive stars, Type II, Ib, and Ic supernovae, hypernovae, the stellar winds from low- and intermediate-mass stars in the asymptotic giant branch, and Type Ia supernovae. With these ingredients, our model can reproduce the abundances of several elements observed in the stars located in the solar neighborhood. More generally, our SAM reproduces the current stellar-to-dark-halo mass relation observed in galaxies. It can also reproduce the metallicity, the hydrogen mass fraction, and the specific star formation rate observed in galaxies as a function of their stellar mass. Our model is also consistent with observations which suggest that low-mass galaxies are more affected by stellar feedback than higher-mass galaxies. Moreover, the model can reproduce the periodic and the stable behaviors observed in the star formation rate of galaxies. All these results show that our SAM is sufficiently qualified to treat the evolution of low- and intermediate-mass galaxies inside a large-scale cosmological simulation.
Benmerzoug, Fateh. "Analyse, modélisation et visualisation de données sismiques à grande échelle". Thesis, Toulouse 3, 2020. http://www.theses.fr/2020TOU30077.
The main goal of the oil and gas industry is to locate and extract hydrocarbon resources, mainly petroleum and natural gas. To do this efficiently, numerous seismic measurements are conducted to gather as much data as possible on the terrain or marine surface area of interest. Using a multitude of sensors, seismic data are acquired and processed, resulting in large cube-shaped data volumes. These volumes are then used to compute additional attributes that help in the understanding of the inner geological and geophysical structure of the earth. The visualization and exploration, called surveys, of these volumes are crucial to understand the structure of the underground and to localize natural reservoirs where oil or gas are trapped. Recent advancements in both processing and imaging technologies enable engineers and geoscientists to perform larger seismic surveys. Modern seismic measurements yield data volumes of many hundreds of gigabytes. The size of the acquired volumes presents a real challenge, both for processing such large volumes and for their storage and distribution. Thus, data compression is a much-desired feature that helps answer the data size challenge. Another challenging aspect is the visualization of such large volumes. Traditionally, a volume is sliced both vertically and horizontally and visualized by means of 2-dimensional planes. This method requires the user to manually scroll back and forth between successive slices in order to locate and track interesting geological features. Even though slicing provides a detailed visualization with a clear and concise representation of the physical space, it lacks the depth aspect that can be crucial in the understanding of certain structures. Additionally, the larger the volume gets, the more tedious and repetitive this task becomes. A more intuitive approach to visualization is volume rendering. Rendering the seismic data as a volume presents an intuitive and hands-on approach. By defining the appropriate color and opacity filters, the user can extract and visualize entire geo-bodies as individual continuous objects in a 3-dimensional space. In this thesis, we present a solution to both the data size and large data visualization challenges. We give an overview of the seismic data and attributes that are present in a typical seismic survey. We present an overview of data compression as a whole, discussing the necessary tools and methods that are used in the industry. A seismic data compression algorithm is then proposed, based on the concept of extended transforms. By employing the GenLOT, Generalized Lapped Orthogonal Transforms, we derive an appropriate transform filter that decorrelates the seismic data so they can be further quantized and encoded using P-SPECK, our proposed compression algorithm based on block coding of bit-planes. Furthermore, we propose a ray-casting out-of-core volume rendering framework that enables the visualization of arbitrarily large seismic cubes. Data are streamed on demand and rendered using user-provided opacity and color filters, resulting in a fairly easy-to-use software package.
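The bit-plane idea at the heart of the proposed coder can be illustrated in isolation: quantized transform coefficients are split into planes of bits, most significant first, yielding an embedded stream that can be truncated at any quality level. P-SPECK adds set partitioning and entropy coding on top; the snippet below only shows this decomposition step, on invented coefficients.

```python
import numpy as np

def bit_planes(coeffs, n_bits=8):
    # Separate sign and magnitude, then slice magnitudes into bit-planes,
    # most significant first: earlier planes carry the coarse approximation.
    mags = np.abs(coeffs).astype(np.uint32)
    planes = [(mags >> b) & 1 for b in range(n_bits - 1, -1, -1)]
    signs = (coeffs < 0).astype(np.uint8)
    return planes, signs

coeffs = np.array([97, -13, 6, -2, 1, 0, 0, 0])   # typical decaying spectrum
planes, signs = bit_planes(coeffs)
for b, p in zip(range(7, -1, -1), planes):
    print(f"plane {b}: {p.tolist()}")
```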
Vega, Baez Germàn Eduardo. "Développement d'applications à grande échelle par composition de méta-modèles". Université Joseph Fourier (Grenoble), 2005. http://www.theses.fr/2005GRE10278.
Model Driven Software Engineering (MDSE) is a software engineering approach that addresses the ever-increasing complexity of software development and maintenance through a unified conceptual framework in which the whole software life cycle is seen as a process of model production, refinement, and integration. This thesis contributes to this MDSE trend. We focus mainly on the issues raised by the complexity and diversity of the domains of expertise involved in large software applications, and we propose to address these issues from an MDSE perspective. A domain is an expertise area potentially shared by many different software applications. The knowledge and know-how in a domain are major assets; this expertise can be formalized and reused when captured by a Domain Specific Language (DSL). We propose an approach in which the target system is described by different models, written in different DSLs. In this approach, composing these different models makes it possible to model complex applications covering different domains simultaneously. Our approach is an original contribution in that each DSL is specified by a meta-model precise enough to build, in a semi-automatic way, a domain virtual machine; it is this virtual machine that interprets the domain models. It is then possible to compose these meta-models to define new and more complex domains. Meta-model composition increases modularity and reuse, and allows building domains with a much larger functional scope than possible with traditional approaches.
Etcheverry, Arnaud. "Simulation de la dynamique des dislocations à très grande échelle". Thesis, Bordeaux, 2015. http://www.theses.fr/2015BORD0263/document.
This research work focuses on improving the performance of 3D dislocation dynamics simulations so that they run efficiently on modern computers. First, we introduce algorithmic techniques to reduce the complexity in order to target large-scale simulations. Second, we focus on data structures to take into account both the memory hierarchy and algorithmic data access. On one side, we build an adaptive data structure to handle the dynamism of the data, and on the other side, we use an octree to combine hierarchical decomposition and data locality in order to face the intensive arithmetic of force-field computation and collision detection. Finally, we introduce the parallel aspects of our simulation. We propose a classical hybrid parallelism, with task-based OpenMP threads and domain decomposition techniques for MPI.
Hoyos-Idrobo, Andrés. "Ensembles des modeles en fMRI : l'apprentissage stable à grande échelle". Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS029/document.
In medical imaging, collaborative worldwide initiatives have begun the acquisition of hundreds of terabytes of data that are made available to the scientific community, in particular functional Magnetic Resonance Imaging (fMRI) data. However, this signal requires extensive fitting and noise reduction steps to extract useful information. The complexity of these analysis pipelines yields results that are highly dependent on the chosen parameters. The computation cost of this data deluge is worse than linear: as datasets no longer fit in cache, standard computational architectures cannot be used efficiently. To speed up the computation, we considered dimensionality reduction by feature grouping, using clustering methods to perform this task. We introduce a linear-time agglomerative clustering scheme, Recursive Nearest Agglomeration (ReNA). Unlike existing fast agglomerative schemes, it avoids the creation of giant clusters. We then show empirically how this clustering algorithm yields very fast and accurate models, enabling the processing of large datasets on a budget. In neuroimaging, machine learning can be used to understand the cognitive organization of the brain. The idea is to build predictive models that are used to identify the brain regions involved in the cognitive processing of an external stimulus. However, training such estimators is a high-dimensional problem, and one needs to impose some prior to find a suitable model. To handle large datasets and increase the stability of the results, we propose to use ensembles of models in combination with clustering. We study the empirical performance of this pipeline on a large number of brain imaging datasets. This method is highly parallelizable, it has a lower computation time than state-of-the-art methods, and we show that it requires fewer data samples to achieve better prediction accuracy. Finally, we show that ensembles of models improve the stability of the weight maps and reduce the variance of the prediction accuracy.
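A sketch of the feature-grouping-plus-ensembles pipeline on synthetic data, using scikit-learn's Ward-based FeatureAgglomeration as a stand-in for ReNA (which plays the same role at linear cost); the dataset shapes and hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import FeatureAgglomeration
from sklearn.ensemble import BaggingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for fMRI data: 200 samples, 2000 correlated features.
rng = np.random.default_rng(5)
X = rng.normal(size=(200, 2000))
y = (X[:, :50].mean(axis=1) > 0).astype(int)   # signal lives in one feature group

# Feature grouping (2000 -> 100) followed by an ensemble of linear models.
model = make_pipeline(
    FeatureAgglomeration(n_clusters=100),
    BaggingClassifier(LogisticRegression(max_iter=1000), n_estimators=10),
)
model.fit(X, y)
print(model.score(X, y))
```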
Dandouna, Makarem. "Librairies numériques réutilisables pour le calcul distribué à grande échelle". Versailles-St Quentin en Yvelines, 2012. http://www.theses.fr/2012VERS0063.
Full text source
We propose, in this thesis, a design model for numerical libraries based on a component-oriented approach and a strict separation between data management, computation operations and communication control in an application. This model allows sequential/parallel reusability as well as the expression of multi-level parallelism. The abstraction of the three principal aspects of a parallel library suggested by our design model makes the library independent of the communication mechanisms. One consequence of this independence is the possibility of making existing parallel libraries, and those built according to this model, more scalable. To validate the proposed approach, we implement our design model on the basis of several existing, differently designed numerical libraries, used jointly with a scientific workflow environment called YML. Experiments performed on the HopperII supercomputer of the National Energy Research Scientific Computing Center (NERSC) and on the French national Grid'5000 platform show the efficiency and scalability of our approach.
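A minimal sketch of the strict separation advocated here, with invented class names (not taken from the thesis or from YML): the computation component receives the communication component through an abstract interface, so a sequential backend can be swapped for an MPI one without touching the numerical code.

```python
# Hypothetical sketch of separating computation from communication control.
from abc import ABC, abstractmethod
import numpy as np

class Communicator(ABC):
    @abstractmethod
    def allreduce_sum(self, x: float) -> float: ...

class SequentialComm(Communicator):
    def allreduce_sum(self, x: float) -> float:
        return x                      # single process: nothing to combine

class DotProduct:
    """Computation component: pure local arithmetic plus one reduction."""
    def __init__(self, comm: Communicator):
        self.comm = comm              # communication injected, never hard-wired

    def __call__(self, a: np.ndarray, b: np.ndarray) -> float:
        local = float(a @ b)          # local computation on this process's block
        return self.comm.allreduce_sum(local)

# An MPI backend (e.g. wrapping mpi4py's comm.allreduce) could implement
# Communicator without any change to DotProduct.
dot = DotProduct(SequentialComm())
print(dot(np.arange(3.0), np.ones(3)))   # 3.0
```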
Pérès, Olivier. "Construction de topologies autostabilisante dans les systèmes à grande échelle". Paris 11, 2008. http://www.theses.fr/2008PA112119.
Full text source
Large-scale systems do not allow the use of the usual techniques for writing distributed algorithms. In this thesis, a new model is proposed. It is scalable and makes it possible to write self-stabilizing algorithms. Such algorithms converge towards a state in which they satisfy their specification. They are thus capable of recovering from the transient failures that inevitably affect such systems: arrival and departure of processes, memory corruption, bad network links.
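Self-stabilization can be illustrated independently of the thesis model with the textbook example, Dijkstra's K-state token ring. The sketch below (illustrative only) starts the ring in an arbitrary, possibly corrupted state and converges to a configuration with exactly one privileged process, whatever the initial state.

```python
# Illustrative sketch: Dijkstra's K-state self-stabilizing token ring.
import random

def step(states, K):
    """Pick one privileged process at random (the daemon) and fire its rule."""
    n = len(states)
    privileged = [0] if states[0] == states[-1] else []
    privileged += [i for i in range(1, n) if states[i] != states[i - 1]]
    i = random.choice(privileged)          # at least one process is always privileged
    if i == 0:
        states[0] = (states[0] + 1) % K    # root increments when it sees its own value
    else:
        states[i] = states[i - 1]          # others copy their left neighbour
    return privileged

n, K = 5, 7                                # K > n guarantees convergence
states = [random.randrange(K) for _ in range(n)]   # arbitrary (faulty) start
for _ in range(200):
    privileged = step(states, K)
print("tokens held:", len(privileged), "state:", states)   # stabilizes to 1 token
```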
Gallet, Basile. "Dynamique d'un champ à grande échelle engendré sur un fond turbulent". Phd thesis, Ecole Normale Supérieure de Paris - ENS Paris, 2011. http://tel.archives-ouvertes.fr/tel-00655623.
Full text source
Laumay, Philippe. "Configuration et déploiement d'intergiciel asynchrone sur système hétérogène à grande échelle". Phd thesis, Grenoble INPG, 2004. http://tel.archives-ouvertes.fr/tel-00005409.
Full text source
Côté, Benoît. "Modèle de vents galactiques destiné aux simulations cosmologiques à grande échelle". Thesis, Université Laval, 2010. http://www.theses.ulaval.ca/2010/27873/27873.pdf.
Full text source
Mohamed, Drissi. "Un modèle de propagation de feux de végétation à grande échelle". Phd thesis, Université de Provence - Aix-Marseille I, 2013. http://tel.archives-ouvertes.fr/tel-00931806.
Full text source
Cordonnier, Guillaume. "Modèles à couches pour simuler l'évolution de paysages à grande échelle". Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM072/document.
Full text source
The development of new technologies allows the interactive visualization of virtual worlds showing an increasing amount of detail and spatial extent. The production of plausible landscapes within these worlds becomes a major challenge, not only because of the important part that terrain features and ecosystems play in the quality and realism of 3D scenes, but also because of the complexity of editing large landforms at mountain-range scale. Interactive authoring is often achieved by coupling editing techniques with computationally demanding numerical simulations, whose calibration becomes harder as the number of non-intuitive parameters increases. This thesis explores new methods for the simulation of large-scale landscapes. Our goal is to improve both the control and the realism of the synthetic scenes. Our strategy for increasing plausibility consists in building our methods on physically and geomorphologically inspired laws: we develop new solving schemes which, combined with intuitive control tools, improve the user experience. By observing phenomena triggered by compression areas within the Earth's crust, we propose a method for the intuitive control of uplift, based on a metaphor of sculpting the tectonic plates. Combined with new efficient methods for fluvial and glacial erosion, this allows the fast sculpting of large mountain ranges. In order to visualize the resulting landscapes within human sight, we demonstrate the need to combine the simulation of various phenomena with different time spans, and we propose a stochastic simulation technique to resolve this complex cohabitation. This methodology is applied to the simulation of geological processes such as erosion, interleaved with ecosystem formation. The method is then implemented on the GPU, combining long-term effects (snowfall, phase changes of water) with highly dynamic ones (avalanches, the impact of skiers). Our methods allow the simulation of the evolution of large-scale, visually plausible landscapes while accounting for user control. These results were validated by user studies as well as by comparisons with data obtained from real landscapes.
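A minimal sketch of the kind of geomorphologically inspired law involved (this is the standard stream-power erosion model, not the thesis algorithm; all parameter values are illustrative): each step accumulates drainage area A along steepest-descent directions and updates heights as dh/dt = U - K A^m S^n, i.e. uplift minus fluvial erosion.

```python
# Illustrative sketch: one explicit step of stream-power erosion on a grid.
import numpy as np

def erode_step(h, dx=1.0, U=1e-3, K=2e-4, m=0.5, n=1.0, dt=1.0):
    ny, nx = h.shape
    pad = np.pad(h, 1, mode="edge")
    nbrs = [(-1, 0), (1, 0), (0, -1), (0, 1)]        # 4-connected neighbours
    slopes = np.stack([(h - pad[1 + dy: 1 + dy + ny, 1 + dj: 1 + dj + nx]) / dx
                       for dy, dj in nbrs])
    S = np.maximum(slopes.max(axis=0), 0.0)          # downhill slope (0 at pits)
    receiver = slopes.argmax(axis=0)
    # Accumulate drainage area from highest to lowest cell.
    A = np.full(h.shape, dx * dx)
    order = np.dstack(np.unravel_index(np.argsort(-h, axis=None), h.shape))[0]
    for (i, j) in order:
        if S[i, j] > 0:
            dy, dj = nbrs[receiver[i, j]]
            ri, rj = min(max(i + dy, 0), ny - 1), min(max(j + dj, 0), nx - 1)
            A[ri, rj] += A[i, j]
    return h + dt * (U - K * A**m * S**n)            # uplift minus fluvial erosion

h = np.random.rand(64, 64) * 0.1 + np.linspace(0, 1, 64)[None, :]
for _ in range(50):
    h = erode_step(h)
```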
Guenot, Damien. "Simulation des effets instationnaires à grande échelle dans les écoulements décollés". École nationale supérieure de l'aéronautique et de l'espace (Toulouse ; 1972-2007), 2004. http://www.theses.fr/2004ESAE0009.
Full text source
Guedj, Mickaël. "Méthodes Statistiques pour l’analyse de données génétiques d’association à grande échelle". Evry-Val d'Essonne, 2007. http://www.biblio.univ-evry.fr/theses/2007/2007EVRY0015.pdf.
Full text source
The increasing availability of dense Single Nucleotide Polymorphism (SNP) maps, due to rapid improvements in molecular biology and genotyping technologies, has recently led geneticists towards genome-wide association studies, with hopes of encouraging results concerning our understanding of the genetic basis of complex diseases. The analysis of such high-throughput data raises new statistical and computational problems, which constitute the main topic of this thesis. After a brief description of the main questions raised by genome-wide association studies, we deal with single-marker approaches through a power study of the main association tests. We then consider multi-marker approaches, focusing on the method we developed, which relies on the Local Score. Finally, this thesis also deals with the multiple-testing problem: our Local Score-based approach circumvents it by reducing the number of tests; in parallel, we present an estimation of the local false discovery rate using a simple Gaussian mixture model.
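A minimal sketch of a local-score scan under illustrative assumptions (this is not the thesis code; the score transform and threshold delta are common choices, and the data are synthetic): p-values are turned into scores X_i = -log10(p_i) - delta, and the local score is the maximum-sum subsegment of these scores, computable in linear time with Kadane's algorithm. In practice its significance would then be assessed by permutation or analytic approximations.

```python
# Illustrative sketch: local score of a sequence of association p-values.
import numpy as np

def local_score(pvalues, delta=1.0):
    x = -np.log10(pvalues) - delta        # markers with p < 10^-delta score > 0
    best = cur = 0.0
    best_seg = seg = (0, -1)              # (start, end) of the best segment
    start = 0
    for i, xi in enumerate(x):            # Kadane's maximum-subsegment scan
        if cur + xi > 0:
            cur += xi
            seg = (start, i)
        else:
            cur, start = 0.0, i + 1
        if cur > best:
            best, best_seg = cur, seg
    return best, best_seg

rng = np.random.default_rng(1)
p = rng.uniform(size=1000)                # null p-values...
p[400:410] = rng.uniform(0, 1e-3, 10)     # ...with a small associated region
score, (lo, hi) = local_score(p)
print(f"local score {score:.1f} on markers {lo}..{hi}")
```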
Benveniste, Jérôme. "Observer la circulation des océans à grande échelle par altimétrie satellitaire". Toulouse 3, 1989. http://www.theses.fr/1989TOU30229.
Full text source