Theses on the topic "Petits jeux de données"
Create an accurate citation in APA, MLA, Chicago, Harvard and other styles
Consult the top 50 theses for your research on the topic "Petits jeux de données".
Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse theses on a wide variety of disciplines and organise your bibliography correctly.
Gay, Antonin. "Pronostic de défaillance basé sur les données pour la prise de décision en maintenance : Exploitation du principe d'augmentation de données avec intégration de connaissances à priori pour faire face aux problématiques du small data set". Electronic Thesis or Diss., Université de Lorraine, 2023. http://www.theses.fr/2023LORR0059.
This CIFRE PhD is a joint project between ArcelorMittal and the CRAN laboratory, with the aim to optimize industrial maintenance decision-making through the exploitation of the available sources of information, i.e. industrial data and knowledge, under the industrial constraints presented by the steel-making context. The current maintenance strategy on steel lines is based on regular preventive maintenance. The evolution of preventive maintenance towards a dynamic strategy is done through predictive maintenance, which has been formalized within the Prognostics and Health Management (PHM) paradigm as a seven-step process. Among these PHM steps, this PhD's work focuses on decision-making and prognostics. The Industry 4.0 context puts emphasis on data-driven approaches, which require large amounts of data that industrial systems cannot systematically supply. The first contribution of the PhD consists in proposing an equation linking prognostics performance to the number of available training samples. This contribution makes it possible to predict the prognostics performance that could be obtained with additional data when dealing with small datasets. The second contribution focuses on evaluating and analyzing the performance of data augmentation when applied to prognostics on small datasets. Data augmentation leads to an improvement of prognostics performance of up to 10%. The third contribution consists in the integration of expert knowledge into data augmentation. Statistical knowledge integration proved efficient at avoiding the performance degradation caused by data augmentation under some unfavorable conditions. Finally, the fourth contribution consists in the integration of prognostics into maintenance decision-making cost modeling and the evaluation of the impact of prognostics on maintenance decision cost. It demonstrates that (i) the implementation of predictive maintenance reduces maintenance cost by up to 18-20% and (ii) the 10% prognostics improvement can reduce maintenance cost by an additional 1%.
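The abstract does not reproduce the proposed equation. As an illustrative assumption only, sample-size/performance relationships of this kind are often modelled by an inverse power-law learning curve, which is the sort of form such a contribution could take:

```latex
P(n) \;=\; P_{\infty} \;-\; \alpha\, n^{-\beta}, \qquad \alpha,\beta > 0
```

where P(n) is the prognostics performance reached with n training samples and P_inf the asymptotic performance; fitting alpha and beta on the available small-data runs then lets one extrapolate the gain expected from additional samples.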
Coveliers, Alexandre. "Sensibilité aux jeux de données de la compilation itérative". Paris 11, 2007. http://www.theses.fr/2007PA112255.
In the context of processor architecture design, the search for performance leads to a constant growth in architecture complexity, which has made it harder to exploit the potential performance of these architectures. To improve this exploitation, new optimization techniques based on dynamic behavior (i.e. run-time behavior) have been proposed. Iterative compilation is such an optimization approach: it makes it possible to determine more relevant transformations than those obtained by static analysis. The main drawback of this optimization method is that the information leading to the code transformations is specific to a particular data set; the optimizations determined are therefore dependent on the data set used during the optimization process. In this thesis, we study how the performance of optimized applications varies with the data set used, for two iterative code transformation techniques. We introduce different metrics to quantify this sensitivity. We also propose data set selection methods for choosing which data set to use during the code transformation process. The selected data sets make it possible to obtain optimized code that performs well on all the other available data sets.
Caron, Maxime. "Données confidentielles : génération de jeux de données synthétisés par forêts aléatoires pour des variables catégoriques". Master's thesis, Université Laval, 2015. http://hdl.handle.net/20.500.11794/25935.
Confidential data are very common in statistics nowadays. One way to treat them is to create partially synthetic datasets for data sharing. We present an algorithm based on random forests to generate such datasets for categorical variables. We are interested in the formula used to make inference from multiple synthetic datasets. We show that the order of the synthesis has an impact on the estimation of the variance with this formula. We propose a variant of the algorithm inspired by differential privacy, and show that we are then unable to estimate a regression coefficient or its variance. We also show the impact of synthetic datasets on structural equation modeling. One conclusion is that the synthetic dataset does not really affect the coefficients between latent variables and measured variables.
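To make the general mechanism concrete, here is a minimal Python sketch of leaf-based resampling with a random forest; the resampling scheme, names and parameters are our assumptions for illustration, not the algorithm studied in the thesis:

```python
import numpy as np
from collections import defaultdict
from sklearn.ensemble import RandomForestClassifier

def synthesize_column(X, y, n_trees=100, seed=0):
    """Replace the categorical target y with draws from the leaf
    distributions of a random forest fitted on the other columns X."""
    rng = np.random.default_rng(seed)
    rf = RandomForestClassifier(n_estimators=n_trees, random_state=seed).fit(X, y)
    leaves = rf.apply(X)                      # (n_samples, n_trees) leaf ids
    pools = defaultdict(list)                 # observed y values per (tree, leaf)
    for i in range(len(y)):
        for t in range(n_trees):
            pools[(t, leaves[i, t])].append(y[i])
    synthetic = []
    for i in range(len(y)):
        t = int(rng.integers(n_trees))        # pick a tree at random...
        synthetic.append(rng.choice(pools[(t, leaves[i, t])]))  # ...sample its leaf
    return np.array(synthetic)
```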
Ben, Ellefi Mohamed. "La recommandation des jeux de données basée sur le profilage pour le liage des données RDF". Thesis, Montpellier, 2016. http://www.theses.fr/2016MONTT276/document.
With the emergence of the Web of Data, most notably Linked Open Data (LOD), an abundance of data has become available on the web. However, LOD datasets and their inherent subgraphs vary heavily with respect to their size, topic and domain coverage, their schemas and their data dynamicity over time. To this extent, identifying suitable datasets which meet specific criteria has become an increasingly important, yet challenging, task to support issues such as entity retrieval, semantic search and data linking. Particularly with respect to interlinking, the current topology of the LOD cloud underlines the need for practical and efficient means to recommend suitable datasets: currently, only well-known reference graphs such as DBpedia (the most obvious target), YAGO or Freebase show a high number of in-links, while there exists a long tail of potentially suitable yet under-recognized datasets. This problem is due to the semantic web tradition in dealing with "finding candidate datasets to link to", where data publishers are used to identifying target datasets for interlinking. Since an understanding of the nature of the content of specific datasets is a crucial prerequisite for the mentioned issues, we adopt in this dissertation the notion of a "dataset profile": a set of features that describe a dataset and allow the comparison of different datasets with regard to their represented characteristics. Our first research direction was to implement a collaborative-filtering-like dataset recommendation approach, which exploits both existing dataset topic profiles and traditional dataset connectivity measures in order to link LOD datasets into a global dataset-topic graph. This approach relies on the LOD graph in order to learn the connectivity behaviour between LOD datasets. However, experiments have shown that the current topology of the LOD cloud is far from complete enough to be considered as a ground truth and, consequently, as learning data. Facing the limits of the current LOD topology as learning data, our research led us to move away from the topic-profile "learning to rank" approach and to adopt a new approach for candidate dataset identification, where the recommendation is based on the overlap between the intensional profiles of different datasets. By intensional profile, we mean the formal representation of a set of schema concept labels that best describe a dataset, potentially enriched with the corresponding textual descriptions. This representation provides richer contextual and semantic information and allows similarities between profiles to be computed efficiently and inexpensively. We identify schema overlap with the help of a semantico-frequential concept similarity measure and a ranking criterion based on tf*idf cosine similarity. Experiments conducted over all available linked datasets on the LOD cloud show that our method achieves an average precision of up to 53% for a recall of 100%. Furthermore, our method returns the mappings between schema concepts across datasets, a particularly useful input for the data linking step. In order to ensure high-quality, representative dataset schema profiles, we introduce Datavore, a tool oriented towards metadata designers that provides ranked lists of vocabulary terms to reuse in the data modeling process, together with additional metadata and cross-term relations. The tool relies on the Linked Open Vocabulary (LOV) ecosystem for acquiring vocabularies and metadata and is made available to the community.
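To make the ranking idea concrete, here is a minimal sketch of tf*idf cosine-similarity ranking over schema-label profiles; the profiles below are invented placeholders, not LOD data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Each dataset is represented by the textual labels of its schema concepts.
profiles = {
    "source":   "person name birth place author publication",
    "dbpedia":  "person author birth date place work publication",
    "geonames": "place country latitude longitude population",
}
names = list(profiles)
tfidf = TfidfVectorizer().fit_transform(profiles.values())
sims = cosine_similarity(tfidf[0:1], tfidf[1:]).ravel()  # compare "source" to the rest
ranking = sorted(zip(names[1:], sims), key=lambda p: -p[1])
print(ranking)  # candidate target datasets, best schema overlap first
```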
Bouillot, Flavien. "Classification de textes : de nouvelles pondérations adaptées aux petits volumes". Thesis, Montpellier, 2015. http://www.theses.fr/2015MONTS167.
Every day, classification is omnipresent and unconscious. In a decision process, when faced with something (an object, an event, a person), we instinctively think of similar elements in order to adapt our choices and behaviors. This assignment to a particular category is based on past experience and on the characteristics of the element: the larger and more accurate the experience, the more relevant the decision. It is the same when we need to categorize a document based on its content, for example to detect whether it is a children's story or a philosophical treatise. This treatment is of course more effective if we have a large number of works of both categories and if the books have a large number of words. In this thesis we address precisely the problem of decision-making when we have few learning documents and when documents have a limited number of words. For this we propose a new approach based on new term weightings, which enables us to accurately determine the weight to be given to the words which compose the document. To optimize the treatment, we propose a configurable approach: five parameters make our approach adaptable, regardless of the given classification problem. Numerous experiments have been conducted on various types of documents, in different languages and in different configurations. Depending on the corpus, they show that our proposal achieves better results than the best approaches in the literature for the problems of small datasets. The use of parameters adds complexity, since optimal values must then be determined. Detecting the best settings and the best algorithms is a complicated task whose difficulty is theorized by the No-Free-Lunch theorem. We treat this second problem by proposing a new meta-classification approach based on the concepts of distance and semantic similarity. Specifically, we propose new meta-features for document classification. This original approach achieves results similar to the best approaches in the literature while providing additional features. In conclusion, the work presented in this manuscript has been integrated into various technical implementations: one in the Weka software, one in an industrial prototype, and a third in the product of the company that funded this work.
Coatélan, Stéphane. "Conception et évaluation d'un système de transmission sur canal acoustique sous-marin horizontal petits fonds". Brest, 1996. http://www.theses.fr/1996BRES2001.
Dumonceaux, Frédéric. "Approches algébriques pour la gestion et l’exploitation de partitions sur des jeux de données". Nantes, 2015. http://archive.bu.univ-nantes.fr/pollux/show.action?id=c655f585-5cf3-4554-bea2-8e488315a2b9.
The rise of data analysis methods in many growing contexts requires the design of new tools enabling the management and handling of extracted data. The summarization process is then often formalized through the use of set partitions, whose handling depends on the applicative context and their inherent properties. Firstly, we suggest modelling the management of aggregation query results over a data cube within the algebraic framework of the partition lattice, and we highlight the value of such an approach with a view to minimizing both the space and the time required to generate those results. We then deal with the problem of consensus of partitions, emphasizing the challenges arising from the lack of properties ruling the combination of partitions. The idea put forward is to deepen the algebraic properties of the partition lattice in order to strengthen its understanding and to generate new consensus functions. As a conclusion, we propose the modelling and implementation of operators defined over generic partitions, and we carry out experiments demonstrating the benefit of their conceptual and operational use.
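For illustration, one of the lattice operations in question can be stated very compactly; this sketch (our example, not code from the thesis) computes the meet of two partitions, i.e. their common refinement:

```python
def meet(p, q):
    """Meet of two set partitions given as lists of sets over the same
    ground set: the non-empty pairwise intersections of their blocks."""
    return [b & c for b in p for c in q if b & c]

p = [{1, 2, 3}, {4, 5}]
q = [{1, 2}, {3, 4}, {5}]
print(meet(p, q))  # [{1, 2}, {3}, {4}, {5}]
```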
Fan, Qingfeng. "Stratégie de transfert de données dans les grilles de capteurs". Versailles-St Quentin en Yvelines, 2014. http://www.theses.fr/2014VERS0012.
The big data era is coming, and the amount of data increases dramatically in many application fields every day. This thesis mostly focuses on big data transmission strategies for query optimization in Grid infrastructures. Firstly, to improve the efficiency of the Data Grid, we discuss replication strategies at file granularity (the ring and thread replication strategies) and below file granularity (the file-parted replication strategy). We also tackle the data-packet level, using multicast data transfer within a Sensor Grid, which is widely used for in-network query operations. The system comprehensively considers the location factor and the data factor, and combines them in a general weighted vector. In a third stage, we extend our model to account for the energy factor in wireless sensor grids, which corresponds to a three-vector correlation problem; we show that our approach can be extended further to any finite number of factors. The last part deals with the mobile context, i.e. when users and the queried resources are mobile. We propose an extension of semantic-cache-based optimization for such mobile distributed queries. In this context, query optimization depends not only on the cache size and its freshness, but also on the mobility of the user.
Abdelmoula, Mariem. "Génération automatique de jeux de tests avec analyse symbolique des données pour les systèmes embarqués". Thesis, Nice, 2014. http://www.theses.fr/2014NICE4149/document.
One of the biggest challenges in hardware and software design is to ensure that a system is error-free. Small errors in reactive embedded systems can have disastrous and costly consequences for a project. Preventing such errors by identifying the most probable cases of erratic system behavior is quite challenging. Indeed, tests in industry are generally non-exhaustive, while formal verification in scientific research often suffers from the combinatorial explosion problem. In this context we present a new approach for generating exhaustive test sets that combines the underlying principles of industrial testing techniques and academic formal verification. Our approach builds a generic model of the system under test according to the synchronous approach. The goal is to identify the optimal preconditions for restricting the state space of the model, so that test generation can take place on significant subspaces only; all the possible test sets are then generated from the extracted subspace preconditions. Our approach exhibits a simpler and more efficient quasi-flattening algorithm than existing techniques, and a useful compiled internal description to check security properties and reduce the combinatorial explosion of the state space. It also offers a symbolic processing technique for numeric data that yields a more expressive and concrete test of the system. We have implemented our approach in a tool called GAJE. To illustrate our work, this tool was applied to verify an industrial project on contactless smart card security.
Modrzejewski, Richard. "Recalage déformable, jeux de données et protocoles d'évaluation pour la chirurgie mini-invasive abdominale augmentée". Thesis, Université Clermont Auvergne (2017-2020), 2020. http://www.theses.fr/2020CLFAC044.
This thesis deals with deformable registration techniques of preoperative data to the intra-operative scene as an indispensable step in the realisation of augmented reality for abdominal surgery. Such techniques are thus discussed, as well as the evaluation methodologies associated with them. Two contexts are considered: registration for computer-assisted laparoscopic surgery and postural registration of the patient on the operating table. For these two contexts, the needs to be met by the registration algorithms are discussed, as well as the main limitations of existing solutions. Algorithms developed during this thesis to meet these needs are then proposed and discussed, with special attention given to their evaluation. Different datasets allowing a quantitative evaluation of the accuracy of the registration algorithms, also produced during this thesis and made public, are discussed as well. Such data are extremely important because they respond to a lack of evaluation data needed to quantify the registration error and thus to compare the different algorithms. The modeling of the illumination of the laparoscopic scene, which allows one to extract strong constraints between the data to be registered and the surface of the observed organ, and thus to constrain these registration problems, is also discussed. This manuscript has seven parts. The first deals with the context surrounding this thesis: minimally invasive surgery is presented, as well as various general computer vision problems which, when applied to the medical context, define computer-assisted surgery. The second part deals with the prerequisites for reading the thesis: the pre-processing of pre-operative and per-operative data before their use by the presented registration algorithms. The third part corresponds to the registration of hepatic data in laparoscopy and the evaluation associated with this problem. The fourth part deals with the problem of postural registration. The fifth part proposes a modelling of the lighting in laparoscopy which can be used to obtain strong constraints between the observed surface and the laparoscopic images. The sixth part proposes a use of the light models discussed in the previous part in order to refine and densify reconstructions of the laparoscopic scene. Finally, the seventh and last part corresponds to our conclusions regarding the issues addressed during this thesis, and to future work.
Simon, Franck. "Découverte causale sur des jeux de données classiques et temporels. Application à des modèles biologiques". Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS528.
This thesis focuses on the field of causal discovery: the construction of causal graphs from observational data, and in particular temporal causal discovery and the reconstruction of large gene regulatory networks. After a brief history, it introduces the main concepts, hypotheses and theorems underlying causal graphs, as well as the two main approaches: score-based and constraint-based methods. The MIIC (Multivariate Information-based Inductive Causation) method, developed in our laboratory, is then described with its latest improvements: Interpretable MIIC. The issues and solutions involved in constructing a temporal version (tMIIC) are presented, as well as benchmarks reflecting the advantages of tMIIC compared to other state-of-the-art methods. The application to sequences of microscope images of a tumor environment reconstituted on microchips illustrates the ability of tMIIC to recover, solely from data, known and new relationships. Finally, this thesis introduces the use of an a priori on consequences to apply causal discovery to the reconstruction of gene regulatory networks. By assuming that all genes, except transcription factors, are consequence-only genes, it becomes possible to reconstruct graphs with thousands of genes. The ability to identify key transcription factors de novo is illustrated by an application to single-cell RNA sequencing data, with the discovery of two transcription factors likely to be involved in the biological process of interest.
Soler, Maxime. "Réduction et comparaison de structures d'intérêt dans des jeux de données massifs par analyse topologique". Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS364.
In this thesis, we propose different methods, based on topological data analysis, to address modern problems concerning the increasing difficulty in the analysis of scientific data. In the case of scalar data defined on geometrical domains, extracting meaningful knowledge from static data, then time-varying data, then ensembles of time-varying data, proves increasingly challenging. Our approaches for the reduction and analysis of such data are based on the idea of defining structures of interest in scalar fields as topological features. In a first effort to address data volume growth, we propose a new lossy compression scheme which offers strong topological guarantees, allowing topological features to be preserved throughout compression. The approach is shown to yield high compression factors in practice, and extensions are proposed to offer additional control over the geometrical error. We then target time-varying data by designing a new method for tracking topological features over time, based on topological metrics. We extend these metrics to overcome robustness and performance limitations, and propose a new, efficient way to compute them, gaining orders-of-magnitude speedups over state-of-the-art approaches. Finally, we apply and adapt our methods to ensemble data related to reservoir simulation, for modeling viscous fingering in porous media. We show how to capture viscous fingers with topological features, adapt topological metrics to capture discrepancies between simulation runs and a ground truth, evaluate the proposed metrics with feedback from experts, and implement an in-situ ranking framework for rating the fidelity of simulation runs.
Allart, Thibault. "Apprentissage statistique sur données longitudinales de grande taille et applications au design des jeux vidéo". Thesis, Paris, CNAM, 2017. http://www.theses.fr/2017CNAM1136/document.
This thesis focuses on longitudinal time-to-event data, possibly large along three axes: the number of individuals, the observation frequency and the number of covariates. We introduce a penalised estimator based on the Cox complete likelihood with data-driven weights, together with proximal optimization algorithms to efficiently fit the model coefficients. We have implemented these methods in C++ and in the R package coxtv, allowing anyone to analyse datasets bigger than the available RAM by using data streaming and online learning algorithms such as proximal stochastic gradient descent with adaptive learning rates. We illustrate the performance on simulations and benchmark against existing models. Finally, we investigate the issue of video game design. We show that using our model on the large datasets available in the video game industry brings to light ways of improving the design of the studied games. First we look at low-level covariates, such as equipment choices through time, and show that the model can quantify the effect of each game element, giving designers ways to improve the game design. We then show that the model can be used to extract more general design recommendations, such as the influence of difficulty on player motivation.
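The proximal ingredient mentioned here can be sketched generically; the following Python fragment illustrates one proximal stochastic gradient step with an L1 (lasso-type) penalty handled by soft-thresholding, under our own assumptions, and is not the coxtv implementation:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_sgd_step(beta, grad_fn, lam, lr):
    """One step: gradient move on the (negative) partial log-likelihood,
    then prox of the L1 penalty. grad_fn is a placeholder callable that
    returns a stochastic gradient at beta."""
    return soft_threshold(beta - lr * grad_fn(beta), lr * lam)
```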
Chamekh, Rabeb. "Stratégies de jeux pour quelques problèmes inverses". Thesis, Université Côte d'Azur (ComUE), 2019. http://www.theses.fr/2019AZUR4103.
In this PhD thesis, we focus on solving the coupled problem of data completion and parameter identification. The Cauchy problem consists in identifying the boundary condition on one part of the boundary from overabundant data on the remaining part; parameter identification targets a parameter of the system. These two problems are known to be ill-posed in the sense of Hadamard. The thesis is divided into four parts. The first part is dedicated to a bibliographic study. In the second chapter, we apply game theory to the joint resolution of data completion and conductivity identification in electrocardiography. We address the identifiability of the conductivity and show the uniqueness of this parameter using only the Cauchy data on part of the boundary. Our numerical experiments target medical applications in electrocardiography, in two-dimensional and three-dimensional thorax geometries. The third part is dedicated to solving the coupled problem in linear elasticity by applying game theory; a numerical study is carried out in a particular configuration that ensures parameter identifiability. In the last part, we are interested in a thermoelasticity problem, coupling two different disciplines: heat transfer and elasticity. Crack identification is a natural application in this case.
Istiqomah, Istiqomah. "Solides organiques dans les petits corps glacés : approches expérimentales et interprétation des données spectrales issues de mission VIRTIS/Rosetta". Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALU006.
The Rosetta space mission explored comet 67P/Churyumov-Gerasimenko between July 2014 and September 2016. During two years, extensive mappings in the visible and infrared ranges were achieved by the VIRTIS imaging spectrometer (Visible InfraRed Thermal Imaging Spectrometer). This instrument revealed a very dark and reddish surface, which has been interpreted by the presence of a dark carbonaceous material mixed with opaque minerals (presumably Fe-Ni alloys and pyrrhotite). VIRTIS also revealed, for the first time for a comet, a broad band at 3.2 µm. The nature of this band was unclear at the beginning of this thesis, and two main semi-volatile compounds were suspected: ammonium salts and carboxylic acids. In this thesis, we investigated these two hypotheses through laboratory experiments. We first conducted FTIR transmission experiments on pure solid carboxylic acids and ammonium salts. In a second step, we collected reflectance spectra of analogs of the refractory crust. Particular attention was devoted to the production of such analogs, and we developed dedicated grinding and mixing protocols. We found that the most suitable analogs are those produced from the sublimation of ice + refractory + semi-volatile mixtures in a vacuum chamber; they account well for the fine-grained and highly porous cometary material. Our experiments show that the 3.2 µm band in VIRTIS spectra is consistent with the presence of ammonium salts, which are ubiquitous across the surface of the comet. These ammonium salts constitute a new reservoir of nitrogen, which might at least partially account for the missing nitrogen in comets. The abundance of the ammonium salts could, however, not be determined. Our experiments reveal the lack of correlation between the band depth and the ammonium abundance in the samples, indicating that the parameters that control the band depth are not yet elucidated. This result points to the difficult question of the characterization of the porous texture of the sublimation residues and of their complex geometries. The grain size distribution is definitely only one parameter among others, and future studies should focus on this point. Finally, modeling approaches based on Hapke models are not suitable for these dark, semi-volatile-bearing materials, and great care should be taken with the values published so far in the literature.
Laugier, Claire. "Contribution à l'étude des infestations par des petits strongles chez le cheval en Normandie : données épidemiologiques et aspects lésionnels". Montpellier 2, 2002. http://www.theses.fr/2002MON20125.
Texto completoFirmo, Drumond Thalita. "Apports croisées de l'apprentissage hiérarchique et la modélisation du système visuel : catégorisation d'images sur des petits corpus de données". Thesis, Bordeaux, 2020. https://tel.archives-ouvertes.fr/tel-03129189.
Deep convolutional neural networks (DCNN) have recently driven a revolution in large-scale object recognition. They have changed the usual computer vision practice of hand-engineered features, thanks to their ability to hierarchically learn representative features from data with a pertinent classifier. Together with hardware advances, they have made it possible to effectively exploit the ever-growing amounts of image data gathered online. However, in specific domains like healthcare and industrial applications, data is much less abundant and expert labeling costs are higher than those of general-purpose image datasets. This scarcity scenario leads to this thesis' core question: can these limited-data domains profit from the advantages of DCNNs for image classification? This question has been addressed throughout this work, based on an extensive study of the literature, divided into two main parts, followed by proposals of original models and mechanisms. The first part reviews object recognition from an interdisciplinary double viewpoint. First, it seeks to understand the function of vision from a biological stance, comparing and contrasting it with DCNN models in terms of structure, function and capabilities. Second, a state-of-the-art review is established, aiming to identify the main architectural categories and innovations in modern-day DCNNs. This interdisciplinary basis fosters the identification of potential mechanisms, inspired by both biological and artificial structures, that could improve image recognition in difficult situations. Recurrent processing is a clear example: while not completely absent from the "deep vision" literature, it has mostly been applied to videos, due to their inherently sequential nature. From biology, however, it is clear that such processing plays a role in refining our perception of a still scene. This theme is further explored through a dedicated literature review focused on recurrent convolutional architectures used in image classification. The second part carries on in the spirit of improving DCNNs, this time focusing more specifically on our central question: deep learning over small datasets. First, the work proposes a more detailed and precise discussion of the small-sample problem and its relation to learning hierarchical features with deep models. This discussion is followed by a structured view of the field, organizing and discussing the different possible paths towards adapting deep models to limited-data settings. Rather than a raw listing, this review aims to make sense of the myriad of approaches in the field, grouping methods with similar intent or mechanism of action, in order to guide the development of custom solutions for small-data applications. Second, this study is complemented by an experimental analysis, exploring small-data learning with the proposition of original models and mechanisms (previously published as a journal paper). In conclusion, it is possible to apply deep learning to small datasets and obtain good results, if done in a thoughtful fashion. On the data path, one should try to gather more information from additional related data sources, if available. On the complexity path, architectures and training methods can be calibrated in order to profit the most from any available domain-specific side-information. Proposals concerning both of these paths are discussed in detail throughout this document. Overall, while there are multiple ways of reducing the complexity of deep learning with small data samples, there is no universal solution: each method has its own drawbacks and practical difficulties and needs to be tailored specifically to the target perceptual task at hand.
Legtchenko, Sergey. "Adaptation dynamique des architectures réparties pour jeux massivement multijoueurs". Phd thesis, Université Pierre et Marie Curie - Paris VI, 2012. http://tel.archives-ouvertes.fr/tel-00931865.
Roy-Pomerleau, Xavier. "Inférence d'interactions d'ordre supérieur et de complexes simpliciaux à partir de données de présence/absence". Master's thesis, Université Laval, 2020. http://hdl.handle.net/20.500.11794/66994.
Despite the effectiveness of networks for representing complex systems, recent work has shown that their structure sometimes limits the explanatory power of theoretical models, since it only encodes dyadic interactions. If a more complex interaction exists in the system, it is automatically reduced to a group of pairwise, first-order interactions. We thus need structures that can take higher-order interactions into account. However, whether relationships are of higher order or not is rarely explicit in real data sets. This is the case for presence/absence data, which only indicate which species (of animals, plants or others) can be found (or not) on a site, without showing the interactions between them. The goal of this project is to develop an inference method to find higher-order interactions within presence/absence data. Two frameworks are examined. The first is based on comparing the topology of the data, obtained under a non-restrictive hypothesis, with the topology of a random ensemble. The second uses log-linear models and hypothesis testing to infer interactions one by one up to the desired order. From this framework, we have developed several inference methods to generate simplicial complexes (or hypergraphs) that can be studied with the regular tools of network science as well as homology. In order to validate these methods, we have developed a generative model of presence/absence data in which the true interactions are known. Results have also been obtained on real data sets: for instance, from presence/absence data of nesting birds in Québec, we were able to infer co-occurrences of order two.
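As a toy illustration of the second framework's starting point (the sketch and data below are ours, not the thesis'), a first-order co-occurrence can be tested with a 2x2 contingency table before moving to higher orders with log-linear models:

```python
import numpy as np
from scipy.stats import chi2_contingency

presence = np.array([  # rows = sites, columns = species A, B (invented data)
    [1, 1], [1, 1], [1, 0], [0, 0], [1, 1], [0, 0], [1, 1], [0, 1],
])
a, b = presence[:, 0].astype(bool), presence[:, 1].astype(bool)
table = np.array([[np.sum(a & b), np.sum(a & ~b)],
                  [np.sum(~a & b), np.sum(~a & ~b)]])
chi2, pval, _, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={pval:.3f}")  # a small p suggests a pairwise link
```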
Lavallard, Anne. "Exploration interactive d'archives de forums : Le cas des jeux de rôle en ligne". Phd thesis, Université de Caen, 2008. http://tel.archives-ouvertes.fr/tel-00292617.
Teyssière, Gilles. "Processus d'appariements sur le marché du travail : une étude à partir de données d'une agence locale de l'ANPE". Aix-Marseille 2, 1991. http://www.theses.fr/1991AIX24001.
The purpose of this thesis is to determine the factors explaining an employer's hiring decision when meeting a worker through the national employment agency (ANPE). We use as a theoretical framework matching models, which explain the wage level received by the worker through his labour productivity (or his level of education) and the alternative meeting opportunities of the two agents. We adapt these models to a sample of observed meetings and explain the worker's hiring probability with a nested logit model. As explanatory variables we use the individual characteristics of the worker (age, sex, marital status, level of education, past situation in the labour market, etc.), the characteristics of the vacancies (type of labour contract, offered wage, etc.) and the employer's alternative meeting opportunities. Moreover, we explain the employer's hiring behaviour over time with survival models. The estimation results show a segmentation of the labour market on the basis of the workers' level of education: a worker is hired only if his level of education is greater than a level fixed by the employer.
Marjanović, David. "Phylogeny of the limbed vertebrates with special consideration of the origin of the modern amphibians". Paris 6, 2010. http://www.theses.fr/2010PA060690.
Clement, Virginie. "Estimation des paramètres génétiques des petits ruminants en milieu d'élevage traditionnel au Sénégal. Importance de la structure des données et du choix du modèle d'analyse". Paris, Institut national d'agronomie de Paris Grignon, 1999. http://www.theses.fr/1999INAP0031.
Texto completoEdoh-Alové, Djogbénuyè Akpé. "Conception et développement d'un service Web de contexte spatial dédié aux téléphones intelligents dans le cadre de jeux éducatifs interactifs". Master's thesis, Université Laval, 2012. http://hdl.handle.net/20.500.11794/23516.
Currently, with the rise of ubiquitous computing, there is growing interest in using the user's context to adapt applications and their content to users' needs and activities in real time, in different fields (tourism, smart homes, hospitals). Many projects dealing with the definition of context models and management platforms have emerged. In the GeoEduc3D project, exploiting context to improve the immersive and interactive aspects of interactive educational games is to be explored. This particular application framework (serious games, augmented reality, smart phones) is quite different and raises the issue of interoperability, which was not really addressed in previous work. Therefore, our work aims to design and implement a solution dedicated to the acquisition and dissemination of spatial context in a multi-player environment, on and for smart phones. For this purpose, we first propose a definition and modeling of spatial context. Then we define the architecture of a service-oriented system for managing that information. To test our approach, a Web service prototype was developed around three main functions: retrieving information from smart phones, storing data in the database, and flexibly querying data synchronously or asynchronously. This research opens the way for the design and development of context-aware serious game applications for any type of smart phone, in a multiplayer environment.
De, Moliner Anne. "Estimation robuste de courbes de consommation électrique moyennes par sondage pour de petits domaines en présence de valeurs manquantes". Thesis, Bourgogne Franche-Comté, 2017. http://www.theses.fr/2017UBFCK021/document.
In this thesis, we address the problem of robust estimation of mean or total electricity consumption curves by sampling in a finite population, for the entire population and for small areas. We are also interested in estimating mean curves by sampling in the presence of partially missing trajectories. Indeed, many studies carried out at the French electricity company EDF, for marketing or power grid management purposes, are based on the analysis of mean or total electricity consumption curves at a fine time scale, for different groups of clients sharing common characteristics. Because of privacy issues and financial costs, it is not possible to measure the electricity consumption curve of each customer, so these mean curves are estimated using samples. In this thesis, we extend the work of Lardin (2012) on mean curve estimation by sampling, focusing on specific aspects of this problem such as robustness to influential units, small area estimation and estimation in the presence of partially or totally unobserved curves. In order to build robust estimators of mean curves, we adapt the unified approach to robust estimation in finite populations proposed by Beaumont et al. (2013) to the context of functional data. To that purpose we propose three approaches: applying the usual method for real variables to discretised curves; projecting onto functional spherical principal components or a wavelet basis; and functional truncation of conditional biases based on the notion of depth. These methods are tested and compared on real datasets, and mean squared error estimators are also proposed. Secondly, we address the problem of small area estimation for functional means or totals. We introduce three methods: a unit-level linear mixed model applied to the scores of functional principal component analysis or to wavelet coefficients, functional regression, and aggregation of individual curve predictions by functional regression trees or functional random forests. Robust versions of these estimators are then proposed, following the approach to robust estimation based on conditional biases presented before. Finally, we suggest four estimators of mean curves by sampling in the presence of partially or totally unobserved trajectories. The first is a reweighting estimator where the weights are determined using temporal non-parametric kernel smoothing adapted to the context of finite populations and missing data; the others rely on the imputation of missing data. The missing parts of the curves are determined either by using the smoothing estimator presented before, by nearest-neighbour imputation adapted to functional data, or by a variant of linear interpolation which takes into account the mean trajectory of the entire sample. Variance approximations are proposed for each method, and all the estimators are compared on real datasets for various missing-data scenarios.
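As an illustration of the last family of estimators, here is a minimal sketch of an interpolation-style imputation that accounts for the mean trajectory of the sample; the anchoring scheme is our own reading of the idea, not the thesis' exact variant:

```python
import numpy as np

def impute_segment(curve, mean_curve, start, end):
    """Fill the missing segment curve[start:end] using the sample mean
    trajectory, shifted so it matches the observed values at both ends
    (curve[start-1] and curve[end] are assumed observed)."""
    seg = np.arange(start, end)
    offset_left = curve[start - 1] - mean_curve[start - 1]
    offset_right = curve[end] - mean_curve[end]
    w = (seg - (start - 1)) / (end - (start - 1))  # linear blend of both offsets
    curve[seg] = mean_curve[seg] + (1 - w) * offset_left + w * offset_right
    return curve
```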
Dari, Bekara Kheira. "Protection des données personnelles côté utilisateur dans le e-commerce". Phd thesis, Institut National des Télécommunications, 2012. http://tel.archives-ouvertes.fr/tel-00923175.
Mareuil, Fabien. "DaDiModO un algorithme génétique pour l'étude de protéines à domaines à l'aide de données de RMN et de SAXS : application à la protéine ribosomale S1 d'Escherichia Coli". Paris 7, 2008. http://www.theses.fr/2008PA077191.
To increase our knowledge of the biological properties of macromolecules, especially proteins, it is necessary to know their three-dimensional structures. About one thousand different domains are sufficient to build most proteins, and it is estimated that half of these domain structures have been determined (Koonin et al. 2002). Eventually, it will be possible to obtain close models of protein domain structures; however, the information concerning the relative position of the domains will still be missing. Hence, having a tool that finds the relative position of domains using easily obtained experimental data is a major issue. For that purpose, we have developed an algorithm that uses NMR and SAXS data to position the domains of a multi-domain protein. The main advantage of this tool is that it leaves the user free to choose the deformability of the domains. We validated our method on two test cases, showing that when the definition of the domains is accurate enough and the experimental data are of fairly good quality, our program can approach the structural solution with an error of less than 1 Å. We then applied our method to the structural study of two fragments of the ribosomal protein S1, which is composed of six repetitions of the S1 domain. This study focused on the fragments made of domains 3-4 and 4-5. The structure of domain 4 was determined by NMR; domains 3 and 5 were obtained by homology modelling. Our study allowed us to validate a biologically relevant model of the fragment 3-5.
Tremblay, Maxime. "Vision numérique avec peu d'étiquettes : segmentation d'objets et analyse de l'impact de la pluie". Doctoral thesis, Université Laval, 2021. http://hdl.handle.net/20.500.11794/69039.
Soler, Julien. "Orion, a generic model for data mining : application to video games". Thesis, Brest, 2015. http://www.theses.fr/2015BRES0035/document.
The video game industry's needs are constantly changing. In the field of artificial intelligence, we identify, in chapter 1, the different needs of the industry in this area. We believe that a functional and efficient solution for learning behavior by imitation would cover most of these needs. In chapter 2, we show that data mining techniques can be very useful in providing such a solution; however, for now, these techniques are not sufficient to automatically build a comprehensive behavior that would be usable in modern video games. In chapter 3, we propose a generic model for learning behavior by imitating human players: Orion. This model consists of two parts, a structural model and a behavioral model. The structural model provides a general data mining framework, offering an abstraction of the different methods used in this research. This framework allows us to build a general-purpose tool with better visualization possibilities than existing data mining tools. The behavioral model is designed to integrate data mining techniques into a more general architecture and is based on behavior trees. In chapter 4, we illustrate how we use our model by implementing the behavior of players in the Pong and Unreal Tournament 3 games using Orion. In chapter 5, we identify possible improvements, both of our data mining framework and of our behavioral model.
Allesiardo, Robin. "Bandits Manchots sur Flux de Données Non Stationnaires". Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLS334/document.
The multi-armed bandit is a framework allowing the study of the trade-off between exploration and exploitation under partial feedback. At each turn t ∈ [1,T] of the game, a player has to choose an arm k_t among a set of K arms, and receives a reward y_{k_t} drawn from a reward distribution D(μ_{k_t}) of mean μ_{k_t} and support [0,1]. This is a challenging problem, as the player only knows the reward associated with the played arm and does not know what the reward would have been had she played another arm. Before each play, she is confronted with the dilemma between exploration and exploitation: exploring increases the confidence of the reward estimators, while exploiting increases the cumulative reward by playing the empirically best arm (under the assumption that the empirically best arm is indeed the actual best arm). In the first part of the thesis, we tackle the multi-armed bandit problem when reward distributions are non-stationary: first the case where, even if reward distributions change during the game, the best arm stays the same, then the case where the best arm changes during the game. The second part of the thesis tackles the contextual bandit problem, where the means of the reward distributions now depend on the environment's current state. We study the use of neural networks and random forests in the case of contextual bandits, and then propose a meta-bandit-based approach for selecting online the best-performing expert during its learning.
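For context, a standard baseline for the non-stationary setting described here is UCB computed on a sliding window, so that old observations are forgotten when distributions drift; the sketch below illustrates that baseline and is not the thesis' own algorithm:

```python
import math
import random
from collections import deque

def sw_ucb(pull, K, T, window=500):
    """Sliding-window UCB: play for T rounds, keeping only the last
    `window` (arm, reward) observations. `pull(k)` returns a reward in [0,1]."""
    hist = deque()
    for t in range(1, T + 1):
        counts, sums = [0] * K, [0.0] * K
        for arm, r in hist:
            counts[arm] += 1
            sums[arm] += r
        def score(k):
            if counts[k] == 0:
                return float("inf")      # force initial exploration
            return sums[k] / counts[k] + math.sqrt(2 * math.log(min(t, window)) / counts[k])
        k = max(range(K), key=score)     # optimistic choice
        hist.append((k, pull(k)))
        if len(hist) > window:
            hist.popleft()               # forget the oldest observation

if __name__ == "__main__":
    means = [0.2, 0.5, 0.8]              # toy stationary example
    sw_ucb(lambda k: float(random.random() < means[k]), K=3, T=2000)
```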
Payrastre, Olivier. "Faisabilité et utilité du recueil de données historiques pour l'étude des crues extrêmes de petits cours d'eau - Etude du cas de quatre bassins versants affluents de l'Aude". Phd thesis, Ecole des Ponts ParisTech, 2005. http://pastel.archives-ouvertes.fr/pastel-00001792.
Payrastre, Olivier Renaud. "Faisabilité et utilité du recueil de données historiques pour l'étude des crues extrêmes de petits cours d'eau : étude du cas de quatre bassins versants affluents de l'Aude". Marne-la-vallée, ENPC, 2005. http://www.theses.fr/2005ENPC0033.
Texto completoOuni, Marwa. "Problèmes inverses en mécanique des fluides résolus par des stratégies de jeux". Thesis, Université Côte d'Azur, 2021. http://theses.univ-cotedazur.fr/2021COAZ4021.
This thesis studies the ability of game-theoretic approaches to deal with ill-posed problems. The first part is dedicated to the linear problem of the Stokes system, with the goal of detecting unknown geometric inclusions or pointwise sources in a stationary viscous fluid, using a single compatible pair of Dirichlet and Neumann data available only on a partially accessible part of the boundary. Inverse geometric or source identification for the Cauchy-Stokes problem is severely ill-posed (in the sense of Hadamard), for both the inclusions or sources and the missing-data reconstructions, and designing stable and efficient algorithms is challenging. To solve the joint completion/detection problem, we reformulate it as a three-player Nash game. The first two players aim at recovering the missing data (Dirichlet and Neumann conditions prescribed over the inaccessible boundary), while the third player seeks to identify the shape and location of the inclusions (in Chapter 2) or to determine the source term (in Chapter 3). We then introduce new algorithms dedicated to computing the Nash equilibria, which are expected to approximate the solutions of the original coupled problems. We present different numerical experiments to illustrate the efficiency and robustness of our three-player Nash game strategy. The extension of this work to another situation, the identification of small objects, is carried out in Chapter 4. The second purpose of this thesis is to extend these results to the case of quasi-Newtonian fluid flows, whose viscosity is assumed to be a nonlinear function of the imposed rate of deformation. The considered problem is then of nonlinear Cauchy type, because of the non-linearity of the viscosity function. Two different iterative procedures, control-type and Nash game algorithms, are considered to solve it. From a computational point of view, the non-linearity requires dedicated algorithms; we propose a novel one-shot algorithm to solve the nonlinear state equations during the recovery process, a different idea for treating nonlinear Cauchy problems. Numerical experiments demonstrate our algorithm's efficiency in the noise-free and noisy-data cases, and a comparison between the one-shot scheme and the fixed-point method is performed. Finally, we introduce an algorithm, based on the game-theoretic approach, to jointly recover the missing boundary data and the location and shape of the inclusions for nonlinear Stokes models.
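Schematically (notation ours, not taken from the thesis), the two data-completion players solve a coupled pair of minimisation problems whose Nash equilibrium is sought:

```latex
u^{\star} \in \arg\min_{u}\, J_{1}\!\left(u, v^{\star}\right),
\qquad
v^{\star} \in \arg\min_{v}\, J_{2}\!\left(u^{\star}, v\right)
```

where J_1 and J_2 measure the misfit of the Dirichlet and Neumann data respectively; the third player, minimising a shape or source criterion, extends the game to the joint identification problem.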
Nicol, Olivier. "Data-driven evaluation of contextual bandit algorithms and applications to dynamic recommendation". Thesis, Lille 1, 2014. http://www.theses.fr/2014LIL10211/document.
The context of this thesis work is dynamic recommendation. Recommendation is the action, for an intelligent system, of supplying a user of an application with personalized content so as to enhance what is referred to as the "user experience", e.g. recommending a product on a merchant website or an article on a blog. Recommendation is considered dynamic when the content to recommend or the users' tastes evolve rapidly, e.g. in news recommendation. Many applications of interest to us generate a tremendous amount of data through their millions of online users. Nevertheless, using this data to evaluate a new recommendation technique, or even to compare two dynamic recommendation algorithms, is far from trivial. This is the problem we consider here. Some approaches have already been proposed, but they were not studied very thoroughly, either from a theoretical point of view (unquantified bias, loose convergence bounds...) or from an empirical one (experiments on private data only). In this work we start by filling many blanks in the theoretical analysis. Then we comment on the results of an experiment of unprecedented scale in this area: a public challenge we organized. This challenge, along with some complementary experiments, revealed an unexpected source of huge bias: time acceleration. The rest of this work tackles this issue. We show that a bootstrap-based approach makes it possible to significantly reduce this bias and, more importantly, to control it.
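The classical data-driven evaluation method in this setting is the replay (rejection-sampling) estimator, whose bias under time acceleration is precisely what this work analyses; the sketch below (field names and the uniform-logging assumption are ours) shows the estimator itself:

```python
def replay_evaluate(policy, log, n_actions):
    """Offline replay evaluation of a contextual bandit policy on logged
    data (context, logged_action, reward) collected by a uniformly random
    logger. Only events where the policy agrees with the log are kept,
    and the policy is updated online on those events."""
    total, kept = 0.0, 0
    for context, logged_action, reward in log:
        if policy.choose(context) == logged_action:   # rejection sampling
            policy.update(context, logged_action, reward)
            total += reward
            kept += 1
    return total / max(kept, 1)                       # estimated per-step reward
```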
Megel, Cyrille. "Petits ARN non codants dérivant d’ARN de transfert et endoribonucléases impliquées dans leur biogenèse chez Arabidopsis thaliana". Thesis, Strasbourg, 2016. http://www.theses.fr/2016STRAJ104/document.
Among the small ncRNAs, tRNA-derived RNA fragments (tRFs) have been identified in all domains of life. However, few data report on plant tRFs. Short tRFs were retrieved from A. thaliana small RNA libraries (various tissues, plants submitted to abiotic stress, or argonaute-immunoprecipitated fractions). Mainly tRF-5D and tRF-3T (cleavage in the D or T region, respectively) were found, and fluctuations in the tRF population were observed. Using in vitro approaches, A. thaliana RNase T2 endoribonucleases (RNS) were shown to cleave tRNAs in the anticodon region but also in the D and T regions. Through a comprehensive study of RNS expression, we show that two RNS are also strongly expressed in siliques at a late stage of development; we therefore analyzed the tRF population at this particular developmental stage. Upon phosphate starvation, we also demonstrate the involvement of one RNS in the production of tRFs in planta. Altogether, our data open new perspectives for RNS and tRFs as major actors of gene expression in plants.
Gomez, José Raul. "Un cadre d'évaluation systématique pour les outils d'intégration de systèmes d'information". Mémoire, Université de Sherbrooke, 2011. http://savoirs.usherbrooke.ca/handle/11143/1642.
Texto completoDari, Bekara Kheira. "Protection des données personnelles côté utilisateur dans le e-commerce". Electronic Thesis or Diss., Evry, Institut national des télécommunications, 2012. http://www.theses.fr/2012TELE0045.
Informatics, and the Internet in particular, largely favor the collection of data without user permission, their disclosure to third parties and their cross-analysis. The density of human activities in the digital world thus constitutes fertile ground for potential invasions of users' privacy. Our work first examines the legal context of privacy protection, as well as the diverse computing means intended for the protection of personal data. A need for user-centered solutions emerges, giving users more control over their personal data. In this perspective, we analyze European and French privacy legislation to extract data protection axes. Then we specify the constraints related to these axes and introduce them into existing security policy models, suggesting the application of one model for both access control and privacy protection. The access control model is extended with new privacy-related conditions and parameters. To do so, we define the language XPACML (eXtensible Privacy-aware Access Control Markup Language), based on XACML and new privacy extensions. Placed in an e-commerce context, we define a semantic model for representing various electronic transaction contexts, leading to the dynamic generation of context-aware XPACML policies. Seeking broader protection of personal data, we dedicate the last part of our work to the negotiations that can take place between a user and a service provider. Two protocols are proposed: the first permits the negotiation of the terms and conditions of data protection policies, while the second permits the negotiation of the requested data themselves.
Chebaicheb, Hasna. "Etude de la composition chimique des particules fines et des sources d'aérosol organique sur différents sites en France à partir de jeux de données pluriannuels à haute résolution temporelle". Electronic Thesis or Diss., Ecole nationale supérieure Mines-Télécom Lille Douai, 2023. http://www.theses.fr/2023MTLD0006.
Texto completo
Considering the major climatic and health impacts of fine particulate matter, this work studies its chemical composition at 13 French sites from 2015 to 2021. Organic aerosols (OA) predominate, with increases in winter (residential heating emissions) and summer (formation of secondary organic aerosols). Ammonium nitrate, also a secondary pollutant from combustion and agriculture, dominates during springtime pollution episodes, particularly in the north. The main sources of OA are traffic emissions and biomass combustion; others are site-specific (cooking activities, industry, ship emissions). Oxygenated factors dominate OA, suggesting aging and secondary formation processes. These results can guide policies aimed at improving air quality, help improve model accuracy, and inform future epidemiological studies.
Royer, Kevin. "Vers un entrepôt de données et des processus : le cas de la mobilité électrique chez EDF". Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2015. http://www.theses.fr/2015ESMA0001/document.
Texto completo
Nowadays, the electric vehicle (EV) market is undergoing rapid expansion and has become of great importance for utility companies such as EDF. In order to fulfill its objectives (demand optimization, pricing, etc.), EDF has to extract and analyze heterogeneous data from EVs and charging spots. To tackle this, we used data warehousing (DW) technology serving as a basis for business processes (BP). To avoid the garbage in/garbage out phenomenon, data had to be formatted and standardized. We chose to rely on an ontology in order to deal with the heterogeneity of data sources. Because the construction of an ontology can be a slow process, we proposed a modular and incremental construction of the ontology based on bricks. We based our DW on the ontology, which makes its construction an incremental process as well. To upload data to this particular DW, we defined the ETL (Extract, Transform & Load) process at the semantic level. We then designed recurrent BP with BPMN (Business Process Model and Notation) specifications to extract the knowledge EDF requires. The assembled DW possesses data and BP that are both described in a semantic context. We implemented our solution on the OntoDB platform, developed at the ISAE-ENSMA Laboratory of Computer Science and Automatic Control for Systems. The solution allowed us to homogeneously manipulate the ontology, the data and the BP through the OntoQL language. Furthermore, we added to the proposed platform the capacity to automatically execute any BP described with BPMN. Ultimately, we were able to provide EDF with a tailor-made platform based on declarative elements adapted to their needs.
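As a rough illustration of what "defining the ETL process at the semantic level" can mean, the hypothetical sketch below maps heterogeneous source fields onto shared ontology properties (with unit normalization) before loading. The property names and conversions are invented and are not EDF's or OntoDB's actual vocabulary.

```python
# Toy semantic ETL: normalize heterogeneous source records onto a
# shared vocabulary before loading (all names are illustrative).
ONTOLOGY_MAP = {
    # source field -> (ontology property, unit converter)
    "soc_percent":  ("StateOfCharge", lambda v: v / 100.0),
    "charge_level": ("StateOfCharge", lambda v: v),
    "kwh":          ("EnergyDelivered", lambda v: v),
    "wh":           ("EnergyDelivered", lambda v: v / 1000.0),
}

def extract_transform(record):
    """Map one raw record to ontology-level facts, dropping unknown fields."""
    facts = {}
    for field, value in record.items():
        if field in ONTOLOGY_MAP:
            prop, convert = ONTOLOGY_MAP[field]
            facts[prop] = convert(value)
    return facts

rows = [{"soc_percent": 80, "kwh": 7.2}, {"charge_level": 0.55, "wh": 3100}]
print([extract_transform(r) for r in rows])
# Both rows land on the same properties: StateOfCharge, EnergyDelivered.
```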
Gagne, Christophe. "Les interactions verbales en France et en Grande-Bretagne : étude comparative de quatre petits commerces français et britanniques". Thesis, Lyon 2, 2014. http://www.theses.fr/2014LYO20051/document.
Texto completo
This thesis, which is of a contrastive and intercultural nature, is informed by the idea that it is by observing the behaviour of interactants in everyday interactions that the relationship between cultures can best be approached, and the specificity of the forms of behaviour encountered best explored. Through the careful and detailed analysis of recordings made in four different shops (French and British), the study aims to understand the linguistic behaviour of the participants by linking it to various contextual elements (micro-contextual elements: the discursive material surrounding the utterances analysed; situational elements: site layout, number of participants, the interaction's purpose; macro-contextual elements: the status of service encounters and of the types of shops selected, and the cultural values that underpin the behaviour explored). The purpose of the study (which analyses opening and closing rituals; thanking; the way directive speech acts such as questions, offers and requests are performed; and conversational sequences) is to provide a better understanding of the communicative styles that can be associated with French and British cultures.
Nisse, Nicolas. "Complexité algorithmique: entre structure et connaissance. Comment les jeux de poursuite peuvent apporter des solutions". Habilitation à diriger des recherches, Université Nice Sophia Antipolis, 2014. http://tel.archives-ouvertes.fr/tel-00998854.
Texto completoDufourny, Sylvain. "Optimisation de décisions économiques concurrentielles dans un simulateur de gestion d’entreprise". Thesis, Lille 1, 2017. http://www.theses.fr/2017LIL10092/document.
Texto completo
Digital technologies are becoming increasingly popular in teaching and learning processes, and new educational practices are revolutionizing training standards. For example, the "gamification" of curricula has become a current trend: it allows learners to be exercised differently, through games. Business management simulations, also known as business games, fall within this context. They place learners at the head of virtual companies and simulate a competitive market. The deployment of this practice nevertheless encounters some operational difficulties: group size, teacher training... It is in this context that we envisage the implementation of autonomous agents to accompany the learners or act as competitors. To do this, we first propose a model of a company based on mixed-integer linear programs, allowing optimization of the companies' internal departments (production, delivery, finance). Second, we introduce a local heuristic search that generates efficient solutions in a given economic and competitive environment. Third, following a knowledge-extraction phase, we propose the definition and construction of anticipation trees that predict the competitive decisions of the engaged protagonists, and thus make it possible to estimate the quality of the solutions built. To validate the proposed approaches, we compared them with the real behaviors of players and evaluated the contribution of the extracted knowledge. Finally, we proposed a framework allowing the method to be generalized to other business games.
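As a toy of the kind of optimization involved for a single internal department, the sketch below solves a two-product production plan as a linear program with scipy. The integrality constraints of a true mixed-integer model are relaxed here, and all coefficients are invented.

```python
from scipy.optimize import linprog

# Toy production plan for a virtual company: maximize margin on two
# products subject to machine hours and budget (all numbers invented).
# linprog minimizes, so the margins are negated.
margins = [-40.0, -30.0]            # profit per unit of product A, B
A_ub = [[2.0, 1.0],                 # machine hours per unit
        [30.0, 20.0]]               # production cost per unit
b_ub = [100.0,                      # available machine hours
        1800.0]                     # production budget
res = linprog(margins, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
print(res.x, -res.fun)              # optimal quantities and total margin
```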
Fargevieille, Amélie. "Sélection sexuelle et évolution des ornements femelles : une étude de la coloration du plumage femelle utilisant des analyses comparatives et des jeux de données à long terme issus de populations de mésange bleue (Cyanistes caeruleus)". Thesis, Montpellier, 2016. http://www.theses.fr/2016MONTT127/document.
Texto completo
Ornamental traits are classically associated with males in animal species. This asymmetrical view is related to sex roles, in which males compete (intra-sexual selection) to attract females, who choose the best mate (inter-sexual selection). The idea was developed with the concept of anisogamy, the asymmetry in the production of male and female gametes. Females, producing few but large gametes, maximize their offspring's survival rate by investing more in parental care; they become the limiting sex and choose males, which are thus competing for access to reproduction. Any ornamental trait increasing pairing success would then become advantageous for males, leading to more developed secondary sexual traits in this sex. While ornamental traits are more frequent in males, there are also many examples in females, especially in socially monogamous species with biparental care. Evolutionary biologists have only recently started to test the processes explaining the emergence and maintenance of female ornaments. Genetic correlation is an unquestionable process involved in this evolution, and social selection is also a major process. Several empirical studies have related male mate choice to female ornaments, and theoretical models have defined key parameters driving the evolution of male mate choice. Furthermore, phylogenetic studies retracing the evolution of ornaments have shown high lability in female traits, with more frequent gains and losses of ornamental traits in females than in males. In order to link sexual selection to the evolution of female ornaments, this thesis built on these previous achievements to develop different approaches to better understand the role of sexual selection in the evolution and maintenance of female colouration. Comparative methods in songbirds tested the key parameters defined by theoretical models as driving the evolution of male mate choice. In line with these models, the results highlight the importance of male investment in parental care in the evolution of female plumage colouration; they also show how females' initial investment in reproduction limits this evolution. Another axis of the thesis focused on colouration in a monogamous species, the blue tit Cyanistes caeruleus, using a large dataset spanning 10 years and four populations, and tested in particular (i) the strength of genetic correlation, (ii) the relations between proxies of reproductive success and colouration, and (iii) the existence of assortative mating in this species. The main results highlight a strong genetic correlation and wide spatiotemporal variation, and meta-analyses revealed correlations between female colouration and proxies of reproductive success, as well as a weak but positive pattern of assortative mating on the two measured patches (crown and chest). Both parts of the thesis offer new insights into the evolution of female ornaments. They also highlight the complexity associated with this evolution and the importance of considering spatiotemporal variation for extensive understanding and generalisation.
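The reported "weak but positive assortative mating" is, at bottom, a correlation between mates' ornament values. A minimal sketch on invented data (the effect size and variable names are illustrative, not the thesis's):

```python
import numpy as np

rng = np.random.default_rng(42)
# Invented ornament scores (e.g., chest colouration) for 200 breeding pairs.
female = rng.normal(size=200)
male = 0.1 * female + rng.normal(size=200)   # weak positive pairing pattern

r = np.corrcoef(female, male)[0, 1]
print(f"assortative mating coefficient r = {r:.3f}")  # weakly positive
```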
Pavaux, Alice. "Inductive, Functional and Non-Linear Types in Ludics". Thesis, Sorbonne Paris Cité, 2017. http://www.theses.fr/2017USPCD092.
Texto completo
This thesis investigates the types of ludics. Within the context of the Curry–Howard correspondence, ludics is a framework in which the dynamic aspects of both logic and programming can be studied. The basic objects, called designs, are untyped infinitary proofs that can also be seen as strategies from the perspective of game semantics, and a type, or behaviour, is a set of designs well-behaved with respect to interaction. We are interested in observing the interactive properties of behaviours. Our attention is particularly focused on behaviours representing the types of data and functions, and on non-linear behaviours, which allow the duplication of objects. A new internal completeness result for infinite unions unveils the structure of inductive data types. Thanks to an analysis of the visitable paths, i.e., the possible execution traces, we prove that inductive and functional behaviours are regular, paving the way for a characterisation of MALL in ludics. We also show that a functional behaviour is pure, a property ensuring the safety of typing, if and only if it is not a type of functions taking functions as arguments. Finally, we lay the foundations for a precise study of non-linearity in ludics by recovering a form of internal completeness and discussing the visitable paths.
Mathonat, Romain. "Rule discovery in labeled sequential data : Application to game analytics". Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEI080.
Texto completo
It is extremely useful to exploit labeled datasets not only to learn models and perform predictive analytics but also to improve our understanding of a domain and its targeted classes. The subgroup discovery task has been studied for more than two decades. It concerns the discovery of rules covering sets of objects having interesting properties, e.g., rules that characterize a given target class. Though many subgroup discovery algorithms have been proposed for both transactional and numerical data, discovering rules within labeled sequential data has been much less studied. In that context, exhaustive exploration strategies cannot be used for real-life applications and we have to look for heuristic approaches. In this thesis, we propose to apply bandit models and Monte Carlo Tree Search to explore the search space of possible rules using an exploration-exploitation trade-off, on different data types such as sequences of itemsets or time series. For a given budget, they find a collection of top-k best rules in the search space w.r.t. a chosen quality measure. They require a light configuration and are independent of the quality measure used for pattern scoring. To the best of our knowledge, this is the first time that the Monte Carlo Tree Search framework has been exploited in a sequential data mining setting. We have conducted thorough and comprehensive evaluations of our algorithms on several datasets to illustrate their added value, and we discuss their qualitative and quantitative results. To assess the added value of one of our algorithms, we propose a use case in game analytics, more precisely Rocket League match analysis. Discovering interesting rules in the sequences of actions performed by players, and using them in a supervised classification model, shows the efficiency and relevance of our approach in the difficult and realistic context of high-dimensional data. It supports the automatic discovery of skills and can be used to create new game modes, to improve the ranking system, to help e-sport commentators, or to better analyse opponent teams, for example.
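To make the exploration-exploitation trade-off concrete, here is a deliberately simplified bandit-style sketch: each candidate rule is an arm, each "pull" evaluates its quality on a random sample, and UCB scores decide which rule to try next. The rule names and quality function are invented, and this flat bandit stands in for the tree-structured Monte Carlo search over rule refinements actually studied in the thesis.

```python
import math
import random

def ucb_rule_search(rules, quality_on_sample, budget=1000, seed=0):
    """Bandit-style search over candidate rules: each pull evaluates a
    rule's quality on a random data sample; UCB balances exploring
    rules and exploiting the good ones."""
    rng = random.Random(seed)
    counts = {r: 0 for r in rules}
    sums = {r: 0.0 for r in rules}
    for t in range(1, budget + 1):
        def ucb(r):
            if counts[r] == 0:
                return float("inf")        # try every rule at least once
            return sums[r] / counts[r] + math.sqrt(2 * math.log(t) / counts[r])
        best = max(rules, key=ucb)
        sums[best] += quality_on_sample(best, rng)
        counts[best] += 1
    return max(rules, key=lambda r: sums[r] / max(counts[r], 1))

rules = ["boost_then_shoot", "aerial_hit", "defensive_clear"]
true_q = {"boost_then_shoot": 0.7, "aerial_hit": 0.5, "defensive_clear": 0.3}
noisy = lambda r, rng: true_q[r] + rng.uniform(-0.2, 0.2)
print(ucb_rule_search(rules, noisy))  # usually "boost_then_shoot"
```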
Chakraborty, Kaushik. "Cryptography with spacetime constraints". Electronic Thesis or Diss., Paris 6, 2017. http://www.theses.fr/2017PA066194.
Texto completo
In this thesis we have studied how to exploit relativistic constraints such as the non-superluminal signalling principle to design secure cryptographic primitives like position verification and bit commitment. According to the non-superluminal signalling principle, no physical carrier of information can travel faster than the speed of light. This puts a constraint on the communication time between two distant stations; one can consider this delay in information transfer as a temporal non-communication constraint. Cryptographic primitives like bit commitment and oblivious transfer can be implemented with perfect secrecy under such a non-communication assumption between the agents. The first part of this thesis studies how non-signalling constraints can be used for secure position verification; here, we discuss a strategy that can attack any position verification scheme. The next part discusses nonlocal games, relevant for studying relativistic bit commitment protocols, and establishes an upper bound on the classical value of such a family of games. The last part discusses two relativistic bit commitment protocols and their security against classical adversaries. We conclude by giving a brief summary of each chapter and mentioning interesting open problems, which can be very useful for a better understanding of the role of spacetime constraints such as non-superluminal signalling in designing perfectly secure cryptographic primitives.
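The timing constraint underlying these protocols is elementary physics: a response that arrives faster than light could have covered the verifier-prover distance must be rejected. A toy check, with invented coordinates (this illustrates only the non-superluminal bound, not any specific protocol from the thesis):

```python
C = 299_792_458.0  # speed of light, m/s

def plausible_response(distance_m, round_trip_s, processing_s=0.0):
    """A response is acceptable only if the round trip is at least the
    light-travel time there and back, plus any claimed processing time."""
    return round_trip_s >= 2 * distance_m / C + processing_s

# A prover claiming to be 300 km away cannot answer in under ~2 ms.
print(plausible_response(300_000, 1.5e-3))  # False: faster than light
print(plausible_response(300_000, 2.5e-3))  # True
```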
Mansuy, Mathieu. "Aide au tolérancement tridimensionnel : modèle des domaines". Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00734713.
Texto completo
Schertzer, Jérémie. "Exploiting modern GPUs architecture for real-time rendering of massive line sets". Electronic Thesis or Diss., Institut polytechnique de Paris, 2022. http://www.theses.fr/2022IPPAT037.
Texto completo
In this thesis, we consider massive line sets generated from brain tractograms. These describe neural connections and are represented with millions of poly-line fibers, summing up to billions of segments. Thanks to the two-stage mesh shader pipeline, we build a tractogram renderer surpassing state-of-the-art performance by two orders of magnitude. Our performance comes from fiblets: a compressed representation of segment blocks. By combining temporal coherence and morphological dilation on the z-buffer, we define a fast occlusion culling test for fiblets. Thanks to our heavily optimized parallel decompression algorithm, surviving fiblets are swiftly expanded into poly-lines. We also showcase how our fiblet pipeline speeds up advanced tractogram interaction features. For the general case of line rendering, we propose morphological marching: a screen-space technique rendering custom-width tubes from the thin rasterized lines of the G-buffer. By approximating a tube as the union of spheres densely distributed along its axis, the sphere shading each pixel is retrieved using a multi-pass neighborhood propagation filter. Accelerated by the compute pipeline, we reach real-time performance for the rendering of depth-dependent wide lines. To conclude our work, we implement a virtual reality prototype combining fiblets and morphological marching. It makes possible for the first time the immersive visualization of huge tractograms with fast shading of thick fibers, thus paving the way for diverse perspectives.
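The fiblet format itself is tied to GPU memory layout and the mesh shader pipeline, so the sketch below only illustrates one plausible compression intuition behind such a representation: storing a fiber as a start point plus small quantized per-segment deltas. The quantization step, sizes, and names are all invented; this is not the actual fiblet encoding.

```python
import numpy as np

STEP = 1e-4  # quantization step in scene units (an invented parameter)

def compress_polyline(points):
    """Store a fiber as a float32 start point plus int8-quantized deltas.
    Lossy, and only a stand-in for the fiblet idea, not its actual layout."""
    pts = np.asarray(points, dtype=np.float32)
    deltas = np.diff(pts, axis=0) / STEP
    return pts[0], np.clip(np.round(deltas), -127, 127).astype(np.int8)

def decompress_polyline(start, q):
    steps = np.cumsum(q.astype(np.float32) * STEP, axis=0)
    return np.vstack([start, start + steps])

rng = np.random.default_rng(0)
fiber = np.cumsum(rng.normal(0.0, 0.003, size=(100, 3)), axis=0)
start, q = compress_polyline(fiber)
ratio = q.nbytes / fiber.astype(np.float32).nbytes
print(f"compressed to {ratio:.0%} of raw float32 xyz")  # roughly 25%
```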