Ready-made bibliography on the topic "Génération de données synthétiques"
Create accurate references in APA, MLA, Chicago, Harvard, and many other citation styles
Contents
See lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Génération de données synthétiques".
Next to every work in the bibliography there is an "Add to bibliography" button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of a publication as a ".pdf" file and read its abstract online, whenever the metadata provide the relevant details.
Journal articles on the topic "Génération de données synthétiques"
Corriger, J., and J. Goret. "Développement et évaluation d’approches algorithmiques et par GAN pour la génération de données synthétiques en Allergologie". Revue Française d'Allergologie 64 (April 2024): 103893. http://dx.doi.org/10.1016/j.reval.2024.103893.
Chahnazarian, Anouch. "Hausse récente de la fécondité en Haïti : un nouvel engouement pour la vie en union?" Population 47, no. 3 (1 March 1992): 583–616. http://dx.doi.org/10.3917/popu.p1992.47n3.0616.
Bogardii, J. J., and L. Duckstein. "Evénements de période sèche en pays semi-aride". Revue des sciences de l'eau 6, no. 1 (12 April 2005): 23–46. http://dx.doi.org/10.7202/705164ar.
RICARD, F. H., G. MARCHE, and E. LE BIHAN-DUVAL. "Essai d’amélioration par sélection de la qualité de carcasse du poulet de chair". INRAE Productions Animales 7, no. 4 (27 September 1994): 253–61. http://dx.doi.org/10.20870/productions-animales.1994.7.4.4173.
Llamas, J., R. Fernandez, and A. Calvache. "Génération de séries synthétiques de débit". Canadian Journal of Civil Engineering 14, no. 6 (1 December 1987): 795–806. http://dx.doi.org/10.1139/l87-118.
ESSALMANI, R., S. SOULIER, N. BESNARD, M. HUDRISIER, J. COSTA DA SILVA, and J. L. VILOTTE. "Données de base sur la transgenèse". INRAE Productions Animales 13, HS (22 December 2000): 181–86. http://dx.doi.org/10.20870/productions-animales.2000.13.hs.3835.
Fleury, Charles. "La génération X au Québec : une génération sacrifiée ?" Recherche 49, no. 3 (5 February 2009): 475–99. http://dx.doi.org/10.7202/019877ar.
Blackburn, Heidi. "La prochaine génération d’employés". Documentation et bibliothèques 63, no. 1 (28 February 2017): 48–60. http://dx.doi.org/10.7202/1039073ar.
CHATELLIER, Vincent, Christophe PERROT, Emmanuel BEGUIN, Marc MORAINE, and Patrick VEYSSET. "Compétitivité et emplois à la production dans les secteurs bovins français". INRAE Productions Animales 33, no. 4 (6 April 2021): 261–82. http://dx.doi.org/10.20870/productions-animales.2020.33.4.4609.
Street, María Constanza, and Benoît Laplante. "Pas plus élevée, mais après la migration ! Fécondité, immigration et calendrier de constitution de la famille". Articles 43, no. 1 (4 June 2014): 35–68. http://dx.doi.org/10.7202/1025490ar.
Pełny tekst źródłaRozprawy doktorskie na temat "Génération de données synthétiques"
Kieu, Van Cuong. "Modèle de dégradation d’images de documents anciens pour la génération de données semi-synthétiques". Thesis, La Rochelle, 2014. http://www.theses.fr/2014LAROS029/document.
Over the last two decades, the growth of document image digitization projects has spurred intense research into document image processing and analysis algorithms (handwriting recognition, document structure analysis, spotting and indexing/retrieval of graphical elements, etc.). Many successful algorithms are based on learning (supervised, semi-supervised, or unsupervised). To train such algorithms and compare their performance, the document image analysis community needs many publicly available annotated document image databases, whose contents must be exhaustive enough to be representative of the possible variations in the documents to process or analyze. Creating real document image databases requires an automatic or a manual annotation process. The performance of an automatic annotation process is proportional to the quality and completeness of these databases, so annotation remains largely manual; yet the manual process is complicated, subjective, and tedious. To overcome these difficulties, several crowd-sourcing initiatives have been proposed, some of them modelled as games to be more attractive. Such processes significantly reduce the cost and subjectivity of annotation, but difficulties remain: transcription and text-line alignment, for example, still have to be carried out manually. Since the 1990s, alternative approaches have therefore been proposed, including the generation of semi-synthetic document images mimicking real ones. Semi-synthetic document image generation allows benchmarking databases to be created rapidly and cheaply, both for evaluating the performance of document processing and analysis algorithms and for training them.
In the context of the DIGIDOC project (Document Image diGitisation with Interactive DescriptiOn Capability), funded by the ANR (Agence Nationale de la Recherche), we focus on semi-synthetic document image generation adapted to ancient documents. First, we investigate new degradation models, or adapt existing ones, to ancient documents: a bleed-through model, a distortion model, a character degradation model, etc. Second, we apply these degradation models to generate semi-synthetic document image databases for performance evaluation (e.g. the ICDAR 2013 and GREC 2013 competitions) or for performance improvement (by re-training a handwriting recognition system, a segmentation system, and a binarisation system). This work has created many opportunities to collaborate with other researchers and share our experimental results with the scientific community. This collaborative work also helps us validate our degradation models and demonstrate the effectiveness of semi-synthetic document images for performance evaluation and re-training.
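The kinds of degradation models listed in this abstract (bleed-through, distortion, character degradation) can be caricatured in a few lines. The sketch below is a toy, Kanungo-style edge-flipping model written for this summary, assuming NumPy; it is not the thesis code, and `degrade` and its parameters are invented names.

```python
import numpy as np

def degrade(binary_img, flip_prob_edge=0.3, seed=0):
    # Toy Kanungo-style noise: flip pixels that sit on an ink/background
    # boundary with probability `flip_prob_edge`, imitating ink fading
    # and bleed on degraded documents. Illustrative sketch only.
    rng = np.random.default_rng(seed)
    img = binary_img.astype(bool)
    padded = np.pad(img, 1, mode="edge")
    # 4-neighbourhood stack: up, down, left, right
    neigh = np.stack([padded[:-2, 1:-1], padded[2:, 1:-1],
                      padded[1:-1, :-2], padded[1:-1, 2:]])
    edge = (neigh != img).any(axis=0)          # boundary pixels
    flips = edge & (rng.random(img.shape) < flip_prob_edge)
    return img ^ flips

# A tiny 5x5 "glyph": a filled 3x3 square of ink (True = ink).
glyph = np.zeros((5, 5), dtype=bool)
glyph[1:4, 1:4] = True
noisy = degrade(glyph)
```

Real degradation models also simulate ink diffusion, paper texture, and geometric warping; the point here is only that noise is applied preferentially along ink/background boundaries.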
Desbois-Bédard, Laurence. "Génération de données synthétiques pour des variables continues : étude de différentes méthodes utilisant les copules". Master's thesis, Université Laval, 2017. http://hdl.handle.net/20.500.11794/27748.
Statistical agencies face a growing demand to release microdata to the public. To this end, many techniques have been proposed for publishing microdata while preserving confidentiality, synthetic data generation among them. This thesis focuses on that technique, presenting two existing methods, GADP and C-GADP, and proposing a third based on vine copula models. GADP assumes that the variables of the original and synthetic data are normally distributed, while C-GADP assumes that they follow a Gaussian copula distribution. Vine copula models are proposed for their flexibility. The three methods are then assessed in terms of utility and risk. Data utility depends on maintaining certain similarities between the original and the confidential data, while risk comes in two forms: re-identification and inference. This work focuses on utility, examined with several analysis-specific measures and a global measure based on propensity scores, and on the risk of inference, evaluated with a distance-based prediction.
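As a rough illustration of the copula idea behind these methods, the sketch below fits a plain Gaussian copula, far simpler than the vine-copula models studied in the thesis, and resamples synthetic rows through the empirical marginals. SciPy is assumed, and the function name is invented for the example.

```python
import numpy as np
from scipy import stats

def gaussian_copula_synthesize(data, n_synth, seed=0):
    # Fit: empirical CDF per column -> normal scores -> correlation matrix.
    # Sample: correlated normals -> uniforms -> empirical quantile lookup.
    rng = np.random.default_rng(seed)
    n, d = data.shape
    ranks = np.argsort(np.argsort(data, axis=0), axis=0) + 1
    z = stats.norm.ppf(ranks / (n + 1))        # normal scores
    corr = np.corrcoef(z, rowvar=False)        # copula correlation
    z_new = rng.multivariate_normal(np.zeros(d), corr, size=n_synth)
    u_new = stats.norm.cdf(z_new)
    return np.column_stack(
        [np.quantile(data[:, j], u_new[:, j]) for j in range(d)]
    )

# Two correlated continuous variables, then a same-size synthetic copy.
rng = np.random.default_rng(1)
x = rng.normal(size=500)
orig = np.column_stack([x, 2.0 * x + rng.normal(size=500)])
synth = gaussian_copula_synthesize(orig, 500)
```

The synthetic rows preserve the rank-correlation structure and the marginal shapes of the original data, which is the utility criterion the thesis evaluates, while containing no original record verbatim.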
Uzan, Kathy. "Les vaccins synthétiques : données récentes". Paris 5, 1989. http://www.theses.fr/1989PA05P188.
Pełny tekst źródłaBarrère, Killian. "Architectures de Transformer légères pour la reconnaissance de textes manuscrits anciens". Electronic Thesis or Diss., Rennes, INSA, 2023. http://www.theses.fr/2023ISAR0017.
Transformer architectures deliver low error rates but are challenging to train, because annotated data for handwritten text recognition is scarce. We propose lightweight Transformer architectures adapted to the limited amounts of annotated handwritten text available. We introduce a fast Transformer architecture built around an encoder, able to process up to 60 pages per second, and architectures using a Transformer decoder to incorporate language modeling into character recognition. To train our architectures effectively, we offer algorithms for generating synthetic data adapted to the visual style of modern and historical documents. Finally, we propose strategies for learning with limited data and reducing prediction errors. Our architectures, combined with synthetic data and these strategies, achieve competitive error rates on lines of text from modern documents. For historical documents, they train effectively with minimal annotated data, surpassing state-of-the-art approaches; remarkably, just 500 annotated lines are sufficient to reach character error rates close to 5%.
Ruiz, Paredes Javier Antonio. "Génération d'accélérogrammes synthétiques large-bande par modélisation cinématique de la rupture sismique". Paris, Institut de physique du globe, 2007. http://www.theses.fr/2007GLOB0009.
To make broadband kinematic rupture modeling more realistic with respect to dynamic modeling, physical constraints are added to the rupture parameters. To improve the modeling of the slip velocity function (SVF), an evolution of the k⁻² source model is proposed, which consists in decomposing the slip as a sum of sub-events by bands of k. This model yields SVFs close to the solution proposed by Kostrov for a crack, while preserving the spectral characteristics of the radiated wavefield, i.e. an ω² model with high-frequency spectral amplitudes scaled by the directivity coefficient Cd. To better control directivity effects, a composite source description is combined with a scaling law defining the extent of the nucleation area for each sub-event. The resulting model reduces the apparent directivity coefficient to a fraction of Cd and reproduces the standard deviation of the new empirical attenuation relationships proposed for Japan. To make source models more realistic, a variable rupture velocity consistent with the physics of rupture must be considered. The approach followed, based on an analytical relation between fracture energy, slip, and rupture velocity, leads to higher peak ground acceleration values in the vicinity of the fault. Finally, to better account for the interaction of the wavefield with the geological medium, a semi-empirical methodology combining a composite source model with empirical Green's functions is developed and applied to the Yamaguchi Mw 5.9 earthquake. The modeled synthetics satisfactorily reproduce the main observed characteristics of ground motions.
Pazat, Jean-Louis. "Génération de code réparti par distribution de données". Habilitation à diriger des recherches, Université Rennes 1, 1997. http://tel.archives-ouvertes.fr/tel-00170867.
Pełny tekst źródłaBaez, miranda Belen. "Génération de récits à partir de données ambiantes". Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM049/document.
Stories are a communication tool that allows people to make sense of the world around them; they are a platform for understanding and sharing culture, knowledge, and identity. A story carries a series of real or imaginary events that provoke a feeling or a reaction, or even trigger an action. For this reason, stories have become a subject of interest for fields beyond Literature (Education, Marketing, Psychology, etc.) that seek to achieve particular goals through them (persuade, reflect, learn, etc.). However, stories remain underexplored in Computer Science. Existing work focuses on their analysis and automatic production, but those algorithms and implementations are constrained to imitating the creative process behind literary texts from textual sources. There is no approach that automatically produces stories in which 1) the source consists of raw material captured in real life and 2) the content projects a perspective that seeks to convey a particular message. Working with raw data is increasingly relevant, as the amount produced grows exponentially every day through the use of connected devices. In this Big Data context, we present an approach to automatically generate stories from ambient data. The objective of this work is to bring out the lived experience of a person from the data produced during a human activity. Any field that uses such raw data could benefit from this work, for example Education or Health. It is an interdisciplinary effort that combines Natural Language Processing, Narratology, Cognitive Science, and Human-Computer Interaction. The approach is based on corpora and models and includes the formalization of what we call the activity récit, as well as an adapted generation method. It consists of four stages: formalization of the activity récit, corpus constitution, construction of models of the activity and of the récit, and text generation.
Each stage has been designed to overcome constraints raised by the scientific questions involved, given the nature of the objective: handling uncertain and incomplete data, building abstractions valid for the activity, and constructing models from which the reality captured in the data can be transposed to a subjective perspective and rendered in natural language. We used the activity récit as a case study, since practitioners use connected devices and need to share their experience. The results obtained are encouraging and open up many prospects for research.
Morisse, Pierre. "Correction de données de séquençage de troisième génération". Thesis, Normandie, 2019. http://www.theses.fr/2019NORMR043/document.
The aims of this thesis fall within the broad field of high-throughput sequencing data analysis. More specifically, it deals with long reads produced by third-generation sequencing technologies, focusing mainly on error correction and on its impact on downstream analyses such as de novo assembly. As a first step, one objective of this thesis is to evaluate and compare the quality of the error correction provided by state-of-the-art tools, whether they employ a hybrid strategy (using complementary short reads) or self-correction (relying only on the information contained in the long reads themselves). Such an evaluation makes it easy to identify which method is best suited to a given case, according to genome complexity, sequencing depth, or read error rate. Moreover, developers can identify the limiting factors of existing methods, guiding their work and suggesting new solutions to overcome these limitations. A new evaluation tool was thus developed, providing a wide variety of metrics compared with the only tool previously available. This tool combines a multiple sequence alignment approach with a segmentation strategy, drastically reducing evaluation runtime. With its help, we present a benchmark of all state-of-the-art error correction methods on various datasets from several organisms, spanning from the bacterium A. baylyi to human. This benchmark revealed two major limiting factors of existing tools: reads with error rates above 30%, and reads longer than 50,000 base pairs. The second objective of this thesis is therefore the correction of highly noisy long reads, for which a hybrid error correction tool combining different state-of-the-art strategies was developed.
More precisely, this tool combines a short-read alignment strategy with a variable-order de Bruijn graph. The graph is used to link the aligned short reads and thus correct the regions of the long reads they do not cover. This method can process reads with error rates as high as 44%, scales better to large genomes, and reduces correction runtime compared with the most efficient state-of-the-art tools. Finally, the third objective of this thesis is the correction of extremely long reads. To this end, a self-correction tool was developed, again combining different state-of-the-art methodologies: an overlapping strategy and a two-phase error correction process using multiple sequence alignment and local de Bruijn graphs. To let this method scale to extremely long reads, the aforementioned segmentation strategy was generalized. This self-correction method can process reads of up to 340,000 base pairs and scales very well to complex organisms such as the human genome.
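The bridging idea behind such hybrid correctors can be conveyed with a deliberately simplified sketch: a fixed-order de Bruijn graph built from short reads, and a depth-first search that re-spells a noisy long-read region as a path between two solid anchors. The variable-order graph and alignment machinery of the actual method are omitted; function names and toy reads are invented for the example.

```python
from collections import defaultdict

def build_dbg(reads, k):
    # Fixed-order de Bruijn graph: nodes are (k-1)-mers, one edge per
    # observed k-mer. (Simplified stand-in for a variable-order graph.)
    graph = defaultdict(set)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].add(kmer[1:])
    return graph

def bridge(graph, start, end, max_len=50):
    # Depth-first search for a sequence spelling a path between two
    # "solid" anchors, used to re-spell a noisy long-read region.
    stack = [(start, start)]
    while stack:
        node, seq = stack.pop()
        if node == end:
            return seq
        if len(seq) > max_len:
            continue
        for nxt in graph[node]:
            stack.append((nxt, seq + nxt[-1]))
    return None

# Toy short reads bridging the anchors "ACG" and "GTT".
short_reads = ["ACGTAC", "GTACGT", "TACGTT"]
g = build_dbg(short_reads, k=4)
path = bridge(g, "ACG", "GTT")
```

In a real corrector the anchors would be k-mers confirmed by short-read alignments on either side of a low-coverage region, and the graph order would adapt locally to coverage.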
Fontin, Mickaël. "Contribution à la génération de séries synthétiques de pluies, de débits et de températures". Toulouse, INPT, 1987. http://www.theses.fr/1987INPT117H.
Pełny tekst źródłaKhalili, Malika. "Nouvelle approche de génération multi-site des données climatiques". Mémoire, École de technologie supérieure, 2007. http://espace.etsmtl.ca/580/1/KHALILI_Malika.pdf.
Pełny tekst źródłaKsiążki na temat "Génération de données synthétiques"
Ganzinger, Harald, and Neil D. Jones, eds. Programs as Data Objects: Proceedings of a Workshop, Copenhagen, Denmark, October 17–19, 1985 (Lecture Notes in Computer Science). Springer, 1986.
Book chapters on the topic "Génération de données synthétiques"
"Outils de génération automatique des modèles". In Conception de bases de données avec UML, 447–504. Presses de l'Université du Québec, 2007. http://dx.doi.org/10.2307/j.ctv18pgv5t.10.
ESLAMI, Yasamin, Mario LEZOCHE, and Philippe THOMAS. "Big Data Analytics et machine learning pour les systèmes industriels cyber-physiques". In Digitalisation et contrôle des systèmes industriels cyber-physiques, 175–95. ISTE Group, 2023. http://dx.doi.org/10.51926/iste.9085.ch9.
Mouchabac, Stéphane. "Antipsychotiques de seconde génération dans la dépression résistante : données cliniques". In Dépressions Difficiles, Dépressions Résistantes, 17–23. Elsevier, 2013. http://dx.doi.org/10.1016/b978-2-294-73727-5.00003-9.
VOLK, Rebekka. "Modéliser le bâti immobilier existant : planification et gestion de la déconstruction". In Le BIM, nouvel art de construire, 157–79. ISTE Group, 2023. http://dx.doi.org/10.51926/iste.9110.ch7.
KARLIS, Dimitris, and Katerina ORFANOGIANNAKI. "Modèles de régression de Markov pour les séries chronologiques de comptage des séismes". In Méthodes et modèles statistiques pour la sismogenèse, 165–80. ISTE Group, 2023. http://dx.doi.org/10.51926/iste.9037.ch6.
"Les données de l’enquête sont représentatives de la diversité des engagements et des porteurs d’engagement". In Rapport de la Génération Égalité sur la redevabilité 2022, 14. United Nations, 2023. http://dx.doi.org/10.18356/9789210021906c007.
Conference abstracts on the topic "Génération de données synthétiques"
Devictor, Nicolas. "Quatrième génération : situation internationale". In Données nucléaires : avancées et défis à relever. Les Ulis, France: EDP Sciences, 2014. http://dx.doi.org/10.1051/jtsfen/2014donra02.
RIBEIRO DOS SANTOS, Daniel, Anne JULIEN-VERGONJANNE, and Johann BOUCLÉ. "Cellules Solaires pour les Télécommunications et la Récupération d’Énergie". In Les journées de l'interdisciplinarité 2022. Limoges: Université de Limoges, 2022. http://dx.doi.org/10.25965/lji.661.
Organizational reports on the topic "Génération de données synthétiques"
Hicks, Jacqueline, Alamoussa Dioma, Marina Apgar, and Fatoumata Keita. Premiers résultats d'une évaluation de recherche-action systémique au Kangaba, Mali. Institute of Development Studies, April 2024. http://dx.doi.org/10.19088/ids.2024.019.