Academic literature on the topic 'Génération de données synthétiques'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Génération de données synthétiques.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Génération de données synthétiques"
Corriger, J., and J. Goret. "Développement et évaluation d’approches algorithmiques et par GAN pour la génération de données synthétiques en Allergologie." Revue Française d'Allergologie 64 (April 2024): 103893. http://dx.doi.org/10.1016/j.reval.2024.103893.
Chahnazarian, Anouch. "Hausse récente de la fécondité en Haïti : un nouvel engouement pour la vie en union?" Population 47, no. 3 (March 1, 1992): 583–616. http://dx.doi.org/10.3917/popu.p1992.47n3.0616.
Bogardii, J. J., and L. Duckstein. "Evénements de période sèche en pays semi-aride." Revue des sciences de l'eau 6, no. 1 (April 12, 2005): 23–46. http://dx.doi.org/10.7202/705164ar.
RICARD, F. H., G. MARCHE, and E. LE BIHAN-DUVAL. "Essai d’amélioration par sélection de la qualité de carcasse du poulet de chair." INRAE Productions Animales 7, no. 4 (September 27, 1994): 253–61. http://dx.doi.org/10.20870/productions-animales.1994.7.4.4173.
Llamas, J., R. Fernandez, and A. Calvache. "Génération de séries synthétiques de débit." Canadian Journal of Civil Engineering 14, no. 6 (December 1, 1987): 795–806. http://dx.doi.org/10.1139/l87-118.
ESSALMANI, R., S. SOULIER, N. BESNARD, M. HUDRISIER, J. COSTA DA SILVA, and J. L. VILOTTE. "Données de base sur la transgenèse." INRAE Productions Animales 13, HS (December 22, 2000): 181–86. http://dx.doi.org/10.20870/productions-animales.2000.13.hs.3835.
Fleury, Charles. "La génération X au Québec : une génération sacrifiée ?" Recherche 49, no. 3 (February 5, 2009): 475–99. http://dx.doi.org/10.7202/019877ar.
Blackburn, Heidi. "La prochaine génération d’employés." Documentation et bibliothèques 63, no. 1 (February 28, 2017): 48–60. http://dx.doi.org/10.7202/1039073ar.
CHATELLIER, Vincent, Christophe PERROT, Emmanuel BEGUIN, Marc MORAINE, and Patrick VEYSSET. "Compétitivité et emplois à la production dans les secteurs bovins français." INRAE Productions Animales 33, no. 4 (April 6, 2021): 261–82. http://dx.doi.org/10.20870/productions-animales.2020.33.4.4609.
Street, María Constanza, and Benoît Laplante. "Pas plus élevée, mais après la migration ! Fécondité, immigration et calendrier de constitution de la famille." Articles 43, no. 1 (June 4, 2014): 35–68. http://dx.doi.org/10.7202/1025490ar.
Full textDissertations / Theses on the topic "Génération de données synthétiques"
Kieu, Van Cuong. "Modèle de dégradation d’images de documents anciens pour la génération de données semi-synthétiques." Thesis, La Rochelle, 2014. http://www.theses.fr/2014LAROS029/document.
In the last two decades, the increase in document image digitization projects has spurred intense scientific activity around document image processing and analysis algorithms (handwriting recognition, document structure analysis, spotting and indexing/retrieval of graphical elements, etc.). A number of successful algorithms are based on learning (supervised, semi-supervised or unsupervised). In order to train such algorithms and to compare their performances, the document image analysis community needs many publicly available annotated document image databases. Their contents must be exhaustive enough to be representative of the possible variations in the documents to process or analyze. To create real document image databases, one needs an automatic or a manual annotation process. The performance of an automatic annotation process is proportional to the quality and completeness of these databases, and therefore annotation remains largely manual. The manual process, however, is complicated, subjective, and tedious. To overcome such difficulties, several crowd-sourcing initiatives have been proposed, some of them modelled as games to be more attractive. Such processes significantly reduce the cost and subjectivity of annotation, but difficulties still exist. For example, transcription and text-line alignment have to be carried out manually. Since the 1990s, alternative document image generation approaches have been proposed, including the generation of semi-synthetic document images mimicking real ones. Semi-synthetic document image generation allows benchmarking databases to be created rapidly and cheaply for evaluating the performance of, and training, document processing and analysis algorithms. In the context of the DIGIDOC project (Document Image diGitisation with Interactive DescriptiOn Capability) funded by the ANR (Agence Nationale de la Recherche), we focus on semi-synthetic document image generation adapted to ancient documents. First, we investigate new degradation models or adapt existing ones to ancient documents, such as bleed-through, distortion, and character degradation models. Second, we apply these degradation models to generate semi-synthetic document image databases for performance evaluation (e.g., the ICDAR2013 and GREC2013 competitions) or for performance improvement (by re-training a handwriting recognition system, a segmentation system, and a binarisation system). This research work opens many collaboration opportunities with other researchers to share our experimental results with the scientific community. This collaborative work also helps us validate our degradation models and demonstrate the efficiency of semi-synthetic document images for performance evaluation and re-training.
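As a concrete aside, the following is a minimal, illustrative sketch of a bleed-through-style degradation in the spirit of the models mentioned above, using NumPy and Pillow. It is a toy stand-in under stated assumptions (grayscale recto/verso images of equal size, hypothetical file names), not the thesis's actual models.

```python
# Toy bleed-through degradation: blend a mirrored verso page into the recto.
# Assumes grayscale images of equal size; file names are hypothetical.
import numpy as np
from PIL import Image

def bleed_through(recto_path: str, verso_path: str, alpha: float = 0.25) -> Image.Image:
    """Mimic ink bleed-through by mixing a mirrored verso into the recto."""
    recto = np.asarray(Image.open(recto_path).convert("L"), dtype=np.float32)
    verso = np.asarray(Image.open(verso_path).convert("L"), dtype=np.float32)
    verso = verso[:, ::-1]  # mirror horizontally: verso ink shows through reversed
    out = (1.0 - alpha) * recto + alpha * verso  # darker verso pixels darken the recto
    return Image.fromarray(np.clip(out, 0.0, 255.0).astype(np.uint8))

if __name__ == "__main__":
    bleed_through("recto.png", "verso.png").save("degraded.png")
```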
Desbois-Bédard, Laurence. "Génération de données synthétiques pour des variables continues : étude de différentes méthodes utilisant les copules." Master's thesis, Université Laval, 2017. http://hdl.handle.net/20.500.11794/27748.
Statistical agencies face a growing demand for releasing microdata to the public. To this end, many techniques have been proposed for publishing microdata while providing confidentiality: synthetic data generation in particular. This thesis focuses on that technique by presenting two existing methods, GADP and C-GADP, as well as suggesting one based on vine copula models. GADP assumes that the variables of the original and synthetic data are normally distributed, while C-GADP assumes that they follow a normal copula distribution. Vine copula models are proposed for their flexibility. These three methods are then assessed in terms of utility and risk. Data utility depends on maintaining certain similarities between the original and confidential data, while risk comes in two types: reidentification and inference. This work focuses on utility, examined with different analysis-specific measures and a global measure based on propensity scores, and on the risk of inference, evaluated with a distance-based prediction.
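For readers curious about the mechanics, here is a minimal sketch of Gaussian-copula synthetic data generation in the spirit of C-GADP as summarized above, using NumPy and SciPy. It is a simplified illustration under the assumption of continuous variables, not the exact procedures studied in the thesis.

```python
# Sketch: synthesize data by modeling marginals empirically and the
# dependence structure with a Gaussian copula. Assumes continuous variables.
import numpy as np
from scipy import stats

def synthesize(data: np.ndarray, n_synth: int, seed=None) -> np.ndarray:
    """data: (n, d) array of continuous variables. Returns (n_synth, d) synthetic rows."""
    rng = np.random.default_rng(seed)
    n, d = data.shape
    # 1. Map each variable to normal scores via its empirical ranks.
    ranks = np.argsort(np.argsort(data, axis=0), axis=0) + 1
    z = stats.norm.ppf(ranks / (n + 1))
    # 2. Estimate the dependence structure as the correlation of the normal scores.
    corr = np.corrcoef(z, rowvar=False)
    # 3. Sample new normal scores with the same correlation.
    z_new = rng.multivariate_normal(np.zeros(d), corr, size=n_synth)
    # 4. Back-transform through each empirical marginal (inverse CDF by quantiles).
    u = stats.norm.cdf(z_new)
    return np.column_stack([np.quantile(data[:, j], u[:, j]) for j in range(d)])

# usage: synth = synthesize(original_data, n_synth=1000, seed=42)
```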
Uzan, Kathy. "Les vaccins synthétiques : données récentes." Paris 5, 1989. http://www.theses.fr/1989PA05P188.
Barrère, Killian. "Architectures de Transformer légères pour la reconnaissance de textes manuscrits anciens." Electronic Thesis or Diss., Rennes, INSA, 2023. http://www.theses.fr/2023ISAR0017.
Transformer architectures deliver low error rates but are challenging to train due to the limited annotated data available in handwritten text recognition. We propose lightweight Transformer architectures adapted to the limited amounts of annotated handwritten text available. We introduce a fast encoder-based Transformer architecture, processing up to 60 pages per second. We also present architectures using a Transformer decoder to incorporate language modeling into character recognition. To train our architectures effectively, we offer algorithms for generating synthetic data adapted to the visual style of modern and historical documents. Finally, we propose strategies for learning with limited data and reducing prediction errors. Our architectures, combined with synthetic data and these strategies, achieve competitive error rates on lines of text from modern documents. For historical documents, they train effectively with minimal annotated data, surpassing state-of-the-art approaches. Remarkably, just 500 annotated lines are sufficient to reach character error rates close to 5%.
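As a rough illustration of what "lightweight" can mean here, the sketch below builds a small encoder-only Transformer for text-line images with a CTC-style output head in PyTorch. All dimensions are illustrative assumptions and positional encoding is omitted for brevity; this is not the architecture proposed in the thesis.

```python
# Hypothetical lightweight encoder-only Transformer for text-line recognition.
import torch
import torch.nn as nn

class LightHTREncoder(nn.Module):
    def __init__(self, n_chars: int, d_model: int = 128, n_heads: int = 4, n_layers: int = 4):
        super().__init__()
        # Small CNN stem: turns a (B, 1, H, W) line image into a (B, W', d_model) sequence.
        self.stem = nn.Sequential(
            nn.Conv2d(1, d_model // 2, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(d_model // 2, d_model, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, None)),  # collapse the height dimension
        )
        # NOTE: positional encoding omitted for brevity; a real model needs one.
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_chars + 1)  # +1 for the CTC blank symbol

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.stem(x).squeeze(2).transpose(1, 2)  # (B, W', d_model)
        return self.head(self.encoder(f))            # (B, W', n_chars + 1) logits

# usage: logits = LightHTREncoder(n_chars=100)(torch.randn(2, 1, 64, 512))
```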
Ruiz, Paredes Javier Antonio. "Génération d'accélérogrammes synthétiques large-bande par modélisation cinématique de la rupture sismique." Paris, Institut de physique du globe, 2007. http://www.theses.fr/2007GLOB0009.
In order to make broadband kinematic rupture modeling more realistic with respect to dynamic modeling, physical constraints are added to the rupture parameters. To improve the slip velocity function (SVF) modeling, an evolution of the k-2 source model is proposed, which consists in decomposing the slip as a sum of sub-events by bands of k. This model yields SVFs close to the solution proposed by Kostrov for a crack, while preserving the spectral characteristics of the radiated wavefield, i.e., an ω² model with spectral amplitudes at high frequency scaled to the coefficient of directivity Cd. To better control directivity effects, a composite source description is combined with a scaling law defining the extent of the nucleation area for each sub-event. The resulting model allows the apparent coefficient of directivity to be reduced to a fraction of Cd, and reproduces the standard deviation of the new empirical attenuation relationships proposed for Japan. To make source models more realistic, a variable rupture velocity in agreement with the physics of the rupture must be considered. The approach followed, based on an analytical relation between fracture energy, slip, and rupture velocity, leads to higher values of peak ground acceleration in the vicinity of the fault. Finally, to better account for the interaction of the wavefield with the geological medium, a semi-empirical methodology is developed combining a composite source model with empirical Green's functions, and is applied to the Yamaguchi Mw 5.9 earthquake. The modeled synthetics satisfactorily reproduce the main observed characteristics of the ground motions.
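For background, the ω² spectral shape invoked above is the standard Brune-type far-field displacement spectrum, flat at a low-frequency plateau Ω₀ (proportional to seismic moment) and decaying as f⁻² above the corner frequency f_c:

```latex
% Brune-type omega-squared far-field displacement amplitude spectrum.
\[
  |u(f)| \;=\; \frac{\Omega_0}{1 + \left(f / f_c\right)^{2}}
\]
```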
Pazat, Jean-Louis. "Génération de code réparti par distribution de données." Habilitation à diriger des recherches, Université Rennes 1, 1997. http://tel.archives-ouvertes.fr/tel-00170867.
Baez Miranda, Belen. "Génération de récits à partir de données ambiantes." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM049/document.
Stories are a communication tool that allows people to make sense of the world around them. They represent a platform for understanding and sharing one's culture, knowledge and identity. Stories carry a series of real or imaginary events, causing a feeling or a reaction, or even triggering an action. For this reason, they have become a subject of interest for different fields beyond Literature (Education, Marketing, Psychology, etc.) that seek to achieve a particular goal through them (persuade, reflect, learn, etc.). However, stories remain underdeveloped in Computer Science. There are works that focus on their analysis and automatic production, but those algorithms and implementations remain constrained to imitating the creative process behind literary texts from textual sources. Thus, there are no approaches that automatically produce stories in which 1) the source consists of raw material captured in real life and 2) the content projects a perspective that seeks to convey a particular message. Working with raw data is becoming relevant today as it increases exponentially each day through the use of connected devices. Given the context of Big Data, we present an approach to automatically generate stories from ambient data. The objective of this work is to bring out the lived experience of a person from the data produced during a human activity. Any area that uses such raw data could benefit from this work, for example Education or Health. It is an interdisciplinary effort that includes Natural Language Processing, Narratology, Cognitive Science and Human-Computer Interaction. This approach is based on corpora and models and includes the formalization of what we call the activity récit as well as an adapted generation approach. It consists of four stages: formalization of the activity récit, corpus constitution, construction of models of the activity and the récit, and text generation. Each stage has been designed to overcome constraints related to the scientific questions raised, given the nature of the objective: manipulation of uncertain and incomplete data, valid abstraction according to the activity, and construction of models from which it is possible to transpose the reality collected through the data into a subjective perspective rendered in natural language. We used the activity récit as a case study, as practitioners use connected devices and thus need to share their experience. The results obtained are encouraging and open up many prospects for research.
Morisse, Pierre. "Correction de données de séquençage de troisième génération." Thesis, Normandie, 2019. http://www.theses.fr/2019NORMR043/document.
The aims of this thesis are part of the vast problematic of high-throughput sequencing data analysis. More specifically, this thesis deals with long reads from third-generation sequencing technologies. The aspects tackled mainly focus on error correction and on its impact on downstream analyses such as de novo assembly. As a first step, one objective of this thesis is to evaluate and compare the quality of the error correction provided by state-of-the-art tools, whether they employ a hybrid strategy (using complementary short reads) or a self-correction strategy (relying only on the information contained in the long read sequences). Such an evaluation makes it easy to identify which method is best tailored for a given case, according to the genome complexity, the sequencing depth, or the error rate of the reads. Moreover, developers can thus identify the limiting factors of existing methods, in order to guide their work and propose new solutions overcoming these limitations. A new evaluation tool was thus developed, providing a wide variety of metrics compared to the only tool previously available. This tool combines a multiple sequence alignment approach and a segmentation strategy, allowing the evaluation runtime to be drastically reduced. With the help of this tool, we present a benchmark of all the state-of-the-art error correction methods on various datasets from several organisms, spanning from the A. baylyi bacterium to the human. This benchmark revealed two major limiting factors of the existing tools: reads displaying error rates above 30%, and reads reaching more than 50,000 base pairs. The second objective of this thesis is thus the error correction of highly noisy long reads. To this aim, a hybrid error correction tool combining different strategies from the state of the art was developed, in order to overcome the limiting factors of existing methods. More precisely, this tool combines a short-read alignment strategy with the use of a variable-order de Bruijn graph. This graph is used to link the aligned short reads and thus correct the uncovered regions of the long reads. This method can process reads displaying error rates as high as 44% and scales better to larger genomes, while reducing the error correction runtime compared to the most efficient state-of-the-art tools. Finally, the third objective of this thesis is the error correction of extremely long reads. To this aim, a self-correction tool was developed by combining, once again, different methodologies from the state of the art. More precisely, an overlapping strategy and a two-phase error correction process, using multiple sequence alignment and local de Bruijn graphs, are employed. To allow this method to scale to extremely long reads, the aforementioned segmentation strategy was generalized. This self-correction method can process reads reaching up to 340,000 base pairs and scales very well to complex organisms such as the human genome.
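To make the graph structure concrete, below is a toy fixed-order de Bruijn graph with a naive bridging search, assuming error-free short reads; the tool described above relies on a far more elaborate variable-order graph, so this is only a didactic sketch.

```python
# Toy de Bruijn graph: bridge the gap between two anchor (k-1)-mers
# using overlaps found in short reads. Assumes error-free reads.
from collections import defaultdict

def build_dbg(short_reads, k=5):
    """Map each (k-1)-mer to the set of (k-1)-mers that follow it in some read."""
    graph = defaultdict(set)
    for read in short_reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].add(kmer[1:])
    return graph

def bridge(graph, source, target, max_len=50):
    """Depth-first search for a sequence of overlapping (k-1)-mers linking two anchors."""
    stack = [(source, source)]
    while stack:
        node, seq = stack.pop()
        if node == target:
            return seq  # reconstructed sequence spanning the gap
        if len(seq) < max_len:
            for nxt in graph[node]:
                stack.append((nxt, seq + nxt[-1]))
    return None

# usage: bridge(build_dbg(["ACGTACGGT", "CGGTAA"]), "ACGT", "GTAA") -> "ACGTACGGTAA"
```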
Fontin, Mickaël. "Contribution à la génération de séries synthétiques de pluies, de débits et de températures." Toulouse, INPT, 1987. http://www.theses.fr/1987INPT117H.
Khalili, Malika. "Nouvelle approche de génération multi-site des données climatiques." Mémoire, École de technologie supérieure, 2007. http://espace.etsmtl.ca/580/1/KHALILI_Malika.pdf.
Full textBooks on the topic "Génération de données synthétiques"
Ganzinger, Harald, and Neil D. Jones, eds. Programs as Data Objects: Proceedings of a Workshop, Copenhagen, Denmark, October 17–19, 1985 (Lecture Notes in Computer Science). Springer, 1986.
Book chapters on the topic "Génération de données synthétiques"
"Outils de génération automatique des modèles." In Conception de bases de données avec UML, 447–504. Presses de l'Université du Québec, 2007. http://dx.doi.org/10.2307/j.ctv18pgv5t.10.
ESLAMI, Yasamin, Mario LEZOCHE, and Philippe THOMAS. "Big Data Analytics et machine learning pour les systèmes industriels cyber-physiques." In Digitalisation et contrôle des systèmes industriels cyber-physiques, 175–95. ISTE Group, 2023. http://dx.doi.org/10.51926/iste.9085.ch9.
Mouchabac, Stéphane. "Antipsychotiques de seconde génération dans la dépression résistante : données cliniques." In Dépressions Difficiles, Dépressions Résistantes, 17–23. Elsevier, 2013. http://dx.doi.org/10.1016/b978-2-294-73727-5.00003-9.
VOLK, Rebekka. "Modéliser le bâti immobilier existant : planification et gestion de la déconstruction." In Le BIM, nouvel art de construire, 157–79. ISTE Group, 2023. http://dx.doi.org/10.51926/iste.9110.ch7.
KARLIS, Dimitris, and Katerina ORFANOGIANNAKI. "Modèles de régression de Markov pour les séries chronologiques de comptage des séismes." In Méthodes et modèles statistiques pour la sismogenèse, 165–80. ISTE Group, 2023. http://dx.doi.org/10.51926/iste.9037.ch6.
Full text"Les données de l’enquête sont représentatives de la diversité des engagements et des porteurs d’engagement." In Rapport de la Génération Égalité sur la redevabilité 2022, 14. United Nations, 2023. http://dx.doi.org/10.18356/9789210021906c007.
Conference papers on the topic "Génération de données synthétiques"
Devictor, Nicolas. "Quatrième génération : situation internationale." In Données nucléaires : avancées et défis à relever. Les Ulis, France: EDP Sciences, 2014. http://dx.doi.org/10.1051/jtsfen/2014donra02.
RIBEIRO DOS SANTOS, Daniel, Anne JULIEN-VERGONJANNE, and Johann BOUCLÉ. "Cellules Solaires pour les Télécommunications et la Récupération d’Énergie." In Les journées de l'interdisciplinarité 2022. Limoges: Université de Limoges, 2022. http://dx.doi.org/10.25965/lji.661.
Reports on the topic "Génération de données synthétiques"
Hicks, Jacqueline, Alamoussa Dioma, Marina Apgar, and Fatoumata Keita. Premiers résultats d'une évaluation de recherche-action systémique au Kangaba, Mali. Institute of Development Studies, April 2024. http://dx.doi.org/10.19088/ids.2024.019.