Contents
A selection of scientific literature on the topic "Génération de données synthétiques"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Browse the lists of current articles, books, dissertations, reports, and other scientific sources on the topic "Génération de données synthétiques".
Next to each work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scientific publication as a PDF and read its online abstract whenever the relevant parameters are present in the metadata.
Journal articles on the topic "Génération de données synthétiques"
Corriger, J., and J. Goret. "Développement et évaluation d'approches algorithmiques et par GAN pour la génération de données synthétiques en Allergologie". Revue Française d'Allergologie 64 (April 2024): 103893. http://dx.doi.org/10.1016/j.reval.2024.103893.
Chahnazarian, Anouch. "Hausse récente de la fécondité en Haïti : un nouvel engouement pour la vie en union?" Population 47, no. 3 (1992): 583–616. http://dx.doi.org/10.3917/popu.p1992.47n3.0616.
Bogardii, J. J., and L. Duckstein. "Evénements de période sèche en pays semi-aride". Revue des sciences de l'eau 6, no. 1 (2005): 23–46. http://dx.doi.org/10.7202/705164ar.
RICARD, F. H., G. MARCHE, and E. LE BIHAN-DUVAL. "Essai d'amélioration par sélection de la qualité de carcasse du poulet de chair". INRAE Productions Animales 7, no. 4 (1994): 253–61. http://dx.doi.org/10.20870/productions-animales.1994.7.4.4173.
Llamas, J., R. Fernandez, and A. Calvache. "Génération de séries synthétiques de débit". Canadian Journal of Civil Engineering 14, no. 6 (1987): 795–806. http://dx.doi.org/10.1139/l87-118.
ESSALMANI, R., S. SOULIER, N. BESNARD, M. HUDRISIER, J. COSTA DA SILVA, and J. L. VILOTTE. "Données de base sur la transgenèse". INRAE Productions Animales 13, HS (2000): 181–86. http://dx.doi.org/10.20870/productions-animales.2000.13.hs.3835.
Fleury, Charles. "La génération X au Québec : une génération sacrifiée ?" Recherche 49, no. 3 (2009): 475–99. http://dx.doi.org/10.7202/019877ar.
Blackburn, Heidi. "La prochaine génération d'employés". Documentation et bibliothèques 63, no. 1 (2017): 48–60. http://dx.doi.org/10.7202/1039073ar.
CHATELLIER, Vincent, Christophe PERROT, Emmanuel BEGUIN, Marc MORAINE, and Patrick VEYSSET. "Compétitivité et emplois à la production dans les secteurs bovins français". INRAE Productions Animales 33, no. 4 (2021): 261–82. http://dx.doi.org/10.20870/productions-animales.2020.33.4.4609.
Street, María Constanza, and Benoît Laplante. "Pas plus élevée, mais après la migration ! Fécondité, immigration et calendrier de constitution de la famille". Articles 43, no. 1 (2014): 35–68. http://dx.doi.org/10.7202/1025490ar.
Dissertations on the topic "Génération de données synthétiques"
Kieu, Van Cuong. „Modèle de dégradation d’images de documents anciens pour la génération de données semi-synthétiques“. Thesis, La Rochelle, 2014. http://www.theses.fr/2014LAROS029/document.
In the last two decades, the rise of document image digitization projects has spurred intense research on document image processing and analysis algorithms (handwriting recognition, document structure analysis, spotting and indexing / retrieval of graphical elements, etc.). Many successful algorithms are based on learning (supervised, semi-supervised, or unsupervised). To train such algorithms and to compare their performance, the document image analysis community needs large, publicly available annotated document image databases whose contents are exhaustive enough to represent the possible variations in the documents to be processed. Creating real document image databases requires an automatic or a manual annotation process. The performance of automatic annotation is limited by the quality and completeness of these databases, so annotation remains largely manual; the manual process, in turn, is complicated, subjective, and tedious. To overcome such difficulties, several crowd-sourcing initiatives have been proposed, some of them modelled as games to be more attractive. Such processes significantly reduce the cost and subjectivity of annotation, but difficulties remain: transcription and text-line alignment, for example, still have to be carried out manually. Since the 1990s, alternative document image generation approaches have been proposed, including the generation of semi-synthetic document images that mimic real ones. Semi-synthetic document image generation makes it possible to create benchmarking databases rapidly and cheaply for evaluating and training document processing and analysis algorithms.
In the context of the DIGIDOC project (Document Image diGitisation with Interactive DescriptiOn Capability) funded by the ANR (Agence Nationale de la Recherche), we focus on semi-synthetic document image generation adapted to ancient documents. First, we investigate new degradation models, or adapt existing ones to ancient documents, such as a bleed-through model, a distortion model, and a character degradation model. Second, we apply these degradation models to generate semi-synthetic document image databases for performance evaluation (e.g. the ICDAR 2013 and GREC 2013 competitions) and for performance improvement (by re-training a handwriting recognition system, a segmentation system, and a binarisation system). This research opens many opportunities to share our experimental results with the scientific community; such collaborative work also helps us validate our degradation models and demonstrate the value of semi-synthetic document images for performance evaluation and re-training.
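To give a concrete flavour of what a bleed-through degradation model can look like in its very simplest form, here is a minimal sketch (hypothetical illustration code, not the DIGIDOC implementation; the function name and `alpha` parameter are our own): the verso page is mirrored and blended multiplicatively into the recto, so that verso ink can only darken the page.

```python
import numpy as np

def bleed_through(recto, verso, alpha=0.3):
    """Simulate ink bleed-through on a grayscale page.

    recto, verso: float arrays in [0, 1], where 0.0 is ink and 1.0 is paper.
    alpha: assumed bleed strength (0 = no bleed-through).
    """
    mirrored = verso[:, ::-1]  # verso ink shows through mirror-imaged
    # Multiplicative blend: bleed-through can only darken the recto.
    return recto * (alpha * mirrored + (1.0 - alpha))
```

Published degradation models add per-pixel ink diffusion and paper texture on top of this; the sketch only captures the two essential behaviours, mirroring and darkening.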
Desbois-Bédard, Laurence. „Génération de données synthétiques pour des variables continues : étude de différentes méthodes utilisant les copules“. Master's thesis, Université Laval, 2017. http://hdl.handle.net/20.500.11794/27748.
Statistical agencies face a growing demand for releasing microdata to the public. To this end, many techniques have been proposed for publishing microdata while preserving confidentiality, synthetic data generation in particular. This thesis focuses on that technique, presenting two existing methods, GADP and C-GADP, and proposing a third based on vine copula models. GADP assumes that the variables of the original and synthetic data are normally distributed, while C-GADP assumes that they follow a normal copula distribution. Vine copula models are proposed for their flexibility. These three methods are then assessed in terms of utility and risk. Data utility depends on maintaining certain similarities between the original and released data, while risk comes in two kinds: re-identification and inference. This work focuses on utility, examined with different analysis-specific measures and a global measure based on propensity scores, and on the risk of inference, evaluated with a distance-based prediction.
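To make the normal-copula idea concrete, here is a minimal sketch of copula-based synthesis in the spirit of C-GADP (a simplified illustration under our own assumptions, not the code evaluated in the thesis): each variable is mapped to normal scores through its empirical ranks, new vectors are sampled from the fitted Gaussian dependence structure, and the samples are mapped back through the empirical quantiles, so marginals and rank correlations are approximately preserved.

```python
import numpy as np
from statistics import NormalDist

def synthesize_normal_copula(data, n_synth, rng=None):
    """Generate synthetic rows preserving marginals and rank correlations.

    data: (n, d) float array of continuous variables (ties broken arbitrarily).
    n_synth: number of synthetic rows to draw.
    """
    rng = np.random.default_rng(rng)
    n, d = data.shape
    nd = NormalDist()
    # 1. Map each column to normal scores via its empirical ranks.
    z = np.empty_like(data, dtype=float)
    for j in range(d):
        ranks = data[:, j].argsort().argsort() + 1
        u = ranks / (n + 1)  # uniform scores strictly inside (0, 1)
        z[:, j] = [nd.inv_cdf(v) for v in u]
    # 2. Estimate the copula correlation and sample new normal vectors.
    corr = np.corrcoef(z, rowvar=False)
    z_new = rng.multivariate_normal(np.zeros(d), corr, size=n_synth)
    # 3. Back-transform each synthetic score to an observed quantile.
    synth = np.empty((n_synth, d))
    for j in range(d):
        u_new = np.array([nd.cdf(v) for v in z_new[:, j]])
        synth[:, j] = np.quantile(data[:, j], u_new)
    return synth
```

Vine copulas generalize step 2 by replacing the single Gaussian dependence structure with a cascade of bivariate copulas, which is what gives them their flexibility.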
Uzan, Kathy. „Les vaccins synthétiques : données récentes“. Paris 5, 1989. http://www.theses.fr/1989PA05P188.
Barrère, Killian. "Architectures de Transformer légères pour la reconnaissance de textes manuscrits anciens". Electronic Thesis or Diss., Rennes, INSA, 2023. http://www.theses.fr/2023ISAR0017.
Transformer architectures deliver low error rates but are challenging to train because annotated data for handwritten text recognition is scarce. We propose lightweight Transformer architectures adapted to the limited amounts of annotated handwritten text available. We introduce a fast encoder-based Transformer architecture that processes up to 60 pages per second, as well as architectures using a Transformer decoder to incorporate language modeling into character recognition. To train our architectures effectively, we offer algorithms for generating synthetic data matched to the visual style of modern and historical documents. Finally, we propose strategies for learning with limited data and reducing prediction errors. Combined with synthetic data and these strategies, our architectures achieve competitive error rates on text lines from modern documents; for historical documents, they train effectively with minimal annotated data and surpass state-of-the-art approaches. Remarkably, just 500 annotated lines are sufficient to reach character error rates close to 5%.
Ruiz Paredes, Javier Antonio. "Génération d'accélérogrammes synthétiques large-bande par modélisation cinématique de la rupture sismique". Paris, Institut de physique du globe, 2007. http://www.theses.fr/2007GLOB0009.
In order to make broadband kinematic rupture modeling more realistic with respect to dynamic modeling, physical constraints are added to the rupture parameters. To improve the modeling of the slip velocity function (SVF), an evolution of the k⁻² source model is proposed, which decomposes the slip as a sum of sub-events by bands of wavenumber k. This model yields SVFs close to the solution proposed by Kostrov for a crack, while preserving the spectral characteristics of the radiated wavefield, i.e. an ω² model whose high-frequency spectral amplitudes scale with the directivity coefficient Cd. To better control directivity effects, a composite source description is combined with a scaling law defining the extent of the nucleation area for each sub-event. The resulting model reduces the apparent directivity coefficient to a fraction of Cd and reproduces the standard deviation of the new empirical attenuation relationships proposed for Japan. To make source models more realistic, a variable rupture velocity consistent with the physics of rupture must be considered. The approach followed, based on an analytical relation between fracture energy, slip, and rupture velocity, leads to higher peak ground acceleration in the vicinity of the fault. Finally, to better account for the interaction of the wavefield with the geological medium, a semi-empirical methodology combining a composite source model with empirical Green functions is developed and applied to the Mw 5.9 Yamaguchi earthquake. The modeled synthetics satisfactorily reproduce the main observed characteristics of the ground motions.
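For reference, the ω² spectral shape invoked above is conventionally written in the standard Brune-type form (quoted here from general seismological convention, not taken from the thesis itself): the far-field displacement amplitude spectrum is

```latex
\Omega(f) = \frac{\Omega_0}{1 + \left(f/f_c\right)^{2}}
```

which is flat below the corner frequency \(f_c\) and falls off as \(f^{-2}\) above it; in the model described above, the high-frequency level is additionally scaled by the directivity coefficient \(C_d\).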
Pazat, Jean-Louis. „Génération de code réparti par distribution de données“. Habilitation à diriger des recherches, Université Rennes 1, 1997. http://tel.archives-ouvertes.fr/tel-00170867.
Baez Miranda, Belen. "Génération de récits à partir de données ambiantes". Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM049/document.
Stories are a communication tool that allows people to make sense of the world around them; they provide a platform for understanding and sharing culture, knowledge, and identity. A story carries a series of real or imaginary events, provoking a feeling or a reaction, or even triggering an action. For this reason, stories have become a subject of interest for fields well beyond literature (education, marketing, psychology, etc.) that seek to achieve particular goals through them (persuading, reflecting, learning, etc.). However, stories remain underexplored in computer science. Existing work focuses on their analysis and automatic production, but those algorithms and implementations remain constrained to imitating the creative process behind literary texts from textual sources. Thus, there is no approach that automatically produces stories in which 1) the source consists of raw material captured in real life and 2) the content projects a perspective that seeks to convey a particular message. Working with raw data is increasingly relevant today, as connected devices produce exponentially more of it every day. Given this Big Data context, we present an approach to automatically generate stories from ambient data. The objective of this work is to bring out the lived experience of a person from the data produced during a human activity; any area that uses such raw data could benefit from this work, for example education or health. It is an interdisciplinary effort spanning natural language processing, narratology, cognitive science, and human-computer interaction. The approach is based on corpora and models and includes the formalization of what we call the activity récit as well as an adapted generation approach. It consists of four stages: formalization of the activity récit, corpus constitution, construction of models of the activity and the récit, and text generation.
Each stage has been designed to overcome constraints raised by the scientific questions involved, given the nature of the objective: handling uncertain and incomplete data, producing abstractions valid for the activity, and building models from which the reality captured in the data can be transposed into a subjective perspective and rendered in natural language. We used the activity narrative as a case study, since practitioners use connected devices and need to share their experience. The results obtained are encouraging and open up many prospects for further research.
Morisse, Pierre. „Correction de données de séquençage de troisième génération“. Thesis, Normandie, 2019. http://www.theses.fr/2019NORMR043/document.
The aims of this thesis fall within the broad problematic of high-throughput sequencing data analysis. More specifically, it deals with long reads from third-generation sequencing technologies, focusing mainly on error correction and on its impact on downstream analyses such as de novo assembly. As a first step, one objective of this thesis is to evaluate and compare the quality of the error correction provided by state-of-the-art tools, whether they employ a hybrid strategy (using complementary short reads) or a self-correction strategy (relying only on the information contained in the long reads themselves). Such an evaluation makes it easy to identify which method is best suited to a given case, according to genome complexity, sequencing depth, or read error rate. Moreover, developers can thus identify the limiting factors of existing methods, guiding their work toward new solutions that overcome these limitations. A new evaluation tool was therefore developed, providing a wide variety of metrics compared to the single tool previously available. This tool combines a multiple sequence alignment approach with a segmentation strategy, drastically reducing evaluation runtime. With its help, we present a benchmark of all state-of-the-art error correction methods on various datasets from several organisms, spanning from the bacterium A. baylyi to human. This benchmark exposed two major limiting factors of existing tools: reads with error rates above 30%, and reads longer than 50,000 base pairs. The second objective of this thesis is thus the error correction of highly noisy long reads. To this end, a hybrid error correction tool combining different strategies from the state of the art was developed to overcome the limiting factors of existing methods.
More precisely, this tool combines a short-read alignment strategy with a variable-order de Bruijn graph. The graph is used to link the aligned short reads and thereby correct the uncovered regions of the long reads. This method can process reads with error rates as high as 44% and scales better to large genomes, while reducing correction runtime compared to the most efficient state-of-the-art tools. Finally, the third objective of this thesis is the error correction of extremely long reads. To this end, a self-correction tool was developed, again combining different methodologies from the state of the art: an overlapping strategy and a two-phase error correction process using multiple sequence alignment and local de Bruijn graphs. To let this method scale to extremely long reads, the aforementioned segmentation strategy was generalized. This self-correction method can process reads of up to 340,000 base pairs and scales very well to complex organisms such as the human genome.
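The graph-bridging step behind this kind of hybrid correction can be illustrated with a toy fixed-k version (a deliberate simplification of the variable-order graph described above; the function names, fixed k, and BFS search are our own assumptions, not the thesis tools): build a de Bruijn graph from accurate short reads, then search it for a path linking two solid k-mers that flank a noisy stretch of a long read.

```python
from collections import defaultdict, deque

def build_dbg(short_reads, k):
    """de Bruijn graph over k-mers: edge u -> v when v overlaps u by k-1."""
    graph = defaultdict(set)
    for read in short_reads:
        for i in range(len(read) - k):
            graph[read[i:i + k]].add(read[i + 1:i + k + 1])
    return graph

def bridge(graph, left, right, max_len=100):
    """BFS for a k-mer path from solid k-mer `left` to solid k-mer `right`.

    Returns the corrected sequence spelled by the path, or None if the
    short reads cannot bridge the region within max_len bases.
    """
    queue = deque([(left, left)])
    seen = {left}
    while queue:
        kmer, seq = queue.popleft()
        if kmer == right:
            return seq
        if len(seq) >= max_len:
            continue
        for nxt in graph.get(kmer, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, seq + nxt[-1]))
    return None
```

A correction tool would then splice the bridged sequence over the noisy stretch of the long read; real tools additionally vary k to cope with repeats and uneven short-read coverage.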
Fontin, Mickaël. „Contribution à la génération de séries synthétiques de pluies, de débits et de températures“. Toulouse, INPT, 1987. http://www.theses.fr/1987INPT117H.
Khalili, Malika. "Nouvelle approche de génération multi-site des données climatiques". Mémoire, École de technologie supérieure, 2007. http://espace.etsmtl.ca/580/1/KHALILI_Malika.pdf.
Books on the topic "Génération de données synthétiques"
Ganzinger, Harald, and Neil D. Jones, eds. Programs as Data Objects: Proceedings of a Workshop, Copenhagen, Denmark, October 17–19, 1985 (Lecture Notes in Computer Science). Springer, 1986.
Book chapters on the topic "Génération de données synthétiques"
„Outils de génération automatique des modèles“. In Conception de bases de données avec UML, 447–504. Presses de l'Université du Québec, 2007. http://dx.doi.org/10.2307/j.ctv18pgv5t.10.
ESLAMI, Yasamin, Mario LEZOCHE, and Philippe THOMAS. "Big Data Analytics et machine learning pour les systèmes industriels cyber-physiques". In Digitalisation et contrôle des systèmes industriels cyber-physiques, 175–95. ISTE Group, 2023. http://dx.doi.org/10.51926/iste.9085.ch9.
Mouchabac, Stéphane. "Antipsychotiques de seconde génération dans la dépression résistante : données cliniques". In Dépressions Difficiles, Dépressions Résistantes, 17–23. Elsevier, 2013. http://dx.doi.org/10.1016/b978-2-294-73727-5.00003-9.
VOLK, Rebekka. "Modéliser le bâti immobilier existant : planification et gestion de la déconstruction". In Le BIM, nouvel art de construire, 157–79. ISTE Group, 2023. http://dx.doi.org/10.51926/iste.9110.ch7.
KARLIS, Dimitris, and Katerina ORFANOGIANNAKI. "Modèles de régression de Markov pour les séries chronologiques de comptage des séismes". In Méthodes et modèles statistiques pour la sismogenèse, 165–80. ISTE Group, 2023. http://dx.doi.org/10.51926/iste.9037.ch6.
"Les données de l'enquête sont représentatives de la diversité des engagements et des porteurs d'engagement". In Rapport de la Génération Égalité sur la redevabilité 2022, 14. United Nations, 2023. http://dx.doi.org/10.18356/9789210021906c007.
Conference papers on the topic "Génération de données synthétiques"
Devictor, Nicolas. „Quatrième génération : situation internationale“. In Données nucléaires : avancées et défis à relever. Les Ulis, France: EDP Sciences, 2014. http://dx.doi.org/10.1051/jtsfen/2014donra02.
RIBEIRO DOS SANTOS, Daniel, Anne JULIEN-VERGONJANNE, and Johann BOUCLÉ. "Cellules Solaires pour les Télécommunications et la Récupération d'Énergie". In Les journées de l'interdisciplinarité 2022. Limoges: Université de Limoges, 2022. http://dx.doi.org/10.25965/lji.661.
Organization reports on the topic "Génération de données synthétiques"
Hicks, Jacqueline, Alamoussa Dioma, Marina Apgar, and Fatoumata Keita. Premiers résultats d'une évaluation de recherche-action systémique au Kangaba, Mali. Institute of Development Studies, April 2024. http://dx.doi.org/10.19088/ids.2024.019.