Dissertations on the topic "Génération de données de synthèse"
Listed below are the top 50 dissertations for research on the topic "Génération de données de synthèse". Where the record metadata provide one, an abstract follows the citation.
Molinari, Isabelle. "Test de génération de thrombine sur ACL7000 (développement d'un programme de traitement des données sur Microsoft Excel et éléments d'analyse de l'intérêt du test dans les états d'hypercoagulabilité)." Bordeaux 2, 1999. http://www.theses.fr/1999BOR23102.
Pazat, Jean-Louis. "Génération de code réparti par distribution de données." Habilitation à diriger des recherches, Université Rennes 1, 1997. http://tel.archives-ouvertes.fr/tel-00170867.
Baez Miranda, Belen. "Génération de récits à partir de données ambiantes." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM049/document.
Stories are a communication tool that allows people to make sense of the world around them; they provide a platform for understanding and sharing culture, knowledge and identity. Stories carry a series of real or imaginary events that cause a feeling or a reaction, or even trigger an action. For this reason, they have become a subject of interest for fields beyond literature (education, marketing, psychology, etc.) that seek to achieve a particular goal through them (persuade, reflect, learn, etc.). However, stories remain underexploited in computer science. Existing work focuses on their analysis and automatic production, but those algorithms and implementations remain constrained to imitating the creative process behind literary texts from textual sources. There is no approach that automatically produces stories whose (1) source consists of raw material captured in real life and (2) content projects a perspective that seeks to convey a particular message. Working with raw data is increasingly relevant today, as its volume grows exponentially through the use of connected devices. In this Big Data context, we present an approach to automatically generate stories from ambient data. The objective of this work is to bring out the lived experience of a person from the data produced during a human activity; any area that uses such raw data, for example education or health, could benefit from this work. It is an interdisciplinary effort that includes natural language processing, narratology, cognitive science and human-computer interaction. The approach is based on corpora and models and includes the formalization of what we call the activity récit as well as an adapted generation approach. It consists of four stages: formalization of the activity récit, corpus constitution, construction of models of the activity and of the récit, and text generation. Each stage was designed to overcome constraints raised by the scientific questions at hand: manipulation of uncertain and incomplete data, abstraction that is valid with respect to the activity, and construction of models from which it is possible to transpose the reality collected through the data into a subjective perspective rendered in natural language. We used activity narratives as a case study, since practitioners of such activities use connected devices and need to share their experience. The results obtained are encouraging and open up many prospects for research.
Morisse, Pierre. "Correction de données de séquençage de troisième génération." Thesis, Normandie, 2019. http://www.theses.fr/2019NORMR043/document.
The aims of this thesis are part of the broad problematic of high-throughput sequencing data analysis. More specifically, it deals with long reads from third-generation sequencing technologies, focusing mainly on error correction and on its impact on downstream analyses such as de novo assembly. As a first step, one objective is to evaluate and compare the quality of the error correction provided by state-of-the-art tools, whether they employ a hybrid strategy (using complementary short reads) or a self-correction strategy (relying only on the information contained in the long-read sequences). Such an evaluation makes it easy to identify which method is best suited for a given case, according to genome complexity, sequencing depth, or the error rate of the reads. Moreover, developers can thus identify the limiting factors of existing methods, to guide their work and propose new solutions overcoming these limitations. A new evaluation tool, providing a wider variety of metrics than the only tool previously available, was thus developed. This tool combines a multiple sequence alignment approach with a segmentation strategy, drastically reducing the evaluation runtime. With its help, we present a benchmark of all the state-of-the-art error correction methods on various datasets from several organisms, ranging from the A. baylyi bacterium to human. This benchmark revealed two major limiting factors of existing tools: reads displaying error rates above 30%, and reads longer than 50,000 base pairs. The second objective of this thesis is thus the error correction of highly noisy long reads. To this aim, a hybrid error correction tool combining different state-of-the-art strategies was developed to overcome these limiting factors. More precisely, this tool combines a short-read alignment strategy with a variable-order de Bruijn graph; the graph is used to link the aligned short reads and thus correct the uncovered regions of the long reads. This method can process reads with error rates as high as 44% and scales better to large genomes, while reducing the runtime of error correction compared to the most efficient state-of-the-art tools. Finally, the third objective is the error correction of extremely long reads. To this aim, a self-correction tool was developed, again combining different state-of-the-art methodologies: an overlapping strategy and a two-phase error correction process using multiple sequence alignment and local de Bruijn graphs. To let this method scale to extremely long reads, the aforementioned segmentation strategy was generalized. This self-correction method can process reads of up to 340,000 base pairs and scales very well to complex organisms such as the human genome.
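The bridging step at the heart of such hybrid correction can be illustrated with a minimal sketch: build a de Bruijn graph from the short reads, then search for a path linking two solid flanks of a noisy long-read region. This is only an illustration of the principle; function names and parameters are invented, and the thesis's tools are far more elaborate.

```python
from collections import defaultdict

def build_de_bruijn(short_reads, k):
    """Map each (k-1)-mer to the set of next characters observed in the short reads."""
    edges = defaultdict(set)
    for read in short_reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            edges[kmer[:-1]].add(kmer[-1])
    return edges

def bridge(edges, anchor, target, max_len):
    """Depth-first search for a sequence that starts on the anchor (k-1)-mer,
    walks the graph, and ends on the target (k-1)-mer; None if no path is found."""
    stack = [(anchor, anchor)]
    seen = set()
    while stack:
        node, path = stack.pop()
        if node == target:
            return path                      # corrected sequence between the flanks
        if len(path) >= max_len or (node, len(path)) in seen:
            continue
        seen.add((node, len(path)))
        for c in edges.get(node, ()):
            stack.append((node[1:] + c, path + c))
    return None

reads = ["ACGTACGGT", "GTACGGTCA"]           # error-free short reads (toy data)
g = build_de_bruijn(reads, k=4)
print(bridge(g, "ACG", "TCA", max_len=20))   # e.g. ACGGTCA, a path joining the flanks
```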
Khalili, Malika. "Nouvelle approche de génération multi-site des données climatiques." Mémoire, École de technologie supérieure, 2007. http://espace.etsmtl.ca/580/1/KHALILI_Malika.pdf.
Genestier, Richard. "Vérification formelle de programmes de génération de données structurées." Thesis, Besançon, 2016. http://www.theses.fr/2016BESA2041/document.
The general problem of proving properties of imperative programs is undecidable. Some subproblems, restricting the languages of programs and properties, are known to be decidable. In practice, thanks to heuristics, program proving tools sometimes automate proofs for programs and properties outside the theoretical framework of known decidability results. We illustrate this fact by building a catalog of proofs for similar programs and properties of increasing complexity; most of these programs are combinatorial map generators. This work thus contributes to the research fields of enumerative combinatorics and software engineering. We distribute a C library of bounded exhaustive generators of structured arrays, formally specified in ACSL and verified with the WP plugin of the Frama-C analysis platform. We also propose a testing-based methodology to assist interactive proof in Coq, an original formal study of maps, and new results in enumerative combinatorics.
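As a toy illustration of bounded exhaustive generation (the thesis's generators are C functions formally specified in ACSL; the invariant below is a made-up stand-in), one can enumerate by backtracking all arrays of a given length that satisfy a structural predicate:

```python
def bounded_arrays(n, b, invariant):
    """Yield every length-n array over {0, ..., b} satisfying `invariant`."""
    def rec(prefix):
        if len(prefix) == n:
            if invariant(prefix):
                yield list(prefix)
            return
        for v in range(b + 1):
            prefix.append(v)
            yield from rec(prefix)
            prefix.pop()
    yield from rec([])

# Toy structural invariant: non-decreasing arrays.
non_decreasing = lambda a: all(x <= y for x, y in zip(a, a[1:]))
print(len(list(bounded_arrays(3, 2, non_decreasing))))  # 10 such arrays
```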
Hattab, Z'hour. "Synthèse d'hétérocycles phosphorylés dérivés d'acides aminés : Application à la synthèse d'antitumoraux de nouvelle génération." Paris 13, 2010. http://www.theses.fr/2010PA132033.
Oxazaphosphorinanes are alkylating agents used in chemotherapy, but they show undesirable effects and toxicity. On the other hand, bisphosphonates are compounds used in the treatment of osteoporosis and bone metastases; they generate some side effects and have poor bioavailability by oral absorption, due to their poor lipophilicity. Our study consisted in synthesizing phosphorylated heterocycles derived from amino acids, called oxazaphospholidinones, and coupling them with bisphosphonates in order to obtain new antitumoral molecules, potentially more efficient and with fewer undesirable effects. In a first part, we synthesized nitrogen-protected oxazaphospholidinones from amino alcohols. Different protective groups were studied: benzyl, tert-butyloxycarbonyl (Boc) and benzyloxycarbonyl (Cbz). These compounds were obtained as a mixture of two diastereoisomers (P2S,C4S and P2R,C4S; the C4S configuration was fixed by the configuration of the starting amino acid) and were separated by column chromatography or crystallization. The diastereoisomers were characterized by IR and NMR spectroscopy and by mass spectrometry, and a crystallographic study was carried out. In the second part, we present the deprotection of the benzyloxycarbonyl (Cbz) group and the unsuccessful hydrogenolysis and oxidation assays of the benzyl group. Lastly, the coupling of a bisphosphonate with a protected oxazaphospholidine unfortunately failed.
Caron, Maxime. "Données confidentielles : génération de jeux de données synthétisés par forêts aléatoires pour des variables catégoriques." Master's thesis, Université Laval, 2015. http://hdl.handle.net/20.500.11794/25935.
Confidential data are very common in statistics nowadays. One way to treat them is to create partially synthetic datasets for data sharing. We present an algorithm based on random forests to generate such datasets for categorical variables. We are interested in the formula used to make inference from multiple synthetic datasets, and we show that the order of synthesis has an impact on the estimation of the variance with this formula. We propose a variant of the algorithm inspired by differential privacy, and show that with it we are no longer able to estimate a regression coefficient or its variance. We also show the impact of synthetic datasets on structural equation modeling; one conclusion is that the synthetic dataset does not really affect the coefficients between latent variables and measured variables.
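A minimal sketch of this sequential mechanism, assuming integer-coded categorical variables (an illustration of the idea, not the thesis's algorithm; note that the column order in the loop is exactly the synthesis order whose effect on variance estimation is studied):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def synthesize(X, rng=None):
    """X: (n, p) integer-coded categorical data. Returns a synthetic copy in
    which column j is drawn from a random forest fit on columns 0..j-1."""
    rng = rng or np.random.default_rng(0)
    n, p = X.shape
    S = np.empty_like(X)
    S[:, 0] = rng.choice(X[:, 0], size=n)      # bootstrap the first variable
    for j in range(1, p):
        rf = RandomForestClassifier(n_estimators=100, random_state=0)
        rf.fit(X[:, :j], X[:, j])              # observed conditional distribution
        proba = rf.predict_proba(S[:, :j])     # condition on the synthetic history
        S[:, j] = [rng.choice(rf.classes_, p=pr) for pr in proba]
    return S
```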
Effantin dit Toussaint, Brice. "Colorations de graphes et génération exhaustive d'arbres." Dijon, 2003. http://www.theses.fr/2003DIJOS021.
Chevrier, Christine. "Génération de séquences composées d'images de synthèse et d'images vidéo." Nancy 1, 1996. http://www.theses.fr/1996NAN10121.
The visual impact assessment of architectural projects in urban environments is usually based on manual drawings, paintings on photographs, scale models or computer-generated images. These techniques are either too expensive or not realistic enough. Strictly using computer images requires an accurate 3D model of the environment; computing such a model takes a long time and the results lack visual accuracy. Our technique of overlaying computer-generated images onto photographs of the environment is considerably more effective and reliable. This method is a promising solution regarding computation time (no accurate 3D model of the environment is needed) as well as visual realism (provided by the photograph itself). Such a solution nevertheless requires solving many problems in order to obtain geometrical and photometrical coherence in the resulting image. To this end, image analysis and image synthesis methods have to be designed and developed. The method generalizes to the production of an animated film, which can further increase the realism of the simulation. My Ph.D. work was to test and integrate various image analysis and synthesis techniques for compositing computer-generated images with pictures or video film. This report explains the steps required for a realistic insertion. For each of these steps, various techniques were tested in order to choose the most suitable solutions according to the state of the most recent research and to the applications we were dealing with (architectural and urban projects). The application of this work was the simulation of the Paris bridges illumination projects. In parallel with this work, I present a new method for the interpolation of computer images for the generation of an image sequence. This method requires no approximation of the camera motion. Since video images are interlaced, sequences of computer images need to be interlaced too, and the proposed interpolation technique is capable of doing this.
Lagrange, Jean-Philippe. "Ogre : un système expert pour la génération de requêtes relationnelles." Paris 9, 1992. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=1992PA090035.
Prost-Boucle, A. "Génération rapide d'accélérateurs matériels par synthèse d'architecture sous contraintes de ressources." Phd thesis, Université de Grenoble, 2014. http://tel.archives-ouvertes.fr/tel-01071661.
Prost-Boucle, Adrien. "Génération rapide d'accélerateurs matériels par synthèse d'architecture sous contraintes de ressources." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENT039/document.
In the field of high-performance computing, FPGA circuits are very attractive for their performance and low consumption, yet their presence remains marginal, mainly because of the limitations of current development tools. These limitations force users to have expert knowledge of numerous technical concepts and to manually control the synthesis process in order to obtain solutions that are both fast and compliant with the hardware constraints of the targeted platforms. A novel generation methodology based on high-level synthesis is proposed in order to push these limits back. The design-space exploration consists in iteratively applying transformations to an initial circuit, progressively increasing its speed and its resource consumption; the rapidity of this process, along with its convergence under resource constraints, is thus guaranteed. The exploration is also guided toward the most pertinent solutions thanks to the detection of the most critical sections of the applications to synthesize, for the targeted execution context. This information can be refined with an execution scenario specified by the user. A demonstration tool for this methodology, AUGH, has been built, and experiments have been conducted on several applications known in the field of high-level synthesis. Of very different sizes, these applications confirm the pertinence of the proposed methodology for the fast and automatic generation of complex hardware accelerators under strict resource constraints. The proposed methodology is very close to the compilation process for microprocessors, which enables it to be used even by users who are not experts in digital circuit design. These works constitute significant progress toward a broader adoption of FPGAs as general-purpose hardware accelerators, to make computing machines both faster and more energy-efficient.
Al Lakiss, Louwanda. "Synthèse de gallophosphates microstructurés par génération in situ de l'agent structurant." Mulhouse, 2006. http://www.theses.fr/2006MULH0842.
The present thesis deals with the synthesis of new microporous gallophosphates and fluorogallophosphates. In this work, a new synthesis route for gallophosphate materials has been studied: the in situ generation of the structure-directing agent (SDA) from alkylformamides, which decompose into alkylamines in the synthesis medium. The in situ release of the amine during synthesis appears to be a key step in the crystallization of the gallophosphate materials; indeed, experiments using the alkylamine directly in the starting mixture did not lead to the crystallization of the same materials. The in situ generation of the alkylamine has been very successful, and several new materials have been obtained, i.e., Mu-30, Mu-34, Ea-TREN GaPO, Mu-35, Mu-37 and Mu-38. The materials obtained were then characterized by powder XRD, SEM, elemental and thermal analyses and solid-state NMR spectroscopy. Some structures were solved from single-crystal XRD, or from the XRD powder pattern using the Rietveld method.
Bounar, Boualem. "Génération automatique de programmes sur une base de données en réseau : couplage PROLOG-Base de données en réseau." Lyon 1, 1986. http://www.theses.fr/1986LYO11703.
Leroux (Zinovieva), Elena. "Méthodes symboliques pour la génération de tests de systèmes réactifs comportant des données." Phd thesis, Université Rennes 1, 2004. http://tel.archives-ouvertes.fr/tel-00142441.
[The classical model] of transition systems does not allow this: it forces the enumeration of data values before building the transition-system model of the system, which can cause the state-space explosion problem. This enumeration also yields test cases in which all data are instantiated, which contradicts industrial practice, where test cases are real programs with variables and parameters. Generating such test cases requires new models and techniques. In this thesis, we achieve two objectives. On the one hand, we introduce a model called input/output symbolic transition systems, which explicitly includes all the data of a reactive system. On the other hand, we propose and implement a new test generation technique that handles the data of a system symbolically, combining the test generation approach previously proposed by our research group with abstract interpretation techniques. The test cases automatically generated by our technique satisfy correctness properties: they always deliver a correct verdict.
Uribe Lobello, Ricardo. "Génération de maillages adaptatifs à partir de données volumiques de grande taille." Thesis, Lyon 2, 2013. http://www.theses.fr/2013LYO22024.
In this document, we are interested in surface extraction from the volumetric representation of an object. With this objective in mind, we have studied spatial-subdivision surface extraction algorithms. These approaches divide the volume in order to build a piecewise approximation of the surface; the general idea is to combine local, simple approximations to extract a complete representation of the object's surface. Methods based on the Marching Cubes (MC) algorithm have trouble producing good-quality, adaptive surfaces. Even though many improvements to MC have been proposed, each solves only one or two problems and none offers a complete answer to all the MC drawbacks. Dual methods are better suited to adaptive sampling over volumes: they generate surfaces that are dual to those produced by the Marching Cubes algorithm, or dual grids on which MC methods can then be applied. These solutions build adaptive meshes that represent the features of the object well, and recent improvements guarantee that the produced meshes have good geometrical and topological properties. In this dissertation, we have studied the main topological and geometrical properties of volumetric objects. In a first stage, we explored the state of the art of spatial-subdivision surface extraction methods in order to identify their advantages, their drawbacks and the implications of applying them to volumetric objects; we concluded that a dual approach is the best option to obtain a good compromise between mesh quality and geometric approximation. In a second stage, we developed a general surface extraction pipeline based on a combination of dual methods and connected-component extraction, to better capture the topology and geometry of the original object. In a third stage, we presented an out-of-core extension of our pipeline in order to extract adaptive meshes from huge volumes: volumes are divided into smaller sub-volumes that are processed independently to produce surface patches, later combined into a unique, topologically correct surface. This approach can be parallelized to speed up its performance. Tests on a large set of volumes have confirmed our results and the features of our solution.
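For contrast, the classical Marching Cubes baseline that these dual methods improve upon is readily available in scikit-image; a minimal sketch on an arbitrary implicit volume:

```python
import numpy as np
from skimage import measure

# Sample an implicit function (a sphere) on a 64^3 grid.
x, y, z = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
volume = x**2 + y**2 + z**2

# Extract the 0.5-isosurface as a triangle mesh (vertices + faces).
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
print(verts.shape, faces.shape)
```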
Xue, Xiaohui. "Génération et adaptation automatiques de mappings pour des sources de données XML." Phd thesis, Versailles-St Quentin en Yvelines, 2006. http://www.theses.fr/2006VERS0019.
The integration of information originating from multiple heterogeneous data sources is required by many modern information systems. In this context, the application's needs are described by a target schema, and the way instances of the target schema are derived from the data sources is expressed through mappings. In this thesis, we address the problem of mapping generation for multiple XML data sources and the adaptation of these mappings when the target schema or the sources evolve. We propose an automatic generation approach that first decomposes the target schema into subtrees, then defines mappings, called partial mappings, for each of these subtrees, and finally combines these partial mappings to generate the mappings for the whole target schema. We also propose a mapping adaptation approach to keep existing mappings current when changes occur in the target schema or in one of the sources. We have developed a prototype implementation of a tool supporting these processes.
Xue, Xiaohui. "Génération et adaptation automatiques de mappings pour des sources de données XML." Phd thesis, Université de Versailles-Saint Quentin en Yvelines, 2006. http://tel.archives-ouvertes.fr/tel-00324429.
We propose a mapping generation approach in three phases: (i) decomposition of the target schema into subtrees, (ii) search for partial mappings for each of these subtrees, and (iii) generation of mappings for the whole target schema from these partial mappings. The result of our approach is a set of mappings, each with its own semantics. When the information required by the target schema is not present in the sources, no mapping is produced; in this case, we propose to relax certain constraints defined on the target schema so that mappings can be generated. We have developed a tool supporting our approach, and we have also proposed an approach for adapting existing mappings when changes occur in the sources or in the target schema.
Tawbi, Khouloud. "Synthèse de métallophosphates microstructurés obtenus par génération in situ de l'agent structurant." Phd thesis, Université de Haute Alsace - Mulhouse, 2012. http://tel.archives-ouvertes.fr/tel-00833362.
Di Cristo, Philippe. "Génération automatique de la prosodie pour la synthèse à partir du texte." Aix-Marseille 1, 1998. http://www.theses.fr/1998AIX11050.
Zinovieva-Leroux, Eléna. "Méthodes symboliques pour la génération de tests de systèmes réactifs comportant des données." Rennes 1, 2004. https://tel.archives-ouvertes.fr/tel-00142441.
Moinard, Matthieu. "Codage vidéo hybride basé contenu par analyse/synthèse de données." Phd thesis, Telecom ParisTech, 2011. http://tel.archives-ouvertes.fr/tel-00830924.
Combaz, Jean. "Utilisation de phénomènes de croissance pour la génération de formes en synthèse d'images." Phd thesis, Université Joseph Fourier (Grenoble), 2004. http://tel.archives-ouvertes.fr/tel-00528689.
Denhez, Clément. "Nouvelles méthodes de génération et d'activation de complexes zirconocènes : Synthèse de composés azotés." Reims, 2006. http://theses.univ-reims.fr/exl-doc/GED00000440.pdf.
This thesis deals with the development of new methods for the activation of zirconocene complexes through bimetallic cooperation; new syntheses of nitrogen-containing compounds have been developed on this basis. The first part of this work is devoted to the highly chemoselective carboalumination of aldimines under zirconium catalysis. This reaction involves alanes as alkyl promoters and proceeds through metallacycles involving a bimetallic polarization. In the case of imines derived from aniline, a zirconium-catalyzed regioselective acylation at the ortho position can be carried out. In the second part, we developed a new diastereoselective synthesis of 2- and 2,5-substituted pyrrolidines starting from optically pure N-allyloxazolidines, through a hydrozirconation / Lewis-acid-catalyzed cyclization sequence. The third chapter presents a stereoselective synthesis of γ-lactams from N-homoallylcarbamates, using a zirconium(II)-mediated intramolecular coupling reaction. Finally, the last part of this thesis was devoted to the development of a new reagent, a synthetic equivalent of zirconocene(II), obtained by reduction of dichlorozirconocene with a lanthanide alloy, mischmetal. This new reagent has important synthetic potential in many couplings; in particular, the first coupling involving 1-alkynes and the first zirconium-catalyzed trimerization of alkynes have been carried out.
Turgeon, Josyane. "Synthèse d'une nouvelle génération de polymères conducteurs autodopés pour applications en électronique organique." Master's thesis, Université Laval, 2021. http://hdl.handle.net/20.500.11794/69046.
The use of electronic devices is constantly evolving and adapting to consumer needs. In the food industry, this need concerns, among other things, the development of intelligent packaging capable of maintaining the integrity of consumer products throughout the cold chain; hence the interest in delivering a lightweight, flexible and affordable device capable of detecting problematic factors in the cold chain. Such a device must contain an electronic circuit that can signal any problem, in order to prevent irreparable damage to consumer products. This master's project therefore focuses on the development of a new generation of self-doped conductive polymers that can replace metallic components in printed electronic circuits for smart packaging. A major disadvantage of the increasing use of printed electronics is that the conductive materials used for such applications are often expensive metals, such as silver, which generate polluting waste at the end of the devices' life cycle. An interesting alternative to metallic conductors is the use of conductive polymers, which is why this project looks at the synthesis of self-doped conductive polymers. Why self-doped? Because the current state of the art in the field of conductive polymers is a matrix of two polymers, poly(3,4-ethylenedioxythiophene) (PEDOT) and polystyrene sulfonate (PSS). Although PSS is essential for the aqueous dispersion of PEDOT, it is responsible for a significant decrease in the conductivity of the material. Acid self-doped polymers are therefore interesting to study in order to dispense with PSS, and thus to maximize the conductivities obtained and to facilitate the processing of these polymers.
Joffres, Benoît. "Synthèse de bio-liquide de seconde génération par hydroliquéfaction catalytique de la lignine." Thesis, Lyon 1, 2013. http://www.theses.fr/2013LYO10223.
Nowadays, the transformation of lignocellulosic biomass is deeply investigated in order to provide biofuels and chemicals. Lignin, a by-product of the pulp and bio-ethanol industries, is an available resource which could be used for the production of aromatic and phenolic compounds; however, this macromolecule, mainly made of propylphenolic units linked by ether functions, needs to be depolymerized. This work focuses on the study of liquefaction mechanisms during the catalytic hydroconversion of a wheat straw lignin extracted by soda pulping. In the first part of this study, an in-depth characterization of this lignin was carried out using several analytical techniques, and a structure for our lignin was proposed as a result. Then, a procedure was developed to perform the catalytic hydroconversion and recover the products. Catalytic experiments were carried out in a semi-batch reactor at 350°C, using H2 (8 MPa), a hydrogen-donor solvent (tetralin) and a sulfide NiMoP/Al2O3 catalyst. The recovered products were separated into a liquid phase, gases, a lignin residue and THF-insoluble solids. A conversion of 81 wt% of lignin into non-solid products was reached after 28 h of reaction, with an excellent mass balance. The different fractions were likewise characterized with several techniques. Thanks to this protocol, we were able to point out the role of the H-donor solvent in preventing solid formation, as well as the role of the catalyst in the hydrodeoxygenation and hydrogenation of the depolymerized products. Finally, the catalytic hydroconversion of the lignin was carried out at different residence times, which helped in understanding the transformations occurring during the conversion. At the beginning of the reaction, we observed decarboxylation, hydrogenolysis of aliphatic OH groups and cleavage of ether linkages between the phenolic units of the lignin; we then observed elimination of methoxy groups, mainly by demethylation followed by dehydroxylation. The main products obtained during the reaction were phenolic and deoxygenated compounds such as aromatics, naphthenes and alkanes.
Stéphan, Véronique. "Construction d'objets symboliques par synthèse des résultats de requêtes SQL." Paris 9, 1998. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=1998PA090019.
Повний текст джерелаSibottier, Emilie. "Génération électro-assistée de films à base de silice : fonctionnalisation, mésostructuration et applications analytiques." Thesis, Nancy 1, 2007. http://www.theses.fr/2007NAN10101/document.
The study deals with various aspects of a novel sol-gel synthesis method: the electro-assisted generation of functionalized and/or mesostructured silica thin films, and their applications in analytical electrochemistry. Sol-gel-derived silica films functionalized with amine or thiol groups have been electrogenerated on gold electrodes. The formation of a partial self-assembled monolayer of mercaptopropyltrimethoxysilane (MPTMS) on gold led to a silica film adhering well to the electrode surface, the MPTMS acting as a "molecular glue". The whole process was characterized by two successive distinct rates, starting with a slow deposition stage leading to thin deposits, followed by much faster film growth in the form of macroporous coatings. The use of these modified electrodes as a voltammetric sensor for copper(II) was considered. By adding a surfactant to the synthesis medium, it is possible to electrogenerate mesostructured silica films with a hexagonal structure whose pore channels are oriented perpendicularly to the substrate (which is difficult to obtain by other methods). The electrochemically induced self-assembly of surfactant-templated silica thin films can be applied to various conducting supports. The broad interest of the novel method was demonstrated by its ability to produce homogeneous silica deposits on non-planar surfaces or heterogeneous substrates, which is difficult with traditional film deposition techniques. Finally, a preliminary approach was proposed for coupling the electrodeposition process with a scanning electrochemical microscope, in order to obtain localized sol-gel deposits at the micrometer scale on gold.
Kieu, Van Cuong. "Modèle de dégradation d’images de documents anciens pour la génération de données semi-synthétiques." Thesis, La Rochelle, 2014. http://www.theses.fr/2014LAROS029/document.
In the last two decades, the increase in document image digitization projects has resulted in scientific effervescence for conceiving document image processing and analysis algorithms (handwritten recognition, document structure analysis, spotting and indexing/retrieval of graphical elements, etc.). A number of successful algorithms are based on learning (supervised, semi-supervised or unsupervised). In order to train such algorithms and to compare their performance, the document image analysis community needs many publicly available annotated document image databases, whose contents must be exhaustive enough to be representative of the possible variations in the documents to process or analyze. Creating real document image databases requires an automatic or manual annotation process. The performance of an automatic annotation process depends on the quality and completeness of these databases, and annotation therefore remains largely manual; but the manual process is complicated, subjective and tedious. To overcome such difficulties, several crowd-sourcing initiatives have been proposed, some of them modelled as games to be more attractive. Such processes significantly reduce the cost and subjectivity of annotation, but difficulties still exist: transcription and text-line alignment, for example, still have to be carried out manually. Since the 1990s, alternative document image generation approaches have been proposed, including the generation of semi-synthetic document images mimicking real ones. Semi-synthetic document image generation allows creating, rapidly and cheaply, benchmarking databases for evaluating the performance of, and for training, document processing and analysis algorithms. In the context of the DIGIDOC project (Document Image diGitisation with Interactive DescriptiOn Capability) funded by the ANR (Agence Nationale de la Recherche), we focus on semi-synthetic document image generation adapted to ancient documents. First, we investigate new degradation models, or adapt existing ones to ancient documents, such as bleed-through, distortion and character degradation models. Second, we apply these degradation models to generate semi-synthetic document image databases for performance evaluation (e.g., the ICDAR 2013 and GREC 2013 competitions) or for performance improvement (by re-training a handwritten recognition system, a segmentation system and a binarisation system). This research work opens many collaboration opportunities with other researchers to share our experimental results with the scientific community; this collaborative work also helps us validate our degradation models and prove the efficiency of semi-synthetic document images for performance evaluation and re-training.
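As a flavor of what a degradation model does, here is a deliberately crude sketch combining ink fading and background speckle (the thesis's bleed-through, distortion and character degradation models are far more elaborate; all parameters here are invented):

```python
import numpy as np

def degrade(img, p_fade=0.05, p_stain=0.02, rng=None):
    """img: 2-D float array in [0, 1], with 0 = ink. Randomly fades ink pixels
    and adds dark speckle on the background."""
    rng = rng or np.random.default_rng(0)
    out = img.copy()
    ink = out < 0.5
    fade = ink & (rng.random(img.shape) < p_fade)
    out[fade] = 0.8                 # faded / eroded ink
    stain = ~ink & (rng.random(img.shape) < p_stain)
    out[stain] = 0.2                # dark speckle on the background
    return out
```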
Benalia, Akram Djellal. "HELPDraw : un environnement visuel pour la génération automatique de programmes à parallélisme de données." Lille 1, 1995. http://www.theses.fr/1995LIL10095.
Nesvijevskaia, Anna. "Phénomène Big Data en entreprise : processus projet, génération de valeur et Médiation Homme-Données." Thesis, Paris, CNAM, 2019. http://www.theses.fr/2019CNAM1247.
Big Data, a sociotechnical phenomenon carrying myths, is reflected in companies by the implementation of first projects, especially Data Science projects; however, these do not seem to generate the expected value. The action research carried out over the course of three years in the field, through an in-depth qualitative study of multiple cases, points to key factors that limit this generation of value, including overly self-contained project process models. The result is (1) an open data-project model (Brizo_DS), oriented toward usage and including knowledge capitalization, intended to reduce the uncertainties inherent in these exploratory projects and transferable to the scale of portfolio management of corporate data projects. It is completed by (2) a tool for documenting the quality of the processed data, the Databook, and (3) a Human-Data Mediation device, which together guarantee the alignment of the actors toward an optimal result.
Brusson, Jean-Michel. "Génération électrochimique de catalyseurs Ziegler-Natta pour la polymérisation de l'éthylène." Lille 1, 1988. http://www.theses.fr/1988LIL10125.
Roquier, Ghislain. "Etude de modèles flux de données pour la synthèse logicielle multiprocesseur." Rennes, INSA, 2004. http://www.theses.fr/2008ISAR0020.
Parallelism is a universal characteristic of modern computing platforms, from multi-core processors to programmable logic devices. The sequential programming paradigm is no longer suited to parallel and distributed architectures. The work presented in this thesis is founded on the AAA methodology for building parallel programs from high-level representations of both the application and the architecture. This work made it possible to extend the class of applications that can be modelled, through the specification of new graph formalisms. The final part of the document presents our involvement in the MPEG RVC framework. The RVC standard aims to facilitate the building of reference codecs for future MPEG standards, using dataflow descriptions of decoders written in a new dataflow language called CAL. This work allowed us to specify and develop a software synthesis tool that automatically translates dataflow programs written in CAL.
Mehira, Djamel. "Bases de données images : application à la réutilisation en synthèse d'images." La Rochelle, 1998. http://www.theses.fr/1998LAROS017.
Повний текст джерелаAbdollahzadeh, Ali Akbar. "Validation de données par équilibrage de bilans : synthèse et nouvelles approches." Vandoeuvre-les-Nancy, INPL, 1997. http://www.theses.fr/1997INPL118N.
Повний текст джерелаThiéblin, Elodie. "Génération automatique d'alignements complexes d'ontologies." Thesis, Toulouse 3, 2019. http://www.theses.fr/2019TOU30135.
The Linked Open Data (LOD) cloud is composed of data repositories. The data in these repositories are described by vocabularies, also called ontologies. Each ontology has its own terminology and model, which leads to heterogeneity between them. To make the ontologies, and the data they describe, interoperable, ontology alignments establish correspondences, or links, between their entities. There are many ontology matching systems that generate simple alignments, i.e., they link one entity to another; however, to overcome ontology heterogeneity, more expressive correspondences are sometimes needed. Finding this kind of correspondence is a tedious task that can be automated. In this thesis, an automatic complex matching approach based on a user's knowledge needs and on common instances is proposed. The complex alignment field is still young, and little work addresses the evaluation of such alignments. To fill this gap, we propose an automatic, instance-based complex alignment evaluation system; a well-known alignment evaluation dataset has been extended for this evaluation.
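The principle of instance-based evaluation of a (possibly complex) correspondence can be stated in a few lines: compare the instance sets denoted by its source and target members over a common dataset. A minimal sketch, not the thesis's actual system:

```python
def instance_scores(src_instances, tgt_instances):
    """Precision/recall/F1 of one correspondence, computed from the instances
    retrieved by its source and target expressions over the same dataset."""
    src, tgt = set(src_instances), set(tgt_instances)
    inter = len(src & tgt)
    precision = inter / len(tgt) if tgt else 0.0
    recall = inter / len(src) if src else 0.0
    f1 = 2 * precision * recall / (precision + recall) if inter else 0.0
    return precision, recall, f1

print(instance_scores({"a", "b", "c"}, {"b", "c", "d"}))  # (0.667, 0.667, 0.667)
```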
Zerroukhi, Amar. "Synthèse et caractérisation de copolymères en vue de leurs applications en optique non linéaire." Lyon 1, 1993. http://www.theses.fr/1993LYO10126.
Bonnel, Nicolas. "Génération dynamique de présentations interactives en multimédia 3D, de données, pour les applications en ligne." Phd thesis, Université Rennes 1, 2006. http://tel.archives-ouvertes.fr/tel-00532641.
Abdelmoula, Mariem. "Génération automatique de jeux de tests avec analyse symbolique des données pour les systèmes embarqués." Thesis, Nice, 2014. http://www.theses.fr/2014NICE4149/document.
One of the biggest challenges in hardware and software design is to ensure that a system is error-free. Small errors in reactive embedded systems can have disastrous and costly consequences for a project. Preventing such errors by identifying the most probable cases of erratic system behavior is quite challenging: tests in industry are generally non-exhaustive, while formal verification in scientific research often suffers from the combinatorial explosion problem. In this context, we present a new approach for generating exhaustive test sets that combines the underlying principles of industrial testing and of academic formal verification. Our approach builds a generic model of the system under test according to the synchronous approach. The goal is to identify the optimal preconditions for restricting the state space of the model, so that test generation can take place on significant subspaces only; all possible test sets are then generated from the extracted subspace preconditions. Our approach features a simpler and more efficient quasi-flattening algorithm than existing techniques, and a useful compiled internal description for checking safety properties and reducing the state-space combinatorial explosion. It also offers symbolic processing of numeric data, yielding more expressive and concrete tests of the system. We implemented our approach in a tool called GAJE; to illustrate our work, this tool was applied to verify an industrial project on contactless smart card security.
Desbois-Bédard, Laurence. "Génération de données synthétiques pour des variables continues : étude de différentes méthodes utilisant les copules." Master's thesis, Université Laval, 2017. http://hdl.handle.net/20.500.11794/27748.
Statistical agencies face a growing demand for releasing microdata to the public. To this end, many techniques have been proposed for publishing microdata while preserving confidentiality, synthetic data generation in particular. This thesis focuses on that technique, presenting two existing methods, GADP and C-GADP, as well as suggesting a new one based on vine copula models. GADP assumes that the variables of the original and synthetic data are jointly normally distributed, while C-GADP assumes that they follow a normal copula distribution; vine copula models are proposed for their greater flexibility. These three methods are then assessed in terms of utility and risk. Data utility depends on maintaining certain similarities between the original and confidential data, while risk comes in two types: re-identification and inference. This work focuses on utility, examined with several analysis-specific measures and a global measure based on propensity scores, and on the risk of inference, evaluated with a distance-based prediction.
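The normal-copula idea can be sketched as follows: send each margin to normal scores, fit a correlation matrix, resample, and map back through the empirical quantiles. A simplified illustration of the principle, not the C-GADP procedure itself:

```python
import numpy as np
from scipy import stats

def gaussian_copula_synth(X, rng=None):
    """X: (n, p) continuous data. Draw n synthetic rows with (approximately)
    the same margins and the same normal-copula dependence."""
    rng = rng or np.random.default_rng(0)
    n, p = X.shape
    u = (stats.rankdata(X, axis=0) - 0.5) / n          # ranks mapped into (0, 1)
    corr = np.corrcoef(stats.norm.ppf(u), rowvar=False)
    z_new = rng.multivariate_normal(np.zeros(p), corr, size=n)
    u_new = stats.norm.cdf(z_new)
    # Invert each empirical margin.
    return np.column_stack([np.quantile(X[:, j], u_new[:, j]) for j in range(p)])
```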
Genevaux, Jean-David. "Représentation, modélisation et génération procédurale de terrains." Thesis, Lyon 2, 2015. http://www.theses.fr/2015LYO22013/document.
This PhD (entitled "Representation, modelisation and procedural generation of terrains") is related to digital content creation for movies and video games, especially natural scenes. Our work is dedicated to handling and generating landscapes efficiently. We propose a new model based on a construction tree, inside which the user can manipulate parts of the terrain intuitively. We also present techniques to efficiently visualize such a model. Finally, we present a new algorithm for generating large-scale terrains exhibiting hierarchical structures based on their hydrographic networks: elevation is generated in broad compliance with water-transport principles, without resorting to costly hydraulic simulations.
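One simple way to obtain elevations that are consistent with water transport over a drainage network, without any hydraulic simulation, is a Dijkstra-like sweep from the outlet that raises each node above the node it drains into. This is only an illustration of the constraint, not the construction-tree algorithm of the thesis:

```python
import heapq

def elevations_from_network(neighbors, outlet, slope):
    """Assign elevations so that every node has a strictly descending path to
    the outlet: nodes are settled in order of increasing elevation."""
    elev = {outlet: 0.0}
    heap = [(0.0, outlet)]
    while heap:
        h, u = heapq.heappop(heap)
        if h > elev.get(u, float("inf")):
            continue
        for v in neighbors[u]:
            nh = h + slope()                 # positive increment per edge
            if nh < elev.get(v, float("inf")):
                elev[v] = nh
                heapq.heappush(heap, (nh, v))
    return elev

net = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1]}
print(elevations_from_network(net, outlet=0, slope=lambda: 1.0))
# {0: 0.0, 1: 1.0, 2: 2.0, 3: 2.0}
```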
Kou, Huaizhong. "Génération d'adaptateurs web intelligents à l'aide de techniques de fouilles de texte." Versailles-St Quentin en Yvelines, 2003. http://www.theses.fr/2003VERS0011.
This thesis defines a framework for semantically integrating Web information, called SEWISE. It can integrate text information from various Web sources belonging to an application domain into a common domain-specific concept ontology. In SEWISE, Web wrappers are built around different Web sites to automatically extract relevant information, and text mining technologies are then used to discover the semantics of the Web documents. SEWISE can thus ease topic-oriented information research over the Web. Three problems related to document categorization are studied. First, we investigate approaches to feature selection and propose two approaches, CBA and IBA, for selecting features. Next, a mathematical model is proposed to estimate statistical term associations and integrate them into the document similarity model. Finally, the category-score calculation algorithms used by k-NN classifiers are studied, and two weighted algorithms, CBW and IBW, for calculating category scores are proposed.
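For context, the baseline category score that such weighted k-NN schemes refine is a similarity-weighted vote of the k nearest training documents (a generic sketch; the CBW/IBW weightings themselves are not reproduced here):

```python
import numpy as np

def category_scores(sims, labels, k, categories):
    """sims: similarity of the test document to each training document.
    A category's score sums the similarities of the k nearest training
    documents carrying that label."""
    top = np.argsort(sims)[::-1][:k]
    return {c: float(sum(sims[i] for i in top if labels[i] == c))
            for c in categories}

sims = np.array([0.9, 0.1, 0.7, 0.4])
labels = ["sport", "tech", "sport", "tech"]
print(category_scores(sims, labels, k=3, categories={"sport", "tech"}))
# {'sport': 1.6, 'tech': 0.4}
```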
Cao, Wenjie. "Modélisation et prototypage d'une méthodologie de génération intégrant les informations pragmatiques, syntaxiques et de contrôle de prosodie." Grenoble INPG, 2007. http://www.theses.fr/2007INPG0088.
The topic of this thesis is improving speech synthesis by starting from abstract semantic representations and pragmatic information (affect, emotions, and possibly dialogue context). The thesis shows that the naturalness of synthesized speech can be improved by directly inserting annotations (in SSML, SABLE or XML format) into the text to be pronounced, in order to control various parameters (energy, voice type, prosody); but also by parameterizing the natural language generation procedure, starting from an abstract representation and integrating the selection of lexicon and syntax as well as the control of prosody.
Moyse, Gilles. "Résumés linguistiques de données numériques : interprétabilité et périodicité de séries." Electronic Thesis or Diss., Paris 6, 2016. http://www.theses.fr/2016PA066526.
Our research is in the field of fuzzy linguistic summaries (FLS), which allow generating natural-language sentences describing very large amounts of numerical data, providing concise and intelligible views of these data. We first focus on the interpretability of FLS, crucial for providing end-users with an easily understandable text, but hard to achieve due to its linguistic form. Going beyond existing work on that topic, which is based on the basic components of FLS, we propose a general approach to the interpretability of summaries, considering them globally as groups of sentences, and we focus more specifically on their consistency. In order to guarantee consistency in the framework of standard fuzzy logic, we introduce a new model of oppositions between increasingly complex sentences. The model allows us to show that these consistency properties can be satisfied by selecting a specific negation approach. Moreover, based on this model, we design a 4-dimensional cube displaying all the possible oppositions between sentences in an FLS and show that it generalizes several existing logical opposition structures. We then consider data in the form of numerical series and focus on linguistic summaries of their periodicity: the sentences we propose indicate the extent to which the series are periodic and offer an appropriate linguistic expression of their periods. The proposed extraction method, called DPE for Detection of Periodic Events, splits the data adaptively and without any prior information, using tools from mathematical morphology. The segments are then exploited to compute the period and the periodicity, measuring the quality of the estimation and the extent to which the series is periodic. Lastly, DPE returns descriptive sentences of the form "Approximately every 2 hours, the customer arrival is important". Experiments with artificial and real data show the relevance of the proposed DPE method. From an algorithmic point of view, we propose an incremental and efficient implementation of DPE based on established update formulas, which makes DPE scalable and allows it to process real-time data streams. We also present an extension of DPE based on the local periodicity concept, allowing the identification of locally periodic subsequences in a numerical series using an original statistical test. The method, validated on artificial and real data, returns natural-language sentences that extract information of the form "Every two weeks during the first semester of the year, sales are high".
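The building block of such summaries is the degree of truth of a quantified sentence "Q of the data are A", classically evaluated by averaging the memberships and applying the quantifier. A minimal sketch with invented membership functions:

```python
import numpy as np

def truth_degree(values, mu_A, mu_Q):
    """Zadeh-style evaluation of 'Q of the data are A'."""
    return float(mu_Q(np.mean([mu_A(v) for v in values])))

mu_high = lambda v: np.clip((v - 50) / 30, 0, 1)    # fuzzy predicate "high"
mu_most = lambda r: np.clip((r - 0.3) / 0.5, 0, 1)  # fuzzy quantifier "most"
print(truth_degree([40, 70, 85, 90, 62], mu_high, mu_most))
```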
Raschia, Guillaume. "SaintEtiq : une approche floue pour la génération de résumés à partir de bases de données relationnelles." Nantes, 2001. http://www.theses.fr/2001NANT2099.
Cheaib, Rouba. "Les carboxymethyl glycosides lactones : synthèse et application à l'imagerie membranaire." Lyon, INSA, 2008. http://theses.insa-lyon.fr/publication/2008ISAL0056/these.pdf.
The carboxymethyl 3,4,6-tri-O-acetyl-α-D-glucopyranoside 2-O-lactone, prepared from isomaltulose, has been shown to be a good "glycoside donor" for the synthesis of compounds targeted either for their biological interest or for their potential physicochemical properties, such as pseudo-oligosaccharides, pseudo-glycoaminoacids and pseudo-glycolipids. New synthesis strategies for obtaining carboxymethyl glycoside lactones based on mono- and disaccharidic systems with varied protective groups are presented here. In addition, the functionalization of position 2, selectively released after opening of these lactones by nucleophilic agents, allowed the synthesis of multifunctionalized systems. An application of this strategy was also studied for the synthesis of amphiphilic products, aimed at the design of water-soluble membrane probes for membrane imaging by nonlinear optical microscopy.
Platzer, Auriane. "Mécanique numérique en grandes transformations pilotée par les données : De la génération de données sur mesure à une stratégie adaptative de calcul multiéchelle." Thesis, Ecole centrale de Nantes, 2020. http://www.theses.fr/2020ECDN0041.
Computational mechanics is a field in which a large amount of data is both consumed and produced. On the one hand, the recent developments of experimental measurement techniques have provided rich data for the identification process of constitutive models used in finite element simulations. On the other hand, multiscale analysis produces a huge amount of discrete values of displacements, strains and stresses from which knowledge is extracted on the overall material behavior. The constitutive model then acts as a bottleneck between upstream and downstream material data. In contrast, Kirchdoerfer and Ortiz (Computer Methods in Applied Mechanics and Engineering, 304, 81-101) proposed a model-free computing paradigm, called data-driven computational mechanics. The material response is then only represented by a database of raw material data (strain-stress pairs). The boundary value problem is thus reformulated as a constrained distance minimization between (i) the mechanical strain-stress state of the body, and (ii) the material database. In this thesis, we investigate the question of material data coverage, especially in the finite strain framework. The data-driven approach is first extended to a geometrically nonlinear setting: two alternative formulations are considered and a finite element solver is proposed for both. Second, we explore the generation of tailored databases using a mechanically meaningful sampling method. The approach is assessed by means of finite element analyses of complex structures exhibiting large deformations. Finally, we propose a prototype multiscale data-driven solver, in which the material database is adaptively enriched.
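The local step of that distance minimization can be written compactly: given the mechanical state (eps, sig) at an integration point, find the database pair closest in an energy-like norm built from a user-chosen modulus C. A minimal sketch of this Kirchdoerfer-Ortiz-style projection, for small arrays:

```python
import numpy as np

def nearest_material_state(eps, sig, D_eps, D_sig, C):
    """Return the database pair (eps*, sig*) minimizing
    |e - eps|_C^2 + |s - sig|_{C^-1}^2 over all rows (e, s) of the database."""
    Cinv = np.linalg.inv(C)
    d = (np.einsum("ni,ij,nj->n", D_eps - eps, C, D_eps - eps)
         + np.einsum("ni,ij,nj->n", D_sig - sig, Cinv, D_sig - sig))
    k = int(np.argmin(d))
    return D_eps[k], D_sig[k]
```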
Zaher, Noufal Issam al. "Outils de CAO pour la génération d'opérateurs arithmétiques auto-contrôlables." Grenoble INPG, 2001. http://www.theses.fr/2001INPG0028.
Thiessard, Frantz. "Détection des effets indésirables des médicaments par un système de génération automatisée du signal adapté à la base nationale française de pharmacovigilance." Bordeaux 2, 2004. http://www.theses.fr/2004BOR21184.
Evaluating and improving the risk/benefit ratio of drugs in the population implies surveillance of their adverse reactions after marketing. The main objective of pharmacovigilance is to detect adverse drug reactions, relying mainly on spontaneous notifications. French pharmacovigilance faces a very large data flow, yet no automatic method is available to produce a list of potentially suspect drug / adverse-drug-reaction associations. Eight methods were studied: Proportional Reporting Ratio (PRR), Reporting Odds Ratio (ROR), Yule's Q, Sequential Probability Ratio Test (SPRT2), Poisson probabilities, χ², Information Component (IC), and the Empirical Bayes Method (EBAM). The signals obtained with each method were compared on simulated data, then on real data from the French pharmacovigilance database.
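The two simplest of these disproportionality measures are computed from the 2x2 table of spontaneous-report counts; a short sketch (the example counts are invented):

```python
def prr(a, b, c, d):
    """Proportional Reporting Ratio. a: drug & event, b: drug & other events,
    c: other drugs & event, d: other drugs & other events."""
    return (a / (a + b)) / (c / (c + d))

def ror(a, b, c, d):
    """Reporting Odds Ratio from the same 2x2 table."""
    return (a * d) / (b * c)

print(prr(12, 388, 100, 9500), ror(12, 388, 100, 9500))
```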