Dissertations / Theses on the topic 'Intelligence artificielle générative'
Consult the top 50 dissertations / theses for your research on the topic 'Intelligence artificielle générative.'
You can also download the full text of each publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations and theses across a wide variety of disciplines and organise your bibliography correctly.
Lacan, Alice. "Transcriptomics data generation with deep generative models." Electronic Thesis or Diss., université Paris-Saclay, 2025. http://www.theses.fr/2025UPASG010.
This thesis explores deep generative models to improve synthetic transcriptomics data generation, addressing data scarcity in phenotype classification tasks. We focus on Variational Autoencoders (VAEs), Generative Adversarial Networks (GANs), and diffusion models (DDPM/DDIM), assessing their ability to balance realism and diversity in high-dimensional tabular datasets. First, we adapt quality metrics for gene expression and introduce a knowledge-based self-attention module within GANs (AttGAN) to improve the fidelity-diversity trade-off. A main contribution is boosting classification performance using minimal real samples augmented with synthetic data. Second, we present the first adaptation of diffusion models to transcriptomic data, demonstrating competitiveness with VAEs and GANs. We also introduce an interpolation analysis that brings perspectives on data diversity and the identification of biomarkers. Finally, we present GMDA (Generative Modeling with Density Alignment), a resource-efficient alternative to GANs that balances realism and diversity by locally aligning real and synthetic sample densities. This framework allows controlled exploration of instance space, stable training, and frugality across datasets. Ultimately, this thesis provides comprehensive insights and methodologies to advance synthetic transcriptomics data generation.
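The augmentation idea at the heart of this abstract (training a classifier on a handful of real samples plus synthetic ones) can be sketched as follows. This is a minimal illustration only: a per-class Gaussian sampler stands in for the thesis's actual VAE/GAN/diffusion generators, and all names and toy data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_generator(X):
    # Per-gene Gaussian fit: a deliberately crude stand-in for a trained
    # VAE/GAN/DDPM generator (illustrative only).
    return X.mean(axis=0), X.std(axis=0) + 1e-6

def augment(X_real, y_real, n_per_class):
    """Augment a scarce real training set with synthetic samples, class by class."""
    Xs, ys = [X_real], [y_real]
    for c in np.unique(y_real):
        mu, sigma = fit_generator(X_real[y_real == c])
        Xs.append(rng.normal(mu, sigma, size=(n_per_class, X_real.shape[1])))
        ys.append(np.full(n_per_class, c))
    return np.vstack(Xs), np.concatenate(ys)

# toy "expression" data: 2 phenotypes, 5 genes, only 4 real samples per class
X = np.vstack([rng.normal(0.0, 1.0, (4, 5)), rng.normal(2.0, 1.0, (4, 5))])
y = np.array([0] * 4 + [1] * 4)
X_aug, y_aug = augment(X, y, n_per_class=50)
```

The augmented set can then be fed to any downstream classifier; the thesis's contribution is making the synthetic portion realistic and diverse enough for this to help.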
Régin, Florian. "Programmation par contraintes générative." Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ4052.
Constraint Programming (CP) is a method for solving combinatorial problems, encapsulating techniques from artificial intelligence, computer science and operations research. Model Checking (MC) is a formal technique for automatically proving whether a given system satisfies a specification. To handle MC problems, which often have many states and transitions, CP solvers are slower or use more memory than MC solvers (called model checkers). The latter are sometimes able to push back the exponential blow-up (in time/space) further than CP solvers. This thesis aims to answer two questions: How can we create a CP technique that can solve MC problems as efficiently or more efficiently than model checkers? How can this technique be used on classical CP problems as efficiently or more efficiently than traditional CP solvers? We answered the first question by creating GenCP, a CP technique inspired by the Generative Constraint Satisfaction Problem (GCSP) and On-the-fly MC (OTF). To answer the second question, we refined GenCP and demonstrated its capabilities against traditional CP on classic CP problems, such as the NQueens problem. "Generative Constraint Programming" is the term we coin for any CP technique that resembles GenCP/GCSP. The major drawback of MC problems is the state explosion problem, and several variants of MC have been created to address it. OTF is the variant of MC that achieves the best results compared with CP solvers on MC problems. OTF does not start with any states; it creates and destroys states on the fly during the search for solutions. GCSP was created to solve configuration problems (which resemble MC problems). GCSP involves three main concepts: variables, which represent the objects of the problem; domains, which represent the values associated with the variables; and constraints, which represent properties associated with one or more variables. In traditional CP, these concepts must be defined prior to the search for solutions.
In GCSP, constraints must be defined prior to the solution search, while variables and domains are generated during it. GCSP is more efficient than traditional CP on MC problems, but less efficient than OTF. We designed GenCP as a mix of GCSP and OTF. To the best of our knowledge, GenCP is the first CP technique capable of starting the solution search with none of the CP concepts defined; GenCP generates the concepts during the solution search. GenCP outperforms GCSP and traditional CP on MC problems, and is equivalent to model checkers. GenCP has been refined using OTF: refining consists of simultaneously processing domain/constraint generation and propagation. The refined version of GenCP generates domains that are guaranteed to satisfy the constraints, and which are therefore often smaller than in the unrefined version. The refined version has proven efficient, achieving better results than traditional CP on classical CP problems: NQueens, All Interval, Graceful Graphs and Langford Number. To further demonstrate the advantages of GenCP over traditional CP, we introduced GenCPML, a new hybridization between CP and Machine Learning (ML), in which domains are created on the fly during the search for solutions by an ML model. On some problems, GenCPML achieves better results than CP alone and ML alone.
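The on-the-fly generation idea described above can be illustrated on the NQueens problem mentioned in the abstract. The sketch below is not GenCP itself; it merely shows, in a few lines, how the domain of each new variable can be generated already pruned by the constraints, instead of being declared up front as in traditional CP.

```python
def consistent(assignment, row, col):
    """NQueens constraints against the queens already placed (one per row)."""
    return all(c != col and abs(c - col) != row - r
               for r, c in enumerate(assignment))

def nqueens_on_the_fly(n):
    """Depth-first search where each new variable's domain is generated on the
    fly, pre-filtered by the constraints (illustrative of the refined-GenCP idea)."""
    def search(assignment):
        row = len(assignment)
        if row == n:
            return assignment
        # generate the next variable's domain on the fly, already pruned
        for col in (c for c in range(n) if consistent(assignment, row, c)):
            result = search(assignment + [col])
            if result is not None:
                return result
        return None
    return search([])
```

Because each generated domain is guaranteed consistent with the partial assignment, the search never branches on a value that an up-front constraint check would reject.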
Haidar, Ahmad. "Responsible Artificial Intelligence : Designing Frameworks for Ethical, Sustainable, and Risk-Aware Practices." Electronic Thesis or Diss., université Paris-Saclay, 2024. https://www.biblio.univ-evry.fr/theses/2024/interne/2024UPASI008.pdf.
Artificial Intelligence (AI) is rapidly transforming the world, redefining the relationship between technology and society. This thesis investigates the critical need for responsible and sustainable development, governance, and usage of AI and Generative AI (GAI). The study addresses the ethical risks, regulatory gaps, and challenges associated with AI systems while proposing actionable frameworks for fostering Responsible Artificial Intelligence (RAI) and Responsible Digital Innovation (RDI). The thesis begins with a comprehensive review of 27 global AI ethical declarations to identify dominant principles such as transparency, fairness, accountability, and sustainability. Despite their significance, these principles often lack the necessary tools for practical implementation. To address this gap, the second study in the research presents an integrative framework for RAI based on four dimensions: technical, AI for sustainability, legal, and responsible innovation management. The third part of the thesis focuses on RDI through a qualitative study of 18 interviews with managers from diverse sectors. Five key dimensions are identified: strategy, digital-specific challenges, organizational KPIs, end-user impact, and catalysts. These dimensions enable companies to adopt sustainable and responsible innovation practices while overcoming obstacles in implementation. The fourth study analyzes emerging risks from GAI, such as misinformation, disinformation, bias, privacy breaches, environmental concerns, and job displacement. Using a dataset of 858 incidents, this research employs binary logistic regression to examine the societal impact of these risks. The results highlight the urgent need for stronger regulatory frameworks, corporate digital responsibility, and ethical AI governance. Thus, this thesis provides critical contributions to the fields of RDI and RAI by evaluating ethical principles, proposing integrative frameworks, and identifying emerging risks.
It emphasizes the importance of aligning AI governance with international standards to ensure that AI technologies serve humanity sustainably and equitably.
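For readers unfamiliar with the fourth study's method, a binary logistic regression of the kind it applies to incident records can be sketched as follows. The features, labels and data here are entirely hypothetical toy values, and the plain gradient-descent fit is illustrative rather than the study's actual analysis.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, steps=3000):
    """Binary logistic regression fitted by plain gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted impact probability
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * (p - y).mean()
    return w, b

# hypothetical toy incident features: [is_misinformation, is_privacy_breach]
X = np.array([[1, 0], [1, 1], [0, 1], [0, 0], [1, 0], [0, 0]], dtype=float)
y = np.array([1, 1, 1, 0, 1, 0], dtype=float)   # 1 = high societal impact
w, b = fit_logistic(X, y)
p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
```

The fitted coefficients `w` indicate how each (binary) risk category shifts the log-odds of an incident having high societal impact, which is the kind of reading the study draws from its 858-incident dataset.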
Hadjeres, Gaëtan. "Modèles génératifs profonds pour la génération interactive de musique symbolique." Electronic Thesis or Diss., Sorbonne université, 2018. http://www.theses.fr/2018SORUS027.
This thesis discusses the use of deep generative models for symbolic music generation. We focus on devising interactive generative models able to create new creative processes through a fruitful dialogue between a human composer and a computer. Recent advances in artificial intelligence have led to the development of powerful generative models able to generate musical content without human intervention. I believe that this practice cannot thrive in the future, since human experience and human appreciation are at the crux of artistic production. However, the need for flexible and expressive tools that could enhance content creators' creativity is patent; the development and potential of such novel AI-augmented computer music tools are promising. In this manuscript, I propose novel architectures that put artists back in the loop. The proposed models share the common characteristic that they are devised so that a user can control the generated musical content in a creative way. In order to create a user-friendly interaction with these interactive deep generative models, user interfaces were developed. I believe that new compositional paradigms will emerge from the possibilities offered by these enhanced controls. This thesis ends with the presentation of genuine musical projects, such as concerts featuring these new creative tools.
Abdelghani, Rania. "Guider les esprits de demain : agents conversationnels pour entraîner la curiosité et la métacognition chez les jeunes apprenants." Electronic Thesis or Diss., Bordeaux, 2024. http://www.theses.fr/2024BORD0152.
Epistemic curiosity—the desire to actively seek information for its inherent pleasure—is a complex phenomenon extensively studied across various domains. Researchers in psychology, neuroscience, and computer science have repeatedly highlighted its foundational role in cognitive development and in fostering lifelong learning. Further, epistemic curiosity is considered key to cultivating a flexible mindset capable of adapting to the world's uncertainties. These insights have spurred significant interest in the educational field, which recognizes curiosity as essential for helping individuals be active and in control of their learning. These properties are crucial for addressing some of today's major educational challenges, namely offering students individualized support suited to their competencies and motivations, and helping them learn autonomously and independently in dynamic and uncertain environments. Despite this well-documented importance of curiosity in education, its practical implementation and promotion in the classroom remain limited. Notably, one of the primary expressions of curiosity—question-asking (QA)—is nearly absent from most of today's educational settings. Several reports show that students often spend much of their time answering teachers' questions rather than asking their own. And when they do ask questions, these are typically low-level and memory-based, as opposed to curious questions that seek novel information. In this context, this thesis aims to develop educational technologies that foster children's curiosity-driven learning by training curious QA behaviors and the related metacognitive (MC) skills. Ultimately, we implemented interventions to train three dimensions: 1) Linguistic QA skills: we implement a conversational agent to train the ability to formulate curious questions using compound questioning words and correct interrogative constructions.
It helps children generate curious questions during reading-comprehension tasks by providing specific cues. The effectiveness of different cue structures (a sentence vs. a series of keywords) and implementations (hand-generated vs. GPT-3-generated content) is studied. 2) Curiosity-related MC skills: we create animated videos to convey declarative knowledge about curiosity and its related MC skills: the ability to self-reflect, make educated guesses, formulate efficient questions, and evaluate newly acquired information. We also propose sessions to practice these skills during reading-comprehension tasks, using specific cues given by conversational agents we designed to train procedural MC. 3) Social perceptions and beliefs: we create animated videos to address the negative constructs learners tend to hold about curiosity. They explain the importance of curiosity and how to control it during learning. Over 150 French students aged 9 to 11 were recruited to test these trainings across the three dimensions. Combined, the interventions enhanced students' MC sensitivity and perception of curiosity. In turn, these factors facilitated students' divergent QA behaviors, which led to stronger learning progress and positive learning experiences. Despite these positive results, our methods had limitations, particularly their short duration; we suggest testing longer-lasting interventions to examine their long-term effects on curiosity. Finally, this thesis highlights the need to continue exploring QA and MC research in the age of Generative Artificial Intelligence (GAI). Indeed, while GAI facilitates access to information, it still requires good QA abilities and MC monitoring to prevent misinformation and facilitate its detection. We thus propose a framework linking efficient GAI use in education to QA and MC skills and GAI literacy, and present a behavioral study we intend to conduct to test this framework.
El Mernissi, Karim. "Une étude de la génération d'explication dans un système à base de règles." Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066332/document.
The concept of a Business Rule Management System (BRMS) was introduced to facilitate the design, management and execution of company-specific business policies. Based on a symbolic approach, the main idea behind these tools is to enable business users to manage business rule changes in the system without requiring programming skills. It is therefore a question of providing them with tools that let them formulate their business policies in near-natural-language form and automate their processing. Nowadays, with the expansion of intelligent systems, we have to cope with increasingly complex decision logic and large volumes of data, and it is not straightforward to identify the causes leading to a decision. There is a growing need to justify and optimize automated decisions in a short time frame, which motivates the integration of advanced explanatory components into such systems. Thus, the main challenge of this research is to provide an industrializable approach for explaining the decision-making processes of business rule applications and, more broadly, rule-based systems. This approach should provide the information necessary for a general understanding of the decision, serve as a justification for internal and external entities, and enable the improvement of existing rule engines. To this end, the focus is on the generation of the explanations themselves as well as on the manner and form in which they are delivered.
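The kind of raw material such explanations build on, i.e. a trace of which rules fired on which facts, can be illustrated with a toy forward-chaining engine. This sketch is not the thesis's approach nor any BRMS product's API; the rule names and facts are invented for illustration.

```python
def run_rules(facts, rules):
    """Forward-chain to a fixpoint, recording (rule, premises, conclusion)
    for every rule that fires; the trace is raw material for an explanation."""
    facts = set(facts)
    trace = []
    changed = True
    while changed:
        changed = False
        for name, premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace.append((name, sorted(premises), conclusion))
                changed = True
    return facts, trace

# invented toy business rules
rules = [
    ("gold-status", {"orders>10"}, "gold"),
    ("promo-discount", {"gold", "in-promo"}, "10%-discount"),
]
facts, trace = run_rules({"orders>10", "in-promo"}, rules)
```

Rendering `trace` in natural language ("the discount was granted because the customer is gold, which follows from having more than 10 orders") is exactly the step the thesis studies in depth.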
Boulic-Bouadjio, Audren. "Génération multi-agents de réseaux sociaux." Thesis, Toulouse 1, 2021. http://www.theses.fr/2021TOU10003.
Bonnefoi, Pierre-François. "Techniques de satisfaction de contraintes pour la modélisation déclarative : application à la génération concurrente de scènes." Limoges, 1999. http://www.theses.fr/1999LIMO0045.
Ndiaye, Seydina Moussa. "Apprentissage par renforcement en horizon fini : Application à la génération de règles pour la conduite de culture." Toulouse 3, 1999. http://www.theses.fr/1999TOU30010.
Faiz, Rim. "Modélisation formelle des connaissances temporelles à partir de textes en vue d'une génération automatique de programmes." Paris 9, 1996. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=1996PA090023.
Niquil, Yves. "Acquisition d'exemples en discrimination : spécification des exemples par génération de scenarios." Paris 9, 1993. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=1993PA090063.
Elferkh Jrad, Zeina. "Apport des techniques de l'intelligence artificielle dans la négociation dynamique de niveaux de service : proposition d'une interface utilisateur pour l'Internet de nouvelle génération." Paris 13, 2006. http://www.theses.fr/2006PA132005.
Launay, Jean-Pierre. "Génération de code de protocole de communication par système expert." Paris 9, 1995. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=1995PA090030.
Peureux, Fabien. "Génération de tests aux limites à partir de spécifications B en programmation logique avec contraintes ensemblistes." Besançon, 2002. http://www.theses.fr/2002BESA2046.
The work presented in this thesis defines a functional test case generation method based on an original approach using formal methods. This method is fully supported by a tool-set called BZ-Testing-Tools. Our goal is to test an implementation of the system using its B abstract machine, even though the implementation is not derived from the formal model. The test generation process works by translating the formal specification of the system under test into an equivalent constraint system. A domain partition of the state variables of the specification is performed to derive all the possible behaviours of the system. From each behaviour, test objectives are obtained by computing boundary goals with a specific solver using Constraint Logic Programming over sets. Test cases are then generated by searching for a sequence of operations that reaches the boundary goal from the initial state. Finally, the formal specification is used as an oracle to determine the expected output for a given input.
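The notion of boundary goals used here can be illustrated in miniature: given a state variable's domain and the constraints of one behaviour, the boundary test targets are its extreme feasible values. The sketch below uses a hypothetical toy specification and brute-force filtering rather than a real set-constraint solver.

```python
def boundary_goals(domain, constraints):
    """Boundary test targets: the extreme values of the feasible sub-domain."""
    feasible = [v for v in domain if all(c(v) for c in constraints)]
    return sorted({min(feasible), max(feasible)}) if feasible else []

# hypothetical behaviour extracted from a specification: speed in 0..120,
# with the guard "speed >= 50" enabling the operation under test
goals = boundary_goals(range(0, 121), [lambda v: v >= 50])
```

Each boundary goal then becomes the target state that a sequence of operations must reach from the initial state, as described in the abstract.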
Godbout, Mathieu. "Approches par bandit pour la génération automatique de résumés de textes." Master's thesis, Université Laval, 2021. http://hdl.handle.net/20.500.11794/69488.
This thesis discusses the use of bandit methods to solve the problem of training extractive summary generation models. Extractive models, which build summaries by selecting sentences from an original document, are difficult to train because the target summary of a document is usually not built in an extractive way. It is for this purpose that we propose to frame the production of extractive summaries as different bandit problems, for which there exist algorithms that can be leveraged for training summarization models. In this work, BanditSum is first presented, an approach drawn from the literature that sees the generation of the summaries of a set of documents as a contextual bandit problem. Next, we introduce CombiSum, a new algorithm which formulates the generation of the summary of a single document as a combinatorial bandit. By exploiting the combinatorial formulation, CombiSum manages to incorporate the notion of the extractive potential of each sentence of a document in its training. Finally, we propose LinCombiSum, the linear variant of CombiSum, which exploits the similarities between sentences in a document and uses the linear combinatorial bandit formulation instead.
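A much-simplified, purely illustrative version of bandit-style extractive summarization might look like the following epsilon-greedy sketch, where each sentence is an arm and a summary-level reward is credited back to the selected sentences. It is not BanditSum or CombiSum; the reward function and document are hypothetical.

```python
import random

random.seed(0)

def train_bandit(n_sentences, k, reward_fn, steps=2000, eps=0.1):
    """Epsilon-greedy bandit: arms are sentences, a summary is a set of k arms.
    Summary-level rewards are credited back to each selected sentence."""
    value = [0.0] * n_sentences   # running mean reward per sentence
    count = [0] * n_sentences
    for _ in range(steps):
        if random.random() < eps:                     # explore
            pick = random.sample(range(n_sentences), k)
        else:                                         # exploit current estimates
            pick = sorted(range(n_sentences), key=lambda i: -value[i])[:k]
        r = reward_fn(pick)
        for i in pick:
            count[i] += 1
            value[i] += (r - value[i]) / count[i]
    return value

# hypothetical document: sentences 1 and 3 carry the reference-summary content
good = {1, 3}
values = train_bandit(5, 2, lambda pick: len(good & set(pick)) / len(good))
```

In a real system the reward would be a ROUGE-like score against a reference summary, and BanditSum/CombiSum use far more refined contextual and combinatorial formulations than this flat sketch.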
Tientcheu, Joseph. "Couplage fonctionnel homme-machine : des communications interactives et de la génération des faits d'émotion." Paris 8, 2003. http://www.theses.fr/2003PA082328.
This text takes a mediological approach to communication, in comparison with many other sciences. It develops a theory of the human-computer interface with the help of psychology, philosophy and Artificial Intelligence, which can give us a new vision of hypermedia communications. The first part illustrates how certain parameters and concepts are involved in our process construction. The second part describes in what ways the human being is particular; the third part shows some common points and differences between humans and computers with respect to communication. The fourth part presents the staging tools relevant to the rich field of our commitment. The text ends with a reflection on systemic construction that can help us build an ecological approach to communication.
Fahlaoui, Ouafae. "Système expert de génération automatique de Grafcet à partir d'une spécification lexicale libre du cahier des charges." Paris 11, 1987. http://www.theses.fr/1987PA112204.
Giroire, Hélène. "Un système à base de connaissances pour la génération d'exercices dans des domaines liés au monde réel." Paris 6, 1989. http://www.theses.fr/1989PA066210.
Wang, Dong Hue. "Systèmes multi-agents adaptatifs avec contraintes temps-réel : De la spécification formelle à la vérification et à la génération de code." Evry-Val d'Essonne, 2005. http://www.theses.fr/2005EVRY0011.
The design of reactive systems must comply with logical correctness (the system does what it is supposed to do) and timeliness (the system has to satisfy a set of temporal constraints) criteria. In this work, we propose a global approach for the design of adaptive reactive systems, i.e. systems that dynamically adapt their architecture depending on the context. We use the timed automata formalism for the design of the agents' behaviour. This allows the properties of the system (regarding logical correctness and timeliness) to be evaluated beforehand, thanks to model-checking and simulation techniques. This model is coupled with tools we developed for automatic code generation, allowing a running multi-agent prototype satisfying the properties of the model to be produced very quickly.
Hankach, Pierre. "Génération automatique de textes par satisfaction de contraintes." Paris 7, 2009. http://www.theses.fr/2009PA070027.
We address in this thesis the construction of a natural language generation system, i.e. computer software that transforms a formal representation of information into a text in natural language. In our approach, we define the generation problem as a constraint satisfaction problem (CSP). The implemented system ensures an integrated processing of generation operations, as their different dependencies are taken into account and no priority is given to any type of operation over the others. In order to define the constraint satisfaction problem, we represent the construction operations of a text by decision variables. Individual operations that implement the same type of minimal expressions in the text form a generation task. We classify decision variables according to the type of operations they represent (e.g. content selection variables, document structuring variables, etc.). The linguistic rules that govern the operations are represented as constraints on the variables. A constraint can be defined over variables of the same type or of different types, capturing the dependency between the corresponding operations. The production of a text consists of resolving the global system of constraints, that is, finding an evaluation of the variables that satisfies all the constraints. As part of the grammar of constraints for generation, we particularly formulate the constraints that govern document structuring operations. We model by constraints the rhetorical structure of SDRT in order to yield coherent texts as the generator's output. Beforehand, in order to increase the generation capacities of our system, we extend the rhetorical structure to cover texts in non-canonical order. Furthermore, in addition to defining these coherence constraints, we formulate a set of constraints that enables the form of the macrostructure to be controlled by communicative goals. Finally, we propose a solution to the problem of the computational complexity of generating large texts.
This solution is based on generating a text by groups of clauses: the problem of generating a text is divided into several problems of reduced complexity, each concerned with generating one part of the text. These parts are of limited size, so the complexity associated with their generation remains reasonable. The proposed partitioning of generation is motivated by linguistic considerations.
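Casting generation operations as decision variables under constraints, as described above, can be illustrated with a deliberately tiny CSP. The variables, domains and constraints below are invented toy examples, and the brute-force solver stands in for real constraint resolution.

```python
from itertools import product

def solve_generation(variables, domains, constraints):
    """Brute-force CSP solver over text-construction decision variables
    (an illustrative miniature of casting generation as a CSP)."""
    for values in product(*(domains[v] for v in variables)):
        assignment = dict(zip(variables, values))
        if all(c(assignment) for c in constraints):
            return assignment
    return None

# hypothetical decision variables: content selection plus a discourse connective
variables = ["mention_cause", "mention_effect", "connective"]
domains = {
    "mention_cause": [True, False],
    "mention_effect": [True, False],
    "connective": ["because", "so", None],
}
constraints = [
    # coherence: a causal connective requires both cause and effect selected
    lambda a: a["connective"] is None or (a["mention_cause"] and a["mention_effect"]),
    # communicative goal: the effect must be expressed
    lambda a: a["mention_effect"],
    # style: require an explicit connective
    lambda a: a["connective"] is not None,
]
solution = solve_generation(variables, domains, constraints)
```

The point of the integrated formulation is visible even at this scale: content selection and document structuring are decided jointly, with no operation type taking priority.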
Naanaa, Wady. "Résolution de problèmes de satisfaction de contraintes intégrant la flexibilité et la symétrie : application à la génération de structures moléculaires." Lyon 1, 1996. http://www.theses.fr/1996LYO10320.
Full textFranceschi, Jean-Yves. "Apprentissage de représentations et modèles génératifs profonds dans les systèmes dynamiques." Electronic Thesis or Diss., Sorbonne université, 2022. http://www.theses.fr/2022SORUS014.
The recent rise of deep learning has been motivated by numerous scientific breakthroughs, particularly regarding representation learning and generative modeling. However, most of these achievements have been obtained on image or text data, whose evolution through time remains challenging for existing methods. Given their importance for autonomous systems that must adapt in a constantly evolving environment, these challenges have been actively investigated in a growing body of work. In this thesis, we follow this line of work and study several aspects of temporality and dynamical systems in deep unsupervised representation learning and generative modeling. First, we present a general-purpose deep unsupervised representation learning method for time series that tackles the scalability and adaptivity issues arising in practical applications. In a second part, we further study representation learning for sequences by focusing on structured and stochastic spatiotemporal data: videos and physical phenomena. We show in this context that performant temporal generative prediction models help to uncover meaningful and disentangled representations, and conversely. To this end, we highlight the crucial role of differential equations in the modeling and embedding of these natural sequences within sequential generative models. Finally, in a third part, we analyze more broadly a popular class of generative models, generative adversarial networks, through the lens of dynamical systems. We study the evolution of the involved neural networks with respect to training time by describing it with a differential equation, allowing us to gain a novel understanding of this generative model.
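The role of differential equations in modeling sequences, which this abstract highlights, can be illustrated by rolling out latent dynamics dz/dt = f(z) with a simple Euler integrator. The linear dynamics matrix below is a hypothetical toy stand-in for a trained network.

```python
import numpy as np

def euler_rollout(f, z0, dt, steps):
    """Integrate latent dynamics dz/dt = f(z) with the explicit Euler scheme."""
    traj = [z0]
    for _ in range(steps):
        traj.append(traj[-1] + dt * f(traj[-1]))
    return np.stack(traj)

# hypothetical "learned" dynamics: a slightly damped rotation in 2-d latent space
A = np.array([[-0.1, -1.0],
              [1.0, -0.1]])
traj = euler_rollout(lambda z: A @ z, np.array([1.0, 0.0]), dt=0.05, steps=200)
```

In a sequential generative model, `f` would be a neural network and each latent state along the trajectory would be decoded into a video frame or physical field.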
Molinet, Benjamin. "Génération et évaluation d'explications argumentatives en langage naturel appliquées au domaine médical." Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ4063.
Full textArgument(ation) mining, a rapidly growing area of Natural Language Processing (NLP) and computational models of argument, aims at the automatic recognition of argument structures (i.e., components and relations) in natural language textual resources. In the healthcare domain, argument mining has proven beneficial in providing methods to automatically detect argumentative structures to support Evidence-Based Medicine (EBM). The importance of these approaches relies on the fact that, despite the accuracy of neural models in medical diagnosis, explanation of their outcomes remains problematic. The thesis tackles this open issue and focuses on generation and assessment of natural language argumentative explanations for diagnosis predictions, supporting clinicians in decision making and learning. First, I proposed a novel complete pipeline to automatically generate natural language explanations of medical question answering exams for diagnoses relying on a medical ontology and clinical entities from exam texts. I defined a state-of-the-art medical named entity recognition and classification (NERC) system to detect layperson symptoms and medical findings that I align to ontology terms so as to justify a diagnosis of a clinical case provided to medical residents. NERC module, called SYMEXP, allows our system to generate template-based natural language argumentative explanations to justify why the correct answer is correct and why the other proposed options are not. Second, I proposed an argument-based explanation assessment framework, called ABEXA, to automatically extract the argumentative structure of a medical question answering document and highlight a set of customisable criteria to characterise the clinical explanation and the document argumentation. ABEXA tackles the issue of explanation assessment from the argumentative viewpoint by defining a set of graph rules over an automatically generated argumentative graph. 
Third, I contributed to the design and development of the ANTIDOTE software tool, proposing different modules for argumentation-driven explainable Artificial Intelligence for digital medicine. Our system offers the following functionalities: multilingual argumentative analysis for the medical domain; explanation, extraction and generation of clinical diagnoses; multilingual large language models for the medical domain; and the first multilingual benchmark for medical question answering. In conclusion, in this thesis, I explore how artificial intelligence combined with argumentation theory can lead to more transparent healthcare systems. We apply our results to the critical domain of medicine, showing their potential to support, for example, the education of clinical residents.
Ben Ameur, Ayoub. "Artificial intelligence for resource allocation in multi-tenant edge computing." Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAS019.
Full text
We consider in this thesis Edge Computing (EC) as a multi-tenant environment where Network Operators (NOs) own edge resources deployed in base stations, central offices and/or smart boxes, virtualize them, and let third-party Service Providers (SPs) - or tenants - distribute part of their applications in the edge in order to serve the requests sent by the users. SPs with heterogeneous requirements coexist in the edge, ranging from Ultra-Reliable Low Latency Communications (URLLC) for controlling cars or robots, to massive Machine Type Communication (mMTC) for the Internet of Things (IoT) requiring a massive number of connected devices, to media services, such as video streaming and Augmented/Virtual Reality (AR/VR), whose quality of experience is strongly dependent on the available resources. SPs independently orchestrate their sets of microservices, running on containers, which can be easily replicated, migrated or stopped. Each SP can adapt to the resources allocated by the NO, deciding whether to run microservices in the devices, in the edge nodes or in the cloud. We aim in this thesis to advance the emergence of real deployments of the "true" EC in real networks, by showing the utility that NOs can collect thanks to EC. We believe that this can encourage concrete engagement and investments of NOs in EC.
To this end, we design novel data-driven strategies that efficiently allocate resources between heterogeneous SPs at the edge owned by the NO, in order to optimize its relevant objectives, e.g., cost reduction, revenue maximization and better Quality of Service (QoS) perceived by end users, in terms of latency, reliability and throughput, while satisfying the SPs' requirements. This thesis presents a perspective on how NOs, the sole owners of resources at the far edge (e.g., at base stations), can extract value through the implementation of EC within a multi-tenant environment. By promoting this vision of EC and by supporting it via quantitative results and analysis, this thesis provides, mainly to NOs, findings that can influence decision strategies about the future deployment of EC. This might foster the emergence of novel low-latency and data-intensive applications, such as high-resolution augmented reality, which are not feasible in the current Cloud Computing (CC) setting. Another contribution of the thesis is that it provides solutions based on novel methods that harness the power of data-driven optimization. We indeed adapt cutting-edge techniques from Reinforcement Learning (RL) and sequential decision making to the practical problem of resource allocation in EC. In doing so, we succeed in reducing the learning time of the adopted strategies to scales that are compatible with the EC dynamics, via careful design of estimation models embedded in the learning process. Our strategies are conceived so as not to violate the confidentiality guarantees that are essential for SPs to accept running their computation at the EC, thanks to the multi-tenant setting.
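The learning loop described in this abstract can be illustrated, in a heavily simplified form, with a multi-armed-bandit sketch: an agent repeatedly tries candidate resource splits between service providers and keeps a running estimate of the operator utility of each. This is only a toy stand-in for the thesis' RL strategies; the three arms, their utilities, and the epsilon-greedy rule are invented for illustration.

```python
import random

def run_bandit(reward_fn, n_arms, steps=2000, eps=0.1, seed=0):
    """Epsilon-greedy selection of a resource-allocation configuration (an 'arm').

    reward_fn(arm, rng) returns a stochastic reward, e.g. the operator
    utility observed after applying that resource split.
    """
    rng = random.Random(seed)
    counts = [0] * n_arms
    values = [0.0] * n_arms          # running mean reward per arm
    for _ in range(steps):
        if rng.random() < eps:
            arm = rng.randrange(n_arms)                        # explore
        else:
            arm = max(range(n_arms), key=lambda a: values[a])  # exploit
        r = reward_fn(arm, rng)
        counts[arm] += 1
        values[arm] += (r - values[arm]) / counts[arm]
    return counts, values

# Hypothetical utilities of three ways to split edge capacity between
# service providers (URLLC-heavy, balanced, mMTC-heavy) - invented numbers.
def noisy_utility(arm, rng):
    base = [0.4, 0.7, 0.55][arm]
    return base + rng.gauss(0.0, 0.05)

counts, values = run_bandit(noisy_utility, n_arms=3)
best = max(range(3), key=lambda a: values[a])
```

In this toy run the agent concentrates its pulls on the configuration with the highest mean utility; the thesis' contribution lies precisely in making such learning fast enough for real EC dynamics, which this sketch does not attempt.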
Rarivomanana, Jens A. "Système CADOC : génération fonctionnelle de test pour les circuits complexes." PhD thesis, Grenoble INPG, 1985. http://tel.archives-ouvertes.fr/tel-00319028.
Full text
Audemard, Gilles. "Résolution du problème SAT et génération de modèles finis en logique du premier ordre." Aix-Marseille 1, 2001. http://www.theses.fr/2001AIX11036.
Full text
Arantes Júnior, Wilmondes Manzi de. "P. A. S. Pluggable Alert System : un système pour la génération et l'affichage d'alertes médicales adaptées à l'utilisateur." Lyon, INSA, 2006. http://theses.insa-lyon.fr/publication/2006ISAL0025/these.pdf.
Full text
We propose a system that is able to detect and trigger user-defined medical alerts in the context of healthcare networks. Such alerts are created by using fuzzy linguistic variables associated with importance levels (e.g. alert if age = elderly; important and air-temperature = very-hot; very-important), whose dependency relationships (e.g. the weight depends on the age) are modeled through a weighted oriented graph. Each alert the system triggers has two quality indicators - an applicability level and a trust level - which state to which extent the patient is concerned and how reliable the alert is. Our system is also able to transparently infer missing information by using a historical database containing previously reported similar cases. Finally, a multi-agent display module adapts each alert to the context, which is represented by the patient (elderly, etc.), the healthcare user (physician, etc.), the display device (PC, PDA, etc.), the place (hospital, etc.) and the urgency of the alert itself (very urgent, etc.). The adaptation process is controlled by three intelligent agents - the patient, the physician and the alert - which negotiate to agree on the min-max quality levels required for the three dimensions of display: contents (information that will be shown), graphics (graphic components that will be used) and security (protection mechanisms to use). The corresponding task announcements are then broadcast within three societies of reactive agents (which have no cognitive capabilities and simply perform tasks) representing these dimensions, and the winning agents collaborate to build the interface of the alert. The final system will be integrated into the hospital information system provided by the company that sponsored my research and will be patented as soon as possible.
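The two quality indicators attached to each triggered alert can be sketched as follows. The scoring scheme below (importance-weighted fuzzy membership for applicability, observed-data share for trust) is an assumption made for illustration, not the system's actual formulas.

```python
def alert_scores(criteria):
    """Score a triggered alert from its fuzzy criteria.

    criteria: list of (membership, importance, observed) where
      membership  in [0,1] - degree to which the patient matches the
                    linguistic value (e.g. age is 'elderly'),
      importance  in [0,1] - weight given by the alert author,
      observed    - False if the value was inferred from similar past
                    cases rather than reported.
    Returns (applicability, trust), both in [0,1].
    """
    total_w = sum(w for _, w, _ in criteria)
    applicability = sum(m * w for m, w, _ in criteria) / total_w
    # trust: importance-weighted share of criteria backed by observed data
    trust = sum(w for _, w, obs in criteria if obs) / total_w
    return applicability, trust

# e.g. "age = elderly; important" reported, "air-temperature = very-hot;
# very-important" inferred from the historical database
app, tr = alert_scores([(0.9, 0.6, True), (0.7, 1.0, False)])
```

Under this toy scheme the alert applies strongly (high membership on both criteria) but its trust is low, since the more important criterion was inferred rather than observed.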
Caudron, Didier. "Utilisation de techniques de résolution de problèmes pour la planification de séquences de redémarrage de processus continus." Compiègne, 1990. http://www.theses.fr/1990COMPD306.
Full text
Santoni, Williams Alexius. "Apprentissage par mémorisation d'expériences dans la résolution des problèmes." Compiègne, 1989. http://www.theses.fr/1989COMPD160.
Full textCiguene, Richardson. "Génération automatique de sujets d'évaluation individuels en contexte universitaire." Electronic Thesis or Diss., Amiens, 2019. http://www.theses.fr/2019AMIE0046.
Full text
This PhD work focuses on the evaluation of learning and especially the automatic generation of assessment tests in universities. We rely on a base of source questions to create test questions through algorithms that are able to construct differentiated assessment tests. This research has made it possible to develop a metric that measures this differentiation and to propose algorithms aimed at maximizing total differentiation over test collections, while minimizing the number of necessary patterns. The average performance of the latter depends on the number of patterns available in the source database (compared to the number of items desired in the tests) and on the size of the generated collections. We focused on the differentiation achievable in very small collections of tests, and propose methodological tracks to optimize the distribution of these differentiated tests to cohorts of students while respecting the constraints of the teacher. Future work will take into account the level of difficulty of a test as a new constraint, relying in part on the statistical and semantic data collected after each test. The goal is to be able to maximize differentiation while keeping equity between the tests of a collection, for an optimized distribution during examination events.
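A differentiation metric of the kind mentioned above could, for instance, count the fraction of item slots where two tests draw on different source patterns, then average over all pairs in a collection. The exact metric used in the thesis may differ; the sketch below is illustrative only.

```python
def differentiation(test_a, test_b):
    """Fraction of item slots where two tests use different source patterns."""
    assert len(test_a) == len(test_b)
    return sum(a != b for a, b in zip(test_a, test_b)) / len(test_a)

def total_differentiation(collection):
    """Mean pairwise differentiation over a collection of tests."""
    pairs = [(i, j) for i in range(len(collection))
             for j in range(i + 1, len(collection))]
    return sum(differentiation(collection[i], collection[j])
               for i, j in pairs) / len(pairs)

# Three 4-item tests; "A1" and "A2" are hypothetical variants generated
# from the same source pattern A, and so on for B, C, D.
collection = [["A1", "B1", "C1", "D1"],
              ["A2", "B1", "C2", "D1"],
              ["A1", "B2", "C2", "D2"]]
```

A generator maximizing this quantity would prefer collections where every pair of students receives tests that differ in as many slots as possible, while drawing on as few distinct source patterns as the teacher can afford to write.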
Jeyasothy, Adulam. "Génération d'explications post-hoc personnalisées." Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS027.
Full text
This thesis is in the field of eXplainable AI (XAI). We focus on post-hoc interpretability methods that aim to explain to a user the prediction made by a trained decision model for a specific data instance. To increase the interpretability of explanations, this thesis studies the integration of user knowledge into these methods, and thus aims to improve the understandability of the explanation by generating personalized explanations tailored to each user. To this end, we propose a general formalism that explicitly integrates knowledge via a new criterion in the interpretability objectives. This formalism is then instantiated for different types of knowledge and different types of explanations, particularly counterfactual examples, leading to the proposal of several algorithms (KICE, Knowledge Integration in Counterfactual Explanation; rKICE for its variant including knowledge expressed by rules; and KISM, Knowledge Integration in Surrogate Models). The issue of aggregating classical quality constraints and knowledge compatibility is also studied, and we propose to use Gödel's integral as an aggregation operator. Finally, we discuss the difficulty of generating a single explanation suitable for all types of users and the notion of diversity in explanations.
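The idea of adding a knowledge criterion to a counterfactual objective can be illustrated with a toy one-dimensional search. The cost below, combining distance to the instance with a knowledge-incompatibility penalty, mirrors the spirit of KICE, but the classifier, penalty, candidate set and weights are invented for illustration.

```python
def counterfactual(x, predict, target, knowledge_penalty,
                   candidates, dist_weight=1.0, know_weight=1.0):
    """Return the candidate classified as `target` that minimises
    distance-to-x plus a knowledge-incompatibility penalty."""
    valid = [c for c in candidates if predict(c) == target]
    cost = lambda c: (dist_weight * abs(c - x)
                      + know_weight * knowledge_penalty(c))
    return min(valid, key=cost)

# Hypothetical 1-D classifier: 'approved' iff score >= 10
predict = lambda v: "approved" if v >= 10 else "rejected"
# Hypothetical user knowledge: values above 14 are implausible for this user
knowledge_penalty = lambda v: max(0.0, v - 14)

x = 6.0
candidates = [8.0, 10.0, 12.0, 16.0]
cf = counterfactual(x, predict, "approved", knowledge_penalty, candidates)
```

Without the knowledge term, any sufficiently close point crossing the decision boundary would do; with it, candidates the user would deem implausible are pushed down the ranking, which is the personalization effect the thesis formalizes.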
Membrado, Miguel. "Génération d'un système conceptuel écrit en langage de type semi-naturel en vue d'un traitement des données textuelles : application au langage médical." Paris 11, 1989. http://www.theses.fr/1989PA112004.
Full text
We present our research and our own realization of a KBMS (Knowledge Based Management System) aiming at processing any kind of data, especially textual data, and the related knowledge. In this field of applied Artificial Intelligence, we propose a way of representing knowledge: describing it in a semi-natural language able to describe structures and relations as well as rules. Knowledge is managed as conceptual definitions appearing in a dictionary which represents the knowledge base. The power of this language makes it possible to process many ambiguities, especially those coming from contextual polysemy, to deal with metonymy or incomplete knowledge, and to resolve several kinds of paraphrases. Simultaneous polyhierarchies as well as chunks are taken into account. The system has been specially designed for the automatic processing of medical reports, with an application to neuroradiology taken as an example. It could, however, be applied to any other field, including professional fields outside medicine. Text analysis is realized in two steps: first a conceptual extraction, secondly a structural analysis. Only the first step is covered in this thesis. It aims at retrieving pertinent documents, matching them to a given question by comparing concepts rather than character strings. An overview of the second step is also presented. The final goal is to be able to retrieve the knowledge contained in the texts, i.e. the data themselves, and to manage it with respect to the knowledge represented in the dictionaries.
Gotlieb, Arnaud. "Contributions à la génération de tests à base de contraintes." Habilitation à diriger des recherches, Université Européenne de Bretagne, 2011. http://tel.archives-ouvertes.fr/tel-00699260.
Full text
Daher, Tony. "Gestion cognitive des réseaux radio auto-organisant de cinquième génération." Electronic Thesis or Diss., Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLT023.
Full text
The pressure on operators to improve network management efficiency is constantly growing for many reasons: user traffic that is increasing very fast, higher end-user expectations, and emerging services with very specific requirements. The Self-Organizing Networks (SON) concept was introduced by the 3rd Generation Partnership Project as a promising solution to simplify the operation and management of complex networks. Many SON modules are already being deployed in today's networks. Such networks are known as SON-enabled networks, and they have proved to be useful in reducing the complexity of network management. However, SON-enabled networks are still far from realizing a network that is autonomous and self-managed as a whole. In fact, the behavior of the SON functions depends on the parameters of their algorithms, as well as on the network environment where they are deployed. Besides, SON objectives and actions might conflict with each other, leading to incompatible parameter tuning in the network. Each SON function hence still needs to be manually configured, depending on the network environment and the objectives of the operator. In this thesis, we propose an integrated SON management system through a Cognitive Policy Based SON Management (C-PBSM) approach, based on Reinforcement Learning (RL). The C-PBSM autonomously translates high-level operator objectives, formulated as target Key Performance Indicators (KPIs), into configurations of the SON functions. Furthermore, through its cognitive capabilities, the C-PBSM is able to build its knowledge by interacting with the real network. It is also capable of adapting to environment changes. We investigate different RL approaches, analyze convergence time and scalability, and propose adapted solutions. We tackle the problem of non-stationarity in the network, notably traffic variations, as well as the different contexts present in a network.
We also propose an approach for transfer learning and collaborative learning. Practical aspects of deploying RL agents in real networks are investigated under a Software Defined Network (SDN) architecture.
Durand, Philippe. "Contributions à la génération et à l'amendement de plans d'actions : application à la conception de gammes d'usinage dans un contexte CIM." PhD thesis, Grenoble INPG, 1988. http://tel.archives-ouvertes.fr/tel-00330034.
Full text
Cripwell, Liam. "Controllable and Document-Level Text Simplification." Electronic Thesis or Diss., Université de Lorraine, 2023. http://www.theses.fr/2023LORR0186.
Full text
Text simplification is a task that involves rewriting a text to make it easier to read and understand for a wider audience, while still expressing the same core meaning. This has potential benefits for disadvantaged end-users (e.g. non-native speakers, children, the reading impaired), while also showing promise as a preprocessing step for downstream NLP tasks. Recent advancements in neural generative models have led to the development of systems that are capable of producing highly fluent outputs. However, these end-to-end systems often rely on training corpora to implicitly learn how to perform the necessary rewrite operations. In the case of simplification, these datasets are lacking in both quantity and quality, with most corpora either being very small, automatically constructed, or subject to strict licensing agreements. As a result, many systems tend to be overly conservative, often making no changes to the original text or being limited to the paraphrasing of short word sequences without substantial structural modifications. Furthermore, most existing work on text simplification is limited to sentence-level inputs, with attempts to iteratively apply these approaches to document-level simplification failing to coherently preserve the discourse structure of the document. This is problematic, as most real-world applications of text simplification concern document-level texts. In this thesis, we investigate strategies for mitigating the conservativity of simplification systems while promoting a more diverse range of transformation types. This involves the creation of new datasets containing instances of under-represented operations and the implementation of controllable systems capable of being tailored towards specific transformations and simplicity levels.
We later extend these strategies to document-level simplification, proposing systems that are able to consider surrounding document context and use similar controllability techniques to plan which sentence-level operations to perform ahead of time, allowing for both high performance and scalability. Finally, we analyze current evaluation processes and propose new strategies that can be used to better evaluate both controllable and document-level simplification systems
Duffaut, Olivier. "Problématique multi-modèle pour la génération d'arbres de test : application au domaine de l'automobile." Toulouse, ENSAE, 1994. http://www.theses.fr/1994ESAE0005.
Full text
Bourcier, Frédéric. "Représentation des connaissances pour la résolution de problèmes et la génération d'explications en langue naturelle : contribution au projet AIDE." Compiègne, 1996. http://www.theses.fr/1996COMPD903.
Full text
Nesvijevskaia, Anna. "Phénomène Big Data en entreprise : processus projet, génération de valeur et Médiation Homme-Données." Thesis, Paris, CNAM, 2019. http://www.theses.fr/2019CNAM1247.
Full text
Big Data, a sociotechnical phenomenon carrying myths, is reflected in companies by the implementation of first projects, especially Data Science projects. However, they do not seem to generate the expected value. The action research carried out over the course of 3 years in the field, through an in-depth qualitative study of multiple cases, points to key factors that limit this generation of value, including overly self-contained project process models. The results are (1) an open data-project model (Brizo_DS), oriented toward usage and including knowledge capitalization, intended to reduce the uncertainties inherent in these exploratory projects and transferable to the scale of portfolio management of corporate data projects; (2) a tool for documenting the quality of the processed data, the Databook; and (3) a Human-Data Mediation device. Together, these guarantee the alignment of the actors towards an optimal result.
Tsang, Jean Patrick. "Planification par combinaison de plans : application à la génération de gammes d'usinage." Grenoble INPG, 1987. http://tel.archives-ouvertes.fr/tel-00325043.
Full text
Benabbou, Azzeddine. "Génération dynamique de situations critiques en environnements virtuels : dilemme et ambiguïté." Thesis, Compiègne, 2018. http://www.theses.fr/2018COMP2432.
Full text
Our work is related to the dynamic generation of critical situations in virtual environments. We focus on two particular types of critical situations: dilemmas and ambiguous situations. The challenge of this work is to generate these situations automatically, using models that are not intended to describe dilemmas and ambiguous situations. A dilemma is defined as a situation that includes a difficult choice: it refers to a situation where individuals have to choose between two or more inconvenient options. Ambiguity refers to situations that can be interpreted in different ways. In the context of this thesis, we propose a formal model for the two notions that is inspired by the humanities and social sciences. Using this formalization, we propose a set of algorithms and generation techniques that use knowledge models - manipulated by domain experts - that are not intended to describe dilemmas and ambiguous situations. The use of these models enables the generation engine to infer new knowledge used to extract the entities (e.g. actions, events, objects) that can potentially produce situations meeting the properties defined in the dilemma and ambiguity formalization. In order to propose content adapted to each learner, it is necessary to take into consideration the value system of each person in the dilemma generation process. Thus, we propose to operationalize Schwartz's theory of universal values. Concerning ambiguity, it is necessary to take into account the learner's level of knowledge regarding the world variables. Thus, we propose to model the mental representation of the learner. In order to account for the uncertainties in this representation, we propose to use belief function theory, which is well suited to this purpose.
Wang, Yaohui. "Apprendre à générer des vidéos de personnes." Thesis, Université Côte d'Azur, 2021. http://www.theses.fr/2021COAZ4116.
Full text
Generative Adversarial Networks (GANs) have attracted increasing attention due to their ability to model complex visual data distributions, which allows them to generate and translate realistic images. While realistic video generation is the natural sequel, it is substantially more challenging with respect to complexity and computation, associated with the simultaneous modeling of appearance as well as motion. Specifically, in inferring and modeling the distribution of human videos, generative models face three main challenges: (a) generating uncertain motion while retaining human appearance, (b) modeling spatio-temporal consistency, and (c) understanding the latent representation. In this thesis, we propose three novel approaches towards generating high-visual-quality videos and interpreting the latent space of video generative models. We firstly introduce a method which learns to conditionally generate videos based on single input images. Our proposed model allows for controllable video generation by providing various motion categories. Secondly, we present a model which is able to produce videos from noise vectors by disentangling the latent space into appearance and motion. We demonstrate that both factors can be manipulated in both conditional and unconditional manners. Thirdly, we introduce an unconditional video generative model that allows for interpretation of the latent space. We place emphasis on the interpretation and manipulation of motion, and show that our proposed method is able to discover semantically meaningful motion representations, which in turn allow for control over generated results. Finally, we describe a novel approach that combines generative modeling with contrastive learning for unsupervised person re-identification. Specifically, we leverage generated data for data augmentation and show that such data can boost re-identification accuracy.
Henry-Chatelain, Catherine. "Génération de méta-faits pour la modélisation du raisonnement en diagnostic médical : application du diagnostic de l'infection néonatale." Compiègne, 1987. http://www.theses.fr/1987COMPD068.
Full text
The theme of this work is the development of an expert system for the diagnosis of materno-foetal infection in newborn babies. The study is part of the development of an essential expert system usable in either diagnostic or simulation mode. Firstly, we present the various stages of expert system development as well as the main modes of knowledge representation, via a description of expert systems in the medical field. Secondly, we describe the essential expert system and the natural language interface with which its development has been conducted. Following this, we describe the main features of materno-foetal infections, so as to highlight the various problems associated with their diagnosis. These are broken down and formulated in such a way that the analysis takes the form of fairly simple reasoning processes. We put forward a general-purpose model of knowledge representation, based here upon infection criteria, as well as an automatic meta-knowledge generation module; the latter, using the direct description of the basic facts, allows us to deduce new data in terms compatible with those used by doctors. The practical use of the module is described in considerable detail. The generated meta-knowledge is reported in full, together with its analysis and the choice of triggerable rules. An example of a consultation is given. Results are presented for the evaluation phase, which was conducted in a pediatric intensive care unit.
Ghosh, Aishik. "Simulation of the ATLAS electromagnetic calorimeter using generative adversarial networks and likelihood-free inference of the offshell Higgs boson couplings at the LHC." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASP058.
Full text
Since the discovery of the Higgs boson in 2012, experiments at the LHC have been testing Standard Model predictions with high-precision measurements. Measurements of the off-shell couplings of the Higgs boson will remove certain degeneracies that cannot be resolved with the current on-shell measurements, such as probing the Higgs boson width, which may lead to hints of new physics. One part of this thesis focuses on the measurement of the off-shell couplings of the Higgs boson produced by vector boson fusion and decaying to four leptons. This decay channel provides a unique opportunity to probe the Higgs in its off-shell regime due to enhanced cross-sections in the region of the four-lepton mass beyond 2MZ (twice the mass of the Z boson). The significant quantum interference between the signal and background processes renders the concept of 'class labels' ill-defined, and poses a challenge to traditional methods and generic machine learning classification models used to optimise a signal strength measurement. A new family of machine-learning-based likelihood-free inference strategies, which leverage additional information that can be extracted from the simulator, was adapted to a signal strength measurement problem. The study shows promising results compared to baseline techniques on a fast-simulated Delphes dataset. Also introduced in this context is the aspiration network, an improved adversarial algorithm for training while maintaining invariance with respect to chosen features. Measurements in the ATLAS experiment rely on large amounts of precise simulated data. The current Geant4 simulation software is computationally too expensive to sustain the large amount of simulated data required for planned future analyses. The other part of this thesis focuses on a new approach to fast simulation using a Generative Adversarial Network (GAN). The cascading shower simulation of the complex ATLAS calorimeter is the slowest part of the simulation chain using Geant4.
Replacing it with a neural network that has learnt the probability distribution of the particle showers as a function of the incident particle properties and local detector geometry increases the simulation speed by several orders of magnitude, even on single-core CPUs, and opens the door to further speed-ups on GPUs. The integration into the ATLAS software allows for the first time realistic comparisons to hand-designed fast simulation frameworks. The study is performed on a small section of the detector (0.20 < |η| < 0.25) using photons, and compares distributions from samples simulated by the model standalone, as well as after integration into the ATLAS software, against fully simulated Geant4 samples. Important lessons on the merits and demerits of various strategies benefit the ultimate goal of simulating the entire ATLAS calorimeter with a few deep generative models. The study also reveals an inherent problem with the popular gradient-penalty-based Wasserstein GAN, and proposes a solution.
Hati, Yliess. "Expression Créative Assistée par IA : Le Cas de La Colorisation Automatique de Line Art." Electronic Thesis or Diss., Reims, 2023. http://www.theses.fr/2023REIMS060.
Full text
Automatic lineart colorization is a challenging task for Computer Vision. Contrary to grayscale images, linearts lack semantic information such as shading and texture, making the task even more difficult. This thesis dissertation builds upon related work and explores the use of modern generative Artificial Intelligence (AI) architectures, such as Generative Adversarial Networks (GANs) and Denoising Diffusion Models (DDMs), to improve the quality of previous techniques as well as to better capture the user's colorization intent, through three contributions: PaintsTorch, StencilTorch and StablePaint. As a result, an iterative and interactive framework based on colored strokes and masks provided by the end user is built to foster Human-Machine collaboration in favour of natural, emerging workflows inspired by digital painting processes.
Ozcelik, Furkan. "Déchiffrer le langage visuel du cerveau : reconstruction d'images naturelles à l'aide de modèles génératifs profonds à partir de signaux IRMf." Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSES073.
Full text
The great minds of humanity were always curious about the nature of mind, brain, and consciousness. Through physical and thought experiments, they tried to tackle challenging questions about visual perception. As neuroimaging techniques were developed, neural encoding and decoding techniques provided a profound understanding of how we process visual information. Advancements in Artificial Intelligence and Deep Learning have also influenced neuroscientific research. With the emergence of deep generative models such as Variational Autoencoders (VAE), Generative Adversarial Networks (GAN) and Latent Diffusion Models (LDM), researchers have also used these models in neural decoding tasks such as visual reconstruction of perceived stimuli from neuroimaging data. The current thesis provides two frameworks in the above-mentioned area of reconstructing perceived stimuli from neuroimaging data, particularly fMRI data, using deep generative models. These frameworks focus on different aspects of the visual reconstruction task than their predecessors, and hence may bring valuable outcomes for the studies that will follow. The first study of the thesis (described in Chapter 2) utilizes a particular generative model called IC-GAN to capture both semantic and realistic aspects of the visual reconstruction. The second study (described in Chapter 3) brings a new perspective on visual reconstruction by fusing decoded information from different modalities (e.g. text and image) using recent latent diffusion models. These studies achieve state-of-the-art results on their benchmarks by exhibiting high-fidelity reconstructions of different attributes of the stimuli. In both of our studies, we propose region-of-interest (ROI) analyses to understand the functional properties of specific visual regions using our neural decoding models.
Statistical relations between ROIs and decoded latent features show that while early visual areas carry more information about low-level features (which focus on the layout and orientation of objects), higher visual areas are more informative about high-level semantic features. We also observed that ROI-optimal images generated using these visual reconstruction frameworks are able to capture the functional selectivity properties of ROIs that have been examined in many prior neuroscientific studies. Our thesis attempts to bring valuable insights for future studies in neural decoding, visual reconstruction, and neuroscientific exploration using deep learning models, by providing the results of two visual reconstruction frameworks and ROI analyses. The findings and contributions of the thesis may help researchers working in cognitive neuroscience and have implications for brain-computer interface applications.
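The decoding step underlying such ROI analyses is typically a regularized linear map from voxel responses to latent features of the generative model. The two-voxel ridge regression below is a generic, hand-solved illustration of this kind of decoder (the normal equations for a 2x2 system), not the thesis' actual model; the voxel data and the latent feature are invented.

```python
def ridge_decoder_2voxel(X, y, lam=0.1):
    """Fit w = (X^T X + lam*I)^(-1) X^T y for a 2-voxel design matrix X,
    mapping fMRI voxel responses to one latent feature of the stimulus."""
    # accumulate the 2x2 normal matrix (a, b; b, d) and the vector X^T y
    a = sum(x1 * x1 for x1, _ in X) + lam
    b = sum(x1 * x2 for x1, x2 in X)
    d = sum(x2 * x2 for _, x2 in X) + lam
    g1 = sum(x1 * t for (x1, _), t in zip(X, y))
    g2 = sum(x2 * t for (_, x2), t in zip(X, y))
    # solve the 2x2 system by Cramer's rule
    det = a * d - b * b
    w1 = (d * g1 - b * g2) / det
    w2 = (a * g2 - b * g1) / det
    return w1, w2

# toy noise-free data: latent = 1.0*voxel1 + 0.5*voxel2
X = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, 1.0)]
y = [1.0 * x1 + 0.5 * x2 for x1, x2 in X]
w1, w2 = ridge_decoder_2voxel(X, y, lam=0.01)
```

With a small regularization weight the fitted coefficients recover the generating weights almost exactly; in practice such decoders are fit per latent dimension over tens of thousands of voxels with cross-validated regularization.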
Caillon, Antoine. "Hierarchical temporal learning for multi-instrument and orchestral audio synthesis." Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS115.
Full text
Recent advances in deep learning have offered new ways to build models addressing a wide variety of tasks through the optimization of a set of parameters based on minimizing a cost function. Amongst these techniques, probabilistic generative models have yielded impressive advances in text, image and sound generation. However, musical audio signal generation remains a challenging problem. This comes from the complexity of audio signals themselves, since a single second of raw audio spans tens of thousands of individual samples. Modeling musical signals is even more challenging, as important information is structured across different time scales, from micro (e.g. timbre, transient, phase) to macro (e.g. genre, tempo, structure) information. Modeling every scale at once would require large architectures, precluding the use of the resulting models in real-time setups for computational complexity reasons. In this thesis, we study how a hierarchical approach to audio modeling can address the musical signal modeling task, while offering different levels of control to the user. Our main hypothesis is that extracting different representation levels of an audio signal allows us to abstract the complexity of lower levels for each modeling stage. This would eventually allow the use of lightweight architectures, each modeling a single audio scale. We start by addressing raw audio modeling, proposing an audio model combining Variational Auto Encoders and Generative Adversarial Networks, yielding high-quality 48kHz neural audio synthesis while being 20 times faster than real time on CPU. Then, we study how autoregressive models can be used to understand the temporal behavior of the representation yielded by this low-level audio model, using optional additional conditioning signals such as acoustic descriptors or tempo.
Finally, we propose a method for using all the proposed models directly on audio streams, allowing their use in the real-time applications that we developed during this thesis. We conclude by presenting various creative collaborations conducted in parallel with this work with several composers and musicians, directly integrating the current state of the proposed technologies into musical pieces.
Scordamaglia, Sabine. "Définition et évaluation d'un système expert d'aide à la planification des données prévisionnelles commerciales." Compiègne, 1990. http://www.theses.fr/1990COMPD232.
Full text