Academic literature on the topic 'Intelligence artificielle'
Journal articles on the topic "Intelligence artificielle":
Rivière, Jérémy, Carole Adam, and Sylvie Pesty. "Un ACA sincère, affectif et expressif comme compagnon artificiel." Revue d'intelligence artificielle 28, no. 1 (February 2014): 67–99. http://dx.doi.org/10.3166/ria.28.67-99.
Dissertations / Theses on the topic "Intelligence artificielle":
Allouche, Mohamad Khaled. "Une société d'agents temporels pour la supervision de systèmes industriels." PhD thesis, Ecole Nationale Supérieure des Mines de Saint-Etienne, 1998. http://tel.archives-ouvertes.fr/tel-00822431.
Metz, Clément. "Codages optimisés pour la conception d'accélérateurs matériels de réseaux de neurones profonds." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPAST190.
Neural networks are an important component of machine learning tools because of their wide range of applications (health, energy, defence, finance, autonomous navigation, etc.). The performance of neural networks is greatly influenced by the complexity of their architecture in terms of the number of layers, neurons and connections. But training and inference of ever-larger networks translate into greater demands on hardware resources and longer computing times, and the portability of such networks to embedded systems with low memory and/or computing capacity is limited.

The aim of this thesis is to study and design methods for reducing the hardware footprint of neural networks while preserving their performance as much as possible. We restrict ourselves to convolutional networks dedicated to computer vision, studying the possibilities offered by quantization. Quantization aims to reduce the hardware footprint, in terms of memory, bandwidth and computation operators, by reducing the number of bits used for the network's parameters and activations.

The contributions of this thesis consist of a new post-training quantization method based on exploiting the spatial correlations of network parameters, an approach facilitating the training of very highly quantized networks, and a method combining mixed-precision quantization with lossless entropy coding. The contents of this thesis are essentially limited to algorithmic aspects, but the research orientations were strongly influenced by the requirement that our solutions be feasible in hardware.
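To make the idea of quantization concrete, here is a minimal sketch of symmetric per-tensor uniform post-training quantization, the textbook baseline; the spatial-correlation method the thesis contributes is not reproduced here, and all function names are illustrative.

```python
import numpy as np

def quantize_uniform(w, num_bits=8):
    """Symmetric per-tensor uniform quantization: map float weights
    to signed integer codes on num_bits bits, returning codes and scale."""
    qmax = 2 ** (num_bits - 1) - 1            # 127 for 8 bits
    scale = float(np.max(np.abs(w))) / qmax   # one scale for the whole tensor
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int32)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the integer codes."""
    return q.astype(np.float32) * scale

# Round-trip a small weight tensor through 8-bit quantization.
w = np.array([0.5, -1.0, 0.25, 0.9], dtype=np.float32)
q, scale = quantize_uniform(w, num_bits=8)
w_hat = dequantize(q, scale)
```

Storing the int codes instead of 32-bit floats cuts memory by 4x at 8 bits, at the cost of a rounding error bounded by half the scale, which is the footprint/accuracy trade-off the abstract describes.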
Toofanee, Mohammud Shaad Ally. "An innovative ecosystem based on deep learning : Contributions for the prevention and prediction of diabetes complications." Electronic Thesis or Diss., Limoges, 2023. https://aurore.unilim.fr/theses/nxfile/default/656b0a1f-2ff2-49c5-bb3e-f34704d6f6b0/blobholder:0/2023LIMO0107.pdf.
In the year 2021, estimations indicated that approximately 537 million individuals were affected by diabetes, a number anticipated to escalate to 643 million by the year 2030 and further to 783 million by 2045. Diabetes, characterized as a persistent metabolic ailment, necessitates unceasing daily care and management. In the context of Mauritius, as per the most recent report by the International Diabetes Federation, the prevalence of diabetes, specifically Type 2 Diabetes (T2D), stood at 22.6% of the population in 2021, with projections indicating a surge to 26.6% by the year 2045. Amidst this alarming trend, a concurrent advancement has been observed in the realm of technology, with artificial intelligence techniques showcasing promising capabilities in the spheres of medicine and healthcare. This doctoral dissertation embarks on the exploration of the intersection between artificial intelligence and diabetes education, prevention, and management.

We initially focused on exploring the potential of artificial intelligence (AI), more specifically deep learning, to address a critical complication linked to diabetes: Diabetic Foot Ulcer (DFU). The emergence of DFU poses the grave risk of lower limb amputations, consequently leading to severe socio-economic repercussions. In response, we put forth an innovative solution named DFU-Helper. This tool serves as a preliminary measure for validating the treatment protocols administered by healthcare professionals to individual patients afflicted by DFU. The initial assessment of the proposed tool has exhibited promising performance characteristics, although further refinement and rigorous testing are imperative. Collaborative efforts with public health experts will be pivotal in evaluating the practical efficacy of the tool in real-world scenarios.
This approach seeks to bridge the gap between AI technologies and clinical interventions, with the ultimate goal of improving the management of patients with DFU.

Our research also addressed the critical aspects of privacy and confidentiality inherent in handling health-related data. Acknowledging the extreme importance of safeguarding sensitive information, we delved into the realm of Peer-to-Peer Federated Learning. This investigation specifically found application in our proposal for the DFU-Helper tool discussed earlier. By exploring this advanced approach, we aimed to ensure that the implementation of our technology aligns with privacy standards, thereby fostering a trustworthy and secure environment for healthcare data management.

Finally, our research extended to the development of an intelligent conversational agent designed to offer round-the-clock support for individuals seeking information about diabetes. In pursuit of this goal, the creation of an appropriate dataset was paramount. In this context, we leveraged Natural Language Processing techniques to curate data from online media sources focusing on diabetes-related content.
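The federated learning idea the abstract invokes can be sketched as follows. This toy shows classic centralized FedAvg on a least-squares stand-in model (the thesis uses a peer-to-peer variant, which this sketch does not implement); the key property illustrated is that only model weights, never raw patient data, leave each client.

```python
import numpy as np

def local_update(w, X, y, lr=0.2, epochs=5):
    """One client's local training: gradient descent on a
    least-squares objective (a stand-in for a real model)."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """FedAvg aggregation: average client models weighted by
    local dataset size; raw data never leaves the clients."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Two simulated clients whose private data come from the same true model.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
X1, X2 = rng.normal(size=(40, 3)), rng.normal(size=(60, 3))
y1, y2 = X1 @ true_w, X2 @ true_w

w = np.zeros(3)
for _ in range(10):  # communication rounds
    w = federated_average(
        [local_update(w, X1, y1), local_update(w, X2, y2)],
        [len(y1), len(y2)])
```

In a peer-to-peer setting the aggregation step is performed among the clients themselves rather than by a central server, but the weighted averaging of locally trained models is the same.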
Lajoie, Isabelle. "Apprentissage de représentations sur-complètes par entraînement d’auto-encodeurs." Thèse, 2009. http://hdl.handle.net/1866/3768.
Progress in the machine learning domain allows computational systems to address more and more complex tasks associated with vision, audio signal or natural language processing. Among the existing models we find the Artificial Neural Network (ANN), whose popularity surged with the recent breakthrough of Hinton et al. [22], which consists in using Restricted Boltzmann Machines (RBM) to perform an unsupervised, layer-by-layer pre-training initialization of a Deep Belief Network (DBN), enabling the subsequent successful supervised training of such an architecture. Since this discovery, researchers have studied the efficiency of other, similar pre-training strategies, such as the stacking of traditional auto-encoders (SAE) [5, 38] and the stacking of denoising auto-encoders (SDAE) [44]. This is the context in which the present study started. After a brief introduction to the basic machine learning principles and to the pre-training methods used until now with RBM, AE and DAE modules, we performed a series of experiments to deepen our understanding of pre-training with SDAE, explored its different properties, and investigated variations on the DAE algorithm as alternative strategies to initialize deep networks. We evaluated the sensitivity to the noise level, and the influence of the number of layers and number of hidden units, on the generalization error obtained with SDAE. We experimented with other noise types and saw improved performance on the supervised task with the use of salt-and-pepper noise (PS) or Gaussian noise (GS), noise types that are better justified than the masking noise (MN) used until now. Moreover, modifying the algorithm by imposing an emphasis on the reconstruction of the corrupted components during the unsupervised training of each DAE showed encouraging performance improvements.
Our work also revealed that the DAE is capable of learning, on natural images, filters similar to those found in V1 cells of the visual cortex, which are in essence edge detectors. In addition, we were able to verify that the representations learned by the SDAE are very good features to feed to a linear or Gaussian support vector machine (SVM), considerably enhancing its generalization performance. We also observed that, like the DBN and unlike the SAE, the SDAE has the potential to be used as a good generative model. Finally, we opened the door to novel pre-training strategies and discovered the potential of one of them: the stacking of renoising auto-encoders (SRAE).
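The three corruption processes the abstract compares (masking, salt-and-pepper, Gaussian) are simple to write down. A minimal NumPy sketch, with illustrative function names not taken from the thesis, might look like this; a denoising auto-encoder is then trained to reconstruct the clean input x from any of these corrupted versions.

```python
import numpy as np

def masking_noise(x, rate, rng):
    """Masking noise (MN): a fraction `rate` of inputs is set to 0."""
    return x * (rng.random(x.shape) >= rate)

def salt_and_pepper_noise(x, rate, rng):
    """Salt-and-pepper noise (PS): a fraction `rate` of inputs is
    forced to the minimum or maximum value, each with probability 1/2."""
    lo, hi = x.min(), x.max()
    out = x.copy()
    flip = rng.random(x.shape) < rate
    salt = rng.random(x.shape) < 0.5
    out[flip & salt] = hi
    out[flip & ~salt] = lo
    return out

def gaussian_noise(x, sigma, rng):
    """Additive Gaussian noise (GS) with standard deviation sigma."""
    return x + rng.normal(0.0, sigma, size=x.shape)

rng = np.random.default_rng(0)
x = np.linspace(0.1, 1.0, 1000)          # toy "clean" input vector
x_mn = masking_noise(x, 0.25, rng)
x_ps = salt_and_pepper_noise(x, 0.25, rng)
x_gs = gaussian_noise(x, 0.1, rng)
```

Masking noise only destroys information by zeroing, whereas salt-and-pepper and Gaussian noise perturb values across their full range, which is one intuition behind the improved results the abstract reports for PS and GS.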