Academic literature on the topic "Réseau de croyance profond"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the topical lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Réseau de croyance profond".
Next to every source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Réseau de croyance profond"
Do, Trinh-Minh-Tri, and Thierry Artières. "Modèle hybride champ markovien conditionnel et réseau de neurones profond". Document numérique 14, no. 2 (August 30, 2011): 11–27. http://dx.doi.org/10.3166/dn.14.2.11-27.
Klisnick, A., C. Pourrat, J. Gabrillargues, P. Clavelou, M. Ruivard, and J. Schmidt. "Thrombolyse in situ d'une thrombophlébite cérébrale du réseau profond : À propos d'un cas". La Revue de Médecine Interne 24 (June 2003): 120s. http://dx.doi.org/10.1016/s0248-8663(03)80285-0.
Hamzaoui, D., S. Montagne, P. Mozer, R. Renard-Penna, and H. Delingette. "Segmentation automatique de la prostate à l’aide d’un réseau de neurones profond". Progrès en Urologie 30, no. 13 (November 2020): 696–97. http://dx.doi.org/10.1016/j.purol.2020.07.010.
Marty and Depairon. "Prise en charge de la phlegmasia cerulea dolens". Praxis 95, no. 21 (May 1, 2006): 845–48. http://dx.doi.org/10.1024/0369-8394.95.21.845.
Fillières-Riveau, Gauthier, Jean-Marie Favreau, Vincent Barra, and Guillaume Touya. "Génération de cartes tactiles photoréalistes pour personnes déficientes visuelles par apprentissage profond". Revue Internationale de Géomatique 30, no. 1-2 (January 2020): 105–26. http://dx.doi.org/10.3166/rig.2020.00104.
Chirouter, Edwige. "Philosophie et littérature de jeunesse : la vérité, la fiction et la vie". Nouveaux cahiers de la recherche en éducation 11, no. 2 (July 31, 2013): 161–68. http://dx.doi.org/10.7202/1017500ar.
Levesque, Simon. "Entretien avec Ansgar Rougemont-Bücking". Cygne noir, no. 9 (August 19, 2022): 63–79. http://dx.doi.org/10.7202/1091461ar.
Le Saux, Françoise. "La femme, le chien et le clerc". Reinardus / Yearbook of the International Reynard Society 28 (December 31, 2016): 130–41. http://dx.doi.org/10.1075/rein.28.09les.
Prakash, Prem, Marc Sebban, Amaury Habrard, Jean-Claude Barthelemy, Frédéric Roche, and Vincent Pichot. "Détection automatique des apnées du sommeil sur l’ECG nocturne par un apprentissage profond en réseau de neurones récurrents (RNN)". Médecine du Sommeil 18, no. 1 (March 2021): 43–44. http://dx.doi.org/10.1016/j.msom.2020.11.077.
Chicoine, Nathalie, Johanne Charbonneau, Damaris Rose, and Brian Ray. "Le processus de reconstruction des réseaux sociaux des femmes immigrantes dans l’espace montréalais". Articles et notes de recherche : Représentations et vécus 10, no. 2 (April 12, 2005): 27–48. http://dx.doi.org/10.7202/057934ar.
Texto completoTesis sobre el tema "Réseau de croyance profond"
Kaabi, Rabeb. "Apprentissage profond et traitement d'images pour la détection de fumée". Electronic Thesis or Diss., Toulon, 2020. http://www.theses.fr/2020TOUL0017.
This thesis deals with the problem of forest fire detection using image processing and machine learning tools. A forest fire is a fire that spreads over a wooded area; it can be of natural origin (lightning, a volcanic eruption) or human. Around the world, the impact of forest fires on the ecosystem and on many aspects of our daily lives is becoming more and more apparent. Many methods have been shown to be effective in detecting forest fires. The originality of the present work lies in the early detection of fires through the detection of forest smoke and the classification of smoky and non-smoky regions using deep learning and image processing tools. A set of pre-processing techniques helped us build a large database, which then allowed us to test the robustness of the proposed deep-belief-network-based model and to evaluate its performance with the following metrics: IoU, accuracy, recall and F1 score. Finally, the proposed algorithm is tested on several images in order to validate its efficiency. The simulations of our algorithm are compared with state-of-the-art approaches (deep CNN, SVM, ...) and give very good results: the proposed method reaches an average classification accuracy of about 96.5% for the early detection of smoke.
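As a point of reference for the metrics named in this abstract (IoU, accuracy, recall, F1 score), a minimal sketch of how they are computed from binary smoke / no-smoke predictions is shown below; it is a generic illustration, not code from the cited thesis, and the sample labels are invented.

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """IoU, accuracy, recall and F1 score for binary (smoke / no-smoke) labels."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_pred & y_true)        # true positives
    fp = np.sum(y_pred & ~y_true)       # false positives
    fn = np.sum(~y_pred & y_true)       # false negatives
    tn = np.sum(~y_pred & ~y_true)      # true negatives
    iou = tp / (tp + fp + fn)                       # intersection over union of the positive class
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return {"IoU": iou, "accuracy": accuracy, "recall": recall, "F1": f1}

# Toy example with five image regions labelled smoke (1) or no-smoke (0).
print(binary_metrics([1, 1, 0, 0, 1], [1, 0, 0, 1, 1]))
```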
Antipov, Grigory. "Apprentissage profond pour la description sémantique des traits visuels humains". Thesis, Paris, ENST, 2017. http://www.theses.fr/2017ENST0071/document.
The recent progress in artificial neural networks (rebranded as deep learning) has significantly boosted the state of the art in numerous domains of computer vision. In this PhD study, we explore how deep learning techniques can help in the analysis of gender and age from a human face. In particular, two complementary problem settings are considered: (1) gender/age prediction from given face images, and (2) synthesis and editing of human faces with the required gender/age attributes. Firstly, we conduct a comprehensive study which results in an empirical formulation of a set of principles for the optimal design and training of gender recognition and age estimation convolutional neural networks (CNNs). As a result, we obtain state-of-the-art CNNs for gender/age prediction on the three most popular benchmarks, and win an international competition on apparent age estimation. On a very challenging internal dataset, our best models reach a gender classification accuracy of 98.7% and an average age estimation error of 4.26 years. In order to address the problem of synthesis and editing of human faces, we design and train GA-cGAN, the first Generative Adversarial Network (GAN) able to generate synthetic faces of high visual fidelity within required gender and age categories. Moreover, we propose a novel method which allows employing GA-cGAN for gender swapping and aging/rejuvenation without losing the original identity in synthetic faces. Finally, in order to show the practical interest of the designed face editing method, we apply it to improve the accuracy of off-the-shelf face verification software in a cross-age evaluation scenario.
Katranji, Mehdi. "Apprentissage profond de la mobilité des personnes". Thesis, Bourgogne Franche-Comté, 2019. http://www.theses.fr/2019UBFCA024.
Knowledge of mobility is a major challenge for mobility-organising authorities and for urban planning. Due to the lack of a formal definition of human mobility, the term "people's mobility" is used throughout this work. The topic is introduced by a description of the ecosystem, considering its actors and applications. The creation of a learning model has prerequisites: an understanding of the typologies of the available data sets, and of their strengths and weaknesses. This state of the art in mobility knowledge builds on the four-step model that has existed and been used since 1970, and ends with the renewal of methodologies in recent years. Our models of people's mobility are then presented. Their common point is the emphasis on the individual, unlike traditional approaches that take the locality as a reference. The models we propose are based on the idea that individuals' decisions are driven by their perception of the environment. The work closes with a study of deep learning methods based on restricted Boltzmann machines. After a state of the art of this family of models, we look for strategies to make these models viable in real-world applications. This last chapter is our main theoretical contribution, improving the robustness and performance of these models.
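Since this abstract (like the topic of this page) centres on restricted Boltzmann machines and deep belief networks, the following minimal sketch illustrates greedy layer-wise pre-training with one-step contrastive divergence (CD-1) on toy binary data; it is a didactic approximation, not the author's implementation, and all hyperparameters and variable names are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, epochs=5, lr=0.05):
    """Train one RBM with CD-1; returns (weights, hidden bias, visible bias)."""
    n_visible = data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b_h = np.zeros(n_hidden)
    b_v = np.zeros(n_visible)
    for _ in range(epochs):
        v0 = data
        p_h0 = sigmoid(v0 @ W + b_h)                       # positive phase
        h0 = (rng.random(p_h0.shape) < p_h0).astype(float) # sample hidden units
        p_v1 = sigmoid(h0 @ W.T + b_v)                     # reconstruction
        p_h1 = sigmoid(p_v1 @ W + b_h)                     # negative phase
        W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / len(data)
        b_h += lr * (p_h0 - p_h1).mean(axis=0)
        b_v += lr * (v0 - p_v1).mean(axis=0)
    return W, b_h, b_v

def train_dbn(data, layer_sizes):
    """Greedy layer-wise pre-training: each RBM is trained on the activations of the previous one."""
    layers, x = [], data
    for n_hidden in layer_sizes:
        W, b_h, _ = train_rbm(x, n_hidden)
        layers.append((W, b_h))
        x = sigmoid(x @ W + b_h)   # deterministic up-pass feeds the next RBM
    return layers

toy = (rng.random((256, 64)) > 0.5).astype(float)   # placeholder binary "images"
dbn = train_dbn(toy, layer_sizes=[32, 16])
print([w.shape for w, _ in dbn])
```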
Cheung-Mon-Chan, Pascal. "Réseaux bayésiens et filtres particulaires pour l'égalisation adaptative et le décodage conjoints". PhD thesis, Télécom ParisTech, 2003. http://pastel.archives-ouvertes.fr/pastel-00000732.
Le Cornec, Kergann. "Apprentissage Few Shot et méthode d'élagage pour la détection d'émotions sur bases de données restreintes". Thesis, Université Clermont Auvergne (2017-2020), 2020. http://www.theses.fr/2020CLFAC034.
Emotion detection plays a major part in human interactions: a good understanding of the speaker's emotional state leads to a better understanding of their speech. It is de facto the same in human-machine interactions. In the area of emotion detection using computers, deep learning has emerged as the state of the art. However, classical deep learning techniques perform poorly when training sets are small. This thesis explores two possible ways of tackling this issue: pruning and few-shot learning. Many pruning methods exist but focus on maximising pruning without losing too much accuracy. We propose a new pruning method that improves the choice of the weights to remove. This method is based on the rivalry of two networks, the original network and a network we name the rival. The idea is to share weights between both models in order to maximise accuracy. During training, weights that negatively impact the accuracy are removed, thus optimising the architecture while improving accuracy. This technique is tested on different networks as well as different databases and achieves state-of-the-art results, improving accuracy while pruning a significant percentage of weights. The second area of this thesis is the exploration of matching networks (both Siamese and triplet) as an answer to learning on small datasets. Sounds and images were merged to learn their main features in order to detect emotions. We show that, while restricting ourselves to 200 training instances per class, the triplet network achieves the state of the art (trained on hundreds of thousands of instances) on some databases. We also show that, in the area of emotion detection, triplet networks provide a better vectorial embedding of the emotions than Siamese networks, and thus deliver better results. A new loss function based on the triplet loss is also introduced, facilitating the training process of the triplet and Siamese networks. To allow a better comparison of our model, different methods are used to provide elements of validation, especially on the vectorial embedding. In the long term, both methods can be combined to propose lighter, optimised networks: as the number of parameters is lowered by pruning, the triplet network should learn more easily and could achieve better performance.
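To make the two ingredients of this abstract concrete, the sketch below shows (a) plain global magnitude pruning, the simplest member of the pruning family the thesis improves on, and (b) the standard triplet margin loss on embedding vectors; neither corresponds to the author's exact method, and all values are illustrative.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out roughly the fraction `sparsity` of weights with the smallest absolute value."""
    threshold = np.quantile(np.abs(weights).ravel(), sparsity)
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet margin loss: pull the positive closer than the negative by `margin`."""
    d_pos = np.linalg.norm(anchor - positive, axis=-1)
    d_neg = np.linalg.norm(anchor - negative, axis=-1)
    return np.maximum(d_pos - d_neg + margin, 0.0).mean()

rng = np.random.default_rng(1)
w, mask = magnitude_prune(rng.standard_normal((4, 4)), sparsity=0.75)
emb = rng.standard_normal((3, 8, 16))   # dummy anchor, positive and negative embeddings
print(mask.mean(), triplet_loss(*emb))
```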
Azaza, Lobna. "Une approche pour estimer l'influence dans les réseaux complexes : application au réseau social Twitter". Thesis, Bourgogne Franche-Comté, 2019. http://www.theses.fr/2019UBFCK009/document.
Influence in complex networks, and in particular on Twitter, has recently become a hot research topic. Detecting the most influential users makes it possible to reach a large-scale information diffusion area at low cost, something very useful in marketing or political campaigns. In this thesis, we propose a new approach that considers the several relations between users in order to assess influence in complex networks such as Twitter. We model Twitter as a multiplex heterogeneous network where users, tweets and objects are represented by nodes, and links model the different relations between them (e.g., retweets, mentions and replies). The multiplex PageRank is applied to data from two datasets in the political field to rank candidates according to their influence. Even though the candidates' ranking reflects reality, the multiplex PageRank scores are difficult to interpret because they are very close to each other. We therefore want to go beyond a quantitative measure: we explore what the relations between nodes in the network can reveal about influence, and propose TwitBelief, an approach to assess the weighted influence of a given node. It is based on the conjunctive combination rule from belief functions theory, which allows combining different types of relations while expressing uncertainty about their importance weights. We experiment with TwitBelief on a large amount of data gathered from Twitter during the 2014 European elections and the 2017 French elections and deduce the top influential candidates. The results show that our model is flexible enough to consider multiple combinations of interactions according to social scientists' needs or requirements, and that the numerical results of belief theory are accurate. We also evaluate the approach on the CLEF RepLab 2014 dataset and show that it leads to quite interesting results. We also propose two extensions of TwitBelief that take the content of tweets into account. The first is the estimation of polarized influence in the Twitter network; in this extension, sentiment analysis of the tweets with a forest of decision trees is used to determine the influence polarity. The second extension is the categorization of communication styles in Twitter: it determines whether the communication style of Twitter users is informative, interactive or balanced.
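For readers unfamiliar with belief functions, the snippet below illustrates the conjunctive combination rule mentioned in this abstract on a tiny, invented frame of discernment; it is a textbook illustration, not the TwitBelief implementation.

```python
from itertools import product

def conjunctive_combination(m1, m2):
    """Unnormalised conjunctive rule: m(A) = sum of m1(B) * m2(C) over all B, C with B ∩ C = A.
    Mass functions are dicts mapping frozensets (focal elements) to masses summing to 1."""
    combined = {}
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        a = b & c   # conflicting mass ends up on the empty set
        combined[a] = combined.get(a, 0.0) + mb * mc
    return combined

# Toy frame {R = influential via retweets, M = influential via mentions}; masses are made up.
m_retweets = {frozenset("R"): 0.6, frozenset("RM"): 0.4}
m_mentions = {frozenset("M"): 0.5, frozenset("RM"): 0.5}
print(conjunctive_combination(m_retweets, m_mentions))
```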
El Zoghby, Nicole. "Fusion distribuée de données échangées dans un réseau de véhicules". PhD thesis, Université de Technologie de Compiègne, 2014. http://tel.archives-ouvertes.fr/tel-01070896.
Moukari, Michel. "Estimation de profondeur à partir d'images monoculaires par apprentissage profond". Thesis, Normandie, 2019. http://www.theses.fr/2019NORMC211/document.
Computer vision is a branch of artificial intelligence whose purpose is to enable a machine to analyze, process and understand the content of digital images. Scene understanding in particular is a major issue in computer vision. It requires a semantic and structural characterization of the image, on one hand to describe its content and, on the other hand, to understand its geometry. However, while the real space is three-dimensional, the image representing it is two-dimensional. Part of the 3D information is thus lost during the process of image formation, and it is therefore non-trivial to describe the geometry of a scene from 2D images of it. There are several ways to retrieve the depth information lost in the image. In this thesis we are interested in estimating a depth map given a single image of the scene. In this case, the depth information corresponds, for each pixel, to the distance between the camera and the object represented in that pixel. The automatic estimation of a distance map of the scene from an image is indeed a critical algorithmic building block in a very large number of domains, in particular that of autonomous vehicles (obstacle detection, navigation aids). Although estimating depth from a single image is a difficult and inherently ill-posed problem, we know that humans can appreciate distances with one eye. This capacity is not innate but acquired, and is made possible mostly by identifying cues that reflect prior knowledge of the surrounding objects. Moreover, we know that learning algorithms can extract these cues directly from images. We are particularly interested in statistical learning methods based on deep neural networks, which have recently led to major breakthroughs in many fields, and we study the case of monocular depth estimation.
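To make the task concrete, the snippet below computes a few error measures commonly reported for per-pixel depth maps (absolute relative error, RMSE, and the δ < 1.25 accuracy); these are standard benchmark metrics and not necessarily those used in the cited thesis.

```python
import numpy as np

def depth_metrics(pred, gt):
    """Common monocular depth evaluation metrics on per-pixel depth maps (in metres)."""
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    abs_rel = np.mean(np.abs(pred - gt) / gt)        # absolute relative error
    rmse = np.sqrt(np.mean((pred - gt) ** 2))        # root mean squared error
    ratio = np.maximum(pred / gt, gt / pred)
    delta1 = np.mean(ratio < 1.25)                   # fraction of "close enough" pixels
    return {"abs_rel": abs_rel, "rmse": rmse, "delta<1.25": delta1}

gt = np.random.default_rng(2).uniform(1.0, 10.0, size=(64, 64))  # fake ground-truth depths
pred = gt * 1.1                                                  # a prediction biased by 10%
print(depth_metrics(pred, gt))
```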
Groueix, Thibault. "Learning 3D Generation and Matching". Thesis, Paris Est, 2020. http://www.theses.fr/2020PESC1024.
The goal of this thesis is to develop deep learning approaches to model and analyse 3D shapes. Progress in this field could democratize the artistic creation of 3D assets, which currently requires time and expert skills with technical software. We focus on the design of deep learning solutions for two particular tasks, key to many 3D modeling applications: single-view reconstruction and shape matching. A single-view reconstruction (SVR) method takes as input a single image and predicts the physical world which produced that image. SVR dates back to the early days of computer vision. In particular, in the 1960s, Lawrence G. Roberts proposed to align simple 3D primitives to the input image under the assumption that the physical world is made of cuboids. Another approach, proposed by Berthold Horn in the 1970s, is to decompose the input image into intrinsic images and use those to predict the depth of every input pixel. Since several configurations of shapes, texture and illumination can explain the same image, both approaches need to form assumptions on the distribution of images and 3D shapes to resolve the ambiguity. In this thesis, we learn these assumptions from large-scale datasets instead of manually designing them. Learning allows us to perform complete object reconstruction, including parts which are not visible in the input image. Shape matching aims at finding correspondences between 3D objects. Solving this task requires both a local and a global understanding of 3D shapes, which is hard to achieve explicitly. Instead we train neural networks on large-scale datasets to solve this task and capture this knowledge implicitly through their internal parameters. Shape matching supports many 3D modeling applications such as attribute transfer, automatic rigging for animation, or mesh editing. The first technical contribution of this thesis is a new parametric representation of 3D surfaces modeled by neural networks. The choice of data representation is a critical aspect of any 3D reconstruction algorithm. Until recently, most approaches in deep 3D model generation were predicting volumetric voxel grids or point clouds, which are discrete representations. Instead, we present an alternative approach that predicts a parametric surface deformation, i.e. a mapping from a template to a target geometry. To demonstrate the benefits of such a representation, we train a deep encoder-decoder for single-view reconstruction using our new representation. Our approach, dubbed AtlasNet, is the first deep single-view reconstruction approach able to reconstruct meshes from images without relying on independent post-processing, and can do so at arbitrary resolution without memory issues. A more detailed analysis of AtlasNet reveals that it also generalizes better to categories it has not been trained on than other deep 3D generation approaches. Our second main contribution is a novel shape matching approach purely based on reconstruction via deformations. We show that the quality of the shape reconstructions is critical to obtain good correspondences, and therefore introduce a test-time optimization scheme to refine the learned deformations. For humans and other deformable shape categories deviating by a near-isometry, our approach can leverage a shape template and an isometric regularization of the surface deformations. As categories exhibiting non-isometric variations, such as chairs, do not have a clear template, we learn how to deform any shape into any other and leverage cycle-consistency constraints to learn meaningful correspondences. Our reconstruction-for-matching strategy operates directly on point clouds, is robust to many types of perturbations, and outperforms the state of the art by 15% on dense matching of real human scans.
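Because the matching strategy above operates directly on point clouds, a common way to measure how well a deformed template fits a target cloud is the symmetric Chamfer distance, sketched below; this is a generic illustration rather than the thesis' exact training loss.

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point clouds a (N, 3) and b (M, 3):
    average squared distance from each point to its nearest neighbour in the other cloud."""
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)   # (N, M) pairwise squared distances
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

rng = np.random.default_rng(3)
template_pts = rng.standard_normal((1024, 3))                     # e.g. points sampled on a deformed template
target_pts = template_pts + 0.01 * rng.standard_normal((1024, 3)) # a slightly perturbed target cloud
print(chamfer_distance(template_pts, target_pts))
```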
Books on the topic "Réseau de croyance profond"
Mangeot, Mathieu, and Agnès Tutin, eds. Lexique(s) et genre(s) textuel(s) : approches sur corpus. Editions des archives contemporaines, 2020. http://dx.doi.org/10.17184/eac.9782813003454.
Book chapters on the topic "Réseau de croyance profond"
HADJADJ-AOUL, Yassine, and Soraya AIT-CHELLOUCHE. "Utilisation de l’apprentissage par renforcement pour la gestion des accès massifs dans les réseaux NB-IoT". In La gestion et le contrôle intelligents des performances et de la sécurité dans l’IoT, 27–55. ISTE Group, 2022. http://dx.doi.org/10.51926/iste.9053.ch2.
JACQUEMONT, Mikaël, Thomas VUILLAUME, Alexandre BENOIT, Gilles MAURIN, and Patrick LAMBERT. "Analyse d’images Cherenkov monotélescope par apprentissage profond". In Inversion et assimilation de données de télédétection, 303–35. ISTE Group, 2023. http://dx.doi.org/10.51926/iste.9142.ch9.
MOLINIER, Matthieu, Jukka MIETTINEN, Dino IENCO, Shi QIU, and Zhe ZHU. "Analyse de séries chronologiques d’images satellitaires optiques pour des applications environnementales". In Détection de changements et analyse des séries temporelles d’images 2, 125–74. ISTE Group, 2024. http://dx.doi.org/10.51926/iste.9057.ch4.