Theses / dissertations on the topic "Traitement assisté par l'humain"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles
See the 41 best theses / dissertations for your research on the topic "Traitement assisté par l'humain".
Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if it is included in the metadata.
Browse theses / dissertations from a wide variety of scientific fields and compile a correct bibliography.
Chevalier, Frédéric. "Le diagnostic assisté par ordinateur de l'image tomodensitométrique : étude de paramètres adaptés". Paris 12, 1991. http://www.theses.fr/1991PA120036.
Zhang, Xiwei. "Méthodes de traitement d’images pour le dépistage de la rétinopathie diabétique assisté par ordinateur". Thesis, Paris, ENMP, 2014. http://www.theses.fr/2014ENMP0024/document.
Diabetic retinopathy is the main cause of blindness among the middle-aged population. Early detection and adapted treatment considerably reduce the risk of sight loss. Medical authorities recommend an annual examination for diabetic patients, and several diabetic retinopathy screening programs have been deployed to enforce this recommendation. The aim of the TeleOphta project was to automatically detect normal examinations in a diabetic screening system, in order to reduce the burden on readers and therefore serve more patients. This thesis proposes several methods to extract information linked to diabetic retinopathy lesions from color eye fundus images. The detection of exudates, microaneurysms and hemorrhages is discussed in detail. One of the main challenges of this work is dealing with clinical images acquired with different types of eye fundus cameras and by different operators, so the heterogeneity of the database is high. New pre-processing methods are proposed, which not only perform normalization and denoising tasks but also detect reflections and artifacts in the images. Novel candidate segmentation methods based on mathematical morphology, and new textural and contextual features for lesion characterization, are introduced. A random forest algorithm is used to detect true lesions among the candidates. The proposed methods make extensive use of new residue analysis techniques. Moreover, three new publicly available retinal image databases, e-ophtha EX, e-ophtha MA and e-ophtha HM, respectively designed to develop and evaluate exudate, microaneurysm and hemorrhage detection methods, are introduced in this work. The images are extracted from the OPHDIAT telemedicine network for diabetic retinopathy screening, and detailed manual annotations of the lesions are provided in these databases.
The proposed algorithms are evaluated on these databases. The proposed methods have been integrated within the TeleOphta system, which is presented and evaluated on two large databases. Each patient record is classified into one of two categories: "To be referred" or "Normal". The classification is based not only on the results of the presented methods, but also on image signatures provided by other partners, as well as on medical and acquisition-related information. The evaluation shows that the TeleOphta system can roughly double the number of patients who benefit from the diagnosis service.
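As a hedged illustration of the final two-category decision stage described above (invented field names and a deliberately crude rule, not the actual TeleOphta classifier), a patient record could be flagged "To be referred" whenever any detector reports positive evidence:

```python
def triage(record):
    """Toy two-category triage of a patient record.

    `record` maps evidence names (hypothetical: lesion counts from the
    exudate/microaneurysm/hemorrhage detectors) to numeric scores.
    Any positive evidence sends the record to a human reader.
    """
    suspicious = any(v > 0 for v in record.values())
    return "To be referred" if suspicious else "Normal"

records = [
    {"exudates": 0, "microaneurysms": 2, "hemorrhages": 0},  # lesions found
    {"exudates": 0, "microaneurysms": 0, "hemorrhages": 0},  # clean exam
]
labels = [triage(r) for r in records]
```

The real system additionally weighs image signatures and medical context; the point here is only the conservative "refer on any positive evidence" shape of the decision.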
Poïarkova, Elena. "L' enseignement assisté par ordinateur de la traduction français-russe". Aix-Marseille 1, 2006. http://www.theses.fr/2006AIX10018.
David, Amos Abayomi. "Processus EXPRIM, Image et IA pour un EIIAO individualisé (Enseignement par l'Image Intelligemment Assisté par Ordinateur) : le prototype BIRDS". Vandoeuvre-les-Nancy, INPL, 1990. http://docnum.univ-lorraine.fr/public/INPL_T_1990_DAVID_A_A.pdf.
Lebouvier, Alexandre. "Étude d'un réformateur de gazole assisté par plasma dédié au post-traitement des NOx émis par un moteur Diesel". Paris, ENMP, 2011. http://www.theses.fr/2011ENMP0037.
To meet the future Euro 6 regulation for NOₓ emissions from diesel engines, car manufacturers are considering the use of a NOₓ trap catalyst. During the regeneration phase, reducing species (H2, CO, HC) are brought to the NOₓ trap, which converts the stored NOₓ into N2. This injection of reductants can be achieved by engine control or by a reforming process. The plasma reformer offers an alternative to traditional reforming catalysts for producing syngas. This thesis is part of the research conducted at the Center for Energy and Processes over the last twenty years on plasma-assisted hydrocarbon conversion. The first goal of this thesis was to show experimentally the feasibility and viability of plasma-assisted exhaust-gas reforming of diesel fuel for onboard applications. The experimental bench, originally designed for H2 fuel cell feeding, was adapted to this new application. It was demonstrated, on two engine operating points, that performance is sensitive to the oxygen available in the exhaust gases. The NOₓ trap regeneration time can be estimated at 12 s at low load. The second goal was to develop numerical models to understand the coupled physical and chemical phenomena occurring in the plasma. Three models have been developed. A first 3D MHD model gave interesting results, difficult to measure experimentally, on the intrinsic properties of the low-current arc. A more complex model including the vortex injection of gases and the restrike mode was then implemented. A 1D multistage kinetic model, using a detailed kinetic mechanism of a diesel surrogate molecule, captured the trends of different parameters, which were then compared with experimental results. Finally, a 2D axisymmetric model was developed to study the interaction between the plasma, considered as a heat source, and the chemical kinetics.
Ouldja, Hadj. "Réalisation d'une interface en langage naturel et son application à l'enseignement assisté par ordinateur". Paris 6, 1988. http://www.theses.fr/1988PA066456.
Hoogstoel, Frédéric. "Une approche organisationnelle du travail coopératif assisté par ordinateur : application au projet Co-Learn". Lille 1, 1995. http://www.theses.fr/1995LIL10155.
Texto completo da fonteZitouni, Djamel. "De la modelisation au traitement de l'information médicale". Antilles-Guyane, 2009. http://www.theses.fr/2010AGUY0382.
The intensive care unit is a complex environment where the practice of medicine is specific. The handling of a patient during his or her stay must be done by care staff with specific knowledge, supported by a wide range of dedicated, constantly evolving equipment. In the search for continuous improvement of this activity, automated tools increasingly appear as a major support and a future challenge for medical practice. Over the last thirty years, several attempts have been made to develop automated guidelines. However, most of these tools suffer from numerous unsolved issues, both in the translation of textual protocols into formal form and in the treatment of information coming from biomedical monitors. To overcome the biases of diagnosis support systems, we chose a different approach: we defined a formalism that allows caregivers themselves to formalize medical knowledge. We spent three years in the intensive care unit of the university hospital of Fort de France with the aim of developing a complete data-processing chain, the final goal being the automation of guidelines in the room, at the patient's bedside. We propose a set of methods and tools covering the complete chain of treatment follow-up for a patient, from admission to discharge. This methodology is based on a bedside experimental station, Aidiag (AIDe aux DIAGnostics), a patient-centered tool that also fits medical routines. A dedicated methodology for analyzing biomedical signals provides a first stage of signal processing prior to physiological interpretation, complemented by an artificial intelligence engine (Think!) and a new formalism (Oneah).
Alfiansyah, Agung. "Intégration de données ultrasonores per-opératoires dans le geste de chirurgie orthopédique assisté par ordinateur". Aix-Marseille 2, 2007. http://theses.univ-amu.fr.lama.univ-amu.fr/2007AIX22108.pdf.
This work addresses the integration of ultrasound imaging for intraoperative data acquisition in computer-assisted orthopaedic surgery, in particular for hip surgery applications. The point is to improve the quality of the surgery using a minimally invasive, real-time, and highly available imaging device. The method we propose uses a feature-based registration between ultrasound images and a pre-operative CT scan volume. We present an ultrasound image segmentation method based on a deformable model, with a regional energy term integrated to capture the local characteristics of ultrasound images. The feature-based registration is a variant of the ICP algorithm that uses a pre-calculated distance map with Levenberg-Marquardt optimization. We also propose a protocol for the pre- and intra-operative data acquisition. Real operating-room constraints are taken into account in the design of this protocol, while trying to provide the necessary ergonomics for the surgeon. An extensive validation, conducted on phantoms and a cadaver, is presented in this thesis. From this validation we assess the performance of the data acquisition protocol, as well as the precision of the segmentation and the robustness and precision of the registration, measured both quantitatively and qualitatively. Finally, we propose some possible improvements to the segmentation and registration methods.
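The registration idea above can be sketched in a few lines. The toy below substitutes a brute-force nearest-neighbour search for the pre-computed distance map and a closed-form SVD (Kabsch) solve for the Levenberg-Marquardt step, so it is an illustrative analogue rather than the thesis's implementation:

```python
import numpy as np

def icp_rigid_2d(src, dst, n_iter=10):
    """Toy feature-based ICP: match each source point to its nearest
    destination point, then solve the best rigid transform in closed
    form (SVD/Kabsch) -- a stand-in for the distance-map lookup and
    Levenberg-Marquardt optimization described in the abstract."""
    src = np.asarray(src, float).copy()
    for _ in range(n_iter):
        # brute-force nearest-neighbour correspondences
        d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        # closed-form rigid transform between the matched point sets
        mu_s, mu_d = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:        # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_d - R @ mu_s
        src = src @ R.T + t
    return src

# synthetic "CT" point set and a slightly rotated/translated "ultrasound" set
ct = np.array([[i, j] for i in range(5) for j in range(6)], float)
th = np.deg2rad(2.0)
R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
us = ct @ R.T + np.array([0.2, -0.1])
aligned = icp_rigid_2d(us, ct)
residual = np.abs(aligned - ct).max()
```

Because the perturbation is smaller than half the point spacing, the nearest-neighbour correspondences are exact and the transform is recovered in one iteration; real ultrasound/CT data needs the robustness measures the thesis discusses.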
Rousseau, Eric. "Curage axillaire préparé par lipoaspiration et assisté par endoscopie dans le traitement chirurgical des cancers du sein (à propos de 43 cas)". Bordeaux 2, 1997. http://www.theses.fr/1997BOR23052.
Aïche-Belkadi, Lynda. "Modification des propriétés de surface de poudres en lit fluidisé assisté par une post-décharge". Toulouse 3, 2009. http://thesesups.ups-tlse.fr/693/.
This work concerns the development of an original process coupling two technologies: a fluidized bed and a cold nitrogen remote plasma. The main objective is to treat a significant mass of thermo-sensitive powders at low temperature and low pressure; a further goal is to understand the phenomena taking place in the process. The fluidized bed is used to improve the gas-solid contact: fluidization induces a uniform treatment and substantial mass and heat transfer between the two phases. Fluidization is achieved with nitrogen that has previously flowed through a microwave discharge, which generates chemically active species. In this study, the fixed bed height is of the order of the bed diameter. The work comprises two parts. In the first, the grafting of new chemical functions onto the surface of polyethylene powders exposed to a cold nitrogen remote plasma, with and without oxygen, is studied. The second concerns the deposition of silicon oxide on the same type of powder from silane (SiH4) and oxygen. In the first part, the wettability of the polyethylene powder, which is naturally hydrophobic, is increased. The treatment efficiency depends essentially on the composition of the plasma gas: a pure nitrogen post-discharge can increase the wettability of the powder, but only with respect to organic liquids, whereas adding a small amount of oxygen yields a better water wettability. In the second part, the feasibility of depositing silicon oxide on the surface of a polyethylene powder, using a nitrogen/oxygen cold remote plasma and silane in a fluidized bed, is shown. The deposit takes the form of nodules, and the water wettability of the powder is drastically improved by the deposition process. The hydrophilic character and the continuity of the deposit are enhanced by increasing the oxygen concentration injected into the discharge. Aging studies show that the coated-powder wettability reaches a value similar to that of silica after a few days.
Laveau, Nicolas. "Mouvement et video : estimation, compression et filtrage morphologique". Paris, ENMP, 2005. http://www.theses.fr/2005ENMP1346.
This thesis deals with video sequences and successively focuses on the main themes of video compression (motion estimation, spatial and temporal transforms, coefficient quantization and coding) and then on spatio-temporal filtering and video segmentation. Two motion estimation schemes are studied, one based on the projection of the optical flow equation onto a wavelet basis, the other on the multiscale minimization of a motion field described by a piecewise bilinear model. We then focus on adapting to the H.263 standard a rate-allocation model developed for still images. We develop two lifting-based wavelet transforms, one for the spatial domain, the other for the motion field. Lastly, we introduce structuring elements that follow the motion field to create a 2D+t mathematical morphology.
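A lifting-based wavelet transform like the ones mentioned above splits the signal into even and odd samples, predicts the odd ones from their even neighbours, then updates the evens. The sketch below implements the classic LeGall 5/3 lifting steps on a periodic 1D signal, as a generic textbook illustration rather than the specific transforms designed in the thesis:

```python
import numpy as np

def lift_forward(x):
    """One level of the LeGall 5/3 wavelet computed by lifting:
    split into even/odd samples, predict the odds, update the evens."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    d = odd - 0.5 * (even + np.roll(even, -1))   # predict step (detail)
    a = even + 0.25 * (d + np.roll(d, 1))        # update step (approximation)
    return a, d

def lift_inverse(a, d):
    """Undo the two lifting steps in reverse order -- exact by construction."""
    even = a - 0.25 * (d + np.roll(d, 1))
    odd = d + 0.5 * (even + np.roll(even, -1))
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

signal = np.array([3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9, 3], float)
approx, detail = lift_forward(signal)
restored = lift_inverse(approx, detail)
```

The appeal of lifting for compression is visible in the structure itself: each step is trivially invertible, so perfect reconstruction holds regardless of the predict/update filters chosen.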
Trebaul, Lena. "Développement d'outils de traitement du signal et statistiques pour l'analyse de groupe des réponses induites par des stimulations électriques corticales directes chez l'humain". Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAS045/document.
Introduction: Low-frequency direct electrical stimulation is performed in drug-resistant epileptic patients implanted with depth electrodes. It induces cortico-cortical evoked potentials (CCEP) that allow in vivo connectivity mapping of local networks. The multicentric project F-TRACT aims at gathering data from several hundred patients in a database to build a probabilistic functional tractography atlas that estimates connectivity at the cortex level. Methods: Semi-automatic processing pipelines have been developed to handle the volume of stereo-electroencephalography (SEEG) and imaging data and store them in a database. New signal processing and machine learning methods have been developed and included in the pipelines in order to automatically identify bad channels and correct the stimulation artifact. Group analyses have been performed using CCEP features and time-frequency maps of the stimulation responses. Results: The performance of the new methods has been assessed on heterogeneous data coming from different hospital centers that record and stimulate with variable parameters. The atlas was generated from a sample of 173 patients, providing a connectivity probability value for 79% of the possible connections and estimating biophysical properties of fibers for 46% of them. The methodology was applied to patients who experienced auditory symptoms, which allowed the identification of different networks involved in hallucination or illusion generation. Oscillatory group analysis showed that anatomy was driving the stimulation response pattern. Discussion: This thesis presents a methodology for CCEP studies at the cerebral cortex scale, handling data that are heterogeneous both spatially and in terms of acquisition and stimulation parameters. A growing number of patient records will maximize the statistical power of the atlas for studying causal cortico-cortical interactions.
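Automatic bad-channel identification can be illustrated with a deliberately crude rule, invented here and far simpler than the machine-learning detector used in F-TRACT: flag any channel whose variance is an order of magnitude above the median channel variance.

```python
import numpy as np

def flag_bad_channels(eeg, factor=10.0):
    """Flag channels whose variance exceeds `factor` times the median
    channel variance. `eeg` has shape (n_channels, n_samples)."""
    variances = eeg.var(axis=1)
    threshold = factor * np.median(variances)
    return np.flatnonzero(variances > threshold)

# synthetic recording: 8 unit-amplitude channels, one contaminated channel
t = np.linspace(0.0, 1.0, 1000)
eeg = np.array([np.sin(2 * np.pi * (5 + k) * t) for k in range(8)])
eeg[3] *= 25.0  # channel 3 picks up a large artifact
bad = flag_bad_channels(eeg)
```

Using the median rather than the mean keeps the threshold itself insensitive to the outlier channels being hunted; a production detector would combine several such features with a learned classifier.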
Abu, Al-Chay Najim. "Un système expert pour l'analyse et la synthèse des verbes arabes : dans un cadre d'Enseignement Assisté par Ordinateur". Lyon 1, 1988. http://www.theses.fr/1988LYO10076.
Maraoui, Mohsen. "Elaboration d'un dictionnaire multifonction, à large couverture de la langue arabe : applications aux systèmes d'ALAO". Grenoble 3, 2009. http://www.theses.fr/2009GRE39042.
There is much CALL (computer-assisted language learning) software on the Internet, designed with authoring systems such as CourseBuilder, Hot Potatoes or Netquizz. Such activities pose several problems: the rigidity of the software (the data used are predetermined and cannot be altered or enriched) and the lack of adaptability of the course to the language skills of learners (the path is independent of the learner's response at each step, which cannot be evaluated). Using NLP in the design of CALL software is currently one of the methods that can solve these problems. But more than two decades after the early work, research on NLP-based CALL remains limited, mainly for two reasons: the limited familiarity of language-teaching specialists and computer scientists with NLP, and the cost of natural language processing resources and products. As a result there are only a limited number of prototypes and experimental systems, mostly for Latin languages. CALL work based on NLP for the Arabic language is practically nonexistent, despite a rich literature on the automatic processing of Arabic. Beyond the factors mentioned above, this deficiency is due to Arabic being a difficult language to process automatically. Given this situation, and wishing to enrich the possibilities for creating educational activities for Arabic, we first developed a labelled dictionary of Arabic (as complete as possible), a derivational tool, a conjugator and a morphological analyzer for Arabic. In a second step, we used these tools to create a number of educational applications for learning Arabic as a foreign language for French learners, using our SALA system.
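Arabic morphology interleaves a (usually triliteral) consonantal root with a vowel template, which is what makes a dedicated derivational tool and conjugator necessary. The toy generator below, in transliteration, slots root consonants into standard textbook templates; it is an illustration of the root-and-pattern principle, not output of the SALA tools:

```python
def apply_pattern(root, pattern):
    """Interdigitate a triliteral root into a transliterated template:
    the digits 1-3 in the template stand for the root consonants."""
    return "".join(root[int(c) - 1] if c in "123" else c for c in pattern)

root = ("k", "t", "b")                          # the root k-t-b, 'writing'
templates = {
    "1a2a3a": "perfective 'he wrote'",          # -> kataba
    "1aa2i3": "active participle 'writer'",     # -> kaatib
    "ma12uu3": "passive participle 'written'",  # -> maktuub
}
forms = {apply_pattern(root, p): gloss for p, gloss in templates.items()}
```

A real analyzer has to run this process in reverse (recovering root and template from a surface form, with assimilations and weak radicals), which is the hard part the thesis's morphological analyzer addresses.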
Gâteau, Jérôme. "Imagerie ultrasonore ultrarapide d'événements de cavitation : application en thérapie par ultrasons et imagerie de détection". Paris 7, 2011. http://www.theses.fr/2011PA077013.
The onset of cavitation activity in an aqueous medium is linked to the formation of gas/vapour-filled cavities of micrometric size. This formation can be acoustically mediated and is then called acoustic bubble nucleation. We focus here on the activation of seed nuclei by short (a few cycles) and high-amplitude ultrasonic excitation (on the order of MPa). Bubbles are generated during the rarefaction phase of the wave and are transient (they dissolve). The nucleation properties of biological tissues are little known, but they can be assessed using ultrasound: the formation of a bubble results in the appearance of a new scatterer (which can be detected with a pulse-echo sequence), and each cavitation event generates an acoustic emission (detected with passive reception). In this PhD manuscript, we use ultrafast ultrasound imaging (simultaneous acquisition on an array of transducers at a high frame rate) to detect cavitation events. Two in vitro applications were first validated. On one hand, bubble nucleation was performed through a human skull, and transcranial passive detection of a single cavitation event was used in a time-reversal process to optimize adaptive focusing for thermal therapy of brain tissue. On the other hand, the formation and dissolution of bubbles in scattering biological tissues (muscle) were detected with high sensitivity by combining passive detection and ultrafast active imaging. Finally, in vivo experiments on sheep brains and in vitro experiments on animal blood showed that nucleation in biological tissue is a random phenomenon, and that high negative pressures are required to initiate nucleation in vivo (< -12 MPa).
Moulin, Claude. "Adaptation dynamique d'un système d'aide a l'apprentissage de la géométrie : modélisation par un système multiagent". Rouen, 1998. http://www.theses.fr/1998ROUES043.
Kachouri, Imen. "Description et classification des masses mammaires pour le diagnostic du cancer du sein". Thesis, Evry-Val d'Essonne, 2012. http://www.theses.fr/2012EVRY0017/document.
The computer-aided diagnosis of breast cancer is becoming a necessity given the exponential growth in the number of performed mammograms. In particular, breast mass diagnosis and classification currently arouse great interest. Indeed, the complexity of the processed shapes and the difficulty of distinguishing between them require the use of appropriate descriptors. In this work, characterization methods suitable for breast pathologies are proposed and different classification methods are studied. In order to analyze mass shapes, a study of the different segmentation techniques in the context of breast mass detection is carried out; it leads us to adopt the level-set model based on minimization of a region-scalable fitting energy. Once the images are segmented, various descriptors proposed in the literature are studied. Nevertheless, these proposals have limitations such as sensitivity to noise, non-invariance to geometric transformations, and imprecise, overly general description of lesions. In this context, we propose a novel descriptor, the Skeleton End Points (SEP) descriptor, to better characterize spiculations in the mass contour while respecting scale invariance. A second descriptor, the Protuberance Selection (PS), ensures the same invariance criterion and an accurate description of contour roughness. However, the SEP and PS proposals are sensitive to noise, so a third descriptor, the Spiculated Mass Descriptor (SMD), with good robustness to noise, is then introduced. In order to compare the descriptors, a comparative study between different classifiers is performed; the Support Vector Machine (SVM) provides the best classification result for all considered descriptors. Finally, the proposed descriptors are compared with others commonly used in the breast cancer field to test their ability to characterize the considered mass contours.
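As a generic, hedged stand-in for the contour descriptors discussed above (not the thesis's SEP, PS or SMD), the classic compactness index P²/(4πA) already separates a smooth mass outline from a spiculated one: it equals 1 for a circle and grows with contour roughness.

```python
import numpy as np

def compactness(xy):
    """P^2 / (4*pi*A) for a closed polygon given as an (n, 2) array."""
    d = np.roll(xy, -1, axis=0) - xy
    perimeter = np.hypot(d[:, 0], d[:, 1]).sum()
    x, y = xy[:, 0], xy[:, 1]
    # shoelace formula for the enclosed area
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    return perimeter**2 / (4.0 * np.pi * area)

theta = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
circle = np.column_stack([np.cos(theta), np.sin(theta)])
radii = np.where(np.arange(64) % 2 == 0, 1.0, 0.5)   # jagged, "spiculated" outline
star = np.column_stack([radii * np.cos(theta), radii * np.sin(theta)])
circle_c = compactness(circle)
star_c = compactness(star)
```

Unlike this single scalar, the descriptors proposed in the thesis aim to stay discriminative under noise and geometric transformations, which is exactly where plain compactness breaks down.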
Desbiens, Yvan. "Influence d'un didacticiel en traitement de données comptables sur la pensée opératoire d'élèves du secteur professionnel en en [sic] commerce et secrétariat de niveau secondaire IV, V et V intensif (15 ans et plus)". Doctoral thesis, Université Laval, 1987. http://hdl.handle.net/20.500.11794/29293.
Taillepierre, Philippe. "Diodes électroluminescentes organiques : études des efficacités lumineuses et du traitement ionique des électrodes pour l'amélioration du vieillissement". Limoges, 2006. https://aurore.unilim.fr/theses/nxfile/default/206e837a-c35f-4de6-ab2e-9754ce39da3d/blobholder:0/2006LIMO0032.pdf.
Unlike inorganic diodes, organic light-emitting diodes (OLEDs) generally present a polychromatic emission. However, this emission is not taken into account in the usual calculation of the photometric parameters (luminance, efficiencies), which assumes a monochromatic emission. Here, these parameters are calculated considering the real emission of the diodes. The results show that photometric parameters that include the photopic response of the eye in the calculation are overestimated for green diodes and underestimated for blue or red diodes if a monochromatic emission is assumed. Studies are also carried out on OLEDs whose cathode is protected by a silver layer deposited with ion-beam assistance. The resulting densification of the silver layer counteracts the "dark spots" phenomenon and improves the diode lifetime.
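The over/underestimation effect can be reproduced numerically. The sketch below is an assumption-laden toy: it uses a Gaussian approximation of the photopic response V(λ) (peak 555 nm, σ ≈ 43 nm; the real CIE curve is tabulated) and Gaussian emission spectra. Evaluating V only at the peak wavelength overestimates the luminous flux of a green emitter (V is concave at its maximum) and underestimates that of a red one (V is convex in its tail):

```python
import numpy as np

lam = np.arange(380.0, 781.0)            # wavelength grid, 1 nm steps

def gaussian(x, center, sigma):
    return np.exp(-0.5 * ((x - center) / sigma) ** 2)

# Gaussian stand-in for the photopic response V(lambda), peaking at 555 nm
V = gaussian(lam, 555.0, 43.0)

def flux_ratio(center, sigma=20.0):
    """True luminous flux (spectrum weighted by V) divided by the
    'monochromatic' estimate V(peak wavelength) * total radiant power."""
    spectrum = gaussian(lam, center, sigma)
    true_flux = (spectrum * V).sum()     # rectangle rule, 1 nm steps
    mono_flux = gaussian(center, 555.0, 43.0) * spectrum.sum()
    return true_flux / mono_flux

green_ratio = flux_ratio(555.0)   # < 1: monochromatic assumption overestimates
red_ratio = flux_ratio(650.0)     # > 1: monochromatic assumption underestimates
```

With these assumed widths the green ratio is about 0.91 and the red ratio about 1.4, matching the sign of the effect reported in the abstract.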
El, Youbi Faysal. "Etude de transducteurs multiéléments pour le contrôle santé intégré par ondes de Lamb et développement du traitement de signal associé". Valenciennes, 2005. http://ged.univ-valenciennes.fr/nuxeo/site/esupversions/42645aa4-651d-459d-b3dd-ce534a2906da.
This work concerns, first, the study of an Integrated Health Monitoring System (IHMS) based on the generation and reception of Lamb waves by multi-element transducers and, second, the development of a signal processing tool able to identify all Lamb modes present and to study their sensitivity to defects. To achieve this, a theoretical relationship between the Short-Time Fourier Transform (STFT) and the Two-Dimensional Fourier Transform (2D-FT) was demonstrated, making it possible to compare the amplitudes obtained with these two signal processing techniques. This improved the identification of Lamb modes and revealed the presence of parasitic modes. In addition, Finite Element Modeling (FEM) of the system showed that the piezoelectric elements composing the emitter and receiver transducers were at the origin of these parasitic modes; the suggested remedy was to reduce the element thickness. Moreover, this study showed that the receiving transducer could be improved by developing a segmented transducer. Once the system was optimized, a hole was introduced in the centre of the plate in order to study the sensitivity of the modes to its presence and size. The received signals were then processed, and the results made it possible to test the sensitivity of the modes with respect to the size of the hole, clearly showing the effectiveness of the system developed for structural health monitoring.
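The role of the 2D Fourier transform in Lamb-mode identification can be illustrated with a synthetic toy (invented wave parameters, not the thesis's transducer data): two "modes" at the same frequency but different wavenumbers are indistinguishable at a single receiver, yet a 2D FFT over (position, time) separates them into distinct (k, f) peaks.

```python
import numpy as np

nx, nt = 64, 256                 # measurement positions x, time samples t
x = np.arange(nx)[:, None]
t = np.arange(nt)[None, :]
k1, k2, f = 4, 10, 20            # wavenumbers and frequency, in exact FFT bins
# two synthetic propagating modes at the same frequency, different wavenumbers
s = (np.cos(2 * np.pi * (k1 * x / nx - f * t / nt))
     + np.cos(2 * np.pi * (k2 * x / nx - f * t / nt)))
S = np.abs(np.fft.fft2(s))
# each real cosine splits into two conjugate peaks of magnitude nx*nt/2;
# with numpy's e^{-i...} convention, the +f component lands at (k, nt - f)
peak = nx * nt / 2
```

Because the chosen wavenumbers and frequency sit exactly on FFT bins there is no leakage, so the two modes appear as two clean peaks in the same frequency column.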
Kouvahe, Amélé Eyram Florence. "Etude du remodelage vasculaire pathologique : de la caractérisation macroscopique en imagerie TDM à l’analyse en microscopie numérique". Electronic Thesis or Diss., Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAS019.
This research focuses on the study of the vascular network in general, in several imaging modalities and several anatomo-pathological configurations. Its objective is to discriminate vascular structures in image data and to detect and quantify the presence of morphological modifications (remodeling) related to a pathology. The proposed generic analysis framework exploits a priori knowledge of the geometry of blood vessels and their contrast with respect to the surrounding tissue. The originality of the developed approach consists in exploiting a multidirectional locally connected filter (LCF) adapted to the dimension of the data space (2D or 3D). This filter allows the selection of curvilinear structures in positive contrast in images whose cross-sectional size does not exceed the size of the filtering window. This selection remains effective even at vessel subdivisions. The multi-resolution approach makes it possible to overcome the difference in vascular calibers in the network and to segment the entire vascular structure, even in the presence of a local caliber change. The proposed segmentation approach is general. It can be easily adapted to different imaging modalities that preserve a contrast (positive or negative) between the vessels and their environment. This has been demonstrated in different types of imaging, such as thoracic CT with and without contrast agent injection, hepatic perfusion data, eye fundus imaging and infrared microscopy (for fiber segmentation in mouse brain). From an accurate and robust segmentation of the vascular network, it is possible to detect and characterize the presence of remodeling due to a pathology. This is achieved by analyzing the vessel caliber variation along the central axis, which provides both a global view of the caliber distribution in the studied organ (to be compared with a "healthy" reference) and a local detection of shape remodeling.
The latter case has been applied to the detection and quantification of pulmonary arteriovenous malformations (PAVM). Initially planned for a study of tumor angiogenesis, the segmentation method developed above was not applicable to infrared microscopy because of the lack of vascular contrast in the spectral bands analyzed. Instead, it was exploited for the extraction of brain fibers as a support element for image interpolation aiming at the 3D reconstruction of the brain volume from sub-sampled 2D data. In this respect, a 2D-2D interpolation with realignment of the structures was developed as a second methodological contribution of the thesis. We proposed a geometric interpolation approach controlled by a prior mapping of the corresponding structures in the images, which in our case were the tumor region, the fibers, the brain ventricles and the contour of the brain. An atlas containing the unique labels of the structures to be matched is thus built for each image. Labels of the same value are aligned using a field of directional vectors established at the level of their contours, in a higher-dimensional space (3D here). The diffusion of this vector field results in a smooth directional flow from one image to the other, which represents the homeomorphic transformation between the two images. The proposed method has two advantages: it is general, as demonstrated on different image modalities (microscopy, CT, MRI, atlas), and it allows controlling the alignment of the structures whose correspondence is targeted as a priority.
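The idea of selecting thin, bright curvilinear structures independently of the local background can be illustrated with a morphological white top-hat, a classic analogue of (not a substitute for) the thesis's locally connected filter; the structuring-element size plays the role of the filtering window that bounds the vessel cross-section.

```python
import numpy as np

def erode(img, size):
    """Grayscale erosion with a flat square structuring element
    (periodic boundaries, via shifted minima)."""
    out = img.copy()
    r = size // 2
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out = np.minimum(out, np.roll(np.roll(img, dy, 0), dx, 1))
    return out

def dilate(img, size):
    """Grayscale dilation (dual of erosion)."""
    out = img.copy()
    r = size // 2
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out = np.maximum(out, np.roll(np.roll(img, dy, 0), dx, 1))
    return out

def white_tophat(img, size):
    """Image minus its opening: keeps bright structures thinner than the
    structuring element, whatever the local background level."""
    return img - dilate(erode(img, size), size)

# toy image: slowly varying background plus a one-pixel-wide bright "vessel"
yy, xx = np.mgrid[0:32, 0:32]
img = 0.01 * xx.astype(float)
img[:, 16] += 1.0
response = white_tophat(img, 5)
```

The response stays near 1 on the thin structure and near 0 on the sloped background; running such a filter at several scales is one simple way to mimic the multi-resolution handling of varying vessel calibers.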
Marsac, Laurent. "Focalisation ultrasonore adaptative et application à la thérapie du cerveau". Paris 7, 2013. http://www.theses.fr/2013PA077106.
The aim of this thesis is to optimize transcranial ultrasound focusing for brain therapy at 1 MHz. Tissues are heated non-invasively by focused ultrasound using a spherical probe, and the distortion of the wavefront induced by the skull is compensated. Treatments are MR-guided in order to measure the temperature rise at the focus. In vitro ultrasound measurements allowed the phase and amplitude aberrations of different skulls to be characterized. Aberration corrections based on CT imaging of the skull and numerical simulations are optimized and compared, and the best method is applied to ex vivo human heads to validate its accuracy. Skull aberrations are then corrected by an adaptive focusing method based on the estimation of the acoustic intensity at the focus, measured through the tissue displacement induced by the acoustic radiation pressure. This measurement, performed with an MR-ARFI sequence and validated in phantoms and cadaver heads, confirms improved focusing. Finally, the blood-brain barrier (BBB) was opened in two animals by focusing ultrasound after injection of microbubbles into the blood; the local BBB opening and its closure after 24 h were verified. These first applications at 1 MHz pave the way for future applications in patients.
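The principle behind aberration correction can be shown with a toy phase-conjugation calculation (the per-element delays below are invented, not the thesis's transducer geometry): if the skull adds a known phase delay per array element, re-emitting each element with the opposite phase restores coherent summation at the focus.

```python
import numpy as np

# hypothetical per-element phase delays (radians) added by the skull
aberration = np.array([0.0, 1.2, -0.7, 2.5, -1.9, 0.4, 3.0, -2.2])

def focal_amplitude(emitted_phases):
    """Pressure amplitude at the focus: coherent sum over the array
    elements, each arriving with its emission phase plus the
    skull-induced aberration."""
    return abs(np.exp(1j * (emitted_phases + aberration)).sum())

n = aberration.size
uncorrected = focal_amplitude(np.zeros(n))   # no compensation
corrected = focal_amplitude(-aberration)     # phase conjugation
```

With correction, all eight contributions add in phase and the focal amplitude reaches the full array gain of 8; without it, the phases partially cancel. The thesis's contribution lies in estimating those unknown delays in vivo (from CT-based simulation or MR-ARFI), which this sketch simply assumes given.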
Hébert, Marie-Marthe. "Didactique de la pratique d'écriture au secondaire, avec traitement de texte : vers une communication interactive". Doctoral thesis, Université Laval, 1987. http://hdl.handle.net/20.500.11794/29310.
Bouzidi, Laïd. "Conception d'un système d'E. A. O. Pour l'apprentissage d'une langue : application à l'enseignement de la morphologie de l'arabe". Lyon 1, 1989. http://www.theses.fr/1989LYO10106.
Texto completo da fonteDascalu, Mihai. "L'analyse de la complexité du discours et du texte pour apprendre et collaborer". Phd thesis, Université de Grenoble, 2013. http://tel.archives-ouvertes.fr/tel-00978420.
Laplante, André. "Élaboration et mise à l'essai de guides pour l'initiation à l'ordinateur et au traitement de texte auprès des formatrices et des analphabètes fonctionnels". Master's thesis, Université Laval, 1990. http://hdl.handle.net/20.500.11794/17629.
Texto completo da fonteLoiseau, Mathieu. "Elaboration d'un modèle pour une base de textes indexée pédagogiquement pour l'enseignement des langues". Grenoble 3, 2009. https://tel.archives-ouvertes.fr/tel-00440460v3.
This PhD thesis deals with the notion of pedagogical indexation and tackles it from the point of view of searching for and selecting texts for language teaching. This problem is first set in the field of Computer Assisted Language Learning (CALL) and of the potential contribution of Natural Language Processing (NLP) to this discipline, before being considered within the scope of elements more directly relevant to language didactics, in order to propose an empirical approach. The latter is justified by the inadequacy of current description standards for pedagogical resources when it comes to modeling raw objects, particularly texts, in a consistent fashion in the context of language learning. The thesis subsequently revolves around two questionnaires aimed at providing insight into language teachers' declared practices regarding searching for and selecting texts when planning classes. The first questionnaire provides data to formalize the notion of pedagogical context, some components of which are then examined through the second questionnaire. These first formalization drafts provide the foundations for a model aiming at taking into account the contextual nature of the pedagogical properties of raw resources. Finally, possible leads for implementing this model are suggested through the description of a computerized system.
Wera, Marie-Thérèse. "Histoire en pièces détachées : une activité de traitement de texte intégrant lecture et écriture destinée à des enfants éprouvant des difficultés d'apprentissage en français au primaire". Master's thesis, Université Laval, 1988. http://hdl.handle.net/20.500.11794/29264.
Brunet, Eric. "Conception et réalisation d'un système expert d'aide à l'interprétation des chocs mécaniques en centrale nucléaire". Compiègne, 1988. http://www.theses.fr/1988COMPD113.
The main purpose of this research work was the design of a Diagnostic Expert System workbench (MIGRE) and, from it, the realization of a system to aid the interpretation of mechanical impacts in nuclear power plants. The central problem for knowledge-based systems relates to the concept of "knowledge". MIGRE proposes a three-level classification of knowledge. The first level is concerned with basic or descriptive knowledge and is formalised in an Entity-Relation model. The second level associates the basic concepts with specific information (the "Knowledge Vector"). The last level deals with inference knowledge. Each element of expertise is represented by a "marked" rule (strategy, inference, definition, etc.). MIGRE provides tools to support the activities of application development. Thus, the knowledge base editor includes a Specialized Natural Language Interface, whose aim is to understand the "meaning" of a sentence, and in particular to look for "implicit" knowledge. The parser is a semantic one, using a Definite Clause Grammar. Problem solving is guided by the answers given by the Knowledge Exploitation module to a number of tasks extended dynamically during the reasoning. The results show that the two "intellectual" activities of understanding sentences and reasoning to solve a problem require a common core of knowledge.
Aarabi, Ardalan. "Détection et classification spatiotemporelle automatique d'évènements EEG pour l'analyse de sources d'activité cérébrale chez le nouveau-né et l'enfant". Amiens, 2007. http://www.theses.fr/2007AMIED002.
Neonates, especially premature ones, are at high risk of brain damage and life-long cognitive disability. In full-term neonates, neurological pathologies are often accompanied by epileptic manifestations, and these newborns may be impaired in other domains, including coordination, cognition and behavior. EEG is a useful non-invasive tool for measuring the electrical activity of the brain. In this thesis, we developed tools to identify normal and pathological EEG events in neonates and children. We paid special attention to detecting (i) seizures, by using specific age-dependent features of the newborn EEG, (ii) brain epileptic states and (iii) short-term events such as spikes and spike-and-waves for each state. We characterized EEG events by extracting a set of contextual features in order to classify them. Then, the location of cerebral generators was found and tracked by spatial clustering of the equivalent dipoles of the EEG events in different brain states. The results showed good sensitivity and selectivity with low false detection rates in neonates and children.
Billami, Mokhtar Boumedyen. "Désambiguïsation sémantique dans le cadre de la simplification lexicale : contributions à un système d'aide à la lecture pour des enfants dyslexiques et faibles lecteurs". Electronic Thesis or Diss., Aix-Marseille, 2018. http://www.theses.fr/2018AIXM0354.
In recent years, a large number of technologies have been created to help people who have difficulty reading written texts. The proposed systems integrate speech technologies (reading aloud) or visual aids (adjusting and/or coloring fonts, or increasing the space between letters and lines). However, it is essential to also propose transformations of the texts' content in order to provide simpler and more frequent substitutes. The purpose of this thesis is to contribute to the development of a reading aid system that automatically provides a simplified version of a given text while preserving the meaning of its words. The presented work addresses the problem of semantic ambiguity (quite common in natural language processing) and aims to propose solutions for Word Sense Disambiguation (WSD) using unsupervised, knowledge-based approaches drawing on lexico-semantic resources. First, we propose a state of the art of WSD approaches and of the semantic similarity measures that are crucial for this process. Thereafter, we compare various WSD algorithms in order to identify the best of them. Finally, we present our contributions to the creation of a lexical resource for French that proposes disambiguated synonyms graduated according to how difficult they are to read and understand. We show that our resource is useful and can be integrated into a lexical text simplification module.
Zribi, Abir. "Apprentissage par noyaux multiples : application à la classification automatique des images biomédicales microscopiques". Thesis, Rouen, INSA, 2016. http://www.theses.fr/2016ISAM0001.
This thesis arises in the context of computer-aided analysis for subcellular protein localization in microscopic images. The aim is to establish an automatic classification system able to identify the cellular compartment in which a protein of interest exerts its biological activity. In order to overcome the difficulty of discerning the cellular compartments in microscopic images, existing state-of-the-art systems use several descriptors to train an ensemble of classifiers. In this thesis, we propose a different classification scheme which better copes with the requirements of genericity and flexibility needed to treat various image datasets. Aiming to provide an efficient characterization of microscopic images, a new feature system combining local, frequency-domain, global, and region-based features is proposed. We then formulate the problem of heterogeneous feature fusion as a kernel selection problem. Using multiple kernel learning, the problems of optimal feature set selection and classifier training are resolved simultaneously. The proposed combination scheme leads to a simple and generic framework capable of high performance in microscopy image classification. Extensive experiments were carried out using widely-used and well-known datasets. When compared with state-of-the-art systems, our framework is more generic and outperforms other classification systems. To further expand our study of multiple kernel learning, we introduce a new formalism for learning with multiple kernels performed in two steps. This contribution consists in proposing three regularization terms within the minimization of the kernel weights problem, formulated as a classification problem using Separators with Vast Margin on the space of pairs of data. The first term ensures that kernel selection leads to a sparse representation, while the second and third terms introduce the concept of kernel similarity by using a correlation measure. Experiments on various biomedical image datasets show the promising performance of our method compared to state-of-the-art methods.
Michel, Johan. "Modèles d'activités pédagogiques et de support à l'interaction pour l'apprentissage d'une langue : le système Sampras". Phd thesis, Université du Maine, 2006. http://tel.archives-ouvertes.fr/tel-00090250.
Cheikhrouhou, Imen. "Description et classification des masses mammaires pour le diagnostic du cancer du sein". Phd thesis, Université d'Evry-Val d'Essonne, 2012. http://tel.archives-ouvertes.fr/tel-00875976.
Lortal, Gaëlle. "Médiatiser l'annotation pour une herméneutique numérique : AnT&CoW, un collecticiel pour une coopération via l'annotation de documents numériques". Phd thesis, Université de Technologie de Troyes, 2006. http://tel.archives-ouvertes.fr/tel-00136042.
Annotation as a support for cooperative work is considered both as an object, ranging from label to comment, and as an activity involving communication, indexing and discourse elaboration. The design of our groupware is based on a model of the annotation activity that emphasizes its interactional and cooperative dimensions. This model-driven approach is enriched by the use of corpora, which keeps the end user at the center of our concerns. We present a prototype of the groupware, AnT&CoW, which uses NLP tools to support the user at different levels: support for building classifications and help with indexing. A first evaluation of this prototype is also presented.
Roehri, Nicolas. "Caractérisation du rôle des oscillations à haute fréquence dans les réseaux épileptiques". Thesis, Aix-Marseille, 2018. http://www.theses.fr/2018AIXM0012/document.
Epilepsy is a major health problem as it affects 50 million people worldwide. One third of patients are resistant to medication. Surgical removal of the brain areas generating the seizures (the epileptogenic zone) is considered the standard option for these patients to become seizure-free. The non-negligible rate of surgical failure has led researchers to seek other electrophysiological criteria. One putative marker is high-frequency oscillations (HFOs). An HFO is a brief oscillation between 80 and 500 Hz lasting at least 4 periods, recorded in intracerebral EEG. Due to their short-lasting nature, visually marking these small oscillations is tedious and time-consuming. Automatically detecting them thus seems an imperative stage for studying HFOs in cohorts of patients. There is, however, no general agreement on existing detectors. In this thesis, we developed a new way of representing HFOs thanks to a novel normalisation of the wavelet transform, and used this representation as a basis for detecting HFOs automatically. We then designed a strategy to properly characterise and validate automated detectors. Finally, using the validated detector, we characterised, in a cohort of patients, the reliability of HFOs and epileptic spikes (the standard marker) as predictors of the epileptogenic zone. The conclusion of this thesis is that HFOs are not better than epileptic spikes at predicting the epileptogenic zone, but combining the two leads to a more robust biomarker.
Laurent, Mario. "Recherche et développement du Logiciel Intelligent de Cartographie Inversée, pour l’aide à la compréhension de texte par un public dyslexique". Thesis, Université Clermont Auvergne (2017-2020), 2017. http://www.theses.fr/2017CLFAL016/document.
Children with language impairments, such as dyslexia, often face important difficulties when learning to read and during any subsequent reading tasks. These difficulties tend to compromise their understanding of the texts they must read during their time at school, which implies learning difficulties and may lead to academic failure. Over the past fifteen years, general tools developed in the field of Natural Language Processing have been transformed into specific tools that help with and compensate for language-impaired students' difficulties. At the same time, the use of concept maps or heuristic maps to encourage dyslexic children to express their thoughts, or to retain certain knowledge, has become popular. This thesis aims to identify and explore knowledge about the dyslexic public, how society takes care of them and what difficulties they face; the pedagogical possibilities opened up by the use of maps; and the opportunities created by the fields of automatic summarization and Information Retrieval. The aim of this doctoral research project was to create an innovative piece of software that automatically transforms a given text into a map. It was important that this software facilitate reading comprehension while including functionalities adapted to dyslexic teenagers. The project involved carrying out an exploratory experiment on aiding reading comprehension with heuristic maps, which made it possible to identify new research topics, and implementing a prototype of automatic mapping software, presented at the end of this thesis.
Gauthier, Elodie. "Collecter, Transcrire, Analyser : quand la machine assiste le linguiste dans son travail de terrain". Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM011/document.
In the last few decades, many scientists have been concerned with the rapid extinction of languages. Faced with this alarming decline of the world's linguistic heritage, action is urgently needed to enable fieldwork linguists, at the very least, to document languages, by providing them with innovative collection tools and enabling them to describe these languages. Machine assistance may be valuable in helping them with such a task. This is what we propose in this work, focusing on three pillars of linguistic fieldwork: collection, transcription and analysis. Recordings are essential, since they are the source material, the starting point of the descriptive work. Speech recordings are also valuable objects for the documentation of a language. The growing proliferation of smartphones and other interactive mobile voice devices offers new opportunities for fieldwork linguists and researchers in language documentation. Field recordings should also include ethnolinguistic material, which is particularly valuable for documenting traditions and ways of living. However, large data collections require well-organized repositories to access the content, with efficient file naming and metadata conventions. Thus, we have developed LIG-AIKUMA, a free Android app running on various mobile phones and tablets. The app aims to record speech for language documentation in an innovative way. It includes smart generation and handling of speaker metadata, as well as respeaking and parallel audio data mapping. LIG-AIKUMA proposes a range of different speech collection modes (recording, respeaking, translation and elicitation) and offers the possibility of sharing recordings between users. Through these modes, parallel corpora are built, such as "under-resourced speech - well-resourced speech", "speech - image" and "speech - video", which are also of great interest for speech technologies, especially for unsupervised learning. After the data collection step, the fieldwork linguist transcribes these data. Nonetheless, this currently cannot be done for the whole collection, since the task is tedious and time-consuming. We propose to use automatic techniques to help the fieldwork linguist take advantage of his entire speech collection. Along these lines, automatic speech recognition (ASR) is a way to produce transcripts of the recordings with a decent quality. Once the transcripts are obtained (and corrected), the linguist can analyze his data. In order to analyze the whole collection, we consider the use of forced alignment methods. We demonstrate that such techniques can lead to fine-grained evaluation of linguistic features. In return, we show that modeling specific features may lead to improvements in the ASR systems.
Perez, Laura Haide. "Génération automatique de phrases pour l'apprentissage des langues". Electronic Thesis or Diss., Université de Lorraine, 2013. http://www.theses.fr/2013LORR0062.
In this work, we explore how Natural Language Generation (NLG) techniques can be used to address the task of (semi-)automatically generating language learning material and activities in Computer-Assisted Language Learning (CALL). In particular, we show how a grammar-based Surface Realiser (SR) can be usefully exploited for the automatic creation of grammar exercises. Our surface realiser uses a wide-coverage reversible grammar, namely SemTAG, a Feature-Based Tree Adjoining Grammar (FB-TAG) equipped with a unification-based compositional semantics. More precisely, the FB-TAG grammar integrates a flat and underspecified representation of First Order Logic (FOL) formulae. In the first part of the thesis, we study the task of surface realisation from flat semantic formulae and propose an optimised FB-TAG-based realisation algorithm that supports the generation of longer sentences given a large-scale grammar and lexicon. The approach followed to optimise TAG-based surface realisation from flat semantics draws on the fact that an FB-TAG can be translated into a Feature-Based Regular Tree Grammar (FB-RTG) describing its derivation trees. The derivation tree language of TAG constitutes a simpler language than the derived tree language, and generation approaches based on derivation trees have thus already been proposed. Our approach departs from previous ones in that our FB-RTG encoding accounts for the feature structures present in the original FB-TAG, with important consequences regarding over-generation and the preservation of the syntax-semantics interface. The concrete derivation tree generation algorithm that we propose is an Earley-style algorithm integrating a set of well-known optimisation techniques: tabulation, sharing-packing, and semantic-based indexing. In the second part of the thesis, we explore how our SemTAG-based surface realiser can be put to work for the (semi-)automatic generation of grammar exercises.
Usually, teachers manually edit exercises and their solutions, and classify them according to their degree of difficulty or the expected learner level. A strand of research in Natural Language Processing (NLP) for CALL addresses the (semi-)automatic generation of exercises. Mostly, this work draws on texts extracted from the Web and uses machine learning and text analysis techniques (e.g. parsing, POS tagging, etc.). These approaches expose the learner to sentences with potentially complex syntax and diverse vocabulary. In contrast, the approach we propose in this thesis addresses the (semi-)automatic generation of grammar exercises of the type found in grammar textbooks, that is, exercises whose syntax and vocabulary are tailored to specific pedagogical goals and topics. Because the grammar-based generation approach associates natural language sentences with a rich linguistic description, it permits defining a specification language for syntactic and morpho-syntactic constraints used to select stem sentences in compliance with a given pedagogical goal. Further, it allows for the post-processing of the generated stem sentences to build grammar exercise items. We show how Fill-in-the-blank, Shuffle and Reformulation grammar exercises can be automatically produced. The approach has been integrated into the Interactive French Learning Game (I-FLEG), a serious game for learning French, and has been evaluated both through interactions with online players and in collaboration with a language teacher.