Dissertations on the topic "Classification basée sur des règles"
Consult the top 48 dissertations for research on the topic "Classification basée sur des règles".
Wang, Zhiqiang. "Aide à la décision en usinage basée sur des règles métier et apprentissages non supervisés." Thesis, Nantes, 2020. http://www.theses.fr/2020NANT4038.
In the general context of Industry 4.0, large volumes of manufacturing data are available on instrumented machine tools. They are worth exploiting not only to improve machine-tool performance but also to support decision making for operational management. This thesis proposes a decision-aid system for intelligent and connected machine tools through data mining. The first step in a data mining approach is the selection of relevant data; raw data must therefore be classified into different groups of contexts. This thesis proposes a contextual classification procedure in machining based on unsupervised machine learning with a Gaussian mixture model. Based on this contextual classification information, different machining incidents, including chatter, tool breakage and excessive vibration, can be detected in real time. This thesis introduces a set of business rules for incident detection. The operational context in which incidents occur is deciphered from the contextual classification, which explains the type of machining and tool engagement. New, relevant and appropriate Key Performance Indicators (KPIs) can then be proposed, based on this contextual information and the detected incidents, to support decision making for operational management.
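A minimal sketch of the kind of pipeline this abstract describes: unsupervised contextual classification of machining data with a Gaussian mixture model, followed by a simple if-then business rule flagging excessive vibration. The feature set, synthetic regimes, thresholds and rule are hypothetical illustrations, not the models developed in the thesis.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical machining features per sample: [spindle load, feed rate, vibration RMS]
roughing = rng.normal([70, 0.30, 1.8], [8, 0.05, 0.4], size=(300, 3))
finishing = rng.normal([35, 0.10, 0.6], [5, 0.02, 0.2], size=(300, 3))
X = np.vstack([roughing, finishing])

# Unsupervised contextual classification: each mixture component = one machining context
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
contexts = gmm.predict(X)

def excessive_vibration(sample, context):
    """Toy if-then business rule: the vibration limit depends on the detected context.
    Component indices are arbitrary; a real system would map them to known contexts."""
    limits = {0: 2.5, 1: 1.2}          # hypothetical per-context thresholds
    return sample[2] > limits[context]

flags = [excessive_vibration(x, c) for x, c in zip(X, contexts)]
print(f"{sum(flags)} samples flagged as potential vibration incidents")
```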
Polat, Songül. "Combined use of 3D and hyperspectral data for environmental applications." Thesis, Lyon, 2021. http://www.theses.fr/2021LYSES049.
Ever-increasing demands for solutions that describe our environment and the resources it contains require technologies that support an efficient and comprehensive description, leading to a better understanding of content. Optical technologies, the combination of these technologies and effective processing are crucial in this context. The focus of this thesis lies on 3D scanning and hyperspectral technologies. Rapid developments in hyperspectral imaging are opening up new possibilities for better understanding the physical aspects of materials and scenes in a wide range of applications due to their high spatial and spectral resolutions, while 3D technologies help to understand scenes in a more detailed way by using geometrical, topological and depth information. The investigations of this thesis aim at the combined use of 3D and hyperspectral data and demonstrate the potential and added value of a combined approach by means of different applications. Special focus is given to the identification and extraction of features in both domains and the use of these features to detect objects of interest. More specifically, we propose different approaches to combine 3D and hyperspectral data depending on the HSI/3D technologies used and show how each sensor could compensate the weaknesses of the other. Furthermore, a new shape- and rule-based method for the analysis of spectral signatures was developed and presented. The strengths and weaknesses compared to existing approaches are discussed, and the outperformance compared to SVM methods is demonstrated on the basis of practical findings from the fields of cultural heritage and waste management. Additionally, a newly developed analytical method based on 3D and hyperspectral characteristics is presented. The evaluation of this methodology is based on a practical example from the field of WEEE and focuses on the separation of materials such as plastics, PCBs and electronic components on PCBs. The results obtained confirm that an improvement of classification results could be achieved compared to previously proposed methods. The individual methods and processes developed in this thesis aim at general validity and simple transferability to any field of application.
Bouker, Slim. "Contribution à l'extraction des règles d'association basée sur des préférences." Thesis, Clermont-Ferrand 2, 2015. http://www.theses.fr/2015CLF22585/document.
Mebarki, Nasser. "Une approche d'ordonnancement temps réel basée sur la sélection dynamique de règles de priorité." Lyon 1, 1995. http://www.theses.fr/1995LYO10043.
Couchot, Alain. "Analyse statique de la terminaison des règles actives basée sur la notion de chemin maximal." Paris 12, 2001. http://www.theses.fr/2001PA120042.
Active rules are intended to enrich databases with a reactive behaviour. An active rule is composed of three main components: the event, the condition and the action. It is desirable to guarantee a priori the termination of a set of active rules. The aim of this thesis is to increase the number of termination situations detected by static analysis. We first identify some restrictions of previous static analysis methods. We then develop an algorithm for the static analysis of termination based on the notion of the maximal path of a node. The notion of maximal path is intended to replace the notion of cycle used by previous termination algorithms. We present some applications and extensions of our termination algorithm. These extensions and applications concern active rules not included in a cycle, composite conditions, composite events, priorities between rules, and the modular design of rules.
Duda, Dorota. "Classification d’images médicales basée sur l’analyse de texture." Rennes 1, 2009. http://www.theses.fr/2009REN1S172.
The first objective of this thesis is to propose methods of tissue characterization based on a set of images, each of them representing a particular property of the tissue. These methods consist in a simultaneous analysis of several textures displayed on different images and corresponding to the same part of the organ. They make it possible to characterize the changes in tissue properties resulting from given modifications of the image acquisition conditions. The second objective is the implementation of the proposed methods. The developed software allows the user to define the relations between the variable properties of the dynamic texture that he wants to highlight. The proposed method has been applied to hepatic tissue characterization in Computed Tomography images. Three acquisition moments have been considered: before and after (arterial and portal phases) injection of a contrast agent. The method made it possible to characterize the changes in tissue properties resulting from the time evolution of the contrast agent concentration in the liver vessels. The presented results concern the classification of four types of liver tissue (including two kinds of tumors). These tissues were characterized either by textural features corresponding to a single acquisition moment, or by features obtained by dynamic texture analysis (grouped into multiphasic vectors). The quality of the liver tissue classification increases significantly when the tissue is characterized on the basis of the three images corresponding to the three phases of acquisition.
Ghemmogne, Fossi Leopold. "Gestion des règles basée sur l'indice de puissance pour la détection de fraude : Approches supervisées et semi-supervisées." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSEI079.
This thesis deals with the detection of credit card fraud. According to the European Central Bank, the value of card fraud amounted to 1.8 billion euros in 2016. The challenge for institutions is to reduce these frauds. In general, fraud detection systems consist of an automatic system built with "if-then" rules that control all incoming transactions and trigger an alert if a transaction is considered suspicious. An expert group checks the alert and decides whether it is genuine or not. The criteria used to select the rules that are kept operational are mainly based on the individual performance of the rules. This approach ignores the non-additivity of the rules. We propose a new approach using power indices, which assigns to each rule a normalized score that quantifies its influence on the overall performance of the group. The indices we use are the Shapley value and the Banzhaf value. Their applications are 1) decision support for keeping or deleting a rule; 2) selection of the number k of best-ranked rules, in order to work with a more compact set. Using real credit card fraud data, we show that: 1) this approach performs better than one that evaluates the rules in isolation; 2) the performance of the full set of rules can be achieved by keeping one-tenth of the rules. We observe that this application can be considered as a feature selection task, and we show that our approach is comparable to current feature selection algorithms. It has an advantage in rule management because it assigns a standard score to each rule, which is not the case for most algorithms, which focus only on an overall solution. We propose a new version of the Banzhaf value, namely k-Banzhaf, which outperforms the previous one in terms of computing time and has comparable performance. Finally, we implement a self-learning process to reinforce the learning in a machine learning algorithm and compare it with our power indices for ranking credit card fraud data. In conclusion, we observe that feature selection based on power indices yields results comparable with the other algorithms in the self-learning process.
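The rule-scoring idea summarized above can be illustrated with a small, self-contained sketch: the exact Banzhaf value of each "if-then" rule is its average marginal contribution to the F1-score of every coalition of rules (alerts combined with a logical OR). The toy alert vectors, labels and the choice of F1 as the coalition value are assumptions for illustration only, not the thesis's data or exact formulation.

```python
from itertools import combinations

import numpy as np
from sklearn.metrics import f1_score

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=300)                 # hypothetical fraud labels
# Toy "if-then" rules: boolean alert vectors loosely correlated with the labels
rules = [np.logical_and(y_true == 1, rng.random(300) > p) | (rng.random(300) > 0.97)
         for p in (0.2, 0.5, 0.8)]

def coalition_value(subset):
    """Performance of a group of rules firing jointly (OR of their alerts)."""
    if not subset:
        return 0.0
    alerts = np.any([rules[i] for i in subset], axis=0)
    return f1_score(y_true, alerts.astype(int))

def banzhaf(i, n):
    """Exact Banzhaf value of rule i: average marginal contribution over all coalitions."""
    others = [j for j in range(n) if j != i]
    contribs = [coalition_value(set(s) | {i}) - coalition_value(set(s))
                for r in range(len(others) + 1) for s in combinations(others, r)]
    return float(np.mean(contribs))

scores = [banzhaf(i, len(rules)) for i in range(len(rules))]
print("Banzhaf score per rule:", np.round(scores, 3))
```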
Bouadjio, Victor. "CAO des circuits intégrés MOS : vérification des règles de dessin par une méthode basée sur les traitements combinatoires locaux." Paris 11, 1987. http://www.theses.fr/1987PA112202.
This work presents a method for MOS IC Design Rule Verification, based on Local Combinatory Processings. Design rules, as well as their verification methods, are described. We recall the effects of the length (L) and the width (W) of devices on the operation of circuits. We show some applications of Local Combinatory Processings in image processing, and we then show their use in design rule checking. Some machines working on the basis of the method are described. Our software has been applied to a test circuit (a clock synchronizer), and the results of the processing are presented in the thesis.
Mokhtari, Amine. "Système personnalisé de planification d'itinéraire unimodal : une approche basée sur la théorie des ensembles flous." Rennes 1, 2011. http://www.theses.fr/2011REN1E004.
Allani, Atig Olfa. "Une approche de recherche d'images basée sur la sémantique et les descripteurs visuels." Thesis, Paris 8, 2017. http://www.theses.fr/2017PA080032.
Image retrieval is a very active research area. Several image retrieval approaches that allow mapping between low-level features and high-level semantics have been proposed, among them object recognition, ontologies, and relevance feedback. However, their main limitation is their high dependence on reliable external resources and their lack of capacity to combine semantic and visual information. This thesis proposes a system based on a pattern graph combining semantic and visual features, relevant visual feature selection for image retrieval, and improved visualization of the results. The idea is (1) to build a pattern graph composed of a modular ontology and a graph-based model, (2) to build visual feature collections to guide feature selection during the online retrieval phase, and (3) to improve the visualization of retrieval results by integrating semantic relations. During pattern graph building, the ontology modules associated with each domain are automatically built using textual corpora and external resources. The region graphs summarize the visual information in a condensed form and classify it according to its semantics. The pattern graph is obtained by module composition. In building the visual feature collections, association rules are used to deduce best practices for using visual features in image retrieval. Finally, results visualization uses the rich information about the images to improve the presentation of the results. Our system has been tested on three image databases. The results show an improvement in the retrieval process, a better adaptation of the visual features to the domains, and a richer visualization of the results.
Siddour, Abdelkader. "Classification automatique des diatomées : une approche basée sur le contour et la géométrie." Thèse, Université du Québec à Trois-Rivières, 2007. http://depot-e.uqtr.ca/1243/1/030004296.pdf.
Sadaoui, Lazhar. "Évaluation de la cohésion des classes : une nouvelle approche basée sur la classification." Thèse, Université du Québec à Trois-Rivières, 2010. http://depot-e.uqtr.ca/1436/1/030168333.pdf.
Jabbour, Tony. "Classification de l'inflammabilité des fluides frigorigènes basée sur la vitesse fondamentale de flamme." Paris, ENMP, 2004. http://www.theses.fr/2004ENMP1221.
The current flammability classifications do not adequately address the flammability hazard, and a better assessment should be provided. The burning velocity is shown to be an appropriate parameter related to the flammability hazard and can be used as an additional criterion for the flammability classification of refrigerants. The burning velocity is related to the parameters of combustion initiation and to the main consequences of the flammability hazard. The derived formulations demonstrate that the burning velocity is a main parameter to be considered in the flammability classification. The vertical tube method is used to measure the burning velocity. The results show that the burning velocity allows flammability levels to be differentiated. The maximum burning velocity is taken as a criterion additional to the lower flammability limit and the heat of combustion in the flammability classification of refrigerants.
Hospital, Fabien. "Conception préliminaire des actionneurs électromagnétiques basée sur les modèles : lois d'estimations et règles de conception pour la transmission de puissance mécanique." Thesis, Toulouse, INSA, 2012. http://www.theses.fr/2012ISAT0036/document.
In the continuity of research work on embedded mechanical transmission systems, this thesis has two objectives: to improve the estimation models of electromechanical actuator (EMA) components and to extend the cross-cutting vision of preliminary design to dynamic aspects. Indeed, in model-based preliminary design, this research should allow the choice of architectures and technologies to be adapted to the static and dynamic performance to be achieved. The models developed from scaling laws are extended and exploited to model elementary components of the EMA and to highlight "good practice" rules in preliminary design. We focus in particular on the design of EMAs in the aeronautical field. First, we developed estimation models and metamodels of elementary EMA components, with the actuator decomposed into a housing and elementary parts. Second, we established good-practice rules for actuator sizing in a closed position-control loop. Unlike earlier research, we take control synthesis into account when establishing these rules. Using simulation models, we quantified the influence of the technological defects of components (inertia and saturations, elasticity (backlash), friction) on EMA performance. Indeed, the usual control structures and the choice of control elements are intimately linked to the dynamic performance and to these defects. Finally, to provide a test means for accurately identifying a friction model under aeronautical temperature conditions, a Harmonic Drive test bench was designed, integrated and implemented. It allows the validation of the good-practice rules in preliminary design to be initiated.
Hmida, Marwa. "Reconnaissance de formes basée sur l'approche possibiliste dans les images mammographiques." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2017. http://www.theses.fr/2017IMTA0061/document.
In view of the significant increase in the breast cancer mortality rate among women, as well as the continuous growth in the number of mammograms performed each year, computer-aided diagnosis is becoming more and more essential for experts. In our thesis work, special attention is given to breast masses as they represent the most common sign of breast cancer in mammograms. Nevertheless, mammographic images have very low contrast and breast masses possess ambiguous margins, so it is difficult to distinguish them from the surrounding parenchyma. Moreover, the complexity and the large variability of breast mass shapes make diagnosis and classification challenging tasks. In this context, we propose a computer-aided diagnosis system which first segments masses in regions of interest and then classifies them as benign or malignant. Mass segmentation is a critical step in a computer-aided diagnosis system since it affects the performance of the subsequent analysis steps, namely feature analysis and classification. Indeed, poor segmentation may lead to poor decision making. Such a case may occur due to two types of imperfection: uncertainty and imprecision. Therefore, we propose to deal with these imperfections using fuzzy contours, which are integrated in the energy of an active contour to obtain a fuzzy-energy-based active contour model used for the final delineation of the mass. After mass segmentation, a classification method is proposed. This method is based on possibility theory, which allows modeling the ambiguities inherent to the knowledge expressed by the expert. Moreover, since shape and margin characteristics are very important for differentiating between benign and malignant masses, the proposed method is essentially based on shape descriptors. The evaluation of the proposed methods was carried out using regions of interest containing masses extracted from the MIAS database. The obtained results are very interesting, and the comparisons made have demonstrated their performance.
Zidi, Amir. "Recherche d'information dirigée par les interfaces utilisateur : approche basée sur l'utilisation des ontologies de domaine." Thesis, Valenciennes, 2015. http://www.theses.fr/2015VALE0012/document.
This thesis studies the use of ontologies in an information retrieval system dedicated to a specific domain. We propose a two-level approach dealing with i) query formulation, which assists the user in selecting concepts and properties of the used ontology; and ii) query recommendation, which uses the case-based reasoning method, where a new query is considered as a new case. Solving a new case consists of reusing similar cases from the history of previously processed cases. To validate the proposed approaches, a system was developed and a set of computational experiments was carried out. Finally, research perspectives conclude this report.
Essayeh, Aroua. "Une approche de personnalisation de la recherche d'information basée sur le Web sémantique." Thesis, Valenciennes, 2018. http://www.theses.fr/2018VALE0003.
This PhD thesis reports on a recent study in the field of information retrieval (IR), more specifically personalized IR. Traditional IR uses various methods and approaches. However, given the proliferation of data from different sources, traditional IR is no longer considered to be an effective means of meeting users' requirements. ('Users' here refers to the main actor in an IR system.) In this thesis, we address two main problems related to personalized IR: (1) the development and implementation of a user model; and (2) the formulation of a search query to improve the results returned to users according to their perceptions and preferences. To achieve these goals, we propose a semantic information search approach, based on the use of semantic information and guided by ontologies. The contribution of our work is threefold. First, it models and constructs user profiles following a modular ontological approach; this model allows the capture of information related to the user, and models the data according to the semantic approach so that the data can be re-used for reasoning and inference tasks. Second, it provides a basis for reformulating a query by exploiting concepts, hierarchical and non-hierarchical relationships between concepts, and properties. Third, based on our findings, we recommend search results that are informed by the user's communities, built by the improved unsupervised classification approach called the 'Fuzzy K-mode'. These communities are also semantically modeled with a modular profile ontology. To validate our proposed approach, we implemented a system for searching public transport itineraries. Finally, this thesis proposes research perspectives based on the limitations we encountered.
Chaari, Anis. "Nouvelle approche d'identification dans les bases de données biométriques basée sur une classification non supervisée." Phd thesis, Université d'Evry-Val d'Essonne, 2009. http://tel.archives-ouvertes.fr/tel-00549395.
Guernine, Taoufik. "Classification hiérarchique floue basée sur le SVM et son application pour la catégorisation des documents." Mémoire, Université de Sherbrooke, 2010. http://savoirs.usherbrooke.ca/handle/11143/4838.
Delahaye, Alexandre. "Classification multi-échelle d'images à très haute résolution spatiale basée sur une nouvelle approche texturale." Thèse, Université de Sherbrooke, 2016. http://hdl.handle.net/11143/8935.
Classifying remote sensing images is an increasingly difficult task due to the availability of very high spatial resolution (VHSR) data. The amount of detail in such images is a major obstacle to the use of spectral classification methods as well as most textural classification algorithms, including statistical methods. However, structural methods offer an interesting alternative: these object-oriented approaches focus on analyzing the structure of an image in order to interpret its meaning. In the first part of this thesis, we propose a new algorithm belonging to this category: KPC (KeyPoint-based Classification). KPC is based on keypoint detection and analysis and offers an efficient answer to the issue of classifying VHSR images. Tests led on artificial and real remote sensing images have proven its discriminating power. Furthermore, many studies have proven that evidential fusion (based on Dempster-Shafer theory) is well suited to remote sensing images because of its ability to handle abstract concepts such as ambiguity and uncertainty. However, few studies have focused on the application of this theory to complex textural data such as structural data. This issue is dealt with in the second part of this thesis, in which we focus on fusing multiscale KPC classifications with the help of Dempster-Shafer theory. Tests have shown that this multiscale approach leads to an increase in classification efficiency when the original image has low quality. Our study also points out a substantial potential for improvement gained from estimating the reliability of intermediate classifications and provides ideas for obtaining these estimations.
Gan, Changquan. "Une approche de classification non supervisée basée sur la notion des K plus proches voisins." Compiègne, 1994. http://www.theses.fr/1994COMP765S.
Dou, Weibei. "Segmentation d'images multispectrales basée sur la fusion d'informations : application aux images IRM." Caen, 2006. http://www.theses.fr/2006CAEN2026.
Повний текст джерелаMbow, Mouhamadou Mansour. "Aide à la décision basée sur l'expertise métier dans le domaine de la FAO pour la fabrication additive : une approche par mathématisation des règles." Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALI082.
Pre-processing or CAM operations in additive manufacturing (AM) by powder bed fusion (PBF) include complex operations such as the definition of the part orientation, the design of support structures, the nesting of parts, etc. The definition of the part orientation in the manufacturing build space is the first step, and several studies have shown that all of the remaining steps depend on it, as do the quality, cost and production time of the part. It is defined by only two rotation angle parameters in the global machine reference frame, but their definition is complex and requires strong skills in the field. Studies in the literature have shown that industrial users rely on their knowledge or expertise of the process to achieve this. Today, despite technical advances, there is still a lack of tools or methods to take this formalized expertise into account. In this context, this thesis (from the ANR COFFA project) proposes methods and tools to efficiently include the formalized knowledge of experts in the decision making on CAM parameters, in PBF and AM in general. As a first step, a review of methods for using expertise in decision making in traditional manufacturing CAM is presented in order to identify their drawbacks and the possibilities for integration in AM. Secondly, an investigation of the types of knowledge that can be used for decision support is carried out. This part of the work also explores the knowledge resources available for the definition of part orientation in the research literature as well as in industrial practice. The third part of the work presents a new approach for transforming knowledge in the form of action rules into desirability functions. This approach makes it possible in particular to evaluate these action rules on parts and to obtain a quantitative appreciation, which is interpreted as the level of respect of the rule when the CAM parameters are applied to the part. Then, this approach is applied to the action rules found in the second part of the work to establish quantitative models for the calculation of orientation desirability. Finally, a tool for the calculation of this orientation desirability and for decision-making support is presented. The use of the tool is illustrated through case studies of industrial parts benchmarked with commercial tools, and through tests carried out with engineering school students. The main output of this project is the provision of numerical means to assist CAM operators in their decision making on optimal manufacturing parameters based on the company's expertise. In addition, the presented approach offers the possibility of redesigning parts by targeting surfaces that have a low desirability with respect to the part orientation.
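As a purely illustrative sketch of the rule-to-desirability idea described above, the snippet below turns a generic action rule of the kind "limit the support-needing (down-facing) area" into a desirability score in [0, 1] evaluated over candidate orientations. The overhang model, thresholds and angle grid are hypothetical stand-ins, not the rules or models developed in the thesis.

```python
import numpy as np

def desirability(value, low, high):
    """Map a rule-related quantity to [0, 1]: 1 = rule fully respected, 0 = violated.
    Simple linear ramp between two hypothetical thresholds."""
    return float(np.clip((high - value) / (high - low), 0.0, 1.0))

def overhang_area(theta_x, theta_z):
    """Hypothetical stand-in for the down-facing (support-needing) area of a part
    as a function of its two build-orientation angles, in degrees."""
    return 100.0 * abs(np.sin(np.radians(theta_x))) + 40.0 * abs(np.sin(np.radians(theta_z)))

# Evaluate the rule "limit the support-needing area" over a grid of candidate orientations
candidates = [(tx, tz) for tx in range(0, 181, 45) for tz in range(0, 181, 45)]
scored = [(tx, tz, desirability(overhang_area(tx, tz), low=20.0, high=120.0))
          for tx, tz in candidates]
best = max(scored, key=lambda s: s[2])
print(f"best orientation (theta_x={best[0]}, theta_z={best[1]}), desirability={best[2]:.2f}")
```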
Jouini, Mohamed Soufiane. "Caractérisation des réservoirs basée sur des textures des images scanners de carottes." Thesis, Bordeaux 1, 2009. http://www.theses.fr/2009BOR13769/document.
Cores extracted during well drilling are essential data for reservoir characterization. A medical scanner is used for their acquisition; it provides high-resolution images improving the capacity of interpretation. The main goal of the thesis is to establish links between these images and petrophysical data. Parametric texture modelling can be used to achieve this goal and should provide a reliable set of descriptors. A possible solution is to focus on parametric methods allowing synthesis. Even though this approach is not mathematically proven, it provides high confidence in the set of descriptors and allows interpretation through synthetic textures. In this thesis, methods and algorithms were developed to achieve the following goals: 1. Segment the main representative texture zones on cores. This is achieved automatically through learning and classifying textures based on a parametric model. 2. Find links between scanner images and petrophysical parameters. This is achieved through calibrating and predicting petrophysical data from images (supervised learning process).
Plaud, Angéline. "Classification ensembliste des séries temporelles multivariées basée sur les M-histogrammes et une approche multi-vues." Thesis, Université Clermont Auvergne (2017-2020), 2019. http://www.theses.fr/2019CLFAC047.
Recording measurements about various phenomena and exchanging information about them participate in the emergence of a type of data called time series. Today huge quantities of such data are collected. A time series is characterized by numerous points, and interactions can be observed between those points. A time series is multivariate when multiple measures are recorded at each timestamp, meaning a point is, in fact, a vector of values. Even if univariate time series, with one value at each timestamp, are well studied and defined, this is not the case for multivariate ones, for which the analysis is still challenging. Indeed, it is not possible to directly apply classification techniques developed for univariate data to the multivariate case; for the latter, we have to take into consideration the interactions not only between points but also between dimensions. Moreover, in industrial cases, as at Michelin, the data are big and of different lengths in terms of the number of points composing the series, and this brings a new complexity to deal with during the analysis. None of the current techniques for classifying multivariate time series satisfies the following criteria: low computational complexity, handling variation in the number of points, and good classification results. In our approach, we explored a tool that had not been applied before to MTS classification, called the M-histogram. An M-histogram is a visualization tool using M axes to project the density function underlying the data. We have employed it here to produce a new representation of the data that allows us to bring out the interactions between dimensions. Searching for links between dimensions corresponds to a part of learning techniques called multi-view learning. A view is an extraction of dimensions of a dataset which are of the same nature or type. The goal is then to bring out the links between the dimensions inside each view in order to classify all the data using an ensemble classifier. We therefore propose a multi-view ensemble model to classify multivariate time series. The model creates multiple M-histograms from different groups of dimensions. Each view then yields a prediction, and the predictions are aggregated to obtain a final prediction. In this thesis, we show that the proposed model allows a fast classification of multivariate time series of different sizes. In particular, we applied it to a Michelin use case.
Dhouib, Diala. "Aide multicritère au pilotage d'un processus basée sur le raisonnement à partir de cas." Paris 8, 2009. http://octaviana.fr/document/149146086#?c=0&m=0&s=0&cv=0f.
This thesis proposes multicriteria decision-aid tools for process piloting based on knowledge capitalization via the case-based reasoning (CBR) technique. Two models have been developed. The first model uses similar past cases to help the pilot of a process solve a new problem. This is done by taking into account the causality relations that exist between performance drivers and indicators, as well as the dependence relations between criteria. The second model is based on a hybridization of CBR and clustering. It aims to improve the case representation, similar-case retrieval and case-base maintenance phases of the CBR cycle. Applying a clustering method is a way of organizing the case base to facilitate piloting aid. These two models can be used in a complementary way. Indeed, the second model, based on clustering, first forms homogeneous groups including the new case in order to look for its solution. Then, once the cluster containing the new case and its similar cases is obtained, the first model is activated to find the closest one. However, the criteria used by these two models are quantitative. A linguistic approach was therefore used to handle non-homogeneous data, which can be numeric or linguistic. These two models were applied to a real industrial case of cardboard packaging manufacturing. They were also implemented in a computer prototype, in the form of an Interactive System for Process Piloting Aid (ISPPA), via interfaces to better validate their applicability.
Gacem, Amina. "Méthodologie d’évaluation de performances basée sur l’identification de modèles de comportements : applications à différentes situations de handicap." Versailles-St Quentin en Yvelines, 2013. http://www.theses.fr/2013VERS0053.
Performance assessment is an important process for identifying the abilities and limits of a person. Currently, the assessment requires the mediation of a specialist (doctor, therapist, etc.) who must perform analyses and tests to reach a subjective decision. In the literature, several works propose assessment methods based on performance criteria: this is a quantitative evaluation, which is objective. This type of evaluation is usually based on statistical analysis. In this work, a new performance assessment methodology is proposed. It is based on the identification of reference behaviours, which are then used as references for the evaluation of other people. The identification of reference behaviours is an essential element of our work. It is based on classification methods, of which we have tested two. The first one is Fuzzy C-means, which allows a thorough search for reference behaviours; however, behaviours are represented by proxy criteria. The second one is Hidden Markov Models, which offer a time-series analysis based on temporal behaviour variation; however, it is not easy to determine the training phase of this method. This assessment methodology has been applied in the context of different applications designed for disabled people: driving an electric wheelchair, driving an automobile, and the use of pointing devices (mouse, trackball, joystick, etc.). In each application, a protocol and an ecological situation are defined in order to evaluate participants on different platforms involving functional control interfaces (joystick, mouse, steering wheel, etc.). Statistical tools are then used to analyze the data and provide a first interpretation of the behaviours. In each of the studied applications, our methodology automatically identifies different reference behaviours, and the assessment of people, carried out by comparison with these reference behaviours, identifies different levels of expertise and illustrates the evolution of learning during the assessment. The proposed evaluation methodology is an iterative process, so the population of experienced people can be enriched by adding people who become stable after assessment, which in turn allows the search for new reference behaviours.
Malgouyres, Hugues. "Définition et détection automatique des incohérences structurelles et comportementales des modèles UML : Couplage des techniques de métamodélisation et de vérification basée sur la programmation logique." Toulouse, INSA, 2006. http://www.theses.fr/2006ISAT0038.
The purpose of this thesis is to develop a method that ensures UML model consistency. Two aspects have been addressed: consistency definition and consistency checking. The first step led to a document that contains 650 consistency rules, half of which are new consistency rules deduced from UML semantics. The aim is to provide a census of all consistency rules. The second step concerns consistency checking. The developed method combines meta-modeling with system verification techniques based on logic programming. Logic programming is used to encode the UML model, to formalize the UML operational semantics and to express the inconsistencies. The detection of structural and behavioral inconsistencies is thereby enabled. Finally, a prototype has been developed. Experimental results on an industrial model from the avionics domain corroborate the practical interest of the approach.
Georgescu, Vera. "Classification de données multivariées multitypes basée sur des modèles de mélange : application à l'étude d'assemblages d'espèces en écologie." Phd thesis, Université d'Avignon, 2010. http://tel.archives-ouvertes.fr/tel-00624382.
Shahzad, Atif. "Une Approche Hybride de Simulation-Optimisation Basée sur la fouille de Données pour les problèmes d'ordonnancement." Phd thesis, Université de Nantes, 2011. http://tel.archives-ouvertes.fr/tel-00647353.
Al-Najdi, Atheer. "Une approche basée sur les motifs fermés pour résoudre le problème de clustering par consensus." Thesis, Université Côte d'Azur (ComUE), 2016. http://www.theses.fr/2016AZUR4111/document.
Clustering is the process of partitioning a dataset into groups, so that the instances in the same group are more similar to each other than to instances in any other group. Many clustering algorithms have been proposed, but none of them provides good-quality partitions in all situations. Consensus clustering aims to enhance the clustering process by combining different partitions obtained from different algorithms to yield a better-quality consensus solution. In this work, a new consensus clustering method, called MultiCons, is proposed. It uses the frequent closed itemset mining technique in order to discover the similarities between the different base clustering solutions. The identified similarities are presented in the form of clustering patterns, each of which defines the agreement between a set of base clusters in grouping a set of instances. By dividing these patterns into groups based on the number of base clusters that define the pattern, MultiCons generates a consensus solution from each group, resulting in multiple consensus candidates. These different solutions are presented in a tree-like structure, called the ConsTree, that facilitates understanding the process of building the multiple consensuses, and also the relationships between the data instances and their structuring in the data space. Five consensus functions are proposed in this work in order to build a consensus solution from the clustering patterns. Approach 1 simply merges any intersecting clustering patterns. Approach 2 can either merge or split intersecting patterns based on a proposed measure, called the intersection ratio.
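A highly simplified sketch of the consensus idea outlined in this abstract (not the actual MultiCons algorithm, which relies on frequent closed itemset mining): several base partitions are combined by first forming agreement patterns, i.e. groups of instances that every base clustering keeps together, and then merging patterns whose members most base clusterings also group together. The dataset, base clusterers and the 0.75 agreement threshold are assumptions for illustration.

```python
from collections import defaultdict

import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=200, centers=3, random_state=0)

# Base partitions from different algorithms / settings
base = [KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X) for k in (2, 3, 4)]
base.append(AgglomerativeClustering(n_clusters=3).fit_predict(X))
labels = np.stack(base, axis=1)                     # one label vector per instance

# "Agreement patterns": groups of instances that every base clustering keeps together
patterns = defaultdict(list)
for idx, vec in enumerate(map(tuple, labels)):
    patterns[vec].append(idx)

def agreement(p, q):
    """Fraction of base clusterings that put the two patterns in the same cluster."""
    i, j = p[0], q[0]                               # one representative per pattern
    return np.mean(labels[i] == labels[j])

# Greedily merge patterns whose members are grouped together by most base clusterings
groups = [list(g) for g in patterns.values()]
merged = True
while merged:
    merged = False
    for a in range(len(groups)):
        for b in range(a + 1, len(groups)):
            if agreement(groups[a], groups[b]) >= 0.75:   # hypothetical threshold
                groups[a] += groups.pop(b)
                merged = True
                break
        if merged:
            break

print(f"{len(patterns)} agreement patterns merged into {len(groups)} consensus clusters")
```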
L'Héritier, Cécile. "Une approche de retour d’expérience basée sur l’analyse multicritère et l’extraction de connaissances : Application au domaine humanitaire." Thesis, Nîmes, 2020. http://www.theses.fr/2020NIME0001.
Because of its critical impact on performance and competitiveness, an organization's knowledge is today considered an invaluable asset. In this context, the development of methods and frameworks aimed at improving knowledge preservation and exploitation is of major interest. Within the Lessons Learned framework, which proposes relevant methods to tackle these challenges, we propose an approach mixing Knowledge Representation, Multiple-Criteria Decision Analysis and Inductive Reasoning for inferring general learnings by analyzing past experiences. The proposed approach, which is founded on a specific case-based reasoning, intends to study the similarities of past experiences (shared features, patterns) and their potential influence on the overall success of cases through the identification of a set of criteria having a major contribution to this success. For the purpose of highlighting this potential causal link to infer general learnings, we envisage relying on inductive reasoning techniques. The considered work is developed and validated within the scope of a humanitarian organization, Médecins Sans Frontières, with a focus on the logistical response in emergency situations.
Ta, Minh Thuy. "Techniques d'optimisation non convexe basée sur la programmation DC et DCA et méthodes évolutives pour la classification non supervisée." Thesis, Université de Lorraine, 2014. http://www.theses.fr/2014LORR0099/document.
This thesis focuses on four problems in data mining and machine learning: clustering data streams, clustering massive data sets, weighted hard and fuzzy clustering, and finally clustering without prior knowledge of the number of clusters. Our methods are based on deterministic optimization approaches, namely DC (Difference of Convex functions) programming and DCA (Difference of Convex Algorithm), for solving some classes of the clustering problems cited above. Our methods are also based on elitist evolutionary approaches. We adapt the clustering algorithm DCA–MSSC to deal with data streams using two window models: sub-windows and sliding windows. For the problem of clustering massive data sets, we propose to use the DCA algorithm in two phases. In the first phase, the massive data is divided into several subsets, on which the DCA–MSSC algorithm performs clustering. In the second phase, we propose a DCA–Weight algorithm to perform a weighted clustering on the centers obtained in the first phase. For weighted clustering, we also propose two approaches: weighted hard clustering and weighted fuzzy clustering. We test our approach on an image segmentation application. The final issue addressed in this thesis is clustering without prior knowledge of the number of clusters. We propose an elitist evolutionary approach in which several evolutionary algorithms (EAs) are applied at the same time to find the optimal combination of initial cluster seeds and, at the same time, the optimal number of clusters. The various tests performed on several large data sets are very promising and demonstrate the effectiveness of the proposed approaches.
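The two-phase strategy for massive data described above can be illustrated with a small sketch, using k-means as a stand-in for the DCA–MSSC and DCA–Weight algorithms of the thesis: each subset is clustered separately, and the resulting centers are then re-clustered with weights equal to the cluster sizes. The dataset, chunking and cluster counts are arbitrary illustration choices.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=30_000, centers=5, random_state=0)

# Phase 1: split the large dataset into subsets and cluster each one separately
centers, weights = [], []
for chunk in np.array_split(X, 10):
    km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(chunk)
    centers.append(km.cluster_centers_)
    weights.append(np.bincount(km.labels_, minlength=5))   # cluster sizes as weights

centers = np.vstack(centers)
weights = np.concatenate(weights)

# Phase 2: weighted clustering of the phase-1 centers
final = KMeans(n_clusters=5, n_init=10, random_state=0).fit(centers, sample_weight=weights)
print("consensus centers:\n", np.round(final.cluster_centers_, 2))
```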
Daviet, Hélène. "Class-Add, une procédure de sélection de variables basée sur une troncature k-additive de l'information mutuelle et sur une classification ascendante hiérarchique en pré-traitement." Phd thesis, Université de Nantes, 2009. http://tel.archives-ouvertes.fr/tel-00481931.
Повний текст джерелаDaviet, Desmier Hélène. "ClassAdd, une procédure de sélection de variables basée sur une troncature k-additive de l'informatique mutuelle et sur une classification ascendante hiérarchique en pré-traitement." Nantes, 2009. http://www.theses.fr/2009NANT2019.
Subset variable selection algorithms are necessary when the number of features is too large to provide a good understanding of the underlying process that generated the data. In the past few years, variable and feature selection have become the focus of much research because of domains, such as molecular chemistry or gene expression array analysis, with hundreds to tens of thousands of variables. In the framework of subset variable selection for supervised classification involving only discrete variables, we propose a selection algorithm using a computationally efficient relevance measure based on a k-additive truncation of the mutual information, combined with an agglomerative hierarchical clustering of the set of potentially discriminatory variables in order to reduce the number of subsets whose relevance is estimated.
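A toy sketch of the general idea, under the simplifying assumption of a 2-additive criterion: variables are selected greedily by trading off their mutual information with the class against their pairwise mutual information with already-selected variables (the hierarchical clustering pre-processing of the thesis is omitted). The synthetic variables and the stopping rule are illustrative only.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(2)
y = rng.integers(0, 2, size=500)
# Hypothetical discrete variables: informative, weakly informative, pure noise
X = np.column_stack([
    y ^ (rng.random(500) < 0.1),                 # informative
    y ^ (rng.random(500) < 0.3),                 # weakly informative
    rng.integers(0, 3, size=500),                # noise
])
X = np.column_stack([X, X[:, 0] ^ (rng.random(500) < 0.05)])  # near-duplicate of column 0

def score(candidate, selected):
    """2-additive criterion: relevance to the class minus pairwise redundancy."""
    relevance = mutual_info_score(y, X[:, candidate])
    redundancy = sum(mutual_info_score(X[:, candidate], X[:, j]) for j in selected)
    return relevance - redundancy

selected = []
while len(selected) < 2:                          # arbitrary target subset size
    remaining = [j for j in range(X.shape[1]) if j not in selected]
    selected.append(max(remaining, key=lambda j: score(j, selected)))
print("selected variable indices:", selected)
```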
Ghrissi, Amina. "Ablation par catheter de fibrillation atriale persistante guidée par dispersion spatiotemporelle d’électrogrammes : Identification automatique basée sur l’apprentissage statistique." Thesis, Université Côte d'Azur, 2021. http://www.theses.fr/2021COAZ4026.
Catheter ablation is increasingly used to treat atrial fibrillation (AF), the most common sustained cardiac arrhythmia encountered in clinical practice. A recent patient-tailored AF ablation therapy, giving a 95% procedural success rate, is based on the use of a multipolar mapping catheter called PentaRay. It targets areas of spatiotemporal dispersion (STD) in the atria as potential AF drivers. STD stands for a delay of the cardiac activation observed in intracardiac electrograms (EGMs) across contiguous leads. In practice, interventional cardiologists localize STD sites visually using the PentaRay multipolar mapping catheter. This thesis aims to automatically characterize and identify ablation sites in STD-based ablation of persistent AF using machine learning (ML), including deep learning (DL) techniques. In the first part, EGM recordings are classified into STD vs. non-STD groups. However, the highly imbalanced dataset ratio hampers classification performance. We tackle this issue by using adapted data augmentation techniques that help achieve good classification. The overall performance is high, with accuracy and AUC values around 90%. First, two approaches are benchmarked: feature engineering and automatic feature extraction from a time series called maximal voltage absolute values at any of the bipoles (VAVp). Statistical features are extracted and fed to ML classifiers, but no important dissimilarity is obtained between the STD and non-STD categories. Results show that the supervised classification of raw VAVp time series into the same categories is promising, with values of accuracy, AUC, sensitivity and specificity around 90%. Second, the classification of raw multichannel EGM recordings is performed. Shallow convolutional arithmetic circuits are investigated for their promising theoretical interest, but experimental results on synthetic data are unsuccessful. We then move to more conventional supervised ML tools. We design a selection of data representations adapted to different ML and DL models, and benchmark their performance in terms of classification and computational cost. Transfer learning is also assessed. The best performance is achieved with a convolutional neural network (CNN) model for classifying raw EGM matrices. The average performance over cross-validation reaches 94% accuracy and AUC, together with an F1-score of 60%. In the second part, EGM recordings acquired during mapping are labeled ablated vs. non-ablated according to their proximity to the ablation sites and then classified into the same categories. STD labels, previously defined by interventional cardiologists during the ablation procedure, are also aggregated as a prior probability in the classification task. Classification results on the test set show that a shallow CNN gives the best performance, with an F1-score of 76%. Aggregating the STD label does not help improve the model's performance. Overall, this work is among the first attempts to apply statistical analysis and ML tools to automatically identify successful ablation areas in STD-based ablation. By providing interventional cardiologists with a real-time objective measure of STD, the proposed solution offers the potential to improve the efficiency and effectiveness of this fully patient-tailored catheter ablation approach for treating persistent AF.
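The classification setting sketched in this abstract (imbalanced STD vs. non-STD labels, raw multichannel EGM matrices, a shallow CNN) can be mocked up as follows with synthetic data; the array shapes, the jitter-based augmentation of the minority class and the tiny network are illustrative assumptions, not the thesis's actual data or architecture.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(3)
# Hypothetical EGM-like recordings: 250 time samples x 10 bipolar channels each
X = rng.normal(size=(400, 250, 10)).astype("float32")
y = (rng.random(400) < 0.1).astype("int32")            # heavily imbalanced STD labels

# Naive augmentation of the minority class: jittered copies of the STD recordings
minority = X[y == 1]
aug = minority + rng.normal(scale=0.05, size=minority.shape).astype("float32")
X = np.concatenate([X, aug])
y = np.concatenate([y, np.ones(len(aug), dtype="int32")])

# A deliberately small 1D CNN over the time axis
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(250, 10)),
    tf.keras.layers.Conv1D(16, kernel_size=5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print("training AUC:", float(model.evaluate(X, y, verbose=0)[1]))
```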
Bruni, Edoardo. "Systématique des hauteurs : une théorie musicale basée sur la classification, la description et la comparaison de tous les ensembles de hauteurs (gammes, modes, accords)." Paris 4, 2005. http://www.theses.fr/2005PA040400.
Повний текст джерелаTanquerel, Lucille. "Caractérisation des documents sonores : Etude et conception d'un procédé de calcul rapide de signature audio basée sur une perception limitée du contenu." Caen, 2008. http://www.theses.fr/2008CAEN2056.
The description of the sound characteristics of a document is key for automatic treatments involving audio data. The objective of our work is to describe a method able to rapidly generate a signature of a sound file by extracting physical characteristics over the file (spectral analysis of the signal). The innovation of our proposal concerns the organization of the extraction of samples and the analysis mode, in order to quickly provide a signature representative of the musical content. The organization of the extraction defines how samples are taken. Our proposal aims to achieve a statistical, sequential, minimal sampling allocated over the sound file. The principle is based on the assumption that collecting a small quantity of short-duration samples is sufficient to obtain information that effectively summarizes the perceived rhythm. Our validation method is based on an objective recognition error. We show that the signature allows files to be compared with one another and accurately identifies identical pieces even if they are not complete. We also show that it can match two halves of the same song with a significant success rate. The validation is also based on the comparison of the rhythmical signature with human perception, and on the distinction of sound recordings according to the language spoken. All tests provide interesting results given the computation time.
Selmane, Sid Ali. "Détection et analyse des communautés dans les réseaux sociaux : approche basée sur l'analyse formelle de concepts." Thesis, Lyon 2, 2015. http://www.theses.fr/2015LYO22004.
The study of community structure in networks has become an increasingly important issue. Knowledge of the core modules (communities) of networks helps us to understand how they work and behave, and to understand the performance of these systems. A community in a graph (network) is defined as a set of nodes that are strongly linked to each other but weakly linked to the rest of the graph. Members of the same community share the same interests. The originality of our research is to show that it is relevant to use formal concept analysis for community detection, unlike conventional approaches using graphs. We studied several problems related to community detection in social networks: (1) the evaluation of community detection methods in the literature, (2) the detection of disjoint and overlapping communities, and (3) the modelling and analysis of heterogeneous social networks from three-dimensional data. To assess the community detection methods proposed in the literature, we first studied the state of the art, which allowed us to present a classification of community detection methods by evaluating each of the best-known methods. For the second part, we developed an approach for detecting disjoint and overlapping communities in homogeneous social networks from adjacency matrices (one-mode, one-dimensional data) by exploiting techniques from formal concept analysis. We also paid special attention to methods for modelling heterogeneous social networks. We focused in particular on three-dimensional data and proposed, in this framework, an approach for modelling and analysing social networks from such data, based on a methodological framework to better understand their three-dimensional aspect. In addition, the analysis concerns the discovery of communities and hidden relationships between the different types of individuals in these networks. The main idea lies in mining communities and triadic association rules from these heterogeneous networks to simplify and reduce the computational complexity of this process. The results are then used for an application recommending links and content to individuals in a social network.
Bahri, Nesrine. "Une commande neuronale adaptative basée sur des émulateurs neuronal et multimodèle pour les systèmes non linéaires MIMO et SIMO." Thesis, Le Havre, 2015. http://www.theses.fr/2015LEHA0024/document.
The porosity of an RTM-type carbon/epoxy composite plate is determined by means of X-ray tomography. A method for determining this porosity by measuring the attenuation of longitudinal waves through the thickness of this kind of plate is proposed. These measurements are made on surfaces of different sizes (from a few cm² to a few mm²) and allow maps to be obtained. A porosity (X-ray tomography) versus attenuation (ultrasonic wave) correspondence is deduced and analyzed according to the structure of the composite material. In every case, we estimate the quality of the obtained relations and deduce the limits of validity of the correspondence between porosity and attenuation. First results of acoustic tomography are obtained.
Davy, Manuel. "Noyaux optimisés pour la classification dans le plan temps-fréquence : proposition d'un algorithme constructif et d'une référence bayésienne basée sur les méthodes MCMC : application au diagnostic d'enceintes acoustiques." Nantes, 2000. http://www.theses.fr/2000NANT2065.
Steichen, Olivier. "Raisonnement par règles et raisonnement par cas pour la résolution des problèmes en médecine." Thesis, Paris 1, 2013. http://www.theses.fr/2013PA010691.
Physicians try to solve the health problems of individual patients. Customized solutions take into account the uniqueness of the patient. Is the individualization of medical decisions possible and desirable? If so, how can it and should it be performed? The first part of the thesis shows that the question has arisen since the first conceptualizations of medical reasoning (Hippocrates); that it was much debated in the early nineteenth century, when statistical studies were first performed to guide medical decisions; and that the medical observation and its evolution materialize how case documentation and management interact. The second part addresses the issue in the current context, from the birth of evidence-based medicine, through its critics, to its evolution. The third part shows that linking rule-based and case-based reasoning adequately pictures the process of customizing medical decisions. This simple model can account for the movement between two kinds of customization and leads to a balanced approach, tested in the field of practice evaluation and the medical literature.
Malik, Muhammad Ghulam Abbas. "Méthodes et outils pour les problèmes faibles de traduction." Phd thesis, Grenoble, 2010. http://tel.archives-ouvertes.fr/tel-00502192.
Повний текст джерелаDésoyer, Adèle. "Appariement de contenus textuels dans le domaine de la presse en ligne : développement et adaptation d'un système de recherche d'information." Thesis, Paris 10, 2017. http://www.theses.fr/2017PA100119/document.
The goal of this thesis, conducted within an industrial framework, is to pair textual media content. Specifically, the aim is to pair online news articles with relevant videos for which we have a textual description. The main issue is then a matter of textual analysis; no image or spoken-language analysis was undertaken in the present study. The question that arises is how to compare these particular objects, the texts, and also what criteria to use in order to estimate their degree of similarity. We consider that one of these criteria is the topical similarity of their content; in other words, two documents have to deal with the same topic to form a relevant pair. This problem falls within the field of information retrieval (IR), which is the main strategy called upon in this research. Furthermore, when dealing with news content, the time dimension is of prime importance. To address this aspect, the field of topic detection and tracking (TDT) is also explored. The pairing system developed in this thesis distinguishes different steps which complement one another. In the first step, the system uses natural language processing (NLP) methods to index both articles and videos, in order to go beyond the traditional bag-of-words representation of texts. In the second step, two scores are calculated for an article-video pair: the first one reflects their topical similarity and is based on a vector space model; the second one expresses their proximity in time, based on an empirical function. At the end of the algorithm, a classification model learned from manually annotated document pairs is used to rank the results. Evaluation of the system's performance raised some further questions in this doctoral research. The constraints imposed both by the data and by the specific needs of the partner company led us to adapt the evaluation protocol traditionally used in IR, namely the Cranfield paradigm. We therefore propose an alternative solution for evaluating the system that takes all our constraints into account.
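A compact sketch of the two scores this abstract describes for an article-video pair: a topical similarity from a vector space model (here TF-IDF plus cosine similarity) and a temporal proximity from an empirical decay function, combined with fixed weights. The example texts, the half-life parameter and the 0.7/0.3 weighting are hypothetical; the thesis's actual indexing and learned ranking model are not reproduced here.

```python
from datetime import datetime

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

articles = [("Flood warnings issued across the region", datetime(2017, 3, 2)),
            ("Election results announced after recount", datetime(2017, 3, 5))]
videos = [("Aerial footage of flooded streets and rescues", datetime(2017, 3, 1)),
          ("Candidates react to the recount announcement", datetime(2017, 3, 5))]

# Topical similarity from a vector space model over the textual descriptions
vec = TfidfVectorizer()
tfidf = vec.fit_transform([t for t, _ in articles] + [t for t, _ in videos])
topic_sim = cosine_similarity(tfidf[:2], tfidf[2:])          # article x video matrix

def time_score(d1, d2, half_life_days=2.0):
    """Hypothetical temporal proximity: decays with the gap between publication dates."""
    gap = abs((d1 - d2).days)
    return 0.5 ** (gap / half_life_days)

for i, (a_text, a_date) in enumerate(articles):
    scores = [0.7 * topic_sim[i, j] + 0.3 * time_score(a_date, v_date)   # hypothetical weights
              for j, (_, v_date) in enumerate(videos)]
    print(a_text, "->", videos[int(np.argmax(scores))][0])
```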
Labiad, Ali. "Sélection des mots clés basée sur la classification et l'extraction des règles d'association." Thèse, 2017. http://depot-e.uqtr.ca/8196/1/031872941.pdf.
Bendakir, Narimel. "RARE : un système de recommandation de cours basé sur les règles d'association." Thèse, 2006. http://hdl.handle.net/1866/16732.
Tshibala, Tshitoko Emmanuel. "Prédiction des efforts de test : une approche basée sur les seuils des métriques logicielles et les algorithmes d'apprentissage automatique." Thèse, 2019. http://depot-e.uqtr.ca/id/eprint/9431/1/eprint9431.pdf.
Salavati-Khoshghalb, Majid. "Recourse policies in the vehicle routing problem with stochastic demands." Thèse, 2017. http://hdl.handle.net/1866/19297.