Theses on the topic "Similarity measure and multi-Criteria"


Below are the 17 best theses for research on the topic "Similarity measure and multi-Criteria".


1

Serrai, Walid. « Évaluation de performances de solutions pour la découverte et la composition des services web ». Electronic Thesis or Diss., Paris Est, 2020. http://www.theses.fr/2020PESC0032.

Abstract:
Software systems accessible via the web are built from existing, distributed web services that interact by exchanging messages. A web service exposes its functionality through an interface described in a machine-readable format, and other systems interact with it, without human intervention, according to a prescribed procedure using the messages of a protocol. Web services can be deployed on cloud platforms. This type of deployment places a large number of services in the same directories, raising several problems: how to manage these services efficiently so as to facilitate their discovery for a possible composition, and, given a directory, how to define an architecture or data structure that optimizes service discovery, composition, and management. Service discovery consists in finding one or more services that satisfy the client's criteria. Service composition consists in finding a set of services that can be executed according to a scheme while satisfying the client's constraints. As the number of services keeps growing, the demand for architectures that offer not only quality of service but also fast response times for discovery, selection, and composition is increasingly intense. These architectures must also remain easy to manage and maintain over time. Exploring communities and index structures, combined with the use of multi-criteria measures, could offer an effective solution provided that the data structures, types of measures, and techniques are chosen appropriately. In this thesis, solutions are proposed for service discovery, selection, and composition so as to optimize the search in terms of response time and relevance of the results. The performance of the proposed solutions is evaluated using simulation platforms.
2

Chaibou, Salaou Mahaman Sani. « Segmentation d'image par intégration itérative de connaissances ». Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2019. http://www.theses.fr/2019IMTA0140.

Abstract:
Image processing has been a very active area of research for years. Image interpretation is one of its most important branches because of its socio-economic and scientific applications. Like most image processing pipelines, however, interpretation requires a segmentation phase to delimit the regions to be analyzed. Interpretation is the step that gives meaning to the regions detected by segmentation, so it can only analyze the regions that segmentation has detected. Although the ultimate objective of automatic interpretation is to produce the same result as a human, the logic of classical techniques in this field does not match that of human interpretation. Most conventional approaches separate the segmentation phase from the interpretation phase: the images are first segmented and then the detected regions are interpreted. Moreover, conventional segmentation techniques scan images sequentially, in the order in which pixels are stored. This traversal does not necessarily reflect how an expert explores an image. A human expert usually starts by scanning the image for possible regions of interest. When a potential area is found, it is analyzed from three viewpoints in order to recognize which object it is. First, the expert analyzes the area on the basis of its physical characteristics; then the surrounding areas are considered; and finally the expert zooms out to the whole image to obtain a wider view while still taking into account the information local to the area and that of its neighbors. In addition to information gathered directly from the physical characteristics of the image, the expert merges several sources of information to interpret the image, including knowledge acquired through professional experience, known constraints between the objects appearing in this type of image, and so on. The idea of the approach presented in this manuscript is that simulating the expert's visual activity would allow better agreement between the results of automatic interpretation and those of the expert. From this analysis of the expert's behavior we retain three important aspects of the image interpretation process, which are modeled in this work: 1. Unlike what most segmentation techniques suggest, the segmentation process is not necessarily sequential, but rather a series of decisions, each of which may question the results of its predecessors; the main objective is to produce the best possible classification of regions, and interpretation must not be limited by segmentation. 2. The process of characterizing an area of interest is not one-way: the expert can go from a local view restricted to the region of interest to a wider view including its neighbors and back again. 3. Several sources of information are gathered and merged to increase certainty when deciding how to characterize a region. The proposed model of these three levels places particular emphasis on the knowledge used and the reasoning that leads to image segmentation.
3

Šulc, Zdeněk. « Similarity Measures for Nominal Data in Hierarchical Clustering ». Doctoral thesis, Vysoká škola ekonomická v Praze, 2013. http://www.nusl.cz/ntk/nusl-261939.

Abstract:
This dissertation deals with similarity measures for nominal data in hierarchical clustering, which can cope with variables having more than two categories and which aspire to replace the simple matching approach commonly used in this area. These similarity measures take into account additional characteristics of a dataset, such as the frequency distribution of categories or the number of categories of a given variable. The thesis has three main aims. The first is an examination and clustering performance evaluation of selected similarity measures for nominal data in hierarchical clustering of objects and variables. To achieve this goal, four experiments dealing with both object and variable clustering were performed. They examine the clustering quality of the studied similarity measures for nominal data in comparison with commonly used similarity measures based on a binary transformation and, moreover, with several alternative methods for nominal data clustering. The comparison and evaluation are performed on real and generated datasets. The outputs of these experiments indicate which similarity measures can generally be used, which ones perform well in particular situations, and which ones are not recommended for object or variable clustering. The second aim is to propose a theory-based similarity measure, evaluate its properties, and compare it with the other examined similarity measures. Two novel similarity measures, Variable Entropy and Variable Mutability, are proposed; the former in particular performs very well on datasets with a lower number of variables. The third aim is to provide a convenient software implementation based on the examined similarity measures for nominal data, covering the whole clustering process from the computation of a proximity matrix to the evaluation of the resulting clusters. This goal was achieved by creating the nomclust package for R, which is freely available.
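To make the contrast with simple matching concrete, the sketch below compares the simple matching coefficient with an illustrative frequency-weighted variant in which matches on rarer categories contribute more. This is a toy example written for this listing, not the Variable Entropy or Variable Mutability measures defined in the thesis; the weighting scheme, function names, and data are assumptions.

```python
from collections import Counter
from typing import Sequence

def simple_matching(x: Sequence, y: Sequence) -> float:
    """Proportion of variables on which two objects share the same category."""
    return sum(a == b for a, b in zip(x, y)) / len(x)

def frequency_weighted_similarity(x: Sequence, y: Sequence, data) -> float:
    """Illustrative nominal similarity: a match on a rare category counts more.

    For each variable, a match contributes 1 - p_c, where p_c is the relative
    frequency of the shared category c in the dataset; mismatches contribute 0.
    """
    n_vars = len(x)
    score = 0.0
    for j, (a, b) in enumerate(zip(x, y)):
        if a == b:
            freqs = Counter(row[j] for row in data)
            score += 1.0 - freqs[a] / len(data)
    return score / n_vars

data = [
    ("red", "small", "metal"),
    ("red", "large", "metal"),
    ("blue", "small", "wood"),
    ("red", "small", "wood"),
]
print(simple_matching(data[0], data[3]))                      # 2/3: colour and size match
print(frequency_weighted_similarity(data[0], data[3], data))  # matches on common categories count less
```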
4

Wach, Dominika, Ute Stephan et Marjan Gorgievski. « More than money : Developing an integrative multi-factorial measure of entrepreneurial success ». Sage, 2016. https://tud.qucosa.de/id/qucosa%3A35642.

Abstract:
This article conceptualizes and operationalizes 'subjective entrepreneurial success' in a manner which reflects the criteria employed by entrepreneurs rather than those imposed by researchers. We used two studies to explore this notion. The first, a qualitative enquiry, investigated success definitions through interviews with 185 German entrepreneurs; five factors emerged from their reports: firm performance, workplace relationships, personal fulfilment, community impact, and personal financial rewards. The second study developed a questionnaire, the Subjective Entrepreneurial Success–Importance Scale (SES-IS), to measure these five factors using a sample of 184 entrepreneurs. We provide evidence for the validity of the SES-IS, including establishing systematic relationships of the SES-IS with objective indicators of firm success, annual income, and entrepreneur satisfaction with life and financial situation. We also provide evidence for the cross-cultural invariance of the SES-IS using a sample of Polish entrepreneurs. The contribution of our research is to show that subjective entrepreneurial success is a multi-factorial construct, that is, entrepreneurs value various indicators of success, with monetary returns being only one possible option.
5

Escande, Paul. « Compression et inférence des opérateurs intégraux : applications à la restauration d’images dégradées par des flous variables ». Thesis, Toulouse, ISAE, 2016. http://www.theses.fr/2016ESAE0020/document.

Abstract:
The restoration of images degraded by spatially varying blurs is a problem of increasing importance. It is encountered in many applications such as astronomy, computer vision, and fluorescence microscopy, where images can reach one billion pixels. Spatially varying blurs can be modelled by linear integral operators H that map a sharp image u to its blurred version Hu. After discretization of the image on a grid of N pixels, H can be viewed as a matrix of size N x N. For the targeted applications, storing this matrix would require on the order of an exabyte of memory. This simple observation illustrates the difficulties associated with the problem: i) the storage of a huge amount of data, and ii) the prohibitive computational cost of matrix-vector products. The problem suffers from the curse of dimensionality. In addition, in many applications the blur operator is unknown or only partially known. There are therefore two distinct but closely related problems: the approximation and the estimation of blurring operators. Most of this thesis is dedicated to developing new models and computational methods to address these issues.
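A back-of-the-envelope check of the storage claim above, written for this listing under the assumption of a dense matrix with one byte per entry (double precision would multiply the figure by eight):

```latex
% Dense storage of the discretized blur operator H for an image of N pixels
\[
N = 10^{9}\ \text{pixels}
\quad\Longrightarrow\quad
\#\{\text{entries of } H\} = N^{2} = 10^{18},
\]
\[
10^{18}\ \text{entries} \times 1\ \text{byte/entry} \approx 1\ \text{EB},
\qquad
10^{18} \times 8\ \text{bytes} \approx 8\ \text{EB (double precision)}.
\]
```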
6

Yu, Jodie Wei. « Investigation of New Forward Osmosis Draw Agents and Prioritization of Recent Developments of Draw Agents Using Multi-criteria Decision Analysis ». DigitalCommons@CalPoly, 2020. https://digitalcommons.calpoly.edu/theses/2185.

Abstract:
Forward osmosis (FO) is an emerging technology for water treatment due to its ability to draw freshwater across a semi-permeable membrane using an osmotic pressure gradient. However, the lack of draw agents that can both produce reasonable flux and be separated from the draw solution at low cost stands in the way of widespread implementation. This study had two objectives: to evaluate the performance of three materials — peptone, carboxymethyl cellulose (CMC), and magnetite nanoparticles (Fe3O4 NPs) — as potential draw agents, and to use multi-criteria decision matrices to systematically prioritize known draw agents from the literature for research investigation. Peptone showed water flux and reverse solute flux values comparable to other organic draw agents. CMC's high viscosity made it impractical to use, and it is not recommended as a draw agent. Fe3O4 NPs showed low average fluxes (e.g., 2.14 LMH), but discrete occurrences of high flux values (e.g., 14 LMH) were observed during FO tests. This result indicates that these nanoparticles have potential as draw agents, but further work is needed to optimize the characteristics of the nanoparticle suspension. Separation of the nanoparticles from the product water by coagulation was shown to be theoretically possible if only electrostatic and van der Waals forces are taken into account, not steric repulsion. If coagulation is to be considered for separation, research on nanoparticle suspensions as FO draw agents should focus on electrostatically stabilized nanoparticles. A combination of Fe3O4 NPs and peptone showed a higher flux than Fe3O4 NPs alone, but did not produce additive or synergistic flux; this warrants further research into combinations of draw agents that achieve higher flux than individual agents. Potential draw agents were prioritized by conducting a literature review of draw agents developed over the past five years, extracting data on evaluation criteria, and using these data to rank the draw agents with the Analytical Hierarchy Process (AHP) and the Technique for Order of Preference by Similarity to Ideal Solutions (TOPSIS). The evaluation criteria used in the ranking matrices were water flux, reverse solute flux, replenishment cost, regeneration cost, and regeneration efficacy. The top five ranked draw agents were P-2SO3-2Na, TPHMP-Na, PEI-600P-Na, NaCl, and NH4-CO2. The impact of the assumptions made during the multi-criteria decision analysis was evaluated through sensitivity analyses that altered criterion weights and included additional criteria. This ranking system provides recommendations for future research and development on draw agents by highlighting research gaps.
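TOPSIS, used here and again in several of the theses below (entries 8, 9, and 16), ranks alternatives by their closeness to an ideal solution. The following sketch is a generic, illustrative implementation for crisp data with benefit/cost criteria; the example matrix and weights are invented for this listing and do not come from the thesis.

```python
import numpy as np

def topsis(matrix: np.ndarray, weights: np.ndarray, benefit: list[bool]) -> np.ndarray:
    """Rank alternatives (rows) against criteria (columns) with classic TOPSIS.

    matrix  : raw decision matrix, shape (m alternatives, n criteria)
    weights : criterion weights summing to 1 (e.g. obtained from AHP)
    benefit : True if a criterion is to be maximized, False if minimized
    Returns the relative closeness of each alternative to the ideal solution.
    """
    # 1. Vector-normalize each column, then apply the weights.
    v = matrix / np.linalg.norm(matrix, axis=0) * weights
    # 2. Ideal and anti-ideal solutions, respecting each criterion's direction.
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    # 3. Euclidean distances to both, then relative closeness in [0, 1].
    d_plus = np.linalg.norm(v - ideal, axis=1)
    d_minus = np.linalg.norm(v - anti, axis=1)
    return d_minus / (d_plus + d_minus)

# Hypothetical example: 3 draw agents scored on water flux (maximize), reverse
# solute flux (minimize) and regeneration cost (minimize); weights are illustrative.
m = np.array([[14.0, 2.1, 0.8],
              [ 9.0, 1.2, 0.5],
              [11.0, 3.0, 0.3]])
closeness = topsis(m, np.array([0.5, 0.3, 0.2]), [True, False, False])
print(closeness.argsort()[::-1] + 1)  # ranking of alternatives, best first
```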
7

Heyns, Werner. « Urban congestion charging : road pricing as a traffic reduction measure / W. Heyns ». Thesis, North-West University, 2005. http://hdl.handle.net/10394/523.

Abstract:
Urban traffic congestion is recognised as a major problem by most people in world cities. However, the wide-scale implementation of congestion-reducing measures eludes most cities suffering from traffic congestion, as many oppose the notion of road pricing even though economists and transportation professionals have advocated its benefits for decades. The effects of road pricing have attracted considerable attention from researchers, as they are thought to hold the key to understanding and overcoming some inherent obstacles to implementation. Unfortunately, many studies consider these effects in isolation and with hypothetical, idealised, and analytical tools, sometimes losing sight of the complexities of the problem. This research empirically investigates the effects of road pricing in London and identifies factors that may sustain it as a traffic reduction instrument. The results indicate that, if the acceptance of road pricing is to be improved, an integrated approach has to be developed and implemented, based on the recognition of local perceptions, concerns, aspirations, and locally acceptable solutions. The key to dealing with the effects of road pricing is to encourage a concerted effort by the various stakeholders, developing strategies that consider a range of differing initiatives and coordinating and managing them within the political-economic context in which they exist.
Thesis (M.Art. et Scien. (Town and Regional Planning))--North-West University, Potchefstroom Campus, 2005.
8

Saksrisathaporn, Krittiya. « A multi-criteria decision support system using knowledge management and project life cycle approach : application to humanitarian supply chain management ». Thesis, Lyon 2, 2015. http://www.theses.fr/2015LYO22016/document.

Abstract:
This thesis aims to contribute to the understanding of the humanitarian operation life cycle (HOLC) in the context of humanitarian supply chain management (HSCM) and to propose a decision model that supports decision making across the phases of the HOLC in real situations. This includes implementing the proposed model to design and develop a decision support tool that improves the performance of humanitarian logistics in both national and international relief operations. The research is divided into three phases. The first phase clarifies and defines humanitarian logistics (HL) with respect to HSCM, commercial supply chain management (CSCM), and SCM, and the relationships between them. Project life cycle management (PLCM) approaches are also presented, and the difference between project life cycle management (PLM) and PLCM is clarified so that it can be addressed within the humanitarian operation life cycle. Additionally, the literature on multiple-criteria decision making (MCDM) models and existing decision aid systems for HL is analyzed to establish the research gap, covering the MCDM approaches that implement decision support systems (DSS) and how DSS have been used in the HSCM context. The second phase proposes a decision model based on MCDM approaches to support the decision maker before he or she takes action. The model provides rankings of warehouse, supplier, and transportation alternatives over the phases of the HOLC. The proposed decision model is applied in three scenarios: I. decisions over the 4-phase HOLC in an international relief operation of the French Red Cross (FRC); II. decisions over a 3-phase HOLC in a national operation of the Thai Red Cross (TRC); III. decisions in the response phase of the HOLC in an international operation of the FRC in four countries. In this phase, scenarios I and II are worked out step by step through numerical calculation and mathematical formulas; scenario III is presented in the third phase. In the third phase, a web-based multi-criteria decision support system (WB-MCDSS) implementing the proposed model is developed, based on the integration of the analytic hierarchy process (AHP) and TOPSIS. In order to reach an appropriate decision in real time, the WB-MCDSS is built on a client-server architecture and is simple to operate. Finally, the model is validated using sensitivity analysis.
9

Igoulalene, Idris. « Développement d'une approche floue multicritère d'aide à la coordination des décideurs pour la résolution des problèmes de sélection dans les chaines logistiques ». Thesis, Aix-Marseille, 2014. http://www.theses.fr/2014AIXM4357/document.

Abstract:
This thesis develops a multi-criteria group decision making approach to solve selection problems in supply chains. We consider the setting in which a group of k decision makers/experts is in charge of evaluating and ranking a set of m potential alternatives. The alternatives are evaluated in a fuzzy environment against n conflicting criteria, both subjective (qualitative) and objective (quantitative). Each decision maker expresses his or her preferences for each alternative with respect to each criterion through a fuzzy matrix called a preference matrix. The approach comprises two main phases: a consensus phase, which seeks a global agreement among the decision makers, and a ranking phase, which ranks the alternatives. We have developed three new approaches for manufacturing strategy, information system, and robot selection problems: 1. a fuzzy consensus-based possibility measure and goal programming approach; 2. a fuzzy consensus-based neat OWA and goal programming approach; 3. a fuzzy consensus-based goal programming and TOPSIS approach. Finally, the three approaches are compared, leading to recommendations for improving them and to decision aid that best satisfies the decision makers.
10

Dang, Vinh Q. « Evolutionary approaches for feature selection in biological data ». Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2014. https://ro.ecu.edu.au/theses/1276.

Abstract:
Data mining techniques have been used widely in many areas such as business, science, engineering, and medicine. These techniques allow a vast amount of data to be explored in order to extract useful information. One focus in the health area is finding interesting biomarkers in biomedical data. High-throughput data generated from microarrays and mass spectrometry of biological samples are high dimensional and small in sample size; examples include DNA microarray datasets with up to 500,000 genes and mass spectrometry data with 300,000 m/z values. While the availability of such datasets can aid in the development of techniques and drugs to improve the diagnosis and treatment of diseases, a major challenge is their analysis to extract useful and meaningful information. The aims of this project are: 1) to investigate and develop feature selection algorithms that incorporate various evolutionary strategies, 2) to use the developed algorithms to find the "most relevant" biomarkers contained in biological datasets, and 3) to evaluate the goodness of the extracted feature subsets for relevance, examined in terms of existing biomedical domain knowledge and of the classification accuracy obtained with different classifiers. The project aims to generate good predictive models for classifying diseased samples from controls.
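As an illustration of the kind of evolutionary feature selection described above, the sketch below evolves a binary feature mask with a simple genetic algorithm, scoring each mask with a nearest-centroid classifier on a held-out split. Everything here (the synthetic dataset, the fitness function, and the GA parameters) is a toy assumption made for this listing, not the algorithms developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy high-dimensional data: 60 samples, 200 features, 2 classes.
X = rng.normal(size=(60, 200))
y = np.array([0, 1] * 30)
X[y == 1, :5] += 1.5          # only the first 5 features carry signal
train, test = np.arange(0, 40), np.arange(40, 60)

def fitness(mask: np.ndarray) -> float:
    """Accuracy of a nearest-centroid classifier using only the selected features."""
    if mask.sum() == 0:
        return 0.0
    Xtr, Xte = X[train][:, mask], X[test][:, mask]
    centroids = np.stack([Xtr[y[train] == c].mean(axis=0) for c in (0, 1)])
    dists = np.linalg.norm(Xte[:, None, :] - centroids[None, :, :], axis=2)
    return float((dists.argmin(axis=1) == y[test]).mean())

def evolve(pop_size=30, generations=40, p_mut=0.005):
    """Evolve binary feature masks: tournament selection, uniform crossover, bit flips."""
    pop = rng.random((pop_size, X.shape[1])) < 0.1      # start with sparse masks
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        # Tournament selection of parents (size-3 tournaments).
        idx = np.array([max(rng.choice(pop_size, 3, replace=False), key=lambda i: scores[i])
                        for _ in range(pop_size)])
        parents = pop[idx]
        # Uniform crossover between consecutive parents, then bit-flip mutation.
        cross = rng.random(pop.shape) < 0.5
        children = np.where(cross, parents, np.roll(parents, 1, axis=0))
        pop = children ^ (rng.random(pop.shape) < p_mut)
    scores = np.array([fitness(ind) for ind in pop])
    return pop[scores.argmax()], scores.max()

best_mask, best_acc = evolve()
print("selected features:", np.flatnonzero(best_mask)[:10], "accuracy:", best_acc)
```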
11

Yang, Chin-Chang, et 楊晉昌. « Fuzzy Similarity Measure Based Hybrid Image Filter for Color Image Restoration : Multi-methodology Evolutionary Programming ». Thesis, 2009. http://ndltd.ncl.edu.tw/handle/48632683999874780982.

Abstract:
Master's thesis. National Cheng Kung University, Department of Computer Science and Information Engineering, academic year 97.
A hybrid image filter for color image restoration based on multi-methodology evolutionary computation and a fuzzy similarity measure is proposed in this thesis. First, a multi-methodology evolutionary computation (MMEC) scheme is proposed for multi-objective optimization problems. Then, a hybrid image filter with a fuzzy-based similarity measure is proposed for noise reduction. Finally, an experience-based construction of the fuzzy sets used in the similarity measure is shown to be near-optimal via MMEC and is applied to color image restoration. The experimental results show that the proposed fuzzy-similarity-based hybrid image filter achieves better filtering quality than the classical vector filters and the bilateral filter, which are restricted by the shapes of their weighting functions. The proposed filter is effective at restoring color images contaminated by impulse noise, Gaussian noise, and mixed noise.
12

Chaibou, Salaou Mahaman Sani. « Segmentation d'image par intégration itérative de connaissances ». Thesis, 2019. http://www.theses.fr/2019IMTA0140/document.

Abstract:
Image processing has been a very active area of research for years. Image interpretation is one of its most important branches because of its socio-economic and scientific applications. Like most image processing pipelines, however, interpretation requires a segmentation phase to delimit the regions to be analyzed. Interpretation is the step that gives meaning to the regions detected by segmentation, so it can only analyze the regions that segmentation has detected. Although the ultimate objective of automatic interpretation is to produce the same result as a human, the logic of classical techniques in this field does not match that of human interpretation. Most conventional approaches separate the segmentation phase from the interpretation phase: the images are first segmented and then the detected regions are interpreted. Moreover, conventional segmentation techniques scan images sequentially, in the order in which pixels are stored. This traversal does not necessarily reflect how an expert explores an image. A human expert usually starts by scanning the image for possible regions of interest. When a potential area is found, it is analyzed from three viewpoints in order to recognize which object it is. First, the expert analyzes the area on the basis of its physical characteristics; then the surrounding areas are considered; and finally the expert zooms out to the whole image to obtain a wider view while still taking into account the information local to the area and that of its neighbors. In addition to information gathered directly from the physical characteristics of the image, the expert merges several sources of information to interpret the image, including knowledge acquired through professional experience, known constraints between the objects appearing in this type of image, and so on. The idea of the approach presented in this manuscript is that simulating the expert's visual activity would allow better agreement between the results of automatic interpretation and those of the expert. From this analysis of the expert's behavior we retain three important aspects of the image interpretation process, which are modeled in this work: 1. Unlike what most segmentation techniques suggest, the segmentation process is not necessarily sequential, but rather a series of decisions, each of which may question the results of its predecessors; the main objective is to produce the best possible classification of regions, and interpretation must not be limited by segmentation. 2. The process of characterizing an area of interest is not one-way: the expert can go from a local view restricted to the region of interest to a wider view including its neighbors and back again. 3. Several sources of information are gathered and merged to increase certainty when deciding how to characterize a region. The proposed model of these three levels places particular emphasis on the knowledge used and the reasoning that leads to image segmentation.
13

Alasoud, Ahmed Khalifa. « A multi-matching technique for combining similarity measures in ontology integration ». Thesis, 2009. http://spectrum.library.concordia.ca/976336/1/NR63399.pdf.

Abstract:
Ontology matching is a challenging problem in many applications and a major issue for interoperability in information systems. It aims to find semantic correspondences between a pair of input ontologies, which remains a labor-intensive and expensive task. This thesis investigates the problem of ontology matching in both its theoretical and practical aspects and proposes a solution methodology called multi-matching. The methodology is validated using standard benchmark data, and its performance is compared with available matching tools. The proposed methodology provides a framework in which users can apply different individual matching techniques; it then searches over and combines the match results to provide a desired match result in reasonable time. In addition to existing applications of ontology matching such as ontology engineering, ontology integration, and exploiting the semantic web, the thesis proposes a new approach for ontology integration as a backbone application of the proposed matching techniques. In terms of theoretical contributions, we introduce new search strategies and propose a structure similarity measure to match the structures of ontologies. In terms of practical contributions, we developed a research prototype called MLMAR (Multi-Level Matching Algorithm with Recommendation analysis technique), which implements the proposed multi-level matching technique and applies heuristics as optimization techniques. Experimental results show the practical merits and usefulness of MLMAR.
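The idea of combining several individual matchers can be illustrated with a small sketch: a lexical measure on entity names (Python's standard difflib ratio) and a crude structural measure (overlap of neighbouring concept labels) are blended with user-chosen weights. The measures, weights, and toy ontologies below are assumptions made for this listing, not the thesis's MLMAR algorithm.

```python
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Lexical similarity of two entity names in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def structure_similarity(neigh_a: set[str], neigh_b: set[str]) -> float:
    """Jaccard overlap of the neighbouring concept labels of two entities."""
    if not neigh_a and not neigh_b:
        return 0.0
    return len(neigh_a & neigh_b) / len(neigh_a | neigh_b)

def combined_similarity(a, b, neigh_a, neigh_b, w_name=0.6, w_struct=0.4) -> float:
    """Weighted combination of individual matchers, as in a multi-matching setup."""
    return w_name * name_similarity(a, b) + w_struct * structure_similarity(neigh_a, neigh_b)

# Toy ontology fragments: concept name mapped to labels of adjacent concepts.
onto1 = {"Author": {"Book", "Name"}, "Book": {"Author", "Title"}}
onto2 = {"Writer": {"Publication", "Name"}, "Publication": {"Writer", "Title"}}

for c1, n1 in onto1.items():
    best = max(onto2, key=lambda c2: combined_similarity(c1, c2, n1, onto2[c2]))
    print(c1, "->", best, round(combined_similarity(c1, best, n1, onto2[best]), 2))
```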
14

« Multi-Variate Time Series Similarity Measures and Their Robustness Against Temporal Asynchrony ». Master's thesis, 2015. http://hdl.handle.net/2286/R.I.36436.

Abstract:
The amount of time series data generated is increasing due to the integration of sensor technologies with everyday applications such as gesture recognition, energy optimization, health care, and video surveillance. The simultaneous use of multiple sensors to capture different aspects of real-world attributes has also led to an increase in dimensionality from uni-variate to multi-variate time series. This has enabled richer data representation, but it has also necessitated algorithms that determine the similarity between two multi-variate time series for search and analysis. Various algorithms have been extended from the uni-variate to the multi-variate case, such as multi-variate versions of Euclidean distance, edit distance, and dynamic time warping. However, how these algorithms account for asynchrony in time series has not been studied. Human gestures, for example, exhibit asynchrony because different subjects perform the same gesture with varying movements and at different speeds. In this thesis, we propose several algorithms, some of which leverage metadata describing the relationships among the variates. In particular, we present several techniques that exploit the contextual relationships among the variates when measuring multi-variate time series similarities. Based on the way correlation is leveraged, various weighting mechanisms are proposed that determine the importance of a dimension for discriminating between time series, since giving the same weight to each dimension can lead to misclassification. We then study the robustness of the considered techniques against different temporal asynchronies, including shifts and stretching. Exhaustive experiments were carried out on datasets with multiple types and amounts of temporal asynchrony. It was observed that the accuracy of algorithms that rely on the data to discover variate relationships can be low in the presence of temporal asynchrony, whereas algorithms that rely on external metadata tend to be more robust against asynchronous distortions. Specifically, algorithms using external metadata achieve better classification accuracy and cluster separation than existing state-of-the-art work such as EROS, PCA, and naive dynamic time warping.
Master's thesis, Computer Science, 2015.
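Dynamic time warping, mentioned above as one of the baselines, aligns two sequences while allowing local temporal stretching. The sketch below is a plain multi-variate DTW with an optional per-variate weight vector; it is included only as an illustration of the kind of distance the thesis compares against, and the weighting scheme is an assumption, not the thesis's metadata-driven method.

```python
import numpy as np

def weighted_mdtw(a: np.ndarray, b: np.ndarray, w=None) -> float:
    """Multi-variate dynamic time warping distance between a (Ta, d) and b (Tb, d).

    Frames are compared with a weighted Euclidean distance; w lets some
    variates count more than others (uniform weights give plain DTW).
    """
    Ta, Tb = len(a), len(b)
    w = np.ones(a.shape[1]) if w is None else w
    cost = np.full((Ta + 1, Tb + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, Ta + 1):
        for j in range(1, Tb + 1):
            d = np.sqrt(np.sum(w * (a[i - 1] - b[j - 1]) ** 2))
            # Extend the cheapest of the three allowed warping moves.
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(cost[Ta, Tb])

# Same 2-variate gesture performed at different speeds: DTW absorbs the stretch.
t1 = np.linspace(0, 1, 50)
g1 = np.stack([np.sin(2 * np.pi * t1), np.cos(2 * np.pi * t1)], axis=1)
t2 = np.linspace(0, 1, 80)
g2 = np.stack([np.sin(2 * np.pi * t2), np.cos(2 * np.pi * t2)], axis=1)
print(weighted_mdtw(g1, g2))                          # small despite different lengths
print(weighted_mdtw(g1, g2, w=np.array([2.0, 0.5])))  # emphasise the first variate
```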
15

Mousavi, Mohammad M., et J. Quenniche. « Multi-criteria ranking of corporate distress prediction models : empirical evaluation and methodological contributions ». 2018. http://hdl.handle.net/10454/16704.

Abstract:
Although many modelling and prediction frameworks for corporate bankruptcy and distress have been proposed, the relative performance evaluation of prediction models has been criticised because the assessment typically uses a single measure of one criterion at a time, which leads to conflicting results. Mousavi et al. (Int Rev Financ Anal 42:64–75, 2015) proposed an orientation-free super-efficiency DEA-based framework to overcome this methodological issue. However, within a super-efficiency DEA framework the reference benchmark changes from one prediction model evaluation to another, which in some contexts might be viewed as "unfair" benchmarking. In this paper, we overcome this issue by proposing a slacks-based context-dependent DEA (SBM-CDEA) framework to evaluate competing distress prediction models. In addition, we propose a hybrid cross-benchmarking cross-efficiency framework as an alternative methodology for ranking DMUs that are heterogeneous. Furthermore, using data on UK firms listed on the London Stock Exchange, we perform a comprehensive comparative analysis of the most popular corporate distress prediction models, namely statistical models, under both single-criterion and multiple-criteria frameworks, considering several performance measures. We also propose new statistical models using macroeconomic indicators as drivers of distress.
16

Hussain, Zahid, et 胡杉奕. « Distance, similarity and entropy for hesitant fuzzy sets based on Hausdorff metric with applications to multi-criteria decision making and clustering ». Thesis, 2018. http://ndltd.ncl.edu.tw/handle/2ac4kx.

Abstract:
Doctoral dissertation. Chung Yuan Christian University, Department of Applied Mathematics, academic year 107.
Distance, similarity, and entropy play an indispensable role in almost every field of daily life. Distance and similarity measures are widely used to differentiate between two sets or objects, while entropy measures the fuzziness of a fuzzy set. Different distance and similarity measures have been proposed for hesitant fuzzy sets (HFSs) in the literature, but they are either insufficient or do not yield desirable results. In this manuscript, new distance and similarity measures between HFSs based on the Hausdorff metric are constructed. We first present a novel and simple method for calculating a distance between HFSs based on the Hausdorff metric in a suitable and intuitive way. Two main features of the proposed approach are: (1) it is not necessary to add a minimum, maximum, or any other value to the shorter of two hesitant fuzzy elements (HFEs) in order to extend it to the length of the longer one; and (2) there is no need to arrange HFEs in ascending or descending order, because such padding and reordering have no impact on the final results. We then extend the distance to a similarity measure between HFSs. Next, the uncertainty of an HFS is computed as the amount of distinction between the HFS and its complement; the Hausdorff metric is used to calculate this distance, which allows us to construct a novel entropy of HFSs. An axiomatic definition of entropy measures for HFSs is also given in this dissertation, and the proposed entropy is proved to satisfy all the axioms. Furthermore, generalizations of the proposed entropy allow us to construct different entropy measures of HFSs, reflecting that the closer an HFS is to its complement, the less distinction there is between them and the larger the entropy, whereas more distinction yields a smaller amount of uncertainty. Several properties are established, and examples are presented to compare the proposed distance, similarity, and entropy measures with existing methods. We apply the proposed distance of HFSs to multi-criteria decision making and the similarity measure of HFSs to clustering. The Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) is used to construct a hesitant fuzzy TOPSIS based on the proposed entropy measure to solve multi-criteria decision making problems. Finally, expository examples demonstrate the simplicity, practicability, and effectiveness of the proposed distance, similarity, and entropy measures compared with existing methods, and the comparison results show that the proposed measures are simpler, more intuitive, and better than most existing methods.
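To make the Hausdorff-based construction concrete, the sketch below computes the classical Hausdorff distance between two hesitant fuzzy elements, i.e. two finite sets of membership values of possibly different sizes, with no padding or sorting required. This is a generic illustration of a Hausdorff metric on HFEs written for this listing; the thesis's exact formulation may differ.

```python
def hausdorff_hfe(h1: set[float], h2: set[float]) -> float:
    """Hausdorff distance between two hesitant fuzzy elements (sets of memberships).

    d(h1, h2) = max( max_{a in h1} min_{b in h2} |a - b|,
                     max_{b in h2} min_{a in h1} |a - b| )
    No padding to equal length and no ordering of the elements is needed.
    """
    forward = max(min(abs(a - b) for b in h2) for a in h1)
    backward = max(min(abs(a - b) for a in h1) for b in h2)
    return max(forward, backward)

def similarity_hfe(h1: set[float], h2: set[float]) -> float:
    """A simple similarity derived from the distance, valued in [0, 1]."""
    return 1.0 - hausdorff_hfe(h1, h2)

# Two experts hesitating between several membership degrees for the same alternative.
h1 = {0.2, 0.4, 0.6}
h2 = {0.5, 0.7}
print(hausdorff_hfe(h1, h2))   # 0.3
print(similarity_hfe(h1, h2))  # 0.7
```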
17

GABBRIELLI, EMANUELE. « L’impatto delle Misure Agroambientali nella Regione Toscana. Un'Analisi Multicriteriale Geografica ». Doctoral thesis, 2015. http://hdl.handle.net/2158/995008.

Abstract:
Agri-environment measures, included in the Rural Development Programmes (RDPs), play a major role within Community policies. In almost all regions these measures are the main funding instrument of the Rural Development Programmes, both in terms of the amounts allocated and the areas concerned. However, the evaluation of the actual impacts generated by agri-environment measures on rural systems is not completely clear: the various ex-post evaluations of the effects of the RDPs show evident gaps in the quantification of the different impacts attributed to these measures. This complexity is mainly due to the technical peculiarity of environmental services, which are difficult to identify and measure. The research aimed to provide an analysis model adaptable to different territorial situations, capable of analysing the distribution and impact of EU funding for the measures of the Rural Development Plan that provide area-based aid. The analysis focused in particular on the agri-environment measures for organic farming and integrated agriculture in the 2007-2013 RDP of the Tuscany Region. In the first part, thanks to a purpose-built database, it was possible to outline a complete picture of the interventions, evaluating their policy objectives and analysing their actual allocation across the territory. The second part employed a geographic multi-criteria evaluation model, with the aim of identifying a methodology for an in-depth analysis of the distribution and territorial impact of the measures. In particular, a simulation of the economic and environmental effects of reductions in the EU funding budget was conducted, using geo-referenced data on the individual polygons of all farm parcels subject to agri-environmental commitments. The objective was to develop a support for public intervention in management and programming strategies, with specific reference to the interactions between agricultural development and environmental quality. The methodological approach employed can be a helpful tool to assist policy makers in their decisions, in ex-ante, interim, and ex-post analyses, also in preparation for the new measures of the 2014-2020 programming period.