Scientific literature on the topic "Similarity measure and multi-Criteria"

Create an accurate reference in the APA, MLA, Chicago, Harvard and several other citation styles

Choose a source:

Consult the thematic lists of journal articles, books, theses, conference reports and other academic sources on the topic "Similarity measure and multi-Criteria".

Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever this information is included in the metadata.

Journal articles on the topic "Similarity measure and multi-Criteria"

1. Yang, Qing Bo, and Ruo Juan Xue. "Similarity Measure between Vague Sets Based on Products." Applied Mechanics and Materials 667 (October 2014): 85–88. http://dx.doi.org/10.4028/www.scientific.net/amm.667.85.

Abstract:
This paper proposes a new method for measuring the similarity between vague sets. Multi-criteria evaluation problems arise constantly in decision making. Here, vague sets are used to model multi-criteria evaluation problems, and a product-based similarity measure is used to rank the alternatives. The proposed method solves multi-criteria evaluation problems in a reasonable and objective way.
2. Nisha, B., and S. Vijayalaksmi. "Hesitant Fuzzy Soft Sets with Similarity Measure." International Journal for Research in Applied Science and Engineering Technology 12, no. 1 (January 31, 2024): 1549–54. http://dx.doi.org/10.22214/ijraset.2024.58205.

Abstract:
Molodtsov's soft set theory is a newly emerging mathematical tool for handling uncertainty. Babitha and John defined another important class of soft sets, hesitant fuzzy soft sets. This paper gives a methodology for solving multi-criteria decision-making problems using similarity measures on hesitant fuzzy soft sets, and a decision-making problem is solved with the help of such a similarity measure.
3. Duong, Truong Thi Thuy, and Nguyen Xuan Thao. "TOPSIS model based on entropy and similarity measure for market segment selection and evaluation." Asian Journal of Economics and Banking 5, no. 2 (June 22, 2021): 194–203. http://dx.doi.org/10.1108/ajeb-12-2020-0106.

Abstract:
Purpose – The paper aims to propose a practical model for market segment selection and evaluation. It applies the technique for order preference by similarity to ideal solution (TOPSIS) to deal systematically with a multi-criteria decision-making problem.
Design/methodology/approach – A multi-criteria decision-making problem is formulated using the TOPSIS approach. A new entropy and a new similarity measure in a neutrosophic environment are proposed to evaluate the criteria weights and the relative closeness coefficients in the TOPSIS model.
Findings – The outcomes show that the TOPSIS model based on the new entropy and similarity measure is effective for evaluating and selecting market segments. Profitability, market growth and the likelihood of sustainable differential advantages are the most important criteria.
Originality/value – The paper puts forward an effective multi-criteria decision-making method for dealing with uncertain information.
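Several entries in this list build on the TOPSIS procedure the abstract above describes. As a point of reference, here is a minimal sketch of classic crisp TOPSIS; the paper itself works with neutrosophic entropy weights, so the function name and the crisp formulation below are illustrative, not the paper's method:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with the classic TOPSIS procedure.

    matrix : m alternatives x n criteria decision matrix
    weights: criterion weights summing to 1
    benefit: True for benefit criteria, False for cost criteria
    Returns the relative closeness coefficient of each alternative.
    """
    X = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    # 1. Vector-normalize each column, then weight it.
    V = w * X / np.linalg.norm(X, axis=0)
    # 2. Ideal and anti-ideal solutions per criterion.
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    # 3. Euclidean distances to both, then closeness coefficient.
    d_plus = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti, axis=1)
    return d_minus / (d_plus + d_minus)
```

An alternative that dominates on every benefit criterion gets closeness 1, a dominated one gets 0, and ties score 0.5.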
4. Wang, Lunyan, Qing Xia, Huimin Li, and Yongchao Cao. "Multi-criteria decision making method based on improved cosine similarity measure with interval neutrosophic sets." International Journal of Intelligent Computing and Cybernetics 12, no. 3 (August 12, 2019): 414–23. http://dx.doi.org/10.1108/ijicc-05-2019-0047.

Abstract:
Purpose – Fuzziness and complexity of evaluation information are common in practical decision-making problems, and interval neutrosophic sets (INSs) are a powerful tool for dealing with ambiguous information. Since similarity measures play an important role in judging the degree of closeness between the ideal solution and each alternative, the purpose of this paper is to establish a multi-criteria decision-making method based on a similarity measure under INSs.
Design/methodology/approach – Extending existing cosine similarity, the paper first introduces an improved cosine similarity measure between interval neutrosophic numbers, which considers the degrees of the truth, indeterminacy and falsity memberships of the evaluation values. A multi-criteria decision-making method is then established based on this measure, in which ordered weighted averaging (OWA) is adopted to aggregate the neutrosophic information related to each alternative. Finally, an example on supplier selection illustrates the feasibility and practicality of the presented method.
Findings – In the course of the research it was realized that the application field of the proposed similarity measure should be further expanded, and that the development of interval number theory is one direction for future work.
Originality/value – The main contributions are: an improved cosine similarity measure under INSs, in which the weights of the three independent components of an interval number are taken into account; the use of OWA to aggregate the neutrosophic information related to each alternative; and a multi-criteria decision-making method built on the proposed similarity.
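For intuition, a plain cosine similarity over the six interval endpoints of two interval neutrosophic numbers can be sketched as below. This is a simplified stand-in for illustration only, not the paper's improved weighted measure; the tuple representation and function name are assumptions:

```python
import numpy as np

def cosine_similarity_inn(a, b):
    """Cosine similarity between two interval neutrosophic numbers.

    Each number is given as ([tl, tu], [il, iu], [fl, fu]):
    interval truth, indeterminacy and falsity memberships.
    This is a plain cosine over the six interval endpoints,
    a simplified stand-in for the paper's improved measure.
    """
    x = np.ravel(a).astype(float)
    y = np.ravel(b).astype(float)
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))
```

Identical numbers score 1, and numbers whose nonzero components do not overlap score 0.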
5. Talukdar, Pranjal, and Palash Dutta. "An Advanced Entropy Measure of IFSs via Similarity Measure." International Journal of Fuzzy System Applications 12, no. 1 (March 10, 2023): 1–23. http://dx.doi.org/10.4018/ijfsa.319712.

Abstract:
The entropy measure of an intuitionistic fuzzy set (IFS) plays a significant role in the decision-making sciences, for instance medical diagnosis, pattern recognition and criminal investigation. An inadequate entropy measure may lead to invalid results, so it is important to use an efficient entropy measure when studying decision-making problems in an IFS environment. This paper first proposes a novel similarity measure for IFSs. Based on it, an advanced entropy measure is defined with a different axiomatic approach, which allows the entropy of an IFS to be measured by means of a similarity measure. To show the efficiency of the proposed similarity measure, a comparative study is performed against existing similarity measures. Some structural linguistic variables are taken as examples to show the validity and consistency of the proposed entropy measure alongside existing entropy measures. Finally, a multi-criteria decision-making problem is solved using the proposed entropy measure.
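The axiomatic link between entropy and similarity that the abstract mentions is often realized by defining the entropy of an IFS as its similarity to its own complement. A sketch of that classical construction, using a simple distance-based similarity rather than the paper's novel measure:

```python
def ifs_complement(A):
    # The complement of an IFS swaps membership mu and non-membership nu.
    return [(nu, mu) for (mu, nu) in A]

def ifs_similarity(A, B):
    # A simple normalized-distance similarity (one of many in the literature).
    d = sum(abs(ma - mb) + abs(na - nb)
            for (ma, na), (mb, nb) in zip(A, B))
    return 1 - d / (2 * len(A))

def ifs_entropy(A):
    # Entropy defined as the similarity between A and its complement:
    # maximal (1) when mu = nu everywhere, 0 for a crisp set.
    return ifs_similarity(A, ifs_complement(A))
```

A crisp element (1, 0) contributes zero entropy, while a maximally undecided element (0.5, 0.5) contributes full entropy.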
6. Peng, Xindong, and Huiyong Yuan. "Pythagorean Fuzzy Multi-Criteria Decision Making Method Based on Multiparametric Similarity Measure." Cognitive Computation 13, no. 2 (January 17, 2021): 466–84. http://dx.doi.org/10.1007/s12559-020-09781-x.

7. Wagh, Rupali S., and Deepa Anand. "Legal document similarity: a multi-criteria decision-making perspective." PeerJ Computer Science 6 (March 23, 2020): e262. http://dx.doi.org/10.7717/peerj-cs.262.

Abstract:
The vast volume of documents available in legal databases demands effective information retrieval approaches that take into consideration the intricacies of the legal domain. Relevant document retrieval is the backbone of legal research, yet the concept of relevance in the legal domain is complex and multi-faceted. In this work, we propose a novel approach to concept-based similarity estimation among court judgments. We use a graph-based method to identify prominent concepts present in a judgment and to extract sentences representative of these concepts. The sentences and concepts so mined are used to express and visualize the likeness between a pair of documents from different perspectives. We also propose to aggregate the different levels of matching so obtained into one measure quantifying the similarity between a judgment pair, employing the ordered weighted average (OWA) family of aggregation operators to obtain the similarity value. The experimental results suggest that the proposed concept-based similarity approach is effective in retrieving relevant legal documents and performs better than competing techniques. Additionally, the proposed two-level abstraction of similarity enables informative visualization for deeper insights into case relevance.
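The OWA family used here for aggregation is simple to state: weights attach to rank positions rather than to particular inputs, so the same operator can behave like a max, a min, or a plain mean depending on the weight vector. A minimal sketch:

```python
def owa(values, weights):
    """Ordered weighted average.

    Weights attach to rank positions, not to particular inputs:
    the values are first sorted in descending order, then combined.
    """
    assert len(values) == len(weights)
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))
```

With weights (1, 0, ..., 0) OWA returns the maximum, with (0, ..., 0, 1) the minimum, and with uniform weights the arithmetic mean — which is what makes it a tunable "and/or" aggregator for similarity scores.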
8. Mohamed, Saida, Areeg Abdalla, and Robert John. "New Entropy-Based Similarity Measure between Interval-Valued Intuitionistic Fuzzy Sets." Axioms 8, no. 2 (June 18, 2019): 73. http://dx.doi.org/10.3390/axioms8020073.

Abstract:
In this paper, we propose a new approach to constructing similarity measures using the entropy measure for interval-valued intuitionistic fuzzy sets. We provide several illustrative examples to demonstrate the practicality and effectiveness of the proposed formula. Finally, we use the new similarity measure to develop an approach for solving pattern recognition and multi-criteria fuzzy decision-making problems.
9. Gvozdev, O. G., A. V. Materuhin, and A. A. Maiorov. "Adaptive geofields similarity measure based on binary similarity measures generalization." Geodesy and Cartography 1002, no. 12 (January 20, 2024): 38–48. http://dx.doi.org/10.22389/0016-7126-2023-1002-12-38-48.

Abstract:
The authors discuss the task of measuring geofield similarity. Local and global approaches are reviewed, and a rationale is given for why measures originally developed for images are inapplicable to the general case of geofields. An adaptive function family based on a generalization of binary similarity measures is proposed; it can be adapted to specific scales, tasks and subject domains. The software implementation of this function family is discussed, along with its applicability for detecting similarities and differences in several special cases of geofields. The computational performance of different uses of the proposed mechanism is studied. It is shown that MT-IoU (Multi-threshold Intersection-over-Union) is a flexible and performant framework for building specialized geofield similarity measure functions.
10. Dong, Yuanxiang, Xiaoting Cheng, Weijie Chen, Hongbo Shi, and Ke Gong. "A cosine similarity measure for multi-criteria group decision making under neutrosophic soft environment." Journal of Intelligent & Fuzzy Systems 39, no. 5 (November 19, 2020): 7863–80. http://dx.doi.org/10.3233/jifs-201328.

Abstract:
In real life, uncertain and inconsistent information is widespread, and how to handle it so that it can be put to use is a problem that has to be solved. Neutrosophic soft sets can process uncertain and inconsistent information, while Dempster-Shafer evidence theory has the advantage of dealing with uncertainty: it can synthesize uncertain information and handle subjective judgments effectively. This paper therefore combines Dempster-Shafer evidence theory with neutrosophic soft sets and proposes a cosine similarity measure for multi-criteria group decision making. Unlike previous studies, the proposed similarity measure quantifies the similarity between two objects within the structure of a neutrosophic soft set, rather than between two neutrosophic soft sets. Based on the similarity measure, we also propose an objective degree and a credibility degree which reflect the decision makers' subjective preference; parameter weights are then calculated from the objective degree. Additionally, based on the credibility degree and the parameter weights, we propose modified score, accuracy and certainty functions, which can be employed to obtain a partial order relation and make decisions. We then construct an aggregation algorithm for multi-criteria group decision making based on Dempster's rule of combination and apply it to a case of medical diagnosis. Finally, tests and comparisons demonstrate that the proposed algorithm can solve multi-criteria group decision-making problems effectively.
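Dempster's rule of combination, on which the paper's aggregation algorithm is based, can be sketched for two basic probability assignments as follows; the frame of discernment and the mass values in the usage example are illustrative, not taken from the paper's medical-diagnosis case:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic probability
    assignments, given as dicts mapping frozenset focal elements
    to masses. Mass assigned to empty intersections (conflict)
    is discarded and the rest is renormalized."""
    combined = {}
    conflict = 0.0
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + a * b
        else:
            conflict += a * b
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}
```

For example, combining one source that puts mass 0.6 on {flu} and 0.4 on {flu, cold} with a second that splits 0.5/0.5 between {flu} and {cold} yields masses 5/7 on {flu} and 2/7 on {cold} after renormalizing away the 0.3 of conflicting mass.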

Theses on the topic "Similarity measure and multi-Criteria"

1. Serrai, Walid. "Évaluation de performances de solutions pour la découverte et la composition des services web." Electronic Thesis or Diss., Paris Est, 2020. http://www.theses.fr/2020PESC0032.

Abstract:
Software systems accessible via the web are built from existing, distributed web services that interact by exchanging messages. A web service exposes its functionality through an interface described in a machine-readable format, and other systems interact with it, without human intervention, according to a prescribed procedure using the messages of a protocol. Web services can be deployed on cloud platforms. This type of deployment leads to a large number of services being managed in the same directories, which raises several problems: how to manage these services effectively so as to facilitate their discovery for a possible composition, and, given a directory, how to define an architecture, or even a data structure, that optimizes the discovery of services, their composition and their management. Service discovery consists of finding one or more services that satisfy the client's criteria. Service composition consists of finding a number of services that can be executed according to a scheme and that satisfy the client's constraints. As the number of services keeps increasing, the demand for architectures that offer not only quality of service but also fast response times for discovery, selection and composition is growing ever more intense. These architectures must also be easily manageable and maintainable over time. Exploring communities and index structures, combined with the use of multi-criteria measures, could offer an effective solution provided that the data structures, types of measures and techniques are chosen appropriately. In this thesis, solutions are proposed for the discovery, selection and composition of services that optimize the search in terms of response time and relevance of the results. The performance of the proposed solutions is evaluated using simulation platforms.
2. Chaibou Salaou, Mahaman Sani. "Segmentation d'image par intégration itérative de connaissances." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2019. http://www.theses.fr/2019IMTA0140.

Abstract:
Image processing has been a very active area of research for years. The interpretation of images is one of its most important branches because of its socio-economic and scientific applications. Like most image processing pipelines, however, interpretation requires a segmentation phase to delimit the regions to be analyzed: interpretation is the process that gives meaning to the regions detected by segmentation, so it can only analyze the regions that segmentation detects. Although the ultimate objective of automatic interpretation is to produce the same result as a human, the logic of classical techniques in this field does not mirror that of human interpretation. Most conventional approaches separate the segmentation phase from the interpretation phase: images are first segmented, and the detected regions are then interpreted. In addition, conventional segmentation techniques scan images sequentially, in the order in which the pixels are stored. This does not necessarily reflect how a human expert explores an image. An expert usually starts by scanning the image for possible regions of interest. When he finds a potential area, he analyzes it from three points of view to recognize what object it is: first on the basis of its physical characteristics, then by considering its surrounding areas, and finally by zooming out to the whole image for a wider view that combines information local to the region with that of its neighbors. Beyond the information gathered directly from the physical characteristics of the image, the expert merges several sources of information to interpret the image, including knowledge acquired through professional experience and known constraints between the objects found in this type of image. The idea of the approach proposed in this manuscript is that simulating the visual activity of the expert would allow better agreement between the results of the interpretation and those of the expert. From this analysis of the expert's behavior, we retain three important aspects of the image interpretation process, which are modelled in this work: 1. Unlike what most segmentation techniques suggest, the segmentation process is not necessarily sequential, but rather a series of decisions, each of which may question the results of its predecessors; the main objective is to produce the best possible classification of regions, and interpretation must not be limited by segmentation. 2. The process of characterizing an area of interest is not one-way: the expert can go from a local view restricted to the region of interest to a wider view including its neighbors, and back again. 3. Several sources of information are gathered and merged for better certainty when deciding how to characterize a region. The proposed model of these three levels places particular emphasis on the knowledge used and the reasoning that leads to image segmentation.
3. Šulc, Zdeněk. "Similarity Measures for Nominal Data in Hierarchical Clustering." Doctoral thesis, Vysoká škola ekonomická v Praze, 2013. http://www.nusl.cz/ntk/nusl-261939.

Abstract:
This dissertation thesis deals with similarity measures for nominal data in hierarchical clustering which can cope with variables with more than two categories and which aspire to replace the simple matching approach standardly used in this area. These similarity measures take into account additional characteristics of a dataset, such as the frequency distribution of categories or the number of categories of a given variable. The thesis has three main aims. The first is an examination and evaluation of the clustering performance of selected similarity measures for nominal data in hierarchical clustering of objects and variables. To achieve this goal, four experiments dealing with both object and variable clustering were performed. They examine the clustering quality of the studied similarity measures in comparison with commonly used similarity measures based on a binary transformation, and with several alternative methods for nominal data clustering. The comparison and evaluation are performed on real and generated datasets. The outputs of these experiments indicate which similarity measures can generally be used, which ones perform well in particular situations, and which ones are not recommended for object or variable clustering. The second aim is to propose a theory-based similarity measure, evaluate its properties, and compare it with the other examined similarity measures. Based on this aim, two novel similarity measures, Variable Entropy and Variable Mutability, are proposed; the former performs very well on datasets with a lower number of variables. The third aim is to provide a convenient software implementation of the examined similarity measures for nominal data, covering the whole clustering process from the computation of a proximity matrix to the evaluation of the resulting clusters. This goal was achieved by creating the nomclust package for the software R, which is freely available.
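The contrast the thesis draws between simple matching and frequency-aware similarity measures can be illustrated with the well-known IOF (inverse occurrence frequency) measure. The thesis's own Variable Entropy and Variable Mutability measures are different constructions, so treat this only as a sketch of the frequency-aware family:

```python
import math
from collections import Counter

def simple_matching(x, y):
    # Baseline: the share of variables on which two objects agree.
    return sum(a == b for a, b in zip(x, y)) / len(x)

def iof_similarity(x, y, data):
    """Inverse-occurrence-frequency similarity for nominal data.

    Per variable: 1 for a match, and for a mismatch a penalty that
    shrinks as the two categories become more frequent in the dataset,
    so disagreement between common categories counts for less.
    """
    score = 0.0
    for k, (a, b) in enumerate(zip(x, y)):
        freq = Counter(row[k] for row in data)
        if a == b:
            score += 1.0
        else:
            score += 1.0 / (1.0 + math.log(freq[a]) * math.log(freq[b]))
    return score / len(x)
```

Both measures return 1 for identical objects; they differ only in how mismatches are weighted, which is exactly the kind of dataset-dependent refinement the thesis evaluates.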
4. Wach, Dominika, Ute Stephan, and Marjan Gorgievski. "More than money: Developing an integrative multi-factorial measure of entrepreneurial success." Sage, 2016. https://tud.qucosa.de/id/qucosa%3A35642.

Abstract:
This article conceptualizes and operationalizes 'subjective entrepreneurial success' in a manner which reflects the criteria employed by entrepreneurs, rather than those imposed by researchers. We used two studies to explore this notion. The first, a qualitative enquiry, investigated success definitions through interviews with 185 German entrepreneurs; five factors emerged from their reports: firm performance, workplace relationships, personal fulfilment, community impact and personal financial rewards. The second study developed a questionnaire, the Subjective Entrepreneurial Success–Importance Scale (SES-IS), to measure these five factors using a sample of 184 entrepreneurs. We provide evidence for the validity of the SES-IS, including systematic relationships between the SES-IS and objective indicators of firm success, annual income, and entrepreneurs' satisfaction with life and their financial situation. We also provide evidence for the cross-cultural invariance of the SES-IS using a sample of Polish entrepreneurs. The contribution of our research is to show that subjective entrepreneurial success is a multi-factorial construct: entrepreneurs value various indicators of success, with monetary returns being only one possible option.
5. Escande, Paul. "Compression et inférence des opérateurs intégraux : applications à la restauration d'images dégradées par des flous variables." Thesis, Toulouse, ISAE, 2016. http://www.theses.fr/2016ESAE0020/document.

Abstract:
The restoration of images degraded by spatially varying blurs is a problem of increasing importance. It is encountered in many applications such as astronomy, computer vision and fluorescence microscopy, where images can be a billion pixels in size. Variable blurs can be modelled by linear integral operators H that map a sharp image u to its blurred version Hu. After discretization of the image on a grid of N pixels, H can be viewed as a matrix of size N x N. For the targeted applications, storing this matrix would require an exabyte of memory. This simple observation illustrates the difficulties of the problem: i) the storage of a huge amount of data, and ii) the prohibitive computational cost of matrix-vector products. The problem suffers from the curse of dimensionality. In addition, in many applications the blur operator is unknown or only partially known. There are therefore two complementary but closely linked problems, the approximation and the estimation of blurring operators, which have to be addressed with a global overview. Most of this thesis is dedicated to developing new models and computational methods to address these issues.
6. Yu, Jodie Wei. "Investigation of New Forward Osmosis Draw Agents and Prioritization of Recent Developments of Draw Agents Using Multi-criteria Decision Analysis." DigitalCommons@CalPoly, 2020. https://digitalcommons.calpoly.edu/theses/2185.

Texte intégral
Résumé :
Forward osmosis (FO) is an emerging technology for water treatment due to their ability to draw freshwater using an osmotic pressure gradient across a semi-permeable membrane. However, the lack of draw agents that could both produce reasonable flux and be separated from the draw solution at a low cost stand in the way of widespread implementation. This study had two objectives: evaluate the performance of three materials — peptone, carboxymethyl cellulose (CMC), and magnetite nanoparticles (Fe3O4 NPs) — as potential draw agents, and to use multi-criteria decision matrices to systematically prioritize known draw agents from literature for research investigation. Peptone showed water flux and reverse solute flux values comparable to other organic draw agents. CMC’s high viscosity made it impractical to use and is not recommended as a draw agent. Fe3O4 NPs showed average low fluxes (e.g., 2.14 LMH) but discrete occurrences of high flux values (e.g., 14 LMH) were observed during FO tests. This result indicates that these nanoparticles have potential as draw agents but further work is needed to optimize the characteristics of the nanoparticle suspension. Separation of the nanoparticles from the product water using coagulation was shown to be theoretically possible if only electrostatic and van der Waals forces are taken into account, not steric repulsion. If coagulation is to be considered for separation, research efforts on development of nanoparticle suspensions as FO draw agents should focus on development of electrostatically stabilized nanoparticles. A combination of Fe3O4 NP and peptone showed a higher flux than Fe3O4 NPs alone, but did not produce additive or synergistic flux. This warrants further research to investigate more combinations of draw agents to achieve higher flux than that obtained by individual draw agents. 
Potential draw agents were prioritized by conducting a literature review of draw agents, extracting data on evaluation criteria for draw agents developed over the past five years, and using these data to rank the draw agents with the Analytical Hierarchy Process (AHP) and the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS). The evaluation criteria used in the ranking matrices were water flux, reverse solute flux, replenishment cost, regeneration cost, and regeneration efficacy. The results showed that the top five ranked draw agents were P-2SO3-2Na, TPHMP-Na, PEI-600P-Na, NaCl, and NH4-CO2. The impact of the assumptions made during the multi-criteria decision analysis was evaluated through sensitivity analyses that altered criterion weighting and included additional criteria. This ranking system provides recommendations for future research and development on draw agents by highlighting research gaps.
Styles APA, Harvard, Vancouver, ISO, etc.
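The AHP-plus-TOPSIS ranking described in this abstract can be sketched in a few lines. The following is a minimal illustration, not the author's implementation; the decision matrix, weights, and criterion directions are hypothetical (e.g., regeneration cost as a cost criterion and water flux as a benefit criterion):

```python
import numpy as np

def topsis(decision_matrix, weights, benefit):
    """Rank alternatives with TOPSIS (higher closeness = better).

    decision_matrix: m alternatives x n criteria
    weights: criterion weights (e.g., derived from AHP), summing to 1
    benefit: per-criterion flag, True if larger values are better
    """
    X = np.asarray(decision_matrix, dtype=float)
    # Vector-normalize each criterion column, then apply the weights
    V = X / np.linalg.norm(X, axis=0) * np.asarray(weights, dtype=float)
    benefit = np.asarray(benefit)
    # Ideal and anti-ideal points, respecting benefit vs. cost direction
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)  # relative closeness to the ideal

# Hypothetical example: two draw agents scored on regeneration cost
# (lower is better) and water flux (higher is better)
scores = topsis([[250, 16], [200, 32]], [0.5, 0.5], [False, True])
```

Alternatives are then ranked by decreasing closeness; the sensitivity analyses mentioned in the abstract amount to re-running the ranking with perturbed weights or added criteria.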
7

Heyns, Werner. « Urban congestion charging : road pricing as a traffic reduction measure / W. Heyns ». Thesis, North-West University, 2005. http://hdl.handle.net/10394/523.

Texte intégral
Résumé :
Urban traffic congestion is recognised as a major problem by most people in world cities. However, the wide-scale implementation of congestion-reducing measures eludes most cities suffering from traffic congestion, as many people oppose the notion of road pricing, despite economists and transportation professionals having advocated its benefits for decades. The effects of road pricing have attracted considerable attention from researchers, as they are thought to hold the key to understanding and overcoming some inherent obstacles to implementation. Unfortunately, many of these attempts consider the effects in isolation, using hypothetical, idealised and analytical tools, sometimes losing sight of the complexities of the problem. This research empirically investigates the effects of road pricing in London and identifies factors which may sustain it as a traffic reduction instrument. The results indicate that, if the acceptance of road pricing is to be improved, an integrated approach has to be developed and implemented, based upon the recognition of local perceptions, concerns, aspirations and locally acceptable solutions. The key to dealing with the effects of road pricing is to encourage a concerted effort by the various stakeholders, developing strategies that consider a range of differing initiatives and coordinating and managing them within the political-economic context in which they exist.
Thesis (M.Art. et Scien. (Town and Regional Planning))--North-West University, Potchefstroom Campus, 2005.
Styles APA, Harvard, Vancouver, ISO, etc.
8

Saksrisathaporn, Krittiya. « A multi-criteria decision support system using knowledge management and project life cycle approach : application to humanitarian supply chain management ». Thesis, Lyon 2, 2015. http://www.theses.fr/2015LYO22016/document.

Texte intégral
Résumé :
Cette thèse vise à contribuer à la compréhension du cycle de vie d’une opération humanitaire (HOLC), en mettant en perspective la gestion de la chaîne d'approvisionnement humanitaire (HSCM), et à proposer un modèle décisionnel qui s'applique aux phases de la HOLC lors d’une situation réelle. Cela inclut la mise en oeuvre du modèle proposé pour concevoir et développer un outil d'aide à la décision afin d'améliorer les performances de la logistique humanitaire tant dans les opérations de secours nationales qu’internationales. Cette recherche est divisée en trois phases. La première partie vise à présenter le sens de l'étude ; la zone de recherche prise en compte pour la gestion de la chaîne d'approvisionnement (SCM) doit être clairement définie. La première phase consiste à clarifier et définir la HL au sein du HSCM, la gestion de la chaîne d'approvisionnement commerciale (CSCM) et le SCM, ainsi que la relation entre ces différents éléments. La gestion du cycle de vie du projet (PLCM) et ses différentes approches sont également présentées. La compréhension de la différence entre la gestion du cycle de vie du projet (PLM) et la PLCM est également nécessaire ; cela peut être abordé dans la phase du cycle de vie de l'opération humanitaire. De plus, les modèles Multiple-Criteria Decision Making (MCDM) et l’aide à la décision concernant la HL sont analysés pour établir le fossé existant en matière de recherche, notamment les approches MCDM qui mettent en oeuvre un système d'aide à la décision (DSS) et la manière dont le DSS a été utilisé dans le contexte HSCM. La deuxième phase consiste en la proposition d’un modèle décisionnel fondé sur l’approche MCDM à l'appui de la décision du décideur avant qu'il/elle ne prenne des mesures. Ce modèle prévoit le classement des alternatives concernant l'entrepôt, le fournisseur et le transport au cours des phases de la HOLC. Le modèle décisionnel proposé est réalisé selon 3 scénarios.
I. La décision sur les 4 phases de la HOLC – opération de secours internationale de la Croix-Rouge Française (CRF). II. La décision sur les 3 phases de la HOLC – opération nationale de la Croix-Rouge thaïlandaise (TRC). III. La décision au niveau de la phase de réponse de la HOLC – opération internationale de la TRC dans quatre pays. Dans cette phase, les scénarios I et II sont réalisés étape par étape au travers de calculs numériques et de formules mathématiques. Le scénario III sera présenté dans la troisième phase. Pour établir les trois scénarios, les données internes recueillies lors des entretiens avec le chef de la logistique de la Croix-Rouge Française et le vice-président de la fondation de la Croix-Rouge thaïlandaise seront utilisées. Les données externes proviennent de chercheurs experts dans le domaine de la HL ou du HSCM, de la littérature, et de sources issues des organismes humanitaires (documents d’ateliers, rapports, informations publiées sur leurs sites officiels). Dans la troisième phase, une application Web d'aide à la décision multicritère (WB-MCDSS) mettant en oeuvre le modèle proposé est élaborée. Afin d'atteindre une décision appropriée en temps réel, le WB-MCDSS est développé sur la base d’un protocole client-serveur et est simple à utiliser. Enfin, une validation du modèle est réalisée à l'aide de l'approche de l'analyse de sensibilité.
This thesis aims to contribute to the understanding of the HOLC in the context of HSCM and to propose a decision model that applies to the phases of the HOLC, supporting decision making in a real situation. This includes the implementation of the proposed model to design and develop a decision support tool in order to improve the performance of humanitarian logistics in both national and international relief operations. This research is divided into three phases. The first phase is to clarify and define HL among HSCM, commercial supply chain management (CSCM) and SCM, and their relationships. Project Life Cycle Management (PLCM) approaches are also presented. The difference between project life cycle management (PLM) and PLCM must also be clearly distinguished, as it can be addressed in the phases of the humanitarian operation life cycle. Additionally, the literature on Multiple-Criteria Decision Making (MCDM) models and existing decision aid systems for HL is analyzed to establish the research gap, covering the MCDM approaches which implement decision support systems (DSS) and, lastly, how DSS has been used in the HSCM context. The second phase is to propose a decision model based on MCDM approaches to support the decision maker before he or she takes action. This model provides rankings of warehouse, supplier and transportation alternatives over the phases of the HOLC. The proposed decision model is conducted in three scenarios: I. the decision over the 4-phase HOLC, an international relief operation of the French Red Cross (FRC); II. the decision over the 3-phase HOLC, a national operation by the Thai Red Cross (TRC); III. the decision in the response phase of the HOLC, an international operation by the FRC in four countries. In this phase, scenarios I and II are performed step by step through numerical calculations and mathematical formulas.
Scenario III is presented in the third phase, in which a web-based multi-criteria decision support system (WB-MCDSS) implementing the proposed model is developed, based on the integration of the analytic hierarchy process (AHP) and TOPSIS approaches. In order to achieve an appropriate decision in real-time response, the WB-MCDSS is developed on a client-server protocol and is simple to operate. Last but not least, a validation of the model is performed using the sensitivity analysis approach.
Styles APA, Harvard, Vancouver, ISO, etc.
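The AHP side of the AHP-TOPSIS integration described in this entry reduces criterion weighting to pairwise comparisons. The sketch below is a hedged illustration only: the row geometric-mean method is one common approximation of the AHP priority vector (not necessarily the thesis's exact procedure), and the comparison values are hypothetical judgements on Saaty's 1-9 scale:

```python
import numpy as np

def ahp_weights(pairwise):
    """Approximate AHP priority weights from a pairwise comparison matrix
    using the row geometric-mean method, a common stand-in for the
    principal-eigenvector computation."""
    A = np.asarray(pairwise, dtype=float)
    # Geometric mean of each row, then normalize to sum to 1
    gm = A.prod(axis=1) ** (1.0 / A.shape[1])
    return gm / gm.sum()

# Hypothetical judgement: criterion 1 is 3x as important as criterion 2
w = ahp_weights([[1.0, 3.0],
                 [1.0 / 3.0, 1.0]])  # -> weights [0.75, 0.25]
```

The resulting weight vector is what a TOPSIS step (as in the WB-MCDSS described above) would consume when scoring the warehouse, supplier, or transportation alternatives.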
9

Igoulalene, Idris. « Développement d'une approche floue multicritère d'aide à la coordination des décideurs pour la résolution des problèmes de sélection dans les chaines logistiques ». Thesis, Aix-Marseille, 2014. http://www.theses.fr/2014AIXM4357/document.

Texte intégral
Résumé :
Dans le cadre de cette thèse, notre objectif est de développer une approche multicritère d'aide à la coordination des décideurs pour la résolution des problèmes de sélection dans les chaines logistiques. En effet, nous considérons le cas où nous avons k décideurs/experts notés ST1,...,STk qui cherchent à classer un ensemble de m alternatives/choix notées A1,...,Am évaluées en termes de n critères conflictuels notés C1,..., Cn. L'ensemble des données manipulées est flou. Chaque décideur est amené à exprimer ses préférences pour chaque alternative par rapport à chaque critère à travers une matrice dite matrice des préférences. Notre approche comprend principalement deux phases, respectivement une phase de consensus qui consiste à trouver un accord global entre les décideurs et une phase de classement qui traite le problème de classement des différentes alternatives.Comme résultats, pour la première phase, nous avons adapté deux mécanismes de consensus, le premier est basé sur l'opérateur mathématique neat OWA et le second sur la mesure de possibilité. De même, nous avons développé un nouveau mécanisme de consensus basé sur la programmation par but goal programming. Pour la phase de classement, nous avons adapté dans un premier temps la méthode TOPSIS et dans un second, le modèle du goal programming avec des fonctions de satisfaction. Pour illustrer l'applicabilité de notre approche, nous avons utilisé différents problèmes de sélection dans les chaines logistiques comme la sélection des systèmes de formation, la sélection des fournisseurs, la sélection des robots et la sélection des entrepôts
This thesis presents the development of a multi-criteria group decision making approach to solve selection problems in supply chains. Indeed, we start in the context where a group of k decision makers/experts is in charge of the evaluation and ranking of a set of m potential alternatives. The alternatives are evaluated in a fuzzy environment while taking into consideration n conflicting criteria, both subjective (qualitative) and objective (quantitative). Each decision maker is brought to express his preferences for each alternative relative to each criterion through a fuzzy matrix called a preference matrix. We have developed three new approaches for the manufacturing strategy, information system and robot selection problems: 1. a fuzzy consensus-based possibility measure and goal programming approach; 2. a fuzzy consensus-based neat OWA and goal programming approach; 3. a fuzzy consensus-based goal programming and TOPSIS approach. Finally, a comparison of these three approaches is conducted, leading to recommendations to improve the approaches and to provide the most satisfactory decision aid to decision makers.
Styles APA, Harvard, Vancouver, ISO, etc.
10

Dang, Vinh Q. « Evolutionary approaches for feature selection in biological data ». Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2014. https://ro.ecu.edu.au/theses/1276.

Texte intégral
Résumé :
Data mining techniques have been used widely in many areas such as business, science, engineering and medicine. These techniques allow a vast amount of data to be explored in order to extract useful information. One focus in the health area is finding interesting biomarkers in biomedical data. High-throughput data generated from microarrays and mass spectrometry of biological samples are high-dimensional with small sample sizes. Examples include DNA microarray datasets with up to 500,000 genes and mass spectrometry data with 300,000 m/z values. While the availability of such datasets can aid the development of techniques and drugs to improve the diagnosis and treatment of diseases, a major challenge is analysing them to extract useful and meaningful information. The aims of this project are: 1) to investigate and develop feature selection algorithms that incorporate various evolutionary strategies, 2) to use the developed algorithms to find the “most relevant” biomarkers contained in biological datasets, and 3) to evaluate the goodness of the extracted feature subsets for relevance (examined in terms of existing biomedical domain knowledge and the classification accuracy obtained using different classifiers). The project aims to generate good predictive models for classifying diseased samples from controls.
Styles APA, Harvard, Vancouver, ISO, etc.

Livres sur le sujet "Similarity measure and multi-Criteria"

1

Howell, Simon J. Clinical trial designs in anaesthesia. Sous la direction de Jonathan G. Hardman. Oxford University Press, 2017. http://dx.doi.org/10.1093/med/9780199642045.003.0030.

Texte intégral
Résumé :
A clinical trial is a research study that assigns people or groups to different interventions and compares the impact of these on health outcomes. This chapter examines the design and delivery of clinical trials in anaesthesia and perioperative medicine covering the issues outlined below. The features of a high-quality clinical trial include well-defined inclusion and exclusion criteria, a control group, randomization, and blinding. Outcome measures may be broadly divided into counting the number of people who experience an outcome and taking measurements on people. The outcome measures selected for a clinical trial reflect the purpose of the study and may include ‘true’ clinical measures such as major postoperative complications or surrogate measures such as the results of a biochemical test. Outcome measures may be combined in a composite outcome. Assessment of health-related quality of life using a tool such as the SF-36 questionnaire is an important aspect of many clinical trials in its own right and also informs the economic analyses that may be embedded in a trial. Determining the number of recruits needed for a clinical trial requires both clinical and statistical insight and judgement. The analysis of a clinical trial requires a similarly sophisticated approach that takes into account the objectives of the study and balances the need for appropriate subgroup analyses with the risk of false-positive results. The safe and effective management of a clinical trial requires rigorous organizational discipline and an understanding of the ethical and regulatory structures that govern clinical research.
Styles APA, Harvard, Vancouver, ISO, etc.
2

Sobczyk, Eugeniusz Jacek. Uciążliwość eksploatacji złóż węgla kamiennego wynikająca z warunków geologicznych i górniczych. Instytut Gospodarki Surowcami Mineralnymi i Energią PAN, 2022. http://dx.doi.org/10.33223/onermin/0222.

Texte intégral
Résumé :
Hard coal mining is characterised by features that pose numerous challenges to its current operations and cause strategic and operational problems in planning its development. The most important of these include the high capital intensity of mining investment projects and the dynamically changing environment in which the sector operates, while the long-term role of the sector is dependent on factors originating at both national and international levels. At the same time, the conditions for coal mining are deteriorating: the resources more readily available in active mines are being exhausted, mining depths are increasing, temperature levels in pits are rising, transport routes for staff and materials are getting longer, effective working time is decreasing, natural hazards are increasing, and seams with an increasing content of waste rock are being mined. The mining industry is currently in a very difficult situation, both in technical (mining) and economic terms. It cannot be ignored, however, that the difficult financial situation of Polish mining companies is largely exacerbated by their high operating costs. The cost of obtaining coal and its price are two key elements that determine the level of efficiency of Polish mines. This situation could be improved by streamlining the planning processes. This would involve striving for production planning that is, on the one hand, as predictable as possible and, on the other, economically efficient. In this respect, it is helpful to plan the production from operating longwalls with full awareness of the complexity of geological and mining conditions and the resulting economic consequences. The constraints on increasing the efficiency of the mining process are due to the technical potential of the mining process, organisational factors and, above all, geological and mining conditions.
The main objective of the monograph is to identify the relations between geological and mining parameters, the level of longwall mining costs, and daily longwall output. In view of the above, it was assumed that it was possible to present the relationship between the costs of longwall mining and the daily coal output from a longwall as a function of onerous geological and mining factors. The monograph presents two models of onerous geological and mining conditions, covering natural hazards, deposit (seam) parameters, mining (technical) parameters and environmental factors. The models were used to calculate two onerousness indicators, WUe and WUt, which synthetically define the level of impact of onerous geological and mining conditions on the mining process in relation to: (1) operating costs at longwall faces (indicator WUe), and (2) daily longwall mining output (indicator WUt). In the next research step, an analysis of the direct relationships of selected geological and mining factors with longwall costs and the mining output level was conducted. For this purpose, two statistical models were built for the following dependent variables: unit operating cost (Model 1) and daily longwall mining output (Model 2). The models served two additional sub-objectives: interpretation of the influence of the independent variables on the dependent variables, and point forecasting. The statistical models were built on the basis of the historical production results of seven selected Polish mines. On the basis of the variability of geological and mining conditions at 120 longwalls, the influence of individual parameters on longwall mining between 2010 and 2019 was determined. The identified relationships made it possible to formulate numerical forecasts of the unit production cost and daily longwall mining output in relation to the level of expected onerousness. The projection period was assumed to be 2020–2030.
On this basis, an opinion was formulated on the forecast of the expected unit production costs and the output of the 259 longwalls planned to be mined at these mines. A procedure scheme was developed using the following methods: 1) Analytic Hierarchy Process (AHP) – mathematical multi-criteria decision-making method, 2) comparative multivariate analysis, 3) regression analysis, 4) Monte Carlo simulation. The utilitarian purpose of the monograph is to provide the research community with the concept of building models that can be used to solve real decision-making problems during longwall planning in hard coal mines. The layout of the monograph, consisting of an introduction, eight main sections and a conclusion, follows the objectives set out above. Section One presents the methodology used to assess the impact of onerous geological and mining conditions on the mining process. Multi-Criteria Decision Analysis (MCDA) is reviewed and basic definitions used in the following part of the paper are introduced. The section includes a description of AHP which was used in the presented analysis. Individual factors resulting from natural hazards, from the geological structure of the deposit (seam), from limitations caused by technical requirements, from the impact of mining on the environment, which affect the mining process, are described exhaustively in Section Two. Sections Three and Four present the construction of two hierarchical models of geological and mining conditions onerousness: the first in the context of extraction costs and the second in relation to daily longwall mining. The procedure for valuing the importance of their components by a group of experts (pairwise comparison of criteria and sub-criteria on the basis of Saaty’s 9-point comparison scale) is presented. The AHP method is very sensitive to even small changes in the value of the comparison matrix. 
In order to determine the stability of the valuation of both onerousness models, a sensitivity analysis was carried out, which is described in detail in Section Five. Section Six is devoted to the issue of constructing aggregate indices, WUe and WUt, which synthetically measure the impact of onerous geological and mining conditions on the mining process in individual longwalls and allow for a linear ordering of longwalls according to increasing levels of onerousness. Section Seven opens the research part of the work, which analyses the results of the developed models and indicators in individual mines. A detailed analysis is presented of the assessment of the impact of onerous mining conditions on mining costs in selected seams of the analysed mines, and in the case of the impact of onerous mining on daily longwall mining output, the variability of this process in individual fields (lots) of the mines is characterised. Section Eight presents the regression equations for the dependence of the costs and level of extraction on the aggregated onerousness indicators, WUe and WUt. The regression models f(KJC_N) and f(W) developed in this way are used to forecast the unit mining costs and daily output of the designed longwalls in the context of diversified geological and mining conditions. The use of regression models is of great practical importance. It makes it possible to approximate unit costs and daily output for newly designed longwall workings. The use of this knowledge may significantly improve the quality of planning processes and the effectiveness of the mining process.
Styles APA, Harvard, Vancouver, ISO, etc.

Chapitres de livres sur le sujet "Similarity measure and multi-Criteria"

1

Abbas, Rizwan, Qaisar Abbas, Gehad Abdullah Amran, Abdulaziz Ali, Majed Hassan Almusali, Ali A. AL-Bakhrani et Mohammed A. A. Al-qaness. « A New Similarity Measure for Multi Criteria Recommender System ». Dans Advances in Intelligent Systems and Computing, 29–52. Cham : Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-28106-8_3.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
2

Chatterjee, R., P. Majumdar et S. K. Samanta. « Similarity Measures in Neutrosophic Sets-I ». Dans Fuzzy Multi-criteria Decision-Making Using Neutrosophic Sets, 249–94. Cham : Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-00045-5_11.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
3

Chatterjee, R., P. Majumdar et S. K. Samanta. « Similarity Measures in Neutrosophic Sets-II ». Dans Fuzzy Multi-criteria Decision-Making Using Neutrosophic Sets, 295–325. Cham : Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-00045-5_12.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
4

Nguyen, Duc Thang, Lihui Chen et Chee Keong Chan. « Multi-viewpoint Based Similarity Measure and Optimality Criteria for Document Clustering ». Dans Information Retrieval Technology, 49–60. Berlin, Heidelberg : Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-17187-1_5.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
5

Karacapilidis, Nikos, et Lefteris Hatzieleftheriou. « Exploiting Similarity Measures in Multi-criteria Based Recommendations ». Dans E-Commerce and Web Technologies, 424–34. Berlin, Heidelberg : Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-45229-4_41.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
6

Yang, Jie, Guoyin Wang et Xukun Li. « Multi-granularity Similarity Measure of Cloud Concept ». Dans Rough Sets, 318–30. Cham : Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-47160-0_29.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
7

Bai, Xue, Siwei Luo, Qi Zou et Yibiao Zhao. « Contour Grouping by Clustering with Multi-feature Similarity Measure ». Dans Lecture Notes in Computer Science, 415–22. Berlin, Heidelberg : Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-14980-1_40.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
8

Joshi, Deepa, et Sanjay Kumar. « An Approach to Multi-criteria Decision Making Problems Using Dice Similarity Measure for Picture Fuzzy Sets ». Dans Communications in Computer and Information Science, 135–40. Singapore : Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-0023-3_13.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
9

Jayaprada, S., Amarapini Aswani et G. Gayathri. « Hierarchical Divisive Clustering with Multi View-Point Based Similarity Measure ». Dans Proceedings of the International Conference on Frontiers of Intelligent Computing : Theory and Applications (FICTA) 2013, 483–91. Cham : Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-02931-3_55.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
10

Szmidt, Eulalia, et Janusz Kacprzyk. « An Application of Intuitionistic Fuzzy Set Similarity Measures to a Multi-criteria Decision Making Problem ». Dans Artificial Intelligence and Soft Computing – ICAISC 2006, 314–23. Berlin, Heidelberg : Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11785231_34.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.

Actes de conférences sur le sujet "Similarity measure and multi-Criteria"

1

Sanz, Ismael, María Pérez et Rafael Berlanga. « Measure Selection in Multi-similarity XML Applications ». Dans 2008 19th International Conference on Database and Expert Systems Applications (DEXA). IEEE, 2008. http://dx.doi.org/10.1109/dexa.2008.46.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
2

Mun, Duhwan, Junmyon Cho et Karthik Ramani. « A Method for Measuring Part Similarity Using Ontology and a Multi-Criteria Decision Making Method ». Dans ASME 2009 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2009. http://dx.doi.org/10.1115/detc2009-87711.

Texte intégral
Résumé :
When existing parts are re-used for the development of a new product or for business-to-business transactions, a method for searching a part database for parts similar to the user’s requirements is necessary. To this end, it is important to develop a part search method that can measure the similarity between parts and the user’s input data with generality as well as robustness. In this paper, the authors suggest a method for measuring part similarity using an ontology and a multi-criteria decision making method, and address the technical details of the approach. The proposed method ensures interoperability with existing engineering information management systems, represents part specifications systematically, and provides a general procedure for measuring part similarity in specifications. A case study on ejector pins, conducted to demonstrate the proposed method, is also discussed.
Styles APA, Harvard, Vancouver, ISO, etc.
3

Kasiri, Keyvan, Paul Fieguth et David A. Clausi. « Self-similarity measure for multi-modal image registration ». Dans 2016 IEEE International Conference on Image Processing (ICIP). IEEE, 2016. http://dx.doi.org/10.1109/icip.2016.7533211.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
4

Daewon Lee, M. Hofmann, F. Steinke, Y. Altun, N. D. Cahill et B. Scholkopf. « Learning similarity measure for multi-modal 3D image registration ». Dans 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2009. http://dx.doi.org/10.1109/cvprw.2009.5206840.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
5

Abdoos, Monireh, Nasser Mozayani et Ahmad Akbari. « A new similarity difference measure in multi agent systems ». Dans 2009 14th International CSI Computer Conference (CSICC 2009) (Postponed from July 2009). IEEE, 2009. http://dx.doi.org/10.1109/csicc.2009.5349625.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
6

Yang Yan, Lihui Chen et Duc Thang Nguyen. « Semi-supervised clustering with multi-viewpoint based similarity measure ». Dans 2012 International Joint Conference on Neural Networks (IJCNN 2012 - Brisbane). IEEE, 2012. http://dx.doi.org/10.1109/ijcnn.2012.6252650.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
7

Pickering, Mark R. « A new similarity measure for multi-modal image registration ». Dans 2011 18th IEEE International Conference on Image Processing (ICIP 2011). IEEE, 2011. http://dx.doi.org/10.1109/icip.2011.6116092.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
8

Daewon Lee, Matthias Hofmann, Florian Steinke, Yasemin Altun, Nathan D. Cahill et Bernhard Scholkopf. « Learning similarity measure for multi-modal 3D image registration ». Dans 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPR Workshops). IEEE, 2009. http://dx.doi.org/10.1109/cvpr.2009.5206840.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
9

Hao Hu, Bin Liu, Weiwei Guo, Zenghui Zhang et Wenxian Yu. « Preliminary exploration of introducing spatial correlation information into the probabilistic patch-based similarity measure ». Dans 2017 9th International Workshop on the Analysis of Multitemporal Remote Sensing Images (MultiTemp). IEEE, 2017. http://dx.doi.org/10.1109/multi-temp.2017.8035223.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
10

Sarkar, Kamal, et Sohini Roy Chowdhury. « Improving Salience-Based Multi-Document Summarization Performance using a Hybrid Sentence Similarity Measure ». Dans 4th International Conference on AI, Machine Learning and Applications. Academy & Industry Research Collaboration Center, 2024. http://dx.doi.org/10.5121/csit.2024.140202.

Texte intégral
Résumé :
The process of creating a single summary from a group of related text documents obtained from many sources is known as multi-document summarization. The efficacy of a multi-document summarization system relies heavily on the sentence similarity metric employed to eliminate redundant sentences from the summary, given that the documents may contain redundant information. The sentence similarity measure is also crucial for graph-based multi-document summarization, where the presence of an edge between two sentences is decided by how similar they are to one another. To enhance multi-document summarization performance, this study provides a new method for defining a hybrid sentence similarity measure, combining a lexical similarity measure and a BERT-based semantic similarity measure. Tests conducted on benchmark datasets demonstrate that the proposed hybrid sentence similarity metric is effective in enhancing multi-document summarization performance.
APA, Harvard, Vancouver, ISO, and other styles
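The entry above describes a hybrid sentence similarity combining a lexical score with a BERT-based semantic score. A minimal sketch of one such combination, assuming a simple weighted average of Jaccard word overlap and embedding cosine similarity (the paper's exact weighting and lexical measure are not given here; `jaccard`, `cosine`, and `hybrid_similarity` are illustrative names):

```python
# Hypothetical hybrid sentence similarity: weighted average of a lexical
# overlap score (Jaccard over word sets) and a semantic score (cosine
# similarity of sentence embeddings). The weighting scheme and embedding
# source are assumptions, not the paper's exact formulation.

def jaccard(s1: str, s2: str) -> float:
    """Lexical similarity: |intersection| / |union| of the word sets."""
    w1, w2 = set(s1.lower().split()), set(s2.lower().split())
    return len(w1 & w2) / len(w1 | w2) if (w1 | w2) else 0.0

def cosine(v1, v2) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = sum(a * a for a in v1) ** 0.5
    n2 = sum(b * b for b in v2) ** 0.5
    return dot / (n1 * n2) if n1 and n2 else 0.0

def hybrid_similarity(s1, s2, v1, v2, alpha=0.5):
    """Weighted combination: alpha * lexical + (1 - alpha) * semantic."""
    return alpha * jaccard(s1, s2) + (1 - alpha) * cosine(v1, v2)
```

In practice `v1` and `v2` would come from a BERT encoder (e.g., a sentence-embedding model); passing fixed vectors keeps the sketch self-contained.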

Organization reports on the topic « Similarity measure and multi-Criteria »

1

Duvvuri, Sarvani, and Srinivas S. Pulugurtha. Researching Relationships between Truck Travel Time Performance Measures and On-Network and Off-Network Characteristics. Mineta Transportation Institute, July 2021. http://dx.doi.org/10.31979/mti.2021.1946.

Full text
Abstract:
Trucks carry a significant amount of freight tonnage and are more susceptible to complex interactions with other vehicles in a traffic stream. While traffic congestion continues to be a significant 'highway' problem, delays in truck travel result in loss of revenue to the trucking companies. There is significant research on traffic congestion mitigation, but very few studies have focused on data exclusive to trucks. This research is aimed at a regional-level analysis of truck travel time data to identify roads for improving mobility and reducing congestion for truck traffic. The objectives of the research are to compute and evaluate truck travel time performance measures (by time of the day and day of the week) and to use selected truck travel time performance measures to examine their correlation with on-network and off-network characteristics. Truck travel time data for the year 2019 were obtained and processed at the link level for Mecklenburg County, Wake County, and Buncombe County, NC. Various truck travel time performance measures were computed by time of the day and day of the week. Pearson correlation coefficient analysis was performed to select the average travel time (ATT), planning time index (PTI), travel time index (TTI), and buffer time index (BTI) for further analysis. On-network characteristics such as the speed limit, reference speed, annual average daily traffic (AADT), and the number of through lanes were extracted for each link. Similarly, off-network characteristics such as land use and demographic data in the near vicinity of each selected link were captured using 0.25 miles and 0.50 miles as buffer widths. The relationships between the selected truck travel time performance measures and on-network and off-network characteristics were then analyzed using Pearson correlation coefficient analysis.
The results indicate that urban areas, high-volume roads, and principal arterial roads are positively correlated with the truck travel time performance measures. Further, the presence of agricultural, light commercial, heavy commercial, light industrial, single-family residential, multi-family residential, office, transportation, and medical land uses increases the truck travel time performance measures (decreases the operational performance). The methodological approach and findings can be used in identifying potential areas to serve as truck priority zones and for planning decentralized delivery locations.
APA, Harvard, Vancouver, ISO, and other styles
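The report above relies on Pearson correlation coefficient analysis between travel time performance measures and link characteristics. A self-contained sketch of that computation, with invented link-level values (only the formula reflects the method; the data are illustrative, not the study's):

```python
# Illustrative Pearson correlation between a truck travel time performance
# measure (e.g., travel time index, TTI) and an on-network characteristic
# (e.g., AADT). The data values below are hypothetical.

def pearson_r(x, y):
    """Sample Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical link-level data: AADT (vehicles/day) vs. travel time index.
aadt = [12000, 25000, 40000, 61000, 87000]
tti = [1.05, 1.12, 1.21, 1.34, 1.48]
print(round(pearson_r(aadt, tti), 3))  # strong positive correlation
```

A positive r here would mirror the report's finding that high-volume roads correlate with higher truck travel time measures.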
2

Drayton, Paul, Jeffrey Panek, Tom McGrath and James McCarthy. PR-312-12206-R01 FTIR Formaldehyde Measurement at Turbine NESHAP and Ambient Levels. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), July 2016. http://dx.doi.org/10.55274/r0011014.

Full text
Abstract:
Since formaldehyde is ubiquitous (e.g., naturally formed through atmospheric chemistry even if not directly emitted), there is also the potential that atmospheric levels and atmospheric chemistry are not adequately understood. That avenue of investigation may provide information that could be important in assessing formaldehyde health risk, source contribution, and ultimately regulatory criteria for gas-fired combustion sources. In 2002 and 2003, the pipeline industry conducted turbine formaldehyde testing using refined FTIR methods and a dedicated measurement system, which indicated exhaust formaldehyde below 100 ppb and near the method detection limit. Anecdotal data from that test program showed ambient levels similar to turbine exhaust in some cases. For example, during the industry test program, a serendipitous finding observed that ambient formaldehyde concentrations varied and were independent of turbine operation. Instead, naturally occurring emissions from an adjacent corn field appeared to spike the ambient concentration to levels higher than formaldehyde exhaust levels, depending on whether there was direct sunlight or shading from a cloud (i.e., due to "naturally occurring" formaldehyde from vegetation and/or other organics and ambient photochemistry that forms formaldehyde). Evidence of "high" ambient formaldehyde levels (relative to turbine exhaust) would be a powerful counterargument to restrictive formaldehyde regulations.
If ambient levels are similar to (or higher than) in-stack formaldehyde for turbines, then a NESHAP requiring catalytic control of turbine formaldehyde results in a significant burden without environmental benefit, while also negatively impacting turbine efficiency and adding the environmental impacts associated with catalyst construction, installation, operation, cleaning, and disposal. Similarly, if ambient formaldehyde is significantly higher (in at least some circumstances) than currently available ambient data suggest, there could be implications for perceived formaldehyde risk and the basis, need for, and stringency of formaldehyde reductions from turbines or other combustion sources. In a more far-reaching impact, ambient FTIR data could provide additional insights into atmospheric reactions that impact not only formaldehyde issues but also ozone (and NOx control issues), because of the importance of formaldehyde and hydrocarbon chemistry in ambient ozone formation. These determinations are challenged by the ability to accurately measure formaldehyde at levels less than 100 parts per billion (ppbv). Ambient measurements rely on "batch methods" subject to error (due to the inherent instability and reactivity of formaldehyde), and those methods do not provide real-time continuous results. Extractive Fourier Transform Infrared (FTIR) methods were developed for combustion exhaust formaldehyde measurement, but measuring the ultra-low levels from turbines, commensurate with the NESHAP standard of 90 ppb, is challenging. This project was intended to assess ambient formaldehyde levels as compared to the NESHAP standard and to acquire additional ambient measurement data using FTIR testing.
APA, Harvard, Vancouver, ISO, and other styles
3

McPhedran, R., K. Patel, B. Toombs, P. Menon, M. Patel, J. Disson, K. Porter, A. John and A. Rayner. Food allergen communication in businesses feasibility trial. Food Standards Agency, March 2021. http://dx.doi.org/10.46756/sci.fsa.tpf160.

Full text
Abstract:
Background: Clear allergen communication in food business operators (FBOs) has been shown to have a positive impact on customers' perceptions of businesses (Barnett et al., 2013). However, the precise size and nature of this effect is not known: there is a paucity of quantitative evidence in this area, particularly in the form of randomised controlled trials (RCTs). The Food Standards Agency (FSA), in collaboration with Kantar's Behavioural Practice, conducted a feasibility trial to investigate whether a randomised cluster trial – involving the proactive communication of allergen information at the point of sale in FBOs – is feasible in the United Kingdom (UK). Objectives: The trial sought to establish: ease of recruitment of businesses into trials; customer response rates for in-store outcome surveys; fidelity of intervention delivery by FBO staff; sensitivity of outcome survey measures to change; and appropriateness of the chosen analytical approach. Method: Following a recruitment phase – in which one of fourteen multinational FBOs was successfully recruited – the execution of the feasibility trial involved a quasi-randomised matched-pairs clustered experiment. Each of the FBO's ten participating branches underwent pair-wise matching, with similarity of branches judged according to four criteria: Food Hygiene Rating Scheme (FHRS) score, average weekly footfall, number of staff and customer satisfaction rating. The allocation ratio for this trial was 1:1: one branch in each pair was assigned to the treatment group by a representative from the FBO, while the other continued to operate in accordance with their standard operating procedure. As a business-based feasibility trial, customers at participating branches throughout the fieldwork period were automatically enrolled in the trial. The trial was single-blind: customers at treatment branches were not aware that they were receiving an intervention.
All customers who visited participating branches throughout the fieldwork period were asked to complete a short in-store survey on a tablet affixed in branches. This survey contained four outcome measures which operationalised customers’: perceptions of food safety in the FBO; trust in the FBO; self-reported confidence to ask for allergen information in future visits; and overall satisfaction with their visit. Results: Fieldwork was conducted from the 3 – 20 March 2020, with cessation occurring prematurely due to the closure of outlets following the proliferation of COVID-19. n=177 participants took part in the trial across the ten branches; however, response rates (which ranged between 0.1 - 0.8%) were likely also adversely affected by COVID-19. Intervention fidelity was an issue in this study: while compliance with delivery of the intervention was relatively high in treatment branches (78.9%), erroneous delivery in control branches was also common (46.2%). Survey data were analysed using random-intercept multilevel linear regression models (due to the nesting of customers within branches). Despite the trial’s modest sample size, there was some evidence to suggest that the intervention had a positive effect for those suffering from allergies/intolerances for the ‘trust’ (β = 1.288, p<0.01) and ‘satisfaction’ (β = 0.945, p<0.01) outcome variables. Due to singularity within the fitted linear models, hierarchical Bayes models were used to corroborate the size of these interactions. Conclusions: The results of this trial suggest that a fully powered clustered RCT would likely be feasible in the UK. In this case, the primary challenge in the execution of the trial was the recruitment of FBOs: despite high levels of initial interest from four chains, only one took part. However, it is likely that the proliferation of COVID-19 adversely impacted chain participation – two other FBOs withdrew during branch eligibility assessment and selection, citing COVID-19 as a barrier. 
COVID-19 also likely lowered the on-site survey response rate: a significant negative Pearson correlation was observed between daily survey completions and COVID-19 cases in the UK, highlighting a likely relationship between the two. Limitations: The trial was quasi-random: selection of branches, pair matching and allocation to treatment/control groups were not systematically conducted. These processes were undertaken by a representative from the FBO’s Safety and Quality Assurance team (with oversight from Kantar representatives on pair matching), as a result of the chain’s internal operational restrictions.
APA, Harvard, Vancouver, ISO, and other styles
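The trial above matched branches pair-wise on four criteria (FHRS score, weekly footfall, staff count, customer satisfaction). A minimal sketch of one way such matching could be automated, assuming min-max normalisation and greedy nearest-pair matching; the trial's pairs were in fact judged by an FBO representative, so `normalise` and `match_pairs` are purely illustrative:

```python
# Hypothetical pair-wise matching of branches on several criteria:
# min-max scale each criterion so they are comparable, then greedily
# pair the two closest remaining branches by Euclidean distance.
from itertools import combinations

def normalise(rows):
    """Min-max scale each column to [0, 1] so criteria are comparable."""
    cols = list(zip(*rows))
    spans = [(min(c), (max(c) - min(c)) or 1.0) for c in cols]
    return [[(v - lo) / span for v, (lo, span) in zip(r, spans)] for r in rows]

def match_pairs(features):
    """features: {branch: [criterion values]}. Returns matched pairs."""
    names = list(features)
    scaled = dict(zip(names, normalise([features[n] for n in names])))
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(scaled[a], scaled[b])) ** 0.5
    pairs, free = [], set(names)
    while len(free) > 1:
        a, b = min(combinations(sorted(free), 2), key=lambda p: dist(*p))
        pairs.append((a, b))
        free -= {a, b}
    return pairs
```

For example, with two "large" and two "small" branches, the greedy pass pairs like with like, which is the intent of matched-pairs allocation.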
4

Engel, Bernard, Yael Edan, James Simon, Hanoch Pasternak and Shimon Edelman. Neural Networks for Quality Sorting of Agricultural Produce. United States Department of Agriculture, July 1996. http://dx.doi.org/10.32747/1996.7613033.bard.

Full text
Abstract:
The objectives of this project were to develop procedures and models, based on neural networks, for quality sorting of agricultural produce. Two research teams, one in Purdue University and the other in Israel, coordinated their research efforts on different aspects of each objective utilizing both melons and tomatoes as case studies. At Purdue: An expert system was developed to measure variances in human grading. Data were acquired from eight sensors: vision, two firmness sensors (destructive and nondestructive), chlorophyll from fluorescence, color sensor, electronic sniffer for odor detection, refractometer and a scale (mass). Data were analyzed and provided input for five classification models. Chlorophyll from fluorescence was found to give the best estimation for ripeness stage while the combination of machine vision and firmness from impact performed best for quality sorting. A new algorithm was developed to estimate and minimize training size for supervised classification. A new criterion was established to choose a training set such that a recurrent auto-associative memory neural network is stabilized. Moreover, this method provides for rapid and accurate updating of the classifier over growing seasons, production environments and cultivars. Different classification approaches (parametric and non-parametric) for grading were examined. Statistical methods were found to be as accurate as neural networks in grading. Classification models by voting did not enhance the classification significantly. A hybrid model that incorporated heuristic rules and either a numerical classifier or neural network was found to be superior in classification accuracy with half the required processing of solely the numerical classifier or neural network. In Israel: A multi-sensing approach utilizing non-destructive sensors was developed. Shape, color, stem identification, surface defects and bruises were measured using a color image processing system.
Flavor parameters (sugar, acidity, volatiles) and ripeness were measured using a near-infrared system and an electronic sniffer. Mechanical properties were measured using three sensors: drop impact, resonance frequency and cyclic deformation. Classification algorithms for quality sorting of fruit based on multi-sensory data were developed and implemented. The algorithms included a dynamic artificial neural network, a back propagation neural network and multiple linear regression. Results indicated that classification based on multiple sensors may be applied in real-time sorting and can improve overall classification. Advanced image processing algorithms were developed for shape determination, bruise and stem identification and general color and color homogeneity. An unsupervised method was developed to extract necessary vision features. The primary advantage of the algorithms developed is their ability to learn to determine the visual quality of almost any fruit or vegetable with no need for specific modification and no a-priori knowledge. Moreover, since there is no assumption as to the type of blemish to be characterized, the algorithm is capable of distinguishing between stems and bruises. This enables sorting of fruit without knowing the fruits' orientation. A new algorithm for on-line clustering of data was developed. The algorithm's adaptability is designed to overcome some of the difficulties encountered when incrementally clustering sparse data and preserves information even with memory constraints. Large quantities of data (many images) of high dimensionality (due to multiple sensors) and new information arriving incrementally (a function of the temporal dynamics of any natural process) can now be processed. Furthermore, since the learning is done on-line, it can be implemented in real-time. The methodology developed was tested to determine external quality of tomatoes based on visual information.
An improved model for color sorting which is stable and does not require recalibration for each season was developed for color determination. Excellent classification results were obtained for both color and firmness classification. Results indicated that maturity classification can be obtained using a drop-impact and a vision sensor in order to predict the storability and marketing of harvested fruits. In conclusion: We have been able to define quantitatively the critical parameters in the quality sorting and grading of both fresh market cantaloupes and tomatoes. We have been able to accomplish this using nondestructive measurements and in a manner consistent with expert human grading and in accordance with market acceptance. This research constructed and used large databases of both commodities, for comparative evaluation and optimization of expert system, statistical and/or neural network models. The models developed in this research were successfully tested, and should be applicable to a wide range of other fruits and vegetables. These findings are valuable for the development of on-line grading and sorting of agricultural produce through the incorporation of multiple measurement inputs that rapidly define quality in an automated manner, and in a manner consistent with the human graders and inspectors.
APA, Harvard, Vancouver, ISO, and other styles