To see the other types of publications on this topic, follow the link: Concepts weighting.

Journal articles on the topic "Concepts weighting"

Create a reference in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the top 50 journal articles for your research on the topic "Concepts weighting".

Next to every source in the list of references there is an "Add to bibliography" button. Use it, and a bibliographic reference to the chosen work will be generated automatically in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its online abstract whenever the relevant parameters are provided in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Ain, Qurat Ul, Mohamed Amine Chatti, Komlan Gluck Charles Bakar, Shoeb Joarder, and Rawaa Alatrash. "Automatic Construction of Educational Knowledge Graphs: A Word Embedding-Based Approach". Information 14, no. 10 (September 27, 2023): 526. http://dx.doi.org/10.3390/info14100526.

Full text of the source
Abstract:
Knowledge graphs (KGs) are widely used in the education domain to offer learners a semantic representation of domain concepts from educational content and their relations, termed as educational knowledge graphs (EduKGs). Previous studies on EduKGs have incorporated concept extraction and weighting modules. However, these studies face limitations in terms of accuracy and performance. To address these challenges, this work aims to improve the concept extraction and weighting mechanisms by leveraging state-of-the-art word and sentence embedding techniques. Concretely, we enhance the SIFRank keyphrase extraction method by using SqueezeBERT and we propose a concept-weighting strategy based on SBERT. Furthermore, we conduct extensive experiments on different datasets, demonstrating significant improvements over several state-of-the-art keyphrase extraction and concept-weighting techniques.
APA, Harvard, Vancouver, ISO, and other citation styles
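
To make the concept-weighting idea in the entry above concrete, here is a minimal sketch (not the authors' implementation): each extracted concept is weighted by the cosine similarity between its SBERT embedding and the embedding of the source document. The model checkpoint, the helper name `weight_concepts`, and the sample texts are illustrative assumptions.

```python
# Hedged sketch: weight extracted concepts by their SBERT similarity to the document.
# Not the paper's code; the checkpoint and sample texts are illustrative only.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed publicly available checkpoint

def weight_concepts(document: str, concepts: list[str]) -> dict[str, float]:
    """Assign each concept a weight equal to its cosine similarity to the document."""
    doc_emb = model.encode(document, convert_to_tensor=True)
    concept_embs = model.encode(concepts, convert_to_tensor=True)
    sims = util.cos_sim(concept_embs, doc_emb).squeeze(1)  # one score per concept
    return {c: float(s) for c, s in zip(concepts, sims)}

weights = weight_concepts(
    "Knowledge graphs offer learners a semantic view of domain concepts.",
    ["knowledge graph", "semantic representation", "photosynthesis"],
)
print(sorted(weights.items(), key=lambda kv: kv[1], reverse=True))
```
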
2

S, Florence Vijila, and Nirmala K. "Quantification of Portrayal Concepts using tf-idf Weighting". International Journal of Information Sciences and Techniques 3, no. 5 (September 30, 2013): 1–6. http://dx.doi.org/10.5121/ijist.2013.3501.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
3

Klinkenberg, Ralf. "Learning drifting concepts: Example selection vs. example weighting". Intelligent Data Analysis 8, no. 3 (August 13, 2004): 281–300. http://dx.doi.org/10.3233/ida-2004-8305.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
4

Buatoom, Uraiwan, Waree Kongprawechnon, and Thanaruk Theeramunkong. "Document Clustering Using K-Means with Term Weighting as Similarity-Based Constraints". Symmetry 12, no. 6 (June 6, 2020): 967. http://dx.doi.org/10.3390/sym12060967.

Full text of the source
Abstract:
In similarity-based constrained clustering, various approaches have been proposed for defining the similarity between documents so as to guide the grouping of similar documents together. This paper presents an approach that uses term-distribution statistics, extracted from a small number of cue instances with known classes, as term weightings acting as an indirect distance constraint. For distribution-based term weighting, three types of term-oriented standard deviations are exploited: the distribution of a term in the collection (SD), the average distribution of a term in a class (ACSD), and the average distribution of a term among classes (CSD). These term weightings are explored with symmetry in mind, varying their magnitude toward positive and negative values to promote or demote the effect of the three standard deviations. Following the symmetry concept, both seeded and unseeded centroid initializations of k-means are investigated and compared to centroid-based classification. The experiments use five English text collections (Amazon, DI, WebKB1, WebKB2, and 20Newsgroup) and one Thai text collection (TR, a collection of Thai reform-related opinions). Compared to the conventional TFIDF, the distribution-based term weighting improves the centroid-based method, seeded k-means, and k-means with error reduction rates of 22.45%, 31.13%, and 58.96%, respectively.
APA, Harvard, Vancouver, ISO, and other citation styles
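
The conventional TF-IDF weighting that the entry above uses as its baseline can be sketched in a few lines; the snippet below is a generic illustration with made-up documents (it is not the authors' distribution-based SD/ACSD/CSD weighting) and assumes scikit-learn is installed.

```python
# Minimal TF-IDF term-weighting baseline (the conventional scheme the paper compares
# against); illustrative only, not the authors' distribution-based weights.
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "term weighting guides document clustering",
    "k-means groups similar documents together",
    "distribution based term weighting improves clustering",
]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs)          # rows: documents, columns: terms
terms = vectorizer.get_feature_names_out()

# Show the weighted terms of the first document, highest weight first.
row = X[0].toarray().ravel()
for idx in row.argsort()[::-1]:
    if row[idx] > 0:
        print(f"{terms[idx]:>12s}  {row[idx]:.3f}")
```
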
5

Gan, Keng Hoon, and Keat Keong Phang. "Finding target and constraint concepts for XML query construction". International Journal of Web Information Systems 11, no. 4 (November 16, 2015): 468–90. http://dx.doi.org/10.1108/ijwis-04-2015-0017.

Full text of the source
Abstract:
Purpose – This paper aims to focus on the automatic selection of two important structural concepts required in an XML query, namely, target and constraint concepts, when given a keyword query. Due to the diversity of concepts used in XML resources, it is not easy to select a correct concept when constructing an XML query. Design/methodology/approach – In this paper, a Context-based Term Weighting model is proposed that performs term weighting based on parts of documents. Each part represents a specific context, thus offering better capturing of the concept and term relationship. For query-time analysis, a Query Context Graph and two algorithms, namely, Select Target and Constraint (QC) and Select Target and Constraint (QCAS), are proposed to find the concepts for constructing the XML query. Findings – Evaluations were performed using structured documents for the conference domain. For constraint concept selection, the approach CTX+TW achieved better results than its baseline, NCTX, when the search term has ambiguous meanings, by using context-based scoring for the concepts. CTX+TW also shows its stability on various scoring models like BM25, TFIEF and LM. For target concept selection, CTX+TW outperforms the standard baseline, SLCA, whereas it also records higher coverage than FCA when structural keywords are used in the query. Originality/value – The idea behind this approach is to capture the concepts required for term interpretation based on parts of the collection rather than the entire collection. This allows better selection of concepts, especially when a structured XML document consists of many different types of information.
APA, Harvard, Vancouver, ISO, and other citation styles
6

Schellenberg, Florian, William R. Taylor, Ilse Jonkers, and Silvio Lorenzetti. "Robustness of kinematic weighting and scaling concepts for musculoskeletal simulation". Computer Methods in Biomechanics and Biomedical Engineering 20, no. 7 (March 1, 2017): 720–29. http://dx.doi.org/10.1080/10255842.2017.1295305.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
7

Kellerer, A. M. "Weighting factors for radiation quality: how to unite the two current concepts". Radiation Protection Dosimetry 110, no. 1-4 (August 1, 2004): 781–87. http://dx.doi.org/10.1093/rpd/nch164.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
8

SAIF, ABDULGABBAR, UMMI ZAKIAH ZAINODIN, NAZLIA OMAR, and ABDULLAH SAEED GHAREB. "Weighting-based semantic similarity measure based on topological parameters in semantic taxonomy". Natural Language Engineering 24, no. 6 (June 4, 2018): 861–86. http://dx.doi.org/10.1017/s1351324918000190.

Full text of the source
Abstract:
Semantic measures are used in handling different issues in several research areas, such as artificial intelligence, natural language processing, knowledge engineering, bioinformatics, and information retrieval. Hierarchical feature-based semantic measures have been proposed to estimate the semantic similarity between two concepts/words depending on the features extracted from a semantic taxonomy (hierarchy) of a given lexical source. The central issue in these measures is the constant weighting assumption that all elements in the semantic representation of the concept possess the same relevance. In this paper, a new weighting-based semantic similarity measure is proposed to address the issues in hierarchical feature-based measures. Four mechanisms are introduced to weigh the degree of relevance of features in the semantic representation of a concept by using topological parameters (edge, depth, descendants, and density) in a semantic taxonomy. With the semantic taxonomy of WordNet, the proposed semantic measure is evaluated for word semantic similarity in four gold-standard datasets. Experimental results show that the proposed measure outperforms hierarchical feature-based semantic measures in all the datasets. Comparison results also imply that the proposed measure is more effective than information-content measures in measuring semantic similarity.
APA, Harvard, Vancouver, ISO, and other citation styles
9

Yıldırım, Yıldız, and Melek Gülşah Şahin. "How Do Different Weighting Methods Affect the Overall Effect Size in Meta-Analysis?: An Example of Science Attitude in Türkiye Sample". International Journal of Psychology and Educational Studies 10, no. 3 (August 29, 2023): 744–57. http://dx.doi.org/10.52380/ijpes.2023.10.3.1049.

Full text of the source
Abstract:
There is increasing interest in meta-analysis in different fields due to the need to combine the results of primary research. One of the crucial concepts in combining results is weighting. This study examines how Hunter and Schmidt's method, weighting by sample size; Hedges and Vevea's method, weighting by inverse variance; and Osburn and Callender's method, unweighting, affect the overall effect size in meta-analysis. In this context, for meta-analysis, the search was done for studies examining the effects of alternative measurement and assessment techniques and methods in science education on science attitudes. The databases of the HEI National Thesis Center, Web of Science, ERIC, EBSCO, Google Scholar, and DergiPark were searched between 2010 and 2021. Eleven studies (with 14 effect sizes) that met the criteria were included in the meta-analysis. In line with the study's findings, it was observed that the overall effect sizes were significant and did not change much in the weighting methods. Besides, it was found that the method with the lowest standard error was unweighted. The weighting methods of Hunter and Schmidt and Hedges and Vevea gave similar results in terms of standard error. When the correlation coefficient between the weighting methods was examined, it was seen that all correlation coefficients were greater than 0.90.
APA, Harvard, Vancouver, ISO, and other citation styles
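
The three weighting schemes compared in the entry above differ only in the choice of per-study weights in a weighted mean of effect sizes; the short sketch below uses made-up effect sizes, sample sizes, and variances purely to show the mechanics.

```python
# Overall effect size as a weighted mean of per-study effects d_i:
#   unweighted:                               w_i = 1
#   sample size (Hunter & Schmidt style):     w_i = n_i
#   inverse variance (Hedges & Vevea style):  w_i = 1 / v_i
# Effect sizes, sample sizes, and variances below are made-up illustration data.
import numpy as np

d = np.array([0.40, 0.25, 0.60, 0.10])        # per-study effect sizes (hypothetical)
n = np.array([30, 120, 45, 200])              # per-study sample sizes (hypothetical)
v = np.array([0.050, 0.015, 0.040, 0.010])    # per-study variances (hypothetical)

def weighted_mean(effects, weights):
    return float(np.sum(weights * effects) / np.sum(weights))

print("unweighted:      ", weighted_mean(d, np.ones_like(d)))
print("sample-size:     ", weighted_mean(d, n.astype(float)))
print("inverse-variance:", weighted_mean(d, 1.0 / v))
```
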
10

Azizah, Nuril Lutvi, Vevy Liansari, and Alfan Indra Kusuma. "SPACED REPETITION CONCEPT DESIGN WITH FUZZY MULTI CRITERIA ANALYSIS AS A MEDIA TO IMPROVE NUMERACY LEARNING FOR ELEMENTARY SCHOOL STUDENTS". BAREKENG: Jurnal Ilmu Matematika dan Terapan 17, no. 2 (June 11, 2023): 1049–56. http://dx.doi.org/10.30598/barekengvol17iss2pp1049-1056.

Full text of the source
Abstract:
After the Covid-19 pandemic, numeracy learning activities declined in quality for a number of elementary school students. This research is limited to numeracy learning for lower-grade students (grades 1, 2, and 3), because the basic concepts of numeracy begin in the lower grades. Several factors make it difficult for students to understand numeracy; in particular, students often forget concepts that have been taught before. The conventional way of memorizing effectively is through repeated recitation. To improve the quality of education, learning concepts that were previously delivered conventionally must be developed in a modern way using an application. The purpose of this research is to improve the numeracy learning of lower-grade primary school students after the Covid-19 pandemic in a more engaging and modern way, using the concept of spaced repetition based on Android flashcards. The assessments of 135 students were analysed against criteria such as tangibles, reliability, empathy, responsiveness, and assurance. A decision support system using fuzzy multi-criteria decision-making (MCDM) methods was also used to determine the weighting of the criteria and the effectiveness of learning with the spaced repetition concept and its application. The fuzzy multi-criteria weighting yielded defuzzified scores of 65.18 for tangibles, 56.54 for reliability, 46.17 for responsiveness, 49.13 for assurance, and 29.62 for empathy. Tangibles scored highest, which indicates that students prefer modern learning with an attractive Android application. The correlation test yielded a result of 0.76, a high value for decision making that can be accepted.
APA, Harvard, Vancouver, ISO, and other citation styles
11

Turrisi, Giuseppe F., John T. Jennings, and Lars Vilhelmsen. "Phylogeny and generic concepts of the parasitoid wasp family Aulacidae (Hymenoptera: Evanioidea)". Invertebrate Systematics 23, no. 1 (2009): 27. http://dx.doi.org/10.1071/is08031.

Full text of the source
Abstract:
The results of the first phylogenetic investigation of members of the Aulacidae of the world are presented. The main objective was to test the monophyly of the currently recognised genera. In total, 79 morphological characters were scored for a substantial sample of the extant aulacid fauna, including 72 species, as well as 12 outgroup taxa belonging to Evaniidae, Gasteruptiidae, Megalyridae, Trigonalidae, Braconidae and Stephanidae. All zoogeographic regions were represented. The dataset was analysed under different conditions (ordered, unordered, equal and implied weighting). The results under different weighting conditions are not fully congruent and many relationships remain unresolved. However, the analyses demonstrate that the current generic classification of the Aulacidae is not a natural one. There is support for a very large, monophyletic clade which includes all Pristaulacus Kieffer spp. + Panaulix Benoit spp. This suggests a wider generic concept for Pristaulacus, which is redefined and rediagnosed here. As a consequence, Panaulix becomes a junior synonym of Pristaulacus (syn. nov.), and the two described species of Panaulix are transferred to Pristaulacus: Pristaulacus rex (Benoit, 1984), comb. nov., and Pristaulacus irenae (Madl, 1990), comb. nov. The genus Aulacus Jurine was consistently paraphyletic and is not valid as currently defined. Furthermore, we failed to retrieve a consistent topology among the different clades of Aulacus. A satisfactory reclassification of Aulacus, however, requires a much more comprehensive taxon sample and/or additional character data.
APA, Harvard, Vancouver, ISO, and other citation styles
12

WANG, LEI, KAP LUK CHAN, and XUEJIAN XIONG. "A SUB-VECTOR WEIGHTING SCHEME FOR IMAGE RETRIEVAL WITH RELEVANCE FEEDBACK". International Journal of Image and Graphics 02, no. 02 (April 2002): 199–213. http://dx.doi.org/10.1142/s0219467802000597.

Full text of the source
Abstract:
In image retrieval with relevance feedback, feature components are weighted to reflect the high-level concepts, and a user's subjective perception, embodied in the images labelled by the user in the feedback. However, the number of labelled images is often small and the covariance matrix needed for weighting will be singular. For this reason, the commonly used methods discard the mutual correlation among the feature components completely and use a diagonal covariance matrix. In this paper, a sub-vector weighting scheme is proposed. This scheme partitions a multi-dimensional visual feature vector into multiple low-dimensional sub-vectors. The singularity of the covariance matrix for each sub-vector can be avoided due to the lower dimensionality of the sub-vectors. Thus, the mutual correlation in each sub-vector can be retained for weighting and an optimally weighted similarity metric can be applied on each sub-vector. The similarity scores obtained from different sub-vectors are combined, as the final score, to rank the database images. Experimental results demonstrated that the proposed weighting scheme can significantly improve the efficacy of image retrieval with relevance feedback.
APA, Harvard, Vancouver, ISO, and other citation styles
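
The sub-vector idea in the entry above can be pictured with a rough sketch: the feature vector is split into small blocks so that a full covariance matrix, and thus a weighted (Mahalanobis-style) metric, can be estimated per block from only a few feedback examples. The block size, data, and function name below are illustrative assumptions, not the paper's algorithm.

```python
# Hedged sketch of sub-vector weighting: estimate a small covariance per block of the
# feature vector from the user-labelled relevant images, score each block with a
# Mahalanobis-like distance, and sum the per-block scores. Illustration only.
import numpy as np

def subvector_distance(query, candidate, relevant, block: int = 4) -> float:
    """Sum of Mahalanobis-like distances computed independently per sub-vector."""
    dims = query.shape[0]
    total = 0.0
    for start in range(0, dims, block):
        sl = slice(start, start + block)
        cov = np.cov(relevant[:, sl], rowvar=False)   # small, well-conditioned block
        cov += 1e-6 * np.eye(cov.shape[0])            # regularise just in case
        diff = query[sl] - candidate[sl]
        total += float(diff @ np.linalg.inv(cov) @ diff)
    return total

rng = np.random.default_rng(0)
relevant = rng.normal(size=(6, 16))   # 6 user-labelled relevant images, 16-dim features
query = relevant.mean(axis=0)
candidate = rng.normal(size=16)
print(subvector_distance(query, candidate, relevant))
```
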
13

Satiman, Satiman. "Pattern of Weighting in Cases Outside the Criminal Code". Ratio Legis Journal 1, no. 1 (June 15, 2022): 50. http://dx.doi.org/10.30659/rlj.1.1.50-58.

Full text of the source
Abstract:
The purpose of this study is to identify and analyze the pattern of criminal threats, especially the pattern of criminal aggravation in the law, as a consideration for experts in the future formulation of laws. The method used is a normative juridical approach, drawing on the legal issues, theories, concepts, and regulations related to the problem. The result of this research is the pattern of weighting due to additional elements, which can take the form of conduct (planning) or of events arising from certain conduct or its consequences (severe injury or death). In this context, the general threat of punishment is not merely a "sanction" that a judge may impose as stipulated in the law, but also a moral justification for punishment, especially regarding what kind of punishment is appropriate and fair.
APA, Harvard, Vancouver, ISO, and other citation styles
14

Formica, Anna, Elaheh Pourabbas, and Francesco Taglino. "Semantic Search Enhanced with Rating Scores". Future Internet 12, no. 4 (April 15, 2020): 67. http://dx.doi.org/10.3390/fi12040067.

Full text of the source
Abstract:
This paper presents SemSime, a method based on semantic similarity for searching over a set of digital resources previously annotated by means of concepts from a weighted reference ontology. SemSime is an enhancement of SemSim and, with respect to the latter, it uses a frequency approach for weighting the ontology, and refines both the user request and the digital resources with the addition of rating scores. Such scores are High, Medium, and Low, and in the user request indicate the preferences assigned by the user to each of the concepts representing the searching criteria, whereas in the annotation of the digital resources they represent the levels of quality associated with each concept in describing the resources. SemSime has been evaluated, and the results of the experiment show that it performs better than SemSim and an evolution of it, referred to as SemSimRV.
APA, Harvard, Vancouver, ISO, and other citation styles
15

Rais, Mohammed, Mohammed Bekkali, and Abdelmonaime Lachkar. "An Efficient Method for Biomedical Word Sense Disambiguation Based on Web-Kernel Similarity". International Journal of Healthcare Information Systems and Informatics 16, no. 4 (October 2021): 1–14. http://dx.doi.org/10.4018/ijhisi.20211001.oa9.

Full text of the source
Abstract:
Searching for the best sense for a polysemous word remains one of the greatest challenges in the representation of biomedical text. To this end, Word Sense Disambiguation (WSD) algorithms mostly rely on an External Source of Knowledge, like a Thesaurus or Ontology, for automatically selecting the proper concept of an ambiguous term in a given Window of Context using semantic similarity and relatedness measures. In this paper, we propose a Web-based Kernel function for measuring the semantic relatedness between concepts to disambiguate an expression versus multiple possible concepts. This measure uses the large volume of documents returned by PubMed Search engine to determine the greater context for a biomedical short text through a new term weighting scheme based on Rough Set Theory (RST). To illustrate the efficiency of our proposed method, we evaluate a WSD algorithm based on this measure on a biomedical dataset (MSH-WSD) that contains 203 ambiguous terms and acronyms. The obtained results demonstrate promising improvements.
APA, Harvard, Vancouver, ISO, and other citation styles
16

Petrovay, K. "Area-Weighting of Sunspot Group Positions and Proper Motion Artifacts". International Astronomical Union Colloquium 141 (1993): 123–26. http://dx.doi.org/10.1017/s025292110002892x.

Full text of the source
Abstract:
Two simple examples are presented to show that concepts about the physical nature of sunspot groups may significantly influence the statistical data analysis process. In particular, the second example shows that the well-known difference in the decay rates of preceding (p-) and following (f-) polarity parts of sunspot groups may lead to a fake proper motion effect when area-weighted group positions are used. This effect may be responsible for some recent contradictory findings concerning the motions of sunspot groups. It is therefore argued that while area-weighting is adequate when calculating the mean positions of p- and f-parts of a sunspot group separately, defining the position of the group as a whole by the unweighted average of the mean positions of the p- and f-parts is more satisfactory from the theoretical point of view (whenever it is possible to distinguish between spots of different polarities). Similarly, it is best not to “correct” sunspot proper motions for internal differential rotation within groups.
APA, Harvard, Vancouver, ISO, and other citation styles
17

Garnsey, Margaret R. "Automatic Classification of Financial Accounting Concepts". Journal of Emerging Technologies in Accounting 3, no. 1 (January 1, 2006): 21–39. http://dx.doi.org/10.2308/jeta.2006.3.1.21.

Full text of the source
Abstract:
Information and standards overload are part of the current business environment. In accounting, this is exacerbated due to the variety of users and the evolving nature of accounting language. This article describes a research project that determines the feasibility of using statistical methods to automatically group related accounting concepts together. Starting with the frequencies of words in documents and modifying them for local and global weighting, Latent Semantic Indexing (LSI) and agglomerative clustering were used to derive clusters of related accounting concepts. Resultant clusters were compared to terms generated randomly and terms identified by individuals to determine if related terms are identified. A recognition test was used to determine if providing individuals with lists of terms generated automatically allowed them to identify additional relevant terms. Results found that both clusters obtained from the weighted term-document matrix and clusters from a LSI matrix based on 50 dimensions contained significant numbers of related terms. There was no statistical difference in the number of related terms found by the methods. However, the LSI clusters contained terms that were of a lower frequency in the corpus. This finding may have significance in using cluster terms to assist in retrieval. When given a specific term and asked for related terms, providing individuals with a list of potential terms significantly increased the number of related terms they were able to identify when compared to their free-recall.
APA, Harvard, Vancouver, ISO, and other citation styles
18

Ramadan, Rino. "The Implementation Concept Of 'Learning Style Inventory' David Kolb Based PHP On STMIK Nusa Mandiri". SinkrOn 3, no. 2 (March 18, 2019): 193. http://dx.doi.org/10.33395/sinkron.v3i2.10037.

Full text of the source
Abstract:
The "Learning Style Inventory" concept offered by David Kolb is an assessment for detecting a person's learning style. The experiential learning process has five cycles, which can be used as a reference for the assessment. David Kolb created 12 questions that can be used as a reference for this assessment; the questions broadly follow the cycle of learning from experience. After collecting the answers to the 12 questions, a calculation is performed based on a formula created by David Kolb. The formula consists of four scores: the first is CE (Concrete Experience), the second is AE (Active Experimentation), the third is RO (Reflective Observation), and the final score is AC (Abstract Conceptualization). The assessment process adds up the weighting of each question, divided according to each option, and then sums these weights over all 12 questions. Having obtained the totals, the learning style is detected with reference to David Kolb's learning style analysis. The concept offered by David Kolb has been implemented in many different versions; here the author implements it in the PHP programming language, together with other supporting languages. By implementing these concepts in PHP, respondents can carry out the assessment process anywhere and at any time.
APA, Harvard, Vancouver, ISO, and other citation styles
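
The scoring procedure described in the entry above can be sketched generically; the snippet below renders a Kolb-style scoring in Python rather than the article's PHP, and the answer encoding and the style-assignment rule are assumptions based on the common form of the inventory.

```python
# Hedged sketch of Kolb-style Learning Style Inventory scoring (illustration only,
# not the article's PHP implementation). Assumption: each of the 12 questions has
# 4 options ranked 1-4, each feeding one of the scores CE, RO, AC, AE; the style
# then follows from the AC-CE and AE-RO differences.

MODES = ("CE", "RO", "AC", "AE")

def score_inventory(answers: list[dict[str, int]]) -> dict[str, int]:
    """Sum the rank given to each mode over all questions."""
    totals = {mode: 0 for mode in MODES}
    for answer in answers:
        for mode in MODES:
            totals[mode] += answer[mode]
    return totals

def learning_style(totals: dict[str, int]) -> str:
    abstract_concrete = totals["AC"] - totals["CE"]
    active_reflective = totals["AE"] - totals["RO"]
    if abstract_concrete >= 0:
        return "Converging" if active_reflective >= 0 else "Assimilating"
    return "Accommodating" if active_reflective >= 0 else "Diverging"

# Example: 12 identical answers, just to show the mechanics.
answers = [{"CE": 1, "RO": 2, "AC": 4, "AE": 3}] * 12
totals = score_inventory(answers)
print(totals, "->", learning_style(totals))
```
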
19

Riffelli, Stefano. "Global Comfort Indices in Indoor Environments: A Survey". Sustainability 13, no. 22 (November 19, 2021): 12784. http://dx.doi.org/10.3390/su132212784.

Full text of the source
Abstract:
The term “comfort” has a number of nuances and meanings according to the specific context. This study was aimed at providing a review of the influence (or “weight”) of the different factors that contribute to global comfort, commonly known as indoor environmental quality (IEQ). A dedicated section includes the methodologies and strategies for finding the most relevant studies on this topic. Resulting in 85 studies, this review outlines 27 studies containing 26 different weightings and 9 global comfort indices (GCIs) with a formula. After an overview of the main concepts, basic definitions, indices, methods and possible strategies for each type of comfort, the studies on the IEQ categories weights to reach a global comfort index are reviewed. A particular interest was paid to research with a focus on green buildings and smart homes. The core section includes global indoor environmental quality indices, besides a specific emphasis on indices found in recent literature to understand the best aspects that they all share. For each of these overall indices, some specific details are shown, such as the comfort categories, the general formula, and the methods employed. The last section reports IEQ elements percentage weighting summary, common aspects of GCIs, requisites for an indoor global comfort index (IGCI), and models adopted in comfort category weighting. Furthermore, current trends are described in the concluding remarks pointing to a better IGCI by considering additional aspects and eventually adopting artificial intelligence algorithms. This leads to the optimal control of any actuator, maximising energy savings.
APA, Harvard, Vancouver, ISO, and other citation styles
20

Douglass, Steven, and Nathan Gibson. "K-MEANS CLUSTERING OF NEUTRON SPECTRA FOR CROSS SECTION COLLAPSE". EPJ Web of Conferences 247 (2021): 03010. http://dx.doi.org/10.1051/epjconf/202124703010.

Full text of the source
Abstract:
The process of generating cross sections for whole-core analysis typically involves collapsing cross sections against an approximate spectrum generated by solving problems with reduced scope (e.g., 2D slices of a fuel assembly). Such spectra vary with the location of a material region and with other state parameters (e.g., burnup, temperature, soluble boron concentration), resulting in a burdensome and potentially time consuming process to store and load spectra. Commonly, this is resolved by manually determining material regions for which the cross sections can be collapsed with a single weighting flux, requiring a combination of domain knowledge, engineering judgment, and trial and error. Exploring new reactor concepts and solving increasingly complicated problems with deterministic transport methods will therefore benefit greatly from an automated approach to grouping spectra independent of problem geometry or reactor type. This paper leverages a data analytics technique known as k-means clustering to group regions with similar weighting spectra into individual clusters, within each of which an average weighting flux is applied. Despite the clustering algorithm being agnostic to the physics of the problem, the approach results in a nearly 98% decrease in the number of spectrum regions with minimal impact on the accuracy.
APA, Harvard, Vancouver, ISO, and other citation styles
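
The clustering step described in the entry above reduces to running k-means over the per-region weighting spectra and collapsing each cluster to its average spectrum; the toy sketch below, with random data and an arbitrary cluster count, only illustrates that mechanic and is not the authors' code.

```python
# Hedged sketch: group regions with similar weighting spectra via k-means and use the
# cluster-average spectrum as the single collapse weighting for each member region.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Toy data: 200 regions, each with a 50-group weighting spectrum (hypothetical).
spectra = np.abs(rng.normal(size=(200, 50)))
spectra /= spectra.sum(axis=1, keepdims=True)      # normalise each spectrum

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(spectra)

# One average weighting spectrum per cluster, applied to all member regions.
cluster_spectra = np.vstack([
    spectra[kmeans.labels_ == k].mean(axis=0) for k in range(kmeans.n_clusters)
])
print(cluster_spectra.shape)   # (5, 50): 5 collapse spectra instead of 200
```
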
21

Mansor, Muhammad Naufal, and Mohd Nazri Rejab. "Innovative Concepts for Newborn Pain Based Systems with Hu Moment and Similar Classifier". Applied Mechanics and Materials 475-476 (December 2013): 1098–103. http://dx.doi.org/10.4028/www.scientific.net/amm.475-476.1098.

Full text of the source
Abstract:
Image analysis of infant pain has been proven to be an excellent tool in the area of automatic detection of pathological status of an infant. This paper investigates the application of parameter weighting for invariant moments to provide the robust representation of infant pain images. Two classes of infant images were considered such as normal images, and babies in pain. A Similar Classifier is suggested to classify the infant images into normal and pathological images. Similar Classifier is trained with different spread factor or smoothing parameter to obtain better classification accuracy. The experimental results demonstrate that the suggested features and classification algorithms give very promising classification accuracy of above 89.54% and it expounds that the suggested method can be used to help medical professionals for diagnosing pathological status of an infant from face images.
APA, Harvard, Vancouver, ISO, and other citation styles
22

Most, Thomas, and Christian Bucher. "New concepts for moving least squares: An interpolating non-singular weighting function and weighted nodal least squares". Engineering Analysis with Boundary Elements 32, no. 6 (June 2008): 461–70. http://dx.doi.org/10.1016/j.enganabound.2007.10.013.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
23

Pérez de Frutos, Javier, André Pedersen, Egidijus Pelanis, David Bouget, Shanmugapriya Survarachakan, Thomas Langø, Ole-Jakob Elle, and Frank Lindseth. "Learning deep abdominal CT registration through adaptive loss weighting and synthetic data generation". PLOS ONE 18, no. 2 (February 24, 2023): e0282110. http://dx.doi.org/10.1371/journal.pone.0282110.

Full text of the source
Abstract:
Purpose: This study aims to explore training strategies to improve convolutional neural network-based image-to-image deformable registration for abdominal imaging. Methods: Different training strategies, loss functions, and transfer learning schemes were considered. Furthermore, an augmentation layer which generates artificial training image pairs on-the-fly was proposed, in addition to a loss layer that enables dynamic loss weighting. Results: Guiding registration using segmentations in the training step proved beneficial for deep-learning-based image registration. Finetuning the pretrained model from the brain MRI dataset to the abdominal CT dataset further improved performance on the latter application, removing the need for a large dataset to yield satisfactory performance. Dynamic loss weighting also marginally improved performance, all without impacting inference runtime. Conclusion: Using simple concepts, we improved the performance of a commonly used deep image registration architecture, VoxelMorph. In future work, our framework, DDMR, should be validated on different datasets to further assess its value.
APA, Harvard, Vancouver, ISO, and other citation styles
24

Yu, Chenglin, and Hailong Pei. "Dynamic Weighting Translation Transfer Learning for Imbalanced Medical Image Classification". Entropy 26, no. 5 (May 1, 2024): 400. http://dx.doi.org/10.3390/e26050400.

Full text of the source
Abstract:
Medical image diagnosis using deep learning has shown significant promise in clinical medicine. However, it often encounters two major difficulties in real-world applications: (1) domain shift, which invalidates the trained model on new datasets, and (2) class imbalance problems leading to model biases towards majority classes. To address these challenges, this paper proposes a transfer learning solution, named Dynamic Weighting Translation Transfer Learning (DTTL), for imbalanced medical image classification. The approach is grounded in information and entropy theory and comprises three modules: Cross-domain Discriminability Adaptation (CDA), Dynamic Domain Translation (DDT), and Balanced Target Learning (BTL). CDA connects discriminative feature learning between source and target domains using a synthetic discriminability loss and a domain-invariant feature learning loss. The DDT unit develops a dynamic translation process for imbalanced classes between two domains, utilizing a confidence-based selection approach to select the most useful synthesized images to create a pseudo-labeled balanced target domain. Finally, the BTL unit performs supervised learning on the reassembled target set to obtain the final diagnostic model. This paper delves into maximizing the entropy of class distributions, while simultaneously minimizing the cross-entropy between the source and target domains to reduce domain discrepancies. By incorporating entropy concepts into our framework, our method not only significantly enhances medical image classification in practical settings but also innovates the application of entropy and information theory within deep learning and medical image processing realms. Extensive experiments demonstrate that DTTL achieves the best performance compared to existing state-of-the-art methods for imbalanced medical image classification tasks.
APA, Harvard, Vancouver, ISO, and other citation styles
25

Gorbachev, Sergey, and Tatyana Abramova. "Design of Fuzzy Cognitive Model of Mutual Influence and Connectivity of Innovative Technologies". Journal of Modeling and Optimization 10, no. 1 (June 30, 2018): 1. http://dx.doi.org/10.32732/jmo.2018.10.1.1.

Full text of the source
Abstract:
The article is devoted to research and development of fuzzy cognitive model of mutual influence and connectivity of innovative technologies on the basis of synthesis of interdisciplinary research strategies - foresight methods based on logically transparent mechanisms of interpretation of the solution, expert weighting methods and neutrosophic cognitive maps. Neutrosophic cognitive map of mutual influence and connectivity of innovative technologies is constructed on the basis of data on the nature and intensity of these interactions, allowing to calculate the degree of influence of one or several concepts (technologies), taking into account their state, on the target indicator, and also calculate the fixed state of the concept system. The results of experimental verification and the multiplicative effect of the developed model on the data of expert surveys are described, under conditions of transition to the 6th technological order.
APA, Harvard, Vancouver, ISO, and other citation styles
26

SEON, CHOONG-NYOUNG, HARKSOO KIM, and JUNGYUN SEO. "TRANSLATION ASSISTANCE SYSTEM BASED ON SELECTIVE WEIGHTING AND CLUSTER-BASED SEARCHING METHODS". International Journal on Artificial Intelligence Tools 21, no. 06 (December 2012): 1250028. http://dx.doi.org/10.1142/s0218213012500285.

Full text of the source
Abstract:
Visiting a foreign country is now much easier than it was in the past. This has led to a consequent increase in the need for translation services during these visits. To satisfy this need, a reliable translation assistance system based on sentence retrieval techniques is proposed. When a user inputs a sentence in his/her native language, the proposed system retrieves sentences similar to the input sentence from a pre-constructed bilingual corpus and returns pairs of sentences in the native and foreign languages. To reduce the lexical disagreement problems that inevitably occur in this sentence retrieval application, the proposed system uses multi-level linguistic information (i.e., keywords, sentence types, and concepts) with different weights as indexing terms. In addition, the proposed system uses clustering information from sentences with similar meanings to smooth the retrieval target sentences. In an experiment, the proposed system outperformed traditional IR systems. Based on various experiments, it was found that multi-level information was effective at alleviating critical lexical disagreement problems in sentence retrieval. It was also found that the proposed system was suitable for sentence retrieval applications such as translation assistance systems.
APA, Harvard, Vancouver, ISO, and other citation styles
27

El-Dsouky, Ali I., Hesham A. Ali, and Rabab Samy Rashed. "Ranking Documents Based on the Semantic Relations Using Analytical Hierarchy Process". International Journal of Information Retrieval Research 7, no. 3 (July 2017): 22–37. http://dx.doi.org/10.4018/ijirr.2017070102.

Full text of the source
Abstract:
With the rapid growth of the World Wide Web comes the need for a fast and accurate way to reach the information required. Search engines play an important role in retrieving the required information for users, and ranking algorithms are an important step in search engines so that users can retrieve the pages most relevant to their query. In this work, the authors present a method for utilizing genealogical information from an ontology to find suitable hierarchical concepts for query extension, and for ranking web pages based on the semantic relations of the hierarchical concepts related to the query terms, taking into consideration the hierarchical relations of the searched domain (siblings, synonyms, and hyponyms) with different weightings based on the AHP method. It thus provides an accurate solution for ranking documents when compared to three common methods.
APA, Harvard, Vancouver, ISO, and other citation styles
28

Maestas, Cherie D., Matthew K. Buttice, and Walter J. Stone. "Extracting Wisdom from Experts and Small Crowds: Strategies for Improving Informant-based Measures of Political Concepts". Political Analysis 22, no. 3 (2014): 354–73. http://dx.doi.org/10.1093/pan/mpt050.

Full text of the source
Abstract:
Social scientists have increasingly turned to expert judgments to generate data for difficult-to-measure concepts, but getting access to and response from highly expert informants can be costly and challenging. We examine how informant selection and post-survey response aggregation influence the validity and reliability of measures built from informant observations. We draw upon three surveys with parallel survey questions of candidate characteristics to examine the trade-off between expanding the size of the local informant pool and the pool's level of expertise. We find that a “wisdom-of-crowds” effect trumps the benefits associated with the expertise of individual informants when the size of the rater pool is modestly increased. We demonstrate that the benefits of expertise are best realized by prescreening potential informants for expertise rather than post-survey weighting by expertise.
APA, Harvard, Vancouver, ISO, and other citation styles
29

Pandiselvi, Selvakumar, Raja Ramachandran, Jinde Cao, Grienggrai Rajchakit, Aly R. Seadawy, and Ahmed Alsaedi. "An advanced delay-dependent approach of impulsive genetic regulatory networks besides the distributed delays, parameter uncertainties and time-varying delays". Nonlinear Analysis: Modelling and Control 23, no. 6 (November 21, 2018): 803–29. http://dx.doi.org/10.15388/na.2018.6.1.

Full text of the source
Abstract:
In this paper, we consider the problem of a delay-dependent approach to impulsive genetic regulatory networks with distributed delays, parameter uncertainties, and time-varying delays. An advanced Lyapunov–Krasovskii functional is defined, which is in triple-integral form. Combining the Lyapunov–Krasovskii functional with the convex combination method and the free-weighting matrix approach, the stability conditions are derived with the help of linear matrix inequalities (LMIs). Available software packages are used to solve the conditions. Lastly, two numerical examples and their simulations are presented to indicate the feasibility of the theoretical concepts.
APA, Harvard, Vancouver, ISO, and other citation styles
30

Frost, M. E., and N. A. Spence. "The Rediscovery of Accessibility and Economic Potential: The Critical Issue of Self-Potential". Environment and Planning A: Economy and Space 27, no. 11 (November 1995): 1833–48. http://dx.doi.org/10.1068/a271833.

Full text of the source
Abstract:
Economic potential measures of accessibility seem to have been rediscovered in the research literature recently, as well as in research that informs policy formulation. These new applications are using more and more sophisticated sources of data but are in large measure still operationalising the familiar concepts of market potential. Such potential is calculated for any zone by summing the representative economic mass of all other zones in the system each divided by some measure of the intervening travel impedance between that zone and every other zone. In this straightforward calculation it becomes necessary to incorporate the economic mass of the zone under consideration itself and to decide on the appropriate travel impedance. This apparently simple task is the focus of this paper. Most research of this type uses a weighting of the radius of the circle equalling the area of the zone in question to approximate the travel impedance. The purpose of this paper is to demonstrate that the choice of weighting is important in determining the nature of the resultant potential surface.
APA, Harvard, Vancouver, ISO, and other citation styles
31

Tekli, Joe, Gilbert Tekli, and Richard Chbeir. "Combining offline and on-the-fly disambiguation to perform semantic-aware XML querying". Computer Science and Information Systems, no. 00 (2022): 63. http://dx.doi.org/10.2298/csis220228063t.

Full text of the source
Abstract:
Many efforts have been deployed by the IR community to extend free-text query processing toward semi-structured XML search. Most methods rely on the concept of Lowest Common Ancestor (LCA) between two or multiple structural nodes to identify the most specific XML elements containing query keywords posted by the user. Yet, few of the existing approaches consider XML semantics, and the methods that process semantics generally rely on computationally expensive word sense disambiguation (WSD) techniques, or apply semantic analysis in one stage only: performing query relaxation/refinement over the bag of words retrieval model, to reduce processing time. In this paper, we describe a new approach for XML keyword search aiming to solve the limitations mentioned above. Our solution first transforms the XML document collection (offline) and the keyword query (on-the-fly) into meaningful semantic representations using context-based and global disambiguation methods, specially designed to allow almost linear computation efficiency. We use a semantic-aware inverted index to allow semantic-aware search, result selection, and result ranking functionality. The semantically augmented XML data tree is processed for structural node clustering, based on semantic query concepts (i.e., key-concepts), in order to identify and rank candidate answer sub-trees containing related occurrences of query key-concepts. Dedicated weighting functions and various search algorithms have been developed for that purpose and will be presented here. Experimental results highlight the quality and potential of our approach.
APA, Harvard, Vancouver, ISO, and other citation styles
32

JUMARIE, GUY. "A (NEW) MEASURE OF FUZZY UNCERTAINTY VIA INTERVAL ANALYSIS, WHICH IS FULLY CONSISTENT WITH SHANNON THEORY". Tamkang Journal of Mathematics 22, no. 3 (September 1, 1991): 223–41. http://dx.doi.org/10.5556/j.tkjm.22.1991.4606.

Full text of the source
Abstract:
Many authors have suggested different measures of the amount of uncertainty involved in fuzzy sets, but most of these concepts suffer from drawbacks: mainly, they are indexes of fuzziness rather than measures of uncertainty, and they are not fully consistent with Shannon theory. The question is herein once more considered by combining the information theory of deterministic functions, recently initiated by the author, with the viewpoint of interval analysis; and one so derives the new concept of "uncertainty of order c of fuzzy sets". It is shown that it satisfies the main properties which are desirable for a measure of uncertainty. Some topics are outlined, such as informational distance between fuzzy sets, and mutual information between fuzzy sets, for instance. One so has at hand a unified approach to Shannon information expressed in terms of probability, and to fuzzy information described by weighting coefficients commonly referred to as a possibility distribution.
APA, Harvard, Vancouver, ISO, and other citation styles
33

Rojali, Rojali, Syaeful Karim, and Edgar Gerriano. "Perancangan Program Aplikasi Penentuan Portofolio Investasi dengan Metode Dempster Shafer Fuzzy-Analytical Hierarchy Process". ComTech: Computer, Mathematics and Engineering Applications 2, no. 1 (June 1, 2011): 139. http://dx.doi.org/10.21512/comtech.v2i1.2726.

Full text of the source
Abstract:
Investment is very popular nowadays as a way to gain profit from available assets. Many investment portfolios are available, but there are still users who have not been able to determine the best type of investment. It would be very unfortunate if the assets could not be invested well. Therefore, it is necessary to propose an alternative method that can determine a suitable investment portfolio. This method begins with a process that details the existing factors in every investment portfolio. Then, proper intuition is needed to assign weights for the initial calculation. After that, a fuzzy concept is used as a tool to provide a numerical weight for each of the parameters. The final step is a weighting process using the Dempster–Shafer method. With these three basic concepts, investors are expected to obtain objective and optimal computational results for selecting a profitable investment portfolio.
APA, Harvard, Vancouver, ISO, and other citation styles
34

Ma, Qing, Hongyuan Sun, Zhe Chen, and Yuhang Tan. "A novel MCDM approach for design concept evaluation based on interval-valued picture fuzzy sets". PLOS ONE 18, no. 11 (November 27, 2023): e0294596. http://dx.doi.org/10.1371/journal.pone.0294596.

Full text of the source
Abstract:
The assessment of design concepts presents an efficient and effective strategy for businesses to strengthen their competitive edge and introduce market-worthy products. The widely accepted viewpoint acknowledges this as an intricate multi-criteria decision-making (MCDM) approach, involving a multitude of evaluative criteria and a significant amount of data that is frequently ambiguously defined and subjectively influenced. In order to tackle the problems of uncertainty and fuzziness in design concept evaluation, our research creatively combines the interval-valued picture fuzzy set (IVPFS) with an MCDM process for design concept evaluation. Firstly, this study draws on the existing relevant literature and the experience of decision makers to identify some important criteria and corresponding sub-criteria and form a scientific evaluation indicator system. We then introduce the essential operational concepts of interval-valued picture fuzzy numbers (IVPFNs) and the interval-valued picture fuzzy ordered weighted interactive averaging (IVPFOWIA) operator. Thirdly, an entropy weighting method based on IVPFS is proposed in this research to calculate the weights of criteria and sub-criteria, and based on this, an integrated IVPF decision matrix is further constructed based on the presented IVPFOWIA operator. Finally, the best design concept alternative is selected by applying the extended TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution) approach with IVPFS. The IVPFS combined with the improved MCDM method has been proven to be superior in complex and uncertain decision-making situations through experiments and comparative assessments. The information ambiguity in the evaluation of design concepts is well characterized by our augmentation based on IVPFS.
APA, Harvard, Vancouver, ISO, and other citation styles
35

Harahap, Mar’ie Mahfudz, and Reski Anwar. "SUPREME COURT REGULATION (PERMA) NUMBER 1 YEAR 2020: SOLUTIONS IN THE GUIDELINES FOR DETERMINING DEATH PENALTY FOR CORRUPTION CRIMINAL ACTS IN CERTAIN CONDITIONS". JCH (Jurnal Cendekia Hukum) 7, no. 2 (March 31, 2022): 257. http://dx.doi.org/10.33760/jch.v7i2.474.

Full text of the source
Abstract:
The enactment of Supreme Court Regulation No. 1 of 2020 during the Covid-19 period opens a new glimmer of hope for corruption cases committed under certain circumstances in Indonesia. This regulation provides guidelines for judges in imposing the death penalty on corruptors. The motivation behind the research is to see how the sentencing guidelines apply when imposing the death penalty on corruptors in certain circumstances, and to understand the conceptual and comparative views of the death penalty for corruptors. This is normative legal research using primary and secondary legal materials, with statutory, comparative, and conceptual approaches. The results indicate that the death penalty applies only to corruption crimes subject to aggravation (weighting) and is threatened only as an alternative sanction, and that it may be imposed when the high degree of guilt, the impact, and the profit obtained are taken into account, also in comparative perspective.
APA, Harvard, Vancouver, ISO, and other citation styles
36

Habibur Rahman Arjuni and Arif Senja Fitrani. "Sistem Pendukung Keputusan Peserta Lomba Desain Logo Menggunakan Metode Simple Additive Weighting (SAW) Berbasis Website". Explorer 2, no. 2 (July 31, 2022): 71–78. http://dx.doi.org/10.47065/explorer.v2i2.310.

Full text of the source
Abstract:
The Faculty of Business, Law and Social Sciences (FBHIS) of Universitas Muhammadiyah Sidoarjo (Umsida) held an FBHIS logo design competition for the FBHIS creative community. The competition requires the logo to represent all study programs (Prodi) in FBHIS, namely the Management Study Program, Master of Management, Law, Communication Studies, Public Administration, Accounting, and Digital Business. To help determine the best contestants, a decision support system (SPK) is needed that can provide alternative solutions to the parties concerned. The decision support system for selecting participants in the logo design competition uses the Simple Additive Weighting (SAW) method for weighting the criteria. The software used to build the DSS is MySQL and PHP. The assessment criteria are taken from the selection test scores for originality, design idea concept, aesthetic value, and conformity of the theme and message delivered. A sample of the 20 top-ranked participants was taken as alternatives. The SAW calculation orders the participants' scores from highest to lowest. The result of the SPK is not a final decision, because the final decision remains with the decision maker.
APA, Harvard, Vancouver, ISO, and other citation styles
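
Since this entry and entry 39 below both rely on Simple Additive Weighting, a minimal generic sketch of SAW may help: normalise the decision matrix, take the weighted sum per alternative, and rank. The criteria, weights, and scores below are invented for illustration and are not taken from either paper.

```python
# Hedged sketch of Simple Additive Weighting (SAW): normalise the decision matrix,
# then rank alternatives by their weighted sums. Illustration data only.
import numpy as np

criteria_weights = np.array([0.30, 0.25, 0.20, 0.15, 0.10])   # must sum to 1
# Rows: candidate designs; columns: originality, idea, aesthetics, theme, message.
scores = np.array([
    [80, 75, 90, 70, 85],
    [90, 80, 70, 85, 75],
    [70, 95, 80, 90, 80],
], dtype=float)

# All criteria here are "benefit" criteria, so normalise by the column maximum.
normalised = scores / scores.max(axis=0)

# Weighted sum per alternative, then rank from best to worst.
preference = normalised @ criteria_weights
ranking = np.argsort(preference)[::-1]
for rank, idx in enumerate(ranking, start=1):
    print(f"rank {rank}: alternative {idx} (score {preference[idx]:.3f})")
```
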
37

Teegavarapu, Ramesh S. V. "Estimation of missing precipitation records integrating surface interpolation techniques and spatio-temporal association rules". Journal of Hydroinformatics 11, no. 2 (March 1, 2009): 133–46. http://dx.doi.org/10.2166/hydro.2009.009.

Full text of the source
Abstract:
Deterministic and stochastic weighting methods are the most frequently used methods for estimating missing rainfall values. These methods may not always provide accurate estimates due to their inability to completely characterize the spatial and temporal variability of rainfall. A new association rule mining (ARM) based spatial interpolation approach is proposed, developed and investigated in the current study to estimate missing precipitation values at a gauging station. As an integrated approach this methodology combines the power of data mining techniques and spatial interpolation approaches. Data mining concepts are used to extract and formulate rules based on spatial and temporal associations among observed precipitation data series. The rules are then used to improve the precipitation estimates obtained from spatial interpolation methods. A stochastic spatial interpolation technique and three deterministic weighting methods are used as interpolation methods in the current study. Historical daily precipitation data obtained from 15 rain gauging stations from a temperate climatic region (Kentucky, USA) are used to test this approach and derive conclusions about its efficacy for estimating missing precipitation data. Results suggest that the use of association rule mining in conjunction with a spatial interpolation technique can improve the precipitation estimates.
APA, Harvard, Vancouver, ISO, and other citation styles
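
One of the deterministic weighting methods referred to in the entry above is inverse-distance weighting; a small generic sketch follows, with made-up station coordinates and rainfall totals (it does not reproduce the paper's association-rule step).

```python
# Hedged sketch of inverse-distance weighting (IDW) for filling a missing gauge value:
# the estimate is a distance-weighted mean of the neighbouring observations.
import numpy as np

def idw_estimate(target_xy, station_xy, station_values, power: float = 2.0) -> float:
    """Estimate the missing value as a distance-weighted mean of neighbouring gauges."""
    distances = np.linalg.norm(station_xy - target_xy, axis=1)
    weights = 1.0 / np.power(distances, power)     # assumes no station sits exactly on target
    return float(np.sum(weights * station_values) / np.sum(weights))

stations = np.array([[0.0, 1.0], [2.0, 2.0], [3.0, 0.5], [1.5, -1.0]])  # made-up coordinates
rainfall = np.array([12.0, 8.5, 10.2, 11.1])                            # observed daily totals (mm)
print(idw_estimate(np.array([1.0, 0.5]), stations, rainfall))
```
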
38

Navarro, Ignacio J., Víctor Yepes, and José V. Martí. "A Review of Multicriteria Assessment Techniques Applied to Sustainable Infrastructure Design". Advances in Civil Engineering 2019 (June 17, 2019): 1–16. http://dx.doi.org/10.1155/2019/6134803.

Full text of the source
Abstract:
Given the great impacts associated with the construction and maintenance of infrastructures in the environmental, the economic, and the social dimensions, a sustainable approach to their design appears essential to ease the fulfilment of the Sustainable Development Goals set by the United Nations. Multicriteria decision-making methods are usually applied to address the complex and often conflicting criteria that characterise sustainability. The present study aims to review the current state of the art regarding the application of such techniques in the sustainability assessment of infrastructures, analysing as well the sustainability impacts and criteria included in the assessments. The Analytic Hierarchy Process is the most frequently used weighting technique. Simple Additive Weighting has turned out to be the most applied decision-making method to assess the weighted criteria. Although a life cycle assessment approach is recurrently used to evaluate sustainability, standardised concepts, such as cost discounting, or presentation of the assumed functional unit or system boundaries, as required by ISO 14040, are still only marginally used. Additionally, a need for further research in the inclusion of fuzziness in the handling of linguistic variables is identified.
APA, Harvard, Vancouver, ISO, and other citation styles
39

Safii, M. "Sistem Pendukung Keputusan Penerima Beasiswa PPA Dan BBM Menggunakan Metode Simple Additive Weighting (SAW)". Jurasik (Jurnal Riset Sistem Informasi dan Teknik Informatika) 2, no. 1 (July 31, 2017): 75. http://dx.doi.org/10.30645/jurasik.v2i1.21.

Full text of the source
Abstract:
The Academic Achievement Improvement Scholarship (PPA) and the Student Learning Aid (BBM) are stimuli given by the government to students with the aim of motivating them to strengthen their spirit of learning. According to the program guidelines issued by the Directorate General of Higher Education, awards must follow the 3T principle: Right Target, Right Number, and Timely. The Computer Informatics Management Academy (AMIK) Tunas Bangsa Pematangsiantar is one of the private institutions that receives a scholarship quota. With about 1,400 students currently enrolled, the scholarship allocation must be selected carefully to fit the government guidelines. In this case, a method is needed that can support a decision support system (DSS) in determining the grantees, namely the simple additive weighting (SAW) method. Its basic concept is to seek a weighted sum of the performance ratings of each alternative on all attributes, which requires a normalization of the decision matrix. This method can handle compound, detailed assessment criteria within a comprehensive hierarchical framework that calculates the weight of each criterion, in order to determine priority recommendations for PPA and BBM scholarship recipients in accordance with the target.
APA, Harvard, Vancouver, ISO, and other citation styles
40

Mahoney, James, und Gary Goertz. „A Tale of Two Cultures: Contrasting Quantitative and Qualitative Research“. Political Analysis 14, Nr. 3 (2006): 227–49. http://dx.doi.org/10.1093/pan/mpj017.

The full text of the source
Annotation:
The quantitative and qualitative research traditions can be thought of as distinct cultures marked by different values, beliefs, and norms. In this essay, we adopt this metaphor toward the end of contrasting these research traditions across 10 areas: (1) approaches to explanation, (2) conceptions of causation, (3) multivariate explanations, (4) equifinality, (5) scope and causal generalization, (6) case selection, (7) weighting observations, (8) substantively important cases, (9) lack of fit, and (10) concepts and measurement. We suggest that an appreciation of the alternative assumptions and goals of the traditions can help scholars avoid misunderstandings and contribute to more productive “cross-cultural” communication in political science.
APA, Harvard, Vancouver, ISO, and other citation styles
41

PATERITSAS, CHRISTOS, und ANDREAS STAFYLOPATIS. „MEMORY-BASED CLASSIFICATION WITH DYNAMIC FEATURE SELECTION USING SELF-ORGANIZING MAPS FOR PATTERN EVALUATION“. International Journal on Artificial Intelligence Tools 16, Nr. 05 (Oktober 2007): 875–99. http://dx.doi.org/10.1142/s0218213007003588.

The full text of the source
Annotation:
Memory-based learning is one of the main fields in the area of machine learning. We propose a new methodology for addressing the classification task that relies on the main idea of the k-nearest neighbors algorithm, the most important representative of this field. In the proposed approach, given an unclassified pattern, a set of neighboring patterns is found, not necessarily using all input feature dimensions. Following the concept of the naïve Bayesian classifier, we also adopt the assumption of independence of input features in the outcome of the classification task. The two concepts are merged in an attempt to take advantage of their good performance characteristics. To further improve the performance of our approach, we propose a novel weighting scheme for the memory base: using the self-organizing map model during the execution of the algorithm, dynamic weights of the memory-base patterns are produced. Experimental results show improved performance of the proposed method in comparison with the aforementioned algorithms and their variations.
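
The combination of nearest-neighbor retrieval with per-pattern weights can be illustrated with a small sketch. The weights below are supplied by hand purely for the example, whereas the paper derives them dynamically from a self-organizing map:

```python
import numpy as np
from collections import Counter

def weighted_knn_predict(query, X, y, pattern_weights, k=5):
    """k-nearest-neighbor vote in which each stored pattern carries its own
    weight; the vote of each neighbor is its weight scaled by inverse distance."""
    d = np.linalg.norm(X - query, axis=1)
    nearest = np.argsort(d)[:k]
    votes = Counter()
    for i in nearest:
        votes[y[i]] += pattern_weights[i] / (d[i] + 1e-12)
    return votes.most_common(1)[0][0]

# Hypothetical memory base of six 2-D patterns belonging to two classes
X = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1])
w = np.array([1.0, 0.8, 1.2, 1.0, 0.9, 1.1])
print(weighted_knn_predict(np.array([0.6, 0.4]), X, y, w, k=3))
```
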
APA, Harvard, Vancouver, ISO, and other citation styles
42

Mouriño García, Marcos Antonio, Roberto Pérez Rodríguez und Luis E. Anido Rifón. „Biomedical literature classification using encyclopedic knowledge: a Wikipedia-based bag-of-concepts approach“. PeerJ 3 (29.09.2015): e1279. http://dx.doi.org/10.7717/peerj.1279.

The full text of the source
Annotation:
Automatic classification of text documents into a set of categories has many applications. Among them, the automatic classification of biomedical literature stands out as an important application for automatic document classification strategies. Biomedical staff and researchers have to deal with a large volume of literature in their daily activities, so a system that provides access to documents of interest in a simple and effective way would be useful; this requires that documents be sorted according to some criteria, that is, classified. Documents to classify are usually represented following the bag-of-words (BoW) paradigm: features are words in the text, thus suffering from synonymy and polysemy, and their weights are based only on their frequency of occurrence. This paper presents an empirical study of the efficiency of a classifier that leverages encyclopedic background knowledge, concretely Wikipedia, to create bag-of-concepts (BoC) representations of documents, understanding a concept as a "unit of meaning" and thus tackling synonymy and polysemy. In addition, the weighting of concepts is based on their semantic relevance in the text. For the evaluation of the proposal, empirical experiments were conducted with one of the corpora commonly used for evaluating classification and retrieval of biomedical information, OHSUMED, and also with a purpose-built corpus of MEDLINE biomedical abstracts, UVigoMED. The results show that the Wikipedia-based bag-of-concepts representation outperforms the classical bag-of-words representation by up to 157% in the single-label classification problem and up to 100% in the multi-label problem for the OHSUMED corpus, and by up to 122% in the single-label problem and up to 155% in the multi-label problem for the UVigoMED corpus.
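
To make the bag-of-concepts idea concrete, a toy sketch follows. The concept inventory is a hand-written stand-in for the Wikipedia-derived mapping used in the paper, and the relevance-weighting hook is purely illustrative:

```python
from collections import Counter

# Hypothetical mini concept inventory: surface forms mapped to concept IDs,
# standing in for the Wikipedia-based mapping used in the paper.
CONCEPTS = {
    "heart attack": "C:myocardial_infarction",
    "myocardial infarction": "C:myocardial_infarction",
    "aspirin": "C:acetylsalicylic_acid",
    "acetylsalicylic acid": "C:acetylsalicylic_acid",
}

def bag_of_concepts(text, relevance=None):
    """Build a bag-of-concepts vector: synonymous surface forms collapse to one
    concept, and raw counts can be scaled by an optional relevance score."""
    text = text.lower()
    counts = Counter()
    for surface, concept in CONCEPTS.items():
        n = text.count(surface)
        if n:
            counts[concept] += n
    if relevance:                             # optional semantic-relevance weighting
        return {c: counts[c] * relevance.get(c, 1.0) for c in counts}
    return dict(counts)

doc = "Aspirin (acetylsalicylic acid) reduces mortality after a heart attack."
print(bag_of_concepts(doc))
```
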
APA, Harvard, Vancouver, ISO, and other citation styles
43

Díez, Francisco Javier, Manuel Arias, Jorge Pérez-Martín und Manuel Luque. „Teaching Probabilistic Graphical Models with OpenMarkov“. Mathematics 10, Nr. 19 (30.09.2022): 3577. http://dx.doi.org/10.3390/math10193577.

The full text of the source
Annotation:
OpenMarkov is an open-source software tool for probabilistic graphical models. It was developed primarily for medicine, but it has also been used, in more than 30 countries, to build applications in other fields and for teaching. In this paper we explain how to use it as a pedagogical tool to teach the main concepts of Bayesian networks and influence diagrams, such as conditional dependence and independence, d-separation, Markov blankets, explaining away, optimal policies, and expected utilities, as well as some inference algorithms: logic sampling, likelihood weighting, and arc reversal. The facilities for learning Bayesian networks interactively can be used to illustrate, step by step, the performance of the two basic algorithms: search-and-score and PC.
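
Likelihood weighting, one of the inference algorithms listed, can be sketched on a two-node network; the network structure and probabilities below are hypothetical and are not taken from OpenMarkov's examples:

```python
import random

# A two-node Bayesian network: Rain -> WetGrass (hypothetical probabilities).
P_RAIN = 0.2
P_WET_GIVEN = {True: 0.9, False: 0.1}

def likelihood_weighting(evidence_wet=True, n=100_000):
    """Estimate P(Rain | WetGrass = evidence_wet) by likelihood weighting:
    sample the non-evidence variable and weight each sample by the likelihood
    of the evidence given its sampled parent."""
    num = den = 0.0
    for _ in range(n):
        rain = random.random() < P_RAIN                  # sample Rain from its prior
        p_evidence = P_WET_GIVEN[rain] if evidence_wet else 1 - P_WET_GIVEN[rain]
        den += p_evidence                                # weight = likelihood of evidence
        if rain:
            num += p_evidence
    return num / den

print(likelihood_weighting())   # exact answer is 0.18 / (0.18 + 0.08) ≈ 0.692
```
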
APA, Harvard, Vancouver, ISO, and other citation styles
44

Andriyanty, Reny, Marsadi Aras, Silvia Nur Afuani und Amalia Nurfallah. „Strategi Pengembangan Bisnis Rumah Makan Padang Di Sekitar Lingkar Kampus IBI Kosgoro 1957“. Mediastima 26, Nr. 1 (10.06.2020): 18–39. http://dx.doi.org/10.55122/mediastima.v26i1.1.

The full text of the source
Annotation:
The purpose of the research was to formulate a specific strategy for Rumah Makan Padang business development. The research method is descriptive and qualitative. The data analysis comprises three stages: (1) an input stage using the Delphi technique, (2) an adjustment stage using a reciprocal-rank weighting technique, and (3) a decision stage analysed with the quantitative strategic planning matrix method. The recommended strategies are: manage the supply chain with trusted suppliers; collaborate with the government to facilitate and obtain all permits related to halal, health, safety, and food-security accreditation; develop financing, delivery, and promotion through digital technology; and innovate the menus while maintaining the unique building concepts and service techniques.
APA, Harvard, Vancouver, ISO, and other citation styles
45

Körber, Andreas. „Historical consciousness, knowledge, and competencies of historical thinking: An integrated model of historical thinking and curricular implications“. Historical Encounters: A journal of historical consciousness, historical cultures, and history education 8, Nr. 1 (04.05.2021): 97–119. http://dx.doi.org/10.52289/hej8.107.

The full text of the source
Annotation:
Comparison of and reflection on history education across national and cultural boundaries have shown that, regardless of different traditions of history education, legislative interventions, and research, some questions are common to research, debate, and development, albeit with both differences and commonalities in concepts and terminology. One of the common problems is the weighting of the components “knowledge”, “historical consciousness”, and “skills” or “competencies”, both as aims of history education and in their curricular interrelation with regard to progression. Against the backdrop of a long-standing debate around German “chronological” teaching of history, and making use of some recent comparative reflections, the article discusses principles for designing non-chronological curricula focusing on sequential elaboration in all three dimensions of history learning.
APA, Harvard, Vancouver, ISO, and other citation styles
46

Dagher, Issam, und Hussein Al-Bazzaz. „Improving the Component-Based Face Recognition Using Enhanced Viola–Jones and Weighted Voting Technique“. Modelling and Simulation in Engineering 2019 (03.04.2019): 1–9. http://dx.doi.org/10.1155/2019/8234124.

The full text of the source
Annotation:
This paper enhances the recognition capabilities of facial component-based techniques using improved Viola–Jones component detection and facial-component weighting. Our method starts with enhanced Viola–Jones face component detection and cropping; the facial components are detected and cropped accurately under all pose-changing circumstances. The cropped components are represented by the histogram of oriented gradients (HOG). The weight of each component is determined using a validation process, and these weights are combined by a simple voting technique. Three public databases were used: the AT&T database, the PUT database, and the AR database. Several improvements are observed using the weighted voting recognition method presented in this paper.
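
The weighted voting step can be illustrated with a small sketch; the component names, similarity scores, and weights below are invented for the example and are not taken from the paper:

```python
def weighted_vote(component_scores, component_weights):
    """Combine per-component similarity scores over gallery identities using
    weighted voting: each facial component votes for its best-matching identity
    with a strength equal to its validation weight."""
    totals = {}
    for comp, scores in component_scores.items():        # scores: identity -> similarity
        best_identity = max(scores, key=scores.get)       # the component's vote
        totals[best_identity] = totals.get(best_identity, 0.0) + component_weights[comp]
    return max(totals, key=totals.get)

# Hypothetical similarities of eyes/nose/mouth HOG descriptors to two identities
scores = {
    "eyes":  {"id_A": 0.91, "id_B": 0.60},
    "nose":  {"id_A": 0.55, "id_B": 0.70},
    "mouth": {"id_A": 0.80, "id_B": 0.78},
}
weights = {"eyes": 0.5, "nose": 0.2, "mouth": 0.3}
print(weighted_vote(scores, weights))   # -> "id_A"
```
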
APA, Harvard, Vancouver, ISO, and other citation styles
47

Rácz, Gábor, Attila Sali und Klaus-Dieter Schewe. „Refined Fuzzy Profile Matching“. Acta Cybernetica 26, Nr. 2 (18.09.2023): 243–66. http://dx.doi.org/10.14232/actacyb.277380.

The full text of the source
Annotation:
A profile describes a set of properties, e.g. a set of skills a person may have or a set of skills required for a particular job. Profile matching aims to determine how well a given profile fits a requested profile and vice versa. Fuzziness is naturally attached to this problem. The filter-based matching theory uses filters in lattices to represent profiles and matching values in the interval [0,1], so the lattice order refers to subsumption between the concepts in a profile. In this article the lattice is extended by additional information in the form of weighted extra edges that represent partial, quantifiable relationships between these concepts. This gives rise to fuzzy filters, which permit a refinement of profile matching. Another way to introduce fuzziness is to treat profiles as fuzzy sets. In the present paper we combine these two approaches. Extra edges may introduce directed cycles in the directed graph of the ontology, and the structure of a lattice is lost. We provide a construction grounded in formal concept analysis to extend the original lattice and remove the cycles such that matching values determined over the extended lattice are exactly those resulting from the use of fuzzy filters in the case of crisp profiles. For fuzzy profiles we show how to modify the weighting construction while eliminating the directed cycles but still regaining the matching values. We also give sharp estimates for the growth of the number of vertices in this construction.
APA, Harvard, Vancouver, ISO, and other citation styles
48

Mitra, Soupaya, und Shankha Shubhra Goswami. „Application of Simple Average Weighting Optimization Method in the Selection of Best Desktop Computer Model“. Advanced Journal of Graduate Research 6, Nr. 1 (22.07.2019): 60–68. http://dx.doi.org/10.21467/ajgr.6.1.60-68.

The full text of the source
Annotation:
Multi-Criteria Decision Making (MCDM) is one of the most widely applied concepts today, enabling a decision maker to select the best strategy among different available alternatives. MCDM techniques help remove bias and confusion when selecting a product or process. In recent years, different MCDM methodologies have found wide application in industry as well as in daily life. This paper describes one such application in detail, taken from daily life: a decision commonly faced by students when purchasing a desktop computer. The main objective of this paper is to select the best desktop computer model among five different models actually available in the market, each with a different configuration. For this analysis, 100 computer users were surveyed to learn their relative preferences and which computer specifications are most important to them. A small number of criteria were considered, each with several sub-criteria (for example, the processor may differ between models: I3, I5, I7, etc.). The MCDM methodology adopted for this selection process is the Simple Average Weighting (SAW) method.
APA, Harvard, Vancouver, ISO, and other citation styles
49

Mitchel, R. E. J. „Cancer and Low Dose Responses in vivo: Implications for Radiation Protection“. Dose-Response 5, Nr. 4 (01.10.2007): dose—response.0. http://dx.doi.org/10.2203/dose-response.07-014.mitchel.

The full text of the source
Annotation:
The Linear No Threshold (LNT) hypothesis states that ionizing radiation risk is directly proportional to dose, without a threshold. This hypothesis, along with a number of derived or auxiliary concepts such as radiation and tissue weighting factors and dose-rate reduction factors, is used to calculate radiation risk estimates for humans and is therefore fundamental to radiation protection practices. This system is based mainly on epidemiological data of cancer risk in human populations exposed to relatively high doses (above 100 mSv), with the results linearly extrapolated back to the low doses typical of current exposures; the system therefore uses dose as a surrogate for risk. There is now a large body of information indicating that, at low doses, the LNT hypothesis, along with most of the derived and auxiliary concepts, is incorrect. The use of dose as a predictor of risk needs to be re-examined, and the use of dose limits as a means of limiting risk needs to be re-evaluated. This re-evaluation could lead to large changes in radiation protection practices.
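
For context on the weighting factors criticised here: they enter the effective-dose calculation of the form E = sum over tissues T of w_T times sum over radiation types R of w_R * D_{T,R}. A toy sketch with illustrative, non-authoritative factor values:

```python
# Illustrative effective-dose calculation of the kind the weighting-factor
# system implies. The numeric factors below are examples for demonstration,
# not an authoritative regulatory table.
RADIATION_WEIGHT = {"photon": 1.0, "alpha": 20.0}
TISSUE_WEIGHT = {"lung": 0.12, "stomach": 0.12, "skin": 0.01}

def effective_dose(absorbed_doses):
    """absorbed_doses: {(tissue, radiation_type): absorbed dose in Gy}
    -> effective dose in Sv."""
    total = 0.0
    for (tissue, radiation), dose_gy in absorbed_doses.items():
        equivalent = RADIATION_WEIGHT[radiation] * dose_gy     # H_T,R = w_R * D_T,R
        total += TISSUE_WEIGHT[tissue] * equivalent            # E += w_T * H_T
    return total

print(effective_dose({("lung", "photon"): 0.01, ("skin", "alpha"): 0.002}))
```
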
APA, Harvard, Vancouver, ISO, and other citation styles
50

Juliantara, Kadek Bayu Krisna, Anak Agung Sagung Laksmi Dewi und I. Made Minggu Widyantara. „Peran Reserse dalam Penyidikan Tindak Pidana Pencurian dengan Pemberatan (Studi Kasus Polsek Sukawati Gianyar Bali)“. Jurnal Konstruksi Hukum 2, Nr. 3 (01.07.2021): 510–14. http://dx.doi.org/10.22225/jkh.2.3.3625.510-514.

The full text of the source
Annotation:
Recently, the incidence of aggravated theft (theft with aggravating circumstances) has increased, and investigators handling such cases must be careful in uncovering them. This study aims to examine the role of the detective unit in uncovering the crime of aggravated theft and to analyze the obstacles investigators face in doing so. The research is empirical, i.e. based on facts that occur and on developing existing concepts, and the approach used is field research. The sources of legal material are primary and secondary legal materials obtained through observation and documentation. After all the data had been collected, they were processed and analyzed qualitatively. The results show that the role of the criminal investigators of the Sukawati Police in uncovering aggravated theft begins with a complaint report from the public; the Sukawati Police detectives then examine the scene of the case, examine witnesses, confiscate evidence, and carry out arrest, search, detention, and the filing and submission of the case files to the court. The obstacles faced by the Criminal Investigation Unit of the Sukawati Sector are that the perpetrator is a recidivist, the lack of tools to track the perpetrators, and community factors.
APA, Harvard, Vancouver, ISO, and other citation styles