Academic literature on the topic "Irrelevance Coverage Model"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles


Consult the topical lists of articles, books, theses, conference proceedings, and other scholarly sources on the topic "Irrelevance Coverage Model".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication in PDF format and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Irrelevance Coverage Model"

1

Xiang, Jianwen, Fumio Machida, Kumiko Tadano, and Yoshiharu Maeno. "An Imperfect Fault Coverage Model With Coverage of Irrelevant Components". IEEE Transactions on Reliability 64, no. 1 (March 2015): 320–32. http://dx.doi.org/10.1109/tr.2014.2363155.

2

Salihu, Shakirat Aderonke, and Oluwakemi Christiana Abikoye. "An Enhanced Information Retrieval-Based Bug Localization System with Code Coverage, Stack Traces, and Spectrum Information". Journal of Hunan University Natural Sciences 49, no. 4 (April 30, 2022): 108–24. http://dx.doi.org/10.55463/issn.1674-2974.49.4.12.

Abstract
Several strategies such as the Vector Space Model (VSM), the revised Vector Space Model (rVSM), and the integration of additional elements such as stack traces and previously corrected bug reports have been utilized to improve the Information Retrieval (IR) based bug localization process. Most of the existing IR-based approaches make use of source code files without filtering, which eventually increases the search space of the technique, thereby slowing down the bug localization process. This study developed an enhanced IR-based bug localization model as a viable solution. Specifically, an enhanced rVSM (e-rVSM) is developed based on the hybridization of code coverage, stack traces, and spectrum information. Combining the stack trace and spectrum information as additional features can enhance the accuracy of the IR-based technique by boosting the bug localization process. Code coverage analysis was conducted to remove irrelevant source files and reduce the search space of the IR technique. The filtered source files are then preprocessed via tokenization and stemming to select relevant features and remove unwanted words. The preprocessed data is further analyzed by finding similarities between the preprocessed bug reports and source code files using the e-rVSM. Finally, scores for each source code file are computed and suspected buggy files are ranked in descending order. The performance of the proposed e-rVSM is tested on two open-source projects (Zxing and SWT), and its effectiveness is assessed using TopN rank (where N = 5, 10), Mean Reciprocal Rank (MRR), and Mean Average Precision (MAP). Findings from the experimental results revealed the effectiveness of e-rVSM in bug localization. In particular, e-rVSM recorded significant Top 5 (80.2%; 65%) and Top 10 (89.1%; 75%) rank values on the SWT and Zxing datasets, respectively. Also, the proposed e-rVSM had MRR values of 80% and 54% and MAP values of 61.22% and 47.23% on the SWT and Zxing datasets, respectively.
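To make the ranking idea concrete, here is a minimal sketch of an rVSM-style scorer: tf-idf cosine similarity between a bug report and each source file, boosted by a logistic function of file length. The function names, preprocessing, and exact weighting are illustrative assumptions, not the authors' e-rVSM formulation.

```python
# Minimal sketch of an rVSM-style ranking with a logistic length prior, as
# used in IR-based bug localization (illustrative, not the paper's e-rVSM).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rvsm_rank(bug_report, source_files):
    """Rank source files by tf-idf cosine similarity to the bug report,
    boosted by a logistic function of normalized file length."""
    names = list(source_files)
    docs = [source_files[n] for n in names]
    tfidf = TfidfVectorizer(stop_words="english")
    doc_vecs = tfidf.fit_transform(docs)
    query_vec = tfidf.transform([bug_report])
    cos = cosine_similarity(query_vec, doc_vecs).ravel()

    # Length prior g(x) = 1 / (1 + exp(-x_norm)): larger files get a mild
    # boost, on the heuristic that bugs tend to reside in larger files.
    lengths = np.array([len(d.split()) for d in docs], dtype=float)
    spread = lengths.max() - lengths.min()
    norm = (lengths - lengths.min()) / (spread if spread else 1.0)
    boost = 1.0 / (1.0 + np.exp(-norm))

    return sorted(zip(names, boost * cos), key=lambda p: -p[1])

files = {"Parser.java": "parse token stream and recover from parse error",
         "Renderer.java": "draw pixels into the screen buffer"}
print(rvsm_rank("crash with parse error on malformed token", files))
```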
3

Fan, Jung-wei, Jianrong Li, and Yves A. Lussier. "Semantic Modeling for Exposomics with Exploratory Evaluation in Clinical Context". Journal of Healthcare Engineering 2017 (2017): 1–10. http://dx.doi.org/10.1155/2017/3818302.

Abstract
Exposome is a critical dimension in the precision medicine paradigm. Effective representation of exposomics knowledge is instrumental to melding nongenetic factors into data analytics for clinical research. There is still limited work in (1) modeling exposome entities and relations with proper integration to mainstream ontologies and (2) systematically studying their presence in clinical context. Through selected ontological relations, we developed a template-driven approach to identifying exposome concepts from the Unified Medical Language System (UMLS). The derived concepts were evaluated in terms of literature coverage and the ability to assist in annotating clinical text. The generated semantic model represents rich domain knowledge about exposure events (454 pairs of relations between exposure and outcome). Additionally, a list of 5667 disorder concepts with microbial etiology was created for inferred pathogen exposures. The model consistently covered about 90% of PubMed literature on exposure-induced iatrogenic diseases over 10 years (2001–2010). The model contributed to the efficiency of exposome annotation in clinical text by filtering out 78% of irrelevant machine annotations. Analysis of 50 annotated discharge summaries helped advance our understanding of the exposome information in clinical text. This pilot study demonstrated the feasibility of semi-automatically developing a useful semantic resource for exposomics.
4

Khraibet AL-Behadili, Hayder Naser, Ku Ruhana Ku-Mahamud, and Rafid Sagban. "Annealing strategy for an enhance rule pruning technique in ACO-Based rule classification". Indonesian Journal of Electrical Engineering and Computer Science 16, no. 3 (December 1, 2019): 1499. http://dx.doi.org/10.11591/ijeecs.v16.i3.pp1499-1507.

Abstract
Ant colony optimization (ACO) was successfully applied to the data mining classification task through ant-mining algorithms. Exploration and exploitation are search strategies that guide the learning process of a classification model and generate a list of rules. Exploitation refers to the process of intensifying the search for neighbors in good regions, whereas exploration aims towards new promising regions during a search process. The existing balance between exploration and exploitation in the rule construction procedure is limited to the roulette wheel selection mechanism, which complicates rule generation. Thus, low-coverage complex rules with irrelevant terms will be generated. This work proposes an enhanced rule pruning procedure for the ACO algorithm that can be used in rule-based classification. This procedure, called the annealing strategy, is an improvement of ant-mining algorithms in the rule construction procedure. Presented as a pre-pruning technique, the annealing strategy deals first with irrelevant terms before creating a complete rule through an annealing schedule. The proposed improvement was tested through benchmarking experiments, and results were compared with those of four of the most related ant-mining algorithms, namely, Ant-Miner, CAnt-Miner, TACO-Miner, and Ant-Miner with hybrid pruner. Results show that our proposed technique achieves better performance in terms of classification accuracy, model size, and computational time. The proposed annealing schedule can be used in other ACO variants for different applications to improve classification accuracy.
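As a rough illustration of the pre-pruning idea, the sketch below drops low-quality candidate terms before a rule is completed, keeping a weak term only with an annealing acceptance probability that shrinks as the temperature cools. The quality measure, schedule, and constants are assumptions for illustration, not the paper's exact procedure.

```python
# Illustrative annealing-style pre-pruning of candidate rule terms: a term
# of low quality is kept only with probability exp(-delta/T), and the
# temperature cools after each decision (constants are assumed).
import math
import random

def anneal_prune(terms, quality, t0=1.0, cooling=0.9, t_min=0.01):
    """Return the subset of candidate terms kept for rule construction."""
    kept, temp = [], t0
    best = max(quality[t] for t in terms)
    for term in sorted(terms, key=lambda t: -quality[t]):
        delta = best - quality[term]           # how much worse than the best
        if random.random() < math.exp(-delta / temp):
            kept.append(term)                  # good term, or lucky this round
        temp = max(temp * cooling, t_min)      # stricter as the schedule cools
    return kept

quality = {"age>60": 0.9, "humidity=high": 0.2, "salary<1k": 0.7}
print(anneal_prune(list(quality), quality))
```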
5

Joshi, Saket, Kristian Kersting, and Roni Khardon. "Self-Taught Decision Theoretic Planning with First Order Decision Diagrams". Proceedings of the International Conference on Automated Planning and Scheduling 20 (May 25, 2021): 89–96. http://dx.doi.org/10.1609/icaps.v20i1.13411.

Abstract
We present a new paradigm for planning by learning, where the planner is given a model of the world and a small set of states of interest, but no indication of optimal actions in these states. The additional information can help focus the planner on regions of the state space that are of interest and lead to improved performance. We demonstrate this idea by introducing novel model-checking reduction operations for First Order Decision Diagrams (FODD), a representation that has been used to implement decision-theoretic planning with Relational Markov Decision Processes (RMDP). Intuitively, these reductions modify the construction of the value function by removing any complex specifications that are irrelevant to the set of training examples, thereby focusing on the region of interest. We show that such training examples can be constructed on the fly from a description of the planning problem; thus, we can bootstrap to get a self-taught planning system. Additionally, we provide a new heuristic to embed universal and conjunctive goals within the framework of RMDP planners, expanding the scope and applicability of such systems. We show that these ideas lead to significant improvements in performance in terms of both speed and coverage of the planner, yielding state-of-the-art planning performance on problems from the International Planning Competition.
6

Sivetc, Liudmila, and Mariëlle Wijermars. "The Vulnerabilities of Trusted Notifier-Models in Russia: The Case of Netoscope". Media and Communication 9, no. 4 (October 21, 2021): 27–38. http://dx.doi.org/10.17645/mac.v9i4.4237.

Abstract
Current digital ecosystems are shaped by platformisation, algorithmic recommender systems, and news personalisation. These (algorithmic) infrastructures influence online news dissemination and therefore necessitate a reconceptualisation of how online media control is or may be exercised in states with restricted media freedom. Indeed, the degree of media plurality and journalistic independence becomes irrelevant when reporting is available but difficult to access; for example, if the websites of media outlets are not indexed or recommended by the search engines, news aggregators, or social media platforms that function as algorithmic gatekeepers. Research approaches to media control need to be broadened because authoritarian governments are increasingly adopting policies that govern the internet through its infrastructure; the power they leverage against private infrastructure owners yields more effective—and less easily perceptible—control over online content dissemination. Zooming in on the use of trusted notifier-models to counter online harms in Russia, we examine the Netoscope project (a database of Russian domain names suspected of malware, botnet, or phishing activities) in which federal censor Roskomnadzor cooperates with, e.g., Yandex (that downranks listed domains in search results), Kaspersky, and foreign partners. Based on publicly available reports, media coverage, and semi-structured interviews, the article analyses the degree of influence, control, and oversight of Netoscope’s participating partners over the database and its applications. We argue that, in the absence of effective legal safeguards and transparency requirements, the politicised nature of internet infrastructure makes the trusted notifier-model vulnerable to abuse in authoritarian states.
7

Hui, Haisheng, Xueying Zhang, Zelin Wu, and Fenlian Li. "Dual-Path Attention Compensation U-Net for Stroke Lesion Segmentation". Computational Intelligence and Neuroscience 2021 (August 31, 2021): 1–16. http://dx.doi.org/10.1155/2021/7552185.

Abstract
For the segmentation task of stroke lesions, the attention U-Net model based on the self-attention mechanism can suppress irrelevant regions in an input image while highlighting salient features useful for specific tasks. However, when the lesion is small and the lesion contour is blurred, attention U-Net may generate wrong attention coefficient maps, leading to incorrect segmentation results. To cope with this issue, we propose a dual-path attention compensation U-Net (DPAC-UNet) network, which consists of a primary path network and an auxiliary path network. Both networks are attention U-Net models and identical in structure. The primary path network is the core network that performs accurate lesion segmentation and outputs the final segmentation result. The auxiliary path network generates auxiliary attention compensation coefficients and sends them to the primary path network to compensate for and correct possible attention coefficient errors. To realize the compensation mechanism of DPAC-UNet, we propose a weighted binary cross-entropy Tversky (WBCE-Tversky) loss to train the primary path network to achieve accurate segmentation, and propose another compound loss function, called tolerance loss, to train the auxiliary path network to generate auxiliary compensation attention coefficient maps with an expanded coverage area to perform compensation operations. We conducted segmentation experiments using the 239 MRI scans of the anatomical tracings of lesions after stroke (ATLAS) dataset to evaluate the performance and effectiveness of our method. The experimental results show that the DSC score of the proposed DPAC-UNet network is 6% higher than that of the single-path attention U-Net, and also higher than those of the existing segmentation methods in the related literature. Therefore, our method demonstrates powerful abilities in the application of stroke lesion segmentation.
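The compound-loss idea can be illustrated with a minimal PyTorch sketch combining a weighted binary cross-entropy term with a Tversky term; the pos_weight, alpha, and beta values here are placeholder assumptions, not the paper's exact WBCE-Tversky formulation.

```python
# Minimal sketch of a weighted-BCE + Tversky compound loss (illustrative
# constants; the paper's exact WBCE-Tversky formulation may differ).
import torch
import torch.nn.functional as F

def wbce_tversky_loss(logits, target, pos_weight=5.0, alpha=0.7, beta=0.3,
                      eps=1e-6):
    """logits, target: float tensors of shape (N, 1, H, W), target in {0, 1}."""
    # Weighted BCE: up-weights the (rare) lesion pixels.
    bce = F.binary_cross_entropy_with_logits(
        logits, target, pos_weight=torch.tensor(pos_weight))
    # Tversky index TP / (TP + alpha*FN + beta*FP): alpha > beta penalizes
    # missed lesion pixels more heavily than false alarms.
    prob = torch.sigmoid(logits)
    tp = (prob * target).sum()
    fn = ((1 - prob) * target).sum()
    fp = (prob * (1 - target)).sum()
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return bce + (1 - tversky)

logits = torch.randn(2, 1, 64, 64)
target = (torch.rand(2, 1, 64, 64) > 0.95).float()  # sparse "lesion" mask
print(wbce_tversky_loss(logits, target))
```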
8

Mau, Stefan, Irena Pletikosa, and Joël Wagner. "Forecasting the next likely purchase events of insurance customers". International Journal of Bank Marketing 36, no. 6 (September 3, 2018): 1125–44. http://dx.doi.org/10.1108/ijbm-11-2016-0180.

Abstract
Purpose: The purpose of this paper is to demonstrate the value of enriched customer data for analytical customer relationship management (CRM) in the insurance sector. In this study, online quotes from an insurer’s website are evaluated in terms of serving as a trigger event to predict churn, retention, and cross-selling.
Design/methodology/approach: For this purpose, the records of online quotes from a Swiss insurer are linked to records of existing customers from 2012 to 2015. Based on the data from automobile and home insurance policyholders, random forest prediction models for classification are fitted.
Findings: Enhancing traditional customer data with such additional information substantially boosts the accuracy of predicting future purchases. The models identify customers who have a high probability of adjusting their insurance coverage.
Research limitations/implications: The findings of the study imply that enriching traditional customer data with online quotes yields a valuable approach to predicting purchase behavior. Moreover, the quote data provide supplementary features that contribute to improving prediction performance.
Practical implications: This study highlights the importance of selecting the relevant data sources to target the right customers at the right time and to thus benefit from analytical CRM practices.
Originality/value: This paper is one of the first to investigate the potential value of data-rich environments for insurers and their customers. It provides insights on how to identify relevant customers for ensuing marketing activities efficiently and thus avoid irrelevant offers. Hence, the study creates value for insurers as well as customers.
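A toy sketch of the modeling step described above, with invented column names and data: base customer features are enriched with online-quote signals and a random forest classifier is fitted.

```python
# Toy sketch: enrich base customer features with online-quote signals and
# fit a random forest churn classifier (columns and data are invented).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

customers = pd.DataFrame({
    "tenure_years":   [1, 7, 3, 10, 2, 5],
    "premium_chf":    [900, 400, 650, 300, 820, 500],
    "quotes_90d":     [3, 0, 2, 0, 4, 1],   # enrichment: recent online quotes
    "quoted_cheaper": [1, 0, 1, 0, 1, 0],   # enrichment: quote below premium
    "churned":        [1, 0, 1, 0, 1, 0],
})
X, y = customers.drop(columns="churned"), customers["churned"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(model.predict_proba(X_te)[:, 1])  # predicted churn probabilities
```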
9

Petegrosso, Raphael, Zhuliu Li, and Rui Kuang. "Machine learning and statistical methods for clustering single-cell RNA-sequencing data". Briefings in Bioinformatics 21, no. 4 (June 27, 2019): 1209–23. http://dx.doi.org/10.1093/bib/bbz063.

Abstract
Single-cell RNA-sequencing (scRNA-seq) technologies have enabled the large-scale whole-transcriptome profiling of each individual single cell in a cell population. A core analysis of the scRNA-seq transcriptome profiles is to cluster the single cells to reveal cell subtypes and infer cell lineages based on the relations among the cells. This article reviews the machine learning and statistical methods for clustering scRNA-seq transcriptomes developed in the past few years. The review focuses on how conventional clustering techniques such as hierarchical clustering, graph-based clustering, mixture models, k-means, ensemble learning, neural networks, and density-based clustering are modified or customized to tackle the unique challenges in scRNA-seq data analysis, such as the dropout of low-expression genes, low and uneven read coverage of transcripts, highly variable total mRNAs from single cells, and ambiguous cell markers in the presence of technical biases and irrelevant confounding biological variations. We review how cell-specific normalization, the imputation of dropouts, and dimension reduction methods can be applied with new statistical or optimization strategies to improve the clustering of single cells. We also introduce more advanced approaches to cluster scRNA-seq transcriptomes in time series data and multiple cell populations and to detect rare cell types. Several software packages developed to support the cluster analysis of scRNA-seq data are also reviewed and experimentally compared to evaluate their performance and efficiency. Finally, we conclude with useful observations and possible future directions in scRNA-seq data analytics. Availability: All the source code and data are available at https://github.com/kuanglab/single-cell-review.
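A minimal sketch of the kind of conventional pipeline the review surveys, under simplifying assumptions: library-size normalization, log transform, PCA, then k-means. Real analyses add quality control, feature selection, and imputation of dropouts.

```python
# Minimal sketch of a conventional scRNA-seq clustering pipeline
# (synthetic counts; real pipelines add QC, feature selection, imputation).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
counts = rng.poisson(2.0, size=(300, 2000)).astype(float)  # cells x genes

# Cell-specific normalization: scale each cell to the median library size,
# then log-transform to stabilize variance.
lib = counts.sum(axis=1, keepdims=True)
logx = np.log1p(counts / lib * np.median(lib))

# Dimension reduction followed by clustering.
embedding = PCA(n_components=20, random_state=0).fit_transform(logx)
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(embedding)
print(np.bincount(labels))  # number of cells per cluster
```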
10

Soleimanvandi Azar, Neda, Seyed Hossein Mohaqeqi Kamal, Homeira Sajadi, Gholam Reza Ghaedamini Harouni, Salaheddin Karimi, and Ameneh Setareh Foroozan. "Barriers and Facilitators of the Outpatient Health Service Use by the Elderly". Salmand 15, no. 3 (October 1, 2020): 258–77. http://dx.doi.org/10.32598/sija.15.3.551.3.

Abstract
Objectives: Increasing care needs for the elderly are an important concern for different countries, especially those with an aging population. It is important for health policy making to have knowledge of the factors affecting the use of health services in the elderly to identify the potential problems and develop appropriate interventions for improving utilization and increasing access to health services. This study aims to investigate the barriers and facilitators of the outpatient health service use in the elderly.
Methods & Materials: In this systematic review, studies in English published from 1996 to 2019 were searched in Web of Science, PubMed and Scopus databases using PRISMA guidelines and related keywords. After eliminating duplicate and irrelevant articles, the quality of remaining articles was evaluated by two evaluators independently, based on the STROBE checklist. Narrative synthesis method was used to combine the data.
Results: Forty-four eligible studies were included for the review. The determinants of the health service use were divided into three categories of predisposing factors (e.g. age, gender, marital status, ethnicity), enabling factors (e.g. income, insurance coverage, education level, employment status, social network, social support), and need factors (e.g. having chronic disease, self-assessed health status, severity of disease, number of diseases, comorbid diseases, physical disability, unhealthy lifestyle). Findings showed that age >80 years, ethnic minority, being unemployed and retired, low educational level, small and limited social network, and physical disability were the barriers to using outpatient health services, while being female, married, having insurance, social support, having a companion during a disease, having children, high income level, and shorter distance to the health care centers were the facilitators of using outpatient health services in the elderly.
Conclusion: A group of factors are associated with the outpatient health service use by the elderly. These factors include predisposing, enabling, and need-related factors according to Andersen’s behavioral model of health service use. Interventions to increase the use of health services by the elderly should be based on these factors, and should be taken into account by the policymakers to reduce the burden of health services caused by diseases.

Theses on the topic "Irrelevance Coverage Model"

1

Ye, Luyao. "Analysis of the Components and Systems Relevance". Doctoral thesis, 2021. http://hdl.handle.net/2158/1251754.

Abstract
In systems with Imperfect Fault Coverage (IFC), all components are subject to uncovered failures, possibly threatening the whole system. Therefore, to improve system reliability, it is important to timely detect, identify, and shut down the components that are no longer relevant to the system operation. To this end, the Irrelevance Coverage Model (ICM) was proposed based on the Imperfect Fault Coverage Model (IFCM). In the ICM, any component detected as irrelevant can be safely shut down without reducing the system reliability, preventing the case where its eventual failure remains uncovered and causes a direct system failure. This not only improves the system reliability but also saves energy. This thesis addresses the quantitative evaluation of component relevance. It assumes that components have independent and identically distributed (i.i.d.) lifetimes, so as to describe only the impact of the system design on the system reliability and energy consumption. To this end, the Component Relevance is proposed to represent the probability that a component keeps its relevance throughout the system lifetime. The Birnbaum Importance (BI) measure is then applied to systems with the ICM; the BI measure with the ICM accounts for the relevance of components as well as their reliability. In addition, the changes in component importance across three models, i.e., the Perfect Fault Coverage Model (PFCM), the IFCM, and the ICM, are analyzed. Moreover, the Dynamic Relevance Measure (DRM) is defined to characterize the irrelevant components at different stages of the system lifetime, depending on the number of component failures that have occurred; this supports evaluating the probability that the system fails due to uncovered failures of irrelevant components. The gain from shutting down irrelevant components in the ICM can be evaluated both in terms of the energy saved and the fraction of the average system lifetime during which the system is not coherent. Finally, the system reliability over time is efficiently derived, both in the case that irrelevance is not considered and in the case that irrelevant components can be immediately isolated, notably supporting any general (i.e., non-Markovian) distribution for the failure time of components. The feasibility and effectiveness of the proposed analysis methods are assessed on two real-scale case studies addressing the reliability evaluation of a flight control system and a multi-hop Wireless Sensor Network (WSN). For the left edge flap of the F18 flight control system, the most important components are identified, so that improving their reliability yields the largest gain in system reliability. For the different WSN topologies, the reliability and relevance of the Diagonal topology are better than those of the Orthogonal topology, so the WSN with the Diagonal topology should be given priority.
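The contrast between the IFCM and the ICM can be illustrated with a small Monte Carlo sketch on the toy structure x1 OR (x2 AND x3): once x2 has failed, x3 is irrelevant, and under the ICM it is shut down so that its later uncovered failure cannot bring the system down. The exponential lifetimes, coverage value, and structure are illustrative assumptions, not the thesis's case studies.

```python
# Monte Carlo sketch contrasting the IFCM and the ICM on the toy structure
# phi = x1 OR (x2 AND x3). Lifetimes, coverage, and structure are assumed.
import random
from itertools import product

N = 3  # components 0, 1, 2 stand for x1, x2, x3

def works(state):
    """Structure function phi: list of booleans -> system up/down."""
    return state[0] or (state[1] and state[2])

def is_irrelevant(failed, i):
    """i is irrelevant once no possible state of the still-working
    components lets flipping i change the structure function."""
    others = [j for j in range(N) if j != i and j not in failed]
    for bits in product((True, False), repeat=len(others)):
        x = [j not in failed for j in range(N)]
        for j, b in zip(others, bits):
            x[j] = b
        x[i] = True
        up = works(x)
        x[i] = False
        if up != works(x):
            return False
    return True

def survives(t, coverage, icm, rate=1.0):
    """One simulated mission of length t; True if the system stays up."""
    events = sorted((random.expovariate(rate), i) for i in range(N))
    failed = set()
    for when, i in events:
        if when > t:
            break
        if icm and is_irrelevant(failed, i):
            continue  # ICM: component was shut down, failure is harmless
        if random.random() > coverage:
            return False  # uncovered failure: direct system loss
        failed.add(i)
        if not works([j not in failed for j in range(N)]):
            return False
    return True

runs = 100_000
for icm in (False, True):
    r = sum(survives(1.0, coverage=0.9, icm=icm) for _ in range(runs)) / runs
    print("ICM " if icm else "IFCM", "reliability ~", round(r, 4))
```

Running the sketch shows a higher estimated reliability under the ICM, since shutting down the irrelevant component removes one opportunity for an uncovered failure.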

Conference papers on the topic "Irrelevance Coverage Model"

1

Song, Kangning, Siwei Zhou, Luyao Ye, Piaoyi Liu, Jing Tian, and Jianwen Xiang. "Reliability Analysis of Multi-State System Based on Irrelevance Coverage Model". In 2022 IEEE 27th Pacific Rim International Symposium on Dependable Computing (PRDC). IEEE, 2022. http://dx.doi.org/10.1109/prdc55274.2022.00032.

2

Yang, Ming, Dongdong Zhao, Luyao Ye, Siwei Zhou, and Jianwen Xiang. "Reliability Analysis of Phased-Mission System in Irrelevancy Coverage Model". In 2019 IEEE 19th International Conference on Software Quality, Reliability and Security (QRS). IEEE, 2019. http://dx.doi.org/10.1109/qrs.2019.00025.
