To view the other types of publications on this topic, follow the link: Rule based updates.

Journal articles on the topic "Rule based updates"

Familiarize yourself with the top 50 journal articles for research on the topic "Rule based updates".

Next to every work in the bibliography, the option "Add to bibliography" is available. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication in PDF format and read an online annotation of the work, provided the relevant parameters are available in its metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

Erwig, Martin, and Delin Ren. "A rule-based language for programming software updates." ACM SIGPLAN Notices 37, no. 12 (December 2002): 88–97. http://dx.doi.org/10.1145/636517.636530.

2

Salah, Ramzi, Muaadh Mukred, Lailatul Qadri binti Zakaria, Rashad Ahmed, and Hasan Sari. "A New Rule-Based Approach for Classical Arabic in Natural Language Processing." Journal of Mathematics 2022 (January 21, 2022): 1–20. http://dx.doi.org/10.1155/2022/7164254.

Annotation:
Named entity recognition (NER) is fundamental in several natural language processing applications. It involves finding and categorizing text into predefined categories such as a person's name, location, and so on. One of the best-known approaches to identifying named entities is the rule-based approach. This paper introduces a rule-based NER method that can be used to examine Classical Arabic documents. The proposed method relies on trigger words, patterns, gazetteers, rules, and blacklists generated from linguistic information about named entities in Arabic. The method operates in three stages: an operational stage, a preprocessing stage, and a rule-application stage. The proposed approach was evaluated, and the results indicate that it achieved 90.2% precision, 89.3% recall, and an F-measure of 89.5%. This new approach was introduced to overcome the coverage challenges of rule-based NER systems, especially when dealing with Classical Arabic texts. It improved their performance and allowed for automated rule updates. The grammar rules, gazetteers, blacklist, patterns, and trigger words were all integrated into the rule-based system in this way.
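
To make the trigger-word/gazetteer/blacklist mechanism concrete, here is a minimal illustrative sketch of rule-based entity tagging in Python. It is a toy under stated assumptions: the word lists and the two rules are invented for illustration and are not the authors' resources.

```python
# Minimal rule-based NER sketch: a trigger word signals that the next
# token may be a named entity, a gazetteer whitelists known names,
# and a blacklist suppresses false positives.
TRIGGERS = {"said", "emir"}          # hypothetical trigger words
GAZETTEER = {"Baghdad", "Damascus"}  # hypothetical place-name gazetteer
BLACKLIST = {"Book"}                 # tokens never tagged as entities

def tag_entities(tokens):
    entities = []
    for i, tok in enumerate(tokens):
        if tok in BLACKLIST:
            continue
        if tok in GAZETTEER:                          # rule 1: gazetteer lookup
            entities.append((tok, "LOC"))
        elif i > 0 and tokens[i - 1].lower() in TRIGGERS:
            entities.append((tok, "PER"))             # rule 2: trigger word
    return entities

print(tag_entities("the emir Husayn traveled to Baghdad".split()))
# [('Husayn', 'PER'), ('Baghdad', 'LOC')]
```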
3

Slota, Martin, and João Leite. "The rise and fall of semantic rule updates based on SE-models." Theory and Practice of Logic Programming 14, no. 6 (August 7, 2013): 869–907. http://dx.doi.org/10.1017/s1471068413000100.

Annotation:
Logic programs under the stable model semantics, or answer-set programs, provide an expressive rule-based knowledge representation framework, featuring a formal, declarative and well-understood semantics. However, handling the evolution of rule bases is still a largely open problem. The Alchourrón, Gärdenfors and Makinson (AGM) framework for belief change was shown to give inappropriate results when directly applied to logic programs under a non-monotonic semantics such as the stable models. The approaches developed so far to address this issue propose update semantics based on manipulating the syntactic structure of programs and rules. More recently, AGM revision has been successfully applied to a significantly more expressive semantic characterisation of logic programs based on SE-models. This is an important step, as it changes the focus from the evolution of a syntactic representation of a rule base to the evolution of its semantic content. In this paper, we borrow results from the area of belief update to tackle the problem of updating (instead of revising) answer-set programs. We prove a representation theorem which makes it possible to constructively define any operator satisfying a set of postulates derived from Katsuno and Mendelzon's postulates for belief update. We define a specific operator based on this theorem, examine its computational complexity and compare the behaviour of this operator with syntactic rule update semantics from the literature. Perhaps surprisingly, we uncover a serious drawback of all rule update operators based on Katsuno and Mendelzon's approach to update and on SE-models.
4

Yuan, Peiyan, Xiaoxiao Pang, Ping Liu, and En Zhang. "FollowMe: One Social Importance-Based Collaborative Scheme in MONs." Future Internet 11, no. 4 (April 17, 2019): 98. http://dx.doi.org/10.3390/fi11040098.

Annotation:
The performance of mobile opportunistic networks mainly relies on collaboration among nodes. Thus far, researchers have ignored the influence of node sociality on the incentive process, leading to poor network performance. Considering the fact that followers always imitate the behavior of superstars, this paper proposes FollowMe, which integrates the social importance of nodes with evolutionary game theory to improve the collaborative behavior of nodes. First, we use the prisoner’s dilemma model to establish the matrix of game gains between nodes. Second, we introduce the signal reference as a game rule between nodes. The number of nodes choosing different strategies in a game round is used to calculate the cumulative income of the node in combination with the probability formula. Finally, the Fermi function is used to determine whether the node updates the strategy. The simulation results show that, compared with the random update rule, the proposed strategy is more capable of promoting cooperative behavior between nodes to improve the delivery rate of data packets.
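
The Fermi function named in the last step has a standard closed form in evolutionary game theory: a node imitates a neighbor's strategy with probability 1 / (1 + exp((P_i - P_j) / K)), where P_i and P_j are the two payoffs. A minimal sketch of that rule follows; the noise parameter K and the payoff values are illustrative, not taken from the paper.

```python
import math
import random

def fermi_adopt_probability(my_payoff, neighbor_payoff, K=0.1):
    """Probability of imitating the neighbor's strategy: approaches 1 when
    the neighbor earns much more, 0 when much less; K is imitation noise."""
    return 1.0 / (1.0 + math.exp((my_payoff - neighbor_payoff) / K))

def maybe_update_strategy(my_strategy, my_payoff, nb_strategy, nb_payoff):
    if random.random() < fermi_adopt_probability(my_payoff, nb_payoff):
        return nb_strategy   # imitate the better-performing neighbor
    return my_strategy       # otherwise keep the current strategy

print(fermi_adopt_probability(1.0, 3.0))  # close to 1: adoption very likely
```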
5

Zheng, Hongdi, Junfeng Wang, Jianping Zhang, and Ruirui Li. "IRTS: An Intelligent and Reliable Transmission Scheme for Screen Updates Delivery in DaaS." ACM Transactions on Multimedia Computing, Communications, and Applications 17, no. 3 (July 22, 2021): 1–24. http://dx.doi.org/10.1145/3440035.

Annotation:
Desktop-as-a-service (DaaS) has been recognized as an elastic and economical solution that enables users to access personal desktops from anywhere at any time. During the interaction process of DaaS, users rely on screen updates to perceive execution results remotely, and thus the reliability and timeliness of screen updates transmission have a great influence on users’ quality of experience (QoE). However, the efficient transmission of screen updates in DaaS is facing severe challenges: most transmission schemes applied in DaaS determine sending strategies in terms of pre-set rules, lacking the intelligence to utilize bandwidth rationally and fit new network scenarios. Meanwhile, they tend to focus on reliability or timeliness and perform unsatisfactorily in ensuring reliability and timeliness simultaneously, leading to lower transmission efficiency of screen updates and users’ QoE when network conditions turn unfavorable. In this article, an intelligent and reliable end-to-end transmission scheme (IRTS) is proposed to cope with the preceding issues. IRTS draws support from reinforcement learning by adopting SARSA, an online learning method based on the temporal difference update rule, to grasp the optimal mapping between network states and sending actions, which extricates IRTS from the reliance on pre-set rules and augments its adaptability to different network conditions. Moreover, IRTS guarantees reliability and timeliness via an adaptive loss recovery method, which intends to recover lost screen updates data automatically with fountain code while controlling the number of redundant packets generated. Extensive performance evaluations are conducted, and numerical results show that IRTS outperforms the reference schemes in display quality, end-to-end delay/delay jitter, and fairness when transferring screen updates under various network conditions, proving that IRTS can enhance the transmission efficiency of screen updates and users’ QoE in DaaS.
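
SARSA, the on-policy temporal-difference method named above, nudges its action-value estimate toward the observed reward plus the discounted value of the next state-action pair actually taken: Q(s,a) += alpha * (r + gamma * Q(s',a') - Q(s,a)). A generic sketch follows; the epsilon-greedy policy and the hyperparameters are textbook defaults, and the state/action names are invented, not the paper's network states and sending actions.

```python
import random
from collections import defaultdict

Q = defaultdict(float)              # Q[(state, action)] -> estimated value
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def choose_action(state, actions):
    """Epsilon-greedy: mostly exploit the best-known action, sometimes explore."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def sarsa_update(s, a, reward, s_next, a_next):
    """On-policy TD update: the target uses the action actually taken next."""
    td_target = reward + GAMMA * Q[(s_next, a_next)]
    Q[(s, a)] += ALPHA * (td_target - Q[(s, a)])

sarsa_update("congested", "reduce_rate", reward=1.0,
             s_next="stable", a_next="hold_rate")   # toy state/action names
```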
6

Thill, Marc, Christian Jackisch, Wolfgang Janni, Volkmar Müller, Ute-Susann Albert, Ingo Bauerfeind, Jens Blohmer, et al. "AGO Recommendations for the Diagnosis and Treatment of Patients with Locally Advanced and Metastatic Breast Cancer: Update 2019." Breast Care 14, no. 4 (2019): 247–55. http://dx.doi.org/10.1159/000500999.

Annotation:
Every year the Breast Committee of the Arbeitsgemeinschaft Gynäkologische Onkologie (German Gynecological Oncology Group, AGO), a group of gynecological oncologists specialized in breast cancer and interdisciplinary members specialized in pathology, radiologic diagnostics, medical oncology, and radiation oncology, prepares and updates evidence-based recommendations for the diagnosis and treatment of patients with early and metastatic breast cancer. Every update is performed according to a documented rule-fixed algorithm, by thoroughly reviewing and scoring the recent publications for their scientific validity and clinical relevance. This current publication presents the 2019 update on the recommendations for metastatic breast cancer.
7

Li, Guangquan, Ting Wang, Qi Chen, Peng Shao, Naixue Xiong, and Athanasios Vasilakos. "A Survey on Particle Swarm Optimization for Association Rule Mining." Electronics 11, no. 19 (September 24, 2022): 3044. http://dx.doi.org/10.3390/electronics11193044.

Annotation:
Association rule mining (ARM) is one of the core techniques of data mining, used to discover potentially valuable association relationships from mixed datasets. In current research, various heuristic algorithms have been introduced into ARM to address the high computation time of traditional ARM. Although detailed reviews of heuristic algorithms for ARM are available, this paper differs from existing reviews in that it aims to provide a more comprehensive and multi-faceted survey of emerging research, serving as a reference to help researchers in the field understand state-of-the-art PSO-based ARM algorithms. In this paper, we review the existing research results. Heuristic algorithms for ARM are divided into three main groups: biologically inspired, physically inspired, and other algorithms. Additionally, different types of ARM and their evaluation metrics are described, and the current status of improvements to PSO algorithms is discussed in stages, including swarm initialization, algorithm parameter optimization, optimal particle update, and velocity and position updates. Furthermore, we discuss the applications of PSO-based ARM algorithms and propose further research directions by exploring the existing problems.
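
The velocity and position updates that the survey tracks across PSO variants follow the canonical equations v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x) and x = x + v. A minimal single-particle sketch with typical textbook coefficients (not values prescribed by the survey):

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One canonical PSO update for a particle at position x with velocity v."""
    new_v = [
        w * v[i]
        + c1 * random.random() * (pbest[i] - x[i])   # pull toward own best
        + c2 * random.random() * (gbest[i] - x[i])   # pull toward swarm best
        for i in range(len(x))
    ]
    new_x = [x[i] + new_v[i] for i in range(len(x))]
    return new_x, new_v

x, v = pso_step([0.0, 0.0], [0.1, -0.1], pbest=[1.0, 1.0], gbest=[2.0, 2.0])
```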
8

Boyle, P. A., L. Del Debbio, R. J. Hudspith, N. Garron, E. Kerrane, K. Maltman, and J. M. Zanotti. "Lattice Input on the Inclusive τ Decay Vus Puzzle." International Journal of Modern Physics: Conference Series 35 (January 2014): 1460441. http://dx.doi.org/10.1142/s2010194514604414.

Annotation:
Recent analyses of flavor-breaking hadronic-τ-decay-based sum rules produce values of |Vus| ~ 3σ low compared to 3-family unitarity expectations. An unresolved systematic issue is the significant variation in |Vus| produced by different prescriptions for treating the slowly converging D = 2 OPE series. We investigate the reliability of these prescriptions using lattice data for various flavor-breaking correlators and show the fixed-scale prescription is clearly preferred. Preliminary updates of the conventional τ-based, and related mixed τ-electroproduction-data-based, sum rule analyses incorporating B-factory results for low-multiplicity strange τ decay mode distributions are then performed. Use of the preferred FOPT D = 2 OPE prescription is shown to significantly reduce the discrepancy between 3-family unitarity expectations and the sum rule results.
9

Giacomini, Raffaella, Vasiliki Skreta, and Javier Turen. "Heterogeneity, Inattention, and Bayesian Updates." American Economic Journal: Macroeconomics 12, no. 1 (January 1, 2020): 282–309. http://dx.doi.org/10.1257/mac.20180235.

Annotation:
We formulate a theory of expectations updating that fits the dynamics of accuracy and disagreement in a new survey of professional forecasters. We document new stylized facts, including the puzzling persistence of disagreement as uncertainty resolves. Our theory explains these facts by allowing for different channels of heterogeneity. Agents produce an initial forecast based on heterogeneous priors and are heterogeneously “inattentive.” Updaters use Bayes’ rule and interpret public information using possibly heterogeneous models. Structural estimation of our theory supports the conclusion that in normal times heterogeneous priors and inattention are enough to generate persistent disagreement, but not during the crisis. (JEL C53, D81, D83, D84, E31, E37)
10

Rajab, Sharifa. "Rule Base Simplification and Constrained Learning for Interpretability in TSK Neuro-Fuzzy Modelling." International Journal of Fuzzy System Applications 9, no. 2 (April 2020): 31–58. http://dx.doi.org/10.4018/ijfsa.2020040102.

Annotation:
Neuro-fuzzy systems based on the fuzzy model proposed by Takagi, Sugeno and Kang, known as the TSK fuzzy model, provide a powerful method for modelling uncertain and highly complex non-linear systems. The initial fuzzy rule base in TSK neuro-fuzzy systems is usually obtained using data-driven approaches. This process induces redundancy into the system by adding redundant fuzzy rules and fuzzy sets, which increases complexity and adversely affects the generalization capability and transparency of the fuzzy model being designed. In this article, the authors explore the potential of TSK fuzzy modelling for developing comparatively interpretable neuro-fuzzy systems with better generalization capability in terms of higher approximation accuracy. The approach consists of three phases: the first deals with automatic data-driven rule base induction, followed by a rule base simplification phase. Rule base simplification uses similarity analysis to remove similar fuzzy sets, and the resulting redundant fuzzy rules, from the rule base, thereby simplifying the neuro-fuzzy model. During the third phase, the parameters of the membership functions are fine-tuned using a constrained hybrid learning technique. The learning process is constrained, which prevents unchecked updates to the parameters, so that a highly complex rule base does not emerge at the end of the model optimization phase. An empirical investigation of this methodology is carried out by applying the approach to two well-known non-linear benchmark forecasting problems and a real-world stock price forecasting problem. The results indicate that rule base simplification using similarity analysis effectively removes redundancy from the system, which improves interpretability. The removal of redundancy also increases the generalization capability of the system, measured in terms of increased forecasting accuracy. For all three forecasting problems, the proposed neuro-fuzzy system demonstrated a better accuracy-interpretability tradeoff than two well-known TSK neuro-fuzzy models for function approximation.
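
Similarity analysis for rule-base simplification is often a Jaccard-style measure over discretized membership functions, with fuzzy sets merged when their similarity exceeds a threshold. The sketch below illustrates that general idea only; the Gaussian membership functions and the 0.8 merge threshold are assumptions, not the article's exact procedure.

```python
import numpy as np

def gaussian_mf(x, center, sigma):
    """Gaussian membership function on a discretized domain."""
    return np.exp(-0.5 * ((x - center) / sigma) ** 2)

def similarity(mu_a, mu_b):
    """Jaccard-style similarity of two discretized fuzzy sets:
    |A intersect B| / |A union B|, with min as intersection, max as union."""
    return np.minimum(mu_a, mu_b).sum() / np.maximum(mu_a, mu_b).sum()

x = np.linspace(0, 10, 201)          # discretized universe of discourse
a = gaussian_mf(x, center=4.0, sigma=1.0)
b = gaussian_mf(x, center=4.2, sigma=1.0)

if similarity(a, b) > 0.8:           # highly overlapping sets ...
    a = gaussian_mf(x, center=4.1, sigma=1.0)   # ... are merged into one,
    b = a                                       # shrinking the rule base
```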
11

Signes-Pont, Maria Teresa, José Juan Cortés-Plana, and Higinio Mora-Mora. "An Epidemic Grid Model to Address the Spread of Covid-19: A Comparison between Italy, Germany and France." Mathematical and Computational Applications 26, no. 1 (February 8, 2021): 14. http://dx.doi.org/10.3390/mca26010014.

Annotation:
This paper presents a discrete compartmental Susceptible–Exposed–Infected–Recovered/Dead (SEIR/D) model to address the expansion of Covid-19. The model is based on a grid. As time passes, the status of the cells updates by means of binary rules following a neighborhood and a delay pattern. This model has already been analyzed in previous works and successfully compared with the corresponding continuous models solved by ordinary differential equations (ODE), with the intention of finding the homologous parameters between both approaches. Thus, it has been possible to prove that the combination of neighborhood and update rule is responsible for the rate of expansion and recovery/death of the disease. The delays (between Susceptible and Asymptomatic, Asymptomatic and Infected, Infected and Recovered/Dead) may have a crucial impact on both the height and timing of the peak of Infected and on the recovery/death rate. This theoretical model has been successfully tested in the case of the dissemination of information through mobile social networks and in the case of plant pests.
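
As a rough illustration of this kind of grid model, each cell holds a compartment label and is updated synchronously by a binary rule over its neighborhood. The toy sketch below uses a von Neumann neighborhood, a simple threshold rule and no delays; the paper's actual neighborhoods, rules and delay patterns differ.

```python
import copy

S, E, I, R = 0, 1, 2, 3   # susceptible, exposed, infected, recovered/dead

def step(grid, infect_threshold=1):
    """One synchronous update: S cells with enough infected von Neumann
    neighbours become E; E cells progress to I; I cells progress to R."""
    rows, cols = len(grid), len(grid[0])
    new = copy.deepcopy(grid)
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == S:
                neighbours = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
                infected = sum(
                    1 for (i, j) in neighbours
                    if 0 <= i < rows and 0 <= j < cols and grid[i][j] == I
                )
                if infected >= infect_threshold:
                    new[r][c] = E
            elif grid[r][c] == E:
                new[r][c] = I
            elif grid[r][c] == I:
                new[r][c] = R
    return new

grid = [[S, S, S], [S, I, S], [S, S, S]]
grid = step(grid)   # the centre recovers; its four neighbours become exposed
```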
12

Marple, Gary, and David Walker. "Low-cost assimilation for sound speed fields in the PCA framework." Journal of the Acoustical Society of America 152, no. 4 (October 2022): A201. http://dx.doi.org/10.1121/10.0016029.

Annotation:
Forecast ocean sound speed fields are often inaccurate and need to be reconciled with observation data. Conventional data-assimilation methods used for this are generally quite computationally intensive. A compressed representation of the forecast sound speed fields can be obtained using principal component analysis (PCA), where the forecast fields are represented by a linear combination of PCA modes. We develop a low-cost assimilation approach that updates the PCA-compressed representation of the background forecast field based on observation data. The approach uses Bayes' rule to obtain the maximum likelihood estimate for the update in the space spanned by the PCA modes. Significant cost savings come from the dimensionality reduction provided by PCA. Results are presented for sound speed fields derived from HYCOM ocean forecast data and expendable bathythermograph data obtained from the Scripps Institution of Oceanography.
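
Under the usual linear-Gaussian assumptions, an update in PCA-coefficient space reduces to a small matrix computation instead of a full-field assimilation. A NumPy sketch of such an update follows; the observation operator and covariances are invented placeholders, not the authors' configuration.

```python
import numpy as np

def map_update(c_prior, P, H, R, y):
    """MAP / weighted least-squares update of PCA coefficients.

    c_prior : prior coefficient vector (from the forecast field)
    P       : prior covariance of the coefficients
    H       : linear map from coefficients to observed values
    R       : observation-error covariance
    y       : observations (e.g., sound speeds at measured depths)
    """
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # gain matrix
    return c_prior + K @ (y - H @ c_prior)

# Two PCA modes observed through a three-point observation operator.
c = map_update(
    c_prior=np.zeros(2),
    P=np.eye(2),
    H=np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]),
    R=0.1 * np.eye(3),
    y=np.array([0.9, 1.1, 2.0]),
)
```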
13

Cardona, Diana M., Stephanie Peditto, Loveleen Singh, and Stephen Black-Schaffer. "Updates to Medicare's Quality Payment Program That May Impact You." Archives of Pathology & Laboratory Medicine 144, no. 6 (June 1, 2020): 679–85. http://dx.doi.org/10.5858/arpa.2019-0376-ra.

Annotation:
Context.— Within Medicare's Quality Payment Program, and more specifically the Merit-based Incentive Payment System, pathologists stand to potentially lose or gain approximately $2 billion during the initial 7 years of the program. If you or your group provides services to Medicare beneficiaries, you will likely need to comply with the program. Objective.— To avoid potential reductions in Medicare reimbursement, pathologists need to understand the requirements of these new payment programs. Data Sources.— Each year the Centers for Medicare & Medicaid Services publish a Final Rule detailing the program requirements and updates. 2020 marks the fourth reporting year for the Merit-based Incentive Payment System. Performance this year will impact 2022 Medicare Part B distributions by up to ±9%. Conclusions.— By staying up to date with the ever-evolving Merit-based Incentive Payment System requirements, pathologists will be better equipped to successfully comply with this relatively new payment system, reduce the burden of participating, understand the reporting differences of the various performance categories, and thereby be able to maximize their scoring and incentive potential.
14

Poenisch, Volker, and Andrew Clark. "Elements of an Interface Standard for Knowledge Sources in Knowledge-Based Engineering." Journal of Computing and Information Science in Engineering 6, no. 1 (June 10, 2005): 78–83. http://dx.doi.org/10.1115/1.2164449.

Annotation:
Software architectures for knowledge-based engineering often separate applications from their knowledge sources. We propose elements for a new standard for the resulting interface. The fully automatic transfer of knowledge would require a shared engineering ontology. We suggest something less ambitious, namely to make only part of the knowledge machine-intelligible, such as a number embedded in a rule. This proposal still requires a human to code the knowledge from the source into the application. However, it would allow the application to automatically handle some updates to the knowledge sources. We discuss the details of a formalism to describe the machine-intelligible part of the knowledge; we suggest some meta-data; and we clarify the relationship of the proposal to existing standards.
15

Derhab, Abdelouahid, Mohamed Guerroumi, Mohamed Belaoued, and Omar Cheikhrouhou. "BMC-SDN: Blockchain-Based Multicontroller Architecture for Secure Software-Defined Networks." Wireless Communications and Mobile Computing 2021 (April 21, 2021): 1–12. http://dx.doi.org/10.1155/2021/9984666.

Annotation:
Multicontroller software-defined networks have been widely adopted to enable management of large-scale networks. However, they are vulnerable to several attacks including false data injection, which creates topology inconsistency among controllers. To deal with this issue, we propose BMC-SDN, a security architecture that integrates blockchain and multicontroller SDN and divides the network into several domains. Each SDN domain is managed by one master controller that communicates through blockchain with the masters of the other domains. The master controller creates blocks of network flow updates, and its redundant controllers validate the new block based on a proposed reputation mechanism. The reputation mechanism rates the controllers, i.e., block creator and voters, after each voting operation using constant and combined adaptive fading reputation strategies. The evaluation results demonstrate a fast and optimal detection of fraudulent flow rule injection.
16

Fan, Wenfei, Ruochun Jin, Ping Lu, Chao Tian, and Ruiqi Xu. "Towards event prediction in temporal graphs." Proceedings of the VLDB Endowment 15, no. 9 (May 2022): 1861–74. http://dx.doi.org/10.14778/3538598.3538608.

Annotation:
This paper proposes a class of temporal association rules, denoted by TACOs, for event prediction. As opposed to previous graph rules, TACOs monitor updates to graphs, and can be used to capture temporal interests in recommendation and catch frauds in response to behavior changes, among other things. TACOs are defined on temporal graphs in terms of change patterns and (temporal) conditions, and may carry machine learning (ML) predicates for temporal event prediction. We settle the complexity of reasoning about TACOs, including their satisfiability, implication and prediction problems. We develop a system, referred to as TASTE. TASTE discovers TACOs by iteratively training a rule creator based on generative ML models in a creator-critic framework. Moreover, it predicts events by applying the discovered TACOs. Using real-life and synthetic datasets, we experimentally verify that TASTE is on average 31.4 times faster than conventional data mining methods in TACO discovery, and it improves the accuracy of state-of-the-art event prediction models by 23.4%.
17

Li, Jing Jiao, Ho Cholman, Yong Chen, and Song Ho Pak. "A Studying on Implementation of NIDS Pattern Matching Based on FPGA." Advanced Materials Research 403-408 (November 2011): 1985–88. http://dx.doi.org/10.4028/www.scientific.net/amr.403-408.1985.

Annotation:
Intrusion detection for network security is an application area demanding high throughput. Pattern matching in intrusion detection requires extremely high performance for string matching. Software-based pattern matching is time-consuming and cannot meet high-throughput requirements, whereas hardware-based pattern matching considerably improves matching speed and has several other advantages. This paper describes an FPGA-based pattern matching architecture using a hashing method called XOR hashing. The proposed method updates new patterns without reconfiguration, handles collisions, and achieves high matching performance. The proposed system implements pattern matching using the rule set of Snort, an open-source network intrusion detection system, and was simulated on a PC. Compared with existing hardware methods, the results show that our method achieves relatively high pattern-matching performance, even on a low-cost FPGA device.
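
The general idea of XOR hashing for pattern lookup can be approximated in software: fold the pattern's bytes into a table index with XOR, then verify candidate matches exactly so that collisions are handled. A simplified Python analogue follows; the fold scheme, table size and rule strings are illustrative, and the paper's FPGA datapath is of course not reproduced here.

```python
def xor_hash(pattern: bytes, table_bits: int = 12) -> int:
    """Fold all bytes together with XOR, then mask down to a table index."""
    h = 0
    for i, b in enumerate(pattern):
        h ^= b << (8 * (i % 2))        # alternate low/high byte positions
    return h & ((1 << table_bits) - 1)

# Build a hash table of rule patterns; buckets hold colliding patterns.
rules = [b"/etc/passwd", b"cmd.exe", b"<script>"]
table: dict[int, list[bytes]] = {}
for r in rules:
    table.setdefault(xor_hash(r), []).append(r)

def match(window: bytes) -> bool:
    """Hash the input window, then confirm exactly against the bucket."""
    return window in table.get(xor_hash(window), [])

print(match(b"cmd.exe"))   # True
```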
18

Coffey, J. C., R. Sehgal, K. Culligan, C. Dunne, D. McGrath, N. Lawes, and D. Walsh. "Terminology and nomenclature in colonic surgery: universal application of a rule-based approach derived from updates on mesenteric anatomy." Techniques in Coloproctology 18, no. 9 (June 27, 2014): 789–94. http://dx.doi.org/10.1007/s10151-014-1184-2.

19

Yang, Haohua. "LSTM-Based Deep Model for Investment Portfolio Assessment and Analysis." Applied Bionics and Biomechanics 2022 (July 7, 2022): 1–6. http://dx.doi.org/10.1155/2022/1852138.

Annotation:
In recent years, within the scope of financial quantification, quantitative investment models supported by intelligent algorithms have been proposed. These models attempt to characterize financial time series through machine learning methods in order to predict data and devise investment strategies. The standard long short-term memory (LSTM) neural network has the shortcoming of limited effectiveness on financial time series. This work proposes an improved LSTM design: the predictive performance of the neural network is enhanced by adding an attention mechanism to the LSTM model, and a genetic algorithm (GA) is formulated to tune the model's parameters for better generalization. Using stock index futures data from January 2019 to May 2020, we implement and evaluate the proposed algorithm. The results show that the improved LSTM model proposed in this paper outperforms other designs in multiple respects and performs effectively in investment portfolio design, making it suitable for future investment.
20

Li, Kun, and Dae-Ki Kang. "Enhanced Generative Adversarial Networks with Restart Learning Rate in Discriminator." Applied Sciences 12, no. 3 (January 24, 2022): 1191. http://dx.doi.org/10.3390/app12031191.

Annotation:
A series of Generative Adversarial Networks (GANs) can effectively capture the salient features of a dataset in an adversarial way, thereby generating target data. The discriminator of a GAN provides significant information for updating parameters in the generator and in itself. However, the discriminator usually converges before the generator has been well trained; because of this, GANs frequently fail to converge and are led to mode collapse, which can cause inadequate learning. In this paper, we apply restart learning in the discriminator of the GAN model, which brings more meaningful updates to the training process. Based on CIFAR-10 and the aligned CelebA dataset, the experimental results show that the proposed method improves the performance of a DCGAN, achieving a lower FID score than a stable learning-rate scheme. Compared with two other stable GANs, SNGAN and WGAN-GP, the DCGAN with a restart schedule achieved satisfying performance. Compared with the Two Time-Scale Update Rule, the restart learning rate is more conducive to the training of DCGAN. The empirical analysis indicates that four main parameters have varying degrees of influence on the proposed method, and an appropriate parameter setting is presented.
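
A restart learning rate is commonly realized as cosine annealing with warm restarts: the rate decays within each cycle and jumps back to its maximum at every restart, which periodically re-energizes the discriminator's updates. A generic sketch of such a schedule follows; the period and the learning-rate bounds are placeholders, not the paper's settings.

```python
import math

def restart_lr(step, period=1000, lr_max=2e-4, lr_min=1e-6):
    """Cosine annealing with warm restarts: decay from lr_max to lr_min
    within each period, then jump back to lr_max."""
    t = step % period                    # position within the current cycle
    cos = 0.5 * (1 + math.cos(math.pi * t / period))
    return lr_min + (lr_max - lr_min) * cos

# High right after each restart, low just before the next one.
print(restart_lr(0), restart_lr(999), restart_lr(1000))
```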
21

Santos, Gabriel, Pedro Faria, Zita Vale, Tiago Pinto, and Juan M. Corchado. "Constrained Generation Bids in Local Electricity Markets: A Semantic Approach." Energies 13, no. 15 (August 2, 2020): 3990. http://dx.doi.org/10.3390/en13153990.

Annotation:
The worldwide investment in renewable energy sources is leading to the formation of local energy communities in which users can trade electric energy locally. Regulations and the required enablers for effective transactions in this new context are currently being designed. Hence, the development of software tools to support local transactions is still at an early stage and faces the challenge of constant updates to the data models and business rules. The present paper proposes a novel approach for the development of software tools to solve auction-based local electricity markets, considering the special needs of local energy communities. The proposed approach considers constrained bids that can increase the effectiveness of distributed generation use. The proposed method takes advantage of semantic web technologies, in order to provide models with the required dynamism to overcome the issues related to the constant changes in data and business models. Using such techniques allows the system to be agnostic to the data model and business rules. The proposed solution includes the proposed constraints, application ontology, and semantic rule templates. The paper includes a case study based on real data that illustrates the advantages of using the proposed solution in a community with 27 consumers.
22

Yuan, Shuai, and Honglei Wang. "Research on Improvement of the Combination Method for Conflicting Evidence Based on Historical Data." Symmetry 12, no. 5 (May 6, 2020): 762. http://dx.doi.org/10.3390/sym12050762.

Annotation:
In a multi-sensor system, differences in sensor performance and in the environment in which evidence is collected mean that the collected evidence can be highly conflicting, which leads to the failure of D-S evidence theory. Current research on combination methods for conflicting evidence focuses on eliminating the "Zadeh paradox" brought about by conflicting evidence, but does not effectively distinguish evidence from different sources. In this paper, the credibility of each piece of evidence to be combined is weighted based on historical data, and the modified evidence is obtained by weighted averaging. The final result is then obtained by combining the modified evidence using D-S evidence theory, and an improved decision rule is used for the final decision. After the decision, the system updates and stores the historical data based on actual results. The improved decision rule solves the problem that the system cannot make a decision when two or more propositions correspond to the maximum support in the final combination result. The method satisfies the commutative and associative laws, so it has the symmetry needed for combining time-domain evidence. Numerical examples show that the combination method for conflicting evidence based on historical data not only solves the "Zadeh paradox" problem but also yields more reasonable results.
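
For two hypotheses, the weighting step and Dempster's combination rule can be spelled out directly; combining the credibility-weighted average with itself mirrors the weighted-average scheme the abstract describes. The toy sketch below uses invented mass values and credibility weights, whereas in the paper the weights come from historical data.

```python
def weighted_average(masses, weights):
    """Credibility-weighted average of several mass functions."""
    total = sum(weights)
    return {k: sum(w * m[k] for m, w in zip(masses, weights)) / total
            for k in masses[0]}

def dempster(m1, m2):
    """Dempster's rule for two single-hypothesis focal elements A and B.
    Normalization fails when conflict == 1, which is the 'Zadeh paradox'."""
    conflict = m1["A"] * m2["B"] + m1["B"] * m2["A"]
    norm = 1.0 - conflict
    return {k: m1[k] * m2[k] / norm for k in ("A", "B")}

# Two sensors; the second is judged less credible from historical accuracy.
m_avg = weighted_average(
    [{"A": 0.9, "B": 0.1}, {"A": 0.2, "B": 0.8}], weights=[0.8, 0.2]
)
combined = dempster(m_avg, m_avg)   # combine the modified evidence
print(combined)                     # support for A clearly dominates
```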
23

Amrr, Syed Muhammad, Abdulrahman Alturki, Ankit Kumar, and M. Nabi. "Prescribed Performance-Based Event-Driven Fault-Tolerant Robust Attitude Control of Spacecraft under Restricted Communication." Electronics 10, no. 14 (July 16, 2021): 1709. http://dx.doi.org/10.3390/electronics10141709.

Annotation:
This paper explores the problem of attitude stabilization of spacecraft under multiple uncertainties and constrained bandwidth resources. The proposed control law is designed by combining the sliding mode control (SMC) technique with a prescribed performance control (PPC) method. Further, the control input signal is executed in an aperiodic time framework using the event-trigger (ET) mechanism to minimize the control data transfer through a constrained wireless network. The SMC provides robustness against inertial uncertainties, disturbances, and actuator faults, whereas the PPC strategy aims to achieve a predefined system performance. The PPC technique is developed by transforming the system attitude into a new variable using the prescribed performance function, which acts as a predefined constraint for transient and steady-state responses. In addition, the ET mechanism updates the input value to the actuator only when there is a violation of the triggering rule; otherwise, the actuator output remains at a fixed value. Moreover, the proposed triggering rule is constituted through the Lyapunov stability analysis. Thus, the proposed approach can be extended to a broader class of complex nonlinear systems. The theoretical analyses prove the uniformly ultimately bounded stability of the closed-loop system and the non-existence of the Zeno behavior. The effectiveness of the proposed methodology is also presented along with the comparative studies through simulation results.
24

Guedeney, Paul, and Jean-Philippe Collet. "Diagnosis and Management of Acute Coronary Syndrome: What is New and Why? Insight From the 2020 European Society of Cardiology Guidelines." Journal of Clinical Medicine 9, no. 11 (October 28, 2020): 3474. http://dx.doi.org/10.3390/jcm9113474.

Annotation:
The management of acute coronary syndrome (ACS) has been at the center of an impressive amount of research, leading to a significant improvement in outcomes over the last 50 years. The 2020 European Society of Cardiology (ESC) Guidelines for the management of patients presenting without persistent ST-segment elevation myocardial infarction have incorporated the most recent breakthroughs and updates from large randomized controlled trials (RCT) on the diagnosis and management of this disease. The purpose of the present review is to describe the main novelties and the rationale behind these recommendations. Hence, we describe the accumulating evidence against pretreatment with P2Y12 receptor inhibitors prior to coronary angiography, the preference for prasugrel as the leading P2Y12 inhibitor in the setting of ACS, and the numerous available antithrombotic regimens based on various durations of dual or triple antithrombotic therapy, according to the patient's ischemic and bleeding risk profiles. We also detail the recently implemented 0 h/1 h and 0 h/2 h rule-in/rule-out algorithms and the growing role of computed coronary tomography angiography to rule out ACS in patients at low-to-moderate risk.
25

Ivanovic, Stefan, Ana-Maria Olteanu-Raimond, Sébastien Mustière, and Thomas Devogele. "A Filtering-Based Approach for Improving Crowdsourced GNSS Traces in a Data Update Context." ISPRS International Journal of Geo-Information 8, no. 9 (August 30, 2019): 380. http://dx.doi.org/10.3390/ijgi8090380.

Annotation:
Traces collected by citizens using GNSS (Global Navigation Satellite System) devices during sports activities such as running, hiking or biking are now widely available through different sport-oriented collaborative websites. The traces are collected by citizens for their own purposes and frequently shared with the sports community on the internet. Our research assumption is that crowdsourced GNSS traces may be a valuable source of information for detecting updates in authoritative datasets. Despite their availability, the traces present some issues such as poor metadata, attribute incompleteness and heterogeneous positional accuracy. Moreover, certain parts of the traces (the GNSS points composing them) result from displacements made off the existing paths. In our context (i.e., updating authoritative data) these off-path GNSS points are considered noise and should be filtered. Two types of noise are examined in this research: points representing secondary activities (e.g., having a lunch break) and points representing errors during acquisition. We call the first type secondary human behaviour (SHB) and the second type outliers. The goal of this paper is to improve the smoothness of traces by detecting and filtering both SHB and outliers. Two methods are proposed. The first allows for the detection of secondary human behaviour by analysing only trace geometry. The second is a rule-based machine learning method that detects outliers by taking into account the intrinsic characteristics of the points composing the traces, as well as the environmental conditions during trace acquisition. The proposed approaches are tested on crowdsourced GNSS traces collected in mountain areas during sports activities.
26

Xu, S., Z. Ji, D. T. Pham, and F. Yu. "Simultaneous localization and mapping: swarm robot mutual localization and sonar arc bidirectional carving mapping." Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science 225, no. 3 (September 10, 2010): 733–44. http://dx.doi.org/10.1243/09544062jmes2239.

Annotation:
This work primarily studies robot swarm global mapping in a static indoor environment. Because it requires estimating the robots' own poses, the task becomes a simultaneous localization and mapping (SLAM) problem. Five techniques are proposed to solve it: extended Kalman filter (EKF)-based mutual localization, sonar arc bidirectional carving mapping, grid-oriented correlation, working robot group substitution, and a termination rule. The EKF mutual localization algorithm updates the pose estimates not only of the current robot but also of the landmark-functioned robots. The arc-carving mapping algorithm increases the azimuth resolution of sonar readings by using their free-space regions to shrink the possible regions. It is further improved in both accuracy and efficiency by the ideas of bidirectional carving, grid-oriented correlated-arc carving, working robot group substitution, and the termination rule. Software simulation and a hardware experiment have verified the feasibility of the proposed SLAM approach when implemented in a typical medium-cluttered office by a team of three robots. Besides the combined effect, individual algorithm components have also been investigated.
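
The EKF correction at the heart of the mutual-localization step is the standard innovation/gain/update sequence. A generic sketch follows; the one-dimensional measurement model below is a placeholder, not the paper's sonar model.

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """Generic EKF measurement update.

    x, P : state estimate and covariance (e.g., stacked robot poses)
    z    : measurement (e.g., range/bearing to a landmark-functioned robot)
    h    : nonlinear measurement function, H its Jacobian evaluated at x
    R    : measurement-noise covariance
    """
    y = z - h(x)                              # innovation
    S = H @ P @ H.T + R                       # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    return x + K @ y, (np.eye(len(x)) - K @ H) @ P

# Toy example: directly observe the first component of a 2-D state.
x, P = np.array([0.0, 0.0]), np.eye(2)
H = np.array([[1.0, 0.0]])
x, P = ekf_update(x, P, z=np.array([0.5]),
                  h=lambda s: s[:1], H=H, R=np.array([[0.1]]))
```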
27

Hirsch, Joshua A., Andrew B. Rosenkrantz, Sameer A. Ansari, Laxmaiah Manchikanti, and Gregory N. Nicola. "MACRA 2.0: are you ready for MIPS?" Journal of NeuroInterventional Surgery 9, no. 7 (November 24, 2016): 714–16. http://dx.doi.org/10.1136/neurintsurg-2016-012845.

Annotation:
The annual cost of healthcare delivery in the USA now exceeds US$3 trillion. Fee-for-service methodology is often implicated as a cause of this exceedingly high figure. The Affordable Care Act created the Center for Medicare and Medicaid Innovation (CMMI) to pilot test value-based alternative payments for reimbursing physician services. In 2015, the Medicare Access and CHIP Reauthorization Act (MACRA) was passed into law. MACRA has dramatic implications for all US-based healthcare providers. MACRA permanently repealed the Medicare Sustainable Growth Rate so as to stabilize physician Part B Medicare payments, consolidated pre-existing federal performance programs into the Merit-based Incentive Payment System (MIPS), and legislatively mandated new approaches to paying clinicians. Neurointerventionalists will predominantly participate in MIPS. MIPS unifies, updates, and streamlines previously existing federal performance programs, thereby reducing onerous redundancies and overall administrative burden, while consolidating performance-based payment adjustments. While MIPS may be perceived as a straightforward continuation of fee-for-service methodology with performance modifiers, MIPS is better viewed as a stepping stone toward eventually adopting alternative payment models in later years. In October 2016, the Centers for Medicare and Medicaid Services (CMS) released a final rule for MACRA implementation, providing greater clarity regarding 2017 requirements. The final rule provides a range of options for easing MIPS reporting requirements in the first performance year. Nonetheless, taking the newly offered 'minimum possible' approach toward meeting the requirements will still have negative consequences for providers.
28

Kwon, Yonghwan, Barton A. Forman, Jawairia A. Ahmad, Sujay V. Kumar, and Yeosang Yoon. "Exploring the Utility of Machine Learning-Based Passive Microwave Brightness Temperature Data Assimilation over Terrestrial Snow in High Mountain Asia." Remote Sensing 11, no. 19 (September 28, 2019): 2265. http://dx.doi.org/10.3390/rs11192265.

Annotation:
This study explores the use of a support vector machine (SVM) as the observation operator within a passive microwave brightness temperature data assimilation framework (herein SVM-DA) to enhance the characterization of snow water equivalent (SWE) over High Mountain Asia (HMA). A series of synthetic twin experiments were conducted with the NASA Land Information System (LIS) at a number of locations across HMA. Overall, the SVM-DA framework is effective at improving SWE estimates (~70% reduction in RMSE relative to the Open Loop) for SWE depths less than 200 mm during dry snowpack conditions. The SVM-DA framework also improves SWE estimates in deep, wet snow (~45% reduction in RMSE) when snow liquid water is well estimated by the land surface model, but can lead to model degradation when snow liquid water estimates diverge from values used during SVM training. In particular, two key challenges of using the SVM-DA framework were observed over deep, wet snowpacks. First, variations in snow liquid water content dominate the brightness temperature spectral difference (ΔTB) signal associated with emission from a wet snowpack, which can lead to abrupt changes in SWE during the analysis update. Second, the ensemble of SVM-based predictions can collapse (i.e., yield a near-zero standard deviation across the ensemble) when prior estimates of snow are outside the range of snow inputs used during the SVM training procedure. Such a scenario can lead to the presence of spurious error correlations between SWE and ΔTB, and as a consequence, can result in degraded SWE estimates from the analysis update. These degraded analysis updates can be largely mitigated by applying rule-based approaches. For example, restricting the SWE update when the standard deviation of the predicted ΔTB is greater than 0.05 K helps prevent the occurrence of filter divergence. Similarly, adding a thin layer (i.e., 5 mm) of SWE when the synthetic ΔTB is larger than 5 K can improve SVM-DA performance in the presence of a precipitation dry bias. The study demonstrates that a carefully constructed SVM-DA framework cognizant of the inherent limitations of passive microwave-based SWE estimation holds promise for snow mass data assimilation.
29

Tarnowska, Katarzyna A., Arunkumar Bagavathi, and Zbigniew W. Ras. "High-Performance Actionable Knowledge Miner for Boosting Business Revenue." Applied Sciences 12, no. 23 (December 3, 2022): 12393. http://dx.doi.org/10.3390/app122312393.

Annotation:
This research proposes a novel strategy for constructing a knowledge-based recommender system (RS) based on both structured data and unstructured text data. We present its application to improving the services of heavy equipment repair companies so they can better adjust to their customers' needs. The ultimate outcome of this work is a visualized, web-based interactive recommendation dashboard that shows options predicted to improve the customer loyalty metric known as Net Promoter Score (NPS). We also present a number of techniques aiming to improve the performance of action rule mining by allowing convenient periodic updates of the system's knowledge base. We describe the preprocessing-based and distributed-processing-based methods and present the results of testing them for performance within the RS framework. The proposed modifications of the actionable knowledge miner were implemented and compared with the original method in terms of mining results/times and generated recommendations. Preprocessing-based methods decreased mining times by 10–20×, while the distributed mining implementation decreased mining times by 300–400×, with negligible knowledge loss. The article concludes with future directions for the scalability of the NPS recommender system and remaining challenges in its big data processing.
30

Won, Minsu, Hyeonmi Kim, and Gang-Len Chang. "Knowledge-Based System for Estimating Incident Clearance Duration for Maryland I-95." Transportation Research Record: Journal of the Transportation Research Board 2672, no. 14 (August 18, 2018): 61–72. http://dx.doi.org/10.1177/0361198118792119.

Annotation:
For incident response operations to be appreciated by the general public, it is essential that responsible highway agencies be capable of providing the estimated clearance duration of a detected incident at a level sufficiently reliable for motorists to make proper decisions, such as selecting a detour route. Depending on the estimated clearance duration, the incident response center can then implement proper strategies for interacting with motorists, ranging from providing incident information only to executing mandatory detouring operations. This study presents a knowledge-based system for such needs, based on detailed incident reports collected by the Maryland CHART (Coordinated Highway Action Response Team) program between 2012 and 2016. The proposed system features interval-based estimates derived from knowledge of historical data, with different confidence levels for each estimated incident clearance duration, and a rule-based structure that allows convenient updates with new data and available expertise from field operators. As some key variables associated with incident duration often become available only as the clearance operations progress, the developed system, with its sequential nature, allows users to dynamically revise the estimated duration when additional data have been reported. The preliminary evaluation results have shown the promise of the developed system which, with its invaluable historical information, can circumvent the many data quality and availability issues that have long plagued the applicability of some state-of-the-art models on this subject.
31

Dona Sabila, Alzena, Mustafid Mustafid, and Suryono Suryono. "Inventory Control System by Using Vendor Managed Inventory (VMI)." E3S Web of Conferences 31 (2018): 11015. http://dx.doi.org/10.1051/e3sconf/20183111015.

Annotation:
The inventory control system plays a strategic role for businesses in managing inventory operations. Conventional inventory management creates problems at the retail level, where stock frequently runs out or accumulates in excess. This study aims to build an inventory control system that can maintain stable availability of goods at the retail level. Implementing the Vendor Managed Inventory (VMI) method in the inventory control system gives the supplier transparent access to sales and inventory data at the retailer level. Inventory control is performed by calculating the safety stock and reorder point of goods based on the sales data received by the system. Rule-based reasoning is provided in the system to facilitate the monitoring of inventory status information, thereby helping the inventory update process proceed appropriately. SMS technology is also used as a medium for collecting sales data in real time, due to its ease of use. The results of this study indicate that inventory control using VMI ensures goods availability of around 70% and can reduce the accumulation of goods by around 30% at the retail level.
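
Safety stock and reorder point from sales history are textbook computations: safety stock = z * sigma_d * sqrt(L), and reorder point = mean demand over the lead time plus the safety stock. A worked sketch under common assumptions (normally distributed daily demand, fixed lead time, z = 1.65 for roughly a 95% service level; the numbers are illustrative):

```python
import statistics

def reorder_point(daily_sales, lead_time_days, z=1.65):
    """Classic formulas: safety stock = z * sigma_d * sqrt(L);
    reorder point = mean daily demand * L + safety stock."""
    mean_d = statistics.mean(daily_sales)
    sigma_d = statistics.stdev(daily_sales)
    safety_stock = z * sigma_d * lead_time_days ** 0.5
    return mean_d * lead_time_days + safety_stock, safety_stock

rop, ss = reorder_point(daily_sales=[12, 9, 15, 11, 13, 10, 14],
                        lead_time_days=3)
print(f"reorder point = {rop:.1f} units, safety stock = {ss:.1f} units")
```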
32

Bateman, Alex, Maria-Jesus Martin, Sandra Orchard, Michele Magrane, Rahat Agivetova, Shadab Ahmad, Emanuele Alpi, et al. "UniProt: the universal protein knowledgebase in 2021." Nucleic Acids Research 49, no. D1 (November 25, 2020): D480–D489. http://dx.doi.org/10.1093/nar/gkaa1100.

Annotation:
The aim of the UniProt Knowledgebase is to provide users with a comprehensive, high-quality and freely accessible set of protein sequences annotated with functional information. In this article, we describe significant updates that we have made over the last two years to the resource. The number of sequences in UniProtKB has risen to approximately 190 million, despite continued work to reduce sequence redundancy at the proteome level. We have adopted new methods of assessing proteome completeness and quality. We continue to extract detailed annotations from the literature to add to reviewed entries and supplement these in unreviewed entries with annotations provided by automated systems such as the newly implemented Association-Rule-Based Annotator (ARBA). We have developed a credit-based publication submission interface to allow the community to contribute publications and annotations to UniProt entries. We describe how UniProtKB responded to the COVID-19 pandemic through expert curation of relevant entries that were rapidly made available to the research community through a dedicated portal. UniProt resources are available under a CC-BY (4.0) license via the web at https://www.uniprot.org/.
33

Li, Kelu, Junyuan Yang, and Xuezhi Li. "Effects of co-infection on vaccination behavior and disease propagation." Mathematical Biosciences and Engineering 19, no. 10 (2022): 10022–36. http://dx.doi.org/10.3934/mbe.2022468.

Annotation:
Coinfection is the process of an infection of a single host with two or more pathogen variants or with two or more distinct pathogen species, which often threatens public health and the stability of economies. In this paper, we propose a novel two-strain epidemic model characterizing the co-evolution of coinfection and voluntary vaccination strategies. In the framework of evolutionary vaccination, we design two game rules, the individual-based risk assessment (IB-RA) update rule and the strategy-based risk assessment (SB-RA) update rule, to update the vaccination policy. Through detailed numerical analysis, we find that increasing vaccine effectiveness and decreasing the transmission rate effectively suppress disease prevalence; moreover, the outcome of the SB-RA update rule is more encouraging than that of the IB-RA rule for curbing disease transmission. Coinfection complicates the effects of the transmission rate of each strain on the final epidemic sizes.
34

Gezgin, Deniz Mertkan, and Can Mihci. "Smartphone Addiction in Undergraduate Athletes: Reasons and Effects of Using Instagram Intensively." International Journal of Technology in Education and Science 4, no. 3 (June 19, 2020): 188–202. http://dx.doi.org/10.46328/ijtes.v4i3.106.

Annotation:
Instagram has become a popular social networking application based primarily on the concept of sharing visual content. As the most popular mobile application among university students, it is thought to be a major component of excessive smartphone use, owing to its users' need to check updates frequently. As a result, heavy Instagram use, both for sharing personally generated content and for checking on others' updates, is thought to be a contributor to smartphone addiction. The purpose of this study is to examine the effects of Instagram usage characteristics on the smartphone addiction levels of university students, specifically those enrolled in Athletics Departments, in order to examine the particular case of young athletes who use Instagram extensively. The study group consists of 97 undergraduate students enrolled in the Athletics Department of a state university located in the Thrace region of Turkey during the 2017-2018 academic year, who were also taking a pedagogical formation certification course to become prospective K12 physical education teachers. Adopting a mixed-method research model, the study has shown that, as far as young athletes who report Instagram as their favorite smartphone application are concerned, heavy Instagram use statistically predicts smartphone addiction. Moreover, according to the qualitative data, the Occam's Razor rule applies to these young athletes' interaction with Instagram: problematic use patterns are more easily explained by passive-observant behavior associated with a certain fear of missing out than by a strong desire to exhibit their body image and sporting success.
35

Patel, A., M. Mazer-Amirshahi, G. Fusch, A. Chan, J. van den Anker, and S. Samiee-Zafarghandy. "P76 The effect of the pregnancy and lactation labeling rule on prescribing information of FDA-approved drugs." Archives of Disease in Childhood 104, no. 6 (May 17, 2019): e48.2-e48. http://dx.doi.org/10.1136/archdischild-2019-esdppp.114.

Annotation:
Background: The U.S. Food and Drug Administration implemented the new Pregnancy and Lactation Labeling Rule (PLLR) in June 2015. Under the PLLR, all new drug applications were to present a narrative risk assessment (as opposed to a letter category), while drugs approved after June 2001 were required to phase in by June 2020. The purpose of this study was to assess the quality of the pregnancy and lactation data presented in drug labeling and the degree of adherence to the PLLR. Design/Methods: We reviewed the labeling data of all new molecular entities (NMEs) approved from 1999–2017. The pregnancy and lactation information was classified as: 1. harmful to use, 2. safe to use, 3. consideration of safety and efficacy. For drugs approved after June 2001, the presence of a pregnancy letter category was noted. Results: Of the 456 NMEs, 131 (29%) were classified as harmful to use in pregnancy and 207 (45%) as harmful to use during lactation. This number did not follow any specific pattern over the course of 19 years. Less than 1% of drugs were deemed safe during pregnancy or lactation. Human data was the source of pregnancy or lactation information for only 2% of drugs. Up to 70% of the drugs belonging to each implementation schedule had yet to meet the PLLR compliance requirement. Conclusion(s): Pregnant and lactating women are mostly advised against the use of medications that might be needed for their health and the health of their infants, based on very limited data. Pharmaceutical companies lagged behind the required adherence rule for labeling updates on pregnancy and lactation information. Disclosure(s): Nothing to disclose.
36

Liu, Yin, Huidong Su, Shuanghu Zhang, and Tiantian Jin. "Update of river health assessment indicator system, weight, and assignment criteria in China." Water Supply 21, no. 6 (April 27, 2021): 3153–67. http://dx.doi.org/10.2166/ws.2021.087.

Annotation:
Comprehensive assessment of river health is challenging due to the diversity of rivers and the complexity of their ecosystems and functional services. This paper updates the river health assessment indicator system, weights, and assignment criteria in China by reviewing and examining the peer-reviewed literature. We propose an indicator system, weights, and criteria, validated by nine case studies and able to assess river health at the country scale. Our analysis shows that the rule layer of the indicator system includes hydrology, water quality, aquatic organisms, physical habitat, and functional service; the corresponding weights are set to 0.15, 0.21, 0.18, 0.22, and 0.24, respectively. The ten most representative indicators, with their corresponding weights, are selected to form the indicator layer. The evaluation based on case studies shows that in eight out of nine cases our results are consistent with those obtained in previous studies. The suggested indicator system, weights, and assignment criteria are therefore well suited to the complex cases in China. This paper can serve as a reference for river health assessment and presents a comprehensive listing of assessment criteria.
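As a rough illustration of the weighted aggregation this annotation describes, the sketch below combines the five rule-layer scores using the weights reported above; the layer scores, function name, and 0–100 scale are hypothetical assumptions.

```python
# Sketch of the weighted composite scoring implied by the abstract.
# Rule-layer weights come from the abstract; everything else is illustrative.

RULE_LAYER_WEIGHTS = {
    "hydrology": 0.15,
    "water_quality": 0.21,
    "aquatic_organism": 0.18,
    "physical_habitat": 0.22,
    "functional_service": 0.24,
}

def river_health_score(layer_scores: dict) -> float:
    """Weighted sum of rule-layer scores (each assumed normalized to 0-100)."""
    assert set(layer_scores) == set(RULE_LAYER_WEIGHTS), "all five layers required"
    return sum(RULE_LAYER_WEIGHTS[k] * v for k, v in layer_scores.items())

# Hypothetical scores for one river reach:
print(river_health_score({
    "hydrology": 80, "water_quality": 65, "aquatic_organism": 70,
    "physical_habitat": 75, "functional_service": 85,
}))  # ≈ 75.15 on the same 0-100 scale
```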
37

Liu, Cong, Chi Yuan, Alex M. Butler, Richard D. Carvajal, Ziran Ryan Li, Casey N. Ta and Chunhua Weng. “DQueST: dynamic questionnaire for search of clinical trials”. Journal of the American Medical Informatics Association 26, No. 11 (August 7, 2019): 1333–43. http://dx.doi.org/10.1093/jamia/ocz121.

Annotation:
Objective: Information overload remains a challenge for patients seeking clinical trials. We present a novel system (DQueST) that reduces information overload for trial seekers using dynamic questionnaires. Materials and Methods: DQueST first performs information extraction and criteria library curation: it transforms criteria narratives in the ClinicalTrials.gov repository into a structured format, normalizes clinical entities using standard concepts, clusters related criteria, and stores the resulting curated library. DQueST then implements a real-time dynamic question generation algorithm. During user interaction, the initial search is similar to a standard search engine; DQueST then performs real-time dynamic question generation, selecting criteria from the library one at a time by maximizing a relevance score that reflects each criterion's ability to rule out ineligible trials. DQueST dynamically updates the remaining trial set by removing ineligible trials based on user responses to the corresponding questions. The process iterates until users decide to stop and begin manually reviewing the remaining trials. Results: In simulation experiments initiated with 10 diseases, DQueST reduced information overload by filtering out 60%–80% of the initial trials after 50 questions. Reviewing the generated questions against previous answers, on average 79.7% of the questions were relevant to the queried conditions. By examining the eligibility of random samples of trials ruled out by DQueST, we estimate the accuracy of the filtering procedure to be 63.7%. In a study using 5 mock patient profiles, DQueST on average retrieved trials with a 1.465 times higher density of eligible trials than an existing search engine. In a patient-centered usability evaluation, patients found DQueST useful, easy to use, and returning relevant results. Conclusion: DQueST contributes a novel framework for transforming free-text eligibility criteria into questions and filtering out clinical trials based on user answers dynamically. It promises to augment keyword-based methods to improve clinical trial search.
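The greedy, rule-out-driven question selection described above can be given in miniature; the relevance heuristic and data layout below are simplifying assumptions, not DQueST's actual implementation.

```python
# Toy sketch of dynamic question generation: repeatedly ask about the
# criterion with the greatest potential to rule out remaining trials.

def pick_next_criterion(trials, criteria, answered):
    """Choose the unanswered criterion appearing in the most remaining trials,
    i.e. the one with the greatest potential to rule trials out."""
    unanswered = [c for c in criteria if c not in answered]
    return max(unanswered,
               key=lambda c: sum(1 for t in trials if c in t["criteria"]))

def filter_trials(trials, criterion, answer):
    """Keep trials that do not restrict on the criterion or agree with the answer."""
    return [t for t in trials
            if t["criteria"].get(criterion, answer) == answer]

# Hypothetical miniature trial library:
trials = [
    {"id": "NCT01", "criteria": {"diabetes": False, "age>=18": True}},
    {"id": "NCT02", "criteria": {"age>=18": True}},
]
q = pick_next_criterion(trials, ["diabetes", "age>=18"], answered=set())
trials = filter_trials(trials, q, answer=True)   # user answers "yes"
print(q, [t["id"] for t in trials])              # age>=18 ['NCT01', 'NCT02']
```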
38

Ioannidis, Evangelos, Nikos Varsakelis and Ioannis Antoniou. “Intelligent Agents in Co-Evolving Knowledge Networks”. Mathematics 9, No. 1 (January 5, 2021): 103. http://dx.doi.org/10.3390/math9010103.

Annotation:
We extend agent-based models of knowledge diffusion in networks, previously restricted to random mindless interactions on “frozen” (static) networks, in order to take into account intelligent agents and network co-evolution. Intelligent agents make decisions under bounded rationality. This is the key distinction between intelligent interacting agents and the mindless colliding molecules involved in the usual diffusion mechanism resulting from accidental collisions. The co-evolution of link weights and knowledge levels is modeled at the local microscopic level of “agent-to-agent” interaction. Our network co-evolution model is in effect a “learning mechanism”, where weight updates depend on the previous values of both weights and knowledge levels. The goal of our work is to explore the impact of (a) the intelligence of the agents, modeled by the selection-decision rule for knowledge acquisition, (b) the innovation rate of the agents, (c) the number of “top innovators” and (d) the network size. We find that rational intelligent agents transform the network into a “centralized world”, reducing the entropy of their selection decisions for knowledge acquisition. In addition, we find that the average knowledge, as well as the “knowledge inequality”, grow exponentially.
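A minimal sketch of one such co-evolution step is given below; the selection rule and update equations are illustrative assumptions, not the paper's exact model.

```python
# One "agent-to-agent" interaction: acquire knowledge from a selected neighbor,
# then reinforce that link based on its previous weight and the knowledge gained.

def coevolution_step(knowledge, weights, agent, alpha=0.1, beta=0.05):
    neighbors = [j for j in weights[agent] if knowledge[j] > knowledge[agent]]
    if not neighbors:
        return
    # Stand-in for the intelligent selection-decision rule: prefer
    # knowledgeable neighbors reachable through strong links.
    source = max(neighbors, key=lambda j: weights[agent][j] * knowledge[j])
    gain = alpha * (knowledge[source] - knowledge[agent])
    knowledge[agent] += gain                                        # knowledge diffusion
    weights[agent][source] += beta * weights[agent][source] * gain  # link co-evolution

knowledge = {0: 1.0, 1: 3.0, 2: 2.0}
weights = {0: {1: 1.0, 2: 1.0}, 1: {0: 1.0}, 2: {0: 1.0}}
coevolution_step(knowledge, weights, agent=0)
print(knowledge[0], weights[0][1])  # 1.2 1.01
```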
39

Santos, Douglas, and Jéferson Campos Nobre. “Vulnerability Identification on GNU/Linux Operating Systems through Case-Based Reasoning”. Revista de Informática Teórica e Aplicada 26, No. 3 (November 30, 2019): 13–25. http://dx.doi.org/10.22456/2175-2745.82079.

Annotation:
Operating system security has been steadily evolving over the years. Several mechanisms, software tools, and configuration best-practice guides have been developed to contribute to the security of such systems. The process of making an operating system more secure than the default state obtained at installation is known as hardening. Experience and technical knowledge are important attributes for the professional performing this process. In this context, automated rule-based tools are often used to assist professionals with little experience in vulnerability identification activities. However, the use of rules creates a dependency on developers both to write new rules and to keep existing ones updated. Failure to update rules can significantly compromise the integrity of vulnerability identification results. In this paper, the Case-Based Reasoning (CBR) technique is used to improve tools that assist inexperienced professionals in conducting vulnerability identification activities. The purpose of using CBR is to let inexperienced professionals obtain results similar to those of experienced professionals, while diminishing the dependence on rule developers. A prototype was developed for the GNU/Linux system in order to carry out an experimental evaluation. This evaluation demonstrated that applying CBR improves the performance of inexperienced professionals in terms of the number of identified vulnerabilities.
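A toy sketch of the CBR retrieve step for configuration checks follows; the case structure, attributes and similarity measure are illustrative assumptions, not the prototype's design.

```python
# Retrieve past vulnerability cases whose configuration resembles the
# system under audit, instead of matching hand-maintained rules.

def similarity(a: dict, b: dict) -> float:
    """Fraction of configuration attributes on which two cases agree."""
    keys = set(a) | set(b)
    return sum(a.get(k) == b.get(k) for k in keys) / len(keys)

def retrieve(case_base: list, query: dict, threshold: float = 0.6) -> list:
    """Return the known cases most similar to the queried system state."""
    scored = [(similarity(c["config"], query), c) for c in case_base]
    scored.sort(key=lambda sc: sc[0], reverse=True)
    return [c for s, c in scored if s >= threshold]

case_base = [
    {"config": {"ssh_root_login": "yes", "firewall": "off"},
     "finding": "remote root exposure"},
    {"config": {"ssh_root_login": "no", "firewall": "on"},
     "finding": "hardened baseline"},
]
query = {"ssh_root_login": "yes", "firewall": "off"}  # hypothetical audit target
for case in retrieve(case_base, query):
    print(case["finding"])  # -> remote root exposure
```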
40

Murovec, Jure, Janez Kušar and Tomaž Berlec. “Methodology for Searching Representative Elements”. Applied Sciences 9, No. 17 (August 23, 2019): 3482. http://dx.doi.org/10.3390/app9173482.

Annotation:
Companies have to secure their share of the global market, meet customer demands, and produce customer-tailored products. With time and production-line updates, the layout becomes suboptimal, and product diversity only aggravates the problem. To stay competitive, companies need to increase their productivity and eliminate waste. Because the product range consists of similar components and their variants, a huge number of distinct elements circulate in the production process, and their material flow is hard to manage. Although the elements differ from each other, their representative elements can be defined. This paper illustrates a methodology for searching representative elements (MIRE), which combines the well-known Pareto analysis (also called ABC analysis or the 20/80 rule) with the calculation of a loading function that can be based on any element feature. Results of using the MIRE methodology in a case from an industrial environment have shown that the analysis can be carried out within a very short time, which allows for continuous analysis, optimisation and, consequently, continuous improvement of the material flow through the production process. The methodology is most suitable for smaller companies, as it enables rapid analysis, especially in cases where no material flow has been recorded beforehand.
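The Pareto (ABC) step of such a methodology can be sketched as follows; the element names, load values and class cut-offs are hypothetical, and per the paper the loading function can be based on any element feature.

```python
# Rank elements by a loading function and classify them so that class A
# covers roughly the first 80% of cumulative load (the 20/80 rule).

def abc_classify(loads: dict, a_cut: float = 0.80, b_cut: float = 0.95) -> dict:
    total = sum(loads.values())
    ranked = sorted(loads.items(), key=lambda kv: kv[1], reverse=True)
    classes, cumulative = {}, 0.0
    for name, load in ranked:
        cumulative += load / total
        classes[name] = "A" if cumulative <= a_cut else ("B" if cumulative <= b_cut else "C")
    return classes

loads = {"shaft": 500, "housing": 280, "flange": 120, "seal": 60, "washer": 40}
print(abc_classify(loads))
# {'shaft': 'A', 'housing': 'A', 'flange': 'B', 'seal': 'C', 'washer': 'C'}
```

Class-A elements would then serve as the representative elements whose material flow is analysed in detail.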
41

Price, Bronwyn, Nica Huber, Anita Nussbaumer and Christian Ginzler. “The Habitat Map of Switzerland: A Remote Sensing, Composite Approach for a High Spatial and Thematic Resolution Product”. Remote Sensing 15, No. 3 (January 21, 2023): 643. http://dx.doi.org/10.3390/rs15030643.

Annotation:
Habitat maps at high thematic and spatial resolution and broad extents are fundamental tools for biodiversity conservation, the planning of ecological networks and the management of ecosystem services. To derive a habitat map for Switzerland, we used a composite methodology bringing together the best available spatial data and distribution models. The approach relies on the segmentation and classification of high-spatial-resolution (1 m) aerial imagery. Land cover data, as well as habitat and species distribution models built on Earth observation data from Sentinel 1 and 2, Landsat, Planetscope and LiDAR, inform the rule-based classification into habitats defined by the hierarchical Swiss Habitat Typology (TypoCH). A total of 84 habitats in 32 groups and 9 overarching classes are mapped in a spatially explicit manner across Switzerland. Validation and plausibility analysis with four independent datasets show that the mapping is broadly plausible, with good accuracy for most habitats, although with lower performance for fine-scale and linear habitats, habitats with restricted geographical distributions and those predominantly characterised by understorey species, especially forest habitats. The resulting map is a vector dataset available for interactive viewing and download from the open EnviDat data-sharing platform. The methodology is semi-automated to allow for updates over time.
42

Marshall, Iain J., Benjamin Nye, Joël Kuiper, Anna Noel-Storr, Rachel Marshall, Rory Maclean, Frank Soboczenski, Ani Nenkova, James Thomas and Byron C. Wallace. “Trialstreamer: A living, automatically updated database of clinical trial reports”. Journal of the American Medical Informatics Association 27, No. 12 (September 17, 2020): 1903–12. http://dx.doi.org/10.1093/jamia/ocaa163.

Annotation:
Objective: Randomized controlled trials (RCTs) are the gold standard method for evaluating whether a treatment works in health care but can be difficult to find and make use of. We describe the development and evaluation of a system to automatically find and categorize all new RCT reports. Materials and Methods: Trialstreamer continuously monitors PubMed and the World Health Organization International Clinical Trials Registry Platform, looking for new RCTs in humans using a validated classifier. We combine machine learning and rule-based methods to extract information from the RCT abstracts, including free-text descriptions of trial PICO (populations, interventions/comparators, and outcomes) elements, and map these snippets to normalized MeSH (Medical Subject Headings) vocabulary terms. We additionally identify sample sizes, predict the risk of bias, and extract text conveying key findings. We store all extracted data in a database, which we make freely available for download, and via a search portal, which allows users to enter structured clinical queries. Results are ranked automatically to prioritize larger and higher-quality studies. Results: As of early June 2020, we have indexed 673 191 publications of RCTs, of which 22 363 were published in the first 5 months of 2020 (142 per day). We additionally include 304 111 trial registrations from the International Clinical Trials Registry Platform. The median trial sample size was 66. Conclusions: We present an automated system for finding and categorizing RCTs. This yields a novel resource: a database of structured information automatically extracted for all published RCTs in humans. We make daily updates of this database available on our website (https://trialstreamer.robotreviewer.net).
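A flavor of the rule-based side of such a pipeline is sketched below; these regular expressions for pulling sample sizes out of abstracts are illustrative assumptions, not Trialstreamer's actual rules.

```python
# Toy rule-based extraction: find a plausible randomized sample size in an
# RCT abstract using simple lexical patterns.
import re

SAMPLE_SIZE_PATTERNS = [
    re.compile(r"n\s*=\s*(\d{2,6})", re.IGNORECASE),
    re.compile(r"(\d{2,6})\s+(?:patients|participants|subjects)\s+were\s+randomi[sz]ed",
               re.IGNORECASE),
]

def extract_sample_size(abstract: str):
    for pattern in SAMPLE_SIZE_PATTERNS:
        match = pattern.search(abstract)
        if match:
            return int(match.group(1))
    return None  # fall through to an ML model or manual review

print(extract_sample_size("In total, 66 patients were randomised to treatment."))  # 66
```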
43

Allenmark, Fredrik, Ahu Gokce, Thomas Geyer, Artyom Zinchenko, Hermann J. Müller and Zhuanghua Shi. “Inter-trial effects in priming of pop-out: Comparison of computational updating models”. PLOS Computational Biology 17, No. 9 (September 3, 2021): e1009332. http://dx.doi.org/10.1371/journal.pcbi.1009332.

Annotation:
In visual search tasks, repeating features or the position of the target results in faster response times. Such inter-trial ‘priming’ effects occur not just for repetitions from the immediately preceding trial but also from trials further back. A paradigm known to produce particularly long-lasting inter-trial effects (of the target-defining feature, target position, and response feature) is the ‘priming of pop-out’ (PoP) paradigm, which typically uses sparse search displays and random swapping across trials of target- and distractor-defining features. However, the mechanisms underlying these inter-trial effects are still not well understood. To address this, we applied a modeling framework combining an evidence accumulation (EA) model with different computational updating rules of the model parameters (i.e., the drift rate and starting point of EA) for different aspects of stimulus history, to data from a (previously published) PoP study that had revealed significant inter-trial effects from several trials back for repetitions of the target color, the target position, and (response-critical) target feature. By performing a systematic model comparison, we aimed to determine which EA model parameter and which updating rule for that parameter best account for each inter-trial effect and the associated n-back temporal profile. We found that, in general, our modeling framework could accurately predict the n-back temporal profiles. Further, target color- and position-based inter-trial effects were best understood as arising from redistribution of a limited-capacity weight resource which determines the EA rate. In contrast, response-based inter-trial effects were best explained by a bias of the starting point towards the response associated with a previous target; this bias appeared largely tied to the position of the target. These findings elucidate how our cognitive system continually tracks, and updates an internal predictive model of, a number of separable stimulus and response parameters in order to optimize task performance.
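The two favoured updating mechanisms can be sketched with toy update equations; the exponential-decay forms and parameter values below are illustrative assumptions, not the fitted models.

```python
# (a) Limited-capacity weight resource: a fixed share of feature/position
#     weight shifts towards the repeated value, total resource staying constant.
# (b) Starting-point bias: the accumulator's starting point decays towards
#     the previous response.

def redistribute_weights(weights: dict, repeated_key, delta: float = 0.2) -> dict:
    for key in weights:
        weights[key] *= (1 - delta)
    weights[repeated_key] += delta        # total weight remains 1.0
    return weights

def update_starting_point(z: float, previous_response: float, delta: float = 0.3) -> float:
    return (1 - delta) * z + delta * previous_response  # previous_response in {-1, +1}

print(redistribute_weights({"red": 0.5, "green": 0.5}, "red"))  # {'red': 0.6, 'green': 0.4}
print(update_starting_point(0.0, +1))                           # 0.3
```

Because each update moves the parameter only part of the way, repetitions from several trials back still leave a trace, which is what produces the extended n-back temporal profiles.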
44

Chen, Yu, Wei Xiang Xu and Xu Min Liu. “The Association Rules Updating Algorithm Based on Reverse Search”. Key Engineering Materials 467-469 (February 2011): 1126–31. http://dx.doi.org/10.4028/www.scientific.net/kem.467-469.1126.

Annotation:
This paper analyzes the existing association rule update algorithm IUA and finds that, when decision makers give priority attention to the maximal frequent itemsets, the algorithm cannot reduce the cost of database traversal enough to access the maximal frequent itemsets quickly. To address this shortcoming, an algorithm that updates association rules based on a reverse search approach is presented. The updating algorithm based on reverse search first generates all frequent itemsets of the new itemsets. Then it splices the new maximal frequent itemsets with the original maximal frequent itemsets and trims the result to obtain the updated maximal frequent itemsets. This algorithm not only reduces the number of traversals in the process of updating association rules, but also realizes priority access to the maximal frequent itemsets.
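The splice-and-trim step can be sketched as follows. Note that a complete implementation would also re-check support counts against the updated database; this illustration covers only the subset-trimming that yields maximal itemsets.

```python
# Merge the maximal frequent itemsets mined from the new transactions with
# the original ones, then discard any itemset subsumed by another.

def update_maximal_itemsets(original, new):
    candidates = set(original) | set(new)
    return [s for s in candidates
            if not any(s < t for t in candidates)]  # keep only maximal sets

original = [frozenset({"a", "b"}), frozenset({"c"})]
new = [frozenset({"a", "b", "c"}), frozenset({"d"})]
print(update_maximal_itemsets(original, new))
# {a, b, c} and {d} survive; {a, b} and {c} are trimmed as subsets
```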
45

Seo, Dong Gi. “Overview and current management of computerized adaptive testing in licensing/certification examinations”. Journal of Educational Evaluation for Health Professions 14 (July 26, 2017): 17. http://dx.doi.org/10.3352/jeehp.2017.14.17.

Annotation:
Computerized adaptive testing (CAT) has been implemented in high-stakes examinations such as the National Council Licensure Examination-Registered Nurses in the United States since 1994. Subsequently, the National Registry of Emergency Medical Technicians in the United States adopted CAT for certifying emergency medical technicians in 2007. This was done with the goal of introducing CAT into medical health licensing examinations. Most implementations of CAT are based on item response theory, which hypothesizes that both the examinee and the items have their own characteristics that do not change. There are 5 steps for implementing CAT: first, determining whether the CAT approach is feasible for a given testing program; second, establishing an item bank; third, pretesting, calibrating, and linking item parameters via statistical analysis; fourth, determining the specification for the final CAT related to the 5 components of the CAT algorithm; and finally, deploying the final CAT after specifying all the necessary components. The 5 components of the CAT algorithm are as follows: item bank, starting item, item selection rule, scoring procedure, and termination criterion. CAT management includes content balancing, item analysis, item scoring, standard setting, practice analysis, and item bank updates. Remaining issues include the cost of constructing CAT platforms and deploying the computer technology required to build an item bank. In conclusion, in order to ensure more accurate estimation of examinees’ ability, CAT may be a good option for national licensing examinations. Measurement theory can support its implementation for high-stakes examinations.
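The five components can be sketched as a simple loop; the Rasch (1PL) information function and the crude ability-update step below are common textbook simplifications, not the operational procedure of any licensing examination.

```python
# Item bank, starting point, maximum-information item selection, provisional
# scoring, and a termination criterion, in miniature.
import math

def p_correct(theta: float, b: float) -> float:
    return 1 / (1 + math.exp(-(theta - b)))      # Rasch success probability

def item_information(theta: float, b: float) -> float:
    p = p_correct(theta, b)
    return p * (1 - p)

bank = {"item1": -1.0, "item2": 0.0, "item3": 1.0}   # item -> difficulty b
theta, administered = 0.0, set()                      # start at theta = 0
while administered != set(bank):                      # real CATs stop on SE(theta)
    item = max((i for i in bank if i not in administered),
               key=lambda i: item_information(theta, bank[i]))
    administered.add(item)
    correct = True                                    # stand-in for the examinee's response
    theta += 0.5 * ((1.0 if correct else 0.0) - p_correct(theta, bank[item]))
print(round(theta, 2))                                # provisional ability estimate
```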
46

Ward, Joel, Andrew N. Papanikitas, Regent Lee, Naomi Warner, Emma Mckenzie-Edwards, Stephen Bergman and Ashok Inderraj Handa. “‘The House of God’: reflections 40 years on, in conversation with author Samuel Shem”. Postgraduate Medical Journal 94, No. 1115 (September 2018): 531–34. http://dx.doi.org/10.1136/postgradmedj-2018-135727.

Annotation:
The House of God is a seminal work of medical satire based on the gruelling internship experiences of Samuel Shem at the Beth Israel Hospital. Thirteen ‘Laws’ were offered to rationalise the seemingly chaotic patient management and flow. There have been large shifts in the healthcare landscape and practice since, so we consider whether these medical truisms are still applicable to contemporary National Health Service practice and propose updates where necessary:
1. People are sometimes allowed to die.
2. GOMERs (Get Out of My Emergency Room) still go to ground.
3. Master yourself, join the multidisciplinary team.
4. The patient is the one with the disease, but not the only one suffering.
5. Placement (discharge planning) comes first.
6. There is no body cavity that cannot be reached with a gentle arm and good interventional radiologists.
7. Fit the rule to the patient rather than the patient to the rule.
8. They can always pay you less.
9. The only bad admission is a futile one.
10. If you don’t take a temperature you can’t find a fever, and if you are not going to act on it, don’t do the test.
11. Show me a BMS (best medical student) who ONLY triples my work, and I’ll show you a future Foundation Year 1 doctor (FY1) who is an asset to the firm.
12. Interpret radiology freely, but share your clinical findings with the radiologist and in a timely fashion.
13. Doing nothing can be a viable option.
These were developed in conversation with Samuel Shem, who also offers further insight on the creation of the original laws.
47

Prentzas, Jim, and Ioannis Hatzilygeroudis. “Rule-based update methods for a hybrid rule base”. Data & Knowledge Engineering 55, No. 2 (November 2005): 103–28. http://dx.doi.org/10.1016/j.datak.2005.02.001.

48

Zhu, Hua Ji, and Hua Rui Wu. “Spatio-Temporal Modeling on Village Land Feature Change”. Applied Mechanics and Materials 204-208 (October 2012): 2721–25. http://dx.doi.org/10.4028/www.scientific.net/amm.204-208.2721.

Annotation:
Village land continually changes in the real world. In order to keep the data up to date, data producers need to update the data frequently. When the village land data are updated, the update information must be dispatched to the end users to keep their client databases current. In the real world, village land changes in many forms. Identifying the change type of village land (i.e., capturing the semantics of change) and representing it in the data world can help end users develop a common understanding of the change and makes it convenient for them to integrate the change information into their databases. This work focuses on modeling the spatio-temporal change. A three-tuple model, CAR, for representing spatio-temporal change is proposed, based on the village land feature sets before and after the change, the change type, and rules. In this model, C denotes the change type, A denotes the attribute set, and R denotes the rules for judging the change type. The rules are described by IF-THEN expressions. Through operations between R and A, the change type C is determined. This model overcomes the limitations of current methods. Moreover, the rules in this model can easily be realized in a computer program.
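A minimal sketch of how such IF-THEN judging rules might be realized in a program; the two example rules and attribute names are hypothetical, not the paper's rule set R.

```python
# CAR in miniature: rules R over the attribute set A decide the change type C
# from the feature states before and after the change.

def classify_change(before: dict, after: dict) -> str:
    if before and not after:
        return "disappearance"        # IF the feature existed and no longer does
    if not before and after:
        return "appearance"           # IF the feature is new
    if before.get("geometry") != after.get("geometry"):
        return "boundary_change"      # IF the geometry attribute differs
    if before.get("land_use") != after.get("land_use"):
        return "attribute_change"     # IF the land-use code differs
    return "no_change"

before = {"geometry": "poly_17a", "land_use": "farmland"}
after = {"geometry": "poly_17a", "land_use": "residential"}
print(classify_change(before, after))  # attribute_change
```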
49

Brás, Glender, Alisson Marques Silva and Elizabeth Fialho Wanner. “Multi-gene genetic programming to building up fuzzy rule-base in Neo-Fuzzy-Neuron networks”. Journal of Intelligent & Fuzzy Systems 41, No. 1 (August 11, 2021): 499–516. http://dx.doi.org/10.3233/jifs-202146.

Annotation:
This paper introduces a new approach to building the rule base of Neo-Fuzzy-Neuron (NFN) networks. The NFN is a neuro-fuzzy network composed of a set of n decoupled zero-order Takagi-Sugeno models, one for each input variable, each one containing m rules. Employing Multi-Gene Genetic Programming (MG-GP) to create and adjust Gaussian membership functions, and a gradient-based method to update the network parameters, the proposed model is dubbed NFN-MG-GP. In the proposed model, each individual of the MG-GP represents a complete rule base of the NFN. The rule base is adjusted by genetic operators (crossover, reproduction, mutation), and in every generation the consequent parameters are updated by a predetermined number of gradient-method epochs. The algorithm uses elitism to ensure that the best rule base is not lost between generations. The performance of NFN-MG-GP is evaluated on instances of time series forecasting and non-linear system identification problems. Computational experiments and comparisons against state-of-the-art alternative models show that the proposed algorithm is efficient and competitive. Furthermore, the experimental results show that models with good accuracy can be obtained by applying Multi-Gene Genetic Programming to construct the rule base of NFN networks.
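The forward pass and the gradient step on the consequent weights can be sketched as below, with fixed hypothetical Gaussian memberships standing in for the GP-evolved ones.

```python
# Neo-Fuzzy-Neuron in miniature: one zero-order Takagi-Sugeno block per input,
# normalized Gaussian memberships, gradient descent on consequent weights only.
import math

def gaussian(x, center, sigma):
    return math.exp(-((x - center) ** 2) / (2 * sigma ** 2))

def nfn_output(x, centers, sigmas, w):
    y = 0.0
    for i, xi in enumerate(x):
        mu = [gaussian(xi, c, s) for c, s in zip(centers[i], sigmas[i])]
        total = sum(mu) or 1.0
        y += sum(m / total * wij for m, wij in zip(mu, w[i]))
    return y

def gradient_step(x, target, centers, sigmas, w, lr=0.1):
    err = nfn_output(x, centers, sigmas, w) - target
    for i, xi in enumerate(x):
        mu = [gaussian(xi, c, s) for c, s in zip(centers[i], sigmas[i])]
        total = sum(mu) or 1.0
        for j, m in enumerate(mu):
            w[i][j] -= lr * err * m / total   # dE/dw_ij for squared error

centers, sigmas, w = [[-1.0, 0.0, 1.0]], [[0.5, 0.5, 0.5]], [[0.0, 0.0, 0.0]]
for _ in range(200):
    gradient_step([0.3], 0.9, centers, sigmas, w)
print(round(nfn_output([0.3], centers, sigmas, w), 3))  # ≈ 0.9 after training
```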
50

Khan, Muhammad Taimoor, Nouman Azam, Shehzad Khalid and Furqan Aziz. “Hierarchical lifelong topic modeling using rules extracted from network communities”. PLOS ONE 17, No. 3 (March 3, 2022): e0264481. http://dx.doi.org/10.1371/journal.pone.0264481.

Annotation:
Topic models extract latent concepts from texts in the form of topics. Lifelong topic models extend topic models by learning topics continuously, based on knowledge accumulated from the past that is updated as new information becomes available. Hierarchical topic modeling extends topic modeling by extracting topics and organizing them into a hierarchical structure. In this study, we combine the two and introduce hierarchical lifelong topic models. Hierarchical lifelong topic models not only allow examining topics at different levels of granularity but also allow continuously adjusting the granularity of the topics as more information becomes available. A fundamental issue in hierarchical lifelong topic modeling is the extraction of rules that preserve the hierarchical structural information among the rules and are continuously updated as new information arrives. To address this issue, we introduce a network-communities-based rule mining approach for hierarchical lifelong topic models (NHLTM). The proposed approach extracts hierarchical structural information among the rules by representing textual documents as graphs and analyzing the underlying communities in the graph. Experimental results indicate improvement of the hierarchical topic structures in terms of topic coherence, which increases from general to specific topics.
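The graph-and-communities step can be illustrated on a toy corpus; the unweighted co-occurrence edges and greedy modularity detection below are simplifying assumptions, not the NHLTM pipeline.

```python
# Represent documents as a word co-occurrence graph and treat its communities
# as candidate rule groups that preserve structure among topic words.
import itertools
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

docs = [
    ["battery", "screen", "phone"],
    ["battery", "charger", "phone"],
    ["pasta", "sauce", "recipe"],
    ["sauce", "recipe", "kitchen"],
]

graph = nx.Graph()
for doc in docs:
    for u, v in itertools.combinations(sorted(set(doc)), 2):
        graph.add_edge(u, v)              # co-occurrence within a document

for community in greedy_modularity_communities(graph):
    print(sorted(community))              # each community ~ one candidate rule group
```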