Academic literature on the topic 'Noisy-OR model'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Noisy-OR model.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Noisy-OR model"

1

Quintanar-Gago, David A., and Pamela F. Nelson. "The extended Recursive Noisy OR model: Static and dynamic considerations." International Journal of Approximate Reasoning 139 (December 2021): 185–200. http://dx.doi.org/10.1016/j.ijar.2021.09.013.

2

Zhou, Kuang, Arnaud Martin, and Quan Pan. "The Belief Noisy-OR Model Applied to Network Reliability Analysis." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 24, no. 06 (November 30, 2016): 937–60. http://dx.doi.org/10.1142/s0218488516500434.

Abstract:
One difficulty faced in knowledge engineering for Bayesian networks (BNs) is the quantification step, where the conditional probability tables (CPTs) are determined. The number of parameters included in CPTs increases exponentially with the number of parent variables. The most common solution is the application of so-called canonical gates. The Noisy-OR (NOR) gate, which takes advantage of the independence of causal interactions, provides a logarithmic reduction of the number of parameters required to specify a CPT. In this paper, an extension of the NOR model based on the theory of belief functions, named Belief Noisy-OR (BNOR), is proposed. BNOR is capable of dealing with both aleatory and epistemic uncertainty of the network. Compared with NOR, richer information of great value for decision making can be obtained when the available knowledge is uncertain. In particular, when there is no epistemic uncertainty, BNOR degrades into NOR. Additionally, different structures of BNOR are presented in this paper in order to meet various needs of engineers. The application of the BNOR model to the reliability evaluation of networked systems demonstrates its effectiveness.
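The parameter reduction this abstract refers to is easy to see concretely: a noisy-OR node with n parents needs only one causal strength per parent, yet those n numbers determine all 2^n CPT entries. A minimal sketch (plain noisy-OR without a leak term; the function names are illustrative, not from the paper):

```python
from itertools import product

def noisy_or_prob(active_probs):
    """P(Y=1) given the causal strengths p_i of the active parents:
    each active cause independently fails to trigger Y with prob 1 - p_i."""
    q = 1.0
    for p in active_probs:
        q *= (1.0 - p)
    return 1.0 - q

def noisy_or_cpt(parent_probs):
    """Full CPT over all 2^n parent configurations, built from only n parameters."""
    n = len(parent_probs)
    cpt = {}
    for config in product([0, 1], repeat=n):
        active = [p for x, p in zip(config, parent_probs) if x == 1]
        cpt[config] = noisy_or_prob(active)
    return cpt
```

For parents with strengths 0.8 and 0.6 both active, P(Y=1) = 1 − 0.2 × 0.4 = 0.92; a leak probability would simply multiply one extra failure factor into the product.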
3

Li, W., P. Poupart, and P. Van Beek. "Exploiting Structure in Weighted Model Counting Approaches to Probabilistic Inference." Journal of Artificial Intelligence Research 40 (April 19, 2011): 729–65. http://dx.doi.org/10.1613/jair.3232.

Abstract:
Previous studies have demonstrated that encoding a Bayesian network into a SAT formula and then performing weighted model counting using a backtracking search algorithm can be an effective method for exact inference. In this paper, we present techniques for improving this approach for Bayesian networks with noisy-OR and noisy-MAX relations, two relations that are widely used in practice as they can dramatically reduce the number of probabilities one needs to specify. In particular, we present two SAT encodings for noisy-OR and two encodings for noisy-MAX that exploit the structure or semantics of the relations to improve both time and space efficiency, and we prove the correctness of the encodings. We experimentally evaluated our techniques on large-scale real and randomly generated Bayesian networks. On these benchmarks, our techniques gave speedups of up to two orders of magnitude over the best previous approaches for networks with noisy-OR/MAX relations and scaled up to larger networks. As well, our techniques extend the weighted model counting approach for exact inference to networks that were previously intractable for the approach.
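The quantity being computed here, the weighted model count of a CNF formula, can be illustrated with a brute-force counter (the paper's actual SAT encodings and backtracking counter are far more sophisticated; this enumeration sketch, with illustrative names, only shows what is being summed):

```python
from itertools import product

def weighted_model_count(n_vars, clauses, weights):
    """Sum, over all assignments that satisfy every clause, of the product
    of per-variable weights. A clause is a list of ints: +i means variable i
    is true, -i means variable i is false (variables are 1-indexed)."""
    total = 0.0
    for assign in product([False, True], repeat=n_vars):
        if all(any(assign[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            weight = 1.0
            for i, value in enumerate(assign):
                weight *= weights[i] if value else (1.0 - weights[i])
            total += weight
    return total
```

With independent "cause" variables of weight 0.3 and 0.6 and the single clause (x1 ∨ x2), the count is 1 − 0.7 × 0.4 = 0.72, i.e., exactly the noisy-OR-style probability that at least one cause fires.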
4

Büttner, Martha, Lisa Schneider, Aleksander Krasowski, Joachim Krois, Ben Feldberg, and Falk Schwendicke. "Impact of Noisy Labels on Dental Deep Learning—Calculus Detection on Bitewing Radiographs." Journal of Clinical Medicine 12, no. 9 (April 23, 2023): 3058. http://dx.doi.org/10.3390/jcm12093058.

Abstract:
Supervised deep learning requires labelled data. On medical images, data are often labelled inconsistently (e.g., with bounding boxes drawn too large) and with varying accuracy. We aimed to assess the impact of such label noise on dental calculus detection on bitewing radiographs. On 2584 bitewings, calculus was accurately labeled using bounding boxes (BBs), which were then artificially increased and decreased stepwise, resulting in 30 consistently and 9 inconsistently noisy datasets. An object detection network (YOLOv5) was trained on each dataset and evaluated on noisy and accurate test data. Training on accurately labeled data yielded an mAP50 of 0.77 (SD: 0.01). When trained on consistently too-small BBs, model performance decreased significantly on both accurate and noisy test data. Performance of models trained on consistently too-large BBs decreased immediately on accurate test data (e.g., 200% BBs: mAP50: 0.24; SD: 0.05; p < 0.05), but only after drastically increasing BBs on noisy test data (e.g., 70,000%: mAP50: 0.75; SD: 0.01; p < 0.05). Models trained on inconsistent BB sizes showed a significant decrease in performance when deviating 20% or more from the original when tested on noisy data (mAP50: 0.74; SD: 0.02; p < 0.05), or 30% or more when tested on accurate data (mAP50: 0.76; SD: 0.01; p < 0.05). In conclusion, accurate predictions require accurately labeled training data. Testing on noisy data may disguise the effects of noisy training data. Researchers should be aware of the relevance of accurately annotated data, especially when testing model performance.
5

Shang, Yuming, He-Yan Huang, Xian-Ling Mao, Xin Sun, and Wei Wei. "Are Noisy Sentences Useless for Distant Supervised Relation Extraction?" Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 8799–806. http://dx.doi.org/10.1609/aaai.v34i05.6407.

Abstract:
The noisy labeling problem has been one of the major obstacles for distant supervised relation extraction. Existing approaches usually consider that the noisy sentences are useless and will harm the model's performance. Therefore, they mainly alleviate this problem by reducing the influence of noisy sentences, such as applying bag-level selective attention or removing noisy sentences from sentence-bags. However, the underlying cause of the noisy labeling problem is not the lack of useful information, but the missing relation labels. Intuitively, if we can allocate credible labels for noisy sentences, they will be transformed into useful training data and benefit the model's performance. Thus, in this paper, we propose a novel method for distant supervised relation extraction, which employs unsupervised deep clustering to generate reliable labels for noisy sentences. Specifically, our model contains three modules: a sentence encoder, a noise detector and a label generator. The sentence encoder is used to obtain feature representations. The noise detector detects noisy sentences from sentence-bags, and the label generator produces high-confidence relation labels for noisy sentences. Extensive experimental results demonstrate that our model outperforms the state-of-the-art baselines on a popular benchmark dataset, and can indeed alleviate the noisy labeling problem.
6

Zheng, Guoqing, Ahmed Hassan Awadallah, and Susan Dumais. "Meta Label Correction for Noisy Label Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (May 18, 2021): 11053–61. http://dx.doi.org/10.1609/aaai.v35i12.17319.

Abstract:
Leveraging weak or noisy supervision for building effective machine learning models has long been an important research problem. Its importance has further increased recently due to the growing need for large-scale datasets to train deep learning models. Weak or noisy supervision can originate from multiple sources, including non-expert annotators or automatic labeling based on heuristics or user interaction signals. There is an extensive amount of previous work on leveraging noisy labels. Most notably, recent work has shown impressive gains from a meta-learned instance re-weighting approach, where a meta-learning framework is used to assign instance weights to noisy labels. In this paper, we extend this approach by posing the problem as a label correction problem within a meta-learning framework. We view the label correction procedure as a meta-process and propose a new meta-learning based framework termed MLC (Meta Label Correction) for learning with noisy labels. Specifically, a label correction network is adopted as a meta-model to produce corrected labels for noisy labels, while the main model is trained to leverage the corrected labels. Both models are jointly trained by solving a bi-level optimization problem. We run extensive experiments with different label noise levels and types on both image recognition and text classification tasks. We compare the re-weighting and correction approaches, showing that the correction framing addresses some of the limitations of re-weighting. We also show that the proposed MLC approach outperforms previous methods on both image and language tasks.
7

Maeda, Shin-ichi, Wen-Jie Song, and Shin Ishii. "Nonlinear and Noisy Extension of Independent Component Analysis: Theory and Its Application to a Pitch Sensation Model." Neural Computation 17, no. 1 (January 1, 2005): 115–44. http://dx.doi.org/10.1162/0899766052530866.

Abstract:
In this letter, we propose a noisy nonlinear version of independent component analysis (ICA). Assuming that the probability density function (p.d.f.) of sources is known, a learning rule is derived based on maximum likelihood estimation (MLE). Our model includes some algorithms of noisy linear ICA (e.g., Bermond & Cardoso, 1999) or noise-free nonlinear ICA (e.g., Lee, Koehler, & Orglmeister, 1997) as special cases. In particular, when the nonlinear function is linear, the learning rule, derived as a generalized expectation-maximization algorithm, has a similar form to the noisy ICA algorithm previously presented by Douglas, Cichocki, and Amari (1998). Moreover, our learning rule becomes identical to the standard noise-free linear ICA algorithm in the noiseless limit, while existing MLE-based noisy ICA algorithms do not rigorously include noise-free ICA. We trained our noisy nonlinear ICA using acoustic signals such as speech and music. The model after learning successfully simulates virtual pitch phenomena, and the existence region of virtual pitch is qualitatively similar to that observed in a psychoacoustic experiment. Although a linear transformation hypothesized in the central auditory system can account for the pitch sensation, our model suggests that the linear transformation can be acquired through learning from actual acoustic signals. Since our model includes cepstrum analysis as a special case, it is expected to provide a useful feature extraction method of the kind often given by cepstrum analysis.
8

Zhan, Peida, Hong Jiao, Kaiwen Man, and Lijun Wang. "Using JAGS for Bayesian Cognitive Diagnosis Modeling: A Tutorial." Journal of Educational and Behavioral Statistics 44, no. 4 (February 10, 2019): 473–503. http://dx.doi.org/10.3102/1076998619826040.

Abstract:
In this article, we systematically introduce the Just Another Gibbs Sampler (JAGS) software program to fit common Bayesian cognitive diagnosis models (CDMs), including the deterministic inputs, noisy "and" gate model; the deterministic inputs, noisy "or" gate model; the linear logistic model; the reduced reparameterized unified model; and the log-linear CDM (LCDM). Further, we introduce the unstructured latent structural model and the higher-order latent structural model. We also show how to extend these models to consider polytomous attributes, the testlet effect, and longitudinal diagnosis. Finally, we present an empirical example as a tutorial to illustrate how to use JAGS code in R.
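The deterministic inputs, noisy "or" gate (DINO) model mentioned here has a compact item response function: the ideal response is 1 when the examinee masters at least one attribute the item measures, and slip/guess parameters perturb it. A hedged sketch (the function names and the minimal slip/guess parameterization are illustrative; the tutorial's JAGS code is the authoritative version):

```python
def dino_eta(alpha, q):
    """Ideal response of the noisy 'or' gate: 1 iff the examinee masters
    at least one attribute (alpha[k] = 1) that the item requires (q[k] = 1)."""
    prod = 1
    for a_k, q_k in zip(alpha, q):
        prod *= (1 - a_k) ** q_k
    return 1 - prod

def dino_prob(alpha, q, slip, guess):
    """P(correct answer): 1 - slip when the gate fires, guess otherwise."""
    return (1 - slip) if dino_eta(alpha, q) == 1 else guess
```

An examinee with attribute profile (1, 0) on an item requiring either attribute (q = (1, 1)) answers correctly with probability 1 − slip, since mastering one attribute is enough under the "or" gate.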
9

Hong, Zhiwei, Xiaocheng Fan, Tao Jiang, and Jianxing Feng. "End-to-End Unpaired Image Denoising with Conditional Adversarial Networks." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 4140–49. http://dx.doi.org/10.1609/aaai.v34i04.5834.

Abstract:
Image denoising is a classic low-level vision problem that attempts to recover a noise-free image from a noisy observation. Recent advances in deep neural networks have outperformed traditional prior-based methods for image denoising. However, the existing methods either require paired noisy and clean images for training or impose certain assumptions on the noise distribution and data types. In this paper, we present an end-to-end unpaired image denoising framework (UIDNet) that denoises images with only unpaired clean and noisy training images. The critical component of our model is a noise learning module based on a conditional Generative Adversarial Network (cGAN). The model learns the noise distribution from the input noisy images and uses it to transform the input clean images into noisy ones, without any assumption on the noise distribution and data types. This process results in pairs of clean and pseudo-noisy images. Such pairs are then used to train another denoising network, similar to the existing denoising methods based on paired images. The noise learning and denoising components are integrated so that they can be trained end-to-end. Extensive experimental evaluation has been performed on both synthetic and real data, including real photographs and computed tomography (CT) images. The results demonstrate that our model outperforms previous models trained on unpaired images, as well as the state-of-the-art methods based on paired training data when proper training pairs are unavailable.
10

Kağan Akkaya, Emre, and Burcu Can. "Transfer learning for Turkish named entity recognition on noisy text." Natural Language Engineering 27, no. 1 (January 28, 2020): 35–64. http://dx.doi.org/10.1017/s1351324919000627.

Abstract:
In this article, we investigate using deep neural networks with different word representation techniques for named entity recognition (NER) on Turkish noisy text. We argue that valuable latent features for NER can, in fact, be learned without using any hand-crafted features and/or domain-specific resources such as gazetteers and lexicons. In this regard, we utilize character-level, character n-gram-level, morpheme-level, and orthographic character-level word representations. Since noisy data with NER annotation are scarce for Turkish, we introduce a transfer learning model in order to learn infrequent entity types as an extension to the Bi-LSTM-CRF architecture, by incorporating an additional conditional random field (CRF) layer that is trained on a larger (but formal) text and a noisy text simultaneously. This allows us to learn from both formal and informal/noisy text, thus further improving the performance of our model for rarely seen entity types. We experimented on Turkish as a morphologically rich language and English as a relatively morphologically poor language. We obtained an entity-level F1 score of 67.39% on Turkish noisy data and 45.30% on English noisy data, which outperforms the current state-of-the-art models on noisy text. The English scores are lower than the Turkish scores because of the intense sparsity introduced into the data by users' writing styles. The results show that using subword information contributes significantly to learning latent features for morphologically rich languages.

Dissertations / Theses on the topic "Noisy-OR model"

1

Li, Wei. "Exploiting Structure in Backtracking Algorithms for Propositional and Probabilistic Reasoning." Thesis, 2010. http://hdl.handle.net/10012/5322.

Abstract:
Boolean propositional satisfiability (SAT) and probabilistic reasoning represent two core problems in AI. Backtracking-based algorithms have been applied to both problems. In this thesis, I investigate structure-based techniques for solving real-world SAT and Bayesian network instances, such as software testing and medical diagnosis instances. When solving a SAT instance using backtracking search, a sequence of decisions must be made as to which variable to branch on or instantiate next. Real-world problems are often amenable to a divide-and-conquer strategy where the original instance is decomposed into independent sub-problems. Existing decomposition techniques are based on pre-processing the static structure of the original problem. I propose a dynamic decomposition method based on hypergraph separators. Integrating this dynamic separator decomposition into the variable ordering of a modern SAT solver leads to speedups on large real-world SAT problems. Encoding a Bayesian network into a CNF formula and then performing weighted model counting is an effective method for exact probabilistic inference. I present two encodings for improving this approach with noisy-OR and noisy-MAX relations. In our experiments, the new encodings are more space efficient and speed up the previous best approaches by over two orders of magnitude. The ability to solve similar problems incrementally is critical for many probabilistic reasoning problems. My aim is to exploit the similarity of these instances by forwarding structural knowledge learned during the analysis of one instance to the next instance in the sequence. I propose dynamic model counting and extend the dynamic decomposition and caching technique to multiple runs on a series of problems with similar structure. This allows us to perform Bayesian inference incrementally as the evidence, parameters, and structure of the network change. Experimental results show that my approach yields significant improvements over previous model counting approaches on multiple challenging Bayesian network instances.

Books on the topic "Noisy-OR model"

1

Back, Kerry E. Rational Expectations Equilibria. Oxford University Press, 2017. http://dx.doi.org/10.1093/acprof:oso/9780190241148.003.0022.

Abstract:
When differences in beliefs are due to differences in information, investors learn from prices. If there are no risk‐sharing motives for trade, then differences in information do not lead to trade (the no‐trade theorem). Equilibrium prices can fully reveal information, but then there is no incentive to gather information (the Grossman‐Stiglitz paradox). Noisy trades or asset supplies facilitate partially revealing equilibria. In the Grossman‐Stiglitz model and the Hellwig model, prices equal discounted expected values minus a risk premium term that depends on the average precision of investors’ information weighted by their risk tolerances. The chapter explains the mechanics of updating beliefs when fundamentals and signals are normally distributed.
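The belief updating the chapter describes for normally distributed fundamentals and signals reduces to a precision-weighted average. A minimal sketch (the generic conjugate-normal update, not the chapter's full equilibrium model; names are illustrative):

```python
def gaussian_update(prior_mean, prior_var, signal, noise_var):
    """Posterior of a normal fundamental after observing one noisy normal
    signal: precisions add, and the posterior mean weights the prior mean
    and the signal by their respective precisions."""
    prior_prec, signal_prec = 1.0 / prior_var, 1.0 / noise_var
    post_var = 1.0 / (prior_prec + signal_prec)
    post_mean = post_var * (prior_prec * prior_mean + signal_prec * signal)
    return post_mean, post_var
```

With a prior N(0, 1) and an equally precise signal of 2, the posterior is N(1, 0.5): the investor moves halfway toward the signal, and a less precise signal would move beliefs proportionally less.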
2

Portillo, Rafael, Filiz Unsal, Stephen O’Connell, and Catherine Pattillo. Implementation Errors and Incomplete Information. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198785811.003.0009.

Abstract:
This chapter shows that limited effects of monetary policy can reflect shortcomings of existing policy frameworks in low-income countries rather than (or in addition to) the structural features often put forward in policy and academic debates. The chapter focuses on two pervasive issues: lack of effective frameworks for implementing policy, so that short-term interest rates display considerable unintended volatility, and poor communication about policy intent. The authors introduce these features into an otherwise standard New Keynesian model with incomplete information. Implementation errors result from insufficient accommodation to money demand shocks, creating a noisy wedge between actual and intended interest rates. The representative private agent must then infer policy intentions from movements in interest rates and money. Under these conditions, even exogenous and persistent changes in the stance of monetary policy can have weak effects, even when the underlying transmission (as might be observed under complete information) is strong.
3

Golan, Amos. Foundations of Info-Metrics. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780199349524.001.0001.

Abstract:
This book provides a framework for info-metrics—the science of modeling, inference, and reasoning under conditions of noisy and insufficient information. Info-metrics is an inherently interdisciplinary framework that emerged from the intersection of information theory, statistical inference, and decision-making under uncertainty. It allows us to process the available information with minimal reliance on assumptions that cannot be validated. This book focuses on unifying all information processing and model building within a single constrained optimization framework. It provides a complete framework for modeling and inference, rather than a problem-specific model. The framework evolves from the simple premise that our available information is often insufficient to provide a unique answer for decisions we wish to make. Each decision, or solution, is derived from the available input information along with a choice of inferential procedure. The book contains many multidisciplinary applications that demonstrate the simplicity and generality of the framework in real-world settings: These include initial diagnosis at an emergency room, optimal dose decisions, election forecasting, network and information aggregation, weather pattern analyses, portfolio allocation, inference of strategic behavior, incorporation of prior information, option pricing, and modeling an interacting social system. This book presents simple derivations of the key results that are necessary to understand and apply the fundamental concepts to a variety of problems. Derivations are often supported by graphical illustrations. The book is designed to be accessible for graduate students, researchers, and practitioners across the disciplines, requiring only basic quantitative skills and a little persistence.

Book chapters on the topic "Noisy-OR model"

1

Woudenberg, Steven P. D., and Linda C. van der Gaag. "Using the Noisy-OR Model Can Be Harmful … But It Often Is Not." In Lecture Notes in Computer Science, 122–33. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-22152-1_11.

2

Bolt, Janneke H., and Linda C. van der Gaag. "An Empirical Study of the Use of the Noisy-Or Model in a Real-Life Bayesian Network." In Communications in Computer and Information Science, 11–20. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-14055-6_2.

3

Guan, Ji, Wang Fang, and Mingsheng Ying. "Verifying Fairness in Quantum Machine Learning." In Computer Aided Verification, 408–29. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-13188-2_20.

Abstract:
Due to the beyond-classical capability of quantum computing, quantum machine learning is applied independently or embedded in classical models for decision making, especially in the field of finance. Fairness and other ethical issues are often among the main concerns in decision making. In this work, we define a formal framework for the fairness verification and analysis of quantum machine learning decision models, where we adopt one of the most popular notions of fairness in the literature, based on the intuition that any two similar individuals must be treated similarly and are thus unbiased. We show that quantum noise can improve fairness and develop an algorithm to check whether a (noisy) quantum machine learning model is fair. In particular, this algorithm can find bias kernels of quantum data (encoding individuals) during checking. These bias kernels generate infinitely many bias pairs for investigating the unfairness of the model. Our algorithm is designed based on a highly efficient data structure, Tensor Networks, and implemented on Google's TensorFlow Quantum. The utility and effectiveness of our algorithm are confirmed by the experimental results, including income prediction and credit scoring on real-world data, for a class of random (noisy) quantum decision models with 27 qubits (a 2^27-dimensional state space), tripling (i.e., 2^18 times more than) the size handled by state-of-the-art algorithms for verifying quantum machine learning models.
4

Cohen, Albert, Wolfgang Dahmen, and Ron DeVore. "State Estimation—The Role of Reduced Models." In SEMA SIMAI Springer Series, 57–77. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-86236-7_4.

Abstract:
The exploration of complex physical or technological processes usually requires exploiting available information from different sources: (i) physical laws, often represented as a family of parameter-dependent partial differential equations, and (ii) data provided by measurement devices or sensors. The number of sensors is typically limited, and data acquisition may be expensive and in some cases even harmful. This article reviews some recent developments for this "small-data" scenario, where inversion is strongly aggravated by the typically large parametric dimensionality. The proposed concepts may be viewed as exploring alternatives to Bayesian inversion in favor of more deterministic accuracy quantification related to the required computational complexity. We discuss optimality criteria which delineate intrinsic information limits, and highlight the role of reduced models for developing efficient computational strategies. In particular, a central theme is the need to adapt the reduced models not to a specific (possibly noisy) data set but rather to the sensor system. This, in turn, is facilitated by exploiting geometric perspectives based on proper stable variational formulations of the continuous model.
5

Salotti, Jean Marc. "Noisy-or Nodes for Conditioning Models." In From Animals to Animats 11, 458–67. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15193-4_43.

6

Walrand, Jean. "Speech Recognition: A." In Probability in Electrical Engineering and Computer Science, 205–15. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-49995-2_11.

Abstract:
Speech recognition can be formulated as the problem of guessing a sequence of words that produces a sequence of sounds. The human brain is remarkably good at solving this problem, even though the same words correspond to many different sounds because of accents or characteristics of the voice. Moreover, the environment is always noisy, so that listeners hear a corrupted version of the speech. Computers are getting much better at speech recognition, and voice command systems are now common for smartphones (Siri), automobiles (GPS, music, and climate control), call centers, and dictation systems. In this chapter, we explain the main ideas behind the algorithms for speech recognition and for related applications. The starting point is a model of the random sequence (e.g., words) to be recognized and of how this sequence is related to the observation (e.g., voice). The main model is called a hidden Markov chain. The idea is that the successive parts of speech form a Markov chain and that each word maps randomly to some sounds. The same model is used to decode strings of symbols in communication systems. Section 11.1 is a general discussion of learning. The hidden Markov chain model used in speech recognition and in error decoding is introduced in Sect. 11.2. That section explains the Viterbi algorithm. Section 11.3 discusses expectation maximization and clustering algorithms. Section 11.4 covers learning for hidden Markov chains.
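The Viterbi algorithm the abstract mentions is a short dynamic program: keep, for each state, the probability of the best path ending there, then backtrack. A sketch under illustrative names (dict-of-dicts transition and emission tables; not the book's code):

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state sequence of an HMM given an observation list."""
    # V[t][s] = (probability of the best path ending in s at time t, predecessor)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for o in obs[1:]:
        prev = V[-1]
        V.append({
            s: max(((prev[r][0] * trans_p[r][s] * emit_p[s][o], r) for r in states),
                   key=lambda pair: pair[0])
            for s in states
        })
    # Backtrack from the best final state through the stored predecessors.
    state = max(states, key=lambda s: V[-1][s][0])
    path = [state]
    for t in range(len(V) - 1, 0, -1):
        state = V[t][state][1]
        path.append(state)
    return list(reversed(path))
```

On the classic toy weather HMM (rainy days emit "umbrella" with high probability), two umbrella observations decode to two rainy states; the same recursion underlies the error-decoding application the chapter mentions.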
7

Koltai, Júlia, Zoltán Kmetty, and Károly Bozsonyi. "From Durkheim to Machine Learning: Finding the Relevant Sociological Content in Depression and Suicide-Related Social Media Discourses." In Pathways Between Social Science and Computational Social Science, 237–58. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-54936-7_11.

Abstract:
The phenomenon of suicide has been a focal point among social scientists since Durkheim. The Internet and social media sites provide new ways for people to express their positive feelings, but they are also platforms for expressing suicidal ideation or depressed thoughts. Most of these posts are not about real suicide, and some of them are a cry for help. Nevertheless, suicide- and depression-related content varies among platforms, and it is not evident how a researcher can find this material in the mass of social media data. Our paper uses a corpus of more than four million Instagram posts related to mental health problems. After defining the initial corpus, we present two different strategies for finding the relevant sociological content in the noisy environment of social media. The first approach starts with topic modeling (Latent Dirichlet Allocation), the output of which serves as the basis of a supervised classification method based on advanced machine-learning techniques. The other strategy is built on an artificial neural network-based word embedding language model. Based on our results, the combination of topic modeling and neural network word embedding methods seems to be a promising way to find research-related content in a large digital corpus. Our research can provide added value in the detection of possible self-harm events. With the utilization of complex techniques (such as topic modeling and word embedding methods), it is possible to identify the most problematic posts and the most vulnerable users.
8

Srinivas, Sampath. "A Generalization of the Noisy-Or Model." In Uncertainty in Artificial Intelligence, 208–15. Elsevier, 1993. http://dx.doi.org/10.1016/b978-1-4832-1451-1.50030-5.

9

Busemeyer, Marius R., and Julian L. Garritzmann. "Loud, Noisy, or Quiet Politics?" In The World Politics of Social Investment: Volume II, 59–85. Oxford University Press, 2022. http://dx.doi.org/10.1093/oso/9780197601457.003.0003.

Abstract:
This chapter develops a theoretical model for the conditions under which parties, public opinion, or interest groups, respectively, affect public policymaking. It argues that the influence of public opinion, parties, and interest groups depends on the salience of the respective topic and on the degree of agreement in public opinion. Public opinion has the greatest influence in a world of “loud” politics when salience is high and the public’s attitudes are coherent. In contrast, when an issue is salient but attitudes are conflicting, public opinion sends a “loud but noisy” signal and party politics have a stronger influence on policymaking. Finally, when an issue is not salient (i.e., “quiet” politics), interest groups are dominant. Empirically, the chapter studies the politics of social investment reform in Western Europe. Based on an original survey of public opinion in eight Western European countries as well as on process tracing analysis of policy reforms, the chapter demonstrates how the influence of public opinion, parties, and interest groups on social investment reforms depends on the salience of the respective topic and on the coherence of public opinion.
APA, Harvard, Vancouver, ISO, and other styles
10

Rodgers, Waymond. "The Expedient Algorithmic Pathway." In Dominant Algorithms to Evaluate Artificial Intelligence: From the view of Throughput Model, 96–129. BENTHAM SCIENCE PUBLISHERS, 2022. http://dx.doi.org/10.2174/9789815049541122010006.

Full text
Abstract:
The Expedient Algorithmic Pathway (P→D) represents an individual or organization with a certain level of expertise providing a decision without the assistance of information, since the information may be too noisy, incomplete, or inadequately understood, or the alternatives cannot be differentiated. In addition, time pressures may prevent an individual or organization from analyzing the available information. This algorithm is very useful in AI applications ranging from data gathering to problem solving.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Noisy-OR model"

1

Nagesh, Ajay, Gholamreza Haffari, and Ganesh Ramakrishnan. "Noisy Or-based model for Relation Extraction using Distant Supervision." In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Stroudsburg, PA, USA: Association for Computational Linguistics, 2014. http://dx.doi.org/10.3115/v1/d14-1208.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ramakrishnan, Ganesh, Krishna Prasad Chitrapura, Raghu Krishnapuram, and Pushpak Bhattacharyya. "A model for handling approximate, noisy or incomplete labeling in text classification." In Proceedings of the 22nd International Conference on Machine Learning (ICML '05). New York, New York, USA: ACM Press, 2005. http://dx.doi.org/10.1145/1102351.1102437.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Hu, Yongjian, Yunfei Zhou, Xuefei Jiang, Zhihuai Xiao, and Zhaohui Sun. "Study of Hydropower Units Fault Diagnosis based on Bayesian Network Noisy Or Model." In 2014 ISFMFE - 6th International Symposium on Fluid Machinery and Fluid Engineering. Institution of Engineering and Technology, 2014. http://dx.doi.org/10.1049/cp.2014.1132.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Li, Zhaohui, Xiaogang Wang, Wan Qiu, and Dongxin Shi. "Research on Intelligent Traditional Chinese Medicine Prescription Model Based on Noisy-or Bayesian Network." In 2020 International Conference on Culture-oriented Science & Technology (ICCST). IEEE, 2020. http://dx.doi.org/10.1109/iccst50977.2020.00100.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Pan, Weiran, Wei Wei, and Feida Zhu. "Automatic Noisy Label Correction for Fine-Grained Entity Typing." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/599.

Full text
Abstract:
Fine-grained entity typing (FET) aims to assign proper semantic types to entity mentions according to their context, which is a fundamental task in various entity-leveraging applications. Current FET systems are usually built on large-scale weakly-supervised/distantly-annotated data, which may contain abundant noise and thus severely hinder the performance of the FET task. Although previous studies have achieved great success in automatically identifying noisy labels in FET, they usually rely on auxiliary resources which may be unavailable in real-world applications (e.g., pre-defined hierarchical type structures, human-annotated subsets). In this paper, we propose a novel approach to automatically correct noisy labels for FET without external resources. Specifically, it first identifies potentially noisy labels by estimating the posterior probability of a label being positive or negative according to the logits output by the model, and then relabels candidate noisy labels by training a robust model over the remaining clean labels. Experiments on two popular benchmarks prove the effectiveness of our method. Our source code can be obtained from https://github.com/CCIIPLab/DenoiseFET.
APA, Harvard, Vancouver, ISO, and other styles
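The identify-then-relabel step described in the abstract above can be caricatured in a few lines. Everything here (function name, thresholds, the use of a sigmoid on single logits) is a hypothetical sketch, not the DenoiseFET implementation: a label is flagged as candidate noise when the model's posterior strongly disagrees with the annotation, and the remaining indices form the "clean" set used for retraining.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def flag_noisy_labels(logits, labels, low=0.2, high=0.8):
    """Toy posterior-based noise detection: a label annotated positive
    whose predicted posterior falls below `low` (or annotated negative
    with posterior above `high`) is a candidate for relabeling; all
    other examples are kept as 'clean'."""
    noisy, clean = [], []
    for i, (z, y) in enumerate(zip(logits, labels)):
        p = sigmoid(z)
        if (y == 1 and p < low) or (y == 0 and p > high):
            noisy.append(i)
        else:
            clean.append(i)
    return noisy, clean

# Example 1 is annotated positive but the model is confident it is
# negative, so it is flagged as candidate noise.
noisy, clean = flag_noisy_labels([3.0, -2.5, 0.1], [1, 1, 0])
```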
6

Park, Jun H., and N. Sri Namachchivaya. "Noisy Impact Oscillators." In ASME 2004 International Mechanical Engineering Congress and Exposition. ASMEDC, 2004. http://dx.doi.org/10.1115/imece2004-60861.

Full text
Abstract:
The purpose of this work is to develop an averaging approach to study the dynamics of a vibro-impact system excited by random perturbations. As a prototype, we consider a noisy single-degree-of-freedom equation with both positive and negative stiffness and achieve a model reduction, i.e., the development of rigorous methods to replace, in some asymptotic regime, a complicated system with a simpler one. To this end, we study the equations as a random perturbation of a two-dimensional weakly dissipative Hamiltonian system with either center-type or saddle-type fixed points. We achieve the model reduction through stochastic averaging. Examination of the reduced Markov process on a graph yields mean exit times, probability density functions, and stochastic bifurcations.
APA, Harvard, Vancouver, ISO, and other styles
7

Asl, Sajjad Fekri, Michael Athans, and Antonio Pascoal. "Estimation and Identification of Mass-Spring-Dashpot Systems Using Multiple-Model Adaptive Algorithms." In ASME 2002 International Mechanical Engineering Congress and Exposition. ASMEDC, 2002. http://dx.doi.org/10.1115/imece2002-33442.

Full text
Abstract:
We present performance evaluations of different configurations for state estimation and identification of a complex Mass-Spring-Dashpot (MSD) system using Multiple-Model Adaptive Estimation (MMAE) algorithms. The algorithms compare two distinct MMAE strategies, using either constant-gain or time-varying-gain Kalman filters, to identify the correct model of the MSD system. Simulation results, for a variety of noisy measurement assumptions, illustrate the behaviour of the MMAE algorithms, which are robust to mass uncertainties.
APA, Harvard, Vancouver, ISO, and other styles
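The core of the MMAE strategy summarized above is a bank of Kalman filters whose model probabilities are reweighted by the Gaussian likelihood of each filter's measurement residual. The scalar-case sketch below is an assumption-laden illustration (function and parameter names are ours, not the paper's): each candidate model k supplies its residual r_k and innovation variance S_k, and Bayes' rule updates the probability that model k is the correct one.

```python
import math

def mmae_update(probs, residuals, variances):
    """One Bayesian update of the model probabilities in a (scalar)
    MMAE bank: each filter k reports its measurement residual r_k and
    innovation variance S_k; its model probability is reweighted by
    the Gaussian likelihood of that residual and then renormalized."""
    weighted = []
    for p, r, s in zip(probs, residuals, variances):
        like = math.exp(-0.5 * r * r / s) / math.sqrt(2.0 * math.pi * s)
        weighted.append(p * like)
    total = sum(weighted)
    return [w / total for w in weighted]

# Model 0 explains the measurement (small residual); model 1 does not,
# so nearly all probability mass shifts to model 0 after one update.
probs = mmae_update([0.5, 0.5], residuals=[0.1, 3.0], variances=[1.0, 1.0])
```

Iterating this update over a measurement sequence is what lets the bank converge on the model whose residuals stay consistently small.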
8

Wu, Junshuang, Richong Zhang, Yongyi Mao, Hongyu Guo, and Jinpeng Huai. "Modeling Noisy Hierarchical Types in Fine-Grained Entity Typing: A Content-Based Weighting Approach." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/731.

Full text
Abstract:
Fine-grained entity typing (FET), which annotates the entities in a sentence with a set of finely specified type labels, often serves as the first and critical step towards many natural language processing tasks. Despite the great progress that has been made, current FET methods have difficulty coping with the noisy labels that naturally come with the data acquisition process. Existing FET approaches either pre-process to clean the noise or simply focus on one of the noisy labels, sidestepping the fact that those noises are related and content-dependent. In this paper, we directly model the structured, noisy labels with a novel content-sensitive weighting schema. Coupled with a newly devised cost function and a hierarchical type embedding strategy, our method leverages a random walk process to effectively weight out noisy labels during training. Experiments on several benchmark datasets validate the effectiveness of the proposed framework and establish it as a new state-of-the-art strategy for the noisy entity typing problem.
APA, Harvard, Vancouver, ISO, and other styles
9

Wong, Harry W. H., Jack P. K. Ma, Donald P. H. Wong, Lucien K. L. Ng, and Sherman S. M. Chow. "Learning Model with Error -- Exposing the Hidden Model of BAYHENN." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/488.

Full text
Abstract:
Privacy-preserving deep neural network (DNN) inference remains an intriguing problem even after the rapid developments of different communities. One challenge is that cryptographic techniques such as homomorphic encryption (HE) do not natively support non-linear computations (e.g., sigmoid). A recent work, BAYHENN (Xie et al., IJCAI'19), considers HE over the Bayesian neural network (BNN). The novelty lies in "meta-prediction" over a few noisy DNNs. The claim was that the clients can get intermediate outputs (to apply non-linear function) but are still prevented from learning the exact model parameters, which was justified via the widely-used learning-with-error (LWE) assumption (with Gaussian noises as the error). This paper refutes the security claim of BAYHENN via both theoretical and empirical analyses. We formally define a security game with different oracle queries capturing two realistic threat models. Our attack assuming a semi-honest adversary reveals all the parameters of single-layer BAYHENN, which generalizes to recovering the whole model that is "as good as" the BNN approximation of the original DNN, either under the malicious adversary model or with an increased number of oracle queries. This shows the need for rigorous security analysis ("the noise introduced by BNN can obfuscate the model" fails -- it is beyond what LWE guarantees) and calls for the collaboration between cryptographers and machine-learning experts to devise practical yet provably-secure solutions.
APA, Harvard, Vancouver, ISO, and other styles
10

Choi, Seunggil, and N. Sri Namachchivaya. "An Averaging Approach for Noisy Strongly Nonlinear Periodically Forced Systems." In ASME 2002 International Mechanical Engineering Congress and Exposition. ASMEDC, 2002. http://dx.doi.org/10.1115/imece2002-39384.

Full text
Abstract:
The purpose of this work is to develop a unified approach to study the dynamics of single-degree-of-freedom systems excited by both periodic and random perturbations. The near-resonant motion of such systems is not well understood. We will study this problem in depth with the aim of discovering a common geometric structure in the phase space, and of determining the effects of noisy perturbations on the passage of trajectories through the resonance zone. We consider the noisy, periodically driven Duffing equation as a prototypical single-degree-of-freedom system and achieve a model reduction through stochastic averaging. Depending on the strength of the noise, the reduced Markov process takes its values on a line or on a graph with certain gluing conditions at the vertex of the graph. The reduced model will provide a framework for computing standard statistical measures of dynamics and stability, namely mean exit times, probability density functions, and stochastic bifurcations. This work will also explain the counter-intuitive phenomenon of stochastic resonance, in which a weak periodic force in a nonlinear system can be enhanced by the addition of external noise.
APA, Harvard, Vancouver, ISO, and other styles
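A minimal sketch of the kind of prototype system studied above: an Euler-Maruyama simulation of the noisy, periodically driven (double-well) Duffing oscillator. All parameter values, names, and the discretization scheme here are illustrative assumptions, not taken from the paper, which works instead with the averaged (reduced) process.

```python
import math
import random

def simulate_noisy_duffing(steps=20000, dt=1e-3, delta=0.1,
                           force=0.3, omega=1.0, sigma=0.2, seed=0):
    """Euler-Maruyama integration of a double-well Duffing oscillator
    with periodic forcing and additive white noise:
        x'' + delta*x' - x + x**3 = force*cos(omega*t) + sigma*xi(t),
    where xi is Gaussian white noise. Returns the sampled x-trajectory."""
    rng = random.Random(seed)
    x, v, t = 0.5, 0.0, 0.0
    path = []
    for _ in range(steps):
        a = -delta * v + x - x ** 3 + force * math.cos(omega * t)
        x += v * dt
        # The Brownian increment enters the velocity with sqrt(dt) scaling.
        v += a * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
        path.append(x)
    return path

path = simulate_noisy_duffing()
```

Sufficiently strong noise lets trajectories hop between the two wells at x = -1 and x = +1, which is the hopping that stochastic resonance synchronizes with the weak periodic forcing.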