
Journal articles on the topic 'Divergence cognitive'


Consult the top 50 journal articles for your research on the topic 'Divergence cognitive.'


1

Man, Na, Kechao Wang, and Lin Liu. "Using Computer Cognitive Atlas to Improve Students' Divergent Thinking Ability." Journal of Organizational and End User Computing 33, no. 6 (November 2021): 1–16. http://dx.doi.org/10.4018/joeuc.20211101.oa25.

Abstract:
Human society has entered the era of intelligence. Social development in the era of intelligence has spawned a large number of intelligent applications. Intelligent applications have put forward unprecedented requirements on the level of cognitive intelligence of machines, and the realization of machine cognitive intelligence depends on knowledge map technology. Divergent thinking is an important part of thinking and an important indicator for measuring innovative thinking. The research in this article found that after the experiment, the associated probabilities of the F values of fluency, flexibility, uniqueness, semantic divergence, graphical divergence, and problem divergence were 0.389, 0.442, 0.594, 0.267, 0.319, and 0.478, all greater than the significance level of 0.05; that is, the divergent thinking ability of the experimental group was significantly improved. The results of this study show that the use of computer cognitive maps can improve students' divergent thinking ability.
2

Atabek-Yigit, Elif. "Can cognitive structure outcomes reveal cognitive styles? A study on the relationship between cognitive styles and cognitive structure outcomes on the subject of chemical kinetics." Chemistry Education Research and Practice 19, no. 3 (2018): 746–54. http://dx.doi.org/10.1039/c8rp00018b.

Abstract:
Determination of the relationship between individuals’ cognitive styles and cognitive structure outcomes was the main aim of this study. Sixty-six participants were enrolled in the study and their cognitive styles were determined by using the Hidden Figure Test (for their field dependent/independent dimension of cognitive style) and the Convergent/Divergent Test (for their convergence/divergence dimension of cognitive style). An open-ended questionnaire was formed in order to determine participants’ cognitive structure outcomes. The study topic was chosen as chemical kinetics since it is one of the most difficult topics in chemistry according to many students and also there is limited study in the literature on this topic. Key concepts about chemical kinetics were selected and given to the participants and they were asked to write a text by using the given concepts. A flow map technique was used to reveal participants’ cognitive structure outcomes. According to the findings of this study, it can be said that field independent participants tended to be divergent thinkers while field dependents tended to be convergent thinkers. Also, strong positive relationships between participants’ field dependency/independency and some cognitive structure outcomes (extent and richness) were found. That is, field independents tended to have more extended and richer cognitive structure outcomes. However, the convergence/divergence dimension of cognitive style did not show any correlation with cognitive structure outcomes.
3

Liang, Xiao. "A Note on Divergences." Neural Computation 28, no. 10 (October 2016): 2045–62. http://dx.doi.org/10.1162/neco_a_00878.

Abstract:
In many areas of neural computation, like learning, optimization, estimation, and inference, suitable divergences play a key role. In this note, we study the conjecture presented by Amari (2009) and find a counterexample to show that the conjecture does not hold generally. Moreover, we investigate two classes of [Formula: see text]-divergence (Zhang, 2004), weighted f-divergence and weighted [Formula: see text]-divergence, and prove that if a divergence is a weighted f-divergence, as well as a Bregman divergence, then it is a weighted [Formula: see text]-divergence. This result reduces in form to the main theorem established by Amari (2009) when [Formula: see text] [Formula: see text].
4

Amari, Shun-ichi, Ryo Karakida, Masafumi Oizumi, and Marco Cuturi. "Information Geometry for Regularized Optimal Transport and Barycenters of Patterns." Neural Computation 31, no. 5 (May 2019): 827–48. http://dx.doi.org/10.1162/neco_a_01178.

Abstract:
We propose a new divergence on the manifold of probability distributions, building on the entropic regularization of optimal transportation problems. As Cuturi (2013) showed, regularizing the optimal transport problem with an entropic term brings several computational benefits. However, because of that regularization, the resulting approximation of the optimal transport cost does not define a proper distance or divergence between probability distributions. We recently introduced a family of divergences connecting the Wasserstein distance and the Kullback-Leibler divergence from an information geometry point of view (see Amari, Karakida, & Oizumi, 2018). However, that proposal was not able to retain key intuitive aspects of the Wasserstein geometry, such as translation invariance, which plays a key role when used in the more general problem of computing optimal transport barycenters. The divergence we propose in this work is able to retain such properties and admits an intuitive interpretation.
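The entropic regularization referred to above can be illustrated with plain Sinkhorn iterations (a minimal sketch of Cuturi-style regularized optimal transport, not the new divergence this paper proposes; the function name and toy histograms are hypothetical):

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.2, n_iter=1000):
    """Entropy-regularized OT between histograms a and b with cost matrix C."""
    K = np.exp(-C / eps)               # Gibbs kernel induced by the entropic term
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)              # scale columns toward marginal b
        u = a / (K @ v)                # scale rows toward marginal a
    P = u[:, None] * K * v[None, :]    # approximate transport plan
    return P, float(np.sum(P * C))     # plan and regularized transport cost

# toy example: two histograms on a 1-D grid
x = np.linspace(0, 1, 8)
a = np.ones(8) / 8
b = np.array([0.5, 0.2, 0.1, 0.05, 0.05, 0.04, 0.03, 0.03])
C = (x[:, None] - x[None, :]) ** 2
P, cost = sinkhorn(a, b, C)
```

With the entropic term, each iteration is just two matrix-vector scalings, which is the computational benefit the abstract mentions; the returned regularized cost, however, is not itself a proper divergence.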
5

Thuc, Kieu-Xuan, and In-Soo Koo. "A Kullback-Leiber Divergence-based Spectrum Sensing for Cognitive Radio Systems." Journal of Korean Society for Internet Information 13, no. 1 (February 29, 2012): 1–6. http://dx.doi.org/10.7472/jksii.2012.13.1.1.

6

Yamada, Makoto, Taiji Suzuki, Takafumi Kanamori, Hirotaka Hachiya, and Masashi Sugiyama. "Relative Density-Ratio Estimation for Robust Distribution Comparison." Neural Computation 25, no. 5 (May 2013): 1324–70. http://dx.doi.org/10.1162/neco_a_00442.

Abstract:
Divergence estimators based on direct approximation of density ratios, without going through separate approximation of the numerator and denominator densities, have been successfully applied to machine learning tasks that involve distribution comparison, such as outlier detection, transfer learning, and two-sample homogeneity testing. However, since density-ratio functions often possess high fluctuation, divergence estimation is a challenging task in practice. In this letter, we use relative divergences for distribution comparison, which involves approximation of relative density ratios. Since relative density ratios are always smoother than the corresponding ordinary density ratios, our proposed method is favorable in terms of nonparametric convergence speed. Furthermore, we show that the proposed divergence estimator has asymptotic variance independent of the model complexity under a parametric setup, implying that the proposed estimator hardly overfits even with complex models. Through experiments, we demonstrate the usefulness of the proposed approach.
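The smoothness argument above can be seen directly from the definition of the alpha-relative density ratio, r_a(x) = p(x) / (a p(x) + (1 - a) q(x)), which is bounded by 1/a even where the ordinary ratio p/q diverges. A small numerical sketch with known Gaussian densities (an illustration of the quantity itself under assumed densities, not the paper's estimator; names are hypothetical):

```python
import math

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def relative_ratio(x, alpha):
    """alpha-relative density ratio p / (alpha*p + (1 - alpha)*q) for two Gaussians."""
    px = gauss_pdf(x, 0.0, 1.0)   # numerator density p (assumed)
    qx = gauss_pdf(x, 0.5, 1.0)   # denominator density q (assumed)
    return px / (alpha * px + (1 - alpha) * qx)

xs = [i * 0.5 for i in range(-20, 21)]
plain = [relative_ratio(x, 0.0) for x in xs]    # ordinary ratio p/q, unbounded in the tail
smooth = [relative_ratio(x, 0.5) for x in xs]   # relative ratio, bounded by 1/alpha = 2
```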
7

WANG, QIAN. "From Divergence to Convergence: Towards Integration of Cognitive Linguistics and Critical Discourse Analysis in Political Discourse." Advances in Social Sciences Research Journal 6, no. 11 (December 3, 2019): 401–11. http://dx.doi.org/10.14738/assrj.611.7442.

Abstract:
The social turn of cognitive linguistics and the cognitive turn of critical discourse analysis create an opportunity for the two fields to develop along a more convergent path that integrates both the cognitive and the social dimensions of language. Political discourse is intrinsically persuasive and always encodes a power relation in its attempt to achieve persuasive effectiveness. This paper argues that both approaches (Cognitive Linguistics and CDA) are concerned with surface evidence of the implicit ideologies hidden behind political discourse, so the integration of CL and CDA could extend the research scope of both paradigms on the one hand, and provide more powerful explanatory tools to deepen our understanding of the intertwined relations between language, cognition and society on the other.
8

Couture, S. M., D. L. Penn, M. Losh, R. Adolphs, R. Hurley, and J. Piven. "Comparison of social cognitive functioning in schizophrenia and high functioning autism: more convergence than divergence." Psychological Medicine 40, no. 4 (August 12, 2009): 569–79. http://dx.doi.org/10.1017/s003329170999078x.

Abstract:
Background: Individuals with schizophrenia and individuals with high-functioning autism (HFA) seem to share some social, behavioral and biological features. Although marked impairments in social cognition have been documented in both groups, little empirical work has compared the social cognitive functioning of these two clinical groups. Method: Forty-four individuals with schizophrenia, 36 with HFA and 41 non-clinical controls completed a battery of social cognitive measures that have been linked previously to specific brain regions. Results: The results indicate that the individuals with schizophrenia and HFA were both impaired on a variety of social cognitive tasks relative to the non-clinical controls, but did not differ from one another. When individuals with schizophrenia were divided into negative symptom and paranoid subgroups, exploratory analyses revealed that individuals with HFA may be more similar, in terms of the pattern of social cognition impairments, to the negative symptom group than to the paranoia group. Conclusions: Our findings provide further support for similarities in social cognition deficits between HFA and schizophrenia, which have a variety of implications for future work on gene–brain–behavior relationships.
9

Treichler, Emily B. H., Michael L. Thomas, Andrew W. Bismark, William C. Hochberger, Melissa Tarasenko, John Nungaray, Lauren Cardoso, et al. "Divergence of subjective and performance-based cognitive gains following cognitive training in schizophrenia." Schizophrenia Research 210 (August 2019): 215–20. http://dx.doi.org/10.1016/j.schres.2018.12.034.

10

Kopcsó, Krisztina, and András Láng. "Regulated Divergence: Textual Patterns, Creativity and Cognitive Emotion Regulation." Creativity Research Journal 29, no. 2 (April 3, 2017): 218–23. http://dx.doi.org/10.1080/10400419.2017.1303318.

11

Okuno, Akifumi, and Hidetoshi Shimodaira. "Hyperlink regression via Bregman divergence." Neural Networks 126 (June 2020): 362–83. http://dx.doi.org/10.1016/j.neunet.2020.03.026.

12

Villmann, Thomas, and Sven Haase. "Divergence-Based Vector Quantization." Neural Computation 23, no. 5 (May 2011): 1343–92. http://dx.doi.org/10.1162/neco_a_00110.

Abstract:
Supervised and unsupervised vector quantization methods for classification and clustering traditionally use dissimilarities, frequently taken as Euclidean distances. In this article, we investigate the applicability of divergences instead, focusing on online learning. We deduce the mathematical fundamentals for their utilization in gradient-based online vector quantization algorithms. This rests on the generalized derivatives of the divergences known as Fréchet derivatives in functional analysis, which reduce in finite-dimensional problems to partial derivatives in a natural way. We demonstrate the application of this methodology for widely applied supervised and unsupervised online vector quantization schemes, including self-organizing maps, neural gas, and learning vector quantization. Additionally, principles for hyperparameter optimization and relevance learning for parameterized divergences in the case of supervised vector quantization are given to achieve improved classification accuracy.
13

Mstislavskaya, Elena Vasil’evna. "Divergence as a Criterion of Musical Performance Quality." Pan-Art 2, no. 2 (May 31, 2022): 29–36. http://dx.doi.org/10.30853/pa20220008.

Abstract:
The purpose of the study is to substantiate that divergence is a criterion of musical performance quality. The material presented in the paper is addressed to those musicians whose musical abilities do not manifest themselves in conspicuously strong aspects of the cognitive sphere, such as phenomenal memory. The paper considers two aspects of a musician’s creative activities – the intellectual and the emotional – and determines the roles of a musician’s divergent qualities. The novelty of the study lies in a scientific substantiation of divergent thinking development in musical performance practice. As a result, the main issues in educating a modern performing musician have been identified. Comprehending some of the scientific positions given in the paper can contribute to increasing the importance of intellectual work in musical performance and to determining the individual trajectory of a musician’s professional development with this factor taken into account.
14

Kampffmeyer, Michael, Sigurd Løkse, Filippo M. Bianchi, Lorenzo Livi, Arnt-Børre Salberg, and Robert Jenssen. "Deep divergence-based approach to clustering." Neural Networks 113 (May 2019): 91–101. http://dx.doi.org/10.1016/j.neunet.2019.01.015.

15

Pilecki, Brian, Nathan Thoma, and Dean McKay. "Cognitive Behavioral and Psychodynamic Therapies: Points of Intersection and Divergence." Psychodynamic Psychiatry 43, no. 3 (September 2015): 463–90. http://dx.doi.org/10.1521/pdps.2015.43.3.463.

16

Gabalda, Isabel Caro, Robert A. Neimeyer, and Cory F. Newman. "Theory and Practice in the Cognitive Psychotherapies: Convergence and Divergence." Journal of Constructivist Psychology 23, no. 1 (January 2010): 65–83. http://dx.doi.org/10.1080/10720530903400996.

17

Vaessen, Anniek, and Leo Blomert. "The Cognitive Linkage and Divergence of Spelling and Reading Development." Scientific Studies of Reading 17, no. 2 (March 2013): 89–107. http://dx.doi.org/10.1080/10888438.2011.614665.

18

Movellan, Javier R. "Contrastive Divergence in Gaussian Diffusions." Neural Computation 20, no. 9 (September 2008): 2238–52. http://dx.doi.org/10.1162/neco.2008.01-07-430.

Abstract:
This letter presents an analysis of the contrastive divergence (CD) learning algorithm when applied to continuous-time linear stochastic neural networks. For this case, powerful techniques exist that allow a detailed analysis of the behavior of CD. The analysis shows that CD converges to maximum likelihood solutions only when the network structure is such that it can match the first moments of the desired distribution. Otherwise, CD can converge to solutions arbitrarily different from the maximum likelihood solutions, or it can even diverge. This result suggests the need to improve our theoretical understanding of the conditions under which CD is expected to be well behaved and the conditions under which it may fail. In addition, the results point to practical ideas on how to improve the performance of CD.
19

Bengio, Yoshua, and Olivier Delalleau. "Justifying and Generalizing Contrastive Divergence." Neural Computation 21, no. 6 (June 2009): 1601–21. http://dx.doi.org/10.1162/neco.2008.11-07-647.

Abstract:
We study an expansion of the log likelihood in undirected graphical models such as the restricted Boltzmann machine (RBM), where each term in the expansion is associated with a sample in a Gibbs chain alternating between two random variables (the visible vector and the hidden vector in RBMs). We are particularly interested in estimators of the gradient of the log likelihood obtained through this expansion. We show that its residual term converges to zero, justifying the use of a truncation—running only a short Gibbs chain, which is the main idea behind the contrastive divergence (CD) estimator of the log-likelihood gradient. By truncating even more, we obtain a stochastic reconstruction error, related through a mean-field approximation to the reconstruction error often used to train autoassociators and stacked autoassociators. The derivation is not specific to the particular parametric forms used in RBMs and requires only convergence of the Gibbs chain. We present theoretical and empirical evidence linking the number of Gibbs steps k and the magnitude of the RBM parameters to the bias in the CD estimator. These experiments also suggest that the sign of the CD estimator is correct most of the time, even when the bias is large, so that CD-k is a good descent direction even for small k.
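The CD-k truncation described above can be sketched for a tiny Bernoulli RBM (a simplified illustration with bias terms omitted and hypothetical names, not the authors' derivation):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd_k_step(W, v0, k=1, lr=0.1):
    """One CD-k weight update for a bias-free Bernoulli RBM."""
    ph0 = sigmoid(v0 @ W)                 # positive phase: E[h | data]
    vk = v0.copy()
    for _ in range(k):                    # short Gibbs chain = the truncation
        h = (rng.random(ph0.shape) < sigmoid(vk @ W)).astype(float)
        vk = (rng.random(v0.shape) < sigmoid(h @ W.T)).astype(float)
    phk = sigmoid(vk @ W)
    # CD-k gradient estimate: data statistics minus k-step reconstruction statistics
    W = W + lr * (v0.T @ ph0 - vk.T @ phk) / v0.shape[0]
    return W, vk

# toy data: two repeated complementary binary patterns
data = np.array([[1, 1, 0, 0], [0, 0, 1, 1]] * 20, dtype=float)
W = 0.01 * rng.standard_normal((4, 3))
for _ in range(300):
    W, vk = cd_k_step(W, data, k=1)
recon_err = float(np.mean((data - vk) ** 2))
```

Here k = 1 corresponds to the common CD-1 variant; increasing k lengthens the Gibbs chain and, per the analysis above, reduces the bias of the gradient estimate.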
20

Farsi, Roghayeh. "Experimentalism and cognition." English Text Construction 12, no. 1 (May 27, 2019): 29–58. http://dx.doi.org/10.1075/etc.00017.far.

Abstract:
This study approaches experimental literary texts from a cognitive perspective. It investigates whether a constructivist modeling of cognition can be applied to such texts, and contends that there is a two-way relation between memory and literary experimentation. It suggests a fresh look at literary experimentalism – from the perspective of the cognitive processes involved in challenging (text, language, and world) schemata to varying degrees. There exists a vast body of knowledge on experimental texts, but the cognitive processing of such texts has until now been a less studied area of cognitive research. This study defines two main types of experimental texts based on their closeness to or divergence from the schematic parameters of world, text, and language: proximal and distal. The study shows how distal experimentations are conventionalized over the course of time and call for re-innovation.
21

Daniels, Kevin, Gerry Johnson, and Leslie de Chernatony. "Task and Institutional Influences on Managers' Mental Models of Competition." Organization Studies 23, no. 1 (January 2002): 31–62. http://dx.doi.org/10.1177/0170840602231002.

Abstract:
From institutional theory, we argue (a) that the competitive, or task environment may encourage divergence of management cognition between organizations, management functions and amongst senior managers, and (b) that the institutional environment may encourage cognitive convergence at the level of the industry, the strategic group and within institutionalized practices linked to management functions and level. Using management cognition of competition as a vehicle and two cognitive mapping methods, we test a series of competing propositions amongst 32 managers in the UK personal financial services industry, an industry that evidences both task and institutional characteristics. Our findings indicate neither the superiority of exclusively task nor institutional explanations of management cognition. However, the results do indicate some influence of the institutional environment, most noticeably through the convergence of mental models within middle managers across the industry. The results also indicate some influence of the task environment, through cognitive differences across organizations and greater differentiation amongst senior managers' mental models. We interpret our results by referring to the usefulness of distinguishing between task and institutional environments in management cognition and strategic management research.
22

Hardingham, Giles E., Priit Pruunsild, Michael E. Greenberg, and Hilmar Bading. "Lineage divergence of activity-driven transcription and evolution of cognitive ability." Nature Reviews Neuroscience 19, no. 1 (November 23, 2017): 9–15. http://dx.doi.org/10.1038/nrn.2017.138.

23

Levesque, Laurie L., Jeanne M. Wilson, and Douglas R. Wholey. "Cognitive divergence and shared mental models in software development project teams." Journal of Organizational Behavior 22, no. 2 (2001): 135–44. http://dx.doi.org/10.1002/job.87.

24

Sajid, Noor, Francesco Faccio, Lancelot Da Costa, Thomas Parr, Jürgen Schmidhuber, and Karl Friston. "Bayesian Brains and the Rényi Divergence." Neural Computation 34, no. 4 (March 23, 2022): 829–55. http://dx.doi.org/10.1162/neco_a_01484.

Abstract:
Under the Bayesian brain hypothesis, behavioral variations can be attributed to different priors over generative model parameters. This provides a formal explanation for why individuals exhibit inconsistent behavioral preferences when confronted with similar choices. For example, greedy preferences are a consequence of confident (or precise) beliefs over certain outcomes. Here, we offer an alternative account of behavioral variability using Rényi divergences and their associated variational bounds. Rényi bounds are analogous to the variational free energy (or evidence lower bound) and can be derived under the same assumptions. Importantly, these bounds provide a formal way to establish behavioral differences through an α parameter, given fixed priors. This rests on changes in α that alter the bound (on a continuous scale), inducing different posterior estimates and consequent variations in behavior. Thus, it looks as if individuals have different priors and have reached different conclusions. More specifically, α → 0+ optimization constrains the variational posterior to be positive whenever the true posterior is positive. This leads to mass-covering variational estimates and increased variability in choice behavior. Furthermore, α → +∞ optimization constrains the variational posterior to be zero whenever the true posterior is zero. This leads to mass-seeking variational posteriors and greedy preferences. We exemplify this formulation through simulations of the multiarmed bandit task. We note that these α parameterizations may be especially relevant (i.e., shape preferences) when the true posterior is not in the same family of distributions as the assumed (simpler) approximate density, which may be the case in many real-world scenarios. The ensuing departure from vanilla variational inference provides a potentially useful explanation for differences in behavioral preferences of biological (or artificial) agents under the assumption that the brain performs variational Bayesian inference.
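For discrete distributions, the Rényi divergence has the closed form D_α(p‖q) = (1/(α − 1)) log Σ p_i^α q_i^(1−α), with the Kullback-Leibler divergence as the α → 1 limit. A generic numerical sketch (not the paper's variational-bound machinery; names are hypothetical):

```python
import math

def renyi_divergence(p, q, alpha):
    """Renyi divergence D_alpha(p || q) between discrete distributions."""
    if abs(alpha - 1.0) < 1e-12:
        # the alpha -> 1 limit is the Kullback-Leibler divergence
        return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    s = sum(pi ** alpha * qi ** (1 - alpha) for pi, qi in zip(p, q) if pi > 0)
    return math.log(s) / (alpha - 1)

p = [0.7, 0.2, 0.1]
q = [0.4, 0.4, 0.2]
kl = renyi_divergence(p, q, 1.0)
near_kl = renyi_divergence(p, q, 1.001)    # continuous in alpha near 1
big_alpha = renyi_divergence(p, q, 50.0)   # large alpha stresses the worst-case ratio
```

The divergence is nondecreasing in α, which is one way to see how varying α reweights mass-covering versus mass-seeking behavior.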
25

Zhang, Jun. "Divergence Function, Duality, and Convex Analysis." Neural Computation 16, no. 1 (January 1, 2004): 159–95. http://dx.doi.org/10.1162/08997660460734047.

Abstract:
From a smooth, strictly convex function Φ: R^n → R, a parametric family of divergence functions D_Φ^(α) may be introduced: D_Φ^(α)(x, y) = (4 / (1 − α^2)) [ ((1 − α)/2) Φ(x) + ((1 + α)/2) Φ(y) − Φ(((1 − α)/2) x + ((1 + α)/2) y) ] for x, y ∈ int dom(Φ) and for α ∈ R, with D_Φ^(±1) defined by taking the limit in α. Each member is shown to induce an α-independent Riemannian metric, as well as a pair of dual ±α-connections, which are generally nonflat, except for α = ±1. In the latter case, D_Φ^(±1) reduces to the (nonparametric) Bregman divergence, which is representable using Φ and its convex conjugate Φ* and becomes the canonical divergence for dually flat spaces (Amari, 1982, 1985; Amari & Nagaoka, 2000). This formulation based on convex analysis naturally extends the information-geometric interpretation of divergence functions (Eguchi, 1983) to allow the distinction between two different kinds of duality: referential duality (α ↔ −α) and representational duality (Φ ↔ Φ*). When applied to (not necessarily normalized) probability densities, the concept of conjugated representations of densities is introduced, so that ±α-connections defined on probability densities embody both referential and representational duality and are hence themselves bidual. When restricted to a finite-dimensional affine submanifold, the natural parameters of a certain representation of densities and the expectation parameters under its conjugate representation form biorthogonal coordinates. The alpha representation (indexed now by β, β ∈ [−1, 1]) is shown to be the only measure-invariant representation. The resulting two-parameter family of divergence functionals D^(α, β), (α, β) ∈ [−1, 1] × [−1, 1], induces identical Fisher information but bidual alpha-connection pairs; it reduces in form to Amari's alpha-divergence family when α = ±1 or when β = 1, and to the family of Jensen differences (Rao, 1987) when α = 0.
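In the scalar case this family is straightforward to evaluate numerically; the sketch below (hypothetical names, an illustration rather than the paper's construction) checks that α = 0 gives a positive Jensen-type quantity and that α → 1 approaches the Bregman divergence, as the abstract states:

```python
import math

def d_phi_alpha(phi, x, y, alpha):
    """Zhang-style divergence induced by a strictly convex phi (scalar case)."""
    lam = (1 - alpha) / 2                    # convex-mixture weight
    gap = lam * phi(x) + (1 - lam) * phi(y) - phi(lam * x + (1 - lam) * y)
    return 4 * gap / (1 - alpha ** 2)

phi = lambda t: t * math.log(t)              # a strictly convex function on t > 0

d0 = d_phi_alpha(phi, 2.0, 3.0, alpha=0.0)        # alpha = 0: 4x Rao's Jensen difference
d_near1 = d_phi_alpha(phi, 2.0, 3.0, alpha=0.999) # approaches the Bregman divergence
# Bregman divergence B_phi(x, y) = phi(x) - phi(y) - phi'(y)(x - y), phi'(y) = log y + 1
bregman = phi(2.0) - phi(3.0) - (math.log(3.0) + 1) * (2.0 - 3.0)
```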
26

Notsu, Akifumi, Osamu Komori, and Shinto Eguchi. "Spontaneous Clustering via Minimum Gamma-Divergence." Neural Computation 26, no. 2 (February 2014): 421–48. http://dx.doi.org/10.1162/neco_a_00547.

Abstract:
We propose a new method for clustering based on local minimization of the gamma-divergence, which we call spontaneous clustering. The greatest advantage of the proposed method is that it automatically detects the number of clusters that adequately reflect the data structure. In contrast, existing methods, such as K-means, fuzzy c-means, or model-based clustering need to prescribe the number of clusters. We detect all the local minimum points of the gamma-divergence, by which we define the cluster centers. A necessary and sufficient condition for the gamma-divergence to have local minimum points is also derived in a simple setting. Applications to simulated and real data are presented to compare the proposed method with existing ones.
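For discrete distributions, the gamma-divergence minimized here has a closed form (Fujisawa–Eguchi type); the sketch below (an illustrative computation with hypothetical names, not the spontaneous clustering algorithm itself) checks nonnegativity and the γ → 0 reduction to the Kullback-Leibler divergence:

```python
import math

def gamma_divergence(p, q, gamma):
    """Gamma-divergence between discrete distributions (Fujisawa-Eguchi form)."""
    s_pp = sum(pi ** (1 + gamma) for pi in p)
    s_pq = sum(pi * qi ** gamma for pi, qi in zip(p, q))
    s_qq = sum(qi ** (1 + gamma) for qi in q)
    return (math.log(s_pp) / (gamma * (1 + gamma))
            - math.log(s_pq) / gamma
            + math.log(s_qq) / (1 + gamma))

p = [0.6, 0.3, 0.1]
q = [0.2, 0.5, 0.3]
d = gamma_divergence(p, q, gamma=0.5)
zero = gamma_divergence(p, p, gamma=0.5)            # zero iff the arguments agree
small_gamma = gamma_divergence(p, q, gamma=1e-4)    # gamma -> 0 recovers KL
kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```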
27

Zhang, Zhiyi, and Michael Grabchak. "Nonparametric Estimation of Kullback-Leibler Divergence." Neural Computation 26, no. 11 (November 2014): 2570–93. http://dx.doi.org/10.1162/neco_a_00646.

Abstract:
In this letter, we introduce an estimator of Kullback-Leibler divergence based on two independent samples. We show that on any finite alphabet, this estimator has an exponentially decaying bias and that it is consistent and asymptotically normal. To explain the importance of this estimator, we provide a thorough analysis of the more standard plug-in estimator. We show that it is consistent and asymptotically normal, but with an infinite bias. Moreover, if we modify the plug-in estimator to remove the rare events that cause the bias to become infinite, the bias still decays at a rate no faster than [Formula: see text]. Further, we extend our results to estimating the symmetrized Kullback-Leibler divergence. We conclude by providing simulation results, which show that the asymptotic properties of these estimators hold even for relatively small sample sizes.
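The plug-in estimator analyzed above is simple to state: estimate each distribution by empirical frequencies and evaluate the KL sum, which becomes infinite whenever a symbol observed in the first sample never occurs in the second. A minimal sketch on a finite alphabet (hypothetical names, not the authors' improved estimator):

```python
import math
from collections import Counter

def plugin_kl(xs, ys, alphabet):
    """Plug-in estimate of KL(p || q) from two i.i.d. samples on a finite alphabet."""
    n, m = len(xs), len(ys)
    cx, cy = Counter(xs), Counter(ys)
    total = 0.0
    for a in alphabet:
        p_hat = cx[a] / n
        if p_hat == 0:
            continue                      # 0 * log(0 / q) contributes nothing
        q_hat = cy[a] / m
        if q_hat == 0:
            return math.inf               # the rare event behind the infinite bias
        total += p_hat * math.log(p_hat / q_hat)
    return total

kl_hat = plugin_kl(list("aababba" * 30), list("abbbaab" * 30), "ab")
blow_up = plugin_kl(list("aa"), list("bb"), "ab")   # unseen symbol: estimate is infinite
```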
28

Sulikowski, Danielle, and Darren Burke. "From the lab to the world: The paradigmatic assumption and the functional cognition of avian foraging." Current Zoology 61, no. 2 (April 1, 2015): 328–40. http://dx.doi.org/10.1093/czoolo/61.2.328.

Abstract:
Mechanisms of animal learning and memory were traditionally studied without reference to niche-specific functional considerations. More recently, ecological demands have informed such investigations, most notably with respect to foraging in birds. In parallel, behavioural ecologists, primarily concerned with functional optimization, have begun to consider the role of mechanistic factors, including cognition, to explain apparent deviations from optimal predictions. In the present paper we discuss the application of laboratory-based constructs and paradigms of cognition to the real-world challenges faced by avian foragers. We argue that such applications have been handicapped by what we term the ‘paradigmatic assumption’ – the assumption that a given laboratory paradigm maps well enough onto a congruent cognitive mechanism (or cognitive ability) to justify conflation of the two. We present evidence against the paradigmatic assumption and suggest that to achieve a profitable integration between function and mechanism, with respect to animal cognition, a new conceptualization of cognitive mechanisms – functional cognition – is required. This new conceptualization should define cognitive mechanisms based on the informational properties of the animal’s environment and the adaptive challenges faced. Cognitive mechanisms must be examined in settings that mimic the important aspects of the natural environment, using customized tasks designed to probe defined aspects of the mechanisms’ operation. We suggest that this approach will facilitate investigations of the functional and evolutionary relevance of cognitive mechanisms, as well as the patterns of divergence, convergence and specialization of cognitive mechanisms within and between species.
29

Afgani, Mostafa, Sinan Sinanović, and Harald Haas. "The Information Theoretic Approach to Signal Anomaly Detection for Cognitive Radio." International Journal of Digital Multimedia Broadcasting 2010 (2010): 1–18. http://dx.doi.org/10.1155/2010/740594.

Abstract:
Efficient utilisation and sharing of limited spectrum resources in an autonomous fashion is one of the primary goals of cognitive radio. However, decentralised spectrum sharing can lead to interference scenarios that must be detected and characterised to help achieve the other goal of cognitive radio—reliable service for the end user. Interference events can be treated as unusual and therefore anomaly detection algorithms can be applied for their detection. Two complementary algorithms based on information theoretic measures of statistical distribution divergence and information content are proposed. The first method is applicable to signals with periodic structures and is based on the analysis of Kullback-Leibler divergence. The second utilises information content analysis to detect unusual events. Results from software and hardware implementations show that the proposed algorithms are effective, simple, and capable of processing high-speed signals in real time. Additionally, neither of the algorithms require demodulation of the signal.
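The first of the two proposed algorithms, comparing the distribution of a signal window against a reference via Kullback-Leibler divergence, can be sketched generically (a toy histogram version with hypothetical names and synthetic data, not the authors' implementation):

```python
import math
import random
from collections import Counter

def window_kl(reference, window, bins=10):
    """KL divergence between binned histograms of a signal window and a reference."""
    def hist(xs):
        counts = Counter(min(int(x * bins), bins - 1) for x in xs)
        # add-one smoothing keeps the divergence finite when a bin is empty
        return [(counts[b] + 1) / (len(xs) + bins) for b in range(bins)]
    p, q = hist(window), hist(reference)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

random.seed(1)
reference = [random.random() for _ in range(500)]              # usual signal statistics
normal = [random.random() for _ in range(100)]                 # window with no anomaly
anomaly = [0.05 + 0.01 * random.random() for _ in range(100)]  # energy piled into one bin

score_normal = window_kl(reference, normal)
score_anomaly = window_kl(reference, anomaly)
```

A window whose histogram diverges sharply from the reference scores high and can be flagged as an unusual (interference-like) event.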
30

Almeida, Mariana Luciano de, Daniela Dalpubel, Estela Barbosa Ribeiro, Eduardo Schneider Bueno de Oliveira, Juliana Hotta Ansai, and Francisco Assis Carvalho Vale. "Subjective cognitive impairment, cognitive disorders and self-perceived health: The importance of the informant." Dementia & Neuropsychologia 13, no. 3 (September 2019): 335–42. http://dx.doi.org/10.1590/1980-57642018dn13-030011.

Abstract:
There is great divergence of results in the literature regarding the clinical relevance and etiology of subjective cognitive impairment (SCI). Currently, SCI is studied as a pre-clinical symptom of Alzheimer's disease, before establishing a possible diagnosis of mild cognitive impairment (MCI). The hypothesis was that SCI is associated with low cognitive performance and poor self-perceived health. Objective: To investigate the relationship of SCI with objective cognitive impairment and self-perceived health in older individuals, and to compare SCI reported by the elderly subjects and by their respective informants. Methods: 83 subjects participated in the study, divided between the forms of the Memory Complaint Scale (MCS). Cognition was evaluated by the Addenbrooke's Cognitive Examination – Revised and self-perceived health by the Short Form Health Survey-8. Results: There was no association between SCI and self-perceived health. SCI reported by the older adults was associated with executive functions. SCI reported by the informant was associated with overall cognitive performance, memory, verbal fluency and visuospatial functions. Conclusion: We found more robust results for the association between SCI reported by the informant and cognitive impairment in the elderly assessed. There is a need to include and value the perception of someone who knows the older individual well enough to evaluate SCI globally.
APA, Harvard, Vancouver, ISO, and other styles
31

Zhao, Qun, Jose Principe, Margaret Bradley, and Peter Lang. "fMRI analysis: Distribution divergence measure based on quadratic entropy." NeuroImage 11, no. 5 (May 2000): S521. http://dx.doi.org/10.1016/s1053-8119(00)91452-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Silva, RF, and VD Calhoun. "Divergence Measurements for the Optimal Identification of Multimodal Biomarkers." NeuroImage 47 (July 2009): S102. http://dx.doi.org/10.1016/s1053-8119(09)70879-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Howard-Jones, Paul A., Sarah-Jayne Blakemore, Elspeth A. Samuel, Ian R. Summers, and Guy Claxton. "Semantic divergence and creative story generation: An fMRI investigation." Cognitive Brain Research 25, no. 1 (September 2005): 240–50. http://dx.doi.org/10.1016/j.cogbrainres.2005.05.013.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Kumar, K. L. "Cognition and the Design of Products Large and Small." Journal of Cognitive Education and Psychology 3, no. 2 (January 2003): 164–77. http://dx.doi.org/10.1891/194589503787383118.

Full text
Abstract:
Innovative design of new products proceeds by way of cognitive processes of analysis, critical thinking, creativity, conceptualization, cognitive modeling, synthesis, prototyping, and evaluation. Design phases invariably consist of divergence, transformation, and convergence operations. Designing is a creative faculty of the mind, akin to the conceptual faculty of learning arts, sciences, and languages. The author dwells briefly on cognitive, graphical communication, morphological, philosophical, and psychological aspects of design, together with educational imperatives, and proposes that designing new products requires the same cognitive processes regardless of their size, shape, and complexity. The author has drawn upon his own experience of designing a variety of things and has quoted references to design of household artifacts, office equipment, and industrial products. Reference is made to the ‘Design and Technology’ subject being taught at junior and senior secondary schools in Botswana and elsewhere. Examples are also drawn from some recent world-class designs. These establish the belief that human design cognition is the same for all products, small or large.
APA, Harvard, Vancouver, ISO, and other styles
35

Mihoko, Minami, and Shinto Eguchi. "Robust Blind Source Separation by Beta Divergence." Neural Computation 14, no. 8 (August 1, 2002): 1859–86. http://dx.doi.org/10.1162/089976602760128045.

Full text
Abstract:
Blind source separation is aimed at recovering original independent signals when their linear mixtures are observed. Various methods for estimating a recovering matrix have been proposed and applied to data in many fields, such as biological signal processing, communication engineering, and financial market data analysis. One problem these methods have is that they are often too sensitive to outliers, and the existence of a few outliers might change the estimate drastically. In this article, we propose a robust method of blind source separation based on the β-divergence. Shift parameters are explicitly included in our model instead of the conventional way which assumes that original signals have zero mean. The estimator gives smaller weights to possible outliers so that their influence on the estimate is weakened. Simulation results show that the proposed estimator significantly improves the performance over the existing methods when outliers exist; it keeps equal performance otherwise.
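The downweighting mechanism the authors describe can be illustrated with a one-dimensional toy: a minimum β-divergence style estimate of a Gaussian location parameter reduces to an iteratively reweighted mean in which far-away points receive exponentially small weights. This is only a sketch of the general idea; the function name, the fixed scale σ, and the choice β = 0.5 are assumptions for illustration, not the authors' estimator for the recovering matrix.

```python
import numpy as np

def beta_robust_mean(x, beta=0.5, sigma=1.0, iters=50):
    """Minimum beta-divergence style estimate of a location parameter:
    outliers get exponentially small weights and barely move the estimate."""
    mu = np.median(x)                                         # robust starting point
    for _ in range(iters):
        w = np.exp(-beta * (x - mu) ** 2 / (2 * sigma ** 2))  # downweight far points
        mu = np.sum(w * x) / np.sum(w)                        # weighted-mean fixed point
    return mu

data = np.array([-1.0, -0.5, 0.0, 0.5, 1.0, 50.0])  # cluster at 0 plus one outlier
plain_mean = data.mean()              # pulled toward the outlier (about 8.33)
robust_mean = beta_robust_mean(data)  # stays near the cluster centre
```

The outlier at 50 gets weight exp(-β·(50-μ)²/2σ²) ≈ 0, so the fixed-point iteration converges to the centre of the inlier cluster, while the ordinary mean is dragged far away.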
APA, Harvard, Vancouver, ISO, and other styles
36

Fischer, Asja, and Christian Igel. "Bounding the Bias of Contrastive Divergence Learning." Neural Computation 23, no. 3 (March 2011): 664–73. http://dx.doi.org/10.1162/neco_a_00085.

Full text
Abstract:
Optimization based on k-step contrastive divergence (CD) has become a common way to train restricted Boltzmann machines (RBMs). The k-step CD is a biased estimator of the log-likelihood gradient relying on Gibbs sampling. We derive a new upper bound for this bias. Its magnitude depends on k, the number of variables in the RBM, and the maximum change in energy that can be produced by changing a single variable. The last reflects the dependence on the absolute values of the RBM parameters. The magnitude of the bias is also affected by the distance in variation between the modeled distribution and the starting distribution of the Gibbs chain.
APA, Harvard, Vancouver, ISO, and other styles
37

Sidtis, J. J., S. C. Strother, J. R. Anderson, K. Rehm, K. A. Schaper, and D. A. Rottenberg. "Functional activation during speech: Convergence and divergence with clinical data." NeuroImage 3, no. 3 (June 1996): S459. http://dx.doi.org/10.1016/s1053-8119(96)80461-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

van der Lubbe, Rob H. J., Marieke L. Schölvinck, J. Leon Kenemans, and Albert Postma. "Divergence of categorical and coordinate spatial processing assessed with ERPs." Neuropsychologia 44, no. 9 (January 2006): 1547–59. http://dx.doi.org/10.1016/j.neuropsychologia.2006.01.019.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Li, Zhu. "The Maritime Silk Road and India: The Challenge of Overcoming Cognitive Divergence." Asia Policy 22, no. 1 (2016): 20–26. http://dx.doi.org/10.1353/asp.2016.0040.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Sogawa, Yasuhiro, Tsuyoshi Ueno, Yoshinobu Kawahara, and Takashi Washio. "Active learning for noisy oracle via density power divergence." Neural Networks 46 (October 2013): 133–43. http://dx.doi.org/10.1016/j.neunet.2013.05.007.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Chen, Xi, Shen Zhao, and Wei Li. "Opinion Dynamics Model Based on Cognitive Styles: Field-Dependence and Field-Independence." Complexity 2019 (February 11, 2019): 1–12. http://dx.doi.org/10.1155/2019/2864124.

Full text
Abstract:
Two distinct cognitive styles exist from the perspective of cognition: field-dependence and field-independence. In most public opinion dynamics models, people only consider that individuals update their opinions through interactions with other individuals. This represents the field-dependent cognitive style of the individual. The field-independent cognitive style is ignored in such cases. We consider both cognitive styles in public opinion dynamics and propose a public opinion evolution model based on cognitive styles (CS model). The opinions of neighbors and experiences of the individual represent field-dependent cognition and field-independent cognition, respectively, and the individual combines both cognitive styles to update his/her own opinion. In the proposed model, the experience parameter is designed to represent the weight of the current opinion in terms of the individual's experiences and the cognitive parameter is proposed to represent the tendencies of his/her cognitive styles. We experimentally verify that the CS and Hegselmann–Krause (HK) models are similar in terms of public opinion evolution trends; with an increase in radius of confidence, the steady state of a social system shifts from divergence to polarization and eventually reaches consensus. Considering that individuals from different cultures have different degrees of inclination for the two styles, we present experiments focusing on cognitive parameter and experience parameter and analyze the evolutionary trends of opinion dynamics in different styles. We find that when an individual has a greater tendency toward the field-independent cognitive style under the influence of culture, the time required for a social system to reach a steady state will increase; the system will have greater difficulty in reaching consensus, mirroring the evolutionary trends in public opinion in the context of eastern and western cultures.
The CS model constitutes an opinion dynamics model that is more consistent with the real world and may also serve as a basis for future cross-cultural research.
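A minimal sketch of the kind of dynamics described, bounded-confidence averaging blended with an individual "experience" term, might look as follows. The update rule, the parameter names (`radius`, `cog`, `exp_w`), and their values are hypothetical stand-ins for the paper's equations, chosen only to show the divergence-to-consensus behaviour.

```python
import numpy as np

def cs_step(opinions, experience, radius=0.2, cog=0.5, exp_w=0.3):
    """One update of a CS-style model (a sketch, not the authors' exact equations):
    each agent blends the mean of neighbouring opinions (field-dependent part)
    with its own remembered experience (field-independent part)."""
    new = np.empty_like(opinions)
    for i, x in enumerate(opinions):
        neighbours = opinions[np.abs(opinions - x) <= radius]  # bounded confidence
        social = neighbours.mean()                             # field-dependent term
        personal = (1 - exp_w) * x + exp_w * experience[i]     # field-independent term
        new[i] = cog * social + (1 - cog) * personal
    return new

rng = np.random.default_rng(1)
opinions = rng.uniform(0, 1, 50)
experience = opinions.copy()       # initial opinions double as remembered experience
for _ in range(200):               # full radius, fully social agents -> consensus
    opinions = cs_step(opinions, experience, radius=1.0, cog=1.0)
spread = opinions.max() - opinions.min()
```

With a confidence radius covering the whole opinion space and fully field-dependent agents, the spread collapses to zero (consensus); shrinking the radius or raising the experience weight leaves the population fragmented for longer, matching the trend the abstract describes.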
APA, Harvard, Vancouver, ISO, and other styles
42

Teschke, I., C. A. F. Wascher, M. F. Scriba, A. M. P. von Bayern, V. Huml, B. Siemers, and S. Tebbich. "Did tool-use evolve with enhanced physical cognitive abilities?" Philosophical Transactions of the Royal Society B: Biological Sciences 368, no. 1630 (November 19, 2013): 20120418. http://dx.doi.org/10.1098/rstb.2012.0418.

Full text
Abstract:
The use and manufacture of tools have been considered to be cognitively demanding and thus a possible driving factor in the evolution of intelligence. In this study, we tested the hypothesis that enhanced physical cognitive abilities evolved in conjunction with the use of tools, by comparing the performance of naturally tool-using and non-tool-using species in a suite of physical and general learning tasks. We predicted that the habitually tool-using species, New Caledonian crows and Galápagos woodpecker finches, should outperform their non-tool-using relatives, the small tree finches and the carrion crows in a physical problem but not in general learning tasks. We only found a divergence in the predicted direction for corvids. That only one of our comparisons supports the predictions under this hypothesis might be attributable to different complexities of tool-use in the two tool-using species. A critical evaluation is offered of the conceptual and methodological problems inherent in comparative studies on tool-related cognitive abilities.
APA, Harvard, Vancouver, ISO, and other styles
43

Mwebaze, E., P. Schneider, F. M. Schleif, J. R. Aduwo, J. A. Quinn, S. Haase, T. Villmann, and M. Biehl. "Divergence-based classification in learning vector quantization." Neurocomputing 74, no. 9 (April 2011): 1429–35. http://dx.doi.org/10.1016/j.neucom.2010.10.016.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Teske, Joanna Klara, and Jan Jankowski. "Dissonant and consonant narrators : Dorrit Cohn's concepts, narratorial Stance theory and cognitive literary studies." Brno studies in English, no. 2 (2022): 189–206. http://dx.doi.org/10.5817/bse2022-2-10.

Full text
Abstract:
Our paper reconsiders the notions of dissonance and consonance introduced in Dorrit Cohn's Transparent Minds (1978). Cohn applies the terms to psycho- and self-narration and defines them with reference to the narrator's prominence, distance/intimacy as well as moral and cognitive privilege with reference to the character. Taking advantage of stance theory, we argue that dissonance and consonance are best taken as dimensions of the narrator's attitude towards the character and/or the narratee, we relate aspects of consonance/dissonance to the basic facets of focalization – emotional, interpretive, and evaluative – and we analyze them in terms of convergence or divergence and further, in the case of divergence, in terms of superiority or inferiority. We claim that there is no automatic correlation between narratorial consonance/dissonance and reliability. Overall, we believe that narratorial consonance/dissonance deserves much attention because it has great impact on the reader's reception of the narrator and characters.
APA, Harvard, Vancouver, ISO, and other styles
45

Févotte, Cédric, and Jérôme Idier. "Algorithms for Nonnegative Matrix Factorization with the β-Divergence." Neural Computation 23, no. 9 (September 2011): 2421–56. http://dx.doi.org/10.1162/neco_a_00168.

Full text
Abstract:
This letter describes algorithms for nonnegative matrix factorization (NMF) with the β-divergence (β-NMF). The β-divergence is a family of cost functions parameterized by a single shape parameter β that takes the Euclidean distance, the Kullback-Leibler divergence, and the Itakura-Saito divergence as special cases (β = 2, 1, 0 respectively). The proposed algorithms are based on a surrogate auxiliary function (a local majorization of the criterion function). We first describe a majorization-minimization algorithm that leads to multiplicative updates, which differ from standard heuristic multiplicative updates by a β-dependent power exponent. The monotonicity of the heuristic algorithm can, however, be proven for β ∈ (0, 1) using the proposed auxiliary function. Then we introduce the concept of the majorization-equalization (ME) algorithm, which produces updates that move along constant level sets of the auxiliary function and lead to larger steps than MM. Simulations on synthetic and real data illustrate the faster convergence of the ME approach. The letter also describes how the proposed algorithms can be adapted to two common variants of NMF: penalized NMF (when a penalty function of the factors is added to the criterion function) and convex NMF (when the dictionary is assumed to belong to a known subspace).
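The β-divergence family described here is straightforward to evaluate directly. The sketch below uses the standard parameterization in which β = 2, 1, 0 recover the Euclidean, Kullback-Leibler, and Itakura-Saito costs; it is a numerical illustration of the cost family, not the letter's factorization algorithms.

```python
import numpy as np

def beta_divergence(x, y, beta):
    """Elementwise beta-divergence d_beta(x|y), summed over the arrays,
    in the parameterization where beta = 2, 1, 0 give the Euclidean,
    Kullback-Leibler, and Itakura-Saito costs respectively."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    if beta == 1:   # Kullback-Leibler (generalized, for unnormalized inputs)
        return np.sum(x * np.log(x / y) - x + y)
    if beta == 0:   # Itakura-Saito
        return np.sum(x / y - np.log(x / y) - 1)
    return np.sum((x**beta + (beta - 1) * y**beta - beta * x * y**(beta - 1))
                  / (beta * (beta - 1)))

x = np.array([1.0, 2.0, 3.0])
y = np.array([1.5, 1.5, 2.5])
euclidean = beta_divergence(x, y, 2)   # equals 0.5 * ||x - y||^2
```

The generic expression is continuous in β, so values just above or below the special points approach the KL and IS limits.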
APA, Harvard, Vancouver, ISO, and other styles
46

Hinton, Geoffrey E. "Training Products of Experts by Minimizing Contrastive Divergence." Neural Computation 14, no. 8 (August 1, 2002): 1771–800. http://dx.doi.org/10.1162/089976602760128018.

Full text
Abstract:
It is possible to combine multiple latent-variable models of the same data by multiplying their probability distributions together and then renormalizing. This way of combining individual “expert” models makes it hard to generate samples from the combined model but easy to infer the values of the latent variables of each expert, because the combination rule ensures that the latent variables of different experts are conditionally independent when given the data. A product of experts (PoE) is therefore an interesting candidate for a perceptual system in which rapid inference is vital and generation is unnecessary. Training a PoE by maximizing the likelihood of the data is difficult because it is hard even to approximate the derivatives of the renormalization term in the combination rule. Fortunately, a PoE can be trained using a different objective function called “contrastive divergence” whose derivatives with regard to the parameters can be approximated accurately and efficiently. Examples are presented of contrastive divergence learning using several types of expert on several types of data.
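A minimal CD-1 sketch for a binary restricted Boltzmann machine illustrates the objective: statistics computed on the data are compared with statistics after a single Gibbs step, and the parameters are nudged along the approximate gradient. The toy patterns, layer sizes, and learning rate below are arbitrary choices for illustration, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_update(v0, W, b, c, lr=0.1):
    """One CD-1 update for a binary RBM: contrast <v h> under the data
    with <v h> after one Gibbs step, and move the parameters accordingly."""
    h0_prob = sigmoid(v0 @ W + c)                       # positive phase
    h0 = (rng.random(h0_prob.shape) < h0_prob) * 1.0    # sample hidden units
    v1_prob = sigmoid(h0 @ W.T + b)                     # one Gibbs step back
    h1_prob = sigmoid(v1_prob @ W + c)                  # negative phase
    W += lr * (v0.T @ h0_prob - v1_prob.T @ h1_prob) / len(v0)
    b += lr * (v0 - v1_prob).mean(axis=0)
    c += lr * (h0_prob - h1_prob).mean(axis=0)
    return ((v0 - v1_prob) ** 2).mean()                 # reconstruction error

# toy data: two repeating binary patterns
data = np.array([[1, 1, 0, 0], [0, 0, 1, 1]] * 20, dtype=float)
W = 0.01 * rng.standard_normal((4, 3))
b, c = np.zeros(4), np.zeros(3)
errors = [cd1_update(data, W, b, c) for _ in range(200)]
```

Reconstruction error is only a proxy for the contrastive-divergence objective, but on this easy two-pattern dataset it drops steadily as the RBM learns the structure.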
APA, Harvard, Vancouver, ISO, and other styles
47

Murata, Noboru, Takashi Takenouchi, Takafumi Kanamori, and Shinto Eguchi. "Information Geometry of U-Boost and Bregman Divergence." Neural Computation 16, no. 7 (July 1, 2004): 1437–81. http://dx.doi.org/10.1162/089976604323057452.

Full text
Abstract:
We aim at an extension of AdaBoost to U-Boost, in the paradigm to build a stronger classification machine from a set of weak learning machines. A geometric understanding of the Bregman divergence defined by a generic convex function U leads to the U-Boost method in the framework of information geometry extended to the space of the finite measures over a label set. We propose two versions of U-Boost learning algorithms by taking account of whether the domain is restricted to the space of probability functions. In the sequential step, we observe that the two adjacent and the initial classifiers are associated with a right triangle in the scale via the Bregman divergence, called the Pythagorean relation. This leads to a mild convergence property of the U-Boost algorithm as seen in the expectation-maximization algorithm. Statistical discussions for consistency and robustness elucidate the properties of the U-Boost methods based on a stochastic assumption for training data.
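Plain AdaBoost, the member of the U-Boost family obtained with the exponential choice of U, can be sketched with decision stumps as weak learners. This shows only the exponential-loss base case being generalized, not the paper's U-Boost algorithms, and the toy data and round count are arbitrary.

```python
import numpy as np

def adaboost_stumps(X, y, rounds=10):
    """Plain AdaBoost with exhaustively searched decision stumps."""
    n = len(y)
    w = np.full(n, 1.0 / n)                    # example weights
    learners = []
    for _ in range(rounds):
        best = None
        for j in range(X.shape[1]):            # search feature x threshold x sign
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] > thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        err = min(max(err, 1e-10), 1 - 1e-10)  # guard the log
        alpha = 0.5 * np.log((1 - err) / err)
        pred = sign * np.where(X[:, j] > thr, 1, -1)
        w *= np.exp(-alpha * y * pred)         # exponential-loss reweighting
        w /= w.sum()
        learners.append((alpha, j, thr, sign))
    return learners

def predict(learners, X):
    score = sum(a * s * np.where(X[:, j] > t, 1, -1) for a, j, t, s in learners)
    return np.sign(score)

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, (80, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)     # linearly separable toy labels
model = adaboost_stumps(X, y, rounds=20)
accuracy = (predict(model, X) == y).mean()
```

The weight update `w *= exp(-alpha * y * pred)` is exactly the exponential-loss reweighting that the Bregman-divergence view generalizes to other convex functions U.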
APA, Harvard, Vancouver, ISO, and other styles
48

Amari, Shun-ichi. "Integration of Stochastic Models by Minimizing α-Divergence." Neural Computation 19, no. 10 (October 2007): 2780–96. http://dx.doi.org/10.1162/neco.2007.19.10.2780.

Full text
Abstract:
When there are a number of stochastic models in the form of probability distributions, one needs to integrate them. Mixtures of distributions are frequently used, but exponential mixtures also provide a good means of integration. This letter proposes a one-parameter family of integration, called α-integration, which includes all of these well-known integrations. These are generalizations of various averages of numbers such as arithmetic, geometric, and harmonic averages. There are psychophysical experiments that suggest that α-integrations are used in the brain. The α-divergence between two distributions is defined, which is a natural generalization of Kullback-Leibler divergence and Hellinger distance, and it is proved that α-integration is optimal in the sense of minimizing α-divergence. The theory is applied to generalize the mixture of experts and the product of experts to the α-mixture of experts. The α-predictive distribution is also stated in the Bayesian framework.
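Under one common parameterization (assumed here; the letter's exact notation may differ), the α-divergence interpolates between the two directions of KL divergence and, at α = 0, a quantity proportional to the squared Hellinger distance:

```python
import numpy as np

def alpha_divergence(p, q, alpha):
    """Alpha-divergence between discrete distributions (alpha != ±1) in a
    common parameterization: alpha -> -1 recovers KL(p||q), alpha -> +1
    recovers KL(q||p), and alpha = 0 is proportional to squared Hellinger."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return 4.0 / (1 - alpha**2) * (
        1 - np.sum(p**((1 - alpha) / 2) * q**((1 + alpha) / 2)))

p = np.array([0.2, 0.5, 0.3])
q = np.array([0.4, 0.4, 0.2])
kl_pq = np.sum(p * np.log(p / q))          # ordinary KL divergence
near_kl = alpha_divergence(p, q, -0.999)   # approaches KL(p||q) as alpha -> -1
```

Like the KL divergence it generalizes, the α-divergence is nonnegative and vanishes only when the two distributions coincide.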
APA, Harvard, Vancouver, ISO, and other styles
49

Kompass, Raul. "A Generalized Divergence Measure for Nonnegative Matrix Factorization." Neural Computation 19, no. 3 (March 2007): 780–91. http://dx.doi.org/10.1162/neco.2007.19.3.780.

Full text
Abstract:
This letter presents a general parametric divergence measure. The metric includes as special cases quadratic error and Kullback-Leibler divergence. A parametric generalization of the two different multiplicative update rules for nonnegative matrix factorization by Lee and Seung (2001) is shown to lead to locally optimal solutions of the nonnegative matrix factorization problem with this new cost function. Numeric simulations demonstrate that the new update rule may improve the quadratic distance convergence speed. A proof of convergence is given that, as in Lee and Seung, uses an auxiliary function known from the expectation-maximization theoretical framework.
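The multiplicative update rules being generalized here are the Lee and Seung (2001) updates; their KL-divergence case can be sketched as follows, with matrix sizes and iteration counts chosen arbitrarily for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def kl_cost(V, WH):
    """Generalized KL divergence D(V || WH)."""
    return float(np.sum(V * np.log((V + 1e-12) / WH) - V + WH))

def nmf_kl(V, rank, iters=200, eps=1e-9):
    """Lee-Seung multiplicative updates for NMF under the KL divergence.
    Each update multiplies the factors by a nonnegative ratio, so
    nonnegativity is preserved and the cost is nonincreasing."""
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    history = []
    for _ in range(iters):
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.sum(axis=0)[:, None] + eps)
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (H.sum(axis=1)[None, :] + eps)
        history.append(kl_cost(V, W @ H + eps))
    return W, H, history

V = rng.random((8, 6))
W, H, history = nmf_kl(V, rank=2)
```

The cost history is monotonically nonincreasing, which is exactly the auxiliary-function property that this letter and Lee and Seung establish for their respective update rules.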
APA, Harvard, Vancouver, ISO, and other styles
50

Nurul Haque Mollah, Md, Nayeema Sultana, Mihoko Minami, and Shinto Eguchi. "Robust extraction of local structures by the minimum β-divergence method." Neural Networks 23, no. 2 (March 2010): 226–38. http://dx.doi.org/10.1016/j.neunet.2009.11.011.

Full text
APA, Harvard, Vancouver, ISO, and other styles