Journal articles on the topic "Machines de Boltzmann restreintes"

To see the other types of publications on this topic, follow the link: Machines de Boltzmann restreintes.


Consult the top 50 journal articles for your research on the topic "Machines de Boltzmann restreintes".

Next to every work in the list of references there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read the online abstract of the work whenever the relevant parameters are available in the metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1. Apolloni, B., A. Bertoni, P. Campadelli, and D. de Falco. "Asymmetric Boltzmann machines". Biological Cybernetics 66, no. 1 (November 1991): 61–70. http://dx.doi.org/10.1007/bf00196453.

2. Lu, Wenhao, Chi-Sing Leung, and John Sum. "Analysis on Noisy Boltzmann Machines and Noisy Restricted Boltzmann Machines". IEEE Access 9 (2021): 112955–65. http://dx.doi.org/10.1109/access.2021.3102275.

3. Livesey, M. "Clamping in Boltzmann machines". IEEE Transactions on Neural Networks 2, no. 1 (1991): 143–48. http://dx.doi.org/10.1109/72.80301.

4. Fischer, Asja. "Training Restricted Boltzmann Machines". KI - Künstliche Intelligenz 29, no. 4 (May 12, 2015): 441–44. http://dx.doi.org/10.1007/s13218-015-0371-2.

5. Bojnordi, Mahdi Nazm, and Engin Ipek. "The Memristive Boltzmann Machines". IEEE Micro 37, no. 3 (2017): 22–29. http://dx.doi.org/10.1109/mm.2017.53.

6. Liu, Jeremy, Ke-Thia Yao, and Federico Spedalieri. "Dynamic Topology Reconfiguration of Boltzmann Machines on Quantum Annealers". Entropy 22, no. 11 (October 24, 2020): 1202. http://dx.doi.org/10.3390/e22111202.

Abstract:
Boltzmann machines have useful roles in deep learning applications, such as generative data modeling, initializing weights for other types of networks, or extracting efficient representations from high-dimensional data. Most Boltzmann machines use restricted topologies that exclude looping connectivity, as such connectivity creates complex distributions that are difficult to sample. We have used an open-system quantum annealer to sample from complex distributions and implement Boltzmann machines with looping connectivity. Further, we have created policies mapping Boltzmann machine variables to the quantum bits of an annealer. These policies, based on correlation and entropy metrics, dynamically reconfigure the topology of Boltzmann machines during training and improve performance.
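The mapping policies themselves are not reproduced in this listing. As a rough illustration of what a correlation-based reconfiguration policy of the kind the abstract describes could look like (the helper select_couplers, the sample sizes, and the top-k rule are illustrative assumptions, not the authors' code), consider:

```python
# Minimal sketch of a correlation-based topology policy: keep only the k
# most strongly correlated unit pairs as couplers on the annealer.
import numpy as np

def select_couplers(samples: np.ndarray, k: int) -> list[tuple[int, int]]:
    """samples: (num_samples, num_units) array of +/-1 unit states."""
    corr = np.corrcoef(samples.T)          # pairwise Pearson correlations
    n = corr.shape[0]
    pairs = [(abs(corr[i, j]), i, j) for i in range(n) for j in range(i + 1, n)]
    pairs.sort(reverse=True)               # strongest correlations first
    return [(i, j) for _, i, j in pairs[:k]]

rng = np.random.default_rng(0)
states = rng.choice([-1, 1], size=(500, 8))
print(select_couplers(states, k=5))        # the 5 pairs to realize as couplers
```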
7. Luo, Heng, Ruimin Shen, Changyong Niu, and Carsten Ullrich. "Sparse Group Restricted Boltzmann Machines". Proceedings of the AAAI Conference on Artificial Intelligence 25, no. 1 (August 4, 2011): 429–34. http://dx.doi.org/10.1609/aaai.v25i1.7923.

Abstract:
Since learning in Boltzmann machines is typically quite slow, there is a need to restrict connections within hidden layers. However, the resulting states of hidden units exhibit statistical dependencies. Based on this observation, we propose using l1/l2 regularization upon the activation probabilities of hidden units in restricted Boltzmann machines to capture the local dependencies among hidden units. This regularization not only encourages hidden units of many groups to be inactive given observed data but also makes hidden units within a group compete with each other for modeling observed data. Thus, the l1/l2 regularization on RBMs yields sparsity at both the group and the hidden-unit levels. We call RBMs trained with the regularizer sparse group RBMs (SGRBMs). The proposed SGRBMs are applied to model patches of natural images, handwritten digits and OCR English letters. Then, to emphasize that SGRBMs can learn more discriminative features, we applied SGRBMs to pretrain deep networks for classification tasks. Furthermore, we illustrate that the regularizer can also be applied to deep Boltzmann machines, which leads to sparse group deep Boltzmann machines. When adapted to the MNIST data set, a two-layer sparse group Boltzmann machine achieves an error rate of 0.84%, which is, to our knowledge, the best published result on the permutation-invariant version of the MNIST task.
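As a minimal sketch of the l1/l2 penalty the abstract describes (the L1 norm, across groups, of within-group L2 norms of hidden activation probabilities; the groups list and the strength lam are assumptions, not the authors' implementation):

```python
# Group-sparse l1/l2 penalty on hidden activation probabilities.
import numpy as np

def group_l1l2_penalty(p_hidden: np.ndarray, groups: list[list[int]], lam: float):
    """p_hidden: hidden-unit activation probabilities for one input."""
    penalty = lam * sum(np.linalg.norm(p_hidden[g]) for g in groups)
    grad = np.zeros_like(p_hidden)
    for g in groups:
        norm = np.linalg.norm(p_hidden[g])
        if norm > 0:                             # subgradient is 0 at the origin
            grad[g] = lam * p_hidden[g] / norm   # pushes whole groups toward zero
    return penalty, grad

p = np.array([0.9, 0.1, 0.0, 0.0, 0.7, 0.2])
print(group_l1l2_penalty(p, groups=[[0, 1], [2, 3], [4, 5]], lam=0.1))
```

In a full trainer this gradient would be chained through the sigmoid activations into the weight update; the sketch stops at the activation probabilities.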
8. Decelle, Aurélien, and Cyril Furtlehner. "Gaussian-spherical restricted Boltzmann machines". Journal of Physics A: Mathematical and Theoretical 53, no. 18 (April 16, 2020): 184002. http://dx.doi.org/10.1088/1751-8121/ab79f3.

9. Apolloni, B., and D. de Falco. "Learning by parallel Boltzmann machines". IEEE Transactions on Information Theory 37, no. 4 (July 1991): 1162–65. http://dx.doi.org/10.1109/18.87009.

10. d'Anjou, A., M. Grana, F. J. Torrealdea, and M. C. Hernandez. "Solving satisfiability via Boltzmann machines". IEEE Transactions on Pattern Analysis and Machine Intelligence 15, no. 5 (May 1993): 514–21. http://dx.doi.org/10.1109/34.211473.

11. Ticknor, Anthony J., and Harrison H. Barrett. "Optical Implementations In Boltzmann Machines". Optical Engineering 26, no. 1 (January 1, 1987): 260116. http://dx.doi.org/10.1117/12.7974015.

12. Amari, S., K. Kurata, and H. Nagaoka. "Information geometry of Boltzmann machines". IEEE Transactions on Neural Networks 3, no. 2 (March 1992): 260–71. http://dx.doi.org/10.1109/72.125867.

13. Welling, Max, and Yee Whye Teh. "Approximate inference in Boltzmann machines". Artificial Intelligence 143, no. 1 (January 2003): 19–50. http://dx.doi.org/10.1016/s0004-3702(02)00361-2.

14. Prager, R. W., T. D. Harrison, and F. Fallside. "Boltzmann machines for speech recognition". Computer Speech & Language 1, no. 1 (March 1986): 3–27. http://dx.doi.org/10.1016/s0885-2308(86)80008-0.

15. Balzer, Wolfgang, Masanobu Takahashi, Jun Ohta, and Kazuo Kyuma. "Weight quantization in Boltzmann machines". Neural Networks 4, no. 3 (January 1991): 405–9. http://dx.doi.org/10.1016/0893-6080(91)90077-i.

16. Passos, Leandro Aparecido, and João Paulo Papa. "Temperature-Based Deep Boltzmann Machines". Neural Processing Letters 48, no. 1 (September 8, 2017): 95–107. http://dx.doi.org/10.1007/s11063-017-9707-2.

17. KOBAYASHI, M. "Boltzmann Machines with Identified States". IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences E91-A, no. 3 (March 1, 2008): 887–90. http://dx.doi.org/10.1093/ietfec/e91-a.3.887.

18. Miasnikof, Pierre, Mohammad Bagherbeik, and Ali Sheikholeslami. "Graph clustering with Boltzmann machines". Discrete Applied Mathematics 343 (January 2024): 208–23. http://dx.doi.org/10.1016/j.dam.2023.10.012.

19. Liu, Kai, Li Min Zhang, and Yong Wei Sun. "Deep Boltzmann Machines Aided Design Based on Genetic Algorithms". Applied Mechanics and Materials 568-570 (June 2014): 848–51. http://dx.doi.org/10.4028/www.scientific.net/amm.568-570.848.

Abstract:
To address the lack of guidance on how to set the values of numerical meta-parameters and the difficulty of optimizing Deep Boltzmann Machines, genetic algorithms are used to develop an automatic optimization method named GA-RBMs (Genetic Algorithm-Restricted Boltzmann Machines) for this model's aided design. Based on the features and evaluation function of Restricted Boltzmann Machines, a genetic algorithm is designed that performs a global search for a satisfactory structure. We also initialize the network's weights to determine the number of visible and hidden units. The experiments were conducted on the MNIST handwritten digit dataset. The results show that this optimization reduced the dimension of the visible units and improved the quality of the features extracted by Deep Boltzmann Machines. The optimized network has good generalization performance and meets the demands of Deep Boltzmann Machines' aided design.
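As a toy sketch of such a structure search (not the GA-RBM implementation: the fitness function below is a stand-in for the paper's evaluation function, and all ranges and rates are illustrative assumptions):

```python
# Genetic search over hidden-layer sizes with truncation selection,
# uniform crossover, and random mutation.
import random

def fitness(genome):                  # hypothetical stand-in: in the paper this
    return -sum(abs(g - t) for g, t in zip(genome, [256, 64]))  # would be an
                                      # evaluation of the trained network
def evolve(pop_size=20, generations=30, mutation=0.3):
    pop = [[random.randint(16, 512), random.randint(16, 512)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [random.choice(pair) for pair in zip(a, b)]  # crossover
            if random.random() < mutation:                       # mutation
                i = random.randrange(len(child))
                child[i] = max(16, child[i] + random.randint(-32, 32))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())   # best layer-size genome found
```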
20. Saul, Lawrence, and Michael I. Jordan. "Learning in Boltzmann Trees". Neural Computation 6, no. 6 (November 1994): 1174–84. http://dx.doi.org/10.1162/neco.1994.6.6.1174.

Abstract:
We introduce a large family of Boltzmann machines that can be trained by standard gradient descent. The networks can have one or more layers of hidden units, with tree-like connectivity. We show how to implement the supervised learning algorithm for these Boltzmann machines exactly, without resort to simulated or mean-field annealing. The stochastic averages that yield the gradients in weight space are computed by the technique of decimation. We present results on the problems of N-bit parity and the detection of hidden symmetries.
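The decimation mentioned above eliminates hidden spins exactly. A minimal numerical check of the textbook rule for ±1 spins, the kind of identity such computations rest on (a sketch, not the authors' full algorithm): summing out a middle spin coupled to its two neighbours by J1 and J2 leaves an effective coupling J' = atanh(tanh(J1)·tanh(J2)) between them.

```python
# Verify that summing out a middle spin s gives, up to a constant C,
# sum_s exp(J1*s1*s + J2*s*s2) = C * exp(J_eff*s1*s2).
import numpy as np

J1, J2 = 0.8, 1.3
J_eff = np.arctanh(np.tanh(J1) * np.tanh(J2))

for s1 in (-1, 1):
    for s2 in (-1, 1):
        exact = sum(np.exp(J1 * s1 * s + J2 * s * s2) for s in (-1, 1))
        print(s1, s2, exact / np.exp(J_eff * s1 * s2))  # same constant each row
```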
21. Yasuda, Muneki, and Kazuyuki Tanaka. "Approximate Learning Algorithm in Boltzmann Machines". Neural Computation 21, no. 11 (November 2009): 3130–78. http://dx.doi.org/10.1162/neco.2009.08-08-844.

Abstract:
Boltzmann machines can be regarded as Markov random fields. For binary cases, they are equivalent to the Ising spin model in statistical mechanics. Learning in Boltzmann machines is an NP-hard problem, so in general we have to use approximate methods to construct practical learning algorithms in this context. In this letter, we propose new and practical learning algorithms for Boltzmann machines by using the belief propagation algorithm and the linear response approximation, which are often referred to as advanced mean-field methods. Finally, we show the validity of our algorithm using numerical experiments.
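For intuition, the simplest member of the mean-field family referred to above is the naive mean-field fixed point m_i = tanh(b_i + Σ_j W_ij m_j), used in place of exact model averages in the learning gradient; the paper's belief propagation and linear response corrections refine estimates of this kind. A minimal sketch (sizes and damping are illustrative):

```python
# Damped fixed-point iteration for naive mean-field magnetizations.
import numpy as np

def mean_field(W, b, iters=200, damping=0.5):
    m = np.zeros(len(b))
    for _ in range(iters):
        m = (1 - damping) * m + damping * np.tanh(b + W @ m)
    return m

rng = np.random.default_rng(1)
n = 6
W = rng.normal(scale=0.3, size=(n, n))
W = (W + W.T) / 2                     # symmetric couplings
np.fill_diagonal(W, 0.0)              # no self-coupling
b = rng.normal(scale=0.1, size=n)
print(mean_field(W, b))               # approximate <s_i> under the model
```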
22. Crawford, Daniel, Anna Levit, Navid Ghadermarzy, Jaspreet S. Oberoi, and Pooya Ronagh. "Reinforcement learning using quantum Boltzmann machines". Quantum Information and Computation 18, no. 1&2 (February 2018): 51–74. http://dx.doi.org/10.26421/qic18.1-2-3.

Abstract:
We investigate whether quantum annealers with select chip layouts can outperform classical computers in reinforcement learning tasks. We associate a transverse field Ising spin Hamiltonian with a layout of qubits similar to that of a deep Boltzmann machine (DBM) and use simulated quantum annealing (SQA) to numerically simulate quantum sampling from this system. We design a reinforcement learning algorithm in which the set of visible nodes representing the states and actions of an optimal policy are the first and last layers of the deep network. In the absence of a transverse field, our simulations show that DBMs are trained more effectively than restricted Boltzmann machines (RBM) with the same number of nodes. We then develop a framework for training the network as a quantum Boltzmann machine (QBM) in the presence of a significant transverse field for reinforcement learning. This method also outperforms the reinforcement learning method that uses RBMs.
23. Kobayashi, Masaki. "Information geometry of rotor Boltzmann machines". Nonlinear Theory and Its Applications, IEICE 7, no. 2 (2016): 266–82. http://dx.doi.org/10.1587/nolta.7.266.

24. Yasuda, Muneki, and Tomoyuki Obuchi. "Empirical Bayes method for Boltzmann machines". Journal of Physics A: Mathematical and Theoretical 53, no. 1 (December 10, 2019): 014004. http://dx.doi.org/10.1088/1751-8121/ab57a7.

25. Genovese, Giuseppe, and Daniele Tantari. "Legendre equivalences of spherical Boltzmann machines". Journal of Physics A: Mathematical and Theoretical 53, no. 9 (February 4, 2020): 094001. http://dx.doi.org/10.1088/1751-8121/ab6b92.

26. Melko, Roger G., Giuseppe Carleo, Juan Carrasquilla, and J. Ignacio Cirac. "Restricted Boltzmann machines in quantum physics". Nature Physics 15, no. 9 (June 24, 2019): 887–92. http://dx.doi.org/10.1038/s41567-019-0545-1.

27. Ackley, David H., Geoffrey E. Hinton, and Terrence J. Sejnowski. "A Learning Algorithm for Boltzmann Machines". Cognitive Science 9, no. 1 (January 1985): 147–69. http://dx.doi.org/10.1207/s15516709cog0901_7.

28. Apolloni, Bruno, and Diego de Falco. "Learning by Asymmetric Parallel Boltzmann Machines". Neural Computation 3, no. 3 (September 1991): 402–8. http://dx.doi.org/10.1162/neco.1991.3.3.402.

Abstract:
We consider the Little, Shaw, Vasudevan model as a parallel asymmetric Boltzmann machine, in the sense that we extend to this model the entropic learning rule first studied by Ackley, Hinton, and Sejnowski in the case of a sequentially activated network with symmetric synaptic matrix. The resulting Hebbian learning rule for the parallel asymmetric model draws the signal for the updating of synaptic weights from time averages of the discrepancy between expected and actual transitions along the past history of the network. As we work without the hypothesis of symmetry of the weights, we can include in our analysis also feedforward networks, for which the entropic learning rule turns out to be complementary to the error backpropagation rule, in that it “rewards the correct behavior” instead of “penalizing the wrong answers.”
29. Zhang, Jian, Shifei Ding, Nan Zhang, and Weikuan Jia. "Adversarial Training Methods for Boltzmann Machines". IEEE Access 8 (2020): 4594–604. http://dx.doi.org/10.1109/access.2019.2962758.

30. Fischer, Asja, and Christian Igel. "Training restricted Boltzmann machines: An introduction". Pattern Recognition 47, no. 1 (January 2014): 25–39. http://dx.doi.org/10.1016/j.patcog.2013.05.025.

31. Zhang, Nan, Shifei Ding, Jian Zhang, and Yu Xue. "An overview on Restricted Boltzmann Machines". Neurocomputing 275 (January 2018): 1186–99. http://dx.doi.org/10.1016/j.neucom.2017.09.065.

32. Schulz, Hannes, Andreas Müller, and Sven Behnke. "Exploiting local structure in Boltzmann machines". Neurocomputing 74, no. 9 (April 2011): 1411–17. http://dx.doi.org/10.1016/j.neucom.2010.12.014.

33. Kappen, Hilbert J. "Deterministic learning rules for Boltzmann machines". Neural Networks 8, no. 4 (January 1995): 537–48. http://dx.doi.org/10.1016/0893-6080(94)00112-y.

34. Zwietering, Patrick, and Emile Aarts. "Parallel Boltzmann machines: A mathematical model". Journal of Parallel and Distributed Computing 13, no. 1 (September 1991): 65–75. http://dx.doi.org/10.1016/0743-7315(91)90110-u.

35. Aarts, Emile H. L., and Jan H. M. Korst. "Boltzmann machines for travelling salesman problems". European Journal of Operational Research 39, no. 1 (March 1989): 79–95. http://dx.doi.org/10.1016/0377-2217(89)90355-x.

36. Upadhya, Vidyadhar, and P. S. Sastry. "An Overview of Restricted Boltzmann Machines". Journal of the Indian Institute of Science 99, no. 2 (February 18, 2019): 225–36. http://dx.doi.org/10.1007/s41745-019-0102-z.

37. Cheng, Song, Jing Chen, and Lei Wang. "Information Perspective to Probabilistic Modeling: Boltzmann Machines versus Born Machines". Entropy 20, no. 8 (August 7, 2018): 583. http://dx.doi.org/10.3390/e20080583.

Abstract:
We compare and contrast the statistical physics and quantum physics inspired approaches for unsupervised generative modeling of classical data. The two approaches represent probabilities of observed data using energy-based models and quantum states, respectively. Classical and quantum information patterns of the target datasets therefore provide principled guidelines for structural design and learning in these two approaches. Taking the Restricted Boltzmann Machines (RBM) as an example, we analyze the information theoretical bounds of the two approaches. We also estimate the classical mutual information of the standard MNIST datasets and the quantum Rényi entropy of corresponding Matrix Product States (MPS) representations. Both information measures are much smaller compared to their theoretical upper bound and exhibit similar patterns, which imply a common inductive bias of low information complexity. By comparing the performance of RBM with various architectures on the standard MNIST datasets, we found that the RBM with local sparse connections exhibits high learning efficiency, which supports the application of tensor network states in machine learning problems.
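As a small illustration of the classical mutual-information estimates mentioned above (an illustrative plug-in estimator for two binary variables from samples, not the authors' exact procedure):

```python
# Plug-in estimator of the mutual information (in nats) between two
# binary variables, from paired samples.
import numpy as np

def mutual_information(x: np.ndarray, y: np.ndarray) -> float:
    """x, y: equal-length arrays of 0/1 samples."""
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            p_ab = np.mean((x == a) & (y == b))
            p_a, p_b = np.mean(x == a), np.mean(y == b)
            if p_ab > 0:
                mi += p_ab * np.log(p_ab / (p_a * p_b))
    return mi

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 10_000)
y = (x ^ (rng.random(10_000) < 0.1)).astype(int)   # noisy copy of x
print(mutual_information(x, y))                    # roughly ln(2) - H(0.1)
```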
38. Teng, Da, Zhang Li, Guanghong Gong, and Liang Han. "Boltzmann machines with clusters of stochastic binary units". International Journal of Modeling, Simulation, and Scientific Computing 07, no. 02 (June 2016): 1650018. http://dx.doi.org/10.1142/s1793962316500185.

Abstract:
The original restricted Boltzmann machines (RBMs) are extended by replacing the binary visible and hidden variables with clusters of binary units, and a new learning algorithm for training deep Boltzmann machine of this new variant is proposed. The sum of binary units of each cluster is approximated by a Gaussian distribution. Experiments demonstrate that the proposed Boltzmann machines can achieve good performance in the MNIST handwritten digital recognition task.
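A quick numerical check of the Gaussian approximation used above: the sum of a cluster of n independent binary units, each active with probability p, has mean np and variance np(1-p) (the cluster size and p below are arbitrary):

```python
# Compare empirical moments of a sum of binary units with the Gaussian
# approximation N(n*p, n*p*(1-p)).
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 0.3
sums = rng.binomial(1, p, size=(100_000, n)).sum(axis=1)
print(sums.mean(), n * p)              # empirical vs Gaussian mean
print(sums.var(), n * p * (1 - p))     # empirical vs Gaussian variance
```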
39. Salakhutdinov, Ruslan, and Geoffrey Hinton. "An Efficient Learning Procedure for Deep Boltzmann Machines". Neural Computation 24, no. 8 (August 2012): 1967–2006. http://dx.doi.org/10.1162/neco_a_00311.

Abstract:
We present a new learning algorithm for Boltzmann machines that contain many layers of hidden variables. Data-dependent statistics are estimated using a variational approximation that tends to focus on a single mode, and data-independent statistics are estimated using persistent Markov chains. The use of two quite different techniques for estimating the two types of statistic that enter into the gradient of the log likelihood makes it practical to learn Boltzmann machines with multiple hidden layers and millions of parameters. The learning can be made more efficient by using a layer-by-layer pretraining phase that initializes the weights sensibly. The pretraining also allows the variational inference to be initialized sensibly with a single bottom-up pass. We present results on the MNIST and NORB data sets showing that deep Boltzmann machines learn very good generative models of handwritten digits and 3D objects. We also show that the features discovered by deep Boltzmann machines are a very effective way to initialize the hidden layers of feedforward neural nets, which are then discriminatively fine-tuned.
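The split between the two kinds of statistics can be sketched compactly on a single-layer binary RBM (the paper treats deep Boltzmann machines and uses a variational approximation for the positive phase; the shapes, learning rate, and omission of bias terms below are simplifying assumptions):

```python
# Persistent-chain (PCD-style) gradient step for a binary RBM:
# data-dependent statistics come from the training batch, data-independent
# statistics from Gibbs chains that persist across updates.
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

n_vis, n_hid, n_chains, lr = 20, 10, 16, 0.01
W = 0.01 * rng.normal(size=(n_vis, n_hid))
chains = rng.integers(0, 2, size=(n_chains, n_vis)).astype(float)

def pcd_update(batch, W, chains, gibbs_steps=1):
    # positive phase: hidden probabilities given the data
    ph_data = sigmoid(batch @ W)
    # negative phase: advance the persistent Gibbs chains
    for _ in range(gibbs_steps):
        ph = sigmoid(chains @ W)
        h = (rng.random(ph.shape) < ph).astype(float)
        pv = sigmoid(h @ W.T)
        chains = (rng.random(pv.shape) < pv).astype(float)
    ph_model = sigmoid(chains @ W)
    grad = batch.T @ ph_data / len(batch) - chains.T @ ph_model / len(chains)
    return W + lr * grad, chains

batch = rng.integers(0, 2, size=(32, n_vis)).astype(float)
W, chains = pcd_update(batch, W, chains)   # one stochastic gradient step
```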
40. Suykens, Johan A. K. "Deep Restricted Kernel Machines Using Conjugate Feature Duality". Neural Computation 29, no. 8 (August 2017): 2123–63. http://dx.doi.org/10.1162/neco_a_00984.

Abstract:
The aim of this letter is to propose a theory of deep restricted kernel machines offering new foundations for deep learning with kernel machines. From the viewpoint of deep learning, it is partially related to restricted Boltzmann machines, which are characterized by visible and hidden units in a bipartite graph without hidden-to-hidden connections, and to deep learning extensions such as deep belief networks and deep Boltzmann machines. From the viewpoint of kernel machines, it includes least squares support vector machines for classification and regression, kernel principal component analysis (PCA), matrix singular value decomposition, and Parzen-type models. A key element is to first characterize these kernel machines in terms of so-called conjugate feature duality, yielding a representation with visible and hidden units. It is shown how this is related to the energy form in restricted Boltzmann machines, with continuous variables in a nonprobabilistic setting. In this new framework of so-called restricted kernel machine (RKM) representations, the dual variables correspond to hidden features. Deep RKMs are obtained by coupling the RKMs. The method is illustrated for a deep RKM consisting of three levels: a least squares support vector machine regression level and two kernel PCA levels. In its primal form, deep feedforward neural networks can also be trained within this framework.
41. Kobayashi, Masaki. "Information geometry of hyperbolic-valued Boltzmann machines". Neurocomputing 431 (March 2021): 163–68. http://dx.doi.org/10.1016/j.neucom.2020.12.048.

42. YASUDA, Muneki, and Kazuyuki TANAKA. "Boltzmann Machines with Bounded Continuous Random Variables". Interdisciplinary Information Sciences 13, no. 1 (2007): 25–31. http://dx.doi.org/10.4036/iis.2007.25.

43. Mohan, Ankith, Aiichiro Nakano, and Emilio Ferrara. "Graph signal recovery using restricted Boltzmann machines". Expert Systems with Applications 185 (December 2021): 115635. http://dx.doi.org/10.1016/j.eswa.2021.115635.

44. Giuffrida, Mario Valerio, and Sotirios A. Tsaftaris. "Unsupervised Rotation Factorization in Restricted Boltzmann Machines". IEEE Transactions on Image Processing 29 (2020): 2166–75. http://dx.doi.org/10.1109/tip.2019.2946455.

45. Apolloni, Bruno, Egidio Battistini, and Diego de Falco. "Higher-order Boltzmann machines and entropy bounds". Journal of Physics A: Mathematical and General 32, no. 30 (July 20, 1999): 5529–38. http://dx.doi.org/10.1088/0305-4470/32/30/301.

46. Li, Qing, Yang Chen, and Yongjune Kim. "Compression by and for Deep Boltzmann Machines". IEEE Transactions on Communications 68, no. 12 (December 2020): 7498–510. http://dx.doi.org/10.1109/tcomm.2020.3020796.

47. Bounds, D. G. "A statistical mechanical study of Boltzmann machines". Journal of Physics A: Mathematical and General 20, no. 8 (June 1, 1987): 2133–45. http://dx.doi.org/10.1088/0305-4470/20/8/027.

48. Azencott, Robert, Antoine Doutriaux, and Laurent Younes. "Synchronous Boltzmann machines and curve identification tasks". Network: Computation in Neural Systems 4, no. 4 (November 1, 1993): 461–80. http://dx.doi.org/10.1088/0954-898x/4/4/004.

49. Azencott, Robert, Antoine Doutriaux, and Laurent Younes. "Synchronous Boltzmann machines and curve identification tasks". Network: Computation in Neural Systems 4, no. 4 (January 1993): 461–80. http://dx.doi.org/10.1088/0954-898x_4_4_004.

50. Neal, Radford M. "Asymmetric Parallel Boltzmann Machines are Belief Networks". Neural Computation 4, no. 6 (November 1992): 832–34. http://dx.doi.org/10.1162/neco.1992.4.6.832.
