Table of contents
Selection of scholarly literature on the topic "Machines de Boltzmann restreintes"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Machines de Boltzmann restreintes."
Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the publication as a PDF and read its online annotation, provided the relevant parameters are present in the metadata.
Journal articles on the topic "Machines de Boltzmann restreintes"
Apolloni, B., A. Bertoni, P. Campadelli, and D. de Falco. "Asymmetric Boltzmann machines." Biological Cybernetics 66, no. 1 (November 1991): 61–70. http://dx.doi.org/10.1007/bf00196453.
Lu, Wenhao, Chi-Sing Leung, and John Sum. "Analysis on Noisy Boltzmann Machines and Noisy Restricted Boltzmann Machines." IEEE Access 9 (2021): 112955–65. http://dx.doi.org/10.1109/access.2021.3102275.
Livesey, M. "Clamping in Boltzmann machines." IEEE Transactions on Neural Networks 2, no. 1 (1991): 143–48. http://dx.doi.org/10.1109/72.80301.
Fischer, Asja. "Training Restricted Boltzmann Machines." KI - Künstliche Intelligenz 29, no. 4 (12 May 2015): 441–44. http://dx.doi.org/10.1007/s13218-015-0371-2.
Bojnordi, Mahdi Nazm, and Engin Ipek. "The Memristive Boltzmann Machines." IEEE Micro 37, no. 3 (2017): 22–29. http://dx.doi.org/10.1109/mm.2017.53.
Liu, Jeremy, Ke-Thia Yao, and Federico Spedalieri. "Dynamic Topology Reconfiguration of Boltzmann Machines on Quantum Annealers." Entropy 22, no. 11 (24 October 2020): 1202. http://dx.doi.org/10.3390/e22111202.
Luo, Heng, Ruimin Shen, Changyong Niu, and Carsten Ullrich. "Sparse Group Restricted Boltzmann Machines." Proceedings of the AAAI Conference on Artificial Intelligence 25, no. 1 (4 August 2011): 429–34. http://dx.doi.org/10.1609/aaai.v25i1.7923.
Decelle, Aurélien, and Cyril Furtlehner. "Gaussian-spherical restricted Boltzmann machines." Journal of Physics A: Mathematical and Theoretical 53, no. 18 (16 April 2020): 184002. http://dx.doi.org/10.1088/1751-8121/ab79f3.
Apolloni, B., and D. de Falco. "Learning by parallel Boltzmann machines." IEEE Transactions on Information Theory 37, no. 4 (July 1991): 1162–65. http://dx.doi.org/10.1109/18.87009.
d'Anjou, A., M. Grana, F. J. Torrealdea, and M. C. Hernandez. "Solving satisfiability via Boltzmann machines." IEEE Transactions on Pattern Analysis and Machine Intelligence 15, no. 5 (May 1993): 514–21. http://dx.doi.org/10.1109/34.211473.
Dissertations on the topic "Machines de Boltzmann restreintes"
Fissore, Giancarlo. "Generative modeling: statistical physics of Restricted Boltzmann Machines, learning with missing information and scalable training of Linear Flows." Electronic thesis or dissertation, Université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG028.
Neural network models able to approximate and sample high-dimensional probability distributions are known as generative models. In recent years this class of models has received tremendous attention due to its potential for automatically learning meaningful representations of the vast amount of data that we produce and consume daily. This thesis presents theoretical and algorithmic results pertaining to generative models and is divided into two parts. In the first part, we focus on the Restricted Boltzmann Machine (RBM) and its statistical physics formulation. Historically, statistical physics has played a central role in studying the theoretical foundations of neural network models and in providing inspiration for them. The first neural implementation of an associative memory (Hopfield, 1982) is a seminal work in this context. The RBM can be regarded as a development of the Hopfield model, and it is of particular interest due to its role at the forefront of the deep learning revolution (Hinton et al. 2006). Exploiting its statistical physics formulation, we derive a mean-field theory of the RBM that lets us characterize both its functioning as a generative model and the dynamics of its training procedure. This analysis proves useful in deriving a robust mean-field imputation strategy that makes it possible to use the RBM to learn empirical distributions in the challenging case in which the dataset to be modeled is only partially observed and presents high percentages of missing information. In the second part we consider a class of generative models known as Normalizing Flows (NF), whose distinguishing feature is the ability to model complex high-dimensional distributions by employing invertible transformations of a simple tractable distribution.
The invertibility of the transformation allows the probability density to be expressed through a change of variables whose optimization by Maximum Likelihood (ML) is rather straightforward but computationally expensive. The common practice is to impose architectural constraints on the class of transformations used for NF in order to make the ML optimization efficient. Proceeding from geometrical considerations, we propose a stochastic gradient descent optimization algorithm that exploits the matrix structure of fully connected neural networks without imposing any constraints on their structure other than the fixed dimensionality required by invertibility. This algorithm is computationally efficient and can scale to very high-dimensional datasets. We demonstrate its effectiveness in training a multilayer nonlinear architecture employing fully connected layers.
Hasasneh, Ahmad. "Robot semantic place recognition based on deep belief networks and a direct use of tiny images." PhD thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00960289.
Svoboda, Jiří. "Multi-modální 'Restricted Boltzmann Machines'." Master's thesis, Vysoké učení technické v Brně, Fakulta informačních technologií, 2013. http://www.nusl.cz/ntk/nusl-236426.
Ticknor, Anthony James. "Optical Computing in Boltzmann Machines." Diss., The University of Arizona, 1987. http://hdl.handle.net/10150/184169.
Camilli, Francesco. "Statistical mechanics perspectives on Boltzmann machines." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/19302/.
Cruz, Felipe Joao Pontes da. "Recommender Systems Using Restricted Boltzmann Machines." Pontifícia Universidade Católica do Rio de Janeiro, 2016. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=30285@1.
Der volle Inhalt der QuelleCOORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
PROGRAMA DE EXCELENCIA ACADEMICA
Recommender systems appear in many real-world domains. Many models have been proposed for the problem of predicting missing entries in a dataset. Two of the most common approaches are neighborhood-based collaborative filtering and latent factor models. A more recent alternative was proposed in 2007 by Salakhutdinov, using Restricted Boltzmann Machines (RBMs). This model belongs to the family of latent factor models, in which latent factors of the data are modeled by the binary units in the hidden layer of the RBM. These models have been shown to approximate results obtained with traditional matrix factorization models. In this work we revisit this model and carefully detail how to model and train RBMs for the problem of predicting missing entries in tabular data.
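As a rough illustration of the training procedure such models rely on, here is a minimal binary RBM updated with one-step contrastive divergence (CD-1) in plain NumPy. This is a hedged sketch of the generic technique; the rating-prediction RBM in the thesis uses a more elaborate conditional softmax formulation over rating values.

```python
import numpy as np

# Minimal binary RBM trained with CD-1 (contrastive divergence).
# The hidden binary units play the role of latent factors over the visible data.

rng = np.random.default_rng(0)
n_visible, n_hidden = 6, 4
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
a = np.zeros(n_visible)  # visible biases
b = np.zeros(n_hidden)   # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, a, b, lr=0.1):
    """One CD-1 update from a batch of binary visible vectors."""
    ph0 = sigmoid(v0 @ W + b)                         # P(h=1 | v0): positive phase
    h0 = (rng.random(ph0.shape) < ph0).astype(float)  # sample hidden states
    pv1 = sigmoid(h0 @ W.T + a)                       # one-step reconstruction
    ph1 = sigmoid(pv1 @ W + b)                        # negative phase
    n = v0.shape[0]
    W = W + lr * (v0.T @ ph0 - pv1.T @ ph1) / n       # positive minus negative statistics
    a = a + lr * (v0 - pv1).mean(axis=0)
    b = b + lr * (ph0 - ph1).mean(axis=0)
    return W, a, b

data = (rng.random((8, n_visible)) < 0.5).astype(float)  # toy binary "ratings"
for _ in range(20):
    W, a, b = cd1_step(data, W, a, b)
```

For prediction, one would clamp a user's observed entries on the visible layer and read the reconstruction probabilities `pv1` for the missing ones.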
Moody, John Matali. "Process monitoring with restricted Boltzmann machines." Thesis, Stellenbosch: Stellenbosch University, 2014. http://hdl.handle.net/10019.1/86467.
Process monitoring and fault diagnosis are used to detect abnormal events in processes. The early detection of such events or faults is crucial to continuous process improvement. Although principal component analysis and partial least squares are widely used for process monitoring and fault diagnosis in the metallurgical industries, these models are linear in principle; nonlinear approaches should provide more compact and informative models. The use of auto-associative neural networks, or autoencoders, provides a principled approach to process monitoring. Until very recently, however, these multilayer neural networks were difficult to train and had therefore not been used to any significant extent in process monitoring. With newly proposed algorithms based on pre-training the layers of the network, it is now possible to train neural networks with very complex structures, i.e. deep neural networks. Such networks can be used as autoencoders to extract features from high-dimensional data. This study considers the application of deep autoencoders in the form of Restricted Boltzmann Machines (RBMs) to the extraction of features from process data. To date these networks have mostly been used for data visualization and have not yet been applied in the context of fault diagnosis or process monitoring. The objective of this investigation is therefore to assess the feasibility of using Restricted Boltzmann Machines in various fault detection schemes. The use of RBMs in process monitoring schemes is discussed, together with the application of these models in automated control frameworks.
Lacaille, Jerome. "Machines de Boltzmann. Theorie et applications." Paris 11, 1992. http://www.theses.fr/1992PA112213.
Swersky, Kevin. "Inductive principles for learning Restricted Boltzmann Machines." Thesis, University of British Columbia, 2010. http://hdl.handle.net/2429/27816.
Farguell Matesanz, Enric. "A new approach to Decimation in High Order Boltzmann Machines." Doctoral thesis, Universitat Ramon Llull, 2011. http://hdl.handle.net/10803/9155.
The Boltzmann Machine (BM) is a stochastic neural network able to both learn and extrapolate probability distributions. However, it has never been as widely used as other neural networks such as the perceptron, owing to the complexity of both the learning and recall algorithms and to the high computational cost of the learning process: the quantities needed at the learning stage are usually estimated by Monte Carlo (MC) methods through the Simulated Annealing (SA) algorithm. As a result, the BM is generally regarded either as an evolution of the Hopfield neural network or as a parallel implementation of the Simulated Annealing algorithm.
Despite this relative lack of success, the neural network community has continued to analyze the dynamics of the model. One remarkable extension is the High Order Boltzmann Machine (HOBM), where weights can connect more than two neurons at a time. Although the learning capabilities of this model have already been discussed by other authors, a formal equivalence between the weights in a standard BM and the high order weights in a HOBM has not yet been established.
We analyze this equivalence between a second-order BM and a HOBM by proposing an extension of the method known as decimation. Decimation is a common tool in statistical physics that can be applied to certain kinds of BMs to obtain analytical expressions for the n-unit correlations required in the learning process; it thus avoids the time-consuming Simulated Annealing algorithm. However, as first conceived, decimation could only deal with sparsely connected neural networks. The extension defined in this thesis allows the same quantities to be computed irrespective of the topology of the network. The method is based on adding enough high order weights to a standard BM to guarantee that the decimation equations can be solved.
Next, we establish a direct equivalence between the weights of a HOBM, the probability distribution to be learnt, and Hadamard matrices, whose properties can be used to easily calculate the values of the weights of the system. Finally, we define a standard BM with a very specific topology that helps us better understand the exact equivalence between hidden units in a BM and high order weights in a HOBM.
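To make the contrast in this abstract concrete, the standard (second-order) BM and the HOBM differ only in the order of the interaction terms in their energy functions; with third-order terms shown as an example, they can be written (in conventional notation, with binary units \(s_i\), biases \(b_i\), and weights \(w\)) as:

```latex
E_{\mathrm{BM}}(\mathbf{s})   = -\sum_{i<j} w_{ij}\, s_i s_j \;-\; \sum_i b_i s_i ,
\qquad
E_{\mathrm{HOBM}}(\mathbf{s}) = -\sum_{i<j} w_{ij}\, s_i s_j
   \;-\; \sum_{i<j<k} w_{ijk}\, s_i s_j s_k \;-\; \cdots \;-\; \sum_i b_i s_i .
```

The equivalence studied in the thesis trades hidden units in the second-order model for such higher-order weights among the visible units.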
Books on the topic "Machines de Boltzmann restreintes"
Coughlin, James P. Neural computation in Hopfield networks and Boltzmann machines. Newark: University of Delaware Press, 1995.
Williams, Chris. Using deterministic Boltzmann machines to discriminate temporally distorted strings. Ottawa: National Library of Canada, 1990.
Williams, Chris. Using deterministic Boltzmann machines to discriminate temporally distorted strings. Toronto: University of Toronto, Dept. of Computer Science, 1990.
Luttrell, Stephen P. The implications of Boltzmann-type machines for SAR data processing: A preliminary survey. Malvern, Worcs: Procurement Executive, Ministry of Defence, RSRE, 1985.
Korst, Jan, ed. Simulated annealing and Boltzmann machines: A stochastic approach to combinatorial optimization and neural computing. Chichester, England: Wiley, 1989.
Azencott, Robert. Boltzmann Machines, Gibbs Fields, and Artificial Vision. Perseus Books, 1999.
Coughlin, James P., and Robert H. Baran. Neural Computation in Hopfield Networks and Boltzmann Machines. Rowman & Littlefield Publishers, Incorporated, 1995.
Masters, Timothy. Deep Belief Nets in C++ and CUDA C: Volume 1: Restricted Boltzmann Machines and Supervised Feedforward Networks. Apress, 2018.
Tiwari, Sandip. Information mechanics. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198759874.003.0001.
Book chapters on the topic "Machines de Boltzmann restreintes"
Du, Ke-Lin, and M. N. S. Swamy. "Boltzmann Machines." In Neural Networks and Statistical Learning, 699–715. London: Springer London, 2019. http://dx.doi.org/10.1007/978-1-4471-7452-3_23.
Alla, Sridhar, and Suman Kalyan Adari. "Boltzmann Machines." In Beginning Anomaly Detection Using Python-Based Deep Learning, 179–212. Berkeley, CA: Apress, 2019. http://dx.doi.org/10.1007/978-1-4842-5177-5_5.
Spieksma, F. C. R. "Boltzmann Machines." In Lecture Notes in Computer Science, 119–29. Berlin, Heidelberg: Springer Berlin Heidelberg, 1995. http://dx.doi.org/10.1007/bfb0027026.
Munro, Paul, Hannu Toivonen, Geoffrey I. Webb, Wray Buntine, Peter Orbanz, Yee Whye Teh, Pascal Poupart, et al. "Boltzmann Machines." In Encyclopedia of Machine Learning, 132–36. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_83.
Yan, Wei Qi. "Boltzmann Machines." In Texts in Computer Science, 99–107. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-61081-4_7.
Hinton, Geoffrey. "Boltzmann Machines." In Encyclopedia of Machine Learning and Data Mining, 1–7. Boston, MA: Springer US, 2014. http://dx.doi.org/10.1007/978-1-4899-7502-7_31-1.
Hinton, Geoffrey. "Boltzmann Machines." In Encyclopedia of Machine Learning and Data Mining, 164–68. Boston, MA: Springer US, 2017. http://dx.doi.org/10.1007/978-1-4899-7687-1_31.
Masters, Timothy. "Restricted Boltzmann Machines." In Deep Belief Nets in C++ and CUDA C: Volume 1, 91–172. Berkeley, CA: Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-3591-1_3.
Poole, William, Andrés Ortiz-Muñoz, Abhishek Behera, Nick S. Jones, Thomas E. Ouldridge, Erik Winfree, and Manoj Gopalkrishnan. "Chemical Boltzmann Machines." In Lecture Notes in Computer Science, 210–31. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66799-7_14.
Aggarwal, Charu C. "Restricted Boltzmann Machines." In Neural Networks and Deep Learning, 235–70. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-94463-0_6.
Der volle Inhalt der QuelleKonferenzberichte zum Thema "Machines de Boltzmann restreintes"
Ticknor, Anthony J., and Harrison H. Barrett. "Optical Boltzmann machines." In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1985. http://dx.doi.org/10.1364/oam.1985.tuy3.
Ticknor, Anthony J., Harrison H. Barrett, and Roger L. Easton. "Optical Boltzmann Machines." In Optical Computing. Washington, D.C.: Optica Publishing Group, 1985. http://dx.doi.org/10.1364/optcomp.1985.pd3.
Sejnowski, Terrence J. "Higher-order Boltzmann machines." In AIP Conference Proceedings Volume 151. AIP, 1986. http://dx.doi.org/10.1063/1.36246.
Wang, Maolin, Chenbin Zhang, Yu Pan, Jing Xu, and Zenglin Xu. "Tensor Ring Restricted Boltzmann Machines." In 2019 International Joint Conference on Neural Networks (IJCNN). IEEE, 2019. http://dx.doi.org/10.1109/ijcnn.2019.8852432.
Popa, Calin-Adrian. "Complex-Valued Deep Boltzmann Machines." In 2018 International Joint Conference on Neural Networks (IJCNN). IEEE, 2018. http://dx.doi.org/10.1109/ijcnn.2018.8489359.
Bounds, D. G. "Numerical simulations of Boltzmann Machines." In AIP Conference Proceedings Volume 151. AIP, 1986. http://dx.doi.org/10.1063/1.36220.
Liu, Ying. "Image compression using Boltzmann machines." In SPIE's 1993 International Symposium on Optics, Imaging, and Instrumentation, edited by Su-Shing Chen. SPIE, 1993. http://dx.doi.org/10.1117/12.162027.
Yu, Jianjun, and Xiaogang Ruan. "Optimal Control with Boltzmann Machines." In 2008 Fourth International Conference on Natural Computation. IEEE, 2008. http://dx.doi.org/10.1109/icnc.2008.692.
Ticknor, Anthony J., and Harrison H. Barrett. "Optical implementation in Boltzmann machines." In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1986. http://dx.doi.org/10.1364/oam.1986.mdd6.
Passos, Leandro Aparecido, and Joao Paulo Papa. "Fine-Tuning Infinity Restricted Boltzmann Machines." In 2017 30th SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI). IEEE, 2017. http://dx.doi.org/10.1109/sibgrapi.2017.15.