Selected scientific literature on the topic "Machines de Boltzmann restreintes"


Consult the list of current articles, books, theses, conference proceedings, and other scientific sources relevant to the topic "Machines de Boltzmann restreintes".


Journal articles on the topic "Machines de Boltzmann restreintes"

1. Apolloni, B., A. Bertoni, P. Campadelli, and D. de Falco. "Asymmetric Boltzmann machines". Biological Cybernetics 66, no. 1 (November 1991): 61–70. http://dx.doi.org/10.1007/bf00196453.

2. Lu, Wenhao, Chi-Sing Leung, and John Sum. "Analysis on Noisy Boltzmann Machines and Noisy Restricted Boltzmann Machines". IEEE Access 9 (2021): 112955–65. http://dx.doi.org/10.1109/access.2021.3102275.

3. Livesey, M. "Clamping in Boltzmann machines". IEEE Transactions on Neural Networks 2, no. 1 (1991): 143–48. http://dx.doi.org/10.1109/72.80301.

4. Fischer, Asja. "Training Restricted Boltzmann Machines". KI - Künstliche Intelligenz 29, no. 4 (12 May 2015): 441–44. http://dx.doi.org/10.1007/s13218-015-0371-2.

5. Bojnordi, Mahdi Nazm, and Engin Ipek. "The Memristive Boltzmann Machines". IEEE Micro 37, no. 3 (2017): 22–29. http://dx.doi.org/10.1109/mm.2017.53.

6. Liu, Jeremy, Ke-Thia Yao, and Federico Spedalieri. "Dynamic Topology Reconfiguration of Boltzmann Machines on Quantum Annealers". Entropy 22, no. 11 (24 October 2020): 1202. http://dx.doi.org/10.3390/e22111202.

Abstract:
Boltzmann machines have useful roles in deep learning applications, such as generative data modeling, initializing weights for other types of networks, or extracting efficient representations from high-dimensional data. Most Boltzmann machines use restricted topologies that exclude looping connectivity, as such connectivity creates complex distributions that are difficult to sample. We have used an open-system quantum annealer to sample from complex distributions and implement Boltzmann machines with looping connectivity. Further, we have created policies mapping Boltzmann machine variables to the quantum bits of an annealer. These policies, based on correlation and entropy metrics, dynamically reconfigure the topology of Boltzmann machines during training and improve performance.
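To see concretely why looping connectivity creates distributions that are difficult to sample, consider a minimal sketch (illustrative only; the couplings and sizes are made up, and this is not code from the paper) that enumerates the Boltzmann distribution of a tiny fully connected machine. Exact enumeration sums over 2^n states, which is feasible only for a handful of units; larger looped topologies therefore need approximate samplers such as MCMC or, as in this paper, a quantum annealer.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 4                                    # number of binary units; enumeration is O(2^n)
W = rng.normal(size=(n, n))
W = (W + W.T) / 2                        # symmetric couplings
np.fill_diagonal(W, 0.0)                 # no self-connections
b = rng.normal(size=n)                   # unit biases

def energy(s):
    # Standard Boltzmann machine energy: E(s) = -1/2 s^T W s - b^T s
    return -0.5 * s @ W @ s - b @ s

# Enumerate every joint state of the looping (fully connected) topology
states = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)
logits = np.array([-energy(s) for s in states])
probs = np.exp(logits - logits.max())
probs /= probs.sum()                     # Boltzmann distribution p(s) ∝ exp(-E(s))
```

At n = 20 the state space already exceeds a million configurations; restricted (bipartite) topologies sidestep this blow-up by making the conditional distributions factorize.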
7. Luo, Heng, Ruimin Shen, Changyong Niu, and Carsten Ullrich. "Sparse Group Restricted Boltzmann Machines". Proceedings of the AAAI Conference on Artificial Intelligence 25, no. 1 (4 August 2011): 429–34. http://dx.doi.org/10.1609/aaai.v25i1.7923.

Abstract:
Since learning in Boltzmann machines is typically quite slow, there is a need to restrict connections within hidden layers. However, the resulting states of hidden units exhibit statistical dependencies. Based on this observation, we propose using l1/l2 regularization upon the activation probabilities of hidden units in restricted Boltzmann machines to capture the local dependencies among hidden units. This regularization not only encourages hidden units of many groups to be inactive given observed data but also makes hidden units within a group compete with each other for modeling observed data. Thus, the l1/l2 regularization on RBMs yields sparsity at both the group and the hidden unit levels. We call RBMs trained with the regularizer sparse group RBMs (SGRBMs). The proposed SGRBMs are applied to model patches of natural images, handwritten digits and OCR English letters. Then, to emphasize that SGRBMs can learn more discriminative features, we applied SGRBMs to pretrain deep networks for classification tasks. Furthermore, we illustrate that the regularizer can also be applied to deep Boltzmann machines, which leads to sparse group deep Boltzmann machines. When adapted to the MNIST data set, a two-layer sparse group Boltzmann machine achieves an error rate of 0.84%, which is, to our knowledge, the best published result on the permutation-invariant version of the MNIST task.
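The l1/l2 regularizer described in the abstract can be written down in a few lines. The sketch below (sizes and weights are invented for illustration, not taken from the paper) computes the hidden activation probabilities of a binary RBM and the mixed-norm penalty: an l2 norm within each group of hidden units, summed (l1) across groups.

```python
import numpy as np

rng = np.random.default_rng(1)
n_vis, n_hid, group_size = 6, 8, 4       # hypothetical sizes; 8 hidden units in 2 groups
W = rng.normal(scale=0.1, size=(n_vis, n_hid))
c = np.zeros(n_hid)                      # hidden biases
v = rng.integers(0, 2, size=n_vis).astype(float)   # one binary visible vector

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hidden activation probabilities p(h_j = 1 | v) of a binary RBM
p_h = sigmoid(v @ W + c)

# l1/l2 group-sparsity penalty: l2 norm within each group of hidden
# activation probabilities, summed (l1) across groups
groups = p_h.reshape(-1, group_size)
penalty = np.sqrt((groups ** 2).sum(axis=1)).sum()
```

During training this penalty would be added to the RBM's objective, so its gradient pushes whole groups toward inactivity while the units inside an active group compete to model the input.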
8. Decelle, Aurélien, and Cyril Furtlehner. "Gaussian-spherical restricted Boltzmann machines". Journal of Physics A: Mathematical and Theoretical 53, no. 18 (16 April 2020): 184002. http://dx.doi.org/10.1088/1751-8121/ab79f3.

9. Apolloni, B., and D. de Falco. "Learning by parallel Boltzmann machines". IEEE Transactions on Information Theory 37, no. 4 (July 1991): 1162–65. http://dx.doi.org/10.1109/18.87009.

10. d'Anjou, A., M. Grana, F. J. Torrealdea, and M. C. Hernandez. "Solving satisfiability via Boltzmann machines". IEEE Transactions on Pattern Analysis and Machine Intelligence 15, no. 5 (May 1993): 514–21. http://dx.doi.org/10.1109/34.211473.


Theses and dissertations on the topic "Machines de Boltzmann restreintes"

1. Fissore, Giancarlo. "Generative modeling: statistical physics of Restricted Boltzmann Machines, learning with missing information and scalable training of Linear Flows". Electronic thesis, Université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG028.

Abstract:
Neural network models able to approximate and sample high-dimensional probability distributions are known as generative models. In recent years this class of models has received tremendous attention due to their potential in automatically learning meaningful representations of the vast amount of data that we produce and consume daily. This thesis presents theoretical and algorithmic results pertaining to generative models and is divided into two parts. In the first part, we focus our attention on the Restricted Boltzmann Machine (RBM) and its statistical physics formulation. Historically, statistical physics has played a central role in studying the theoretical foundations and providing inspiration for neural network models. The first neural implementation of an associative memory (Hopfield, 1982) is a seminal work in this context. The RBM can be regarded as a development of the Hopfield model, and it is of particular interest due to its role at the forefront of the deep learning revolution (Hinton et al. 2006). Exploiting its statistical physics formulation, we derive a mean-field theory of the RBM that lets us characterize both its functioning as a generative model and the dynamics of its training procedure. This analysis proves useful in deriving a robust mean-field imputation strategy that makes it possible to use the RBM to learn empirical distributions in the challenging case in which the dataset to model is only partially observed and presents high percentages of missing information.

In the second part we consider a class of generative models known as Normalizing Flows (NF), whose distinguishing feature is the ability to model complex high-dimensional distributions by employing invertible transformations of a simple tractable distribution. The invertibility of the transformation allows the probability density to be expressed through a change of variables whose optimization by Maximum Likelihood (ML) is rather straightforward but computationally expensive. The common practice is to impose architectural constraints on the class of transformations used for NF, in order to make the ML optimization efficient. Proceeding from geometrical considerations, we propose a stochastic gradient descent optimization algorithm that exploits the matrix structure of fully connected neural networks without imposing any constraints on their structure other than the fixed dimensionality required by invertibility. This algorithm is computationally efficient and can scale to very high-dimensional datasets. We demonstrate its effectiveness in training a multilayer nonlinear architecture employing fully connected layers.
2. Hasasneh, Ahmad. "Robot semantic place recognition based on deep belief networks and a direct use of tiny images". PhD thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00960289.

Abstract:
Usually, human beings are able to quickly distinguish between different places, solely from their visual appearance. This is due to the fact that they can organize their space as composed of discrete units. These units, called "semantic places", are characterized by their spatial extent and their functional unity. Such a semantic category can thus be used as contextual information which fosters object detection and recognition. Recent works in semantic place recognition seek to endow the robot with similar capabilities. Contrary to classical localization and mapping works, this problem is usually addressed as a supervised learning problem. The question of semantic place recognition in robotics, the ability to recognize the semantic category of the place to which a scene belongs, is therefore a major requirement for the future of autonomous robotics. It is indeed required for an autonomous service robot to be able to recognize the environment in which it lives and to easily learn the organization of this environment in order to operate and interact successfully. To achieve that goal, different methods have already been proposed, some based on the identification of objects as a prerequisite to the recognition of the scenes, and some based on a direct description of the scene characteristics. If we make the hypothesis that objects are more easily recognized when the scene in which they appear is identified, the second approach seems more suitable. It is however strongly dependent on the nature of the image descriptors used, usually empirically derived from general considerations on image coding. Compared to these many proposals, another approach to image coding, based on a more theoretical point of view, has emerged in the last few years.
Energy-based models of feature extraction, based on the principle of minimizing the energy of some function according to the quality of the reconstruction of the image, have led to Restricted Boltzmann Machines (RBMs) able to code an image as the superposition of a limited number of features taken from a larger alphabet. It has also been shown that this process can be repeated in a deep architecture, leading to a sparse and efficient representation of the initial data in the feature space. A complex problem of classification in the input space is thus transformed into an easier one in the feature space. This approach has been successfully applied to the identification of tiny images from MIT's 80 million images database. In the present work, we demonstrate that semantic place recognition can be achieved on the basis of tiny images instead of conventional Bag-of-Words (BoW) methods, using Deep Belief Networks (DBNs) for image coding. We show that after appropriate coding a softmax regression in the projection space is sufficient to achieve promising classification results. To our knowledge, this approach has not yet been investigated for scene recognition in autonomous robotics. We compare our methods with state-of-the-art algorithms using a standard database of robot localization. We study the influence of system parameters and compare different conditions on the same dataset. These experiments show that our proposed model, while being very simple, leads to state-of-the-art results on a semantic place recognition task.
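The classification step mentioned above, a softmax regression on the learned feature codes, can be sketched in a few lines. This is a generic illustration, not the thesis's code: the features here are random stand-ins for DBN codes, and all sizes, the learning rate, and the iteration count are invented values.

```python
import numpy as np

rng = np.random.default_rng(4)
n_feat, n_classes, n_samples = 10, 3, 30     # hypothetical sizes
X = rng.normal(size=(n_samples, n_feat))     # stand-in for DBN feature codes
y = rng.integers(0, n_classes, size=n_samples)
W = np.zeros((n_feat, n_classes))            # softmax regression weights

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)     # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

onehot = np.eye(n_classes)[y]
for _ in range(200):                         # plain batch gradient descent
    P = softmax(X @ W)                       # class probabilities
    W -= 0.1 * X.T @ (P - onehot) / n_samples

accuracy = (softmax(X @ W).argmax(axis=1) == y).mean()
```

In the thesis's setting X would hold the DBN projections of tiny images and y the semantic place labels; the point is that a linear classifier suffices once the coding is good.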
3. Svoboda, Jiří. "Multi-modální 'Restricted Boltzmann Machines'". Master's thesis, Vysoké učení technické v Brně, Fakulta informačních technologií, 2013. http://www.nusl.cz/ntk/nusl-236426.

Abstract:
This thesis explores how multi-modal Restricted Boltzmann Machines (RBMs) can be used in content-based image tagging. The work also contains a brief analysis of the modalities that can be used for multi-modal classification, and describes various RBMs suitable for different kinds of input data. The design and implementation of a multi-modal RBM are described, together with the results of preliminary experiments.
4. Ticknor, Anthony James. "Optical Computing in Boltzmann Machines". Diss., The University of Arizona, 1987. http://hdl.handle.net/10150/184169.

Abstract:
This dissertation covers theoretical and experimental work on applying optical processing techniques to the operation of a Boltzmann machine. A Boltzmann machine is a processor that solves a problem by iteratively optimizing an estimate of the solution. The optimization is done by finding a minimum of an energy surface over the solution space. The energy function is designed to consider not only data but also a priori information about the problem to assist the optimization. The dissertation first establishes a generic line of approach for designing an algorithmic optical computer that might successfully operate using currently realizable analog optical systems for highly parallel operations. Simulated annealing, the algorithm of the Boltzmann machine, is then shown to be adaptable to this line of approach and is chosen as the algorithm to demonstrate these concepts throughout the dissertation. The algorithm is analyzed and optical systems are outlined that will perform the appropriate tasks within the algorithm. From this analysis and design, realizations of the optically assisted Boltzmann machine are described, and it is shown that the optical systems can be used in these algorithmic computations to produce solutions as precise as the single-pass operations of the analog optical systems. Further considerations are discussed for increasing the usefulness of the Boltzmann machine with respect to operating on larger data sets while maintaining the full degrees of parallelism, and for increasing the speed by reducing the number of electronic-optical transducers and by utilizing more of the available parallelism. It is demonstrated how, with a little digital support, the analog optical systems can be used to produce solutions with digital precision without compromising the speed of the optical computations.
Finally, there is a short discussion of how the Boltzmann machine may be modelled as a neuromorphic system for added insight into the computational functioning of the machine.
5. Camilli, Francesco. "Statistical mechanics perspectives on Boltzmann machines". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/19302/.

Abstract:
The thesis contains a rigorous approach to the statistical mechanics of disordered systems, with particular attention to mean-field models. The starting point is an introduction to the Curie-Weiss model, presenting both classical results and a new stability property under a change of normalization. The model is solved for ferromagnetic and antiferromagnetic interactions. Once the fundamental tools are introduced, we move to the Sherrington-Kirkpatrick model, in which the interactions are drawn independently from a standard Gaussian. The existence of the thermodynamic limit and the correctness of Parisi's replica symmetry breaking ansatz for the free energy are proved. The lower bound for the latter is rigorously established via the Aizenman-Sims-Starr scheme. In the following two chapters, multi-species models are studied. In the non-disordered case, the multi-layer model is solved, followed by an analysis of models in which the interaction matrix between species is definite (negative or positive). For disordered multi-species systems, only the elliptic case is analyzed, with a positive-definite covariance matrix of the interactions. A hyperbolic case, the Deep Boltzmann Machine (DBM), is finally discussed. Precisely because of the hyperbolicity of this model, only an upper bound on the free energy is available, built from combinations of SK free energies. Interesting perspectives emerge from the study of the annealed and replica symmetric regions, two particular regimes associated with high-temperature phases. It can be proved that, at zero external field, the stability of the replica symmetric solution is implied by the same conditions that guarantee annealing. Finally, it is shown that, by choosing suitable form factors, i.e. the ratios between the sizes of the DBM's layers, the annealed region of this model can be compressed.
6. Cruz, Felipe Joao Pontes da. "Recommender Systems Using Restricted Boltzmann Machines". Pontifícia Universidade Católica do Rio de Janeiro, 2016. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=30285@1.

Abstract:
Recommender systems can be used for many problems in the real world. Many models have been proposed for the problem of predicting missing entries in a specific dataset. Two of the most common approaches are neighborhood-based collaborative filtering and latent factor models. A more recent alternative was proposed in 2007 by Salakhutdinov, using Restricted Boltzmann Machines (RBMs). This model belongs to the family of latent factor models, in which we model latent factors over the data using hidden binary units. RBMs have been shown to approximate results obtained with a traditional matrix factorization model. In this work we'll revisit this proposed model and carefully detail how to model and train RBMs for the problem of missing ratings prediction.
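Salakhutdinov's model uses one softmax visible unit per rated item; as a rough illustration only (not the thesis's formulation), the sketch below simplifies to binary liked/not-liked entries with invented dimensions, showing the basic prediction pass: infer the hidden latent factors from the observed entries, then reconstruct probabilities for the unobserved ones.

```python
import numpy as np

rng = np.random.default_rng(2)
n_items, n_hid = 5, 3                    # hypothetical catalog and factor sizes
W = rng.normal(scale=0.1, size=(n_items, n_hid))
b_v, b_h = np.zeros(n_items), np.zeros(n_hid)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

v_obs = np.array([1.0, 0.0, 1.0, 0.0, 0.0])    # one user's binary "liked" vector
observed = np.array([True, True, True, False, False])

# Infer hidden latent-factor probabilities from the observed entries only
p_h = sigmoid(v_obs[observed] @ W[observed] + b_h)

# Reconstruct: probability the user would like each unobserved item
p_missing = sigmoid(W[~observed] @ p_h + b_v[~observed])
```

Ranking the unobserved items by `p_missing` yields the recommendation; the full model replaces the binary visibles with per-item softmax units over the rating scale.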
7. Moody, John Matali. "Process monitoring with restricted Boltzmann machines". Thesis, Stellenbosch University, 2014. http://hdl.handle.net/10019.1/86467.

Abstract:
Thesis (MScEng)--Stellenbosch University, 2014.
Process monitoring and fault diagnosis are used to detect abnormal events in processes. The early detection of such events or faults is crucial to continuous process improvement. Although principal component analysis and partial least squares are widely used for process monitoring and fault diagnosis in the metallurgical industries, these models are linear in principle; nonlinear approaches should provide more compact and informative models. The use of auto-associative neural networks, or autoencoders, provides a principled approach for process monitoring. However, until very recently, these multiple-layer neural networks have been difficult to train and have therefore not been used to any significant extent in process monitoring. With newly proposed algorithms based on the pre-training of the layers of the neural networks, it is now possible to train neural networks with very complex structures, i.e. deep neural networks. These neural networks can be used as autoencoders to extract features from high-dimensional data. In this study, the application of deep autoencoders in the form of Restricted Boltzmann Machines (RBMs) to the extraction of features from process data is considered. To date these networks have mostly been used for data visualization, and they have not yet been applied in the context of fault diagnosis or process monitoring. The objective of this investigation is therefore to assess the feasibility of using Restricted Boltzmann Machines in various fault detection schemes. The use of RBMs in process monitoring schemes is discussed, together with the application of these models in automated control frameworks.
8. Lacaille, Jérôme. "Machines de Boltzmann. Théorie et applications". Paris 11, 1992. http://www.theses.fr/1992PA112213.

Abstract:
This thesis presents Boltzmann machines in three main parts. First, the formalism and algorithms of these machines are detailed from a theoretical point of view, with emphasis on parallel and asymmetric architectures. An application of this type of network is then presented: an edge detector for grayscale photographs. The last part describes a software implementation of synchronous asymmetric Boltzmann machines: a simulator and an interpreter with a user manual. This last part is designed didactically, each step being illustrated by computer examples.
9. Swersky, Kevin. "Inductive principles for learning Restricted Boltzmann Machines". Thesis, University of British Columbia, 2010. http://hdl.handle.net/2429/27816.

Abstract:
We explore the training and usage of the Restricted Boltzmann Machine for unsupervised feature extraction. We investigate the many different aspects involved in their training, and by applying the concept of iterate averaging we show that it is possible to greatly improve on state-of-the-art algorithms. We also derive estimators based on the principles of pseudo-likelihood, ratio matching, and score matching, and we test them empirically against contrastive divergence and stochastic maximum likelihood (also known as persistent contrastive divergence). Our results show that ratio matching and score matching are promising approaches to learning Restricted Boltzmann Machines. By applying score matching to the Restricted Boltzmann Machine, we show that training an auto-encoder neural network with a particular kind of regularization function is asymptotically consistent. Finally, we discuss the concept of deep learning and its relationship to training Restricted Boltzmann Machines, and briefly explore the impact of fine-tuning on the parameters and performance of a deep belief network.
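As a reference point for the estimators compared above, a minimal CD-1 update for a binary RBM (biases omitted; the sizes and learning rate are made up) looks as follows. This is a generic sketch of the standard contrastive divergence algorithm, not code from the thesis.

```python
import numpy as np

rng = np.random.default_rng(3)
n_vis, n_hid, lr = 6, 4, 0.1             # hypothetical sizes and learning rate
W = rng.normal(scale=0.01, size=(n_vis, n_hid))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W):
    # Positive phase: hidden probabilities given the data vector
    ph0 = sigmoid(v0 @ W)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)   # sample hidden states
    # Negative phase: one Gibbs reconstruction step
    pv1 = sigmoid(h0 @ W.T)
    ph1 = sigmoid(pv1 @ W)
    # CD-1 gradient estimate: <v h>_data - <v h>_reconstruction
    return W + lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))

v0 = rng.integers(0, 2, size=n_vis).astype(float)      # one binary training vector
W_new = cd1_update(v0, W)
```

Persistent contrastive divergence differs only in that the negative-phase chain is carried over between updates instead of being restarted at the data.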
10. Farguell Matesanz, Enric. "A new approach to Decimation in High Order Boltzmann Machines". Doctoral thesis, Universitat Ramon Llull, 2011. http://hdl.handle.net/10803/9155.

Abstract:
La Màquina de Boltzmann (MB) és una xarxa neuronal estocàstica amb l'habilitat tant d'aprendre com d'extrapolar distribucions de probabilitat. Malgrat això, mai ha arribat a ser tant emprada com d'altres models de xarxa neuronal, com ara el perceptró, degut a la complexitat tan del procés de simulació com d'aprenentatge: les quantitats que es necessiten al llarg del procés d'aprenentatge són normalment estimades mitjançant tècniques Monte Carlo (MC), a través de l'algorisme del Temprat Simulat (SA). Això ha portat a una situació on la MB és més ben aviat considerada o bé com una extensió de la xarxa de Hopfield o bé com una implementació paral·lela del SA.

Malgrat aquesta relativa manca d'èxit, la comunitat científica de l'àmbit de les xarxes neuronals ha mantingut un cert interès amb el model. Una de les extensions més rellevants a la MB és la Màquina de Boltzmann d'Alt Ordre (HOBM), on els pesos poden connectar més de dues neurones simultàniament. Encara que les capacitats d'aprenentatge d'aquest model han estat analitzades per d'altres autors, no s'ha pogut establir una equivalència formal entre els pesos d'una MB i els pesos d'alt ordre de la HOBM.

The Boltzmann Machine (BM) is a stochastic neural network able to learn and extrapolate probability distributions. However, it has never been as widely used as other neural network models such as the perceptron, owing to the complexity of both the learning and recall algorithms and to the high computational cost of training: the quantities needed at the learning stage are usually estimated by Monte Carlo (MC) methods through the Simulated Annealing (SA) algorithm. As a result, the BM is generally regarded either as an extension of the Hopfield network or as a parallel implementation of the Simulated Annealing algorithm.

Despite this relative lack of success, the neural network community has continued to progress in the analysis of the dynamics of the model. One remarkable extension is the High Order Boltzmann Machine (HOBM), where weights can connect more than two neurons at a time. Although the learning capabilities of this model have already been discussed by other authors, a formal equivalence between the weights in a standard BM and the high order weights in a HOBM has not yet been established.

We analyze this equivalence between a second-order BM and a HOBM by proposing an extension of the method known as decimation. Decimation is a common tool in statistical physics that can be applied to certain classes of BMs to obtain analytical expressions for the n-unit correlations required in the learning process; it thereby avoids the time-consuming Simulated Annealing algorithm. However, as first conceived, it could only deal with sparsely connected neural networks. The extension we define in this thesis computes the same quantities irrespective of the topology of the network. It is based on adding enough high-order weights to a standard BM to guarantee that the decimation equations can be solved.

Next, we establish a direct equivalence between the weights of a HOBM model, the probability distribution to be learnt and Hadamard matrices. The properties of these matrices can be used to easily calculate the value of the weights of the system. Finally, we define a standard BM with a very specific topology that helps us better understand the exact equivalence between hidden units in a BM and high order weights in a HOBM.
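The correlations this abstract refers to can be illustrated with a toy sketch (not from the thesis; the network size, weights, and function names are illustrative): for a second-order BM small enough to enumerate, the pairwise correlations ⟨s_i s_j⟩ needed by the learning rule can be computed exactly, which is the kind of quantity decimation recovers analytically without Simulated Annealing.

```python
import itertools
import math

# Toy second-order Boltzmann machine over n binary units s_i in {-1, +1}.
# Energy: E(s) = -sum_{i<j} w_ij s_i s_j - sum_i b_i s_i
# The learning rule needs correlations <s_i s_j> under the Boltzmann
# distribution p(s) ~ exp(-E(s)/T); here they are obtained by brute-force
# enumeration, feasible only for very small networks.

def correlations(w, b, T=1.0):
    n = len(b)
    Z = 0.0
    corr = [[0.0] * n for _ in range(n)]
    for s in itertools.product((-1, 1), repeat=n):
        E = -sum(w[i][j] * s[i] * s[j] for i in range(n) for j in range(i + 1, n))
        E -= sum(b[i] * s[i] for i in range(n))
        p = math.exp(-E / T)  # unnormalized Boltzmann weight
        Z += p
        for i in range(n):
            for j in range(n):
                corr[i][j] += p * s[i] * s[j]
    return [[c / Z for c in row] for row in corr]

# Example: 3 units, one strong coupling between units 0 and 1, no biases.
w = [[0.0, 2.0, 0.0],
     [2.0, 0.0, 0.0],
     [0.0, 0.0, 0.0]]
b = [0.0, 0.0, 0.0]
c = correlations(w, b)
# <s0 s1> equals tanh(w01) here; <s0 s2> vanishes by symmetry.
```

For realistic topologies this enumeration is exponential in n, which is precisely why analytical shortcuts such as decimation, or stochastic estimates via Simulated Annealing, are needed.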

Books on the topic "Machines de Boltzmann restreintes"

1. Coughlin, James P. Neural Computation in Hopfield Networks and Boltzmann Machines. Newark: University of Delaware Press, 1995.

2. Williams, Chris. Using Deterministic Boltzmann Machines to Discriminate Temporally Distorted Strings. Ottawa: National Library of Canada, 1990.

3. Williams, Chris. Using Deterministic Boltzmann Machines to Discriminate Temporally Distorted Strings. Toronto: University of Toronto, Dept. of Computer Science, 1990.

4. Luttrell, Stephen P. The Implications of Boltzmann-Type Machines for SAR Data Processing: A Preliminary Survey. Malvern, Worcs: Procurement Executive, Ministry of Defence, RSRE, 1985.

5. Korst, Jan, ed. Simulated Annealing and Boltzmann Machines: A Stochastic Approach to Combinatorial Optimization and Neural Computing. Chichester: Wiley, 1989.

6. Azencott, Robert. Boltzmann Machines, Gibbs Fields, and Artificial Vision. Perseus Books, 1999.

7. Coughlin, James P., and Robert H. Baran. Neural Computation in Hopfield Networks and Boltzmann Machines. Rowman & Littlefield Publishers, 1995.

8. Masters, Timothy. Deep Belief Nets in C++ and CUDA C, Volume 1: Restricted Boltzmann Machines and Supervised Feedforward Networks. Apress, 2018.
9. Tiwari, Sandip. Information Mechanics. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198759874.003.0001.

Abstract:
Information is physical, so its manipulation through devices is subject to its own mechanics: the science and engineering of behavioral description, intermingled with classical, quantum and statistical mechanics principles. This chapter unifies these principles and physical laws with their implications at the nanoscale. It covers state machines, the Church-Turing thesis and its embodiment in various state machines, probabilities, Bayesian principles, and entropy in its various forms (Shannon, Boltzmann, von Neumann, algorithmic), with an eye on the principle of maximum entropy as an information-manipulation tool. Notions of conservation and non-conservation are applied to example circuit forms, folding in adiabatic, isothermal, reversible and irreversible processes. This brings out the implications of fluctuations and transitions, the interplay of errors and stability, and the energy cost of determinism. The chapter concludes by discussing networks as tools to understand information flow and decision making, and with an introduction to entanglement in quantum computing.
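Among the entropy forms this abstract lists, the Shannon form and the maximum-entropy principle admit a minimal numerical illustration (a generic sketch, not drawn from the chapter; the distributions below are arbitrary examples):

```python
import math

def shannon_entropy(p):
    """Shannon entropy in bits of a discrete distribution p (probabilities sum to 1)."""
    return -sum(x * math.log2(x) for x in p if x > 0)

# Over 4 outcomes, the uniform distribution attains the maximum, log2(4) = 2 bits;
# any other distribution over the same outcomes has strictly lower entropy,
# which is the content of the maximum-entropy principle for this simple case.
uniform = [0.25, 0.25, 0.25, 0.25]
skewed = [0.7, 0.1, 0.1, 0.1]
```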

Book chapters on the topic "Machines de Boltzmann restreintes"

1. Du, Ke-Lin, and M. N. S. Swamy. "Boltzmann Machines". In Neural Networks and Statistical Learning, 699–715. London: Springer London, 2019. http://dx.doi.org/10.1007/978-1-4471-7452-3_23.

2. Alla, Sridhar, and Suman Kalyan Adari. "Boltzmann Machines". In Beginning Anomaly Detection Using Python-Based Deep Learning, 179–212. Berkeley, CA: Apress, 2019. http://dx.doi.org/10.1007/978-1-4842-5177-5_5.

3. Spieksma, F. C. R. "Boltzmann Machines". In Lecture Notes in Computer Science, 119–29. Berlin, Heidelberg: Springer Berlin Heidelberg, 1995. http://dx.doi.org/10.1007/bfb0027026.

4. Munro, Paul, Hannu Toivonen, Geoffrey I. Webb, Wray Buntine, Peter Orbanz, Yee Whye Teh, Pascal Poupart, et al. "Boltzmann Machines". In Encyclopedia of Machine Learning, 132–36. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_83.

5. Yan, Wei Qi. "Boltzmann Machines". In Texts in Computer Science, 99–107. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-61081-4_7.

6. Hinton, Geoffrey. "Boltzmann Machines". In Encyclopedia of Machine Learning and Data Mining, 1–7. Boston, MA: Springer US, 2014. http://dx.doi.org/10.1007/978-1-4899-7502-7_31-1.

7. Hinton, Geoffrey. "Boltzmann Machines". In Encyclopedia of Machine Learning and Data Mining, 164–68. Boston, MA: Springer US, 2017. http://dx.doi.org/10.1007/978-1-4899-7687-1_31.

8. Masters, Timothy. "Restricted Boltzmann Machines". In Deep Belief Nets in C++ and CUDA C: Volume 1, 91–172. Berkeley, CA: Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-3591-1_3.

9. Poole, William, Andrés Ortiz-Muñoz, Abhishek Behera, Nick S. Jones, Thomas E. Ouldridge, Erik Winfree, and Manoj Gopalkrishnan. "Chemical Boltzmann Machines". In Lecture Notes in Computer Science, 210–31. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66799-7_14.

10. Aggarwal, Charu C. "Restricted Boltzmann Machines". In Neural Networks and Deep Learning, 235–70. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-94463-0_6.

Conference papers on the topic "Machines de Boltzmann restreintes"

1. Ticknor, Anthony J., and Harrison H. Barrett. "Optical Boltzmann machines". In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1985. http://dx.doi.org/10.1364/oam.1985.tuy3.

Abstract:
An optical Boltzmann machine is a highly parallel computing module for inverting matrix equations of the form g = Af. A Monte Carlo procedure is used because it is more efficient with large matrices and because the transform matrix A often has no true inverse. Since the operation is not deterministic, the result f̂ will only be an estimate of the data f, although the confidence that it is the best possible estimate can be made very high. An energy function E of the estimate f̂ is defined whose minimum lies at the best estimate; the starting estimate is then iteratively perturbed by adding or subtracting grains so that the running estimate generally decreases E. Brief excursions that increase E are initially accepted to prevent the estimate from settling into a local minimum, but these excursions are gradually prohibited to let f̂ settle into the best estimate. Highly parallel optical systems can very quickly calculate the energy of an estimate and decide whether excursions to higher energy should be accepted. A few of the possible architectures are described, capable of processing speeds of 10⁶–10⁷ grains/s while taking only moderate advantage of the available parallelism.
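The iterative procedure this abstract describes — perturb the estimate by one grain, always accept downhill moves, accept uphill excursions with a probability that shrinks as the system cools — is ordinary simulated annealing. A minimal sketch (names, step size, and the quadratic stand-in for the optical energy function are all illustrative, not from the paper):

```python
import math
import random

def anneal(energy, f0, step=1.0, T0=10.0, cooling=0.95, iters=2000, seed=0):
    """Minimize energy(f) by simulated annealing on a scalar estimate f."""
    rng = random.Random(seed)
    f, T = f0, T0
    E = energy(f)
    for _ in range(iters):
        cand = f + rng.choice((-step, step))       # add or subtract one "grain"
        dE = energy(cand) - E
        # Downhill moves are always accepted; uphill excursions are accepted
        # with probability exp(-dE/T), which vanishes as T is lowered.
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            f, E = cand, E + dE
        T *= cooling                               # gradually prohibit excursions
    return f

# Example: quadratic energy with its minimum at f = 7; starting from 0,
# the annealed estimate should settle near that minimum.
best = anneal(lambda f: (f - 7.0) ** 2, f0=0.0)
```

The optical systems in the paper replace the `energy` evaluation and the accept/reject decision with parallel analog hardware; the control loop itself is the same.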
2. Ticknor, Anthony J., Harrison H. Barrett, and Roger L. Easton. "Optical Boltzmann Machines". In Optical Computing. Washington, D.C.: Optica Publishing Group, 1985. http://dx.doi.org/10.1364/optcomp.1985.pd3.

3. Sejnowski, Terrence J. "Higher-order Boltzmann machines". In AIP Conference Proceedings Volume 151. AIP, 1986. http://dx.doi.org/10.1063/1.36246.

4. Wang, Maolin, Chenbin Zhang, Yu Pan, Jing Xu, and Zenglin Xu. "Tensor Ring Restricted Boltzmann Machines". In 2019 International Joint Conference on Neural Networks (IJCNN). IEEE, 2019. http://dx.doi.org/10.1109/ijcnn.2019.8852432.

5. Popa, Calin-Adrian. "Complex-Valued Deep Boltzmann Machines". In 2018 International Joint Conference on Neural Networks (IJCNN). IEEE, 2018. http://dx.doi.org/10.1109/ijcnn.2018.8489359.

6. Bounds, D. G. "Numerical simulations of Boltzmann Machines". In AIP Conference Proceedings Volume 151. AIP, 1986. http://dx.doi.org/10.1063/1.36220.

7. Liu, Ying. "Image compression using Boltzmann machines". In SPIE's 1993 International Symposium on Optics, Imaging, and Instrumentation, edited by Su-Shing Chen. SPIE, 1993. http://dx.doi.org/10.1117/12.162027.

8. Yu, Jianjun, and Xiaogang Ruan. "Optimal Control with Boltzmann Machines". In 2008 Fourth International Conference on Natural Computation. IEEE, 2008. http://dx.doi.org/10.1109/icnc.2008.692.

9. Ticknor, Anthony J., and Harrison H. Barrett. "Optical implementation in Boltzmann machines". In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1986. http://dx.doi.org/10.1364/oam.1986.mdd6.

Abstract:
The Boltzmann machine, a hardware implementation of the simulated annealing algorithm, easily and directly takes advantage of highly interconnected systems for parallel processing. Optical systems for matrix operations and random number generation greatly relax the computational burden of the digital electronics and increase the speed by many orders of magnitude over the software implementation. The algorithm is inherently fault-tolerant, but simple use of an analog optical system for matrix operations might critically limit available dynamic range and space-bandwidth product. We therefore present a method for significantly increasing the available dynamic range with little sacrifice in the overall grain-processing speed. We also show how under some circumstances the usable space-bandwidth product of the reconstruction can be increased.
10. Passos, Leandro Aparecido, and Joao Paulo Papa. "Fine-Tuning Infinity Restricted Boltzmann Machines". In 2017 30th SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI). IEEE, 2017. http://dx.doi.org/10.1109/sibgrapi.2017.15.
