Ready-made bibliography on the topic "Unsupervised neural networks"

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles


Consult the lists of relevant articles, books, theses, conference papers, and other scholarly sources on the topic "Unsupervised neural networks".

Next to each work in the bibliography you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Unsupervised neural networks"

1

Murnion, Shane D. "Spatial analysis using unsupervised neural networks". Computers & Geosciences 22, no. 9 (November 1996): 1027–31. http://dx.doi.org/10.1016/s0098-3004(96)00041-6.

2

Luo, Shuyue, Shangbo Zhou, Yong Feng, and Jiangan Xie. "Pansharpening via Unsupervised Convolutional Neural Networks". IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 13 (2020): 4295–310. http://dx.doi.org/10.1109/jstars.2020.3008047.

3

Meuleman, J., and C. van Kaam. "UNSUPERVISED IMAGE SEGMENTATION WITH NEURAL NETWORKS". Acta Horticulturae, no. 562 (November 2001): 101–8. http://dx.doi.org/10.17660/actahortic.2001.562.10.

4

Gunhan, Atilla E., László P. Csernai, and Jørgen Randrup. "UNSUPERVISED COMPETITIVE LEARNING IN NEURAL NETWORKS". International Journal of Neural Systems 01, no. 02 (January 1989): 177–86. http://dx.doi.org/10.1142/s0129065789000086.

Abstract:
We study an idealized neural network that may approximate certain neurophysiological features of natural neural systems. The network contains a mutual lateral inhibition and is subjected to unsupervised learning by means of a Hebb-type learning principle. Its learning ability is analysed as a function of the strength of lateral inhibition and the training set.
5

Becker, Suzanna. "UNSUPERVISED LEARNING PROCEDURES FOR NEURAL NETWORKS". International Journal of Neural Systems 02, no. 01n02 (January 1991): 17–33. http://dx.doi.org/10.1142/s0129065791000030.

Abstract:
Supervised learning procedures for neural networks have recently met with considerable success in learning difficult mappings. However, their range of applicability is limited by their poor scaling behavior, lack of biological plausibility, and restriction to problems for which an external teacher is available. A promising alternative is to develop unsupervised learning algorithms which can adaptively learn to encode the statistical regularities of the input patterns, without being told explicitly the correct response for each pattern. In this paper, we describe the major approaches that have been taken to model unsupervised learning, and give an in-depth review of several examples of each approach.
6

Hamad, D., C. Firmin, and J. G. Postaire. "Unsupervised pattern classification by neural networks". Mathematics and Computers in Simulation 41, no. 1–2 (June 1996): 109–16. http://dx.doi.org/10.1016/0378-4754(95)00063-1.

7

Vamaraju, Janaki, and Mrinal K. Sen. "Unsupervised physics-based neural networks for seismic migration". Interpretation 7, no. 3 (August 1, 2019): SE189–SE200. http://dx.doi.org/10.1190/int-2018-0230.1.

Abstract:
We have developed a novel framework for combining physics-based forward models and neural networks to advance seismic processing and inversion algorithms. Migration is an effective tool in seismic data processing and imaging. Over the years, the scope of these algorithms has broadened; today, migration is a central step in the seismic data processing workflow. However, no single migration technique is suitable for all kinds of data and all styles of acquisition. There is always a compromise on the accuracy, cost, and flexibility of these algorithms. On the other hand, machine-learning algorithms and artificial intelligence methods have been found immensely successful in applications in which big data are available. The applicability of these algorithms is being extensively investigated in scientific disciplines such as exploration geophysics with the goal of reducing exploration and development costs. In this context, we have used a special kind of unsupervised recurrent neural network and its variants, Hopfield neural networks and the Boltzmann machine, to solve the problems of Kirchhoff and reverse time migrations. We use the network to migrate seismic data in a least-squares sense using simulated annealing to globally optimize the cost function of the neural network. The weights and biases of the neural network are derived from the physics-based forward models that are used to generate seismic data. The optimal configuration of the neural network after training corresponds to the minimum energy of the network and thus gives the reflectivity solution of the migration problem. Using synthetic examples, we determine that (1) Hopfield neural networks are fast and efficient and (2) they provide reflectivity images with mitigated migration artifacts and improved spatial resolution. Specifically, the presented approach minimizes the artifacts that arise from limited aperture, low subsurface illumination, coarse sampling, and gaps in the data.
8

Xu, Jianqiao, Zhaolu Zuo, Danchao Wu, Bing Li, Xiaoni Li, and Deyi Kong. "Bearing Defect Detection with Unsupervised Neural Networks". Shock and Vibration 2021 (August 19, 2021): 1–11. http://dx.doi.org/10.1155/2021/9544809.

Abstract:
Bearings always suffer from surface defects, such as scratches, black spots, and pits. These surface defects have great effects on the quality and service life of bearings; therefore, defect detection has always been the focus of bearing quality control. Deep learning has been successfully applied to object detection due to its excellent performance. However, it is difficult to realize automatic detection of bearing surface defects with data-driven deep learning, because few samples of bearing defects are available on the actual production line. A sample preprocessing algorithm based on the normalized symmetry of the bearing is adopted to greatly increase the number of samples. Two different convolutional neural networks, supervised and unsupervised, are tested separately for bearing defect detection. The first experiment adopts supervised networks, with ResNet neural networks selected as the supervised networks. This experiment shows that the AUC of the model is 0.8567, which is too low for practical use; moreover, the positive and negative samples must be labelled manually. To improve the AUC of the model and the flexibility of sample labelling, a new unsupervised neural network based on autoencoder networks is proposed. Gradients of the unlabelled data are used as labels, and autoencoder networks are created with U-net to predict the output. In the second experiment, the positive samples of the supervised experiment are used as the training set. This experiment with the unsupervised neural networks shows that the AUC of the model is 0.9721, higher than in the first experiment, but the positive samples must still be selected. To overcome this shortcoming, the dataset of the third experiment is the same as in the supervised experiment, with all positive and negative samples mixed together, which means there is no need to label the samples. This experiment shows that the AUC of the model is 0.9623. Although this is slightly lower than in the second experiment, the AUC is high enough for actual use. The experimental results demonstrate the feasibility and superiority of the proposed unsupervised networks.
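The core idea above, an unsupervised network trained only on normal samples whose reconstruction error then serves as a defect score, can be illustrated with a deliberately simplified sketch: a linear autoencoder (via truncated SVD) on synthetic vectors, not the paper's U-net on bearing images. All data and dimensions here are hypothetical.

```python
import numpy as np

# Synthetic stand-in for "normal" samples: points near a 3-dimensional
# subspace of R^10 plus small noise. "Defective" samples lie off-subspace.
rng = np.random.default_rng(1)
basis = rng.normal(size=(3, 10))
normal = rng.normal(size=(200, 3)) @ basis + 0.05 * rng.normal(size=(200, 10))
defect = 2.0 * rng.normal(size=(20, 10))

# "Train" a linear autoencoder on normal samples only: the top-3 principal
# directions act as the encoder/decoder weights (a 3-unit bottleneck).
mu = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mu, full_matrices=False)
W = Vt[:3]

def reconstruction_error(X):
    Z = (X - mu) @ W.T          # encode into the bottleneck
    R = Z @ W + mu              # decode back to input space
    return np.square(X - R).sum(axis=1)

# Normal samples reconstruct well; defects do not, so the reconstruction
# error is a usable anomaly score (no labels needed at training time).
normal_scores = reconstruction_error(normal)
defect_scores = reconstruction_error(defect)
```

A nonlinear autoencoder replaces the SVD with learned encoder/decoder networks, but the scoring step is the same.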
9

Raja, Muhammad Asif Zahoor. "Unsupervised neural networks for solving Troesch's problem". Chinese Physics B 23, no. 1 (January 2014): 018903. http://dx.doi.org/10.1088/1674-1056/23/1/018903.

10

Parisi, Daniel R., María C. Mariani, and Miguel A. Laborde. "Solving differential equations with unsupervised neural networks". Chemical Engineering and Processing: Process Intensification 42, no. 8–9 (August 2003): 715–21. http://dx.doi.org/10.1016/s0255-2701(02)00207-6.

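Entries 9 and 10 both solve differential equations with unsupervised networks, i.e. networks trained to minimise an equation residual rather than an error against labelled data. A minimal sketch of that idea, using a toy equation and a random-feature network of my choosing rather than the authors' formulations:

```python
import numpy as np

# Toy problem: dy/dx = -y with y(0) = 1 on [0, 1]; exact solution is e^{-x}.
# Trial solution y(x) = 1 + x * N(x) satisfies the initial condition by
# construction, where N is a one-hidden-layer network with fixed random
# hidden weights and trainable output weights w.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 100)[:, None]   # unsupervised collocation points
v = rng.normal(size=(1, 50))              # random hidden weights (fixed)
b = rng.normal(size=(1, 50))              # random hidden biases (fixed)

t = np.tanh(x @ v + b)                    # hidden activations
dt = (1.0 - t ** 2) * v                   # their derivatives w.r.t. x
# The ODE residual dy/dx + y = sum_j w_j [t_j + x*dt_j + x*t_j] + 1 is
# linear in w, so driving it to zero over the grid is a least-squares solve.
A = t + x * dt + x * t
w = np.linalg.lstsq(A, -np.ones(len(x)), rcond=None)[0]

y = 1.0 + x[:, 0] * (t @ w)               # network solution on the grid
max_err = np.abs(y - np.exp(-x[:, 0])).max()
```

With only the output layer trainable the "training" collapses to one linear solve; the general method in these papers instead optimises all weights by gradient descent on the same residual loss.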

Doctoral dissertations on the topic "Unsupervised neural networks"

1

Nyamapfene, Abel. "Unsupervised multimodal neural networks". Thesis, University of Surrey, 2006. http://epubs.surrey.ac.uk/844064/.

Abstract:
We extend the in-situ Hebbian-linked SOMs network by Miikkulainen to come up with two unsupervised neural networks that learn the mapping between the individual modes of a multimodal dataset. The first network, the single-pass Hebbian-linked SOMs network, extends the in-situ Hebbian-linked SOMs network by enabling the Hebbian link weights to be computed through one-shot learning. The second network, a modified counterpropagation network, extends the unsupervised learning of crossmodal mappings by making it possible for only one self-organising map to implement the crossmodal mapping. The two proposed networks each have a smaller computation time and achieve lower crossmodal mean squared errors than the in-situ Hebbian-linked SOMs network when assessed on two bimodal datasets, an audio-acoustic speech utterance dataset and a phonological-semantics child utterance dataset. Of the three network architectures, the modified counterpropagation network achieves the highest percentage of correct classifications, comparable to that of the LVQ-2 algorithm by Kohonen and the neural network for category learning by de Sa and Ballard, in classification tasks using the audio-acoustic speech utterance dataset. To facilitate multimodal processing of temporal data, we propose a Temporal Hypermap neural network architecture that learns and recalls multiple temporal patterns in an unsupervised manner. The Temporal Hypermap introduces flexibility in the recall of temporal patterns: a stored temporal pattern can be retrieved by prompting the network with the temporal pattern's identity vector, whilst the incorporation of short-term memory allows the recall of a temporal pattern starting from the pattern item specified by contextual information up to the last item in the pattern sequence. Finally, we extend the connectionist modelling of child language acquisition in two important respects. First, we introduce the concept of multimodal representation of speech utterances at the one-word and two-word stage. This allows us to model child language at the one-word utterance stage with a single modified counterpropagation network, an improvement on previous models in which multiple networks are required to simulate the different aspects of speech at the one-word utterance stage. Secondly, we present, for the first time, a connectionist model of the transition of child language from the one-word utterance stage to the two-word utterance stage. We achieve this using a gated multi-net comprising a modified counterpropagation network and a Temporal Hypermap.
2

Macdonald, Donald. "Unsupervised neural networks for visualisation of data". Thesis, University of the West of Scotland, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.395687.

3

Berry, Ian Michael. "Data classification using unsupervised artificial neural networks". Thesis, University of Sussex, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.390079.

4

Harpur, George Francis. "Low entropy coding with unsupervised neural networks". Thesis, University of Cambridge, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.627227.

5

Walcott, Terry Hugh. "Market prediction for SMEs using unsupervised neural networks". Thesis, University of East London, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.532991.

Abstract:
The objective of this study was to create a market prediction model for small and medium enterprises (SMEs). To achieve this, an extensive literature review was carried out covering SMEs, marketing and prediction; neural networks as a competitive tool for SME marketing; and clustering. A Delphi study was used to collate expert opinions in order to determine the factors likely to hinder SMEs wanting to remain business proficient. An analysis of the Delphi responses led to the creation of a market prediction questionnaire, which was used to create variables for analysis with four unsupervised algorithms: joining tree, k-means, learning vector quantisation and the snap-drift algorithm. Questionnaire data were collected from 102 SMEs, leading to the determination of 23 variables that could best represent the data under examination. Further analysis of each of the 23 variables led to the choice of respondents for case study analysis: a higher education college (HEC) and a private hire company (PHC). In case study one, the analysis found that HECs can compete with universities if they tailor their products and services to selected academic markets rather than entering all academic sectors. The findings suggest that if a HEC monitors the growth of its students and establishes the likely point at which to create new courses, it will retain students rather than lose them to universities. Comparisons between the case HEC and rival HECs demonstrated that a knowledge gap currently exists between these institutions, and that by using post-modern marketing coupled with neural networks a competitive advantage can be realised. In case study two, a private hire company was investigated, allowing the interpretation of this firm's current markets by making existing operating areas more transparent. Knowledge barriers were discovered between telephonists and drivers, and between the owner/manager and drivers. Historical data were used to distinguish the performance of drivers within the firm. By differentiating job times and driver performance, the case organisation was better equipped to determine the times at which it is busiest, and therefore the number of telephonists needed per shift and the likely busy periods in which the firm will operate. Analysis of all participating SMEs revealed that: (1) these firms are most likely to fail in the first two years of operation; (2) successful SMEs are owned or managed by persons with prior management or general business expertise; (3) success is normally attributed to experience gained by working in or managing a threatened firm in the past; (4) successful SMEs understand the importance of valuing the ethnicity held in their respective firms; and (5) these firms are less likely to understand how technology can aid and sustain market growth. It appears that market prediction in SMEs can be affected by employee performance and managerial ability to undertake predefined tasks. The findings suggest that there are SMEs that can benefit from market prediction. More importantly, they indicate the need to understand the SME in order to determine the types of intelligent systems that can be used to initiate marketing and provide market prediction. Several theoretical and practical implications are discussed; SME owner/managers, researchers in academia, government and public SME organisations can learn from the results. Suggestions for future research are also presented.
6

Vetcha, Sarat Babu. "Fault diagnosis in pumps by unsupervised neural networks". Thesis, University of Sussex, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.300604.

7

Bishop, Griffin R. "Unsupervised Semantic Segmentation through Cross-Instance Representation Similarity". Digital WPI, 2020. https://digitalcommons.wpi.edu/etd-theses/1371.

Abstract:
Semantic segmentation methods using deep neural networks typically require huge volumes of annotated data to train properly. Due to the expense of collecting these pixel-level dataset annotations, the problem of semantic segmentation without ground-truth labels has been recently proposed. Many current approaches to unsupervised semantic segmentation frame the problem as a pixel clustering task, and in particular focus heavily on color differences between image regions. In this paper, we explore a weakness to this approach: By focusing on color, these approaches do not adequately capture relationships between similar objects across images. We present a new approach to the problem, and propose a novel architecture that captures the characteristic similarities of objects between images directly. We design a synthetic dataset to illustrate this flaw in an existing model. Experiments on this synthetic dataset show that our method can succeed where the pixel color clustering approach fails. Further, we show that plain autoencoder models can implicitly capture these cross-instance object relationships. This suggests that some generative model architectures may be viable candidates for unsupervised semantic segmentation even with no additional loss terms.
8

Plumbley, Mark David. "An information-theoretic approach to unsupervised connectionist models". Thesis, University of Cambridge, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.387051.

9

Galtier, Mathieu. "A mathematical approach to unsupervised learning in recurrent neural networks". Paris, ENMP, 2011. https://pastel.hal.science/pastel-00667368.

Abstract:
In this thesis, we propose to give a mathematical sense to the claim: the neocortex builds itself a model of its environment. We study the neocortex as a network of spiking neurons undergoing slow STDP learning. By considering that the number of neurons is close to infinity, we propose a new mean-field method to find the "smoother" equation describing the firing rate of populations of these neurons. Then, we study the dynamics of this averaged system with learning. By assuming that the modification of the synapses' strength is very slow compared to the activity of the network, it is possible to use tools from temporal averaging theory. These show that the connectivity of the network always converges towards a single equilibrium point, which can be computed explicitly. This connectivity gathers the knowledge of the network about the world. Finally, we analyse the equilibrium connectivity and compare it to the inputs. By seeing the inputs as the solution of a dynamical system, we are able to show that the connectivity embeds the entire information about this dynamical system. Indeed, we show that the symmetric part of the connectivity leads to finding the manifold over which the inputs' dynamical system is defined, and that the anti-symmetric part of the connectivity corresponds to the vector field of the inputs' dynamical system. In this context, the network acts as a predictor of future events in its environment.
10

Haddad, Josef, i Carl Piehl. "Unsupervised anomaly detection in time series with recurrent neural networks". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-259655.

Abstract:
Artificial neural networks (ANN) have been successfully applied to a wide range of problems. However, most ANN-based models do not attempt to model the brain in detail, though some still do. An example of a biologically constrained ANN is Hierarchical Temporal Memory (HTM). This study applies HTM and Long Short-Term Memory (LSTM) to anomaly detection problems in time series in order to compare their performance on this task. The anomalies are restricted to point anomalies and the time series are univariate. Pre-existing implementations that utilise these networks for unsupervised anomaly detection in time series are used in this study. We primarily use our own synthetic data sets in order to discover the networks' robustness to noise and how they compare to each other with regard to different characteristics of the time series. Our results show that both networks can handle noisy time series, and the difference in noise robustness is not significant for the time series used in the study. LSTM outperforms HTM in detecting point anomalies on our synthetic time series with a sine-curve trend, but which of the two networks performs best overall remains inconclusive.
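The prediction-error scheme that both detectors in this thesis rely on, predict the next value and flag large surprises, can be sketched with a plain autoregressive predictor standing in for LSTM or HTM (a hypothetical illustration on synthetic data, not the thesis implementations):

```python
import numpy as np

# Univariate synthetic series: a noisy sine wave with one injected point anomaly.
rng = np.random.default_rng(2)
t = np.arange(500)
series = np.sin(2 * np.pi * t / 50) + 0.05 * rng.normal(size=t.size)
series[300] += 3.0                        # the point anomaly to detect

# Predict each value from its p previous values (a linear AR model fit by
# least squares), then score every point by its absolute prediction error.
p = 5
X = np.column_stack([series[i:len(series) - p + i] for i in range(p)])
y = series[p:]
coef = np.linalg.lstsq(X, y, rcond=None)[0]
err = np.abs(X @ coef - y)

# A robust threshold (median + scaled MAD) keeps the anomaly itself from
# inflating the cutoff; indices are shifted by p back into series time.
mad = np.median(np.abs(err - np.median(err)))
anomalies = np.where(err > np.median(err) + 10 * mad)[0] + p
```

LSTM-based detectors replace the linear predictor with a recurrent network, and HTM scores surprise differently, but both reduce to thresholding a prediction-error signal in this way.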

Books on the topic "Unsupervised neural networks"

1

Baruque, Bruno. Fusion methods for unsupervised learning ensembles. Berlin: Springer, 2010.

2

Supervised and unsupervised pattern recognition: Feature extraction and computational intelligence. Boca Raton, Fla: CRC Press, 2000.

3

Whitehead, P. A. Design considerations for a hardware accelerator for Kohonen unsupervised learning in artificial neural networks. Manchester: UMIST, 1997.

4

Szu, Harold H., and Jack Agee. Independent component analyses, wavelets, unsupervised nano-biomimetic sensors, and neural networks VI: 17–19 March 2008, Orlando, Florida, USA. Bellingham, Wash.: SPIE, 2008.

5

Sejnowski, Terrence J., Tomaso A. Poggio, and Geoffrey Hinton. Unsupervised Learning: Foundations of Neural Computation. MIT Press, 2016.

6

Sejnowski, Terrence J., and Geoffrey Hinton. Unsupervised Learning: Foundations of Neural Computation. MIT Press, 1999.

7

Hinton, Geoffrey E., and Terrence J. Sejnowski, eds. Unsupervised learning: Foundations of neural computation. Cambridge, Mass.: MIT Press, 1999.

8

Baruque, Bruno. Fusion Methods for Unsupervised Learning Ensembles. Springer, 2014.

9

Becker, Helen Suzanna. An information-theoretic unsupervised learning algorithm for neural networks. 1993.

10

Hinton, Geoffrey, and Terrence J. Sejnowski, eds. Unsupervised Learning: Foundations of Neural Computation (Computational Neuroscience). The MIT Press, 1999.


Book chapters on the topic "Unsupervised neural networks"

1

Müller, Berndt, and Joachim Reinhardt. "Unsupervised Learning". In Neural Networks, 132–44. Berlin, Heidelberg: Springer Berlin Heidelberg, 1990. http://dx.doi.org/10.1007/978-3-642-97239-3_14.

2

Müller, Berndt, Joachim Reinhardt, and Michael T. Strickland. "Unsupervised Learning". In Neural Networks, 162–73. Berlin, Heidelberg: Springer Berlin Heidelberg, 1995. http://dx.doi.org/10.1007/978-3-642-57760-4_15.

3

Rojas, Raúl. "Unsupervised Learning and Clustering Algorithms". In Neural Networks, 99–121. Berlin, Heidelberg: Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/978-3-642-61068-4_5.

4

Castillo, Oscar, and Patricia Melin. "Unsupervised Learning Neural Networks". In Soft Computing and Fractal Theory for Intelligent Manufacturing, 75–92. Heidelberg: Physica-Verlag HD, 2003. http://dx.doi.org/10.1007/978-3-7908-1766-9_5.

5

Melin, Patricia, and Oscar Castillo. "Unsupervised Learning Neural Networks". In Hybrid Intelligent Systems for Pattern Recognition Using Soft Computing, 85–107. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/978-3-540-32378-5_5.

6

Behnke, Sven. "Unsupervised Learning". In Hierarchical Neural Networks for Image Interpretation, 95–110. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-45169-3_5.

7

Trentin, Edmondo, and Marco Bongini. "Probabilistically Grounded Unsupervised Training of Neural Networks". In Unsupervised Learning Algorithms, 533–58. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-24211-8_18.

8

Donald, James, and Lex A. Akers. "An Unsupervised Neural Processor". In Silicon Implementation of Pulse Coded Neural Networks, 263–90. Boston, MA: Springer US, 1994. http://dx.doi.org/10.1007/978-1-4615-2680-3_12.

9

Cook, Matthew, Florian Jug, Christoph Krautz, and Angelika Steger. "Unsupervised Learning of Relations". In Artificial Neural Networks – ICANN 2010, 164–73. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15819-3_21.

10

Brabazon, Anthony, Michael O’Neill, and Seán McGarraghy. "Neural Networks for Unsupervised Learning". In Natural Computing Algorithms, 261–80. Berlin, Heidelberg: Springer Berlin Heidelberg, 2015. http://dx.doi.org/10.1007/978-3-662-43631-8_14.


Conference papers on the topic "Unsupervised neural networks"

1

Kosko. "Unsupervised learning in noise". In International Joint Conference on Neural Networks. IEEE, 1989. http://dx.doi.org/10.1109/ijcnn.1989.118553.

2

Fang, L., A. Jennings, W. X. Wen, K. Q. Q. Li, and T. Li. "Unsupervised learning for neural trees". In 1991 IEEE International Joint Conference on Neural Networks. IEEE, 1991. http://dx.doi.org/10.1109/ijcnn.1991.170278.

3

Dajani, A. L., M. Kamel, and M. I. Elmasry. "Gradient methods in unsupervised neural networks". In 1991 IEEE International Joint Conference on Neural Networks. IEEE, 1991. http://dx.doi.org/10.1109/ijcnn.1991.170685.

4

Xiaolong Wang and Chandra Kambhamettu. "Age estimation via unsupervised neural networks". In 2015 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG). IEEE, 2015. http://dx.doi.org/10.1109/fg.2015.7163119.

5

Miller, Mitchell, Megan Washburn, and Foaad Khosmood. "Evolving unsupervised neural networks for Slither.io". In FDG '19: The Fourteenth International Conference on the Foundations of Digital Games. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3337722.3341837.

6

Freisleben, Bernd, and Claudia Hagen. "Unsupervised Hebbian learning in neural networks". In The first international conference on computing anticipatory systems. AIP, 1998. http://dx.doi.org/10.1063/1.56326.

7

De Meulemeester, Hannes, and Bart De Moor. "Unsupervised Embeddings for Categorical Variables". In 2020 International Joint Conference on Neural Networks (IJCNN). IEEE, 2020. http://dx.doi.org/10.1109/ijcnn48605.2020.9207703.

8

Sun, Jianyong, and Aimin Zhou. "Unsupervised robust Bayesian feature selection". In 2014 International Joint Conference on Neural Networks (IJCNN). IEEE, 2014. http://dx.doi.org/10.1109/ijcnn.2014.6889514.

9

Szu, Harold. "Theories of Neural Networks Leading to Unsupervised Learning". In International Joint Conference on Neural Networks. IEEE, 2007. http://dx.doi.org/10.1109/ijcnn.2007.4371458.

10

Cerisara, Christophe, Paul Caillon, and Guillaume Le Berre. "Unsupervised Post-Tuning of Deep Neural Networks". In 2021 International Joint Conference on Neural Networks (IJCNN). IEEE, 2021. http://dx.doi.org/10.1109/ijcnn52387.2021.9534198.


Reports on the topic "Unsupervised neural networks"

1

Chavez, Wesley. An Exploration of Linear Classifiers for Unsupervised Spiking Neural Networks with Event-Driven Data. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.6323.
