A ready bibliography on the topic "Heterogeneous neural networks"
Create accurate references in APA, MLA, Chicago, Harvard, and many other styles.
Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Heterogeneous neural networks".
An "Add to bibliography" button appears next to every work in the bibliography. Use it, and we will automatically generate a bibliographic reference for the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of a scholarly publication as a .pdf file and read its abstract online, whenever these details are available in the metadata.
Journal articles on the topic "Heterogeneous neural networks"
Zeng, Wei, Ge Fan, Shan Sun, Biao Geng, Weiyi Wang, Jiacheng Li, and Weibo Liu. "Collaborative filtering via heterogeneous neural networks". Applied Soft Computing 109 (September 2021): 107516. http://dx.doi.org/10.1016/j.asoc.2021.107516.
Drakopoulos, John A., and Ahmad Abdulkader. "Training neural networks with heterogeneous data". Neural Networks 18, no. 5-6 (July 2005): 595–601. http://dx.doi.org/10.1016/j.neunet.2005.06.011.
Turner, Andrew James, and Julian Francis Miller. "NeuroEvolution: Evolving Heterogeneous Artificial Neural Networks". Evolutionary Intelligence 7, no. 3 (November 2014): 135–54. http://dx.doi.org/10.1007/s12065-014-0115-5.
Zhang, Chen, Zhouhua Tang, Bin Yu, Yu Xie, and Ke Pan. "Deep heterogeneous network embedding based on Siamese Neural Networks". Neurocomputing 388 (May 2020): 1–11. http://dx.doi.org/10.1016/j.neucom.2020.01.012.
Sun, Yizhou, Jiawei Han, Xifeng Yan, Philip S. Yu, and Tianyi Wu. "Heterogeneous information networks". Proceedings of the VLDB Endowment 15, no. 12 (August 2022): 3807–11. http://dx.doi.org/10.14778/3554821.3554901.
Iddianozie, Chidubem, and Gavin McArdle. "Towards Robust Representations of Spatial Networks Using Graph Neural Networks". Applied Sciences 11, no. 15 (July 27, 2021): 6918. http://dx.doi.org/10.3390/app11156918.
Gracious, Tony, Shubham Gupta, Arun Kanthali, Rui M. Castro, and Ambedkar Dukkipati. "Neural Latent Space Model for Dynamic Networks and Temporal Knowledge Graphs". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 5 (May 18, 2021): 4054–62. http://dx.doi.org/10.1609/aaai.v35i5.16526.
Wu, Nan, and Chaofan Wang. "Ensemble Graph Attention Networks". Transactions on Machine Learning and Artificial Intelligence 10, no. 3 (June 12, 2022): 29–41. http://dx.doi.org/10.14738/tmlai.103.12399.
Son, Ha Min, Moon Hyun Kim, and Tai-Myoung Chung. "Comparisons Where It Matters: Using Layer-Wise Regularization to Improve Federated Learning on Heterogeneous Data". Applied Sciences 12, no. 19 (October 3, 2022): 9943. http://dx.doi.org/10.3390/app12199943.
Hosny, Khalid M., Marwa M. Khashaba, Walid I. Khedr, and Fathy A. Amer. "An Efficient Neural Network-Based Prediction Scheme for Heterogeneous Networks". International Journal of Sociotechnology and Knowledge Development 12, no. 2 (April 2020): 63–76. http://dx.doi.org/10.4018/ijskd.2020040104.
Doctoral dissertations on the topic "Heterogeneous neural networks"
Belanche, Muñoz Lluís A. (Lluís Antoni). "Heterogeneous neural networks: theory and applications". Doctoral thesis, Universitat Politècnica de Catalunya, 2000. http://hdl.handle.net/10803/6660.
Pełny tekst źródłaLa similitud proporciona una marc conceptual i serveix de cobertura unificadora de molts models neuronals de la literatura i d'exploració de noves instàncies de models de neurona.
La visió basada en similitud porta amb naturalitat a integrar informació heterogènia, com ara quantitats contínues i discretes (nominals i ordinals), i difuses ó imprecises. Els valors perduts es tracten de manera explícita.
Una neurona d'aquesta classe s'anomena neurona heterogènia i qualsevol arquitectura neuronal que en faci ús serà una Xarxa Neuronal Heterogènia.
En aquest treball ens concentrem en xarxes neuronals endavant, com focus inicial d'estudi. Els algorismes d'aprenentatge són basats en algorisms evolutius, especialment extesos per treballar amb informació heterogènia.
En aquesta tesi es descriu com una certa classe de neurones heterogènies porten a xarxes neuronals que mostren un rendiment molt satisfactori, comparable o superior al de xarxes neuronals tradicionals (com el perceptró multicapa ó la xarxa de base radial), molt especialment en presència d'informació heterogènia, usual en les bases de dades actuals.
This work presents a class of functions serving as generalized neuron models to be used in artificial neural networks. They are cast into the common framework of computing a similarity function, a flexible definition of a neuron as a pattern recognizer. The similarity endows the model with a clear conceptual view and serves as a unification cover for many of the existing neural models, including those classically used for the MultiLayer Perceptron (MLP) and most of those used in Radial Basis Function Networks (RBF). These families of models are conceptually unified and their relation is clarified.
The possibilities of deriving new instances are explored and several neuron models --representative of their families-- are proposed.
The similarity view naturally leads to further extensions of the models to handle heterogeneous information, that is to say, information coming from sources radically different in character, including continuous and discrete (ordinal) numerical quantities, nominal (categorical) quantities, and fuzzy quantities. Missing data are also explicitly considered. A neuron of this class is called a heterogeneous neuron, and any neural structure making use of them is a Heterogeneous Neural Network (HNN), regardless of the specific architecture or learning algorithm. Among them, in this work we concentrate on feed-forward networks as the initial focus of study. The learning procedures may include a great variety of techniques, basically divided into derivative-based methods (such as the conjugate gradient) and evolutionary ones (such as variants of genetic algorithms).
In this Thesis we also explore a number of directions towards the construction of better neuron models --within an integrant envelope-- more adapted to the problems they are meant to solve.
It is described how a certain generic class of heterogeneous models leads to satisfactory performance, comparable to, and often better than, that of classical neural models, especially in the presence of heterogeneous information and imprecise or incomplete data, in a wide range of domains, most of them corresponding to real-world problems.
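The abstract above defines a neuron as a similarity-based pattern recognizer over mixed attribute types, with missing data handled explicitly. A minimal sketch of that idea follows; the Gower-style partial similarities, the schema format, and the sample data are illustrative assumptions, not the thesis's exact definitions.

```python
# A minimal sketch of a similarity-based heterogeneous neuron: it scores
# how close a mixed-type input is to a stored prototype, averaging
# per-attribute similarities and skipping missing values (None).
# The partial-similarity rules and the data below are assumptions.

def partial_similarity(kind, x, p, spec):
    """Similarity in [0, 1] for a single attribute."""
    if kind == "continuous":   # 1 minus range-normalized distance
        lo, hi = spec
        return 1.0 - abs(x - p) / (hi - lo)
    if kind == "ordinal":      # distance over the ordered levels
        levels = spec
        return 1.0 - abs(levels.index(x) - levels.index(p)) / (len(levels) - 1)
    if kind == "nominal":      # simple overlap: equal or not
        return 1.0 if x == p else 0.0
    raise ValueError(f"unknown attribute kind: {kind}")

def heterogeneous_neuron(x, prototype, schema):
    """Average the partial similarities over attributes observed in both."""
    sims = []
    for i, (kind, spec) in enumerate(schema):
        if x[i] is None or prototype[i] is None:  # explicit missing-data rule
            continue
        sims.append(partial_similarity(kind, x[i], prototype[i], spec))
    return sum(sims) / len(sims) if sims else 0.0

schema = [
    ("continuous", (0.0, 100.0)),            # e.g. a bounded sensor reading
    ("ordinal", ["low", "medium", "high"]),  # ordered categories
    ("nominal", None),                       # unordered categories
]
prototype = (40.0, "medium", "red")
print(heterogeneous_neuron((50.0, "high", "red"), prototype, schema))
print(heterogeneous_neuron((None, "medium", "blue"), prototype, schema))
```

In a full HNN, a layer of such neurons would feed a conventional output layer; as the abstract notes, training can rely on evolutionary search, since a similarity of this kind need not be differentiable.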
Cabana, Tanguy. "Large deviations for the dynamics of heterogeneous neural networks". Thesis, Paris 6, 2016. http://www.theses.fr/2016PA066551/document.
This thesis addresses the rigorous derivation of mean-field results for the continuous-time dynamics of heterogeneous large neural networks. In our models, we consider firing-rate neurons subject to additive noise. The network is fully connected, with highly random connectivity weights. Their variance scales as the inverse of the network size, and thus conserves a non-trivial role in the thermodynamic limit. Moreover, another heterogeneity is considered at the level of each neuron. It is interpreted as a spatial location. For biological relevance, one model considered includes delays, with the mean and variance of the connections depending on the distance between cells. A second model considers interactions depending on the states of both neurons at play. This last case notably applies to Kuramoto's model of coupled oscillators. When the weights are independent Gaussian random variables, we show that the empirical measure of the neurons' states satisfies a large deviations principle, with a good rate function achieving its minimum at a unique probability measure, implying averaged convergence of the empirical measure and propagation of chaos. In certain cases, we also obtain quenched results. The limit is characterized through a complex non-Markovian implicit equation in which the network interaction term is replaced by a non-local Gaussian process whose statistics depend on the solution over the whole neural field. We further demonstrate the universality of this limit, in the sense that neuronal networks with non-Gaussian interconnections but sub-Gaussian tails converge towards it. Moreover, we present a few numerical applications and discuss possible perspectives.
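The network family described in this abstract (fully connected firing-rate units with additive noise and i.i.d. Gaussian weights whose variance scales as 1/N) can be illustrated with a small simulation. This is a sketch under assumptions not taken from the thesis: a tanh transfer function, a leak term, an Euler-Maruyama scheme, and all parameter values are illustrative choices.

```python
# Illustrative simulation of a fully connected random firing-rate network
# with additive noise; all modeling choices below are assumptions.
import math
import random

random.seed(0)
N, STEPS, dt = 100, 100, 0.05
g, sigma = 1.5, 0.1        # coupling strength and noise amplitude

# J_ij ~ N(0, g^2 / N): shrinking the weight variance with N keeps the
# interaction term of order one in the large-network (thermodynamic) limit.
J = [[random.gauss(0.0, g / math.sqrt(N)) for _ in range(N)] for _ in range(N)]
x = [random.gauss(0.0, 1.0) for _ in range(N)]   # initial states

for _ in range(STEPS):
    rate = [math.tanh(xi) for xi in x]                    # firing rates
    for i in range(N):
        drive = sum(J[i][j] * rate[j] for j in range(N))  # network input
        x[i] += dt * (-x[i] + drive) + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)

# The large-deviations result concerns the law of the empirical measure of
# the states as N grows; here we only report its first two moments.
mean = sum(x) / N
var = sum((xi - mean) ** 2 for xi in x) / N
print(f"empirical mean {mean:.3f}, variance {var:.3f}")
```

With the weight variance scaled as g²/N, the summed drive stays of order one as N grows, which is exactly why the connectivity keeps a non-trivial role in the limit.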
Zhao, Qiwei. "Federated Learning with Heterogeneous Challenge". Thesis, The University of Sydney, 2022. https://hdl.handle.net/2123/27399.
Schliebs, Stefan. "Heterogeneous probabilistic models for optimisation and modelling of evolving spiking neural networks". AUT University, 2010. http://hdl.handle.net/10292/963.
Antoniou, Christos Andrea. "Improving the acoustic modelling of speech using modular/ensemble combinations of heterogeneous neural networks". Thesis, University of Essex, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.340582.
Wilson, Daniel B. "Combining genetic algorithms and artificial neural networks to select heterogeneous dispatching rules for a job shop system". Ohio: Ohio University, 1996. http://www.ohiolink.edu/etd/view.cgi?ohiou1177701025.
Hobro, Mark. "Semantic Integration across Heterogeneous Databases: Finding Data Correspondences using Agglomerative Hierarchical Clustering and Artificial Neural Networks". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-226657.
Data integration is an important part of the database field when it comes to database migrations and the merging of data. Research in the area has increased as machine learning has become an attractive approach over the past 20 years. Due to the complexity of the research area, no optimal solutions have been found; instead, several different techniques have been developed that together can improve database migrations. This thesis investigates how well a machine-learning-based solution performs on the data integration problem in database migrations. Two algorithms have been implemented. One is based on information retrieval theory and mainly serves as a performance baseline for the machine-learning-based algorithm. That algorithm consists of a first step in which the data is grouped using hierarchical clustering. An artificial neural network is then trained to find patterns in these groupings, in order to predict whether data instances in two databases correspond to each other. The results show that agglomerative hierarchical clustering performs well in classifying the data used. The results of the matching algorithm show that a large share of the matching tables can be found, but improvements are needed to achieve both high recall of matches and high precision for the matches that are found. The conclusion is that a learning-based approach, in this case agglomerative hierarchical clustering followed by training an artificial neural network, works well as a basis for partially automating a data integration problem like the one presented in this thesis. To obtain better results, the solution needs to be extended with more situation-specific algorithms and rules.
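The first stage of the two-step pipeline summarized above (cluster the data, then learn correspondences) can be sketched in miniature. The column names, the numeric profiles, and the distance threshold below are invented for illustration, and only the single-linkage agglomerative step is shown; the downstream neural-network classifier is omitted.

```python
# Toy sketch: group columns from two databases by single-linkage
# agglomerative clustering of simple numeric profiles, so that
# cross-database columns landing in one cluster become candidate
# correspondences for a downstream classifier. All data is invented.

def dist(a, b):
    """Euclidean distance between the profiles of two (name, profile) items."""
    return sum((x - y) ** 2 for x, y in zip(a[1], b[1])) ** 0.5

def single_linkage(items, threshold):
    """Repeatedly merge the two closest clusters until none are closer than
    the threshold; cluster closeness is the minimum pairwise distance."""
    clusters = [[item] for item in items]
    while True:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(dist(a, b) for a in clusters[i] for b in clusters[j])
                if d < threshold and (best is None or d < best[0]):
                    best = (d, i, j)
        if best is None:
            return clusters
        _, i, j = best
        clusters[i] += clusters[j]   # merge, then drop the absorbed cluster
        del clusters[j]

# Profile = (mean value length, ratio of digit characters) per column.
columns = [
    ("db1.users.email", (22.0, 0.05)),
    ("db2.customers.mail", (21.5, 0.06)),
    ("db1.users.zip", (5.0, 1.0)),
    ("db2.customers.postcode", (5.2, 0.98)),
]
for cluster in single_linkage(columns, threshold=2.0):
    print(sorted(name for name, _ in cluster))
```

On this toy input the email-like columns and the postcode-like columns each form one cluster, giving two cross-database candidate pairs to hand to a trained matcher.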
Tekleyohannes, Anteneh Tesfaye. "Unified and heterogeneous modeling of water vapour sorption in Douglas-fir wood with artificial neural networks". Thesis, University of British Columbia, 2010. http://hdl.handle.net/2429/23032.
Pełny tekst źródłaToledo, Testa Juan Ignacio. "Information extraction from heterogeneous handwritten documents". Doctoral thesis, Universitat Autònoma de Barcelona, 2019. http://hdl.handle.net/10803/667388.
The goal of this thesis is information extraction from totally or partially handwritten documents. Basically, we are dealing with two different application scenarios. The first scenario is modern, highly structured documents like forms. In this kind of document, the semantic information is encoded in different fields with a pre-defined location in the document; therefore, information extraction becomes equivalent to transcription. The second application scenario is loosely structured, totally handwritten documents, where, besides transcribing them, we need to assign a semantic label, from a set of known values, to the handwritten words. In both scenarios, transcription is an important part of the information extraction. For that reason, in this thesis we present two methods based on neural networks to transcribe handwritten text. In order to tackle the challenge of loosely structured documents, we have produced a benchmark, consisting of a dataset, a defined set of tasks, and a metric, which was presented to the community as an international competition. We also propose different models based on convolutional and recurrent neural networks that are able to transcribe and assign different semantic labels to each handwritten word, that is, able to perform information extraction.
Books on the topic "Heterogeneous neural networks"
Russo, Mark A. Metric for the Application of Heterogeneous Datasets to Improve Neural Networks in Cybersecurity Defense: A Quantitative Experimental Research Study. Independently Published, 2021.
Book chapters on the topic "Heterogeneous neural networks"
Park, Dong-Chul, Duc-Hoai Nguyen, Song-Jae Lee, and Yunsik Lee. "Heterogeneous Centroid Neural Networks". In Advances in Neural Networks - ISNN 2006, 689–94. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11759966_101.
Wang, Ruijia. "Heterogeneous Graph Neural Networks". In Advances in Graph Neural Networks, 61–85. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-16174-2_4.
Shi, Chuan. "Heterogeneous Graph Neural Networks". In Graph Neural Networks: Foundations, Frontiers, and Applications, 351–69. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-6054-2_16.
Grąbczewski, Krzysztof, and Włodzisław Duch. "Heterogeneous Forests of Decision Trees". In Artificial Neural Networks — ICANN 2002, 504–9. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-46084-5_82.
Satizábal, Héctor F., Andres Pérez-Uribe, and Marco Tomassini. "Avoiding Prototype Proliferation in Incremental Vector Quantization of Large Heterogeneous Datasets". In Constructive Neural Networks, 243–60. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04512-7_13.
Jin, Sichen, Yijia Zhang, and Mingyu Lu. "Heterogeneous Adaptive Denoising Networks for Recommendation". In Neural Computing for Advanced Applications, 30–43. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-6142-7_3.
Wang, Zhengxin, Jingbo Fan, He Jiang, and Haibo He. "Pinning Synchronization in Heterogeneous Networks of Harmonic Oscillators". In Neural Information Processing, 836–45. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70090-8_85.
Kaizoji, Taisei. "Speculative Dynamics in a Heterogeneous-Agent Model". In Artificial Neural Networks — ICANN 2001, 775–81. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44668-0_108.
Xu, Hongyan, Wenjun Wang, Hongtao Liu, Mengxuan Zhang, Qiang Tian, and Pengfei Jiao. "Key Nodes Cluster Augmented Embedding for Heterogeneous Information Networks". In Neural Information Processing, 499–511. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-63833-7_42.
Fu, Guoji, Bo Yuan, Qiqi Duan, and Xin Yao. "Representation Learning for Heterogeneous Information Networks via Embedding Events". In Neural Information Processing, 327–39. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-36708-4_27.
Pełny tekst źródłaStreszczenia konferencji na temat "Heterogeneous neural networks"
Hu, Ruiqi, Celina Ping Yu, Sai-Fu Fung, Shirui Pan, Haishuai Wang, and Guodong Long. "Universal network representation for heterogeneous information networks". In 2017 International Joint Conference on Neural Networks (IJCNN). IEEE, 2017. http://dx.doi.org/10.1109/ijcnn.2017.7965880.
Alhubail, Ali, Xupeng He, Marwa AlSinan, Hyung Kwak, and Hussein Hoteit. "Extended Physics-Informed Neural Networks for Solving Fluid Flow Problems in Highly Heterogeneous Media". In International Petroleum Technology Conference. IPTC, 2022. http://dx.doi.org/10.2523/iptc-22163-ms.
Valdes, Julio J. "Heterogeneous extreme learning machines". In 2016 International Joint Conference on Neural Networks (IJCNN). IEEE, 2016. http://dx.doi.org/10.1109/ijcnn.2016.7727400.
Cao, Meng, Xiying Ma, Kai Zhu, Ming Xu, and Chongjun Wang. "Heterogeneous Information Network Embedding with Convolutional Graph Attention Networks". In 2020 International Joint Conference on Neural Networks (IJCNN). IEEE, 2020. http://dx.doi.org/10.1109/ijcnn48605.2020.9206610.
Yang, Carl, Jieyu Zhang, and Jiawei Han. "Neural Embedding Propagation on Heterogeneous Networks". In 2019 IEEE International Conference on Data Mining (ICDM). IEEE, 2019. http://dx.doi.org/10.1109/icdm.2019.00080.
Bai, Tong, and Gary Overett. "Heterogeneous Image Stylization Using Neural Networks". In 2017 International Conference on Digital Image Computing: Techniques and Applications (DICTA). IEEE, 2017. http://dx.doi.org/10.1109/dicta.2017.8227439.
Zhang, Shanshan, Tiancheng Huang, and Donglin Wang. "Sequence Contained Heterogeneous Graph Neural Network". In 2021 International Joint Conference on Neural Networks (IJCNN). IEEE, 2021. http://dx.doi.org/10.1109/ijcnn52387.2021.9533391.
Sublime, Jeremie, Nistor Grozavu, Younes Bennani, and Antoine Cornuejols. "Collaborative clustering with heterogeneous algorithms". In 2015 International Joint Conference on Neural Networks (IJCNN). IEEE, 2015. http://dx.doi.org/10.1109/ijcnn.2015.7280351.
Ma, Shuai, Jian-Wei Liu, Xin Zuo, and Wei-Min Li. "Heterogeneous Graph Gated Attention Network". In 2021 International Joint Conference on Neural Networks (IJCNN). IEEE, 2021. http://dx.doi.org/10.1109/ijcnn52387.2021.9533711.
Yang, Zheng, Bing Han, Weiming Chen, and Xinbo Gao. "Learn to Encode Heterogeneous Data: A Heterogeneous Aware Network for Multi-Future Trajectory Prediction". In 2023 International Joint Conference on Neural Networks (IJCNN). IEEE, 2023. http://dx.doi.org/10.1109/ijcnn54540.2023.10191508.
Pełny tekst źródłaRaporty organizacyjne na temat "Heterogeneous neural networks"
Kase, Hanno, Leonardo Melosi, and Matthias Rottner. Estimating Nonlinear Heterogeneous Agents Models with Neural Networks. Federal Reserve Bank of Chicago, 2022. http://dx.doi.org/10.21033/wp-2022-26.
Comola, Margherita, Rokhaya Dieye, and Bernard Fortin. Heterogeneous peer effects and gender-based interventions for teenage obesity. CIRANO, September 2022. http://dx.doi.org/10.54932/tqag9043.