Academic literature on the topic 'Heterogeneous neural networks'

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Heterogeneous neural networks.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Heterogeneous neural networks"

1

Zeng, Wei, Ge Fan, Shan Sun, Biao Geng, Weiyi Wang, Jiacheng Li, and Weibo Liu. "Collaborative filtering via heterogeneous neural networks." Applied Soft Computing 109 (September 2021): 107516. http://dx.doi.org/10.1016/j.asoc.2021.107516.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Drakopoulos, John A., and Ahmad Abdulkader. "Training neural networks with heterogeneous data." Neural Networks 18, no. 5-6 (July 2005): 595–601. http://dx.doi.org/10.1016/j.neunet.2005.06.011.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Turner, Andrew James, and Julian Francis Miller. "NeuroEvolution: Evolving Heterogeneous Artificial Neural Networks." Evolutionary Intelligence 7, no. 3 (November 2014): 135–54. http://dx.doi.org/10.1007/s12065-014-0115-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Zhang, Chen, Zhouhua Tang, Bin Yu, Yu Xie, and Ke Pan. "Deep heterogeneous network embedding based on Siamese Neural Networks." Neurocomputing 388 (May 2020): 1–11. http://dx.doi.org/10.1016/j.neucom.2020.01.012.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Sun, Yizhou, Jiawei Han, Xifeng Yan, Philip S. Yu, and Tianyi Wu. "Heterogeneous information networks." Proceedings of the VLDB Endowment 15, no. 12 (August 2022): 3807–11. http://dx.doi.org/10.14778/3554821.3554901.

Full text
Abstract:
In 2011, we proposed PathSim to systematically define and compute similarity between nodes in a heterogeneous information network (HIN), where nodes and links are of different types. In the PathSim paper, we introduced for the first time the HIN with a general network schema and proposed the concept of meta-paths to systematically define new relation types between nodes. In this paper, we summarize the impact of the PathSim paper in both academia and industry. We start from the algorithms based on meta-path feature engineering, then move on to recent developments in heterogeneous network representation learning, including both shallow network embedding and heterogeneous graph neural networks. In the end, we make the connection between knowledge graphs and HINs, discuss the implications of meta-paths in symbolic reasoning scenarios, and point out several future directions.
APA, Harvard, Vancouver, ISO, and other styles
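The PathSim measure summarized in the abstract admits a compact sketch: given a commuting matrix of meta-path instance counts, the similarity of two nodes is their shared path count normalized by their self-path counts. The toy matrix and meta-path below are invented for illustration, not taken from the paper.

```python
import numpy as np

# Toy HIN meta-path counts (illustrative): M[i, j] = number of path instances
# from author i to venue j along a hypothetical Author-Paper-Venue meta-path.
M = np.array([[5, 1, 0],
              [2, 4, 0],
              [0, 0, 3]])

# Commuting matrix for the symmetric round-trip meta-path A-P-V-P-A.
C = M @ M.T

def pathsim(C, x, y):
    """PathSim similarity: shared meta-path instances between x and y,
    normalized so that every node has similarity 1.0 with itself."""
    return 2.0 * C[x, y] / (C[x, x] + C[y, y])

sim_01 = pathsim(C, 0, 1)   # authors 0 and 1 publish in overlapping venues
```

Because self-similarity is always 1, PathSim favors peers with comparable visibility rather than simply rewarding highly connected hubs.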
6

Iddianozie, Chidubem, and Gavin McArdle. "Towards Robust Representations of Spatial Networks Using Graph Neural Networks." Applied Sciences 11, no. 15 (July 27, 2021): 6918. http://dx.doi.org/10.3390/app11156918.

Full text
Abstract:
The effectiveness of a machine learning model is impacted by the data representation used. Consequently, it is crucial to investigate robust representations for efficient machine learning methods. In this paper, we explore the link between data representations and model performance for inference tasks on spatial networks. We argue that representations which explicitly encode the relations between spatial entities would improve model performance. Specifically, we consider homogeneous and heterogeneous representations of spatial networks. We recognise that the expressive nature of the heterogeneous representation may benefit spatial networks and could improve model performance on certain tasks. Thus, we carry out an empirical study using Graph Neural Network models for two inference tasks on spatial networks. Our results demonstrate that heterogeneous representations improve model performance for downstream inference tasks on spatial networks.
APA, Harvard, Vancouver, ISO, and other styles
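The contrast the abstract draws between homogeneous and heterogeneous representations can be made concrete with a minimal sketch: the former keeps a single untyped edge list, while the latter types both nodes and edges so a relation-aware GNN can treat them differently. All type names below are hypothetical, not from the paper.

```python
# Homogeneous view of a small spatial network: one untyped edge list.
homogeneous = {"edges": [(0, 1), (1, 2), (2, 3)]}

# Heterogeneous view of the same network: nodes and edges carry types
# (type names here are invented for illustration).
heterogeneous = {
    "node_types": {0: "junction", 1: "junction", 2: "poi", 3: "poi"},
    "edges": {
        "road":      [(0, 1)],
        "proximity": [(1, 2), (2, 3)],
    },
}

n_edge_types = len(heterogeneous["edges"])
```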
7

Gracious, Tony, Shubham Gupta, Arun Kanthali, Rui M. Castro, and Ambedkar Dukkipati. "Neural Latent Space Model for Dynamic Networks and Temporal Knowledge Graphs." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 5 (May 18, 2021): 4054–62. http://dx.doi.org/10.1609/aaai.v35i5.16526.

Full text
Abstract:
Although static networks have been extensively studied in the machine learning, data mining, and AI communities for many decades, the study of dynamic networks has recently taken center stage due to the prominence of social media and its effects on the dynamics of social networks. In this paper, we propose a statistical model for dynamically evolving networks, together with a variational inference approach. Our model, Neural Latent Space Model with Variational Inference, encodes edge dependencies across different time snapshots. It represents nodes via latent vectors and uses interaction matrices to model the presence of edges. These matrices can be used to incorporate multiple relations in heterogeneous networks by having a separate matrix for each of the relations. To capture the temporal dynamics, both the node vectors and the interaction matrices are allowed to evolve with time. Existing network analysis methods use representation learning techniques for modelling networks; these techniques differ for homogeneous and heterogeneous networks because heterogeneous networks can have multiple types of edges and nodes, as opposed to a homogeneous network. Unlike these, we propose a unified model for homogeneous and heterogeneous networks in a variational inference framework. Moreover, the learned node latent vectors and interaction matrices may be interpretable and therefore provide insights into the mechanisms behind network evolution. We experimented with single-step and multi-step link forecasting on real-world networks of homogeneous, bipartite, and heterogeneous nature, and demonstrated that our model significantly outperforms existing models.
APA, Harvard, Vancouver, ISO, and other styles
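The construction the abstract describes, latent node vectors combined through one interaction matrix per relation, can be sketched as follows. The dimensions and the sigmoid link function are assumptions for illustration, not the paper's exact parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_nodes, n_relations, d = 4, 2, 3            # illustrative sizes
Z = rng.normal(size=(n_nodes, d))            # one latent vector per node
Lam = rng.normal(size=(n_relations, d, d))   # one interaction matrix per relation

def edge_prob(i, j, r):
    """Probability of an edge of relation r between nodes i and j; keeping a
    separate interaction matrix per relation accommodates heterogeneous
    networks with multiple edge types."""
    return float(sigmoid(Z[i] @ Lam[r] @ Z[j]))

p = edge_prob(0, 1, 0)
```

In the paper, both the node vectors and the interaction matrices additionally evolve over time snapshots; the static slice above shows only the edge model.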
8

Wu, Nan, and Chaofan Wang. "Ensemble Graph Attention Networks." Transactions on Machine Learning and Artificial Intelligence 10, no. 3 (June 12, 2022): 29–41. http://dx.doi.org/10.14738/tmlai.103.12399.

Full text
Abstract:
Graph neural networks have demonstrated their success in many applications on graph-structured data. Many efforts have been devoted to elaborating new network architectures and learning algorithms over the past decade, but the exploration of applying ensemble learning techniques to enhance existing graph algorithms has been overlooked. In this work, we propose a simple, generic bagging-based ensemble learning strategy which is applicable to any backbone graph model. We then propose two ensemble graph neural network models, Ensemble-GAT and Ensemble-HetGAT, by applying the ensemble strategy to the graph attention network (GAT) and a heterogeneous graph attention network (HetGAT). We demonstrate the effectiveness of the proposed ensemble strategy on GAT and HetGAT through comprehensive experiments with four real-world homogeneous graph datasets and three real-world heterogeneous graph datasets on node classification tasks. The proposed Ensemble-GAT and Ensemble-HetGAT outperform state-of-the-art graph neural network and heterogeneous graph neural network models on most of the benchmark datasets. The proposed ensemble strategy also alleviates the over-smoothing problem in GAT and HetGAT.
APA, Harvard, Vancouver, ISO, and other styles
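The bagging strategy described above is backbone-agnostic: train several copies of the base model (typically on bootstrap samples) and average their class probabilities. The sketch below substitutes a random linear map for the GAT backbone, since the point is the ensemble wrapper, not the graph model itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(logits):
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

class TinyModel:
    """Stand-in for a backbone graph model (e.g. a GAT); here just a random
    linear classifier, for illustration only."""
    def __init__(self, n_feat, n_class, rng):
        self.W = rng.normal(size=(n_feat, n_class))
    def predict_proba(self, X):
        return softmax(X @ self.W)

def bagging_predict(models, X):
    """Bagging ensemble: average the class probabilities of all members."""
    return np.mean([m.predict_proba(X) for m in models], axis=0)

X = rng.normal(size=(5, 8))                    # 5 nodes, 8 features
models = [TinyModel(8, 3, rng) for _ in range(4)]
proba = bagging_predict(models, X)
```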
9

Son, Ha Min, Moon Hyun Kim, and Tai-Myoung Chung. "Comparisons Where It Matters: Using Layer-Wise Regularization to Improve Federated Learning on Heterogeneous Data." Applied Sciences 12, no. 19 (October 3, 2022): 9943. http://dx.doi.org/10.3390/app12199943.

Full text
Abstract:
Federated Learning is a widely adopted method for training neural networks over distributed data. One main limitation is the performance degradation that occurs when data are heterogeneously distributed. While many studies have attempted to address this problem, a more recent understanding of neural networks provides insight into an alternative approach. In this study, we show that only certain important layers in a neural network require regularization for effective training. We additionally verify that Centered Kernel Alignment (CKA) most accurately calculates similarities between layers of neural networks trained on different data. By applying CKA-based regularization to important layers during training, we significantly improved performance in heterogeneous settings. We present FedCKA, a simple framework that outperforms previous state-of-the-art methods on various deep learning tasks while also improving efficiency and scalability.
APA, Harvard, Vancouver, ISO, and other styles
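CKA, the layer-similarity measure the abstract relies on, has a simple linear form (the feature-space formulation due to Kornblith et al.); whether FedCKA uses the linear or a kernel variant is not stated here, so treat this as an illustrative sketch.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two activation matrices
    (samples x units). Returns 1.0 for identical representations."""
    X = X - X.mean(axis=0)                      # center each unit
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, 'fro') ** 2  # cross-covariance energy
    norm_x = np.linalg.norm(X.T @ X, 'fro')
    norm_y = np.linalg.norm(Y.T @ Y, 'fro')
    return hsic / (norm_x * norm_y)

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 6))
cka_self = linear_cka(A, A)                       # identical layers -> 1.0
cka_other = linear_cka(A, rng.normal(size=(20, 6)))
```

A CKA-based regularizer would then penalize low similarity between a client layer and its reference, but only on the layers identified as important.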
10

Hosny, Khalid M., Marwa M. Khashaba, Walid I. Khedr, and Fathy A. Amer. "An Efficient Neural Network-Based Prediction Scheme for Heterogeneous Networks." International Journal of Sociotechnology and Knowledge Development 12, no. 2 (April 2020): 63–76. http://dx.doi.org/10.4018/ijskd.2020040104.

Full text
Abstract:
In mobile wireless networks, providing full mobility without affecting the quality of service (QoS) is an essential challenge. It can be addressed using handover prediction: the process of determining the next station to which a mobile user will transfer its data connection. This article presents a new prediction scheme that scans the signal quality between the mobile user and all neighboring stations in the surrounding area. The scheme's efficiency is further enhanced by minimizing the number of redundant (unnecessary) handovers. Both WLAN and Long Term Evolution (LTE) networks are used in the proposed scheme, which is evaluated in various scenarios with different numbers and locations of mobile users, WLAN access points, and LTE base stations, all placed randomly. The proposed prediction scheme achieves a success rate of up to 99% in several scenarios consistent with LTE-WLAN architecture. To capture the network characteristics, improve efficiency, and increase the handover success percentage, especially at high mobile-station speeds, a neural network model is used: once trained, it predicts the next target station at heterogeneous-network handover points. The proposed neural-network-based scheme significantly improves accuracy compared to existing schemes that use only the received signal strength (RSS) to predict the next station, achieving an improvement in the success ratio of up to 5% over RSS alone.
APA, Harvard, Vancouver, ISO, and other styles
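The RSS-only baseline the article compares against reduces to picking the neighboring station with the strongest scanned signal; the station names and dBm values below are made up for illustration.

```python
def next_station(rss_scan):
    """RSS-only handover baseline: choose the neighboring station with the
    strongest received signal strength (values in dBm; higher is stronger)."""
    return max(rss_scan, key=rss_scan.get)

# Hypothetical scan of neighboring LTE base stations and WLAN access points.
scan = {"LTE-eNB-1": -95.0, "WLAN-AP-3": -60.0, "LTE-eNB-2": -80.0}
target = next_station(scan)
```

The article's neural predictor feeds more than RSS into the decision, which is where its reported improvement of up to 5% in success ratio over this baseline comes from.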

Dissertations / Theses on the topic "Heterogeneous neural networks"

1

Belanche Muñoz, Lluís A. (Lluís Antoni). "Heterogeneous neural networks: theory and applications." Doctoral thesis, Universitat Politècnica de Catalunya, 2000. http://hdl.handle.net/10803/6660.

Full text
Abstract:
This work presents a class of functions serving as generalized neuron models to be used in artificial neural networks. They are cast into the common framework of computing a similarity function, a flexible definition of a neuron as a pattern recognizer. The similarity endows the model with a clear conceptual view and serves as a unification cover for many of the existing neural models, including those classically used for the MultiLayer Perceptron (MLP) and most of those used in Radial Basis Function Networks (RBF). These families of models are conceptually unified and their relation is clarified.
The possibilities of deriving new instances are explored and several neuron models --representative of their families-- are proposed.

The similarity view naturally leads to further extensions of the models to handle heterogeneous information, that is to say, information coming from sources radically different in character, including continuous and discrete (ordinal) numerical quantities, nominal (categorical) quantities, and fuzzy quantities. Missing data are also explicitly considered. A neuron of this class is called a heterogeneous neuron, and any neural structure making use of them is a Heterogeneous Neural Network (HNN), regardless of the specific architecture or learning algorithm. Among them, in this work we concentrate on feed-forward networks as the initial focus of study. The learning procedures may include a great variety of techniques, basically divided into derivative-based methods (such as conjugate gradient) and evolutionary ones (such as variants of genetic algorithms).

In this Thesis we also explore a number of directions towards the construction of better neuron models --within an integrant envelope-- more adapted to the problems they are meant to solve.
It is described how a certain generic class of heterogeneous models leads to satisfactory performance, comparable to, and often better than, that of classical neural models, especially in the presence of heterogeneous information and imprecise or incomplete data, in a wide range of domains, most of them corresponding to real-world problems.
APA, Harvard, Vancouver, ISO, and other styles
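The heterogeneous neuron described above computes a similarity over mixed attribute types. A Gower-style sketch (an assumption for illustration, not the thesis's exact model) shows how continuous attributes, nominal attributes, and missing values can be combined into one activation:

```python
def similarity_neuron(x, w, kinds, spans):
    """A heterogeneous neuron as a pattern recognizer: aggregate per-attribute
    similarities between input x and weight vector w. Continuous attributes
    are compared relative to their span, nominal ones by equality, and missing
    values (None) are handled explicitly by exclusion."""
    sims = []
    for xi, wi, kind, span in zip(x, w, kinds, spans):
        if xi is None or wi is None:
            continue                      # missing data: skip the attribute
        if kind == "continuous":
            sims.append(1.0 - abs(xi - wi) / span)
        else:                             # nominal: exact match or not
            sims.append(1.0 if xi == wi else 0.0)
    return sum(sims) / len(sims) if sims else 0.0

s = similarity_neuron([0.5, "red", None], [0.7, "red", 4.0],
                      ["continuous", "nominal", "continuous"],
                      [1.0, None, 10.0])
```

Stacking such neurons gives a feed-forward network whose weights live in the same heterogeneous space as the data, which is why the thesis turns to evolutionary algorithms for training.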
3

Cabana, Tanguy. "Large deviations for the dynamics of heterogeneous neural networks." Thesis, Paris 6, 2016. http://www.theses.fr/2016PA066551/document.

Full text
Abstract:
This thesis addresses the rigorous derivation of mean-field results for the continuous-time dynamics of heterogeneous large neural networks. In our models, we consider firing-rate neurons subject to additive noise. The network is fully connected, with highly random connectivity weights. Their variance scales as the inverse of the network size, and thus conserves a non-trivial role in the thermodynamic limit. Moreover, another heterogeneity is considered at the level of each neuron. It is interpreted as a spatial location. For biological relevance, one model considered includes delays, with the mean and variance of connections depending on the distance between cells. A second model considers interactions depending on the states of both neurons at play. This last case notably applies to Kuramoto's model of coupled oscillators. When the weights are independent Gaussian random variables, we show that the empirical measure of the neurons' states satisfies a large deviations principle, with a good rate function achieving its minimum at a unique probability measure, implying averaged convergence of the empirical measure and propagation of chaos. In certain cases, we also obtain quenched results. The limit is characterized through a complex non-Markovian implicit equation in which the network interaction term is replaced by a non-local Gaussian process whose statistics depend on the solution over the whole neural field. We further demonstrate the universality of this limit, in the sense that neuronal networks with non-Gaussian interconnections but sub-Gaussian tails converge towards it. Moreover, we present a few numerical applications and discuss possible perspectives.
APA, Harvard, Vancouver, ISO, and other styles
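The scaling described in the abstract (connection-weight variance decreasing like the inverse of the network size) can be written schematically; the notation below is assumed for illustration and is not the thesis's exact formulation.

```latex
% Firing-rate network with additive Brownian noise; the weights J_{ij} are
% Gaussian with mean and variance scaling as 1/N (schematic, notation assumed):
dx^i_t = \Big(-x^i_t + \sum_{j=1}^{N} J_{ij}\, S\big(x^j_t\big)\Big)\,dt
         + \sigma\, dW^i_t,
\qquad
J_{ij} \sim \mathcal{N}\!\left(\tfrac{\bar J}{N},\, \tfrac{\sigma_J^2}{N}\right).
```

In the mean-field limit, the interaction sum is replaced by a Gaussian process whose statistics depend on the law of the solution itself, which is what makes the limit equation implicit and non-Markovian.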
4

Zhao, Qiwei. "Federated Learning with Heterogeneous Challenge." Thesis, The University of Sydney, 2022. https://hdl.handle.net/2123/27399.

Full text
Abstract:
Federated learning allows the training of a model from the distributed data of many clients under the orchestration of a central server. With increasing concern about privacy, federated learning draws great attention from both academia and industry. However, the heterogeneity challenges introduced by the natural characteristics of federated learning settings significantly degrade the performance of federated learning methods. Specifically, these comprise heterogeneous data challenges and heterogeneous scenario challenges. Data heterogeneity challenges refer to the significant differences between the datasets of numerous users; in federated learning, the data is stored separately on many distant clients, which causes these differences. Heterogeneous scenario challenges refer to the differences between the devices participating in federated learning; moreover, the suitable model varies across scenarios. Many existing federated learning methods nevertheless use a single global model for all devices and scenarios, which is not optimal under these two challenges. We propose a novel federated learning framework called Local Union in Federated Learning (LU-FL) to address them. LU-FL incorporates a hierarchical knowledge distillation mechanism that effectively transfers knowledge among different models, so any number of models can be used on each client. Allocating specially designed models to different clients mitigates the adverse effects caused by these challenges while further improving the accuracy of the output models. Extensive experimental results over several popular datasets demonstrate the effectiveness of the proposed method: it effectively reduces the harmful effects of the heterogeneity challenges, improving the accuracy of the final output models and the adaptability of clients to various scenarios, and thereby lets federated learning methods be applied in more diverse settings.
Keywords: federated learning, neural networks, knowledge distillation, computer vision
APA, Harvard, Vancouver, ISO, and other styles
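Knowledge distillation, the mechanism LU-FL builds its hierarchy on, minimizes the divergence between temperature-softened teacher and student outputs. The loss below is the generic formulation, not LU-FL's specific hierarchical variant.

```python
import math

def softmax(logits, T):
    """Temperature-softened softmax; T > 1 flattens the distribution."""
    m = max(l / T for l in logits)
    exps = [math.exp(l / T - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened outputs, scaled by T^2
    as is conventional in knowledge distillation."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return T * T * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

loss_same = distillation_loss([2.0, 0.5], [2.0, 0.5])   # matched outputs
loss_diff = distillation_loss([0.1, 1.9], [2.0, 0.5])   # mismatched outputs
```

Transferring knowledge this way needs only model outputs, which is what lets LU-FL move knowledge between clients running differently sized models.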
5

Schliebs, Stefan. "Heterogeneous probabilistic models for optimisation and modelling of evolving spiking neural networks." AUT University, 2010. http://hdl.handle.net/10292/963.

Full text
Abstract:
This thesis proposes a novel feature selection and classification method employing evolving spiking neural networks (eSNN) and evolutionary algorithms (EA). The method is named the Quantum-inspired Spiking Neural Network (QiSNN) framework. QiSNN represents an integrated wrapper approach: an evolutionary process evolves appropriate feature subsets for a given classification task and simultaneously optimises the neural and learning-related parameters of the network. Unlike other methods, the connection weights of this network are determined by a fast one-pass learning algorithm, which dramatically reduces the training time. At its core, QiSNN employs the Thorpe neural model, which allows the efficient simulation of even large networks. In QiSNN, the presence or absence of features is represented by a string of concatenated bits, while the parameters of the neural network are continuous. For the exploration of these two entirely different search spaces, a novel Estimation of Distribution Algorithm (EDA) is developed. The method maintains a population of probabilistic models specialised for the optimisation of either binary, continuous or heterogeneous search spaces while utilising a small and intuitive set of parameters. The EDA extends the Quantum-inspired Evolutionary Algorithm (QEA) proposed by Han and Kim (2002) and is named the Heterogeneous Hierarchical Model EDA (hHM-EDA). The algorithm is compared to numerous contemporary optimisation methods and studied in terms of convergence speed, solution quality and robustness in noisy search spaces. The thesis investigates the functioning and the characteristics of QiSNN using both synthetic feature selection benchmarks and a real-world case study on ecological modelling. By evolving suitable feature subsets, QiSNN significantly enhances the classification accuracy of eSNN.
Compared to numerous other feature selection techniques, like the wrapper-based Multilayer Perceptron (MLP) and the Naive Bayesian Classifier (NBC), QiSNN demonstrates a competitive classification and feature selection performance while requiring comparatively low computational costs.
APA, Harvard, Vancouver, ISO, and other styles
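An Estimation of Distribution Algorithm over binary strings, the kind of search hHM-EDA performs for the feature-presence bits, can be sketched in PBIL style: sample a population from a probability vector, then shift the vector toward the best individuals. The OneMax fitness stands in for classification accuracy, and none of the constants come from the thesis.

```python
import random

random.seed(0)

def onemax(bits):
    """Toy fitness standing in for the accuracy of a feature subset."""
    return sum(bits)

def eda_binary(n_bits=12, pop=30, top=10, iters=40, lr=0.3):
    """Minimal PBIL-style EDA (a sketch, not hHM-EDA itself)."""
    p = [0.5] * n_bits                     # one Bernoulli parameter per bit
    for _ in range(iters):
        population = [[1 if random.random() < pi else 0 for pi in p]
                      for _ in range(pop)]
        population.sort(key=onemax, reverse=True)
        for i in range(n_bits):
            # move each probability toward the mean of the top individuals
            mean_best = sum(ind[i] for ind in population[:top]) / top
            p[i] = (1 - lr) * p[i] + lr * mean_best
    return p

probs = eda_binary()   # probabilities drift toward the all-ones optimum
```

hHM-EDA generalizes this idea to maintain probabilistic models over binary, continuous, and mixed search spaces simultaneously.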
6

Antoniou, Christos Andrea. "Improving the acoustic modelling of speech using modular/ensemble combinations of heterogeneous neural networks." Thesis, University of Essex, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.340582.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Wilson, Daniel B. "Combining genetic algorithms and artificial neural networks to select heterogeneous dispatching rules for a job shop system." Ohio : Ohio University, 1996. http://www.ohiolink.edu/etd/view.cgi?ohiou1177701025.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Hobro, Mark. "Semantic Integration across Heterogeneous Databases : Finding Data Correspondences using Agglomerative Hierarchical Clustering and Artificial Neural Networks." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-226657.

Full text
Abstract:
The process of data integration is an important part of the database field when it comes to database migrations and the merging of data. Research in the area has grown with the addition of machine learning approaches in the last 20 years. Due to the complexity of the research field, no go-to solutions have appeared. Instead, a wide variety of ways of enhancing database migrations have emerged. This thesis examines how well a learning-based solution performs for the semantic integration problem in database migrations. Two algorithms are implemented. One is based on information retrieval theory, with the goal of yielding a matching result that can be used as a benchmark for measuring the performance of the machine learning algorithm. The machine learning approach is based on grouping data with agglomerative hierarchical clustering and then training a neural network to recognize patterns in the data. This allows making predictions about potential data correspondences across two databases. The results show that agglomerative hierarchical clustering performs well in the task of grouping the data into classes, which can in turn be used for training a neural network. The matching algorithm gives a high recall of matching tables, but improvements are needed to achieve both high recall and high precision. The conclusion is that the proposed learning-based approach, using agglomerative hierarchical clustering and a neural network, works as a solid base for semi-automating the data integration problem seen in this thesis, but the solution needs to be enhanced with scenario-specific algorithms and rules to reach the desired performance.
APA, Harvard, Vancouver, ISO, and other styles
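The grouping step of the thesis's pipeline, agglomerative hierarchical clustering, repeatedly merges the two closest clusters until the desired number remains. A minimal single-linkage sketch on one-dimensional feature values (the numbers are illustrative) is:

```python
def single_linkage(points, n_clusters):
    """Agglomerative (single-linkage) clustering: start with singleton
    clusters and greedily merge the two closest until n_clusters remain."""
    clusters = [[p] for p in points]

    def dist(a, b):
        # single linkage: distance between the closest pair of members
        return min(abs(x - y) for x in a for y in b)

    while len(clusters) > n_clusters:
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: dist(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] += clusters.pop(j)     # merge the closest pair
    return clusters

groups = single_linkage([1.0, 1.1, 5.0, 5.2, 9.0], 3)
```

In the thesis, the resulting groups serve as training classes for the neural network that predicts correspondences across the two databases.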
9

Tekleyohannes, Anteneh Tesfaye. "Unified and heterogeneous modeling of water vapour sorption in Douglas-fir wood with artificial neural networks." Thesis, University of British Columbia, 2010. http://hdl.handle.net/2429/23032.

Full text
Abstract:
The objective of this study was firstly to investigate and understand sorption properties of earlywood, latewood, annual rings and gross wood. Secondly, to develop a heterogeneous sorption model for earlywood, latewood and annual rings by taking into consideration unified complex interactions of anatomy, chemical composition and thermodynamic parameters. Thirdly, to upscale the annual ring level model to gross wood by applying artificial neural networks (ANNs) modeling tools using dimensionally reduced inputs through dimensional analysis and genetic algorithms. Four novel physical models, namely, dynamical two-level systems (TLS) model of annual rings, sorption kinetics, sorption isotherms and TLS model of physical properties and chemical composition were derived and successfully validated using experimental data of Douglas-fir. The annual ring’s TLS model was capable to generate novel physical quantities, namely, golden ring volume (GRV) and golden ring cube (GRC) to which the sorption properties are very sensitive, according to the validation tests. A new heterogeneity test criterion (HTC) was also derived. Validations of the TLS sorption models revealed new evidence showing a transient nature of sorption hysteresis in which boundary sorption isotherms asymptotically converged to a single isotherm at large time limit. A novel method for the computation of internal surface area of wood was also validated using the TLS model of sorption isotherms. The fibre saturation point prediction of the model was also found to agree well with earlier reports. The TLS model of physical properties and chemical composition was able to reveal the self-organization in Douglas-fir that gives rise to allometric scaling. The TLS modeling revealed existence of self-organizing criticality (SOC) in Douglas-fir and demonstrated mechanisms by which it is generated. 
Ten categories of unified ANN Douglas-fir sorption models that predict equilibrium moisture content, diffusion coefficients and surface emission coefficients were successfully developed and validated. The network models predict the sorption properties of Douglas-fir using thermodynamic variables and parameters generated by the four TLS models from the chemical composition and physical properties of annual rings. The findings of this study contribute to the creation of a decision support system that would allow wood properties and processing characteristics to be predicted from chemical and structural attributes.
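As a rough illustration of the ANN upscaling step the abstract describes, the following is a minimal one-hidden-layer feedforward network mapping dimensionless inputs to an equilibrium moisture content fraction; the architecture, input choice, and weights are illustrative stand-ins, not the thesis's fitted models:

```python
import numpy as np

def emc_ann(inputs, W1, b1, W2, b2):
    """One-hidden-layer feedforward ANN of the kind used to upscale an
    annual-ring sorption model to gross wood. Inputs are assumed to be
    dimensionless groups (e.g. relative vapour pressure, reduced
    temperature); these weights are illustrative, not a fitted model."""
    h = np.tanh(inputs @ W1 + b1)                 # hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid keeps EMC in (0, 1)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4,)), 0.0
emc = emc_ann(np.array([0.65, 0.55]), W1, b1, W2, b2)
```

The bounded sigmoid output mirrors the physical constraint that moisture content, expressed as a fraction, stays between 0 and 1.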
APA, Harvard, Vancouver, ISO, and other styles
10

Toledo, Testa Juan Ignacio. "Information extraction from heterogeneous handwritten documents." Doctoral thesis, Universitat Autònoma de Barcelona, 2019. http://hdl.handle.net/10803/667388.

Full text
Abstract:
The goal of this thesis is information extraction from totally or partially handwritten documents with a certain degree of structure. Basically, we deal with two different application scenarios. The first scenario is modern, highly structured documents such as forms. In these documents, the semantic information is encoded in fields with a pre-defined location in the document; therefore, information extraction becomes equivalent to transcription. The second application scenario is loosely structured, totally handwritten documents where, besides transcribing them, we need to assign a semantic label, from a set of known values, to the handwritten words. In both scenarios, transcription is an important part of information extraction. For that reason, in this thesis we present two methods based on neural networks to transcribe handwritten text. In order to tackle the challenge of loosely structured documents, we have produced a benchmark consisting of a dataset, a defined set of tasks and a metric, which was presented to the community as an international competition. We also propose different models based on convolutional and recurrent neural networks that are able to transcribe and assign different semantic labels to each handwritten word, that is, to perform information extraction.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Heterogeneous neural networks"

1

Metric for the Application of Heterogeneous Datasets to Improve Neural Networks in Cybersecurity Defense: A Quantitative Experimental Research Study. Independently Published, 2021.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Russo, Mark A. Metric for the Application of Heterogeneous Datasets to Improve Neural Networks in Cybersecurity Defense: A Quantitative Experimental Research Study. Independently Published, 2021.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Heterogeneous neural networks"

1

Park, Dong-Chul, Duc-Hoai Nguyen, Song-Jae Lee, and Yunsik Lee. "Heterogeneous Centroid Neural Networks." In Advances in Neural Networks - ISNN 2006, 689–94. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11759966_101.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Wang, Ruijia. "Heterogeneous Graph Neural Networks." In Advances in Graph Neural Networks, 61–85. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-16174-2_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Shi, Chuan. "Heterogeneous Graph Neural Networks." In Graph Neural Networks: Foundations, Frontiers, and Applications, 351–69. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-6054-2_16.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Grąbczewski, Krzysztof, and Włodzisław Duch. "Heterogeneous Forests of Decision Trees." In Artificial Neural Networks — ICANN 2002, 504–9. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-46084-5_82.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Satizábal, Héctor F., Andres Pérez-Uribe, and Marco Tomassini. "Avoiding Prototype Proliferation in Incremental Vector Quantization of Large Heterogeneous Datasets." In Constructive Neural Networks, 243–60. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04512-7_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Jin, Sichen, Yijia Zhang, and Mingyu Lu. "Heterogeneous Adaptive Denoising Networks for Recommendation." In Neural Computing for Advanced Applications, 30–43. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-6142-7_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Wang, Zhengxin, Jingbo Fan, He Jiang, and Haibo He. "Pinning Synchronization in Heterogeneous Networks of Harmonic Oscillators." In Neural Information Processing, 836–45. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70090-8_85.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Kaizoji, Taisei. "Speculative Dynamics in a Heterogeneous-Agent Model." In Artificial Neural Networks — ICANN 2001, 775–81. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44668-0_108.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Xu, Hongyan, Wenjun Wang, Hongtao Liu, Mengxuan Zhang, Qiang Tian, and Pengfei Jiao. "Key Nodes Cluster Augmented Embedding for Heterogeneous Information Networks." In Neural Information Processing, 499–511. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-63833-7_42.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Fu, Guoji, Bo Yuan, Qiqi Duan, and Xin Yao. "Representation Learning for Heterogeneous Information Networks via Embedding Events." In Neural Information Processing, 327–39. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-36708-4_27.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Heterogeneous neural networks"

1

Hu, Ruiqi, Celina Ping Yu, Sai-Fu Fung, Shirui Pan, Haishuai Wang, and Guodong Long. "Universal network representation for heterogeneous information networks." In 2017 International Joint Conference on Neural Networks (IJCNN). IEEE, 2017. http://dx.doi.org/10.1109/ijcnn.2017.7965880.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Alhubail, Ali, Xupeng He, Marwa AlSinan, Hyung Kwak, and Hussein Hoteit. "Extended Physics-Informed Neural Networks for Solving Fluid Flow Problems in Highly Heterogeneous Media." In International Petroleum Technology Conference. IPTC, 2022. http://dx.doi.org/10.2523/iptc-22163-ms.

Full text
Abstract:
Utilization of neural networks to solve physical problems has recently received wide attention. These neural networks are commonly called physics-informed neural networks (PINNs), in which the physics is incorporated through the governing partial differential equations (PDEs). Traditional PINNs suffer from unstable performance when dealing with flow problems in highly heterogeneous domains. This work demonstrates the applicability of the extended PINN (XPINN) method to heterogeneous problems. XPINN builds a full model of the solution of the governing PDEs by training the neural network on the PDEs and their constraints, such as boundary and initial conditions and known solution points. The heterogeneous problem is solved by domain decomposition, which divides the original heterogeneous domain into homogeneous sub-domains, each with its own PINN. The different PINNs are connected through interface conditions, namely pressure and flux continuity, which allow information to be exchanged across the interfaces. Various heterogeneous scenarios are implemented in this study to investigate the robustness of the proposed method. We demonstrate the accuracy of the XPINN model by comparing it with ground truth obtained from high-fidelity simulations. Results show a good match in pressure and velocity, with errors of less than 1%. Different interface conditions were tested, and without pressure and flux continuity the solver does not converge to the solution of interest. A sensitivity analysis explored the effects of the neural network architecture, the weight given to each loss term, and the number of training iterations. Wide, shallow networks performed well because they avoid the vanishing-gradient issue that affects deeper networks.
In addition, balanced weights generally produced better accuracy, and more training iterations improved the accuracy of the results, though at lower rates in later training stages. This paper presents XPINN for solving fluid flow in heterogeneous media. We demonstrate the robustness and accuracy of the proposed XPINN model by comparing it with ground-truth solutions in multiple heterogeneous cases. The model shows good potential and can readily be incorporated into reservoir characterization workflows.
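The pressure and flux continuity conditions the abstract describes can be illustrated with the exact solution of 1-D steady Darcy flow across two homogeneous sub-domains, the configuration XPINN decomposes; the permeabilities and boundary pressures below are illustrative values, not those of the paper:

```python
def two_domain_darcy(k1, k2, p_left=1.0, p_right=0.0, x_int=0.5):
    """Exact pressure and flux for 1-D steady Darcy flow across two
    homogeneous sub-domains with permeabilities k1 and k2.
    In steady state the flux q = -k dp/dx is constant, so the two
    sub-domain solutions must agree on both pressure and flux at the
    interface x_int; these are the continuity terms an XPINN enforces."""
    # Harmonic (series-resistance) combination of the two blocks
    q = (p_left - p_right) / (x_int / k1 + (1.0 - x_int) / k2)
    p_interface = p_left - q * x_int / k1
    return q, p_interface

q, p_i = two_domain_darcy(k1=10.0, k2=1.0)
# The right-hand sub-domain reproduces the same flux from p_i, so both
# continuity conditions hold at x = 0.5.
```

A trained XPINN would penalize any mismatch in these two quantities between neighbouring sub-domain networks; as the abstract notes, dropping either continuity term prevents convergence to the physical solution.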
APA, Harvard, Vancouver, ISO, and other styles
3

Valdes, Julio J. "Heterogeneous extreme learning machines." In 2016 International Joint Conference on Neural Networks (IJCNN). IEEE, 2016. http://dx.doi.org/10.1109/ijcnn.2016.7727400.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Cao, Meng, Xiying Ma, Kai Zhu, Ming Xu, and Chongjun Wang. "Heterogeneous Information Network Embedding with Convolutional Graph Attention Networks." In 2020 International Joint Conference on Neural Networks (IJCNN). IEEE, 2020. http://dx.doi.org/10.1109/ijcnn48605.2020.9206610.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Yang, Carl, Jieyu Zhang, and Jiawei Han. "Neural Embedding Propagation on Heterogeneous Networks." In 2019 IEEE International Conference on Data Mining (ICDM). IEEE, 2019. http://dx.doi.org/10.1109/icdm.2019.00080.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Bai, Tong, and Gary Overett. "Heterogeneous Image Stylization Using Neural Networks." In 2017 International Conference on Digital Image Computing: Techniques and Applications (DICTA). IEEE, 2017. http://dx.doi.org/10.1109/dicta.2017.8227439.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Zhang, Shanshan, Tiancheng Huang, and Donglin Wang. "Sequence Contained Heterogeneous Graph Neural Network." In 2021 International Joint Conference on Neural Networks (IJCNN). IEEE, 2021. http://dx.doi.org/10.1109/ijcnn52387.2021.9533391.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Sublime, Jeremie, Nistor Grozavu, Younes Bennani, and Antoine Cornuejols. "Collaborative clustering with heterogeneous algorithms." In 2015 International Joint Conference on Neural Networks (IJCNN). IEEE, 2015. http://dx.doi.org/10.1109/ijcnn.2015.7280351.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Ma, Shuai, Jian-Wei Liu, Xin Zuo, and Wei-Min Li. "Heterogeneous Graph Gated Attention Network." In 2021 International Joint Conference on Neural Networks (IJCNN). IEEE, 2021. http://dx.doi.org/10.1109/ijcnn52387.2021.9533711.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Yang, Zheng, Bing Han, Weiming Chen, and Xinbo Gao. "Learn to Encode Heterogeneous Data: A Heterogeneous Aware Network for Multi-Future Trajectory Prediction." In 2023 International Joint Conference on Neural Networks (IJCNN). IEEE, 2023. http://dx.doi.org/10.1109/ijcnn54540.2023.10191508.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Heterogeneous neural networks"

1

Kase, Hanno, Leonardo Melosi, and Matthias Rottner. Estimating Nonlinear Heterogeneous Agents Models with Neural Networks. Federal Reserve Bank of Chicago, 2022. http://dx.doi.org/10.21033/wp-2022-26.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Comola, Margherita, Rokhaya Dieye, and Bernard Fortin. Heterogeneous peer effects and gender-based interventions for teenage obesity. CIRANO, September 2022. http://dx.doi.org/10.54932/tqag9043.

Full text
Abstract:
This paper explores the role of gender heterogeneity in the social diffusion of obesity among adolescents and its policy implications. We propose a generalized linear social interaction model which allows for gender-dependent heterogeneity in peer effects through the channel of social synergy. We estimate the model using data on adolescent Body Mass Index and network-based interactions. Our results show that peer effects are gender-dependent, and male students are particularly responsive to the weight of their female friends. Our simulations indicate that female-tailored interventions are likely to be more effective than a gender-neutral approach to fight obesity in schools.
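The generalized linear social interaction model the abstract describes can be sketched as a linear-in-means system with heterogeneous peer-effect coefficients, solved for its equilibrium; the friendship network, coefficients, and covariates below are illustrative, not the authors' estimates:

```python
import numpy as np

def equilibrium_outcomes(A, theta, x, beta):
    """Linear-in-means social interaction model with heterogeneous peer
    effects: y_i = beta * x_i + theta_i * (mean of friends' y).
    theta_i may differ by gender, the channel the paper estimates.
    Solves the equilibrium (I - diag(theta) @ A_norm) y = beta * x."""
    A = np.asarray(A, dtype=float)
    A_norm = A / np.maximum(A.sum(axis=1, keepdims=True), 1.0)  # row-normalize
    n = len(x)
    return np.linalg.solve(np.eye(n) - np.diag(theta) @ A_norm,
                           beta * np.asarray(x, dtype=float))

A = [[0, 1, 1], [1, 0, 0], [1, 0, 0]]   # small friendship network
theta = np.array([0.3, 0.5, 0.5])       # gender-dependent peer responsiveness
y = equilibrium_outcomes(A, theta, x=[22.0, 25.0, 20.0], beta=1.0)
```

Because peers' outcomes feed back into each other, an intervention that lowers one student's BMI propagates through the network, amplified for students with larger theta, which is why the simulations favour targeting the more influential gender.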
APA, Harvard, Vancouver, ISO, and other styles
