
Doctoral dissertations on the topic "Heterogeneous neural networks"

Consult the 39 best doctoral dissertations on the topic "Heterogeneous neural networks".


You can also download the full text of a publication in PDF format and read its abstract online, when these are available in the record's metadata.

Browse doctoral dissertations from a wide range of disciplines and compile an accurate bibliography.

1

Belanche, Muñoz Lluís A. (Lluís Antoni). "Heterogeneous neural networks: theory and applications". Doctoral thesis, Universitat Politècnica de Catalunya, 2000. http://hdl.handle.net/10803/6660.

Full text source
Abstract:
This work presents a class of functions serving as generalized neuron models to be used in artificial neural networks. They are cast into the common framework of computing a similarity function, a flexible definition of a neuron as a pattern recognizer. The similarity endows the model with a clear conceptual view and serves as a unifying cover for many of the existing neural models, including those classically used in the MultiLayer Perceptron (MLP) and most of those used in Radial Basis Function networks (RBF). These families of models are conceptually unified and their relation is clarified. The possibilities of deriving new instances are explored and several neuron models, representative of their families, are proposed.

The similarity view naturally leads to further extensions of the models to handle heterogeneous information, that is to say, information coming from sources radically different in character, including continuous and discrete (ordinal) numerical quantities, nominal (categorical) quantities, and fuzzy quantities. Missing data are also treated explicitly. A neuron of this class is called a heterogeneous neuron, and any neural structure making use of them is a Heterogeneous Neural Network (HNN), regardless of the specific architecture or learning algorithm. Among them, this work concentrates on feed-forward networks as the initial focus of study. The learning procedures may include a great variety of techniques, basically divided into derivative-based methods (such as the conjugate gradient) and evolutionary ones (such as variants of genetic algorithms).

This thesis also explores a number of directions towards the construction of better neuron models, within an integrating envelope, more adapted to the problems they are meant to solve. It is described how a certain generic class of heterogeneous models leads to satisfactory performance, comparable to, and often better than, that of classical neural models, especially in the presence of heterogeneous, imprecise or incomplete data, in a wide range of domains, most of them corresponding to real-world problems.
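For readers unfamiliar with the similarity-based view described above, the sketch below is a toy illustration only (the aggregation scheme, the partial similarities and the squashing function are assumptions, not the thesis' actual definitions): a "heterogeneous neuron" scores how similar a mixed-type input is to a stored prototype, skipping missing values explicitly.

```python
import math

def heterogeneous_neuron(x, prototype, ranges, kinds, smoothing=1.0):
    """Similarity-based neuron for mixed (heterogeneous) inputs.

    x, prototype : lists of feature values (None marks a missing value)
    ranges       : per-feature range used to normalise continuous distances
    kinds        : 'continuous' or 'nominal' per feature
    Returns an activation in (0, 1); higher means "more similar".
    """
    partial, used = 0.0, 0
    for xi, pi, r, kind in zip(x, prototype, ranges, kinds):
        if xi is None or pi is None:           # missing values: skip the feature
            continue
        if kind == "continuous":
            s = 1.0 - abs(xi - pi) / r         # normalised overlap similarity
        else:                                   # nominal: exact-match similarity
            s = 1.0 if xi == pi else 0.0
        partial += s
        used += 1
    if used == 0:
        return 0.0
    sim = partial / used                        # Gower-style aggregation
    return 1.0 / (1.0 + math.exp(-smoothing * (sim - 0.5)))   # squashing

# toy usage: one continuous feature, one nominal feature, one missing value
print(heterogeneous_neuron([0.7, "red", None], [0.5, "red", 1.0],
                           ranges=[1.0, 1.0, 1.0],
                           kinds=["continuous", "nominal", "continuous"]))
```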
3

Cabana, Tanguy. "Large deviations for the dynamics of heterogeneous neural networks". Thesis, Paris 6, 2016. http://www.theses.fr/2016PA066551/document.

Full text source
Abstract:
This thesis addresses the rigorous derivation of mean-field results for the continuous-time dynamics of heterogeneous large neural networks. In our models, we consider firing-rate neurons subject to additive noise. The network is fully connected, with highly random connectivity weights whose variance scales as the inverse of the network size, thus conserving a non-trivial role in the thermodynamic limit. Moreover, another heterogeneity is considered at the level of each neuron and is interpreted as a spatial location. For biological relevance, one model includes delays as well as connection means and variances that depend on the distance between cells; a second model considers interactions depending on the states of both neurons at play. This last case notably applies to Kuramoto's model of coupled oscillators. When the weights are independent Gaussian random variables, we show that the empirical measure of the neurons' states satisfies a large deviations principle, with a good rate function achieving its minimum at a unique probability measure, implying averaged convergence of the empirical measure and propagation of chaos. In certain cases, we also obtain quenched results. The limit is characterized through a complex non-Markovian implicit equation in which the network interaction term is replaced by a non-local Gaussian process whose statistics depend on the solution over the whole neural field. We further demonstrate the universality of this limit, in the sense that neuronal networks with non-Gaussian interconnections but sub-Gaussian tails converge towards it. Finally, we present a few numerical applications and discuss possible perspectives.
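A schematic form of the kind of network dynamics described in the abstract can be written as follows; this is only a sketch consistent with the summary above (the intrinsic dynamics f, the firing-rate function S and the exact parameterization are assumptions, not the thesis' precise model):

```latex
dx^{i}_t \;=\; \Big( f\big(r_i,\,x^{i}_t\big)
        \;+\; \sum_{j=1}^{N} J_{ij}\, S\big(x^{j}_{t-\tau_{ij}}\big) \Big)\,dt
        \;+\; \sigma\, dW^{i}_t ,
\qquad
J_{ij} \;\sim\; \mathcal{N}\!\Big( \tfrac{\bar{J}(r_i,r_j)}{N},\; \tfrac{\sigma_J^{2}(r_i,r_j)}{N} \Big)
\ \text{(independent)} .
```

Here r_i is the heterogeneity (spatial position) attached to neuron i, the delays and the connection means and variances may depend on the positions of the two cells, and the 1/N scaling of the synaptic variance is what keeps the disorder relevant in the thermodynamic limit.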
4

Zhao, Qiwei. "Federated Learning with Heterogeneous Challenge". Thesis, The University of Sydney, 2022. https://hdl.handle.net/2123/27399.

Full text source
Abstract:
Federated learning allows the training of a model from the distributed data of many clients under the orchestration of a central server. With the increasing concern about privacy, federated learning draws great attention from both academia and industry. However, the heterogeneous challenges introduced by the natural characteristics of federated learning settings significantly degrade the performance of federated learning methods. Specifically, these challenges include heterogeneous data challenges and heterogeneous scenario challenges. Data heterogeneity challenges refer to the significant differences between the datasets of numerous users; in federated learning the data is stored separately on many distant clients, which causes these challenges. The heterogeneous scenario challenges refer to the differences between the devices participating in federated learning, and the suitable models vary among the different scenarios. However, many existing federated learning methods use a single global model for all the devices' scenarios, which is not optimal under these two challenges. We propose a novel federated learning framework called local union in federated learning (LU-FL) to address these challenges. LU-FL incorporates a hierarchical knowledge distillation mechanism that effectively transfers knowledge among different models, so LU-FL can enable any number of models to be used on each client. Allocating specially designed models to different clients can mitigate the adverse effects caused by these challenges and further improve the accuracy of the output models. Extensive experimental results over several popular datasets demonstrate the effectiveness of the proposed method: it effectively reduces the harmful effects of heterogeneous challenges, improving the accuracy of the final output models and the adaptability of the clients to various scenarios, so that federated learning methods can be applied in more diverse scenarios. Keywords: federated learning, neural networks, knowledge distillation, computer vision
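LU-FL itself is not specified in the abstract beyond its use of hierarchical knowledge distillation, so the sketch below only illustrates the generic distillation step such schemes build on (temperature, weighting and the toy logits are assumptions): a client model is trained against both hard labels and another model's softened predictions.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Weighted sum of cross-entropy on hard labels and KL divergence
    to the teacher's temperature-softened predictions."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    kd = np.sum(p_teacher * (np.log(p_teacher + 1e-12) - np.log(p_student + 1e-12)), axis=-1)
    hard = -np.log(softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12)
    return np.mean(alpha * hard + (1 - alpha) * (T ** 2) * kd)

# toy example: 2 samples, 3 classes
teacher = np.array([[2.0, 0.5, -1.0], [0.1, 1.5, 0.2]])
student = np.array([[1.0, 0.2, -0.5], [0.0, 1.0, 0.1]])
print(distillation_loss(student, teacher, labels=np.array([0, 1])))
```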
5

Schliebs, Stefan. "Heterogeneous probabilistic models for optimisation and modelling of evolving spiking neural networks". AUT University, 2010. http://hdl.handle.net/10292/963.

Full text source
Abstract:
This thesis proposes a novel feature selection and classification method employing evolving spiking neural networks (eSNN) and evolutionary algorithms (EA). The method is named the Quantum-inspired Spiking Neural Network (QiSNN) framework. QiSNN represents an integrated wrapper approach: an evolutionary process evolves appropriate feature subsets for a given classification task and simultaneously optimises the neural and learning-related parameters of the network. Unlike other methods, the connection weights of this network are determined by a fast one-pass learning algorithm, which dramatically reduces the training time. At its core, QiSNN employs the Thorpe neural model, which allows the efficient simulation of even large networks. In QiSNN, the presence or absence of features is represented by a string of concatenated bits, while the parameters of the neural network are continuous. For the exploration of these two entirely different search spaces, a novel Estimation of Distribution Algorithm (EDA) is developed. The method maintains a population of probabilistic models specialised for the optimisation of either binary, continuous or heterogeneous search spaces while utilising a small and intuitive set of parameters. The EDA extends the Quantum-inspired Evolutionary Algorithm (QEA) proposed by Han and Kim (2002) and is named the Heterogeneous Hierarchical Model EDA (hHM-EDA). The algorithm is compared to numerous contemporary optimisation methods and studied in terms of convergence speed, solution quality and robustness in noisy search spaces. The thesis investigates the functioning and the characteristics of QiSNN using both synthetic feature selection benchmarks and a real-world case study on ecological modelling. By evolving suitable feature subsets, QiSNN significantly enhances the classification accuracy of eSNN. Compared to numerous other feature selection techniques, such as the wrapper-based Multilayer Perceptron (MLP) and the Naive Bayesian Classifier (NBC), QiSNN demonstrates competitive classification and feature selection performance while requiring comparatively low computational costs.
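As a rough illustration of the fast one-pass, rank-order style of learning commonly associated with the Thorpe model mentioned above (the exact rule and parameter names used in QiSNN may differ; the modulation factor mod and the threshold fraction c are assumed hyper-parameters):

```python
import numpy as np

def one_pass_esnn_neuron(spike_order, mod=0.9, c=0.7):
    """Create one output neuron from a single training sample.

    spike_order : array with the firing rank of each input (0 = earliest spike)
    mod         : modulation factor, 0 < mod < 1
    c           : fraction of the maximal PSP used as the firing threshold
    """
    weights = mod ** spike_order              # earlier spikes get larger weights
    psp_max = np.sum(weights * mod ** spike_order)
    threshold = c * psp_max
    return weights, threshold

# toy usage: 5 inputs, ranked by spike time
w, th = one_pass_esnn_neuron(np.array([0, 3, 1, 4, 2]))
print(w, th)
```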
6

Antoniou, Christos Andrea. "Improving the acoustic modelling of speech using modular/ensemble combinations of heterogeneous neural networks". Thesis, University of Essex, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.340582.

Full text source
7

Wilson, Daniel B. "Combining genetic algorithms and artificial neural networks to select heterogeneous dispatching rules for a job shop system". Ohio : Ohio University, 1996. http://www.ohiolink.edu/etd/view.cgi?ohiou1177701025.

Full text source
8

Hobro, Mark. "Semantic Integration across Heterogeneous Databases : Finding Data Correspondences using Agglomerative Hierarchical Clustering and Artificial Neural Networks". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-226657.

Full text source
Abstract:
The process of data integration is an important part of the database field when it comes to database migrations and the merging of data. Research in the area has grown with the addition of machine learning approaches in the last 20 years. Due to the complexity of the research field, no go-to solutions have appeared; instead, a wide variety of ways of enhancing database migrations have emerged. This thesis examines how well a learning-based solution performs for the semantic integration problem in database migrations. Two algorithms are implemented. One is based on information retrieval theory, with the goal of yielding a matching result that can be used as a benchmark for measuring the performance of the machine learning algorithm. The machine learning approach is based on grouping data with agglomerative hierarchical clustering and then training a neural network to recognize patterns in the data, which allows making predictions about potential data correspondences across two databases. The results show that agglomerative hierarchical clustering performs well in the task of grouping the data into classes, which can in turn be used for training a neural network. The matching algorithm gives a high recall of matching tables, but improvements are needed to achieve both high recall and high precision. The conclusion is that the proposed learning-based approach, using agglomerative hierarchical clustering and a neural network, works as a solid base for semi-automating the data integration problem seen in this thesis, but the solution needs to be enhanced with scenario-specific algorithms and rules to reach the desired performance.
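A minimal sketch of the two-stage idea described in the abstract, with invented column features, cluster count and network size (none of these come from the thesis): columns are grouped by agglomerative hierarchical clustering, and a small neural network is then trained on those groups to suggest correspondences for unseen columns.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# toy "column profiles": numeric statistics describing database columns
columns = rng.normal(size=(40, 6))

# step 1: group similar columns with agglomerative hierarchical clustering
clusters = AgglomerativeClustering(n_clusters=5, linkage="average").fit_predict(columns)

# step 2: train a neural network to predict the cluster (class) of a column,
# so columns from another database can be mapped to candidate correspondences
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(columns, clusters)

new_columns = rng.normal(size=(3, 6))         # columns from the source database
print(clf.predict(new_columns))               # candidate target groups
```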
9

Tekleyohannes, Anteneh Tesfaye. "Unified and heterogeneous modeling of water vapour sorption in Douglas-fir wood with artificial neural networks". Thesis, University of British Columbia, 2010. http://hdl.handle.net/2429/23032.

Full text source
Abstract:
The objective of this study was firstly to investigate and understand the sorption properties of earlywood, latewood, annual rings and gross wood; secondly, to develop a heterogeneous sorption model for earlywood, latewood and annual rings by taking into consideration the unified complex interactions of anatomy, chemical composition and thermodynamic parameters; and thirdly, to upscale the annual-ring-level model to gross wood by applying artificial neural network (ANN) modeling tools using dimensionally reduced inputs obtained through dimensional analysis and genetic algorithms. Four novel physical models, namely a dynamical two-level systems (TLS) model of annual rings, sorption kinetics, sorption isotherms and a TLS model of physical properties and chemical composition, were derived and successfully validated using experimental data of Douglas-fir. The annual-ring TLS model was capable of generating novel physical quantities, namely the golden ring volume (GRV) and golden ring cube (GRC), to which the sorption properties are very sensitive according to the validation tests. A new heterogeneity test criterion (HTC) was also derived. Validations of the TLS sorption models revealed new evidence showing a transient nature of sorption hysteresis, in which boundary sorption isotherms asymptotically converge to a single isotherm in the large time limit. A novel method for the computation of the internal surface area of wood was also validated using the TLS model of sorption isotherms. The fibre saturation point prediction of the model was also found to agree well with earlier reports. The TLS model of physical properties and chemical composition was able to reveal the self-organization in Douglas-fir that gives rise to allometric scaling, revealed the existence of self-organizing criticality (SOC) in Douglas-fir and demonstrated mechanisms by which it is generated. Ten categories of unified ANN sorption models for Douglas-fir that predict equilibrium moisture content, diffusion and surface emission coefficients were successfully developed and validated. The network models predict sorption properties of Douglas-fir using thermodynamic variables and parameters generated by the four TLS models from the chemical composition and physical properties of annual rings. The findings of this study contribute to the creation of a decision support system that would allow predicting wood properties and processing characteristics based on chemical and structural attributes.
10

Toledo, Testa Juan Ignacio. "Information extraction from heterogeneous handwritten documents". Doctoral thesis, Universitat Autònoma de Barcelona, 2019. http://hdl.handle.net/10803/667388.

Full text source
Abstract:
The goal of this thesis is information extraction from totally or partially handwritten documents. Basically we are dealing with two different application scenarios. The first scenario is modern, highly structured documents like forms. In this kind of document, the semantic information is encoded in different fields with a pre-defined location in the document, and therefore information extraction becomes equivalent to transcription. The second application scenario is loosely structured, totally handwritten documents where, besides transcribing them, we need to assign a semantic label, from a set of known values, to the handwritten words. In both scenarios, transcription is an important part of the information extraction; for that reason this thesis presents two methods based on neural networks to transcribe handwritten text. In order to tackle the challenge of loosely structured documents, we have produced a benchmark, consisting of a dataset, a defined set of tasks and a metric, that was presented to the community as an international competition. We also propose different models based on convolutional and recurrent neural networks that are able to transcribe and assign different semantic labels to each handwritten word, that is, able to perform information extraction.
11

Calabuig, Soler Daniel. "Common Radio Resource Management Strategies for Quality of Service Support in Heterogeneous Wireless Networks". Doctoral thesis, Universitat Politècnica de València, 2010. http://hdl.handle.net/10251/7348.

Full text source
Abstract:
Today several technologies coexist in the same area, forming a heterogeneous system, and this situation is expected to become even more pronounced with all the new technologies currently being standardized. Until now it has generally been the users who choose the technology they connect to, either by configuring their terminals or by using different terminals. However, this solution cannot exploit all the resources to the full; a new set of strategies is needed. These strategies must manage the radio resources jointly and guarantee users' quality of service. Following this idea, this thesis proposes two new algorithms. The first is a joint dynamic resource allocation (JDRA) algorithm capable of assigning resources to users and distributing users among technologies at the same time. The algorithm is formulated as a multi-objective optimization problem that is solved using Hopfield neural networks (HNNs). HNNs are attractive because they are expected to reach sub-optimal solutions in short times; however, actual computer implementations of HNNs lose this fast response, so this thesis analyses the causes and studies possible improvements. The second algorithm is a joint admission control (JCAC) algorithm that admits and rejects users taking all technologies into account at the same time. The main difference from other proposed algorithms is that the latter make admission decisions in each technology separately, and therefore some mechanism is needed to select the technology users will connect to. In contrast, the technique proposed in this thesis makes decisions over the whole heterogeneous system, so users are not bound to any technology before being admitted.
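A minimal sketch of the kind of continuous Hopfield-network dynamics the JDRA algorithm builds on, with an invented single-objective energy and toy penalty weights (the thesis formulates a richer multi-objective problem): the network state encodes a user-to-technology assignment and the dynamics descend an energy whose good minima are feasible, high-utility assignments.

```python
import numpy as np

def hopfield_assignment(benefit, steps=2000, dt=0.05, penalty=2.0, tau=1.0):
    """Continuous Hopfield network for a toy user-to-RAT assignment.

    benefit[u, t] : utility of connecting user u to technology t.
    The energy rewards high-utility assignments and penalises rows
    that do not sum to exactly one technology per user.
    """
    n_users, n_techs = benefit.shape
    u = np.zeros((n_users, n_techs))          # internal potentials
    for _ in range(steps):
        v = 1.0 / (1.0 + np.exp(-u))          # neuron outputs in (0, 1)
        row_violation = v.sum(axis=1, keepdims=True) - 1.0
        grad = -benefit + penalty * row_violation   # dE/dv of the toy energy
        u += dt * (-u / tau - grad)           # gradient-flow-like dynamics
    return (1.0 / (1.0 + np.exp(-u))).argmax(axis=1)

# toy example: 4 users, 3 co-located technologies
benefit = np.array([[0.9, 0.2, 0.4],
                    [0.1, 0.8, 0.3],
                    [0.5, 0.5, 0.6],
                    [0.2, 0.3, 0.9]])
print(hopfield_assignment(benefit))           # e.g. one technology index per user
```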
Calabuig Soler, D. (2010). Common Radio Resource Management Strategies for Quality of Service Support in Heterogeneous Wireless Networks [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/7348
12

Ali, Muhammad. "Load balancing in heterogeneous wireless communications networks : optimized load aware vertical handovers in satellite-terrestrial hybrid networks incorporating IEEE 802.21 media independent handover and cognitive algorithms". Thesis, University of Bradford, 2012. http://hdl.handle.net/10454/6307.

Full text source
Abstract:
Heterogeneous wireless networking technologies such as satellite, UMTS, WiMax and WLAN are being used to provide network access for both voice and data services. In big cities, densely populated areas like town centres, shopping centres and train stations may have coverage from multiple wireless networks. Traditional Radio Access Technology (RAT) selection algorithms are mainly based on the 'Always Best Connected' paradigm, whereby mobile nodes are always directed towards the available network which has the strongest and fastest link. Hence a large number of mobile users may be connected to the more common UMTS while other networks like WiMax and WLAN remain underutilised, thereby creating an unbalanced load across these different wireless networks. This high variation of the load across different co-located networks may cause congestion on the overloaded network, leading to high call blocking and call dropping probabilities. This can be alleviated by moving mobile users from heavily loaded networks to the least loaded networks. This thesis presents a novel framework for load balancing in heterogeneous wireless networks incorporating the IEEE 802.21 Media Independent Handover (MIH). The framework comprises novel load-aware RAT selection techniques and a novel network load balancing mechanism. Three different load balancing algorithms, namely baseline, fuzzy and neural-fuzzy algorithms, are also presented in this thesis and are used by the framework for efficient load balancing across the different co-located wireless networks. A simulation model developed in NS2 validates the performance of the proposed load balancing framework. Different attributes such as load distribution across all wireless networks, handover latencies, packet drops, throughput at mobile nodes and network utilization have been observed to evaluate the effects of load balancing in different scenarios. The simulation results indicate that with load balancing the performance efficiency improves, as the overloaded situation is avoided.
13

LIMA, Natália Flora De. "Frankenstein PSO na definição das arquiteturas e ajustes dos pesos e uso de PSO heterogêneo no treinamento de redes neurais feed-forward". Universidade Federal de Pernambuco, 2011. https://repositorio.ufpe.br/handle/123456789/17738.

Full text source
Abstract:
Facepe
This research presents two new algorithms, PSO-FPSO and FPSO-FPSO, for the global optimization of feed-forward MLP (Multi-Layer Perceptron) neural networks. The purpose of these algorithms is to optimize architectures and synaptic weights at the same time, in order to improve the generalization capacity of the artificial neural network (ANN). The automatic optimization of a neural network's architecture and weights has received much attention in supervised learning, mainly in pattern classification problems. Besides the Genetic Algorithms, Tabu Search, Differential Evolution and Simulated Annealing that are commonly used in the training of neural networks, we can mention population-based approaches such as Ant Colony Optimization, Bee Colony Optimization and Particle Swarm Optimization, which have been widely used for this task. The methodology applied in this research uses two PSO-type algorithms, one employed in the optimization of the architectures and the other in the calibration of the connection weights. In this approach the algorithms are executed alternately and for a predefined number of times. In the weight adjustment of an MLP neural network, experiments were also performed with heterogeneous particle swarms, i.e. combinations of two or more PSOs of different types. To validate the experiments with homogeneous swarms, seven pattern classification datasets were used: cancer, diabetes, heart, glass, horse, soybean and thyroid. For the experiments with heterogeneous swarms, three datasets were used, namely cancer, diabetes and heart. The performance of the algorithms was measured by the average percentage of misclassification; algorithms from the literature are also considered. The results showed that the algorithms investigated in this research obtained better classification accuracy when compared with the literature algorithms mentioned in this work.
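The PSO-FPSO and FPSO-FPSO algorithms are not reproduced here; the sketch below only shows the basic global-best particle-swarm step used to adjust a flat weight vector against a loss function, with arbitrarily chosen inertia and acceleration constants:

```python
import numpy as np

def pso_minimize(loss, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Basic global-best PSO over a flat weight vector (e.g. all MLP weights)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, size=(n_particles, dim))   # particle positions
    v = np.zeros_like(x)                               # particle velocities
    pbest, pbest_val = x.copy(), np.array([loss(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([loss(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# toy loss standing in for an MLP's classification error over its weight vector
sphere = lambda wvec: float(np.sum(wvec ** 2))
best, best_val = pso_minimize(sphere, dim=10)
print(best_val)
```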
14

Zhang, Wuming. "Towards non-conventional face recognition : shadow removal and heterogeneous scenario". Thesis, Lyon, 2017. http://www.theses.fr/2017LYSEC030/document.

Full text source
Abstract:
In recent years, biometrics have received substantial attention due to the ever-growing need for automatic individual authentication. Among various physiological biometric traits, the face offers unmatched advantages over others, such as fingerprints and iris, because it is natural, non-intrusive and easily understandable by humans. Nowadays conventional face recognition techniques have attained quasi-perfect performance in a highly constrained environment wherein poses, illuminations, expressions and other sources of variation are strictly controlled. However, these approaches are often confined to restricted application fields because non-ideal imaging environments are frequently encountered in practical cases. To address these challenges adaptively, this dissertation focuses on the unconstrained face recognition problem, where face images exhibit more variability in illumination. Moreover, another major question is how to leverage limited 3D shape information to work jointly with 2D-based techniques in a heterogeneous face recognition system. To deal with the problem of varying illumination, we explicitly build the underlying reflectance model which characterizes the interactions between skin surface, lighting source and camera sensor, and elaborate the formation of face color. With this physics-based image formation model, an illumination-robust representation, namely the Chromaticity Invariant Image (CII), is proposed, which can subsequently help reconstruct shadow-free and photo-realistic color face images. Because this shadow removal process is achieved in color space, the approach can be combined with existing gray-scale lighting normalization techniques to further improve face recognition performance. The experimental results on two benchmark databases, CMU-PIE and FRGC Ver2.0, demonstrate the generalization ability and robustness of our approach to lighting variations. We further explore the effective and creative use of 3D data in heterogeneous face recognition. In such a scenario, the 3D face is only available in the gallery set and not in the probe set, which is what one would encounter in real-world applications. Two Convolutional Neural Networks (CNNs) are constructed for this purpose. The first CNN is trained to extract discriminative features of 2D/3D face images for direct heterogeneous comparison, while the second CNN combines an encoder-decoder structure, namely U-Net, and a Conditional Generative Adversarial Network (CGAN) to reconstruct the depth face image from its 2D counterpart. Specifically, the recovered depth face images can also be fed to the first CNN for 3D face recognition, leading to a fusion scheme which achieves gains in recognition performance. We have evaluated our approach extensively on the challenging FRGC 2D/3D benchmark database. The proposed method compares favorably to the state of the art and shows significant improvement with the fusion scheme.
15

Ritholtz, Lee. "Intelligent text recognition system on a heterogeneous multi-core processor cluster a performance profile and architecture exploration /". Diss., Online access via UMI:, 2009.

Find full text source
Abstract:
Thesis (M.S.)--State University of New York at Binghamton, Thomas J. Watson School of Engineering and Applied Science, Department of Electrical and Computer Engineering, 2009.
Includes bibliographical references.
16

Westphal, Florian. "Efficient Document Image Binarization using Heterogeneous Computing and Interactive Machine Learning". Licentiate thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-16797.

Full text source
Abstract:
Large collections of historical document images have been collected by companies and government institutions for decades. More recently, these collections have been made available to a larger public via the Internet. However, to make accessing them truly useful, the contained images need to be made readable and searchable. One step in that direction is document image binarization, the separation of text foreground from page background. This separation makes the text shown in the document images easier to process by humans and other image processing algorithms alike. While reasonably well working binarization algorithms exist, it is not sufficient to just be able to perform the separation of foreground and background well. This separation also has to be achieved in an efficient manner, in terms of execution time, but also in terms of the training data used by machine learning based methods. This is necessary to make binarization not only theoretically possible, but also practically viable. In this thesis, we explore different ways to achieve efficient binarization in terms of execution time by improving the implementation and the algorithm of a state-of-the-art binarization method. We find that parameter prediction, as well as mapping the algorithm onto the graphics processing unit (GPU), help to improve its execution performance. Furthermore, we propose a binarization algorithm based on recurrent neural networks and evaluate the choice of its design parameters with respect to their impact on execution time and binarization quality. Here, we identify a trade-off between binarization quality and execution performance based on the algorithm's footprint size and show that a dynamically weighted training loss tends to improve the binarization quality. Lastly, we address the problem of training data efficiency by evaluating the use of interactive machine learning for reducing the required amount of training data for our recurrent neural network based method. We show that user feedback can help to achieve better binarization quality with less training data and that visualized uncertainty helps to guide users to give more relevant feedback.
Scalable resource-efficient systems for big data analytics
17

Ong, Felicia Li Chin. "Heterogeneous Networking for Beyond 3G system in a High-Speed Train Environment. Investigation of handover procedures in a high-speed train environment and adoption of a pattern classification neural-networks approach for handover management". Thesis, University of Bradford, 2016. http://hdl.handle.net/10454/12341.

Full text source
Abstract:
Based on the targets outlined by the EU Horizon 2020 (H2020) framework, it is expected that heterogeneous networking will play a crucial role in delivering seamless end-to-end ubiquitous Internet access for users. In due course, the current GSM-Railway (GSM-R) system will be deemed unsustainable, as the demand for packet-oriented services continues to increase. Therefore, the identification of a plausible replacement system conducted in this research study is timely and appropriate. In this research study, a hybrid satellite and terrestrial network for enabling ubiquitous Internet access in a high-speed train environment is investigated. The study focuses on the mobility management aspect of the system, primarily handover management. A proposed handover strategy, employing the RACE II MONET and ITU-T Q.65 design methodology, is addressed. This includes identifying the functional model (FM), which is then mapped to the functional architecture (FUA) based on the Q.1711 IMT-2000 FM. In addition, the signalling protocols, information flows and message format based on the adopted design methodology are also specified. The approach is then simulated in OPNET and the findings are presented and discussed. The prospect of employing neural networks (NN) for handover is also explored; this part of the study focuses specifically on the use of pattern classification neural networks to aid the handover process, which is simulated in MATLAB. The simulation outcomes demonstrate the effectiveness and appropriateness of the NN algorithm and its competence in facilitating the handover process.
18

Bradley, Patrick Justin. "Heterogeneously coupled neural oscillators". Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/33938.

Full text source
Abstract:
The work we present in this thesis is a series of studies of how heterogeneities in coupling affect the synchronization of coupled neural oscillators. We begin by examining how heterogeneity in coupling strength affects the equilibrium phase difference of a pair of coupled, spiking neurons when compared to the case of identical coupling. This study is performed using pairs of Hodgkin-Huxley and Wang-Buzsaki neurons. We find that heterogeneity in coupling strength breaks the symmetry of the bifurcation diagrams of equilibrium phase difference versus the synaptic rate constant for weakly coupled pairs of neurons. We observe important qualitative changes, such as the loss of the ubiquitous in-phase and anti-phase solutions found when the coupling is identical, and regions of parameter space where no phase-locked solution exists. Another type of heterogeneity arises from having different types of coupling between oscillators: synaptic coupling between neurons can be either excitatory or inhibitory. We examine the synchronization dynamics when a pair of neurons is coupled with one excitatory and one inhibitory synapse, again using coupled pairs of Hodgkin-Huxley and Wang-Buzsaki neurons. We then explore the existence of 1:n coupled states for a coupled pair of theta neurons, in order to reproduce an observed effect called quantal slowing. Quantal slowing is the phenomenon where jumping between different 1:n coupled states is observed, instead of gradual changes in period, as a parameter in the system is varied. All of these topics fall under the general heading of coupled nonlinear oscillators, and specifically weakly coupled neural oscillators. The audience for this thesis is likely to be a mixed crowd, as the research reported herein is interdisciplinary. Choosing the content for the introduction proved far more challenging than expected; it might be impossible to write a maximally useful introductory portion of a thesis when it could be read by a physicist, mathematician, engineer or biologist. Undoubtedly readers will find some portion of the introduction elementary. At the risk of boring some or all of my readers, we decided it was best to proceed so that enough of the mathematical (biological) background is explained in the introduction that a biologist (mathematician) is able to appreciate the motivations for the research and the results presented. We begin with an introduction to nonlinear dynamics explaining the mathematical tools we use to characterize the excitability of individual neurons, as well as oscillations and synchrony in neural networks. The next part of the introductory material is an overview of the biology of neurons. We then describe the neuron models used in this work and finally describe the techniques we employ to study coupled neurons.
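A minimal sketch of the symmetry-breaking effect described above, using a reduced phase model rather than the full Hodgkin-Huxley or Wang-Buzsaki equations studied in the thesis (the interaction function and coupling strengths are illustrative assumptions): with identical coupling the pair locks at the symmetric phase difference, while unequal coupling strengths shift the locked state away from it.

```python
import numpy as np

def locked_phase_difference(eps12, eps21, beta=0.3, omega=1.0,
                            steps=50000, dt=0.01):
    """Two weakly coupled phase oscillators with interaction H(x) = sin(x + beta).

    dtheta1/dt = omega + eps12 * sin(theta2 - theta1 + beta)
    dtheta2/dt = omega + eps21 * sin(theta1 - theta2 + beta)
    Returns the locked phase difference theta1 - theta2, wrapped to (-pi, pi].
    """
    th1, th2 = 0.0, 1.5                         # arbitrary initial phases
    for _ in range(steps):
        d1 = omega + eps12 * np.sin(th2 - th1 + beta)
        d2 = omega + eps21 * np.sin(th1 - th2 + beta)
        th1, th2 = th1 + dt * d1, th2 + dt * d2
    return (th1 - th2 + np.pi) % (2 * np.pi) - np.pi

print(locked_phase_difference(0.05, 0.05))      # identical coupling: locks near 0
print(locked_phase_difference(0.02, 0.08))      # heterogeneous coupling: shifted lock
```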
19

Martignano, Anna. "Real-time Anomaly Detection on Financial Data". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-281832.

Full text source
Abstract:
This work presents an investigation of tailoring Network Representation Learning (NRL) for an application in the financial industry. NRL approaches are data-driven models that learn how to encode graph structures into low-dimensional vector spaces, which can be further exploited by downstream machine learning applications. They can potentially bring many benefits to the financial industry, since they automatically extract features, called embeddings, that provide useful input regarding graph structures. Financial transactions can be represented as a network, and through NRL it is possible to extract embeddings that reflect the intrinsic inter-connected nature of economic relationships. Such embeddings can be used for several purposes, among them anomaly detection to fight financial crime. This work provides a qualitative analysis of state-of-the-art NRL models, which identifies the Graph Convolutional Network (ConvGNN) as the most suitable category of approaches for the financial industry, although with a certain need for further improvement. The financial industry poses additional challenges when modelling an NRL solution: besides the need for a scalable solution that can handle real-world graphs of considerable size, several characteristics must be taken into consideration. Transaction graphs are inherently dynamic, since new transactions are executed every day, and nodes can be heterogeneous; everything is further complicated by the need to have updated information in (near) real time, due to the sensitivity of the application domain. For these reasons, GraphSAGE, an inductive ConvGNN model, has been considered as the basis for the experiments. Two variants of GraphSAGE are presented: a dynamic variant whose weights evolve along with the input sequence of graph snapshots, and a variant specifically meant to handle bipartite graphs. These variants have been evaluated by applying them to real-world data and leveraging the generated embeddings to perform anomaly detection. The experiments demonstrate that leveraging these variants leads to results comparable with other state-of-the-art approaches, while having the advantage of being suitable for handling real-world financial data sets.
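For readers unfamiliar with GraphSAGE, the following is a bare-bones sketch of its mean-aggregation layer with invented weights, dimensions and toy graph (the dynamic and bipartite variants described above add further machinery on top of this):

```python
import numpy as np

def graphsage_mean_layer(features, adjacency, w_self, w_neigh):
    """One inductive GraphSAGE layer with mean aggregation.

    features  : (n_nodes, d_in) node feature matrix
    adjacency : (n_nodes, n_nodes) binary adjacency matrix
    """
    deg = adjacency.sum(axis=1, keepdims=True).clip(min=1)
    neigh_mean = adjacency @ features / deg            # mean of neighbour features
    h = features @ w_self + neigh_mean @ w_neigh       # combine self and neighbourhood
    h = np.maximum(h, 0)                               # ReLU
    return h / np.linalg.norm(h, axis=1, keepdims=True).clip(min=1e-12)

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 4))                            # 5 nodes, 4 features each
a = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
embeddings = graphsage_mean_layer(x, a, rng.normal(size=(4, 8)), rng.normal(size=(4, 8)))
print(embeddings.shape)                                # (5, 8) node embeddings
```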
20

Diaz, Boada Juan Sebastian. "Polypharmacy Side Effect Prediction with Graph Convolutional Neural Network based on Heterogeneous Structural and Biological Data". Thesis, KTH, Numerisk analys, NA, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-288537.

Pełny tekst źródła
Streszczenie:
The prediction of polypharmacy side effects is crucial to reduce the mortality and morbidity of patients suffering from complex diseases. However, its experimental prediction is unfeasible due to the many possible drug combinations, leaving in silico tools as the most promising way of addressing this problem. This thesis improves the performance and robustness of a state-of-the-art graph convolutional network designed to predict polypharmacy side effects, by feeding it with complexity properties of the drug-protein network. The modifications also involve the creation of a direct pipeline to reproduce the results and test it with different datasets.
För att minska dödligheten och sjukligheten hos patienter som lider av komplexa sjukdomar är det avgörande att kunna förutsäga biverkningar från polyfarmaci. Att experimentellt förutsäga biverkningarna är dock ogenomförbart på grund av det stora antalet möjliga läkemedelskombinationer, vilket lämnar in silico-verktyg som det mest lovande sättet att lösa detta problem. Detta arbete förbättrar prestandan och robustheten av ett av det senaste grafiska faltningsnätverken som är utformat för att förutsäga biverkningar från polyfarmaci, genom att mata det med läkemedel-protein-nätverkets komplexitetsegenskaper. Ändringarna involverar också skapandet av en direkt pipeline för att återge resultaten och testa den med olika dataset.
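As an illustration of how drug-pair side-effect scores can be read out of GCN-produced embeddings, here is a small numpy sketch of a per-side-effect bilinear decoder; the shapes, random embeddings and relation matrices are illustrative assumptions and not the thesis's exact model.

```python
import numpy as np

# Bilinear decoder that scores drug pairs per side effect from drug embeddings.
rng = np.random.default_rng(1)
num_drugs, dim, num_side_effects = 6, 16, 3
Z = rng.normal(size=(num_drugs, dim))               # drug embeddings from a GCN encoder (stand-in)
R = rng.normal(size=(num_side_effects, dim, dim))   # one relation matrix per side effect

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def score(i, j, r):
    """Probability that drugs i and j jointly cause side effect r."""
    return sigmoid(Z[i] @ R[r] @ Z[j])

print(score(0, 1, 2))   # untrained score for an illustrative drug pair
```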
Style APA, Harvard, Vancouver, ISO itp.
21

Nguyen, Thanh Hai. "Some contributions to deep learning for metagenomics". Electronic Thesis or Diss., Sorbonne université, 2018. http://www.theses.fr/2018SORUS102.

Pełny tekst źródła
Streszczenie:
Les données métagénomiques du microbiome humain constituent une nouvelle source de données pour améliorer le diagnostic et le pronostic des maladies humaines. Cependant, réaliser une prédiction basée sur l'abondance de bactéries individuelles est un défi, car le nombre de caractéristiques est beaucoup plus grand que le nombre d'échantillons et les difficultés liées au traitement de données dimensionnelles, ainsi que la grande complexité des données hétérogènes. L'apprentissage automatique a obtenu de grandes réalisations sur d'importants problèmes de métagénomique liés au regroupement d'OTU, à l'assignation taxonomique, etc. La contribution de cette thèse est multiple: 1) un cadre de sélection de caractéristiques pour approche pour prédire les maladies à l'aide de représentations d'images artificielles. La première contribution, qui est une approche efficace de sélection de caractéristiques basée sur les capacités de visualisation de la carte auto-organisée, montre une précision de classification raisonnable par rapport aux méthodes de pointe. La seconde approche vise à visualiser les données métagénomiques en utilisant une méthode simple de remplissage, ainsi que des approches d'apprentissage de réduction dimensionnelle. La nouvelle représentation des données métagénomiques peut être considérée comme une image synthétique et utilisée comme un nouvel ensemble de données pour une méthode efficace d'apprentissage en profondeur. Les résultats montrent que les méthodes proposées permettent d'atteindre des performances prédictives à la pointe de la technologie ou de les surpasser sur des benchmarks métagénomiques riches en public
Metagenomic data from the human microbiome is a novel source of data for improving diagnosis and prognosis in human diseases. However, making predictions based on individual bacterial abundances is a challenge, since the number of features is much larger than the number of samples. Hence, we face the difficulties related to high-dimensional data processing, as well as the high complexity of heterogeneous data. Machine Learning has obtained great achievements on important metagenomics problems linked to OTU clustering, binning, taxonomic assignment, etc. The contribution of this PhD thesis is multi-fold: 1) a feature selection framework for efficient heterogeneous biomedical signature extraction, and 2) a novel deep learning approach for predicting diseases using artificial image representations. The first contribution is an efficient feature selection approach based on the visualization capabilities of Self-Organizing Maps for heterogeneous data fusion. The framework is efficient on real and heterogeneous datasets containing metadata, genes of adipose tissue, and gut flora metagenomic data, with a reasonable classification accuracy compared to state-of-the-art methods. The second approach is a method to visualize metagenomic data using a simple fill-up method, as well as various state-of-the-art dimensionality reduction learning approaches. The new metagenomic data representations can be considered as synthetic images and used as a novel data set for an efficient deep learning method such as Convolutional Neural Networks. The results show that the proposed methods either achieve state-of-the-art predictive performance or outperform it on rich public metagenomic benchmarks.
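The following numpy sketch illustrates a fill-up style transformation of a 1-D abundance vector into a square grey-level image that a CNN could consume; the log scaling, number of grey bins and row-by-row ordering are illustrative assumptions, not the thesis's exact recipe.

```python
import numpy as np

# Fill-up sketch: bin species abundances and write them row by row into a square image.
def fill_up_image(abundances, n_bins=10):
    a = np.asarray(abundances, dtype=float)
    a = np.log10(a + 1e-7)                               # compress dynamic range
    a = (a - a.min()) / (a.max() - a.min() + 1e-12)      # scale to [0, 1]
    binned = np.floor(a * (n_bins - 1)) / (n_bins - 1)   # discretize into n_bins grey levels
    side = int(np.ceil(np.sqrt(binned.size)))
    img = np.zeros(side * side)
    img[:binned.size] = binned                           # fill left-to-right, top-to-bottom
    return img.reshape(side, side)

abund = np.random.default_rng(2).dirichlet(np.ones(500))  # toy relative abundances
print(fill_up_image(abund).shape)                          # (23, 23)
```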
Style APA, Harvard, Vancouver, ISO itp.
22

Bailey, Tony J. "Neuromorphic Architecture with Heterogeneously Integrated Short-Term and Long-Term Learning Paradigms". University of Cincinnati / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1554217105047975.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
23

Liu, Chang. "Data Analysis of Minimally-Structured Heterogeneous Logs : An experimental study of log template extraction and anomaly detection based on Recurrent Neural Network and Naive Bayes". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-191334.

Pełny tekst źródła
Streszczenie:
Nowadays, the ideas of continuous integration and continuous delivery are in heavy use in order to achieve rapid software development and quick, high-quality product delivery to customers. In modern software development, the testing stage has always been of great significance, so that the delivered software meets all the requirements with high quality, maintainability, sustainability, scalability, etc. The key assignment of software testing is to find bugs in every test and solve them. The developers and test engineers at Ericsson, who work on a large-scale software architecture, mainly rely on the logs generated during testing, which contain important information regarding system behavior and software status, to debug the software. However, the volume of the data is too big and its variety too complex and unpredictable; therefore, it is very time-consuming and labor-intensive for them to manually locate and resolve bugs in such a vast amount of log data. The objective of this thesis project is to explore a way to conduct log analysis efficiently and effectively by applying relevant machine learning algorithms, in order to help people quickly detect test failures and their possible causes. In this project, a method of preprocessing and clustering original logs is designed and implemented in order to obtain useful data which can be fed to machine learning algorithms. A comparative log analysis, based on two machine learning algorithms (Recurrent Neural Network and Naive Bayes), is conducted for detecting the location of system failures and anomalies. Finally, relevant experimental results are provided and analyzed.
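As a minimal sketch of the two steps described above, the snippet below collapses raw log lines into rough templates by masking variable fields and then classifies template count vectors with Naive Bayes; the regexes, toy log lines and labels are illustrative assumptions, not Ericsson data or the thesis's preprocessing pipeline.

```python
import re
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

def to_template(line):
    line = re.sub(r"\b\d+(\.\d+)*\b", "<NUM>", line)     # mask numbers, versions, addresses
    line = re.sub(r"\b0x[0-9a-fA-F]+\b", "<HEX>", line)
    return line.lower()

logs = [
    "connection 1042 established to node 10.0.0.3",
    "connection 2311 established to node 10.0.0.7",
    "timeout after 3000 ms on node 10.0.0.3",
    "timeout after 4500 ms on node 10.0.0.9",
]
labels = [0, 0, 1, 1]                                     # 0 = normal, 1 = failure-related (toy labels)

templates = [to_template(l) for l in logs]
vec = CountVectorizer()
clf = MultinomialNB().fit(vec.fit_transform(templates), labels)
new_line = to_template("timeout after 120 ms on node 10.0.0.1")
print(clf.predict(vec.transform([new_line])))             # expected to flag the timeout template
```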
Style APA, Harvard, Vancouver, ISO itp.
24

PETRINI, ALESSANDRO. "HIGH PERFORMANCE COMPUTING MACHINE LEARNING METHODS FOR PRECISION MEDICINE". Doctoral thesis, Università degli Studi di Milano, 2021. http://hdl.handle.net/2434/817104.

Pełny tekst źródła
Streszczenie:
La Medicina di Precisione (Precision Medicine) è un nuovo paradigma che sta rivoluzionando diversi aspetti delle pratiche cliniche: nella prevenzione e diagnosi, essa è caratterizzata da un approccio diverso dal "one size fits all" proprio della medicina classica. Lo scopo delle Medicina di Precisione è di trovare misure di prevenzione, diagnosi e cura che siano specifiche per ciascun individuo, a partire dalla sua storia personale, stile di vita e fattori genetici. Tre fattori hanno contribuito al rapido sviluppo della Medicina di Precisione: la possibilità di generare rapidamente ed economicamente una vasta quantità di dati omici, in particolare grazie alle nuove tecniche di sequenziamento (Next-Generation Sequencing); la possibilità di diffondere questa enorme quantità di dati grazie al paradigma "Big Data"; la possibilità di estrarre da questi dati tutta una serie di informazioni rilevanti grazie a tecniche di elaborazione innovative ed altamente sofisticate. In particolare, le tecniche di Machine Learning introdotte negli ultimi anni hanno rivoluzionato il modo di analizzare i dati: esse forniscono dei potenti strumenti per l'inferenza statistica e l'estrazione di informazioni rilevanti dai dati in maniera semi-automatica. Al contempo, però, molto spesso richiedono elevate risorse computazionali per poter funzionare efficacemente. Per questo motivo, e per l'elevata mole di dati da elaborare, è necessario sviluppare delle tecniche di Machine Learning orientate al Big Data che utilizzano espressamente tecniche di High Performance Computing, questo per poter sfruttare al meglio le risorse di calcolo disponibili e su diverse scale, dalle singole workstation fino ai super-computer. In questa tesi vengono presentate tre tecniche di Machine Learning sviluppate nel contesto del High Performance Computing e create per affrontare tre questioni fondamentali e ancora irrisolte nel campo della Medicina di Precisione, in particolare la Medicina Genomica: i) l'identificazione di varianti deleterie o patogeniche tra quelle neutrali nelle aree non codificanti del DNA; ii) l'individuazione della attività delle regioni regolatorie in diverse linee cellulari e tessuti; iii) la predizione automatica della funzione delle proteine nel contesto di reti biomolecolari. Per il primo problema è stato sviluppato parSMURF, un innovativo metodo basato su hyper-ensemble in grado di gestire l'elevato grado di sbilanciamento che caratterizza l'identificazione di varianti patogeniche e deleterie in mezzo al "mare" di varianti neutrali nelle aree non-coding del DNA. L'algoritmo è stato implementato per sfruttare appositamente le risorse di supercalcolo del CINECA (Marconi - KNL) e HPC Center Stuttgart (HLRS Apollo HAWK), ottenendo risultati allo stato dell'arte, sia per capacità predittiva, sia per scalabilità. Il secondo problema è stato affrontato tramite lo sviluppo di reti neurali "deep", in particolare Deep Feed Forward e Deep Convolutional Neural Networks per analizzare - rispettivamente - dati di natura epigenetica e sequenze di DNA, con lo scopo di individuare promoter ed enhancer attivi in linee cellulari e tessuti specifici. L'analisi è compiuta "genome-wide" e sono state usate tecniche di parallelizzazione su GPU. 
Infine, per il terzo problema è stato sviluppato un algoritmo di Machine Learning semi-supervisionato su grafo basato su reti di Hopfield per elaborare efficacemente grandi network biologici, utilizzando ancora tecniche di parallelizzazione su GPU; in particolare, una parte rilevante dell'algoritmo è data dall'introduzione di una tecnica parallela di colorazione del grafo che migliora il classico approccio greedy introdotto da Luby. Tra i futuri lavori e le attività in corso, viene presentato il progetto inerente all'estensione di parSMURF che è stato recentemente premiato dal consorzio Partnership for Advance in Computing in Europe (PRACE) allo scopo di sviluppare ulteriormente l'algoritmo e la sua implementazione, applicarlo a dataset di diversi ordini di grandezza più grandi e inserire i risultati in Genomiser, lo strumento attualmente allo stato dell'arte per l'individuazione di varianti genetiche Mendeliane. Questo progetto è inserito nel contesto di una collaborazione internazionale con i Jackson Lab for Genomic Medicine.
Precision Medicine is a new paradigm which is reshaping several aspects of clinical practice, representing a major departure from the "one size fits all" approach in diagnosis and prevention featured in classical medicine. Its main goal is to find personalized prevention measures and treatments, on the basis of the personal history, lifestyle and specific genetic factors of each individual. Three factors contributed to the rapid rise of Precision Medicine approaches: the ability to quickly and cheaply generate a vast amount of biological and omics data, mainly thanks to Next-Generation Sequencing; the ability to efficiently access this vast amount of data, under the Big Data paradigm; the ability to automatically extract relevant information from data, thanks to innovative and highly sophisticated data processing analytical techniques. In recent years, Machine Learning has revolutionized data analysis and predictive inference, influencing almost every field of research. Moreover, high-throughput bio-technologies posed additional challenges to effectively manage and process Big Data in Medicine, requiring novel specialized Machine Learning methods and High Performance Computing techniques well-tailored to process and extract knowledge from big bio-medical data. In this thesis we present three High Performance Computing Machine Learning techniques that have been designed and developed for tackling three fundamental and still open questions in the context of Precision and Genomic Medicine: i) identification of pathogenic and deleterious genomic variants among the "sea" of neutral variants in the non-coding regions of the DNA; ii) detection of the activity of regulatory regions across different cell lines and tissues; iii) automatic protein function prediction and drug repurposing in the context of biomolecular networks. For the first problem we developed parSMURF, a novel hyper-ensemble method able to deal with the huge data imbalance that characterizes the detection of pathogenic variants in the non-coding regulatory regions of the human genome. We implemented this approach with highly parallel computational techniques using supercomputing resources at CINECA (Marconi – KNL) and HPC Center Stuttgart (HLRS Apollo HAWK), obtaining state-of-the-art results. For the second problem we developed Deep Feed Forward and Deep Convolutional Neural Networks to respectively process epigenetic and DNA sequence data to detect active promoters and enhancers in specific tissues at genome-wide level, using GPU devices to parallelize the computation. Finally we developed scalable semi-supervised graph-based Machine Learning algorithms based on parametrized Hopfield Networks to process large biological graphs in parallel using GPU devices, using a parallel coloring method that improves the classical Luby greedy algorithm. We also present ongoing extensions of parSMURF, very recently awarded by the Partnership for Advanced Computing in Europe (PRACE) consortium to further develop the algorithm, apply it to huge genomic data and embed its results into Genomiser, a state-of-the-art computational tool for the detection of pathogenic variants associated with Mendelian genetic diseases, in the context of an international collaboration with the Jackson Lab for Genomic Medicine.
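To illustrate the balanced-ensemble idea used against extreme class imbalance (in the spirit of hyper-ensembles such as parSMURF, though not its actual implementation), the sketch below trains several random forests, each on all minority examples plus a different random subsample of the majority class, and averages their probabilities; the data, partition count and forest sizes are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(2000, 20))
y = (rng.random(2000) < 0.02).astype(int)          # ~2% positives: heavy imbalance (toy data)

pos_idx, neg_idx = np.where(y == 1)[0], np.where(y == 0)[0]
ensemble = []
for part in range(10):                              # 10 balanced partitions (illustrative)
    neg_sample = rng.choice(neg_idx, size=len(pos_idx), replace=False)
    idx = np.concatenate([pos_idx, neg_sample])
    ensemble.append(RandomForestClassifier(n_estimators=50, random_state=part).fit(X[idx], y[idx]))

proba = np.mean([m.predict_proba(X)[:, 1] for m in ensemble], axis=0)
print(proba[:5])                                    # averaged "pathogenicity" scores
```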
Style APA, Harvard, Vancouver, ISO itp.
25

Galbincea, Nicholas D. "Critical Analysis of Dimensionality Reduction Techniques and Statistical Microstructural Descriptors for Mesoscale Variability Quantification". The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1500642043518197.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
26

"Heterogeneous neural networks: theory and applications". Universitat Politècnica de Catalunya, 2000. http://www.tesisenxarxa.net/TDX-0302109-114922/.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
27

Su, Yu-Sheng, i 蘇裕勝. "Heterogeneous Graph Embedding Based on Graph Convolutional Neural Networks". Thesis, 2019. http://ndltd.ncl.edu.tw/handle/222yfr.

Pełny tekst źródła
Streszczenie:
Master's thesis
National Chengchi University
Department of Computer Science
108
In recent years, information network embedding has become popular because these techniques make it possible to encode information into low-dimensional representations, even for a graph/network with multiple types of nodes and relations. In addition, graph neural networks (GNN) have also shown their effectiveness in learning large-scale node representations for node classification. In this work, therefore, we propose a framework based on heterogeneous network embedding and the idea of graph neural networks. In our framework, we first generate node representations by various network embedding methods. Then, we split a homogeneous network graph into subgraphs and concatenate the learned node representations into the same embedding space. After that, we apply a GNN variant called GraphSAGE to generate representations for the tasks of link prediction and recommendation. In our experiments, the results on the tasks of link prediction and recommendation both show the effectiveness of the proposed framework.
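The downstream link-prediction step can be sketched as scoring candidate edges with the inner product of learned node embeddings and evaluating with ROC-AUC; the random embeddings and edge lists below are stand-ins for the learned representations, not the thesis's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
emb = rng.normal(size=(100, 32))                   # node embeddings (e.g. GraphSAGE output, stand-in)

pos_edges = rng.integers(0, 100, size=(200, 2))    # observed edges (toy)
neg_edges = rng.integers(0, 100, size=(200, 2))    # sampled non-edges (toy)

def edge_scores(edges):
    return np.sum(emb[edges[:, 0]] * emb[edges[:, 1]], axis=1)   # dot-product scores

scores = np.concatenate([edge_scores(pos_edges), edge_scores(neg_edges)])
labels = np.concatenate([np.ones(len(pos_edges)), np.zeros(len(neg_edges))])
print("link-prediction AUC:", roc_auc_score(labels, scores))     # ~0.5 for untrained embeddings
```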
Style APA, Harvard, Vancouver, ISO itp.
28

Marques, José Fernando Duarte. "Distributed Learning of Convolutional Neural Networks on Heterogeneous Processing Units". Master's thesis, 2016. http://hdl.handle.net/10316/41280.

Pełny tekst źródła
Streszczenie:
The field of deep learning has been the focus of plenty of research and development over the last years. Deep Neural Networks (DNNs), and more specifically Convolutional Neural Networks (CNNs), have proven to be powerful tools in tasks that range from the ordinary, like check reading, to the most essential, being used in medical diagnosis. This evolution in the field has led to the development of frameworks, such as Torch and Theano, that simplified the training process of a CNN, where the user only needs to create the network architecture, select the ideal hyper-parameters and provide the inputs and desired outputs. However, easy access to these frameworks led to an increase in both network size and dataset size, since the networks had to become bigger and more complex to be able to achieve more significant results. This led to longer training times, which not even the improvement of Graphics Processing Units (GPUs), and more specifically the use of General-Purpose GPU (GPGPU) computing, could keep up with. To solve that problem, several distributed training methods were developed, dividing the workload across several GPUs on the same machine or across Central Processing Units (CPUs) or GPUs on different machines. These distribution techniques can be divided into two groups: data parallelism and model parallelism. The first method consists of using replicas of the same network on each device and training them using different data. Model parallelism divides the workload of the entire network across the different devices used. However, none of the techniques used by the different frameworks takes advantage of the parallelization offered by CNNs, and trying to use a different method with those frameworks ends up being a task too complex or even impossible. In the present thesis, a new distributed training technique is developed that makes use of the parallelization that CNNs have to offer. The method is a variation of model parallelism, but only the convolutional layer is distributed. Every machine receives the same inputs but a different set of kernels, and the result of the convolutions is then sent to a main machine, known as the master node. This method was subjected to a series of tests, varying the number of machines involved as well as the network architecture, with the results being presented in this document. The results show that this technique is capable of diminishing training times considerably without loss of classification performance, for both CPUs and GPUs. A detailed analysis regarding the influence of the network size and batch size was also included in this document. Finally, a simulation was executed that shows results for a larger number of machines as well as a possible use of mobile GPUs, whose energy efficiency applied to deep learning was also explored in this work, supported by the contents of Appendix A.
A área de deep learning tem sido o foco de muita pesquisa e desenvolvimento ao longo dos últimos anos. As DNNs, e mais concretamente as CNNs provaram ser ferramentas poderosas em tarefas que vão desde as mais comuns, como leitura de cheques, às mais essenciais, sendo usadas em diagnóstico médico. Esta evolução na área levou ao desenvolvimento de frameworks, como o Torch e o Theano, que simplificaram o processo de treino de uma CNN, sendo necessário apenas estruturar a rede, escolhendo os parâmetros ideais e fornecer os inputs e outputs desejados. No entanto, o fácil acesso a essas frameworks levou a um aumento no tamanho tanto das redes como dos conjuntos de dados usados, uma vez que as redes tiveram que se tornar maiores e mais complexas para obter resultados mais significativos. Isto levou a tempos de treinos maiores, que nem a melhoria de GPUs e mais especificamente o uso de GPGPU conseguiu acompanhar. Para dar resposta a isso, foram desenvolvidos métodos de treino distribuído, dividindo o trabalho quer por várias GPUs na mesma máquina, quer por CPUs e GPUs em máquinas distintas. As diferentes técnicas de distribuição podem ser dividas em 2 grupos: paralelismo de dados e paralelismo de modelo. O primeiro método consiste em usar réplicas de uma rede e treinar fornecendo dados diferentes. O paralelismo de modelo passa por dividir o trabalho de toda a rede pelos diferentes dispositivos usados. No entanto, nenhuma destas técnicas usadas pela diferentes frameworks existentes tira partido da paralelização oferecida pelas CNNs, e tentar usar um outro método com essas frameworks revela-se um trabalho demasiado complexo e muitas vezes impossível. Nesta tese, é apresentada uma nova técnica de treino distribuído, que faz uso da paralelização que as CNNs oferecem. O método é uma variação do paralelismo de modelo, onde apenas a camada de convolução é distribuída. Todas as máquinas recebem as mesmas entradas mas um conjunto diferente de filtros, sendo que no final das convoluções os resultados são enviados para uma máquina central, designada como nó mestre. Este método foi alvo de uma série de testes, variando o número de máquinas envolvidas e a arquitectura da rede, cujos resultados se encontram neste documento. Os resultados mostram que esta técnica é capaz de diminuir os tempos de treino consideravelmente sem perda de desempenho de classificação, tanto para CPU como para GPU. Também foi feita uma análise detalhada sobre a influência do tamanho da rede e do batchsize no speedup conseguido. Por fim, foram também simulados resultados para um número superior de máquinas usadas, bem como o possível uso de GPUs de dispositivos móveis, cuja eficiência energética aplicada ao deep learning foi também explorada neste trabalhado, suportado pelo conteúdo do Appendix A.
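The kernel-splitting idea can be sketched as follows: every "worker" convolves the same input with its own subset of filters, and the master concatenates the resulting feature maps along the channel axis. The naive single-image convolution, shapes and worker count below are illustrative assumptions, not the thesis's distributed implementation.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive valid cross-correlation of one 2-D image with one 2-D kernel."""
    H, W = img.shape
    kh, kw = kernel.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(5)
image = rng.normal(size=(28, 28))
kernels = rng.normal(size=(16, 3, 3))                  # full filter bank

num_workers = 4
shards = np.array_split(kernels, num_workers)          # each worker gets its own kernel subset
partial_maps = [np.stack([conv2d_valid(image, k) for k in shard]) for shard in shards]
feature_maps = np.concatenate(partial_maps, axis=0)    # master node gathers and concatenates
print(feature_maps.shape)                              # (16, 26, 26)
```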
Style APA, Harvard, Vancouver, ISO itp.
29

Mahapatra, Dheeren Ku. "Modelling of Heterogeneous SAR Clutter for Speckle Suppression using MAP Estimation". Thesis, 2019. http://ethesis.nitrkl.ac.in/9865/1/2019_PHD_DKMahapatra_513EC1003_Modelling.pdf.

Pełny tekst źródła
Streszczenie:
Synthetic Aperture Radar (SAR) is an active high resolution imaging system that has been used intensively in diverse applications such as surveillance, environmental monitoring, urban planning, agricultural assessment, etc. However, the coherent nature of the SAR imaging system makes it highly susceptible to a multiplicative granular effect known as ‘Speckle’. The inherent presence of such speckle results in significant loss of relevant information (local mean of backscatter, point targets, texture, etc.), which in turn makes interpretation and processing of SAR data difficult for human interpreters as well as systems devised for computer vision. Moreover, the accuracy of most SAR image processing applications such as segmentation, classification, detection and recognition is also severely affected by such granular speckle. Consequently, speckle suppression (despeckling) is regarded as a crucial preprocessing step in various SAR imaging applications. Therefore, development of efficient despeckling filters has become the focal point of exploration for researchers from the radar community in recent years. A multitude of despeckling approaches has been developed so far, amongst which maximum-a-posteriori (MAP) based despeckling has earned utmost importance, since its simplicity in structure and practicability of implementation are found attractive apart from its accuracy in speckle suppression. However, the state-of-the-art MAP filters are inefficient in simultaneous mean preservation and speckle suppression from clutter data with varying degrees of heterogeneity. Such inefficacy is due to (i) the inability of the corresponding clutter models to portray the statistics of both moderately heterogeneous and extremely heterogeneous areas, (ii) inaccurate estimation of clutter model parameters, and (iii) mathematically intractable expressions for the clutter probability density function (pdf) and/or its associated parameter estimates. Therefore, this dissertation investigates various mathematical distributions and their parameter estimation strategies for efficiently characterising SAR clutter statistics, and the application of such clutter models in the formulation of MAP based despeckling filters. The major contributions of this dissertation are threefold. Firstly, parametric clutter distributions such as K1/2 and W1/2 are presented to characterise SAR clutter amplitude data from a multiplicative modelling perspective. Also, we propose to employ the method of log-cumulants (MoLC) strategy for estimation of the K1/2 and W1/2 clutter model parameters. Considering Γ and symmetric β distributions as clutter texture prior densities, Γ-MAP and β-MAP filters are then formulated to obtain amplitude texture estimates from K1/2 and W1/2 distributed moderately heterogeneous clutter amplitude data. Quantitative results on despeckling 1-look real and 3-look synthetic clutter data are presented for these amplitude texture estimators to assess their effectiveness in comparison with the respective texture estimators. Furthermore, we consider a reparametrised inverse Gamma distribution for SAR texture and formulate the corresponding G0 clutter pdf to characterise the statistics of extremely heterogeneous areas accurately. The reparametrised inverse Gamma distribution is then utilised as the clutter texture prior in devising a MAP filter named ‘G0-MAP’ for efficient speckle suppression in such clutter.
Experimental results are presented to illustrate the effectiveness of the G0-MAP filter compared to state-of-the-art despeckling filters in preserving mean while suppressing speckle from heterogeneous clutter data. However, it is difficult to achieve improved performance in simultaneous mean preservation and speckle suppression from clutter with varying degrees of heterogeneity through MAP based despeckling using state-of-the-art parametric clutter texture models as prior densities. In order to overcome such a limitation, semiparametric modelling is also introduced in this dissertation for SAR texture by employing a binary mixture of Gamma and inverse Gamma (ΓIΓ) distributions as an approximation to the generalised inverse Gaussian (GIG) model, which can efficiently capture the statistics of diverse kinds of areas. An expectation-maximisation (EM) based strategy is proposed to obtain the maximum likelihood (ML) estimates of the ΓIΓ model parameters. Cramer-Rao bounds (CRBs) for the parameters of the above mixture model are given and used for evaluating the effectiveness of EM based ML estimation through numerical examples. Monte-Carlo simulation results are also presented to validate the effectiveness of the ΓIΓ model in approximating the GIG distribution. Quantitative measurements on goodness-of-fit are then provided for experimental data over textured areas to illustrate the suitability and applicability of the mixture model in characterising clutter texture from areas of diverse kinds. Furthermore, we devise a MAP based despeckling filter named ‘ΓIΓ-MAP’ by considering the above mixture model as the clutter texture prior density. Experimental results on despeckling clutter data are presented for the ΓIΓ-MAP filter to illustrate its superiority over state-of-the-art filters in simultaneous mean preservation and speckle suppression from both moderately and extremely heterogeneous clutter. However, the use of the iterative EM algorithm for estimating the ΓIΓ model parameters makes it computationally demanding. Therefore, a mathematically tractable generalised model, called the Burr Type-XII (BXII) distribution, along with a computationally efficient estimation strategy for its parameters, is introduced in the dissertation for accurate modelling of heterogeneous SAR clutter. We propose an MoLC estimator by employing the Mellin transform and second-kind statistics to obtain the parameters of this model. CRBs are derived for the BXII distribution parameter estimates and used for assessing the effectiveness of the proposed estimation strategies applied to real and synthetic data. The analytical conditions for applicability of MoLC are given to assess the effectiveness and flexibility of the BXII clutter model. Experimental assessment on goodness-of-fit and computational burden is also provided for the proposed BXII model and compared with state-of-the-art clutter distributions, which illustrates the validity and applicability of the model.
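For orientation, the following numpy sketch implements the classical Gamma-MAP intensity filter, the generic member of the MAP despeckling family discussed above; the thesis's K1/2-, W1/2- and G0-based estimators use different priors and data models, and the window size, look number and thresholds here are illustrative assumptions recalled from the standard formulation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def gamma_map_filter(I, looks=3, win=7):
    """Classical Gamma-MAP despeckling of an intensity image (illustrative sketch)."""
    Cu2 = 1.0 / looks                                   # squared speckle coefficient of variation
    m = uniform_filter(I, win)                          # local mean
    m2 = uniform_filter(I * I, win)
    var = np.maximum(m2 - m * m, 0.0)
    Ci2 = var / np.maximum(m * m, 1e-12)                # squared local coefficient of variation

    alpha = (1.0 + Cu2) / np.maximum(Ci2 - Cu2, 1e-12)  # Gamma texture shape parameter
    b = alpha - looks - 1.0
    x_map = (m * b + np.sqrt((m * b) ** 2 + 4.0 * alpha * looks * m * I)) / (2.0 * alpha)

    out = np.where(Ci2 <= Cu2, m, x_map)                # homogeneous areas -> local mean
    return np.where(Ci2 > 1.0 + 2.0 * Cu2, I, out)      # strongly textured/point targets -> keep pixel

speckled = np.random.default_rng(6).gamma(shape=3, scale=1.0 / 3, size=(64, 64))
print(gamma_map_filter(speckled).mean())                # ~1.0: mean preserved on a flat scene
```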
Style APA, Harvard, Vancouver, ISO itp.
30

Silva, Juliana Couras Fernandes. "A theory of spike coding networks with heterogeneous postsynaptic potentials". Master's thesis, 2021. http://hdl.handle.net/10773/32002.

Pełny tekst źródła
Streszczenie:
Modeling biologically realistic neural networks is a challenge for neural theory. While there is increasing evidence that the precise times of spikes play a crucial role in neural computation, building spiking neural networks that resemble the spiking variability encountered in vivo while computing some function is not a trivial task. Boerlin et al. suggested a framework of leaky integrate-and-fire networks that, through a tight excitation-inhibition balance, can track high-dimensional signals while producing spike trains with Poisson-like statistics. Notwithstanding their biologically plausible features, these spike coding networks rely on the instantaneous propagation of spikes to ensure optimal function. Given that such an assumption may not fit the slower timescales of the synapses encountered in the brain, this is a limitation of the model. Thus, with the goal of deriving a model with biologically plausible postsynaptic potentials, in this work we take advantage of the spike coding networks' ability to track high-dimensional signals to transform the problem of predictive tracking into a high-dimensional problem in the temporal domain. By doing so, we were able to gain insights into the properties that such networks should have to remain functional: no coding for the present time; temporal heterogeneity; prediction of the network's estimate according to the dynamics of the signal being tracked. Then, by deriving a network from the same assumptions as Boerlin et al. while enforcing these properties, it was possible to build a spike coding network that tracks multi-dimensional signals without relying on instantaneous communication of spikes.
Modelar redes neuronais com princípios biologicamente plausíveis é um desafio para a neurociência teórica. De facto, há evidência crescente de que os tempos precisos dos potenciais de ação emitidos por um neurónio desempenham um papel crucial na computação neuronal. No entanto, construir redes neuronais funcionais que mimetizem a variabilidade de disparos encontrada in vivo não é uma tarefa trivial. Boerlin et al. sugeriu um modelo de redes leaky integrate-and-fire que, através de um balanço apertado entre excitação e inibição neuronal, conseguem construir uma estimativa de um sinal multi-dimensional em tempo real, usando a combinação ponderada de séries de potenciais de ação com variabilidade do tipo Poisson. Apesar destas plausabilidades biológicas, estas redes codificantes por potenciais de ação sustentam-se na propagação instantânea desta entidade biofísica. Uma vez que esta assunção não vai de encontro às escalas de tempo das sinapses observadas no cérebro, esta é uma limitação do modelo. Assim, tendo como objectivo construir uma rede codificante por potenciais de ação com potenciais pós-sinápticos biologicamente plasíveis, neste trabalho usamos o facto do modelo original destas redes permitir a reconstrução de sinais multi-dimensionais para transformar o problema de reconstrução preditiva num problema multi-dimensional no domínio temporal. Através desta transformação, emergem três propriedades que estas redes devem ter para se manterem funcionais: não codificar o presente; permitir heterogeneidade temporal; prever o futuro da estimativa da rede de acordo com a dinâmica do sinal original. Assim, introduzindo estas propriedades nas assunções originais de Boerlin et al., mostramos que é possível conceber uma rede codificante por potenciais de ação que reconstrua sinais multi-dimensionais sem a necessidade da comunicação instantânea dos mesmos.
Master's in Computational Engineering
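To make the baseline framework concrete, the following is a minimal numpy sketch of a spike coding network tracking a 1-D signal in the spirit of Boerlin et al.: the readout decays and is corrected by spikes whenever a neuron's projected coding error exceeds its threshold. The decoder weights, leak, thresholds and the single-spike-per-step rule are simplifying assumptions, not the thesis's derived model with slow postsynaptic potentials.

```python
import numpy as np

rng = np.random.default_rng(7)
dt, leak, T = 1e-3, 10.0, 2000
t = np.arange(T) * dt
x = np.sin(2 * np.pi * 2 * t)                     # target signal to track

N = 20
gamma = rng.choice([-0.1, 0.1], size=N)           # decoding weights (half positive, half negative)
thresh = gamma ** 2 / 2                           # standard threshold choice in these models

x_hat = 0.0
trace, spikes = np.zeros(T), np.zeros((T, N))
for k in range(T):
    x_hat += dt * (-leak * x_hat)                 # leaky decay of the readout
    V = gamma * (x[k] - x_hat)                    # projected coding error per neuron
    i = int(np.argmax(V - thresh))
    if V[i] > thresh[i]:                          # greedy: at most one spike per step
        spikes[k, i] = 1
        x_hat += gamma[i]                         # spike instantaneously corrects the readout
    trace[k] = x_hat

print("tracking RMSE:", np.sqrt(np.mean((trace - x) ** 2)))
```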
Style APA, Harvard, Vancouver, ISO itp.
31

Salamat, Amirreza. "Heterogeneous Graph Based Neural Network for Social Recommendations with Balanced Random Walk Initialization". Thesis, 2020. http://hdl.handle.net/1805/24769.

Pełny tekst źródła
Streszczenie:
Indiana University-Purdue University Indianapolis (IUPUI)
Research on social networks and understanding the interactions of their users can be modeled as a graph mining task, such as predicting nodes and edges in networks. Dealing with such unstructured data in large social networks has been a challenge for researchers for several years. Neural Networks have recently proven very successful in performing predictions on speech, image, and text data and have become the de facto method when dealing with such data in large volumes. Graph Neural Networks, however, have only recently become mature enough to be used in real large-scale graph prediction tasks, and require proper structure and data modeling to be viable and successful. In this research, we provide a new modeling of the social network which captures the attributes of the nodes from various dimensions. We also introduce the Neural Network architecture that is required for optimally utilizing the new data structure. Finally, in order to provide a hot start for our model, we initialize the weights of the neural network using a pre-trained graph embedding method. We have also developed a new graph embedding algorithm. We first explain how previous graph embedding methods are not optimal for all types of graphs, and then provide a solution on how to overcome those limitations and come up with a new graph embedding method.
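The warm-start idea can be sketched as initializing a recommendation model's embedding table from pre-trained graph-embedding vectors and then fine-tuning them; the random array below stands in for embeddings produced by a balanced-random-walk method, and the scorer architecture is an illustrative assumption.

```python
import numpy as np
import torch
import torch.nn as nn

num_nodes, dim = 1000, 64
pretrained = np.random.default_rng(8).normal(size=(num_nodes, dim)).astype(np.float32)

# Warm-start: embedding table initialized from pre-trained vectors, still trainable.
embedding = nn.Embedding.from_pretrained(torch.from_numpy(pretrained), freeze=False)

class RecScorer(nn.Module):
    """Scores (user, item) pairs from the warm-started embeddings."""
    def __init__(self, emb):
        super().__init__()
        self.emb = emb
        self.mlp = nn.Sequential(nn.Linear(2 * dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, users, items):
        z = torch.cat([self.emb(users), self.emb(items)], dim=-1)
        return self.mlp(z).squeeze(-1)

model = RecScorer(embedding)
print(model(torch.tensor([0, 5]), torch.tensor([42, 7])))   # untrained scores for two toy pairs
```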
Style APA, Harvard, Vancouver, ISO itp.
32

(9740444), Amirreza Salamat. "Heterogeneous Graph Based Neural Network for Social Recommendations with Balanced Random Walk Initialization". Thesis, 2021.

Znajdź pełny tekst źródła
Streszczenie:
Research on social networks and understanding the interactions of their users can be modeled as a graph mining task, such as predicting nodes and edges in networks. Dealing with such unstructured data in large social networks has been a challenge for researchers for several years. Neural Networks have recently proven very successful in performing predictions on speech, image, and text data and have become the de facto method when dealing with such data in large volumes. Graph Neural Networks, however, have only recently become mature enough to be used in real large-scale graph prediction tasks, and require proper structure and data modeling to be viable and successful. In this research, we provide a new modeling of the social network which captures the attributes of the nodes from various dimensions. We also introduce the Neural Network architecture that is required for optimally utilizing the new data structure. Finally, in order to provide a hot start for our model, we initialize the weights of the neural network using a pre-trained graph embedding method. We have also developed a new graph embedding algorithm. We first explain how previous graph embedding methods are not optimal for all types of graphs, and then provide a solution on how to overcome those limitations and come up with a new graph embedding method.
Style APA, Harvard, Vancouver, ISO itp.
33

Alhubail, Ali. "Application of Physics-Informed Neural Networks to Solve 2-D Single-phase Flow in Heterogeneous Porous Media". Thesis, 2021. http://hdl.handle.net/10754/670174.

Pełny tekst źródła
Streszczenie:
Neural networks have recently seen tremendous advancements in applicability in many areas, one of which is their utilization in solving physical problems governed by partial differential equations and the constraints of these equations. Physics-informed neural networks is the name given to such neural networks. They are different from typical neural networks in that they include loss terms that represent the physics of the problem. These terms often include partial derivatives of the neural network outputs with respect to its inputs, and these derivatives are found through the use of automatic differentiation. The purpose of this thesis is to showcase the ability of physics-informed neural networks to solve basic fluid flow problems in homogeneous and heterogeneous porous media. This is done through the utilization of the pressure equation under a set of assumptions as well as the inclusion of Dirichlet and Neumann boundary conditions. The goal is to create a surrogate model that allows for finding the pressure and velocity profiles everywhere inside the domain of interest. In the homogeneous case, minimization of the loss function that included the boundary conditions term and the partial differential equation term allowed for producing results that show good agreement with the results from a numerical simulator. However, in the case of heterogeneous media where there are sharp discontinuities in hydraulic conductivity inside the domain, the model failed to produce accurate results. To resolve this issue, extended physics-informed neural networks were used. This method involves the decomposition of the domain into multiple homogeneous sub-domains. Each sub-domain has its own physics informed neural network structure, equation parameters, and equation constraints. To allow the sub-domains to communicate, interface conditions are placed on the interfaces that separate the different sub-domains. The results from this method matched well with the results of the simulator. In both the homogeneous and heterogeneous cases, neural networks with only one hidden layer with thirty nodes were used. Even with this simple structure for the neural networks, the computations are expensive and a large number of training iterations is required to converge.
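A minimal sketch of a physics-informed loss for single-phase flow with constant conductivity follows: an MLP p(x, y) is penalized for violating the Laplace equation in the interior and Dirichlet pressures on the left/right boundaries, with the derivatives obtained through automatic differentiation. The geometry, boundary values, training budget and the one-hidden-layer, thirty-node network are illustrative assumptions, not the thesis's exact configuration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(2, 30), nn.Tanh(), nn.Linear(30, 1))   # one hidden layer, 30 nodes

def pde_residual(xy):
    xy = xy.requires_grad_(True)
    p = net(xy)
    grads = torch.autograd.grad(p.sum(), xy, create_graph=True)[0]    # [dp/dx, dp/dy]
    p_xx = torch.autograd.grad(grads[:, 0].sum(), xy, create_graph=True)[0][:, 0]
    p_yy = torch.autograd.grad(grads[:, 1].sum(), xy, create_graph=True)[0][:, 1]
    return p_xx + p_yy                                                 # Laplacian of p (constant K)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(500):                                                # short demo run
    interior = torch.rand(256, 2)                                      # collocation points in the unit square
    left = torch.cat([torch.zeros(64, 1), torch.rand(64, 1)], dim=1)   # x = 0, p = 1
    right = torch.cat([torch.ones(64, 1), torch.rand(64, 1)], dim=1)   # x = 1, p = 0
    loss = (pde_residual(interior) ** 2).mean() \
         + ((net(left) - 1.0) ** 2).mean() + (net(right) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print(float(net(torch.tensor([[0.5, 0.5]]))))   # should approach 0.5 (linear pressure drop)
```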
Style APA, Harvard, Vancouver, ISO itp.
34

Mittal, Divyansh. "Robustness of Neural Activity Dynamics in the Medial Entorhinal Cortex". Thesis, 2022. https://etd.iisc.ac.in/handle/2005/5808.

Pełny tekst źródła
Streszczenie:
Biological systems exhibit considerable heterogeneity in their constitutive components and encounter stochasticity across all scales of analysis. Therefore, central questions that span all biological systems are: (a) How does the system manifest robustness in the face of parametric variability and stochasticity? (b) What are the mechanisms used by disparate biological systems to maintain the robustness of physiological outcomes? In this thesis, we chose the mammalian medial entorhinal cortex (MEC) as the model system to systematically study the principles that govern functional robustness across different scales of analysis. At the neuronal level, we assessed the impact of heterogeneities in channel properties on the robustness of cellular-scale physiology of MEC stellate cells and cortical interneurons. We demonstrated the expression of cellular-scale degeneracy, wherein disparate combinations of molecular-scale parameters (e.g., ion channels) yielded similar characteristic physiological properties (e.g., firing rate). These analyses and observations underscored the role of degeneracy as a mechanism to achieve functional robustness in cellular-scale activity despite widespread heterogeneities in the underlying molecular-scale properties. An important cellular-scale signature of MEC stellate cells is their ability to manifest peri-threshold intrinsic oscillations. Although different theoretical frameworks have been proposed to explain these oscillatory patterns, these frameworks do not jointly account for heterogeneities in intrinsic properties of stellate cells and stochasticity in ion-channel and synaptic physiology. In this thesis, using a combination of theoretical, computational, and electrophysiological methods, we argue for heterogeneous stochastic bifurcations as a unifying framework that fully explains peri-threshold activity patterns in MEC stellate cells. We also provide quantitative evidence for stochastic resonance, involving an optimal noise that improves system performance, as a mechanism to enhance robustness of intrinsic peri-threshold oscillations. At the network level, we chose a well-characterized function of the MEC involving grid-patterned activity generation in a 2D continuous attractor network (CAN) model of the MEC. We quantitatively addressed questions on the impact of distinct forms of biological heterogeneities on the functional stability of grid-patterned activity generation in these models. We showed that increasing degrees of biological heterogeneities progressively disrupted the emergence of grid-patterned activity and resulted in progressively larger perturbations in low-frequency neural activity. We postulated that suppressing low-frequency perturbations could ameliorate the disruptive impact of biological heterogeneities on grid-patterned activity. As a physiologically relevant means to suppress low-frequency activity, we introduced intrinsic neuronal resonance either by adding an additional high-pass filter (phenomenological) or by incorporating a slow negative feedback loop (mechanistic) into our model neurons. Strikingly, 2D CAN models with resonating neurons were resilient to the incorporation of heterogeneities and exhibited stable grid-patterned firing. We extended these findings to one-dimensional CAN models built of heterogeneous conductance-based excitatory and inhibitory neuronal models.
We found that slow negative feedback loops, introduced by HCN channels that are naturally endowed with slow restorative properties, stabilized activity propagation in heterogeneous 1D CAN models. Together, these findings established slow negative feedback loops as a mechanism to enhance functional robustness in heterogeneous neural networks. Together, the analyses presented in different parts of this thesis emphasize the need to account for all forms of neural-circuit heterogeneities and stochasticity in assessing robustness of biological function across scales. The findings presented here highlight degeneracy, stochastic resonance, and negative feedback loops as powerful generalized principles and mechanisms that could drive robustness in biological systems across different scales.
Style APA, Harvard, Vancouver, ISO itp.
35

CHU, CHIA-HO, i 朱佳荷. "Application of Neural Networks and Genetic Algorithm to Optimize the Heterogeneous Metals Welding Parameters of Magnesium and Copper". Thesis, 2018. http://ndltd.ncl.edu.tw/handle/vya5st.

Pełny tekst źródła
Streszczenie:
Master's thesis
Huafan University
Master's Program, Department of Industrial Engineering and Management Information
106
Lightweight and strong magnesium alloys and highly conductive copper are widely used in various fields of application. When joining the two, brittle compounds are frequently found at the joint. Typically, industry relies heavily on the experience of master welders, but this often leads to inconsistent quality, and such experience is difficult to pass on, not to mention the multiple quality characteristics involved. For this reason, a process was developed in this study to resolve the multiple-quality-characteristics issue in the heterogeneous welding of magnesium alloy and copper for better weld strength. Mg alloy AZ31B and copper alloy C1100 were selected for this study. The Taguchi method was employed to develop an arc welding experiment using inert gas and tungsten electrodes. Non-destructive tests (thickness and width of the joints and Rockwell hardness) and destructive tests (impact test and tension test) were selected. A series of literature reviews, cause-and-effect analyses and discussions with experts led to the conclusion that the controllable welding factors were welding current and welding rate. The fixed factors were the distance between the tungsten electrode and the plate, the elongation of the tungsten electrode and the flow rate of the shielding gas, and the noise factor was the use of different intermediate layers. Two replicated experiments were run using the L9(3^4) orthogonal array. The measurements were analyzed using S/N ratio analysis, TOPSIS analysis, a back-propagation neural network and a genetic algorithm to identify the best-fit parameter combination for the welding of copper and magnesium alloys.
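The Taguchi signal-to-noise step can be sketched as follows: for each run of an L9(3^4) orthogonal array, repeated strength measurements are summarized with the larger-the-better S/N ratio and the factor levels of the best run are reported. The array layout is the standard L9 design, but the factor assignments and all strength numbers below are illustrative toy values, not the thesis's measurements.

```python
import numpy as np

L9 = np.array([[1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
               [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
               [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1]])      # factor levels per run
strength = np.array([[182, 176], [201, 198], [190, 187],       # two replicates per run (toy MPa values)
                     [210, 214], [196, 189], [175, 181],
                     [205, 199], [188, 192], [183, 180]])

def sn_larger_is_better(y):
    """Larger-the-better S/N ratio: -10 log10(mean(1/y^2)) per run."""
    return -10.0 * np.log10(np.mean(1.0 / y.astype(float) ** 2, axis=1))

sn = sn_larger_is_better(strength)
best = int(np.argmax(sn))
print("best run:", best + 1, "levels:", L9[best], "S/N (dB):", round(sn[best], 2))
```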
Style APA, Harvard, Vancouver, ISO itp.
36

Adhatarao, Sripriya Srikant. "PHOENIX: A Premise to Reinforce Heterogeneous and Evolving Internet Architectures with Exemplary Applications". Thesis, 2020. http://hdl.handle.net/21.11130/00-1735-0000-0005-150A-9.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
37

Jiang, Xinxin. "Mining heterogeneous enterprise data". Thesis, 2018. http://hdl.handle.net/10453/129377.

Pełny tekst źródła
Streszczenie:
University of Technology Sydney. Faculty of Engineering and Information Technology.
Heterogeneity is becoming one of the key characteristics of enterprise data, because globalization and competition stress the importance of leveraging the huge amounts of data that enterprises accumulate across various organizational processes, resources and standards. Effectively deriving meaningful insights from complex, large-scale heterogeneous enterprise data poses an interesting but critical challenge. The aim of this thesis is to investigate the theoretical foundations of mining heterogeneous enterprise data in light of the above challenges and to develop new algorithms and frameworks that are able to effectively and efficiently consider heterogeneity in four elements of the data: objects, events, context, and domains. Objects describe a variety of business roles and instruments involved in business systems. Object heterogeneity means that object information at both the data and structural level is heterogeneous. The cost-sensitive hybrid neural network (Cs-HNN) proposed leverages parallel network architectures and an algorithm specifically designed for minority classification to generate a robust model for learning heterogeneous objects. Events trace an object's behaviours or activities. Event heterogeneity reflects the level of variety in business events and is normally expressed in the type and format of features. The approach proposed in this thesis focuses on fleet tracking as a practical example of an application with a high degree of event heterogeneity. Context describes the environment and circumstances surrounding objects and events. Context heterogeneity reflects the degree of diversity in contextual features. The coupled collaborative filtering (CCF) approach proposed in this thesis is able to provide context-aware recommendations by measuring the non-independent and identically distributed (non-IID) relationships across diverse contexts. Domains are the sources of information and reflect the nature of the business or function that has generated the data. The cross-domain deep learning approach (Cd-DLA) proposed in this thesis provides a potential avenue to overcome the complexity and nonlinearity of heterogeneous domains. Each of the approaches, algorithms, and frameworks for heterogeneous enterprise data mining presented in this thesis outperforms the state-of-the-art methods in a range of backgrounds and scenarios, as evidenced by a theoretical analysis, an empirical study, or both. All outcomes derived from this research have been published or accepted for publication, and the follow-up work has also been recognised, which demonstrates scholarly interest in mining heterogeneous enterprise data as a research topic. However, despite this interest, heterogeneous data mining still holds increasingly attractive opportunities for further exploration and development in both academia and industry.
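One concrete ingredient of cost-sensitive learning for minority classes can be sketched as class-weighted cross-entropy: rare objects are given a larger misclassification cost. The tiny network, class weights and toy data below are illustrative assumptions, not the Cs-HNN architecture itself.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(512, 10)
y = (torch.rand(512) < 0.05).long()                  # ~5% minority class (toy data)

class_weights = torch.tensor([1.0, 19.0])            # penalize minority-class errors more heavily
loss_fn = nn.CrossEntropyLoss(weight=class_weights)

model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
print(float(loss))                                    # final weighted training loss
```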
Style APA, Harvard, Vancouver, ISO itp.
38

Chia-Pin, Wang, i 王嘉斌. "Neural-Network Based handoff algorithm to support QoS Multimedia application in Heterogeneous WLAN Environments". Thesis, 2003. http://ndltd.ncl.edu.tw/handle/93139591775176572911.

Pełny tekst źródła
Streszczenie:
Master's thesis
National Taiwan University
Graduate Institute of Communication Engineering
91
The integration of heterogeneous wireless access networks is expected in the future. It offers mobile users a wider selection of services. However, to support a continuous link as mobile users roam across different networks, the handoff mechanism plays a very important role. Handoff mechanisms consist of handoff decision algorithms and handoff procedure algorithms. For real-time applications, the handoff decision algorithm is much more critical because of its irreversible nature. When considering heterogeneous wireless networks supporting real-time applications, RSS-based handoff decision algorithms with threshold and hysteresis are not suitable. They cannot make handoff decisions efficiently while avoiding unnecessary handoffs. Furthermore, it is difficult for them to deal with the asymmetry of bandwidth and signal power across different wireless access networks. To support multimedia services in heterogeneous wireless access networks, we propose a neural-network-based handoff algorithm. We simulate our handoff algorithm in a heterogeneous WLAN environment as an example. The simulation results show that the proposed neural-network-based handoff algorithm can make handoff decisions efficiently, avoid unnecessary handoffs, and select a higher data rate when available.
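A neural-network handoff decision can be sketched as a small MLP mapping measurements such as WLAN RSS, cellular RSS and available data rate to a handoff/stay decision. The feature set, the toy labeling rule used only to generate training data, and the network size are illustrative assumptions, not the thesis's design.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(10)
n = 1000
rss_wlan = rng.uniform(-95, -40, n)        # dBm
rss_cell = rng.uniform(-110, -60, n)       # dBm
rate_wlan = rng.uniform(1, 54, n)          # Mbps
X = np.column_stack([rss_wlan, rss_cell, rate_wlan])

# Toy rule used only to create labels: hand off to WLAN when its signal is usable
# and it offers a clearly higher data rate.
y = ((rss_wlan > -80) & (rate_wlan > 11)).astype(int)

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)
print(clf.predict([[-65.0, -90.0, 36.0]]))   # expected: [1] (hand off to WLAN)
```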
Style APA, Harvard, Vancouver, ISO itp.
39

Touati, Redha. "Détection de changement en imagerie satellitaire multimodale". Thèse, 2019. http://hdl.handle.net/1866/22662.

Pełny tekst źródła
Streszczenie:
The purpose of this research is to study the detection of temporal changes between two (or more) multimodal satellite images, i.e., between two different imaging modalities acquired by two heterogeneous sensors, giving for the same scene two images encoded differently depending on the nature of the sensor used for each acquisition. The two (or multiple) multimodal satellite images are acquired and co-registered at two different dates, usually before and after an event. In this study, we propose new models belonging to different categories of multimodal change detection in remote sensing imagery. As a first contribution, we present a new constraint scenario expressed on every pair of pixels in the before- and after-change images. A second contribution of our work is a spatio-temporal textural gradient operator expressed with complementary norms, together with a new filtering strategy for the difference map resulting from this operator. Another contribution consists in constructing an observation field from pairs of pixels and inferring a solution in the maximum a posteriori sense. A fourth contribution consists in building a common feature space for the two heterogeneous images. Our fifth contribution lies in modeling patterns of change as anomalies and analyzing reconstruction errors: we propose to learn an unsupervised model from a training base consisting only of no-change patterns, so that the learned model reconstructs the normal (no-change) patterns with a small reconstruction error. In the sixth contribution, we propose a pairwise learning architecture based on a pseudo-siamese CNN that takes as input a pair of data instead of a single input and consists of two partly uncoupled parallel CNN streams (descriptors) followed by a decision network that includes fusion layers and a loss layer based on the entropy criterion. The proposed models are flexible enough to be used effectively in the monomodal change detection case.
Cette recherche a pour objet l’étude de la détection de changements temporels entre deux (ou plusieurs) images satellitaires multimodales, i.e., avec deux modalités d’imagerie différentes acquises par deux capteurs hétérogènes donnant pour la même scène deux images encodées différemment suivant la nature du capteur utilisé pour chacune des prises de vues. Les deux (ou multiples) images satellitaires multimodales sont prises et co-enregistrées à deux dates différentes, avant et après un événement. Dans le cadre de cette étude, nous proposons des nouveaux modèles de détection de changement en imagerie satellitaire multimodale semi ou non supervisés. Comme première contribution, nous présentons un nouveau scénario de contraintes exprimé sur chaque paire de pixels existant dans l’image avant et après changement. Une deuxième contribution de notre travail consiste à proposer un opérateur de gradient textural spatio-temporel exprimé avec des normes complémentaires ainsi qu’une nouvelle stratégie de dé-bruitage de la carte de différence issue de cet opérateur. Une autre contribution consiste à construire un champ d’observation à partir d’une modélisation par paires de pixels et proposer une solution au sens du maximum a posteriori. Une quatrième contribution est proposée et consiste à construire un espace commun de caractéristiques pour les deux images hétérogènes. Notre cinquième contribution réside dans la modélisation des zones de changement comme étant des anomalies et sur l’analyse des erreurs de reconstruction dont nous proposons d’apprendre un modèle non-supervisé à partir d’une base d’apprentissage constituée seulement de zones de non-changement afin que le modèle reconstruit les motifs de non-changement avec une faible erreur. Dans la dernière contribution, nous proposons une architecture d’apprentissage par paires de pixels basée sur un réseau CNN pseudo-siamois qui prend en entrée une paire de données au lieu d’une seule donnée et est constituée de deux flux de réseau (descripteur) CNN parallèles et partiellement non-couplés suivis d’un réseau de décision qui comprend de couche de fusion et une couche de classification au sens du critère d’entropie. Les modèles proposés s’avèrent assez flexibles pour être utilisés efficacement dans le cas des données-images mono-modales.
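The pairwise, pseudo-siamese idea can be sketched as two parallel, uncoupled CNN streams encoding the before/after patches from the two modalities, followed by a fusion head trained with a cross-entropy loss on change vs. no-change. The layer sizes, channel counts and patch shapes below are illustrative assumptions, not the thesis's network.

```python
import torch
import torch.nn as nn

def make_stream(in_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
    )

class PseudoSiamese(nn.Module):
    def __init__(self, ch_a=1, ch_b=3):                  # e.g. SAR (1 band) vs optical (3 bands)
        super().__init__()
        self.stream_a = make_stream(ch_a)                 # weights NOT shared between streams
        self.stream_b = make_stream(ch_b)
        self.decision = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))

    def forward(self, xa, xb):
        fused = torch.cat([self.stream_a(xa), self.stream_b(xb)], dim=1)   # fusion layer
        return self.decision(fused)                       # logits: [no-change, change]

model = PseudoSiamese()
xa, xb = torch.randn(4, 1, 32, 32), torch.randn(4, 3, 32, 32)
logits = model(xa, xb)
loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 1, 0, 1]))
print(logits.shape, float(loss))
```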
Style APA, Harvard, Vancouver, ISO itp.