Academic literature on the topic 'Neural Tensor Network'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Neural Tensor Network.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Neural Tensor Network"

1

Gao, Yuan, Laurence T. Yang, Dehua Zheng, Jing Yang, and Yaliang Zhao. "Quantized Tensor Neural Network." ACM/IMS Transactions on Data Science 2, no. 4 (November 30, 2021): 1–18. http://dx.doi.org/10.1145/3491255.

Full text
Abstract:
Tensor networks, as an effective computing framework for efficient processing and analysis of high-dimensional data, have been successfully applied in many fields. However, the performance of traditional tensor networks still cannot match the strong fitting ability of neural networks, so some data processing algorithms based on tensor networks cannot achieve the same excellent performance as deep learning models. To further improve the learning ability of tensor networks, we propose a quantized tensor neural network (QTNN) in this article, which integrates the advantages of neural networks and tensor networks, namely, the powerful learning ability of neural networks and the simplicity of tensor networks. The QTNN model can be further regarded as a generalized multilayer nonlinear tensor network, which can efficiently extract low-dimensional features of the data while maintaining the original structure information. In addition, to more effectively represent the local information of data, we introduce multiple convolution layers in QTNN to extract the local features. We also develop a high-order back-propagation algorithm for training the parameters of QTNN. We conducted classification experiments on multiple representative datasets to further evaluate the performance of the proposed models, and the experimental results show that QTNN is simpler and more efficient when compared to classic deep learning models.
APA, Harvard, Vancouver, ISO, and other styles
2

Murthy, Garimella Rama. "Multi/Infinite Dimensional Neural Networks, Multi/Infinite Dimensional Logic Theory." International Journal of Neural Systems 15, no. 03 (June 2005): 223–35. http://dx.doi.org/10.1142/s0129065705000190.

Full text
Abstract:
A mathematical model of an arbitrary multi-dimensional neural network is developed and a convergence theorem for an arbitrary multi-dimensional neural network represented by a fully symmetric tensor is stated and proved. The input and output signal states of a multi-dimensional neural network/logic gate are related through an energy function, defined over the fully symmetric tensor (representing the connection structure of a multi-dimensional neural network). The inputs and outputs are related such that the minimum/maximum energy states correspond to the output states of the logic gate/neural network realizing a logic function. Similarly, a logic circuit consisting of the interconnection of logic gates, represented by a block symmetric tensor, is associated with a quadratic/higher degree energy function. Infinite dimensional logic theory is discussed through the utilization of infinite dimension/order tensors.
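The abstract above relates inputs and outputs through an energy function defined over a fully symmetric connection tensor. As a hedged illustration of what such a higher-order energy can look like (a guess at the general cubic form, not the paper's exact definition), here is a minimal numpy evaluation of an order-3 energy for a bipolar state vector; all sizes and values are made up.

```python
import numpy as np

def higher_order_energy(T, x):
    """Cubic energy E(x) = -sum_{i,j,k} T[i,j,k] * x[i] * x[j] * x[k].

    T : fully symmetric order-3 connection tensor, shape (n, n, n)
    x : state vector of the network, e.g. bipolar entries in {-1, +1}
    The quadratic Hopfield energy is the order-2 special case of the same idea.
    """
    return -np.einsum('ijk,i,j,k->', T, x, x, x)

n = 5
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n, n))
# Symmetrize by averaging over all axis permutations
T = (A + A.transpose(0, 2, 1) + A.transpose(1, 0, 2)
       + A.transpose(1, 2, 0) + A.transpose(2, 0, 1) + A.transpose(2, 1, 0)) / 6
x = rng.choice([-1.0, 1.0], size=n)
E = higher_order_energy(T, x)
```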
APA, Harvard, Vancouver, ISO, and other styles
3

Feng, Yu, Xianfeng Xu, and Yun Meng. "Short-Term Load Forecasting with Tensor Partial Least Squares-Neural Network." Energies 12, no. 6 (March 14, 2019): 990. http://dx.doi.org/10.3390/en12060990.

Full text
Abstract:
Short-term load forecasting is very important for power systems. The load is related to many factors which compose tensors. However, tensors cannot be input directly into most traditional forecasting models. This paper proposes a tensor partial least squares-neural network model (TPN) to forecast the power load. The model contains a tensor decomposition outer model and a nonlinear inner model. The outer model extracts common latent variables of tensor input and vector output and makes the residuals less than the threshold by iteration. The inner model determines the relationship between the latent variable matrix and the output by using a neural network. This model structure can preserve the information of tensors and the nonlinear features of the system. Three classical models, partial least squares (PLS), least squares support vector machine (LSSVM) and neural network (NN), are selected to compare the forecasting results. The results show that the proposed model is efficient for short-term load and daily load peak forecasting. Compared to PLS, LSSVM and NN, the TPN has the best forecasting accuracy.
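The abstract above describes a two-stage structure: an outer tensor-decomposition step that extracts latent variables from tensor inputs, and an inner neural network that maps them to the forecast. Purely as a hedged illustration of that structure (not the TPN algorithm itself, which uses tensor partial least squares), the numpy sketch below takes latent variables from an SVD of the unfolded input tensors and feeds them to a one-hidden-layer network; every shape and name is invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: N samples, each an I x J tensor of load-related factors (illustrative only)
N, I, J, r = 200, 6, 4, 3
X = rng.standard_normal((N, I, J))   # tensor inputs
y = rng.standard_normal(N)           # placeholder load targets (training is omitted)

# "Outer" step: unfold each sample and extract r shared latent components
X_unfold = X.reshape(N, I * J)
_, _, Vt = np.linalg.svd(X_unfold - X_unfold.mean(axis=0), full_matrices=False)
T = X_unfold @ Vt[:r].T              # latent variable matrix, shape (N, r)

# "Inner" step: a small neural network maps latent variables to the load
W1, b1 = rng.standard_normal((r, 8)), np.zeros(8)
W2, b2 = rng.standard_normal(8), 0.0
y_hat = np.tanh(T @ W1 + b1) @ W2 + b2   # forward pass only
```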
APA, Harvard, Vancouver, ISO, and other styles
4

Xu, Zenglin. "Tensor Networks Meet Neural Networks." Journal of Physics: Conference Series 2278, no. 1 (May 1, 2022): 012003. http://dx.doi.org/10.1088/1742-6596/2278/1/012003.

Full text
Abstract:
As a simulation of the human cognitive system, deep neural networks have achieved great success in many machine learning tasks and are the main driving force of the current development of artificial intelligence. On the other hand, tensor networks as an approximation of quantum many-body systems in quantum physics are applied to quantum physics, statistical physics, quantum chemistry and machine learning. This talk will first give a brief introduction to neural networks and tensor networks, and then discuss the cross-field research between deep neural networks and tensor networks, such as network compression and knowledge fusion, including our recent work on tensor neural networks. Finally, this talk will also discuss the connection to quantum machine learning.
APA, Harvard, Vancouver, ISO, and other styles
5

Sobolev, Konstantin, Dmitry Ermilov, Anh-Huy Phan, and Andrzej Cichocki. "PARS: Proxy-Based Automatic Rank Selection for Neural Network Compression via Low-Rank Weight Approximation." Mathematics 10, no. 20 (October 14, 2022): 3801. http://dx.doi.org/10.3390/math10203801.

Full text
Abstract:
Low-rank matrix/tensor decompositions are promising methods for reducing the inference time, computation, and memory consumption of deep neural networks (DNNs). This group of methods decomposes the pre-trained neural network weights through low-rank matrix/tensor decomposition and replaces the original layers with lightweight factorized layers. A main drawback of the technique is that it demands a great amount of time and effort to select the best ranks of tensor decomposition for each layer in a DNN. This paper proposes a Proxy-based Automatic tensor Rank Selection method (PARS) that utilizes a Bayesian optimization approach to find the best combination of ranks for neural network (NN) compression. We observe that the decomposition of weight tensors adversely influences the feature distribution inside the neural network and impairs the predictability of the post-compression DNN performance. Based on this finding, a novel proxy metric is proposed to deal with the abovementioned issue and to increase the quality of the rank search procedure. Experimental results show that PARS improves the results of existing decomposition methods on several representative NNs, including ResNet-18, ResNet-56, VGG-16, and AlexNet. We obtain a 3× FLOP reduction with almost no loss of accuracy for ILSVRC-2012 ResNet-18 and a 5.5× FLOP reduction with an accuracy improvement for ILSVRC-2012 VGG-16.
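Methods in this family replace a pre-trained layer's weights with a low-rank factorization; PARS itself addresses the harder question of choosing the ranks. Purely to illustrate the factorization step (not the PARS rank-selection procedure), here is a minimal numpy sketch of splitting one dense layer into two thin layers via truncated SVD; the matrix sizes and the rank are arbitrary examples.

```python
import numpy as np

def factorize_layer(W, rank):
    """Approximate a dense weight matrix W (out_dim x in_dim) with two factors.

    Returns A (out_dim x rank) and B (rank x in_dim), so the original layer
    x -> W @ x becomes the lightweight pair x -> A @ (B @ x), cutting the
    parameter count from out_dim*in_dim to rank*(out_dim + in_dim).
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # absorb the singular values into the left factor
    B = Vt[:rank, :]
    return A, B

# Toy example: compress a 512 x 1024 layer to rank 32
W = np.random.randn(512, 1024)
A, B = factorize_layer(W, rank=32)
x = np.random.randn(1024)
approx_error = np.linalg.norm(W @ x - A @ (B @ x)) / np.linalg.norm(W @ x)
```

Choosing the rank per layer, which this sketch leaves as a fixed number, is exactly the search problem the paper's Bayesian-optimization proxy targets.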
APA, Harvard, Vancouver, ISO, and other styles
6

Wang, Xuezhong, Maolin Che, and Yimin Wei. "Tensor neural network models for tensor singular value decompositions." Computational Optimization and Applications 75, no. 3 (January 20, 2020): 753–77. http://dx.doi.org/10.1007/s10589-020-00167-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Zhan, Tianming, Bo Song, Yang Xu, Minghua Wan, Xin Wang, Guowei Yang, and Zebin Wu. "SSCNN-S: A Spectral-Spatial Convolution Neural Network with Siamese Architecture for Change Detection." Remote Sensing 13, no. 5 (February 27, 2021): 895. http://dx.doi.org/10.3390/rs13050895.

Full text
Abstract:
In this paper, a spectral-spatial convolution neural network with Siamese architecture (SSCNN-S) for hyperspectral image (HSI) change detection (CD) is proposed. First, tensors are extracted in two HSIs recorded at different time points separately and tensor pairs are constructed. The tensor pairs are then incorporated into the spectral-spatial network to obtain two spectral-spatial vectors. Thereafter, the Euclidean distances of the two spectral-spatial vectors are calculated to represent the similarity of the tensor pairs. We use a Siamese network based on contrastive loss to train and optimize the network so that the Euclidean distance output by the network describes the similarity of tensor pairs as accurately as possible. Finally, the values obtained by inputting all tensor pairs into the trained model are used to judge whether a pixel belongs to the change area. SSCNN-S aims to transform the problem of HSI CD into a problem of similarity measurement for tensor pairs by introducing the Siamese network. The network used to extract tensor features in SSCNN-S combines spectral and spatial information to reduce the impact of noise on CD. Additionally, a useful four-test scoring method is proposed to improve the experimental efficiency instead of taking the mean value from multiple measurements. Experiments on real data sets have demonstrated the validity of the SSCNN-S method.
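The core of this abstract is a Siamese comparison: two spectral-spatial vectors are embedded, their Euclidean distance expresses similarity, and a contrastive loss drives training. Below is a minimal numpy sketch of that distance and a generic contrastive loss (the standard formulation, which may differ in details from the paper's); the margin and the vectors are placeholders.

```python
import numpy as np

def contrastive_loss(v1, v2, same_class, margin=1.0):
    """Generic contrastive loss on the Euclidean distance of two embeddings.

    v1, v2     : embedding vectors produced by the two Siamese branches
    same_class : 1 if the pair should be close (unchanged), 0 if far (changed)
    """
    d = np.linalg.norm(v1 - v2)                       # Euclidean distance
    return same_class * d**2 + (1 - same_class) * max(0.0, margin - d)**2

# Placeholder spectral-spatial vectors for one tensor pair
v1 = np.array([0.20, 0.80, 0.10])
v2 = np.array([0.25, 0.75, 0.05])
loss_unchanged = contrastive_loss(v1, v2, same_class=1)
loss_changed = contrastive_loss(v1, v2, same_class=0)
```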
APA, Harvard, Vancouver, ISO, and other styles
8

Hayashi, Kohei. "Exploring Unexplored Tensor Network Decompositions for Convolutional Neural Networks." Brain & Neural Networks 29, no. 4 (December 5, 2022): 193–201. http://dx.doi.org/10.3902/jnns.29.193.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Ling, Julia, Andrew Kurzawski, and Jeremy Templeton. "Reynolds averaged turbulence modelling using deep neural networks with embedded invariance." Journal of Fluid Mechanics 807 (October 18, 2016): 155–66. http://dx.doi.org/10.1017/jfm.2016.615.

Full text
Abstract:
There exists significant demand for improved Reynolds-averaged Navier–Stokes (RANS) turbulence models that are informed by and can represent a richer set of turbulence physics. This paper presents a method of using deep neural networks to learn a model for the Reynolds stress anisotropy tensor from high-fidelity simulation data. A novel neural network architecture is proposed which uses a multiplicative layer with an invariant tensor basis to embed Galilean invariance into the predicted anisotropy tensor. It is demonstrated that this neural network architecture provides improved prediction accuracy compared with a generic neural network architecture that does not embed this invariance property. The Reynolds stress anisotropy predictions of this invariant neural network are propagated through to the velocity field for two test cases. For both test cases, significant improvement versus baseline RANS linear eddy viscosity and nonlinear eddy viscosity models is demonstrated.
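The key architectural idea in this abstract is a multiplicative output layer: hidden layers predict scalar coefficients from tensor invariants, and the anisotropy tensor is a weighted sum of an invariant tensor basis, which is what embeds Galilean invariance. A schematic numpy sketch of that output layer only is given below, assuming the invariants, predicted coefficients, and basis tensors are already computed; the basis size and values here are placeholders, not the paper's full integrity basis.

```python
import numpy as np

def tbnn_output(coeffs, basis_tensors):
    """Multiplicative layer of a tensor-basis network (schematic).

    coeffs        : (num_basis,) coefficients predicted by the hidden layers
                    from scalar invariants of the input tensors
    basis_tensors : (num_basis, 3, 3) invariant tensor basis built from the
                    normalized strain-rate and rotation-rate tensors
    Returns the predicted 3x3 anisotropy tensor as a weighted sum of the basis,
    so any invariance encoded in the basis carries over to the prediction.
    """
    return np.einsum('n,nij->ij', coeffs, basis_tensors)

# Placeholder example with a 4-element basis
coeffs = np.array([0.10, -0.05, 0.02, 0.00])
basis = np.random.randn(4, 3, 3)
b_pred = tbnn_output(coeffs, basis)   # 3x3 anisotropy tensor
```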
APA, Harvard, Vancouver, ISO, and other styles
10

Hameed, Marawan Gamal Abdel, Marzieh S. Tahaei, Ali Mosleh, and Vahid Partovi Nia. "Convolutional Neural Network Compression through Generalized Kronecker Product Decomposition." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 1 (June 28, 2022): 771–79. http://dx.doi.org/10.1609/aaai.v36i1.19958.

Full text
Abstract:
Modern Convolutional Neural Network (CNN) architectures, despite their superiority in solving various problems, are generally too large to be deployed on resource constrained edge devices. In this paper, we reduce memory usage and floating-point operations required by convolutional layers in CNNs. We compress these layers by generalizing the Kronecker Product Decomposition to apply to multidimensional tensors, leading to the Generalized Kronecker Product Decomposition (GKPD). Our approach yields a plug-and-play module that can be used as a drop-in replacement for any convolutional layer. Experimental results for image classification on CIFAR-10 and ImageNet datasets using ResNet, MobileNetv2 and SeNet architectures substantiate the effectiveness of our proposed approach. We find that GKPD outperforms state-of-the-art decomposition methods including Tensor-Train and Tensor-Ring as well as other relevant compression methods such as pruning and knowledge distillation.
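GKPD generalizes the Kronecker product decomposition to multidimensional convolution tensors. As background for the matrix case only (the classical nearest Kronecker product approximation of Van Loan and Pitsianis, not the paper's generalized method), the sketch below approximates a weight matrix by a single Kronecker product via an SVD of a rearranged matrix; the factor shapes are assumptions chosen for the example.

```python
import numpy as np

def nearest_kronecker(W, m1, n1, m2, n2):
    """Best single-term Kronecker approximation W ~ kron(A, B) in Frobenius norm.

    W is (m1*m2) x (n1*n2); A is m1 x n1 and B is m2 x n2. The rearrangement
    turns the problem into a rank-1 approximation solved by the leading SVD pair.
    """
    R = W.reshape(m1, m2, n1, n2).transpose(0, 2, 1, 3).reshape(m1 * n1, m2 * n2)
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    A = np.sqrt(s[0]) * U[:, 0].reshape(m1, n1)
    B = np.sqrt(s[0]) * Vt[0, :].reshape(m2, n2)
    return A, B

# Toy example: a 12 x 8 matrix approximated by kron(A, B) with A 3x2 and B 4x4
W = np.random.randn(12, 8)
A, B = nearest_kronecker(W, m1=3, n1=2, m2=4, n2=4)
error = np.linalg.norm(W - np.kron(A, B)) / np.linalg.norm(W)
# Storing A and B needs 3*2 + 4*4 = 22 numbers instead of 96 for W.
```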
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Neural Tensor Network"

1

Teng, Peiyuan. "Tensor network and neural network methods in physical systems." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1524836522115804.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Bhogi, Keerthana. "Two New Applications of Tensors to Machine Learning for Wireless Communications." Thesis, Virginia Tech, 2021. http://hdl.handle.net/10919/104970.

Full text
Abstract:
With the increasing number of wireless devices and the phenomenal amount of data that is being generated by them, there is a growing interest in the wireless communications community to complement the traditional model-driven design approaches with data-driven machine learning (ML)-based solutions. However, managing the large-scale multi-dimensional data to maintain the efficiency and scalability of the ML algorithms has obviously been a challenge. Tensors provide a useful framework to represent multi-dimensional data in an integrated manner by preserving relationships in data across different dimensions. This thesis studies two new applications of tensors to ML for wireless communications where the tensor structure of the concerned data is exploited in novel ways. The first contribution of this thesis is a tensor learning-based low-complexity precoder codebook design technique for a full-dimension multiple-input multiple-output (FD-MIMO) system with a uniform planar antenna (UPA) array at the transmitter (Tx) whose channel distribution is available through a dataset. Represented as a tensor, the FD-MIMO channel is further decomposed using a tensor decomposition technique to obtain an optimal precoder which is a function of Kronecker-Product (KP) of two low-dimensional precoders, each corresponding to the horizontal and vertical dimensions of the FD-MIMO channel. From the design perspective, we have made contributions in deriving a criterion for optimal product precoder codebooks using the obtained low-dimensional precoders. We show that this product codebook design problem is an unsupervised clustering problem on a Cartesian Product Grassmann Manifold (CPM), where the optimal cluster centroids form the desired codebook. We further simplify this clustering problem to a $K$-means algorithm on the low-dimensional factor Grassmann manifolds (GMs) of the CPM which correspond to the horizontal and vertical dimensions of the UPA, thus significantly reducing the complexity of precoder codebook construction when compared to the existing codebook learning techniques. The second contribution of this thesis is a tensor-based bandwidth-efficient gradient communication technique for federated learning (FL) with convolutional neural networks (CNNs). Concisely, FL is a decentralized ML approach that allows to jointly train an ML model at the server using the data generated by the distributed users coordinated by a server, by sharing only the local gradients with the server and not the raw data. Here, we focus on efficient compression and reconstruction of convolutional gradients at the users and the server, respectively. To reduce the gradient communication overhead, we compress the sparse gradients at the users to obtain their low-dimensional estimates using compressive sensing (CS)-based technique and transmit to the server for joint training of the CNN. We exploit a natural tensor structure offered by the convolutional gradients to demonstrate the correlation of a gradient element with its neighbors. We propose a novel prior for the convolutional gradients that captures the described spatial consistency along with its sparse nature in an appropriate way. We further propose a novel Bayesian reconstruction algorithm based on the Generalized Approximate Message Passing (GAMP) framework that exploits this prior information about the gradients. Through the numerical simulations, we demonstrate that the developed gradient reconstruction method improves the convergence of the CNN model.
Master of Science
The increase in the number of wireless and mobile devices has led to the generation of massive amounts of multi-modal data at the users in various real-world applications including wireless communications. This has led to an increasing interest in machine learning (ML)-based data-driven techniques for communication system design. The native setting of ML is centralized, where all the data is available on a single device. However, the distributed nature of the users and their data has also motivated the development of distributed ML techniques. Since the success of ML techniques is grounded in their data-based nature, there is a need to maintain the efficiency and scalability of the algorithms to manage the large-scale data. Tensors are multi-dimensional arrays that provide an integrated way of representing multi-modal data. Tensor algebra and tensor decompositions have enabled the extension of several classical ML techniques to tensor-based ML techniques in various application domains such as computer vision, data-mining, image processing, and wireless communications. Tensor-based ML techniques have been shown to improve the performance of the ML models because of their ability to leverage the underlying structural information in the data. In this thesis, we present two new applications of tensors to ML for wireless applications and show how the tensor structure of the concerned data can be exploited and incorporated in different ways. The first contribution is a tensor learning-based precoder codebook design technique for full-dimension multiple-input multiple-output (FD-MIMO) systems where we develop a scheme for designing low-complexity product precoder codebooks by identifying and leveraging a tensor representation of the FD-MIMO channel. The second contribution is a tensor-based gradient communication scheme for a decentralized ML technique known as federated learning (FL) with convolutional neural networks (CNNs), where we design a novel bandwidth-efficient gradient compression-reconstruction algorithm that leverages a tensor structure of the convolutional gradients. The numerical simulations in both applications demonstrate that exploiting the underlying tensor structure in the data provides significant gains in their respective performance criteria.
APA, Harvard, Vancouver, ISO, and other styles
3

Rajbhandari, Samyam. "Locality Optimizations for Regular and Irregular Applications." The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1469033289.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Kuchar, Olga Anna. "Development of animated finger movements via a neural network for tendon tension control." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq39322.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Choi, Ki Sueng. "Characterizing structural neural networks in major depressive disorder using diffusion tensor imaging." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50353.

Full text
Abstract:
Diffusion tensor imaging (DTI) is a noninvasive MRI technique used to assess white matter (WM) integrity, fiber orientation, and structural connectivity (SC) using water diffusion properties. DTI techniques are rapidly evolving and are now having a dramatic effect on depression research. Major depressive disorder (MDD) is highly prevalent and a leading cause of worldwide disability. Despite decades of research, the neurobiology of MDD remains poorly understood. MDD is increasingly viewed as a disorder of neural circuitry in which a network of brain regions involved in mood regulation is dysfunctional. In an effort to better understand the neurobiology of MDD and develop more effective treatments, much research has focused on delineating the structure of this mood regulation network. Although many studies have focused on the structural connectivity of the mood regulation network, findings using DTI are highly variable, likely due to many technical and analytical limitations. Further, structural connectivity pattern analyses have not been adequately utilized in specific clinical contexts where they would likely have high relevance, e.g., the use of white matter deep brain stimulation (DBS) as an investigational treatment for depression. In this dissertation, we performed a comprehensive analysis of structural WM integrity in a large sample of depressed patients and demonstrated that disruption of WM does not play a major role in the neurobiology of MDD. Using graph theory analysis to assess organization of neural network, we elucidated the importance of the WM network in MDD. As an extension of this WM network analysis, we identified the necessary and sufficient WM tracts (circuit) that mediate the response of subcallosal cingulate cortex DBS treatment for depression; this work showed that such analyses may be useful in prospective target selection. Collectively, these findings contribute to better understanding of depression as a neural network disorder and possibly will improve efficacy of SCC DBS.
APA, Harvard, Vancouver, ISO, and other styles
6

Elhag, Taha Mahmoud Salih. "Tender price modelling : artificial neural networks and regression techniques." Thesis, University of Liverpool, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.400240.

Full text
Abstract:
Cost modelling in construction is the art and science of developing a reliable and effective estimation of the tender price of a project. Cost estimation is an experience-based task, which involves evaluations of unknown circumstances and complex relationships of cost-influencing factors. Researchers argue that cost model developments lack rigour and a consistent conceptual framework within which the performance of different models may be compared and evaluated. This study analyses construction cost models by classifying them into three groups according to the techniques used. These include deterministic models (regression analysis); probabilistic models (Monte Carlo simulation); and artificial intelligence models (neural networks). This research investigates the development of two methodologies for tender price estimation of buildings utilising neural computing and regression techniques. The emphasis is to provide clients and practitioners with a reliable tool, which would offer trustworthy advice and prediction of tender prices at an early stage of a construction project. The analysis in this research is based upon a data set of 230 office projects, newly constructed in the UK between 1983 and 1997. The cost data of these buildings consists of tender prices and 13 other cost-influencing factors. The data were extracted using the Building Cost Information Service (BCIS) database of the Royal Institution of Chartered Surveyors (RICS). Questionnaire survey and interviews were adopted to identify, evaluate and rank cost-significant factors according to their degree of influence on tender prices. The practitioners involved in this stage were UK-based quantity surveyors. Some of these cost variables formulate the basis for developing the tender estimation models. Cluster analysis was conducted to categorise the data set into more homogeneous project groups based upon the cost variables. The hypothesis is that developing estimation models using project categories would yield better performance and more efficient models. Self-Organising Maps (SOM), a type of neural network, are used for the cluster analysis. Seventeen neural networks and thirteen regression models are developed for tender price estimation using different parameters and cost factors. The performance and efficiency of these models are analysed and compared before and after the cluster analysis of the data set. On the other hand, sensitivity analysis is conducted by developing fifty-five models to evaluate the effectiveness of different combinations of network parameters on the accuracy of tender price estimation. The research findings indicate that, when the whole data set of 230 office projects is used, both methodologies produced low accuracy and failed to map the relationship between the tender price and the selected influencing cost factors. On the contrary, after clustering the data set into coherent groups using Kohonen neural networks, the performance of both RA and ANN models increased dramatically, with many estimation accuracies above 80% and 90%, which is highly satisfactory for tender price estimation at an early stage of a project. The outcomes imply that: (a) clustering the projects into homogeneous categories is significant and key for model performance and accuracy; (b) after cluster analysis there is no significant difference in the performance of RA and ANN models, although the RA outperformed the ANN in some models.
The results also reveal that for both methodologies the accuracy of the estimation models that utilised two cost factors (project area and duration) outperformed the estimation models that used 13 cost factors, which is an indication that area and duration are the most dominant cost determinant variables.
APA, Harvard, Vancouver, ISO, and other styles
7

Chen, Cong. "High-Dimensional Generative Models for 3D Perception." Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/103948.

Full text
Abstract:
Modern robotics and automation systems require high-level reasoning capability in representing, identifying, and interpreting the three-dimensional data of the real world. Understanding the world's geometric structure by visual data is known as 3D perception. The necessity of analyzing irregular and complex 3D data has led to the development of high-dimensional frameworks for data learning. Here, we design several sparse learning-based approaches for high-dimensional data that effectively tackle multiple perception problems, including data filtering, data recovery, and data retrieval. The frameworks offer generative solutions for analyzing complex and irregular data structures without prior knowledge of data. The first part of the dissertation proposes a novel method that simultaneously filters point cloud noise and outliers as well as completing missing data by utilizing a unified framework consisting of a novel tensor data representation, an adaptive feature encoder, and a generative Bayesian network. In the next section, a novel multi-level generative chaotic Recurrent Neural Network (RNN) has been proposed using a sparse tensor structure for image restoration. In the last part of the dissertation, we discuss the detection followed by localization, where we discuss extracting features from sparse tensors for data retrieval.
Doctor of Philosophy
The development of automation systems and robotics brought the modern world unrivaled affluence and convenience. However, the current automated tasks are mainly simple repetitive motions. Tasks that require more artificial capability with advanced visual cognition are still an unsolved problem for automation. Many of the high-level cognition-based tasks require the accurate visual perception of the environment and dynamic objects from the data received from the optical sensor. The capability to represent, identify and interpret complex visual data for understanding the geometric structure of the world is 3D perception. To better tackle the existing 3D perception challenges, this dissertation proposed a set of generative learning-based frameworks on sparse tensor data for various high-dimensional robotics perception applications: underwater point cloud filtering, image restoration, deformation detection, and localization. Underwater point cloud data is relevant for many applications such as environmental monitoring or geological exploration. The data collected with sonar sensors are however subjected to different types of noise, including holes, noise measurements, and outliers. In the first chapter, we propose a generative model for point cloud data recovery using Variational Bayesian (VB) based sparse tensor factorization methods to tackle these three defects simultaneously. In the second part of the dissertation, we propose an image restoration technique to tackle missing data, which is essential for many perception applications. An efficient generative chaotic RNN framework has been introduced for recovering the sparse tensor from a single corrupted image for various types of missing data. In the last chapter, a multi-level CNN for high-dimension tensor feature extraction for underwater vehicle localization has been proposed.
APA, Harvard, Vancouver, ISO, and other styles
8

Liu, Jingrong. "Design and Analysis of Intelligent Fuzzy Tension Controllers for Rolling Mills." Thesis, University of Waterloo, 2002. http://hdl.handle.net/10012/848.

Full text
Abstract:
This thesis presents a fuzzy logic controller aimed at maintaining constant tension between two adjacent stands in tandem rolling mills. The fuzzy tension controller monitors tension variation by resorting to electric current comparison of different operation modes and sets the reference for speed controller of the upstream stand. Based on modeling the rolling stand as a single input single output linear discrete system, which works in the normal mode and is subject to internal and external noise, the element settings and parameter selections in the design of the fuzzy controller are discussed. To improve the performance of the fuzzy controller, a dynamic fuzzy controller is proposed. By switching the fuzzy controller elements in relation to the step response, both transient and stationary performances are enhanced. To endow the fuzzy controller with intelligence of generalization, flexibility and adaptivity, self-learning techniques are introduced to obtain fuzzy controller parameters. With the inclusion of supervision and concern for conventional control criteria, the parameters of the fuzzy inference system are tuned by a backward propagation algorithm or their optimal values are located by means of a genetic algorithm. In simulations, the neuro-fuzzy tension controller exhibits the real-time applicability, while the genetic fuzzy tension controller reveals an outstanding global optimization ability.
APA, Harvard, Vancouver, ISO, and other styles
9

Melo, Mirthys Marinho do Carmo. "Modelagem baseada em redes neurais de meios de produção de biossurfactantes." Universidade Católica de Pernambuco, 2011. http://tede2.unicap.br:8080/handle/tede/608.

Full text
Abstract:
The success of artificial neural network (ANN) applications as an alternative modeling technique to response surface methodology (RSM) has attracted interest from major industries such as pharmaceuticals, cosmetics, food, petroleum and surfactants, among others. Development of production media is a strategic area for the biosurfactant industry because it increases efficiency and reduces process costs. In this area, surface tension measurements and emulsification activity have been routinely used for indirect monitoring of biosurfactant production. In this work, the capabilities of ANN-based modeling and RSM were compared for estimating the surface tension of biosurfactant production media. The two techniques used experimental data from a central composite design with four axial points and three replicates at the central point. The concentrations of ammonium sulfate and monobasic potassium phosphate were used as independent variables. The surface tensions of cell-free broths, at 96 h, of media for biosurfactant production by Candida lipolytica UCP 988 in sea water were used as the response variable. The results demonstrated the superiority of the ANN-based methodology. The quadratic model obtained using RSM showed a coefficient of determination equal to 0.43 and a highly significant lack of fit. The fit of the ANN-based model to the experimental data was excellent. Simulations with the model using the training, validation and test sets showed root mean squared errors (RMSE) of less than 0.05 and coefficients of determination higher than 0.99. In this context, ANN-based estimation of surface tension from the constituents of biosurfactant production media showed to be an efficient, reliable and economical method to monitor biosurfactant production. The work also showed the ability of the yeast Candida lipolytica UCP 0988 to use corn oil and produce biosurfactants in extremely alkaline sea water (initial pH 14), supplemented with sources of nitrogen and phosphorus.
APA, Harvard, Vancouver, ISO, and other styles
10

Sozgen, Burak. "Neural Network And Regression Models To Decide Whether Or Not To Bid For A Tender In Offshore Petroleum Platform Fabrication Industry." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610820/index.pdf.

Full text
Abstract:
In this thesis, three methods are presented to model the decision process of whether or not to bid for a tender in offshore petroleum platform fabrication. Sample data and the assessment based on this data are gathered from an offshore petroleum platform fabrication company, and this information is analyzed to understand the significant parameters in the industry. The alternative methods, "Regression Analysis", "Neural Network Method" and "Fuzzy Neural Network Method", are used for modeling of the bidding decision process. The regression analysis examines the data statistically, whereas the neural network method and fuzzy neural network method are based on artificial intelligence. The models are developed using the bidding data compiled from the offshore petroleum platform fabrication projects. In order to compare the prediction performance of these methods, the "Cross Validation Method" is utilized. The models developed in this study are compared with the bidding decision method used by the company. The results of the analyses show that regression analysis and the neural network method manage to have a prediction performance of 80% and the fuzzy neural network has a prediction performance of 77.5%, whereas the method used by the company has a prediction performance of 47.5%. The results reveal that the suggested models achieve significant improvement over the existing method for making the correct bidding decision.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Neural Tensor Network"

1

Balakrishnan, Kaushik. TensorFlow Reinforcement Learning Quick Start Guide: Get up and Running with Training and Deploying Intelligent, Self-Learning Agents Using Python. Packt Publishing, Limited, 2019.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Churchland, Patricia Smith. Inference to the Best Decision. Edited by John Bickle. Oxford University Press, 2009. http://dx.doi.org/10.1093/oxfordhb/9780195304787.003.0017.

Full text
Abstract:
This article examines the concept of the so-called inference to the best decision in relation to neurobiology. It explains that the idea of inference to the best decision, often referred to by philosophers as abduction, is also known in experimental psychology as case-based reasoning. The article discusses the tension that has developed between the sanctity of the “ought/is” dogma and what is known about the neurobiology of social behavior and suggests that the cognitive process that we loosely call inference to the best decision is a solution to this tension. It also expresses optimism that psychology and neuroscience will eventually uncover at least the general principles concerning how neural networks perform these functions and that the two domains of explaining and deciding will have much in common.
APA, Harvard, Vancouver, ISO, and other styles
3

Hilgurt, S. Ya, and O. A. Chemerys. Reconfigurable signature-based information security tools of computer systems. PH “Akademperiodyka”, 2022. http://dx.doi.org/10.15407/akademperiodyka.458.297.

Full text
Abstract:
The book is devoted to the research and development of methods for combining computational structures for reconfigurable signature-based information protection tools for computer systems and networks in order to increase their efficiency. Network security tools based, among others, on such AI-based approaches as deep neural networking, despite the great progress shown in recent years, still suffer from a nonzero recognition error probability. Even a low probability of such an error in a critical infrastructure can be disastrous. Therefore, signature-based recognition methods with their theoretically exact matching feature are still relevant when creating information security systems such as network intrusion detection systems, antivirus, anti-spam, and worm-containment systems. The real-time multi-pattern string matching task has been a major performance bottleneck in such systems. To speed up the recognition process, developers use a reconfigurable hardware platform based on FPGA devices. Such a platform provides almost software-level flexibility and near-ASIC performance. The most important component of a signature-based information security system in terms of efficiency is the recognition module, in which the multi-pattern matching task is directly solved. It must not only check each byte of input data at speeds of tens and hundreds of gigabits/sec against hundreds of thousands or even millions of patterns in the signature database, but also change its structure every time a new signature appears or the operating conditions of the protected system change. As a result of the analysis of numerous examples of the development of reconfigurable information security systems, the three most promising approaches to the construction of hardware circuits of recognition modules were identified, namely content-addressable memory based on digital comparators, Bloom filters and Aho–Corasick finite automata. A method for fast quantification of components of the recognition module and the entire system was proposed. The method makes it possible to exclude resource-intensive procedures for synthesizing digital circuits on FPGAs when building complex reconfigurable information security systems and their components. To improve the efficiency of the systems under study, structural-level combinational methods are proposed, which allow combining several matching schemes built on different approaches and their modifications into a single recognition device, in such a way that their advantages are enhanced and disadvantages are eliminated. In order to achieve the maximum efficiency of combining methods, optimization methods are used. The methods of parallel combining, sequential cascading and vertical junction have been formulated and investigated. The principle of multi-level combining of combining methods is also considered and researched. Algorithms for the implementation of the proposed combining methods have been developed. Software has been created that allows experiments to be conducted with the developed methods and tools. Quantitative estimates are obtained for increasing the efficiency of constructing recognition modules as a result of using combination methods. The issue of optimization of reconfigurable devices presented in hardware description languages is considered. A modification of the method of affine transformations, which allows parallelizing loops that cannot be optimized by other methods, was presented.
In order to facilitate the practical application of the developed methods and tools, a web service using high-performance computer technologies of grid and cloud computing was considered. The proposed methods to increase the efficiency of the matching procedure can also be used to solve important problems in other fields of science such as data mining, analysis of DNA molecules, etc. Keywords: information security, signature, multi-pattern matching, FPGA, structural combining, efficiency, optimization, hardware description language.
APA, Harvard, Vancouver, ISO, and other styles
4

Hupaniittu, Outi, and Ulla-Maija Peltonen, eds. Arkistot ja kulttuuriperintö. SKS Finnish Literature Society, 2021. http://dx.doi.org/10.21435/tl.268.

Full text
Abstract:
Archives and the Cultural Heritage The edited volume Archives and the Cultural Heritage focuses on archives as institutions and to their tense relationship with archives as material. These dynamics are discussed in respect of the past, the present, and the future. The focus lies in the mechanisms the Finnish archive institutions have utilised when taking part in forming the cultural heritage and in debating the importance of the private archives in society. Within social sciences and history from the early 1990s onwards, the effects of globalisation have been seen as a new focal point for research. Momentarily, the archives saw the same paradigm shift as the focus of the archival studies proceeded from state to society. This brought forth the notion that the values of society are reflected in the acquisition of archival material. This archival turn draws attention to the archives as entities formed by cultural practices. The volume discusses cultural heritage within Finnish archives with diverse perspectives and from various time periods. The key concepts are cultural heritage and archives – both as institution and as material. Articles review the formation of archival collections spanning from the 19th to the 21st century and highlight that the archives have never been neutral or objective actors; rather, they have always been an active process of remembering and forgetting, a matter of inclusion and exclusion. The focus is on private archives and on the choices that guided the creation of the archives and the cultural perceptions and power structures associated with them. Although private archives have considerable social and research value, and although their material complements the picture of society provided by documentary data produced by public administrations, they have only risen to the theoretical discussions in the 21st century. The authors consider what has happened before the material ends up in the archive, what happens in the archive and what can be deduced from this. It shows how archival solutions manifest themselves, how they have influenced research and how they still affect it. One of the key questions is whose past has been preserved and whose is deemed worthy of preservation. Under what conditions have the permanently preserved documents been selected and how can they be accessed? In addition, the volume pays attention to whose documents have been ignored or forgotten, as well as to the networks and power of the individuals within the archival institution and to the politics of memory. The Archives and the Cultural Heritage is an opening to a discussion on the mechanisms, practices and goals of Finnish archival activities. It challenges archival organisations to reflect on their own operating models and to make visible their own conscious or unconscious choices. It raises awareness of the formation of the Finnish documentary cultural heritage, produces new information about private archives and participates in the scientific debate on the changing significance of archives in society. The volume is related to the Academy of Finland research project “Making and Interpreting National Pasts – Role of Finnish Archives as Networks of Power and Sites of Memory” (no 25257, 2011–2014/2019), University of Turku. Project partners Finnish Literature Society (SKS) and Society of Swedish Literature in Finland (SLS).
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Neural Tensor Network"

1

Huang, Hantao, and Hao Yu. "Tensor-Solver for Deep Neural Network." In Compact and Fast Machine Learning Accelerator for IoT Devices, 63–105. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-3323-1_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Li, Qiuyue, Nianwen Ning, Bin Wu, and Wenying Guo. "Embedding-Based Network Alignment Using Neural Tensor Networks." In Knowledge Science, Engineering and Management, 401–13. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-82153-1_33.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Ishibashi, Hideaki, Ryota Shinriki, Hirohisa Isogai, and Tetsuo Furukawa. "Multilevel–Multigroup Analysis Using a Hierarchical Tensor SOM Network." In Neural Information Processing, 459–66. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46675-0_50.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Zheng, Yanwei, Yang Zhou, Zengrui Zhao, and Dongxiao Yu. "Adaptive Tensor-Train Decomposition for Neural Network Compression." In Parallel and Distributed Computing, Applications and Technologies, 70–81. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-69244-5_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Guo, Xiaoyu, Yan Liu, Xianmin Meng, and Lian Liu. "User Identity Linkage Across Social Networks Based on Neural Tensor Network." In Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, 162–71. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-66922-5_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Li, Wei, and Yunfang Wu. "Hierarchical Gated Recurrent Neural Tensor Network for Answer Triggering." In Lecture Notes in Computer Science, 287–94. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-69005-6_24.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Wang, Jingchu, Jianyi Liu, Feiyu Chen, Teng Lu, Hua Huang, and Jinmeng Zhao. "Cross-Knowledge Graph Entity Alignment via Neural Tensor Network." In Proceeding of 2021 International Conference on Wireless Communications, Networking and Applications, 66–74. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-2456-9_8.

Full text
Abstract:
With the expansion of the current knowledge graph scale and the increase of the number of entities, a large number of knowledge graphs express the same entity in different ways, so the importance of knowledge graph fusion is increasingly manifested. Traditional entity alignment algorithms have limited application scope and low efficiency. This paper proposes an entity alignment method based on neural tensor network (NtnEA), which can obtain the inherent semantic information of text without being restricted by linguistic features and structural information, and without relying on string information. On the three cross-lingual data sets DBPFR−EN, DBPZH−EN and DBPJP−EN of the DBP15K data set, Mean Reciprocal Rank (MRR) and Hits@k are used as the evaluation indicators for entity alignment tasks. Compared with the existing entity alignment methods MTransE, IPTransE, AlignE and AVR-GCN, the Hits@10 values of the NtnEA method are 85.67, 79.20, and 78.93, and the MRR values are 0.558, 0.511, and 0.499, which are better than traditional methods, with an average improvement of 10.7%.
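Since MRR and Hits@k are the evaluation indicators quoted in this abstract (and throughout the entity-alignment literature), a small self-contained helper that computes them from the ranks of the ground-truth entities is sketched below; the example ranks are made up.

```python
import numpy as np

def mrr_and_hits(ranks, k=10):
    """Mean Reciprocal Rank and Hits@k from 1-based ranks of the true entities.

    ranks : iterable of the positions at which each ground-truth aligned entity
            was retrieved among the ranked candidates (1 = top of the list)
    """
    ranks = np.asarray(list(ranks), dtype=float)
    mrr = float(np.mean(1.0 / ranks))
    hits_at_k = float(np.mean(ranks <= k))
    return mrr, hits_at_k

# Made-up example: four test pairs ranked at positions 1, 3, 12 and 2
mrr, hits10 = mrr_and_hits([1, 3, 12, 2], k=10)   # -> (~0.479, 0.75)
```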
APA, Harvard, Vancouver, ISO, and other styles
8

Bai, Yalong, Jianlong Fu, Tiejun Zhao, and Tao Mei. "Deep Attention Neural Tensor Network for Visual Question Answering." In Computer Vision – ECCV 2018, 21–37. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01258-8_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Wang, Qi, Xiaohong Xiang, and Jun Zhao. "ML-TFN: Multi Layers Tensor Fusion Network for Affective Video Content Analysis." In Neural Computing for Advanced Applications, 184–96. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-6142-7_14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Yang, Yongxin, and Timothy M. Hospedales. "Unifying Multi-domain Multitask Learning: Tensor and Neural Network Perspectives." In Domain Adaptation in Computer Vision Applications, 291–309. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-58347-1_16.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Neural Tensor Network"

1

Kasiviswanathan, Shiva Prasad, Nina Narodytska, and Hongxia Jin. "Network Approximation using Tensor Sketching." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/321.

Full text
Abstract:
Deep neural networks are powerful learning models that achieve state-of-the-art performance on many computer vision, speech, and language processing tasks. In this paper, we study a fundamental question that arises when designing deep network architectures: Given a target network architecture, can we design a 'smaller' network architecture that 'approximates' the operation of the target network? The question is, in part, motivated by the challenge of parameter reduction (compression) in modern deep neural networks, as the ever increasing storage and memory requirements of these networks pose a problem in resource constrained environments. In this work, we focus on deep convolutional neural network architectures, and propose a novel randomized tensor sketching technique that we utilize to develop a unified framework for approximating the operation of both the convolutional and fully connected layers. By applying the sketching technique along different tensor dimensions, we design changes to the convolutional and fully connected layers that substantially reduce the number of effective parameters in a network. We show that the resulting smaller network can be trained directly, and has a classification accuracy that is comparable to the original network.
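The abstract describes sketching weight tensors along different dimensions to shrink layers. As generic background only (a standard count-sketch primitive of the kind such randomized sketching methods build on, not the paper's specific construction), a short numpy sketch follows; the dimensions and seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def count_sketch(x, sketch_dim, buckets, signs):
    """Project x (length d) into sketch_dim buckets with random signs."""
    s = np.zeros(sketch_dim)
    np.add.at(s, buckets, signs * x)   # each coordinate lands in one signed bucket
    return s

d, k = 1024, 128                           # original and sketched dimensions (illustrative)
buckets = rng.integers(0, k, size=d)       # random bucket index per input coordinate
signs = rng.choice([-1.0, 1.0], size=d)    # random sign per input coordinate
x = rng.standard_normal(d)
x_sketch = count_sketch(x, k, buckets, signs)   # 128-dimensional summary of x
```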
APA, Harvard, Vancouver, ISO, and other styles
2

Chen, Huiyuan, and Jing Li. "Learning Data-Driven Drug-Target-Disease Interaction via Neural Tensor Network." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/477.

Full text
Abstract:
Precise medicine recommendations provide more effective treatments and cause fewer drug side effects. A key step is to understand the mechanistic relationships among drugs, targets, and diseases. Tensor-based models have the ability to explore drug-target-disease relationships based on large amounts of labeled data. However, existing tensor models fail to capture complex nonlinear dependencies among tensor data. In addition, rich medical knowledge is far less studied, which may lead to unsatisfactory results. Here we propose a Neural Tensor Network (NeurTN) to assist personalized medicine treatments. NeurTN seamlessly combines tensor algebra and deep neural networks, which offers a more powerful way to capture the nonlinear relationships among drugs, targets, and diseases. To leverage medical knowledge, we augment NeurTN with geometric neural networks to capture the structural information of both drugs' chemical structures and targets' sequences. Extensive experiments on real-world datasets demonstrate the effectiveness of the NeurTN model.
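For readers unfamiliar with the basic building block these works extend, a schematic numpy version of the classic neural tensor network scoring layer (a bilinear tensor term plus a standard feed-forward term, as popularized by Socher et al.) is sketched below. It is background for the term "neural tensor network", not the NeurTN architecture of this paper, and all dimensions are illustrative.

```python
import numpy as np

def ntn_score(e1, e2, W, V, b, u):
    """Classic neural tensor network scoring layer (schematic).

    e1, e2 : (d,)      embeddings of the two entities being related
    W      : (k, d, d) slices of the relation tensor
    V      : (k, 2d)   standard feed-forward weights
    b      : (k,)      bias
    u      : (k,)      output weights
    Returns a scalar plausibility score for the (e1, relation, e2) triple.
    """
    bilinear = np.einsum('i,kij,j->k', e1, W, e2)       # e1^T W_k e2 for each slice k
    linear = V @ np.concatenate([e1, e2])
    return u @ np.tanh(bilinear + linear + b)

d, k = 16, 4
rng = np.random.default_rng(0)
score = ntn_score(rng.standard_normal(d), rng.standard_normal(d),
                  rng.standard_normal((k, d, d)), rng.standard_normal((k, 2 * d)),
                  rng.standard_normal(k), rng.standard_normal(k))
```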
APA, Harvard, Vancouver, ISO, and other styles
3

Tjandra, Andros, Sakriani Sakti, Ruli Manurung, Mirna Adriani, and Satoshi Nakamura. "Gated Recurrent Neural Tensor Network." In 2016 International Joint Conference on Neural Networks (IJCNN). IEEE, 2016. http://dx.doi.org/10.1109/ijcnn.2016.7727233.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Jankovic, Marko, Masashi Sugiyama, and Branimir Reljin. "Tensor based image segmentation." In 2008 9th Symposium on Neural Network Applications in Electrical Engineering. IEEE, 2008. http://dx.doi.org/10.1109/neurel.2008.4685595.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Jankovic, Marko V., and Branimir Reljin. "Nonnegative contraction/averaging tensor factorization." In 2010 10th Symposium on Neural Network Applications in Electrical Engineering (NEUREL 2010). IEEE, 2010. http://dx.doi.org/10.1109/neurel.2010.5644083.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Palzer, David, and Brian Hutchinson. "The Tensor Deep Stacking Network Toolkit." In 2015 International Joint Conference on Neural Networks (IJCNN). IEEE, 2015. http://dx.doi.org/10.1109/ijcnn.2015.7280297.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Tjandra, Andros, Sakriani Sakti, and Satoshi Nakamura. "Compressing recurrent neural network with tensor train." In 2017 International Joint Conference on Neural Networks (IJCNN). IEEE, 2017. http://dx.doi.org/10.1109/ijcnn.2017.7966420.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Tjandra, Andros, Sakriani Sakti, and Satoshi Nakamura. "Tensor Decomposition for Compressing Recurrent Neural Network." In 2018 International Joint Conference on Neural Networks (IJCNN). IEEE, 2018. http://dx.doi.org/10.1109/ijcnn.2018.8489213.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Xie, Kun, Huali Lu, Xin Wang, Gaogang Xie, Yong Ding, Dongliang Xie, Jigang Wen, and Dafang Zhang. "Neural Tensor Completion for Accurate Network Monitoring." In IEEE INFOCOM 2020 - IEEE Conference on Computer Communications. IEEE, 2020. http://dx.doi.org/10.1109/infocom41043.2020.9155366.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

He, Xingwei, Hua Xu, Xiaomin Sun, Junhui Deng, and Jia Li. "ABiRCNN with neural tensor network for answer selection." In 2017 International Joint Conference on Neural Networks (IJCNN). IEEE, 2017. http://dx.doi.org/10.1109/ijcnn.2017.7966171.

Full text
APA, Harvard, Vancouver, ISO, and other styles