A selection of scholarly literature on the topic "Recurrent Neural Network architecture"

Format your citation in APA, MLA, Chicago, Harvard, and other styles


Consult lists of relevant articles, books, dissertations, conference papers, and other scholarly sources on the topic "Recurrent Neural Network architecture".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen source in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Recurrent Neural Network architecture"

1

Back, Andrew D., and Ah Chung Tsoi. "A Low-Sensitivity Recurrent Neural Network." Neural Computation 10, no. 1 (January 1, 1998): 165–88. http://dx.doi.org/10.1162/089976698300017935.

Abstract:
The problem of high sensitivity in modeling is well known. Small perturbations in the model parameters may result in large, undesired changes in the model behavior. A number of authors have considered the issue of sensitivity in feedforward neural networks from a probabilistic perspective. Less attention has been given to such issues in recurrent neural networks. In this article, we present a new recurrent neural network architecture that offers significantly improved parameter sensitivity properties compared to existing recurrent neural networks. The new recurrent neural network generalizes previous architectures by employing alternative discrete-time operators in place of the shift operator normally used. An analysis of the model demonstrates the existence of parameter sensitivity in recurrent neural networks and supports the proposed architecture. The new architecture performs significantly better than previous recurrent neural networks, as shown by a series of simple numerical experiments.
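As an illustration of swapping out the shift operator, here is a minimal NumPy sketch in which the state update is blended through a first-order, gamma-style discrete-time operator with mixing constant c; the operator family and all sizes are illustrative assumptions, not the paper's exact parameterisation:

import numpy as np

def run_cell(u_seq, W_in, W_rec, c=0.2):
    # c = 1 recovers the ordinary shift update; c < 1 low-pass filters the
    # state, one way to damp the effect of small parameter perturbations
    h = np.zeros(W_rec.shape[0])
    states = []
    for u in u_seq:
        target = np.tanh(W_in @ u + W_rec @ h)
        h = (1 - c) * h + c * target
        states.append(h.copy())
    return np.stack(states)

rng = np.random.default_rng(0)
states = run_cell(rng.normal(size=(10, 3)),      # 10-step input sequence
                  0.3 * rng.normal(size=(8, 3)), # input weights
                  0.3 * rng.normal(size=(8, 8))) # recurrent weights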
2

Паршин, А. И., М. Н. Аралов, В. Ф. Барабанов, and Н. И. Гребенникова. "RANDOM MULTI-MODAL DEEP LEARNING IN THE PROBLEM OF IMAGE RECOGNITION." ВЕСТНИК ВОРОНЕЖСКОГО ГОСУДАРСТВЕННОГО ТЕХНИЧЕСКОГО УНИВЕРСИТЕТА, no. 4 (October 20, 2021): 21–26. http://dx.doi.org/10.36622/vstu.2021.17.4.003.

Abstract:
The image recognition task is one of the most difficult in machine learning, requiring both deep knowledge and large time and computational resources from the researcher. For nonlinear and complex data, various deep neural network architectures are used, but choosing among them remains a difficult problem. The main architectures in widespread use are convolutional neural networks (CNN), recurrent neural networks (RNN), and deep neural networks (DNN). Long Short-Term Memory networks (LSTM) and Gated Recurrent Unit networks (GRU) were developed on the basis of recurrent neural networks. Each neural network architecture has its own structure, its own tunable and trainable parameters, and its own advantages and disadvantages. By combining different types of neural networks, one can significantly improve prediction quality in various machine learning problems. Given that choosing the optimal network architecture and its parameters is an extremely difficult task, a method for constructing neural network architectures based on a combination of convolutional, recurrent, and deep neural networks is considered. We show that such architectures outperform classical machine learning algorithms.
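A toy PyTorch sketch of one such combination, a convolutional front end feeding a GRU, is shown below; the layer sizes and the row-wise sequencing of features are illustrative assumptions:

import torch
import torch.nn as nn

class ConvRecurrentClassifier(nn.Module):
    # Convolutional features per image row, then a GRU across the rows
    def __init__(self, n_classes=10):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2))                        # 28x28 -> 14x14
        self.gru = nn.GRU(16 * 14, 64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                           # x: (batch, 1, 28, 28)
        f = self.conv(x)                            # (batch, 16, 14, 14)
        seq = f.permute(0, 2, 1, 3).flatten(2)      # rows as time steps
        _, h = self.gru(seq)                        # h: (1, batch, 64)
        return self.head(h[-1])                     # class logits

logits = ConvRecurrentClassifier()(torch.randn(4, 1, 28, 28))
print(logits.shape)                                 # torch.Size([4, 10])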
3

Gallicchio, Claudio, and Alessio Micheli. "Fast and Deep Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 3898–905. http://dx.doi.org/10.1609/aaai.v34i04.5803.

Abstract:
We address the efficiency issue for the construction of a deep graph neural network (GNN). The approach exploits the idea of representing each input graph as a fixed point of a dynamical system (implemented through a recurrent neural network), and leverages a deep architectural organization of the recurrent units. Efficiency is gained in many respects, including the use of small and very sparse networks, where the weights of the recurrent units are left untrained under the stability condition introduced in this work. This can be viewed as a way to study the intrinsic power of the architecture of a deep GNN, and also to provide insights for the set-up of more complex fully-trained models. Through experimental results, we show that even without training of the recurrent connections, the architecture of small deep GNNs is surprisingly able to achieve or improve on the state-of-the-art performance on a significant set of tasks in the field of graph classification.
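A rough NumPy sketch of the reservoir-style idea (untrained, sparse recurrent weights iterated over the graph towards a fixed point) follows; the simple spectral-radius rescaling stands in for the paper's actual stability condition, and all sizes are assumptions:

import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_feat, n_hidden = 6, 4, 32

A = rng.integers(0, 2, (n_nodes, n_nodes))          # random undirected graph
A = np.triu(A, 1); A = A + A.T
U = rng.normal(size=(n_nodes, n_feat))              # node features

W_in = rng.uniform(-0.1, 0.1, (n_hidden, n_feat))   # untrained input weights
W_rec = rng.normal(size=(n_hidden, n_hidden))       # untrained, very sparse
W_rec *= rng.random((n_hidden, n_hidden)) < 0.1
W_rec *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_rec)))  # crude stability rescale

X = np.zeros((n_nodes, n_hidden))
for _ in range(50):                                 # iterate towards a fixed point
    X = np.tanh(U @ W_in.T + A @ X @ W_rec.T)

graph_embedding = X.sum(axis=0)                     # only a readout on this is trained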
4

VASSILIADIS, STAMATIS, GERALD G. PECHANEK, and JOSÉ G. DELGADO-FRIAS. "SPIN: THE SEQUENTIAL PIPELINED NEUROEMULATOR." International Journal on Artificial Intelligence Tools 02, no. 01 (March 1993): 117–32. http://dx.doi.org/10.1142/s0218213093000084.

Abstract:
This paper proposes a novel digital neural network architecture referred to as the Sequential PIpelined Neuroemulator or Neurocomputer (SPIN). The SPIN processor emulates neural networks, producing high performance with minimal hardware, by sequentially processing each neuron of the modeled fully connected network with a pipelined physical neuron structure. In addition to describing SPIN, performance equations are estimated for the ring systolic, the recurrent systolic array, and the neuromimetic neurocomputer architectures, three previously reported schemes for the emulation of neural networks, and a comparison with the SPIN architecture is reported.
5

Uzdiaev, M. Yu, R. N. Iakovlev, D. M. Dudarenko, and A. D. Zhebrun. "Identification of a Person by Gait in a Video Stream." Proceedings of the Southwest State University 24, no. 4 (February 4, 2021): 57–75. http://dx.doi.org/10.21869/2223-1560-2020-24-4-57-75.

Abstract:
Purpose of research. This paper considers the problem of identifying a person by gait using neural network recognition models that work directly with RGB images. The main advantage of neural network models over existing methods of motor activity analysis is that they operate on images from the video stream without frame preprocessing, which otherwise increases analysis time. Methods. The paper presents an approach to identifying a person by gait based on the idea of multi-class classification of video sequences. The quality of the developed approach was evaluated on the CASIA Gait Database, which includes more than 15,000 video sequences. Five neural network architectures were tested as classifiers: the three-dimensional convolutional neural network I3D, as well as four convolutional-recurrent architectures, namely unidirectional and bidirectional LSTM and unidirectional and bidirectional GRU, each combined with a convolutional neural network of the ResNet architecture used as a visual feature extractor. Results. According to the results of the testing, the developed approach makes it possible to identify a person in a video stream in real time without specialized equipment. With the neural network models under consideration, the accuracy of human identification was more than 80% for the convolutional-recurrent models and 79% for the I3D model. Conclusion. The suggested models based on the I3D architecture and convolutional-recurrent architectures showed higher accuracy in identifying a person by gait than existing methods. Due to the possibility of frame-by-frame video processing, the preferred classifiers for the developed approach are the convolutional-recurrent architectures based on unidirectional LSTM or GRU models.
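A minimal PyTorch sketch of the convolutional-recurrent pipeline (ResNet features per frame, then a bidirectional LSTM over the frame sequence) is given below; the feature dimension and the number of identities are assumptions:

import torch
import torch.nn as nn

# Per-frame features from a ResNet backbone are simulated by random tensors
frame_feats = torch.randn(2, 30, 512)     # (batch, frames, ResNet feature dim)
lstm = nn.LSTM(512, 128, batch_first=True, bidirectional=True)
classify = nn.Linear(2 * 128, 100)        # say, 100 enrolled identities

out, _ = lstm(frame_feats)                # (2, 30, 256)
logits = classify(out[:, -1])             # classify from the final time step
print(logits.shape)                       # torch.Size([2, 100])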
6

Kalinin, Maxim, Vasiliy Krundyshev, and Evgeny Zubkov. "Estimation of applicability of modern neural network methods for preventing cyberthreats to self-organizing network infrastructures of digital economy platforms." SHS Web of Conferences 44 (2018): 00044. http://dx.doi.org/10.1051/shsconf/20184400044.

Abstract:
The problems of applying neural network methods to preventing cyberthreats against the flexible self-organizing network infrastructures of digital economy platforms (vehicular ad hoc networks, wireless sensor networks, industrial IoT, "smart buildings" and "smart cities") are considered. The applicability of the classic perceptron neural network, recurrent, deep and LSTM neural networks, and neural network ensembles is estimated under the restricting conditions of fast training and big data processing. The use of neural networks with a complex architecture, namely recurrent and LSTM neural networks, is experimentally justified for building an intrusion detection system for self-organizing network infrastructures.
7

Caniago, Afif Ilham, Wilis Kaswidjanti, and Juwairiah Juwairiah. "Recurrent Neural Network With Gate Recurrent Unit For Stock Price Prediction." Telematika 18, no. 3 (October 31, 2021): 345. http://dx.doi.org/10.31315/telematika.v18i3.6650.

Abstract:
Stock price prediction is a way to reduce the risk of loss when investing in publicly traded stocks. Although stock prices can be analyzed by stock experts, such analysis is subject to analyst bias. The Recurrent Neural Network (RNN) is a machine learning algorithm that can predict time series data that are non-linear and non-stationary. However, RNNs suffer from the vanishing gradient problem when dealing with long memory dependencies. The Gated Recurrent Unit (GRU) is able to handle long memory dependencies. In this study, the parameters of the RNN-GRU architecture that affect predictions are evaluated with MAE, RMSE, DA and MAPE as benchmarks. The architectural parameters tested are the number of units/neurons, the hidden layers (shallow and stacked) and the input data (Chartist and TA). The best number of units/neurons is not the same across all predicted cases. The best RNN-GRU architecture is the stacked one, and the best input data is TA. Stock price predictions with RNN-GRU perform differently depending on how far ahead the model predicts and on the company's liquidity. The error values in this study (MAE, RMSE, MAPE) increase steadily as the label range increases. The study covers stock prices of six different companies; liquid companies show lower error values than non-liquid companies.
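For reference, the four benchmarks named above can be computed as in the following NumPy sketch (the directional-accuracy definition used here is a common convention and an assumption):

import numpy as np

def mae(y, p):   return np.mean(np.abs(y - p))
def rmse(y, p):  return np.sqrt(np.mean((y - p) ** 2))
def mape(y, p):  return 100.0 * np.mean(np.abs((y - p) / y))
def da(y, p):
    # directional accuracy: how often the predicted move has the right sign
    return np.mean(np.sign(np.diff(p)) == np.sign(np.diff(y)))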
8

TELMOUDI, ACHRAF JABEUR, HATEM TLIJANI, LOTFI NABLI, MAARUF ALI, and RADHI M'HIRI. "A NEW RBF NEURAL NETWORK FOR PREDICTION IN INDUSTRIAL CONTROL." International Journal of Information Technology & Decision Making 11, no. 04 (July 2012): 749–75. http://dx.doi.org/10.1142/s0219622012500198.

Abstract:
A novel neural architecture for prediction in industrial control, the 'Double Recurrent Radial Basis Function network' (R2RBF), is introduced for dynamic monitoring and prognosis of industrial processes. Three applications of the R2RBF network to prediction tasks confirmed that the proposed architecture minimizes the prediction error. The R2RBF is excited by the recurrence of the output looped neurons on the input layer, which produces a dynamic memory on both the input and output layers. Given the learning complexity of neural networks trained with the back-propagation method, a simple architecture is proposed consisting of two simple Recurrent Radial Basis Function networks (RRBF). Each RRBF has only an input layer with looped neurons using the sigmoid activation function. The output of the first RRBF also serves as an additional input to the second RRBF. An unsupervised learning algorithm is proposed to determine the parameters of the Radial Basis Function (RBF) nodes. The K-means unsupervised learning algorithm used for the hidden layer is enhanced by initializing these input parameters with the output parameters of the RCE algorithm.
9

Ziemke, Tom. "Radar Image Segmentation Using Self-Adapting Recurrent Networks." International Journal of Neural Systems 08, no. 01 (February 1997): 47–54. http://dx.doi.org/10.1142/s0129065797000070.

Abstract:
This paper presents a novel approach to the segmentation and integration of (radar) images using a second-order recurrent artificial neural network architecture consisting of two sub-networks: a function network that classifies radar measurements into four different categories of objects in sea environments (water, oil spills, land and boats), and a context network that dynamically computes the function network's input weights. It is shown that in experiments (using simulated radar images) this mechanism outperforms conventional artificial neural networks since it allows the network to learn to solve the task through a dynamic adaptation of its classification function based on its internal state closely reflecting the current context.
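A minimal NumPy sketch of the two-network mechanism, where a recurrent context network outputs the input weights of the function network, is shown below; all sizes are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_cls, n_ctx = 4, 8, 4, 6              # 4 classes: water, oil, land, boats

Wc = 0.3 * rng.normal(size=(n_ctx, n_ctx + n_in))   # context recurrence
Wg = 0.3 * rng.normal(size=(n_hid * n_in, n_ctx))   # context state -> input weights
Wo = 0.3 * rng.normal(size=(n_cls, n_hid))          # fixed readout

ctx = np.zeros(n_ctx)
for x in rng.normal(size=(5, n_in)):                # stream of radar measurements
    ctx = np.tanh(Wc @ np.concatenate([ctx, x]))    # context net updates its state
    W_in = (Wg @ ctx).reshape(n_hid, n_in)          # ...and emits the input weights
    y = Wo @ np.tanh(W_in @ x)                      # function net classifies with them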
10

K, Karthika, Tejashree K, Naveen Rajan M., and Namita R. "Towards Strong AI with Analog Neural Chips." International Journal of Innovative Research in Advanced Engineering 10, no. 06 (June 23, 2023): 394–99. http://dx.doi.org/10.26562/ijirae.2023.v1006.28.

Abstract:
Applied AI chips with neural networks fail to capture and scale different forms of human intelligence. In this study, a definition of a strong AI system in hardware and an architecture for building neuromemristive strong AI chips are proposed. The architectural unit consists of loop and hoop networks that are built on recurrent and feed-forward information propagation concepts. Applying the principle that 'every brain is different', we build a strong network that can take different structural and functional forms. The strong networks are built using hybrids of loop and hoop networks with generalization abilities, with higher levels of randomness incorporated to introduce greater flexibility in creating different neural architectures.

Dissertations on the topic "Recurrent Neural Network architecture"

1

Bopaiah, Jeevith. "A recurrent neural network architecture for biomedical event trigger classification." UKnowledge, 2018. https://uknowledge.uky.edu/cs_etds/73.

Abstract:
A "biomedical event" is a broad term used to describe the roles and interactions between entities (such as proteins, genes and cells) in a biological system. The task of biomedical event extraction aims at identifying and extracting these events from unstructured texts. An important component in the early stage of the task is biomedical trigger classification, which involves identifying and classifying words/phrases that indicate an event. In this thesis, we present our work on biomedical trigger classification developed using the multi-level event extraction dataset. We restrict the scope of our classification to 19 biomedical event types grouped under four broad categories: Anatomical, Molecular, General and Planned. While most of the existing approaches are based on traditional machine learning algorithms which require extensive feature engineering, our model relies on neural networks to implicitly learn important features directly from the text. We use natural language processing techniques to transform the text into vectorized inputs that can be used in a neural network architecture. To the best of our knowledge, this is the first time neural attention strategies have been explored in the area of biomedical trigger classification. Our best results were obtained from an ensemble of 50 models, which produced a micro F-score of 79.82%, an improvement of 1.3% over the previous best score.
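A minimal PyTorch sketch of a bidirectional LSTM with additive attention over tokens, producing a 19-way trigger classification, is shown below; the abstract confirms only the use of recurrent networks and neural attention, so the concrete layer shapes are assumptions:

import torch
import torch.nn as nn

class AttentiveTrigger(nn.Module):
    def __init__(self, vocab=5000, dim=100, hidden=64, n_types=19):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.lstm = nn.LSTM(dim, hidden, batch_first=True, bidirectional=True)
        self.att = nn.Linear(2 * hidden, 1)
        self.out = nn.Linear(2 * hidden, n_types)

    def forward(self, tokens):                   # tokens: (batch, seq_len)
        h, _ = self.lstm(self.emb(tokens))       # (batch, seq_len, 2*hidden)
        w = torch.softmax(self.att(h), dim=1)    # attention weights over tokens
        return self.out((w * h).sum(dim=1))      # context vector -> 19 logits

logits = AttentiveTrigger()(torch.randint(0, 5000, (8, 40)))
print(logits.shape)                              # torch.Size([8, 19])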
2

Pan, YaDung. "Fuzzy adaptive recurrent counterpropagation neural networks: A neural network architecture for qualitative modeling and real-time simulation of dynamic processes." Diss., The University of Arizona, 1995. http://hdl.handle.net/10150/187101.

Abstract:
In this dissertation, a new artificial neural network (ANN) architecture called fuzzy adaptive recurrent counterpropagation neural network (FARCNN) is presented. FARCNNs can be directly synthesized from a set of training data, making system behavioral learning extremely fast. FARCNNs can be applied directly and effectively to model both static and dynamic system behavior based on observed input/output behavioral patterns alone without need of knowing anything about the internal structure of the system under study. The FARCNN architecture is derived from the methodology of fuzzy inductive reasoning and a basic form of counterpropagation neural networks (CNNs) for efficient implementation of finite state machines. Analog signals are converted to fuzzy signals by use of a new type of fuzzy A/D converter, thereby keeping the size of the Kohonen layer of the CNN manageably small. Fuzzy inferencing is accomplished by an application-independent feedforward network trained by means of backpropagation. Global feedback is used to represent full system dynamics. The FARCNN architecture combines the advantages of the quantitative approach (neural network) with that of the qualitative approach (fuzzy logic) as an efficient autonomous system modeling methodology. It also makes the simulation of mixed quantitative and qualitative models more feasible. In simulation experiments, we shall show that FARCNNs can be applied directly and easily to different types of systems, including static continuous nonlinear systems, discrete sequential systems, and as part of large dynamic continuous nonlinear control systems, embedding the FARCNN into much larger industry-sized quantitative models, even permitting a feedback structure to be placed around the FARCNN.
3

Hanson, Jack. "Protein Structure Prediction by Recurrent and Convolutional Deep Neural Network Architectures." Thesis, Griffith University, 2018. http://hdl.handle.net/10072/382722.

Abstract:
In this thesis, the application of convolutional and recurrent machine learning techniques to several key structural properties of proteins is explored. Chapter 2 presents the first application of an LSTM-BRNN in structural bioinformatics. The method, called SPOT-Disorder, predicts the per-residue probability of a protein being intrinsically disordered (i.e., unstructured, or flexible). Using this methodology, SPOT-Disorder achieved the highest accuracy in the literature without separating short and long disordered regions during training, as was required in previous models, and was additionally proven capable of indirectly discerning functional sites located in disordered regions. Chapter 3 extends the application of an LSTM-BRNN to a two-dimensional problem in the prediction of protein contact maps. Protein contact maps describe the intra-sequence distance between each residue pairing at a distance cutoff, providing key restraints towards the possible conformations of a protein. This work, entitled SPOT-Contact, introduced the coupling of two-dimensional LSTM-BRNNs with ResNets to maximise dependency propagation in order to achieve the highest reported accuracies for contact map precision. Several models of varying architectures were trained and combined as an ensemble predictor in order to minimise incorrect generalisations. Chapter 4 discusses the utilisation of an ensemble of LSTM-BRNNs and ResNets to predict local protein one-dimensional structural properties. The method, called SPOT-1D, predicts a wide range of local structural descriptors, including several solvent exposure metrics, secondary structure, and real-valued backbone angles. SPOT-1D was significantly improved by the inclusion of the outputs of SPOT-Contact in the input features. Using this topology led to the best reported accuracy metrics for all predicted properties. The protein structures constructed from the backbone angles predicted by SPOT-1D achieved the lowest average error from their native structures in the literature. Chapter 5 presents an update on SPOT-Disorder, which employs the inputs from SPOT-1D in conjunction with an ensemble of LSTM-BRNNs and Inception Residual Squeeze and Excitation networks to predict protein intrinsic disorder. This model confirmed the enhancement provided by utilising the coupled architectures over the LSTM-BRNN alone, whilst also introducing a new convolutional format to the bioinformatics field. The work in Chapter 6 utilises the same topology from SPOT-1D for single-sequence prediction of protein intrinsic disorder in SPOT-Disorder-Single. Single-sequence prediction describes the prediction of a protein's properties without the use of evolutionary information. While evolutionary information generally improves the performance of a computational model, it comes at the expense of a greatly increased computational and time load. Removing this from the model allows for genome-scale protein analysis at a minor drop in accuracy. However, models trained without evolutionary profiles can be more accurate for proteins with limited and therefore unreliable evolutionary information.
4

Silfa, Franyell. "Energy-efficient architectures for recurrent neural networks." Doctoral thesis, Universitat Politècnica de Catalunya, 2021. http://hdl.handle.net/10803/671448.

Abstract:
Deep Learning algorithms have been remarkably successful in applications such as Automatic Speech Recognition and Machine Translation. Thus, these kinds of applications are ubiquitous in our lives and are found in a plethora of devices. These algorithms are composed of Deep Neural Networks (DNNs), such as Convolutional Neural Networks and Recurrent Neural Networks (RNNs), which have a large number of parameters and require a large amount of computation. Hence, the evaluation of DNNs is challenging due to their large memory and power requirements. RNNs are employed to solve sequence-to-sequence problems such as Machine Translation. They contain data dependencies among the executions of time-steps, hence the amount of parallelism is severely limited. Thus, evaluating them in an energy-efficient manner is more challenging than evaluating other DNN algorithms. This thesis studies applications using RNNs to improve their energy efficiency on specialized architectures. Specifically, we propose novel energy-saving techniques and highly efficient architectures tailored to the evaluation of RNNs. We focus on the most successful RNN topologies, which are the Long Short-Term Memory and the Gated Recurrent Unit. First, we characterize a set of RNNs running on a modern SoC. We identify that accessing the memory to fetch the model weights is the main source of energy consumption. Thus, we propose E-PUR: an energy-efficient processing unit for RNN inference. E-PUR achieves 6.8x speedup and improves energy consumption by 88x compared to the SoC. These benefits are obtained by improving the temporal locality of the model weights. In E-PUR, fetching the parameters is the main source of energy consumption. Thus, we strive to reduce memory accesses and propose a scheme to reuse previous computations. Our observation is that when evaluating the input sequences of an RNN model, the output of a given neuron tends to change only slightly between consecutive evaluations. Thus, we develop a scheme that caches the neurons' outputs and reuses them whenever it detects that the change between the current and previously computed output value for a given neuron is small, avoiding fetching the weights. In order to decide when to reuse a previous value, we employ a Binary Neural Network (BNN) as a predictor of reusability. The low-cost BNN can be employed in this context since its output is highly correlated with the output of RNNs. We show that our proposal avoids more than 24.2% of computations. Hence, on average, energy consumption is reduced by 18.5% for a speedup of 1.35x. RNN models' memory footprint is usually reduced by using low precision for evaluation and storage. In this case, the minimum precision used is identified offline and set such that the model maintains its accuracy. This method utilizes the same precision to compute all time-steps. Yet, we observe that some time-steps can be evaluated with a lower precision while preserving the accuracy. Thus, we propose a technique that dynamically selects the precision used to compute each time-step. A challenge of our proposal is choosing a lower bit-width. We address this issue by recognizing that information from a previous evaluation can be employed to determine the precision required in the current time-step. Our scheme evaluates 57% of the computations at a bit-width lower than the fixed precision employed by static methods. We implement it on E-PUR and it provides 1.46x speedup and 19.2% energy savings on average.
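The computation-reuse idea can be sketched in a few lines of NumPy. The thesis predicts reusability with a small binary neural network; the cheap input-delta test below merely stands in for that predictor, and all sizes and thresholds are assumptions:

import numpy as np

rng = np.random.default_rng(1)
W = 0.1 * rng.normal(size=(64, 32))              # one layer's weight matrix

class ReuseCell:
    def __init__(self):
        self.prev_x, self.cached = None, np.zeros(64)
        self.skipped = self.total = 0

    def __call__(self, x, tol=0.05):
        self.total += 1
        if self.prev_x is not None and np.max(np.abs(x - self.prev_x)) < tol:
            self.skipped += 1                    # reuse: no weight fetch, no multiply
        else:
            self.cached = np.tanh(W @ x)         # full evaluation
        self.prev_x = x
        return self.cached

cell, x = ReuseCell(), rng.normal(size=32)
for _ in range(100):
    x = x + rng.normal(scale=0.01, size=32)      # slowly drifting input sequence
    y = cell(x)
print(f"reused {cell.skipped} of {cell.total} evaluations")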
5

Мельникова, І. К. "Інтелектуальна технологія прогнозу курсу криптовалют методом рекурентних нейромереж". Master's thesis, Сумський державний університет, 2019. http://essuir.sumdu.edu.ua/handle/123456789/76753.

Abstract:
An intelligent system for forecasting cryptocurrency exchange rates using recurrent neural networks was developed; software supporting operations on cryptocurrencies was designed and implemented as web services with a microservice architecture to improve their efficiency.
6

Tekin, Mim Kemal. "Vehicle Path Prediction Using Recurrent Neural Network." Thesis, Linköpings universitet, Statistik och maskininlärning, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-166134.

Abstract:
Vehicle path prediction can be used to support Advanced Driver Assistance Systems (ADAS) covering technologies like Autonomous Braking Systems, Adaptive Cruise Control, etc. In this thesis, the vehicle's future path, parameterized as 5 coordinates along the path, is predicted using only visual data collected by a front vision sensor. This approach provides cheaper application opportunities without requiring additional sensors. The predictions are made by deep convolutional neural networks (CNN), and the goal of the project is to use recurrent neural networks (RNN) and to investigate the benefits of adding recurrence to the task. Two different approaches are used for the models. The first is a single-frame approach that makes predictions using only one image frame as input and predicts the future location points of the car; it serves as the baseline model. The second is a sequential approach that enables the network to use historical information from previous image frames in order to predict the vehicle's future path for the current frame. With this approach, the effect of using recurrence is investigated. Moreover, uncertainty matters for model reliability: having small uncertainty on most predictions, and high uncertainty in situations unfamiliar to the model, increases the model's usefulness. In this project, uncertainty estimation follows a method applicable to deep learning models and uses the same models defined by the first two approaches. Finally, the approaches are evaluated by the mean absolute error and by two reasonable tolerance levels for the distance between the predicted path and the ground-truth path; the first is a strict tolerance level and the second a more relaxed one. Under the strict tolerance level on test data, 36% of the predictions are accepted for the single-frame model and 48% for the sequential model, while 27% and 13% are accepted for the single-frame and sequential variants of the uncertainty models. Under the relaxed tolerance level, 60% of the predictions are accepted for the single-frame model and 67% for the sequential model, with 65% and 53% accepted for the single-frame and sequential variants of the uncertainty models. Furthermore, using stored information for each sequence, the methods are evaluated under different conditions such as day/night, road type and road cover. Overall, the sequential model outperforms in the majority of the evaluation results.
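The tolerance-based evaluation can be expressed compactly; in this NumPy sketch a predicted path (5 points, as in the thesis) is accepted when every point lies within a distance tolerance of its ground-truth counterpart, which is an assumed reading of the acceptance rule:

import numpy as np

def acceptance_rate(pred, truth, tol):
    # pred, truth: (n_samples, 5, 2) paths of five (x, y) points each;
    # a prediction counts as accepted when every point lies within tol
    d = np.linalg.norm(pred - truth, axis=-1)    # (n_samples, 5) distances
    return np.mean(np.all(d < tol, axis=1))

rng = np.random.default_rng(0)
truth = rng.normal(size=(1000, 5, 2))
pred = truth + rng.normal(scale=0.3, size=truth.shape)
print(acceptance_rate(pred, truth, tol=1.0))     # strict vs relaxed: vary tol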
7

Sivakumar, Shyamala C. "Architectures and algorithms for stable and constructive learning in discrete time recurrent neural networks." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/NQ31533.pdf.

8

Wen, Tsung-Hsien. "Recurrent neural network language generation for dialogue systems." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/275648.

Abstract:
Language is the principal medium for ideas, while dialogue is the most natural and effective way for humans to interact with and access information from machines. Natural language generation (NLG) is a critical component of spoken dialogue and it has a significant impact on usability and perceived quality. Many commonly used NLG systems employ rules and heuristics, which tend to generate inflexible and stylised responses without the natural variation of human language. However, the frequent repetition of identical output forms can quickly make dialogue become tedious for most real-world users. Additionally, these rules and heuristics are not scalable and hence not trivially extensible to other domains or languages. A statistical approach to language generation can learn language decisions directly from data without relying on hand-coded rules or heuristics, which brings scalability and flexibility to NLG. Statistical models also provide an opportunity to learn in-domain human colloquialisms and cross-domain model adaptations. A robust and quasi-supervised NLG model is proposed in this thesis. The model leverages a Recurrent Neural Network (RNN)-based surface realiser and a gating mechanism applied to input semantics. The model is motivated by the Long-Short Term Memory (LSTM) network. The RNN-based surface realiser and gating mechanism use a neural network to learn end-to-end language generation decisions from input dialogue act and sentence pairs; it also integrates sentence planning and surface realisation into a single optimisation problem. The single optimisation not only bypasses the costly intermediate linguistic annotations but also generates more natural and human-like responses. Furthermore, a domain adaptation study shows that the proposed model can be readily adapted and extended to new dialogue domains via a proposed recipe. Continuing the success of end-to-end learning, the second part of the thesis speculates on building an end-to-end dialogue system by framing it as a conditional generation problem. The proposed model encapsulates a belief tracker with a minimal state representation and a generator that takes the dialogue context to produce responses. These features suggest comprehension and fast learning. The proposed model is capable of understanding requests and accomplishing tasks after training on only a few hundred human-human dialogues. A complementary Wizard-of-Oz data collection method is also introduced to facilitate the collection of human-human conversations from online workers. The results demonstrate that the proposed model can talk to human judges naturally, without any difficulty, for a sample application domain. In addition, the results also suggest that the introduction of a stochastic latent variable can help the system model intrinsic variation in communicative intention much better.
9

Melidis, Christos. "Adaptive neural architectures for intuitive robot control." Thesis, University of Plymouth, 2017. http://hdl.handle.net/10026.1/9998.

Abstract:
This thesis puts forward a novel way of control for robotic morphologies. Taking inspiration from Behaviour Based robotics and self-organisation principles, we present an interfacing mechanism, capable of adapting both to the user and the robot, while enabling a paradigm of intuitive control for the user. A transparent mechanism is presented, allowing for a seamless integration of control signals and robot behaviours. Instead of the user adapting to the interface and control paradigm, the proposed architecture allows the user to shape the control motifs in their way of preference, moving away from the cases where the user has to read and understand operation manuals or has to learn to operate a specific device. The seminal idea behind the work presented is the coupling of intuitive human behaviours with the dynamics of a machine in order to control and direct the machine dynamics. Starting from a tabula rasa basis, the architectures presented are able to identify control patterns (behaviours) for any given robotic morphology and successfully merge them with control signals from the user, regardless of the input device used. We provide a deep insight into the advantages of behaviour coupling, investigating the proposed system in detail, providing evidence for and quantifying emergent properties of the models proposed. The structural components of the interface are presented and assessed both individually and as a whole, as are inherent properties of the architectures. The proposed system is examined and tested both in vitro and in vivo, and is shown to work even in cases of complicated environments as well as complicated robotic morphologies. As a whole, this paradigm of control is found to highlight the potential for a change in the paradigm of robotic control, and a new level in the taxonomy of human in the loop systems.
10

He, Jian. "Adaptive power system stabilizer based on recurrent neural network." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape8/PQDD_0008/NQ38471.pdf.


Books on the topic "Recurrent Neural Network architecture"

1

Dayhoff, Judith E. Neural network architectures: An introduction. New York, N.Y.: Van Nostrand Reinhold, 1990.

2

Leondes, Cornelius T., ed. Neural network systems, techniques, and applications. San Diego: Academic Press, 1998.

3

Jain, L. C., and R. P. Johnson, eds. Automatic generation of neural network architecture using evolutionary computation. Singapore: World Scientific, 1997.

4

Cios, Krzysztof J. Self-growing neural network architecture using crisp and fuzzy entropy. [Washington, DC]: National Aeronautics and Space Administration, 1992.

5

United States. National Aeronautics and Space Administration, ed. A neural network architecture for implementation of expert systems for real time monitoring. [Cincinnati, Ohio]: University of Cincinnati, College of Engineering, 1991.

6

Lim, Chee Peng. Probabilistic fuzzy ARTMAP: An autonomous neural network architecture for Bayesian probability estimation. Sheffield: University of Sheffield, Dept. of Automatic Control & Systems Engineering, 1995.

7

United States. National Aeronautics and Space Administration., ed. A novel approach to noise-filtering based on a gain-scheduling neural network architecture. [Washington, DC]: National Aeronautics and Space Administration, 1994.


Book chapters on the topic "Recurrent Neural Network architecture"

1

Salem, Fathi M. "Network Architectures." In Recurrent Neural Networks, 3–19. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-89929-5_1.

2

Bianchi, Filippo Maria, Enrico Maiorino, Michael C. Kampffmeyer, Antonello Rizzi, and Robert Jenssen. "Recurrent Neural Network Architectures." In SpringerBriefs in Computer Science, 23–29. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70338-1_3.

3

Wüthrich, Mario V., and Michael Merz. "Recurrent Neural Networks." In Springer Actuarial, 381–406. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-12409-9_8.

Abstract:
This chapter considers recurrent neural (RN) networks. These are special network architectures that are useful for time-series modeling, e.g., applied to time-series forecasting. We study the most popular RN networks, which are the long short-term memory (LSTM) networks and the gated recurrent unit (GRU) networks. We apply these networks to mortality forecasting.
4

Selvanathan, N., and Mashkuri Hj Yaacob. "Canonical form of Recurrent Neural Network Architecture." In Industrial and Engineering Applications of Artificial Intelligence and Expert Systems, 796. London: CRC Press, 2022. http://dx.doi.org/10.1201/9780429332111-154.

5

Rawal, Aditya, Jason Liang, and Risto Miikkulainen. "Discovering Gated Recurrent Neural Network Architectures." In Natural Computing Series, 233–51. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-3685-4_9.

6

Tsoi, Ah Chung. "Recurrent neural network architectures: An overview." In Adaptive Processing of Sequences and Data Structures, 1–26. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/bfb0053993.

7

Modrzejewski, Mateusz, Konrad Bereda, and Przemysław Rokita. "Efficient Recurrent Neural Network Architecture for Musical Style Transfer." In Artificial Intelligence and Soft Computing, 124–32. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-87986-0_11.

8

Liao, Yongbo, Hongmei Li, and Zongbo Wang. "FPGA Based Real-Time Processing Architecture for Recurrent Neural Network." In Advances in Intelligent Systems and Computing, 705–9. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-69096-4_99.

9

Cebrián, Pedro L., Alberto Martín-Pérez, Manuel Villa, Jaime Sancho, Gonzalo Rosa, Guillermo Vazquez, Pallab Sutradhar, et al. "Deep Recurrent Neural Network Performing Spectral Recurrence on Hyperspectral Images for Brain Tissue Classification." In Design and Architecture for Signal and Image Processing, 15–27. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-29970-4_2.


Conference papers on the topic "Recurrent Neural Network architecture"

1

Gao, Yang, Hong Yang, Peng Zhang, Chuan Zhou, and Yue Hu. "Graph Neural Architecture Search." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/195.

Abstract:
Graph neural networks (GNNs) emerged recently as a powerful tool for analyzing non-Euclidean data such as social network data. Despite their success, the design of graph neural networks requires heavy manual work and domain knowledge. In this paper, we present a graph neural architecture search method (GraphNAS) that enables automatic design of the best graph neural architecture based on reinforcement learning. Specifically, GraphNAS uses a recurrent network to generate variable-length strings that describe the architectures of graph neural networks, and trains the recurrent network with policy gradient to maximize the expected accuracy of the generated architectures on a validation data set. Furthermore, to improve the search efficiency of GraphNAS on big networks, GraphNAS restricts the search space from an entire architecture space to a sequential concatenation of the best search results built on each single architecture layer. Experiments on real-world datasets demonstrate that GraphNAS can design a novel network architecture that rivals the best human-invented architecture in terms of validation set accuracy. Moreover, in a transfer learning task we observe that graph neural architectures designed by GraphNAS, when transferred to new datasets, still gain improvement in terms of prediction accuracy.
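A compact PyTorch sketch of the controller idea, an RNN that emits a string of architecture choices and is updated with one REINFORCE step, is given below; the choice vocabularies and the reward are placeholders, not GraphNAS's actual search space:

import torch
import torch.nn as nn

CHOICES = [4, 3, 5]          # e.g. aggregator, activation, hidden size (illustrative)

class Controller(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.cell = nn.GRUCell(hidden, hidden)
        self.start = nn.Parameter(torch.zeros(1, hidden))
        self.heads = nn.ModuleList(nn.Linear(hidden, k) for k in CHOICES)
        self.embs = nn.ModuleList(nn.Embedding(k, hidden) for k in CHOICES)

    def sample(self):
        h, inp = torch.zeros_like(self.start), self.start
        log_prob, arch = [], []
        for head, emb in zip(self.heads, self.embs):
            h = self.cell(inp, h)
            dist = torch.distributions.Categorical(logits=head(h))
            a = dist.sample()
            log_prob.append(dist.log_prob(a))
            arch.append(a.item())
            inp = emb(a)                         # feed the choice back in
        return arch, torch.stack(log_prob).sum()

ctrl = Controller()
opt = torch.optim.Adam(ctrl.parameters(), lr=1e-3)
arch, logp = ctrl.sample()
reward = 0.7                 # stand-in for the child model's validation accuracy
(-reward * logp).backward()  # one REINFORCE step
opt.step()
print("sampled architecture:", arch)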
2

Fourati, Rahma, Chaouki Aouiti, and Adel M. Alimi. "Improved recurrent neural network architecture for SVM learning." In 2015 15th International Conference on Intelligent Systems Design and Applications (ISDA). IEEE, 2015. http://dx.doi.org/10.1109/isda.2015.7489221.

3

Maulik, Romit, Romain Egele, Bethany Lusch, and Prasanna Balaprakash. "Recurrent Neural Network Architecture Search for Geophysical Emulation." In SC20: International Conference for High Performance Computing, Networking, Storage and Analysis. IEEE, 2020. http://dx.doi.org/10.1109/sc41405.2020.00012.

4

Kumar, Swaraj, Vishal Murgai, Devashish Singh, and Issaac Kommeneni. "Recurrent Neural Network Architecture for Communication Log Analysis." In 2022 IEEE 33rd Annual International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC). IEEE, 2022. http://dx.doi.org/10.1109/pimrc54779.2022.9978048.

5

Khaliq, Shaik Abdul, and Mohamed Asan Basiri M. "An Efficient VLSI Architecture of Recurrent Neural Network." In 2022 IEEE 6th Conference on Information and Communication Technology (CICT). IEEE, 2022. http://dx.doi.org/10.1109/cict56698.2022.9997812.

6

Basu, Saikat, Jaybrata Chakraborty, and Md Aftabuddin. "Emotion recognition from speech using convolutional neural network with recurrent neural network architecture." In 2017 2nd International Conference on Communication and Electronics Systems (ICCES). IEEE, 2017. http://dx.doi.org/10.1109/cesys.2017.8321292.

7

Chakraborty, Bappaditya, Partha Sarathi Mukherjee, and Ujjwal Bhattacharya. "Bangla online handwriting recognition using recurrent neural network architecture." In the Tenth Indian Conference. New York, New York, USA: ACM Press, 2016. http://dx.doi.org/10.1145/3009977.3010072.

8

Song, Hee-Heon, Sun-Mee Kang, and Seong-Whan Lee. "A new recurrent neural network architecture for pattern recognition." In Proceedings of 13th International Conference on Pattern Recognition. IEEE, 1996. http://dx.doi.org/10.1109/icpr.1996.547658.

9

Elsayed, Nelly, Zag ElSayed, and Anthony S. Maida. "LiteLSTM Architecture for Deep Recurrent Neural Networks." In 2022 IEEE International Symposium on Circuits and Systems (ISCAS). IEEE, 2022. http://dx.doi.org/10.1109/iscas48785.2022.9937585.

10

Zhao, Yi, Yanyan Shen, and Junjie Yao. "Recurrent Neural Network for Text Classification with Hierarchical Multiscale Dense Connections." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/757.

Abstract:
Text classification is a fundamental task in many Natural Language Processing applications. While recurrent neural networks have achieved great success in performing text classification, they fail to capture the hierarchical structure and long-term semantic dependencies which are common features of text data. Inspired by the advent of the dense connection pattern in advanced convolutional neural networks, we propose a simple yet effective recurrent architecture, named Hierarchical Multiscale Densely Connected RNNs (HM-DenseRNNs), which: 1) enables direct access to the hidden states of all preceding recurrent units via dense connections, and 2) organizes multiple densely connected recurrent units into a hierarchical multi-scale structure, where the layers are updated at different scales. HM-DenseRNNs can effectively capture long-term dependencies among words in long text data, and a dense recurrent block is further introduced to reduce the number of parameters and enhance training efficiency. We evaluate the performance of our proposed architecture on three text datasets and the results verify the advantages of HM-DenseRNN over the baseline methods in terms of classification accuracy.
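A minimal PyTorch sketch of the dense connection pattern across stacked recurrent layers follows; the hierarchical multiscale updating of HM-DenseRNNs is omitted here for brevity, and the sizes are assumptions:

import torch
import torch.nn as nn

class DenseStackedRNN(nn.Module):
    # Layer i receives the concatenation of the input and all earlier outputs
    def __init__(self, in_dim=50, hidden=32, layers=3):
        super().__init__()
        self.rnns = nn.ModuleList(
            nn.GRU(in_dim + i * hidden, hidden, batch_first=True)
            for i in range(layers))

    def forward(self, x):                    # x: (batch, seq, in_dim)
        feats = [x]
        for rnn in self.rnns:
            out, _ = rnn(torch.cat(feats, dim=-1))
            feats.append(out)
        return feats[-1]                     # (batch, seq, hidden)

y = DenseStackedRNN()(torch.randn(2, 20, 50))
print(y.shape)                               # torch.Size([2, 20, 32])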

Organizational reports on the topic "Recurrent Neural Network architecture"

1

Barto, Andrew. Adaptive Neural Network Architecture. Fort Belvoir, VA: Defense Technical Information Center, October 1987. http://dx.doi.org/10.21236/ada190114.

2

McDonnell, John R., and Don Waagen. Evolving Neural Network Architecture. Fort Belvoir, VA: Defense Technical Information Center, March 1993. http://dx.doi.org/10.21236/ada264802.

3

Brabel, Michael J. Basin Sculpting a Hybrid Recurrent Feedforward Neural Network. Fort Belvoir, VA: Defense Technical Information Center, January 1998. http://dx.doi.org/10.21236/ada336386.

4

Basu, Anujit. Nuclear power plant status diagnostics using a neural network with dynamic node architecture. Office of Scientific and Technical Information (OSTI), January 1992. http://dx.doi.org/10.2172/10139977.

5

Basu, A. Nuclear power plant status diagnostics using a neural network with dynamic node architecture. Office of Scientific and Technical Information (OSTI), January 1992. http://dx.doi.org/10.2172/6649091.

6

Bodruzzaman, M., and M. A. Essawy. Iterative prediction of chaotic time series using a recurrent neural network. Quarterly progress report, January 1, 1995--March 31, 1995. Office of Scientific and Technical Information (OSTI), March 1996. http://dx.doi.org/10.2172/283610.

7

Kirichek, Galina, Vladyslav Harkusha, Artur Timenko, and Nataliia Kulykovska. System for detecting network anomalies using a hybrid of an uncontrolled and controlled neural network. [s.n.], February 2020. http://dx.doi.org/10.31812/123456789/3743.

Abstract:
This article presents a method for detecting network attacks and anomalies based on training with normal and attack packets, respectively. The detection method combines an unsupervised and a supervised neural network: the unsupervised network classifies attacks into finer categories according to their features using a self-organizing map, and a back-propagation neural network is then used to manage the clusters. We use PyBrain as the main framework for designing, developing and training the perceptron. This framework offers a sufficient number of algorithms and solutions for training, designing and testing various types of neural networks. The software architecture follows a procedural-object approach. Because there is no need to save intermediate results of the program (after training, the entire perceptron is stored in a file), all training progress is stored in ordinary files on the hard disk.
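A toy NumPy sketch of the unsupervised half of the hybrid, a small self-organizing map that clusters packet feature vectors into sub-categories, is given below (the supervised back-propagation stage per cluster is omitted, and all sizes are assumptions):

import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 8))                 # packet feature vectors
som = rng.random((4, 4, 8))              # 4x4 map of prototype vectors
ii, jj = np.indices((4, 4))

for t in range(1000):
    x = X[rng.integers(len(X))]
    d = np.linalg.norm(som - x, axis=-1)
    bi, bj = np.unravel_index(d.argmin(), d.shape)   # best matching unit
    lr = 0.5 * (1 - t / 1000)                        # decaying learning rate
    nb = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / 2.0)
    som += lr * nb[..., None] * (x - som)            # pull neighbourhood toward x

# each packet's cluster = coordinates of its best matching unit
clusters = [np.unravel_index(np.linalg.norm(som - x, axis=-1).argmin(), (4, 4))
            for x in X]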
8

Pettit, Chris, and D. Wilson. A physics-informed neural network for sound propagation in the atmospheric boundary layer. Engineer Research and Development Center (U.S.), June 2021. http://dx.doi.org/10.21079/11681/41034.

Abstract:
We describe what we believe is the first effort to develop a physics-informed neural network (PINN) to predict sound propagation through the atmospheric boundary layer. PINN is a recent innovation in the application of deep learning to simulate physics. The motivation is to combine the strengths of data-driven models and physics models, thereby producing a regularized surrogate model using less data than a purely data-driven model. In a PINN, the data-driven loss function is augmented with penalty terms for deviations from the underlying physics, e.g., a governing equation or a boundary condition. Training data are obtained from Crank-Nicholson solutions of the parabolic equation with homogeneous ground impedance and Monin-Obukhov similarity theory for the effective sound speed in the moving atmosphere. Training data are random samples from an ensemble of solutions for combinations of parameters governing the impedance and the effective sound speed. PINN output is processed to produce realizations of transmission loss that look much like the Crank-Nicholson solutions. We describe the framework for implementing PINN for outdoor sound, and we outline practical matters related to network architecture, the size of the training set, the physics-informed loss function, and the challenge of managing the spatial complexity of the complex pressure.
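The physics-informed loss structure can be sketched generically in PyTorch as a data misfit plus a PDE-residual penalty; the 1-D heat equation below is purely a stand-in for the report's parabolic equation, and the network shape is an assumption:

import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))

def pinn_loss(xt_data, u_data, xt_colloc):
    data_loss = ((net(xt_data) - u_data) ** 2).mean()       # data-driven term
    xt = xt_colloc.clone().requires_grad_(True)             # columns: (x, t)
    u = net(xt)
    g = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = g[:, 0:1], g[:, 1:2]
    u_xx = torch.autograd.grad(u_x.sum(), xt, create_graph=True)[0][:, 0:1]
    return data_loss + ((u_t - u_xx) ** 2).mean()           # physics penalty

loss = pinn_loss(torch.rand(32, 2), torch.rand(32, 1), torch.rand(64, 2))
loss.backward()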
9

Mohanty, Subhasish, and Joseph Listwan. Development of Digital Twin Predictive Model for PWR Components: Updates on Multi Times Series Temperature Prediction Using Recurrent Neural Network, DMW Fatigue Tests, System Level Thermal-Mechanical-Stress Analysis. Office of Scientific and Technical Information (OSTI), September 2021. http://dx.doi.org/10.2172/1822853.

10

Markova, Oksana, Serhiy Semerikov, and Maiia Popel. CoCalc as a Learning Tool for Neural Network Simulation in the Special Course “Foundations of Mathematic Informatics”. Sun SITE Central Europe, May 2018. http://dx.doi.org/10.31812/0564/2250.

Abstract:
The role of neural network modeling in the learning content of the special course "Foundations of Mathematic Informatics" was discussed. The course was developed for students of technical universities, future IT specialists, and is directed at bridging the gap between theoretical computer science and its applied applications: software, system and computing engineering. CoCalc was justified as a learning tool for mathematical informatics in general and neural network modeling in particular. Elements of a technique for using CoCalc in studying the topic "Neural network and pattern recognition" of the special course "Foundations of Mathematic Informatics" are shown. The program code was presented in the CoffeeScript language and implements the basic components of an artificial neural network: neurons, synaptic connections, activation functions (tangential, sigmoid, step) and their derivatives, methods of calculating the network's weights, etc. The features of applying the Kolmogorov-Arnold representation theorem to determine the architecture of multilayer neural networks were discussed. The implementation of the disjunctive logical element and the approximation of an arbitrary function using a three-layer neural network were given as examples. Based on the simulation results, a conclusion was drawn about the limits within which the constructed networks retain their adequacy. Framework topics for individual student research on artificial neural networks are proposed.
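A NumPy counterpart of the article's CoffeeScript example, a three-layer network trained by back-propagation to realize the disjunctive (OR) element, might look as follows; the layer sizes and learning rate are assumptions:

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [1]], float)        # logical OR (disjunction)
W1, b1 = rng.normal(size=(2, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)
sig = lambda z: 1 / (1 + np.exp(-z))

for _ in range(2000):                            # plain backpropagation
    h = sig(X @ W1 + b1)                         # hidden layer
    out = sig(h @ W2 + b2)                       # output layer
    d_out = (out - y) * out * (1 - out)
    d_h = d_out @ W2.T * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

print(out.round(2).ravel())                      # approaches [0, 1, 1, 1]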
