Selected scientific literature on the topic "Convolutional recurrent neural networks"

Cite a source in APA, MLA, Chicago, Harvard, and many other styles

Choose the source type:

Consult the list of current articles, books, theses, conference proceedings, and other scholarly sources relevant to the topic "Convolutional recurrent neural networks".

Next to each source in the reference list there is an "Add to bibliography" button. Press it and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract (summary) of the work online if it is available in the metadata.

Journal articles on the topic "Convolutional recurrent neural networks":

1

Hindarto, Djarot. "Comparison of RNN Architectures and Non-RNN Architectures in Sentiment Analysis". sinkron 8, no. 4 (October 1, 2023): 2537–46. http://dx.doi.org/10.33395/sinkron.v8i4.13048.

Abstract:
This study compares the sentiment analysis performance of multiple Recurrent Neural Network architectures and One-Dimensional Convolutional Neural Networks. The methods evaluated are the simple Recurrent Neural Network, Long Short-Term Memory, Gated Recurrent Unit, Bidirectional Recurrent Neural Network, and 1D ConvNets. A dataset comprising text reviews with positive or negative sentiment labels was evaluated. All evaluated models demonstrated extremely high accuracy, ranging from 99.81% to 99.99%. Apart from that, the loss generated by these models is also low, ranging between 0.0021 and 0.0043. However, there are minor performance differences between the evaluated architectures. The Long Short-Term Memory and Gated Recurrent Unit models mainly perform marginally better than the Simple Recurrent Neural Network, albeit with slightly lower accuracy and loss. In the meantime, the Bidirectional Recurrent Neural Network model demonstrates competitive performance, as it can effectively manage text context from both directions. Additionally, One-Dimensional Convolutional Neural Networks provide satisfactory results, indicating that convolution-based approaches are also effective in sentiment analysis. The findings of this study provide practitioners with essential insights for selecting an appropriate architecture for sentiment analysis tasks. While all models yield excellent performance, the choice of architecture can impact computational efficiency and training time. Therefore, a comprehensive understanding of the respective characteristics of Recurrent Neural Network architectures and One-Dimensional Convolutional Neural Networks is essential for making more informed decisions when constructing sentiment analysis models.
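As a concrete point of reference for the two model families this study compares, below is a minimal PyTorch sketch (not taken from the paper) of a recurrent LSTM text classifier and a one-dimensional convolutional classifier over token sequences; the vocabulary size, embedding width, and class count are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=128, hidden=64, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, tokens):                  # tokens: (batch, seq_len) int64
        x = self.embed(tokens)                  # (batch, seq_len, embed_dim)
        _, (h_n, _) = self.lstm(x)              # h_n: (1, batch, hidden)
        return self.fc(h_n[-1])                 # logits: (batch, num_classes)

class Conv1DClassifier(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=128, channels=64, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, channels, kernel_size=3, padding=1)
        self.fc = nn.Linear(channels, num_classes)

    def forward(self, tokens):
        x = self.embed(tokens).transpose(1, 2)  # (batch, embed_dim, seq_len)
        x = torch.relu(self.conv(x))            # (batch, channels, seq_len)
        x = x.max(dim=2).values                 # global max pooling over time
        return self.fc(x)
```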
2

Kassylkassova, Kamila, Zhanna Yessengaliyeva, Gayrat Urazboev and Ayman Kassylkassova. "Optimization Method for Integration of Convolutional and Recurrent Neural Network". Eurasian Journal of Mathematical and Computer Applications 11, no. 2 (2023): 40–56. http://dx.doi.org/10.32523/2306-6172-2023-11-2-40-56.

Abstract:
In recent years, convolutional neural networks have been widely used in image processing and have shown good results. Particularly useful was their ability to automatically extract image features (textures and shapes of objects). The article proposes a method that improves the accuracy and speed of recognition of an ultra-precise neural network based on image recognition of people's faces. At first, a recurrent neural network is introduced into the convolutional neural network, thereby studying the characteristics of the image more deeply. Deep image characteristics are studied in parallel using a convolutional and a recurrent neural network. In line with the idea of skipping the ResNet convolution layer, a new ShortCut3-ResNet residual module is built. A double optimization model is created to fully optimize the convolution process. The influence of various parameters of a convolutional neural network on network performance is studied and analyzed using simulation experiments. As a result, the optimal parameters of the convolutional neural network are established. Experiments show that the method presented in this paper can study various images of people's faces regardless of age and gender, and also improves the accuracy of feature extraction and image recognition ability.
3

Lyu, Shengfei, and Jiaqi Liu. "Convolutional Recurrent Neural Networks for Text Classification". Journal of Database Management 32, no. 4 (October 2021): 65–82. http://dx.doi.org/10.4018/jdm.2021100105.

Abstract:
Recurrent neural network (RNN) and convolutional neural network (CNN) are two prevailing architectures used in text classification. Traditional approaches combine the strengths of these two networks by straightly streamlining them or linking features extracted from them. In this article, a novel approach is proposed to maintain the strengths of RNN and CNN to a great extent. In the proposed approach, a bi-directional RNN encodes each word into forward and backward hidden states. Then, a neural tensor layer is used to fuse bi-directional hidden states to get word representations. Meanwhile, a convolutional neural network is utilized to learn the importance of each word for text classification. Empirical experiments are conducted on several datasets for text classification. The superior performance of the proposed approach confirms its effectiveness.
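The following hedged PyTorch sketch follows the general recipe in this abstract, a bi-directional RNN that encodes each word plus a convolution that scores word importance, but it simplifies the paper's neural tensor fusion layer to a plain concatenation of the two directions; all layer sizes are assumptions rather than the authors' settings.

```python
import torch
import torch.nn as nn

class ConvRecurrentTextClassifier(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=100, hidden=64, num_classes=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.birnn = nn.GRU(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.score = nn.Conv1d(2 * hidden, 1, kernel_size=3, padding=1)  # per-word importance
        self.fc = nn.Linear(2 * hidden, num_classes)

    def forward(self, tokens):                        # tokens: (batch, seq_len)
        h, _ = self.birnn(self.embed(tokens))         # (batch, seq_len, 2*hidden)
        scores = self.score(h.transpose(1, 2))        # (batch, 1, seq_len)
        weights = torch.softmax(scores, dim=-1)       # attention-like word weights
        pooled = torch.bmm(weights, h).squeeze(1)     # weighted sum: (batch, 2*hidden)
        return self.fc(pooled)
```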
4

P., Vijay Babu, and Senthil Kumar R. "Performance Evaluation of Brain Tumor Identification and Examination Using MRI Images with Innovative Convolution Neural Networks and Comparing the Accuracy with RNN Algorithm". ECS Transactions 107, no. 1 (April 24, 2022): 12405–14. http://dx.doi.org/10.1149/10701.12405ecst.

Abstract:
The main aim of the paper is to find the accuracy of brain tumor detection using the Innovative CNN and RNN algorithms. The paper addresses the design and implementation of brain tumor detection with an accurate prediction. Materials and Methods: Innovative Convolutional Neural Networks and Recurrent Neural Networks are used for finding the accuracy of brain tumor detection. Data models were trained with the neural network algorithms, and the brain tumor model adopts these data models and gives responses accordingly. The model checks patterns to provide responses to the users by using a pattern matching module. Accuracy calculation was done using the neural network algorithms. Results: The accuracy of the Innovative Convolutional Neural Network in brain tumor detection, at more than 95% (approx.), is significantly higher than that of the Recurrent Neural Network. Conclusion: Based on an Independent T-test analysis using SPSS statistical software, the innovative Convolutional Neural Network algorithm is significantly more accurate than the Recurrent Neural Network.
5

Peng, Wenli, Shenglai Zhen, Xin Chen, Qianjing Xiong and Benli Yu. "Study on convolutional recurrent neural networks for speech enhancement in fiber-optic microphones". Journal of Physics: Conference Series 2246, no. 1 (April 1, 2022): 012084. http://dx.doi.org/10.1088/1742-6596/2246/1/012084.

Abstract:
In this paper, several improved convolutional recurrent networks (CRN) are proposed, which can enhance speech with non-additive distortion captured by fiber-optic microphones. Our preliminary study shows that the original CRN structure based on amplitude spectrum estimation yields seriously distorted speech due to the loss of phase information. Therefore, we transform the network to run in the time domain and gain a 0.42 improvement in PESQ and a 0.03 improvement in STOI. In addition, we integrate dilated convolution into the CRN architecture and adopt three different types of bottleneck modules, namely long short-term memory (LSTM), gated recurrent units (GRU), and dilated convolutions. The experimental results show that the model with dilated convolution in the encoder-decoder and the model with dilated convolution at the bottleneck layer achieve the highest PESQ and STOI scores, respectively.
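As an illustration of the time-domain convolutional recurrent structure the abstract refers to, here is a minimal sketch with a convolutional encoder, an LSTM bottleneck, and a transposed-convolution decoder; kernel sizes, strides, and channel counts are placeholders, and the dilated-convolution variants are omitted.

```python
import torch
import torch.nn as nn

class TimeDomainCRN(nn.Module):
    def __init__(self, channels=64, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=16, stride=8, padding=4), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=8, stride=4, padding=2), nn.ReLU(),
        )
        self.bottleneck = nn.LSTM(channels, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, channels)
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(channels, channels, kernel_size=8, stride=4, padding=2), nn.ReLU(),
            nn.ConvTranspose1d(channels, 1, kernel_size=16, stride=8, padding=4),
        )

    def forward(self, wav):                        # wav: (batch, 1, samples)
        z = self.encoder(wav)                      # (batch, channels, frames)
        h, _ = self.bottleneck(z.transpose(1, 2))  # recurrent bottleneck over frames
        z = self.proj(h).transpose(1, 2)           # back to (batch, channels, frames)
        return self.decoder(z)                     # enhanced waveform estimate
```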
6

P, Suma, and Senthil Kumar R. "Automatic Classification of Normal and Infected Blood Cells for Leukemia Through Color Based Segmentation Technique Over Innovative CNN Algorithm and Comparing the Error Rate with RNN". ECS Transactions 107, no. 1 (April 24, 2022): 14123–34. http://dx.doi.org/10.1149/10701.14123ecst.

Abstract:
The aim is to classify normal and infected blood cells for leukemia through color-based segmentation, comparing the error rates of the innovative Convolutional Neural Network and the Recurrent Neural Network algorithm. Materials and Methods: The Convolutional Neural Network algorithm takes an image as input and differentiates it according to the properties of the image. Here the white blood cells acted as the major parameter for detecting the disease. Result: Data collection was carried out, and the analysis was performed using blood cell sample images to determine the result and error rate of each algorithm. In this proposed work, the error rate was lower for the innovative Convolutional Neural Network than for the Recurrent Neural Network. Conclusion: The data were collected from various sources for disease detection. The Convolutional Neural Network (87.02%) showed a reduced error rate compared to the Recurrent Neural Network (89.42%) and was used for the whole disease detection process.
7

Wang, Lin, and Zuqiang Meng. "Multichannel Two-Dimensional Convolutional Neural Network Based on Interactive Features and Group Strategy for Chinese Sentiment Analysis". Sensors 22, no. 3 (January 18, 2022): 714. http://dx.doi.org/10.3390/s22030714.

Abstract:
In Chinese sentiment analysis tasks, many existing methods tend to use recurrent neural networks (e.g., long short-term memory networks and gated recurrent units) and standard one-dimensional convolutional neural networks (1D-CNN) to extract features. This is because a recurrent neural network can deal with the order dependence of the data to a certain extent and the one-dimensional convolution can extract local features. Although these methods have good performance in sentiment analysis tasks, recurrent neural networks (RNNs) cannot be parallelized, resulting in time-inefficiency, and the standard 1D-CNN can only extract a single sample feature, with the result that the feature information cannot be fully utilized. To this end, in this paper, we propose a multichannel two-dimensional convolutional neural network based on interactive features and group strategy (MCNN-IFGS) for Chinese sentiment analysis. Firstly, we no longer use word encoding technology but use character-based integer encoding to retain more fine-grained information. Besides, in character-level vectors, the interactive features of different elements are introduced to improve the dimensionality of feature vectors and supplement semantic information so that the input matches the model network. In order to ensure that more sentiment features are learned, group strategies are used to form several feature mapping groups, so the learning object is converted from the traditional single sample to the learning of the feature mapping group, so as to achieve the purpose of learning more features. Finally, multichannel two-dimensional convolutional neural networks with different sizes of convolution kernels are used to extract sentiment features of different scales. The experimental results on the Chinese dataset show that our proposed method outperforms other baseline and state-of-the-art methods.
8

Poudel, Sushan, and Dr R. Anuradha. "Speech Command Recognition using Artificial Neural Networks". JOIV: International Journal on Informatics Visualization 4, no. 2 (May 26, 2020): 73. http://dx.doi.org/10.30630/joiv.4.2.358.

Abstract:
Speech is one of the most effective ways for humans and machines to interact. This project aims to build a Speech Command Recognition System that is capable of predicting predefined speech commands. The dataset provided by Google's TensorFlow and AIY teams is used to implement different Neural Network models, which include a Convolutional Neural Network and a Recurrent Neural Network combined with a Convolutional Neural Network. The combination of Convolutional and Recurrent Neural Networks outperforms the Convolutional Neural Network alone by 8% and achieves 96.66% accuracy for 20 labels.
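A hedged sketch of a combined convolutional and recurrent keyword-spotting model of the kind the project describes is shown below; the 20-label output matches the abstract, while the spectrogram size and layer widths are assumptions.

```python
import torch
import torch.nn as nn

class ConvRNNKeywordSpotter(nn.Module):
    def __init__(self, n_mels=40, num_classes=20):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d((1, 2)),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d((1, 2)),
        )
        self.rnn = nn.GRU(32 * (n_mels // 4), 64, batch_first=True)
        self.fc = nn.Linear(64, num_classes)

    def forward(self, spec):                        # spec: (batch, 1, time, n_mels)
        x = self.conv(spec)                         # (batch, 32, time, n_mels // 4)
        b, c, t, f = x.shape
        x = x.permute(0, 2, 1, 3).reshape(b, t, c * f)  # one feature vector per frame
        _, h = self.rnn(x)                          # h: (1, batch, 64)
        return self.fc(h[-1])                       # command logits
```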
9

Wu, Hao, and Saurabh Prasad. "Convolutional Recurrent Neural Networks for Hyperspectral Data Classification". Remote Sensing 9, no. 3 (March 21, 2017): 298. http://dx.doi.org/10.3390/rs9030298.
10

Li, Kezhi, John Daniels, Chengyuan Liu, Pau Herrero and Pantelis Georgiou. "Convolutional Recurrent Neural Networks for Glucose Prediction". IEEE Journal of Biomedical and Health Informatics 24, no. 2 (February 2020): 603–13. http://dx.doi.org/10.1109/jbhi.2019.2908488.

Theses on the topic "Convolutional recurrent neural networks":

1

Ayoub, Issa. "Multimodal Affective Computing Using Temporal Convolutional Neural Network and Deep Convolutional Neural Networks". Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39337.

Abstract:
Affective computing has gained significant attention from researchers in the last decade due to the wide variety of applications that can benefit from this technology. Often, researchers describe affect using emotional dimensions such as arousal and valence. Valence refers to the spectrum of negative to positive emotions while arousal determines the level of excitement. Describing emotions through continuous dimensions (e.g. valence and arousal) allows us to encode subtle and complex affects as opposed to discrete emotions, such as the basic six emotions: happy, anger, fear, disgust, sad and neutral. Recognizing spontaneous and subtle emotions remains a challenging problem for computers. In our work, we employ two modalities of information: video and audio. Hence, we extract visual and audio features using deep neural network models. Given that emotions are time-dependent, we apply the Temporal Convolutional Neural Network (TCN) to model the variations in emotions. Additionally, we investigate an alternative model that combines a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN). Given our inability to fit the latter deep model into the main memory, we divide the RNN into smaller segments and propose a scheme to back-propagate gradients across all segments. We configure the hyperparameters of all models using Gaussian processes to obtain a fair comparison between the proposed models. Our results show that TCN outperforms RNN for the recognition of the arousal and valence emotional dimensions. Therefore, we propose the adoption of TCN for emotion detection problems as a baseline method for future work. Our experimental results show that TCN outperforms all RNN based models yielding a concordance correlation coefficient of 0.7895 (vs. 0.7544) on valence and 0.8207 (vs. 0.7357) on arousal on the validation dataset of SEWA dataset for emotion prediction.
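For readers unfamiliar with the temporal convolutional architecture the thesis favours, the following is an illustrative TCN building block with causal, dilated 1D convolutions and a residual connection; the hyperparameters are placeholders, not the thesis settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TCNBlock(nn.Module):
    def __init__(self, channels=64, kernel_size=3, dilation=2):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation      # left-only padding keeps causality
        self.conv1 = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.relu = nn.ReLU()

    def forward(self, x):                            # x: (batch, channels, time)
        y = self.relu(self.conv1(F.pad(x, (self.pad, 0))))
        y = self.relu(self.conv2(F.pad(y, (self.pad, 0))))
        return self.relu(x + y)                      # residual connection
```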
2

Silfa, Franyell. "Energy-efficient architectures for recurrent neural networks". Doctoral thesis, Universitat Politècnica de Catalunya, 2021. http://hdl.handle.net/10803/671448.

Abstract:
Deep Learning algorithms have been remarkably successful in applications such as Automatic Speech Recognition and Machine Translation. Thus, these kinds of applications are ubiquitous in our lives and are found in a plethora of devices. These algorithms are composed of Deep Neural Networks (DNNs), such as Convolutional Neural Networks and Recurrent Neural Networks (RNNs), which have a large number of parameters and require a large amount of computation. Hence, the evaluation of DNNs is challenging due to their large memory and power requirements. RNNs are employed to solve sequence-to-sequence problems such as Machine Translation. They contain data dependencies among the executions of time-steps, hence the amount of parallelism is severely limited. Thus, evaluating them in an energy-efficient manner is more challenging than evaluating other DNN algorithms. This thesis studies applications using RNNs to improve their energy efficiency on specialized architectures. Specifically, we propose novel energy-saving techniques and highly efficient architectures tailored to the evaluation of RNNs. We focus on the most successful RNN topologies, which are the Long Short-Term Memory and the Gated Recurrent Unit. First, we characterize a set of RNNs running on a modern SoC. We identify that accessing the memory to fetch the model weights is the main source of energy consumption. Thus, we propose E-PUR: an energy-efficient processing unit for RNN inference. E-PUR achieves 6.8x speedup and improves energy consumption by 88x compared to the SoC. These benefits are obtained by improving the temporal locality of the model weights. In E-PUR, fetching the parameters is the main source of energy consumption. Thus, we strive to reduce memory accesses and propose a scheme to reuse previous computations. Our observation is that when evaluating the input sequences of an RNN model, the output of a given neuron tends to change lightly between consecutive evaluations. Thus, we develop a scheme that caches the neurons' outputs and reuses them whenever it detects that the change between the current and previously computed output value for a given neuron is small, avoiding fetching the weights. In order to decide when to reuse a previous value we employ a Binary Neural Network (BNN) as a predictor of reusability. The low-cost BNN can be employed in this context since its output is highly correlated to the output of RNNs. We show that our proposal avoids more than 24.2% of computations. Hence, on average, energy consumption is reduced by 18.5% for a speedup of 1.35x. RNN models' memory footprint is usually reduced by using low precision for evaluation and storage. In this case, the minimum precision used is identified offline and it is set such that the model maintains its accuracy. This method utilizes the same precision to compute all time-steps. Yet, we observe that some time-steps can be evaluated with a lower precision while preserving the accuracy. Thus, we propose a technique that dynamically selects the precision used to compute each time-step. A challenge of our proposal is choosing a lower bit-width. We address this issue by recognizing that information from a previous evaluation can be employed to determine the precision required in the current time-step. Our scheme evaluates 57% of the computations on a bit-width lower than the fixed precision employed by static methods. We implement it on E-PUR and it provides 1.46x speedup and 19.2% energy savings on average.
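The computation-reuse idea described above can be illustrated with a toy sketch: cache each neuron's previous output and skip the matrix-vector product when the input has barely changed. The thesis uses a binary neural network as the reuse predictor; in this simplified stand-in, a plain threshold on the input delta plays that role, and the weight matrix and input drift are invented placeholders.

```python
import numpy as np

def reuse_aware_layer(x_t, x_prev, W, y_prev, threshold=0.05):
    """Compute y = W @ x_t, reusing the cached output when the input change is small."""
    changed = np.abs(x_t - x_prev).mean() > threshold   # crude stand-in for the BNN predictor
    if changed or y_prev is None:
        return W @ x_t, False                            # full evaluation (weights fetched)
    return y_prev.copy(), True                           # reuse cached neuron outputs

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))
x_prev, y_prev = rng.standard_normal(16), None
for step in range(5):
    x_t = x_prev + 0.01 * rng.standard_normal(16)        # slowly drifting input sequence
    y_prev, reused = reuse_aware_layer(x_t, x_prev, W, y_prev)
    x_prev = x_t
    print(f"step {step}: reused={reused}")
```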
3

Oyharcabal, Astorga Nicolás. "Convolutional recurrent neural networks for remaining useful life prediction in mechanical systems". Tesis, Universidad de Chile, 2018. http://repositorio.uchile.cl/handle/2250/168514.

Abstract:
Thesis submitted for the degree of Civil Mechanical Engineer
Determining the remaining useful life (RUL) of a machine, piece of equipment, device, or mechanical component is something that has been worked on in recent years and is crucial for the future of any industry that requires it. Continuous machine monitoring, together with a good RUL prediction, allows maintenance costs to be minimized and exposure to failures reduced. However, the data obtained from monitoring are varied and noisy, have a sequential character, and do not always bear a strict relationship to the RUL, so estimating it is a difficult problem. For this reason, different classes of neural networks are used today and, in particular, when sequential problems are to be modeled, Recurrent Neural Networks (RNNs) such as the LSTM (Long Short-Term Memory) or JANET (Just Another NETwork) are used, owing to their ability to autonomously identify patterns in temporal sequences. Alongside these networks, alternatives that incorporate convolution as the operation inside each RNN cell, known as ConvRNNs (Convolutional Recurrent Neural Networks), are also used. The latter networks outperform their convolutional and recurrent counterparts in certain cases that require processing sequences of images and, in the particular case of this work, time series of monitoring data that are smoothed by the convolution and processed by the recurrence. The general objective of this work is to determine the best ConvRNN option for estimating the RUL of a turbofan from time series in the C-MAPSS database. It also studies how to edit the database to improve the accuracy of a ConvRNN, and the application of convolution as a primary operation on a time series whose parameters describe the behavior of a turbofan. To this end, a Convolutional LSTM, a Convolutional Encoder-Decoder LSTM, a Convolutional JANET, and a Convolutional Encoder-Decoder JANET are implemented. It is found that the Convolutional Encoder-Decoder JANET model gives the best results in terms of average accuracy and the number of parameters required by the network (the fewer the better, since less memory is needed), and it is also able to assimilate all of the C-MAPSS databases. It is also found that the RUL in the database can be modified for data before the failure. The computers of the Integration of Reliability and Intelligent Maintenance (ICMI) laboratory of the Department of Mechanical Engineering of the University of Chile were used to program and run the different networks.
4

Ljubenkov, Davor. "Optimizing Bike Sharing System Flows using Graph Mining, Convolutional and Recurrent Neural Networks". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-257783.

Abstract:
A Bicycle-sharing system (BSS) is a popular service scheme deployed in cities of different sizes around the world. Although the docked bike system is its most popular model, it still experiences a number of weaknesses that could be addressed by investigating bike-sharing network properties and the evolution of the obtained patterns. Efficiently keeping a bicycle-sharing system as balanced as possible is the main problem, and thus predicting or minimizing the manual transportation of bikes across the city is the prime objective in order to save logistic costs for operating companies. The purpose of this thesis is two-fold. Firstly, it is to visualize bike flow using data exploration methods and statistical analysis to better understand mobility characteristics with respect to distance, duration, time of the day, spatial distribution, weather circumstances, and other attributes. Secondly, by obtaining flow visualizations, it is possible to focus on specific directed sub-graphs containing only those pairs of stations whose mutual flow difference is the most asymmetric. By doing so, we are able to use graph mining and machine learning techniques on these unbalanced stations. The identification of spatial structures and their structural change can be captured using a Convolutional neural network (CNN) that takes adjacency matrix snapshots of unbalanced sub-graphs. A generated structure from the previous method is then used in the Long short-term memory artificial recurrent neural network (RNN LSTM) in order to find and predict its dynamic patterns. As a result, we are predicting bike flows for each node in the possible future sub-graph configuration, which in turn informs bicycle-sharing system owners in advance to plan accordingly. This combination of methods notifies them which prospective areas they should focus on more and how many bike relocation phases are to be expected. Methods are evaluated using Cross validation (CV), Root mean square error (RMSE) and Mean average error (MAE) metrics. Benefits are identified both for urban city planning and for bike sharing companies by saving time and minimizing their cost.
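A minimal ConvLSTM cell of the kind used to learn spatial structure over a sequence of adjacency-matrix snapshots is sketched below; it is a generic illustration under assumed sizes (single layer, square kernels, random stand-in data), not the author's exact architecture.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch=1, hidden_ch=16, kernel_size=3):
        super().__init__()
        self.hidden_ch = hidden_ch
        self.gates = nn.Conv2d(in_ch + hidden_ch, 4 * hidden_ch,
                               kernel_size, padding=kernel_size // 2)

    def forward(self, x, state):                       # x: (batch, in_ch, H, W)
        h, c = state
        i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        i, f, o, g = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o), torch.tanh(g)
        c = f * c + i * g                              # convolutional LSTM cell update
        h = o * torch.tanh(c)
        return h, c

# One pass over T adjacency-matrix snapshots of an N-station sub-graph.
B, T, N = 2, 7, 36
cell = ConvLSTMCell()
h = torch.zeros(B, 16, N, N)
c = torch.zeros(B, 16, N, N)
for t in range(T):
    snapshot = torch.rand(B, 1, N, N)                  # stand-in for real flow matrices
    h, c = cell(snapshot, (h, c))
```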
5

Tan, Ke. "Convolutional and recurrent neural networks for real-time speech separation in the complex domain". The Ohio State University, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=osu1626983471600193.

6

Daliparthi, Venkata Satya Sai Ajay. "Semantic Segmentation of Urban Scene Images Using Recurrent Neural Networks". Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-20651.

Abstract:
Background: In Autonomous Driving Vehicles, the vehicle receives pixel-wise sensor data from RGB cameras, point-wise depth information from the cameras, and sensor data as input. The computer inside the Autonomous Driving vehicle processes the input data and provides the desired output, such as steering angle, torque, and brake. For the vehicle to make an accurate decision, the computer inside it should be completely aware of its surroundings and understand each pixel in the driving scene. Semantic Segmentation is the task of assigning a class label (such as Car, Road, Pedestrian, or Sky) to each pixel in the given image. So, a better performing Semantic Segmentation algorithm will contribute to the advancement of the Autonomous Driving field. Research Gap: Traditional methods, such as handcrafted features and feature extraction methods, were mainly used to solve Semantic Segmentation. Since the rise of deep learning, most works have used deep learning to deal with Semantic Segmentation. The most commonly used neural network architecture for Semantic Segmentation has been the Convolutional Neural Network (CNN). Even though some works made use of Recurrent Neural Networks (RNNs), the effect of RNNs on Semantic Segmentation had not yet been thoroughly studied. Our study addresses this research gap. Idea: After going through the existing literature, we came up with the idea of "Using RNNs as an add-on module, to augment the skip-connections in Semantic Segmentation Networks through residual connections." Objectives and Method: The main objective of our work is to improve the Semantic Segmentation network's performance by using RNNs. The Experiment was chosen as the methodology for our study. In our work, we proposed three novel architectures called UR-Net, UAR-Net, and DLR-Net by applying our idea to the existing networks U-Net, Attention U-Net, and DeepLabV3+, respectively. Results and Findings: We empirically showed that our proposed architectures improve the segmentation of edges and boundaries. Through our study, we found that there is a trade-off between using RNNs and the inference time of the model: if RNNs are used to improve the performance of Semantic Segmentation networks, some extra seconds must be traded off during model inference. Conclusion: Our findings will not contribute to the Autonomous Driving field, where better real-time performance is needed, but they will contribute to the advancement of Bio-medical Image Segmentation, where doctors can trade off those extra seconds during inference for better performance.
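A hedged sketch of the stated idea, an RNN add-on that augments a skip connection through a residual connection, is given below: the skip feature map is scanned row by row by a GRU and the result is added back before the decoder consumes it. The exact wiring in UR-Net, UAR-Net, and DLR-Net may differ; channel counts here are assumptions.

```python
import torch
import torch.nn as nn

class RecurrentSkip(nn.Module):
    """Wrap a U-Net style skip connection with an RNN plus a residual add."""
    def __init__(self, channels=64):
        super().__init__()
        self.rnn = nn.GRU(channels, channels, batch_first=True)

    def forward(self, skip):                                  # skip: (batch, C, H, W)
        b, c, h, w = skip.shape
        seq = skip.permute(0, 2, 3, 1).reshape(b * h, w, c)   # each image row is a sequence
        out, _ = self.rnn(seq)
        out = out.reshape(b, h, w, c).permute(0, 3, 1, 2)
        return skip + out                                     # residual connection
```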
7

Hanson, Jack. "Protein Structure Prediction by Recurrent and Convolutional Deep Neural Network Architectures". Thesis, Griffith University, 2018. http://hdl.handle.net/10072/382722.

Abstract:
In this thesis, the application of convolutional and recurrent machine learning techniques to several key structural properties of proteins is explored. Chapter 2 presents the first application of an LSTM-BRNN in structural bioinformatics. The method, called SPOT-Disorder, predicts the per-residue probability of a protein being intrinsically disordered (i.e. unstructured, or flexible). Using this methodology, SPOT-Disorder achieved the highest accuracy in the literature without separating short and long disordered regions during training as was required in previous models, and was additionally proven capable of indirectly discerning functional sites located in disordered regions. Chapter 3 extends the application of an LSTM-BRNN to a two-dimensional problem in the prediction of protein contact maps. Protein contact maps describe the intra-sequence distance between each residue pairing at a distance cutoff, providing key restraints towards the possible conformations of a protein. This work, entitled SPOT-Contact, introduced the coupling of two-dimensional LSTM-BRNNs with ResNets to maximise dependency propagation in order to achieve the highest reported accuracies for contact map precision. Several models of varying architectures were trained and combined as an ensemble predictor in order to minimise incorrect generalisations. Chapter 4 discusses the utilisation of an ensemble of LSTM-BRNNs and ResNets to predict local protein one-dimensional structural properties. The method, called SPOT-1D, predicts a wide range of local structural descriptors, including several solvent exposure metrics, secondary structure, and real-valued backbone angles. SPOT-1D was significantly improved by the inclusion of the outputs of SPOT-Contact in the input features. Using this topology led to the best reported accuracy metrics for all predicted properties. The protein structures constructed from the backbone angles predicted by SPOT-1D achieved the lowest average error from their native structures in the literature. Chapter 5 presents an update on SPOT-Disorder, as it employs the inputs from SPOT-1D in conjunction with an ensemble of LSTM-BRNNs and Inception Residual Squeeze and Excitation networks to predict protein intrinsic disorder. This model confirmed the enhancement provided by utilising the coupled architectures over the LSTM-BRNN alone, whilst also introducing a new convolutional format to the bioinformatics field. The work in Chapter 6 utilises the same topology from SPOT-1D for single-sequence prediction of protein intrinsic disorder in SPOT-Disorder-Single. Single-sequence prediction describes the prediction of a protein's properties without the use of evolutionary information. While evolutionary information generally improves the performance of a computational model, it comes at the expense of a greatly increased computational and time load. Removing this from the model allows for genome-scale protein analysis at a minor drop in accuracy. However, models trained without evolutionary profiles can be more accurate for proteins with limited and therefore unreliable evolutionary information.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Eng & Built Env
Science, Environment, Engineering and Technology
8

Holm, Noah, and Emil Plynning. "Spatio-temporal prediction of residential burglaries using convolutional LSTM neural networks". Thesis, KTH, Geoinformatik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-229952.

Abstract:
The low number of solved residential burglary crimes calls for new and innovative methods in the prevention and investigation of these cases. There were 22,600 reported residential burglaries in Sweden in 2017, but only four to five percent of these will ever be solved. There are many initiatives in both Sweden and abroad for decreasing the number of residential burglaries, and one of the areas being tested is the use of prediction methods for more efficient preventive actions. This thesis is an investigation of a potential prediction method that uses neural networks to identify areas with a higher risk of burglaries on a daily basis. The model uses reported burglaries to learn patterns in both space and time. The rationale for the existence of patterns is based on near repeat theories in criminology, which state that after a burglary both the burgled victim and an area around that victim have an increased risk of additional burglaries. The work has been conducted in cooperation with the Swedish Police authority. The machine learning is implemented with convolutional long short-term memory (LSTM) neural networks with max pooling in three dimensions that learn from ten years of residential burglary data (2007-2016) in a study area in Stockholm, Sweden. The model's accuracy is measured by performing predictions of burglaries during 2017 on a daily basis. It classifies cells in a 36x36 grid with 600-meter-square grid cells as areas with elevated risk or not. By classifying 4% of all grid cells during the year as risk areas, 43% of all burglaries are correctly predicted. The performance of the model could potentially be improved by further configuration of the parameters of the neural network, along with the use of more data with factors that are correlated to burglaries, for instance weather. Consequently, further work in these areas could increase the accuracy. The conclusion is that neural networks, or machine learning in general, could be a powerful and innovative tool for the Swedish Police authority to predict and moreover prevent certain crime. This thesis serves as a first prototype of how such a system could be implemented and used.
9

Fu, Xinyu. "Context-aware sentence categorisation : word mover's distance and character-level convolutional recurrent neural network". Thesis, University of Nottingham, 2018. http://eprints.nottingham.ac.uk/52054/.

Abstract:
Supervised k nearest neighbour and unsupervised hierarchical agglomerative clustering algorithms can be enhanced through a word mover's distance-based sentence distance metric to offer superior context-aware sentence categorisation performance. An advanced neural network-oriented classifier is able to achieve competitive results on the benchmark streams via an aggregated recurrent unit incorporated with a sophisticated convolving layer. The continually increasing number of textual snippets produced each year necessitates ever improving information processing methods for searching, retrieving, and organising text. Central to these information processing methods are sentence classification and clustering, which have become an important application for natural language processing and information retrieval. The present work proposes three novel sentence categorisation frameworks, namely hierarchical agglomerative clustering-word mover's distance, k nearest neighbour-word mover's distance, and convolutional recurrent neural network. Hierarchical agglomerative clustering-word mover's distance employs the word mover's distance distortion function to effectively cluster unlabelled sentences into nearby centroids. K nearest neighbour-word mover's distance classifies testing textual snippets through word mover's distance-based sentence similarity. Both models are from the spectrum of count-based frameworks since they apply term frequency statistics when building the vector space matrix. Experimental evaluation on the two unsupervised learning data-sets shows better performance of hierarchical agglomerative clustering-word mover's distance over other competitors on mean squared error, completeness score, homogeneity score, and v-measure value. For k nearest neighbour-word mover's distance, two benchmark textual streams are experimented with to verify its superior classification performance against comparison algorithms on precision rate, recall ratio, and F1 score. The performance comparison is statistically validated via the Mann-Whitney U test. Through extensive experiments and results analysis, each research hypothesis is successfully verified. Unlike a traditional singleton neural network, the convolutional recurrent neural network model incorporates a character-level convolutional network with a character-aware recurrent neural network to form a combined framework. The proposed model benefits from the character-aware convolutional neural network in that only salient features are selected and fed into the integrated character-aware recurrent neural network. The character-aware recurrent neural network effectively learns long sequence semantics via a sophisticated update mechanism. The experiment presented in the current thesis compares the convolutional recurrent neural network framework against state-of-the-art text classification algorithms on four popular benchmarking corpora. The present work also analyses the impact of three different recurrent neural network hidden recurrent cells on performance and their runtime efficiency. It is observed that the minimal gated unit achieves the optimal runtime and comparable performance against the gated recurrent unit and long short-term memory. For term frequency-inverse document frequency-based algorithms, the current experiment examines word2vec, global vectors for word representation, and sent2vec embeddings and reports their performance differences. Performance comparison is statistically validated through the Mann-Whitney U test, and the corresponding hypotheses are confirmed through the reported statistical analysis.
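To make the word mover's distance-based nearest-neighbour idea concrete, here is a toy sketch that uses the cheap relaxed lower bound (each word matched to its nearest counterpart) instead of the full optimal-transport distance used in the thesis; the tiny three-dimensional word vectors and example documents are invented placeholders.

```python
import numpy as np

vectors = {                                   # toy embedding table
    "good": np.array([0.9, 0.1, 0.0]), "great": np.array([0.8, 0.2, 0.0]),
    "bad":  np.array([0.0, 0.9, 0.1]), "awful": np.array([0.1, 0.8, 0.1]),
    "movie": np.array([0.3, 0.3, 0.9]),
}

def relaxed_wmd(doc_a, doc_b):
    """Average distance from each word in doc_a to its nearest word in doc_b."""
    return np.mean([min(np.linalg.norm(vectors[a] - vectors[b]) for b in doc_b)
                    for a in doc_a])

def knn_predict(query, labelled_docs, k=1):
    scored = sorted(labelled_docs, key=lambda d: relaxed_wmd(query, d[0]))
    labels = [label for _, label in scored[:k]]
    return max(set(labels), key=labels.count)  # majority vote among k neighbours

train = [(["good", "movie"], "pos"), (["awful", "movie"], "neg")]
print(knn_predict(["great", "movie"], train))  # -> "pos"
```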
10

Kvedaraite, Indre. "Sentiment Analysis of YouTube Public Videos based on their Comments". Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-105754.

Abstract:
With the rise of social media and publicly available data, opinion mining is more accessible than ever. It is valuable for content creators, companies and advertisers to gain insights into what users think and feel. This work examines comments on YouTube videos, and builds a deep learning classifier to automatically determine their sentiment. Four Long Short-Term Memory-based models are trained and evaluated. Experiments are performed to determine which deep learning model performs with the best accuracy, recall, precision, F1 score and ROC curve on a labelled YouTube Comment dataset. The results indicate that a BiLSTM-based model has the overall best performance, with the accuracy of 89%. Furthermore, the four LSTM-based models are evaluated on an IMDB movie review dataset, achieving an average accuracy of 87%, showing that the models can predict the sentiment of different textual data. Finally, a statistical analysis is performed on the YouTube videos, revealing that videos with positive sentiment have a statistically higher number of upvotes and views. However, the number of downvotes is not significantly higher in videos with negative sentiment.

Books on the topic "Convolutional recurrent neural networks":

1

Salem, Fathi M. Recurrent Neural Networks. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-89929-5.

2

Tyagi, Amit Kumar, and Ajith Abraham. Recurrent Neural Networks. Boca Raton: CRC Press, 2022. http://dx.doi.org/10.1201/9781003307822.
3

Hu, Xiaolin, and P. Balasubramaniam. Recurrent neural networks. Rijeka, Croatia: InTech, 2008.
4

Mou, Lili, and Zhi Jin. Tree-Based Convolutional Neural Networks. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-1870-2.
5

Milosevic, Nemanja. Introduction to Convolutional Neural Networks. Berkeley, CA: Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-5648-0.

6

Habibi Aghdam, Hamed, and Elnaz Jahani Heravi. Guide to Convolutional Neural Networks. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-57550-6.
7

Venkatesan, Ragav, and Baoxin Li. Convolutional Neural Networks in Visual Computing. Boca Raton; London: Taylor & Francis, CRC Press, 2017. http://dx.doi.org/10.4324/9781315154282.
8

Teoh, Teik Toe. Convolutional Neural Networks for Medical Applications. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-8814-1.

9

Hammer, Barbara. Learning with recurrent neural networks. London: Springer London, 2000. http://dx.doi.org/10.1007/bfb0110016.

10

Koonce, Brett. Convolutional Neural Networks with Swift for Tensorflow. Berkeley, CA: Apress, 2021. http://dx.doi.org/10.1007/978-1-4842-6168-2.


Book chapters on the topic "Convolutional recurrent neural networks":

1

Rajalakshmi, Ratnavel, Abhinav Basil Shinow, Aswin Murali, Kashinadh S. Nair and J. Bhuvana. "An Efficient Convolutional Neural Network with Image Augmentation for Cassava Leaf Disease Detection". In Recurrent Neural Networks, 289–305. Boca Raton: CRC Press, 2022. http://dx.doi.org/10.1201/9781003307822-19.
2

Yan, Wei Qi. "Convolutional Neural Networks and Recurrent Neural Networks". In Texts in Computer Science, 69–124. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-4823-9_3.

3

Michelucci, Umberto. "Convolutional and Recurrent Neural Networks". In Applied Deep Learning, 323–64. Berkeley, CA: Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-3790-8_8.

4

Borhani, Reza, Soheila Borhani and Aggelos K. Katsaggelos. "Convolutional and Recurrent Neural Networks". In Fundamentals of Machine Learning and Deep Learning in Medicine, 131–63. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-19502-0_7.
5

Pattabiraman, V., and R. Maheswari. "Image to Text Processing Using Convolution Neural Networks". In Recurrent Neural Networks, 43–52. Boca Raton: CRC Press, 2022. http://dx.doi.org/10.1201/9781003307822-4.
6

Seo, Youngjoo, Michaël Defferrard, Pierre Vandergheynst and Xavier Bresson. "Structured Sequence Modeling with Graph Convolutional Recurrent Networks". In Neural Information Processing, 362–73. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-04167-0_33.
7

Bartz, Christian, Tom Herold, Haojin Yang and Christoph Meinel. "Language Identification Using Deep Convolutional Recurrent Neural Networks". In Neural Information Processing, 880–89. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70136-3_93.
8

Nowak, Jakub, Marcin Korytkowski and Rafał Scherer. "Convolutional Recurrent Neural Networks for Computer Network Analysis". In Artificial Neural Networks and Machine Learning – ICANN 2019: Text and Time Series, 747–57. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30490-4_59.
9

Lopez, Diego Manzanas, Sung Woo Choi, Hoang-Dung Tran and Taylor T. Johnson. "NNV 2.0: The Neural Network Verification Tool". In Computer Aided Verification, 397–412. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-37703-7_19.

Abstract:
This manuscript presents the updated version of the Neural Network Verification (NNV) tool. NNV is a formal verification software tool for deep learning models and cyber-physical systems with neural network components. NNV was first introduced as a verification framework for feedforward and convolutional neural networks, as well as for neural network control systems. Since then, numerous works have made significant improvements in the verification of new deep learning models, as well as tackling some of the scalability issues that may arise when verifying complex models. In this new version of NNV, we introduce verification support for multiple deep learning models, including neural ordinary differential equations, semantic segmentation networks and recurrent neural networks, as well as a collection of reachability methods that aim to reduce the computation cost of reachability analysis of complex neural networks. We have also added direct support for standard input verification formats in the community such as VNNLIB (verification properties), and ONNX (neural networks) formats. We present a collection of experiments in which NNV verifies safety and robustness properties of feedforward, convolutional, semantic segmentation and recurrent neural networks, as well as neural ordinary differential equations and neural network control systems. Furthermore, we demonstrate the capabilities of NNV against a commercially available product in a collection of benchmarks from control systems, semantic segmentation, image classification, and time-series data.
10

Tan, Chuanqi, Fuchun Sun, Wenchang Zhang, Jianhua Chen and Chunfang Liu. "Multimodal Classification with Deep Convolutional-Recurrent Neural Networks for Electroencephalography". In Neural Information Processing, 767–76. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70096-0_78.

Conference papers on the topic "Convolutional recurrent neural networks":

1

Chien, Jen-Tzung, and Yu-Min Huang. "Stochastic Convolutional Recurrent Networks". In 2020 International Joint Conference on Neural Networks (IJCNN). IEEE, 2020. http://dx.doi.org/10.1109/ijcnn48605.2020.9206970.
2

Wang, Wentao, Canhui Liao, Quan Cheng and Pengju Wang. "Mixed convolutional recurrent neural networks". In the International Conference. New York, New York, USA: ACM Press, 2019. http://dx.doi.org/10.1145/3371425.3371430.
3

"Direction Finding Using Convolutional Neural Networks and Convolutional Recurrent Neural Networks". In 2020 28th Signal Processing and Communications Applications Conference (SIU). IEEE, 2020. http://dx.doi.org/10.1109/siu49456.2020.9302448.

4

Yang, Yi. "Convolutional Neural Networks with Recurrent Neural Filters". In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA, USA: Association for Computational Linguistics, 2018. http://dx.doi.org/10.18653/v1/d18-1109.

5

Gulshad, Sadaf, and Jong-Hwan Kim. "Deep convolutional and recurrent writer". In 2017 International Joint Conference on Neural Networks (IJCNN). IEEE, 2017. http://dx.doi.org/10.1109/ijcnn.2017.7966206.
6

Kadambari, Sai Kiran, and Sundeep Prabhakar Chepuri. "Fast Graph Convolutional Recurrent Neural Networks". In 2019 53rd Asilomar Conference on Signals, Systems, and Computers. IEEE, 2019. http://dx.doi.org/10.1109/ieeeconf44664.2019.9048829.
7

Ruiz, Luana, Fernando Gama and Alejandro Ribeiro. "Gated Graph Convolutional Recurrent Neural Networks". In 2019 27th European Signal Processing Conference (EUSIPCO). IEEE, 2019. http://dx.doi.org/10.23919/eusipco.2019.8902995.
8

Calvin, Rachel, and Shravya Suresh. "Image Captioning using Convolutional Neural Networks and Recurrent Neural Network". In 2021 6th International Conference for Convergence in Technology (I2CT). IEEE, 2021. http://dx.doi.org/10.1109/i2ct51068.2021.9418001.
9

Wang, Ruishuang, Zhao Li, Jian Cao, Tong Chen and Lei Wang. "Convolutional Recurrent Neural Networks for Text Classification". In 2019 International Joint Conference on Neural Networks (IJCNN). IEEE, 2019. http://dx.doi.org/10.1109/ijcnn.2019.8852406.
10

Shankar, Tanmay, Santosha K. Dwivedy and Prithwijit Guha. "Reinforcement Learning via Recurrent Convolutional Neural Networks". In 2016 23rd International Conference on Pattern Recognition (ICPR). IEEE, 2016. http://dx.doi.org/10.1109/icpr.2016.7900026.

Reports of organizations on the topic "Convolutional recurrent neural networks":

1

Forrest, Robert. Convolutional Neural Networks for Signal Detection. Office of Scientific and Technical Information (OSTI), November 2020. http://dx.doi.org/10.2172/1813655.
2

Tarasenko, Andrii O., Yuriy V. Yakimov and Vladimir N. Soloviev. Convolutional neural networks for image classification. [б. в.], February 2020. http://dx.doi.org/10.31812/123456789/3682.

Abstract:
This paper presents the theoretical basis for the creation of convolutional neural networks for image classification and their application in practice. To achieve this goal, the main types of neural networks are considered, from the structure of a simple neuron to the convolutional multilayer network necessary for solving this problem. The paper describes the structure of the training data, the training cycle of the network, and the calculation of recognition errors during training and verification. At the end of the work, the results of network training, the calculated recognition error, and the training accuracy are presented.
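A minimal sketch of the training cycle the report describes for a convolutional image classifier, forward pass, loss, backward pass, and accuracy bookkeeping, is given below; the random stand-in data and layer sizes are illustrative only.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(32 * 8 * 8, 10),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(64, 3, 32, 32)           # stand-in for one real training batch
labels = torch.randint(0, 10, (64,))

for epoch in range(3):                        # training cycle
    logits = model(images)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    accuracy = (logits.argmax(dim=1) == labels).float().mean()
    print(f"epoch {epoch}: loss={loss.item():.3f} accuracy={accuracy:.2f}")
```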
3

Donahue, Jeff, Lisa A. Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko and Trevor Darrell. Long-term Recurrent Convolutional Networks for Visual Recognition and Description. Fort Belvoir, VA: Defense Technical Information Center, November 2014. http://dx.doi.org/10.21236/ada623249.
4

Yan, Erin, Harleen Sandhu, Saran Bodda, Abhinav Gupta, Xu Wu and Piyush Sabharwall. Structural Health Monitoring of Microreactor Safety Systems Using Convolutional Neural Networks. Office of Scientific and Technical Information (OSTI), July 2021. http://dx.doi.org/10.2172/1824205.
5

Lukow, Steven, Ross Lee, David Grow and Jonathan Gigax. Advancing Vision-based Feedback and Convolutional Neural Networks for Visual Outlier Detection. Office of Scientific and Technical Information (OSTI), September 2022. http://dx.doi.org/10.2172/1889960.
6

Fisher, Andmorgan, Timothy Middleton, Jonathan Cotugno, Elena Sava, Laura Clemente-Harding, Joseph Berger, Allistar Smith and Teresa Li. Use of convolutional neural networks for semantic image segmentation across different computing systems. Engineer Research and Development Center (U.S.), March 2020. http://dx.doi.org/10.21079/11681/35881.
7

Garon, Isaac. Image Classification Using Convolutional Neural Networks to Automate Visual Inspection of CCO Containers. Office of Scientific and Technical Information (OSTI), August 2023. http://dx.doi.org/10.2172/1997257.
8

Lupo Pasini, Massimiliano, Jong Youl Choi, Pei Zhang and Justin Baker. User Manual - HydraGNN: Distributed PyTorch Implementation of Multi-Headed Graph Convolutional Neural Networks. Office of Scientific and Technical Information (OSTI), November 2023. http://dx.doi.org/10.2172/2224153.
9

Pearlmutter, Barak A. Learning State Space Trajectories in Recurrent Neural Networks: A Preliminary Report. Fort Belvoir, VA: Defense Technical Information Center, July 1988. http://dx.doi.org/10.21236/ada219114.
10

Talathi, S. S. Deep Recurrent Neural Networks for seizure detection and early seizure detection systems. Office of Scientific and Technical Information (OSTI), June 2017. http://dx.doi.org/10.2172/1366924.
