A selection of scholarly literature on the topic "Deep learning architecture"

Format your source in the APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the lists of relevant articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Deep learning architecture".

Next to every work in the reference list you will find an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the selected work in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read the abstract online, if these details are available in the metadata.

Journal articles on the topic "Deep learning architecture"

1

Munir, Khushboo, Fabrizio Frezza, and Antonello Rizzi. "Deep Learning Hybrid Techniques for Brain Tumor Segmentation." Sensors 22, no. 21 (October 26, 2022): 8201. http://dx.doi.org/10.3390/s22218201.

Full text of the source
Abstract:
Medical images play an important role in medical diagnosis and treatment. Oncologists analyze images to determine the different characteristics of deadly diseases, plan the therapy, and observe the evolution of the disease. The objective of this paper is to propose a method for the detection of brain tumors. Brain tumors are identified from Magnetic Resonance (MR) images by performing suitable segmentation procedures. The latest technical literature concerning radiographic images of the brain shows that deep learning methods can be implemented to extract specific features of brain tumors, aiding clinical diagnosis. For this reason, most data scientists and AI researchers work on Machine Learning methods for designing automatic screening procedures. Indeed, an automated method would result in quicker segmentation findings, providing a robust output with respect to possible differences in data sources, mostly due to different procedures in data recording and storing, resulting in a more consistent identification of brain tumors. To improve the performance of the segmentation procedure, new architectures are proposed and tested in this paper. We propose deep neural networks for the detection of brain tumors, trained on the MRI scans of patients’ brains. The proposed architectures are based on convolutional neural networks and inception modules for brain tumor segmentation. A comparison of these proposed architectures with the baseline reference ones shows consistent improvements. MI-Unet showed a performance increase over the baseline Unet architecture of 7.5% in dice score, 23.91% in sensitivity, and 7.09% in specificity. Depth-wise separable MI-Unet showed a performance increase of 10.83% in dice score, 2.97% in sensitivity, and 12.72% in specificity as compared to the baseline Unet architecture. The hybrid Unet architecture achieved a performance improvement of 9.71% in dice score, 3.56% in sensitivity, and 12.6% in specificity, whereas the depth-wise separable hybrid Unet architecture outperformed the baseline architecture by 15.45% in dice score, 20.56% in sensitivity, and 12.22% in specificity.
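The dice-score comparisons above rest on a simple overlap metric. A minimal sketch, assuming binary NumPy masks for the predicted and annotated tumor regions (not code from the paper):

import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Example: compare a predicted tumor mask against the annotated mask.
pred = np.array([[0, 1, 1], [0, 1, 0]])
target = np.array([[0, 1, 0], [0, 1, 0]])
print(dice_score(pred, target))  # 0.8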
APA, Harvard, Vancouver, ISO, and other styles
2

Alvarez-Gonzalez, Ruben, and Andres Mendez-Vazquez. "Deep Learning Architecture Reduction for fMRI Data." Brain Sciences 12, no. 2 (February 8, 2022): 235. http://dx.doi.org/10.3390/brainsci12020235.

Full text of the source
Abstract:
In recent years, deep learning models have demonstrated an inherently better ability to tackle non-linear classification tasks, due to advances in deep learning architectures. However, much remains to be achieved, especially in designing deep convolutional neural network (CNN) configurations. The number of hyper-parameters that need to be optimized to achieve accuracy in classification problems increases with every layer used, and the selection of kernels in each CNN layer has an impact on the overall CNN performance in the training stage, as well as in the classification process. When a popular classifier fails to perform acceptably in practical applications, it may be due to deficiencies in the algorithm and data processing. Thus, understanding the feature extraction process provides insights to help optimize pre-trained architectures, better generalize the models, and obtain the context of each layer’s features. In this work, we aim to improve feature extraction through the use of a texture amortization map (TAM). An algorithm was developed to obtain characteristics from the filters amortizing the filter’s effect depending on the texture of the neighboring pixels. From the initial algorithm, a novel geometric classification score (GCS) was developed, in order to obtain a measure that indicates the effect of one class on another in a classification problem, in terms of the complexity of the learnability in every layer of the deep learning architecture. For this, we assume that all the data transformations in the inner layers still belong to a Euclidean space. In this scenario, we can evaluate which layers provide the best transformations in a CNN, allowing us to reduce the weights of the deep learning architecture using the geometric hypothesis.
APA, Harvard, Vancouver, ISO, and other styles
3

Kumar, Bhavesh Shri, Naren J, Vithya G, and Prahathish K. "A Novel Architecture based on Deep Learning for Scene Image Recognition." International Journal of Psychosocial Rehabilitation 23, no. 1 (February 20, 2019): 400–404. http://dx.doi.org/10.37200/ijpr/v23i1/pr190251.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
4

Park, Hyunhee. "Edge Based Lightweight Authentication Architecture Using Deep Learning for Vehicular Networks." 網際網路技術學刊 23, no. 1 (January 2022): 195–202. http://dx.doi.org/10.53106/160792642022012301020.

Full text of the source
Abstract:
When vehicles are connected to the Internet through vehicle-to-everything (V2X) systems, they are exposed to diverse attacks and threats through the network connections. Vehicle-hacking attacks on the road can significantly affect driver safety. However, it is difficult to detect hacking attacks because vehicles not only have high mobility and unreliable link conditions, but they also use broadcast-based wireless communication. To this end, V2X systems need a simple but powerful authentication procedure on the road. Therefore, this paper proposes an edge-based lightweight authentication architecture using a deep learning algorithm for road safety applications in vehicle networks. The proposed lightweight authentication architecture enables vehicles that are physically separated to form a vehicular cloud in which vehicle-to-vehicle communications can be secured. In addition, an edge-based cloud data center performs deep learning algorithms to detect car hacking attempts and then delivers the detection results to a vehicular cloud. Extensive simulations demonstrate that the proposed authentication architecture significantly enhances the security level. The proposed authentication architecture achieves F1-scores of 94.51% to 99.8% in the intrusion detection system, depending on the number of vehicles, using controller area network traffic.
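The F1-scores quoted above combine the precision and recall of the intrusion detector. A minimal sketch of how such a score might be computed from labeled predictions with scikit-learn; the label arrays below are hypothetical:

from sklearn.metrics import f1_score, precision_score, recall_score

# 1 = hacking attempt, 0 = benign traffic (made-up example labels).
y_true = [0, 0, 1, 1, 1, 0, 1, 0]
y_pred = [0, 0, 1, 0, 1, 0, 1, 0]

precision = precision_score(y_true, y_pred)
recall = recall_score(y_true, y_pred)
f1 = f1_score(y_true, y_pred)
print(f"precision={precision:.2f} recall={recall:.2f} F1={f1:.2f}")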
APA, Harvard, Vancouver, ISO, and other styles
5

Hao, Xing, Guigang Zhang, and Shang Ma. "Deep Learning." International Journal of Semantic Computing 10, no. 03 (September 2016): 417–39. http://dx.doi.org/10.1142/s1793351x16500045.

Full text of the source
Abstract:
Deep learning is a branch of machine learning that tries to model high-level abstractions of data using multiple layers of neurons consisting of complex structures or non-linear transformations. With the increase in the amount of data and in computational power, neural networks with more complex structures have attracted widespread attention and been applied to various fields. This paper provides an overview of deep learning in neural networks, including popular architecture models and training algorithms.
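The stacked non-linear transformations mentioned above can be illustrated with a tiny forward pass. A sketch in plain NumPy, purely illustrative and not tied to the survey:

import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def forward(x, weights, biases):
    """Pass an input through a stack of affine layers with ReLU non-linearities."""
    h = x
    for w, b in zip(weights[:-1], biases[:-1]):
        h = relu(h @ w + b)                   # hidden layers: non-linear transformations
    return h @ weights[-1] + biases[-1]       # linear output layer

sizes = [4, 8, 8, 2]                          # input, two hidden layers, output
weights = [rng.normal(size=(m, n)) * 0.1 for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]
print(forward(rng.normal(size=(3, 4)), weights, biases).shape)  # (3, 2)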
APA, Harvard, Vancouver, ISO, and other styles
6

Zou, Han, Jing Ge, Ruichao Liu, and Lin He. "Feature Recognition of Regional Architecture Forms Based on Machine Learning: A Case Study of Architecture Heritage in Hubei Province, China." Sustainability 15, no. 4 (February 14, 2023): 3504. http://dx.doi.org/10.3390/su15043504.

Full text of the source
Abstract:
Architecture form has been one of the hot areas in the field of architectural design, and it reflects regional architectural features to some extent. However, most of the existing methods for architecture form belong to the field of qualitative analysis. Accordingly, quantitative methods are urgently required to extract regional architectural style, identify architecture forms, and further provide quantitative evaluation. Based on machine learning technology, this paper proposes a novel method to quantify the feature, form, and evaluation of regional architectures. First, we construct a training dataset—the Chinese Ancient Architecture Image Dataset (CAAID), in which each image is labeled by experts as having at least one of three typical features: "High Pedestal", "Deep Eave", and "Elegant Gable". Second, the CAAID is used to train our neural network model to identify the three kinds of architectural features. In order to reveal the traditional forms of regional architecture in Hubei, we built the Hubei Architectural Heritage Image Dataset (HAHID) as our object dataset, in which we collected architectural images from four different regions: southeast, northeast, southwest, and northwest Hubei. Our object dataset is then fed into our neural network model to predict the typical features for those four regions in Hubei. The obtained quantitative results show that the feature identification of the architectural form is consistent with that of regional architectures in Hubei. Moreover, we can observe from the quantitative results that the four geographic regions in Hubei show variation; for instance, the "Elegant Gable" feature is more evident in southeastern Hubei, while the "Deep Eave" feature is more evident in the northwest. In addition, some new building images are selected and fed into our neural network model, and the output quantitative results can effectively identify the corresponding feature style of regional architectures in Hubei. Therefore, our proposed method based on machine learning can be used not only as a quantitative tool to extract features of regional architectures, but also as an effective approach to evaluate architecture forms in the urban renewal process.
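Because each CAAID image can carry more than one of the three labeled features, the task reads as multi-label classification. A hedged sketch of such a model head in PyTorch; the layer sizes and input resolution are assumptions, not the paper's network:

import torch
import torch.nn as nn

# Three regional features: "High Pedestal", "Deep Eave", "Elegant Gable".
NUM_FEATURES = 3

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, NUM_FEATURES),        # one logit per feature, not a softmax over classes
)

criterion = nn.BCEWithLogitsLoss()      # independent sigmoid per feature
images = torch.randn(8, 3, 224, 224)    # dummy batch of building photos
labels = torch.randint(0, 2, (8, NUM_FEATURES)).float()
loss = criterion(model(images), labels)
loss.backward()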
APA, Harvard, Vancouver, ISO, and other styles
7

Ma, Rui, Jia-Ching Hsu, Tian Tan, Eriko Nurvitadhi, David Sheffield, Rob Pelt, Martin Langhammer, Jaewoong Sim, Aravind Dasu, and Derek Chiou. "Specializing FGPU for Persistent Deep Learning." ACM Transactions on Reconfigurable Technology and Systems 14, no. 2 (July 8, 2021): 1–23. http://dx.doi.org/10.1145/3457886.

Full text of the source
Abstract:
Overlay architectures are a good way to enable fast development and debug on FPGAs at the expense of potentially limited performance compared to fully customized FPGA designs. When used in concert with hand-tuned FPGA solutions, performant overlay architectures can improve time-to-solution and thus overall productivity of FPGA solutions. This work tunes and specializes FGPU, an open source OpenCL-programmable GPU overlay for FPGAs. We demonstrate that our persistent deep learning (PDL )-FGPU architecture maintains the ease-of-programming and generality of GPU programming while achieving high performance from specialization for the persistent deep learning domain. We also propose an easy method to specialize for other domains. PDL-FGPU includes new instructions, along with micro-architecture and compiler enhancements. We evaluate both the FGPU baseline and the proposed PDL-FGPU on a modern high-end Intel Stratix 10 2800 FPGA in simulation running persistent DL applications (RNN, GRU, LSTM), and non-DL applications to demonstrate generality. PDL-FGPU requires 1.4–3× more ALMs, 4.4–6.4× more M20ks, and 1–9.5× more DSPs than baseline, but improves performance by 56–693× for PDL applications with an average 23.1% degradation on non-PDL applications. We integrated the PDL-FGPU overlay into Intel OPAE to measure real-world performance/power and demonstrate that PDL-FGPU is only 4.0–10.4× slower than the Nvidia V100.
APA, Harvard, Vancouver, ISO, and other styles
8

Sewak, Mohit, Sanjay K. Sahay, and Hemant Rathore. "An Overview of Deep Learning Architecture of Deep Neural Networks and Autoencoders." Journal of Computational and Theoretical Nanoscience 17, no. 1 (January 1, 2020): 182–88. http://dx.doi.org/10.1166/jctn.2020.8648.

Full text of the source
Abstract:
The recent wide application of deep learning in multiple fields has shown great progress, but to perform optimally, it requires the adjustment of various architectural features and hyper-parameters. Moreover, deep learning can be used with multiple varieties of architecture aimed at different objectives; e.g., autoencoders are popular for unsupervised learning applications such as reducing the dimensionality of the dataset. Similarly, deep neural networks are popular for supervised learning applications, viz. classification, regression, etc. Besides the type of deep learning architecture, several other decision criteria and parameter selections are required: the size of each layer, the number of layers, the activation and loss functions for different layers, the optimizer algorithm, regularization, etc. Thus, this paper aims to cover the different choices available under each of these major and minor decision criteria for devising a neural network and training it optimally to achieve the objectives effectively, e.g., malware detection, natural language processing, image recognition, etc.
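The design decisions listed in the abstract (layer sizes, activations, loss functions, optimizer, regularization) can be made concrete with a toy autoencoder next to a toy classifier. A sketch in PyTorch with assumed dimensions, not taken from the paper:

import torch
import torch.nn as nn

# Unsupervised branch: an autoencoder that compresses 784-d inputs to 32-d codes.
autoencoder = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(),
    nn.Linear(128, 32),                  # bottleneck (dimensionality reduction)
    nn.Linear(32, 128), nn.ReLU(),
    nn.Linear(128, 784),
)
reconstruction_loss = nn.MSELoss()       # loss choice for reconstruction

# Supervised branch: a deep neural network for 10-way classification.
classifier = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Dropout(0.2),                     # one possible regularization choice
    nn.Linear(256, 10),
)
classification_loss = nn.CrossEntropyLoss()

optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)  # optimizer choice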
APA, Harvard, Vancouver, ISO, and other styles
9

Hartanto, Cahyo Adhi, and Laksmita Rahadianti. "Single Image Dehazing Using Deep Learning." JOIV : International Journal on Informatics Visualization 5, no. 1 (March 22, 2021): 76. http://dx.doi.org/10.30630/joiv.5.1.431.

Full text of the source
Abstract:
Many real-world situations, such as bad weather, may result in hazy environments. Images captured in these hazy conditions will have low image quality due to microparticles in the air. The microparticles cause light to scatter and be absorbed, resulting in hazy images with various effects. In recent years, image dehazing has been researched in depth to handle images captured in these conditions. Various methods were developed, from traditional methods to deep learning methods. Traditional methods focus more on the use of statistical priors, which have weaknesses in certain conditions. This paper proposes a novel architecture based on PDR-Net, using a pyramid dilated convolution together with pre-processing, processing, and post-processing modules, as well as attention modules. The proposed network is trained to minimize L1 loss and perceptual loss on the O-Haze dataset. To evaluate our architecture's results, we used the structural similarity index measure (SSIM), peak signal-to-noise ratio (PSNR), and color difference as objective assessments, and a psychovisual experiment as a subjective assessment. Our architecture obtained better results than the previous method on the O-Haze dataset, with an SSIM of 0.798 and a PSNR of 25.39, but did not improve on the color difference. The SSIM and PSNR results were supported by the subjective assessment with 65 respondents, most of whom preferred the images restored by our architecture.
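PSNR, one of the objective measures used above, is easy to reproduce. A sketch assuming 8-bit images held as NumPy arrays (not the authors' evaluation code):

import numpy as np

def psnr(restored: np.ndarray, reference: np.ndarray, max_value: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between a dehazed image and its reference."""
    mse = np.mean((restored.astype(np.float64) - reference.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_value ** 2 / mse)

restored = np.clip(np.random.rand(64, 64, 3) * 255, 0, 255)
reference = np.clip(restored + np.random.normal(0, 5, restored.shape), 0, 255)
print(f"PSNR: {psnr(restored, reference):.2f} dB")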
APA, Harvard, Vancouver, ISO, and other styles
10

Ghimire, Deepak, Dayoung Kil, and Seong-heum Kim. "A Survey on Efficient Convolutional Neural Networks and Hardware Acceleration." Electronics 11, no. 6 (March 18, 2022): 945. http://dx.doi.org/10.3390/electronics11060945.

Full text of the source
Abstract:
Over the past decade, deep-learning-based representations have demonstrated remarkable performance in academia and industry. The learning capability of convolutional neural networks (CNNs) originates from a combination of various feature extraction layers that fully utilize a large amount of data. However, they often require substantial computation and memory resources while replacing traditional hand-engineered features in existing systems. In this review, to improve the efficiency of deep learning research, we focus on three aspects: quantized/binarized models, optimized architectures, and resource-constrained systems. Recent advances in light-weight deep learning models and network architecture search (NAS) algorithms are reviewed, starting with simplified layers and efficient convolution and including new architectural design and optimization. In addition, several practical applications of efficient CNNs have been investigated using various types of hardware architectures and platforms.
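Quantization is one of the efficiency levers the survey reviews. The sketch below applies PyTorch's post-training dynamic quantization to a small stand-in model; the model itself is invented for illustration and is not from the survey:

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(512, 256), nn.ReLU(),
    nn.Linear(256, 10),
)

def count_params(m: nn.Module) -> int:
    return sum(p.numel() for p in m.parameters())

print("float32 parameters:", count_params(model))

# Post-training dynamic quantization: weights of Linear layers stored as int8.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
with torch.no_grad():
    out = quantized(torch.randn(1, 512))
print(out.shape)  # torch.Size([1, 10])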
APA, Harvard, Vancouver, ISO, and other styles

Dissertations on the topic "Deep learning architecture"

1

Glatt, Ruben [UNESP]. "Deep learning architecture for gesture recognition." Universidade Estadual Paulista (UNESP), 2014. http://hdl.handle.net/11449/115718.

Full text of the source
Abstract:
Activity recognition from computer vision plays an important role in research towards applications like human computer interfaces, intelligent environments, surveillance or medical systems. In this work, a gesture recognition system based on a deep learning architecture is proposed. It is used to analyze the performance when trained with multi-modal input data on an Italian sign language dataset. The underlying research area is a field called human-machine interaction. It combines research on natural user interfaces, gesture and activity recognition, machine learning and sensor technologies, which are used to capture the environmental input for further processing. Those areas are introduced and the basic concepts are described. The development environment for preprocessing data and programming machine learning algorithms with Python is described and the main libraries are discussed. The gathering of the multi-modal data streams is explained and the used dataset is outlined. The proposed learning architecture consists of two steps. The preprocessing of the input data and the actual learning architecture. The preprocessing is limited to three different strategies, which are combined to offer six different preprocessing profiles. In the second step, a Deep Belief network is introduced and its components are explained. With this setup, 294 experiments are conducted with varying configuration settings. The variables that are altered are the preprocessing settings, the layer structure of the model, the pretraining and the fine-tune learning rate. The evaluation of these experiments show that the approach of using a deep learning architecture on an activity or gesture recognition task yields acceptable results, but has not yet reached a level of maturity, which would allow to use the developed models in serious applications.
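The 294 experiments arise from crossing preprocessing profiles with network and training settings. The exact grid is not given here, so the following sketch only illustrates how such a sweep could be enumerated; every concrete value in it is made up:

from itertools import product

preprocessing_profiles = [f"profile_{i}" for i in range(1, 7)]   # six profiles
layer_structures = [(500, 500), (1000, 500), (2000, 1000)]        # hypothetical DBN layouts
pretrain_lrs = [0.1, 0.01]
finetune_lrs = [0.1, 0.01, 0.001]

experiments = list(product(preprocessing_profiles, layer_structures,
                           pretrain_lrs, finetune_lrs))
print(len(experiments), "configurations")   # 6 * 3 * 2 * 3 = 108 in this made-up grid

for profile, layers, pre_lr, fine_lr in experiments[:2]:
    print(profile, layers, pre_lr, fine_lr)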
APA, Harvard, Vancouver, ISO, and other styles
2

Glatt, Ruben. "Deep learning architecture for gesture recognition /." Guaratinguetá, 2014. http://hdl.handle.net/11449/115718.

Full text of the source
Abstract:
Advisor: José Celso Freire Junior
Co-advisor: Daniel Julien Barros da Silva Sampaio
Committee member: Galeno José de Sena
Committee member: Luiz de Siqueira Martins Filho
Abstract: Activity recognition from computer vision plays an important role in research towards applications like human computer interfaces, intelligent environments, surveillance or medical systems. In this work, a gesture recognition system based on a deep learning architecture is proposed. It is used to analyze the performance when trained with multi-modal input data on an Italian sign language dataset. The underlying research area is a field called human-machine interaction. It combines research on natural user interfaces, gesture and activity recognition, machine learning and sensor technologies, which are used to capture the environmental input for further processing. Those areas are introduced and the basic concepts are described. The development environment for preprocessing data and programming machine learning algorithms with Python is described and the main libraries are discussed. The gathering of the multi-modal data streams is explained and the used dataset is outlined. The proposed learning architecture consists of two steps. The preprocessing of the input data and the actual learning architecture. The preprocessing is limited to three different strategies, which are combined to offer six different preprocessing profiles. In the second step, a Deep Belief network is introduced and its components are explained. With this setup, 294 experiments are conducted with varying configuration settings. The variables that are altered are the preprocessing settings, the layer structure of the model, the pretraining and the fine-tune learning rate. The evaluation of these experiments show that the approach of using a deep learning architecture on an activity or gesture recognition task yields acceptable results, but has not yet reached a level of maturity, which would allow to use the developed models in serious applications.
Master's
APA, Harvard, Vancouver, ISO, and other styles
3

Salman, Ahmad. "Learning speaker-specific characteristics with deep neural architecture." Thesis, University of Manchester, 2012. https://www.research.manchester.ac.uk/portal/en/theses/learning-speakerspecific-characteristics-with-deep-neural-architecture(24acb31d-2106-4e52-80ab-6c649838026a).html.

Full text of the source
Abstract:
Robust Speaker Recognition (SR) has been a focus of attention for researchers since long. The advancement in speech-aided technologies especially biometrics highlights the necessity of foolproof SR systems. However, the performance of a SR system critically depends on the quality of speech features used to represent the speaker-specific information. This research aims at extracting the speaker-specific information from Mel-frequency Cepstral Coefficients (MFCCs) using deep learning. Speech is a mixture of various information components that include linguistic, speaker-specific and speaker’s emotional state information. Feature extraction for each information component is inevitable in different speech-related tasks for robust performance. However, almost all forms of speech representation carry all the information as a whole, which is responsible for the compromised performances by SR systems. Motivated by the complex problem solving ability of deep architectures by learning high-level task-specific information in the data, we propose a novel Deep Neural Architecture (DNA) to extract speaker-specific information (SI) from MFCCs, a popular frequency domain speech signal representation. A two-stage learning strategy is adopted, which is based on unsupervised training for network initialisation followed by regularised contrastive learning. To train our network in the 2nd stage, we devise a contrastive loss function to discriminate the speakers on the basis of their intrinsic statistical patterns, distributed in the representations yielded by our deep network. This is achieved in the contrastive pair-wise comparison of these representations for similar or dissimilar speakers. To improve the generalisation and reduce the interference of environmental effects with the speaker-specific representation, we regulate the contrastive loss with the data reconstruction loss in a multi-objective optimisation. A detailed study has been done to analyse the parametric space in training the proposed deep architecture for optimum performance. Finally we compare the performance of our learned speaker-specific representations with several state-of-the-art techniques in speaker verification and speaker segmentation tasks. It is evident that the representations acquired through learned DNA are invariant and comparatively less sensitive to the text, language and environmental variability.
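The pair-wise contrastive objective described above can be illustrated with a standard margin-based contrastive loss. The sketch below is a generic stand-in in PyTorch; the thesis' exact formulation, including the reconstruction term used for regularisation, is not reproduced here:

import torch
import torch.nn.functional as F

def contrastive_loss(emb_a: torch.Tensor, emb_b: torch.Tensor,
                     same_speaker: torch.Tensor, margin: float = 1.0) -> torch.Tensor:
    """Pull embeddings of the same speaker together, push different speakers apart."""
    dist = F.pairwise_distance(emb_a, emb_b)
    positive = same_speaker * dist.pow(2)
    negative = (1 - same_speaker) * F.relu(margin - dist).pow(2)
    return 0.5 * (positive + negative).mean()

emb_a = torch.randn(16, 64, requires_grad=True)    # deep-network outputs for MFCC segments
emb_b = torch.randn(16, 64, requires_grad=True)
same_speaker = torch.randint(0, 2, (16,)).float()  # 1 = same speaker, 0 = different
loss = contrastive_loss(emb_a, emb_b, same_speaker)
loss.backward()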
APA, Harvard, Vancouver, ISO, and other styles
4

Goh, Hanlin. "Learning deep visual representations." Paris 6, 2013. http://www.theses.fr/2013PA066356.

Full text of the source
Abstract:
Recent advancements in the areas of deep learning and visual information processing have presented an opportunity to unite both fields. These complementary fields combine to tackle the problem of classifying images into their semantic categories. Deep learning brings learning and representational capabilities to a visual processing model that is adapted for image classification. This thesis addresses problems that lead to the proposal of learning deep visual representations for image classification. The problem of deep learning is tackled on two fronts. The first aspect is the problem of unsupervised learning of latent representations from input data. The main focus is the integration of prior knowledge into the learning of restricted Boltzmann machines (RBM) through regularization. Regularizers are proposed to induce sparsity, selectivity and topographic organization in the coding to improve discrimination and invariance. The second direction introduces the notion of gradually transiting from unsupervised layer-wise learning to supervised deep learning. This is done through the integration of bottom-up information with top-down signals. Two novel implementations supporting this notion are explored. The first method uses top-down regularization to train a deep network of RBMs. The second method combines predictive and reconstructive loss functions to optimize a stack of encoder-decoder networks. The proposed deep learning techniques are applied to tackle the image classification problem. The bag-of-words model is adopted due to its strengths in image modeling through the use of local image descriptors and spatial pooling schemes. Deep learning with spatial aggregation is used to learn a hierarchical visual dictionary for encoding the image descriptors into mid-level representations. This method achieves leading image classification performances for object and scene images. The learned dictionaries are diverse and non-redundant. The speed of inference is also high. From this, a further optimization is performed for the subsequent pooling step. This is done by introducing a differentiable pooling parameterization and applying the error backpropagation algorithm. This thesis represents one of the first attempts to synthesize deep learning and the bag-of-words model. This union results in many challenging research problems, leaving much room for further study in this area
APA, Harvard, Vancouver, ISO, and other styles
5

Kola, Ramya Sree. "Generation of synthetic plant images using deep learning architecture." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-18450.

Full text of the source
Abstract:
Background: Generative Adversarial Networks (Goodfellow et al., 2014) (GANs) are the current state-of-the-art machine learning data generating systems. They were designed with two neural networks in the initial architecture proposal, a generator and a discriminator. These neural networks compete in a zero-sum game to generate data whose realistic properties are indistinguishable from those of the original datasets. GANs have interesting applications in various domains like image synthesis, 3D object generation in the gaming industry, fake music generation (Dong et al.), text-to-image synthesis, and many more. Despite having widespread application domains, GANs are most popular for image data synthesis. Various architectures have been developed for image synthesis, evolving from fuzzy images of digits to photorealistic images. Objectives: In this research work, we study the literature on different GAN architectures to understand the significant work done to improve them. The primary objective of this research work is the synthesis of plant images using the Style GAN (Karras, Laine and Aila, 2018) variant of GAN based on style transfer. The research also focuses on identifying various machine learning performance evaluation metrics that can be used to measure the Style GAN model on the generated image datasets. Methods: A mixed-method approach is used in this research. We review the literature on GANs and elaborate in detail how each GAN network is designed and how it evolved over the base architecture. We then study the Style GAN (Karras, Laine and Aila, 2018a) design details. We then study related literature on GAN model performance evaluation and measure the quality of the generated image datasets. We conduct an experiment to implement the Style-based GAN on a leaf dataset (Kumar et al., 2012) to generate leaf images that are similar to the ground truth. We describe in detail the various steps in the experiment, such as data collection, preprocessing, training, and configuration. We also evaluate the performance of the Style GAN training model on the leaf dataset. Results: We present the results of the literature review and the conducted experiment to address the research questions. We review and elaborate on various GAN architectures and their key contributions. We also review numerous qualitative and quantitative evaluation metrics to measure the performance of a GAN architecture. We then present the synthetic data samples generated by the Style-based GAN learning model at various training GPU hours, together with the latest synthetic data sample after training for around 8 GPU days on the leafsnap dataset (Kumar et al., 2012). The generated results have decent quality and can expand the dataset for most of the tested samples. We then visualize the model performance with TensorBoard graphs and an overall computational graph for the learning model. We calculate the Fréchet Inception Distance score for our leaf Style GAN, which is observed to be 26.4268 (the lower, the better). Conclusion: We conclude the research work with an overall review of the sections in the paper. The generated fake samples are very similar to the input ground truth and appear convincingly realistic to human visual judgement. However, the calculated FID score for the leaf StyleGAN is large compared to that of Style GAN's original celebrity HD faces dataset. We attempted to analyze the reasons for this large score.
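The Fréchet Inception Distance quoted above (26.4268) compares Gaussian fits of real and generated Inception features. A sketch of the formula with NumPy and SciPy, assuming the feature vectors have already been extracted and using a small feature dimension to keep the example fast; this is not the thesis' evaluation script:

import numpy as np
from scipy.linalg import sqrtm

def fid(real_feats: np.ndarray, fake_feats: np.ndarray) -> float:
    """Frechet Inception Distance between two sets of feature vectors."""
    mu_r, mu_f = real_feats.mean(axis=0), fake_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_f = np.cov(fake_feats, rowvar=False)
    covmean = sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):       # numerical noise can leave tiny imaginary parts
        covmean = covmean.real
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))

# 64-d placeholder features; real FID uses 2048-d Inception pool activations.
real = np.random.rand(500, 64)
fake = np.random.rand(500, 64)
print(fid(real, fake))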
APA, Harvard, Vancouver, ISO, and other styles
6

Xiao, Yao. "Vehicle Detection in Deep Learning." Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/91375.

Full text of the source
Abstract:
Computer vision techniques are becoming increasingly popular. For example, face recognition is used to help police find criminals, vehicle detection is used to prevent drivers from serious traffic accidents, and written word recognition is used to convert written words into printed words. Despite the rapid development of vehicle detection driven by deep learning techniques, there are still concerns about the performance of state-of-the-art vehicle detection techniques. For example, state-of-the-art vehicle detectors are restricted by the large variation of scales. People working on vehicle detection are developing techniques to solve this problem. This thesis proposes an advanced vehicle detection model adopting two classical neural networks: the residual neural network and the region proposal network. The model utilizes the residual neural network as a feature extractor and the region proposal network to detect the potential objects' information.
Master of Science
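The backbone-plus-RPN design described in the abstract corresponds closely to what torchvision ships as Faster R-CNN. A hedged sketch using that public API (torchvision >= 0.13) as a stand-in; this is not the author's code, and the class count is an assumption:

import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# ResNet-50 backbone (feature extractor) + region proposal network + detection head.
model = fasterrcnn_resnet50_fpn(weights=None, num_classes=2)  # background + vehicle
model.eval()

with torch.no_grad():
    images = [torch.rand(3, 480, 640)]        # one dummy RGB frame
    predictions = model(images)

print(predictions[0]["boxes"].shape, predictions[0]["scores"].shape)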
APA, Harvard, Vancouver, ISO, and other styles
7

Tsardakas, Renhuldt Nikos. "Protein contact prediction based on the Tiramisu deep learning architecture." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-231494.

Full text of the source
Abstract:
Experimentally determining protein structure is a hard problem, with applications in both medicine and industry. Predicting protein structure is also difficult. Predicted contacts between residues within a protein is helpful during protein structure prediction. Recent state-of-the-art models have used deep learning to improve protein contact prediction. This thesis presents a new deep learning model for protein contact prediction, TiramiProt. It is based on the Tiramisu deep learning architecture, and trained and evaluated on the same data as the PconsC4 protein contact prediction model. 228 models using different combinations of hyperparameters were trained until convergence. The final TiramiProt model performs on par with two current state-of-the-art protein contact prediction models, PconsC4 and RaptorX-Contact, across a range of different metrics. A Python package and a Singularity container for running TiramiProt are available at https://gitlab.com/nikos.t.renhuldt/TiramiProt.
APA, Harvard, Vancouver, ISO, and other styles
8

Fayyazifar, Najmeh. "Deep learning and neural architecture search for cardiac arrhythmias classification." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2022. https://ro.ecu.edu.au/theses/2553.

Full text of the source
Abstract:
Cardiovascular disease (CVD) is the primary cause of mortality worldwide. Among people with CVD, cardiac arrhythmias (changes in the natural rhythm of the heart), are a leading cause of death. The clinical routine for arrhythmia diagnosis includes acquiring an electrocardiogram (ECG) and manually reviewing the ECG trace to identify the arrhythmias. However, due to the varying expertise level of clinicians, accurate diagnosis of arrhythmias with similar visual characteristics (that naturally exists in some different types of arrhythmias) can be challenging for some front-line clinicians. In addition, there is a shortage of trained cardiologists globally, and especially in remote areas of Australia, where patients are sometimes required to wait for weeks or months for a visiting cardiologist. This impacts the timely care of patients living in remote areas. Therefore, developing an AI-based model, that assists clinicians in accurate real-time decision-making, is an essential task. This thesis provides supporting evidence that the problem of delayed and/or inaccurate cardiac arrhythmias diagnosis can be addressed by designing accurate deep learning models through Neural Architecture Search (NAS). These models can automatically differentiate different types of arrhythmias in a timely manner. Many different deep learning models and more specifically, Convolutional Neural Networks (CNNs) have been developed for automatic and accurate cardiac arrhythmias detection. However, these models are heavily hand-crafted which means designing an accurate model for a given task, requires significant trial and error. In this thesis, the process of designing an accurate CNN model for 1-dimensional biomedical data classification is automated by applying NAS techniques. NAS is a recent research paradigm in which the process of designing an accurate model (for a given task) is automated by employing a search algorithm over a pre-defined search space of possible operations in a deep learning model. In this thesis, we developed a CNN model for detection of ‘Atrial Fibrillation’ (AF) among ‘normal sinus rhythm’, ‘noise’, and ‘other arrhythmias. This model is designed by employing a well-known NAS method, Efficient Neural Architecture Search (ENAS) which uses Reinforcement Learning (RL) to perform a search over common operations in a CNN structure. This CNN model outperformed state-of-the-art deep learning models for AF detection while minimizing human intervention in CNN structure design. In order to reduce the high computation time that was required by ENAS (and typically by RL-based NAS), in this thesis, a recent NAS method called DARTS was utilized to design a CNN model for accurate diagnosis of a wider range of cardiac arrhythmias. This method employs Stochastic Gradient Descent (SGD) to perform the search procedure over a continuous and therefore differentiable search space. The search space (operations and building blocks) of DARTS was tailored to implement the search procedure over a public dataset of standard 12-lead ECG recordings containing 111 types of arrhythmias (released by the PhysioNet challenge, 2020). The performance of DARTS was further studied by utilizing it to differentiate two major sub-types of Wide QRS Complex Tachycardia (Ventricular Tachycardia- VT vs Supraventricular Tachycardia- SVT). These sub-types have similar visual characteristics, which makes differentiating between them challenging, even for experienced clinicians. 
This dataset is a unique collection of Wide Complex Tachycardia (WCT) recordings, collected by our medical collaborator (University of Ottawa heart institute) over the course of 11 years. The DARTS-derived model achieved 91% accuracy, outperforming cardiologists (77% accuracy) and state-of-the-art deep learning models (88% accuracy). Lastly, the efficacy of the original DARTS algorithm for the image classification task is empirically studied. Our experiments showed that the performance of the DARTS search algorithm does not deteriorate over the search course; however, the search procedure can be terminated earlier than what was designated in the original algorithm. In addition, the accuracy of the derived model could be further improved by modifying the original search operations (excluding the zero operation), making it highly valuable in a clinical setting.
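DARTS, used in the thesis, relaxes the discrete choice of operation into a softmax-weighted mixture that can be trained by gradient descent. A minimal sketch of that relaxation for 1-D ECG-like feature maps, with a made-up candidate set rather than the thesis' tailored search space:

import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """Softmax-weighted sum over candidate operations (the DARTS relaxation)."""
    def __init__(self, channels: int):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv1d(channels, channels, 3, padding=1),
            nn.Conv1d(channels, channels, 5, padding=2),
            nn.MaxPool1d(3, stride=1, padding=1),
            nn.Identity(),
        ])
        # Architecture parameters alpha, learned by gradient descent alongside the weights.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

x = torch.randn(2, 16, 1000)   # batch of 16-channel, 1000-sample ECG segments
print(MixedOp(16)(x).shape)    # torch.Size([2, 16, 1000])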
APA, Harvard, Vancouver, ISO, and other styles
9

Qian, Xiaoye. "Wearable Computing Architecture over Distributed Deep Learning Hierarchy: Fall Detection Study." Case Western Reserve University School of Graduate Studies / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=case156195574310931.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
10

Ähdel, Victor. "On the effect of architecture on deep learning based features for homography estimation." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-233194.

Full text of the source
Abstract:
Keypoint detection and description is the first step of homography and essential matrix estimation, which in turn is used in Visual Odometry and Visual SLAM. This work explores the effect (in terms of speed and accuracy) of using different deep learning architectures for such keypoints. The fully convolutional networks — with heads for both the detector and descriptor — are trained through an existing self-supervised method, where correspondences are obtained through known randomly sampled homographies. A new strategy for choosing negative correspondences for the descriptor loss is presented, which enables more flexibility in the architecture design. The new strategy turns out to be essential as it enables networks that outperform the learnt baseline at no cost in inference time. Varying the model size leads to a trade-off in speed and accuracy, and while all models outperform ORB in homography estimation, only the larger models approach SIFT’s performance; performing about 1-7% worse. Training for longer and with additional types of data might give the push needed to outperform SIFT. While the smallest models are 3× faster and use 50× fewer parameters than the learnt baseline, they still require 3× as much time as SIFT while performing about 10-30% worse. However, there is still room for improvement through optimization methods that go beyond architecture modification, e.g. quantization, which might make the method faster than SIFT.
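For context, the classical ORB-plus-RANSAC baseline that the learned models are compared against can be run with OpenCV in a few lines. A sketch with hypothetical image file names, unrelated to the thesis code:

import cv2
import numpy as np

# Hypothetical file names; any overlapping image pair will do.
img1 = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# Robust homography estimation with RANSAC, as in the evaluation baseline.
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=3.0)
print(H)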
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Deep learning architecture"

1

Calin, Ovidiu. Deep Learning Architectures. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-36721-3.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

Pedrycz, Witold, and Shyi-Ming Chen, eds. Deep Learning: Concepts and Architectures. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-31756-0.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
3

Pedrycz, Witold, and Shyi-Ming Chen, eds. Development and Analysis of Deep Learning Architectures. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-31764-5.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
4

Kang, Mingu, Sujan Gonugondla, and Naresh R. Shanbhag. Deep In-memory Architectures for Machine Learning. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-35971-3.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
5

Mitchell, Laura, Vishnu Subramanian, and Sri Yogesh K. Deep Learning with Pytorch 1. x: Implement Deep Learning Techniques and Neural Network Architecture Variants Using Python, 2nd Edition. Packt Publishing, Limited, 2019.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
6

Shi, Cong, Ji Liu, and Xichuan Zhou. Deep Learning on Edge Computing Devices: Design Challenges of Algorithm and Architecture. Elsevier, 2022.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

Deep Learning on Edge Computing Devices: Design Challenges of Algorithm and Architecture. Elsevier, 2022.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
8

Bengio, Yoshua. Learning Deep Architectures for AI. Now Publishers Inc, 2009.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
9

Decision Making Handbook: Engineering, IoT, Information Technology, Marketing, Architecture, Deep Learning, Data Mining,TR5, Excel Dashboard, Social Media, Business Development and Artificial Intelligence. Independently Published, 2022.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
10

Daneshtalab, Masoud, and Mehdi Modarressi, eds. Hardware Architectures for Deep Learning. Institution of Engineering and Technology, 2020. http://dx.doi.org/10.1049/pbcs055e.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Deep learning architecture"

1

Briot, Jean-Pierre, Gaëtan Hadjeres, and François-David Pachet. "Architecture." In Deep Learning Techniques for Music Generation, 51–114. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-319-70163-9_5.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

Tripathi, Ashish, Shraddha Upadhaya, Arun Kumar Singh, Krishna Kant Singh, Arush Jain, Pushpa Choudhary, and Prem Chand Vashist. "Deep Learning Architecture and Framework." In Deep Learning in Visual Computing and Signal Processing, 1–27. Boca Raton: Apple Academic Press, 2022. http://dx.doi.org/10.1201/9781003277224-1.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
3

Wüthrich, Mario V., and Michael Merz. "Deep Learning." In Springer Actuarial, 267–379. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-12409-9_7.

Full text of the source
Abstract:
The core of this book is deep learning methods and neural networks. This chapter considers deep feed-forward neural (FN) networks. We introduce the generic architecture of deep FN networks, and we discuss universality theorems for FN networks. We present network fitting, back-propagation, embedding layers for categorical variables, and insurance-specific issues such as the balance property in network fitting, as well as network ensembling to reduce model uncertainty. The chapter is complemented by many examples on non-life insurance pricing and on mortality modeling, as well as tools that help to explain deep FN network regression results.
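The chapter's main ingredients, a feed-forward network combined with an embedding layer for a categorical covariate, can be sketched in a few lines. The toy PyTorch example below assumes a single categorical feature with 10 levels and a Poisson-style frequency target; it is not the book's own code:

import torch
import torch.nn as nn

class FeedForwardNet(nn.Module):
    """Deep FN network with an embedding layer for one categorical covariate."""
    def __init__(self, n_levels: int = 10, emb_dim: int = 2, n_numeric: int = 5):
        super().__init__()
        self.embedding = nn.Embedding(n_levels, emb_dim)   # categorical rating factor
        self.body = nn.Sequential(
            nn.Linear(n_numeric + emb_dim, 20), nn.Tanh(),
            nn.Linear(20, 15), nn.Tanh(),
            nn.Linear(15, 1),
        )

    def forward(self, x_numeric, x_cat):
        emb = self.embedding(x_cat)
        return torch.exp(self.body(torch.cat([x_numeric, emb], dim=1)))  # positive expected frequency

model = FeedForwardNet()
x_numeric = torch.randn(32, 5)
x_cat = torch.randint(0, 10, (32,))
rate = model(x_numeric, x_cat).squeeze()                  # lambda of a Poisson claim-count model
target = torch.poisson(torch.ones(32))                    # dummy observed claim counts
loss = nn.PoissonNLLLoss(log_input=False)(rate, target)
loss.backward()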
APA, Harvard, Vancouver, ISO, and other styles
4

Krig, Scott. "Feature Learning and Deep Learning Architecture Survey." In Computer Vision Metrics, 375–514. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-33762-3_10.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
5

Blubaugh, David Allen, Steven D. Harbour, Benjamin Sears, and Michael J. Findler. "Subsumption Cognitive Architecture." In Intelligent Autonomous Drones with Cognitive Deep Learning, 377–406. Berkeley, CA: Apress, 2022. http://dx.doi.org/10.1007/978-1-4842-6803-2_10.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
6

Kang, Mingu, Sujan Gonugondla, and Naresh R. Shanbhag. "The Deep In-Memory Architecture (DIMA)." In Deep In-memory Architectures for Machine Learning, 7–47. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-35971-3_2.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

Gridin, Ivan. "Multi-trial Neural Architecture Search." In Automated Deep Learning Using Neural Network Intelligence, 185–256. Berkeley, CA: Apress, 2022. http://dx.doi.org/10.1007/978-1-4842-8149-9_4.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
8

Gridin, Ivan. "One-Shot Neural Architecture Search." In Automated Deep Learning Using Neural Network Intelligence, 257–318. Berkeley, CA: Apress, 2022. http://dx.doi.org/10.1007/978-1-4842-8149-9_5.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
9

Ye, Andre. "Successful Neural Network Architecture Design." In Modern Deep Learning Design and Application Development, 327–400. Berkeley, CA: Apress, 2021. http://dx.doi.org/10.1007/978-1-4842-7413-2_6.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
10

Hou, June-Hao, and Chi-Li Cheng. "Reconstructing Photogrammetric 3D Model by Using Deep Learning." In Formal Methods in Architecture, 295–304. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-57509-0_27.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Deep learning architecture"

1

Kaskavalci, Halil Can, and Sezer Goren. "A Deep Learning Based Distributed Smart Surveillance Architecture using Edge and Cloud Computing." In 2019 International Conference on Deep Learning and Machine Learning in Emerging Applications (Deep-ML). IEEE, 2019. http://dx.doi.org/10.1109/deep-ml.2019.00009.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

Wistuba, Martin. "Practical Deep Learning Architecture Optimization." In 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA). IEEE, 2018. http://dx.doi.org/10.1109/dsaa.2018.00037.

3

Kakanakova, Irina, and Stefan Stoyanov. "Outlier Detection via Deep Learning Architecture." In CompSysTech'17: 18th International Conference on Computer Systems and Technologies. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3134302.3134337.

4

B, Sangeetha, Senthil Prabha R, and Ravitha Rajalakshmi N. "Deep Learning Architecture For Fruit Classification." In Proceedings of the First International Conference on Combinatorial and Optimization, ICCAP 2021, December 7-8 2021, Chennai, India. EAI, 2021. http://dx.doi.org/10.4108/eai.7-12-2021.2314483.

5

Hervert Hernandez, Esau, Yan Cao, and Nasser Kehtarnavaz. "Deep learning architecture search for real-time image denoising." In Real-Time Image Processing and Deep Learning 2022, edited by Nasser Kehtarnavaz and Matthias F. Carlsohn. SPIE, 2022. http://dx.doi.org/10.1117/12.2620349.

6

Sharify, Sayeh, Alberto Delmas Lascorz, Mostafa Mahmoud, Milos Nikolic, Kevin Siu, Dylan Malone Stuart, Zissis Poulos, and Andreas Moshovos. "Laconic deep learning inference acceleration." In ISCA '19: The 46th Annual International Symposium on Computer Architecture. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3307650.3322255.

7

Setyanto, Arief, Kusrini Kusrini, Theopilus Bayu Sasongko, Adhitya Bagasmiwa Permana, and Andhy Panca Saputra. "Efficient Deep Learning Architecture for Facemask Detection." In 2021 4th International Conference on Information and Communications Technology (ICOIACT). IEEE, 2021. http://dx.doi.org/10.1109/icoiact53268.2021.9564011.

8

Gan, Yiming, Yuxian Qiu, Jingwen Leng, Minyi Guo, and Yuhao Zhu. "Ptolemy: Architecture Support for Robust Deep Learning." In 2020 53rd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO). IEEE, 2020. http://dx.doi.org/10.1109/micro50266.2020.00031.

9

Çano, Erion, and Maurizio Morisio. "A deep learning architecture for sentiment analysis." In ICGDA '18: 2018 the International Conference on Geoinformatics and Data Analysis, ICGDA '18. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3220228.3220229.

10

De Hertog, Dirk, and Anaïs Tack. "Deep Learning Architecture for Complex Word Identification." In Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications. Stroudsburg, PA, USA: Association for Computational Linguistics, 2018. http://dx.doi.org/10.18653/v1/w18-0539.


Reports of organizations on the topic "Deep learning architecture"

1

Cooper, Alexis, Xin Zhou, Scott Heidbrink, and Daniel Dunlavy. Using Neural Architecture Search for Improving Software Flaw Detection in Multimodal Deep Learning Models. Office of Scientific and Technical Information (OSTI), September 2020. http://dx.doi.org/10.2172/1668457.

2

Cooper, Alexis, Xin Zhou, Daniel Dunlavy, and Scott Heidbrink. Using Neural Architecture Search for Improving Software Flaw Detection in Multimodal Deep Learning Models. Office of Scientific and Technical Information (OSTI), September 2020. http://dx.doi.org/10.2172/1668929.

3

Tayeb, Shahab. Taming the Data in the Internet of Vehicles. Mineta Transportation Institute, January 2022. http://dx.doi.org/10.31979/mti.2022.2014.

Abstract:
As an emerging field, the Internet of Vehicles (IoV) has a myriad of security vulnerabilities that must be addressed to protect system integrity. To stay ahead of novel attacks, cybersecurity professionals are developing new software and systems using machine learning techniques. Neural network architectures improve such systems, including Intrusion Detection Systems (IDSs), by implementing anomaly detection, which differentiates benign data packets from malicious ones. For an IDS to best predict anomalies, the model is trained on data that is typically pre-processed through normalization and feature selection/reduction. These pre-processing techniques play an important role in training a neural network to optimize its performance. This research studies the impact of applying normalization techniques as a pre-processing step to the learning used by IDSs. The report proposes a Deep Neural Network (DNN) model with two hidden layers as the IDS architecture and compares two commonly used normalization pre-processing techniques. The findings are evaluated using accuracy, Area Under the Curve (AUC), the Receiver Operating Characteristic (ROC), F1 score, and loss. The experiments demonstrate that Z-Score normalization outperforms both no normalization and Min-Max normalization.
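The abstract above compares Z-Score and Min-Max normalization as pre-processing for a two-hidden-layer DNN. Below is a minimal sketch of both steps, assuming NumPy and PyTorch; the layer widths, feature dimension, and function names are illustrative assumptions rather than the report's actual configuration.

```python
# Minimal sketch (assuming NumPy and PyTorch): the two normalization schemes being
# compared, plus a two-hidden-layer DNN for binary anomaly detection.
import numpy as np
import torch.nn as nn

def z_score(x: np.ndarray) -> np.ndarray:
    """Z-Score normalization: zero mean, unit variance per feature."""
    return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)

def min_max(x: np.ndarray) -> np.ndarray:
    """Min-Max normalization: rescale each feature into [0, 1]."""
    mn, mx = x.min(axis=0), x.max(axis=0)
    return (x - mn) / (mx - mn + 1e-8)

def make_dnn(n_features: int) -> nn.Module:
    """DNN with two hidden layers; output is the probability that a packet is malicious."""
    return nn.Sequential(
        nn.Linear(n_features, 64), nn.ReLU(),
        nn.Linear(64, 32), nn.ReLU(),
        nn.Linear(32, 1), nn.Sigmoid(),
    )

# Usage: normalize the same raw features both ways before training separate models.
raw = np.random.rand(100, 20) * 1000.0   # stand-in for packet features on very different scales
x_z, x_mm = z_score(raw), min_max(raw)
model = make_dnn(n_features=20)
```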
4

Pettit, Chris, and D. Wilson. A physics-informed neural network for sound propagation in the atmospheric boundary layer. Engineer Research and Development Center (U.S.), June 2021. http://dx.doi.org/10.21079/11681/41034.

Abstract:
We describe what we believe is the first effort to develop a physics-informed neural network (PINN) to predict sound propagation through the atmospheric boundary layer. The PINN is a recent innovation in the application of deep learning to simulating physics. The motivation is to combine the strengths of data-driven models and physics models, thereby producing a regularized surrogate model that uses less data than a purely data-driven model. In a PINN, the data-driven loss function is augmented with penalty terms for deviations from the underlying physics, e.g., a governing equation or a boundary condition. Training data are obtained from Crank-Nicolson solutions of the parabolic equation with homogeneous ground impedance and Monin-Obukhov similarity theory for the effective sound speed in the moving atmosphere. Training data are random samples from an ensemble of solutions for combinations of the parameters governing the impedance and the effective sound speed. The PINN output is processed to produce realizations of transmission loss that look much like the Crank-Nicolson solutions. We describe the framework for implementing a PINN for outdoor sound, and we outline practical matters related to the network architecture, the size of the training set, the physics-informed loss function, and the challenge of managing the spatial complexity of the complex pressure.
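The abstract above describes the core PINN idea: a data-fit loss augmented with a penalty on the residual of a governing equation. Below is a generic sketch of that loss structure, assuming PyTorch; the network shape, the toy one-dimensional residual u'(x) + u(x) = 0, and the weight `lam` are illustrative assumptions and do not reproduce the report's parabolic-equation physics.

```python
# Generic sketch (assuming PyTorch) of a physics-informed loss: data term plus a
# penalty on the residual of a (toy) governing equation at collocation points.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))

def pinn_loss(x_data, y_data, x_phys, lam=1.0):
    # Data-driven term: fit the network to training samples.
    data_loss = nn.functional.mse_loss(net(x_data), y_data)

    # Physics term: penalize deviations from the toy equation u'(x) + u(x) = 0,
    # with the derivative obtained via automatic differentiation.
    x_phys = x_phys.requires_grad_(True)
    u = net(x_phys)
    du_dx = torch.autograd.grad(u, x_phys, grad_outputs=torch.ones_like(u), create_graph=True)[0]
    physics_loss = torch.mean((du_dx + u) ** 2)

    return data_loss + lam * physics_loss

# Usage: one loss evaluation on random training data and collocation points.
x_d, y_d = torch.rand(32, 1), torch.rand(32, 1)
x_c = torch.rand(128, 1)
loss = pinn_loss(x_d, y_d, x_c)
loss.backward()
```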