Dissertations on the topic "Efficient Neural Networks"
Format your source in APA, MLA, Chicago, Harvard, and other citation styles
Browse the top 50 dissertations for your research on the topic "Efficient Neural Networks".
Next to each work in the list of references you will find an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, and others.
You can also download the full text of the publication as a PDF and read its abstract online, when these are available in the work's metadata.
Browse dissertations across a wide range of disciplines and compile an accurate bibliography.
Silfa, Franyell. "Energy-efficient architectures for recurrent neural networks." Doctoral thesis, Universitat Politècnica de Catalunya, 2021. http://hdl.handle.net/10803/671448.
Deep learning algorithms have been remarkably successful in applications such as automatic speech recognition and machine translation. As a result, these applications are ubiquitous in our lives and are found in a large number of devices. They are built from Deep Neural Networks (DNNs), such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), which have a large number of parameters and computations. Deploying DNNs on mobile devices and servers is therefore challenging because of their memory and energy requirements. RNNs are used to solve sequence-to-sequence problems such as machine translation. They contain data dependencies between the executions of consecutive time-steps, which limits the available parallelism and makes energy-efficient RNN evaluation a challenge. This thesis studies RNNs in order to improve their energy efficiency on specialized architectures, proposing energy-saving techniques and highly efficient architectures tailored to RNN evaluation. First, we characterize a set of RNNs running on a SoC and identify that accessing memory to read the weights is the main source of energy consumption, accounting for up to 80%. We therefore build E-PUR, a processing unit for RNNs. E-PUR achieves a 6.8x speedup and improves energy consumption by 88x compared with the SoC, thanks to maximizing the temporal locality of the weights. In E-PUR, reading the weights remains the largest energy cost, so we focus on reducing memory accesses and devise a scheme that reuses previously computed results.
The observation is that, when evaluating the input sequences of an RNN, the output of a given neuron tends to change only slightly between consecutive evaluations. We therefore devise a scheme that caches neuron outputs and reuses them whenever it detects a small change between the current and previous output values, which avoids reading the weights. To decide when to reuse a previous computation, we use a Binarized Neural Network (BNN) as a reuse predictor, since its output is highly correlated with that of the RNN. This proposal avoids more than 24.2% of the computations and reduces average energy consumption by 18.5%. The memory footprint of RNN models is typically reduced by using low precision for evaluating and storing the weights. In that approach, the minimum usable precision is identified statically and set so that the RNN maintains its accuracy; normally the same precision is used for all computations. However, we observe that some computations can be evaluated at a lower precision without affecting accuracy, so we devise a technique that dynamically selects the precision used to compute each time-step. One challenge of this proposal is how to choose a lower precision; we address it by recognizing that the result of a previous evaluation can be used to determine the precision required for the current time-step. Our scheme evaluates 57% of the computations at a lower precision than the fixed precision employed by static methods. Finally, evaluation on E-PUR shows a 1.46x speedup with an average energy saving of 19.2%.
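As a side note, the reuse scheme this abstract describes, caching each neuron's output and skipping the weight fetch when a cheap predictor signals little change, can be sketched roughly as follows. This is an illustrative sketch only: the binarized dot product stands in for the thesis's BNN reuse predictor, and the function names and threshold rule are our assumptions, not the thesis design.

```python
import numpy as np

def binarize(v):
    """Map a real vector/matrix to {-1, +1}, as in binarized networks."""
    return np.where(v >= 0, 1.0, -1.0)

def rnn_step_with_reuse(x, W, Wb, cache, cache_bnn, tau=0.1):
    """One dense layer step with output reuse. For each neuron, a cheap
    binarized dot product estimates whether the output changed since the
    previous time-step. Only when a change is predicted do we touch the
    full-precision weight row W[i]; otherwise the cached output is reused."""
    out = np.empty(W.shape[0])
    xb = binarize(x)
    for i in range(W.shape[0]):
        pred = Wb[i] @ xb                        # 1-bit arithmetic: cheap output proxy
        if abs(pred - cache_bnn[i]) < tau * len(x):
            out[i] = cache[i]                    # predicted small change: skip the weight read
        else:
            out[i] = np.tanh(W[i] @ x)           # recompute with full-precision weights
            cache[i], cache_bnn[i] = out[i], pred
    return out
```

On repeated near-identical inputs the second call reuses every cached output; in hardware, that skipped weight read is where the energy saving would come from.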
Golea, Mostefa. "On efficient learning algorithms for neural networks." Thesis, University of Ottawa (Canada), 1993. http://hdl.handle.net/10393/6508.
Islam, Taj-ul. "Channel routing : efficient solutions using neural networks /." Online version of thesis, 1993. http://hdl.handle.net/1850/11154.
Zhao, Wei. "Efficient neural networks for prediction of turbulent flow." Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/16939.
Billings, Rachel Mae. "On Efficient Computer Vision Applications for Neural Networks." Thesis, Virginia Tech, 2021. http://hdl.handle.net/10919/102957.
Master of Science
The subject of machine learning and its associated jargon have become ubiquitous in the past decade as industries seek to develop automated tools and applications and researchers continue to develop new methods for artificial intelligence and improve upon existing ones. Neural networks are a type of machine learning algorithm that can make predictions in complex situations based on input data with human-like (or better) accuracy. Real-time, low-power, and low-cost systems using these algorithms are increasingly used in consumer and industry applications, often improving the efficiency of completing mundane and hazardous tasks traditionally performed by humans. The focus of this work is (1) to explore when and why neural networks may make incorrect decisions in the domain of image-based prediction tasks, (2) the demonstration of a low-power, low-cost machine learning use case using a mask recognition system intended to be suitable for deployment in support of COVID-19-related mask regulations, and (3) the investigation of how neural networks may be implemented on resource-limited technology in an efficient manner using an emerging form of computing.
Bozorgmehr, Pouya. "An efficient online feature extraction algorithm for neural networks." Diss., [La Jolla] : University of California, San Diego, 2009. http://wwwlib.umi.com/cr/ucsd/fullcit?p1470604.
Title from first page of PDF file (viewed January 13, 2010). Available via ProQuest Digital Dissertations. Includes bibliographical references (p. 61-63).
Al-Hindi, Khalid A. "Flexible basis function neural networks for efficient analog implementations /." free to MU campus, to others for purchase, 2002. http://wwwlib.umi.com/cr/mo/fullcit?p3074367.
Ekman, Carl. "Traffic Sign Classification Using Computationally Efficient Convolutional Neural Networks." Thesis, Linköpings universitet, Datorseende, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-157453.
Adamu, Abdullahi S. "An empirical study towards efficient learning in artificial neural networks by neuronal diversity." Thesis, University of Nottingham, 2016. http://eprints.nottingham.ac.uk/33799/.
Etchells, Terence Anthony. "Rule extraction from neural networks : a practical and efficient approach." Thesis, Liverpool John Moores University, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.402847.
Lundström, Dennis. "Data-efficient Transfer Learning with Pre-trained Networks." Thesis, Linköpings universitet, Datorseende, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-138612.
Riggelsen, Carsten. "Approximation methods for efficient learning of Bayesian networks /." Amsterdam ; Washington, DC : IOS Press, 2008. http://www.loc.gov/catdir/toc/fy0804/2007942192.html.
Ponca, Marek Scarbata Gerd. "Towards efficient implementation of artificial neural networks in systems on chip /." Ilmenau : ISLE, 2007. http://www.gbv.de/dms/ilmenau/toc/530583380.PDF.
Harper, Kevin M. "Challenging the Efficient Market Hypothesis with Dynamically Trained Artificial Neural Networks." UNF Digital Commons, 2016. http://digitalcommons.unf.edu/etd/718.
Allen, Michael James. "Artificial intelligence techniques for efficient object location in image sequences." Thesis, University of Wolverhampton, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.343257.
Storkey, Amos James. "Efficient covariance matrix methods for Bayesian Gaussian processes and Hopfield neural networks." Thesis, Imperial College London, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.313335.
Hu, Xu. "Towards efficient learning of graphical models and neural networks with variational techniques." Thesis, Paris Est, 2019. http://www.theses.fr/2019PESC1037.
In this thesis, I mainly focus on variational inference and probabilistic models. In particular, I cover several projects I worked on during my PhD about improving the efficiency of AI/ML systems with variational techniques. The thesis consists of two parts. In the first part, the computational efficiency of probabilistic graphical models is studied. In the second part, several problems of learning deep neural networks are investigated, related to either energy efficiency or sample efficiency.
Highlander, Tyler. "Efficient Training of Small Kernel Convolutional Neural Networks using Fast Fourier Transform." Wright State University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=wright1432747175.
Ioannou, Yani Andrew. "Structural priors in deep neural networks." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/278976.
Lee, Hyuk-Jae, 1965. "An efficient cooling algorithm for annealed neural networks with applications to optimization problems." Thesis, The University of Arizona, 1991. http://hdl.handle.net/10150/278008.
Geras, Krzysztof Jerzy. "Exploiting diversity for efficient machine learning." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/28839.
Aboubakar, Moussa. "Efficient management of IoT low power networks." Thesis, Compiègne, 2020. http://www.theses.fr/2020COMP2571.
In recent years, connected objects such as computers, sensors, and smart watches have become part of modern living and form the Internet of Things (IoT). The basic idea of IoT is to enable interaction among connected objects in order to achieve a desired goal. The IoT paradigm spans many areas of our daily life, such as smart transportation, smart cities, smart agriculture, and smart factories. Nowadays, IoT networks are characterized by the presence of billions of heterogeneous embedded devices with limited resources (e.g. limited memory, battery, CPU, and bandwidth) deployed to enable various IoT applications. However, due to both these resource constraints and the heterogeneity of IoT devices, IoT networks face various problems (e.g. link quality deterioration, node failure, network congestion). It is therefore important to manage IoT low-power networks efficiently in order to ensure good performance. To achieve this, the network management solution should be able to perform self-configuration of devices, so as to cope with the complexity introduced by current IoT networks (due to the increasing number of IoT devices and the dynamic nature of IoT networks). Moreover, it should provide a mechanism to deal with the heterogeneity of the IoT ecosystem, and it should be energy efficient in order to prolong the operational time of battery-powered IoT devices. In this thesis we therefore address the problem of configuring IoT low-power networks by proposing efficient solutions that help optimize network performance. We start with a comparative analysis of existing solutions for the management of IoT low-power networks, and then propose an intelligent solution that uses a deep neural network model to determine an efficient transmission power for RPL networks.
The performance evaluation shows that the proposed solution enables the configuration of a transmission range that reduces the network's energy consumption while maintaining connectivity. Besides, we also propose an efficient and adaptive solution for configuring the IEEE 802.15.4 MAC parameters of devices in dynamic IoT low-power networks. Simulation results show that our proposal improves the end-to-end delay compared with the standard IEEE 802.15.4 MAC. Additionally, we develop a study of solutions for congestion control in IoT low-power networks and propose a novel scheme for collecting the congestion state of devices along a given routing path, so as to enable efficient mitigation of the congestion by the network manager (the device in charge of configuring the IoT network).
Jackson, Thomas C. "Building Efficient Neuromorphic Networks in Hardware with Mixed Signal Techniques and Emerging Technologies." Research Showcase @ CMU, 2017. http://repository.cmu.edu/dissertations/1096.
Cross, Richard J. (Richard John). "Efficient Tools For Reliability Analysis Using Finite Mixture Distributions." Thesis, Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/4853.
Minasny, Budiman. "Efficient Methods for Predicting Soil Hydraulic Properties." University of Sydney. Land, Water & Crop Sciences, 2000. http://hdl.handle.net/2123/853.
Karlsson, Nils. "Comparison of linear regression and neural networks for stock price prediction." Thesis, Uppsala universitet, Signaler och system, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-445237.
Fonda, James William. "Energy efficient wireless sensor network protocols for monitoring and prognostics of large scale systems." Diss., Rolla, Mo. : Missouri University of Science and Technology, 2008. http://scholarsmine.mst.edu/thesis/pdf/fonda_09007dcc805070d4.pdf.
Vita. The entire thesis text is included in the file. Title from title screen of thesis/dissertation PDF file (viewed May 27, 2008). Includes bibliographical references.
Hayward, Ross. "Analytic and inductive learning in an efficient connectionist rule-based reasoning system." Thesis, Queensland University of Technology, 2001.
Kuai, Wenming. "Neural networks constructed using families of dense subsets of L2(R) functions and their capabilities in efficient and flexible training." Diss., Georgia Institute of Technology, 1991. http://hdl.handle.net/1853/29587.
Peh, Lawrence T. W. "An efficient algorithm for extracting Boolean functions from linear threshold gates, and a synthetic decompositional approach to extracting Boolean functions from feedforward neural networks with arbitrary transfer functions." University of Western Australia. Dept. of Computer Science, 2000. http://theses.library.uwa.edu.au/adt-WU2003.0013.
Hallberg, David, and Erik Renström. "PC Regression, Vector Autoregression, and Recurrent Neural Networks: How do they compare when predicting stock index returns for building efficient portfolios?" Thesis, KTH, Optimeringslära och systemteori, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-252557.
This thesis examines the statistical and economic performance of modelling and forecasting stock index returns by applying several statistical models to a data set of macroeconomic and financial variables. Combining linear principal component regression, vector autoregression, and the recurrent neural network model LSTM, the authors find that although the majority of the models show high statistical significance, virtually none of them outperform classical portfolio theory on efficient markets in terms of risk-adjusted returns. Several implications of the results are also discussed.
Vogel, Sebastian A. A. [Verfasser], Gerd [Akademischer Betreuer] Ascheid, and Walter [Akademischer Betreuer] Stechele. "Design and implementation of number representations for efficient multiplierless acceleration of convolutional neural networks / Sebastian A. A. Vogel ; Gerd Ascheid, Walter Stechele." Aachen : Universitätsbibliothek der RWTH Aachen, 2020. http://d-nb.info/1220082716/34.
Phan, Leon L. "A methodology for the efficient integration of transient constraints in the design of aircraft dynamic systems." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34750.
Valenti, Giacomo. "Secure, efficient automatic speaker verification for embedded applications." Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS471.
This industrial CIFRE PhD thesis addresses automatic speaker verification (ASV) issues in the context of embedded applications. The first part of the thesis focuses on more traditional problems and topics. The first work investigates the minimum enrolment data requirements for a practical, text-dependent short-utterance ASV system. Contributions in part A of the thesis consist of a statistical analysis whose objective is to isolate text-dependent factors and prove they are consistent across different sets of speakers. For very short utterances, the influence of a specific text content on system performance can be considered a speaker-independent factor. Part B of the thesis focuses on neural network-based solutions. While it was clear that neural networks and deep learning were becoming state of the art in several machine learning domains, their use in embedded solutions was hindered by their complexity. Contributions described in the second part of the thesis comprise blue-sky, experimental research which tackles the substitution of hand-crafted, traditional speaker features in favour of operating directly on the audio waveform, and the search for optimal network architectures and weights by means of genetic algorithms. This work is the most fundamental contribution: lightweight, neuro-evolved network structures which are able to learn from the raw audio input.
Westphal, Florian. "Efficient Document Image Binarization using Heterogeneous Computing and Interactive Machine Learning." Licentiate thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-16797.
Scalable resource-efficient systems for big data analytics
Elbita, Abdulhakim M. "Efficient Processing of Corneal Confocal Microscopy Images. Development of a computer system for the pre-processing, feature extraction, classification, enhancement and registration of a sequence of corneal images." Thesis, University of Bradford, 2013. http://hdl.handle.net/10454/6463.
The data and image files accompanying this thesis are not available online.
Bergström, Carl, and Oscar Hjelm. "Impact of Time Steps on Stock Market Prediction with LSTM." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-262221.
Machine learning models as tools for predicting time series have in recent years proven to perform exceptionally well. As for financial time series in the form of stock indices, which have an inherent complexity and are subject to noise and volatility, predicting stock market movements has proven particularly difficult throughout extensive research. The aim of this study is to thoroughly examine the LSTM neural network architecture and its performance when applied to the S&P 500 stock index. The main question revolves around quantifying the impact that varying the number of time steps in the LSTM model has on predictive performance. The data used in the model is of high reliability, downloaded from the Bloomberg terminal, with the closing price used as the model's feature. Other components of the model are based on previous research in which satisfactory results were achieved. The results indicate that, among the tested settings, ten time steps produce the best results, although the number of time steps does not appear particularly significant for the model's overall performance. Finally, the implications of the results provide a good basis for future research, in which parameters can be varied and fine-tuned in pursuit of optimal performance.
Elbita, Abdulhakim Mehemed. "Efficient processing of corneal confocal microscopy images : development of a computer system for the pre-processing, feature extraction, classification, enhancement and registration of a sequence of corneal images." Thesis, University of Bradford, 2013. http://hdl.handle.net/10454/6463.
Fernandez, Brillet Lucas. "Réseaux de neurones CNN pour la vision embarquée." Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALM043.
Recently, Convolutional Neural Networks have become the state-of-the-art (SOA) solution to most computer vision problems. In order to achieve high accuracy rates, CNNs require a high parameter count as well as a high number of operations. This greatly complicates the deployment of such solutions in embedded systems, which strive to reduce memory size. Indeed, while most embedded systems typically offer only a few KBytes of memory, CNN models from the SOA usually occupy multiple MBytes, or even GBytes. Throughout this thesis, multiple novel ideas to ease this issue are proposed, which requires jointly designing the solution across three main axes: application, algorithm, and hardware. In this manuscript, the main levers for tailoring the computational complexity of a generic CNN-based object detector are identified and studied. Since object detection requires scanning every possible location and scale across an image through a fixed-input CNN classifier, the number of operations quickly grows for high-resolution images. To perform object detection efficiently, the detection process is divided into two stages. The first stage involves a region-proposal network, which allows recall to be traded off against the number of operations required to perform the search, as well as against the number of regions passed on to the next stage. Techniques such as bounding-box regression also greatly help reduce the dimension of the search space. This in turn simplifies the second stage, since it reduces the task's complexity to the set of possible proposals, so parameter counts can be greatly reduced. Furthermore, CNNs exhibit properties that confirm their over-dimensioning. This over-dimensioning is one of the key success factors of CNNs in practice, since it eases the optimization process by allowing a large set of equivalent solutions.
However, it also greatly increases computational complexity, and therefore complicates deploying the inference stage of these algorithms on embedded systems. To ease this problem, we propose a CNN compression method based on Principal Component Analysis (PCA). PCA allows us to find, for each layer of the network independently, a new representation of the set of learned filters by expressing them in a more appropriate PCA basis. This basis is hierarchical, meaning that its terms are ordered by importance, and by removing the least important terms it is possible to optimally trade off approximation error against parameter count. Through this method it is possible to compress, for example, a ResNet-32 network by a factor of ×2 both in the number of parameters and in operations, with a loss of accuracy below 2%. The proposed method is also shown to be compatible with other SOA methods which exploit other CNN properties to reduce computational complexity, mainly pruning, Winograd convolution, and quantization. Through this method, we have been able to reduce the size of a ResNet-110 from 6.88 MBytes to 370 kBytes, i.e. a ×19 memory gain, with a 3.9% accuracy loss. All this knowledge is applied to build an efficient CNN-based solution for a consumer face-detection scenario. The proposed solution has a model size of just 29.3 kBytes, which is ×65 smaller than other SOA CNN face detectors, while providing equal detection performance and a lower number of operations. Our face detector is also compared to a more traditional Viola-Jones face detector, exhibiting approximately an order of magnitude faster computation, as well as the ability to scale to higher detection rates at a slight increase in computational complexity. Both networks are finally implemented in a custom embedded multiprocessor, verifying that the theoretical and measured gains from PCA are consistent.
Furthermore, parallelizing the PCA-compressed network over 8 PEs achieves a ×11.68 speed-up with respect to the original network running on a single PE.
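The PCA-based compression summarized in this abstract, re-expressing each layer's filters in a truncated, importance-ordered basis, can be sketched as follows. This is illustrative only: the flattened-filter layout and function names are our assumptions, not the thesis code.

```python
import numpy as np

def pca_compress_layer(filters, k):
    """Re-express a layer's filters in a truncated PCA basis. `filters` is
    (n_filters, d), one flattened filter per row; the `k` most important
    principal components are kept (the basis is hierarchical). Storage drops
    from n*d values to d (mean) + k*d (basis) + n*k (coefficients)."""
    mean = filters.mean(axis=0)
    centered = filters - mean
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)  # rows of Vt = PCA basis
    basis = Vt[:k]                  # (k, d), ordered by importance
    coeffs = centered @ basis.T     # (n_filters, k) coordinates in that basis
    return mean, basis, coeffs

def reconstruct(mean, basis, coeffs):
    """Rebuild the (approximate) filters from the compressed representation."""
    return mean + coeffs @ basis
```

Because the basis is ordered by importance, dropping trailing terms trades approximation error against parameter count, which is the knob the abstract describes.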
Nasser, Yehya. "An Efficient Computer-Aided Design Methodology for FPGA&ASIC High-Level Power Estimation Based on Machine Learning." Thesis, Rennes, INSA, 2019. http://www.theses.fr/2019ISAR0014.
Nowadays, advanced digital systems are required to address complex functionalities in a very wide range of applications. System complexity forces designers to respect different design constraints, such as performance, area, power consumption, and time-to-market; the best design choice is the one that respects all of them. To select an efficient design, designers need to assess the possible architectures quickly. In this thesis, we focus on making the evaluation of power consumption easier for both signal processing and hardware design engineers, so that power estimation remains fast, accurate, and flexible. We present NeuPow, a system-level FPGA/ASIC power estimation method based on machine learning. We exploit neural networks to help designers explore the dynamic power consumption of candidate architectural solutions. NeuPow relies on propagating signals through connected neural models to predict the power consumption of a composite system at a high level of abstraction. We also provide an upgraded, frequency-aware version of the estimator. To prove the effectiveness of the proposed methodology, assessments such as technology and scalability studies have been conducted on ASIC and FPGA. Results show very good estimation accuracy, with less than 10% relative error independently of the technology and the design size. NeuPow also maintains high design productivity: the simulation time is significantly improved compared with conventional design tools.
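The core idea attributed to NeuPow here, propagating signals through connected per-component models and accumulating the predicted powers, can be sketched as follows. This is a toy sketch under our own naming; the linear component models are placeholders for the trained neural models the abstract mentions.

```python
class ComponentModel:
    """Stand-in for one trained per-component power model in a NeuPow-style
    flow: from input-signal statistics it predicts the component's dynamic
    power and the statistics of the signal it passes downstream."""
    def __init__(self, power_coef, signal_coef):
        self.power_coef = power_coef
        self.signal_coef = signal_coef

    def predict(self, stats):
        # (predicted dynamic power, output-signal statistics)
        return self.power_coef * stats, self.signal_coef * stats

def composite_power(models, in_stats):
    """Propagate signal statistics through the chain of component models,
    accumulating the per-component power predictions."""
    total, stats = 0.0, in_stats
    for m in models:
        power, stats = m.predict(stats)
        total += power
    return total
```

The composite estimate needs only per-component models plus the connectivity of the design, which is what makes the approach usable at a high level of abstraction.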
Shuvo, Md Kamruzzaman. "Hardware Efficient Deep Neural Network Implementation on FPGA." OpenSIUC, 2020. https://opensiuc.lib.siu.edu/theses/2792.
Limnios, Stratis. "Graph Degeneracy Studies for Advanced Learning Methods on Graphs and Theoretical Results Edge degeneracy: Algorithmic and structural results Degeneracy Hierarchy Generator and Efficient Connectivity Degeneracy Algorithm A Degeneracy Framework for Graph Similarity Hcore-Init: Neural Network Initialization based on Graph Degeneracy." Thesis, Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAX038.
Extracting meaningful substructures from graphs has always been a key part of graph studies. In machine learning frameworks, supervised or unsupervised, as well as in theoretical graph analysis, finding dense subgraphs and specific decompositions is primordial in many social and biological applications, among many others. In this thesis we study graph degeneracy, starting from a theoretical point of view and building upon our results to find the decompositions best suited to the tasks at hand. Hence, in the first part of the thesis we work on structural results in graphs with bounded edge admissibility, proving that such graphs can be reconstructed by aggregating graphs with almost-bounded edge degree. We also provide computational complexity guarantees for the different degeneracy decompositions, i.e. whether they are NP-complete or polynomial, depending on the length of the paths on which the given degeneracy is defined. In the second part we unify the degeneracy and admissibility frameworks based on degree and connectivity. Within those frameworks we pick the most expressive on the one hand, and the most computationally efficient on the other, namely the 1-edge-connectivity degeneracy, to experiment on standard degeneracy tasks such as finding influential spreaders. As the previous results proved to perform poorly, we go back to using the k-core, but plug it into a supervised framework, i.e. graph kernels. Providing a general framework named core-kernel, we use the k-core decomposition as a preprocessing step for the kernel and apply the latter to every subgraph obtained by the decomposition for comparison. We achieve state-of-the-art performance on graph classification for a small computational-cost trade-off. Finally, we design a novel degree degeneracy framework for hypergraphs, and simultaneously for bipartite graphs, since they are the incidence graphs of hypergraphs.
This decomposition is then applied directly to pretrained neural network architectures, as they induce bipartite graphs, and the coreness of the neurons is used to re-initialize the neural network weights. This framework not only outperforms state-of-the-art initialization techniques but is also applicable to any pair of convolutional and linear layers, and thus to whatever architecture requires it.
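The degeneracy machinery behind an Hcore-Init-style scheme, computing core numbers on the bipartite graph induced by a layer's weights, can be sketched as follows. This is an illustrative sketch with hypothetical helper names, not the thesis implementation (which works on hypergraph incidence graphs in the same spirit).

```python
import numpy as np

def core_numbers(adj):
    """Core number of every vertex via the standard peeling algorithm:
    repeatedly delete a minimum-degree vertex; a vertex's core number is
    the largest minimum degree observed up to its deletion. `adj` maps
    each vertex to its set of neighbours."""
    deg = {v: len(ns) for v, ns in adj.items()}
    removed, core, k = set(), {}, 0
    while len(removed) < len(adj):
        v = min((u for u in adj if u not in removed), key=deg.get)
        k = max(k, deg[v])
        core[v] = k
        removed.add(v)
        for w in adj[v]:
            if w not in removed:
                deg[w] -= 1
    return core

def layer_bipartite_graph(W, thresh=0.0):
    """Bipartite graph between the input units ('i', j) and output units
    ('o', i) of one linear layer, with an edge wherever |W[i, j]| > thresh."""
    n_out, n_in = W.shape
    adj = {('o', i): set() for i in range(n_out)}
    adj.update({('i', j): set() for j in range(n_in)})
    for i in range(n_out):
        for j in range(n_in):
            if abs(W[i, j]) > thresh:
                adj[('o', i)].add(('i', j))
                adj[('i', j)].add(('o', i))
    return adj
```

A unit's core number could then serve as an importance score when re-initializing weights, which is the role coreness plays in the framework described above.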
Naoto, Chiche Benjamin. "Video classification with memory and computation-efficient convolutional neural network." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254678.
Video understanding involves problems such as video classification, which consists of annotating videos based on their content and frames. In many real-world applications, such as robotics, self-driving cars, augmented reality (AR), and the Internet of Things (IoT), video understanding tasks must be performed in real time on a device with limited memory and computational resources, while meeting low-latency requirements. In this context, while neural networks that are memory- and computation-efficient, i.e. that strike a reasonable trade-off between accuracy and efficiency (with respect to memory size and computation), have been developed for image recognition tasks, studies on video classification have not fully exploited these techniques. To fill this gap, this project answers the following research question: how can video classification pipelines be built on memory- and computation-efficient convolutional neural networks (CNNs), and how do they perform? To answer this question, the project builds and evaluates video classification pipelines as novel artifacts. The empirical research method involving triangulation (i.e. qualitative and quantitative at once) is used. The artifacts are based on an existing memory- and computation-efficient CNN, and their evaluation is based on a publicly available video classification dataset. The case-study research strategy is adopted: we attempt to generalize the obtained results as far as possible to other memory- and computation-efficient CNNs and video classification datasets. As a result, the artifacts are built and show satisfactory performance metrics compared with baseline results, which are also developed in this thesis, and with values reported in other research papers based on the same dataset.
In conclusion, video classification pipelines based on a memory- and computation-efficient CNN can be built by designing and developing artifacts that combine methods inspired by existing papers with new approaches, and these artifacts present satisfactory performance. In particular, we observe that the drop in accuracy induced by a memory- and computation-efficient CNN when handling video frames is compensated to some extent by capturing temporal information through consideration of the sequence of those frames.
Batbayar, Batsukh. "Improving Time Efficiency of Feedforward Neural Network Learning." RMIT University. Electrical and Computer Engineering, 2009. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20090303.114706.
Harte, T. P. "Efficient neural network classification of magnetic resonance images of the breast." Thesis, University of Cambridge, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.603805.
Zhou, Helong, and 周賀龍. "Efficient Kernel Sharing Convolutional Neural Networks." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/hx3vqj.
National Taiwan University of Science and Technology
Department of Electronic Engineering
Academic year 106 (2017, Republic of China calendar)
Increasing focus has been put on pursuing computation efficient convolutional neural network (CNN) models. To lessen the redundancy of convolutional kernels, this paper proposes two new convolutional structures, i.e., kernel sharing convolution (KSC) and weighted kernel sharing convolution (WKSC), where an extra weighting is imposed for each input in WKSC to manifest the diversity of input channels. Inspired by the fact that in traditional convolution, each input channel has its respective kernel to convolute with, which may lead to redundant kernels, both of the proposed schemes gather the inputs using the same kernel together, so the inputs in each group can share the same convolutional kernel. As a consequence, the number of kernels can be greatly reduced, leading to a reduction of model parameters and the speedup of inference. Moreover, WKSC is also combined with depthwise separable convolutions, resulting in a highly compressed architecture. Extensive experiments on CIFAR-100, Caltech-256 and ImageNet classification demonstrate the effectiveness of the new approach in both computation cost and the parameters required compared with the state-of-the-art works.
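The kernel-sharing idea in this abstract can be sketched in one dimension as follows. This is a toy sketch under assumed names; real KSC/WKSC operate on 2-D feature maps with many output channels.

```python
import numpy as np

def wksc_forward(x, kernels, groups, alphas):
    """Weighted kernel-sharing convolution, reduced to a 1-D, single
    output-channel toy. Input channels in one group share a single kernel,
    so each group is collapsed to a weighted sum of its channels before one
    convolution; `alphas` are the per-channel weights that distinguish WKSC
    from plain KSC (set them all to 1 to get KSC). With C input channels
    and G groups, the kernel count drops from C to G."""
    out = None
    for g, channel_ids in enumerate(groups):
        s = sum(alphas[c] * x[c] for c in channel_ids)   # weighted group sum
        y = np.convolve(s, kernels[g], mode="valid")     # one shared kernel per group
        out = y if out is None else out + y
    return out
```

By linearity of convolution, this is equivalent to a standard convolution whose per-channel kernel is the channel's alpha times its group's shared kernel, which is why the sharing saves parameters without changing the class of computable outputs within a group.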
Stanley, Kenneth Owen. "Efficient evolution of neural networks through complexification." Thesis, 2004. http://hdl.handle.net/2152/1266.
Повний текст джерела"Energy Efficient Hardware Design of Neural Networks." Master's thesis, 2018. http://hdl.handle.net/2286/R.I.51597.
Dissertation/Thesis
Master's Thesis, Electrical Engineering, 2018
Stanley, Kenneth Owen Miikkulainen Risto. "Efficient evolution of neural networks through complexification." 2004. http://wwwlib.umi.com/cr/utexas/fullcit?p3143474.
Gupta, Kartik. "Towards Efficient and Reliable Deep Neural Networks." PhD thesis, 2022. http://hdl.handle.net/1885/275682.