Dissertations / Theses on the topic 'DEEP framework'


Consult the top 50 dissertations / theses for your research on the topic 'DEEP framework.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Haque, Ashraful. "A Deep Learning-based Dynamic Demand Response Framework." Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/104927.

Full text
Abstract:
The electric power grid is evolving in terms of generation, transmission and distribution network architecture. On the generation side, distributed energy resources (DER) are participating at a much larger scale. Transmission and distribution networks are transforming from a centralized architecture to a decentralized one. Residential and commercial buildings are now considered active elements of the electric grid which can participate in grid operation through applications such as Demand Response (DR). DR is an application through which electric power consumption during peak demand periods can be curtailed. DR applications ensure an economic and stable operation of the electric grid by eliminating grid stress conditions. In addition, DR can be utilized as a mechanism to increase the participation of green electricity in an electric grid. DR applications, in general, are passive in nature. During peak demand periods, common practice is to shut down the operation of pre-selected electrical equipment, e.g., heating, ventilation and air conditioning (HVAC) systems and lights, to reduce power consumption. This approach, however, is not optimal and does not take into consideration any user preference. Furthermore, it does not provide any information about demand flexibility beforehand. Under the broad concept of grid modernization, the focus is now on applications of data analytics in grid operation to ensure an economic, stable and resilient operation of the electric grid. The work presented here utilizes data analytics in a DR application that transforms DR from a static, look-up-based reactive function into a dynamic, context-aware proactive solution. The dynamic demand response framework presented in this dissertation performs three major functionalities: electrical load forecasting, electrical load disaggregation and peak load reduction during DR periods.
The building-level electrical load forecasting quantifies required peak load reduction during DR periods. The electrical load disaggregation provides equipment-level power consumption. This will quantify the available building-level demand flexibility. The peak load reduction methodology provides optimal HVAC setpoint and brightness during DR periods to reduce the peak demand of a building. The control scheme takes user preference and context into consideration. A detailed methodology with relevant case studies regarding the design process of the network architecture of a deep learning algorithm for electrical load forecasting and load disaggregation is presented. A case study regarding peak load reduction through HVAC setpoint and brightness adjustment is also presented. To ensure the scalability and interoperability of the proposed framework, a layer-based software architecture to replicate the framework within a cloud environment is demonstrated.
Doctor of Philosophy
The modern power grid, known as the smart grid, is transforming how electricity is generated, transmitted and distributed across the US. In a legacy power grid, the utilities are the suppliers and the residential or commercial buildings are the consumers of electricity. However, the smart grid considers these buildings as active grid elements which can contribute to the economic, stable and resilient operation of an electric grid. Demand Response (DR) is a grid application that reduces electrical power consumption during peak demand periods. The objective of the DR application is to reduce stress conditions of the electric grid. The current DR practice is to shut down pre-selected electrical equipment, e.g., HVAC and lights, during peak demand periods. However, this approach is static, pre-fixed and does not consider any consumer preference. The proposed framework in this dissertation transforms the DR application from a look-up-based function into a dynamic, context-aware solution. The proposed dynamic demand response framework performs three major functionalities: electrical load forecasting, electrical load disaggregation and peak load reduction. The electrical load forecasting quantifies building-level power consumption that needs to be curtailed during the DR periods. The electrical load disaggregation quantifies demand flexibility through equipment-level power consumption disaggregation. The peak load reduction methodology provides actionable intelligence that can be utilized to reduce the peak demand during DR periods. The work leverages functionalities of a deep learning algorithm to increase forecasting accuracy. An interoperable and scalable software implementation is presented to allow integration of the framework with existing energy management systems.
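The peak-load reduction step described in this abstract, choosing HVAC setpoints and lighting levels subject to user preference, can be sketched as a small constrained search. This is an illustrative toy, not the dissertation's actual controller: the linear power models, penalty weights, and discrete action grid below are all assumptions.

```python
# Hedged sketch: pick an HVAC setpoint offset and a lighting dim level that
# meet a required peak-load reduction (from the load forecast) while minimizing
# a user-discomfort penalty. Power models and weights are illustrative only.

def peak_load_reduction(required_kw, hvac_kw_per_degc=0.8, light_kw_per_step=0.2,
                        comfort_weight=1.0, light_weight=0.5):
    """Grid-search over discrete control actions during a DR period."""
    best = None
    for setpoint_offset in range(0, 5):      # raise cooling setpoint by 0-4 degC
        for dim_steps in range(0, 5):        # dim lights by 0-4 steps
            saved = setpoint_offset * hvac_kw_per_degc + dim_steps * light_kw_per_step
            if saved < required_kw:
                continue                     # action does not meet the DR target
            discomfort = comfort_weight * setpoint_offset**2 + light_weight * dim_steps**2
            if best is None or discomfort < best[0]:
                best = (discomfort, setpoint_offset, dim_steps, saved)
    return best  # None if the target exceeds the available demand flexibility

best = peak_load_reduction(required_kw=2.0)
```

Returning `None` when the target is infeasible mirrors the abstract's point that disaggregation must first establish how much flexibility a building actually has.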
2

Rawat, Sharad. "DEEP LEARNING BASED FRAMEWORK FOR STRUCTURAL TOPOLOGY DESIGN." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1559560543458263.

Full text
3

Waldow, Walter E. "An Adversarial Framework for Deep 3D Target Template Generation." Wright State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=wright1597334881614898.

Full text
4

Kylén, Jonas. "Deep compositing in VFX : Creating a framework for deciding when to render deep images or traditional renders." Thesis, Luleå tekniska universitet, Institutionen för konst, kommunikation och lärande, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-74559.

Full text
5

Lavangnananda, Kittichai. "A framework for qualitative model-based reasoning about mechanisms." Thesis, Cardiff University, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.341744.

Full text
6

Hanchate, Narender. "A game theoretic framework for interconnect optimization in deep submicron and nanometer design." [Tampa, Fla] : University of South Florida, 2006. http://purl.fcla.edu/usf/dc/et/SFE0001523.

Full text
7

Anzalone, Evan John. "Agent and model-based simulation framework for deep space navigation analysis and design." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/52163.

Full text
Abstract:
As the number of spacecraft in simultaneous operation continues to grow, there is an increased dependency on ground-based navigation support. The current baseline system for deep space navigation utilizes Earth-based radiometric tracking, which requires long duration, often global, observations to perform orbit determination and generate a state update. The age, complexity, and high utilization of the assets that make up the Deep Space Network (DSN) pose a risk to spacecraft navigation performance. With increasingly complex mission operations, such as automated asteroid rendezvous or pinpoint planetary landing, the need for high accuracy and autonomous navigation capability is further reinforced. The Network-Based Navigation (NNAV) method developed in this research takes advantage of the growing inter-spacecraft communication network infrastructure to allow for autonomous state measurement. By embedding navigation headers into the data packets transmitted between nodes in the communication network, it is possible to provide an additional source of navigation capability. Simulation results indicate that as NNAV is implemented across the deep space network, the state estimation capability continues to improve, providing an embedded navigation network. To analyze the capabilities of NNAV, an analysis and simulation framework is designed that integrates navigation and communication analysis. Model-Based Systems Engineering (MBSE) and Agent-Based Modeling (ABM) techniques are utilized to foster a modular, expandable, and robust framework. This research has developed the Space Navigation Analysis and Performance Evaluation (SNAPE) framework. This framework allows for design, analysis, and optimization of deep space navigation and communication architectures. SNAPE captures high-level performance requirements and bridges them to specific functional requirements of the analytical implementation. 
The SNAPE framework is implemented in a representative prototype environment using the Python language and verified using industry standard packages. The capability of SNAPE is validated through a series of example test cases. These analyses relate specific state measurements to state estimation performance and demonstrate the core analytic functionality of the framework. Specific cases analyze the effects of initial error and measurement uncertainty on state estimation performance. The timing and frequency of state measurements are also investigated to show the need for frequent state measurements to minimize navigation errors. The dependence of navigation accuracy on timing stability and accuracy is also demonstrated. These test cases capture the functionality of the tool as well as validate its performance. The SNAPE framework is utilized to capture and analyze NNAV, both conceptually and analytically. Multiple evaluation cases are presented that focus on the Mars Science Laboratory's (MSL) Martian transfer mission phase. These evaluation cases validate NNAV and provide concrete evidence of its operational capability for this particular application. Improvement to onboard state estimation performance and reduced reliance on Earth-based assets is demonstrated through simulation of the MSL spacecraft utilizing NNAV processes and embedded packets within a limited network containing DSN and MRO. From the demonstrated state estimation performance, NNAV is shown to be a capable and viable method of deep space navigation. Through its implementation as a state augmentation method, the concept integrates with traditional measurements and reduces the dependence on Earth-based updates. Future development of this concept focuses on a growing network of assets and spacecraft, which allows for improved operational flexibility and accuracy in spacecraft state estimation capability and a growing solar system-wide navigation network.
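The core NNAV idea, embedding navigation headers into ordinary data packets so that every received packet doubles as a navigation measurement, can be sketched with a fixed binary header. The field layout below (transmit time plus the sender's estimated position and velocity) and its units are assumptions for illustration, not the dissertation's actual packet format.

```python
# Hedged sketch: prepend a small navigation header to a data packet and
# recover it on receipt. Field choice and units are illustrative assumptions.

import struct

# big-endian: transmit time [s], position x/y/z [km], velocity x/y/z [km/s]
NAV_HEADER = struct.Struct(">d3d3d")

def embed_nav_header(payload: bytes, t_tx: float, pos, vel) -> bytes:
    """Attach the sender's state estimate to an outgoing packet."""
    return NAV_HEADER.pack(t_tx, *pos, *vel) + payload

def extract_nav_header(packet: bytes):
    """Split a received packet into its navigation fields and payload."""
    fields = NAV_HEADER.unpack_from(packet)
    t_tx, pos, vel = fields[0], fields[1:4], fields[4:7]
    return t_tx, pos, vel, packet[NAV_HEADER.size:]

pkt = embed_nav_header(b"science data", 1234.5, (1.0, 2.0, 3.0), (0.1, 0.0, -0.1))
t_tx, pos, vel, payload = extract_nav_header(pkt)
```

A receiver comparing `t_tx` against its own clock gets a range-like measurement "for free" on every packet, which is what lets the communication network double as a navigation network.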
8

Wagh, Ameya Yatindra. "A Deep 3D Object Pose Estimation Framework for Robots with RGB-D Sensors." Digital WPI, 2019. https://digitalcommons.wpi.edu/etd-theses/1287.

Full text
Abstract:
The task of object detection and pose estimation has widely been addressed using template matching techniques. However, these algorithms are sensitive to outliers and occlusions, and have high latency due to their iterative nature. Recent research in computer vision and deep learning has shown great improvements in the robustness of these algorithms. However, one of the major drawbacks of these algorithms is that they are specific to the objects. Moreover, the estimation of pose depends significantly on RGB image features. As these algorithms are trained on meticulously labeled large datasets with ground-truth object poses, it is difficult to re-train them for real-world applications. To overcome this problem, we propose a two-stage pipeline of convolutional neural networks which uses RGB images to localize objects in 2D space and depth images to estimate a 6DoF pose. Thus the pose estimation network learns only the geometric features of the object and is not biased by its color features. We evaluate the performance of this framework on the LINEMOD dataset, which is widely used to benchmark object pose estimation frameworks. We found the results to be comparable with state-of-the-art algorithms that use RGB-D images. Secondly, to show the transferability of the proposed pipeline, we implement it on the ATLAS robot for a pick-and-place experiment. As the distribution of images in the LINEMOD dataset differs from that of the images captured by the MultiSense sensor on ATLAS, we generate a synthetic dataset from very few real-world images captured by the MultiSense sensor. We use this dataset to train just the object detection networks used in the ATLAS robot experiment.
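The two-stage structure this abstract describes, RGB for 2D localization and depth-only input for the pose network, can be sketched as a thin orchestration layer. The detector and pose network here are stand-in callables; the real pipeline uses trained convolutional networks.

```python
# Hedged sketch of the two-stage pipeline: stage 1 proposes a 2D box from the
# RGB image; stage 2 estimates pose from the depth crop inside that box only,
# so pose depends on geometry rather than color. Stand-in callables throughout.

def crop(depth, box):
    """Cut the (x0, y0, x1, y1) region out of a row-major depth map."""
    x0, y0, x1, y1 = box
    return [row[x0:x1] for row in depth[y0:y1]]

def estimate_pose(rgb, depth, detect_2d, pose_net):
    box = detect_2d(rgb)               # stage 1: localize object in RGB
    return pose_net(crop(depth, box))  # stage 2: pose from depth geometry only

# toy stand-ins for the two networks
depth = [[float(x + y) for x in range(4)] for y in range(4)]
pose = estimate_pose(rgb=None, depth=depth,
                     detect_2d=lambda img: (1, 1, 3, 3),
                     pose_net=lambda d: {"t": (d[0][0], 0.0, 0.0),
                                         "rpy": (0.0, 0.0, 0.0)})
```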
9

Cherti, Mehdi. "Deep generative neural networks for novelty generation : a foundational framework, metrics and experiments." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS029/document.

Full text
Abstract:
In recent years, significant advances made in deep neural networks enabled the creation of groundbreaking technologies such as self-driving cars and voice-enabled personal assistants. Almost all successes of deep neural networks are about prediction, whereas the initial breakthroughs came from generative models. Today, although we have very powerful deep generative modeling techniques, these techniques are essentially being used for prediction or for generating known objects (i.e., good quality images of known classes): any generated object that is a priori unknown is considered as a failure mode (Salimans et al., 2016) or as spurious (Bengio et al., 2013b). In other words, when prediction seems to be the only possible objective, novelty is seen as an error that researchers have been trying hard to eliminate. This thesis defends the point of view that, instead of trying to eliminate these novelties, we should study them and the generative potential of deep nets to create useful novelty, especially given the economic and societal importance of creating new objects in contemporary societies. The thesis sets out to study novelty generation in relationship with data-driven knowledge models produced by deep generative neural networks. Our first key contribution is the clarification of the importance of representations and their impact on the kind of novelties that can be generated: a key consequence is that a creative agent might need to rerepresent known objects to access various kinds of novelty. We then demonstrate that traditional objective functions of statistical learning theory, such as maximum likelihood, are not necessarily the best theoretical framework for studying novelty generation. We propose several other alternatives at the conceptual level. A second key result is the confirmation that current models, with traditional objective functions, can indeed generate unknown objects. 
This also shows that even though objectives like maximum likelihood are designed to eliminate novelty, practical implementations do generate novelty. Through a series of experiments, we study the behavior of these models and the novelty they generate. In particular, we propose a new task setup and metrics for selecting good generative models. Finally, the thesis concludes with a series of experiments clarifying the characteristics of models that can exhibit novelty. Experiments show that sparsity, noise level, and restricting the capacity of the net eliminate novelty, and that models that are better at recognizing novelty are also good at generating novelty.
10

McClintick, Kyle W. "Training Data Generation Framework For Machine-Learning Based Classifiers." Digital WPI, 2018. https://digitalcommons.wpi.edu/etd-theses/1276.

Full text
Abstract:
In this thesis, we propose a new framework for the generation of training data for machine learning techniques used for classification in communications applications. Machine learning-based signal classifiers do not generalize well when training data does not describe the underlying probability distribution of real signals. The simplest way to accomplish statistical similarity between training and testing data is to synthesize training data passed through a permutation of plausible forms of noise. To accomplish this, a framework is proposed that implements arbitrary channel conditions and baseband signals. A dataset generated using the framework is considered, and is shown to be appropriately sized by having 11% lower entropy than state-of-the-art datasets. Furthermore, unsupervised domain adaptation can allow for powerful generalized training via deep feature transforms on unlabeled evaluation-time signals. A novel Deep Reconstruction-Classification Network (DRCN) application is introduced, which attempts to maintain near-peak signal classification accuracy despite dataset bias, or perturbations on testing data unforeseen in training. Together, feature transforms and diverse training data generated from the proposed framework, covering a range of plausible noise, can train a deep neural net to classify signals well in many real-world scenarios despite unforeseen perturbations.
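The data-generation idea, passing a clean baseband signal through a permutation of plausible channel impairments, can be sketched directly. The impairment set (AWGN, phase offset, attenuation) and its parameters are illustrative assumptions, not the thesis's actual channel models.

```python
# Hedged sketch: synthesize one training example by applying a randomly
# ordered combination of channel impairments to a clean complex baseband
# signal. Impairments and parameter ranges are illustrative assumptions.

import cmath
import random

def awgn(sig, sigma, rng):
    """Additive white Gaussian noise on I and Q."""
    return [s + complex(rng.gauss(0, sigma), rng.gauss(0, sigma)) for s in sig]

def phase_offset(sig, theta):
    rot = cmath.exp(1j * theta)
    return [s * rot for s in sig]

def attenuate(sig, gain):
    return [s * gain for s in sig]

def generate_example(clean, rng):
    impairments = [lambda s: awgn(s, 0.05, rng),
                   lambda s: phase_offset(s, rng.uniform(0, 2 * cmath.pi)),
                   lambda s: attenuate(s, rng.uniform(0.5, 1.0))]
    rng.shuffle(impairments)  # a random permutation of plausible channel effects
    for impair in impairments:
        clean = impair(clean)
    return clean

rng = random.Random(0)
example = generate_example([complex(1, 0)] * 64, rng)
```

Sampling a fresh permutation and fresh parameters per example is what spreads the training set across the space of channel conditions the classifier may later face.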
11

Ekstedt, Erik. "A Deep Reinforcement Learning Framework where Agents Learn a Basic form of Social Movement." Thesis, Uppsala universitet, Avdelningen för visuell information och interaktion, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-349381.

Full text
Abstract:
For social robots to move and behave appropriately in dynamic and complex social contexts, they need to be flexible in their movement behaviors. The natural complexity of social interaction makes this a difficult property to encode programmatically. Instead of programming these algorithms by hand, it could be preferable to have the system learn these behaviors. In this project a framework is created in which an agent, through deep reinforcement learning, can learn how to mimic poses, here defined as the most basic case of social movements. The framework aimed to be as agent-agnostic as possible and suitable for both real-life robots and virtual agents through an approach called "dancer in the mirror". The framework utilized a learning algorithm called PPO and trained agents, as a proof of concept, both in a virtual environment for the humanoid robot Pepper and for virtual agents in a physics simulation environment. The framework was meant to be a simple starting point that could be extended to incorporate more and more complex tasks. This project shows that the framework was functional for agents learning to mimic poses in a simplified environment.
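The pose-mimicking objective can be sketched as a dense reward that grows as the agent's joint angles approach the target pose; this is the kind of signal a PPO agent would be trained on. The exponential shaping and scale are illustrative choices, not the exact reward from the thesis.

```python
# Hedged sketch: a reward in (0, 1] for mimicking a target pose, where 1.0
# means all joint angles match exactly. Shaping function is an assumption.

import math

def mimic_reward(agent_pose, target_pose, scale=1.0):
    """Exponentially shaped reward on the squared joint-angle error."""
    err = sum((a - t) ** 2 for a, t in zip(agent_pose, target_pose))
    return math.exp(-scale * err)

r_perfect = mimic_reward([0.1, -0.4, 1.2], [0.1, -0.4, 1.2])  # exact match -> 1.0
r_off = mimic_reward([0.0, 0.0, 0.0], [0.1, -0.4, 1.2])       # partial credit
```

A smooth, always-positive reward like this gives the policy gradient useful signal even far from the target, which matters for the sparse-feedback settings social robotics tends to produce.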
12

Moreno, Felipe (Felipe I.). "Expresso-AI : a framework for explainable video based deep learning models through gestures and expressions." Thesis, Massachusetts Institute of Technology, 2021. https://hdl.handle.net/1721.1/130700.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February, 2021
Cataloged from the official PDF of thesis.
Includes bibliographical references (pages 95-102).
We have developed a framework to analyze the decisions of Deep Neural Networks trained on facial videos, and we test this framework on Automatic Depression Detection. We first take Deep Convolutional Neural Networks (DCNN) pre-trained on Action Recognition datasets and fine-tune them on the facial videos. We interpret the model's saliency maps by analyzing face regions and temporal expression semantics. Our framework generates both visual and quantitative explanations of the model's decisions. Simultaneously, our video-based modeling has improved on previous single-face benchmarks of visual Automatic Depression Detection (ADD). We conclude that we can generate hypotheses from a facial model's decisions and have improved Automatic Depression Detection's predictive performance.
by Felipe Moreno.
M. Eng.
13

Solsona, Berga Alba. "Advancement of methods for passive acoustic monitoring : a framework for the study of deep-diving cetacean." Doctoral thesis, Universitat Politècnica de Catalunya, 2019. http://hdl.handle.net/10803/665710.

Full text
Abstract:
Marine mammals face numerous anthropogenic threats, including fisheries interactions, ocean noise, ship strikes, and marine debris. Monitoring the negative impact on marine mammals through the assessment of population trends requires information about population size, spatiotemporal distribution, population structure, and animal behavior. Passive acoustic monitoring has become a viable method for gathering long-term data on highly mobile and notoriously cryptic marine mammals. However, passive acoustic monitoring still faces major challenges requiring further development of robust analysis tools, especially as it becomes increasingly used in applied conservation for long-term and large-scale studies of endangered or data-deficient species such as sperm or beaked whales. Further challenges lie in the translation of animal presence into quantitative population density estimates, since methods must control for variation in acoustic detectability of the target species, environmental factors, and species-specific vocalization rates. The main contribution of this thesis is the advancement of the framework for long-term quantitative monitoring of cetacean species, applied to deep divers like sperm and beaked whales. Fully automated methods were developed and applied to different populations of beaked whales under different conditions. This provided insight into the generalization capabilities of these automatic techniques and into best practices. However, implementing these toolkits is not always practical, and alternative methods for additional data processing were developed to expeditiously serve multiple purposes, including annotation of individual sounds, evaluation of data to provide a highly dynamic technique, and classification for quantitative monitoring studies. This work also presents the longest time series of sperm whale presence recorded using passive acoustic monitoring, spanning over seven years in the Gulf of Mexico.
Echolocation clicks were detected and discriminated from other sounds to understand the spatiotemporal distribution and structure of the population. A series of steps were implemented to provide adequate parameters and characteristics of the target population for density estimation using an echolocation click-based method. This allowed for the study of the Gulf of Mexico’s sperm whale population, providing significant progress towards the understanding of the population structure, distribution, and trends, in addition to potential long-term impacts of the well-known catastrophic Deepwater Horizon oil spill and other anthropogenic activities. The emergence of innovative approaches for detecting the presence of marine mammals and documenting human interactions can provide insight into ecosystem change. These species can be used as sentinels of ocean health to ensure the conservation of their marine environment into the next epoch.
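The echolocation-click-based density estimation the abstract alludes to follows the standard cue-counting logic: scale the detected click count by a false-positive correction, then divide by the clicks one expects per animal given detection probability, monitored area, recording duration, and per-animal click rate. The function below is a generic sketch of that formula; every number in the example call is illustrative, not a value from the thesis.

```python
# Hedged sketch of a cue-counting density estimator: animals per km^2 from a
# single-sensor click count. Symbols follow the standard formulation; the
# example parameters are illustrative assumptions only.

def cue_density(n_clicks, false_pos_frac, det_prob, area_km2, duration_s, click_rate_hz):
    """Estimated animal density from detected echolocation clicks."""
    effective_clicks = n_clicks * (1.0 - false_pos_frac)      # remove false detections
    expected_clicks_per_animal = det_prob * area_km2 * duration_s * click_rate_hz
    return effective_clicks / expected_clicks_per_animal

d = cue_density(n_clicks=500_000, false_pos_frac=0.05, det_prob=0.02,
                area_km2=3.14 * 20**2, duration_s=7 * 24 * 3600, click_rate_hz=1.2)
```

The abstract's emphasis on controlling for detectability, environment, and species-specific vocalization rates maps directly onto `det_prob`, `area_km2`, and `click_rate_hz`: a biased estimate of any of them biases the density estimate proportionally.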
14

Cofré, Martel Sergio Manuel Ignacio. "A deep learning based framework for physical assets' health prognostics under uncertainty for big Machinery Data." Tesis, Universidad de Chile, 2018. http://repositorio.uchile.cl/handle/2250/168080.

Full text
Abstract:
Magíster en Ciencias de la Ingeniería, Mención Mecánica (Master of Engineering Sciences, Mechanical Engineering)
Advances in measurement technology have enabled the continuous monitoring of complex systems through multiple sensors, generating large databases. These data are usually stored to be analyzed later with traditional Prognostics and Health Management (PHM) techniques. However, much of this information is often wasted, since traditional PHM methods require expert knowledge of the system for their implementation. For this reason, data-driven approaches can be used to complement PHM methods when estimating reliability-related parameters. The objective of this thesis is to develop and implement a Deep Learning-based framework for estimating the health state of systems and components using multi-sensor monitoring data. To this end, the following specific objectives are defined: develop an architecture capable of extracting temporal and spatial features from the data; propose a framework for health state estimation and validate it on two datasets, the C-MAPSS turbofan engine dataset and the CS2 lithium-ion battery dataset; and, finally, provide an estimate of uncertainty propagation in health state prognostics. A structure is proposed that integrates the spatial-relationship advantages of Convolutional Neural Networks with the sequential analysis of Long Short-Term Memory Recurrent Neural Networks, using Dropout both for regularization and as a Bayesian approximation for estimating model uncertainty. Accordingly, the proposed architecture is named CNNBiLSTM. For the C-MAPSS data, four different models are trained, one for each data subset, with the objective of estimating remaining useful life. The models yield results that surpass the state of the art in root mean square error (RMSE), showing robustness in the training process and low uncertainty in their predictions. Similar results are obtained for the CS2 dataset, where the model trained on all battery cells estimates the state of charge and the state of health with low RMSE and small uncertainty in its estimates. The results obtained by the trained models show that the proposed architecture is adaptable to different systems and can extract abstract temporal relationships from sensor data for reliability assessment. Moreover, the models show robustness during the training process, as well as accurate estimation with low uncertainty.
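The Dropout-as-Bayesian-approximation idea in the abstract above can be illustrated with a minimal NumPy sketch (this is not the thesis's CNNBiLSTM; the two-layer regressor and all weights below are hypothetical): dropout is kept active at inference time, and the spread of repeated stochastic forward passes serves as the uncertainty estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "trained" weights for a tiny 2-layer regressor (hypothetical values).
W1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

def mc_dropout_predict(x, n_samples=200, p_drop=0.2):
    """Monte Carlo dropout: keep dropout active at inference and treat
    the spread of the stochastic predictions as an uncertainty proxy."""
    preds = []
    for _ in range(n_samples):
        h = np.maximum(x @ W1 + b1, 0.0)         # ReLU hidden layer
        mask = rng.random(h.shape) >= p_drop     # dropout stays ON at test time
        h = h * mask / (1.0 - p_drop)            # inverted-dropout scaling
        preds.append(h @ W2 + b2)
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.std(axis=0) # estimate, uncertainty

x = rng.normal(size=(1, 8))
mean, std = mc_dropout_predict(x)
print(mean.shape, std.shape)  # (1, 1) (1, 1)
```

In the thesis's setting, the same wrapper would be applied around the trained CNNBiLSTM forward pass rather than this toy regressor.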
APA, Harvard, Vancouver, ISO, and other styles
15

Granger, Matthew G. "A Combined Framework for Control and Fault Monitoring of a DC Microgrid for Deep Space Applications." Case Western Reserve University School of Graduate Studies / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=case1607694725020458.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Shaikh, Farooq Israr Ahmed. "Security Framework for the Internet of Things Leveraging Network Telescopes and Machine Learning." Scholar Commons, 2019. https://scholarcommons.usf.edu/etd/7935.

Full text
Abstract:
The recent advancements in computing and sensor technologies, coupled with improvements in embedded system design methodologies, have resulted in the novel paradigm called the Internet of Things (IoT). IoT is essentially a network of small embedded devices enabled with sensing capabilities that can interact with multiple entities to relay information about their environments. This sensing information can also be stored in the cloud for further analysis, thereby reducing storage requirements on the devices themselves. The above factors, coupled with the ever-increasing need of modern society to stay connected at all times, have resulted in IoT technology penetrating all facets of modern life. In fact, IoT systems are already seeing widespread applications across multiple industries such as transport, utility, manufacturing, healthcare, and home automation. Although the above developments promise tremendous benefits in terms of productivity and efficiency, they also bring forth a plethora of security challenges. In particular, the current design philosophy of IoT devices, which focuses more on rapid prototyping and usability, results in security often being an afterthought. Furthermore, one needs to remember that, unlike traditional computing systems, these devices operate under tight resource constraints. This makes IoT devices a lucrative target for exploitation by adversaries. This inherent flaw of IoT setups has manifested itself in the form of various distributed denial of service (DDoS) attacks that have achieved massive throughputs without the need for techniques such as amplification. Furthermore, once exploited, an IoT device can also function as a pivot point for adversaries to move laterally across the network and exploit other, potentially more valuable, systems and services. 
Finally, vulnerable IoT devices operating in industrial control systems and other critical infrastructure setups can cause sizable loss of property and in some cases even lives, a very sobering fact. In light of the above, this dissertation research presents several novel strategies for identifying known and zero-day attacks against IoT devices, as well as identifying infected IoT devices present inside a network along with some mitigation strategies. To this end, network telescopes are leveraged to generate Internet-scale notions of maliciousness in conjunction with signatures that can be used to identify such devices in a network. This strategy is further extended by developing a taxonomy-based methodology which is capable of categorizing unsolicited IoT behavior by leveraging machine learning (ML) techniques, such as ensemble learners, to identify similar threats in near-real time. Furthermore, to overcome the challenge of insufficient (malicious) training data within the IoT realm, a generative adversarial network (GAN) based framework is also developed to identify known and unseen attacks on IoT devices. Finally, a software defined networking (SDN) based solution is proposed to mitigate threats from unsolicited IoT devices.
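As a toy illustration of the ensemble-learner idea mentioned above (this is not the dissertation's actual pipeline), a majority vote over several weak detectors can flag unsolicited IoT flows; the three detectors and the flow labels below are invented:

```python
import numpy as np

def majority_vote(classifier_outputs):
    """Ensemble by majority vote over per-classifier 0/1 predictions:
    a flow is flagged malicious when more than half the detectors agree."""
    votes = np.asarray(classifier_outputs)        # (n_classifiers, n_samples)
    return (votes.sum(axis=0) * 2 > votes.shape[0]).astype(int)

# Three hypothetical weak detectors disagreeing on 5 network flows:
preds = [[1, 0, 1, 1, 0],
         [1, 0, 0, 1, 0],
         [0, 1, 1, 1, 0]]
print(majority_vote(preds).tolist())  # [1, 0, 1, 1, 0]
```

In practice each row would come from a trained base learner over flow features (packet sizes, inter-arrival times, destination ports), not from hard-coded lists.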
APA, Harvard, Vancouver, ISO, and other styles
17

Nilsson, Alexander, and Martin Thönners. "A Framework for Generative Product Design Powered by Deep Learning and Artificial Intelligence : Applied on Everyday Products." Thesis, Linköpings universitet, Maskinkonstruktion, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-149454.

Full text
Abstract:
In this master’s thesis we explore the idea of using artificial intelligence in the product design process and seek to develop a conceptual framework for how it can be incorporated to make user-customized products more accessible and affordable for everyone. We show how generative deep learning models such as Variational Auto Encoders and Generative Adversarial Networks can be implemented to generate design variations of windows and clarify the general implementation process along with insights from recent research in the field. The proposed framework consists of three parts: (1) a morphological matrix connecting several identified possibilities of implementation to specific parts of the product design process; (2) a general step-by-step process on how to incorporate generative deep learning; (3) a description of common challenges, strategies and solutions related to the implementation process. Together with the framework we also provide a system for automatic gathering and cleaning of image data as well as a dataset containing 4564 images of windows in a front view perspective.
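The Variational Auto Encoders mentioned above rest on two ingredients worth making concrete: the reparameterisation trick for sampling latents, and the KL term that regularises the latent space. A minimal NumPy sketch of just those two pieces follows (a toy, not the thesis's window-generation model; shapes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """VAE reparameterisation trick: z = mu + sigma * eps with eps ~ N(0, I),
    which makes sampling differentiable with respect to mu and log_var."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_divergence(mu, log_var):
    """KL(q(z|x) || N(0, I)) term of the VAE loss, per sample."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=1)

# Encoder outputs for 4 samples in a 2-D latent space (placeholder values).
mu = np.zeros((4, 2))
log_var = np.zeros((4, 2))
z = reparameterize(mu, log_var)
print(z.shape, kl_divergence(mu, log_var))  # (4, 2) [0. 0. 0. 0.]
```

With mu = 0 and log_var = 0 the posterior already matches the prior, hence the zero KL; a real encoder would output these per image, and a decoder network would map z back to a generated design.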
APA, Harvard, Vancouver, ISO, and other styles
18

Tomashenko, Natalia. "Speaker adaptation of deep neural network acoustic models using Gaussian mixture model framework in automatic speech recognition systems." Thesis, Le Mans, 2017. http://www.theses.fr/2017LEMA1040/document.

Full text
Abstract:
Differences between training and testing conditions may significantly degrade recognition accuracy in automatic speech recognition (ASR) systems. Adaptation is an efficient way to reduce the mismatch between models and data from a particular speaker or channel. There are two dominant types of acoustic models (AMs) used in ASR: Gaussian mixture models (GMMs) and deep neural networks (DNNs). The GMM hidden Markov model (GMM-HMM) approach has been one of the most common techniques in ASR systems for many decades. Speaker adaptation is very effective for these AMs and various adaptation techniques have been developed for them. On the other hand, DNN-HMM AMs have recently achieved big advances and outperformed GMM-HMM models for various ASR tasks. However, speaker adaptation is still very challenging for these AMs. Many adaptation algorithms that work well for GMM systems cannot be easily applied to DNNs because of the different nature of these models. The main purpose of this thesis is to develop a method for efficient transfer of adaptation algorithms from the GMM framework to DNN models. A novel approach for speaker adaptation of DNN AMs is proposed and investigated. The idea of this approach is based on using so-called GMM-derived features as input to a DNN. The proposed technique provides a general framework for transferring adaptation algorithms developed for GMMs to DNN adaptation. It is explored for various state-of-the-art ASR systems and is shown to be effective in comparison with other speaker adaptation techniques and complementary to them.
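The core idea of GMM-derived features can be sketched in a few lines of NumPy: each acoustic frame is mapped to its log-posteriors under a diagonal-covariance GMM, and those posteriors become the DNN's input. The GMM parameters below are random placeholders, not a trained model, and the feature dimensions are illustrative only:

```python
import numpy as np

def gmm_derived_features(frames, means, variances, weights):
    """Per-frame log-posteriors under a diagonal-covariance GMM,
    usable as DNN input features in the spirit of the GMM-derived approach."""
    # Squared Mahalanobis-style distances: (n_frames, n_components, dim)
    diff2 = (frames[:, None, :] - means[None]) ** 2 / variances[None]
    # log N(x | mu_k, diag(sigma_k^2)) for each frame and component
    log_norm = -0.5 * (np.log(2 * np.pi * variances).sum(-1) + diff2.sum(-1))
    log_joint = log_norm + np.log(weights)
    # Normalise to log-posteriors via log-sum-exp over components
    return log_joint - np.logaddexp.reduce(log_joint, axis=1, keepdims=True)

rng = np.random.default_rng(0)
frames = rng.normal(size=(100, 13))     # e.g. 13-dim MFCC frames
means = rng.normal(size=(8, 13))        # 8 hypothetical GMM components
variances = np.ones((8, 13))
weights = np.full(8, 1 / 8)

feats = gmm_derived_features(frames, means, variances, weights)
print(feats.shape)                      # (100, 8)
```

Speaker adaptation techniques developed for GMMs (e.g. adapting the means) would then change these derived features, and thereby the DNN's input, without retraining the DNN itself.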
APA, Harvard, Vancouver, ISO, and other styles
19

Griffiths, I. G. "Social theory and sustainability : deep ecology, eco-Marxism, Anthony Giddens and a new progressive policy framework for sustainable development." Thesis, Queen's University Belfast, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.431590.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Fuentes, Magdalena. "Multi-scale computational rhythm analysis : a framework for sections, downbeats, beats, and microtiming." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS404.

Full text
Abstract:
Computational rhythm analysis deals with extracting and processing meaningful rhythmic information from musical audio. It proves to be a highly complex task, since dealing with real audio recordings requires the ability to handle their acoustic and semantic complexity at multiple levels of representation. Existing methods for rhythmic analysis typically focus on one of those levels, failing to exploit music’s rich structure and compromising the musical consistency of automatic estimations. In this work, we propose novel approaches for leveraging multi-scale information for computational rhythm analysis. Our models account for the interrelated dependencies that musical audio naturally conveys, allowing the interplay between different time scales and accounting for music coherence across them. In particular, we conduct a systematic analysis of downbeat tracking systems, leading to convolutional-recurrent architectures that exploit short- and long-term acoustic modeling; we introduce a skip-chain conditional random field model for downbeat tracking designed to take advantage of music structure information (i.e. repetitions of music sections) in a unified framework; and we propose a language model for joint tracking of beats and micro-timing in Afro-Latin American music. Our methods are systematically evaluated on a diverse group of datasets, ranging from Western music to more culturally specific genres, and compared to state-of-the-art systems and simpler variations. The overall results show that our models for downbeat tracking perform on par with the state of the art while being more musically consistent. Moreover, our model for the joint estimation of beats and microtiming takes further steps towards more interpretable systems. The methods presented here offer novel and more holistic alternatives for computational rhythm analysis, towards a more comprehensive automatic analysis of music.
APA, Harvard, Vancouver, ISO, and other styles
21

Abumallouh, Arafat. "A Framework for Enhancing Speaker Age and Gender Classification by Using a New Feature Set and Deep Neural Network Architectures." Thesis, University of Bridgeport, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10689188.

Full text
Abstract:

Speaker age and gender classification is one of the most challenging problems in speech processing. With recent technological developments, identifying a speaker’s age and gender has become a necessity for speaker verification and identification systems, with applications such as identifying suspects in criminal cases, improving human-machine interaction, and adapting music for people waiting in a queue. Although many studies have focused on feature extraction and classifier design for improvement, classification accuracies are still not satisfactory. The key issue in identifying a speaker’s age and gender is to generate robust features and to design an in-depth classifier. Age and gender information is concealed in the speaker’s speech, which is affected by many factors such as background noise, speech content, and phonetic divergence.

In this work, different methods are proposed to enhance speaker age and gender classification using deep neural networks (DNNs) as both a feature extractor and a classifier. First, a model for generating new features from a DNN is proposed. The proposed method uses the Hidden Markov Model Toolkit (HTK) to find tied-state triphones for all utterances, which are used as labels for the output layer of the DNN. The DNN with a bottleneck layer is trained in an unsupervised manner to calculate the initial weights between layers, then trained and tuned in a supervised manner to generate transformed mel-frequency cepstral coefficients (T-MFCCs). Second, a shared-class-labels method is introduced among misclassified classes to regularize the weights in the DNN. Third, DNN-based speaker models using the SDC feature set are proposed. The speaker-aware model can capture the characteristics of the speaker’s age and gender more effectively than a model that represents a group of speakers. In addition, the AGender-Tune system is proposed to classify speaker age and gender by jointly fine-tuning two DNN models: the first model is pre-trained to classify the speaker’s age, and the second model is pre-trained to classify the speaker’s gender. Moreover, the new T-MFCC feature set is used as the input of a fusion model of two systems: the first system is the DNN-based class model and the second system is the DNN-based speaker model. Utilizing the T-MFCCs as input and fusing the final score with the score of a DNN-based class model enhanced the classification accuracies. Finally, the DNN-based speaker models are embedded into the AGender-Tune system to exploit the advantages of each method for better speaker age and gender classification.

The experimental results on a public challenging database showed the effectiveness of the proposed methods for enhancing the speaker age and gender classification and achieved the state of the art on this database.
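The bottleneck-feature idea behind the T-MFCCs can be sketched as follows (the weights, layer sizes, and label count below are invented, not the dissertation's trained network): frames are forwarded through the network and the narrow middle layer's activations are taken as the transformed features, discarding the classification head.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trained weights of a DNN with a narrow bottleneck layer.
W_in  = rng.normal(size=(39, 256))   # input: MFCC frame with context
W_bn  = rng.normal(size=(256, 40))   # 40-unit bottleneck layer
W_out = rng.normal(size=(40, 500))   # output: tied-state triphone targets

def bottleneck_features(mfcc_frames):
    """Forward frames up to the bottleneck layer and return its
    activations as the transformed feature set (T-MFCC-style)."""
    h = np.maximum(mfcc_frames @ W_in, 0.0)   # ReLU hidden layer
    return np.maximum(h @ W_bn, 0.0)          # bottleneck activations only

frames = rng.normal(size=(10, 39))
tmfcc = bottleneck_features(frames)
print(tmfcc.shape)  # (10, 40)
```

The output layer `W_out` exists only during training (to predict the triphone labels); at feature-extraction time the network is truncated at the bottleneck, as the function above does.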

APA, Harvard, Vancouver, ISO, and other styles
22

Poggi, Cavalletti Stefano. "Utilizzo di tecniche di Machine Learning per l'analisi di dataset in ambito sanitario." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/21743/.

Full text
Abstract:
Artificial intelligence is a very broad discipline with numerous fields of application. This work aims to show how Machine Learning and Deep Learning techniques can be used to analyze datasets in the healthcare domain. In particular, after an introduction to the topic and to the main learning algorithms, a framework characterized by a software-engineering-oriented approach is analyzed, which uses machine learning techniques to improve the efficiency of healthcare systems. The various phases of an experiment are then described, consisting of the analysis of a dataset and the subsequent creation of a classification model for predicting heart disease in patients, using artificial neural networks.
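As a hedged illustration of the kind of experiment described (a toy, not the thesis's actual dataset or model), a small neural network can be trained with plain gradient descent for binary disease prediction; the tabular data here is purely synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a tabular health dataset: 200 patients, 4 features,
# with the label correlated with the first feature (invented data).
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(float)

W1 = rng.normal(scale=0.5, size=(4, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(500):                            # full-batch gradient descent
    h = np.tanh(X @ W1 + b1)                    # hidden layer
    p = sigmoid(h @ W2 + b2).ravel()            # predicted probability
    g_out = (p - y)[:, None] / len(y)           # dL/d(logits), cross-entropy
    g_h = g_out @ W2.T * (1 - h**2)             # backprop through tanh
    W2 -= h.T @ g_out; b2 -= g_out.sum(0)
    W1 -= X.T @ g_h;   b1 -= g_h.sum(0)

acc = ((p > 0.5) == y).mean()
print(round(float(acc), 2))
```

A real experiment would replace the synthetic arrays with a cleaned clinical dataset, hold out a test split, and compare against simpler baselines.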
APA, Harvard, Vancouver, ISO, and other styles
23

Calle, Ortiz Eduardo R. "Robot-Enhanced ABA Therapy: Exploring Emerging Artificial Intelligence Embedded Systems in Socially Assistive Robots for the Treatment of Autism." Digital WPI, 2019. https://digitalcommons.wpi.edu/etd-theses/1349.

Full text
Abstract:
In the last decade, socially assistive robots have been used in therapeutic treatments for individuals diagnosed with Autism Spectrum Disorders (ASDs). Preliminary studies have demonstrated positive results using the Penguin for Autism Behavioral Intervention (PABI) developed by the AIM Lab at WPI to assist individuals diagnosed with ASDs in Applied Behavioral Analysis (ABA) therapy treatments. In recent years, power-efficient embedded AI computing devices have emerged as a powerful technology by reducing the complexity of the hardware platforms while providing support for parallel models of computation. This new hardware architecture seems to be an important step in the improvement of socially assistive robots in ABA therapy. In this thesis, we explore the use of a power-efficient embedded AI computing device and pre-trained deep learning models to improve PABI’s performance. Five main contributions are made in this work. First, a robot-enhanced ABA therapy framework is designed. Second, a multilayer pattern software architecture for a robot-enhanced ABA therapy framework is explored. Third, a multifactorial experiment is completed in order to benchmark the performance of three popular deep learning frameworks on the AI computing device. Experimental results demonstrate that some deep learning frameworks exploit the device's GPU for their parallel model of computation, while others rely on its multicore ARM CPU. Fourth, the robustness of state-of-the-art pre-trained deep learning models for feature extraction is analyzed and contrasted with the previous approach used by PABI. Experimental results indicate that pre-trained deep learning models outperform traditional approaches in some fields; however, combining different pre-trained models in a pipeline reduces accuracy. 
Fifth, a patient-tracking algorithm based on an identity verification approach is developed to improve the autonomy, usability, and interactions of patients with the robot. Experimental results show that the developed algorithm has the potential to perform as well as the previous algorithm used by PABI based on a deep learning classifier approach.
APA, Harvard, Vancouver, ISO, and other styles
24

White, Dan, and res cand@acu edu au. "Pedagogy – The Missing Link in Religious Education: Implications of brain-based learning theory for the development of a pedagogical framework for religious education." Australian Catholic University. School of Religious Education, 2004. http://dlibrary.acu.edu.au/digitaltheses/public/adt-acuvp60.29082005.

Full text
Abstract:
Over the past three decades, the development of religious education in Australia has been largely shaped by catechetical and curriculum approaches to teaching and learning. To date, little emphasis has been placed on the pedagogical dimension of religious education. The purpose of this research project is to explore the manner in which ‘brain-based’ learning theory contributes to pedagogical development in primary religious education. The project utilises an action research methodology combining concept mapping, the application of ‘brain-based’ teaching strategies and focus group dialogue with diocesan Religious Education Coordinators (RECs). The insights derived contribute to the formulation and validation of an appropriate pedagogical model for primary religious education, entitled the ‘DEEP Framework’. The model reflects an integration of insights from brain-based theory with nuances from the contemporary Australian religious education literature. The project identifies four key, interactive principles that are crucial to pedagogical development in religious education, namely: Discernment, Enrichment, Engagement and Participation. It also recognises a fifth principle, ‘an orientation towards wholeness’, as significant in combining the various pedagogical principles into a coherent whole. The DEEP framework enables teachers to more successfully select and evaluate appropriate, interconnecting teaching strategies within the religious education classroom. The framework underpins the pedagogical rationale of the recently developed Archdiocese of Hobart religious education program and forms the basis for the implementation of a coherent professional development program across the Archdiocese.
APA, Harvard, Vancouver, ISO, and other styles
25

Siddiqui, Mohammad Faridul Haque. "A Multi-modal Emotion Recognition Framework Through The Fusion Of Speech With Visible And Infrared Images." University of Toledo / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1556459232937498.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Amiruzzaman, Md. "Studying geospatial urban visual appearance and diversity to understand social phenomena." Kent State University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=kent1618904789316283.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Wara, Ummul. "A Framework for Fashion Data Gathering, Hierarchical-Annotation and Analysis for Social Media and Online Shop : TOOLKIT FOR DETAILED STYLE ANNOTATIONS FOR ENHANCED FASHION RECOMMENDATION." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-234285.

Full text
Abstract:
Due to the transformation of recommendation systems from content-based to hybrid cross-domain-based approaches, there is a need to prepare a social-network dataset that provides sufficient data as well as detail-level annotation from a predefined hierarchical vocabulary of clothing categories and attributes, while taking user interactions into account. However, existing fashion-based datasets lack either a hierarchical-category-based representation or the user interactions of a social network. This thesis presents two datasets: one from the photo-sharing platform Instagram, which gathers fashionista images with all possible user interactions, and another from the online shop Zalando with detailed clothing information. We present the design of a customized crawler that enables the user to crawl data based on category or attributes. Moreover, an efficient and collaborative web solution is designed and implemented to facilitate large-scale, hierarchical category-based, detail-level annotation of the Instagram data. By considering all user interactions, the developed solution provides a detail-level annotation facility that reflects the user’s preferences. The web solution is evaluated by the team as well as through the Amazon Mechanical Turk service. The annotated output from different users proves the usability of the web solution in terms of availability and clarity. In addition to data crawling and annotation web-solution development, this project analyzes the Instagram and Zalando data distributions in terms of clothing category, subcategory and pattern to provide meaningful insight into the data. The research community will benefit from these datasets when working with a richly annotated dataset that represents a social network and contains detailed clothing information.
APA, Harvard, Vancouver, ISO, and other styles
28

Mihalčin, Tomáš. "Hluboké neuronové sítě pro rozpoznání tváří ve videu." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2018. http://www.nusl.cz/ntk/nusl-385952.

Full text
Abstract:
This diploma thesis focuses on face recognition from video, specifically on how to aggregate feature vectors into a single discriminative vector, also called a template. It examines the issue of extremely angled faces with respect to verification accuracy, and compares templates built from vectors extracted from video frames with those built from vectors extracted from photos. The suggested hypothesis is tested with two deep convolutional neural networks, namely the well-known VGG-16 network model and a model called Fingera provided by the company Innovatrics. Several experiments were carried out in the course of the work, and their results confirm the success of the proposed technique. The ROC curve was chosen as the accuracy metric. The Caffe framework was used for working with neural networks.
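The aggregation step described above, turning per-frame feature vectors into one template, can be sketched as mean pooling followed by L2 normalisation and cosine-similarity verification. The embedding dimension, noise level, and 0.5 threshold below are illustrative assumptions, not the thesis's settings:

```python
import numpy as np

def make_template(frame_embeddings):
    """Aggregate per-frame feature vectors into one L2-normalised template."""
    t = np.mean(frame_embeddings, axis=0)
    return t / np.linalg.norm(t)

def cosine_verify(t1, t2, threshold=0.5):
    """Compare two templates; returns (similarity score, accept decision)."""
    score = float(t1 @ t2)
    return score, score >= threshold

rng = np.random.default_rng(1)
identity = rng.normal(size=128)                        # the "true" face embedding
frames = identity + 0.1 * rng.normal(size=(30, 128))   # noisy video-frame vectors
photo  = identity + 0.1 * rng.normal(size=(1, 128))    # single enrolment photo

score, same = cosine_verify(make_template(frames), make_template(photo))
print(same)
```

Averaging over many frames suppresses per-frame noise, which is why a video template can match a photo template even when individual frames are poor; weighting frames by quality or pose would be a natural refinement.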
APA, Harvard, Vancouver, ISO, and other styles
29

Singh, Amarjot. "ScatterNet hybrid frameworks for deep learning." Thesis, University of Cambridge, 2019. https://www.repository.cam.ac.uk/handle/1810/285997.

Full text
Abstract:
Image understanding is the task of interpreting images by effectively solving the individual tasks of object recognition and semantic image segmentation. An image understanding system must have the capacity to distinguish between similar looking image regions while being invariant in its response to regions that have been altered by the appearance-altering transformation. The fundamental challenge for any such system lies within this simultaneous requirement for both invariance and specificity. Many image understanding systems have been proposed that capture geometric properties such as shapes, textures, motion and 3D perspective projections using filtering, non-linear modulus, and pooling operations. Deep learning networks ignore these geometric considerations and compute descriptors having suitable invariance and stability to geometric transformations using (end-to-end) learned multi-layered network filters. These deep learning networks in recent years have come to dominate the previously separate fields of research in machine learning, computer vision, natural language understanding and speech recognition. Despite the success of these deep networks, there remains a fundamental lack of understanding in the design and optimization of these networks which makes it difficult to develop them. Also, training of these networks requires large labeled datasets which in numerous applications may not be available. In this dissertation, we propose the ScatterNet Hybrid Framework for Deep Learning that is inspired by the circuitry of the visual cortex. The framework uses a hand-crafted front-end, an unsupervised learning based middle-section, and a supervised back-end to rapidly learn hierarchical features from unlabelled data. Each layer in the proposed framework is automatically optimized to produce the desired computationally efficient architecture. The term `Hybrid' is coined because the framework uses both unsupervised as well as supervised learning. 
We propose two hand-crafted front-ends that can extract locally invariant features from the input signals. Next, two ScatterNet Hybrid Deep Learning (SHDL) networks (a generative and a deterministic) were introduced by combining the proposed front-ends with two unsupervised learning modules which learn hierarchical features. These hierarchical features were finally used by a supervised learning module to solve the task of either object recognition or semantic image segmentation. The proposed front-ends have also been shown to improve the performance and learning of current Deep Supervised Learning Networks (VGG, NIN, ResNet) with reduced computing overhead.
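The hand-crafted front-end idea can be illustrated with a toy 1-D analogue (not the dissertation's actual ScatterNet): fixed Gabor wavelets, a complex-modulus non-linearity, and average pooling yield locally invariant features without any learning. Filter sizes and frequencies below are arbitrary choices for the sketch:

```python
import numpy as np

def gabor_1d(size, freq):
    """A simple complex 1-D Gabor filter (hand-crafted, no learning)."""
    t = np.arange(size) - size // 2
    return np.exp(-t**2 / (2 * (size / 6) ** 2)) * np.exp(1j * 2 * np.pi * freq * t)

def scatter_front_end(signal, freqs=(0.05, 0.1, 0.2), pool=4):
    """Scattering-style front-end: filter with fixed wavelets, take the
    complex modulus (non-linearity), then average-pool for local invariance."""
    feats = []
    for f in freqs:
        m = np.abs(np.convolve(signal, gabor_1d(32, f), mode="same"))
        feats.append(m[: len(m) // pool * pool].reshape(-1, pool).mean(axis=1))
    return np.stack(feats)

x = np.sin(2 * np.pi * 0.1 * np.arange(256))
print(scatter_front_end(x).shape)  # (3, 64)
```

In the SHDL framework, features like these would feed the unsupervised middle section, with 2-D wavelets playing the role of this 1-D filter bank.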
APA, Harvard, Vancouver, ISO, and other styles
30

Mutloane, Mphati Ntebaleng. "Post-Apartheid Legislative Recognition of Traditional Leaders in South Africa: Weak Legal Pluralism in the Guise of Deep Legal Pluralism An analysis and critique of the legislative framework for the recognition of traditional leadership in South Africa under the 1996 Constitution." Master's thesis, University of Cape Town, 2015. http://hdl.handle.net/11427/15202.

Full text
Abstract:
This study explores the limitations of recognising traditional leadership as an institution through legislation. The legislative recognition of traditional leadership has serious implications for the processes of change within customary law from 'official' customary law to 'living' customary law. The advent of the 1996 Constitution and its emphasis on freedom, dignity, equality and accountability has opened up avenues for democratic political participation, which is changing the nature of customary law through a bottom-up process involving community members in the evolution of customary law. This process of evolution draws on various sources of law, including aspects of official customary law, community norms and procedures as well as the Constitution, particularly rights discourse. Deep legal pluralism has taken root through living customary law and is changing the way in which community members relate to traditional leaders by empowering rural citizens to demand accountability from traditional leaders. Legislative recognition of traditional leadership has been characterised as necessary for the restoration of the dignity of African justice systems. Though constitutionally sanctioned through the rule of law, the legislative framework recognising and regulating traditional leaders has had a negative impact on the processes of change and democratisation described above at grassroots level. Gaining an understanding of these consequences and how they have come about is at the heart of this study, especially given that they are unintended consequences of a government policy meant to improve the lives of rural citizens. Legal pluralism as a theory of law provides a critical lens through which the shortcomings of legislation recognising traditional leadership can be perceived, and probing questions can be asked about the effect of state law on non-state legal orders. 
However, in South Africa the situation is quite complicated given that the distinction between state law and non-state law with regard to African customary law is not always easy to make. The two systems have existed not only in juxtaposition for many years, but have bled into each other in layered ways. These layers have been moulded very deeply through the influence of various politico-legal orders in existence at particular times and their impact on social relations in South African society. As a theory of law, legal pluralism is used in this study to try and peel back a few of these layers, enabling observation and analysis of how the distribution of political power under South Africa's different politico-legal frameworks of governance, namely colonialism, apartheid, and constitutional democracy, has shaped traditional leadership; and the impact of these processes on the power relationships between traditional leaders and rural citizens. Law, mostly in the form of legislation, has been an important factor in the establishment, destruction, and re-establishment of these power relationships. This forms the basis of the study, at the end of which it is determined that although legislation is necessary for the recognition and regulation of traditional leadership, as a requirement of the rule of law, the current and proposed legislative framework for traditional leadership is an inappropriate framework. It centralises legislative, judicial and executive power in an unelected arm of government, namely traditional leaders, which is unconstitutional on the basis of the separation of powers principle, which is a founding value of South Africa's constitutional democratic dispensation.
APA, Harvard, Vancouver, ISO, and other styles
31

Silvestri, Mattia. "Deep Reinforcement Learning for Combinatorial Optimization: Theoretical Frameworks and Experimental Developments." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019.

Find full text
Abstract:
In this thesis work, Deep Q-learning, a Reinforcement Learning algorithm, was used to solve the Product Allocation Problem in a warehouse, a well-known NP-hard problem. The results were compared with those obtained by a simple heuristic, by a meta-heuristic from Operations Research, and by other approaches based on Deep Reinforcement Learning. In the tests performed, the meta-heuristic found the best results, but the approach proposed in this thesis obtained solutions very close to it, proving to be better than the other methods tested.
APA, Harvard, Vancouver, ISO, and other styles
32

Janurberg, Norman, and Christian Luksitch. "Exploring Deep Learning Frameworks for Multiclass Segmentation of 4D Cardiac Computed Tomography." Thesis, Linköpings universitet, Institutionen för hälsa, medicin och vård, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-178648.

Full text
Abstract:
By combining computed tomography data with computational fluid dynamics, the cardiac hemodynamics of a patient can be assessed for diagnosis and treatment of cardiac disease. The advantage of computed tomography over other medical imaging modalities is its capability of producing detailed, high-resolution images containing geometric measurements relevant to the simulation of cardiac blood flow. To extract these geometries from computed tomography data, segmentation of 4D cardiac computed tomography (CT) data has been performed using two deep learning frameworks that combine methods which have previously shown success in other research. The aim of this thesis work was to develop and evaluate a deep-learning-based technique to segment the left ventricle, ascending aorta, left atrium, left atrial appendage and the proximal pulmonary vein inlets. Two frameworks have been studied: both utilise a 2D multi-axis implementation to segment a single CT volume by examining it in three perpendicular planes, while one of them also employs a 3D binary model to extract and crop the foreground from the surrounding background. Both frameworks determine a segmentation prediction by reconstructing three volumes after 2D segmentation in each plane and combining their probabilities in an ensemble for a 3D output. The results of both frameworks show similarities in their performance and ability to properly segment 3D CT data. While the framework that examines 2D slices of full-size volumes produces an overall higher Dice score, it is less successful than the cropping framework at segmenting the smaller left atrial appendage. Since the full-size 2D slices also contain background information, this is believed to be the main reason for the better overall segmentation performance, while the cropping framework provides a higher proportion of each foreground label, making it easier for the model to identify smaller structures. 
Both frameworks show success for use in 3D cardiac CT segmentation, and with further research and tuning of each network, even better results can be achieved.
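The plane-wise ensemble described in this abstract can be sketched in a few lines: segment a volume slice-by-slice along each of three perpendicular axes, reconstruct three probability volumes, and average them. Note that `segment_slice_2d` below is a hypothetical stand-in for a trained 2D network, not the thesis's actual model.

```python
import numpy as np

def segment_slice_2d(slice_2d):
    # Hypothetical stand-in for a trained 2D segmentation network:
    # returns per-pixel foreground probabilities via a simple threshold.
    return (slice_2d > slice_2d.mean()).astype(float)

def multi_axis_ensemble(volume):
    # Segment along each of the three perpendicular axes, reconstruct
    # three probability volumes, and average them into one 3D prediction.
    probs = []
    for axis in range(3):
        moved = np.moveaxis(volume, axis, 0)             # slices along `axis`
        seg = np.stack([segment_slice_2d(s) for s in moved])
        probs.append(np.moveaxis(seg, 0, axis))          # restore original layout
    return np.mean(probs, axis=0)

volume = np.random.rand(8, 8, 8)
prediction = multi_axis_ensemble(volume)
assert prediction.shape == volume.shape
```

Averaging the three reconstructions is only one way to combine the planes; a product or learned weighting would slot into the same structure.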
APA, Harvard, Vancouver, ISO, and other styles
33

Airola, Rasmus, and Kristoffer Hager. "Image Classification, Deep Learning and Convolutional Neural Networks : A Comparative Study of Machine Learning Frameworks." Thesis, Karlstads universitet, Institutionen för matematik och datavetenskap, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-55129.

Full text
Abstract:
The use of machine learning and specifically neural networks is a growing trend in software development, and has grown immensely in the last couple of years in light of an increasing need to handle big data and large information flows. Machine learning has a broad area of application, such as human-computer interaction, predicting stock prices, real-time translation, and self-driving vehicles. Large companies such as Microsoft and Google have already implemented machine learning in some of their commercial products, such as their search engines and their intelligent personal assistants Cortana and Google Assistant. The main goal of this project was to evaluate the two deep learning frameworks Google TensorFlow and Microsoft CNTK, primarily based on their performance in neural network training time. We chose to use the third-party API Keras instead of TensorFlow's own API when working with TensorFlow. CNTK was found to perform better with regard to training time compared to TensorFlow with Keras as frontend. Even though CNTK performed better on the benchmarking tests, we found Keras with TensorFlow as backend to be much easier and more intuitive to work with. In addition, CNTK's underlying implementation of the machine learning algorithms and functions differs from that of the literature and of other frameworks. Therefore, if we had to choose a framework to continue working in, we would choose Keras with TensorFlow as backend, even though its performance is lower than CNTK's.
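Training-time comparisons of the kind this thesis performs can be organized with a small, framework-agnostic harness; the two step functions below are toy stand-ins (in practice they would wrap, e.g., a Keras `train_on_batch` call and its CNTK equivalent).

```python
import time

def steps_per_second(train_step, n_steps=500):
    # Time n_steps calls of a framework's training step and report throughput.
    start = time.perf_counter()
    for _ in range(n_steps):
        train_step()
    return n_steps / (time.perf_counter() - start)

# Toy stand-ins for the per-step work of two frameworks.
fast_step = lambda: sum(i * i for i in range(100))
slow_step = lambda: sum(i * i for i in range(1000))

assert steps_per_second(fast_step) > steps_per_second(slow_step)
```

Measuring throughput rather than wall-clock time per run makes results comparable across batch counts; for real frameworks one would also discard a warm-up step before timing.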
APA, Harvard, Vancouver, ISO, and other styles
34

Alohaly, Manar Fathi. "Frameworks for Attribute-Based Access Control (ABAC) Policy Engineering." Thesis, University of North Texas, 2020. https://digital.library.unt.edu/ark:/67531/metadc1707241/.

Full text
Abstract:
In this dissertation we propose semi-automated top-down policy engineering approaches for attribute-based access control (ABAC) development. Further, we propose a hybrid ABAC policy engineering approach to combine the benefits and address the shortcomings of both top-down and bottom-up approaches. In particular, we propose three frameworks: (i) ABAC attributes extraction, (ii) ABAC constraints extraction, and (iii) hybrid ABAC policy engineering. The attributes-extraction framework comprises five modules that operate together to extract attribute values from natural language access control policies (NLACPs); map the extracted values to attribute keys; and assign each key-value pair to an appropriate entity. For the ABAC constraints-extraction framework, we design a two-phase process to extract ABAC constraints from NLACPs. The process begins with the identification phase, which focuses on identifying the right boundary of constraint expressions. Next is the normalization phase, which aims at extracting the actual elements that pose a constraint. Our hybrid ABAC policy engineering framework, in turn, consists of five modules. This framework combines top-down and bottom-up policy engineering techniques to overcome the shortcomings of both approaches and to generate policies that are more intuitive and relevant to actual organization policies. With this, we believe that our work takes essential steps towards a semi-automated ABAC policy development experience.
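As a toy illustration of the attributes-extraction idea (a rule-based sketch, not the framework's actual five-module NLP pipeline), values found in a natural-language policy sentence can be mapped to attribute keys and assigned to entities; the vocabulary below is hypothetical.

```python
import re

# Hypothetical attribute vocabulary: value pattern -> (entity, attribute key).
VOCAB = {
    r"\b(nurse|doctor|clerk)\b": ("subject", "role"),
    r"\b(medical record|invoice)\b": ("object", "type"),
    r"\b(read|write|delete)\b": ("action", "name"),
}

def extract_attributes(policy_text):
    # Return (entity, key, value) triples found in the policy text.
    triples = []
    for pattern, (entity, key) in VOCAB.items():
        for match in re.finditer(pattern, policy_text, re.IGNORECASE):
            triples.append((entity, key, match.group(1).lower()))
    return triples

policy = "A nurse may read the medical record of her patients."
print(extract_attributes(policy))
# [('subject', 'role', 'nurse'), ('object', 'type', 'medical record'), ('action', 'name', 'read')]
```

A real pipeline replaces the fixed vocabulary with learned extraction, but the key-value mapping and entity assignment keep the same shape.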
APA, Harvard, Vancouver, ISO, and other styles
35

Awan, Ammar Ahmad. "Co-designing Communication Middleware and Deep Learning Frameworks for High-Performance DNN Training on HPC Systems." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1587433770960088.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Alabdulrahman, Rabaa. "Towards Personalized Recommendation Systems: Domain-Driven Machine Learning Techniques and Frameworks." Thesis, Université d'Ottawa / University of Ottawa, 2020. http://hdl.handle.net/10393/41012.

Full text
Abstract:
Recommendation systems have been widely utilized in e-commerce settings to aid users through their shopping experiences. The principal advantage of these systems is their ability to narrow down the purchase options in addition to marketing items to customers. However, a number of challenges remain, notably those related to obtaining a clearer understanding of users, their profiles, and their preferences in terms of purchased items. Specifically, recommender systems based on collaborative filtering recommend items that have been rated by other users with preferences similar to those of the targeted users. Intuitively, the more information and ratings collected about the user, the more accurate are the recommendations such systems suggest. In a typical recommender systems database, the data are sparse. Sparsity occurs when the number of ratings obtained by the users is much lower than the number required to build a prediction model. This usually occurs because of the users' reluctance to share their reviews, either due to privacy issues or an unwillingness to make the extra effort. Grey-sheep users pose another challenge. These are users who share their reviews and ratings yet disagree with the majority in the system. The current state-of-the-art typically treats these users as outliers and removes them from the system. Our goal is to determine whether keeping these users in the system may benefit learning. Thirdly, the cold-start problem, another area of active research, refers to the scenario whereby a new item or user enters the system. In this case, the system will have no information about the new user or item, making it problematic to find a correlation with others in the system. This thesis addresses the three above-mentioned research challenges through the development of machine learning methods for use within the recommendation system setting. 
First, we focus on the label and data sparsity though the development of the Hybrid Cluster analysis and Classification learning (HCC-Learn) framework, combining supervised and unsupervised learning methods. We show that combining classification algorithms such as k-nearest neighbors and ensembles based on feature subspaces with cluster analysis algorithms such as expectation maximization, hierarchical clustering, canopy, k-means, and cascade k-means methods, generally produces high-quality results when applied to benchmark datasets. That is, cluster analysis clearly benefits the learning process, leading to high predictive accuracies for existing users. Second, to address the cold-start problem, we present the Popular Users Personalized Predictions (PUPP-DA) framework. This framework combines cluster analysis and active learning, or so-called user-in-the-loop, to assign new customers to the most appropriate groups in our framework. Based on our findings from the HCC-Learn framework, we employ the expectation maximization soft clustering technique to create our user segmentations in the PUPP-DA framework, and we further incorporate Convolutional Neural Networks into our design. Our results show the benefits of user segmentation based on soft clustering and the use of active learning to improve predictions for new users. Furthermore, our findings show that focusing on frequent or popular users clearly improves classification accuracy. In addition, we demonstrate that deep learning outperforms machine learning techniques, notably resulting in more accurate predictions for individual users. Thirdly, we address the grey-sheep problem in our Grey-sheep One-class Recommendations (GSOR) framework. The existence of grey-sheep users in the system results in a class imbalance whereby the majority of users will belong to one class and a small portion (grey-sheep users) will fall into the minority class. 
In this framework, we use one-class classification to provide a class structure for the training examples. As a pre-assessment stage, we assess the characteristics of grey-sheep users and study their impact on model accuracy. Next, as mentioned above, we utilize one-class learning, whereby we focus on the majority class to first learn the decision boundary in order to generate prediction lists for the grey-sheep (minority class). Our results indicate that including grey-sheep users in the training step, as opposed to treating them as outliers and removing them prior to learning, has a positive impact on the general predictive accuracy.
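The one-class step can be illustrated with a minimal distance-based novelty detector on synthetic data: fit a boundary on majority-class users only, then score grey-sheep users against it. This is a simplified stand-in for the one-class classifiers used in the GSOR framework, not the thesis's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Majority-class users: rating vectors that agree with the crowd (synthetic).
majority = rng.normal(loc=0.0, scale=1.0, size=(500, 5))
# Grey-sheep users: systematically different rating behaviour (synthetic).
grey_sheep = rng.normal(loc=4.0, scale=1.0, size=(20, 5))

# Minimal one-class model: learn the majority's centroid and a radius
# covering 95% of the training points; anything outside is "novel".
centroid = majority.mean(axis=0)
train_dist = np.linalg.norm(majority - centroid, axis=1)
radius = np.quantile(train_dist, 0.95)

def is_majority(users):
    return np.linalg.norm(users - centroid, axis=1) <= radius

print(is_majority(majority).mean())    # most majority users fall inside the boundary
print(is_majority(grey_sheep).mean())  # most grey-sheep fall outside
```

The decision boundary is learned from the majority class alone, matching the abstract's point that prediction lists for the minority (grey-sheep) class are generated afterwards rather than by discarding those users.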
APA, Harvard, Vancouver, ISO, and other styles
37

Larsen, Randy T. "A Conceptual Framework for Understanding Effects of Wildlife Water Developments in the Western United States." DigitalCommons@USU, 2008. https://digitalcommons.usu.edu/etd/189.

Full text
Abstract:
Free water can be a limiting factor to wildlife in arid regions of the world. In the western United States, management agencies have installed numerous, expensive wildlife water developments (e.g. catchments, guzzlers, wells) to: 1) increase the distribution or density of target species, 2) influence animal movements, and 3) mitigate for the loss of available free water. Despite over 50 years as an active management practice, water developments have become controversial for several species. We lack an integrated understanding of the ways free water influences animal populations. In particular, we have not meshed understanding of evolutionary adaptations that reduce the need for free water and behavioral constraints that may limit use of otherwise available free water with management practices. I propose a conceptual framework for understanding more generally how, when, and where wildlife water developments are likely to benefit wildlife species. I argue that the following five elements are fundamental to an integrated understanding: 1) consideration of the variable nature in time and space of available free water, 2) location and availability of pre-formed and/or metabolic water, 3) seasonal temperature and precipitation patterns that influence the physiological need for water, 4) behavioral constraints that limit use of otherwise available free water, and 5) proper spacing of water sources for target species. I developed this framework from work done primarily with chukars (Alectoris chukar). I also report supporting evidence from research with mule deer (Odocoileus hemionus). Chukars demonstrated a spatial response to available free water when estimates of dietary moisture content were < 40%. Mule deer photo counts were reduced at water sources with small-perimeter fencing, suggesting increased predation risk caused mule deer to behaviorally avoid use of otherwise available free water. 
When all five framework elements are considered, I found strong evidence that wildlife water developments have benefited some chukar populations. Historic chukar counts suggested a population benefit following installation of wildlife water developments. Experimental removal of access to free water caused increased movements and decreased survival of adult chukars.
APA, Harvard, Vancouver, ISO, and other styles
38

Sommerlot, Andrew Richard. "Coupling Physical and Machine Learning Models with High Resolution Information Transfer and Rapid Update Frameworks for Environmental Applications." Diss., Virginia Tech, 2017. http://hdl.handle.net/10919/89893.

Full text
Abstract:
Few current modeling tools are designed to predict short-term, high-risk runoff from critical source areas (CSAs) in watersheds, which are significant sources of non-point source (NPS) pollution. This study couples the Soil and Water Assessment Tool-Variable Source Area (SWAT-VSA) model with the Climate Forecast System Reanalysis (CFSR) model and the Global Forecast System (GFS) model short-term weather forecast, to develop a CSA prediction tool designed to assist producers, landowners, and planners in identifying high-risk areas generating storm runoff and pollution. Short-term predictions for streamflow, runoff probability, and soil moisture levels were estimated in the South Fork of the Shenandoah river watershed in Virginia. In order to give land managers access to the CSA predictions, a free and open-source web application was developed. The forecast system consists of three primary components: (1) the model, which preprocesses the necessary hydrologic forcings, runs the watershed model, and outputs spatially distributed VSA forecasts; (2) a data management structure, which converts high resolution rasters into overlay web map tiles; and (3) the user interface component, a web page that allows the user to interact with the processed output. The resulting framework satisfied most design requirements with free and open source software and scored better than similar tools in usability metrics. One potential problem is that the CSA model, utilizing physically based modeling techniques, requires significant computational time to execute and process. Thus, as an alternative, a deep learning (DL) model was developed and trained on the process-based model output. The DL model resulted in a 9% increase in predictive power compared to the physically based model and a ten-fold decrease in run time. 
Additionally, DL interpretation methods applicable beyond this study are described, including hidden layer visualization and equation extraction describing a quantifiable amount of variance in hidden layer values. Finally, a large-scale analysis of soil phosphorus (P) levels was conducted in the Chesapeake Bay watershed, a current location of several short-term forecast tools. Based on Bayesian inference methodologies, 31 years of soil P history at the county scale were estimated, with the associated uncertainty for each estimate. These data will assist in the planning and implementation of short-term forecast tools with P management goals. The short-term modeling and communication tools developed in this work contribute to filling a gap in scientific tools aimed at improving water quality by informing land managers' decisions.
PHD
APA, Harvard, Vancouver, ISO, and other styles
39

Teng, Sin Yong. "Intelligent Energy-Savings and Process Improvement Strategies in Energy-Intensive Industries." Doctoral thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2020. http://www.nusl.cz/ntk/nusl-433427.

Full text
Abstract:
As new technologies for energy-intensive industries continue to develop, existing plants gradually fall behind in efficiency and productivity. Fierce market competition and environmental legislation force these traditional plants towards shutdown and decommissioning. Process improvement and retrofit projects are essential to maintaining the operational performance of these plants. Current approaches to process improvement are mainly: process integration, process optimization, and process intensification. These areas generally rely on mathematical optimization, the practitioner's experience, and operational heuristics, and they serve as the foundation for process improvement. However, their performance can be further enhanced by modern computational intelligence. The purpose of this work is therefore to apply advanced artificial intelligence and machine learning techniques to process improvement in energy-intensive industrial processes. The work addresses this problem by simulating industrial systems and contributes the following: (i) application of machine learning techniques, including one-shot learning and neuro-evolution, for data-driven modelling and optimization of individual units; (ii) application of dimensionality reduction (e.g., principal component analysis, autoencoders) for multi-objective optimization of multi-unit processes; (iii) design of a new tool, bottleneck tree analysis (BOTA), for analysing problematic parts of a system so that they can be removed, together with an extension that handles multi-dimensional problems via a data-driven approach; (iv) demonstration of the effectiveness of Monte Carlo simulation, neural networks, and decision trees for decision-making when integrating new process technology into existing processes; (v) comparison of the Hierarchical Temporal Memory (HTM) technique and dual optimization with several predictive tools for supporting real-time operations management; 
(vi) implementation of an artificial neural network within an interface for the conventional process graph (P-graph); and (vii) highlighting the future of artificial intelligence and process engineering in biosystems through a commercially oriented multi-omics paradigm.
APA, Harvard, Vancouver, ISO, and other styles
40

Stinson, Derek L. "Deep Learning with Go." Thesis, 2020. http://hdl.handle.net/1805/22729.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
Current research in deep learning is primarily focused on using Python as a support language. Go, an emerging language with many benefits, including native support for concurrency, has seen a rise in adoption over the past few years. However, this language is not widely used to develop learning models due to the lack of supporting libraries and frameworks for model development. In this thesis, the use of Go for the development of neural network models in general, and convolutional neural networks in particular, is explored. The proposed study is based on a Go-CUDA implementation of neural network models called GoCuNets. This implementation is then compared to a Go-CPU deep learning implementation that takes advantage of Go's built-in concurrency, called ConvNetGo. A comparison of these two implementations shows a significant performance gain when using GoCuNets compared to ConvNetGo.
APA, Harvard, Vancouver, ISO, and other styles
41

CHIANG, MING-HAN, and 江明翰. "Design of Incremental Deep Learning Framework for Industrial IoT." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/8p29xb.

Full text
Abstract:
Master's
Feng Chia University
Department of Information Engineering
107
In recent years, the Internet of Things and AI have driven the rapid development of Industry 4.0, and the digitalization and smart production of factories have become an unstoppable trend. However, as the number of Internet-connected devices grows sharply, the volume of data transmitted simultaneously for collection becomes very large, and database design and storage requirements become important challenges as well. IoT transport typically uses MQTT to process messages; when the broker that forwards messages accepts too many connections at once, data loss and instability may occur. How to configure a stable IoT infrastructure is therefore very important. To solve these problems, this thesis proposes an incremental learning framework for the Industrial Internet of Things that allows factories to implement smart manufacturing in Industry 4.0 on top of the architecture, together with a stable MQTT connection method and a data compression method. In the proposed system, MQTT brokers act as bridges to forward messages, sharing the load otherwise carried by a single broker. We also designed a compression mechanism for sensor data, so that correct data can be obtained while transmitting less of it, and a retention mechanism that buffers data during sudden disconnections to avoid data loss while the connection is down. Finally, we deployed the proposed architecture in a partner precision-machinery factory in Taichung to collect sensor data; the operation of the system and the problems encountered are analyzed and discussed in the thesis.
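The compression and retain-on-disconnect mechanisms described in this abstract can be sketched without a live broker; `publish` below is a hypothetical stand-in for an MQTT client call (e.g., paho-mqtt's `Client.publish`), not the thesis's implementation.

```python
import json
import zlib
from collections import deque

buffer = deque()  # retains payloads while the broker is unreachable

def compress(reading: dict) -> bytes:
    # Serialise a sensor reading and compress it before transmission.
    return zlib.compress(json.dumps(reading).encode("utf-8"))

def publish(topic: str, payload: bytes, connected: bool) -> bool:
    # Hypothetical stand-in for an MQTT publish call. When the connection
    # is down, keep the payload locally so no data is lost.
    if not connected:
        buffer.append((topic, payload))
        return False
    # a real client would call e.g. client.publish(topic, payload) here
    return True

def flush():
    # On reconnect, re-send everything retained during the outage.
    while buffer:
        topic, payload = buffer.popleft()
        publish(topic, payload, connected=True)

reading = {"sensor": "spindle_temp", "value": 61.3}
payload = compress(reading)
assert json.loads(zlib.decompress(payload)) == reading  # lossless round-trip
publish("factory/line1", payload, connected=False)      # outage: retained
assert len(buffer) == 1
flush()                                                  # reconnect: drained
assert len(buffer) == 0
```

In a real deployment the buffer would be persisted to disk and flushed from the client's on-connect callback, but the retain-then-flush flow is the same.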
APA, Harvard, Vancouver, ISO, and other styles
42

(8812109), Derek Leigh Stinson. "Deep Learning with Go." Thesis, 2020.

Find full text
Abstract:
Current research in deep learning is primarily focused on using Python as a support language. Go, an emerging language with many benefits, including native support for concurrency, has seen a rise in adoption over the past few years. However, this language is not widely used to develop learning models due to the lack of supporting libraries and frameworks for model development. In this thesis, the use of Go for the development of neural network models in general, and convolutional neural networks in particular, is explored. The proposed study is based on a Go-CUDA implementation of neural network models called GoCuNets. This implementation is then compared to a Go-CPU deep learning implementation that takes advantage of Go's built-in concurrency, called ConvNetGo. A comparison of these two implementations shows a significant performance gain when using GoCuNets compared to ConvNetGo.
APA, Harvard, Vancouver, ISO, and other styles
43

KUO, CHAN-FU, and 郭展甫. "The Implementation of Caffe Deep Learning Framework Using Intel Xeon Phi." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/2mv99v.

Full text
Abstract:
Master's
Tunghai University
Department of Information Engineering
105
In recent years, the increase in processor computing power has driven substantial growth in many scientific applications, such as weather forecasting, financial market analysis, and medical technology. Deep learning can help computers understand abstract information such as images, text, and sound; through neural networks, computers can acquire observation and learning abilities comparable to those of human beings, and in some cases even surpass them. In this thesis, we port the well-known deep learning framework Caffe to the Xeon Phi through optimizations including vectorization, OpenMP parallel processing, and the Message Passing Interface (MPI), in order to improve the usability of the deep learning framework. Intel recently launched the second generation of the Xeon Phi, which, in addition to retaining the first-generation coprocessor products, adds a host processor with up to 72 cores whose computing power cannot be ignored. We evaluate the performance of the deep learning framework across a variety of Intel Xeon platforms, including the accuracy achieved at different numbers of training iterations and the training time on different machines before and after optimization, and we run multi-node tests on two Xeon Phis, providing researchers with a white paper for reference.
APA, Harvard, Vancouver, ISO, and other styles
44

Mu-HsuanCheng and 鄭沐軒. "On Designing the Adaptive Computation Framework for Distributed Deep Learning Models." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/6krb5t.

Full text
Abstract:
Master's
National Cheng Kung University
Department of Computer Science and Information Engineering
106
We propose a computation framework that facilitates the inference of a distributed deep learning model to be performed collaboratively by the devices in a distributed computing hierarchy. For example, in Internet-of-Things (IoT) applications, the three-tier computing hierarchy consists of end devices, gateways, and server(s), and the model inference could be done adaptively by one or more computing tiers from the bottom to the top of the hierarchy. By allowing the trained models to run on actually distributed systems, which has not been done in previous work, the proposed framework enables the co-design of distributed deep learning models and systems. In particular, in addition to the model accuracy, which is the major concern for model designers, we found that as various types of computing platforms are present in IoT application fields, measuring the delivered performance of the developed models on the actual systems is also critical to making sure that model inference does not cost too much time on the end devices. Furthermore, the measured performance of the model (and the system) would be a good input to the model/system design in the next design cycle, e.g., to determine a better mapping of the network layers onto the hierarchy tiers. On top of the framework, we have built a surveillance system for detecting objects as a case study. In our experiments, we evaluate the delivered performance of model designs on the two-tier computing hierarchy, show the advantages of adaptive inference computation, analyze the system capacity under the given workloads, and discuss the impact of the model parameter setting on the system capacity. We believe that the enablement of this performance evaluation expedites the design process of distributed deep learning models/systems.
APA, Harvard, Vancouver, ISO, and other styles
45

KUO, JUAN-YU, and 郭冠佑. "Human Behavior Recognition Based on a Multi-view Framework Using Deep Learning." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/3as4qz.

Full text
Abstract:
Master's
National Chung Cheng University
Graduate Institute of Information Engineering
106
With the proliferation of deep learning techniques, a significant number of applications related to home caring systems have emerged recently. In particular, detecting abnormal events in a smart home environment has become more mature. According to recent statistics, the most common injury in the elderly population is falling. Existing approaches have mostly conducted intelligent video analysis on a single camera for the two cases of fall and non-fall. In this paper, we adopt deep learning techniques including convolutional neural networks (CNN) and long short-term memory (LSTM) to construct deep networks for human behavior recognition in a multi-view framework. We set up our experimental environment as a normal residence to collect a large amount of data, and falling is one of the actions included in our dataset. Rather than only distinguishing fall from non-fall, our model can identify six human behaviors in total, namely walking, falling, lying down, climbing up, bending, and sitting down. Additionally, we use two cameras as our sensors to efficiently overcome the problem of blind angles and improve performance based on the multi-view setting. After performing a series of image preprocessing steps on the raw data, we obtain the human silhouette images as the input to our training model. In addition, because real-world datasets are complicated to analyze and understand, labeling data is time-consuming and costly. Therefore, we present image clustering based on a stacked convolutional auto-encoder (SCAE), which applies the clustering labels in place of manual labels for auto-labeling. Finally, the experimental results demonstrate the performance and novelty of our proposed framework.
APA, Harvard, Vancouver, ISO, and other styles
46

Al-Waisy, Alaa S., Rami S. R. Qahwaji, Stanley S. Ipson, and Shumoos Al-Fahdawi. "A multimodal deep learning framework using local feature representations for face recognition." 2017. http://hdl.handle.net/10454/13122.

Full text
Abstract:
The most recent face recognition systems are mainly dependent on feature representations obtained using either local handcrafted descriptors, such as local binary patterns (LBP), or a deep learning approach, such as a deep belief network (DBN). However, the former usually suffers from the wide variations in face images, while the latter usually discards the local facial features, which are proven to be important for face recognition. In this paper, a novel framework based on merging the advantages of the local handcrafted feature descriptors with the DBN is proposed to address the face recognition problem in unconstrained conditions. Firstly, a novel multimodal local feature extraction approach based on merging the advantages of the Curvelet transform with the Fractal dimension is proposed and termed the Curvelet–Fractal approach. The main motivation of this approach is that the Curvelet transform, a new anisotropic and multidirectional transform, can efficiently represent the main structure of the face (e.g., edges and curves), while the Fractal dimension is one of the most powerful texture descriptors for face images. Secondly, a novel framework is proposed, termed the multimodal deep face recognition (MDFR) framework, to add feature representations by training a DBN on top of the local feature representations instead of the pixel intensity representations. We demonstrate that the representations acquired by the proposed MDFR framework are complementary to those acquired by the Curvelet–Fractal approach. Finally, the performance of the proposed approaches has been evaluated by conducting a number of extensive experiments on four large-scale face datasets: the SDUMLA-HMT, FERET, CAS-PEAL-R1, and LFW databases. The results obtained from the proposed approaches outperform other state-of-the-art approaches (e.g., LBP, DBN, WPCA), achieving new state-of-the-art results on all the employed datasets.
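The fractal-dimension texture descriptor mentioned above is most commonly estimated by box counting: cover the binary image with boxes of decreasing size and fit the slope of log(occupied boxes) against log(1/box size). The sketch below shows that generic estimator (not the paper's exact feature pipeline); a filled region comes out with dimension 2 and a thin line with dimension 1.

```python
import numpy as np

def box_counting_dimension(img, sizes=(2, 4, 8, 16)):
    """Estimate the fractal (box-counting) dimension of a binary image."""
    counts = []
    h, w = img.shape
    for s in sizes:
        # partition the image into s x s boxes and count the non-empty ones
        boxes = img[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s)
        counts.append((boxes.sum(axis=(1, 3)) > 0).sum())
    # slope of log(count) vs log(1/size) is the dimension estimate
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

square = np.ones((64, 64), dtype=np.uint8)  # a filled square is 2-dimensional
print(round(box_counting_dimension(square), 2))
```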
APA, Harvard, Vancouver, ISO, and other styles
47

Huang, Yi-Jie, and 黃奕傑. "A Deep Reinforcement Learning Based Logic Synthesis Framework for Further Area Optimization." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/a3934e.

Full text
Abstract:
Master's thesis
Feng Chia University
Department of Electronic Engineering
107 (2018)
It is well known in the industry that the Synopsys Design Compiler can achieve better area reduction while maintaining given design constraints by carefully selecting a smaller target gate-level cell library from the primitive one at the synthesis stage. The selection process is extremely time-consuming and inefficient, so an automatic selection procedure is in demand. In this paper, we propose a deep reinforcement learning based logic synthesis framework to achieve further area reduction while maintaining given design constraints, and we apply transfer learning based on the same framework to improve the optimization quality. Experimental results show that we can obtain up to 25.61% area reduction with transfer learning.
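The core formulation here is selecting a sub-library that maximizes a reward combining area savings with constraint satisfaction. The toy sketch below reduces this to a one-step trial-and-error search with hypothetical cell names and a stand-in reward; the thesis itself uses a deep RL agent driven by real synthesis runs, which this does not reproduce.

```python
import random

# Hypothetical cell names; the reward is a toy stand-in for a synthesis run.
CELLS = ["AND2", "OR2", "NAND3", "XOR2", "AOI21"]

def reward(subset):
    if "NAND3" not in subset:          # assume NAND3 is essential here
        return -1.0                    # design constraints violated
    return 1.0 - len(subset) / len(CELLS)  # more reward for a smaller library

def search_sub_library(episodes=500, seed=1):
    """One-step trial-and-error search over candidate sub-libraries."""
    rng = random.Random(seed)
    best, best_r = frozenset(CELLS), reward(frozenset(CELLS))
    for _ in range(episodes):
        candidate = frozenset(c for c in CELLS if rng.random() < 0.5)
        r = reward(candidate)
        if r > best_r:
            best, best_r = candidate, r
    return best

print(sorted(search_sub_library()))
```

In the real setting, each reward evaluation is an expensive synthesis run, which is precisely why a learning agent (and transfer learning across designs) pays off over blind sampling.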
APA, Harvard, Vancouver, ISO, and other styles
48

Kuo, Po-Yi, and 郭柏誼. "A Framework for Fusing Video and Wearable Sensing Data by Deep Learning." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/zu27b7.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Institute of Network Engineering
108 (2019)
Both cameras and IoT devices have their particular capabilities in tracking human behaviors and statuses; the correlations between their data streams are, however, unclear. In this work, we propose a framework for integrating video and wearable sensing data for smart surveillance, such as people identification and tracking. Using biometric features such as fingerprints, iris, gait, and faces can lead to good recognition results, but these approaches all have limitations in sensing distance and raise privacy concerns. We therefore present a data fusion framework based on deep learning for fusing the aforementioned data, where deep learning helps adaptively learn the hidden bindings between them. We demonstrate how to retrieve data of interest from IoT devices attached to people and correctly tag that data on the corresponding persons captured by a camera, thus correlating video and IoT data. Potential applications of this framework include smart surveillance and friendly visualization. We then show several case studies, including integrating video data with body movement and physiological data.
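The binding idea can be illustrated without deep learning: assign a wearable's ID to the video track whose motion profile best matches the wearable's accelerometer energy over the same time window. The correlation-based baseline below is a simplified stand-in for the learned binding described in the abstract, with made-up track names and signals.

```python
def correlate(a, b):
    """Pearson correlation between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def tag_person(wearable_motion, video_tracks):
    """Assign the wearable's ID to the video track whose per-frame motion
    profile correlates best with the wearable's accelerometer energy."""
    return max(video_tracks, key=lambda tid: correlate(wearable_motion, video_tracks[tid]))

tracks = {"track_A": [0.1, 0.9, 0.8, 0.2], "track_B": [0.9, 0.1, 0.2, 0.8]}
print(tag_person([0.2, 1.0, 0.7, 0.1], tracks))
```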
APA, Harvard, Vancouver, ISO, and other styles
49

Shi, Lu-Yi, and 許露譯. "Efficient Face Detection by Applying Convolution Kernel Decomposition and Deep Learning Framework." Thesis, 2019. http://ndltd.ncl.edu.tw/cgi-bin/gs32/gsweb.cgi/login?o=dnclcdr&s=id=%22107NCHU5394053%22.&searchmode=basic.

Full text
Abstract:
Master's thesis
National Chung Hsing University
Department of Computer Science and Engineering
107 (2018)
Face detection is one of the most commonly used techniques in deep learning research, with applications such as the autofocus built into digital cameras, face-recognition payment in unmanned stores, personnel access control systems, and face unlocking on smartphones. Convolutional neural networks are currently among the most convenient and accurate deep learning methods for implementing face detection. Among the many studies on face detection, the most famous may be MTCNN (Multi-task Cascaded Convolutional Networks). The MTCNN model has higher accuracy and shorter detection time than most other face detection methods. The major challenge is that the overall accuracy of the MTCNN model is proportional to the amount of training data, and model training is usually very time-consuming. In general, the common ways to improve the accuracy of a neural network model are to increase the number of neurons in a layer (widening) or to add new layers (deepening). Although these two approaches can improve the accuracy of the model, the neural network structure becomes more complicated; an overly complex structure not only makes it difficult to improve performance further but also increases the overall training time. In this thesis, we use the principle of diversity in data augmentation together with convolution kernel decomposition to improve the accuracy of the MTCNN model, aiming to improve accuracy while reducing the amount of training data and the training time. From the experiments, we found that the accuracy of the MTCNN model after applying the diversity principle and the convolution kernel decomposition mechanism is much higher than that of the original model, and the training time at each stage is also shortened. To further enhance the MTCNN model, the XLA compiler feature of the deep learning framework TensorFlow is used to improve the computational efficiency of the model. Consequently, both the detection accuracy and the training time can be effectively improved.
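One way to see why kernel decomposition cuts training cost is a simple parameter count: two stacked 3x3 convolutions cover the same 5x5 receptive field as one 5x5 convolution but with fewer weights. The arithmetic below is generic, not the thesis's exact MTCNN modification, and the channel count is an arbitrary example.

```python
def conv_params(k, c_in, c_out):
    """Weights in a single k x k convolution layer (biases ignored)."""
    return k * k * c_in * c_out

c = 32  # example channel count
direct = conv_params(5, c, c)                             # one 5x5 layer
decomposed = conv_params(3, c, c) + conv_params(3, c, c)  # two stacked 3x3 layers
print(direct, decomposed)  # same 5x5 receptive field, fewer weights
```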
APA, Harvard, Vancouver, ISO, and other styles
50

Huang, Ren Hung, and 黃任鴻. "Color Analysis and Identification Based on ROS Framework and Deep Learning Algorithm." Thesis, 2019. http://ndltd.ncl.edu.tw/cgi-bin/gs32/gsweb.cgi/login?o=dnclcdr&s=id=%22107CGU05442030%22.&searchmode=basic.

Full text
APA, Harvard, Vancouver, ISO, and other styles