Click this link to see other types of publications on this topic: Energy Efficient Machine Learning System.

Doctoral dissertations on the topic "Energy Efficient Machine Learning System"

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles

Select a source type:

Check out the 50 best doctoral dissertations on the topic "Energy Efficient Machine Learning System".

An "Add to bibliography" button is available next to each work in the list. Use it, and we will automatically generate a bibliographic reference for the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a ".pdf" file and read its abstract online, whenever such details are available in the metadata.

Browse doctoral dissertations from a wide variety of disciplines and compile accurate bibliographies.

1

OSTA, MARIO. "Energy-efficient embedded machine learning algorithms for smart sensing systems". Doctoral thesis, Università degli studi di Genova, 2020. http://hdl.handle.net/11567/997732.

Full text of the source
Abstract:
Embedded autonomous electronic systems are required in numerous application domains such as the Internet of Things (IoT), wearable devices, and biomedical systems. Embedded electronic systems usually host sensors, and each sensor hosts multiple input channels (e.g., tactile, vision), tightly coupled to the electronic computing unit (ECU). The ECU extracts information, often employing sophisticated methods such as Machine Learning. However, embedding Machine Learning algorithms poses essential challenges in terms of hardware resources and energy consumption because of: 1) the high amount of data to be processed; 2) computationally demanding methods. Leveraging the trade-off between quality requirements and computational complexity or time latency can reduce system complexity without affecting performance. The objectives of the thesis are to develop: 1) energy-efficient arithmetic circuits outperforming state-of-the-art solutions for embedded Machine Learning algorithms, and 2) an energy-efficient embedded electronic system for the "electronic-skin" (e-skin) application. As such, this thesis exploits two main approaches. 1) Approximate Computing: in recent years, the approximate computing paradigm has become a major field of research, since it can enhance the energy efficiency and performance of digital systems. Approximate Computing (AC) has turned out to be a practical approach to trading accuracy for better power, latency, and size. AC targets error-resilient applications and offers promising benefits by conserving some resources. Approximate results are usually acceptable for many applications, e.g., tactile data processing, image processing, and data mining; thus, it is highly recommended to take advantage of energy reduction with minimal variation in performance.
In our work, we developed two approximate multipliers: 1) the first, called the "META" multiplier, is based on the Error Tolerant Adder (ETA); 2) the second, called the "Approximate Baugh-Wooley (BW)" multiplier, implements the approximations in the generation of the partial products. We showed that the proposed approximate arithmetic circuits achieve relevant reductions in power consumption and time delay of around 80.4% and 24%, respectively, with respect to the exact BW multiplier. Next, to prove the feasibility of AC in real-world applications, we explored the approximate multipliers in a case study, the e-skin application. The e-skin application comprises multiple sensing components, including 1) structural materials, 2) signal processing, 3) data acquisition, and 4) data processing. In particular, processing the data originating from the e-skin into low- or high-level information is the main problem to be addressed by the embedded electronic system. Many studies have shown that Machine Learning is a promising approach to processing tactile data when classifying input touch modalities. In our work, we proposed a methodology for evaluating the behavior of the system when introducing approximate arithmetic circuits into its main stages (i.e., the signal and data processing stages). Based on the proposed methodology, we first implemented the approximate multipliers in the low-pass Finite Impulse Response (FIR) filter in the signal processing stage of the application. We observed that the FIR filter based on the Approx-BW multiplier outperforms state-of-the-art solutions while respecting the trade-off between accuracy and power consumption, with an SNR degradation of 1.39 dB.
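The truncation idea behind such circuits can be illustrated with a minimal sketch (not the actual META or Approx-BW designs, whose structures are detailed in the thesis): an unsigned shift-and-add multiplier that simply discards partial-product bits below a chosen cut-off column, trading a bounded error for the adder cells saved.

```python
def approx_multiply(a: int, b: int, width: int = 8, cut: int = 4) -> int:
    """Unsigned shift-and-add multiplier that drops every partial-product
    bit below column `cut`, a common approximate-computing scheme."""
    mask = ~((1 << cut) - 1)   # zero out the low-order columns
    result = 0
    for i in range(width):
        if (b >> i) & 1:
            result += (a << i) & mask
    return result
```

With `cut=0` the multiplier is exact; each additional truncated column removes hardware at the cost of an always-non-negative underestimate bounded by `width * 2**cut`.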
Second, we implemented approximate adders and multipliers in the COordinate Rotation DIgital Computer (CORDIC) and Singular Value Decomposition (SVD) circuits, respectively, since CORDIC and SVD account for a significant part of the computationally expensive Machine Learning algorithms employed in tactile data processing. We showed benefits of up to 21% and 19% in power reduction, at the cost of less than 5% accuracy loss, for the CORDIC and SVD circuits when scaling the number of approximated bits. 2) Parallel Computing Platforms (PCP): exploiting parallel architectures for near-threshold computing based on multi-core clusters is a promising approach to improving the performance of smart sensing systems. In our work, we exploited a novel computing platform embedding a Parallel Ultra Low Power processor (PULP), called "Mr. Wolf," for the implementation of Machine Learning (ML) algorithms for touch modality classification. First, we tested the ML algorithms at the software level; for an RGB-image case study and a tactile dataset, we achieved accuracies of 97% and 83.5%, respectively. After validating the effectiveness of the ML algorithm at the software level, we performed on-board classification of two touch modalities, demonstrating the promising use of Mr. Wolf for smart sensing systems. Moreover, we proposed a memory management strategy for storing the needed amount of trained tensors (i.e., 50 trained tensors per class) in the on-chip memory. We evaluated the execution cycles for Mr. Wolf using a single core, 2 cores, and 3 cores, taking advantage of the benefits of parallelization. We presented a comparison with the popular low-power ARM Cortex-M4F microcontroller, commonly employed in battery-operated devices. We showed that the ML algorithm on the proposed platform runs 3.7 times faster than on the ARM Cortex-M4F (STM32F40), consuming only 28 mW.
The proposed platform achieves 15× better energy efficiency than the classification done on the STM32F40, consuming 81 mJ per classification and 150 pJ per operation.
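CORDIC, one of the approximation targets above, is attractive precisely because it reduces rotations to shifts and adds. A floating-point sketch of rotation-mode CORDIC (illustrative only, not the thesis' fixed-point hardware) shows the structure whose adders get approximated:

```python
import math

def cordic_sin_cos(theta: float, n_iter: int = 16):
    """Rotation-mode CORDIC for theta in [-pi/2, pi/2]: rotate the
    vector (K, 0) by theta using only shift-like scalings and adds.
    Fewer iterations (or approximated adders) means less energy
    and less accuracy."""
    K = 1.0
    for i in range(n_iter):          # pre-computable gain correction
        K /= math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = K, 0.0, theta
    for i in range(n_iter):
        d = 1.0 if z >= 0 else -1.0  # rotation direction
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * math.atan(2.0 ** -i)
    return y, x                      # (sin(theta), cos(theta))
```

In hardware, the `2.0 ** -i` scalings are wire shifts and the angle table is a small ROM, so the adders dominate the energy budget.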
APA, Harvard, Vancouver, ISO, and other styles
2

Azmat, Freeha. "Machine learning and energy efficient cognitive radio". Thesis, University of Warwick, 2016. http://wrap.warwick.ac.uk/85990/.

Full text of the source
Abstract:
With an explosion of wireless mobile devices and services, system designers face the challenges of spectrum scarcity and high energy consumption. Cognitive radio (CR) is a promising solution for fulfilling the growing demand for radio spectrum using dynamic spectrum access. It has the ability to sense, allocate, share, and adapt to the radio environment. In this thesis, an analytical performance evaluation of machine learning and energy efficient cognitive radio systems has been investigated under realistic conditions. Firstly, bio-inspired techniques, including the firefly algorithm (FFA), fish school search (FSS), and particle swarm optimization (PSO), have been utilized in this thesis to evaluate the optimal weighting vectors for cooperative spectrum sensing (CSS) and spectrum allocation in cognitive radio systems. This evaluation is performed for more realistic signals that suffer from non-linear distortions caused by power amplifiers. The thesis then takes the investigation further by analysing spectrum occupancy in cognitive radio systems using different machine learning techniques. Four machine learning algorithms, the naive Bayesian classifier (NBC), decision trees (DT), the support vector machine (SVM), and the hidden Markov model (HMM), have been studied to find the technique with the highest classification accuracy (CA). A detailed comparison of the supervised and unsupervised algorithms in terms of computational time and classification accuracy has been presented. In addition, the thesis investigates energy efficient cognitive radio systems, because energy harvesting enables the perpetual operation of wireless networks without the need for battery changes. In particular, energy can be harvested from the radio waves in the radio frequency spectrum.
For ensuring reliable performance, energy prediction has been proposed as a key component for optimizing energy harvesting, because it equips the harvesting nodes with adaptation to energy availability. Two machine learning techniques, linear regression (LR) and decision trees (DT), have been utilized to predict the harvested energy using real-time power measurements in the radio spectrum. Furthermore, conventional energy harvesting cognitive radios do not assume any energy harvesting capability at the primary users (PUs). However, this is not the case when primary users are wirelessly powered. In this thesis, a novel framework has been proposed in which PUs possess energy harvesting capabilities and can benefit from the presence of the secondary user (SU) without any predetermined agreement. The performances of the wirelessly powered PUs and the SU have also been analysed. Numerical results have been presented to show the accuracy of the analysis. First, it has been observed that bio-inspired techniques outperform the conventional algorithms used for collaborative spectrum sensing and allocation. Second, it has been noticed that SVM is the best algorithm among all the supervised and unsupervised classifiers. Based on this, a new SVM algorithm has been proposed by combining SVM with FFA. It has also been observed that SVM+FFA outperforms all other machine learning classifiers. Third, it has been noticed in the energy predictive modelling framework that LR outperforms DT by achieving a smaller prediction error. It has also been shown that the optimal time and frequency attained using the energy predictive model can be used for defining the scheduling policies of the harvesting nodes. Last, it has been shown that wirelessly powered PUs with energy harvesting capabilities can attain an energy gain from the transmission of the SU, and the SU can attain a throughput gain from the extra transmission time allocated to the energy harvesting PUs.
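The LR side of the energy-prediction comparison can be sketched with plain least squares (the data and variable names here are invented for illustration; the thesis uses real-time power measurements):

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y ~ a*x + b, e.g. predicting the
    next harvested-energy reading from a time index."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def predict(model, x):
    a, b = model
    return a * x + b
```

A harvesting node would fit on a sliding window of past measurements and use `predict` to decide when enough energy will be available to schedule a transmission.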
APA, Harvard, Vancouver, ISO, and other styles
3

García-Martín, Eva. "Extraction and Energy Efficient Processing of Streaming Data". Licentiate thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-15532.

Full text of the source
Abstract:
The interest in machine learning algorithms is increasing, in parallel with the advancements in hardware and software required to mine large-scale datasets. Machine learning algorithms account for a significant amount of the energy consumed in data centers, which impacts global energy consumption. However, machine learning algorithms are optimized towards predictive performance and scalability. Algorithms with low energy consumption are necessary for embedded systems and other resource-constrained devices, and desirable for platforms that require many computations, such as data centers. Data stream mining investigates how to process potentially infinite streams of data without the need to store all the data. This ability is particularly useful for companies that are generating data at a high rate, such as social networks. This thesis investigates algorithms in the data stream mining domain from an energy efficiency perspective. The thesis comprises two parts. The first part explores how to extract and analyze data from Twitter, with a pilot study that investigates a correlation between hashtags and followers. The second and main part investigates how energy is consumed and optimized in an online learning algorithm suitable for data stream mining tasks. The second part of the thesis focuses on analyzing, understanding, and reformulating the Very Fast Decision Tree (VFDT) algorithm, the original Hoeffding tree algorithm, into an energy efficient version. It presents three key contributions. First, it shows how energy varies in the VFDT from a high-level view by tuning different parameters. Second, it presents a methodology to identify energy bottlenecks in machine learning algorithms, by portraying the functions of the VFDT that consume the largest amount of energy. Third, it introduces dynamic parameter adaptation for Hoeffding trees, a method to dynamically adapt the parameters of Hoeffding trees to reduce their energy consumption.
The results show an average energy reduction of 23% on the VFDT algorithm.
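The parameters tuned above ultimately act through the VFDT's split rule, which is based on the Hoeffding bound; a minimal sketch of the bound and the resulting split test (illustrative, not the thesis' implementation):

```python
import math

def hoeffding_bound(value_range: float, delta: float, n: int) -> float:
    """epsilon such that, with probability 1 - delta, the observed mean
    of n samples lies within epsilon of the true mean; `value_range`
    is the range of the split-quality metric (1 for information gain
    on binary class labels)."""
    return math.sqrt(value_range ** 2 * math.log(1.0 / delta) / (2.0 * n))

def should_split(g_best: float, g_second: float, eps: float) -> bool:
    # Split when the gap between the two best attributes' quality
    # scores exceeds the bound, so the choice is stable w.h.p.
    return g_best - g_second > eps
```

Since `eps` shrinks as `1/sqrt(n)`, delaying split checks (larger grace periods, larger `delta`) reduces how often this computation runs, which is one lever for trading accuracy against energy.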
Scalable resource-efficient systems for big data analytics
APA, Harvard, Vancouver, ISO, and other styles
4

Harmer, Keith. "An energy efficient brushless drive system for a domestic washing machine". Thesis, University of Sheffield, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.265571.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
5

Cui, Henggang. "Exploiting Application Characteristics for Efficient System Support of Data-Parallel Machine Learning". Research Showcase @ CMU, 2017. http://repository.cmu.edu/dissertations/908.

Full text of the source
Abstract:
Large scale machine learning has many characteristics that can be exploited in the system designs to improve its efficiency. This dissertation demonstrates that the characteristics of the ML computations can be exploited in the design and implementation of parameter server systems, to greatly improve the efficiency by an order of magnitude or more. We support this thesis statement with three case study systems, IterStore, GeePS, and MLtuner. IterStore is an optimized parameter server system design that exploits the repeated data access pattern characteristic of ML computations. The designed optimizations allow IterStore to reduce the total run time of our ML benchmarks by up to 50×. GeePS is a parameter server that is specialized for deep learning on distributed GPUs. By exploiting the layer-by-layer data access and computation pattern of deep learning, GeePS provides almost linear scalability from single-machine baselines (13× more training throughput with 16 machines), and also supports neural networks that do not fit in GPU memory. MLtuner is a system for automatically tuning the training tunables of ML tasks. It exploits the characteristic that the best tunable settings can often be decided quickly with just a short trial time. By making use of optimization-guided online trial-and-error, MLtuner can robustly find and re-tune tunable settings for a variety of machine learning applications, including image classification, video classification, and matrix factorization, and is over an order of magnitude faster than traditional hyperparameter tuning approaches.
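IterStore's core observation, that ML iterations repeat the same parameter accesses, can be caricatured in a few lines (a toy sketch; the class and method names are invented and not IterStore's actual interface):

```python
class MiniParamServer:
    """Toy parameter server: records the access pattern of the first
    iteration, then serves later iterations from a prefetched cache,
    a much-simplified version of the repeated-access optimization."""

    def __init__(self, params):
        self.store = dict(params)   # authoritative parameter values
        self.pattern = []           # learned access sequence
        self.cache = {}

    def read(self, key, first_iter=False):
        if first_iter:
            self.pattern.append(key)     # learn the access sequence
            return self.store[key]
        if key not in self.cache:        # bulk prefetch once per epoch
            self.cache = {k: self.store[k] for k in self.pattern}
        return self.cache[key]

    def update(self, key, delta):
        self.store[key] += delta
        self.cache.pop(key, None)        # invalidate the stale entry
```

The point of the sketch is only the shape of the optimization: knowing the access sequence up front lets the server batch fetches and pre-partition data structures instead of handling each read individually.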
APA, Harvard, Vancouver, ISO, and other styles
6

Le, Borgne Yann-Aël. "Learning in wireless sensor networks for energy-efficient environmental monitoring". Doctoral thesis, Universite Libre de Bruxelles, 2009. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210334.

Full text of the source
Abstract:
Wireless sensor networks form an emerging class of computing devices capable of observing the world with an unprecedented resolution, and promise to provide a revolutionary instrument for environmental monitoring. Such a network is composed of a collection of battery-operated wireless sensors, or sensor nodes, each of which is equipped with sensing, processing and wireless communication capabilities. Thanks to advances in microelectronics and wireless technologies, wireless sensors are small in size, and can be deployed at low cost over different kinds of environments in order to monitor, over both space and time, the variations of physical quantities such as temperature, humidity, light, or sound.

In environmental monitoring studies, many applications are expected to run unattended for months or years. Sensor nodes are however constrained by limited resources, particularly in terms of energy. Since communication is one order of magnitude more energy-consuming than processing, the design of data collection schemes that limit the amount of transmitted data is therefore recognized as a central issue for wireless sensor networks.

An efficient way to address this challenge is to approximate, by means of mathematical models, the evolution of the measurements taken by sensors over space and/or time. Indeed, whenever a mathematical model may be used in place of the true measurements, significant gains in communication may be obtained by transmitting only the parameters of the model instead of the set of real measurements. Since in most cases there is little or no a priori information about the variations taken by sensor measurements, the models must be identified in an automated manner. This calls for the use of machine learning techniques, which make it possible to model the variations of future measurements on the basis of past measurements.

This thesis brings two main contributions to the use of learning techniques in a sensor network. First, we propose an approach which combines time series prediction and model selection for reducing the amount of communication. The rationale of this approach, called adaptive model selection, is to let the sensors determine, in an automated manner, a prediction model that not only fits their measurements but also reduces the amount of transmitted data.
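The dual-prediction idea underneath this can be sketched with the simplest possible model, a last-value predictor (the thesis' adaptive model selection chooses among richer models; this sketch only shows why a model shared by sensor and sink saves transmissions):

```python
def dual_prediction(measurements, eps):
    """Send-on-delta sketch: sensor and sink share a last-value model;
    a new reading is transmitted only when the model's prediction errs
    by more than eps, so both sides stay within eps of each other."""
    sent = []
    model = None
    for m in measurements:
        if model is None or abs(m - model) > eps:
            sent.append(m)   # radio transmission to the sink
            model = m        # both sides update the shared model
    return sent
```

For slowly varying signals such as temperature, most readings fall within the error budget and the radio, the dominant energy consumer, stays off.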

The second main contribution is the design of a distributed approach for modeling sensed data, based on principal component analysis (PCA). The proposed method transforms, along a routing tree, the measurements taken, in such a way that (i) most of the variability in the measurements is retained, and (ii) the network load sustained by the sensor nodes is reduced and more evenly distributed, which in turn extends the overall network lifetime. The framework can be seen as a truly distributed approach to principal component analysis, and finds applications not only in approximate data collection tasks, but also in event detection or recognition tasks.
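The compression at the heart of this contribution can be sketched centrally with numpy (the thesis distributes the computation along the routing tree; here projection and reconstruction are shown in one place, with invented data shapes):

```python
import numpy as np

def pca_compress(X, k):
    """Project sensor measurements X (n_samples x n_sensors) onto the
    top-k principal components: only k coefficients per sample need to
    travel up the routing tree instead of n_sensors raw readings."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    W = Vt[:k]                 # k principal directions
    Z = (X - mu) @ W.T         # compressed coefficients (transmitted)
    X_hat = Z @ W + mu         # reconstruction at the sink
    return Z, X_hat
```

When measurements are spatially correlated, as environmental data usually are, a small `k` retains most of the variance, which is exactly the property the distributed scheme exploits.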

Doctorat en Sciences

APA, Harvard, Vancouver, ISO, and other styles
7

Yurur, Ozgur. "Energy Efficient Context-Aware Framework in Mobile Sensing". Scholar Commons, 2013. http://scholarcommons.usf.edu/etd/4797.

Full text of the source
Abstract:
The ever-increasing technological advances in embedded systems engineering, together with the proliferation of small-size sensor design and deployment, have enabled mobile devices (e.g., smartphones) to recognize daily occurring human-based actions, activities, and interactions. Therefore, inferring a vast variety of mobile device user activities from the very diverse context obtained by a series of sensory observations has drawn much interest in the research area of ubiquitous sensing. The existence and awareness of context provide the capability of being conscious of the physical environments or situations around mobile device users, and this allows network services to respond proactively and intelligently based on such awareness. Hence, with the evolution of smartphones, software developers are empowered to create context-aware applications for recognizing human-centric or community-based innovative social and cognitive activities in any situation and from anywhere. This leads to the exciting vision of forming a society of the "Internet of Things", which facilitates applications that encourage users to collect, analyze, and share local sensory knowledge for large-scale community use, by creating a smart network capable of making autonomous logical decisions to actuate environmental objects. More significantly, it is believed that introducing intelligence and situational awareness into the recognition of human-centric event patterns could give a better understanding of human behaviors, and could also offer a chance to proactively assist individuals in order to enhance their quality of life. Mobile devices supporting emerging computationally pervasive applications will constitute a significant part of future mobile technologies by providing highly proactive services requiring continuous monitoring of user-related contexts.
However, the middleware services provided in mobile devices have limited resources in terms of power, memory, and bandwidth compared to the capabilities of PCs and servers. Above all, power concerns are the major restriction standing in the way of implementing context-aware applications. These requirements unfortunately shorten device battery lifetimes due to the high energy consumption caused by both sensor and processor operations. Specifically, continuously capturing user context through sensors imposes heavy workloads in hardware and computation, and hence drains the battery power rapidly. Therefore, mobile device batteries do not last long while operating sensors constantly. In addition, the growing deployment of sensor technologies in mobile devices and the innumerable software applications utilizing sensors have led to the creation of a layered system architecture (i.e., context-aware middleware), so that the desired architecture can not only offer a wide range of user-specific services, but also respond effectively to diversity in sensor utilization, large sensory data acquisitions, ever-increasing application requirements, pervasive context processing software libraries, mobile device based constraints, and so on. Due to the ubiquity of these computing devices in dynamic environments where sensor network topologies actively change, applications must behave opportunistically and adaptively, without a priori assumptions, in response to the availability of diverse resources in the physical world, as well as in response to scalability, modularity, extensibility, and interoperability among heterogeneous physical hardware. In this sense, this dissertation aims at proposing novel solutions to enhance the existing tradeoffs in mobile sensing between accuracy and power consumption while context is being inferred under the intrinsic constraints of mobile devices and around the emerging concepts of context-aware middleware frameworks.
APA, Harvard, Vancouver, ISO, and other styles
8

Westphal, Florian. "Efficient Document Image Binarization using Heterogeneous Computing and Interactive Machine Learning". Licentiate thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-16797.

Full text of the source
Abstract:
Large collections of historical document images have been collected by companies and government institutions for decades. More recently, these collections have been made available to a larger public via the Internet. However, to make accessing them truly useful, the contained images need to be made readable and searchable. One step in that direction is document image binarization, the separation of text foreground from page background. This separation makes the text shown in the document images easier to process by humans and other image processing algorithms alike. While reasonably well working binarization algorithms exist, it is not sufficient just to be able to perform the separation of foreground and background well. This separation also has to be achieved in an efficient manner, in terms of execution time, but also in terms of the training data used by machine learning based methods. This is necessary to make binarization not only theoretically possible, but also practically viable. In this thesis, we explore different ways to achieve efficient binarization in terms of execution time by improving the implementation and the algorithm of a state-of-the-art binarization method. We find that parameter prediction, as well as mapping the algorithm onto the graphics processing unit (GPU), helps to improve its execution performance. Furthermore, we propose a binarization algorithm based on recurrent neural networks and evaluate the choice of its design parameters with respect to their impact on execution time and binarization quality. Here, we identify a trade-off between binarization quality and execution performance based on the algorithm's footprint size, and show that a dynamically weighted training loss tends to improve the binarization quality. Lastly, we address the problem of training data efficiency by evaluating the use of interactive machine learning to reduce the required amount of training data for our recurrent neural network based method.
We show that user feedback can help to achieve better binarization quality with less training data and that visualized uncertainty helps to guide users to give more relevant feedback.
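For reference, a plain (and deliberately slow) Sauvola-style local threshold shows what "separating text foreground from page background" means computationally; the thesis' contributions accelerate or replace exactly this kind of per-pixel kernel. Window size and parameters below are conventional defaults, not the thesis' values:

```python
import numpy as np

def sauvola_binarize(img, w=15, k=0.2, R=128.0):
    """Naive Sauvola thresholding: a pixel becomes foreground (0) when
    it is darker than a threshold built from the local mean and local
    standard deviation. Illustrative sketch only; real implementations
    use integral images or the GPU to avoid the per-pixel windows."""
    h, wd = img.shape
    out = np.ones_like(img, dtype=np.uint8)
    r = w // 2
    for y in range(h):
        for x in range(wd):
            win = img[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            t = win.mean() * (1.0 + k * (win.std() / R - 1.0))
            out[y, x] = 0 if img[y, x] < t else 1
    return out
```

Because the threshold is recomputed per pixel over a sliding window, execution time scales with image size times window area, which is why implementation-level work (GPU mapping, parameter prediction) pays off so directly here.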
Scalable resource-efficient systems for big data analytics
APA, Harvard, Vancouver, ISO, and other styles
9

Chakraborty, Debaditya. "Detection of Faults in HVAC Systems using Tree-based Ensemble Models and Dynamic Thresholds". University of Cincinnati / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1543582336141076.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
10

Sala, Cardoso Enric. "Advanced energy management strategies for HVAC systems in smart buildings". Doctoral thesis, Universitat Politècnica de Catalunya, 2019. http://hdl.handle.net/10803/668528.

Full text of the source
Abstract:
The efficacy of the energy management systems at dealing with energy consumption in buildings has been a topic with a growing interest in recent years due to the ever-increasing global energy demand and the large percentage of energy being currently used by buildings. The scale of this sector has attracted research effort with the objective of uncovering potential improvement avenues and materializing them with the help of recent technological advances that could be exploited to lower the energetic footprint of buildings. Specifically, in the area of heating, ventilating and air conditioning installations, the availability of large amounts of historical data in building management software suites makes possible the study of how resource-efficient these systems really are when entrusted with ensuring occupant comfort. Actually, recent reports have shown that there is a gap between the ideal operating performance and the performance achieved in practice. Accordingly, this thesis considers the research of novel energy management strategies for heating, ventilating and air conditioning installations in buildings, aimed at narrowing the performance gap by employing data-driven methods to increase their context awareness, allowing management systems to steer the operation towards higher efficiency. This includes the advancement of modeling methodologies capable of extracting actionable knowledge from historical building behavior databases, through load forecasting and equipment operational performance estimation supporting the identification of a building’s context and energetic needs, and the development of a generalizable multi-objective optimization strategy aimed at meeting these needs while minimizing the consumption of energy. 
The experimental results obtained from the implementation of the developed methodologies show a significant potential for increasing energy efficiency of heating, ventilating and air conditioning systems while being sufficiently generic to support their usage in different installations having diverse equipment. In conclusion, a complete analysis and actuation framework was developed, implemented and validated by means of an experimental database acquired from a pilot plant during the research period of this thesis. The obtained results demonstrate the efficacy of the proposed standalone contributions, and as a whole represent a suitable solution for helping to increase the performance of heating, ventilating and air conditioning installations without affecting the comfort of their occupants.
Styles: APA, Harvard, Vancouver, ISO, etc.
11

Tirumalareddy, Rohan Reddy. "BLE Beacon Based Indoor Positioning System in an Office Building using Machine Learning". Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-20221.

Full text source
Abstract:
Context: Indoor positioning systems have become more widespread over the past decade, mainly due to devices such as Bluetooth Low Energy (BLE) beacons, which are low-cost and work effectively. The context of this thesis is to localize people and help them navigate to office equipment, meeting rooms, etc., in an office environment using machine learning algorithms. This can help employees work more effectively and conveniently while saving time. Objective: To perform a literature review of various machine learning models for indoor positioning that are suitable for an office environment, and to experiment with the selected models and compare the results based on their performance. An Android smartphone and BLE beacons were used to collect RSSI values along with their respective location coordinates for the dataset. The accuracy of positioning is determined by training state-of-the-art machine learning algorithms on the dataset, using performance metrics such as the Euclidean distance error, the CDF curve of the Euclidean distance error, RMSE and MAE to compare results and select the best model for this research. Methods: A fingerprinting method for indoor positioning is studied and applied for the collection of the RSSI values and (x, y) location coordinates from the fixed beacons. A literature review is performed on various machine learning models appropriate for indoor positioning. The chosen models were tested and compared based on their performance using metrics such as the CDF curve, MAE, RMSE and Euclidean distance error. Results: The literature study shows that Long Short-Term Memory networks, Multi-layer Perceptrons, gradient boosting, XGBoost and AdaBoost are suitable models for indoor positioning. The experimentation and comparison of these models show that the overall performance of the Long Short-Term Memory network was better than that of the Multi-layer Perceptron, gradient boosting, XGBoost and AdaBoost.
Conclusions: After analysing the acquired results and taking into account the real-world scenarios for which this thesis is intended, it can be stated that the LSTM network provides the most accurate location estimation using beacons. The system can be monitored in real time for maintenance and personnel tracking in an office environment.
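The comparison metrics named in this abstract can be sketched in a few lines; the function name and toy coordinates below are illustrative, not from the thesis.

```python
from math import hypot, sqrt

# Minimal sketch of the positioning metrics: per-sample Euclidean
# distance error plus aggregate RMSE and MAE over predicted vs. true
# (x, y) positions.
def positioning_errors(true_xy, pred_xy):
    """Per-sample Euclidean distance error plus aggregate RMSE and MAE."""
    dist = [hypot(px - tx, py - ty)
            for (tx, ty), (px, py) in zip(true_xy, pred_xy)]
    rmse = sqrt(sum(d * d for d in dist) / len(dist))
    mae = sum(dist) / len(dist)
    return dist, rmse, mae

# Toy example: one prediction 5 m off, one exact.
dist, rmse, mae = positioning_errors([(0, 0), (10, 10)], [(3, 4), (10, 10)])
```

Sorting `dist` and plotting the cumulative fraction of samples below each value would give the CDF curve the thesis uses.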
Styles: APA, Harvard, Vancouver, ISO, etc.
12

Goutham, Mithun. "Machine learning based user activity prediction for smart homes". The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1595493258565743.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
13

Hayward, Ross. "Analytic and inductive learning in an efficient connectionist rule-based reasoning system". Thesis, Queensland University of Technology, 2001.

Find full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
14

Lundström, Dennis. "Data-efficient Transfer Learning with Pre-trained Networks". Thesis, Linköpings universitet, Datorseende, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-138612.

Full text source
Abstract:
Deep learning has dominated the computer vision field since 2012, but a common criticism of deep learning methods is their dependence on large amounts of data. To address this criticism, research into data-efficient deep learning is growing. The foremost success in data-efficient deep learning is transfer learning with networks pre-trained on the ImageNet dataset. Pre-trained networks have achieved state-of-the-art performance on many tasks. We consider the pre-trained network method for a new task where we have to collect the data. We hypothesize that the data efficiency of pre-trained networks can be improved through informed data collection. After exhaustive experiments on CaffeNet and VGG16, we conclude that the data efficiency indeed can be improved. Furthermore, we investigate an alternative approach to data-efficient learning, namely adding domain knowledge in the form of a spatial transformer to the pre-trained networks. We find that spatial transformers are difficult to train and seem not to improve data efficiency.
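The transfer-learning recipe the abstract relies on (freeze the pre-trained feature extractor, fit only a small head on the scarce target data) can be sketched with a toy stand-in; the random projection, perceptron head and data below are illustrative assumptions, not the thesis's CaffeNet/VGG16 setup.

```python
import random

random.seed(0)

# Stand-in for a frozen pre-trained feature extractor: a fixed random
# projection followed by a ReLU. In the thesis this role is played by
# networks pre-trained on ImageNet.
W_FROZEN = [[random.gauss(0, 1) for _ in range(4)] for _ in range(8)]

def features(x):
    # Frozen layer: never updated during target-task training.
    return [max(0.0, sum(xi * wij for xi, wij in zip(x, col)))
            for col in zip(*W_FROZEN)]

def train_head(samples, labels, epochs=20, lr=0.1):
    # Only this small linear head is fit on the scarce labelled data.
    w, b = [0.0] * 4, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            f = features(x)
            pred = 1 if sum(fi * wi for fi, wi in zip(f, w)) + b > 0 else 0
            err = y - pred                 # perceptron-style update
            w = [wi + lr * err * fi for wi, fi in zip(w, f)]
            b += lr * err
    return w, b

samples = [[random.gauss(0, 1) for _ in range(8)] for _ in range(30)]
labels = [1 if s[0] > 0 else 0 for s in samples]
frozen_before = [row[:] for row in W_FROZEN]
w, b = train_head(samples, labels)      # W_FROZEN is left untouched
```

Only the head's few parameters are learned, which is why the approach needs so little target-task data.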
Styles: APA, Harvard, Vancouver, ISO, etc.
15

Zayene, Mariem. "Cooperative data exchange for wireless networks : Delay-aware and energy-efficient approaches". Thesis, Limoges, 2019. http://www.theses.fr/2019LIMO0033/document.

Full text source
Abstract:
With the significantly growing number of smart low-power devices in recent years, the issue of energy efficiency has taken an increasingly essential role in the design of communication systems. This thesis aims at designing distributed and energy-efficient transmission schemes for wireless networks using game theory and instantly decodable network coding (IDNC), a promising network coding subclass. We study the cooperative data exchange (CDE) scenario, in which all devices cooperate with each other by exchanging network-coded packets until all of them receive all the required information. In fact, enabling the IDNC-based CDE setting brings several challenges, such as how to extend the network lifetime and how to reduce the number of transmissions in order to satisfy urgent delay requirements. Therefore, unlike most existing works on IDNC, we focus not only on the decoding delay but also on the consumed energy. First, we investigate the IDNC-based CDE problem within small, fully connected networks across energy-constrained devices and model the problem using cooperative game theory in partition form. We propose a distributed merge-and-split algorithm to allow the wireless nodes to self-organize into independent disjoint coalitions in a distributed manner. The proposed algorithm guarantees reduced energy consumption and minimizes the delay in the resulting clustered network structure. We consider not only the transmission energy but also the computational energy consumption. Furthermore, we focus on the mobility issue and analyse how, in the proposed framework, nodes can adapt to the dynamic topology of the network. Thereafter, we study the IDNC-based CDE problem within large-scale, partially connected networks. We consider that each player no longer uses its maximum transmission power; rather, it controls its transmission range dynamically. In fact, we investigate multi-hop CDE using IDNC at decentralized wireless nodes.
In such a model, we focus on how these wireless nodes can cooperate within limited transmission ranges without increasing the IDNC delay or their energy consumption. For that purpose, we model the problem using a two-stage game-theoretical framework. We first model the power control problem using non-cooperative game theory, where users jointly choose their desired transmission power selfishly in order to reduce their energy consumption and their IDNC delay. The optimal solution of this game allows the players at the next stage to cooperate with each other through limited transmission ranges using cooperative game theory in partition form. Thereafter, a distributed multi-hop merge-and-split algorithm is defined to form coalitions in which players maximize their utilities in terms of decoding delay and energy consumption. The solution of the proposed framework determines a stable, feasible partition for the wireless nodes with reduced interference and reasonable complexity. We demonstrate that the cooperation between nodes in the multi-hop cooperative scheme achieves a significant reduction of the energy consumption with respect to the most stable cooperative scheme at maximum transmission range, without hurting the IDNC delay.
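The merge half of a merge-and-split coalition-formation step can be sketched as follows; the quadratic gain-minus-overhead utility, the node sets, and the omission of the split rule are illustrative assumptions, not the thesis's delay/energy formulation.

```python
# Toy merge pass: two coalitions merge whenever the merged utility
# exceeds the sum of the parts, repeated until no profitable merge
# remains.
def utility(coalition):
    n = len(coalition)
    return n * n - (n - 1)  # toy cooperation gain minus coordination cost

def merge_pass(partition):
    partition = [set(c) for c in partition]
    merged = True
    while merged:            # repeat until no profitable merge remains
        merged = False
        for i in range(len(partition)):
            for j in range(i + 1, len(partition)):
                a, b = partition[i], partition[j]
                if utility(a | b) > utility(a) + utility(b):
                    partition[i] = a | b   # merge b into a
                    del partition[j]
                    merged = True
                    break
            if merged:
                break
    return partition

# Under this utility, four isolated nodes end up in one grand coalition.
result = merge_pass([{1}, {2}, {3}, {4}])
```

A real merge-and-split scheme alternates this pass with a symmetric split check, so the final partition is stable against both operations.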
Styles: APA, Harvard, Vancouver, ISO, etc.
16

Kheffache, Mansour. "Energy-Efficient Detection of Atrial Fibrillation in the Context of Resource-Restrained Devices". Thesis, Luleå tekniska universitet, Datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-76394.

Full text source
Abstract:
eHealth is a recently emerging practice at the intersection of the ICT and healthcare fields, where computing and communication technology is used to improve traditional healthcare processes or create new opportunities to provide better health services; eHealth can be considered under the umbrella of the Internet of Things. A common practice in eHealth is the use of machine learning for computer-aided diagnosis, where an algorithm is fed some biomedical signal to provide a diagnosis, in the same way a trained radiologist would. This work considers the task of atrial fibrillation detection and proposes a novel range of algorithms to achieve energy efficiency. Based on our working hypothesis that computationally simple operations and low-precision data types are key to energy efficiency, we evaluate various algorithms in the context of resource-restrained health-monitoring wearable devices. Finally, we assess the sustainability dimension of the proposed solution.
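The working hypothesis (simple, low-precision operations instead of floating-point features) can be illustrated with an integer-only irregularity measure on RR intervals; the feature, the toy rhythms and any decision threshold one would put on top are illustrative assumptions, not the thesis's algorithms.

```python
# Integer-only sketch: atrial fibrillation shows up as highly irregular
# RR intervals, which a mean absolute successive difference captures
# using only additions, subtractions and integer division.
def rr_irregularity(rr_ms):
    """Mean absolute successive difference of RR intervals (ms)."""
    diffs = [abs(rr_ms[i + 1] - rr_ms[i]) for i in range(len(rr_ms) - 1)]
    return sum(diffs) // len(diffs)  # integer division keeps low precision

regular = [800, 805, 798, 802, 801, 799]     # steady sinus rhythm (ms)
irregular = [620, 910, 540, 1010, 700, 880]  # AF-like irregular rhythm
score_reg = rr_irregularity(regular)
score_af = rr_irregularity(irregular)
```

Everything here fits in 16-bit integers, which is exactly the kind of arithmetic a resource-restrained wearable can afford.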
Styles: APA, Harvard, Vancouver, ISO, etc.
17

Montgomery, Weston C. "Implementing a Data Acquisition System for the Training of Cloud Coverage Neural Networks". DigitalCommons@CalPoly, 2021. https://digitalcommons.calpoly.edu/theses/2310.

Full text source
Abstract:
Cal Poly is home to a solar farm designed to nominally generate 4.5 MW of electricity. The Gold Tree Solar Farm (GTSF) is currently the largest photovoltaic array in the California State University (CSU) system, and it was claimed to be able to produce approximately 11 GWh per year. These types of projections come from power generation models developed to predict the power production of large solar fields. However, when it comes to near-term forecasting of power generation with variable sources such as wind and solar, there is considerable room for improvement. The two primary factors that impact solar power generation are shading and the angle of the sun. The angle of the sun relative to GTSF's panels can be analytically calculated using geometry. Shading due to cloud coverage, on the other hand, can be very difficult to map. For this reason, artificial neural networks (NNs) have great potential for accurate near-term cloud coverage forecasting. Much of the necessary training data (e.g. wind speeds, temperature, humidity, etc.) can be acquired from online sources, but the most important dataset needs to be captured at GTSF: sky images showing the exact location of the clouds over the solar field. Therefore, a new image-capturing data acquisition (DAQ) system has been implemented to gather the necessary training data, with the goal of forecasting cloud coverage 15-30 minutes into the future.
Styles: APA, Harvard, Vancouver, ISO, etc.
18

Kanwal, Summrina. "Towards a novel medical diagnosis system for clinical decision support system applications". Thesis, University of Stirling, 2016. http://hdl.handle.net/1893/25397.

Full text source
Abstract:
Clinical diagnosis of chronic disease is a vital and challenging research problem which requires intensive clinical practice guidelines in order to ensure consistent and efficient patient care. Conventional medical diagnosis systems suffer from certain limitations, such as complex diagnosis processes, lack of expertise, lack of well-described procedures for conducting diagnoses, low computing skills, and so on. An automated clinical decision support system (CDSS) can help physicians and radiologists to overcome these challenges by combining the competency of radiologists and physicians with the capabilities of computers. A CDSS depends on many techniques from the fields of image acquisition, image processing, pattern recognition and machine learning, as well as optimization for medical data analysis, to produce efficient diagnoses. In this dissertation, we discuss the current challenges in designing an efficient CDSS as well as a number of the latest techniques (while identifying best practices for each stage of the framework) to meet these challenges by finding informative patterns in the medical dataset, analysing them and building a descriptive model of the object of interest, thus aiding in medical diagnosis. To meet these challenges, we propose an extension of the conventional clinical decision support system framework that incorporates artificial immune network (AIN) based hyper-parameter optimization as an integral part of it. We applied the conventional as well as the optimized CDSS to four case studies (most of them comprising medical images) for efficient medical diagnosis and compared the results. The first key contribution is the novel application of a local energy-based shape histogram (LESH) as the feature set for the recognition of abnormalities in mammograms. We investigated the implications of this technique for the mammogram datasets of the Mammographic Image Analysis Society and INbreast.
In the evaluation, regions of interest were extracted from the mammograms, their LESH features were calculated, and they were fed to support vector machine (SVM) and echo state network (ESN) classifiers. In addition, the impact of selecting a subset of LESH features based on the classification performance was also observed and benchmarked against a state-of-the-art wavelet based feature extraction method. The second key contribution is to apply the LESH technique to detect lung cancer. The JSRT Digital Image Database of chest radiographs was selected for research experimentation. Prior to LESH feature extraction, we enhanced the radiograph images using a contrast limited adaptive histogram equalization (CLAHE) approach. Selected state-of-the-art cognitive machine learning classifiers, namely the extreme learning machine (ELM), SVM and ESN, were then applied using the LESH extracted features to enable the efficient diagnosis of a correct medical state (the existence of benign or malignant cancer) in the x-ray images. Comparative simulation results, evaluated using the classification accuracy performance measure, were further benchmarked against state-of-the-art wavelet based features, and authenticated the distinct capability of our proposed framework for enhancing the diagnosis outcome. As the third contribution, this thesis presents a novel technique for detecting breast cancer in volumetric medical images based on a three-dimensional (3D) LESH model. It is a hybrid approach, and combines the 3D LESH feature extraction technique with machine learning classifiers to detect breast cancer from MRI images. The proposed system applies CLAHE to the MRI images before extracting the 3D LESH features. Furthermore, a selected subset of features is fed to a machine learning classifier, namely the SVM, ELM or ESN, to detect abnormalities and to distinguish between different stages of abnormality. The results indicate the high performance of the proposed system. 
When compared with the wavelet-based feature extraction technique, statistical analysis testifies to the significance of our proposed algorithm. The fourth contribution is a novel application of the AIN for optimizing machine learning classification algorithms as part of a CDSS. We employed our proposed technique in conjunction with selected machine learning classifiers, namely the ELM, SVM and ESN, and validated it using the benchmark medical datasets of PIMA India diabetes and BUPA liver disorders, two-dimensional (2D) medical images, namely MIAS, INbreast and JSRT chest radiographs, as well as the three-dimensional TCGA-BRCA breast MRI dataset. The results were investigated using the classification accuracy measure and the learning time. We also compared our methodology with the benchmarked multi-objective genetic algorithm based optimization technique. The results authenticate the potential of the AIN-optimised CDSS.
Styles: APA, Harvard, Vancouver, ISO, etc.
19

Mämmelä, O. (Olli). "Algorithms for efficient and energy-aware network resource management in autonomous communications systems". Doctoral thesis, Oulun yliopisto, 2017. http://urn.fi/urn:isbn:9789526216089.

Full text source
Abstract:
According to industry estimates, monthly global mobile data traffic will surpass 30.6 exabytes by 2020, and global mobile data traffic will increase nearly eightfold between 2015 and 2020. Most of the mobile data traffic is generated by smartphones, and the total number of smartphones is expected to continue growing by 2020, which results in rapid traffic growth. In addition, the upcoming 5G networks and Internet of Things-based communication are estimated to involve a large amount of network traffic. The increase in mobile data traffic and in the number of connected devices poses a challenge to network operators, service providers, and data center operators. If the transmission capacity of the network and the amount of data traffic are not in line with each other, congestion may occur and ultimately the quality of experience degrades. Mobile networks are also becoming more reliant on data centers that provide efficient computing power. However, the energy consumption of data centers has grown in recent years, which is a problem for data center operators. A traditional strategy to overcome these problems is to scale up the resources or to provide more efficient hardware. Resource over-provisioning increases operating and capital expenditures without a guarantee of increased average revenue per user. In addition, the growing complexity and dynamics of communication systems are a challenge for efficient resource management. Intelligent and resilient methods that can efficiently use existing resources by making autonomous decisions without intervention from human administrators are thus needed. The goal of this research is to implement, develop, model, and test algorithms that can enable efficient and energy-aware network resource management in autonomous communications systems. 
First, an energy-aware algorithm is introduced for high-performance computing data centers to reduce the energy consumption within a single data center and across a federation of data centers. For network access selection in heterogeneous wireless networks, two algorithms are proposed: a client-side algorithm that tries to optimize users' quality of experience, and a network-side algorithm that focuses on optimizing the global resource usage of the network. Finally, for a video service, an algorithm is presented that can enhance video content delivery in a controllable and resource-efficient way without major changes in the mobile network infrastructure.
Styles: APA, Harvard, Vancouver, ISO, etc.
20

Bogdanov, Daniil. "The development and analysis of a computationally efficient data driven suit jacket fit recommendation system". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-222341.

Full text source
Abstract:
In this master thesis work we design and analyze a data-driven suit jacket fit recommendation system which aims to guide shoppers in the process of assessing garment fit over the web. The system is divided into two stages. In the first stage, we analyze labelled customer data, train supervised learning models to predict the optimal suit jacket dimensions of unseen shoppers, and determine appropriate models for each suit jacket dimension. In stage two, the recommendation system uses the results from stage one and sorts a garment collection from best fit to least fit. The sorted collection is what the fit recommendation system returns. In this thesis work we propose a particular design of stage two that aims to reduce the complexity of the system, at the cost of reduced quality of the results. The trade-offs are identified and weighed against each other. The results in stage one show that simple supervised learning models with linear regression functions suffice when the independent and dependent variables align at particular landmarks on the body. If style preferences are also to be incorporated into the supervised learning models, non-linear regression functions should be considered to account for the increased complexity. The results in stage two show that the complexity of the recommendation system can be made independent of the complexity of how fit is assessed. As technology enables more advanced ways of assessing garment fit, such as 3D body scanning techniques, the proposed design of reducing the complexity of the recommendation system enables highly complex techniques to be utilized without affecting the responsiveness of the system at run-time.
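The stage-two ranking described above can be sketched as follows; the dimension names, garment data and plain Euclidean fit score are illustrative assumptions rather than the thesis's exact design.

```python
from math import sqrt

# Stage-two sketch: given the optimal jacket dimensions predicted in
# stage one, rank a garment collection from best to least fit.
def rank_by_fit(predicted_dims, collection):
    """Return garment ids sorted by distance to the predicted dimensions."""
    def fit_distance(dims):
        return sqrt(sum((d - p) ** 2 for d, p in zip(dims, predicted_dims)))
    return sorted(collection, key=lambda name: fit_distance(collection[name]))

predicted = [102.0, 76.0, 64.0]            # chest, waist, sleeve (cm)
collection = {
    "jacket_48": [100.0, 74.0, 63.0],      # distance 3.0
    "jacket_50": [105.0, 79.0, 65.0],      # distance ~4.36
    "jacket_46": [96.0, 70.0, 61.0],       # distance 9.0
}
ranking = rank_by_fit(predicted, collection)
```

Because the ranking only needs a distance to the predicted dimensions, how those dimensions are obtained (survey answers or a 3D body scan) can change without touching this stage.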
Styles: APA, Harvard, Vancouver, ISO, etc.
21

Leyva, Mayorga Israel. "On reliable and energy efficient massive wireless communications: the road to 5G". Doctoral thesis, Universitat Politècnica de València, 2019. http://hdl.handle.net/10251/115484.

Full text source
Abstract:
The fifth generation of mobile networks (5G) is just around the corner. It is expected to bring extraordinary benefits to the population and to solve most of the problems of current 4G networks. The success of 5G, whose first standardization phase has been completed, rests on three pillars: massive machine-type communications, enhanced mobile broadband, and ultra-reliable low-latency communications (mMTC, eMBB and URLLC, respectively). In this thesis we focus on the first pillar of 5G, mMTC, but we also provide a solution for achieving eMBB in massive content delivery scenarios. Specifically, the main contributions are in the areas of: 1) efficient support of mMTC in cellular networks; 2) random access for event reporting in wireless sensor networks (WSNs); and 3) cooperation for massive content delivery in cellular networks. Regarding mMTC in cellular networks, this thesis provides a thorough performance analysis of the random access procedure, which is the mechanism through which mobile devices access the network. These analyses were initially carried out by simulation and, later, by means of an analytical model. Both models were developed specifically for this purpose and include one of the most promising access control schemes: access class barring (ACB). Our model is one of the most accurate in the literature and the only one that incorporates the ACB scheme. The results obtained with this model and by simulation are clear: the highly synchronized accesses that occur in mMTC applications can cause severe congestion in the access channel. On the other hand, they are equally clear in that this congestion can be prevented with an adequate configuration of the ACB. 
However, the ACB configuration parameters must be continuously adapted to the access intensity in order to achieve optimal performance. The thesis proposes a practical solution to this problem in the form of an automatic configuration scheme for the ACB, which we call ACBC. The results show that our scheme can achieve near-optimal performance regardless of the access intensity. Moreover, it can be directly implemented in cellular networks to support mMTC traffic, since it was designed with the 3GPP standards in mind. In addition to the analyses described above for cellular networks, a general analysis is performed for smart metering applications. That is, we study an mMTC scenario from the perspective of WSNs. Specifically, we develop a hybrid model for the performance analysis and optimization of cluster-based random access WSN protocols. The results show the benefit of listening to the wireless medium in order to minimize the number of transmissions, and also of modifying the transmission probabilities after a collision. Regarding eMBB, we focus on a massive content delivery scenario in which the same content is sent simultaneously to a large number of mobile users. This scenario is problematic because the base stations of the cellular network lack efficient multicast or broadcast mechanisms. Therefore, the commonly adopted solution is to replicate the content for each requesting user, which is clearly highly inefficient. To solve this problem, we propose the use of network coding schemes and of cooperative architectures called mobile clouds. Specifically, we develop a protocol for massive content delivery, together with an analytical model for its optimization. 
The results demonstrate that the proposed model is simple and accurate, and that the protocol can reduce the con
The 5th generation (5G) of mobile networks is just around the corner. It is expected to bring extraordinary benefits to the population and to solve most of the problems of current 4th generation (4G) systems. The success of 5G, whose first phase of standardization has concluded, rests on three pillars that correspond to its main use cases: massive machine-type communication (mMTC), enhanced mobile broadband (eMBB), and ultra-reliable low-latency communication (URLLC). This thesis mainly focuses on the first pillar of 5G, mMTC, but also provides a solution for eMBB in massive content delivery scenarios. Specifically, its main contributions are in the areas of: 1) efficient support of mMTC in cellular networks; 2) random access (RA) event reporting in wireless sensor networks (WSNs); and 3) cooperative massive content delivery in cellular networks. Regarding mMTC in cellular networks, this thesis provides a thorough performance analysis of the RA procedure (RAP), used by mobile devices to switch from idle to connected mode. These analyses were first conducted by simulation and then with an analytical model; both were developed for this specific purpose and include one of the most promising access control schemes: access class barring (ACB). To the best of our knowledge, this is one of the most accurate analytical models reported in the literature and the only one that incorporates the ACB scheme. Our results clearly show that the highly synchronized accesses that occur in mMTC applications can lead to severe congestion. On the other hand, it is also clear that congestion can be prevented with an adequate configuration of the ACB scheme. However, the configuration parameters of the ACB scheme must be continuously adapted to the intensity of access attempts if optimal performance is to be obtained. 
We developed a practical solution to this problem in the form of a scheme that automatically configures the ACB; we call it the access class barring configuration (ACBC) scheme. The results show that our ACBC scheme leads to near-optimal performance regardless of the intensity of access attempts. Furthermore, it can be directly implemented in 3rd Generation Partnership Project (3GPP) cellular systems to efficiently handle mMTC, because it has been designed to comply with the 3GPP standards. In addition to the analyses described above for cellular networks, a general analysis for smart metering applications is performed. That is, we study an mMTC scenario from the perspective of event detection and reporting WSNs. Specifically, we provide a hybrid model for the performance analysis and optimization of cluster-based RA WSN protocols. Results showcase the utility of overhearing to minimize the number of packet transmissions, and also of adapting transmission parameters after a collision occurs. Building on this, we provide guidelines that can drastically increase the performance of a wide range of RA protocols and systems in event-reporting applications. Regarding eMBB, we focus on a massive content delivery scenario in which the exact same content is transmitted simultaneously to a large number of mobile users. Such a scenario may arise, for example, with video streaming services that offer particularly popular content. It is problematic because cellular base stations have no efficient multicast or broadcast mechanisms; hence, the traditional solution is to replicate the content for each requesting user, which is highly inefficient. To solve this problem, we propose the use of network coding (NC) schemes in combination with cooperative architectures named mobile clouds (MCs). Specifically, we develop a protocol for efficient massive content delivery, along with an analytical model for its optimization. 
Results show the proposed model is simple and accurate, and the protocol can lead to energy savings of up to 37 percent when compared to the traditional approach.
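The ACB mechanism at the core of these analyses admits a compact sketch. The barring check below follows the 3GPP rule (a device draws a uniform random number and proceeds with random access only if the draw falls below the broadcast barring factor; otherwise it backs off for a randomized barring time). The Monte Carlo wrapper and all parameter values are illustrative assumptions, not the thesis's simulator or analytical model.

```python
import random

def acb_attempt(barring_factor, barring_time, rng=None):
    """Single ACB check: returns (allowed, backoff_seconds).

    A device passes the check only if a uniform draw is below the
    broadcast barring factor; otherwise it is barred and must wait
    (0.7 + 0.6 * rand) * barring_time seconds (per 3GPP TS 36.331).
    """
    rng = rng or random
    if rng.random() < barring_factor:
        return True, 0.0
    return False, (0.7 + 0.6 * rng.random()) * barring_time

def mean_access_delay(n_devices, barring_factor, barring_time, seed=42):
    """Monte Carlo estimate of the mean delay added by ACB alone,
    ignoring preamble collisions and other RAP steps."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_devices):
        delay = 0.0
        while True:
            allowed, backoff = acb_attempt(barring_factor, barring_time, rng)
            if allowed:
                break
            delay += backoff
        total += delay
    return total / n_devices
```

With barring factor 0.5 and barring time 4 s, a device is barred once on average, so the mean added delay sits near the 4 s mean backoff; this is the delay/congestion trade-off the thesis optimizes.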
Leyva Mayorga, I. (2018). On reliable and energy efficient massive wireless communications: the road to 5G [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/115484
TESIS
Style APA, Harvard, Vancouver, ISO itp.
22

Ballivian, Sergio Marlon. "Anonymous Indoor Positioning System using Depth Sensors for Context-aware Human-Building Interaction". Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/89612.

Pełny tekst źródła
Streszczenie:
Indoor Localization Systems (ILS), also known as Indoor Positioning Systems (IPS), have been created to determine the position of individuals and other assets inside facilities. They have been implemented for monitoring individuals and objects in a variety of sectors, and they could also serve energy and sustainability purposes. Energy management is a complex and important challenge in the built environment, and the indoor localization market is expected to grow by 33.8 billion dollars over the next five years, according to a 2016 global survey report (Marketsandmarkets.com). This thesis therefore explores the application of depth sensors for detecting occupants' indoor positions, to be used for smarter management of energy consumption in buildings. An interconnected, passive, depth-sensor-based occupant positioning system was investigated for human-building interaction applications. This research investigates the fundamental requirements for depth-sensing technology to detect, identify, and track subjects as they move across different spaces. The depth-based approach is capable of sensing and identifying individuals while accounting for users' privacy concerns in an indoor environment. The proposed system relies on a fixed depth sensor that detects the skeleton, measures depth, and extracts multiple features from the characteristics of the human body to identify individuals through a classifier. An example application of such a system is to capture an individual's thermal preferences and deliver services (targeted air conditioning) accordingly as they move through the building. The outcome of this study will enable the application of cost-effective depth sensors for identification and tracking purposes in indoor environments. 
This research contributes to the accurate detection of individuals and smarter energy management using depth-sensing technologies by proposing new features and combining them with typical biometric features. Adding features such as the area and volume of the human body surface was shown to increase identification accuracy. Depth-sensing imaging could be combined with different ILS approaches to provide reliable information for service delivery in building spaces. The proposed sensing technology could enable the inference of people's locations and thermal preferences across different indoor spaces, as well as sustainable operations by detecting unoccupied rooms in buildings.
Master of Science
Although the Global Positioning System (GPS) performs satisfactorily for outdoor navigation, it fails in indoor environments because of its line-of-sight requirements: physical obstacles such as walls, overhead floors, and roofs weaken GPS functionality in closed spaces. This limitation has opened a new direction of studies, technologies, and research efforts to create indoor location-sensing capabilities. In this study, we have explored the feasibility of an indoor positioning system that detects occupants' locations and preferences accurately without raising privacy concerns. Context-aware systems are created to learn the dynamics of interactions between humans and buildings; examples include sensing, localizing, and distinguishing individuals. One application is a responsive air-conditioning system that adapts to occupants' personalized thermal preferences as they move across indoor spaces. To this end, we propose to leverage depth-sensing technology, such as the Microsoft Kinect sensor, which can provide information on human activities and unique skeletal attributes for identification. The proposed sensing technology could enable the inference of people's locations, preferences, and activity levels across different indoor spaces at any time. Such a system could support sustainable building operations by detecting unoccupied rooms, saving energy and reducing heating, lighting, and air-conditioning costs by delivering conditioning according to occupants' preferences. This thesis has explored the feasibility and challenges of using depth-sensing technology for these objectives. In doing so, we have conducted experimental studies and data analyses using different scenarios of human-environment interaction. The results show that an acceptable level of accuracy can be achieved in detecting individuals across different spaces and actions.
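A minimal sketch of the identification step described above, assuming a small set of enrolled occupants: each person is represented by a feature vector (the area and volume entries echo the thesis's proposed feature additions), and an observation is attributed to the nearest enrolled vector. The names and values are hypothetical, and the thesis used trained classifiers rather than this plain nearest-neighbour rule.

```python
import math

# Hypothetical per-person feature vectors:
# (height m, shoulder width m, body surface area m^2, body volume m^3).
ENROLLED = {
    "occupant_a": (1.82, 0.46, 1.95, 0.072),
    "occupant_b": (1.65, 0.40, 1.70, 0.060),
}

def identify(features, enrolled=ENROLLED):
    """Attribute an observed feature vector to the nearest enrolled
    occupant by Euclidean distance in feature space."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(enrolled, key=lambda name: dist(features, enrolled[name]))
```

Once an observation is attributed to an occupant, the building could look up that occupant's stored thermal preferences and target conditioning accordingly.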
Style APA, Harvard, Vancouver, ISO itp.
23

Ahmed, Omar W. "Enhanced flare prediction by advanced feature extraction from solar images : developing automated imaging and machine learning techniques for processing solar images and extracting features from active regions to enable the efficient prediction of solar flares". Thesis, University of Bradford, 2011. http://hdl.handle.net/10454/5407.

Pełny tekst źródła
Streszczenie:
Space weather has become an international issue due to the catastrophic impact it can have on modern societies. Solar flares are one of the major solar activities that drive space weather, and yet their occurrence is not fully understood. Research is required to yield a better understanding of flare occurrence and to enable the development of an accurate flare prediction system, which can warn the industries most at risk so they can take preventative measures to mitigate or avoid the effects of space weather. This thesis introduces novel technologies developed by combining advances in statistical physics, image processing, machine learning, and feature selection algorithms with advances in solar physics, in order to extract valuable knowledge about active regions and flares from historical solar data. The aims of this thesis are: i) the design of a new measurement, inspired by the physical Ising model, to estimate the magnetic complexity of active regions from solar images, and an investigation of this measurement in relation to flare occurrence; the proposed name of the measurement is the Ising Magnetic Complexity (IMC); ii) determination of the flare prediction capability of the active region properties generated by the new active region detection system SMART (Solar Monitor Active Region Tracking), to enable the design of a new flare prediction system; iii) determination of the active region properties most related to flare occurrence, in order to enhance understanding of the underlying physics of flares. The achieved results can be summarised as follows: i) the new active region measurement (IMC) appears to be related to flare occurrence and has potential use in predicting flare occurrence and location; ii) combining machine learning with SMART's active region properties has the potential to provide more accurate flare predictions than current flare prediction systems, i.e., ASAP (Automated Solar Activity Prediction); iii) a reduced set of 6 active region properties appears to be the most significant for flare occurrence, and it can achieve a similar degree of flare prediction accuracy as the full set of 21 SMART active region properties. The developed technologies and findings of this thesis will serve as a cornerstone for enhancing the accuracy of flare prediction, developing efficient flare prediction systems, and improving our understanding of flare occurrence. The algorithms, implementation, results, and future work are explained in this thesis.
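The property-reduction result in iii) can be illustrated with a simple filter-style feature ranking: score each active region property by its absolute correlation with the flare label and keep the top k. This is a generic sketch, not the thesis's actual selection procedure, and it does not reproduce the 21 SMART properties.

```python
import math

def pearson(xs, ys):
    """Pearson correlation of two equal-length numeric sequences
    (0.0 when either sequence is constant)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def top_k_features(samples, labels, k):
    """Rank features by |correlation with the flare label| and keep k.

    samples: dict {property_name: list of values per active region};
    labels:  0/1 flare flag per active region.
    """
    ranked = sorted(samples,
                    key=lambda f: abs(pearson(samples[f], labels)),
                    reverse=True)
    return ranked[:k]
```

A classifier trained on the surviving k properties can then be compared against one trained on the full set, which is the kind of comparison behind the 6-versus-21 result.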
Style APA, Harvard, Vancouver, ISO itp.
25

Kurén, Jonathan, Simon Leijon, Petter Sigfridsson i Hampus Widén. "Fault Detection AI For Solar Panels". Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-413319.

Pełny tekst źródła
Streszczenie:
The increased usage of solar panels worldwide highlights the importance of being able to detect faults in systems that use them. In this project, the historical power output (kWh) of solar panels, combined with meteorological data, was used to train a machine learning model that predicts the expected power output of a given solar panel system. The expected output was then compared with the actual output to determine whether the system had developed a fault. Applying this method produced an expected output that closely resembled the actual output of a given solar panel system, with some over- and undershooting. Consequently, when a fault was simulated (a 50% decrease in power output), the system was able to detect all faults when the data was analyzed over a two-week period. These results show that it is possible to model the predicted output of a solar panel system with a machine learning model (using meteorological data) and to use it to evaluate whether the system is producing as much power as it should. The system could be improved by adding further meteorological data, increasing the precision of the meteorological data, and training the machine learning model on more data.
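The comparison step described above can be sketched as a sliding-window check against the model's predictions. The 14-day window matches the two-week analysis period mentioned in the abstract, while the 0.75 ratio threshold is an illustrative choice, sized to catch the simulated 50% drop while tolerating the model's day-to-day over- and undershooting.

```python
def detect_fault(expected_kwh, actual_kwh, window=14, threshold=0.75):
    """Flag a fault when total actual output over a sliding window
    drops below `threshold` times the model's expected output.

    Returns the start indices of all windows that trip the check.
    """
    faults = []
    for i in range(len(actual_kwh) - window + 1):
        exp = sum(expected_kwh[i:i + window])
        act = sum(actual_kwh[i:i + window])
        if exp > 0 and act / exp < threshold:
            faults.append(i)
    return faults
```

On a 28-day trace whose second half is halved (the simulated fault), the check starts firing once enough degraded days fall inside the window, and a healthy trace with mild undershoot passes untouched.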
Style APA, Harvard, Vancouver, ISO itp.
26

Mpawenimana, Innocent. "Modélisation et conception d’objets connectés au service des maisons intelligentes : Évaluation et optimisation de leur autonomie et de leur QoS". Thesis, Université Côte d'Azur, 2020. http://www.theses.fr/2020COAZ4107.

Pełny tekst źródła
Streszczenie:
This PhD thesis is in the field of smart homes, more specifically the optimization of energy consumption for a home equipped with an ambient energy harvesting and storage system. The objective is to propose services that handle household energy consumption and promote self-consumption. To do so, relevant data must first be collected (current, active and reactive power consumption, temperature, and so on). In this PhD, data were first sensed using an intrusive load monitoring approach. Despite our efforts to build our own database, we decided to use an online dataset for the rest of this study. Different supervised machine learning algorithms were evaluated on this dataset to identify home appliances accurately. The results showed that active and reactive power alone suffice for that purpose. To further improve accuracy, we used a moving average function to reduce the random variations in the observations. A non-intrusive load monitoring approach was finally adopted to determine the overall household active energy consumption. Using an existing online dataset, a machine learning algorithm based on Long Short-Term Memory (LSTM) was then proposed to predict the overall household energy consumption over different time scales. LSTM was also used to predict, for different weather profiles, the power that can be harvested from solar cells. These predictions of consumed and harvested energy are finally exploited by a home energy management policy that optimizes self-consumption. Simulation results show that the sizes of the solar cells and of the battery impact the self-consumption rate and must therefore be chosen carefully.
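As a rough illustration of the preprocessing and identification steps above, the sketch below smooths a power trace with a moving average and then matches a reading against per-appliance (active, reactive) signatures. The appliance names and signature values are invented for the example, and the thesis's actual classifiers were supervised learning models rather than this nearest-signature rule.

```python
def moving_average(values, k):
    """Width-k sliding mean, used to damp the random fluctuations
    that blur appliance signatures in a raw power trace."""
    return [sum(values[i:i + k]) / k for i in range(len(values) - k + 1)]

# Toy appliance signatures in the (active W, reactive var) plane; the
# thesis found these two features alone sufficient for identification.
SIGNATURES = {"fridge": (120.0, 80.0), "kettle": (2000.0, 10.0)}

def classify(active_w, reactive_var, signatures=SIGNATURES):
    """Attribute a smoothed (active, reactive) reading to the nearest
    known appliance signature (squared Euclidean distance)."""
    return min(signatures,
               key=lambda a: (signatures[a][0] - active_w) ** 2
                           + (signatures[a][1] - reactive_var) ** 2)
```

In a real pipeline the smoothed trace, not the raw one, would feed the classifier, which is exactly the accuracy gain the moving-average preprocessing aims for.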
Style APA, Harvard, Vancouver, ISO itp.
27

Lallement, Guénolé. "Extension of socs mission capabilities by offering near-zero-power performances and enabling continuous functionality for Iot systems". Electronic Thesis or Diss., Aix-Marseille, 2019. http://www.theses.fr/2019AIXM0573.

Pełny tekst źródła
Streszczenie:
Recent developments in the field of low-voltage integrated circuits (ICs) have paved the way towards energy-efficient electronic devices in a booming global network called the Internet of Things (IoT) or the Internet of Everything (IoE). However, the sustainability of all these interconnected sensors is still undermined by the constant need for either an on-board battery, which must be recharged or replaced, or an energy harvester with very limited power efficiency. The power consumption of present consumer electronic systems is fifty times higher than the energy available from a cm²-sized harvester, or limits operation to a few months on a small battery, and is thus hardly viable for lifetime solutions. Upcoming systems-on-chip (SoCs) must close this energy gap through architectural optimizations from the technology level to the system level. The technical approach of this work aims to demonstrate the feasibility of an efficient ultra-low-voltage (ULV) and ultra-low-power (ULP) SoC using exclusively the latest industrial guidelines in 28 nm and 22 nm fully depleted silicon-on-insulator (FD-SOI) technologies. Several multi-power-domain SoCs based on ARM cores are implemented to demonstrate wake-up strategies based on sensor inputs. By optimizing the system architecture, properly selecting and designing components with adequately chosen technology features, and carefully tuning the implementation, a fully energy-optimized SoC is realized
Style APA, Harvard, Vancouver, ISO itp.
28

Teng, Sin Yong. "Intelligent Energy-Savings and Process Improvement Strategies in Energy-Intensive Industries". Doctoral thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2020. http://www.nusl.cz/ntk/nusl-433427.

Pełny tekst źródła
Streszczenie:
As new technologies for energy-intensive industries continue to evolve, existing plants gradually fall behind in efficiency and productivity. Fierce market competition and environmental legislation are forcing these traditional plants toward closure and decommissioning. Process improvement and retrofit projects are essential to sustain the operational performance of such plants. Current approaches to process improvement are mainly process integration, process optimization, and process intensification. These fields generally rely on mathematical optimization, practitioner experience, and operational heuristics, and they form the foundation of process improvement; their performance, however, can be further enhanced by modern computational intelligence. The purpose of this thesis is therefore to apply advanced artificial intelligence and machine learning techniques to process improvement in energy-intensive industrial processes. The thesis tackles this problem by simulating industrial systems and contributes the following: (i) application of machine learning techniques, including one-shot learning and neuro-evolution, for data-driven modeling and optimization of individual units; (ii) application of dimensionality reduction (e.g., principal component analysis, autoencoders) for multi-objective optimization of multi-unit processes; (iii) design of a new tool, bottleneck tree analysis (BOTA), for identifying and removing system bottlenecks, together with a proposed extension that handles multi-dimensional problems with a data-driven approach; (iv) demonstration of the effectiveness of Monte Carlo simulation, neural networks, and decision trees for decision-making when integrating a new process technology into existing processes; (v) comparison of hierarchical temporal memory (HTM) and dual optimization with several predictive tools for supporting real-time operations management; (vi) implementation of an artificial neural network within an interface for the conventional process graph (P-graph); (vii) a perspective on the future of artificial intelligence and process engineering in biosystems through a commercially driven multi-omics paradigm.
Style APA, Harvard, Vancouver, ISO itp.
29

Koziel, Sylvie Evelyne. "From data collection to electric grid performance : How can data analytics support asset management decisions for an efficient transition toward smart grids?" Licentiate thesis, KTH, Elektroteknisk teori och konstruktion, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-292323.

Pełny tekst źródła
Streszczenie:
Physical asset management in the electric power sector encompasses the scheduling of maintenance and replacement of grid components, as well as decisions about investments in new components. Data plays a crucial role in these decisions, and its importance is increasing with the transformation of the power system and its evolution toward smart grids. This thesis deals with questions related to data management, defined as the collection, processing, and storage of data, as a way to improve the performance of asset management decisions; the focus here is on collection and processing. First, the influence of data on asset-related decisions is explored. In particular, the impact of data quality on the replacement time of a generic component (a line, for example) is quantified using a scenario approach and failure modeling. Decisions based on data of poor quality are most likely not optimal; in this case, faulty data about the age of the component leads to a non-optimal replacement schedule. The corresponding costs are calculated for different levels of data quality, and a framework is developed to evaluate how much investment in data quality improvement is needed, and whether it is profitable. Then, ways to use available data efficiently are investigated, in particular the possibility of applying machine learning algorithms to real-world datasets. New approaches are developed that use only available data for component ranking and failure prediction, two important concepts often used to prioritize components and schedule maintenance and replacement. A large part of the scientific literature assumes that the future of smart grids lies in big data collection, and in developing algorithms to process huge amounts of data. 
On the contrary, this work contributes to show how automatization and machine learning techniques can actually be used to reduce the need to collect huge amount of data, by using the available data more efficiently. One major challenge is the trade-offs needed between precision of modeling results, and costs of data management.
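To make the data-quality argument concrete, a toy calculation in the spirit of the thesis (all component parameters and costs below are hypothetical) can compare the expected cost of a scheduled replacement when the recorded component age is accurate versus when records understate the true age, using a Weibull failure model:

```python
import math

def expected_cost(replace_age, true_age_offset, beta=3.0, eta=40.0,
                  cost_planned=1.0, cost_failure=5.0):
    """Expected cost of a replacement scheduled at `replace_age` years when
    the recorded age is off by `true_age_offset` years. Failure times follow
    a Weibull(beta, eta) distribution; costs are in arbitrary units."""
    effective_age = replace_age + true_age_offset  # the component's real age
    # Probability the component fails before the scheduled replacement
    p_fail = 1.0 - math.exp(-((effective_age / eta) ** beta))
    return p_fail * cost_failure + (1.0 - p_fail) * cost_planned

# Records that understate the age by 10 years raise the expected cost,
# because the component is more likely to fail before being replaced
accurate = expected_cost(30, 0)
faulty = expected_cost(30, 10)
```

Faulty age records shift the failure probability at the scheduled replacement time; this is the kind of cost gap the thesis quantifies across data-quality levels.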

QC 20210330

APA, Harvard, Vancouver, ISO and other styles
30

Denis, Nicolas. "Système de gestion d'énergie d'un véhicule électrique hybride rechargeable à trois roues". Thèse, Université de Sherbrooke, 2014. http://hdl.handle.net/11143/5856.

Full text source
Abstract:
Since the end of the 20th century, rising crude oil prices and environmental concerns have led the automotive industry to develop technologies that improve fuel savings and reduce greenhouse gas emissions. Among these technologies, hybrid electric vehicles stand out as a reliable and efficient solution. By combining an electric motor with an internal combustion engine, these vehicles can noticeably reduce fuel consumption without sacrificing vehicle autonomy. The two motors and the two energy storage systems require a control unit, called the energy management system, which is responsible for the command of both motors. The vehicle's fuel consumption greatly depends on this control unit. Plug-in hybrid electric vehicles are a more recent technology than their non-plug-in counterparts: an additional internal charger allows the battery to be charged while the vehicle is parked, and therefore to be discharged over the course of a trip. This particularity adds complexity to the design of the energy management system. In this thesis, a complete vehicle model is proposed and used for the design of the controller. A study is then carried out to show how the optimal control of the motors depends on the speed profile followed during a trip as well as on the electrical energy available at the beginning of a trip. Based on this study, a self-learning optimization technique is proposed that improves the energy management strategy by exploiting driving data recorded on previous trips.
The technique adapts the control strategy to the current trip based on a pseudo-prediction of the full speed profile. The fuel consumption of the proposed technique is evaluated by comparing it with an optimal control strategy that benefits from exact a priori knowledge of the speed profile, as well as with a basic strategy commonly used in industry.
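A minimal sketch of the core idea, spreading battery energy over the whole (pseudo-predicted) trip rather than spending it up front, might look as follows; the power profile, battery size, and both strategies are illustrative stand-ins, not the thesis's actual controller:

```python
def blended_strategy(demand_kw, battery_kwh, dt_h):
    """Blended discharge: the battery covers a fixed share of demand, sized
    from the pseudo-predicted profile so it empties exactly at trip end.
    Returns (engine energy in kWh, peak engine power in kW)."""
    total_kwh = sum(p * dt_h for p in demand_kw)
    share = min(1.0, battery_kwh / total_kwh)     # electric share of demand
    engine = [p * (1.0 - share) for p in demand_kw]
    return sum(p * dt_h for p in engine), max(engine)

def cd_cs_strategy(demand_kw, battery_kwh, dt_h):
    """Baseline: charge-depleting (all-electric) then charge-sustaining
    (engine only). The small remainder at the switch step is ignored."""
    soc, engine = battery_kwh, []
    for p in demand_kw:
        e = p * dt_h
        if soc >= e:
            soc -= e            # pure electric while energy remains
            engine.append(0.0)
        else:
            engine.append(p)    # battery empty: engine covers the demand
    return sum(p * dt_h for p in engine), max(engine)

demand = [20, 40, 60, 40, 20] * 72   # synthetic 1 h profile, 10 s steps (kW)
blend_fuel, blend_peak = blended_strategy(demand, battery_kwh=10.0, dt_h=1 / 360)
base_fuel, base_peak = cd_cs_strategy(demand, battery_kwh=10.0, dt_h=1 / 360)
```

With a flat conversion efficiency both strategies burn the same fuel energy, but the blended split keeps the engine at lower, steadier load, which is where real fuel-map gains come from.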
APA, Harvard, Vancouver, ISO and other styles
31

Baskar, Ashish Guhan, i Araavind Sridhar. "Short Term Regulation in Hydropower Plants using Batteries : A case study of hydropower plants in lower Oreälven river". Thesis, KTH, Skolan för industriell teknik och management (ITM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-289407.

Full text source
Abstract:
Hydropower is one of the oldest renewable energy (RE) sources and constitutes a major share of the Swedish electricity mix. Though hydropower is renewable, there are some issues pertaining to local aquatic conditions. As more environmental laws regulating the use and management of water are implemented, the flexibility of hydropower plants is being jeopardized. The decided national plan for new environmental conditions in Sweden is expected to start being implemented in 2025, and more restrictions are expected. Analysing the capabilities of a battery energy storage system combined with hydropower plants may show how to improve flexibility in hydropower plant operation. This thesis focuses on short-term regulation in the lower Oreälven river, where the hydropower plants Skattungbyn, Unnån and Hansjö are located. The combined hydropower plant and battery system is simulated as operating in the day-ahead market, and a techno-economic optimization of the combined system is performed. The combined system's operation is modelled using Mixed Integer Linear Programming, and the future electricity market analysis is modelled using machine learning techniques. Three electricity market scenarios were developed based on different Swedish nuclear energy targets for 2040. The first scenario complies with the Swedish energy target of 100 % renewable production in 2040; the second still has two nuclear power plants in operation by 2040; and the third has the same nuclear capacity as in 2020. The results show that with current battery costs (~3.6 million SEK/MWh), implementing a battery system for short-term regulation of the combined battery/hydropower system is not profitable; the battery cost would need to fall below 0.5 million SEK/MWh to make it profitable. The thesis also discusses the possibility of utilizing batteries' second life and the techno-economic analysis of their performance.
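As a rough illustration of the day-ahead optimization described above (which the thesis formulates as a Mixed Integer Linear Program), a greedy price-arbitrage stand-in, with entirely synthetic prices and battery parameters, could be sketched as:

```python
def daily_arbitrage_profit(prices, capacity_mwh, power_mw, round_trip_eff=0.85):
    """Greedy stand-in for day-ahead battery arbitrage: charge at the
    cheapest hours, discharge at the most expensive ones. Ignores the
    intra-day ordering of hours, so it upper-bounds the true (MILP) profit."""
    hours = max(1, int(capacity_mwh / power_mw))   # full charge/discharge duration
    ordered = sorted(prices)
    buy = sum(ordered[:hours]) * power_mw                     # cost of charging
    sell = sum(ordered[-hours:]) * power_mw * round_trip_eff  # revenue, net of losses
    return sell - buy

# Synthetic hourly day-ahead prices (SEK/MWh) with a morning/evening peak
prices = [30, 28, 25, 24, 26, 35, 45, 55, 60, 58, 50, 48,
          46, 44, 42, 45, 52, 65, 70, 62, 50, 40, 35, 32]
profit = daily_arbitrage_profit(prices, capacity_mwh=4, power_mw=1)
```

The real MILP additionally enforces the hour-by-hour state of charge, power limits, and coupling with the hydropower schedule; this greedy pairing only bounds the achievable arbitrage revenue.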
APA, Harvard, Vancouver, ISO and other styles
32

Murach, Thomas. "Monoscopic Analysis of H.E.S.S. Phase II Data on PSR B1259–63/LS 2883". Doctoral thesis, Humboldt-Universität zu Berlin, 2017. http://dx.doi.org/10.18452/18484.

Full text source
Abstract:
Cherenkov telescopes can detect the faint Cherenkov light emitted by air showers that are initiated in the Earth's atmosphere by cosmic particles with energies between approximately 100 GeV and 100 TeV. Although the aim is to detect Cherenkov light from showers initiated by gamma rays, the vast majority of all detected showers are initiated by charged cosmic rays. In 2012 the H.E.S.S. observatory, until then comprising four telescopes with 100 m² mirrors each, was extended by a much larger fifth telescope with a mirror area of about 600 m². Thanks to this large mirror area, the telescope has the lowest energy threshold of all telescopes of its kind. In this dissertation, a fast algorithm called MonoReco is presented that can reconstruct fundamental properties of the primary gamma rays, such as their direction and energy, and can distinguish between air showers initiated by gamma rays and those initiated by charged cosmic rays. These tasks are accomplished with artificial neural networks that analyse only the moments of the intensity distributions in the camera of the new telescope. An energy threshold of 59 GeV and angular resolutions of 0.1°-0.3° are achieved; the energy reconstruction bias is at the level of a few percent and the energy resolution is 20-30%. Data taken around the 2014 periastron passage of the gamma-ray binary PSR B1259-63/LS 2883 were analysed with, among others, the MonoReco algorithm. This binary system comprises a neutron star in a 3.4-year orbit around a massive star with a circumstellar disk of gas and plasma. For the first time, the gamma-ray spectrum of this system could be measured by H.E.S.S. down to below 200 GeV. Furthermore, a local flux minimum was measured during unprecedented observations at the time of periastron. High fluxes were measured both before the first and after the second transit of the neutron star through the disk.
In the second case, measurements were performed for the first time contemporaneously with the Fermi-LAT experiment, which has repeatedly detected very high fluxes in this part of the orbit. The measured fluxes agree well with the predictions of a leptonic model.
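The moment-based image summaries that MonoReco-style networks consume can be sketched as follows; the pixel list and the ellipse-parameter names are illustrative, not the actual H.E.S.S. analysis code:

```python
import math

def image_moments(pixels):
    """First and second moments of a camera intensity distribution.
    `pixels` is a list of (x, y, intensity) tuples; the eigenvalues of the
    intensity-weighted covariance matrix give the shower ellipse axes."""
    s = sum(w for _, _, w in pixels)
    mx = sum(x * w for x, _, w in pixels) / s      # centre of gravity
    my = sum(y * w for _, y, w in pixels) / s
    sxx = sum((x - mx) ** 2 * w for x, _, w in pixels) / s
    syy = sum((y - my) ** 2 * w for _, y, w in pixels) / s
    sxy = sum((x - mx) * (y - my) * w for x, y, w in pixels) / s
    tr, det = sxx + syy, sxx * syy - sxy ** 2
    root = math.sqrt(max(tr * tr / 4 - det, 0.0))
    lam1, lam2 = tr / 2 + root, tr / 2 - root      # covariance eigenvalues
    return {"cog": (mx, my), "length": math.sqrt(lam1), "width": math.sqrt(lam2)}

# A synthetic shower image elongated along the x axis
shower = [(x, 0, 1.0) for x in range(-3, 4)] + [(0, 1, 0.5), (0, -1, 0.5)]
m = image_moments(shower)
```

A handful of such scalars per image (centre of gravity, length, width, total intensity) is enough input for a small neural network to reconstruct direction and energy and to separate gamma rays from hadrons.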
APA, Harvard, Vancouver, ISO and other styles
33

Takhirov, Zafar. "Designing energy-efficient computing systems using equalization and machine learning". Thesis, 2018. https://hdl.handle.net/2144/27408.

Full text source
Abstract:
As technology scaling slows down in the nanometer CMOS regime and mobile computing becomes more ubiquitous, designing energy-efficient hardware for mobile systems is becoming increasingly critical and challenging. Although various approaches such as near-threshold computing (NTC) and aggressive voltage scaling with shadow latches have been proposed to get the most out of limited battery life, there is still no "silver bullet" for meeting the increasing power-performance demands of mobile systems. Moreover, given that a mobile system may operate in a variety of environmental conditions, such as different temperatures, and have varying performance requirements, there is a growing need for tunable/reconfigurable systems in order to achieve energy-efficient operation. In this work we propose to address the energy-efficiency problem of mobile systems using two different approaches: circuit tunability and distributed adaptive algorithms. Inspired by communication systems, we developed feedback-equalization-based digital logic that changes the threshold of its gates based on the input pattern. We showed that feedback equalization in static complementary CMOS logic enabled up to 20% reduction in energy dissipation while maintaining the performance metrics, and we also achieved a 30% reduction in energy dissipation for pass-transistor logic (PTL) with equalization while maintaining performance. In addition, we proposed a mechanism that leverages feedback equalization techniques to achieve near-optimal operation of static complementary CMOS logic blocks over the entire voltage range from near-threshold to nominal supply voltage. Using the energy-delay product (EDP) as a metric, we analyzed the use of the feedback equalizer as part of various sequential computational blocks.
Our analysis shows that for near-threshold voltage operation with equalization, the operating frequency improves by up to 30% while the energy increases by less than 15%, for an overall EDP reduction of ≈10%. We also observe an EDP reduction of close to 5% across the entire above-threshold voltage range. On the distributed adaptive algorithm front, we explored energy-efficient hardware implementations of machine learning algorithms. We proposed an adaptive classifier that leverages the wide variability in data complexity to enable energy-efficient data classification for mobile systems. Our approach takes advantage of varying classification hardness across data to dynamically allocate resources and improve energy efficiency. On average, across a wide range of classification data sets, our adaptive classifier is ≈100× more energy efficient but has a ≈1% higher error rate than a complex radial basis function classifier, and is ≈10× less energy efficient but has a ≈40% lower error rate than a simple linear classifier. We also developed a field of groves (FoG) implementation of random forests (RF) that achieves accuracy comparable to convolutional neural networks (CNN) and support vector machines (SVM) under tight energy budgets. The FoG architecture takes advantage of the fact that in random forests a small portion of the weak classifiers (decision trees) may be sufficient to achieve high statistical performance. By dividing the random forest into smaller forests (groves) and conditionally executing the rest of the forest, FoG achieves much higher energy efficiency for comparable error rates. We also exploit the distributed nature of FoG to achieve a high level of parallelism. Our evaluation shows that at maximum achievable accuracies FoG consumes ≈1.48×, ≈24×, ≈2.5×, and ≈34.7× lower energy per classification than conventional RF, SVM-RBF, multi-layer perceptron (MLP), and CNN, respectively.
FoG is 6.5× less energy efficient than SVM-LR, but achieves 18% higher accuracy on average across all considered datasets.
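The adaptive two-stage idea (a cheap classifier handles easy samples; only hard, low-confidence ones escalate to an expensive one) can be sketched as follows; the classifiers, margin, and per-inference energy numbers are purely illustrative:

```python
def adaptive_classify(samples, w, b, margin, complex_clf,
                      e_simple=1.0, e_complex=100.0):
    """Adaptive cascade: a linear stage decides confident samples; only
    low-margin samples pay for the complex stage. Returns (labels, energy)."""
    labels, energy = [], 0.0
    for x in samples:
        score = sum(wi * xi for wi, xi in zip(w, x)) + b
        energy += e_simple
        if abs(score) >= margin:           # confident: accept linear decision
            labels.append(1 if score > 0 else 0)
        else:                              # hard sample: escalate
            energy += e_complex
            labels.append(complex_clf(x))
    return labels, energy

# Stand-in for an expensive "RBF-like" classifier (unit-circle decision)
complex_clf = lambda x: 1 if x[0] ** 2 + x[1] ** 2 > 1 else 0
samples = [(2.0, 0.0), (0.1, 0.1), (-2.0, 0.0), (0.2, -0.1)]
labels, energy = adaptive_classify(samples, w=(1.0, 0.0), b=0.0, margin=0.5,
                                   complex_clf=complex_clf)
```

Here only two of the four samples trigger the expensive stage, so the total energy stays well below running the complex classifier on everything.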
APA, Harvard, Vancouver, ISO and other styles
34

LUZI, MASSIMILIANO. "Design and implementation of machine learning techniques for modeling and managing battery energy storage systems". Doctoral thesis, 2019. http://hdl.handle.net/11573/1239311.

Full text source
Abstract:
The fast technological evolution and industrialization that have affected humankind since the fifties have caused a progressive and exponential increase in CO2 emissions and Earth temperature. Therefore, the research community and political authorities have recognized the need for a deep technological revolution in both transportation and energy distribution systems to hinder climate change. Pure and hybrid electric powertrains, smart grids, and microgrids are key technologies for achieving the expected goals. Nevertheless, developing these technologies requires very effective and high-performing Battery Energy Storage Systems (BESSs), and even more effective Battery Management Systems (BMSs). Against this background, this Ph.D. thesis has focused on the development of an innovative and advanced BMS that uses machine learning techniques to improve BESS effectiveness and efficiency. Great attention has been paid to the State of Charge (SoC) estimation problem, aiming at investigating solutions for achieving more accurate and reliable estimations. To this aim, the main contribution concerns the development of accurate and flexible models of electrochemical cells. Three main modeling requirements have been pursued for ensuring accurate SoC estimation: insight into the cell physics, nonlinear approximation capability, and flexible system identification procedures. The research activity has aimed at fulfilling these requirements by developing and investigating three different modeling approaches, namely black-, white-, and gray-box techniques. Extreme Learning Machines, Radial Basis Function Neural Networks, and Wavelet Neural Networks were considered among the black-box models, but none of them achieved satisfactory SoC estimation performance.
The white-box Equivalent Circuit Models (ECMs) achieved better results, proving the benefit that insight into the cell physics provides to the SoC estimation task. Nevertheless, it became clear that the linearity of ECMs reduced their effectiveness for the SoC task. Thus, the gray-box Neural Networks Ensemble (NNE) and the white-box Equivalent Neural Networks Circuit (ENNC) models were developed, aiming to exploit neural network theory to achieve accurate models while ensuring very flexible system identification procedures together with nonlinear approximation capabilities. The performances of NNE and ENNC were compelling. In particular, the white-box ENNC reached the most effective performance, achieving accurate SoC estimation together with a simple architecture and a flexible system identification procedure. The outcome of this thesis enables an interesting scenario in which a suitable cloud framework provides remote assistance to several BMSs in order to adapt the managing algorithms to the aging of BESSs, even across different and distinct applications.
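A minimal version of the white-box ECM idea, a first-order Thevenin circuit with coulomb counting, might look as follows; the OCV curve and cell parameters are illustrative assumptions, not the thesis's identified values:

```python
def simulate_ecm(current_a, dt_s, capacity_ah=2.5, r0=0.05, r1=0.02, c1=1000.0,
                 ocv=lambda soc: 3.0 + 1.2 * soc):
    """First-order Thevenin equivalent circuit model (ECM) with coulomb
    counting. Positive current = discharge. Returns a list of
    (soc, terminal_voltage) samples."""
    soc, v_rc = 1.0, 0.0
    trace = []
    for i in current_a:
        soc -= i * dt_s / (capacity_ah * 3600.0)       # coulomb counting
        v_rc += dt_s * (i / c1 - v_rc / (r1 * c1))     # RC branch dynamics
        v_term = ocv(soc) - i * r0 - v_rc              # terminal voltage
        trace.append((soc, v_term))
    return trace

trace = simulate_ecm([2.5] * 360, dt_s=1.0)   # 6 min discharge at 1C
```

In a full BMS, an observer (e.g. a Kalman filter) would correct the coulomb-counted SoC using the mismatch between this model's predicted terminal voltage and the measured one.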
APA, Harvard, Vancouver, ISO and other styles
35

(9412388), Maryam Parsa. "Bayesian-based Multi-Objective Hyperparameter Optimization for Accurate, Fast, and Efficient Neuromorphic System Designs". Thesis, 2020.

Find full text source
Abstract:
Neuromorphic systems promise a novel alternative to standard von Neumann architectures, which are computationally expensive for analyzing big data and are not efficient for learning and inference. This novel generation of computing aims at "mimicking" the human brain by deploying neural networks on event-driven hardware architectures. A key bottleneck in designing such brain-inspired architectures is the complexity of co-optimizing the algorithm's speed and accuracy along with the hardware's performance and energy efficiency. This complexity stems from numerous intrinsic hyperparameters in both software and hardware that need to be optimized for an optimum design.

In this work, we present a versatile hierarchical pseudo agent-based multi-objective hyperparameter optimization approach for automatically tuning the hyperparameters of several training algorithms (such as traditional artificial neural networks (ANN), and evolutionary-based, binary, back-propagation-based, and conversion-based techniques in spiking neural networks (SNNs)) on digital and mixed-signal neural accelerators. By utilizing the proposed hyperparameter optimization approach we achieve improved performance over the previous state-of-the-art on those training algorithms and close some of the performance gaps that exist between SNNs and standard deep learning architectures.

We demonstrate >2% improvement in accuracy and more than 5X reduction in the training/inference time for a back-propagation-based SNN algorithm on the dynamic vision sensor (DVS) gesture dataset. In the case of ANN-SNN conversion-based techniques, we demonstrate a 30% reduction in time-steps while surpassing the accuracy of state-of-the-art networks on an image classification dataset (CIFAR10) with a simpler and shallower architecture. Further, our analysis shows that in some cases even a seemingly minor change in hyperparameters may change the accuracy of these networks by 5-6X. From the application perspective, we show that the optimum set of hyperparameters can drastically improve performance (from 52% to 71% for the pole-balance control application). In addition, we demonstrate the resiliency of different input/output encodings, training networks, and underlying accelerator modules in a neuromorphic system to changes in the hyperparameters.
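The multi-objective selection underlying such hyperparameter searches can be sketched as a Pareto-front filter over (error, energy) pairs; the candidate configurations below are hypothetical:

```python
def pareto_front(candidates):
    """Non-dominated (Pareto-optimal) configurations when minimizing both
    objectives, e.g. (error, energy). A point is dominated if some other
    point is at least as good on both objectives and different."""
    front = []
    for name, err, en in candidates:
        dominated = any(e2 <= err and n2 <= en and (e2, n2) != (err, en)
                        for _, e2, n2 in candidates)
        if not dominated:
            front.append(name)
    return front

# Hypothetical hyperparameter configurations: (name, error, energy)
candidates = [("a", 0.10, 5.0), ("b", 0.08, 7.0),
              ("c", 0.12, 6.0), ("d", 0.05, 9.0)]
front = pareto_front(candidates)
```

A Bayesian multi-objective optimizer repeatedly proposes new configurations and keeps refining exactly this kind of front, rather than chasing a single scalar score.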
APA, Harvard, Vancouver, ISO and other styles
36

(9833915), G. Shafiullah. "Application of wireless sensor networking techniques for train health monitoring". Thesis, 2009. https://figshare.com/articles/thesis/Application_of_wireless_sensor_networking_techniques_for_train_health_monitoring/20380206.

Full text source
Abstract:

The use of wireless sensor networking in conjunction with modern machine learning techniques is a growing area of interest in the development of vehicle health monitoring (VHM) systems. A VHM system informs forward-looking decision making and the initiation of suitable actions to prevent future disastrous events. The main objective of this thesis is to investigate the design and possible deployment of a less expensive, low-power VHM system for railway operations.

The performance of rail vehicles running on railway tracks is governed by the dynamic behaviour of railway bogies, especially in the cases of lateral instability and track irregularities. The proposed VHM system measures and interprets vertical accelerations of railway wagons attached to a moving locomotive, using a wireless sensor network (WSN) and machine learning techniques to monitor lateral instability and track irregularities. This system therefore enables a reduction in the maintenance and inspection requirements of railway systems while preserving the necessary high levels of safety and reliability.
The thesis is divided into three major sections. First, an energy-efficient data communication system is proposed for railway applications using WSN technology. Initially, a conceptual design of sensor nodes with appropriate hardware is presented; then an energy-efficient adaptive time division multiple access (TDMA) protocol is developed, further reducing the power consumption of the data communication system, which collects data from sensor nodes on the wagons and passes it to the locomotive. Secondly, a data acquisition model involving machine learning techniques is used to further reduce the power consumption, computational load, and hardware cost of the overall condition monitoring system: only three sensor nodes are required on each railway wagon body to collect sufficient data, instead of the four sensor nodes in an existing system. Finally, a VHM system is developed to interpret the vertical acceleration behaviour of railway wagons using popular regression algorithms that predict the typical dynamic behaviour of railway wagons due to track irregularities and lateral instability.

To summarise, this study introduces wireless sensor networking technology that enables the development of an energy-efficient, reliable and low-cost data communication system for railway operational applications. By using machine learning techniques, an energy-efficient VHM system is developed that can continuously monitor railway systems, particularly track irregularities and derailment potential, with integrity. A major benefit of the developed system is a reduction in the maintenance and inspection requirements of railway systems.
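The adaptive TDMA idea, granting frame slots only to nodes that currently have data so idle radios can stay off, can be sketched as follows; node names and frame length are illustrative:

```python
def adaptive_tdma_schedule(nodes, frame_slots):
    """Adaptive TDMA sketch: active sensor nodes share the frame in
    round-robin order; idle nodes are skipped, saving their slots and
    radio-on time. `nodes` is a list of (id, active_flag) pairs."""
    active = [nid for nid, is_active in nodes if is_active]
    schedule = {}
    for slot in range(frame_slots):
        schedule[slot] = active[slot % len(active)] if active else None
    return schedule

# Wagon 2 has nothing to report this frame, so it gets no slots
nodes = [("wagon-1", True), ("wagon-2", False), ("wagon-3", True)]
schedule = adaptive_tdma_schedule(nodes, frame_slots=4)
```

A real protocol would also handle joining/leaving nodes and guard intervals, but the energy saving comes from the same mechanism: inactive nodes never wake their transceivers.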

APA, Harvard, Vancouver, ISO and other styles
37

(10506350), Amogh Agrawal. "Compute-in-Memory Primitives for Energy-Efficient Machine Learning". Thesis, 2021.

Find full text source
Abstract:
Machine Learning (ML) workloads, being memory and compute-intensive, consume large amounts of power running on conventional computing systems, restricting their implementations to large-scale data centers. Thus, there is a need for building domain-specific hardware primitives for energy-efficient ML processing at the edge. One such approach is in-memory computing, which eliminates frequent and unnecessary data-transfers between the memory and the compute units, by directly computing the data where it is stored. Most of the chip area is consumed by on-chip SRAMs in both conventional von-Neumann systems (e.g. CPU/GPU) as well as application-specific ICs (e.g. TPU). Thus, we propose various circuit techniques to enable a range of computations such as bitwise Boolean and arithmetic computations, binary convolution operations, non-Boolean dot-product operations, lookup-table based computations, and spiking neural network implementation - all within standard SRAM memory arrays.

First, we propose X-SRAM, where, by using skewed sense amplifiers, bitwise Boolean operations such as NAND/NOR/XOR/IMP can be enabled within 6T and 8T SRAM arrays. Moreover, exploiting the decoupled read/write ports in 8T SRAMs, we propose a read-compute-store scheme in which the computed data can be written directly back into the array at the same time.

Second, we propose Xcel-RAM, where we show how binary convolutions can be enabled in 10T SRAM arrays for accelerating binary neural networks. We present a charge-sharing approach for performing XNOR operations followed by a population count (popcount) using both analog and digital techniques, highlighting the accuracy-energy tradeoff.

Third, we take this concept further and propose CASH-RAM, to accelerate non-Boolean operations, such as dot-products within standard 8T-SRAM arrays by utilizing the parasitic capacitances of bitlines and sourcelines. We analyze the non-idealities that arise due to analog computations and propose a self-compensation technique which reduces the effects of non-idealities, thereby reducing the errors.

Fourth, we propose ROM-embedded caches, RECache, using standard 8T SRAMs, useful for lookup-table (LUT) based computations. We show that just by adding an extra word-line (WL) or a source-line (SL), the same bit-cell can store a ROM bit, as well as the usual RAM bit, while maintaining the performance and area-efficiency, thereby doubling the memory density. Further we propose SPARE, an in-memory, distributed processing architecture built on RECache, for accelerating spiking neural networks (SNNs), which often require high-order polynomials and transcendental functions for solving complex neuro-synaptic models.

Finally, we propose IMPULSE, a 10T-SRAM compute-in-memory (CIM) macro specifically designed for state-of-the-art SNN inference. The inherent dynamics of the neuron membrane potential in SNNs allow processing of sequential learning tasks while avoiding the complexity of recurrent neural networks. The highly sparse spike-based computations on such spatio-temporal data can be leveraged for energy efficiency; however, the membrane potential incurs additional memory access bottlenecks in current SNN hardware. IMPULSE tries to tackle these challenges. It consists of a fused weight (WMEM) and membrane potential (VMEM) memory and inherently exploits sparsity in input spikes. We propose staggered data mapping and re-configurable peripherals for handling the different bit-precision requirements of WMEM and VMEM while supporting multiple neuron functionalities. The proposed macro was fabricated in 65nm CMOS technology. We demonstrate a sentiment classification task on the IMDB dataset of movie reviews and show that the SNN achieves competitive accuracy with only a fraction of the trainable parameters and effective operations of an LSTM network.

These explorations of embedding computation in standard memory structures show that on-chip SRAMs can do much more than just store data: they can be re-purposed as on-demand accelerators for a variety of applications.
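The XNOR-plus-popcount primitive at the heart of binary convolution schemes such as Xcel-RAM can be sketched in software as follows (the bit encoding and operand width are illustrative):

```python
def xnor_popcount_dot(a_bits, b_bits, n):
    """Binary (+1/-1) dot product via XNOR + popcount, the operation that
    in-memory macros evaluate on the bitlines. Bit 1 encodes +1, bit 0
    encodes -1; inputs are n-bit integers."""
    mask = (1 << n) - 1
    matches = bin((~(a_bits ^ b_bits)) & mask).count("1")  # XNOR, then popcount
    return 2 * matches - n     # #agreements minus #disagreements

a = 0b10110   # encodes +1 -1 +1 +1 -1 (MSB first)
b = 0b10011   # encodes +1 -1 -1 +1 +1
dot = xnor_popcount_dot(a, b, 5)
```

Each binarized filter tap and activation costs one XNOR gate instead of a multiplier, which is why charge-sharing bitline implementations of this operation are so energy-efficient.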
APA, Harvard, Vancouver, ISO and other styles
38

"Energy-Efficient ASIC Accelerators for Machine/Deep Learning Algorithms". Doctoral diss., 2019. http://hdl.handle.net/2286/R.I.55506.

Full text source
Abstract:
While machine/deep learning algorithms have been successfully used in many practical applications, including object detection and image/video classification, accurate, fast, and low-power hardware implementations of such algorithms are still challenging, especially for mobile systems such as the Internet of Things, autonomous vehicles, and smart drones. This work presents an energy-efficient programmable application-specific integrated circuit (ASIC) accelerator for object detection. The proposed ASIC supports multiple classes (face/traffic sign/car license plate/pedestrian), many objects (up to 50) per image with different sizes (6 down-/11 up-scaling), and high accuracy (87% for face detection datasets). The accelerator is composed of an integral channel detector with 2,000 classifiers over five rigid boosted templates to form a strong object detector. By jointly optimizing the algorithm and an efficient hardware architecture, the prototype chip implemented in 65nm demonstrates real-time object detection at 20-50 frames/s with 22.5-181.7mW (0.54-1.75nJ/pixel) at 0.58-1.1V supply. To reduce computation without accuracy degradation, an energy-efficient deep convolutional neural network (DCNN) accelerator is then proposed, based on a novel conditional computing scheme that integrates convolution with the subsequent max-pooling operations. This way, the total number of bit-wise convolutions can be reduced by ~2x without affecting the output feature values. This work also develops an optimized dataflow that exploits sparsity, maximizes data re-use, and minimizes off-chip memory access, improving upon existing hardware designs; total off-chip memory access is reduced by 2.12x. Preliminary post-layout simulation results in 40nm show that the proposed DCNN accelerator achieves a peak 7.35 TOPS/W for VGG-16.
A number of recent efforts have attempted to design custom inference engines based on various approaches, including systolic architectures, near-memory processing, and the in-memory computing concept. This work provides a comprehensive comparison of these approaches in a unified framework. It also presents a proposed energy-efficient in-memory computing accelerator for deep neural networks (DNNs) that integrates many instances of in-memory computing macros with an ensemble of peripheral digital circuits, supporting configurable multi-bit activations and large-scale DNNs seamlessly while substantially improving chip-level energy efficiency. The proposed accelerator is fully designed in 65 nm, demonstrating ultralow energy consumption for DNNs.
Dissertation/Thesis
Doctoral Dissertation Electrical Engineering 2019
APA, Harvard, Vancouver, ISO, and other styles
39

(11180610), Indranil Chakraborty. "Toward Energy-Efficient Machine Learning: Algorithms and Analog Compute-In-Memory Hardware". Thesis, 2021.

Find full text source
Abstract:
The ‘Internet of Things’ has increased the demand for artificial intelligence (AI)-based edge computing in applications ranging from healthcare monitoring systems to autonomous vehicles. However, the growing complexity of machine learning workloads requires rethinking to make AI amenable to resource-constrained environments such as edge devices. To that effect, the entire stack of machine learning, from algorithms to hardware primitives, has been explored to enable energy-efficient intelligence at the edge.

From the algorithmic aspect, model compression techniques such as quantization are powerful tools to address the growing computational cost of ML workloads. However, quantization in particular can result in a substantial loss of performance for complex image classification tasks. To address this, a principal component analysis (PCA)-driven methodology is proposed to identify the important layers of a binary network and design mixed-precision networks. The proposed Hybrid-Net achieves a significant improvement in classification accuracy over binary networks such as XNOR-Net for ResNet and VGG architectures on the CIFAR-100 and ImageNet datasets, while still achieving remarkable energy efficiency.
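The PCA-driven layer-significance idea above can be sketched as follows (an illustrative reconstruction, not the thesis code; the function name, threshold, and data are our own assumptions): a layer whose activations need many principal components to explain most of their variance is deemed significant and is a candidate for higher precision.

```python
import numpy as np

def significance(activations, var_threshold=0.99):
    """Rank a layer by how many principal components are needed
    to explain `var_threshold` of its activation variance."""
    x = activations - activations.mean(axis=0)          # center features
    cov = np.cov(x, rowvar=False)                       # feature covariance
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]    # descending eigenvalues
    cum = np.cumsum(eigvals) / eigvals.sum()            # cumulative explained variance
    return int(np.searchsorted(cum, var_threshold) + 1) # components needed

rng = np.random.default_rng(0)
# A "redundant" layer: activations live mostly along one direction.
redundant = rng.normal(size=(500, 1)) @ np.ones((1, 8)) + 0.01 * rng.normal(size=(500, 8))
# An "important" layer: activations spread over all directions.
important = rng.normal(size=(500, 8))

assert significance(redundant) < significance(important)
```

In this toy setting the redundant layer is explained by one component while the isotropic layer needs nearly all eight, mirroring how a PCA criterion can separate layers by importance.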

Having explored compressed neural networks, there is a need to investigate suitable computing systems to further the energy efficiency. Memristive crossbars have been extensively explored as an alternative to traditional CMOS-based systems for deep learning accelerators due to their high on-chip storage density and efficient Matrix-Vector Multiplication (MVM) compared to digital CMOS. However, the analog nature of the computing poses significant issues due to various non-idealities such as parasitic resistances and the non-linear I-V characteristics of the memristor devices. To address this, a simplified equation-based model of the non-ideal behavior of crossbars is developed and, correspondingly, a modified technology-aware training algorithm is proposed. Building on the drawbacks of equation-based modeling, a Generalized Approach to Emulating Non-Ideality in Memristive Crossbars using Neural Networks (GENIEx) is proposed, where a neural network is trained on HSPICE simulation data to learn the transfer characteristics of the non-ideal crossbar. Next, a functional simulator was developed that includes key architectural facets such as tiling and bit-slicing to analyze the impact of non-idealities on the classification accuracy of large-scale neural networks.
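A minimal flavor of equation-based crossbar non-ideality modeling can be given in a few lines (our own toy sketch, not the thesis model or GENIEx; the attenuation formula, resistance value, and conductances are illustrative assumptions): an ideal crossbar computes I = GᵀV, and a crude non-ideality model attenuates each column's current through a series source resistance.

```python
import numpy as np

def crossbar_mvm(G, V, r_source=0.0):
    """Ideal crossbar MVM is I = G^T V (conductances x voltages).
    Crude non-ideality model: a series source resistance R_s per column
    forms a divider with the column's total conductance, attenuating the
    column current by 1 / (1 + R_s * sum_i G[i, j])."""
    ideal = G.T @ V
    atten = 1.0 / (1.0 + r_source * G.sum(axis=0))
    return ideal * atten

G = np.array([[1e-4, 2e-4],
              [3e-4, 1e-4]])      # device conductances (siemens)
V = np.array([0.5, 1.0])          # input voltages (volts)

ideal = crossbar_mvm(G, V)                 # no parasitics
real = crossbar_mvm(G, V, r_source=500.0)  # with parasitic source resistance
assert np.all(real < ideal)                # non-idealities shrink the output
```

A technology-aware training loop would use such a degraded forward pass instead of the ideal one, so the learned weights compensate for the attenuation.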

To truly realize the benefits of hardware primitives and the algorithms on top of the stack, it is necessary to build efficient devices that mimic the behavior of the fundamental units of a neural network, namely neurons and synapses. However, efforts have largely been invested in implementations in the electrical domain, with potential limitations of switching speed, functional errors due to analog computing, etc. As an alternative, a purely photonic Integrate-and-Fire spiking neuron is proposed, based on the phase-change dynamics of Ge2Sb2Te5 (GST) embedded on top of a microring resonator, which alleviates the energy constraints of PCMs in the electrical domain. Further, the inherent parallelism of wavelength-division multiplexing (WDM) was leveraged to propose a photonic dot-product engine. The proposed computing platform was used to emulate an SNN inference engine for image-classification tasks. These explorations at different levels of the stack can enable energy-efficient machine learning for edge intelligence.

Having explored various approaches to designing efficient DNN models and studied hardware primitives based on emerging technologies, we focus on a silicon implementation of compute-in-memory (CIM) primitives for machine learning acceleration based on widely available CMOS technology. CIM primitives enable efficient matrix-vector multiplications (MVM) through parallelized multiply-and-accumulate operations inside the memory array itself. As CIM primitives deploy bit-serial computing, the computations can exploit the bit-level sparsity of the inputs and weights of an ML model. To that effect, we present an energy-efficient sparsity-aware reconfigurable-precision compute-in-memory (CIM) 8T-SRAM macro for machine learning (ML) applications. Standard 8T-SRAM arrays are re-purposed to enable MAC operations using selective current flow through the read-port transistors. The proposed macro dynamically leverages workload sparsity by reconfiguring the output precision in the peripheral circuitry without degrading application accuracy. Specifically, we propose a new energy-efficient reconfigurable-precision SAR ADC design with the ability to form (n+m)-bit precision using n-bit and m-bit ADCs. Additionally, the transimpedance amplifier (TIA), required to convert the summed current into a voltage before conversion, is reconfigured based on sparsity to improve the sense margin at lower output precision. The proposed macro, fabricated in 65 nm technology, provides 35.5-127.2 TOPS/W as the ADC precision varies from 6-bit to 2-bit, respectively. Building on top of the fabricated macro, we next design a hierarchical CIM core micro-architecture that addresses the existing CIM scaling challenges. The proposed CIM core micro-architecture consists of 32 of the proposed sparsity-aware CIM macros, divided into 4 matrix-vector multiplication units (MVMUs) of 8 macros each. 
The core has three unique features: i) it can adaptively reconfigure ADC precision to achieve energy efficiency and lower latency based on input and weight sparsity, determined by a sparsity controller; ii) it deploys a row-gating feature to maintain SNR requirements for accurate DNN computations; and iii) it provides hardware support for load balancing to absorb latency mismatches caused by different ADC precisions in different compute units. Besides the CIM macros, the core micro-architecture consists of input, weight, and output memories, along with instruction memory and control circuits. The instruction set architecture allows for flexible dataflows and mapping in the proposed core micro-architecture. The sparsity-aware processing core is scheduled to be taped out next month. The proposed CIM demonstrations, complemented by our previous analysis of analog CIM systems, have progressed our understanding of this emerging paradigm as it pertains to ML acceleration.
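The bit-serial computing and bit-level sparsity exploited by the macro can be illustrated with a small software sketch (hypothetical, not the macro's actual circuit behavior; the sizes and values are invented): each input bit-plane is processed per cycle, and all-zero planes can be skipped entirely.

```python
def bitserial_dot(inputs, weights, n_bits=4):
    """Bit-serial dot product: process one input bit-plane per cycle,
    skipping all-zero bit-planes (the sparsity a CIM macro can exploit)."""
    acc, cycles = 0, 0
    for b in range(n_bits):
        plane = [(x >> b) & 1 for x in inputs]   # b-th bit of every input
        if not any(plane):
            continue                              # sparsity: skip empty plane
        cycles += 1
        partial = sum(p * w for p, w in zip(plane, weights))
        acc += partial << b                       # shift-and-accumulate
    return acc, cycles

inputs = [1, 0, 3, 1]      # small 4-bit activations (upper planes all zero)
weights = [2, 5, 1, 4]
result, cycles = bitserial_dot(inputs, weights)
assert result == sum(x * w for x, w in zip(inputs, weights))  # exact MAC
assert cycles < 4                                             # planes skipped
```

Skipping empty planes preserves the exact MAC result while spending fewer cycles, which is the sense in which sparsity translates directly into energy and latency savings.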
APA, Harvard, Vancouver, ISO, and other styles
40

Nguyen, Kha Thi, and Kha Thi Nguyen. "Optimized Machine Learning Regression System for Efficient Forecast of Construction Corporate Stock Price". Thesis, 2017. http://ndltd.ncl.edu.tw/handle/07898515145094636070.

Full text source
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Civil and Construction Engineering
Academic year 105 (2016-2017)
Time series forecasting has been widely used to determine future stock prices, and the analysis and modeling of financial time series importantly guide investors' decisions and trades. In addition, in a dynamic environment such as the stock market, the non-linearity of the time series is pronounced, immediately affecting the efficacy of stock price forecasts. Thus, this work proposes an intelligent time series prediction system that uses sliding-window metaheuristic optimization to predict the stock prices of Taiwan construction companies one step ahead. It may be of great interest to home brokers who do not possess sufficient knowledge to invest in such companies. The system has a graphical user interface and functions as a stand-alone application. The proposed approach exploits a sliding-window metaheuristic-optimized machine learning regression technique. To illustrate the approach, as well as to train and test it, it is applied to historical data of eight stock indices over the six years from 2011 to 2017. The performance of the system was evaluated by calculating the Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), Root Mean Square Error (RMSE), Mean Square Error (MSE), Correlation Coefficient (R), and Non-linear Regression Multiple Correlation Coefficient (R2). The proposed hybrid prediction model exhibited outstanding prediction performance and improves overall investment profit. The proposed model is a promising predictive technique for highly non-linear time series whose patterns are difficult to capture by traditional models.
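The sliding-window, one-step-ahead setup and the error metrics listed can be sketched as follows (a hedged illustration using plain least squares in place of the thesis's metaheuristic-optimized regressor; the window size and series are invented):

```python
import numpy as np

def one_step_forecasts(series, window=3):
    """One-step-ahead forecasts from a sliding window: fit ordinary least
    squares on all (window -> next value) pairs, then predict each next
    value from its preceding window."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = np.array(series[window:], dtype=float)
    A = np.c_[X, np.ones(len(X))]                       # lags + intercept
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)        # least-squares fit
    return A @ coef, y

def mae(y, p):  return float(np.mean(np.abs(y - p)))
def rmse(y, p): return float(np.sqrt(np.mean((y - p) ** 2)))
def mape(y, p): return float(np.mean(np.abs((y - p) / y))) * 100

series = [10, 11, 12, 13, 14, 15, 16, 17]   # perfectly linear toy "prices"
preds, actual = one_step_forecasts(series)
assert rmse(actual, preds) < 1e-6           # a linear trend is fit exactly
```

A metaheuristic (as in the thesis) would tune the regressor's hyperparameters over each sliding window instead of using a fixed closed-form fit.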
APA, Harvard, Vancouver, ISO, and other styles
41

Alqerm, Ismail. "Novel Machine Learning-Based Techniques for Efficient Resource Allocation in Next Generation Wireless Networks". Diss., 2018. http://hdl.handle.net/10754/627159.

Full text source
Abstract:
There is a large demand for high-data-rate applications in wireless networks. These networks are becoming more complex and challenging to manage due to the heterogeneity of users and applications, specifically in sophisticated networks such as the upcoming 5G. Energy efficiency in the future 5G network is one of the essential problems that needs consideration due to the interference and the heterogeneity of the network topology. Smart resource allocation, environmental adaptivity, user-awareness, and energy efficiency are essential features in future networks. It is important to support these features across different network topologies and various applications. Cognitive radio has been found to be the paradigm that is able to satisfy the above requirements. It is a highly interdisciplinary topic that incorporates flexible system architectures, machine learning, context awareness, and cooperative networking. Mitola's vision of cognitive radio was to build context-sensitive smart radios that are able to adapt to wireless environment conditions while maintaining quality-of-service support for different applications. Artificial intelligence techniques, including heuristic algorithms and machine learning, are the tools employed to serve this new vision of cognitive radio. In addition, these techniques show potential for efficient resource allocation in upcoming 5G network structures, such as heterogeneous multi-tier 5G networks and heterogeneous cloud radio access networks, due to their capability to allocate resources according to real-time data analytics. In this thesis, we study cognitive radio from a system point of view, focusing closely on architectures and artificial intelligence techniques that can enable intelligent radio resource allocation and efficient radio parameter reconfiguration. 
We propose a modular cognitive resource management architecture, which facilitates the development of flexible control for resource management in diverse wireless networks. The core operation of the proposed architecture is decision-making for resource allocation and adaptation of system parameters. Thus, we develop the decision-making mechanism using different artificial intelligence techniques, evaluate the performance achieved, and determine the tradeoffs of using one technique over the others. The techniques include decision trees, a genetic algorithm, a hybrid engine based on decision trees and case-based reasoning, and a supervised engine with a machine learning contribution to determine the ultimate technique that suits the current environment conditions. All the proposed techniques are evaluated using testbed implementations in different topologies and scenarios. LTE networks are considered a potential environment for demonstrating our proposed cognitive resource allocation techniques, as they lack this kind of radio resource management. In addition, we explore the use of enhanced online learning to perform efficient resource allocation in upcoming 5G networks to maximize energy efficiency and data rate. The considered 5G structures are heterogeneous multi-tier networks with device-to-device communication and heterogeneous cloud radio access networks. We propose power and resource-block allocation schemes to maximize energy efficiency and data rate in heterogeneous 5G networks. Moreover, traffic offloading from large cells to small cells in 5G heterogeneous networks is investigated, and an online-learning-based traffic offloading strategy is developed to enhance energy efficiency. The energy efficiency problem in heterogeneous cloud radio access networks is tackled using online learning in both centralized and distributed fashions. The proposed online learning comprises improvement features that reduce the algorithms' complexity and enhance the performance achieved.
APA, Harvard, Vancouver, ISO, and other styles
42

Feng, Menghong. "SunDown: Model-driven Per-Panel Solar Anomaly Detection for Residential Arrays". 2020. https://scholarworks.umass.edu/masters_theses_2/894.

Full text source
Abstract:
There has been significant growth in both utility-scale and residential-scale solar installations in recent years, driven by rapid technology improvements and falling prices. Unlike utility-scale solar farms that are professionally managed and maintained, smaller residential-scale installations often lack sensing and instrumentation for performance monitoring and fault detection. As a result, faults may go undetected for long periods of time, resulting in generation and revenue losses for the homeowner. In this thesis, we present SunDown, a sensorless approach designed to detect per-panel faults in residential solar arrays. SunDown does not require any new sensors for its fault detection and instead uses a model-driven approach that leverages correlations between the power produced by adjacent panels to detect deviations from expected behavior. SunDown can handle concurrent faults in multiple panels and perform anomaly classification to determine probable causes. Using two years of solar generation data from a real home and a manually generated dataset of multiple solar faults, we show that our approach has a MAPE of 2.98% when predicting per-panel output. Our results also show that SunDown is able to detect and classify faults, including from snow cover, leaves and debris, and electrical failures, with 99.13% accuracy, and can detect multiple concurrent faults with 97.2% accuracy.
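SunDown's model-driven idea, predicting a panel's output from adjacent panels and flagging large deviations, can be sketched as follows (an illustrative reconstruction with synthetic data and an invented threshold, not the authors' code):

```python
import numpy as np

def fit_neighbor_model(panel, neighbors):
    """Least-squares model of one panel's power from adjacent panels."""
    A = np.c_[neighbors, np.ones(len(neighbors))]       # neighbors + intercept
    coef, *_ = np.linalg.lstsq(A, panel, rcond=None)
    return coef

def is_faulty(coef, panel_now, neighbors_now, rel_tol=0.2):
    """Flag a fault when observed output deviates from the neighbor-based
    prediction by more than rel_tol (here 20%, an assumed threshold)."""
    pred = np.r_[neighbors_now, 1.0] @ coef
    return abs(panel_now - pred) > rel_tol * max(pred, 1e-9)

rng = np.random.default_rng(1)
sun = rng.uniform(100, 300, size=200)            # shared irradiance history
neighbors = np.c_[sun * 0.98, sun * 1.02]        # two adjacent panels
panel = sun * 1.00                               # healthy target panel
coef = fit_neighbor_model(panel, neighbors)

assert not is_faulty(coef, 200.0, np.array([196.0, 204.0]))  # normal reading
assert is_faulty(coef, 80.0, np.array([196.0, 204.0]))       # e.g. snow cover
```

Because all panels see the same irradiance, the residual against the neighbor model isolates per-panel degradation without any added sensors.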
APA, Harvard, Vancouver, ISO, and other styles
43

Cheng-JuYu and 余承儒. "A Recommendation Information System for Selecting the Most Efficient Milling Machine by Using Case-Based Reasoning Technology of Machine Learning". Thesis, 2018. http://ndltd.ncl.edu.tw/handle/3a87b6.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
44

Zhang, Xing. "An intelligent energy allocation method for hybrid energy storage systems for electrified vehicles". Thesis, 2018. https://dspace.library.uvic.ca//handle/1828/9416.

Full text source
Abstract:
Electrified vehicles (EVs) with a large electric energy storage system (ESS), including Plug-in Hybrid Electric Vehicles (PHEVs) and Pure Electric Vehicles (PEVs), provide a promising solution to utilize clean grid energy that can be generated from renewable sources and to address increasing environmental concerns. Effectively extending the operating life of the large and costly ESS, and thus lowering the lifecycle cost of EVs, presents a major technical challenge at present. A hybrid energy storage system (HESS) that combines batteries and ultracapacitors (UCs) presents unique energy storage capability over a traditional ESS made purely of batteries or UCs. With optimal energy management system (EMS) techniques, the HESS can considerably reduce frequent charges and discharges on the batteries, extending their life and fully utilizing their high energy density advantage. In this work, an intelligent energy allocation (IEA) algorithm based on Q-learning has been introduced. The new IEA method dynamically generates a sub-optimal energy allocation strategy for the HESS based on each recognized trip of the EV. In each repeated trip, the self-learning IEA algorithm generates optimal control schemes to distribute the required current between the batteries and UCs according to the learned Q values. An RBF neural network is trained and updated to approximate the Q values during the trip. This new method provides continuously improved energy sharing solutions better suited to each trip made by the EV, outperforming the present passive HESS and fixed-cutoff-frequency methods. To efficiently recognize repeated trips, an extended Support Vector Machine (e-SVM) method has been developed to extract significant features for classification. Compared with the standard 2-norm SVM and linear 1-norm SVM, the new e-SVM provides a better balance between quality of classification and number of features, and measures feature observability. 
The e-SVM method is thus able to replace poorly observable features with other, more observable features. Moreover, a novel pattern classification algorithm, Inertial Matching Pursuit Classification (IMPC), has been introduced for recognizing vehicle driving patterns within a shorter period of time, allowing timely updates of energy management strategies and leading to improved Driver Performance Record (DPR) system resolution and accuracy. Simulation results show that the new IMPC method is able to correctly recognize driving patterns from incomplete and inaccurate vehicle signal sample data. The combination of intelligent energy allocation (IEA) with the improved e-SVM feature extraction and IMPC pattern classification techniques allows the best characteristics of the batteries and UCs in the integrated HESS to be fully utilized while overcoming their inherent drawbacks, leading to an optimal EMS for EVs with improved energy efficiency, performance, battery life, and lifecycle cost.
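The Q-learning-based current split can be illustrated with a deliberately simplified sketch (a contextual-bandit-style update on a toy state/action space, rather than the thesis's full Q-learning with RBF function approximation over trips; all states, rewards, and costs are invented): high current spikes should be learned to route through the UC to spare the battery.

```python
import random

# Toy learned battery/ultracapacitor current split.
# State: demand level (0 = low, 1 = high spike). Action: UC share of demand.
# Reward penalizes battery current squared (a proxy for battery wear).
ACTIONS = [0.0, 0.5, 1.0]            # fraction of demand routed to the UC
Q = {(s, a): 0.0 for s in (0, 1) for a in ACTIONS}
alpha, eps = 0.2, 0.2
random.seed(0)

for _ in range(3000):
    s = random.choice((0, 1))
    demand = 10.0 if s else 1.0
    if random.random() < eps:                          # epsilon-greedy explore
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda x: Q[(s, x)])
    battery_current = demand * (1.0 - a)
    uc_penalty = 0.5 * a * demand                      # UC use has a small cost
    reward = -(battery_current ** 2) - uc_penalty
    Q[(s, a)] += alpha * (reward - Q[(s, a)])          # one-step value update

best_high = max(ACTIONS, key=lambda a: Q[(1, a)])
assert best_high == 1.0   # high spikes are learned to go to the UC
```

The full algorithm in the thesis replaces this table with an RBF network approximating Q over continuous trip states and uses multi-step Q-learning updates.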
Graduate
APA, Harvard, Vancouver, ISO, and other styles
45

Arienti, João Henrique Leal. "Time series forecasting applied to an energy management system ‐ A comparison between Deep Learning Models and other Machine Learning Models". Master's thesis, 2020. http://hdl.handle.net/10362/108172.

Full text source
Abstract:
Project Work presented as the partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics
A large amount of the energy used worldwide goes to buildings' energy consumption, and HVAC (Heating, Ventilation, and Air Conditioning) systems are the biggest contributors to it. It is important to provide environmental comfort in buildings, but indoor wellbeing is directly related to an increase in energy consumption. This dilemma creates a huge opportunity for a solution that balances occupant comfort and energy consumption. Within this context, the Ambiosensing project was launched to develop a complete energy management system that differentiates itself from other existing commercial solutions by being an inexpensive and intelligent system. The Ambiosensing project focused on Time Series Forecasting, with the goal of creating predictive models to help the energy management system anticipate indoor environmental scenarios. A good approach to Time Series Forecasting problems is to apply Machine Learning, and more specifically Deep Learning. This work project investigates and develops Deep Learning and other Machine Learning models that can deal with multivariate Time Series Forecasting, assesses how well a Deep Learning approach, especially LSTM (Long Short-Term Memory) Recurrent Neural Networks (RNNs), can perform on a Time Series Forecasting problem, and establishes a comparison between Deep Learning and other Machine Learning models such as Linear Regression, Decision Trees, Random Forests, Gradient Boosting Machines, and others within this context.
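The preprocessing step shared by both the LSTM and the classical regressors, framing a multivariate time series as supervised (X, y) windows, might look like this (an assumed sketch, not the project's code; the lag count and data are illustrative):

```python
import numpy as np

def make_supervised(series, n_lags=3, horizon=1):
    """Frame a multivariate time series of shape (T, features) as supervised
    pairs: each X row holds a flattened lag window, and y holds the target
    column `horizon` steps ahead."""
    T = len(series)
    X, y = [], []
    for t in range(n_lags, T - horizon + 1):
        X.append(series[t - n_lags:t].ravel())       # flattened lag window
        y.append(series[t + horizon - 1, 0])         # forecast first feature
    return np.array(X), np.array(y)

data = np.arange(20, dtype=float).reshape(10, 2)     # 10 steps, 2 features
X, y = make_supervised(data, n_lags=3)
assert X.shape == (7, 6) and y.shape == (7,)
assert y[0] == data[3, 0]
```

From here, X and y feed a Linear Regression or tree ensemble directly, while an LSTM would consume the same windows reshaped back to (samples, n_lags, features).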
APA, Harvard, Vancouver, ISO, and other styles
46

Chiang, Chang-Yi, and 蔣昌益. "An Application of Machine Learning for Performing Diagnoses of Functional Disturbances on an Intelligent Energy Management System". Thesis, 2014. http://ndltd.ncl.edu.tw/handle/k2446v.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
47

"Building Energy Modeling: A Data-Driven Approach". Doctoral diss., 2016. http://hdl.handle.net/2286/R.I.38640.

Full text source
Abstract:
Buildings consume nearly 50% of the total energy in the United States, which drives the need to develop high-fidelity models for building energy systems. Extensive methods and techniques have been developed, studied, and applied to building energy simulation and forecasting, while most work has focused on developing dedicated modeling approaches for generic buildings. In this study, an integrated, computationally efficient, high-fidelity building energy modeling framework is proposed, concentrating on a generalized modeling approach for various types of buildings. First, a number of data-driven simulation models are reviewed and assessed on various types of computationally expensive simulation problems. Motivated by the conclusion that no model outperforms the others when amortized over diverse problems, a meta-learning-based recommendation system for data-driven simulation modeling is proposed. To test the feasibility of the proposed framework on building energy systems, an extended application of the recommendation system for short-term building energy forecasting is deployed on various buildings. Finally, a Kalman filter-based data fusion technique is incorporated into the building recommendation system for on-line energy forecasting. Data fusion enables model calibration to update the state estimation in real time, which filters out noise and renders more accurate energy forecasts. The framework is composed of two modules: an off-line model recommendation module and an on-line model calibration module. Specifically, the off-line model recommendation module includes 6 widely used data-driven simulation models, which are ranked by the meta-learning recommendation system for off-line energy modeling on a given building scenario. Only a selective set of building physical and operational characteristic features is needed to complete the recommendation task. 
The on-line calibration module effectively addresses system uncertainties: data fusion on the off-line model is applied based on system identification and Kalman filtering methods. The developed data-driven modeling framework is validated on various types of buildings, and the experimental results demonstrate the desired performance on building energy forecasting in terms of accuracy and computational efficiency. The framework could easily be implemented in building energy model predictive control (MPC), demand response (DR) analysis, and real-time operation decision support systems.
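The Kalman filter-based on-line calibration can be illustrated with a scalar sketch (our simplified illustration, assuming a random-walk state model; the noise parameters and readings are invented): the off-line model's forecast is fused with each new noisy measurement, pulling the state estimate toward reality while shrinking its uncertainty.

```python
def kalman_update(x_est, p_est, z, r_meas, q_proc=0.01):
    """One scalar Kalman step: predict (random-walk model), then correct
    the model's state estimate with a new measurement z."""
    p_pred = p_est + q_proc                  # predict: uncertainty grows
    k = p_pred / (p_pred + r_meas)           # Kalman gain
    x_new = x_est + k * (z - x_est)          # correct toward measurement
    p_new = (1.0 - k) * p_pred               # uncertainty shrinks
    return x_new, p_new

# Fuse a biased model forecast (100 kWh) with noisy meter readings near 110.
x, p = 100.0, 25.0                           # initial estimate and variance
for z in [109.0, 111.0, 110.0, 110.5]:
    x, p = kalman_update(x, p, z, r_meas=4.0)
assert abs(x - 110.0) < 2.0   # estimate is pulled toward the true load
assert p < 25.0               # and its uncertainty has shrunk
```

In the framework described above, the fused state would re-enter the recommended forecasting model at each step, which is what keeps the on-line forecast calibrated.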
Dissertation/Thesis
Doctoral Dissertation Industrial Engineering 2016
APA, Harvard, Vancouver, ISO, and other styles
48

TUCCI, FRANCESCO ALDO. "Artificial intelligence for digital twins in energy systems and turbomachinery: development of machine learning frameworks for design, optimization and maintenance". Doctoral thesis, 2023. https://hdl.handle.net/11573/1667401.

Full text source
Abstract:
The expression Industry 4.0 identifies a new industrial paradigm that includes the development of Cyber-Physical Systems (CPS) and Digital Twins, promoting the use of Big Data, Internet of Things (IoT), and Artificial Intelligence (AI) tools. Digital Twins aim to build a dynamic environment in which, with the help of vertical, horizontal, and end-to-end integration among industrial processes, smart technologies can communicate and exchange data to analyze and solve production problems, increase productivity, and provide cost, time, and energy savings. Specifically in the energy systems field, the introduction of AI technologies can lead to significant improvements in both machine design and optimization and in maintenance procedures. Over the past decade, data from engineering processes have grown in scale. In fact, the use of more technologically sophisticated sensors and the increase in available computing power have enabled both experimental measurements and high-resolution numerical simulations, making available an enormous amount of data on the performance of energy systems. Therefore, to build a Digital Twin model capable of exploring these unorganized data pools collected from massive and heterogeneous resources, new Artificial Intelligence and Machine Learning strategies need to be developed. In light of the exponential growth in the use of smart technologies in manufacturing processes, this thesis aims at enhancing traditional approaches to the design, analysis, and optimization phases of turbomachinery and energy systems, which today are still predominantly based on empirical procedures or computationally intensive CFD-based optimizations. This improvement is made possible by the implementation of Digital Twin models which, being based primarily on Machine Learning that exploits performance Big Data collected from energy systems, are acknowledged as crucial technologies for remaining competitive in the dynamic energy production landscape. 
The introduction of Digital Twin models changes the overall structure of design and maintenance approaches and results in modern support tools that facilitate real-time informed decision making. In addition, the introduction of supervised learning algorithms facilitates the exploration of the design space by providing easy-to-run analytical models, which can also be used as cost functions in multi-objective optimization problems, avoiding the need for time-consuming numerical simulations or experimental campaigns. Unsupervised learning methods can be applied, for example, to extract new insights from turbomachinery performance data and improve designers' understanding of blade-flow interaction. Alternatively, Artificial Intelligence frameworks can be developed for Condition-Based Maintenance, allowing the transition from preventive to predictive maintenance. This thesis can be conceptually divided into two parts. The first reviews the state of the art of Cyber-Physical Systems and Digital Twins, highlighting the crucial role of Artificial Intelligence in supporting informed decision making during the design, optimization, and maintenance phases of energy systems. The second part covers the development of Machine Learning strategies to improve the classical approach to turbomachinery design and maintenance strategies for energy systems by exploiting data from numerical simulations, experimental campaigns, and sensor (SCADA) datasets. The different Machine Learning approaches adopted include clustering algorithms, regression algorithms, and dimensionality reduction techniques (Autoencoders and Principal Component Analysis). A first work shows the potential of unsupervised learning approaches (clustering algorithms) in exploring a Design of Experiments of 76 numerical simulations for turbomachinery design purposes. 
The second work takes advantage of a non-sequential experimental dataset, measured on a rotating turbine rig characterized by 48 blades divided into 7 sectors that share the same baseline rotor geometry but have different tip designs, to infer and dissect the causal relationship between different tip geometries and unsteady aero-thermodynamic performance via a novel Machine Learning procedure based on dimensionality reduction techniques. The last application proposes a new anomaly detection framework for gensets in district heating (DH) networks, based on SCADA data, that exploits and compares the performance of regression algorithms such as XGBoost and the Multi-Layer Perceptron.
APA, Harvard, Vancouver, ISO, and other styles
49

(8790188), Abhishek Navarkar. "MACHINE LEARNING MODEL FOR ESTIMATION OF SYSTEM PROPERTIES DURING CYCLING OF COAL-FIRED STEAM GENERATOR". Thesis, 2020.

Find full text source
Abstract:
The intermittent nature of renewable energy, variations in energy demand, and fluctuations in oil and gas prices have all contributed to variable demand for power generation from coal-burning power plants. The varying demand leads to load-follow and on/off operations referred to as cycling. Cycling causes transients of properties such as pressure and temperature within various components of the steam generation system. The transients can cause increased damage through fatigue and creep-fatigue interactions, shortening the life of components. A data-driven model based on artificial neural networks (ANNs) is developed for the first time to estimate properties of the steam generator components during cycling operations of a power plant. This approach utilizes data from the Coal Creek Station power plant located in North Dakota, USA, collected over 10 years at a 1-hour resolution. Cycling characteristics of the plant are identified using a time series of gross power. The ANN model estimates the component properties, for a given gross power profile and initial conditions, as they vary during cycling operations. As a representative example, the ANN estimates are presented for the superheater outlet pressure, reheater inlet temperature, and flue gas temperature at the air heater inlet. The changes in these variables as a function of the gross power over the time duration are compared with measurements to assess the predictive capability of the model. Mean square errors of 4.49E-04 for superheater outlet pressure, 1.62E-03 for reheater inlet temperature, and 4.14E-04 for flue gas temperature at the air heater inlet were observed.
APA, Harvard, Vancouver, ISO, and other styles
50

Guerra, David José Santos. "Implementation of an effective and efficient anti-money laundering & counter terrorism financing system: the adoption of a behaviorally view". Master's thesis, 2019. http://hdl.handle.net/10362/79383.

Full text source
Abstract:
Internship Report presented as the partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics
Money laundering and the financing of terrorism are an increasingly great concern for governments and for the operating institutions. The present document describes the activities undertaken by the trainee during his one-year internship at Millennium BCP's Compliance Office. The trainee first started with an integration/training period, having meetings and lectures with representatives and specialists of the multiple Compliance Office departments on their duties and each department's role in the overall Compliance Office schema. An e-learning component was also completed, in which the trainee had access to a battery of mandatory courses on legal and internal operation regulations, with both learning and evaluation modules. The trainee was then integrated into the Information Systems and Analytics Department's team and its AML/CFT system implementation project. Once integrated, the trainee performed data analysis on the bank's customer database to uncover discrepancies and inconsistencies, further providing possible solutions to correct them in order to prepare for the AML/CFT system implementation. Regarding the implementation of the AML/CFT system, the trainee participated in the whole implementation and configuration process: the ETL process; the parametrization, definition, and analysis of the system; the development, definition, and calibration of detection algorithms for transactions suspicious of money laundering and/or terrorism financing; the preparation/adjustment of the decision algorithms according to the evaluation algorithms' assessments; the statistical assessment of the transactions and decisions made in accordance with the reintegration of the previous decision models; and the transactional behavior analysis.
APA, Harvard, Vancouver, ISO, and other styles
We offer discounts on all premium plans for authors whose works are included in thematic literature collections. Contact us to get a unique promo code!

To the bibliography